double[] or float[] vs JVector3 class

Do people prefer to represent 3D points in space using arrays of double / float primitives, or a custom class?
I use a class similar to the one below, but I’m wondering if there’s a speed benefit (even if it’s a small one) to using an array of doubles instead?


public class JVector3
{
    double x = 0;
    double y = 0;
    double z = 0;
    
    JVector3 ()
    {
        this.x = 0.0;
        this.y = 0.0;
        this.z = 0.0;
    }
    
    public JVector3 (JVector3 v)
    {
        this.x = v.x;
        this.y = v.y;
        this.z = v.z;
    }
}

The only immediate advantage I can see to using double[] instead of JVector3 is the ability to iterate quickly through all the points in the array:


    for (int i=0; i < darray.length; i++)
        darray[i] *= 6;

Thoughts?

EDIT: I kind of missed the point of the question; I only use float…but see below

My vector class does both primitives and arrays, and can also produce an NIO buffer representation. I store the base values as floats, but keep an uninitialized array and FloatBuffer reference. If the corresponding methods are called, I lazily instantiate the array or NIO buffer (only once, guarded by an if (array == null)), then populate it and return it. This saves memory as well, because you only instantiate what you actually need. For further optimization, I can set a flag saying the data is constant, so the array or buffer population happens only once when first requested and never again unless the flag is flipped.
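
Roughly, the pattern looks something like this (a simplified sketch of the idea, not my actual class; names are made up):


import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class LazyVector3
{
    private float x, y, z;

    // Lazily created views; only built if somebody actually asks for them.
    private float[] array;
    private FloatBuffer buffer;

    // When true, the array/buffer are populated once and then handed back untouched.
    private boolean constant;
    private boolean arrayFilled, bufferFilled;

    public void set(float x, float y, float z)
    {
        this.x = x;
        this.y = y;
        this.z = z;
        arrayFilled = bufferFilled = false;
    }

    public void setConstant(boolean constant)
    {
        this.constant = constant;
    }

    public float[] asArray()
    {
        if (array == null)                  // lazy instantiation, happens at most once
            array = new float[3];
        if (!constant || !arrayFilled)      // constant data is only copied the first time
        {
            array[0] = x;
            array[1] = y;
            array[2] = z;
            arrayFilled = true;
        }
        return array;
    }

    public FloatBuffer asBuffer()
    {
        if (buffer == null)                 // lazy instantiation, happens at most once
            buffer = ByteBuffer.allocateDirect(3 * 4)
                               .order(ByteOrder.nativeOrder())
                               .asFloatBuffer();
        if (!constant || !bufferFilled)
        {
            buffer.clear();
            buffer.put(x).put(y).put(z).flip();
            bufferFilled = true;
        }
        return buffer;
    }
}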

Arrays all the way for me, where the number of elements is suitably large. The object overhead of 100k vertex objects would amount to about 1.2MB (roughly 12 bytes of header and padding per object), which also unnecessarily taxes the garbage collector.

Cas :)

If you’re running the Client VM and you care about performance, don’t use a big float[]. It will be slower than fields.

If you care about quick and clean coding, don’t use float[]s as it will be a nightmare to debug.

Float fields are normally more than accurate enough for all math operations, and take less RAM than doubles. Put as many data structures in tiny objects as you want; it will really help you build your project quickly and with little hassle. Not much can outperform field access, and array access is definitely slower.

But… all rules change when you start die-hard game-programming ;D

Thanks for the replies!
Is this really true?

I thought array access was very fast when referencing it by index, e.g. myvalues[2].
Take the following:

byte[] b = new byte[3];
// sets aside an area of memory 3 x 8 bits in size (plus any additional JVM stuff)

b[0] // goes to memory location start, reads off 8 bits
b[1] // goes to memory location start + (8 * 1) bits, reads off 8 bits
b[2] // goes to memory location start + (8 * 2) bits, reads off 8 bits

I’m probably thinking too much about this, but I’ve been using Java for over 5 years and come from a C++ background where it was really handy to know all this.

Oh, and I’m using doubles because I couldn’t believe the result of the following code was 0.049999997!


   float f = 0.0f;
   f += 0.01f;
   f += 0.01f;
   f += 0.01f;
   f += 0.01f;
   f += 0.01f;
   System.out.println (f);

Client VM:

byte[] b = new byte[3];
byte x = b[0]; // VM: is 0 equal or greater than 0, is 0 less than 3? if so, read from memory and store into x, else throw java.lang.ArrayIndexOutOfBoundsException
byte y = b[1]; // VM: is 1 equal or greater than 0, is 1 less than 3? if so, read from memory and store into y, else throw java.lang.ArrayIndexOutOfBoundsException
byte z = b[2]; // VM: is 2 equal or greater than 0, is 2 less than 3? if so, read from memory and store into z, else throw java.lang.ArrayIndexOutOfBoundsException

Server VM can optimize that away, but is only included in the SDK, not the JRE.

Ah, of course. It takes fewer operations to fetch the values from member variables, as these checks don’t have to be done.
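
A quick and very naive way to see the difference would be something like this; there’s no warm-up or proper benchmarking harness here, so treat any numbers as ballpark only:


public class FieldVsArray
{
    static float fx, fy, fz;                 // field-backed "vector"
    static final float[] fa = new float[3];  // array-backed "vector"

    public static void main(String[] args)
    {
        final int n = 100000000;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++)          // plain field access
        {
            fx += 1f; fy += 1f; fz += 1f;
        }
        long fieldTime = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < n; i++)          // array access: bounds-checked unless the JIT can prove it safe
        {
            fa[0] += 1f; fa[1] += 1f; fa[2] += 1f;
        }
        long arrayTime = System.nanoTime() - t0;

        System.out.println("fields: " + (fieldTime / 1000000) + " ms, array: " + (arrayTime / 1000000) + " ms");
        System.out.println(fx + fy + fz + fa[0]); // keep the results live so the loops aren't dead-code eliminated
    }
}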

I use the vecmath classes for general-purpose usage, but when it comes to the rendering part you have to fill in one of the missing features of the Java language: structs (i.e. a Point3f[] that would not be stored as an array of pointers but as a contiguous block of memory, compatible with other languages).

In my engine, I have a few wrapper classes that implement a generic interface IDataArray<> for struct arrays. The wrapper helps with access to the data and with the type of storage (NIO or a Java array, but you could imagine a mapped OpenGL VBO, for example).

This type of architecture is slower when accessed from Java but way faster when accessed from native code, so it depends on your needs.
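
Roughly, the shape of it is something like this (a cut-down sketch of the idea, not the real code):


import java.nio.FloatBuffer;

// A flat "struct array" of 3-float points that can be backed by either a
// plain float[] or an NIO FloatBuffer, depending on the implementation.
interface IDataArray<T>
{
    int size();                 // number of elements (points), not floats
    void get(int index, T out); // copy element at index into a reusable holder
    void set(int index, T in);  // write a holder back into the slot at index
}

class Point3f
{
    float x, y, z;
}

class FloatBufferPoint3Array implements IDataArray<Point3f>
{
    private final FloatBuffer data; // contiguous memory, directly usable from native code

    FloatBufferPoint3Array(FloatBuffer data) { this.data = data; }

    public int size() { return data.capacity() / 3; }

    public void get(int i, Point3f out)
    {
        out.x = data.get(i * 3);
        out.y = data.get(i * 3 + 1);
        out.z = data.get(i * 3 + 2);
    }

    public void set(int i, Point3f in)
    {
        data.put(i * 3, in.x);
        data.put(i * 3 + 1, in.y);
        data.put(i * 3 + 2, in.z);
    }
}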

             Vincent

So theoretically, what’s the best way to represent 3D space in terms of performance and memory usage?
Many small objects, or a float/double array?

NIO buffers and VBOs

To further explain Vorax’s comment:

The savings you get by putting the data directly in video memory and not having to copy it around the system far outweigh these other issues.
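
In practice that means packing your vertices into a direct NIO buffer once and handing it straight to the driver; a minimal sketch using only the JDK (the GL calls in the comments are whatever your binding exposes, e.g. LWJGL’s GL15):


import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class VertexUpload
{
    static FloatBuffer pack(float[] vertices)
    {
        // A direct, native-order buffer lives outside the Java heap, so the
        // driver can read it without an extra copy on the Java side.
        FloatBuffer vb = ByteBuffer.allocateDirect(vertices.length * 4)
                                   .order(ByteOrder.nativeOrder())
                                   .asFloatBuffer();
        vb.put(vertices).flip();
        return vb;
    }
}

// Then hand the buffer to the driver once, e.g. with LWJGL-style bindings:
// glBindBuffer(GL_ARRAY_BUFFER, vboId);
// glBufferData(GL_ARRAY_BUFFER, vb, GL_STATIC_DRAW);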

Just curious, how about these 3 variations:

Of course, the following should be the fastest, and it doesn’t consume much memory:

class JVector3 {
  private float x;
  private float y;
  private float z;

  public float getX() {
    return x;
  }

  public void setX(float val) {
    x = val;
  }
  // ...
}

But using an NIO FloatBuffer, I have to call the put method 3 times for every JVector3, whereas with a float array a single method call is sufficient. This could be a slowdown when using dynamic geometry in graphics applications.

Therefore, the next variation uses a float array, and because all members are private and final the array size cannot change after construction. The same is true for the element indices, which are final lookups. As a result the JVM should be able to do the bounds check once at construction time.

class JVector3 {
  private final float[] data;
  private final int x;
  private final int y;
  private final int z;

  public float getX() {
    return data[x];
  }

  public void setX(float val) {
    data[x] = val;
  }
  // ...
}

Finally, using a single index in the code, with all fields again private and final, ensures that the elements of a JVector3 are neighbours in memory (assuming the JVM puts the array into a single region). Furthermore, the JVM could cache x+1 and x+2 for fast access.

class JVector3 {
  private final float[] data;
  private final int x;

  public float getY() {
    return data[x + 1];
  }

  public void setY(float val) {
    data[x + 1] = val;
  }
}

just my random thoughts on it, don’t really know whether they are right…

Ha ha, zero, you should script several million runs of operations on each and post the results!

You missed my point; in general the first variation should always be the best. The other two may be used as a bridge to a vertex buffer, through an array which can be stored in a FloatBuffer with one call (see the sketch below). If possible, only one read and one write operation should be done during an update.
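
The buffer side of that bridge is just one bulk call (trivial sketch):


import java.nio.FloatBuffer;

class BulkUpload
{
    // One bulk transfer per update instead of three put() calls per vector.
    static void upload(float[] packed, FloatBuffer target)
    {
        target.clear();
        target.put(packed);
        target.flip();
    }
}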

Hey, why not have multiple implementations of a vector class? That is what OOP is for ;)

public abstract class Vector2 {
  public abstract float getX();
  public abstract float getY();

  public abstract void setX(float val) throws UnsupportedOperationException;
  public abstract void setY(float val) throws UnsupportedOperationException;

  public <T extends Vector2> T add(Vector2 right, T sum) {
     sum.setX(this.getX() + right.getX());
     sum.setY(this.getY() + right.getY());
     return sum;
  }
}
public class StructVector2 extends Vector2 {
 public float x;
 public float y;

  public float getX() { return this.x; }
  public float getY() { return this.y; }

  public void setX(float val) { this.x = val; }
  public void setY(float val) { this.y = val; }
}
public class ArrayVector2 extends Vector2 {
 private final float[] data;
 private final int x;

  public ArrayVector2(float[] data, int x ) {
    this.data = data;
    this.x = x;
  }

  public float getX() { return this.data[x]; }
  public float getY() { return this.data[x+1]; }

  public void setX(float val) { this.data[x] = val; }
  public void setY(float val) { this.data[x+1] = val; }
}
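
Used together it could look roughly like this (just an illustration, assuming the classes above):


class Vector2Demo
{
    public static void main(String[] args)
    {
        float[] storage = new float[4];             // two packed x,y pairs in one array
        Vector2 a = new ArrayVector2(storage, 0);   // view over storage[0..1]
        Vector2 b = new ArrayVector2(storage, 2);   // view over storage[2..3]

        StructVector2 s = new StructVector2();      // plain field-backed vector
        s.setX(1f);
        s.setY(2f);

        a.setX(3f);
        a.setY(4f);
        a.add(s, b);                                // b = a + s, written straight into the shared array

        System.out.println(storage[2] + ", " + storage[3]);  // prints 4.0, 6.0
    }
}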

Writes to native direct byte buffers are optimized out to straight memory access, no method call.
(At least in server VM.)

That’s part of the magic that gets us C speed when accessing native memory.

With my very limited C knowledge, I wrote a vertex-array transformer (with 4x4 matrix) using pointer arithmetic.

Even then the Sun Server VM blew the C application away; the Java code was 50% faster. I used “-O2 -6” as optimisation parameters in the free Borland compiler, and didn’t use SSE/SIMD, so that might have been the catch. Anyway, I was surprised :)
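
The hot loop is essentially a 4x4 transform per vertex, something along these lines (not the actual benchmark code, just the idea):


class VertexTransform
{
    // Transform an interleaved x,y,z vertex array in place by a 4x4 matrix
    // (row-major float[16], implicit w = 1 per vertex, bottom row ignored).
    static void transform(float[] m, float[] v)
    {
        for (int i = 0; i < v.length; i += 3)
        {
            float x = v[i], y = v[i + 1], z = v[i + 2];
            v[i]     = m[0] * x + m[1] * y + m[2]  * z + m[3];
            v[i + 1] = m[4] * x + m[5] * y + m[6]  * z + m[7];
            v[i + 2] = m[8] * x + m[9] * y + m[10] * z + m[11];
        }
    }
}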

The JVM probably used SSE scalar operations, which are much faster than the ordinary FP instructions that I suspect the Borland compiler used.