I'd like to ask your opinion, guys. Because of ODE, it is usually best to keep Odejava objects within a certain scale range.
But on the Xith3D side, people usually want to scale all Odejava objects, e.g. to make them bigger. The right place to do this is to add a scaling factor to every position and translation method in the Geom and Body objects; for example, a setter would divide the incoming coordinates by the scale factor, as sketched below.
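Here is a minimal sketch of the setter side (setPositionNative is a hypothetical stand-in for whatever native call Body.setPosition already makes; I don't want to guess the exact SWIG signature here):

public void setPosition(Vector3f position) {
    // Whatever native call the real setter makes, the only change is
    // dividing the incoming coordinates by the scale factor, so ODE
    // keeps working in its own small units.
    setPositionNative(position.x / scale,   // hypothetical stand-in for
                      position.y / scale,   // the existing native
                      position.z / scale);  // setter call
}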
Because Body.getPosition() and Body.getQuaternion() are called very often (once per object per frame), do you see any performance loss in the following code:
public Vector3f getPosition() {
    return new Vector3f(
        Ode.floatArray_getitem(posArray, 0),
        Ode.floatArray_getitem(posArray, 1),
        Ode.floatArray_getitem(posArray, 2));
}
compared to this code:
public Vector3f getPosition() {
    if (scale != 1) {
        Vector3f result = new Vector3f(
            Ode.floatArray_getitem(posArray, 0),
            Ode.floatArray_getitem(posArray, 1),
            Ode.floatArray_getitem(posArray, 2));
        result.scale(scale);
        return result;
    } else {
        return new Vector3f(
            Ode.floatArray_getitem(posArray, 0),
            Ode.floatArray_getitem(posArray, 1),
            Ode.floatArray_getitem(posArray, 2));
    }
}
I am not too good at JVM optimization, but my gut feeling says that this does not affect performance in any way, since the JVM optimizes these things very nicely. Or am I a complete fool in this case?
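If anyone wants to measure rather than guess, a naive timing loop like the sketch below would at least show the order of magnitude (the org.odejava.Body import is assumed from Odejava's package conventions; this is nowhere near a rigorous micro-benchmark, so treat the numbers as indicative only). My own guess is that the branch is noise next to the Vector3f allocation and the native calls into ODE.

import javax.vecmath.Vector3f;
import org.odejava.Body; // package name assumed

public class GetPositionBench {
    // Pass in an already-initialized body from your existing setup code.
    public static void bench(Body body) {
        Vector3f sum = new Vector3f();
        for (int i = 0; i < 100000; i++) {
            sum.add(body.getPosition()); // warm-up so the JIT compiles first
        }
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            sum.add(body.getPosition()); // keep results live so the loop is not dead code
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(elapsed + " ms for 1M calls (checksum " + sum + ")");
    }
}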
Below is an explanation taken from the ODE FAQ:
12.8. Should I scale my units to be around 1.0 ?
Say you need to simulate some behavior on the scale of a few millimeters and a few grams. These small lengths and masses will usually work in ODE with no problem. However, occasionally you may experience stability problems that are caused by lack of precision in the factorizer. If this is the case, you can try scaling the lengths and masses in your system to be around 0.1…10. The time step should also be scaled accordingly. The same guideline applies when large lengths and masses are being used.
In general, length and mass values around 0.1…1.0 are better as the factorizer may not lose so much precision. This guideline is especially helpful when single precision is being used.
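To make that concrete, here is a hedged sketch of the kind of conversion layer the FAQ implies (the class name and the example factors are my own assumptions, not Odejava API). It takes the FAQ's millimeter/gram scenario, with the world otherwise using meters and kilograms. Note that rescaling units also changes the numeric value of derived quantities such as gravity and the time step, so those need to be converted with the same factors.

import javax.vecmath.Vector3f;

public class OdeUnits {
    // Example factors for millimeter-scale, gram-scale bodies:
    public static final float LENGTH_SCALE = 1000f; // meters -> millimeters, lands lengths in 0.1…10
    public static final float MASS_SCALE = 1000f;   // kilograms -> grams, same idea for masses

    public static Vector3f toOde(Vector3f worldPos) {
        Vector3f p = new Vector3f(worldPos);
        p.scale(LENGTH_SCALE); // hand ODE values near 1.0
        return p;
    }

    public static Vector3f fromOde(Vector3f odePos) {
        Vector3f p = new Vector3f(odePos);
        p.scale(1f / LENGTH_SCALE); // back to world units for Xith3D
        return p;
    }

    public static float toOdeMass(float kilograms) {
        return kilograms * MASS_SCALE;
    }
}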