Hi! I’m not only a new forum member, but also a new OpenGL programmer.
I’m writing a Java/JOGL application with selection and picking functionality, and I’m finding that, when I process hits, all of the hit objects come back with a minDepth and maxDepth of zero.
When retrieving the minDepth and maxDepth from the select buffer, I’m getting depthInt = -2147483648, which converts to a float value of 0.0 when using this algorithm:
depthFloat = 1f + (float)((long)depthInt)/0x7fffffff
(The algorithm above is borrowed from page 469 of Andrew Davison’s Pro Java 6 3D Game Development.)
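In case it helps, here’s a stripped-down version of how I walk the hit records and convert the depths (processHits and toDepth are just my own helper names, simplified for this post; selectBuffer is the java.nio.IntBuffer shown further down):

// Each hit record in the select buffer is laid out as:
//   [numNames, minDepth, maxDepth, name0, name1, ...]
private void processHits(int numHits, IntBuffer selectBuffer) {
    int offset = 0;
    for (int i = 0; i < numHits; i++) {
        int numNames = selectBuffer.get(offset++);
        float minDepth = toDepth(selectBuffer.get(offset++));
        float maxDepth = toDepth(selectBuffer.get(offset++));
        System.out.println("hit " + i + ": minDepth=" + minDepth + ", maxDepth=" + maxDepth);
        for (int n = 0; n < numNames; n++) {
            System.out.println("  name: " + selectBuffer.get(offset++));
        }
    }
}

// The conversion borrowed from the book: the select buffer stores an
// unsigned 32-bit depth, but Java reads it back as a signed int.
private float toDepth(int depthInt) {
    return 1f + (float) ((long) depthInt) / 0x7fffffff;
}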
My understanding is that a minDepth of 0.0 means the hit object lies at the near clipping plane, and a minDepth of 1.0 means it lies at the far clipping plane.
My first thought was that the camera’s frustum might span too much Z space, leaving OpenGL with too little precision to distinguish the objects’ depths. That doesn’t seem to be the problem, though, because:
- the meshes that I’m displaying have been scaled with glScalef to fit within a 4x4x4 cube
- the meshes all reside within the camera’s frustum
- the camera’s frustum has a near clipping plane distance of 1.5 and a far clipping plane distance of 20.0 (projection setup sketched below)
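For reference, the projection is set up roughly like this (the 45-degree field of view and the aspect ratio below are just representative placeholders; the 1.5/20.0 near/far distances are the actual values I use):

gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(45.0, (double) width / height, 1.5, 20.0);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();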
After searching this forum, I stumbled across this post, which suggests that I should make sure my select buffer uses native byte order. My buffer is declared as follows:
IntBuffer selectBuffer = BufferUtil.newIntBuffer(4096);
According to the JOGL API JavaDoc, “the returned buffer will have its byte order set to the host platform’s native byte order.” So this doesn’t seem to be the problem.
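And in case the ordering matters, here’s the general shape of my selection pass, simplified (pick-matrix setup and the actual mesh drawing are omitted):

gl.glSelectBuffer(selectBuffer.capacity(), selectBuffer);
gl.glRenderMode(GL.GL_SELECT);

gl.glInitNames();
gl.glPushName(0);
// ... set up the pick matrix / projection, then draw each mesh
// after calling gl.glLoadName(meshId) ...
gl.glPopName();

int numHits = gl.glRenderMode(GL.GL_RENDER);
processHits(numHits, selectBuffer);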
The problem seems to occur regardless of the number of meshes that I display.
Does anyone have any suggestions about what I might be doing incorrectly?
Thanks in advance!