ByteBuffer issue

I am doing something like this:

ByteBuffer buffer = BufferUtil.newByteBuffer(size);

buffer.putShort(s);

gl.glTexSubImage3D(GL.GL_TEXTURE_3D, 0, 0, 0, 0, w, h, d, GL.GL_ALPHA, GL.GL_SHORT, buffer.asShortBuffer());

But the results I’m getting look like there is a problem with endianness.

Anyone see anything wrong with this approach?

When I use putFloat(f), GL_FLOAT and buffer.asFloatBuffer() the program works as expected…

Try calling order(ByteOrder.nativeOrder()) on the ByteBuffer after you create it, before you write anything.
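
Something like this (just a sketch, assuming BufferUtil here is JOGL's com.sun.opengl.util.BufferUtil):

ByteBuffer buffer = BufferUtil.newByteBuffer(size);  // allocateDirect() gives big-endian by default
buffer.order(java.nio.ByteOrder.nativeOrder());      // force the platform's native byte order (harmless if BufferUtil already did it)
buffer.putShort(s);                                  // writes after this use native endianness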

Try using ShortBuffer buffer = BufferUtil.newShortBuffer(size), then buffer.put(…short value…), and in the GL call just pass buffer.
Pretty sure BufferUtil lets you create a short buffer as well. I doubt this will change anything, but just in case.
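
Roughly like this (sketch; I'm assuming the argument to newShortBuffer is the number of shorts, not bytes, and shortValue stands in for your data):

ShortBuffer buffer = BufferUtil.newShortBuffer(size);  // element count, native byte order
buffer.put(shortValue);                                // ...fill the buffer...
buffer.rewind();                                       // reset the position before handing it to GL
gl.glTexSubImage3D(GL.GL_TEXTURE_3D, 0, 0, 0, 0, w, h, d, GL.GL_ALPHA, GL.GL_SHORT, buffer);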

This did not change anything. :frowning:

I copied the values from my ByteBuffer to a ShortBuffer, but I got the same results. I still need a ByteBuffer, though, so that the class can support bytes and floats as well.

I am also sure that the buffers contain correct values before being sent to OpenGL.
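
(For what it's worth, the check is nothing fancier than rewinding and dumping a few values before the upload, roughly like this:)

buffer.rewind();                            // read from the start
for (int i = 0; i < 10; i++) {
    System.out.println(buffer.getShort());  // these print the expected values
}
buffer.rewind();                            // rewind again before passing the buffer to GL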

I just found out that the problem is not endianness but something related to:

scale = 1.0f / (maxValue - minValue); //max = 4080, min = 0
gl.glPixelTransferf(GL.GL_ALPHA_SCALE, scale);

When using GL_FLOAT this works as expected. But with GL_SHORT the scale must be something like 5 to get a usable range.

What am I doing wrong?

Looks like this is my issue:

[quote]GL_RGBA
Each pixel is a four-component group: for GL_RGBA, the red component is first, followed by green, followed by blue, followed by alpha. Floating-point values are converted directly to an internal floating-point format with unspecified precision. Signed integer values are mapped linearly to the internal floating-point format such that the most positive representable integer value maps to 1.0, and the most negative representable value maps to -1.0. (Note that this mapping does not convert 0 precisely to 0.0.) Unsigned integer data is mapped similarly: the largest integer value maps to 1.0, and 0 maps to 0.0. The resulting floating-point color values are then multiplied by GL_c_SCALE and added to GL_c_BIAS, where c is RED, GREEN, BLUE, and ALPHA for the respective color components.
[/quote]
From: http://developer.3dlabs.com/documents/GLmanpages/glteximage3dext.htm
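
In other words, if the data is uploaded as signed shorts (GL_SHORT), it gets normalized so that 32767 maps to 1.0 before GL_ALPHA_SCALE is applied, which is why 1/(max - min) comes out far too small. Back-of-the-envelope (sketch, assuming my values really are signed 16-bit in [0, 4080]):

// rough arithmetic for signed 16-bit data in [0, 4080]
float maxValue = 4080.0f;
float normalizedMax = maxValue / 32767.0f;  // what 4080 becomes after the short-to-[-1, 1] mapping, about 0.125
float scale = 1.0f / normalizedMax;         // about 8, i.e. 32767 / (maxValue - minValue)
gl.glPixelTransferf(GL.GL_ALPHA_SCALE, scale);

With GL_FLOAT the values skip that normalization, so scale = 1.0f / (maxValue - minValue) works there.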