glDrawElements or glInterleavedArrays

Okay, thinking about it logically, it’s a lot better to leave OpenGL to do byte-to-float conversions as necessary. It knows what’s best and will do it as fast as humanly (computerly?) possible anyway. ;D It doesn’t need me slowing it down. Still not sure of the best way to store colour data in bytes, though. There must be a better solution than just sticking to [0…127]?
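(For what it’s worth, here’s a minimal Java sketch of the usual workaround; the variable names are just for illustration. Java’s byte is signed, but you can still store the full 0…255 range in it by casting, because only the 8-bit pattern matters to OpenGL.)

    // Pack full-range RGBA components (0..255) into Java's signed bytes.
    // 255 becomes (byte) -1 after the cast, but the bit pattern is still 0xFF,
    // which is exactly what GL_UNSIGNED_BYTE colour data expects.
    int r = 255, g = 128, b = 64, a = 255;
    byte[] rgba = new byte[] { (byte) r, (byte) g, (byte) b, (byte) a };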

I’ve done a lot more learning about signed/unsigned now, and I’m beginning to get the hang of it. Whatever you want to do with a byte, the first thing you seem to have to do is promote it to an int anyway… why’d they even bother with byte? :slight_smile: :wink:
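(And on the signed/unsigned point, a rough sketch of the standard trick: Java promotes bytes to int for arithmetic anyway, so you mask with 0xFF whenever you need the unsigned 0…255 value back.)

    byte packed = (byte) 200;          // stored bit pattern is 0xC8
    int wrong = packed;                // sign-extends to -56, not 200
    int unsigned = packed & 0xFF;      // masks the low 8 bits: 200 again

    // Same idea for turning a 0.0..1.0 float colour component into a byte:
    float intensity = 0.75f;
    byte asByte = (byte) (intensity * 255.0f + 0.5f);   // rounds to 191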

Because the hardware deals in values 0…255 internally, the GL drivers are almost certainly sending them in the range 0…255 as well. Unsigned bytes also take a quarter of the bandwidth of floats (4 bytes per RGBA colour instead of 16), which is a very significant saving.

I use 0…255 for my bytes, e.g. gl.color4ub((byte)255, (byte)255, (byte)255, (byte)255)

Cas :slight_smile:
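(Since the thread is about glDrawElements/glInterleavedArrays, here’s a rough sketch, with made-up names and a tiny vertex count, of packing unsigned-byte colours and float positions interleaved into a direct ByteBuffer. The GL_C4UB_V3F layout is 4 colour bytes followed by 3 position floats per vertex; the buffer would then be handed to glInterleavedArrays or glColorPointer/glVertexPointer through whichever GL binding you’re using.)

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class InterleavedSketch {
        public static void main(String[] args) {
            // One GL_C4UB_V3F vertex: 4 unsigned colour bytes + 3 floats = 16 bytes.
            final int VERTEX_SIZE = 4 + 3 * 4;
            int vertexCount = 3;

            ByteBuffer vertices = ByteBuffer.allocateDirect(vertexCount * VERTEX_SIZE)
                                            .order(ByteOrder.nativeOrder());

            // First vertex: opaque white at (0, 1, 0).
            vertices.put((byte) 255).put((byte) 255).put((byte) 255).put((byte) 255); // RGBA, 0..255
            vertices.putFloat(0.0f).putFloat(1.0f).putFloat(0.0f);                    // x, y, z
            // ...pack the remaining vertices the same way, then:
            vertices.flip();

            // Roughly: glInterleavedArrays(GL_C4UB_V3F, 0, vertices), then
            // glDrawElements(...) via your binding of choice.
        }
    }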

I’ve had to leave mine as floats, because dealing with colors as unsigned bytes appears to be unsupported in most of the shader languages going forward. Just about all of them define a color as a float3 or float4. In fact, the smallest datatype that most drivers deal with for any of the new features is an int (32-bit) or a half (16-bit float).

Before anyone starts wondering what this has to do with OpenGL 2.0: glslang takes its input the same way as Cg does. In fact, unless you’re very API-knowledgeable, you could easily mistake the shader language in OpenGL 2.0 for Cg.