LWJGL VBOs - How to do memory optimizations?

I recently came across this article: https://www.khronos.org/opengl/wiki/Vertex_Specification_Best_Practices#Attribute_sizes, claiming some memory can be saved by, for example, using GL_UNSIGNED_SHORT instead of GL_FLOAT to store texture coordinates in VBOs.
As I understand it, this has to be set when calling glVertexAttribPointer. Using anything other than GL_FLOAT results in distorted colors/texture coordinates/the model not showing at all, depending on which data (vertices, colors, normals, …) I tried to store using a different format.

Do I have to convert the data I store in the VBOs somehow, and if so, how?
Thanks for your help.

For integral data types, use GL30.glVertexAttribIPointer (note the capital “i” in the name). When using glVertexAttribPointer, the type specifies the host/client-side type and everything will be cast/converted to float for the shader invocation.
See: https://stackoverflow.com/questions/28014864/why-do-different-variations-of-glvertexattribpointer-exist#answer-28014920
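To make the difference concrete, here is a small pure-Java sketch of what the shader ends up seeing for a stored unsigned short value; there are no actual GL calls here, and the helper names are made up for illustration, simulating the driver-side conversion:

```java
public class AttribPointerDemo {
    // What an 'in float'/'in vec*' attribute sees when the VBO holds
    // GL_UNSIGNED_SHORT data and glVertexAttribPointer was called with
    // normalized = false: the integer is converted to float as-is.
    static float asFloatUnnormalized(short s) {
        return (float) Short.toUnsignedInt(s);
    }

    // Same call, but with normalized = true: the value is mapped
    // linearly from 0..65535 to the range [0.0, 1.0].
    static float asFloatNormalized(short s) {
        return Short.toUnsignedInt(s) / 65535.0f;
    }

    // What an 'in ivec*'/'in uvec*' attribute sees when
    // glVertexAttribIPointer was used: the integer stays an integer.
    static int asInteger(short s) {
        return Short.toUnsignedInt(s);
    }

    public static void main(String[] args) {
        short stored = (short) 3;
        System.out.println(asFloatUnnormalized(stored)); // 3.0
        System.out.println(asFloatNormalized(stored));   // 3/65535, roughly 4.58e-5
        System.out.println(asInteger(stored));           // 3
    }
}
```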

From glVertexAttribLPointer() I take it that it is, after all, very well possible to use doubles in OGL for vertex positions, and ultimately for matrices too?

Also, how would I prepare my data? It’s nice that OGL can handle texture coordinates as shorts, but obviously we cannot just give it a Java short array, because texture coordinates need fractional precision.

When you use GL_UNSIGNED_SHORT you get exactly 65536 possible values, linearly distributed between 0.0 and 1.0, when you use normalized=true for glVertexAttribPointer. So a Java short value of (short)0 will map to 0.0 in the shader and a value of (short)~0 (= 65535) will map to 1.0 in the shader.
Likewise, when you want texture coordinates above 1.0 (for texture repeat/mirroring) you can specify normalized=false in glVertexAttribPointer and then the short values will be converted to float values for the shader without normalization to the 0.0…1.0 range.
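The quantization this implies can be sketched in plain Java (illustrative helper names, no GL calls): the smallest representable step is 1/65535, so any coordinate in [0, 1] survives the round trip to within half a step, which is what makes the "more than enough precision" claim below concrete:

```java
public class TexcoordPrecisionDemo {
    // Smallest increment representable with normalized GL_UNSIGNED_SHORT
    static final float STEP = 1.0f / 65535.0f;

    // Quantize a [0,1] texture coordinate to the nearest unsigned-short level
    static int quantize(float t) {
        return Math.round(t * 65535.0f);
    }

    // What the shader sees for that level with normalized = true
    static float dequantize(int level) {
        return level / 65535.0f;
    }

    public static void main(String[] args) {
        float t = 0.25f;
        float roundTripped = dequantize(quantize(t));
        // The round-trip error stays within one quantization step
        System.out.println(Math.abs(roundTripped - t) <= STEP); // true
    }
}
```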

That precision is more than enough for texture coordinates, I figured.
I can’t really get it to work properly, though. In theory, I only need to set the normalized parameter of glVertexAttribPointer to true and enable GL_NORMALIZE so that the values get transformed before being passed to the shaders, correct? I’m still storing my usual float array in the VBO.
The texture coordinates seem somewhat distorted, showing only a white line going across the textured area where usually the texture containing a text would be displayed correctly.

No. When you want to store shorts, you need to store shorts - not floats. Obviously.

ShortBuffer sb = <ShortBuffer filled with shorts>;
glBufferData(GL_ARRAY_BUFFER, sb, GL_STATIC_DRAW); // upload shorts, not floats
glVertexAttribPointer(attributeIndex, 2, GL_UNSIGNED_SHORT, true, 0, 0L);

GL_NORMALIZE is completely orthogonal to what you are doing. GL_NORMALIZE normalizes normal vectors specified via glNormal or glNormalPointer; that is old fixed-function pipeline stuff and does not apply to what you are doing at all.
Do look up the things you are using: https://www.khronos.org/registry/OpenGL-Refpages/es1.1/xhtml/glEnable.xml (search for GL_NORMALIZE)

Thanks for the link; that will be very useful in the future!
In what range would I put my texture coordinates in the ShortBuffer? Obviously, I cannot store the usual 0.0F to 1.0F as I normally would. Would I actually be storing, say, pixel-perfect coordinates, with OGL taking over the normalization to values between 0.0F and 1.0F for use in the shaders? (I kind of misphrased it before, but that is and was my main issue understanding this.)

Thanks for your help so far, it’s been great and I at least got the indices (glDrawElements) working using GL_UNSIGNED_SHORTs.
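For reference, packing indices for glDrawElements with GL_UNSIGNED_SHORT can be done with plain java.nio, as in this sketch (the method name is made up; the actual glBufferData/glDrawElements calls are shown only as comments, since they need a GL context):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public class IndexBufferDemo {
    // Pack int indices (0..65535) into a direct ShortBuffer, suitable for
    // glBufferData(GL_ELEMENT_ARRAY_BUFFER, buf, GL_STATIC_DRAW) followed by
    // glDrawElements(GL_TRIANGLES, buf.remaining(), GL_UNSIGNED_SHORT, 0L).
    static ShortBuffer packIndices(int[] indices) {
        ShortBuffer buf = ByteBuffer.allocateDirect(indices.length * 2)
                .order(ByteOrder.nativeOrder())
                .asShortBuffer();
        for (int i : indices) {
            if (i < 0 || i > 0xFFFF)
                throw new IllegalArgumentException("index out of unsigned-short range: " + i);
            // The narrowing cast keeps the low 16 bits; GL reinterprets
            // them as an unsigned value on its side.
            buf.put((short) i);
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ShortBuffer buf = packIndices(new int[] {0, 1, 2, 2, 1, 40000});
        // Java reads shorts back signed; Short.toUnsignedInt recovers the value
        System.out.println(Short.toUnsignedInt(buf.get(5))); // 40000
    }
}
```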

I already detailed this in: http://www.java-gaming.org/topics/lwjgl-vbos-how-do-do-memory-optimizations/38876/msg/371062/view.html#msg371062

My stupidity will be on public display for all of eternity. Thank you. I kind of didn’t get it because it didn’t make sense to me at all that you’d suddenly be declaring, for example, texture coordinates as values between 0 and 65535 just to save on some memory.

You’d probably want to write some utility method to still work with float values and let the method convert to the appropriate short value. Something like this:

public static short f2us(float f) {
  return (short) (f * 0xFFFF);
}

ShortBuffer sb = ...;
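One thing to watch out for with the plain cast above is that it truncates and can wrap for inputs outside [0, 1]; a slightly more defensive variant, together with the reverse mapping for sanity-checking (helper names are made up for illustration), might look like:

```java
public class TexcoordPacking {
    // float in [0,1] -> unsigned short bits stored in a Java (signed) short,
    // with rounding and clamping (a plain cast truncates and can wrap)
    public static short f2us(float f) {
        int v = Math.round(f * 65535.0f);
        if (v < 0) v = 0;
        if (v > 65535) v = 65535;
        return (short) v;
    }

    // Reverse mapping, i.e. what the shader will see with normalized = true
    public static float us2f(short s) {
        return Short.toUnsignedInt(s) / 65535.0f;
    }

    public static void main(String[] args) {
        System.out.println(us2f(f2us(0.0f))); // 0.0
        System.out.println(us2f(f2us(1.0f))); // 1.0
    }
}
```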

…working on that right now.