VBO Best Practices - Interleaved Indexing?

I am trying to establish some basic functionality using VBO and VAO setups, but it is hard to find good examples and tutorials. They either use poor practices, don't use all elements (i.e. just vertex and color, or just vertex and shaders), or rely on outdated methods. I want to use the best, up-to-date methods but can't seem to pull it all together.

Here is my basic logical setup:

I have a class that initializes everything, then runs the main rendering loop. Once done, it destroys used resources then quits.
I have a class that contains all of the functions used in creating my OpenGL window such as setting options like fog or lighting.

These two run together and function quite well, so they are not the subject of this post.

I have a class that contains 2 main functions, along with everything needed to support those two functions:

initialize()

  • This class contains a static initializer() that creates a new instance of itself and passes in critical variables
  • This class contains a HashMap that keeps track of all instances and allows me to call all of their draw() when needed
  • This class constructor contains methods to load and parse a Blender .obj file
  • This class parses data into appropriate buffers, giving me Vertex, Texture coord, Normal, and Index buffers ready made
  • This class then creates the VAO/VBO/IBOs (code to follow)

draw()

  • This method sets up the environment for drawing this object, draws it, then cleans up after itself (code to follow)

The logic of all of this is so that I can initialize a copy of this class for each object I want to draw to the 3D world. It would load the data from external files, and this class would contain all the functions needed to draw the object using vertex, normal, texture coordinates, color (tinting), lighting, etc…
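
A rough skeleton of that structure (simplified, with placeholder names rather than my exact class) would be:


import java.util.HashMap;
import java.util.Map;

// Placeholder skeleton of the class described above (names are illustrative only).
public class WorldObject {

    // Keeps track of every instance so all of them can be drawn each frame.
    private static final Map<String, WorldObject> INSTANCES = new HashMap<String, WorldObject>();

    private int vaoHandle, vHandle, tHandle, nHandle, iHandle;
    private int indexCount;

    // Static initializer: creates a new instance and passes in the critical variables.
    public static WorldObject initialize(String name, String objFilePath) {
        WorldObject obj = new WorldObject(objFilePath);
        INSTANCES.put(name, obj);
        return obj;
    }

    private WorldObject(String objFilePath) {
        // 1. Load and parse the Blender .obj file
        // 2. Fill the vertex / texture coord / normal / index buffers
        // 3. Create the VAO/VBO/IBOs (code to follow below)
    }

    // Called once per frame to draw every registered instance.
    public static void drawAll() {
        for (WorldObject obj : INSTANCES.values()) {
            obj.draw();
        }
    }

    public void draw() {
        // Set up state for this object, draw it, clean up (code to follow below)
    }
}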

The data in an .obj file is stored in a way that makes an interleaved indexed VAO look like the best option. Just in case that isn't a real thing, let me explain:

The data for each vertex is stored in the .obj file in numbered order - insert directly into vertexBuffer
The data for each normal is stored in the .obj file in numbered order - insert directly into normalBuffer
The data for each texture coordinate is stored in the .obj file in numbered order - insert directly into textureBuffer
The data for each face is stored in the .obj file in numbered order - insert directly into indexBuffer in the order: v, t, n, v, t, n, v, t, n

So the data is stored in separate VBOs but the indexes are interleaved.
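
For example, the test triangle used in the code below comes out of the exporter looking roughly like this, with one index per attribute on every face corner (v/vt/vn, all 1-based):


v  -0.1 -0.1 0.0
v   0.1 -0.1 0.0
v   0.0  0.1 0.0
vt -0.1 -0.1
vt  0.1 -0.1
vt  0.0  0.1
vn  0.0  0.0  1.0
f 1/1/1 2/2/1 3/3/1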

However, I can't get OpenGL to understand this as easily as I do. How do I do this?

Here is my initialization code for a simple triangle: (reading from external file removed for clarity)
Data:
[spoiler]


            vBuffer = BufferUtils.createFloatBuffer(1 /*faces*/ 
                                                   *3 /*vertexes per face*/
                                                   *3 /*floats per vertex*/);
            vBuffer.put(new float[]{-0.1f, -0.1f, 0f});
            vBuffer.put(new float[]{+0.1f, -0.1f, 0f});
            vBuffer.put(new float[]{+0.0f, +0.1f, 0f});
            vBuffer.flip();

            tBuffer = BufferUtils.createFloatBuffer(1 /*faces*/ 
                                                   *3 /*coords per face*/
                                                   *2 /*floats per coord*/);
            tBuffer.put(new float[]{-0.1f, -0.1f});
            tBuffer.put(new float[]{+0.1f, -0.1f});
            tBuffer.put(new float[]{+0.0f, +0.1f});
            tBuffer.flip();

            nBuffer = BufferUtils.createFloatBuffer(1 /*faces*/ 
                                                   *1 /*normal per face*/
                                                   *3 /*floats per normal*/);
            nBuffer.put(new float[]{0.0f, 0.0f, 1f});// Straight towards camera
            nBuffer.flip();

            iBuffer = BufferUtils.createIntBuffer(1 /*faces*/ 
                                                   *3 /*index per face*/
                                                   *3 /*int per index*/);
            iBuffer.put(new int[]{1,1,1});  // vertex, texCoord, normal index
            iBuffer.put(new int[]{2,2,1});
            iBuffer.put(new int[]{3,3,1});
            iBuffer.flip();

[/spoiler]
Bindings:
[spoiler]


        // Generate VAO Container for model
        vaoHandle = glGenVertexArrays();
        glBindVertexArray(vaoHandle);

            // Generate Vertex Buffer Object
            vHandle = glGenBuffers();
            // Bind vertexes to Handle
            glBindBuffer(GL_ARRAY_BUFFER, vHandle);
            glBufferData(GL_ARRAY_BUFFER, vBuffer, GL_STATIC_DRAW);
            // Put the VBO in the attributes list at index 0
            glVertexAttribPointer(0,        // Attribute Position Counter/Pointer - Must match layout in shader?
                                  3,        // Size of each item
                                  GL_FLOAT, // Format of each item
                                  false,    // Normalized?
                                  3,        // Stride
                                  0);       // Array Buffer Offset
            // Deselect (bind to 0) the VBO
            glBindBuffer(GL_ARRAY_BUFFER, 0);  // Unbind VBO

            // Generate Texture Buffer Object
            tHandle = glGenBuffers();
            // Bind texture coords to Handle
            glBindBuffer(GL_ARRAY_BUFFER, tHandle);
            glBufferData(GL_ARRAY_BUFFER, tBuffer, GL_STATIC_DRAW);
            // Put the VBO in the attributes list at index 1
            glVertexAttribPointer(1, 3, GL_FLOAT, false, 3, 1);
            // Deselect (bind to 0) the VBO
            glBindBuffer(GL_ARRAY_BUFFER, 0);  // Unbind VBO
            
            // Generate Normal Buffer Object
            nHandle = glGenBuffers();
            // Bind normals to Handle
            glBindBuffer(GL_ARRAY_BUFFER, nHandle);
            glBufferData(GL_ARRAY_BUFFER, nBuffer, GL_STATIC_DRAW);
            // Put the VBO in the attributes list at index 2
            glVertexAttribPointer(2, 3, GL_FLOAT, false, 3, 2);
            // Deselect (bind to 0) the VBO
            glBindBuffer(GL_ARRAY_BUFFER, 0);  // Unbind VBO
            
            // Generate Index Buffer Object
            iHandle = glGenBuffers();
            // Bind indices to Handle
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iHandle);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, iBuffer, GL_STATIC_DRAW);
            // Put the VBO in the attributes list at index 3
            glVertexAttribPointer(3, 3, GL_INT, false, 3, 0);
            // Deselect (bind to 0) the VBO
            glBindBuffer(GL_ARRAY_BUFFER, 0);  // Unbind VBO
            
        // Deselect (bind to 0) the VAO
        glBindVertexArray(0);
        


[/spoiler]

Here is my draw code so far for the object:
[spoiler]


        glUseProgram(shaderProgram);        
        // Bind chosen VAO and Enable it
        GL30.glBindVertexArray(vaoHandle);
        GL20.glEnableVertexAttribArray(0); // Vertex
        GL20.glEnableVertexAttribArray(1); // Texture
        GL20.glEnableVertexAttribArray(2); // Normal
        GL20.glEnableVertexAttribArray(3); // Index
        
        
        glDrawElements(GL_TRIANGLES,    // type
                       indexCount,      // count of indexes groups (currently 3)
                       GL_INT,          // format of data
                       0);              // offset?
        
         
        // Put everything back to default (deselect)
        GL20.glDisableVertexAttribArray(0); // Vertex
        GL20.glDisableVertexAttribArray(1); // Texture
        GL20.glDisableVertexAttribArray(2); // Normal
        GL20.glDisableVertexAttribArray(3); // Index
        GL30.glBindVertexArray(0);
        glUseProgram(0);        


[/spoiler]

I know there is some stuff here for shaders, but that isn't the main focus of this post; however, if you want to elaborate on that too, be my guest. If not answered here, I will make another post about the shader side later.

OK, so I could be wrong about this, but I’m pretty sure you’re supposed to make VAOs and then the idea is to put those VAOs into a VBO.

Also, you don't use glVertexAttribPointer to define the size of your buffer when creating it, or at least I've never seen anyone do it that way. What you want to do is build your buffer, generate a buffer in OpenGL, bind that buffer, then call glBufferData to fill it, then unbind.


float[] vbo = createInterleavedArray();
		
FloatBuffer buffer_data = Buffers.newDirectFloatBuffer(vbo.length);
buffer_data.put(vbo);
buffer_data.flip();
gl.glGenBuffers(1, m_vbo_handle, 0);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vbo_handle[0]);
gl.glBufferData(GL.GL_ARRAY_BUFFER, vbo.length * Buffers.SIZEOF_FLOAT, buffer_data, GL.GL_STATIC_DRAW);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);

Now the code I posted only deals with VBOs, but I’m pretty sure you can grab the info from a VAO and put it into the VBOs data.

Also, what you’re doing doesn’t look like interleaving data, to interleave data you’d store the data in a buffer/array like you wrote for how objects are defined in .obj files, eg. [v, n, t, v, n, t, v, n, t, v, n, t]. So you only need one buffer to put in your VAO/VBO (your vertices and normals are in the same buffer). Then you can build multiple interleaved VAOs and put them all into a VBO.

Ok, from the research I have been finding, there is a lot of confusion about this. The way I understand it right now, is that a VBO or Vertex Buffer Object is the actual buffered object that gets uploaded to the VRAM. If you use more than one VBO though, you link them together into an array or Vertex Array Object, VAO.

[quote] A Vertex Array Object (VAO) is an object which contains one or more Vertex Buffer Objects and is designed to store the information for a complete rendered object.
[/quote]
courtesy of https://www.opengl.org/wiki/Tutorial2:_VAOs,VBOs,Vertex_and_Fragment_Shaders(C/_SDL)

Part of my confusion is that many of the tutorials create and then display the VBOs in one block of code. I am trying to separate initialization from drawing, so I'm still mixed up on where each piece goes. Would the attribute pointer then go in the draw part, to be called every frame?
something like this?


        glBindVertexArray(vaoHandle);
        glEnableVertexAttribArray(0); // Vertex
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 3, 0);

I know that the 0 in the enable statement matches the first 0 on the pointer statement, but I can’t figure out how to go beyond that. VAOs can have up to 16 (0-15) of these attributes.

Correct, I am not interleaving data, I am interleaving indexes. I don’t know if that is even possible, but that’s what I want to know.

I could do some sort of post processing where I take the interleaved indexes, and use that to build 1 interleaved VBO containing all the information I need. If that is the accepted way of doing it, then ok, but I wanted to try to save myself the step if possible.

If I’m not mistaken, that’s not possible. If you’re using a model format (like OBJ) that uses interleaved or separate indices, you’ll need to preprocess the data for use with OpenGL. This was recently discussed here.

OpenGL and OBJ files use indices for different purposes. OBJ files use indices solely to save space by eliminating duplicate attributes, which is why you have separate indices for each attribute. A cube with simple texturing would only need 8 vertex positions, 4 texture coordinates and 6 normals for example, which saves memory. If you were to store each position/texcoord/normal combination, you’d need 24 of them, so it saves a lot of space.

On the other hand, the primary goal of OpenGL's indexed rendering is NOT to save storage but to avoid redundant processing. This is done by having a post-transform vertex cache that allows vertices to be reused without having to be processed again as long as they're in the cache. Since GPUs are only capable of rasterizing triangles, this is very useful when drawing the quads our example cube is made of. Each face is essentially a quad, so without this vertex cache you'd have to process 6 vertices to form 2 triangles, but with the vertex cache the two shared vertices are only processed once, reducing that to 4 and saving 33% of the work.
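
To make that concrete, one cube face submitted as two triangles only needs 4 vertices and 6 indices, with two of the vertices shared (illustrative numbers, not taken from any code in this thread):


// One quad face as two triangles: vertices 0 and 2 are referenced twice,
// so with a post-transform cache they only run through the vertex shader once.
float[] facePositions = {
    -1f, -1f, 1f,   // 0: bottom-left
     1f, -1f, 1f,   // 1: bottom-right
     1f,  1f, 1f,   // 2: top-right
    -1f,  1f, 1f    // 3: top-left
};
int[] faceIndices = {
    0, 1, 2,    // first triangle
    2, 3, 0     // second triangle reuses 2 and 0
};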

The vertex cache works on the assumption that a vertex shader will always produce the same result when it is run with identical input. This is why indexed rendering only supports one index, which is used to get all vertex attributes from VBOs. The GPU hardware can simply check if the cache contains the post-transform data of a given index, and if it does it can skip the vertex shader. If each attribute had its own index, the complexity of this cache would increase a lot, as ALL attribute indices would have to match. It’s no longer a simple integer comparison, but a comparison between N integers. The hardware simply does not support that.

That being said, it is possible to emulate what you want to do. It's possible to store the attribute indices in a normal VBO and store the attribute data in TBOs (texture buffer objects). That way, the vertex shader can use the index provided to read vertex attributes from the attribute TBOs. Since TBOs are essentially VBOs you can randomly access using texture reads, you can emulate the .OBJ index system if you want. I seriously doubt that it'd be a good idea when it comes to performance and implementation complexity though. It's simply not worth the extra work for the memory saved.

Great, thanks for finally putting my mind to rest. OK, so pre-processing my data into an interleaved VBO seems straightforward enough. I will attempt to implement this tonight and see how it works (update as soon as I am done).

how does this work? (code)

I am not sure if I understood any of that. Do you know of any good tutorials or wiki pages that explain this? maybe with code, diagrams, or videos?

There is no code for it. It’s automatic, transparent and done in hardware. There’s actually no guarantee that the GPU you’re running on actually has such a cache, but it’s safe to assume there is one. All you need to know is that when using indexed rendering (as OpenGL defines it) GPUs are able to skip rerunning the vertex shader in many cases.

Here’s the OpenGL.org article on the post-transform cache: https://www.opengl.org/wiki/Post_Transform_Cache.

TL;DR: They wanted to make it simple, so they only support one index per vertex.

It sounds like I may have gone a bit overboard explaining things. What I basically tried to explain with that paragraph was that the hardware developers tried to keep things simple, which is why OpenGL only supports vertex indexing, not attribute indexing like OBJ files have. To give you some Java-like pseudo-code of how the post-transform cache works:



int vertexIndex = getNextVertexIndex();
ResultOfVertexShader result = null;

for(int i = 0; i < postTransformCache.size(); i++){

    if(postTransformCache.get(i).index == vertexIndex){
        result = postTransformCache.get(i).data; //We found it in the cache! Just use it instead!
        break;
    }
}

if(result == null){
    //Not found in cache. Run vertex shader and store the result in the cache.
    result = runVertexShader(getVertexShaderInput(vertexIndex));
    postTransformCache.add(vertexIndex, result); //Ejects an old cache entry if the cache is full
}


This is all implemented in fixed-functionality hardware that’s able to do this extremely quickly. Now, try to figure out how to do this if you have one index per attribute. The problem is that to determine if two vertices are equal, you can no longer simply compare a single vertex index to figure out if the vertices are the same. You need to check if each and every vertex attribute index matches that of the stored cache entry. This’ll be messy, so try to read it through a few times if it’s not clear what’s happening.


int[] attributeIndices = getNextVertexAttributeIndices();
ResultOfVertexShader result = null;

for(int i = 0; i < postTransformCache.size(); i++){

    int[] cacheAttributeIndices = postTransformCache.get(i).getAttributeIndices();

    //Ack! We need to compare every single attribute index with the one stored in the cache!
    boolean equal = true;
    for(int j = 0; j < attributeIndices.length; j++){
        if(attributeIndices[j] != cacheAttributeIndices[j]){
            equal = false; //This attribute differed! It's not a match!
            break;
        }
    }

    if(equal){
        result = postTransformCache.get(i).data; //We found it in the cache! Just use it instead!
        break;
    }
}

if(result == null){
    //Not found in cache. Run vertex shader and store the result in the cache.
    result = runVertexShader(getVertexAttributes(attributeIndices));
    postTransformCache.add(attributeIndices, result); //Ejects an old cache entry if the cache is full
}

The nested loop in there is much more difficult to implement in hardware than the first one. It’s much easier to compare a fixed single index than to compare a varying number of indices. It’s most likely implemented with Content-addressable memory so that it can instantly find if the vertex is already cached.

Ah, it all makes sense now! Thanks for explaining it better.

So, if I understand it correctly, if I use a single classic interleaved VBO and forget all about the VAO, Indexing, and etc, then it will only run the vertex shader when a new vertex location is requested. Ones that have already been run through the shader will remember the previous calculations and run with it? Not counting other attributes like color, texture, and normals, just in reference to vertexes?

Just to make sure there’s no confusion, sometimes the term vertex is used to refer only to a position in a graph or mesh, but in this context ‘vertex’ refers to all data associated with a geometrical vertex in OpenGL (position at minimum, but also texture coordinate, color, and so on if present). The discussion of indexing and caching here is in reference to all data associated with a vertex, not just the position.

Also, keep in mind that caching is an ‘under the hood’ detail, and although you may concern yourself with it if you wish (there are ways to optimize mesh data for cache coherency), you don’t have to, and it’s probably simpler not to unless you have some need for it.

Right. I come from a 3D Modeling background. Blender, Sketchup, etc… To me, I think of a Vertex as primarily a fixed point in 3D Space. That point can then have attributes like color, texture coords, normals, etc. where a face meets that point in space. Whether it means just the coords or the whole thing depends on context: am I talking about a position, or a component of a face? Makes sense to me that way.

So, here is my code now. This functions to draw the object; however, at the moment there is no color, it is just white. I would probably have to attach a texture or enable lighting to verify the tex coords and normals.

How does this look?

The Data:


            vertBufferObj = BufferUtils.createFloatBuffer(1 /*faces*/ 
                                                         *3 /*vertexes per face*/
                                                         *3 /*floats per vertex*/
                                                         +1 /*faces*/ 
                                                         *3 /*coords per face*/
                                                         *2 /*floats per coord*/
                                                         +1 /*faces*/ 
                                                         *3 /*normals per face*/
                                                         *3 /*floats per normal*/);
            vertBufferObj.put(new float[]{-0.1f, -0.1f, 0f});   // vertex
            vertBufferObj.put(new float[]{0f, 0f});             // texture
            vertBufferObj.put(new float[]{0f, 0f, 1f});         // normal
            vertBufferObj.put(new float[]{+0.1f, -0.1f, 0f});   // vertex
            vertBufferObj.put(new float[]{0f, 0f});             // texture
            vertBufferObj.put(new float[]{0f, 0f, 1f});         // normal
            vertBufferObj.put(new float[]{+0.0f, +0.1f, 0f});   // vertex
            vertBufferObj.put(new float[]{0f, 0f});             // texture
            vertBufferObj.put(new float[]{0f, 0f, 1f});         // normal
            vertBufferObj.flip();

The Bind:


            // Generate Vertex Buffer Object
            vHandle = glGenBuffers();
            // Bind vertexes to Handle
            glBindBuffer(GL_ARRAY_BUFFER, vHandle);
            glBufferData(GL_ARRAY_BUFFER, vertBufferObj, GL_STATIC_DRAW);
            // Deselect (bind to 0) the VBO
            glBindBuffer(GL_ARRAY_BUFFER, 0);  // Unbind VBO

The Draw:


        // activate and specify pointer to vertex arrays
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        
        glBindBuffer(GL_ARRAY_BUFFER, vHandle);
        glVertexPointer(3, GL_FLOAT, 8*4, 0);       //  specify pointer to vertex coords array
        glTexCoordPointer(2, GL_FLOAT, 8*4, 3*4);   //  specify pointer to texture coords array (2 floats per coord)
        glNormalPointer(GL_FLOAT, 8*4, 5*4);        //  specify pointer to normal array - No Size (always 3)

        // draw the triangle (3 vertices in this buffer)
        glDrawArrays(GL_TRIANGLES,  // type
                     0,             // offset - Where to start
                     3);            // quantity - How many vertices to draw

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        
        // deactivate vertex arrays after drawing
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);

Next Step: Shaders

Like Jesse said, in OpenGL vertex is defined as all the attributes that make up a vertex in a triangle, not just position. A vertex has “vertex attributes”, such as a position, a normal, a color, texture coordinates, bone weights for skinning, etc.

@Za’Anzabar
Your code looks good, but you seem to have completely skipped all indexing. Even if .OBJ's indexed rendering isn't usable with OpenGL, it's still often worthwhile to only store each combination of position/texcoord/normal once and use indexed rendering. Storing the exact same vertex attributes twice prevents the GPU from reusing the vertices. Here's an example of rendering a quad using indexed rendering to draw 2 triangles using only 4 vertices and 6 indices:
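
Something like this (a minimal, position-only sketch in the same style as your draw code; the handle names are made up):


// 4 unique vertices for the quad...
FloatBuffer quadVerts = BufferUtils.createFloatBuffer(4 * 3);
quadVerts.put(new float[]{-0.1f, -0.1f, 0f,
                          +0.1f, -0.1f, 0f,
                          +0.1f, +0.1f, 0f,
                          -0.1f, +0.1f, 0f});
quadVerts.flip();

// ...and 6 indices forming 2 triangles that share vertices 0 and 2.
IntBuffer quadIndices = BufferUtils.createIntBuffer(6);
quadIndices.put(new int[]{0, 1, 2,  2, 3, 0});
quadIndices.flip();

int quadVbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glBufferData(GL_ARRAY_BUFFER, quadVerts, GL_STATIC_DRAW);

int quadIbo = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, quadIndices, GL_STATIC_DRAW);

// Draw: positions come from the ARRAY_BUFFER, indices from the ELEMENT_ARRAY_BUFFER.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glDisableClientState(GL_VERTEX_ARRAY);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);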

This also works with interleaved vertex attributes. If you tell OpenGL to draw index 5, OpenGL pulls the vertex attributes by reading at the point (index*stride + offset) for each attribute. When you call glDrawArrays(GL_*****, 0, 10), it’s the same as calling glDrawElements() with an index buffer with the indices (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).

Yes, I temporarily abandoned indexing in favor of interleaving. In my research, it seems the two are not used together. When I find tutorials that are sequenced like here:
http://www.java-gaming.org/index.php?topic=24272.0
It seems like indexing is used, then we move on to more advanced forms of VBOs using interleaving.

I don’t see how to use them together. I understand that if all you have is vertex position data, then it works wonders saving a ton of time/power. But when you add even one other attribute like color, it all goes to hell in a hand basket and I can’t get a render to function. Hence the whole point of this thread.

For now, until I figure out a better way, I wrote a method to take the indexed .obj data, and merge it all together into one interleaved VBO, used as seen above.
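
The gist of that merge step looks something like this (a simplified sketch; the method name and parameters here are just illustrative):


// Simplified sketch of the de-indexing step: expand the per-attribute .obj indices
// into one flat interleaved array (pos x,y,z | tex u,v | normal x,y,z per corner).
// positions/texCoords/normals are the parsed .obj arrays; faceIndices holds the
// v,t,n triplets for every face corner, 1-based as in the .obj file.
float[] buildInterleaved(float[] positions, float[] texCoords, float[] normals, int[] faceIndices) {
    int corners = faceIndices.length / 3;          // one v/t/n triplet per corner
    float[] out = new float[corners * 8];          // 3 + 2 + 3 floats per corner
    int o = 0;
    for (int c = 0; c < corners; c++) {
        int v = faceIndices[c * 3]     - 1;        // .obj indices are 1-based
        int t = faceIndices[c * 3 + 1] - 1;
        int n = faceIndices[c * 3 + 2] - 1;
        out[o++] = positions[v * 3];
        out[o++] = positions[v * 3 + 1];
        out[o++] = positions[v * 3 + 2];
        out[o++] = texCoords[t * 2];
        out[o++] = texCoords[t * 2 + 1];
        out[o++] = normals[n * 3];
        out[o++] = normals[n * 3 + 1];
        out[o++] = normals[n * 3 + 2];
    }
    return out;
}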

It seems to me to be more processor intensive to calculate if a given combination of position/texcoord/normal is already described. At most, I would think a given combination of those 3 attributes would only be used a couple of times. Over hundreds of vertexes, it would take longer to check if one is duplicated than to just use it again, wouldn’t it?

The tutorial you linked does show indexed rendering. Indexed rendering is always preferred for rendering 3D models (or in any other case where each vertex would need to be duplicated if indexed rendering was not used). Each vertex of a 3D model with a smooth surface is usually reused more than 4 times; on a grid constructed with triangles it averages around 5. There is no problem at all with interleaving the attribute data into a single VBO and using an index buffer to reuse vertices. I'd suggest you start with the code for indexed rendering in the tutorial and simply adapt it to interleave the color and position data into a single VBO, and change the glVertexPointer() and glColorPointer() calls to refer to the same VBO but with new strides and offsets. Feel free to ask if you get any problems.
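
For what it's worth, the adapted pointer setup might look roughly like this (assuming 3 position floats followed by 3 color floats per vertex; the buffer handle names are placeholders):


// Hypothetical interleaved layout: x,y,z, r,g,b per vertex = 6 floats, so a 24-byte stride.
glBindBuffer(GL_ARRAY_BUFFER, interleavedHandle);      // one VBO holding both attributes
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexHandle);    // the tutorial's index buffer

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

glVertexPointer(3, GL_FLOAT, 6 * 4, 0);     // positions start at byte 0
glColorPointer(3, GL_FLOAT, 6 * 4, 3 * 4);  // colors start 12 bytes in, same VBO

glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);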

Concerning performance, you’re way overestimating how expensive it is. When I load .obj files I load them like you, but then I run an optimization pass on them which generates an index buffer and removes duplicates, and I do this for models with 60 000 vertices. It only takes a few milliseconds, so it’s completely worth it since it’s only done once at program start but helps increase performance for the lifetime of the program.
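
Such a pass can be as simple as a HashMap lookup per expanded vertex. A rough outline (not the actual code, and assuming the 8-floats-per-vertex interleaved layout discussed earlier):


import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough sketch of an optimization pass: collapse identical interleaved vertices
// into one copy plus an index list (names and structure are illustrative only).
public class MeshIndexer {

    public float[] uniqueVertices;   // de-duplicated interleaved vertex data
    public int[] indices;            // one index per original corner

    public MeshIndexer(float[] interleaved, int floatsPerVertex) {
        Map<String, Integer> seen = new HashMap<String, Integer>();
        List<float[]> unique = new ArrayList<float[]>();
        int cornerCount = interleaved.length / floatsPerVertex;
        indices = new int[cornerCount];

        for (int c = 0; c < cornerCount; c++) {
            float[] v = Arrays.copyOfRange(interleaved, c * floatsPerVertex, (c + 1) * floatsPerVertex);
            String key = Arrays.toString(v);            // crude but workable lookup key
            Integer index = seen.get(key);
            if (index == null) {                        // first time we see this exact vertex
                index = unique.size();
                seen.put(key, index);
                unique.add(v);
            }
            indices[c] = index;
        }

        uniqueVertices = new float[unique.size() * floatsPerVertex];
        for (int i = 0; i < unique.size(); i++) {
            System.arraycopy(unique.get(i), 0, uniqueVertices, i * floatsPerVertex, floatsPerVertex);
        }
    }
}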

Re-use levels like those apply if you are using tessellation to break up a continuous plane. If you consider a simple cube with only 8 positions, each position has 3 different normals, giving 24 total vertices, half of which are used once and the rest twice. Any non-tessellated object will only re-use a vertex if it is used by more than one triangle on a continuous plane, which will happen, but rarely. Once I develop my models to use tessellation, then I will worry about efficiency with indexes.

That said, it doesn’t mean I don’t want to understand how it works. So, let me get this straight if I can:
Scenario:
I have an Interleaved VBO containing a collection of vertex position, color, and normal data in the pattern: VvvCccNnnVvvCccNnnVvvCccNnn
I would generate an index of unique vertex combinations that points to the vertex positions in that VBO. (i.e. 0, 9*4, and 18*4) (*4 as they are floats)
If I set my pointers up right, to have the correct offset (3*4) and stride (9*4), then if I understand this correctly, it will get its color and normal information relative to the indexed vertex:
when Index says (0*4), it will automatically get the color from the 1st color group at position (3*4) and normals at (6*4)
when Index says (9*4), it will automatically get the color from the 2nd color group at position (12*4) and normals at (15*4)
when Index says (18*4), it will automatically get the color from the 3rd color group at position (21*4) and normals at (24*4)

Am I even remotely close to accurate?

[quote]Any non-tassellated object will only re-use any vertex more than once if and only if it is used by more than one triangle on a continuous plane
[/quote]
Well, that totally depends on the topology of your mesh. And I think by “continuous plane” you meant “manifold.”
If you have for example a mesh with a sharp corner with moderate tessellation, then the corner vertex can easily be reused tens of times, once for each face it is connected to. Or think about the poles of your good old GLU sphere. :)

As for your indices: No.
Even with the most complicated vertex layout, your indices would still just be [0, 1, 2] in your example.
The indices are not per vertex attribute, but per whole vertex. They are also not measured in bytes, but in “vertices.”
This is quite clever of OpenGL, because if the indices were byte-offsets, then you could not use multiple VBOs for the same draw call, which you actually can do in OpenGL.
It is the vertex bindings that define, for each vertex binding point (your vertex array index), how the format of a particular vertex attribute is laid out: its datatype, its dimension, its offset and its stride.
The indices then just index into a whole vertex, and are not a byte-index into a particular vertex attribute.

Yes, but it’s very common to have smooth meshes that don’t have sharp corners, in which case you want the normal to smoothly fade over the surface, so sharing normals between triangles is normal as well.

You’re partly right. OpenGL indices are not byte offsets, but simply indices. OpenGL looks for each attribute at byte offset (vertexIndex * stride + offset) where stride and offset are from the gl*****Pointer() calls you made before, so to get the third vertex you simply pass in 2 (0 is the first one).

i was trying to follow this thread but my brain kinda hurts from that …

now, you did drop the idea of converting .obj to a VBO 1-1 due to the different indexing behaviour right ? i mean it’s not working like that, so you’re unwrapping the mesh into a solid strip of triangles even if it looks like a waste of memory ?

your code looks good so far, i think.

assuming the VBO layout is VVVCCCNNN, 3 vec3 elements, 9 floats :

  • stride width is 3+3+3 = 9 floats, *4 = 36 bytes.

  • total VBO size is : stride (36) * number of vertices = x bytes.

  • offset to vertex-position : 0

  • offset to color : 3 float, *4 = 12 bytes.

  • offset to normal : 6 floats, *4 = 24 bytes.



GL15.glBindBuffer( GL15.GL_ARRAY_BUFFER, your_vbo_id );

  GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
  GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
  GL11.glEnableClientState(GL11.GL_NORMAL_ARRAY);

    GL11.glVertexPointer(3,GL11.GL_FLOAT, 36, 0);
    GL11.glColorPointer(3,GL11.GL_FLOAT, 36, 12);
    GL11.glNormalPointer(GL11.GL_FLOAT, 36, 24);

      GL11.glDrawArrays(GL11.GL_POINTS, 0, number_of_vertices); // or GL11.GL_TRIANGLES or whatever

  GL11.glDisableClientState(GL11.GL_NORMAL_ARRAY);
  GL11.glDisableClientState(GL11.GL_COLOR_ARRAY);
  GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);

GL15.glBindBuffer( GL15.GL_ARRAY_BUFFER, 0 );


now you’re trying to mix in a [icode]GL15.GL_ELEMENT_ARRAY_BUFFER[/icode] ?

o/

int stride = (9*4);
int colorOffset = (3*4);

when Index says (0), it will automatically get the color from the 1st color group at position (0 * (9*4) + (3*4)) or (3*4) and normals at (0 * (9*4) + (6*4)) or (6*4)
when Index says (1), it will automatically get the color from the 2nd color group at position (1 * (9*4) + (3*4)) or (12*4) and normals at (1 * (9*4) + (6*4)) or (15*4)
when Index says (2), it will automatically get the color from the 3rd color group at position (2 * (9*4) + (3*4)) or (21*4) and normals at (2 * (9*4) + (6*4)) or (24*4)
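
Tying that together with the VVVCCCNNN layout from the earlier post, the indexed draw might look roughly like this (a sketch; the index buffer handle and the counts are placeholders):


// Hypothetical: the same 36-byte-stride VVVCCCNNN VBO as above, now drawn through an index buffer.
IntBuffer indexData = BufferUtils.createIntBuffer(3);
indexData.put(new int[]{0, 1, 2});   // whole-vertex indices, not byte offsets
indexData.flip();

int ibo_id = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, ibo_id);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indexData, GL15.GL_STATIC_DRAW);

GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, your_vbo_id);
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, ibo_id);

GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
GL11.glEnableClientState(GL11.GL_NORMAL_ARRAY);

GL11.glVertexPointer(3, GL11.GL_FLOAT, 36, 0);
GL11.glColorPointer(3, GL11.GL_FLOAT, 36, 12);
GL11.glNormalPointer(GL11.GL_FLOAT, 36, 24);

// For index 2, OpenGL reads the color at byte 2 * 36 + 12 = 84 (i.e. 21*4), exactly as worked out above.
GL11.glDrawElements(GL11.GL_TRIANGLES, 3, GL11.GL_UNSIGNED_INT, 0);

GL11.glDisableClientState(GL11.GL_NORMAL_ARRAY);
GL11.glDisableClientState(GL11.GL_COLOR_ARRAY);
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);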