Logic behind texturing a VBO cube

Scenario: you have a cube. It’s made of 8 vertices referenced by 24 indices (drawn with GL_QUADS, not GL_TRIANGLES). To texture the cube, you create a texture coordinate buffer. This is where it confuses me:

Texcoords are defined per vertex?

If you look at the front of your cube, its top-left vertex is also the top-right vertex of the face that sits 90º clockwise from it. This means that if texcoords are defined per vertex, every second face’s texture will come out backwards.

It makes more sense to me to define texcoords per index element, rather than per vertex. But a couple of stackoverflow questions had answers saying that’s not possible - and if it is, I can’t work out how to do it.

What am I missing? How would you define texcoords for a VBO cube with 8 vertices, 24 indices, drawn in quads, and with the texture maintaining the same orientation on each face?

Can you post your code to construct your cube?

I don’t see how you can create a cube with 7 vertices. Cubes require 8.

And with the 8 vertices you would define 6 quads… one for each side of the cube. Each quad would have its texcoords (4 vertices) defined.

Oh, sorry. I meant 8 vertices. I’m just so used to looking at them labelled 0-7. (edited the original post)

I don’t actually have code to construct a cube at this point. I have code that draws thousands of them in a minecraft-styled world, with only faces that are exposed being added to the index buffer. I’m just trying to figure out the logic behind texturing them. It was vexing me, but I think I see it now.

http://dl.dropbox.com/u/18809996/cubes.png

This uses a random intensity for the green colour element. But it appears to be applied per physical vertex.

So two cubes next to each other would have a total of 12 vertices and 48 indices - but then I would remove the adjoining face from each cube, leaving 40 indices. I just need to work out how to define texcoords per index element rather than per vertex.

The whole idea of using an index buffer full of indices is that you only need to define each vertex (point in space) once, then reference its position in the vertex buffer with your indices.
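For reference, a minimal sketch of that layout for a single cube (the coordinates and face order are chosen to line up with the buildIndices code posted further down; the array names are only illustrative):

// 8 shared corner positions of a unit cube, 3 floats per vertex. The numbering follows
// the v0..v7 scheme used in buildIndices below: +1 on x is "right", +w is z, +w*d is y.
float[] cubeVertices = {
    0, 0, 0,   // v0
    1, 0, 0,   // v1
    0, 0, 1,   // v2
    1, 0, 1,   // v3
    0, 1, 0,   // v4
    1, 1, 0,   // v5
    0, 1, 1,   // v6
    1, 1, 1    // v7
};

// 6 quads * 4 indices = 24 indices, every one of them pointing back into the
// 8 vertices above, so no position is ever stored twice.
int[] cubeIndices = {
    0, 1, 5, 4,   // front
    1, 3, 7, 5,   // right
    3, 2, 6, 7,   // back
    2, 0, 4, 6,   // left
    2, 3, 1, 0,   // bottom
    4, 5, 7, 6    // top
};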

[EDIT: Code that builds the indices and texcoords is below]

About this code:

  • The draw method is called every frame.
  • buildIndices is called when the current chunk is edited (perhaps once every ten seconds).
  • buildColours is called instead when I’m debugging. When using buildColours, the colours are defined per vertex in physical space.

This is the result:

http://dl.dropbox.com/u/18809996/textured.png

This is the texture being bound:

http://dl.dropbox.com/u/18809996/debug.png

This is the code:



public void draw() {
      GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
      ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, _verticesID);
      GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
      
//      GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
//      ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, _coloursID);
//      GL11.glColorPointer(4, GL11.GL_FLOAT, 0, 0);

      _texture.bind();
      GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
      // Texcoords are per-vertex attribute data, so they belong on the ARRAY_BUFFER
      // target; ELEMENT_ARRAY_BUFFER is reserved for the index data bound below.
      ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, _texcoordsID);
      GL11.glTexCoordPointer(2, GL11.GL_FLOAT, 0, 0);
      // Note: glTexCoordPointer pulls one texcoord per vertex, not per index, so the
      // per-face coordinates written in buildIndices won't line up with shared vertices.
      
      ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ELEMENT_ARRAY_BUFFER_ARB, _indicesID);
      // 'end' must cover the highest vertex index referenced, so use the vertex count
      // rather than the number of indices.
      int maxVertexIndex = (_size.getX() + 1) * (_size.getY() + 1) * (_size.getZ() + 1) - 1;
      GL12.glDrawRangeElements(GL11.GL_QUADS, 0, maxVertexIndex, _indices.limit(),
                        GL11.GL_UNSIGNED_INT, 0);
      
    }
	
	// This adds indices for faces that are exposed (visible)
	public void buildIndices(Block[][][] in)
    {
        int sy = _size.getY() + 1;
        int sx = _size.getX() + 1;
        int sz = _size.getZ() + 1;
        
        // create indices: up to 6 faces * 4 indices = 24 indices per cube
        _indices = BufferUtils.createIntBuffer(sy * sz * sx * 24);
        // texcoords are written as 2 floats per index, so up to 48 floats per cube
        _texcoords = BufferUtils.createFloatBuffer(sy * sz * sx * 48);
        for(int y = 0; y < _size.getY(); y++)
        {
            for(int z = 0; z < _size.getZ(); z++)
            {
                for(int x = 0; x < _size.getX(); x++)
                {
                    if(in[x][y][z].getMaterial() != Material.Air)
                    {
                        int w = sx;
                        int d = sz;
                        int h = sy;

                        int zw = z * w;
                        int ywd = y * w * d;
                        int wd = w * d;


                        int v0 = x + zw + ywd;
                        int v1 = v0 + 1;
                        int v2 = x + w + zw + ywd;
                        int v3 = v2 + 1;
                        int v4 = x + wd + zw + ywd;
                        int v5 = v4 + 1;
                        int v6 = x + w + wd + zw + ywd;
                        int v7 = v6 + 1;

                        float[] tl = new float[] {0.0f, 0.0f}; // Top left of the texture
                        float[] tr = new float[] {1.0f, 0.0f}; // top right
                        float[] bl = new float[] {0.0f, 1.0f}; // bottom left
                        float[] br = new float[] {1.0f, 1.0f}; // bottom right

						// "connectivity" is a construct of bools that refer to whether or 
						// not another block is connected to the face in question.
						
						// true when there is a block next door, which tells us that the 
						// face is not visible, so don't add its indices to the buffer.
						
                        if(!in[x][y][z].getConnectivity().getFaceState(Face.Front))
                        {
                            addIndices(new int[] { v0, v1, v5, v4});
                            addTexcoords(new float[][] {tl, tr, bl, br});
                        }

                        if(!in[x][y][z].getConnectivity().getFaceState(Face.Right))
                        {
                            addIndices(new int[] { v1, v3, v7, v5});
                            addTexcoords(new float[][] {tl, tr, bl, br});
                        }

                        if(!in[x][y][z].getConnectivity().getFaceState(Face.Back))
                        {
                            addIndices(new int[] { v3, v2, v6, v7});
                            addTexcoords(new float[][] {tl, tr, bl, br});
                        }

                        if(!in[x][y][z].getConnectivity().getFaceState(Face.Left))
                        {
                            addIndices(new int[] { v2, v0, v4, v6});
                            addTexcoords(new float[][] {tl, tr, bl, br});
                        }

                        if(!in[x][y][z].getConnectivity().getFaceState(Face.Bottom))
                        {
                            addIndices(new int[] { v2, v3, v1, v0});
                            addTexcoords(new float[][] {tl, tr, bl, br});
                        }

                        if(!in[x][y][z].getConnectivity().getFaceState(Face.Top))
                        {
                            addIndices(new int[] { v4, v5, v7, v6});
                            addTexcoords(new float[][] {tl, tr, bl, br});
                        }
                    }
                }
            }
        }
        
        _indices.flip();
        _texcoords.flip();
        VBOHandler.bufferElementData(_indicesID, _indices);
        VBOHandler.bufferData(_texcoordsID, _texcoords);
    }
	
	
	// I've been using this to debug. It just populates the colour buffer with a random 
	// intensity of green... Per vertex.
	public void buildColours()
    {
        // create colours
        int vl = _vertices.limit() / 3;
        _colours = BufferUtils.createFloatBuffer(vl * 4);
        Random rand = new Random(System.currentTimeMillis());
        for(int x = 0; x < vl; x++)
        {
            
            addColour(new float[] {0.0f, 0.8f + (rand.nextFloat() / 5), 0.0f, 0.0f} );
        }
        
        _colours.flip();
    }

It’s really hard to get texturing working on cubes that share vertices between faces, because you’d need multiple texture coordinates for each point. The easiest way to get texturing working is to stop reusing the vertices and have 24 vertices (each of the 8 corners stored 3 times, once per adjacent face), and you won’t have a problem with it. If you want working normals, sharing vertices is impossible anyway, as you simply need 3 different normals at each corner. If you were to share vertices, you’d end up with a cube that is lit like a sphere! xD
I see you also want to reuse vertices between blocks that are connected to each other. This has the same problem. Consider a dirt block next to a stone block: the vertices on the border between the two blocks would each need 2 texture coordinates, one for dirt and one for stone. I seriously doubt any horrible hack would be able to outperform just duplicating those vertices.
This is basically the same problem as drawing an old SNES-style tile map. Preferably one would want to draw it line by line using quad strips, but that doesn’t really work, due to the same problem you’re having. Now that I think about it, it would be possible to accomplish it by having a mapWidth × mapHeight texture holding the tile index of each tile, and then in a shader first looking up which tile to draw, then generating the tile texture coordinates from that index and the supplied (0 to 1) texture coordinates to do another “dependent” texture lookup. Yeah, doable, but once again, I seriously doubt it would be faster.
Theoretically, you could do the same with a 3D texture for a block. However, all I see in such a case is a shader full of if-statements, a ridiculously huge 3D texture and slideshow-like performance.

In short: you’re overdoing it. Just go with the basic approach with duplicated vertices, and optimize it if needed later (but probably not by sharing vertices =P). “Premature optimization is the root of all evil”, after all, and I’m the biggest hypocrite on Earth for telling you that. xD
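A minimal sketch of what the duplicated-vertex approach looks like per face, assuming separate position and texcoord buffers like the posted code uses (addFace and addFaceIndices are made-up helper names, not from the original):

// Emits one quad face with its own four vertices, so its texcoords never have to be
// shared with any other face. p0..p3 are the corner positions, going around the quad.
// FloatBuffer/IntBuffer come from java.nio (allocated via BufferUtils as elsewhere).
private void addFace(FloatBuffer positions, FloatBuffer texcoords,
                     float[] p0, float[] p1, float[] p2, float[] p3) {
    positions.put(p0).put(p1).put(p2).put(p3);
    // Same orientation on every face: tl, tr, br, bl, matching the corner order above.
    texcoords.put(new float[] {0, 0,   1, 0,   1, 1,   0, 1});
}

// With unique vertices the index data becomes trivial: four consecutive entries per face.
private void addFaceIndices(IntBuffer indices, int firstVertexOfFace) {
    indices.put(firstVertexOfFace)
           .put(firstVertexOfFace + 1)
           .put(firstVertexOfFace + 2)
           .put(firstVertexOfFace + 3);
}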

Thanks for that response. I’ve been doing a lot of googling on the matter since I made my post. I was actually working towards implementing a shader to do it. :slight_smile:

Duplicating vertices like that just seems too inefficient and too simple to implement. :wink: I actually already had that working a week ago - but I was trying to increase performance by eliminating as many vertices as possible.

Just for discussion purposes: wouldn’t duplicating vertices create tessellation issues? Especially once floating-point accuracy is taken into account, with vertices that are generated algorithmically?

[quote]Duplicating vertices like that just seems too inefficient and too simple to implement. :wink: I actually already had that working a week ago - but I was trying to increase performance by eliminating as many vertices as possible.
[/quote]
Reducing vertices might be the wrong approach. In my own (very limited) experience with OpenGL I stumbled across the following:

  • Vertex Arrays vs immediate mode gave me a massive performance increase (I think 10x)
  • 4 bytes for color instead of 3 bytes for color in my vertex arrays gave me a 2x performance increase
  • Calling glBindTexture just once for a spritesheet, instead of once per quad, also gave me a massive performance increase (a small atlas-lookup sketch follows below)

I’m sure there’s a lot more… considering that you’re making cube-world, the number of vertices is not going to be much of a performance factor imho.
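To tie the spritesheet point back to the cube case, here is a rough sketch of computing per-tile texcoords inside a texture atlas, so a single glBindTexture can cover every block type (atlasTexcoords, tileIndex and tilesPerRow are invented names, and a square atlas of equally sized tiles is assumed):

// Returns the four (u, v) corners of one tile in the atlas, in tl, tr, br, bl order,
// so the whole atlas stays bound while different blocks sample different tiles.
private float[] atlasTexcoords(int tileIndex, int tilesPerRow) {
    float tileSize = 1.0f / tilesPerRow;
    float u0 = (tileIndex % tilesPerRow) * tileSize;
    float v0 = (tileIndex / tilesPerRow) * tileSize;
    float u1 = u0 + tileSize;
    float v1 = v0 + tileSize;
    return new float[] {u0, v0,   u1, v0,   u1, v1,   u0, v1};
}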

I agree with Loom_weaver, the number of vertices shouldn’t be a problem. And if what he says is true about the number of color bytes, try to reduce the size of each vertex as much as possible.

  • For a cube world, every coordinate is an integer, so 3 shorts should be enough for the position. If you want a map larger than 65536^2, just use local coordinates relative to the player, as you’re not going to have that long a draw distance anyway.
  • Texture coordinates will always be 0 or 1, so just use bytes for them. If you don’t have any lighting or any other special effect affecting the cube’s color, you shouldn’t even need any color data. That means every vertex only needs 2*3 + 2 bytes = 8 bytes of data. Even if you want color, it’ll only be 11 bytes (12 with padding). (See the packing sketch after this list.)
  • The cube data shouldn’t change very often, so keep it in a vertex buffer object. When a chunk becomes outdated because of a change (added a cube, removed a cube, etc.), mark that chunk as outdated. Note that if a single cube changes, the neighboring chunks may be affected too. Just mark all of these as outdated, and update them when you want to draw them. With frustum culling this can increase performance a lot, as changes far away or behind the player aren’t updated immediately, and that might save you an update if multiple changes happen before you actually look at that chunk.
  • You should definitely draw each chunk with a single draw call. CPU performance should be your main limitation, not GPU performance if you do things right.
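A sketch of how one such packed vertex could be written out (putPackedVertex is an invented helper; it sticks to shorts because the fixed-function glTexCoordPointer only accepts short/int/float/double, so byte-sized texcoords would need generic attributes via glVertexAttribPointer instead):

// One packed vertex: 3 shorts for the chunk-local position (6 bytes) plus 2 shorts
// for the texcoord (4 bytes), padded to a 12-byte stride. The ByteBuffer comes from
// java.nio and would be allocated with BufferUtils.createByteBuffer as elsewhere.
private void putPackedVertex(ByteBuffer buf, int x, int y, int z, int u, int v) {
    buf.putShort((short) x).putShort((short) y).putShort((short) z);
    buf.putShort((short) u).putShort((short) v);   // u and v are only ever 0 or 1
    buf.putShort((short) 0);                       // padding
}

// Matching client-state pointers for the 12-byte stride:
// GL11.glVertexPointer(3, GL11.GL_SHORT, 12, 0);
// GL11.glTexCoordPointer(2, GL11.GL_SHORT, 12, 6);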

Finally: frustum culling saves you insane amounts of CPU and GPU cycles. I’ve got a pretty good idea of how to do this: use a pathfinding-like flood algorithm in 3D. Start at the chunk the player is standing in and check all 6 neighbors of the start chunk using a quick sphere-to-plane distance check (basic frustum culling; I have very easy-to-use code for this). Keep on “flooding” the world, only adding the chunks that pass to the drawing list. You’ll end up with a list of chunks that are inside the view frustum volume. Looping through this list should be very CPU-efficient (just bind the VBO, bind the VAO and call drawElements for each chunk).
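A rough sketch of that flood fill, assuming a hypothetical Chunk type that exposes a bounding-sphere centre/radius and its six neighbours, plus a Frustum helper with a sphere test (none of these names come from the thread):

// Flood outwards from the chunk the player is in, keeping only chunks whose bounding
// sphere passes the frustum test. Uses plain java.util collections.
private List<Chunk> visibleChunks(Chunk start, Frustum frustum) {
    List<Chunk> visible = new ArrayList<Chunk>();
    Set<Chunk> seen = new HashSet<Chunk>();
    Deque<Chunk> queue = new ArrayDeque<Chunk>();
    queue.add(start);
    seen.add(start);
    while (!queue.isEmpty()) {
        Chunk chunk = queue.remove();
        if (!frustum.sphereInFrustum(chunk.getCentre(), chunk.getRadius())) {
            continue;   // outside the view volume, so don't flood through it either
        }
        visible.add(chunk);
        for (Chunk neighbour : chunk.getNeighbours()) {   // up to 6 of them
            if (neighbour != null && seen.add(neighbour)) {
                queue.add(neighbour);
            }
        }
    }
    return visible;   // bind VBO + draw elements for each entry in here
}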

Just realized:

Sharing vertices like that is basically the most awful thing you can do as far as memory access goes. I even ran a test on this long ago but completely forgot about it. If the vertices needed for a cube are spread out all over a vertex buffer, things are going to be slower.

I’m building quads from a grid of vertices. One quad is made of 9 vertices evenly placed with 8 triangles. All of them connect to the middle one (I hope you get the picture). In my first test I just placed all the vertices line by line and then constructed my quads using 24 (3*8) indices. In my second one I placed 9 vertices for each quad (the 4 corners being duplicated 4 times, the 4 edges being duplicated twice, the middle one being unique), and then 24 indices to form the quad.

Shared vertices: 3328200 vertices, 19660800 indices --> 62 FPS
Unique vertices: 7372800 vertices, 19660800 indices --> 81 FPS
Don’t share vertices. It’s not flexible for texture mapping, and it’s slower. It’s as simple as that.

I’ve rewritten most of my world-handling code to use 24 vertices per cube (six faces, four verts each). I get roughly the same framerate now as I had before, which is always good. All I’m wondering now is, what’s the equivalent to this:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

for use with VBOs?

No difference; texture parameters belong to the texture object, so they work exactly the same whether you draw with VBOs or in immediate mode. You could use samplers instead, though. They are a lot more convenient than setting texture state like that, but they limit you to DX10-compatible cards (which I don’t think is a problem).
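A minimal sketch of the sampler route, assuming LWJGL’s GL33 bindings (sampler objects are core in OpenGL 3.3):

// Create a sampler once and put the filter state on it, rather than on each texture.
int sampler = GL33.glGenSamplers();
GL33.glSamplerParameteri(sampler, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL33.glSamplerParameteri(sampler, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);

// At draw time, bind it to the texture unit in use; while bound it overrides the
// texture object's own filter settings.
GL33.glBindSampler(0, sampler);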

Cheers. I worked out what I was doing wrong.

As I fully expected, I just ran into this.

http://dl.dropbox.com/u/18809996/tesselation%20issues.png

Taking particular note of the white and magenta dotted lines on the corners of blocks. :-/

Uh, how are you getting seams between identical vertices? That doesn’t make any sense. Duplicated vertices should obviously end up at identical positions after the matrix multiplications… I have a program doing exactly this and it doesn’t get any seams at all. Am I missing something? >_> And why do you need floating-point positions in the first place? Do you have PI-sized blocks?

Random thoughts:

  • Texture bleeding, maybe because of antialiasing?
  • Blending enabled?
  • Is there any reason the colors are white and magenta?
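If it does turn out to be texture bleeding, one quick thing to try (a sketch only, assuming the debug texture is currently bound to GL_TEXTURE_2D) is clamping the wrap mode so samples right at the 0/1 edges can’t pick up texels from the opposite side of the texture:

// Stop edge samples from wrapping around to the far side of the texture, which shows
// up as thin lines of the wrong colour along block edges; keep nearest filtering.
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);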