So let me start with my question…
What is the most efficient way to work with normals when rendering a model? More specifically, how do you work with normals while using glDrawElements? If the normals are indexed WITH the vertices of the model, the normals will always come out wrong: a corner of a cube, for instance, is a single vertex that needs three different normals depending on which face of the cube it is currently being rendered with, but there is only ever one index referencing that vertex, so you always miss out on the other two normals that the vertex needs to render correctly…
Let me try to explain better…
For the sake of simplicity I am not including any lighting or coloring code; the only thing I am interested in right now is the normals. I will draw a cube:
A cube is composed of 8 different points.
public final float cube[] = {
    -0.5f, -0.5f, -0.5f,   // v0
    -0.5f, -0.5f,  0.5f,   // v1
    -0.5f,  0.5f, -0.5f,   // v2
    -0.5f,  0.5f,  0.5f,   // v3
     0.5f, -0.5f, -0.5f,   // v4
     0.5f, -0.5f,  0.5f,   // v5
     0.5f,  0.5f, -0.5f,   // v6
     0.5f,  0.5f,  0.5f    // v7
};
The cube can then be broken up into triangles and indexed with an integer array.
public final int indices[] = {
    1, 3, 0,   // -x face, triangle 1...
    0, 3, 2,
    5, 1, 4,   // -y face
    4, 1, 0,
    7, 5, 6,   // +x face
    6, 5, 4,
    3, 7, 2,   // +y face
    2, 7, 6,
    5, 7, 1,   // +z face
    1, 7, 3,
    6, 4, 2,   // -z face
    2, 4, 0
};
Here is the normal array for the cube that I have worked out ahead of time.
public final float normals[] = {
-1.0f,0.0f,0.0f, -1.0f,0.0f,0.0f, -1.0f,0.0f,0.0f, -1.0f,0.0f,0.0f, -1.0f,0.0f,0.0f, -1.0f,0.0f,0.0f, //v1 v3 v0 v0 v3 v2
0.0f,-1.0f,0.0f, 0.0f,-1.0f,0.0f, 0.0f,-1.0f,0.0f, 0.0f,-1.0f,0.0f, 0.0f,-1.0f,0.0f, 0.0f,-1.0f,0.0f, //v5 v1 v4 v4 v1 v0
1.0f,0.0f,0.0f, 1.0f,0.0f,0.0f, 1.0f,0.0f,0.0f, 1.0f,0.0f,0.0f, 1.0f,0.0f,0.0f, 1.0f,0.0f,0.0f, //v7 v5 v6 v6 v5 v4
0.0f,1.0f,0.0f, 0.0f,1.0f,0.0f, 0.0f,1.0f,0.0f, 0.0f,1.0f,0.0f, 0.0f,1.0f,0.0f, 0.0f,1.0f,0.0f, //v3 v7 v2 v2 v7 v6
0.0f,0.0f,1.0f, 0.0f,0.0f,1.0f, 0.0f,0.0f,1.0f, 0.0f,0.0f,1.0f, 0.0f,0.0f,1.0f, 0.0f,0.0f,1.0f, //v5 v7 v1 v1 v7 v3
0.0f,0.0f,-1.0f, 0.0f,0.0f,-1.0f, 0.0f,0.0f,-1.0f, 0.0f,0.0f,-1.0f, 0.0f,0.0f,-1.0f, 0.0f,0.0f,-1.0f //v6 v4 v2 v2 v4 v0
};
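(Side note: I worked those normals out by hand, but each face normal could just as well be computed from one of its triangles with a cross product. A quick sketch of the helper I would use, name and all my own:)

// Unit face normal from one triangle: normalized cross product of two edges.
// a, b, c are the triangle's corners as float[3] arrays, in the winding order
// used above; e.g. the first triangle (v1, v3, v0) gives (-1, 0, 0).
public static float[] faceNormal(float[] a, float[] b, float[] c) {
    float ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
    float vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2];
    float nx = uy * vz - uz * vy;
    float ny = uz * vx - ux * vz;
    float nz = ux * vy - uy * vx;
    float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
    return new float[] { nx / len, ny / len, nz / len };
}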
Drawing this data in immediate mode is simple enough; you can jump through the arrays like so…
gl.glBegin(GL.GL_TRIANGLES);
for (int i = 0; i < indices.length; i++) {
    // one normal per corner drawn: the loop counter indexes the normal array directly...
    gl.glNormal3f(normals[(i * 3)], normals[(i * 3) + 1], normals[(i * 3) + 2]);
    // ...while the position is looked up through the indices array
    gl.glVertex3f(cube[(indices[i] * 3)],
                  cube[(indices[i] * 3) + 1],
                  cube[(indices[i] * 3) + 2]);
}
gl.glEnd();
gl.glFlush();
The previous code uses a float array of 24 values (8 vertices) for the cube, an int array of 36 indices, and a float array of 108 values (36 normals) for the normals. That is because a normal is specified for every one of the 36 corners that gets drawn, not for every unique vertex. This draws a rather good looking cube when lighting and such are enabled.
My problem is that I cannot reproduce this using glVertexPointer and glNormalPointer…
I can render without normals easily like this…
// vertexBuffer is a java.nio.FloatBuffer and indicesBuffer is a java.nio.IntBuffer
gl.glEnableClientState( GL.GL_VERTEX_ARRAY );
gl.glVertexPointer( 3 , GL.GL_FLOAT , 0 , vertexBuffer );
gl.glDrawElements( GL.GL_TRIANGLES , 36 , GL.GL_UNSIGNED_INT, indicesBuffer );
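(For reference, this is how I fill those buffers from the plain arrays, using only java.nio; the helper names are just my own:)

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

// Wrap a plain array in a direct, native-order NIO buffer so the
// gl*Pointer calls can read it (4 = bytes per float / int).
public static FloatBuffer toFloatBuffer(float[] data) {
    FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                               .order(ByteOrder.nativeOrder())
                               .asFloatBuffer();
    fb.put(data);
    fb.rewind();
    return fb;
}

public static IntBuffer toIntBuffer(int[] data) {
    IntBuffer ib = ByteBuffer.allocateDirect(data.length * 4)
                             .order(ByteOrder.nativeOrder())
                             .asIntBuffer();
    ib.put(data);
    ib.rewind();
    return ib;
}

// FloatBuffer vertexBuffer  = toFloatBuffer(cube);
// FloatBuffer normalBuffer  = toFloatBuffer(normals);
// IntBuffer   indicesBuffer = toIntBuffer(indices);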
So now, if we would like to enable lighting and still have a good-looking model, we need to supply the normals.
But there is a problem. When you render like this…
gl.glEnableClientState( GL.GL_VERTEX_ARRAY );
gl.glEnableClientState(GL.GL_NORMAL_ARRAY );
gl.glVertexPointer( 3 , GL.GL_FLOAT , 0 , vertexBuffer );
gl.glNormalPointer( GL.GL_FLOAT , 0 , normalBuffer );
gl.glDrawElements( GL.GL_TRIANGLES , 36 , GL.GL_UNSIGNED_INT, indicesBuffer );
The model looks wrong… the normals and lighting are clearly being applied incorrectly. Does OpenGL use the index supplied to glDrawElements to jump through the normal array as well? If so, the first few normals will be correct, but whenever a shared vertex is reused on another side of the cube it will be given the same normal it had originally… which is wrong… (I've sketched out my mental model of this at the bottom of the post.) The only way I found around this was to write the vertices out in the order the indices visit them, i.e.
public final float cube[] = {
-0.5f,-0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, //v1 v3 v0 v0 v3 v2 (from the indices)
0.5f,-0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, //v5 v1 v4 v4 v1 v0
0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, //v7 v5 v6 v6 v5 v4
-0.5f,0.5f,0.5f, 0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, //v3 v7 v2 v2 v7 v6
0.5f,-0.5f,0.5f, 0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, //v5 v7 v1 v1 v7 v3
0.5f,0.5f,-0.5f, 0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f //v6 v4 v2 v2 v4 v0
};
That removes the need for the indices array, and now I render the cube with a call to glDrawArrays() (which, unlike glDrawElements, plows straight through the vertex array instead of hopping around). When I do that the cube renders correctly, but it more than doubles the amount of data used by the program, growing the vertex array from 8 vertices (24 floats) to 36 vertices (108 floats)… Also, how are you supposed to use glDrawElements with normals at all?
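For completeness, this is roughly the call I use with the expanded arrays (expandedVertexBuffer is just my name for the 108-float array above wrapped in a FloatBuffer):

gl.glEnableClientState( GL.GL_VERTEX_ARRAY );
gl.glEnableClientState( GL.GL_NORMAL_ARRAY );
gl.glVertexPointer( 3 , GL.GL_FLOAT , 0 , expandedVertexBuffer ); // 36 de-indexed vertices
gl.glNormalPointer( GL.GL_FLOAT , 0 , normalBuffer );             // one normal per vertex drawn
gl.glDrawArrays( GL.GL_TRIANGLES , 0 , 36 );                      // no index array at all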
It seems as though I may be missing something very fundamental… I can post images and try to explain better, but it seems to me that glDrawElements was not set up to work properly with normals.
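To be concrete, here is my mental model of what glDrawElements seems to do when both arrays are enabled, written out as the equivalent immediate-mode loop (purely my illustration, not real driver code):

gl.glBegin(GL.GL_TRIANGLES);
for (int n = 0; n < indices.length; n++) {
    int i = indices[n]; // the SAME index selects from every enabled array...
    gl.glNormal3f(normals[i * 3], normals[(i * 3) + 1], normals[(i * 3) + 2]); // ...so only the first 8 normals are ever read
    gl.glVertex3f(cube[i * 3], cube[(i * 3) + 1], cube[(i * 3) + 2]);
}
gl.glEnd();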
Any help or advice is greatly appreciated
Thanks Jon