[Solved] Applying transformations to vertices

OK so some background…

I’m building a 3D game engine from scratch (NIH syndrome detected). I have managed to get skeletal animation into my engine. The only problem I’m running into at this point is skinning my mesh (getting the vertices to move with the bones).

I have my mesh stored in a VBO and it is drawn using a shader, but I’m moving the joints the “old-fashioned way”: by applying transformations to the modelview matrix through OpenGL calls. I want to keep the engine like this until release, because I only ever have two characters in-game at any one time, so I don’t want to write another shader or add support for these transformations to my current one. (If it’s easier, then I guess I can.)

As I said, I have the bone animation in. I can draw the bones using this recursive function and it looks great:


private int drawJoints(GL2 gl, Joint b, int index)
{
	//Get the transform from the current frame for the current bone
	Transform t = m_animation.m_frames[m_animation.m_curr_frame].m_transforms[index];

	//Push the current matrix onto the stack
	gl.glMatrixMode(GL2.GL_MODELVIEW);
	gl.glPushMatrix();

	//Update the modelview matrix with our current transform
	gl.glTranslatef(t.getTranslate().m_x, t.getTranslate().m_y, t.getTranslate().m_z);
	gl.glScalef(t.getScale().m_x, t.getScale().m_y, t.getScale().m_z);
	gl.glRotatef(t.getRotation().m_z, 0.0f, 0.0f, 1.0f);
	gl.glRotatef(t.getRotation().m_y, 0.0f, 1.0f, 0.0f);
	gl.glRotatef(t.getRotation().m_x, 1.0f, 0.0f, 0.0f);

	//Draw the current joint
	gl.glPointSize(t.getScale().m_x * 400);
	gl.glBegin(GL.GL_POINTS);
		gl.glColor3f(0.0f, 0.5f, 0.5f);
		gl.glVertex3f(0.0f, 0.0f, 0.0f);
	gl.glEnd();

	//Now draw each of the children of this bone
	for(int i = 0; i < b.m_children.length; i++)
		index = drawJoints(gl, m_joints[b.m_children[i]], index + 1);

	//Pop the current matrix off the stack
	gl.glPopMatrix();

	//Return the index of the current bone
	return index;
}

Now I can draw the vertices in their transformed locations (not a change in the actual vertex data; with this technique it just looks like they are moving), but I need to actually apply these transformations to the vertex positions so I can send the new positions to my VBO. Can I just grab the modelview matrix after I perform the transformation on the bone and apply it to each vertex that the bone influences?

Obviously this would be a rigid bind, so a smooth bind will be more complicated, but do you think this will work, or am I overcomplicating it?

Well firstly, the absolute simplest and best way of doing what you have described is (as with most things) to use shaders. You pass in a uniform array of joint matrices and per-vertex joint weight attributes. The shader is actually pretty simple.

But doing it your way is also possible. You can transform the vertices CPU-side and put them in the VBO pre-transformed. I would counsel against building the matrix in OpenGL and retrieving it to do the transformations: if you are going to do things on the CPU, do it all on the CPU. Better “cpu -> gpu” than “cpu -> gpu -> cpu -> gpu”, because in these sorts of situations the “->” (transferring data to, and even more so from, OpenGL) is by far the slowest part, despite the GPU being able to do the processing faster.

Doing the matrix calculations CPU-side also allows the possibility of caching animation matrices, which, depending on animation and model size/detail, can be the better option.

I don’t recognize the library you are using, but certainly LWJGL has its own Vector and Matrix classes which are fully capable of these operations, and I expect others such as LibGDX and JOGL are similar; so doing the calculations yourself isn’t even that complex.
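To make the CPU-side option concrete, here is a minimal sketch of linear-blend skinning with plain float arrays, assuming column-major 4x4 matrices. The names (`skinVertex`, `transformPoint`) are illustrative, not from any particular library:

```java
public class CpuSkinning {
    // Multiply a column-major 4x4 matrix by a point (w = 1), returning x, y, z.
    static float[] transformPoint(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14]
        };
    }

    // Blend up to four joint transforms of one vertex by their weights.
    static float[] skinVertex(float[] pos, float[][] jointMatrices,
                              int[] indices, float[] weights) {
        float[] out = new float[3];
        for (int i = 0; i < 4; i++) {
            if (weights[i] == 0f) continue; // unused influence
            float[] p = transformPoint(jointMatrices[indices[i]], pos[0], pos[1], pos[2]);
            out[0] += p[0] * weights[i];
            out[1] += p[1] * weights[i];
            out[2] += p[2] * weights[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Identity, and a translation of +2 in x (column-major).
        float[] identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        float[] shiftX   = {1,0,0,0, 0,1,0,0, 0,0,1,0, 2,0,0,1};
        float[][] joints = { identity, shiftX };
        // A vertex at (1,0,0) weighted half to each joint lands at (2,0,0).
        float[] p = skinVertex(new float[]{1, 0, 0}, joints,
                               new int[]{0, 1, 0, 0}, new float[]{0.5f, 0.5f, 0, 0});
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // prints 2.0 0.0 0.0
    }
}
```

The output of `skinVertex` is what you would write back into the VBO each frame (a rigid bind is just the special case of one influence with weight 1).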

Well, I am trying to do all the transformations in one go and then use a sub-list update to change the vertices that moved. So it should be cpu -> gpu, and then the next cycle does the same.

I guess I’ll look into doing it on the shader. I’m just having a hard time wrapping my head around sending this info over to the shader and updating the correct vertices.

Thanks for your input.

Well perhaps I can help with that. I’ll post a few pertinent sections of my own skeletal animation code.

The vertex program. Or at least a section of it. Because of my in-app shader build process, I don’t have direct access to the entire shader.


attribute vec4 inPos;
attribute vec4 jointWeights;
attribute ivec4 jointIndices;
uniform mat4 bindShapeMatrix;
uniform mat4[~nJointMatrices~] jointMatrices;
uniform mat4[~nJointMatrices~] invBindMatrices;

void main() {
    vec4 vbsm = inPos * bindShapeMatrix;
	
    outPos = 
        ( ( vbsm * invBindMatrices[jointIndices[0]] * jointMatrices[jointIndices[0]] ) * jointWeights[0] ) +
        ( ( vbsm * invBindMatrices[jointIndices[1]] * jointMatrices[jointIndices[1]] ) * jointWeights[1] ) +
        ( ( vbsm * invBindMatrices[jointIndices[2]] * jointMatrices[jointIndices[2]] ) * jointWeights[2] ) +
        ( ( vbsm * invBindMatrices[jointIndices[3]] * jointMatrices[jointIndices[3]] ) * jointWeights[3] );
}

Where “~nJointMatrices~” should take the value of the maximum number of joints a model of yours has.
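If you don’t have an in-app shader build process of your own, one simple way to expand such a placeholder is a plain string substitution on the shader source before compiling it (the placeholder name is taken from the post above; the method name here is just illustrative):

```java
public class ShaderTemplate {
    // Replace every occurrence of the "~nJointMatrices~" placeholder
    // with the actual maximum joint count before compiling the shader.
    static String expand(String template, int maxJoints) {
        return template.replace("~nJointMatrices~", Integer.toString(maxJoints));
    }

    public static void main(String[] args) {
        String src = "uniform mat4[~nJointMatrices~] jointMatrices;";
        System.out.println(expand(src, 26)); // prints uniform mat4[26] jointMatrices;
    }
}
```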

The onPreRendering() method from my SkeletalRenderMode, which is called before a batch of skeletal renders. (VertexData.vertexAttribPointer() mirrors the OpenGL function’s parameters.)


public void onPreRendering(VertexData vd) {
    shaderProgram.use();
    vd.vertexAttribPointer(0, 3, GL_FLOAT, false, 52, 0); //inPos
    vd.vertexAttribPointer(1, 3, GL_FLOAT, false, 52, 12); //Normal Vector - not shown in shader
    vd.vertexAttribPointer(2, 2, GL_FLOAT, false, 52, 24); //Tex Coords - not shown in shader
    vd.vertexAttribPointer(3, 4, GL_BYTE, false, 52, 32); //jointIndices - 4 single bytes
    vd.vertexAttribPointer(4, 4, GL_FLOAT, false, 52, 36); //jointWeights
}

The onPreDraw() from the same. This is called before each individual skeletal model’s render. (The values of bindShapeMatrixVar, invBindShapeMatrix and jointMatrix in this instance would be “bindShapeMatrix”, “invBindMatrices” and “jointMatrices” - the variable names in the shader.)


public void onPreDraw(Skeleton data) {
    data.uploadSkeleton(shaderProgram.getId(), bindShapeMatrixVar, invBindShapeMatrix, jointMatrix);
}

The uploadSkeleton() method from Skeleton. (matrixBuffer is just a FloatBuffer that is reused for uploading skeletal matrices)


public void uploadSkeleton(int programId, String bsmVar, String ibmVar, String jmVar) {
    bindShapeMatrix.putIn(matrixBuffer);
    ShaderUtils.setUniformMatrix(programId, bsmVar, matrixBuffer);
    for(int i = 0; i < joints.length; i++) {
        joints[i].uploadJoint(programId, ibmVar, jmVar, matrixBuffer);
    }
}

And the uploadJoint() methods from Joint. (the value of indexString is “[index]” where “index” is the flattened index of this joint)


public void uploadJoint(int programId, String ibmVar, String jmVar, Matrix parentWJM, FloatBuffer matrixBuffer) {
    Matrix.times(tempMatrix, parentWJM, jointMatrix);
    tempMatrix.putIn(matrixBuffer);
    ShaderUtils.setUniformMatrix(programId, jmVar + indexString, matrixBuffer);
    invBindMatrix.putIn(matrixBuffer);
    ShaderUtils.setUniformMatrix(programId, ibmVar + indexString, matrixBuffer);
    for(int i = 0; i < children.length; i++) {
        children[i].uploadJoint(programId, ibmVar, jmVar, tempMatrix, matrixBuffer);
    }
}
    
public void uploadJoint(int programId, String ibmVar, String jmVar, FloatBuffer matrixBuffer) {
    uploadJoint(programId, ibmVar, jmVar, new Matrix(), matrixBuffer);
}

There are a couple of caveats to this code. Firstly, any vertex can be influenced by a maximum of four joints. My models are simple so this is fine for me; if you need more, you can move to arrays rather than vec4s. Secondly, the joint matrix arrays have a fixed size. This could be fine, or it could be a waste of space. Variable-length arrays can be achieved by using textures to send the data, or you could be clever and send the matrices of several small models in one go. I’m sure there are more caveats, but it has been a while since I wrote this, so I forget.

My apologies for the somewhat convoluted nature of this code - I have a framework I have to conform to - but I hope it should serve as a concept example.

Well… I guess I’m an idiot. I’ve been staring at this for a couple of days and I don’t understand what’s happening. I took a look at this page http://www.opengl.org/wiki/Skeletal_Animation and it kinda helped me to understand, but I still don’t know what I’m sending to the GPU, outside of the inPos, or how the outPos is calculated…

Anyway, I bought the OpenGL Development Cookbook and I’ll have it by the end of the week. Hopefully that will help; I guess my brain just doesn’t want to process this. Fail :emo:

Of course I start figuring this out in the middle of the night! 8)

edit: So questions on the joints

What is the bindShapeMatrix? Is this the current joint’s default pose (T-pose)?

Am I correct in assuming that the mat4 jointMatrices are the current matrices for each joint? (Not the defaults, but their transforms in the current frame?)

If my assumptions are correct, what are the invBindMatrices? Are those the default pose matrices as well?

Yes. The jointMatrix is the joint’s transformation in the current frame.

So, first off, I don’t entirely understand the maths behind skeletal animation (and I didn’t know what a T-pose was), but I will share what understanding I do have. Essentially, the joint’s transformation is not calculated in object space but in joint space. Joint space is a unique space for each skeleton: the bindShapeMatrix takes the vertex into joint space, where you apply the joint’s transform, and then the invBindMatrix takes the vertex back into object space. Why the invBindMatrix is unique to each joint, I could not tell you.

That is my understanding - incomplete or flawed as it may be. I am ashamed to say that this is one of the (few) things where I succeeded in implementing something and therefore did not question the maths behind it.
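To put that understanding as a formula (this is my reading of the shader earlier in the thread, using its row-vector convention, so take it with the same caveats):

```latex
v' \;=\; \sum_{i=0}^{3} w_i \,\bigl( v \, S \bigr)\, B_{j_i}^{-1}\, M_{j_i}
```

where \(S\) is the bindShapeMatrix, \(j_i\) is the vertex’s \(i\)-th joint index, \(B_{j}^{-1}\) is the invBindMatrix of joint \(j\), \(M_{j}\) is its current-frame jointMatrix, and the weights \(w_i\) sum to 1.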

Again, I hope I have been of some help.

Quew8

Personally I send in a few float buffers and some uniform variables. It doesn’t implement rotation, but you could probably do that easily by using the other free variable in the 4D vertex.
To note: if you have large objects that will translate together (spaceships in my case) it’s very useful, same if you are using it to render a map that doesn’t move too much. If you don’t have this, then you will want to send in a vertex buffer that goes through 10 positions storing (object x, y, z, transformation x, y, z, rotation center x, y, z, rotation value). As I haven’t done this and I’m not an OpenGL expert, I don’t know how efficient this would be, but it’s probably better in the long run than using .translatef or .rotatef.

shader.bind();

// Bind the vertex buffer
GL11.glBindTexture(GL11.GL_TEXTURE_2D, t.id);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vertid);
GL20.glEnableVertexAttribArray(0);
GL20.glEnableVertexAttribArray(1);
GL20.glVertexAttribPointer(0, 2, GL11.GL_FLOAT, false, 0, 0);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, texid);
GL20.glVertexAttribPointer(1, 2, GL11.GL_FLOAT, false, 0, 0);
GL20.glUniform2f(transformid, translation.getx(), translation.gety());
GL20.glUniform1f(scalarid, scalar);
// Draw the textured rectangle
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertcount);

GL20.glDisableVertexAttribArray(0);
GL20.glDisableVertexAttribArray(1);
shader.unbind();

So, for the example, to show that each position has 10 variables per vertex, you can use GL20.glVertexAttribPointer(0, 10, GL11.GL_FLOAT, false, 0, 0); instead of GL20.glVertexAttribPointer(0, 2, GL11.GL_FLOAT, false, 0, 0);. The method I just described (not the code given above) is much more efficient at handling lots of objects, each with unique shapes and transformations (probably more favourable for 3D due to the heavy number of rotations performed), and as an added bonus it’s fairly easy to implement: you simply write the code above with a few different variables, remove a couple of lines, and then build a float-buffer creator that takes these into account.

@quew8: ahhh bummer. OK, so could you help explain what you are sending for the bindShapeMatrix and the invBindMatrices?

Say I had an arm skeleton with shoulder, elbow, and wrist joints, and I wanted to transform a vertex that is only attached to the wrist (1 joint). What would be sent for the bsMatrix and ibMatrix?

FYI: the “T-pose”, as it’s often called, is the default pose of a character: standing straight up with arms stretched out to the sides, hence the T. :wink:

@lcass: Try not to take offense, but that has nothing to do with skinning (the point of this thread). The drawing in my code only deals with drawing joints (individual points), and writing it in immediate mode is far easier than building a vertex array and sending it to a shader, especially since I’m only drawing the joints for testing.

I’m afraid I cannot be much help here. I create my models in Blender and export them as COLLADA (.dae) files. Parsing through them gives you the bindShapeMatrix and invBindMatrices.

(What follows is guess work)
I’ve just looked through the COLLADA files for a couple of my models, and in all of them the bindShapeMatrix was the identity matrix. Now, these are models with very simple rigging (they were more to test my COLLADA parser than anything else), so perhaps the bindShapeMatrix is only necessary for more complex skeletons? But the invBindMatrices were not identity matrices, which leads me to conclude that your original idea, which I shot down in flames, is probably mostly right. So perhaps the invBindMatrix is the combination of the joint’s default transform with the inverse of the bindShapeMatrix, and so when the BSM is the identity matrix, the IBM is just the joint’s default transform.

Pure speculation, but speculation that makes sense to me.

OK, I think I know what’s going on, but one last question before I implement: is the root joint of your skeleton always at the origin in Blender? :point: That is the case from what I’m reading.

In case you don’t know (not trying to be rude), the root is the parent of all the joints: if you move this joint, all the other joints move with it.

So I modified my last post because I’m pretty sure I figured that out. If you’d be so kind, could you take another look at your files and tell me how the invBindMatrices relate to the jointMatrices? Are they the inverse of the parent matrix? If it’s not obvious then no worries; I’m just hoping that COLLADA holds an index into the joints that says which matrix each one is using. I suppose I could check this myself: are COLLADA files legible to us humans?

Anyway, thanks a bunch for helping as much as you have I’ll have to include you in my “special thanks” section.

If anyone else could shed some light on this that would be great too.

I’m afraid it is not obvious. I tried to extract the right numbers from a file to test it out, and that came out as a no, but I am almost certain I didn’t get it right. Sorry. What you’re saying definitely makes a lot of sense in any case.

COLLADA files are XML-based, and I am told that technically makes them “human readable.” But I think that just means text-based rather than pure binary data. It is certainly possible to follow them once you know what they are about, but the data is essentially split up into several great long lists of numbers, and you have to jump to the right index in the list the whole time. The format is also a bit “all over the place” to allow for great flexibility: fine for a computer, a bit confusing for a human. So in short: us humans can read them, but only with experience and then only with difficulty. Frankly, it’s quicker to write a program to read them for you, even if it’s just for one or two models.
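Those “great long lists of numbers” live in elements such as COLLADA’s <float_array>, and pulling one into a float[] is just a split-and-parse; a real parser would of course walk the XML to find the right element first. A minimal sketch (the class and method names are made up for illustration):

```java
import java.util.Arrays;

public class ColladaFloats {
    // Parse a whitespace-separated number list, as found inside
    // elements like COLLADA's <float_array>.
    static float[] parseFloatArray(String text) {
        String[] parts = text.trim().split("\\s+");
        float[] out = new float[parts.length];
        for (int i = 0; i < parts.length; i++) {
            out[i] = Float.parseFloat(parts[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. the 16 numbers of one matrix from a <float_array>
        float[] m = parseFloatArray("1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 1");
        System.out.println(m.length + " values: " + Arrays.toString(m));
    }
}
```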

Hmmm, reading the old specification for COLLADA seems to shed some light on this. My models are simple enough that I should be able to read through and figure it out, especially since I have all the info from my FBX files.

I’ll post back when I figure out how this is related. I think I may even do a tutorial on skinning (if I’m sure I know how it works).

Specifications are the programmer’s holy texts. Well, if your tutorial explains the maths, I’ll sure as hell read it.

Yeah, so I figured this out, theoretically. I still need to implement it, but the inverse matrix is not the parent’s; it is the current joint’s own matrix, taken from the bind pose rather than the current pose. You need it so that when you apply the current matrix it moves the joint and its vertices to the correct positions (keeping their offsets).
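A tiny illustration of that point, under a deliberately simplified assumption: if the joints were pure translations, the joint’s bind-pose world transform would be the accumulated offsets down the chain, and its inverse bind would simply negate them (the numbers are made up):

```java
public class InverseBind {
    // Accumulate translation-only local offsets down a joint chain
    // to get the end joint's bind-pose world translation.
    static float[] worldTranslation(float[][] localOffsets) {
        float[] world = new float[3];
        for (float[] off : localOffsets) {
            world[0] += off[0];
            world[1] += off[1];
            world[2] += off[2];
        }
        return world;
    }

    public static void main(String[] args) {
        // Bind-pose offsets of shoulder -> elbow -> wrist (hypothetical).
        float[] w = worldTranslation(new float[][]{ {0, 5, 0}, {3, 0, 0}, {2, 0, 1} });
        // The wrist's inverse bind just undoes its bind-pose world translation:
        System.out.println("invBind translation: (" + -w[0] + ", " + -w[1] + ", " + -w[2] + ")");
    }
}
```

With rotations and scales in the chain you need a full matrix inverse of the bind-pose world transform, but the idea is the same: take the vertex back to the joint’s local frame before applying the current-frame matrix.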

So if the implementation goes smoothly, I’ll definitely be doing a tut, written and video. The reason being that I haven’t found one that’s good for people who don’t understand the math portion (why it works) or how to build the skeleton manually.

OK, well, I didn’t want to start a new thread so I’ll post here. I finally got around to messing with this again and I ran into a problem…

Nothing moves, no errors, just little boxes sitting there.

I can verify that I am updating the joint matrices every frame and that the corresponding indices and weights are being sent as well. I figure something should at least break, but the model just sits there…

Anyway some code…

Vertex Shader:


in vec3 vert_pos;
in vec4 joint_weights;
in ivec4 joint_indices;

uniform mat4 bind_shape_matrix;
uniform mat4 joint_matrices[26];
uniform mat4 inv_bind_matrices[26];

void main(void)
{
   vec4 vbsm = vec4(vert_pos.x, vert_pos.y, vert_pos.z, 1.0) * bind_shape_matrix;
   
   vec4 pos = 
        ((vbsm * inv_bind_matrices[joint_indices.x] * joint_matrices[joint_indices.x]) * joint_weights.x) +
        ((vbsm * inv_bind_matrices[joint_indices.y] * joint_matrices[joint_indices.y]) * joint_weights.y) +
        ((vbsm * inv_bind_matrices[joint_indices.z] * joint_matrices[joint_indices.z]) * joint_weights.z) +
        ((vbsm * inv_bind_matrices[joint_indices.w] * joint_matrices[joint_indices.w]) * joint_weights.w);
   
   
   gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * pos;
}

Fragment Shader:



out vec4 out_color;

void main(void)
{
   out_color = vec4(1.0, 1.0, 1.0, 1.0);
}

Drawing:

	private void drawMesh(GL2 gl, GFX_Engine engine, int tga_handle)
	{
		gl.glDisable(GL.GL_CULL_FACE);
		gl.glDisable(GL.GL_BLEND);
		engine.m_simple_shader.useShader(gl);
		gl.glActiveTexture(GL2.GL_TEXTURE0 + tga_handle);
		gl.glBindTexture(GL2.GL_TEXTURE_2D, tga_handle);
				
		String bind_shape_string = "bind_shape_matrix";
		String inv_bind_string = "inv_bind_matrices";
		String joint_string = "joint_matrices";
		
		Matrix4 bind_shape = new Matrix4().initIdentity();
		FloatBuffer matrix_buffer = Buffers.newDirectFloatBuffer(bind_shape.toArray().length);
		matrix_buffer.put(bind_shape.toArray());
		matrix_buffer.flip();
		engine.m_simple_shader.setAttributeMatrix4f(gl, bind_shape_string, matrix_buffer);
		
		//Upload all the joints
		for(int i = 0; i < m_bind_pose.length; i++)
		{
			Matrix4 joint = m_animation.m_frames[m_animation.m_curr_frame].m_transforms[i];
			String joint_mat = joint_string + "[" + i + "]";
			matrix_buffer = Buffers.newDirectFloatBuffer(joint.toArray().length);
			matrix_buffer.put(joint.toArray());
			matrix_buffer.flip();
			engine.m_simple_shader.setAttributeMatrix4f(gl, joint_mat, matrix_buffer);

			Matrix4 inv_bind = m_inv_bind_pose[i].m_matrix;
			String inv_joint_mat = inv_bind_string + "[" + i + "]";
			matrix_buffer = Buffers.newDirectFloatBuffer(inv_bind.toArray().length);
			matrix_buffer.put(inv_bind.toArray());
			matrix_buffer.flip();
			engine.m_simple_shader.setAttributeMatrix4f(gl, inv_joint_mat, matrix_buffer);
		}

		engine.m_simple_shader.setAttribute1i(gl, "diffuse_map", tga_handle);

		int vert_pos = gl.glGetAttribLocation(engine.m_simple_shader.m_program_index, "vert_pos");
		int color = gl.glGetAttribLocation(engine.m_simple_shader.m_program_index, "color");//removed from shader
		int tex_coord = gl.glGetAttribLocation(engine.m_simple_shader.m_program_index, "tex_coord");//removed from shader
		int joint_weights = gl.glGetAttribLocation(engine.m_simple_shader.m_program_index, "joint_weights");
		int joint_indices = gl.glGetAttribLocation(engine.m_simple_shader.m_program_index, "joint_indices");

		gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vbo_handle[0]);
		gl.glEnableVertexAttribArray(vert_pos);
		gl.glEnableVertexAttribArray(color);
		gl.glEnableVertexAttribArray(tex_coord);
		gl.glEnableVertexAttribArray(joint_indices);
		gl.glEnableVertexAttribArray(joint_weights);
		gl.glVertexAttribPointer(vert_pos, 3, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, 0);//start
		gl.glVertexAttribPointer(color, 3, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, Buffers.SIZEOF_FLOAT * 3);//removed from shader
		gl.glVertexAttribPointer(tex_coord, 2, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, Buffers.SIZEOF_FLOAT * 6);//removed from shader
		gl.glVertexAttribPointer(joint_indices, 4, GL.GL_BYTE, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, Buffers.SIZEOF_FLOAT * 8);//32
		gl.glVertexAttribPointer(joint_weights, 4, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, Buffers.SIZEOF_FLOAT * 12);//48
		
		gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, m_ind_handle[0]);
		gl.glDrawElements(GL.GL_TRIANGLES, m_num_indices, GL.GL_UNSIGNED_INT, 0);
		
		gl.glDisableVertexAttribArray(vert_pos);
		gl.glDisableVertexAttribArray(color);
		gl.glDisableVertexAttribArray(tex_coord);
		gl.glDisableVertexAttribArray(joint_weights);
		gl.glDisableVertexAttribArray(joint_indices);
		
        gl.glUseProgram(0);
	}

And finally here’s how I built my interleaved array just in case:

private float[] createInterleavedArray()
	{
		float[] interleaved_array = new float[m_vertices.length * Vertex.SIZE];
		
		for(int i = 0, j = 0; i < interleaved_array.length; i+=Vertex.SIZE, j++)
		{
			interleaved_array[i+0] = m_vertices[j].m_position.m_x;
			interleaved_array[i+1] = m_vertices[j].m_position.m_y;
			interleaved_array[i+2] = m_vertices[j].m_position.m_z;
			interleaved_array[i+3] = m_vertices[j].m_normal.m_x;
			interleaved_array[i+4] = m_vertices[j].m_normal.m_y;
			interleaved_array[i+5] = m_vertices[j].m_normal.m_z;
			interleaved_array[i+6] = m_vertices[j].m_uv.m_x;
			interleaved_array[i+7] = m_vertices[j].m_uv.m_y;
			interleaved_array[i+8] = m_vertices[j].m_influences[0].m_bone_index;
			interleaved_array[i+9] = m_vertices[j].m_influences[1].m_bone_index;
			interleaved_array[i+10] = m_vertices[j].m_influences[2].m_bone_index;
			interleaved_array[i+11] = m_vertices[j].m_influences[3].m_bone_index;
			interleaved_array[i+12] = m_vertices[j].m_influences[0].m_bone_weight;
			interleaved_array[i+13] = m_vertices[j].m_influences[1].m_bone_weight;
			interleaved_array[i+14] = m_vertices[j].m_influences[2].m_bone_weight;
			interleaved_array[i+15] = m_vertices[j].m_influences[3].m_bone_weight;
		}
		return interleaved_array;
	}
The one thing I could see was these two lines:


gl.glVertexAttribPointer(joint_indices, 4, GL.GL_BYTE, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, Buffers.SIZEOF_FLOAT * 8);//32
gl.glVertexAttribPointer(joint_weights, 4, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * Vertex.SIZE, Buffers.SIZEOF_FLOAT * 12);//48

So when you create the buffer, you store each joint index as a float. But then you interpret the indices as bytes in your pointer. That can’t be good.

Also, a byte is obviously one byte long and you have 4 of them for the indices. However, you have the offset of joint_weights (and I would guess the stride as well) set as if they were 4 floats (16 bytes).

But to be honest I would have thought that these issues would have caused more problems. But correct them and we’ll see what happens.
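Walking the interleaved layout byte by byte makes the mismatch obvious. Assuming the joint indices really are stored as 4 single bytes (as the GL_BYTE pointer call says), the offsets work out like this:

```java
public class StrideMath {
    // Byte offsets of each attribute in the interleaved vertex layout.
    static final int POS     = 0;              // 3 floats = 12 bytes
    static final int NORMAL  = POS + 3 * 4;    // 12
    static final int UV      = NORMAL + 3 * 4; // 24
    static final int INDICES = UV + 2 * 4;     // 32 - joint indices start here
    static final int WEIGHTS = INDICES + 4;    // 36 - 4 one-byte indices, not 4 floats
    static final int STRIDE  = WEIGHTS + 4 * 4;// 52 bytes per vertex

    public static void main(String[] args) {
        System.out.println(INDICES + " " + WEIGHTS + " " + STRIDE); // prints 32 36 52
    }
}
```

So the indices belong at offset 32 and the weights at offset 36, with a 52-byte stride, rather than the float-sized offsets the original pointer calls used.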

Oh, duh…

OK, so is there any way I can keep the indices in the float buffer object I have, so I don’t need to bind two buffers? I know it would probably look pretty hacky, but I’d like to avoid doing this:

gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vbo_handle[0]);
gl.glEnableVertexAttribArray(vert_pos);
gl.glEnableVertexAttribArray(color);
gl.glEnableVertexAttribArray(tex_coord);
gl.glEnableVertexAttribArray(joint_weights);
gl.glVertexAttribPointer(vert_pos, 3, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * (Vertex.SIZE - 4), 0);//start
gl.glVertexAttribPointer(color, 3, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * (Vertex.SIZE - 4), Buffers.SIZEOF_FLOAT * 3);//12
gl.glVertexAttribPointer(tex_coord, 2, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * (Vertex.SIZE - 4), Buffers.SIZEOF_FLOAT * 6);//24
gl.glVertexAttribPointer(joint_weights, 4, GL.GL_FLOAT, false, Buffers.SIZEOF_FLOAT * (Vertex.SIZE - 4), Buffers.SIZEOF_FLOAT * 8);//32

gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_bone_ind_handle[0]);
gl.glEnableVertexAttribArray(joint_indices);
gl.glVertexAttribPointer(joint_indices, 4, GL.GL_FLOAT, false, Buffers.SIZEOF_INT * 4, 0);

Not in the same FloatBuffer object, but you can put everything in one ByteBuffer.

So your buffer-creation code will look like this:


private ByteBuffer createInterleavedArray() {
    ByteBuffer bb = BufferUtils.createByteBuffer(m_vertices.length * 52);
    for(int j = 0; j < m_vertices.length; j++) {
         bb.putFloat(m_vertices[j].m_position.m_x);
         bb.putFloat(m_vertices[j].m_position.m_y);
         bb.putFloat(m_vertices[j].m_position.m_z);
         bb.putFloat(m_vertices[j].m_normal.m_x);
         bb.putFloat(m_vertices[j].m_normal.m_y);
         bb.putFloat(m_vertices[j].m_normal.m_z);
         bb.putFloat(m_vertices[j].m_uv.m_x);
         bb.putFloat(m_vertices[j].m_uv.m_y);
         bb.put((byte) m_vertices[j].m_influences[0].m_bone_index); //cast in case the index is stored wider than a byte
         bb.put((byte) m_vertices[j].m_influences[1].m_bone_index);
         bb.put((byte) m_vertices[j].m_influences[2].m_bone_index);
         bb.put((byte) m_vertices[j].m_influences[3].m_bone_index);
         bb.putFloat(m_vertices[j].m_influences[0].m_bone_weight);
         bb.putFloat(m_vertices[j].m_influences[1].m_bone_weight);
         bb.putFloat(m_vertices[j].m_influences[2].m_bone_weight);
         bb.putFloat(m_vertices[j].m_influences[3].m_bone_weight);
    }
    bb.flip();//Or whatever
    return bb;
}

and then setting up your pointers.


glVertexAttribPointer(vert_pos, 3, GL_FLOAT, false, 52, 0); //3 * 4 = 12 bytes
glVertexAttribPointer(color, 3, GL_FLOAT, false, 52, 12); //3 * 4 = 12 bytes
glVertexAttribPointer(tex_coord, 2, GL_FLOAT, false, 52, 24); //2 * 4 = 8 bytes
glVertexAttribPointer(joint_indices, 4, GL_BYTE, false, 52, 32); //4 * 1 = 4 bytes
glVertexAttribPointer(joint_weights, 4, GL_FLOAT, false, 52, 36); //4 * 4 = 16 bytes

So you see ByteBuffers are great because you can mix and match data however you want.
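One caveat worth flagging here (this applies to plain java.nio, not to LWJGL’s BufferUtils, which already handles it): a direct ByteBuffer allocated with allocateDirect defaults to big-endian order, so it must be switched to native byte order before OpenGL can read the floats correctly:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NativeOrder {
    public static void main(String[] args) {
        // allocateDirect defaults to BIG_ENDIAN; switch to the platform's
        // native order so the GPU reads the same bytes we wrote.
        ByteBuffer bb = ByteBuffer.allocateDirect(52 * 4)
                                  .order(ByteOrder.nativeOrder());
        bb.putFloat(1.0f); // floats now land in the order the driver expects
        bb.flip();
        System.out.println(bb.order() == ByteOrder.nativeOrder()); // prints true
    }
}
```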