In Blender, the texture coordinates seem to be working fine, but I have no idea why it’s rendering like this in my engine. It seems like each edge where I used “mark edge” for the mapping creates a weird effect on the texture. This is what it looks like in my engine:
This is what it looks like in blender:
I’m not sure whether it’s an OpenGL issue or my fault. If you think I’m loading the texture coordinates incorrectly, tell me and I’ll post my code so we can see what I’m doing wrong.
This may be way too obvious, but just to be sure, are you accounting for the fact that OBJ format doesn’t necessarily have one-to-one correspondence for the different vertex attributes? (For example, two vertices can share position data but not texture coordinates.)
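To illustrate with a minimal made-up fragment (positions and texture coordinates only): in these two faces, position index 2 appears with two different texture coordinate indices, so it has to become two separate vertices on the OpenGL side:

f 1/1 2/2 3/3
f 2/4 3/3 4/5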
I don’t know if Collada is better, but OBJ is fine, provided it offers all the features you need. The use of multiple indices is a little inconvenient on the loading side, but it’s not really a problem. You just have to write some bookkeeping code (or use an existing library or something) that rearranges the mesh data so it can be indexed as OpenGL expects it.
To be clear, I don’t actually know that that’s what the problem is here. It’s just a guess. (If your OBJ loading code isn’t too long, you could post/pastebin it here so we can take a look.)
Not very long, actually. The code is a little messy; I don’t have my original code anymore (I made a jar application that takes in the OBJ file and outputs a file containing the arrays of indices, vertices, normals, and texcoords on a single line for easy loading), so this is just the decompiled version. http://pastebin.com/u6LDikV2
Looks a little suspicious. At first glance, at least, it seems that (for example) if a vertex position occurs once with one texture coordinate and a second time with a different texture coordinate, the two vertices will be treated as the same vertex, and the second texture coordinate will overwrite the first. It seems to me this could easily cause the kinds of errors you’re seeing (and in fact it looks quite consistent with them).
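In other words, the loader appears to key everything off the position index alone, roughly like this (a hypothetical one-liner with made-up names, not your actual code):

texCoords[positionIndex] = uvs[uvIndex]; // a later face that reuses positionIndex silently overwrites the earlier UV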
Again, I could be wrong about this, but if you don’t know for sure that the code you posted is correct, it might be worth taking a second look at it. In short, correctly implemented OBJ-loading code needs to be able to handle mixed indices. (If you have any doubt as to whether your model has any mixed indices, just look through the text file for face entries where the indices are not all the same.)
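For example, in this entry every index set uses the same index for all three attributes:

f 1/1/1 2/2/2 3/3/3

whereas this one has mixed indices:

f 5/7/2 6/7/2 7/8/3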
Ok, I rewrote most of my code, but I’m not sure exactly how to order all of the data correctly. I got the vertex array and indices figured out, but how do I order the UV and normal indices correctly? Obviously, OpenGL only has one index buffer, so how do I apply the other index arrays? Here is the new code: http://pastebin.com/gdeCTfh4
It’s been a while since I wrote any OBJ-loading code, but I’ll try to describe how I remember doing it. No guarantees I’ll get it right though.
First you read all the vertex positions, texture coordinates, and normals (or whatever) into separate arrays, just as you’re doing currently. That’s the easy part.
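In Java, that first pass might look something like this (just a sketch; ObjAttributes is a made-up name, and I’m assuming every v/vt/vn line is well-formed):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Holds the raw attribute lists exactly as they appear in the OBJ file.
class ObjAttributes {
    final List<float[]> positions = new ArrayList<>();
    final List<float[]> uvs = new ArrayList<>();
    final List<float[]> normals = new ArrayList<>();

    static ObjAttributes read(String path) throws IOException {
        ObjAttributes attrs = new ObjAttributes();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] t = line.trim().split("\\s+");
                if (t[0].equals("v")) {
                    attrs.positions.add(floats(t, 3));
                } else if (t[0].equals("vt")) {
                    attrs.uvs.add(floats(t, 2));
                } else if (t[0].equals("vn")) {
                    attrs.normals.add(floats(t, 3));
                }
                // "f" lines are handled in a second pass (below).
            }
        }
        return attrs;
    }

    // Parses t[1] .. t[count] as floats.
    private static float[] floats(String[] t, int count) {
        float[] out = new float[count];
        for (int i = 0; i < count; i++) {
            out[i] = Float.parseFloat(t[i + 1]);
        }
        return out;
    }
}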
Next we look at the faces. A typical face might look like this:
f 1/1/1 2/1/1 3/2/1
We’ll call each slash-delimited group of indices an ‘index set’. So in the above, the first index set is 1/1/1, the second is 2/1/1, and the third is 3/2/1.
Instead of thinking of the indices in the sets separately (as you appear to be doing currently in your code), think of each index set as being a unique entity. The idea is that each unique index set in the OBJ file corresponds to a unique single vertex in OpenGL. So in the above, 1/1/1 would end up being vertex 0 for OpenGL, 2/1/1 would be vertex 1, 3/2/1 would be vertex 2, and so on. If at some later point in the file the index set 2/1/1 recurred, that would be OpenGL vertex 1 again.
Here’s some pseudocode showing how you might build the OpenGL data from the OBJ index sets:
var unique_index_sets = []
var opengl_vertices = []
var opengl_indices = []

for each face in OBJ file
    for each index_set in face
        var opengl_index = unique_index_sets.indexOf(index_set)
        if opengl_index == -1
            opengl_vertices.push(new Vertex(
                original_positions[index_set.position],
                original_uvs[index_set.uv],
                original_normals[index_set.normal]
            ))
            opengl_index = unique_index_sets.size
            unique_index_sets.push(index_set)
        end if
        opengl_indices.push(opengl_index)
    end for
end for
Just as a heads up, right before I posted this I realized my pseudocode was wrong and revised it, so you can extrapolate from that that it might be wrong this time as well. I think the general idea is sound though.
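To make that concrete in Java, here’s a sketch of the same idea (IndexSet, Vertex, and MeshBuilder are made-up names; I’ve also swapped the linear indexOf search for a HashMap so lookup stays fast on big meshes, and note the minus-one since OBJ indices are 1-based):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// One slash-delimited group like "2/1/1", stored 0-based.
class IndexSet {
    final int position, uv, normal;

    IndexSet(String objToken) {
        String[] parts = objToken.split("/");
        position = Integer.parseInt(parts[0]) - 1; // OBJ indices are 1-based
        uv = Integer.parseInt(parts[1]) - 1;
        normal = Integer.parseInt(parts[2]) - 1;
    }

    // equals/hashCode let identical index sets find each other in the map.
    @Override public boolean equals(Object o) {
        if (!(o instanceof IndexSet)) return false;
        IndexSet other = (IndexSet) o;
        return position == other.position && uv == other.uv && normal == other.normal;
    }

    @Override public int hashCode() {
        return Objects.hash(position, uv, normal);
    }
}

// One interleaved OpenGL vertex.
class Vertex {
    final float[] position, uv, normal;

    Vertex(float[] position, float[] uv, float[] normal) {
        this.position = position;
        this.uv = uv;
        this.normal = normal;
    }
}

class MeshBuilder {
    final Map<IndexSet, Integer> seen = new HashMap<>();
    final List<Vertex> openglVertices = new ArrayList<>();
    final List<Integer> openglIndices = new ArrayList<>();

    // Call once per index set, in face order. Reuses the existing OpenGL
    // vertex if this exact position/uv/normal combination occurred before.
    void add(IndexSet set, List<float[]> positions, List<float[]> uvs, List<float[]> normals) {
        Integer openglIndex = seen.get(set);
        if (openglIndex == null) {
            openglIndex = openglVertices.size();
            openglVertices.add(new Vertex(
                    positions.get(set.position),
                    uvs.get(set.uv),
                    normals.get(set.normal)));
            seen.put(set, openglIndex);
        }
        openglIndices.add(openglIndex);
    }
}

After the face loop, openglVertices flattens straight into your vertex buffer and openglIndices into the element array buffer.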