OBJ Multi-Vertex Attribute Indices

Hi,

I’ve been working on Wavefront OBJ model loading, and I’ve got a basic working API going. But my problem is that the OBJ specification allows a face to reference a separate index for each vertex attribute. By vertex attributes I mean positions, normals, and texture coordinates. Like so:

v x y z
vn x y z
vt u v
usemtl ...
f v1/t1/n1 v2/t2/n2 v3/t3/n3

But my problem is that in OpenGL you can only specify a single index per vertex, which selects the position, normal, and texture coordinate together. So if the OBJ file had something like this (which is what Blender actually exports):

f v1/t4/n32 v8/t39/n11 v90/t34/n12

I would have to create a special vertex for that specific face corner and do some weird tracking to draw it correctly with OpenGL indices…

Does anyone have a solution to this? OBJ loaders are quite popular here…

You could take a look at LibGDX’s .obj loader. https://github.com/libgdx/libgdx/blob/master/gdx/src/com/badlogic/gdx/graphics/g3d/loader/ObjLoader.java

It seems pretty complete.

the spec allows [icode]f v/vt/vn …[/icode]

but also [icode]f v//vn …[/icode] and [icode]f v/vt …[/icode]

that means: split the whole line by spaces, start looping at index 1 (or at the first non-empty part which is not ‘f’), split each part by slash and test the length:

  • if the length is 2, you have the v/vt format
  • if the length is 3, you have the v/vt/vn format …

then loop over the slash-split parts and test whether an element is empty: if element 1 is empty, you have the v//vn format.
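The splitting logic above can be sketched in Java (class and method names are my own; the token layouts are the ones from the spec):

```java
public class FaceParser {
    /** Returns {v, vt, vn} indices, 1-based as in the .obj file, 0 = absent. */
    public static int[] parseFaceToken(String token) {
        // limit -1 keeps empty strings, so "8//11" splits into ["8", "", "11"]
        String[] parts = token.split("/", -1);
        int v  = Integer.parseInt(parts[0]);
        int vt = 0, vn = 0;
        if (parts.length == 2) {
            vt = Integer.parseInt(parts[1]);                          // v/vt
        } else if (parts.length == 3) {
            if (!parts[1].isEmpty()) vt = Integer.parseInt(parts[1]); // v/vt/vn
            vn = Integer.parseInt(parts[2]);                          // also covers v//vn
        }
        return new int[] { v, vt, vn };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parseFaceToken("8/39/11"))); // [8, 39, 11]
        System.out.println(java.util.Arrays.toString(parseFaceToken("8//11")));   // [8, 0, 11]
        System.out.println(java.util.Arrays.toString(parseFaceToken("8/39")));    // [8, 39, 0]
    }
}
```

Note the `-1` limit on `split`: without it Java drops trailing empty strings, which would break the empty-element test.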

eh … about the elements: OpenGL has drawElements, which is useful for unique vertices, but it supports only a single index per vertex. the unique normals and tex-coords are not directly usable with that. you’d have to copy them.

What about 3 texture coordinates? That’s allowed too… xD
Thanks for the tip though.

… Copy?

Wow, yeah… That’s pretty complete. 500 lines :o I’ll look it over, and post here if I have any more questions.

you mean, tex-coords like x and xy and xyz and probably xyzw ?

i mean: when you set up an array-buffer (vertices), you can just copy all unique vertices as provided by the .obj file. that’s pretty good. then you continue with the element-array-buffer (indices), and again you can just reuse all the face definitions (f v//…) as they come in. but everything else (normals, tex-coords) is unique per the format, and there is no way (afaik) to pass a second element-array-buffer which points into an array-buffer of normals.

so by copy i mean: for each attribute (normals, tex-coords) you need to create an array-buffer of the same size as the vertex-buffer (regardless of how many unique attribute values you have) and copy the attribute values into the same position (not byte position, but array index) as the face vertex. even if you have a flat square made of lots of vertices/faces, you only have one normal, which needs to be copied into the normal-buffer for every freaking vertex. which is a waste of memory. sorry, i’m not too good at teaching ;).
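That copy step might look roughly like this in Java, with made-up toy data (it assumes each position always pairs with the same normal, which is exactly what breaks down when a position is shared between faces with different normals):

```java
import java.util.Arrays;

public class AttributeCopy {
    /**
     * Builds a normal array parallel to the position array: for each face
     * corner, the referenced normal is copied to the slot of the referenced
     * position, so one index stream can drive both attributes.
     * Indices are 0-based here for simplicity (.obj uses 1-based).
     */
    public static float[] expandNormals(float[] normals, int[] posIndex,
                                        int[] nrmIndex, int posCount) {
        float[] out = new float[posCount * 3];
        for (int i = 0; i < posIndex.length; i++) {
            int p = posIndex[i], n = nrmIndex[i];
            out[p * 3]     = normals[n * 3];
            out[p * 3 + 1] = normals[n * 3 + 1];
            out[p * 3 + 2] = normals[n * 3 + 2];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] normals = { 0, 1, 0 };            // one unique "up" normal
        int[] posIndex  = { 0, 1, 2, 0, 2, 3 };   // flat square: two triangles
        int[] nrmIndex  = { 0, 0, 0, 0, 0, 0 };   // same normal everywhere
        // the single normal gets duplicated once per vertex position:
        System.out.println(Arrays.toString(
                expandNormals(normals, posIndex, nrmIndex, 4)));
    }
}
```

This is the memory waste basil describes: the flat square has one unique normal, but the output buffer stores it four times, once per position.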

you need to gather all values first, pack them nicely into “face” objects, get organised, and then build a couple of VBOs which work with the draw method of your choice. a 1-to-1 translation from .obj (probably from any format) to OpenGL is not really possible.

i found this a while ago : https://github.com/g-truc/ogl-samples/blob/master/tests/gl-430-draw-without-vertex-attrib.cpp

it’s a very interesting approach. they do not use any vertex attributes at all; they just toss the whole mesh, all unique vertices, normals, coords, whatnot (faces, edges), into a huge SSBO, then use a custom shader that reads from the SSBO and builds the attributes on the fly. no VBO used at all.

pretty much an extension of drawElements by adding more element arrays. i don’t think drawing that way is faster, but for sure it’s one of the most memory-saving ways. still gotta try it out tho’ :slight_smile:


I’m pretty sure he means multiple sets of 2D texture coordinates. It can be useful if you want to read from shared textures or light maps in addition to normal textures. I don’t think .obj supports this. Light map coordinates are usually generated by your own engine, so you can store them in your own format.


I think what he’s trying to say is that there’s no way to directly draw vertices with OpenGL the way .obj files store them. You can’t just dump your lists of vertex positions, vertex normals and vertex tex-coords into one VBO each and then have one index buffer per attribute. OpenGL only supports a single index buffer, so when creating your VBO you essentially need to store every unique combination of position+normal+texcoord that appears in the .obj file. If all three are identical you can store the combination once and just index it, but if even only the normal differs, for example, you’d still need to store the position and tex-coords again.
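A rough sketch of that deduplication in Java, using the raw face token as the map key (everything here is illustrative; a real loader would pack the actual floats instead of strings):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VertexDedup {
    /** Every distinct v/vt/vn combination becomes one OpenGL vertex, reused via a map. */
    public static List<Integer> dedup(String[] faceTokens, List<String> verticesOut) {
        Map<String, Integer> seen = new HashMap<>();
        List<Integer> indices = new ArrayList<>();
        for (String token : faceTokens) {
            Integer idx = seen.get(token);
            if (idx == null) {                 // first time we see this combination:
                idx = verticesOut.size();      // append it as a brand-new vertex...
                verticesOut.add(token);
                seen.put(token, idx);          // ...and remember its index
            }
            indices.add(idx);                  // repeats just reuse the existing index
        }
        return indices;
    }

    public static void main(String[] args) {
        List<String> vertices = new ArrayList<>();
        String[] tokens = { "1/4/32", "8/39/11", "90/34/12", "1/4/32" }; // last repeats the first
        List<Integer> indices = dedup(tokens, vertices);
        System.out.println(vertices); // [1/4/32, 8/39/11, 90/34/12]
        System.out.println(indices);  // [0, 1, 2, 0]
    }
}
```

Only fully identical combinations get merged; as noted above, a corner that differs in just its normal still becomes a new vertex.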

Exactly! Is there an easy technique to fix this? I’m kinda pulling my hair out…

EDIT: I’ve been thinking of passing a very large vertex/normal/texcoord array into a vertex shader and doing the lookup there, but I’m not sure how that’d be performance-wise.

You can store your data in textures and sample it manually in your vertex shader, but it’ll be a lot slower. You’d lose vertex caching, and the overhead of the texture fetches will most likely be noticeable. I doubt you have enough data to make it worth the memory savings. Just duplicate them.

EDIT:
It’ll almost certainly not be worth it. Even with hundreds of thousands or even millions of vertices, we’re talking about memory usage of tens of megabytes at most in the first place, and with that number of vertices you’d need 32-bit indices for each attribute as well, which means that each “vertex” is essentially 4*3 bytes of index data alone, which is pretty huge. Normals (and sometimes tex-coords) can be stored as shorts, which means that they use fairly little memory. With some clever packing, a normal can be stored as 2x16-bit shorts plus a sign bit to reconstruct Z, while texture coordinates often don’t need 32-bit float precision, so they can also be stored as (unsigned) shorts. This means that both normals and texture coordinates only use 32 bits of memory in the first place, so replacing them with 32-bit indices plus a massive table of data would actually use even more memory.
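The 2x16-bit packing mentioned above could look roughly like this (my own bit layout, purely illustrative: nx as a 16-bit snorm, ny as a 15-bit snorm, and one bit for the sign of Z, rebuilt as sign * sqrt(1 - nx² - ny²); assumes the input normal is unit length):

```java
public class NormalPack {
    /** Packs a unit normal into 32 bits: [zSign:1][ny:15][nx:16]. */
    public static int pack(float nx, float ny, float nz) {
        int xi = Math.round(nx * 32767);   // 16 signed bits
        int yi = Math.round(ny * 16383);   // 15 signed bits
        int zSign = nz < 0 ? 1 : 0;        // 1 bit: sign of reconstructed z
        return (zSign << 31) | ((yi & 0x7FFF) << 16) | (xi & 0xFFFF);
    }

    /** Unpacks to {nx, ny, nz}, reconstructing z from the unit-length constraint. */
    public static float[] unpack(int packed) {
        float nx = (short) (packed & 0xFFFF) / 32767f;
        int yi = (packed >>> 16) & 0x7FFF;
        if ((yi & 0x4000) != 0) yi -= 0x8000;  // sign-extend the 15-bit value
        float ny = yi / 16383f;
        float nz = (float) Math.sqrt(Math.max(0f, 1f - nx * nx - ny * ny));
        if ((packed >>> 31) != 0) nz = -nz;
        return new float[] { nx, ny, nz };
    }

    public static void main(String[] args) {
        // round-trips a unit normal through the 32-bit packed form
        float[] n = unpack(pack(0.6f, 0.0f, -0.8f));
        System.out.println(java.util.Arrays.toString(n));
    }
}
```

On the GPU side you’d typically let normalized vertex attributes (or a few shader instructions) do the same unpacking, but the arithmetic is the same.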

Even caching vertex positions this way will hardly be worth it. In a normal 3D model, it is usually pretty rare to have two vertices which share the same position but not the same normal or texture coordinates. That usually indicates a UV seam, which your models shouldn’t have a significant number of, as seams both look like crap and complicate a lot of things, so indexing the vertex positions would not save very much. I’d say that less than 20% of the vertices would need to be duplicated at all, and the additional memory used by the position indices you’d need to add would probably outweigh this. IN ADDITION, you’d need to store the 3D position in an RGBA texture since the GPU automatically pads 3-channel textures to 4 channels, so you’d essentially add 33% more memory there as well, unless you store it in a GL_R32F texture and do 3 samples per vertex, which would just be ridiculous.

TL;DR: Pack your data, don’t index attributes.

Not to mention that if you have a million vertices in a single .obj file, you’re doing something wrong in the first place.

Okay, I’ll get to work on what basil said. Seems like the best option. (The copying, not the SSBOs; those require OpenGL 4.3, but are still pretty neat.)