Per poly texture?

I’m sorta new to Xith, but our last 3D system was capable of putting a unique texture on each polygon of a mesh object. All the Xith demos I’ve seen so far apply a single texture to an entire object. Is there any way to do the former?

You’ll either need to create a special texture that contains all your textures and map the texture coordinates appropriately

or

Create a Shape3D for each polygon.

There is a good reason for this restriction, honest. :slight_smile: Well, actually I believe it’s due to the ultimate aim of having the eventual OpenGL calls draw all polygons of the same texture in one fell swoop.

Kev

[quote]You’ll either need to create a special texture that contains all your textures and map the texture coordinates appropriately
[/quote]
Yes.
This is what many good 3D artists do anyway: pack the different textures of a model into one texture page to avoid texture context swapping as much as possible.
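
For illustration, a minimal sketch of that kind of remapping, assuming a square atlas of equally sized tiles (the method and its parameters are made up for the example):

```java
// Remap a polygon's original [0..1] UVs into one cell of an N x N atlas.
// (tileX, tileY) selects the cell that holds this polygon's original texture.
static void remapToAtlas(float[] uv, int tileX, int tileY, int tilesPerSide) {
    float scale = 1.0f / tilesPerSide;
    for (int i = 0; i < uv.length; i += 2) {
        uv[i]     = (uv[i]     + tileX) * scale; // u
        uv[i + 1] = (uv[i + 1] + tileY) * scale; // v
    }
}
```

One caveat: tiling textures (UVs outside [0..1]) don’t survive this, because wrapping would bleed into the neighbouring cells.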

[quote]This is what many good 3D artists do anyway: pack the different textures of a model into one texture page to avoid texture context swapping as much as possible.
[/quote]
True to a certain degree for models, but certainly not for level geometry. And splitting an indoor level or even a terrain into a bunch of shapes by their textures is a really bad approach IMHO, because it makes a lot of tasks (like collision detection and response) unnecessarily difficult. Why won’t xith let the programmer decide how many textures a shape should use!?

Actually, you’re wrong about the collision detection. Having everything as a single big lump of geometry makes collision detection horribly slow. With a lot of separate objects, you can quickly cull almost everything before getting down to the per-triangle intersection tests. BSP, cells and portals, octrees etc. all rely on splitting the geometry down into small, spatially localized sets of data to reduce the number of tests needed.
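
To make the broad phase concrete, here is a hedged sketch of the idea (the Aabb/Shape classes are illustrative, not Xith API): test a cheap bounding box per shape first, and only run triangle tests on the few shapes that pass:

```java
import java.util.ArrayList;
import java.util.List;

class Aabb {
    float minX, minY, minZ, maxX, maxY, maxZ;

    boolean intersects(Aabb o) {
        return minX <= o.maxX && maxX >= o.minX
            && minY <= o.maxY && maxY >= o.minY
            && minZ <= o.maxZ && maxZ >= o.minZ;
    }
}

class Shape {
    Aabb bounds;
    // ... triangle data, only touched in the narrow phase
}

class BroadPhase {
    // Cull whole shapes by bounding box before any per-triangle test.
    static List<Shape> candidates(List<Shape> shapes, Aabb query) {
        List<Shape> hits = new ArrayList<Shape>();
        for (Shape s : shapes) {
            if (s.bounds.intersects(query)) {
                hits.add(s); // only these reach the expensive tests
            }
        }
        return hits;
    }
}
```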

[quote]all rely on splitting the geometry down into small, spatially localized sets of data
[/quote]
Yeah, but having to split your dataset based on different textures is NOT splitting your dataset based on spatial proximity. The two are mutually exclusive.

[quote]Actually, you’re wrong about the collision detection. Having everything as a single big lump of geometry makes collision detection horribly slow. With a lot of separate objects, you can quickly cull almost everything before getting down to the per-triangle intersection tests. BSP, cells and portals, octrees etc. all rely on splitting the geometry down into small, spatially localized sets of data to reduce the number of tests needed.
[/quote]
That’s not exactly what I was talking about. Storing all the level geometry in one shape doesn’t mean that you can’t use spatial subdivision on it. It’s even easier IMO. If you split your level into a lot of texture-separated objects, you either have to store the corresponding object of each polygon in your octree (for example) or (even worse) calculate an octree for each one.
Allowing just one texture per shape really is a bad decision IMHO. Imagine a Doom3 level that had been built that way. How many shapes would that require? Gazillions? And what for? Just to minimize texture state changes?

It all depends on the definition. A Shape in xith3d/java3d is a ‘collection of geometries with the same appearance’. You cannot make it contain multiple appearances - that is against the definition.

What you want is a different kind of object which has a many-to-many relationship between geometries and appearances. Let’s call it CompositeShape. You would probably need to add some kind of index for each polygon, pointing to the correct appearance within the given CompositeShape, and the engine would split the polygons into separate shapes internally, grouping them itself to minimize state changes. Things would get more complicated in the case of dynamic shapes - it would have to be done in a smart way to avoid copying data on every update.
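
A rough sketch of what such a CompositeShape could look like - purely illustrative, none of these classes exist in xith3d:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Every polygon carries an index into an appearance table; the engine
// regroups the polygons per appearance internally, so the renderer still
// sees one batch per texture/state.
class CompositeShape {
    private final int[] polygonAppearance;  // appearance index per polygon
    private final Appearance[] appearances; // the appearance table

    CompositeShape(int[] polygonAppearance, Appearance[] appearances) {
        this.polygonAppearance = polygonAppearance;
        this.appearances = appearances;
    }

    // Group polygon ids by appearance; each group becomes one internal batch.
    Map<Appearance, List<Integer>> buildBatches() {
        Map<Appearance, List<Integer>> batches =
                new HashMap<Appearance, List<Integer>>();
        for (int poly = 0; poly < polygonAppearance.length; poly++) {
            Appearance app = appearances[polygonAppearance[poly]];
            List<Integer> group = batches.get(app);
            if (group == null) {
                batches.put(app, group = new ArrayList<Integer>());
            }
            group.add(poly);
        }
        return batches;
    }
}

class Appearance { /* texture + render state, as in xith3d */ }
```

For the dynamic case, you would rebuild only the batches whose polygons actually changed instead of regrouping everything on each update.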

Now, the question is: do you really need a CompositeShape? So far, you have used two arguments: level geometry modelling and level geometry collisions.
For modelling, if your 3D editor mixes all textures in one big shape, think about a loader/converter which will split it into separate shapes per texture. How much work can it be? One page of code? Anyway, I doubt that you will put a lot of the level into one shape, because of culling. You should partition your level anyway to avoid swamping the GPU with non-visible objects.
As for the collisions, there is no requirement to use the same shapes for collision that you use for rendering. If you have some kind of super-optimized collision representation of your level geometry, use it for collisions as a whole - there is no need to split it per texture.
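
As a sketch of that one-page converter, using Java3D-style classes which xith3d largely mirrors (texture coordinates, normals and error handling omitted for brevity):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import javax.media.j3d.*; // com.xith3d.scenegraph.* in xith3d

class PerTextureSplitter {
    // Build one Shape3D per texture bucket; each float[9] holds one
    // triangle's three xyz vertices, grouped by texture beforehand.
    static List<Shape3D> buildShapes(Map<Texture, List<float[]>> buckets) {
        List<Shape3D> shapes = new ArrayList<Shape3D>();
        for (Map.Entry<Texture, List<float[]>> entry : buckets.entrySet()) {
            List<float[]> tris = entry.getValue();
            TriangleArray geom = new TriangleArray(tris.size() * 3,
                                                   GeometryArray.COORDINATES);
            int vertex = 0;
            for (float[] tri : tris) {
                geom.setCoordinates(vertex, tri); // writes three vertices
                vertex += 3;
            }
            Appearance app = new Appearance();
            app.setTexture(entry.getKey());
            shapes.add(new Shape3D(geom, app));
        }
        return shapes;
    }
}
```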

Our system is based on user-created content from the average computer user today, similar to how the Web made it possible for anyone to be a publisher. This means our content isn’t ultra-optimized by professional 3D artists familiar with texture packing, etc. We’re finding our users are using applications like TrueSpace which allow for per-polygon textures, and I don’t want it to be my fault that their content isn’t acceptable to our system. Additionally, there’s a future design that includes painting directly within our system, where we provide the 3D mesh and they can paint the polygons individually to taste; so it’s not just a matter of file loading, it’s also a dynamic reconstruction issue.

So I guess in such a case, I just make each polygon a Shape3D?

Is there more overhead switching between hundreds of individual sub-objects, instead of checking an if-then switch per polygon? Is it possible to create a custom object on the application level that can feed its polygons directly to Xith, or is that too low-level for the application level?

Will you vary only textures, or also other states per polygon? For example, can single polygons in a shape be wireframe, lit/non-lit, have different shaders, etc.? If yes, then one shape per polygon is probably a good choice - it is going to be painfully slow anyway. If you vary only textures but share all other properties, then it will probably be better to have a specialized object type for that.

If we are talking about painting on objects dynamically, maybe per-object textures are the answer? Prerender everything needed to a big texture (a single one per object) and then perform all updates to it according to the UV mapping of the polygons. For painting with a brush it is probably the only choice anyway, unless you are talking about ‘select one of the predefined textures for each polygon’.
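
A minimal sketch of that kind of update, using a plain BufferedImage as the backing image; pushing the refreshed image back into the engine’s texture is engine-specific and left as a comment:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class TexturePainter {
    // Paint a brush dab into the one big per-object texture at a given UV.
    static void paintAtUv(BufferedImage image, float u, float v, int brushSize) {
        int x = Math.round(u * (image.getWidth() - 1));
        int y = Math.round((1.0f - v) * (image.getHeight() - 1)); // flip v
        Graphics2D g = image.createGraphics();
        g.setColor(Color.RED);
        g.fillOval(x - brushSize / 2, y - brushSize / 2, brushSize, brushSize);
        g.dispose();
        // ...then ask the engine to re-upload the texture from 'image'
    }
}
```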

@abies: I don’t think that it’s a very good argument to say that something is against a definition and therefore not possible. Maybe it’s not the feature but the definition that is questionable in that case?!
Anyway, I agree that there are workarounds. As you mentioned, I could write my own loader to split my level into separate shapes, and I can also use different geometry and spatial subdivision for my collision detection than I do for the rendering. But I don’t think that that’s the point when using a 3D engine like xith. It should offer such things and not force me to reinvent the wheel here.
Another example: imagine a 3D editor where the user can load textures and assign them to every polygon he wants (i.e. he’s texturing an untextured level). If I understand the current definition correctly, this is almost impossible with xith, because it would require every polygon to be a single shape (or to split and create shapes every time he changes a texture, which sounds even worse). For a level with 20000 polygons (which is not much), this would require up to 20000 shapes. Am I right?
I’m sorry, but if I am (even to a degree), I don’t think that the current approach is a very good one. Other engines I know do it differently and IMHO better. I think this is something that should be rethought.

[quote]@abies: I don’t think that it’s a very good argument to say that something is against a definition and therefore not possible. Maybe it’s not the feature but the definition that is questionable in that case?!
[/quote]
Xith3d tries to be mostly compatible with java3d as far as class concepts are concerned. Shape3D is well defined in java3d - so IMHO, if you want something different, it should be a different class, instead of putting very different functionality into the old class.

[quote] For a level with 20000 polygons (which is not much), this would require up to 20000 shapes. Am I right?
[/quote]
Inside the editor - yes. Inside the game - they could be grouped into bigger entities.

Can you tell me which 3D engines allow assigning a different texture to each polygon in the same shape for models? I know it happens for levels - but for models?

I think that Shape3D is good enough for most models. You just need a different entity for representing the world geometry.

OK, I now understand the reason for Shape3D behaving the way it does: compatibility with Java3D. However, a class that allows different textures for polygons would be a very valuable addition IMO.
JPCT (http://www.jpct.net/forum/viewtopic.php?t=88) is something I’m using from time to time, and it offers support for both ways. By default, you assign textures to the whole object, but the 3ds loader can handle multiple textures per object, and the API lets you obtain a PolygonManager from each object which offers this option too. That’s very convenient for coloring picked polygons, for example. You just need about two lines of code to highlight the polygon under the mouse pointer by simply changing its texture.
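
From memory, the jPCT side of that looks roughly like this (treat the exact signatures as approximate; how the picked polygon’s ID is obtained from the mouse ray is omitted):

```java
import com.threed.jpct.Object3D;
import com.threed.jpct.PolygonManager;
import com.threed.jpct.TextureManager;

class Highlighter {
    // Highlight one picked polygon by swapping its texture.
    static void highlight(Object3D obj, int pickedPolyID) {
        PolygonManager pm = obj.getPolygonManager();
        int tex = TextureManager.getInstance().getTextureID("highlight");
        pm.setPolygonTexture(pickedPolyID, tex);
    }
}
```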

It’s not just compatibility with Java3D that’s the issue here. It’s compatibility with the graphics card as well as the rendering API. Even if Xith3D did have some object type that allowed per-polygon texturing, it would still have to break the entire thing up into a lot of sub-objects, each with their own separate geometry and texture. That’s the way both OpenGL and Direct3D work. So saying it is by definition is absolutely correct.

It’s going to be very inefficient doing per-polygon texturing as it will have to do that work potentially every frame. There’ll be a lot of data replication due to the need to create multiple copies of each vertex for each polygon that uses it with a different texture, and so forth. You’re far, far better off doing that at the application level where you can control the entire process and do it most efficiently for your application’s requirements. To give you an example, my .3ds loader takes the per-polygon texturing model and converts it to the same setup that Xith3D uses - a single shape per texture. That takes about 300 lines of code to perform. Now, think about how that would affect performance if it had to be executed for every polygon, every frame.

Also, saying that spatial locality is not a problem shows a fundamental lack of understanding of how geometry optimisation is used to gain massive performance increases through standard algorithms. It’s pretty clear that what you are calling a shape and what everyone else knows a shape to be are very different. What you’re completely confusing is the difference between a content development tool/environment and a realtime graphics rendering API. The requirements and abstractions are very different. Saying that a content developer needs this and thus a programmer’s API should support it is like saying chalk and cheese are both a nice-tasting after-dinner snack. They’re very different beasts. Your job as the tool writer is to work between those two worlds and map the content developer’s worldview into a realtime 3D graphics worldview in the most efficient manner possible for your particular application. What you need and what I need, given the same data set, are going to be very different when it comes to the optimised code for rendering.

[quote] Yeah, but having to split your dataset based on different textures is NOT splitting your dataset based on spatial proximity. The two are mutually exclusive.
[/quote]
They are not exclusive by any stretch of the imagination. Objects using the same texture usually are spatially located in the same place. Think about walls inside a building - you’ll have a heap of polys all using the same sets of textures located together. Off that you’ll have another room with another set of textures - possibly the same, possibly different. If they’re the same, you could keep them in the same shape object if you wanted to, but it’s more efficient not to, as they can’t be seen from this room, and culling them before they get to rendering is a good strategy. If your graphics card is not transformation-bound in your application, then leaving all the polygons that share a single texture in a single shape object (and thus a single glVertexArray) may well be the higher-performance option than spatially separating out objects with the same texture. You can use either technique based on your own application and hardware needs, but they are not exclusive.

[quote]Objects using the same texture usually are spatially located in the same place.
[/quote]
I don’t see how you can make this assumption. By that rationale, every tree in a forest could be contained in a single object (since all trees could share the same texture), but it’s silly to assume that every tree would be close to another.

Likewise, my company logo texture is used on a few dozen different avatar models, and it doesn’t make sense to create a 1-polygon object just for the logo on their backs when there are hundreds of avatars spread over several thousand square kilometers…

[quote]Saying that a content developer needs this and thus a programmer’s API should support it
[/quote]
My job is to fulfill the product requirements. If the product requirements are that typical users can paint per-polygon textures onto their avatars, then it’s my job to figure out how. If the Xith answer is “make each of the 2000 polygons into 2000 separate objects” then that, to me, is not a feasible answer. You’re telling me there’s less overhead in switching between 2000 Shape3Ds than in any other possible solution?

Our current engine sorts the polys by texture in each object at load time. To draw, it changes to the first texture, calls OpenGL to draw polys A thru B, changes to the second texture, calls OpenGL to draw polys C thru D, etc. Best-case scenario, we switch to one texture and make a single OpenGL call to draw polys A thru Z, and I fail to see how anything could be more efficient than that. Worst-case scenario, we make NumPoly calls, each with a texture switch and a single polygon draw, and I hardly see how anything could do better than that either.
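
For reference, a stripped-down sketch of that draw loop (JOGL-style calls; the GL package name varies between JOGL versions, and the vertex/texcoord arrays are assumed to be set up already):

```java
import javax.media.opengl.GL;

// One run per texture: [first, first+count) indexes into the sorted arrays.
class TextureRun {
    int textureId; // GL texture object
    int first;     // first vertex of the run
    int count;     // number of vertices in the run
}

class SortedDrawer {
    // Draw polys pre-sorted by texture at load time: one bind and one
    // draw call per texture run.
    static void draw(GL gl, TextureRun[] runs) {
        for (TextureRun run : runs) {
            gl.glBindTexture(GL.GL_TEXTURE_2D, run.textureId);
            gl.glDrawArrays(GL.GL_TRIANGLES, run.first, run.count);
        }
    }
}
```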

It is more efficient because you are only dealing with one texture instead of n. OpenGL only needs to load and store one texture. I believe you will find this approach much faster even when using raw OpenGL calls.

Will.

[quote]To give you an example, my .3ds loader takes the per-polygon texturing model and converts it to the same setup that Xith3D uses - a single shape per texture. That takes about 300 lines of code to perform. Now, think about how that would affect performance if it had to be executed for every polygon, every frame.
[/quote]
Sorry, but I really don’t get what you are trying to tell me here. That changing the texture state requires 300 lines of code? Surely not.

[quote]It’s pretty clear that what you are calling a shape and what everyone else knows a shape to be are very different. What you’re completely confusing is the difference between a content development tool/environment and a realtime graphics rendering API…
[/quote]
I’m calling a shape whatever xith calls a shape, and I couldn’t care less what the programmer of the API does with this shape/model/whatever internally. I’m just interested in a feature that makes sense to me (and not just me). Maybe it’s hard to implement and doesn’t fit nicely into the current code, but should I really care about that as the “user” of the API? Back to my example: if your loader loads a level with a single texture, it would make it either one shape containing all the polys, or one shape for every poly because I somehow told it to do so. The latter solution is totally out of the question for me; that’s far away from the optimized state you are talking about. The former solution explodes when I try to change the texture of a single polygon: I would have to reorganize the whole shape, split it into two separate ones, and so on and so on.
So I think you are basically telling me that I can use xith for writing a game where almost everything is static (texture-wise), but not for writing the tools for creating it!?
In my opinion, an engine’s task is to abstract from the underlying rendering layer and its requirements. If it forces me to build weird workarounds to get what I want (if it’s reasonable, which it is in this case), it has failed in this part IMHO.

You can dynamically change a texture by painting on it. You can add decals with bullet holes. You just cannot randomly change the texture on single polygons without making them separate shapes. I understand that you need this functionality, but I have yet to see any game which would use it. I even played some kind of childish point-and-color game a few years ago, but it allowed coloring an entire shape with one color/texture - not specific polygons (you would probably have a problem explaining to a child why a sphere is not a sphere but a bunch of polygons).

The problem is that your use case is so strange to most people that there is trouble grasping why exactly it is needed. Can you explain the exact cases where it is needed?

[quote]The problem is that your use case is so strange to most people that there is trouble grasping why exactly it is needed. Can you explain the exact cases where it is needed?
[/quote]
Well, “needed” is a bit too strong, because it’s not something I’m making money with - not even something that will evolve into a real game. It’s just a fun project that I’m working on from time to time to learn things about 3D. The idea is this: there are birds hidden in a tree (with around 1000-2000 leaves) and you have to hit them using a fireball ;D
I started this little project using the mentioned jPCT engine, and I wanted the leaves to burn down when hit by the fireball. In the earlier version of the engine, I had to create an object for every leaf too, just like I would have to in xith. That’s because the engine was able to detect the collision itself, but it couldn’t tell me which leaves were affected when I stored them all in one big object. That was quite slow: according to my profiling, most of the time was spent in the collision detection between my fireball and all the leaves. A newer version introduced the possibility of getting the list of affected polygons (i.e. the leaves) from a collision. With this, I can easily maintain my own list of burning leaves, let them burn for some time by changing the texture to an animated fire, and finally change the texture to a “burned leaf” one. (You can spot the birds better through the burned leaves :wink: )
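
For what it’s worth, the bookkeeping around that is tiny; a sketch (the actual texture swaps would go through something like the PolygonManager calls above, so they are left as comments):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Track the burn timer per hit leaf; swap textures as each timer expires.
class BurningLeaves {
    private final Map<Integer, Long> burnStart = new HashMap<Integer, Long>();
    private static final long BURN_MILLIS = 2000;

    void ignite(int leafPolyID) {
        if (!burnStart.containsKey(leafPolyID)) {
            burnStart.put(leafPolyID, System.currentTimeMillis());
            // set the animated fire texture on leafPolyID here
        }
    }

    void update() {
        long now = System.currentTimeMillis();
        for (Iterator<Map.Entry<Integer, Long>> it =
                 burnStart.entrySet().iterator(); it.hasNext();) {
            Map.Entry<Integer, Long> e = it.next();
            if (now - e.getValue() >= BURN_MILLIS) {
                // set the "burned leaf" texture on polygon e.getKey() here
                it.remove();
            }
        }
    }
}
```
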
That’s what I’m doing and that’s what I’m using this feature for. I don’t really need xith to implement it, because I don’t plan to use xith ATM. I was just wondering why something so obviously needed (to me at least) isn’t possible with this engine.
And finally, although I’m not writing one, I think it’s very useful for texturing work in an editor. But I already mentioned that.