Per poly texture?

[quote]There are birds hidden in a tree (with around 1000-2000 leafs) and you have to hit them using a fireball ;D
I started this little project using the mentioned jpct engine and i wanted the leafs to burn down when hit by the fireball. […] With this, i can easily maintain my own list of burning leafs, let them burn some time by changing the texture to an animated fire and finally i change the texture to a “burned leaf” one. (You can spot the birds better through the burned leafs :wink: )
[/quote]
Create a texture with multiple stages of leaf burning in various places and then change the texture coordinates of particular leaves. All leaves will be drawn in a single OpenGL call, and animating the 'burning' is as simple as changing tex coords on a few vertices. I would personally think about some kind of particle system for the leaves - they could whirl from the blast or just the wind, fall down, etc.
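To make the idea concrete, here is a minimal sketch of that scheme, assuming all leaves sit in one geometry whose UVs live in a single float array and the atlas is split into equal-width columns (green, a few fire frames, burned). The class name and the STAGES count are purely illustrative - this is not actual Xith3D or jPCT API:

[code]
// Minimal sketch: one geometry for all leaves, one atlas with STAGES columns.
// Class and field names are illustrative only, not actual engine API.
public class LeafBurnAtlas {
    private static final int STAGES = 6;   // e.g. green, 4 fire frames, burned
    private final float[] texCoords;       // 4 vertices * (u,v) per leaf quad

    public LeafBurnAtlas(float[] texCoords) {
        this.texCoords = texCoords;
    }

    /** Points one leaf quad at the given burn stage (0 = green .. STAGES-1 = burned). */
    public void setStage(int leafIndex, int stage) {
        float u0 = (float) stage / STAGES;        // left edge of the atlas column
        float u1 = (float) (stage + 1) / STAGES;  // right edge
        int base = leafIndex * 4 * 2;             // offset of this leaf's 4 (u,v) pairs
        float[] quad = { u0, 0f,  u1, 0f,  u1, 1f,  u0, 1f };
        System.arraycopy(quad, 0, texCoords, base, quad.length);
        // finally push texCoords back into the geometry (engine-specific call)
    }
}
[/code]

Pushing the updated array back into the geometry is the only engine-specific step.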

As you can see, there is no need for per-polygon textures here, and with the current classes you will get much better performance than with any texture-switching solution.

Well, the first thing we have is we're displaying the MOLA data of the surface of Mars. There are 1,095,761,920 polygons representing a virtual surface area of 222,534,366 sq km. We break the data into 64x64-poly plates that are roughly 30 km square, load 16 plates at a time, for a total of 65K quads (for the world alone) loaded at any given time. Each quad is roughly half a kilometer in size. We subdivide those into 131K tri polys.

I’d like to break the plates into smaller sizes, but there’s already 250K plates.

Now you can't tell me that a single texture covering 30 km of terrain (the size of one grid plate) can hold enough resolution that it won't make you sick to look at from 5 feet away when you stand on the surface.

So we paint each half-kilometer-square polygon individually to obtain the necessary texture resolution. We use different textures (sometimes generated at runtime to ensure randomness) on each polygon so you don't see the pattern repetition. I wouldn't think the correct solution is to create 131K separate objects, one for each individual tri poly, or a single texture that is 16K x 16K pixels in size (a 256x256 texture by 64x64 polys).

We also need to be able to highlight a specific polygon. This isn’t just “draw a decal on the spot”, it’s “here’s exactly the polygon represented by these 3 data points” because we’re looking for data visualization accuracy. For me, I just mapped a highlight texture onto it.

We're already dealing with 131K polygons just for the background, and even that isn't enough resolution for direct visualization, so we've already "sacrificed" to get down that low. That doesn't yet include the 50 avatar models representing the professor and his students as they stand on Olympus Mons, either.

We also use the same 3D engine for our MMORPG, our virtual shopping center, a virtual physics and chemistry lab, and a 3D games development platform. So this isn’t a “just use Java3D for large-scale visualization instead” problem.

[quote]You can add decals with bullet holes… but I have yet to see any game which would use such functionality.
[/quote]
Well, at least that explains the resistance I've met to every suggestion so far. I'm also the one that asked if it would be possible to have a simple callback for the application to provide textures to the model loaders, rather than the model loaders taking it upon themselves to assume how to load textures directly (since we don't have, or even know of, the textures at the time of model loading), but I was met with equal resistance of "why in the world would you ever not have a texture on disk available immediately when the model is being loaded?" (besides, of course, needing to obtain the texture from a delayed source [i.e., streamed at runtime from the network], needing to procedurally generate the texture at runtime, having the texture already in application memory from a previous operation, keeping stats on texture usage frequency given an arbitrary model set, …). Not everything written in 3D is Quake, but it seems like we get resistance for any suggestion that doesn't pertain to a game.

The mentality here isn't "Sure, we can figure out how to provide that capability for your needs"… it's "Why in the world would you ever possibly need to do that in a game"… and that's one of the big reasons why we haven't moved to Xith yet. We're also looking at jME and jPCT. I know you guys aren't paid, that we're not paying you for support, that this is all a volunteer effort, and I understand that. But in most projects of this nature that we deal with, the devs are hyper to see their system used in fields beyond their original aspirations, and giddy to include new capabilities that they themselves never imagined. We shouldn't have to justify why we need certain capabilities.

[quote]It is more efficient because you are only dealing with one texture instead of n. OpenGL only needs to load and store 1 texture. I believe you will find this approach much faster even when using raw-opengl calls.
[/quote]
Sure, and your two-seater car runs faster than my 50-person passenger bus. Speed comparisons are moot when we’re talking systems with two different capabilities. To say “It’s faster if you just drop that capability” doesn’t say much when you need that capability in the first place, now does it? This is the equivalent of “Our program runs faster because it only prints Hello World”.

[quote]Well, the first thing we have is we're displaying the MOLA data of the surface of Mars. There are 1,095,761,920 polygons representing a virtual surface area of 222,534,366 sq km. We break the data into 64x64-poly plates that are roughly 30 km square, load 16 plates at a time, for a total of 65K quads (for the world alone) loaded at any given time. Each quad is roughly half a kilometer in size. We subdivide those into 131K tri polys.
[/quote]
We could start with that. You have very specific requirements - and now we can start to think about how to solve the problem.

Well, you should. Xith3D is a game engine, not a 'visualise-everything-and-a-bit-more' engine. This doesn't mean that other things cannot be added, but there has to be a specific request for them, together with an explanation of why it is needed - because maybe some different solution can be found, one which fits well in the engine AND solves your problem (see my burning-leaves solution above).

Now, back to your problem. Do you have 131K different textures in the system at once? And do you perform a texture context switch 130 thousand times per frame, not to mention uploading all of these textures to the GPU each time if needed? How do you store the textures in main memory?

On a side note, I suppose that you have already investigated this possibility, but just in case - have you tried a big main texture with a detail texture on top of it?

We've tried everything from 1 texture per plate (which, given a 256x256 texture, results in each texture pixel being 100+ meters in size), up to 1 texture per poly (which is 131K textures and completely infeasible). So we checked into using 4 textures, 8 textures, etc. Now, to prevent pattern repetition, you have to distribute the different textures accordingly, like the colored squares of a chessboard. As a result, if we lump all the polys with a similar texture into one object, it's like lumping all the red squares of a chessboard together: they collectively occupy a huge surface area with equally massive holes in between, which plays havoc with collision detection and picking. Since the textures are evenly distributed, each "texture set" eventually occupies the entire plate (just as the set of red squares covers the entire surface area of the chessboard, equally with the black squares). That's why, when somebody said "similar texture objects are usually spatially close to each other", I can say that in our project a polygon may have 2000 "texture twins", each of which may be 50 kilometers away with five different 10 kilometer holes between them.

As far as choosing the number of textures (which will affect how many polys use that texture), we haven’t settled on a good value yet. Basically we’d like to push as close to 1 texture per poly as the user can handle (for maximum resolution), but naturally we don’t get anywhere near that in actuality.

I'd understand resistance if I were asking for something beyond reasonable, far beyond feasibility, etc. But the few things I've asked for (a texture-loading callback, a capability for per-poly textures, etc.) seem like fairly simple requests, even if they aren't needed by the mainstream folks writing Quakes. The static seems to be far more along the lines of "You'd never need that in a Quake" rather than "That's only a few lines of code you need; we can do that even if we don't use it ourselves". If only game-specific features are going to be considered, then you need to bill Xith as a "game API" rather than as a "lean scenegraph renderer".

Note: In Open Source projects, change usually happens when it fits within the currently-conceived framework, and you are willing to do it yourself. Mailing lists are good for finding ways to work with the current system. The same holds for corporate projects, but they have a well-established system to pay for both changes and support.

Regarding the specified problem…
Problem: Desire to load a palette of numerous discrete textures and then pseudo-randomly map them to a terrain map of Mars. A reasonably-sized texture map for all of Mars looks horrible when zoomed in, while one that looks good up close exceeds the GPU's capabilities when zoomed out.

Proposed solution: Load a discrete texture per polygon. Load N textures of X*Y resolution. Pseudo-randomly map them to discrete polygons in an object. Restructure the Xith API to allow individual polygons in an object to have unique, separately-loaded textures.

Xith solution: Use texture coordinates to map each polygon.
Load 1 texture of N·X·Y total resolution (packed as (N·X) x Y, X x (N·Y), or some other arrangement). Pseudo-randomly map texture coordinates to discrete polygons in an object.
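As a rough illustration of that mapping, here is a sketch that builds per-triangle tex coords for an atlas of N tiles packed side by side; the class name and vertex layout are assumptions for the example, not existing Xith3D classes:

[code]
import java.util.Random;

// Sketch only: picks one of N side-by-side atlas tiles per triangle and emits
// the matching (u,v) pairs. Real geometry classes and vertex layouts will differ.
public class AtlasMapper {
    public static float[] buildTexCoords(int triangleCount, int tileCount, long seed) {
        Random rnd = new Random(seed);                  // deterministic pseudo-random tiling
        float[] uv = new float[triangleCount * 3 * 2];  // 3 vertices * (u,v) per triangle
        for (int t = 0; t < triangleCount; t++) {
            int tile = rnd.nextInt(tileCount);          // which of the N packed tiles
            float u0 = (float) tile / tileCount;        // tile spans [u0, u1] horizontally
            float u1 = (float) (tile + 1) / tileCount;
            int base = t * 6;
            uv[base]     = u0; uv[base + 1] = 0f;       // vertex 0
            uv[base + 2] = u1; uv[base + 3] = 0f;       // vertex 1
            uv[base + 4] = u0; uv[base + 5] = 1f;       // vertex 2
        }
        return uv;
    }
}
[/code]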

Analysis:
In either method, the same amount of data has to be loaded to display the same amount of detail. If either method fails due to excessive texture size, then the other method would also fail.

For this specific problem, it sounds like level-of-detail (LOD) features should be used to manage the texture mapping problem. Using this concept, a system can be set up to fractally (or recursively) map the terrain as one gets closer, hence eliminating the need to trade-off between 130k textures and decent resolution. You will probably want to define different texture sets for each level. Using mipmap levels may help. Simply divide “big” polygons into smaller ones as you get closer…

Conclusion: The need for 1 texture/polygon is caused by a questionable design decision (wanting ~130K uniquely textured polygons loaded at once) rather than a fundamental limitation in the Xith API.


Regarding the “delayed texture loading/texture callback” situation… The purpose of this is to allow for the creation of Xith objects before their texture is loaded/determined, correct? How about creating dummy objects with a standard texture, and then fixing them with the correct texture later? com.xith3d.scenegraph.Texture.setImage may help.
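As a very rough sketch of that approach (the only Xith3D call assumed is the Texture.setImage mentioned above; the helper class and its TextureHandle adapter are hypothetical):

[code]
import java.awt.image.BufferedImage;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "placeholder now, real image later". TextureHandle is a hypothetical
// adapter around whatever engine call swaps the image (e.g. Texture.setImage).
public class DeferredTextures {
    public interface TextureHandle { void setImage(BufferedImage image); }

    private final Map<String, TextureHandle> pending = new ConcurrentHashMap<String, TextureHandle>();

    /** Scene-building time: the shape starts with a dummy texture, its name is remembered. */
    public void register(String textureName, TextureHandle placeholder) {
        pending.put(textureName, placeholder);
    }

    /** Later, when the real image has been streamed/generated/read: swap it in. */
    public void resolve(String textureName, BufferedImage image) {
        TextureHandle handle = pending.remove(textureName);
        if (handle != null) {
            handle.setImage(image);   // forwarded to the engine's texture-update call
        }
    }
}
[/code]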

In this way, the programmer has more control over how and when to update textures than any cookie-cutter callback/delayed display routine could provide.

As for a paint-by-color system, a 256*256 pixel rainbow image could provide the palette, and the x-y coordinates of each color would be specified as the three TextureCoordinates for each polygon as it is painted. This scheme makes all 16-bit colors available. It might be better to use a GIF-like indexed color table, though. Offhand, I don't remember how big a texture can become before it slows down standard graphics hardware.
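For what it's worth, the palette lookup itself is trivial. A sketch (sampling at texel centres; the class name is made up for the example):

[code]
// Sketch of the paint-by-colour palette lookup: a 256x256 palette texture, with a
// colour index 0..65535 turned into the (u,v) used for all three vertices of a triangle.
public final class PaletteLookup {
    private static final int SIZE = 256;   // palette texture is SIZE x SIZE texels

    /** Returns {u, v} for a colour index, sampling the texel centre. */
    public static float[] uvForIndex(int colourIndex) {
        int x = colourIndex % SIZE;
        int y = colourIndex / SIZE;
        return new float[] { (x + 0.5f) / SIZE, (y + 0.5f) / SIZE };
    }
}
[/code]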

Thank you for a reasoned analysis of my problem. However, I’m not quite sure I understand.

A single polygon of the Mars data is roughly half a kilometer in size. Drawn at full size (i.e., the user is standing on the surface), even a 256x256 texture on that single poly is just barely sufficient resolution. If we assume a 5% repetition factor, that's still 20 separate images, and that only barely covers the fact that the 16 neighboring polys won't have the same graphic as the current one. 20 images at 256x256 each, packed, makes for one seriously large single texture - somewhere around 1280x1024 (a 5x4 grid of 256x256 tiles). If we assume that UV mapping from a single texture is the solution to this problem, what's the largest texture that Xith can handle? Which would be easier on the graphics card, one insanely large texture with UV mapping, or 20 smaller textures with per-poly mapping?

As far as the delayed texture loading… I’m not sure what you mean.

  1. I don’t know which textures to download/generate unless the model loaders tell me what they need.
  2. The model loaders don’t tell me which textures are needed.
  3. The model loaders fail to load because the textures are unavailable.

My recommendation seemed simple enough: make it an option for the application to be responsible for providing the textures, with the model loaders simply requesting them from the application. The default can be that the model loaders attempt to load from disk directly, for backwards compatibility. The implementation would be as simple as a model loader saying: "If I have a TextureProvider assigned, ask it for the texture I need; else, try to load from disk myself". I think it's an error on the part of a model loader to make assumptions about where, when, and how textures are made available at runtime.
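One possible shape of that callback, purely as a sketch (this interface does not exist in Xith3D; the names and the ImageIO fallback are just for illustration):

[code]
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Hypothetical callback: the loader asks the provider first and only falls back
// to reading from disk itself when no provider has been set.
public interface TextureProvider {
    /** Called by the model loader whenever it discovers it needs a texture. */
    BufferedImage getTexture(String name) throws IOException;
}

// Inside a (hypothetical) model loader:
class LoaderFragment {
    TextureProvider provider;                       // null means "old behaviour"

    BufferedImage fetchTexture(String name) throws IOException {
        if (provider != null) {
            return provider.getTexture(name);       // app decides: network, generated, cached...
        }
        return ImageIO.read(new File(name));        // backwards-compatible disk load
    }
}
[/code]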

The application, once it knows which textures are needed, can always provide a temporary dummy texture and update it later with the real thing.

FWIW: The reason for Xith’s “change texture coords” instead of “change polygon textures” is that the first is usually accelerated in the GPU while the second isn’t.

[quote]Which would be easier on the graphics card, one insanely large texture with UV mapping, or 20 smaller textures with per-poly mapping?
[/quote]

Either one would be bad. Which one is worse depends on hardware details and the exact sizes involved. In other words, as long as the texture fits into the relevant cache, the large UV-mapped texture is faster. Once it becomes too big, the smaller textures will be faster.

I remember seeing benchmarks/guidelines for various texture sizes, but I don’t remember the numbers. It should be a fairly simple benchmark to code, though I don’t have the time right now. A conservative estimate is probably around 32x32 to 64x64 pixels, depending on the graphics card generation.

[quote]A single polygon of the Mars data is roughly half a kilometer in size.
[/quote]

Herein lies the problem. Assume the user's monitor is 1024 pixels wide, and they are looking straight down with a field of view of +/- 45 degrees. This is convenient since tan(45°) = 1; thus (viewable width) = 2 * (viewing height).

At 256 km up, the view is 512 km wide; each 500 m polygon covers a single pixel. At 500 m up, the view is 1km wide; each polygon covers half the screen. From 1 m (roughly human height), the view is 2 m, or 2/500=0.004 times the size of your base polygon.

Photo-realism at a 1 m height therefore requires your texture for a 500 m polygon to be 1024*500/2 = 256,000 pixels wide. Yet at 256 km up, photo-realism only requires a single pixel per polygon texture.

Thus fixing your polygon size to be 500 m is causing a nasty tradeoff between excessive texture size and unacceptable image quality. Therefore, your only solution can be to make your polygon mesh finer as your view gets closer to it, and coarser as the viewer moves away.
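The same trade-off as a back-of-the-envelope calculation, under the assumptions above (1024-pixel-wide view, +/-45 degree field of view, looking straight down); this is only a sanity check, not engine code:

[code]
public class TexelBudget {
    /** Texels needed across one polygon edge for 1 texel per screen pixel. */
    public static double texelsPerPolygon(double polygonSizeM, double eyeHeightM) {
        double viewWidthM = 2.0 * eyeHeightM;        // width = 2 * height at +/-45 degrees
        double pixelsPerMetre = 1024.0 / viewWidthM; // screen is 1024 pixels wide
        return polygonSizeM * pixelsPerMetre;
    }

    public static void main(String[] args) {
        System.out.println(texelsPerPolygon(500, 256_000)); // ~1 texel at 256 km up
        System.out.println(texelsPerPolygon(500, 1));       // ~256,000 texels at 1 m up
    }
}
[/code]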

[quote]Each quad is roughly half a kilometer in size.
[/quote]
Your current scheme is to assign 2 triangles per quad. An improved scheme is to further subdivide it into 2, 8, 32, 128, … triangles dynamically, based on the height above the surface. With some clever coding, this can be made to happen rather seamlessly. (e.g. match the general light/dark/color patterns whenever you do a split, and split before the viewer is close enough to be bothered by the change)
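A minimal sketch of picking the split depth from the eye height; the "edge no larger than the eye height" threshold is an arbitrary choice for the example:

[code]
// Each split halves the quad edge and quadruples the triangle count (2 -> 8 -> 32 -> 128 ...).
public class SubdivisionLevel {
    public static int depthFor(double quadEdgeM, double eyeHeightM, int maxDepth) {
        int depth = 0;
        double edge = quadEdgeM;
        while (edge > eyeHeightM && depth < maxDepth) {
            edge /= 2.0;   // subdivide until the edge is no larger than the eye height
            depth++;
        }
        return depth;
    }
}
[/code]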

There are several approaches to doing this, depending on your specific needs. Look at terrain demos for inspiration.


I misunderstood the delayed texture loading problem. I agree that this seems like a limitation with the current model loading interface. However, I don’t have enough experience with the model loaders to comment. If things are as you say, you’re probably stuck downloading the whole model before using it (the easy solution) or implementing the fixes yourself.

I’d recommend starting a new thread on “delayed images and model loaders” or somesuch to see what others have to say.

[quote]I’m calling a shape whatever xith is calling a shape and i couldn’t care less about what the programmer of the API is doing with this shape/model/whatever internally. I’m just interested in a feature that makes sense to me (and not just me). Maybe it’s hard to implement and doesn’t fit nicely into the current code but should i really care about this as the “user” of the API?
[/quote]
Yes. What makes a good non-realtime system does not make a good realtime system; the two objectives are almost diametrically opposed. Non-realtime is about handling as much detail as possible, in as configurable a way as possible. Realtime is about doing as little as possible between the user code and the graphics hardware. Anything that has to be calculated had better result in a net increase in performance, not a decrease. A realtime scene graph is there to provide speed optimisations - for example view frustum culling, picking and state sorting - so that an end user does not have to write the same thing over and over every time they want to write a 3D application.

[quote] I don't see how you can make this assumption. By that rationale, every tree in a forest could be contained in a single object (since all trees could share the same texture), but it's silly to assume that every tree would be close to another.
[/quote]
You don't have to make that assumption at all. There is a single Texture object that can be shared between all the trees, while the trees themselves stay spatially separated. So long as you use the same texture object, state sorting can still work to your benefit, and the culling algorithms will remove useless data, eliminating a large percentage of those tree geometries from view. By placing all those trees into a single geometry/shape, you've caused a great number of problems from a performance perspective: you cover a very large spatial area, so no matter which direction you face, the scene graph cannot cull any of that geometry. So, instead of only rendering 10K vertices, you now have to render 100K.

[quote]If the Xith answer is "make each of the 2000 polygons into 2000 separate objects" then that, to me, is not a feasible answer. You're telling me there's less overhead with switching between 2000 Shape3Ds than any other possible solution?
[/quote]
Quite simply - yes. It is going to gain you far, far greater performance benefits than doing it the other way. The difference in performance grows at an exponential rate as you increase the number of objects in the scene. There's a darn good reason why every game engine since Doom I has been running spatial partitioning algorithms, and it's certainly not for the programmer's or content developer's ease of use. Besides, there is no need to have a single shape for every polygon. As others have pointed out, the standard way of solving this problem is to use texture coordinates: group each object into a single shape (e.g. a tree model) and then use the texture coords to modify the tree on a per-object basis. It's not that hard to do, and pretty much any game development programming book talks about how to implement these strategies.

[quote]Sorry, but i really don’t get what you are trying to tell me here. That changing the texture state requires 300 lines of code? For sure not.
[/quote]
If you are going to run multiple textures per object, then yes, it will take this many lines of code. That's precisely the model that the .3ds file format uses internally: each Object chunk consists of material lists that link to the textures. To turn this into something suitable for OpenGL to draw, you have to iterate through each object's material list, break the coordinate array apart into a smaller array, set the texture and material state, then send the array to OpenGL - lather, rinse, repeat for all textures on the object. It's a horrible, inefficient process because of all the for loops that need to be executed per object, per frame.
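A compressed sketch of that submission loop, using JOGL-style calls; the MaterialRun type and its fields are invented for the illustration, not the actual loader's data structures:

[code]
import java.nio.IntBuffer;
import java.util.List;
import javax.media.opengl.GL;

// One index sub-range per material, carved out of the object's full index array.
class MaterialRun {
    int textureId;        // GL texture object for this material
    int indexCount;       // how many indices belong to this material
    IntBuffer indices;    // the index sub-array for this material
}

class MultiTextureDraw {
    static void draw(GL gl, List<MaterialRun> runs) {
        for (MaterialRun run : runs) {                         // one pass per material...
            gl.glBindTexture(GL.GL_TEXTURE_2D, run.textureId); // ...one state switch...
            gl.glDrawElements(GL.GL_TRIANGLES, run.indexCount,
                    GL.GL_UNSIGNED_INT, run.indices);          // ...and one small batch
        }
    }
}
[/code]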

So what I see here is a classic case of the "when all you have is a hammer" problem: everything looks like a nail. It also appears that JeremieHicks does not have any experience rendering geospatial data. These problems have been solved time and again by the big geospatial engines out there; there's nothing new in his requirements by any means. You're taking your knowledge of 3DS Max, which is designed for non-realtime graphics, and assuming that the same techniques are used for realtime, which they're not.

If you really want to do large scale terrain rendering, I suggest you wander over to the Virtual Terrain Project (commonly known as VTP) at http://www.vterrain.org and have a read through the hundreds of links to the various large-scale terrain rendering algorithms that they have there. You’ll most likely want to look at ROAM or one of the CLOD algorithms. But, in general, what I am seeing here from both of the parties wanting this is a lack of knowledge about fundamental graphics techniques. Do yourself a favour and grab a few books on game engine design or visualisation design and get familiar with the various algorithmic options available. It will save you a heap of time asking questions like this and getting the same “lack of interest” responses.

j3d.org has an implementation of ROAM available in a generic form, plus a specific implementation on top of Java3D. Porting that to work with Xith3D should take very little work. There are a few bugs in it that are not sorted out, but it addresses all of the questions you're working on here. Managing texture resources and managing polygonal resources can be separated into two orthogonal systems; that's the way the big rendering engines like Performer work. What you're asking for is above the design scope of what Xith3D and other scene graphs are aiming for. You can implement these techniques on top of them, but they are not part of the core API for a very good reason - the technique to use is highly application-specific.

As a side note, and the Xith3D guys are probably going to be cranky at me for mentioning this here: Xith3D probably will not be the engine that you’ll want to use if you need to deal with anything more than a single CPU machine. If you’re really doing large-scale terrain visualisation, then you’ll want to make use of my project - Aviatrix3D, which is specifically designed for the visualisation crowd. Multithreaded internals, pluggable rendering pipeline strategies, scales from PC to CAVE with only 2-3 lines of code change etc etc. Still uses JOGL internally for the rendering.

@java: After reading your posts about why Xith3D doesn't support multiple textures per shape where jPCT does, I think I (as the author of jPCT) can help clarify some things.
Basically, you are right: jPCT can do this while Xith3D can't. But there are reasons for this. I think the Xith guys already did a good job of explaining why their baby doesn't support this feature. Maybe you can live with that, maybe you can't…it all depends on your needs.
Now for the reason why jPCT can do this: it would be stupid not to…for jPCT. Unlike Xith3D, jPCT is a software/hardware hybrid engine (just like the Unreal1/Unreal Tournament engine was), i.e. it can do both software and hardware rendering with a similar feature set. Therefore, it can't do what Xith3D does and let the graphics card do all the transformation and lighting: it has to provide its own T&L pipeline written in good old pure Java. It can't rely on the graphics card for that…it IS the graphics card! For such an engine, there is no speed penalty when changing textures. You can do that thousands of times in a frame…it simply doesn't matter. That's for software rendering. When using hardware rendering, it does matter somewhat. Anyway, jPCT's hardware renderer has to be seen as an addition to the software renderer. The pipeline that every triangle has to pass is basically the same for both renderers (with some optimizations for the hardware one). jPCT doesn't really care whether you use software or hardware for rendering until the very end of the process (in fact, you can use both at the same time). Therefore, it can't do a lot of the things that Xith3D does to speed things up…on the other hand, it can do a lot of things that Xith3D can't…simply because Xith3D transfers control to the GPU where jPCT keeps everything in its own hands. That's the reason why its hardware-accelerated performance is still quite good compared with other, more hardware-oriented engines. (In fact, jPCT can render the Quake3 level taken from a Xith3D demo faster than the Xith demo does, and with better collision detection…:P)
Long story short: multiple textures per object is a no-go for Xith3D according to its design. For jPCT, it will still affect performance, but not that much, and because the software renderer supports it, the hardware renderer has to support it too (that's the basic idea behind the whole engine). On the other hand, you'll find a lot of things that Xith3D can do that jPCT can't. Again, it all depends on your needs.

BTW: For killing birds with a fireball, both engines should do… ;D