i’m working on a simple real-time animation with a sphere. i plan on trading scene complexity for rendering quality, but i would like to be able to use a large number of spheres (30 on screen at a time). an LOD system seems like a good idea, but what’s better for this special case:
create a routine to tessellate a sphere from an implicit or parametric representation (giving me fine control over level of detail), OR implement a traditional system with multiple resolution versions of my sphere built as a preprocess…
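(for reference, here’s roughly what i mean by option one: a minimal java sketch of a UV (latitude/longitude) sphere tessellator, where the stack and slice counts ARE the level of detail. all the names here are mine, just for illustration.)

    // Minimal UV-sphere tessellator: latitude/longitude parameterization of a
    // unit sphere. 'stacks' and 'slices' control the level of detail directly.
    public final class SphereTessellator {

        // Interleaved vertex positions (x, y, z); for a unit sphere the
        // normals are identical to the positions.
        public static float[] vertices(int stacks, int slices) {
            float[] verts = new float[(stacks + 1) * (slices + 1) * 3];
            int i = 0;
            for (int s = 0; s <= stacks; s++) {
                double phi = Math.PI * s / stacks;             // 0..pi, pole to pole
                for (int t = 0; t <= slices; t++) {
                    double theta = 2.0 * Math.PI * t / slices; // 0..2pi around the axis
                    verts[i++] = (float) (Math.sin(phi) * Math.cos(theta));
                    verts[i++] = (float) Math.cos(phi);
                    verts[i++] = (float) (Math.sin(phi) * Math.sin(theta));
                }
            }
            return verts;
        }

        // Triangle indices into the array above: two triangles per quad.
        public static int[] indices(int stacks, int slices) {
            int[] idx = new int[stacks * slices * 6];
            int i = 0;
            for (int s = 0; s < stacks; s++) {
                for (int t = 0; t < slices; t++) {
                    int a = s * (slices + 1) + t; // this row
                    int b = a + slices + 1;       // row below
                    idx[i++] = a; idx[i++] = b;     idx[i++] = a + 1;
                    idx[i++] = b; idx[i++] = b + 1; idx[i++] = a + 1;
                }
            }
            return idx;
        }
    }

(calling vertices(32, 32) vs. vertices(8, 8) would be my high- and low-detail versions.)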
I would recommend pre-computing LODs. Otherwise you will burn CPU cycles re-tessellating your implicit or parametric representation every frame. This project gives an example of tessellating general implicit surfaces, and it isn’t computationally cheap.
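At draw time the precomputed route can be very cheap: generate a handful of sphere meshes at decreasing resolutions offline, then pick one per sphere by camera distance. A minimal sketch, assuming hand-tuned distance thresholds (the class and field names are purely illustrative):

    // Hypothetical distance-based LOD selection over precomputed meshes.
    public final class SphereLod {
        private final float[][] meshes;      // vertex data, finest level first
        private final float[] maxDistances;  // ascending switch thresholds, one per level

        public SphereLod(float[][] meshes, float[] maxDistances) {
            this.meshes = meshes;
            this.maxDistances = maxDistances;
        }

        // Pick the finest mesh whose threshold covers this distance.
        public float[] select(float distanceToCamera) {
            for (int level = 0; level < maxDistances.length; level++) {
                if (distanceToCamera <= maxDistances[level]) {
                    return meshes[level];
                }
            }
            return meshes[meshes.length - 1]; // beyond all thresholds: coarsest
        }
    }

With 30 spheres on screen that is 30 distance comparisons per frame, which is negligible next to re-tessellating.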
thanks for the reply. i will go with precomputing. i knew on-the-fly tessellation would be expensive, but thought i might get away with it by keeping the scene simple. alas, i need to keep java focused on traversing the scene graph and performing AI functions.
your implicit surfaces work is really cool. so we’re talking real-time blobbies?
Yes, the system ran in real time. Unfortunately the conclusion I came to was that implicit surfaces were very hard to control and almost impossible to texture map, at least without combining them with some other representation like NURBS patches as in Pedersen’s work (cited in my thesis).
very interesting work. you were testing with seven parallel cpus? the solution you came up with was pretty good though, i mean real-time is relative. if you weren’t constrained by today’s processing limitations, you could devote more cycles to the triangle orientation and structural integrity checking (and greater sampling!). as someone transitioning into real time from slow 3D, i find it VERY challenging (i’m totally lost most of the time: GLSL is NOT like the renderman shading language!).
the funny thing is, maybe in ten years real-time game programming will look and be developed more like today’s high-resolution cgi, like high-res renderman. the machinations people are going through to get real-time will look like the nintendo 8-bit days. lol. and think of the implications of a quantum computer able to sample many time states at the same time: real-time, super high resolution (enough to fill the human stereo field of view), with depth of field, totally realistic physics, acoustics, optics, sensory feedback… i could go on and on… time to put my inner geek back in the cage… lol
Thanks for the compliment. My work didn’t offer many contributions. The most significant was probably the tessellator for Witkin and Heckbert’s implicit surface particle simulator (including Pedersen’s modifications for stability) and maybe some of the modifications for hierarchical animation, though I think others had come up with similar techniques either before me or at roughly the same time.
Anyway, it was certainly an interesting software engineering problem. The machine was an 8-way SGI Onyx2, and I tended to let the system fork off seven threads for parallel rendering. One of the interesting things I found was the lack of scalability of the algorithm, despite the fact that at least parts of it were perfectly parallelizable.

My conclusion was that this was due to the NUMA (non-uniform memory access) nature of the machine. Each CPU had some RAM which was “local” and much faster to access than RAM reached across the crossbar, and the data set was inherently going to be spread across multiple CPUs’ RAM: even if you subdivided space, there were going to be regions where particles belonging to another CPU had to exert force influences on this CPU’s particles. Particles also move spatially, and I don’t remember whether the system “transferred ownership” of those particles from CPU to CPU, although I vaguely recall attempting to allocate memory local to a given CPU for this purpose. I think the system would behave quite differently on today’s CPUs. (The software has bit-rotted in the last six years and the precompiled Windows version doesn’t seem to run any more.)
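To make the locality problem concrete, here is an illustrative sketch (in Java, and emphatically not the original Onyx2 code): split the particles into per-thread slabs along one axis and count how many sit within the force cutoff of a slab boundary. Those are the particles whose force evaluation must read data owned by a neighboring thread, which on a NUMA machine means touching another CPU’s local RAM. The cutoff and particle count below are made up.

    // Illustrative NUMA locality estimate, not the original simulator.
    public final class SlabLocality {
        public static void main(String[] args) {
            final int threads = 7;       // mirrors the seven worker threads
            final float cutoff = 0.05f;  // assumed force influence radius
            final int n = 10000;

            // Random particle x-positions in [0, 1).
            float[] x = new float[n];
            for (int i = 0; i < n; i++) x[i] = (float) Math.random();

            // Each thread owns one slab of width 1/threads along x.
            float slabWidth = 1.0f / threads;
            int crossSlab = 0;
            for (int i = 0; i < n; i++) {
                float offset = x[i] % slabWidth; // position within its slab
                // Within the cutoff of either slab edge: needs a neighbor's data.
                if (offset < cutoff || slabWidth - offset < cutoff) crossSlab++;
            }
            System.out.printf("%d of %d particles (%.1f%%) need cross-slab reads%n",
                    crossSlab, n, 100.0 * crossSlab / n);
        }
    }

With seven slabs and any non-trivial cutoff, a sizable fraction of the particles end up near a boundary, which is consistent with the remote-memory traffic I blamed for the poor scaling.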
You have an interesting perspective, coming to real-time from the offline (RenderMan) world. I came from the opposite direction (I started in 3D with real-time work) and haven’t scratched the surface of what shaders and multipass rendering can do. Knowing what has long been possible in the off-line world gives you the advantage of knowing what will eventually be possible on GPUs. My poor intuition tells me that as floating-point frame buffers and arbitrary-length shaders become ubiquitous, nearly all of the off-line shader primitives will make their way into the real-time domain, yielding very exciting possibilities. Of course it’s also possible that this has already happened and that I’m just behind the times.
lol. well i doubt you’re behind the times. optical simulation has become very sophisticated over the past ten years – just THINK about the siggraph examples of ten years ago! lol
i wish i had paid more attention to basic CS classes! i was always interested in algorithms, which serves you well in the offline world. but man, now i have to parse files, manage buffers… so much work! i wish there were an app like maya or even flash where one could just experiment more readily. i looked at some of the engines, but they always seem to have some kind of achilles heel. so for now i’ll just keep slogging along… who’d have guessed that tessellating a sphere would be such a pain! lol