JOGL-Java3D Renderer Progress

Hi all, thought I would post a quick snap of my renderer’s progress and open a forum thread I will post to as I go along.

So far the renderer is traversing J3D graphs and rendering transforms and indexed geometry generated from several loaders I have.
Lighting, normals, UVs, texture loading and multitexturing are working correctly. (sloppily skipped vertex colors… ;-))

Here is a snapshot of the world model we used in the fighter we showed at GDC 2002. The left snaps are in the new JOGL viewer and the right are in the original Java3D viewer. BTW, those are not “real” dynamic shadows, they are shadow texture maps generated in Maya :slight_smile:

Those look great ^_^, keep up the good work man

Looks cool! ;D

What sort of FPS are you getting, in comparison to Java3D’s?

Kev

Looks great. What do you see as the main difference between this and the Xith3D renderer?

Frame rates…We don’t need no stinking frame rates…

Well, the renderer is embedded in a simple single-pass traversal of an actual J3D graph, not a set of duplicate classes. Because of this, there is no state sorting or management of any kind, which is bad for performance but good for render correctness, which is actually “the point”. My next step is to add simple view frustum culling against the hierarchy bounds to generate a render list instead of the “inlined” render traversal. The example renderer itself will not change, other than stepping through a draw list that is split into opaque and transparent lists so transparency can be done after the opaque pass.
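To make the opaque/transparent split concrete, here is a minimal sketch. The `Shape` class and all names here are my own invention for illustration, not the actual renderer's types; inter-alpha depth sorting is left out, just as the post above says:

```java
import java.util.ArrayList;
import java.util.List;

public class RenderListDemo {
    // Hypothetical stand-in for a frustum-culled Shape3D.
    static class Shape {
        final String name;
        final boolean transparent;
        Shape(String name, boolean transparent) {
            this.name = name;
            this.transparent = transparent;
        }
    }

    // Bucket the culled shapes so all opaque geometry draws first,
    // then all transparent geometry afterwards.
    static List<Shape> buildDrawOrder(List<Shape> visible) {
        List<Shape> opaque = new ArrayList<>();
        List<Shape> alpha = new ArrayList<>();
        for (Shape s : visible) {
            (s.transparent ? alpha : opaque).add(s);
        }
        opaque.addAll(alpha);
        return opaque;
    }

    public static void main(String[] args) {
        List<Shape> visible = new ArrayList<>();
        visible.add(new Shape("glass", true));
        visible.add(new Shape("floor", false));
        visible.add(new Shape("wall", false));
        for (Shape s : buildDrawOrder(visible)) {
            System.out.println(s.name);  // floor, wall, glass
        }
    }
}
```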

This is pretty much the slowest way to render (each Shape3D sets a complete geometry and render state), but ideally the most render-error-free. For J3D render compliance this will be the baseline implementation. This renderer will look exactly like the same J3D graph except for intentional differences. There are many J3D features that our group is never planning to implement, but with it released in this form, perhaps the community will add the ones they want, as we will too as time goes by.

Once we have a good baseline, a new in-game performance renderer will be developed that conforms to the render correctness of this engine.
Long ago I wrote the OpenFLT loader for Java3D, which we recently reposted and began maintaining again. One thing that really bothered me about the SGI Performer FLT loader was that the colors and materials weren’t EXACTLY the same as modeled in Creator, which was incredibly frustrating. This was the fault of the loader, not Creator, I came to find out later. So when I started writing the Java3D FLT loader, I took extra care to make absolutely sure that materials in Creator looked pixel-perfect in Java3D. Lots and lots of test models, and there was some material munging I had to do to make it happen. But it was very much worth it for the artists, because before, they would spend HOURS making changes in Creator so that the Performer run-time would look how they intended.

This baseline renderer is meant to provide the same level of correctness going from current J3D scenes to JOGL, and it can be used to test other JOGL-based engines’ content rendering. For example, on the same machine, for the same view configuration, loaded model and screen res, a snapshot SHOULD be a pixel-perfect copy. A quick image add/sub in PhotoShop can find any differences and help sort them out.
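The PhotoShop add/sub check can also be scripted. Here is a sketch of the same idea in plain Java using `java.awt.image`; the class and method names are mine, not part of the renderer:

```java
import java.awt.image.BufferedImage;

public class SnapshotDiff {
    // Count pixels that differ between two same-sized snapshots.
    // A pixel-perfect copy yields zero; any nonzero count flags a mismatch.
    static int diffCount(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            throw new IllegalArgumentException("snapshot sizes differ");
        }
        int count = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage b = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        b.setRGB(1, 1, 0xFF0000);               // introduce a one-pixel difference
        System.out.println(diffCount(a, b));    // prints 1
    }
}
```

In practice you would load the two viewer snapshots with `javax.imageio.ImageIO.read` instead of constructing them by hand.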

Hi all,
Latest update…
Not a lot of talk right now, it’s rest time.
I got quite an ass-kicking setting this one up but here it is.
Shadow buffers (texgen / register combiners / multitexture / CopyTex render-to-texture) + vertex buffers from the Java3D scene graph. (whew)
The Hulk model is just an example model my best artist made for fun for testing.
More explanation later :slight_smile:

I’ve updated the post with a newer image. This shows the old light-map textures AND the new dynamic shadow-buffer maps on the ground. This could be confusing, but I used this model to make sure multitexture with shadow-buffer blending was working, and to see how different the dynamic light-frustum shadows are from the generated light maps. Getting shadow buffers to look like global/sun shadows will be interesting, I’m sure! Also, my textures are not alpha yet, so the shadows are solid columns, not the vented columns they should be…

Very nice looking!

Thanks!
I would love to post the runnable, but I am using my own hacked JOGL build right now that enables vertex buffers.
As soon as there is an official JOGL build with them, I’ll try to set up a web-start demo on our site.

Nice :slight_smile: So when you say shadow buffers, is that shadow maps? The whole render-from-the-light’s-point-of-view and depth-compare method?

The edges look very sharp; what kind of huge resolution are you using for the shadow-map texture?

Here is one of the papers and the demo/source I worked from. It does render the depth texture and does do the per-pixel depth compare; however, using register combiners makes it more like a pipeline configuration than a “pixel shader”, IMHO.

http://developer.nvidia.com/object/shadow_mapping.html

http://developer.nvidia.com/object/Shadow_Map.html

The depth texture I am rendering is 512x512, which is a bit large, but for cards with the hardware it still screams. Kevglass asked about FPS before, and I was only half joking when I said I didn’t care. FPS is secondary to getting the render right and implementing special render and shader techniques like this. However, we do want it fast enough to actually use in games; we are just expecting to target current mid-level and up 3D hardware from here on out, which does support these features. (Think GDC 2004-level ;-))

Finally set up alpha textures…
And then alpha shadows, which was a bit of side fun, since rendering the depth map must also mask the alpha parts out of the depth write :slight_smile:
Anyway, here’s a snap.

The vented columns are alpha textures.
Notice they cast correct vented shadows everywhere AND on each other.
4 rendering stages just for the setup, then the texture combiners grind through combining it all.
Hardware is amazing :wink:

(note: rendering alpha after opaque, but no inter-alpha sorting yet)
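For the curious, masking the alpha parts out of the depth write amounts to an alpha test running during the shadow-map pass: killed fragments leave no depth behind, so light passes through the vents. A plain-Java sketch of that idea (invented names, and an assumed cutoff of 0.5), not the actual GL setup:

```java
public class AlphaDepthWrite {
    // Shadow-map texel write with an alpha test: fragments whose texture
    // alpha falls below the cutoff are discarded, so vented geometry leaves
    // holes in the depth map and therefore holes in the cast shadow.
    static void writeDepth(float[][] shadowMap, int u, int v,
                           float depth, float texAlpha, float alphaCutoff) {
        if (texAlpha < alphaCutoff) {
            return;  // alpha-test kill: no depth written, light passes through
        }
        if (depth < shadowMap[v][u]) {
            shadowMap[v][u] = depth;  // standard nearest-depth write
        }
    }

    public static void main(String[] args) {
        float[][] map = {{1f, 1f}, {1f, 1f}};     // map cleared to the far plane
        writeDepth(map, 0, 0, 0.3f, 1.0f, 0.5f);  // opaque texel writes depth
        writeDepth(map, 1, 0, 0.3f, 0.1f, 0.5f);  // vent texel is killed
        System.out.println(map[0][0] + " " + map[0][1]);  // prints 0.3 1.0
    }
}
```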

Wouldn’t it be a better choice to replace shadow maps with stencil ones? I mean I’m noticing some very obvious aliasing on the edges of the coils’ shadows.
Good work btw :slight_smile:

I’m not sure what you mean…
Do you mean use a stencil-made texture for the shadow instead of a depth map? As far as the depth-map technique goes, you have to have a depth texture to compare against the actual depth to know what is in shadow… Even if you did use a stencil texture, it would have the same aliasing…
I don’t think I understand your suggestion, please explain :slight_smile:

[quote]Wouldn’t it be a better choice to replace shadow maps with stencil ones? I mean I’m noticing some very obvious aliasing on the edges of the coils’ shadows.
Good work btw :slight_smile:
[/quote]
If you mean switching from shadow maps to shadow volumes, then that’s not really a good idea. Getting alpha-tested geometry to work with shadow volumes is nigh on impossible :frowning: And there are so many neat tricks with alpha testing that I’d count that as a major limitation.

We’re going to see shadow maps becoming the standard soon anyway; at the moment the only thing holding them back is the fact that point light sources are just too damn expensive…

Yes, a true point light source = 6 (cubic) shadow-map passes (gulp!). Not to mention sorting out the layers for objects that cross the cube edges (fagetaboutit!).

We will be using only the setup above to create “directional” light shadows, so everything is shadowed at least from that one setup. Other lighting special effects, such as weapons or magic, will be dealt with on an individual basis for the near future.

But… count on cubic shadow map support as a future feature because as you say, it is the future way to go.
Such a nice (brute-force) complete shadow method…

I need to learn this technique. I must admit there are many things about stencil shadows I don’t like. The funny thing was I thought shadow maps were too expensive for moving objects and too heavy on texture memory… is this some new variation? Is this a good all-purpose shadowing technique?

As long as you have hardware support, it cooks.
The major cost after that is that you render everything multiple times.

1: Render from the light’s view: anything that casts shadows (no color writes, but alpha test for alpha-textured objects)

2: TexGen the depth map onto the entire scene (or whatever you want to receive shadows), so that’s a new set of UVs for everything.

3: Configure the texture combiners (or a pixel shader, soon) and render the scene from the view, taking care to shift for the multi-texture layers.

For both passes, I just render the entire view scene so shadows are complete.
So the big cost is pixel fill; however…
If you use DrawArrays or DrawElements and the geometry is not pre-bound (i.e. moved to the card), it is going to cost you in geometry transfer too. That is why I hacked my JOGL version to support vertex buffers first thing, so all my static geometry lives on the card for the best possible performance. It’s these kinds of improvements that give so much more gain than the nit-picking draw-list and scene-traversal optimizations, IMHO.
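The per-pixel compare that the register combiners do in hardware in step 3 boils down to a single comparison per fragment. A plain-Java illustration with invented names, assuming a small depth bias to avoid surfaces shadowing themselves ("shadow acne"):

```java
public class ShadowCompare {
    // A fragment is in shadow when its light-space depth is farther from the
    // light than the depth stored at that shadow-map texel, minus a small
    // bias that prevents a surface from shadowing itself.
    static boolean inShadow(float[][] shadowMap, int u, int v,
                            float fragmentLightDepth, float bias) {
        return fragmentLightDepth - bias > shadowMap[v][u];
    }

    public static void main(String[] args) {
        float[][] map = {{0.4f}};  // the light "saw" an occluder at depth 0.4
        System.out.println(inShadow(map, 0, 0, 0.9f, 0.01f));  // behind it: true
        System.out.println(inShadow(map, 0, 0, 0.2f, 0.01f));  // in front: false
    }
}
```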

BTW, I’m sure we’ll be able to exchange some goodies between this and Xith once I have some “cleaner” code. :slight_smile:

Yeah, I really need support for vertex buffers. I added some object-level profiling to Xith3D and found I was spending 30 percent of rendering time on trees, but only 9 percent on all the terrain combined. I am sorting and binding only when I need to, but really, sending the geometry over on every frame is just unnecessary in many cases… caching it on the card would be just awesome.

I was speaking of shadow volumes, as someone else mentioned; sorry for not expressing my point clearly enough.
I like shadow volumes because “I” think they’re the easiest to implement right after the cheap, planar shadows (casting a shadow on a flat surface by projecting the model’s triangles onto it).
One of my concerns, however, is when the camera gets inside the shadow volume, or when the volume is clipped by the near plane, as artifacts will appear and ruin your scene for sure.
I read what John Carmack (my hero :P) had to say about that issue, and I’m in the process of implementing his “reverse” method.
I read that shadow maps take two passes: the first is rendered from the light source to evaluate depth, and then the shadow map is applied depending on the z in the depth buffer.
Well, that’s cool, and somewhat fast if you use small shadow maps (256x256) and don’t intend to make your light source a point.
Pixar (Toy Story) uses shadow maps of 4k to get rid of the aliasing, but I don’t think any modern hardware can support textures over 2048x2048.
Anyways, I must run. See ya
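The cheap flat-surface shadows mentioned above (projecting the model's triangles onto a plane) come down to one 4x4 matrix built from the ground plane and the light position. A small sketch with my own helper names, assuming a plane (a,b,c,d) with ax+by+cz+d=0 and a homogeneous light L (w=1 for a point light):

```java
public class PlanarShadow {
    // Classic planar projection matrix: M = (P . L) * I  -  L * P^T.
    // Multiplying a vertex by M flattens it onto the plane along the
    // line from the light through that vertex.
    static float[][] shadowMatrix(float[] plane, float[] light) {
        float dot = 0;
        for (int i = 0; i < 4; i++) dot += plane[i] * light[i];
        float[][] m = new float[4][4];
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                m[i][j] = (i == j ? dot : 0) - light[i] * plane[j];
            }
        }
        return m;
    }

    // Apply the matrix to a homogeneous point and divide out w.
    static float[] project(float[][] m, float[] p) {
        float[] r = new float[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * p[j];
        return new float[] { r[0] / r[3], r[1] / r[3], r[2] / r[3] };
    }

    public static void main(String[] args) {
        float[] ground = {0, 1, 0, 0};   // the plane y = 0
        float[] light  = {0, 10, 0, 1};  // point light 10 units up
        float[][] m = shadowMatrix(ground, light);
        float[] s = project(m, new float[] {1, 5, 0, 1});
        System.out.println(s[0] + " " + s[1] + " " + s[2]);  // prints 2.0 0.0 0.0
    }
}
```

In GL you would push this matrix and re-draw the caster in flat black; it shares the shadow-volume problems of double-blending and only working on one flat receiver.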

Java Cool Dude,
I highly recommend downloading the demos/paper I posted in this forum earlier.
Once you see the demo run, you’ll be able to decide whether you like it or not.

I will say, it was quite “challenging” to set up for a general scene graph, not just a quick demo.
However, we believe it to be THE solution and aren’t looking back :smiley: