Scenegraph Design

For a while now I’ve been working on a scenegraph (yes, why on earth would I want to do that) that is unique in that it separates state management from scene management (two trees, optimized and updated separately). I find this very understandable and easy to use, since you describe how the content looks (state tree) and then where the content happens to be (spatial tree).
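As a minimal sketch of the two-tree idea (the class names here are illustrative, not my actual implementation), the same geometry is referenced from both trees: one tree says how it looks, the other says where it is:

```java
// Illustrative sketch only: two separate trees referencing the same geometry.
import java.util.ArrayList;
import java.util.List;

class Geometry {
    String name;
    Geometry(String name) { this.name = name; }
}

// State tree: describes how the content looks.
class StateNode {
    List<StateNode> children = new ArrayList<>();
    Geometry geometry;             // leaves reference geometry
    String appearance = "default"; // stand-in for material/texture/shading state
}

// Spatial tree: describes where the content is.
class SpatialNode {
    List<SpatialNode> children = new ArrayList<>();
    Geometry geometry;               // the same Geometry instance can appear here too
    float[] translation = {0, 0, 0}; // stand-in for a full transform
}
```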

However, one area that is a little tricky is when states have spatial components (namely fog and lights). From a structural point of view, does it make more sense:

  1. to add lights and fog to the spatial tree and have any children be lit
    or …
  2. to make lights and fog part of the state tree since they affect how content looks

Pros and cons of 1: geometry is lit/fogged based on its spatial location, which makes sense. It’s harder to integrate lit and non-lit geometries in the same spatial tree, and harder to make this work well with the state tree (given my current implementation).

Pros and cons of 2: the content description of how something looks is consistent for any type of state; however, the programmer has to manage when geometries are lit.

Opinions are appreciated, thanks!

I’m also working on a little scenegraph (why on Earth…!), and I’m dealing with similar design decisions. I’m just building it for my own projects, so it gets refactored every few days, and gradually the beast begins to take shape.

In my design, the ‘spatial states’ are in the ‘spatial tree’ (a hierarchy of ‘transform nodes’). They have a bounding volume, and every other ‘transform node’ (and the items/geometry it holds) that happens to intersect that ‘spatial state node’ gets that state attached.

That way you can easily (for example) make a car (with lights) drive through a scene and only make the items in the bounding-volume of the lights get a specific state.
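Very roughly, the attachment pass looks something like this sketch (the names are made up, and I’m using sphere bounds purely for illustration):

```java
// Illustrative sketch: attach a spatial state (e.g. a car's headlights) to every
// transform node whose bounding sphere intersects the state's volume of influence.
import java.util.ArrayList;
import java.util.List;

class BoundingSphere {
    float x, y, z, radius;
    BoundingSphere(float x, float y, float z, float r) { this.x = x; this.y = y; this.z = z; this.radius = r; }
    boolean intersects(BoundingSphere o) {
        float dx = x - o.x, dy = y - o.y, dz = z - o.z, rSum = radius + o.radius;
        return dx * dx + dy * dy + dz * dz <= rSum * rSum;
    }
}

class TransformNode {
    BoundingSphere bounds;
    List<SpatialState> attachedStates = new ArrayList<>();
    List<TransformNode> children = new ArrayList<>();
}

class SpatialState {
    BoundingSphere influence;

    // Walk the spatial tree and attach this state wherever the volumes overlap.
    void attachTo(TransformNode node) {
        if (node.bounds != null && node.bounds.intersects(influence)) {
            node.attachedStates.add(this);
        }
        for (TransformNode child : node.children) {
            attachTo(child);
        }
    }
}
```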

The only downside is that any item can have any state, which can be worked around by dynamically generating the shaders… (I’m not there yet…)

You know what, I’m working on a scenegraph as well! Although it’s a bit more simplistic in design. The scenegraph has a list of render passes. Each render pass has a list of shapes. A shape contains all the data needed to render it: state, transform and geometry. There are utility classes that can be used to create transform trees, but all they do is update the shape list. I could also create a state tree if I needed it.
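Very roughly, the layout looks like this (class names are just placeholders):

```java
// Placeholder sketch of the pass/shape layout.
import java.util.ArrayList;
import java.util.List;

class Shape {
    Object state;      // everything needed to render it: state...
    float[] transform; // ...transform...
    Object geometry;   // ...and geometry.
}

class RenderPass {
    final List<Shape> shapes = new ArrayList<>();
}

class SimpleSceneGraph {
    final List<RenderPass> passes = new ArrayList<>();

    void render() {
        for (RenderPass pass : passes) {
            for (Shape shape : pass.shapes) {
                // bind shape.state, load shape.transform, draw shape.geometry
            }
        }
    }
}
```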

As for your question: you could do as Java3D does and add influence bounds/trees. Lights are added to the state tree, but each light also contains a list of bounds, where objects inside the bounds are lit, overriding the state tree. Or a list of spatial trees. That way the user can decide.

Or you could force the user to add it to both trees. Its location in the state tree defines what nodes are lit; its location in the spatial tree defines the location of the light. If you add a light to a car (a headlight), it should follow the car as it moves.
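As a rough sketch of the “both trees” option (all names here are made up): the spatial node drives the light’s position, while the state node decides what gets lit:

```java
// Illustrative sketch of the "both trees" option.
class Light {
    float[] position = new float[3]; // updated from the spatial tree
    boolean enabled = true;
}

// Lives in the spatial tree: the headlight follows the car as it moves.
class LightTransformNode {
    final Light light;
    float[] worldTranslation = new float[3];
    LightTransformNode(Light light) { this.light = light; }

    void updateWorldData() {
        System.arraycopy(worldTranslation, 0, light.position, 0, 3);
    }
}

// Lives in the state tree: everything below this node is lit by the light.
class LightStateNode {
    final Light light;
    LightStateNode(Light light) { this.light = light; }

    void apply() {
        if (light.enabled) {
            // push the light's position/colour into the renderer here
        }
    }
}
```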

At the moment I have a light node that is in the spatial tree that updates a light’s position, but nothing will be influenced by the light unless the light is in the state tree. I’ve been considering allowing some states (ones with spatial aspects) to be attachable to the spatial tree, and then they get accumulated at the leaf level and merged with the rest of the states in the state tree.

I’m hesitant to use influence bounding or intersections since that would be computationally more expensive and would have to be updated each frame instead of when the tree structure changes.

Side note on my scenegraph:
Geometry isn’t part of the spatial tree; it’s in the state tree. This makes it very easy to set up state for different passes of the same scene.
States farm the actual OpenGL code out to a state peer; the peers are currently implemented for fixed functionality. When I need to, I can use the peers to dynamically generate shader code without having to build a layer on top of my scenegraph or modify the core components.
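Roughly sketched (these interface and method names are illustrative, not my real API):

```java
// Illustrative sketch: the state is pure data, the peer turns it into GL calls
// (fixed-function today) or generated shader code later.
interface StatePeer<T> {
    void apply(T state);   // issue the OpenGL calls (or emit shader snippets) for this state
    void restore(T state); // undo/reset when leaving the subtree
}

class MaterialState {
    float[] diffuse = {1f, 1f, 1f, 1f};
}

class FixedFunctionMaterialPeer implements StatePeer<MaterialState> {
    @Override public void apply(MaterialState s) {
        // e.g. gl.glMaterialfv(GL2.GL_FRONT_AND_BACK, GL2.GL_DIFFUSE, s.diffuse, 0);
    }
    @Override public void restore(MaterialState s) {
        // reset to the default material here
    }
}
```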

Everybody writes a scenegraph :s

This is an eternal dilemma for the reasons you’ve stated. I don’t think there’s a perfect solution, so I would go with whichever solution makes it less likely that the user of your scene-graph screws up (even if that’s only you!) - and in that case it’s option number 1.

Yep, that realization has been staring me in the face for some time. I’ve been reluctant to accept it since it involves changing some substantial things, but I think I’ve finally figured out how to do it at an implementation level and should begin updating soon.

I just wanted to share my thoughts on this after working on a simplistic engine for a few months.

At first, I really liked the idea of a scenegraph. It’s a very intuitive concept. However, as I have begun to use it more and look at its design, I realize it’s a limited tool. I’m currently refactoring based on the observation that what you really want is a database with different views. A scenegraph is really just one view of the database: the relational view. You can also have a bounding volume hierarchy view, an event handling view, and so on.

Anyway, what I’ve noticed is that many engines and graphics people use the scenegraph as the central database. This creates a scenario where you have a Node population explosion. If the scenegraph is behaving as your de facto database, everything must be expressed as a node. So you end up deriving objects from the Node class or interface, and you have a Node population explosion: everything either has a node or is a node. Whereas, if you look at your game objects (or whatever your purpose is) as simple IDs, and have various views of those IDs, the system becomes simpler.

So, I’m suggesting that people use scenegraphs to handle relations between objects, and perhaps as an event messaging structure, and use a central database as the main store for information about game objects.
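As a very rough sketch of what I mean (everything here is made up for illustration): game objects are plain IDs, and each view keeps its own table keyed by those IDs:

```java
// Illustrative sketch: game objects are plain IDs; each "view" is a table keyed by ID.
import java.util.HashMap;
import java.util.Map;

class GameObjectDatabase {
    private int nextId = 0;
    final Map<Integer, float[]> transforms = new HashMap<>();      // spatial view
    final Map<Integer, String> appearances = new HashMap<>();      // render view
    final Map<Integer, float[]> boundingSpheres = new HashMap<>(); // culling view

    int createObject() { return nextId++; }
}

class Demo {
    public static void main(String[] args) {
        GameObjectDatabase db = new GameObjectDatabase();
        int car = db.createObject();
        db.transforms.put(car, new float[]{0, 0, -10});
        db.appearances.put(car, "red_paint");
        db.boundingSpheres.put(car, new float[]{0, 0, -10, 2.5f});
        // A scenegraph then becomes just one more view that relates IDs,
        // rather than the place where every object has to live as a Node subclass.
    }
}
```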

This is only a conclusion after a few months of prototyping and playing. I’m very very curious as to people’s views on this.

cheers

If you have neither pedagogical nor scientific reasons to write a scenegraph, maybe you should avoid reinventing the wheel and use an existing one.

Yes, this is the right way. You now have valuable experience which a lot of people are missing, and thus they still do single-tree scenegraphs. Though I’m more in favor of specialized renderers, because they can do miracles that are not possible (or hard to do) with a universal structure. Of course, some parts of them might look like multi-tree scenegraphs :slight_smile:

Having your own experience is the best teaching tool ever, because you know exactly why something is bad, and also why it is very good in some rare case, instead of using a generally better approach (which would be very bad in that rare case).

;D I use something even better, something that you could call a multi-graph scenegraph.

That is what I meant by “pedagogical interest”. Good idea.

Yeah, I needed a specialized renderer. Most of the OpenGL stuff I’ve run into is focused on 3D, whereas the stuff I’m working on is mostly 2.5D (fake 3D), so I have specialized needs. However, I just brought this up because as I wrote the code I realized that the collection of Node types just kept growing and growing, because everything needed to be expressed as a Node. After a while I thought that maybe that wasn’t the best way, and wanted to see what others had run into. Are you saying it’s NOT worth creating a more generic ID-based approach? Sorry, just trying to clarify because I’m not sure what you were suggesting, but I want to know since you are clearly an experienced JOGL user.

I think your idea is very interesting, and probably quite useful if done right. There was an article I read before starting down that path that describes an idea very much like yours (don’t know the link anymore). In the end, I’ve found a good approach for my interests, that lies somewhere between a single scene graph and views of a database:

There are two main components: the scene graph, which describes the relations, and a renderer that takes in “render atoms”, which describe the minimal set of things needed to render an object (geometry, appearance and location). This enables the renderer to be used as-is, or lets you implement radically new organizational tools to describe the scene. Because my knowledge of databases and anything of that nature is limited, I chose to implement a simple scene graph that relies on the renderer.
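A minimal sketch of what a render atom amounts to (field and interface names are illustrative, not my actual code):

```java
// Illustrative sketch: the renderer consumes atoms and doesn't care who produced them.
class RenderAtom {
    Object geometry;                         // vertex data to draw
    Object appearance;                       // material/texture/shader state
    float[] worldTransform = new float[16];  // where to draw it
}

interface Renderer {
    void queue(RenderAtom atom); // a scene graph, a database view, or a hand-built
    void flush();                // list can all feed atoms into the same renderer
}
```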

My work is almost complete and if everything goes as planned, the engine should be extremely flexible and very efficient. I’ll post an announcement when I’ve finished what’s needed to make it usable.

Bad idea! I tried this a long time ago, and unless you have some way of automatically (and very quickly) creating an octree (or similar) from the spatial transforms, then neighbouring objects which the user would assume to be lit won’t be…

It will also make shadows very difficult.

What I did to resolve this is have the Light as a Transform inside the tree, but have a LightState in each spatial to which you can attach a Light. This way, your Light will move with the scenegraph and can have controllers on it…etc, but will also influence the correct spatials.
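Roughly sketched (class names here are illustrative, not the real API):

```java
// Illustrative sketch: the Light hangs off a transform (so it moves with its parent),
// while a LightState on each spatial lists which lights actually influence it.
import java.util.ArrayList;
import java.util.List;

class Light {
    float[] worldPosition = new float[3];
}

class LightTransform {
    final Light light;
    float[] worldTranslation = new float[3];
    LightTransform(Light light) { this.light = light; }

    void updateWorldData() {
        // the light follows whatever this transform is attached to
        System.arraycopy(worldTranslation, 0, light.worldPosition, 0, 3);
    }
}

class LightState {
    final List<Light> lights = new ArrayList<>(); // lights influencing this spatial
}

class Spatial {
    final LightState lightState = new LightState();
    void attachLight(Light l) { lightState.lights.add(l); }
}
```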

HTH, DP :slight_smile:

It sounds like we’re approaching it from a similar angle. I have been basing my work on this thesis paper:

http://www.plm.eecs.uni-kassel.de/plm/fileadmin/pm/publications/chuerkamp/Diplomarbeiten/Johannes_Spohr_thesis.pdf

While most renderers are basically variations on a theme, I found that the way he organizes the flow fits the way I imagine it more closely. For example, while there is much amazing information in Eberly’s book, this thesis just made more sense to me.

In Mr. Spohr’s thesis, the system provides a nice atomic rendering scheme that is implementation-independent, just as you describe. In his system, the atomic unit is the RenderJob. At first, I was kind of grafting the Eberly / spatials / scenegraph approach onto Spohr’s, but I soon realized that they were at odds with each other because of the dependence on the scenegraph. In Spohr’s approach there is still a scenegraph used (so far, it seems silly and/or nearly impossible to get away from a scenegraph entirely), but I realized it wasn’t DEPENDENT on one, and that maybe shoehorning everything into the well-trodden Eberly approach isn’t exactly what’s required. Anyway, that’s how I came to the conclusion that maybe the scenegraph should only be used in a limited scope.

I still haven’t done enough to really evaluate that hypothesis though, and have much to do. But it helps to bounce some ideas around a bit and see what people are doing.

thanks for your input.

cheers

Originally I went with the approach you mention. However, it became quite tedious to set up lighting in a scene that way. In the end, I figured out a way to put lights directly in the scene with very little overhead. Each light in the scene was assigned a bounds. Every visible scene leaf would be checked against every visible light’s bounds; if the bounds intersected, the leaf would be lit.
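Roughly, the per-frame check looked something like this (a simplified sketch with made-up names, using sphere bounds for illustration):

```java
// Illustrative sketch: once per frame, test every visible leaf against every
// visible light's bounds and light the leaf if they intersect.
import java.util.ArrayList;
import java.util.List;

class Sphere {
    float x, y, z, r;
    Sphere(float x, float y, float z, float r) { this.x = x; this.y = y; this.z = z; this.r = r; }
    boolean intersects(Sphere o) {
        float dx = x - o.x, dy = y - o.y, dz = z - o.z, rs = r + o.r;
        return dx * dx + dy * dy + dz * dz <= rs * rs;
    }
}

class LightSource {
    Sphere lightBounds; // how far this light reaches
}

class SceneLeaf {
    Sphere worldBounds;
    final List<LightSource> activeLights = new ArrayList<>();
}

class LightingPass {
    // Runs on the already-culled (visible) sets, once per frame.
    void assignLights(List<SceneLeaf> visibleLeaves, List<LightSource> visibleLights) {
        for (SceneLeaf leaf : visibleLeaves) {
            leaf.activeLights.clear();
            for (LightSource light : visibleLights) {
                if (leaf.worldBounds.intersects(light.lightBounds)) {
                    leaf.activeLights.add(light);
                }
            }
        }
    }
}
```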

@skytomorrownow
That thesis is quite interesting, thanks for posting the link

I agree that it is a tedious process adding lights, which is why I implemented the bounds system on top of the State. When a spatial intersects a Light’s Bounds, the Light gets added to the spatial’s state tree.

The reason why this is superior to the simple bounds method is that it can be done during traversal of the scene on a worker thread. I found that with a multithreaded engine design, state integrity was not always guaranteed, and this was one way of ensuring it.

But I’m glad you’ve got a solution that works for you :slight_smile:

DP