Future changes in the rendering code

This is a continuation of this thread.

[quote="<MagicSpark.org [ BlueSky ]>,post:11,topic:28379"]
…or at least can you explain what is really going to be changed ? And if a node is moved on the scenegraph, will the renderbin(s) be destroyed and recreated or will it be reused ? What is exactly a RenderBin ? If my idea on renderbins is right how costly it is to translate a node into a RenderBin ?
[/quote]
Well then… I’ll try to explain what I’ve done so far in this regard and what I’m planning to do…

First about the RenderBins: A RenderBin is a class that imitates a dynamic array like Vector or ArrayList. I guess it was written before those classes were available in Java or before they were performant. (In the future) I’ll check if we can replace this functionality of the RenderBin with an ArrayList.

It further holds a list of RenderBuckets, which in turn hold exactly one instance of RenderAtom each plus some additional information. I think we can move this additional information into the RenderAtom class to have fewer objects in the boat. But that’s for the future, too.

A RenderAtom holds exactly the information about one Shape3D that OpenGL needs to render it. You can regard it as a “flattened” version of the shape taken out of the scenegraph. The localToVWorld Transform3D is one piece of that information, for example.
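
To illustrate the structure, here is a rough, hypothetical sketch of how these three classes relate (field and method names are simplified guesses, not the actual Xith3D code):


// Rough, hypothetical sketch of the relationship described above (not the actual code).
public class RenderAtom
{
    private Shape3D shape;              // the scenegraph shape this atom was "flattened" from
    private Transform3D localToVWorld;  // world transform OpenGL needs to render the shape
    // ... plus references to geometry, appearance/state, etc.
}

public class RenderBucket
{
    private RenderAtom atom;            // exactly one atom per bucket
    private float distanceToView;       // example of the "additional information"

    public RenderAtom getAtom() { return atom; }
}

public class RenderBin
{
    private RenderBucket[] buckets = new RenderBucket[ 16 ];  // dynamic-array-like storage
    private int size = 0;

    public void addBucket( RenderBucket bucket )
    {
        if ( size == buckets.length )
        {
            RenderBucket[] newBuckets = new RenderBucket[ size * 2 ];
            System.arraycopy( buckets, 0, newBuckets, 0, size );
            buckets = newBuckets;
        }
        buckets[ size++ ] = bucket;
    }

    public RenderBucket getBucket( int i ) { return buckets[ i ]; }
    public int size()                      { return size; }
    public void clear()                    { size = 0; }
}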

There is more than one RenderBin in the system, but two of them are the most important ones: the opaque and the transparent bin. The opaque bin holds all opaque shapes and the transparent bin all transparent ones. There are more RenderBins; to sum them up (a rough code sketch follows the list):

  • RenderSetupBin: So far it only holds the information about a background color. I’ve set this feature to deprecated to be able to drop this RenderBin in the future, so we don’t have to maintain it, and to gain performance. The background color functionality is now a simple setter of the Canvas3D class, which is, by the way, even more logical. The background color is nothing more than the color OpenGL clears the screen with.
  • BackgroundBin: Holds all information about Background Nodes with NORMAL camera mode except the background color which resides in RenderSetupBin.
  • BackgroundBin: Holds all information about Background Nodes with FIXED camera mode except the background color which resides in RenderSetupBin.
  • BackgroundBin: Holds all information about Background Nodes with FIXED_POSITION camera mode except the background color which resides in RenderSetupBin.
  • ForegroundBin: Holds all information about Foreground Nodes with NORMAL camera mode
  • ForegroundBin: Holds all information about Foreground Nodes with FIXED camera mode
  • ForegroundBin: Holds all information about Foreground Nodes with FIXED_POSITION camera mode
  • ShadowBin: Holds all Shadow Nodes.
  • OpaqueBin: Holds all opaque Nodes.
  • TransparentBin: Holds all transparent (translucent) Nodes.

The last two will always be the most inhabited ones.
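
As a rough illustration of how one could picture that set of bins grouped in one place (field and getter names here are my guesses; only the class name RenderBinProvider actually appears later in this thread):


// Hypothetical sketch: one object holding the full set of bins listed above.
public class RenderBinProvider
{
    private final RenderBin renderSetupBin        = new RenderBin();
    private final RenderBin backgroundBinNormal   = new RenderBin();
    private final RenderBin backgroundBinFixed    = new RenderBin();
    private final RenderBin backgroundBinFixedPos = new RenderBin();
    private final RenderBin foregroundBinNormal   = new RenderBin();
    private final RenderBin foregroundBinFixed    = new RenderBin();
    private final RenderBin foregroundBinFixedPos = new RenderBin();
    private final RenderBin shadowBin             = new RenderBin();
    private final RenderBin opaqueBin             = new RenderBin();   // usually the biggest one
    private final RenderBin transparentBin        = new RenderBin();   // the other big one

    public RenderBin getOpaqueBin()      { return opaqueBin; }
    public RenderBin getTransparentBin() { return transparentBin; }
    // ... getters for the remaining bins
}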

The Shapes are spread over all these RenderBins to sort them by state. OpenGL then gets one state change (glEnable) before a whole bin is rendered, instead of one state change before each shape. State changes are said to be expensive.
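
A sketch of what that batching looks like in a render loop (this is inside a hypothetical renderer helper; setupStateForBin() and drawAtom() are placeholders, and the GL constant is just an example):


// Hypothetical sketch: one state setup per bin instead of one per shape.
private void renderBin( GL gl, RenderBin bin )
{
    setupStateForBin( gl, bin );    // e.g. gl.glEnable( GL.GL_BLEND ) once for the transparent bin

    for ( int i = 0; i < bin.size(); i++ )
    {
        RenderAtom atom = bin.getBucket( i ).getAtom();
        drawAtom( gl, atom );       // no per-shape state switching in here
    }
}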

Before each and every single frame all RenderBins are cleared and refilled by traversing the whole scenegraph. If we can track all changes that determine how the shapes are spread over the bins (this is the thing I need to check), we can put a shape into a different bin at the moment the appropriate change is made. Maybe a first pragmatic step would be to put them all into one big RenderBin when they’re created (not each frame) and iterate over that bin 10 times (as often as there are RenderBins now), rendering only the Atoms (Shape3Ds) that match the current state. I guess this is possible anyway and will be less expensive than traversing the whole scenegraph and clearing and filling the RenderBins each frame.
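
A minimal sketch of that pragmatic alternative, assuming each RenderAtom could report a hypothetical state id it belongs to (getStateId(), setupState() and drawAtom() are made-up placeholders):


// Hypothetical sketch: one persistent bin, iterated once per state instead of rebuilt per frame.
private void renderAllStates( GL gl, RenderBin allAtoms, int numStates )
{
    for ( int state = 0; state < numStates; state++ )   // e.g. 10 iterations, one per former bin
    {
        setupState( gl, state );

        for ( int i = 0; i < allAtoms.size(); i++ )
        {
            RenderAtom atom = allAtoms.getBucket( i ).getAtom();
            if ( atom.getStateId() == state )            // render only atoms matching this state
                drawAtom( gl, atom );
        }
    }
}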

During the necessary changes for the “low level multipass rendering” I moved the RenderBins from the Renderer class to the BranchGroup class to have one set of RenderBins for each pass. The place in the BranchGroup offers another important advantage: when a modification has been made on a shape, we can move upwards in the scenegraph tree, find the root BranchGroup (if the shape is live) and make the necessary changes in the RenderBinProvider (which holds the 10 RenderBins) that is associated with the BranchGroup. When a Node is not live this can’t be done and doesn’t need to be done. When a Node is added to a parent, the modification in the RenderBinProvider is done for the Node itself and possibly for its subnodes if it is a Group.
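
A sketch of that upward walk (isLive(), getParent() and the RenderBinProvider accessors are assumptions about the API, just to illustrate the idea):


// Hypothetical sketch: on a shape change, find the root BranchGroup and update its bins.
private void onShapeModified( Shape3D shape )
{
    if ( !shape.isLive() )
        return;                                  // not rendered, nothing to update

    Node node = shape;
    while ( node.getParent() != null )           // walk upwards to the root of the branch graph
        node = node.getParent();

    if ( node instanceof BranchGroup )
    {
        RenderBinProvider provider = ( (BranchGroup)node ).getRenderBinProvider();
        provider.updateBinsFor( shape );         // re-spread this shape over the right bins
    }
}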

Any questions? I hope this explains my plans ;D

Marvin

I rather think they were introduced by design, since they express a special purpose (to be a container for renderable objects). Maybe this is not so important anymore since we have generics by now, but it is harder to express and document the purpose of this list if it has no class representation. Maybe something like


public class RenderBin extends Vector<RenderAtom>
{
    // additional code
}

could be a good idea.

I think the separation of RenderBucket and RenderAtom was intended to make it possible for the renderer to store additional information without losing it when the stored RenderAtom changes or is recreated/replaced during the lifetime of the scene. So it might be a good concept to reduce recalculation/resorting effort, but it is not properly utilized in the current code base. Dunno if it is better to throw them away (since they are apparently not used to achieve this kind of performance gain) or to think about how they can be used to our advantage.

As far as I understand the initial thought behind xith, it was designed to create RenderAtoms/RenderBins for all kinds of rendering tasks and OpenGL communication, so you don’t have to call methods on “low level” components like Canvas3D directly to perform such tasks. Storing the background color in a RenderBin enables the scene graph to contain nodes that change it, which could be controlled by scene graph functionality like e.g. a Switch. Am I wrong here?

Are the nodes aware of the RenderAtoms/RenderBins they are represented by? I always thought the two layers - scene graph and rendering - are strictly separated, so that the scene graph has no knowledge of the underlying rendering mechanism. Also the user should IMHO not be confronted with rendering specific classes while using the scene graph (from an API-design perspective). So moving the RenderBins to a scene graph class doesn’t seem right to me.

Would that work with regard to possibly dynamic nodes/branches inside a scenegraph, like again a Switch? Wouldn’t that clutter the nodes with a lot of logic belonging to the renderer? How would you, for example, take culling into account in this approach, so that culled renderbins are not updated?

Just some questions to better understand the side effects of these.

Well, the RenderBin shouldn’t be a fully featured Vector. I rather thought of replacing the (dynamically grown) array by a List implementation (like Vector or ArrayList) and making use of Collections.sort(), but everything inside the class. If RenderBin extended Vector, we’d have to take care of overriding all the modification methods like add(), so it is better to just have one addAtom() method like it is now and have a private field of type List instead of RenderAtom[].
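
Roughly what I mean by that composition approach (a sketch; the comparator parameter and the getter names are just examples):


import java.util.*;

// Hypothetical sketch: RenderBin keeps its List private instead of extending Vector.
public class RenderBin
{
    private final List<RenderAtom> atoms = new ArrayList<RenderAtom>();

    public void addAtom( RenderAtom atom )
    {
        atoms.add( atom );                       // the single public modification method
    }

    public void sortAtoms( Comparator<RenderAtom> comparator )
    {
        Collections.sort( atoms, comparator );   // replaces hand-written sorting on the array
    }

    public RenderAtom getAtom( int i ) { return atoms.get( i ); }
    public int size()                  { return atoms.size(); }
    public void clear()                { atoms.clear(); }
}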

A RenderAtom is created once and attached to a Shape3D object and is never removed from it (except when the shape itself is discarded). Any time a shape has changed, this additional information is reset. So this information is always tied to exactly one and the same RenderAtom.

Canvas3D is not “low level”. It is actually the most high level class inside the render package. It even was part of the scenegraph package before I moved it into the render package. It defines the absolute high level abstraction interface to the window that is rendered to.

Well, good point. I don’t know if anyone ever wants to change the background color via a Switch node, but it is a possibility one should not lose. You’re right. To make it possible without the necessity of having this RenderSetupAtom only for the background color, I suggest a much cleaner way: write an interface named “Switchable”, which has only one method: setEnabled(boolean). This interface is then implemented by the Node class, and the Switch makes use of it. Then you could write your own implementation of the Switchable interface for anything you like. The Switchable object will then not really be in the scenegraph like a Node would be, but it would be part of the scenegraph in this way.
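
A minimal sketch of the interface plus a made-up example implementation for the background color (the Colorf type and the exact setBackgroundColor() signature are assumptions):


// The proposed interface: one method, implemented by Node and used by Switch.
public interface Switchable
{
    public void setEnabled( boolean enabled );
}

// Hypothetical example: a Switchable that applies a background color to a Canvas3D.
public class BackgroundColorSwitchable implements Switchable
{
    private final Canvas3D canvas;
    private final Colorf color;

    public BackgroundColorSwitchable( Canvas3D canvas, Colorf color )
    {
        this.canvas = canvas;
        this.color = color;
    }

    public void setEnabled( boolean enabled )
    {
        if ( enabled )
            canvas.setBackgroundColor( color );  // the setter mentioned earlier in this thread
    }
}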

I really like this idea and will write and commit it. I’ll also write a first implementation of it (for the TK) for switching the background color.

They’re not “strictly” separated, but “mostly” separated. The Shape3D class has a field of type RenderAtom just like the RenderAtom holds a field of type Shape3D. So it actually is aware of the one attached RenderAtom, but doesn’t make use of this knowledge (though it should, in an abstract way, to increase performance). And it isn’t aware of the RenderBins. Attaching the RenderAtom object to the Shape3D instance is necessary to avoid being forced to hold them in a Map<Shape3D, RenderAtom> and Map<RenderAtom, Shape3D> or something like that. It is a pragmatic and performant way.
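
In code, that mutual link is roughly this (simplified; field and accessor names assumed):


// Hypothetical sketch of the mutual reference, which avoids lookup maps in both directions.
public class Shape3D
{
    private RenderAtom renderAtom;               // set once when the atom is created

    public RenderAtom getRenderAtom()            { return renderAtom; }
    public void setRenderAtom( RenderAtom atom ) { this.renderAtom = atom; }
}

public class RenderAtom
{
    private final Shape3D shape;                 // the shape this atom was created for

    public RenderAtom( Shape3D shape )
    {
        this.shape = shape;
        shape.setRenderAtom( this );             // attach the back-reference on creation
    }

    public Shape3D getShape() { return shape; }
}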

The two packages cannot even be strictly separated. The scenegraph has very little knowledge of the renderer, while the renderer has very much knowledge of the scenegraph.

For the multipass rendering a better idea just came to my mind: we/I could add a set of RenderPass objects to the Renderer class instance, where each of them knows of a single instance of BranchGroup. This way the Renderer and the scenegraph would be better separated on this point, and the logic would even be better.

To get away from the each-frame-renderbin-refresh thing:
There should be an interface called “NodeChangeListener” or something like that, of which an instance is added (or rather set - no need for a list of them!) to any Shape3D instance the Renderer finds. So the Renderer can update the RenderBinProvider in case of any change on the shape.

To further improve performance for the case of several changes on a node at once, there should be a flag set by a beginTransaction/commit mechanism: first invoke the beginTransaction() method on the shape, then make changes while no RenderBin is updated, then invoke the commit() method, and the RenderBinProvider is notified of the node change through the NodeChangeListener only once.

EDIT: Of course the Group class needs such a listener, too, to notify the listener of child add() or remove() (detach).
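
A sketch of the whole mechanism - the listener interface plus the beginTransaction()/commit() flag on the node (everything beyond the names already mentioned above is an assumption):


// The proposed single-listener callback (no list of listeners needed).
public interface NodeChangeListener
{
    public void onNodeChanged( Node node );
    public void onChildAdded( Group parent, Node child );
    public void onChildRemoved( Group parent, Node child );
}

// Hypothetical sketch of how a Shape3D could defer notifications inside a transaction.
public class Shape3D extends Node
{
    private NodeChangeListener listener;    // set by the Renderer when it finds the shape
    private boolean inTransaction = false;
    private boolean dirty = false;

    public void setChangeListener( NodeChangeListener listener )
    {
        this.listener = listener;
    }

    public void beginTransaction()
    {
        inTransaction = true;
    }

    public void commit()
    {
        inTransaction = false;
        if ( dirty && listener != null )
            listener.onNodeChanged( this ); // exactly one notification for all batched changes
        dirty = false;
    }

    protected void onAttributeChanged()     // to be called by every state-changing setter
    {
        if ( inTransaction )
            dirty = true;
        else if ( listener != null )
            listener.onNodeChanged( this );
    }
}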

See above.

I don’t think a whole RenderBin is ever culled. I think I even know it, but I may be wrong. Will check it.

Marvin

I seem to have misunderstood you (damn it ;)). I thought you wanted to replace the whole RenderBin class with all its occurrences by Vector, and my proposal was just a compromise to keep the class. But now I realize you just want to get rid of the array in RenderBin, which I can understand. On the other hand it might have been a performance optimization to use arrays, as discussed in this otherwise unrelated thread.

I saw that the (freshly removed :o) RenderBucket contained a distanceToView property some time ago: koders.com archive. So it was important to have it to be able to set up multiple views to the same scene. But since xith is not depth sorted anymore, it is really just one object more.

I knew “low level” was the wrong word. I just wanted to say that all (animatable) modifications of the render state should be made through the scene graph, which would result in a RenderBin.

I think you lost me here. How would you integrate this into the rendering and/or scene graph management? Would there not be the danger of blurring the API with respect to which subsystem is used for which purpose? And isn’t there an equivalent abstraction provided by Behaviours (java3d - the xith version has no docs :slight_smile:)?

seems reasonable :slight_smile:

I like that

I thought about that myself, but I think there are too many cases in a moving, animated world where the Listener gets notified for nodes that are not relevant for the next frame. I rather thought about a prediction cache of nodes most probably relevant for the next period of time - based on the last position, culling status, animation nature etc. - that is updated on a long running thread, and just asking those about changes that need a RenderAtom update. In general I think this is a good idea, but I don’t know how difficult it would be to implement, because the scene graph is a moving target.

Mathias

I refactored this property to the RenderAtom class as well as the map property ;).

I guess “animatable” is once more the wrong word for changing the background color ;D. “Dynamic” would be a better word ;). And with the Switchable interface that I described above, this would be possible without “directly” accessing the Canvas3D instance.

What do you mean by this?

Well, maybe. The Behaviors system is bloated. Such an interface would be quite handy and useful.

Don’t know if I got you right (or you got me right), but the cache would mean additional memory being wasted, wouldn’t it? I think the listener will never be notified of unchanged nodes, as far as I can see. And it is a really slim solution that should be fast ;).

Marvin

I rethought the thing with the Switchable interface and discarded the idea ;). The Switch just works another way. And it certainly makes sense to have the possibility to hold background color information in the scenegraph, even if it is always wise to double-check whether you can’t just set the background color by invoking the setBackgroundColor() method of the Canvas3D class instead. The background color information in the scenegraph makes sense e.g. for scenes loaded by a model loader, where the background color is nested in the model.

You should keep in mind that removing a Background Node won’t reset the background color to the default. So the Switch Node will only switch the background color if the currently active switch child also contains a Background Node (with a different color). But maybe we could add support for that, if anybody needed it.

Marvin

And what if there are two background nodes in the scenegraph? Conflict.

That’s one of the reasons why I found it illogical to have the background information in the scenegraph. Well, two background nodes will conflict in general. But for the background color information part of it: at the end the BackgroundColorRenderAtom objects will be in the RenderSetupBin of the RenderBinProvider in some order. The last in that order will override all previous ones.

Maybe it would be a good idea to have a single allowed place (in the BranchGroup) to put Background and Foreground Nodes, since it will never be of interest to have more than one of each in the scenegraph (or in one render pass). Do you agree?

Marvin

I think your canvas.setBackgroundColor() is fine. I don’t see why we should use nodes/renderatoms/bins/buckets/whatever for that.

Well, if one creates the scene (or this part of it) in code, you should always prefer to use Canvas3D.setBackgroundColor(). But if the background color is to be set by e.g. a level loader, this possibility is necessary, I think. But we should really prefer to have one common place for “the single one” Background Node and “the single one” Foreground Node in a RenderPass. Am I right that there should be no need for more than one of each of these nodes per RenderPass?

This would enable us to not be forced to handle the RenderSetupBin each frame ;).

What happens in the case you want to have two views of the same scenegraph then?

The problem might be that I am always looking for a catch, so the Listener implementation seems too simple to be good enough :wink: (despite the challenge of managing a lot of references, so that no listener stays connected). Also memory waste/garbage generation always comes to my mind when using the observer pattern, but this is only true with multicast event notification. If you only execute a callback on a single listener and just pass the node as the only parameter, there should be no object creation, so it’s just fine.

About the unnecessary listener notifications: I thought of circumstances where complete branches are ignored for rendering because of some nifty optimization technique, and so the changes to the “invisible” nodes could rather be ignored. After rethinking that assumption, it’s the responsibility of the user to manage the scene graph in a way that suppresses changes to nodes that will not be rendered.

Also I could explain the cache idea a bit better, but I am too lazy to do so in written form, and I also came to the conclusion that it would not fit right into xith’s architecture… and that I wanted to revive the terrain renderer in the first place :wink:

Mathias

The setDistanceToView and getDistancetoView methods are never used, but the field distanceToView is. This field is recalculated every frame, once per View. So this definitely won’t be a problem.

Marvin

Done. :slight_smile:

Now the way to go is to create a new BranchGroup (only the empty constructor has remained) and add it to a Locale. Then create an instance of RenderPass, which is linked with the BranchGroup instance, and pass it to the addRenderPass() method of the Renderer class:


VirtualUniverse universe = new VirtualUniverse();
Locale locale = new Locale();

BranchGroup bg = new BranchGroup();
bg.addChild( bla ); // add your scene stuff here
locale.addBranchGraph( bg );

// notice: the RenderPass constructor receives the BranchGroup instance as the first parameter!
RenderPass pass = new RenderPass( bg, new RenderPassConfig(RenderPassConfig.PARALLEL_PROJECTION, ...add more parameters?...) );
universe.getRenderer().addRenderPass( pass );

When you’re using (Ext)Xith3DEnvironment it is a (small) bit easier:


Xith3DEnvironment env = new Xith3DEnvironment();

BranchGroup bg = new BranchGroup();
bg.addChild( bla ); // add your scene stuff here
env.addBranchGraph( bg );

// notice: the RenderPass constructor receives the BranchGroup instance as the first parameter!
RenderPass pass = new RenderPass( bg, new RenderPassConfig(RenderPassConfig.PARALLEL_PROJECTION, ...add more parameters?...) );
env.addRenderPass( pass );

Enjoy ;D

Marvin

And even easier with this new method:


Xith3DEnvironment env = new Xith3DEnvironment();

BranchGroup bg = new BranchGroup();
bg.addChild( bla ); // add your scene stuff here

env.addBranchGraph( bg, new RenderPassConfig(RenderPassConfig.PARALLEL_PROJECTION, ...add more parameters?...) );

Enjoy ;D

Marvin