[quote="<MagicSpark.org [ BlueSky ]>,post:6,topic:28121"]
- which vertex format do you think would be more efficient ? How would you do a “compact vertex format” ? All in a single float[] ?
[/quote]
There is a lot of information about performance issues on the NVidia and ATI developer web sites. For the vertex format, the advice is to use a 'compact' vertex format, that is to say that all your array pointers point into the same memory block of the VRAM. In practice, that means packing all your data into a single VBO and setting the array pointers with a non-zero stride and offset. It is also advised to align your data, padding it if necessary; I never tried that last bit. There are lots of ways to implement this, and there is no such thing as a universal geometry class shared by the different 3D engines.
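To give a rough idea of what the packing looks like (the class name and the chosen layout below are just one possible choice, not taken from any particular engine), here is a small Java sketch that interleaves positions, normals and texture coordinates into a single direct FloatBuffer:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class InterleavedGeometry {

    // Assumed layout per vertex: position (3 floats), normal (3 floats),
    // texcoord (2 floats) => 8 floats = 32 bytes per vertex.
    public static final int STRIDE_BYTES    = 8 * 4;
    public static final int POSITION_OFFSET = 0;      // bytes
    public static final int NORMAL_OFFSET   = 3 * 4;  // bytes
    public static final int TEXCOORD_OFFSET = 6 * 4;  // bytes

    /** Packs separate attribute arrays into one interleaved buffer. */
    public static FloatBuffer interleave(float[] positions, float[] normals, float[] texCoords) {
        int vertexCount = positions.length / 3;
        FloatBuffer buf = ByteBuffer
                .allocateDirect(vertexCount * STRIDE_BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        for (int i = 0; i < vertexCount; i++) {
            buf.put(positions, i * 3, 3);
            buf.put(normals,   i * 3, 3);
            buf.put(texCoords, i * 2, 2);
        }
        buf.rewind();
        return buf;
    }
}
```

The resulting buffer is uploaded once into a single VBO; every array pointer call then references that same buffer, using STRIDE_BYTES as the stride and the per-attribute byte offsets above.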
[quote="<MagicSpark.org [ BlueSky ]>,post:6,topic:28121"]
- how to implement that efficiently ? Or, in other words, how should we sort the calls ?
[/quote]
I don't understand what you mean by 'that'. If it is the vertex format, I did not find a perfect answer in any engine. For my own engine, I chose to let the programmer create tuple arrays which reference memory blocks that hold the data in NIO buffers. This means that data management is left to the engine user. That may not sound satisfying, but it really fits well with the design of my engine (see below).
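To make the idea a bit more concrete, here is a rough sketch of what such a tuple array could look like (the interface and method names below are only illustrative, not the actual API of my engine):

```java
import java.nio.FloatBuffer;

/** Hypothetical view onto a user-owned NIO buffer: the engine never copies the
 *  data, it only remembers where each tuple lives (offset + stride, in floats). */
public interface ITupleArray {
    FloatBuffer buffer();   // the memory block owned by the engine user
    int tupleSize();        // e.g. 3 for a position, 2 for a texcoord
    int offsetFloats();     // index of the first float of the first tuple
    int strideFloats();     // distance between consecutive tuples

    /** Reads tuple 'index' into 'dest' without copying the whole array. */
    default void get(int index, float[] dest) {
        int base = offsetFloats() + index * strideFloats();
        for (int i = 0; i < tupleSize(); i++) {
            dest[i] = buffer().get(base + i);
        }
    }
}
```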
[quote="<MagicSpark.org [ BlueSky ]>,post:6,topic:28121"]
- what exactly gets recomputed each frame ?
[/quote]
The scenegraph is traversed for each frame. Each node is tested for culling, and each node goes through all the renderNode / getAtom methods of the View class. The cache system caches render atoms with some sort of lazy updating (which is rather buggy). In the Quake3 benchmark, Xith3D spends most of its time traversing the scenegraph. With a system that only works on modification, you simply don't spend a single millisecond traversing nodes that did not change. From the (limited) understanding of Java3D that I have, this is one of the main differences; Java3D creates a clone of the node which is updated when its user node sends change events.
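To illustrate the "only work on modification" idea, here is a tiny sketch (with made-up names) of a dirty set driven by change events:

```java
import java.util.HashSet;
import java.util.Set;

/** Minimal sketch: nodes report changes through listeners, and the renderer only
 *  revisits the nodes in the dirty set instead of re-traversing the whole graph. */
public class ChangeDrivenUpdater {

    public interface ChangedNode { void refreshRenderState(); }

    private final Set<ChangedNode> dirty = new HashSet<>();

    /** Called from a node's change listener whenever one of its fields changes. */
    public void markDirty(ChangedNode node) {
        dirty.add(node);
    }

    /** Called once per frame: cost is O(number of changes), not O(size of the graph). */
    public void flush() {
        for (ChangedNode node : dirty) {
            node.refreshRenderState();
        }
        dirty.clear();
    }
}
```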
[quote="<MagicSpark.org [ BlueSky ]>,post:6,topic:28121"]
BTW, are you working on a new engine ? How is it going ? And how do you handle these issues ?
[/quote]
I am working on my own engine and it is going rather well. The development is not as fast as I would like, but my first child was born a few months ago, I have switched jobs and I am moving into a new flat, so it's a bit more difficult to keep up with my hobby projects…
Regarding the design of my engine: I tried something fairly different from the engines I have used (Crystal Space, then OpenSceneGraph, then Java3D, then Xith3D).
The first point is that I discovered there is no perfect design for an engine; depending on the project I had, I preferred a very high-level engine with medium performance, or one tailored to a top-down view, or another one that handled shadows with a technique that was acceptable to me, and so on.
Therefore, the main idea of my engine is simply to be a scenegraph framework; the scenegraph is composed of nodes implementing the INode interface, that's all. The INode interface is very minimalist; a node has an optional name (get/set), a parent INode (get/set), extension points, and can have listeners for changes on these fields. That's all.
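To give a rough idea, a reconstruction of such a minimalist interface could look like this (the method names are only one possible spelling of what is described above, not the real API):

```java
/** Hypothetical reconstruction of the minimalist node interface described above. */
public interface INode {
    String getName();
    void setName(String name);

    INode getParent();
    void setParent(INode parent);

    /** Extension point: attach arbitrary per-node data keyed by name. */
    Object getExtension(String key);
    void setExtension(String key, Object value);

    /** Listeners are notified when the name, parent or an extension changes. */
    void addChangeListener(INodeChangeListener listener);
    void removeChangeListener(INodeChangeListener listener);

    interface INodeChangeListener {
        void nodeChanged(INode source, String field);
    }
}
```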
The second idea is to introduce 'interpreters' which, as you can guess, interpret the scenegraph. Examples of these interpreters are:
- a bounds interpreter (computes and updates bounds),
- an environment interpreter (for each node, it maintains the list of nodes implementing the IEnvironmentNode interface that influence it, and updates the list of influenced nodes that each IEnvironmentNode maintains; lights and fog are examples of this),
- a graph interpreter (provides a way to traverse the scenegraph),
- a transform interpreter (maintains a matrix stack of the transforms),
- a scene interpreter (defines what composes a universe),
- …
These interpreters are completely isolated from the scenegraph. Most interpreters are built using plugins, which allows the scenegraph to be extended very easily: create a node, create a plugin for the interpreters you need, and you're done!
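As a rough sketch of how an interpreter built from plugins could look (all names below are invented for illustration and reuse the hypothetical INode sketch above):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of the interpreter/plugin idea: the interpreter owns no
 *  per-node logic itself, it dispatches each node to the plugin registered for
 *  that node's class, so new node types only require new plugins. */
public class BoundsInterpreter {

    public interface BoundsPlugin<T extends INode> {
        void updateBounds(T node);
    }

    private final Map<Class<?>, BoundsPlugin<? extends INode>> plugins = new HashMap<>();

    public <T extends INode> void register(Class<T> nodeType, BoundsPlugin<T> plugin) {
        plugins.put(nodeType, plugin);
    }

    @SuppressWarnings("unchecked")
    public void interpret(INode node) {
        BoundsPlugin<INode> plugin = (BoundsPlugin<INode>) plugins.get(node.getClass());
        if (plugin != null) {
            plugin.updateBounds(node);
        }
    }
}
```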
Renderers are built upon this system. The core video renderer defines its own interpreter, which keeps the render frame up to date. Only change events are processed.
Culling is implemented using a culling system object which is in charge of sending the enter/exit render frame events.
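A very rough sketch of what such a culling-system contract could look like (again with invented names, reusing the hypothetical INode):

```java
/** Hypothetical sketch of the culling-system idea: the culling object decides
 *  which nodes enter or leave the render frame and notifies the renderer, so
 *  the renderer itself never traverses the whole graph. */
public interface ICullingSystem {

    interface RenderFrameListener {
        void nodeEnteredFrame(INode node);
        void nodeExitedFrame(INode node);
    }

    void setListener(RenderFrameListener listener);

    /** Re-evaluates visibility (e.g. against the view frustum) for the nodes
     *  whose bounds or transforms changed since the last call. */
    void update();
}
```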
So, in short, the general idea is:
- either use the core nodes and be satisfied with the medium-range performance you will get,
- or define nodes adapted to your application that will allow you to reach better performance.
So far it works very well. I have converted a few of my own (small) projects to test the engine. The parts that I did have to customize for nearly every project were the culling system (it really depends a lot on your project) and the appearance system.
Regarding the state of the engine (you asked in another thread whether I would release it as open source or not): this engine is not meant to become commercial; I have neither the time nor the competence for that. I will eventually release it as an open-source project, but I don't think it has matured enough for this yet. There is a really big difference between something that seems like a good design and a working design with a few full applications that prove it.
Ouch, that was a long answer. Hope it is what you expected.
Bye
Vincent