True reflection using Xith3D

I was thinking about making a demo that simulates true reflection and refraction using Xith3D, but then it hit me that we don’t have a renderToTexture() function in the View class, so I had either to do something else or simply dig my way in via JoGL…
Since I grew quite attached to Xith and this community, I thought about a few things (read: API changes) that would help us implement an offscreen rendering method and make this scenegraph a little more flexible.
Let me explain my idea:
We could have some sort of stack on which we pile up all the Groups/BranchGroups that we wish to have on the to-be-rendered texture, or, even simpler, a single BranchGroup holding multiple children that gets submitted to view.renderToTexture(some parameters here including width, height, etc., BranchGroup groupToTexture), which returns a Texture2D that can be used in a large variety of ways.
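As a sketch of the proposed call, it could look something like the following. Everything here is hypothetical: renderToTexture() does not exist in Xith3D, and the stub class bodies just stand in for the real ones so the proposed signature compiles:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the real Xith3D classes.
class Texture2D {
    final int width, height;
    Texture2D(int width, int height) { this.width = width; this.height = height; }
}

class BranchGroup {
    final List<Object> children = new ArrayList<Object>();
    void addChild(Object child) { children.add(child); }
}

class View {
    // Proposed API: render only the given subgraph offscreen and hand
    // the result back as a texture of the requested size.
    Texture2D renderToTexture(int width, int height, BranchGroup groupToTexture) {
        // a real implementation would render via a PBuffer; this stub
        // just returns an empty texture of the right size
        return new Texture2D(width, height);
    }
}
```

The caller would then be free to plug the returned Texture2D into any Appearance, which is what makes the method so generally useful.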

This can also help with hardware Shadow mapping and other stuff as well.
Discuss

I think about Render-To-Texture as a kind of multipass rendering.

Currently Xith3D already has a mechanism for separating different rendering stages into different passes via RenderBins (i.e. Rendering Setup, Background, Opaque pass, Transparent pass, Foreground, etc.). So my idea is to expand this concept and give the developer more control over:

  1. What is rendered in which pass (e.g., a list of passes in which a shape should be rendered)
  2. Sorting policy
  3. Order of rendering passes
  4. Pass-specific View-level settings (e.g., camera position, buffer settings)
  5. [to think about] caching of results for some passes.

This way, we can easily implement

a) Render-To-Texture, where we define an extra rendering pass for every generated texture and associate it with the target Texture and with the shapes to appear on it;

b) Stereo rendering (true stereo and anaglyph), where we replace the two major predefined passes with four new passes, defining different view transforms for each.

This concept fits very well into the current Xith3D architecture.

Any suggestions are welcome.
Yuri

…and as for reflections, we can then try to find a way to bring them to the scenegraph/appearance level (by introducing something like ReflectionAttributes) and completely hide this from the developer.

Yuri

sound tops ;D

bumpy bumpy bump bump ;D
I need this :stuck_out_tongue:

BUMP! ;D +10 v0tes from me

This is a major change in the rendering process, so I decided to close as many issues as possible before starting, and to apply all proposed patches before making the source tree incompatible with them. I don’t want to let the project lose the efforts that people have put into it. That’s why we have some slow-downs with this specific case.

Yuri

Here I will try to provide more details on how I see True Reflections to be implemented with multipass rendering.

True Reflections can be implemented in two ways:

  1. using PBuffers and render-to-texture, and
  2. using render-to-framebuffer and calling glCopyTexImage2D(…) afterwards.

First, we should understand the general concept of multipass rendering that I propose to implement in Xith3D. The basic idea behind this concept is that we want to create rendering schema once and then minimize manual control over rendering.

I suggest introducing a new RenderingPass object that will contain all the information needed to set up and perform rendering for that pass:

a) Rendering Target (complete frame buffer, color buffer or specific color plane, depth buffer, texture, etc);
b) View transform associated with this pass;
c) Projection matrix;
d) Sorting mode;
e) Enabled status;
f) Other relevant information.
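A minimal sketch of how such a RenderingPass holder might look, mirroring points a) through e) above. The enum values and field names are invented for discussion; none of this exists in Xith3D yet:

```java
// Sketch only: all names below are invented.
class RenderingPass {
    enum Target  { FRAME_BUFFER, COLOR_BUFFER, DEPTH_BUFFER, TEXTURE }
    enum Sorting { NONE, STATE_SORTED, BACK_TO_FRONT }

    Target target = Target.FRAME_BUFFER;    // a) rendering target
    float[] viewTransform = new float[16];  // b) view transform for this pass
    float[] projection = new float[16];     // c) projection matrix
    Sorting sorting = Sorting.NONE;         // d) sorting mode
    boolean enabled = true;                 // e) enabled status
    // f) other relevant information would go here
}
```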

In the View, we will build a list of rendering passes that defines the set of passes to perform and the order of their execution.

Every Node (Shape3D or Group) will maintain the set of RenderingPass objects in which that specific Node should be rendered (this can be a BitSet or a true Set).
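The BitSet variant could be sketched like this, assuming every pass is identified by its index in the View's pass list (the method names are invented):

```java
import java.util.BitSet;

// Sketch only: per-Node pass membership kept as a bit mask,
// one bit per pass index in the View's pass list.
class Node {
    private final BitSet passMask = new BitSet();

    void addToPass(int passIndex)         { passMask.set(passIndex); }
    void removeFromPass(int passIndex)    { passMask.clear(passIndex); }
    boolean renderedInPass(int passIndex) { return passMask.get(passIndex); }
}
```

A BitSet keeps the per-Node check during traversal down to a single bit test, which matters when every shape is tested against every pass.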

To implement True Reflections with TextureCubeMaps, we will have to create 7 groups of passes (6 groups for the CubeMap and 1 group for the actual View rendering). Each group will contain 2 passes if we have transparent objects, and 1 pass if we don’t.

Then we will introduce a new kind of ImageComponent2D, which will act as a target for a RenderingPass, and use it to texture the objects on which we want to render reflections of other objects in the scene (target objects). We associate all objects except the target objects with all 7 groups of passes, and associate the target objects only with the actual view rendering pass.
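The assignment described above could be sketched with the BitSet idea, assuming passes 0 through 5 render the cube-map faces and pass 6 is the on-screen view (all names here are invented; the example also ignores the extra transparent passes for simplicity):

```java
import java.util.BitSet;

// Sketch only: passes 0..5 render the six cube-map faces,
// pass 6 is the actual on-screen view pass.
class PassAssignment {
    static final int CUBE_FACE_PASSES = 6;
    static final int VIEW_PASS = 6;

    // target (reflecting) objects appear only in the on-screen pass
    static BitSet forTargetObject() {
        BitSet mask = new BitSet();
        mask.set(VIEW_PASS);
        return mask;
    }

    // every other object appears in all 7 passes
    static BitSet forSceneObject() {
        BitSet mask = new BitSet();
        mask.set(0, CUBE_FACE_PASSES + 1); // bits 0..6 inclusive
        return mask;
    }
}
```

Keeping the target object out of the six face passes is what prevents the reflector from reflecting itself.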

This is just an approximate explanation; of course it should be discussed/described in more detail.

An alternative solution may be to introduce an alternate Canvas3D/View/etc. which will be capable of rendering to texture, letting us manually control when we want to render to the texture.

I don’t see any reason why we can not have both approaches implemented.

Yuri

Using render to texture seems the best choice for those who seek max performance and rendering speed.
I remember reading some Nvidia papers which cover almost everything about off-screen rendering, and the conclusion was that rendering directly to a texture offers the best speed and performance.
Therefore, I would encourage any approach that goes in that direction instead of making costly glCopyTexImage2D calls, etc…
Now, as I explained in the other thread exposing pBuffers, I would like to have some control over the rendering, and more specifically over how the view group behaves.
In my upcoming demo (which requires a Radeon 9600+ or a GeForce FX 5200+), what I do is render the scene to a texture, using the lookAt function to capture the six faces of the scene cube, updating the TextureCubeMap object one image at a time, and then finally, on the 7th pass, rendering everything to the scene.
Now, one has to be very careful about what he does and how he accomplishes the described algorithm, since it can kill performance even more than a 7-pass render already does.
This algorithm is easily done in LWJGL and JoGL; however, converting it to Xith would be a bit trickier since it involves the following steps:
1 - view.getTransform().lookAt(new Vector3f(shape’s location),
new Vector3f(direction vector starting at the shape’s location and extending left, right, up, down, front, or back),
new Vector3f(up vector));

2 - textureCubeMap.setImage(level, face, view.renderToTexture());

Repeat steps 1 and 2 six times.

3 - view.render()
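The three steps above can be sketched as a loop over the six face directions. Vector3f here is a minimal stand-in for the real class, and the actual Xith3D calls from steps 1 to 3 are left as comments, since renderToTexture() is still hypothetical:

```java
// Minimal stand-in for the scenegraph's vector class.
class Vector3f {
    float x, y, z;
    Vector3f(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    Vector3f add(Vector3f o) { return new Vector3f(x + o.x, y + o.y, z + o.z); }
}

class CubeMapCapture {
    // axis-aligned directions for the six cube-map faces,
    // in +X, -X, +Y, -Y, +Z, -Z order
    static final Vector3f[] DIRS = {
        new Vector3f( 1, 0, 0), new Vector3f(-1, 0, 0),
        new Vector3f( 0, 1, 0), new Vector3f( 0,-1, 0),
        new Vector3f( 0, 0, 1), new Vector3f( 0, 0,-1),
    };

    static Vector3f[] faceTargets(Vector3f shapeLocation) {
        Vector3f[] targets = new Vector3f[6];
        for (int face = 0; face < 6; face++) {
            // step 1: look from the shape's location toward each face direction
            targets[face] = shapeLocation.add(DIRS[face]);
            // view.getTransform().lookAt(shapeLocation, targets[face], up);
            // step 2: textureCubeMap.setImage(0, face, view.renderToTexture(...));
        }
        return targets;
        // step 3 (after the loop): view.render();
    }
}
```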

I know for sure that the TextureShaderPeer has an updateTexture() function that would be suitable for updating our TextureCubeMap, so only a little work should be needed; now waiting for Yuri’s digging…
Happy coding ;D