Best practice in using offscreen buffers and threads

Can anyone tell me whether I’m following best practice in using offscreen buffers (pbuffers) and threads in the following scenario? I have some objects that will render different layers. At a certain moment I need to display these layers, so I set a flag

buildTerrainLayer=true;

and call the GLCanvas display method:

SwingUtilities.invokeLater(new Runnable()
{
    public void run()
    {
        glcanvas.display();
    }
});

This will call the main display method which has the following code inside:

if (buildTerrainLayer)
{
    buildTerrainLayer = false;
    OffscreenRenderManager.buildTerrainLayer(drawable);
    return;
}

// map the texture to texture coordinates and display

The OffscreenRenderManager builds the layer using pbuffers, copies the layer into a shared texture id, and at the end of that process calls the glcanvas.display method, which then displays the texture image on the glcanvas.

My question is: can I spawn the OffscreenRenderManager onto another thread so it builds the layer images lazily, copies them into the texture id shared between the main glcanvas and the pbuffer canvas, and then, when that is finished, requests a refresh of the main glcanvas display? Or is it necessary for the pbuffer code to run on the same thread as the main glcanvas display? What is the best practice where shared contexts, pbuffers, and the main glcanvas are concerned?

By default JOGL forces all OpenGL work onto a single thread internally, even pbuffer-related work. This is necessary for good stability with most vendors’ OpenGL drivers on all platforms. You can still do useful work in background threads like decompression of big textures, but the actual initialization of the OpenGL texture object should be factored out from this work, and the expensive portion done outside a GLEventListener.
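A minimal sketch of that division of labor, using hypothetical class and method names (this is not JOGL API; only the pattern is taken from the advice above): the expensive CPU work runs on a background thread, and only the finished pixel data is handed to the GL thread for the actual texture upload inside display().

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: expensive CPU work (e.g. decompressing texture data) runs on a
// background thread; only the finished byte[] is handed over, and the cheap
// glTexImage2D upload happens later inside the GLEventListener's display().
public class LayerLoader {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private volatile byte[] pendingTexels;   // written by worker, read by display()

    public void loadAsync(final byte[] compressed, final Runnable requestRepaint) {
        worker.submit(new Runnable() {
            public void run() {
                pendingTexels = decompress(compressed);  // expensive, off the GL thread
                requestRepaint.run();  // e.g. SwingUtilities.invokeLater -> glcanvas.display()
            }
        });
    }

    // Called from GLEventListener.display() on the GL thread:
    public byte[] takePendingTexels() {
        byte[] t = pendingTexels;
        pendingTexels = null;
        return t;  // if non-null, upload with glTexImage2D here
    }

    // Placeholder standing in for real decompression work.
    static byte[] decompress(byte[] in) {
        byte[] out = new byte[in.length];
        for (int i = 0; i < in.length; i++) out[i] = (byte) ~in[i];
        return out;
    }
}
```

The GL-side upload stays short: display() just checks for pending data and uploads it, so the GLEventListener callback never blocks on the slow work.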

You can share the texture object between your pbuffer and your main GLCanvas. When you call pbuffer.display() the GLEventListener’s work may be moved onto a different thread.

It is possible to change JOGL’s threading behavior (this is documented in the JOGL User’s Guide) but I strongly recommend that you don’t try this due to the stability issues you will inevitably run into.

OK, so to make sure I’m understanding you correctly: the init method of the pbuffer should be invoked on the same thread as the main glcanvas work, but the code within the pbuffer’s display method can be executed on another thread, which on completion can ask the main thread to refresh everything?

[quote]You can share the texture object between your pbuffer and your main GLCanvas. When you call pbuffer.display() the GLEventListener’s work may be moved onto a different thread
[/quote]
Who may move the GLEventListener’s work to another thread: JOGL, or can I do it myself if I want?

No. This is a bit of a simplification, but: the only way you can interact with a GLDrawable is by calling its display() method. This will cause your GLEventListener’s callbacks to be invoked. Which thread they are invoked on is an implementation detail. However, it is currently the case that they are all serialized on one thread, so attempting to gain parallelism in your application by calling two different GLDrawables’ display() methods on two threads will not have the effect you want.

JOGL may move the work onto another thread. The application does not have API-level control over which thread does OpenGL work, although this can be tuned with system properties (not recommended).
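For completeness, the tuning mentioned is done via a system property at launch; in the JOGL 1.x releases of this era it was spelled jogl.1thread (an assumption from memory; verify against your version’s User’s Guide before relying on it), and as the answer above says, changing it is not recommended:

```shell
# Assumption: JOGL 1.x single-thread property as documented in the User's Guide.
# Shown only for reference; the default behavior is the stable one.
java -Djogl.1thread=false MyApp
```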

So then, to achieve some lazy offscreen parallelism when I need to, say, rebuild a texture map comprised of many layers, I should spawn off chunks of intermediate work onto other threads, and when these are complete, call the pbuffer’s display method to render the offscreen texture image using the intermediate output, then refresh the whole thing.

YES?

I think that sounds correct. Basically you want to keep your GLEventListener’s callbacks as short as possible for the same reason you shouldn’t block the AWT event queue thread. I would only do a significant amount of restructuring if you have already found that this is a bottleneck in your application.
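That flow can be sketched with plain java.util.concurrent primitives; all names here are hypothetical, the per-layer jobs are ordinary Runnables, and the final merge Runnable stands in for the real pbuffer.display() call made on the proper thread:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: fan the per-layer CPU work out to a small pool, and only when
// every chunk has finished run the single offscreen merge pass
// (pbuffer.display() in the real application).
public class LayerRebuilder {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void rebuild(Runnable[] layerJobs, final Runnable mergeOnGlThread)
            throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(layerJobs.length);
        for (final Runnable job : layerJobs) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        job.run();          // intermediate, non-GL work for one layer
                    } finally {
                        done.countDown();
                    }
                }
            });
        }
        done.await();                       // every layer's intermediate result is ready
        // Real app: SwingUtilities.invokeLater(...) wrapping pbuffer.display()
        mergeOnGlThread.run();
        pool.shutdown();
    }
}
```

The key property is that the GL-side merge is a single short pass over precomputed results, so the GLEventListener callback stays brief.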

Yes, my application displays maps made up of possibly many layers (30+ sometimes). I had hoped to use the hardware-accelerated rendering of the pbuffers offscreen to do a lot of the merging of layers lazily and then to update the main display when the merging was done. The hardware rendering improved the performance, but it is still a time-consuming process when you have a lot of layers.

What I will do now is try to get some intermediate work cached and then call the pbuffer code to merge that into the final texture image or images that get displayed.

One other thing: is there any speed or other advantage to using the new extensions that are equivalent to pbuffers, even though not many drivers support them at this time?

My understanding is that EXT_framebuffer_object is more efficient than using pbuffers because it doesn’t require an OpenGL context switch to render to the framebuffer object. However, from what I’ve read, this will probably only show up as a speed improvement in your app if you’re doing hundreds of context switches per frame.

Thanks for the help