Vote time: RFE - Developer controlled swap buffer

This has been requested enough times by the community that it’s time to bring this feature to a head and get a vote out on whether or not we ever intend to do it, so people will know. Personally, I’m in full agreement that buffer swapping is something that should be doable by the developer. After having gone through the implementation of JOGL for the OSX stuff, I really don’t see any technical reason why doing it isn’t possible, or why it would lead to performance degradation, so I think this is fair game for the community process that we have in place.

All of the workarounds to not being able to call swap buffers that I have tried thus far involved thread acrobatics that I consider unnecessary and unwelcome. If we have dissenting voices - they should chime in.

/me votes “yes”

+1
:wink:

Can see this is going to be a point of contention :slight_smile:

+1.

Kev

What’s the difference between that and using glDrawable’s setNoAutoRedrawMode() and display()?

You can’t do stuff like

in the same method as it is now, as all listeners on the drawable must return before the buffers get swapped.

One example of why you might want to do that is for progress bars when setting up a lot of opengl lists and textures.
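Roughly, this is the kind of loop people want to write. The sketch below is illustrative only: the `Drawable` interface and `ResourceLoader` class are hypothetical stand-ins for the proposed manual-swap API, not part of JOGL as it exists today.

```java
import java.util.List;

// Hypothetical stand-in for a drawable exposing the proposed manual swap.
interface Drawable {
    void swapBuffers();
}

class ResourceLoader {
    // Load each resource, redraw a progress bar, and present it
    // immediately with a manual swap -- all inside one method, which is
    // exactly what the current listener model disallows.
    void loadWithProgress(Drawable drawable, List<String> resources) {
        int done = 0;
        for (String name : resources) {
            // ... build the display list / texture for `name` here ...
            done++;
            drawProgressBar((float) done / resources.size());
            drawable.swapBuffers(); // show the updated bar right now
        }
    }

    private void drawProgressBar(float fraction) {
        // ... issue the OpenGL calls that render the bar ...
    }
}
```

With auto-swap, none of those intermediate frames would ever reach the screen, because the buffers are only swapped after all listeners return.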

+1.

Definitely +1.

setNoAutoRedrawMode() does not turn off buffer swapping for on-screen contexts.

There should be a way of disabling swap buffers, at least temporarily, for a specific canvas/context. This can be beneficial for implementing pick modes using the GL_SELECT rendering mode, where you don’t need to swap buffers after rendering.

Yuri

Yes.

Yes we need it. Especially for synchronizing the display of multiple buffers, for things like stereoscopic systems or cave environments.

+1

A’ight. I’m meeting with Ken and perhaps some others today and I will ask and see what is entailed in getting such a thing into the specification. There may be some heavy lifting required to actually make the change cleanly (and we as a community may need to make the change) - but I will find out what we need to do going forward to make this work.

As I started looking at the architecture, there doesn’t seem to be a reason that we couldn’t expose the swap-buffer functionality while keeping intact the core JOGL intent of controlling the swap itself. Really, it seems all that is required is to make swapping visible. This doesn’t necessarily require exposing objects that we don’t want to expose from an architectural perspective, as we can use a Composite pattern to surface the swap functionality at a higher level for our use and leave the low-level code mostly alone, which would be ‘A Good Thing.’ More tomorrow on how it goes :slight_smile: Maybe I should take Ken out and get him drunk ;D
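One way to read this suggestion, sketched below with entirely hypothetical names (`LowLevelSwap`, `GLDrawableFacade` are illustrative, not JOGL classes): the high-level object the developer sees composes the internal swap machinery and forwards to it, so swapping becomes visible without exposing the low-level objects themselves.

```java
// Hypothetical internal interface: whatever low-level code already
// knows how to perform the native buffer swap.
interface LowLevelSwap {
    void doSwap();
}

// Hypothetical high-level facade the developer sees. It holds the
// internal implementation by composition rather than exposing it;
// the only new surface area is making the swap visible.
class GLDrawableFacade {
    private final LowLevelSwap impl;

    GLDrawableFacade(LowLevelSwap impl) {
        this.impl = impl;
    }

    // Developer-visible entry point; delegates to the hidden internals.
    public void swapBuffers() {
        impl.doSwap();
    }
}
```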

I had an opportunity to meet Ken Russell last night and, among other things, we had a chance to discuss some of the potential architectural changes to the APIs, the strength of the community, etc. Ken is clearly a very cool and intelligent individual, and it was good to associate some ‘faces’ with the people behind this initiative. I won’t get into much detail other than to say we aren’t working with the bottom of the barrel :slight_smile:

One of the things we discussed is that we should go forward with the design of swap buffers being controlled, at least in part, by the developers. Another is that we as a community can actually present a reference implementation, so that it can (hopefully) be factored into the 1.1 release of JOGL. To that end, it’s time to discuss the best approach to achieving swap-buffer functionality without breaking the soft requirement that the core JOGL engine can still call swap buffers as IT needs to. So shortly, I think we need to start taking submissions on how we want to solve the problem architecturally.

If anyone has any ideas on how this could/should best be architected (let’s not design a functional spec yet), please post them, as it’s clearly in our court to go beyond ‘it doesn’t work’ to ‘we would like it to work like this, for these reasons’.

Any takers?

The only compelling reason I personally see for adding this functionality is that some kinds of applications cannot be implemented using JOGL without giving buffer-swapping control to the end user. Two examples are given in this thread and a third in this one. I’d like to ask moorej, greggpatton and swh in particular to provide a prototype implementation that works for their applications.

I’d like to see the API kept minimal: perhaps new methods setAutoSwapBuffer(boolean), boolean getAutoSwapBuffer(), and swapBuffer() added to GLDrawable. They must be specified as advisory (they will have no effect for single-buffered visuals), and it should be made clear that most applications should not need to call them. Note that the implementation is not quite trivial, because the swapBuffers() call must be done with the underlying OpenGL context made current.
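The advisory semantics described here might look something like the following sketch. `DrawableSketch` is a hypothetical stand-in class (the method names follow the proposal above); the integer counter stands in for the native SwapBuffers call, and the context-current requirement is noted but not modeled.

```java
// Hypothetical sketch of the proposed advisory API on GLDrawable.
class DrawableSketch {
    private final boolean doubleBuffered;
    private boolean autoSwap = true;
    int swapCount = 0; // stands in for the native SwapBuffers call

    DrawableSketch(boolean doubleBuffered) {
        this.doubleBuffered = doubleBuffered;
    }

    void setAutoSwapBuffer(boolean auto) { autoSwap = auto; }

    boolean getAutoSwapBuffer() { return autoSwap; }

    void swapBuffer() {
        // Advisory: a silent no-op for single-buffered visuals.
        if (doubleBuffered) {
            // Real code must also ensure the underlying OpenGL context
            // is current on this thread before performing the swap.
            swapCount++;
        }
    }

    // Invoked by the toolkit after all listeners return; honors the flag.
    void displayFinished() {
        if (autoSwap) {
            swapBuffer();
        }
    }
}
```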

At least adding the methods setAutoSwapBuffer(boolean) and boolean getAutoSwapBuffer(), even without a manual swapBuffer(), would already be enough to solve some questions regarding use of the GL_SELECT rendering mode.

Yuri

I entered an RFE, Issue #38, that contains the modifications I’ve made to the code to allow turning off the automatic buffer swap. The changes are minimal, but they do add an extra boolean check to the swapBuffers method. I don’t know whether it adds enough overhead to swapBuffers to cause performance concerns or not. It’s currently not an issue for me.

Initially I created new factory classes and new canvas and context classes, but that requires being able to extend GLCanvas or duplicating code.

Gregg

I started coming up with a reference implementation, but I am having to change many more things to really get it where I want it to go. For this first “trial” version, are we to alter the least amount possible, or really make the implementation that we think is right? Once you allow OpenGL calls to be made outside of a current context, one needs to be able to query whether that context is current. That would be a lower-level change than is perhaps necessary at this stage. Any thoughts?

moorej

Gregg: the testing of the flag down in the swapBuffers implementation should be fine.

Jason: I think ultimately we are going to need to keep track of a per-thread “current context stack” in order to support calling display() recursively and for being able to call display() of other GLDrawables from within one’s display() routine (which currently is not supported though I think this fact isn’t well documented). Once we have this information it should be pretty easy to implement a test to see whether a given context is current on this thread. If you’re interested in taking a crack at it then it should probably be implemented with a ThreadLocal pointing to a Stack of GLContext objects. Once makeCurrent() has succeeded the GLContext object should be pushed; upon free() the GLContext should be popped and the next one up the stack should be made current again. It seems to me that this might be somewhat tricky to get right so we should probably file an RFE for it to track it.

Ken, how will this impact some of the more interesting issues that remain with JOGL windows being minimized and not being realized again when they are maximized again? I’ve seen this particularly with JInternalFrames where a JOGL context is embedded in an MDI manner.

It won’t affect or fix this behavior. Proper destruction and recreation of OpenGL contexts still needs to be implemented in JOGL.

What I’m wondering is whether or not these two fixes should be developed by the same folks as they both relate to the same section of code. How do we get events from the windowing system down to the GLContext?