Finally! Thank you! JSR-231 works the way I want opengl support to work! :D

Now I can finally have opengl in java without the silly listener structure from JOGL!

A spinning triangle now yields the insane number of thousands of fps it should, without any synchronisation with the awt display threads or returning from methods to cause buffers to flip. :smiley:

* Markus_Persson grins and starts shuffling some code around in wurm

[edit: 251?]

What are the specific changes that makes it better? I’ve not looked at the new release yet.

I hardly think the listener structure is silly and still recommend it as the most portable way of writing applications. Nonetheless, you’re welcome for the new design. Kevin Rushforth, Daniel Rice and Travis Bryson from Sun deserve all of the credit for the refactoring of the API into GLDrawable / GLContext classes. Please post here if you have any problems with the new implementation, although it’s received quite a bit of testing and seems to be universally more robust and correct than JOGL 1.1.1.

Forcing opengl to behave in a way it really doesn’t is very silly indeed, not to mention pointless, and it makes it harder to port opengl code from other platforms to java.

I strongly discourage anyone from using the default GLCanvas with the listener structure… you’ll get nothing but headaches from it.

I’m sorry for expressing such strong views about this, but I just cannot for the life of me see anything good about it. It does not speed up development time, it does not provide more robust code, and it does not speed up the rendering.

Maybe a stupid question, because I don’t really know JOGL or the Java OpenGL API, but reading this topic I was asking myself: why are those classes called GLDrawable and GLContext, and not OGLDrawable and OGLContext with respective superclasses GLDrawable and GLContext?

Bruno

It does produce more robust code. In the GLEventListener paradigm the library can decide on what thread the OpenGL work is performed instead of leaving this up to the user. Most vendors’ OpenGL drivers are still not really robust in the face of multithreading, and the inherent multithreaded nature of the AWT and Java makes it all too easy to do OpenGL work on more than one thread, even just accidentally. When JOGL switched to executing all GLEventListeners on a single thread instead of multiple threads, the library became much more stable where before it had produced crashes in some situations on almost all supported platforms. Users’ code did not change in any way when this single-threading was introduced.

The listener mechanism also happens to decouple the OpenGL work from the specific widget being drawn into so that the same code can trivially (and correctly) draw into e.g. a pbuffer, a lightweight Swing widget, or a heavyweight AWT widget.
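In code, the paradigm looks roughly like this (a minimal sketch against the JSR-231 javax.media.opengl API; Animator is the utility class from com.sun.opengl.util, and the drawing body is just a placeholder):

```java
import java.awt.Frame;
import javax.media.opengl.*;
import com.sun.opengl.util.Animator;

public class ListenerSketch implements GLEventListener {
    public void init(GLAutoDrawable drawable) {
        // one-time GL setup; called again if the context is recreated
    }

    public void display(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        // draw the scene here; the library decides which thread this runs on
    }

    public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { }

    public void displayChanged(GLAutoDrawable drawable, boolean modeChanged,
                               boolean deviceChanged) { }

    public static void main(String[] args) {
        GLCanvas canvas = new GLCanvas();
        canvas.addGLEventListener(new ListenerSketch());
        Frame frame = new Frame("listener sketch");
        frame.add(canvas);
        frame.setSize(640, 480);
        frame.setVisible(true);
        new Animator(canvas).start(); // drives display() repeatedly
    }
}
```

The same listener instance can be attached to a pbuffer or a lightweight widget without changing a line of the drawing code.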

You’re right about the speed issue, although for every real-world application I’ve seen the overhead of the listener mechanism (and associated single-threading, etc.) has been small. It doesn’t necessarily look good for microbenchmarks or 200+ FPS games, but is it really necessary to drive the display any faster than the monitor’s refresh rate?

Sorry, but the GLEventListener kludge is most certainly NOT more robust code than a simple single-threaded while (true) { render(); swapbuffers(); } loop.
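Spelled out against the JSR-231 GLContext/GLDrawable classes, that loop looks roughly like this (a sketch only; the drawable is assumed to be realized already, and the frame-drawing body is a placeholder):

```java
import javax.media.opengl.*;

public class DirectLoop {
    // 'drawable' is assumed to be a realized GLDrawable obtained elsewhere
    static void renderLoop(GLDrawable drawable) {
        GLContext context = drawable.createContext(null); // no shared context
        while (true) {
            // makeCurrent can fail if the underlying window isn't ready yet
            if (context.makeCurrent() == GLContext.CONTEXT_NOT_CURRENT) {
                continue;
            }
            try {
                GL gl = context.getGL();
                gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
                // render the frame here
                drawable.swapBuffers();
            } finally {
                context.release();
            }
        }
    }
}
```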

The listener mechanism might have advantages in a very few specialized cases, but it should absolutely not be the only provided way of doing opengl, as was the case in the past.

And, yes, rendering should be done much faster than the monitor’s refresh rate if you want to have some cpu cycles left over for things such as AI, networking and game logic. (At least if you want your game to run at the refresh rate)

The simple rendering loop above is the specialized case, not the listener mechanism. Applications which are event-driven don’t have the rendering loop structure. The rendering loop above assumes the target component isn’t being removed from the component hierarchy and re-added later as in many applications. The GLEventListener interface handles these situations and more.

That having been said, I agree that the listener mechanism shouldn’t be the only way to access OpenGL, as I used to think. It took a while for me and others to see how to incorporate both approaches into the same library.

The listener mechanism doesn’t block unnecessarily or deliberately slow down the rendering to the screen refresh rate any more than your rendering loop above does. If vsync is enabled then both approaches will block on the swapBuffers call.

The listener structure is the specialized case. It’s built on top of the more generic, proper way of exposing opengl.
Even your own code has the listener structure using the direct context control, instead of the other way around (which would be silly).

My reason for preferring direct control over the listener structure is very similar to my reason for preferring jogl over java3d. Control, flexibility and speed.
It’s not right for all applications, of course, but for pretty much all non-turnbased games, I’d say that using either java3d or the listener structure in jogl is Wrong™.

But… my statement in the topic still holds. I’m very happy you finally took away my last thoughts about switching to LWJGL(*) by making this work. :slight_smile:

(* I’m not dissing LWJGL. It would just be a lot of work. ;))

It is true that the listener mechanism is based on the lower-level GLContext APIs. Note however that the JSR-231 APIs are unique in that the GLContext.makeCurrent() call provides enough information to the caller to indicate whether the context was newly created during this makeCurrent call (or failed to be made current because, e.g., the underlying window was not yet realized). Code using the GLContext directly must check the return value from this method in order to correctly handle the window being realized/unrealized, properly reloading textures/display lists, etc. The GLEventListener mechanism avoids scattering this checking code throughout the application and puts it in one place. This is why it more easily supports more general application structures than using the GLContext APIs directly.
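In other words, code driving a GLContext by hand has to do something like this every frame (a sketch; reloadGLResources() and render() stand in for application-specific methods):

```java
import javax.media.opengl.*;

public class ContextCheckSketch {
    static void drawFrame(GLDrawable drawable, GLContext context) {
        int status = context.makeCurrent();
        if (status == GLContext.CONTEXT_NOT_CURRENT) {
            return; // window not realized yet; try again later
        }
        try {
            GL gl = context.getGL();
            if (status == GLContext.CONTEXT_CURRENT_NEW) {
                // the context was just (re)created: textures, display lists
                // and other server-side state are gone and must be rebuilt
                reloadGLResources(gl);
            }
            render(gl);
            drawable.swapBuffers();
        } finally {
            context.release();
        }
    }

    static void reloadGLResources(GL gl) { /* application-specific */ }
    static void render(GL gl) { /* application-specific */ }
}
```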

Have you done any benchmarking to support this that you can share with us? Just curious what the difference could be :slight_smile:

It’s just simple logic. Any CPU time not spent drawing the display is available for processing AI, networking and game logic. Library code should always have efficiency as one of its goals when it will run alongside other code (in this case the rest of the game or simulation that is being rendered).

I think Markus_Persson forgot to write a not after rendering and before should…

DP

The GLEventListener stuff is ideal for things like level editors or test programs where you want to have opengl work nicely with swing.
For (many/most) games, not so much. =)

[quote]It’s just simple logic. Any CPU time not spent drawing the display is available for processing AI, networking and game logic. Library code should always have efficiency as one of the goals when it will run along side other code (in this case the rest of the game or simulation that is being rendered)
[/quote]
Well, yes, even I understand that :wink:

The thing is that going from a loop-based system to an event-based system is normally such a thin layer that you will hardly notice the difference, in fact Ken said so in one of his messages.

So that’s why I wanted to know why Markus was so happy. If it was only for the direct control, fine, perfectly understandable, but the performance difference seen by him surprised me.

And if there’s a way to substantiate Markus’ claim there might also be a way to improve the situation… or not. DUnno. Like I said, just curious :slight_smile:

Once again it comes down to a question of style. Performance is probably basically unaffected.

Cas :slight_smile:

[quote]Well, yes, even I understand that :wink:

The thing is that going from a loop-based system to an event-based system is normally such a thin layer that you will hardly notice the difference, …
[/quote]
The part you quoted was specifically about running faster than the refresh rate, not about the event-based vs. loop-based points ???

Heh. Not really.

The gleventlistener structure synchronizes each render to the awt dispatch thread.
So if you have a single really long render call in which you, say, load a bunch of textures and set up your VBO structures, no other awt or jogl components work during that time either.

If you’re making a game with a single thread doing all rendering, direct control over the contexts is less buggy, faster, and easier. Especially if you’re using pbuffers.
If you’re making a game with several threads doing the rendering, you can use the gleventlistener structure. But the only real benefit it has is to make sure it’s a single thread doing all the rendering. (Meaning you’re back at my previous point)

[quote]The part you quoted was specifically about running faster than the refresh rate, not about the event-based vs. loop-based points
[/quote]
True, but if you read all of the thread it kinda makes sense :slight_smile:

It is true Ken introduced the refresh rate, but knowing Markus is a developer on an MMORPG-type game, it is obvious he will want the fastest rendering possible so he has some time left for other things (AI, for example). So any system that introduces too much overhead, even if it eases development in some cases, is out of the question.

I just wanted to know how an event-driven layer like the one JOGL uses could introduce a lot of overhead.

But reading Markus’ reply above, I think it is not so much about overhead (like saying: “look, this spinning cube does 300 fps when using a loop and only 200 when using events”; still would like to hear if that is the case) but more a case of having more control over when to do rendering, when to do AI, etc., and still having a responsive GUI even if the work takes a while.

At least that is how I understand it now :slight_smile:

Test case:
A skinned animation test application

http://www.mojang.com/notch/screenshots/modeldisplay.jpg

All tests were run with JDK 1.5.0_04.
The fps weren’t measured until they had been stable for ten seconds, to account for warmup time. They were measured with a counter that gets incremented every time a frame is rendered, then gets System.out’ed and reset once every second, according to System.currentTimeMillis.
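A counter like that can be reconstructed as follows (my own sketch, with the clock value passed in so the logic is easy to test; the real measurement code presumably reads System.currentTimeMillis() directly):

```java
public class FpsCounter {
    private long lastReport;
    private int frames;

    public FpsCounter(long startMillis) {
        lastReport = startMillis;
    }

    /** Call once per rendered frame; returns the fps once per second, else -1. */
    public int frameRendered(long nowMillis) {
        frames++;
        if (nowMillis - lastReport >= 1000) {
            int fps = frames;     // frames counted in the elapsed second
            frames = 0;           // reset for the next second
            lastReport = nowMillis;
            return fps;
        }
        return -1;
    }
}
```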

With direct context control and a while(true) loop:
Client JVM: 2185-2195 fps
Server JVM: 2450-2460 fps

With the GLEventListener structure:
Client JVM: 800-825 fps
Server JVM: 835-845 fps

Notes and conclusions:
Please note that this slowdown is CONSTANT per frame, not proportional, meaning that the longer you take to render a frame (and the lower fps you have), the smaller the relative effect of the GLEventListener slowdown is. You do not need 2500 fps.

For the server JVM, the average rendering time was 0.41 ms with direct context management, and 1.19 ms with the GLEventListener, meaning the GLEventListener overhead is 0.78 ms per render on my computer.
For the client JVM, the numbers are 0.46 ms for direct context, 1.23 for GLEventListener, and an overhead of 0.77 ms.

This means that if you want to run your game at X fps, and run Y amount of AI at the same time, you’ll have 0.77 ms more per frame to do so if you don’t use the GLEventListener.
If your target fps is 60, that computes to 4.8% extra time for game logic and rendering operations if you change from GLEventListener to direct context control.
If your target fps is 100, the gain is 8.3%.
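Checking the arithmetic (my reading of it: the 0.77 ms saved is taken relative to the per-frame time that remains after the GLEventListener overhead, which reproduces the quoted figures):

```java
public class OverheadGain {
    /** Extra fraction of per-frame time gained by dropping a fixed overhead. */
    public static double gain(double targetFps, double overheadMs) {
        double frameMs = 1000.0 / targetFps;
        // gain relative to what the listener path leaves you per frame
        return overheadMs / (frameMs - overheadMs);
    }

    public static void main(String[] args) {
        System.out.printf("60 fps: %.1f%%%n", 100 * gain(60, 0.77));   // ~4.8%
        System.out.printf("100 fps: %.1f%%%n", 100 * gain(100, 0.77)); // ~8.3%
    }
}
```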

I have to point out that it’s usually not considered worth optimising for a gain of 5-8%. If the code for your application becomes easier with the GLEventListener structure, USE IT.
But if the direct context control is better suited (because you’re making, say, a fullscreen game), you’ll also gain some rendering speed by not using it. And that never hurts, right?

[edit: cleared up the bit about the slowdown being constant]