Test case:
A skinned animation test application
http://www.mojang.com/notch/screenshots/modeldisplay.jpg
All tests were run with JDK 1.5.0_04.
The fps weren't measured until they had been stable for ten seconds, to account for warmup time. They were measured with a counter that gets incremented every time a frame is rendered, then gets printed via System.out and reset once every second, according to System.currentTimeMillis.
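The counter described above could look something like this minimal sketch (the class and method names are my own; a real render loop would pass in System.currentTimeMillis each frame — the time is taken as a parameter here only so the logic is easy to test):

```java
// Sketch of the fps counter described above: incremented once per
// rendered frame, reported and reset once every wall-clock second.
public class FpsCounter {
    private int frames = 0;
    private long lastReport;

    public FpsCounter(long startMillis) {
        lastReport = startMillis;
    }

    // Call once per rendered frame with the current time in milliseconds
    // (e.g. System.currentTimeMillis()). Returns the frame count when a
    // full second has elapsed, or -1 otherwise.
    public int frameRendered(long nowMillis) {
        frames++;
        if (nowMillis - lastReport >= 1000) {
            int fps = frames;
            frames = 0;
            lastReport = nowMillis;
            return fps;
        }
        return -1;
    }
}
```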
With direct context control and a while(true) loop:
Client JVM: 2185-2195 fps
Server JVM: 2450-2460 fps
With the GLEventListener structure:
Client JVM: 800-825 fps
Server JVM: 835-845 fps
Notes and conclusions:
Please note that this slowdown is CONSTANT per frame, not proportional, meaning that the longer you take to render a frame (and the lower your fps), the smaller the relative effect of the GLEventListener slowdown. You do not need 2500 fps.
For the server JVM, the average rendering time was 0.41 ms with direct context management, and 1.19 ms with the GLEventListener, meaning the GLEventListener overhead is 0.78 ms per render on my computer.
For the client JVM, the numbers are 0.46 ms for direct context, 1.23 for GLEventListener, and an overhead of 0.77 ms.
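These per-frame times follow directly from the measured fps; a quick sanity check using the midpoints of the server JVM ranges above (2455 and 840 fps — the helper name is my own):

```java
// Convert fps to milliseconds per frame and recover the overhead figures.
public class FrameTime {
    static double msPerFrame(double fps) {
        return 1000.0 / fps;
    }

    public static void main(String[] args) {
        double direct = msPerFrame(2455);   // server JVM, direct context control
        double listener = msPerFrame(840);  // server JVM, GLEventListener
        // Prints roughly: direct 0.41 ms, listener 1.19 ms, overhead 0.78 ms
        System.out.printf("direct %.2f ms, listener %.2f ms, overhead %.2f ms%n",
                direct, listener, listener - direct);
    }
}
```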
This means that if you want to run your game at X fps, and run Y amount of AI at the same time, you'll have 0.77 ms more per frame to do so if you don't use the GLEventListener.
If your target fps is 60, that computes to 4.8% extra time for game logic and rendering operations if you change from GLEventListener to direct context control.
If your target fps is 100, the gain is 8.3%.
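Those percentages appear to come from comparing the freed-up 0.77 ms against the per-frame time that remains after paying the overhead; a small sketch of that arithmetic (my reconstruction of the calculation, with a hypothetical helper name):

```java
// Relative gain in usable frame time when a fixed per-frame overhead (ms)
// is removed from the frame budget at a given target fps.
public class OverheadGain {
    static double gainPercent(double targetFps, double overheadMs) {
        double frameBudgetMs = 1000.0 / targetFps;
        // Time left for game logic and rendering while paying the overhead:
        double usableMs = frameBudgetMs - overheadMs;
        // Removing the overhead frees overheadMs, a relative gain of:
        return 100.0 * overheadMs / usableMs;
    }

    public static void main(String[] args) {
        // Prints roughly 4.8% at 60 fps and 8.3% at 100 fps.
        System.out.printf("60 fps: %.1f%%%n", gainPercent(60, 0.77));
        System.out.printf("100 fps: %.1f%%%n", gainPercent(100, 0.77));
    }
}
```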
I have to point out that it's usually not considered worth optimising for a gain of 5-8%. If the code for your application becomes easier with the GLEventListener structure, USE IT.
But if the direct context control is better suited (because you're making, say, a fullscreen game), you'll also gain some rendering speed by not using it. And that never hurts, right?
[edit: cleared up the bit about the slowdown being constant]