So basically mine is so much slower because I’m letting J2D do the blending, which causes a lot of VRAM reads that you guys don’t end up doing?
After correcting the bug that positions the FPS indicator behind the window title bar (at least on Mac), I see that Tom’s example still only gets a max of 22 fps on OS X. Sometimes it dips to 13 fps for a moment, but it is usually around 20-22 fps.
Come on Apple engineers - you’ve caught up to Sun in terms of JRE release versions… now start optimizing!
As it stands, for any action game on the Mac you must use JOGL or LWJGL; nothing else performs well enough.
Not exactly; Tom’s couldn’t do what yours is doing.
It still fulfills the contract set out by cas…
But from the perspective of a proper game, it’s useless.
I’ve been trying these demos on various machines here at work (they’re all good testcases for us, especially for the new OGL pipeline). I’m finding that performance for nonnus29’s testcase is relatively poor with OGL enabled, but I think that’s because we’re using OGL to copy a software (unaccelerated, non-managed) image to the backbuffer and flipping on every frame.
Just to back up what you and Abuse suspected, managed images will no longer be accelerated once you call getRaster() or a related method. In this context, I don’t see why you need to modify any image arrays directly. It would be great if we could see your source code. But from what I can tell, a more optimal approach would be something like:
- load bigimage.gif
- copy each tile from bigimage.gif into its own managed image (createCompatibleImage())
- render each tile directly into the BufferStrategy backbuffer (no need for an intermediate BufferedImage)
- call strategy.show()
If you follow this approach, everything should be accelerated; with OGL enabled, every tile will be cached in a texture and the snowflakes will be alpha blended to the backbuffer, all at hardware speed. Let me know if this makes sense. It would be great to see an updated testcase.
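Here’s a rough, untested sketch of what I mean (TILE_SIZE, the frame setup, and where the snowflakes get drawn are just placeholders for whatever your testcase actually does):

import java.awt.Frame;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferStrategy;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class TileDemo {
    private static final int TILE_SIZE = 32; // placeholder; use your real tile size

    public static void main(String[] args) throws IOException {
        Frame frame = new Frame("Tile demo");
        frame.setIgnoreRepaint(true);
        frame.setSize(640, 480);
        frame.setVisible(true);
        frame.createBufferStrategy(2);
        BufferStrategy strategy = frame.getBufferStrategy();

        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();

        // Load the source sheet once; this image itself is not managed.
        BufferedImage bigImage = ImageIO.read(new File("bigimage.gif"));

        // Copy each tile into its own compatible (managed) image. As long as
        // you never call getRaster() on these copies, they stay eligible for
        // acceleration (a cached texture on the OGL pipeline).
        int cols = bigImage.getWidth() / TILE_SIZE;
        int rows = bigImage.getHeight() / TILE_SIZE;
        BufferedImage[] tiles = new BufferedImage[cols * rows];
        for (int ty = 0; ty < rows; ty++) {
            for (int tx = 0; tx < cols; tx++) {
                BufferedImage tile =
                        gc.createCompatibleImage(TILE_SIZE, TILE_SIZE,
                                                 Transparency.BITMASK);
                Graphics2D tg = tile.createGraphics();
                tg.drawImage(bigImage,
                             0, 0, TILE_SIZE, TILE_SIZE,
                             tx * TILE_SIZE, ty * TILE_SIZE,
                             (tx + 1) * TILE_SIZE, (ty + 1) * TILE_SIZE,
                             null);
                tg.dispose();
                tiles[ty * cols + tx] = tile;
            }
        }

        // Render loop: draw straight into the BufferStrategy backbuffer
        // (no intermediate BufferedImage), then flip.
        while (true) {
            Graphics2D g = (Graphics2D) strategy.getDrawGraphics();
            for (int y = 0; y * TILE_SIZE < frame.getHeight(); y++) {
                for (int x = 0; x * TILE_SIZE < frame.getWidth(); x++) {
                    g.drawImage(tiles[(y * cols + x) % tiles.length],
                                x * TILE_SIZE, y * TILE_SIZE, null);
                }
            }
            // ...alpha-blended snowflakes would be drawn here with
            // g.drawImage() as well...
            g.dispose();
            strategy.show();
        }
    }
}

For the snowflakes you’d probably want Transparency.TRANSLUCENT copies rather than BITMASK, so the full alpha blend can be done in hardware as well.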
Thanks,
Chris