createCompatibleImage using different bit depths

I’ve noticed that on older video cards I can achieve a significant increase in frame rate in windowed mode if I set the desktop from 32 bits to 16 bits, for example. However, I’m wondering if it’s possible to achieve the same “effect” by enumerating through the GraphicsConfiguration array returned by GraphicsDevice.getConfigurations() and then selecting the gc that corresponds to the desired bit depth. The idea is to allow the user to select a “performance” or “quality” mode based upon the speed of their hardware.

However, all of the GraphicsConfigurations returned seem to contain no distinguishing information. If I watch them in the debugger, I can see that each GC is in fact a sun.awt.Win32GraphicsConfig and that each one contains a variable called “pixel_fmt” that seems to hold the bit depth information I’m looking for. But since that class is not exposed, this is obviously the wrong way to obtain the information. Calling gc.getColorModel().getPixelSize() returns 24 for each one.
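For reference, here’s roughly what I’m doing (a minimal sketch, nothing more; the window setup itself is omitted):

import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class GcDump {
    public static void main(String[] args) {
        GraphicsDevice gd = GraphicsEnvironment.getLocalGraphicsEnvironment()
                                               .getDefaultScreenDevice();

        // I was hoping each configuration would advertise a different pixel format...
        GraphicsConfiguration[] gcs = gd.getConfigurations();
        for (int i = 0; i < gcs.length; i++) {
            // getPixelSize() is the bits-per-pixel of the color model;
            // this prints 24 for every configuration on my machine.
            System.out.println("GC " + i + ": "
                    + gcs[i].getColorModel().getPixelSize() + " bpp");
        }
    }
}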

So anyway, is there a way to get the supported bit depth for a particular GraphicsConfiguration through the abstraction? Or is my understanding of the purpose of GraphicsConfiguration wrong? Is there a better way than what I’m trying to obtain an accelerated image with n bits of color without forcing the user to change their desktop color depth? Or is this not even technically possible?

Thanks for any help.

The info you want is in the DisplayMode class…
Maybe gc.getDevice().getDisplayMode().getBitDepth() ?
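Something like this, off the top of my head (untested sketch):

import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class DepthCheck {
    public static void main(String[] args) {
        GraphicsDevice gd = GraphicsEnvironment.getLocalGraphicsEnvironment()
                                               .getDefaultScreenDevice();

        // The depth of the current desktop mode
        System.out.println("Current: " + gd.getDisplayMode().getBitDepth() + " bpp");

        // All the modes the device claims to support (for full-screen exclusive mode)
        DisplayMode[] modes = gd.getDisplayModes();
        for (int i = 0; i < modes.length; i++) {
            System.out.println(modes[i].getWidth() + "x" + modes[i].getHeight()
                    + " @ " + modes[i].getBitDepth() + " bpp");
        }
    }
}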

Thanks for the info. It seems like it should work, but it only returns 32 for every gc. I can’t seem to crack this nut. However, the more I’ve thought about it, the more I wonder if it’s worth the effort. If the desktop is in 32-bit mode, I wonder if there is actually any performance improvement in drawing images using 16 bits, other than the reduced memory requirement. And even then I wonder if the desktop/video card/whatever would just convert the 16-bit image into a 32-bit image before displaying it anyway. My main intention was to experiment, but it doesn’t appear that this is even possible currently.

For best performance, convert your sprites to match the bit depth of the display when you load them. Otherwise there could be slow on-the-fly conversions happening in your game loop. (This seems to be a particular nuisance on the Mac, which really likes the image format of sprites to match the display.)
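A rough sketch of what I mean at load time (assumes you already have a GraphicsConfiguration, e.g. from the component you’re rendering to, and uses ImageIO just as an example loader):

import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class SpriteLoader {
    // Copies the decoded image into a buffer that matches the display's pixel format,
    // so the format conversion happens once at load time instead of every frame.
    public static BufferedImage loadCompatible(File file, GraphicsConfiguration gc)
            throws IOException {
        BufferedImage raw = ImageIO.read(file);
        BufferedImage compatible = gc.createCompatibleImage(
                raw.getWidth(), raw.getHeight(), raw.getTransparency());
        Graphics2D g = compatible.createGraphics();
        g.drawImage(raw, 0, 0, null);
        g.dispose();
        return compatible;
    }
}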

GraphicsConfiguration objects correspond to pixel formats.
On Windows the depth will always be the same as the desktop depth. On other platforms (e.g. Solaris SPARC) there are video boards which can handle windows with different depths, so there you can have GraphicsConfigurations with different depths (GC objects correspond to X11 visuals on Unix).

As for converting sprites to the format of the screen: it’s not necessary, because they’ll be cached in vram, so the conversion happens only once, when they’re copied from system memory to vram; after that it’s a vram->vram copy (well, at least in our implementation)…
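If you want to verify whether a particular image actually got cached in vram, more recent JDKs (1.5+) expose Image.getCapabilities; a quick check might look like this (sketch, assuming that API is available to you):

import java.awt.GraphicsConfiguration;
import java.awt.Image;

public class AccelCheck {
    // Reports whether the image is currently accelerated (i.e. cached in vram) for this gc.
    // Check it after the image has been drawn at least once; before that it may
    // still live only in system memory.
    public static boolean isAccelerated(Image img, GraphicsConfiguration gc) {
        return img.getCapabilities(gc).isAccelerated();
    }
}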

Excellent, thanks for the clarification. So GraphicsConfigurations do enumerate pixel formats, as I thought they should; it’s just that Windows doesn’t really allow mixing different bit depths, hence all the GCs reporting the desktop’s 32 bits. That also answered a number of other questions I had about the GC. I always felt uneasy about simply calling getDefaultConfiguration to select the GC, wondering if I was missing out on a more appropriate GC for what I wanted to do. Now I know that, at least on Windows, they’re all pretty much the same.