Image Compression/Decompression

Is there a way to do image compression/decompression directly in hardware via OpenGL? At the moment I’m using JPEG compression on one machine via IPP, sending that data over a network, and then decompressing the image on the receiving end. Is it possible to take this compressed image data and have OpenGL display it directly? In any case, I’d prefer to have OpenGL do the decompression for me, to minimize the load on the CPU. Any ideas?

Cheers

Maybe write a Cg shader to render the image to a screen-wide polygon and supply the compressed data as a texture…
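Even without Cg, the “screen-wide polygon” part is just a textured quad. A minimal JOGL-style sketch, assuming a current gl, an already-filled texture object texId (a name made up for illustration), and identity modelview/projection matrices:

    // Draw one quad covering all of clip space, sampling the bound texture.
    gl.glEnable(GL.GL_TEXTURE_2D);
    gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-1f, -1f);
    gl.glTexCoord2f(1f, 0f); gl.glVertex2f( 1f, -1f);
    gl.glTexCoord2f(1f, 1f); gl.glVertex2f( 1f,  1f);
    gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-1f,  1f);
    gl.glEnd();

Whatever format the texture was given (S3TC or a driver-chosen one), the hardware decodes it on the fly while sampling, so no shader is strictly required for that.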

I think I might try using S3TC; however, I don’t have a graphics card on the encoding side. Does anyone know whether a software-based S3TC encoder exists whose output OpenGL can later decode on the fly?

[quote]I think I might try using S3TC; however, I don’t have a graphics card on the encoding side. Does anyone know whether a software-based S3TC encoder exists whose output OpenGL can later decode on the fly?
[/quote]
Yes, some Adobe plugins write the S3TC format, as do a few small command-line tools. One such tool is on NVIDIA’s developer page: DDS Utilities. It converts a PNG/TGA/etc. to the lossy DXT1c, DXT1a, DXT3, and DXT5 formats. The handy XnView viewer can read S3TC/DDS files, too.
In case you’re interested in the S3TC format details, please have a look at the OpenGL extension named GL_EXT_texture_compression_s3tc.

Please bear in mind that, although S3TC is lossy, it’s not comparable to JPEG. S3TC’s file size is much larger because the aim is lightning-fast realtime compression and decompression. It “just” reduces the amount of alpha and colour information (bits per pixel) per pixel group, but doesn’t apply any form of entropy coding such as RLE or LZW. (So don’t be surprised that an all-black texture ends up the same compressed size as a fully detailed one.)
Of course the output is smaller than its uncompressed raw original, between one half and one sixth of its size, so using compressed textures is a good idea - at runtime.
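To put numbers on those factors: DXT1 stores each 4×4 pixel block in 64 bits, i.e. 4 bits per pixel instead of 24 for raw RGB (factor 6), while DXT5 stores 128 bits per block, i.e. 8 bits per pixel instead of 32 for raw RGBA (factor 4).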

Since losslessly compressed PNG files are much smaller than lossy S3TC files, I would suggest storing PNGs and letting OpenGL compress them at runtime on the fly - optionally, because high-end 3D cards won’t need compressed textures at all, and uncompressed ones look better.

Telling OpenGL to compress a texture is simple; please see this thread: Texture compression.
Or visit the NVIDIA OpenGL extension page and have a look at the OpenGL extension named ARB_texture_compression. It also explains how to use compression, how to read a compressed texture back from the card and upload it again later, etc.

Using GL_ARB_texture_compression has the big advantage that you don’t rely on any specific format like S3TC, so the card will choose the format it handles best.

I’m not quite sure what you mean when you say:
" I would suggest to store PNGs and let OpenGL compress them at runtime on the fly "

Do you mean decompress them? I’m assuming if you store PNGs, you’re storing a compressed version. Do you mean have OpenGL decompress them on the fly?

What I never understood about OpenGL’s compression support is that it hides how the compression is done. If it says it has applied GL_RGBA compression, what exactly does that mean? What’s the format, exactly?

I basically don’t have a GL implementation doing the compression, but I want a GL implementation to do the decompression and display the result on screen. How do I make sure that the OpenGL implementation can decompress the compressed file I provide?

[quote]I’m not quite sure what you mean when you say:
" I would suggest to store PNGs and let OpenGL compress them at runtime on the fly "

Do you mean decompress them? I’m assuming if you store PNGs, you’re storing a compressed version. Do you mean have OpenGL decompress them on the fly?
[/quote]
No, sorry for the lack of clarity. I meant your own code that talks to OpenGL: it loads and decompresses the PNG (via ImageIO.read into a BufferedImage, for example). When it hands the BufferedImage’s data bytes to OpenGL, it tells OpenGL: compress this (lossily).
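A sketch of that flow, assuming JOGL-style bindings (the method and file names are illustrative; GL_COMPRESSED_RGBA_ARB comes from ARB_texture_compression):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import javax.imageio.ImageIO;
    import javax.media.opengl.GL;

    // Decode a PNG losslessly on the CPU, then let the driver compress it
    // (lossily) at upload time. Assumes a texture object is already bound
    // and "gl" is the current GL of your drawable.
    static void uploadAsCompressedTexture(GL gl, File pngFile) throws IOException {
        BufferedImage img = ImageIO.read(pngFile);
        int w = img.getWidth(), h = img.getHeight();

        // Repack the pixels as tightly packed RGBA bytes.
        ByteBuffer pixels = ByteBuffer.allocateDirect(w * h * 4);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int argb = img.getRGB(x, y);
                pixels.put((byte) (argb >> 16)); // R
                pixels.put((byte) (argb >> 8));  // G
                pixels.put((byte) argb);         // B
                pixels.put((byte) (argb >> 24)); // A
            }
        }
        pixels.rewind();

        // The generic "compress this for me" internal format.
        gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_COMPRESSED_RGBA_ARB,
                        w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, pixels);
    }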

[quote]What I never understood about OpenGL’s compression support is that it hides how the compression is done. If it says it has applied GL_RGBA compression, what exactly does that mean? What’s the format, exactly?
[/quote]
I don’t think it hides it away. There are two ways to achieve on-the-fly compression when binding a texture by calling
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, internalformat, …)

  1. The more generic way is to let OpenGL decide which compression format it likes most, by using the ARB token:
    internalformat = GL.GL_COMPRESSED_RGB_ARB;

It’s mentioned in the other linked thread: after this you can ask which compression format it actually used, as well as what size it compressed the texture to, by calling
glGetTexLevelParameteriv(…)
Please see the other thread for details on that.

  2. You could directly ask for S3TC compression, by using:
    internalformat = GL.GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;

For method 1) your card must support “GL_ARB_texture_compression”, while for method 2) it must support “GL_EXT_texture_compression_s3tc”. As said, you can query extensions via gl.isExtensionAvailable(…); a combined sketch of method 1) follows below.
(I don’t know right now which of the various DXT1…5 formats a card actually supports when it says it’s got GL_EXT_texture_compression_s3tc implemented, because I’ve never used it.)
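Putting method 1) together, a sketch of the upload-then-query flow, assuming JOGL-style bindings and that pixels holds w*h tightly packed RGBA bytes:

    // Generic ARB compression, then ask the driver what it actually did.
    if (gl.isExtensionAvailable("GL_ARB_texture_compression")) {
        gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_COMPRESSED_RGBA_ARB,
                        w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, pixels);

        int[] result = new int[1];
        // Was the texture really stored compressed?
        gl.glGetTexLevelParameteriv(GL.GL_TEXTURE_2D, 0,
                GL.GL_TEXTURE_COMPRESSED_ARB, result, 0);
        if (result[0] == GL.GL_TRUE) {
            // Which internal format did the driver pick?
            gl.glGetTexLevelParameteriv(GL.GL_TEXTURE_2D, 0,
                    GL.GL_TEXTURE_INTERNAL_FORMAT, result, 0);
            int chosenFormat = result[0];
            // How many bytes does the compressed image occupy now?
            gl.glGetTexLevelParameteriv(GL.GL_TEXTURE_2D, 0,
                    GL.GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB, result, 0);
            int compressedBytes = result[0];
        }
    }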

[quote]I basically don’t have a GL implementation doing the compression, but I want a GL implementation to do the decompression and display the result on screen. How do I make sure that the OpenGL implementation can decompress the compressed file I provide?
[/quote]
By asking at runtime whether the card supports GL_EXT_texture_compression_s3tc, and then using the appropriate image-format and internal-format parameters at texture bind time (they’re mentioned on the NVIDIA page as well as in the official OpenGL extension registry, I think).

You can compress the files to S3TC at deploy time with one of the NVIDIA tools I mentioned, for example.
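For uploading such pre-compressed data there is glCompressedTexImage2D. A sketch, assuming JOGL-style bindings and a ByteBuffer dxt5Data that holds the raw DXT5 payload read out of a DDS file (parsing the DDS header is not shown); the size formula comes from the s3tc extension spec:

    if (!gl.isExtensionAvailable("GL_EXT_texture_compression_s3tc")) {
        throw new RuntimeException("S3TC not supported on this card");
    }
    // DXT5 stores each 4x4 pixel block in 16 bytes.
    int imageSize = ((w + 3) / 4) * ((h + 3) / 4) * 16;
    gl.glCompressedTexImage2D(GL.GL_TEXTURE_2D, 0,
            GL.GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
            w, h, 0, imageSize, dxt5Data);

That way the receiving machine never decompresses the pixels on the CPU; the card decodes the blocks whenever the texture is sampled.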