RLE compression for textures

I have a set of textures to display (16384x8192 when combined) that consume too much memory to be practical. Standard DXT compression won’t be enough here. However, many of their texels are the same color. In theory, if I could compress the textures with run-length encoding (http://en.wikipedia.org/wiki/Run-length_encoding), it would dramatically reduce the memory they require.

I’m not aware of anything in OpenGL that does this, so I was thinking of trying to do it crudely with a fragment shader. For each horizontal line of an uncompressed texture, I would create a short 1D texture containing a repeating pattern of pairs: one texel encoding the color to display, followed by one texel encoding how many texels that color extends for in the uncompressed line. Let me elaborate:

(Assume I’m only dealing with a single color channel here)
Here are the first 15 texels of the 1st line of a texture: 1 1 1 1 5 5 6 28 28 28 28 28 28 28 28
Compressed form: 1 4 5 2 6 1 28 8

Each pair consists of a color followed by the number of texels that color repeats for in the uncompressed texture. As you can see, the “compressed” sequence can be much smaller than the original if it contains long runs of the same color.
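Concretely, the encoding step could look like this (a CPU-side Python sketch; `rle_encode` is a hypothetical helper name, not part of any API):

```python
def rle_encode(texels):
    """Run-length encode a sequence of texel values into flattened
    (color, count) pairs, the layout described above."""
    runs = []
    for t in texels:
        if runs and runs[-1][0] == t:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([t, 1])       # start a new run
    return [v for run in runs for v in run]

print(rle_encode([1, 1, 1, 1, 5, 5, 6, 28, 28, 28, 28, 28, 28, 28, 28]))
# → [1, 4, 5, 2, 6, 1, 28, 8]
```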

In a fragment shader, I should be able to calculate the index of the desired texel in the hypothetical uncompressed texture. I could then iterate through the compressed texture until I reach the data for that texel. I’m concerned about whether all of this texture accessing is feasible, and about losing the automatic mipmapping and texture filtering.
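The per-texel lookup the shader would perform amounts to a linear scan over the runs. Modeled on the CPU side in Python (again with hypothetical names):

```python
def rle_fetch(compressed, x):
    """Return the color of uncompressed texel x by walking the
    (color, count) pairs in order, as the shader loop would."""
    covered = 0
    for i in range(0, len(compressed), 2):
        color, count = compressed[i], compressed[i + 1]
        covered += count
        if x < covered:
            return color
    raise IndexError("x is beyond the encoded line")

line = [1, 4, 5, 2, 6, 1, 28, 8]
print(rle_fetch(line, 0), rle_fetch(line, 6), rle_fetch(line, 14))
# → 1 6 28
```

Note the cost: each fetch is O(number of runs) texture reads, which is exactly why random access into RLE data is expensive.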

Is there anything I might have overlooked with this approach? Is it even a good idea? Is there a better approach (or even just a different one) that you can think of? I should probably mention that I plan to alter these textures slightly from time to time, but they will always have large areas of the same color at any given moment.

Definitely not a good idea :slight_smile: Use clever texture caching instead. Remember how mipmapping works and which mipmap levels you actually see on screen: typically only a few textures need to be at mipmap level 0 (the largest). So keep your RLE (or otherwise compressed) images in system memory or on disk and stream them in as required. You can look up Carmack’s MegaTexture technique for a similar approach.

That’s about 64 MB with DXT1, isn’t it? OpenGL automatically swaps textures in and out as necessary; as long as all the textures required for a given scene fit into VRAM, everything should be fine.
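The 64 MB figure checks out: DXT1 stores each 4x4 texel block in 8 bytes, i.e. half a byte per texel.

```python
width, height = 16384, 8192
# DXT1: 8 bytes per 4x4 block = 0.5 bytes per texel
dxt1_bytes = (width * height // 16) * 8
print(dxt1_bytes // 2**20)   # → 64 (MiB); a full mipmap chain adds roughly 1/3 more
```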

Did you already try it?

I guess it all comes down to how you are accessing the data. If you expect any sort of random access, just say no to RLE. You say S3TC won’t cut it, but spend a little more time looking at the algorithm: there’s nothing requiring it to be applied to 4x4 blocks; you can actually choose any block size you want. The same goes for the reconstruction equations. All of this should be easily doable in a shader.
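For reference, the reconstruction equations being generalized here are simple. A Python sketch of standard DXT1 opaque-mode decoding (two RGB565 endpoints expanded to a 4-entry palette; the 3-color transparent mode is ignored for brevity):

```python
def decode565(c):
    """Expand a 16-bit RGB565 value to 8-bit-per-channel RGB."""
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def dxt1_palette(c0, c1):
    """The four palette colors of an opaque DXT1 block: the two endpoints
    plus two interpolated colors at 1/3 and 2/3 between them."""
    p0, p1 = decode565(c0), decode565(c1)
    p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
    p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    return [p0, p1, p2, p3]

print(dxt1_palette(0xFFFF, 0x0000))
# → [(255, 255, 255), (0, 0, 0), (170, 170, 170), (85, 85, 85)]
```

Each texel then stores just a 2-bit index into this palette, which is where the 0.5 bytes/texel comes from; the same endpoint-plus-index idea works with larger blocks if you implement the decode yourself in a shader.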