I have a set of textures to display (16384x8192 when combined) that consume too much memory to be practical; at 4 bytes per texel that's 512 MB before mipmaps. Standard DXT compression won't be enough here. However, many of their texels are the same color. In theory, if I could compress the textures with run-length encoding (http://en.wikipedia.org/wiki/Run-length_encoding), that would dramatically reduce the memory they require.
I'm not aware of anything in OpenGL that does this, so I was thinking of doing it crudely with a fragment shader. For each horizontal line of an uncompressed texture, I would create a short 1D texture containing a repeating pattern of pairs: one texel encoding a color, followed by one texel encoding how many texels that color extends for in the uncompressed texture. Let me elaborate:
(Assume I’m only dealing with a single color channel here)
Here are the first 15 texels of the 1st line of a texture: 1 1 1 1 5 5 6 28 28 28 28 28 28 28 28
Compressed form: (1,4) (5,2) (6,1) (28,8)
In each pair, the first number is the color, and the second is how many texels that color is repeated for in the uncompressed texture. As you can see, the compressed sequence can be much smaller than the normal sequence if it has long runs of the same color. A rough sketch of this packing is below.
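To make the layout concrete, here's roughly how I'd pack one row on the CPU side (this assumes a single 8-bit channel; encodeRowRLE is just a name I made up, and I cap runs at 255 so each count fits in one byte of an 8-bit texture):

```cpp
#include <cstdint>
#include <vector>

// Pack one row of single-channel texels into alternating
// (color, run length) entries. Runs are capped at 255 so each
// count fits in a single byte of an 8-bit texture.
std::vector<uint8_t> encodeRowRLE(const uint8_t* row, int width)
{
    std::vector<uint8_t> packed;
    int i = 0;
    while (i < width) {
        const uint8_t color = row[i];
        int run = 1;
        while (i + run < width && row[i + run] == color && run < 255)
            ++run;
        packed.push_back(color);                     // texel n:   the color
        packed.push_back(static_cast<uint8_t>(run)); // texel n+1: its run length
        i += run;
    }
    return packed;
}
```

One consequence of the 255 cap: even a completely uniform 16384-texel row still needs 65 pairs, but that's only 130 texels instead of 16384.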
In a fragment shader, I should be able to calculate the index of the desired texel in the hypothetical uncompressed texture. I could then iterate through the compressed texture until I reach that texel's data. I'm concerned about whether all of this texture accessing is feasible, and about the fact that this eliminates the nice automatic mipmapping/texture filtering.
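For reference, here's a rough sketch of the lookup I have in mind, with the GLSL embedded as a C++ string the way I'd load it. All the names (rowTex, rowTexels, uncompressedWidth) are invented, and I'm assuming one compressed 1D texture per row with the counts stored as normalized 8-bit values:

```cpp
// GLSL fragment shader for the lookup, as a C++ string literal.
// Assumes the vertex shader passes through a "uv" coordinate.
const char* kRLELookupFS = R"glsl(
#version 330 core
uniform sampler1D rowTex;            // alternating (color, count) texels
uniform int       rowTexels;         // number of texels in rowTex
uniform int       uncompressedWidth; // width of the hypothetical full texture
in  vec2 uv;
out vec4 fragColor;

void main()
{
    // Index of the desired texel in the uncompressed row.
    int target = int(uv.x * float(uncompressedWidth));

    // Walk the (color, count) pairs, accumulating how many
    // uncompressed texels have been covered so far; stop once
    // the running total passes the target index.
    int   covered = 0;
    float color   = 0.0;
    for (int i = 0; i < rowTexels; i += 2) {
        color    = texelFetch(rowTex, i, 0).r;
        // Counts are stored normalized (count / 255), so scale back up.
        covered += int(texelFetch(rowTex, i + 1, 0).r * 255.0 + 0.5);
        if (target < covered)
            break;
    }
    fragColor = vec4(vec3(color), 1.0);
}
)glsl";
```

In the worst case this loop does a texelFetch per pair in the row, so the cost grows with the number of runs, which is exactly the feasibility question above.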
Is there anything that I might have overlooked in this approach? Is it even a good idea? Is there a better approach (or even just a different one) that you can think of? I should probably mention that I plan to alter these textures slightly from time to time, but they will always have large areas of the same color at any given moment.