sRGB textures

Hi, I’m having some trouble with sRGB textures. Here’s my code to create the texture:


...
g_gammaTexture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, g_gammaTexture);

for (int mipmapLevel = 0; mipmapLevel < pImageSet.getMipmapCount(); mipmapLevel++) {
	SingleImage image = pImageSet.getImage(mipmapLevel, 0, 0);
	Dimensions dims = image.getDimensions();
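	// GL_SRGB8 below tells GL that the texel data is sRGB-encoded; the client data is packed 8-bit BGRA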

	glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_SRGB8, dims.width, dims.height, 0,
		GL12.GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, image.getImageData());
}

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, pImageSet.getMipmapCount() - 1);
glGenerateMipmap(GL_TEXTURE_2D);

glBindTexture(GL_TEXTURE_2D, 0);

If you want to see the rest of the code: https://github.com/rosickteam/OpenGL/blob/master/rosick/mckesson/IV/tut16/GammaCheckers02.java

This http://www.arcsynthesis.org/gltut/Texturing/Tut16%20Mipmaps%20and%20Linearity.html is the tutorial I am porting to LWJGL; I suggest quickly reading it to understand my problem.
Now I’m asking for any advice you can give me on using sRGB… am I generating the mipmaps in the correct way?
I think the problem is that I’m not loading the texture correctly… could the error be caused by a wrong ByteBuffer (image.getImageData())?
I’ve checked it many times and it seems correct to me…

To summarize, here is what I get:

http://www.arcsynthesis.org/gltut/Texturing/Gamma%20Checkers.png

and here is what the texture should be:

http://desmond.imageshack.us/Himg408/scaled.php?server=408&filename=immaginerzj.png&res=medium

Any help is greatly appreciated.

I’ve heard that there is such a thing as gamma-correct scaling of images.
I’ve never used sRGB textures, but perhaps glGenerateMipmap doesn’t handle non-linear (sRGB) textures well.
What do you get when you generate the mipmaps with an external tool, like the ones from NVIDIA or AMD?

glGenerateMipmap is not required to perform filtering in linear space, unless EXT_texture_sRGB_decode is supported. So that is most likely the problem.
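If you want to check for that at runtime, something like this works (just a sketch, using the core-profile extension query, so it assumes a GL 3.0+ context and the usual LWJGL static imports):

boolean srgbDecodeSupported = false;
int numExtensions = glGetInteger(GL30.GL_NUM_EXTENSIONS);
for (int i = 0; i < numExtensions; i++) {
	if ("GL_EXT_texture_sRGB_decode".equals(GL30.glGetStringi(GL_EXTENSIONS, i))) {
		srgbDecodeSupported = true;
		break;
	}
}
// if false, don't count on glGenerateMipmap filtering the sRGB texture in linear space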

The best option is to perform your own mipmap generation. This is advisable even for non-sRGB textures, because a) you get to control the filtering algorithm (can use bicubic or more advanced algorithms) and b) most artist-made textures are in sRGB space anyway and downsampling in linear space is the only proper way to generate mipmaps.

The process is simple: read sRGB -> convert to a linear fp format -> downsample using fp arithmetic -> convert back to sRGB. It can also be translated to GLSL or OpenCL very easily, so it’s fast both for offline generation and for real-time use with dynamic textures.
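To make the conversion steps concrete, here is a small sketch (plain Java, illustrative only) of the exact piecewise sRGB transfer functions; the plain pow() with a ~2.2 exponent used later in this thread is a close approximation of this curve:

final class SrgbMath {
	// sRGB-encoded value in [0, 1] -> linear light in [0, 1]
	static double srgbToLinear(double s) {
		return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
	}

	// linear light in [0, 1] -> sRGB-encoded value in [0, 1]
	static double linearToSrgb(double l) {
		return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1.0 / 2.4) - 0.055;
	}
}

This is also why naive mipmapping makes the checkerboard too dark: averaging sRGB 0 and 255 directly gives 128, while averaging them in linear light and converting back gives roughly 188.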

Why is getting gray surprising? I admit it looks kind of weird though. Anyway, try enabling anisotropic filtering:

if(GLContext.getCapabilities().GL_EXT_texture_filter_anisotropic){
	float max = glGetFloat(EXTTextureFilterAnisotropic.GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT);
	System.out.println("Enabling " + max + "x anisotropic filtering");
	glTexParameterf(GL_TEXTURE_2D, EXTTextureFilterAnisotropic.GL_TEXTURE_MAX_ANISOTROPY_EXT, max);
}

What’s an sRGB texture?

Standard RGB, as in pre-corrected for the basic standard “monitor gamma”. I don’t think it makes a lick of difference when it comes to scaling though, as the above checkerboard demonstrates. Anisotropic filtering should help somewhat.

Thanks for the responses.
I’m new to OpenGL programming, and this is my first code dealing with mipmaps, sRGB and so on… sorry if I say something wrong.

@Danny02, Spasi
In the original tutorial (which is written in C++) there is no glGenerateMipmap(); I had to add it, otherwise the textures would be black.
Perhaps this is the problem…? How can I check whether the mipmaps were generated with another tool? Are they stored in the texture data?

@theagentd, sproingie
The problem isn’t getting gray, but getting a ‘darker’ gray than the expected one. And yes, with anisotropic filtering the texture ‘propagates’ correctly, but that is not my problem :)
For sRGB textures see the link I posted in the first post; basically it’s a texture whose colors are encoded differently from ‘normal’ RGB.

@sproingie
It makes a big difference actually. Try comparing a texture in Photoshop, as the artist designed it, with how it looks in a game without gamma-correct rendering: it won’t match. The result gets worse depending on how many linear filtering operations have been performed on the non-linear texture data. The issue is not that hard to solve, but you’d be surprised how many games get this wrong.

The 3 most common sources of filtering error are: 1) mipmap generation, 2) texture sampling and 3) lighting calculations. Every time you perform an addition between a texel value and something else, the texel has to be in linear space. You can fix 1) with custom mipmap generation, 2) with sRGB textures and 3) with sRGB textures or simple pow() functions in the shader.

There’s no way around 2): the texture sampling hardware has to know that it’s dealing with non-linear data. It first has to convert from gamma space to linear, perform the linear/mipmap/anisotropic filter and then return the texture color to the shader. Older hardware cannot do this, but this tends to be the weakest source of error.

edit: Some games use a 2.0 gamma exponent instead of 2.2, an approximation that allows them to use sqrt(x) and x*x in the shader, instead of pow(x, 1.0/2.2) and pow(x, 2.2) which are more expensive. I wouldn’t recommend it these days.

You might want to change the Math.sqrt(…) to Math.pow(…, 1.0/gamma)
I modified the post to handle arbitrary values of gamma correctly, with some hints from Spasi.

This is where the magic happens:


   public static int[] half(int[] argbFull, int w, double gamma)
   {
      int h = argbFull.length / w;
      int w2 = w/2;
      int h2 = h/2;

      int[] argbHalf = new int[argbFull.length >>> 2];

      for (int y = 0; y < h2; y++)
      {
         for (int x = 0; x < w2; x++)
         {
            int p0 = argbFull[((y << 1) | 0) * w + ((x << 1) | 0)];
            int p1 = argbFull[((y << 1) | 1) * w + ((x << 1) | 0)];
            int p2 = argbFull[((y << 1) | 1) * w + ((x << 1) | 1)];
            int p3 = argbFull[((y << 1) | 0) * w + ((x << 1) | 1)];

            int a = gammaCorrectedAverage(p0, p1, p2, p3, 24, gamma);
            int r = gammaCorrectedAverage(p0, p1, p2, p3, 16, gamma);
            int g = gammaCorrectedAverage(p0, p1, p2, p3,  8, gamma);
            int b = gammaCorrectedAverage(p0, p1, p2, p3,  0, gamma);

            argbHalf[y * w2 + x] = (a << 24) | (r << 16) | (g << 8) | (b << 0);
         }
      }
      return argbHalf;
   }

   static int gammaCorrectedAverage(int a, int b, int c, int d, int shift, double gamma)
   {
      double x = Math.pow(((a >> shift) & 0xFF) / 255.0, gamma);
      double y = Math.pow(((b >> shift) & 0xFF) / 255.0, gamma);
      double z = Math.pow(((c >> shift) & 0xFF) / 255.0, gamma);
      double w = Math.pow(((d >> shift) & 0xFF) / 255.0, gamma);

      return (int) Math.round(Math.pow((x+y+z+w) * 0.25f, 1.0 / gamma) * 255.0);
   }
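For example, feeding this into the upload loop from the first post could look roughly like this (just a sketch: argb, width and height are hypothetical names for the already-decoded top-level image, the texture is assumed to be square and power-of-two, a GL_TEXTURE_2D is assumed to be bound, and GL_SRGB8_ALPHA8 is used because half() also averages alpha):

   int level = 0;
   int w = width, h = height;
   int[] pixels = argb;
   while (true)
   {
      IntBuffer buffer = BufferUtils.createIntBuffer(pixels.length);
      buffer.put(pixels).flip();
      glTexImage2D(GL_TEXTURE_2D, level, GL21.GL_SRGB8_ALPHA8, w, h, 0,
         GL12.GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, buffer);
      if (w <= 1 || h <= 1) break;    // the smallest level has been uploaded
      pixels = half(pixels, w, 2.2);  // downsample in (approximately) linear space
      level++;
      w /= 2;
      h /= 2;
   }
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, level);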

You can find much better explanations and pictures here:

Half-way down in the second post, there’s a comparison of two spheres, one with gamma-correct rendering and one without. That yellow-ish ring around the specular in the left image is the most obvious clue you can find in games that don’t do gamma-correct rendering.

@Riven thank you for the code, I’ll try it :)

@Spasi, thanks again for your attention, and for the links, which I will read tomorrow :). Your words lead me to some more questions:

I have used the same resources (.dds) that the original author uses in his project, and the compiled C++ runs perfectly on my PC, so it’s not a hardware problem… so why did I have to add glGenerateMipmap() to get the texture displayed? Are there differences between C++ OpenGL and LWJGL?

Also, I opened the .dds with GIMP + the DDS plugin and I see only the texture, not texture + mipmaps (like the image in the “Gamma and Mipmapping” post you pointed me to), so I think the mipmaps are generated on the fly, am I right? How are they generated without glGenerateMipmap()?

These two points are not clear to me ???

Last thing I want to highlight: the tutorial separates the ‘sRGB textures’ part from the ‘gamma correction’ part:
-the ‘G’ key switches between the linear RGB checkerboard and the sRGB one (which should be ‘brighter’, but isn’t in my LWJGL application - my problem);
-the ‘A’ key switches between the ‘no gamma’ shader (which does nothing) and the ‘gamma’ shader, which performs this correction:

void main()
{
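	// raise the RGB channels to 1/2.2 (inverse gamma); the next line keeps alpha linear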
	vec4 gamma = vec4(1.0 / 2.2);
	gamma.w = 1.0;
	outputColor = pow(texture(colorTexture, colorCoord), gamma);
}

(which makes the textures even brighter - and it works perfectly)

If this is the source you’re porting, then all the mipmap levels should come from the texture. I’ll check the .dds file tomorrow. There’s no difference between C++ OpenGL and LWJGL, so it’s either a problem with your DDS loader or you have the wrong .dds file.

The only difference between a normal RGB texture and an sRGB texture is in the texture sampling. When you sample an RGB texture, you get the raw data unchanged. When you sample an sRGB texture, you get the texture data in linear space. That means you get the result of pow(texture(tex, coord), 2.2). Well, not exactly: ideally the pow is done before the linear/mipmap filtering, but in any case the end result is in linear space. So, you use that texture sample however you like (add lighting etc), then you need to output the final color. The problem is that, unless you’re doing HDR rendering, you need to go back to sRGB space. This can be achieved in two ways:

  • Do an explicit pow(color, 1.0 / 2.2) in the shader and write the result to the output color. This is what the shader code you posted does.
  • Use an sRGB framebuffer. That way you can output the linear-space color from the shader and the GPU will do the gamma-correction for you. An sRGB framebuffer basically provides the inverse functionality of an sRGB texture.
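In LWJGL, the second option boils down to something like this (a sketch; for the default framebuffer the window also needs an sRGB-capable pixel format, which depends on the driver):

// let the GPU convert linear -> sRGB on write (GL 3.0 / ARB_framebuffer_sRGB),
// so the shader can output linear-space color directly, without the pow()
if (GLContext.getCapabilities().OpenGL30) {
	glEnable(GL30.GL_FRAMEBUFFER_SRGB);
} else if (GLContext.getCapabilities().GL_ARB_framebuffer_sRGB) {
	glEnable(ARBFramebufferSRGB.GL_FRAMEBUFFER_SRGB);
}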

Yes, that is the source :)
I thought the fault could be my DDS loader (which is itself part of the port of the same project), but I have debugged the Java and C++ code side by side, and all the variables I could check had the same values… and the textures work in the previous tutorials (but always with glGenerateMipmap()), so I had no more ideas and decided to ask for help here. I’ll be waiting for your reply on the .dds file :)

OK, I checked the files. Both checker_linear.dds and checker_gamma.dds in the gltut/Tut 16 Gamma and Textures/data folder contain mipmaps. The checker_gamma one has been generated with gamma correction. So you shouldn’t need to call glGenerateMipmap, and using checker_gamma.dds + the above shader should result in gamma-correct rendering.

Good grief, nobody has ever told me about this stuff before :confused: Another bunch of things to learn about. Fortunately it has a bit less effect in 2D games.

Cas :slight_smile:

I’ve mentioned this elsewhere, but a good free book that covers a lot of ground is “Principles of Digital Image Synthesis” (http://realtimerendering.com/Principles_of_Digital_Image_Synthesis_v1.0.1.pdf). It’s a little old, but the fundamentals don’t change.

@princec, Roquen
I googled for some time before posting here and hadn’t found many resources on this topic… so thanks for the link :)

@Spasi I downloaded the NVIDIA texture tool and checked the textures, and it says they have mipmaps. So I re-debugged the DDS loader and found the error :) I was confused because GIMP didn’t show the mipmaps to me, so I thought it was some sort of sRGB problem. Now the mipmaps are correct, and they are loaded from the texture data, as you told me. Thank you for your time and your explanations; now I understand both mipmaps and sRGB :)

I’m sorry if I sound like a complete idiot here, but I completely fail to see the point in hacking around a user-specific hardware problem with software. Doesn’t gamma correction effectively reduce the quality of the texture, since the non-linear gamma color is put in a byte? Doesn’t this screw up (additive only?) blending badly? Isn’t it better to let the user apply gamma correction in his monitor or his graphics drivers, since if he wants gamma correction, wouldn’t he want it on everything? Why why why???

You might wonder why the monitor actually applies this gamma curve: if you render a gradient from black to white (or any other pair of colors) in RGB space, it only looks like a linear gradient after the gamma correction. This compensates for the non-linear perceived luminance of the human eye. The same effect can be found in analogue photographs, where increased exposure to light yields a non-linear decrease in remaining pigment in the picture. This non-linearity allows humans to view scenes where light intensities vary wildly (up to a factor of 10,000) without adjusting the diameter of the pupil.

You might find this an interesting read:

These pages can help you to adjust the gamma of your monitor:
http://www.lagom.nl/lcd-test/ (I’m viewing this on a dual-monitor setup; one monitor is surprisingly perfect, and the other is horribly off)

Gosh, you young’uns have it easy. Monitor responses are much more uniform than they used to be. But you still see radical differences between, say, LCD camera/phone displays and your average computer/TV screen (other than just luminance). To slightly derail the thread: RGB is always (even properly gamma corrected) perceptually non-uniform. Which means that if you think of a color as a vector in 3D space, moving some fixed distance does not produce a uniform change in perceived color. You have to move further in some directions than in others to be just noticeably different. Specifically, the eye is more sensitive to luminance changes than to chromatic ones.

My comment on creating a gradient between arbitrary colors was indeed a bit misleading. I think moving linearly through HSL color space will solve that (or at least give much better results).