sRGB textures

But what the hell is the point of doing that IN A PROGRAM, let alone precalculated in a texture?! It’s a limitation of the hardware, so it should be solved either by the hardware, or more realistically by drivers! I know how to adjust gamma correction, I just don’t see any reason at all for doing it myself!

I’m on a laptop, and the monitor sucks balls. The top of the monitor needs one gamma setting and the bottom needs another due to the viewing angle, so I can’t even get a good image with gamma correction! I did increase the gamma slightly since it made gradients look better, specifically anti-aliased geometry in motion, but this caused INSANE banding for darker colors which was simply ridiculous, so I immediately disabled it again. Driver gamma correction for antialiasing gradients also looked like shit in motion, but that might be mainly because you can’t tweak it at all. So WHY?! Just tell me a single reason for only gamma-correcting a single texture instead of the whole screen.

Because the gamma correction is already baked into the texture (unless it’s sRGB).
You can’t do (correct) calculations with it until you revert the gamma correction.

It’s not a limitation of the hardware, it’s a feature to adjust output for humans.

Wait wait wait! So it’s to LOAD quality-ruined gamma-corrected textures into normal color space textures, or what?

If it’s a device made by humans for humans and it’s not doing what it should (display linear colors when fed linear colors) it sure is a limitation in my book… >_>

No, it’s a way to convert a linear gradient (black to white in RGB space) into a non-linear gradient (on the monitor) to get the retina to produce chemical reactions that the brain interprets as a linear gradient.

(If you had a monitor that emitted a specific amount of light for ‘128,128,128’ and half as many photons when displaying ‘64,64,64’, it wouldn’t look half as bright to the human eye.)

@theagentd: download the book I linked. I’m pretty sure all of this is covered.

Example/Tutorial of a rather simple pixel shader to correct the gamma can be found here, might help.

You fail to understand that this has nothing to do with hardware. Unless you’re using floating point textures, there’s no way you could use linear-space with an 8-bit per channel texture without a severe impact on quality. You can think of sRGB as a lossy compression method for packing more “dark” info into 8-bit RGB textures. That’s because the human eye is more sensitive to dark details than bright details. And this compression comes for free, you don’t even have to do anything to make it happen. When an artist works on a texture inside Photoshop, they make it so that it looks pretty on their monitor. But that monitor works in gamma 2.2 space, so the RGB texture they just made is in gamma space by definition. There’s nothing you can do about it and there’s no need to. The same is true for most other kinds of images, photographs you take with your phone for example.
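To put a rough number on that “compression” (just a quick sketch of my own, and the 25% threshold is arbitrary): count how many of the 256 byte codes describe a color below 25% linear intensity under each encoding.

// Quick check: how many of the 256 byte codes describe a "dark" color
// (below 25% linear light) under a linear vs a gamma 2.2 encoding?
int darkLinear = 0, darkGamma = 0;
for ( int i = 0; i < 256; i++ ) {
	double v = i / 255.0;
	if ( v < 0.25 )                     // linear encoding: the byte value is the intensity
		darkLinear++;
	if ( Math.pow(v, 2.2) < 0.25 )      // gamma 2.2 encoding: decode before comparing
		darkGamma++;
}
System.out.println(darkLinear + " linear codes vs " + darkGamma + " gamma codes below 25% intensity");

With the gamma 2.2 encoding, roughly twice as many codes end up in the darks, which is exactly where the eye wants the precision.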

Now, when you want to manipulate that texture, specifically when that data participates in additions, the math fails unless it’s first converted to linear-space. It’s math, it’s not a hardware problem.
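Here’s a tiny Java sketch of that failure (the two 20% linear contributions are just made-up example values):

// Two contributions of 20% linear light, stored the way a texture stores them: gamma-encoded.
double a = Math.pow(0.2, 1.0 / 2.2);
double b = Math.pow(0.2, 1.0 / 2.2);

// Wrong: adding the gamma-encoded values directly.
double wrong = a + b;

// Right: decode to linear, add, re-encode for the monitor.
double right = Math.pow(Math.pow(a, 2.2) + Math.pow(b, 2.2), 1.0 / 2.2);

System.out.println(wrong + " vs " + right); // ~0.96 vs ~0.66, the naive sum comes out far too bright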

I also want to clarify the gamma setting you see in your graphics driver. That’s the gamma function applied to the incoming RGB values before being displayed on the monitor. That’s the inverse of the texture’s gamma encoding. Specifically, the texture looks nice in Photoshop because:

linear RGB -> texture sRGB = pow(RGB, 1.0 / 2.2) -> monitor RGB = pow(sRGB, 2.2) = pow(pow(RGB, 1.0 / 2.2), 2.2) = RGB, so your eyes see the original linear gradients.

Now, things change a bit in your engine. You have:

sRGB -> [FANCY RENDERER] -> monitor.

You have inverse gamma data coming in, the monitor expects inverse gamma data coming out. This implies that no matter what your renderer does, it has to output inverse gamma data. Some examples:

// RGB texture pass-through
// No correction needed, the texture is in inverse gamma by definition!
// That's the reason most simple sprite engines look fine.
fragColor = texture(tex, coord);

// sRGB texture pass-through
// The GPU has converted the sample from inverse gamma to linear for us, so we need to correct.
fragColor = pow(texture(tex, coord), vec4(1.0 / 2.2));

// RGB + lighting
// We're adding something, need to go linear, then gamma correct.
fragColor = pow(pow(texture(tex, coord), vec4(2.2)) * diffuse + specular, vec4(1.0 / 2.2));

// sRGB + lighting
// The sample is already linear (and even has correct linear/mipmap filtering).
fragColor = pow(texture(tex, coord) * diffuse + specular, vec4(1.0 / 2.2));

The final observation that should also be interesting to you is that I said “no matter what”. This means that even if you use floating point textures exclusively and you perform linear HDR rendering all the way, you STILL have to do gamma-correction during tone-mapping. Your tone-mapping operator needs to be gamma-aware or you simply go from linear to gamma-space as the last step in the tone-mapping shader.
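In plain Java floats, just to show the order of operations (the x / (1 + x) Reinhard-style operator is only a placeholder for whatever tone-mapper you actually use; the real code would live in the tone-mapping shader):

// Linear-space HDR value straight out of the renderer.
float hdr = 3.5f;

// Tone-map it into [0, 1]... the result is still linear.
float ldr = hdr / (1.0f + hdr);

// ...and only then gamma-encode, because the monitor expects inverse gamma data.
float display = (float)Math.pow(ldr, 1.0 / 2.2);
System.out.println(ldr + " linear -> " + display + " for the monitor");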

Let me see if I got this straight:

  • Artists create images, but since they are making them on a monitor, they will make them look good with the gamma correction applied.
  • When we, the game makers, load those images we need to undo the gamma correction from the textures to be able to do correct calculations.

And I still don’t get it. Why are the artists making incompatible textures? Why are we doing this correction when loading the texture instead of preprocessing the texture?

And why the f*ck do monitors expect inverse gamma data? That makes just as much sense as saying that you have to add Pi to each color channel or multiply each channel by 10 or something just because “the monitor expects it”. Maybe I’m just being stupid… ._.

It’s because the human eye has massive sensitivity in low light intensities compared to high light intensities. The scale is logarithmic, and you’d need to store a huge range of numbers if you wanted to store enough intensities of light such that you could get smooth darker gradients and also see black and white. Unfortunately when you’ve got 8 bits you’ve only got 256 values to play with. If you want to see smooth gradients in the darks, you’d need all 256 values just to cover, say, the first third of the available range in the monitor. So the output from the computer is encoded using this power scale thing, which basically gives you exponentially more as you get higher, just so you can get to white by the time you reach 255, but also see a consistent difference on the screen between each 1 point of difference.

The end result is, all the data is usually stored in RAM as this exponentially encoded RGB stuff, which means when you come to mipmap it using simple linear maths, e.g. (a + b) / 2, you get completely the wrong answer. You need to convert the log scale into linear scale first, then do the sum, then turn it back again. This goes for pretty much all blending operations in OpenGL, and that’s why there are a bunch of extensions for dealing with this stuff, and why computer graphics are suddenly getting more realistic looking, because finally the GPUs have the actual power to do this in realtime, and also why mostly nobody knows about it outside of hardcore engine coding :slight_smile: Or that’s my guess anyway.
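Something like this in Java, with two made-up texel values:

// Two neighbouring gamma-encoded texels to be averaged into one mipmap texel.
int a = 64, b = 192;

// Wrong: averaging the stored (gamma-encoded) bytes directly.
int naive = (a + b) / 2;

// Right: decode both to linear, average, re-encode.
double la = Math.pow(a / 255.0, 2.2);
double lb = Math.pow(b / 255.0, 2.2);
int correct = (int)Math.round(Math.pow((la + lb) * 0.5, 1.0 / 2.2) * 255.0);

System.out.println(naive + " vs " + correct); // 128 vs ~146, the naive average comes out too dark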

Cas :slight_smile:

All of this stems from the real world problem of creating an additive color display device. It’s been known (since at least the 1930s) that RGB is a terrible way to describe color. The only reason we use RGB is because it’s possible to build such a device. A good art package allows the artist to create images with a specific target color response… but all of that information is lost if not stored in a supporting format.

>_<

I understand WHY we have/need gamma correction, I’m just questioning HOW it should be done…

Well, I think the linked articles from Spas pretty much explained exactly what to do and when to do it.

Cas :slight_smile:

I think theagentd is asking why we need to convert from gamma to linear in the shader (or sRGB sampler), instead of doing it as a preprocess or when loading the texture data. The answer lies in what I said about sRGB being a form of compression. If you perform the linear conversion ahead of time and store the result in an 8-bit-per-channel texture, you lose information. This code sample should provide enough evidence that it’s true:

byte[] colors = new byte[256];
for ( int i = 0; i < colors.length; i++ )
	colors[i] = (byte)i;

for ( int i = 0; i < colors.length; i++ ) {
	// 8-bit-per-channel RGB source in gamma-space
	byte gammaSrc = colors[i];
	// The result of texture(tex, coord) in the shader.
	float gammaSample = (gammaSrc & 0xFF) / 255.0f;
	// The RGB value in linear-space, converted in the shader or through sRGB sampling.
	float gammaValue = (float)Math.pow(gammaSample, 2.2);

	// 8-bit-per-channel RGB source in linear-space
	byte linearSrc = (byte)(gammaValue * 255.0);
	// The result of texture(tex, coord) in the shader.
	float linearValue = (linearSrc & 0xFF) / 255.0f;

	System.out.println(gammaValue + " - " + linearValue);
}

If you run it and inspect the output, you’ll see that every value is different except the two extremes, 0.0 and 1.0. More importantly, sampling the linear-space texture results in the first 20 or so values being clipped at 0.0.

[quote=“theagentd,post:29,topic:38355”]And why the f*ck do monitors expect inverse gamma data?[/quote]
They expect inverse gamma data because every piece of visual information ever made, whether it’s photographs, textures, videos or even subpixel font antialiasing, has been designed with gamma output in mind. With good reason too. It couldn’t have been Pi, or 10x, or anything else, because the gamma curve emulates the light and color response in human eyes. It makes perfect sense and helps computer systems get the most quality out of only 8 bits of information per color channel.

Thanks Spasi. I think that cleared it up a bit, but I’m still skeptical. :stuck_out_tongue: I’ll just have to do as my artist says then. xd

I was doing some tests and something important came up. When going from 8-bit gamma to floating-point linear and back, the correct behavior is to round instead of truncating:

byte src = ...
double d = Math.pow((src & 0xFF) / 255.0, 2.2);
// This is wrong
byte trg = (byte)(Math.pow(d, 1.0 / 2.2) * 255.0);
// This is correct
byte trg = (byte)Math.round(Math.pow(d, 1.0 / 2.2) * 255.0);

So if you use Riven’s code above, the last line should be:

return (int)Math.round(Math.pow((x+y+z+w) * 0.25f, 1.0 / gamma) * 255.0);

Without rounding, a gamma-to-linear-to-gamma conversion of a 0-255 gradient will differ in 65 places and 24 gradient steps will be lost.
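If you want to check those counts yourself, here’s a quick sketch (assuming “lost gradient steps” means output values that no longer appear anywhere in the truncated round trip):

// Count how many of the 256 gradient values change when the round trip
// (gamma -> linear -> gamma) truncates instead of rounds, and how many
// output values are never produced at all.
int changed = 0;
boolean[] reachable = new boolean[256];
for ( int i = 0; i < 256; i++ ) {
	double linear = Math.pow(i / 255.0, 2.2);
	int truncated = (int)(Math.pow(linear, 1.0 / 2.2) * 255.0);
	if ( truncated != i )
		changed++;
	reachable[truncated] = true;
}
int lost = 0;
for ( boolean r : reachable )
	if ( !r )
		lost++;
System.out.println(changed + " values changed, " + lost + " gradient steps lost");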