[Solved] Shadow mapping depth bias

I’m having trouble finding a proper depth bias for my meshes (the comparison value in the following snippet)

uniform sampler2D shadow_map;

// Returns 0.0 if the fragment is in shadow and 1.0 if it is lit
float sampleShadowMap(vec2 texcoords, float comparison)
{
   vec4 depth_val = texture2D(shadow_map, texcoords);
   return depth_val.z < comparison ? 0.0 : 1.0;
}

Here’s how I calculate it now:


   // Perspective divide to bring the shadow coordinates into NDC space
   vec4 shadow_remove_persp = shadow_coord / shadow_coord.w;

   // Constant bias scaled by model size and shadow map resolution
   float depth_bias = shadow_remove_persp.z - (10.0 * model_scale / shadow_map_size);

It works great, sometimes. Is there really no way to calculate a good depth bias without just guessing?

  1. Use hardware depth comparisons instead. They’re faster and give you free bilinear filtering of the comparison results.

  2. Use glPolygonOffset() to apply the bias when rendering the shadow map instead of when sampling it. glPolygonOffset() can use the slope of each triangle to pick a bias dynamically, which is better than anything you can compute yourself in the shader.

  1. OK, maybe I’m misunderstanding this, but I am rendering and applying the shadow map in GLSL (two passes, obviously?). Isn’t that hardware?

  2. I’m confused. If I use glPolygonOffset(), how does it apply the bias in such a way that it knows what the shadow value should be?

Edit: do you mean I should enable glPolygonOffset() when rendering from the light’s position and then not even bother with a bias when doing the comparison?

Your edit is correct. The problem with shadow mapping is that to do an exact comparison between a projected fragment and the shadow map, you’d need both infinite floating-point precision and infinite shadow map resolution. To avoid this, we use a bias. The thing is that GPUs have a built-in way of pushing everything away from the light when rendering the shadow map, and that is a better time to apply the bias, since the calculation can take the slope of each triangle being rendered into account. When rendering the shadow map in the first pass, call glEnable(GL_POLYGON_OFFSET_FILL), then glPolygonOffset() to tweak the amount of bias. Remember to disable it afterwards.
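For reference, here’s a minimal sketch of what that first pass could look like. renderShadowCasters() is a hypothetical stand-in for your own depth-only draw calls, and the factor/units values are just starting points you’d tune per scene:

// First pass: render scene depth from the light's point of view
glEnable(GL_POLYGON_OFFSET_FILL);
// factor scales the offset by the polygon's depth slope; units adds a
// constant offset in implementation-specific depth steps. 2.0/4.0 are
// only example starting values.
glPolygonOffset(2.0f, 4.0f);
renderShadowCasters(); // hypothetical: your shadow map draw calls
glDisable(GL_POLYGON_OFFSET_FILL);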

GPUs also support hardware depth comparisons. In addition, they can do 4 depth tests and bilinearly interpolate between the results of those tests to achieve some basic filtering:

Without filtering:

With filtering:

Both run at the same speed (the FPS counter at the top is affected by the screenshot saving). See the shadow mapping part of this page http://www.java-gaming.org/index.php?topic=28018.0 to learn how to set it up. It’s quite simple.
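In case it helps, the core of the setup is just a couple of texture parameters on the depth texture. A rough sketch, assuming a GL 3.0+ context and that shadow_map_texture is your already-created depth texture (older contexts use GL_COMPARE_R_TO_TEXTURE instead):

glBindTexture(GL_TEXTURE_2D, shadow_map_texture);
// Sampling through a sampler2DShadow now returns the result of a depth
// comparison (0.0 or 1.0) instead of the raw depth value
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
// GL_LINEAR is what enables the free bilinear filtering of the
// comparison results
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);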

Thanks!

Regarding the optimizations though, I’ve heard that certain drivers have had trouble supporting sampler2DShadow and textureProj(). I haven’t tested this myself and am wondering if you have run into any issues?

No drivers I’ve ever used have had problems with sampler2DShadow, and our game has been run on a lot of different computers. I personally test it on an AMD, an Nvidia and an Intel card at home, and none of them have ever had problems with hardware shadow testing.

About textureProj(): it’s a bit redundant nowadays. Graphics cards don’t actually have dedicated hardware for the projection anymore, so the w-divide is simply done in “software”. textureProj() also isn’t usable if you’re doing PCF: when you’re taking multiple samples, you need to do the w-divide first so you don’t mess up the sampling offsets of the extra samples. Definitely use shadow samplers, but textureProj() you can avoid.
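To make that concrete, here’s a minimal sketch of a 4-tap PCF lookup through a shadow sampler, assuming GLSL 1.30+ and that shadow_coord is the fragment’s position in the light’s clip space (the function name and the offsets are just illustrative):

uniform sampler2DShadow shadow_map;

float sampleShadowPCF(vec4 shadow_coord)
{
   // Do the w-divide manually so the PCF offsets below aren't distorted
   vec3 proj = shadow_coord.xyz / shadow_coord.w;

   // Each tap is a hardware depth comparison with free bilinear
   // filtering of the comparison results
   float shadow = 0.0;
   shadow += textureOffset(shadow_map, proj, ivec2(-1, -1));
   shadow += textureOffset(shadow_map, proj, ivec2( 1, -1));
   shadow += textureOffset(shadow_map, proj, ivec2(-1,  1));
   shadow += textureOffset(shadow_map, proj, ivec2( 1,  1));
   return shadow * 0.25;
}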

OK cool, good to know.

Thanks!