GLSL Shadow mapping on ATI cards

Hi

I have problems running my GLSL shadow mapping code on ATI cards. It works perfectly on NVIDIA cards. Does anybody know what the problem with the code could be?

vertex program:


uniform int shadowMapUnit;
varying vec4 shadowTexCoord;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    
    // shadow texture coordinates generation
    shadowTexCoord = gl_TextureMatrix[ shadowMapUnit ] * gl_ModelViewMatrix * gl_Vertex;
    
    // vertex calculation
    gl_Position = ftransform();
}

fragment program:


uniform sampler2D texture0;
uniform sampler2DShadow shadowMap;

// This value is interpolated across the current primitive and fed by the vertex-shader.
varying vec4 shadowTexCoord;

float getShadow()
{
    // Extract shadow value from shadow-map.
    float shadow = shadow2DProj( shadowMap, shadowTexCoord ).z;
    
    //return( shadow * ( 1 - gl_LightModel.ambient ) );
    if ( shadow < 1.0 )
        return ( gl_LightModel.ambient.r );
    else
        return ( 1.0 );
}

void main()
{
    // Extract base color from first texture-unit.
    //vec4 baseColor = texture2D( texture0, gl_TexCoord[ 0 ].st ) * gl_LightModel.ambient;
    vec4 baseColor = texture2D( texture0, gl_TexCoord[ 0 ].st );
    baseColor.r *= gl_LightModel.ambient.r;
    baseColor.g *= gl_LightModel.ambient.g;
    baseColor.b *= gl_LightModel.ambient.b;
    baseColor.a *= gl_LightModel.ambient.a;
    
    if ( shadowTexCoord.w < 0.0 )
    {
        // No back-projection!
        gl_FragColor = baseColor;
    }
    else
    {
        float shadow = getShadow();
        
        vec4 color = shadow * baseColor;
        // restore original alpha value
        color.a = baseColor.a;
        
        gl_FragColor = color;
    }
}

Marvin

I have no ATI card to test on at the moment. It’s just a guess, but try moving the gl_FragColor write out of the if branches:


    vec4 color = baseColor;
    if ( shadowTexCoord.w >= 0.0 )
    {
        float shadow = getShadow();
        color *= shadow;
        // restore original alpha value
        color.a = baseColor.a;
    }
    gl_FragColor = color;

Thanks for the suggestion, but unfortunately it didn’t help.

I think I have identified the two problematic lines:


shadowTexCoord = gl_TextureMatrix[ shadowMapUnit ] * gl_ModelViewMatrix * gl_Vertex;

and


float shadow = shadow2DProj( shadowMap, shadowTexCoord ).z;

shadowTexCoord is always (0.0, 0.0, 0.0, 0.0), and I have no idea why. I have verified that both the model-view matrix and the correct texture unit’s texture matrix are set properly. I can also transform the vertex coordinate “manually” through (gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex) instead of ftransform(), which produces the expected geometry, so neither of those matrices can contain only zeroes. That leaves gl_TextureMatrix, but I can’t see how that matrix could end up with wrong values.
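
A quick way to confirm the zeroes on the card itself (just a throw-away debug sketch, not part of the real shader) is a fragment shader that dumps the coordinate as a colour:


varying vec4 shadowTexCoord;

void main()
{
    // Debug only: if the whole mesh renders black, the interpolated
    // shadowTexCoord really is (0.0, 0.0, 0.0, 0.0).
    gl_FragColor = vec4( abs( shadowTexCoord.xyz ), 1.0 );
}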

With shadowTexCoord always zero, shadow2DProj() can’t return anything other than (1.0, 1.0, 1.0, 0.0), which is exactly what it does.

Is there anything in the first quoted line above that ATI cards might not like?

Marvin

What happens if you make the values in your texture matrix equal to the values in your projection matrix?

So check whether

`textureMatrix == projMatrix`

results in:

`gl_TextureMatrix[ shadowMapUnit ] * gl_ModelViewMatrix * gl_Vertex == gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex`

If it does not, there is a bug in the ATI driver.
If it does, there is something wrong with your textureMatrix, and NVIDIA just handles it differently than ATI… :persecutioncomplex: or something like that.
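
A rough way to run that check directly on the GPU (debug-only shaders; the varying names are just made up for the test) could look like this:

vertex program:


uniform int shadowMapUnit;
varying vec4 viaTextureMatrix;
varying vec4 viaProjectionMatrix;

void main()
{
    // Transform the same eye-space vertex through both matrix paths.
    vec4 eyeVertex = gl_ModelViewMatrix * gl_Vertex;
    viaTextureMatrix    = gl_TextureMatrix[ shadowMapUnit ] * eyeVertex;
    viaProjectionMatrix = gl_ProjectionMatrix * eyeVertex;
    
    gl_Position = ftransform();
}

fragment program:


varying vec4 viaTextureMatrix;
varying vec4 viaProjectionMatrix;

void main()
{
    // With the texture matrix loaded with the projection matrix values,
    // a black image means both paths agree; anything else means they don't.
    gl_FragColor = vec4( abs( viaTextureMatrix.xyz - viaProjectionMatrix.xyz ), 1.0 );
}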

Looks like there is indeed a bug in the ATI driver. I will see if I can report it to ATI support.

Thanks a lot for your help.

Marvin

Just to be sure… what happens when you replace shadowMapUnit with the real number?

… I remember I ran into some stupid bugs just because the array index needed to be constant… or something. :slight_smile:
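
Roughly like this (only a sketch; the literal 1 is a placeholder for whatever texture unit your shadow map matrix actually lives on):


varying vec4 shadowTexCoord;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    
    // shadow texture coordinates generation with a constant index
    // instead of the uniform int
    shadowTexCoord = gl_TextureMatrix[ 1 ] * gl_ModelViewMatrix * gl_Vertex;
    
    gl_Position = ftransform();
}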

Yeap. This was the problem. Thank you very much.

Marvin

yay, you’re welcome :slight_smile: