SSAO Problem

I’ve got a problem with an SSAO shader I’m working on. For some reason it doesn’t seem to do anything at all: the whole screen comes out white. I’m reconstructing the position and linear depth correctly (see pics below), but the occlusion pass isn’t outputting anything useful. You can assume that I’ve debugged it to the point that it’s definitely the shader code, not something CPU-side.

I store the depth and normals in one texture then linearize the depth in the SSAO shader. Here’s the main code:


void main() {
	vec4 normDepth = texture2D(u_normalDepthTexture, v_texCoords);
	vec3 position = CalcPos(normDepth.a);
	//mat3 rotMat = CalcRotMat(normDepth.xyz);
	
	float occlusion = 0.0;
	
	for(int i = 0; i < u_sampleKernelSize; i++)
	{
		vec3 sample = u_sampleKernel[i];
		sample = sample * u_radius + position;

		vec4 offset = u_proj * vec4(sample, 1.0);
		offset.xy /= offset.w;
		offset.xy = offset.xy * 0.5 + 0.5;

		vec4 sampleColor = texture2D(u_normalDepthTexture, offset.xy);
		float sampleDepth = sampleColor.a;
		float linearSampleDepth = (2.0 * u_projAB.x) / (u_cam.y + u_cam.x - sampleDepth * (u_cam.y - u_cam.x));

		if(abs(position.z - linearSampleDepth) < u_radius)
		{
			occlusion += (linearSampleDepth <= sample.z) ? 1.0 : 0.0;
		}
	}

	gl_FragColor = vec4(1.0 - (occlusion / float(u_sampleKernelSize)));
}

and here’s CalcPos, in case it’s needed:


vec3 CalcPos(float depth)
{
	float linearDepth = (2.0 * u_cam.x) / (u_cam.y + u_cam.x - depth * (u_cam.y - u_cam.x));
	vec4 pos = u_invProj * vec4(v_texCoords * 2.0 - 1.0, linearDepth, 1.0);
	return pos.xyz / pos.w;
}

I haven’t implemented noise or kernel orientation yet, I just want to get the basics working.

Reconstructed position:

http://puu.sh/gZtE2/5ee9cb5f1a.jpg

Depth (non linear):

http://puu.sh/gZtGC/d837394c30.png

I’d appreciate any help. Thanks! :)

I think you’re mixing up two view-space reconstruction algorithms. You can either reconstruct the view-space position from the hardware depth value (the raw value from a depth texture) and the inverse projection matrix, OR from the linear depth value and a frustum corner vector.

Either


vec3 ndc = vec3(v_texCoords, depth) * 2.0 - 1.0;
vec4 pos = u_invProj * vec4(ndc, 1.0);
return pos.xyz / pos.w;

or



uniform vec3 nearFar; //This should be filled with vec3(far * near, far, far-near)
uniform vec2 frustumCorner;

...

float linearDepth = nearFar.x / (nearFar.y - depthValue * nearFar.z);
return vec3((texCoords * 2.0 - 1.0) * frustumCorner, -1) * linearDepth;

Secondly, you’re kind of doing things backwards. You get the best quality SSAO by sampling pixels in a certain radius around the current pixel (in 2D) and unprojecting those pixels into view-space 3D. Then you can calculate the view-space distance from the current pixel for each sample and modify the occlusion based on that. You, on the other hand, are calculating 3D positions around the pixel, projecting them to the screen and comparing against the depth of the pixel there.
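
In rough GLSL, that loop would look something like this (just a sketch; u_sampleOffsets is a stand-in for whatever 2D offsets you end up using, and CalcPosAt is a version of your CalcPos that takes an arbitrary texcoord and depth, neither of which you have yet):


float occlusion = 0.0;
for (int i = 0; i < u_sampleKernelSize; i++)
{
	//Pick a nearby pixel in 2D and read its stored depth
	vec2 sampleCoords = v_texCoords + u_sampleOffsets[i] * u_radius;
	float sampleDepth = texture2D(u_normalDepthTexture, sampleCoords).a;

	//Unproject the sampled pixel to view space and compare it to the current pixel
	vec3 viewSpaceSample = CalcPosAt(sampleCoords, sampleDepth);
	vec3 diff = viewSpaceSample - position;

	float occluded = viewSpaceSample.z > position.z ? 1.0 : 0.0; //the sampled geometry is closer to the camera (view space Z is negative)
	float rangeCheck = length(diff) < u_radius ? 1.0 : 0.0; //ignore samples too far away in view space
	occlusion += occluded * rangeCheck;
}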

Thanks for the response. I’ll fix those problems, they seem pretty easy. I understand the implementation a lot better now.
When you say ‘frustum corner’, can I use any of them? I was under the assumption that there are 8. I see you have a vec2 there… I think I’m thinking of the wrong frustum corners…

The frustum vector is simply the normalized XY of the top-right frustum corner. You can easily compute this by multiplying the 4D vector (1, 1, 1, 1) with the inverse projection matrix on the CPU, dividing by W and then dividing XY by Z to normalize it. It’s basically a vector that says "when view-space Z increases by 1, XY increases by this much". I use a modified LibGDX math library, so I do it like this:


		frustumCornerTemp.set(1, 1, 1).prj(inverseProjectionMatrix); //This transforms the vector (1, 1, 1, 1) with the matrix and divides by W.
		frustumCornerX = frustumCornerTemp.x / -frustumCornerTemp.z;
		frustumCornerY = frustumCornerTemp.y / -frustumCornerTemp.z;

Just got some more time to work on this recently. Here’s my code now:


// in Java, here's the vec2 I pass in for u_frustumCorner:

Vector3 tmpFrustCorner = new Vector3(1, 1, 1).prj(cam.projection.cpy().inv());
Vector2 frustCorner = new Vector2(tmpFrustCorner.x / -tmpFrustCorner.z, tmpFrustCorner.y / -tmpFrustCorner.z);

// in shader 

vec3 CalcPos(float depth)
{
	float linearDepth = depth;
	return vec3((v_texCoords * 2.0 - 1.0) * u_frustumCorner, -1) * linearDepth;
}

void main() {
	vec4 normDepth = texture2D(u_normalDepthTexture, v_texCoords);
	float linearDepth = normDepth.a;
	vec3 position = CalcPos(normDepth.a);
	vec3 normal = normDepth.xyz;
	//mat3 rotMat = CalcRotMat(normDepth.xyz);
	
	float occlusion = 0.0;
	for (int i = 0; i < u_sampleKernelSize; ++i) {
		vec2 sample = poisson16[i] * u_radius + v_texCoords;
		float depthAtPixel = texture2D(u_normalDepthTexture, sample).a;
		vec3 viewSpaceSample = CalcPos(depthAtPixel);

		float rangeCheck = abs(linearDepth - depthAtPixel) < u_radius ? 1.0 : 0.0;
		occlusion += (depthAtPixel <= linearDepth ? 1.0 : 0.0) * rangeCheck;
	}

	gl_FragColor = vec4(1.0 - (occlusion / float(u_sampleKernelSize)));
}

And here’s the result:

http://puu.sh/hcQHO/26aac53e88.png

It looks like it’s actually doing something here… Am I doing something wrong, or am I just encountering a common artifact that can be fixed somehow?
Thanks.

It looks like you’re getting self-occlusion, i.e. objects are shadowing themselves. Your shader is also a bit weird: you don’t even use the position and viewSpaceSample variables after calculating them. Your SSAO should compare the view-space position of each sample with the view-space position of the pixel being processed.

Try something like this in your loop:


vec3 sampleVector = viewSpaceSample - position; //3D vector from pixel to sample

float normalWeight = clamp(dot(normal, normalize(sampleVector)), 0.0, 1.0); //Gives higher weights to samples in the direction of the normal ("in front of" the pixel)
float distanceFalloff = clamp(1.0 / (1.0 + length(sampleVector)), 0.0, 1.0); //gives higher weight to samples close to the pixel

occlusion += normalWeight*distanceFalloff;


You also probably want to add a random rotation to your offsets to trade the banding for noise, which can be removed by blurring the result.
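
As a sketch (the hash of gl_FragCoord here is just an assumption; a small tiling noise texture sampled per pixel works just as well), you could rotate the 2D offsets like this:


//Per-pixel random angle from a simple hash of the fragment coordinate
float angle = 6.283185 * fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
mat2 rotation = mat2(cos(angle), -sin(angle),
                     sin(angle),  cos(angle));

//...then inside the loop, rotate each offset before sampling with it
vec2 sampleCoords = v_texCoords + (rotation * poisson16[i]) * u_radius;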

This is giving me a similar, albeit much more grainy, output… I must be doing something wrong when I’m storing the normals or depth, since all the math for the actual SSAO seems right from what I can tell. I thought I had to use view-space normals, so I tried multiplying the normal by the inverse of the projection matrix to get them, but that still didn’t work correctly. Maybe I’m not using the correct depth?

I store normals and depth like


gl_FragColor = vec4(v_normal, v_position.z / v_position.w);

where v_position is


v_position = u_projViewTrans * u_worldTrans * vec4(a_position, 1.0);

and I have also introduced a LinearizeDepth function, so now I know for sure that the depth is linearized. But still no dice. Also, when I use my other code, which doesn’t use the normals for the falloff, I still get banding, as expected.
So either I’ve got to find a different way to get rid of self-occlusion, or I’ve got to fix some error with my normals… Ideas, theagentd? Thanks a bunch for all your help, btw.
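
For reference, the LinearizeDepth function is basically the formula from my first post pulled out into its own function (sketch, with u_cam = (near, far) as before):


float LinearizeDepth(float depth)
{
	return (2.0 * u_cam.x) / (u_cam.y + u_cam.x - depth * (u_cam.y - u_cam.x));
}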

Your normals should only be transformed by the object normal matrix and the view normal matrix. A normal matrix can be generated by taking a matrix, setting its translation part to 0, inverting it and then transposing it. In LibGDX, that’s


viewNormalMatrix.set(viewMatrix).setTranslation(0, 0, 0).inv().tra();

objectNormalMatrix.set(objectMatrix).setTranslation(0, 0, 0).inv().tra();

You can NOT multiply the normals by the view matrix or the projection matrix.
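
On the shader side, applying them would look roughly like this in the vertex shader (a sketch; it assumes the two normal matrices are uploaded as mat3 uniforms):


uniform mat3 u_objectNormalMatrix;
uniform mat3 u_viewNormalMatrix;

attribute vec3 a_normal;
varying vec3 v_normal;

...

//Object space -> world space -> view space, using only the normal matrices
v_normal = normalize(u_viewNormalMatrix * (u_objectNormalMatrix * a_normal));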

EDIT: Also, there is a way of calculating a normal from the linear depth buffer by looking at (at least) 3 different depth values, converting them to view-space positions and then doing a cross product. This produces incorrect normals at depth discontinuities and will look horrible on noisy objects like vegetation, hence it’s preferable to have the normals stored.
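
For completeness, the depth-based version would look roughly like this (a sketch; u_texelSize is one texel in UV space and CalcPosAt is a CalcPos variant taking an arbitrary texcoord and linear depth, both assumptions):


vec3 NormalFromDepth(vec2 texCoords)
{
	//Reconstruct the view space position of this pixel and of two neighbouring pixels
	vec3 p  = CalcPosAt(texCoords, texture2D(u_normalDepthTexture, texCoords).a);
	vec3 px = CalcPosAt(texCoords + vec2(u_texelSize.x, 0.0), texture2D(u_normalDepthTexture, texCoords + vec2(u_texelSize.x, 0.0)).a);
	vec3 py = CalcPosAt(texCoords + vec2(0.0, u_texelSize.y), texture2D(u_normalDepthTexture, texCoords + vec2(0.0, u_texelSize.y)).a);

	//Cross product of the two position deltas gives the surface normal
	return normalize(cross(px - p, py - p)); //breaks down at depth discontinuities
}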