Reconstructing position from depth texture (LibGDX)

Hey everyone! I'm new to the forum and came here desperately looking for help with something I've been struggling with over the last few days.
For some bizarre reason, there seems to be something wrong with my matrices when I try to reconstruct the world position from a depth texture - everything seems to be skewed.
Here is some of my GLSL code:

Vertex Shader:



attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec2 a_texCoords;

varying vec4 v_Color;
varying vec2 v_texCoords;

void main()
{
    v_Color = a_Color;
    v_texCoords = a_texCoords;
    gl_Position = a_Position;
}

Fragment Shader:


#ifdef GL_ES
precision mediump float;
#endif

varying vec4 v_Color;
varying vec2 v_texCoords;

uniform sampler2D u_depthMap;
uniform mat4 u_invProjView;

vec3 getPosition(vec2 uv, float depth) {
    vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
    pos = u_invProjView * pos;
    pos = pos / pos.w;
    return pos.xyz;
}

void main()
{
    float depth = texture2D(u_depthMap, v_texCoords).r;
    gl_FragColor = vec4(getPosition(v_texCoords, depth), 1.0);
}


Here is an image of what appears immediately:
http://imgur.com/HEiV3qO

And after I move the camera:
Imgur

[s]You should not apply the interval mapping to the depth and w=1.0 components in getPosition in your fragment shader, only to the xy-coordinates (uv).
Change the line in getPosition from:

vec4 pos = vec4(uv, depth, 1.0)*2.0-1.0;

to:

vec4 pos = vec4(uv*2.0-1.0, depth, 1.0);

[/s]

Thanks, but sadly it didn’t work. :frowning:

Also, here is how I am rendering the depth texture:

Vertex shader (all objects are rendered with this, and the result is then stored as the depth texture):


// attribute/uniform/varying declarations as implied by the code below
attribute vec3 a_position;
attribute vec2 a_texCoord0;
uniform mat4 u_worldTrans;
uniform mat4 u_projViewTrans;
varying vec2 v_texCoord0;
varying float v_depth;

void main() {
    v_texCoord0 = a_texCoord0;
    vec4 pos = u_worldTrans * vec4(a_position, 1.0);
    gl_Position = u_projViewTrans * pos;
    v_depth = gl_Position.z;
}

Fragment shader:


varying float v_depth;

void main() {
    gl_FragColor = vec4(v_depth);
}


Ah, it seems my advice actually was wrong and you already do it correctly :slight_smile: as this post points out.


But furthermore: you store linear depth. In other words, gl_Position.z does not contain the depth value that OpenGL stores in the depth buffer, because it is missing both the perspective divide and the mapping of the value into [0…1]. The formula you are using, however, applies to the non-linear depth value that OpenGL stores. So you might just want to sample a GL_DEPTH texture instead.
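
If you keep writing depth to a color target, a minimal sketch (not your exact code) would be to write gl_FragCoord.z, which already is the non-linear window-space depth in [0…1] after the perspective divide:

#ifdef GL_ES
precision mediump float;
#endif

void main() {
    // gl_FragCoord.z holds the same non-linear [0,1] value the depth buffer
    // stores (assuming the default glDepthRange), so no manual divide is needed.
    float d = gl_FragCoord.z;
    gl_FragColor = vec4(d, d, d, 1.0);
}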

I’ve also noticed that some formulas use the inverse projection matrix, while others use the inverse of the projection * view matrix. Can anybody explain to me why the matrix used isn’t consistent?

Generally, you must use the inverse of whatever matrix you used to convert from coordinate space A (whatever A is) to NDC when you want to get back to space A.
If the matrix is inverse(projection * view), then A originally meant “world coordinates”, because (projection * view) was used to take those world coordinates through the camera/view transformation and then through the perspective projection to finally obtain NDC space.

If the matrix used was only inverse(projection), then in that particular example there probably was no “world space” -> “view space” transformation necessary, since maybe there was no (movable) camera.
But this implies that the original transformation from view space to NDC also only used the projection matrix.
So whichever matrix you choose for the forward transformation, you naturally must use the inverse of that matrix for the opposite transformation.
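
As a sketch (the function and parameter names are just placeholders), the reconstruction itself stays the same; only the matrix you feed it decides which space you end up in:

// invMatrix = inverse(projection * view)  ->  result is in world space
// invMatrix = inverse(projection)         ->  result is in view (eye) space
vec3 reconstruct(vec2 uv, float depth, mat4 invMatrix) {
    vec4 ndc = vec4(uv, depth, 1.0) * 2.0 - 1.0; // [0,1] -> [-1,1], w stays 1
    vec4 pos = invMatrix * ndc;
    return pos.xyz / pos.w;                      // undo the perspective divide
}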

Thanks for the awesome replies, KaiHH. It’s greatly appreciated.
I tried using gl_FragCoord.z in the fragment shader to store the depth, and the results are getting better.

Still a little bit of strange movement, but definitely getting there. ;D

The objects seem like they’re transparent or something. I definitely have depth test and depth write on too. Hm…

And when I move…

Alright. This is the best I’ve gotten. It’s exactly the same as what I get if I simply output worldMatrix*vec4(position, 1.0) to the screen.

This is using gl_FragCoord.z, and the reason the depth test didn’t seem to be working before is that I had depth disabled on my frame buffer object.

In case you are having problems getting things to work, I implemented reconstructing position from depth as a LWJGL 3 demo, so you can have a look at it:
Java source
Depth-only shader: Vertex Shader and Fragment Shader
Fullscreen shader: Vertex Shader and Fragment Shader

I do not render the depth values to a color drawbuffer; instead I use a GL_DEPTH_COMPONENT texture as the depth buffer attachment when rendering to the FBO. This saves a bit of bandwidth, since depth has to be written anyway.
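
If you do it that way, the fullscreen pass can sample the depth attachment directly; the red channel of a GL_DEPTH_COMPONENT texture already holds the same non-linear [0…1] depth that gl_FragCoord.z produced. A sketch (the uniform names are only placeholders):

uniform sampler2D u_depthTexture; // the FBO's GL_DEPTH_COMPONENT attachment
uniform mat4 u_invProjView;
varying vec2 v_texCoords;

void main() {
    // .r of the depth texture is the stored window-space depth in [0,1]
    float depth = texture2D(u_depthTexture, v_texCoords).r;
    vec4 ndc = vec4(v_texCoords, depth, 1.0) * 2.0 - 1.0;
    vec4 world = u_invProjView * ndc;
    gl_FragColor = vec4(world.xyz / world.w, 1.0);
}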

If you have issues with the matrix inversion, it is implemented in Matrix4f.invert(). The demo uses a Camera implementation that computes the view and projection matrices, as well as their inverses, using GLU-like lookAt and perspective methods.
You can use them.