LibGDX: How do I get the depth buffer from a FrameBuffer?

Hello guys, I’m currently working on a depth-of-field shader.
Here’s how it works:

I render the whole scene to a FrameBuffer, then apply the shader to it.
But there’s a problem: how do I pass the depth buffer as a uniform to the shader? I couldn’t find it anywhere in FrameBuffer’s methods.

Any help is appreciated.

The framebuffer should be configured with a depth attachment, which is just a texture after all.

The internal-format should be GL_DEPTH_STENCIL or GL_DEPTH_COMPONENT, and the format should be something like GL_DEPTH24_STENCIL8 or GL_DEPTH_COMPONENT24.

Now, when you render the color attachment in a fullscreen-quad/triangle pass, just bind the depth texture like you would any other texture.

As for sampling the depth texture, I can’t remember the “proper” way right now.

OK, how do I attach the depth texture?

The same way you attach a colour buffer. Check out the “attachment” parameter.

If it’s just a depth buffer then it is GL_DEPTH_ATTACHMENT and for a depth and stencil buffer you want GL_DEPTH_STENCIL_ATTACHMENT.

You got the internal format and format switched around. The internal format (the format the data is stored in on the GPU) should be GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT32F, or, if you also need a stencil buffer, GL_DEPTH24_STENCIL8 or GL_DEPTH32F_STENCIL8. The format (the format of the data you pass in to initialize the texture in glTexImage2D()) shouldn’t really matter, as you’re probably just passing in null there and clearing the depth buffer before first use anyway. You still need to pass a valid enum even with null data, though, so GL_DEPTH_COMPONENT is a good choice.
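As a sketch of that allocation with raw GL calls in libGDX (this assumes a GL 3.0 / GLES 3.0 context for the GL30 constant; `width`/`height` are placeholders):

```java
// Sketch: hand-allocating a depth texture in libGDX. Requires an active GL
// context (e.g. inside create()); width/height are placeholders.
int handle = Gdx.gl.glGenTexture();
Gdx.gl.glBindTexture(GL20.GL_TEXTURE_2D, handle);
Gdx.gl.glTexParameteri(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_NEAREST);
Gdx.gl.glTexParameteri(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_NEAREST);
// Internal format = how the GPU stores it; format/type only describe the
// (null) source data, so GL_DEPTH_COMPONENT + GL_UNSIGNED_INT just need to be valid.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL30.GL_DEPTH_COMPONENT24,
        width, height, 0, GL20.GL_DEPTH_COMPONENT, GL20.GL_UNSIGNED_INT, null);
```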

To give you a list of steps:

  1. Create a depth texture with one of the internal formats listed above.
  2. Attach the depth texture to the FBO using glFramebufferTexture2D(), to attachment point GL_DEPTH_ATTACHMENT (or GL_DEPTH_STENCIL_ATTACHMENT if your depth texture also has stencil bits).
  3. Render to the FBO with GL_DEPTH_TEST enabled. If it’s not enabled, the depth buffer will be completely ignored (neither read nor written to). If you don’t need the depth “test” and just want to write depth to the depth buffer, you still need to enable GL_DEPTH_TEST and set glDepthFunc() to GL_ALWAYS.
  4. Once you’re done rendering to your FBO, you bind the texture to a texture unit and add a uniform sampler2D to your shader for the extra depth buffer.
  5. Sample the depth texture like a color texture in your shader. A depth texture is treated as a single-component texture, meaning that the depth value is returned in the red channel (float depthValue = texture(myDepthSampler, texCoords).r;). The depth is returned as a normalized float value from 0.0 to 1.0, where 0.0 corresponds to the near plane and 1.0 to the far plane.
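If you’d rather not juggle raw GL calls, the steps above can be sketched with libGDX’s FrameBufferBuilder (class and method names as in recent libGDX versions; the uniform name `u_depthMap` is made up):

```java
// Sketch, assuming a recent libGDX with GLFrameBuffer.FrameBufferBuilder
// available and a GL30-capable context. "u_depthMap" is a hypothetical name.
GLFrameBuffer.FrameBufferBuilder builder = new GLFrameBuffer.FrameBufferBuilder(
        Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
builder.addBasicColorTextureAttachment(Pixmap.Format.RGBA8888);
builder.addDepthTextureAttachment(GL30.GL_DEPTH_COMPONENT24, GL20.GL_UNSIGNED_INT);
FrameBuffer fbo = builder.build();

// Render the scene with depth testing enabled (step 3):
fbo.begin();
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
// ... draw scene ...
fbo.end();

// Attachments follow the order of the add* calls: color is 0, depth is 1.
fbo.getTextureAttachments().get(1).bind(1); // bind depth to texture unit 1
shader.bind(); // ShaderProgram.bind() in newer libGDX; begin()/end() in older
shader.setUniformi("u_depthMap", 1);
```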

thanks D. :)

Ok, I will try that, but first I want to clarify something:
If my clipping range is between 1.0f and 1000.0f, does a normalised depth value of 0.5 from the shader then mean 500.0f?
Anyway, thanks for help.

No, that is not the case. The depth value is stored non-linearly, in a way that gives more precision closer to the camera. This means that 0.5 will refer to something very close to the near plane; for your 1.0–1000.0 range it works out to roughly 2.0 units from the camera. How this is calculated depends on your projection matrix (the clipping range you mentioned). It is possible to “linearize” the depth value if you know the near and far planes.
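To make that concrete, here is a small worked example (plain Java, no GL needed; the formulas invert the standard perspective depth mapping for the default 0..1 depth range):

```java
// Worked example: with near = 1 and far = 1000, a stored depth of 0.5 does
// NOT mean 500 units -- it linearizes to roughly 2 units from the camera.
public class DepthLinearize {
    // Recovers eye-space distance from a stored window-space depth value:
    // distance = (near * far) / (far - depth * (far - near))
    public static float linearizeDepth(float depth, float near, float far) {
        return (near * far) / (far - depth * (far - near));
    }

    // Forward mapping, for checking: the depth value stored for distance d.
    public static float storedDepth(float d, float near, float far) {
        return far * (d - near) / (d * (far - near));
    }

    public static void main(String[] args) {
        float near = 1f, far = 1000f;
        System.out.println(linearizeDepth(0.5f, near, far)); // ~1.998, not 500
        System.out.println(storedDepth(500f, near, far));    // ~0.999: half the depth range covers the first ~2 units
    }
}
```

Note how lopsided the mapping is: depth values 0.0–0.5 cover only the first couple of units, while 0.5–1.0 covers everything out to the far plane.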

Now I remember …

[icode]in vec2 st0; // texture coords
uniform vec2 dim; // textureSize(depthtex)

uniform sampler2D depthtex;

uniform float znear;
uniform float zfar;
uniform float zrange;[/icode]

[icode]#define linearize(depth) ( (znear * zfar) / (zfar - (depth) * zrange) )[/icode]

and then [icode]float linear_depth = linearize(texelFetch(depthtex,coords,0).r);[/icode]

where [icode]float zrange = zfar - znear;[/icode] (computed on the CPU side and uploaded as the uniform)

and [icode]ivec2 coords = ivec2(dim * st0);[/icode]
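For reference, here are those pieces assembled into one minimal fragment shader (uniform and varying names as in the snippets above; the [icode]fragColor[/icode] output and the visualization line are my additions):

```glsl
#version 130

in vec2 st0;                 // texture coords from the vertex shader
uniform vec2 dim;            // textureSize(depthtex, 0), uploaded from the CPU
uniform sampler2D depthtex;  // the FBO's depth attachment
uniform float znear;
uniform float zfar;
uniform float zrange;        // set to zfar - znear on the CPU side

out vec4 fragColor;

// Inverts the non-linear depth mapping back to an eye-space distance.
#define linearize(depth) ( (znear * zfar) / (zfar - (depth) * zrange) )

void main() {
    ivec2 coords = ivec2(dim * st0);
    float linear_depth = linearize(texelFetch(depthtex, coords, 0).r);
    fragColor = vec4(vec3(linear_depth / zrange), 1.0); // grayscale visualization
}
```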