[LibGDX] Deferred Lighting

Hello JGO!

I’m trying to implement deferred lighting for the first time following this tutorial: http://learnopengl.com/#!Advanced-Lighting/Deferred-Shading.
I was wondering if there is an integrated way in LibGDX to use multiple render targets. I saw that the FrameBuffer class only has a single color attachment and you can’t add more via built-in methods.
Should I use a SpriteBatch to render to the gBuffer, or should I use meshes?
Should I just use LWJGL classes directly and avoid the LibGDX built-in ones, or how should I do it?

Basically my question is how to implement Deferred Lighting in LibGDX.

Thanks in advance for any help! :slight_smile:

There isn’t, I’m afraid. For our project I’ve copied and modified FrameBuffer to allow for multiple color attachments, alternate buffer formats, and an optional stencil buffer.
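
Roughly, the modification boils down to creating the extra textures and attaching them to the FBO yourself with the raw GL calls libGDX exposes. Here is a stripped-down sketch of the idea, not our actual class (desktop with the GL30 path assumed; GBuffer and createTexture are just placeholder names, not libGDX API):

import java.nio.IntBuffer;

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.GL30;
import com.badlogic.gdx.utils.BufferUtils;

//Sketch of a hand-built G-buffer: two color attachments plus a depth texture.
public class GBuffer {
    public final int fbo, diffuseTex, normalTex, depthTex;

    public GBuffer (int width, int height) {
        diffuseTex = createTexture(width, height, GL30.GL_RGBA16F, GL20.GL_RGBA, GL30.GL_HALF_FLOAT);
        normalTex = createTexture(width, height, GL30.GL_RGBA16F, GL20.GL_RGBA, GL30.GL_HALF_FLOAT);
        depthTex = createTexture(width, height, GL30.GL_DEPTH_COMPONENT24, GL20.GL_DEPTH_COMPONENT, GL20.GL_UNSIGNED_INT);

        fbo = Gdx.gl.glGenFramebuffer();
        Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbo);
        Gdx.gl.glFramebufferTexture2D(GL20.GL_FRAMEBUFFER, GL20.GL_COLOR_ATTACHMENT0, GL20.GL_TEXTURE_2D, diffuseTex, 0);
        Gdx.gl.glFramebufferTexture2D(GL20.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT1, GL20.GL_TEXTURE_2D, normalTex, 0);
        Gdx.gl.glFramebufferTexture2D(GL20.GL_FRAMEBUFFER, GL20.GL_DEPTH_ATTACHMENT, GL20.GL_TEXTURE_2D, depthTex, 0);

        //Tell GL that the fragment shader writes to both color attachments.
        IntBuffer drawBuffers = BufferUtils.newIntBuffer(2);
        drawBuffers.put(GL20.GL_COLOR_ATTACHMENT0).put(GL30.GL_COLOR_ATTACHMENT1).flip();
        Gdx.gl30.glDrawBuffers(2, drawBuffers);

        if (Gdx.gl.glCheckFramebufferStatus(GL20.GL_FRAMEBUFFER) != GL20.GL_FRAMEBUFFER_COMPLETE)
            throw new IllegalStateException("G-buffer incomplete");
        Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, 0);
    }

    private static int createTexture (int width, int height, int internalFormat, int format, int type) {
        int handle = Gdx.gl.glGenTexture();
        Gdx.gl.glBindTexture(GL20.GL_TEXTURE_2D, handle);
        Gdx.gl.glTexParameteri(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_NEAREST);
        Gdx.gl.glTexParameteri(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_NEAREST);
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, internalFormat, width, height, 0, format, type, null);
        return handle;
    }
}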

Doesn’t matter, both work. You just need to be prepared to write and use custom shaders.

The libGDX classes provide (almost) everything you need. There are only a few exceptions, e.g. OpenGL functions & constants not exposed in the GL20/GL30 interfaces because they are not supported by GLES.

It’s just a matter of setting up multi-render-targets & shaders. Oh, and GL states. I found it relatively straightforward if you target desktop. Despite GL states. They suck. :wink:

The nice thing about libGDX is that it moves out of your way if you say so, and lets you work with GL functions directly.

Please note that the useGL30 path has been updated with the latest release. It wasn’t really usable before that.

That article looks quite outdated. There’s no need to store the position in a texture; you can always reconstruct the view space position from the depth buffer instead.

Could you recommend a better tutorial? I have been trying to get deferred rendering working (with LWJGL) for weeks, using this tutorial as my main guide.

I didn’t use a tutorial when I implemented it. Just reading about the concept was enough for me. What exactly are you having trouble with?

Well, how do you reconstruct the position from depth if you are using an orthographic projection?

It doesn’t matter what kind of projection you’re using. You can easily reconstruct the view space position anyway. The idea is to upload the inverse of the projection matrix to the shader, reconstruct the NDC (normalized device coordinates) of the pixel and “unproject” it using the inverse projection matrix, hence it works with any kind of projection matrix.

NDC coordinates are coordinates that go from -1 to +1 in all 3 axes. When you multiply the view space position by the projection matrix in the vertex shader when filling the G-buffer, you calculate NDC coordinates, and the GPU hardware maps XY to the viewport and Z to the depth buffer. We can undo the projection, but first we need to get all the data to do that.

First of all, you need the XY coordinates. These are easy to calculate. They basically go from (-1, -1) in the bottom left corner to (+1, +1) in the top right corner. The easiest way is to calculate them from gl_FragCoord.xy, which gives you the position (in pixels) of the pixel. Divide by the size of the screen and you have coordinates going from (0, 0) to (+1, +1). Remapping that to (-1, -1) to (+1, +1) is easy. The Z coordinate is the depth buffer value of that pixel, but the depth buffer value also goes from 0 to +1 and needs remapping. With this, we have the NDC coordinates of the pixel. Now it’s just a matter of multiplying the NDC coordinates with the inverse projection matrix and dividing by the resulting W coordinate.


uniform sampler2D depthBuffer;
uniform vec2 inverseScreenResolution; //Fill with (1.0 / screen_resolution) from Java.
uniform mat4 inverseProjectionMatrix;

...

vec2 texCoords = gl_FragCoord.xy * inverseScreenResolution; //Goes from 0 to 1
float depthValue = texture(depthBuffer, texCoords).r; //Goes from 0 to 1

vec3 ndc = vec3(texCoords, depthValue) * 2.0 - 1.0; //Remapped to -1 to +1

vec4 unprojectResult = inverseProjectionMatrix * vec4(ndc, 1.0);

vec3 viewSpacePosition = unprojectResult.xyz / unprojectResult.w;

//Use viewSpacePosition for lighting
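
On the Java side, filling those uniforms from libGDX could look something like this (just a sketch; shader is whatever ShaderProgram runs the lighting pass, camera is your scene camera, and the depth texture is assumed to be bound to texture unit 0):

//Sketch: uploading the uniforms used above from libGDX.
//(imports: com.badlogic.gdx.Gdx, com.badlogic.gdx.graphics.Camera,
// com.badlogic.gdx.graphics.glutils.ShaderProgram, com.badlogic.gdx.math.Matrix4)
Matrix4 inverseProjection = new Matrix4(camera.projection).inv(); //invert a copy, not the camera's own matrix

shader.begin();
shader.setUniformi("depthBuffer", 0); //depth texture bound to texture unit 0
shader.setUniformf("inverseScreenResolution", 1f / Gdx.graphics.getWidth(), 1f / Gdx.graphics.getHeight());
shader.setUniformMatrix("inverseProjectionMatrix", inverseProjection);
//...render the light geometry here...
shader.end();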


An example G-buffer layout for deferred shading is:

COLOR_ATTACHMENT0: GL_RGBA16F: (diffuse.r, diffuse.g, diffuse.b, )
COLOR_ATTACHMENT1: GL_RGBA16F: (packedNormal.x, packedNormal.y, specularIntensity, specularExponent)
DEPTH_ATTACHMENT: GL_DEPTH_COMPONENT24: (depth)

EDIT: Actually, if you’re only using an orthographic projection, you don’t need the W-divide (but it doesn’t hurt to keep it there).
EDIT2: Also, there are lots of optimizations you can do to this. I opted to just give you the basics before diving into those. I can answer whatever questions you have about deferred shading.

So if I understand it correctly, this is how I should do it (rough code sketch after the list):

  • Bind the frame buffer.
  • Bind the currently drawn object’s normal map and specular texture.
  • Start drawing the diffuse textures using a shader which takes a normal and a specular sampler2D as well.
  • Fill COLOR_ATTACHMENT0 with texture2D(diffuse, texCoord), where texCoord is the texture coordinate of the diffuse texture.
  • Fill COLOR_ATTACHMENT1’s .rgb with texture2D(normal, texCoord).rgb and its .a with the specular value from texture2D(specular, texCoord).r.
  • End drawing.
  • Draw the FBO’s texture with a specific shader that includes all lights.
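
In code I imagine the G-buffer pass looking roughly like this (just a sketch of my understanding; gBuffer, gBufferShader, normalMap, specularMap, diffuseTexture and batch are placeholder names for my FBO, MRT shader, textures and SpriteBatch, not libGDX API; gBufferShader would write to both color attachments):

//Sketch of the G-buffer pass with a SpriteBatch. All names below are placeholders.
Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, gBuffer.fbo); //bind the MRT frame buffer
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

normalMap.bind(1); //texture unit 1: normals
specularMap.bind(2); //texture unit 2: specular
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0); //SpriteBatch binds the diffuse texture on unit 0

batch.setShader(gBufferShader); //shader that writes diffuse + normal/specular outputs
batch.disableBlending(); //normals/depth probably shouldn't be alpha-blended
batch.begin();
gBufferShader.setUniformi("u_normal", 1);
gBufferShader.setUniformi("u_specular", 2);
batch.draw(diffuseTexture, 0, 0);
batch.end();

Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, 0); //back to the default frame buffer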

This is how I’d do it, but as you can see I’m unsure how to draw the lights. Should I pass all the lights’ positions as a uniform array, or should I just render each light one by one on the FBO?
Also, where exactly would I use the shader code you provided (position from depth)?

I’d love it if you could elaborate on this a little. :wink:

First of all, you generally have both a specular intensity (how much specular light is reflected by the surface) and also a specular exponent/roughness/glossiness (how mirror-like the surface is), which affects the shape of the specular highlight. You often need to store both of them. In your case, you can store the specular exponent in the first texture’s alpha component, as you only need RGB for diffuse textures.

The lighting process is pretty simple but you’re mistaken on a few points. You do not want to process all lights on the screen in a single fullscreen pass. The simplest and fastest way of rendering lights is to generate light geometry. For a point light, that’d be a sphere. Spot/cone lights are a cone/pyramid thing. Directional lights (sun/moon) are indeed a fullscreen pass. The idea is to only process the pixels which are inside the light volume. In the lighting pass, you write to a single GL_RGBA16F render target with additive blending to add up all the lighting results.
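
Roughly, the accumulation part of that lighting pass looks like this in libGDX terms (a sketch, not my actual code; lightBuffer, pointLightShader, sphereMesh, lights and MyPointLight are placeholder names, and inverseProjection is the inverse projection matrix from earlier):

//Sketch of the light accumulation pass. lightBuffer is a GL_RGBA16F render target,
//pointLightShader a shader that reads the G-buffer, sphereMesh a unit sphere Mesh.
Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, lightBuffer.fbo);
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE); //additive blending so the lights add up
Gdx.gl.glDisable(GL20.GL_DEPTH_TEST); //depth/stencil culling of light volumes is an optimization left out here

pointLightShader.begin();
pointLightShader.setUniformi("depthBuffer", 0); //G-buffer textures bound to units 0..2
pointLightShader.setUniformi("diffuseBuffer", 1);
pointLightShader.setUniformi("normalBuffer", 2);
pointLightShader.setUniformMatrix("inverseProjectionMatrix", inverseProjection);

for (MyPointLight light : lights) { //MyPointLight is a made-up holder for position/color/radius
    pointLightShader.setUniformf("lightPosition", light.x, light.y, light.z); //view space
    pointLightShader.setUniformf("lightColor", light.r, light.g, light.b);
    pointLightShader.setUniformf("lightRadius", light.radius);
    sphereMesh.render(pointLightShader, GL20.GL_TRIANGLES); //sphere scaled to the light's radius in the vertex shader
}
pointLightShader.end();

Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, 0);

The sphere only guarantees that the (expensive) lighting shader runs for pixels the light can actually reach; the lighting math itself happens in the fragment shader using the view space position reconstructed from the depth buffer as shown earlier.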

I’m a bit confused. Cyraxx, do you need help with