Practical Post-Processing

I’m adding a post-processing system to my engine for… well, post-processing, and I’ve been debating how to do it. I finally settled on a system that goes like this:

  1. Render the whole scene to a texture.
  2. Render said texture onto a quad with a “postProcess” shader. (For some reason I have to flip the y texture coordinate in my vertex shader.)
  3. Align the quad to face the user and render it.

This way, I can run the whole scene through any shader I please, and quite easily.
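Roughly, a frame looks like this. This is heavily simplified, and the helper names (sceneFBO, drawFullscreenQuad, and so on) are just placeholders from my engine, so treat it as a sketch:

    #include <GL/glew.h>

    // Hypothetical handles set up elsewhere in the engine.
    extern GLuint sceneFBO, sceneColorTex, postProcessShader;
    extern int screenW, screenH;
    void renderScene();        // normal scene rendering (assumed)
    void drawFullscreenQuad(); // draws the user-aligned quad (assumed)

    void renderFrame()
    {
        // 1. Render the whole scene into the texture attached to sceneFBO.
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
        glViewport(0, 0, screenW, screenH);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderScene();

        // 2./3. Back to the default framebuffer; draw the scene texture
        // onto the quad with the post-process shader.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glUseProgram(postProcessShader);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, sceneColorTex);
        drawFullscreenQuad();
    }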
However, this seems like a bit of a hack. Is this the general way post-processing is implemented, or is there a different approach? Also, with this setup I can’t simply sample gl_FragCoord.z and linearize it for depth values; I have to render the depth buffer to another FBO and pass that into my post-process shader.

I don’t really get what you are asking, so I will just comment on random bits that you said.

I don’t think it feels like a hack; this should work just fine.

Where does the depth buffer come into the picture?

I’m asking about the best way to add a post-processing system to an existing engine. Check http://en.wikipedia.org/wiki/Video_post-processing if you don’t know what it is.

The depth buffer is important in many post-processing effects, so I need an easy, inexpensive way to access it. Currently I have to render it to a whole other FBO and pass that into my post-processing shaders in order to use it, and that may not be the best way. However, this is how I’ll have to do it once I implement deferred rendering anyway, so I may want to just keep it this way.
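For context, the linearization I mean looks roughly like this in the post-process fragment shader. The uniform names are mine, and the formula assumes a standard perspective projection, so this is just a sketch:

    // Hypothetical post-process fragment shader, stored as a C++ string.
    const char* postProcessFrag = R"(
    #version 330 core
    uniform sampler2D u_scene;    // color from the scene FBO
    uniform sampler2D u_depth;    // depth, currently from a second FBO
    uniform float u_near, u_far;  // camera near/far planes
    in vec2 v_uv;
    out vec4 fragColor;

    // Invert the perspective projection to get eye-space depth.
    float linearizeDepth(float d)
    {
        float zNdc = d * 2.0 - 1.0; // [0,1] depth -> NDC z
        return (2.0 * u_near * u_far) /
               (u_far + u_near - zNdc * (u_far - u_near));
    }

    void main()
    {
        float depth = linearizeDepth(texture(u_depth, v_uv).r);
        vec3 color  = texture(u_scene, v_uv).rgb;
        // Simple depth fog, just as a demo effect.
        fragColor = vec4(mix(color, vec3(0.5),
                             clamp(depth / u_far, 0.0, 1.0)), 1.0);
    }
    )";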

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/#Using_the_depth

Read this and the shadow and light mapping stuff on the same site.
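If it helps, the gist of the depth part is to attach a depth texture (rather than a renderbuffer) to the same FBO as your color texture, so you can sample it afterwards. A sketch along those lines, not copied from the tutorial:

    #include <GL/glew.h>

    // Creates a depth texture and attaches it to the currently bound FBO.
    // Sketch: assumes a framebuffer is already bound to GL_FRAMEBUFFER.
    GLuint attachDepthTexture(int w, int h)
    {
        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        return depthTex;
    }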

I’ve actually already read the shadow stuff on that site; I’ve got shadows in the engine already as well.
But I hadn’t seen the depth stuff on that site yet, thanks for that!

Edit:

I’ve done a bit of research, and I understand the subject a bit more.

In order to do what I want (have the regular rendered buffer as well as the depth buffer) without more than one FBO, I need to attach multiple render targets to it. To do this, I pass in the attachments, formats, and internal formats as arrays: in my case, GL_RGBA with GL_COLOR_ATTACHMENT0 for the regular render, and GL_DEPTH_COMPONENT with GL_DEPTH_ATTACHMENT for the depth. Then I use glDrawBuffers (plural) instead of the regular glDrawBuffer; only the color attachments are listed there, since the depth attachment is used automatically once attached. Where I get stuck is how to sample each one in my post-process shader. I pass one texture from the FBO into the shader, but how do I sample the depth buffer vs. the regular render?
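The declaration part looks something like this in my setup (again, a sketch):

    #include <GL/glew.h>

    // Declare which color attachments the fragment shader writes to.
    // Only color attachments go in this list; the depth attachment is
    // used automatically once it's attached to the FBO.
    void declareDrawBuffers()
    {
        GLenum buffers[] = { GL_COLOR_ATTACHMENT0 }; // more entries for true MRT
        glDrawBuffers(1, buffers);
    }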

Edit 2:
It was late, and I was asking a silly question; never mind the above.
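For anyone who finds this later: the color and depth attachments are separate textures, so you just bind each one to its own texture unit and give the shader a sampler per unit. A sketch, reusing the hypothetical names from above:

    #include <GL/glew.h>

    extern GLuint postProcessShader, sceneColorTex, depthTex;
    void drawFullscreenQuad(); // assumed, as before

    void drawPostPass()
    {
        glUseProgram(postProcessShader);
        // Color on unit 0, depth on unit 1; one sampler uniform each.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, sceneColorTex);
        glUniform1i(glGetUniformLocation(postProcessShader, "u_scene"), 0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glUniform1i(glGetUniformLocation(postProcessShader, "u_depth"), 1);
        drawFullscreenQuad();
    }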