I’m implementing a post-processing system in my engine (for… well, post-processing), and I’ve been debating how to do it. I finally settled on a system that goes like this:
- Render the whole scene into a texture via an FBO.
- Render that texture onto a quad with a “postProcess” shader. (For some reason I have to invert the y of the texture coordinate in my vertex shader; see the sketch after this list.)
- Align the quad to the camera so it fills the screen, and render it.
This way I can run any shader I please over the whole scene, quite easily.
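Concretely, here’s a trimmed-down sketch of what I mean (a GL 3.3+ context is assumed to already exist, and names like `sceneFBO`, `postProcessShader`, `drawScene()`, and `drawFullscreenQuad()` are just placeholders for my own engine objects):

```cpp
#include <glad/glad.h>  // or whichever GL loader you use

// Defined elsewhere in my engine; shown here only so the sketch compiles.
extern GLuint postProcessShader;
extern void drawScene();
extern void drawFullscreenQuad();

// Created once at startup (width/height = framebuffer size).
GLuint sceneFBO, sceneColor, sceneDepthRBO;

void createSceneFBO(int width, int height) {
    // Color texture the scene gets rendered into.
    glGenTextures(1, &sceneColor);
    glBindTexture(GL_TEXTURE_2D, sceneColor);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Depth renderbuffer so depth testing still works while drawing the scene.
    glGenRenderbuffers(1, &sceneDepthRBO);
    glBindRenderbuffer(GL_RENDERBUFFER, sceneDepthRBO);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &sceneFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sceneColor, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, sceneDepthRBO);
    // assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderFrame() {
    // Pass 1: whole scene into the FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();

    // Pass 2: camera-aligned quad with the postProcess shader, sampling
    // the scene texture, back into the default framebuffer.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(postProcessShader);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneColor);
    // This is where the y-flip from my list happens -- I suspect my quad's
    // UVs assume a top-left origin while GL textures use a bottom-left one.
    drawFullscreenQuad();
}
```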
However, this seems like a bit of a hack. Is this the general way post-processing is implemented, or is there a different approach? Also, with this setup I can’t simply sample gl_FragCoord.z and linearize it for depth (in the post-process pass, gl_FragCoord.z holds the quad’s depth, not the scene’s); I have to render the depth buffer into another FBO and send that texture in to my post-process shader.
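For reference, here’s a sketch of one way to get at that depth without the extra pass: attaching a depth *texture* straight to the scene FBO (replacing the depth renderbuffer in the sketch above) and sampling it in the post-process shader. `uNear`/`uFar` are placeholders for my camera’s clip planes, and the linearization is the standard inverse of the perspective depth mapping:

```cpp
#include <glad/glad.h>

extern GLuint sceneFBO;  // the scene FBO from the sketch above

// Attach a depth texture instead of a renderbuffer, so the post-process
// shader can sample the scene's depth directly.
void attachSceneDepthTexture(int width, int height) {
    GLuint sceneDepthTex;
    glGenTextures(1, &sceneDepthTex);
    glBindTexture(GL_TEXTURE_2D, sceneDepthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, sceneDepthTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// The matching piece of the postProcess fragment shader:
const char* depthSnippet = R"glsl(
uniform sampler2D uSceneDepth;  // the depth texture attached above
uniform float uNear;            // camera near plane
uniform float uFar;             // camera far plane

float linearizeDepth(vec2 uv) {
    float d = texture(uSceneDepth, uv).r;  // stored depth in [0, 1]
    float z = d * 2.0 - 1.0;               // back to NDC [-1, 1]
    return (2.0 * uNear * uFar) / (uFar + uNear - z * (uFar - uNear));
}
)glsl";
```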