Compute + Geometry Shaders

So I’ve been reading up on these a bit recently and I have a couple of questions:

  • Can geometry shaders be used to improve bump mapping efficiently?
  • Which of these would be more efficient in a compute shader: shadow mapping, lighting, mesh skinning, or FXAA?

Geometry shaders are generally not very commonly used. They’re useful for expanding points into quads and similar tasks. In some cases you may want to calculate per-face normals or do layered rendering, but most of the time the overhead of simply having a geometry shader is so high that there are better ways to do those things.

Compute shaders can be used to improve the performance of some algorithms: either the added flexibility allows for a more efficient algorithm, or shared memory can be used to avoid redundant calculations.

Usually the compute shader approach is slower than the pixel shader variant if you don’t know exactly what you are doing. If you are just learning, ignore every shader stage other than vertex and fragment.

There’s a pretty good article here about how to use compute shaders for realtime raytracing.

Exactly. If you don’t know what you’re doing with compute shaders, you’re gonna end up with something slower than a fullscreen quad and a pixel shader. Compute shaders are more “low-level” than pixel shaders. It’s actually really hard to beat the efficiency of pixel shaders, textures and render targets unless you have some very specific cases. Just directly porting your fullscreen-quad pixel shader to a compute shader is in almost all cases going to make it slower. You need to take advantage of the flexibility and/or the shared memory within work groups to get any advantage out of compute shaders.
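As a rough illustration of the shared-memory point, here’s a minimal sketch of a horizontal box blur where each work group loads its texels into shared memory once, so neighbouring invocations don’t re-sample the same data. The image/uniform names are made up for the example:

```glsl
#version 430
// Minimal sketch: horizontal box blur using shared memory.
// Dispatch with glDispatchCompute((width + 127) / 128, height, 1).
#define RADIUS 4
#define GROUP_SIZE 128

layout(local_size_x = GROUP_SIZE, local_size_y = 1) in;
layout(binding = 0, rgba8) uniform readonly  image2D u_inputImage;   // placeholder names
layout(binding = 1, rgba8) uniform writeonly image2D u_outputImage;

shared vec4 s_cache[GROUP_SIZE + 2 * RADIUS];

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = imageSize(u_inputImage);
    int   local = int(gl_LocalInvocationID.x);

    // Cooperative load: every invocation loads one texel, and the first
    // RADIUS invocations also load the "apron" on both sides of the group.
    s_cache[local + RADIUS] = imageLoad(u_inputImage, clamp(texel, ivec2(0), size - 1));
    if (local < RADIUS) {
        s_cache[local] =
            imageLoad(u_inputImage, clamp(texel - ivec2(RADIUS, 0), ivec2(0), size - 1));
        s_cache[local + GROUP_SIZE + RADIUS] =
            imageLoad(u_inputImage, clamp(texel + ivec2(GROUP_SIZE, 0), ivec2(0), size - 1));
    }
    barrier();  // wait until the whole cache is filled

    vec4 sum = vec4(0.0);
    for (int i = -RADIUS; i <= RADIUS; ++i)
        sum += s_cache[local + RADIUS + i];

    if (all(lessThan(texel, size)))
        imageStore(u_outputImage, texel, sum / float(2 * RADIUS + 1));
}
```

The equivalent pixel shader would sample the texture 2 * RADIUS + 1 times per pixel; here each texel is fetched roughly once per work group.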

Always use a fullscreen triangle instead of a quad. Around 8% more performance for free. http://michaldrobot.com/2014/04/01/gcn-execution-patterns-in-full-screen-passes/
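The trick is just three vertices derived from gl_VertexID, roughly like this (a minimal sketch):

```glsl
#version 330 core
// Minimal sketch: fullscreen triangle generated from gl_VertexID.
// Draw with glDrawArrays(GL_TRIANGLES, 0, 3); no vertex buffer is needed,
// though core profiles still require an (empty) VAO to be bound.
out vec2 v_texCoord;

void main()
{
    // Produces (0,0), (2,0), (0,2) in texture space, i.e. one triangle
    // that covers the whole [0,1]^2 screen area and spills past it.
    v_texCoord  = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(v_texCoord * 2.0 - 1.0, 0.0, 1.0);
}
```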

I use a geometry shader to implement a particle emitter: you pass in the center point of each particle and it outputs quads which always face the camera.
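Roughly, the core of it looks like this (a minimal sketch, not my exact code; the uniform and varying names are made up):

```glsl
#version 330 core
// Minimal sketch: expand each point into a camera-facing quad.
// Assumes the vertex shader outputs positions in view (eye) space,
// so offsetting along the view-space X/Y axes yields a billboard.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  u_projection;    // placeholder uniform names
uniform float u_particleSize;

out vec2 g_texCoord;

void main()
{
    vec4 center   = gl_in[0].gl_Position;        // view-space center
    vec2 halfSize = vec2(0.5 * u_particleSize);

    const vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                   vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i)
    {
        g_texCoord  = corners[i] * 0.5 + 0.5;
        gl_Position = u_projection * (center + vec4(corners[i] * halfSize, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}
```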

Have you considered using point sprites for that?
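For reference, point sprites need no geometry shader at all; a minimal sketch (uniform names are placeholders, and on desktop GL you have to glEnable(GL_PROGRAM_POINT_SIZE) first):

```glsl
// Vertex shader: one vertex per particle, size in pixels via gl_PointSize.
#version 330 core
layout(location = 0) in vec3 a_position;
uniform mat4  u_mvp;          // placeholder uniform names
uniform float u_pointSize;

void main()
{
    gl_Position  = u_mvp * vec4(a_position, 1.0);
    gl_PointSize = u_pointSize;   // clamped to the implementation's point size range
}
```

```glsl
// Fragment shader: gl_PointCoord gives the 0..1 coordinate inside the sprite.
#version 330 core
uniform sampler2D u_sprite;   // placeholder name
out vec4 fragColor;

void main()
{
    fragColor = texture(u_sprite, gl_PointCoord);
}
```

Then it’s just glDrawArrays(GL_POINTS, 0, particleCount) with one vertex per particle.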

Well, I have already implemented what I need in my game dev lib. I just want to learn more about these shaders so I can use them in the future.

That is a valid use case, but there are plenty of better ways to do it. http://www.slideshare.net/DevCentralAMD/vertex-shader-tricks-bill-bilodeau

You need a huge amount of knowledge before compute shaders make any sense. Get that first by working with vertex and pixel shaders.

GS is also nice for generating proper lines and joints.
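For example, a minimal sketch of expanding line segments into screen-space quads of roughly constant pixel width (joint/cap handling is omitted and the uniform names are made up):

```glsl
#version 330 core
// Minimal sketch: expand each line segment into a screen-space quad.
// Proper joints (miters, round caps) would need extra work.
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2  u_viewportSize;  // in pixels (placeholder)
uniform float u_thickness;     // desired line width in pixels (placeholder)

void main()
{
    vec4 p0 = gl_in[0].gl_Position;   // clip-space endpoints
    vec4 p1 = gl_in[1].gl_Position;

    // Segment direction in pixel space, then a perpendicular sized so the
    // +/- offsets below add up to roughly u_thickness pixels on screen.
    vec2 dir    = normalize((p1.xy / p1.w - p0.xy / p0.w) * u_viewportSize);
    vec2 offset = vec2(-dir.y, dir.x) * u_thickness / u_viewportSize;

    gl_Position = p0 + vec4(offset * p0.w, 0.0, 0.0); EmitVertex();
    gl_Position = p0 - vec4(offset * p0.w, 0.0, 0.0); EmitVertex();
    gl_Position = p1 + vec4(offset * p1.w, 0.0, 0.0); EmitVertex();
    gl_Position = p1 - vec4(offset * p1.w, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
```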

And I ported it to OpenGL ES 3.1 for Android with a repo here:

Compute shaders are basically OpenCL-light with the added benefit of being a little more integrated into the OpenGL pipeline, so it’s easier to access texture data, for instance. With SSBOs you can shuffle a lot of data over to be processed by the GPU via a compute shader instead of the CPU. Eventually I’ll be getting around to porting some of the Nvidia Gameworks OpenGL examples to another repo related to the one above, and the ComputeParticles example is nice:
http://docs.nvidia.com/gameworks/content/gameworkslibrary/graphicssamples/opengl_samples/computeparticlessample.htm
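To give a rough idea of the SSBO side, here’s a minimal sketch of a compute shader that integrates a particle buffer in place (the struct layout, names and uniforms are made up for illustration, not taken from that sample):

```glsl
#version 430
// Minimal sketch: update a particle buffer in place via an SSBO.
// Dispatch with glDispatchCompute((numParticles + 255) / 256, 1, 1).
layout(local_size_x = 256) in;

struct Particle {
    vec4 position;   // xyz = position, w unused (keeps std430 layout simple)
    vec4 velocity;   // xyz = velocity, w = remaining lifetime
};

layout(std430, binding = 0) buffer Particles {   // placeholder block name
    Particle particles[];
};

uniform float u_deltaTime;   // placeholder uniforms
uniform vec3  u_gravity;

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return;

    Particle p = particles[i];
    p.velocity.xyz += u_gravity * u_deltaTime;
    p.position.xyz += p.velocity.xyz * u_deltaTime;
    p.velocity.w   -= u_deltaTime;               // age the particle
    particles[i] = p;
}
```

The same buffer can then be bound as a vertex buffer (or read in the vertex shader) for rendering, so the particle data never has to round-trip through the CPU.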

To be honest, the focus of that tutorial series is not so much OpenGL Compute Shaders as the physics and mathematics of light, which is eventually applied to ray tracing and happens to use OpenGL Compute Shaders as one possible means of hardware-accelerated computing.
It just explains enough about OpenGL Compute Shaders in order to get something running and to understand the accompanying source code.

Anyway, I am certain that there are far better tutorials explaining the mechanics of OpenGL Compute Shaders and the underlying compute model, known from CUDA and OpenCL.
And if not, maybe someone wants to contribute one to the growing LWJGL Wiki? :slight_smile:

Still thanks to kappa and Catharsis for mentioning it! :wink:

btw. Catharsis, the link to the Wiki article has changed, as it has been moved to a new Github repository, LWJGL/lwjgl3-wiki.

Maybe it would be better to contribute to the “Articles & tutorials” section right here on JGO.

Again, this depends on whether you’re able to use any of the performance-related features of compute shaders. Many common post-processing passes such as blurring and lighting can be optimized that way.

Point sprites suck. They’re limited by the maximum point size of the OpenGL implementation, which is 63 or 64 pixels if you’re lucky. Not a limitation you want when running through a smoke cloud.

I understand very well how lighting, shadow mapping and GPU skinning work. I’m very familiar with vs and fs. I was just wondering if it’s worth learning cs and gs to improve my current features.

I’ve never come across a situation where I needed a particle bigger than 64x64 though.

One neat thing about compute shaders is that they exist within the OpenGL pipeline, so you can use normal texture sampling in them, which is not how OpenCL works.

A simple invert op w/ a sampler:

https://github.com/typhonrt/modern-java6-android-gldemos/blob/master/java6-android-gldemos/src/main/java/org/typhonrt/android/java6/gldemo/open/gles31/invert/ComputeInvertSampler.java#L92

And in the compute shader:
https://github.com/typhonrt/modern-java6-android-gldemos/blob/master/java6-android-gldemos/src/main/assets/shaders/open/gles31/color/invertTextureSampler.comp#L38

I also provide the imageLoad version:
https://github.com/typhonrt/modern-java6-android-gldemos/blob/master/java6-android-gldemos/src/main/assets/shaders/open/gles31/color/invertTexture.comp#L29
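The gist of the sampler version, sketched from memory rather than copied from the repo (names are illustrative):

```glsl
#version 310 es
// Minimal sketch: read with a regular sampler, write with imageStore.
precision highp float;

layout(local_size_x = 8, local_size_y = 8) in;

uniform highp sampler2D u_source;                                   // normal texture sampling
layout(binding = 0, rgba8) writeonly uniform highp image2D u_destination;

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = imageSize(u_destination);
    if (any(greaterThanEqual(texel, size))) return;

    // texelFetch reads the texel directly; texture() with normalized coords works too.
    vec4 color = texelFetch(u_source, texel, 0);
    imageStore(u_destination, texel, vec4(vec3(1.0) - color.rgb, color.a));
}
```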

[quote]That is a valid use case, but there are plenty of better ways to do it. http://www.slideshare.net/DevCentralAMD/vertex-shader-tricks-bill-bilodeau
[/quote]
I’m currently sticking to OpenGL 3.3, so I can’t use a lot of those DirectX techniques in OpenGL because the equivalent features don’t exist in 3.3.

[quote]Have you considered using point sprites for that?
[/quote]
I haven’t. They look interesting, I will give them a try. The 64-pixel size limit shouldn’t be a problem. Thanks for the heads up.

p.s. How do you quote a specific person? My quotes just say quote!