Geometry shaders are generally not very commonly used. They’re useful for expanding points into quads and similar tasks. In some cases you may want to calculate per-face normals or do layered rendering, but most of the time the overhead of simply having a geometry shader is so high that there are better ways to do those things.
Compute shaders can be used to improve the performance of some algorithms. Either the extra flexibility allows for a more efficient algorithm, or shared memory can be used to avoid redundant calculations.
Usually the compute shader route is slower than the pixel shader variant if you don’t know exactly what you are doing. If you are just learning, ignore every shader stage other than vertex and fragment.
Exactly. If you don’t know what you’re doing with compute shaders, you’re gonna end up with something slower than a fullscreen quad and a pixel shader. Compute shaders are more “low-level” than pixel shaders. It’s actually really hard to beat the efficiency of pixel shaders, textures and render targets unless you have some very specific cases. Just directly porting your fullscreen-quad pixel shader to a compute shader is in almost all cases going to make it slower. You need to take advantage of the flexibility and/or the shared memory within work groups to get any advantage out of compute shaders.
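To make that concrete, here is a minimal sketch (not anyone’s actual shader; the names u_input, u_output and the work group size are made up for the example) of the kind of thing that does benefit: a horizontal box blur where each work group loads a tile of the source image into shared memory once, so neighbouring invocations don’t all re-fetch the same texels.

[code]
#version 430

// Hypothetical horizontal box blur, radius 4, one work group per 128-texel row segment.
layout(local_size_x = 128, local_size_y = 1) in;

layout(binding = 0) uniform sampler2D u_input;                 // source image
layout(binding = 1, rgba8) uniform writeonly image2D u_output; // destination image

const int RADIUS = 4;

// Shared tile: 128 "inner" texels plus RADIUS texels of apron on each side.
shared vec4 s_tile[128 + 2 * RADIUS];

void main() {
    ivec2 size  = imageSize(u_output);
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    int   local = int(gl_LocalInvocationID.x);

    // Each invocation loads one inner texel; the first RADIUS invocations also load the aprons.
    s_tile[local + RADIUS] = texelFetch(u_input, clamp(coord, ivec2(0), size - 1), 0);
    if (local < RADIUS) {
        ivec2 left  = clamp(coord - ivec2(RADIUS, 0), ivec2(0), size - 1);
        ivec2 right = clamp(coord + ivec2(128, 0),    ivec2(0), size - 1);
        s_tile[local]                = texelFetch(u_input, left,  0);
        s_tile[local + 128 + RADIUS] = texelFetch(u_input, right, 0);
    }
    memoryBarrierShared();
    barrier(); // wait until the whole tile is loaded before anyone reads it

    // Average 2*RADIUS+1 samples purely from shared memory.
    vec4 sum = vec4(0.0);
    for (int i = -RADIUS; i <= RADIUS; ++i) {
        sum += s_tile[local + RADIUS + i];
    }
    if (coord.x < size.x && coord.y < size.y) {
        imageStore(u_output, coord, sum / float(2 * RADIUS + 1));
    }
}
[/code]

A straight port that just fetched all nine samples per invocation would read each texel up to nine times from the texture; with the shared tile each texel is fetched once per work group.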
I use a Geometry Shader to implement a particle emitter: you pass in the center point of each particle and it outputs quads which always face the camera.
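For anyone curious, a billboard expansion like that can look roughly like the following geometry shader. This is only a sketch, assuming the vertex shader forwards world-space particle centers through gl_Position unchanged; the uniform names (u_view, u_projection, u_halfSize) are invented for the example.

[code]
#version 330 core

// Hypothetical billboard expansion: one point in, one camera-facing quad out.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  u_view;        // world -> view
uniform mat4  u_projection;  // view -> clip
uniform float u_halfSize;    // half the particle's edge length, in world units

out vec2 v_texCoord;

void main() {
    // Camera right/up axes in world space, taken from the rows of the view matrix.
    vec3 right = vec3(u_view[0][0], u_view[1][0], u_view[2][0]);
    vec3 up    = vec3(u_view[0][1], u_view[1][1], u_view[2][1]);

    vec3 center = gl_in[0].gl_Position.xyz; // world-space center passed from the vertex shader

    vec2 offsets[4] = vec2[](vec2(-1, -1), vec2(1, -1), vec2(-1, 1), vec2(1, 1));
    for (int i = 0; i < 4; ++i) {
        vec3 corner = center + (right * offsets[i].x + up * offsets[i].y) * u_halfSize;
        gl_Position = u_projection * u_view * vec4(corner, 1.0);
        v_texCoord  = offsets[i] * 0.5 + 0.5;
        EmitVertex();
    }
    EndPrimitive();
}
[/code]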
And I ported it to OpenGL ES 3.1 for Android with a repo here:
Compute shaders are basically OpenCL light, with the added benefit of being a little more integrated into the OpenGL pipeline, so it’s easier to access texture data, for instance. With SSBOs you can shuffle a lot of data to be processed by the GPU via a compute shader instead of by the CPU. Eventually I’ll get around to porting some of the Nvidia Gameworks OpenGL examples to another repo like the one above; the ComputeParticles example is nice: http://docs.nvidia.com/gameworks/content/gameworkslibrary/graphicssamples/opengl_samples/computeparticlessample.htm
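As a rough illustration of the SSBO idea (not taken from that sample; the Particle layout and uniform names are just assumptions for the sketch), a compute shader can integrate particle state that lives entirely in GPU memory:

[code]
#version 430

// Hypothetical particle update: positions/velocities live in an SSBO and never touch the CPU.
layout(local_size_x = 256) in;

struct Particle {
    vec4 position; // xyz = position, w = remaining life
    vec4 velocity; // xyz = velocity, w unused (kept vec4 for std430 alignment)
};

layout(std430, binding = 0) buffer Particles {
    Particle particles[];
};

uniform float u_dt;      // frame delta time in seconds
uniform vec3  u_gravity; // e.g. vec3(0.0, -9.81, 0.0)

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return;

    Particle p = particles[i];
    p.velocity.xyz += u_gravity * u_dt;
    p.position.xyz += p.velocity.xyz * u_dt;
    p.position.w   -= u_dt; // count down the particle's life

    particles[i] = p;
}
[/code]

Since the same buffer can then be bound as a vertex buffer (or read in the vertex shader) for drawing, the particle data never has to round-trip through the CPU.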
To be honest, the focus of that tutorial series is not meant to be so much about OpenGL Compute Shaders as about the physics and mathematics of light, which is eventually applied to ray tracing and happens to use OpenGL Compute Shaders as one possible means of hardware-accelerated computing.
It just explains enough about OpenGL Compute Shaders in order to get something running and to understand the accompanying source code.
Anyway, I am certain that there are far better tutorials explaining the mechanics of OpenGL Compute Shaders and the underlying compute model familiar from CUDA and OpenCL.
And if not, maybe someone wants to contribute one to the growing LWJGL Wiki?
Still thanks to kappa and Catharsis for mentioning it!
btw. Catharsis, the link to the Wiki article has changed, as it has been moved to a new Github repository, LWJGL/lwjgl3-wiki.
Again, this depends on whether you’re able to use any of the performance-related features of compute shaders. It’s possible to optimize many common post-processing shaders like blurring, lighting and similar passes.
Point sprites suck. They’re limited by the maximum point size of the OpenGL implementation, which is 63 or 64 pixels if you’re lucky. Not a limitation you want when running through a smoke cloud.
I understand very well how lighting, shadow mapping and GPU skinning work. I’m very familiar with vertex and fragment shaders. I was just wondering whether it’s worth learning compute and geometry shaders to improve my current features.
One neat thing about compute shaders is that they exist within the OpenGL pipeline, so you can use normal texture sampling in them, which is not how OpenCL works.
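For example, something like this works in a compute shader but has no direct equivalent in plain OpenCL, because the sampler uses the texture’s usual filtering and wrapping state (the binding points and names here are made up):

[code]
#version 430

// Hypothetical example: filtered sampling of a regular texture from a compute shader.
layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 0) uniform sampler2D u_scene;                 // sampled with normal filtering/wrapping
layout(binding = 1, rgba8) uniform writeonly image2D u_result;

void main() {
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = imageSize(u_result);
    if (coord.x >= size.x || coord.y >= size.y) return;

    // Sample at texel centers so the bilinear filter behaves as expected.
    vec2 uv = (vec2(coord) + 0.5) / vec2(size);
    vec4 color = textureLod(u_scene, uv, 0.0); // explicit LOD: no screen-space derivatives outside fragment shaders

    imageStore(u_result, coord, color);
}
[/code]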
[quote]Have you considered using point sprites for that?
[/quote]
I haven’t. They look interesting, I will give them a try. The 64-pixel size limit shouldn’t be a problem. Thanks for the heads up.
p.s. How do you quote a specific person? My quotes just say quote!