Is convolution supposed to be slow?

I’ve been playing around with the imaging functions provided by OpenGL. I’ve written a small program to load an image and render it using glDrawPixels(). I’ve also created a small 3x3 Gaussian blur kernel which I’ve enabled with glConvolutionFilter2D(). However, it takes nearly one second to render a 1024x1024 window. Even if I take advantage of the kernel’s symmetry and use a separable convolution filter instead, the performance is still terrible. Am I doing something wrong, or should I expect the OpenGL imaging functions to be slow?

Mark McKay

Do you have the imaging subset enabled in your GL drivers (GL_ARB_imaging)? If not, the driver will fall back to doing it all in software, which can be very slow.

Well, GL.isExtensionAvailable("GL_ARB_imaging") is returning true, so I know it’s available. Is there something I need to do to explicitly enable it? Can this be done in software at runtime?

No one uses GL’s convolution support (at least, no games do), so it’s highly unlikely you’ll find any consumer graphics card that supports it properly in hardware.

If you want a regular Gaussian blur, you can easily do it on the CPU (though it obviously won’t be cheap). If you want it done in real time, you can do it by rendering to a texture and using GLSL (easiest) or the fixed-function pipeline (slower and trickier, but works everywhere).
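For the CPU route, a separable blur is just two 1-D passes, one horizontal and one vertical. A minimal sketch in plain Java (the class and method names here are illustrative, not from any library; edges are handled by clamping):

```java
// Sketch of a CPU separable Gaussian blur on a grayscale float image.
// CpuBlur, blur1D, and blurSeparable are made-up names for illustration.
public class CpuBlur {
    // Convolve along one axis with a 1-D kernel, clamping coordinates at edges.
    static float[] blur1D(float[] src, int w, int h, float[] k, boolean horizontal) {
        int r = k.length / 2;
        float[] dst = new float[src.length];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0f;
                for (int i = -r; i <= r; i++) {
                    int sx = horizontal ? Math.min(w - 1, Math.max(0, x + i)) : x;
                    int sy = horizontal ? y : Math.min(h - 1, Math.max(0, y + i));
                    sum += k[i + r] * src[sy * w + sx];
                }
                dst[y * w + x] = sum;
            }
        }
        return dst;
    }

    // A separable 3x3 Gaussian: apply [1 2 1]/4 horizontally, then vertically.
    static float[] blurSeparable(float[] src, int w, int h) {
        float[] k = {0.25f, 0.5f, 0.25f};
        return blur1D(blur1D(src, w, h, k, true), w, h, k, false);
    }
}
```

The win over a direct 2-D convolution is that an NxN kernel costs 2N multiplies per pixel instead of N*N, which matters more as the kernel grows.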

If you have ARB_fragment_program support, take a look at the HDR demo in the jogl-demos workspace; it generates a separable Gaussian blur kernel at run-time and applies it to an off-screen floating-point pbuffer.