2D Shadow Performance

Currently our apps render a lot of simple drop shadows.

Our approach is very basic:

  1. Render a silhouette of the image in grayscale
  2. Apply a Gaussian blur (via a ConvolveOp)
  3. Render this underneath the original image at a certain offset

(This is basically applied from Vincent Hardy’s “Java2D Graphics” book.)
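
For anyone who wants the concrete picture, here's a stripped-down sketch of those three steps (not our actual code: it uses a flat box kernel instead of a real Gaussian, a hard-coded translucent black, and it doesn't pad the silhouette, so the blur gets clipped at the image edges):

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.util.Arrays;

public final class SimpleDropShadow {

    /** Returns src painted over a blurred shadow offset by (dx, dy); assumes dx, dy >= 0. */
    public static BufferedImage withShadow(BufferedImage src, int blurSize, int dx, int dy) {
        // 1. Silhouette: keep src's alpha, fill the covered area with translucent black.
        BufferedImage silhouette = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = silhouette.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.setComposite(AlphaComposite.SrcIn);   // paint only where src already has pixels
        g.setColor(new Color(0, 0, 0, 128));
        g.fillRect(0, 0, src.getWidth(), src.getHeight());
        g.dispose();

        // 2. Blur the silhouette with a blurSize x blurSize kernel.
        float[] weights = new float[blurSize * blurSize];
        Arrays.fill(weights, 1f / weights.length);
        ConvolveOp blur = new ConvolveOp(
                new Kernel(blurSize, blurSize, weights), ConvolveOp.EDGE_NO_OP, null);
        BufferedImage shadow = blur.filter(silhouette, null);

        // 3. Composite: shadow first, at the offset, then the original image on top.
        BufferedImage out = new BufferedImage(
                src.getWidth() + dx, src.getHeight() + dy, BufferedImage.TYPE_INT_ARGB);
        Graphics2D og = out.createGraphics();
        og.drawImage(shadow, dx, dy, null);
        og.drawImage(src, 0, 0, null);
        og.dispose();
        return out;
    }
}
```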

The catch is that the ConvolveOp takes a long time for large images. This makes sense to you and me: an 800x600 image with a 6x6 blur kernel comes to about 17.3 million calculations. But our users aren’t impressed with this explanation. :slight_smile: So my questions are:

  1. Is there a pure Java approach to a faster shadow? It doesn’t have to be a kosher Gaussian blur… anything that allows varying degrees of blurring the edges is fine.

  2. Is there another road we should look down? JOGL, JNI, etc.?

Determine the direction of the shadow, start at the opposite edge of the image, and loop in that direction. When you reach the end of the image content (i.e. encounter a transparent pixel), start drawing black pixels and fade them out over the next few loops.
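
Something like this, maybe (just a rough sketch, and it only handles a shadow cast straight to the right; sweep the other way for other directions):

```java
import java.awt.image.BufferedImage;

final class EdgeFadeShadow {

    /** Appends a fading black shadow to the right of the opaque pixels in each row. */
    static BufferedImage fadeRight(BufferedImage src, int fadeLength) {
        BufferedImage out = new BufferedImage(
                src.getWidth() + fadeLength, src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < src.getHeight(); y++) {
            int lastOpaque = -1;
            for (int x = 0; x < src.getWidth(); x++) {
                int argb = src.getRGB(x, y);
                if ((argb >>> 24) > 0) {
                    lastOpaque = x;          // remember the right-most non-transparent pixel
                }
                out.setRGB(x, y, argb);      // copy the original row
            }
            // Past the image content, draw black pixels whose alpha fades out.
            for (int i = 1; i <= fadeLength && lastOpaque >= 0; i++) {
                int alpha = 255 - 255 * i / fadeLength;
                out.setRGB(lastOpaque + i, y, alpha << 24);  // black, fading to transparent
            }
        }
        return out;
    }
}
```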

Hmmm, run JPhotoBrush and perform its Gaussian blur/smooth! Maybe you’re doing something wrong!

Maybe (a) 3x3 smooth, (b) reduce pixels to 50%, (c) 3x3 smooth the reduced image and display that! Are you doing something odd, like creating lots of new buffers each time?
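
In code, what I mean is roughly this (just a sketch; the plain 3x3 box kernel and bilinear downscale are stand-ins for whatever smooth JPhotoBrush actually uses):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

final class CheapBlur {

    private static final ConvolveOp SMOOTH_3X3 = new ConvolveOp(
            new Kernel(3, 3, new float[] {
                    1f / 9, 1f / 9, 1f / 9,
                    1f / 9, 1f / 9, 1f / 9,
                    1f / 9, 1f / 9, 1f / 9 }),
            ConvolveOp.EDGE_NO_OP, null);

    static BufferedImage blur(BufferedImage src) {
        BufferedImage a = SMOOTH_3X3.filter(src, null);                  // (a) 3x3 smooth
        BufferedImage b = scale(a, a.getWidth() / 2, a.getHeight() / 2); // (b) reduce to 50%
        return SMOOTH_3X3.filter(b, null);                               // (c) smooth the reduced image
    }

    private static BufferedImage scale(BufferedImage src, int w, int h) {
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null);  // synchronous scale of a BufferedImage
        g.dispose();
        return dst;
    }
}
```

Drawn back at full size, the half-resolution result reads as a much wider blur for roughly a quarter of the work on the second pass.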

Thanks for the replies.

I’m not entirely sure what 2playgames is suggesting. Will that let me achieve a blurred arbitrary shape?

I looked at JPhotoBrush: I think it performs a much smaller blur than we do. We let our kernel size range from 0 (so it’s an unblurred silhouette) to about 30 pixels (to give a decent blur).

keldon85 was right: there was a significant memory leak in our existing model. We were using Vincent Hardy’s CompositeOp to run two filters consecutively. It turns out the first filter was always passing “null” as its destination image, so we were allocating a fresh image on every call. So I cleaned that up.

Also, I wrote my own ConvolveOp replacement that skips continuous blocks of the same color. So for large continuous shapes, our shadows now render in about half the time they used to. For discontinuous shapes (like text), rendering time is about the same.
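
To give a flavor of the trick, here is a simplified single-channel box-blur sketch (not the actual weighted ConvolveOp replacement we use):

```java
/**
 * Box blur that skips windows lying entirely inside a constant-valued region,
 * in the spirit of "skip continuous blocks of the same color".
 * Operates on one channel (e.g. the silhouette's alpha), values 0..255.
 */
final class RunSkippingBlur {

    static int[] blur(int[] src, int width, int height, int radius) {
        int[] dst = new int[src.length];

        // For each pixel, precompute the start and end (exclusive) of the
        // horizontal run of identical values it belongs to.
        int[] runStart = new int[src.length];
        int[] runEnd = new int[src.length];
        for (int y = 0; y < height; y++) {
            int row = y * width;
            for (int x = 0; x < width; ) {
                int v = src[row + x];
                int end = x + 1;
                while (end < width && src[row + end] == v) end++;
                for (int i = x; i < end; i++) {
                    runStart[row + i] = x;
                    runEnd[row + i] = end;
                }
                x = end;
            }
        }

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int v = src[y * width + x];
                // Cheap O(kernel height) test: is the whole window one constant block?
                boolean constant = x - radius >= 0 && x + radius < width
                        && y - radius >= 0 && y + radius < height;
                for (int ky = -radius; constant && ky <= radius; ky++) {
                    int idx = (y + ky) * width + x;
                    constant = src[idx] == v
                            && runStart[idx] <= x - radius
                            && runEnd[idx] > x + radius;
                }
                if (constant) {
                    dst[y * width + x] = v;  // blurring a flat block changes nothing
                    continue;
                }
                // Otherwise fall back to the plain O(kernel area) box blur.
                int sum = 0, count = 0;
                for (int ky = -radius; ky <= radius; ky++) {
                    for (int kx = -radius; kx <= radius; kx++) {
                        int sx = x + kx, sy = y + ky;
                        if (sx >= 0 && sx < width && sy >= 0 && sy < height) {
                            sum += src[sy * width + sx];
                            count++;
                        }
                    }
                }
                dst[y * width + x] = sum / count;
            }
        }
        return dst;
    }
}
```

The win comes from the constant-block test costing only one check per kernel row instead of a full kernel-area convolution, which is why big flat shapes benefit and text doesn’t.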

So rendering a shadow with a kernel size of 25 on a 400x400 square now takes about 131 ms on my iMac. (It used to take 282 ms.)

We’d still be very interested in other approaches to shadows, but at least we can report back that we made some progress. :slight_smile:

Maybe try not calculating pixels you will not be able to see, though that only gains as more pixels are obscured. There are a few things you can do when creating large blurs (like 30 pixels), and one is to borrow from mip mapping. You first compute low-resolution versions of the image, depending on the accuracy you want: for example you might store image/1, image/2, image/4 and so on, or just image/1, image/4, image/16. Then, rather than doing a 61x61 convolution, you end up doing far fewer calculations by using the lower-resolution versions of the image for the parts of the kernel that are further away.

For example, call the centre pixel of the convolution (0,0). Rather than computing all (i,j) where -31<i<31 and -31<j<31 at full resolution, you use the lower resolutions for the further-out pixels, since they make less of a difference anyway. Right now the complexity is O(n^2) per pixel, where the blur size is n/2; you should be able to use this technique to get it to O(n log n).
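
The pyramid itself is cheap to build; something like this (a sketch, using plain bilinear halving):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

final class MipLevels {

    /** Returns [image/1, image/2, image/4, ...] down to about minSize pixels on a side. */
    static List<BufferedImage> build(BufferedImage src, int minSize) {
        List<BufferedImage> levels = new ArrayList<BufferedImage>();
        levels.add(src);
        BufferedImage current = src;
        while (current.getWidth() / 2 >= minSize && current.getHeight() / 2 >= minSize) {
            int w = current.getWidth() / 2, h = current.getHeight() / 2;
            BufferedImage half = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = half.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.drawImage(current, 0, 0, w, h, null);  // average down to half size
            g.dispose();
            levels.add(half);
            current = half;
        }
        return levels;
    }
}
```

The convolution then takes its nearby taps from levels.get(0) and its distant taps from the coarser levels, so a sample 20 pixels out costs one read of image/4 instead of sixteen reads of image/1.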