Coded this yesterday. It’s an implementation of screen-space self-shadowing: think screen-space ambient occlusion, but instead of sampling pixels around the center pixel, I raycast towards the sun to find any immediate occluders. Sadly it’s hard to show this off without a comparison, so take a look at these two sets of images:
http://screenshotcomparison.com/comparison/158537/picture:0
This was rendered with fairly low-res shadow maps, allowing SSSS (lol) to really shine.
Normal shadow maps with wide filtering have some big problems. A wide PCF filter causes surfaces angled towards the light to shadow themselves, because the flat PCF kernel “cuts” into the sloped surface. Fixing that requires a high slope-scaled depth bias, which pushes shadows away from casters seen at a sharp angle from the light’s perspective. The result is “peter panning”: the shadows appear to originate some distance away from the object, making it look like it’s floating in the air. This looks very ugly, and it leaves some clearly occluded areas visibly lit up.
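To make the trade-off concrete, here’s a minimal sketch of the standard slope-scaled depth bias (the kind exposed by graphics APIs as a rasterizer state). The function name and the specific constants are my own illustrative assumptions, not values from the renderer above:

```python
import math

def slope_scaled_bias(n_dot_l, constant_bias=0.0005, slope_scale=0.002,
                      max_bias=0.01):
    """Depth bias that grows with the angle between surface and light.

    n_dot_l -- dot(normal, light_dir); 1.0 means the surface faces the light
    head-on, values near 0 mean a grazing angle. tan(theta) blows up at
    grazing angles, so the shadow comparison gets pushed further and
    further away -- which is exactly what produces peter panning.
    NOTE: all constants here are placeholder values for illustration.
    """
    n_dot_l = max(n_dot_l, 1e-4)
    # tan(theta) = sin(theta) / cos(theta), with cos(theta) = n_dot_l
    tan_theta = math.sqrt(max(1.0 - n_dot_l * n_dot_l, 0.0)) / n_dot_l
    return min(constant_bias + slope_scale * tan_theta, max_bias)
```

At `n_dot_l = 1.0` only the constant bias applies; as the angle gets steeper the bias climbs until it hits the clamp, and everything the bias skips over is shadow that detaches from its caster.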
Another big problem is the difficulty of getting object-scale self-shadowing, AKA small details shadowing other details. In many cases this requires a ridiculously big shadow map, and even with cascaded shadow maps (the above images use 6 cascades @ 1024x1024 each) the precision is simply far too low when the camera is close up to provide any coherent shadowing of details like small ridges or fingers occluding each other. Sure, I could just increase the shadow map size to 4096x4096 and call it a day, but that’s gonna cut a sizable chunk of my FPS away (not to mention ~400 MB of VRAM for 6 cascades).
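A quick sanity check on that VRAM figure, assuming a 32-bit depth format (4 bytes per texel), which is my assumption about the setup:

```python
# Memory for N square shadow-map cascades, assuming 4 bytes per texel
# (a 32-bit depth format). The function name is just for illustration.
def cascade_vram_mb(resolution, cascades=6, bytes_per_texel=4):
    return resolution * resolution * bytes_per_texel * cascades / (1024 ** 2)

print(cascade_vram_mb(1024))  # 24.0 MB for the current setup
print(cascade_vram_mb(4096))  # 384.0 MB, roughly the ~400 MB quoted
```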
However, there’s a cool little trick/hack you can do that is very similar to ambient occlusion. For ambient occlusion, you sample a number of pixels around each pixel to figure out if they’re blocking ambient light to the center pixel. It can have a very nice effect despite only being able to work with the very limited data that’s in the depth buffer. Now, shadow maps can handle large-scale shadows perfectly fine. We can get nice, soft shadows, but they break down when the resolution is too low and the occluder and the occludee are very close together. The depth buffer, on the other hand, is fairly accurate within a 10-50 pixel radius around each pixel, as ambient occlusion has proven time and time again. So, I decided to simply raycast a few pixels towards the sun from each pixel in an attempt to find an obvious occluder. It’s far from perfect; the pictures above show the best-case scenario, but it has a lot of promise. For geometry much finer than the shadow map resolution, which originally didn’t get any shadows at all, this technique can add a nice sharp contact shadow that brings out a lot of detail. The fact that the shadow map’s shadows are pushed away from their casters is no problem either, since SSSS (lolll) works almost perfectly in exactly those cases, basically filling the gap. It only kicks in in the worst cases for shadow maps, where you can clearly see both the occluder and the geometry that’s supposed to be shadowed, so the two techniques cover each other’s weaknesses well.
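The raymarch itself is simple. Here’s a minimal CPU sketch of the idea; the real thing lives in a pixel shader sampling the depth buffer, and the step count, step size, bias, and the linear-depth convention (smaller = closer to the camera) are all assumptions on my part:

```python
def screen_space_shadow(depth, x, y, light_dir_xy, light_dir_z,
                        steps=8, step_px=1.5, bias=0.02):
    """March from pixel (x, y) towards the light in screen space.

    depth        -- 2D list of linear view-space depths (smaller = closer)
    light_dir_xy -- (dx, dy) screen-space direction towards the light
    light_dir_z  -- how much the ray's depth decreases per pixel stepped
    Returns 0.0 if an occluder is found (shadowed), 1.0 otherwise.
    """
    h, w = len(depth), len(depth[0])
    ray_x, ray_y = float(x), float(y)
    ray_z = depth[y][x]
    for _ in range(steps):
        # Step a few pixels towards the sun.
        ray_x += light_dir_xy[0] * step_px
        ray_y += light_dir_xy[1] * step_px
        ray_z -= light_dir_z * step_px
        sx, sy = int(round(ray_x)), int(round(ray_y))
        if not (0 <= sx < w and 0 <= sy < h):
            break  # left the screen: no information, assume lit
        # If the scene surface at this sample is in front of the ray
        # by more than a small bias, something is blocking the light.
        if ray_z - depth[sy][sx] > bias:
            return 0.0
    return 1.0
```

For example, on a flat floor at depth 5.0 with a column of closer geometry (depth 2.0) a few pixels away, a pixel marching towards that column hits it within a couple of steps and comes back shadowed, while a pixel on the far side of it marches off-screen and stays lit. The small `bias` plays the same role as the shadow-map depth bias, just at a much smaller scale since the depth buffer is so much more precise locally.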
There are of course times when the shadows get noisy due to missing information in the depth buffer, but this technique shows a lot of promise as a band-aid for low-resolution shadow maps while being cheaper than increasing the resolution. I plan on doing some more research. In some cases 1024x1024 shadow maps + SSSS could have similar quality to 4096x4096 shadow maps alone, although with softer distant shadows.