Solving aliasing - noisy rendering

One of my biggest bugbears with game rendering is aliasing: Shimmering pixels, crawling pixels, ‘jaggies’, etc. It just looks wrong.

I’ve been thinking a lot about how to solve this, and my current conclusion is that aliasing is essentially a static ‘distortion’ of the intended image caused by not having enough bandwidth, and that this distortion should really have manifested as noise instead.
I’ve tried to write the idea down in a lot more detail in this document. I’m no great writer, so if anything is unclear or is just plain wrong, let me know.

I’ve also made a small proof of concept, which simulates rasterisation issues by scaling down a high-resolution image to a lower-resolution one (16 times fewer pixels).
You can download this little test program here (executable jar), with the sources here. The test program was quickly hacked together, so by no means is it a nicely designed thing; it’s just a proof-of-concept.
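For the curious, the core of the ‘noisy rasterisation’ simulation boils down to something like this (a simplified sketch with my own naming, not the actual demo code): instead of always sampling a fixed position per output pixel, pick a random sample position inside each 4x4 source block, so the sampling error becomes noise rather than a fixed pattern.

```java
import java.awt.image.BufferedImage;
import java.util.Random;

// Simulates aliased vs. 'noisy' rasterisation by downscaling 4x in each
// dimension (16x fewer pixels), as in the demo. Sketch only; names are mine.
final Random rng = new Random();

BufferedImage downscale(BufferedImage src, boolean noisy) {
    int w = src.getWidth() / 4, h = src.getHeight() / 4;
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // Plain rasterisation samples a fixed position per pixel (here the
            // top-left of the 4x4 block); 'noisy' rasterisation randomises the
            // sample position, turning the fixed aliasing error into noise.
            int sx = x * 4 + (noisy ? rng.nextInt(4) : 0);
            int sy = y * 4 + (noisy ? rng.nextInt(4) : 0);
            dst.setRGB(x, y, src.getRGB(sx, sy));
        }
    }
    return dst;
}
```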

To use the program:

  • Start the executable jar
  • You see a downscaled image showing lots of aliasing artefacts. I’ve drawn a white circle around a particularly problematic area.
  • Click on the image to make key events register.
  • There are 7 rendering modes (0–6):

PRESS ‘0’:
The default rendering with no anti-aliasing

PRESS ‘1’:
Added ‘noisy rasterisation’ (see document), halved colour depth, added noise (there’s a small sketch of the colour-depth/noise trick after this list).
This mode already sort of ‘solves’ aliasing at half the bandwidth (because of the reduced colour depth), but the image is very grainy.

PRESS ‘2’:
Traditional 2xMSAA. It’s already better than the default no-aa rendering mode, but there are still aliasing issues.

PRESS ‘3’:
Added 2x ‘noisy’ MSAA, and reduced the overall noise while maintaining the halved colour depth. This should be approximately the same bandwidth as no-AA (the default mode).

PRESS ‘4’:
Traditional 4xMSAA. Better than 2xMSAA, but the circled part still shows aliasing issues.

PRESS ‘5’:
Added 4x ‘noisy’ MSAA, and reduced the overall noise while maintaining the halved colour depth. This should be approximately the same bandwidth as traditional 2xMSAA.

PRESS ‘6’:
Added 4x ‘noisy’ MSAA, and reduced the overall noise, but using the full 24-bit colour depth. This should be approximately the same bandwidth as traditional 4xMSAA. Compared to traditional 4xMSAA, the problematic circled area renders correctly at the cost of noise.

PRESS ‘n’ to toggle the added layer of noise (but not the ‘noisy rasterisation’) to see what the effect is.
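To make the bandwidth comparisons above concrete: halving the colour depth means 12 bits instead of 24 bits per sample, so e.g. 2 noisy samples × 12 bits = 24 bits (the same as one no-AA sample), and 4 × 12 = 48 bits (the same as traditional 2xMSAA). And here’s a rough sketch of the ‘halved colour depth plus noise’ idea itself (my own illustration, not the demo’s actual code): quantise each channel to 4 bits, but add random noise before quantising so the banding becomes grain.

```java
import java.util.Random;

// Halve the colour depth (8 -> 4 bits per channel) with random dither (sketch).
// Plain quantisation would give banding; adding noise before quantising
// spreads the quantisation error out as grain instead.
final Random rng = new Random();

int quantiseWithNoise(int rgb) {
    int r = dither((rgb >> 16) & 0xFF);
    int g = dither((rgb >> 8) & 0xFF);
    int b = dither(rgb & 0xFF);
    return (r << 16) | (g << 8) | b;
}

int dither(int channel) {
    int step = 16;                              // 256 / 16 = sixteen 4-bit levels
    int noisy = channel + rng.nextInt(step) - step / 2;
    int q = Math.max(0, Math.min(255, noisy)) / step * step;
    return q + step / 2;                        // re-centre within the bucket
}
```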

Perhaps I’ve just reinvented the wheel somewhere, and I’m certainly no expert in the finer details of 3D rendering, but I’m quite enthusiastic about the results.
Essentially it sort of ‘solves’ aliasing by replacing it with ‘natural’ noise, where noise should be happening in real life.
Of course it’s of limited use because in many cases one would actually prefer aliasing over noise (nobody wants a noisy Mario), but for naturalistic scenes I think it’s quite effective.

What I would be very interested in is an OpenGL implementation of this. Is this even possible?
Adding a simple layer of noise is easy enough, but I wouldn’t know how to add noise in the rasterisation process (essentially randomising aliasing artefacts).
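For completeness, the ‘simple layer of noise’ part really is trivial; per pixel it’s something like this (sketch, my own naming and an arbitrary amplitude):

```java
import java.util.Random;

// Adds a small amount of per-pixel, per-frame noise to a packed RGB image
// (the 'simple layer of noise' toggled with 'n' in the demo; sketch only).
final Random rng = new Random();

void addNoise(int[] rgb, int amplitude) {
    for (int i = 0; i < rgb.length; i++) {
        int n = rng.nextInt(2 * amplitude + 1) - amplitude; // in [-amplitude, amplitude]
        int r = clamp(((rgb[i] >> 16) & 0xFF) + n);
        int g = clamp(((rgb[i] >> 8) & 0xFF) + n);
        int b = clamp((rgb[i] & 0xFF) + n);
        rgb[i] = (r << 16) | (g << 8) | b;
    }
}

int clamp(int v) { return Math.max(0, Math.min(255, v)); }
```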
Any other thoughts?

PS.
I’d like to add that one thing this proof-of-concept demo doesn’t demonstrate is how it completely solves the typical ‘shimmering’ and ‘crawling’ associated with aliasing when slowly panning the camera around: the image stays completely consistent.
The only trade-off is some visible (but natural looking) noise, which I think is the only correct trade-off to make.

Also, I’d like to point out that the demo doesn’t completely do the concept justice, because it stretches the image without any filtering, so there is still some ‘blockyness’. A commented-out line in the source code solves this by adding RenderingHints, but that is very expensive in pure Java, so the framerate will be low if you enable filtering. You can also disable the stretching entirely by changing the ‘SCALE’ constant to 1 in the source code.
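For reference, enabling filtered scaling in Java2D looks roughly like this (variable names are mine, not necessarily the demo’s):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Scale 'src' up by 'scale' with bilinear filtering instead of
// nearest-neighbour, smoothing out the 'blockyness'. Costly in pure Java.
void drawScaled(Graphics2D g2, BufferedImage src, int scale) {
    g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                        RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g2.drawImage(src, 0, 0, src.getWidth() * scale, src.getHeight() * scale, null);
}
```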

Finally, the best modes to compare in the demo are ‘4’ (i.e. pressing ‘4’, which is traditional 4xMSAA) and ‘6’ (which is ‘noisy’ 4xMSAA). In my opinion there’s a clear improvement in mode ‘6’, especially when moving the camera around (though camera movement isn’t implemented in the demo yet).

Wow, this is good stuff, I’m impressed. I took a look at the demo and that is actually a really nice aesthetic, like you mentioned in your paper (that was also a good read btw), not to mention an actually quite useful thing. :slight_smile:

I don’t know enough about OpenGL to say how feasible this is, but at first glance at least it seems like it would be in the realm of shaders.

I’m definitely staying tuned for more.

Your source link points to the executable file.

Found it: aatest-src.jar

Lol

Spatial jittering of fragments is not possible with rasterization, unless you’re doing point rendering. You can probably apply jitter within the triangle (e.g. when sampling the textures of alpha tested surfaces), but it’s not very practical and there’s still nothing you can do about edges. Note that your demo basically does ray-tracing, not rasterization.

Temporal jittering on the other hand is very easy, you can simply apply some sub-pixel randomization to the camera frustum. It won’t result in the noise-everywhere effect you’re looking for, but it does work great and gives you high-quality antialiasing for minimal cost. The problem is that ghosting occurs when fps drops or there are big movements on screen. CryEngine has implemented a few workarounds for this, you can read more here.
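A rough sketch of that sub-pixel jitter (illustrative only, not CryEngine’s actual code): offset the projection by a random fraction of a pixel each frame, then blend frames over time.

```java
import java.util.Random;

// Per-frame sub-pixel jitter for temporal antialiasing (illustrative sketch).
final Random rng = new Random();

// Returns a random offset in NDC units, at most half a pixel in each axis,
// for a framebuffer of the given size.
float[] subPixelJitter(int widthPx, int heightPx) {
    float jx = (rng.nextFloat() - 0.5f) * (2.0f / widthPx);
    float jy = (rng.nextFloat() - 0.5f) * (2.0f / heightPx);
    // Apply by pre-multiplying the projection matrix with a translation
    // of (jx, jy): every frame then samples slightly different sub-pixel
    // positions, and blending frames over time averages them out.
    return new float[] { jx, jy };
}
```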

Finally, an important detail for your bandwidth calculations. With MSAA depth/coverage is evaluated and stored multiple times for a fragment, but shading is only done once. This is true for all the low-end and practical (performance-wise) modes. For example 4xMSAA is 1 color + 4 coverage samples. Also, since multiple coverage samples won’t improve anything within the triangle, GPUs can afford to apply MSAA only to the triangle edges, for even greater bandwidth savings. Your technique works more like super-sampling antialiasing (SSAA), since the original un-scaled image has full color information. Temporal AA also results in the same quality as SSAA, for static scenes at least.

Thanks Spasi, that was very informative.
To be honest my knowledge of this stuff at a low level isn’t that deep, so I don’t want to pretend I’ve thought of something nobody else has that will change the world (apparently I even confused MSAA with SSAA to begin with).
This was just an idea based on the basic premise that a lack of bandwidth should ‘naturally’ incur noise at all levels, instead of simply settling for a fixed distortion of the whole image.

But apparently this can’t really be implemented through the existing APIs. Perhaps at the driver level?

I know about the temporal jitter technique (I think it’s also used in MGS4 and DMC4), but it’s not quite what I was looking for; I’d really prefer this to be per pixel, and without involving blending the complete frames.

[quote]Also, since multiple coverage samples won’t improve anything within the triangle, GPUs can afford to apply MSAA only to the triangle edges
[/quote]
Does this actually happen in practice somewhere (considering how expensive MSAA still is)?

Also, thanks for the feedback BurntPizza and Several Kilo-Bytes (I fixed the link) :slight_smile:

Quite an interesting solution to aliasing! Well done.

Might I suggest a more dynamic scene? I.e. perhaps a slowly panning/horizontally scrolling scene, so that the aliasing is more noticeable and your technique is shown to be superior?

I must admit that I find the noise far too distracting at the current frame rate on my machine. Is it possible to create a faster version that doubles the frame rate? Or to create a video and double the playback speed? I think that might solve my distraction issues.

In addition to what moogie said: I wonder if there is a way to blur or blend together two or more frames so that the random static/noise is less apparent, while staying true to what is being done here.

Sorry for bumping this super old thread, but I’ve recently been working a bit on a little 3D raycasting/raytracing engine (all in CPU) where I implemented some of the ideas that I posted here some years ago.

What I didn’t implement was reducing the colour depth and adding noise to all pixels. What I did implement was dithering every pixel’s ray direction, and blending previous frames.


No anti-aliasing, so lots of image break-up.


With dithered ray directions, which seems like an improvement (at least the break-up is now randomised, as it should be imho). But it is noisy.
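A minimal sketch of the ray-direction dithering (my own reconstruction, not the engine’s actual code): instead of always shooting the ray through the pixel centre, jitter the sample point randomly within the pixel every frame.

```java
import java.util.Random;

// Per-pixel ray-direction dithering (illustrative sketch). Jittering the
// sample position within each pixel turns fixed aliasing artefacts into
// uncorrelated noise.
final Random rng = new Random();

// Returns the camera-space ray direction for pixel (px, py), assuming a
// simple pinhole camera with vertical field of view 'fovY' in radians.
float[] ditheredRayDir(int px, int py, int width, int height, float fovY) {
    float sx = px + rng.nextFloat();           // jittered sample inside the pixel
    float sy = py + rng.nextFloat();
    float aspect = (float) width / height;
    float tanHalf = (float) Math.tan(fovY / 2.0);
    float x = (2.0f * sx / width - 1.0f) * tanHalf * aspect;
    float y = (1.0f - 2.0f * sy / height) * tanHalf;
    float len = (float) Math.sqrt(x * x + y * y + 1.0f);
    return new float[] { x / len, y / len, -1.0f / len }; // normalised direction
}
```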


The same as above, but blended with previous frames, which to my eyes looks amazingly smooth. The blending gets disabled as soon as you move the viewport, but when you stop (and blending is engaged again) it’s surprising how quickly the image resolves to something like this.
Overall the performance penalty is less than 5% on my machine.
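The blending is essentially a running average of the jittered frames; a minimal sketch (again my own illustration, names hypothetical):

```java
// Blend the current noisy frame into an accumulation buffer (sketch).
// While the viewport is static, each pixel converges to the average of
// all jittered samples so far, so the noise resolves away over a few frames.
void blendFrame(int[] accumRgb, int[] frameRgb, int frameCount) {
    float w = 1.0f / (frameCount + 1);        // weight of the newest frame
    for (int i = 0; i < accumRgb.length; i++) {
        int a = accumRgb[i], f = frameRgb[i];
        int r = (int) (((a >> 16) & 0xFF) * (1 - w) + ((f >> 16) & 0xFF) * w);
        int g = (int) (((a >> 8)  & 0xFF) * (1 - w) + ((f >> 8)  & 0xFF) * w);
        int b = (int) ((a & 0xFF) * (1 - w) + (f & 0xFF) * w);
        accumRgb[i] = (r << 16) | (g << 8) | b;
    }
    // Reset frameCount to 0 whenever the viewport moves, which disables
    // the history exactly as described above.
}
```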

I’m trying to improve this by not disabling the blending when moving/rotating, instead figuring out where each pixel was in the previous frame. I think most of this can be pre-calculated, so the performance penalty shouldn’t be too bad (hopefully).
The limitation will of course still be when ‘new’ pixels enter the viewport, which will still be noisy for a few frames.

The images above were made using a 512x512 b&w texture consisting of single-pixel lines, meant to really stress any anti-aliasing method. With more naturalistic textures it’s still pretty effective even without blending, and the noise is not as apparent (especially at high frame rates).

I’ll share code soon, but anyway I hope it’s interesting.


Just in case you wondered what a scene like that would look like with a 720-degree field of view :upside_down_face:

Cheers,
Erik
