Aliasing on textured quads

Hey guys. We use JOGL to render our game, and we are currently getting really bad aliasing on angled terrain. See the image below. Everything in our game is drawn as a textured quad. It seems like full multisample anti-aliasing would be overkill for our simple 2D game. What techniques could we use to eliminate this aliasing, perhaps some type of texture filtering? I have heard that bilinear filtering can reduce aliasing.

If you do not have any kind of zooming, then simply adding a transparent 1 pixel border around the texture will give you perfect (= 256x) antialiasing thanks to bilinear filtering. This can however introduce seams between objects, but that is easier to avoid by carefully placing your objects than by falling back on MSAA or (shrug) supersampling. Polygon smooth sadly hasn't worked for me since I started using LWJGL. It isn't implemented by either NVIDIA or AMD, and with line and point smooth becoming deprecated, things are looking grim for built-in 2D antialiasing. Like I said, you can use bilinear filtering (completely free), or if you really, really want, we can talk shaders, but the effect will be the exact same. xD
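As a side note on why the bilinear trick gives exactly 256 coverage steps: between the center of the last opaque texel (alpha 255) and the center of the transparent border texel (alpha 0), GL_LINEAR interpolates alpha linearly across one texel. A minimal sketch of that math (plain Java with my own names, not actual GL code):

```java
// Why a 1 px transparent border + GL_LINEAR smooths edges: sampling between
// the last opaque texel (alpha 255) and the transparent border texel
// (alpha 0) interpolates linearly, giving up to 256 distinct alpha steps.
public class BilinearBorder {
    // t = 0 at the opaque texel center, t = 1 at the border texel center.
    static int edgeAlpha(double t) {
        return (int) Math.round((1.0 - t) * 255.0);
    }

    public static void main(String[] args) {
        for (double t = 0.0; t <= 1.0; t += 0.25) {
            System.out.println("t=" + t + " alpha=" + edgeAlpha(t));
        }
    }
}
```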

Wow, that works nicely! You are our savior, theagentD! The only problem now is that if you shrink the texture below its native size, the aliasing returns. You can see this in the image below. Is there any workaround for this, or do we just have to avoid scaling our angled terrain?

The image is broken (HTTP 500 error) but that doesn’t matter. I did say that it didn’t work with zooming… ._.

The simplest solution is to use mipmaps and trilinear filtering (GL_LINEAR_MIPMAP_LINEAR) and to load the mipmaps manually so that every level has a 1 pixel transparent border. This does have a huge drawback: the border eats a larger fraction of each level, effectively making objects smaller the more you zoom out.

512 pixel wide mipmap: leftmost and rightmost pixels are transparent. 510/512 = 99.6% of original texture size.
128 pixel wide mipmap: leftmost and rightmost pixels are transparent. 126/128 = 98.4% of original texture size.
16 pixel wide mipmap: leftmost and rightmost pixels are transparent. 14/16 = 87.5% of original texture size.
4 pixel wide mipmap: leftmost and rightmost pixels are transparent. 2/4 = 50% of original texture size.
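The fractions in that list come straight out of one formula. A quick sketch (helper name is my own):

```java
// With a 1 px transparent border on each side, the usable part of a
// w-pixel-wide mip level is (w - 2) / w, which shrinks as w shrinks.
public class BorderFraction {
    static double usableFraction(int width) {
        return (width - 2) / (double) width;
    }

    public static void main(String[] args) {
        for (int w : new int[] {512, 128, 16, 4}) {
            System.out.printf("%4d px: %.1f%% usable%n", w, usableFraction(w) * 100);
        }
    }
}
```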

Well, you see the problem. To avoid it, just increase the width of the transparent border to a much larger value, wasting texture space but making the smaller, automatically generated mipmaps look as they should. With a 1 pixel border, the first mipmap will only have a 0.5 pixel transparent border (better than nothing, but still very clearly jagged). Increasing this to an 8 pixel border will make the first (4 pixels), second (2 pixels) and third (1 pixel) mipmap levels look right, allowing you to shrink to 1/8th size while keeping the antialiasing. Basically, an 8 pixel border = 1/8th size with antialiasing, but the border should be a power of two. This is the technique I recommend you use, as it is simple and works perfectly.
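To make that border arithmetic concrete, here is a tiny sketch (my own helper name) of how a border of a given width survives each automatically generated mipmap level:

```java
// Each mipmap level halves the border width. An 8 px border keeps at
// least 1 full px of transparency down to level 3 (1/8th size); below
// that the border drops under a pixel and the jaggies come back.
public class MipBorder {
    static double borderAtLevel(int borderPx, int level) {
        return borderPx / Math.pow(2, level);
    }

    public static void main(String[] args) {
        for (int level = 0; level <= 4; level++) {
            System.out.println("level " + level + ": " + borderAtLevel(8, level) + " px border");
        }
    }
}
```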

Of course, if you know me you'll know that I won't be satisfied without a perfect solution. It is possible to emulate GL_POLYGON_SMOOTH as it would behave if it were implemented, using a shader. Like I said, you can compute the coverage percentage in a fragment shader and set the alpha value to it, but there is one thing that complicates things immensely: the rasterization process. The fragments that reach the fragment shader are the ones that have their CENTER covered by the triangle, but pixels "outside" the triangle according to this rule might still be slightly covered (less than 50% of course, but this still gives very visible aliasing). We need something called conservative rasterization, which gives us all the fragments that are intersected or covered by the triangle. As far as I know there is no easy way to implement this in OpenGL, but if there is, someone PLEASE TELL ME!!! I've tried to emulate it using a geometry shader that runs on screen space coordinates, pushes all the edges out by sqrt(2)/2 (half the diagonal of a pixel) and then finds the intersections of these new lines. This works and is not too slow, but it has a huge problem: it distorts textures because the polygon size is increased. You'd have to manually extrapolate the new texture coordinates for each corner, and well, doing that in a geometry shader isn't the best idea. We're talking hundreds of mathematical operations. I've never even tried implementing it.
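For concreteness, here is a CPU-side sketch of that edge push-out step, not the geometry shader itself: each screen-space triangle edge is translated outward along its normal by sqrt(2)/2 and the offset lines are re-intersected to get the enlarged triangle. It assumes counter-clockwise winding, and the vertex layout and names are my own:

```java
// Sketch of conservative-rasterization-style triangle expansion.
// Vertices are packed as {x0,y0, x1,y1, x2,y2}, counter-clockwise.
public class ConservativeExpand {
    // Returns the 3 expanded vertices, same packed layout as the input.
    static double[] expand(double[] v, double r) {
        double[] out = new double[6];
        for (int i = 0; i < 3; i++) {
            int prev = (i + 2) % 3, next = (i + 1) % 3;
            // Offset lines of the edge ending at vertex i and the edge leaving it.
            double[] a = offsetLine(v, prev, i, r);   // {px, py, dx, dy}
            double[] b = offsetLine(v, i, next, r);
            // Intersect a.p + t*a.d = b.p + s*b.d for t.
            double cross = a[2] * b[3] - a[3] * b[2];
            double t = ((b[0] - a[0]) * b[3] - (b[1] - a[1]) * b[2]) / cross;
            out[2 * i]     = a[0] + t * a[2];
            out[2 * i + 1] = a[1] + t * a[3];
        }
        return out;
    }

    // Edge from vertex i to vertex j, pushed out along its outward normal.
    static double[] offsetLine(double[] v, int i, int j, double r) {
        double dx = v[2 * j] - v[2 * i], dy = v[2 * j + 1] - v[2 * i + 1];
        double len = Math.hypot(dx, dy);
        double nx = dy / len, ny = -dx / len; // outward normal for CCW winding
        return new double[] { v[2 * i] + nx * r, v[2 * i + 1] + ny * r, dx, dy };
    }

    public static void main(String[] args) {
        double r = Math.sqrt(2) / 2; // half the diagonal of a pixel
        double[] tri = {0, 0, 4, 0, 0, 4}; // CCW right triangle
        double[] e = expand(tri, r);
        System.out.printf("v0 = (%.4f, %.4f)%n", e[0], e[1]);
    }
}
```

Note that this only produces the new positions; as the post says, the texture coordinates would still have to be extrapolated for each new corner, which is where the real cost lies.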
The stupid thing is that this is the exact same thing that OpenGL does for you if you have MSAA enabled. The center is used for value interpolation (unless centroid interpolation is enabled) even though the triangle might not cover the center of the pixel, meaning that the texture coordinates are extrapolated. Why is there no glEnable(GL_CONSERVATIVE_RASTERIZATION) or glRasterizationMode(GL_CONSERVATIVE)?! T______T I have so many things I want to do that I haven't been able to do with textures because of this stupid limitation!
Even with a perfectly emulated GL_POLYGON_SMOOTH, you'd still end up with seams between tightly placed objects due to rounding errors and blending. This doesn't really matter though; just avoid them by placing static objects a millimeter closer to each other. For dynamic objects (a good example is stacking boxes with physics), it isn't noticeable at all.