Time to resurrect this thread once again! I’ve fixed the seam problem once and for all!
The cause (skip this if you don’t care)
There were actually two different problems causing this. The implementation above had a very minor problem with “seams” when scaling veeeery slowly, but it was very difficult to notice and only visible with subpixel scaling or translation. The other problem was triggered by using mipmaps: the generated texture coordinates wreaked havoc on OpenGL’s built-in LOD selection (which mip level to use).
The universal seam problem
This was caused by me expecting floating-point math to make sense. Rounding problems suck. There was a veeeeeery small chance that, in extreme edge cases, the texture filtering on the tile index lookup texture returned the index for one tile while the local texture coordinate generation calculated texture coordinates for a different tile.
http://img10.imageshack.us/img10/702/badytexturecoord.png
Notice how the white at the top of the tile also appears at the bottom seam of the tile. The tile index was fetched from the center tile, but the local texture coordinates were calculated for the tile below.
I solved this by simply storing the X and Y coordinates of each tile in the tile index texture too. That way the local texture coordinates are always calculated for the exact tile index that was fetched. The tile index texture is now a GL_RGB16 texture: tile indices are stored in the red channel, and the map X and Y are stored in the green and blue channels.
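For the curious, here’s a rough sketch of how the (tileIndex, x, y) triplets could be packed and uploaded on the Java side. The names (mapWidth, mapHeight, tileIndices, tileTextureHandle) are made up for illustration and LWJGL-style static imports are assumed; the real code is in the pastebin links below.
//Pack (tileIndex, x, y) per tile as unsigned shorts, matching the * 65535.0 unpacking in the shader
ShortBuffer data = BufferUtils.createShortBuffer(mapWidth * mapHeight * 3);
for(int y = 0; y < mapHeight; y++){
    for(int x = 0; x < mapWidth; x++){
        data.put((short)tileIndices[x][y]); //red: tile index (layer in the tileset array texture)
        data.put((short)x); //green: the tile's map X coordinate
        data.put((short)y); //blue: the tile's map Y coordinate
    }
}
data.flip();
glBindTexture(GL_TEXTURE_2D, tileTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, mapWidth, mapHeight, 0, GL_RGB, GL_UNSIGNED_SHORT, data);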
The mipmap seam problem
This was a lot harder to track down, but after working on a per-pixel distortion shader which also used dependent texture reads, I realized what the problem was. It’s due to how OpenGL decides which mip level to sample from. Basically, OpenGL calculates the mip level by checking how the texture coordinates change over a 2x2 pixel area. This tells it how quickly the texture coordinates change across the screen, and it can then pick a mip level based on that rate and the texture size. It also allows anisotropic filtering to work. However, it’s possible to confuse OpenGL into picking the wrong LOD value, and this is exactly what’s happening with my generated texture coordinates.
http://img341.imageshack.us/img341/2884/texturecoords.png
These are the local texture coordinates of each tile. The problem is the edges, because the texture coordinates aren’t continuous there. Since the values are checked over a 2x2 area, the rate of change might be calculated across 2 or even 4 different tiles, each having vastly different values (one close to 1, one close to 0). The result is that the shader samples from a very small mip level at tile edges, usually the smallest one. I solved this by calculating the LOD value on the CPU (very easy), sending it to the shader as a uniform, and sampling the texture with texture2DArrayLod() using the precalculated LOD value.
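For the curious: the implicit LOD is estimated from those screen-space derivatives, roughly as lod ≈ log2(max(|d(uv)/dx|, |d(uv)/dy|) * textureSize). Inside a tile the local coordinates change smoothly, but at a tile edge they jump by nearly a whole tile between neighboring pixels, so the estimated derivative (and with it the LOD) explodes and the smallest mip level gets picked.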
Code
First, here’s the new fragment shader.
#extension GL_EXT_texture_array : enable
uniform vec2 mapSize;
uniform sampler2D tileTexture;
uniform sampler2DArray tilesetTexture;
uniform float lod;
void main()
{
    vec2 texCoords = gl_TexCoord[0].st;
    //tileResult contains (tileIndex, x, y). Multiply by 65535 to convert the normalized values back to shorts.
    vec3 tileResult = texture2D(tileTexture, texCoords / mapSize).rgb * 65535.0;
    //Extract the tile index and the tile's map position
    float tile = tileResult.r;
    vec2 tilePosition = tileResult.gb;
    //Local texture coordinates are the map coordinates minus the tile's position
    gl_FragColor = texture2DArrayLod(tilesetTexture, vec3(texCoords - tilePosition, tile), lod);
}
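For context, the per-frame setup this shader expects on the Java side looks roughly like this (handle and uniform-location names are made up for illustration, LWJGL-style static imports assumed):
glUseProgram(tileProgram);
glUniform2f(mapSizeLocation, mapWidth, mapHeight);
glUniform1i(tileTextureLocation, 0); //tile index texture on unit 0
glUniform1i(tilesetTextureLocation, 1); //tileset array texture on unit 1
glUniform1f(lodLocation, lod); //precalculated LOD, see below
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, tilesetTextureHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tileTextureHandle);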
The important Java code changes include:
- The tile index texture is now a GL_RGB16 texture which contains (tileIndex, x, y) per tile. The level generation and single-tile updating code has been updated accordingly.
- Mipmaps have been generated and enabled (the code was already there, just commented out); a rough sketch of the generation call is at the end of this section.
- The texture LOD is calculated on the Java side and passed to the tile renderer. A GLSL uniform for it is updated each frame. The LOD is calculated with the following code:
//size is roughly how many tileset texels end up in one screen pixel at the current zoom
double size = Math.min(TILE_WIDTH, TILE_HEIGHT) / currentScale;
//Standard log2-based mip level selection
float lod = (float)(Math.log(size) / Math.log(2));
renderer.render(lod);
The renderer then supplies this to the shader before rendering each frame:
glUniform1f(lodLocation, lod);
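As for the mipmap bullet above, (re)generating mipmaps for the tileset array texture can look something like this; glGenerateMipmapEXT comes from EXT_framebuffer_object, and the actual code in the pastebin may do it differently:
glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, tilesetTextureHandle);
glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmapEXT(GL_TEXTURE_2D_ARRAY_EXT);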
Since MediaFire no longer likes me, I’ve uploaded the code to JGO’s pastebin:
Test Java program
Vertex shader
Fragment shader
Test tileset from Chrono Trigger
Performance
Mostly unchanged: 0.5 to 1.0 milliseconds for a fullscreen quad on mid-range hardware (1000 - 2000 FPS). The highest I’ve seen was just under 3 milliseconds (370 FPS) for extremely zoomed-out views (over 1 million tiles visible). Enabling mipmaps slightly improves performance for zoomed-out views since smaller textures are used.
EDIT: I enabled SLI on my GTX 295 for the test program and ran it at 1920x1080 in fullscreen. On the default zoom level I got 3000 FPS and a really scary whistling sound from my graphics card… High FPS = scary. o_O
Compatibility
I looked up the texture array extension, and it’s supported by OpenGL 2 level AMD cards, but not by OpenGL 2 level Nvidia cards. In other words, this program requires a DX9 AMD card or a DX10 Nvidia card = an AMD HD2000+ series card or an Nvidia 8000+ series card. It’s possible to ditch the texture array, but it requires some pretty big changes in the shader to pick out tiles directly from a normal 2D texture, and it breaks mipmap and bilinear interpolation support since you’ll get bleeding between tiles. However, that would lower the requirement to any card that supports shaders.
Congratulations! You just read an insanely long post!
TL;DR: Seams completely eliminated and mipmaps are now supported!