OpenGL: Specifying texture coordinates for 2D scrolling tile maps

I’m using OpenGL to render a 2D tile map, and so far everything’s working quite well. Each tile is specified in vertices as a pair of triangles (6 points), which have been loaded into a buffer and drawn using glDrawArrays in mode GL_TRIANGLES. These vertices are specified in relation to how far along the viewport they should be rendered (e.g. to render a sprite in the centre of the viewport, the x component of the vertex would be 0.5).
Likewise, I have an almost identical series of texture coordinates per sprite (two triangles), except these are specified in relation to how far along the texture they’re positioned. So far so good.

In the case of the vertices it’s fine that these coordinates never change, as I can just pass the camera scroll offsets into the vertex shader and move them along there; but the texture coordinates can change entirely depending on which tiles are visible in the viewport.

How would you normally specify texture coordinates for scrolling tile maps? Is it common for the buffer to be re-initialised with the new texture coordinates as new tiles are revealed, or is there a cleaner way of achieving this?

This might be a vague question so I can clarify things if necessary. :slight_smile: Any help appreciated.

Clustering some number of tiles, say 32x32 tiles, into a region/group and loading and displaying each group once it becomes visible (or is about to become visible) is a totally reasonable approach. For seamless/continuous scrolling you need to make a tradeoff somewhere between two extremes:

  • loading and displaying the whole map at once
  • loading and displaying individual tiles

You want neither, so you could load and display regions/groups of tiles instead. The good thing is that you only need at most 4 such regions/groups to be in memory and displayed (given the projection of such a region is larger than your viewport). Once a region gets sufficiently far out of view you can unload it.

Thanks for the reply. :slight_smile: Chunk loading does sound like the best way to do this. I’d imagine I could even reuse the vertex buffer as the values will be the same from one chunk to the next, and the camera offset that’s being passed into the vertex shader would be the only value that changes.
So chunks could ultimately just be used to keep track of texture coordinates I’d imagine.

On a slightly unrelated note, when I come to render the foreground objects, am I correct in thinking that each object needs to be rendered with its own draw call? For static objects I’d imagine this wouldn’t be necessary and I can batch these together into a single call, but for movable objects this sounds like it might become tricky, as each object has its own position that can change potentially every update and invalidate its position in the vertex buffer.

You can use hardware instancing for the same geometry and just use an instanced buffer object storing only the vec2 positions of the objects.
So, let’s say, you have a thousand little spheres moving around. Possibly all with a different color and of course different positions.
Then you can have:

  • 1 non-instanced buffer object for the geometry of the sphere (or rather a circle in your 2D case)
  • 1 instanced buffer object holding 1000 x vec2 values for the 2D position of each circle
  • 1 instanced buffer object holding 1000 x vec3 values for the RGB color of each circle

Then you can handle all of this using a single draw call, and when the positions of the circles change you can just update the positions buffer object.

Reading up on hardware instancing now and it’s exactly what I’m after. :slight_smile: Thanks for your help!