Hi all!
Bit of a Java newbie here, so please forgive my rather basic questions.
I’m writing a basic game engine in Java, and am now trying to optimise the way I handle images (best to start early rather than get too dug in, I think).
Currently, before the game loop starts I load in an image containing all of the tiles I want to use and store it in a BufferedImage object. The image is composed of 32x32 tiles.
Then, at render time, I make a call to a function that draws the level tiles. At present, it’s just all floor tiles - but I wanted to see what performance I’d be getting.
When drawing the tiles, I cycle through the tile map - a simple 2D array of objects - and work out whether each tile will be in the viewport. If so, I render it using the getSubimage(…) method of the BufferedImage class.
Here’s the level tile rendering block:
for (int y = 0; y < HEIGHT_IN_TILES; y++)
{
    for (int x = 0; x < WIDTH_IN_TILES; x++)
    {
        // calculate the position of this tile, and its centre point
        double tileX = scrollX + (x * TILE_SIZE);
        double tileY = scrollY + (y * TILE_SIZE);
        double halfTile = TILE_SIZE / 2.0;
        double tileCX = (tileX + halfTile) * GameTest1.GAME_SCALE;
        double tileCY = (tileY + halfTile) * GameTest1.GAME_SCALE;

        // check whether the tile's centre falls within the (scaled) viewport,
        // padded by half a tile on each side
        boolean canRender =
            tileCX >= -(halfTile * GameTest1.GAME_SCALE) &&
            tileCY >= -(halfTile * GameTest1.GAME_SCALE) &&
            tileCX < (GameTest1.GAME_WIDTH + halfTile) * GameTest1.GAME_SCALE &&
            tileCY < (GameTest1.GAME_HEIGHT + halfTile) * GameTest1.GAME_SCALE;

        if (canRender)
        {
            tilesRendered++;
            g.drawImage(blitTile(art.tilesetMain, tileMap[y][x].imageMapIndex),
                        (int)tileX, (int)tileY, null);
        }
    }
}
The call made when drawing the actual image references another method, ‘blitTile(…)’, which is as follows:
public static BufferedImage blitTile(BufferedImage bi, int imageIndex)
{
    int cx0, cy0, cx1, cy1;
    int offX, offY;

    // determine the tile's column/row offset within the tileset
    // (only floor tiles exist at the moment, so everything maps to 0,0)
    if (imageIndex == 0) { offX = 0; offY = 0; }
    else { offX = offY = 0; }

    // convert the offset to pixel coordinates
    cx0 = Level.TILE_SIZE * offX;
    cy0 = Level.TILE_SIZE * offY;
    cx1 = cx0 + Level.TILE_SIZE;
    cy1 = cy0 + Level.TILE_SIZE;

    try
    {
        // getSubimage already returns a BufferedImage, so no cast needed
        return bi.getSubimage(cx0, cy0, cx1 - cx0, cy1 - cy0);
    }
    catch (Exception ex)
    {
        System.out.println("Error: Couldn't blit tile");
    }
    return null;
}
This results in the relevant tiles being drawn on-screen, as per the following image:
In a 1024 x 768 window, roughly 221 tiles are drawn each frame
Incidentally, I know this isn’t proper ‘blitting’, per se, but that leads me on to my question.
I’ve read reports about people using methods to copy pixels to/from a (static?) resource at render time - boosting draw performance. At the moment, my code seems to run around the 75/76 FPS mark (with a BufferStrategy in place), but I’ve heard of people managing to take similar projects up into the high hundreds, if not over a thousand FPS by adjusting the way small graphics like this are drawn.
I think it’s got something to do with storing a 2D array of pixel data for each image tile, and then creating a new image from that data at runtime? Or am I way off the mark here?
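To make my (possibly wrong) understanding concrete: the kind of thing I'm imagining is grabbing a tile's pixels into an int array once, up front, and then writing them straight into a target image at draw time. All the names below are made up by me, and the tiny red-pixel tile is just so there's something to copy:

```java
import java.awt.image.BufferedImage;

public class PixelBlitSketch
{
    static final int TILE_SIZE = 32;

    // write a tile's cached pixels directly into a target image
    public static void blitPixels(int[] tilePixels, BufferedImage target, int destX, int destY)
    {
        target.setRGB(destX, destY, TILE_SIZE, TILE_SIZE, tilePixels, 0, TILE_SIZE);
    }

    public static void main(String[] args)
    {
        // stand-in for one 32x32 tile: grab its pixels once, at load time
        BufferedImage tile = new BufferedImage(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_ARGB);
        tile.setRGB(0, 0, 0xFFFF0000); // one opaque red pixel in the corner
        int[] cached = tile.getRGB(0, 0, TILE_SIZE, TILE_SIZE, null, 0, TILE_SIZE);

        // at "render time", copy the cached pixels into a back-buffer image
        BufferedImage target = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        blitPixels(cached, target, 32, 0);
        System.out.println(Integer.toHexString(target.getRGB(32, 0))); // ffff0000
    }
}
```

Is that roughly the technique, or does the real speed-up come from somewhere else entirely?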
Are there resources online where you can learn these methods? I’ve tried searching around, but this seems to be a very specific type of drawing method.
I’m not after a complete code answer - that would be cheating, and obviously wouldn’t help me learn anything at all. If I could be pointed in the right direction, or told what methodology I should consider, I would be most grateful.
Thanks for your time!
EDIT: Added screenshot of running project / corrected some typos.