Hello everyone. I'll start by saying I'm most definitely NOT an expert at Java; I've been a lurker around here for some time and still can't understand half of the amazing solutions people post. Be forewarned that I may not understand any responses to this thread either!
That being said, recently I've been trying to create a block-based 2D game with random terrain generation.
My levels are created using a 2D array of my own little Block class; the only thing really stored in each Block is its type, as an enum. My main problem arose when I filled out the "ground" underneath the Perlin-noise-generated horizon by filling it entirely with blocks (for now). I'm using a "zoom scale" of 2, so 1 pixel = 2x2 pixels in reality, if that makes any difference.
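In code, the setup is roughly this (simplified for the post; the type names are made up):

    // Sketch of my Block setup; each Block just knows its type.
    public class Block {
        public enum Type { AIR, DIRT, GRASS, STONE } // example names, not my real ones

        public final Type type;

        public Block(Type type) {
            this.type = type;
        }
    }

    // The level itself is just a big 2D array of these:
    // Block[][] level = new Block[levelWidth][levelHeight];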
I wasn't using any third-party libraries, and I found that my CPU usage was hitting 25% and above, which I believe really means I'm using 100% of one core. So I poked around on here and discovered libGDX, as I assumed the reason my CPU was skyrocketing was that it was drawing around 3417 blocks at most per tick, and that I should probably be using the GPU to do that instead. FYI, I'm not iterating through the massive level array, I'm just iterating through what is on screen based on the camera's viewport.
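In the libGDX version, that culling boils down to something like this (a sketch using an OrthographicCamera; the real code is messier):

    // Inside render(): work out which slice of the level array is visible.
    // blockSize is the on-screen tile size: 8px tiles at my 2x zoom scale = 16.
    int blockSize = 16;
    int startX = Math.max(0, (int) ((camera.position.x - camera.viewportWidth / 2f) / blockSize));
    int endX   = Math.min(level.length, (int) ((camera.position.x + camera.viewportWidth / 2f) / blockSize) + 1);
    int startY = Math.max(0, (int) ((camera.position.y - camera.viewportHeight / 2f) / blockSize));
    int endY   = Math.min(level[0].length, (int) ((camera.position.y + camera.viewportHeight / 2f) / blockSize) + 1);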
After painstakingly (well, not really, libGDX is quite nice) converting my little game to libGDX, I noticed the game ran a WHOLE LOT smoother, which I was very happy with, as I hate that little stutter that windowed Java games seem to have. However, the CPU usage hasn't changed in the slightest, and I've been looking around for ages for possible reasons/solutions and have drawn a blank every time.
I’m COMPLETELY new to anything OpenGL by the way.
Here are the details of how I'm doing things. PLEASE feel free to tell me better ways to do things if there are any, or at least point me in the right direction. I DO research as much as I can, though.
- I’m attempting to use a sprite sheet of 8x8 tiles loaded as a Texture
- There are currently only four tiles in that Texture
- I'm using the static TextureRegion.split() method to turn it into a 2D TextureRegion array
- I'm storing each TextureRegion from that array in a HashMap (loosely based on Kev Glass's old sprite store tutorial) so I can simply look up whatever TextureRegion I want by its key when I need it (see the loading sketch after this list)
- I AM concatenating a string for each key, but that only happens once, on the initial load of the sprite sheet Texture
- That means the key IS a String; I don't know if that affects performance
- Between batch.begin() and batch.end(), the render method iterates through the section of the level array shown on screen, calling batch.draw() for each Block and getting its TextureRegion from the HashMap based on the Block's type (boiled down in the render sketch after this list). This draw loop appears to be what's eating up all my CPU.
- I did run VisualVM on it, and about 94.9% of my CPU time was spent in org.lwjgl.opengl.GL11.nglDrawElements[native], if that helps
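Here's roughly what the sprite-store loading looks like (simplified; the file name and key format are just examples):

    import java.util.HashMap;
    import java.util.Map;

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.TextureRegion;

    // On initial load: split the sheet into 8x8 regions and store each by key.
    Texture sheet = new Texture(Gdx.files.internal("tiles.png"));
    TextureRegion[][] split = TextureRegion.split(sheet, 8, 8);
    Map<String, TextureRegion> sprites = new HashMap<String, TextureRegion>();
    for (int row = 0; row < split.length; row++) {
        for (int col = 0; col < split[row].length; col++) {
            // the string concatenation only ever happens here, once at load time
            sprites.put("tile_" + row + "_" + col, split[row][col]);
        }
    }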
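And the render loop itself, boiled down (keyFor() is a made-up stand-in for however my code maps a Block.Type to its HashMap key, and startX/endX/startY/endY are the culling bounds from the earlier sketch):

    // This batch.draw() loop is where VisualVM says all the time goes.
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    for (int x = startX; x < endX; x++) {
        for (int y = startY; y < endY; y++) {
            Block block = level[x][y];
            if (block == null || block.type == Block.Type.AIR) continue; // nothing to draw
            TextureRegion region = sprites.get(keyFor(block.type)); // keyFor() is hypothetical
            batch.draw(region, x * blockSize, y * blockSize, blockSize, blockSize);
        }
    }
    batch.end();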
Is there something I'm doing wrong? I ran Terraria and checked its CPU usage, and with a similar number of blocks on screen it was sitting around 1-4% CPU. I know there's something called a TextureAtlas, but I'm not sure if that would speed anything up, as isn't what I'm doing essentially the same thing? Is what I'm doing somehow making my Texture unmanaged? I called the .isManaged() method and it returned true every time.
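For reference, this is my (possibly wrong) understanding of TextureAtlas usage, which looks like the same region-lookup idea as my HashMap:

    import com.badlogic.gdx.graphics.g2d.TextureAtlas;

    // "tiles.atlas" would come from libGDX's TexturePacker; the region
    // name ("grass" here) is whatever the pack file defines.
    TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("tiles.atlas"));
    TextureRegion grass = atlas.findRegion("grass");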
Another thing that happens: if there are only a few Blocks on the screen, sometimes the CPU will drop to around 1-4%, which is where I'd kinda expect it to be, but then if I move the camera like ONE ROW across, the CPU jumps straight back to 25%. I don't know if this helps with identifying the problem, but surely a couple more rows/columns of blocks shouldn't require that much CPU. I'm very confused and exhausted.
If anyone needs to see the code, I’ll post it up.
If you read this far, I love you.