Issues with drawing 2D generated terrain efficiently

Hello everyone. I’ll start by saying I’m most definitely NOT an expert at Java; I’ve been a lurker around here for some time and still can’t understand half of everyone’s amazing solutions to problems. Be forewarned that I may not understand the responses to this thread either :wink:

That being said, recently I’ve been trying to create a block-based 2D game with random terrain generation.
My levels are built from a 2D array of my own little Block class; the only thing each Block really stores is its type, as an enum. My main problem arose when I filled out the “ground” underneath the Perlin-noise-generated horizon by filling it entirely with blocks (for now). I’m using a “zoom scale” of 2, so one game pixel is drawn as 2x2 screen pixels, if that makes any difference.
I wasn’t using any third-party libraries, and I found that my CPU usage was hitting 25% and above, which I believe really means I was using 100% of one core. So I poked around on here and discovered libGDX, since I assumed my CPU was skyrocketing because I was drawing around 3417 blocks at most per tick and should probably be using the GPU for that instead. FYI, I’m not iterating through the massive level array, just through what’s on screen based on the camera’s viewport, roughly like the sketch below.
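To be concrete, here’s a simplified version of what I mean (the enum values and names like camX and BLOCK_SIZE are placeholders, not my exact code):

[code]
// The whole Block class is basically just a type tag:
public class Block {
    public enum Type { AIR, DIRT, STONE, GRASS }
    public final Type type;
    public Block(Type type) { this.type = type; }
}

// The level is a Block[width][height], and the draw loop only walks
// the slice the camera can see:
int firstVisibleX = Math.max(0, (int) ((camX - screenWidth / 2f) / BLOCK_SIZE));
int lastVisibleX  = Math.min(level.length - 1,
        (int) ((camX + screenWidth / 2f) / BLOCK_SIZE) + 1);
// ...and the same for firstVisibleY/lastVisibleY using screenHeight.
[/code]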

After painstakingly (well, not really, libGDX is quite nice) converting my little game to libGDX, I noticed the game ran a WHOLE LOT smoother, which I was very happy with, as I hate that little stutter that windowed Java games seem to have. However, the CPU usage hasn’t changed in the slightest, and I’ve been looking around for ages for possible reasons/solutions and have drawn a blank every time.
I’m COMPLETELY new to anything OpenGL by the way.
Here are the details of how I’m doing things. PLEASE feel free to tell me better ways to do things if there are any, or at least point me in the right direction. I DO research as much as I can, though.

  • I’m using a sprite sheet of 8x8 tiles loaded as a Texture
  • There are currently only four tiles in that Texture
  • I’m using the TextureRegion.split() method to turn it into a 2D TextureRegion array
  • I’m storing each TextureRegion from that array in a HashMap (loosely based on Kev Glass’s old sprite store tutorial) so I can simply look up whatever TextureRegion I want by key when I need it
  • I AM concatenating a string for the key, but that only happens on the initial load of the sprite sheet Texture
  • That means the key IS a String; I don’t know if that affects performance
  • Between batch.begin() and batch.end(), the render method iterates through the section of the level array shown on screen, calling batch.draw() for each Block and getting the TextureRegion from the HashMap based on the Block’s type (see the sketch after this list). These draw calls appear to be what’s eating up all my CPU.
  • I did run VisualVM on it, and about 94.9% of my CPU time was spent in org.lwjgl.opengl.GL11.nglDrawElements[native], if that helps
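Here’s a stripped-down sketch of what that load/render code looks like (keyFor(), BLOCK_SIZE and the visible-range bounds are simplified placeholders, not my exact code):

[code]
// Load once: split the sheet into 8x8 regions and key each one by name.
Texture sheet = new Texture(Gdx.files.internal("tiles.png"));
TextureRegion[][] grid = TextureRegion.split(sheet, 8, 8);
Map<String, TextureRegion> regions = new HashMap<String, TextureRegion>();
regions.put("dirt",  grid[0][0]);
regions.put("stone", grid[0][1]);
// ...one put() per tile; the string concatenation happens only here.

// Every frame: draw just the visible slice of the level.
batch.setProjectionMatrix(camera.combined);
batch.begin();
for (int x = firstVisibleX; x <= lastVisibleX; x++) {
    for (int y = firstVisibleY; y <= lastVisibleY; y++) {
        Block b = level[x][y];
        if (b != null && b.type != Block.Type.AIR) {
            // keyFor(...) stands in for however the key string is built
            batch.draw(regions.get(keyFor(b.type)), x * BLOCK_SIZE, y * BLOCK_SIZE);
        }
    }
}
batch.end();
[/code]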

Is there something I’m doing wrong? I ran Terraria and checked its CPU usage, and with a similar number of blocks on screen it was sitting around 1-4% CPU. I know there’s something called a TextureAtlas; I’m not sure if that would speed anything up, since isn’t what I’m doing essentially the same thing? Is what I’m doing somehow making my Texture unmanaged? I called the .isManaged() method and it returned true every time.

Another thing that happens: if there are only a few Blocks on the screen, sometimes the CPU will drop to around 1-4%, which is where I’d kinda expect it to be. But then, if I move the camera like ONE ROW across, the CPU jumps straight back to 25%. I don’t know if this helps with identifying the problem, but surely a couple more rows/columns of blocks doesn’t require that much CPU. I’m very confused and exhausted.
If anyone needs to see the code, I’ll post it up. :slight_smile:

If you read this far, I love you.

You don’t NEED OpenGL to use the GPU.
There’s a method for creating a hardware-accelerated image called createVolatileImage(); you can use that for your games. It makes them run a LOT smoother. I won’t go into a full explanation, but here’s a good tutorial on how to use it.
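The usual pattern looks something like this (just a sketch, using classes from java.awt and java.awt.image; the validate/contentsLost loop is the important part):

[code]
GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
VolatileImage buffer = gc.createCompatibleVolatileImage(width, height);

do {
    // The image lives in video memory and can be lost (alt-tab, mode
    // switch), so validate it every frame and recreate it if necessary.
    if (buffer.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
        buffer = gc.createCompatibleVolatileImage(width, height);
    }
    Graphics2D g = buffer.createGraphics();
    // ...draw your frame into g here...
    g.dispose();
    // ...then blit 'buffer' to the screen...
} while (buffer.contentsLost());
[/code]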

I did actually look into that and tried to use those in my non-libGDX version of the game; sorry, I forgot to mention that. I just couldn’t get them working properly, and nothing I did with them seemed to resolve the same CPU-maxing issue. That may largely have been because I don’t really know how to use them well and could easily have been doing something wrong, but I did follow the tutorials pretty much to the letter.

It may have had something to do with me using BufferedImages in the sprite sheet HashMap, drawing onto the graphics object of a BufferedImage, and then scaling that image up 2x and drawing it with a BufferStrategy. It was a bit messy, though, and could have been entirely wrong.
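From memory, the per-frame part looked roughly like this (details may be off):

[code]
// 'frame' is the BufferedImage the tiles were composed into each frame.
BufferStrategy bs = canvas.getBufferStrategy();
Graphics2D g = (Graphics2D) bs.getDrawGraphics();
// Blit the composed frame scaled 2x onto the actual window:
g.drawImage(frame, 0, 0, frame.getWidth() * 2, frame.getHeight() * 2, null);
g.dispose();
bs.show();
[/code]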

In any case, I quite like this libGDX thing and will probably continue to use it for the moment. The main drawback for me was letting go of the game loop I made that I was very proud of :frowning:

I don’t believe you’d have any issues using a BufferedImage. Would you mind sending me your source code for the regular Java version of the game? I’ll see what I can do to help.

I love you too :* :stuck_out_tongue:

Eh… oh, he would. Believe me. I was making Terraria in Java2D back in my first days with Java, too.
What evolved out of it was WorldOfCube in plain LWJGL, which ran much, much better.

Anyways, to your problem, Silver:
It seems like you’re probably not capping your framerate. What is your framerate? It’s totally normal to have 100% CPU usage if you keep running your game at the maximum FPS possible. Terraria probably caps its FPS at 60, which lets the CPU pause for whatever free time is left after each frame.
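The idea is just to sleep off the leftover frame time; a minimal sketch (update() and render() stand in for your own methods):

[code]
final long frameNanos = 1000000000L / 60; // target: 60 FPS
while (running) {
    long start = System.nanoTime();
    update();
    render();
    // Sleep away whatever is left of this frame's time budget.
    long sleepMillis = (frameNanos - (System.nanoTime() - start)) / 1000000L;
    if (sleepMillis > 0) {
        try { Thread.sleep(sleepMillis); } catch (InterruptedException e) { }
    }
}
[/code]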

LibGDX defaults to enabling vsync, afaik, so if your FPS isn’t being capped it’s probably a problem with your driver not honouring vsync. Try [icode]Gdx.graphics.setVSync(true);[/icode] anyway.
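If you’re on the LWJGL desktop backend, you can also request it in the launcher config (MyGame stands in for your ApplicationListener):

[code]
LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
cfg.vSyncEnabled = true; // ask the driver for vsync
new LwjglApplication(new MyGame(), cfg);
[/code]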

And: Give us the framerate at which your game runs.

Thanks for your interest in my issue, everyone :slight_smile: Sorry for the delay in responding; it was about 2:30am when I wrote the topic.
I was quite encouraged to hear that I may not be capping the FPS, because that was something I hadn’t thought of. However, before I even put that setVSync(true) in, I added a System.out.println(blahblahblah.getFramesPerSecond()) and, lo and behold, out came the almost-dreaded number 61 :frowning:
Meaning that’s probably not the problem.

It also wouldn’t entirely explain the huge jumps between 1-4% and 25%; the CPU usage is rarely anywhere between those two…

As requested, here’s my pre-libGDX code: https://dl.dropboxusercontent.com/u/1289341/Help/LightspeedOld.zip
(I simply zipped the Eclipse project folder, hope that won’t be a problem)
It may be a little messy, as I think I was playing around with VolatileImages at the time and may not have entirely changed everything back to BufferedImage.
I guess I don’t really mind posting the post-libGDX code as it stands, if anyone wants to have a look at it.
https://dl.dropboxusercontent.com/u/1289341/Help/Lightspeed.zip
Edit: I forgot to mention, WASD to move, R to make a new level.

Any and all help is most appreciated, thanks again :slight_smile:

I just downloaded and tried out the libGDX version of it and… I’ve got no issues. Watching my task manager, I was hovering around 15% total CPU usage, with about 6% being either Eclipse or your program.

Ah, I’ll be toying around with this and see if I can redo the game-loop a bit.

I guess in this case, the game LIGHTSPEED runs at 60FPS-SPEED!
Budum Tss.

So that 15% CPU was definitely not 100% of one of your CPU cores? Because 25% is 100% of one CPU core for me.
I don’t know much about accelerated graphics but I’m kind of expecting 1-4% CPU as the game currently stands.

Light travels at 60fps right? :wink:

It’s all about a balance between memory and CPU usage.

I got a voxel engine running at 60 FPS (capped) using only 3-5% CPU (on an Intel i5).
But it used up too much RAM, so I had to redesign parts of it. In the end it balanced out at around 200MB of RAM and 5-20% CPU.

But yeah, 25% is a lot.

My current project, Guardian II, uses 7-12% CPU and ~70MB RAM. It’s a 2D game, and uses brute-force collisions for attacks and hitboxes.

Try using jvisualvm to find the bottleneck.

He already did that :slight_smile:

I guess there’s nothing more for you to do :slight_smile: For us it runs perfectly, so don’t worry if it uses 100% of one of your cores. It runs perfectly at 60 fps, so anything beyond that is premature optimization in my eyes…

In other words: Just keep on going :slight_smile:

Nope. I went back and retried it! I got between 5-6% CPU usage, and being on a quad core, I’d like to assume that’s not 100% of one core. I was able to ramp it up by hammering the R key and regenerating a bunch of times in a row (which reminds me, I ended up with maps that had their bottom at cursor level quite a few times). The ramp-up hit a MAX of 17% but hovered around 13%.

I don’t see anything wrong on mine. Though I do have to say the process is holding onto almost 400 megs of RAM, and if you don’t have that much to spare (due to a ton of other processes vying for memory) you might end up with memory thrashing. I’m not sure how Java handles that, or whether the task manager would count it as part of the process’s CPU time.

I would be very interested to know how you managed to do that! :slight_smile:
Yeah, I’m aware of the flat level generation; it’s mostly a matter of changing the amplitude. I’ve yet to really play around with making different “biomes” and stuff; I’ve still got to wrap my head around 2D Perlin noise first, I think. Mostly cosine interpolating diagonally… shudder…
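(For what it’s worth, from what I’ve read you don’t actually interpolate diagonally: you interpolate along x on both edges of the cell, then along y between the two results. Something like this, if I’ve understood it right:)

[code]
// Cosine interpolation between a and b, with t in [0, 1].
static double cosineInterp(double a, double b, double t) {
    double f = (1 - Math.cos(t * Math.PI)) * 0.5;
    return a * (1 - f) + b * f;
}

// 2D: interpolate along x on the top and bottom edges, then along y.
static double cosineInterp2D(double v00, double v10,
                             double v01, double v11,
                             double tx, double ty) {
    double top    = cosineInterp(v00, v10, tx);
    double bottom = cosineInterp(v01, v11, tx);
    return cosineInterp(top, bottom, ty);
}
[/code]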