Mapping Positions Onto The Screen

Okay, so I’m making a 2D pixel-based game. It uses authentic pixelation; that is, I don’t try to artificially make things look pixelated by drawing “blocky” art. Instead, I draw everything to a small image (320 x 180) and scale it up to the largest power of 2 that fits inside the user’s screen (to preserve square pixels). The problem I’m having is with mapping the positions of objects to the 2D graphics. I use Graphics.drawImage() to draw the graphics to the screen like so:

x, y, width, and height are all floats:

g.drawImage(image, (int) (x - width / 2), (int) (y - height / 2), (int) width, (int) height, null);

X and Y are the positions of the object, with a camera offset subtracted from them. The camera offset is also represented by floats for X and Y. My question is: is this the best way to map floating-point numbers to integers for rendering? I frequently get cases where objects seem to be out of sync with each other. For example, when the camera moves past two stationary objects, they move across the screen but sort of “jitter” relative to each other. I don’t know if it’s a problem with the way I’m rounding them off to integers or what.
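
For reference, the buffer-and-scale setup looks roughly like this. It’s just a sketch, not my exact code; gameBuffer, screenPixelWidth, screenPixelHeight and screenGraphics are placeholder names, and the buffer is a java.awt.image.BufferedImage:

static final int GAME_WIDTH = 320;
static final int GAME_HEIGHT = 180;

// Everything is drawn into this small offscreen image first
BufferedImage gameBuffer = new BufferedImage(GAME_WIDTH, GAME_HEIGHT, BufferedImage.TYPE_INT_RGB);

// Largest power-of-2 scale factor that still fits the screen,
// so every game pixel becomes a square block of screen pixels
int scale = 1;
while (GAME_WIDTH * scale * 2 <= screenPixelWidth && GAME_HEIGHT * scale * 2 <= screenPixelHeight) {
    scale *= 2;
}

// ... the frame is drawn into gameBuffer here ...

// Then the whole buffer is blown up onto the screen in one call
screenGraphics.drawImage(gameBuffer, 0, 0, GAME_WIDTH * scale, GAME_HEIGHT * scale, null);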

Any advice?

And the object positions are floats as well? In that case, this is basically what happens:

Step 1:

Obj1@10.3
Obj2@11.8
Cam@0

-> (int) (10.3-0) = 10, (int) (11.8-0) = 11, delta=1

Step 2:

Obj1@10.3
Obj2@11.8
Cam@0.5

-> (int) (10.3-0.5) = 9, (int) (11.8-0.5) = 11, delta=2

Step 3:

Obj1@10.3
Obj2@11.8
Cam@1

-> (int) (10.3-1) = 9, (int) (11.8-1) = 10, delta=1

As you can see, the gap between Obj1 and Obj2 jumps from 1 pixel to 2 pixels and back to 1, which causes the jitter. Subtracting (int) camPos instead of the float value should fix this.
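
If you want to see it in numbers, here’s a tiny throwaway demo of those three steps (JitterDemo is just a made-up class name; the values are the ones from above):

public class JitterDemo {
    public static void main(String[] args) {
        float obj1 = 10.3f, obj2 = 11.8f;
        float[] cams = {0f, 0.5f, 1f};
        for (float cam : cams) {
            // Subtracting the float camera position: the gap jitters 1 -> 2 -> 1
            int floatDelta = (int) (obj2 - cam) - (int) (obj1 - cam);
            // Subtracting (int) cam instead: the gap stays 1
            int intDelta = (int) (obj2 - (int) cam) - (int) (obj1 - (int) cam);
            System.out.println("cam=" + cam + "  float camera delta=" + floatDelta + "  int camera delta=" + intDelta);
        }
    }
}

Run it and you’ll see the float-camera delta go 1, 2, 1 while the int-camera delta stays at 1.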

Thanks a bunch for the swift reply. I actually tried that method of casting the camera position to integers before performing the calculation, but for some reason the jitter remains. This is what I’m currently trying:


// x and y are the position of the object, they are FLOATS
// cameraX and cameraY are the camera's position, also FLOATS
// screenWidth is the width of the drawing area, this is an INTEGER
// screenHeight is the height of the drawing area, this is an INTEGER

// Offset our position by the camera's position
// But also offset by half the screenWidth,
// so that the camera's position represents the center of the camera view
float offsetPositionX = x - (int) cameraX + (screenWidth / 2);
float offsetPositionY = y - (int) cameraY + (screenHeight / 2);

g.drawImage(image, (int) (offsetPositionX - image.getWidth() / 2), (int) (offsetPositionY - image.getHeight() / 2), image.getWidth(), image.getHeight(), null);

This is the same code on all objects. I’m still not exactly sure what’s wrong.

I would take two objects as an example, and print out each of these values and the final result. That should help to find the issue.
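
Something along these lines, reusing the names from your snippet (label is just whatever string identifies the object):

float offsetPositionX = x - (int) cameraX + (screenWidth / 2);
int drawX = (int) (offsetPositionX - image.getWidth() / 2);
System.out.println(label + ": x=" + x + " cameraX=" + cameraX + " offsetPositionX=" + offsetPositionX + " drawX=" + drawX);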

Thanks a lot for your help! I was about to try your second suggestion when I noticed that I had forgotten that I have two different classes for my game objects! When I implemented your first suggestion, I had only applied it to one of the classes, not both. I went ahead and fixed it in the other one, and now it works perfectly.

(Silly mistake on my part. That’s what I get for trying to code at 2:00 AM :P)

Thanks again! :)