Need help with 2d camera

Hello, I’m trying to clamp the camera so it doesn’t go outside the tiled map, but the scaling applied when rendering the map is interfering, and I’m not sure what else to try.

Render code

    public void render(GameContainer gc, Graphics grphcs) throws SlickException {
        grphcs.scale(scale, scale);                      // scales everything drawn after this call
        map.render((int) -camera.x, (int) -camera.y, 0); // draw the map offset by the camera position
    }

Portion of the update code to clamp the camera

    float w = map.getTileWidth() * map.getWidth();

    if (camera.x > w) {
        camera.set(w, camera.y);
    }

I’m using Slick2D if that has any relevance. Thanks for reading.

    float w = map.getTileWidth() * map.getWidth();

    if (camera.x > w) {
        camera.set(w, camera.y);
    }

Try subtracting the screen width from the map width.
As it stands, the right edge of your map ends up at the left side of the camera, which means it won’t be drawn on-screen.

I have yet to solve this issue myself, but how do you obtain your scale value? I believe camera.x and camera.y need to change each time you change the scale. I’ll come back if I find a solution.

I said nothing about scale; the update is right, but you need something like this:


    float w = map.getTileWidth() * map.getWidth();

    if (camera.x + camera.width > w) {
        camera.set(w - camera.width, camera.y);
    }
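In full, clamping both edges on both axes might look like this sketch (plain Java, no Slick2D; the Camera here is a stand-in, and the x/y/width/height fields are an assumption about your camera class):

```java
// Minimal stand-in camera: position plus viewport size, in map pixels (assumed fields).
class Camera {
    float x, y, width, height;

    Camera(float x, float y, float width, float height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    // Keep the camera rectangle fully inside a mapWidth x mapHeight pixel map.
    void clampTo(float mapWidth, float mapHeight) {
        if (x + width > mapWidth)   x = mapWidth - width;
        if (y + height > mapHeight) y = mapHeight - height;
        if (x < 0) x = 0;  // check the low edge last so undersized maps pin to 0
        if (y < 0) y = 0;
    }
}
```

Call clampTo at the end of your update, after moving the camera.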

The method I have in my code would work if the map were drawn at its original size.
In Slick2D, however, to “zoom” a TiledMap you have to scale the entire graphics context; you can’t just supply the desired width/height to draw at.
That scaling throws the clamp calculation off, even if you try something like “clampvalue * scale”.
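One way to account for it: with grphcs.scale(scale, scale) applied, the window actually shows windowWidth / scale map pixels, so the clamp bound becomes mapPixelWidth - windowWidth / scale. A sketch (plain Java; a hypothetical helper, not a Slick2D API):

```java
// Hypothetical helper, not Slick2D API. With the graphics context scaled by
// `scale`, the window shows windowWidth / scale map pixels, so that effective
// width is what the clamp must use, not the raw window width.
class ScaledClamp {
    static float clampCameraX(float cameraX, float windowWidth, float scale,
                              float tileWidth, float mapTilesWide) {
        float mapPixelWidth = tileWidth * mapTilesWide; // unscaled map size
        float visibleWidth  = windowWidth / scale;      // map pixels on screen
        float max = mapPixelWidth - visibleWidth;       // largest legal camera.x
        if (cameraX > max) cameraX = max;
        if (cameraX < 0)   cameraX = 0;
        return cameraX;
    }
}
```

The same idea applies to the y axis with windowHeight / scale.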

I got it to work ;D! Visually drawing it out on paper really helped.

This may not be the best way to do it but for now it works, I’ll continue to try and improve it.

Here is how I was able to lock the camera in the x direction:

    current_Number_Of_Tiles_On_Screen = windowWidth / (tileWidth * scale);
    tiles_Off_Screen = mapNumberOfHorizontalTiles - current_Number_Of_Tiles_On_Screen;
    gamePixels = tiles_Off_Screen * (tileWidth * scale);
    number_Of_Pixels_Extra = gamePixels / scale;

    if (input.isKeyDown(Input.KEY_A)) {
        if (mapX < 0) {
            mapX += mapMovementSpeed;
        }
    }
    if (input.isKeyDown(Input.KEY_D)) {
        if (mapX > -number_Of_Pixels_Extra) {
            mapX -= mapMovementSpeed;
        }
    }
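If it helps, those four lines algebraically reduce to mapPixelWidth - windowWidth / scale. A quick check with hypothetical numbers (plain Java, just to verify the algebra):

```java
// Hypothetical numbers, only to check that the four-step calculation above
// equals the one-line form mapPixelWidth - windowWidth / scale.
class ExtraPixelsCheck {
    public static void main(String[] args) {
        float windowWidth = 800f, tileWidth = 32f, scale = 2f, mapTiles = 25f;

        // The four steps from the post:
        float tilesOnScreen  = windowWidth / (tileWidth * scale);
        float tilesOffScreen = mapTiles - tilesOnScreen;
        float gamePixels     = tilesOffScreen * (tileWidth * scale);
        float pixelsExtra    = gamePixels / scale;

        // Equivalent one-liner:
        float oneLine = mapTiles * tileWidth - windowWidth / scale;

        System.out.println(pixelsExtra == oneLine); // prints true
    }
}
```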

I’m going to have to try that.
I have another question on a similar/related topic to this.
Is scaling the whole graphics context better for performance, or would it be faster to “scale” it yourself by adding draw width/height parameters to the render functions?
If it makes no difference I’ll just make a modified TiledMap/Layer/TileSet class for my needs.