Updated translate? (Extended Convo)

You really need to get a grasp on the fundamentals of what is going on when you call glLoadIdentity and glTranslatef. Here’s a link that might help get you there:

Google “OpenGL modelview matrix” for more stuff. Understanding how this works is rather important in putting your own renderer together.

So I read that, googled what you said, and read some other articles. I think I still understand the same stuff regarding the world view and camera view, though it did make it clearer that the world moves in the opposite direction (which I didn’t really understand until now). But the thing I still don’t get, which I actually read about in one of the articles, is how the transform method works regarding different objects. I felt like it was saying that when you run the translate method, you move the whole world in that direction. So I’ve just been sitting here thinking, and I still don’t get how the HUD would work with this. In theory I get it, and how it should work; it’s just that from what I read, it shouldn’t work. They gave an example of using it, and maybe that shows what I want, but everything you showed or I found was in 3D (so I could ignore the Z). I think I get why I don’t need to use glLoadIdentity(), but it makes my life a lot easier regarding the camera. I’m still wondering if that messes up the HUD, but I need to sit down and test things out more (cause I just woke up).

Anyway, the whole matrix stuff kinda blew my mind atm, but maybe I’ll look at it closer later. I’m not trying to be super fancy with the camera; I really just 1. want it to follow the player, 2. stop when the player reaches the start or end of the level, and 3. have a HUD that is on the screen during the level. I’ve gotten 1 and 2 to work perfectly, so it’s just understanding how to make this HUD work. From what I was told (or the way I take it), you just transform to where the HUD is at the end and it should work. It didn’t, so idk.

Maybe it is better to understand what happens when you actually do not call any matrix functions at all. In that case, everything you draw is in the so-called “clip space.”
Since you are not doing 3D and don’t have any perspective projection, this “clip space” is also equal to the “normalized device coordinates” (NDC) space.
In NDC space, whenever you draw something at (0, 0) it appears in the middle of the viewport.
When you render something at (-1, -1) it appears in the bottom-left corner of the viewport.
And likewise (+1, +1) will appear at the top-right corner.
So your viewport ranges from -1…+1 in X and Y; that is OpenGL’s default setup.
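
To make that concrete, here is a small sketch (LWJGL-style GL11 immediate mode, assuming both the projection and modelview matrices are left at identity):

GL11.glLoadIdentity();              // modelview = identity; with an identity projection these coordinates are NDC
GL11.glBegin(GL11.GL_TRIANGLES);
GL11.glVertex2f(-1.0f, -1.0f);      // bottom-left corner of the viewport
GL11.glVertex2f( 1.0f, -1.0f);      // bottom-right corner of the viewport
GL11.glVertex2f( 0.0f,  0.0f);      // exact middle of the viewport
GL11.glEnd();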

When you do glTranslate and maybe glScale and glRotate, you are changing that “frame of reference.” For example, when you glTranslate(-1, 0) and then draw something at (0, 0), it will show up on the left edge of the viewport, since glTranslate “adds” (-1, 0) to the coordinates of everything you render.
This is why it appears that “the world” is moved by that (-1, 0) amount.
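
A tiny sketch of that, under the same assumptions as above:

GL11.glLoadIdentity();
GL11.glTranslatef(-1.0f, 0.0f, 0.0f);  // shift the frame of reference by (-1, 0)
GL11.glBegin(GL11.GL_POINTS);
GL11.glVertex2f(0.0f, 0.0f);           // ends up at NDC (-1, 0): the left edge, vertically centered
GL11.glEnd();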

Now, to draw your HUD, you must first be clear about what the frame of reference for your HUD should actually be (i.e. in which coordinate system should your HUD be specified?)
Should it be in pixel coordinates? Should it be in inches?
Should (0, 0) be the bottom-left corner or even the top-left corner?
This decision affects how you must transform the previously mentioned default NDC coordinate space to this desired space.

For example, if you want your HUD space to be in pixel coordinates with the top-left corner being (0, 0) and the bottom-right corner being (width, height), you simply do this:


glLoadIdentity();
glTranslatef(-1.0f, 1.0f, 0.0f);
glScalef(2.0f / viewportWidth, -2.0f / viewportHeight, 1.0f);
// draw call
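
With that transform in place, anything you draw in pixel coordinates lands where you expect, regardless of any camera translation you used for the world before. As a hypothetical illustration (the quad size and position are made up):

GL11.glBegin(GL11.GL_QUADS);
GL11.glVertex2f( 10.0f, 10.0f);    // 10 px from the left, 10 px from the top
GL11.glVertex2f(110.0f, 10.0f);
GL11.glVertex2f(110.0f, 30.0f);
GL11.glVertex2f( 10.0f, 30.0f);
GL11.glEnd();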

It is also very important to recognize how your vertices are being transformed when you apply multiple transformations like I did in the above example.
OpenGL uses post-multiplication, which means that your vertices are effectively first transformed by the LAST transformation immediately before your draw call, which above is glScalef().
What we want to achieve is that any vertex that is specified in pixel coordinates will map to some coordinate in NDC space. And we defined above that we wanted pixel coordinate (0, 0) to be the top-left corner of the viewport and (width, height) to be the bottom-right viewport corner.
So, we want to build a transformation that does this mapping for us.

In the above example, when a vertex “arrives” at glScalef(), this transformation will effectively first divide the vertex coordinates by half the viewport size and also invert the Y axis.
So (width, height) will become (2.0f, -2.0f) and (0.0f, 0.0f) will remain (0.0f, 0.0f).
We see here that this space begins to resemble NDC. Both spaces now have a total length of 2.0 in X and Y. The only thing that remains is that we need to translate our vertex to the right position.
Therefore, the scaled vertex is now translated by (-1, +1), which turns the scaled (2.0f, -2.0f) into (1.0f, -1.0f), the bottom-right viewport corner.
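
If you want to sanity-check those numbers yourself, here is a small sketch using JOML’s org.joml.Matrix4f and Vector3f (JOML comes up again below); the 800x600 viewport size is only an example:

Matrix4f pixelToNdc = new Matrix4f()
        .translate(-1.0f, 1.0f, 0.0f)                 // applied to the vertex second
        .scale(2.0f / 800.0f, -2.0f / 600.0f, 1.0f);  // applied to the vertex first
Vector3f bottomRight = pixelToNdc.transformPosition(new Vector3f(800.0f, 600.0f, 0.0f));
// bottomRight is now (1, -1, 0): the bottom-right viewport corner in NDC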

So, whenever you have trouble figuring out how you must transform something to appear at a certain spot, it is always helpful to begin with that default NDC space [-1…+1] and work out, step by step, how to get there from the space you want your vertices to be defined in.
This of course becomes more complicated when you do 3D with “lookat” and perspective projections, but for 2D this is a viable way.
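
And if you decided instead that (0, 0) should be the bottom-left corner, you would simply not flip the Y axis; a variant of the earlier sketch:

glLoadIdentity();
glTranslatef(-1.0f, -1.0f, 0.0f);
glScalef(2.0f / viewportWidth, 2.0f / viewportHeight, 1.0f);
// draw call in pixel coordinates, with (0, 0) at the bottom-left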

EDIT:

The above used a method that requires you to think about the transformations you need to apply to a vertex in order to get it from pixel-space to NDC space, which is what OpenGL expects and what it does when rendering something.

Depending on how you want to see your transformations, this can be either intuitive or not.
What you might instead prefer is:
“How do I do the opposite transformation from NDC to pixel?”
And here it is:


glScalef(viewportWidth * 0.5f, -viewportHeight * 0.5f, 1.0f);
glTranslatef(1.0f, -1.0f, 0.0f);
// get the opposite of this transformation... somehow

You see that now we are thinking in the “opposite” direction: which transformations should we apply to a desired NDC coordinate to turn it into its corresponding pixel coordinate? The order of transformations applied to a vertex still reads the same as above, with the LAST transformation step applied first, which here is glTranslatef.

Since what we did here is really the “opposite” of what OpenGL expects (i.e. OpenGL can only express transformations in terms of “how to get from X to NDC”), we now need to give OpenGL the inverse of that transformation.
And since OpenGL stores transformations as 4x4 matrices, this can be done using a matrix inversion.

Since OpenGL itself does not provide any fixed function to do that, we could express this transformation using JOML for example:


Matrix4f m = new Matrix4f();
m.scale(viewportWidth * 0.5f, -viewportHeight * 0.5f, 1.0f)
 .translate(1.0f, -1.0f, 0.0f)
 .invert();

Now, all you need to do is to get the matrix in a FloatBuffer (via Matrix4f.get()) and upload it to OpenGL via glLoadMatrixf(). That’s it.
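
For reference, a minimal sketch of that last step, assuming LWJGL 3’s BufferUtils and GL11.glLoadMatrixf together with JOML’s Matrix4f.get(FloatBuffer):

FloatBuffer fb = BufferUtils.createFloatBuffer(16);
m.get(fb);                                // writes the matrix in column-major order
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadMatrixf(fb);
// draw the HUD here, in pixel coordinates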

Ok, just to make this clear, here are my OpenGL settings:

    public void initGL() {
        // loads OpenGL settings
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glClearColor(0.4f, 0.4f, 0.0f, 0.0f);
        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GL11.glOrtho(0, 800, 0, 600, 1, -1);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
    }

So I believe I’m using the bottom-left for (0, 0), not the middle. (I could be wrong about that from what you’re saying, but that’s where I believe my (0, 0) is.) My screen or viewport is 800 x 600, and well… ya, that’s all the OpenGL stuff.

Also…


glLoadIdentity();
glTranslatef(-1.0f, 1.0f, 0.0f);
glScalef(2.0f / viewportWidth, -2.0f / viewportHeight, 1.0f);
// draw call

[quote]In the above example, when a vertex “arrives” at glScalef(), this transformation will effectively first divide the vertex coordinates by half the viewport size and also invert the Y axis.
So (width, height) will become (2.0f, -2.0f) and (0.0f, 0.0f) will remain (0.0f, 0.0f).
[/quote]
the width is 800 and the height is 600

So that’s glScalef(1/400, -1/300, 1).

I’m confused about what math you did.

[quote]What we want to achieve is that any vertex that is specified in pixel coordinates will map to some coordinate in NDC space. And we defined above that we wanted pixel coordinate (0, 0) to be the top-left corner of the viewport and (width, height) to be the bottom-right viewport corner.
[/quote]
Let me just guess: are you saying 800 (width) = 1 and 600 (height) = 1? So from 1 to -1 is 800 to -800 and 600 to -600?

Ok, so assuming that’s correct, what you said means both would be 1, which would be correct through what you did. After reading over this I’m confused about what this really means, because my (0, 0) is at the bottom left; would that mean by default it’s already (2, 2)? So I think now I’m confused as to what glScale is actually doing, because when I tried putting it above the HUD render code… it just didn’t work very well lol.

Oh… Oh… I think I’m getting somewhere with this. So lol I just commented out the glTranslate and glScale you gave, and just had glLoadIdentity(). This did weird things, but it helps! Since my camera is pushed to the player right before the player renders, all the blocks stay positioned the same as they would and don’t move with the player as they should (it’s weird, idk how to explain what’s happening). Now the creatures/their projectiles (everything drawn after the player) move, so the blocks and the player aren’t moving but everything else is. This has me thinking that I want this to happen, but just for the HUD. (The stuff is moving in place with the camera, but it’s not what I want.) So maybe if I move the camera stuff that’s before the player renders to before anything renders, it may actually help. Now I also should say this could be caused because it’s moving everything at the position of the camera, when I just want the HUD.

I’ll go try it out and let you know what happens.

EDIT: smh… I moved the camera code to before anything renders… all I have in the HUD is glLoadIdentity() and then it draws… and it looks like it works perfectly.

Literally, here’s my HUD code:

import org.lwjgl.opengl.GL11;

public class HUD
{
	protected Level level;
	Picture pic = new Picture(50, 300, 175, 175);

	public HUD(Level level)
	{
		this.level = level;
	}
	public void draw()
	{
		// reset the modelview matrix so the HUD ignores the camera translation
		GL11.glLoadIdentity();
		pic.draw();
	}
	public void update()
	{
		//something
	}
}

I swear I tried this before and it didn’t work :confused: