Maybe it is better to understand what happens when you actually do not call any matrix functions at all. In that case, everything you draw is in the so called “clip space.”
Since you are not doing 3D and don’t have any perspective projection, this “clip space” is also equal to the “normalized device coordinates” (NDC) space.
In NDC space, whenever you draw something at (0, 0) it appears in the middle of the viewport.
When you render something at (-1, -1) it appears in the bottom-left corner of the viewport.
And likewise (+1, +1) will appear at the top-right corner.
So by default, your viewport ranges from -1…+1 in X and Y.
This is the default setting of OpenGL.
When you call glTranslate and maybe glScale and glRotate, you are changing that “frame of reference.” For example, when you glTranslate(-1, 0) and then draw something at (0, 0), it will show up on the left edge of the viewport, since glTranslate “adds” (-1, 0) to the coordinates of everything you render.
This is why it appears that “the world” is moved by that (-1, 0) amount.
Now, to draw your HUD, you must first be clear about what the frame of reference for your HUD actually should be (i.e. in which coordinate system your HUD should be defined).
Should it be in pixel coordinates? Should it be in inches?
Should (0, 0) be the bottom-left corner or even the top-left corner?
This decision affects how you must transform the previously mentioned default NDC coordinate space to this desired space.
For example, if you want your HUD space to be in pixel coordinates with the top-left corner being (0, 0) and the bottom-right corner being (width, height) you simply do this:
glLoadIdentity();
glTranslatef(-1.0f, 1.0f, 0.0f);
glScalef(2.0f / viewportWidth, -2.0f / viewportHeight, 1.0f);
// draw call
It is also very important to recognize how your vertices are being transformed when you apply multiple transformations like I did in the above example.
OpenGL post-multiplies each new matrix onto the current one, which means that your vertices are effectively transformed first by the LAST transformation specified immediately before your draw call, which above is glScalef().
What we want to achieve is that any vertex that is specified in pixel coordinates will map to some coordinate in NDC space. And we defined above that we wanted pixel coordinate (0, 0) to be our top-left corner of the viewport and (width, height) be our bottom-right viewport corner.
So, we want to build a transformation that does this mapping for us.
In the above example, when a vertex “arrives” at glScalef(), this transformation will effectively first divide the vertex coordinates by half the viewport size and also invert the Y axis.
So (width, height) will become (2.0f, -2.0f) and (0.0f, 0.0f) will remain (0.0f, 0.0f).
We see here that this space begins to resemble NDC. Both spaces now have a total length of 2.0 in X and Y. The only thing that remains is that we need to translate our vertex to the right position.
Therefore, the scaled vertex is now translated by (-1, +1), which will result in the scaled (2.0f, -2.0f) to become (1.0f, -1.0f), which is the bottom-right viewport corner.
So, whenever you have trouble figuring out how you must transform something to appear at a certain spot, it is always helpful to start from the space you want your vertices to be defined in and work your way towards that default NDC space [-1…+1].
This of course becomes more complicated when you do 3D with “lookat” and perspective projections, but for 2D this is a viable way.
EDIT:
The above used a method that requires you to think about the transformations you need to apply to a vertex in order to get it from pixel-space to NDC space, which is what OpenGL expects and what it does when rendering something.
Depending on how you want to see your transformations, this can be either intuitive or not.
What you might instead prefer is:
“How do I do the opposite transformation from NDC to pixel?”
And here it is:
glScalef(viewportWidth * 0.5f, -viewportHeight * 0.5f, 1.0f);
glTranslatef(1.0f, -1.0f, 0.0f);
// get the opposite of this transformation... somehow
You see that now we are thinking in the “opposite” direction. We are now asking which transformations we would apply to a desired NDC coordinate to turn it into its corresponding pixel coordinate. The order of transformations applied to a vertex still reads the same as above: the LAST transformation step is applied first, which here is glTranslatef.
Since what we did here is really the “opposite” of what OpenGL expects (i.e. OpenGL can only express transformations in terms of “how to get from X to NDC”), we now need to create the opposite transformation for OpenGL from that.
And since OpenGL stores transformations as 4x4 matrices, this can be done using a matrix inversion.
Since OpenGL itself does not provide any fixed function to do that, we could express this transformation using JOML for example:
Matrix4f m = new Matrix4f()
    .scale(viewportWidth * 0.5f, -viewportHeight * 0.5f, 1.0f)
    .translate(1.0f, -1.0f, 0.0f)
    .invert();
Now, all you need to do is get the matrix into a FloatBuffer (via Matrix4f.get()) and upload it to OpenGL via glLoadMatrixf(). That’s it.