[OpenGL] What is the next step after you have created a matrix?

Note that for practice code like this, I’m following a strict rule:
No jumping around: you don’t create objects just for the sake of categorizing the code into objects. Instead, the main focus is the order of execution you see when stepping through the program (using breakpoints or hot-swapping): all the code is placed in one single function and ordered as a fixed pipeline, without method/function calls jumping all over the source file.

TL;DR: cut down the number of function calls and just focus on the order of code execution.

Here’s the entire code, which follows that strict rule:

package core;

import static org.lwjgl.opengl.GL11.GL_FLOAT;
import static org.lwjgl.opengl.GL11.GL_LINEAR;
import static org.lwjgl.opengl.GL11.GL_LINEAR_MIPMAP_NEAREST;
import static org.lwjgl.opengl.GL11.GL_MODELVIEW;
import static org.lwjgl.opengl.GL11.GL_PROJECTION;
import static org.lwjgl.opengl.GL11.GL_RGBA;
import static org.lwjgl.opengl.GL11.GL_TEXTURE_2D;
import static org.lwjgl.opengl.GL11.GL_TEXTURE_MAG_FILTER;
import static org.lwjgl.opengl.GL11.GL_TEXTURE_MIN_FILTER;
import static org.lwjgl.opengl.GL11.glBindTexture;
import static org.lwjgl.opengl.GL11.glGenTextures;
import static org.lwjgl.opengl.GL11.glLoadIdentity;
import static org.lwjgl.opengl.GL11.glLoadMatrix;
import static org.lwjgl.opengl.GL11.glMatrixMode;
import static org.lwjgl.opengl.GL11.glTexImage2D;
import static org.lwjgl.opengl.GL11.glTexParameteri;
import static org.lwjgl.opengl.GL11.glViewport;
import game.Game;

import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import javax.imageio.ImageIO;

import org.lwjgl.opengl.GL30;
import org.lwjgl.util.vector.Vector3f;

public class Something {
	
	private int texture;

	public Something() {
		init();
	}
	
	private void init() {
		float[] pixels = null;
		BufferedImage img = null;
		try {
			img = ImageIO.read(Something.class.getResource("/icon.png"));
			pixels = img.getData().getPixels(0, 0, img.getWidth(), img.getHeight(), pixels);

		}
		catch (Exception e) {
			e.printStackTrace();
			return;
		}
		
		texture = glGenTextures();
		glBindTexture(GL_TEXTURE_2D, texture);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		// getPixels() returns samples in the 0-255 range; a GL_FLOAT upload
		// expects them normalised to 0-1 (this also assumes the PNG has four
		// components to match GL_RGBA).
		for (int i = 0; i < pixels.length; i++) {
			pixels[i] /= 255f;
		}
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.getWidth(), img.getHeight(), 0, GL_RGBA, GL_FLOAT, (java.nio.FloatBuffer) ByteBuffer.allocateDirect(pixels.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer().put(pixels).flip()); // flip() so GL reads from the start of the buffer
		GL30.glGenerateMipmap(GL_TEXTURE_2D);
		glBindTexture(GL_TEXTURE_2D, 0);
		

		glViewport(0, 0, Game.WIDTH, Game.HEIGHT);
		{
			//Perspective Matrix creation
			final float FOV = 45f;
			float yFactor = (float) Math.tan(FOV * Math.PI / 360f);
			float xFactor = yFactor * ((float) Game.WIDTH / (float) Game.HEIGHT); // multiply (not divide) by the aspect ratio, so 1/xFactor = f/aspect as in gluPerspective
			float near = 1f;
			float far = 1000f;
			
			glMatrixMode(GL_PROJECTION);
			glLoadIdentity();
			glLoadMatrix((java.nio.FloatBuffer) ByteBuffer.allocateDirect(16 * 4).order(ByteOrder.nativeOrder()).asFloatBuffer().put(new float[] {
					1f / xFactor, 0f, 0f, 0f,
					0f, 1f / yFactor, 0f, 0f,
					0f, 0f, -(far + near) / (far - near), -1f,
					0f, 0f, -(2f * far * near) / (far - near), 0f
			}).flip()); // flip() so glLoadMatrix reads from the start of the buffer
		}
		
		glMatrixMode(GL_MODELVIEW);
		glLoadIdentity();
		
		{
			//View Matrix creation
			Vector3f eye = new Vector3f();
			Vector3f center = new Vector3f();
			Vector3f up = new Vector3f();
			
			eye.x = 0f;
			eye.y = 0f;
			eye.z = 0f;
			center.x = 0f;
			center.y = 0f;
			center.z = -1f;
			up.x = 0f;
			up.y = 1f;
			up.z = 0f;
			
			up.normalise();
			
			// (Normalising eye and center would make no sense here, and would
			// throw on the zero-length (0, 0, 0) vector. The direction that
			// matters is eye - center.)
			Vector3f zaxis = new Vector3f();
			Vector3f.sub(eye, center, zaxis);
			zaxis.normalise();
			Vector3f xaxis = new Vector3f();
			Vector3f yaxis = new Vector3f();
			Vector3f.cross(up, zaxis, xaxis);
			xaxis.normalise();
			Vector3f.cross(zaxis, xaxis, yaxis);
			yaxis.normalise();
			
			// Column-major: the rows of the rotation are the camera axes.
			// The translation column stays zero only because the eye sits at
			// the origin; in general it is (-dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye)).
			float[] viewMatrix = {
					xaxis.x, yaxis.x, zaxis.x, 0f,
					xaxis.y, yaxis.y, zaxis.y, 0f,
					xaxis.z, yaxis.z, zaxis.z, 0f,
					0f, 0f, 0f, 1f
			};
			
			glLoadMatrix((java.nio.FloatBuffer) ByteBuffer.allocateDirect(16 * 4).order(ByteOrder.nativeOrder()).asFloatBuffer().put(viewMatrix).flip());
		}

		
		//TODO: No idea what to do next. 
		
		//As in:
		//	#1. Do I start work on texture and vertices binding?
		//	#2. Do I continue to make the model matrix? (And how, precisely? On Android, projection, view, and model matrices are separately calculated, so it's easier to say.) 
		//	#3. Anything else like matrix multiplications within the MODELVIEW state?
		//	#4. Do I start working on what I should do when I'm drawing frames? (On Android, GLSurfaceView.Renderer's onDrawFrame() is equivalent to this.)

	}
}

Basically, that covers it all. I don’t know what the next step is: I’m porting my own Android OpenGL ES practice code over to LWJGL, and I’m a bit lost. Any hints are welcome.

Thanks in advance. I’m going to sleep, as my brain is really foggy after all these calculations.

If you’re calculating your own matrices, then it’s best to use the programmable pipeline, since the fixed-function pipeline does that for you with functions such as glTranslatef(). However, as I’m assuming this is for learning purposes, it doesn’t really matter.

[quote]#1. Do I start work on texture and vertices binding?
#2. Do I continue to make the model matrix? (And how, precisely? On Android, projection, view, and model matrices are separately calculated, so it’s easier to say.)
#3. Anything else like matrix multiplications within the MODELVIEW state?
#4. Do I start working on what I should do when I’m drawing frames? (On Android, GLSurfaceView.Renderer’s onDrawFrame() is equivalent to this.)
[/quote]
#1: Yes. Since you’re using the fixed-function pipeline, you can set up the vertices and texture coordinates and render them as before, preferably using buffer objects.
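
Since buffer handling trips a lot of people up when porting from Android: every direct NIO buffer you hand to LWJGL must be flip()ped (or rewound) after put(), otherwise GL reads from the end of the buffer. Here’s a small sketch of that (plain java.nio; the class and method names are made up, and the commented lines show where it would plug into VBO setup):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferHelper {
    // Wraps a float[] in a direct, native-order FloatBuffer ready for OpenGL.
    static FloatBuffer toDirectBuffer(float[] data) {
        FloatBuffer buf = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buf.put(data);
        buf.flip(); // position back to 0, limit at the end of the data
        return buf;
    }

    // With a GL context current, uploading to a buffer object would look like:
    //   int vbo = glGenBuffers();
    //   glBindBuffer(GL_ARRAY_BUFFER, vbo);
    //   glBufferData(GL_ARRAY_BUFFER, toDirectBuffer(vertices), GL_STATIC_DRAW);
}
```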
#2: I’m no expert, as I’m currently learning the transformations myself, but I believe the view matrix is the inverse of the camera’s position and orientation. The model matrix is used to transform vertices from model space (the vertices defined relative to the origin) to a position in the world. For example, if you defined a rectangle’s vertices in the range 0-1, e.g. (0.1, 0.1), (0.1, 0.9), (0.9, 0.1), (0.9, 0.9), you’d multiply each vertex by the model matrix to get its position in world space. Say the model matrix looks like this:

Xx, Xy, Xz, Xt,
Yx, Yy, Yz, Yt,
Zx, Zy, Zz, Zt,
Wx, Wy, Wz, Wt

10,  0,  0, 1000       .1      10*.1 +  0*.1 + 0*0 + 1000*1 = 1001
 0, 10,  0, 1000   *   .1   =   0*.1 + 10*.1 + 0*0 + 1000*1 = 1001
 0,  0,  0,    0        0       0*.1 +  0*.1 + 0*0 +    0*1 =    0
 0,  0,  0,    1        1       0*.1 +  0*.1 + 0*0 +    1*1 =    1

Multiply the other positions the same way and you get (1001, 1001, 0, 1), (1001, 1009, 0, 1), (1009, 1001, 0, 1), (1009, 1009, 0, 1). You’ve effectively scaled each vertex by 10 on the X and Y axes and translated it by +1000 on both. Rotation is slightly more complicated, but you can Google it. Your rectangle is now in world space.
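
That worked example can be double-checked in a few lines of plain Java. One thing to keep in mind: the float[16] arrays you hand to glLoadMatrix() are column-major, so the translation sits in elements 12-14 even though the table above is written row by row. (Class and method names here are just made up for illustration.)

```java
public class ModelMatrixCheck {
    // Multiply a column-major 4x4 matrix by a column vector (x, y, z, w).
    static float[] transform(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[row] * v[0] + m[4 + row] * v[1]
                     + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return out;
    }

    public static void main(String[] args) {
        // Scale by 10 in X/Y, translate by (+1000, +1000), squash Z to 0.
        float[] model = {
                10f, 0f, 0f, 0f,      // column 0
                0f, 10f, 0f, 0f,      // column 1
                0f, 0f, 0f, 0f,       // column 2 (Z flattened)
                1000f, 1000f, 0f, 1f  // column 3 (translation)
        };
        float[] p = transform(model, new float[] {0.1f, 0.1f, 0f, 1f});
        System.out.println(p[0] + ", " + p[1] + ", " + p[2] + ", " + p[3]);
        // prints 1001.0, 1001.0, 0.0, 1.0
    }
}
```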

Now you’ve got a problem. Your window might only be 800x600, so your vertices can’t be seen.

                              . .
                              . .

|‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾|
|                |
|                |
|                |
|                |
|                |
|________________|

This is where the view matrix comes in. You need to uniformly transform your vertices so that they can be seen on the screen. If you want the camera to be at position (500, 500), then your view matrix holds a translation of (-500, -500). This is because the camera can’t move, but the world can, so you move the vertices by (-500, -500). The view matrix uses the same layout:

Xx, Xy, Xz, Xt,
Yx, Yy, Yz, Yt,
Zx, Zy, Zz, Zt,
Wx, Wy, Wz, Wt

 1,  0,  0, -500,
 0,  1,  0, -500,
 0,  0,  1,    0,
 0,  0,  0,    1

There is no need to do two separate transformations. They can be combined into one. The combined matrix would look like this:

10,  0,  0, 1000 - 500,
 0, 10,  0, 1000 - 500,
 0,  0,  0,          0,
 0,  0,  0,          1

And if each vector was multiplied by this matrix, you would end up with something like this:

|‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾|
|          . .   |
|          . .   |
|                |
|                |
|                |
|________________|

Note: The process of multiplying by the model matrix is not needed if the buffer object holds world space coordinates, since it would essentially be an identity matrix.

So to translate that to code, you would need to:

  • Set up the view matrix for the modelview matrix (the negation of the camera position and orientation)
  • For each vertex, transform it into world space by modifying the modelview matrix to include the world-space transformations
  • Render, then undo those transformations (unless all vertices have the same transform applied to them)
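
The “undo those transformations” part is exactly what glPushMatrix()/glPopMatrix() are for: the modelview state is literally a stack of matrices, so the undo is just a pop. Here’s a CPU-side sketch of the idea (names made up; simplified so that translate() just adds to the translation column, which is only valid while the upper 3x3 is still identity):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MatrixStackSketch {
    private final Deque<float[]> stack = new ArrayDeque<>();
    private float[] current = {1f,0f,0f,0f, 0f,1f,0f,0f, 0f,0f,1f,0f, 0f,0f,0f,1f};

    // glPushMatrix(): save a copy of the current modelview matrix.
    void push() { stack.push(current.clone()); }

    // glPopMatrix(): discard the per-model edits and restore the saved copy.
    void pop() { current = stack.pop(); }

    // Stand-in for glTranslatef(): bumps the translation column (elements 12-14).
    void translate(float x, float y, float z) {
        current[12] += x; current[13] += y; current[14] += z;
    }

    float[] current() { return current; }
}
```

So per model, that’s: push(), translate to the model’s position, draw, pop(). In GL terms: glPushMatrix(); glTranslatef(x, y, z); glDrawArrays(...); glPopMatrix();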

I hope that makes sense. If it doesn’t, just ask and I’ll provide you a little code snippet to help explain.

#3: Only the above^^
#4: You bind the buffer holding the model space data, apply the transformations to camera/eye space (hopefully I’m not mistaken in saying that multiplying by the modelview matrix transforms directly to eye space), then use glDrawArrays() to render the object.

:o
I didn’t realise I knew so much about the transformations. For the past week I’ve been getting confused, but writing this has cleared my head. I hope it’s 100% correct (I’m almost certain it is), as I wouldn’t want to give you wrong information. If I’ve made a mistake, someone can point it out.

Anyway, like I said, if you’re still having difficulty writing out the code, just reply and I’ll give you a small snippet to start you off. Just make sure you give it a try first. :slight_smile:

Also, make sure you do the matrix multiplication in the right order.
With the exception of commuting operations (e.g. a scale followed by its inverse scale), AB != BA.
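
A quick demonstration with two column-major matrices, a uniform scale of 2 and a translation of +10 on X. Which one goes on the left decides whether the offset gets scaled (with column vectors, the right-hand matrix is applied to the vertex first):

```java
public class MatrixOrder {
    // c = a * b for column-major 4x4 matrices (the layout glLoadMatrix expects).
    static float[] mul(float[] a, float[] b) {
        float[] c = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        return c;
    }

    public static void main(String[] args) {
        float[] scale     = {2f,0f,0f,0f, 0f,2f,0f,0f, 0f,0f,2f,0f,  0f,0f,0f,1f};
        float[] translate = {1f,0f,0f,0f, 0f,1f,0f,0f, 0f,0f,1f,0f, 10f,0f,0f,1f};
        // scale * translate means "translate first, then scale",
        // so the +10 offset gets scaled up to +20:
        System.out.println(mul(scale, translate)[12]); // prints 20.0
        System.out.println(mul(translate, scale)[12]); // prints 10.0
    }
}
```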

And then I should use glPushMatrix() and glPopMatrix() to manipulate the model matrices for each individual model and mesh, right?

Is this how OpenGL 2.0 and Android’s OpenGL ES 2.0 work: passing the model matrices into shaders and letting the shaders do the calculations?

With shaders, you pass the matrices in to the shader as uniform variables, then you multiply the position by the matrices to get the new position, e.g:


in vec3 position;
uniform mat4 matrix;

void main() {
    gl_Position = matrix * vec4(position, 1.0);
}

The matrix you pass in would be the combined modelview and projection matrix (the two multiplied together). The reason you should multiply the matrices on the CPU is to avoid repeating that multiplication for every vertex.
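
A quick sanity check of that claim in plain Java: multiplying a vertex by the combined (projection * modelview) matrix gives the same result as applying modelview and then projection separately, so the combine only needs to happen once per frame on the CPU. (Column-major throughout; the matrices are just simple integer-valued stand-ins so the comparison is exact.)

```java
public class CombineCheck {
    // c = a * b for column-major 4x4 matrices.
    static float[] mul(float[] a, float[] b) {
        float[] c = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        return c;
    }

    // Multiply a column-major 4x4 matrix by a column vector.
    static float[] transform(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++)
            out[row] = m[row] * v[0] + m[4 + row] * v[1]
                     + m[8 + row] * v[2] + m[12 + row] * v[3];
        return out;
    }

    public static void main(String[] args) {
        float[] projection = {2f,0f,0f,0f, 0f,2f,0f,0f, 0f,0f,2f,0f, 0f,0f,0f,1f}; // stand-in
        float[] modelview  = {1f,0f,0f,0f, 0f,1f,0f,0f, 0f,0f,1f,0f, 3f,4f,5f,1f}; // translate (3,4,5)
        float[] v = {1f, 2f, 3f, 1f};

        float[] once  = transform(mul(projection, modelview), v); // combined on the "CPU"
        float[] twice = transform(projection, transform(modelview, v));
        // Both give (8, 12, 16, 1).
        System.out.println(once[0] + " == " + twice[0]);
    }
}
```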

Can someone tell me how to multiply the viewMatrix and modelMatrix inside glMatrixMode(GL_MODELVIEW)? I tried using glPushMatrix() and glPopMatrix(), but it confused me even more.