OpenGL - Batch Renderer with Multiple Objects and their Transformation Matrices

Hi, I have created a possibly incomplete batch renderer (linked below), but I have come to a standstill on how to apply a transformation matrix when there are multiple separate objects (different position, rotation, scale) in the same batch. How do I do this? Do I apply each object's transform (position, rotation, scale) to each of its vertices as it is put into the batch? If so, how (rotation and scale; position is just added)?

Batch Renderer Code

Thanks.

Assuming you wanted to ask, how to transform a vertex by your (translation/rotation/scale) model matrix, then:
[icode]modelMatrix.transformPosition(vertexToTransform)[/icode]
or
[icode]modelMatrix.transformPosition(vertexToTransform, transformedVertex)[/icode],
which is just a basic matrix/vector multiplication, the same operation the GPU performs when running the shader and executing a matrix * vector expression.
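To make the matrix/vector multiplication concrete, here is a minimal plain-Java sketch of what a call like transformPosition(v) boils down to. It assumes a column-major 4x4 matrix layout (like OpenGL and JOML use); the class and method names here are illustrative, not JOML's actual implementation:

```java
// Sketch of transforming a position by a 4x4 model matrix:
// treat the vertex as (x, y, z, 1) and multiply. Column-major layout
// means element m[col * 4 + row], so the translation sits in m[12..14].
public class TransformSketch {
    static float[] transformPosition(float[] m, float x, float y, float z) {
        float tx = m[0] * x + m[4] * y + m[8]  * z + m[12];
        float ty = m[1] * x + m[5] * y + m[9]  * z + m[13];
        float tz = m[2] * x + m[6] * y + m[10] * z + m[14];
        return new float[] { tx, ty, tz };
    }

    public static void main(String[] args) {
        // Identity matrix with a translation of (2, 3, 0) in the last column
        float[] m = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            2, 3, 0, 1
        };
        float[] v = transformPosition(m, 1, 1, 0);
        System.out.println(v[0] + " " + v[1] + " " + v[2]); // 3.0 4.0 0.0
    }
}
```

Rotation and scale live in the upper-left 3x3 part of the matrix, which is why multiplying by the matrix handles all three components of the transform at once instead of you adding/scaling by hand.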

Ah, JOML to the rescue. Thanks :slight_smile:

Actually, I have the following render code:

renderer.AddMesh(null, new Matrix4f().translate(-0.4f, 0.2f, 0).scale(.1f).rotate(rot));

which is this:


public void AddMesh(Mesh mesh, Matrix4f transformMatrix){
    AddVertex(transformMatrix.transformPosition(new Vector3f(-.2f, .2f, 0)), new Color(1, 1, 1), new Vector2f(0, 1));
    AddVertex(transformMatrix.transformPosition(new Vector3f(-.2f, -.2f, 0)), new Color(1, 1, 1), new Vector2f(0, 0));
    AddVertex(transformMatrix.transformPosition(new Vector3f(.2f, -.2f, 0)), new Color(1, 1, 1), new Vector2f(1, 0));
}

public void AddVertex(Vector3f pos, Color vertexColor, Vector2f textureCoords){
    if(Vertices.remaining() < 9){
        //We need more space in the buffer, so flush it
        Flush();
    }

    float r = vertexColor.r;
    float g = vertexColor.g;
    float b = vertexColor.b;
    float a = vertexColor.a;

    float texX = textureCoords.x;
    float texY = textureCoords.y;

    Vertices.put(pos.x).put(pos.y).put(pos.z)//Vertex Position
            .put(r).put(g).put(b).put(a)//Vertex Color
            .put(texX).put(texY);//Vertex Texture Coord

    NumOfVertices += 1;
}

The rot variable(quaternion) is being changed every update:

rot.rotateZ(1f * deltaTime);

Using this code, the triangle I am rendering seems to be getting bigger and smaller.

???

Is it just me?

Can you show how you are instantiating and initializing the ‘rot’ quaternion?
Chances are that your ‘rot’ quaternion is not normalized after creation, which can likely happen when you use the Quaternionf(float, float, float, float) constructor by hand.
In the non-normalized case, the quaternion also contains a scaling component.
To combat this now, you can manually renormalize the quaternion after the rotateZ() invocation by calling .normalize() on the quaternion.
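A small plain-Java illustration of why a non-normalized quaternion scales geometry: its length is not 1, and the usual rotation formula multiplies vector lengths by the square of the quaternion's length. The numbers below mimic a hand-built quaternion such as new Quaternionf(0, 0, 1, 1); the class name is just for this sketch:

```java
// A quaternion only represents a pure rotation when its length is 1.
// normalize() divides every component by the length to restore that.
public class QuatNorm {
    static double length(double x, double y, double z, double w) {
        return Math.sqrt(x * x + y * y + z * z + w * w);
    }

    public static void main(String[] args) {
        // Hand-built quaternion (0, 0, 1, 1): length is sqrt(2), not 1,
        // so it carries a scaling component on top of the rotation.
        double x = 0, y = 0, z = 1, w = 1;
        double len = length(x, y, z, w);
        System.out.println(len); // ~1.414

        // normalize(): divide each component by the length
        x /= len; y /= len; z /= len; w /= len;
        System.out.println(length(x, y, z, w)); // 1.0, a pure rotation again
    }
}
```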

This is how the rot quaternion is initialised:

Quaternionf rot = new Quaternionf();

Normalising after .rotateZ() works. Is there a way to make it so I don't have to normalise it every time?

Usually, you should not ever need to renormalize, because standard java.lang.Math.sin()/cos(), which is used by rotateZ(), is extremely high precision and will yield a sufficiently normalized quaternion.
I just tried everything with rotateZ() to let that quaternion become unnormalized (including rotating by a very huge magnitude angle) to make any scaling effect apparent. But it just wouldn’t become unnormalized.
Now I am curious as to what happens with your quaternion.
You could also just ditch quaternions altogether and just use Matrix4f.rotateZ(angle) instead of rotate(rot) after your translate().scale(). It has the same effect.

EDIT:
But generally, when you apply small delta rotations every frame to a quaternion or matrix, floating point inaccuracies will build up, and there will come a time when the scaling effect becomes noticeable. So your best bet is to normalize every once in a while. Or do not apply deltas to a quaternion/matrix at all, but instead accumulate the angle, take it modulo a full turn, and rotate once every frame by the complete angle.
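The "accumulate the angle" approach can be sketched like this. Instead of composing a tiny delta rotation into the quaternion every frame (where error accumulates), keep one float angle, wrap it, and rebuild the rotation from scratch each frame; the class and field names are illustrative:

```java
// Accumulate a single angle and build the rotation fresh every frame,
// so floating point error never compounds inside a quaternion/matrix.
public class AccumulatedRotation {
    static float angle = 0f;

    static float update(float deltaTime) {
        angle += 1f * deltaTime;              // 1 radian per second, like rotateZ(1f * deltaTime)
        angle %= (float) (2.0 * Math.PI);     // keep the angle in [0, 2*PI)
        return angle;                         // rebuild the rotation from this each frame
    }

    public static void main(String[] args) {
        // Each frame: new Matrix4f().translate(...).scale(...).rotateZ(update(dt))
        for (int i = 0; i < 1000; i++) update(0.016f);
        System.out.println(angle); // always stays within [0, 2*PI)
    }
}
```

Because the matrix is rebuilt from one clean angle, there is no drift to renormalize away.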

OK, that’s weird.

Thanks for the help anyway.

It’s not unreasonable to use double quaternions/matrices for this reason. I use double precision view and projection matrices and convert the matrices to float matrices when uploading them to OpenGL.

Wow, thanks @theagentd! Using Vector3d's, Quaterniond's, etc. has fixed the issue :slight_smile:

Another very common problem is when you have a big world and need good precision matrices. Let's say you have a camera tracking an object at position (10 000 000, 10 000 000, 0). In this case, you would end up adding 10 000 000 to the XY coordinates of the model's vertices when multiplying by the model matrix, then subtracting 10 000 000 again when multiplying by the view matrix (which transforms positions to be relative to the camera). At a magnitude of 10 000 000, 32-bit floats only have a precision of 1.0, meaning that every single vertex will essentially be rounded to the nearest integer coordinates after this transformation.

If you were to precompute a model-view matrix by multiplying the two matrices together, you would end up with a similar but less severe precision problem: the individual vertices would not suffer from horrible precision (the shape of the model isn't completely broken), but the position of the camera and the translation of the model would still suffer, causing objects to appear to stutter around.

However, if you compute the model-view matrix at double precision and only convert it to 32-bit precision when uploading it to OpenGL, the matrix multiplication is carried out with such precision that you would need much larger values than 10 000 000 before the precision of the final 32-bit matrix starts to suffer at all, and even larger values before you actually start noticing problems.
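The precision argument above can be demonstrated numerically without any OpenGL at all. This sketch does the "translate out by 10 000 000, translate back" round trip once at float precision and once at double precision, converting to float only at the end:

```java
// At magnitude 10 000 000, float's spacing between representable values
// is 1.0, so a fractional vertex coordinate is destroyed by the round trip.
// Doing the same arithmetic in double and casting last preserves it.
public class PrecisionDemo {
    public static void main(String[] args) {
        float vertex = 0.25f; // a vertex coordinate with a fractional part

        // Float path: model matrix adds 10 000 000, view matrix subtracts it
        float big = 10_000_000f;
        float floatResult = (vertex + big) - big;

        // Double path: same arithmetic at double precision, float conversion last
        double bigD = 10_000_000.0;
        float doubleResult = (float) ((vertex + bigD) - bigD);

        System.out.println(floatResult);  // 0.0  (fraction lost)
        System.out.println(doubleResult); // 0.25 (fraction preserved)
    }
}
```

This is exactly the difference between multiplying model and view matrices at float precision versus computing the model-view product in double and uploading a float copy.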

Yikes! I don’t like this file… you are a very stoic programmer! X)

Nonetheless, let me point normals in your direction. (HA FUNNY JOKE)

Each object uses 3 matrices and 3 3D vectors. It is based on one VAO and a common but variable number of VBOs. This is my approach to it. Don't forget the vertex count or index count (IBO)!

Since two of these matrices are generated separately from the object, we don't need to store references to them; come render time, we just pass them through to the shader. This leaves us with one matrix we need to maintain per object: the transformation matrix, i.e. the model matrix, which determines from the 3 vectors where the vertices are drawn. The other two matrices are the projection matrix and the camera matrix, and the model matrix is multiplied after them (projection * camera * model). The camera matrix has MANY names, but in the end it just transforms the world as if seen through a camera.

So the first thing you need is a class that holds everything unique to every single element you render. Then you need a class that holds everything common, static for each model. Finally, you can just store a list of the class that holds everything different. Come render time, you just loop over it.

soo… Pseudo code…



public class Positional {
    final Vec3 position = new Vec3(), scale = new Vec3(1, 1, 1), rotation = new Vec3();

    public Mat4 generateMatrix() {
        final Mat4 matrix = new Mat4();
        matrix.translate(-position.x, -position.y, -position.z);
        matrix.rotate(rotation.x, 1, 0, 0);
        matrix.rotate(rotation.y, 0, 1, 0);
        matrix.rotate(rotation.z, 0, 0, 1);
        matrix.scale(scale);
        return matrix;
    }
}




public class Model {

    private int vao, vertex_count;
    private int[] vbos, offshore_vbos;

    private ArrayList<Positional> models;

    public Model(int vao, int[] vbos, int[] offshore_vbos) {
        this.vao = vao;
        this.vbos = vbos;
        this.offshore_vbos = offshore_vbos;
        models = new ArrayList<Positional>();
    }

    public void addModel(Positional model) {
        models.add(model);
    }

    public void removeModel(Positional model) {
        models.remove(model);
    }

    public ArrayList<Positional> getModels() {
        return models;
    }

}


Now you can see from here that you don't need any “batch” stuff anymore. You just loop through the list once you bind your VAO. I STRONGLY suggest that you create a class that takes in a model and renders it. You only need 1-3 render methods, which is a great thing.

Also, I want you to check out my ModelLoader class… or something that generates VAO and VBOS.
http://www.java-gaming.org/index.php?action=pastebin&hex=11db92c574115

If you need help on the render method I’d love to help <3
Anyone: If you have anything to suggest to me, I’m all ears.

Edit:
Checked the posts in more depth. RIP I don’t use quaternions… RIP I think I misunderstood the context of what OP was trying to do.

@Hydroque

Actually, I am coming to the point where the way I am batching is becoming messy and hard to work with. So all of what you said is very helpful. So thanks.

Although I am not quite sure what you mean by 'offshore_vbos', or what they are?

Thanks.

Edit:

Just looked through the ModelLoader class you uploaded. So are the 'offshore_vbos' just index VBOs?

While it may appear that way, that's pretty much what it is. I have NOTHING to pass in as a VBO (per vert/frag), so it is there for extensibility.

I am VERY glad that I helped you out. If you have any specific questions you need answered, you can always contact me through any means available.

I am having trouble rendering, eg. Whenever I call glDrawElements my code does the ‘Java™ Platform SE binary has stopped working’ thing.

I have checked and the crash is definitely caused by glDrawElements.

Relevant Code:

Renderer: http://pastebin.com/nzjRGke1
Mesh: http://pastebin.com/R2yCjLv2
RenderModel: http://pastebin.com/0b1E2pnf
Object/GameObject: http://pastebin.com/Sw4eAfYi

I have tried using breakpoints to check for any problem but I don’t see any.
Am I forgetting to bind something?

Without looking at your code: A likely reason for a draw call crash like this is:
You have a vertex attribute enabled which is not backed by a buffer object.
I.e.: you either glEnableClientState(GL_VERTEX_ARRAY/GL_TEXTURE_COORD_ARRAY/GL_NORMAL_ARRAY/…) but did no glVertexPointer/glTexCoordPointer/glNormalPointer/… or you did glEnableVertexAttribArray(index) but did no corresponding glVertexAttribPointer(index, …) with a buffer object bound.

Thanks, I will try this once I have fixed a new MAJOR problem (possibly related) where my game loop is constantly updating logic and not rendering at all.

Not sure that is what the problem is. I have compared my code to an earlier -working- project and I am doing the same things(just structured differently). I am going to keep looking through though and testing things.

Edit:

Found the problem: I had an 'in vec4 color' variable in my vertex shader which I forgot to remove, so my UV data was being forwarded to it instead of to the texcoord location.

I'm glad you have everything working. If you post a zip of your files I'll take a look through the code and spot any abnormalities.