LWJGL - Rotate/Translate using fixed functions or shaders?

I’m still learning how to create and use GLSL shaders for my 3D FPS game in LWJGL. Should I handle rotation and translation with the fixed-function calls glRotated() and glTranslated(), or should I pass those values to a shader instead? I’m wondering which is better for both performance and convenience. For example, if I also need lighting: can a single VBO (my current rendering method) even be drawn with multiple shaders, or would it be better/easier to keep rotation/translation in the fixed-function calls and write separate shaders for everything else?

For example, here’s how I currently do translations for VBOs:


glPushMatrix();
	glRotated(rotation.x, 1, 0, 0); //rotation about the X axis (pitch)
	glRotated(rotation.y, 0, 1, 0); //rotation about the Y axis (yaw)
	glRotated(rotation.z, 0, 0, 1); //rotation about the Z axis (roll)
	glTranslated(position.x, position.y, position.z);
	glDrawArrays(GL_TRIANGLES, 0, model.faces.size() * 3); //draw the VBO with the rotation/translation applied
glPopMatrix();

glPushMatrix() and glPopMatrix() surround this code because every frame the camera also translates/rotates the whole scene according to its own position and rotation, and each object’s transform shouldn’t leak into the other draws.
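For reference, the math those fixed-function calls perform can be sketched in plain Java (class and method names here are illustrative, not LWJGL API; OpenGL’s column-major layout is assumed). Note that because each call post-multiplies the current matrix, this call order builds M = Rx·Ry·Rz·T, so a vertex is effectively translated first and then rotated about the origin:

```java
// Sketch of the matrix math behind glRotated/glTranslated, assuming
// OpenGL's column-major layout (m[col*4 + row]).
public class FixedFunctionMath {

    /** Rotation about the Y axis by degrees, column-major 4x4. */
    public static float[] rotationY(double degrees) {
        double r = Math.toRadians(degrees);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] {
             c, 0, -s, 0,   // column 0
             0, 1,  0, 0,   // column 1
             s, 0,  c, 0,   // column 2
             0, 0,  0, 1    // column 3
        };
    }

    /** Translation by (x, y, z), column-major 4x4. */
    public static float[] translation(float x, float y, float z) {
        return new float[] {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            x, y, z, 1
        };
    }

    /** Matrix product a*b (both column-major 4x4). */
    public static float[] mul(float[] a, float[] b) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        return out;
    }

    /** Transform a homogeneous point v = {x, y, z, w} by m. */
    public static float[] transform(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++)
            for (int k = 0; k < 4; k++)
                out[row] += m[k * 4 + row] * v[k];
        return out;
    }

    public static void main(String[] args) {
        // Same order as the GL calls: rotate 90 degrees about Y, then translate (1,0,0).
        float[] m = mul(rotationY(90), translation(1, 0, 0));
        // The model origin is moved to (1,0,0) first, then swung 90 degrees to (0,0,-1).
        float[] p = transform(m, new float[] {0, 0, 0, 1});
        System.out.printf("%.2f %.2f %.2f%n", p[0], p[1], p[2]);
    }
}
```

A matrix like this is exactly what you would later hand to a shader in one glUniformMatrix4fv call instead of issuing the four fixed-function calls.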

I think I could keep doing this without issue, or I could pass the rotation/translation values to a shader and let it handle the transform. What do you guys think? Also, is it possible to use multiple shaders on one VBO (say, first a translation/rotation shader and then a lighting shader), or would I need a single shader that does translation, rotation, and lighting?

There are plenty of threads already about this, but…

There are pros and cons to both, but I would recommend just moving over to shaders. As your code base grows and performance becomes more of a concern, you’ll want to avoid maintaining both pipelines, and since you’re already reworking your rendering, you might as well switch now.

As far as shaders go, you’d have to render the same geometry again to use a different shader program, but each program can have both a vertex shader and a fragment shader attached. Generally you perform the vertex operations (rotate, translate, scale) in the vertex shader and the lighting in the fragment shader, so a single program covers both jobs in one draw call.
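That split can be sketched roughly as follows, with the GLSL held as Java strings the LWJGL way. The uniform names (uProjection, uModelView, uLightDir) and the version are assumptions for illustration; the compile/link calls are left in comments because they need a live GL context:

```java
// Sketch: one shader program = one vertex shader + one fragment shader.
// The vertex shader applies rotation/translation via a single model-view
// matrix uniform; the fragment shader does a simple diffuse lighting term.
public class ShaderSources {

    public static final String VERTEX =
        "#version 120\n" +
        "uniform mat4 uProjection;\n" +
        "uniform mat4 uModelView;   // rotation + translation, built on the CPU\n" +
        "varying vec3 vNormal;\n" +
        "void main() {\n" +
        "    vNormal = mat3(uModelView) * gl_Normal;\n" +
        "    gl_Position = uProjection * uModelView * gl_Vertex;\n" +
        "}\n";

    public static final String FRAGMENT =
        "#version 120\n" +
        "uniform vec3 uLightDir;    // direction toward the light, normalized\n" +
        "varying vec3 vNormal;\n" +
        "void main() {\n" +
        "    float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);\n" +
        "    gl_FragColor = vec4(vec3(diffuse), 1.0);\n" +
        "}\n";

    // With a live context, compiling/linking looks roughly like:
    //   int vs = glCreateShader(GL_VERTEX_SHADER);   glShaderSource(vs, VERTEX);   glCompileShader(vs);
    //   int fs = glCreateShader(GL_FRAGMENT_SHADER); glShaderSource(fs, FRAGMENT); glCompileShader(fs);
    //   int prog = glCreateProgram(); glAttachShader(prog, vs); glAttachShader(prog, fs); glLinkProgram(prog);
    // Then glUseProgram(prog) and glUniformMatrix4fv(...) each frame before glDrawArrays.
}
```

The key point: both shaders are attached to the same program, so transform and lighting happen in one pass; you only need a second program (and a second draw of the geometry) for a genuinely different rendering style.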

Fair enough; I’ve already started moving everything graphical over to shaders. I apologize if there are earlier threads just like this one, but I couldn’t find one asking this specific question.

Alright, I just finished moving everything graphics-related over to shaders. I now have translation, rotation, ambient lighting, and texturing all running through them. :)
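For anyone finding this later: the ambient term usually amounts to scaling the sampled texture color componentwise by a constant ambient light. A minimal sketch of that math on the CPU side (the names and sample values are made up for illustration, not from the actual shader):

```java
// Sketch of the ambient lighting math a fragment shader performs, e.g.
//   gl_FragColor = vec4(uLightColor * uAmbientStrength, 1.0) * texColor;
public class AmbientDemo {

    /** Componentwise: lightColor * ambientStrength * texColor, RGB only. */
    public static float[] ambient(float[] lightColor, float ambientStrength, float[] texColor) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            out[i] = lightColor[i] * ambientStrength * texColor[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // White light at 20% ambient strength over a sampled texel.
        float[] c = ambient(new float[] {1f, 1f, 1f}, 0.2f, new float[] {0.5f, 1f, 0.25f});
        System.out.printf("%.2f %.2f %.2f%n", c[0], c[1], c[2]);
    }
}
```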