Rendering

Ok, I’ll just throw a few things your way, and you can do with them what you will. (Maybe others will jump in as well.)

First, I’d recommend just getting things working, irrespective of performance concerns and so on. In other words, fix the crash (if you haven’t already) and get things rendering (one way or another) using VBOs.

Also, this is tangential (sort of), but I’d recommend anchoring your quads at the center rather than a corner. That is, instead of this:

vertexData.put(new float[]{x, y, x + width, y, x + width, y + height, x, y + height});

You’d do something like this (untested):


final float extentX = width * 0.5f;
final float extentY = height * 0.5f;
vertexData.put(new float[]{
    x - extentX, y - extentY,
    x + extentX, y - extentY,
    x + extentX, y + extentY,
    x - extentX, y + extentY
});

This will be more natural for most things, including rendering.

As for how to go about rendering, there are a lot of different ways to do it, more than could be reasonably summarized here, so I’ll just touch on a couple things.

The simplest approach would be to create a separate VBO for each quad size you need, then for each entity, set the transform and other render state, bind the appropriate VBO(s), and render (this would apply using the programmable pipeline as well). Some drawbacks of this method are a) you’re only storing a little geometry in each VBO, which makes for a lot of VBO overhead and isn’t really how VBOs are best used, and b) you have to make a lot of draw calls, which can be costly on some platforms. It may not even perform better than immediate mode, necessarily, but it at least has the advantage of using more modern techniques.

The more usual method addresses both of the issues mentioned above, but naturally is more complex to implement. The general idea is to batch your geometry so as to make a minimum number of draw calls. This basically necessitates transforming the geometry yourself. This is pretty simple if all you’re doing is translating and maybe scaling, but for more complex transforms you’ll probably want to have a good math library available.

The procedure goes something like this (off the top of my head, so I may not hit everything):

  • On startup, create a set (could be just one, depending) of VBOs in ‘stream’ mode, of sufficient size to hold the largest batch you ever expect to render in one go. If you’re not sure how large the largest batch will be, you can grow the buffers as needed, or limit batch sizes.
  • We’ll set aside texture atlases for now - they can make for better batching, but you can do plenty of optimization without them.
  • For rendering, you want to sort everything you’re going to render by state so as to minimize how often you have to break batches. Things that break batches are anything that can’t change in the middle of a draw call (more or less), like blend mode, textures, and programs. So for example, let’s say you have a bunch of entities that use texture A, and a bunch of entities that use texture B. Assuming there’s nothing else that would require you to split them up, you’d want to render all the entities with texture A, then all the entities with texture B. Note that even without batching this can be a win if your rendering system skips redundant state changes, because you’re not changing texture state as often.
  • For a 2-d game, you may also have to factor in layering (i.e. painter’s algorithm) in your render order. Or, you could use the z-buffer and draw things at different z depths.
  • For each batch (same blend mode, texture, etc.), you pre-transform all the geometry and put it in your VBOs (e.g. using glBufferSubData()). Then you make your draw call.
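For instance, the per-quad packing step of a batch might look something like this (an untested sketch with hypothetical names; the filled array would then be staged in a FloatBuffer, uploaded with glBufferSubData(), and drawn with a single draw call):

```java
public class BatchPacker {
    /**
     * Writes one quad's four corners, translated by (x, y), into `batch`
     * starting at `offset`. Returns the next free offset. After packing
     * every quad in the batch, you'd upload the array in one go, e.g.:
     *   GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, stagedBuffer);
     * and then make a single draw call for the whole batch.
     */
    public static int packQuad(float[] batch, int offset,
                               float x, float y, float width, float height) {
        final float extentX = width * 0.5f;
        final float extentY = height * 0.5f;
        float[] corners = {
            x - extentX, y - extentY,   // bottom-left
            x + extentX, y - extentY,   // bottom-right
            x + extentX, y + extentY,   // top-right
            x - extentX, y + extentY    // top-left
        };
        System.arraycopy(corners, 0, batch, offset, corners.length);
        return offset + corners.length;
    }
}
```

Note the quads are centered on (x, y) as suggested earlier, and the translation is applied on the CPU; that's the 'transform the geometry yourself' part.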

Again though, I’d start with just getting what you have now working, and then make incremental improvements.

So if I wanted to move something left in my code, all I need to add is the SubData method? I’ll look it up more as I’m still on my phone, but a lot of the stuff you said was way over my head xD. I’ll reread it when I’m on my computer, but I guess the way I render everything in a level is way different from what you’re saying to do. I’ll fix up the rendering, get it working, then let you know where I stand.

Well, something like that :slight_smile:

[quote]I’ll look it up more as I’m still on my phone, but a lot of the stuff you said was way over my head xD.
[/quote]
Yeah, I wouldn’t worry about that stuff right now. ‘Get it working, then optimize if needed’ is probably a good plan.

Ok, first off I can say the crash has been fixed by fixing the color. The thing I don’t get is… the color. Lol it’s weird, I just want a picture to load, but it has a lot of shadows/blackness over it. I don’t know what colors to use, and I have no clue if it’s better to just not use a color VBO.

Picture of it: http://gyazo.com/185aa9496271e535856130f7e8c45036

Also, I was looking into the SubData method, and it confuses me how it really works. I also saw people complaining about it taking too long? I don’t know exactly, but here’s the example of it that kind of confused me.

Wiki: http://wiki.lwjgl.org/wiki/The_Quad_updating_a_VBO_with_BufferSubData

I know the Wiki is using shaders, but idk. I thought SubData would just let you give it the new coords to something, but it’s way different than that.

The wiki gave this bit of code:

GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);

// Apply and update vertex data
for (int i = 0; i < vertices.length; i++) {
    TexturedVertex vertex = vertices[i];

    // Define offset
    float offsetX = (float) (Math.cos(Math.PI * Math.random()) * 0.1);
    float offsetY = (float) (Math.sin(Math.PI * Math.random()) * 0.1);

    // Offset the vertex position
    float[] xyz = vertex.getXYZ();
    vertex.setXYZ(xyz[0] + offsetX, xyz[1] + offsetY, xyz[2]);

    // Put the new data in a ByteBuffer (in the view of a FloatBuffer)
    FloatBuffer vertexFloatBuffer = vertexByteBuffer.asFloatBuffer();
    vertexFloatBuffer.rewind();
    vertexFloatBuffer.put(vertex.getElements());
    vertexFloatBuffer.flip();

    GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, i * TexturedVertex.stride,
            vertexByteBuffer);

    // Restore the vertex data
    vertex.setXYZ(xyz[0], xyz[1], xyz[2]);
}

I guess I just don’t get what glBufferSubData is doing, because everywhere I look it talks about 4 bytes and whatnot. If this is what I should use to “move” my objects left, right, up, and down… then there has to be a simple way of doing this.

Great!

[quote]The thing I don’t get is… the color. Lol it’s weird, I just want a picture to load, but like it has a lot of shadows/blackness over it. I don’t know what colors to use, and I have no clue if it’s better to just not use a color vbo.
[/quote]
I’m not sure what’s causing the color problem, but there’s no requirement to use colors. If all you’re doing is using full white with unity alpha (1, 1, 1, 1) for the colors, you can just drop the colors entirely.

[quote]Also, I was looking into the SubData method, and it confuses me how it really works. I also saw people complaining about it taking too long?
[/quote]
Regarding it taking ‘too long’, I wouldn’t worry about that. Probably what you read had to do with comparing different methods of updating mesh data dynamically with respect to performance. It’s true that using glBufferSubData() in a straightforward way may not be the most efficient solution possible, but I don’t think that’s a concern here.

[quote]I guess I just don’t get what glBufferSubData is doing, because everywhere I look it talks about 4 bytes and whatnot. If this is what I should use to “move” my objects left, right, up, and down… then there has to be a simple way of doing this.
[/quote]
I’m certainly happy to explain further, but keep in mind you’re not required to use glBufferSubData(). If it’s not making much sense at the moment, you could just not worry about it for now and stick with static meshes.

I also feel obliged to mention again that there are frameworks (like LibGDX) that will handle all this for you. Note that I’m not necessarily advocating for this. The low-level stuff is important, and after all, someone has to understand it or we wouldn’t have frameworks like LibGDX in the first place :slight_smile: On the other hand, productivity is also an important goal, so just keep in mind that the option of using an existing framework is always available.

If you want to stick with your current path, I’d suggest getting things working with static meshes (if you haven’t already), and then re-evaluating if at that point performance isn’t acceptable or you want to implement a more involved rendering system.

What do you mean by static meshes? I’ll do it if it means not using a library, as I’m trying to stay away from that. xD

Also if you wanna explain more about SubData feel free. Honestly, you’ve been great at helping me understand things, and I really just want to learn as much as possible.

Not to mention… every source I go to just copies and pastes a response saying the same confusing thing.

[quote]What do you mean by static meshes?
[/quote]

[quote]Also if you wanna explain more about SubData feel free.
[/quote]
Well, I admit it can be a little hard to explain, but I’ll try to answer both the above questions in one go.

First, by ‘static’ I just mean that the mesh data never changes after you initially create it.

When you allocate a buffer in OpenGL (using glBufferData), you get to supply a hint for how you expect the buffer to be used. For example, you’d specify ‘static’ if you never expect to change the buffer data after the initial upload, or ‘stream’ if you expect to change the data frequently. To the best of my knowledge these are just hints and don’t actually have any effect on what you can or can’t do with the buffer later, but for performance reasons you should try to choose a hint that matches how you intend to use the buffer.

In simple terms, glBufferData and glBufferSubData do the following:

  • glBufferData allocates memory for a buffer, and optionally allows you to upload data to the buffer as well.
  • glBufferSubData allows you to upload data to a buffer that’s already been allocated via glBufferData.

So, glBufferData is more or less required (that is, you have to call it at some point if you want to use the buffer at all), but glBufferSubData is entirely optional and only need be used if you want to update the data in an existing buffer for some reason.
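As a rough LWJGL-flavored sketch of the two calls (hypothetical names, untested against a real GL context); the java.nio staging step is the same for both:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferStaging {
    /** Stage vertex data in a direct FloatBuffer, ready for upload. */
    public static FloatBuffer stage(float[] vertices) {
        FloatBuffer buf = ByteBuffer
                .allocateDirect(vertices.length * 4)   // 4 bytes per float
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buf.put(vertices);
        buf.flip();                                    // prepare for reading
        return buf;
    }

    // Allocation plus initial upload, called once:
    //   GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
    //   GL15.glBufferData(GL15.GL_ARRAY_BUFFER, stage(vertices),
    //           GL15.GL_STATIC_DRAW);   // or GL_STREAM_DRAW if updating often
    //
    // Later update of part of an already-allocated buffer, only needed
    // if the data actually changes:
    //   GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, byteOffset,
    //           stage(newVertices));
}
```

The usage hint (GL_STATIC_DRAW vs. GL_STREAM_DRAW) goes in the glBufferData call, matching the ‘static’ vs. ‘stream’ distinction above.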

A lot of times in OpenGL you just want to create a mesh and never change it. This is often referred to as a ‘static’ mesh both because ‘static’ means unchanging, and also because the corresponding OpenGL usage hint is called ‘static’. So whenever I say ‘use a static mesh’ I just mean to create a mesh once using glBufferData (with the ‘static’ usage hint), and then never change it.

Sometimes however you want to modify the data in an existing buffer. There are a lot of use cases for this: batching (like I was describing earlier), animation, deformation, and so on. The use case that could be of immediate interest to you, I think, is batching. But, to repeat myself, I don’t think you necessarily need to start with that, as it can be a little complicated.

That’s pretty long already, so I’ll stop there, but feel free to ask for clarification if you need it.

I guess the thing I’m still stuck on is… so the vertex VBO I have saves the data for the location. You’re saying a static mesh doesn’t change, but for a game the texture for each creature/item/entity can and most likely will move. Therefore the data will change, so I would need to do something to change that data. That’s what I thought the SubData method was for, but I didn’t understand how that was done.

Now regarding batching, I’ve never worked with this before and have no clue what to do with it. So I may just be missing what you’re saying, but I don’t know what to do exactly to move the coords for each “creature/texture”. If batching is the way to do it, I still have no clue how to do it with VBO’s xD.

I think I understand buffers… but I guess the way data (x,y) for each vertex is stored/changed is a little unclear for me cause Idk what to do to change it :confused:

[quote]Now regarding batching, I’ve never worked with this before and have no clue what to do with it. So I may just be missing what you’re saying, but I don’t know what to do exactly to move the coords for each “creature/texture”. If batching is the way to do it, I still have no clue how to do it with VBO’s xD.
[/quote]
To keep things simple, let’s forget about batching for now. It’s just an optimization, and one that you may not even need.

[quote]I guess the thing I’m still stuck on is… so the vertex VBO I have saves the data for the location. You’re saying a static mesh doesn’t change, but for a game the texture for each creature/item/entity can and most likely will move. Therefore the data will change, so I would need to do something to change that data. That’s what I thought the SubData method was for, but I didn’t understand how that was done.

I think I understand buffers… but I guess the way data (x,y) for each vertex is stored/changed is a little unclear for me cause Idk what to do to change it :confused:
[/quote]
Although I think I understand what you’re getting at above, several things there are inaccurate as stated, so it’s probably worth clearing up some concepts and terminology. Apologies if this seems too basic.

A mesh, as you probably know, is a collection of data such as vertex positions, texture coordinates, and so on that can be used to render something. (The quads you’re working with are meshes, just simple ones.) A texture is an image that’s ‘painted’ onto mesh geometry. For a simple 2-d game, entities are often just quad meshes with a single texture (I think that’s what you’re doing).

A transform specifies how some set of geometry should be positioned, oriented, and otherwise shaped and placed in the world. Transforms can be pretty complex, but for a 2-d game all you usually need is translation (the position of the object), and maybe scale (its size) and orientation (what angle it’s at). If your entities don’t change size and never rotate, all you need is translation (position). In the OpenGL fixed-function pipeline, there’s a family of functions (e.g. glTranslatef() and so on) that allow you to specify the transform for the next draw call.

An entity in your game will typically consist of a mesh, a texture, and a transform. I think some of your confusion is in thinking that the mesh and/or texture somehow have to ‘move’ if your entity moves. They don’t. The mesh and texture can be entirely static and unchanging; you move the entity by changing the transform.

So:

mesh+texture+transform = entity

Mesh and texture are static, transform changes as needed.
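As a sketch of that split (hypothetical names; the float array here stands in for what would live in a VBO):

```java
/** The mesh never changes; only the transform (here just a translation) does. */
public class Entity {
    // Static unit quad, anchored at the center; would be uploaded to a
    // VBO once with GL_STATIC_DRAW and never touched again.
    final float[] mesh = {-1f, -1f, 1f, -1f, 1f, 1f, -1f, 1f};

    float tx, ty; // the transform: a simple translation (the entity's position)

    public void move(float dx, float dy) {
        tx += dx;   // only the transform changes when the entity moves
        ty += dy;
    }

    /**
     * Where mesh vertex i ends up on screen. This addition is exactly what
     * glTranslatef(tx, ty, 0) would do for you at render time.
     */
    public float[] renderedVertex(int i) {
        return new float[]{mesh[2 * i] + tx, mesh[2 * i + 1] + ty};
    }
}
```

Moving the entity changes tx/ty only; `mesh` (and hence the VBO) never needs re-uploading.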

Once again I’ll stop there due to length, but maybe that will help clear things up a bit.

Ok, thank you for clearing up meshes and how quads are a simple form of them. I get how the textures work with quads, but the thing I still don’t get is this. The way I was rendering each object before I used VBOs, each vertex would be based off of an x and y, and that would run every frame. So in a sense, those coordinates would be updated every frame based on the x and y of the object. Now, with VBOs, those coordinates are made and stay the same no matter what x and y change to. That’s because the coordinates are created once, and saved.

I’m actually using the transform method in my game currently:

	public void moveRight(float length)
	{
		direction = 1;
		//Safely moves right by the length provided if possible
		if(canMove("Right",length) == true)
		{
			//checks if the user will stay in the position it should and can actually move to where it wants to
			if(this.x < 350 && (this.x + length) <= 350)
			{
				this.x+= length;
			}
			else if(this.x >= 350)
			{
				GL11.glTranslatef(-length,0,0);
				this.x += length;
			}
			//GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 10, vertexData);
		}
	}

This is used in moving left, and right for the player.

From my understanding, this moves the entire world, since what acts as a “camera” is always looking at the same place. I have this all working just fine for the moment, but it doesn’t translate single objects in a game. Maybe I have a misunderstanding with glTranslate, but I would think I need to update the coords for the VBO still. Also I get how the texture is static and I just need to change the texture coords (even though I’ve never done it before), but Idk. I’m not changing the size either atm, just the location.

Anyway, I think my confusion lies based on one thing you said.

[quote]I think some of your confusion is in thinking that the mesh and/or texture somehow have to ‘move’ if your entity moves. They don’t. The mesh and texture can be entirely static and unchanging; you move the entity by changing the transform.
[/quote]
I thought the transform method transforms the world… so I guess my mind is just blown lol. Maybe the 2 zeros are what dictate that. I feel like an OpenGL noob xD But really, if the transform method is a lot more flexible than what I think, my life may be a lot easier.

I think I should probably clarify (for you and anyone else reading) that the approach I’m describing is only one of various approaches you can take. The reason I’ve been sticking with it is that it’s arguably simpler than approaches that use batching, and it seems like a good starting point for using VBOs. But, I don’t want to give the wrong impression that it’s the only way to do it.

That said, I think I’m starting to follow your line of thought more clearly. Originally, you were just specifying the vertex positions for each quad manually, e.g.:

GL11.glVertex2f(x, y);

And so on. So, when you switched to VBOs, your first thought (I think) was to do the same thing with the VBO, that is, to modify the vertex positions in the VBO (just as you were doing above with glVertex2f) whenever the entity moved. That makes perfect sense, so I get where you’re coming from there.

And in fact, you absolutely could do it that way. Furthermore, doing it that way is more or less required if you want to batch multiple entities into a single draw call. It’s probably also the way most games and game frameworks do it. The reason I steered away from that approach is that it can be more complicated to implement, and is less straightforward with VBOs than it is with immediate mode.

So to clear things up a little, there are, roughly speaking, two approaches you can take to rendering an entity:

  • Set a transform for the entity to specify its position, rotation, etc., and then render a static mesh using that transform.
  • Just leave the transform at identity (basically ‘no transform’), and manipulate the quad vertex positions directly, as you were doing initially and as you suspected needed to be done with VBOs as well.

So to restate, you can either change the transform and leave the vertex positions alone, or leave the transform alone and change the vertex positions. As far as what the user sees onscreen, the results will be the same either way. Each approach has situations in which it’s appropriate. I’ve been advocating for the ‘change the transform’ method because it’s important to understand and is simpler in some ways, but both are valid.

I’m trying (unsuccessfully) to keep these posts from being too long, so I won’t get into a lengthy discussion of transforms here. Here are a couple of hints though. First, from what you posted I think you’re probably using glTranslatef() incorrectly. It looks like you’re accumulating transforms frame-to-frame, and you don’t want to do that. You shouldn’t be calling any OpenGL transform functions from your ‘move’ functions. Those calls should only be made from your render functions. Rendering an entity would look something like this:

clear the transform (glLoadIdentity)
set the translation (glTranslatef)
set render state (textures, etc.) and make your draw call

If you intend to have a moving camera in your game (e.g. one that follows the player), things are a little more complicated, but not much.

I thought you couldn’t easily make a camera follow the player except for how I was doing it. Honestly, I was just going by what I saw in a tutorial on YouTube. I would love to hear how that is done (correctly), but sorry for causing so many long threads xD. Also, I’m not dealing with any rotations or anything atm, just translations (people can jump but that’s still translations). Also, just to be clear, I have a game (instance of it) running a loadScreen method every frame, which then draws every object loaded using a draw() method. This calls each object’s drawEntity or drawTile method or whatnot, which would be the updateVBO() method I have. If that’s the correct way to do it as you said, then I’m confused on how I should implement the translate method into everything (Player Camera vs. VBOs).

Also, if there’s already a lengthy tutorial just for this out there that explains everything I’m asking, I’ll look at that instead of having you write really long threads xD. Up to you though, I’m just here to learn about the right way to make games (and make my game before August xD).

[quote]Also, if there’s already a lengthy tutorial just for this out there that explains everything I’m asking, I’ll look at that instead of having you write really long threads xD. Up to you though, I’m just here to learn about the right way to make games (and make my game before August xD).
[/quote]
There’s a lot of info out there, but honestly I wouldn’t know where to send you. With all the changes OpenGL has undergone, tutorials are kind of all over the place in terms of what they cover and whether they’re up to date. Maybe someone else can offer some suggestions though.

Unfortunately it’s hard to give these subjects good treatment in short forum posts, so I may just be adding to the confusion here. I’ll go ahead and mention a couple more things though.

Fortunately, how all this works at a conceptual level (transforms, etc.) doesn’t really change in modern OpenGL, so even though the details will be different, all this knowledge will remain relevant. In simple terms, the difference as far as transforms go is just that the fixed-function pipeline does a lot of the work for you, while in modern OpenGL you have to do more of it yourself. But, the concepts are the same.

The camera thing is unfortunately a little hard to explain without getting into some complex concepts like transform inversion and concatenation. Instead of getting into all that, I’ll just offer some pseudocode (off the top of my head, may be errors) showing what rendering a single frame with a camera that follows the player might look like:

clear the transform (glLoadIdentity)
glTranslatef(-player.x, -player.y)
for each entity (including the player)
push matrix stack (glPushMatrix)
glTranslatef(entity.x, entity.y)
render entity
pop matrix stack (glPopMatrix)
end for
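The net effect of those two translations is just subtraction: each entity renders at its world position minus the player’s. A tiny sketch of that math (hypothetical names):

```java
public class CameraMath {
    /**
     * With the camera translation of (-player.x, -player.y) applied first,
     * followed by the per-entity translation of (entity.x, entity.y),
     * a vertex at the entity's origin ends up at:
     */
    public static float[] screenPosition(float entityX, float entityY,
                                         float playerX, float playerY) {
        return new float[]{entityX - playerX, entityY - playerY};
    }
}
```

So with the player at (350, 1), an entity at (400, 5) renders at (50, 4). Every entity shifts by the same amount each frame, so the screen simply scrolls rather than jumping around.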

Note that everything having to do with OpenGL transforms is confined to the rendering code - you don’t have to call OpenGL transform functions from anywhere else or accumulate transforms in OpenGL or anything like that. Put another way, when it’s time to render a frame, you always start with a ‘clean slate’ as far as the OpenGL transform is concerned, and then go from there.

I get what you’re saying regarding the render code. I get how that could be a better use of the translate method. Maybe I need to mess around with it more just to fully understand it, but even so there’s one big issue here that I’m still not getting.

VBOs store the coords once in a FloatBuffer.

That VBO is then rendered.

When that VBO needs to move… the coords need to move let’s say x += 10

Are you saying when I run the translate

glTranslatef(-player.x, -player.y)

it sets 0,0 to that spot?

Cause I have my game set so 0,0 is in the bottom left.

clear the transform (glLoadIdentity)
glTranslatef(-player.x, -player.y)
for each entity (including the player)
  push matrix stack (glPushMatrix)
  glTranslatef(entity.x, entity.y)
  render entity
  pop matrix stack (glPopMatrix)
end for

What I get from this is that if the game starts at 0,0, and you want to go by the player, it goes the opposite x and y of the player, and if the player is at 350,1… the 0,0 would be at -350,-1. Then it goes through each object and translates the “0,0” coord to that object’s x,y. Wouldn’t this make the screen freak out?

[quote]Put another way, when it’s time to render a frame, you always start with a ‘clean slate’ as far as the OpenGL transform is concerned, and then go from there.
[/quote]
At least… that’s how I take that sentence. Starts clean, and you move over to those coords.

But with VBOs… how would I move it? The coords of the VBO are stored in a FloatBuffer, and I would need to update those. I just don’t get, within code, how I would move that VBO around with glTranslate. I’m starting to get what you’re saying, but that just keeps getting in the way of me understanding.

Also, one last thing.

[quote]The camera thing is unfortunately a little hard to explain without getting into some complex concepts like transform inversion and concatenation.
[/quote]
Ya Transform Inversion seems like something I should wait to learn until I learn glTranslate correctly xD. Although if it makes having a “camera” way easier, I’m all for learning it.

Ok, I just looked it up and found this.

I may try and look at shaders (as suggested in this), but I have a feeling it’s way more complicated, and I have no clue what glBufferData does.

Maybe I’ll try it out now (if I have time), or later today. But I guess in the end what you were trying to explain to me is deprecated anyway. So idk :/. Shaders are what’s used now, so I guess it wouldn’t hurt to blow my brain up a little bit more with this stuff xD.

Edit: I found an example for shaders, and wanted to make sure this is the right thing I’m looking for.

http://schabby.de/opengl-shader-example/

I don’t know for sure if this is the right version (I believe I’m using LWJGL 2), and I don’t know if this is VAOs, not VBOs.

From what I see, this is just VBOs but with a little more code for storing/passing data. A lot of the code is writing/reading to text files. So it’s not “that” scary/intimidating, but I still have no clue how it works after skimming it xD

I just skimmed it, but the shader example you linked to looks like it’s probably up to date.

There’s a lot that could be covered here, but I’ll just try to address your confusion about ‘moving the VBO’ when an entity moves. Note that although the details differ, the concepts are the same regardless of whether you’re using modern OpenGL or the fixed-function pipeline (you’re currently using the latter).

Generally when you render something in OpenGL, a transform is applied (recall that a transform specifies how geometry should be shaped and placed in the world, e.g. scale, rotation, translation, etc.). With the fixed-function pipeline (which you’re currently using), you set up the transform using functions like glTranslatef(), and then OpenGL automatically applies the transform when rendering. With the programmable pipeline (the modern way), you manage the transform yourself and apply it in a vertex shader. (Strictly speaking you don’t have to apply a transform in your vertex shader, but you almost always will.)

Note that with the fixed-function pipeline, this isn’t specific to VBOs. OpenGL will apply the current transform to any rendered geometry, whether it’s immediate mode or in the form of VBOs or vertex arrays or whatever.

So how does the transform thing work? Say you have a vertex at position (1, 2). If your transform is identity (that is, the transform doesn’t change the geometry at all), then the vertex will render at (1, 2). If your transform is glTranslatef(3, 4), the vertex will be rendered at (4, 6), because (3, 4) will be added to every vertex that’s rendered. If your transform is glTranslatef(-2, 7), the vertex will be rendered at (-1, 9). And so on.

Consider the example of glTranslatef(3, 4), which produces a vertex of (4, 6). Using an identity transform and just submitting a vertex of (4, 6) directly would produce the same result visually. Example (untested):

// This:
glLoadIdentity();
glTranslatef(3, 4, 0);
glBegin(GL_POINTS);
glVertex2f(1, 2);
glEnd();

// Would have essentially the same result as this:
glLoadIdentity();
glBegin(GL_POINTS);
glVertex2f(4, 6);
glEnd();

When you do it the first way - letting the transform do the work - you never have to change the vertex (1, 2). That’s why it’s not necessary to change the contents of a VBO in order to move an entity; you can let the transform do it instead. However, you can do it the second way if you want as well. (And in fact the second way was how you were doing it initially.)

Anyway, you may just need to do some experimentation to get a handle on things. Set up some simple test cases where you can manipulate the transform and/or mesh geometry and see how things change. Seeing some simple test cases in action might help clear things up for you.

Ok, I’ll most likely try grabbing the shader example I linked, move it with the translate, and once I get it to work I’ll put it in my game. Thanks for the help, and if I need any other help regarding this I’ll just reply to you or something and ask here xD.

Sorry for having you explain so many things xD, I’m hoping I get this under control soon. It means a lot to have someone like you help explain these things in such detail.

[quote]Sorry for having you explain so many things xD, I’m hoping I get this under control soon.
[/quote]
No need to apologize :slight_smile: This material can be difficult. I think just experimenting and getting some experience will help though.

So I just made a project and tried using the shader. I was able to get it to load up, but the issue is I can’t get it to translate :/. Using what you said, I would translate before it renders, and then the vertices would end up in the right place. I did see it say it was drawing a VAO… so now I’m confused. Is this not using VBOs like it said?

main loop

while (Display.isCloseRequested() == false)
{
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

    // tell OpenGL to use the shader
    GL20.glUseProgram(shader.getProgramId());

    // bind vertex and color data
    GL30.glBindVertexArray(vaoHandle);
    GL20.glEnableVertexAttribArray(0); // VertexPosition
    GL20.glEnableVertexAttribArray(1); // VertexColor

    // draw VAO
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);

    // check for errors
    if (glGetError() != GL_NO_ERROR)
    {
        //throw new RuntimeException("OpenGL error: " + GLU.gluErrorString(glGetError()));
    }

    // swap buffers and sync frame rate to 60 fps
    Display.update();
    Display.sync(60);
}

I tried putting this in before/in the middle of/after it rendered the shader


if(Keyboard.isKeyDown(Keyboard.KEY_LEFT))
{
	GL11.glTranslatef(10,0,0);
}

Or… is this not the way you explained? I may need to look back at what you said again, but I guess it’s not, because you get coords from the player and whatnot. I just wanted to see this move when I pressed the left arrow key, but idk xD. Sigh, maybe I should find a video tutorial or something lol.

Couple things really quick. First, this:

if(Keyboard.isKeyDown(Keyboard.KEY_LEFT))
{
   GL11.glTranslatef(10,0,0);
}

Isn’t how functions like glTranslatef() should be used. For one, you’re mixing user input and entity updating with rendering, which is generally to be avoided from a ‘separation of concerns’ point of view. Transform-related calls like glTranslatef() should go in your rendering code, not your input or update code. Also, it looks like you’re trying to accumulate transforms from frame to frame, but generally it’s better to take a ‘clean slate’ approach and create the transform(s) from scratch each frame (see my earlier pseudocode).

Also, my discussion of functions like glTranslatef() was limited to the fixed-function pipeline, but your latest example is using the programmable pipeline. There was sort of a ‘transition period’ in OpenGL (I think) where some aspects of the fixed-function pipeline were available to shader programs, but I don’t know if the fixed-function transforms ever were - I’d have to do some research to find out. In any case, when you’re using the programmable pipeline, you no longer use functions like glTranslatef() and so on; instead, you create the transform matrices yourself and assign them to shader uniforms.
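For reference, the matrix you’d hand to a shader uniform (via something like glUniformMatrix4) is just a float[16]. Here’s an untested sketch of building a translation matrix in the column-major order OpenGL expects:

```java
public class Transforms {
    /** 4x4 translation matrix in column-major order (OpenGL's convention). */
    public static float[] translationMatrix(float tx, float ty, float tz) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[12] = tx;                        // translation lives in the
        m[13] = ty;                        // last column
        m[14] = tz;
        return m;
    }
}
```

In the vertex shader, `gl_Position = matrix * vec4(position, 1.0)` then does what glTranslatef() did for you in the fixed-function pipeline. Applying the matrix above to a vertex (1, 2) gives (1 + tx, 2 + ty), matching the earlier examples.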

I definitely don’t want to steer you in the wrong direction here, and of course you should pursue whatever path interests you the most, but tackling the programmable pipeline at this point might be kind of jumping into the deep end a little. The programmable pipeline can be difficult even if you already have a full grasp of transforms and so on; if you don’t yet have a full understanding of that material, the programmable pipeline might be a little much to deal with.

A lot of ground has been covered in this thread. I don’t want to contradict my own advice, but now that I have a better idea of what you’re doing, I’m thinking it wouldn’t necessarily be a terrible idea to go back to your original immediate mode code (I assume you still have that available) and work with that some more. Yes, immediate mode is slow and deprecated and isn’t really something you want to use in production code, but working with immediate mode for a while could allow you to get a more firm footing regarding the fundamentals, which will better position you to tackle things like VBOs and shaders.

I think the key conceptual thing to understand here is what transforms are and how they’re used. I think as long as you’re updating transforms in your update or input code and allowing them to accumulate from frame to frame within OpenGL, there’s probably a conceptual disconnect that needs to be addressed.

At this point I imagine you and I are the only ones reading this thread :slight_smile: Since the topic has drifted somewhat, I suggest that if you’re still finding the transform stuff to be problematic, you start a new thread specifically on that topic. That’ll get some new eyes on your questions and perhaps get you a wider range of feedback.