OpenGL Questions

Consider I have this field in my vertex shader:

in vec4 position;

Now consider that I’ve set up my VBO, created my shaders, bound my vertex attributes, etc., and now I’m rendering the vertices. My code is:


glUseProgram(program);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0);

glDrawArrays(GL_TRIANGLES, 0, 3);

glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);

Question: in what order is everything done?
My understanding (I’m assuming things here) is that when glDrawArrays is called, the glVertexAttribPointer call formats the data in the VBO and sends that data to the attribute index specified by the first argument. Since attribute index 0 is enabled, the position field is initialized? Am I correct?

glVertexAttribPointer() does not “format” the data in the VBO. It simply explains how to interpret it.

[icode]glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0);[/icode]
Arguments:
1: Which attribute location this should be put in. Basically, which shader input variable should we store this in?
2: The number of components of this attribute. If 4, then the shader input variable has to be a vec4.
3: Data type.
4: Should the data be normalized? Used when uploading bytes, shorts and ints. If true with GL_UNSIGNED_BYTE, the byte range 0 to 255 is mapped to 0.0 to 1.0; if false, the values are treated as 0.0 to 255.0. If true with GL_BYTE, the signed range -128 to 127 is mapped to -1.0 to 1.0.
5: The stride: how many bytes apart consecutive vertices start. 0 = tightly packed, in which case OpenGL calculates the size of this attribute and uses that. In this case, that’s 4 components times 4 bytes per float, so 16.
6: Offset in bytes. As with stride, useful when having more than one attribute interleaved in the VBO.
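Putting the list together, here is the same call annotated argument by argument (the comments are mine):

glVertexAttribPointer(
        0,        // attribute location 0 (the "in vec4 position" variable)
        4,        // 4 components -> vec4
        GL_FLOAT, // each component is a 32-bit float
        false,    // no normalization (only matters for integer types)
        0,        // stride 0 = tightly packed (here: 4 * 4 = 16 bytes per vertex)
        0);       // start reading at byte 0 of the bound VBO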

As you can see, nothing here actually modifies the VBO; it only tells OpenGL what to do with the data in it. When you then call glDrawArrays(), OpenGL will read vertex data from the VBO based on the vertex attribute setup. [icode]glDrawArrays(GL_TRIANGLES, 0, 3);[/icode] renders 3 vertices. For each vertex, it goes over all enabled attributes and reads data from a VBO as specified by the corresponding call to glVertexAttribPointer(). For the first vertex (vertex 0), it looks at attribute 0 and sees that it should read 4 floats starting at byte (offset + vertexID * stride) = (0 + 0 * 16) = 0, so it reads bytes 0 to 15. For the second vertex it’s (0 + 1 * 16) = 16, and so on…
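Spelled out for all three vertices of that draw call:

byteOffset = offset + vertexID * stride
vertex 0: 0 + 0 * 16 = 0   -> bytes 0..15
vertex 1: 0 + 1 * 16 = 16  -> bytes 16..31
vertex 2: 0 + 2 * 16 = 32  -> bytes 32..47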

I took a couple of days’ break to clear my head and decided to try to learn the matrix maths behind all the translations etc., which I realised was a mistake not to have done before. Anyway, I read this page: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/. Before I plod on, let me re-post this image that RobinB posted previously.

The first few lines confused me.

So first off, what are the scenarios where we’d need a direction vector? Is this related to the direction of scaling an object or something? Also, I thought that W could be between 0 and 1. What happens if the W component is defined as, let’s say, 0.25?

Translation:
Let’s now refer to this image that the tutorial provides.

http://www.opengl-tutorial.org/wp-content/uploads/2011/04/translationExamplePosition1.png

Let’s say we changed that matrix to:

1, 1, 0, 10
0, 1, 0, 0
0, 0, 1, 0
0, 0, 0, 1

We’d get (30, 10, 10, 1), starting from the vector (10, 10, 10, 1) in the image. We get a change to the X coordinate, but it’s driven by our Y coordinate. Can someone explain what use this is? Do any of the OpenGL functions modify that part of the matrix? Also, what is the translation column actually for? Couldn’t you do translations in the X/Y/Z columns?

When you rotate over the Z axis:
the incoming X affects the outgoing Y
the incoming Y affects the outgoing X

That is when ‘OpenGL uses that part of the matrix’.
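For reference, the standard rotation matrix about the Z axis looks like this (a textbook matrix, not something from this thread):

cos θ, -sin θ, 0, 0
sin θ, cos θ, 0, 0
0, 0, 1, 0
0, 0, 0, 1

Multiplying it with (x, y, z, w) gives (x cos θ - y sin θ, x sin θ + y cos θ, z, w): the sin θ entries sit in the “Y column of the X row” and the “X column of the Y row”, which is exactly how the incoming X and Y affect each other’s outputs.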

Ahh thanks, I found a page which explains everything.

Next Question:
How do the stride and offset parameters of glVertexPointer() and the like work? I’ve tried searching around and I can only find “defines the byte offset between data” style explanations, which I don’t understand, and C++-related explanations that use sizeof() as a parameter, which I don’t understand either. Examples would be appreciated.

Say I have data packed as VCVCVC:


glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
		(FloatBuffer) BufferUtils.createFloatBuffer(18)
				.put(new float[] {
						//  x,     y,    z,  r,  g,  b
						-0.5f, -0.5f, 0f, 1f, 1f, 1f,
						-0.5f,  0.5f, 0f, 1f, 0f, 0f,
						 0.5f, -0.5f, 0f, 0f, 0f, 1f })
				.flip(),
		GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

.....

glVertexPointer(3, GL_FLOAT, 6 << 2, 0 << 2);
glColorPointer(3, GL_FLOAT, 6 << 2, 3 << 2);

That would translate to:


glVertexPointer(3, GL_FLOAT, 24, 0);
glColorPointer(3, GL_FLOAT, 24, 12);

That doesn’t really make sense to me. I only have 18 pieces of data in my buffer but I’m defining the stride as 24 and one of the offsets as 12. ???

Now let’s consider I have the data packed as VVVCCC.


glVertexPointer(3, GL_FLOAT, 3 << 2, 0 << 2);
glColorPointer(3, GL_FLOAT, 3 << 2, 9 << 2);

Would I be correct in saying that the stride is [icode](pieces of data * proportional gap between data) << 2[/icode] and the offset is [icode]starting position of the data << 2[/icode]? Why do you left-shift everything by 2?

And finally, how is the below used?

[quote]public static void glVertexPointer(int size, int stride, java.nio.FloatBuffer pointer)
[/quote]
Is it used when the offset can be 0, i.e. tightly packed, non-interleaved buffers? Everything I’ve seen so far binds the buffer and uses

[quote]public static void glVertexPointer(int size, int type, int stride, long pointer_buffer_offset)
[/quote]
EDIT: Never mind, I’m pretty sure that I’m correct in thinking that that function is used for VAOs. The difference between VBOs and VAOs is that the data for a VBO is placed on the GPU and you access it via a handle, whereas with a VAO you have to keep creating the buffer. Am I correct?

About the stride and offset thing: the stride defines how big all of one vertex’s attributes are together, in bytes.

In your example you have two vertex attributes. Each of them consists of 3 floats, and a float has a size of 4 bytes, so you need 24 bytes (2 * 3 * 4 == 6 << 2) for one vertex.
So that OpenGL knows where it can find each single attribute, you define an offset into this vertex data block. In your example the position attribute is at the beginning of each block, so its offset is 0. The color attribute is placed right after the position attribute, so its offset is equal to the size of the position attribute (3 * 4 = 12 bytes).

When you don’t want an interleaved data structure (i.e. VVVCCC instead of VCVCVC), you would bind the same buffer but use different strides and offsets, as in the sketch below.
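Side by side, with the byte values written out (the numbers follow the 3-vertex triangle data above):

// Interleaved VCVCVC: each vertex is 6 floats = 24 bytes.
glVertexPointer(3, GL_FLOAT, 24, 0);   // positions start at byte 0
glColorPointer(3, GL_FLOAT, 24, 12);   // colors start 12 bytes into each vertex

// Block layout VVVCCC with 3 vertices: positions are floats 0..8,
// colors are floats 9..17, each attribute tightly packed (stride 12 bytes).
glVertexPointer(3, GL_FLOAT, 12, 0);
glColorPointer(3, GL_FLOAT, 12, 36);   // 9 floats * 4 bytes = 36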

@Danny02 Thanks.
Next Question: (I think two in a day is my record ::))

Let’s make this short and sweet. You can define an offset with glVertexPointer(), so what’s the point of the “first” parameter (not literally the first one) in [icode]glDrawArrays(mode, first, count)[/icode]?

Beware, glVertexPointer and glColorPointer are deprecated and not part of core OpenGL.

Concerning your question, “first” is the first index to start at, as defined in the docs (which you should most likely try to reference more often :wink:). If you want to render everything, you start at 0 and “count” is how many vertices there are.

So you’re saying that the first parameter is used for newer versions of OpenGL because you cannot define an offset for a VBO?

Well, there are some weird parameters in OpenGL that you’re likely to never use, although having options never hurts. :slight_smile:

In modern OpenGL you fill up buffers just as you do now (glBufferData(…)/glBufferSubData(…)), but to assign data that goes to your shaders you have to use glVertexAttribPointer(…), and using that function you can set offsets and strides, letting you use a single buffer for multiple parameters. This means that you can store your vertex, normal and texture coordinate (and even more) data in a single buffer/VBO and then render from it using your shaders.
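For example, a hypothetical interleaved layout with position (3 floats), normal (3 floats) and texture coordinate (2 floats) per vertex could be set up like this (the attribute locations 0 to 2 are my own assumptions):

int stride = (3 + 3 + 2) * 4; // 8 floats per vertex = 32 bytes

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, 0);  // position at byte 0
glVertexAttribPointer(1, 3, GL_FLOAT, false, stride, 12); // normal after 3 floats
glVertexAttribPointer(2, 2, GL_FLOAT, false, stride, 24); // texcoord after 6 floats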

To answer your question: you can use offsets in the pointer calls to tell OpenGL where your attribute begins in the buffer, but you shouldn’t use this to “skip over” vertices as you’re thinking right now.
If you want to skip, for example, the first 5 vertices, you should render with glDrawArrays()’s first parameter set to 5.
I know it possibly sounds a bit overwhelming right now, but if you have any questions just ask; after all, that’s what this topic is for. :slight_smile:
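A tiny illustration (the vertex counts here are made up):

// The VBO holds 15 vertices; skip the first 5 and draw the remaining 10.
glDrawArrays(GL_TRIANGLES, 5, 10);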

Ahh, I understand. It’s going a little off the buffer object topic, but when do you use shaders in modern OpenGL? I’d have thought you’d use a vertex shader for the camera, for example, and various fragment shaders for different visual effects; however, I’ve seen forum posts in the past mentioning that shaders aren’t used very often, but also forum posts saying that they’re an essential part of modern OpenGL. After a little peek around the LibGDX source, I found it uses no shaders and only uses up to v2.0 of OpenGL but runs pretty fast. Are shaders only really essential for 3D?

Shaders are an essential part of the OpenGL 2.0+ pipeline. That’s why it’s called the “programmable pipeline.”

To get a triangle on screen, you need to upload its vertex data. That’s what a VBO is for. A VAO sets up attributes for that VBO. Then when it gets rendered, the vertices first pass through a vertex shader. There is no default shader – so you have to create one and bind it. During rasterization, the pixels that make up your triangles (aka fragments) go through the currently bound fragment shader.
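A minimal sketch of creating and binding such a program (LWJGL-style calls; the placeholder GLSL sources are my own):

int vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, "#version 330 core\n"
		+ "in vec4 position;\n"
		+ "void main() { gl_Position = position; }");
glCompileShader(vs);

int fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, "#version 330 core\n"
		+ "out vec4 fragColor;\n"
		+ "void main() { fragColor = vec4(1.0, 0.0, 0.0, 1.0); }");
glCompileShader(fs);

int program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);
glUseProgram(program); // now glDrawArrays() runs these shaders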

All of these questions would be answered with a book or some reading. :slight_smile:

http://www.arcsynthesis.org/gltut/
http://www.opengl-tutorial.org/
http://open.gl/

[quote]also seen forum posts saying that they’re an essential part of modern OpenGL
[/quote]
I didn’t ask how to use a shader program or what a shader program does. I asked when shaders are used. I’ve looked at all of those books and started to read the arcsynthesis book, but my difficulty comes with the language used, both the programming language and the wording. I’ve downloaded r4king’s port of the code for the arcsynthesis book, but I don’t learn by reading code. Currently I’m focusing on VBOs, but then I’ll continue to read the book and do some experimentation.

You need shaders to even get a single triangle on screen. OpenGL doesn’t know anything about the vertex data you provide in a VBO, you have to tell it through a shader what and where to draw something.

Well I know that. I guess my original question wasn’t too clear. Let me re-word it.
How would you split up the use of shaders in a game? Would you use a vertex shader for the camera and several fragment shaders for textures, visual effects, etc.? What other things would you need a shader for in, for example, a simple 2D platformer?
I had a peek around the LibGDX source and noticed that it doesn’t use the programmable pipeline. Why is this? Is it because the fixed function pipeline is good enough to use in 2D games? Are shaders and modern OpenGL functions that are part of the programmable pipeline only essential for content-rich 3D games?

You need a vertex and fragment shader to render anything. LibGDX definitely does use shaders and the programmable pipeline. It also has some fixed-function fallbacks so that your game will even render on older versions of Android.

A 2D game probably just needs one shader program, to get a textured quad on screen in orthographic projection. Certain effects like normal mapping (needs WebGL to run) will use different shaders, but most 2D games don’t do this.

Here’s a tutorial that goes into depth on shaders and 2D effects. I wrote it – so let me know if you have questions or find the language difficult.

Also another quick note, LibGDX uses OpenGL ES so its versions are different from OpenGL. GL ES 1.X was fixed function, GL ES 2.0 is programmable. There’s now GL ES 3.0 which adds more goodies to the programmable pipeline.

I think you don’t really understand what a shader is and what they’re used for.

In modern OpenGL (3.1+ counts as modern IMO, because that’s when all the deprecated stuff became unsupported) you HAVE TO use shaders to get anything on the screen. :point:
There are different kinds of shaders (vertex, fragment (or sometimes called pixel), geometry) all serving their own purposes.
In the vertex shader you usually calculate the vertex position (multiplying the vertex attribute input by your own projection, view, model, etc. matrices) and pass incoming attributes, e.g. normals, over to the fragment shader; in the fragment shader you calculate a single fragment’s (or pixel’s, although I don’t think that’s really the right word) color.
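As a sketch, such a pair might look like this in GLSL 3.30 (the uniform, attribute and output names are my own placeholders):

// Vertex shader
#version 330 core
uniform mat4 mvp;        // projection * view * model, supplied by your Java code
in vec3 position;
in vec3 normal;
out vec3 vNormal;        // handed over to the fragment shader
void main() {
	vNormal = normal;
	gl_Position = mvp * vec4(position, 1.0);
}

// Fragment shader
#version 330 core
in vec3 vNormal;         // interpolated for each fragment
out vec4 fragColor;
void main() {
	fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // visualize normals as colors
}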

This is not too complicated a process once you understand what’s going on under the hood; however, I really suggest you pick up a book on modern OpenGL, since immediate mode isn’t really viable for rendering anything more than just a few triangles, and it won’t let you do any cool effects like lighting (other than the built-in crap that’s practically useless) or anything else.
There are also a few good tutorials on the internet, but honestly it’s extremely hard to find decent tutorials on modern OpenGL (and by modern I mean something like GLSL 3.30 and definitely not GLSL 1.20).

You should check out Davedes’s shader tutorials (linked one or two posts above), because that’s a good point to start from; even though it’s not really modern either, it’s way better than using immediate mode without any shaders. :wink:

Regarding my previous questions about shaders: never mind, I was meaning to ask how shaders affect the design of the game, i.e. how you structure and split up the use of shaders in the game. Let’s ignore that question for now, though.

I’ve started to re-read the start of the arcsynthesis book. I’ve noticed that in r4king’s LWJGL code, [icode]glBindVertexArray(glGenVertexArrays())[/icode] is called when the buffer object is filled with data. If this is commented out it logs an error; however, I realise that it otherwise works without that function call. Why is this? I don’t really understand what the specification says about it, so I’d be grateful for a brief explanation.

Also, I’m unclear on how the connection is made between input variables in a shader and code.
If I have a position vec3 input variable bound to attribute location 0, how is the connection between the attribute and the below code made?


glUseProgram(programId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 << 2, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);

I understand that glEnableVertexAttribArray(0) enables the use of attribute index 0. Is there a reason it has the word “array” in the function name? Now for the function glVertexAttribPointer(): I know what all the arguments mean, but I’m unclear on how data is passed from a buffer object to the vertex shader executable in the shader program. Does it pass each x, y, z component individually to the attribute because of the stride and size specified?

You have to know that OpenGL is a huge state machine. Think of a program written using only global variables. Each of the function calls in question sets some specific state/global var. Let’s go over them one by one.

  • glBindBuffer just sets the GL_ARRAY_BUFFER state to some id
  • glEnableVertexAttribArray sets the enabled state of some attribute (e.g. 0) to true (glDisableVertexAttribArray sets it to false)
  • glVertexAttribPointer sets all the other needed state of some attribute (e.g. 0) and also sets that attribute’s buffer-object state to the current value of the GL_ARRAY_BUFFER state
  • glDrawArrays just gives the command to draw X shapes with the currently bound shader and the enabled attributes

So this code would still work if you moved the line [icode]glBindBuffer(GL_ARRAY_BUFFER, 0);[/icode] to just in front of the glDrawArrays command.
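Rearranged like that, with comments on what each call latches (based on the code above):

glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
// This call copies the *current* GL_ARRAY_BUFFER binding (vboId)
// into attribute 0’s state...
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 << 2, 0);
// ...so unbinding afterwards doesn’t matter: attribute 0 still
// points at vboId when the draw call happens.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);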

Now about the [icode]glBindVertexArray(glGenVertexArrays())[/icode] thing:
First of all, these two functions have nothing to do with what are also called vertex arrays (drawing directly from some client-side array). They create and use something called a Vertex Array Object (VAO), which was introduced in OpenGL 3.
What this OpenGL object does is keep its own copy of the global vertex-attribute state (glVertexAttribPointer, glEnableVertexAttribArray). When you bind such an object, all following calls that change attribute state change the VAO’s state and not the global state (global == VAO with id 0).

So you can create one VAO for each model (your VBO now) and only have to set the attributes once. After that, to render one specific model you only have to bind the shader and the VAO, so only 3 function calls are needed to render something.
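A sketch of that pattern (core OpenGL 3 calls, reusing the attribute setup from above):

// One-time setup per model:
int vao = glGenVertexArrays();
glBindVertexArray(vao);                 // following attribute calls go into this VAO
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 << 2, 0);
glBindVertexArray(0);

// Every frame:
glUseProgram(programId);
glBindVertexArray(vao);                 // restores all the attribute state at once
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);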

So in the end [icode]glBindVertexArray(glGenVertexArrays())[/icode] is quite pointless, because the id from the gen call isn’t saved, so the VAO can never be rebound or deleted later.