OpenGL Questions

Well, it may sound harsh, but try to think before you ask a question.
Last time, when you asked how shaders are used in game development, we answered your question: they’re used for everything.
And yes, “everything” includes everything from a simple cube to real-time shadows and extreme lighting effects.
You can ask for examples, but the answer is that they’re everywhere. I’ve told you that the vertex shader is used for vertex position calculations and for passing data to the fragment shader, while the fragment shader tells OpenGL the color of the fragment it runs on.
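
To make that split concrete, here is a minimal sketch of such a shader pair, embedded as Java 15+ text blocks the way LWJGL code often carries its GLSL (all the identifier names here are my own, not from your code):

```java
// Minimal GLSL 3.30 shader pair (illustrative names: position, vertColor, fragColor).
static final String VERTEX_SHADER = """
    #version 330 core
    layout(location = 0) in vec3 position; // fed by glVertexAttribPointer at index 0
    out vec3 vertColor;                    // data handed on to the fragment shader
    void main() {
        gl_Position = vec4(position, 1.0); // the vertex shader's job: output a position
        vertColor = position * 0.5 + 0.5;  // plus whatever data the fragment shader needs
    }
    """;

static final String FRAGMENT_SHADER = """
    #version 330 core
    in vec3 vertColor;  // interpolated across the triangle, per fragment
    out vec4 fragColor;
    void main() {
        fragColor = vec4(vertColor, 1.0); // the fragment shader's job: output a color
    }
    """;
```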

glBindVertexArray(glGenVertexArrays()) simply binds an empty vertex array object. As far as I know this has no effect on rendering whatsoever, and you definitely should not get an error if it’s missing (assuming you do everything else correctly). It’s also bad practice, since you should always store your generated object’s id so you can delete it later when you no longer need it, freeing up space in VRAM.
Mod: Actually, now I remember what vertex array objects do. As Danny02 explained above, they save the vertex attribute configuration and buffer bindings, so when rendering all you have to do is bind the vertex array object and it will set all the buffers and vertex attribute pointers for you again; then you just render using glDrawArrays(…) or glDrawElements(…).
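
A minimal sketch of that lifecycle, assuming LWJGL 3-style bindings (the variable and method names are mine):

```java
import static org.lwjgl.opengl.GL30.*;

static int setupVao() {
    int vao = glGenVertexArrays(); // keep the id instead of throwing it away
    glBindVertexArray(vao);
    // ... bind buffers and set vertex attribute pointers here; the VAO records them ...
    glBindVertexArray(0);          // unbind once configuration is done
    return vao;                    // later: glBindVertexArray(vao) + glDrawArrays/glDrawElements,
                                   // and glDeleteVertexArrays(vao) when you no longer need it
}
```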

The glVertexAttribPointer(…) in your example tells OpenGL that the data should be sent to attribute array 0 (so basically to location 0), that the attribute has 3 components (so it will become a vec3 in the shader), that the type is float, that the stride between consecutive attributes in the buffer is 3<<2 bytes (i.e. 12; I don’t really get why you wrote it that way, see my explanation of this parameter below), and that the data starts right at the first byte of the buffer.

You say that you understand every parameter in glVertexAttribPointer(…) and then you ask how data gets passed to the shader, which clearly shows that you don’t understand what glVertexAttribPointer(…) does.
Using glVertexAttribPointer(…) you tell OpenGL what data it should send to the shader, and in what format.
One really important fact is that the vertex attribute pointer will always send data from the currently bound array buffer.
The parameters are already explained here, but I will try to rephrase them for you (see the sketch after this list):
index - The index of the vertex attribute array (you will also have to enable it using glEnableVertexAttribArray(index))
size - The number of components that you’re going to pass. It can only be 1, 2, 3 or 4, and the attribute will show up in the shader as a float, vec2, vec3 or vec4 respectively.
normalized - If set to true, OpenGL will normalize integer values on access, mapping them into the [0, 1] range (or [-1, 1] for signed types). Otherwise it should be set to false.
stride - The stride of the attribute IN BYTES. This is the trickiest value, since not only is it given in bytes, you also have to set it so that it points from the first component of one instance of the attribute to the first component of the next. My explanation is probably crap because of my english skills, but you should look it up once you start using interleaved arrays (also remember that 1 float = 4 bytes). For now, all you have to know is that if you set it to 0, OpenGL will assume the buffer is tightly packed and will read the data from it contiguously. TL;DR: set it to 0 for now and look it up later when you do more complex stuff.
offset - The offset IN BYTES before the attribute’s first appearance in the buffer. This one is easier to explain than the stride: if your buffer looks like [1, 2, 3, 4, 5, 6, …] and you want OpenGL to start reading from number 4, you set the offset to 12, because 12 = 3 × 4 bytes, so it skips the first 3 floats in the buffer. Until you fill your buffer with other data or use interleaved arrays, you shouldn’t really worry about this; just set it to 0.
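
To tie the parameters together, here is a hedged sketch (LWJGL 3-style; the indices and the position+color layout are just examples) showing a tightly packed vec3 attribute next to an interleaved layout:

```java
import static org.lwjgl.opengl.GL11.GL_FLOAT;
import static org.lwjgl.opengl.GL20.*;

static void tightlyPacked() {
    // stride = 0: OpenGL assumes back-to-back vec3s; offset = 0: start at the first byte.
    glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
    glEnableVertexAttribArray(0);
}

static void interleaved() {
    // Buffer layout: [x, y, z, r, g, b, x, y, z, r, g, b, ...]
    // One whole vertex is 6 floats = 24 bytes, so BOTH attributes use stride = 24 bytes.
    // Color starts 3 floats (12 bytes) into each vertex, hence offset = 12.
    glVertexAttribPointer(0, 3, GL_FLOAT, false, 6 * 4, 0);  // position
    glVertexAttribPointer(1, 3, GL_FLOAT, false, 6 * 4, 12); // color
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
}
```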

Some of the information I wrote here might not be 100% theoretically correct, because I’m a hobbyist and not a professional (even though I’m planning to become one soon ;D); still, most of it should be right. I know OpenGL can be hard to learn, but never give up and you’ll become good at it in no time. :slight_smile:

I want to clarify why this is in my code. In core OpenGL, there is no longer a default, global VAO; you are now required to bind a VAO before making any draw calls. Since the Arcsynthesis tutorial doesn’t cover VAOs until Chapter 5, it just binds one and doesn’t use it yet.

This should also explain why you get an error when you remove it :wink:

Interesting, I didn’t know that. One more reason to read up on OpenGL > 3.0.

Yup, and the other 99% of the questions in this thread are answered in the first 7 chapters, so read thoroughly, OP :slight_smile:

I see, thanks. Does calling glGenVertexArrays() while the buffer is bound associate the buffer with the VAO, or is the association made in another way?

I did previously know what the parameters were for but I was wondering how data was passed to the shader. You answered it here for me. Thanks.

I see. May I suggest adding a comment explaining this, especially since someone could confuse VAOs with VAs.

I’d find it a lot easier if the book didn’t say “this is explained in a later tutorial”. I prefer to understand why I’m writing what I am.

Hehe, another thing the Arcsynthesis tutorial made sure to be clear about (in Chapter 5).

  • The association of the currently bound ARRAY_BUFFER with the currently bound VAO only occurs at the glVertexAttribPointer(…) call.
  • The association of an ELEMENT_ARRAY_BUFFER with the currently bound VAO occurs as soon as you bind the buffer.

For either, after the condition is met, you can freely unbind the buffers, as the VAO now has a copy of the pointer.
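
In code, the order of operations looks roughly like this (an LWJGL 3-style sketch; buffer uploads omitted):

```java
import static org.lwjgl.opengl.GL11.GL_FLOAT;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;

static int buildVao() {
    int vao = glGenVertexArrays();
    glBindVertexArray(vao);

    // Binding an ARRAY_BUFFER alone does NOT attach it to the VAO...
    int vbo = glGenBuffers();
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // ...the attachment happens here, at the glVertexAttribPointer call:
    glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0); // safe: the VAO already holds the pointer

    // An ELEMENT_ARRAY_BUFFER, by contrast, is attached the moment you bind it:
    int ibo = glGenBuffers();
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    // (don't unbind GL_ELEMENT_ARRAY_BUFFER while this VAO is bound,
    //  or the VAO will record the unbind as well)

    glBindVertexArray(0);
    return vao;
}
```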

You’re right. I’ll go through and add that comment to each example that has that line. In fact, I think my code just needs better comments. Arcsynthesis’s code is weird sometimes.

Well it’s quite difficult to explain everything at once. Patience, my friend. :slight_smile:

New Question:
I’ve been reading more of the book and I’m currently on Chapter 4. I understand pretty much the first half of the page, then I get lost. What’s confusing me is what it says after it explains the perspective divide. It says:

What I’ve always been confused about, and what I still don’t understand, is the meaning of the W coordinate. What does it represent? Up until now I’ve always had the impression that the W coordinate is just a normalized Z coordinate, but that doesn’t really make sense. All I can find on Google is “dividing by W converts clip space to NDCs”. Maybe the book explains it further on, but I want to understand the maths behind all the projections before continuing.

I continued to read on through the section about camera space and got even more confused. It says:

How can it be an infinite space? Why is +Z further away when in clip space -Z is further away? On the next three lines it says:

I get totally lost. It all seems very confusing. I’d appreciate it if someone could try to explain camera space and possibly throw in a diagram or two. The depth computation equation doesn’t shed any light on it.

That’s really all you need to know, I promise. The W coordinate will always be 1 as far as you’re concerned (except for directional lights, where it will be 0, but that is for the Lighting chapters :smiley:). Multiplying the camera-space positions by the perspective matrix sets a “special” W value in clip space; dividing the XYZ components by that W produces the correct NDC coordinates.
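
To sketch the math (my notation; this assumes the standard gluPerspective-style projection matrix, whose last row copies the negated eye-space Z into W):

$$
\begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = P \begin{pmatrix} x_e \\ y_e \\ z_e \\ 1 \end{pmatrix}, \qquad w_c = -z_e, \qquad \begin{pmatrix} x_{ndc} \\ y_{ndc} \\ z_{ndc} \end{pmatrix} = \frac{1}{w_c} \begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix}
$$

So a point twice as far in front of the camera gets divided by a W twice as large, which is exactly what makes distant objects smaller on screen.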

In mathematical terms, it is an infinite space because nothing limits where you place your objects. Technically, it is finite, as it extends between the minimum and maximum values you can fit into a 32-bit float :slight_smile:

And again, don’t worry about clip space and NDC. You will never deal with them directly; that chapter just likes being complete and explaining how things work under the hood. Just remember that in camera space and the spaces above it, -Z is forward.

EDIT: I found a nice little flowchart that shows the steps your vertices take to get from your vertex data to the screen:

Your vertex data first gets transformed by your Model-View matrix (the View matrix multiplied by the Model matrix): the Model matrix takes each vertex from Object Space to its place in the world (World Space), and the View matrix then positions the World Space vertex relative to the camera, a.k.a. Camera/Eye Space (where (0,0,0) is your camera). The Camera Space vertices are then transformed by the Projection matrix into Clip Space (which you should never need to deal with), the perspective divide takes them into NDC space, and finally NDC space is stretched onto the GL viewport.
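
If it’s easier to read as code, here is the same chain written with the JOML math library (my choice of library, not something the tutorial uses; the matrices and the vertex are made-up examples):

```java
import org.joml.Matrix4f;
import org.joml.Vector4f;

static void transformChainDemo() {
    Matrix4f model = new Matrix4f().translate(0f, 0f, -5f); // Object Space -> World Space
    Matrix4f view  = new Matrix4f();                        // World -> Camera (identity camera here)
    Matrix4f proj  = new Matrix4f()
            .perspective((float) Math.toRadians(60), 16f / 9f, 0.1f, 100f);

    Vector4f clip = new Matrix4f(proj).mul(view).mul(model) // P * V * M
            .transform(new Vector4f(1f, 1f, 0f, 1f));       // object-space vertex, w = 1 -> Clip Space

    // Perspective divide -> NDC; the viewport transform then maps ndc.xy from [-1, 1] to pixels.
    Vector4f ndc = new Vector4f(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1f);
}
```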

I hate not knowing how something works. :frowning:
I forgot to ask something else, so while I’m at it, here goes:

Why are the frustum scale, zNear and zFar variables defined as uniform? I thought that uniform variables were used when the variable needs to be changed regularly. Why aren’t they just defined as attributes, i.e. as in variables?

When you say above layers you mean local space, world space and camera space?

Me too! However, knowing how the W/perspective divide works requires studying matrix theory and linear algebra. You can probably find resources online if you really, really want to :slight_smile:

Uniform variables are variables that apply to all vertices rendered together; for example, your matrices are the same for every vertex of a given object.

Meanwhile, attributes are per-vertex data.

Yup.

Though wouldn’t attributes apply to all vertices rendered together? I thought the only difference was that attributes can only be changed with every execution of the program whereas uniform variables can be changed before each rendering state. Is there something else I need to know? :-\

You seem to confuse attributes and uniforms.

Attributes are per-vertex data, for example: position, color, texcoords, etc., anything that is unique per vertex. If you are drawing an object that has 50 vertices, you upload all 50 positions, colors, texcoords, etc. to your VBO and tell glVertexAttribPointer which index each attribute is assigned to, as well as enough info for it to know how big each of those 50 vertices is. When you render with any glDraw* command, your currently bound vertex shader is called 50 times, once for each vertex. After that is done and all the clipping/perspective division/rasterization happens, your fragment shader is called for each fragment (without anti-aliasing, you get one fragment per pixel rendered).

Uniforms are, well, uniform across the draw call. Before you call any glDraw* command, you can change your uniforms however you like; once it’s called, every vertex/fragment shader invocation sees the same uniforms. That’s how you have the same matrices across all 50 vertices, and the same texture bound each time the fragment shader is called, for example.
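
A sketch of the difference at the call site (LWJGL 3-style; the program id, the uniform name “u_mvp” and the FloatBuffer are all illustrative):

```java
import java.nio.FloatBuffer;

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;

static void drawObject(int program, FloatBuffer mvp) { // mvp: 16 floats, illustrative
    glUseProgram(program);

    // Uniform: set once before the draw call, identical for every one of the 50 vertices.
    int mvpLoc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4fv(mvpLoc, false, mvp);

    // Attribute: the 50 positions already live in the bound VBO; the vertex shader
    // runs 50 times, each invocation seeing a different position.
    glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
    glEnableVertexAttribArray(0);

    glDrawArrays(GL_TRIANGLES, 0, 50); // the uniforms are frozen for this call
}
```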