What type of rendering should I use?

By type, I mean immediate mode vs display lists vs VBOs vs whatever else there is.

I am trying to get an intuition for what I should be doing and when.

So, for example, what if I am trying to render a reticle on my screen. That will NEVER change EVER. How should I render it?
Another example: I’m making a voxel game. Blocks change relatively infrequently (in the scope of computer speeds…), so how should I render those? I know Minecraft has been using display lists (and recently added VBO support).

So what type of rendering should I use? Are there times where I should use different types of rendering?

  1. Use VBOs, preferably with shaders. Immediate mode and display lists are both deprecated and shouldn’t be used.

a) How you draw a simple reticle probably doesn’t matter. You could place the reticle vertex data in a VBO at game start and reuse it for the rest of the game.
b) For a voxel game, VBOs have big advantages. You should split your voxel world into chunks. Whenever a voxel changes (is mined, blah blah) you regenerate the triangle mesh of the chunk and store it in a VBO. Then you can reuse it until the next time it changes.
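A minimal sketch of that regenerate-on-change idea (the names here, like `ChunkMesher`, are made up for illustration): walk the chunk, emit two triangles for every voxel face that borders air, and hand the resulting array to `glBufferData` for that chunk’s VBO. Winding order and texture coordinates are omitted to keep it short.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: rebuilds a chunk's triangle list whenever a voxel changes. */
public class ChunkMesher {
    // The six axis-aligned neighbour directions: -x, +x, -y, +y, -z, +z.
    private static final int[][] DIRS = {
        {-1, 0, 0}, {1, 0, 0}, {0, -1, 0}, {0, 1, 0}, {0, 0, -1}, {0, 0, 1}
    };

    /** Emits 6 vertices (two triangles) per voxel face that borders air. */
    public static float[] meshChunk(boolean[][][] solid) {
        int sx = solid.length, sy = solid[0].length, sz = solid[0][0].length;
        List<Float> verts = new ArrayList<>();
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                for (int z = 0; z < sz; z++) {
                    if (!solid[x][y][z]) continue;
                    for (int[] d : DIRS) {
                        int nx = x + d[0], ny = y + d[1], nz = z + d[2];
                        boolean neighbourSolid = nx >= 0 && ny >= 0 && nz >= 0
                                && nx < sx && ny < sy && nz < sz && solid[nx][ny][nz];
                        if (!neighbourSolid) emitFace(verts, x, y, z, d);
                    }
                }
        float[] out = new float[verts.size()];
        for (int i = 0; i < out.length; i++) out[i] = verts.get(i);
        return out; // hand this array to glBufferData for the chunk's VBO
    }

    private static void emitFace(List<Float> v, int x, int y, int z, int[] d) {
        // Two axes spanning the face plane, perpendicular to the face normal d.
        int[] u = {Math.abs(d[1]), Math.abs(d[2]), Math.abs(d[0])};
        int[] w = {Math.abs(d[2]), Math.abs(d[0]), Math.abs(d[1])};
        // Corner of the face: shifted by one along the normal for the positive sides.
        float cx = x + Math.max(d[0], 0), cy = y + Math.max(d[1], 0), cz = z + Math.max(d[2], 0);
        // Two triangles covering the unit quad (winding order ignored for brevity).
        int[][] corners = {{0,0},{1,0},{1,1},{0,0},{1,1},{0,1}};
        for (int[] c : corners) {
            v.add(cx + c[0]*u[0] + c[1]*w[0]);
            v.add(cy + c[0]*u[1] + c[1]*w[1]);
            v.add(cz + c[0]*u[2] + c[1]*w[2]);
        }
    }
}
```

The point is that meshing is pure CPU work you do rarely (on change), while the GPU reuses the uploaded VBO every frame.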

  1. Everything nowadays is drawn with VBOs. To get any kind of vertex data to the GPU, you should be using VBOs. There are however tricks that can be used to reuse data for improved performance. For example, it is possible to use instancing to draw the same mesh hundreds of times with a single draw call, using shaders to place each instance at a different location/rotation/etc. Another trick is using geometry shaders to transform the geometry to minimize the CPU overhead and the amount of data that has to be sent to the GPU. For example, you can draw your reticle sprite as a point with only 1 vertex detailing the position, size and color of it, and then use a geometry shader to expand it to a quad with generated texture coordinates for each corner.
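To illustrate the point-sprite trick: the expansion itself is just this bit of math, shown here on the CPU side for clarity. In the geometry-shader version the same four corners would be emitted with `EmitVertex()` into a triangle strip. The class name is hypothetical:

```java
/** CPU-side illustration of the quad a geometry shader would expand a point sprite into. */
public class PointExpand {
    /** Returns 4 corner vertices as {x, y, u, v}, from a center point and a size. */
    public static float[][] expand(float cx, float cy, float size) {
        float h = size * 0.5f;
        return new float[][] {
            {cx - h, cy - h, 0f, 0f}, // bottom-left
            {cx + h, cy - h, 1f, 0f}, // bottom-right
            {cx - h, cy + h, 0f, 1f}, // top-left
            {cx + h, cy + h, 1f, 1f}, // top-right (triangle-strip order)
        };
    }
}
```

Doing this on the GPU means you only send one vertex per sprite over the bus instead of four.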

VBOs are not really a “type of rendering”. VBOs are simply the most efficient and powerful way of both uploading dynamic data to the GPU every frame AND keeping reusable data indefinitely in video memory, and should always be used nowadays. What you do with the data (indexed vs non-indexed rendering, using different shaders, expanding geometry with geometry shaders, etc) is what can actually differ nowadays.

Why are shaders preferred? I mean, at the moment I have no desire for any fancy rendering. I just want to draw some geometry with textures.

Shaders are the only thing that actually exists. The fixed-function pipeline doesn’t exist anymore. It’s all emulated by the driver using shaders if you use the fixed pipeline, and it’s - you guessed it - deprecated.

Okay so, even if I want the simple functionality like I mentioned, I should use a shader of my own?

In short, yes.

There is (read: was) very little downside to using shaders, and at this point none. Like @theagentd said, fixed function is deprecated, meaning there is no actual support for it (and apparently it doesn’t really exist anymore, which is good to know), so it may even go away entirely in the next version or so. If you are seriously considering programming a game with OpenGL (or improving your skills in general), you’ll want to use VBOs and shaders. They are everywhere, and the time you spend fighting the fixed-function pipeline is not worthwhile.

Your time is better spent with the future.

I agree with @theagentd and @thedanisaur that you should use shaders, but for different reasons.
You should use shaders in case you want your application to run on OpenGL ES (mobile devices) or WebGL (browser) now or in the future.
It is not because the fixed-function pipeline is bad or about to be removed; for the time being and for the foreseeable future, that will not be the case.
GPU vendors, or at least Nvidia, have no interest in removing it, because there is still a host of (professional) applications out there depending on it that don’t have the luxury of being rewritten or ported to “modern” OpenGL.
Those vendors don’t want to anger those customers by discontinuing support for OpenGL <= 2.1.
Saying that the fixed-function pipeline is deprecated is also only true in the context of OpenGL 3.0 and 3.1, where it is marked “deprecated”; in the OpenGL 3.2 “core” profile it has in fact been removed. But that does not mean you cannot use it anymore.
If you want to write an OpenGL application that targets OpenGL 1.5 or even just OpenGL 1.1 today, that is perfectly fine! You can do that. Even on OS X. :wink:
Just be aware that if you do so, you can only use features in more “modern” OpenGL versions via extensions or via the OpenGL >= 3.2 “compatibility” profile which some driver vendors decided to implement (all but Apple on OS X). And if you want to let your application run on a mobile device with OpenGL ES or on the browser platform with WebGL (which is becoming increasingly popular these days), you have to rewrite it to use shaders.
If you want to develop a simple desktop/PC application with simple requirements and don’t want to learn shaders, there is nothing stopping you from using the fixed-function pipeline.

It really has been removed.

Yes, you can still use the fixed-function pipeline with a Compatibility profile context, but in the Core profile you HAVE to use a shader. It’s just not worth learning all the quirks of the fixed-function pipeline (like glEnable(GL_TEXTURE_2D), shudder) anymore when you can do the same thing with fewer quirks and much more flexibility and future-proofing using shaders. Also, OpenGL 3+ with a compatibility profile can’t be used on Macs. You need to use an OpenGL Core profile, so if you want to use anything above 2.1 on Macs you must forsake the fixed-function pipeline.
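For comparison, the whole “one texture modulated by a color” path of the fixed-function pipeline boils down to a shader pair roughly this small. This is only a sketch in GLSL 1.50 / core-profile syntax, and the uniform and attribute names (`uMvp`, `uTexture`, etc.) are made up:

```glsl
// --- Vertex shader: transform the position, pass the texcoord through. ---
#version 150 core
uniform mat4 uMvp;
in vec2 aPos;
in vec2 aTexCoord;
out vec2 vTexCoord;
void main() {
    vTexCoord = aTexCoord;
    gl_Position = uMvp * vec4(aPos, 0.0, 1.0);
}

// --- Fragment shader: sample the texture and modulate by a color, ---
// --- i.e. roughly what glEnable(GL_TEXTURE_2D) + glColor4f used to do. ---
#version 150 core
uniform sampler2D uTexture;
uniform vec4 uColor;
in vec2 vTexCoord;
out vec4 fragColor;
void main() {
    fragColor = texture(uTexture, vTexCoord) * uColor;
}
```

The remaining “hard work” is the one-time boilerplate of compiling, linking, and binding attribute locations.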

It is however faster to get something rendered with the old deprecated functionality. If you have no interest in advanced graphics programming and just want to draw some simple textured surfaces, then you could go with the old functionality to save a bit of time.

Do what @theagentd said, follow him. He is one of the OpenGL experts on this forum.

The future is the core profile, and if you are learning I would recommend going with OpenGL 3.x. I admit that it is easy to get something on the screen with deprecated immediate mode, and there are also several games and libraries using it, but they are slow.

Just for fun, I implemented a class that provides glBegin/glEnd-like behavior but renders through glDrawArrays with a VAO and VBOs. You might want to take a look at it - Batcher.java. A word of warning: use it only if you like (I’m not advertising the engine, in case someone would point that out, just that class).
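The core of such a batcher is tiny. A hypothetical stripped-down sketch (no GL calls, just the accumulation part that would feed `glBufferData`/`glDrawArrays` on `end()`):

```java
import java.util.Arrays;

/** Hypothetical sketch of a glBegin/glEnd-style batcher: accumulates vertices
 *  into one array, ready for glBufferData + glDrawArrays when the batch ends. */
public class MiniBatcher {
    private float[] data = new float[64];
    private int count; // floats written so far

    /** Mimics glBegin: starts a fresh batch. */
    public void begin() { count = 0; }

    /** Mimics glVertex3f: appends one vertex, growing the buffer as needed. */
    public void vertex(float x, float y, float z) {
        if (count + 3 > data.length) data = Arrays.copyOf(data, data.length * 2);
        data[count++] = x; data[count++] = y; data[count++] = z;
    }

    /** In a real renderer this would upload `data` and call glDrawArrays;
     *  here it just reports how many vertices were batched. */
    public int end() { return count / 3; }
}
```

The win over real immediate mode is that all vertices go to the GPU in one upload and one draw call instead of one function call per vertex.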

Before responding to that quote you should’ve read my post to the end. :wink:
I already laid out everything you said in your post. :slight_smile:

Anyways: Of course you should start learning shaders and modern OpenGL. I just wanted to get straight that the fixed-function pipeline is still available. :smiley:

And I am just waiting for people to declare the whole of OpenGL “deprecated” once Vulkan comes out, saying “Vulkan is the future. You should totally and immediately use only Vulkan from now on.”
Even though it takes something like a thousand lines of code to render even a single triangle with Vulkan.
And people will say “Yes, that’s the way it is. And Vulkan is the way to go. And it has sooo much better performance compared to OpenGL, with all its command buffers. And you can even do everything super-multithreaded and take care of every bit of allocated memory yourself. And you have fine-grained control over the residency of resources on the GPU.”
And people will cry tears and want to go back to OpenGL 1.1 when they first come into contact with Vulkan and realize that they get less done than before, need ten times the effort to do anything, and have a hundred times more bugs/errors in their code. :smiley:

And that, kids, is why you should use a game-engine, not write one.

Vulkan is far from a replacement for OpenGL. Vulkan is an alternative that trades complexity for CPU performance, and OpenGL will keep getting updates and live on. I wouldn’t recommend that people who aren’t seriously interested in graphics programming dive into Vulkan, and I probably wouldn’t recommend that people who are interested in actually releasing games use it either, since you’re limited to fairly new cards (OGL4+ Nvidia cards, but only the absolute latest AMD cards). It’ll take a few years until Vulkan becomes a serious alternative for most people. That obviously won’t stop me from diving straight into it out of pure curiosity.

I think KaiHH’s point is exactly your point, but simply one level of abstraction higher:

Just like the fixed-function pipeline piggybacks on shaders, so will OpenGL on Vulkan, eventually.

You can reiterate the above statement with increasing abstraction, until you arrive at libraries like Slick and LibGDX and engines like Unity, or (browser based) environments like Processing, to digital images, and finally to Polaroid. :wink:

nit-pick - Processing isn’t a browser-based environment (although it can target JS in the same way libGDX can).

Otherwise … good point! ;D

The problem I have with this argument for OpenGL and Vulkan is that it’s not that much harder to get a triangle on screen using shaders (and once you add textures and colors, I’d say doing it properly with shaders is even easier). The “hard work” ends up being compiling and linking the shader. Whereas Vulkan is literally a different mindset. Beginners will cry blood trying to get anything on screen in Vulkan without a substantial amount of thinking.

You are of course right that more complex texturing and lighting can be done better using shaders. I absolutely agree. No question about it.
But the problem lies in the definition of “easier.”

I’d say it is not exactly “easier” to do proper lighting and texturing using shaders; rather, with shaders it is for the first time finally possible to do it properly at all, which was not the case with the FFP and its horrible API for combining textures, because it was…, well…, “fixed” in functionality and could only do Gouraud.
And everything new on top of that was done with lots and lots of state in that huge state machine called the OpenGL context. That was horrible. Absolutely. The FFP was very limited. But easy when you did not need anything more complex.

Now the hard work with shaders begins: you need to comprehend shaders. How they work. What their execution model is.
How to connect the vertex specification stage of the OpenGL API to the input variables of a vertex shader. What uniform variables are and how they work. How to transform vertices in the vertex shader with matrices. It now becomes more apparent what coordinate systems are and how to transform from one to the other. You also need to know what interpolated variables are and how those variables get interpolated between the vertex and fragment stages. What a sampler2D is and that it correlates with a texture unit. Then, how to actually do proper lighting and shading? Then, how to connect the output variables of a fragment shader to the framebuffer attachments (okay, gl_FragColor is pretty simple).
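For example, the “transform vertices with matrices” part is nothing more than one 4x4 multiply per vertex, i.e. what `gl_Position = mvp * vec4(pos, 1.0)` does. A hypothetical CPU-side sketch, using column-major layout as OpenGL expects:

```java
/** Tiny illustration of what `gl_Position = mvp * vec4(pos, 1.0)` does per vertex. */
public class Transform {
    /** Multiplies a column-major 4x4 matrix (as GLSL/OpenGL store them) with a vec4. */
    public static float[] mul(float[] m, float x, float y, float z, float w) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++)
            r[row] = m[row] * x + m[4 + row] * y + m[8 + row] * z + m[12 + row] * w;
        return r;
    }

    /** A translation matrix in column-major order: moves points by (tx, ty, tz). */
    public static float[] translation(float tx, float ty, float tz) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;   // identity diagonal
        m[12] = tx; m[13] = ty; m[14] = tz; // translation lives in the last column
        return m;
    }
}
```

Once this clicks, chaining model, view, and projection matrices is just more of the same multiply.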

In short: you need to learn a whole lot of new concepts that you didn’t need before, because now a simple glEnable(GL_TEXTURE_2D) and glColor3f() won’t cut it anymore when you want to draw a simple texture and have colors. No, now you even need to program in a new language (GLSL) and know about shading and lighting models. Whether learning all this is worthwhile has to be decided by everyone on their own.
For me personally of course it is, because I like learning it and playing around with it, really. But just because I like it, doesn’t mean I should propose/advice it to others.

The FFP was simpler, but in the end of course stiff and inflexible once you wanted to get away from simple Gouraud shading with one diffuse texture and finally do Phong and a whole lot of new areas of computing for the first time with shaders.
That fact just lies in the nature of every abstraction: Loss of flexibility and also oftentimes worse performance.

With the FFP you also did not get to know how things work in more detail, and everything was hidden behind a more or less awkward interface. But the question remains: Do you actually need to know when developing a simple game? Do you need to become a master of OpenGL just to draw a single textured and colored something? Do you need the potentially powerful toolset of OpenGL and shaders to render your game? And do we want to recommend new developers to go this route?

Those, at least, are the questions I want us to ask everyone who asks what rendering technique they should use for their Minecraft clones. :slight_smile:

Hence also @Riven’s advice to use a game engine or other good libraries that simplify the lives of game developers a lot.
You get a deeper understanding of everything with shaders, yes, just like you will get a deeper understanding of OpenGL and about the hardware when using Vulkan.

Sorry, somehow I wanted to get that down. :wink:

I wasn’t going to post this, but I ignored the advice to learn shaders and hope to get others not to follow in my footsteps. I wasted a bunch of time fighting that beast. Am I better for it? I’m not so sure, so…

TLDR: If you are doing anything non-trivial you should use shaders. If it’s trivial enough for fixed function you’re better off spending the time learning how to use an existing engine.


I still disagree, and you basically made my argument for me. If you are doing anything non-trivial, say for instance a cube world game, you should use shaders.

[quote]Now the hard work with using shaders begins with that you need to comprehend shaders. How they work. What their execution model is.
How to connect the vertex specification stage of the OpenGL API to the input variables of a vertex shader. What uniform variables are and how they work. You also need to know what interpolated variables are and how those variables get interpolated between the vertex and fragment stages. What a sampler2D is and that it correlates with a texture unit. Then, how to actually do proper lighting and shading? Then, how to connect the output variables of a fragment shader to the framebuffer attachments (okay, gl_FragColor is pretty simple).
[/quote]
You have to learn similar, basically throwaway, knowledge for the fixed-function pipeline.

How to transform vertices in the vertex shader with matrices. It now becomes more apparent what coordinate systems are and how to transform from one to the other.

Not if you actually learned anything using the fixed function pipeline.

[quote]Whether learning all this is worthwhile has to be decided by everyone on their own.
[/quote]
Then use an engine like Riven said.