A rant on OpenGL's future

Some background

TL;DR: OpenGL has momentum. The number of games utilizing it is steadily increasing, which has forced GPU vendors to fix their broken drivers. Compared to 2 years ago, the PC OpenGL ecosystem is looking much better.

Almost everyone on this forum has at some point worked with OpenGL, either directly or indirectly. Some of us like to go deep and use OpenGL directly through LWJGL or LibGDX, while others (a probable majority) prefer the increased productivity of abstractions over OpenGL like LibGDX’s 2D support, JMonkeyEngine or even Java2D with OpenGL acceleration. In essence, if you’re writing anything more advanced than console games in Java, you’re touching OpenGL in one way or another. Outside of Java, OpenGL is used mainly on mobile phones, as both Android and iOS support it, but a number of AAA PC games have used OpenGL recently as well.

  • The id Tech 5 engine is based solely on OpenGL and is used for RAGE and the new Wolfenstein game, with two more games in development using the engine.
  • Blizzard has long supported OpenGL as an alternative to DirectX in all their games, to allow Mac and Linux users to play them.
  • Valve is pushing a move to OpenGL. They’re porting the Source engine to it, and some of the latest Source games (Dota 2, for example) default to OpenGL instead of Direct3D, even on Windows.

These are relatively recent events that have essentially started a chain reaction of improvements in OpenGL support throughout the gaming industry. The push by developers to support or even move to OpenGL has had a huge impact on OpenGL driver quality and on how fast new extensions are implemented by the 3 graphics vendors. Some of you may remember the months following the release of RAGE, when people with AMD cards had a large number of issues with the game, and RAGE was not exactly using cutting edge features of OpenGL. During the development of We Shall Wake I’ve had the pleasure of encountering a large number of driver bugs, but I’ve also gained a very interesting perspective on how the environment has changed.

  • Nvidia’s OpenGL drivers have always been of the highest quality among the three vendors, so there has never been much to complain about. My only complaint is that it’s impossible to report driver bugs to them, as they never respond to anything. This is a bit annoying: when you actually do find a bug, it’s almost impossible to get your voice heard (at least as a non-professional graphics programmer working on a small project).
  • AMD’s drivers are significantly better today than they were a year or two ago, and almost all of the latest important OpenGL extensions are supported. Their biggest remaining problem is that they lag slightly behind on new extensions, leading to some pretty hilarious situations, like AMD holding a presentation on optimized OpenGL rendering techniques that were only supported by their competitors.
  • Even more impressive are Intel’s advances. Around a year ago I had problems with features that date back to OpenGL 1.3. Their GLSL compiler was a monster with bugs that should’ve been discovered within hours of release. Even worse, they were very quick to discontinue driver development as soon as a new integrated GPU line was released. Today they have a respectable OpenGL 4 driver covering all their OGL4-capable integrated GPUs, and they also support a majority of the important new extensions. Intel also takes the prize for best developer support: I have reported 3 bugs to them, and each was fixed in the following driver release.
  • OpenGL on OSX has also improved a lot lately. The latest drivers support OpenGL 4.1 on all GPUs that have the required hardware, but most cutting-edge features are still missing.

What’s wrong with OpenGL?

TL;DR: OpenGL is horribly bloated. For the sake of simpler, less buggy and faster drivers that can be developed more quickly, they need to get rid of unnecessary functionality.

We’ll start by taking a look at OpenGL’s past. The main competitor of OpenGL, DirectX, has traditionally had (and still has) a large market share, and not without good reasons. A big difference between the two is how they handle legacy functionality. DirectX does not carry a significant amount of backwards compatibility. The most important transition happened between DirectX 9 and 10: they completely remade the API from the ground up to better fit the new generation of GPUs with unified architectures that were emerging. This was obviously a pain in the ass for developers, and many games are still being developed with DirectX 9, but it was a very good choice. Why? Because the alternative was what OpenGL did.

First of all, OpenGL 3 (the functional equivalent of DirectX 10) was delayed for several months, allowing DirectX to gain even more of a head start. Secondly, instead of starting from scratch, they decided to deprecate old functionality and eventually remove it in later versions. Sounds like a much easier transition for developers, right? Here’s the catch: they also provided specifications for a compatibility mode. Since all 3 vendors felt obliged to support this compatibility mode, they could never actually get rid of the deprecated functionality. In essence, OpenGL 3 is OpenGL 2 with the new functionality nastily nailed to it. The horror story continued with OpenGL 4 and the revolutionary new extensions that will make up the future OpenGL 5.

OpenGL is so ridiculously bloated with functionality that hasn’t existed in hardware for over 10 years. Nvidia, AMD and Intel are still emulating completely worthless functionality with hidden shaders, like fixed functionality multitexturing and the built-in OpenGL lighting system. Implementing and maintaining these functions for every new GPU they release is a huge waste of resources for the three vendors, and it is one of the sources of the traditionally bad driver support OpenGL has had; it was simply not worth it until more games using OpenGL started popping up. A fun fact is that Apple actually decided not to support the compatibility mode, so to access OGL3+ on OSX you have to specifically request a context without compatibility mode.
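For reference, here’s roughly what requesting such a core context looks like with LWJGL 2’s ContextAttribs. A minimal sketch, not production code; the window size and the 3.2 version are placeholders, and LWJGL 3/GLFW has equivalent window hints (GLFW_OPENGL_PROFILE etc.).

```java
import org.lwjgl.opengl.ContextAttribs;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.PixelFormat;

public class CoreContextExample {
    public static void main(String[] args) throws Exception {
        // Ask for a 3.2 core, forward-compatible context. Without these
        // attribs, OSX only hands you a legacy 2.1 context.
        ContextAttribs attribs = new ContextAttribs(3, 2)
                .withProfileCore(true)
                .withForwardCompatible(true);

        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create(new PixelFormat(), attribs);

        // ... render loop goes here ...

        Display.destroy();
    }
}
```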

Sadly, the existence of the compatibility mode has encouraged many important industrial customers of the graphics card vendors (CAD and other 3D programs) to become dependent on it, so it’s essentially here to stay. So we have 3 vendors, each with their own ridiculously massive, unmaintainable driver, and we just keep getting more and more functionality. Right now, between DirectX 11 and 12, we’re seeing a shift in how things are done in hardware similar to the one between DirectX 9 and 10. Interestingly, OpenGL is leading here thanks to extensions that expose these new hardware features. They are available right now from all vendors with the latest beta drivers, except Intel, which is lacking a few of them. In essence, we already have the most important features of a theoretical OpenGL 5.

Here’s what’s wrong with OpenGL at the moment: there are too many ways of doing the same thing. Let’s say you want to render a triangle. Here are the different ways you can upload the exact same vertex data to OpenGL.

From oldest to newest:

  • Immediate mode with glBegin()-glEnd(). Generally slow, but easy to use. (1992)
  • Vertex arrays with data reuploaded each frame. Faster than immediate mode, but still slow since the data is reuploaded each frame. (1997)
  • Create a display list. Fastest on Nvidia hardware for static data. (1992)
  • Upload to a VBO with glBufferData() each frame. Generally stable performance, but slow due to additional copies and complicated memory management in the driver. (2003)
  • Allocate a VBO once, upload to it with glBufferSubData() each frame. Slow if you’re modifying the same buffer multiple times per frame. Also requires copying of data. (2003)
  • Map a VBO using glMapBuffer() and write to the mapped memory. Avoids an extra copy of the data, but forces synchronization between the GPU, driver thread and game thread. (2003)
  • Map a VBO using glMapBufferRange() with GL_MAP_UNSYNCHRONIZED_BIT and handle synchronization yourself. Avoids the extra copy and synchronization with the GPU, but still causes synchronization between the driver thread and the game thread. (2008)
  • Allocate a persistent, coherent buffer, map it once and handle synchronization yourself. No extra copy, no synchronization. Allows for multithreading. (2013) (See the sketch after this list.)
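Here’s a minimal sketch of that last method, assuming LWJGL 3-style static imports (the class name, the three-region split and the buffer sizes are placeholders of mine): allocate immutable storage once, map it once, and fence each region so you never overwrite data the GPU is still reading.

```java
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL32.*;
import static org.lwjgl.opengl.GL44.*;

import java.nio.ByteBuffer;

public class PersistentRingBuffer {
    private static final int REGIONS = 3;               // triple buffering
    private static final int REGION_SIZE = 1024 * 1024; // 1 MB per frame
    private static final int BUFFER_SIZE = REGIONS * REGION_SIZE;

    private final int vbo;
    private final ByteBuffer mapped;
    private final long[] fences = new long[REGIONS];

    public PersistentRingBuffer() {
        vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        int flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
        // Immutable storage, mapped once for the lifetime of the buffer.
        glBufferStorage(GL_ARRAY_BUFFER, BUFFER_SIZE, flags);
        mapped = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUFFER_SIZE, flags);
    }

    /** Returns this frame's writable region, waiting if the GPU still uses it. */
    public ByteBuffer begin(long frame) {
        int region = (int) (frame % REGIONS);
        if (fences[region] != 0) {
            // Block until the GPU has finished reading this region.
            glClientWaitSync(fences[region], GL_SYNC_FLUSH_COMMANDS_BIT, Long.MAX_VALUE);
            glDeleteSync(fences[region]);
            fences[region] = 0;
        }
        mapped.position(region * REGION_SIZE).limit(region * REGION_SIZE + REGION_SIZE);
        return mapped;
    }

    /** Call after issuing the draw calls that read this frame's region. */
    public void end(long frame) {
        fences[(int) (frame % REGIONS)] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }
}
```

With a coherent mapping there’s no flush call at all; the only remaining cost is the fence wait, and that only blocks if the CPU gets more than two frames ahead of the GPU.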

So we literally have 8 different ways of uploading vertex data to the GPU, and the performance of each method depends on GPU vendor and driver version. It took me years to learn which ones to use for what kind of data, which ones are fast on which hardware, and which ones to avoid in which cases. Today, all but the last one are completely redundant. They simply complicate the driver, introduce more bugs in the features that matter and increase development time for new drivers. We literally have code from 1992 (the year I was born, I may add) lying right next to the most cutting edge method of uploading data to OpenGL from multiple threads while avoiding unnecessary copies and synchronization. It’s ridiculous. The same goes for draw commands. The non-deprecated draw commands currently in OpenGL 4.4 (+ extensions):

  • glDrawArrays
  • glDrawArraysInstanced
  • glDrawArraysInstancedBaseInstance
  • glDrawArraysIndirect
  • glMultiDrawArrays
  • glMultiDrawArraysIndirect
  • glDrawElements
  • glDrawRangeElements
  • glDrawElementsBaseVertex
  • glDrawRangeElementsBaseVertex
  • glDrawElementsInstanced
  • glDrawElementsInstancedBaseVertex
  • glDrawElementsInstancedBaseVertexBaseInstance
  • glDrawElementsIndirect
  • glMultiDrawElementsBaseVertex
  • glMultiDrawElementsIndirect
  • glMultiDrawArraysIndirectBindless <----
  • glMultiDrawElementsIndirectBindless <----

Only the last two functions, marked with arrows, are necessary to do everything the above commands do. This bloat needs to go. Oh, and here’s another funny piece of information: GPUs don’t actually have texture units the way OpenGL exposes them anymore. We could immediately deprecate texture units as well and move over to bindless textures any time we want. Imagine the resources the driver developers could spend on optimizing functionality that is actually useful instead of on maintaining legacy functions, not to mention the smaller number of bugs, as there would be fewer functions to have bugs in the first place.
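To see why those two can cover everything: an indirect draw reads its parameters from a buffer of DrawElementsIndirectCommand structs, five GLuints per draw (count, instanceCount, firstIndex, baseVertex, baseInstance, as defined in the spec). Every command in the list above is just a special case of filling that struct differently. A rough sketch assuming LWJGL 3-style bindings, with made-up mesh numbers:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL40.*;
import static org.lwjgl.opengl.GL43.*;

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;

public class IndirectDrawExample {
    private static final int COMMAND_SIZE = 5 * 4; // five GLuints per command

    /** Builds one command per mesh and uploads them to an indirect buffer. */
    public static int buildCommands(int numDraws) {
        ByteBuffer cmds = BufferUtils.createByteBuffer(numDraws * COMMAND_SIZE);
        for (int i = 0; i < numDraws; i++) {
            cmds.putInt(36);     // count:         indices per mesh (a cube here)
            cmds.putInt(1);      // instanceCount: a plain glDrawElements is 1
            cmds.putInt(0);      // firstIndex:    offset into the index buffer
            cmds.putInt(i * 24); // baseVertex:    offset into the shared VBO
            cmds.putInt(i);      // baseInstance:  fetch per-draw data by instance
        }
        cmds.flip();

        int indirect = glGenBuffers();
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect);
        glBufferData(GL_DRAW_INDIRECT_BUFFER, cmds, GL_STATIC_DRAW);
        return indirect;
    }

    /** One call replaces numDraws separate glDrawElements* calls. */
    public static void draw(int numDraws) {
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, numDraws, 0);
    }
}
```

Bump instanceCount and you have instancing; fill in baseVertex/baseInstance and you have the Base* variants; pass a drawcount of 1 and you have a plain draw. None of the other commands in that list adds any information.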

Why fix what isn’t broken?

Competition. Mantle is a newcomer in the API war but has already gained developer support in many released and upcoming games thanks to its more modern, simpler API that better fits the GPU hardware of today (well, that and probably a large amount of money from AMD). DirectX 12 will essentially be a cross-vendor clone of Mantle. Yes, OpenGL is far ahead of DirectX right now thanks to extensions, but that won’t last forever unless they can make the API as simple, straightforward, fast and bug-free as the competition. We’re still behind Mantle when it comes to both functionality and simplicity. OpenGL is too complicated, too bug-ridden and too vendor dependent. Unless OpenGL 5 wipes the slate clean and starts from scratch, it’s time to start moving over to other APIs where the grass is greener.

DirectX is a construction company that builds decent buildings but tears them down every other year to build new, better ones. Mantle is a shiny new sci-fi prototype building. OpenGL started as a tree house 20 years ago and has had new functionality nailed to it for way too long, so technically it’s just as advanced as DirectX and Mantle; it’s just that it’s all attached to a frigging tree house, so it keeps falling apart and is basically unfixable.

Agreed, and it has been talked about and mentioned many times by many famous people. Khronos is the organization that creates OpenGL, and OpenGL is an open standard. AMD, Nvidia and Intel are all part of the “board of directors”, so my question is: why don’t they simply make these changes? It seems like the obvious choice, and it would in fact make it cheaper and easier for them to create their own drivers.

It seems so painfully obvious that I know I must be missing something.

I think that the root of this is that the average person does not get hardware capable of using new functionality until years after it has made its debut. We then need to cater to the lowest common denominator. DX11 finally seems to be getting widely used in games; it is becoming the new DX9. However, because there are still people out there with computers that only support up to DX9, game programmers have to use old crap.

I think it is funny how OpenGL 3 is now on freaking phones and the like.

I REALLY think they could completely axe everything under OpenGL 2. For the love of God, I do not get why that shit still exists. I think it adds the most bloat. It seems like once shaders came around, all the old crap should have been dropped.

Or maybe, we just adopt Mantle?

One of the (many?) things that has always impressed me about the Java Gaming community, and developers in general, is their ability to change and adapt. As Java developers we (un)naturally add abstractions and layers to everything we do. As you said, many of us are now using engines for 2D and 3D work, so why wouldn’t we just implement those engines against the next graphics library?

Are we all that attached to OpenGL? Before OpenGL I spent time with Java2D, before that Mode 13h and Chain 4, before that STOS and blit chips, before that peeks and pokes, before that my calculator :slight_smile:

Things move on but games design and games development stay interesting no matter what the medium.

Cheers,

Kev

OpenGL is currently the most viable cross-platform hardware accelerated rendering API. Mantle is only relevant for the newest AMD chips, Metal for Apple’s newest chips, and DirectX for Windows.

Java is a powerful cross-platform language; it would seem fitting that we have a powerful cross-platform rendering API too.

edit - The idea is, this is about more than just resistance to change. OpenGL has something very important to the Java community that the alternatives do not. As long as we have no better alternative, it makes sense to push our only option to be better.

WHS. We need OpenGL to hit as many targets as possible with the least refactoring. Wanking about making multiple “back ends” for graphics rendering is just wasting engineering time that could be put into making games.

If we were really adaptable we’d be using Unity :wink:

Cas :slight_smile:

Mantle is only for AMD chips (and Metal only for Apple chips) right now, but OpenGL was only available for certain cards when it first came out too. My point was that we write abstractions so we can switch when and if we need to. Why would we hang on to any particular library once it gets outdated and bloated and the other options become viable?

Cas, Cas, Cas, “wanking” about making multiple back ends is exactly why we manage to hit so many platforms with so little game development effort. The problem with it for you is that you don’t want to use anyone else’s library when someone has already done the wanking for you and the rest of the community :slight_smile:

Cheers,

Kev

We don’t have to hold onto OpenGL; as soon as Mantle becomes as supported, capable, stable and multiplatform as OGL, it’ll probably be a better choice, but I have a feeling that isn’t gonna happen anytime soon.
Also, most of the people here have spent years (or at least months) learning OpenGL. Graphics libraries are not easy to wrap your head around, and if we have the choice between fixing OpenGL and learning a new graphics API, I think most of the community here will always vote for fixing OGL (me included :slight_smile: ).

I agree on removing ALL the redundant features and going with only as few commands as needed (that way not only does OGL become less bloated, but the drivers are likely to be more stable too).
It’s painful to see newcomers struggling with OpenGL; they keep mixing up old and new functionality, and honestly it’s not their fault. Fix yo sh*t, OGL.

Also it would be nice to finally see a complete and capable OGL debugger. I know Valve has one on the way but it isn’t out yet, is it? :persecutioncomplex:

Doesn’t OpenGL ES cut out most of the legacy?

Some of it at least.

There have been other threads about what might be done to alleviate the pain points. For example, it’d probably be handy if OpenGL were a portable library on top of some other back end. OpenGL itself never really needed to be a “driver” as such. In fact a portable OpenGL library would just be the best thing ever.

Cas :slight_smile:

Regal is moving in that direction, though I’m not sure if it’ll ever get to the point of emulating OpenGL over a non-GL API (Mantle, Direct3D, etc).

Well, finally, someone has said why I was respectfully avoiding OpenGL in the first place. It is a nightmare for newcomers, and it has taken backwards compatibility to a whole new level of pain. Paradoxically, that is also the reason it is so popular as a platform: your legacy programs will not break.

I’m split…

There is really no good solution. If you run to a better architecture, then you’ll be “out of date” and have to move to another one after a few years. Just like with a new car, your programs will have to be constantly updated. I mean, OpenGL has a longevity that is unlike the console markets we have today. Remember game cartridges… yes, that will be your programs in the next 20 years.

At the same time, I think that backwards compatibility needs to be handled differently. Like, you could support the old modes, but then just write a converter (a botched one) that translates the old code to the new architecture. It would completely butcher older games’ speed, forcing them to upgrade or live with crappy performance. Honestly, though, not even this approach would work. We programmers have too much pride to change code every two years… seconds…

I respect OpenGL, but like all programs that try to target everything, it has become bloated. If you want cutting edge, you have to move away from the “jack-of-all-trades”. Seriously, the moment the new technologies gain some ground we just have to leverage them. (Though it is Java’s fault that we believe targeting all platforms is beautiful. Don’t get me started on how bloated Java is…) Regardless, I know that if OpenGL keeps on its current course, it’ll be a “master of none” of the technologies it now has.

Hi

As you use some bleeding edge features, you know you’re on the front line; it’s not surprising that you find bugs.

LibGDX is a mid-level API, and there are other actively maintained Java bindings for the OpenGL and/or OpenGL ES APIs, including JGLFW (used in LibGDX), Android’s OpenGL bindings and JOGL (JogAmp).

The built-in Java2D OpenGL acceleration isn’t very optimized, and it makes Java2D performance inconsistent. Slick2D, Golden T Game Engine (GTGE) and GLG2D are more efficient.
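For those who want to test the built-in pipeline anyway, it’s toggled with a system property (a minimal example; the class name is just a placeholder):

```java
public class Launcher {
    public static void main(String[] args) {
        // Must be set before any AWT/Swing classes initialize.
        // Equivalent to: java -Dsun.java2d.opengl=true -jar mygame.jar
        System.setProperty("sun.java2d.opengl", "true");
        // ... create the Frame and start the game loop ...
    }
}
```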

OpenGL is used a lot in scientific visualization and in CAD software too.

Maybe you should mention that the situation is quite different in mobile environments. Intel hardware is still far behind when it comes to offscreen rendering.

I don’t think about “market share” and things like that, but I have worked as an engineer in computer science specialized in 2D/3D visualization for more than 7 years, and many of us are mostly satisfied with the way OpenGL has evolved, even though the slowness of some decision makers was really annoying. I prefer evolution with a certain continuity over brutal, questionable changes. Microsoft isn’t a good example; it is very good at creating brand new APIs that only work with one or two versions of Windows and then abandoning them. Two pieces of software I have worked on already existed in the nineties, and not every corporation can afford to rewrite its rendering code every three years. OpenGL isn’t only for games; it can’t be designed only to please some game programmers.

I know that almost nobody agrees with me about that here but I prefer fighting against planned obsolescence. What theagentd wrote isn’t apolitical. “Science without conscience is but the ruin of the soul” (François Rabelais).

Spasi talked about Regal. JOGL provides immediate mode and fixed pipeline emulation too, in ImmModeSink and PMVMatrix, which is very useful when writing applications compatible with both OpenGL and OpenGL ES.

The only bugs I’ve found in cutting edge features are the following:

  • Nvidia: When persistent buffers were first released, they had forgotten to remove the error check that rejects using a buffer while it is mapped (which was the whole point), so persistent buffers were useless. (Fixed)
  • AMD: BPTC texture compression works, but can’t be uploaded with glCompressedTexSubImage() as that throws an error.
  • Intel: glMultiDrawIndirect() does not work if you pass in an IntBuffer with commands instead of uploading them to the indirect draw buffer VBO.

All other bugs were in really old features:

  • Intel: Rendering to a cube map using an FBO always made the result end up on the first face (X+), regardless of which face you bound to the FBO. (Fixed)
  • Intel: Their GLSL compiler was pretty much one huge bug. vec3 array[5]; was supported, while vec3[5] array; threw unrelated errors all over the place. (Fixed)
  • Intel: A 4x3 matrix multiplied with a vec4 resulted in a vec4 when it should result in a vec3. (Fixed)
  • AMD: Sometimes on older GPUs the result of FBO rendering becomes grayscale. I have no idea. (Fixed in newer GPUs at least)
  • AMD: Allocating texture mipmaps in reverse order (allocate level 10, upload level 10, allocate level 9, upload level 9, …) hard locks the GPU. You need to allocate all mipmap levels before you start uploading to them. (See the sketch after this list.)
  • AMD: Not sure if this is a bug, but depth testing against a depth buffer with depth writing disabled and reading the same depth buffer in the shader causes undefined results on AMD cards only.
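As a workaround for the mipmap bug above, one way to sidestep the allocation-order question entirely (on GL 4.2+ hardware, or via the ARB_texture_storage extension) is to allocate the whole chain in one call and only then upload. A minimal sketch assuming LWJGL-style static imports; the format and sizes are placeholders:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL42.*;

import java.nio.ByteBuffer;

public class SafeMipmaps {
    /** Allocates a full mipmap chain up front, then uploads the level data. */
    public static int createTexture(int size, int levels, ByteBuffer[] pixels) {
        int tex = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, tex);
        // glTexStorage2D allocates immutable storage for ALL levels at once,
        // so there is no allocation order for the driver to trip over.
        glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, size, size);
        // Uploads can now happen in any order, including reverse.
        for (int level = 0; level < levels; level++) {
            int levelSize = Math.max(1, size >> level);
            glTexSubImage2D(GL_TEXTURE_2D, level, 0, 0, levelSize, levelSize,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels[level]);
        }
        return tex;
    }
}
```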

These are the ones I can remember right now anyway. This is why I like Nvidia’s drivers, by the way.

Those examples were not meant to be exhaustive. I’m only saying that no matter how you render stuff, OpenGL matters to you, since pretty much everything (except unaccelerated Java2D, of course) relies on OpenGL.

I did mention that OpenGL’s use in other places is what made them decide to add so much backwards compatibility.

I can’t say much about the mobile market since I’ve never developed for it, but I do know that OpenGL ES 2 is essentially OpenGL 3 without compatibility mode. Is that what you meant?

I’m not entirely sure I understood what you’re saying. You’re saying that I’m not taking all applications of OpenGL into account?

This is the exact reason why we don’t need driver-implemented immediate mode anymore. It took me just a few hours to write a decent clone of immediate mode with support for matrices, colors and texture coordinates. It even takes advantage of persistently mapped buffers to upload the vertex data if they’re supported by the driver. Sure, it’s probably not as fast as the driver’s optimized immediate mode, but frankly, if you’re using immediate mode you don’t care about CPU performance in the first place.
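Something like this is all it takes (a simplified sketch of the idea, not the actual implementation; vertex attribute setup and the shader are assumed to live elsewhere, and the buffer size is arbitrary): batch the vertices on the CPU side, then hand them to a VBO in one go.

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

/** A tiny glBegin()/glEnd()-style wrapper batching vertices into a VBO. */
public class ImmediateBatch {
    private final FloatBuffer data = BufferUtils.createFloatBuffer(65536);
    private final int vbo = glGenBuffers(); // requires a current GL context

    private int vertexCount;

    public void begin() {
        data.clear();
        vertexCount = 0;
    }

    // Colors, texture coordinates and matrices are just more floats per
    // vertex (or uniforms) in the same scheme.
    public void vertex(float x, float y, float z) {
        data.put(x).put(y).put(z);
        vertexCount++;
    }

    public void end() {
        data.flip();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, data, GL_STREAM_DRAW);
        // Vertex attribute setup and the shader bind live elsewhere.
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }
}
```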

I believe that the OpenGL drivers have too many features, which slows down their development and increases the number of bugs. If the OpenGL spec only contained the bare minimum required to do everything it currently does, it’d shift the load away from the driver developers and let them focus on optimizing the drivers and adopting new functionality more quickly. The older “features” could easily be provided by, for example, open source libraries that expose an API more familiar to people coming from older versions of OpenGL, and I believe it would not be a significant effort for them to implement the parts they need. The important thing is that we’d at least get rid of the completely unused game-related features, like fixed functionality multitexturing and lighting.

I think it’s clear that the game industry is pretty open to this mentality of quickly deprecating old features; see how quickly developers jumped onto Mantle, for example. Maybe you’re right, though. Maybe what we really need isn’t a new OpenGL version but a version of OpenGL that is specifically made for games. But that’s pretty much what OpenGL 5 SHOULD be. OpenGL 5 most likely won’t have any new hardware features, so if they completely dropped backwards compatibility, it’d essentially be the same functionality as OpenGL 4 but with an API targeted at games. If the “game version” of OpenGL continued to build on the new OpenGL 5, we’d essentially get what we want from OpenGL 5. Non-game applications could still use OpenGL 4, with new features introduced as extensions over time. This wouldn’t necessarily decrease the load on driver developers, but it would make the game version of OpenGL faster to develop and maintain.

If the game industry is amenable to rapidly iterating changes, then the CAD/CAM and medical imaging industries can get off their sodding collective idle arses and rewrite their fucking code to use the new APIs, seeing as they’ve got, ooh, about 10x the money compared to the game industry. Jeez, why are industry programmers so fucking lazy? Games developers practically rewrite everything from scratch with every product they make in the AAA industry.

Cas :slight_smile:

Well yes, but if you rewrite a medical program and it doesn’t work, or doesn’t work in time, or contains new bugs, people are in deep shit.
And I guess it’s the same for manufacturing software and all that.
The problem is that the testing of new code is just so extensive and time consuming that nobody will touch a running system.

Word.

Actually, there are several “accelerated” pipelines not relying on OpenGL, one using Direct2D/Direct3D and another one based on XRender.

OpenGL ES on mobile is a nightmare; that’s why I don’t spend a lot of time on it for now. It works very well with Nvidia GPUs. The rest is still disappointing. Most of the time, when both ES 1 and ES 2 are available, ES 1 works correctly and the ES 2 implementation is semi-broken.

Personally, I prefer writing programs that support OpenGL from version 1 to version 4 rather than only supporting the very latest hardware, especially for my personal projects. I don’t have tons of money to pay a computer artist, so my main game will remain ugly, and I don’t see the point of creating an ugly game that requires OpenGL 4. I don’t want to encourage people to upgrade their hardware very often. I just do my best with what I have. I prefer disabling some effects that require high end graphics cards over asking players to buy expensive hardware to play. I don’t try to get the most performance out of the latest hardware; I try to do the same on very old hardware. The production of monitors, memory, hard disks, graphics cards, … is polluting. I’m happy when I learn that someone who owns a Pentium 2 MMX can still play my game; I don’t tell him “go spend 500 USD on a brand new graphics card”, and I would be very happy if my game worked on the Raspberry Pi. I really want to criticize the capitalist industry. When I play some very recent games, I quickly find them boring; there is very often a problem of “replayability”. I don’t want to blindly follow the trends. Some years ago, some developers claimed that we would all drop Java in favor of Flex; they were wrong. In the industry, we aren’t lazy, our cycles are just a lot longer; we can’t take everything and throw it into the trash bin every year.

I don’t think we need a specific version of OpenGL for games. I just think that the current situation isn’t bad for everyone, and that evolving with some continuity is good even for the gaming industry. I prefer thinking about “approaching zero driver overhead” to paying attention to Mantle.

I get that you don’t think it’s worth higher hardware requirements, but you’re really not speaking for everyone. The primary reason I targeted OGL3 for We Shall Wake is performance. The only OpenGL 4 features I take advantage of, if they’re available, are performance or memory optimizations. By limiting the game to newer hardware I can use faster techniques, which actually lowers the hardware needed to hit our target performance. Also, we do have modelers, so our game will actually look good if we use expensive shaders and postprocessing.

The pollution argument is ridiculous. Why would you encourage people to waste energy on drawing pretty triangles at all if you care so much about pollution? Newer hardware is more energy efficient.

Finally, I couldn’t care less whether OpenGL is good enough for other applications, or even for you. It’s not good enough for me, and I’m sure a lot of game developers would agree. It’d be extremely naive of me to think that you or the CAD industry will care about what I want. This is a game development forum. If WE don’t say what we want, no one else is gonna do it for us.

Julien has some unorthodox views on the preservation of landfill hardware. My personal take on it is that if we didn’t die the world would be full of old grumpy bastards like me and no room for the young 'uns to flourish, and thus it should be with all the fruits of our endeavours. Obsolescence is critical to advancement.

I would like an OpenGL API with everything old thrown out, keeping only the absolute latest cutting edge features. And I’d like that API to be built on a client-installable, redistributable front end, with all the functionality common between vendors (e.g. the GLSL compiler) moved into that layer.

And then I’d like someone to write a Java 3D library that can compete with UDK4 or Unity 5 based on it.

Cas :slight_smile:

I speak mainly for myself first, but sometimes even software vendors share my concerns when they have to support very old hardware, and most OpenGL experts I have met who work in scientific visualization aren’t fond of Direct3D precisely because of what you claim to be the way to go: dropping tons of features even though it breaks compatibility.

However, I wrote “I know that almost nobody agrees with me about that here but I prefer fighting against planned obsolescence”, I know that when it becomes clearly political, almost nobody agrees with me and I’m fine with that.

Yes, I understand your position, and it’s great to read your advice as you explore some things that I’ll have to use one day, especially when it comes to VBOs. Riven’s posts were very useful too.

I don’t want to force people to renew their hardware when it isn’t absolutely necessary. Newer hardware isn’t always more energy efficient; video game machines have become less energy efficient, except those made by Nintendo. There are several aspects to take into account about pollution, including energy consumption, energy production and the production of the hardware that consumes this energy. Columbite–tantalite (coltan) is used to manufacture tantalum capacitors in many electronic devices, and its extraction pollutes rivers and lakes (and I won’t even get into the ethics of mining in the DRC :frowning: )… What is ridiculous about that?

I don’t claim that OpenGL shouldn’t evolve. I’m saying it shouldn’t evolve by taking Direct3D and Microsoft APIs as examples, and manufacturers can still make tons of optimizations in forward-compatible profiles. What’s wrong with that? Isn’t that enough to satisfy everyone?