A rant on OpenGL's future

But I can still run old DX apps with no problem. Nobody’s claiming backward compat should be ignored…it simply shouldn’t burden developers.

Julien, I think you’ve completely missed theagentd’s rant… he’s saying by all means keep old shitty hardware supported by old shitty drivers. It’s not like software just breaks all on its own over time (trololololol). What we want is new stuff, for new things. Then we can write new stuff for the new things, instead of having to write new stuff to work on old things, which burdens everybody, massively, probably contributing more to waste than anything.

As far as landfill and pollution goes… every bit of hardware sold is already landfill, eventually. If we slow down the rate of hardware production the people making that stuff will necessarily need to sell less of it… but make the same amount of money. So that means shittier hardware will both remain shitty and simultaneously become hugely more expensive. If you’re concerned about pollution then maybe a campaign for environmentally friendly materials and recycling might be a more useful venture than supporting old, slow, power-hungry, bug-ridden, difficult-to-maintain hair-loss-instigating hardware.

Cas :slight_smile:

Aren’t the forward compatible profiles designed for that?

princec, that’s why I use a scene graph: the boring low-level code has to be written once in the renderer(s), and then you take care of the rest. Ideally, I’d be happy if I could write only one shader instead of writing the same shader three times in order to support all GLSL versions. theagentd can play with the new stuff in the forward-compatible profiles, can’t he?
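For what it’s worth, here’s a minimal illustration of that duplication (identifier names are made up for the example): the same trivial vertex shader written once per GLSL dialect, with a GLSL ES flavour being a third copy on top of these two:

```java
// A minimal illustration, not production code: the same pass-through vertex
// shader in two GLSL dialects. A single shader language version would remove
// exactly this duplication.
final class ShaderSources {

    // GLSL 1.20 (GL 2.1-class hardware): attribute/varying, old-style built-ins.
    static final String VERT_120 =
          "#version 120\n"
        + "attribute vec3 inPosition;\n"
        + "attribute vec2 inTexCoord;\n"
        + "uniform mat4 uMVP;\n"
        + "varying vec2 vTexCoord;\n"
        + "void main() {\n"
        + "    vTexCoord = inTexCoord;\n"
        + "    gl_Position = uMVP * vec4(inPosition, 1.0);\n"
        + "}\n";

    // GLSL 3.30 core (GL 3.3+): in/out replaces attribute/varying, attribute
    // locations are explicit, and the deprecated built-ins are gone.
    static final String VERT_330 =
          "#version 330 core\n"
        + "layout(location = 0) in vec3 inPosition;\n"
        + "layout(location = 1) in vec2 inTexCoord;\n"
        + "uniform mat4 uMVP;\n"
        + "out vec2 vTexCoord;\n"
        + "void main() {\n"
        + "    vTexCoord = inTexCoord;\n"
        + "    gl_Position = uMVP * vec4(inPosition, 1.0);\n"
        + "}\n";
}
```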

Degrowth doesn’t automatically lead to an increase in unemployment (especially in an economy that follows the principle of “work less so that everyone works”), just as growth doesn’t automatically lead to a decrease in unemployment. I understand what you mean by “sell less of it… but make the same amount of money”, but maybe then some people should find another job, and most of the money won’t go into salaries… It’s almost no longer possible to keep exactly the same job for a whole lifetime; I work for the same employer for only about 2 years on average.

I kinda like you Julien, but for some reason, no matter which topic it is, with you it always ends in a political debate / issue. I’m not even sure how :stuck_out_tongue:

It’s doable because what seems to be evident or factual for some people here can be neither evident nor factual for someone else.

All of this profile stuff is duct tape and super glue.

Yeah, it’s like coding Perl around Java to get it to work.

Roquen, please can you elaborate?

Okay… your mission… should you choose to accept it, is to read these and let me know if there’s any near-future hope:


http://www.opengl.org/registry/

[quote=“Roquen,post:29,topic:50106”][/quote]
Yes, there is.

Fuck yeah 8) It’s finally happening.

I guess nvidia has some of this: https://developer.nvidia.com/opengl-driver

[quote]For the next generation of OpenGL – which for the purposes of this article we’re going to shorten to OpenGL NG – Khronos is seeking nothing less than a complete ground up redesign of the API. As we’ve seen with Mantle and Direct3D, outside of shading languages you cannot transition from a high level abstraction based API to a low level direct control based API within the old API; these APIs must be built anew to support this new programming paradigm, and at 22 years old OpenGL is certainly no exception. The end result being that this is going to be the most significant OpenGL development effort since the creation of OpenGL all those years ago.
[/quote]
f**k. Yes.

Interesting OGL4.5 extensions:

  • ARB_clip_control: Aside from the pure convenience this provides when porting from DX, it can also be used to improve depth precision slightly. This could be achieved before as well, but it was a bit unclear and hacky (see the sketch after this list).
  • ARB_direct_state_access: FINALLY. This should eliminate almost all binding (also sketched below)!
  • ARB_pipeline_statistics_query: Looks cool. Should make bottleneck identification easier.
  • KHR_context_flush_control: I have no idea how this magically works. Will investigate. Multithreaded OpenGL for more than just texture streaming would be amazing. EDIT: Most likely this is not very revolutionary. The improved multithreading is expected in OpenGL NG.
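A minimal sketch of the first two, assuming an OpenGL 4.5 context and LWJGL-style static bindings (the exact import classes depend on your LWJGL version, and the variable names are made up for the example):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL45.*;

import java.nio.ByteBuffer;

final class GL45Sketch {

    // ARB_clip_control: switch to a [0, 1] clip-space depth range and a reversed
    // depth test, which spends the (floating-point) depth buffer's precision
    // where a perspective projection actually needs it.
    static void setupReversedDepth() {
        glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
        glClearDepth(0.0);       // "far" is now 0.0
        glDepthFunc(GL_GREATER); // nearer fragments get larger depth values
    }

    // ARB_direct_state_access: create, allocate and fill a texture without ever
    // binding it. The only bind left is the one that actually uses the texture.
    static int createTexture(int width, int height, ByteBuffer pixels) {
        int tex = glCreateTextures(GL_TEXTURE_2D);
        glTextureStorage2D(tex, 1, GL_RGBA8, width, height);
        glTextureSubImage2D(tex, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glBindTextureUnit(0, tex);
        return tex;
    }
}
```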

A ground-up rewrite of OpenGL sounds great (and long overdue), and they seem to have ticked all the right boxes. However, judging by the description of how they intend to work with so many parties, and being aimed at mobile, desktop and console platforms (and probably the web), I’m guessing it’ll be a couple of years before they get anywhere (e.g. HTML5).

[quote=“theagentd,post:33,topic:50106”][/quote]
A detailed explanation of why this is useful.

Finally, some good news. :slight_smile:
I don’t even care if it takes 2-3 years, they can take their time, just let the API be nice and modern.
If they’re going to redesign the whole API, maybe they’ll drop the state-machine model as well?

Hey Spasi,

When’s LWJGL 2.9.2 releasing so I can start playing around with GL 4.5? :slight_smile:

After a good night’s sleep, I’ve had the time to take a look at OpenGL NG more. It’s clear that Khronos is going the same direction as DirectX 12 and Mantle, thank god.

A completely redesigned API made for modern GPUs, most likely targeting the same hardware as DX12/Mantle, which would be the Nvidia 600 series and the AMD 7000 series and up. I’m unsure whether Intel would be able to support it with their current generation of hardware; they may have the hardware but lack the drivers.

Which leads to the second point. As with DX12/Mantle, we’re looking at a very thin and simple driver. All the old redundant features are thrown out. This should allow AMD, Nvidia and Intel to simply build a new, small driver for OpenGL NG, finally slowing down or halting further development of the old OpenGL. Newly released hardware would obviously still need an OpenGL 4.5 driver, but from now on we can expect OpenGL 4 to get fewer updates and new extensions, though I guess some OpenGL NG features will trickle down to OpenGL 4 through extensions… Well, hopefully we’ll at least get much more stable OpenGL NG drivers, released faster!

With more low-level control over how the GPU and CPU work, we should be able to do some pretty cool optimizations by taking advantage of the GPU in better ways. For example, depending on what’s exposed by OpenGL NG, we might be able to render shadow maps in parallel with lighting the scene. Rendering shadow maps has a very low compute load, as the vertex shader is usually simple and there is no fragment shader; filling pixels and computing depth is handled by the hardware rasterizers on the GPU, leaving the thousands of shader cores idle. Tile-based deferred lighting, on the other hand, is done by a compute shader which bypasses the vertex-handling hardware and the rasterizer, and only uses the shader cores and a little memory bandwidth. We could essentially double-buffer our shadow map and render a new shadow map while computing lighting with the previous one in parallel.
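Very roughly, and with hypothetical helper methods (ShadowMap, renderShadowMap and dispatchTiledLighting are stand-ins, not a real API; an NG/Mantle-style API would be needed for the driver to actually overlap the two passes), the double-buffering idea looks something like this:

```java
// Conceptual sketch only: ShadowMap and the two pass methods are hypothetical
// stand-ins for whatever the real renderer uses.
final class ShadowMapPipeline {

    static final class ShadowMap { /* depth texture + FBO would live here */ }

    private final ShadowMap[] maps = { new ShadowMap(), new ShadowMap() };
    private int current = 0;

    void frame() {
        ShadowMap write = maps[current];     // shadow map being rendered this frame
        ShadowMap read  = maps[1 - current]; // shadow map finished last frame

        renderShadowMap(write);      // rasterizer-bound, leaves the shader cores mostly idle
        dispatchTiledLighting(read); // compute-bound, never touches the rasterizer

        current = 1 - current;       // swap roles for the next frame
    }

    private void renderShadowMap(ShadowMap target)        { /* depth-only pass */ }
    private void dispatchTiledLighting(ShadowMap shadow)  { /* compute-shader lighting pass */ }
}
```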

They’re also promising massively improved CPU performance. The push for direct state access (promoted to core in OpenGL 4.5) implies that this is the way OpenGL NG will work, meaning simpler, shorter and clearer code. It also means we won’t be getting the same state-leaking problems as before, which makes hard-to-find bugs easier to avoid. We’re also promised proper multithreading. Multithreaded texture streaming is nice and all, but hardly a replacement for being able to actually build command queues from multiple threads as Mantle and DX12 will allow. Games that use OpenGL NG will at least have the potential for almost linear scaling across any number of cores. FINALLY.
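For contrast, here’s roughly what “multithreading” has to look like on top of today’s GL: worker threads can only record plain Java Runnables that wrap GL calls, and the one thread that owns the context replays them serially. Nothing below is a new API, just a sketch of that emulation; an NG/Mantle-style API would let the driver consume per-thread command buffers directly instead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

final class CommandRecorder {

    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Each job does the expensive CPU work (culling, sorting, packing uniforms)
    // off the GL thread and returns a cheap Runnable full of GL calls.
    List<Runnable> recordFrame(List<Callable<Runnable>> recordJobs) throws Exception {
        List<Runnable> commands = new ArrayList<>();
        for (Future<Runnable> f : workers.invokeAll(recordJobs)) {
            commands.add(f.get());
        }
        return commands;
    }

    // Only the thread that owns the GL context may run this; the actual
    // glDraw*/glBind* calls still happen here, one after another.
    void submitOnGlThread(List<Runnable> commands) {
        for (Runnable command : commands) {
            command.run();
        }
    }
}
```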

Precompiled shaders! An intermediate shader format is something that has been wanted for a long time. GLSL basically just got a lot more like Java. Instead of letting each vendor develop their own GLSL compiler with their own sets of bugs and quirks, Khronos will (or I assume they will) develop their own compiler which compiles GLSL shaders to some intermediate format, just like Java bytecode. This should result in much more predictable performance as all GPUs and vendors will be able to take advantage of any optimizations the intermediate compiler does. This is especially good for mobile, which suffers from compilers that are bad at optimizing shaders. This also means that the GPU vendors will only have to be able to compile the GLSL “bytecode” to whatever their GPUs can run, which should be muuuuuuch less bug prone than compiling text source code. We’ll only have to work with a single GLSL compiler from now on. As someone who’s encountered so many broken compilers, this is a HUGE improvement. This will speed up development a lot on my end as well.
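For comparison, the closest thing today’s GL offers is ARB_get_program_binary (core since 4.1), which only caches a driver-specific blob rather than giving you a portable intermediate format, but it shows the rough shape of the workflow. A sketch, assuming LWJGL-style bindings:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL41.*;

import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;

final class ProgramBinaryCache {

    // Grab the linked program's binary. The blob (and its format token) is only
    // valid for the same driver/GPU it came from, unlike a true bytecode.
    static ByteBuffer save(int program) {
        int size = glGetProgrami(program, GL_PROGRAM_BINARY_LENGTH);
        ByteBuffer binary = BufferUtils.createByteBuffer(size);
        IntBuffer length = BufferUtils.createIntBuffer(1);
        IntBuffer format = BufferUtils.createIntBuffer(1);
        glGetProgramBinary(program, length, format, binary);
        // A real cache would store format.get(0) alongside the blob.
        return binary;
    }

    // Reload a cached blob into a fresh program object, skipping GLSL compilation.
    static boolean load(int program, int binaryFormat, ByteBuffer binary) {
        glProgramBinary(program, binaryFormat, binary);
        // If the driver rejects the blob (e.g. after a driver update), the caller
        // should fall back to compiling the GLSL source as usual.
        return glGetProgrami(program, GL_LINK_STATUS) == GL_TRUE;
    }
}
```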

OpenGL NG will run on phones as well! Although I believe OpenGL ES doesn’t suffer from the same bloating as OpenGL, it’s still extremely nice to be able to run the same code on both PC and mobile. This is almost gift-wrapped and addressed to LibGDX.

Aaah… This is so great. Khronos basically gave CAD the boot and went full gamer. So many great things here. Obviously, OpenGL NG isn’t complete yet and may even result in a second Longs Peak disaster, but there are lots of reasons to be hopeful. The circumstances are completely different, with Mantle pushing development of DX12 and OGL NG.

As an aside…embedded chips are getting crazy: http://blogs.nvidia.com/blog/2014/08/11/tegra-k1-denver-64-bit-for-android/

Oh and I meant to mention another announcement which is SPIR (LLVM) seems to be going forward for OpenCL…there’s some overlap here so some of these efforts might be merged. https://www.khronos.org/news/press/khronos-releases-spir-2.0-provisional-specification