Discussion: The mother of all shading languages

TL;DR: I’m trying to come up with a write-once-run-everywhere system for shaders, so that they can run on OpenGL, OpenGL ES and Vulkan (and possibly other APIs in the future) without having to write each shader once per API. There are also lots of other problems concerning reuse of shader code (lots of shaders share descriptor sets, utility functions, etc.), and this is a good opportunity to tackle those too. In the end, the choice is between simple preprocessed GLSL (which limits us to GLSL-based APIs), inventing a new shader language, or using/adapting an existing system, since the big game engines must be solving this problem somehow. I want to start a discussion about this.

Hello, everyone!

I’ve recently been working on an abstraction layer for OpenGL, OpenGL ES and Vulkan. The idea is to design an engine that allows for maximum performance in all three APIs while only requiring you to write your code once. It won’t necessarily be easier to use than Vulkan, but it will definitely be less work to use if you plan on supporting Vulkan at all. For example, Vulkan command buffers will be emulated on OpenGL, as they’re required for maximum performance with Vulkan, and they can actually be very useful for OpenGL as a kind of software display list, letting you generate an optimized list of OpenGL commands to call, with a lot of state tracking done internally to remove unnecessary calls. On the other hand, OpenGL has VAOs that can improve performance significantly if used correctly, while Vulkan has a much simpler system, so the abstraction will support something similar to VAOs as well. I’m getting sidetracked, but in short it’ll give you the union of all the features of OGL, OGLES and VK that are required to squeeze the most out of each API.
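To make the command-buffer idea concrete, here’s a minimal sketch of what such a software display list could look like on the OpenGL side. This is purely illustrative: the class and method names are made up, it assumes LWJGL bindings, and a real implementation would track far more state than a single program binding.

import java.util.ArrayList;
import java.util.List;

//Hypothetical sketch: a Vulkan-style command buffer emulated on OpenGL as a
//recorded list of GL calls, with redundant state changes filtered out at
//record time so the replay is an already-optimized call stream.
public class EmulatedCommandBuffer {
    private final List<Runnable> commands = new ArrayList<>();
    private int recordedProgram = -1; //record-time state tracking

    public void bindProgram(int program) {
        if (program == recordedProgram) return; //drop redundant binds
        recordedProgram = program;
        commands.add(() -> org.lwjgl.opengl.GL20.glUseProgram(program));
    }

    public void draw(int mode, int first, int count) {
        commands.add(() -> org.lwjgl.opengl.GL11.glDrawArrays(mode, first, count));
    }

    //“Submitting” just replays the recorded, pre-filtered GL calls.
    public void submit() {
        for (Runnable cmd : commands) cmd.run();
    }
}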

The issue I want to start a discussion about is the shaders. OGL, OGLES and VK all use GLSL, but there are significant differences between them.

  • OpenGL needs assigned binding slots for textures, UBOs, images, etc., which can be set either with a layout() qualifier or from Java. Shader-defined layouts are not supported in Vulkan, but Java-defined binding slots are a valid workaround.
  • Vulkan requires grouping descriptors into descriptor sets, which AFAIK must be set in the shader (but can be set using compile-time constants, so effectively from Java). Set/binding qualifiers aren’t supported by the other APIs.
  • OpenGL ES is very similar to OpenGL, but can benefit heavily from mixed-precision computing (lowp, mediump, highp) to improve performance. These precision qualifiers are not supported by the other APIs. (A sketch of how the same declaration might be emitted per target follows this list.)
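To make these differences concrete, here’s a hedged sketch of what a per-target emission step could look like. None of these names come from an actual engine; the Target enum, emitSampler and the exact GLSL strings are illustrative assumptions (and note that layout(binding = N) requires GL 4.2+ on desktop, otherwise you’d assign the slot from Java):

public class GlslEmitter {

    public enum Target { OPENGL, OPENGL_ES, VULKAN }

    //Turns one abstract sampler declaration into target-specific GLSL.
    public static String emitSampler(Target target, String name, int set, int binding) {
        switch (target) {
            case VULKAN:  //Vulkan wants descriptor set + binding in the shader
                return "layout(set = " + set + ", binding = " + binding
                        + ") uniform sampler2D " + name + ";";
            case OPENGL:  //desktop GL: binding via layout qualifier, or from Java
                return "layout(binding = " + binding + ") uniform sampler2D " + name + ";";
            default:      //OpenGL ES: bind from Java and keep a precision qualifier
                return "uniform lowp sampler2D " + name + ";";
        }
    }
}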

Since each of these APIs requires somewhat different shaders, I would really like to avoid filling my shaders with a crapload of #ifdefs just to define the different behavior for each API. That approach is also extremely error-prone, as you have to rewrite a lot of the shader per API, making it easy to introduce errors that will only show up once you actually test the shader on a specific computer/phone. I really want to keep the “write once, run everywhere” idea here too, so it makes a lot of sense to have some kind of intermediate/abstract representation of the shader that is in turn “compiled” to different flavors of GLSL for each API, with different parts automatically injected (descriptor set bindings) or removed (precision qualifiers) depending on the compilation target.

A major feature of/requirement for Vulkan is descriptor sets, so these will have to be emulated on OpenGL. An important property of descriptor sets in Vulkan is that you can bind a descriptor set and then change shaders, and if the new shader shares some descriptor set layouts, the old sets remain bound and keep working (just like texture/UBO/image/etc. bindings remain bound across shader switches), so it will be very common for people to want the same descriptor set in lots of shaders. For example, probably 90%+ of my shaders will use the camera information (matrices, depth info, frustum info, etc.), and having copy-pasted code in each and every shader that needs to be maintained is a perfectly paved road to a huge amount of future pain. Hence, it would make sense to also have some kind of import feature, so that I can define shared descriptor sets in a separate file and import them into each shader instead. At that point, I might as well add import functionality for shared functions too, like random value generators, dithering, tone mapping, g-buffer packing, etc., since I have those copy-pasted everywhere as well.
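As a hedged sketch of that import feature (all names here are hypothetical): shared descriptor sets could be registered once from Java as plain GLSL snippets, and the preprocessor would splice them in wherever a shader says #importset:

import java.util.HashMap;
import java.util.Map;

public class DescriptorSetRegistry {
    private static final Map<String, String> sets = new HashMap<>();

    //Called once at startup for each shared set.
    public static void define(String name, String glslSource) {
        sets.put(name, glslSource);
    }

    //Called by the preprocessor when it hits #importset "name".
    public static String resolve(String name) {
        String src = sets.get(name);
        if (src == null) throw new IllegalArgumentException("Unknown descriptor set: " + name);
        return src;
    }
}

Registering the camera set used in the example below could then look like:

DescriptorSetRegistry.define("MyDescriptorSet1",
        "layout(set = 0, binding = 0) uniform Camera {\n"
      + "    mat4 viewProjectionMatrix;\n"
      + "};\n");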

At this point there is actually very little raw GLSL code left in the shader. Here’s an example vertex shader:


//#version injected depending on API

//Samplers/UBOs imported from predefined descriptor sets
#importset "MyDescriptorSet1"
#importset "MyDescriptorSet2"

//Vertex attributes are semi-automatically set up, so they need to be parsable:
#in highp_vec3 position;
#in mediump_vec2 texCoords;

out mediump_vec2 vTexCoords;
out lowp_float color;

//Shared function imported from separate file:
#import float rand(vec2)

void main(){
    gl_Position = viewProjectionMatrix * vec4(position, 1.0); //viewProjectionMatrix comes from imported descriptor set.
    vTexCoords = texCoords;
    color = rand(texCoords);
}

At this point, the GLSL is pretty hard to recognize. Considering the pretty big number of features that I want to add, it almost makes sense to make an entirely new shader language for this. That would also have the advantage of letting us target other shading languages, like Metal Shading Language (ugh), DirectX’s HLSL, maybe even console shading languages one day. I can think of a few ideas:

  • I go with my original idea, the bastardized GLSL above, and use a simple preprocessor to replace #-commands with code and inject #defines. It’d be the least work, but only works with GLSL-based languages. (A minimal sketch of this option follows the list.)
  • Creating a whole new intermediate shading language which passes through a compiler that spits out shaders for all the different APIs.
  • Using an already existing intermediate format/system to generate the shaders. Unity and presumably other engines must be tackling this issue somehow to support OpenGL/DirectX/consoles.
  • A crazy idea I had was to convert Java bytecode to GLSL shaders so we could write them in Java, but I don’t think that’d actually be feasible.
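For reference, here’s the minimal sketch of the first option mentioned above. Everything in it is an assumption: the directive names follow the example shader, #import of shared functions is omitted for brevity, and a real preprocessor would want proper error reporting.

import java.util.Map;
import java.util.stream.Collectors;

public class ShaderPreprocessor {

    //Expands #importset lines from a name -> GLSL map, rewrites #in to a plain
    //in, and resolves the precision-prefixed types per target. Everything else
    //passes through untouched.
    public static String preprocess(String source, Map<String, String> importSets,
                                    boolean targetIsES) {
        return source.lines().map(line -> {
            String trimmed = line.trim();
            if (trimmed.startsWith("#importset ")) {
                String name = trimmed.substring("#importset ".length()).replace("\"", "").trim();
                return importSets.getOrDefault(name, "//missing set: " + name);
            }
            line = line.replace("#in ", "in "); //vertex attributes become plain inputs
            if (targetIsES) {
                //keep precision qualifiers when targeting OpenGL ES
                line = line.replace("highp_", "highp ")
                           .replace("mediump_", "mediump ")
                           .replace("lowp_", "lowp ");
            } else {
                //compile them away on desktop GL / Vulkan
                line = line.replace("highp_", "")
                           .replace("mediump_", "")
                           .replace("lowp_", "");
            }
            return line;
        }).collect(Collectors.joining("\n"));
    }
}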

What do you guys think? Would anyone be interested in working together to build such a shader system? Does anyone know how the big engines tackle this problem? Discuss, tell me what you would need to use such a system and if you’d be willing to contribute!

Be pragmatic, because life is short 'n all. Just go with the pre-compiler :point:

Pre-compiled #defines to iron out the differences between the languages could work pretty well, … but only if there is actually a solid common base among them. I don’t know.

But then, in the long run, I would prefer clarity in code, with custom #tags, over a blurry language.

Thanks for the input. I’m gonna roll with a minimal GLSL preprocessing system just to get things running in the first place, but it doesn’t feel like a good long-term solution.

Is it common for people to just write the same shader for different languages?

I interned at a small studio and they just rewrote the same shader.

Most of the work in shaders is actually working out what they have to do, so it’s probably just easier to write the same shader in several dialects, especially since, compared to the rest of the code, they’re not particularly huge.

Then again, see Unity shaders.

Bah, Unity again.

Cas :slight_smile:

I tried to look into what they do, but I couldn’t find anything useful. It’s all VisualSuperShaderNoProgrammingRequiredWelcomingAllArtMajors nowadays for the big engines.

I am more worried about having an IDE which lets me press Ctrl+Enter and brings up an autocomplete list :slight_smile:

One thing I would be worried about is compatibility between the APIs. I am not well versed in Vulkan, but if Vulkan were to… say, have a methodology that would have to be emulated, there may be a problem.

My brilliant plan is to just get theagentd to write all my shaders, which is working out nicely :point:

Cas :slight_smile:

I’m not a big fan of attempts to abstract everything, because usually this isn’t lossless. What advantage would there be to supporting so many different platforms? Wouldn’t it be viable to say that it’s enough to support OpenGL, OpenGL ES and Vulkan?

My own engine uses a light version of your pre-processor approach, and I think it works really well and is a very simple, robust system. Though, it only features includes.

  1. Includes should be easy; not necessary to discuss them.
  2. Offering high or low precision qualifiers is only relevant for OpenGL ES… so why not just precompile them away when targeting OpenGL or Vulkan? Easy and effective, unless I’m missing something.
  3. Descriptor sets… yes, it would be very nice to have state, or the part of the state every shader needs, somehow referenceable… but honestly, couldn’t one just place those in a separate file and include it, like utility functions and everything else?

Summed up, I think the most efficient and flexible system is to just have a pre-processor and a compiler that compiles away precision qualifiers when not targeting OpenGL ES.

precision qualifiers… “we want this crappy device to do 3D but it’s a bit too crappy, but we can make everyone’s lives more annoyingly complicated to squeeze out pointless extra performance so you can make games for it that nobody will play”.

I wish they’d stop doing this sort of shit.

Cas :slight_smile:

Yeah, I guess this will have to do.

Nvidia’s latest GPUs claim to do 16-bit float calculations twice as fast. If I don’t need the precision, I’d love for my shader to be twice as fast.

Isn’t that what a half type would be used for?

I can’t help but think that this option may suffer from this issue:

You’d use it if you were too impatient to respect the limits of the hardware available and work within your means. I mean, look at the tricks people have pulled over the years to squeeze performance out of hardware, spending months and months and losing most of their hair, only for it all to be rendered completely irrelevant a mere 18 months later by the hardware simply being twice as fast. Life’s too short. Relatedly:

[quote=""]
Hint hint :slight_smile:

Cas :slight_smile: