Discussion: The mother of all shading languages

TL;DR: I’m trying to come up with a write-once-run-everywhere system for shaders, so that they can run on OpenGL, OpenGL ES and Vulkan (and possibly other APIs in the future) without having to rewrite each shader for every API. There are also lots of other problems around reuse of shader code (many shaders share descriptor sets, common functions, etc), and this is a good opportunity to tackle those too. In the end, the choice is between simple preprocessed GLSL (which limits us to GLSL-based APIs), inventing a new shader language, or using/adapting a system that already exists, since the big game engines must be solving this problem somehow. I want to start a discussion about this.

Hello, everyone!

I’ve recently been working on an abstraction layer for OpenGL, OpenGL ES and Vulkan. The idea is to design an engine that allows for maximum performance in all three APIs while only requiring you to write your code once. It won’t necessarily be easier to use than Vulkan, but it will definitely be less work than supporting Vulkan separately if you plan on supporting Vulkan at all. For example, Vulkan command buffers will be emulated on OpenGL, as they’re required for maximum performance with Vulkan, and they can actually be very useful for OpenGL as a kind of software display list: you can generate an optimized list of OpenGL commands to call, with a lot of state tracking done internally to remove unnecessary calls. On the other hand, OpenGL has VAOs that can improve performance significantly if used correctly, while Vulkan has a much simpler system, so the abstraction will support something similar to VAOs as well. I’m getting sidetracked, but in short it’ll give you the union of all the features of OGL, OGLES and VK that are required to squeeze the most out of each API.

The issue I want to start a discussion about is the shaders. OGL, OGLES and VK all use GLSL, but there are significant differences between their dialects:

  • OpenGL needs assigned binding slots for textures, UBOs, images, etc., which can either be set with a layout() qualifier in the shader or from Java. Vulkan doesn’t let you assign bindings from the API at runtime, but injecting Java-defined binding slots into the shader at compile time is a valid workaround.
  • Vulkan requires grouping descriptors into descriptor sets, which AFAIK must be specified in the shader (but they can be set using compile-time constants, so effectively from Java). The set qualifier isn’t supported by the other APIs at all.
  • OpenGL ES is very similar to OpenGL, but can benefit heavily from mixed-precision computing (lowp, mediump, highp) to improve performance. These precision qualifiers are ignored or unsupported by the other APIs. (See the sketch right after this list for how the same declaration ends up looking in each dialect.)
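
To make the differences concrete, here’s roughly how the same texture declaration would have to look for each API (the texture name and the binding/set numbers are just made up for the example):

//OpenGL (4.2+): the binding can be set in the shader...
layout(binding = 3) uniform sampler2D diffuseTexture;
//...or left out entirely and assigned from Java instead.

//Vulkan: both the set and the binding have to end up in the shader.
layout(set = 0, binding = 3) uniform sampler2D diffuseTexture;

//OpenGL ES: precision qualifiers make a real performance difference.
uniform mediump sampler2D diffuseTexture;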

Since each of these APIs requires fairly different shaders, I would really like to avoid having to fill my shaders with a crapload of #ifdefs just to define the different behavior for each API. That approach is also extremely error-prone: you end up rewriting a lot of the shader for each API, making it easy to introduce errors that will only show up once you actually test the shader on a specific computer/phone. I really want to keep the “write once, run everywhere” idea here too, so it makes a lot of sense to have some kind of intermediate/abstract representation of the shader that in turn is “compiled” to a different flavor of GLSL for each API, with different parts automatically injected (descriptor set bindings) or removed (precision qualifiers) depending on the compilation target.
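
Just to show what the #ifdef route looks like, here are the three variants from the sketch above crammed into a single declaration (the macro names are made up; the real ones would be injected as #defines per target):

#if defined(TARGET_VULKAN)
layout(set = 0, binding = 3) uniform sampler2D diffuseTexture;
#elif defined(TARGET_GLES)
uniform mediump sampler2D diffuseTexture;
#else
layout(binding = 3) uniform sampler2D diffuseTexture;
#endif

Multiply that by every sampler, UBO and vertex attribute in every shader, and it should be obvious why I’d rather have this generated for me.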

A major feature of/requirement for Vulkan is descriptor sets, so these will have to be emulated on OpenGL. An important point of descriptor sets in Vulkan is that you can bind a descriptor set and then change shaders, and if the new shader shares some descriptor set layouts, the old sets remain bound and keep working (just like how texture/UBO/image/etc bindings remain bound between shader switches). Because of that, it will be very common for people to want the same descriptor set in lots of shaders. For example, probably 90%+ of my shaders will use the camera information (matrices, depth info, frustum info, etc), and having copy-pasted code in each and every shader that needs to be maintained separately is a perfectly paved road to a huge amount of future pain. Hence, it would make sense to also have some kind of import feature, so that I can define shared descriptor sets in a separate file and import them into each shader instead. At that point, I might as well add import functionality for shared functions as well, like random value generators, dithering, tone mapping, g-buffer packing, etc, since I have those copy-pasted everywhere too.
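
To make the import idea concrete, a shared descriptor set file like the "MyDescriptorSet1" used in the example shader below could look something like this (the directive syntax and the exact UBO members are purely hypothetical; the set/binding numbers would be injected per API):

//MyDescriptorSet1: camera information shared by almost every shader
#descriptorset CameraSet
uniform CameraUniforms {
    mat4 viewProjectionMatrix;
    mat4 inverseViewProjectionMatrix;
    vec2 cameraDepthRange;
    vec4 frustumPlanes[6];
};

Every shader that imports this file gets the exact same set layout, so the abstraction knows the set can stay bound across shader switches.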

At this point there is actually very little raw GLSL code left in the shader. Here’s an example vertex shader:


//#version injected depending on API

//Samplers/UBOs imported from predefined descriptor sets
#importset "MyDescriptorSet1"
#importset "MyDescriptorSet2"

//Vertex attributes are semi-automatically set up, so they need to be parsable:
#in highp_vec3 position;
#in mediump_vec2 texCoords;

out mediump_vec2 vTexCoords;
out lowp_float color;

//Shared function imported from separate file:
#import float rand(vec2)

void main(){
    gl_Position = viewProjectionMatrix * vec4(position, 1.0); //viewProjectionMatrix comes from imported descriptor set.
    vTexCoords = texCoords;
    color = rand(texCoords);
}
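
For comparison, here’s roughly the Vulkan-flavored GLSL that an abstract shader like the one above could be compiled into (only MyDescriptorSet1 is shown, every set/binding/location number plus the rand() implementation is made up, and the precision parts simply get dropped):

#version 450

//Injected from "MyDescriptorSet1":
layout(std140, set = 0, binding = 0) uniform CameraUniforms {
    mat4 viewProjectionMatrix;
    mat4 inverseViewProjectionMatrix;
    vec2 cameraDepthRange;
    vec4 frustumPlanes[6];
};

layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoords;

layout(location = 0) out vec2 vTexCoords;
layout(location = 1) out float color;

//Imported shared function, pasted in by the compiler:
float rand(vec2 co){
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void main(){
    gl_Position = viewProjectionMatrix * vec4(position, 1.0);
    vTexCoords = texCoords;
    color = rand(texCoords);
}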

In the abstract shader above, the original GLSL is already pretty hard to recognize. Considering the pretty big number of features I want to add, it almost makes sense to make an entirely new shader language for this. That would also have the advantage of letting us support other APIs, like Metal Shading Language (ugh), DirectX’s HLSL, maybe even console shading languages one day. I can think of a few options:

  • I go with my original idea, the bastardized GLSL above, and use a simple preprocessor to replace the #-commands with code and inject #defines. It’d be the least work, but it only works with GLSL-based APIs.
  • Creating a whole new intermediate shading language which passes through a compiler that spits out shaders for all the different APIs.
  • Using an already existing intermediate format/system to generate the shaders. Unity and presumably other engines must be tackling this issue as well somehow to support OpenGL/DirectX/consoles.
  • A crazy idea I had was to convert Java bytecode to GLSL shaders so we could write them in Java, but I don’t think that’d actually be feasible.

What do you guys think? Would anyone be interested in working together to build such a shader system? Does anyone know how the big engines tackle this problem? Discuss: tell me what you’d need from such a system and whether you’d be willing to contribute!