Shaders Oh My God

So I have given up. I have read every tut I can get my hands on and have yet to be able to use shaders in my program. I understand the pipeline, what shaders do, and how to create them in a program, but I have no idea how to actually use them or write them.

It seems like in all of the tuts I have read

if (useShader) {
    ARBShaderObjects.glUseProgramObjectARB(shader);
}

magically makes the shader work and draw and everything.

All the tuts I read involve 3D; I just want 2D. All I want from shaders is to render textured triangle strips…maybe some cool effects later, but that's all I want.
Is that so much to ask for?
Can anyone give me an actual working example, with good comments explaining why you are doing everything?

If you link me to the LWJGL shader tut I will put a bullet in your head…I really will.

So…have you even…tried using them?

So, you know how they work, how to “make” them, but you can’t write them?
Have you actually read ANYTHING about them? Let alone the LWJGL wiki with tutorials on how to use GLSL in LWJGL.
You’re not making much sense.

…I have read that tut and it works, but my problem is that all of the tuts involve static 3D objects. And they don't explain exactly what is going on with the shader and its structure.

The part of shaders that confuses me is this


if (useShader) {
    ARBShaderObjects.glUseProgramObjectARB(shader);
}

// some fixed function calls here

ARBShaderObjects.glUseProgramObjectARB(0);

How does the shader know what is going on? The first line does not just magically make the shader know what you want done. Do you just put normal drawing code between the two calls and it does everything? I guess the example just doesn't seem to translate into anything other than drawing a box for me.

Right after posting this I found this site
http://www.sjonesart.com/gl.php
which I am currently reading like a madman, but I am still lost on how I can make a shader just draw 2D images.

I think this is the source of your confusion. The shaders aren’t “drawing” anything. They are being plugged in to two specific stages of the graphics pipeline: vertex processing and fragment processing. The rest of the work (primitive assembly, clipping, projection, culling, rasterization, etc…) is still handled by the driver. Don’t think in terms of images and objects, but rather vertices, normals, color values, texture coordinates and fragments. Those are the things you manipulate in the shaders.

A 3D object is a collection of vertices, texture coordinates, normals and color components. So are 2D images, only on a smaller scale. After all, a 2D image is usually rendered as a set of 4 points that make up 2 triangles. The kinds of manipulation you would want to do on a 3D object are often different than what you would do on a 2D image, but the principle is the same.
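
If it helps to see it in code, here is a rough, untested sketch of drawing one 2D sprite as a triangle strip while a shader is bound. The names shader, textureId, x, y, w and h are just placeholders, and it assumes an old-style GLSL shader that reads the built-in gl_Vertex/gl_MultiTexCoord0 inputs, so the drawing calls are the same ones you would use with fixed function:

// Bind the shader, then issue perfectly ordinary drawing calls.
// The shader only changes HOW each vertex/fragment is processed.
ARBShaderObjects.glUseProgramObjectARB(shader);

GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
GL11.glBegin(GL11.GL_TRIANGLE_STRIP);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);         // top-left
GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);     // top-right
GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);     // bottom-left
GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h); // bottom-right
GL11.glEnd();

// Back to fixed function processing.
ARBShaderObjects.glUseProgramObjectARB(0);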

You can't. Shaders just manipulate the data sent to them. So, for example, the fragment shader's main function simply gets called for every pixel before it is drawn to the framebuffer. You still have to issue the draw calls for your resources from your Java code.

That's of course a simplistic explanation, not taking all the details and advanced features into account, but that should be your working assumption.

Hmm…I see. So a shader is just how vertices/normals/whatever are rendered.

So once a shader is set up, how would you go about rendering? Just use fixed function or vertex arrays or whatever? If so, how can you tell if the shader is actually working? And from what I have seen, many shaders have variables in them; how exactly do the shaders know what these variables are and what they do?


#version 330

uniform sampler2D sampler;

in vec2 texCoords;

layout(location = 0) out vec4 fragColor;

void main(){
    fragColor = texture(sampler, texCoords);
}

Explain? :? I am a complete noob.

On an off topic note, my old particle system I wrote in plain old Java2D is actually faster than my OpenGL version. It does not look as nice, as the default blending in Java2D kinda sucks, but I am still very impressed: 33+ fps with 50k particles. Like damn. Idk why people use OpenGL if Java2D is that fast. I am not drawing massive images over the whole screen, but still.

If your Java2D implementation is better than your OpenGL one…you are doing something MASSIVELY wrong in your OpenGL one :stuck_out_tongue:

In a particle test theagentd made, I had up to 10 million particles running at 60FPS on my screen, while Java2D would start suffering after 100,000 :wink:

Yeah, but in his system he is properly multithreading, shadering, VBOing, and all that stuff. Also I think he does the particle calculations on the GPU, not the CPU. I am just using fixed function and CPU calculations. I also have more physics involved, but I don't think it slows it down that much. I am still impressed by how fast Java2D is. It can't compare to proper OpenGL/GPU usage, but damn.

What was each particle? An opaque rect/oval? Those should be fast…use transparency and/or rotate/scale = SLOW :slight_smile:

Particles are textured triangle strips.

I don't think scaling hurts the speed much, at least not in the sense that a bigger image means more scaling work.

I tested 256x256 textures and 64x64 textures and saw almost no difference. I also don't rotate, as from what I understand that is slow. If I ever want to rotate something in 2D and don't have a lot of CPU power, I think I would use a sprite strip with the image pre-rotated in different positions. More memory, but faster rotation.

No. A shader does not do any rendering. It manipulates the vertex/fragment data.

If you are using fixed function rendering, you aren’t using shaders. That’s what the ‘fixed’ means – it can’t be changed. To render with a shader, you pass your vertex data to the graphics card via vertex arrays or VBOs and set up the projection matrices, blending modes and any other state you need. The only difference with fixed function is that when you want a particular effect, you activate a particular shader to replace the per-vertex processing stage of the pipeline.
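
For example, here is a minimal sketch (not a complete, tested program) of a draw call with client-side vertex arrays while a shader is active; program, vertexBuffer, texCoordBuffer and vertexCount are placeholders for whatever your particle system actually keeps:

// Point OpenGL at the vertex data you filled in from Java.
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
GL11.glVertexPointer(2, 0, vertexBuffer);      // 2 floats (x, y) per vertex
GL11.glTexCoordPointer(2, 0, texCoordBuffer);  // 2 floats (u, v) per vertex

// Replace the per-vertex stage with your shader, draw, then switch back.
ARBShaderObjects.glUseProgramObjectARB(program);
GL11.glDrawArrays(GL11.GL_TRIANGLE_STRIP, 0, vertexCount);
ARBShaderObjects.glUseProgramObjectARB(0);

GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glDisableClientState(GL11.GL_TEXTURE_COORD_ARRAY);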

Here’s the complete vertex stage of the pipeline:

per vertex operations -> primitive assembly -> clip/project/viewport/cull -> rasterize

In the fixed function pipeline, all of this is happening in the driver. With shaders, you are able to completely replace the ‘per vertex operations’ stage with your own algorithm as often as you like. Vertex data includes position, color, normal, texture coordinates and, for advanced uses, any custom data you want to associate with each vertex. This allows you to manipulate the data in any way you need to before it is actually drawn to the screen. Look at this shader, for example:


#version 330

layout(location = 0) in vec4 position;
void main()
{
    gl_Position = position;
}

The graphics card gives the position of each vertex to the shader in the variable ‘position’. The shader does no manipulation of the position at all. It simply passes it off to the next stage of the vertex pipeline untouched. Now look at this vertex shader:


#version 330

layout(location = 0) in vec4 position;
uniform float loopDuration;
uniform float time;

void main()
{
    float timeScale = 3.14159f * 2.0f / loopDuration;
    
    float currTime = mod(time, loopDuration);
    vec4 totalOffset = vec4(
        cos(currTime * timeScale) * 0.5f,
        sin(currTime * timeScale) * 0.5f,
        0.0f,
        0.0f);
    
    gl_Position = position + totalOffset;
}

It takes the same information, the position of the vertex, and modifies it based on elapsed time. It then passes the modified position on to the next stage of the pipeline. While this shader is active, all objects that are drawn will move across the screen.

This is what vertex shaders do. The basic functionality is for transformations and lighting, but you can do a variety of tricks by manipulating any of the data associated with the vertex. That’s the key point: you manipulate vertex data, then pass it back to the pipeline for final rendering. In the fragment shader, the concept is the same, but you are now manipulating the final color that is output to the screen rather than vertices.

[quote] If so, how can you tell if the shader is actually working?
[/quote]
By what you see on the screen. It’s usually fairly obvious if the shader is working or not. What is sometimes less obvious is whether or not your shader is buggy.
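
You can also ask the driver directly from your Java code whether the shader even compiled, and print its info log. A minimal sketch, assuming shader is a handle created with ARBShaderObjects that has already been through glShaderSourceARB/glCompileShaderARB:

// Did the compile succeed? If not, dump the driver's error log.
int compiled = ARBShaderObjects.glGetObjectParameteriARB(shader,
        ARBShaderObjects.GL_OBJECT_COMPILE_STATUS_ARB);
if (compiled == GL11.GL_FALSE) {
    int logLength = ARBShaderObjects.glGetObjectParameteriARB(shader,
            ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB);
    System.err.println(ARBShaderObjects.glGetInfoLogARB(shader, logLength));
}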

As for the variables: some are automatically provided by the graphics card (like gl_Position above), some are based on the data you configure in the vertex arrays/VBOs (like position above), some you have to pass from your program via the shader API (like time and loopDuration in the second shader above), and some are local (like timeScale and currTime in the second shader above).
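
For the uniforms specifically, here is a rough sketch of how you might feed time and loopDuration into the second shader above from LWJGL; program is assumed to be the linked shader handle and elapsedSeconds whatever timer your game loop already keeps:

// Look up the uniforms by the names used in the GLSL source.
int timeLoc = ARBShaderObjects.glGetUniformLocationARB(program, "time");
int loopLoc = ARBShaderObjects.glGetUniformLocationARB(program, "loopDuration");

// Uniforms are set on the currently bound program, so bind it first.
ARBShaderObjects.glUseProgramObjectARB(program);
ARBShaderObjects.glUniform1fARB(loopLoc, 5.0f);           // one full loop every 5 seconds
ARBShaderObjects.glUniform1fARB(timeLoc, elapsedSeconds);
// ...issue your draw calls while the program is still bound...
ARBShaderObjects.glUseProgramObjectARB(0);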

I highly recommend this tutorial, Learning Modern 3D Graphics Programming, if you've not seen it yet. Also, the Orange Book is a good reference to keep handy.

Oh my God again! Thank you so much aldacron! You explain it so much better than most tuts I have seen, by actually telling me what is going on in code ‘with’ code examples. +rep

So, the other variables you were talking about, you set through the shader API as you said? That makes sense.

I remember that link, but at the time I could not read it…so I forgot about it. Now it is bookmarked.

Is there a place to read the Orange Book online? Everyone talks about it, so I figure I should probably start reading it.

Again thanks for the explanation. Hopefully I will have some shaders working next weekend if school does not kill me.

http://www.bookf.net/s/OpenGL

Click on the first orange book you see (the one from July 2009) :slight_smile:

I have 3 versions: One using the CPU for updating particles (and multithreading), one doing particle updating with OpenGL using textures and one updating particles with OpenCL. The OpenGL and OpenCL versions are equally fast, but the threaded CPU version is around 1/2 as fast as the GPU versions… The point is that in both the GPU versions the CPU usage is 0-1%, as the only thing the CPU does is generate new particles.

Aldacron summed it up pretty well. Debugging shaders is hell. You can't step through a shader with a debugger, and reading back the rendered pixels usually gives you way too much information, so it's very difficult to actually figure out what's going on. The way I've found to be most effective is simply to output debug data to the output color channels. >_>

Ok, so even with no rendering my max particle count is only around 70k before the fps drops below 60. Which means, even with super fast rendering that takes next to no time at all, I can only get 70k. There must be some problem with my particle updating code.

GPUs can handle millions of vertices and triangles per second. You have a CPU bottleneck.

So that means I will be moving my calculations to the GPU…oh boy. :o

Still…there have got to be some optimizations I can do.

No, you just need to do it more effectively. Tip: Use MappedObjects!