Image Effects

  1. You don’t extend shaders, but you can make multiple passes via FBO rendering to an intermediate texture. Some of the “effects” I expose to users of my video engine are composite effects themselves made up of two or more passes internally. Basically you can make whatever architecture makes sense on the Java / control side as long as you are making separate passes rendering to a texture for each shader / IE “kernel”.

  2. Not exactly sure what you mean by “can you have arrays”… Could you describe it a bit more? That can mean many things, I suppose.

For instance, in OpenGL ES 3.0 you can have “texture arrays”. I’m soon implementing that into the pipeline / engine to store the last X output frames rendered. This will allow easy access in shader code to, say, do actual real motion blurring with past frame data without having to pass in bespoke texture data as separate samplers; IE just one texture array is passed in…
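To make the “last X frames” idea concrete, here is a CPU-side sketch of the ring-buffer bookkeeping involved. The class name and plain `float[]` frames are my stand-ins, not any real engine API; a GL implementation would instead write each new frame into a layer of a texture array and sample the layers in the shader.

```java
// Hypothetical sketch: ring-buffer bookkeeping for "last N frames".
// A GL version would render into successive layers of a texture array;
// here plain float[] pixel buffers stand in so the logic is testable.
public class FrameHistory {
    private final float[][] frames;   // layer -> pixel data
    private int writeIndex = 0;
    private int stored = 0;

    public FrameHistory(int layers, int pixels) {
        frames = new float[layers][pixels];
    }

    // Store the newest frame, overwriting the oldest layer.
    public void push(float[] frame) {
        System.arraycopy(frame, 0, frames[writeIndex], 0, frame.length);
        writeIndex = (writeIndex + 1) % frames.length;
        if (stored < frames.length) stored++;
    }

    // Average the stored frames -- the naive motion-blur combine a
    // shader would do by sampling each layer of the texture array.
    public float[] blend() {
        float[] out = new float[frames[0].length];
        for (int layer = 0; layer < stored; layer++) {
            int idx = (writeIndex - 1 - layer + frames.length) % frames.length;
            for (int p = 0; p < out.length; p++) out[p] += frames[idx][p];
        }
        for (int p = 0; p < out.length; p++) out[p] /= stored;
        return out;
    }
}
```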

Good to keep in mind. I haven’t gotten there yet as there are other engine upgrades to make first, but OpenGL ES 3.0 has sRGB textures and framebuffers so that you can address this problem… OpenGL ES 3.0 is well worth moving to if you can since it is a major update.

  1. But how can the developer indicate that they want to use another shader before or after, with just “a shader”? Also, I want to send data from one shader to the other.
  2. I mean number arrays. So I can have an image kernal. Like so (in Java):

public Color filter(Image img, int x, int y) {
    setKernal(new int[] {
        -1, -1, -1,
        -1,  8, -1,
        -1, -1, -1,
    });
    return super.filter(img, x, y);
}

This will do some sort of edge detect. Again, sry if I’m using the wrong term here.
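For reference, that kernel does work as an edge detect when applied as a convolution. A minimal CPU-side sketch of what `filter` would have to do under the hood (the class and method names here are mine, not from any particular API; edges are handled by clamping):

```java
// Minimal CPU-side 3x3 convolution sketch over a grayscale image
// stored row-major (width w, height h), clamped at the edges.
public class Convolve {
    static int sample(int[] img, int w, int h, int x, int y) {
        x = Math.max(0, Math.min(w - 1, x));
        y = Math.max(0, Math.min(h - 1, y));
        return img[y * w + x];
    }

    // Weighted sum of the 3x3 neighborhood around (x, y).
    public static int filter(int[] img, int w, int h, int x, int y, int[] kernel) {
        int sum = 0;
        for (int ky = -1; ky <= 1; ky++)
            for (int kx = -1; kx <= 1; kx++)
                sum += kernel[(ky + 1) * 3 + (kx + 1)] * sample(img, w, h, x + kx, y + ky);
        return sum;
    }
}
```

On a flat region the weights cancel to zero (the kernel sums to 0), and an isolated bright pixel produces a strong response, which is exactly the edge-detect behavior.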

  1. You have CPU side control over the whole pipeline one builds for post-processing. You don’t control the pipeline between shaders in shader code itself, as it’s done on the CPU side. You can share data between shaders via CPU side control, such as FBOs (textures) that are shared between shaders, and you can share uniform data (variables in the shader code) between shaders. OpenGL ES 3.0 also enables a lot more in regard to general buffer objects as input / output from shaders (PBOs are an example). In the case of uniform data there are UBOs (Uniform Buffer Objects). These are interesting because they allow setting the uniform data of a shader efficiently in one call, but can also share data across multiple shaders. See the following (an OpenGL ES 3.0 feature):
    https://www.opengl.org/wiki/Uniform_Buffer_Object
    http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/3490-2/

  2. Most definitely you can do convolution based kernels. See this code example for fragment shaders which do a Laplacian edge detection:
    https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageLaplacianFilter.m

The line in the fragment shader “uniform mediump mat3 convolutionMatrix;” stores the 3x3 matrix and would be filled with the desired coefficients.

It should be noted that doing convolution based filters is generally expensive due to dependent texture reads. IE setting the color of the current pixel being processed is dependent on reading other pixel values surrounding it. You’ll want to be careful in your implementation to reduce “dependent texture reads” in your fragment shaders. A good article on optimizations for this from Brad Larson regarding the Gaussian blur filter in GPUImage:
http://www.sunsetlakesoftware.com/2013/10/21/optimizing-gaussian-blurs-mobile-gpu
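The core optimization in that article is separability: a 2D Gaussian kernel is the outer product of a 1D kernel, so you can run a horizontal pass then a vertical pass and do 2k texture reads per pixel instead of k². A CPU-side sketch of the two-pass structure (names mine, edge-clamped, plain `float[]` images standing in for textures):

```java
// Sketch: why separable blurs are cheaper. A 2D kernel that is the
// outer product of a 1D kernel (Gaussians are) can be applied as a
// horizontal pass then a vertical pass: 2k taps instead of k*k.
public class Separable {
    // Horizontal 1D pass over a row-major float image, edge-clamped.
    public static float[] passH(float[] img, int w, int h, float[] k) {
        int r = k.length / 2;
        float[] out = new float[img.length];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float s = 0;
                for (int i = -r; i <= r; i++) {
                    int xx = Math.max(0, Math.min(w - 1, x + i));
                    s += k[i + r] * img[y * w + xx];
                }
                out[y * w + x] = s;
            }
        return out;
    }

    // Vertical pass: same idea, stepping in y instead of x.
    public static float[] passV(float[] img, int w, int h, float[] k) {
        int r = k.length / 2;
        float[] out = new float[img.length];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float s = 0;
                for (int i = -r; i <= r; i++) {
                    int yy = Math.max(0, Math.min(h - 1, y + i));
                    s += k[i + r] * img[yy * w + x];
                }
                out[y * w + x] = s;
            }
        return out;
    }
}
```

In GL terms each pass is its own fragment shader rendering into an intermediate FBO, which is exactly the multi-pass architecture discussed above.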

In short… Yes, you can do convolution based filtering with OpenGL. There is nothing you can do in Java / CPU side that you can’t do in OpenGL.

Is there a max size on the matrix in OpenGL? So can I do something like 21x21 or something?

I think mat4 (4x4) is the largest you can get. Anything higher and you’re probably doing something wrong.

If you need lots of data, there’s always arrays, but 441 elements is a large amount and there’s probably better ways to do whatever you’re trying to do.

For built in variable types the max matrix size is 4x4. It’s handy to review the Khronos reference cards for OpenGL ES 3.0 & 3.1 for built-in types and GLSL functions. See also:

https://www.opengl.org/wiki/Data_Type_(GLSL)

Also there is a max array size for uniform variables depending on the OpenGL implementation. More to the point, there is a max amount of uniform storage across all data types defined in a shader. See this link and the “Implementation limits” section on how to query for the max sizes; usually 512 and greater is supported:
https://www.opengl.org/wiki/Uniform_(GLSL)

You can aggregate basic types in arrays and structs. For a “21x21” matrix you’d use an array of 441 floats, but you’ll have to deal with the indexing yourself, and of course the built in matrix operations don’t apply to a bespoke array. Not all OpenGL implementations support multidimensional arrays in GLSL; I understand OpenGL 4.3+ supports them (or wherever the ARB_arrays_of_arrays extension is available), and OpenGL ES 3.1 has multidimensional arrays / ARB_arrays_of_arrays support.
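If you do pack a 21x21 kernel into a flat `float[441]` uniform, the “indexing yourself” part is just row-major arithmetic. The same helper expressed in Java (name is mine, purely illustrative):

```java
// Row-major indexing into a flat array standing in for a 21x21
// "matrix" uniform (GLSL built-in matrices stop at mat4).
public class FlatMatrix {
    public static int index(int row, int col, int cols) {
        return row * cols + col;
    }
}
```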

So can I have some sort of shader with something like:
#use
#use

And then when my program reads the shaders it checks for lines like that and it will pass the image through those shaders first?

EDIT: just reading through this, and I have done GLSL before (Codea on iPad), but I thought it would be a bit overcomplicated for a small, fun little project.

No… There is no embedding or continuation of shader code that is controlled GPU side*. You need to control the succession of shaders in a post-processing pipeline on the CPU side in addition to whatever variables / textures are set up / shared across executing various shader code.

  • OpenGL ES 3.1 introduced indirect draw functionality that can be paired up nicely with compute shaders which allows drawing to occur after computation without CPU intervention, but this is not the same as bespoke GPU continuation like you are asking about.

At this point it may be good to get into trying some OpenGL yourself. There is a learning curve no doubt, but it will be worth it. If you happen to have an iOS device, perhaps take a look at the GPUImage example apps and source code. I guess for some examples there (modifying a static image) the simulator could work. I guess GPUImage also works on the desktop. I’ve never actually run any GPUImage apps… Just snarfed* all the fragment shader code then improved it for OpenGL ES 3.0+ and my own custom pipeline… :smiley: *will attribute of course!

No, not on GPU side…
When I read the shader, line by line, I will check for a line starting with #, check what it does, remove it…

Then I will have a shader object which has the compiled shader, and all the shaders it requires…

Is this a good idea or is there another way? Again, this just seems to overcomplicate things when I already know about GLSL and this is supposed to be a simple, fun little project…
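A sketch of the pre-pass you’re describing (the `#use` directive is your invention, not GLSL, and all names here are hypothetical): scan each line, record dependencies from `#use` lines, strip them out, and hand the remaining source to the normal GLSL compiler.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the described pre-pass: scan shader source for custom
// "#use <name>" lines, record the dependency names, strip those
// lines, and keep the remaining source for normal GLSL compilation.
public class ShaderPreprocessor {
    public final List<String> uses = new ArrayList<>();
    public final String cleanedSource;

    public ShaderPreprocessor(String source) {
        StringBuilder kept = new StringBuilder();
        for (String line : source.split("\n", -1)) {
            String t = line.trim();
            if (t.startsWith("#use ")) {
                uses.add(t.substring(5).trim()); // shader to run first
            } else {
                kept.append(line).append('\n');
            }
        }
        cleanedSource = kept.toString();
    }
}
```

Note real GLSL already has `#`-prefixed preprocessor directives (`#version`, `#define`, …), so a custom directive like this must be stripped before compiling or the GLSL compiler will reject it.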

(Lol, my Safari on my iPad froze twice while I was typing this… Srsly, how do people think iOS is amazing?)

Technically you can do whatever you want in regard to pre-processing your own shader code before linking it. You might be able to do some sort of limited chaining of what should be independent shader passes that don’t require doing any sort of convolution kernel operations on intermediate state, but that would make your post-processing pipeline rather rigid, with no practical speed improvement. Basically, there are bigger fish to fry for performance.

By building a composable pipeline of independent image operations where each one writes to a successive FBO you can then allow the user to mix and match on the fly image operations to perform.
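The composable-pipeline idea boils down to function composition. A minimal CPU-side sketch (names mine; in GL each “pass” would be a shader rendering into the next FBO, here plain `float[]` images stand in so the chaining is visible):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of a composable pipeline: each "pass" maps one image to the
// next. In GL each pass would render into a successive FBO texture.
public class Pipeline {
    private final List<UnaryOperator<float[]>> passes;

    public Pipeline(List<UnaryOperator<float[]>> passes) {
        this.passes = passes;
    }

    public float[] run(float[] input) {
        float[] current = input;
        for (UnaryOperator<float[]> pass : passes) {
            current = pass.apply(current); // "render into the next FBO"
        }
        return current;
    }
}
```

Because each pass only knows “image in, image out”, users can reorder or mix and match operations at runtime without any pass knowing about the others.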

One of the really cool things my post-processing pipeline allows is a built in preset editor where users can interactively drag horizontally on the screen and the GL component shows dragging between shaders / image operations by rendering the pipeline twice and combining. There is also interactive dragging between complete presets (renders the pipeline twice with each preset and combines).

Indeed… The GLSL direction is a lot to take in for what is a simple image processing API. Take what you learned from the Java side of things though and consider a GLSL implementation as it’s useful in game dev or general media based apps.

I might just use Java for now, but I might add the ability for other devs to use Java, JS or GLSL later on if I choose to continue with this further… For the time being, I might just finish this, release it, and start to work on other stuff:

  • using XML for Swing layouts
  • add UDP for my game engine
  • make my first game in my game engine

Just wondering, I am using the image kernal for a lot of effects like blur and emboss… But what about effects like sepia, instant and enhance?

Why not use a scripting language like Lua or even Javascript for your effects? Then you could alter your effects at runtime.

There is fast (GLSL) and then there is slow: everything else…

BTW you can recompile GLSL code on the fly. It’s how a lot of the convolution / kernel based shader code is manipulated; IE programmatically recompiling shader code based on various settings that expand / contract the window applied.
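A sketch of that generate-then-recompile approach: build the fragment shader source as a string with the kernel size baked in, so changing the radius means regenerating and recompiling. The generated GLSL here is illustrative ES 2.0-style code, not from any particular library, and the class name is mine.

```java
// Sketch of recompile-on-the-fly: generate fragment-shader source for
// a 1D box blur whose radius is baked in at generation time, so
// changing the radius means regenerating + recompiling the shader.
public class ShaderGen {
    public static String boxBlurSource(int radius) {
        StringBuilder src = new StringBuilder();
        src.append("uniform sampler2D tex;\n");
        src.append("uniform vec2 texelSize;\n");
        src.append("varying vec2 vUv;\n");
        src.append("void main() {\n");
        src.append("  vec4 sum = vec4(0.0);\n");
        int taps = 2 * radius + 1;
        // One unrolled texture read per tap, offset along texelSize.
        for (int i = -radius; i <= radius; i++) {
            src.append("  sum += texture2D(tex, vUv + texelSize * float(")
               .append(i).append("));\n");
        }
        src.append("  gl_FragColor = sum / ").append(taps).append(".0;\n");
        src.append("}\n");
        return src.toString();
    }
}
```

The generated string would then go through the usual `glShaderSource` / `glCompileShader` / link steps whenever the setting changes.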

Think about a “kernel” operation (not “kernal” BTW :D) for a color matrix. I’m sure there are other ways of doing a sepia tone image, like an image lookup, but here is how things are implemented in GPUImage w/ a color matrix.

See:

As mentioned previously by @SHC if you have a proper matrix / vector library you’d multiply the color matrix by the color vector for the pixel essentially. Look at the GLSL code and translate that to Java basically.
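Translated to plain Java, the color-matrix approach is just `out = M * rgb` per pixel. The coefficients below are a commonly cited sepia matrix, not necessarily GPUImage’s exact values, and the class name is mine:

```java
// Sketch of the color-matrix idea in plain Java: out = M * rgb.
// SEPIA holds a commonly used sepia matrix (coefficients may differ
// from GPUImage's preset); rgb components are 0..1 floats.
public class ColorMatrix {
    static final float[] SEPIA = {
        0.393f, 0.769f, 0.189f,
        0.349f, 0.686f, 0.168f,
        0.272f, 0.534f, 0.131f,
    };

    public static float[] apply(float[] m, float r, float g, float b) {
        float[] out = new float[3];
        for (int row = 0; row < 3; row++) {
            float v = m[row * 3] * r + m[row * 3 + 1] * g + m[row * 3 + 2] * b;
            out[row] = Math.min(1f, v); // clamp like a shader would
        }
        return out;
    }
}
```

Unlike convolution, this needs no neighboring pixels at all, which is why effects like sepia are much cheaper than blur or emboss: one read, one matrix multiply, one write.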

Sorry if this has been posted twice; it’s a big topic, but I’ve just come across this, which is useful, using LWJGL & Java

I’m not really understanding the filter you gave me, @Catharsis
Why aren’t there many tutorials for this kind of stuff? How did everybody else learn?

Everyone else learned by trial and error. I seriously doubt you would find a lot of information on these topics because we already have big software that does all these image effects for us (Photoshop).

GLSL would make everything easier though, I am telling you.