GL_MAX / LibGDX problem

Hi there,

I want to render two overlapping, transparent sprites, but even where they overlap, the resulting color should be no different from the color of a single rendered sprite. I read about blending in OpenGL and I think glBlendEquation(GL_MAX) is what I need. Here is my LibGDX example:

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class GLMaxTest implements ApplicationListener {

    OrthographicCamera cam;
    SpriteBatch batch;
    Texture tex;

    @Override
    public void create() {
        Pixmap pixmap = new Pixmap(4, 4, Pixmap.Format.RGBA8888);

        cam = new OrthographicCamera(2, 2);
        cam.position.set(0, 0, 0);
        cam.update();

        batch = new SpriteBatch(1000);
        batch.setProjectionMatrix(cam.combined);
        batch.enableBlending();

        Pixmap.setBlending(Pixmap.Blending.None);
        pixmap.setColor(new Color(1.0f, 0.0f, 0.0f, 0.3f));
        pixmap.fill();

        tex = new Texture(pixmap);
        pixmap.dispose(); // the texture keeps its own copy of the pixel data

        //Gdx.gl.glBlendEquation(GL30.GL_MAX);
        //Gdx.gl.glBlendEquationSeparate(GL30.GL_FUNC_ADD, GL30.GL_MAX);
    }

    @Override
    public void dispose() {
        batch.dispose();
        tex.dispose();
    }

    @Override
    public void pause() { }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        batch.begin();
        batch.draw(tex, 0, 0);
        batch.draw(tex, -1, -1);
        batch.end();
    }

    @Override
    public void resize(int width, int height) { }

    @Override
    public void resume() { }

    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();

        cfg.title = "";
        cfg.useGL30 = true;
        cfg.width = 640;
        cfg.height = 480;
        cfg.resizable = false;

        new LwjglApplication(new GLMaxTest(), cfg);
    }
}

The result:

Working as intended so far.

When I added “Gdx.gl.glBlendEquation(GL30.GL_MAX);”, this is what I got:

The alpha values seem to get completely ignored.

Then I tried “Gdx.gl.glBlendEquationSeparate(GL30.GL_FUNC_ADD, GL30.GL_MAX);”, but that line does not change anything; this is what was displayed again:

What am I missing? Or is this maybe even a LibGDX bug?

Oh, and yeah, first post 🙂

From your description, it sounds to me like you don’t want blending enabled at all. Obviously I’ve got that wrong, but perhaps you could explain a little more clearly, or with a picture of what you want to happen. Or perhaps it is just me being thick and everyone else will understand.

I’m not too sure what your setup looks like, but I’m going to assume a lot of things.

I assume:
You’re using vertex and fragment shaders (if you’re not, you should think about it!)

Anyway…

You could add this to your OpenGL init code:



glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);


This should turn on blending so that the alpha channel is not ignored.
Then in your fragment shader you can multiply the texture sample by the amount of alpha you want:


//Fragment Shader code line for using alpha
gl_FragColor = texture2D(uTexture, vTextureCoordinate) * myAlphaValue;
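
Since this thread is about libGDX: as far as I know, SpriteBatch manages GL_BLEND itself and reapplies its own blend function on every flush, so raw glBlendFunc() calls can get silently overwritten. A minimal sketch of the equivalent setup going through the batch instead:

// Sketch: the same blending setup via libGDX's SpriteBatch, which
// enables GL_BLEND and (re)applies this blend function on each flush.
batch.enableBlending();
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);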


When blending is enabled, this is what is written to the framebuffer:

framebuffer = func(sourceColor * sourceFactor, destinationColor * destinationFactor)

func = the function set by glBlendEquation(). This defaults to GL_FUNC_ADD, in which case func() simply adds the two values together; GL_MAX takes their component-wise max() instead.
sourceColor = the raw color you’re writing (RGBA).
sourceFactor = the first argument to glBlendFunc().
destinationColor = the color already in the framebuffer.
destinationFactor = the second argument to glBlendFunc().
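
To make that concrete, here is the same arithmetic done on the CPU for the red (1.0, 0.0, 0.0, 0.3) pixmap color from the first post, drawn onto a cleared black framebuffer. This is just an illustrative Java sketch of the formula above, not actual GL calls:

// CPU sketch of the blend equation above, evaluated per channel.
// srcFactor/dstFactor model glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
public class BlendMath {
    public static void main(String[] args) {
        float[] src = {1.0f, 0.0f, 0.0f, 0.3f}; // the red pixmap color from the first post
        float[] dst = {0.0f, 0.0f, 0.0f, 0.0f}; // cleared framebuffer

        float srcFactor = src[3];        // GL_SRC_ALPHA
        float dstFactor = 1.0f - src[3]; // GL_ONE_MINUS_SRC_ALPHA
        for (int i = 0; i < 4; i++) {
            // GL_FUNC_ADD: add the two weighted colors
            float add = src[i] * srcFactor + dst[i] * dstFactor;
            // GL_MAX: component-wise max, the factors are ignored
            float max = Math.max(src[i], dst[i]);
            System.out.printf("channel %d: add = %.2f, max = %.2f%n", i, add, max);
        }
        // With GL_FUNC_ADD, drawing the sprite a second time over the result
        // brightens the overlap (red: 1.0 * 0.3 + 0.3 * 0.7 = 0.51), while
        // GL_MAX keeps it pinned at 1.0 no matter how often you draw.
    }
}

Note that the GL_MAX column already shows why the alpha seemed ignored: max(1.0, 0.0) = 1.0 for the red channel on the very first draw, regardless of the source alpha.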

Changing the alpha value of the source color does not change its RGB values at all, so max() over the RGB channels ignores alpha. To me it sounds like you want to multiply the source RGB values by the source alpha, which means you need to call glBlendFunc() with these arguments:

glBlendFunc(GL_SRC_ALPHA, GL_ONE);

This results in the following blending equation:

framebuffer = max(sourceColor * sourceColor.alpha, destinationColor * 1)

According to the OpenGL documentation, GL_MAX ignores the source and destination factors and only uses the RGBA values.

But I figured it out by myself yesterday; I just did not fully understand how blending actually works when I opened this thread. I now draw the sprites with glBlendEquation(GL_MAX) into an empty framebuffer, then draw that framebuffer to the screen with glBlendEquation(GL_FUNC_ADD) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). It works like a charm now.
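
For anyone finding this later, here is roughly what that two-pass setup looks like plugged into my example above. Untested sketch; FrameBuffer is com.badlogic.gdx.graphics.glutils.FrameBuffer, and both it and the TextureRegion should be created once in create(), not every frame:

// Pass 1: draw the sprites into an empty FBO with GL_MAX, so the
// overlap can never get brighter than a single sprite.
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888,
        Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
fbo.begin();
Gdx.gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GL30.GL_MAX);
batch.begin();
batch.draw(tex, 0, 0);
batch.draw(tex, -1, -1);
batch.end();
fbo.end();

// Pass 2: composite the FBO onto the screen with ordinary alpha blending.
Gdx.gl.glBlendEquation(GL20.GL_FUNC_ADD);
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
region.flip(false, true); // FBO color textures come out y-flipped
batch.begin();
batch.draw(region, -1, -1, 2, 2); // the 2x2 camera viewport spans -1..1
batch.end();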

Thanks for your replies anyway 🙂