Gaussian Blur Blobs?

I’m trying to implement bloom, and it’s pretty easy, but there’s one problem.

It’s working, but it looks like this from afar:

https://anuj-rao.tinytake.com/media/3fa4d8?filename=1475388942808_02-10-2016-02-14-57.png&sub_type=thumbnail_preview&type=attachment&width=700&height=416&_felix_session_id=62a90cf0c520af69cd213f003c6a6701&salt=MTAxMzQ1NV80MTcwOTY4

See those weird blobs? I’m not downsampling, but I think it’s a resolution issue. It gets worse as I move further away.

Here’s my vertical blur shader:


#version 400 core

in vec2 passTextureCoords;

out vec4 outColour;

uniform sampler2D textureSampler;

uniform float weights[5] = float[] (0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

void main(void) {

    // One texel in UV space; cast the ivec2 from textureSize() so the
    // division is unambiguously float / vec2.
    vec2 texelSize = 1.0 / vec2(textureSize(textureSampler, 0));
    vec4 colour = texture(textureSampler, passTextureCoords) * weights[0];

    // Four weighted taps above and four below the centre texel.
    for(int i = 1; i < 5; ++i){
        colour += texture(textureSampler, passTextureCoords + vec2(0.0, texelSize.y * i)) * weights[i];
        colour += texture(textureSampler, passTextureCoords - vec2(0.0, texelSize.y * i)) * weights[i];
    }

    outColour = colour;

}


Here’s my horizontal blur shader:


#version 400 core

in vec2 passTextureCoords;

out vec4 outColour;

uniform sampler2D textureSampler;

uniform float weights[5] = float[] (0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

void main(void) {

    // One texel in UV space; cast the ivec2 from textureSize() so the
    // division is unambiguously float / vec2.
    vec2 texelSize = 1.0 / vec2(textureSize(textureSampler, 0));
    vec4 colour = texture(textureSampler, passTextureCoords) * weights[0];

    // Four weighted taps to the left and four to the right of the centre texel.
    for(int i = 1; i < 5; ++i){
        colour += texture(textureSampler, passTextureCoords + vec2(texelSize.x * i, 0.0)) * weights[i];
        colour += texture(textureSampler, passTextureCoords - vec2(texelSize.x * i, 0.0)) * weights[i];
    }

    outColour = colour;

}


Here’s what it’s supposed to look like:

As you can see, there aren’t any weird blobs in this image.

This is my second attempt to implement bloom. I had the same problem the last time, and since it’s such an awesome effect, I’d love to be able to implement it successfully. Thanks.

Welp, does anyone have any ideas? I still think it’s somehow related to the resolution of the texture, but I’ve tested it time and time again, and it doesn’t seem to be the case. Any help would be much appreciated.

EDIT: also, some of the blobs are rectangular and others are like rounded rectangles.

Are you possibly trying to read and write to the same texture?

I’m ping-ponging between the vertical and horizontal blur framebuffers, so I’m never reading from and writing to the same texture.

Because the blobs get bigger and worse as the camera moves further away, I’m guessing it’s because of the resolution. But nobody else on the Internet seems to face the same issue, so I’m kinda doubtful.
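The ping-pong bookkeeping boils down to this (a sketch with placeholder texture handles, not my actual LWJGL code, which binds real FBOs):

```java
// Sketch of the ping-pong bookkeeping (placeholder texture handles, no real GL
// calls). Each pass renders into one attachment while sampling the other, so a
// pass never reads the texture it is writing.
public class PingPongSketch {
    public static void main(String[] args) {
        int[] blurTexture = { 1, 2 }; // stand-ins for the two colour attachments
        int read = 0;                 // index of the texture being sampled
        for (int pass = 0; pass < 10; pass++) {
            int write = 1 - read;     // render into the other attachment
            // bindFramebuffer(blurTexture[write]); // hypothetical GL binding
            // bindTexture(blurTexture[read]);      // hypothetical GL binding
            System.out.println("pass " + pass + ": read " + blurTexture[read]
                    + ", write " + blurTexture[write]);
            read = write;             // swap roles for the next pass
        }
    }
}
```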

Is your input texture simply messed up?

By input texture, are you referring to the texture showing only the bright fragments? Either way, no, I’ve checked that as well.

Can you show me what the input to the blur looks like? I assume you do some kind of thresholding to extract the bright parts. Also, what kind of texture formats are you using?

The code for creating the colorbuffer and depth buffer attachments:


public static int createTextureAttachment(int width, int height, boolean fp, int attachment){
    int textureID = GL11.glGenTextures();
    Loader.TEXTURE_LIST.add(textureID);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
    // fp = floating-point colour buffer (for HDR); the data pointer is null,
    // so the format/type arguments only describe the (absent) client data.
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0,
            fp ? GL30.GL_RGB32F : GL11.GL_RGB,
            width, height, 0,
            fp ? GL11.GL_RGBA : GL11.GL_RGB,
            fp ? GL11.GL_FLOAT : GL11.GL_UNSIGNED_BYTE,
            (ByteBuffer)null);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
    GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0 + attachment, textureID, 0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
    return textureID;
}

public static int createDepthTextureAttachment(int width, int height){
    int textureID = GL11.glGenTextures();
    Loader.TEXTURE_LIST.add(textureID);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL14.GL_DEPTH_COMPONENT32, width, height, 0,
            GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, (ByteBuffer)null);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
    GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, textureID, 0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
    return textureID;
}

blur input texture:

https://anuj-rao.tinytake.com/media/3fd6fb?filename=1475576998575_04-10-2016-06-29-06.png&sub_type=thumbnail_preview&type=attachment&width=700&height=420&_felix_session_id=215f750e351292cd2efd89c95857bb32&salt=MTAxOTAyOV80MTgzODAz

The blurriness is just my crappy screenshot software; the actual texture is completely sharp.

Are you accidentally writing negative values after thresholding? A float texture can hold negative values, you know. This could mess up your tone mapping later.

Side note: You don’t need 32-bit precision, use GL_RGB16F instead.

I’m not doing tonemapping, actually. Also, no, I’m not writing negative values after thresholding.


// In the lighting shader: everything above 1.0 goes to the bright-pass output.
outColour = (diffuse + specular) * passColour;
brightColour = outColour - vec4(1.0);
if(brightColour.r < 0.0)
    brightColour.r = 0.0;
if(brightColour.g < 0.0)
    brightColour.g = 0.0;
if(brightColour.b < 0.0)
    brightColour.b = 0.0;

It’s also not possible that my weights are incorrect, because I tried learnopengl.com’s weights and it still yielded similar results.
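For what it’s worth, those weights are properly normalised: the centre tap plus twice the four outer taps sums to roughly 1, so the blur shouldn’t brighten or darken anything overall. Quick sanity check in plain Java:

```java
// Sanity check: the 5-tap one-sided Gaussian kernel should sum to ~1 once the
// outer taps are counted twice (the shader samples at +offset and -offset).
public class KernelCheck {
    public static void main(String[] args) {
        double[] weights = { 0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216 };
        double sum = weights[0];
        for (int i = 1; i < weights.length; i++) {
            sum += 2.0 * weights[i]; // sampled on both sides of the centre texel
        }
        System.out.println(sum); // sums to ~1, i.e. energy-preserving
    }
}
```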

Forgive me, I’m not at my computer, but should you be multiplying the alpha by the weight?

Also, have you sampled the texture to see if the places that are dark red afterwards are 100% black beforehand?

It’s an issue with the alpha I bet. Don’t blur the alpha.

Also, you can rewrite your clamping code as [icode]brightColour = max(outColour - 1.0, 0.0);[/icode]
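The two forms are identical per channel; modelling the arithmetic in plain Java (GLSL’s max() works component-wise):

```java
// Demonstrates that max(c - 1.0, 0.0) per channel equals the per-channel
// if-chain clamp from the lighting shader.
public class ClampCheck {
    public static void main(String[] args) {
        double[] colour = { 1.4, 0.3, 2.1 };
        double[] ifChain = new double[3];
        double[] maxForm = new double[3];
        for (int i = 0; i < 3; i++) {
            double c = colour[i] - 1.0;    // subtract the brightness threshold
            ifChain[i] = (c < 0) ? 0 : c;  // the original per-channel ifs
            maxForm[i] = Math.max(c, 0);   // GLSL: max(outColour - 1.0, 0.0)
        }
        System.out.println(java.util.Arrays.equals(ifChain, maxForm)); // true
    }
}
```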

Or better, use premultiplied alpha textures. Blurring will never work correctly on images with alpha without it.
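To see why, average an opaque red pixel with a fully transparent neighbour and composite over black: straight alpha darkens the result, premultiplied doesn’t (toy numbers, plain Java, just modelling the arithmetic):

```java
// Toy example: blurring (here just a two-pixel average) straight-alpha colours
// darkens fringes, while premultiplied alpha gives the correct result.
public class PremultDemo {
    public static void main(String[] args) {
        // Pixel A: opaque red (r=1, a=1); pixel B: fully transparent (r=0, a=0).
        double rA = 1, aA = 1, rB = 0, aB = 0;

        // Ground truth: composite each pixel over black first, then average.
        double expected = ((rA * aA) + (rB * aB)) / 2.0; // 0.5

        // Straight alpha: average colour and alpha, then composite over black.
        double rAvg = (rA + rB) / 2.0, aAvg = (aA + aB) / 2.0;
        double straight = rAvg * aAvg; // 0.25 -- too dark: the fringe artefact

        // Premultiplied: multiply colour by alpha first, then average.
        double premult = ((rA * aA) + (rB * aB)) / 2.0; // 0.5 -- correct

        System.out.println(expected + " " + straight + " " + premult);
    }
}
```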

I haven’t enabled blending, so why should it change anything?

Well, when I looked at the image, it seemed like it comes down to you adding a colour on top of the scene, right? I’ve never done blur before, but it looks like your blur is dominating the original pixels instead of just tinting them (like shadows, y’know?)

I’m sorry, Hydroque, I don’t think I get what you mean. There’s nothing wrong with the additive blending of the original texture and the blurred texture, though, if that’s what you’re talking about.

Yup.