Pre-multiplied Alpha for drawing particles in one pass

Firstly, thanks to Orangy Tang for pointing this out to me.

Tips on optimising particle drawing (Thanks to davedes) can be found here: http://www.java-gaming.org/topics/storing-adding-drawing-particles/27126/msg/242110/view.html#msg242110

Here are some tutorials on pre-multiplied alpha:
http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Premultiplied%20alpha]]
http://blog.rarepebble.com/111/premultiplied-alpha-in-opengl/
http://www.quasimondo.com/archives/000665.php

It doesn’t seem too complicated but you also need to have the right textures, otherwise it’s not going to work.
First, in the code:

Rather than having two blend modes that we use for drawing particles:

Additive, for fire, etc:

GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE);

And normal (or lerp) for smoke (I use this blend mode for drawing everything else, so it should be reset after drawing the particles):

GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);

Have just one:

GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ONE_MINUS_SRC_ALPHA);

Now, you have two different types of particles: smoke and fire. You need to keep track of which is which, because they are drawn in slightly different ways.
First, setting the colour:

Additive(Fire):

GL11.glColor4f(r*a, g*a, b*a, 0);

Normal/lerp (smoke):

GL11.glColor4f(r*a, g*a, b*a, a);
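To make the two colour setups concrete, here's a minimal, hypothetical helper (plain Java, no GL context needed, so the arithmetic can be checked on its own):

```java
// Hypothetical helper: computes the glColor4f arguments for a particle
// when drawing with the single premultiplied blend func
// GL_ONE / GL_ONE_MINUS_SRC_ALPHA.
public class ParticleColor {

    // Additive (fire): premultiply RGB, set alpha to 0 so the
    // destination is kept in full: dst' = src + dst * (1 - 0)
    public static float[] additive(float r, float g, float b, float a) {
        return new float[] { r * a, g * a, b * a, 0f };
    }

    // Normal/lerp (smoke): premultiply RGB, keep alpha so the
    // destination is attenuated: dst' = src + dst * (1 - a)
    public static float[] lerp(float r, float g, float b, float a) {
        return new float[] { r * a, g * a, b * a, a };
    }

    public static void main(String[] args) {
        float[] fire  = additive(1f, 0.5f, 0f, 0.5f); // alpha component is 0
        float[] smoke = lerp(0.5f, 0.5f, 0.5f, 0.5f); // alpha component is 0.5
        System.out.println(java.util.Arrays.toString(fire));
        System.out.println(java.util.Arrays.toString(smoke));
    }
}
```

The four returned values are exactly what you'd pass to GL11.glColor4f before drawing the particle's quad.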

The other part is the textures.

Additive textures must have a black background.
Normal/lerp textures have a transparent background.

Here is a sample additive texture for fire:

Here is a sample lerp texture for smoke:

If you use an additive texture with a transparent background you will get something like this (BAD):

Final results:

If I have any of this wrong, please correct me!

-roland

Some nice tips.

I happen to be writing a little tool for combining textures at the moment, and was just figuring out the importance of the distinction of addition vs interpolation. I implemented interpolation, but am now planning to give the addition option as well. So far, I’m able to do a simple scaling and translating in Simplex noise, and interpolate three such screens together. It’s gui driven, and generates a text line for the Java Simplex 2D call that can be copied-and-pasted into a procedural routine.

Thanks :slight_smile: that sounds interesting, any chance of a screenshot or something?

[quote]Thanks Smiley that sounds interesting, any chance of a screenshot or something?
[/quote]
http://hexara.com/Images/SimplexTextureBuilderDraft.JPG

I just wanted to make something to help me get a better handle on making textures, and am only starting to realize how much is involved to make a tool like this useful!

One thing I didn’t realize until just this hour (when I made the display boxes double in size along the horizontal) is that the way I did scaling is kind of dubious. Maybe I should define my scaling in terms of a set number of pixels rather than as a proportion of the display area? But given that screen resolutions vary a lot, even that is a dubious measure. So producing the equations as I do in the text fields below the sliders (they don’t quite fit in the JTextFields visually) is maybe not so useful. Also, I suspect people don’t use a translation very often.

Right now it is monochromatic and alpha isn’t involved. Your post and links are very helpful in working out how to handle this once I’m ready to try including color and transparency.

Going back to the idea of adding vs lerp – I think I still want to try that out. Right now, I am just taking a weighted average for the displayed values. But I was thinking I wanted to do something like this: specify a mapping from the noise function to a number range and then add the results in. For example, have one Simplex output mapped to values from -16 to 16, and ADD these in to get the final color value. It doesn’t seem to me that there is a simple way to get the equivalent mathematical results using lerp. True?

I haven’t looked around to see if there are already tools that do these graphics manipulations with the goal of creating procedural code. Nor do I know if there is much interest in such. Most people would just use an artist to get design and create the needed tiles/textures, I assume.

That’s a good question. I think you should go with scaling in terms of 1.2x or something (so, how it is now), but keep the image sizes the same and their dimensions correct (so they aren’t stretched 2:1 sideways), and use horizontal and vertical scrollbars on each texture.

Is the mapping for each individual pixel? Could you make the number (-16 -> 16) map to an RGB colour using another algorithm such as
r = ((i + X * SOME_BIG_NUMBER) % SOME_SMALLER_NUMBER) / SOME_SMALLER_NUMBER
g = ((i + Y * SOME_BIG_NUMBER2) % SOME_SMALLER_NUMBER2) / SOME_SMALLER_NUMBER2
b = ((i + Z * SOME_BIG_NUMBER3) % SOME_SMALLER_NUMBER3) / SOME_SMALLER_NUMBER3

where X, Y, Z are different numbers. I still don’t really know what you’re doing, so I may have just written something completely not what you want. If that’s the case, just ignore it ;D
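If I’m reading the idea right, a runnable sketch might look like this (all names and constants are invented; note the final division has to be floating-point, or integer division truncates everything to 0, and floorMod keeps negative noise values in range):

```java
// Hypothetical sketch: map a noise value i (plus a per-channel offset)
// into a [0, 1) RGB component via modular arithmetic.
public class NoiseToRgb {
    static final int BIG = 7919;  // arbitrary constants, just for illustration
    static final int SMALL = 256;

    static float channel(int i, int offset) {
        // floorMod keeps the result non-negative even for negative noise values
        return Math.floorMod(i + offset * BIG, SMALL) / (float) SMALL;
    }

    public static void main(String[] args) {
        int i = -16; // e.g. a Simplex sample mapped into [-16, 16]
        System.out.printf("r=%f g=%f b=%f%n",
                channel(i, 1), channel(i, 2), channel(i, 3));
    }
}
```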

Sorry, I don’t know much about noise functions, nor much about textures apart from the very basics and a few different blend modes. If you manage to make a good procedural texture maker that would be awesome! Right now I use Texture Maker Professional v3.0.3 (http://www.texturemaker.com/screenshots.php), which has a lot of procedural options, but it’s hard to make good ones (at least for me).

I’m glad my code will help you out later :slight_smile: This should be a more well known topic I think! It’s important and I hadn’t even heard about it until a few days ago :cranky:

Isn’t that because your textures themselves should be pre-multiplied?

Other than that, great post! :slight_smile: I’m a huge fan of pre-multiplied alpha in general - it makes so many things easier. Hadn’t thought of the additive blending ‘hack’ though.

Critical feature of using premult alpha: you must premultiply the alpha in all your graphics before you upload them to OpenGL! I do this at compile time but if you’re lazy you can do it at runtime:


	BufferedImage bi = ImageIO.read(inputFile);
	bi.coerceData(true); // Forces the alpha to be premultiplied if it isn't already; does nothing if it is

Cas :slight_smile:
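For reference, the per-pixel arithmetic coerceData performs looks roughly like this (a sketch for illustration only; in practice just call coerceData as above):

```java
// Sketch of what premultiplying a single packed ARGB pixel involves;
// BufferedImage.coerceData(true) does this for the whole image.
public class Premultiply {
    public static int premultiply(int argb) {
        int a = argb >>> 24;
        int r = (argb >>> 16) & 0xFF;
        int g = (argb >>> 8) & 0xFF;
        int b = argb & 0xFF;
        // scale each colour channel by alpha (channels stay in 0..255)
        r = r * a / 255;
        g = g * a / 255;
        b = b * a / 255;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // half-transparent pure red -> half-intensity red, alpha kept
        System.out.printf("%08X%n", premultiply(0x80FF0000)); // prints 80800000
    }
}
```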

Could someone explain what pre-multiplied alpha is exactly? Google isn’t exactly helping with its big words.

It’s precisely what it sounds like: the RGB colours are multiplied by the alpha channel before anything is drawn, instead of being blended in while the drawing happens.

So, say you drew a transparent red [1.0f, 0.0f, 0.0f, 0.5f] (that’s RGBA) square; what would end up on screen is the same as if you drew a square with the opaque colour [0.5f, 0.0f, 0.0f, 1.0f]. Notice that 1.0f * 0.5f = 0.5f, hence ‘pre-multiplication’.

This is nice for large batch rendering because you can simulate blend modes just by setting the current color to draw with, instead of using glBlendFunc().

That’s what all the other websites explain too… but I still don’t get the point. What do you mean by ‘simulate blend modes’? Also, other sources say that you still keep the 0.5f alpha after you pre-multiply, which confused me further :stuck_out_tongue:

Maybe someone else has a better explanation, but as I understand it, it’s just another way of compositing colours. You don’t need to use it, or understand why it exists; it’s simply an alternative to (in the case of OpenGL) glBlendFunc(), as well as an implementation-independent method of compositing. So if the language/API you’re using doesn’t support anything like glBlendFunc, you can use this. That’s what I know.

@ra4king - I’m still trying to figure this out, too!

Two scenarios:
(a) opaque black background, draw [1.0, 0, 0, 0.5] red on it.
I can see where the result would be [0.5, 0, 0] to the eye. Same as drawing [0.5, 0, 0, 1.0] over this background.

(b) opaque white background, draw [1.0, 0, 0, 0.5] red on it.
Wouldn’t the result be [1.0, 0.5, 0.5] to the eye? This is NOT the same as drawing [0.5, 0, 0, 1.0] over this background.

???

Maybe this has to do with there being different forms of blending depending upon the background, which the OP talks about.

	public void paintComponent(Graphics g)
	{
		Graphics2D g2 = (Graphics2D) g;
		
		// left half of the panel: white background
		g2.setBackground(new Color(255, 255, 255, 255)); 
		g2.clearRect(0, 0, width/2, height);
		
		// right half of the panel: black background
		g2.setPaint(new Color(0, 0, 0, 255));
		g2.fillRect(width/2, 0, width/2, height);
		
		// half-transparent red, one square on each background
		g2.setPaint(new Color(1.0f, 0f, 0f, 0.5f));
		g2.fillRect(100, 50, 100, 100);
		g2.fillRect(400, 50, 100, 100);
		
		// opaque half-intensity red, one square on each background
		g2.setPaint(new Color(0.5f, 0f, 0f, 1.0f));
		g2.fillRect(100, 200, 100, 100);
		g2.fillRect(400, 200, 100, 100);
	}

http://hexara.com/Images/ColorBlendQuestion.JPG

:o

@philfrei - In what way is that unexpected? In the top case the colour is still half transparent, therefore on the white background 50% of the white is added, making it look paler. On the black, 50% of nothing is added, so it looks identical. You can’t ignore the alpha channel in pre-multiplied mode - as you’ve found, comparing it with an opaque colour that happens to have the same colour value is pointless (and confusing! :smiley: )

@ra4king Since you don’t seem to be afraid of math…

If I paint the colour (r, g, b, a) onto a background colour of (r', g', b'), the resultant colour in standard blend mode is (ra + r'(1-a), ga + g'(1-a), ba + b'(1-a)).

This is the same as adding (ra, ga, ba) to (r'(1-a), g'(1-a), b'(1-a)).

So we use the blend mode ONE, ONE_MINUS_SRC_ALPHA and multiply our painting colour by alpha first (pre-multiply), which gives mathematically the same result as the blend mode SRC_ALPHA, ONE_MINUS_SRC_ALPHA.

Also it should be clear why we need to keep the alpha value.
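That identity is easy to check numerically; a standalone sketch (one colour channel only, names invented):

```java
// Verifies, for a single channel, that premultiplied ONE / ONE_MINUS_SRC_ALPHA
// gives the same result as straight SRC_ALPHA / ONE_MINUS_SRC_ALPHA.
public class BlendCheck {

    // Standard blending: dst' = src * a + dst * (1 - a)
    static float straight(float src, float a, float dst) {
        return src * a + dst * (1 - a);
    }

    // Premultiplied blending: dst' = srcPremult + dst * (1 - a),
    // where srcPremult = src * a was computed up front
    static float premult(float srcPremult, float a, float dst) {
        return srcPremult + dst * (1 - a);
    }

    public static void main(String[] args) {
        float src = 0.8f, a = 0.5f, dst = 0.3f;
        System.out.println(straight(src, a, dst));    // same value...
        System.out.println(premult(src * a, a, dst)); // ...both ways
    }
}
```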

:o thanks! So what are the image backgrounds supposed to look like? (e.g. black, transparent, or does it not matter?)

In a shader-based renderer another option would be doing the pre-multiplication in the fragment shader:

vec4 color = texture2D(...);
... // do lighting/whatever
gl_FragColor = vec4(color.rgb * color.a, color.a);

The computation cost is minimal and you don’t waste time with pre-processing textures (either offline or at runtime). Also, it avoids the posterization problem described in the OP’s third link, for a small quality gain.

That would defeat the purpose of it, if I’ve understood this right. It’s at least partly meant to solve bleeding when using bilinear filtering. By “premultiplying” the already interpolated color, you’re not achieving this.

[quote=“theagentd,post:18,topic:39549”]
[/quote]
That’s correct: if the texture’s getting scaled, you need premultiplied values before sampling. In most 2D games, however, that’s not an issue.

[quote=“Spasi,post:19,topic:39549”]
[/quote]

Then what’s the whole point of it? You’re emulating blending that’s done pretty much for free by dedicated hardware in a shader / with fixed functionality.