Blending Alpha Pixels?

Consider the following code:


for (int y = 0; y < ht; y++) {
    int yp = ypos + y;
    if (yp >= pri.getHeight() || yp < 0)
        continue;

    for (int x = 0; x < wt; x++) {
        int xp = xpos + x;
        if (xp >= pri.getWidth() || xp < 0)
            continue;

        int pos = yp * pri.getWidth() + xp;
        if (pos < 0 || pos >= pixels.length)
            continue;

        int cv = pixels[pos];
        int nv = colors[y * wt + x];
        double alpha = nv >> 24 & 0xFF;
        if (alpha < 255) {
            nv = ColorUtils.packInt(255, cv >> 16 & 0xFF, cv >> 8 & 0xFF, cv & 0xFF);
            nv = (int) Math.round(cv * (alpha / 255) + nv * (1 - alpha / 255));
        }

        pixels[pos] = nv;
    }
}

As you may be able to guess, I’m attempting to blend translucent pixels. This method doesn’t appear to be working very well, however. Given pixel value ‘a’ (the value currently set in the image) and new translucent value ‘b’ (let’s assume an alpha of around 200), how do you blend the two pixels so that ‘a’ still shows through ‘b’ with its coloring modified (e.g. red over blue gives a purplish result)?

How about interpolating them?
Something like:


interpolate(a,b,factor)
interpolate(sourceColor,destinationColor,alphaValue)

That should work.
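
For example, a minimal per-channel sketch of that idea (this interpolate helper is hypothetical, assuming packed ARGB ints like in your code):

//hypothetical helper: linearly interpolates each RGB channel of two
//packed ARGB ints; factor is the source alpha mapped to 0..1
static int interpolate(int src, int dst, double factor) {
    int r = (int) Math.round((src >> 16 & 0xFF) * factor + (dst >> 16 & 0xFF) * (1 - factor));
    int g = (int) Math.round((src >> 8 & 0xFF) * factor + (dst >> 8 & 0xFF) * (1 - factor));
    int b = (int) Math.round((src & 0xFF) * factor + (dst & 0xFF) * (1 - factor));
    return 0xFF << 24 | r << 16 | g << 8 | b;
}

//e.g. in your loop: pixels[pos] = interpolate(nv, cv, (nv >>> 24) / 255.0);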

Do you mean standard blending that looks like this?

[Image of standard alpha blending, where the foreground image is at 75% opacity]

Typically we use SRC_ALPHA, ONE_MINUS_SRC_ALPHA:

{foreground}*sourceAlpha + {background}*(1-sourceAlpha)
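
For example, pure red (255, 0, 0) at alpha 200 over pure blue (0, 0, 255) gives sourceAlpha ≈ 200/255 ≈ 0.78, so R ≈ 255·0.78 ≈ 200 and B ≈ 255·0.22 ≈ 55, producing a purplish (200, 0, 55).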

Looks roughly like this: (you don’t need to use floats)

//assuming TYPE_INT_ARGB


//get background pixels (destination)
int dstValue = pixels[dstPos];
float dstA = ((dstValue & 0xff000000) >>> 24) / 255f; //unused below, since we write an opaque result
float dstR = ((dstValue & 0x00ff0000) >>> 16) / 255f;
float dstG = ((dstValue & 0x0000ff00) >>> 8) / 255f;
float dstB = ((dstValue & 0x000000ff)) / 255f;

//get foreground pixels (source)
int srcValue = pixels[srcPos];
float srcA = ((srcValue & 0xff000000) >>> 24) / 255f;
float srcR = ((srcValue & 0x00ff0000) >>> 16) / 255f;
float srcG = ((srcValue & 0x0000ff00) >>> 8) / 255f;
float srcB = ((srcValue & 0x000000ff)) / 255f;

//premultiply the source channels by the source alpha
srcR *= srcA;
srcG *= srcA;
srcB *= srcA;

//final output
float R = srcR + dstR*(1-srcA);
float G = srcG + dstG*(1-srcA);
float B = srcB + dstB*(1-srcA);

//if we're working with the screen, we typically will want A=255
//in this case TYPE_INT_RGB makes more sense to use
pixels[dstPos] = (255 << 24) | ((int)(R * 255) << 16) | ((int)(G * 255) << 8) | (int)(B * 255);

That works. The only problem is that it’s killing performance: my fixed 60fps rendering system dropped to 14-15fps using that algorithm. Do you have any ideas on how to prevent that?

I suspect there is something wrong with your code or game loop if you’re getting such poor performance.

Iterate through your pixels with a single loop, instead of two:

for (int i = 0; i < WIDTH * HEIGHT; i++) {
    int y = i / WIDTH; //row
    int x = i - WIDTH * y; //column, same as i % WIDTH

    ... 
}
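
Applied to the blit from your first post, that might look like this (just a sketch, reusing your variable names):

for (int i = 0; i < wt * ht; i++) {
    int y = i / wt;
    int x = i - wt * y;
    int yp = ypos + y;
    int xp = xpos + x;
    if (yp < 0 || yp >= pri.getHeight() || xp < 0 || xp >= pri.getWidth())
        continue;
    int pos = yp * pri.getWidth() + xp;
    //...blend colors[i] into pixels[pos] as above...
}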

Some other areas that could be improved:
- Use TYPE_INT_RGB like I suggested earlier
- Use integers instead of converting to float and back (see the sketch after this list)
- Store your sprites/colors unpacked: { a, r, g, b }
- Use Graphics and BufferedImages where possible to take advantage of hardware acceleration
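
On the integer point, here’s a rough sketch of the same SRC_ALPHA, ONE_MINUS_SRC_ALPHA blend in pure integer math, reusing the variable names from your first post (the result is opaque, TYPE_INT_RGB style):

int srcValue = colors[y * wt + x];
int dstValue = pixels[pos];
int sa = srcValue >>> 24; //source alpha, 0..255
int inv = 255 - sa;
int r = ((srcValue >> 16 & 0xFF) * sa + (dstValue >> 16 & 0xFF) * inv) / 255;
int g = ((srcValue >> 8 & 0xFF) * sa + (dstValue >> 8 & 0xFF) * inv) / 255;
int b = ((srcValue & 0xFF) * sa + (dstValue & 0xFF) * inv) / 255;
pixels[pos] = r << 16 | g << 8 | b;

And on the last point, a Graphics2D will do this blend for you (SrcOver is its default composite), possibly in hardware; the image names here are placeholders:

Graphics2D g = backbufferImage.createGraphics(); //backbufferImage: your BufferedImage backbuffer
g.drawImage(spriteImage, xpos, ypos, null); //blends using the sprite's alpha
g.dispose();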

I tried the single loop as you suggested and it hurt performance even more: it knocked me down to 48fps with only 3 render jobs.