The answer depends on a lot of details you've omitted. The simple case is that the pixels array is the backing array of an opaque BufferedImage, and what you really want is just a per-pixel blending calculation on top of the existing opaque colour. The more complicated case is an image with an alpha channel: the blend then has to produce an alpha value as well, and the calculation for the colour channels depends on whether the image is pre-multiplied or not. In both cases the ColorModel matters, and in the more complicated case the precise API calls linking the image and the pixel array also matter, because they control whether you can do the alpha and colour calculations with a single array or whether you need a separate Raster for the alpha channel.
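As a minimal sketch of the plumbing (opaqueBackingPixels is a hypothetical name, and the cast assumes an int-packed image type such as TYPE_INT_RGB), you can fetch the backing array and check the properties above like this; note that grabbing the raw array typically stops Java 2D from keeping an accelerated copy of the image:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

static int[] opaqueBackingPixels(BufferedImage image) {
    // These two properties decide which blending calculation applies;
    // this helper only accepts the simple opaque case.
    if (image.getColorModel().hasAlpha() || image.isAlphaPremultiplied()) {
        throw new IllegalArgumentException("expected an opaque image");
    }
    // Throws ClassCastException unless the raster is backed by a
    // single int[] (e.g. TYPE_INT_RGB)
    return ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
}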
In the simplest case (an opaque image), the optimised blending calculation is:
// b_rrggbb is the current value of the pixel
// a_rrggbb is blended over it with an alpha value out of 256
// (0 keeps b, 256 replaces it with a)
static int blendedPixel(int a_rrggbb, int b_rrggbb, int alpha) {
    // Split red+blue from green so that two channels share one multiply:
    // each weighted sum is at most 0xff * 256 = 0xff00 (16 bits), so the
    // red and blue sums can't carry into each other.
    int a_rr00bb = a_rrggbb & 0xff00ff;
    int a_gg00 = a_rrggbb & 0xff00;
    int b_rr00bb = b_rrggbb & 0xff00ff;
    int b_gg00 = b_rrggbb & 0xff00;
    // The masks keep only the top 8 bits of each weighted sum
    int blend_rr00bb00 = (a_rr00bb * alpha + b_rr00bb * (256 - alpha)) & 0xff00ff00;
    int blend_gg0000 = (a_gg00 * alpha + b_gg00 * (256 - alpha)) & 0xff0000;
    // The unsigned shift completes the division by 256; the result's alpha
    // byte is zero, so in the unlikely event you need an opaque ARGB value,
    // add 0xff000000.
    return (blend_rr00bb00 | blend_gg0000) >>> 8;
}
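As a hedged usage sketch (tint is a hypothetical helper, reusing opaqueBackingPixels from above and assuming a TYPE_INT_RGB image), blending one colour over the whole image would then look like:

static void tint(BufferedImage image, int rgb, int alpha) {
    int[] pixels = opaqueBackingPixels(image);
    for (int i = 0; i < pixels.length; i++) {
        // rgb goes "over" each existing pixel with the given weight
        pixels[i] = blendedPixel(rgb, pixels[i], alpha);
    }
}

The point of packing red and blue into one multiply is that it cuts the cost to four multiplications per pixel instead of the six a naive per-channel blend needs.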