Alpha / Additive blending with Composite and Graphics2D

My current project uses Graphics2D for rendering.

I’ve been thinking about how to add additive blending to this, and after some reading around the internet it seems this is not possible out of the box.

I’ve read the Javadocs for AlphaComposite, and I’m thinking that, with some maths and some bashing my head against the keyboard, it would be possible to create a new AdditiveComposite class that can be used in much the same way for blending additively with Graphics2D.

On top of this, it would be possible to create any type of blending required (I’m thinking Photoshop-style modes such as Overlay and Linear Burn, etc.).
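For reference, here's a rough sketch of what a couple of those per-channel blend functions could look like. These are my own guesses at the commonly published formulas (not taken from Photoshop itself), with all values in the 0..255 range:

```java
// Per-channel blend functions, 0..255 inputs and outputs.
// Formulas are the commonly published ones; treat as a sketch, not gospel.
public class BlendModes {
  // Additive: src + dst, clamped at 255
  public static int add(int src, int dst) {
    return Math.min(255, src + dst);
  }

  // Linear Burn: src + dst - 255, clamped at 0
  public static int linearBurn(int src, int dst) {
    return Math.max(0, src + dst - 255);
  }

  // Overlay: multiply for a dark base, screen for a light base
  public static int overlay(int src, int dst) {
    if (dst < 128) {
      return (2 * src * dst) / 255;
    }
    return 255 - (2 * (255 - src) * (255 - dst)) / 255;
  }

  public static void main(String[] args) {
    System.out.println(add(200, 100));        // 255 (clamped)
    System.out.println(linearBurn(200, 100)); // 45
    System.out.println(overlay(100, 200));    // 189
  }
}
```

Each of these could be dropped into the inner loop of a CompositeContext in place of the additive maths.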

I wanted to have a discussion on this topic, to see if this has been done before, found to be impossible, or if there’s a better way to go about it.

A caveat is that any blending would have to be used inside an update/render game loop, and therefore needs to be as fast as possible.

So, any thoughts?

EDIT: Oh yeah, my current project is to create a game engine from scratch, mainly as a learning experience, and also because I would like an end product whose inner workings I fully understand. I’d rather go the long way with this than ‘Go use XYZ Library, as it will do everything you need.’ ;D

It is quite possible, but I don’t think it’d be hardware-accelerated.

This post got me interested, and I hacked up a bit of a test.
It’s not very fast, and I just guessed the maths, so I can’t guarantee it’s fully correct.

import java.awt.*;
import java.awt.image.*;

public class AdditiveComposite implements Composite {
  public AdditiveComposite() {}

  public CompositeContext createContext(ColorModel srcColorModel, ColorModel dstColorModel, RenderingHints hints) {
    return new AdditiveCompositeContext();
  }
}

import java.awt.*;
import java.awt.image.*;

public class AdditiveCompositeContext implements CompositeContext {
  public AdditiveCompositeContext() {}

  public void compose(Raster src, Raster dstIn, WritableRaster dstOut) {
    int w1    = src.getWidth();
    int h1    = src.getHeight();
    int chan1 = src.getNumBands();
    int w2    = dstIn.getWidth();
    int h2    = dstIn.getHeight();
    int chan2 = dstIn.getNumBands();

    int minw  = Math.min(w1, w2);
    int minh  = Math.min(h1, h2);
    int minCh = Math.min(chan1, chan2);

    // This bit is horribly inefficient,
    // getting individual pixels rather than all at once.
    for(int x = 0; x < minw; x++) {
      for(int y = 0; y < minh; y++) {
        float[] pxSrc = src.getPixel(x, y, (float[])null);
        float[] pxDst = dstIn.getPixel(x, y, (float[])null);

        float alpha = 255;
        if(pxSrc.length > 3) {
          alpha = pxSrc[3];
        }

        // Additive blend: dst += src * (srcAlpha / 255), clamped at 255
        for(int i = 0; i < 3 && i < minCh; i++) {
          pxDst[i] = Math.min(255, pxSrc[i] * (alpha / 255) + pxDst[i]);
        }
        dstOut.setPixel(x, y, pxDst);
      }
    }
  }

  public void dispose() {}
}

This next class sets up a simple JFrame with a custom JPanel that draws wandering spotlights:

import java.awt.*;
import javax.swing.*;
import java.awt.image.*;

public class Prog {
  public static void main(String[] a) {
    JFrame frame = new JFrame();
    MyPanel cp = new MyPanel();
    frame.setContentPane(cp);
    frame.setSize(400, 400);
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setVisible(true);
    while(true) {
      try {
        cp.moveSpots();
        cp.repaint();
        Thread.sleep(20);
      } catch(Exception e) {
        e.printStackTrace();
      }
    }
  }
}
class Spotlight {
  float x, y, vx, vy;
  int w, h;
  Color c;
  static Composite acomp = new AdditiveComposite();

  public Spotlight(Color colour, int xStart, int yStart, int width, int height) {
    c = colour;
    x = xStart;
    y = yStart;
    w = width;
    h = height;
  }

  public void paint(Graphics2D g) {
    Composite oldComp = g.getComposite();
    g.setComposite(acomp);   // comment out this line to see the difference
    g.setColor(c);
    g.fillOval((int)x, (int)y, w-10, h-10);
    g.setComposite(oldComp); // restore the previous composite
  }

  public void moveRandom(int minX, int minY, int maxX, int maxY) {
    vx += 8 * (Math.random() * 0.2 - 0.1);
    vy += 8 * (Math.random() * 0.2 - 0.1);
    vx *= 0.98;
    vy *= 0.98;
    if(x > maxX - w) {
      vx = vx < 0 ? vx : -vx;
    }
    if(x < minX) {
      vx = vx > 0 ? vx : -vx;
    }
    if(y > maxY - h) {
      vy = vy < 0 ? vy : -vy;
    }
    if(y < minY) {
      vy = vy > 0 ? vy : -vy;
    }
    x += vx;
    y += vy;
  }
}

class MyPanel extends JPanel {
  Spotlight[] spots = null;

  public MyPanel() {
    spots = new Spotlight[6];
    spots[0] = new Spotlight(new Color(255, 0, 0, 128), 200, 0,   200, 200);
    spots[1] = new Spotlight(new Color(0, 255, 0, 128), 100, 200, 200, 200);
    spots[2] = new Spotlight(new Color(0, 0, 255, 128), 0,   0,   200, 200);
    spots[3] = new Spotlight(new Color(255, 0, 0, 128), 200, 0,   200, 200);
    spots[4] = new Spotlight(new Color(0, 255, 0, 128), 100, 200, 200, 200);
    spots[5] = new Spotlight(new Color(0, 0, 255, 128), 0,   0,   200, 200);
  }

  public void moveSpots() {
    if(spots == null) return;
    for(int i = 0; i < 6; i++) {
      if(spots[i] != null) {
        spots[i].moveRandom(0, 0, this.getWidth(), this.getHeight());
      }
    }
  }

  public void paintComponent(Graphics g) {
    Graphics2D g2 = (Graphics2D)g;
    g.setColor(Color.black);
    g.fillRect(0, 0, this.getWidth(), this.getHeight());
    if(spots == null) return;
    for(int i = 0; i < 6; i++) {
      if(spots[i] != null) {
        spots[i].paint(g2);
      }
    }
  }
}

The graphics code seems pretty smart about providing the compose() function with only the area of interest, so I think if you use it for limited, small effects it might run OK.

I noticed that my compose() function was being given 64x64 tiles, rather than the whole image at once. There’s mention of multithreading in the API, but the implementation doesn’t appear to use it.
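Since compose() only ever sees small tiles, one cheap speed-up over the per-pixel version is to fetch whole rows of samples at once with getPixels() instead of calling getPixel() per pixel. Here's a sketch of a row-at-a-time additive compose (same maths as above; not benchmarked, and the band indexing assumes the usual R,G,B,A order of TYPE_INT_ARGB rasters):

```java
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

// Row-at-a-time additive compose: one getPixels()/setPixels() pair per row
// instead of one getPixel()/setPixel() pair per pixel.
public class AdditiveRows {
  public static void compose(Raster src, Raster dstIn, WritableRaster dstOut) {
    int w  = Math.min(src.getWidth(),  dstIn.getWidth());
    int h  = Math.min(src.getHeight(), dstIn.getHeight());
    int ch = Math.min(src.getNumBands(), dstIn.getNumBands());
    int nS = src.getNumBands();
    int nD = dstIn.getNumBands();

    float[] rowSrc = new float[w * nS];
    float[] rowDst = new float[w * nD];

    for (int y = 0; y < h; y++) {
      src.getPixels(0, y, w, 1, rowSrc);     // whole row of source samples
      dstIn.getPixels(0, y, w, 1, rowDst);   // whole row of destination samples
      for (int x = 0; x < w; x++) {
        float alpha = nS > 3 ? rowSrc[x * nS + 3] : 255;
        // Additive blend: dst += src * (srcAlpha / 255), clamped at 255
        for (int i = 0; i < 3 && i < ch; i++) {
          rowDst[x * nD + i] =
              Math.min(255, rowSrc[x * nS + i] * (alpha / 255) + rowDst[x * nD + i]);
        }
      }
      dstOut.setPixels(0, y, w, 1, rowDst);
    }
  }

  public static void main(String[] args) {
    BufferedImage a = new BufferedImage(2, 1, BufferedImage.TYPE_INT_ARGB);
    BufferedImage b = new BufferedImage(2, 1, BufferedImage.TYPE_INT_ARGB);
    a.setRGB(0, 0, 0xFF909090);  // opaque grey
    b.setRGB(0, 0, 0xFF909090);
    compose(a.getRaster(), b.getRaster(), b.getRaster());
    // 0x90 + 0x90 = 0x120 per channel, clamped to 0xFF
    System.out.printf("%08X%n", b.getRGB(0, 0)); // prints FFFFFFFF
  }
}
```

It still copies everything through float arrays, so for real speed you'd probably want to work on the packed int data directly, but it cuts the method-call overhead considerably.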

Digging down into the Java source code, I found that their AlphaComposite creates BufferedImages from the rasters, then calls a native function called Blit. I didn’t look deeper than a quick glance at that file, and didn’t really understand it. There seems to be a ton of boilerplate in Java.

Nice demo, works great - when all spotlights are on the same point it turns white.

There is a hardware-accelerated API for this in Project Scenegraph. It seems to have been swallowed up by the JavaFX mess, but I know it is usable from plain old Java and Java2D.

Ahh, I see… interesting.

I’ve used a similar process in C# for applying things like edge-finding to CCTV images, but found it to be very slow. Then again, that was quite a heavy calculation per pixel, rather than just adding color values.

I’m not sure how Composite/CompositeContext works - does it run in software, or does it instruct the graphics hardware to use that calculation? If this is handled at the pixel level in Java, it would be very slow on low-end systems, right?

Sorry for not replying sooner.


Careful with custom Composites - you can’t create and use custom Composite implementations in the security sandbox (i.e. applets), so you may find it better to just do the maths on a BufferedImage / Raster directly.
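To illustrate that suggestion, here's a minimal sketch of additive blending done directly on the packed ARGB ints of two BufferedImages, with no Composite involved (the class and method names are my own, as is the choice to scale the source by its alpha):

```java
import java.awt.image.BufferedImage;

// Sandbox-friendly additive blend: operate on packed ARGB ints directly
// instead of going through a custom Composite.
public class DirectAdditive {
  public static void addOnto(BufferedImage src, BufferedImage dst) {
    int w = Math.min(src.getWidth(),  dst.getWidth());
    int h = Math.min(src.getHeight(), dst.getHeight());
    for (int y = 0; y < h; y++) {
      for (int x = 0; x < w; x++) {
        int s = src.getRGB(x, y);
        int d = dst.getRGB(x, y);
        int a = s >>> 24;  // source alpha scales its contribution
        int r = Math.min(255, ((d >> 16) & 0xFF) + (((s >> 16) & 0xFF) * a) / 255);
        int g = Math.min(255, ((d >>  8) & 0xFF) + (((s >>  8) & 0xFF) * a) / 255);
        int b = Math.min(255, ( d        & 0xFF) + (( s        & 0xFF) * a) / 255);
        // keep the destination's alpha, replace its colour channels
        dst.setRGB(x, y, (d & 0xFF000000) | (r << 16) | (g << 8) | b);
      }
    }
  }
}
```

For per-frame use you'd want to grab the int[] from the raster's DataBuffer rather than call getRGB/setRGB per pixel, but the maths is the same.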