Blending hex colors for 2D pixel lighting

So I’m experimenting with 2D pixel lighting and having some issues with blending colors together.

A few things first.

  • I’m using standard Java2D for rendering, no shaders or OpenGL.
  • I’m drawing sprites from a spritesheet into a pixel array, then rendering that pixel array with g.drawImage().
  • Performance isn’t an issue; I’m easily getting 1000fps+. Sprites are 8x8 and are scaled up so they don’t look crazy small.
  • Lights have an x, y, range, intensity & color.

I’ve been trying to change the color of each pixel, within the range of the light, to be affected by the light source; based off its intensity and color.

I’ve managed to render a light with this result, only it doesn’t look right. It’s meant to be a red light.

I’ve been getting the RGB values from the original hex color and blending them with the light color, based off its range.

When you say “it doesn’t look right,” how do you want it to look?
You seem to do additive blending, since green ‘plus’ a bit of red gives orange (towards yellow):

newSurfaceColor = oldSurfaceColor + lightColor

There are many artistic freedoms and expectations when it comes to lighting.
But when leaning towards more physical correctness then you should multiply light color by surface color and add that to the surface color:

newSurfaceColor = oldSurfaceColor + oldSurfaceColor * lightColor

That would be somewhat physically correct. But this also means a completely red light on a completely green surface essentially becomes black and adds nothing to the surface color.
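To make the difference concrete, here’s a minimal sketch of the two equations on packed 0xRRGGBB ints (the class and method names are mine, not from this thread):

```java
public class BlendDemo {

    // Additive blending: newSurface = oldSurface + light, per channel, clamped
    static int additive(int surface, int light) {
        int r = Math.min(255, ((surface >> 16) & 0xff) + ((light >> 16) & 0xff));
        int g = Math.min(255, ((surface >> 8) & 0xff) + ((light >> 8) & 0xff));
        int b = Math.min(255, (surface & 0xff) + (light & 0xff));
        return (r << 16) | (g << 8) | b;
    }

    // Physically leaning blend: newSurface = oldSurface + oldSurface * light,
    // with the light channel normalized to 0..1 before multiplying
    static int modulated(int surface, int light) {
        int r = channel(surface, light, 16);
        int g = channel(surface, light, 8);
        int b = channel(surface, light, 0);
        return (r << 16) | (g << 8) | b;
    }

    private static int channel(int surface, int light, int shift) {
        int s = (surface >> shift) & 0xff;
        double l = ((light >> shift) & 0xff) / 255.0;
        return Math.min(255, (int) (s + s * l));
    }

    public static void main(String[] args) {
        int green = 0x00ff00, red = 0xff0000;
        // Additive: full red on full green drifts toward yellow
        System.out.printf("additive:  %06x%n", additive(green, red));  // ffff00
        // Modulated: a pure red light on a pure green surface adds nothing
        System.out.printf("modulated: %06x%n", modulated(green, red)); // 00ff00
    }
}
```

Note how the modulated version reproduces exactly the “red light on green surface becomes black” behavior described above.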

Well here’s what I get with a white light.

And here’s the code to render a light.


		//Color and intensity of light
		int col = 0xffffff;
		double intensity = 1.0;

		//Loop over pixels within light radius
		for(int y = ya - radius * 2; y <= ya + radius * 2; y++){
			for(int x = xa - radius * 2; x <= xa + radius * 2; x++){

				//If out of bounds, skip this pixel
				if(x < 0 || y < 0 || x >= width || y >= height) continue;

				//Distance between light source and current pixel
				double dist = Util.distance(xa, ya, x, y);

				//Original pixel color
				int orig = pixels[x + y * width];

				//Original RGB values
				int orR = (orig & 0xff0000) >> 16;
				int orG = (orig & 0xff00) >> 8;
				int orB = (orig & 0xff);

				//Light RGB values
				int newR = (col & 0xff0000) >> 16;
				int newG = (col & 0xff00) >> 8;
				int newB = (col & 0xff);

				//Light RGB values affected by distance, radius and intensity
				newR = (int) (newR / dist * radius * intensity);
				newG = (int) (newG / dist * radius * intensity);
				newB = (int) (newB / dist * radius * intensity);

				//Take the brighter of the original and light RGB values, clamped to 0-255
				int r = Math.min(255, Math.max(0, Math.max(orR, newR)));
				int g = Math.min(255, Math.max(0, Math.max(orG, newG)));
				int b = Math.min(255, Math.max(0, Math.max(orB, newB)));

				//Change the color of the pixel
				pixels[x + y * width] = (r << 16) | (g << 8) | (b);
			}
		}

It’s probably accurate, but it’s not the look I’m after.

I can’t quite explain what I’m looking for, so I whipped this up.

That lighting model is completely wacky.
For light you need a subtractive color model.

Like I said:

newSurfaceColor = oldSurfaceColor + oldSurfaceColor * lightColor

should be your equation (plus, of course, some attenuation for the light’s intensity).

You currently have something like this:

newSurfaceColor = max(oldSurfaceColor, lightColor * linearAttenuationFactor)

To get correct results you MUST attenuate/multiply your light color by the surface color first, and then add that to the surface color. To do that you first MUST normalize the colors from 0…255 to 0…1. Then you can simply multiply.
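As a sketch of that order of operations (normalize each channel to 0..1, multiply the light by the surface, add it back to the surface, convert back to 8-bit), assuming some precomputed attenuation factor; the class and helper names are mine:

```java
public class LitPixel {

    // Normalize an 8-bit channel (0-255) to 0..1
    static double norm(int c) { return c / 255.0; }

    // Convert 0..1 back to 0-255, clamped
    static int denorm(double v) { return (int) Math.min(255, Math.max(0, v * 255.0)); }

    // newSurface = oldSurface + oldSurface * light * attenuation, per channel
    static int lit(int surface, int light, double attenuation) {
        int out = 0;
        for (int shift = 16; shift >= 0; shift -= 8) {
            double s = norm((surface >> shift) & 0xff);
            double l = norm((light >> shift) & 0xff);
            out |= denorm(s + s * l * attenuation) << shift;
        }
        return out;
    }

    public static void main(String[] args) {
        // A mid-grey surface under a full white light roughly doubles in brightness
        System.out.printf("%06x%n", lit(0x404040, 0xffffff, 1.0));
        // A pure red light on a pure green surface changes nothing
        System.out.printf("%06x%n", lit(0x00ff00, 0xff0000, 1.0)); // 00ff00
    }
}
```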

Additionally, the image with the fire you showed seems to use quadratic distance attenuation (as is correct for point lights) rather than linear attenuation. That explains the fast falloff of the light’s intensity a few “meters” around the fire.
The light color in that image also seems to be simply white; only the flame’s image is shown in red/yellow.
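For reference, here’s a hedged sketch of how the two falloff curves differ; the radius-based scaling is my assumption, not taken from the fire image:

```java
public class Falloff {

    // Linear attenuation: intensity falls off as 1/d
    static double linear(double dist, double radius) {
        return Math.min(1.0, radius / dist);
    }

    // Quadratic attenuation: intensity falls off as 1/d^2, much faster
    static double quadratic(double dist, double radius) {
        return Math.min(1.0, (radius * radius) / (dist * dist));
    }

    public static void main(String[] args) {
        // At twice the radius, linear keeps half the light, quadratic only a quarter
        System.out.println(linear(16, 8));    // 0.5
        System.out.println(quadratic(16, 8)); // 0.25
    }
}
```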

Note: I am going to talk about lighting as a color range between (0.0-1.0) here, not (0-255).

Lighting stacks additively. If you have two identical lights at the same distance from a given point, the point becomes twice as bright as if only one light had been there. In addition, the light reaching a given pixel should be multiplied by the unlit color of the pixel. You could write it as:


Vector3 pixelColor = ...;
Vector3 litPixel = (0, 0, 0);
for(Light light : lightList){
    litPixel += pixelColor * light.color * attenuation;
}

This has one glaring problem. When the light intensity of a color channel reaches 1.0 (or 255), you get a very ugly area where the color is obviously clamped. In the following image you can clearly see the area where the light intensity was clamped to 1.

However, although the physical light intensity may double from having two lights, that does not mean that our eyes actually see it as twice as bright. Our eyes are much more sensitive at lower light intensities, and we can’t differentiate very well between two different bright intensities. With our monitors only being able to show 256 different levels of brightness per channel, visualizing extremely bright lights is difficult. To show extremely bright lights, you really need to use HDR, High Dynamic Range, which is just a fancy term for using floating-point pixel colors that can exceed 1.0 instead of 8-bit colors restricted to 1.0. After accumulating all light, you run the pixel colors through a tone-mapping function to reduce them to the (0.0-1.0) range. The simplest operator is (color / (color + 1)), which converts values like this:

0.0 --> 0.0
0.5 --> 0.3333
1.0 --> 0.5
1.5 --> 0.6
2.0 --> 0.6666
3.0 --> 0.75
5.0 --> 0.8333

As you can see, as the raw color approaches infinity the displayed color approaches 1.0, the maximum we can display. That ensures that there is always a “brighter” value to display if the light is brighter, and gives a much smoother curve compared to just clamping the values at 1.0.
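A minimal sketch of that tone-mapping operator (the class name is mine):

```java
public class ToneMap {

    // Reinhard-style operator: maps raw intensity [0, inf) into [0, 1)
    static double map(double c) {
        return c / (c + 1.0);
    }

    public static void main(String[] args) {
        // Reproduces the value table above
        for (double c : new double[]{0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 5.0}) {
            System.out.printf("%.1f --> %.4f%n", c, map(c));
        }
    }
}
```

Apply it per channel after summing all light contributions, just before converting back to 8-bit for display.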

Another big problem: I’m assuming that your ground is very close to pure green and that your light is pure red. In this case, when you multiply these two colors together you get (0, 1, 0) * (1, 0, 0), which is (0, 0, 0). It is VERY important to always have a little bit of each color channel so that extremely bright points can converge to white. Look at this picture:

http://blogg.svt.se/melodifestivalen/files/2014/02/scen.jpg

The light shafts, and especially the lights themselves, all converge to white if you check the actual colors in the image. Our brains still understand what color this bright “white” is supposed to have based on the colored bloom/halo around the pixel. The above tone-mapping operator actually achieves this effect:
(0.5, 0.1, 0.1) —> (0.333, 0.09, 0.09), not heavily modified and clearly red.
(50, 10, 10) —> (0.98, 0.91, 0.91), very close to white with a slight red tint.

This however assumes that neither the ground color nor the light color is a pure color. If any of these two colors’ color channels are 0, the result for that channel WILL be zero preventing the fade to white.
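A small sketch of that failure mode, assuming per-channel surface * light * brightness followed by the (color / (color + 1)) operator (names are mine):

```java
public class PureColor {

    static double toneMap(double c) { return c / (c + 1.0); }

    // Per-channel light energy reaching the eye, tone-mapped for display
    static double lit(double surface, double light, double brightness) {
        return toneMap(surface * light * brightness);
    }

    public static void main(String[] args) {
        // Slightly impure channels survive and can push toward white at high brightness
        System.out.println(lit(0.1, 1.0, 100.0)); // ~0.909
        // A channel that is exactly 0 in either color stays 0, however bright the light
        System.out.println(lit(0.0, 1.0, 100.0)); // 0.0
    }
}
```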

Ahh I think I’m starting to understand. I’m trying to figure out the best way to do this.

  • I’m thinking of having two arrays. One for the unaltered pixels and one for the lights.
  • Each frame the light array is cleared with the ambient color. So day would be a bright white with a hint of yellow and dark would be a very dark blue.
  • When a sprite is rendered it is drawn into the pixel array; when a light is rendered it is drawn into the light array.
  • Then when everything is rendered, combine the pixel array and the light array.

Could this be a viable way of implementing this? Thanks for your help.

Yes, that would be a great way of doing things. You’d essentially have a “light buffer” and simply multiply it with the color of each pixel. And, if you’re feeling clever, you could actually store the light buffer at floating point precision, convert the pixel color to floats, multiply them together, then tone map the result and finally write out an 8-bit value, although that could be a bit overkill. =P
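The buffer scheme could be sketched like this; the array names and the ambient constant are my assumptions, not code from this thread:

```java
public class LightBuffer {

    // Combine one sprite pixel with one lightmap pixel by per-channel multiply
    static int combine(int pixel, int light) {
        int out = 0xff000000; // opaque alpha
        for (int shift = 16; shift >= 0; shift -= 8) {
            int p = (pixel >> shift) & 0xff;
            int l = (light >> shift) & 0xff;
            out |= ((p * l) / 255) << shift; // multiply blend is self-clamping
        }
        return out;
    }

    public static void main(String[] args) {
        int width = 4, height = 4;
        int[] pixels = new int[width * height];   // sprites get drawn here
        int[] lightmap = new int[width * height]; // lights get drawn here

        // 1. Clear the lightmap with the ambient color each frame
        int ambient = 0x202040; // made-up dark blue "night" ambient
        java.util.Arrays.fill(lightmap, ambient);

        // 2. ...render sprites into pixels[], lights into lightmap[]...

        // 3. Combine the two buffers for display
        int[] out = new int[width * height];
        for (int i = 0; i < out.length; i++) {
            out[i] = combine(pixels[i], lightmap[i]);
        }
    }
}
```

A fully white lightmap pixel leaves the sprite color unchanged; a black one blacks it out, which is exactly the ambient-darkness behavior described above.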

It works! (I think)

Here’s the original, unaltered pixels.

The lightmap

Then the two combined

I’m not sure if my lighting code is correct though.

Here’s how I render a light source and draw it to the lightmap.

				double dist = Util.distance(xa, ya, x, y);
				int old = lightmap[x + y * width];

				double oldR = Util.normaliseColor((old & 0xff0000) >> 16);
				double oldG = Util.normaliseColor((old & 0xff00) >> 8);
				double oldB = Util.normaliseColor(old & 0xff);

				double lightR = Util.normaliseColor((int) (((col & 0xff0000) >> 16) * radius / (dist * dist)));
				double lightG = Util.normaliseColor((int) (((col & 0xff00) >> 8) * radius / (dist * dist)));
				double lightB = Util.normaliseColor((int) ((col & 0xff) * radius / (dist * dist)));

				int newR = Util.normalToColor((oldR * lightR) * intensity);
				int newG = Util.normalToColor((oldG * lightG) * intensity);
				int newB = Util.normalToColor((oldB * lightB) * intensity);

				lightmap[x + y * width] = (newR << 16) | (newG << 8) | newB;

And here’s how I blend the final lightmap with the original pixels:

		double oldR = Util.normaliseColor((px & 0xff0000) >> 16);
		double oldG = Util.normaliseColor((px & 0xff00) >> 8);
		double oldB = Util.normaliseColor(px & 0xff);

		double newR = Util.normaliseColor((lgt & 0xff0000) >> 16);
		double newG = Util.normaliseColor((lgt & 0xff00) >> 8);
		double newB = Util.normaliseColor(lgt & 0xff);

		int r = Util.normalToColor(oldR * newR);
		int g = Util.normalToColor(oldG * newG);
		int b = Util.normalToColor(oldB * newB);

		return (0xff << 24) | (r << 16) | (g << 8) | (b);

The normaliseColor() & normalToColor() methods simply convert an 8-bit RGB channel (0-255) to a value between 0.0 and 1.0 and back, clamping so the values don’t go above 1.0 and 255.

This old thread on blend modes might be useful for what you’re doing. For best performance, if not the most accurate result, I’d think you want to stick with doing this all without converting to doubles. Mind you, something to benchmark! Also, know when you need to clamp and when you don’t: multiply blending is self-clamping.

My code that’s referred to has moved - it’s here and here.

You’re not additively adding the lights together; you’re multiplying them. The physically correct way is to simply add them, but as always when you work with graphics: if it looks good, it IS good.
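A hedged sketch of that additive accumulation into the lightmap, clamping per channel (this is my rewrite of the idea, not code from the thread):

```java
public class Accumulate {

    // Add one light's attenuated contribution on top of the existing lightmap value
    static int addLight(int old, int lightColor, double attenuation) {
        int out = 0;
        for (int shift = 16; shift >= 0; shift -= 8) {
            int o = (old >> shift) & 0xff;
            int l = (int) (((lightColor >> shift) & 0xff) * attenuation);
            out |= Math.min(255, o + l) << shift; // additive, clamped per channel
        }
        return out;
    }

    public static void main(String[] args) {
        // Two identical half-strength white lights stack up, rather than
        // multiplying each other away
        int lit = addLight(0x000000, 0xffffff, 0.5);
        lit = addLight(lit, 0xffffff, 0.5);
        System.out.printf("%06x%n", lit);
    }
}
```

With the multiply approach, a second light can only darken what the first one wrote; with this version each light brightens the lightmap, which matches the “two lights are twice as bright” rule from earlier in the thread.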