Image Rotator

I wanted to crop an image after rotating it, and couldn’t figure out a way to do this using the Java2D AffineTransform (and not have it wreak havoc with the rest of what was on the screen).

Maybe there is a way to do this and I just reinvented a wheel for nothing. But at least I finally learned something about basic 2D transforms (and the underlying matrix & vector math).

So, below is some code that can rotate an image. First a demo graphic or two:

http://hexara.com/Images/Strawberries.JPG

Graphic for Apo!

http://hexara.com/Images/TiltedSheep.JPG

Code is here:
http://pastebin.java-gaming.org/55c3f863a26

The code: first get it working, now it needs improvement. The biggest question I have is the way I interpolate the new pixel values. As I generate the new locations for the original pixels, I store the distance to the surrounding four points (or fewer, if one is on the edge). Then, when determining the color of the new pixels, I take the closest three and use a weighted average based on their distances. This seems a little dubious as the triangles are not equilateral, so the angles should also be taken into consideration. However, they are right triangles and thus the corners are pretty well spread out, and visually it seems to work pretty well.
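For reference, the weighting boils down to roughly this (a sketch with made-up names, not the exact code from the pastebin):

// One color channel blended from the three closest source points.
// 'values' holds the channel value at each point, 'dists' the distance
// from each point to the new pixel location (names invented for this sketch).
static float blendChannel(float[] values, float[] dists) {
    float weightSum = 0f;
    float valueSum = 0f;
    for (int i = 0; i < 3; i++) {
        if (dists[i] == 0f) {
            return values[i];        // a point landing exactly on the target wins outright
        }
        float w = 1f / dists[i];     // closer points get more weight
        weightSum += w;
        valueSum += w * values[i];
    }
    return valueSum / weightSum;
}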

I tried using just the closest point, and also calculating the weighted average using the square of the distance, but both shortcuts were considerably more jaggy. I am assuming someone has come up with a better way to do the interpolation of the new values, and would love to learn about it.

To use the code, create an ImageRotator object, then call the rotate() method with three parameters: (1) the image to be rotated, (2) the radians of rotation, and (3) a boolean that is false if the graphic has no alpha (a plain JPEG, for example) or true if there is alpha data that needs to be part of the interpolation. The method returns a BufferedImage of the rotated graphic.
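For example (a sketch; the class and method are as described above, the variable names are just made up):

// 'source' is any BufferedImage you've already loaded (e.g. via ImageIO.read)
ImageRotator rotator = new ImageRotator();
// 30-degree tilt; 'false' because a plain JPEG has no alpha channel
BufferedImage tilted = rotator.rotate(source, Math.toRadians(30), false);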

this rotation dedicated to Nate:

http://hexara.com/Images/NatesCowTilted.JPG

Am reading that a more accurate lerp would make use of barycentric math. Am working to acquire this. Kind of nice: getting a grip on barycentric math should help with understanding the Simplex noise generation algorithm.

I guess some efficiency might be achieved by packing the color into a long, and only unpacking when it becomes time to lerp. Also, perhaps just store all the points that are candidates for “closest” and only pick the closest three when it comes time to lerp. ??? Will have to do more experiments.

Maybe don’t need to compute the new image x,y sizes directly, but can get this from the four corners (which are needed regardless in order to get the translation required to keep the new coordinates positive).
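Roughly (a sketch of the corner math; w and h are the source size and theta the rotation angle):

// Rotate the four corners of a w-by-h image; the min/max of the results give
// the new image size and the translation that keeps all coordinates positive.
double cos = Math.cos(theta), sin = Math.sin(theta);
double[] xs = { 0, w * cos, -h * sin, w * cos - h * sin };
double[] ys = { 0, w * sin,  h * cos, w * sin + h * cos };
double minX = Math.min(Math.min(xs[0], xs[1]), Math.min(xs[2], xs[3]));
double maxX = Math.max(Math.max(xs[0], xs[1]), Math.max(xs[2], xs[3]));
double minY = Math.min(Math.min(ys[0], ys[1]), Math.min(ys[2], ys[3]));
double maxY = Math.max(Math.max(ys[0], ys[1]), Math.max(ys[2], ys[3]));
int newWidth  = (int) Math.ceil(maxX - minX);
int newHeight = (int) Math.ceil(maxY - minY);
double translateX = -minX;   // shift so nothing lands at a negative coordinate
double translateY = -minY;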

Why don’t you just use g.rotate and g.drawImage on a new BufferedImage? It will (should) take advantage of hardware if possible.

Thanks for the reply, davedes. I tried using g.rotate, but it doesn’t return an image. I wish to clip the resulting rotated image before displaying it, and would like the flexibility of performing further operations on it as well. Is there a way to clip an image after it has been rotated using g.rotate that leaves the rest of the display space intact?

One could edit an image prior to rotating so that the rotated result will be clipped correctly, but that seems overly complicated.

Is there a way to get at data (extract a BufferedImage) from a Graphics object?

I’m working on a texture editor, and rotation might be useful there. The obvious scaling controls only work directly on the vertical and horizontal, they don’t create skews (or I’m not “getting” how to do this).

If you use a VolatileImage as a back buffer you can call getSnapshot(), which will return a BufferedImage. I do not know how slow this is, though…
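Something like this, off the top of my head (untested sketch; width and height are whatever your back buffer needs):

// Create a compatible VolatileImage, draw into it, then copy it back to a BufferedImage.
GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
VolatileImage backBuffer = gc.createCompatibleVolatileImage(width, height, Transparency.TRANSLUCENT);
Graphics2D g = backBuffer.createGraphics();
// ... draw the rotated frame here ...
g.dispose();
BufferedImage snapshot = backBuffer.getSnapshot();   // copies the image data back into a BufferedImage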

You could set a clip on the Graphics object itself?
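Something like this (a sketch; clipShape, angle, the centre point, and image are placeholders):

Graphics2D g2 = (Graphics2D) g;
Shape oldClip = g2.getClip();
AffineTransform oldTransform = g2.getTransform();
g2.setClip(clipShape);                    // any Shape: a Rectangle, an Ellipse2D, ...
g2.rotate(angle, centerX, centerY);
g2.drawImage(image, 0, 0, null);
g2.setTransform(oldTransform);            // put the Graphics back how it was
g2.setClip(oldClip);                      // so the rest of the screen is untouched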

Your method doesn’t seem to work for the following image:

I’d recommend modifying the int array directly, instead of calling setPixel, if you know the image type.

Another way to do this (if I understand your goal correctly) is to create a new BufferedImage, then use getGraphics and drawImage with the affine transform.

Here is a simple code example
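Something along these lines (a rough sketch; src, angle, newWidth, newHeight, translateX, and translateY stand in for your image and the corner math discussed earlier):

BufferedImage dst = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = dst.createGraphics();
g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_BILINEAR);
AffineTransform at = new AffineTransform();
at.translate(translateX, translateY);   // keep the rotated result in positive coordinates
at.rotate(angle);
g2.drawImage(src, at, null);            // src is the image to rotate
g2.dispose();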

@davedes!

First off, thanks for pointing out that one can get a Graphics2D object from a BufferedImage. That makes SO much more possible! I’ll have to play with it a bit, in terms of learning the ins and outs, especially aspects like the various RenderingHints and when it is appropriate to use them.

[quote]Your method doesn’t seem to work for the following image:

[/quote]
You are right! I’m seeing now that one can call the getType() method to determine the type of a graphic. I’ve always just used TYPE_INT_ARGB for my dabbling. It looks like I was reading alpha channel data where I thought I was dealing with red, etc.

[quote]I’d recommend modifying the int array directly, instead of calling setPixel, if you know the image type.
[/quote]
By that, do you mean using the setRGB() method? OK, that makes sense. Bit shifts and adds to assemble the int should execute quickly.
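i.e. something like this (standard ARGB packing; a, r, g, b are each 0-255):

int argb = (a << 24) | (r << 16) | (g << 8) | b;
// and pulling the channels back out:
int alpha = (argb >>> 24) & 0xFF;
int red   = (argb >>> 16) & 0xFF;
int green = (argb >>>  8) & 0xFF;
int blue  =  argb         & 0xFF;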

I hadn’t thought much about the edge issue, beyond noticing the jaggies on my demo examples. The main place I was going to use this was with a graphic that has alpha = 0 on the edges. Computationally adding anti-aliasing would take some thought, and I am less likely to do so now that getGraphics() has been pointed out.

I did work out a mathematically correct algorithm for a linear interpolation of the color values earlier this evening. It is a little involved: first one determines a barycentric origin and basis vectors from the translated points and their color values, then plugs the (x, y) values of the new point into an equation to solve for its value. I’m tempted to try implementing it and seeing how much better it looks (and slower it performs) than the shortcut of weighting via the distances. Am also curious how the Java writers have handled all this; maybe the source code is available. Priorities, though…
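For the record, what I worked out amounts to roughly this (a sketch of the math for one channel, with invented names; not the pastebin code):

// Barycentric weights of point (px, py) inside the triangle (x0,y0)-(x1,y1)-(x2,y2),
// used to blend one color channel (c0, c1, c2). The weights sum to 1.
static float baryLerp(float px, float py,
        float x0, float y0, float c0,
        float x1, float y1, float c1,
        float x2, float y2, float c2) {
    float denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2);
    float w0 = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / denom;
    float w1 = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / denom;
    float w2 = 1f - w0 - w1;
    return w0 * c0 + w1 * c1 + w2 * c2;
}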

[quote]By that, do you mean using setRGB() method? OK, that makes sense. Bit shifts and adds to assemble the int should execute quickly.
[/quote]
Nope, like this:

int[] pixels = ((DataBufferInt)img.getRaster().getDataBuffer()).getData();

Now you can modify the pixel array directly. For byte[] type images, it would be DataBufferByte.
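For example, to write one pixel into an ARGB image (assuming the scanline stride equals the width, which holds for a freshly created BufferedImage):

int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
pixels[y * img.getWidth() + x] = argb;   // one int per pixel, row-major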

You can also use the getGraphics method to conveniently “convert” an image to the expected type.

if (image.getType() != BufferedImage.TYPE_INT_ARGB) {
    // redraw into a TYPE_INT_ARGB copy so the backing raster is a DataBufferInt
    BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB);
    newImage.getGraphics().drawImage(image, 0, 0, null);
    image = newImage;
}

Note that the last code sample renders the translucent image on a black ‘background’. Use a Composite to make the target image fully transparent prior to drawing your image on it.

A new BufferedImage of type ARGB (or any transparent type) should be initialized as fully transparent already, no?

I just described how that assumption is false.

I was questioning whether you had a concrete example of when that assumption doesn’t hold up. The underlying DataBufferInt has its arrays initialized with all values = 0 (i.e. fully transparent), and in all the uses I’ve made of it, that has been the result. However, I can’t find anything in the JavaDoc that states this should always be the case.

You are right.

I have to add, though, that I was required to add that code somewhere in 2004, because otherwise the result simply didn’t contain any translucency:

	public static BufferedImage copy(BufferedImage src, int newType) {
		if (newType == BufferedImage.TYPE_CUSTOM)
			throw new IllegalStateException();

		BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(), newType);
		// these are the lines I had to add:
		if (dst.getColorModel().hasAlpha()) {
			ImageUtil.makeTransparent(dst);
		}
		dst.getGraphics().drawImage(src, 0, 0, null);
		return dst;
	}

	public static void makeTransparent(BufferedImage img) {
		Graphics2D g = img.createGraphics();
		g.setComposite(AlphaComposite.getInstance(AlphaComposite.CLEAR, 0.0f));
		g.fillRect(0, 0, img.getWidth(), img.getHeight());
		g.dispose();
	}

Seems like this was a bug, and has been solved (probably ages ago). It may have been related to an ancient nVidia bug that made it impossible to clear the framebuffer with glClearColor(0,0,0,0) and glClear(…); that was solved in 2006, IIRC.