Questions about TGA Files

That loader sucks, if you ask me. I used it in my first attempt and was very frustrated that a lot of stuff didn't work. The easiest way to do your own DDS loader is to dig into the MSDN and NVidia documents. I preferred the latter and found some good information there.

A while ago I posted a small DDS loader on the LWJGL forums; since then I still haven't finished it due to lack of time.
It's missing mipmaps and all the other advanced stuff. Only basic loading is supported.
Anyway, if you want a closer look, take a look :wink:

http://www.evil-devil.com/dlfiles/dds_loader.rar

Awesome :slight_smile:

Kev

Matthiasman has implemented DDS loading support for Xith3D. I don't use it myself, but I believe it's working well, though I don't know how complete it is…

Last question - I need to be able to load TGA's into actual Java Images for use in, say… the editor.

What’s the easiest way to do this given that Kev’s code returns a ByteBuffer? I tried transforming it into an IntBuffer, then an int[] to create a BufferedImage, but it seems that I can’t do that because of the way the ByteBuffer was created.
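
(As an aside: if the loader hands back a direct ByteBuffer, it isn't backed by an accessible array, so asIntBuffer().array() and friends will fail. A minimal sketch of copying the bytes out instead, with "buffer" standing in for whatever the loader returned:)

byte[] rawData = new byte[buffer.remaining()];
buffer.get(rawData); // works even when buffer.array() is unsupported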

I’d change the code to load it into a raster or something, and build the buffered image from there.

Kev

I need to be able to load TGA's into actual Java Images for use in, say… the editor.

TGA image reader for ImageIO Available
http://www.java-gaming.org/forums/index.php?topic=2136.0
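
(For what it's worth, using that plugin is just the normal ImageIO path once the plugin jar is on the classpath and has registered itself; a minimal sketch with a made-up file name:)

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// The TGA reader is picked up automatically through the ImageIO plugin registry
BufferedImage img = ImageIO.read(new File("some.tga"));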

I actually tried that one last night, and it flopped on the images with transparency. That, and… I didn't think it was all that fast at loading either. Does that have something to do with it being a plugin for ImageIO?

Running into some issues creating the raster. I inserted the following lines in a copy of your function at the bottom.


BufferedImage img = new BufferedImage
(
	TGALoader.getLastWidth(),
	TGALoader.getLastHeight(),
	TGALoader.getLastDepth()
);

DataBufferByte bufferByte = new DataBufferByte(rawData, rawData.length);

img.setData
(
	Raster.createRaster
	(
		new BandedSampleModel
		(
			DataBuffer.TYPE_BYTE,
			TGALoader.getLastWidth(),
			TGALoader.getLastHeight(),
			TGALoader.getLastDepth()
		),
		bufferByte,
		new Point(0, 0)
	)
);

This is the exception I get:
Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 1
	at java.awt.image.DataBufferByte.getElem(DataBufferByte.java:183)
	at java.awt.image.BandedSampleModel.getPixels(BandedSampleModel.java:410)
	at java.awt.image.Raster.getPixels(Raster.java:1569)
	at java.awt.image.BufferedImage.setData(BufferedImage.java:1488)
	at com.stencyl.data.util.TGALoader.loadIntoImage(Unknown Source)

Some quick print statements say that the image is loading properly though, so it’s something in my code that’s bad.

URL: Media/Backgrounds/Overworld 1.tga
Width: 1024
Height: 864
Depth: 8
Size: 3145728
DataBufferSize: 3145728

Not sure which loader you're using (I have several for different things), but some of them generate a buffer bigger than the image, padded up to a power-of-two (POT) size for OpenGL textures.

Kev

I used this copy of your code.

http://www.cokeandcode.com/code/src/render/org/newdawn/render/texture/TGALoader.java

Edit: When I applied the same lines of code to your Slick TGALoader, I got this.

java.io.EOFException
	at java.io.DataInputStream.readByte(DataInputStream.java:243)
	at com.stencyl.data.util.TGALoader.loadImage2(Unknown Source)

Both of the public versions only support non-RLE, 24-bit/32-bit images. Things can go astray if you pump anything else in. They also both create only POT-sized buffers, so if you're passing in something like 1024x826, you'll get a buffer for 1024x1024.

Kev
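
(For reference, the power-of-two rounding Kev describes typically looks something like this; a minimal sketch, the actual get2Fold in his loaders may differ:)

// Round a dimension up to the next power of two
private static int get2Fold(int fold) {
	int ret = 2;
	while (ret < fold) {
		ret *= 2;
	}
	return ret;
}
// e.g. get2Fold(826) == 1024, so a 1024x826 image ends up in a 1024x1024 buffer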

I’ll hold my breath on this, but I think my problem was that these weren’t 24 or 32-bit TGA’s to begin with. :confused:

I used ImageMagick, and apparently, it took my PNG’s (which I think were 8-bit) and converted them over to 8-bit TGA’s. That’s my guess because your function’s valiantly trying to load what it claims are 8-bit TGA’s. :stuck_out_tongue:

D:\>convert -colorspace RGB -depth 32 -background transparent -verbose eli.gif eli3.png
eli.gif GIF 50x50 50x50+0+0 PseudoClass 32c 8-bit 822b
eli.gif=>eli3.png GIF 50x50 50x50+0+0 PseudoClass 32c 16-bit 1.8457kb 0.030u 0:01

D:\>convert -verbose eli3.png eli3.tga
eli3.png PNG 50x50 50x50+0+0 DirectClass 8-bit 1.8457kb 0.010u 0:01
eli3.png=>eli3.tga PNG 50x50 50x50+0+0 DirectClass 8-bit 9.7832kb 0.020u 0:01

That worked, for example (it gives you a 32-bit TGA). But it's quite crap like this. (Well, it's not that bad… after compression it's only about 25% bigger than it needs to be.)

TGA supports different 8-bit (palettized) modes: the palette entries can be 15-bit (5-5-5), 16-bit (5-5-5-1), 24-bit (8-8-8) or 32-bit (8-8-8-8), afaict. However, I don't have a clue how to create anything other than the ones with a 24-bit palette.

Well, I guess a semi-logical step would be to create a Java program which can be used to convert 8-bit GIF/TGA files to TGA with a 32-bit palette, and to add some stuff to the loader to handle 24- and 32-bit palettes (the other modes aren't of any use imo).

Or… create some custom format, which is easier to read/write.

Or… use DDS and add the code for handling palettized stuff (in which case it would be cool if you shared it).
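
(If anyone goes the converter route, the palette-expansion half is straightforward with ImageIO; a minimal sketch, assuming the source is a GIF/PNG that ImageIO can read, and leaving the actual TGA writing to whatever writer you have:)

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PaletteExpander {
	// Expand a palettized (IndexColorModel) image into direct 32-bit RGBA
	// pixels, as a first step towards saving a 32-bit version.
	public static BufferedImage toDirectRGBA(File input) throws IOException {
		BufferedImage indexed = ImageIO.read(input);
		BufferedImage direct = new BufferedImage(
				indexed.getWidth(), indexed.getHeight(),
				BufferedImage.TYPE_INT_ARGB);
		Graphics2D g = direct.createGraphics();
		g.drawImage(indexed, 0, 0, null); // palette lookups happen here
		g.dispose();
		return direct;
	}
}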

Well, I got 24-bit TGA’s loaded into Images, only after deciding to read all the documents (which are a bit poorly worded). It was just a matter of doing this:

  1. Creating a DataBuffer and partitioning it correctly
  2. Choosing the right SampleModel and understanding what parameters to plug in
  3. Creating a WritableRaster and passing in the DataBuffer and SampleModel
  4. Making a ColorModel
  5. Constructing a BufferedImage from the raster and color model.

And this is the code that worked. I have yet to be really convinced that this is actually faster (in plain loading, PNG still wins by quite a bit, which is not too great for the editor), but I'll see in-game. I suppose that for some of you this is second nature, but I had absolutely no knowledge going into this, and there weren't any code samples online (that I could find). So not too bad for a few hours of work. :wink:


// 1. Wrap the raw pixel bytes from the loader in a DataBuffer
DataBufferByte dataBuffer = new DataBufferByte(rawData, rawData.length);

// 2. Describe the layout: interleaved RGB, 3 bytes per pixel
int[] offsets = {0, 1, 2};

PixelInterleavedSampleModel sampleModel = new PixelInterleavedSampleModel
(
	DataBuffer.TYPE_BYTE,
	texWidth,
	texHeight,
	3,            // pixel stride
	3 * texWidth, // scanline stride
	offsets
);

// 3. Build a writable raster over the data buffer
WritableRaster raster = Raster.createWritableRaster
(
	sampleModel,
	dataBuffer,
	new Point(0, 0)
);

// 4. A matching color model: 8 bits per component, no alpha
ColorModel cm = new ComponentColorModel
(
	ColorSpace.getInstance(ColorSpace.CS_sRGB),
	new int[] {8, 8, 8},
	false,
	false,
	ComponentColorModel.OPAQUE,
	DataBuffer.TYPE_BYTE
);

// 5. Combine raster and color model into a BufferedImage
BufferedImage img = new BufferedImage(cm, raster, false, null);

First, just for my own knowledge: what does the flipping mean? When I passed in "false", it created an upside-down image AND one that had inverted colors, like the following. Does it just mean that it turned the picture upside down and reversed the color channels (RGB -> BGR)?

http://jplatformer.thegaminguniverse.com/Funny.png

TGA images store their color values in BGRA order and are normally stored flipped. The easiest thing would be to import them once and save an RGBA, re-flipped version as your own TGA-like format. That would make it impossible to read them with other software, but you wouldn't have to go through the B<>R swapping and image flipping every time.
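
(A minimal sketch of that pre-conversion idea, dumping already-converted pixels as a trivial custom format of width, height, then raw top-down RGBA rows; the names are made up and the result is not a real TGA:)

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class RawRGBAWriter {
	// convertedRGBA is assumed to already hold width * height * 4 bytes in
	// RGBA order, top row first, so the loader never swaps or flips again.
	public static void write(String path, int width, int height,
			byte[] convertedRGBA) throws IOException {
		DataOutputStream out = new DataOutputStream(new FileOutputStream(path));
		try {
			out.writeShort(width);
			out.writeShort(height);
			out.write(convertedRGBA);
		} finally {
			out.close();
		}
	}
}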

Hmm. While I haven't rewritten the routine for loading these into textures, just speaking for the editor part, it's loading significantly slower than the PNG version, presumably because the files are now megabytes in size. (Well, to be precise, they're stored in a JAR file and then streamed out, which decompresses them.)

While I didn't do any formal timing experiments, just plain reading them in was notably slower. When I read in all the PNG's, each one loaded instantly, whereas each TGA had a tiny pause (0.3 seconds perhaps?).

I'll post up the full source of what I'm doing because I think there might be room for improvement. It should be noted that I modified Kev's function so it doesn't pad to POT sizes.


	/**
	 * Load a TGA image from the specified stream
	 * 
	 * @param fis The stream from which we'll load the TGA
	 * @param flipped True if we're loading in flipped mode (used for cursors)
	 * @return The Image
	 * @throws IOException Indicates a failure to read the TGA
	 */
	public static BufferedImage loadIntoImage(InputStream fis, boolean flipped) throws IOException {
		byte red = 0;
		byte green = 0;
		byte blue = 0;
		byte alpha = 0;
		
		BufferedInputStream bis = new BufferedInputStream(fis, 100000);
		DataInputStream dis = new DataInputStream(bis);
		
		// Read in the Header
		short idLength = (short) dis.read();
		short colorMapType = (short) dis.read();
		short imageType = (short) dis.read();
		short cMapStart = flipEndian(dis.readShort());
		short cMapLength = flipEndian(dis.readShort());
		short cMapDepth = (short) dis.read();
		short xOffset = flipEndian(dis.readShort());
		short yOffset = flipEndian(dis.readShort());
		
		width = flipEndian(dis.readShort());
		height = flipEndian(dis.readShort());
		pixelDepth = (short) dis.read();
		
		//We don't want to load into OpenGL
		//texWidth = get2Fold(width);
		//texHeight = get2Fold(height);
		
		texWidth = width;
		texHeight = height;
		
		short imageDescriptor = (short) dis.read();
		// Skip image ID
		if (idLength > 0) {
			bis.skip(idLength);
		}
		
		byte[] rawData = null;
		if (pixelDepth == 32)
			rawData = new byte[texWidth * texHeight * 4];
		else
			rawData = new byte[texWidth * texHeight * 3];
		
		if (pixelDepth == 24) {
			for (int i = height-1; i >= 0; i--) {
				for (int j = 0; j < width; j++) {
					blue = dis.readByte();
					green = dis.readByte();
					red = dis.readByte();
					
					int ofs = ((j + (i * texWidth)) * 3);
					rawData[ofs] = (byte) red;
					rawData[ofs + 1] = (byte) green;
					rawData[ofs + 2] = (byte) blue;
				}
			}
		} else if (pixelDepth == 32) {
			if (flipped) {
				for (int i = height-1; i >= 0; i--) {
					for (int j = 0; j < width; j++) {
						blue = dis.readByte();
						green = dis.readByte();
						red = dis.readByte();
						alpha = dis.readByte();
						
						int ofs = ((j + (i * texWidth)) * 4);
						
						rawData[ofs] = (byte) red;
						rawData[ofs + 1] = (byte) green;
						rawData[ofs + 2] = (byte) blue;
						rawData[ofs + 3] = (byte) alpha;
						
						if (alpha == 0) {
							rawData[ofs + 2] = (byte) 0;
							rawData[ofs + 1] = (byte) 0;
							rawData[ofs] = (byte) 0;
						}
					}
				}
			} else {
				for (int i = 0; i < height; i++) {
					for (int j = 0; j < width; j++) {
						blue = dis.readByte();
						green = dis.readByte();
						red = dis.readByte();
						alpha = dis.readByte();
						
						int ofs = ((j + (i * texWidth)) * 4);
						
						rawData[ofs + 2] = (byte) red;
						rawData[ofs + 1] = (byte) green;
						rawData[ofs] = (byte) blue;
						rawData[ofs + 3] = (byte) alpha;
						
						if (alpha == 0) {
							rawData[ofs + 2] = (byte) 0;
							rawData[ofs + 1] = (byte) 0;
							rawData[ofs] = (byte) 0;
						}
					}
				}
			}
		}
		fis.close();
		
		//End Kev's Code
		
		DataBufferByte dataBuffer = new DataBufferByte
		(
			rawData, 
			rawData.length
		);
		
		int[] offsets = null;
		
		if(pixelDepth == 24)
		{
			int[] offsets24 = {0,1,2};
			offsets = offsets24;
		}
		
		else
		{
			int[] offsets32 = {0,1,2,3};
			offsets = offsets32;
		}
		
		PixelInterleavedSampleModel sampleModel = null;
		
		if(pixelDepth == 24)
		{
			sampleModel = new PixelInterleavedSampleModel
			(
				DataBuffer.TYPE_BYTE,
				texWidth, 
				texHeight,
				3,
				3 * texWidth,
				offsets
			);
		}
		
		else
		{
			sampleModel = new PixelInterleavedSampleModel
			(
				DataBuffer.TYPE_BYTE,
				texWidth, 
				texHeight,
				4,
				4 * texWidth,
				offsets
			);
		}
		
		WritableRaster raster = Raster.createWritableRaster
		(
			sampleModel,
			dataBuffer,
			new Point(0,0)
		);
		
		ColorModel cm = null;
		
		if(pixelDepth == 24)
		{
			cm = new ComponentColorModel
			(
				ColorSpace.getInstance
				(ColorSpace.CS_sRGB),
				new int[] {8,8,8},
				false,
				false,
				ComponentColorModel.OPAQUE,
				DataBuffer.TYPE_BYTE
			);
		}
		
		else
		{
			cm = new ComponentColorModel
			(
				ColorSpace.getInstance(ColorSpace.CS_sRGB),
				new int[] {8,8,8,8},
				true,
				false,
				ComponentColorModel.TRANSLUCENT,
				DataBuffer.TYPE_BYTE
			);
		}
		
		BufferedImage img = new BufferedImage(cm, raster, false, null);

		return img;
	}

Edit: I also tried GZipping, and I saw no real improvement.

If you want quick loading, you shouldn’t be compressing it in the first place.

Compressing a TGA with ZIP or GZ is basically the same as using PNG, because PNG uses similar algorithms.

You should do a comparison loading plain PNG / TGA files, not in a Zip or JAR.

That’s what I did already and it’s exactly the same loading time. I just threw in the compression comment to say that that’s something additional I tried as a second experiment.

dis.readByte() might have a significant overhead, even with a BufferedInputStream backing it.

Try to read the whole file into a single byte[] and use that in your loops.

My gut feeling tells me it will make it 4-8x faster… tell me how it works out.
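
(Roughly what that suggestion looks like; a sketch with hypothetical helper names, not part of Kev's loader:)

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
	// Slurp the whole stream into one byte[] so the pixel loops can index
	// into it instead of calling dis.readByte() three or four times per pixel.
	public static byte[] readFully(InputStream in) throws IOException {
		ByteArrayOutputStream out = new ByteArrayOutputStream(64 * 1024);
		byte[] chunk = new byte[16 * 1024];
		int read;
		while ((read = in.read(chunk)) != -1) {
			out.write(chunk, 0, read);
		}
		return out.toByteArray();
	}
}

// Then in the pixel loop, something like
//     blue  = data[pos++];
//     green = data[pos++];
//     red   = data[pos++];
// replaces the three dis.readByte() calls.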

I’ll see if that helps any. It sounds plausible that it will be faster, but I’m not so sure about 4-8x faster. :slight_smile: