Texture Loading

So recently I made another minimalist tool for myself. The first 4 bytes are two chars that dictate the size of the image; everything after that is raw pixel data. (Now I am interested in options for grayscale images, which would be 3 bytes less per pixel. :point:)
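A minimal sketch of a format like the one described, assuming the header is just two chars (4 bytes) for width and height followed by raw RGBA bytes. The class and method names here are my own, not the actual tool's:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MxSketch {
    // Write the 4-byte char/char header, then the pixels verbatim.
    static byte[] encode(char width, char height, byte[] rgba) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeChar(width);   // 2 bytes
        dos.writeChar(height);  // 2 bytes
        dos.write(rgba);        // width * height * 4 bytes, uncompressed
        return out.toByteArray();
    }

    // Read just the size back out of the header.
    static char[] decodeSize(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return new char[] { in.readChar(), in.readChar() };
    }
}
```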

To test it out, I weighed it against TWL's PNGDecoder. Each texture was loaded into a byte buffer; I did 200 passes and stored each pass's TextureData in an ArrayList.
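The kind of harness described could look like this, with `loadTexture()` standing in as a placeholder for either decoder. Keeping the results in an ArrayList stops the work being optimised away:

```java
import java.util.ArrayList;
import java.util.List;

public class LoadBench {
    // Placeholder for the real decoder; a 100x100 RGBA image is 40000 bytes.
    static byte[] loadTexture() {
        return new byte[100 * 100 * 4];
    }

    public static void main(String[] args) {
        List<byte[]> results = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < 200; i++) {
            results.add(loadTexture());   // keep every pass, as in the test
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(results.size() + " loads in " + elapsedMs + " ms");
    }
}
```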

Ignore the 0,1 and 1,0 entries; they look like an issue, but they're not.

My Code

Some things I noticed…

  • The first call to load a texture is always the longest, with a big overhead for some reason, probably caching or something? Like 12-24 ms for a 100x100x4-pixel image. Loading my .mx file and adding it to an ArrayList of data (which is the majority of its cost) took 89-93 ms, while the PNG decoder ended up at 310 ms.
  • Loading a PNG using TWL's PNGDecoder is consistently about 60% slower than loading straight pixels. On the other hand, the PNG is around 1.9 KB (though the OS reports 4.0 KB on disk), where storing the image straight up takes 40 KB plus two chars.

So I guess my own takeaway is to keep the MX file format and use a PNG unpacker system for distribution.

I switched over to MX and I got a cool 500-700ms performance boost!

I don't even know about texture compression via the driver/OpenGL yet; I'd imagine that would be faster?

So your image format is essentially just raw data without any compression? I assume you understand that a 2-byte header can only give you 256x256 max textures.

This ain’t very minimal yet:


final byte[] body = new byte[width * height * 4]; // staging array on the heap
fis.read(body);                                   // return value ignored
final ByteBuffer bb = BufferUtils.createByteBuffer(width * height * 4);
bb.put(body);                                     // second copy, into the buffer
bb.flip();

One less costly allocation + mem copy:


final ByteBuffer bb = BufferUtils.createByteBuffer(width * height * 4);
fis.read(bb.array(), 0, width * height * 4);
bb.limit(width * height * 4);

More random remarks:

  • Bit shifts: it’s good you know how to use them, but in this case it’s a pretty useless micro-optimisation.
  • No error checks on fis.read(). It can return fewer bytes than you asked for without throwing; you should always check the results of file operations.
  • If “added it to an arraylist of data” takes most of your load time, you are still doing something wrong.
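On the read-checking point: `InputStream.read()` is allowed to return fewer bytes than requested, so one way to honour it is a loop like this (the JDK's `DataInputStream.readFully()` does the same thing):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SafeRead {
    // Keep reading until the destination array is full, or fail loudly.
    static void readFully(InputStream in, byte[] dst) throws IOException {
        int off = 0;
        while (off < dst.length) {
            int n = in.read(dst, off, dst.length - off);
            if (n < 0) {
                throw new EOFException("expected " + dst.length + " bytes, got " + off);
            }
            off += n;
        }
    }
}
```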

trollwarrior1

My two byte header gives me short x short.
Yes. Raw data.

I was also wondering a few things about data sizes. I know bytes have the range that they do, but what if I split a byte into 2 subsets of 4 bits each? I would have a range of 128. Thinking even smaller, I would just use normalization too. Although the color isn't perfect, it's only offset by a shade you can't tell the difference from; I don't think it'd look worse or better. I'm not taking out the other 127 band of colors either; because of the 0-1 normalization aspect, you'd just have fewer possible colors.

Looking at the output of System.out.println((float) i / 128), it appears the value steps by about 0.008 (1/128) for each i. So maybe it would be nice.

[quote]0.9296875
0.9375
0.9453125
0.953125
0.9609375
0.96875
0.9765625
0.984375
0.9921875
[/quote]
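For reference, the quoted numbers look like the tail end of this loop; each step is exactly 1/128 = 0.0078125:

```java
public class Shades {
    public static void main(String[] args) {
        // Prints the nine values quoted above, 0.9296875 through 0.9921875.
        for (int i = 119; i < 128; i++) {
            System.out.println((float) i / 128);
        }
    }
}
```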
CoDi^R

Nice catch on that. I didn't know bb.array() was writable; I thought it returned a copy.
Although you don't need to call bb.limit() there: the buffer already carries that information, so bb.flip() is all that's needed. Anything that sets the position (and the limit too, I guess) would work.

Edit: Actually, you can't do that, because the buffer from BufferUtils.createByteBuffer() isn't array-backed. ByteBuffer.allocateDirect() doesn't give you a backing array either. ByteBuffer.allocate() does, but OpenGL needs a direct buffer. So rip.
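One route that does work with direct buffers, since they have no backing array, is reading through the stream's FileChannel straight into the buffer. `ByteBuffer.allocateDirect()` is used here as the plain-JDK stand-in for LWJGL's `BufferUtils.createByteBuffer()`, which is also direct:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class DirectRead {
    // Fill a direct buffer from the file with no intermediate byte[] copy.
    static ByteBuffer readPixels(FileInputStream fis, int byteCount) throws IOException {
        ByteBuffer bb = ByteBuffer.allocateDirect(byteCount);
        FileChannel ch = fis.getChannel();
        while (bb.hasRemaining()) {                  // channel reads may be partial
            if (ch.read(bb) < 0) {
                throw new IOException("unexpected end of file");
            }
        }
        bb.flip();                                   // ready for OpenGL to consume
        return bb;
    }
}
```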

What do you mean, error checks? There are no error checks needed; it throws an exception, if that's what you are referring to. I've never had something load wrong.

I'm sorry, I still don't get it. How does a 2-byte header equal two shorts? One short is 2 bytes.

If you split a color channel in half (half a byte) you will not get 128 colors. You will get 16 colors.
1 byte = 8 bits
2^8 = 256
2^4 = 16 (half byte)
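In code, packing two 4-bit channels into one byte makes the limit concrete: each nibble can only hold 2^4 = 16 distinct levels.

```java
public class Nibbles {
    // Pack two 4-bit values (each 0..15) into a single byte.
    static byte pack(int hi, int lo) {
        return (byte) ((hi << 4) | (lo & 0x0F));
    }

    // Unpack them again; masking with 0x0F keeps the sign bit out.
    static int high(byte b) { return (b >> 4) & 0x0F; }
    static int low(byte b)  { return b & 0x0F; }
}
```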

EDIT-
You can compensate for a reduced color channel by dithering images.
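A small sketch of what that compensation could look like: a 2x2 ordered (Bayer) dither applied while quantising an 8-bit channel down to 4 bits. The matrix and scaling here are the textbook version, not anything from the posts above:

```java
public class Dither {
    // Classic 2x2 Bayer matrix.
    static final int[][] BAYER2 = { { 0, 2 }, { 3, 1 } };

    // Quantise one 0..255 channel value to 16 levels, with a
    // position-dependent threshold (0, 4, 8 or 12) added first so
    // banding turns into a checker pattern instead of hard steps.
    static int quantise(int value, int x, int y) {
        int threshold = BAYER2[y & 1][x & 1] * 16 / 4;
        int v = Math.min(255, value + threshold);
        return (v >> 4) * 17;   // 16 levels, spread back over 0..255
    }
}
```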

My header is 5 bytes long, 2 bytes per size element. 2 bytes is one short, as you said; I have two size elements, so 4 bytes for the size: 2 for width, 2 for height. Plus I have one byte for transparency.
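Reading that 5-byte header could look like the sketch below. The field order (width, height, transparency flag) is my assumption from the description, not the actual format:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MxtHeader {
    final int width, height;
    final boolean transparent;

    MxtHeader(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        width = dis.readShort() & 0xFFFF;    // read as unsigned: up to 65535
        height = dis.readShort() & 0xFFFF;
        transparent = dis.readByte() != 0;   // the fifth byte
    }
}
```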

Not when you put the number over 128. You will not get 16 colors from 32 colors.
Normalized float units exist for a reason :slight_smile: I never said 128 colors either; it's just a color scale.

You said you were only going to ‘split’ bytes in half, representing each component with half a byte instead of the usual 1 byte.

Half a byte can only encode 16 different states (2^4 = 16), whereas you're saying you get a range of 128. Of course, you can still get a range of 128, but you still only get 16 shades, which means each shade is 8 units separated from the next (which for normal human sight is pretty noticeable).

[quote] I would have a range of 128. Thinking even smaller, I would just use normalization too. Although the color isn’t perfect, its only offset by a shade you can’t tell the difference from. I don’t think it’d look worse or better. I am not taking out the other 127 band of colors either, because of normalization 0-1 aspect, you’d just have less possible colors. …
[/quote]
I stated that. You won't get 16 colors from 32. Yes, you can't store 32 when only 16 are supported; I think 20-bit color is the name, I don't know. That doesn't mean more than 32 colors aren't supported. But forgive me for not stating that I wanted a half-byte/full-byte boolean option; I think that info would have helped the understanding a lot. I also want to state that there are 4 color channels in a 32-bit color: you would halve the 3 color channels and halve the alpha channel. While the alpha channel is usually a show-or-not sort of deal, I am positive 128 different shades is going to be a big deal. 256×256×256 is 16.777M colors (24-bit); 128×128×128 is 2.097M colors. While there are over 14M colors not accounted for, I don't think any core color would be hurt. I think for basic things such as a background, this color scheme could be good. But this is my two cents.

I renamed the file from .mx to .mxt, for "mx texture". I also want to make a .mxh file: it will hold vertex and normal information for heightmaps, so you can load and go from an editor I will make. I found that procedural normal calculations are pretty horrendous when the heightmap is static; they can still be recalculated on the fly. I found there was a major lack of heightmap support in freeware development. I am a blind programmer too, so forgive me :slight_smile: (I'd pay a real blind programmer capable of making video games)

Funny story. While learning how to do multitexturing via mixing colors from thinmatrix on youtube, I went off and I did a lot of other stuff.

  • Got tired of lying in bed with a laptop programming
  • Found my keyboard and mouse and put it in my slide desk thing
  • Attached my laptop to a monitor and started off
  • Skipped implementing the actual multitexturing from thinmatrix
  • Fixed up the jar which features .MXT creation
  • Came up with a name for myself to be represented as
  • Created a logo for my jar
  • Added a context menu for creating .mxt files, which pops up on right-clicking PNGs
  • Installed Nullsoft and made an install script for my mxer program (after learning quickly from a sample)
  • Planned for release of my mxer stuff
  • Planned for building my simple, effective heightmap editor
  • Got licenses around for GNU freeware when the time comes
  • Didn't plan for releasing it yet, will be bundled with mxer of course
  • Came back to JGO (<3) to check replies and look to help/comment
  • Ended up here telling the story

And this is what I do every time I sit down to do something lol

Yes that is correct. 256^3 = 2^24 (24-bit) = 16.777mil. However, 128^3 = 2^21 (21-bit) = 2.097mil.

Is saving those three bits really so important? (And you can’t really do 21-bit integers, so you’re going to end up using 24 bits anyway) The VRAM of modern graphics cards is probably more than you could ever use as an indie developer (and if you’re AAA, then you won’t be going anywhere near this ‘technique’).

Besides, all of this "compression" is pointless unless OpenGL has a format that supports it; otherwise your low-quality images are going to end up taking the same amount of memory as normal-quality ones. And no, OpenGL does not support it. If you really want to save memory, just use one of OpenGL's compressed formats (and even then I don't think you're going to see any benefits).

And in the end all you can show for it is a half second speedup on something that only happens once, but at the cost of losing half the quality.

Save yourself the trouble and just use PNGs.

You know, you've never agreed with or supported anything I've said. Always biased.

Leave OpenGL out of it; the texture format and OpenGL are unrelated at the moment. 21-bit is possible: you would have 2 bytes per pixel instead of 4. It would look much different treating it as RGB only; you would have empty bytes/half-bytes at the end. It isn't a bad idea to go this route. There are some benefits you could get from using half bytes, and the way I am looking at it, it's an equivalent exchange: I'd gladly sacrifice colors we don't use for some basic GUI colors. To be honest, downing the idea is like calling the CMYK color model redundant.

  • Solid colors, with or without translucency, are stored better
  • Half the storage might mean faster load times (basically, I don't know if extracting each half-byte sector would be worth it)
  • Faster network transfer

Also, why not ask my intentions?

To learn.
Thank you.

Anyway, my other point, that creating a heightmap format that is really easy to read is a great idea, still stands.

Making a post just to say that I agreed wouldn’t contribute anything to the discussion, which could explain this perceived bias (or maybe you just haven’t said anything I agreed with). Don’t take it personally.

In one post you say you’re using half-bytes, in another you say you’re getting half the number of states. I’ve tried twice (and trollwarrior has once) to show you that (2^8)^(1/2) != (2^8) * (1/2), but you keep taking it as a personal attack and ignoring the point I’m actually trying to get across.

You did post this in OpenGL development, so I hope you can see why I made a point of saying OpenGL doesn't support 21-bit colour (it does support 16-bit colour).
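For example, one of OpenGL's 16-bit pixel types is RGBA4444 (`GL_UNSIGNED_SHORT_4_4_4_4`), which is essentially the half-byte-per-channel idea in a format the driver actually understands. Packing an 8-bit-per-channel colour down to it could look like:

```java
public class Rgba4444 {
    // Pack four 0..255 channels into one 16-bit RGBA4444 value by
    // keeping only the top 4 bits of each channel.
    static short pack(int r, int g, int b, int a) {
        return (short) (((r >> 4) << 12)
                      | ((g >> 4) << 8)
                      | ((b >> 4) << 4)
                      |  (a >> 4));
    }
}
```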

Redundant? No. Impractical for video games? Probably.

I’m trying to get you to learn. If you would stop assuming I have some personal vendetta against you then maybe I could help you.

It’s worth noting that image size on disk (when using PNG) is in most cases far smaller than uncompressed in memory. Trying to make savings in disk usage is a worthless endeavour if it takes up the same space in memory.
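The gap is easy to demonstrate with the JDK's own ImageIO: a 100x100 RGBA image is always 40000 bytes once decoded, while the PNG on disk is a fraction of that (especially for simple content like this blank image):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class SizeCompare {
    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "png", out);          // encode to PNG in memory
        int onDisk = out.size();                 // compressed size
        int inMemory = 100 * 100 * 4;            // 40000 bytes once decoded
        System.out.println("png: " + onDisk + " bytes, decoded: " + inMemory + " bytes");
    }
}
```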

Using 16-bit colour is only a worthwhile optimisation when you don’t care about quality and are limited by bandwidth. And you can’t know if you’re limited by bandwidth until you actually make a game.

Making a post just to say that you disagreed wouldn’t contribute anything to the discussion, which could explain this perceived bias. Don’t take it personally.

Half bytes means half the storage space, not half the colors. I don't know if I typed that wrong or not.

._.