Making a pixely game, what's the fastest way to do it?

A while back I made some pixely games like this, using glRectf. I’ll post the exact code below; it’s simple. I remember switching it to vertex arrays later and actually seeing a large decrease in performance. I wasn’t sure if that was because I was using FloatBuffers wrong (they specifically seemed to be the slowest part) or if it’s simply faster to make a bunch of rect calls.

Since I’ve done almost all of my more complex OpenGL work on the iPhone, where I had access to C arrays, I’m a noob when it comes to using FloatBuffers the right way.
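For what it’s worth, the classic FloatBuffer pitfall is allocating a non-direct (or wrongly ordered) buffer, or allocating a fresh one every frame. A minimal sketch of building a reusable direct, native-order FloatBuffer for a single quad (class and method names are my own, not from this thread):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class QuadBuffer {
    // Build one direct, native-order FloatBuffer and reuse it every frame,
    // rather than allocating a new buffer per draw call (a common slowdown).
    public static FloatBuffer makeQuad(float x1, float y1, float x2, float y2) {
        FloatBuffer buf = ByteBuffer.allocateDirect(8 * Float.BYTES)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        // Four corners, counter-clockwise, as x/y pairs.
        buf.put(new float[] { x1, y1, x2, y1, x2, y2, x1, y2 });
        buf.flip(); // reset position so GL reads from the start
        // Then (needs a GL context):
        // GL11.glVertexPointer(2, 0, buf);
        // GL11.glDrawArrays(GL11.GL_QUADS, 0, 4);
        return buf;
    }
}
```

Forgetting the `flip()` (or the native byte order) is usually what makes buffer-based paths look mysteriously slow or broken.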

So, what are recommendations for doing this?


GL11.glDisable(GL11.GL_TEXTURE_2D);

// Draw each pixel as its own immediate-mode rect.
for (int i = 0; i < pixels.length; i++)
{
	for (int j = 0; j < pixels[0].length; j++)
	{
		// Negative values mark transparent pixels.
		if (pixels[i][j] >= 0)
		{
			Color c = Globals.NES_COLOR_PALETTE[pixels[i][j]];
			GL11.glColor4ub(c.getRedByte(), c.getGreenByte(), c.getBlueByte(), c.getAlphaByte());
			GL11.glRectf((x+i)*scale.x, (y+j)*scale.y, (x+i+1)*scale.x, (y+j+1)*scale.y);
		}
	}
}
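For comparison, the usual fix for per-rect overhead is batching: accumulate every visible pixel’s quad into one FloatBuffer and issue a single draw call. A rough sketch under the same assumptions as the loop above (negative palette index means transparent; names and parameters are stand-ins, not the poster’s actual code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class PixelBatch {
    // Accumulate one quad (4 vertices, x/y pairs) per visible pixel so the
    // whole grid can be drawn with a single glDrawArrays call instead of
    // one glRectf per pixel.
    public static FloatBuffer batchQuads(int[][] pixels, float sx, float sy) {
        int visible = 0;
        for (int[] col : pixels)
            for (int p : col)
                if (p >= 0) visible++;

        FloatBuffer buf = ByteBuffer.allocateDirect(visible * 4 * 2 * Float.BYTES)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        for (int i = 0; i < pixels.length; i++) {
            for (int j = 0; j < pixels[i].length; j++) {
                if (pixels[i][j] < 0) continue; // transparent
                float x1 = i * sx, y1 = j * sy;
                float x2 = (i + 1) * sx, y2 = (j + 1) * sy;
                buf.put(x1).put(y1).put(x2).put(y1)
                   .put(x2).put(y2).put(x1).put(y2);
            }
        }
        buf.flip();
        // Then (per-vertex colours would go into a second, parallel buffer):
        // GL11.glVertexPointer(2, 0, buf);
        // GL11.glDrawArrays(GL11.GL_QUADS, 0, buf.remaining() / 2);
        return buf;
    }
}
```

One draw call for the whole grid is generally where vertex arrays start to beat immediate mode, rather than one buffered draw per rect.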

FYI, looking at that, I’m not sure why I didn’t just do a glScale call beforehand - maybe I had a reason, maybe I didn’t. :slight_smile:

And in an ideal world, I could draw every single pixel individually rather than using FBOs or anything to store unchanging values. Sort of like how some voxel games (may or may not) do it.

Why not use PNG sprites and save yourself countless hours of headache?

If you are really set on software rendering pixel-by-pixel, I’d imagine you might get decent performance with a PBO. Upload pixel data to a texture which is then rendered to the screen (scaled quad with nearest-neighbour filtering). For optimization you would probably want to use multiple textures and/or texture atlases – i.e. the pixel sword (which is repeated frequently) would be uploaded to a texture atlas, and then you would render each sword with its own textured quad (using the texcoords of the sword). The idea would be to minimize texture uploads as much as possible, only doing it for truly dynamic animations.
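The upload-to-a-texture approach above can be sketched like this: pack your colours into a tightly packed RGB byte buffer, upload it, and draw it on a scaled quad with nearest-neighbour filtering. A minimal sketch (the class and method names are mine; the GL calls are shown as comments since they need a bound texture and a live context):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TexturePixels {
    // Pack 0xRRGGBB ints into a tightly packed RGB byte buffer suitable for
    // a glTexSubImage2D upload.
    public static ByteBuffer packRGB(int[] pixels) {
        ByteBuffer buf = ByteBuffer.allocateDirect(pixels.length * 3)
                                   .order(ByteOrder.nativeOrder());
        for (int p : pixels) {
            buf.put((byte) (p >> 16)); // red
            buf.put((byte) (p >> 8));  // green
            buf.put((byte) p);         // blue
        }
        buf.flip();
        // Typical upload + filtering, with the texture bound:
        // GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
        // GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
        // GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, w, h,
        //                      GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, buf);
        return buf;
    }
}
```

GL_NEAREST on both filters is what keeps the hard pixel edges when the quad is scaled up.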

Also, I wouldn’t rely on glRect or immediate mode, especially if you plan on rendering pixel-by-pixel… :o

Just set GL_NEAREST for filtering and you get a perfectly horribly pixelated texture?

Render to a smaller texture first, then render the texture to the screen using nearest-neighbor.

:smiley:

So obviously the catch is that any object could explode into individually animated and physicsed pixels at any time, as you can see in the video I linked. Also, the level can be chopped up and ruined like in Voxatron. That’s voxels, which as I understand it are basically just 3D pixels. But I don’t understand voxels very well. :slight_smile:

In the video I showed you (Lego City Ransom), the characters are all textures, and their pixel data is used to generate a bunch of individually placed pixels when they die. The weapons, however, have all of their pixels drawn individually - those could certainly use an FBO to be faster.

In this example thing I made: Pixel Splosion geo modding, the planet is an FBO and the pixel pieces (press P with your mouse over part of the planet) are drawn with glRect.

So anyway, I see that a lot of things like that may be necessary, but I’m just wondering how people made something like Voxatron, which is so fluid and has so much moving at once. I’ll look at PBOs; I’ve never used them.

Are you just wanting to do an old school console hardware “mosaic” mode rendering?

Not exactly. I’d like the world to be pixely but able to exist more fluidly. As in, the art might be 300x200 resolution but an individual broken pixel block can fly around at whatever resolution your monitor supports. In that case it seems a bad idea to hold onto a giant 2048x2048 FBO (or whatever size I would need) and modify the pixels within it, when instead I could draw the dynamic pixels individually.

BF3 uses 4 x 16-bit float RGBA textures to render to with 4xMSAA. I think you’re fine with your gigantic 1920x1080 FBO 8-bit RGB texture, since BF3 uses a 42.67x larger framebuffer and actually runs.

Like others suggested… simply poke colours directly into an RGB bitmap in a ByteBuffer. If you’re not intending to ever read the colours back again, use a PBO. Then render it to any size you want and let OpenGL scale the pixels appropriately.
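Cas’s suggestion in sketch form: poke colours into an RGB bitmap held in a ByteBuffer using absolute puts, so the buffer position never moves and the whole thing can be re-uploaded (via glTexSubImage2D, or through a PBO) without rewinding. Class and helper names here are mine, just for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Bitmap {
    private final ByteBuffer pixels;
    private final int width;

    public Bitmap(int width, int height) {
        this.width = width;
        // 3 bytes per pixel, tightly packed RGB rows.
        this.pixels = ByteBuffer.allocateDirect(width * height * 3)
                                .order(ByteOrder.nativeOrder());
    }

    // Poke a single 0xRRGGBB colour at (x, y) by absolute offset; the
    // buffer's position is untouched, so it stays ready for upload.
    public void setPixel(int x, int y, int rgb) {
        int off = (y * width + x) * 3;
        pixels.put(off,     (byte) (rgb >> 16));
        pixels.put(off + 1, (byte) (rgb >> 8));
        pixels.put(off + 2, (byte) rgb);
    }

    public ByteBuffer data() { return pixels; }
}
```

The absolute-indexed `put(int, byte)` overload is the key detail: no flips or rewinds needed between writing pixels and handing the buffer to OpenGL.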

Cas :slight_smile:

Cool I’ll give that a go then, thanks fellas.

How do you know this stuff and where could I read about it all?! ;D

There are tons of slides out there on the internet. This is a pretty good source: http://publications.dice.se/publications.asp?show_category=yes&which_category=Rendering

Also, learn to spend countless hours at http://www.gdcvault.com/

Something like Sword & sworcery EP I’m guessing?

To some extent. But not nearly as pretty. :’(

Oh great…there goes my life…I won’t stop reading anytime soon…