Non-square and non-power-of-two textures?

Now and then I read about problems with non-square and non-power-of-two textures with OpenGL. I know that old OpenGL implementations had such restrictions, but I want to ask, is this still the case?

I’ve tested fairly arbitrary texture sizes on three PCs (two desktops, one laptop, all Windows 7; I was too lazy to note the graphics hardware of each) and noticed no problems. Do I need to jump through the extra hoops of adjusting my textures, or can I keep using arbitrarily sized ones?

NPOT textures have been standard in OpenGL ever since OpenGL 2.0, which was a long time ago (2004). The only question is how well a given driver performs with NPOT textures.
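If you ever have to support a pre-2.0 context, you can check for `GL_ARB_texture_non_power_of_two` in the string returned by `glGetString(GL_EXTENSIONS)`. A minimal sketch of that check, assuming you already have the extensions string; the `has_extension` helper is my own, not a GL API, and it does whole-token matching because a naive `strstr` can match a prefix of a longer extension name:

```c
#include <string.h>

/* Returns 1 if `name` appears as a whole space-separated token in
 * `extlist` (the format glGetString(GL_EXTENSIONS) returns on
 * pre-3.0 contexts), 0 otherwise. */
static int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extlist) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len; /* prefix of a longer name; keep searching */
    }
    return 0;
}
```

On a 2.0+ context you can skip this entirely, since NPOT support is simply core.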

That said, ever since shaders became commonplace, optimizations based on the predictable memory-access patterns of fixed-function texturing have become largely irrelevant, so NPOT textures work fine on the vast majority of hardware these days.

Nope, it’s not something to worry about. You may want to stick to widths whose row size in bytes is a multiple of 4, though, or you’ll have to mess with the pixel unpack settings.

Think of it like this:


int x = y << 2;

versus


int x = y * 4;

There may be a slight difference, but nothing that will contribute to any noticeable difference overall.

(EDIT: The reason I used that comparison is that the shift version only works for multiplying by powers of two.)
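On the unpack caveat: it exists because `GL_UNPACK_ALIGNMENT` defaults to 4, so OpenGL expects every row of your client-side pixel data to start on a 4-byte boundary. A sketch of the arithmetic, assuming a tightly understood row layout; the `row_stride` helper name is mine, plain C with no GL calls:

```c
#include <stddef.h>

/* Bytes OpenGL will read per row of client pixel data: the raw row
 * size, rounded up to the current GL_UNPACK_ALIGNMENT. */
static size_t row_stride(size_t width, size_t bytes_per_pixel,
                         size_t alignment)
{
    size_t raw = width * bytes_per_pixel;
    return (raw + alignment - 1) / alignment * alignment;
}
```

For example, a 30-pixel-wide RGB image (3 bytes per pixel) has raw rows of 90 bytes, but with the default alignment OpenGL reads 92-byte rows, so a tightly packed buffer gets skewed. Either pad your rows, call `glPixelStorei(GL_UNPACK_ALIGNMENT, 1)`, or stick to sizes where the raw row size is already a multiple of 4.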

Thanks for the replies :slight_smile:

Good to know that I can be lazy with the textures …

Power-of-two textures help when you want to build a sprite batch: some texture coordinates cannot be represented in float/double format with 100% precision, which can lead to artifacts. With power-of-two textures there will never be any artifacts related to texture coordinates.

I’ve suspected that this may be the culprit of a few unsolved texture bleeding problems out there, but I’m not sure if it’s actually real. Let’s say we have a texture containing three 20x20 sprites, making the texture 60x20 pixels large. If we want to draw the first sprite, the x texture coordinate would essentially go from 0 to 1.0/3.0. The latter would be rounded to the nearest float, 0.3333333432674408, which is slightly more than 1/3 and could mean that the middle sprite gets sampled. How this rounding error would manifest itself would depend on how exactly the GPU interpolates texture coordinates for pixels, which probably varies between GPUs, vendors, drivers, etc.

This isn’t a problem when both the sprite and the texture sizes are powers of two, thanks (1) to all edge values being exactly representable as floats, and (2) to the top-left fill convention OpenGL and DirectX use.

I didn’t think of the fractions and the binary math. Good to know that “power of 2” has more impact than just speeding up some calculations.