Foreground and screen-size texture

I’m trying to make a full-screen quad covered by a texture that matches the pixel size of the current window… that is, a texture covering the entire window surface, as a glass plate to draw our own in-house GUI system on. It uses the Foreground node and initially it seems to work fine.

However, the quality of the resulting texture is bad. If I have a window size of 800x600 and then load an 800x600 graphic, the resulting image is not per-pixel correct. It appears as though the image itself has been downsized (losing every few pixel lines), and then that result has been stretched back to full screen.

I don’t believe I’m creating mipmaps. At any rate, the poly is being drawn only 0.6f from the camera, so I can’t imagine why it would be using a lower mipmap level.

Any thoughts?

Correct me if I am wrong, but doesn’t OpenGL (prior to version 2.0) only support textures with power-of-two dimensions?

How are non-power-of-two textures handled by Xith? Downscaled automagically?

Perhaps your 800x600 texture was scaled to 512x512 or 512x256 internally, since it may not be displayable at 800x600.

Just a guess, though.

Try creating a 1024x1024 texture and copying your 800x600 image into it (centered). Then adjust the distance of the image to match the field of view.
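Something along these lines, in plain Java2D (just a sketch; the file names are made up):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PadToPowerOfTwo {
    public static void main(String[] args) throws Exception {
        // Load the 800x600 GUI image (file name is just an example).
        BufferedImage gui = ImageIO.read(new File("gui800x600.png"));

        // Create a 1024x1024 canvas and draw the image into the centre of it.
        BufferedImage padded = new BufferedImage(1024, 1024, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = padded.createGraphics();
        int x = (1024 - gui.getWidth()) / 2;   // 112
        int y = (1024 - gui.getHeight()) / 2;  // 212
        g.drawImage(gui, x, y, null);
        g.dispose();

        // The padded image can then be used as the texture without any rescaling.
        ImageIO.write(padded, "png", new File("gui1024x1024.png"));
    }
}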

As far as I knew, OpenGL required power-of-two textures with a maximum size of 256x256. But I’d already considered that, so I rescaled the texture in a graphics editor to get a feel for how much it had been scaled down… and the quality I’m seeing appears consistent with the image only having lost a few pixels (800x600 -> 512x512 looks far worse, with much more data lost).

I was wondering if maybe the canvas had been scaled up to 1024x1024 (the next power of two), and then UV-mapping back down to 800x600 with floats was responsible for missing a pixel here and there.
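For the record, the UVs I mean are just the plain ratios (assuming the 800x600 image sits in one corner of the padded 1024x1024 texture; whether v runs bottom-up or top-down depends on how the image was uploaded):

// Sketch: map only the 800x600 portion of a 1024x1024 texture onto the quad.
float uMax = 800f / 1024f;   // 0.78125
float vMax = 600f / 1024f;   // 0.5859375

// Quad corners: lower-left, lower-right, upper-right, upper-left.
float[] texCoords = {
    0f,   0f,
    uMax, 0f,
    uMax, vMax,
    0f,   vMax
};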

I mean, the quality is pretty good, I’d say 90%, but to be the basis of a GUI I’ve got to have per-pixel accuracy. I’d understand it if I were stretching or shrinking the on-screen drawing of the texture to a size other than the original, but I just can’t figure out how it has managed to lose about 10% of the image. I think it’s more likely that when it tries to grab a pixel, the UV float fuzziness means it occasionally grabs one that’s a single pixel too high or low…

I did a high-quality rescale of the 800x600 GUI example screenshot and made a 1024x768 version. When I load that into my client and draw it at 800x600, it comes out with about 98% accuracy. A big improvement, but I’d hate to think of coding a GUI system that works on an 800x600 display only by writing to a 1024x768 texture, heh.

Is there some sort of “quality vs speed” switch somewhere that I could set to do more accurate texturing?
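The only knob I can think of is the texture filter mode down at the OpenGL level: GL_NEAREST makes the sampler snap to the closest texel instead of blending neighbours, which is what you’d want for a 1:1 GUI overlay. Something like this in JOGL (just a sketch; I don’t know how or whether Xith exposes this, and the GL package name depends on the JOGL version):

import javax.media.opengl.GL;

public class NearestFiltering {
    // Sketch: force nearest-neighbour sampling on the currently bound 2D
    // texture, so each screen pixel reads exactly one texel with no blending.
    public static void useNearestFiltering(GL gl) {
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST);
    }
}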

It appears that it is, indeed, a “fuzzy float” UV issue. I’ve noticed that the distortion is more pronounced when the image dimension is farther from the next power of two (i.e., 800x600 -> 1024x1024), and more pronounced on the axis furthest from it (i.e., 800 is closer to 1024 than 600 is, so the X axis is less distorted than the Y axis).
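If that’s the case, the usual workaround I know of is to pull the UVs in by half a texel so the samples land on texel centres rather than texel edges. Roughly (same assumption as before, the 800x600 image in the corner of a 1024x1024 texture; whether this exact offset is right depends on how the quad lands on the pixel grid):

// Sketch: same UV math as the plain ratios, but pulled in by half a texel
// so float rounding can't grab the neighbouring row/column.
public static float[] subImageUVs(int imgW, int imgH, int texW, int texH) {
    float halfU = 0.5f / texW;
    float halfV = 0.5f / texH;
    float uMax = (float) imgW / texW - halfU;
    float vMax = (float) imgH / texH - halfV;
    // Corners: lower-left, lower-right, upper-right, upper-left.
    return new float[] { halfU, halfV,  uMax, halfV,  uMax, vMax,  halfU, vMax };
}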

I was looking through the UIOverlay code, and I see David coded up a tiled texture process, which is what we used in our own in-house 3D engine. That’s just as well, though; I’d rather the full-screen image be broken into very small tiles, to help keep the “dirty texture needs updating” resolution as fine as possible. Nothing worse than updating one pixel on a 1024x1024 texture and having to push the entire thing up again…
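(For reference, the GL call that makes small tiles pay off is glTexSubImage2D: only the dirty rectangle gets pushed up, not the whole texture. A rough sketch in JOGL; the method name and buffer handling are mine, not from UIOverlay:)

import java.nio.ByteBuffer;
import javax.media.opengl.GL;

public class DirtyRegionUpdate {
    // Sketch: re-upload only the changed rectangle of an existing texture
    // instead of pushing the entire 1024x1024 image every time.
    public static void updateDirtyRegion(GL gl, int textureId,
                                         int x, int y, int width, int height,
                                         ByteBuffer rgbaPixels) {
        gl.glBindTexture(GL.GL_TEXTURE_2D, textureId);
        gl.glTexSubImage2D(GL.GL_TEXTURE_2D, 0,    // target, mip level
                           x, y, width, height,    // dirty rectangle
                           GL.GL_RGBA, GL.GL_UNSIGNED_BYTE,
                           rgbaPixels);            // pixel data for that rectangle only
    }
}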