I’ve noticed something very strange: when I use a texture twice during a single call of the display method of my GLEventListener, the game runs twice as fast. Yet all the textures are resident according to glAreTexturesResident, and I had already raised the priority of the texture I used for this test. I think something is wrong with my driver. Resident textures should be supported on my graphics card (ATI Radeon 9250 Pro), but I assume glAreTexturesResident always returns true even when a texture isn’t actually resident. My graphics card seems to move a texture into VRAM only once it has been used several times between two buffer swaps, even though I call glPrioritizeTextures and even though it would be faster to upload the texture the first time it is bound. With this trick I get 19 FPS at a resolution of 1280×1024 instead of the usual 8 FPS. Does anyone know a workaround to force the graphics card to keep my textures in VRAM? I now realize that people with reliable drivers might get far better performance than I do, even with the same graphics card. It means TUER is fast and “fluid” enough to run on PCs with any ATI or NVIDIA graphics card made since 2000, provided the driver is truly reliable.
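One possible workaround, based on the behavior described above, is a “warm-up” pass at startup: since the driver seems to upload a texture only after it has actually been used, binding each texture and drawing a tiny quad with it once during init() should force the upload before the first real frame. This is only a sketch under that assumption, not a guaranteed fix; the method name warmUpTextures and the textureIds array are hypothetical, and the calls follow the JOGL 1.x convention where array arguments take an extra offset parameter.

```java
// Hypothetical warm-up pass for JOGL 1.x, to be called once from init().
// Assumption: the driver uploads a texture to VRAM only on first actual use,
// so we use each texture once before the first real frame.
private void warmUpTextures(GL gl, int[] textureIds) {
    // Hint to the driver that these textures should stay resident.
    float[] priorities = new float[textureIds.length];
    java.util.Arrays.fill(priorities, 1.0f);
    gl.glPrioritizeTextures(textureIds.length, textureIds, 0, priorities, 0);

    // Bind each texture and draw a tiny quad with it, forcing an upload.
    for (int id : textureIds) {
        gl.glBindTexture(GL.GL_TEXTURE_2D, id);
        gl.glBegin(GL.GL_QUADS);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(0f, 0f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f(0.001f, 0f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f(0.001f, 0.001f);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(0f, 0.001f);
        gl.glEnd();
    }
    gl.glFinish(); // wait until the uploads are actually done
}
```

The quad can be drawn off-screen or cleared immediately afterwards, so the warm-up never shows up on screen; the point is only to make the driver touch each texture once.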
The game runs at 32 FPS on an NVIDIA GeForce FX 5200 under Microsoft Windows XP, even though this card is famously a bit slow. I don’t understand why the download was so slow when I ran the test. The game runs at 96 FPS on an NVIDIA GeForce 6600, even with an old processor (a 500 MHz Celeron).
On the other hand, I’m still working on some optimizations to improve performance before adding complex models to the levels.
Finally, the impacts might be visible in the game as soon as tonight. The accuracy of their computation has been hugely improved, and it will get even better once the cell-and-portal algorithm is ready.