ati vs nvidia - large performance differences

Hi,

I have a relatively simple app using NEWT windows under Windows 7 64-bit. It creates several fairly large textures (2k x 2k) and then calls updateSubImage() to copy video from capture cards into the textures, displaying one or two of the textures at a time.
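
In sketch form, the structure looks roughly like this. The class name and the dummy frame buffer are placeholders, the real capture path is omitted, and the package names assume a recent JOGL 2 build (older builds had the GL classes under javax.media.opengl):

```java
import java.nio.ByteBuffer;

import com.jogamp.newt.opengl.GLWindow;
import com.jogamp.opengl.*;
import com.jogamp.opengl.util.Animator;

// Simplified version of the app: one 2k x 2k texture, re-uploaded every frame
// from a dummy buffer standing in for the capture-card video.
public class StreamingTextureSketch implements GLEventListener {
    private static final int TEX_W = 2048, TEX_H = 2048;
    private final ByteBuffer frame = ByteBuffer.allocateDirect(TEX_W * TEX_H * 4);
    private int texId;

    public void init(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        int[] ids = new int[1];
        gl.glGenTextures(1, ids, 0);
        texId = ids[0];
        gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
        // Allocate storage once; the per-frame path only uses sub-image updates.
        gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA8, TEX_W, TEX_H, 0,
                        GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, null);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
    }

    public void display(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
        // Raw-GL equivalent of Texture.updateSubImage(): replace the texel data in place.
        gl.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                           GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, frame.rewind());
        // ... draw one or two textured quads here ...
    }

    public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { }
    public void dispose(GLAutoDrawable drawable) { }

    public static void main(String[] args) {
        GLWindow window = GLWindow.create(new GLCapabilities(GLProfile.getDefault()));
        window.addGLEventListener(new StreamingTextureSketch());
        window.setSize(1024, 768);
        window.setVisible(true);
        new Animator(window).start();
    }
}
```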

I have an ATI 5770 card and several NVIDIA cards, the “best” being a GeForce 275. The GeForce 275 is comparable to the ATI 5770 in terms of memory size, memory bandwidth, clock speed, number of stream processors, and so on.

With vsync turned off, I get around 2000 fps on the ATI card and only about 300-500 fps on the NVIDIA card. With vsync on, I get 60 fps on the ATI but only 45-50 fps on the NVIDIA. Another interesting point is that performance seems to be roughly the same with any of the three NVIDIA cards I’ve tried, including an 8800 GT and a Quadro FX 1500, both of which I would assume to be “slower” than the GeForce 275. I have tried several different driver versions, all with the same results.

Is there an extension I need to enable to get higher performance from the NVIDIA cards, or some difference in setup or command-line flags? I’m really stumped here.

thanks,

morris

It makes no sense that you get 300-500 w/o vsync and 45 with vsync. Maybe you’re calculating the fps incorrectly?

I’m using window.enablePerfLog(true), where window is a GLWindow. This writes a 5-second average to stderr every 5 seconds. I don’t know how it’s implemented, so I can’t comment on how it counts frames, but presumably it is at least consistent. In any case, the difference is clearly apparent with vsync, since I can see frames being dropped on the NVIDIA card but not on the ATI card.
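
For what it’s worth, a manual counter along these lines (a made-up helper, not part of NEWT) would be an easy way to double-check the perf log the same way on both cards:

```java
// Made-up helper to cross-check the perf log: call tick() once per display()
// and it prints its own 5-second average to stderr, same cadence as enablePerfLog().
public class FpsCrossCheck {
    private long windowStart = System.nanoTime();
    private int frames = 0;

    public void tick() {
        frames++;
        long elapsed = System.nanoTime() - windowStart;
        if (elapsed >= 5_000_000_000L) {   // 5-second window
            System.err.printf("measured fps: %.1f%n", frames * 1e9 / elapsed);
            frames = 0;
            windowStart = System.nanoTime();
        }
    }
}
```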

While this seems counter-intuitive, I could see how some frames might take too long to render, go over the roughly 16 ms budget of one refresh, and then have to wait a full additional refresh period before being displayed. It may also be related to some threading issues I haven’t adequately addressed; as I said, it’s a simple program at this point.
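
To put rough numbers on that: at 60 Hz the refresh period is about 16.7 ms, so with vsync a frame that takes even slightly longer than one period to render gets held until the next refresh. A quick sketch of the arithmetic (class name made up):

```java
// Back-of-envelope: with vsync on, a frame that misses the refresh deadline
// is held until the next vblank, so its display time is quantized to whole
// refresh periods.
public class VsyncQuantization {
    public static void main(String[] args) {
        double refreshMs = 1000.0 / 60.0;   // ~16.7 ms per refresh at 60 Hz
        for (double renderMs : new double[] { 15.0, 17.0, 20.0 }) {
            double displayedMs = Math.ceil(renderMs / refreshMs) * refreshMs;
            System.out.printf("render %.1f ms -> shown every %.1f ms (%.0f fps)%n",
                              renderMs, displayedMs, 1000.0 / displayedMs);
        }
    }
}
```

A mix of frames shown every 16.7 ms and frames shown every 33.3 ms would average out to something like the 45-50 fps I’m seeing on the NVIDIA card.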

However, my real question is why an essentially equally performant card should drop my framerate by a factor of 3-4, particularly since ATI has a reputation for slower OpenGL performance than NVIDIA. This is why I think I must be doing something wrong; it just doesn’t make sense. This is my first attempt at using native (NEWT) windows, so I don’t know whether I’m doing something that just happens to work fine on ATI but not on NVIDIA.

Any thoughts from people using the NEWT libraries would be greatly appreciated, in particular pointers to any tutorials or best practices.

thanks,

morris

The “reputations” of NVIDIA vs. ATI are always subject to change; it may be that ATI is finally pulling ahead. On the other hand, even though both cards are performant enough, NVIDIA may have issues in its drivers’ texture-update path when a large texture (2k x 2k) is updated frame after frame. If your code hasn’t changed between cards, it seems odd that it could be your fault.
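
One way to check would be to time the upload in isolation on both cards, something like this rough sketch (the helper and its parameters are made up, the imports assume a recent JOGL 2 build, and it assumes a current GL2 context, e.g. called from inside display()):

```java
import java.nio.ByteBuffer;

import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;

// Crude micro-benchmark: time a burst of full-size sub-image uploads with
// glFinish() before and after, so driver queuing doesn't hide the real cost.
// Call it once from display() on each card and compare the per-upload times.
public class UploadBenchmark {
    public static void run(GL2 gl, int texId, int w, int h, int iterations) {
        ByteBuffer pixels = ByteBuffer.allocateDirect(w * h * 4);
        gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
        gl.glFinish();                       // drain any pending work first
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            gl.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, w, h,
                               GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, pixels.rewind());
        }
        gl.glFinish();                       // wait until the uploads actually finish
        double msPerUpload = (System.nanoTime() - start) / 1e6 / iterations;
        System.err.printf("avg %.2f ms per %dx%d upload%n", msPerUpload, w, h);
    }
}
```

If the per-upload time is several milliseconds higher on the NVIDIA cards, the driver’s texture-update path is the likely culprit rather than anything in your code.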

I agree with you and it is even worse on non-power-of-2 textures.