Has anyone out there run into something like this?
I’m trying to build my renderer from the ground up to use vertex arrays for all rendering. I grab a big chunk of AGP memory using the NVIDIA vertex array range extension (NV_vertex_array_range) and split it into two vertex arrays. When one fills up, or runs out of data, I set a fence and swap to the other vertex array. In theory this should give me pretty optimal vertex submission, since the GPU can chew on what I’ve just sent it while I’m filling the other buffer.
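For reference, the swap logic I mean looks roughly like this. This is only a sketch: the `Fence` interface and the class/method names are hypothetical stand-ins for the real NV_fence calls (set-fence after submitting a batch, finish-fence before refilling), and the buffers here are plain direct buffers rather than slices of the actual AGP allocation:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical stand-in for NV_fence: set() after a batch is
// submitted, finish() blocks until the GPU is done with it.
interface Fence {
    void set();
    void finish();
}

// Sketch of double-buffered vertex submission with fences.
class DoubleBufferedVA {
    private final FloatBuffer[] buffers = new FloatBuffer[2];
    private final Fence[] fences;
    private int current = 0;

    DoubleBufferedVA(int floatsPerBuffer, Fence f0, Fence f1) {
        // In the real renderer these would be two halves of one
        // AGP allocation from NV_vertex_array_range.
        for (int i = 0; i < 2; i++) {
            buffers[i] = ByteBuffer
                .allocateDirect(floatsPerBuffer * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        }
        fences = new Fence[] { f0, f1 };
    }

    FloatBuffer currentBuffer() { return buffers[current]; }

    // Called once the current buffer is full (or out of data):
    // fence the just-submitted batch, swap, then wait on the other
    // buffer's fence so the GPU has finished reading it before we
    // overwrite it.
    void swap() {
        fences[current].set();     // GPU signals when this batch is done
        current = 1 - current;
        fences[current].finish();  // block until safe to refill
        buffers[current].clear();  // reset position for refilling
    }
}
```

In the real thing the `glVertexPointer`/`glDrawArrays` calls for the just-filled buffer happen right before the fence is set.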
That was the theory anyway…
When I come to run it I get about 50 fps with 4000 triangles on screen. When I profile to see wtf is going on, it’s spending 50% of its time copying into the FloatBuffer that backs the vertex array!!
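On the copying cost, one thing I can illustrate (an aside, not from profiling anyone else’s code): if the vertex data goes into the FloatBuffer one float at a time with `put(float)`, each call pays its own bounds check, whereas the bulk `put(float[])` overload copies the whole array in one call. The class and array sizes below are made up for the example:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class CopyStyles {
    // Per-element puts: one method call + bounds check per float.
    static void copySlow(float[] src, FloatBuffer dst) {
        for (int i = 0; i < src.length; i++) {
            dst.put(src[i]);
        }
    }

    // Bulk put: a single call copies the whole array.
    static void copyFast(float[] src, FloatBuffer dst) {
        dst.put(src);
    }

    public static void main(String[] args) {
        float[] verts = new float[4000 * 3 * 3]; // 4000 tris, xyz per vertex
        FloatBuffer buf = ByteBuffer
            .allocateDirect(verts.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();

        copySlow(verts, buf);
        buf.clear();
        copyFast(verts, buf);
        System.out.println("copied " + buf.position() + " floats");
    }
}
```

Both produce identical buffer contents; only the per-call overhead differs.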
Now I’m only running a GeForce 2, but that seems a long way off the pace to me, when something like BenMark is doing on the order of 30 or 40 times that.
I’m just using glDrawArrays, which seems to be discouraged, and drawing random triangles could be a pretty pathological case, but does anyone have any ideas?
