Key frame animation options.

Folks,

Anyone care to comment on the best techniques for key frame OpenGL animation?

I’m at the stage where I’ve got static objects displaying and moving about with display lists and now want to move on to animated objects.

I thought I’d start with key framing as it’s easy to understand, I can generate key frame meshes from a 3D package and understand the process of interpolating between frames.

Any comments on the following:

  1. Should I generate all frame meshes and create them as display lists? (This sounds expensive in terms of server-side/graphics card memory)
  2. Should I use client-side vertex arrays resident in main memory and send the data to the graphics card for each frame? This sounds slower than option 1), but would allow me to alter the vertex data dynamically.

If option 2) is the way to go, should I interpolate each frame at render time or should I build all animation frames up front?
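For what it's worth, the on-the-fly interpolation in option 2) is just a per-component lerp between the two neighbouring keyframes. A minimal sketch (the function name is mine; the result is what you'd hand to glVertexPointer each frame):

```cpp
#include <cstddef>
#include <vector>

// Linear interpolation between two keyframe vertex buffers (x,y,z triples).
// t = 0 gives frame a, t = 1 gives frame b. Both buffers must have the
// same layout and vertex count.
std::vector<float> lerp_frames(const std::vector<float>& a,
                               const std::vector<float>& b,
                               float t)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + t * (b[i] - a[i]);
    return out;
}
```

Interpolating at render time like this costs one multiply-add per component per frame but keeps memory at two keyframes; building all in-between frames up front trades that CPU work for a lot of extra storage.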

If you’ve any other ideas/techniques please let me know.

Cheers.

Hi.

Although it seems out of place, this topic is interesting to me too. I’ve just implemented skeleton animation using keyframes in my engine.
I cannot give you any firm answers yet, since I still have a lot of testing to do.

I was using display lists up to this point, but I don’t use them any more, since creating one for each frame seems like a bit too much, especially when I can have, say, 100 frames per second or more. Another thing is that it would be nice to let interpolation between frames happen on the fly.
I think your 2nd option sounds better, but I haven’t tried vertex arrays yet, though I plan to do that as my next step.

Probably VBOs would be even better: they should let you use your vertex buffers at nearly the same speed as display lists. This is something I definitely have to try sooner or later.
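A sketch of what that per-frame VBO update path might look like. The GL calls need a live context and GL 1.5 (or ARB_vertex_buffer_object), so they're shown as comments here; the size helper and all names are just illustrative:

```cpp
#include <cstddef>

// Bytes needed for one frame's worth of positions (x,y,z per vertex).
std::size_t frame_bytes(std::size_t vertex_count)
{
    return vertex_count * 3 * sizeof(float);
}

// Once, at load time (GL 1.5 entry points, requires a context):
//   glGenBuffers(1, &vbo);
//   glBindBuffer(GL_ARRAY_BUFFER, vbo);
//   glBufferData(GL_ARRAY_BUFFER, frame_bytes(n), nullptr, GL_STREAM_DRAW);
//
// Every frame, after interpolating keyframes into `verts`:
//   glBufferSubData(GL_ARRAY_BUFFER, 0, frame_bytes(n), verts);
//   glVertexPointer(3, GL_FLOAT, 0, nullptr);
//   glDrawArrays(GL_TRIANGLES, 0, n);
```

GL_STREAM_DRAW is the usage hint for data that is respecified every frame, which is exactly the keyframe-interpolation case.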

See ya.

I recommend using a vertex program on the cards that can support it: your CPU will have nothing to do except send the vertices, as you usually do with static geometry, plus the bone matrices of your skeletal animation.

As for the CPU version of the program, that’s up to you.

If you use a vertex shader, how do you get feedback on where the shader puts the vertices? I don’t know that much about the subject, but I thought vertices transformed by the shader have no way to transfer back out to the main program (i.e. you only change the vertices just before they go on to the fragment stage)… For something like particles this doesn’t matter, but don’t you want to know where the animation is in the CPU part of the program if you’re actually animating something more complicated?
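One common answer (a sketch, not the only way): you don't read the vertices back; you keep the bone matrices on the CPU anyway, since you uploaded them, so you can re-apply a bone to the handful of points you actually care about (collision points, attachment points), while the GPU skins the full mesh. Assuming OpenGL's column-major 4x4 layout; names are mine:

```cpp
#include <array>

using Mat4 = std::array<float, 16>;  // column-major, OpenGL order
using Vec3 = std::array<float, 3>;

// Apply the same bone matrix you upload to the vertex program to a single
// point (w assumed to be 1). Lets the CPU know where a skinned point ends
// up without any read-back from the card.
Vec3 skin_point(const Mat4& bone, const Vec3& p)
{
    Vec3 r;
    for (int i = 0; i < 3; ++i)
        r[i] = bone[0 + i]  * p[0]
             + bone[4 + i]  * p[1]
             + bone[8 + i]  * p[2]
             + bone[12 + i];  // translation column, w = 1
    return r;
}
```

So the CPU transforms a few points per frame for game logic, and the vertex program transforms thousands for rendering, both from the same matrices.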