SoftImage skeletal animation

Hi,

Has anyone had any experience in extracting keyframed skeletal animation from a Softimage v4.2 dotXSI file?

I have had some success in importing simple XSI models into 3ds Max 4 and then using the rotation samples from the 3DS ASE export, but the skeleton we really need to use gets horribly mangled.

Is there a better way to do this? Does something need to be done to the models in SI before exporting? Does anyone know of where I can find out about interpreting the SI FCurves myself without going through 3DS?

Any help will be greatly appreciated.

Kevin

Hi Kevin,

[quote]Has anyone had any experience in extracting keyframed skeletal animation from a Softimage v4.2 dotXSI file?
[/quote]
We’ve built a full dotXSI parser for our project and we’re using it in our pipeline tools. It supports all of the dotXSI v3.6 templates, which IIRC is what XSI 3.5 up to 4.2 export. It was simple to implement (the SDK docs are very helpful), although another option would be to implement a binding directly to their C++ SDK.

For skeletal animation we use XSI’s Actions. From a dotXSI file we first grab the geometry, the skeleton (most importantly, root and joint basepose and SRT transforms) and the envelope weights/indices, and after some processing/optimization we write them to our custom files. Then we get the XSI_Actions from the XSI_Mixer and export the SI_FCurves. Apart from some optimization (no-op FCurves, no-op keyframes), the exported data is exactly what you see in an SI_FCurve template: keyframes and their respective values (and interpolation values, if present for cubic curves).

At runtime we use the fcurves to animate the bones. It’s really simple: you just interpolate the animated value (rot-x, pos-y, etc.) based on the fcurve and the current time, and you’re done. Then you build the bone matrices, which is a little tricky to get right (you need to properly combine the current bone transformation, the basepose transformation and everything above it in the skeletal hierarchy), but it’s not too much code.
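
Just to give a feel for the fcurve sampling part, here’s a rough sketch (linear keys only; Keyframe and FCurve are my own names, not dotXSI SDK types):

// Minimal linear fcurve sampling. Keys are assumed sorted by time.
class Keyframe {
      float time;
      float value;

      Keyframe(float time, float value) {
            this.time  = time;
            this.value = value;
      }
}

class FCurve {
      private final Keyframe[] keys; // at least one key, sorted by time

      FCurve(Keyframe[] keys) {
            this.keys = keys;
      }

      // Returns the animated value (rot-x, pos-y, etc) at the given time.
      float sample(float time) {
            if ( time <= keys[0].time )
                  return keys[0].value;
            if ( time >= keys[keys.length - 1].time )
                  return keys[keys.length - 1].value;

            // Find the two keys surrounding 'time' and interpolate linearly.
            for ( int i = 1; i < keys.length; i++ ) {
                  if ( time <= keys[i].time ) {
                        Keyframe a = keys[i - 1];
                        Keyframe b = keys[i];
                        float t = (time - a.time) / (b.time - a.time);
                        return a.value + (b.value - a.value) * t;
                  }
            }

            return keys[keys.length - 1].value; // not reached
      }
}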

[quote]Does something need to be done to the models in SI before exporting?
[/quote]
It depends on what you need from the model, but generally no. You need to decide what’s important, have a way to grab it and coordinate with your artists so that the whole procedure is as smooth as possible.

I hope this helps. :wink:

Hi Spasi

Thanks for your quick response.

I think I’m on top of grabbing the geometry etc from the dotXSI file. I am building my own parser in Java and I don’t foresee any major problems there.

(for anyone who’s interested, here are two tutorials on how to do this using the XSI FDK:
http://www.inframez.com/papers/dotxsi_dev.htm
http://www.inframez.com/papers/dotxsi_dev2.htm
)

What confused me at first was the cubic FCurves, with the mysterious left and right tangent values. While I sort-of understand what these represent, I can still find nothing on the Web that shows in any kind of detail the maths behind these and how to derive the bone matrices from them.

(I was hoping to let 3dStudio do this for me - this project is to upgrade existing 3Dstudio models
with SoftImage models. I already have a working 3Dstudio ASE importer and skeletal animation system)

However, someone on another forum suggested we export linear FCurves instead of cubics.

The result is looking a lot more like something I can use, but I still have one basic question - Do the values in the 3 Linear Fcurves for each bone represent Euler angles? If so, I should be able to convert these to axis-angles and pump these straight into my existing code to build the bone matrices? Sorry if this is a silly question - the FDK docs are very vague.

One thing I’m concerned about is all the additional effectors and upVectors and guides etc. that are being exported from SI. I suspect that these are being misinterpreted (or even not interpreted at all) by 3Dstudio, and even the standalone XSIviewer, resulting in the mangling I mentioned earlier. Do I need to do anything with these? If so, what? I suspect they can’t be ignored. Is there an easy way to get round this?

Our animator says

[quote]I have spoken to Matt who has given me a last resort option to retrace all the bones of the horse and set-up a simple skeleton that will be driven by my existing skeleton but be able to give me FK Rotations for each bone… If indeed that is what we need. That will be a mission but like I said I’m looking at all possible solutions last resort or not.
[/quote]
Any comments?

Kevin

[quote]What confused me at first was the cubic FCurves, with the mysterious left and right tangent values. While I sort-of understand what these represent, I can still find nothing on the Web that shows in any kind of detail the maths behind these and how to derive the bone matrices from them.
[/quote]
Oh, yeah, same problem here. I haven’t bothered solving this properly (awfully busy with other stuff, we’re linearly interpolating cubic fcurves for now), but this is what I remember from when I first implemented the animation system: XSI was exporting Hermite fcurves in the past (comparable quality, much faster to calculate, much code/details on the net), but they’ve dumped them for some reason. Now they have these cubic ones, which are powerful and flexible, but as you found out, there’s no resource for the exact math equations involved. I remember implementing something which was almost, but not completely accurate.

The left and right tangent values actually define the points of the “tangent vectors” on each keyframe. The tangent vectors are the little lines on each keyframe that you see in the low-level animation editor in XSI. Their power and complete freedom come from the fact that these vectors can be anywhere between two keyframes (unlike other curve types), even completely breaking the curvature. That’s why they’re not actually “tangent values”, but rather offsets in 2D space (key-time on the X-axis, key-value on the Y-axis) from the keyframe they belong to. From these offsets you can compute their absolute positions and then somehow solve the cubic equation (that’s the part we have a problem with).
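
I haven’t verified this against XSI’s output, but if the offsets really do define a 2D Bezier segment in (key-time, key-value) space, the “solve the cubic” step could be sketched like this (bisection on the time component, assuming time is monotonic along the segment; CubicSegment is just my own naming):

// Sketch only: treats one cubic fcurve segment as a 2D Bezier curve in
// (time, value) space. The tangent interpretation is the assumption above.
class CubicSegment {
      // Key A and key B, plus the right tangent offset of A and the left tangent offset of B.
      float timeA, valueA, rightDTime, rightDValue;
      float timeB, valueB, leftDTime, leftDValue;

      float sample(float time) {
            // Absolute control points of the Bezier segment.
            float x0 = timeA,              y0 = valueA;
            float x1 = timeA + rightDTime, y1 = valueA + rightDValue;
            float x2 = timeB + leftDTime,  y2 = valueB + leftDValue;
            float x3 = timeB,              y3 = valueB;

            // Solve bezier(x0..x3, u) == time for u by bisection.
            float lo = 0.0f, hi = 1.0f, u = 0.5f;
            for ( int i = 0; i < 32; i++ ) {
                  u = (lo + hi) * 0.5f;
                  if ( bezier(x0, x1, x2, x3, u) < time )
                        lo = u;
                  else
                        hi = u;
            }

            // Evaluate the value component at the same parameter.
            return bezier(y0, y1, y2, y3, u);
      }

      private static float bezier(float p0, float p1, float p2, float p3, float u) {
            float v = 1.0f - u;
            return v*v*v*p0 + 3.0f*v*v*u*p1 + 3.0f*v*u*u*p2 + u*u*u*p3;
      }
}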

If you figure this out, sharing the solution will be greatly appreciated. :wink:

[quote]However, someone on another forum suggested we export linear FCurves instead of cubics.
[/quote]
Yes, that makes sense. It’s something I’ve told our artists too. Basically, they’re supposed to use cubic curves only when absolutely necessary (e.g. for an excellent-quality animation). I guess that adding two or three more keyframes to a linear curve to approximate a cubic one would be much more efficient performance-wise than using a true cubic.

[quote]The result is looking a lot more like something I can use, but I still have one basic question - Do the values in the 3 Linear Fcurves for each bone represent Euler angles? If so, I should be able to convert these to axis-angles and pump these straight into my existing code to build the bone matrices? Sorry if this is a silly question - the FDK docs are very vague.
[/quote]
Yes, they are euler angles. IIRC there’s also an option in XSI to convert euler angles to quaternions, but I’m not sure what is exported then.

[quote]One thing I’m concerned about is all the additional effectors and upVectors and guides etc. that are being exported from SI. I suspect that these are being misinterpreted (or even not interpreted at all) by 3Dstudio, and even the standalone XSIviewer, resulting in the mangling I mentioned earlier. Do I need to do anything with these? If so, what? I suspect they can’t be ignored. Is there an easy way to get round this?
[/quote]
That has to do with what I said in my previous answer, that is, knowing what you need and how to make it easy to get. To make it clear, this is what we’re doing:

Our first decision was to not use IK at runtime (obviously for performance reasons - it’s a strategy game). That means our skeletal animation consists of root/bone translations/rotations (no scaling, for several reasons), with no effector animation whatsoever. So this is what we need to get in our engine. The animators should be saving keyframes on the skeleton directly (except effectors) and not on any rig they’ve built upon it. There are two issues here: a) How can we make it painless for the animators to do their stuff (easily animating inside XSI) b) How can we export the skeletal animation without trouble.

As it turns out, these two issues can be solved together. The most important is to force your animators to build rigs without any connection (parenting) with the skeletal hierarchy. That is, the rig should be a completely different hierarchy of objects, which drive the skeleton with constraints only (this actually is THE proper way to generally build a skeletal animation rig in the 3D world). This allows for both a clean skeletal hierarchy to export and a decent rig to be built upon it.

a) Say you want to animate a chain. Normally you’d want to use IK, but only FK is allowed. Easy: simply grab the effector, put it where you want it (and let XSI solve the IK), but then save the bone rotations instead of the effector translation. That’s what your artists should generally do. With or without a rig, with or without any “animation helpers” (e.g. custom sliders on the XSI interface), the keys should be saved on the skeleton (hint: use groups to easily mass-save keyframes). IK animation, with FK keyframes!

b) Now that you have a clean hierarchy, after parsing, go through each template and save only the roots/bones in your own custom hierarchy, just ignoring effectors on the way.

Of course an IK-to-FK conversion could be done (needs to be really robust though!), but I find our way both simple and effective.

Hi

Our animators made the changes Spasi suggested, and I’ve implemented the loading of the horse geometry and skeletal animation from the dotXSI file into my JOGL viewer, but I just can’t get the animation to look right. The legs are just rotating in all directions.

I got a simple 2-legged xsi model with 6 bones to work perfectly in my viewer, with correct vertex blending and normals and everything. But the same viewer just won’t work with the more complex horse model (46 bones).

It looks like I’m maybe applying the rotations to the wrong bones, but I’ve checked and it’s (probably) not that. The base-pose (keyframe 0) is perfect, but after that everything goes wrong.

I’m converting Euler to Quaternions like this:


      float sx = (float)Math.sin(eulerRotX / 2.0f);
      float sy = (float)Math.sin(eulerRotY / 2.0f);
      float sz = (float)Math.sin(eulerRotZ / 2.0f);

      float cx = (float)Math.cos(eulerRotX / 2.0f);
      float cy = (float)Math.cos(eulerRotY / 2.0f);
      float cz = (float)Math.cos(eulerRotZ / 2.0f);

      Quat4f q1 = new javax.vecmath.Quat4f(sy,   0.0f, 0.0f, cy);
      Quat4f q2 = new javax.vecmath.Quat4f(0.0f, sz,   0.0f, cz);
      Quat4f q3 = new javax.vecmath.Quat4f(0.0f, 0.0f, sx  , cx);
                  
      this.rotation = new Quat4f(q1);  
      this.rotation.mul(q2);
      this.rotation.mul(q3);

I found that for the simple 2-legged xsi model to work, I had to change the order to y-z-x in the creation of q1, q2, q3 above (I’ve no idea why). I’ve tried every combination here but not one corrects the horse animation.

After that, it’s a pretty generic skeletal animation system.

In this model I expect each bone and each limb to rotate (mostly) in a plane, even if the euler angles are assigned to the wrong axes, but that’s not what I’m seeing. Child bones sometimes seem to be rotating in planes at right angles to the parent’s plane, but this is not consistent across all the bones. Some bones don’t start and end in the same plane.

I found some anomalies in the horse’s Fcurve data, such as this:

  1.000000,14.634259,
  2.000000,13.496899,
  3.000000,122.101044,
  4.000000,176.119934,
  5.000000,159.457596,
  6.000000,156.837585,
  7.000000,180.112411,
  8.000000,179.619141,
  9.000000,181.027267,
  10.000000,185.909149,
  11.000000,191.765121,
  12.000000,194.263153,

The sequence is supposed to loop, but somewhere along the way 180 degrees got added, so when looping from keyframe 12 back to keyframe 1 the bone makes a sudden 180-degree rotation in the wrong direction. The animators are looking into this, but it only affects a few joints and is easy to recognise both in the viewer and in the data - and it’s only part of the problem.
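
(Side note: for the looping itself, interpolating each angle along the shortest path can help - a rough sketch below. It obviously won’t repair data where 180 degrees genuinely got added, but it avoids spinning the long way round when a looped curve wraps past 360.)

// Shortest-path interpolation between two euler keys, in degrees.
static float lerpAngleDegrees(float a, float b, float t) {
      float delta = (b - a) % 360.0f;
      if ( delta > 180.0f )
            delta -= 360.0f;
      else if ( delta < -180.0f )
            delta += 360.0f;
      return a + delta * t;
}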

Maybe this is the problem? Maybe there are other problems with the data that I can’t pick up
just by looking at the numbers? Maybe there’s a problem with the viewer?

I’ve been struggling exclusively with this for over a week now, and I think I’ve examined
every possible cause. Does anyone please have any suggestions?

Thanks

Kevin

OK, I’ve found that by doing the Euler to Quat conversion like this, the 180-degree problem is resolved:


      Quat4f q1 = new javax.vecmath.Quat4f(sz,   0.0f, 0.0f, cz);
      Quat4f q2 = new javax.vecmath.Quat4f(0.0f, sy,   0.0f, cy);
      Quat4f q3 = new javax.vecmath.Quat4f(0.0f, 0.0f, sx  , cx);
                  
      this.rotation = new Quat4f(q3);  
      this.rotation.mul(q2);
      this.rotation.mul(q1);

There are 82944 different combinations of the above if you include negating the angles. I’ve tried many of these - some give better results than others. Does someone know the definitive way to do this when working from dotXSI?
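
(A small helper like this makes it easier to try orders systematically instead of editing the construction by hand - eulerToQuat and the order string are just my own convention, nothing from vecmath or dotXSI:)

// Builds a quaternion from euler angles (in radians) in an explicit order,
// e.g. "zyx" means Quat(rotZ) * Quat(rotY) * Quat(rotX).
// Needs javax.vecmath.Quat4f and javax.vecmath.AxisAngle4f.
static Quat4f eulerToQuat(float rotX, float rotY, float rotZ, String order) {
      Quat4f result = new Quat4f(0.0f, 0.0f, 0.0f, 1.0f); // identity
      Quat4f axisRot = new Quat4f();

      for ( int i = 0; i < order.length(); i++ ) {
            switch ( order.charAt(i) ) {
                  case 'x': axisRot.set(new AxisAngle4f(1.0f, 0.0f, 0.0f, rotX)); break;
                  case 'y': axisRot.set(new AxisAngle4f(0.0f, 1.0f, 0.0f, rotY)); break;
                  case 'z': axisRot.set(new AxisAngle4f(0.0f, 0.0f, 1.0f, rotZ)); break;
                  default:  throw new IllegalArgumentException(order);
            }
            result.mul(axisRot); // result = result * axisRot
      }

      return result;
}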

Another odd thing is that the rotations specified in the fcurves for key=1 are equal to the SRT rotation angles, meaning that the first keyframe corresponds to the base pose of the horse (i.e. standing still) while the remaining keyframes correspond to a running action. I am getting 12 keyframes in total in the fcurves and the animators assure me that all 12 should be for the run action, and not 11. Is this discrepancy to be expected?

I found that I need to subtract the SRT transform angles from all the fcurve rotation values in order to get anything that looks like a running horse, but while the resulting animation sort-of resembles a running horse, it is not exactly correct.

I suspect it’s not mathematically correct to subtract these angles directly. I’ve noticed that the SRT rotations and translations of a joint cannot be directly added numerically one-to-one to the base pose of the parent joint to get this joint’s transform, but it does work if these are converted to matrices and then combined by multiplication. I guess the raw fcurve Euler values need to be adjusted in some way before they can be used, but I have no clue as to what that adjustment is. Subtracting the SRT angles directly gives only approximate results.

Any help will be greatly appreciated.

Kevin

Hi Kevin,

I can’t answer this in detail right now (we’ve just moved to our new offices! :D), but from glancing over our code here are some details that might help you:

  • When we read the dotXSI bone data, we create the basepose matrix (from the basepose rotation & translation) and invert it. This inverted basepose matrix is stored in our custom model format (see below how it’s used).

  • Here’s what we do after fcurve animation to get the final matrices (almost pseudo-code, copy & paste from various sources, with some comments to make it more clear):

// BONE LOCAL TRANSFORMATION ( this is what we animate )
Vector3f translationLocal;
Quat rotationLocal;

// GLOBAL TRANSFORMATION ( this is what we want to calculate )
Vector3f translationGlobal;
Quat rotationGlobal;

// The current bone matrix ( this is what we use to transform the vertices )
Matrix4f transformation;

// The inverted basepose matrix ( precomputed at load time, see the first bullet above )
Matrix4f baseposeInv;

public void updateState() {
      translationGlobal.set3f(translationLocal);

      if ( parent == null ) {
            // This is the root bone. Just copy local to global.
            rotationGlobal.set(rotationLocal);
      } else {
            // Global translation is local translation, transformed by parent's global rotation, plus the parent's translation.
            parent.rotationGlobal.transform(translationGlobal); // Quat - Vector multiplication
            translationGlobal.add3f(parent.translationGlobal);

            // Global rotation is local rotation "plus" the parent's global rotation.
            rotationGlobal.setMul(parent.rotationGlobal, rotationLocal); // Quat - Quat multiplication
      }

      // Create final bone transformation and multiply with the inverse of the basepose matrix.
      // This brings the vertex to bone-local space and then applies the final transformation.
      transformation.set4f(translationGlobal, rotationGlobal); // Matrix generation from Quat & Vector
      transformation.mul4f(baseposeInv);

      // Update the state of the child bones.
      for ( int i = 0; i < children.length; i++ )
            children[i].updateState();
}

parent is the parent bone in the skeletal hierarchy and baseposeInv is the inverted basepose matrix. The updateState method is obviously called recursively, starting from the root bone.
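
In case it helps, building and inverting the basepose is nothing fancy - roughly something like this (javax.vecmath to stay close to your code; the x-y-z rotation order is only an example, use whatever order matches your BASEPOSE data):

// Builds the basepose matrix from the BASEPOSE rotation (radians) and translation,
// then inverts it. Needs javax.vecmath.Matrix4f and javax.vecmath.Vector3f.
static Matrix4f invertedBasepose(float rotX, float rotY, float rotZ, Vector3f translation) {
      Matrix4f rx = new Matrix4f(); rx.rotX(rotX);
      Matrix4f ry = new Matrix4f(); ry.rotY(rotY);
      Matrix4f rz = new Matrix4f(); rz.rotZ(rotZ);

      Matrix4f basepose = new Matrix4f();
      basepose.setIdentity();
      basepose.setTranslation(translation); // upper 3x3 stays identity
      basepose.mul(rz);                     // basepose = T * Rz * Ry * Rx
      basepose.mul(ry);
      basepose.mul(rx);

      basepose.invert();
      return basepose;
}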

And don’t worry if you can’t get it to work right away. I still have some awfully funny blooper screenshots from back when I was implementing the system! ;D

Hi,

Still no improvement. I have implemented the above pseudocode but things are now much worse. ???

I am assuming that:

translationLocal for each joint is the translation values from the SRT SI_Transform for that joint

rotationLocal is the result of quaternion interpolation between the two nearest keyframes at time t. At t=0, rotationLocal is equal to the SRT SI_Transform rotation values.

I have used the following code for your Quat - Vector multiplication:


public class Quat4f ...
/*
Transforms vector v by this quat and places the result in v.

From http://www.euclideanspace.com/maths/algebra/realNormedAlgebra/quaternions/transforms/

sfvec3d transform(sfvec3d p1){
      sfvec3d p2 = new sfvec3f();      
      p2.x = w*w*p1.x + 2*y*w*p1.z - 2*z*w*p1.y + x*x*p1.x + 2*y*x*p1.y + 2*z*x*p1.z - z*z*p1.x - y*y*p1.x;      
      p2.y = 2*x*y*p1.x + y*y*p1.y + 2*z*y*p1.z + 2*w*z*p1.x - z*z*p1.y + w*w*p1.y - 2*x*w*p1.z - x*x*p1.y;      
      p2.z = 2*x*z*p1.x + 2*y*z*p1.y + z*z*p1.z - 2*w*y*p1.x - y*y*p1.z + 2*w*x*p1.y - x*x*p1.z + w*w*p1.z;
      return p2;}

*/
public final void transform(Vector3f v) {

      float vx = w*w*v.x + 2*y*w*v.z - 2*z*w*v.y + x*x*v.x + 2*y*x*v.y + 2*z*x*v.z - z*z*v.x - y*y*v.x;      
      float vy = 2*x*y*v.x + y*y*v.y + 2*z*y*v.z + 2*w*z*v.x - z*z*v.y + w*w*v.y - 2*x*w*v.z - x*x*v.y;      
      float vz = 2*x*z*v.x + 2*y*z*v.y + z*z*v.z - 2*w*y*v.x - y*y*v.z + 2*w*x*v.y - x*x*v.z + w*w*v.z;
      
      v.set(vx,vy,vz);
}

I have created the basepose matrix from the BASEPOSE SI_Transform values for each joint, then inverted it.

I am sure I’m converting all angles to radians.

A typical final transform matrix I’m getting looks like this:

[-0.23531511 0.96802396 -0.08692845 -14.608533]
[0.97119457 0.23074402 -0.059485827 10.42211]
[-0.037525482 -0.09842234 -0.99443704 -17.007164]
[0.0 0.0 0.0 1.0]

I’m not sure, but those translation values at the right look very large - they are in the same range as my vertex positions. Is that correct?

Kevin

[quote]translationLocal for each joint is the translation values from the SRT SI_Transform for that joint
[/quote]
For bones yes, but for chain roots (if they are animated) it may be the animated position.

[quote]rotationLocal is the result of quaternion interpolation between the two nearest keyframes at time t. At t=0, rotationLocal is equal to the SRT SI_Transform rotation values.
[/quote]
Basically yes. But we don’t do quaternion interpolation. We simply interpolate the rotation fcurves and generate the quaternions based on the final angles. If I got this right, you pre-generate a quaternion at each frame and then interpolate the quaternions? Sounds ok (and probably better/faster this way). By the way, here’s what I use to get the quaternion from the angles (rotationX/Y/Z are the input angles, this.x/y/z/theta are the quat values):

float rot = toRadiansf(rotationX * 0.5f);

float A = (float)cos(rot);
float B = (float)sin(rot);

rot = toRadiansf(rotationY * 0.5f);

float C = (float)cos(rot);
float D = (float)sin(rot);

float y = D * A;
float z = -D * B;
D = C * B;
C *= A;

rot = toRadiansf(rotationZ * 0.5f);

A = (float)cos(rot);
B = (float)sin(rot);

this.x = A * D - B * y;
this.y = A * y + B * D;
this.z = A * z + B * C;
this.theta = A * C - B * z;

[quote]I have used the following code for your Quat - Vector multiplication:
[/quote]
This is what I use:

public Vector3f transform(final Vector3f vector) {
      // transformed vector = quat * vector * quatConjugate
      float vecX = vector.x;
      float vecY = vector.y;
      float vecZ = vector.z;

      float transX = theta * vecX + y * vecZ - z * vecY;
      float transY = theta * vecY - x * vecZ + z * vecX;
      float transZ = theta * vecZ + x * vecY - y * vecX;
      float transW = -x * vecX - y * vecY - z * vecZ;

      vector.x = -transW * x + transX * theta - transY * z + transZ * y;
      vector.y = -transW * y + transX * z + transY * theta - transZ * x;
      vector.z = -transW * z - transX * y + transY * x + transZ * theta;

      return vector;
}

About the values you posted, I’m not sure if they’re supposed to be so large. I’ll have a look at what we get and post again. Anyway, what I had found helpful was to render the final skeleton (I actually used green GL_LINES to get the same look as in XSI ;)). Visual feedback was very important, because the skeleton animation in Marathon is done entirely in the vertex shader (no chance for useful debugging there). Most importantly, it shows whether the problem is in the animation code or the rendering code.

Edit: The quat generation above is what you get if you optimize Quat(rotX) * Quat(rotY) * Quat(rotZ). It may be in the reverse order (I don’t remember/care).

Hi

I can confirm that your Quat transform and Euler to Quat code produces the same results as mine. The order for Euler to Quat is Quat(rotZ) * Quat(rotY) * Quat(rotX), which is weird because all the books and web examples I saw were X-Y-Z. I only discovered the reverse order by trial and error. Maybe it’s a SoftImage thing. But like you say, who cares?

Yours are obviously more optimized.

My horse model has one chain root, which is animated - it is supposed to move slightly up and down as the horse runs. I am ignoring that for now. It shouldn’t be too traumatic to include once the skeletal animation is working.

The way I see it, when on the first keyframe, the final transform matrix should be the identity matrix, because then the mesh should be in its base position. I am transforming each vertex point (straight from the SI vertex positions) by the final matrix - is that correct or am I missing something here?

A potential problem I see is that the SI_transforms in my dotxsi file don’t all have scale = 1 but I am assuming 1 when I create the matrices. However, every joint below the root has both BP and SRT scale = 1. The root has a scale of 2.102196. The basepose scale of the geometry is 1.

How does the Basepose transform of a child joint relate to that of its parent?

I see that the individual rotation and translation values of a child SRT cannot be added directly to the BP of the parent to give the BP of the child. (as was the case with the 3Dstudio ASE data).

If I convert the data from dotXSI into 4x4 transform matrices, then I would expect to see BPchild = BPparent x SRTchild by matrix multiplication, but this is not the case except for the translation components.

BPchild from dotXSI and the result (BPparent x SRTchild) transform the same point to completely different places. I’ve tried (SRTchild x BPparent) - that’s even worse.

I haven’t made screenshots. Some of the things I’ve seen happen to this poor beast are quite disturbing. :o

Kevin

[quote]But like you say, who cares?
[/quote]
:wink:

[quote]The way I see it, when on the first keyframe, the final transform matrix should be the identity matrix, because then the mesh should be in its base position. I am transforming each vertex point (straight from the SI vertex positions) by the final matrix - is that correct or am I missing something here?
[/quote]
Yeah, exactly right. If your skinning code is ok, that is (each vertex should be transformed by the matrix of each bone that affects it, and the results should be combined and weighted according to the envelope weights).
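
In code the weighting boils down to something like this (just a sketch - skinVertex, boneIndices and weights are placeholder names for whatever your envelope data looks like):

// Linear blend skinning for one vertex. 'boneMatrices' are the final bone
// transformations from updateState(). Needs javax.vecmath Matrix4f/Point3f/Vector3f.
static void skinVertex(Vector3f basePosition, Matrix4f[] boneMatrices,
                       int[] boneIndices, float[] weights, Vector3f result) {
      result.set(0.0f, 0.0f, 0.0f);
      Point3f transformed = new Point3f();

      for ( int i = 0; i < boneIndices.length; i++ ) {
            transformed.set(basePosition);
            boneMatrices[boneIndices[i]].transform(transformed); // full 4x4 point transform
            result.x += transformed.x * weights[i];
            result.y += transformed.y * weights[i];
            result.z += transformed.z * weights[i];
      }
}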

[quote]A potential problem I see is that the SI_transforms in my dotxsi file don’t all have scale = 1 but I am assuming 1 when I create the matrices. However, every joint below the root has both BP and SRT scale = 1. The root has a scale of 2.102196. The basepose scale of the geometry is 1.
[/quote]
This is very likely your problem. I’m not sure how exactly, but I’m sure it messes up your results. The root scale (or any scale) has either modified the positions and/or rotations of the chain elements below, or the geometry vertices. Either way, this should be fixed in XSI (which IIRC won’t be easy…) and the model re-exported. Oh, and whip your animators until they scream “I’ll never use scale (without a freeze) again!”. :wink:

[quote]How does the Basepose transform of a child joint relate to that of its parent?

I see that the individual rotation and translation values of a child SRT cannot be added directly to the BP of the parent to give the BP of the child. (as was the case with the 3Dstudio ASE data).

If I convert the data from dotXSI into 4x4 transform matrices, then I would expect to see BPchild = BPparent x SRTchild by matrix multiplication, but this is not the case except for the translation components.

BPchild from dotXSI and the result (BPparent x SRTchild) transform the same point to completely different places. I’ve tried (SRTchild x BPparent) - that’s even worse.
[/quote]
I think the root scale has greatly affected what you’re seeing. This is what I see:

  • The BP is the absolute position/rotation of a given chain element.
  • The SRT is the relative position/rotation of the element with respect to its parent.
  • When animating, the current SRT changes.
  • When you update the skeleton for the current frame, you build the current “absolute pose”.
  • When you multiply the current absolute pose by the inverse basepose, the final matrix you get is the “current transformation” the bone applies to the “basepose” vertices.

So, if you apply the procedure in my 3rd post to the skeleton without animating anything (just the BPs and SRTs as you read them from the dotXSI), you should get a bunch of identity matrices. Just a note, the matrices on the first frame of an animation will generally NOT be identity matrices, unless that animation starts from the base pose (highly unlikely in a running cycle for example).
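
A quick way to check that is something like the sketch below (Bone, name etc. are placeholders for your own node class; transformation and updateState are as in the pseudo-code from my earlier post):

// Sanity check: with the SRT values used as the "animated" local transforms
// (no fcurves applied), every final bone matrix should come out as identity.
static void checkBindPose(Bone root) {
      root.updateState(); // recursive, as in the earlier pseudo-code
      checkIdentity(root);
}

static void checkIdentity(Bone bone) {
      Matrix4f identity = new Matrix4f();
      identity.setIdentity();

      if ( !bone.transformation.epsilonEquals(identity, 0.001f) )
            System.out.println("Bind pose mismatch at bone: " + bone.name);

      for ( int i = 0; i < bone.children.length; i++ )
            checkIdentity(bone.children[i]);
}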

I hope this helps and don’t forget to remove that scale! :slight_smile:

Hi Spasi,

Got my model working late last night! :slight_smile:

Seems the root nodes (those SI_Models with a SI_IK_Root element) cannot be ignored, even though they do not affect the mesh and are not animated. They don’t appear in the effectors list and while they have Fcurves, these are constant over time.

I simply had to import and place these in the hierarchy as if they were real joints, and it works! No other changes required!

I thought they were redundant remnants of the IK setup because my original simple model worked without them. I see now that’s probably due to them being at the same place and rotation as their child joints.

Now this raises another issue. Originally I had around 40 true joints in my hierarchy and that’s now jumped to over 60. This could be quite significant once I have 20 or 30 of these models running around, each individually animated.

I don’t believe these root nodes are essential in the hierarchy, especially since all they really provide is an SI_Transform SRT and BASEPOSE.

I can’t ask the animators if they can eliminate them on their side - after all this they’ll probably have me lynched. I figure I can maybe eliminate them programmatically by combining their SI_Transforms into their child joints. I’ll try that over the next few days unless you can advise otherwise.

And also, rescaling everything to 1 in SoftImage, like you said, didn’t make any difference. I suppose if the scales are consistent within the dotXSI, one can ignore them and just assume a value of 1 throughout when building the matrices, as I’m doing. We still need to get the overall scale to fit the rest of the scene. Currently these horses are about 25m high compared to everything else! Once I know the correct value I can easily apply it on my side.

Many, many thanks for your time.

Kevin

[quote]Got my model working late last night! :slight_smile:
[/quote]
Great! I can feel your relief! :wink:

[quote]Seems the root nodes (those SI_Models with a SI_IK_Root element) cannot be ignored, even though they do not affect the mesh and are not animated. They don’t appear in the effectors list and while they have Fcurves, these are constant over time.

I simply had to import and place these in the hierarchy as if they were real joints, and it works! No other changes required!

I thought they were redundant remnants of the IK setup because my original simple model worked without them. I see now that’s probably due to them being at the same place and rotation as their child joints.
[/quote]
Yes, of course. You absolutely need them in the hierarchy. I thought that was clear, I’ve mentioned both chain roots & chain bones in my previous responses. The ones you don’t need are the effectors*. The reason is that all SRT matrices are defined relative to their parents. And the parent of the first joint in a chain is the chain root. Besides, you want them in the hierarchy for things like an up/down motion of a skeleton root, or certain hacks to keep the joint count low, like shoulder movement without shoulder bones (yes, that’s not very clear, I hope you can visualize what I’m saying).

  • Effectors may be necessary in two cases: a) An effector affects the envelope (that is, there are weights assigned to vertices) and b) A chain is the child of an effector. Both can be avoided if the animators agree to never make effectors part of the envelope assignment (just selecting joints works great) and to make sub-chains children of the last joint, instead of the effector.

I guess the best action here is to benchmark your system and see if it goes over your limit. Although I believe it won’t be a problem, since the bottleneck will probably be the vertex skinning and not the skeletal animation (adding the roots won’t add anything to skinning). In Marathon we can animate ~150 characters with 20-25 (because of vertex shader limits) skin-affecting bones, plus 5-8 roots (for a total of ~30 “chain elements”), about 200 times/sec. Although it’s not really important (the bottleneck has moved to rendering way before that), it can be further optimized using “skeletal LODs”, animating important movements only, updating every couple of frames, etc.

[quote]I don’t believe these root nodes are essential in the hierarchy, especially since all they really provide is an SI_Transform SRT and BASEPOSE.

I can’t ask the animators if they can eliminate them on their side - after all this they’ll probably have me lynched. I figure I can maybe eliminate them programmatically by combining their SI_Transforms into their child joints. I’ll try that over the next few days unless you can advise otherwise.
[/quote]
That’s a cool idea, but to work properly you’ll also have to combine their fcurves (if animated). Keep me informed of any progress on this and…thanks for the tip! :wink:

[quote]And also, rescaling everything to 1 in SoftImage, like you said, didn’t make any difference. I suppose if the scales are consistent within the dotXSI, one can ignore them and just assume a value of 1 throughout when building the matrices, as I’m doing. We still need to get the overall scale to fit the rest of the scene. Currently these horses are about 25m high compared to everything else! Once I know the correct value I can easily apply it on my side.
[/quote]
Hehe, artists always have this tendency to make stuff either too small or too big! It’s strange scaling didn’t make a difference though. Hmm…

[quote]Many, many thanks for your time.
[/quote]
Glad to help! If it isn’t a problem, I’d like to see a screenshot with the result (spasi at zdimensions dot gr).

Hi Spasi,

Just got the elimination of the root nodes to work.

For a chain comprising two true joints separated by a root node:

Joint1 – Root2 – Joint3

RG0 = RotationGlobal calculated for the joint above Joint1
TG0 = TranslationGlobal calculated for the joint above Joint1

TL1 = TranslationLocal for Joint1 from the dotXSI SRT transform
RL1 = RotationLocal Quat for Joint1 from interpolation between FCurve keys

then for Joint1:
TG1 = (RG0 * TL1) + TG0
RG1 = (RG0 * RL1)

For the root2:
TG2 = (RG1 * TL2) + TG1
RG2 = (RG1 * RL2)

And for joint3:
TG3 = (RG2 * TL3) + TG2
RG3 = (RG2 * RL3)

To eliminate the root2, substitute its Global values into joint3.

TranslationGlobal for joint3:
TG3 = (RG2 * TL3) + TG2
= ((RG1 * RL2) * TL3) + (RG1 * TL2) + TG1
= RG1 * (RL2 * TL3 + TL2) + TG1

And (RL2 * TL3 + TL2) is constant (the root is not animated), so this can become the
new TranslationLocal for Joint3.

Similarly for the RotationGlobal of joint3:
RG3 = (RG2 * RL3)
= ((RG1 * RL2) * RL3)
= (RG1 * (RL2 * RL3))
Which means (RL2 * RL3) can be the new RotationLocal for joint3 - that is, each key rotation
quaternion must be pre-multiplied by the constant RotationLocal from the root node.

So to clarify, when building the joint hierarchy:


if (this node is a root node) {
   ignore it, just process its children recursively
}

if (this node is a child of a root node) {

   thisNode.translationLocal = rootNode.rotationLocal * thisNode.translationLocal + rootNode.translationLocal;

   for (each key rotation i) {
      thisNode.rotationLocal[i] = rootNode.rotationLocal * thisNode.rotationLocal[i];
   }

   process children...
}

Simple as that!

I’m using the presence of a SI_IK_Joint in the SI_Model to indicate a “real” joint. A root is identified by the presence of a SI_IK_Root and I’m completely ignoring SI_Models with SI_IK_Effectors.

Not bad - in my case a little more work up front has eliminated 800 floating point multiplies and 640 adds per model per keyframe! And my intermediate data file is a lot smaller too.

There’s a Softimage script available at inframes.com that’s supposed to do this on the Softimage side but we haven’t been able to get it to work.

The new model isn’t much to look at. If you would like to see the existing version of this game, go to www.digiturf.com, click on “Recent Race Results” at the bottom. Click on one of the green movie camera buttons. Choose the “Enhanced Race Viewer” on the left.

Kevin

[quote]Just got the elimination of the root nodes to work.

Simple as that!

I’m using the presence of a SI_IK_Joint in the SI_Model to indicate a “real” joint. A root is identified by the presence of a SI_IK_Root and I’m completely ignoring SI_Models with SI_IK_Effectors.

Not bad - in my case a little more work up front has eliminated 800 floating point multiplies and 640 adds per model per keyframe! And my intermediate data file is a lot smaller too.
[/quote]
Great! Really, really cool optimization, thanks for sharing it!

[quote]If you would like to see the existing version of this game, go to www.digiturf.com
[/quote]
Congratulations Kevin (and team), it looks and plays very nice. Awesome work!