Controllers for virtual humans

Wouldn’t it also be an idea to make it more human-like with a learning process? There would be some criteria, like movement speed, and an AI that learns by trial and error how to walk, so it can automatically adjust its behavior. This way we wouldn’t have to design complex movement structures to keep it moving - and it would also be able to stay standing in unusual situations, e.g. when it gets hit…

The only question is which is easier: writing the AI or writing the movement behavior.
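The trial-and-error idea could be sketched as a simple hill-climbing loop: randomly perturb the controller parameters, and keep the change only if the result scores better. This is a minimal sketch under assumptions - `TrialAndErrorWalker`, `perturb` and the toy `simulate()` stand-in are all illustrative names, not part of Gamma or ODEJava; in the real system `simulate()` would run a physics rollout and return the distance walked:

```java
import java.util.Random;

public class TrialAndErrorWalker {
    static final Random RNG = new Random(42);

    // A controller is just a flat array of joint parameters (amplitudes, phases...).
    static double[] perturb(double[] params, double step) {
        double[] next = params.clone();
        for (int i = 0; i < next.length; i++) {
            next[i] += (RNG.nextDouble() * 2 - 1) * step; // small random change
        }
        return next;
    }

    // Toy stand-in for a physics rollout: a deterministic fitness with its
    // optimum at 0.5 per parameter. A real version would run the simulation.
    static double simulate(double[] params) {
        double score = 0;
        for (double p : params) score -= (p - 0.5) * (p - 0.5);
        return score;
    }

    public static void main(String[] args) {
        double[] best = new double[6];
        double bestScore = simulate(best);
        for (int trial = 0; trial < 1000; trial++) {
            double[] candidate = perturb(best, 0.1);
            double score = simulate(candidate);
            if (score > bestScore) { // keep only improvements
                best = candidate;
                bestScore = score;
            }
        }
        System.out.println("best score after learning: " + bestScore);
    }
}
```

The appeal is that the loop never needs to know *how* a human walks, only how to score a trial - which is exactly why the scoring criterion matters so much.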

It’s just that I see a problem with NewbTon’s way of controlling the human - he said he would use a big plate for the feet. This wouldn’t be any improvement over frame animations, because the human’s feet would no longer fit on stairs.

I think this is a good idea, but:

  • If we want to use this in a game, the learning process has to be finished beforehand, or the game would be unplayable at the beginning
  • We should find a good learning “algorithm”. I see many problems with this approach, although it’s a very interesting idea.
    Some ideas/problems:
  • How do we define that the ragdoll “should walk”? We could tell it to move (comparing positions) in a given direction, and then it would try to move all its bones until its position changes, and so on. One problem: is the way humans walk the best way to move? If it’s not, we would see our ragdoll move like an animal.
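One hedged way to encode the “should walk” criterion from the position-comparing idea above: score each trial by how far the pelvis moved along a target direction, and strongly penalize trials where the torso falls below a height threshold. `WalkReward` and its parameters are illustrative assumptions, not an existing API:

```java
public class WalkReward {
    // start/end = pelvis positions (x, y, z); dir = unit target direction.
    // torsoHeight is sampled at the end of the trial.
    static double reward(double[] start, double[] end, double[] dir,
                         double torsoHeight, double minHeight) {
        if (torsoHeight < minHeight) {
            return -1.0; // the ragdoll fell over: reject this trial outright
        }
        double dx = end[0] - start[0];
        double dz = end[2] - start[2];
        // Only progress along the target direction counts, so sideways
        // drift is ignored rather than rewarded.
        return dx * dir[0] + dz * dir[2];
    }
}
```

Note this scores *any* locomotion equally - which is exactly the open question above: a gait that maximizes this reward might look more like an animal than a human.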

Regarding the Gamma project, I think it would be reasonable to use this development plan:

  • Version 0.1: Only static graphics (OBJ / 3DS formats) are supported. However, it’s possible to use constraints (ODEJava joints)
  • Version 0.2: Frame animations (MD2) are supported, and behaviors are introduced, so it’s possible to apply multiple behaviors to make the object move: for example, walk and fire at the same time
  • Version 0.3: The self-learning process is done

For the behaviors I suggest the following ideas:

  • There are “key bones” that are needed for a behavior: for example, a “walk” behavior would need all the human leg bones: thigh, knee, foot
  • Behaviors are all adjustable with different parameters (speed, angles…)
  • When multiple behaviors act on the same “key bone”, the result is the mean of all the modifications (or maybe we should use a priority system?).
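The blending rule above - the mean over all behaviors that touch a bone - could look like this. `Behavior`, `keyBones()` and `targetAngle()` are hypothetical names for the sketch, not Gamma API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Each behavior declares which bones it needs and proposes a target
// angle for each of them.
interface Behavior {
    Set<String> keyBones();
    double targetAngle(String bone);
}

public class BehaviorBlender {
    // Resolve conflicts by averaging: when several behaviors act on the
    // same key bone, the result is the mean of their proposed angles.
    static Map<String, Double> blend(List<Behavior> behaviors) {
        Map<String, Double> sum = new HashMap<>();
        Map<String, Integer> count = new HashMap<>();
        for (Behavior b : behaviors) {
            for (String bone : b.keyBones()) {
                sum.merge(bone, b.targetAngle(bone), Double::sum);
                count.merge(bone, 1, Integer::sum);
            }
        }
        Map<String, Double> result = new HashMap<>();
        for (String bone : sum.keySet()) {
            result.put(bone, sum.get(bone) / count.get(bone));
        }
        return result;
    }
}
```

A priority system would slot in here easily: instead of an unweighted mean, each behavior could carry a weight, turning the average into a weighted one.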

Your schedule for Gamma seems good. We should of course do the complex stuff later, so that we already have a good testing environment. Also, we should definitely make the learned data saveable (ObjectOutputStream), but I think these are implementation details which we should discuss when the time comes…
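Saving the learned data with ObjectOutputStream, as suggested, only requires the container class to implement Serializable. A minimal sketch - `LearnedParams` is a hypothetical container for whatever the learning process produces:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SaveLearning {
    // Hypothetical container for learned controller parameters.
    static class LearnedParams implements Serializable {
        private static final long serialVersionUID = 1L;
        double[] jointParams;
        LearnedParams(double[] p) { jointParams = p; }
    }

    // Write the learned parameters to disk with standard Java serialization.
    static void save(LearnedParams params, File file) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(params);
        }
    }

    // Read them back; the cast is safe as long as the file was written by save().
    static LearnedParams load(File file)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(file))) {
            return (LearnedParams) in.readObject();
        }
    }
}
```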

Yes, and for now it’s important to finish all 6 Gamma tutorials, because when they are all running correctly we will have finished Gamma 0.1!
Here’s the list (it has been extended since the first time):

  1. Hello World : finished
  2. Stress Test : finished
  3. Shoot the cans : to complete
  4. Splasher Boy : to do
  5. Bidus : to do
  6. <A secret project, for now> : to do