How to update the world?

Hi,

I've got a problem: I'm currently using only a single self-made Animator class (like the FPSAnimator) to update the world and render the scene. This works almost fine… I call drawable.display() followed by drawable.swapBuffers() to produce a new frame. The time needed for displaying is measured and results in a time-scale factor for the world update. I want to draw 60 frames per second, which means I have 16.6 milliseconds for drawing a single frame. If the rendering needs more time, the time-scale factor grows and the world update offsets the game time a bit more. If less time was needed, the thread may sleep for a while.

The problem comes from the display and swapBuffers methods, which are not executed when I call them, but whenever they want to be.
The little image shows an in-game GUI. The first line shows the current fps and a graph of the last 10 seconds. The second line shows the measured mspf (milliseconds per frame). The green graph shows the mean values over 10 seconds and the dotted white graph the last 30 values (the values for half a second). The last line shows the tsf (time-scale factor) and the difference between the last 2 mean values. The dotted lines show my problem: most of the display/swapBuffers calls need nearly 0 ms, and some calls need 16.6 ms. One could imagine that this comes from the vsync option (setSwapInterval), but it looks like this in both cases.

It seems that the interface/device buffers the render requests and waits some time before it renders all buffered requests at once. But how can you achieve a steady world update with this?
The FPSAnimator class produces a steady framerate and render-call rate, but while using drawable.repaint() with auto-swap-buffer mode you have no chance to measure the time passed or the framerate reached. If you want to reach 60 fps and the machine can only produce 10 fps, the world's time passes in slow motion…

Does anyone have an idea that could help me with this problem?

Greeting,
Achilleos.

Are you storing a delta time from your last update, and using that to control the “amount” of update that occurs? I.e. if you have a character that moves at 100 pixels/second, and your delta time is 10 milliseconds, then you want to move him (10 / 1000) * 100, or simplified (10 * 100) / 1000, or 1 pixel. Then if the delta time fluctuates, you obviously move him a decreased or increased amount, to give the appearance of fluid motion.

You can apply this technique to all draw calls in order to get a similar smoothness.
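
For example, a minimal sketch of that movement calculation (the variable names are made up):


float x = 0f;                    // current position in pixels
float speed = 100f;              // movement speed in pixels per second
long deltaMs = 10;               // measured delta time in milliseconds

x += speed * (deltaMs / 1000f);  // (10 / 1000) * 100 = 1 pixel this frame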

Also, to get delta time, you pretty much want to have a target time stored (in the above case, that’s the 1000), then in your update you say:


long now = System.currentTimeMillis();
long delta = now - lastUpdate;           // milliseconds since the last update
lastUpdate = now;
update(delta / (float) targetDelta);     // cast to float to avoid integer division

The delta time is not usable. That's my point.
Take an empty scene. Take the default FPSAnimator with a scheduled animation of 60 fps.
Take your lines of code and just add a System.out.println(delta);.

I got this (for example):
16
16
16
15
16
31
16
16
15
32
16
0
15
15
16
16

And that's the problem.
Taking a mean value of the deltas is incorrect too: measured over a second, the render rate is still okay, but you cannot average over a whole second, since that results in slow corrections of the update time.
Currently I do the following, which works quite well:
Take the last 5 deltas, sort them in a list, and take the middle one.
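
A minimal sketch of that median-of-five filter (the class and method names are made up):


import java.util.Arrays;

public class DeltaFilter
{
  private final long[] history = new long[5]; // the last 5 measured deltas
  private int index = 0;

  // store the newest delta and return the median of the last 5
  public long filter(long delta)
  {
    history[index] = delta;
    index = (index + 1) % history.length;
    long[] sorted = history.clone();   // sort a copy, keep insertion order
    Arrays.sort(sorted);
    return sorted[sorted.length / 2];  // the middle element is the median
  }
}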

(By the way: I have to correct myself. I took another look at the source of the animator classes and saw that drawable.display() is used with autoSwapBufferMode(true).)

And another “by the way”: this seems to occur only on Windows XP. Windows Vista produces constant values. Linux and Mac OS X systems also seem to have this problem. -> All tested only with NVIDIA cards.
Oh, another “by the way”: ATI cards sometimes render only 30 frames while the display method is called 60 times…

You should try to decouple your game update from your graphics update. Make it so that your game updates at a constant 60 updates/sec. After updating the game state, check whether there's enough time “left” to finish rendering a frame. This way the game logic will always run at 60 updates/sec, and if a computer can't keep up, it will only get laggy graphics instead of a laggy game state.
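
A minimal sketch of that decoupling (updateGame() and renderFrame() are placeholder names):


public class FixedStepLoop
{
  static final long STEP_MS = 1000 / 60; // one logic step is ~16 ms
  static boolean running = true;

  public static void main(String[] args)
  {
    long nextUpdate = System.currentTimeMillis();
    while (running)
    {
      long now = System.currentTimeMillis();
      while (now >= nextUpdate)      // catch up on all pending logic steps
      {
        updateGame();                // always advances exactly one fixed step
        nextUpdate += STEP_MS;
        now = System.currentTimeMillis();
      }
      renderFrame();                 // the graphics get whatever time is left
    }
  }

  static void updateGame()  { /* advance the world by exactly STEP_MS */ }
  static void renderFrame() { /* draw the current world state */ }
}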

The algorithm is slightly more complicated, but I think this link describes it in pretty good detail:
http://www.gaffer.org/game-physics/fix-your-timestep

Thanks, that's a nice article!

Damn it. I just remembered a problem.
Let's talk about this:


    float t = 0.0f;
    const float dt = 0.01f;

    float currentTime = time();
    float accumulator = 0.0f;

    State previousState;
    State currentState;

    while (!quit)
    {
         float newTime = time();
         float deltaTime = newTime - currentTime;
         currentTime = newTime;

         accumulator += deltaTime;

         while (accumulator>=dt)
         {
              previousState = currentState;
              integrate(currentState, t, dt);
              t += dt;
              accumulator -= dt;
         }

         const float alpha = accumulator / dt;

         State state = currentState*alpha + previousState*(1.0f-alpha);

         render(state);
    }

I do something similar. I do not accumulate the time calculated by the “animating engine part”, but I calculate a factor for scaling the “dt”. That way I only need to make one step when animating the world, which is fine since I have no physics engine running. Making one animation update followed by one render call is just fine if you keep track of the time passed and offset the animations accordingly.
The problem comes from the devices or interfaces themselves. If the graphics hardware buffers images, there is no chance of making “non-jerky” animations. Take a look at the timings 2 posts before. If a complete engine tick happened in less than a millisecond, the world should not update. But if the graphics hardware delayed the buffer swapping or something else, the world should have updated. The problem lies in measuring the time passed. The graph with the white dots shows it. Some frames take time, some don't (even though the scene does not change). That way the world does not update for about 5 frames and afterwards advances the time passed for 10 frames within the next 5 frames…

I am not sure about this, but I have tried a lot of things. As soon as I start to measure time and consider the time passed in the world update and render/buffer-swap call, all animations start to show this subtle visual non-smoothness. If I say: “hey, vsync is on, 60 fps are okay, just make the world update”, then everything is smooth.

A fixed timestep is a major advantage when you're dealing with drag and bounces. It ensures your game will be predictable, while with a dynamic dt the gameplay will depend on your framerate. Just like in some (old) arcade games where you can only make a jump at a certain framerate.

[quote]you can only make a jump with a certain framerate.
[/quote]
There should be no difference when using a fixed dt.
If the framerate is too low, you can have 100 animation updates without a render in between; you just won't see the jump.
Player presses “Up”… tick 0: start jump… tick 20: the animated model is in the air… tick 40: the animated model is back on the ground… tick… tick… render.
The problem with old arcade games was that absolutely no dt was used. Update -> Render -> Update -> Render. Old hardware: slow, unplayable game; new hardware: extremely fast, unplayable game. From the point where you start considering the time passed, you do not have this problem. A fixed dt is as good as a dynamic dt. Integrating a physics engine takes this to a new level… but I have no physics engine; that's why I want to make as few world updates as possible. And that is one update per rendering.

You might not see it, but you'd be at the other end of the gap. Guaranteed! With a variable dt you might or might not have jumped the gap fully - you might land a few pixels farther or nearer.

Handling acceleration and/or collision with a variable dt eventually becomes simply chaotic.
With a fixed dt it is deterministic. That is especially important in multiplayer games, but even in singleplayer the framerate won't affect the gameplay, which is important in specific genres.

I agree that a fixed dt is more robust than a dynamic dt. Maybe at some point a physics engine will be used in the project. That's why I will change my code to use the algorithm provided above. But some additional changes have to be made for this to work, and making them will take some time (because I have other things to finish first). I will post again when I have finished them and tell you about the result.

But I don't expect this to fix my problem. If it does not, I will provide a little applet (about 100 lines of code) which demonstrates the problem. Because the problem does not come from using a dynamic dt but from the devices, which delay images. Currently I guess that using swapBuffers() is the problematic part. Using autoSwapBuffers does not seem to produce this error.

Thanks so far,
Achilleos.

Well, I made the changes and, as expected: it doesn't change a thing.
As promised, I made a little test applet.

Here comes the source:


package test;

import java.awt.Container;

import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;
import javax.media.opengl.GLCanvas;
import javax.media.opengl.GLCapabilities;
import javax.media.opengl.GLEventListener;
import javax.swing.JApplet;

import com.sun.opengl.util.FPSAnimator;

public class Engine extends JApplet
  implements GLEventListener
{
  private static final long serialVersionUID = 4056777684480799564L;
  
  protected Container frame = null;
  protected GLAutoDrawable canvas = null;
  protected FPSAnimator fpsAnim = null;
  
  protected long currentTimestamp = System.currentTimeMillis();
  
  protected GL gl = null; 

   @Override
  public void init()
  {
     frame = this;
     frame.setName("Test");
     this.setSize(10, 10);
     
     GLCapabilities glCaps = new GLCapabilities();
     glCaps.setHardwareAccelerated(true);
     glCaps.setNumSamples(4);
     glCaps.setSampleBuffers(true);
     
     canvas = new GLCanvas(glCaps);
     canvas.addGLEventListener(this);
     
     frame.add((GLCanvas)canvas);

     fpsAnim = new FPSAnimator(canvas, 60, true);
  }
  

  @Override
  public void start(){
    fpsAnim.start();
  }

  @Override
  public void stop(){
    fpsAnim.stop();
  }

  @Override
  public void destroy(){
  }


  @Override
  public void display(GLAutoDrawable arg0){
    long newTime = System.currentTimeMillis();
    long deltaTime = newTime - currentTimestamp;
    currentTimestamp = newTime;
    
    System.out.println(deltaTime);
  }


  @Override
  public void displayChanged(GLAutoDrawable arg0, boolean arg1, boolean arg2){
  }


  @Override
  public void init(GLAutoDrawable drawable){
    gl = drawable.getGL();
  }


  @Override
  public void reshape(GLAutoDrawable arg0, int arg1, int arg2, int arg3, int arg4){ 
  }

}

I have uploaded a webpage which runs the applet.
Just follow this link:

http://www.informatik.uni-bremen.de/~pachur/index.html

You see nearly nothing, just a 10x10 px rectangle in the upper left corner of the page.
The interesting part (the deltaTime between 2 calls) is printed to the Java console.

While nothing is drawn and nothing difficult happens, the deltaTime varies from 15 milliseconds up to 32 milliseconds. But the FPSAnimator class is set up properly and calls the display function 60 times per second.
Just taking the delta timings and updating the world produces jerky animations, because no frame actually needed additional time; the buffer swapping just seems to delay itself. As a result, some frames fast-forward the in-game time.

Help is still needed. Maybe there is something I forgot.

Achilleos

System.currentTimeMillis() has an accuracy of ~16ms on your system.

Try:


public static long smoothTimeMillis()
{
   return System.nanoTime() / 1000000L; // nanos -> millis
}

Take care of the AMD dual core bug then…

The accuracy is 1 ms and okay. 16.6 ms per call would be the correct timing, since the FPSAnimator is set to 60 fps, which means a single image every 1/60 of a second.

Have you checked? On most Windows machines it is around 15-16 ms, which is what you see when you printed out the deltas.

No, I didn't check it. I thought that is what the function does. But actually you seem to be right. The API says:

I will try nanoTime tomorrow and tell you about the results. But some comments earlier someone said that there is a problem with nanoTime on AMD CPUs…

It goes back in time (or slows down, or fast-forwards) due to the CPU throttling its frequency. Furthermore, different CPU cores can have different ‘offsets’.

With a bit of checking, it can be made more or less stable - i.e.: acceptable.
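
A minimal sketch of such a check, simply never letting the reported time run backwards (the class name is made up):


public final class StableClock
{
  private static long last = System.nanoTime();

  public static synchronized long nanos()
  {
    long now = System.nanoTime();
    if (now < last)
      now = last;   // another core reported an earlier time; stand still instead
    last = now;
    return now;
  }
}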

[quote]Have you checked? On most windows machine it is around 15-16, and is what you see when you printed out the deltas.
[/quote]
Yup, on Windows it is around 16 ms.

Maybe you can perform your logic like the following?


long rate = 20; // logic steps of 20 ms
long lastTime = System.currentTimeMillis();


// your uncontrolled-framerate render loop
void renderLoop()
{
  long time = System.currentTimeMillis();

  int nbLogic = (int)((time - lastTime) / rate); // how many full steps fit in

  if (nbLogic == 0)
    return;

  lastTime += nbLogic * rate; // keep the remainder for the next frame

  // then perform the logic, in one of two ways:

  logicNStep(nbLogic);

  // or

  for (int numLogic = 0; numLogic < nbLogic; numLogic++)
  {
    logic();
  }
}

// perform logic for one 20 ms step
void logic()
{
  // e.g. pos += mv;
}

// perform logic for nbStep steps of 20 ms at once
void logicNStep(int nbStep)
{
  // e.g. pos += mv * nbStep;
}

[quote]The problem comes from the display and swap buffers methods, which are not invoked as i call them, but any time they want to.
[/quote]
I don't think it is the best way, but you can perform active rendering if you want to control the display: http://www.java-gaming.org/index.php/topic,17661.0.html

Just to complete this thread:
It works with nanoTime, but the AMD Athlon64 returns different values for the cores after some time. But that's another problem.

Thanks a lot.

You can make a dedicated thread that writes its time into a globally accessible variable. That way you won’t suffer from different cores with different times, and you significantly reduce your calls to nanoTime (to a fixed frequency).
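
A minimal sketch of such a timer thread (the class name is made up):


public final class TimeKeeper extends Thread
{
  // written by this thread only, readable from anywhere
  public static volatile long millis = System.nanoTime() / 1000000L;

  public TimeKeeper()
  {
    setDaemon(true); // don't keep the JVM alive just for the clock
  }

  @Override
  public void run()
  {
    while (true)
    {
      millis = System.nanoTime() / 1000000L; // only one thread samples the clock
      try
      {
        Thread.sleep(1); // refresh roughly once per millisecond
      }
      catch (InterruptedException e)
      {
        return;
      }
    }
  }
}

Start it once with new TimeKeeper().start(); and read TimeKeeper.millis from the game loop.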