An interesting link from a post in the Box2D forums by a fellow called ‘hardwire’: (http://www.box2d.org/forum/viewtopic.php?f=3&t=143&st=0&sk=t&sd=a&start=30)
The summary makes a really good read. It’s very pragmatic, though it seems they can ignore some issues because it’s a co-op game.
Kev
Wow, it’s startling how closely this resembles my physics synchronization support in JGN… :o
I’m curious about how he’s compressing the physics information being sent across. In my experience I was losing a lot of performance for only minimal gains when compressing the physics info. The biggest distinction is that he doesn’t care about security/hacking, whereas in JGN the client sends physics information for its objects to the server, and the server first validates that information and can override it (sending a correction back to the client) if it is determined to be incorrect.
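Conceptually the check is something like this rough sketch (made-up names, not the actual JGN code):
[code]
// Rough sketch of the validate-then-override idea (hypothetical names,
// not JGN's actual API). The server accepts the client's reported state
// unless it has drifted too far from the server's own simulation.
class BodyState {
    float x, y;   // position
    float vx, vy; // velocity
}

public class PhysicsValidator {
    private static final float MAX_POSITION_ERROR = 0.5f; // world units, tunable

    /**
     * Returns null if the client's reported state is acceptable, otherwise
     * the authoritative server state to send back as a correction.
     */
    public BodyState validate(BodyState reported, BodyState server) {
        float dx = reported.x - server.x;
        float dy = reported.y - server.y;
        if (dx * dx + dy * dy > MAX_POSITION_ERROR * MAX_POSITION_ERROR) {
            return server; // override: correct the client
        }
        return null; // accept the client's version
    }
}
[/code]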
This is awesome. Thanks for posting this, CommanderKeith.
It’s the most fun part of my day when I can talk about this stuff with you guys on JGO
It’s said that roll-back and re-simulation is too slow, but I don’t think it is if you allow a variable time step.
Re-simulating after a 1-second roll-back would take ages if you have to tick through 60 fixed updates, but with a variable timestep you could do it in 1 update (at the extreme). Variable updates are harder to program, for sure.
In my current frame of mind, I think that roll-back and re-simulation is the ‘neatest’ way of doing networked games. There’s no hackery, and clients and server are guaranteed to stay in sync. It’s also quite easy to automate the roll-back: you just have to save the state of the world. That means that as you add new features, you only have to make sure their state is saved, and you don’t have to think about how to synchronize each feature across clients and server.
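To make it concrete, the core of it could look something like this (an untested sketch of the idea, all names made up):
[code]
// Untested sketch: keep timestamped snapshots of the world, and when a late
// input arrives from the network, restore the snapshot from just before it
// and re-simulate forward to the present in one variable-sized step.
import java.util.ArrayDeque;
import java.util.Deque;

public class RollbackWorld {

    static class Snapshot {
        final double time;
        final byte[] state; // serialized state of every body in the world
        Snapshot(double time, byte[] state) { this.time = time; this.state = state; }
    }

    private final Deque<Snapshot> history = new ArrayDeque<Snapshot>();

    /** Call this every tick, right after stepping the physics. */
    public void record(double now, byte[] serializedWorld) {
        history.addLast(new Snapshot(now, serializedWorld));
        while (!history.isEmpty() && history.peekFirst().time < now - 1.0) {
            history.removeFirst(); // keep ~1s of history, our worst-case lag
        }
    }

    /** Call this when an input with an old timestamp arrives. */
    public void applyLateInput(double inputTime, double now, Input input) {
        Snapshot best = null;
        for (Snapshot s : history) { // oldest first
            if (s.time <= inputTime) best = s; else break;
        }
        if (best == null) return; // older than our history, give up
        restoreWorld(best.state);
        input.apply();
        stepPhysics(now - best.time); // one variable step back to the present
    }

    // These would delegate to the real physics engine:
    private void restoreWorld(byte[] state) { /* deserialize all bodies */ }
    private void stepPhysics(double dt)     { /* engine.step(dt) */ }

    interface Input { void apply(); }
}
[/code]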
PS
Another thing - I reckon that client cheating is overrated. I mean really, who is going to bother hacking one of our games? I’d be honoured if someone did it to me! And after all, you can just do server-side sanity checks and then ban people.
I’m not sure about roll-back/resimulate at this point… at the Box2D forums I was arguing for that approach for a while, but Erin Catto (Box2D’s developer) told me that in most big games that option is almost immediately discarded because it’s potentially too slow. I think it’s a killer combination of 1) most physics engines not working well with variable time steps, and 2) the fact that even if you separate the rollback groups by islands, these can potentially be very large (if there’s a lot of debris on a floor or something like that, your islands can reach a long way). CommanderKeith is right: if you had a robust variable-step integrator working well enough that you could take the whole delay in a single step, it would work fine, as the whole thing just takes a step. You just need to be very careful - a lot of things can go wrong when you crank your time step up to the level of network latency, such as tunneling and stack instability.
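If you did want to risk it, the obvious way to be a bit safer (my own untested sketch, nothing Erin suggested) is to cap the step size, so a long catch-up becomes a few medium steps instead of one huge one:
[code]
// Untested sketch: re-simulate a rollback interval in capped sub-steps to
// reduce the risk of tunneling and stack instability at huge step sizes.
public class CatchUp {
    static final double MAX_STEP = 1.0 / 20.0; // largest step we trust

    interface PhysicsEngine { void step(double dt); }

    /** Re-simulate 'remaining' seconds of physics in capped sub-steps. */
    static void resimulate(double remaining, PhysicsEngine engine) {
        while (remaining > 1e-9) {
            double dt = Math.min(remaining, MAX_STEP);
            engine.step(dt);
            remaining -= dt;
        }
    }
}
[/code]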
BTW, Glenn Fiedler (who gave the GDC talk on networking) also wrote a blog post about fixed time steps, which is worth reading, even if not directly related to this discussion: http://www.gaffer.org/game-physics/fix-your-timestep
Glenn’s GDC demos looked good. From what I could tell, the only limitation of his method compared to rollback/resim is the cheating issue, which I agree is probably less of a problem for smaller games (though it does happen if you make it too easy). It’s also something you can hold back a bit with a layer of clever cheat detection.
[quote]BTW, Glenn Fiedler (who gave the GDC talk on networking) also wrote a blog post about fixed time steps, which is worth reading, even if not directly related to this discussion: http://www.gaffer.org/game-physics/fix-your-timestep
[/quote]
Interesting article. One thing I thought was strange was how Glenn suggests decoupling the physics from the graphics, so you update the physics at, say, 30fps but the graphics at 60fps. But then you’re drawing the same physics state twice, so what’s the point? I don’t see why the graphics should run faster than the physics.
I also wonder whether a physics update really takes that long. In my limited experience the graphics in games take >80% of the computer’s time, so doing extra logic/physics updates may not make a big difference.
Changing the viewpoint is not physics, and graphics-wise you can do some tricks.
CommanderKeith wrote:
[quote]Interesting article. One thing I thought was strange was how Glenn suggests decoupling the physics from the graphics, so you update the physics at, say, 30fps but the graphics at 60fps. But then you’re drawing the same physics state twice, so what’s the point? I don’t see why the graphics should run faster than the physics.
[/quote]
It works best if you don’t draw exactly the same thing twice, but lerp between the last physics frame and the next one based on the actual elapsed time. For instance, in the situation you mentioned, I would draw all my objects at positions and rotations halfway between where they were on the last step and the current step, then the next frame I’d draw them exactly where they are at the current step. This is particularly convenient when you don’t know exactly how long it will be between graphics steps, or when you just want to let the program scream and hit as high a frame rate as your computer can support, which is often the desired behavior (esp. in AAA games) these days. A well-thought-out multi-threaded design could conceivably use the time you’d save on physics to do something else, thus letting you have smooth-looking graphics without spending all that time on physics.
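In loop form it’s roughly what Glenn’s article describes; here’s my own untested Java sketch of it, with a dummy one-dimensional ‘state’ standing in for the whole world:
[code]
// Sketch of the decoupled loop: physics always advances by a fixed dt, and
// rendering interpolates between the previous and current physics states by
// how far the real clock has progressed into the next step.
public class FixedStepLoop {
    static final double DT = 1.0 / 30.0; // physics at 30Hz

    double prevX, currX;       // stand-ins for full previous/current world states
    volatile boolean running = true;

    void run() {
        double accumulator = 0.0;
        double previousTime = System.nanoTime() / 1e9;

        while (running) {
            double now = System.nanoTime() / 1e9;
            accumulator += now - previousTime;
            previousTime = now;

            while (accumulator >= DT) {
                prevX = currX;           // remember the last physics state
                currX = step(currX, DT); // advance physics one fixed step
                accumulator -= DT;
            }

            // 0 <= alpha < 1: fraction of the way to the next physics state
            double alpha = accumulator / DT;
            render(prevX + (currX - prevX) * alpha); // lerp, then draw
        }
    }

    double step(double x, double dt) { return x + dt; } // dummy physics
    void render(double x) { /* draw the interpolated state */ }
}
[/code]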
There are optimizations inside physics engines that can be applied if your frame rate is fixed, which might explain why people prefer that approach. Also you don’t have to worry about instability due to variable time steps, as you can just tune your physics constants to work with the fixed physics step rather than trying to figure out how to calculate proper parameters as the step size changes.
FWIW, I’ve always been theoretically fond of the time-warp algorithm (http://www.cs.berkeley.edu/~daf/games/webpage/SimList/Papers/Mirtich-2000-TRB.pdf), which is sort of the ultimate variable-time-step roll-back-and-resimulate algorithm. Some bits of physics are simulated way ahead of the current time when it’s “safe” to do so, and in the end you’re pretty much only simulating a single step from collision to collision, on a per-body basis (methods like conservative advancement try to step from collision to collision globally). Supposedly it could be useful for parallel implementation, too, which will likely become much more relevant over the next decade or so. I don’t know if anyone actually uses it in practice for games (the fact that it looks into the future makes it inconvenient when you don’t know about user input until the moment it’s needed), but you might find it interesting to read about.
[quote]I also wonder whether a physics update really takes that long. In my limited experience the graphics in games take >80% of the computer’s time, so doing extra logic/physics updates may not make a big difference.
[/quote]
This is true. However, in a lot of games (especially bigger commercial ones) what happens is that during the early planning stages of the game each of the teams (physics, AI, game logic, graphics) is allocated a specific amount of both memory and processor time per frame, and they have to be sure not to go above those limits. Physics, being somewhat cheap compared to graphics, often gets the shaft in this budgeting (gotta make everyone cut corners to ensure those screenshots look good!). This is why in some physics engines (Box2D, for instance) you’ll see homegrown memory allocators that explicitly manage a chunk of RAM set aside at the start. This is especially important on consoles, where memory is tighter than on the desktop. It’s harder to put tight limits on processor time, but an important step in that direction is to make sure that you don’t have any potential blowups in your usage due to variables you can’t control (user latency + island size, for instance). Since a single half-second rollback (which is not unheard of) triggers a full 30 frames of resimulation at 60fps, in order to fit that in your budget you’ve really got to build in a safety factor of 30x, which I suspect is difficult to do in most cases, especially since the early estimates of the time you’ll need for physics will likely be cut to the bone to start. Even without rollback you’d have to build in enough safety to handle the island-size unknowns, but being sure you only have to handle one step of physics per frame is still preferable.
Without such a tight budgeting of processor time, I agree, it might not be a big problem, as you could either just accept a small framerate glitch or try to leave some spare cycles elsewhere in case you run overtime in physics. I know you’ve been playing with this type of method, and I think for smaller games it will likely work well, especially if you’re careful (or, again, if you have a good variable time step simulator working). But I’m pretty sure that the above paragraph explains why it’s not a more popular method with the big game makers.
Thanks for explaining that. I’ve bookmarked it for close reading later. The screenshots at the bottom look sensational; I’d love to see that kind of physics work in real-time.
I see what you mean about doing a half-way graphical update now, and that’s a pretty cool idea!
About the merits of the roll-back and re-simulate approach: while it might be infeasible to re-simulate 1s in a big game, it still seems like the best way to do a network game without having to think too hard about how to synchronize everything. It’s quite a general method that seems like it would work with most games, whereas other networking strategies are custom solutions whose makers had to think carefully about them (such as Aramaz’s xsw game (http://www.java-gaming.org/forums/index.php?topic=12020.msg143541#msg143541) and Kev G’s mootox (http://www.java-gaming.org/forums/index.php?topic=18132.msg142591#msg142591)).
[quote=“CommanderKeith,post:4,topic:31469”]I reckon that client cheating is overrated. I mean really, who is going to bother hacking one of our games?
[/quote]
If it ever starts to become an issue, you’ll be VERY glad you thought about it from the start.
“Do not trust the client”. Wise words.
Indeed