[SGS] DarkShooter

I’d kinda got the impression from Jeff that a 1 sec resolution on the server was considered sufficient. To me it seems a bit low for the dynamics model. Ok :slight_smile: maybe 60 Hz client side is a bit too high. But it still wants to be a lot higher than supported by PDTimer as is. You still end up with two simulations running at different rates.

Maybe I’m being a bit slow on the uptake or expecting more than is available. I suspect I need to find some time to cut some code and then come back with some more questions. :wink:

@Endolf

So basically, in your game you’re not really running a duplicate simulation on the server, just checking that what the client says is reasonable and letting the clients control the dynamics?

Dan

Hi

Not quite.

The client sends its average thrust vector to the server, the server sends this to everyone, and they all interpolate that client to that location. The rotation of small objects will have no bearing on things, and larger objects will have much slower allowed rotation times (think small fighter, large wings, roughly spherical in general terms, vs. a large, long carrier-type object). This means that the client feels responsive, even though it’s actually not doing anything different.

By the time the client receives an ‘ok’ for where it wants to go, at most half a second has gone by, but they are already looking that way, so the thruster lag resolves itself: the current position is marked as the start point, that is then multiplied with the thrust vector, and the next position calculated. Then it’s interpolated and the client moved. During this move, we are on to the next game cycle, and they can rotate and change the thrust vector again. Thruster lag takes effect and 1/4-1/2 a second later they start to move off in the new direction.
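
Roughly, as a sketch (the class and method names here are made up, not my actual code), the accepted thrust vector just acts as a velocity and the client interpolates each frame:

```java
// Minimal sketch of the loop described above. ShipState and its methods are
// illustrative names only.
public class ShipState {
    double x, y, z;    // current, interpolated position
    double vx, vy, vz; // velocity derived from the last server-accepted thrust vector

    // Called when the server echoes back the accepted average thrust vector:
    // the current position becomes the new start point and the thrust vector
    // determines where the ship will be by the end of the next game cycle.
    void applyAcceptedThrust(double tx, double ty, double tz) {
        vx = tx;
        vy = ty;
        vz = tz;
    }

    // Called every render frame so the ship keeps moving smoothly while the
    // next acknowledgement (up to ~1/2 second away) is still in flight.
    void interpolateFrame(double dtSeconds) {
        x += vx * dtSeconds;
        y += vy * dtSeconds;
        z += vz * dtSeconds;
    }
}
```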

In Astroprime, for example, a ship could rotate with no thrusters active and would carry on in the previous direction, traveling backwards or sideways. It’s only when the thrusters kick in that things need to change.

Like I said, I don’t know how it will play, it may be awful, it may make you seasick, but until I try, I don’t know :slight_smile:

Endolf

[quote]If you have a hard reference to the contrary please present it.
[/quote]
I’ve talked to Cryptic; I know some people who work there. They send user input from the client to the server (at least they did a year back), and simulate on the server. This amounts to lock-step per object. I don’t think they use lock-step for the entire world. Other simulations actually use lock-step for the entire world (like www.there.com).

When you say that “no server keeps data resident,” that’s the conceptual model, right? In reality, the object store would not perform very well unless it could keep committed data resident on the “most probable” node for the next get, and only migrate the ownership of the state if the next get comes from somewhere else. Thus, locality in successive get-s is important for good performance. Even if you have a large, expensive NUMA machine, local memory is faster than ownership transfer.

Now, when it comes to walk meshes, well-performing collision detection data takes a little while to demarshal when you load it. You certainly don’t want to do that each time you want to run another task on an object that references the mesh. Thus, assuming the objects stay alive between tasks, as long as they don’t have to migrate, that’s what I meant by “keeping objects resident”.

Now, consider the case where you’re simulating something big – the Earth, say. Clearly, you can’t fit all the walk meshes of the Earth into RAM on a single node. Similarly, if you have a lot of entities distributed over this simulation and you use stochastic distribution of entities across nodes, you will get a worst-case reference pattern: each node needs to load walk meshes (and other world data) for large parts of the simulated Earth. There is no way that such a distribution would perform well. Thus, proper load balancing MUST take locality of reference into account, such that things in New York City tend towards one node, and things in Kuala Lumpur tend towards another node, because the amount of duplicated or transferred objects between nodes goes down a lot, which means performance goes up a lot.

However, I see no way for objects to hint what localities of reference may matter. I also see no posts claiming that you can derive these localities automatically in a way that’s actually sufficient to solve the problem – but perhaps you can. You don’t need to go into details about how it works, but if this problem is “solved” it’d be interesting to hear that it is; if it’s not solved, then that’d be important to know, too.

[quote]I’ve talked to Cryptic; I know some people who work there. They send user input from the client to the server (at least they did a year back), and simulate on the server. This amounts to lock-step per object. I don’t think they use lock-step for the entire world. Other simulations actually use lock-step for the entire world (like www.there.com).
[/quote]
As it happens we’re talking to some Cryptic people, and I’ll ask them too.

However, I STRONGLY doubt that’s the entire model. From what I see playing it, it CAN’T be, because you would see control lags. A true lock-step would also make everyone’s machine run as slow as the worst one in the simulation, and I don’t see that effect either. A true lock-step would never need to roll back the client’s idea of character position, but we DO see that happen in periods of more intense lag.

My suspicion is that what is really happening is that they are doing a local simulation for cosmetic reasons, but making key decisions such as combat and collision cheat detection based on the server’s model of the world. Basically the same as JNWN. The only real difference, and it’s a minor one, is the level of data transfer (controller state vs. object state). That is more or less orthogonal to the issue of lock-step vs. open loop.

This gets into implementation things that I both can’t discuss and that are free to change under the hood. Suffice it to say our initial POC got sub-millisecond response times without such local caching.

Deserialization times have proved not to be a performance road-block. That sub-ms time INCLUDES deserialization.

As already explained, it’s not all on a single node. You’re starting from a false premise and thus drawing false conclusions.

[quote]So on the server model would 1 second per tick be considered sufficient? I would be expecting to run the client side at about 60 ticks per sec; would this not lead to lots of difficulties syncing up the server and client?
[/quote]
Um can I say this?

no no no bad bad bad.

The last thing you want to be doing is driving your server by “frame rate” as if it was a client and trying to sync at that level. From what I know, that was the disastrous mistake Sony made with SWG when they initially developed it, and anyone who tried to play it can tell you what the results were like.

Such architectures typically come from game programmers trying to program servers the same way they know how to program clients, and it’s fundamentally wrong. A well-done server should be as event/transaction driven as possible. Now, sometimes heartbeats are inevitable, but if you’re designing your entire game around trying to make heartbeats match rendering frames, you’re definitely starting on the wrong foot.

A bit of thought about the bandwidth requirements for frame-rate server interaction will soon put things in perspective.

Assuming a dial-up connection, that’s around 3k bytes/second maximum client->server, and quite possibly less. (Remember 56k dial-up is actually asymmetric; you don’t get 56k upstream.) If each game object is (say) 50 bytes, then you can transmit around 60 objects per second. If you ran at frame rate, this would only allow 1 game object per client :’(
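
Spelling that arithmetic out (same assumed figures as above, nothing measured):

```java
// Back-of-the-envelope budget for the numbers above. All figures are the
// assumptions stated in the text (dial-up upstream, 50-byte objects).
public class UpstreamBudget {
    public static void main(String[] args) {
        int upstreamBytesPerSecond = 3_000;  // ~3 KB/s upstream on dial-up
        int bytesPerObject = 50;             // assumed object update size
        int objectsPerSecond = upstreamBytesPerSecond / bytesPerObject; // = 60
        int clientFrameRate = 60;            // frames per second on the client
        System.out.println("objects/second = " + objectsPerSecond);
        System.out.println("objects/frame  = " + objectsPerSecond / clientFrameRate); // = 1
    }
}
```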

So … we have to reduce the update rate dramatically. For SharpShooter Arena, I sent everything at 5Hz and used interpolation on the client to smooth things out. I also reduced the amount of data per object to the bare minimum (position, velocity, object ID - less than 20 bytes). Each client could only transmit 2 objects - 1 for the player, another for the player bullet.
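
As a rough sketch of what a sub-20-byte update could look like (this is my guess at a layout, not the actual SharpShooter Arena wire format):

```java
import java.nio.ByteBuffer;

// Illustrative packing of a minimal entity update: object ID, 2D position,
// 2D velocity. The field sizes are a guess at how to stay under ~20 bytes.
public final class EntityUpdate {
    public static byte[] encode(short objectId,
                                float x, float y,
                                float vx, float vy) {
        ByteBuffer buf = ByteBuffer.allocate(18);
        buf.putShort(objectId); // 2 bytes
        buf.putFloat(x);        // 4 bytes
        buf.putFloat(y);        // 4 bytes
        buf.putFloat(vx);       // 4 bytes
        buf.putFloat(vy);       // 4 bytes
        return buf.array();     // 18 bytes total
    }
}
```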

Thus I was sending around 2 * 20 * 5 = 200 bytes/second for each player. If all the players were in close proximity, then you would need to be able to receive N-players * 200 bytes/second. So for 10 players, we are receiving about 2k/second. With a server-based architecture, this asymmetry in data transfer rates works with the modem’s higher downstream rate of up to 56kbps, so we could manage more than 10 players in one place. However, we also need to allow for additional server-generated data for non-player characters. And so on.

To optimise this further, we need to implement variable data rates depending on how fast the entity is moving. (I didn’t manage this). This is getting close to a transactional system, where you only update an entity when you have to.
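
Something like this, say (thresholds invented purely for illustration):

```java
// Illustrative only: pick a send rate per entity from how fast it is moving.
// The thresholds and rates are invented; real tuning depends on the game.
public final class UpdateRates {
    static int updatesPerSecond(double speedUnitsPerSecond) {
        if (speedUnitsPerSecond < 0.01) return 0; // at rest: event-driven only
        if (speedUnitsPerSecond < 1.0)  return 1; // drifting slowly
        if (speedUnitsPerSecond < 10.0) return 3;
        return 5;                                 // full 5 Hz for fast movers
    }
}
```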

So from my very limited experience, I totally agree with Jeff. This does result in some game compromises, but when viewed in the light of the above issues, it becomes obvious very quickly that it is just tough :slight_smile:

Alan

Such architectures are also necessary for various distributed simulation systems (such as used in military wargaming, etc). Thus, I think it’s fairer to say that running physics on servers makes sense for highly physical games where you want to make authoritative decisions based on physics state, but does not make sense for a typical MMORPG.

Can I read from this that you believe that SGS is not a viable technology for systems that look more like a military simulation, and less like an MMORPG?

Btw, when it comes to Cryptic: please ask them! Last I did, they said that they run physics at 30 fps on the server, and they send only controller inputs from the client. They RLE encode the last X seconds of input into each upstream packet, which will compensate for a fair bit of packet loss. I probably used the word “lockstep” wrong, as it’s not an RTS model; instead, let’s call it “input communication model” (as opposed to “state communication model”). I would assume that they send baseline state information every once in a while to get a client back in sync if there’s been too much packet loss, or the server and client have de-synced; this would account for corrections/warping.
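
The idea, as I understand it (this is a toy illustration, definitely not Cryptic’s actual format), is that every upstream packet resends the recent input history with identical consecutive states collapsed, so a lost packet costs almost nothing:

```java
import java.nio.ByteBuffer;
import java.util.List;

// Toy "input communication" encoder: collapse identical consecutive controller
// states into (runLength, state) pairs so the last X seconds of input can be
// resent cheaply in every packet.
final class InputHistory {
    static byte[] encode(List<Byte> recentInputStates) {
        ByteBuffer buf = ByteBuffer.allocate(recentInputStates.size() * 2);
        int i = 0;
        while (i < recentInputStates.size()) {
            byte state = recentInputStates.get(i);
            int run = 1;
            while (i + run < recentInputStates.size()
                    && recentInputStates.get(i + run) == state
                    && run < 255) {
                run++;
            }
            buf.put((byte) run);  // how many identical samples
            buf.put(state);       // the controller state itself
            i += run;
        }
        byte[] out = new byte[buf.position()];
        buf.flip();
        buf.get(out);
        return out;
    }
}
```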

Last: if it takes “sub millisecond” for each get(), how can I simulate hundreds of closely interacting objects in a physical simulation with 30 steps a second, per processing node? There’s only about 33 milliseconds per step at that rate.

I actually have a specific kind of system in mind when I’m asking these questions – is there another way of contacting you for more in-depth questions?

[quote]Um can I say this?

no no no bad bad bad.[/quote]

yup you can and did :slight_smile:

It wasn’t quite what I was suggesting though; sorry it came across that way. I would never mean to suggest that update packets are sent every frame, that would be silly.

What I meant (which is probably also wrong, so some clarification would be good) was that I can’t see how the server and the client can hope to be remotely in sync if the server is only updating its model once per second. For actually sending data I was thinking around once per second. I guess what I’m thinking of is that if the server-side checks on collision happen at intervals one second apart, then it would be easy to miss collisions (I suppose you could sweep).

I guess the use case I am thinking of is: the client reports a hit and sends it to the server for confirmation (or doesn’t report damage till the server confirms the hit, but does show a fancy hit effect); the server sim hasn’t seen it (because its steps are too far apart) and says no hit.

Now I’m thinking this still isn’t the way I want to be looking at it, but I’m not sure what is.

Well, it depends. SimNet was highly distributed and basically client-based. It would certainly work well coordinating something like that. Military sims have the agreeable property that if someone cheats, you take him outside and shoot him. Something MMORPG makers can’t do.

Flight and tank sim games all, to varying degrees, follow the SimNet model and should be perfectly implementable in an SGS-centric world.
Cosmic Birdie is a good example of a game that does fast action. It does client-side physics because the response time of a persistent system, even one such as ours, would be inappropriate for what they are trying to accomplish.

I would definitely say it is not suited to nor intended for scientific simulation. I had a nice chat with a fellow from NASA a few GDCs ago. They simulate down to the flow of air particles, and the ONLY thing that will do what they need is a big hunk of memory surrounded by hundreds of directly coupled processors.

Okay, that I believe 8) I suspect if you asked them you’d find they are ALSO running loosely coupled physics on the client. The only other way they could get the kind of motion they are getting (e.g. a smooth arc in a jump) and not do that was if they were actually sending back parametric curves to the client, and I kind of doubt that.

Frankly, if you wanted to do that sort of continuous global physics thing in the SGS you would do exactly what you do today: you would write a custom process that handled your real-time physics model and then talk to it through the SGS’s RawSocketManager. You would be responsible for scaling that yourself and for any persistence you wanted from it. Realistically we aren’t going to be able to give you any better than O(0.1) ms access time to data inside of the SGS (which is pretty damn good and fine for most things, but for this particular problem it’s probably not fast enough).

As I mentioned above, though, physics and collision can be decoupled, and there are a lot more efficient and less demanding ways to do collision.
If I were creating a physics-based game I would probably look at doing client-side physics and considering the physics model effectively a “controller” on the objects. As long as you know the limits of your physics model, detecting unreasonable results isn’t that hard and suffices for cheat insurance.
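
For example, a bare-bones reasonableness check only needs the top speed your physics model can legitimately produce (the numbers here are placeholders, not from any real game):

```java
// Minimal sketch of server-side "reasonableness" checking for client-reported
// movement. MAX_SPEED is whatever the physics model can legitimately produce.
public final class MovementValidator {
    private static final double MAX_SPEED = 50.0; // units per second, assumed
    private static final double SLACK = 1.2;      // allowance for latency/jitter

    static boolean isPlausible(double oldX, double oldY,
                               double newX, double newY,
                               double elapsedSeconds) {
        double dx = newX - oldX, dy = newY - oldY;
        double distance = Math.sqrt(dx * dx + dy * dy);
        return distance <= MAX_SPEED * elapsedSeconds * SLACK;
    }
}
```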

Short answer is it sounds like your particular application is outside of our problem domain, sorry.

I specifically asked them, and they run the same physics on the server and on the client, to avoid position cheating. Because they run physics at 30 frames per second, they get smooth enough arcs when you jump – linear interpolation between successive 33 ms steps wouldn’t be noticeably different from an actual parametric arc.
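
To see why, sample the arc at 33 ms and lerp between samples (throwaway sketch with made-up jump numbers):

```java
// Illustrative: a jump arc sampled at 30 Hz, with linear interpolation between
// successive samples on the client. At 33 ms spacing the chord of the parabola
// is visually indistinguishable from the true curve for typical jump speeds.
public final class JumpArc {
    static final double GRAVITY = -9.8;    // m/s^2, assumed
    static final double JUMP_SPEED = 5.0;  // m/s, assumed

    static double heightAt(double t) {
        return JUMP_SPEED * t + 0.5 * GRAVITY * t * t;
    }

    // Client-side height between the two most recent 30 Hz samples.
    static double lerpedHeight(double t) {
        double step = 1.0 / 30.0;
        double t0 = Math.floor(t / step) * step;
        double t1 = t0 + step;
        double alpha = (t - t0) / step;
        return heightAt(t0) * (1 - alpha) + heightAt(t1) * alpha;
    }
}
```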

Thanks for clarifying the general model for the SGS back-end, too; it helps.

Last, @dsellars: I believe Jeff is suggesting that the server checks the moves the client has made using rays or swept spheres covering the movement area, rather than just moving the object and testing for intersection. Thus, the server would be running different code than the client (or at least the same code with vastly different parameters). This is a model you can use if your game is only loosely physical, or only physical on the client (like an MMORPG).
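
A sketch of that kind of swept test, with the obstacle reduced to a single point purely for brevity (a real walk mesh or collision set would obviously be more involved):

```java
// Sketch of a swept test: instead of checking only the end position, treat the
// whole move as a segment and ask whether a sphere of radius r dragged along
// it would touch the obstacle.
public final class SweptCheck {
    static boolean sweptSphereHits(double ax, double ay,   // move start
                                   double bx, double by,   // move end
                                   double ox, double oy,   // obstacle centre
                                   double radius) {
        double abx = bx - ax, aby = by - ay;
        double lenSq = abx * abx + aby * aby;
        // Parameter of the closest point on the segment to the obstacle, clamped to [0,1].
        double t = lenSq == 0 ? 0
                 : Math.max(0, Math.min(1, ((ox - ax) * abx + (oy - ay) * aby) / lenSq));
        double cx = ax + t * abx - ox, cy = ay + t * aby - oy;
        return cx * cx + cy * cy <= radius * radius;
    }
}
```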

Okay. Due to the realities of internet latencies, at least in the US, you are talking latencies that are usually in the 100ms-200ms range and that jitter pretty badly, all the way up to near a second in worst cases. To make it worse, analog modems can go into retrain mode and put as much as 6 seconds of latency on a line.

All of this leads to an inevitable reality-- you cannot expect a tight synchronization over the net. Set two clients up next to each other for any online game and I guarantee you that what you will see is a pretty loose sync. That’s just reality.

So since you cannot closely sync, you don’t try. You design your game as open-loop and loosely synchronized. As long as the simulation on each person’s terminal is reasonable and fair, they don’t have to be identical. Now, there are a few times when POV becomes critical; mostly these are interactions like shooting someone or hitting them. The schemes for that vary.

Most first-person games take advantage of the fact that the only person who has any great amount of information on aiming is the shooter. Based on that, you can let him decide if it’s a hit. The one problem with this as a complete solution is that it leaves the client open to cheating. One way to solve this is to move shooting to the server. This works, though you risk having shots that are dead-on on the client’s side miss in the server’s view. Another solution is to let the client decide but have the server check for “reasonableness”. If the server decides there is no way the reported hit could have happened, it decides the client is cheating.

Now, buried in this whole discussion is really the question of position. I’m going to pull it to the front because it helps illustrate a difference you can take advantage of between client and server. The client has to be correct on a frame-by-frame basis. This means that it indeed needs to simulate on a frame-by-frame basis. The server, though, just needs to detect the client cheating and resolve such questions as “where was object X at time Y”. With any object that moves in straight lines (or actually in a mathematically predictable curve; straight is just the degenerate case), if you know some information at the two end points, you know every position the object was in between those end points.

I don’t actually need to process the entire shot when it’s fired if I can ask the question later: “at this time, could this object have hit that object with a shot?” Cheat detection doesn’t have to be immediate as long as it’s reasonably soon after the fact. All you can do with a cheater anyway is (a) reset the game to before the cheat and/or (b) kick him or her out.
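
As a sketch, for straight-line movement all the server has to keep per segment is a start point, a velocity and a start time (names here are illustrative, not the actual JNWN code):

```java
// Illustrative: with straight-line movement you only need the segment's start
// point, velocity, and start time to answer "where was this object at time t"
// after the fact, so cheat checks can be run lazily.
public final class MovementSegment {
    final double startX, startY;   // position when this segment began
    final double velX, velY;       // constant velocity during the segment
    final long   startMillis;      // time the segment began

    MovementSegment(double sx, double sy, double vx, double vy, long t0) {
        startX = sx; startY = sy; velX = vx; velY = vy; startMillis = t0;
    }

    double[] positionAt(long timeMillis) {
        double dt = (timeMillis - startMillis) / 1000.0;
        return new double[] { startX + velX * dt, startY + velY * dt };
    }
}
```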

BattleTrolls/JNWN takes advantage of this, as we demoed at the show. We send a message to the server whenever our movement state changes-- this means start, stop, or change angle. The server does a line segment test vs. the walk mesh to check all the points in between. If it intersects a wall or other unwalkable, we teleport the player back to the start position of that segment.

(We actually limit the angle-change messages to a minimum change to avoid flooding the server. We also send an update periodically even if we are just walking straight, so lag doesn’t get too extreme.)
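
A sketch of that segment test, with the walk mesh boiled down to a flat list of blocking edges just for illustration (not the actual JNWN data structures):

```java
import java.util.List;

// When a movement segment is reported, test it against the blocking edges of
// the walk mesh; if it crosses one, send the player back to the segment start.
public final class WalkCheck {
    record Edge(double x1, double y1, double x2, double y2) {}

    // Standard orientation-based proper-crossing test for two 2D segments.
    static boolean segmentsCross(double ax, double ay, double bx, double by,
                                 double cx, double cy, double dx, double dy) {
        double d1 = cross(cx, cy, dx, dy, ax, ay);
        double d2 = cross(cx, cy, dx, dy, bx, by);
        double d3 = cross(ax, ay, bx, by, cx, cy);
        double d4 = cross(ax, ay, bx, by, dx, dy);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    private static double cross(double px, double py, double qx, double qy,
                                double rx, double ry) {
        return (qx - px) * (ry - py) - (qy - py) * (rx - px);
    }

    /** Returns the position the player should end up at. */
    static double[] validateMove(double sx, double sy, double ex, double ey,
                                 List<Edge> blockingEdges) {
        for (Edge e : blockingEdges) {
            if (segmentsCross(sx, sy, ex, ey, e.x1(), e.y1(), e.x2(), e.y2())) {
                return new double[] { sx, sy };  // teleport back to segment start
            }
        }
        return new double[] { ex, ey };          // move accepted
    }
}
```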

I hope this helps get the ideas flowing, anyway.

But as you say, they are simultaneously running the physics on the client. That’s why it’s smooth: it’s being locally controlled for the client. I assume they do some correcting periodically to keep it in line with the server’s idea of the world. This is what I mean when I say a “loosely coupled simulation” on the client.

There would be no way to get it smooth otherwise; internet spikes and latencies would screw it all up.

Yes, precisely. I laid out one approach above; I hope that helps, too.

Thank you both for your time and input.

The problems you laid out are what I was worried about with this type of game. Part of the reason I decided to have a think about this is precisely because I don’t see how you can keep everything exactly in sync :frowning: but then I have never tried to do it so…

I think I need to have a play and think about all of this now. I see what you are saying and it’s similar to what I’ve started to think about.

I suspect I was getting too caught up in the wrong train of thought.

Cheers,
Dan.

Btw… I am pretty surprised that CoH is doing any server side physics.

Their combat is all dice-roll from what I can see. There is no twitch targeting and such. (Which is true of just about all MMOs, again because of the latency issues.) The Havok-style physics that came in with CoV all seem to be visual fluff and have no impact on play. Even “knockback” was a late addition and doesn’t really do much in the way of physics, being a single object projected backwards on a straight line.

The powers that “throw things” don’t knock boxes over, nor can you pick up real map objects-- the “thrown object” comes from nowhere and is just part of the animation.

About the only game-play physics I see in game is wall collision and floor detection. It seems to me like there would be far cheaper and more scalable ways to handle that.