[SGS] Darkstar and physics simulation

I’ve been reading the Darkstar documentation. It appears to use a common transactional model, where transactions are intended to cover only a small number of objects.

However, most physics solvers available today actually throw all world objects into a big matrix, run an LCP solver on that matrix, and out come the new positions/velocities/orientations of the objects.

A naive implementation of this mechanism as GLOs would have a world stepper that acquires (using get()) all the objects, throws them into the solver, writes the output back to the objects, and then commits. This wouldn’t scale all that well. While you can partition the solver into separate “islands” that can be solved independently, knowing which island an object belongs to depends on the collision and interaction connectivity graph, which in turn would be a single large object with references to everything.
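To make that concrete, here is a rough sketch of the naive stepper in Java-ish pseudocode. The names are made up for illustration, not actual SGS API; GLOReference.get() follows the GLO pattern discussed below:

// Hypothetical sketch of the naive single-transaction world stepper.
// GLOReference.get() write-locks an object for this task; WorldObject,
// PhysicsSolver, and SimTask are invented names.
public class NaiveWorldStepper {
  private List<GLOReference<WorldObject>> allObjects; // the whole world

  public void step(SimTask task) {
    List<WorldObject> locked = new ArrayList<WorldObject>();
    for (GLOReference<WorldObject> ref : allObjects) {
      locked.add(ref.get(task));       // write-lock every object at once
    }
    new PhysicsSolver().solve(locked); // one big LCP solve over everything
    // the transaction commits when the task returns, writing the entire
    // world back to the object store in one go -- hence the scaling worry
  }
}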

Is there a better solution to this problem within the SGS?

Also, modern physics games step at between 30 and 100 steps per second; would the object store keep up with committing the entire world at that rate? Does it do write combining, and if so, is the write combining ordered per object or store-wide?

I thought about the same thing and came to the conclusion that the simulation should run on the clients, with the server only checking the submitted positions. Otherwise I fear the object store would be overburdened. I think the object store will generally be the bottleneck when you try to simulate large worlds.

Christoph

You’re right, the claims in the server programming manual only say “an efficient, scalable, reliable, and fault-tolerant architecture” and don’t actually mention “secure.” When you trust the client on simulation, there is a whole class of cheat-proofing you just can’t do, which you can do with authoritative server-side physics. For a typical RPG, it probably doesn’t matter, but for something like a first-person shooter, it would.

I agree that a single object store seems like the main bottleneck. Again, for an RPG, it probably won’t matter, but for something high-action, I don’t see how they could support thousands of players in a single action game world.

Does anyone know what the actual object store back-end is, and more importantly, how well it scales (in commits per second)?

The SDK ObjectStore is a custom in-memory transactional store backing itself up to BLOBs in Derby.

The full back-end is currently using HADB, which is a technology that Sun bought (actually, we bought the whole company). It is a distributed in-memory database that scales “near-linearly” according to the developers.

Shawn Kendall at Full Sail/IMI and I have been talking about this specific problem some. To really take advantage of the architecture, you would want to develop a physics model which can handle parallel computation. This would allow you to properly spread your load across the SGS back-end. We provide the key primitive for such parallel problems, which is the ATTEMPT form of GET. You can see how this is used to parallelize the processing of timer events in the PDTimer source code.
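Very roughly, the pattern looks something like this (treat the names as pseudocode rather than the exact API; the key assumption is that attempt() returns null instead of blocking when another task already holds the lock):

// Sketch of ATTEMPT-style parallel processing across physics islands.
// Island is a hypothetical GLO; attempt() is assumed to return null
// rather than block when the object is locked by another task.
public void solveIslands(SimTask task, List<GLOReference<Island>> islands) {
  for (GLOReference<Island> ref : islands) {
    Island island = ref.attempt(task);  // non-blocking try-lock
    if (island == null) {
      continue;         // a parallel task already claimed this island
    }
    island.solveLocally();              // small, independent solve
  }
  // skipped islands are handled by other tasks, so the work spreads
  // across the back-end instead of serializing on one big get()
}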

Actually developing the distributed processing algorithm, though, I have a feeling is an as-yet-undone research project. Anyone looking for a good thesis subject? 8)

Short term, you can punt and do what you would have done anyway-- build a separate physics server process around your favorite physics library. You can then talk to it from the rest of your logic through the Raw Socket Manager in the SGS.
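For illustration, the external process could be as simple as a plain socket loop like the one below. The one-request-per-line wire protocol and the PhysicsWorld wrapper are made up, and the SGS side (not shown) would connect through the Raw Socket Manager:

// Hypothetical stand-alone physics sidecar the SGS logic talks to over
// a raw socket. PhysicsWorld wraps your favorite physics library; the
// line-based protocol is invented for illustration.
import java.io.*;
import java.net.*;

public class PhysicsSidecar {
  static class PhysicsWorld {            // stub around a real library
    void applyInputs(String line) { /* decode player inputs */ }
    void step(double dt)          { /* advance the simulation */ }
    String encodeState()          { return "state"; /* serialize */ }
  }

  public static void main(String[] args) throws IOException {
    PhysicsWorld world = new PhysicsWorld();
    try (ServerSocket server = new ServerSocket(7777)) {
      Socket client = server.accept();   // one connection from the SGS
      BufferedReader in = new BufferedReader(
          new InputStreamReader(client.getInputStream()));
      PrintWriter out = new PrintWriter(client.getOutputStream(), true);
      String line;
      while ((line = in.readLine()) != null) {
        world.applyInputs(line);         // inputs forwarded by game logic
        world.step(1.0 / 60.0);          // fixed-rate step
        out.println(world.encodeState()); // updated state back to the SGS
      }
    }
  }
}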

[quote]Actually developing the distributed processing algorithm, though, I have a feeling is an as-yet-undone research project. Anyone looking for a good thesis subject? 8)
[/quote]
Actually, I’ve seen that done, although in C++. I believe one rule has to be that an object can only derive its state from the state of objects in the past (i.e., you must forgo the big-LCP-solver approach), and thus you will need to at least double-buffer your physics state based on time. Thus, when updating state from (step-1) to (step), everyone could get state for (step-1) without interfering, and the object could lock itself at (step). However, I don’t quite see how to express this in SGS parlance, because you lock the entire object when you get() it, and there’s no “commit and swap” primitive. Any ideas?
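In pseudocode, the double-buffering rule might look like this (all names invented):

// Sketch of time-double-buffered state: readers at (step-1) use prev
// while the writer builds cur for (step). All names are hypothetical.
public class BufferedBody {
  static class State {
    long time;                   // simulation step this state is valid for
    double[] position = new double[3];
    double[] velocity = new double[3];
  }
  State prev = new State();      // last completed step, safe to read
  State cur  = new State();      // step being computed, writer-only

  void beginStep(long fromStep) {
    State tmp = prev; prev = cur; cur = tmp;  // swap the buffers
    cur.time = fromStep + 1;                  // cur now targets the new step
  }

  State stateAt(long step) {     // hand out whichever buffer matches
    if (prev.time == step) return prev;
    if (cur.time == step) return cur;
    return null;                 // caller is out of sync with this object
  }
}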

Disclaimer: by day, I work for a company that has a significant investment in a similar area.

We have that. It’s called a PEEK. PEEK returns a task-local copy of the last committed state.

But the problem is that the object might already have been committed for (step) by another task.

@Jeff: I suppose it could be implemented with peek if the peek-er looked at both the “previous” and “current” states and took whichever one was for the timestep they needed to read.

@mudman: The order of commit doesn’t matter if you make a partial ordering rule: all objects commit for step N before any object is allowed to commit for step N+1. This is basic simulation time management; you can read (for example) the HLA standard (IEEE 1516) to learn all about it.
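As a sketch, the barrier for that rule could look like this in plain Java (the names are hypothetical, and a real SGS version would itself have to live in a GLO rather than use monitors):

// Sketch of the partial-ordering rule: no object commits for step N+1
// until every object has committed for step N. currentStep is the last
// fully committed step; names and mechanism are for illustration only.
public class StepBarrier {
  private final int worldSize;   // number of objects in the world
  private long currentStep = 0;  // last step committed by everyone
  private int committed = 0;     // commits received for currentStep + 1

  public StepBarrier(int worldSize) { this.worldSize = worldSize; }

  // called once per object when its state for `step` is ready to commit
  public synchronized void commitFor(long step) throws InterruptedException {
    while (step != currentStep + 1) {
      wait();                    // too early: the previous step isn't done
    }
    committed++;
    if (committed == worldSize) {
      currentStep++;             // everyone has committed this step
      committed = 0;
      notifyAll();               // release objects waiting on the next step
    }
  }
}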

So, assuming that all you have is peek and get, and assuming that a peek that runs in parallel with someone else’s get will succeed (and return the previous state, until global commit), then the loop might look something like:


// stepped by someone who knows how to step everything in order
object::evolve(long fromStep) {
  assert( cur.time == fromStep );
  swap( prev, cur );               // prev now holds the state for fromStep
  cur.time = fromStep + 1;         // cur will receive the new state
  foreach( other in myCell.findIntersectionsWithMe(fromStep) ) {
    state = other.peek();          // non-locking read of last committed copy
    if( state.prev.time == fromStep ) useState = state.prev;  // other already swapped
    else useState = state.cur;     // other hasn't stepped yet
    handleIntersection(useState);
  }
  calculateNewState();             // fills in cur for step fromStep+1
}
// commit here

If what you want is to keep a previous version even after commit of a new value, I suppose you could just keep 2 GLOs.

A PEEK will pick up the most recently committed copy. Is that what you are looking for? If not, then you could always run two GLOs per node and truly “double buffer.” Or keep multiple copies of the data in the same GLO…
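For instance, something like this (pseudocode, not the real API; the parity trick just picks which GLO is writable for a given step):

// Sketch of the two-GLOs-per-node idea: pick the writable buffer by
// step parity, peek the other one. GLOReference get()/peek() follow
// the GLO pattern above; BodyState and the scheme are illustrative.
public class DoubleBufferedBody {
  private GLOReference<BodyState> even;  // state for even-numbered steps
  private GLOReference<BodyState> odd;   // state for odd-numbered steps

  // write-lock only the buffer for the step being produced
  public BodyState writable(SimTask task, long step) {
    return (step % 2 == 0) ? even.get(task) : odd.get(task);
  }

  // non-locking read of the previous step's committed state
  public BodyState readable(SimTask task, long step) {
    return (step % 2 == 0) ? odd.peek(task) : even.peek(task);
  }
}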