Multi-Player Architectures?

I have a scalable authoritative server.

As I was building my multi-player plug-in, I came across an idea that would allow peer-to-peer and an authoritative server at the same time. Has anyone ever implemented this?

I was wondering if anyone has come across a better architecture, meaning a faster one, that doesn't lose the authoritative capabilities.
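One way the hybrid might work: peers exchange updates directly for low latency, while the authoritative server independently validates each move and issues corrections. A minimal sketch of that idea follows; every class, method, and rule here is invented for illustration, not taken from any existing plug-in.

```java
// Hypothetical sketch of a hybrid peer-to-peer / authoritative design:
// peers apply each other's updates optimistically, while the server
// re-checks the same updates against the rules and overrides cheats.
public class HybridAuthority {
    static final double MAX_SPEED = 5.0; // max legal distance per tick (assumed rule)

    /** Peer side: apply a remote player's update immediately (optimistic). */
    static double applyPeerUpdate(double reportedPos) {
        return reportedPos; // trust it for now; the server may correct it later
    }

    /** Server side: validate the same update and clamp illegal moves. */
    static double validate(double oldPos, double newPos) {
        double delta = newPos - oldPos;
        if (Math.abs(delta) > MAX_SPEED) {
            // Illegal move: snap to the farthest legal position instead.
            return oldPos + Math.signum(delta) * MAX_SPEED;
        }
        return newPos;
    }
}
```

The point of the split is that peers see each other's moves one round trip sooner than a pure client-server design, while the server's correction keeps anyone from teleporting.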

BTW, is it really better to get rid of the multi-player server altogether and just stream images to the client?

It would seem that, despite being a bandwidth hog, it would actually be a very simple design. I personally would never do this, because I couldn't realistically use that architecture for my robot army. But is it the best multi-player game design ever, or am I dreaming?

Of course it’s easiest to do everything on one machine in one process, preferably on one thread.

If you have the resources (CPU cycles, memory, bandwidth, data traffic) and your players would accept the lag, why not? :persecutioncomplex:

Streaming games is the next big thing, next to this ‘cloud’ thing they keep talking about.

Which supplies the answer to your “Why not?” question.

I would like to say that I can’t really use it and that, just like 30 years ago, it’s a fad waiting to break, but I would be wrong.

It could be that this image download architecture is going to stay and that I am sticking myself into the wrong hole with the wrong philosophy, like the person in the RMI thread.

I think the broken part is that we could see a move to much larger client requirements, to the point that you can’t just stream images at 60 fps anymore. Higher resolutions, for example. Well, I can only hope. What do you think the likelihood is of the requirements scaling up so much that bandwidth becomes an issue again?

I think that some low-latency games will still be much better without the image download architecture. I notice the lag, but for many games it is fine. VNC did the same thing 30 years ago, but now all of a sudden it’s a big deal.

Like I said before, my future project may depend on low-bandwidth and ad hoc network requirements, so I probably can’t even use it for my games unless I want to have 2 different network philosophies.

I am at a point where I will probably have 4 3D renderer implementations under my AllBinary Platform, and now possibly 2 network architectures. That could be more than I can handle.

Short Answer:

I would say that the simple way is going to be far better, but sadly my direction will keep me from doing it, unless I decide to have 2 different network architectures to work with. Looks like I will need to make a real decision. Fuck.

Super Short Answer:

Sure why not.

Screw 30 years of network philosophy that was broken for games, but great for war.

:slight_smile:

I personally see it splitting into 2 categories.

1st
Standard-resolution games streamed, with HIGH complexity and high CPU requirements. Imagine having 8+? people play at a standard resolution (1080 or 720, 1920x1200, or what have you), but with a huge mini-server running all the in-game physics, calculations, noise terrain generation, and a.i. (the non-visual stuff), where the individual machines could never have enough power to truly run it.

2nd
We now have huge wall-sized screens (10-foot screens with the same pixel density as our monitors). Let’s say 10k x 10k or higher resolution. Significantly too high to stream (even though in this future we may have tons more bandwidth), but we have the local rendering capabilities. That also happens to be 3D etc…

Standard calculations (though still significantly higher than today) (high resolution / rendering?)

Who knows. Or the obvious 3rd:
It’ll fluctuate back and forth, depending on which is winning (the most probable outcome).

If bandwidth is winning, then we get more bandwidth-heavy games;
if localized computing is winning, then we get that.

Both categories are on the verge of some MAJOR increases (10x or more over current), but as with all things, it’ll happen incrementally.

From your comment I now realize that mixing the 2 together is the way to go.

The next-generation Multi-Player Architecture will be as such:

  1. Stream images down and input up for those on a low-end client, but they will need to pay for the server cost… So you can play your high-end game on a netbook, PlayBook, iPad, zoom, and such.
  2. Allow a thicker client for high-end machines that want lower latency and more control over their processing assets, with lower or no server cost.
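The two tiers above could be negotiated when the client connects; here is a hypothetical sketch of that decision, where the names and the decision rule are invented for illustration rather than taken from any real protocol:

```java
// Hypothetical sketch of picking a tier at connect time: thin clients get
// server-rendered frames streamed down (tier 1), thick clients get raw
// state updates and simulate/render locally (tier 2).
public class ClientNegotiation {
    enum Mode { THIN_STREAMED, THICK_LOCAL }

    /** A thick client trades server cost for latency; a thin one runs anywhere. */
    static Mode chooseMode(boolean canRenderLocally, boolean wantsLowestLatency) {
        if (canRenderLocally && wantsLowestLatency) {
            return Mode.THICK_LOCAL;   // tier 2: no streaming, no server fee
        }
        return Mode.THIN_STREAMED;     // tier 1: netbook/tablet pays for server time
    }
}
```

Either way the server stays authoritative; the tiers only differ in whether frames or state go over the wire.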

So I guess the 30 years weren’t wasted after all. I didn’t realize that Multi-Player was going to be so much fun to code. :slight_smile:

I think people forget the vast bandwidth you will require for a streamed game. There are 2 reasons why it’s really quite a lot. First, you cannot compress the stream as well because of the real-time constraint. The current crop of real-time compression hardware is really expensive (>10k per card) and, tbh, still looks like crap (see the last World Cup). Second, people don’t play in 800x600 anymore.

The next reason why it’s really never going to be big is the difference in both cost and raw performance of network speed and bandwidth versus CPU power. Long story short, I can always compete better by using the local CPU more and the bandwidth less. A company that needs 10 TB+ with very low latency is going to pay a fortune for their internet connections. And before you pull out YouTube as a comparison: no one is going to want to play a game with the kind of performance they provide, and they don’t even have the latency issues. Don’t forget you can’t buffer a live stream like this. Even mobile devices are getting some pretty decent performance nowadays. There is no reason not to use it.

Compare that to the MB per hour that something like a Doom-type game requires. They can get 10000 users on the same connection where the streamed version can only have about a dozen players. Think of it as highly domain-specific video compression with fewer patents.
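The rough arithmetic behind that "dozen vs 10000" gap can be sketched with explicitly assumed numbers: say ~5 Mbit/s per streamed-video player versus ~0.01 Mbit/s (10 kbit/s) per state-update player, both sharing a 100 Mbit/s link. These figures are illustrative assumptions, not measurements from any particular game:

```java
// Back-of-the-envelope player capacity for a shared link, given an
// assumed per-player bandwidth cost. Math.round avoids floating-point
// truncation surprises on values like 100.0 / 0.01.
public class BandwidthMath {
    /** How many players fit on a link of linkMbps at perPlayerMbps each. */
    static long playersPerLink(double linkMbps, double perPlayerMbps) {
        return Math.round(linkMbps / perPlayerMbps);
    }
}
```

With those assumed numbers, the same 100 Mbit/s link carries about 20 streamed players but 10,000 state-synced ones, which is roughly the three-orders-of-magnitude gap described above.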

The thing about bandwidth, latency, and uptime performance: it’s not free. In fact it is really expensive. Oh, and enough CPU power to be basically a PC per connected player won’t be cheap either. Using what the player already has makes more sense.

I agree, but rich people don’t. Some people are interested in taking away home computing for various reasons:

  1. Hardware at home is not used efficiently.
  2. You can install the same game on almost any system.
  3. They can still make a profit with a 9-dollar subscription fee.
  4. Licensing control is easy when the client code never reaches the client system. An authoritative multi-player server is automatic and almost perfect.
  5. Some of them are evil and want to dumb down society enough that normal people can’t afford comparable computing. If they control most of the computing, they will control the new science production, and ultimately computing costs would be pushed onto the consumer artificially, like the post office.

So, while I don’t really like the idea, I know it is really here to stay this time. So you will need to learn to live with it, just as I will. I have known this for a long time, but it didn’t matter until this year.