Managing when the server should send which data

So I have a basic implementation of a server and client for the first multi-player game I’ve been working on. However, I have run into yet another issue that I don’t know how to solve. How do I manage when the server should send which data to the client? For example, I want to send map data when the client connects to the server, but if the client loses its connection and then re-establishes it, I need a way for the server to know that it doesn’t need to send the map data again. How do I manage this?

One idea I came up with: upon connection, the client sends a byte to the server that acts as a flag for whether or not the server should send the map (0 = yes, 1 = no). I don’t know if this is how it ought to be done, though…

This brings me to my second question… how would the client know what kind of data it is being sent?

1st problem:
When the client connects to the server, don’t send the map unless the client is asking for the data, which is the approach you described with the 0s and 1s. I think this is a good way and you should do it like that.

2nd problem:
It depends on which networking solution you are using. If you are using something like KryoNET, which is AWESOME, then you can just create the packets on client and server and check whether the object you got from the server is player data or something else:

if (object instanceof PacketPlayerData) {
    doSomething();
}

This is the way I would do it.

I’m not using any networking libraries; I’m trying to do it all myself as a learning experience. :slight_smile:

I want to try to make my own packets eventually, but I’ll be honest when I say I have no clue where to start.

1.) The client should request the map data when it needs it. The server is dumb; it only responds to requests from the clients. The client should have a file that saves the state of the world and tells the client whether or not it needs to request new data from the server.

2.) Similar to what Edgu said, use a queue-based system for your packet handler that processes packets in a first in, first out approach (see the sketch below). You can find out what kind of packet each one in the queue is by using the instanceof operator, and then act on the result.
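A minimal sketch of what I mean, with made-up packet classes (PacketPlayerData, PacketMapData) standing in for whatever you end up defining: the network thread enqueues whatever it has deserialized, and the game thread drains the queue once per tick, in arrival order.

import java.util.concurrent.ConcurrentLinkedQueue;

// hypothetical packet types - replace with your own
abstract class Packet {}
class PacketPlayerData extends Packet { /* position, health, ... */ }
class PacketMapData extends Packet { /* tiles, ... */ }

class PacketQueue
{
    // FIFO: packets are processed in the order they arrived
    private final ConcurrentLinkedQueue<Packet> incoming = new ConcurrentLinkedQueue<Packet>();

    // called by the network thread whenever it finishes deserializing a packet
    void enqueue(Packet p) { incoming.add(p); }

    // called once per tick by the game thread
    void processAll()
    {
        Packet p;
        while((p = incoming.poll()) != null)
        {
            if(p instanceof PacketPlayerData)    handlePlayerData((PacketPlayerData)p);
            else if(p instanceof PacketMapData)  handleMapData((PacketMapData)p);
            // unknown packet types are simply dropped here
        }
    }

    private void handlePlayerData(PacketPlayerData p) { /* update remote player */ }
    private void handleMapData(PacketMapData p)       { /* build the map */ }
}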

Think about networking as if it’s dumb. Everything is a response to something else. Player breaks object, client sends packet to server to notify the other clients. Server sends packet to other clients. Clients receive packets and process them. Rinse and repeat. Networking is complicated, but just break it down into simple response based problems, and you will be good!

Quick edit because you just posted:
If you are trying to make a game, I wouldn’t try to roll your own networking library! Use something like Kryonet or net (never used it, but I’m starting to see it used more) because they are optimized, feature-rich libraries made by people who have been working on them for years. If you are just messing around and not trying to make a game, then go for it, but I would strongly advise against making one yourself if you are actually setting out to make a game.

Well, if you really want to do that, then what people usually do is send some integer along with the other information in order to identify the packets; then there’s a switch statement which checks the integer and decides which packet it is. IMHO not using KryoNET is a waste of time because it’s so simple to use.
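For example (just a sketch - the packet ids and fields are made up, and it assumes you read from a plain DataInputStream):

import java.io.DataInputStream;
import java.io.IOException;

class PacketReader
{
    // invented ids - pick your own and keep client and server in sync
    static final int ID_LOGIN      = 0;
    static final int ID_MAP_DATA   = 1;
    static final int ID_PLAYER_POS = 2;

    void readNext(DataInputStream in) throws IOException
    {
        int id = in.readInt();               // every packet starts with its type id

        switch(id)
        {
            case ID_LOGIN:      readLogin(in);     break;
            case ID_MAP_DATA:   readMapData(in);   break;
            case ID_PLAYER_POS: readPlayerPos(in); break;
            default: throw new IOException("unknown packet id: " + id);
        }
    }

    private void readLogin(DataInputStream in) throws IOException
    {
        String user = in.readUTF();          // readUTF handles its own length prefix
        String pass = in.readUTF();
        // ...
    }

    private void readMapData(DataInputStream in) throws IOException   { /* ... */ }
    private void readPlayerPos(DataInputStream in) throws IOException { /* ... */ }
}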

KryoNET is actually not that good. it uses a very good serialization but the network implementation lacks a few more or less important things.

the usual way to identify the type of data is to send an int (or short) and map it back (using an int/short->object map). the map could return a factory which converts the incoming byte-stream into a concrete class, or it could just call a listener which processes the stream somewhere else.
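a sketch of such a map, with an invented PacketHandler interface - each data-type-id maps to something that knows how to consume the payload bytes for that type :

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// invented for this sketch : one handler per data-type-id
interface PacketHandler
{
    void handle(ByteBuffer payload);
}

class PacketRegistry
{
    private final Map<Short, PacketHandler> handlers = new HashMap<Short, PacketHandler>();

    // register all known types once at startup
    void register(short typeId, PacketHandler handler)
    {
        handlers.put(typeId, handler);
    }

    // called once the decoder has a complete payload for one message
    void dispatch(short typeId, ByteBuffer payload)
    {
        PacketHandler h = handlers.get(typeId);
        if(h != null) h.handle(payload);
        // else : unknown type, discard or log
    }
}

registering is a one-time setup, dispatching is then a single map lookup instead of a growing switch.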

anyway, the important thing is : when you grab the byte-stream directly from the selector using the selector thread - you can stall the network if you process this data for too long. a listener might be a bad idea. a factory building objects might be a bad idea too - if deserialization takes too long. this is where Kryo shines.

you could also just copy the incoming data into a ringbuffer and process it on another thread, stalling the network only by the cost of copying. if processing that ringbuffer takes too long you will run into new issues - a border-case where Kryo fails.

in any case, if you do not use an existing library to deserialize the byte-stream - i can recommend a simple and small state-machine which parses the incoming bytes and emits objects. with that you can (or at least have a starting point to) handle broken streams, simple validation, ACK/NACK, discarding unimportant data etc.

i was running a system like that at a telecommunication company and it turned out to be way more stable/solid, faster and less memory- and cpu-hungry than Kryo. it’s not magic, just reassembling the basic ideas of ethernet frames : http://en.wikipedia.org/wiki/Ethernet_frame

in the end, if you want to get things finished and playable quickly - use KryoNet. if you plan ahead and see networking as a critical element, you should start doing it properly right from the beginning; adding networking later can stall your project and maybe reveal fundamental design issues blocking the project from being finished at all - that’s why we don’t get pvp in minecraft.

Essentially a “packet” is just one long string of bytes that the server/client can then use for something, right? So, since it’s all one line of bytes, should I just dedicate the first 4 bytes to the ID of the packet or something? Just some brainstorming I’m doing here. :slight_smile:

yep. something like … [preamble-bytes][header-bytes][payload-bytes][suffix-bytes]

header could contain a unique id, a data-type-id, source/destination id’s, payload size, checksum, etc etc. :slight_smile:

… the important thing is, when you look at it as a string of bytes, you must never expect that the first byte you fetch is actually a header-byte (or something else you would expect at the start of the communication). it can be a byte from any possible position in the “string”.

i use the preamble byte-sequence to find the start of a transmission, setting the state-machine to the correct state - ignoring all incoming data until it has snapped in.
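a bare-bones sketch of such a state-machine. the framing is invented for the example (2 preamble bytes, 1 type byte, 2 length bytes, then the payload) - it eats bytes one at a time and ignores everything until the preamble snaps it in :

import java.io.ByteArrayOutputStream;

class FrameParser
{
    // invented framing : [0xCA][0xFE][type:1][length:2][payload:length]
    private static final int PRE_1 = 0xCA, PRE_2 = 0xFE;

    private enum State { SYNC_1, SYNC_2, TYPE, LEN_HI, LEN_LO, PAYLOAD }

    private State state = State.SYNC_1;
    private int   type, length;
    private final ByteArrayOutputStream payload = new ByteArrayOutputStream();

    // feed every received byte in here, in the order it arrived
    void feed(int b)
    {
        b &= 0xFF;

        switch(state)
        {
            case SYNC_1:  state = (b == PRE_1) ? State.SYNC_2 : State.SYNC_1; break;
            case SYNC_2:  state = (b == PRE_2) ? State.TYPE   : State.SYNC_1; break;
            case TYPE:    type   = b;      state = State.LEN_HI; break;
            case LEN_HI:  length = b << 8; state = State.LEN_LO; break;
            case LEN_LO:  length |= b;
                          payload.reset();
                          state = (length == 0) ? emit() : State.PAYLOAD;
                          break;
            case PAYLOAD: payload.write(b);
                          if(payload.size() == length) state = emit();
                          break;
        }
    }

    private State emit()
    {
        onMessage(type, payload.toByteArray()); // hand the complete frame to the application
        return State.SYNC_1;                    // and go hunting for the next preamble
    }

    protected void onMessage(int type, byte[] data) { /* up to you */ }
}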

What do you mean don’t expect it to arrive in order? If I send one single string of data, it’s inherent that that single string will arrive in order, correct? Should the same be true of a string of bytes?

no no, you’re correct, if you use TCP you can just grab the bytes, they’re in the right order.

what i mean is : when you receive bytes, they’re usually in a ByteBuffer - do not expect the first byte in there to be the beginning of a transmission (header/preamble). … or imagine a server connecting to a stream which was already transporting messages. the server might start reading at any position of the communication.

Okay, I think I understand.

I started working on the packets finally, after putting it off for a while. I think I have a basic system going, but I just have one question. Is there a way for a packet to have an unknown amount of data, or do I have to limit packets to X amount of data? For example, if I had a packet sending a username and password, would I be able to send a packet with just enough bytes for the number of characters, or would I need to have, let’s say, 20 bytes per username/password and dedicate a 0 or something to the unused characters?

For example:

Username: Admin (in bytes: 65 100 109 105 110)
Password: Admin (in bytes: 65 100 109 105 110)

Data to send: 65 100 109 105 110 65 100 109 105 110

in which case I’d need a way to identify where the username ends and the password is supposed to start… OR

Data to send: 65 100 109 105 110 0 0 0 0 0 65 100 109 105 110 0 0 0 0 0

with the 0’s being empty bytes that were not used (each field padded to a fixed width - 10 bytes here to keep the example short).

that’s the question of how to encode/decode the data.

in theory there is no limit on packet size. it might become fragmented by the TCP message/packet size, but what we put on top of that is not affected by it (transport vs. application). the only limit is the memory on the decoder.

usually we’d have to cache all data until we think it’s worth decoding. better to have an encoding/decoding schema with proper streaming support, but that can become very specific to the payload-data, hard to implement and possibly memory-inefficient. anyway, usually we’d give the encoder a bytebuffer of size x, which would limit the max message size to x bytes. (this is also not addressed by kryonet in detail)

your idea of the 0-byte terminator/delimiter is very common and used all over the place.

data-type-id from header which tells : incoming data is a list of strings.
payload-data : [string-bytes][0][string-bytes][0][0]

reading a single 0-byte as the delimiter between strings, and a second 0-byte in a row as the end of the list.
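a quick sketch of encoding/decoding that layout with a ByteBuffer - it assumes the strings themselves never contain a 0-byte (fine for plain text) :

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

class StringListCodec
{
    // payload : [string-bytes][0][string-bytes][0]...[0]
    static void encode(ByteBuffer out, List<String> strings)
    {
        for(String s : strings)
        {
            out.put(s.getBytes(StandardCharsets.UTF_8));
            out.put((byte)0);   // delimiter after every string
        }
        out.put((byte)0);       // a second 0 in a row ends the list
    }

    static List<String> decode(ByteBuffer in)
    {
        List<String> strings = new ArrayList<String>();
        ByteBuffer   current = ByteBuffer.allocate(in.remaining());

        while(in.hasRemaining())
        {
            byte b = in.get();
            if(b != 0) { current.put(b); continue; }

            if(current.position() == 0) break; // second 0 in a row -> end of list

            strings.add(new String(current.array(), 0, current.position(), StandardCharsets.UTF_8));
            current.clear();
        }
        return strings;
    }
}

// e.g. for the login example : encode(buf, Arrays.asList("Admin", "Admin"));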

another idea is tossing the required information into the payload sequence itself. it’s a bit more memory hungry but more flexible :

  • data-type-id from header which tells : incoming data defines itself
  • payload-data : [data-type-id][size/number-of-element][element-bytes][data-type-id][…]

we could easily encode things like “string,string,float,float,int,boolean,string”
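a rough sketch of what such a self-describing encoder could look like, with invented one-byte type tags (the decoder needs the same table) :

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class SelfDescribingEncoder
{
    // invented type tags
    static final byte T_STRING = 1, T_INT = 2, T_FLOAT = 3, T_BOOL = 4;

    static void putString(ByteBuffer out, String s)
    {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        out.put(T_STRING).putShort((short)bytes.length).put(bytes); // [tag][length][bytes]
    }

    static void putInt(ByteBuffer out, int value)     { out.put(T_INT).putInt(value); }
    static void putFloat(ByteBuffer out, float value) { out.put(T_FLOAT).putFloat(value); }
    static void putBool(ByteBuffer out, boolean b)    { out.put(T_BOOL).put((byte)(b ? 1 : 0)); }
}

the decoder reads one tag byte, then knows exactly how many bytes (or which length field) follow - that’s all it takes to encode “string,string,float,float,int,boolean,string”.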

Okay, so I’m much, much further down the line than before. I have a structure for constructing, sending, and receiving/reading packets. My server creates a new thread for every client that connects to it in order to manage incoming/outgoing packets and such for that client. My next question is: should I have two more threads per client (three total)? One for receiving packets, one for sending packets, and one just to hold/manage all general data for that client’s connection? Or is there some way to structure a single thread so that it can manage sending and receiving/reading packets at the same time (I doubt it - more threads seems like the way to go, but it just seems so… inefficient)?

cool. you got to the nitty gritty of networking.

one or more threads per connection is the “old” way to implement it. the “new” way is using just one single thread to handle all traffic.

NIO is not as hard as it might look at first glance. it’s pretty much this :


new Thread(new Runnable()
{
  @Override public void run()
  {
    try
    {
      selector = Selector.open();

      while(true) // server main loop
      {
        selector.select(); // block

        Iterator<SelectionKey> it;

        try
        {
          it = selector.selectedKeys().iterator();
        }
        catch(final ClosedSelectorException ex)
        {
          // handle error
          return;
        }

        while(it.hasNext())
        {
          final SelectionKey key = it.next();

          it.remove();

          try
          {
            // actual work the server does.
            if(!key.isValid()) key.cancel();             // broken key
            else if(key.isWritable()) write(key);        // outgoing data
            else if(key.isReadable()) read(key);         // incoming data
            else if(key.isConnectable()) connect(key);   // outgoing connection
            else if(key.isAcceptable()) accept(key);     // incoming connection
          }
          catch(final CancelledKeyException ex)
          {
            // could be ignored
          }
        }
      }
    }
    catch(IOException e)
    {
      // handle error
      return;
    }
  }
}).start();

the interesting part is that [icode]selector.select()[/icode] blocks until it has something to do. you can poke the selector with [icode]selector.wakeup()[/icode], which will simply unblock it. you might want that if you know you have a key to be selected, or whatever else is going on in your main loop.

how to setup a Selector, ServerSocketChannel, SocketChannel is not complicated :
http://docs.oracle.com/javase/8/docs/technotes/guides/io/example/

clients :

SocketChannel clientSocket = SocketChannel.open();
clientSocket.configureBlocking(false);
clientSocket.connect(new InetSocketAddress([...]));

// this will make the server main loop unblock and fall into key.isConnectable()
SelectionKey key = clientSocket.register(selector, SelectionKey.OP_CONNECT);

the twist here is that [icode]clientSocket.register()[/icode] has a 3-argument overload whose 3rd argument is a user-data object. you can attach anything you want to the key. that object can be accessed by the server main loop later - [icode]connect(key)[/icode] in this example. personally i stick the whole socket-abstraction-object in so the server/selector-thread doesn’t starve from missing information.
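in code, with MyConnection as a stand-in for whatever socket-abstraction object you use :

// MyConnection is a placeholder for your own connection/client abstraction
MyConnection connection = new MyConnection(clientSocket);

// 3-argument register() : the connection object travels with the key
SelectionKey key = clientSocket.register(selector, SelectionKey.OP_CONNECT, connection);

// later, on the selector-thread, inside connect(key)/read(key)/write(key) :
MyConnection same = (MyConnection)key.attachment();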

server is pretty similar :

ServerSocketChannel serverSocket = ServerSocketChannel.open();
serverSocket.configureBlocking(false);
serverSocket.socket().bind(new InetSocketAddress(port_number));

// allow main loop to fall into key.isAcceptable() if something connects to the port
SelectionKey key = serverSocket.register(selector, SelectionKey.OP_ACCEPT);

the two other SelectionKey options are

[icode]SelectionKey.OP_READ[/icode] - which we want to use with the incoming socket-channel, after a connection is “accepted”. such a connection is not really different from an “outgoing” connection.

private void accept(SelectionKey key) throws IOException
{
  SocketChannel socketChannel = ( (ServerSocketChannel)key.channel() ).accept();

  // every time the new connection is sending data to us the selector-thread main loop
  // will fall into key.isReadable() and may pull the received bytes
  socketChannel.register(selector,SelectionKey.OP_READ);

  Object attachment = key.attachment();
  // twist here is, if this attachment is the server which registered itself with SelectionKey.OP_ACCEPT
  // we could notify this server about the new connection here.
}

interesting about this is it means that we can setup as many different servers (listening to different ports) as we want, still using only one thread.

private void read(SelectionKey key) throws IOException
{
  SocketChannel socketChannel = (SocketChannel)key.channel();
  // up to your "logic"
  Object        attachment    = key.attachment();

  if(!socketChannel.isConnected())
  {
    // error
    return;
  }

  // selector-thread read cache.
  // a simple ByteBuffer, big enough for a typical TCP packet in this example. ~8kb
  readBuffer.clear();

  int numRead = socketChannel.read(readBuffer);

  if(numRead == -1)
  {
    // connection closed
    key.cancel();

    return;
  }

  readBuffer.flip();

  // buffer contains all received bytes up to its limit()

  // since we do not care what those bytes are, we pass them on to the client - the attachment in this example.
  // (the cast assumes the attached object is your own connection abstraction, "MyConnection" here, set when registering the key)
  ((MyConnection)attachment).pull(readBuffer);
}

the twist here is, the pull method would stall the selector-thread if packed with too much logic. another idea would be to have the attachment/client-abstraction provide the read-buffer - so we could read into it, just notify the “client”, and do the work somewhere else.
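a sketch of that alternative - MyConnection, readBuffer(), bytesReceived() and disconnected() are invented names for your own client abstraction :

private void read(SelectionKey key) throws IOException
{
  SocketChannel socketChannel = (SocketChannel)key.channel();
  MyConnection  client        = (MyConnection)key.attachment();

  // the client owns its read-buffer, the selector-thread only fills it
  ByteBuffer buffer  = client.readBuffer();
  int        numRead = socketChannel.read(buffer);

  if(numRead == -1)
  {
    // connection closed
    key.cancel();
    client.disconnected();
    return;
  }

  // no parsing here : just tell the client "you have numRead new bytes",
  // decoding happens later on the client's own worker/game thread
  client.bytesReceived(numRead);
}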

the last one is [icode]SelectionKey.OP_WRITE[/icode] - which i won’t explain too much. there is a lot of controversy about it. it’s basically used for outgoing data when you want to write to sockets from the selector-thread.

the twist is, sending data is thread-safe - the NIC is doing that work for us anyway. that means sending data is possible from any thread at any time. i use OP_WRITE only in a few situations where i know i have outgoing data stalled. using OP_WRITE also means that we would need to copy all outgoing bytes into some cache/ring-buffer which is purged by the selector-thread - which introduces latency.
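to make that concrete, a sketch of sending from any thread, falling back to OP_WRITE only when the socket buffer is full - pendingWrites is an assumed thread-safe per-connection queue which write(key) on the selector-thread drains later :

// called from any thread (game logic, chat, ...) - not only from the selector-thread
void send(SocketChannel channel, SelectionKey key, ByteBuffer data) throws IOException
{
  channel.write(data); // writes as much as the socket buffer accepts right now

  if(data.hasRemaining())
  {
    // socket buffer was full : park the rest and let the selector-thread flush it via OP_WRITE
    pendingWrites.add(data);
    key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
    selector.wakeup(); // unblock select() so the changed interest is picked up
  }
}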

o/

basil_: bare NIO is quite buggy. I wouldn’t advise it to anybody but experts. There are several critical bugs in Selector; the most commonly observed is the issue of select() no longer blocking, causing rapid firing with no selected keys. You could work around this issue by introducing sleeps, but the Selector gets into a worse and worse state, making Selector.select() consume extraordinary amounts of CPU power while effectively doing nothing useful. The only way to recover is to hoist all your active connections over to a new Selector, until that one breaks too. Rinse and repeat.

A networking loop gets as complex as its developer is experienced. I like to compare it with a game loop: the problem at hand looks trivial, but eventually you end up with a hundred lines of code that merely handle timing. Same for networking, but worse. Oh, did you know closing a TCP connection actually blocks… threadpool, here we come!

Long story short: use a library, where all these experts have scratched their heads to solve things you don’t want to encounter.

you’re surely right.

yet …

most selector-bugs are fixed. even if not, it is not too hard (no expert knowledge required) to deal with them if you follow a few simple rules. 99% of errors can be avoided by putting “critical” operations on the selector-thread :

  • create/close channels
  • modify sockets
  • create/register keys
  • adjust key-operations

if a network loop gets too complex, it’s a bad network loop. unlike rocket science, there is not much happening : multiplexing sockets, tracking connections, routing data and dealing with errors. whatever else is probably application code and, at best, decoupled from the network-code. the first thing which comes to my mind is de/serialization - which already is not part of networking.

i have a little trouble with “expert” thinking. don’t get me wrong Riven :wink: … more in general, when reading papers on whatever topic (NIO is just one), most of the time things are implemented very very simply and the “core” idea is very very simple, but it is followed by a description/explanation/concept/domain which is very very complex, written in a language nobody but the “expert” can understand (head stuck up their arse). i’m not saying rocket science is easy to express - but for most things there is no reason to skip clear, short, precise, unbiased and unambiguous language - especially if the topic itself is simple - which is nice ’cos stupid people like me can get it. to me this is also part of the “job security” thing, which is annoying as hell.

now, NIO is not that complicated :wink: … take KryoNet : looking at the source, there isn’t much more happening on the selector-thread topic than what i described here. that’s good news. networking should be very very simple and kept plain; simple plan, good plan. fewer errors, fewer application hiccups. (tho’ kryo is a bit too simple for me)

closing a connection blocks - if there is outgoing data. even then you can use SO_LINGER to configure the duration. if that’s too much, closing asynchronously is fine too. a single disconnect-thread is enough, no need for a pool.

  • adjust keys, unset OP_READ | OP_WRITE (selector-thread)
  • remove connection from server implementation (selector-thread)

the selector-thread and servers do not process the connection any more; nobody cares if its socket is still “connected”. figuring out immediately that a socket was closed remotely is not possible anyway. that’s how TCP is.

  • run close() (another thread).

nothing complicated, not blocking.
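a sketch of those steps - closeExecutor would be a plain Executors.newSingleThreadExecutor(), and removeConnection() is whatever bookkeeping your server does :

// on the selector-thread : stop caring about the connection
key.interestOps(0);                  // unset OP_READ / OP_WRITE
server.removeConnection(connection); // your own bookkeeping

// on the disconnect-thread : the only place that may block
closeExecutor.execute(new Runnable()
{
  @Override public void run()
  {
    try
    {
      channel.close(); // may block while pending outgoing data is flushed
    }
    catch(final IOException ignored)
    {
      // nothing left to do, the connection is gone either way
    }
  }
});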

long story short : know what networking is (at least it’s not too complicated), and if you look at “libraries”, think for yourself. most of the time those “experts” scratched their heads but “solved” bugs/problems by hiding them in obfuscated code - or worse.

thanks for your feedback Riven. o/

Anything ‘too’ is bad, sure, that’s an easy argument. Simplifying too much is also ‘bad’. Without proper context such statements are meaningless. In this case the additional complexity is caused by bugs in NIO, for which workarounds are needed until they are solved. This is not a case of ‘too complex’, this is a case of ‘complexity for reliability’. If reliability is not a concern, then you are absolutely right: KISS.

From your replies it seems you were not aware of the blocking nature of SocketChannel.close(). This API ‘peculiarity’ is enough to slow a busy server to a crawl when performed on the network-thread. Your workaround with SO_LINGER is rather dangerous, because if the timeout expires, the pending TCP packets are dropped. You might not care, but the other end surely does: think of an http-response being truncated, leaving the browser in limbo.

Closing sockets on 1 worker-thread is a bad idea, as it adds a significant delay to initiating a channel closure in case the worker-thread is dealing with a slow peer, or simply a lot of them, queued up. Therefore: a threadpool is mandatory.

That KryoNet doesn’t have these workarounds is more of a worry than a reassurance.

You can wave these problems and their (library provided) solutions away, but that doesn’t mean you won’t run into them eventually.

The specification of NIO is nice, simple and concise - the implementation is not as ‘battle hardened’ as one would expect. You need an abstraction layer to be able to take advantage of it - preferably 3rd party. Either that, or you’ll have to fix all these issues in your own code, when they break your game’s network I/O, over and over again.

How you treat/explain NIO is how it is supposed to work, indeed, but sadly that’s not relevant in the real world, where services have to run for months without degrading performance and/or interruption.