server with NIO - sending data

Hello.
So I’ve learned how to accept connections and read data when it comes (registered operations for accepting and reading). Now I’m a little bit confused about sending data. I want my server to send a packet every 50 ms (or whatever is best). How do I do that?
Problem is I don’t understand how registering OP_WRITE works (I haven’t even tried it yet). Does the selector then notify you every time the channel is ready for writing? That doesn’t make much sense, since the channel is ready pretty much all the time from the moment it’s opened. My initial thought is a new thread for sending data, which would sleep for 50 ms and then send data across the channel to every connected client. I would prefer it if this could be implemented without creating a new thread, in the same thread that does the listening.
Thanks.

Yes, OP_WRITE works just like OP_READ, it tells you when the channel is ready.

It will be ready very often, unless the OS buffers are full.

If you have nothing to send, but the channel is writable, you can sleep(1) to prevent the thread from consuming too many CPU cycles.
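For reference, a minimal sketch of a selector loop reacting to OP_WRITE readiness (names and structure are assumptions for illustration, not from the posts above). The key point: a socket is “writable” whenever the OS send buffer has room, which is nearly always, so only act when you actually have data queued.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

final class WriteReadyLoop {
    static void pump(Selector selector) throws IOException {
        while (true) {
            selector.select();                                        // blocks until something is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isValid() && key.isWritable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer pending = (ByteBuffer) key.attachment();  // outgoing data, if any
                    if (pending != null && pending.hasRemaining()) {
                        ch.write(pending);                            // may write fewer bytes than remaining
                    }
                }
            }
        }
    }
}
```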

If you have nothing to send the thread should probably block on a mutex that is protecting your list of outgoing packets or something like that.

That would also block the OP_READ, as that’s (probably) in the same thread.

Ah… well, I tend to prefer proper blocking as opposed to spinning in a loop with nothing to do… sleep(1) is more of a hack to make that tolerable than a properly designed solution to the problem.

But I’ll shut up now, since I haven’t used any of the NIO networking stuff yet :slight_smile:

So what do I do? Sending and reading data is extremely fast, so I could just try to send the data without testing readiness, and if an exception occurs, catch it, sleep 1 ms and try again? It’s a simple game with less than 50 bytes per packet, but that isn’t a real solution either, not what I had in mind. None of the tutorials out there mention this; they just send data right after reading it, as a response, and that certainly doesn’t satisfy my needs for a server.

Do I/O in a separate thread if you’re using selectors (in a loop).

Otherwise just write to the channel (and check how many bytes were actually sent)
and just read from the channel (and be prepared to read 0 bytes very often)
this can be done in your main loop, with the channels in non-blocking mode, that is.
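A hedged sketch of that “just write / just read and check the return values” advice (helper names are invented here):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class NonBlockingIo {

    // write() may send fewer bytes than the buffer holds; returns true once everything went out.
    static boolean tryWrite(SocketChannel channel, ByteBuffer out) throws IOException {
        channel.write(out);                 // sends 0..remaining() bytes
        return !out.hasRemaining();         // false => partial write, retry later
    }

    // read() returns 0 very often in non-blocking mode; -1 means the peer closed the connection.
    static boolean tryRead(SocketChannel channel, ByteBuffer in) throws IOException {
        int got = channel.read(in);
        if (got == -1) {
            channel.close();
            return false;
        }
        return true;
    }
}
```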

I know the basics… read the first post. What I don’t know is how to send data properly, as described in the first post.

I answered your question in the last 3 lines. :slight_smile:

And yes, they are the basics, but you were asking for them. :slight_smile:

I am using a selector, so everything after the word “otherwise” is, as I read it, not usable to me (or maybe my English is bad, but otherwise = in the other case => if I’m not using selectors :-\ ).

To summarize, I am using select(), which blocks, for accepting and reading data in a separate network I/O thread. I don’t know how to implement writing in the same thread, because if I register OP_WRITE, select() will wake up all the time telling me the channel is ready for writing… I just want to send data every 50 ms, without a million useless selects in between.
So with my knowledge I can only think of one more thread for sending the data, sleeping 50 ms between sends. The only potential problem: can one thread send data on a channel at the same time another thread reads from it? That could happen…

[quote]I am using a selector, so all stuff after otherwise word, when reading, is considered not usable to me
[/quote]
In your case you shouldn’t.

Either you use selectors, and let the OS do the timing, or you do your own timing, without selectors.

I think you can just write to the ClientChannels without registering OP_WRITE at all. Some NIO servers use the following setup (a rough skeleton in code follows the list):

  • ThreadA reads the selector for incoming packets. Packets are put onto a pending incoming-actions queue.
  • ThreadB removes actions from that queue and processes them. Outgoing packets are put onto an outgoing-actions queue.
  • ThreadC runs at a constant rate, removing from the outgoing queue and writing packets to the ClientChannels.
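A rough skeleton of that three-thread layout (class, method and queue names are all invented here; this is a sketch of the structure, not anyone’s actual server):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class ThreePhaseServer {
    private final BlockingQueue<Packet> incoming = new LinkedBlockingQueue<>();
    private final BlockingQueue<Packet> outgoing = new LinkedBlockingQueue<>();

    void start() {
        // ThreadA: selector loop; completed packets end up on 'incoming'
        new Thread(this::selectorLoop, "ThreadA-selector").start();

        // ThreadB: action/game logic; results go onto 'outgoing'
        new Thread(() -> {
            while (true) {
                try {
                    handle(incoming.take());      // blocks until a packet arrives
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "ThreadB-handler").start();

        // ThreadC: every 50 ms, drain 'outgoing' and write to the client channels
        ScheduledExecutorService sender = Executors.newSingleThreadScheduledExecutor();
        sender.scheduleAtFixedRate(this::flushOutgoing, 50, 50, TimeUnit.MILLISECONDS);
    }

    private void selectorLoop()   { /* accept/read, assemble Packets, incoming.add(...) */ }
    private void handle(Packet p) { /* process action, outgoing.add(...) */ }
    private void flushOutgoing()  { /* write queued packets to their SocketChannels */ }

    private static final class Packet { /* payload bytes + target channel */ }
}
```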

I think NIO SocketChannels are thread-safe in that sense. You can read packets in one thread and write in another. They surely synchronize something underneath, but that’s not on our side.

Or use a single thread to handle OP_READ and OP_WRITE. Once you have an outgoing packet, you register the SocketChannel for OP_WRITE. Use your normal selector while loop to handle accept/read/write events. As soon as you have written a complete packet, you can unregister OP_WRITE. It should then not create unnecessary wakeups in the selector while loop.

To check whether the complete packet has been written, you can accumulate the written byte count, or simply check the write ByteBuffer to see when its position has reached the limit (hasRemaining() returns false).
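A sketch of that single-thread approach, assuming one pending ByteBuffer per client carried as the key attachment (names are mine, not from the post): register OP_WRITE only while a packet is pending, and drop it again once the buffer is drained.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

final class WriteInterest {

    // Call when a packet becomes available for this client.
    static void queueWrite(SelectionKey key, ByteBuffer packet) {
        key.attach(packet);                                          // pending data travels with the key
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);  // start receiving writable events
        key.selector().wakeup();                                     // in case select() is currently blocking
    }

    // Call from the selector loop when key.isWritable().
    static void onWritable(SelectionKey key) throws IOException {
        ByteBuffer packet = (ByteBuffer) key.attachment();
        ((SocketChannel) key.channel()).write(packet);
        if (!packet.hasRemaining()) {                                // whole packet went out
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
            key.attach(null);
        }
    }
}
```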

Actually I had already written what I said I’ll do… and so far it works.
I have one thread to accept connections and read data. When a connection is accepted, I put that channel in a client list. The second thread is for sending only; it waits until the client list has some connections, then loops through them and sends data through the channels every second. So in the end it seems I don’t even need to bother with OP_WRITE, but it would be nice if someone could confirm that a thread reading from a channel and a thread writing to the same channel are actually safe together, and not that the operations are just so fast that a bug simply hasn’t happened to me yet.

I’m now having problems with the client list (a HashSet): synchronizing the accept thread, which adds to the list, with the send thread, which iterates through it and gets the channels to send data through. I just started on this, so I’ll post again after some thinking and experimenting if I can’t solve it.
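One way to share that client set between the accept thread and the send thread (a sketch only, with invented names): wrap the HashSet and hold its lock while iterating.

```java
import java.nio.channels.SocketChannel;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

final class ClientList {
    private final Set<SocketChannel> clients =
            Collections.synchronizedSet(new HashSet<>());

    // Accept thread
    void add(SocketChannel ch)    { clients.add(ch); }
    void remove(SocketChannel ch) { clients.remove(ch); }

    // Send thread: iteration over a synchronizedSet must be guarded manually,
    // otherwise adds from the accept thread can cause a ConcurrentModificationException.
    void forEachClient(Consumer<SocketChannel> action) {
        synchronized (clients) {
            for (SocketChannel ch : clients) {
                action.accept(ch);
            }
        }
    }
}
```

A CopyOnWriteArraySet would also work here, since clients are added far less often than they are iterated.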

One thing I didn’t implement is queueing of actions; so far I just build the buffer and write it directly. Is this important?

Yes, queueing is important, as not all bytes in the ByteBuffer may have been written to the channel. So in case of a partial send, you want that data in some kind of queue so it can be sent later, when the OS buffers have space available again.
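A sketch of per-client queueing for partial sends (an assumed structure, not taken from the thread): whatever write() did not take stays queued and is retried on the next pass.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Deque;

final class OutgoingQueue {
    private final Deque<ByteBuffer> pending = new ArrayDeque<>();

    void enqueue(ByteBuffer packet) {
        pending.addLast(packet);
    }

    // Returns true when everything queued for this client has been written.
    boolean flush(SocketChannel channel) throws IOException {
        while (!pending.isEmpty()) {
            ByteBuffer head = pending.peekFirst();
            channel.write(head);
            if (head.hasRemaining()) {
                return false;        // OS buffer full: stop, try again on the next pass
            }
            pending.removeFirst();   // this packet is fully sent
        }
        return true;
    }
}
```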

Kova: how do you handle incoming packets to determine where a packet ends? Do you have a terminator byte, or does each packet have a lengthOfData value at the start?

What I do: I have a ByteArrayOutputStream attached to each SocketChannel as its attachment object. I write incoming bytes to the stream and check for the terminator byte (I use the NULL byte). When I have read a NULL, I take the bytes from the stream and create a Packet object to be put on the pending-packet queue. The byte stream is then emptied. But this is tricky and you must handle a few special cases.

Test case:

# = terminator byte, ABC = msg1, CDE = msg2, EFG = msg3

Reading a SocketChannel may give me bytes belonging to 0, 1 or more packets. It is possible to receive the bytes “ABC#CD” in one go, so I must put the left-hand bytes into the byte array stream, create a completed Packet instance, clear the stream, and put the right-hand bytes back into the stream because they belong to the next packet. Or it may give me “ABC#CDE#E”, where I read two completed packets in one go plus a fragment of a third packet.

All this is easy if you create a synchronized FIFO queue for incoming completed Packets. The action-handler thread takes packets from it and puts outgoing packets onto another FIFO queue.
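A sketch of that terminator-byte reassembly (the Packet type is simplified to a byte[], and the queue type is just an assumption): bytes read from the channel are appended to a per-client ByteArrayOutputStream, and every time the terminator shows up, the accumulated bytes become one completed packet on the FIFO queue.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.Queue;

final class PacketAssembler {
    private static final byte TERMINATOR = 0;   // NULL byte, as in the post
    private final ByteArrayOutputStream partial = new ByteArrayOutputStream();

    // Call after channel.read(buffer) and buffer.flip(); may produce 0, 1 or more packets.
    void feed(ByteBuffer buffer, Queue<byte[]> completedPackets) {
        while (buffer.hasRemaining()) {
            byte b = buffer.get();
            if (b == TERMINATOR) {
                completedPackets.add(partial.toByteArray());  // e.g. the "ABC" of "ABC#CD"
                partial.reset();                              // leftover bytes start the next packet
            } else {
                partial.write(b);
            }
        }
    }
}
```

The “ABC#CD” and “ABC#CDE#E” cases both fall out of the same loop: whatever follows the last terminator simply stays in the stream until more bytes arrive.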

You won’t see such special cases if clients don’t write data at a high rate, but it is pure luck not to see them happen. I had a Flash client that did it constantly, and I was puzzled for a few days about why the server lost or broke packets randomly.

I don’t :slight_smile: … I just wanted to see if it can be done my way (2 threads, listener and sender), since nobody answered how to build the server logic. I read somewhere about the NULL-byte terminator; I’ll try to implement that.

Exactly what I had in mind; I’d better implement the queue fast. Tell me, why a separate thread for the action handler? Can’t queueing be handled in the thread where you receive the data? Like, you read 1.5 packets, 1 packet goes to the queue, and half of the other goes to a temp buffer. Then next time you put temp + the rest of it onto the queue as a whole packet.

Queueing is meant to let two separate threads work together, that’s how I understand it. If you use the thread that received the data as the action handler, then you may block incoming OP_READ and OP_ACCEPT operations if the handler takes too much time. I use two threads in my server, and a FIFO queue is used to transfer packets between them.
ThreadA: handle op_accept, op_read and op_write
ThreadB: handle completed packets and do what need to be done

ThreadB may run at a constant rate to write data to the remote clients continuously (like an isAlive query or similar). You just take a packet from the FIFO queue on each “while(isRunning)” step, if one exists, and handle it. But in my case I don’t need to do anything if no packets are being sent, so ThreadB just idles until the FIFO queue notifies it of new data.
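Two hedged variants of ThreadB’s loop, matching that description (queue element type and names are placeholders): either block until the FIFO queue has data, or wake at a constant rate and also do periodic work.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

final class HandlerLoops {
    // Variant 1: idle until the queue has data (take() blocks).
    static void blockingLoop(BlockingQueue<byte[]> queue) throws InterruptedException {
        while (true) {
            byte[] packet = queue.take();
            // ... handle packet ...
        }
    }

    // Variant 2: constant rate; poll with a timeout so isAlive-style work still runs every ~50 ms.
    static void fixedRateLoop(BlockingQueue<byte[]> queue) throws InterruptedException {
        while (true) {
            byte[] packet = queue.poll(50, TimeUnit.MILLISECONDS);
            if (packet != null) {
                // ... handle packet ...
            }
            // ... periodic work (keep-alive queries etc.) ...
        }
    }
}
```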

Yeah, handling received data in the same thread might delay the reading operation, but since my game is small and I’ll send packets every 50 ms or so, I don’t worry about it.

FYI that’s completely the wrong thing to do; it’s around 100 times slower than doing things properly (blocking on a mutex or, best of all, sending what you need and then deregistering).

If you want to send every X milliseconds, you should (a rough sketch of this loop follows the list):

  1. wait until the next 50 ms boundary
  2. register for WRITE
  3. send
  4. repeat until all sent
  5. de-register for WRITE
  6. … go and do other logic, then come back to 1. at the appropriate moment
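A sketch of those steps in a single selector thread (my own naming; only the select-with-timeout and interest-ops calls are standard NIO): select() with a timeout wakes the loop at or before the next 50 ms boundary, OP_WRITE is registered only while there is data to send and dropped again afterwards, so reads are never missed and the selector does not spin on always-writable channels.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

final class TimedSendLoop {
    private static final long PERIOD_MS = 50;

    void run(Selector selector) throws IOException {
        long nextSend = System.currentTimeMillis() + PERIOD_MS;
        while (true) {
            long wait = nextSend - System.currentTimeMillis();
            if (wait <= 0) {
                registerWritesForPendingClients(selector);   // step 2: add OP_WRITE where data is pending
                nextSend += PERIOD_MS;                       // step 1 for the next round
            } else {
                selector.select(wait);                       // sleeps, but still wakes for incoming reads
            }
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isValid() && key.isReadable()) { /* read as usual */ }
                if (key.isValid() && key.isWritable()) {
                    // steps 3-5: write, and once the packet is complete drop OP_WRITE, e.g.
                    // key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
                }
            }
            selector.selectedKeys().clear();
        }
    }

    private void registerWritesForPendingClients(Selector selector) { /* step 2 */ }
}
```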

How does this solve putting the send in the same thread as the read? You still have to sleep 50 ms in step 1 and miss incoming data… or am I the one who missed something? :slight_smile:
And why register for write? Isn’t that a selector thing? Why not just write if the channel is open?