NIO question this time :)

Hi
I’m rewriting some network code (again), and as part of doing this I’ve read blahblahblah’s doc. One thing I wasn’t aware of is that when you write your ByteBuffer to a non-blocking SocketChannel it might not write all the bytes. So what I was thinking of is having a thread on the client, and one on the server, that selects on OP_WRITE (only those channels for which messages have been sent are registered), blocks on the select, and then carries on pumping bytes down until either it can’t send any more, in which case it re-registers that channel, or the message is sent. This is more of a design question, but I’m after comments.

My reading is done by calling poll on my endpoints, but this just calls selectNow on the channels that are registered for OP_READ. The only reason I think I need this thread to handle the writes is that I don’t want to rely on the poll being called for writing messages; if I only poll say 10 times a second then my latency will be at least 100ms for any message that can’t be written in one go, on top of the actual network lag.
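
Something along these lines is what I had in mind for the writer thread (just a sketch: the names are made up, there’s no error handling, and a real version would queue multiple messages per channel rather than attaching a single ByteBuffer to the key):

[code]
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Writer thread: a channel is only registered for OP_WRITE while it has
// a pending message; once the buffer is drained we stop selecting on it
// until the next message is queued.
public class WriterThread extends Thread {

    private final Selector selector;

    public WriterThread() throws IOException {
        selector = Selector.open();
    }

    // Called from the game thread when a message is queued for a channel.
    public void send(SocketChannel channel, ByteBuffer message) throws IOException {
        SelectionKey key = channel.keyFor(selector);
        if (key != null && key.isValid()) {
            key.attach(message);
            key.interestOps(SelectionKey.OP_WRITE);
        } else {
            // NB: register() can block against a select() in progress, so a
            // real version would hand the registration to the writer thread.
            channel.register(selector, SelectionKey.OP_WRITE, message);
        }
        selector.wakeup();
    }

    public void run() {
        try {
            while (!isInterrupted()) {
                selector.select();   // block until some channel is writable
                Iterator it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = (SelectionKey) it.next();
                    it.remove();
                    if (!key.isValid() || !key.isWritable()) {
                        continue;
                    }
                    SocketChannel channel = (SocketChannel) key.channel();
                    ByteBuffer buf = (ByteBuffer) key.attachment();

                    // Pump bytes until the message is gone or the socket
                    // buffer fills up (write() returns 0).
                    while (buf.hasRemaining() && channel.write(buf) > 0) {
                        // keep pumping
                    }

                    if (!buf.hasRemaining()) {
                        // Fully sent: stop selecting OP_WRITE for this channel
                        // until the next message is queued.
                        key.interestOps(0);
                    }
                    // Otherwise leave OP_WRITE set; select() will wake us again
                    // when the channel can take more bytes.
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
[/code]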

Comments/suggestions?

Cheers

Endolf

[quote]The only reason I think I need this thread to handle the writes is that I don’t want to rely on the poll being called for writing messages; if I only poll say 10 times a second then my latency will be at least 100ms for any message that can’t be written in one go, on top of the actual network lag.
[/quote]
Assuming you are polling because it’s one of the stages in your game loop, then this sounds OK. As you’ve pointed out, if you don’t loop until all the bytes are written, the client might receive stuttered data that arrives in chunks ten times a second :).

Without an extra thread, you could simply loop until the whole buffer is written:

 while (buf.hasRemaining()) channel.write(buf);

but if a client has a particularly slow connection you might run out of your timeslice in your game loop. I’m guessing this is the other reason why you’d like to push that while loop out into a separate thread, in addition to the stuttering problem you mentioned?

I often use separate threads for accepting, reading, and writing (so I’ll have three threads in the server, shared amongst all connected clients).
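
For what it’s worth, the rough shape of that is something like the skeleton below (purely illustrative: no error handling, and the hand-off of freshly accepted channels to the read selector would need to be done more carefully in real code):

[code]
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Skeleton of the three-thread layout: one thread (and one Selector) each
// for accepting and reading, shared by every connected client; the write
// thread would look like the WriterThread sketch earlier in this topic.
public class ThreeThreadServer {

    public static void main(String[] args) throws IOException {
        final Selector readSelector = Selector.open();

        final ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(4000));

        // Accept thread: hands each new client over to the read selector.
        new Thread("accept") {
            public void run() {
                try {
                    Selector acceptSelector = Selector.open();
                    server.register(acceptSelector, SelectionKey.OP_ACCEPT);
                    while (true) {
                        acceptSelector.select();
                        acceptSelector.selectedKeys().clear();
                        SocketChannel client = server.accept();
                        if (client != null) {
                            client.configureBlocking(false);
                            // NB: registering from another thread can block
                            // against select(); real code would queue the
                            // channel and register it from the read thread.
                            readSelector.wakeup();
                            client.register(readSelector, SelectionKey.OP_READ);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }.start();

        // Read thread: drains whatever has arrived on readable channels.
        new Thread("read") {
            public void run() {
                ByteBuffer buf = ByteBuffer.allocate(8192);
                try {
                    while (true) {
                        readSelector.select();
                        Iterator it = readSelector.selectedKeys().iterator();
                        while (it.hasNext()) {
                            SelectionKey key = (SelectionKey) it.next();
                            it.remove();
                            buf.clear();
                            ((SocketChannel) key.channel()).read(buf);
                            // ... hand the bytes off to the game logic; a read
                            // of -1 means the client closed and the key should
                            // be cancelled (omitted for brevity) ...
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }.start();
    }
}
[/code]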

Note, however, that this partly reflects the environment I’m in, where it’s really useful to be able to throttle the three different aspects separately; you can artificially delay each thread without affecting the others. (Generally I do that throttling in app code rather than with thread priorities in Java, because of the lack of guarantees offered by the Thread etc. APIs; AFAICS this lack is only because Java runs on some non-pre-emptive OS’s, but it makes my life simpler not to assume.)
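
By “fiddle with threads using app code” I mean nothing cleverer than something like this (a hypothetical helper, not from any library; each of the three threads gets its own instance):

[code]
// Hypothetical helper for throttling in app code: each worker thread calls
// pause() once per pass of its loop, so accept/read/write can each be slowed
// down independently without touching Thread priorities.
public class Throttle {

    private volatile long delayMillis;

    public void setDelayMillis(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    // Called once per pass of a thread's select loop.
    public void pause() {
        long d = delayMillis;
        if (d > 0) {
            try {
                Thread.sleep(d);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
[/code]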

It would be nice to say “cf. Game Programming Gems 4”, but that’s still not out for another 3 months :frowning: - I’ve got a gem in there that compares and contrasts different architectures for different parts of a server, and has example code for a 3 + N thread system (the N is for other parts of the server). :slight_smile: (Sadly, authors can’t read other authors’ gems until we get our free copies, so I can’t specifically recommend the book, but it’s likely to be of similar quality to the other three.)