Hello. I’m experiencing a very annoying problem while networking with the old IO, so I decided to try NIO. I will keep the old IO for the game client, but I’ll try to rewrite the server code with NIO.
I read the documentation and several articles from the internet, but I still consider myself a total NIO newbie and I could use some advice.
The first problem is with accepting new connections. In my current IO library I have a dedicated thread which just stores new connections; they are read asynchronously from the main server loop. Something like this:
public void run() {
    while (true) {
        try {
            Socket socket = serverSocket.accept();   // blocks until a client connects
            synchronizedList.add(socket);
        } catch (IOException e) {
            break;   // server socket was closed
        }
    }
}

public Socket getNewConnection() {
    // removes and returns the oldest pending connection, or null if none is waiting
    return synchronizedList.isEmpty() ? null : (Socket) synchronizedList.remove(0);
}
The question is, what is the best way to emulate this behavior with NIO? I figured I have to initialize it in the following way:
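Something like this, I guess (just a rough sketch of non-blocking accept with a Selector; the port number and class name are made up, and the read handling is omitted):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioAcceptLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);                    // required before registering
        server.socket().bind(new InetSocketAddress(4000));  // port 4000 is arbitrary
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();   // blocks until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // accept() won't block here: the key says a connection is pending
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                }
                // OP_READ handling for client channels would go here
            }
            selector.selectedKeys().clear();   // the selected set must be cleared by hand
        }
    }
}
```

This way there is no dedicated accept thread at all - accepting is just one more event in the main select loop.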
Thank you. BTW, for some reason I’m not able to run any game from that site. It always says: unable to run specified configuration (I have Windows XP with j2re1.4.2_08).
I also have one comment on that article (please note that I’m a total NIO newbie, so I may be totally wrong). I don’t understand some statements; basically you are saying: “With old IO you have to periodically poll EACH connection, which is evil. With NIO you don’t have to poll, you just ask: what’s ready?”
Again, I don’t know how exactly NIO is implemented, but doesn’t it have to do exactly the same thing internally? Poll EACH connection, every time you call Selector.select()? It is just deferring what you would do with old IO to some internal library. In the worst case it happens in some Java library; in the better case it happens in some native library. Can you imagine how else it could be implemented? By what other miracle could it learn which sockets have data ready to read?
Aaaand, finally, I have one more question. With old IO I’m using a single data buffer (well, I mean a plain byte array), which is re-used for all connections. If I have 1000 connections, I just iterate over them periodically calling InputStream.available() - and if any of them has enough data I read and immediately process it. Is it possible to achieve this with NIO? As far as I understand, you MUST have a dedicated ByteBuffer for EACH connection - because you never know in advance how many bytes are available, so you must be prepared to read incomplete messages. Well, it’s not too big an issue, but if there is some trick to use just a single data buffer please let me know; I would prefer it that way.
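To illustrate what I mean by incomplete messages, here is a rough sketch with a per-connection buffer and a made-up length-prefixed protocol (a 2-byte length header; the class and method names are just my invention):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class MessageBuffer {
    // Pulls every complete message out of a connection's dedicated buffer.
    // The buffer is assumed to be in "fill" mode (position = end of valid data),
    // as left by SocketChannel.read(buf). A trailing partial message stays in
    // the buffer until the next read appends the rest of its bytes.
    public static List<byte[]> drainMessages(ByteBuffer buf) {
        List<byte[]> out = new ArrayList<byte[]>();
        buf.flip();                           // switch to "drain" mode
        while (buf.remaining() >= 2) {
            buf.mark();                       // remember the message start
            int len = buf.getShort() & 0xFFFF;
            if (buf.remaining() < len) {
                buf.reset();                  // incomplete: wait for more bytes
                break;
            }
            byte[] msg = new byte[len];
            buf.get(msg);
            out.add(msg);
        }
        buf.compact();                        // leftover bytes move to the front,
        return out;                           // buffer is back in "fill" mode
    }
}
```

After each read into the connection’s buffer you call drainMessages, process whatever complete messages came out, and the compact() keeps any half-received message around for the next read.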
btw, do you find it useful/reasonable that NIO writes non-fatal messages to stderr? (It writes a message each time a connection is broken.) I was used to writing all my internal error messages to stderr and redirecting its output to a file. Now it’s all polluted with not-very-useful messages from NIO. Is there a way to turn it off?
We’re in the 21st century now, and stderr is a not-very-useful hangover from the 1970s. Do all your logging through a logging system, and output to multiple different logs in parallel.
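For example, a minimal java.util.logging setup (it has been in the JDK since 1.4) could look like this; the logger name and file name are arbitrary:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class ServerLog {
    // One named logger for the whole server; call init() once at startup.
    static final Logger LOG = Logger.getLogger("server");

    public static void init() throws IOException {
        FileHandler file = new FileHandler("server.log");  // log file path is arbitrary
        file.setFormatter(new SimpleFormatter());          // human-readable, not XML
        LOG.addHandler(file);
        LOG.setLevel(Level.INFO);
    }
}
```

Then instead of System.err.println you write ServerLog.LOG.warning("connection dropped") and so on, and you can add or remove handlers (console, file, whatever) without touching the call sites.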
Personally, I haven’t worked out how to get start-stop-daemon (*) to redirect the JVM process’s stderr to a file, so I never even see stderr - from past experience, I believe this is a bug in the JVM. I’ve seen several process-level bugs in Sun’s JVM over the past few years where it just wasn’t adhering to the standards properly, or was doing odd things it shouldn’t be doing that caused it to barf in quite a lot of standard Linux/Unix situations (e.g. it was impossible to run chroot’d. That’s pretty shocking).
(*) - a Debian thing that converts any program into a service, so that it is auto-started on bootup and can be controlled centrally using a script with “start”, “stop”, “restart”, “reload config”, etc. - all implemented for you. You just need to provide a few params for your app. In my case, that’s the JVM + a -jar to tell it which server to run ;)…
At the hardware level, when something happens there’s an interrupt. This interrupt allows the CPU to stop what it’s doing and handle the interrupt. Ultimately, if the OS is coded well and the libs are coded well, that interrupt can filter up to the thread that’s blocked on select without any polling at all.
Although, IIRC, Sun got it wrong with the first 3 versions of NIO on Linux and hooked into the crappy Linux asynch/non-blocking library. Doh :).
There’s no difference. However, what you’re doing is wrong, and buggy :P, unless you can absolutely guarantee that all requests will always fit within your buffer. And even then you’re relying on your hardware in a way that you cannot safely do. Sooner or later you’ll get a broken server that confuses the heck out of you.
If your buffer is 1 KB and your requests are all 900 bytes, then it sounds OK.
Except… e.g. if you get a corrupted request, and aren’t sure where the next request starts/stops until you have the whole thing in the buffer, you may get 899 bytes of request 1 (which you can’t dispose of yet) and 101 bytes of request 2 (which you aren’t seeing enough of to process).
Maybe your protocol has an explicit start such that if you get a corrupted request you can throw it away as soon as you see the start of the next request.
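A sketch of that resynchronisation idea, with a made-up protocol where every message begins with a magic start byte (0x7E here - the marker value and names are invented for illustration):

```java
import java.nio.ByteBuffer;

public class Resync {
    static final byte START = 0x7E;   // hypothetical start-of-message marker

    // After a corrupted request, discard bytes up to (but not including) the
    // next start marker, so parsing can resume at a known message boundary.
    // Returns true if a marker was found, false if the buffer was exhausted.
    public static boolean skipToNextMessage(ByteBuffer buf) {
        while (buf.hasRemaining()) {
            if (buf.get(buf.position()) == START) {
                return true;          // next message starts at current position
            }
            buf.get();                // consume one garbage byte
        }
        return false;
    }
}
```

The trade-off is that a 0x7E inside a message body would fool this, so real protocols that frame this way usually also escape the marker inside payloads.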
But… where do you think all these incomplete messages sit, if not read by your program? Your ethernet card only has a limited amount of on-board buffer, and your OS is only going to buffer a limited amount of that in memory for you (OS dependent). Something’s got to buffer it somewhere, and NICs usually have only 32 KB of buffer (not very much!).
So… you might as well have a separate input buffer for each incoming connection, whether you use IO or NIO - it ensures you never lose data because you’re not taking it out of the OS and hardware buffers fast enough. If you knew more about the OS and hardware, you might be better off not doing so. But since NIO gives you no guarantees about that, I don’t tend to bother.
[quote]However, what you’re doing is wrong, and buggy , unless you can absolutely guarantee that all requests will always fit within your buffer.
[/quote]
Yes, that was the case. The messages had a limited size and would always fit within the buffer. Anyway, it’s history. Since I switched to NIO I have a separate buffer for each connection (there is no other possibility with NIO).
[quote]At the hardware level, when something happens there’s an interrupt. This interrupt allows the CPU to stop what it’s doing and handle the interrupt. Ultimately, if the OS is coded well and the libs are coded well, that interrupt can filter up to the thread that’s blocked on select without any polling at all.
[/quote]
OK, if it’s possible to register socket “observers” at the operating-system level and the JVM can be notified without polling, then… it’s good.