Conclusion: despite the hype, the only major benefits appear to be two-fold:
- allows you to use different programming patterns (if you don’t like multiplexing and events, for instance; see the sketch after this list)
- if your OS has really crappy primitives for multiplexed event/edge IO, it might actually have really hot asynch IO instead, and this lets you take advantage of that.
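For anyone who hasn’t seen the pattern being contrasted, here’s a minimal sketch of the multiplexed, event-driven style (JDK 1.4-era java.nio) that asynch IO is an alternative to. The class name, port, and buffer size are arbitrary placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Classic JDK 1.4 multiplexed IO: one thread, one Selector, readiness events.
public class MultiplexedEcho {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(9000)); // arbitrary port
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096); // arbitrary size
        while (true) {
            selector.select(); // block until at least one channel is ready
            Iterator it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = (SelectionKey) it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client =
                        ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n < 0) { // -1 means the peer closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf); // echo back what arrived so far
                    }
                }
            }
        }
    }
}
```

Asynch IO inverts this: instead of polling “which channels are ready?” in a loop, you issue the read and get called back when it completes, which some people find much more natural.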
EDIT: I’m not trying to put AIO down, but I’m not comfortable with IBM’s effusive claims that it’s inherently “better” than edge-driven multiplexing; they’re just as good as each other IME…
…and a possible third, but I’ll wait to see how it performs “in the field”…
- Sun’s NIO is pretty crap at taking advantage of direct buffers and the other features that should, in theory, improve performance by moving data straight from device to device (avoiding the bottleneck of e.g. a round trip through CPU / JVM memory first). In most cases, their “direct I/O” seems no faster than copying by hand, even though the hand-rolled copy should in theory be significantly slower, and the direct path just isn’t used in enough places. IBM seems to be implying that their wrapping of OS asynch IO is more effective at exploiting these features; if so, it could provide a noticeable performance boost. (See the transferTo sketch below for the API I mean.)
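To make that third point concrete, here’s a minimal sketch of the kind of Sun “direct” path I mean: FileChannel.transferTo() is allowed (but not required) to let the OS move the bytes device-to-device without dragging them through JVM memory. The file names are placeholders:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

// JDK 1.4 "zero-copy" file copy: the JVM *may* delegate the whole transfer
// to the OS (e.g. sendfile on platforms that have it), or it may silently
// fall back to an ordinary read/write loop through a buffer, which is
// exactly the complaint above.
public class DirectCopy {
    public static void main(String[] args) throws IOException {
        FileChannel in = new FileInputStream("source.dat").getChannel();
        FileChannel out = new FileOutputStream("dest.dat").getChannel();
        try {
            long pos = 0;
            long size = in.size();
            while (pos < size) {
                // transferTo() may move fewer bytes than requested, so loop
                pos += in.transferTo(pos, size - pos, out);
            }
        } finally {
            in.close();
            out.close();
        }
    }
}
```

A direct ByteBuffer (ByteBuffer.allocateDirect()) is the same idea for socket IO: the bytes live outside the Java heap, so the JVM can hand them straight to the OS.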
FYI I haven’t done extensive cross-platform NIO performance testing; I’ve done a fair amount, but not enough to draw firm conclusions. Sun have been fixing major NIO bugs in every JDK release, including the upcoming 1.5, so I’ve been taking the “wait and see” approach. My experience of IBM is that they tend more towards “get it right first time, about as well as we’ll ever get it”, so maybe this is worth delving into deeply.
PS healthy warning to anyone who’s not used to AW: beware of licensing restrictions! Read them carefully; it’s a different kettle of fish compared to a Sun JDK technology…