NIO: using same ByteBuffer concurrently?

one more fundamental question:

I quite often need to broadcast exactly the same message to many connected clients. The “intuitive” solution is to create a ByteBuffer, populate it with data, and write it to several SocketChannels. Of course this will not work, because each SocketChannel::write advances the buffer’s read position to an unpredictable value. (It may also happen that SocketChannel::write has to be called several times before the buffer’s whole content is sent.)

The second solution is to simply create as many buffers as necessary and fill them all with the same data. That works, but it is surely very inefficient.

Would it be possible to have just one ByteBuffer shared by many channels, and remember/restore the current write position per SocketChannel? Is that too crazy an idea? Roughly something like:


import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class Connection {
    private SocketChannel channel;
    private ByteBuffer buffer;     // shared by several Connection instances
    private int pos;               // this connection's own position into the shared buffer

    public void send() throws IOException {
        buffer.position(pos);      // restore this connection's position
        channel.write(buffer);     // may write only part of the remaining bytes
        pos = buffer.position();   // remember how far we got
    }
}

Or how do you broadcast the same messages?

ByteBuffer.duplicate()
Note that the content is shared while the position, limit and mark values are independent.
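Roughly, a broadcast with duplicate() could then look like this. Just a sketch, assuming blocking channels and an already-built channel list; none of these names come from the thread:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.List;

class Broadcaster {
    // One read-only master buffer holds the message bytes.
    private final ByteBuffer master;

    Broadcaster(byte[] message) {
        this.master = ByteBuffer.wrap(message).asReadOnlyBuffer();
    }

    // Each channel gets its own duplicate: same backing content,
    // but independent position/limit/mark, so the writes don't interfere.
    void broadcast(List<SocketChannel> channels) throws IOException {
        for (SocketChannel channel : channels) {
            ByteBuffer view = master.duplicate();
            while (view.hasRemaining()) {   // blocking mode: loop until fully written
                channel.write(view);
            }
        }
    }
}

With non-blocking channels you would instead keep each duplicate (and its position) per connection until it drains, which is basically what the rest of the thread discusses.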

that’s exactly what I needed, thank you :slight_smile:

Ohhh, the buffer.duplicate() method…nice. I’ve never noticed it in the NIO API before. My mistake. I have a ClientConnection class for each NIO socket. Each ClientConnection has private ByteBuffers dedicated to that client, but some of them have identical contents, like static responses and terminator bytes.

Each time I have written a buffer I rewind it, but the content never changes. The duplicate() method allows me to use a global shared buffer instance instead.

There is a bug in duplicate() so watch yourself. duplicate() always returns a new ByteBuffer that is big-endian even if the old ByteBuffer was little-endian. It is a known bug that is not going to be fixed. My duplicate() calls are always followed by .order( ByteOrder.nativeOrder() ).
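The workaround is literally one line after each duplicate() call. A tiny sketch (copying the original’s order is the most general form; the nativeOrder() variant above does the same job for native-order buffers):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class DuplicateOrder {
    public static void main(String[] args) {
        ByteBuffer original = ByteBuffer.allocate(64).order(ByteOrder.LITTLE_ENDIAN);

        // The duplicate comes back big-endian regardless of the original's order,
        // so re-apply the order explicitly right after duplicating.
        ByteBuffer dup = original.duplicate().order(original.order());

        System.out.println(original.order());   // LITTLE_ENDIAN
        System.out.println(dup.order());        // LITTLE_ENDIAN
    }
}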

What is the justification for not fixing it? I can’t see a fix actually breaking code. I mean, the contents of the buffer are either big-endian or little-endian; they aren’t going to change because you made a duplicate.

I suspect the justification is “there is an easy workaround and there are higher priority bugs that need attention.”

In the end I decided not to use duplicate(), but the approach I mentioned in my first post: the exact same buffer may be shared by many connections, but each connection remembers its current write position. It seems to work well.

The reason is that I maintain a pool of re-usable ByteBuffers (not really plain ByteBuffers, they are wrapped in a “Message” class with additional functionality) and I wanted to have a unified way to get/release a Message. When a message is queued into some connection, its internal reference counter is incremented. When a connection successfully sends a message, the counter is decremented. When it reaches zero, the message is automatically “recycled” (returned to the pool).
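In code, the reference-counting part could look roughly like this. Only a sketch: the real Message class has more functionality, single-threaded use is assumed, and MessagePool and the method names are invented here:

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the reference-counted, pooled message described above.
class Message {
    final ByteBuffer buffer;
    private final MessagePool pool;
    private int refCount;

    Message(int capacity, MessagePool pool) {
        this.buffer = ByteBuffer.allocate(capacity);
        this.pool = pool;
    }

    // Called when the message is queued on a connection.
    void acquire() {
        refCount++;
    }

    // Called when a connection has finished sending the message.
    void release() {
        if (--refCount == 0) {
            buffer.clear();
            pool.recycle(this);   // back to the pool, ready for reuse
        }
    }
}

class MessagePool {
    private final Deque<Message> free = new ArrayDeque<>();
    private final int capacity;

    MessagePool(int capacity) {
        this.capacity = capacity;
    }

    Message get() {
        Message m = free.poll();
        return (m != null) ? m : new Message(capacity, this);
    }

    void recycle(Message m) {
        free.push(m);
    }
}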

That’s also because I don’t like to create new instances of any class when it’s not really necessary.

That may be justification for putting off the fix, but it certainly doesn’t qualify as justification for not fixing the bug ever!
The time and effort needed to put the one-line workaround into the JRE is hardly going to derail development on other issues!

What is weird is that I just went to look for the original bug report and it doesn’t appear to be there anymore. The original said it was not going to be fixed because it would break too much existing code. Now the bug report says that a fix is in progress. Odd. I first read the original bug report only a couple weeks ago.

Who said it’s never going to be fixed?

Ah. Likely someone thought they shouldn’t fix it and then someone else reviewed it and said “that’s stupid; as-is it breaks the spec.”

Edit: Here’s a secret to avoid heartburn. Don’t take what the bug database says just before a release terribly seriously. There is always some finagling that happens to make the paper shufflers happy, resulting in bug downgrading that is fixed after the release…

Jumping back to the original topic.

Be VERY careful if you aren’t going to use duplicate().
Your topic says “concurrently”. Actually trying to use the same Buffer object concurrently from two different threads is likely to cause nasty race conditions.

I’m not using threads. Each connection keeps a list of messages queued for sending (the same message may exist in several queues). The connections are sequentially asked (from the same thread) to flush their queues. Each connection also maintains the current “write position” for the buffer which is currently being sent. This position is restored before channel::write is called and stored after it. It may take several iterations until the whole message is sent, and different connections may have different current write positions for the same buffer.
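Roughly, the idea looks like this. Just a sketch of what I described, assuming non-blocking channels; the names are invented for the example:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch of the single-threaded flushing described above. The same ByteBuffer
// instance may sit in several connections' queues; each connection only keeps
// its own write position into the buffer currently at the head of its queue.
class Conn {
    final SocketChannel channel;                // assumed to be non-blocking
    final Deque<ByteBuffer> queue = new ArrayDeque<>();
    int pos = 0;                                // write position into the head buffer

    Conn(SocketChannel channel) {
        this.channel = channel;
    }

    // Flushes until the queue is empty or the socket can't take more right now.
    void flush() throws IOException {
        ByteBuffer buf;
        while ((buf = queue.peek()) != null) {
            buf.position(pos);                  // restore this connection's position
            channel.write(buf);                 // possibly a partial write
            pos = buf.position();               // remember how far we got
            if (buf.hasRemaining()) {
                return;                         // socket buffer full; retry on the next pass
            }
            queue.poll();                       // whole message sent
            pos = 0;
        }
    }
}

// One thread walks all connections in turn.
class Flusher {
    void flushAll(List<Conn> connections) throws IOException {
        for (Conn c : connections) {
            c.flush();
        }
    }
}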

So far it seems to work. Do you think there is some conceptual problem?

I guess exactly the same thing happens when duplicate() is called: two objects refer to the same byte array. I just maintain the write positions myself, and this way I don’t have to allocate new objects. I’m not really sure if it was worth it, but I was always a bit touchy and paranoid when it comes to unnecessary object allocation :slight_smile:

No, not quite. With duplicate() you have separate storage for all the internal buffer pointers that are modified when you use it. With a single buffer they all have to use the same storage.

That’s fine as long as it’s truly sequential, but from two unsynchronized threads you have a race condition your way, whereas you don’t with a duplicate.

With modern VMs, that’s probably over-paranoid. They’ve gotten quite good at handling small, short-lived objects.