[synchronized(this) { } ?]

If you’re talking about implementations of the Lock interface, then that’s wrong. The spec says that all implementations must enforce the same memory barrier semantics. Not that an interface can actually enforce that, but you can guarantee that the built-in implementations adhere to it.
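Concretely, the Lock javadoc requires the same memory synchronization semantics as the built-in monitor lock (it references JLS chapter 17), so these two forms are interchangeable for visibility. A minimal sketch, with made-up names:

```java
import java.util.concurrent.locks.ReentrantLock;

class Holder {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int shared; // guarded by monitor or lock

    // Monitor version: an unlock happens-before a later lock of the same monitor.
    void setWithMonitor(int value) {
        synchronized (monitor) {
            shared = value;
        }
    }

    // Lock version: the javadoc requires the same memory synchronization
    // semantics as the monitor version above.
    void setWithLock(int value) {
        lock.lock();
        try {
            shared = value;
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}
```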

Couldn’t agree more! Learn to use lock-free mechanisms, particularly lock-free queues, for passing information around. Praxis’ architecture is designed this way - there are almost no locks anywhere in the code base (except a couple of places where they’re enforced by third-party libs). The first part of this blog post on its architecture (and the linked-to post by Ross Bencina) may explain some reasons this is a good idea in general, particularly where you want consistent timing (e.g. framerate!)

Vectors are about the only place I use it, and that’s because I use Vectors to communicate between threads. Otherwise I avoid it; you can manage locking inside the class if the object really needs it.

Huh, I stand corrected. I was under the impression that the monitor semantics only applied to the state of the Lock itself, but the javadoc is pretty clear in referencing the JLS. That’s interesting, because I have screwed myself with out-of-order updates when using a Lock, which went away when I synchronized instead. Perhaps I let go of the lock too soon… I’d check the code, but I don’t think I ever committed the broken version. Oh well, more anecdotal evidence that multithreading is hard. :-\

I don’t use synchronized anymore. Just use the stuff in java.util.concurrent.*; it’s typically the same speed or better, and you have something that is generally easier to use.
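For example, a one-shot “ready” signal that would otherwise take a monitor, a flag, wait() in a loop and notifyAll() is two calls with java.util.concurrent. A minimal sketch:

```java
import java.util.concurrent.CountDownLatch;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        // One-shot signal; the hand-rolled equivalent needs a monitor,
        // a boolean flag, wait() in a loop, and notifyAll().
        CountDownLatch ready = new CountDownLatch(1);

        new Thread(() -> {
            // ... load resources, warm up, etc. ...
            ready.countDown(); // signal: initialization done
        }).start();

        ready.await(); // blocks until countDown() has been called
        System.out.println("worker is ready");
    }
}
```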

I have a strong tendency toward strict isolation of “ownership” notions, which requires virtually nothing special. I’d say the majority of my “sync” is via volatile. The most used concurrent data structure is a fixed-length single-producer/single-consumer circular list (wait-free). Toss in some atomic types and that covers the majority of concurrent communication. But hey, maybe I don’t get out enough.
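For the curious, a minimal sketch of what such a wait-free SPSC ring can look like (illustrative only - per the advice further down, don’t ship a hand-rolled concurrent structure without careful review; this assumes exactly one producer thread, exactly one consumer thread, and a power-of-two capacity):

```java
// Fixed-length single-producer/single-consumer ring.
// Safe ONLY with one producer thread and one consumer thread.
final class SpscRing<T> {
    private final Object[] buffer;
    private final int mask;          // capacity must be a power of two
    private volatile long head = 0;  // written by the consumer only
    private volatile long tail = 0;  // written by the producer only

    SpscRing(int capacityPowerOfTwo) {
        buffer = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    // Producer side: returns false when full (caller decides what to do).
    boolean offer(T value) {
        long t = tail;
        if (t - head == buffer.length) return false; // full
        buffer[(int) (t & mask)] = value;
        tail = t + 1; // volatile write publishes the element
        return true;
    }

    // Consumer side: returns null when empty.
    @SuppressWarnings("unchecked")
    T poll() {
        long h = head;
        if (h == tail) return null; // empty
        Object value = buffer[(int) (h & mask)];
        buffer[(int) (h & mask)] = null; // allow GC of the element
        head = h + 1; // volatile write releases the slot
        return (T) value;
    }
}
```

The volatile counters are what make this work: the producer’s write to tail happens-before the consumer’s read of it, so the element write is visible, and symmetrically for head.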

[quote=“Roquen,post:25,topic:40050”]
Disruptor or something custom?

I just use a ConcurrentLinkedQueue if I ever need to communicate a list of events/information between threads :slight_smile:

[quote=“ra4king,post:27,topic:40050”]
Be careful with that. CLQ is unbounded (no way to create back pressure) and the tail/head references sharing the same cache line causes unnecessary contention.

To say nothing of ConcurrentLinkedQueue not having any blocking operations. I prefer ArrayBlockingQueue myself, or SynchronousQueue when I need a synchronization point (which is pretty rare).
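A minimal sketch of the bounded, blocking style (names made up):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueExample {
    public static void main(String[] args) throws InterruptedException {
        // Bounded: at most 64 pending events. put() blocks when full,
        // which is the back pressure mentioned above.
        BlockingQueue<String> events = new ArrayBlockingQueue<>(64);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String event = events.take(); // blocks while empty
                    System.out.println("handling " + event);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        events.put("hello"); // blocks if the queue is full
        Thread.sleep(100);   // demo only: give the daemon thread a moment

        // A SynchronousQueue is the degenerate case: capacity zero,
        // every put() waits for a matching take() (a pure handoff).
    }
}
```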

[quote=“Spasi,post:28,topic:40050”]

So…err…in English? :stuck_out_tongue:

In a typical producer/consumer relation between threads, you usually don’t want to produce at a much higher rate than what is consumed. A bounded queue (which means: it has a maximum capacity) will cause the producer to block on insert when the queue is full, which is what Spasi described as back pressure.

Ah ok. I don’t see that as a problem for the few times I used it, but I’ll keep that in mind :slight_smile:

Having an unbounded queue means that if the consumer thread(s) is slower than the producer, the whole thing might go out of control and eventually you’ll run out of memory (or have horrible performance).

In practice, queues are either mostly empty or mostly full. For mostly full queues, you don’t want an unbounded implementation for the above reason. With a mostly empty queue and a linked-list implementation, you have this weird situation where both the head and the tail point to the same object, the same memory. When two separate threads try to concurrently update that same memory (the same cache line, to be precise), you effectively get two serialized updates; you can’t have the head and tail updating simultaneously.

With the ConcurrentLinkedQueue implementation in particular you have another problem: the head and tail references in the CLQ object itself are declared next to each other, which means that in practice they’ll both end up in the same cache line in the CLQ instance. Updating the head invalidates the tail and vice versa, every time and by any thread. This doesn’t mean that every access has to go to main memory (modern CPUs handle it in the cache), but it still causes unnecessary communication across CPU cores. The Disruptor library handles this issue with dummy fields that create enough padding between fields that may be contended and would normally sit too close to each other. Though the JVM is pretty aggressive with laying out object fields and will often completely remove unused fields, so they had to come up with a few tricks to avoid that.
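For illustration, the padding trick looks roughly like this. Treat it as a sketch: field layout is VM-specific, and newer JVMs also expose an @Contended annotation (an internal API that needs -XX:-RestrictContended for application classes):

```java
// Illustrative only: keep two hot counters on different cache lines.
// Field layout is JVM-specific; a JVM may reorder fields or strip
// padding it considers unused, hence the tricks mentioned above.
class PaddedCounters {
    volatile long head;
    // ~56 bytes of padding so head and tail don't share a 64-byte line
    long p1, p2, p3, p4, p5, p6, p7;
    volatile long tail;
}
```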

[quote=“Spasi,post:26,topic:40050”]

I’ve been aware of Disruptor for a while and have been meaning to look at it more closely, but no…I’m talking about the trivial read counter, write counter and fixed array, which is simple and perfectly fine (assuming sequential consistency of writes…modification can address that issue) when a fixed size with a low probability of concurrent reads & writes describes the problem. Not that I would suggest anyone run out and write this. DON’T WRITE CONCURRENT DATA STRUCTURES…is my main advice. Along with keeping things as simple as possible, unless you really hate yourself. This ties back into lock-free: not only are lock-free and wait-free more efficient (in most sane real-world cases), I’m of the very strong opinion that they are simpler to use.

This can’t be emphasized enough. Cache thrashing murders performance; it’s something most people seem to completely ignore, and you won’t know it’s occurring unless you’re explicitly looking for it. Above I said that I mostly use volatile for communication…I’d advise not following my lead; use atomic wrappers instead. Remember we’re (generally) talking about multiple caches, and if any memory within a line changes, then we have to reload the entire affected hierarchy to ensure memory consistency (the specifics are architecture-dependent).
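To make the volatile-vs-atomic point concrete, a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicLong;

class Counters {
    private volatile long unsafeCount;           // visibility only
    private final AtomicLong safeCount = new AtomicLong();

    void incrementUnsafe() {
        // Read-modify-write: two threads can read the same value and
        // both write back value+1, losing an increment. volatile does
        // NOT make this atomic.
        unsafeCount++;
    }

    void incrementSafe() {
        // Atomic read-modify-write; typically a single atomic CPU
        // instruction under the hood.
        safeCount.incrementAndGet();
    }
}
```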

Getting too paranoid about cache performance before it matters, aren’t we?

Well, there is a school of thought that says if you’ve resorted to multithreading at all in the first place you are already being extremely mindful of performance…

Cas :slight_smile:

Not really. I think that in Java, using the atomic wrappers instead of volatile is very sound advice if you don’t have a solid understanding of caching and/or concurrency. Minimal cost to not have to worry about the issue. Or are you referring to something else?

I am referring to the fact that if cache coherency is the issue with multithreaded work, then you have much bigger problems. Multithreaded stuff like this only works where there is very little real contention, i.e. when missing a cache line every few tens of thousands of clock cycles (or more) is not going to matter.

You need synchronized if (and only if) you have a section of code that will be accessed by two or more threads at the same time and does calculations that need a strict order, so that the threads can’t run it in parallel.

Try to keep such sections small.

FYI: you do not get control over the order of execution, just exclusive access to a code block by one thread at a time.
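A minimal sketch of what “keep it small” means in practice (illustrative names): do the expensive work outside the critical section, and hold the lock only to update the shared state.

```java
class Stats {
    private final Object lock = new Object();
    private double total; // guarded by lock

    void record(double[] samples) {
        // Expensive work done outside the critical section...
        double sum = 0;
        for (double s : samples) sum += s;

        // ...lock held only for the shared-state update.
        synchronized (lock) {
            total += sum;
        }
    }
}
```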