This makes sure that the instructions in the section are always executed sequentially. (That's what I meant, not the order in which threads access it.)
It’s slightly more complex than that. A synchronized block enforces ‘happens-before’ and ‘happens-after’ semantics, but within the synchronized block you’ll still have the usual out-of-order execution of instructions.
But maybe that’s what you meant :point:
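A minimal sketch of that distinction (the class and field names here are mine, not from the thread): the two stores inside the block may still be reordered relative to each other, but everything written before the lock is released is visible to the next thread that acquires the same lock.

```java
// Hypothetical example: 'payload' and 'ready' may be written in either
// order *inside* the block, but the unlock at the end of writer()
// happens-before the lock acquire in reader(), so both stores are
// visible to any thread that subsequently enters reader().
class Publisher {
    private final Object lock = new Object();
    private int payload;     // plain fields, guarded by 'lock'
    private boolean ready;

    void writer() {
        synchronized (lock) {
            payload = 42;
            ready = true;
        }
    }

    Integer reader() {
        synchronized (lock) {
            return ready ? payload : null;  // null until writer() has run
        }
    }
}
```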
@delt0r: Then we’re 100% in agreement. Isolation of tasks and minimization of communication and shared data structures is priority one (and it makes your life easier). I bring up the volatile vs. atomic point because people insist on reinventing the wheel. For the trivial SP/SC (single-producer/single-consumer) fixed-length circular list I mentioned above, consider these two data layouts:
// volatile layout:
private volatile int rPos;
private volatile int wPos;
private final T[] data;

// atomic layout:
private final AtomicInteger rPos;
private final AtomicInteger wPos;
private final T[] data;
The volatile version is fine if concurrent reads & writes have a very low probability. The atomic version burns some extra memory and slightly more cycles, but you don’t have to worry (too much) about what that probability is. Likewise for any object instance that contains a volatile field (and again for static members).
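For concreteness, here is one way the atomic layout might be fleshed out into the full SP/SC circular list. This is my own sketch, not code from the thread: the class and method names are invented, and the monotonically increasing counters are assumed never to overflow.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of an SP/SC fixed-length circular list using the atomic layout.
// One thread may call offer(), a different single thread may call poll().
final class SpScQueue<T> {
    private final AtomicInteger rPos = new AtomicInteger();
    private final AtomicInteger wPos = new AtomicInteger();
    private final T[] data;

    @SuppressWarnings("unchecked")
    SpScQueue(int capacity) { data = (T[]) new Object[capacity]; }

    // Called only by the single producer.
    boolean offer(T item) {
        int w = wPos.get();
        if (w - rPos.get() == data.length) return false;  // full
        data[w % data.length] = item;  // write the slot first...
        wPos.set(w + 1);               // ...then publish it
        return true;
    }

    // Called only by the single consumer.
    T poll() {
        int r = rPos.get();
        if (r == wPos.get()) return null;                 // empty
        T item = data[r % data.length];
        rPos.set(r + 1);
        return item;
    }
}
```

Note the ordering in offer(): the element is stored before the write position is advanced, so the consumer never observes a slot it isn’t allowed to read.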
WRT synchronized: again, my advice is to try to “Just say no.”
@Roquen - just wondering if your example actually has another issue. While the read and write positions have “happens-before” semantics in both cases, what about the data array itself? Is there a need for data to be an AtomicReferenceArray?
No, it doesn’t. I should have mentioned that this is a purposely bad example. The only role of the atomic wrappers around the read & write positions is to (semi-)ensure that the memory chunk shared by the two threads is read-only. The atomic operations themselves serve no purpose at all. The positions could be stored in any manner that ensures this is true (and such a scheme would be superior)…via a thread-local or a common worker-thread data chunk, for instance. The general point is that using the atomic wrappers will tend to ensure better performance than volatiles while one is working up the concurrency learning curve.
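To illustrate why the plain T[] is safe here, a minimal example of my own (not Roquen’s code): under the Java Memory Model, the producer’s plain array store is published by the subsequent atomic write to the position, so a consumer that observes the new position value is guaranteed to see the element as well.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal illustration: a plain array slot published by an atomic counter,
// no AtomicReferenceArray required.
class Slot {
    final String[] data = new String[1];             // plain array
    final AtomicInteger wPos = new AtomicInteger(0);

    void produce(String s) {
        data[0] = s;   // plain store...
        wPos.set(1);   // ...published by this volatile-strength write
    }

    String consume() {
        // A thread that reads wPos == 1 is guaranteed to see data[0] == s,
        // because the atomic write happens-before the read that observes it.
        return wPos.get() == 1 ? data[0] : null;
    }
}
```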
The other upside of the atomic types is that since you don’t have to mark them volatile, you can’t forget to do so; nor do you have to worry about synchronizing the accessors (which would be an issue if you used a long). Their type encapsulates all the responsibility for thread safety, so less of the burden of using them correctly falls on you. This is the best reason for using the classes in java.util.concurrent instead of rolling your own: someone else got the semantics right so you don’t have to.
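In that spirit, the pre-built alternative to the hand-rolled circular list would be something like java.util.concurrent’s ArrayBlockingQueue; the single-threaded usage below is just a sketch of the API, not a benchmark of anything discussed above.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class Demo {
    public static void main(String[] args) {
        // Bounded, thread-safe queue: the position bookkeeping and the
        // happens-before semantics are already handled for you.
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(16);
        q.offer("job-1");               // returns false when full, never blocks
        System.out.println(q.poll());   // prints "job-1"; null when empty
    }
}
```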