Implementation of BlockingQueue: What are the differences between SynchronousQueue and LinkedBlockingQueue? (java.util.concurrent)

Accepted answer
Score: 56

SynchronousQueue is a very special kind of queue - it implements a rendezvous approach (producer waits until consumer is ready, consumer waits until producer is ready) behind the interface of Queue.

Therefore you may need it only in the special cases when you need that particular semantics, for example, single-threading a task without queuing further requests.
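As a minimal sketch of that rendezvous behaviour (not part of the original answer): a hand-off on a SynchronousQueue only succeeds when both sides are present, so a non-blocking offer() fails while no consumer is waiting, and a timed offer() succeeds once one arrives.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class RendezvousDemo {
    // Returns true if the hand-off succeeds only once a consumer is waiting.
    public static boolean demo() throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();

        // With no consumer waiting, a non-blocking offer always fails:
        // the queue has zero capacity, so there is nowhere to store the element.
        boolean noConsumer = q.offer("hello");

        // Start a consumer; now a timed offer can hand the element over.
        Thread consumer = new Thread(() -> {
            try { q.take(); } catch (InterruptedException ignored) {}
        });
        consumer.start();
        boolean withConsumer = q.offer("hello", 1, TimeUnit.SECONDS);
        consumer.join();

        return !noConsumer && withConsumer;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints true
    }
}
```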

Another reason for using SynchronousQueue is performance. The implementation of SynchronousQueue seems to be heavily optimized, so if you don't need anything more than a rendezvous point (as in the case of Executors.newCachedThreadPool(), where consumers are created "on demand", so that queue items don't accumulate), you can get a performance gain by using SynchronousQueue.

A simple synthetic test shows that in a single-producer, single-consumer scenario on a dual-core machine, the throughput of SynchronousQueue is ~20 times higher than the throughput of LinkedBlockingQueue and ArrayBlockingQueue with queue length = 1. When the queue length is increased, their throughput rises and almost reaches that of SynchronousQueue. This means that SynchronousQueue has low synchronization overhead on multi-core machines compared to other queues. But again, it matters only in the specific circumstances where you need a rendezvous point disguised as a Queue.

EDIT:

Here is a test:

import java.util.concurrent.*;

public class Test {
    static ExecutorService e = Executors.newFixedThreadPool(2);
    static int N = 1000000;

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++) {
            int length = (i == 0) ? 1 : i * 5;
            System.out.print(length + "\t");
            System.out.print(doTest(new LinkedBlockingQueue<Integer>(length), N) + "\t");
            System.out.print(doTest(new ArrayBlockingQueue<Integer>(length), N) + "\t");
            System.out.print(doTest(new SynchronousQueue<Integer>(), N));
            System.out.println();
        }

        e.shutdown();
    }

    private static long doTest(final BlockingQueue<Integer> q, final int n) throws Exception {
        long t = System.nanoTime();

        // Producer: puts n items into the queue.
        e.submit(new Runnable() {
            public void run() {
                for (int i = 0; i < n; i++)
                    try { q.put(i); } catch (InterruptedException ex) {}
            }
        });

        // Consumer: takes n items and sums them; get() waits for completion.
        Long r = e.submit(new Callable<Long>() {
            public Long call() {
                long sum = 0;
                for (int i = 0; i < n; i++)
                    try { sum += q.take(); } catch (InterruptedException ex) {}
                return sum;
            }
        }).get();
        t = System.nanoTime() - t;

        return (long)(1000000000.0 * n / t); // Throughput, items/sec
    }
}

And here is a result on my machine:

(screenshot of the benchmark results table omitted)

Score: 5

Currently the default Executors (ThreadPoolExecutor based) can either use a set of pre-created threads of a fixed size and a BlockingQueue of some size for any overflow, or create threads up to a max size if (and only if) that queue is full.

This leads to some surprising properties. For instance, as additional threads are only created once the queue's capacity is reached, using a LinkedBlockingQueue (which is unbounded) means that new threads will never get created, even if the current pool size is zero. If you use an ArrayBlockingQueue then new threads are created only if it is full, and there is a reasonable likelihood that subsequent jobs will be rejected if the pool hasn't cleared space by then.
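The unbounded-queue behaviour is easy to observe (a sketch, with illustrative pool sizes): even with a maximum pool size of 4 and ten tasks backed up, the pool never grows past its core size, because an unbounded LinkedBlockingQueue never rejects an offer().

```java
import java.util.concurrent.*;

public class PoolGrowthDemo {
    // Returns the observed pool size after flooding the executor with blocked tasks.
    public static int observedPoolSize() throws InterruptedException {
        // Core size 1, max size 4 - but with an unbounded queue the max is irrelevant:
        // extra threads are created only when the queue refuses a task, which an
        // unbounded LinkedBlockingQueue never does.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }

        int size = pool.getPoolSize(); // stays at the core size of 1
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observedPoolSize()); // prints 1
    }
}
```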

A SynchronousQueue has zero capacity, so a producer blocks until a consumer is available, or a thread is created. This means that despite the impressive-looking figures produced by @axtavt, a cached thread pool generally has the worst performance from the producer's point of view.

Unfortunately there isn't currently a nice library version of a compromise implementation that will create threads during bursts of activity, up to some maximum, from a low minimum. You either have a growable pool or a fixed one. We have one internally, but it isn't ready for public consumption yet.
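One commonly seen approximation of such a compromise (a sketch, not the internal implementation the answer mentions, and with its own trade-offs): combine a SynchronousQueue, so that bursts create threads up to the maximum, with a saturation policy such as CallerRunsPolicy as back-pressure once the maximum is reached.

```java
import java.util.concurrent.*;

public class BurstPoolSketch {
    // A pool that grows from a low core size to a maximum during bursts:
    // the SynchronousQueue makes every submission that finds no idle worker
    // spawn a thread (up to max), idle threads die after 60s back to the core
    // size, and CallerRunsPolicy runs overflow tasks on the submitting thread
    // instead of rejecting them.
    public static ThreadPoolExecutor burstPool(int min, int max) {
        return new ThreadPoolExecutor(
                min, max, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = burstPool(2, 8);
        for (int i = 0; i < 8; i++) pool.execute(() -> { /* burst of work */ });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that CallerRunsPolicy stalls the producer while the pool is saturated, which is not quite the same as queuing the overflow.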

Score: 4

The cached thread pool creates threads on demand. It needs a queue which either passes the task to a waiting consumer or fails. If there is no waiting consumer, it creates a new thread. SynchronousQueue does not hold an element; instead it passes on the element or fails.
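This pass-or-fail mechanism can be seen directly (a sketch, not from the answer): while every worker of a cached pool is blocked, each new submission finds no waiting consumer, the internal offer() fails, and a fresh thread is created, so the pool size tracks the number of blocked tasks.

```java
import java.util.concurrent.*;

public class CachedPoolDemo {
    // Returns the pool size after n tasks that are all still blocked: each
    // submission finds no idle worker polling the SynchronousQueue, so the
    // hand-off fails and the executor spawns a new thread instead.
    public static int poolSizeAfter(int n) throws InterruptedException {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < n; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize(); // one thread per blocked task
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(poolSizeAfter(5)); // prints 5
    }
}
```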
