Friday May 25, 2012

HCLH Locks - an additional constraint

HCLH Locks are NUMA-friendly hierarchical queue-based CLH locks. We have subsequently discovered that, for correctness, threads must be bound 1:1. Absent such binding the locks are vulnerable to exclusion failure related to node circulation & lifecycle issues. That particular invariant wasn't made clear in the HCLH paper.

Tuesday Mar 20, 2012

Foxboro says "No Dice"

Thursday Dec 22, 2011

Lock Cohorting - to appear in PPoPP 2012

Lock Cohorting: A General Technique for Designing NUMA Locks by Dave Dice, Virendra Marathe and Nir Shavit in PPoPP 2012. The Cohort lock technique allows developers to construct NUMA-aware locks from NUMA-oblivious locks.

Thursday Nov 03, 2011

Call for papers : Transact 2012

Title: 7th Workshop on Transactional Computing
Deadline: December 1, 2011
Workshop: February 26, 2012
Location: New Orleans, Louisiana, USA (with PPoPP 2012)

TRANSACT 2012 is a forum for the presentation of research on all aspects of transactional computing. The scope of the workshop is intentionally broad, with the goal of encouraging interaction across the languages, architecture, systems, database, and theory communities. Papers may address implementation techniques, foundational results, applications and workloads, or experience with working systems. Environments of interest include the full range from multithreaded or multicore processors to high-end parallel computing.

Tuesday Nov 01, 2011

MONITOR-MWAIT for spin loops

Some ramblings regarding the use of MONITOR-MWAIT for busy-wait loops.

Monday Aug 01, 2011

Make your ambition match your education

Seen in the DC Metro system in ads for a certain institution of higher education. How sad.

Tuesday Jun 21, 2011

Inverted schedctl usage in the JVM

The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory - the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying -- threads piling up on a critical section behind a preempted lock-holder -- and other lock-related performance pathologies. If you're interested see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file. The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure -- the schedctl block -- that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a "preemption pending" flag in the block. Normally that flag will be clear, but if it is set schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can't abuse this facility for long-term preemption avoidance as the deferral is brief. If your thread exceeds the grace period the kernel will preempt it and transiently degrade its effective scheduling priority. Further reading : US05937187 and various papers by Andy Tucker.
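
To make the classic usage concrete, here is a minimal sketch in C. The pthread mutex and the critical-section body are placeholders; error handling is omitted.

    #include <schedctl.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static __thread schedctl_t *sc;     /* one schedctl block per thread */

    void do_critical_work(void)
    {
        if (sc == NULL)
            sc = schedctl_init();       /* maps the kernel-shared block; cheap thereafter */

        schedctl_start(sc);             /* request preemption deferral: just a store */
        pthread_mutex_lock(&lock);
        /* ... short critical section ... */
        pthread_mutex_unlock(&lock);
        schedctl_stop(sc);              /* clear the flag; politely yield if preemption is pending */
    }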

We'll now switch topics to the implementation of the "synchronized" locking construct in the HotSpot JVM. If a lock is contended then on multiprocessor systems we'll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were "free" then we'd never spin to avoid switching, but that's not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled the lock may or may not be available. If not, we'll spin and then potentially park (block) again, thus suffering a second context switch. Recall that the reason we spin is to avoid context switching. To avoid this scenario I've found it useful to use schedctl to request preemption deferral while spinning. While spinning, I've also arranged for the code to periodically poll the "preemption pending" flag; if that flag is found set we simply abandon our spinning attempt and park immediately. This avoids the double context-switch scenario above. This particular usage of schedctl is inverted in the sense that we cover the spin loop instead of the critical section. (I've experimented with extending the schedctl preemption deferral period over the critical section -- more about that in a subsequent blog entry).
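
A sketch of the inverted pattern follows. try_lock(), park() and SPIN_LIMIT are placeholders, and the name of the preemption-pending field (sc_yield below) is taken from the public schedctl block; consult <schedctl.h> on your system for the exact layout.

    #include <schedctl.h>

    extern int  try_lock(void);         /* placeholder: non-blocking acquisition attempt */
    extern void park(void);             /* placeholder: block until woken */
    #define SPIN_LIMIT 1000

    int spin_then_park(schedctl_t *sc)
    {
        schedctl_start(sc);                     /* cover the spin phase, not the critical section */
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (try_lock()) {
                schedctl_stop(sc);
                return 1;                       /* acquired while spinning */
            }
            if (sc != NULL && sc->sc_yield)     /* preemption pending: further spinning risks */
                break;                          /* the double context switch described above  */
        }
        schedctl_stop(sc);                      /* clears the flag and yields if preemption is pending */
        park();                                 /* give up and block */
        return 0;
    }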

One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time. It's vaguely ironic that a facility designed to improve cooperation between threads suffers from false sharing.

Schedctl also exposes a thread's scheduling state. So if thread T2 holds a lock L, and T1 is contending for L, T1 can check T2's state to see whether it's running (ONPROC in Solaris terminology), ready, or blocked. If T2 is not running then it's usually prudent for T1 to park instead of continuing to spin, as the spin attempt is much more likely to be futile.

Tuesday Jun 14, 2011

MultiLane : a concurrent blocking multiset

MultiLane : a concurrent blocking multiset appeared in SPAA 2011.

It's rather trivial to extend the general idea and construct a deque instead of a queue-like construct.

Partitioned Ticket Lock

Partitioned Ticket Lock appeared in SPAA 2011.

When the number of "lanes" is just 1, a partitioned ticket lock is just a degenerate instance of a classic ticket lock.

We can easily provide K-exclusion by setting the number of lanes to K or larger and initializing the first K slots to 0 through K-1. For instance, if we have 8 lanes and K=3, the slots would be initialized to 0,1,2,0,0,0,0,0 respectively.
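
The following sketch is my reading of that idea rather than the code from the paper; the lane count, the per-lane padding, and the release rule (passing the baton K tickets ahead) are assumptions consistent with the description above. With K set to 1 it collapses to the classic ticket lock.

    #include <stdatomic.h>

    #define NLANES 8
    #define K      3        /* K == 1 gives the classic ticket lock */

    typedef struct {
        _Atomic unsigned long request;                                       /* ticket dispenser */
        struct { _Atomic unsigned long grant; char pad[56]; } lane[NLANES];  /* one cache line per lane */
    } ptlock_t;

    void pt_init(ptlock_t *l)
    {
        atomic_init(&l->request, 0);
        for (int i = 0; i < NLANES; i++)            /* 0,1,2,0,0,0,0,0 as in the example above */
            atomic_init(&l->lane[i].grant, (unsigned long)(i < K ? i : 0));
    }

    unsigned long pt_acquire(ptlock_t *l)
    {
        unsigned long t = atomic_fetch_add(&l->request, 1);
        while (atomic_load_explicit(&l->lane[t % NLANES].grant,
                                    memory_order_acquire) != t)
            ;                                       /* each thread spins on its own lane */
        return t;                                   /* the caller retains its ticket */
    }

    void pt_release(ptlock_t *l, unsigned long t)
    {
        atomic_store_explicit(&l->lane[(t + K) % NLANES].grant, t + K,
                              memory_order_release);    /* admit ticket t+K */
    }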

Brief announcement: a partitioned ticket lock
David Dice
SPAA '11 Proceedings of the twenty-third annual ACM symposium on Parallelism in algorithms and architectures, 2011

Friday Jun 10, 2011

Trends in processor count and memory growth

Of late system address space has been growing at a rate of between 1 and 1.5 addressable bits per year. Interestingly, though, we're now adding processors to systems faster than we're adding memory. That is, processors/GB RAM is currently increasing. If that trend holds true and programming models remain the same, then, arguably, synchronization will become even more important as more processors will be coordinating access to a given amount of memory. Proper fine-grained orchestration will become even more critical.

Saturday Apr 16, 2011

Writing Musical History

Two sections of American Popular Music at Bridgewater State University are discussing the music of the last decade. The textbook stops around 1999. They are looking for your help. Imagine that it is 10 years from now and you must update the American popular music textbook to include a chapter on the years 2000-2011. What artists/groups do you feel absolutely must be in that chapter?

The survey takes about a minute to complete.

Thursday Apr 14, 2011

Flat-Combining NUMA Locks

Flat-Combining NUMA Locks to appear in SPAA 2011.

Tuesday Apr 05, 2011

Cache index-aware memory allocation

Cache Index-Aware Memory Allocation to appear in the International Symposium on Memory Management (ISMM) 2011. As a consequence of the design, this "malloc" allocator is also NUMA-aware and NUMA-friendly, assuming the system uses the usual default first-touch NUMA page placement policy.

Cache index-aware memory allocation
Yehuda Afek, Dave Dice, Adam Morrison
ISMM '11 Proceedings of the international symposium on Memory management, 2011

Tuesday Mar 01, 2011

crazy concurrency

Crazy Concurrency -- high-end nerdcore.

Tuesday Feb 15, 2011

atomic fetch-and-add vs compare-and-swap

There are a number of cases where an atomic fetch-and-add instruction might yield better performance than the classic Load;Φ;CAS loop, where CAS is the atomic compare-and-swap instruction. The x86 architecture exposes LOCK:XADD which I'll use in the discussion below. (If you don't need the original value you can also use LOCK:INC or LOCK:ADD instead of LOCK:XADD).
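
For reference, here are the two idioms side by side for a simple shared counter, written with C11 atomics; on x86 a compiler will typically emit a LOCK:CMPXCHG retry loop for the first and a single LOCK:XADD for the second.

    #include <stdatomic.h>

    _Atomic long counter;

    /* Load;Φ;CAS : optimistic, may fail and retry under contention */
    long inc_cas(void)
    {
        long old = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;       /* 'old' is refreshed on failure; a production loop might back off here */
        return old;
    }

    /* Fetch-and-add : no failure window, no retry loop, no branch to mispredict */
    long inc_xadd(void)
    {
        return atomic_fetch_add(&counter, 1);
    }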

  1. CAS is "optimistic" and admits failure, whereas XADD does not. With XADD there's no explicit window of vulnerability to remote interference, and thus no need for a retry loop. Arguably, XADD has better progress properties, assuming the underlying XADD implementation doesn't have an implicit loop, but even in that case the window would be narrower than with Load;Φ;CAS.

  2. If you use the typical Load;Φ;CAS idiom, assuming normal snoop-based cache coherence, the Load may cause a read-to-share bus transaction to get the underlying cache line into S or E state. The CAS, which effectively has store semantics with respect to the cache coherence protocol, may induce another bus transaction to upgrade the line to M state. So in the worst case the idiom might incur two bus transactions, but an XADD implementation will usually drive the line directly into M state, requiring just one bus transaction. Of course you might be able to speculate on the value and have a fast-path that tries a "naked" CAS without any prior load. (I use this frequently in native HotSpot code). Also, it's possible for sophisticated processor implementations to perform coherence speculation and aggressively probe the target line into M state. Finally, in some cases you can profitably insert a prefetch-for-write (PREFETCHW) instruction prior to the load to avoid the upgrade transaction. But that approach has to be applied carefully as in some cases it can do more harm than good. Given all that, XADD, where available, has an advantage.

  3. Let's say you were trying to increment a variable with the usual Load;INC;CAS loop. When the CAS starts failing with sufficient frequency you can find that the branch to exit the loop (normally taken under no or light contention) starts to predict toward the failure path where we stay in the loop. So when the CAS ultimately succeeds, you'll incur a branch mispredict when trying to exit the loop. This can be quite painful on processors with deep pipelines and lots of out-of-order speculative machinery. Typically, this is in a piece of code where you don't want a long stall. Relatedly, when the CAS starts to fail frequently, the branch begins to predict that control stays in the loop and in turn the loop runs faster and we cycle around more quickly to the CAS. Typically, we'd want some back-off in the loop. Under light load with infrequent failures the mispredict to retry the loop served as a potentially useful implicit back-off. But under higher load we lose the benefit of implicit back-off arising from the mispredict. With XADD there's no loop, and none of these issues arise.

Even though CAS has a higher (infinite) consensus number, there are times when fetch-and-add -- which has a consensus number of just 2 -- is preferable.

Update: see CR7023898

Monday Feb 14, 2011

False sharing induced by card table marking

Garbage-collected runtime environments frequently use card tables in conjunction with write barriers to accelerate root-scanning during garbage collection. (See A Fast write barrier for generational garbage collectors by Urs Holzle, ECOOP-OOPSLA 1993 for details). Briefly, and skipping a few details, this design partitions the heap into an array of power-of-two sized card pages. When a mutator stores into a reference field of an object the runtime write barrier code will mark the card table entry for the card page containing (covering) that field as dirty. In the HotSpot JVM the card page size is 512 bytes and the card table is implemented as a simple array of bytes. That is, a given card table entry, which represents the state of a card page, is just one byte. The write barrier is emitted directly by the JIT and is usually just a shift and store instruction. In a subsequent non-moving minor GC, the collector can avoid scanning reference fields in a card that is not dirty.

This design is well-proven and widely employed, but unfortunately it can result in performance issues in highly concurrent environments. Let's say our cache line size is 64 bytes, which is fairly common in modern processors. This means that 64 cards (32KB = 64 * 512) will share a cache line in the card table. So reference stores by different threads that just happen to fall within the same 32KB region cause writes to the same cache line underlying the card table. This can result in excessive write invalidation and cache coherence traffic, which can reduce performance and impede scaling. Interestingly, the impact can worsen after a full/moving GC: threads tend to allocate into different address ranges by virtue of thread-local allocation buffers (TLABs), but after a full collection the remaining objects tend to be more tightly packed and thus more prone to the problem. Furthermore, most card table stores are redundant, as often the card is already marked dirty. This suggests a simple solution: instead of using an unconditional store in the barrier, we first check the card table entry and only store if it is clean. This slightly increases the barrier path-length and adds a conditional branch -- unless we were to be somewhat clever with conditional moves, annulling a redundant store by changing the destination address to a thread-local dummy variable -- but it avoids the false sharing problem. (For historical background, some years ago Doug Lea noticed an odd slow-down after a full GC in some concurrent Java code. He contacted me and I speculated that false sharing in the card table could be the issue. We conjured up a JVM with an experimental -XX:+UseCondCardMark flag that let us emit write barriers as either the usual unconditional store, or a conditional form that avoids redundant stores. The conditional form provided relief).
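
A sketch of the two barrier flavors, assuming HotSpot's layout of one byte per 512-byte card page; the biased card_table base pointer is a placeholder, and 0 is used as the dirty value, as in HotSpot.

    #include <stdint.h>

    #define CARD_SHIFT 9                   /* 512-byte card pages */
    #define DIRTY      0                   /* HotSpot marks dirty cards with 0 */

    extern volatile uint8_t *card_table;   /* placeholder: biased so (addr >> 9) indexes it */

    /* Unconditional barrier: a shift and a store, but every reference store within
       a 32KB region (64 cards x 512 bytes) dirties the same 64-byte cache line. */
    static inline void card_mark(void *field_addr)
    {
        card_table[(uintptr_t)field_addr >> CARD_SHIFT] = DIRTY;
    }

    /* Conditional barrier (-XX:+UseCondCardMark): skip the redundant store if the
       card is already dirty, trading a load and a branch for less write sharing. */
    static inline void card_mark_cond(void *field_addr)
    {
        volatile uint8_t *entry = &card_table[(uintptr_t)field_addr >> CARD_SHIFT];
        if (*entry != DIRTY)
            *entry = DIRTY;
    }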

I ran into the problem recently when experimenting with some concurrent queue implementations, which are reference-heavy, on a 256-way Sun/Oracle T5440. This is a 4-socket system where each socket contains a Niagara T2+ UltraSPARC processor having 64 logical processors. My benchmark has 50 producer threads and 50 consumer threads and measures and reports the message throughput over a timing interval. In the default configuration we can pass about 8.8K messages per msec. Using the same JVM, when I add the -XX:+UseCondCardMark flag we can achieve 52.5K messages per msec, clearly demonstrating the magnitude of the effect.

I should also note that we ran into this same issue when experimenting with Java-level lock elision using hardware transactional memory. If two unrelated concurrent transactions happened to store into reference fields in the same 32KB region we'd have false aborts because of write-sharing on the card table cache line. Again, -XX:+UseCondCardMark provided relief.

Update: see CR7029167

Thursday Jan 20, 2011

MultiLane : a concurrent blocking multiset

MultiLane : a concurrent blocking multiset - under submission.

It's rather trivial to extend the general idea and construct a deque instead of a queue-like construct.

Friday Jan 14, 2011

Partitioned Ticket Lock

The Partitioned Ticket Lock - under submission.

Friday Oct 15, 2010

Submissions are being accepted for VEE 2011

Submissions are now being accepted for VEE 2011 (more formally, the 2011 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments). VEE is co-located with ASPLOS this year.

Tuesday Sep 07, 2010

Cache index-aware memory allocation

Cache Index-Aware Memory Allocation (under submission) by Dave Dice, Yehuda Afek and Adam Morrison. We describe a relatively simple set of changes to malloc allocators that may reduce the conflict miss rate of applications accessing blocks returned by malloc.

Wednesday Jul 21, 2010

Solaris scheduling : SPARC and CPUIDs

Since it's a commonly asked question and source of confusion I thought I'd write up the following.

First, I should introduce some terminology and state the mapping between Solaris logical CPUIDs and physical resources. On a Solaris/SPARC Niagara-2 system the logical CPUIDs map geographically to physical IDs and resources. You can interpret a logical CPUID as a bit field: (DieNumber; CoreNumber: 3 bits; ThreadGroup: 1 bit; Strand: 2 bits). That is, you have 8 cores per die, 2 thread groups per core, and 4 strands per thread group. All the logical processors on a given core share a level-1 cache. The "ThreadGroup" is a rather obscure name for a pipeline. On a Niagara-1, for instance, there is only 1 pipeline per core, but you have 2 per core on an N2. You can query the CPUID on which a thread is currently running with getcpuid(), which is extremely fast.
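
As an illustration, that geographic layout can be decoded as a bit field. The field widths below follow the description above and apply to the Niagara-2 layout only, so treat this as a sketch rather than a portable mapping.

    #include <sys/processor.h>   /* getcpuid() */
    #include <stdio.h>

    int main(void)
    {
        int id     = (int)getcpuid();   /* logical CPUID the caller is currently running on */
        int strand = id        & 0x3;   /* 4 strands per pipeline (thread group) */
        int pipe   = (id >> 2) & 0x1;   /* 2 pipelines per core                  */
        int core   = (id >> 3) & 0x7;   /* 8 cores per die                       */
        int die    = id >> 6;           /* remaining bits identify the die       */

        printf("cpu %d = die %d core %d pipeline %d strand %d\n",
               id, die, core, pipe, strand);
        return 0;
    }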

Assuming a simple model where all your threads remain runnable and there's no preemption ...

With unbound threads, the Solaris scheduler will balance (disperse fairly) 1st over dies, then over cores, then over pipelines, in order to maximize true parallelism and, to the extent possible, avoid contention over shared resources by placing threads as 'far away' from each other as possible. This is usually the best policy -- particularly for completely independent threads -- but beware that it ignores the issue of inter-chip coherency costs. If you have high coherence costs (writes to shared variables) then packing a group of communicating threads on-chip can sometimes be better than letting them disperse over multiple chips. (As an aside, you're typically much better off letting the scheduler assign CPUs than trying to bind threads yourself. Naive binding -- say, with sequential CPUIDs -- will almost always result in suboptimal performance).

In the future these policies might shift to allow better power management by trying to keep dies or cores "parked" (idle, drawing less power). Digressing slightly, recent Linux schedulers do try to impose some type of power management by default, making it sometimes hard to squeeze maximal performance out of an MP Nehalem/AMD system and introducing confounding factors for those of us benchmarking.

If the threads are entirely CPU-bound then typically the scheduler will place them and the thread:cpu relationship then becomes fairly stable. Even if the thread blocks briefly, if it comes back ONPROC (running) in under 3 msecs it's considered to have residual affinity and will go back to the processor where it last ran, barring gross dispatch queue length imbalance. Check the OpenSolaris kernel sources for "rechoose_interval" if you're curious.

For background, each core has its own local dispatch queue and makes its own local scheduling decisions. There's no centralized global scheduler agent and no centralized scheduling data structures (with the exception of a queue for unbound real-time threads). The scheduling policies and parameters are designed such that the local & independent decisions collectively result in achieving the desired global scheduling policy. (If you're biologically inspired, think of a communal insect colony). At a high level the scheduler attempts to maximize aggregate useful work completed in unit-time. Specifically, the scheduler tries to maximize the # of cpus doing useful work as well as attempting, where possible, to maintain affinity (minimize migration). With CMT/CMP/HT systems it also tries to disperse work (LWPs) over cores. As I mentioned above, over time more constraints are being added to the scheduler's responsibilities, such as trying to minimize or reduce power consumption, striking a balance between performance and energy.

The system disperses threads via stealing (pulling, where idle CPUs steal from other dispatch queues) and queue balancing (pushing, where, if there's a gross imbalance in dispatch queue depth, a local scheduler will try to pass some blocked threads to other less-loaded dispatch queues).

As an aside, threads in the real-time scheduling class cause more centralized dispatch. If they're unbound they can induce lots of preemption and migration, but that's a topic for another day.

Tuesday Jun 01, 2010

Europar 2010 : Transactional Mutex Locks

Transactional Mutex Locks by Luke Dalessandro, Dave Dice, Michael Scott, Nir Shavit and Michael Spear will appear in Europar 2010.

Monday Apr 05, 2010

TLRW: Return of the Read-Write Lock

TLRW: Return of the Read-Write Lock by Dave Dice and Nir Shavit will appear in SPAA 2010. (The paper introduces the concept of the TLRW-ByteLock).

Simplifying Concurrent Algorithms by Exploiting Hardware Transactional Memory

Simplifying Concurrent Algorithms by Exploiting Hardware Transactional Memory will appear in SPAA 2010.

Wednesday Feb 03, 2010

QPI Quiescence

It's not uncommon to find Dekker-like idioms in modern concurrent programs. On platforms with weaker memory models -- say where a store followed by a load in program order can be reordered by the architecture to appear as a load and then a store in the effective memory order (sometimes called the "visibility order") -- programs must use barrier instructions to enforce memory ordering to implement Dekker correctly. For the purposes of discussion and assuming a relatively common system model we'll define memory order as the order of operations as they appear at the interface between the processor and the first-level coherent cache. Examples of barriers are MFENCE on x86 and MEMBAR #storeload on SPARC. In addition, x86 and SPARC TSO memory models allow only one variety of architectural reordering, the store-then-load form noted above. (For simplicity we'll restrict the discussion to TSO-like memory consistency models). On some platforms barriers introduce significant local latency. Perversely, we sometimes find that atomic instructions which have barrier semantics (are barrier-equivalent) are faster than the purpose-defined barrier instructions. A simplistic barrier implementation might simply quiesce the pipeline and wait for the store buffer to drain. To allay a common misconception it's worth pointing out that barriers -- sometimes called fences -- are typically implemented as processor-local operations and don't cause any distinguished action on the bus or interconnect and instead simply instruct the processor to ensure that prior stores become visible before subsequent loads ("subsequent" and "prior" are relative to the barrier's position in program order). That is, they don't force anything to happen -- such as coherence messages on the bus -- that was not already destined to occur. Instead, they simply enforce an order, momentarily reconciling program and memory order. Crucially, at least with current x86 and SPARC implementations, barriers don't force anything to occur off-processor. That also means they don't impede or impair scalability. There's no fundamental reason, however, why barriers should be so slow. The processor implementation is free to speculate over the barrier, for instance, as long as stores in the speculative episode are not made visible and loads in the episode are tracked for coherence. And in fact on at least one processor, barrier instructions effectively have 0 latency.

Returning to the Dekker idiom, threads T1 and T2 might coordinate as follows: T1 might execute (ST A; barrier; LD B) and T2 executes (ST B; barrier; LD A), and in particular we refer to this pattern as the Dekker duality. As a concrete example, we coordinate thread state transitions in the HotSpot JVM via a similar protocol, where T1 is a Java thread (mutator) executing the reentry path from a JNI call, T2 has the role of the VM thread coordinating a stop-the-world safepoint, A is a variable that indicates T1's thread state (executing outside the JVM on a JNI call, or executing inside the managed runtime), and B indicates if a stop-the-world safepoint is pending. Critically, if T1 is running on a JNI call and attempts to return back into the managed environment while a safepoint is executing, we need to stall T1 at the point of ingress, as the VM expects that Java threads will not access the heap during a safepoint. (Among other uses, safepoints are employed for certain types of garbage collection operations, for instance where we don't want the collector and the Java threads accessing the heap simultaneously). For the purposes of illustration I'm showing just a single mutator thread T1 and a single VM thread T2, but in practice the mechanism is much more general. T1's path, above, is likely to execute much more often than T2's, as JNI calls could be expected to occur more frequently than safepoints. As such, to improve performance we'd like to elide the barrier instruction from T1's path. Asymmetric Dekker Synchronization is a family of related mechanisms that allow us to safely remove the barrier from T1 while shifting the responsibility of dealing with T1's potential reorderings to T2. We call it asymmetric because to be profitable T1's path needs to run much more frequently than T2's. We then call T1's path the fast-path and T2's the slow-path. (This mechanism can be enabled and disabled by way of the -XX:+/-UseMembar switch).
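
Expressed with C11 atomics, the symmetric form of the duality looks like the sketch below; the variable names mirror the safepoint example, and seq_cst ordering is used to obtain the StoreLoad barrier (typically an MFENCE or XCHG on x86). The asymmetric technique removes that barrier from t1_path() and compensates in t2_path() with a heavier mechanism such as signals, cross-calls, or FlushProcessWriteBuffers.

    #include <stdatomic.h>

    _Atomic int A;   /* T1's thread state: e.g., executing in JNI vs in Java */
    _Atomic int B;   /* safepoint-pending flag                               */

    /* T1 (hot path, e.g. JNI reentry): ST A; barrier; LD B */
    int t1_path(void)
    {
        atomic_store_explicit(&A, 1, memory_order_seq_cst);
        return atomic_load_explicit(&B, memory_order_seq_cst);
    }

    /* T2 (slow path, e.g. the safepoint coordinator): ST B; barrier; LD A */
    int t2_path(void)
    {
        atomic_store_explicit(&B, 1, memory_order_seq_cst);
        return atomic_load_explicit(&A, memory_order_seq_cst);
    }

    /* The property both sides depend on: starting from A == B == 0, t1_path() and
       t2_path() can never both return 0 -- at least one observes the other's store. */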

The Asymmetric Dekker Synchronization document mentioned above enumerates a number of ways in which we might allow T1 and T2 to coordinate while still removing the barrier from T1's hot path, including signals, cross-calls (inter-processor interrupts), page-protection mechanisms, etc. On Windows T2 might simply invoke the FlushProcessWriteBuffers facility, which seems to precisely match our needs. (Some time ago I filed an RFE -- request-for-enhancement -- for Solaris to provide a similar facility). Still, we're always looking for better ways to implement our asymmetric protocol, which almost brings us to QPI quiescence, but first we need some historical background.

Long ago Intel implemented atomics with a global bus lock. It was called the #LOCK signal and driven by the LOCK: prefix on instructions, thus the names we have today. Bus locking was conceptually simple as most multiprocessor Intel systems used a common front-side bus (FSB) between the processor and memory. Unrelated atomic operations, however, could impair overall performance as #LOCK had to quiesce the bus. The old FSB was a split-transaction request-response bus, and allowed multiple requests in-flight at a given time, so to assert #LOCK too often could rob the system of performance. Bus locking also supported atomics that spanned 2 cache lines. Intel subsequently switched to so-called cache-locking, where atomic read-modify-write instructions were implemented directly in the local cache of the processor executing the atomic, avoiding the need to lock the shared bus. From the perspective of the bus such atomic operations are no different than a store. (All SPARC systems that I know of use cache-locking). Cache-locking was a good step forward as atomics now scale ideally if there's no sharing of the underlying cache lines. Despite that, Intel preserved bus locking to handle the exotic legacy case of atomics that span cache lines (split atomics), which, by definition, are misaligned accesses. For this odd case the best solution was to simply resort to bus locking so the two lines underlying the operand could be accessed atomically. Note that Intel and AMD appear to frown upon such behavior in their reference manuals, but the processors still support it for legacy reasons, at least as of today.

With the advent of QuickPath Interconnect (QPI) Intel eliminated the common FSB and switched to a topology more akin to AMD's HyperTransport. Nehalem is the first processor to use QPI. But even with QPI the architects needed a way to support those legacy split atomics. To that end, QPI has the ability to quiesce the whole system to allow the split atomic to execute. It appears that QPI quiescence also drains the pipes and forces at least the equivalent of barrier semantics over the whole system. That is, split atomics may serve as a way to force a system-wide "remote" barrier.
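
For experimentation only (see the caveats in the remarks below), a split atomic can be provoked deliberately by applying LOCK:XADD to a 4-byte operand that straddles a 64-byte line boundary. The sketch below uses GCC-style x86 inline assembly; the misalignment is intentional and this should never appear in production code.

    #include <stdint.h>

    static char buf[128] __attribute__((aligned(64)));

    static inline int xadd32(volatile int *p, int v)
    {
        __asm__ __volatile__("lock; xaddl %0, %1"
                             : "+r"(v), "+m"(*p)
                             :
                             : "memory");
        return v;                        /* previous value of *p */
    }

    int provoke_split_lock(void)
    {
        /* bytes 62..65 of buf: the operand crosses the 64-byte line boundary */
        volatile int *split = (volatile int *)(buf + 62);
        return xadd32(split, 1);
    }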

Additional remarks

  • Beware, it's not clear that QPI quiescence is actually safe and provides true quiescence. Empirical tests with a program designed to stress the -UseMembar facility and inspired by a simple hunch about the QPI implementation suggest so, but absence of evidence isn't evidence of absence -- we might yet find Karl Popper's black swan.

  • At best, QPI quiescence should be considered an academic curiosity and should never be used in production code. I'm sure processor vendors would be loath to endorse such stunts because it could ultimately limit their latitude in future bus designs. QPI quiescence is simply an implementation artifact and not an architecturally defined or guaranteed trait. Even if the facility were somehow blessed I doubt vendors would want to expose it by means of split atomics, so perhaps a new instruction might be called for. (Put another way, there are two issues: should the facility be provided, and if so, how should it be exposed to programmers?) So for the moment QPI quiescence is only good for prototyping what *might* be accomplished with a hypothetical instruction.

  • It's possible such a mechanism might be applicable to certain forms of RCU (read-copy-update).

  • Dmitriy V'jukov recently posted some timing results for FlushProcessWriteBuffers. It's pretty efficient on his system. I hope to be able to run similar benchmarks to measure the cost of QPI quiescence in the near future, at which point I'll post the results here. (Dmitriy is also the author of the relacy race detector which I recommend). It's worth noting that implementations of facilities such as FlushProcessWriteBuffers can be made very efficient. For example the implementation might be able to avoid a cross-call to a processor if it's known that no thread in the process is executing on that processor, using the knowledge that context switches are serializing events.

    Some extremely preliminary data shows that QPI quiescence by way of a split atomic incurs a local penalty of about 4800 cycles on an i7-920, and the degree of impact on the progress of other processors is very much a function of the miss rate of those processors.

  • I mentioned the idea of QPI quiescence to Dmitriy, who in turn pointed out a relevant article on QPI in Dr. Dobbs. The "Locks" section is particularly interesting. As Dmitriy noted, if it's possible to use quiescence for hot-plugging then it's not unreasonable to think that both the bus and processors are completely quiesced with no pending stores languishing in store buffers, which is precisely the behavior in which we're interested.

  • Is there a working analog to QPI quiescence on AMD's coherent HyperTransport? This is left as an exercise for the curious reader.

  • Can QPI quiescence lead to a break-down of performance isolation in virtual machines running on the same physical system? (That's pretty easy to test even in the absence of virtual machines, with a simple multithreaded program or a few single-threaded processes).

Related reading

  • For another example of the Dekker duality in the HotSpot JVM and further discussion about the challenges of weak memory models, refer to a previous blog entry about a long-standing bug in the park-unpark subsystem.

  • Biased locking, which is used in the HotSpot JVM, is a mechanism that attempts to reduce the impact of high-latency atomic instructions. Interestingly, as processor vendors make improvements in the latency of such instructions there may come a time in the near future when biased locking is no longer profitable, at least on some platforms.

  • US07644409 and US7475395


Dave is a senior research scientist in the Scalable Synchronization Research Group within Oracle Labs.

