Scaling Solaris on Large CMT Systems

The Solaris Operating System is very effective at managing systems with large numbers of CPUs. Traditionally, these have been SMPs such as the Sun Fire(TM) E25K server, but these days it is CMT systems that are pushing the limits of Solaris scalability. The Sun SPARC(R) Enterprise T5140/T5240 Server, with 128 hardware strands that each behave as an independent CPU, is a good example. We continue to optimize Solaris to handle ever larger CPU counts, and in this posting I discuss a number of recent optimizations that enable Solaris to scale well on the T5140 and other large systems.

The Clock Thread

Clock is a kernel function that by default runs 100 times per second on the lowest numbered CPU in a domain and performs various housekeeping activities. These include time adjustment, processing pending timeouts, traversing the CPU list to find currently running threads, and performing resource accounting and limit enforcement for the running threads. As the number of CPUs grows, the CPU list traversal takes longer and can exceed 10 ms, in which case clock falls behind, timeout processing is delayed, and the system becomes less responsive. When this happens, the mpstat command will show sys time approaching 100% on CPU 0. This is more likely for memory-intensive workloads on CMT systems with a shared L2$, as the increased L2$ miss rate further slows the clock thread.

We fixed this by multi-threading the clock function. Clock still runs at 100 Hz, but it divides the CPU list into sets, and cross calls a helper CPU to perform resource accounting for each set. The helpers are rotated so that over time the load is finely and evenly distributed over all CPUs; thus, what had been, for example, a 70% load on CPU 0 becomes a less than 1% load on each of the 128 CPUs in a T5140 system. CPU 0 will still have a somewhat higher %sys load than the other CPUs, because it is solely responsible for some functions such as timeout processing.
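To make the mechanism concrete, here is a highly simplified sketch in C of how a per-tick pass might divide the CPU list into sets and rotate the helper chosen for each set. The constants and the xcall_accounting() helper are illustrative assumptions, not the actual Solaris clock code.

    /*
     * Illustrative sketch only -- not the actual Solaris clock code.
     * Each 10 ms tick, the CPU list is divided into fixed-size sets and
     * a helper CPU is cross-called to do the per-CPU accounting for each
     * set.  The helper chosen for a given set is rotated on every tick,
     * so the accounting load spreads evenly across all CPUs over time.
     */
    #define NCPU          128   /* assumed CPU count, e.g. a T5140 domain */
    #define CPUS_PER_SET    8   /* assumed accounting-set size            */
    #define NSETS         (NCPU / CPUS_PER_SET)

    static unsigned int clock_tick;     /* incremented once per clock()   */

    /* hypothetical helper: cross-call 'helper' to account for CPUs [lo,hi) */
    extern void xcall_accounting(int helper, int lo, int hi);

    void
    clock_tick_schedule(void)
    {
        clock_tick++;
        for (int set = 0; set < NSETS; set++) {
            /* rotate the helper so no single CPU carries the load */
            int helper = (set + clock_tick) % NCPU;
            xcall_accounting(helper, set * CPUS_PER_SET,
                (set + 1) * CPUS_PER_SET);
        }
    }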

Memory Placement Optimization (MPO)

The T5140 server in its default configuration is a NUMA (non-uniform memory access) system, a common architectural strategy for building larger systems. Each server has two physical UltraSPARC(R) T2 Plus processors, and each processor has 64 hardware strands (CPUs). The 64 CPUs on a processor access memory controlled by that processor at a lower latency than memory controlled by the other processor. The physical address space is interleaved across the two processors at a 1 GB granularity. Thus, an operating system that is aware of CPU and memory locality can arrange for software threads to allocate memory near the CPU on which they run, minimizing latency.

Solaris does exactly that, and has done so on various platforms since Solaris 9, using the Memory Placement Optimization framework, aka MPO. However, enabling the framework on the T5140 was non-trivial due to the virtualization of CPUs and memory in the sun4v architecture. We extended the hypervisor layer by adding locality arcs in the physical resource graph, and ensured that these arcs were preserved when a subset of the graph was extracted, virtualized, and passed to the Solaris guest at Solaris boot time.

Here are a few details on the MPO framework itself. Each set of CPUs and "near" memory is called a locality group, or lgroup; this corresponds to a single T2 Plus processor on the T5140. When a thread is created, it is assigned a home lgroup, and the Solaris scheduler tries to run the thread on a CPU in its home lgroup whenever possible. Thread-private memory (e.g., stack, heap, anonymous memory) is allocated from the home lgroup whenever possible. Shared memory (e.g., SysV shared memory) is striped across lgroups at page granularity. For more details on Solaris MPO, including commands and interfaces to control and observe lgroups and local memory, such as lgrpinfo, pmap -L, liblgrp, and madvise, see the man pages and this presentation.
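As a concrete illustration of the observation and advice interfaces mentioned above, here is a minimal user-level sketch using the documented lgrp_init(), lgrp_nlgrps(), and lgrp_home() calls from liblgrp, plus madvise() with MADV_ACCESS_LWP. It is a sketch only: error handling is omitted and the buffer size is an arbitrary choice.

    /*
     * Minimal user-level sketch of the MPO observation/advice interfaces.
     * Compile with: cc lgrp_demo.c -llgrp
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/lgrp_user.h>   /* lgrp_init(), lgrp_home(), ...       */
    #include <sys/mman.h>        /* madvise(), MADV_ACCESS_LWP          */
    #include <sys/procset.h>     /* P_LWPID, P_MYID                     */

    int
    main(void)
    {
        /* Snapshot the lgroup hierarchy visible to this process. */
        lgrp_cookie_t cookie = lgrp_init(LGRP_VIEW_CALLER);
        printf("lgroups in hierarchy: %d\n", lgrp_nlgrps(cookie));

        /* The home lgroup the scheduler assigned to this thread. */
        printf("home lgroup of this thread: %d\n",
            (int)lgrp_home(P_LWPID, P_MYID));

        /* Hint that this buffer will be used mostly by this thread, so
         * the kernel should place its pages in the home lgroup. */
        size_t len = 16 * 1024 * 1024;
        char *buf = valloc(len);            /* page-aligned allocation */
        (void) madvise(buf, len, MADV_ACCESS_LWP);

        lgrp_fini(cookie);
        return (0);
    }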

If an application is dominated by stall time due to memory references that miss in cache, then MPO can theoretically improve performance by as much as the ratio of remote to local memory latency, which is about 1.5 : 1 on the T5140. The STREAM benchmark is a good example; our early experiments with MPO yielded a 50% improvement in STREAM performance. See Brian's blog for the latest optimized results. Similarly, if an application is limited by global coherency bandwidth, then MPO can improve performance by reducing global coherency traffic, though this is unlikely on the T5140 because the local memory bandwidth and the global coherency bandwidth are well balanced.

Thread Scheduling

In my posting on the UltraSPARC T2 processor, I described how Solaris threads are spread across cores and pipelines to balance the load and maximize hardware resource usage. Since the T2 Plus is identical to the T2 in this area, these scheduling heuristics continue to be used for the T5140, but are augmented by scheduling at the lgroup level. Thus, independent software threads are first spread across processors, then across cores within a processor, then across pipelines within a core.

The Kernel Adaptive Mutex

The mutex is the basic locking primitive in Solaris. We have optimized the mutex for large CMT systems in several ways.

The implementation of the mutex in the kernel is adaptive, in that a waiter will busy-wait if the software thread that owns the mutex is running, on the supposition that the owner will release it soon. The waiter will yield the CPU and sleep if the owner is not running. To determine whether a mutex owner is running, the code previously traversed all CPUs looking for the owner thread, rather than simply examining the owner thread's state, in order to avoid a race with threads being freed. This O(NCPU) algorithm was costly on large systems, and we replaced it with a constant-time algorithm that is safe with respect to threads being freed.
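The following sketch contrasts the two approaches. The types are simplified stand-ins for the kernel's cpu_t and kthread_t, and the constant-time version glosses over the thread-reuse race that the real fix also has to close; treat it as an illustration of the idea rather than the actual Solaris source.

    /* Simplified stand-ins for the kernel's kthread_t and cpu_t. */
    typedef struct kthread { int t_state; } kthread_t;
    typedef struct cpu {
        struct cpu *cpu_next;       /* next CPU in the list              */
        kthread_t  *cpu_thread;     /* thread currently on this CPU      */
    } cpu_t;
    #define TS_ONPROC 0x04          /* thread is running on a CPU        */
    extern cpu_t *cpu_list;         /* head of the CPU list (simplified) */

    /* Old approach (O(NCPU)): scan every CPU looking for the owner. */
    static int
    owner_running_scan(kthread_t *owner)
    {
        for (cpu_t *cp = cpu_list; cp != NULL; cp = cp->cpu_next)
            if (cp->cpu_thread == owner)
                return (1);
        return (0);
    }

    /* New approach (O(1)): examine the owner's own dispatch state.  The
     * real fix also guarantees the owner's thread structure cannot be
     * freed and reused while it is examined; that detail is omitted. */
    static int
    owner_running_fast(kthread_t *owner)
    {
        return (owner->t_state == TS_ONPROC);
    }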

Waiters attempt to acquire a mutex using a compare-and-swap (cas) operation. If many waiters continuously attempt cas on the same mutex, then a queue of requests builds in the memory subsystem, and the latency of each cas becomes proportional to the number of requests. This dramatically reduces the rate at which the mutex can be acquired and released, and causes negative scaling for the higher-level code that is using the mutex. The fix is to space out the cas requests over time, such that a queue never builds up, by forcing the waiters to busy-wait for a fixed period after a cas failure. The period increases exponentially after repeated failures, up to a maximum that is proportional to the number of CPUs, which is the upper bound on the number of actively waiting threads. Further, in the busy-wait loop, we use long-latency, low-impact operations, so the busy CPU consumes very little of the execution pipeline, leaving more cycles available to other strands sharing the pipeline.
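Here is a sketch of the backoff idea, using a GCC compare-and-swap builtin in place of the kernel's primitives. The cap of 16 * ncpus and the cpu_pause() delay routine are illustrative assumptions, not the actual Solaris values.

    /*
     * Illustrative sketch of capped exponential backoff between CAS
     * attempts -- not the actual Solaris mutex code.  Spreading retries
     * out in time keeps a queue of CAS requests from building up in the
     * memory system when many CPUs contend for one mutex.
     */
    #include <stdint.h>

    extern int ncpus;                   /* number of CPUs in the system  */

    /* hypothetical low-impact delay: burns time without saturating the
     * execution pipeline shared with other strands on the core. */
    extern void cpu_pause(void);

    static inline int
    try_lock(volatile uint64_t *lock, uint64_t self)
    {
        /* single CAS attempt: 0 -> self */
        return (__sync_bool_compare_and_swap(lock, 0, self));
    }

    void
    mutex_spin_enter(volatile uint64_t *lock, uint64_t self)
    {
        int backoff = 1;
        int cap = 16 * ncpus;           /* assumed cap, scaled by NCPU   */

        while (!try_lock(lock, self)) {
            for (int i = 0; i < backoff; i++)
                cpu_pause();            /* wait before the next CAS      */
            if (backoff < cap)
                backoff <<= 1;          /* exponential growth to the cap */
        }
    }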

To be clear, any application which causes many waiters to desire the same mutex has an inherent scalability bottleneck, and ultimately needs to be restructured for optimal scaling on large servers. However, the mutex optimizations above allow such apps to scale to perhaps 2X or 3X as many CPUs as they otherwise would, and to degrade gracefully under load rather than tip over into negative scaling.

Availability

All of the enhancements described herein are available in OpenSolaris, and will be available soon in updates to Solaris 10. The MPO and scheduling enhancements for the T5140 will be available in Solaris 10 4/08, and the clock and mutex enhancements will be released soon after in a KU patch.

Comments:

Steve, thanks for sharing all the information. It is sort of small "roadmap" for pair: Solaris+CMT cpus. Could you please tell me which of the optimization are relevant to T2 based servers (T5120/T5220) ?

Posted by przemol on April 17, 2008 at 10:02 PM EDT
