Thursday Nov 08, 2012

Faster Memory Allocation Using vmtasks

You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

    % pgrep vmtasks
    8
    % prstat -p 8
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
         8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32
    

What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl(). Segment operations such as creation and locking are typically single-threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck that increases application startup time.
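
To make the ISM/DISM distinction concrete, here is a minimal user-level sketch (Solaris-specific, and not part of vmtasks itself) that attaches one segment of each type and locks the DISM segment. The 1 GB size is arbitrary, most error checks are omitted, and locking memory typically requires the proc_lock_memory privilege:

    /*
     * Attach one ISM and one DISM segment, then lock the DISM pages.
     * SHM_SHARE_MMU and SHM_PAGEABLE are Solaris shmat(2) flags from
     * <sys/shm.h>.
     */
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int
    main(void)
    {
        size_t size = 1024UL * 1024 * 1024;     /* 1 GB, for illustration */

        /* ISM: pages are created and locked during this shmat() call. */
        int ism_id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
        void *ism = shmat(ism_id, NULL, SHM_SHARE_MMU);

        /* DISM: pageable until the application locks it explicitly. */
        int dism_id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
        void *dism = shmat(dism_id, NULL, SHM_PAGEABLE);
        if (mlock(dism, size) != 0)             /* or memcntl(..., MC_LOCK, ...) */
            perror("mlock");

        shmdt(ism);
        shmdt(dism);
        return (0);
    }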

To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 vmtasks threads are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. Because per-page processing time can vary, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work, providing dynamic load balancing. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has one thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.
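
The chunking scheme can be illustrated with a small user-level sketch using POSIX threads. The real work is done by kernel threads in the vmtasks process, and the thread, chunk, and page counts below are arbitrary placeholders:

    /*
     * Dynamic load balancing sketch: the work is split into more chunks
     * than threads, and each thread repeatedly claims the next unclaimed
     * chunk, so a slow chunk does not stall the others.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NTHREADS 16             /* helper threads, as in the per-op cap */
    #define NCHUNKS  64             /* more chunks than threads */
    #define PAGES    (1L << 20)     /* total pages to process (made up) */

    static atomic_int next_chunk;   /* next unclaimed chunk index */

    static void
    process_pages(long first, long last)
    {
        /* stand-in for creating/locking the pages in [first, last) */
        (void) first;
        (void) last;
    }

    static void *
    worker(void *arg)
    {
        long per_chunk = PAGES / NCHUNKS;
        int c;

        (void) arg;
        while ((c = atomic_fetch_add(&next_chunk, 1)) < NCHUNKS)
            process_pages(c * per_chunk, (c + 1) * per_chunk);
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        printf("processed %ld pages in %d chunks\n", (long)PAGES, NCHUNKS);
        return (0);
    }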

Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86).

The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

    ISM
        system     ncpu    before      after   speedup 
        ------     ----    ------      -----   -------
        x4600      32      1386        245     6X
        X7560      64      1016        153     7X
        M9000      512     1196        206     6X
        T5240      128     2506        234     11X
        T4-2       128     1197        107     11X
    
    DISM
        system     ncpu    before      after   speedup 
        ------     ----    ------      -----   -------
        x4600      32      1582        265     6X
        X7560      64      1116        158     7X
        M9000      512     1165        152     8X
        T5240      128     2796        198     14X
    

(I am missing the data for T4 DISM, for no good reason; it works fine).

The following table separates the creation and destruction times:

    ISM, T4-2
                  before    after  
                  ------    -----
        create    702       64
        destroy   495       43
    

To put this in perspective, consider creating a 512 GB ISM segment on T4-2. At 702 ms/GB, creating the segment would take about 6 minutes with the old code; at 64 ms/GB, it takes only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.

Tuesday Jul 19, 2011

Fast Crash Dump

You may have noticed the new system crash dump file vmdump.N that was introduced in Solaris 10 9/10. However, you perhaps did not notice that the crash dump is generated much more quickly than before, reducing your downtime by many minutes on large memory systems by harnessing parallelism and high compression at dump time. In this entry, I describe the Fast Crash optimizations that Dave Plauger and I added to Solaris.

In the previous implementation, if a system panics, the panic thread freezes all other CPUs and proceeds to copy system memory to the dump device. By default, only kernel pages are saved, which is usually sufficient to diagnose a kernel bug, but that can be changed with the dumpadm(1M) command. I/O is the bottleneck, so the panic thread compresses pages on the fly to reduce the data to be written. It uses lzjb compression, which provides decent compression at reasonable CPU cost. When the system reboots, the single-threaded savecore(1M) process reads the dump device, uncompresses the data, and creates the crash dump files vmcore.N and unix.N, for a small integer N.

Even with lzjb compression, writing a crash dump on systems with gigabytes to terabytes of memory takes a long time. What if we use stronger compression to further reduce the amount of data to write? The following chart compares the compression ratio of lzjb vs bzip2 for 42 crash dumps picked at random from our internal support site.

    [Chart: compression ratio, lzjb vs. bzip2, for 42 sample crash dumps]

bzip2 compresses 2X more than lzjb in most cases, and in the extreme case achieves 39X compression versus 9X for lzjb. (We also tested gzip levels 1 through 9, and they fall in between the two.) Thus we could reduce the disk I/O time for a crash dump by using bzip2. The catch is that bzip2 requires significantly more CPU time than lzjb per byte compressed, some 20X to 40X more on the SPARC and x86 CPUs we tested, so introducing bzip2 in a single-threaded dump would be a net loss. However, we hijack the frozen CPUs to compress different ranges of physical memory in parallel. The panic thread traverses physical memory in 4 MB chunks, mapping each chunk and passing its address to a helper CPU. The helper compresses the chunk to an output buffer and passes the result back to the panic thread, which writes it to disk. This is implemented in a pipelined, dataflow fashion, so the helper CPUs are kept busy compressing the next batch of data while the panic thread writes the previous batch to disk.

We dealt with several practical problems to make this work. Each helper CPU needs several MB of buffer space to run the bzip2 algorithm, which really adds up for hundreds of CPUs, and we did not want to statically reserve that much memory per domain. Instead, we scavenge memory that is not included in the dump, such as userland pages in a kernel-only dump. Also, during a crash dump, only the panic thread is allowed to use kernel services, because the state of kernel data structures is suspect and concurrency is not safe. Thus the panic thread and helper CPUs must communicate using shared memory and spin locks only.
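
As a rough illustration of the dataflow, here is a greatly simplified user-level sketch that assumes libbz2 and POSIX threads (compile with -lbz2 -lpthread). The "panic thread" (main) hands 4 MB chunks of a fake memory buffer to helper threads for bzip2 compression and overlaps that work with writes to a file, communicating only by spinning on shared atomics. The real code runs in the kernel at panic time and differs in many details:

    /*
     * Pipelined dump sketch: helpers compress 4 MB chunks with
     * BZ2_bzBuffToBuffCompress() while the main thread writes each
     * finished chunk, so compression and I/O overlap.  No mutexes or
     * condition variables are used; handoffs spin on shared atomics.
     */
    #include <bzlib.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK    (4U * 1024 * 1024)     /* 4 MB input chunks */
    #define NCHUNKS  16                     /* fake "memory" = 64 MB */
    #define NHELPERS 4

    static char *src;                       /* stand-in for physical memory */
    static char stop;                       /* sentinel address: no more work */

    static _Atomic(char *) inbox[NHELPERS]; /* chunk handed to each helper */
    static char           *outbuf[NHELPERS];/* per-helper compressed output */
    static atomic_uint     outlen[NHELPERS];/* 0 until the output is ready */

    static void *
    helper(void *arg)
    {
        long h = (long)arg;

        for (;;) {
            char *chunk;
            unsigned int dlen = CHUNK + CHUNK / 100 + 600;  /* bzip2 bound */

            while ((chunk = atomic_load(&inbox[h])) == NULL)
                ;                           /* spin for work */
            atomic_store(&inbox[h], NULL);
            if (chunk == &stop)
                return (NULL);
            if (BZ2_bzBuffToBuffCompress(outbuf[h], &dlen,
                chunk, CHUNK, 9, 0, 0) != BZ_OK)
                abort();
            atomic_store(&outlen[h], dlen); /* publish the result */
        }
    }

    int
    main(void)
    {
        pthread_t tid[NHELPERS];
        FILE *dump = fopen("dump.bz", "w");
        long h;
        int c;

        src = calloc(NCHUNKS, CHUNK);       /* fake "memory" to dump */
        for (h = 0; h < NHELPERS; h++) {
            outbuf[h] = malloc(CHUNK + CHUNK / 100 + 600);
            pthread_create(&tid[h], NULL, helper, (void *)h);
        }

        /*
         * Pipelined loop: collect helper h's previous result and write it,
         * then hand that helper its next chunk (or the stop sentinel).
         */
        for (c = 0; c < NCHUNKS + NHELPERS; c++) {
            unsigned int len;

            h = c % NHELPERS;
            if (c >= NHELPERS) {
                while ((len = atomic_load(&outlen[h])) == 0)
                    ;                       /* spin for the result */
                fwrite(outbuf[h], 1, len, dump);
                atomic_store(&outlen[h], 0);
            }
            atomic_store(&inbox[h],
                c < NCHUNKS ? src + (size_t)c * CHUNK : &stop);
        }

        for (h = 0; h < NHELPERS; h++)
            pthread_join(tid[h], NULL);
        fclose(dump);
        return (0);
    }

In this sketch each chunk becomes an independent bzip2 stream written in chunk order; the actual dump format is more elaborate, but the overlap of compression with I/O is the same idea.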

The speedup of the parallel crash dump versus the serial dump depends on the compression factor, CPU speed, and disk speed, but here are a few examples. These are kernel-only dumps, and the dumpsize column below is the uncompressed kernel size. The disk is either a raw disk or a simple ZFS zvol, with no striping. Before is the time for a serial dump, and after is the time for a parallel dump, measured from the "halt -d" command to the last write to the dump device.

    system  NCPU  disk  dumpsize  compression  before  after  speedup
                          (GB)                 mm:ss   mm:ss
    ------  ----  ----  --------  -----------  -----   -----  -------
    M9000   512   zvol    90.6      42.0       28:30    2:03    13.8X
    T5440   256   zvol    19.4       7.2       27:21    4:29     6.1X
    x4450    16   raw      4.9       6.7        0:22    0:07     3.2X
    x4600    32   raw     14.6       3.1        3:47    1:46     2.1X
    

The higher compression, performed concurrently with I/O, gives a significant speedup, but we are still I/O limited, and future speedups will depend on improvements in I/O. For example, a striped zvol is not supported as a dump device, but that would help. You can use hardware RAID to configure a faster dump device.

We also optimized live dumps, which are crash dumps generated with the "savecore -L" command, without stopping the system. This is useful for diagnosing a system that is misbehaving in some way but still performing adequately, without interrupting service. We fixed a bug (CR 6878030) in which live dump writes were broken into 8K physical transfers, giving terrible I/O throughput. Live dumps could take hours on large systems, making them practically unusable. We obtained these speedups for various dump sizes:

    system disk  before   after  speedup
                 h:mm:ss  mm:ss
    ------ ----  -------  -----  -------
    M9000  zvol  3:28:10  15:06  13.8X
    T5440  zvol    23:29   1:19  18.6X
    T5440  raw      9:17   0:55  10.5X
    

Lastly, we optimized savecore, which runs when the system boots after a panic and copies data from the dump device to a persistent file, such as /var/crash/hostname/vmdump.N. savecore is 3X to 5X faster because the dump device contents are more compressed, so there are fewer bytes to copy, and because it no longer uncompresses the dump by default; keeping the dump compressed also makes more sense if you need to send it across the internet. To uncompress by default, use "dumpadm -z off". To uncompress a vmdump.N file and produce vmcore.N and unix.N, which are required for using tools such as mdb, run the "savecore -f" command. We multi-threaded savecore to uncompress in parallel.

Having said all that, I sincerely hope you never see a crash dump on Solaris! The best way to reduce downtime is for the system to stay up. The Sun Ray server I am working on now has been up for 160 days since its previous planned downtime.
