Wednesday Mar 28, 2007

Eeek: Time isn't accurate in a VM!

I noticed that time was all over the place on my Solaris guest, which made me wonder how to measure and quantify time on a virtualized guest. The problem is that if you use gettimeofday() in the guest as a reference, it too may be inaccurate. So I used an external time reference to measure the guest, and lo and behold, time was indeed off!

On my most recent VMware test configuration, running an snv (Solaris Nevada) guest under VMware on Ubuntu, the guest's clock was jumping forward by several seconds or minutes at random times.

The test host was Ubuntu 6.10 on a two-socket, dual-core Opteron rev. E system.

Basically, the problem is that the Ubuntu host power-manages the Opteron cores, AND it seems that some virtualization layers (in this case VMware) don't take into account that the time stamp counters (TSCs) are not in sync across cores. When this happens, time jumps forward at random intervals, sometimes by up to an hour. This particular problem only occurs on NUMA systems whose TSCs are not synchronized.

To solve the problem, I bound my snv guest to a core and told VMware not to adjust the TSC. Here's the description I have for these options: "The host.noTSC and ptsc.noTSC lines enable a mechanism that tries to keep the guest clock accurate even when the time stamp counter (TSC) is slow."

processor0.use = FALSE 
processor1.use = FALSE 
processor2.use = FALSE 
 
host.cpukHz = 2200000 
host.noTSC = TRUE 
ptsc.noTSC = TRUE 

Here's what I did to quantify the issue: I ran an externally controlled, timed benchmark of a small timing program, once on a reference (unvirtualized) host and once on the virtualized snv guest. That way, I knew what the real elapsed time was rather than trusting what the guest was telling me. Interestingly, the guest was indeed lying about its notion of wall clock time.
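
The probe itself is simple; a minimal sketch along these lines (not the exact program I used) prints the per-second gettimeofday() and gethrtime() deltas that appear in the tables below:

/*
 * Sketch of the timing probe: each second, print the elapsed seconds
 * since the start (per gettimeofday()), plus the gettimeofday() and
 * gethrtime() deltas across a 1s sleep, in microseconds.  Run it under
 * external timing control on the guest and on a reference host.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int
main(void)
{
	struct timeval start, tv0, tv1;
	hrtime_t hr0, hr1;

	gettimeofday(&start, NULL);
	for (;;) {
		gettimeofday(&tv0, NULL);
		hr0 = gethrtime();
		sleep(1);
		gettimeofday(&tv1, NULL);
		hr1 = gethrtime();
		printf("%3ld %9lld %9lld\n",
		    (long)(tv1.tv_sec - start.tv_sec),
		    (tv1.tv_sec - tv0.tv_sec) * 1000000LL +
		    (tv1.tv_usec - tv0.tv_usec),
		    (long long)((hr1 - hr0) / 1000));
	}
	/* NOTREACHED */
	return (0);
}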

Here's what 18 seconds looks like on (a) a VMware Solaris guest and (b) a reference machine (an old SPARC); i.e., both of these tests ran for exactly 18 real seconds. Sec is the number of seconds since the start according to gettimeofday(), tod is the delta of gettimeofday() across each 1-second sleep, and hrtime is the corresponding gethrtime() delta (both in microseconds).

Reference: 
Sec     tod   hrtime 
 0  1009354  1009476 
 1  1009801  1009817 
 2  1010484  1010505 
 3  1009315  1009333 
 4  1009905  1009923 
 5  1009909  1009930 
 6  1009905  1009924 
 7  1009911  1009945 
 8  1009842  1009869 
 9  1009897  1009918 
10  1009887  1009906 
11  1009918  1009933 
12  1009903  1009920 
13  1009913  1009931 
14  1009892  1009910 
15  1009908  1009928 
16  1009895  1009921 
17  1009900  1009929 
18  1009876  1009896 

snv Guest:

 
Sec     tod   hrtime 
 0  1000665  1000702  
 1  1007738  1007756  
 2 177169798 177169813  <= Argh!
179  1011251  1011275  
180  1008404  1008432  
181  1009989  1010016  
182  1009618  1009644  
183  1009896  1009924  
184  1009747  1009766  
185  1000265  1000291  
186  1009336  1009360  

In the Solaris VMware guest with numanode = "1" set, things get better, but now time runs slow (this setting binds the guest onto a NUMA-local, TSC-coherent set of cores):

Sec     tod   hrtime 
 0  1000122  1000139 
 1  1004468   989472 
 2  4973883   939326 
 6  1009682  1009689 
 7  1005019   991275 
 8  4975168   939355 
13  1009630  1009638 
14  1003097   989955 

With the following params set:

processor0.use = FALSE 
processor1.use = FALSE 
processor2.use = FALSE 
host.cpukHz = 2200000 
host.noTSC = TRUE 
ptsc.noTSC = TRUE 

Sec     tod   hrtime 
 0  1004787  1004911 
 1  1009783  1009802 
 2  1009914  1009935 
 3  1009894  1009913 
 4  1009895  1009913 
 5  1009900  1009918 
 6  1037644  1037680 
 7  1002091  1002117 
 8  1009910  1009929 
 9  1009897  1009920 
10  1009904  1009923 
11  1009893  1009913 
12  1009916  1009934 
13  1009876  1009894 
14  1009893  1009918 
15  1009873  1009891 
16  1009901  1009921 
17  1009883  1009911 
18  1009903  1009922 

Success!

Thursday Mar 15, 2007

New Features of the Solaris Performance Wiki

We've added a few new features and some more content at the Solaris Performance Wiki.

Namely:

  • Popular content Rating and Navigation
  • Performance news aggregation
  • New content

Of course, everyone is welcome to contribute!


Friday Nov 11, 2005

Tuning for Maximum Sequential I/O Bandwidth

John asks the question "what does maxphys do, and how should I tune it?"

The maxphys parameter used to be the authoritative limit on the maximum I/O transfer size in Solaris. A large transfer size, if the device and I/O layers support it, generally provides better throughput for large I/Os: disks deliver their best sequential rates with larger transfers. For a single disk, maximum transfer rate is generally reached at around 64k, and for disk arrays, typically at 1MB.

Historically, maxphys was set to 56k, due to the maximum transfer limitations of some older (VME?) buses. The limit was increased to 128k on SPARC around the Solaris 2.6 timeframe. Since Solaris 7, the sd/ssd drivers (SCSI and Fibre Channel) override maxphys if the device supports tagged queuing, up to a default of 1MB.

On x86/x64, the default is still 56k (we need to look at this!).

In summary, the defaults for SPARC with SCSI or Fibre Channel are already optimal. You can always check by doing a large I/O test and observing the average I/O size:

# dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=8192k

# iostat -xnc 3
     cpu
 us sy wt id
 10  4  0 86
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   22.4    0.0 22910.7    0.0  0.0  1.0    0.1   44.6   0 100 c0t0d0

Here we can see that we are doing about 22 reads per second at roughly 22MB/s; thus the disk is performing optimal 1MB I/Os. A simple set of tests with varying block sizes will also help identify the best I/O size for your device, as sketched below.
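
If you'd rather not drive dd from a script, a small hypothetical C sketch like the one below times sequential reads of a raw device at a given block size and reports the throughput; running it with block sizes of 64k, 128k, 512k and 1m and comparing the results will show where your device's transfer rate levels off:

/*
 * Hypothetical block-size sweep: sequentially read ~512MB from a raw
 * device using the given block size (a multiple of 512) and print MB/s.
 * Example: ./seqread /dev/rdsk/c0t0d0s0 1048576
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int
main(int argc, char **argv)
{
	size_t bs;
	char *buf;
	ssize_t n;
	long long total = 0;
	hrtime_t t0, t1;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s rawdev blocksize\n", argv[0]);
		return (1);
	}
	bs = (size_t)atol(argv[2]);
	if ((buf = malloc(bs)) == NULL || (fd = open(argv[1], O_RDONLY)) < 0) {
		perror(argv[1]);
		return (1);
	}
	t0 = gethrtime();
	while (total < 512LL * 1024 * 1024 && (n = read(fd, buf, bs)) > 0)
		total += n;
	t1 = gethrtime();
	printf("bs=%lu bytes: %.1f MB/s\n", (unsigned long)bs,
	    ((double)total / 1048576.0) / ((double)(t1 - t0) / 1e9));
	(void) close(fd);
	free(buf);
	return (0);
}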

One more comment about maximum transfer size: the driver's max transfer size is read by UFS's newfs command and used to set the cluster size of the file system, i.e., the number of contiguous blocks to read ahead or write behind. If you put a file system on a device and want to see large I/Os, you'll need to ensure that the maxcontig parameter also reflects the device's max transfer size. It can be tweaked afterwards with tunefs.

# mkfs -m /dev/dsk/c0t0d0s0
mkfs -F ufs -o nsect=248,ntrack=19,bsize=8192,fragsize=1024,cgsize=22,free=1,rps=120,nbpi=8155,opt=t,apc=0,gap=0,nrpos=8,maxcontig=128,mtb=n /dev/dsk/c0t0d0s0 18588840

Here you can see that maxcontig is 128 8k blocks, which is 1MB. If you tune maxphys, remember to reset the file system's maximum cluster size with tunefs afterwards, too.


Monday Oct 31, 2005

CMT is coming: Is your application ready?

We're close to seeing some of the most exciting SPARC systems in over a decade. The new Niagara-based systems are the most aggressive CMT systems the industry has seen to date, with 32 threads in a single chip. A chip like this will be able to deliver the performance of up to 15 UltraSPARC processors while using less than one third of the power. This represents a compelling advantage not only in performance, but also a significant reduction in power, cooling and space.

Since even a single Niagara chip presents itself to software as a 32-processor system, the ability of system and application software to exploit multiple processors or threads simultaneously is becoming more important than ever. As CMT hardware progresses, the software is required to scale accordingly to fully exploit the parallelism of the chip.

Current efforts are delivering successful scaling results for key applications. Oracle, the Sun Web Server, and SAP are among the many applications that have already demonstrated the scalability needed to fully exploit all the threads of a Niagara-based system.

To maximize the success of CMT systems we need renewed focus on application scalability. Many of the applications we migrate to CMT systems will have been developed on low end Linux systems; they may have never been tested on a higher end system.

The Association for Computing Machinery (ACM) is running a special feature on the impact of CMT on software this month. There are several relevant articles in this issue:

  • Kunle Olukotun, founder of Afara Websystems, which pioneered what is now the Niagara processor, writes about the inevitable transition to CMT:

    "the transition to CMPs is inevitable because past efforts to speed up processor architectures with techniques that do not modify the basic von Neumann computing model, such as pipelining and superscalar issue, are encountering hard limits. As a result, the microprocessor industry is leading the way to multicore architectures

    Throughput computing is the first and most pressing area where CMPs are having an impact. This is because they can improve power/performance results right out of the box, without any software changes, thanks to the large numbers of independent threads that are available in these already multithreaded applications."

    More

  • Luiz Barroso, principal engineer at Google, shows why CMT is the only economically viable solution for large datacenter scale-out:

    "We can break down the TCO (total cost of ownership) of a large-scale computing cluster into four main components: price of the hardware, power (recurring and initial data-center investment), recurring data-center operations costs, and cost of the software infrastructure.

    ...And it gets worse. If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin."

    More

  • Richard McDougall, Performance and Availability Engineering, writes on the fundamentals of software scaling:

    "We need to consider the effects of the change in the degree of scaling on the way we architect applications, on which operating system we choose, and on the techniques we use to deploy applications - even at the low end."

    More

  • Herb Sutter, architect at Microsoft, writes about changes to programming languages that could exploit parallelism:

    "But concurrency is hard. Not only are today's languages and tools inadequate to transform applications into parallel programs, but also it is difficult to find parallelism in mainstream applications, and - worst of all - concurrency requires programmers to think in a way humans find difficult.

    Nevertheless, multicore machines are the future, and we must figure out how to program them. The rest of this article delves into some of the reasons why it is hard, and some possible directions for solutions."

    More

In addition to the ACM Queue articles, there was a recent NetTalk on Scaling My Apps, featuring Bryan Cantrill. There will be a follow-up experts exchange on this topic, where customers can chat live with the technical scaling experts. Also, look for a new whitepaper on scaling applications for CMT from Denis Sheahan of the Niagara architecture group.

Friday Aug 12, 2005

File System Performance Benchmarks (FileBench) added to Performance Community

For those who have been following FileBench, we've finally open sourced it (FileBench is our file system performance measurement and benchmarking framework). We've started a community web page and a discussion on this topic within the OpenSolaris performance community.

We're working on some Linux ext3/reiserfs comparisons, which serve partly as a workload validation experiment; these should be available soon.

Documentation is being worked on as we speak. I'll be providing some worked examples for reference in the near future.


Tuesday Jul 12, 2005

Performance Improvements for the File System Cache in Solaris 10

One of the major improvements in Solaris 10 was to the file system cache. Here is a quick summary of the performance changes, which I measured recently using FileBench.

The file system caching mechanism was overhauled to eliminate the majority of the MMU overhead associated with the page cache. The new segkpm facility provides fast mappings for physical pages within the kernel's address space. It is used by the file systems to provide a virtual address that can be used to copy data to and from the user's address space for file system I/O.

Since the available virtual address range in a 64-bit kernel is always larger than physical memory, the entire physical memory can be mapped into the kernel. This eliminates the need to map and unmap pages every time they are accessed via segmap, significantly reducing the code path and the need for TLB shootdowns. In addition, segkpm can use large TLB mappings to minimize TLB miss overhead.

Some quick measurements on Solaris 8 and 10 show the code path reduction. There are three important cases:

  • Small In-Memory: When the file fits in memory, and is smaller than the size of the segmap cache
  • Large In-Memory: When the file fits in memory, and is larger than the size of the segmap cache
  • Large Non-Cached: When the file does not fit in memory

Measurements show that the most significant gain is a reduction in CPU time per system call when a file is in memory (fits in the page cache) but is larger than segmap (which is 12% of physmem on SPARC, and 64MB on x64). Importantly, this is also the most common case I've seen with real applications.

Random Read of 8k                                              Solaris 8   Solaris 10
Small In-Memory (less than segmap)                                  89us         83us
Large In-Memory (greater than segmap but less than physmem)        181us         98us
Large Non-Cached                                                    236us        153us

The throughput gains for a CPU constrained system are shown here:

With Solaris 10 you should expect significant improvements for applications that do intensive I/O through the file system cache. The actual improvement will vary, and will be greater on systems with higher CPU counts. You should also expect to see the cross-call count drop (see xcal in mpstat) and a significantly reduced amount of system time.

Technorati Tag: OpenSolaris

Technorati Tag: Solaris

Wednesday Jun 29, 2005

Using prstat to estimate memory slow-downs

There are many indicators that show when Solaris has a memory shortage, but when you get into this situation, wouldn't it be nice if you could tell who is being affected, and to what degree?

Here's a simple method: use the microstate measurement facility in prstat(1M). Using the memory-stall counts, you can observe the percentage of wall clock time spent waiting in data faults.

The microstates account for 100% of a thread's wall clock time between samples. The time is broken down into eight categories: on-CPU time in USR and SYS, time in software traps (TRP), memory stalls (DFL and TFL), user lock waits (LCK), sleeping on I/O or something else (SLP), and time spent waiting on the run queue (LAT). Of particular interest here is the DFL column, which shows the percentage of time spent waiting for data faults to be serviced. When an application is stalled waiting to be paged in from the swap device, time accumulates in this column.
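
Under the covers, prstat gets these numbers from the microstate accounting data exposed through /proc. As an aside, here is a minimal, hypothetical sketch that reads /proc/<pid>/usage (the prusage_t structure from procfs.h) and prints the cumulative data-fault wait time for a process; note that prstat -mL reports per-LWP deltas between samples, whereas this simply dumps the cumulative process totals:

/*
 * Hypothetical example: read the microstate accounting data behind
 * prstat -m directly from /proc/<pid>/usage and report how much of
 * the process's elapsed time has been spent in data-fault (DFL) and
 * text-fault (TFL) waits.  Totals are cumulative since process start.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <procfs.h>

static double
ts(timestruc_t t)
{
	return (t.tv_sec + t.tv_nsec / 1e9);
}

int
main(int argc, char **argv)
{
	char path[64];
	prusage_t u;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s pid\n", argv[0]);
		return (1);
	}
	(void) snprintf(path, sizeof (path), "/proc/%s/usage", argv[1]);
	if ((fd = open(path, O_RDONLY)) < 0 ||
	    read(fd, &u, sizeof (u)) != sizeof (u)) {
		perror(path);
		return (1);
	}
	printf("elapsed %.2fs  dfl %.2fs  tfl %.2fs  (dfl = %.1f%% of elapsed)\n",
	    ts(u.pr_rtime), ts(u.pr_dftime), ts(u.pr_tftime),
	    100.0 * ts(u.pr_dftime) / ts(u.pr_rtime));
	(void) close(fd);
	return (0);
}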

The following example shows a severe memory shortage situation. The system was running short of memory, and each thread in filebench was waiting for memory approximately 90% of the time.

sol8$ prstat -mL
   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
 15625 rmc      0.1 0.7 0.0 0.0  95 0.0 0.9 3.2  1K 726  88   0 filebench/2
 15652 rmc      0.1 0.7 0.0 0.0  94 0.0 1.8 3.6  1K  1K  10   0 filebench/2
 15635 rmc      0.1 0.7 0.0 0.0  96 0.0 0.5 3.2  1K  1K   8   0 filebench/2
 15626 rmc      0.1 0.6 0.0 0.0  95 0.0 1.4 2.6  1K 813  10   0 filebench/2
 15712 rmc      0.1 0.5 0.0 0.0  47 0.0  49 3.8  1K 831 104   0 filebench/2
 15628 rmc      0.1 0.5 0.0 0.0  96 0.0 0.0 3.1  1K 735   4   0 filebench/2
 15725 rmc      0.0 0.4 0.0 0.0  92 0.0 1.7 5.7 996 736   8   0 filebench/2
 15719 rmc      0.0 0.4 0.0 0.0  40  40  17 2.9  1K 708 107   0 filebench/2
 15614 rmc      0.0 0.3 0.0 0.0  92 0.0 4.7 2.4 874 576  40   0 filebench/2
 15748 rmc      0.0 0.3 0.0 0.0  94 0.0 0.0 5.5 868 646   8   0 filebench/2
 15674 rmc      0.0 0.3 0.0 0.0  86 0.0 9.7 3.2 888 571  62   0 filebench/2
 15666 rmc      0.0 0.3 0.0 0.0  29  46  23 2.1 689 502 107   0 filebench/2
 15682 rmc      0.0 0.2 0.0 0.0  24  43  31 1.9 660 450 107   0 filebench/2

Tuesday Jun 14, 2005

Adding your own performance statistics to Solaris

Have you ever wondered where all of those magical vmstat statistics come from, or how you might go about adding your own? Now that we have DTrace, you can get all the ad-hoc statistics you need, so we seldom need to add hard-coded statistics. However, the kstat mechanism is still used to provide lightweight statistics that are a stable part of your kernel code; the kstat interface is the way to provide standard information that can be reported by a user-level tool. For example, if you wanted to add your own device driver I/O statistics to the statistics pool reported by the iostat command, you would add a kstat provider.

Peter Boothby provides an excellent introduction to the kstat facility and the user API in his developer article. Since Pete's covered the detail, I'll just recap the major parts and jump into an overview of a kstat provider.

We'll start with the basics about adding your own kstat provider, and now that we've launched OpenSolaris, I can include source references ;-)

The statistics reported by vmstat, iostat and most of the other Solaris tools are gathered via a central kernel statistics subsystem, known as "kstat". The kstat facility is an all-purpose interface for collecting and reporting named and typed data.

A typical scenario has a kstat producer and a kstat reader. The kstat reader is a user-mode utility which reads, potentially aggregates, and then reports the results. For example, the vmstat utility is a kstat reader which aggregates statistics provided by the VM system in the kernel.
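
For the reader side, here's a minimal sketch of a user-level kstat consumer written with libkstat (an illustrative example of mine, not vmstat's actual code); it looks up the unix:0:system_pages kstat that we'll walk through below and prints a single named value. Compile with -lkstat:

/*
 * Minimal libkstat reader sketch: open the kstat chain, look up one
 * kstat by module/instance/name, read it, and fetch a named value.
 */
#include <stdio.h>
#include <kstat.h>

int
main(void)
{
	kstat_ctl_t *kc;
	kstat_t *ksp;
	kstat_named_t *kn;

	if ((kc = kstat_open()) == NULL) {
		perror("kstat_open");
		return (1);
	}
	ksp = kstat_lookup(kc, "unix", 0, "system_pages");
	if (ksp == NULL || kstat_read(kc, ksp, NULL) == -1) {
		perror("kstat_lookup/kstat_read");
		return (1);
	}
	kn = kstat_data_lookup(ksp, "freemem");
	if (kn != NULL)
		printf("freemem = %lu pages\n", (ulong_t)kn->value.ul);
	(void) kstat_close(kc);
	return (0);
}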

Statistics are named and accessed via a four-tuple: module, instance, name, and class. Solaris 8 introduced a new way to access kstat information from the command line or in custom-written scripts. You can use the command-line tool /usr/bin/kstat interactively to print all or selected kstat information from a system. The program is written in Perl, and you can use the Perl XS extension module to write your own custom Perl programs. Both facilities are documented in the online manual pages.

The kstat Command

You can invoke the kstat command on the command line or within shell scripts to selectively extract kernel statistics. Like many other Solaris commands, kstat takes optional interval and count arguments for repetitive, periodic output. Its command options are quite flexible.

The command accepts two forms: the first follows standard UNIX command-line syntax (options plus operands), and the second passes some of the arguments as colon-separated module:instance:name:statistic fields. Both forms offer the same functionality. Each of the module, instance, name, or statistic specifiers may be a shell glob pattern or a Perl regular expression enclosed by "/" characters. You can use both specifier types within a single operand. Leaving a specifier empty is equivalent to using the "*" glob pattern for that specifier. Running kstat with no arguments prints nearly all kstat entries from the running kernel (most, but not all, kstats of KSTAT_TYPE_RAW are decoded).

The tests specified by the options are logically ANDed, and all matching kstats are selected. The argument for the -c, -i, -m, -n, and -s options may be specified as a shell glob pattern or a Perl regular expression enclosed in "/" characters. If you want to pass a regular expression containing shell metacharacters to the command, you must protect it from the shell by enclosing it in the appropriate quotation marks. For example, to show all kstats in the cpu_stat module with a statistic name beginning with intr, you could use the following:

$ kstat -p -m cpu_stat -s 'intr*'
cpu_stat:0:cpu_stat0:intr       878951000
cpu_stat:0:cpu_stat0:intrblk    21604
cpu_stat:0:cpu_stat0:intrthread 668353070
cpu_stat:1:cpu_stat1:intr       211041358
cpu_stat:1:cpu_stat1:intrblk    280
cpu_stat:1:cpu_stat1:intrthread 209879640

The -p option used in the preceding example displays output in a parsable format. If you do not specify this option, kstat produces output in a human-readable, tabular format. In the following example, we leave out the -p flag and use the module:instance:name:statistic argument form with a Perl regular expression.

$ kstat cpu_stat:::/^intr/
module: cpu_stat                        instance: 0
name:   cpu_stat0                       class:    misc
        intr                            879131909
        intrblk                         21608
        intrthread                      668490486
module: cpu_stat                        instance: 1
name:   cpu_stat1                       class:    misc
        intr                            211084960
        intrblk                         280
        intrthread                      209923001

A kstat provider - a walk through

To add your own statistics to the Solaris kernel, you need to create a kstat provider, which consists of an initialization function to create the statistics group and a callback function that updates the statistics before they are read. The callback function is often used to aggregate or summarize information before it is reported back to the reader. The kstat provider interface is defined in kstat(3KSTAT) and kstat(9S), and there is more verbose information in usr/src/uts/common/sys/kstat.h. The first step is to decide on the type of information you want to export. The primary types are RAW, NAMED, and IO. The RAW interface exports raw C data structures to userland, and its use is strongly discouraged, since a change in the C structure will cause incompatibilities in the reader. The NAMED and IO mechanisms are preferred, since their data is typed and extensible.

The NAMED type is for providing single or multiple records of data, and is the most common choice. The IO record is specifically for providing I/O statistics. It is collected and reported by the iostat command, and therefore should be used only for items which can be viewed and reported as I/O devices (we do this currently for I/O devices and NFS file systems).

A simple example of named statistics is the virtual memory summaries provided via "system_pages":

$ kstat -n system_pages
module: unix                            instance: 0     
name:   system_pages                    class:    pages                         
        availrmem                       343567
        crtime                          0
        desfree                         4001
        desscan                         25
        econtig                         4278190080
        fastscan                        256068
        freemem                         248309
        kernelbase                      3556769792
        lotsfree                        8002
        minfree                         2000
        nalloc                          11957763
        nalloc_calls                    9981
        nfree                           11856636
        nfree_calls                     6689
        nscan                           0
        pagesfree                       248309
        pageslocked                     168569
        pagestotal                      512136
        physmem                         522272
        pp_kernel                       64102
        slowscan                        100
        snaptime                        6573953.83957897

These are first declared and initialized by the following C structs in usr/src/uts/common/os/kstat_fr.c:


struct {
        kstat_named_t physmem;
        kstat_named_t nalloc;
        kstat_named_t nfree;
        kstat_named_t nalloc_calls;
        kstat_named_t nfree_calls;
        kstat_named_t kernelbase;
        kstat_named_t econtig;
        kstat_named_t freemem;
        kstat_named_t availrmem;
        kstat_named_t lotsfree;
        kstat_named_t desfree;
        kstat_named_t minfree;
        kstat_named_t fastscan;
        kstat_named_t slowscan;
        kstat_named_t nscan;
        kstat_named_t desscan;
        kstat_named_t pp_kernel;
        kstat_named_t pagesfree;
        kstat_named_t pageslocked;
        kstat_named_t pagestotal;
} system_pages_kstat = {

        { "physmem",            KSTAT_DATA_ULONG },
        { "nalloc",             KSTAT_DATA_ULONG },
        { "nfree",              KSTAT_DATA_ULONG },
        { "nalloc_calls",       KSTAT_DATA_ULONG },
        { "nfree_calls",        KSTAT_DATA_ULONG },
        { "kernelbase",         KSTAT_DATA_ULONG },
        { "econtig",            KSTAT_DATA_ULONG },
        { "freemem",            KSTAT_DATA_ULONG },
        { "availrmem",          KSTAT_DATA_ULONG },
        { "lotsfree",           KSTAT_DATA_ULONG },
        { "desfree",            KSTAT_DATA_ULONG },
        { "minfree",            KSTAT_DATA_ULONG },
        { "fastscan",           KSTAT_DATA_ULONG },
        { "slowscan",           KSTAT_DATA_ULONG },
        { "nscan",              KSTAT_DATA_ULONG },
        { "desscan",            KSTAT_DATA_ULONG },
        { "pp_kernel",          KSTAT_DATA_ULONG },
        { "pagesfree",          KSTAT_DATA_ULONG },
        { "pageslocked",        KSTAT_DATA_ULONG },
        { "pagestotal",         KSTAT_DATA_ULONG },
};

These statistics are the simplest type: merely a basic list of 64-bit variables. Once declared, the kstats are registered with the subsystem:


static int system_pages_kstat_update(kstat_t *, int);

...

        kstat_t *ksp;

        ksp = kstat_create("unix", 0, "system_pages", "pages", KSTAT_TYPE_NAMED,
                sizeof (system_pages_kstat) / sizeof (kstat_named_t),
                KSTAT_FLAG_VIRTUAL);
        if (ksp) {
                ksp->ks_data = (void *) &system_pages_kstat;
                ksp->ks_update = system_pages_kstat_update;
                kstat_install(ksp);
        }

...

The kstat_create() function takes the four-tuple description (module, instance, name, class), the kstat type and size, and returns a handle to the created kstat. The handle is then updated to include a pointer to the data and a callback function which will be invoked whenever the statistics are read.

The callback function has the task of updating the data structure pointed to by ks_data when invoked. If you choose not to have one, simply set the callback function to default_kstat_update(). The system_pages kstat update preamble looks like this:


static int
system_pages_kstat_update(kstat_t *ksp, int rw)
{

        if (rw == KSTAT_WRITE) {
                return (EACCES);
        }

This basic preamble checks to see if the user code is trying to read or write the structure. (Yes, it's possible to write to some statistics, if the provider allows it). Once basic checks are done, the update call-back simply stores the statistics into the predefined data structure, and then returns.


...
        system_pages_kstat.freemem.value.ul     = (ulong_t)freemem;
        system_pages_kstat.availrmem.value.ul   = (ulong_t)availrmem;
        system_pages_kstat.lotsfree.value.ul    = (ulong_t)lotsfree;
        system_pages_kstat.desfree.value.ul     = (ulong_t)desfree;
        system_pages_kstat.minfree.value.ul     = (ulong_t)minfree;
        system_pages_kstat.fastscan.value.ul    = (ulong_t)fastscan;
        system_pages_kstat.slowscan.value.ul    = (ulong_t)slowscan;
        system_pages_kstat.nscan.value.ul       = (ulong_t)nscan;
        system_pages_kstat.desscan.value.ul     = (ulong_t)desscan;
        system_pages_kstat.pagesfree.value.ul   = (ulong_t)freemem;
...

        return (0);
}

That's it for a basic named kstat.

I/O statistics

Moving on to I/O, we can see how I/O stats are measured and recorded. There is a special kstat type for I/O statistics, which provides a common methodology for recording statistics about devices that have queuing, utilization and response time metrics.

These devices are measured as a queue using a "Riemann sum": a count of the visits to the queue and a running sum of the "active" time. These two metrics can be used to determine the average service time and I/O counts for the device. There are typically two queues for each device, the wait queue and the run (active) queue, representing the time a request spends enqueued after it has been accepted, and then the time it spends active on the device. The statistics are described in kstat(3KSTAT):

     typedef struct kstat_io {
     /*
      * Basic counters.
      */
     u_longlong_t     nread;      /* number of bytes read */
     u_longlong_t     nwritten;   /* number of bytes written */
     uint_t           reads;      /* number of read operations */
     uint_t           writes;     /* number of write operations */
     /*
      * Accumulated time and queue length statistics.
      *
      * Time statistics are kept as a running sum of "active" time.
      * Queue length statistics are kept as a running sum of the
      * product of queue length and elapsed time at that length --
      * that is, a Riemann sum for queue length integrated against time.
      *
      *          ^
      *          |                       _________
      *          8                       | i4    |
      *          |                       |       |
      *   Queue  6                       |       |
      *   Length |       _________       |       |
      *          4       | i2    |_______|       |
      *          |       |       i3              |
      *          2_______|                       |
      *          |    i1                         |
      *          |_______________________________|
      *          Time->  t1      t2      t3      t4
      *
      * At each change of state (entry or exit from the queue),
      * we add the elapsed time (since the previous state change)
      * to the active time if the queue length was non-zero during
      * that interval; and we add the product of the elapsed time
      * times the queue length to the running length*time sum.
      *
      * This method is generalizable to measuring residency
      * in any defined system: instead of queue lengths, think
      * of "outstanding RPC calls to server X".
      *
      * A large number of I/O subsystems have at least two basic
      * "lists" of transactions they manage: one for transactions
      * that have been accepted for processing but for which processing
      * has yet to begin, and one for transactions which are actively
      * being processed (but not done). For this reason, two cumulative
      * time statistics are defined here: pre-service (wait) time,
      * and service (run) time.
      *
      * The units of cumulative busy time are accumulated nanoseconds.
      * The units of cumulative length*time products are elapsed time
      * times queue length.
      */
     hrtime_t   wtime;            /* cumulative wait (pre-service) time */
     hrtime_t   wlentime;         /* cumulative wait length*time product */
     hrtime_t   wlastupdate;      /* last time wait queue changed */
     hrtime_t   rtime;            /* cumulative run (service) time */
     hrtime_t   rlentime;         /* cumulative run length*time product */
     hrtime_t   rlastupdate;      /* last time run queue changed */
     uint_t     wcnt;             /* count of elements in wait state */
     uint_t     rcnt;             /* count of elements in run state */
     } kstat_io_t;

An I/O device driver has a similar declare-and-create section, as we saw with the named statistics. A quick look at the floppy disk device driver (usr/src/uts/sun/io/fd.c) shows the kstat_create() call in the device driver's attach function:


static int
fd_attach(dev_info_t *dip, ddi_attach_cmd_t cmd)
{
...
        fdc->c_un->un_iostat = kstat_create("fd", 0, "fd0", "disk",
            KSTAT_TYPE_IO, 1, KSTAT_FLAG_PERSISTENT);
        if (fdc->c_un->un_iostat) {
                fdc->c_un->un_iostat->ks_lock = &fdc->c_lolock;
                kstat_install(fdc->c_un->un_iostat);
        }
...
}

The per-I/O statistics are updated in several places, starting with the device driver's strategy function, where the I/O is first received and queued. At this point, the I/O is marked as waiting on the wait queue:

#define KIOSP   KSTAT_IO_PTR(un->un_iostat)

static int
fd_strategy(register struct buf *bp)
{
        struct fdctlr *fdc;
        struct fdunit *un;

        fdc = fd_getctlr(bp->b_edev);
        un = fdc->c_un;
...
	/* Mark I/O as waiting on wait q */
        if (un->un_iostat) {
                kstat_waitq_enter(KIOSP);
        }

...
}

The I/O spends some time on the wait queue until the device is able to process the request. For each I/O the fdstart() routine moves the I/O from the wait queue to the run queue via the kstat_waitq_to_runq() function:


static void
fdstart(struct fdctlr *fdc)
{

...
		/* Mark I/O as active, move from wait to active q */
                if (un->un_iostat) {
                        kstat_waitq_to_runq(KIOSP);
                }
...

		/* Do I/O... */
...

When the I/O is complete (still in the fdstart() function), it is marked as leaving the active queue via kstat_runq_exit(). This updates the last part of the statistic, leaving us with the number of I/Os, and the total time spent on each queue.

		/* Mark I/O as complete */
                if (un->un_iostat) {
                        if (bp->b_flags & B_READ) {
                                KIOSP->reads++;
                                KIOSP->nread +=
                                        (bp->b_bcount - bp->b_resid);
                        } else {
                                KIOSP->writes++;
                                KIOSP->nwritten += (bp->b_bcount - bp->b_resid);
                        }
                        kstat_runq_exit(KIOSP);
                }
                biodone(bp);

...

}

These statistics provide us with our familiar metrics where actv is the average length of the queue of active I/Os and asvc_t is the average service time in the device. The wait queue is represented accordingly with wait and wsvc_t.

$ iostat -xn 10
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.2    0.1    9.2    1.1  0.1  0.5    0.1   10.4   1   1 fd0
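
For reference, here is a hedged sketch (my own illustration, not iostat's actual source) of how those columns fall out of two kstat_io_t snapshots taken an interval apart, using the same libkstat calls discussed earlier; compile with -lkstat:

/*
 * Sample an I/O kstat twice and derive iostat-style columns from the
 * kstat_io_t counters.  Example: ./iosample fd 0 fd0 10
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <kstat.h>

int
main(int argc, char **argv)
{
	kstat_ctl_t *kc;
	kstat_t *ksp;
	kstat_io_t io0, io1;
	hrtime_t t0, t1;
	double dt, ops;

	if (argc != 5) {
		fprintf(stderr, "usage: %s module instance name interval\n",
		    argv[0]);
		return (1);
	}
	if ((kc = kstat_open()) == NULL ||
	    (ksp = kstat_lookup(kc, argv[1], atoi(argv[2]), argv[3])) == NULL) {
		perror("kstat");
		return (1);
	}
	(void) kstat_read(kc, ksp, &io0);
	t0 = ksp->ks_snaptime;			/* hrtime of this snapshot */
	(void) sleep(atoi(argv[4]));
	(void) kstat_read(kc, ksp, &io1);
	t1 = ksp->ks_snaptime;

	dt = (double)(t1 - t0);			/* elapsed nanoseconds */
	ops = (double)((io1.reads - io0.reads) + (io1.writes - io0.writes));

	printf("r/s %.1f  w/s %.1f  wait %.1f  actv %.1f  "
	    "wsvc_t %.1f  asvc_t %.1f  %%b %.0f\n",
	    (io1.reads - io0.reads) / (dt / 1e9),
	    (io1.writes - io0.writes) / (dt / 1e9),
	    (io1.wlentime - io0.wlentime) / dt,	/* avg wait queue length */
	    (io1.rlentime - io0.rlentime) / dt,	/* avg run queue length */
	    ops > 0 ? (io1.wlentime - io0.wlentime) / ops / 1e6 : 0.0,
	    ops > 0 ? (io1.rlentime - io0.rlentime) / ops / 1e6 : 0.0,
	    100.0 * (io1.rtime - io0.rtime) / dt);
	(void) kstat_close(kc);
	return (0);
}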

That's likely enough for a primer to kstat providers, as the detailed information is available in other places... Enjoy!

Richard.

Technorati Tag: OpenSolaris

Technorati Tag: Solaris

Monday Jun 13, 2005

Understanding Memory Allocation and File System Caching in OpenSolaris

Yes, I'm guilty. While the workings of the Solaris virtual memory system prior to Solaris 8 are documented quite well, I've not written much about the new cyclic file system cache. There will be quite a bit on this subject in the new Solaris Internals edition, but to answer a giant FAQ in a somewhat faster mode, I'm posting this overview.

A quick Introduction

File system caching has been implemented as an integrated part of the Solaris virtual memory system since as far back as SunOS 4.0. This has the great advantage of dynamically using available memory as a file system cache. While this has many positive effects (like being able to speed up some I/O-intensive apps by as much as 500x), there were some historic side effects: applications with a lot of file system I/O could swamp the memory system with demand for memory allocations, putting on so much pressure that memory pages would be aggressively stolen from important applications. Typical symptoms of this condition were that everything seemed to "slow down" when file I/O was occurring, and the system reported it was constantly out of memory. In Solaris 2.6 and 7, as part of the feature named "Priority Paging", I updated the paging algorithms to steal only file system pages unless there was a real memory shortage. This meant that although there was still significant pressure from file I/O and high "scan rates", applications weren't paged out and didn't suffer from the pressure. A healthy Solaris 7 system still reported that it was out of memory, but performed well.

Solaris 8 - "Cyclic Page Cache"

Starting with Solaris 8, we made a significant architectural enhancement that provides a more effective solution. The file system cache was changed so that it steals memory from itself, rather than from other parts of the system. Hence, a system with a large amount of file I/O remains in a healthy virtual memory state, with large amounts of visible free memory; and since the page scanner doesn't need to run, there are no aggressive scan rates. Because the page scanner isn't constantly required to free up large amounts of memory, it no longer limits file-system-related I/O throughput. Another benefit of the enhancement is that applications which want to allocate a large amount of memory can do so by efficiently consuming it directly from the file system cache. For example, starting Oracle with a 50-Gbyte SGA now takes less than a minute, compared to 20-30 minutes with the prior implementation.

The old allocation algorithm

To keep this explanation relatively simple, let's take a brief look at what used to happen with Solaris 7, even with priority paging. The file system consumes memory from the free list every time a new page is read from disk (or wherever) into the file system. The more pages we read, the more pages are depleted from the system's free list (the central place where memory is kept for reuse). Eventually (sometimes rather quickly), the free memory pool is exhausted. At this point, if there is enough pressure, further requests for new memory pages are blocked until the free memory pool is replenished by the page scanner. The page scanner scans inefficiently through all of memory, looking for pages it can free, and slowly refills the free list, but only by enough to satisfy the immediate request; processes resume for a short time, and then stop again as they once more run short on memory. The page scanner is a bottleneck in the whole memory life cycle.

The file system's cache mechanism (segmap) consumes memory from the free list until it is depleted. After those pages are used, they are kept around, but they are only immediately accessible to the file system cache in the direct re-use case; that is, if a file system cache hit occurs, they can be "reclaimed" back into segmap to avoid a subsequent physical I/O. However, if the file system cache needs a new page, there is no easy way of finding these pages; instead, the page scanner is used to stumble across them. The page scanner effectively bails out the system, blindly searching memory for pages to refill the free list. The page scanner has to fill the free list at the same rate as the file system is reading new pages, and is a single point of constraint in the whole design.

The new allocation algorithm

The new algorithm places inactive file cache pages (those which aren't immediately mapped anywhere) on a central list, so that they can easily be used to satisfy new memory requests. This is a very subtle change, but with significant, demonstrable effects. First, the file system cache now behaves as a single age-ordered FIFO: recently read pages are placed on the tail of the list, and new pages are consumed from the head. While on the list, the pages remain valid cached portions of their file, so if a read cache hit occurs, they are simply removed from wherever they are on the list. This means that pages which are accessed often (frequent cache hits) keep being moved back to the tail of the list, and only the oldest and least used pages migrate to the head as candidates for freeing.

The cachelist is linked to the freelist, so that if the freelist is exhausted, pages are taken from the head of the cachelist and their contents discarded. New pages are requested from the freelist, but since that list is often empty, allocations occur mostly from the head of the cachelist, consuming the oldest file system cache pages. The page scanner doesn't need to get involved, eliminating the paging bottleneck and the need to run the scanner at high rates (and hence not wasting CPU either).

If an application process requests a large amount of memory, it too can take from the cachelist via the freelist. Thus, an application can take a large amount of memory from the file system cache without needing to start the page scanner, resulting in substantially faster allocation.

Putting it all together: The Allocation Cycle of Physical Memory

The most significant central pool of physical memory is the freelist. Physical memory is placed on the freelist in page-size chunks when the system is first booted and is then consumed as required. Three major types of allocation occur from the freelist.

Anonymous/process allocations

Anonymous memory, the most common form of allocation from the freelist, is used for most of a process's memory allocation, including heap and stack. Anonymous memory also fulfills shared memory mapping allocations. A small amount of anonymous memory is also used in the kernel for items such as thread stacks. Anonymous memory is pageable and is returned to the freelist when it is unmapped or when it is stolen by the page scanner daemon.

File system “page cache”

The page cache is used for caching file data for file systems. The file system page cache grows on demand to consume available physical memory as a file cache, and caches file data in page-size chunks. Pages are consumed from the freelist as files are read into memory. The pages then reside in one of three places: the segmap cache, a process's address space to which they are mapped, or the cachelist.

The cachelist is the heart of the page cache. All unmapped file pages reside on the cachelist. Working in conjunction with the cache list are mapped files and the segmap cache.

Think of the segmap file cache as the fast, first-level file system read/write cache. segmap is a cache that holds file data read and written through the read and write system calls. Memory is allocated from the freelist to satisfy a read of a new file page, which then resides in the segmap file cache. File pages are eventually moved from the segmap cache to the cachelist to make room for more pages in the segmap cache.

The segmap cache is typically limited to 12% of physical memory on SPARC systems, and works in conjunction with the system cachelist to cache file data. When files are accessed through the read and write system calls, up to 12% of physical memory's worth of file data resides in the segmap cache, and the remainder is on the cachelist.

Memory-mapped files also allocate memory from the freelist, and their pages remain allocated for the duration of the mapping unless a global memory shortage occurs. When a file is unmapped (explicitly or with madvise), its pages are returned to the cachelist.

The cachelist operates as part of the freelist. When the freelist is depleted, allocations are made from the oldest pages in the cachelist. This allows the file system page cache to grow to consume all available memory and to dynamically shrink as memory is required for other purposes.

Kernel allocations

The kernel uses memory to manage information about internal system state; for example, memory used to hold the list of processes in the system. The kernel allocates memory from the freelist for these purposes with its own allocators: vmem and slab. However, unlike process and file allocations, the kernel seldom returns memory to the freelist; memory is allocated and freed between kernel subsystems and the kernel allocators, and is consumed from the freelist only when the total kernel allocation grows.

Memory allocated to the kernel is mostly nonpageable and so cannot be managed by the system page scanner daemon. The kernel's allocators proactively return memory to the system freelist when a global memory shortage occurs.

How to observe and monitor the new VM algorithms

The page scanner and its metrics are an important indicator of memory health. If the page scanner is running, there is likely a memory shortage. This is an interesting departure from the behavior you might have been accustomed to on Solaris 7 and earlier, where the page scanner was always running. Since Solaris 8, the file system cache resides on the cachelist, which is part of the global free memory count. Thus, if a significant amount of memory is available, even if it's being used as a file system cache, the page scanner won't be running.

The most important metric is the scan rate, which indicates whether the page scanner is running. The scanner starts scanning at an initial rate (slowscan) when free memory falls to the configured watermark (lotsfree), and then runs faster as free memory gets lower, up to a maximum rate (fastscan).

We can perform a quick and simple health check by determining whether there is a significant memory shortage. To do this, use vmstat to look at scanning activity and check to see if there is sufficient free memory on the system.

Let’s first look at a healthy system. This system is showing 970 Mbytes of free memory in the free column, and a scan rate (sr) of zero.

sol8# vmstat -p 3
     memory           page          executable      anonymous      filesystem 
   swap  free  re  mf  fr  de  sr  epi  epo  epf  api  apo  apf  fpi  fpo  fpf
 1512488 837792 160 20 12   0   0    0    0    0    0    0    0   12   12   12
 1715812 985116 7  82   0   0   0    0    0    0    0    0    0   45    0    0
 1715784 983984 0   2   0   0   0    0    0    0    0    0    0   53    0    0
 1715780 987644 0   0   0   0   0    0    0    0    0    0    0   33    0    0

Looking at a second case, we can see two of the key indicators showing a memory shortage—both high scan rates (sr > 50000 in this case) and very low free memory (free < 10 Mbytes).

sol8# vmstat -p 3
     memory           page          executable      anonymous      filesystem 
   swap  free  re  mf  fr  de  sr  epi  epo  epf  api  apo  apf  fpi  fpo  fpf
 2276000 1589424 2128 19969 1 0 0    0    0    0    0    0    0    0    1    1
 1087652 388768 12 129675 13879 0 85590 0 0   12    0 3238 3238   10 9391 10630
 608036 51464  20 8853 37303 0 65871 38   0  781   12 19934 19930 95 16548 16591
  94448  8000  17 23674 30169 0 238522 16 0  810   23 28739 28804 56  547  556

Given that the page scanner runs only when the freelist and cachelist are effectively depleted, any scanning activity is our first sign of memory shortage. Drilling down further with ::memstat shows us where the major allocations are.

sol9# mdb -k
Loading modules: [ unix krtld genunix ip ufs_log logindmux ptm cpc sppp ipc random nfs ]
> ::memstat

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      53444               208   10%
Anon                       119088               465   23%
Exec and libs                2299                 8    0%
Page cache                  29185               114    6%
Free (cachelist)              347                 1    0%
Free (freelist)            317909              1241   61%

Total                      522272              2040
Physical                   512136              2000

The categories are described as follows:

Kernel

The total memory used for nonpageable kernel allocations. This is how much memory the kernel is using, excluding anonymous memory used for ancillaries (see Anon).

Anon

The amount of anonymous memory. This includes user process heap, stack and copy-on-write pages, shared memory mappings, and small kernel ancillaries, such as lwp thread stacks, present on behalf of user processes.

Exec and libs

The amount of memory used for mapped files interpreted as binaries or libraries. This is typically the sum of memory used for user binaries and shared libraries. Technically, this memory is part of the page cache, but it is page cache tagged as “executable” when a file is mapped with PROT_EXEC and file permissions include execute permission.

Page cache

The amount of unmapped page cache, that is, page cache not on the cachelist. This category includes the segmap portion of the page cache, and any memory mapped files. If the applications on the system are solely using a read/write path, then we would expect the size of this bucket not to exceed segmap_percent (defaults to 12% of physical memory size). Files in /tmp are also included in this category.

Free (cachelist)

The amount of page cache on the cachelist (which operates as part of the freelist). The cachelist contains unmapped file pages and is typically where the majority of the file system cache resides. Expect to see a large cachelist on a system that has large file sets and sufficient memory for file caching.

Free (freelist)

The amount of memory that is actually free. This is memory that has no association with any file or process.

If you want this functionality on Solaris 8, copy the downloadable memory.so into /usr/lib/mdb/kvm/sparcv9, and then use ::load memory before running ::memstat. (Note that this is not Sun-supported code, but it is considered low risk since it affects only the user-level mdb program.)


# wget http://www.solarisinternals.com/si/downloads/memory.so
# cp memory.so /usr/lib/mdb/kvm/sparcv9
# mdb -k
Loading modules: [ unix krtld genunix ip ufs_log logindmux ptm cpc sppp ipc random nfs ]
> ::load memory
> ::memstat

That's it for now.

Technorati Tag: OpenSolaris

Technorati Tag: Solaris

Technorati Tag: mdb

Monday Apr 18, 2005

Usenix Performance lab scripts uploaded

Thanks to all who attended our Solaris 10 performance class last week at Usenix! The slides are available for reference.

Jim Mauro is spotted here with a giant screen full of DTrace:

We had a great time with the high level of class interaction. One surprise was from Phil Brown -- he showed us a Fujitsu tablet, which itself looked cool, but next he showed us 'uname -a': 5.10 Generic i86pc! Nice.

As a followup to the class, I just posted a tarball of the labs directory, containing the load generation scripts for each of the sessions. Soon, we will also create a place for the DTrace scripts.
