Tuesday Apr 19, 2011

A general update to force saving of blog

Just a short update

Thursday Aug 31, 2006

How often does Solaris sync-up with Intel ACPI CA releases?

We receive updates to the ACPI CA source on a semi-periodic basis from Intel, and I'll do a quick integration into a Solaris workspace using a simple script that handles the “average” case in under 2 minutes. I'll then build a kernel and sanity-test it on a machine or two. The entire process typically takes less than a half-day, and I can quickly provide feedback to the Intel ACPI CA team. I'll review the changes in the update; if there's something compelling, or it has been more than 4 months since I last integrated into Solaris Nevada, I'll do additional testing on a much broader range of machines and start the integration process. This means that Solaris Nevada is usually less than a few months out-of-sync with the very latest Intel sources, which also gives those sources broad “soak” testing.

Backports to Solaris 10 are demand-driven by escalations and functional dependencies; for example, as part of the work I did to support the Sun Blade 8000, I backported the integration of ACPI CA release 20060217 to Solaris 10 6/06 (aka Update 2). Individual bug fixes (like CR 6426595) are often backported à la carte.

Intel ACPI CA source release 20060721 integrated into Solaris Nevada build 48

I recently integrated Intel ACPI CA 20060721 into Solaris Nevada; it will be available in build 48. As is the case with most updates to ACPI CA, there are a few enhancements and a few bug fixes. One of the interesting enhancements in this update is to increase compatibility with “the other” ACPI AML interpreter:

Implemented support within the AML interpreter for package objects that contain a larger AML length (package list length) than the package element count. In this case, the length of the package is truncated to match the package element count. Some BIOS code apparently modifies the package length on the fly, and this change supports this behavior. Provides compatibility with the MS AML interpreter. (With assistance from Fiodor Suietov)

This is a good example of how everyone wins with open source – pretty much every non-Microsoft x86 OS today uses Intel ACPI CA. At the risk of tooting my own horn a bit, this update also contains the following bug-fix:

Fixed a memory mapping leak during the deletion of a SystemMemory operation region where a cached memory mapping was not deleted. This became a noticeable problem for operation regions that are defined within frequently used control methods. (Dana Myers)

I found this particular bug while testing on the Sun Blade 8000 (Andromeda); a very intense hot-plug test scenario would reliably fail after precisely 3 hours and 3 minutes, like clockwork. Andromeda's ACPI BIOS is used to implement PCIe hot-plug, so the test scenario was very heavily exercising a portion of the ACPI BIOS which dynamically creates a SystemMemory operation region, leaking a bit of memory mapping each time. After something like 180 or so hot-plug cycles, this resource was exhausted, leading to the failure. I'm pleased we found this in the lab before a customer did, but I'm more pleased with how quickly the fix was integrated by Bob Moore at Intel. Once again, everyone wins with open source. Thanks to the fix, this corner-case problem won't be seen in the field.

Another change Intel made at my request was to provide support for 64-bit thread Ids – a minor change, but again, one which enhances stability and ease-of-integration.

For a complete list of changes in each ACPI CA release, have a look at usr/src/uts/i86pc/io/acpica/changes.txt, which is updated with each integration. The previously-integrated release of ACPI CA was 20060317.

Tuesday May 09, 2006

Solaris ACPI CA debug output options

How to control Solaris ACPI CA debug output on DEBUG and non-DEBUG kernels.

Wednesday Mar 15, 2006

Living with a loquacious BIOS (or what's all that stuff in /var/adm/messages?)

I haven't heard any complaints about this yet, but I recently noticed that /var/adm/messages on my Acer Ferrari 3400 was filling up with jabber from the system BIOS, like this:

Mar 15 16:23:22 unknown acpica: [ID 927697 kern.notice] [ACPI Debug] String: [0x8] "QUERY_09"
Mar 15 16:23:22 unknown acpica: [ID 486228 kern.notice] [ACPI Debug] String: [0xD] "CMBatt - SMSL"
Mar 15 16:23:22 unknown acpica: [ID 214751 kern.notice] [ACPI Debug] Integer: 0x F1
Mar 15 16:23:22 unknown acpica: [ID 920918 kern.notice] [ACPI Debug] String: [0x12] "CMBatt - CHBP.BAT1"
Mar 15 16:23:22 unknown acpica: [ID 729625 kern.notice] [ACPI Debug] String: [0x1B] "CMBatt - BAT1 still present"
Mar 15 16:23:22 unknown acpica: [ID 578842 kern.notice] [ACPI Debug] String: [0x12] "CMBatt - UPBI.BAT1"
Mar 15 16:23:22 unknown acpica: [ID 776413 kern.notice] [ACPI Debug] Package: [0xD Elements]
Mar 15 16:23:22 unknown acpica: [ID 411424 kern.notice] [ACPI Debug] (0) Integer: 0x 1
Mar 15 16:23:22 unknown acpica: [ID 923811 kern.notice] [ACPI Debug] (1) Integer: 0x 1130

Pretty exciting, eh?  Back in Solaris Nevada build 29, I fixed kernel printf() so Solaris ACPI CA debug output would work, and I left the default ACPI CA "debug level" setting intact.  One of the things the default level sends to /var/adm/messages is the output produced whenever the ACPI BIOS contains ASL statements like:

                Store ("CMBatt - BAT1 STATE CHANGE", Debug)
and
                Store (PBST, Debug)

The Acer Ferrari 3400 BIOS is peppered with them.  Some of you are already familiar with Intel's ACPI CA interpreter and know that there is a global variable to select levels of debug output, and may have just fixed it yourself (gold stars for all of you), but I'd bet a dollar there are many more people who are just suffering in silence.   Well, here's the easy way to make it stop - add the following line to /etc/system and reboot:

set acpica:AcpiDbgLevel = 0x7

That's all.  If you really don't want to see any debug output from ACPI CA, you can set it to 0, but that's a bit extreme.  For more information, have a look at this ACPI CA header file in OpenSolaris.
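If rebooting is inconvenient, the same knob can probably be poked in the running kernel with mdb; this is just a sketch, assuming the acpica module is loaded and its AcpiDbgLevel symbol is visible to mdb:

    # check the current debug level, then write a new one (run as root)
    echo 'AcpiDbgLevel/X' | mdb -k
    echo 'AcpiDbgLevel/W 0x7' | mdb -kw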

Solaris Nevada will continue to default to leaving this feature turned on, but I'll change the default for Solaris 10 when I backport to an update release.


Monday Mar 06, 2006

ZFS v. UFS performance for building Solaris

Building Solaris: ZFS vs. UFS
Abstract:

I've been an early adopter of ZFS; it's amazingly easy to use and promises good performance.  My personal workstation is always running a relatively recent build of Solaris, and one of the most time-consuming tasks I do is full "nightly" builds.  Once I upgraded my workstation to a dual-core CPU, I noticed some oddities in ZFS performance and did some investigation.

Background:

My personal workstation is an Asus A8N-SLI Deluxe with 1GB of RAM and an Opteron 180 CPU (effectively identical to an Athlon 64 X2 4800+), upgraded last month from an Athlon 64 3200+.

Several months ago, when ZFS first integrated into Solaris Nevada, I did some experimentation with ZFS on a Dell PowerEdge equipped with 5 fast SAS disks.  I found that single-threaded access to ZFS was fast, and I could easily achieve over 200MB/sec transfer rates to a pool of 4 disks.  I saw some oddities with Bonnie numbers, and mentioned this to Tim Bray, who was kind enough to blog about this early experiment and perform some experimentation of his own.

I've been using ZFS on my personal workstation since then; when I upgraded from a single-core Athlon 64 to a dual-core Athlon 64 X2, I wondered why I didn't see as much of an improvement in full nightly build times as I expected, and compared notes with another Athlon 64 X2 user.  I discovered that my build time was about 40% longer than his, and the only difference I could find was that he was using UFS and I was using ZFS for the workspace storage.

In the last week, a wad of ZFS fixes has been integrated into Solaris Nevada and will, I'm sure, roll out into the sunlight in a Solaris Express release.  The fixes sounded quite promising, so I ran an experiment this weekend.

DISCLAIMER: these are not official benchmarks of any kind.  They're just some stuff I did because it was too wet to go outside and mow the lawn.

I added another disk to my personal workstation (an 80GB Hitachi SATA 7200RPM drive, if it matters), partitioned it into two equal 40GB slices, and created a UFS in one slice and a ZFS pool in the other.  (It is possible that the location of the slices on the disk has some impact on overall performance; I may try swapping the filesystems between the slices as a sanity test for this.  For now, I'm going to assume that the disk performance is the same for both slices.)
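For the curious, the setup was nothing fancier than the usual commands; roughly the following, where the device name (c1d1) and the pool/filesystem names are just illustrative:

    # slice 0 gets a UFS, slice 1 gets a ZFS pool (names are examples only)
    newfs /dev/rdsk/c1d1s0
    mount /dev/dsk/c1d1s0 /export/ws-ufs
    zpool create wspool c1d1s1
    zfs create wspool/ws-zfs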

... Then I got to work with some simple Solaris nightly-build benchmarking.

The tests are simple:
  • Initial workspace bringover time
  • Several trials of full nightly build time
  • A "parallel" bringover, in which two workspaces were brought over at the same time
  • A "parallel" build, in which two workspace were built at the same time.
Note that I only tested one kind of filesystem at a time, and that the system configuration was unchanged during the tests.  The system was essentially idle otherwise.  Overall, I'm assuming that the impact of all other system elements is identical.
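Each test is just the obvious command wrapped in time; roughly like this (from memory of the TeamWare syntax), where the parent workspace path and the nightly environment file are placeholders rather than the exact ones I used:

    # pull a fresh workspace from the parent gate, then run a full nightly build
    time bringover -p /ws/onnv-gate -w /export/ws-ufs/onnv usr/src
    time nightly ./onnv.env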

On to the numbers!

First, the initial bringover:

UFS:
real    7m31.42s
user    0m13.58s
sys     0m19.53s

ZFS:
real    6m25.35s
user    0m13.56s
sys     0m18.63s

Well, the user times are essentially identical in both trials, which one would expect; I ran the same application twice, so the times should be about the same.  Whew! Sane results so far.  ZFS seems to enjoy about a 17% advantage in real time in this trial.  Note that this test is dominated by filesystem performance.

Now, the full nightly build trials.  I did three; the first one does a little less work in that the workspace is not populated with binaries to delete (make clobber).  These are just the real elapsed-time for each build, in HH:MM:SS:

UFS:
  1. real    1:39:08
  2. real    1:46:01
  3. real    1:47:56

ZFS:
  1. real    1:42:08
  2. real    1:43:47
  3. real    1:44:50

Interesting.  I'm not surprised UFS was quicker on the first trial and slower on the subsequent trials.  ZFS seems to be slightly faster overall, though the difference is just 1-2%.  Clearly, this experiment suggests that ZFS and UFS performance are essentially identical when it comes to building Solaris.  Note that this test is dominated by CPU performance.

Before the recent wad of ZFS fixes, I had begun to suspect that ZFS was too aggressive with internal locking and that multiple threads would become synchronized on a ZFS pool.  In fact, this would tend to explain the 40% penalty for using ZFS that I had seen on earlier Solaris bits.  So the next tests attempt to do two things in parallel; I launched the tests with a script that did something like:

start test 1 &
sleep 10
start test 2 &
wait

First parallel test - two bringovers in parallel:


UFS:
real    13m57.95s
user    0m27.83s
sys     0m41.19s

ZFS:
real    8m21.28s
user    0m27.30s
sys     0m36.77s

More sane numbers!  User times are the same and twice what a single bringover took.  UFS was almost but not quite twice the single-bringover real-time.  ZFS does very well in this test, around 65% faster.  Again, this test is dominated by filesystem performance.

OK, let's do two builds in parallel:

UFS:
real    8:09:42

ZFS:
real    3:34:15

The ZFS trial looks totally sane; 3h 34m is just about exactly twice the 1h 47m times we saw in the single-build trials.  The UFS trial took over 8 hours, which isn't easy to explain.  I did not monitor the system page/swap statistics during either trial, but I have a hard time believing ZFS wouldn't equally suffer page/swap penalties.   If I wasn't busy enough with real work, I'd try this again, but I don't believe I did anything blatantly wrong in this test case.

Whew.  This is strange - but the ZFS numbers remain totally sane.
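If I do rerun the parallel UFS build, keeping an eye on paging statistics during the trial would be enough to confirm or rule out memory pressure; something as simple as the following, watching the scan rate (sr) column, would do it:

    # sample memory and paging activity every 10 seconds during a trial
    vmstat 10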

Let's try one more round of tests, this time turning compression=on in the ZFS filesystem.  Since I didn't do UFS trials in this round, I'll just compare uncompressed to compressed ZFS here:

ZFS (uncompressed):

Bringover:
real    6m25.35s
user    0m13.56s
sys     0m18.63s

Single build times:
real    1:42:08
real    1:43:47
real    1:44:50

Parallel bringover (2 ws):
real    8m21.28s
user    0m27.30s
sys     0m36.77s

Parallel build time (2 jobs):
real    3:34:15

ZFS (compressed):

Bringover:
real    6m32.13s
user    0m13.56s
sys     0m19.99s

Single build:
real    1:43:14

Parallel bringover (2 ws):
real    9m48.18s
user    0m27.55s
sys     0m41.02s

Parallel build time (2 jobs):
real    3:30:35

From this, I suspect that ZFS compression exacts a slight penalty on write performance, but may impart a slight improvement in read/build performance.  In any case, I noted a compression ratio of over 1.7x after the parallel build time tests, so it's pretty cool that I'm getting more data on the disk and not paying a performance penalty at all.
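For reference, turning compression on and checking the resulting ratio is one command each; assuming the same illustrative pool and filesystem names as in the setup sketch earlier:

    # enable compression on the build filesystem, then see how well it compressed
    zfs set compression=on wspool/ws-zfs
    zfs get compressratio wspool/ws-zfs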

As soon as build 36 makes it out into Solaris Express, I'd love to get some comparable results from the community.

Tuesday Jun 14, 2005

Configuring Solaris ACPI at boot-time

As part of the new Solaris ACPI subsystem that integrated with Newboot, I've added a new bit to the "acpi-user-options" boot option.

Historically, acpi-user-options=0x2 has been the only publicly documented option, and is used to disable Solaris use of ACPI for CPU enumeration and interrupt routing. Generally speaking, the pattern has historically been to set acpi-user-options=0x2 if there's any problem at all, just to see if the system works better. Changes made in Solaris 10 have made ACPI use in Solaris much more robust, so disabling use of ACPI should not be required as frequently as in previous releases.

Beginning with Newboot, integrated into Solaris source in April (2005), acpi-user-options has changed in a couple of ways:

  • The previous ACPI subsystem did not put the system into “ACPI” mode, but left the system in “Legacy” mode, where the system BIOS retains control of the system. The new Solaris ACPI subsystem based on ACPI CA now places the system in ACPI mode by default.

  • acpi-user-options=0x8 causes the new Solaris ACPI subsystem to leave the system in Legacy mode. This is the first option one should try if ACPI-related issues are suspected.

  • acpi-user-options=0x4 is present in Solaris 10, and causes both the previous Solaris ACPI subsystem and the new subsystem to partially disable use of ACPI – but Hyper-Threaded CPUs are still enumerated using ACPI tables. This is the second option one should use if ACPI-related issues are suspected.

  • acpi-user-options=0x2 is present in Solaris 10, and causes both the previous Solaris ACPI subsystem and the new subsystem to disable the use of ACPI.
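For completeness, here's roughly how the option gets set; persistently with eeprom, or for a single boot by adding -B to the kernel line in the GRUB menu entry (the 0x8 value here is just the Legacy-mode example from above):

    # persistent: stored in the boot environment (bootenv.rc)
    eeprom acpi-user-options=0x8

    # one-time: edit the kernel line in the GRUB menu entry
    kernel /platform/i86pc/multiboot -B acpi-user-options=0x8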

Generally speaking, the new Solaris ACPI subsystem seems to do very well by default. I'll blog separately about some issues I've diagnosed that appeared to be ACPI-related but turned out to be due to something else (BIOS issues, actually).



Suppose I ought to introduce myself...

I'm Dana Myers, and I'm in the Solaris engineering organization, where I spend most of my time leading the Solaris ACPI engineering effort. In my nearly 12 years at Sun, I've been a Solaris x86 engineer, a Consumer/Embedded engineer, a Java Wireless BizDev technologist, and once again a Solaris engineer. While my focus is on providing a high-quality ACPI subsystem based on Intel's excellent ACPI CA interpreter, you might find me tinkering with pretty much any system-board-level thing. I still have a soft spot for network device drivers - perhaps some of you are still using the PCnet-PCI driver in Solaris? I was pleasantly surprised to find that a fair bit of the code I wrote at the end of 1994 is still visible in that driver.

Coming back to Solaris engineering after a long absence, at a time with so much new energy and new development as we dramatically expand x86 and x64 support, is such a rush! Just so I don't start thinking it's as good as it gets, I was fortunate to be able to join the Newboot development team - Newboot was my first customer for the new Solaris ACPI subsystem. My ACPI-team colleague David Chieu quickly developed the ISA-device enumeration code on top of the new interpreter, and we're pleased to have filled in a key piece of Newboot.

I'll be blogging a bit about the relatively few, relatively minor issues I've encountered since integrating ACPI CA into Solaris. In general, ACPI CA is working extremely well - the issues we've seen are generally related to how I've changed the way Solaris interacts with the system. The biggest change is that we run the system in ACPI mode now, not legacy mode - but I'll save that for another blog. Cheers!

Tuesday May 10, 2005

Dtrace: kool-aid worth drinking

So, a few days ago I installed an experimental Solaris kernel build, and my mouse stopped working. In the old days, before dtrace, this would have meant digging around in kernel source, building experimental modules and running the kernel debugger. This time, though, I wrote a few lines of "D" code, kicked off dtrace, and a few minutes later I'd located exactly where the error was occurring. It took just another couple of minutes to check whether the source file had been recently modified - and I found the bug had been fixed just the day before; I only needed to grab the updated source. Start to finish, it took under 30 minutes to resolve this issue. dtrace is an *amazing* tool.
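I won't reproduce the exact D script here, but the flavor was about like this; the module name below is just a stand-in for whatever driver you happen to suspect:

    # print any non-zero return value from every function in a suspect driver module
    dtrace -n 'fbt:mouse8042::return /arg1 != 0/ { printf("%s returned %d", probefunc, (int)arg1); }'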