Entering the Blogosphere: An Introductory Blog

OK, so I've been convinced by numerous people that I should start a blog. Today is the first, and I hope to push out many of the performance tools and tips that we often destine for papers through this vehicle. I'll also try to capture my thoughts (and opinions) on some technical topics here; the first will likely be on IP storage.

I work within the performance engineering group (officially known as "Performance and Availability Engineering"), and have been working on various aspects of system performance. I moved to Menlo Park in 1998, after working for Sun engineering remotely from Australia as part of a high-end systems group with Brian Wong and Adrian Cockcroft. I've spent most of my time working on operating system performance and workload management -- and have enjoyed studies in the areas of virtual memory and file system performance.

Jim Mauro and I published Solaris Internals in 2000, and we are working aggressively on a new edition for Solaris 10 -- leveraging DTrace for performance observability will be one of its main focuses. The new edition is targeted for this summer. Most recently, I've been working with a team on Solaris improvements for high-end systems, looking at OS implications of CMT processors, and, in my spare time, looking at ways to characterize and improve file system performance.

Applied Performance Engineering

So what do we do? Our team focuses on characterization and optimization for performance and RAS. The group's charter encompasses developing workloads for performance measurement, characterizing performance, and identifying opportunities for improvement. We cover Solaris, Opteron/SPARC hardware, and key ISV applications (like Oracle, SAP, and BEA) as part of product development.

We work closely with customers and engineering, and create a link between the two. It's extremely important that we design systems to match how they will actually be used. We partner with sister performance groups, such as the Strategic Application Engineering group (who are responsible for the majority of the benchmark publications) and the Performance Pit, who run extensive performance testing as part of the performance lifecycle.

Workload Characterization

Capturing data about systems deployment is key to knowing what to optimize. We use a variety of methods to do this -- a toolset known as WCP (Workload Characterization for Performance; Sun-internal link) collects the most relevant data from real customer applications into a database, allowing detailed mining of many aspects of system performance. This data comes from applications which are tested in the Sun benchmark centers, and from a large sample of live customer applications.

In addition, extensive trace data is collected from the key benchmark workloads, like SPECjAppServer and SPECweb, to run simulations.

Workload Creation

Once an application has been characterized, it can be decomposed into a representative benchmark. Some of the more recent public workloads we've been responsible for are SPECjAppServer2004 and SPECweb2005.

Other than the formal workloads, there are a variety of in-house workloads that fit closely with customer applications: for example, we have a large mockup datacenter application (OLTP-NW) running on an F25K to simulate how our Starcats are used. Others include XMLmark for XML parsing, FileBench -- which emulates a large list of applications for file system measurement -- and libMicro, which characterizes an operating system at the system call level.
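
To give a flavor of the system call level approach, here is a minimal sketch of the kind of timing loop such a microbenchmark is built around -- illustrative only, in the spirit of libMicro but not its actual code:

    /*
     * A minimal sketch of a system-call microbenchmark --
     * illustrative only, not libMicro's source.  It times a
     * tight loop around getppid(2) using the Solaris
     * high-resolution timer and reports the mean cost per call.
     */
    #include <stdio.h>
    #include <sys/time.h>       /* gethrtime() */
    #include <unistd.h>

    #define ITERATIONS 1000000

    int
    main(void)
    {
        hrtime_t start, end;
        int i;

        start = gethrtime();
        for (i = 0; i < ITERATIONS; i++)
            (void) getppid();   /* a cheap, uncached system call */
        end = gethrtime();

        (void) printf("getppid: %.1f ns/call\n",
            (double)(end - start) / ITERATIONS);
        return (0);
    }

The real tool does much more -- multiple processes and threads, warm-up passes, and statistical treatment of the samples -- but the core idea really is this simple.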

Performance Optimization and Prototyping

Performance work starts early in the development process of our products, to identify performance opportunities early enough to make changes. By improving the operating system and applications, we can ensure that applications run at the full potential of the platform. Areas of expertise are networking (network stack and drivers), compilers, JVM and application server, filesystems, NFS, and operating system internals.

As an example of some of the customer-connected work, we looked closely at databases on file systems. It was clear that for benchmarking purposes, databases run much faster on raw disks than on file systems. However, raw disks are rarely used in production environments, because they make for complex administration. Prior to analysis, the gap had been put down to CPU overhead in the file system. After deeper analysis with the Oracle database, we discovered that it was mostly a result of esoteric interactions with the database's use of synchronous operations (O_DSYNC), used to guarantee writes to stable storage for data integrity. Once resolved, file system performance for databases went from being 5x slower to within a few percent of raw disks -- in the noise.
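
For the curious, the synchronous write path at the center of this looks roughly like the following -- a minimal sketch of how a database might open and write its log, not Oracle's actual code (the filename is hypothetical):

    /*
     * A minimal sketch of a database-style synchronous write --
     * illustrative only, not Oracle's code.  With O_DSYNC, each
     * write(2) does not return until the data is on stable
     * storage, so any extra file system work on this path shows
     * up directly as commit latency.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[512] = { 0 };  /* one log block */
        int fd;

        fd = open("redo.log", O_WRONLY | O_CREAT | O_DSYNC, 0600);
        if (fd == -1) {
            perror("open");
            return (1);
        }

        /* Guaranteed durable when this returns. */
        if (write(fd, buf, sizeof (buf)) != sizeof (buf)) {
            perror("write");
            return (1);
        }

        (void) close(fd);
        return (0);
    }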

Of course, it goes without saying that we love DTrace! We use it extensively -- prior to the existence of DTrace, we had to write custom kernels or tools to instrument the layers we were interested in. We can now ask arbitrary questions at any time, and zero in on exactly where to optimize. It's hard to describe with words just how much DTrace helps us... We are in the process of making all of our DTrace scripts and tools externally available -- I'll post more about this in a followup.

Knowledge Management

It's important to leverage performance knowledge, so that we can all understand how to better configure and tune our systems. We distill actionable performance information into forms that can be used by Sun field engineers and by our customers. We make this information available through various mechanisms, including books, TOIs at conferences, and papers (and now blogs :-)). Allan Packer has a wealth of knowledge on database tuning and capacity planning -- he captured a lot of it in his book, Configuring and Tuning Databases. In the future, we will also be communicating more of this timely information through this medium.

Conferences and Introductions

There are two conferences coming up: SuPerG and USENIX. Some of the PAE'ers are presenting there: Phil Harman is doing his famous DTrace talk and live demo, Bob Sneed is talking about Starcat performance, Biswadeep Nag will talk about optimizing Oracle RAC, Roch Bourbonnais is talking about NFS optimization, Richard Elling is talking about benchmarking for availability, and I will be presenting on IP storage. Jim and I are also doing a two-day Solaris 10 Performance and DTrace tutorial at USENIX this month -- you can access the updated slides at solarisinternals.com.

Stay tuned for followups; we plan on pushing out additional material in the future.

-Richard.

Comments:

Welcome! That was a hell of an intro. Excellent.

Posted by Jim Grisanzio on April 06, 2005 at 08:31 AM PDT #

Welcome to the blogosphere Richard, I'm looking forward to reading your posts (and of course to the update of Solaris Internals).

Posted by fintanr on April 06, 2005 at 04:11 PM PDT #

Richard, looking at the slides of your FileBench program (looking forward to the code), you mention that you have a workload, varmail, which is similar to Postmark. My concern with Postmark is that there is no fsync call in the entire program, and MTAs which write to a mail spool are fsync-intensive. You might want to look at Bruce Guenter's fstress, which models mail being written to a Maildir and being POP'ed at the same time: http://untroubled.org/benchmarking/2004-04/

Posted by Yusuf Goolamabbas on April 06, 2005 at 11:11 PM PDT #

Looking forward to hearing your presentation at SuperG!

Posted by Kenneth on April 11, 2005 at 03:22 AM PDT #

Hi, just stumbled across this blog -- looks fascinating. Q: what was the outcome of your discoveries with Oracle and O_DSYNC? An Oracle patch / Solaris patch? Many thanks, Darren Allen

Posted by Darren Allen on May 17, 2006 at 11:12 PM PDT #
