Tuesday Jun 30, 2009

How to add DTrace probes to your application

MC Brown and I co-presented a session at CommunityOne West on how to add probes to applications, using MySQL and PostgreSQL as case studies. In the presentation, I used a very simple example to demonstrate how easy it is to add probes. If you want to try it out yourself, here is the code. Extract the files, run gmake (or gnumake on OS X) to build, then run the executable in one terminal and the DTrace script in another to see the output from the probes. For more complicated examples, check out the MySQL or PostgreSQL source code.
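
If you'd like a feel for what's involved before downloading, here is a minimal sketch. The provider name (myapp) and probe names (request-start, request-done) are made up for illustration; they are not the names used in the downloadable example.

/* myapp_probes.d -- the provider definition */
provider myapp {
    probe request__start(char *);
    probe request__done(char *);
};

/*
 * On the application (C) side, generate a header with
 * "dtrace -h -s myapp_probes.d", include it, and fire the probes via the
 * generated macros, e.g. MYAPP_REQUEST_START(name). On Solaris, also run
 * "dtrace -G" over the compiled objects and link the result into the binary.
 */

A matching script can then time each request:

/* watch_requests.d */
myapp*:::request-start
{
    self->ts = timestamp;
}

myapp*:::request-done
/self->ts/
{
    printf("%s took %d ns\n", copyinstr(arg0), timestamp - self->ts);
    self->ts = 0;
}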

Monday Jun 29, 2009

More on the demos from PgCon 2009

At PgCon 2009 in Ottawa, I did a lightning talk on DTrace probes in PostgreSQL 8.4. I wanted to show several demos but ran out of time. If you want to try them out, use the scripts below.

Here is the script, query_time.d, used in slide 14. It identifies slow queries by printing the average execution time of each query.


#!/usr/sbin/dtrace -s
#pragma D option quiet

dtrace:::BEGIN
{
    printf("Tracing... Hit Ctrl-C to end.\n");
}

postgresql*:::query-start
{
    self->query = copyinstr(arg0);
    self->ts = timestamp;
}

postgresql*:::query-done
/self->ts/
{
    @query_time[self->query] = avg(timestamp - self->ts);
    self->query = 0;
    self->ts = 0;
}

dtrace:::END
{
    printf("%10s %s\n", "TIME (ns)", "QUERY");
    printf("==============================================================\n");
    printa("%@10d %s\n", @query_time);
}
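
Note that avg() folds repeated executions of the same statement into a single number. If you would rather catch the single slowest execution of each query, swapping avg() for max() in the query-done clause is all it takes (a sketch; the rest of the script stays the same):

postgresql*:::query-done
/self->ts/
{
    @query_time[self->query] = max(timestamp - self->ts);
    self->query = 0;
    self->ts = 0;
}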

Here is the second script, sort.d, used in slide 16. This script shows the type of sort, whether the sort was done in memory or on disk, and the time taken to perform it.


#!/usr/sbin/dtrace -qs

dtrace:::BEGIN
{
    sorttype[0] = "TUPLE";
    sorttype[1] = "INDEX";
    sorttype[2] = "DATUM";

    sortmethod[0] = "INTERNAL";
    sortmethod[1] = "EXTERNAL";
}

postgresql*:::sort-start
{
    self->ts = timestamp;
    printf("\nBegin %s sort, workmem = %d KB\n", sorttype[arg0], arg3);
}

postgresql*:::sort-done
/self->ts && arg0 == 0/
{
    /* Internal sort */
    printf("%s sort ended, space used = %d KB\n", sortmethod[arg0], arg1);
    printf("Sort time = %d ms\n\n", (timestamp - self->ts) / 1000000);
}

postgresql*:::sort-done
/self->ts && arg0 == 1/
{
    /* External sort */
    printf("%s sort ended, space used = %d disk blocks\n", sortmethod[arg0], arg1);
    printf("Sort time = %d ms\n\n", (timestamp - self->ts) / 1000000);
}
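
On a busy server, a line per sort gets noisy. The same probes feed an aggregation nicely; here is a sketch of an alternative sort-done clause that builds a latency distribution per sort method:

postgresql*:::sort-done
/self->ts/
{
    @sort_ns[arg0 ? "EXTERNAL" : "INTERNAL"] = quantize(timestamp - self->ts);
    self->ts = 0;
}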

Below is the last script, query_trace.d, used in slide 23. This script provides useful data that will allow you to dig down deeper. In this example, the buffer reads to table 16397 (this is the OID) are huge, a red flag that the table may need an index. To find the table name for an OID, run "SELECT relname FROM pg_class WHERE relfilenode=16397" in psql.


#!/usr/sbin/dtrace -qs

postgresql*:::query-start
{
    self->ts = timestamp;
    self->pid = pid;
}

postgresql*:::buffer-read-start
/self->pid/
{
    self->readts = timestamp;
}

postgresql*:::buffer-read-done
/self->pid && arg7/
{
    /* Buffer cache hit */
    @read_count[arg2, arg3, arg4] = count();
    @read_hit_total["Total buffer cache hits      : "] = count();
    @read_hit_time["Average read time from cache : "] = avg(timestamp - self->readts);
    self->readts = 0;
}

postgresql*:::buffer-read-done
/self->pid && !arg7/
{
    /* Buffer cache miss */
    @read_count[arg2, arg3, arg4] = count();
    @read_miss_total["Total buffer cache misses    : "] = count();
    @read_miss_time["Average read time from disk  : "] = avg(timestamp - self->readts);
    self->readts = 0;
}

postgresql*:::buffer-flush-start
/self->pid/
{
    self->writets = timestamp;
}

postgresql*:::buffer-flush-done
/self->pid/
{
    @write_count[arg2, arg3, arg4] = count();
    @write_time["Average write time to disk   : "] = avg(timestamp - self->writets);
    self->writets = 0;
}

postgresql*:::query-done
/self->ts && self->pid == pid/
{
    printf("\n============ Buffer Read Counts ============\n");
    printf("%10s %10s %10s %10s\n", "Tablespace", "Database", "Table", "Count");
    printa("%10d %10d %10d %@10d\n", @read_count);

    printf("\n======= Buffer Write Request Counts ========\n");
    printf("%10s %10s %10s %10s\n", "Tablespace", "Database", "Table", "Count");
    printa("%10d %10d %10d %@10d\n", @write_count);

    printf("\n========== Additional Statistics ===========\n");

    printf("Backend PID    : %d\n", pid);
    printf("SQL Statement  : %s\n", copyinstr(arg0));
    printf("Execution time : %d.%03d sec (%d ns)\n", (timestamp - self->ts) / 1000000000, ((timestamp - self->ts) / 1000000) % 1000, timestamp - self->ts);
    printa("\n%19s %@8d\n", @read_hit_total);
    printa("%19s %@8d\n", @read_miss_total);
    printa("%19s %@8d (ns)\n", @read_hit_time);
    printa("%19s %@8d (ns)\n", @read_miss_time);
    printa("%19s %@8d (ns)\n", @write_time);
    printf("\n\n");

    trunc(@read_count);
    trunc(@write_count);
    trunc(@read_hit_total);
    trunc(@read_miss_total);
    trunc(@read_hit_time);
    trunc(@read_miss_time);
    trunc(@write_time);

    self->ts = 0;
    self->pid = 0;
}
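
Once the counts point to one suspect table, you can narrow the trace to just that table. Using the same argument positions as the script above (arg4 is the table and arg7 the cache-hit flag), here is a sketch with the example OID hard-coded:

postgresql*:::buffer-read-done
/arg4 == 16397/
{
    @reads[arg7 ? "cache hit" : "cache miss"] = count();
}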

To see more sample scripts as well as a GUI tool, check out the PostgreSQL DTrace Toolkit.

Sunday Jun 28, 2009

PostgreSQL DTrace Toolkit

As many of you know, PostgreSQL 8.4 has quite a few more DTrace probes; see my previous blog post for details. To use the probes, you need to write DTrace scripts, which is quite easy to do. Still, to make the probes easier to use (especially for those who are new to DTrace), I have written some scripts that you can run straight from the command line. In addition, I've integrated some of those scripts with Chime to make it even easier to visualize the data. Check out the toolkit on PgFoundry.

Sunday Nov 30, 2008

Time for a Home NAS!

Over the years I have accumulated quite a bit of digital content (music, pictures, videos, archived tax returns, etc.) and have used external drives, CDs/DVDs, and even USB sticks to back up the data. After a while, it became increasingly inconvenient to locate the content, let alone share it among multiple computers, so I decided to shop for a home NAS. But which one? There are quite a few on the market, with prices ranging from $200 to $2000+ and varying features. I liked Netgear's ReadyNAS, but it was quite pricey: with 2TB of storage, it was around $1,500 at the time. That cost more than a PC with the same amount of storage, so I thought to myself, "Why not use a PC with ZFS?"

After some Googling, I found that a number of people had already done exactly this, and Simon's blog was particularly helpful, especially for the system config that worked well for him. So I decided to build a similar system. I ordered the components from Newegg.com, and the total price was less than $900 for a pretty powerful system (2.6 GHz AMD Athlon, 2GB ECC RAM, 4x500GB WD drives). It took me a couple of days to put the system together, install OpenSolaris 2008.05, and set up the ZFS pools and CIFS server/client, and voila, I had a system that can serve as a NAS as well as a general-purpose server. Now I have all my data in one place, can get to it from all the computers on the network, and can rest assured that if a drive fails, I won't lose data, thanks to the RAID support in ZFS.
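
For the curious, the ZFS side of that setup boils down to a handful of commands. A sketch with made-up pool and device names (yours will differ):

# create a single-parity RAID-Z pool across the four drives
zpool create tank raidz c1d0 c2d0 c3d0 c4d0

# carve out a filesystem for the shared data and export it over CIFS
zfs create tank/media
zfs set sharesmb=on tank/media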

Monday Nov 10, 2008

Not just another NAS appliance

Sun just announced a new line of storage appliances that will forever change the NAS appliance market. By using OpenSolaris and technologies such as ZFS, DTrace, FMA, and SMF on standards-based storage servers, Sun is able to bring these products to market at a much lower price point than its competitors while providing differentiated features such as:

  1. Analytics - With DTrace as the underlying technology, Analytics lets users ask questions about a storage system in production, in real time, from a GUI.
  2. Hybrid Storage Pool - Leveraging flash technology, the ZFS Hybrid Storage Pool transparently manages DRAM, flash, and low-cost hard drives, providing a high-performance, cost-effective storage solution.
  3. Remote Replication - Data can easily be replicated with minimal configuration from the browser, using powerful ZFS features.
  4. Supported Protocols - NFS, CIFS, iSCSI, HTTP, WebDAV, FTP

To get more technical information about these exciting new NAS appliances, check out the blogs and demos from the engineering team.

BTW, you can also try out the products for 60 days for FREE.

Monday Jul 21, 2008

BIG News for the PostgreSQL team at Sun

If you haven't heard already, Peter Eisentraut is joining Sun to work on PostgreSQL. This is very exciting, and I'm looking forward to working with Peter on some interesting projects.

Just as Peter decided to join Sun, Josh Berkus decided to leave us. Josh has made many contributions, and he will be missed. I have learned a lot from him over the past couple of years. Josh, good luck with your new endeavors.

BTW, I'm at OSCON this week. I arrived here Saturday night so I could attend PDXPug Day. I'm glad I decided to attend this mini-conference because the talks were excellent and I also got to meet some cool PostgreSQL people.

Monday Apr 28, 2008

Austin's First PostgreSQL User Group Meeting

A few of us are getting together to start a PostgreSQL User Group in Austin. If you're interested in PostgreSQL and live in the area, come join us!

Our kickoff meeting will be on Tuesday, May 6th, from 6-8pm. Please RSVP to austinpug@decibel.org (and also subscribe to austinpug@postgresql.org) if you plan to attend, so we can pre-register all visitors to save time and know how many pizzas to get! BTW, the pizza is FREE.

I'm pleased to be able to offer Sun's facility for this meeting. Here's the address and a link to a Google map.

Sun Microsystems
Building 8 - Longhorn Conference Room
5300 Riata Park Ct
Austin, TX
Map

Thursday Apr 03, 2008

DTrace probes in PostgreSQL now work on Mac OS X Leopard

The issue with PostgreSQL's DTrace probes not working on Mac OS X Leopard, as reported on the mailing list, has been fixed and checked into the 8.4 development tree. The problem had to do with Leopard's DTrace implementation not supporting the -G flag.

If you're curious about the gory details, check out the proposal, patch, and code commit.

With the new implementation, the steps for adding new probes are slightly different from before, but the provider and probe names remain the same. For details on how to use and add new probes, refer to the online doc.

Many thanks to Peter Eisentraut, Tom Lane, and Alvaro Herrera for their valuable feedback and assistance!

Friday Aug 25, 2006

Taking the plunge

Hello World! Okay, I'm a bit late to the blogging party, but as they say, better late than never.

Just a quick introduction of myself: I work in Market Development Engineering (MDE) at Sun. Over the past year or so, I have led an initiative focused on working with open source partners and communities to ensure that key open source apps run well on Solaris 10 and beyond. It has been an awesome experience, and along the way I have gotten to work on some interesting projects, PostgreSQL on Solaris among them, which I'm looking forward to sharing with the world!
