Sunday Apr 05, 2009

Effects of Flash/SSD on PostgreSQL - PostgreSQL East 2009

I presented a talk at PostgreSQL East 2009 at Drexel University this morning. The topic was "Effects of Flash/SSDs on PostgreSQL". Here are the slides from the presentation.

If you have questions please leave comments.

Wednesday Mar 11, 2009

Thundering Herd of Elephants for Scalability

During last year's PgCon 2008 I presented on "Problems with PostgreSQL on Multi-core Systems". On slide 14 I talked about the results with IGEN at various think times and identified the problem of how difficult it is to scale with an increasing number of users. The chart showed how tests with a 200ms think time saturate at about 1000 active users, after which throughput starts to fall. On slides 16 & 17 I identified ProcArrayLock as the culprit of why scalability tanks with an increasing number of users.

Today I was again intrigued by the same problem while trying out PostgreSQL 8.4 builds. I once again hit the bottleneck at about 1000 users and was frustrated that PostgreSQL cannot scale even with only one active socket (64 strands) of a Sun SPARC Enterprise T5240, which is a two-socket UltraSPARC T2 Plus system.

Again I was digging through the source code of the PostgreSQL 8.4 snapshot to see what could be done. While reviewing lwlock.c I thought of a change, quickly changed a couple of lines in the code, recompiled the database binaries, and re-ran the test. The results are as shown below (8.4 vs 8.4fix).

The setup was quite standard (the way I usually test). The database and log files are on RAM (/tmp). The think time was about 200ms for each user between transactions. But the results were quite pleasing: on the same system setup, TPM throughput went up about 1.89x, and response time now shows a gradual increase rather than a knee-jerk response.

So what was the change in the source code? Well, before I explain what I did, let me explain what PostgreSQL is trying to do in that code path.

After a process acquires a lock and does its critical work, when it is ready to release the lock it picks the next postgres process that it needs to wake up, so that processes trying to take exclusive locks are not starved, and also, I think, to avoid a thundering herd problem. That is a good thing to do on single-core systems, where CPU resources are scarce and using them effectively results in better performance. However, on systems with many CPU resources (say the 256 threads of a Sun SPARC Enterprise T5440) this becomes an artificial bottleneck, since it is the application rather than the system that determines which process should wake up next and try to get the lock.

The change I made was to discard the selective waiter wake-up and just wake up all waiters waiting for that lock, letting the processes, the OS dispatcher, and the CPU resources work their magic in their own way (and that worked, worked very well).


            /* In lwlock.c (LWLockRelease): instead of waking only the leading
               run of shared waiters, walk the entire wait queue and wake everyone. */
            // if (!proc->lwExclusive)
            if (1)
            {
                while (proc->lwWaitLink != NULL &&
                       1)    // was: !proc->lwWaitLink->lwExclusive
                    proc = proc->lwWaitLink;
            }


It's amazing that last year I tried so many different approaches to this problem, and such a simple approach proved to be more effective.

I have put a proposal to the PostgreSQL performance mailing list that a postgresql.conf tunable be defined for PostgreSQL 8.4, so people can tweak their instances to use the above method of waking all waiters without impacting the existing behavior for most existing users.

In the meantime, if you do compile your own PostgreSQL binaries, try the workaround if you are using PostgreSQL on any system with 16 or more cores/threads, and provide feedback about the impact on your workloads.
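
For those building from source, here is a minimal sketch of trying the change; the snapshot directory name and install prefix below are assumptions, and the edited file is src/backend/storage/lmgr/lwlock.c:

cd postgresql-8.4-snapshot
# edit src/backend/storage/lmgr/lwlock.c as shown above
./configure --prefix=$HOME/pg84fix
gmake
gmake install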


Friday Mar 06, 2009

Solaris 10 10/08 (Update 6) Update Manager

Ever since I upgraded my workstation, a Sun Blade 2000, to Solaris 10 10/08 (Update 6), I had noticed that my patch update connection was not working. Initially it would give me some weird Cocoa error, or sometimes an empty dialog box with just OK in it.

Today I was finally determined to fix it so I can apply the latest patches automatically instead of downloading them manually. Anyway, the trick is to read these two Sun Forum entries:

  1. Entry regarding the Solaris 10 patch that was released in January 2009
  2. Entry regarding the correct Java version that is required by the underlying smpatch.

In short, in my case I had to install patch 121118-15 (for SPARC) and also remove references to JDK 1.6_10 in my /usr/jdk so that I could make my update manager work again on my desktop. Finally my update manager is now working on Solaris 10 10/08 to find and install recent patches on the system.

Thursday Mar 05, 2009

PostgreSQL Conference East 2009 & upcoming 8.4 beta

Sometimes you need a kickstart to get moving from inertia. Considering that I haven't blogged in 2009 at all, I welcome the PostgreSQL Conference East 2009 email reminders to get me going again. They will motivate me to do some targeted work with PostgreSQL for the community. I will be presenting on SSDs and PostgreSQL during the event. Unlike Robert Treat, who can pretty much prepare all his slides during the lunch break of the conference, some of us require more time. As a reminder, PostgreSQL East 2009 registration has already started.

I am also looking forward to the PostgreSQL 8.4 beta and to test-driving it on a 128-thread Sun SPARC Enterprise T5240.



Tuesday Dec 16, 2008

PostgreSQL 8.3 Appliance based on OpenSolaris using VirtualBox

Based on Alex Eremin's blog entry on a minimal script and some of my own hacking, I have a script that should let you easily create a basic PostgreSQL 8.3 appliance in VirtualBox, using the OpenSolaris 2008.11 kernel, with an image that is not bloated (no X, Gnome, etc.).

Here is how to try it out:

  • For the time being this requires the OpenSolaris 2008.11 CD image (650+ MB) just to execute my script create_pg_appliance.sh, which is about 6KB. Bear with me till I solve this problem and play along :-)
  • Download Virtualbox and install it
  • Create a New Virtual Box VM with following parameters in the Wizard Screen
    • Call it PostgreSQL 8.3 Appliance
    • Select OS Type "OpenSolaris" 
    • Base Memory  768MB or increase it to 1GB if you can spare your RAM for it
    • Create a New Dynamically Expanding Image with exactly 16.00 GB (any other size may not work)
  • Once the VM is created, immediately  click the blue Network link and modify it to select a "Host Based Network"  (in the setup make sure it is connected to the active host interface - wired or wireless depending on the host system)
  • Also Click the CD-ROM image and point it to the osol-200811.iso image that you downloaded earlier
  • Boot up the VM and select the first LiveCD option in the Grub Menu options
  • Select through the defaults to get the full Gnome Desktop
  • Open Firefox in the VM and make  sure your VM  has internet access
  • Open a terminal window and execute the following commands in sequence
    • wget "http://blogs.sun.com/jkshah/resource/create_pg_appliance.sh"
    • pfexec sh -x create_pg_appliance.sh
    • pfexec reboot
  • At this point stop the VM, disconnect the CD image from the VM, and boot again, selecting the grub entry
    • Boot from Hard disk
  • Once the VM boots from the hard disk, select the  grub menu item
    • PostgreSQL 8.3 Appliance based on OpenSolaris 2008.11
  • The Operating system boots up to give a login prompt. Login as root/opensolaris
  • The PostgreSQL database server is already running with PGDATA in /var/postgres/8.3/data
  • Modify postgresql.conf to add the listen_addresses = '*' parameter
  • Modify pg_hba.conf to allow your clients (a sketch of these steps follows the list)
  • Restart the postgresql server instance as follows:
    • svcadm restart postgresql_83:default_32bit
  • Now the PostgreSQL Appliance is ready to be connected from your clients.
  • Remember to take snapshots of rpool/VOSApp/var so you can always revert back to earlier copies if you ever need it.
[Image: GRUB menu]
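
As promised in the list above, here is a minimal sketch of those remote-access steps, run as root inside the appliance; the pg_hba.conf line and the 192.168.0.0/24 client subnet are assumptions that you should adjust for your network:

# echo "listen_addresses = '*'" >> /var/postgres/8.3/data/postgresql.conf
# echo "host all all 192.168.0.0/24 md5" >> /var/postgres/8.3/data/pg_hba.conf
# svcadm restart postgresql_83:default_32bit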

Note: In a trial installation on a MacBook Pro the script executed in  about 11 minutes which  includes the time it takes to download packages from pkg.opensolaris.org.  I thought that was fantastic.

Maybe I should also create a minimized LiveCD just to execute the script on Virtualbox.  If you have problems executing the script let me know.



Friday Dec 12, 2008

Call for Papers for CommunityOne (New York) on March 18 2009

This year CommunityOne is also being held in New York (March 18, 2009), in addition to the CommunityOne held in San Francisco (June 1, 2009). CommunityOne is a conference sponsored by Sun Microsystems focused on open source innovation and implementation. It brings together developers, technologists and students for technical education and exchange.

The Call for Papers is on and has been extended to December 19. Since quite a bit of the PostgreSQL crowd is on the east coast, maybe we should also put in some PostgreSQL proposals, at least for the New York event in March 2009. Any takers/suggestions?

Here is what comes to my mind instantaneously:


Others that I missed?


Wednesday Dec 10, 2008

OpenSolaris 2008.11 and PostgreSQL 8.3

The second supported version of OpenSolaris, called OpenSolaris 2008.11, has now officially launched. With the release, I guess PostgreSQL 8.3 is now "officially" supported on OpenSolaris too (though you could have used it earlier anyway with the development releases of OpenSolaris that followed the initial OpenSolaris 2008.05 release).

Anyway, let's quickly look at some tips and how-tos which I think could be useful to improve the experience of PostgreSQL 8.3 on OpenSolaris 2008.11. One way to take a quick test drive of OpenSolaris 2008.11 is to also try out the new VirtualBox 2.0.6 and install OpenSolaris 2008.11 in a VM. (Note: If you do plan to access the VM from other systems, then use Host Interface instead of NAT to make it easy to access from outside.)

PostgreSQL 8.3 is not installed by default when you install OpenSolaris 2008.11 from the LiveCD. This gives an opportunity for a tweak that I would highly recommend. By default OpenSolaris 2008.11 does not have an option for a separate ZFS dataset for /var (like Solaris 10 10/08 - U6 does). I would certainly like my PostgreSQL database to be on a separate dataset so that I can control PostgreSQL snapshots independently. Since I know the default PostgreSQL database path for PostgreSQL 8.3 on OpenSolaris is /var/postgres/8.3, I generally create a separate dataset as follows before I use pkg to install PostgreSQL 8.3:

# zfs create -o mountpoint=/var/postgres rpool/postgres

Now I am ready to install the various packages of PostgreSQL 8.3

# pkg install SUNWpostgr-83-server

# pkg install SUNWpostgr-83-client SUNWpostgr-jdbc SUNWpostgr-83-contrib

# pkg install SUNWpostgr-83-docs  SUNWpostgr-83-devel

# pkg install SUNWpostgr-83-tcl SUNWpostgr-83-pl

Now to install PGAdmin3 as follows:

# pkg install SUNWpgadmin3

Even pgbouncer is available in the repository

# pkg install SUNWpgbouncer-pg83

Now that all the binaries are installed, let's look at how to get started. Generally the best way to start and stop the PostgreSQL 8.3 server is via the svcadm command of OpenSolaris. The SMF manifest for PostgreSQL 8.3 is installed with SUNWpostgr-83-server; however, the import of the manifest does not happen until the next reboot. We can always work around that by doing a quick manual import as follows:

# svccfg import /var/svc/manifest/application/database/postgresql_83.xml

This creates two entries: one for a 32-bit server instance and one for a 64-bit server instance.

# svcs -a |grep postgres
disabled        1:14:44 svc:/application/database/postgresql_83:default_64bit
disabled        1:14:45 svc:/application/database/postgresql_83:default_32bit

Depending on your server and your choice you can start the corresponding instance; the 32-bit instance may be the easier one to select if there are doubts.

# svcadm enable postgresql_83:default_32bit

# svcs -a |grep postgres
disabled        1:14:44 svc:/application/database/postgresql_83:default_64bit
online          2:12:37 svc:/application/database/postgresql_83:default_32bit

# zfs list rpool/postgres
NAME             USED  AVAIL  REFER  MOUNTPOINT
rpool/postgres  30.4M  11.8G  30.4M  /var/postgres

The psql client is still not in the default path. The path /usr/postgres/8.3/bin should be in your search PATH (or, if you are using the 64-bit instance, add /usr/postgres/8.3/bin/64).
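
For example, one way to set this for the current shell session (a minimal sketch; add it to your shell profile to make it permanent, and use /usr/postgres/8.3/bin/64 instead for the 64-bit instance):

# PATH=/usr/postgres/8.3/bin:$PATH; export PATH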

# /usr/postgres/8.3/bin/psql -U postgres postgres
Welcome to psql 8.3.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

postgres=#

For people who are already using PostgreSQL 8.2 on OpenSolaris 2008.05, there is a way to get to PostgreSQL 8.3. First you have to update your OpenSolaris 2008.05 image to OpenSolaris 2008.11 using:

# pkg image-update

Then install the PostgreSQL 8.3 binaries as above and also install an additional package:

# pkg install SUNWpostgr-upgrade

If there are more questions, feel free to leave a comment.




Thursday Dec 04, 2008

PostgreSQL and Netbeans 6.5

With the release of JavaFX 1.0, a new version of Netbeans 6.5 is now also available. Let's quickly look at its current PostgreSQL Support.

The JDBC driver for PostgreSQL is integrated in Netbeans 6.5. It is  located at:

netbeans-6.5/ide10/modules/ext/postgresql-8.3-603.jdbc3.jar

As you can see, it is pretty much in sync with the latest (about one rev back, since the latest is the 604 build). Clicking the Services tab of Netbeans shows Databases, where a new entry can be created using "Direct URL Entry" with a URL in the following format:

jdbc:postgresql://hostname/databasename

Provide the user name and password for the connection and the schema name (hint: the public schema) in the Advanced tab, then press OK to create a new connection to the PostgreSQL database.
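
Before wiring up the connection in Netbeans, it can help to confirm that the PostgreSQL server accepts TCP connections from your client machine. A quick check with psql (hostname, username and databasename are placeholders here) could be:

# psql -h hostname -U username -d databasename -c "select version();"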

Once connected, clicking the expand sign (which looks like o-) lets you browse Tables/Views/Procedures, and Tables can be expanded further to see data, either using the menu options or using the SQL Editor.

With the PostgreSQL setup in place, it should now be easier to program with your favorite language, be it Java, JavaFX, Python, PHP, Ruby, etc., using a PostgreSQL database as the backend.

(Note to myself: try out Greg Smith's pgtune project, which uses Python, with Netbeans's new Python editor.)



Tuesday Nov 18, 2008

Quest for OpenSolaris based Appliance

Recently I burned a copy of OpenSolaris 2008.11 RC1 and used it with Songbird (via pkg install SUNWsongbird), the Fluendo MP3 decoder (which is free for OpenSolaris), and Flash, and soon I had a setup in my bedroom which I love to call the OpenSolaris Home Theatre Edition. I used it to listen to my song collection and watch online shows, controlled via Remote Desktop, and to me it was a serious contender to those media centers out there.

However, I realized that while I wanted to "pump it up" for my personal usage, I really wanted to "strip it down" for business usage. What I mean is that if you want someone to try out a piece of open source software, it is now easier to offer that software pre-installed on OpenSolaris in a VirtualBox image. However, I found it very hard to do in practice.

Say, for example, I want to get somebody to try out the latest version of MySQL on OpenSolaris. The easiest thing is to give them a VDI image of a preinstalled MySQL 5.1 RC running on OpenSolaris that they just double-click, and boom, a VirtualBox instance starts up with MySQL ready to deliver.

However, there is a problem with that theory. The VDI image of a fresh OpenSolaris install in a VirtualBox instance is about 2.5 - 3 GB. Of course adding MySQL on top of it won't drastically increase the size, but I still have a problem with the base size itself. The only way this can work is to use some sort of peer-to-peer file sharing technology for hosting, and hoping people will download a DVD-sized image without any problems is like showing up at 10:00am for a Thanksgiving deal at a store that opened at 6:00am with fewer than 20 SKUs available (tough luck!).

Anyway, I started dissecting how to reduce the size of the VDI image. There are a few tricks provided by VirtualBox itself to release zeroed-out sectors, using:

VBoxManage modifyvdi VDINAME compact

However, trying this out on a VDI holding the OpenSolaris 2008.11 RC1 candidate did not really help.

The next thing I tried was to list out all the packages that are installed using

pkg list

There are about 550 packages installed as reported by that list. I tried removing many of the non-essential desktop things (aha, who needs Xorg, Gnome, Firefox, Thunderbird) and reduced it to fewer than 400 packages. However, even with that, the VDI image is still not much smaller than 2.5GB. I tried the trick of cloning the disk to another VDI via ZFS replace, thinking it would get rid of all the space freed up by removing the packages, but the resulting new VDI image was still 2.5GB.

Looking at the zfs list output, I found that for my VirtualBox instance with 512MB of RAM I have three main ZFS file systems defined, which account for the approximately 2.5GB of usage:

rpool/ROOT/opensolaris - the legacy mount which is root (/), about 1.3 GB

rpool/dump - about 512MB

rpool/swap - again about 512MB

Now the goal is to reduce the VDI size without creating complications for the end user trying out the VDI, while still retaining the capability of going back to the fully installed version of OpenSolaris.

Hence the quest continues...


Wednesday Nov 12, 2008

Snapshots with PostgreSQL and Sun Storage 7000 Unified Storage Systems

Continuing on my earlier blog entry on setting up PostgreSQL with Sun Storage 7000 Unified Storage (Amber Road), let's now look at how to take snapshots on such systems.

Considering a typical PostgreSQL system, there are two ways to take a backup:

Cold or Offline backup: Backing up PostgreSQL while the PostgreSQL server is not running. The advantage of such a backup is that it is simple to do. The disadvantage is that the database system is unavailable for work. The general steps for a cold backup are: gracefully shut down the PostgreSQL server and back up all the files, including $PGDATA, pg_xlog if it is in a different location, and all the tablespace locations used by that server instance. After backing up the files, start the PostgreSQL server again and you are done. If you ever want to go back to that earlier backup version, you shut down the database, restore all the old files, and restart the database. This is essentially the same logic we will follow with snapshots, except that if you have a multi-gigabyte database, the snapshot operation will look a million times faster than doing a "tar" or "cpio" (especially considering that "tar" is a single-threaded application doing its best, but still slow if you use it on a multi-core system).
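
For comparison, a minimal sketch of that traditional cold backup with tar might look like the following (the /backup destination and the single /pgtbs1 tablespace location are assumptions; adjust for your own setup):

#!/bin/sh
# traditional cold backup: stop the server, copy the files, restart
pg_ctl -D $PGDATA stop
tar cf /backup/pgdata.tar $PGDATA
tar cf /backup/pgtbs1.tar /pgtbs1     # repeat for each tablespace location
pg_ctl -D $PGDATA start -l $PGDATA/server.log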

Let's consider an example of how to take a snapshot on these OpenStorage systems with the Cold or Offline backup strategy.

Assume that the Sun Storage 7000 Unified Storage System has the hostname "amberroad" ("Sun Storage 7000 Unified Storage System" is a mouthful) and has a project defined called "postgres" which exports three shares: "pgdata", which holds $PGDATA, "pglog", which holds $PGDATA/pg_xlog, and "pgtbs1", which is one of the tablespaces defined.

Then a backup/snapshot script (run as the postgres user) will look something like the following (Disclaimer: provided AS-IS; you will have to modify it for your own setup):

#!/bin/sh
date
echo "---------------------------------------------------------------"
echo "Shutting down the PostgreSQL database..."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA stop
echo "---------------------------------------------------------------"
echo "Deleting earlier snapshot called testsnap.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres snapshots select testsnap destroy
echo "---------------------------------------------------------------"
echo "Taking a new snapshot..."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots snapshot testsnap
echo "---------------------------------------------------------------"
echo "Verifying the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots show
ssh -l root amberroad shares select postgres snapshots select testsnap show
echo "---------------------------------------------------------------"
echo "Restarting the database.."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA start -l $PGDATA/server.log
echo "---------------------------------------------------------------"
date

Note: testsnap must not already exist, otherwise the snapshot operation will fail; hence we delete any earlier snapshot called testsnap in the script. You may modify the script to suit your priorities.

Now, in case you want to restore to the snapshot "testsnap" version, you can use a sample script as follows:

#!/bin/sh
date
echo "Restoring the database"
echo "---------------------------------------------------------------"
echo "Shutting down the database..."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA stop
echo "---------------------------------------------------------------"
echo "Restoring the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres select pgdata snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pglog snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pgtbs1 snapshots \
select testsnap rollback
echo "---------------------------------------------------------------"
echo "Restarting the database.."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA start -l $PGDATA/server.log
echo "---------------------------------------------------------------"
echo "Database restored to earlier snapshot: testsnap."
echo "---------------------------------------------------------------"
date

NOTE: When you use the above script, all snapshots taken after "testsnap" are lost, as the shares are rolled back to the point in time when "testsnap" was taken. There is also a feature to clone a snapshot in case you don't want to use rollback.

Hot or Online backup: A Hot or Online backup is taken while the PostgreSQL server is running. The advantage is that the database server remains available. The disadvantage is that the setup is more complex than the Cold or Offline backup strategy. The recommended way to do a Hot or Online backup with PostgreSQL is in conjunction with PITR - the Point-In-Time Recovery feature of PostgreSQL.

This can be achieved by turning on continuous WAL archiving in PostgreSQL, issuing SELECT pg_start_backup('label'); before taking the snapshot, and then issuing SELECT pg_stop_backup(); after the snapshot is completed.

For this you will also need another dedicated share under the "postgres" project just to hold the WAL archive files. (In my case I call it "pgwal" and it is mounted as /pgwal/archive.)

To turn on continuous WAL archiving you need to set the following variables in postgresql.conf (and then restart the PostgreSQL server):

archive_mode = true
archive_command = 'test ! -f /var/postgres/8.3/data/backup_in_progress || cp -i %p /pgwal/archive/%f < /dev/null'
archive_timeout = 3600
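
To sanity-check that archiving works before relying on it, you can force a WAL segment switch while the backup marker file is present and confirm that a file shows up in the archive directory (a quick hedged check, assuming the paths above):

touch $PGDATA/backup_in_progress
psql -c "select pg_switch_xlog();" postgres
ls -l /pgwal/archive
rm $PGDATA/backup_in_progress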

Then a sample backup/snapshot script using hot backup can look as follows:

#!/bin/sh
#PGDATA=/var/postgres/8.3/data; export PGDATA  # uncomment if PGDATA is not already set
date
echo "---------------------------------------------------------------"
echo "Indicating PostgreSQL for a hot Snapshot.."
echo "---------------------------------------------------------------"
touch $PGDATA/backup_in_progress
psql -c "select pg_start_backup('MyHotSnapShot');" postgres
echo "---------------------------------------------------------------"
echo "Deleting earlier snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres snapshots select testsnap destroy
echo "---------------------------------------------------------------"
echo "Taking a new snapshot..."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots snapshot testsnap
echo "---------------------------------------------------------------"
echo "Verifying the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots show
ssh -l root amberroad shares select postgres snapshots select testsnap show
echo "---------------------------------------------------------------"
echo "Indicating to PostgreSQL Server that Hot Snapshot is done."
echo "---------------------------------------------------------------"
psql -c "select pg_stop_backup();" postgres
rm $PGDATA/backup_in_progress
echo "---------------------------------------------------------------"
date

Restoring from a hot backup snapshot is also a bit tricky, since we need to decide whether to roll forward and up to what point. That is probably beyond the scope of this post, so I will just do a simple restore using all the files in the WAL archive.

First of all, restoring with PITR requires a recovery.conf file, which should contain the path to the WAL archives:

restore_command = 'cp /pgwal/archive/%f "%p"'

Also, since we will be rolling forward with the archive files, we need to remove all the old pg_xlog files, create an archive_status directory, remove the postmaster.pid file, and, as an artifact of our earlier script, remove the backup_in_progress marker file too.

(We will do the above steps as part of our restore script.)

A snapshot restore script can look something like the following:

#!/bin/sh
#PGDATA=/var/postgres/8.3/data; export PGDATA  # uncomment if PGDATA is not already set
date
echo "Restoring the database"
echo "---------------------------------------------------------------"
echo "Shutting down the database..."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA stop
echo "---------------------------------------------------------------"
echo "Restoring the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres select pgdata snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pglog snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pgtbs1 snapshots \
select testsnap rollback
# if there is a reason to roll back the WAL archives, uncomment the next two lines
#ssh -l root amberroad confirm shares select postgres select pgwal snapshots \
#select testsnap rollback
echo "---------------------------------------------------------------"
echo "Make sure pg_xlog/\* is empty ..."
echo "Make sure pg_xlog/archive_status exists..."
echo "Make sure postmaster.pid is removed"
echo "---------------------------------------------------------------"
rm $PGDATA/backup_in_progress
rm $PGDATA/postmaster.pid
rm $PGDATA/pg_xlog/*
mkdir -p $PGDATA/pg_xlog/archive_status
echo "---------------------------------------------------------------"
echo "Set recovery.conf for recover.."
echo "---------------------------------------------------------------"
echo "restore_command = 'cp /pgwal/archive/%f \"%p\"'" > $PGDATA/recovery.conf
echo "---------------------------------------------------------------"
echo "Restarting the database.."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA start -l $PGDATA/server.log
echo "---------------------------------------------------------------"
echo "Database restored."
echo "---------------------------------------------------------------"
date

Of course the above scripts may not be the best, but they give an idea of how one can use the snapshot features provided by Sun Storage 7000 Unified Storage Systems with PostgreSQL.



About

Jignesh Shah is a Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products, including Oracle, with Unified Storage Systems. You can also follow me on my blog at http://jkshah.blogspot.com
