Sunday Apr 12, 2009

Openbravo ERP 2.40 Appliance using the Postgres 8.3 OpenSolaris OVF appliance

A few days ago I talked about a Postgres 8.3 Appliance based on OpenSolaris. Today let's look at how to use that appliance image to get an Openbravo ERP 2.40 appliance based on OpenSolaris in VirtualBox.

Download the Postgres 8.3 Appliance OVF image and unzip the two files. Fire up VirtualBox 2.2, use File->Import Appliance, and point it to the .ovf file from the zip file. Change the networking from NAT to "Bridged Network" and start the VM; soon you get the "postgresdb login:" screen. Use root/opensolaris to log in to the system and verify that the PostgreSQL instance is already running as follows:

# svcs -a |grep postgres
disabled       19:08:00 svc:/application/database/postgresql_83:default_64bit
online         19:08:23 svc:/application/database/postgresql_83:default_32bit

The default settings in postgresql.conf are pretty low, so bump them up slightly:

# vi /var/postgres/8.3/data/postgresql.conf

shared_buffers=128MB
wal_buffers=128kB
checkpoint_segments=16
listen_addresses='*'

# svcadm restart svc:/application/database/postgresql_83:default_32bit
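Since listen_addresses is now open, clients connecting from outside the VM also need to be allowed in pg_hba.conf. A hypothetical entry (substitute your own client subnet and preferred auth method) would be:

# vi /var/postgres/8.3/data/pg_hba.conf

host    all    all    192.168.1.0/24    md5

A reload (or the restart above) picks up pg_hba.conf changes.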

Now install the other required dependencies for Openbravo ERP 2.40:

# pkg install SUNWj6dev SUNWant SUNWtcat
DOWNLOAD                                    PKGS       FILES     XFER (MB)
SUNWj6dev                                    0/4     25/4756    1.08/84.90

Make sure that your newly installed Tomcat setup has a valid server.xml file, or copy it from the included example file:

# cp /var/apache/tomcat/conf/server.xml-example /var/apache/tomcat/conf/server.xml

Now download the Openbravo ERP 2.40 installer as follows:

# pkg install SUNWwget

# wget "http://voxel.dl.sourceforge.net/sourceforge/openbravo/OpenbravoERP_2.40-solaris-intel-installer.bin"

# chmod a+x OpenbravoERP_2.40-solaris-intel-installer.bin

# ./OpenbravoERP_2.40-solaris-intel-installer.bin

 And use the following options:

  • /opt/OpenbravoERP | /var/OpenbravoERP/AppsOpenbravo/attachments
  • Complete |Standard | /usr/jdk/latest | /usr/bin/ant 
  • /var/apache/tomcat
  • PostgreSQL
  • /usr/postgres/8.3/bin
  • localhost     5432
  • (Enter password for postgres user as "postgres" twice)
  • openbravo    tad     (Enter password for tad user  twice)
  • Context name: openbravo
  • Date format: DD MM YYYY, Date Separator -, Time format 24h, Time Separator :
  • Demo data: Y or N depending on your preferences

After you enter the information, the installation takes quite a bit of time to complete, especially if you choose to load the demo data. (Hopefully you made the PostgreSQL changes above to tune this load.)

Once the installation completes, start Tomcat as follows:

# /usr/apache/tomcat/bin/startup.sh
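As a quick sanity check (assuming Tomcat is using its default HTTP connector on port 8080), you can verify that it is listening:

# netstat -an | grep 8080

A line in LISTEN state for port 8080 indicates Tomcat is up.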

Now from any other machine (or the host machine), fire up a browser and enter the IP address of the VM with port 8080 and the URI openbravo; you now have a VM with Openbravo running:

http://myVMipaddress:8080/openbravo

The login screen for Openbravo should appear. Use Openbravo as the username and openbravo (all lower case) as the password to log in and set it up for your business.



Wednesday Apr 08, 2009

Postgres 8.3 Appliance for VirtualBox 2.2

With the release of VirtualBox 2.2 today, exporting and importing appliances becomes a lot easier. The new features of VirtualBox 2.2 include import and export of appliances based on OVF, the Open Virtualization Format.

One of my earlier blog entries talks about creating appliances. Using the same script mentioned in that entry, I now have an OVF image which works with the VirtualBox 2.2 "Import Appliance" feature and makes it easier to try a virtual PostgreSQL 8.3 database appliance running OpenSolaris under the cover.

The PostgreSQL 8.3 Appliance image is less than 330MB (unlike a full install of OpenSolaris, which could take more than 1500MB). Just unzip the files in a directory and import them through VirtualBox 2.2 using the File->Import Appliance wizard. Since an appliance (a database in this case) is typically accessed through external application servers, you will want to use Bridged Networking instead of NAT. Detailed instructions to make it accessible (both the VirtualBox VM as well as PostgreSQL 8.3) from external clients are published on the Postgres 8.3 Appliance page.

Finally, it is a lot easier to see the following GRUB menu on your own VirtualBox 2.2:

Also, for PostgreSQL users/developers who are not familiar with OpenSolaris, this VM allows you to try DTrace probes and ZFS snapshots with PostgreSQL 8.3. More information related to it is coming soon on the Postgres 8.3 Appliance page. Also, the PostgreSQL DTrace Toolkit 2009.03.29 is available on PGFoundry.org.

Tip: You can also use my PostgreSQL Monitor Demo to connect to it:


As usual if you have questions, I will be happy to answer them.

Sunday Apr 05, 2009

Effects of Flash/SSD on PostgreSQL - PostgreSQL East 2009

I presented a talk at PostgreSQL East 2009 at Drexel University this morning. The topic was "Effects of Flash/SSDs on PostgreSQL". Here are the slides from the presentation.

If you have questions please leave comments.

Wednesday Mar 11, 2009

Thundering Herd of Elephants for Scalability

During last year's PgCon 2008 I presented on "Problems with PostgreSQL on Multi-core Systems". On slide 14 I talked about the results with IGEN at various think times and identified how difficult it is to scale with an increasing number of users. The chart showed how 200ms think-time tests saturate at about 1000 active users, after which throughput starts to fall. On slides 16 & 17 I identified ProcArrayLock as the culprit for why scalability tanks with an increasing number of users.

Today I was again intrigued by the same problem as I was trying out PostgreSQL 8.4 builds: I once again hit the bottleneck at about 1000 users and was frustrated that PostgreSQL cannot scale even with only 1 active socket (64 strands) of a Sun SPARC Enterprise T5240, which is a 2-socket UltraSPARC T2 Plus system.

Again I was digging through the source code of a PostgreSQL 8.4 snapshot to see what could be done. While reviewing lwlock.c I thought of a change, quickly changed a couple of lines in the code, recompiled the database binaries, and re-ran the test. The results are as shown below (8.4 vs. 8.4fix).

The setup was quite standard (as how I test them): the database and log files are on RAM (/tmp), and the think time was about 200ms for each user between transactions. The results were quite pleasing. On the same system setup, TPM throughput went up about 1.89x, and response time now shows a gradual increase rather than a knee-jerk response.

So what was the change in the source code? Before I explain what I did, let me explain what PostgreSQL is trying to do in that code path.

After a process acquires a lock and does its critical work, when it is ready to release the lock it finds the next postgres process that it needs to wake up, so as not to cause starvation of processes trying to take exclusive locks, and also, I think, to avoid a thundering-herd problem. That is a good thing to do on single-core systems, where CPU resources are scarce and effective usage results in better performance. However, on systems with many CPU resources (say the 256 threads of a Sun SPARC Enterprise T5440) this ends up being an artificial bottleneck, since it is not the system but the application that determines which process should wake up next and try to get the lock.

The change I made was to discard the selective waiter wake-up and just wake up all waiters waiting for that lock, letting the processes, the OS dispatcher, and the CPU resources work their magic in their own way (and that worked, worked very well).


        /* In the lock-release path: wake up all waiters on this lock
         * instead of stopping at the first exclusive waiter. */
        /* if (!proc->lwExclusive) */
        if (1)
        {
                while (proc->lwWaitLink != NULL &&
                       1 /* !proc->lwWaitLink->lwExclusive */)
                        proc = proc->lwWaitLink;
        }


It's amazing that last year I tried so many different approaches to the problem and such a simple approach proved to be more effective.

I have put a proposal to the PostgreSQL performance alias that a postgresql.conf tunable be defined for PostgreSQL 8.4 so people can tweak their instances to use the above method of waking all waiters without impacting the existing behavior for existing users.

In the meantime, if you do compile your own PostgreSQL binaries, try the workaround if you are using PostgreSQL on any system with 16 or more cores/threads and provide feedback about the impact on your workloads.
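For reference, a rough sketch of such a source build: edit src/backend/storage/lmgr/lwlock.c in your 8.4 source tree as shown above, then run the standard PostgreSQL build from the top of the tree (the install prefix here is just an example):

# ./configure --prefix=/usr/local/pgsql84
# gmake
# gmake install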


Thursday Mar 05, 2009

PostgreSQL Conference East 2009 & upcoming 8.4 beta

Sometimes you need a kickstart to get moving from inertia. Considering that I haven't blogged in 2009 at all, I welcome the PostgreSQL Conference East 2009 email reminders to get me going again. That will motivate me to do some targeted work with PostgreSQL for the community. I will be presenting on SSDs and PostgreSQL during the event. Unlike Robert Treat, who can pretty much prepare all his slides during the lunch break of the conference, some of us require more time. As a reminder, PostgreSQL East 2009 registration has already started.

I am also looking forward to the PostgreSQL 8.4 beta and to test-driving it on a 128-thread Sun SPARC Enterprise T5240.



Tuesday Dec 16, 2008

PostgreSQL 8.3 Appliance based on OpenSolaris using VirtualBox

Based on Alex Eremin's blog entry on a minimal script and some of my own hacking, I have a script that should let you easily create a basic PostgreSQL 8.3 appliance using the OpenSolaris 2008.11 kernel in VirtualBox, with an image that is not bloated (no X, GNOME, etc.).

Here is how to try it out:

  • For the time being this requires the OpenSolaris 2008.11 CD image (650+ MB) just to execute my script create_pg_appliance.sh, which is about 6KB. Bear with me till I solve this problem and play along :-)
  • Download Virtualbox and install it
  • Create a New Virtual Box VM with following parameters in the Wizard Screen
    • Call it PostgreSQL 8.3 Appliance
    • Select OS Type "OpenSolaris" 
    • Base Memory  768MB or increase it to 1GB if you can spare your RAM for it
    • Create a new Dynamically Expanding Image of exactly 16.00 GB (any other size may not work)
  • Once the VM is created, immediately  click the blue Network link and modify it to select a "Host Based Network"  (in the setup make sure it is connected to the active host interface - wired or wireless depending on the host system)
  • Also Click the CD-ROM image and point it to the osol-200811.iso image that you downloaded earlier
  • Boot up the VM and select the first LiveCD option in the Grub Menu options
  • Select through the defaults to get the full Gnome Desktop
  • Open Firefox in the VM and make  sure your VM  has internet access
  • Open a terminal window and execute the following commands in sequence
    • wget "http://blogs.sun.com/jkshah/resource/create_pg_appliance.sh"
    • pfexec sh -x create_pg_appliance.sh
    • pfexec reboot
  • At this point stop the VM and disconnect the CD image connected to the VM and boot again or  select the  grub entry
    • Boot from Hard disk
  • Once the VM boots from the hard disk, select the  grub menu item
    • PostgreSQL 8.3 Appliance based on OpenSolaris 2008.11
  • The Operating system boots up to give a login prompt. Login as root/opensolaris
  • The PostgreSQL database server is already running with PGDATA in /var/postgres/8.3/data
  • Modify postgresql.conf to add the listen_addresses='*' parameter
  • Modify pg_hba.conf to allow your clients
  •  Restart the postgresql server instance as follows:
    • svcadm restart postgresql_83:default_32bit
  • Now the PostgreSQL Appliance is ready to be connected from your clients.
  • Remember to take snapshots of rpool/VOSApp/var so you can always revert to earlier copies if you ever need to (a minimal example follows below).
[Image: GRUB menu]
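A minimal sketch of that snapshot step (the snapshot name is just an example):

# zfs snapshot rpool/VOSApp/var@pristine
# zfs list -t snapshot -r rpool/VOSApp/var

And if you ever need to revert:

# zfs rollback rpool/VOSApp/var@pristine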

Note: In a trial installation on a MacBook Pro the script executed in  about 11 minutes which  includes the time it takes to download packages from pkg.opensolaris.org.  I thought that was fantastic.

Maybe I should also create a minimized LiveCD just to execute the script on Virtualbox.  If you have problems executing the script let me know.



Friday Dec 12, 2008

Call for Papers for CommunityOne (New York) on March 18 2009

This year CommunityOne is also being held in New York (March 18, 2009) in addition to the CommunityOne held in San Francisco (June 1, 2009). CommunityOne is a conference sponsored by Sun Microsystems focused on open source innovation and implementation. It brings together developers, technologists, and students for technical education and exchange.

The Call for Papers is on and has been extended to December 19. Since quite a bit of the PostgreSQL crowd is on the East Coast, maybe we should also put in some PostgreSQL proposals, at least for the New York event in March 2009. Any takers/suggestions?

Here is what comes to my mind instantly:


Others that I missed?


Wednesday Dec 10, 2008

OpenSolaris 2008.11 and PostgreSQL 8.3

The second supported version of OpenSolaris, called OpenSolaris 2008.11, is now officially launched. With the release, I guess PostgreSQL 8.3 is now "officially" supported on OpenSolaris too (though you could have used it earlier anyway with the development releases of OpenSolaris after the initial OpenSolaris 2008.05 was released).

Anyway, let's quickly look at some tips and how-tos which I think could be useful to improve the experience of PostgreSQL 8.3 with OpenSolaris 2008.11. One way to take a quick test drive of OpenSolaris 2008.11 is to try out the new VirtualBox 2.0.6 and install OpenSolaris 2008.11 in a VM. (Note: If you plan to access the VM from other systems, use Host Interface networking instead of NAT to make it easy to reach from outside.)

PostgreSQL 8.3 is not installed by default when you install OpenSolaris 2008.11 from the LiveCD. This gives an opportunity for a tweak that I would highly recommend. By default OpenSolaris 2008.11 does not have an option for a separate ZFS dataset for /var (like Solaris 10 10/08 - U6 does). I would certainly like my PostgreSQL database to be on a separate dataset so that I can control PostgreSQL snapshots independently. Since I know the default database path for PostgreSQL 8.3 on OpenSolaris is /var/postgres/8.3, I generally create a separate dataset as follows before I use pkg to install PostgreSQL 8.3:

# zfs create -o mountpoint=/var/postgres rpool/postgres

Now I am ready to install the various packages of PostgreSQL 8.3

# pkg install SUNWpostgr-83-server

# pkg install SUNWpostgr-83-client SUNWpostgr-jdbc SUNWpostgr-83-contrib

# pkg install SUNWpostgr-83-docs  SUNWpostgr-83-devel

# pkg install SUNWpostgr-83-tcl SUNWpostgr-83-pl

Now install pgAdmin3 as follows:

# pkg install SUNWpgadmin3

Even pgbouncer is available in the repository

# pkg install SUNWpgbouncer-pg83

Now that all the binaries are installed, let's look at how to get started. Generally the best way to start and stop the PostgreSQL 8.3 server is via the svcadm command of OpenSolaris. The SMF manifest for PostgreSQL 8.3 is installed with SUNWpostgr-83-server; however, the import of the manifest does not happen till the next reboot. We can always work around that by doing a quick manual import as follows:

# svccfg import /var/svc/manifest/application/database/postgresql_83.xml

This creates two entries, one for a 32-bit server instance and one for a 64-bit server instance.

# svcs -a |grep postgres
disabled        1:14:44 svc:/application/database/postgresql_83:default_64bit
disabled        1:14:45 svc:/application/database/postgresql_83:default_32bit

Depending on your system and preference you can start the corresponding instance; the 32-bit instance may be the easier choice if in doubt.

# svcadm enable postgresql_83:default_32bit

# svcs -a |grep postgres
disabled        1:14:44 svc:/application/database/postgresql_83:default_64bit
online          2:12:37 svc:/application/database/postgresql_83:default_32bit

# zfs list rpool/postgres
NAME             USED  AVAIL  REFER  MOUNTPOINT
rpool/postgres  30.4M  11.8G  30.4M  /var/postgres

The psql client is still not in the default path; /usr/postgres/8.3/bin should be in your search PATH (or, if you are using the 64-bit instance, add /usr/postgres/8.3/bin/64).
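For example, in a Bourne-compatible shell something like this would do (use the 64-bit path instead if that is the instance you enabled):

# PATH=/usr/postgres/8.3/bin:$PATH; export PATH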

# /usr/postgres/8.3/bin/psql -U postgres postgres
Welcome to psql 8.3.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

postgres=#

For people who are already using PostgreSQL 8.2 on OpenSolaris 2008.05, there is a way to get to PostgreSQL 8.3. First you have to update your OpenSolaris 2008.05 image to OpenSolaris 2008.11 using

# pkg image-update

Then install the PostgreSQL 8.3 binaries as above and also install an additional package:

# pkg install SUNWpostgr-upgrade

If there are more questions, feel free to leave a comment.




Thursday Dec 04, 2008

PostgreSQL and Netbeans 6.5

With the release of JavaFX 1.0, a new version of NetBeans, 6.5, is now also available. Let's quickly look at its current PostgreSQL support.

The JDBC driver for PostgreSQL is integrated in NetBeans 6.5. It is located at:

netbeans-6.5/ide10/modules/ext/postgresql-8.3-603.jdbc3.jar

As you can see it is pretty much in sync with the latest (about one revision back, since the latest is the 604 build). Clicking the Services tab of NetBeans shows Databases, where a new entry can be created via "Direct URL Entry" using a URL of the following format:

jdbc:postgresql://hostname/databasename
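For example, a connection to the default postgres database on the local machine (hypothetical host and database names, default port) would use:

jdbc:postgresql://localhost:5432/postgres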

Provide the user name and password for the connection and then the schema name (hint: public schema) in the Advanced tab, and press OK to create a new connection to the PostgreSQL database.

Once connected, click the expand sign (which looks like o-) to see Tables/Views/Procedures; Tables can be expanded further to see data, either using menu options or the SQL Editor.

With the PostgreSQL setup in place, it should now be easier to program with your favorite language, be it Java, JavaFX, Python, PHP, Ruby, etc., using a PostgreSQL database as the backend.

(Note to myself: Try out Greg Smith's pgtune project, which uses Python, with the new NetBeans Python editor.)



Wednesday Nov 12, 2008

Snapshots with PostgreSQL and Sun Storage 7000 Unified Storage Systems

Continuing from my earlier blog entry on setting up PostgreSQL with Sun Storage 7000 Unified Storage (Amber Road), let's now look at how to take snapshots on such systems.

Considering a typical PostgreSQL system, there are two ways to take backups:

Cold or offline backup: Backing up PostgreSQL while the PostgreSQL server is not running. The advantage of such a backup is that it is simple to do. The disadvantage is that the database system is unavailable for work. The general steps for a cold backup are: gracefully shut down the PostgreSQL server and back up all the files, including $PGDATA, pg_xlog if it is in a different location, and all the tablespace locations used for that server instance. After backing up the files, start the PostgreSQL server again and you are done. If you ever want to go back to that earlier backup version, you shut down the database, restore all the old files, and restart the database. This is essentially the same logic we will follow with snapshots, except that if you have a multi-gigabyte database, the snapshot operation will look a million times faster than doing a "tar" or "cpio" (especially if you consider that "tar" is a single-threaded application doing its best, but still slow when used on multi-core systems).

Let's consider an example of how to take a snapshot with OpenStorage systems using the cold or offline backup strategy.

Assume that the Sun Storage 7000 Unified Storage System has the hostname "amberroad" ("Sun Storage 7000 Unified Storage System" is a mouthful) and has a project defined, "postgres", with three shares exported: "pgdata", which holds $PGDATA; "pglog", which holds $PGDATA/pg_xlog; and "pgtbs1", which is one of the tablespaces defined.

Then a backup/snapshot script (run as the postgres user) will look something like the following (Disclaimer: provided AS-IS; you will have to modify it for your own setup):

#!/bin/sh
date
echo "---------------------------------------------------------------"
echo "Shutting down the PostgreSQL database..."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA stop
echo "---------------------------------------------------------------"
echo "Deleting earlier snapshot called testsnap.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres snapshots select testsnap destroy
echo "---------------------------------------------------------------"
echo "Taking a new snapshot..."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots snapshot testsnap
echo "---------------------------------------------------------------"
echo "Verifying the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots show
ssh -l root amberroad shares select postgres snapshots select testsnap show
echo "---------------------------------------------------------------"
echo "Restarting the database.."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA start -l $PGDATA/server.log
echo "---------------------------------------------------------------"
date

Note: testsnap should not already exist, otherwise the snapshot operation will fail, hence the script first deletes any earlier snapshot called testsnap. You may modify the script to suit your priorities.

Now, in case you want to restore to the snapshot "testsnap" version, you can use a sample script as follows:

#!/bin/sh
date
echo "Restoring the database"
echo "---------------------------------------------------------------"
echo "Shutting down the database..."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA stop
echo "---------------------------------------------------------------"
echo "Restoring the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres select pgdata snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pglog snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pgtbs1 snapshots \
select testsnap rollback
echo "---------------------------------------------------------------"
echo "Restarting the database.."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA start -l $PGDATA/server.log
echo "---------------------------------------------------------------"
echo "Database restored to earlier snapshot: testsnap."
echo "---------------------------------------------------------------"
date

NOTE: When you use the above script, all snapshots taken after "testsnap" are lost, as the shares are rolled back to the point in time when "testsnap" was taken. There is another feature to clone the snapshot in case you don't want to use rollback.

Hot or online backup: A hot or online backup is taken while the PostgreSQL server is running. The advantage is that the database server remains available. The disadvantage is that the setup is more complex than the cold or offline backup strategy. The recommended way to do a hot or online backup with PostgreSQL is to use it in conjunction with PITR, the Point-In-Time Recovery feature of PostgreSQL.

This can be achieved by turning on continuous WAL archiving in PostgreSQL, issuing SELECT pg_start_backup('label'); before taking the snapshot, and then issuing SELECT pg_stop_backup(); after the snapshot is completed.

For this you will also need another dedicated share under the "postgres" project just to hold the WAL archive files. (In my case I call it "pgwal" and mount it as /pgwal/archive.)

To turn on continuous WAL archiving you need to set the following variables in postgresql.conf (and then restart the PostgreSQL server):

archive_mode = true
archive_command = 'test ! -f /var/postgres/8.3/data/backup_in_progress || cp -i %p /pgwal/archive/%f < /dev/null'
archive_timeout=3600
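On the OpenSolaris setups described in earlier entries, for example, that restart could be done via SMF (assuming the 32-bit instance is the one in use):

# svcadm restart svc:/application/database/postgresql_83:default_32bit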

Then a sample hot-backup/snapshot script can look as follows:

#!/bin/sh
#PGDATA=/var/postgres/8.3/data; export PGDATA    # if not set
date
echo "---------------------------------------------------------------"
echo "Indicating PostgreSQL for a hot Snapshot.."
echo "---------------------------------------------------------------"
touch $PGDATA/backup_in_progress
psql -c "select pg_start_backup('MyHotSnapShot');" postgres
echo "---------------------------------------------------------------"
echo "Deleting earlier snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres snapshots select testsnap destroy
echo "---------------------------------------------------------------"
echo "Taking a new snapshot..."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots snapshot testsnap
echo "---------------------------------------------------------------"
echo "Verifying the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad shares select postgres snapshots show
ssh -l root amberroad shares select postgres snapshots select testsnap show
echo "---------------------------------------------------------------"
echo "Indicating to PostgreSQL Server that Hot Snapshot is done."
echo "---------------------------------------------------------------"
psql -c "select pg_stop_backup();" postgres
rm $PGDATA/backup_in_progress
echo "---------------------------------------------------------------"
date

Restoring from a hot backup snapshot is also a bit tricky, since we need to decide whether we need to roll forward and up to what point. That is probably beyond the scope of this post, so I will just do a simple restore using all the files in the WAL archive.

First of all, restoring with PITR requires a recovery.conf file, which should contain the path to the WAL archives:

restore_command = 'cp /pgwal/archive/%f "%p"'

Also, since we will be rolling forward with the archive files, we need to remove all the old pg_xlog files, create an archive_status directory, remove the postmaster.pid file, and, as an artifact of our earlier script, remove the backup_in_progress marker file too.

(We will do the above steps also as part  of our restore script.)

A restore snapshot script can look something like the following:

#!/bin/sh
#PGDATA=/var/postgres/8.3/data; export PGDATA    # if not set
date
echo "Restoring the database"
echo "---------------------------------------------------------------"
echo "Shutting down the database..."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA stop
echo "---------------------------------------------------------------"
echo "Restoring the snapshot.."
echo "---------------------------------------------------------------"
ssh -l root amberroad confirm shares select postgres select pgdata snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pglog snapshots \
select testsnap rollback
ssh -l root amberroad confirm shares select postgres select pgtbs1 snapshots \
select testsnap rollback
#if there is reason to roll back the WAL archives too, uncomment the next two lines
#ssh -l root amberroad confirm shares select postgres select pgwal snapshots \
#select testsnap rollback
echo "---------------------------------------------------------------"
echo "Make sure pg_xlog/\* is empty ..."
echo "Make sure pg_xlog/archive_status exists..."
echo "Make sure postmaster.pid is removed"
echo "---------------------------------------------------------------"
rm $PGDATA/backup_in_progress
rm $PGDATA/postmaster.pid
rm $PGDATA/pg_xlog/*
mkdir -p $PGDATA/pg_xlog/archive_status
echo "---------------------------------------------------------------"
echo "Set recovery.conf for recover.."
echo "---------------------------------------------------------------"
echo "restore_command = 'cp /pgwal/archive/%f \"%p\"'" > $PGDATA/recovery.conf
echo "---------------------------------------------------------------"
echo "Restarting the database.."
echo "---------------------------------------------------------------"
pg_ctl -D $PGDATA start -l $PGDATA/server.log
echo "---------------------------------------------------------------"
echo "Database restored."
echo "---------------------------------------------------------------"
date

Of course, the above scripts may not be the best, but they give an idea of how one can use the snapshot features provided by Sun Storage 7000 Unified Storage Systems with PostgreSQL.



About

Jignesh Shah is a Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products, including Oracle, with Unified Storage Systems. You can also follow me on my blog: http://jkshah.blogspot.com
