Wednesday Nov 19, 2008

Test driving Hyperic HQ 4.x

Hyperic just released Hyperic HQ 4.x, and I recently took it for a test drive on OpenSolaris 2008.05. The installation went smoothly, but I ran into one issue when starting the server. With a 64-bit kernel, the startup script assumes you have a 64-bit JRE on the system. In my case, the 64-bit JRE wasn't available, so the server startup failed. I could have installed the 64-bit version, but I decided to just comment out the check in the startup script (hq-server.sh) and use the 32-bit version. Below are the details of the installation steps.

Before running the Hyperic HQ install script, set up the database first. Below are the commands I used for PostgreSQL. Run psql, and then run the following commands:

postgres=# create role admin with login createdb password 'hqadmin';
CREATE ROLE
postgres=# CREATE DATABASE "HQ" OWNER admin;
CREATE DATABASE
postgres=# 
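
To sanity-check the new role and database before proceeding, a couple of standard psql invocations will do (adjust the connection options if Postgres isn't listening on the default local socket):

  psql -l | grep HQ                  # the HQ database should be listed, owned by admin
  psql -U admin -d HQ -c 'SELECT 1'  # confirms the admin role can log in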

Now, run the install script (assuming you've unpacked the download in /var/tmp) and note the answers I entered at each prompt; where the transcript shows "Enter", I just pressed Enter to accept the default.

-bash-3.2# /var/tmp/hyperic-hq-installer/setup.sh
Initializing Hyperic HQ 4.0.1 Installation...
Loading taskdefs...
Taskdefs loaded
Choose which software to install:
1: Hyperic HQ Server
2: Hyperic HQ Agent
You may enter multiple choices, separated by commas.
1,2
HQ server installation path [default '/home/hyperic']:
/usr/local/hyperic
Choices:
	1: Oracle 9i/10g
	2: PostgreSQL
	3: MySQL 5.x
What backend database should the HQ server use? [default '1']:
2
Enter the JDBC connection URL for the PostgreSQL database [default 'jdbc:postgresql://localhost:5432/HQ?protocolVersion=2']: Enter

Enter the username to use to connect to the database:
admin
Enter the password to use to connect to the database: enter password here
(again): enter password again
HQ agent installation path [default '/usr/local/hyperic']: Enter

Loading install configuration...
Install configuration loaded.
Preparing to install...
Validating agent install configuration...
Validating server install configuration...
Checking server webapp port...
Checking server secure webapp port...
Checking server JRMP port...
Checking server JNP port...
Checking database permissions...
Verifying admin user properties
Validating server DB configuration...
Installing the agent...
Looking for previous installation
Unpacking agent to: /usr/local/hyperic/agent-4.0.1...
Setting permissions on agent binaries...
Fixing line endings on text files...
--------------------------------------------------------------------------------
Installation Complete:
  Agent successfully installed to: /usr/local/hyperic/agent-4.0.1
--------------------------------------------------------------------------------

 You can now start your HQ agent by running this command:

  /usr/local/hyperic/agent-4.0.1/bin/hq-agent.sh start

Installing the server...
Unpacking server to: /usr/local/hyperic/server-4.0.1...
Creating server configuration files...
Copying binaries and libraries to server installation...
Copying server configuration file...
Copying server control file...
Copying server binaries...
Copying server libs...
Setting up server database...
Setting permissions on server binaries...
Fixing line endings on text files...
--------------------------------------------------------------------------------
Installation Complete:
  Server successfully installed to: /usr/local/hyperic/server-4.0.1
--------------------------------------------------------------------------------


 You can now start your HQ server by running this command:

  /usr/local/hyperic/server-4.0.1/bin/hq-server.sh start

 Note that the first time the HQ server starts up it may take several minutes
 to initialize.  Subsequent startups will be much faster.

 Once the HQ server reports that it has successfully started, you can log in
 to your HQ server at: 

  http://opensolaris.local:7080/
  username: hqadmin
  password: hqadmin

 To change your password, log in to the HQ server, click the "Administration"
 link, choose "List Users", then click on the "hqadmin" user.


Setup completed.
A copy of the output shown above has been saved to:
  /var/tmp/hyperic-hq-installer/installer-4.0.1/./hq-install.log

At this point the installation is complete, and the server can be started with the following command. But as you can see, HQ failed to start.

-bash-3.2# /usr/local/hyperic/server-4.0.1/bin/hq-server.sh start
Starting HQ server...
Initializing HQ server configuration...
Checking jboss jndi port...
Checking jboss mbean port...
Setting -d64 JAVA OPTION to enable SunOS 64-bit JRE
Booting the HQ server (Using JAVA_OPTS=-XX:MaxPermSize=192m -Xmx512m -Xms512m -d64)...
HQ failed to start
The log file /usr/local/hyperic/server-4.0.1/logs/server.out may contain further details on why it failed to start.

Looking at the hq-server.sh script, I noticed that it checks whether you're running a 64-bit kernel and, if so, uses the 64-bit JRE. Since I only have a 32-bit JRE on my system, I commented out the following section in hq-server.sh.

#if [ $THISOS = "SunOS" ] ; then
#       ARCH=`isainfo -kv`
#
#       case $ARCH in
#               *64-bit*)
#                 echo "Setting -d64 JAVA OPTION to enable SunOS 64-bit JRE"
#                       HQ_JAVA_OPTS="${HQ_JAVA_OPTS} -d64"
#                       ;;
#       esac
#fi
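
If you want to verify the kernel/JRE mismatch before editing the script, both checks are one-liners. isainfo is Solaris-specific, and java -d64 -version exits with an error when no 64-bit JVM is installed:

  isainfo -kv          # reports whether the kernel is 32-bit or 64-bit
  java -version        # the default JRE (32-bit in my case)
  java -d64 -version   # fails if no 64-bit JRE is present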

After rerunning the script, the server started up properly.

-bash-3.2# /usr/local/hyperic/server-4.0.1/bin/hq-server.sh start
Starting HQ server...
Removing stale pid file /usr/local/hyperic/server-4.0.1/logs/hq-server.pid
Initializing HQ server configuration...
Checking jboss jndi port...
Checking jboss mbean port...
Booting the HQ server (Using JAVA_OPTS=-XX:MaxPermSize=192m -Xmx512m -Xms512m)...
HQ server booted.
Login to HQ at: http://127.0.0.1:7080/
-bash-3.2#

At this point you can hit the server by pointing your browser to http://127.0.0.1:7080/, and you should see the Portal Dashboard.

This is just a quick cheatsheet to get you started. The official installation/configuration instructions are available on the Hyperic website.

Friday Oct 31, 2008

OFBiz and PostgreSQL

There are quite a number of open source ERP apps out there, but only a few (Compiere/Adempiere, OpenBravo, xTuple, OFBiz) seem to have gained traction. Recently I tried out OFBiz, and getting it to work with PostgreSQL was quite simple. OFBiz actually ships with an embedded Java DB (Derby), so for testing or development you probably don't need an external DB.

I did my test on OpenSolaris 2008.05, so if you use a different version of Solaris or a different OS, the instructions below may vary.

First, check out the source, compile, and run with the embedded database to make sure everything works properly.

  1. Create a directory where you want OFBiz to reside, and run "svn co http://svn.apache.org/repos/asf/ofbiz/trunk ofbiz"
  2. Go to the OFBiz directory (now referred to as $OFBIZDIR), and run "./ant run-install"
  3. In $OFBIZDIR, run "java -jar ofbiz.jar"
  4. Once OFBiz is started, check out the sample app by pointing the browser to http://localhost:8080/ecommerce/

Once you have it working with Derby, follow the steps below to switch to PostgreSQL. I used version 8.3.x.

  1. Assuming you have Postgres installed and ready to go, run:

    postgres $ createuser -S -D -R -P -E ofbiz
    postgres $ createdb -O ofbiz ofbizdb

  2. Edit the OFBiz DB config file, $OFBIZDIR/framework/entity/config/entityengine.xml, so the default delegator points at the PostgreSQL datasource (localpostnew):

    <delegator name="default" entity-model-reader="main" entity-group-reader="main"
               entity-eca-reader="main" distributed-cache-clear-enabled="false">
        <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
        ...
    </delegator>

    <datasource name="localpostnew"
        ...
        <inline-jdbc
            jdbc-driver="org.postgresql.Driver"
            jdbc-uri="jdbc:postgresql://127.0.0.1:5432/ofbizdb"
            jdbc-username="ofbiz"
            jdbc-password="ofbiz"
            isolation-level="ReadCommitted"
            pool-minsize="2"
            pool-maxsize="250"/>
    </datasource>

  3. Copy the PostgreSQL JDBC driver to $OFBIZDIR/framework/entity/lib/jdbc. Make sure to use the JDBC4 version of the driver.
  4. Go to $OFBIZDIR, and run "./ant run-install" to populate the data in PostgreSQL
  5. Re/start OFBiz by running "java -jar ofbiz.jar"
  6. Once OFBiz is up and running, you should be able to access http://localhost:8080/ecommerce/
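
If you want to confirm that the data actually landed in PostgreSQL rather than Derby, a quick spot check from psql does it. This isn't from the official docs; "product" is one of the entity tables OFBiz creates, and the exact table set varies by revision:

  psql -U ofbiz -d ofbizdb -c '\dt' | head                     # OFBiz entities should appear as tables
  psql -U ofbiz -d ofbizdb -c 'SELECT count(*) FROM product'   # sample data check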

That's all there is to it.

Jignesh has a post on setting up OpenBravo with Postgres on OpenSolaris 2008.05.

Monday Feb 04, 2008

Achieving High Availability with PostgreSQL using Solaris Cluster

Quick announcement: Sun has just released Solaris Express Developer Edition (SXDE) 1/08, a developer version that has all the latest and greatest Solaris features and tools. So what's new for PostgreSQL? The main updates/additions are PostgreSQL 8.2.5, the Perl driver, and pgAdmin III v1.6.3.

High availability (HA) is very important for most applications. PostgreSQL has an HA feature called warm standby (also known as log shipping) that allows one or more secondary servers to take over when the primary server fails. However, there are some limitations with PostgreSQL warm standby, two of which are:

  1. No mechanism in PostgreSQL to identify and perform automatic failover when the primary server fails
  2. No read-only support on the standby server

Fortunately, Solaris Cluster can be used to solve limitation #1. Solaris Cluster is a robust clustering solution that has been around for many years, and best of all, it's open source and free.

Detlef Ulherr has recently implemented a Solaris Cluster agent that works with PostgreSQL warm standby. We discussed different possible use-case scenarios for PostgreSQL warm standby, and he came up with a design that I think will work well for non-shared-storage clustering. Maybe not everything will be perfect initially, but as more people test the agent, we'll know what can be improved. The agent is still in beta and will be released soon, probably in a couple of months.

The cool thing with Solaris Cluster is that you can now set up a cluster on a single node using multiple Solaris Zones. This is extremely useful because it eliminates the need for multiple machines or a complicated hardware setup if you just want to try it out or want a simple environment for development. Here are more details on the Solaris Cluster and Zones integration.

Of course, you wouldn't want to deploy your HA application on a single machine. In a production environment, you should have at least a two-node cluster. Please refer to the Solaris Cluster documentation for more info.

In my recent test, I set up the cluster on a single machine with two Solaris Zones. Here's how the automatic failover works, in a nutshell:

  • The client connects to a logical host. The logical host is configured with two resource groups, each containing a Zone.
  • The logical host initially points to the primary server (Zone 1), where PostgreSQL is running and configured to ship WAL logs to the secondary server (Zone 2), where PostgreSQL runs in continuous recovery mode (see the configuration sketch below).
  • When the primary server fails, the Solaris Cluster agent detects the failure and triggers the logical host to automatically switch to the IP address of the secondary server.
  • From the PostgreSQL client's perspective, it's still using the same logical host, but the actual PostgreSQL server has moved to a different machine; it all happens transparently.
  • If the client was connected to the DB on the primary, the session would be disconnected momentarily and reconnected automatically to the DB on the secondary server, and the application would continue on its merry way.
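
For reference, the warm standby piece itself boils down to two settings. Here's a minimal sketch, assuming an archive directory (/export/walarchive is a made-up path for illustration) that both zones can reach; pg_standby, or an equivalent wait-and-restore script, provides the loop that keeps the standby in continuous recovery:

  # Primary (Zone 1) -- postgresql.conf: ship each completed WAL segment
  archive_command = 'cp %p /export/walarchive/%f'

  # Standby (Zone 2) -- recovery.conf: restore segments as they arrive,
  # blocking until the next one appears
  restore_command = 'pg_standby /export/walarchive %f %p'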

The combination of PostgreSQL warm standby and Solaris Cluster can provide an enterprise-class clustering solution for free (unless you need support services, of course). So please try it out and provide your feedback on what can be improved.

In my next blog, I will discuss how Solaris ZFS and Zones can be used in a clever way to overcome limitation #2. This idea has been used already, and some of you may have seen Theo's blog on this topic. I will provide working sample code and step-by-step instructions for setting it up.
