Wednesday Oct 31, 2007

Configuring HADB for Sun Application Server 9.1 on a Solaris 10 machine

Thanks to Ajit Kamble; I have just made small additions to his document :-). Feel free to visit his blog
to find out about other complex topologies we have tried out for the open source portal from Sun Microsystems.

What is HADB?

HADB is a horizontally-scalable database. It is designed to support data availability with load balancing, failover, and state recovery capabilities.

How is it used with the Sun Open Portal?

HADB is used for session failover for clustered Portal Server instances. It is also used for portlet session failover.

Topology that we follow when we cluster Portal Server:

Gateway (this comes with the SRA components of the portal): https://gateway.com
Load balancer: http://loadbalancer.com/portal
Access Manager: http://accessmanager.com:8080/amconsole
Portal Server instance 1 + Admin Server: http://machine1.com:38080/portal
Portal Server instance 2: http://machine2.com:38080/portal

Portal Server instance 1 and Portal Server instance 2 are clustered, with the Admin Server on port 4848 of machine1.

In this install, HADB is installed on both machine1 and machine2.

 

Steps to configure HADB for Sun Application Server 9.1 on a Solaris 10 machine:

Prerequisites for configuring HADB:

1. All the machines should be on the same subnet.

2. Ensure that name resolution of all servers is correct on each server, either via the hosts file or DNS as required.

3. Ensure that the fully qualified hostname is the first entry after the IP address in the /etc/hosts file, and enter the other machine's details in the hosts file as well. For example, machine1.india.sun.com should be present in the hosts file of machine2, and vice versa.

cat /etc/hosts file on machine1 : 

127.0.0.1       localhost
172.12.144.23   machine1.com machine1 loghost
172.12.144.24   machine2.com machine2

Repeat the same for the hosts file on machine2.

4. Shared Memory Configuration

Depending on how much physical memory is in the system, configure shared memory in the system configuration file /etc/system:

root@as1#prtconf | grep Mem

Memory size: 1024 Megabytes

Use the following formula to calculate the value of the shminfo_shmmax parameter:

shminfo_shmmax = (server's physical memory in MB / 256 MB) * 10000000

examples:

2GB == 80000000

1GB == 40000000

512MB == 20000000
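As a quick sanity check, the formula above can be evaluated in the shell; MEM_MB=1024 mirrors the prtconf output shown earlier:

```shell
#!/bin/sh
# Evaluate the shminfo_shmmax formula from above.
# MEM_MB would normally come from: prtconf | grep Mem
MEM_MB=1024
SHMMAX=`expr $MEM_MB / 256 \* 10000000`
echo "shminfo_shmmax = $SHMMAX"   # 1 GB of RAM gives 40000000
```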

Add the following parameters to the /etc/system configuration file:

set shmsys:shminfo_shmmax=0x40000000
set shmsys:shminfo_shmseg=20
set semsys:seminfo_semmni=16
set semsys:seminfo_semmns=128
set semsys:seminfo_semmnu=1000
set ip:dohwcksum=0

5. Check that hostname lookup and reverse lookup are functioning correctly, and check the hosts entry in the /etc/nsswitch.conf file:

root@as1#cat /etc/nsswitch.conf | grep hosts

hosts: files dns 

6. Allow non-console root login by commenting out the CONSOLE=/dev/console entry in the /etc/default/login file:

root@as1#cat /etc/default/login | grep "CONSOLE="

#CONSOLE=/dev/console

7. Enable remote root FTP by commenting out the root entry in the /etc/ftpd/ftpusers file:

root@as1#cat /etc/ftpd/ftpusers | grep root

#root

8. Permit ssh root login. Set PermitRootLogin to yes in /etc/ssh/sshd_config file, and restart the ssh daemon process:

root@as1#cat /etc/ssh/sshd_config | grep PermitRootLogin

PermitRootLogin yes 

9. root@as1#/etc/init.d/sshd stop

10. root@as1#/etc/init.d/sshd start

11. Generate the ssh public and private key pair on each machine (machine1 and machine2 in this case):

root@as1#ssh-keygen -t dsa

 Enter file in which to save the key(//.ssh/id_dsa): <CR>

Enter passphrase(empty for no passphrase): <CR>

Enter same passphrase again: <CR>

12. Copy all the public key values to each server's authorized_keys file. Create the authorized_keys file on each server:

 

root@as1# cd ~/.ssh

root@as1# cp id_dsa.pub authorized_keys

Run the same commands on machine2, then copy machine1's public key (present in the authorized_keys file on machine1) into the authorized_keys file on machine2, and vice versa.
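The end state of this exchange can be illustrated locally. The key strings below are placeholders, not real keys; in practice id_dsa.pub comes from ssh-keygen and is copied between machine1 and machine2 over scp or ssh:

```shell
#!/bin/sh
# Illustration of the end state of step 12: after the exchange, each
# machine's authorized_keys file holds both public keys.
DEMO=${TMPDIR:-/tmp}/authkeys_demo
mkdir -p "$DEMO"
echo "ssh-dss PLACEHOLDER-KEY-1 root@machine1" >  "$DEMO/authorized_keys"
echo "ssh-dss PLACEHOLDER-KEY-2 root@machine2" >> "$DEMO/authorized_keys"
grep -c '' "$DEMO/authorized_keys"   # counts lines: prints 2
```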

13. Disable the IPv6 interface, which is not supported by HADB.

To do this on Solaris, remove the file /etc/hostname6.__0 (where __ is eri or hme).

14. Set the same date and time on all the machines involved in the topology.

15. Restart both machines using init 6.

16. Install AS 9.1 along with HADB from the installer panel on both machines, machine1 and machine2.

17. On both machine1 and machine2, run ma:

cd /opt/SUNWhadb/4/bin

./ma

18. We have noticed that this won't install all the HADB packages. Hence, on both machines, go to the directory where you have unzipped the AS 9.1 installer, for example <AS9.1 unzip location>/installer-ifr/package/

19. Run pkgadd -d . SUNWhadbe SUNWhadbi SUNWhadbv

20. machine1 will have the DAS and the first portal instance; machine2 will have the second portal instance.

21. On machine1, create the cluster:

./asadmin create-cluster --user admin pscluster

22. Create node agent on machine1

./asadmin create-node-agent --host machine1.com --port 4848 --user admin machine1node

23. Create the node agent on machine2 (note: --host should point to machine1, since machine1 is the Admin Server):

./asadmin create-node-agent --host machine1 --port 4848 --user admin machine2node

24.  Create an AS9.1 instance on machine1 :

./asadmin create-instance --user admin --cluster pscluster --nodeagent machine1node --systemproperties HTTP_LISTENER_PORT=38080 machine1instance

25. Create an AS9.1 instance on machine2 :

./asadmin create-instance --user admin --cluster pscluster --host machine1.com  --nodeagent machine2node --systemproperties HTTP_LISTENER_PORT=38080 machine2instance

26. Start both the node agents. On machine1:

/opt/SUNWappserver/appserver/bin/asadmin start-node-agent machine1node

On machine2:

/opt/SUNWappserver/appserver/bin/asadmin start-node-agent machine2node

27. Run configure-ha-cluster:

 ./asadmin configure-ha-cluster --user admin --port 4848 --devicesize 256 --hosts machine1.com,machine2.com pscluster

IMP Note:

If configure-ha-cluster fails for some reason, then before trying again, do a ps -ef | grep ma and kill all the ma processes as well as the process which runs on port 15200.

Restart the machines and try configure-ha-cluster again.
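A sketch of that cleanup; the pkill pattern assumes the install path from step 17, and the destructive commands are left commented out so nothing is killed by accident:

```shell
#!/bin/sh
# Look for leftover HADB management agents ("ma") before retrying.
# The [/] bracket keeps the grep command from matching its own entry.
ps -ef | grep '[/]bin/ma' || true
# Kill them, then confirm the ma port (15200) is free:
# pkill -f '/opt/SUNWhadb/4/bin/ma'
# netstat -an | grep 15200
```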

If configure-ha-cluster is successful, one can start installing Portal Server 7.2 to leverage its enhanced cluster capability, as described in the following blog by Prakash Radhakrishnan.


About

siddeshk123
