'Virtual Metal' Provisioning with Oracle VM and PXE

The basis for Bare Metal Provisioning (BMP) in EMGC 10.2.0.5 is PXE boot, as described in an earlier blog entry.

[Screenshot: snap-rac-vm00042.jpg]
This blog entry describes how to set up PXE boot (TFTP and DHCP) for para-virtualised guests.
This allows you to install virtualised guests automatically from a kickstart file.

By the way, this setup uses OEL 5U2 x86; if you want to reproduce it for, say, x86_64, you may need different packages.

Below are my notes of the setup:
- install dhcp-3.0.5-18.el5
- install tftp-0.42-3.1.0.1 (needed later as a required package for pypxeboot)
- install tftp-server-0.42-3.1.0.1

After installing these packages, we begin with the configuration of DHCP in /etc/dhcpd.conf.
As this is just a test, I am not using all DHCP options.
Be careful if you test this; DHCP may work a little too well and answer requests from machines it should not...
#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.sample  
#
ddns-update-style none;
allow booting; 
allow bootp;   

subnet 192.168.200.0 netmask 255.255.255.0 {
    option routers             192.168.200.1;
    option subnet-mask         255.255.255.0;
    option nis-domain          "nl.oracle.com";
    option domain-name         "nl.oracle.com";
    option domain-name-servers 192.135.82.60;

    default-lease-time 60;
    max-lease-time 60;
 
    next-server 192.168.200.173;
    filename "/pxelinux.0";

    host RK {
        hardware ethernet 00:16:3e:62:39:d3;
        fixed-address 192.168.200.177;
    }
}
As you can see, I specified the subnet, netmask, domain name, and details for the host called "RK":
its name, MAC address and IP address.

The purpose of "next-server" is to specify the name (or IP address) of the TFTP server.
It makes sense to put the DHCP and TFTP servers on the same box.

In order to (re)start dhcp:
service dhcpd restart 
After setting up DHCP, TFTP needs to be set up. This is just a matter of enabling the service in xinetd.
Set disable = no in the file /etc/xinetd.d/tftp. After this, restart the xinetd service.
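For reference, the relevant part of /etc/xinetd.d/tftp looks roughly like this after the change (a sketch; the server path and arguments may differ per release):

```
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}
```

The -s option makes in.tftpd serve files relative to /tftpboot, which is why the paths below are given relative to that directory.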

Pxeboot files need to be copied to /tftpboot on the tftp-server:
cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
cp /usr/lib/syslinux/mboot.c32 /tftpboot/
From your OEL distribution, copy the boot-installation files:
cp $MOUNT_OEL_DISTR/images/xen/* /tftpboot/
Create a PXE configuration file for the guest you want to start:
[root@gridnode03 pxelinux.cfg]# gethostip -x 192.168.200.177
C0A8C8B1
So for a guest with IP address 192.168.200.177 we need to put the details for the PV PXE installation into /tftpboot/pxelinux.cfg/C0A8C8B1
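If gethostip is not at hand, the same hexadecimal name can be computed with plain bash and printf (a sketch, using the IP address from this example):

```shell
# Convert a dotted-quad IP to the 8-digit hex name PXELINUX looks for
ip=192.168.200.177
printf '%02X' ${ip//./ }; echo    # each octet becomes two uppercase hex digits
```

This prints C0A8C8B1, matching the gethostip output above.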
[root@gridnode03 ~]# cat /tftpboot/pxelinux.cfg/C0A8C8B1 
default linux
prompt 1
timeout 120
label linux
  kernel vmlinuz
  append initrd=initrd.img lang=en_US keymap=us \
  ks=nfs:192.168.200.200:/vol/vol1/distrib/linux32/workshop-ovs/oel/OEL5U2/ks.cfg \  
  ksdevice=eth0 ip=dhcp
You can see that:
- my OEL kickstart file is on NFS (like my installation media)
- the IP address is obtained via DHCP on eth0
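For reference, PXELINUX does not only look for the exact hex filename: it falls back through progressively shorter prefixes and finally a file called "default" (it also tries a MAC-based name first). A small bash sketch of the hex search order for this guest:

```shell
# Filenames PXELINUX tries under /tftpboot/pxelinux.cfg/, most specific first
hex=C0A8C8B1
while [ -n "$hex" ]; do
  echo "$hex"
  hex=${hex%?}    # drop the last hex digit
done
echo default
```

So one config file per guest (as done here) is the most specific option, while "default" can serve as a catch-all.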

I created my kickstart file from an existing OEL installation, regenerating it with the command system-config-kickstart --generate.

After this, I had to modify some bits about installation media (from cdrom to nfs).
Specifics for my kickstart file here

See the Red Hat site for all kickstart options.
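As an illustration only (the partitioning, password hash and package set are assumptions, not my actual file), an NFS-based kickstart for OEL 5 has this general shape:

```
# Illustrative ks.cfg fragment for an NFS network install
install
nfs --server=192.168.200.200 --dir=/vol/vol1/distrib/linux32/workshop-ovs/oel/OEL5U2
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw --iscrypted <hash>
bootloader --location=mbr
clearpart --all --initlabel
part /boot --fstype ext3 --size=100
part swap --size=2048
part / --fstype ext3 --size=1 --grow
reboot
%packages
@base
```

The nfs directive is the part that replaces the cdrom line from a media-based install.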

Before I could start a VM guest, I also had to:
- install pypxeboot, and
- install udhcp-0.9.8-1usermac

Then I created a VM configuration file:
[root@nlhpblade07 pxe]# cat rk.cfg 
name = "RK"
memory = "1024"
disk = [ 'file:/OVS/running_pool/pxe/system.img,xvda,w',]
vif = [ 'mac=00:16:3e:62:39:d3,bridge=xenbr0', '', ]
vfb = ["type=vnc,vncunused=1,vnclisten=0.0.0.0"]
#bootloader="/usr/bin/pygrub"
bootloader="/usr/bin/pypxeboot"
bootargs=vif[0]
vcpus=1
on_reboot   = 'restart'
on_crash    = 'restart'
Before I could start the VM, the 'disk' (image) had to be in place:
[root@nlhpblade07 pxe]# dd if=/dev/zero of=system.img bs=1M count=8000
8000+0 records in
8000+0 records out
8388608000 bytes (8.4 GB) copied, 165.725 seconds, 50.6 MB/s
[root@nlhpblade07 pxe]# 
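Writing 8 GB of zeros takes a few minutes; a sparse file of the same apparent size can be created almost instantly instead (an alternative I am sketching here, not what the run above used):

```shell
# Create a sparse 8000 MiB image: seek past the end, write nothing
dd if=/dev/zero of=system.img bs=1M count=0 seek=8000
ls -lsh system.img    # apparent size ~8.4 GB, almost no blocks allocated yet
```

Blocks are then allocated on demand as the guest writes to its disk.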
So, after starting the guest, remember that the third console of the installer lets you see what is going on while the anaconda installation procedure runs:
[Screenshot: snap-rac-vm00037.jpg]
After the installation, and before the reboot, the VM configuration file had to be modified so the guest boots from its own disk with pygrub; it then looks like this:
[root@nlhpblade07 pxe]# cat rk.cfg
name = "RK"
memory = "1024"
disk = [ 'file:/OVS/running_pool/pxe/system.img,xvda,w',]
vif = [ 'mac=00:16:3e:62:39:d3,bridge=xenbr0', '', ]
vfb = ["type=vnc,vncunused=1,vnclisten=0.0.0.0"]
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot   = 'restart'
on_crash    = 'restart'
[Screenshot: snap-rac-vm00040.jpg]

After a successful installation the OS is set up and ready to be used:

[Screenshot: snap-rac-vm00041.jpg]

Rene Kundersma
Oracle Expert Services, The Netherlands