Thursday Nov 01, 2007

Indiana VMWare...

The OpenSolaris Developer Preview is now available.


It's available for download at: http://dlc.sun.com/osol/indiana/downloads/current/in-preview.iso

This is an x86-based LiveCD install image, containing some new and emerging OpenSolaris technologies.

I gave it a try today in a VMware Fusion virtual machine on a Mac OS X 10.5 host.

The experience was great! This is a LiveCD, so it boots right into GNOME and works fine. I wanted to install a developer version of Solaris with NetBeans, IdM 7.1, and a few other tools.

The installation was extremely simple, but here are a few things I ran into along the way:

The VMware virtual disk should be 8 GB. For some reason, the installer would not proceed when I selected a 4 GB disk.

The default screen resolution was too large for VMware on my machine, so I switched to 1024 x 768, and even then I had to scroll down to navigate the install screens. I tried 800 x 600 but could not see the bottom of the install screens at all, so I switched back to 1024 x 768. I opened an install bug on this issue at http://www.opensolaris.org/os/project/indiana/resources/reporting_bugs/

Select a 64-bit VMware image (assuming your hardware is 64-bit capable) and the 32-bit boot option from the LiveCD. When the installation is done, the installed system will boot in 64-bit mode if the hardware is capable.
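To confirm which mode the installed system actually came up in, check isainfo after the install; the output below is roughly what I would expect on 64-bit capable hardware:

# isainfo -kv
64-bit amd64 kernel modules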

Some impressive things about this milestone OpenSolaris release:
  • LiveCD for OpenSolaris!
  • cdrom install media!
  • Installation was almost too easy!
  • ZFS is the default filesystem!

Wednesday Feb 21, 2007

The Vastness of Solaris

A presentation entitled "What's new in Solaris 10 11/06" was given last night by a Sun colleague, Brad Beatles, at the Dallas/Ft. Worth OpenSolaris User Group - DFWOSUG.

This inaugural meeting took place after last month's meeting was cancelled due to an ice storm.

I'm using Solaris and OpenSolaris heavily for my identity demonstration environments and used this opportunity to explore the new features in more detail.

The first slide caught my attention immediately. It simply said "The Vastness of Solaris". How true. I talked to Brad afterward to find out who coined that phrase; he said he wasn't sure of its origin, but he had heard it from another Sun colleague. Anyone who explores the breadth of features in Solaris certainly understands that this phrase rings true.

The new features that really caught my eye were the new zone options:
  • zone clone
  • zone rename
  • zone move

These new features will make the build process for some of the environments I maintain much easier and allow a level of automation that is truly impressive. The basic premise is to use a minimal global zone (built from JumpStart) with no extra software or special configuration installed. Then create a "master" zone with all the configuration you require, clone that master zone, and use the clones for everything. This way, in the event of any changes, you can always test them prior to adopting them and "roll back" to a previous zone if necessary. Combined with the ability to move a zone from one server to another and to use ZFS storage, this is quite powerful; a rough command-line sketch is below. Brad's presentation will be posted to the Dallas/Ft. Worth OpenSolaris User Group - DFWOSUG soon.
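For reference, here is a rough sketch of how these operations look from the command line. This is from my own notes rather than Brad's slides, the zone names and paths are hypothetical, and the exact syntax should be double-checked against the zonecfg(1M) and zoneadm(1M) man pages:

Clone a new zone from a master zone (export the master's configuration, edit the copy to change the zonepath and IP address, then clone):
# zonecfg -z master export -f /tmp/master.cfg
# zonecfg -z web1 -f /tmp/master.cfg
# zoneadm -z web1 clone master

Move an installed (halted) zone to a new zonepath:
# zoneadm -z web1 move /newdisk/zones/web1

Rename a halted zone:
# zonecfg -z web1 "set zonename=web2"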

Saturday Jan 20, 2007

Time To FLOSS?

When I was a kid and went to the dentist I was fooled by a picture which was on the ceiling of the exam room. The picture was just a single phrase (in a frame):

You don't have to floss all your teeth, just the ones you want to keep.

I was a kid and did not like flossing, but since I had nothing better to do in the dentist's chair than look up, I read that phrase every visit and pondered it, and eventually I got the point: it was for my own good.

Oddly enough, this posting is not really about dental hygiene; it's about software. Sun understands better than anyone that open source is a good thing, and Sun has always been a leader in open source software. Here is a recent report with some hard numbers showing Sun as a leader in Free/Libre/Open Source Software (FLOSS), aka FOSS:
Huge New Study On Free/Open Software

Now for the part about this being the time. I am rebuilding a server of mine (details to follow in a future post) with OpenSolaris, Open Directory, and Open SSO, and getting involved in the OpenSolaris User Group (this week's meeting was cancelled due to an ice storm). Next meeting: Tues. Feb. 20th, 2007:

Dallas/Ft. Worth OpenSolaris User Group - DFWOSUG

Monday Aug 29, 2005

Solaris x86 VMware adding a drive

Solaris 10, Adding drives in VMware

If you have a Solaris VMware image which needs additional space, these steps can be performed to add another disk to the image and configure it. (These steps were applicable to previous versions of Solaris, but this post was just updated after installing Solaris 10, 8/07). If you are new to Solaris or want a great alternative to Linux, explore the Solaris Express Developer Edition Community.

The following steps can be performed to add another disk to a Solaris 10 VMware image. Most steps are general Solaris admin tasks, so the configuration is straightforward.

Add a VMware Disk to a Solaris image

  • Shut down the VM
  • Edit the VMware configuration: Select Virtual Machines -> Settings
  • Add a new hard disk device
  • Start the Solaris image

Tell Solaris to look for new devices when booting

  • In the GRUB menu, select the Solaris entry that you want to boot.
  • Press e to edit it.
  • Select the "kernel /platform" line.
  • Press e again to edit that line.
  • Add a space followed by -r to the end of the 'kernel' line:
    kernel /platform/i86pc/multiboot -r
  • Press Enter to accept the change.
  • Press b to boot.
OPTIONAL: Another way to force Solaris to rebuild its device tree while the system is running is to create an empty /reconfigure file and then reboot:
#touch /reconfigure
#reboot
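A third option, if the controller can already see the new disk, is devfsadm, which rebuilds the device tree on a running system without a reboot:

# devfsadm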

Use format to Partition the new disk

#format
Example output:
AVAILABLE DISK SELECTIONS:
       0. c0d0 
          /pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
       1. c0d1 
          /pci@0,0/pci-ide@7,1/ide@0/cmdk@1,0
Select the new device (1 for c0d1 in my case):
Specify disk (enter its number): 1
Select fdisk to create a partition table:
format> fdisk
Select 'y' to use 100% of the disk for Solaris.
Enter the partition menu - select 'p' for partition:
format> p
Then 'p' to print the partition table:
partition> p
Note the size of the device (partition 2, 1 GB in my case):
Current partition table (original):
Total disk cylinders available: 1020 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0 - 1020     1021.00MB    (1021/0/0) 2091008
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0
  8       boot    wu       0 -    0        1.00MB    (1/0/0)       2048
  9 alternates    wm       1 -    2        2.00MB    (2/0/0)       4096

partition>
Allocate space to slice 0 - select it and enter its details:
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 1017c
Note: The size 1017c (cylinders) was entered above because slices 8 and 9 use the first 3 cylinders. The number of remaining cylinders is obtained by subtracting the used cylinders (3) from the total cylinders (1020, as shown for slice 2, the backup slice, which covers the entire disk).
Select 'p' to print the partition table and verify the correct size
partition> p
Current partition table (unnamed):
Total disk cylinders available: 1020 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       3 - 1019     1017.00MB    (1017/0/0) 2082816
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0 - 1020     1021.00MB    (1021/0/0) 2091008
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0
  8       boot    wu       0 -    0        1.00MB    (1/0/0)       2048
  9 alternates    wm       1 -    2        2.00MB    (2/0/0)       4096

Select 'label' to write the partition table to the disk, then quit format:
partition> label
Ready to label disk, continue? y
partition> q
format> q

Create a new file system on the new disk slice

First, a quick note about options. You can create a UFS filesystem or the fantastic ZFS filesystem; I will show the steps for each. Some of my Sun colleagues have told me that if you are creating a partition for Solaris Live Upgrade to use, it must be a bootable partition, and as of 9/07 you cannot boot from a ZFS partition, so you must create a UFS one. If the objective is to create a Live Upgrade partition, you are done at this point (you don't put a filesystem on it; lucreate will handle that).
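As an aside, if the new slice is destined for Live Upgrade, the lucreate step would look roughly like this (the boot environment name newBE is just a placeholder):

# lucreate -n newBE -m /:/dev/dsk/c0d1s0:ufs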

OPTION 1: create a ufs filesystem

bash-3.00# newfs /dev/dsk/c0d1s0
newfs: construct a new file system /dev/rdsk/c0d1s0: (y/n)? y
/dev/rdsk/c0d1s0:       2082816 sectors in 1017 cylinders of 64 tracks, 32 sectors
1017.0MB in 64 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)

Create a mount point and mount the new filesystem:
bash-3.00# mkdir /disk2
bash-3.00# mount /dev/dsk/c0d1s0 /disk2
bash-3.00# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0d0s0      9239837 3366138 5781301    37%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 2109532     852 2108680     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                     9239837 3366138 5781301    37%    /lib/libc.so.1
fd                         0       0       0     0%    /dev/fd
swap                 2108728      48 2108680     1%    /tmp
swap                 2108708      28 2108680     1%    /var/run
/hgfs                16777215    4096 16772864     1%    /hgfs
/tmp/VMwareDnD       67108860   16384 67091456     1%    /var/run/vmblock
/dev/dsk/c0d1s0       978927    1041  919151     1%    /disk2

OPTIONAL: edit the /etc/vfstab file to add the mount point so the filesystem is mounted at boot:
/dev/dsk/c0d1s0 /dev/rdsk/c0d1s0        /disk2 ufs     1       yes     -

Mount the new filesystem:
#mount /disk2 
or 
#mountall

OPTION 2: create a zfs filesystem

The ZFS filesystem is quite impressive, and it is also quite simple to set up. Create the zpool:
# zpool create -f disk3 c1d1s0
The filesystem is now created and mounted:
bash-3.00# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0d0s0      9239837 3366140 5781299    37%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 2104416     852 2103564     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                     9239837 3366140 5781299    37%    /lib/libc.so.1
fd                         0       0       0     0%    /dev/fd
swap                 2103612      48 2103564     1%    /tmp
swap                 2103592      28 2103564     1%    /var/run
/hgfs                16777215    4096 16772864     1%    /hgfs
/tmp/VMwareDnD       67108860   16384 67091456     1%    /var/run/vmblock
/dev/dsk/c0d1s0       978927    1041  919151     1%    /disk2
disk3                 999424      24  999339     1%    /disk3


Many other interesting things can be done with ZFS, but that is a topic for another post. Here are a few snapshot commands to play with:
# zfs snapshot disk3@empty
# zfs list -t snapshot
NAME          USED  AVAIL  REFER  MOUNTPOINT
disk3@empty      0      -  24.5K  -
Now we can roll back to the empty snapshot at any time in the future with:
# zfs rollback -r disk3@empty
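And when you are done experimenting, the snapshot can be removed:
# zfs destroy disk3@empty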



Friday Aug 26, 2005

Solaris 10 Zones Java Enterprise System log...

I'm rebuilding some of my environments and wanted to share the relevant steps...

Staging a Solaris 10 system for Java Enterprise System

I have attached my configuration notes for this process; a much more comprehensive description of Solaris Zones in general is available.

1) Install Solaris 10
  • I attempted to minimize the number of packages, but with limited time I used the End User cluster minus StarOffice and Evolution.
  • I added the extra packages required by the JES components (per user, etc.) See the Java Enterprise System docs for full instructions.
2) Create a zone for installation of the Java Enterprise System

# mkdir -p /zone/jes3
# chmod 700 /zone/jes3


Create a zone configuration file for a full root zone (required by JES)

# more jes3zone.cfg
create -b
set zonepath=/zone/jes3
set autoboot=false
add net
set address=192.168.159.90
set physical=pcn0
end


# zonecfg -z jes3 -f jes3zone.cfg
# zoneadm -z jes3 install


To boot the zone:

#zoneadm -z jes3 boot

To access the zone console (to answer the first-time startup questions):

#zlogin -C jes3
(To break out of zlogin: ~.)


To shut down the zone:

#zoneadm -z jes3 halt
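At any point you can check the state of the zone (configured, installed, running) from the global zone; the output below is approximately what you would see once jes3 is up:

#zoneadm list -cv
  ID NAME             STATUS         PATH
   0 global           running        /
   1 jes3             running        /zone/jes3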

After booting the zone, you can install the Java Enterprise System from within it. The installation bits must be made available to the zone. This seems easy enough; here are a few options:

  • 1) Copy the bits into the zone and install - since I have limited disk space available, this was not an option for me.
  • 2) Use the global zone's cdrom via a loopback file system, mounting the global zone's cdrom from the zone. (Warning: this gets you access to the cdrom, but I had problems with the JES installer using this approach; still exploring why...)


  • Shut down the zone, then add the loopback filesystem to its configuration:

    # zonecfg -z jes3
    add fs
    set dir=/cdrom
    set special=/cdrom
    set type=lofs
    set options=[nodevices]
    end
    
  • 3) Add the cdrom device to the zone. (Red flags all over on this one; all the docs suggest that devices should not be shared into zones.)
  • 4) Use NFS - This is an old standby and was the right solution for me. It is easy to configure and worked great.


  • From the global zone:

    #vi /etc/dfs/dfstab
    Add the following line to this file to share the cdrom via NFS:
    


    share -F nfs -o ro /cdrom/cdrom0

    #/etc/init.d/nfs.server start

    To validate the share is accessible:

    #share
    -               /cdrom/jes_05q1_sol_x86   ro   ""
    (note: it shows the name of the cdrom)
    


    From the zone mount the nfs share:

    #mkdir /globalcdrom
    #mount -F nfs idm:/cdrom/jes_05q1_sol_x86 /globalcdrom
    (note: I used the name of the cdrom to mount)
    


    Now the installation can proceed from /globalcdrom
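    When the installation is finished, the NFS share can be unmounted from the zone and, if desired, unshared from the global zone (the unshare path matches the cdrom name shown above):

    #umount /globalcdrom
    #unshare /cdrom/jes_05q1_sol_x86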

