

Increasing Storage Capacity

Guest Author

One of the ways I run OpenSolaris is under VirtualBox. The virtual disk image I initially created was only 8 GB, and yesterday I found myself out of space. Running out of disk space is a situation we've all found ourselves in before, regardless of whether we're running virtualized or not. Initially I thought I could just add another disk to the pool, but it turns out that it is not possible to boot OpenSolaris from a striped disk (see: OpenSolaris will not boot after adding to the ZFS pool). So my options are to replace the entire disk or to move one or more of my file systems onto a new disk. In this entry I'm going to cover the latter, by moving /export off of the boot disk. I'll cover the option of replacing the entire disk in a future post.

Step 1: Make More Space Available

If you're running OpenSolaris in a virtual machine, simply create a new virtual disk for the VM to use. This is how I did it using VirtualBox.

Open the Virtual Disk Manager (from the File menu). At a minimum you'll see your existing virtual disk image:

Click the New icon to start the Create New Virtual Disk Wizard:

And select Dynamically expanding image as the Image Type:

Give the disk a Name and a Size:



Then open the Settings for the virtual machine and select Hard Disks:

And click the Add Attachment icon to add the virtual disk image you just created:
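
If you prefer the command line, VBoxManage can create and attach the disk as well. This is just a sketch: the subcommand names vary across VirtualBox releases, and the VM name ("OpenSolaris") and controller name ("IDE Controller") below are assumptions; check yours with VBoxManage showvminfo:

# Create a 10 GB dynamically expanding disk image (size is in MB)
VBoxManage createhd --filename epool.vdi --size 10240

# Attach it to the VM's IDE controller as a second hard disk
VBoxManage storageattach "OpenSolaris" --storagectl "IDE Controller" \
  --port 1 --device 0 --type hdd --medium epool.vdi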


Step 2: Create a New ZFS Storage Pool

Here we're going to create a new ZFS storage pool, with our new disk as its one and only device. Start OpenSolaris, open a terminal and find the name of the existing ZFS pool in use: 

bleonard@opensolaris:~$ zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  7.81G  6.36G  1.45G  81%  ONLINE  -
bleonard@opensolaris:~$

From this output we can see that I have one pool, called rpool (the root pool), and that it's at 81% capacity. By checking the status of that pool we can determine which disk it currently uses:

bleonard@opensolaris:~$ zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c4d0s0  ONLINE       0     0     0

errors: No known data errors
bleonard@opensolaris:~$

Here we see our existing pool uses disk c4d0s0.

Now run format to determine the name of the new disk we've made available:

bleonard@opensolaris:~$ pfexec format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c4d0
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c4d1
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
Specify disk (enter its number):

Here we see that c4d1 is the ID of the disk we just added. Using that information we can create a new ZFS storage pool on that disk. su to root and then run:

zpool create epool c4d1p0

I'm naming the new pool epool because I'm going to move the file system mounted at /export into that pool. You're free to name your new pool whatever you like. I can now run zpool list and zpool status to see my new pool:

bleonard@opensolaris:~# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
epool  9.94G   111K  9.94G   0%  ONLINE  -
rpool  7.81G  6.36G  1.45G  81%  ONLINE  -
bleonard@opensolaris:~# zpool status
  pool: epool
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        epool     ONLINE       0     0     0
          c4d1p0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c4d0s0  ONLINE       0     0     0

errors: No known data errors
bleonard@opensolaris:~#

Step 3: Copy the Data to the New Pool

In this step we will create a ZFS snapshot of the data we want to migrate and then send it over. Let's start by looking at the current list of datasets by running zfs list:

bleonard@opensolaris:~# zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
epool                                106K  9.78G    18K  /epool
rpool                               6.36G  1.33G    55K  /rpool
rpool@install                       17.5K      -    55K  -
rpool/ROOT                          3.63G  1.33G    18K  /rpool/ROOT
rpool/ROOT@install                    15K      -    18K  -
rpool/ROOT/opensolaris              3.63G  1.33G  2.94G  legacy
rpool/ROOT/opensolaris@install       253M      -  2.22G  -
rpool/ROOT/opensolaris/opt           449M  1.33G   449M  /opt
rpool/ROOT/opensolaris/opt@install    81K      -  3.60M  -
rpool/export                        2.73G  1.33G    19K  /export
rpool/export@install                  15K      -    19K  -
rpool/export/home                   2.73G  1.33G  2.73G  /export/home
rpool/export/home@install             19K      -    21K  -
bleonard@opensolaris:~#

From this output we can see that the file systems mounted at /export and /export/home consume 2.73G of disk space. Let's take a ZFS snapshot of these two file systems:

zfs snapshot -r rpool/export@snapshot  

The -r option takes a recursive snapshot of the file system and all of its descendants. If we run zfs list again, we can see the new snapshots in the list:

bleonard@opensolaris:~# zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
epool                                106K  9.78G    18K  /epool
rpool                               6.36G  1.33G    55K  /rpool
rpool@install                       17.5K      -    55K  -
rpool/ROOT                          3.63G  1.33G    18K  /rpool/ROOT
rpool/ROOT@install                    15K      -    18K  -
rpool/ROOT/opensolaris              3.63G  1.33G  2.94G  legacy
rpool/ROOT/opensolaris@install       253M      -  2.22G  -
rpool/ROOT/opensolaris/opt           449M  1.33G   449M  /opt
rpool/ROOT/opensolaris/opt@install    81K      -  3.60M  -
rpool/export                        2.73G  1.33G    19K  /export
rpool/export@install                  15K      -    19K  -
rpool/export@snapshot                   0      -    19K  -
rpool/export/home                   2.73G  1.33G  2.73G  /export/home
rpool/export/home@install             19K      -    21K  -
rpool/export/home@snapshot              0      -  2.73G  -
bleonard@opensolaris:~#
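
As an aside, the recursive snapshot above is roughly equivalent to snapshotting each file system by hand, except that -r also guarantees the snapshots are taken atomically, at the same moment in time:

# Equivalent to the -r form above, but not atomic across the two file systems
zfs snapshot rpool/export@snapshot
zfs snapshot rpool/export/home@snapshot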

Next we'll send the data to the new pool:

zfs send rpool/export@snapshot | zfs receive epool/export

zfs send rpool/export/home@snapshot | zfs receive epool/export/home
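
As another aside, newer ZFS bits can do this in a single command using a replication stream; a sketch, assuming your build supports zfs send -R:

zfs send -R rpool/export@snapshot | zfs receive -d epool

The -R flag includes all descendant file systems along with their snapshots and properties, and receive -d recreates them under epool using their original names (epool/export, epool/export/home).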


The transfer of /export/home will take a couple of minutes, since there are 2.73 GB of data to copy. Once complete, you can run zfs list again to see that epool now also consumes 2.73 GB of storage:

bleonard@opensolaris:~# zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
epool                               2.73G  7.05G    19K  /epool
epool/export                        2.73G  7.05G    19K  /epool/export
epool/export@snapshot                   0      -    19K  -
epool/export/home                   2.73G  7.05G  2.73G  /epool/export/home
epool/export/home@snapshot              0      -  2.73G  -
rpool                               6.36G  1.33G    55K  /rpool
rpool@install                       17.5K      -    55K  -
rpool/ROOT                          3.63G  1.33G    18K  /rpool/ROOT
rpool/ROOT@install                    15K      -    18K  -
rpool/ROOT/opensolaris              3.63G  1.33G  2.94G  legacy
rpool/ROOT/opensolaris@install       253M      -  2.22G  -
rpool/ROOT/opensolaris/opt           449M  1.33G   449M  /opt
rpool/ROOT/opensolaris/opt@install    81K      -  3.60M  -
rpool/export                        2.73G  1.33G    19K  /export
rpool/export@install                  15K      -    19K  -
rpool/export@snapshot                   0      -    19K  -
rpool/export/home                   2.73G  1.33G  2.73G  /export/home
rpool/export/home@install             19K      -    21K  -
rpool/export/home@snapshot              0      -  2.73G  -
bleonard@opensolaris:~#


You can also see this by running zpool list:

bleonard@opensolaris:~# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
epool  9.94G  2.73G  7.21G  27%  ONLINE  -
rpool  7.81G  6.36G  1.45G  81%  ONLINE  -
bleonard@opensolaris:~#

The next step is to redirect the mount points /export and /export/home from rpool to epool. Since we're currently using rpool/export, it can't be unmounted, so we need to boot into the OpenSolaris live CD.
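
If you try it from the running system, ZFS will refuse. You'll see something like this (a sketch of the typical error, not captured from my machine):

bleonard@opensolaris:~$ pfexec zfs unmount /export/home
cannot unmount '/export/home': Device busy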

Step 4: Redirect the Mount Points

Shut down OpenSolaris and boot from the installation CD. If you're using VirtualBox, open the settings for your VM, select CD/DVD-ROM, and click the check box to mount the CD/DVD drive. If you have the ISO image, VirtualBox can boot directly from that image:

Once the live CD is up and running, open a terminal, su to root (the password is opensolaris), and import the ZFS storage pools. The -f flag forces the import, since the pools weren't cleanly exported before shutdown:

zpool import -f rpool
zpool import -f epool

Running zpool list will show that the pools were imported successfully:

jack@opensolaris:~# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
epool  9.94G  2.73G  7.21G  27%  ONLINE  -
rpool  7.81G  6.36G  1.45G  81%  ONLINE  -
jack@opensolaris:~#

Next unmount /export/home and then /export:

zfs unmount /export/home
zfs unmount /export

Let's set the mountpoint for rpool/export/home to /rpool/export/home:

zfs set mountpoint=/rpool/export/home rpool/export/home

Then point the epool file systems at the original mount locations:

zfs set mountpoint=/export epool/export 
zfs set mountpoint=/export/home epool/export/home
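
If you'd like to check just the mountpoint properties (and confirm they're now set locally on the epool file systems rather than inherited), zfs get can do that too; a sketch:

zfs get -r mountpoint epool

The SOURCE column should read local for both epool/export and epool/export/home.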

You can run zfs list to see your changes:

jack@opensolaris:~# zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
epool                               2.73G  7.05G    19K  /epool
epool/export                        2.73G  7.05G    19K  /export
epool/export@snapshot                   0      -    19K  -
epool/export/home                   2.73G  7.05G  2.73G  /export/home
epool/export/home@snapshot              0      -  2.73G  -
rpool                               6.36G  1.33G    55K  /rpool
rpool@install                       17.5K      -    55K  -
rpool/ROOT                          3.63G  1.33G    18K  /rpool/ROOT
rpool/ROOT@install                    15K      -    18K  -
rpool/ROOT/opensolaris              3.63G  1.33G  2.94G  legacy
rpool/ROOT/opensolaris@install       253M      -  2.22G  -
rpool/ROOT/opensolaris/opt           449M  1.33G   449M  /opt
rpool/ROOT/opensolaris/opt@install    81K      -  3.60M  -
rpool/export                        2.73G  1.33G    19K  /rpool/export
rpool/export@install                  15K      -    19K  -
rpool/export@snapshot                   0      -    19K  -
rpool/export/home                   2.73G  1.33G  2.73G  /rpool/export/home
rpool/export/home@install             19K      -    21K  -
rpool/export/home@snapshot              0      -  2.73G  -
jack@opensolaris:~#

Shut down the live CD and boot back into your installed version of OpenSolaris.

Step 5: Free the Space

If you're running in VirtualBox, go back into the CD/DVD-ROM settings and remove the ISO image. Once your installed version of OpenSolaris is back up and running, your /export directory will be mounted from the new epool.
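
You can verify this with a quick check; a sketch using zfs list's -o option to show just the relevant columns:

zfs list -o name,mountpoint -r epool

epool/export and epool/export/home should show the mountpoints /export and /export/home. Once you're satisfied, it's time to free up the space consumed by rpool/export: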

pfexec zfs destroy -r rpool/export

Running zfs list will indeed show that it is now gone:

bleonard@opensolaris:~$ zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
epool                               2.73G  7.05G    20K  /epool
epool/export                        2.73G  7.05G    19K  /export
epool/export@snapshot                 18K      -    19K  -
epool/export/home                   2.73G  7.05G  2.73G  /export/home
epool/export/home@snapshot           220K      -  2.73G  -
rpool                               3.63G  4.06G    58K  /rpool
rpool@install                       18.5K      -    55K  -
rpool/ROOT                          3.63G  4.06G    18K  /rpool/ROOT
rpool/ROOT@install                    15K      -    18K  -
rpool/ROOT/opensolaris              3.63G  4.06G  2.94G  legacy
rpool/ROOT/opensolaris@install       253M      -  2.22G  -
rpool/ROOT/opensolaris/opt           449M  4.06G   449M  /opt
rpool/ROOT/opensolaris/opt@install    81K      -  3.60M  -
bleonard@opensolaris:~$

And running a zpool list will show that rpool is now only at 46% capacity:

bleonard@opensolaris:~$ zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
epool  9.94G  2.73G  7.21G  27%  ONLINE  -
rpool  7.81G  3.63G  4.18G  46%  ONLINE  -
bleonard@opensolaris:~$

Finally, if you want to get rid of the two snapshots, you can run:

pfexec zfs destroy epool/export@snapshot
pfexec zfs destroy epool/export/home@snapshot
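
If you'd like to confirm the snapshots are gone, you can ask zfs list for snapshots only (a sketch):

zfs list -t snapshot

With both destroyed, no epool snapshots should remain in the output.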

And you should be back in business, but now with a bunch of new living space.


Comments (3)
  • Georg Wednesday, July 16, 2008

    This is not a comment on the topic above, but a general suggestion: please make the display wider, in its current format the articles need a lot of vertical space and one feels a bit like sitting in a pitch-black room, looking through a narrow window into the Sun...

    I like the blog a lot, trying OpenSolaris on Windows using VirtualBox myself. Are you going to report about OpenSolaris news (new releases, planned features, distros etc.) also?

    Georg


  • Roman Strobl Wednesday, July 16, 2008

    I updated the width, check it out. Yes, we plan to also blog about news in OpenSolaris... stay tuned! :)


  • Elijah Menifee Monday, March 23, 2009

    Another alternative is to do it live on the system without going to single user mode or rebooting. My OS 2008.11 is set up as a headless box with no CD, so I had to find another method, as follows.

    Step 1: edit /etc/user_attr to make root normal user instead of role.

    Step 2: edit /etc/ssh/sshd_config to allow root to login, setup ssh credentials for root, and restart the sshd svc

    Step 3: logout as normal user, ssh back in as root, (use who to make sure no users are logged in).

    Step 4: zfs unmount /export*

    Step 5: change zfs mount location properties

