Tuesday Apr 26, 2016

Renaming a zone and changing the mountpoint

Zones are great, since they allow you to run and manage all of your applications in isolated containers... but sometimes, right at the end of the entire installation, you realize that you could probably have picked better naming conventions for the zones and for the zpools/datasets. So you start scratching your head, repeating to yourself that reinstalling everything from scratch is not an option... Here's the scenario: my zones are both hosted on a zpool named BADPOOL, which I'm going to rename to ZonesPool:


# zpool list
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
BADPOOL    15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -
rpool      63.5G  16.8G  46.7G  26%  1.00x  ONLINE  -
# zfs list |grep BADPOOL
BADPOOL                                          5.87G  9.76G    31K  legacy
BADPOOL/old                                    2.93G  9.76G    32K  /zones/old
BADPOOL/old/rpool                              2.93G  9.76G    31K  /rpool
BADPOOL/old/rpool/ROOT                         1.93G  9.76G    31K  legacy
BADPOOL/old/rpool/ROOT/solaris-15              1.93G  9.76G  1.73G  /zones/old/root
BADPOOL/old/rpool/ROOT/solaris-15/var           206M  9.76G   206M  /zones/old/root/var
BADPOOL/old/rpool/VARSHARE                     1.13M  9.76G  1.07M  /var/share
BADPOOL/old/rpool/VARSHARE/pkg                   63K  9.76G    32K  /var/share/pkg
BADPOOL/old/rpool/VARSHARE/pkg/repositories      31K  9.76G    31K  /var/share/pkg/repositories
BADPOOL/old/rpool/app                          1022M  9.76G  1022M  /app
BADPOOL/old/rpool/export                        120K  9.76G    32K  /export
BADPOOL/old/rpool/export/home                    88K  9.76G  55.5K  /export/home
BADPOOL/old/rpool/export/home/admin            32.5K  9.76G  32.5K  /export/home/admin
BADPOOL/bad                                    2.94G  9.76G    32K  /zones/bad
BADPOOL/bad/rpool                              2.94G  9.76G    31K  /rpool
BADPOOL/bad/rpool/ROOT                         1.92G  9.76G    31K  legacy
BADPOOL/bad/rpool/ROOT/solaris-15              1.92G  9.76G  1.72G  /zones/bad/root
BADPOOL/bad/rpool/ROOT/solaris-15/var           204M  9.76G   204M  /zones/bad/root/var
BADPOOL/bad/rpool/VARSHARE                     1.13M  9.76G  1.07M  /var/share
BADPOOL/bad/rpool/VARSHARE/pkg                   63K  9.76G    32K  /var/share/pkg
BADPOOL/bad/rpool/VARSHARE/pkg/repositories      31K  9.76G    31K  /var/share/pkg/repositories
BADPOOL/bad/rpool/app                          1.02G  9.76G  1.02G  /app
BADPOOL/bad/rpool/export                        110K  9.76G    32K  /export
BADPOOL/bad/rpool/export/home                  78.5K  9.76G    46K  /export/home
BADPOOL/bad/rpool/export/home/admin            32.5K  9.76G  32.5K  /export/home/admin

The zones are currently named old and bad, and I'd like to rename them to this and that; first of all, of course, the zones must be halted:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - old              installed   /zones/old                   solaris    shared
   - bad              installed   /zones/bad                   solaris    shared

Now, let's deal with the zpool part first. ZFS has no way to rename a zpool that is already imported; the only option is to export the pool and re-import it under the new, correct name:


# zpool list BADPOOL
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
BADPOOL    15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -
# zpool export BADPOOL
# zpool import BADPOOL ZonesPool
# zpool list ZonesPool
NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
ZonesPool   15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -

And that was easy ;-) But the various datasets still reflect the previous naming, with the old and bad names and mountpoints:


# zfs list -r ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/old                                     2.93G  9.76G    32K  /zones/old
ZonesPool/old/rpool                               2.93G  9.76G    31K  /rpool
ZonesPool/old/rpool/ROOT                          1.93G  9.76G    31K  legacy
ZonesPool/old/rpool/ROOT/solaris-15               1.93G  9.76G  1.73G  /
ZonesPool/old/rpool/ROOT/solaris-15/var            206M  9.76G   206M  /var
ZonesPool/old/rpool/VARSHARE                      1.13M  9.76G  1.07M  /var/share
ZonesPool/old/rpool/VARSHARE/pkg                    63K  9.76G    32K  /var/share/pkg
ZonesPool/old/rpool/VARSHARE/pkg/repositories       31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/old/rpool/app                           1022M  9.76G  1022M  /app
ZonesPool/old/rpool/export                         120K  9.76G    32K  /export
ZonesPool/old/rpool/export/home                     88K  9.76G  55.5K  /export/home
ZonesPool/old/rpool/export/home/admin             32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/bad                                     2.94G  9.76G    32K  /zones/bad
ZonesPool/bad/rpool                               2.94G  9.76G    31K  /rpool
ZonesPool/bad/rpool/ROOT                          1.92G  9.76G    31K  legacy
ZonesPool/bad/rpool/ROOT/solaris-15               1.92G  9.76G  1.72G  /
ZonesPool/bad/rpool/ROOT/solaris-15/var            204M  9.76G   204M  /var
ZonesPool/bad/rpool/VARSHARE                      1.13M  9.76G  1.07M  /var/share
ZonesPool/bad/rpool/VARSHARE/pkg                    63K  9.76G    32K  /var/share/pkg
ZonesPool/bad/rpool/VARSHARE/pkg/repositories       31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/bad/rpool/app                           1.02G  9.76G  1.02G  /app
ZonesPool/bad/rpool/export                         110K  9.76G    32K  /export
ZonesPool/bad/rpool/export/home                   78.5K  9.76G    46K  /export/home
ZonesPool/bad/rpool/export/home/admin             32.5K  9.76G  32.5K  /export/home/admin

So we need to rename the datasets:


# zfs rename ZonesPool/old ZonesPool/this
# zfs rename ZonesPool/bad ZonesPool/that
# zfs list -r ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/this                                      2.93G  9.76G    32K  /zones/old
ZonesPool/this/rpool                                2.93G  9.76G    31K  /rpool
ZonesPool/this/rpool/ROOT                           1.93G  9.76G    31K  legacy
ZonesPool/this/rpool/ROOT/solaris-15                1.93G  9.76G  1.73G  /
ZonesPool/this/rpool/ROOT/solaris-15/var             206M  9.76G   206M  /var
ZonesPool/this/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/this/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/this/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/this/rpool/app                            1022M  9.76G  1022M  /app
ZonesPool/this/rpool/export                          120K  9.76G    32K  /export
ZonesPool/this/rpool/export/home                      88K  9.76G  55.5K  /export/home
ZonesPool/this/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/that                                      2.94G  9.76G    32K  /zones/bad
ZonesPool/that/rpool                                2.94G  9.76G    31K  /rpool
ZonesPool/that/rpool/ROOT                           1.92G  9.76G    31K  legacy
ZonesPool/that/rpool/ROOT/solaris-15                1.92G  9.76G  1.72G  /
ZonesPool/that/rpool/ROOT/solaris-15/var             204M  9.76G   204M  /var
ZonesPool/that/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/that/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/that/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/that/rpool/app                            1.02G  9.76G  1.02G  /app
ZonesPool/that/rpool/export                          110K  9.76G    32K  /export
ZonesPool/that/rpool/export/home                    78.5K  9.76G    46K  /export/home
ZonesPool/that/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin

As well as the mount points:


# zfs set mountpoint=/zones/this ZonesPool/this
# zfs set mountpoint=/zones/that ZonesPool/that
# zfs list -r ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/this                                      2.93G  9.76G    32K  /zones/this
ZonesPool/this/rpool                                2.93G  9.76G    31K  /rpool
ZonesPool/this/rpool/ROOT                           1.93G  9.76G    31K  legacy
ZonesPool/this/rpool/ROOT/solaris-15                1.93G  9.76G  1.73G  /
ZonesPool/this/rpool/ROOT/solaris-15/var             206M  9.76G   206M  /var
ZonesPool/this/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/this/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/this/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/this/rpool/app                            1022M  9.76G  1022M  /app
ZonesPool/this/rpool/export                          120K  9.76G    32K  /export
ZonesPool/this/rpool/export/home                      88K  9.76G  55.5K  /export/home
ZonesPool/this/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/that                                      2.94G  9.76G    32K  /zones/that
ZonesPool/that/rpool                                2.94G  9.76G    31K  /rpool
ZonesPool/that/rpool/ROOT                           1.92G  9.76G    31K  legacy
ZonesPool/that/rpool/ROOT/solaris-15                1.92G  9.76G  1.72G  /
ZonesPool/that/rpool/ROOT/solaris-15/var             204M  9.76G   204M  /var
ZonesPool/that/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/that/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/that/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/that/rpool/app                            1.02G  9.76G  1.02G  /app
ZonesPool/that/rpool/export                          110K  9.76G    32K  /export
ZonesPool/that/rpool/export/home                    78.5K  9.76G    46K  /export/home
ZonesPool/that/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin

Now that we have the filesystems in place, we still need to 'refine' the zones themselves, since the zone configurations still carry the old names and definitions:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - old              installed   /zones/old                   solaris    shared
   - bad              installed   /zones/bad                   solaris    shared
# zoneadm -z old rename this
# zoneadm -z bad rename that
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             installed   /zones/old                   solaris    shared
   - that             installed   /zones/bad                   solaris    shared

After renaming the zones, we now have to change their PATH; but again, this operation cannot be done while a zone is in the 'installed' state, attached to a live system. So we first detach each zone, using the -F option to force the detach, since the dataset the zone was originally built on no longer exists under its old name:


# zoneadm -z this detach -F
# zoneadm -z that detach -F
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             configured  /zones/old                   solaris    shared
   - that             configured  /zones/bad                   solaris    shared

Now that both zones are detached, we can change the zone path:


# zonecfg -z this info zonepath
zonepath: /zones/old
# zonecfg -z that info zonepath
zonepath: /zones/bad
# zonecfg -z this set zonepath=/zones/this
# zonecfg -z that set zonepath=/zones/that
# zonecfg -z this info zonepath
zonepath: /zones/this
# zonecfg -z that info zonepath
zonepath: /zones/that

And verify that the change has been made correctly; the zones are, of course, still in the 'configured' state:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             configured  /zones/this                  solaris    shared
   - that             configured  /zones/that                  solaris    shared

We're now ready to re-attach the zones to the live system:


# zoneadm -z this attach
Progress being logged to /var/log/zones/zoneadm.20160426T151603Z.this.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: ZonesPool/this/rpool/ROOT/solaris-15
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image. (zone:this)

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/this/root/var/log/zones/zoneadm.20160426T151603Z.this.attach
# zoneadm -z that attach
Progress being logged to /var/log/zones/zoneadm.20160426T153312Z.that.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: ZonesPool/that/rpool/ROOT/solaris-15
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image. (zone:that)

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/that/root/var/log/zones/zoneadm.20160426T153312Z.that.attach
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             installed   /zones/this                  solaris    shared
   - that             installed   /zones/that                  solaris    shared


Finally, we need to boot the zones:


# zoneadm -z this boot
# zoneadm -z that boot
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   6 this             running     /zones/this                  solaris    shared
   7 that             running     /zones/that                  solaris    shared
# zlogin this zonename
this
# zlogin that zonename
that

And this concludes our journey through zones, zpools and datasets ;-)
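For reference, here's the whole procedure condensed into a small shell sketch. It uses only the commands shown above and the same zone and pool names; I haven't run it end-to-end as a script, and it's guarded so it steps aside on a machine without zoneadm, so treat it as a checklist rather than a turnkey tool.

```shell
#!/bin/sh
# Condensed checklist of the rename procedure above.
# Run as root on Solaris 11, with the zones halted; adjust names before use.
OLDPOOL=BADPOOL
NEWPOOL=ZonesPool

rename_zone() {
    old=$1
    new=$2
    zfs rename "$NEWPOOL/$old" "$NEWPOOL/$new"          # dataset name
    zfs set mountpoint=/zones/"$new" "$NEWPOOL/$new"    # dataset mountpoint
    zoneadm -z "$old" rename "$new"                     # zone name
    zoneadm -z "$new" detach -F                         # -F: old dataset path is gone
    zonecfg -z "$new" set zonepath=/zones/"$new"        # zonepath
    zoneadm -z "$new" attach                            # re-attach to the live system
    zoneadm -z "$new" boot                              # and boot
}

if command -v zoneadm >/dev/null 2>&1; then
    # Rename the pool first, via export/re-import:
    zpool export "$OLDPOOL"
    zpool import "$OLDPOOL" "$NEWPOOL"
    rename_zone old this
    rename_zone bad that
else
    echo "zoneadm not found: this sketch is Solaris-only, nothing done"
fi
```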

Friday Oct 29, 2010

ZFS ARC Cache tuning for a laptop...

I have OpenSolaris (snv_150) on my laptop (Toshiba Tecra M5, Intel Core2 Duo @ 2 GHz, 2 GB RAM), and I've noticed that it sometimes becomes slow and unresponsive for a few seconds while the disk spins hard... a very simple probe showed the problem:


# kstat -m zfs -n arcstats -T d 2


I'll spare you the never-ending output, but the interesting numbers were the ones for c, c_max, c_min and size.
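To watch just those counters instead of the whole arcstats dump, kstat's module:instance:name:statistic selector can be used; a small guarded sketch (Solaris-only, so it steps aside elsewhere):

```shell
#!/bin/sh
# Print only the interesting ARC counters, using the Solaris kstat
# selector syntax module:instance:name:statistic. No-op on other systems.
ARC_STATS="zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max zfs:0:arcstats:c_min"
if command -v kstat >/dev/null 2>&1; then
    kstat -p $ARC_STATS
else
    echo "kstat not available: run this on (Open)Solaris"
fi
```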


As I read in the ZFS Evil Tuning Guide:


[...] The ZFS Adaptive Replacement Cache (ARC) tries to use most of a system's
available memory to cache file system data. The default is to use all
of physical memory except 1 GB. As memory pressure increases, the ARC
relinquishes memory. [...]


My problem was that when launching many applications (typically at login, when you might start Firefox, Thunderbird, NetBeans, Acrobat Reader and OpenOffice almost sequentially), the laptop clogged up, with the disk spinning and the system almost unresponsive. I know that my laptop has limited performance and is not the latest piece of hardware on the market, but still, when I launch the same applications under other operating systems [both Ubuntu Linux 10.10 (64-bit) and Windows XP SP3 (32-bit)] I don't have to wait that long, and the system feels more responsive.


Monitoring the size parameter of the ARC cache, I saw that it always hovered around 1 GB, while the applications were left with too little free memory and were swapping to disk... this was not sane.


First I shrank the amount of RAM allocated to the ZFS ARC live (as explained in the ZFS Evil Tuning Guide), and since the performance and stability of the machine seemed improved, I put that value into the /etc/system file to make it persistent across reboots:


set zfs:zfs_arc_max = 822083584
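
As a sanity check on that number (which I picked for this 2 GB machine):

```shell
#!/bin/sh
# 822083584 bytes is exactly 784 MiB, which caps the ARC well below the
# default (all physical memory minus 1 GB) and leaves roughly 1.2 GiB
# of the laptop's 2 GiB of RAM for applications.
bytes=822083584
mib=$((bytes / 1024 / 1024))
echo "${mib} MiB"
```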


Even with the cap in place, the ZFS ARC cache size is more constant now (its average stays close to the configured value, with limited 'fluctuations'), and I'm running without any apparent problem... So far, so good ;-)

About

Marco Milo-Oracle
