Need another disk in your zone? No problem!

Solaris 11.2 allows you to modify your zone configuration and apply those changes without a reboot.  Check this out:

First, I want to show that there are no disk devices present in the zone.

root@vzl-212:~# zlogin z1 'find /dev/*dsk'
/dev/dsk
/dev/rdsk

Next, I modify the static zone configuration to add access to all partitions of a particular disk.  This has no effect on the running zone.

root@vzl-212:~# zonecfg -z z1
zonecfg:z1> add device
zonecfg:z1:device> set match=/dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*
zonecfg:z1:device> end
zonecfg:z1> exit

I then use zoneadm to apply the changes in the zone configuration to the running zone.

root@vzl-212:~# zoneadm -z z1 apply
zone 'z1': Checking: Adding device match=/dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*
zone 'z1': Applying the changes

And now, the zone can see the devices. 

root@vzl-212:~# zlogin z1 'find /dev/*dsk'
/dev/dsk
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s0
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s1
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s2
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s3
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s4
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s5
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s6
/dev/rdsk
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s0
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s1
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s2
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s3
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s4
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s5
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s6
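
If you prefer to script this instead of using the interactive shell, zonecfg also accepts a semicolon-separated command string. A minimal sketch of the same add-and-apply sequence:

# Scripted equivalent of the interactive session above
zonecfg -z z1 'add device; set match=/dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*; end'
zoneadm -z z1 apply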

If I want to modify the configuration only temporarily, there's an option to change just the running configuration. For example, if I plan on doing some maintenance on my ZFS Storage Appliance that hosts the LUN I allocated above, I may want to be sure that the zone can't see it for a bit.  That's easy enough.

Here I use zonecfg's -r option to modify the running configuration.

root@vzl-212:~# zonecfg -z z1 -r
zonecfg:z1> info device
device:
    match: /dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
zonecfg:z1> remove device
zonecfg:z1> info device
zonecfg:z1> commit
zone 'z1': Checking: Removing device match=/dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*
zone 'z1': Applying the changes
zonecfg:z1> exit
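
The same temporary removal can be scripted as a one-liner; a sketch, assuming -r accepts a command string just like the default mode does:

# Remove the device from the running configuration only
zonecfg -z z1 -r 'remove device; commit'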

Without the -r option, zonecfg displays the on-disk configuration.

root@vzl-212:~# zonecfg -z z1 info device
device:
    match: /dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*
    storage not specified
    allow-partition not specified
    allow-raw-io not specified

root@vzl-212:~# zonecfg -z z1 -r info device
root@vzl-212:~#
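
A convenient way to compare the two views is to export each and diff them. This is a sketch, and it assumes export is honored in -r mode the same way info is:

# Diff the on-disk configuration against the running one
zonecfg -z z1 export > /tmp/z1.ondisk
zonecfg -z z1 -r export > /tmp/z1.running
diff /tmp/z1.ondisk /tmp/z1.running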

The contents of /dev/dsk and /dev/rdsk inside the zone reflect the running configuration.

root@vzl-212:~# zlogin z1 'find /dev/*dsk'
/dev/dsk
/dev/rdsk

When it is time to revert, simply apply the on-disk configuration and the device tree inside the zone is restored to match it.

root@vzl-212:~# zoneadm -z z1 apply
zone 'z1': Checking: Adding device match=/dev/*dsk/c0t600144F0DBF8AF19000052EC175B0004d0*
zone 'z1': Applying the changes

root@vzl-212:~# zlogin z1 'find /dev/*dsk'
/dev/dsk
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s0
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s1
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s2
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s3
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s4
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s5
/dev/dsk/c0t600144F0DBF8AF19000052EC175B0004d0s6
/dev/rdsk
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s0
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s1
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s2
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s3
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s4
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s5
/dev/rdsk/c0t600144F0DBF8AF19000052EC175B0004d0s6

Live Zone Reconfiguration is not limited to device resources; it works with most other resources as well.  See the Live Zone Reconfiguration section in zones(5).
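
For example, live-adding a VNIC to an exclusive-IP zone follows the same modify-then-apply pattern; a sketch, where the linkname and lower-link values are assumptions about your system:

# Sketch: add an anet resource and apply it to the running zone
# net1 (linkname) and net0 (lower-link) are assumed names; adjust for your system
zonecfg -z z1 'add anet; set linkname=net1; set lower-link=net0; end'
zoneadm -z z1 apply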

Comments:

Not able to add more than 8 disks to a kernel zone? How many are supported?

Some logs ....

While exporting disks from an LDom to a kernel zone, we hit an issue with overlapping disk IDs, with the following error message:
"verify_cfg solaris-kz brand error: /dev/rdsk/c1d1 /dev/rdsk/c1d11: bad match value, devices cannot overlap"

This happens when we have two disks with IDs such as c1d1 and c1d11: exporting both to a kernel zone fails because the first LUN ID (c1d1) is a leading substring of the second (c1d11), which triggers the overlap check and prevents adding additional LUNs.
Hence, as of now, we are able to export only 9 LUNs to a kernel zone.

We suspect this will happen on the control domain as well, but we have not checked it yet.

Error Logs:
---------------------------
root@vmorat52-4-LDOM2:~# zonecfg -z rahulkz info | grep c1d3
match: /dev/rdsk/c1d3    <-- existing device with LUN ID 3
root@vmorat52-4-LDOM2:~#

zonecfg:rahulkz> add device
zonecfg:rahulkz:device> set match=/dev/rdsk/c1d30
zonecfg:rahulkz:device> end
zonecfg:rahulkz> verify
verify_cfg solaris-kz brand error: /dev/rdsk/c1d3 /dev/rdsk/c1d30: bad match value, devices cannot overlap
rahulkz: Brand-specific error

----------------------------------------------------------

In the output below, c1d3 is missing; the reason is the presence of LUN c1d30.
-----------------------------------------------------------
root@vmorat52-4-LDOM2:~# zonecfg -z my_kz info
zonename: my_kz
brand: solaris-kz
autoboot: false
autoshutdown: shutdown
bootargs:
pool:
scheduling-class:
hostid: 0xcc202bd

device:
    match not specified
    storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/my_kz/disk0
    id: 0
    bootpri: 0
device:
    match: /dev/rdsk/c1d1
    storage not specified
    id: 2
    bootpri not specified
device:
    match: /dev/rdsk/c1d2
    storage not specified
    id: 1
    bootpri not specified
device:
    match: /dev/rdsk/c1d4
    storage not specified
    id: 4
    bootpri not specified
device:
    match: /dev/rdsk/c1d5
    storage not specified
    id: 5
    bootpri not specified
device:
    match: /dev/rdsk/c1d6
    storage not specified
    id: 6
    bootpri not specified
device:
    match: /dev/rdsk/c1d7
    storage not specified
    id: 7
    bootpri not specified
device:
    match: /dev/rdsk/c1d8
    storage not specified
    id: 8
    bootpri not specified
device:
    match: /dev/rdsk/c1d9
    storage not specified
    id: 9
    bootpri not specified
device:
    match: /dev/rdsk/c1d30
    storage not specified
    id: 10
    bootpri not specified
----------------------------------------------------------

Posted by Venkata Reddy on November 18, 2014 at 03:24 AM CST #

Hi Venkata - thanks for your question.

This blog post concerns solaris (native) zones, not solaris-kz (kernel) zones; live zone reconfiguration is not yet supported with kernel zones. That said, I've been able to reproduce the problem you found with kernel zones but not with solaris zones, and have opened the following bug.

Bug 20051531 - erroneous bad match value, devices cannot overlap

There is no special limit to the number of devices delegated to a kernel zone. A kernel zone should be able to support the same number of devices as any other global zone. It seems as though there is a string comparison error that's causing the problem you are seeing.

A workaround would be to create all of the LDom's disks that you intend to delegate to the zone with IDs in the 100 to 999 range, thus avoiding the problem of "c1d3" being a substring of "c1d30".
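
For instance, the id property of ldm add-vdisk lets you pin each guest disk ID up front; a sketch, where the volume, service, and domain names are hypothetical:

# Hypothetical names throughout; IDs of 100+ keep one c1dN from being a prefix of another
ldm add-vdisk id=100 data100 data100vol@primary-vds0 myldom
ldm add-vdisk id=101 data101 data101vol@primary-vds0 myldom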

Posted by Mike Gerdts on November 18, 2014 at 07:26 AM CST #
