Monday Sep 21, 2015

Configuring Secure NFS in Solaris 11

This entry goes through the steps to build a Secure NFS server and client configuration. This includes the necessary DNS server configuration, creating a single Kerberos Key Distribution Center, and configuring the NFS server and client to force access using Secure NFS.

Secure NFS: Step O, as in Optional--NTP and DNS

Optional Network Time Protocol and Domain Name System Setup for Kerberos

Kerberos requires in-sync system time across all systems using the service. Solaris Kerberos also requires direct access to DNS, as it does not use the local name service switch for host name resolution. Thus I start with the steps to set up NTP and DNS, should you need either or both.

NTP

Since my setup is using Solaris Zones on a single system, they share the Global Zone's clock, and thus all the Zones' times are in sync. When using Kerberos across multiple systems, it is suggested to keep clock skew at a minimum. You may be doing this already for other reasons. If not, here is a simple Network Time Protocol configuration. Your routers may be valid NTP servers.

I add several server references in /etc/inet/ntp.conf, which I base off of the provided /etc/inet/ntp.client file.

global# diff /etc/inet/ntp.conf /etc/inet/ntp.client
49,53d48
< server 0.us.pool.ntp.org iburst
< server 1.us.pool.ntp.org iburst
< server 2.us.pool.ntp.org iburst
< server 3.us.pool.ntp.org iburst
global#

Replace the "x.us.pool.ntp.org" entries with your NTP servers' IP addresses or hostnames.
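
With the configuration file in place, the NTP service still has to be enabled. A minimal sketch, assuming the default svc:/network/ntp instance; svcs should report the service online shortly afterwards:

global# svcadm enable ntp
global# svcs ntp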

DNS

DNS infrastructure is required for Kerberos. Solaris' Kerberos is compiled to use DNS to do hostname lookups. See Kerberos, DNS, and the Naming Service.

If you have DNS servers you can update, or even just reference for the nodes you need, please use them. If you don't have that or don't want to use them, here are the steps to set up your own DNS service. This will consist of a single DNS server; a more highly available DNS setup is out of scope for this entry.

Create the DNS server Solaris Zone

My Zone configuration file is as follows.

global# cat dns.cfg
create -b
set brand=solaris
set zonepath=/zones/dns
set autoboot=false
set autoshutdown=shutdown
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=net1
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
set vlan-id=17
end
add anet
set linkname=net1
set lower-link=net0
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add admin
set user=steffen
set auths=login,manage,config
end
global#

The Zone has two network interfaces. The first (linkname=net0) is on VLAN ID 17 and is for this Secure NFS setup. The second (linkname=net1) ties into my local network, and thus my local DNS server (my broadband router at home, or my office network's DNS server, which I can't get modified for my hostnames).

I also set the Zone up so that I can administer it without becoming root, though all the examples here are as root.
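
As an aside, the admin resource above enables delegated Zone administration. A sketch of what that should allow, assuming the authorizations take effect at the user's next login:

steffen$ zoneadm -z dns boot
steffen$ zlogin dns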

I configure the zone using the dns.cfg configuration file as input.

global# zonecfg -z dns -f dns.cfg
UX: /usr/sbin/usermod: steffen is currently logged in, some changes may not take effect until next login.
global#

Then, to speed things up, I clone the Zone from a "master" zone I created in advance. On my system a clone takes less than 20 seconds, while an install with a local IPS repository takes about 90 seconds. Your times will vary based on your system, type of storage, and the network connection to the IPS repository you use.
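
For reference, the "kdcmaster" source Zone was built in advance, roughly along these lines (a sketch; kdcmaster.cfg is assumed to be a configuration file like the one above, minus the VLAN, and the Zone stays halted so it can serve as a clone source):

global# zonecfg -z kdcmaster -f kdcmaster.cfg
global# zoneadm -z kdcmaster install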

global# zoneadm -z dns clone -c dns_profile.xml kdcmaster
The following ZFS file system(s) have been created:
    pool1/zones/dns
Progress being logged to /var/log/zones/zoneadm.20150901T012022Z.dns.clone
Log saved in non-global zone as /zones/dns/root/var/log/zones/zoneadm.20150901T012022Z.dns.clone
global#

Let's boot the Zone.

global# zoneadm -z dns boot
global#

Once the Zone is up and running, I like to create a new boot environment, so that if I have to revert the changes I make, I can just reboot into the original boot environment. While creating a new Zone is fast, this saves some work, and it is also convenient later on for testing additional changes.

global# zlogin dns
[Connected to zone 'dns' pts/8]
Oracle Corporation	SunOS 5.11	11.2	July 2015
root@dns:~#

root@dns:~# beadm create dns
root@dns:~# beadm activate dns
root@dns:~# reboot

[Connection to zone 'dns' pts/8 closed]
global#

Install the DNS server in the Solaris Zone

The DNS server package service/network/dns/bind is not installed by default, so we have to install it. We can verify it is not there by checking for the service.

global# zlogin dns
[Connected to zone 'dns' pts/8]
Oracle Corporation	SunOS 5.11	11.2	July 2015
root@dns:~#

root@dns:~# svcs *dns*
STATE          STIME    FMRI
disabled       21:26:25 svc:/network/dns/multicast:default
online         21:26:29 svc:/network/dns/client:default
root@dns:~#

root@dns:~# pkg install pkg:/service/network/dns/bind
           Packages to install:  1
            Services to change:  1
       Create boot environment: No
Create backup boot environment: No
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                1/1         38/38      1.4/1.4  9.2M/s

PHASE                                          ITEMS
Installing new actions                         74/74
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           2/2
root@dns:~#

root@dns:~# svcs *dns*
STATE          STIME    FMRI
disabled       21:26:25 svc:/network/dns/multicast:default
disabled       21:27:17 svc:/network/dns/server:default
online         21:26:29 svc:/network/dns/client:default
root@dns:~#

Configure the DNS server

With the DNS server package installed, it is time to create a basic DNS server configuration. I am using network 172.17.0.0/22 for some historical reasons. You can adjust to meet your own preferences or local requirements.

First, some preliminary work for my configuration. My Zone configuration, if you remember, has two networks. The sysconfig profile configured net0 for my private network. I still need to configure net1 on my standard network; I will use DHCP to get an address.

root@dns:~# dladm show-link
LINK                CLASS     MTU    STATE    OVER
net0                vnic      1500   up       ?
net1                vnic      1500   up       ?
root@dns:~#
root@dns:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           172.17.0.250/22
lo0/v6            static   ok           ::1/128
net0/v6           addrconf ok           fe80::8:20ff:fe90:a16e/10
root@dns:~#
root@dns:~# ipadm create-ip net1
root@dns:~#
root@dns:~# ipadm create-addr -T dhcp net1
net1/v4
root@dns:~#
root@dns:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           172.17.0.250/22
net1/v4           dhcp     ok           192.168.1.112/24
lo0/v6            static   ok           ::1/128
net0/v6           addrconf ok           fe80::8:20ff:fe90:a16e/10
root@dns:~#

It is time to create the master DNS file in /etc/named.conf. Some items of note include:

  • My two subnets, 172.17.0.0/22 and 192.168.1.0/24
  • I have ACLs to allow access from my two subnets
  • I set a forwarder to my local DNS server (my local router or my office network's DNS servers).
  • I listen on the two networks listed in the ipadm output above.
  • This is set up for additional slave DNS servers, though I will not be showing the setup of that here.

Here is my final /etc/named.conf file.

root@dns:~# cat /etc/named.conf
//
// sample BIND configuration file
// taken from http://www.madboa.com/geek/soho-bind/
//

// Added acl per DNS setup at
// https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-caching-or-forwarding-dns-server-on-ubuntu-14-04
//
acl goodclients {
  172.17.0.0/22;
  192.168.1.0/24;
  localhost;
};

options {
  // tell named where to find files mentioned below
  directory "/var/named";
  // on a multi-homed host, you might want to tell named
  // to listen for queries only on certain interfaces
  listen-on { 127.0.0.1; 172.17.0.250/22; 192.168.1.112/24; };
  allow-query { goodclients; };
  forwarders { 192.168.1.1; };
};

// The single dot (.) is the root of all DNS namespace, so
// this zone tells named where to start looking for any
// name on the Internet
zone "." IN {
  // a hint type means that we've got to look elsewhere
  // for authoritative information
  type hint;
  file "named.root";
};

// Where the localhost hostname is defined
zone "localhost" IN {
  // a master type means that this server needn't look
  // anywhere else for information; the localhost buck
  // stops here.
  type master;
  file "zone.localhost";
  // don't allow dynamic DNS clients to update info
  // about the localhost zone
  allow-update { none; };
};

// Where the 127.0.0.0 network is defined
zone "0.0.127.in-addr.arpa" IN {
  type master;
  file "revp.127.0.0";
  allow-update { none; };
};

zone "steffentw.com" IN {
  // this is the authoritative server for
  // steffentw.com info
  type master;
  file "zone.com.steffentw";
  also-notify { 172.17.0.251; 172.17.0.252; };
};

zone "0.17.172.in-addr.arpa" {
  // this is the authoritative server for
  // the 172.17.0.0/22 network
  type master;
  file "revp.172.17.0.0";
  also-notify { 172.17.0.251; 172.17.0.252; };
};
root@dns:~#

Now I have to create or update the files pointed to by /etc/named.conf with my local hostnames.

root@dns:~# cd /var/named
root@dns:/var/named# ls
named.root        revp.172.17.0.0   zone.localhost
revp.127.0.0      zone.com.steffentw
root@dns:/var/named#
root@dns:/var/named# cat zone.com.steffentw
;
; dns zone for steffentw.com
;
; 20150827	Hide _nfsv4idmapdomain to test domainname(1M) response
; 20150824	Removed CNAME for kdc to see if this is required
;
$ORIGIN steffentw.com.
$TTL 1M				; set to 1M for testing, was 1D
; any time you make a change to the domain, bump the
; "serial" setting below. the format is easy:
; YYYYMMDDI, with the I being an iterator in case you
; make more than one change during any one day
@	IN SOA   dns hostmaster (
			201508311 ; serial
			8H        ; refresh
			4M        ; retry
			1H        ; expire
			1D        ; minimum
			)
; dns.steffentw.com serves this domain as both the
; name server (NS) and mail exchange (MX)
		NS	dns
		MX	10 dns
; define domain functions with CNAMEs
depot           CNAME   dns
www             CNAME   dns
; for NFSv4 (2015.08.12)
;_nfsv4idmapdomain	IN TXT	"steffentw.com"
; just in case someone asks for localhost.steffentw.com
localhost	A	127.0.0.1
;
;	172.17.0.0/22 Infrastructure Administration Network
;
host1		A	172.17.0.101
host2		A	172.17.0.102
host3		A	172.17.0.103
host4		A	172.17.0.104
host5		A	172.17.0.105
host6		A	172.17.0.106
host7		A	172.17.0.107
host8		A	172.17.0.108
host9		A	172.17.0.109
zfs1		A	172.17.0.201
zfs2		A	172.17.0.202
zfs3		A	172.17.0.203
dns		A	172.17.0.250
kdc1		A	172.17.0.251
kdc2		A	172.17.0.252
kdc3		A	172.17.0.253
root@dns:/var/named#
root@dns:/var/named# cat revp.172.17.0.0
;
; reverse pointers for 172.17.0.0 subnet
;
$ORIGIN 0.17.172.in-addr.arpa.
$TTL 1D
@	IN SOA  dns.steffentw.com. hostmaster.steffentw.com. (
		201508311  ; serial
		28800      ; refresh (8 hours)
		14400      ; retry (4 hours)
		2419200    ; expire (4 weeks)
		86400      ; minimum (1 day)
		)
; define the authoritative name server
		NS	dns.steffentw.com.
;		NS	dns1.steffentw.com.
;		NS	dns2.steffentw.com.
;
;       172.17.0.0/22 Infrastructure Administration Network
;
101	PTR	host1.steffentw.com.
102	PTR	host2.steffentw.com.
103	PTR	host3.steffentw.com.
104	PTR	host4.steffentw.com.
105	PTR	host5.steffentw.com.
106	PTR	host6.steffentw.com.
107	PTR	host7.steffentw.com.
108	PTR	host8.steffentw.com.
109	PTR	host9.steffentw.com.
;
201	PTR	zfs1.steffentw.com.
202	PTR	zfs2.steffentw.com.
203	PTR	zfs3.steffentw.com.
;
250	PTR	dns.steffentw.com.
251	PTR	kdc1.steffentw.com.
252	PTR	kdc2.steffentw.com.
253	PTR	kdc3.steffentw.com.
root@dns:/var/named#

With those files created, it is time to enable the DNS server. Keep an eye on the Zone's console in case there are errors.

root@dns:/var/named# svcs *dns*
STATE          STIME    FMRI
disabled       21:26:25 svc:/network/dns/multicast:default
disabled       21:27:17 svc:/network/dns/server:default
online         21:26:29 svc:/network/dns/client:default
root@dns:/var/named#
root@dns:/var/named# svcadm enable dns/server
root@dns:/var/named#
root@dns:/var/named# svcs *dns*
STATE          STIME    FMRI
disabled       21:26:25 svc:/network/dns/multicast:default
online         21:26:29 svc:/network/dns/client:default
online         21:44:31 svc:/network/dns/server:default
root@dns:/var/named#

Test the DNS server

Let us see if DNS really works.

root@dns:~# getent hosts kdc1
172.17.0.251	kdc1.steffentw.com
root@dns:~# getent hosts host1
172.17.0.101	host1.steffentw.com
root@dns:~#
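
The reverse zone deserves a quick check as well. Assuming the revp.172.17.0.0 file above loaded cleanly, looking up an address should return the matching name:

root@dns:~# getent hosts 172.17.0.251
172.17.0.251	kdc1.steffentw.com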

A quick test to see if this Zone can do a DNS lookup for an external name.

root@dns:~# nslookup www.oracle.com
Server:		172.17.0.250
Address:	172.17.0.250#53

Non-authoritative answer:
www.oracle.com	canonical name = www.oracle.com.edgekey.net.
www.oracle.com.edgekey.net	canonical name = e7075.x.akamaiedge.net.
Name:	e7075.x.akamaiedge.net
Address: 23.66.214.140

root@dns:~#
root@dns:~# getent hosts www.oracle.com
23.66.214.140	e7075.x.akamaiedge.net www.oracle.com www.oracle.com.edgekey.net
root@dns:~#

Summary and Next Step

With NTP and DNS working, the next step is to build the Key Distribution Server. Either go to KDC setup or back to the introduction.

Secure NFS: Step 1--Setting Up the Kerberos KDC

Kerberos KDC

With DNS set up, the next service to configure is the Key Distribution Center. It will need to access DNS services.

Creating the KDC Zone

The Zone configuration is similar to the DNS server, with the interface using VLAN ID 17 in my setup.

global# cat kdc1.cfg
create -b
set brand=solaris
set zonepath=/zones/kdc1
set autoboot=false
set autoshutdown=shutdown
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=net1
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
set vlan-id=17
end
add admin
set user=steffen
set auths=login,manage,config
end
global#

Since the KDC must use DNS, let's put that into the sysconfig profile.

global# more kdc1_profile.xml
...
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv6_interface">
        <propval type="astring" name="stateful" value="yes"/>
        <propval type="astring" name="address_type" value="addrconf"/>
        <propval type="astring" name="name" value="net0/v6"/>
        <propval type="astring" name="stateless" value="yes"/>
      </property_group>
      <property_group type="application" name="install_ipv4_interface">
        <propval type="net_address_v4" name="static_address" value="172.17.0.251 /24"/>
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="astring" name="address_type" value="static"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
      <propval type="astring" name="host" value="files dns"/>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="network/dns/client">
    <property_group type="application" name="config">
      <property type="net_address" name="nameserver">
        <net_address_list>
          <value_node value="172.17.0.250"/>
        </net_address_list>
      </property>
      <property type="astring" name="search">
        <astring_list>
          <value_node value="steffentw.com"/>
        </astring_list>
      </property>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  ...
global#
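
If you don't have a sysconfig profile on hand, one can be generated interactively with sysconfig(1M) and then edited to match the values above; a sketch:

global# sysconfig create-profile -o kdc1_profile.xml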

Configure and clone the KDC Zone.

global# zonecfg -z kdc1 -f kdc1.cfg
UX: /usr/sbin/usermod: steffen is currently logged in, some changes may not take effect until next login.
global#
global#
global# zoneadm -z kdc1 clone -c kdc1_profile.xml kdcmaster
The following ZFS file system(s) have been created:
    pool1/zones/kdc1
Progress being logged to /var/log/zones/zoneadm.20150901T204046Z.kdc1.clone
Log saved in non-global zone as /zones/kdc1/root/var/log/zones/zoneadm.20150901T204046Z.kdc1.clone
global#
global# zoneadm -z kdc1 boot
global#

After logging into the KDC Zone, first verify that DNS is configured properly.

global#
global# zlogin kdc1
[Connected to zone 'kdc1' pts/8]
Oracle Corporation	SunOS 5.11	11.2	July 2015
root@kdc1:~#
root@kdc1:~# getent hosts host1
172.17.0.101	host1.steffentw.com
root@kdc1:~#

Installing the Kerberos Server Software

The necessary KDC package is not installed by default.

root@kdc1:~# svcs *krb5* ; svcs *kerb*
STATE          STIME    FMRI
STATE          STIME    FMRI
disabled       16:41:20 svc:/system/kerberos/install:default
root@kdc1:~#

Again I prefer to create an alternate boot environment. This time I will do it as part of the package installation.

root@kdc1:~# pkg install --be-name kdc system/security/kerberos-5
           Packages to install:   1
       Create boot environment: Yes
Create backup boot environment:  No
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                1/1         41/41      0.7/0.7 27.9M/s

PHASE                                          ITEMS
Installing new actions                         90/90
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           2/2

A clone of solaris-0 exists and has been updated and activated.
On the next boot the Boot Environment kdc will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

Updating package cache                           2/2
root@kdc1:~#

A quick check on the BE, and then boot into it.

root@kdc1:~# beadm list
BE        Flags Mountpoint Space  Policy Created         
--        ----- ---------- -----  ------ -------         
kdc       R     -          95.45M static 2015-09-01 16:47
solaris-0 N     /          6.29M  static 2015-09-01 16:40
root@kdc1:~#
root@kdc1:~# reboot

[Connection to zone 'kdc1' pts/8 closed]
global#

First, let's confirm the necessary services are there.

global# zlogin kdc1
[Connected to zone 'kdc1' pts/8]
Oracle Corporation	SunOS 5.11	11.2	July 2015
root@kdc1:~#
root@kdc1:~# svcs *krb5* ; svcs *kerb*
STATE          STIME    FMRI
disabled       16:48:22 svc:/network/security/krb5_prop:default
disabled       16:48:22 svc:/network/security/krb5kdc:default
STATE          STIME    FMRI
disabled       16:48:21 svc:/system/kerberos/install:default
root@kdc1:~#

Configuring the KDC

The first configuration step is to modify two files. I make copies of each, both as backups and to compare the new files with the originals here.

root@kdc1:~# cd /etc/krb5/
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# cp -p kdc.conf kdc.conf.orig
root@kdc1:/etc/krb5# cp -p krb5.conf krb5.conf.orig
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# vi kdc.conf
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# cat kdc.conf
#
#
# Copyright (c) 2008, Oracle and/or its affiliates. All rights reserved.
#

[kdcdefaults]
	kdc_ports = 88,750

[realms]
	___default_realm___ = {
		profile = /etc/krb5/krb5.conf
		database_name = /var/krb5/principal
		acl_file = /etc/krb5/kadm5.acl
		kadmind_port = 749
		max_life = 8h 0m 0s
		max_renewable_life = 7d 0h 0m 0s
		default_principal_flags = +preauth
 		master_key_type = des3-cbc-sha1-kd
 		supported_enctypes = des3-cbc-sha1-kd:normal
	}
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# diff kdc.conf*
18,19d17
<  		master_key_type = des3-cbc-sha1-kd
<  		supported_enctypes = des3-cbc-sha1-kd:normal
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# vi krb5.conf
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# head -20 krb5.conf
#
#
# Copyright (c) 2007, Oracle and/or its affiliates. All rights reserved.
#

# krb5.conf template
# In order to complete this configuration file
# you will need to replace the ____ placeholders
# with appropriate values for your network and uncomment the
# appropriate entries.
#
[libdefaults]
#        default_realm = ___default_realm___
 	default_tgs_enctypes = des3-cbc-sha1-kd
 	default_tkt_enctypes = des3-cbc-sha1-kd
 	permitted_enctypes = des3-cbc-sha1-kd
 	allow_weak_enctypes = false


[realms]
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# diff krb5.conf*
14,17d13
<  	default_tgs_enctypes = des3-cbc-sha1-kd
<  	default_tkt_enctypes = des3-cbc-sha1-kd
<  	permitted_enctypes = des3-cbc-sha1-kd
<  	allow_weak_enctypes = false
19d14
<
root@kdc1:/etc/krb5#

Since my sample domain name is steffentw.com, my Kerberos realm is STEFFENTW.COM. Here I create the master KDC. It will prompt for two sets of passwords; make sure you remember them. The admin password will be required on all the clients.

root@kdc1:/etc/krb5# kdcmgr -a kws/admin -r STEFFENTW.COM create master

Starting server setup
---------------------------------------------------

Setting up /etc/krb5/kdc.conf.

Setting up /etc/krb5/krb5.conf.

Initializing database '/var/krb5/principal' for realm 'STEFFENTW.COM',
master key name 'K/M@STEFFENTW.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: enter master password here
Re-enter KDC database master key to verify: enter master password here

Authenticating as principal root/admin@STEFFENTW.COM with password.
WARNING: no policy specified for kws/admin@STEFFENTW.COM; defaulting to no policy
Enter password for principal "kws/admin@STEFFENTW.COM": enter admin password here
Re-enter password for principal "kws/admin@STEFFENTW.COM": enter admin password here
Principal "kws/admin@STEFFENTW.COM" created.

Setting up /etc/krb5/kadm5.acl.

---------------------------------------------------
Setup COMPLETE.

root@kdc1:/etc/krb5#

Once the configuration is complete, I quickly check to make sure it looks OK. In particular, I look for kadmin:default to be online.

root@kdc1:/etc/krb5# kdcmgr status

KDC Status Information
--------------------------------------------
svc:/network/security/krb5kdc:default (Kerberos key distribution center)
 State: online since September  1, 2015 04:51:06 PM EDT
   See: man -M /usr/share/man -s 1M krb5kdc
   See: /var/svc/log/network-security-krb5kdc:default.log
Impact: None.

KDC Master Status Information
--------------------------------------------
svc:/network/security/kadmin:default (Kerberos administration daemon)
 State: online since September  1, 2015 04:51:07 PM EDT
   See: man -M /usr/share/man -s 1M kadmind
   See: /var/svc/log/network-security-kadmin:default.log
Impact: None.

Transaction Log Information
--------------------------------------------

Kerberos update log (/var/krb5/principal.ulog)
Update log dump :
	Log version # : 1
	Log state : Stable
	Entry block size : 2048
	Number of entries : 3
	First serial # : 1
	Last serial # : 3
	First time stamp : Tue Sep  1 16:51:06 2015
	Last time stamp : Tue Sep  1 16:51:06 2015


Kerberos Related File Information
--------------------------------------------
(will display any missing files below)

root@kdc1:/etc/krb5#

Enabling Kerberos Client Configuration

With the KDC set up, the next step is to make it easier to configure the Kerberos clients. Two files are required, and by putting them in a location that is shared via NFS, setting up the clients will be very easy.

Step 1 is to create a file system with a mountpoint, shared over NFS.

root@kdc1:/etc/krb5# zfs create -o mountpoint=/share -o share.nfs=on rpool/share
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# share
rpool_share	/share	nfs	sec=sys,rw	
root@kdc1:/etc/krb5#

Step 2 is to create the kcprofile file and copy the krb5.conf file into the share.

root@kdc1:/etc/krb5# mkdir /share/krb5
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# vi /share/krb5/kcprofile
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# cat /share/krb5/kcprofile
REALM STEFFENTW.COM
KDC kdc1.steffentw.com
ADMIN kws
FILEPATH /net/kdc1.steffentw.com/share/krb5/krb5.conf
NFS 1
DNSLOOKUP none
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# cp /etc/krb5/krb5.conf /share/krb5/
root@kdc1:/etc/krb5#
root@kdc1:/etc/krb5# cat /share/krb5/krb5.conf 
[libdefaults]
	default_realm = STEFFENTW.COM

[realms]
	STEFFENTW.COM = {
		kdc = kdc1.steffentw.com
		admin_server = kdc1.steffentw.com
	}

[domain_realm]
	.steffentw.com = STEFFENTW.COM

[logging]
	default = FILE:/var/krb5/kdc.log
	kdc = FILE:/var/krb5/kdc.log
	kdc_rotate = {
		period = 1d
		versions = 10
	}

[appdefaults]
	kinit = {
		renewable = true
		forwardable = true
	}
root@kdc1:/etc/krb5#
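
Before moving on, a quick sanity check that the admin principal can authenticate; a sketch (kinit prompts for the admin password set earlier, and klist should then show a ticket-granting ticket):

root@kdc1:/etc/krb5# kinit kws/admin
Password for kws/admin@STEFFENTW.COM: enter admin password here
root@kdc1:/etc/krb5# klist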

Summary and Next Step

With the KDC set up, the next step is to create the first client and configure secure NFS. Either go to NFS Server Setup or back to the introduction.

Secure NFS: Step 2--First Kerberos Client--NFS Server

Secure NFS Server

With our Kerberos KDC set up, it is time to build the NFS server. The first step is creating another Solaris Zone, similar to the previous ones.

Creating a NFS Server Zone

global# cat zfs1.cfg
create -b
set brand=solaris
set zonepath=/zones/zfs1
set autoboot=false
set autoshutdown=shutdown
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=net2
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
set vlan-id=17
end
add admin
set user=steffen
set auths=login,manage,config
end
global#
global# zonecfg -z zfs1 -f zfs1.cfg
UX: /usr/sbin/usermod: steffen is currently logged in, some changes may not take effect until next login.
global#
global# zoneadm -z zfs1 clone -c zfs1_profile.xml kdcmaster
The following ZFS file system(s) have been created:
    pool1/zones/zfs1
Progress being logged to /var/log/zones/zoneadm.20150901T210134Z.zfs1.clone
Log saved in non-global zone as /zones/zfs1/root/var/log/zones/zoneadm.20150901T210134Z.zfs1.clone
global#
global# zoneadm -z zfs1 boot
global#

Configuring the Zone as a Kerberos Client

We follow the same Zone setup steps as before, and then configure the Zone as a Kerberos client using the kcprofile file shared from the KDC.

global# zlogin zfs1
[Connected to zone 'zfs1' pts/10]
Oracle Corporation	SunOS 5.11	11.2	July 2015
root@zfs1:~#
root@zfs1:~# ping kdc1
kdc1 is alive
root@zfs1:~#
root@zfs1:~# cat /net/kdc1/share/krb5/kcprofile
REALM STEFFENTW.COM
KDC kdc1.steffentw.com
ADMIN kws
FILEPATH /net/kdc1.steffentw.com/share/krb5/krb5.conf
NFS 1
DNSLOOKUP none
root@zfs1:~#
root@zfs1:~# head -5 /net/kdc1.steffentw.com/share/krb5/krb5.conf
[libdefaults]
	default_realm = STEFFENTW.COM

[realms]
	STEFFENTW.COM = {
root@zfs1:~#
root@zfs1:~# kclient -p /net/kdc1/share/krb5/kcprofile

Starting client setup

---------------------------------------------------

Setting up /etc/krb5/krb5.conf.

Copied /net/kdc1.steffentw.com/share/krb5/krb5.conf to /system/volatile/kclient/kclient-krb5conf.MYaafI.
Obtaining TGT for kws/admin ...
Password for kws/admin@STEFFENTW.COM: enter admin password here
kinit:  no ktkt_warnd warning possible

nfs/zfs1.steffentw.com entry ADDED to KDC database.
nfs/zfs1.steffentw.com entry ADDED to keytab.

host/zfs1.steffentw.com entry ADDED to KDC database.
host/zfs1.steffentw.com entry ADDED to keytab.

---------------------------------------------------
Setup COMPLETE.

root@zfs1:~#
root@zfs1:~# klist -k
Keytab name: FILE:/etc/krb5/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   2 nfs/zfs1.steffentw.com@STEFFENTW.COM
   2 nfs/zfs1.steffentw.com@STEFFENTW.COM
   2 nfs/zfs1.steffentw.com@STEFFENTW.COM
   2 nfs/zfs1.steffentw.com@STEFFENTW.COM
   2 host/zfs1.steffentw.com@STEFFENTW.COM
   2 host/zfs1.steffentw.com@STEFFENTW.COM
   2 host/zfs1.steffentw.com@STEFFENTW.COM
   2 host/zfs1.steffentw.com@STEFFENTW.COM
root@zfs1:~#

Configuring the NFS Server File System

With the NFS server now a Kerberos client, create a ZFS file system that is exported as an NFS share requiring Kerberos privacy (the "krb5p" setting).

root@zfs1:~# zfs create -o mountpoint=/secure -o share.nfs=on -o share.nfs.sec=krb5p rpool/secure
root@zfs1:~# share
rpool_secure	/secure	nfs	sec=krb5p,rw	
root@zfs1:~#

Then create a file with some easily recognized content.

root@zfs1:~# echo "The quick brown fox jumps over the lazy dog." > /secure/fox.txt
root@zfs1:~#
root@zfs1:~# cat /secure/fox.txt
The quick brown fox jumps over the lazy dog.
root@zfs1:~#

Summary and Next Step

With the NFS server running, the next step is to create an NFS client. Either go to NFS Client Setup or back to the introduction.

Secure NFS: Step 3--The Secure NFS Client

Secure NFS Client

We are getting close to a fully completed configuration. The next item is the client.

Build the NFS Client Zone as a KDC Client

global# cat host1.cfg
create -b
set brand=solaris
set zonepath=/zones/host1
set autoboot=false
set autoshutdown=shutdown
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=net2
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
set vlan-id=17
end
add admin
set user=steffen
set auths=login,manage,config
end
global#
global# zoneadm -z host1 clone -c host1_profile.xml kdcmaster
The following ZFS file system(s) have been created:
    pool1/zones/host1
Progress being logged to /var/log/zones/zoneadm.20150901T213207Z.host1.clone
Log saved in non-global zone as /zones/host1/root/var/log/zones/zoneadm.20150901T213207Z.host1.clone
global#
global# zlogin host1
[Connected to zone 'host1' pts/8]
Oracle Corporation	SunOS 5.11	11.2	July 2015
root@host1:~#
root@host1:~# ping kdc1
kdc1 is alive
root@host1:~#
root@host1:~# cat /net/kdc1/share/krb5/kcprofile
REALM STEFFENTW.COM
KDC kdc1.steffentw.com
ADMIN kws
FILEPATH /net/kdc1.steffentw.com/share/krb5/krb5.conf
NFS 1
DNSLOOKUP none
root@host1:~#
root@host1:~# kclient -p /net/kdc1/share/krb5/kcprofile

Starting client setup

---------------------------------------------------

Setting up /etc/krb5/krb5.conf.

Copied /net/kdc1.steffentw.com/share/krb5/krb5.conf to /system/volatile/kclient/kclient-krb5conf.ToaOPV.
Obtaining TGT for kws/admin ...
Password for kws/admin@STEFFENTW.COM: enter admin password here
kinit:  no ktkt_warnd warning possible

nfs/host1.steffentw.com entry ADDED to KDC database.
nfs/host1.steffentw.com entry ADDED to keytab.

host/host1.steffentw.com entry ADDED to KDC database.
host/host1.steffentw.com entry ADDED to keytab.

---------------------------------------------------
Setup COMPLETE.

root@host1:~#

Demonstrate the NFS Client Working

The simplest test is to just navigate to the /net/<server name> location.

root@host1:~# cat /net/zfs1/secure/fox.txt
The quick brown fox jumps over the lazy dog.
root@host1:~#
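
The automounter negotiated the Kerberos security mode for us. An explicit mount requesting krb5p should work just as well; a sketch, assuming the same share:

root@host1:~# mount -o sec=krb5p zfs1:/secure /mnt
root@host1:~# cat /mnt/fox.txt
The quick brown fox jumps over the lazy dog.
root@host1:~# umount /mnt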

However, was this really an encrypted data transfer? One way to check is with snoop(1M).

root@host1:~# snoop -d net0 -r host zfs1 &
[1] 21547
root@host1:~# Using device net0 (promiscuous mode)

root@host1:~# cat /net/zfs1/secure/fox.txt
The quick brown fox jumps over the lazy dog.
root@host1:~# 172.17.0.101 -> 172.17.0.201 TCP D=2049 S=1023 Syn Seq=1000276621 Len=0 Win=32804 Options=<mss 1460,sackOK,tstamp 129311831 0,nop,wscale 5>
172.17.0.201 -> 172.17.0.101 TCP D=1023 S=2049 Syn Ack=1000276622 Seq=576217546 Len=0 Win=32806 Options=<sackOK,tstamp 129311831 129311831,mss 1460,nop,wscale 5>
172.17.0.101 -> 172.17.0.201 TCP D=2049 S=1023 Ack=576217547 Seq=1000276622 Len=0 Win=32806 Options=<nop,nop,tstamp 129311831 129311831>
...
172.17.0.101 -> 172.17.0.201 RPC RPCSEC_GSS C NFS ver(4) proc(1) (data encrypted)
172.17.0.201 -> 172.17.0.101 TCP D=1023 S=2049 Ack=1000276950 Seq=576217547 Len=0 Win=32796 Options=<nop,nop,tstamp 129311831 129311831>
172.17.0.201 -> 172.17.0.101 RPC RPCSEC_GSS R NFS ver(4) proc(1) (data encrypted)
172.17.0.101 -> 172.17.0.201 TCP D=2049 S=1023 Ack=576217959 Seq=1000276950 Len=0 Win=32806 Options=<nop,nop,tstamp 129311832 129311832>
...
172.17.0.101 -> 172.17.0.201 RPC RPCSEC_GSS C NFS ver(4) proc(1) (data encrypted)
172.17.0.201 -> 172.17.0.101 RPC RPCSEC_GSS R NFS ver(4) proc(1) (data encrypted)
...
root@host1:~# kill %1
root@host1:~#

To see the difference, let's create a second share that does not require Kerberos.

root@zfs1:~# zfs create -o mountpoint=/clear -o share.nfs=on rpool/clear
root@zfs1:~#
root@zfs1:~# share
rpool_secure	/secure	nfs	sec=krb5p,rw	
rpool_clear	/clear	nfs	sec=sys,rw	
root@zfs1:~#
root@zfs1:~# cp /secure/fox.txt /clear/
root@zfs1:~#

And run snoop with the option to dump all the data in each Ethernet frame; I like to use -x 0.

First, using the encrypted mountpoint.

root@host1:~# snoop -d net0 -r -x 0 host zfs1 &
[1] 21591
root@host1:~# Using device net0 (promiscuous mode)

root@host1:~# cat /net/zfs1/secure/fox.txt
The quick brown fox jumps over the lazy dog.
root@host1:~# 172.17.0.101 -> 172.17.0.201 TCP D=2049 S=48428 Syn Seq=788443968 Len=0 Win=64240 Options=<mss 1460,sackOK,tstamp 129469208 0,nop,wscale 1>

	   0: 0208 20e4 7813 0208 20ea 4c3d 0800 4500    .. .x... .L=..E.
	  16: 003c ea59 4000 4006 0000 ac11 0065 ac11    .<.Y@.@......e..
	  32: 00c9 bd2c 0801 2efe b340 0000 0000 a002    ...,.....@......
	  48: faf0 597f 0000 0204 05b4 0402 080a 07b7    ..Y.............
	  64: 8b18 0000 0000 0103 0301                   ..........

172.17.0.201 -> 172.17.0.101 TCP D=48428 S=2049 Syn Ack=788443969 Seq=2268877688 Len=0 Win=32806 Options=<sackOK,tstamp 129469208 129469208,mss 1460,nop,wscale 5>

	   0: 0208 20ea 4c3d 0208 20e4 7813 0800 4500    .. .L=.. .x...E.
	  16: 003c f568 4000 4006 ec02 ac11 00c9 ac11    .<.h@.@.........
	  32: 0065 0801 bd2c 873c 5378 2efe b341 a012    .e...,.<Sx...A..
	  48: 8026 c6b9 0000 0402 080a 07b7 8b18 07b7    .&..............
	  64: 8b18 0204 05b4 0103 0305                   ..........

172.17.0.101 -> 172.17.0.201 TCP D=2049 S=48428 Ack=2268877689 Seq=788443969 Len=0 Win=64436 Options=<nop,nop,tstamp 129469208 129469208>

	   0: 0208 20e4 7813 0208 20ea 4c3d 0800 4500    .. .x... .L=..E.
	  16: 0034 ea5a 4000 4006 0000 ac11 0065 ac11    .4.Z@.@......e..
	  32: 00c9 bd2c 0801 2efe b341 873c 5379 8010    ...,.....A.<Sy..
	  48: fbb4 5977 0000 0101 080a 07b7 8b18 07b7    ..Yw............
	  64: 8b18                                       ..

...

172.17.0.101 -> 172.17.0.201 RPC RPCSEC_GSS C NFS ver(4) proc(1) (data encrypted)

	   0: 0208 20e4 7813 0208 20ea 4c3d 0800 4500    .. .x... .L=..E.
	  16: 017c ea70 4000 4006 0000 ac11 0065 ac11    .|.p@.@......e..
	  32: 00c9 03ff 0801 4667 92c6 2d1f 25fc 8018    ......Fg..-.%...
	  48: 8026 5abf 0000 0101 080a 07b7 8b1b 07b7    .&Z.............
	  64: 8b1b 8000 0144 6e7d 0f68 0000 0000 0000    .....Dn}.h......
	  80: 0002 0001 86a3 0000 0004 0000 0001 0000    ................
	  96: 0006 0000 0018 0000 0001 0000 0000 0000    ................
	 112: 0002 0000 0003 0000 0004 1e00 0000 0000    ................
	 128: 0006 0000 001c 0404 04ff ffff ffff 0000    ................
	 144: 0000 15d8 2a96 8cb9 33d6 91df d5de 4ee1    ....*...3.....N.
	 160: d51a 0000 00e4 0504 06ff 0000 0000 0000    ................
	 176: 0000 15d8 2a97 61c4 fa98 3b63 14d0 c5cb    ....*.a...;c....
	 192: 59ee 8848 1638 12bc 486e d73a 8b1e d704    Y..H.8..Hn.:....
	 208: 74e2 65e6 e036 6847 32e8 d2c8 a100 655b    t.e..6hG2.....e[
	 224: df06 73df 78d2 af8a 7850 193c a0bc 2147    ..s.x...xP.<..!G
	 240: 6073 7dcf 3038 cfbb 95d4 5f35 489c 65eb    `s}.08...._5H.e.
	 256: 1e54 3572 60c8 9b1e 78c8 f47a ac25 e8be    .T5r`...x..z.%..
	 272: ddd5 c104 8067 cf6a ca03 1327 c14d e5dd    .....g.j...'.M..
	 288: 0f06 2dac bac9 d689 7536 e391 0e3f 14dd    ..-.....u6...?..
	 304: 2f7b 33d1 231e 3b7b 0de5 5ee2 c28f cb54    /{3.#.;{..^....T
	 320: a2e0 2456 1ffa ddf0 c37f 42bf 252b 1667    ..$V......B.%+.g
	 336: 02c2 1fe3 b19d 0d7b 94a2 4e50 748b 5935    .......{..NPt.Y5
	 352: 890b 746c deb2 5744 97a4 4c07 83e4 5377    ..tl..WD..L...Sw
	 368: 4ca4 75e4 8081 f196 6f01 63fd 4e56 bee9    L.u.....o.c.NV..
	 384: 5510 c21a 6b6a 2d63 c326                   U...kj-c.&

172.17.0.201 -> 172.17.0.101 RPC RPCSEC_GSS R NFS ver(4) proc(1) (data encrypted)

	   0: 0208 20ea 4c3d 0208 20e4 7813 0800 4500    .. .L=.. .x...E.
	  16: 01d0 f57e 4000 4006 ea58 ac11 00c9 ac11    ...~@.@..X......
	  32: 0065 0801 03ff 2d1f 25fc 4667 940e 8018    .e....-.%.Fg....
	  48: 8026 8344 0000 0101 080a 07b7 8b1b 07b7    .&.D............
	  64: 8b1b 8000 0198 6e7d 0f68 0000 0001 0000    ......n}.h......
	  80: 0000 0000 0006 0000 001c 0404 05ff ffff    ................
	  96: ffff 0000 0000 22a9 1433 c781 6e9e 8ed8    ......"..3..n...
	 112: e6cc aa86 e4d9 0000 0000 0000 0160 0504    .............`..
	 128: 07ff 0000 0000 0000 0000 22a9 1434 68c0    .........."..4h.
	 144: e008 d7e8 cca4 af88 da90 2b45 dc13 57b9    ..........+E..W.
	 160: 3a0a e3f8 5a98 fddb 5039 62bc 1858 ecd5    :...Z...P9b..X..
	 176: 0f5c fcd6 a150 7bf0 0782 d337 8cf6 8de1    .\...P{....7....
	 192: 5e81 481f b921 9054 d74a 0160 e9a4 0522    ^.H..!.T.J.`..."
	 208: 8d85 f55d 9576 f819 6515 c010 8d22 d0a4    ...].v..e...."..
	 224: e685 0b00 ebd9 cb9b 4079 dcd1 1195 5690    ........@y....V.
	 240: 9d07 846b a8e0 f022 c33d 7412 5065 3bc5    ...k...".=t.Pe;.
	 256: 0be5 7f98 9cb5 f5cb 8452 aa0a dfa7 cfb3    .........R......
	 272: e9eb a607 03a8 59c9 dc62 903c b289 dd13    ......Y..b.<....
	 288: b20f 612d 1603 c335 2705 61ce af13 b792    ..a-...5'.a.....
	 304: 442e 5a19 59fb d867 377e 34f3 b43d f8e3    D.Z.Y..g7~4..=..
	 320: ff0a 2937 d04c 1b22 0213 5227 57f1 ba26    ..)7.L."..R'W..&
	 336: 44e0 5e52 2f79 41d9 a494 cee6 bd76 f8e0    D.^R/yA......v..
	 352: ecd1 4b98 0e91 7b09 321e 97b1 26ef 3cdc    ..K...{.2...&.<.
	 368: 7211 7ae3 b71c 3bb0 c1b0 2e91 93e2 2b37    r.z...;.......+7
	 384: a1de 76ca f736 70c4 4987 b39f 71e9 736f    ..v..6p.I...q.so
	 400: fc6e 433e 5f2f f283 06b6 cf1b 96f8 b447    .nC>_/.........G
	 416: af39 1d95 6fe7 4173 e554 2d77 c9b8 df88    .9..o.As.T-w....
	 432: 48d2 843e 67cb 54a2 93c8 8bad b24c 1e40    H..>g.T......L.@
	 448: 64aa 7f75 5fec a0c6 4d58 de19 ec68 25d3    d..u_...MX...h%.
	 464: af93 6f26 e12f 180b f0c0 87b6 7df6         ..o&./......}.

...

172.17.0.101 -> 172.17.0.201 NFS R CB_NULL

	   0: 0208 20e4 7813 0208 20ea 4c3d 0800 4500    .. .x... .L=..E.
	  16: 0050 ea7c 4000 4006 0000 ac11 0065 ac11    .P.|@.@......e..
	  32: 00c9 b385 ed12 c833 5144 9614 5a3c 8018    .......3QD..Z<..
	  48: 8026 5993 0000 0101 080a 07b7 8b1d 07b7    .&Y.............
	  64: 8b1a 8000 0018 627d 0f68 0000 0001 0000    ......b}.h......
	  80: 0000 0000 0000 0000 0000 0000 0000         ..............

172.17.0.201 -> 172.17.0.101 TCP D=45957 S=60690 Ack=3358806368 Seq=2517916220 Len=0 Win=32806 Options=<nop,nop,tstamp 129469213 129469213>

	   0: 0208 20ea 4c3d 0208 20e4 7813 0800 4500    .. .L=.. .x...E.
	  16: 0034 f58a 4000 4006 ebe8 ac11 00c9 ac11    .4..@.@.........
	  32: 0065 ed12 b385 9614 5a3c c833 5160 8010    .e......Z<.3Q`..
	  48: 8026 cd1f 0000 0101 080a 07b7 8b1d 07b7    .&..............
	  64: 8b1d                                       ..

172.17.0.101 -> 172.17.0.201 TCP D=2049 S=1023 Ack=757019588 Seq=1181196406 Len=0 Win=32806 Options=<nop,nop,tstamp 129469216 129469211>

	   0: 0208 20e4 7813 0208 20ea 4c3d 0800 4500    .. .x... .L=..E.
	  16: 0034 ea7d 4000 4006 0000 ac11 0065 ac11    .4.}@.@......e..
	  32: 00c9 03ff 0801 4667 a076 2d1f 33c4 8010    ......Fg.v-.3...
	  48: 8026 5977 0000 0101 080a 07b7 8b20 07b7    .&Yw......... ..
	  64: 8b1b                                       ..


root@host1:~#

And now using the clear-text mountpoint.

root@host1:~# snoop -d net0 -r -x 0 host zfs1 &
[1] 21593
root@host1:~# Using device net0 (promiscuous mode)

root@host1:~# cat /net/zfs1/clear/fox.txt
The quick brown fox jumps over the lazy dog.
...

172.17.0.201 -> 172.17.0.101 NFS R 4 (read        ) NFS4_OK PUTFH NFS4_OK READ NFS4_OK (45 bytes) EOF

	   0: 0208 20ea 4c3d 0208 20e4 7813 0800 4500    .. .L=.. .x...E.
	  16: 00b0 f594 4000 4006 eb62 ac11 00c9 ac11    ....@.@..b......
	  32: 0065 0801 03ff 2d1f 3ba8 4667 a8d2 8018    .e....-.;.Fg....
	  48: 8026 f4c5 0000 0101 080a 07b7 9377 07b7    .&...........w..
	  64: 9377 8000 0078 917d 0f68 0000 0001 0000    .w...x.}.h......
	  80: 0000 0000 0000 0000 0000 0000 0000 0000    ................
	  96: 0000 0000 000c 7265 6164 2020 2020 2020    ......read     
	 112: 2020 0000 0002 0000 0016 0000 0000 0000      ..............
	 128: 0019 0000 0000 0000 0001 0000 002d 5468    .............-Th
	 144: 6520 7175 6963 6b20 6272 6f77 6e20 666f    e quick brown fo
	 160: 7820 6a75 6d70 7320 6f76 6572 2074 6865    x jumps over the
	 176: 206c 617a 7920 646f 672e 0a00 0000          lazy dog.....

...

172.17.0.101 -> 172.17.0.201 TCP D=2049 S=1023 Ack=757021992 Seq=1181198770 Len=0 Win=32806 Options=<nop,nop,tstamp 129471358 129471351>

	   0: 0208 20e4 7813 0208 20ea 4c3d 0800 4500    .. .x... .L=..E.
	  16: 0034 ea89 4000 4006 0000 ac11 0065 ac11    .4..@.@......e..
	  32: 00c9 03ff 0801 4667 a9b2 2d1f 3d28 8010    ......Fg..-.=(..
	  48: 8026 5977 0000 0101 080a 07b7 937e 07b7    .&Yw.........~..
	  64: 9377                                       .w


root@host1:~#

In both cases, because I let the automounter time out and a new mount is initiated each time, there are so many packets that it is hard to know which is doing what. However, in the case of reading the file on /clear, the "quick brown fox" text is clearly visible. Your own tests and snoop output should make this difference very clear.

By default, the mounts use NFS version 4 (NFSv4). You can also mount explicitly requesting version 3; the results will be the same.

Additional NFS Client Configuration Options

root@host1:~# mount -o vers=3 zfs1:/secure /mnt
root@host1:~#

And as a reminder, you can force mounts to use at most version 3, on either a client or a server, using the sharectl(1M) command.

root@host1:~# sharectl get -p client_versmax nfs
client_versmax=4
root@host1:~#
root@host1:~# sharectl set -p client_versmax=3 nfs
root@host1:~# sharectl get -p client_versmax nfs
client_versmax=3
root@host1:~#

Summary and Next Step

This completes the Secure NFS setup. One option is to co-locate the KDC and NFS server. Either go to Combining KDC and NFS Server or back to the introduction.

Secure NFS: Step 4--Combining the KDC and NFS Server

Combining the KDC and NFS Server

When I asked my customer about their availability requirements, they stated that they only need a few NFS clients with encrypted traffic. They would like to keep the setup simple, and therefore combine the KDC and NFS server. They are using Oracle Solaris Cluster for availability, and by putting both services in a single Solaris Zone, they can meet their availability requirements with Oracle Solaris Cluster managing the Solaris Zone startup and failover.

So I looked into whether this is a good idea, and I was informed that it is fully supported and tested. The way to do this is to make the KDC a client of itself.

Making the KDC a Kerberos Client

root@kdc1:~# kclient -p /net/kdc1/share/krb5/kcprofile

Starting client setup

---------------------------------------------------

Setting up /etc/krb5/krb5.conf.

Copied /net/kdc1.steffentw.com/share/krb5/krb5.conf to /system/volatile/kclient/kclient-krb5conf.mmayyQ.
Obtaining TGT for kws/admin ...
Password for kws/admin@STEFFENTW.COM:
kinit:  no ktkt_warnd warning possible

nfs/kdc1.steffentw.com entry ADDED to KDC database.
nfs/kdc1.steffentw.com entry ADDED to keytab.

host/kdc1.steffentw.com entry already exists in KDC database.
host/kdc1.steffentw.com entry already present in keytab.
host/kdc1.steffentw.com entry ADDED to keytab.

---------------------------------------------------
Setup COMPLETE.

root@kdc1:~#
root@kdc1:~# klist -k
Keytab name: FILE:/etc/krb5/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   3 host/kdc1.steffentw.com@STEFFENTW.COM
   3 host/kdc1.steffentw.com@STEFFENTW.COM
   3 host/kdc1.steffentw.com@STEFFENTW.COM
   3 host/kdc1.steffentw.com@STEFFENTW.COM
   2 nfs/kdc1.steffentw.com@STEFFENTW.COM
   2 nfs/kdc1.steffentw.com@STEFFENTW.COM
   2 nfs/kdc1.steffentw.com@STEFFENTW.COM
   2 nfs/kdc1.steffentw.com@STEFFENTW.COM
root@kdc1:~#

Creating Secured NFS Share

Then create a new mount point and put some data into it.

root@kdc1:~# zfs create -o mountpoint=/secure -o share.nfs=on -o share.nfs.sec=krb5p rpool/secure
root@kdc1:~#
root@kdc1:~# share
rpool_share     /share  nfs     sec=sys,rw     
rpool_secure    /secure nfs     sec=krb5p,rw   
root@kdc1:~#
root@kdc1:~# cp /net/zfs1/secure/fox.txt /secure/
root@kdc1:~#
root@kdc1:~# cat /secure/fox.txt
The quick brown fox jumps over the lazy dog.
root@kdc1:~#

Back on the client, read the file on the KDC, with snoop running to show the data is encrypted. And since the maximum client NFS version was set to 3 earlier, snoop shows version 3 traffic as well.

root@host1:~# snoop -d net0 -r host kdc1 &
[1] 21825
root@host1:~# Using device net0 (promiscuous mode)

root@host1:~# cat /net/kdc1/secure/fox.txt
The quick brown fox jumps over the lazy dog.
root@host1:~# 172.17.0.101 -> 172.17.0.251 TCP D=2049 S=1022 Syn Seq=597683294 Len=0 Win=32804 Options=<mss 1460,sackOK,tstamp 129789256 0,nop,wscale 5>
172.17.0.251 -> 172.17.0.101 TCP D=1022 S=2049 Syn Ack=597683295 Seq=1916087307 Len=0 Win=32806 Options=<sackOK,tstamp 129789256 129789256,mss 1460,nop,wscale 5>
172.17.0.101 -> 172.17.0.251 TCP D=2049 S=1022 Ack=1916087308 Seq=597683295 Len=0 Win=32806 Options=<nop,nop,tstamp 129789256 129789256>
172.17.0.101 -> 172.17.0.251 RPC RPCSEC_GSS C NFS ver(3) proc(1) (data encrypted)
172.17.0.251 -> 172.17.0.101 TCP D=1022 S=2049 Ack=597683495 Seq=1916087308 Len=0 Win=32806 Options=<nop,nop,tstamp 129789257 129789257>
172.17.0.251 -> 172.17.0.101 RPC RPCSEC_GSS R NFS ver(3) proc(1) (data encrypted)
172.17.0.101 -> 172.17.0.251 TCP D=2049 S=1022 Ack=1916087520 Seq=597683495 Len=0 Win=32806 Options=<nop,nop,tstamp 129789259 129789259>
172.17.0.101 -> 172.17.0.251 RPC RPCSEC_GSS C NFS ver(3) proc(4) (data encrypted)
172.17.0.251 -> 172.17.0.101 TCP D=1022 S=2049 Ack=597683699 Seq=1916087520 Len=0 Win=32806 Options=<nop,nop,tstamp 129789259 129789259>
172.17.0.251 -> 172.17.0.101 RPC RPCSEC_GSS R NFS ver(3) proc(4) (data encrypted)
172.17.0.101 -> 172.17.0.251 TCP D=2049 S=1022 Ack=1916087740 Seq=597683699 Len=0 Win=32806 Options=<nop,nop,tstamp 129789259 129789259>
172.17.0.101 -> 172.17.0.251 RPC RPCSEC_GSS C NFS ver(3) proc(1) (data encrypted)
172.17.0.251 -> 172.17.0.101 RPC RPCSEC_GSS R NFS ver(3) proc(1) (data encrypted)
172.17.0.101 -> 172.17.0.251 TCP D=2049 S=1022 Ack=1916087952 Seq=597683899 Len=0 Win=32806 Options=<nop,nop,tstamp 129789266 129789259>

root@host1:~#

Summary and Next Step

That is everything, I hope. Here you can quickly go back to the introduction.

Wednesday Jun 17, 2009

ssh and friends scp, sftp say "hello crypto!"

Solaris includes the SunSSH toolset (ssh, scp, and sftp) in Solaris 9 and later. Solaris 10 comes with the Solaris Cryptographic Framework that provides an easy mechanism for applications that use PKCS #11, OpenSSL, Java Security Extensions, or the NSS interface to take advantage of cryptographic hardware or software on the system.

Separately, the UltraSPARC® T2 processor in the T-series (CMT) systems has built-in cryptographic processors (one per core, or typically eight per socket) that accelerate secure one-way hashes, public key session establishment, and private key bulk data transfers. The latter is useful for long-standing connections and for larger data operations, such as a file transfer.

Prior to Solaris 10 5/09, an scp or sftp file transfer operation had the encryption and decryption done by the CPU. While usually this is not a big deal, as most CPUs do private key crypto reasonably fast, on the CMT systems these operations are relatively slow. Now with SunSSH With OpenSSL PKCS#11 Engine Support in 5/09, the SunSSH server and client will use the cryptographic framework when an UltraSPARC® T2 processor's n2cp cryptographic unit is available.
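
To confirm that the cryptographic framework actually sees the hardware, cryptoadm(1M) lists the available providers; on a T2 system the n2cp hardware provider should appear in the list:

medford# cryptoadm list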

To demonstrate this, I used a T5120 with Logical Domains (LDoms) 1.1 configured running Solaris 10 5/09. Using LDoms helps, as I can assign or remove crypto units on a per-LDom basis. (Since the crypto units are not supported yet with dynamic reconfiguration, a reboot of the LDom instance is required. However, in general, I don't see making that kind of change very often.)

I did all the work in the 'primary' control and service LDom, where I have direct access to the network devices, and can see the LDom configuration. I am listing parts of it here, although this is about Solaris, SunSSH, and the crypto hardware.

medford# ldm list-bindings primary
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      16    8G       0.1%  22h 16m

MAC
    00:14:4f:ac:57:c4

HOSTID
    0x84ac57c4

VCPU
    VID    PID    UTIL STRAND
    0      0      0.6%   100%
    1      1      1.9%   100%
    2      2      0.0%   100%
    3      3      0.0%   100%
    4      4      0.0%   100%
    5      5      0.1%   100%
    6      6      0.0%   100%
    7      7      0.0%   100%
    8      8      0.7%   100%
    9      9      0.1%   100%
    10     10     0.0%   100%
    11     11     0.0%   100%
    12     12     0.0%   100%
    13     13     0.0%   100%
    14     14     0.0%   100%
    15     15     0.0%   100%

MAU
    ID     CPUSET
    0      (0, 1, 2, 3, 4, 5, 6, 7)
    1      (8, 9, 10, 11, 12, 13, 14, 15)

MEMORY
    RA               PA               SIZE
    0x8000000        0x8000000        8G

The 'system' has 16 CPUs (hardware strands), two MAUs (those are the crypto units), and 8 GB of memory. I am using e1000g0 for the network and the remote system is a V210 running Solaris Express Community Edition snv_113 SPARC (OK, I am a little behind). The network is 1 GbE.

The command I run is

source# /usr/bin/time scp -i /.ssh/destination /large-file destination:/tmp

source# du -h /large-file
 1.3G   /large-file
My results with the crypto units were

real     1:13.6
user       32.2
sys        34.5

while without the crypto units

real     2:28.2
user     2:10.9
sys        26.8

The transfer took half the time and considerably less CPU processing with the crypto units in place (I have two, although I think it is using only one, since this is a single transfer).
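
If you want to check that the PKCS#11 engine is available to the OpenSSL that SunSSH uses, listing the engines is a quick sketch of a test, assuming the bundled Solaris 10 OpenSSL:

source# /usr/sfw/bin/openssl engine
(pkcs11) PKCS #11 engine support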

So, SunSSH benefits from the built-in cryptographic hardware in the UltraSPARC® T2 processor!

Steffen

Monday Jan 19, 2009

IPMP Re-architecture is delivered

In the process of working on some zones and IPMP testing, I ran into a little difficulty. After probing for some insight, I was reminded by Peter Memishian that the IPMP Re-Architecture (part of Project Clearview) bits were going to be in Nevada/SXCE build 107, and that I could BFU the latest bits onto an existing Nevada install. Well!!! [For Peter's own perspective of this, see his recent blog.]

Since I was already playing with build 105 because the Crossbow features are now integrated, I decided to apply the IPMP bits to a 105 installation. [Note: The IPMP Re-architecture is expected to be in Solaris Express Community Edition (SX-CE) build 107 or so (due out early Feb 2009), and thus in OpenSolaris 2009.spring (I don't know what its final name will be). Early access to IPS packages for OpenSolaris 2008.11 should appear in the bi-weekly developer repository shortly after SX-CE has the feature included. There is no intention to back port the re-architecture to Solaris 10.]

I am impressed! The bits worked right away, and once I got used to the slightly different way of monitoring IPMP, I really liked what I saw.

Being accustomed to IPMP on Solaris 10, and having beta tested Crossbow on previous Nevada bits, I used the long-standing (Solaris 10 and prior) IPMP configuration style. For my testing, I am using link failure detection only, so no probe addresses are configured. [For examples of the new configuration format, see the section Using the New IPMP Configuration Style below. (15 Feb 2009)]

global# cat /etc/hostname.bge1
group shared

global# cat /etc/hostname.bge2
group shared

global# cat /etc/hostname.bge3
group shared standby
In my test case bge1 and bge2 are active interfaces, and bge3 is a standby interface.
global# ifconfig -a4
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8e
ipmp0: flags=8201000842<BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        inet 0.0.0.0 netmask 0
        groupname shared
You will notice that all three interfaces are up and part of group shared. What is different from the old IPMP is that another interface was created automatically, with the flag IPMP. This is the interface that will be used for all the data IP addresses.

Because I used the old format for the /etc/hostname.\* files, the backward compatibility of the new IPMP automatically created the ipmp0 interface and assigned it a name. If I wish to have control over that name, I must configure IPMP slightly differently. More on that later.

The new command ipmpstat(1M) is also introduced to get enhanced information regarding the IPMP configuration.

My test is really about using zones and IPMP, so here is what things look like when I bring up three zones that are also configured the traditional way, with network definitions using the bge interfaces. [Using the new format, I would replace bge with either ipmp0 (keep in mind that 0 (zero) is set dynamically) or shared. For more details on the new format, go to Using the New IPMP Configuration Style below. (15 Feb 2009)]

global# for i in 1 2 3; do zonecfg -z shared${i} info net; done
net:
        address: 10.1.14.141/26
        physical: bge1
        defrouter: 10.1.14.129
net:
        address: 10.1.14.142/26
        physical: bge1
        defrouter: 10.1.14.129
net:
        address: 10.1.14.143/26
        physical: bge2
        defrouter: 10.1.14.129
After booting the zones, note that the zones' IP addresses are on logical interfaces on ipmp0, rather than being logical interfaces on bge as before.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared2
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared3
        inet 127.0.0.1 netmask ff000000
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8e
ipmp0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
        groupname shared
ipmp0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
ipmp0:2: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
For address information, here are the pre- and post-boot ipmpstat outputs.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
0.0.0.0                   down   ipmp0       --          --
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge1        bge2 bge1
What's really neat is that it shows which interface(s) are used for outbound traffic. A different interface will be selected for each new remote IP address. That is the level of outbound load spreading at this time.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      ok        --        bge2 bge1 (bge3)
There is no difference in the group output before or after booting the zones.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      ok        --        bge2 bge1 (bge3)
The FDT column lists the probe-based failure detection time, and is empty since probing is disabled in this setup. bge3 is listed third and in parentheses since that interface is not being used for data traffic at this time.
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      ipmp0       is-----   up        disabled  ok
bge2        yes     ipmp0       -------   up        disabled  ok
bge1        yes     ipmp0       --mb---   up        disabled  ok
Also, there are no differences in interface status. In both cases bge1 is used for multicast and broadcast traffic, and bge3 is inactive and in standby mode.
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      ipmp0       is-----   up        disabled  ok
bge2        yes     ipmp0       -------   up        disabled  ok
bge1        yes     ipmp0       --mb---   up        disabled  ok
The probe and target output is uninteresting in this setup since I don't have probe-based failure detection on. I am including it for completeness.
global# ipmpstat -p
ipmpstat: probe-based failure detection is disabled

global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        disabled  --                  --
bge2        disabled  --                  --
bge1        disabled  --                  --
So let's see what happens on a link 'failure' as I turn off the switch port going to bge1.

On the console, the indication is a link failure.

Jan 15 14:49:07 global in.mpathd[210]: The link has gone down on bge1
Jan 15 14:49:07 global in.mpathd[210]: IP interface failure detected on bge1 of group shared
The various ipmpstat outputs reflect the failure of bge1 and the failover to bge3, which had been in standby mode, and to bge2. I had expected both IP addresses to end up on bge3; instead, IPMP determines how best to spread the IP addresses across the available interfaces.

The address output shows that .141 and .143 are now on bge3.

global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     ipmp0       bge3        bge3 bge2
10.1.14.142               up     ipmp0       bge2        bge3 bge2
10.1.14.141               up     ipmp0       bge2        bge3 bge2
The group status has changed, with bge1 now shown in square brackets as it is in failed mode.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      degraded  --        bge3 bge2 [bge1]
The interface status makes it clear that bge1 is down. Broadcast and multicast traffic is now handled by bge2.
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        yes     ipmp0       -s-----   up        disabled  ok
bge2        yes     ipmp0       --mb---   up        disabled  ok
bge1        no      ipmp0       -------   down      disabled  failed
As expected, the only difference in the ifconfig output is for bge1, showing that it is in the failed state. The zones continue to be shown using the ipmp0 interface. This took a little getting used to: before, ifconfig was sufficient to see the full state; now, I must use ipmpstat as well.

global# ifconfig -a4
...
bge1: flags=211000803<UP,BROADCAST,MULTICAST,IPv4,FAILED,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8d
bge3: flags=221000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8e
ipmp0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
        groupname shared
ipmp0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
ipmp0:2: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
"Repairing" the interface, things return to normal.
Jan 15 15:13:03 global in.mpathd[210]: The link has come up on bge1
Jan 15 15:13:03 global in.mpathd[210]: IP interface repair detected on bge1 of group shared
Note that here only one IP address ended up being moved back to bge1.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge2        bge2 bge1
Interface bge3 is back in standby mode.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      ok        --        bge2 bge1 (bge3)
All three interfaces are up, only two are active, and broadcast and multicast stayed on bge2 (no need to change that now).
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      ipmp0       is-----   up        disabled  ok
bge2        yes     ipmp0       --mb---   up        disabled  ok
bge1        yes     ipmp0       -------   up        disabled  ok
As a further example of rebalancing of the IP address, here is what happens with four IP addresses spread across two interfaces.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.144               up     ipmp0       bge2        bge2 bge1
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge1        bge2 bge1

Jan 15 16:19:09 global in.mpathd[210]: The link has gone down on bge1
Jan 15 16:19:09 global in.mpathd[210]: IP interface failure detected on bge1 of group shared

global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.144               up     ipmp0       bge2        bge3 bge2
10.1.14.143               up     ipmp0       bge3        bge3 bge2
10.1.14.142               up     ipmp0       bge2        bge3 bge2
10.1.14.141               up     ipmp0       bge3        bge3 bge2

Jan 15 18:11:35 global in.mpathd[210]: The link has come up on bge1
Jan 15 18:11:35 global in.mpathd[210]: IP interface repair detected on bge1 of group shared

global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.144               up     ipmp0       bge2        bge2 bge1
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge1        bge2 bge1
The IP addresses are spread evenly across the two active interfaces.

Using the New IPMP Configuration Style

In the previous examples, I used the old style of configuring IPMP with the /etc/hostname.xyzN files. Those files should work on all older versions of Solaris as well as with the re-architecture bits. This section briefly covers the new format.

A new file that is introduced is the hostname.ipmp-group configuration file. The group name must follow the same format as any other data link name: ASCII characters followed by a trailing number. I will use the same group name as above; however, I have to add a number to the end, so the group name will be shared0. Without the trailing number, the old style of IPMP setup will be used.

I create a file to define the IPMP group. Note that it contains only the keyword ipmp.

global# cat /etc/hostname.shared0
ipmp
The other files for the NICs reference the IPMP group name.

global# cat /etc/hostname.bge1
group shared0 up

global# cat /etc/hostname.bge2
group shared0 up

global# cat /etc/hostname.bge3
group shared0 standby up
One note that may not be obvious: I am not using the -failover keyword since I am not using test addresses. Thus the interfaces are also not listed as deprecated in the ifconfig output.

global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
shared0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 6
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8e
After booting the zones, which are still configured to use bge1 or bge2, things look like this.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared2
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared3
        inet 127.0.0.1 netmask ff000000
shared0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
shared0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
shared0:2: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
shared0:3: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 6
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8e
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     shared0     bge1        bge2 bge1
10.1.14.142               up     shared0     bge2        bge2 bge1
10.1.14.141               up     shared0     bge1        bge2 bge1
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     ok        --        bge2 bge1 (bge3)

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      shared0     is-----   up        disabled  ok
bge2        yes     shared0     -------   up        disabled  ok
bge1        yes     shared0     --mb---   up        disabled  ok
Things are the same as before, except that I have now specified the IPMP group name (shared0 instead of the previous ipmp0). I find this very useful since the name can help identify the group's purpose, and when debugging, context-appropriate IPMP group names should be very helpful.

I find the integration, or rather the backward compatibility, great. Not only does an old or existing IPMP setup work, the existing zonecfg network setup works as well. This means the same configuration files will work pre- and post-re-architecture!

Let's take a look at how things look within a zone.

shared1# ifconfig -a4
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
shared0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
shared1# netstat -rnf inet

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.1.14.129          UG        1          2 shared0
10.1.14.128          10.1.14.141          U         1          0 shared0:1
127.0.0.1            127.0.0.1            UH        1         33 lo0:1
The zone's network is on the link shared0 using a logical IP, and everything else looks as it has always looked. This output is actually while bge1 is down. IPMP hides all the details in the non-global zone.

Using Probe-based Failover

The configurations so far have been with link-based failure detection. IPMP also has the ability to do probe-based failure detection, where ICMP packets are sent to other nodes on the network. This allows failure detection well beyond what link-based detection can do, covering the whole switch and devices past it, up to and including routers. In order to use probe-based failure detection, test addresses are required on the physical NICs. For my configuration, I use test addresses on a completely different subnet, and my router is another system running Solaris 10. The router happens to be a zone with two NICs, configured as an exclusive IP Instance.

I am using a completely different subnet because I want to isolate the global zone from the non-global zones; the setup also uses the defrouter zonecfg option, and I don't want to interfere with that.

The IPMP setup is as follows. I have added test addresses on the 172.16.10.0/24 subnet, and the interfaces are set to not fail over.

global# cat /etc/hostname.shared0
ipmp

global# cat /etc/hostname.bge1
172.16.10.141/24 group shared0 -failover up

global# cat /etc/hostname.bge2
172.16.10.142/24 group shared0 -failover up

global# cat /etc/hostname.bge3
172.16.10.143/24 group shared0 -failover standby up
This is the state of the system before bringing up any zones.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
shared0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 4
        inet 172.16.10.141 netmask ffffff00 broadcast 172.16.10.255
        groupname shared0
        ether 0:3:ba:e3:42:8c
bge2: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 5
        inet 172.16.10.142 netmask ffffff00 broadcast 172.16.10.255
        groupname shared0
        ether 0:3:ba:e3:42:8d
bge3: flags=269040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE,CoS> mtu 1500 index 6
        inet 172.16.10.143 netmask ffffff00 broadcast 172.16.10.255
        groupname shared0
        ether 0:3:ba:e3:42:8e
The ipmpstat output is different now.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     ok        10.00s    bge2 bge1 (bge3)

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      shared0     is-----   up        ok        ok
bge2        yes     shared0     -------   up        ok        ok
bge1        yes     shared0     --mb---   up        ok        ok
The failure detection time (FDT) is now set, and the probe information option lists an ongoing stream of probe results.
global# ipmpstat -p
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
0.14s     bge3        426    0.48ms    0.56ms    0.68ms    172.16.10.16
0.24s     bge2        426    0.50ms    0.98ms    0.74ms    172.16.10.16
0.26s     bge1        424    0.42ms    0.71ms    1.72ms    172.16.10.16
1.38s     bge1        425    0.42ms    0.50ms    1.57ms    172.16.10.16
1.79s     bge2        427    0.54ms    0.86ms    0.76ms    172.16.10.16
1.93s     bge3        427    0.45ms    0.53ms    0.66ms    172.16.10.16
2.79s     bge1        426    0.38ms    0.56ms    1.44ms    172.16.10.16
2.85s     bge2        428    0.34ms    0.41ms    0.71ms    172.16.10.16
3.15s     bge3        428    0.44ms    4.55ms    1.14ms    172.16.10.16
^C
The target information option shows the current probe targets.
global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        multicast 172.16.10.143       172.16.10.16
bge2        multicast 172.16.10.142       172.16.10.16
bge1        multicast 172.16.10.141       172.16.10.16
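A side note: in.mpathd chose the probe target (my router, 172.16.10.16) on its own via multicast, as the MODE column shows. If you want deterministic probe targets instead, static host routes to the desired targets can be added; a minimal sketch, assuming 172.16.10.16 is the target you want pinned:

global# route add -host 172.16.10.16 172.16.10.16 -static

With such a route in place, ipmpstat -t should then list the mode as routes rather than multicast.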
Once the zones are up and running and bge1 is down, the status output changes accordingly.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     shared0     bge2        bge3 bge2
10.1.14.142               up     shared0     bge3        bge3 bge2
10.1.14.141               up     shared0     bge2        bge3 bge2
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     degraded  10.00s    bge3 bge2 [bge1]

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        yes     shared0     -s-----   up        ok        ok
bge2        yes     shared0     --mb---   up        ok        ok
bge1        no      shared0     -------   down      failed    failed

global# ipmpstat -p
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
0.46s     bge2        839    0.43ms    0.98ms    1.17ms    172.16.10.16
1.15s     bge3        840    0.32ms    0.37ms    0.65ms    172.16.10.16
1.48s     bge2        840    0.37ms    0.45ms    1.08ms    172.16.10.16
2.56s     bge3        841    0.45ms    0.54ms    0.63ms    172.16.10.16
3.17s     bge2        841    0.40ms    0.51ms    1.01ms    172.16.10.16
3.93s     bge3        842    0.40ms    0.47ms    0.61ms    172.16.10.16
4.61s     bge2        842    0.63ms    0.75ms    0.98ms    172.16.10.16
5.17s     bge3        843    0.38ms    0.46ms    0.59ms    172.16.10.16
5.72s     bge2        843    0.36ms    0.44ms    0.91ms    172.16.10.16
^C

global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        multicast 172.16.10.143       172.16.10.16
bge2        multicast 172.16.10.142       172.16.10.16
bge1        multicast 172.16.10.141       172.16.10.16
Without showing the details here, the non-global zones continue to function.

Bringing all three interfaces down, things look like this.

Jan 19 13:51:22 global in.mpathd[61]: The link has gone down on bge2
Jan 19 13:51:22 global in.mpathd[61]: IP interface failure detected on bge2 of group shared0
Jan 19 13:52:04 global in.mpathd[61]: The link has gone down on bge3
Jan 19 13:52:04 global in.mpathd[61]: All IP interfaces in group shared0 are now unusable
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     shared0     --          --
10.1.14.142               up     shared0     --          --
10.1.14.141               up     shared0     --          --
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     failed    10.00s    [bge3 bge2 bge1]

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      shared0     -s-----   down      failed    failed
bge2        no      shared0     -------   down      failed    failed
bge1        no      shared0     -------   down      failed    failed

global# ipmpstat -p
^C

global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        multicast 172.16.10.143       --
bge2        multicast 172.16.10.142       --
bge1        multicast 172.16.10.141       --
The whole IPMP group shared0 is down, all the relevant ipmpstat output reflects that, no probes are listed, and the probe RTT reports are no longer updated.

An additional scenario might be to have two separate paths, and have something other than a link failure force the failover; one way to simulate that is shown below.
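For example, if_mpadm(1M) can administratively offline an interface, forcing its addresses to fail over without touching the switch. A quick sketch, using this setup's bge1:

global# if_mpadm -d bge1
global# if_mpadm -r bge1

The -d option detaches (offlines) the interface so its addresses move to the remaining interfaces in the group; -r reattaches it.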

Thursday Jan 08, 2009

Crossbow is delivered--Traveling VNICs and more

With Solaris Express Community Edition build 105, the initial implementation of Network Virtualization and Resource Control, known as Project Crossbow, is delivered into the main networking code base and available in the distributed images. No need to install additional software! The multi-year effort has reached a major milestone.

The feature I have been waiting for the most is the virtual NICs (VNICs). This allows me to create multiple data links using a single physical network interface, such as on my laptop. Each data link can be assigned to a different zone, and with exclusive IP Instance zones, each zone can have separate IP management and characteristics. The most useful one for me is to have one zone working on the native local network, and another zone with IPsec enabled, for a VPN connection.

Previously, I demonstrated how to do this with two NICs, and with one NIC and VNICs. I also have an example of how to achieve this with VLANs.

Now that Crossbow is integrated, things are much simpler!

Some Specifics

First thing I did was create a VNIC. Note that the dladm(1M) commands have changed slightly, both in general and for VNICs. To see what physical NICs are available, use show-phys (the option used to be show-dev). On my laptop it looks like this.
global# dladm show-phys
LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
ath0         WiFi                 down       0      unknown   ath0
bge0         Ethernet             up         1000   full      bge0
Data links are the entities that can be assigned to a zone, so let's see those.
global# dladm show-link
LINK        CLASS    MTU    STATE    OVER
ath0        phys     1500   down     --
bge0        phys     1500   up       --
Now I create a VNIC.
global# dladm create-vnic -l bge0 vpn0

global# dladm show-link
LINK        CLASS    MTU    STATE    OVER
ath0        phys     1500   down     --
bge0        phys     1500   up       --
vpn0        vnic     1500   up       bge0
I used the basic create-vnic format, specifying only the device over which to create the VNIC. I let Solaris determine the MAC address, and I did not assign any other properties to the VNIC. The name of a data link must start with letters and end with a number, so I chose vpn0 to make clear what I want to use it for. I could have called it vpn123456789, showing that the number part can be quite large.

I now create a zone with the following configuration.

global# zonecfg -z vpn info
zonename: vpn
zonepath: /zones/vpn
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address not specified
        physical: vpn0
        defrouter not specified
The key items are the ip-type and the network configuration: the zone is an exclusive IP Instance zone, and I assigned only the vpn0 data link to it. The zone is a sparse zone, and the extra inherited directory that IPsec used to require is no longer needed (I was curious whether this had been fixed).

After installing (I made a clone of an existing zone) and before booting the zone, I copied into the zone a customized sysidcfg file.

global# cat /zones/vpn/root/etc/sysidcfg
system_locale=C
terminal=xterm
network_interface=PRIMARY {
        dhcp
        protocol_ipv6=no
}
nfs4_domain=dynamic
security_policy=NONE
name_service=NONE
timezone=US/Eastern
service_profile=limited_net
timeserver=localhost
root_password=YyDStVVvtZX6.
Upon booting, the zone gets an IP address via DHCP. This will be useful for being on a variety of networks. When using wireless, I won't have to change the zone's configuration. I will, however, have to recreate vpn0 on top of ath0.
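Recreating the VNIC over the wireless NIC is just a delete and a create; a minimal sketch, reusing the vpn0 name from above (with the vpn zone halted first):

global# dladm delete-vnic vpn0
global# dladm create-vnic -l ath0 vpn0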

Now I can happily be on a public and the corporate network at the same time. This example has me using the non-global zone to run VPN within. However, depending on my needs at the moment, I could have the global zone be VPNed in, and the non-global zone be on the public network. It is just a matter of where I run the VPN software.

global# ifconfig -a4
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ath0: flags=201000802 mtu 1500 index 2
        inet 0.0.0.0 netmask 0
        ether 0:b:6b:80:bc:59
bge0: flags=201004843 mtu 1500 index 3
        inet 192.168.15.104 netmask ffffff00 broadcast 192.168.15.255
        ether 0:c0:9f:5b:43:33

vpn# ifconfig -a4
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
vpn0: flags=201004843 mtu 1500 index 2
        inet 192.168.15.105 netmask ffffff00 broadcast 192.168.15.255
        ether 2:8:20:86:53:e3
ip.tun0: flags=10010008d1 mtu 1366 index 3
        inet tunnel src 192.168.15.105 tunnel dst 192.168.101.183
        tunnel security settings  -->  use 'ipsecconf -ln -i ip.tun0'
        tunnel hop limit 60
        inet 192.168.48.27 --> 192.168.76.43 netmask ffffffff
This demonstrates one of the features of Crossbow. I will now be able to do a lot more with zones, while taking advantage of IP Instances, without needing multiple NICs. This is great for customer demos. I have not covered items such as the virtual switch that is created, or the ability to snoop traffic between zones now, or all the resource monitoring and controls that Crossbow offers. More on that elsewhere and in the future.

P.S. Crossbow touches much of the generic LAN driver (GLD) framework, delivers a new MAC interface, and makes use of improvements in dladm and in data link naming (vanity naming from Project Clearview), so it amounts to a lot of code changes. There is a high level of interest in getting the VNIC features into Solaris 10. If you have a strong need for that, please add a Service Record using your support channel to Change Request 6790102.

Wednesday Jan 30, 2008

IP Instances with ce NICs patches are in progress!

The patches for the ON (OS and networking) part of the changes to allow IP Instances to work with the ce NICs (CR 6616075) are in progress. The patch numbers will be:

137042-01 (SPARC)
137043-01 (i386, x86, x64)

The patches should be available in about two weeks, after final internal and customer testing. If you have a service contract, you can get a temporary T-patch as interim relief, with all the caveats of a T-patch. Folks with an escalation should already have been notified. The fix will also be delivered in the next update of Solaris 10; it did not make the Beta of that update (Update 5), however. Don't forget, you also need the ce patch:

118777-12 (SPARC)
118778-11 (i386, x86, x64)

Happy IP-Instancing with ce!!

Wednesday Dec 05, 2007

More good news for IP Instances

Continuing progress on the use of IP Instances on the full line of SPARC systems. The e1000g Intel PCI-X Gigabit Ethernet UTP and MMF adapters are now supported on the Sun Fire UltraSPARC servers. The NICs are:
  • x7285a - Sun PCI-X Dual GigE UTP Low Profile, RoHS-6 compliant
  • x7286a - Sun PCI-X GigE MMF Low Profile, RoHS-6 compliant
The NICs are supported on the V490, V890, E2900, E4900, E6900, E20K, and E25K systems. This is an alternative for those waiting for the GigaSwift (ce) NIC to be supported, or who don't need quad-port cards. Since the driver used is the e1000g, which is a GLDv3 driver, full support for IP Instances is available using these cards.

Wednesday May 30, 2007

In two places at once?

Some background. Like any other mobile workforce, Sun employees need to access internal network services while not in the office. While we use commercial products, Sun engineers have also been working on a *product* called punchin. Punchin is a Sun-created VPN technology that uses the native IPsec/IKE of the operating system in which it runs. It is the primary VPN solution for Solaris servers and clients, and will be expanding to other operating systems such as MacOS X in the near future.

Security policy states that if a system is 'punched in', it must not be on the public network at the same time. In other words, while the VPN tunnel is up, direct access to the Internet is restricted, especially access from the Internet to the system. While a system is on the VPN, it cannot also be your Internet-facing personal web server, for example.

Bringing up the VPN is an interactive process, requiring a challenge/response sequence. If you are like me, you may have a system at home and while at work need to access from that system some data on the corporate network. This is a catch-22, since the connection you use remotely to activate the VPN breaks as you start the VPN establishment process (enforcing the policy of being on only one network at a time).

Introduce Solaris Containers, or zones. Each zone looks like its own system. However, they share a single kernel and single IP. But wait, there is this new thing called IP Instance that allows zones configured as having an exclusive IP Instance to have their own IP (they already have their own TCP and UDP for all practical purposes). And wouldn't it be great if I could do this with just one NIC? Hey, Project Crossbow has IP Instances and VNICs. Great!

Now for the reality check. As I was told not so long ago, Rome was not built in one day. IP Instances are in Solaris Nevada and targeted for Solaris 10 7/07. VNICs are only available in a snapshot applied via BFU to Nevada build 61. [See also Note 1 below.]

So, let's see how to do this with just IP Instances.

First, each IP Instance (at minimum the global zone and one non-global zone) needs its own NIC, so I need at least two NICs. Not all NICs support IP Instances: the one(s) for the non-global zone(s) must use GLDv3 drivers.

In my case, I am using a Sun Blade 100 with an on-board eri 100Mbps Ethernet interface. I purchased an Intel PRO/1000 MT Server NIC, which uses the e1000g driver. Here is a list of NICs that are known to work with IP Instances and VNICs.

After installing Solaris Nevada, I created my non-global zone with the following configuration:

global# zonecfg -z vpnzone info
zonename: vpnzone
zonepath: /zones/vpnzone
brand: native
autoboot: true
bootargs: 
pool: 
limitpriv: 
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /etc/crypto/certs
fs:
        dir: /usr/local
        special: /zones/vpnzone/usr-local
        raw not specified
        type: lofs
        options: []
net:
        address not specified
        physical: e1000g0
global#
I had to include an additional inherit-pkg-dir directive for this sparse zone, because currently some of the crypto bits are not duplicated into a non-global zone; without it, even the digest command would fail. I also needed to provide a private directory for /usr/local, since that is where the Punchin packages get installed by default.

Once I installed and configured vpnzone, I was able to install and configure the Punchin client.

However, this required two NICs. So to use just one, I created a VNIC for my VPN zone.

global# dladm show-dev
eri0            link: unknown   speed: 0Mb     duplex: unknown
e1000g0         link: up        speed: 100Mb   duplex: full
global# dladm show-link
eri0            type: legacy    mtu: 1500      device: eri0
e1000g0         type: non-vlan  mtu: 1500      device: e1000g0
global# dladm create-vnic -d e1000g0 -m 0:4:23:e0:5f:1 1
global# dladm show-link
eri0            type: legacy    mtu: 1500      device: eri0
e1000g0         type: non-vlan  mtu: 1500      device: e1000g0
vnic1           type: non-vlan  mtu: 1500      device: vnic1
global# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 192.168.1.58 netmask ffffff00 broadcast 192.168.1.255
        ether 0:4:23:e0:5f:6b
global# 
I chose to provide my own MAC address, based on the address of the underlying NIC.

I modified the non-global zone configuration:

global# zonecfg -z vpnzone info
zonename: vpnzone
zonepath: /zones/vpnzone
brand: native
autoboot: true
bootargs: 
pool: 
limitpriv: 
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /etc/crypto/certs
fs:
        dir: /usr/local
        special: /zones/vpnzone/usr-local
        raw not specified
        type: lofs
        options: []
net:
        address not specified
        physical: vnic1
global#
Now I can access the system at home while I am not there, zlogin into vpnzone, punch in, and be connected to our internal network. This is really significant for me: at home I have 6Mbps downstream compared to only 600Kbps in the office, so downloading the DVD ISO that I used to create this setup took one tenth the time at home that it would have at work.

[1] I also used the SUNWonbld package. This package is specific to build 61!

Because I install BFUs a lot, I have added the following to my .profile

if [ -d /opt/onbld ]
then
   # fastfs and bfuld come with the ON build tools (SUNWonbld);
   # bfu uses the FASTFS, BFULD, and GZIPBIN environment variables.
   FASTFS=/opt/onbld/bin/`uname -p`/fastfs ; export FASTFS
   BFULD=/opt/onbld/bin/`uname -p`/bfuld ; export BFULD
   GZIPBIN=/usr/bin/gzip ; export GZIPBIN
   PATH=$PATH:/opt/onbld/bin
fi

Saturday May 12, 2007

Network performance differences within an IP Instance vs. across IP Instances

When consolidating or co-locating multiple applications on the same system, inter-application network traffic typically stays within the system, since the shared IP in the kernel recognizes that the destination address is local and loops the data back up the stack without ever putting it on a physical network. This has introduced some challenges for customers deploying Solaris Containers (specifically zones) where different Containers are on different subnets and traffic between them is expected to leave the system (perhaps through a router or firewall that restricts or monitors inter-tier traffic).

With IP Instances in Solaris Nevada build 57, targeted for Solaris 10 7/07, there is the ability to configure zones with exclusive IP Instances, thus forcing all traffic leaving a zone out onto the network. This introduces additional network stack processing on both transmit and receive. Prompted by some customer questions about that cost, I performed a simple test to measure the difference.

On two systems, a V210 with two 1.336GHz CPUs and 8GB memory, and an x4200 with two dual-core Opteron XXXX and 8GB memory, I ran FTP transfers between zones. My switch is a Netgear GS716T Smart Switch with 1Gbps ports. The V210 has four bge interfaces and the x4200 has four e1000g interfaces.

I created four zones. Zones x1 and x2 have eXclusive IP Instances, while zones s1 and s2 have Shared IP Instances (IP is shared with the global zone). Both systems are running Solaris 10 7/07 build 06.

Relevant zonecfg info is as follows (all zones are sparse):


v210# zonecfg -z x1 info
zonename: x1
zonepath: /localzones/x1
...
ip-type: exclusive
net:
        address not specified
        physical: bge1

v210# zonecfg -z s1 info
zonename: s1
zonepath: /localzones/s1
...
ip-type: shared
net:
        address: 10.10.10.11/24
        physical: bge3
 
As a test user in each zone, I created a file using 'mkfile 1000m /tmp/file1000m'. Then I used ftp to transfer it between zones. No tuning was done whatsoever.

The results are as follows.

V210: (bge)

Exclusive to Exclusive
x1# /usr/bin/time ftp x2 << EOF
cd /tmp
bin
put file1000m
EOF

real       17.0
user        0.2
sys        11.2

Exclusive to Shared
x1# /usr/bin/time ftp s2 << EOF
cd /tmp
bin
put file1000m
EOF

real       17.3
user        0.2
sys        11.6

Shared to Shared
s2# /usr/bin/time ftp s1 << EOF
cd /tmp
bin
put file1000m
EOF

real        6.6
user        0.1
sys         5.3


X4200: (e1000g)

Exclusive to Exclusive
x1# /usr/bin/time ftp x2 << EOF
cd /tmp
bin
put file1000m
EOF

real        9.1
user        0.0
sys         4.0

Exclusive to Shared
x1# /usr/bin/time ftp s2 << EOF
cd /tmp
bin
put file1000m
EOF

real        9.1
user        0.0
sys         4.1

Shared to Shared
s2# /usr/bin/time ftp s1 << EOF
cd /tmp
bin
put file1000m
EOF

real        4.0
user        0.0
sys         3.5
I ran each test several times and picked a result that seemed average across the runs. Not very scientific, but consistent. In table form, the elapsed (real) times in seconds:

                          V210 (bge)   X4200 (e1000g)
Exclusive to Exclusive       17.0           9.1
Exclusive to Shared          17.3           9.1
Shared to Shared              6.6           4.0

For reference, moving 1000MB in 17.0 seconds is roughly 59MB/s (about 470Mbps) over the wire, while the shared-to-shared loopback path moves the same data at about 150MB/s.

Something I noticed that surprised me was that time spent in IP and the driver is measurable on the V210 with bge, and much less so on the x4200 with e1000g.
