In two places at once?

Some background. Like any other mobile workforce, Sun employees need to access internal network services while not in the office. While we use commercial products, Sun engineers have also been working on a *product* called punchin. Punchin is a Sun-created VPN technology that uses the native IPsec/IKE of the operating system on which it runs. It is the primary Solaris VPN solution for Solaris servers and clients, and will be expanding to other operating systems such as Mac OS X in the near future.

Security policy states that if a system is 'punched in', it must not be on the public network at the same time. In other words, while the VPN tunnel is up, direct access to the Internet is restricted, especially access from the Internet to the system. While a system is on the VPN, it cannot also be your Internet-facing personal web server, for example.

Bringing up the VPN is an interactive process, requiring a challenge/response sequence. If you are like me, you may have a system at home and, while at work, need to access data on the corporate network from that system. This is a catch-22, since the connection you use remotely to activate the VPN breaks as soon as you start the VPN establishment process (enforcing the policy of being on only one network at a time).

Enter Solaris Containers, or zones. Each zone looks like its own system. However, zones share a single kernel and a single IP. But wait, there is this new thing called IP Instances that allows a zone configured with an exclusive IP Instance to have its own IP (zones already have their own TCP and UDP for all practical purposes). And wouldn't it be great if I could do this with just one NIC? Hey, Project Crossbow has IP Instances and VNICs. Great!

Now for the reality check. As I was told not so long ago, Rome was not built in a day. IP Instances are in Solaris Nevada and targeted for Solaris 10 7/07. VNICs are only available in a snapshot applied via BFU to Nevada build 61. [See also Note 1 below.]

So, let's see how to do this with just IP Instances.

First, since each IP Instance (at a minimum, the global zone and one non-global zone) needs its own NIC, I need at least two NICs. Not all NICs support IP Instances: the one(s) used by the non-global zone(s) must be driven by GLDv3 drivers.

In my case, I am using a Sun Blade 100 with an on-board eri 100Mbps Ethernet interface. I purchased an Intel PRO/1000 MT Server NIC, which uses the e1000g driver. Here is a list of NICs that are known to work with IP Instances and VNICs.
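A quick way to tell whether an interface is behind a GLDv3 driver is dladm show-link: pre-GLDv3 drivers are reported with type 'legacy' and cannot be given to an exclusive-IP zone. On this system it looks like the following (the same output appears again in the VNIC section below; the trailing annotations are mine):

global# dladm show-link
eri0            type: legacy    mtu: 1500       device: eri0        <- legacy driver, global zone only
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0     <- GLDv3, usable for an exclusive-IP zone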

After installing Solaris Nevada, I created my non-global zone with the following configuration:

global# zonecfg -z vpnzone info
zonename: vpnzone
zonepath: /zones/vpnzone
brand: native
autoboot: true
bootargs: 
pool: 
limitpriv: 
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /etc/crypto/certs
fs:
        dir: /usr/local
        special: /zones/vpnzone/usr-local
        raw not specified
        type: lofs
        options: []
net:
        address not specified
        physical: e1000g0
global#
I had to include an additional inherit-pkg-dir directive for this sparse zone, because currently some of the crypto configuration is not duplicated into a non-global zone. Without it, even the digest command would fail, for example. I also needed to provide a private directory for /usr/local, since that is where the Punchin packages get installed by default.
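For reference, here is a sketch of a zonecfg session that would produce the configuration above. This is standard zonecfg syntax rather than a capture of my actual session, and the lofs source directory (/zones/vpnzone/usr-local) has to exist before the zone is booted:

global# zonecfg -z vpnzone
vpnzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:vpnzone> create
zonecfg:vpnzone> set zonepath=/zones/vpnzone
zonecfg:vpnzone> set autoboot=true
zonecfg:vpnzone> set ip-type=exclusive
zonecfg:vpnzone> add inherit-pkg-dir
zonecfg:vpnzone:inherit-pkg-dir> set dir=/etc/crypto/certs
zonecfg:vpnzone:inherit-pkg-dir> end
zonecfg:vpnzone> add fs
zonecfg:vpnzone:fs> set dir=/usr/local
zonecfg:vpnzone:fs> set special=/zones/vpnzone/usr-local
zonecfg:vpnzone:fs> set type=lofs
zonecfg:vpnzone:fs> end
zonecfg:vpnzone> add net
zonecfg:vpnzone:net> set physical=e1000g0
zonecfg:vpnzone:net> end
zonecfg:vpnzone> verify
zonecfg:vpnzone> commit
zonecfg:vpnzone> exit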

Once I installed and configured vpnzone, I was able to install and configure the Punchin client.
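Installing and booting the zone is the usual zoneadm sequence; roughly this (zlogin -C attaches to the zone console so you can answer the system identification questions on first boot):

global# zoneadm -z vpnzone install
global# zoneadm -z vpnzone boot
global# zlogin -C vpnzone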

However, this required two NICs. So to use just one, I created a VNIC for my VPN zone.

global# dladm show-dev
eri0            link: unknown   speed:     0Mb  duplex: unknown
e1000g0         link: up        speed:   100Mb  duplex: full
global# dladm show-link
eri0            type: legacy    mtu: 1500       device: eri0
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0
global# dladm create-vnic -d e1000g0 -m 0:4:23:e0:5f:1 1
global# dladm show-link
eri0            type: legacy    mtu: 1500       device: eri0
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0
vnic1           type: non-vlan  mtu: 1500       device: vnic1
global# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 192.168.1.58 netmask ffffff00 broadcast 192.168.1.255
        ether 0:4:23:e0:5f:6b
global# 
I chose to provide my own MAC address, based on the address of the underlying NIC.

I modified the non-global zone configuration:

global# zonecfg -z vpnzone info
zonename: vpnzone
zonepath: /zones/vpnzone
brand: native
autoboot: true
bootargs: 
pool: 
limitpriv: 
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /etc/crypto/certs
fs:
        dir: /usr/local
        special: /zones/vpnzone/usr-local
        raw not specified
        type: lofs
        options: []
net:
        address not specified
        physical: vnic1
global#
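The only change from the earlier configuration is the physical device of the net resource. A sketch of the zonecfg commands for that change (the zone needs a reboot afterwards to start using the new data link):

global# zonecfg -z vpnzone
zonecfg:vpnzone> select net physical=e1000g0
zonecfg:vpnzone:net> set physical=vnic1
zonecfg:vpnzone:net> end
zonecfg:vpnzone> commit
zonecfg:vpnzone> exit
global# zoneadm -z vpnzone reboot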
Now I can access the system at home while I am not there, zlogin into vpnzone, punch in, and be connected to our internal network. This is really significant for me, since at home I have 6Mbps downstream compared to only 600Kbps in the office. So downloading the DVD ISO that I used to create this setup took one-tenth the time at home that it would have at work.

[1] I also used the SUNWonbld package. This package is specific to build 61!

Because I install BFUs a lot, I have added the following to my .profile:

if [ -d /opt/onbld ]
then
   FASTFS=/opt/onbld/bin/`uname -p`/fastfs ; export FASTFS
   BFULD=/opt/onbld/bin/`uname -p`/bfuld ; export BFULD
   GZIPBIN=/usr/bin/gzip ; export GZIPBIN
   PATH=$PATH:/opt/onbld/bin
fi
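FASTFS, BFULD, and GZIPBIN are the helper variables that the bfu script looks for (the fastfs and bfuld tools live under /opt/onbld/bin), so with these set, applying an archive is just a matter of pointing bfu at the archive directory, for example (the path here is purely illustrative):

global# bfu /export/archives/nightly/sparc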
Comments:

You gotta love when remote offices have piddly little links. I'm glad that punching-in to the Menlo Park server gives you much better throughput. (Make sure your TCP receive windows are big enough, though!)

Posted by Dan McDonald on June 19, 2007 at 08:08 AM EDT #
