My last blog entry described my debugging experience with Puppet and promised to share the setup that I used. This entry follows up on that one and describes my Crossbow + NAT + S11 Zones proof of concept.
One of the very nice features in Solaris 11 Express is the inclusion of the Crossbow virtual networking infrastructure into the mainline S11 Express code. I was familiar with Crossbow from some of the older presentations from CommunityOne when it was still just an OpenSolaris project. Now that it has been included in the mainline S11 Express codebase, I decided it was time to check things out and see how I could leverage it to do some proof of concept testing with my continuing evaluation of Puppet.
What is Puppet?
Puppet is a data center automation tool that we are evaluating to harmonize our configurations across multiple systems. As most people know, hand building and tweaking systems only works when you have a small number of systems, and even then it is error prone and not optimal. Puppet is a tool (one of many such tools) that seeks to automate these tasks with configuration servers and configuration agents.
Puppetmasterd is the configuration server that holds the configuration for an entire site. Inside the site definition are host mappings that map hosts to the particular configurations that their agents are expected to conform to.
Puppetd is the agent on the client side that contacts a Puppet server, retrieves the expected host configuration definition, and then implements the required changes on the client. It periodically polls the puppetmasterd service for any changes to the host configuration and then ensures compliance with the new changes.
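As a rough sketch of how the two daemons interact from the command line (the hostname is a placeholder for wherever your puppetmasterd runs, and the flags reflect the older puppetd CLI of that era, so check your installed version's man page):

```shell
# One-off run on the client: contact the master, fetch this host's
# configuration, apply it, and print what changed.
# "puppetmaster.local" is a placeholder hostname.
puppetd --test --server puppetmaster.local

# Normal operation: run as a daemon and poll the master periodically,
# re-applying the configuration whenever it changes.
puppetd --server puppetmaster.local
```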
One thing you have to realize is that Puppet is, by nature, a multi-system architecture. Even the smallest configuration needs a puppetmaster server and a puppetd client, for a minimum of two systems. In the olden days, that would have meant two independent computers/servers to simulate the test environment, not to mention the networking infrastructure just to connect the two machines together (yes, I know that a network switch is cheap, but it is not central to testing Puppet, which is my main goal). All these pieces cost money and are, in my opinion, a complete waste of money and resources just to test and evaluate Puppet, which is very lightweight by nature.
So, I decided to test and evaluate Puppet on my workstation, a Sun Ultra 24 running Solaris 11 Express. I knew I could create zones on it, but I did not want to give each zone a NIC with a real IP address, which is how networking is normally plumbed in Solaris. Since this was only testing and evaluation, I did not want it on the public work network, both to avoid disrupting anyone else and to avoid hogging IPs just for testing inside my private network.
Enter Crossbow Virtualized Networking:
Having read the S11 Express release notes, I remembered that Crossbow was integrated into the S11 Express build I was using, so I decided to try using it to accomplish my testing. As usual, blogs.sun.com was the resource to consult, and I found Nicolas Droux's blog entry on setting up an etherstub network with NAT (see his original entry here).
The Crossbow Etherstub + NAT Howto:
You can think of an etherstub network conceptually as a virtual Ethernet switch or VLAN. To this etherstub, you connect virtual NICs (or VNICs) that are created against the etherstub identifier. VNICs that share the same etherstub identifier can communicate with all the other VNICs on that etherstub (analogous to having multiple system NICs on the same VLAN). There can, of course, be multiple etherstubs on a single host, allowing Crossbow to simulate multiple VLANs.
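To make the multiple-VLAN analogy concrete, here is a small sketch (the etherstub and VNIC names are my own illustrative choices) that adds a second, isolated etherstub alongside the first and then inspects the result:

```shell
# A second etherstub acts like a second, separate virtual switch/VLAN;
# a VNIC on etherstub1 cannot talk to VNICs on etherstub0.
dladm create-etherstub etherstub1
dladm create-vnic -l etherstub1 vnic3

# Show all VNICs and which link (etherstub) each hangs off of.
dladm show-vnic
```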
The original instructions had the following:
# dladm create-etherstub etherstub0
# dladm create-vnic -d etherstub0 vnic0
# dladm create-vnic -d etherstub0 vnic1
# dladm create-vnic -d etherstub0 vnic2
However, between the time Nicolas wrote his blog entry and now, the dladm flag that specifies the etherstub identifier for the create-vnic subcommand changed from -d to -l (that is a lowercase L, as in Llama).
After consulting the updated man pages, the new instructions are:
# dladm create-etherstub etherstub0
# dladm create-vnic -l etherstub0 vnic0
# dladm create-vnic -l etherstub0 vnic1
# dladm create-vnic -l etherstub0 vnic2
After the VNICs were created, I assigned vnic0 to the global zone, vnic1 to the puppetmaster zone, and vnic2 to the puppetclient zone. Follow Nicolas' blog entry to see how to do that in the zonecfg zone definition.
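For reference, a zone definition along these lines is what I mean (the zone name and zonepath here are my placeholders; consult Nicolas' entry and the zonecfg man page for the authoritative steps):

```shell
# Sketch: create an exclusive-IP zone that owns vnic1 outright,
# so the zone plumbs and addresses the VNIC itself.
zonecfg -z puppetmaster <<EOF
create
set zonepath=/zones/puppetmaster
set ip-type=exclusive
add net
set physical=vnic1
end
verify
commit
EOF
```

The puppetclient zone would get an identical definition with vnic2 as its physical link.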
After that, the steps are:
- Plumb vnic0 (ifconfig vnic0 plumb; ifconfig vnic0 192.168.0.1 up)
- Enable IPv4 forwarding in the global zone (routeadm -u -e ipv4-forwarding)
- Create NAT rules in /etc/ipf/ipnat.conf (replace e1000g0 with the interface holding your default route):
map e1000g0 192.168.0.0/24 -> 0/32 portmap tcp/udp auto
map e1000g0 192.168.0.0/24 -> 0/32
- Enable ipfilter if not already enabled (svcadm enable network/ipfilter)
- Check that the NAT mappings were read in and accepted (ipnat -l)
At this point you can boot your zones and configure them with IPs in the NAT'ed subnet (in my case 192.168.0.0/24). Through NAT, they should then be able to reach any outside machines that the global zone can access.
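Booting and checking a zone's NAT path can be sketched like this (the zone name, addresses, and target host are placeholders; any address in the NAT'ed subnet works, with the global zone's 192.168.0.1 as the default route):

```shell
# Boot the zone, then plumb and address its VNIC from the global zone.
zoneadm -z puppetmaster boot
zlogin puppetmaster ifconfig vnic1 plumb
zlogin puppetmaster ifconfig vnic1 192.168.0.2 netmask 255.255.255.0 up
zlogin puppetmaster route add default 192.168.0.1

# Verify the NAT path by pinging something outside the etherstub network.
zlogin puppetmaster ping some-outside-host
```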
After completing the zone installation and Puppet installation, I had a puppetmaster and a puppetclient that could talk to each other without being exposed to the public network. I did not have to purchase any new equipment, and the configuration was about as simple as can be. The best part is that I can keep creating Puppet clients as zones, give them different configurations that inherit different manifests, and test it all without buying anything extra. Since zones are very lightweight compared to fully virtualized VMs, I can run many zones on my workstation without a slowdown, and they won't chew up all my processor time.
I can imagine many other great uses of the Zones + Crossbow combination. It is immensely useful in situations where you are testing not performance, but features and proofs of concept to justify further investment. It can simulate small deployment networks on a single host without any additional infrastructure investment (compute resources or network).