Tuesday Dec 14, 2010

Using Crossbow and Solaris 11 Express Zones for a single machine proof of concept environment with Puppet

My last blog entry described my debugging experience with Puppet and promised to share the setup that I used. This entry follows up with a description of my Crossbow + NAT + S11 Zones proof of concept.

One of the very nice features in Solaris 11 Express is the inclusion of the Crossbow virtual networking infrastructure into the mainline S11 Express code. I was familiar with Crossbow from some of the older presentations from CommunityOne when it was still just an OpenSolaris project. Now that it has been included in the mainline S11 Express codebase, I decided it was time to check things out and see how I could leverage it to do some proof of concept testing with my continuing evaluation of Puppet.

What is Puppet?

Puppet is a data center automation tool that we are evaluating to harmonize our configurations across multiple systems. As most people know, hand building and tweaking systems only works when you have a small number of systems, and even then it is error prone and not optimal. Puppet is a tool (one of many such tools) that seeks to automate these tasks with configuration servers and configuration agents.

Puppetmasterd is the configuration server that holds the configuration for an entire site. Inside the site definition are host mappings that map hosts to the particular configurations that their agents are expected to conform to.

Puppetd is the agent on the client side that contacts a Puppet server, retrieves the expected host configuration definition, and then is the agent of change that implements the changes on the client. It will periodically poll the puppetmasterd service for any changes to the host configuration and will then ensure compliance to the new changes.
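As a concrete illustration of that master/agent split, a trivial site manifest on the puppetmaster might look like the fragment below. This is a hypothetical sketch, not my actual configuration: the node name, file path, and content are made up, and only the general node/resource syntax reflects how Puppet manifests work.

```puppet
# /etc/puppet/manifests/site.pp on the puppetmaster (hypothetical example)
node 'puppetclient' {
    # Ensure a file exists with specific ownership and permissions;
    # puppetd on the client will enforce this on every poll.
    file { '/etc/motd':
        ensure  => present,
        owner   => 'root',
        mode    => '0644',
        content => "Managed by Puppet\n",
    }
}
```

When puppetd on the host named puppetclient polls the master, it receives this compiled configuration and creates or corrects /etc/motd as needed.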

One thing that you have to realize is that Puppet is by nature a multi-system architecture. Even the smallest configuration needs a puppetmaster server and a puppetd client, for a minimum of two systems. In the olden days, that would have meant two independent computers/servers to simulate the test environment, not to mention the networking infrastructure just to connect the two machines together (yes, I know that a network switch is cheap, but it is not central to testing Puppet, which is my main goal). All these pieces cost money and are, in my opinion, a complete waste of money and resources just to test and evaluate Puppet, which is very lightweight by nature.

So, I decided to test and evaluate Puppet on my workstation, a Sun Ultra 24 running Solaris 11 Express. I knew that I could create zones on it, but I did not want to give each zone a virtual NIC with a real IP address, which is how networking is normally plumbed in Solaris. Since this was just testing and evaluation, I did not want it on the public work network, in case it disrupted someone else, and I did not want to hog real IPs just for testing inside my private network.

Enter Crossbow Virtualized Networking:

Since I had read the S11 Express release notes, I remembered that Crossbow was integrated into the S11 Express build I was using, and I decided to use it for my testing. As usual, blogs.sun.com was the resource to consult, and I found Nicolas Droux's blog entry on setting up an etherstub network with NAT (see his original entry here).

The Crossbow Etherstub + NAT Howto:

You can think of an etherstub network conceptually as a virtual Ethernet switch or VLAN. To this etherstub you connect virtual NICs (or VNICs) created against that etherstub. VNICs created on the same etherstub can all communicate with one another (analogous to having multiple system NICs on the same VLAN). There can, of course, be multiple etherstubs on a single host, using Crossbow to simulate multiple VLANs.

The original instructions had the following:

# dladm create-etherstub etherstub0
# dladm create-vnic -d etherstub0 vnic0
# dladm create-vnic -d etherstub0 vnic1
# dladm create-vnic -d etherstub0 vnic2

However, between the time Nicolas wrote his blog and now, the dladm flag that specifies the etherstub for the create-vnic subcommand changed from -d to -l (that is a lowercase L, as in Llama).

After consulting the updated man pages, the new instructions are:

# dladm create-etherstub etherstub0
# dladm create-vnic -l etherstub0 vnic0
# dladm create-vnic -l etherstub0 vnic1
# dladm create-vnic -l etherstub0 vnic2
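As a sanity check (a step I am adding here, not one from Nicolas' entry), dladm can list the etherstub and the VNICs attached to it before you go any further:

```shell
# dladm show-etherstub
# dladm show-vnic
```

The first command should list etherstub0, and the second should show vnic0, vnic1, and vnic2 created over it.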

Basically, after the VNICs are created, I assigned vnic0 to the global zone, vnic1 to the puppetmaster zone, and vnic2 to the puppetclient zone. Follow Nicolas' blog entry to see how to do that in the zonecfg zone definition.
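For reference, the zonecfg portion looks roughly like the session below. This is a sketch of a standard exclusive-IP zone definition rather than a transcript of my exact session (the zone name is mine; see Nicolas' entry for the authoritative steps):

```shell
# zonecfg -z puppetmaster
zonecfg:puppetmaster> set ip-type=exclusive
zonecfg:puppetmaster> add net
zonecfg:puppetmaster:net> set physical=vnic1
zonecfg:puppetmaster:net> end
zonecfg:puppetmaster> verify
zonecfg:puppetmaster> commit
zonecfg:puppetmaster> exit
```

With ip-type=exclusive, the zone gets its own IP stack on vnic1, so it can be addressed and routed independently of the global zone. The puppetclient zone is configured the same way with vnic2.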

After that, the steps are:

  1. Plumb vnic0 (ifconfig vnic0 plumb; ifconfig vnic0 up)
  2. Enable routing in the global zone (routeadm -u -e ipv4-forwarding)
  3. Create NAT rules in /etc/ipf/ipnat.conf (replace e1000g0 with the interface holding your default route)
    map e1000g0 -> 0/32 portmap tcp/udp auto
    map e1000g0 -> 0/32
  4. Enable ipfilter if not already enabled (svcadm enable network/ipfilter)
  5. Check the NAT mappings were taken in and accepted (ipnat -l)

At this point you can boot your zones and configure them with IPs in the NAT'ed subnet. They should then be able to reach, through NAT, any outside machine that the global zone can access.
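Inside each zone, the addressing then looks like any other Solaris interface configuration. The 192.168.100.0/24 subnet below is purely a hypothetical example (substitute your own private range), with the global zone's vnic0 address serving as the default route for the zones:

```shell
# In the global zone: vnic0 gets the gateway address, e.g.
# ifconfig vnic0 192.168.100.1 netmask 255.255.255.0 up

# In the puppetmaster zone (vnic1):
# ifconfig vnic1 plumb
# ifconfig vnic1 192.168.100.2 netmask 255.255.255.0 up
# route add default 192.168.100.1
```

The puppetclient zone is configured the same way on vnic2 with another address in the same subnet, e.g. 192.168.100.3.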

The Results:

After completing the zone installations and the Puppet installation, I had a puppetmaster and a puppetclient that could talk to each other without being exposed to the public network. I did not have to purchase any new equipment, and the configuration was about as simple as can be. The best part is that I can keep creating Puppet clients as zones, give them different configurations that inherit different manifests, and test all of this without buying anything extra. Since zones are very lightweight compared to fully virtualized VMs, I can run many of them on my workstation without a slowdown, and they won't chew up all my processor time.

I can imagine many other great uses of Zones + Crossbow technology. It is immensely useful when you are testing not performance but features and proofs of concept to justify further investment. It can simulate small deployment networks on a single host without further infrastructure investment in compute resources or networking.

Tuesday Dec 07, 2010

Debugging Puppetmasterd from Puppet in Solaris 11 Express (and others)

I am evaluating Puppet for possible use in our labs. For those of you who do not know of Puppet, it is a Ruby-based automation tool in the same vein as CFEngine and other system automation tools. It comes in very handy in medium to large deployments to guarantee that site-wide configurations are identical. Hand building and tweaking systems to keep configuration details consistent becomes tedious and error prone as the number of systems grows.

As I was going through the Puppet installation on my Solaris 11 Express workstation I noticed that puppetmasterd was not starting an instance of puppet. I had followed the instructions in the Configuration Guide from PuppetLabs for Puppet 2.6.4. I had installed from the Ruby source instead of using one of the many prebuilt packages available on the interwebs and package repositories.

One major thing that I found non-intuitive with Puppet was the set of options you can use with puppetmasterd to debug problems. This is the daemon that starts the puppetmaster service on the host, listening on port 8140. When I ran it, the puppetmasterd output was clean and it returned me to the shell. Unfortunately, after testing with my Puppet client, it appeared that the service was not running. I confirmed this in the puppetmaster zone: netstat showed nothing bound to port 8140, and ps -ef | grep puppet showed no process.
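The checks I describe above amount to the following two commands, run inside the puppetmaster zone:

```shell
# netstat -an | grep 8140
# ps -ef | grep puppet
```

Both came back empty, confirming that no puppetmasterd process was running despite the clean startup output.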

One critical thing that I did was follow the instructions to use puppetmasterd --mkusers, which automates a bunch of the setup tasks like creating the puppet users and groups, making certificates, and much more.

My first reaction was to run puppetmasterd --help and look for options that might give me debug output. There is a debug flag (--debug) and also a verbose flag (-v). I tried running puppetmasterd with both and still didn't get anything. Then, thinking the output was being sent to the console, I redirected the output to a file since I wasn't logged in to the zone console on the puppetmaster. No such luck either. After searching around on the interwebs, I finally found the debug output I was looking for: I was missing the trace option, which prints the stack trace.

puppetmasterd --no-daemonize --verbose --debug --trace

Once I did that, I saw the following:

err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /var/lib/puppet/rrd

Sure enough, when I checked the permissions of /var/lib/puppet, it was all owned by root:root.

Then, and only then, did it show me what I needed to do to fix the problem. Once I ran chown -R puppet:puppet /var/lib/puppet, everything was happy! My main gripe is that the debug and verbose options alone should have surfaced that error; it appeared only when I added the trace flag. I expected the trace flag to show me stack traces, not to reveal the additional debug message that actually told me what the problem was!

I will have a future followup blog about how I used new features in Solaris 11 from OpenSolaris such as Zones and Crossbow networking to virtualize my network connection and run it through NAT on my workstation for testing. I was able to have a puppet master and multiple puppet clients virtualized on a single machine connected through a virtual etherstub network courtesy of Crossbow.


I am part of the OpenSolaris.org Development Operations team. I have been at Sun since October 2007, where I originally joined the Sun Streaming System, an IPTV video streaming server solution. My primary interests are networking and all related technology.

