Thursday Aug 14, 2014

VXLAN in Solaris 11.2

What is a VXLAN?

VXLAN, or Virtual eXtensible LAN, is essentially a tunneling mechanism used to provide isolated virtual Layer 2 (L2) segments that can span multiple physical L2 segments. Since it is a tunneling mechanism, it uses IP (IPv4 or IPv6) as its underlying network, which means we can have isolated virtual L2 segments over networks connected by IP. This allows Virtual Machines (VMs) to be in the same L2 segment even if they are located on systems in different physical networks. Some of the benefits of VXLAN include:

  • Better use of resources, i.e. VMs can be provisioned, based on system load, on systems that span different geographies.
  • VMs can be moved across systems without having to reconfigure the underlying physical network.
  • Fewer MAC address collision issues, i.e. MAC addresses may collide as long as they are in different VXLAN segments.
Isolated L2 segments can be supported by existing mechanisms such as VLANs, but VLANs don't scale: the 12-bit VLAN ID limits a network to 4094 VLANs (IDs 0 and 4095 are reserved), whereas VXLAN's 24-bit VXLAN Network Identifier (VNI) can provide up to 16 million isolated L2 segments.

Additional details, including how the protocol works, can be found in the VXLAN IETF RFC (RFC 7348). Note that Solaris uses the IANA-specified UDP port number 4789 for VXLAN.

The following is a quick primer on administering VXLAN in Solaris 11.2 using the Solaris administrative utility dladm(1M). Solaris Elastic Virtual Switch (EVS) can be used to manage VXLAN deployment automatically in a cloud environment; this will be the subject of a future discussion.

The following illustrates how VXLANs are created on Solaris:

[Figure: two systems connected over an IP network, each with a VXLAN endpoint address IPx; VXLAN links vxlan1 and vxlan2 carry VXLAN segments y and z, with VM1, VM2 and VM3 attached via VNICs]

where IPx is an IP address (IPv4 or IPv6) and VNIs y and z are different VXLAN segments. VM1, VM2 and VM3 are guests with interfaces configured on VXLAN segments y and z. vxlan1 and vxlan2 are VXLAN links, represented by a new link class called vxlan.

Creating VXLANs

To begin with, we need to create VXLAN links in the segments that we want to use for guests; let's assume we want to create segments 100 and 101. Additionally, we want to create the VXLAN links on IP (remember, VXLANs are overlays over IP networks), so we need the IP address over which to create the VXLAN links. Let's assume our endpoint on this system is 10.10.10.1 (in the following example this IP address resides on net4).

# ipadm show-addr net4
ADDROBJ           TYPE     STATE        ADDR
net4/v4           static   ok           10.10.10.1/24

Create VXLAN segments 100 and 101 on this IP address.

# dladm create-vxlan -p addr=10.10.10.1,vni=100 vxlan1 
# dladm create-vxlan -p addr=10.10.10.1,vni=101 vxlan2    

Notes:

  • In the above example we explicitly provide the IP address; however, you could also:
    • provide a prefix and prefix length to use an IP address that matches them, e.g.:
# dladm create-vxlan -p addr=10.10.10.0/24,vni=100 vxlan1
    • provide an interface (say net4 in our case) to pick an active address on that interface, e.g.:
# dladm create-vxlan -p interface=net4,vni=100 vxlan1
(Note that interface and addr cannot be provided together.)

  • VXLAN links can be created on an IP address over any interface, including IPoIB links, but not over IPMP, loopback, or VNI (Virtual Network Interface) links.
  • The IP address may belong to a VLAN segment.

Displaying VXLANs

Check if we have our VXLAN links:

# dladm show-vxlan                                          
LINK                ADDR                     VNI   MGROUP
vxlan1              10.10.10.1               100   224.0.0.1
vxlan2              10.10.10.1               101   224.0.0.1

One thing we haven't talked about so far is the MGROUP column. Recall from the RFC that VXLAN uses IP multicast to deliver broadcast, unknown unicast, and multicast traffic within a segment. So, we can assign a multicast address to each VXLAN segment that we create. If we don't specify a multicast address, the all-hosts multicast address (or all-nodes for IPv6) is assigned to the VXLAN segment. In the above case, since we didn't specify a multicast address, both vxlan1 and vxlan2 use the all-hosts multicast address (224.0.0.1).
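
If the default group doesn't suit your deployment, a multicast address can be specified when the VXLAN link is created. The following is a minimal sketch, assuming the property is named mgroup to match the MGROUP column above, and using an illustrative VNI (102) and group (224.1.1.1); check dladm(1M) for the exact property name:

# dladm create-vxlan -p addr=10.10.10.1,vni=102,mgroup=224.1.1.1 vxlan3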

The VXLAN links created, vxlan1 and vxlan2, are just like other datalinks (physical, VNIC, VLAN, etc.) and can be displayed using:

# dladm show-link
LINK                CLASS     MTU    STATE    OVER
...
vxlan1              vxlan     1440   up       --
vxlan2              vxlan     1440   up       --

The STATE reflects the state of the VXLAN links, which is based on the status of the underlying IP address (10.10.10.1 in this case). Note that the MTU is reduced to leave room for the VXLAN encapsulation (outer Ethernet, IP, UDP, and VXLAN headers) added to each packet on this VXLAN link.
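
If the guests need a full 1500-byte MTU on their VNICs, one approach is to raise the MTU of the underlying physical link so there is headroom for the encapsulation. A minimal sketch, assuming the net4 driver supports a larger MTU (1600 is an illustrative value):

# dladm set-linkprop -p mtu=1600 net4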

Now that we have our VXLAN links, we can create virtual NICs (VNICs) over these VXLAN links. Note that the VXLAN links themselves are not active links, i.e. you can't plumb an IP address or create flows on them, but they can be snooped.

# dladm create-vnic  -l vxlan1 vnic1                    
# dladm create-vnic  -l vxlan1 vnic2    
# dladm create-vnic  -l vxlan2 vnic3            
# dladm create-vnic  -l vxlan2 vnic4  

# dladm show-vnic                                           
LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS
vnic1               vxlan1            10000  2:8:20:d9:df:5f   random      0
vnic2               vxlan1            10000  2:8:20:72:9a:70   random      0
vnic3               vxlan2            10000  2:8:20:19:c7:14   random      0
vnic4               vxlan2            10000  2:8:20:88:98:6d   random      0

You can see from the above that the process of creating a VNIC on a VXLAN link is no different from creating one on any other link, such as a physical link, aggregation, or etherstub. This means that the VNICs created may belong to a VLAN, and properties (such as maxbw and priority) can be set on them, as shown below.
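
For instance, a bandwidth cap can be placed on one of the VNICs, or a VNIC can be created in a VLAN over the VXLAN link (VID 20 and vnic5 are illustrative values, not part of the example above):

# dladm set-linkprop -p maxbw=100M vnic1
# dladm create-vnic -l vxlan1 -v 20 vnic5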

Once created, these VNICs can be assigned explicitly to Solaris Zones. Alternatively, the VXLAN links can be set as the lower-link when configuring anet (automatic VNIC) links in Solaris Zones, as sketched below.
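
A minimal sketch of such an anet configuration (the zone name myzone is hypothetical):

# zonecfg -z myzone
zonecfg:myzone> add anet
zonecfg:myzone:anet> set lower-link=vxlan1
zonecfg:myzone:anet> end
zonecfg:myzone> commit
zonecfg:myzone> exit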

For Logical Domains on SPARC, the virtual switch (add-vsw) can be created on the VXLAN device, which means the vnets created on the virtual switch will be part of the VXLAN segment.
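
A sketch of what that might look like from the control domain (the switch name vxlan1-vsw is illustrative):

primary # ldm add-vsw net-dev=vxlan1 vxlan1-vsw primary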

Deleting VXLANs

A VXLAN link can be deleted once all the VNICs created over it have been deleted. Thus, in our case:

# dladm delete-vnic vnic1   
# dladm delete-vnic vnic2 
# dladm delete-vnic vnic3     
# dladm delete-vnic vnic4  

# dladm delete-vxlan vxlan1
# dladm delete-vxlan vxlan2  

Additional Notes:
  • VXLAN is not supported with direct I/O for Solaris Kernel Zone and LDom guests.
  • Hardware capabilities such as checksum offload and LSO are not available for the encapsulated (inner) packet.
  • Some earlier implementations (e.g. Linux) might use a pre-IANA-assigned port number. If so, such implementations might have to be configured to use the IANA port number (4789) to interoperate with Solaris VXLAN.
  • IP multicast must be available in the underlying network, and if communicating across different IP subnets, multicast routing should be available as well.
  • Modifying properties (IP address, multicast address or VNI) on a VXLAN link is currently not supported; you'd have to delete the VXLAN link and re-create it, as sketched below.
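
For instance, to move vxlan1 to a different VNI (200 is an illustrative value), delete it and re-create it, after first deleting any VNICs over it:

# dladm delete-vxlan vxlan1
# dladm create-vxlan -p addr=10.10.10.1,vni=200 vxlan1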

Tuesday May 06, 2014

Puppet Configuration in Solaris

What is Puppet?

Puppet is IT automation software that helps system administrators manage IT infrastructure. It automates tasks such as provisioning, configuration, patch management and compliance. Repetitive tasks are easily automated, deployment of critical applications occurs rapidly, and required system changes are proactively managed. Puppet scales to meet the needs of the environment, whether it is a simple deployment or a complex infrastructure, and works on-premise or in the cloud.

Puppet is now available as part of Oracle Solaris 11.2!

Use ntpdate or ntpd -q to set the date

Puppet can error out with some very strange messages if the clocks on the master and agent aren't synchronized. You can use ntpdate or ntpd -q to set the clock just once (handy if you'd like to manage the NTP service itself with Puppet), or you can configure the NTP service permanently.
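
A one-time sync on each system might look like this (0.pool.ntp.org is just a placeholder; use your site's NTP server):

master # ntpdate 0.pool.ntp.org
agent # ntpdate 0.pool.ntp.org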

Install the required packages on both systems 

# pkg install puppet

This will install the puppet, facter and ruby-19 packages.

Configure the Puppet SMF instances

master # svccfg -s puppet:master setprop config/server = master.fqdn.company.com
master # svccfg -s puppet:master refresh
master # svcadm enable puppet:master

agent # svccfg -s puppet:agent setprop config/server = master.fqdn.company.com
agent # svccfg -s puppet:agent refresh

Test the connection to the master and configure authentication

Before enabling the puppet:agent service, you'll want to test the connection first in order to set up authentication:

agent # puppet agent --test --server master.fqdn.company.com

Info: Creating a new SSL key for agent.fqdn.company.com
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for agent.fqdn.company.com
Info: Certificate Request fingerprint (SHA256):
C9:63:22:6A:9F:88:D6:18:7F:F3:F4:FA:89:E4:86:A1:C7:BE:94:CF:F1:D5:59:B9:DD:21:8D:C1:C9:B0:F4:18
Exiting; no certificate found and waitforcert is disabled

Now that the agent has created a new SSL key and submitted a certificate request, the request needs to be approved (signed) on the master.

Sign the SSL certificate on the master

master # puppet cert list
  "agent.fqdn.company.com" (SHA256)
  C9:63:22:6A:9F:88:D6:18:7F:F3:F4:FA:89:E4:86:A1:C7:BE:94:CF:F1:D5:59:B9:DD:21:8D:C1:C9:B0:F4:18

master # puppet cert sign agent.fqdn.company.com
Notice: Signed certificate request for agent.fqdn.company.com
Notice: Removing file Puppet::SSL::CertificateRequest agent.fqdn.company.com at
'/etc/puppet/ssl/ca/requests/agent.fqdn.company.com.pem'

Retest the agent to ensure it can connect

agent # puppet agent --test --server master.fqdn.company.com
Info: Caching certificate for agent.fqdn.company.com
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for agent.fqdn.company.com
Info: Applying configuration version '1371232699'
Notice: Finished catalog run in 0.65 seconds

Enable the agent service

agent # svcadm enable puppet:agent

Additional configuration of /etc/puppet/puppet.conf on both master and agent (optional) 

Further customizations can be made in /etc/puppet/puppet.conf.  See Puppet's Configurables page for more details.

NOTE: Puppet's configuration is done entirely via SMF stencils. /etc/puppet/puppet.conf should not be edited directly, as any edits will be lost when the Puppet SMF service (re)starts. Setting a new value should be done via svccfg(1M):

# svccfg -s puppet:agent setprop config/<option> = <value>

# svccfg -s puppet:agent refresh

(substitute :master as needed)
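
For example, to change how often the agent applies the catalog, you could set Puppet's runinterval (assuming the SMF stencil maps config/runinterval through to puppet.conf; 1800 seconds, the Puppet default, is shown for illustration):

agent # svccfg -s puppet:agent setprop config/runinterval = 1800
agent # svccfg -s puppet:agent refresh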