Sunday Jul 19, 2015

Flat networks and fixed IPs with OpenStack Juno

Girish blogged previously about the work we've been doing to support new features in the Solaris integration with the OpenStack Neutron networking project. One of these features provides a flat network topology, allowing administrators to plumb VMs created through an OpenStack environment directly into an existing network infrastructure. This essentially gives administrators a choice between a more secure, dynamic network using either VLAN or VXLAN and a pool of floating IP addresses, or an untagged, static 'flat' network with a set of allocated fixed IP addresses.

Scott Dickson has blogged about flat networks, along with the steps required to set up a flat network with OpenStack, using our driver integration into Neutron based on Elastic Virtual Switch. Check it out!

Tuesday Jul 07, 2015

What's New in Solaris OpenStack Juno Neutron

The current update of Oracle OpenStack for Oracle Solaris updates existing
features to the OpenStack Juno release and adds the following new features:

  1. Complete IPv6 Support for Tenant Networks
  2. Support for Source NAT
  3. Support for Metadata Services
  4. Support for Flat (untagged) Layer-2 Network Type
  5. Support for New Neutron Subcommands

1. Complete IPv6 Support for Tenant Networks

Finally, Juno provides feature parity between IPv4 and IPv6. The Juno release
allows tenants to create networks and associate IPv6 subnets with these networks
such that the VM instances that connect to these networks can get their IPv6
addresses in either of the following ways:

- Stateful address configuration
- Stateless address configuration

Stateful or DHCPv6 address configuration is facilitated through the dnsmasq(8)
daemon.

Stateless address configuration is facilitated in either of the following ways:
- Through the provider (physical) router in the data center networks.
- Through the Neutron router and the Solaris IPv6 neighbor discovery daemon
(in.ndpd(1M)). The Neutron L3 agent sets the AdvSendAdvertisements parameter to
true in ndpd.conf(4) for an interface that hosts the IPv6 subnet of the tenant
and refreshes the SMF service (svc:/network/routing/ndp:default) associated with
the daemon.
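
For example, the resulting ndpd.conf(4) entry looks like the following sketch
(the interface name is illustrative; the agent derives it from the router's
internal port):

# Added by the Neutron L3 agent for the interface hosting the tenant's IPv6 subnet
if l3i7843841e_0_0 AdvSendAdvertisements on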

This IPv6 support adds the following two new attributes to the Neutron subnet:
ipv6_address_mode and ipv6_ra_mode. Possible values for these attributes are:
slaac, dhcpv6-stateful, and dhcpv6-stateless. The two Neutron agents - Neutron
DHCP agent and Neutron L3 agent - work together to provide IPv6 support.

In most cases, these new attributes are set to the same value. In one use case,
only ipv6_address_mode is set: when router advertisements come from an existing
provider router rather than from the Neutron router.
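
As a sketch, a SLAAC-configured subnet can be created by setting both attributes
on the subnet (the network name and prefix below are illustrative):

# neutron subnet-create --ip-version 6 --ipv6-ra-mode slaac \
--ipv6-address-mode slaac tenant_net 2001:db8:1::/64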

2. Support for Source NAT

The floating IPs feature in OpenStack Neutron provides external connectivity to
VMs by performing a one-to-one NAT of the internal IP address of a VM to an
external floating IP address.

The SNAT feature provides external connectivity to all of the VMs through the
gateway public IP. The gateway public IP is the IP address of the gateway port
that gets created when you execute the following command:

# neutron router-gateway-set router_uuid external_network_uuid

This external connectivity setup is similar to a wireless network setup at home,
where you have a single public IP from the ISP configured on the router and all
your personal devices sit behind this IP on an internal network. These internal
devices can reach anywhere on the internet through SNAT; however, external
entities cannot reach the internal devices.

Example:

- Create a Public network

# neutron net-create --router:external=True --provider:network_type=flat public_net
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| network_id            | 3c9c4bdf-2d6d-40a2-883b-a86076def1fb |
| name                  | public_net                           |
| provider:network_type | flat                                 |
| router:external       | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa     |
+-----------------------+--------------------------------------+

# neutron subnet-create --name public_subnet --disable-dhcp \
--allocation-pool start=10.134.67.241,end=10.134.67.241 \
--allocation-pool start=10.134.67.243,end=10.134.67.245 \
public_net 10.134.67.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "10.134.67.241", "end": "10.134.67.241"} |
|                   | {"start": "10.134.67.243", "end": "10.134.67.245"} |
| cidr              | 10.134.67.0/24                                     |
| dns_nameservers   |                                                    |
| enable_dhcp       | False                                              |
| gateway_ip        | 10.134.67.1                                        |
| host_routes       |                                                    |
| subnet_id         | 6063613c-1008-4826-ae17-ce6a58511b2f               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | public_subnet                                      |
| network_id        | 3c9c4bdf-2d6d-40a2-883b-a86076def1fb               |
| tenant_id         | dab8af7f10504d3db582ce54a0ce6baa                   |
+-------------------+----------------------------------------------------+

- Create a private network

# neutron net-create private_net
# neutron subnet-create --name private_subnet private_net 192.168.109.0/24

- Create a router

# neutron router-create provider_router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| router_id             | b48fd525-2519-4501-99d9-9c2d51a543f1 |
| name                  | provider_router                      |
| status                | ACTIVE                               |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa     |
+-----------------------+--------------------------------------+

Note: Update the /etc/neutron/l3_agent.ini file with the following entry and
restart the neutron-l3-agent SMF service (svcadm restart neutron-l3-agent):
router_id = b48fd525-2519-4501-99d9-9c2d51a543f1

- Add external network to router

# neutron router-gateway-set provider_router public_net
Set gateway for router provider_router
# neutron router-show provider_router
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | True                                                                         |
| external_gateway_info | {"network_id": "3c9c4bdf-2d6d-40a2-883b-a86076def1fb",                       |
|                       | "enable_snat": true,                                                         |
|                       | "external_fixed_ips": [{"subnet_id": "6063613c-1008-4826-ae17-ce6a58511b2f", |
|                       | "ip_address": "10.134.67.241"}]}                                             |
| router_id             | b48fd525-2519-4501-99d9-9c2d51a543f1                                         |
| name                  | provider_router                                                              |
| status                | ACTIVE                                                                       |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa                                             |
+-----------------------+------------------------------------------------------------------------------+


Note: By default, SNAT is enabled on the gateway interface of the Neutron
router. To disable this feature, specify the --disable-snat option to the
neutron router-gateway-set subcommand.

- Add internal network to router

# neutron router-interface-add provider_router private_subnet
Added interface 9bcfd21a-c751-40bb-99b0-d9274523e151 to router provider_router.

# neutron router-port-list provider_router
+--------------------------------------+-------------------+-----------------------------------------+
| id                                   | mac_address       | fixed_ips                               |
+--------------------------------------+-------------------+-----------------------------------------+
| 4b2f5e3d-0608-4627-b93d-f48afa86c347 | fa:16:3e:84:30:e4 | {"subnet_id":                           |
|                                      |                   | "6063613c-1008-4826-ae17-ce6a58511b2f", |
|                                      |                   | "ip_address": "10.134.67.241"}          |
|                                      |                   |                                         |
| 9bcfd21a-c751-40bb-99b0-d9274523e151 | fa:16:3e:df:c1:0f | {"subnet_id":                           |
|                                      |                   | "c7f99141-25f0-47af-8efb-f5639bcf6181", |
|                                      |                   | "ip_address": "192.168.109.1"}          |
+--------------------------------------+-------------------+-----------------------------------------+
Now all of the VMs on the internal network can reach the outside world through SNAT, using 10.134.67.241 as the source address.
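
On the network node, this SNAT setup shows up as an ipnat map rule; a hedged
sketch (the internal l3i interface name will differ in your deployment):

# ipnat -l
map l3i9bcfd21a_0_0 192.168.109.0/24 -> 10.134.67.241/32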


3. Support for Metadata Services

A metadata service provides an OpenStack VM instance with information such as the
following:

 -- The public IP/hostname
 -- A random seed
 -- The metadata that the tenant provided at launch time
 -- And much more

The metadata requests are made by the VM instance to the well-known address
169.254.169.254, port 80. All such requests arrive at the Neutron L3 agent,
which forwards them to a Neutron proxy server listening on port 9697 (this proxy
server was spawned by the Neutron L3 agent). The Neutron proxy server forwards
the requests to the Neutron metadata agent through a UNIX socket. The Neutron
metadata agent interacts with the Neutron server service to determine the UUID
of the instance that is making the requests. After the Neutron metadata agent
gets the instance UUID, it makes a call into the Nova API metadata server to
fetch the information for the VM instance. The fetched information is then
passed back to the instance that made the request.
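
From inside a running instance you can exercise this path directly; a quick
sanity check (assuming the guest image includes curl):

# curl http://169.254.169.254/openstack/latest/meta_data.json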

4. Support for Flat (untagged) Layer-2 Network Type

A flat OpenStack network is used to place all the VM instances on the same
segment without VLAN or VXLAN. This means that the VM instances share the same
network.

With the flat L2 type, there is no VLAN tagging or other network segregation
taking place; that is, all the VNICs (and thus VM instances) that connect to a
flat network are created with the VLAN ID set to 0. It follows that the flat L2
type cannot be used to achieve multi-tenancy. Instead, it is used by data center
admins to map directly to the existing physical networks in the data center.

One use of the flat network type is in the configuration of floating IPs. If the
available floating IPs are a subset of an existing physical network's IP subnet,
you create a flat network with its subnet set to the physical network's IP
subnet and its allocation pool set to the available floating IPs. The flat
network thus contains part of the existing physical network's IP subnet. See the
examples in the previous section.

5. Support for New Neutron Subcommands

With this Solaris OpenStack Juno release, you can run multiple instances of
neutron-dhcp-agent, each instance running on a separate network node. Using the
dhcp-agent-network-add neutron subcommand, you can manually select which agent
should serve a DHCP-enabled subnet. By default, the Neutron server automatically
load balances the work among the various DHCP agents.

The following table shows the new subcommands that have been added as part of
the Solaris OpenStack Juno release; an example follows the table.
+---------------------------+-----------------------------------------+
| neutron subcommands       | Comments                                |
+---------------------------+-----------------------------------------+
| agent-delete              | Delete a given agent.                   |
| agent-list                | List agents.                            |
| agent-show                | Show information for a specified agent. |
| agent-update              | Update the admin status and             |
|                           | description for a specified agent.      |
| dhcp-agent-list-          | List DHCP agents hosting a network.     |
| hosting-net               |                                         |
| net-list-on-dhcp-agent    | List the networks on a DHCP agent.      |
| dhcp-agent-network-add    | Add a network to a DHCP agent.          |
| dhcp-agent-network-remove | Remove a network from a DHCP agent.     |
+---------------------------+-----------------------------------------+
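
For example, to pin a DHCP-enabled network to a particular agent, list the
agents and then add the network to the chosen one (the UUIDs below are
placeholders):

# neutron agent-list
# neutron dhcp-agent-network-add <dhcp_agent_uuid> <network_uuid>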


Configuring the Neutron L3 Agent in Solaris OpenStack Juno

The Oracle Solaris implementation of OpenStack Neutron supports the following deployment model: provider router with private networks deployment. You can find more information about this model here. In this deployment model, each tenant can have one or more private networks and all the tenant networks share the same router. This router is created, owned, and managed by the data center administrator. The router itself will not be visible in the tenant's network topology view as it is owned by the service tenant. Furthermore, as there is only a single router, tenant networks cannot use overlapping IPs. Thus, it is likely that the administrator would create the private networks on behalf of tenants.

By default, this router prevents routing between private networks that are part of the same tenant. That is, VMs within one private network cannot communicate with the VMs in another private network, even though they are all part of the same tenant. This behavior can be changed by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini configuration file and restarting the neutron-l3-agent SMF service.
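
A minimal sketch of that change (assuming the option belongs in the [DEFAULT]
section of l3_agent.ini):

# /etc/neutron/l3_agent.ini
[DEFAULT]
allow_forwarding_between_networks = True

# svcadm restart neutron-l3-agent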

This router provides connectivity to the outside world for the tenant VMs. It does this by performing Bidirectional NAT and/or Source NAT on the interface that connects the router to the external network. Tenants create as many floating IPs (public IPs) as they need or as are allowed by the floating IP quota and then associate these floating IPs with the VMs that need outside connectivity.

The following figure captures the supported deployment model.

deployment_model.png

Figure 1 Provider router with private networks deployment

Tenant A has:

  • Two internal networks:
    HR (subnet: 192.168.100.0/24, gateway: 192.168.100.1)
    ENG (subnet: 192.168.101.0/24, gateway: 192.168.101.1)
  • Two VMs
    VM1 connected to HR with a fixed IP address of 192.168.100.3
    VM2 connected to ENG with a fixed IP address of 192.168.101.3

Tenant B has:

  • Two internal networks:
    IT (subnet: 192.168.102.0/24, gateway: 192.168.102.1)
    ACCT (subnet: 192.168.103.0/24, gateway: 192.168.103.1)
  • Two VMs
    VM3 connected to IT with a fixed IP address of 192.168.102.3
    VM4 connected to ACCT with a fixed IP address of 192.168.103.3

All the gateway interfaces are instantiated on the node that is running neutron-l3-agent.

The external network is a provider network that is associated with the subnet 10.134.13.0/24 that is reachable from outside. Tenants will create floating IPs from this network and associate them to their VMs. VM1 and VM2 have floating IPs 10.134.13.40 and 10.134.13.9 associated with them respectively. VM1 and VM2 are reachable from the outside world through these IP addresses.

Configuring neutron-l3-agent on a Network Node

Note: This post assumes that all Compute Nodes and Network Nodes in the network have been identified and the configuration files for all the OpenStack services have been appropriately configured so that these services can communicate with each other.

The service tenant is a tenant that has all of the OpenStack services' users, namely, nova, neutron, glance, cinder, swift, keystone, heat, and horizon. Services communicate with each other using these users who all have admin role. The steps below show how to use the service tenant to create a router, an external network, and an external subnet that will be used by all of the tenants in the data center. Please refer to the following table and diagram while walking through the steps.

Note: Alternatively, you could create a separate tenant (DataCenter) and a new user (datacenter) with admin role, and the DataCenter tenant could host all of the aforementioned shared resources. 

ip_address_planning.png

Table 1 Public IP address mapping

network_topology

Figure 2 Neutron L3 agent configuration

Steps required to set up the Neutron L3 agent as a data center administrator:

Note: We need to use the OpenStack CLI to configure the shared single router and associate networks/subnets from different tenants with the router, because from the OpenStack dashboard you can only manage one tenant's resources at a time.

1. Enable Solaris IP filter functionality.

   l3-agent# svcadm enable ipfilter
   l3-agent# svcs ipfilter
   STATE  STIME    FMRI
   online 10:29:04 svc:/network/ipfilter:default

2. Enable IP forwarding on the entire host.

   l3-agent# ipadm show-prop -p forwarding ipv4
   PROTO PROPERTY    PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
   ipv4  forwarding  rw   on           on           off          on,off 
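
If forwarding shows as off, enable it persistently before proceeding:

   l3-agent# ipadm set-prop -p forwarding=on ipv4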

3. Ensure that the Solaris Elastic Virtual Switch (EVS) feature is configured correctly and has the VLAN ID required for the external network. For example, if the external network/subnet uses a VLAN ID of 15, then do the following:

   l3-agent# evsadm show-controlprop -p vlan-range,l2-type
   PROPERTY            PERM VALUE               DEFAULT             HOST
   l2-type             rw   vlan                vlan                --
   vlan-range          rw   200-300             --                  --

   l3-agent# evsadm set-controlprop -p vlan-range=15,200-300

In our case, the external network/subnet shares the same subnet as the compute node, so we can use the flat L2 network type and not use VLANs at all. We need to configure which datalink on the network node will be used for flat networking; in our case it is net0 (net1 is used for connecting to the internal network):

   l3-agent# evsadm set-controlprop -p uplink-port=net0,flat=yes
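
You can verify the uplink setting afterwards (a hedged check; the output layout
mirrors the show-controlprop output above):

   l3-agent# evsadm show-controlprop -p uplink-port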

Note: For more information on EVS, please refer to Chapter 5, "About Elastic Virtual Switches" and Chapter 6, "Administering Elastic Virtual Switches" in Managing Network Virtualization and Network Resources in Oracle Solaris 11.2 (http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris EVS forms the backend for OpenStack networking, and it facilitates inter-VM communication (on the same compute node or across compute nodes) using VLANs, VXLANs, or flat networks.

4. Ensure that the service tenant is already there.

   l3-agent# keystone --os-endpoint=http://localhost:35357/v2.0 \
   --os-token=ADMIN tenant-list
   +----------------------------------+---------+---------+
   |                id                |   name  | enabled |
   +----------------------------------+---------+---------+
   | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
   | f164220cb02465db929ce520869895fa | service |   True  |
   +----------------------------------+---------+---------+

5. Create the provider router. Note the UUID of the new router.

   l3-agent# export OS_USERNAME=neutron
   l3-agent# export OS_PASSWORD=neutron
   l3-agent# export OS_TENANT_NAME=service
   l3-agent# export OS_AUTH_URL=http://localhost:5000/v2.0
   l3-agent# neutron router-create provider_router
   Created a new router:
   +-----------------------+--------------------------------------+
   | Field                 | Value                                |
   +-----------------------+--------------------------------------+
   | admin_state_up        | True                                 |
   | external_gateway_info |                                      |
   | id                    | 181543df-40d1-4514-ea77-fddd78c389ff |
   | name                  | provider_router                      |
   | status                | ACTIVE                               |
   | tenant_id             | f164220cb02465db929ce520869895fa     |
   +-----------------------+--------------------------------------+

6. Use the router UUID from step 5 and update the /etc/neutron/l3_agent.ini file with the following entry:

router_id = 181543df-40d1-4514-ea77-fddd78c389ff

7. Enable the neutron-l3-agent service.

   l3-agent# svcadm enable neutron-l3-agent
   l3-agent# svcs neutron-l3-agent
   STATE STIME FMRI
   online 11:24:08 svc:/application/openstack/neutron/neutron-l3-agent:default

8. Create an external network.

   l3-agent# neutron net-create --provider:network_type=flat \
   --router:external=true  external_network
   Created a new network:
   +--------------------------+--------------------------------------+
   | Field                    | Value                                |
   +--------------------------+--------------------------------------+
   | admin_state_up           | True                                 |
   | id                       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | name                     | external_network                     |
   | provider:network_type    | flat                                 |
   | router:external          | True                                 |
   | shared                   | False                                |
   | status                   | ACTIVE                               |
   | subnets                  |                                      |
   | tenant_id                | f164220cb02465db929ce520869895fa     |
   +--------------------------+--------------------------------------+

9. Associate a subnet with external_network.

   l3-agent# neutron subnet-create --disable-dhcp --name external_subnet \
   --allocation-pool start=10.134.13.8,end=10.134.13.254 external_network 10.134.13.0/24
   Created a new subnet:
   +------------------+--------------------------------------------------+
   | Field            | Value                                            |
   +------------------+--------------------------------------------------+
   | allocation_pools | {"start": "10.134.13.8", "end": "10.134.13.254"} |
   | cidr             | 10.134.13.0/24                                   |
   | dns_nameservers  |                                                  |
   | enable_dhcp      | False                                            |
   | gateway_ip       | 10.134.13.1                                      |
   | host_routes      |                                                  |
   | id               | 5d9c8958-0de0-11e4-9d96-e1f29f417e2f             |
   | ip_version       | 4                                                |
   | name             | external_subnet                                  |
   | network_id       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f             |
   | tenant_id        | f164220cb02465db929ce520869895fa                 |
   +------------------+--------------------------------------------------+

10. Add external_network to the router.

    l3-agent# neutron router-gateway-set -h
    usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
                                      [--disable-snat]
     router-id external-network-id

    l3-agent# neutron router-gateway-set \
    181543df-40d1-4514-ea77-fddd78c389ff \  (provider_router UUID)
    f67f0d72-0ddf-11e4-9d95-e1f29f417e2f    (external_network UUID)
    Set gateway for router 181543df-40d1-4514-ea77-fddd78c389ff

    l3-agent# neutron router-list -c name -c external_gateway_info
+-----------------+--------------------------------------------------------+
| name            | external_gateway_info                                  |
+-----------------+--------------------------------------------------------+
| provider_router | {"network_id": "f67f0d72-0ddf-11e4-9d95-e1f29f417e2f", |
|                 | "enable_snat": true,                                   |
|                 | "external_fixed_ips":                                  |
|                 | [{"subnet_id": "5d9c8958-0de0-11e4-9d96-e1f29f417e2f", |
|                 | "ip_address": "10.134.13.8"}]}                         |
+-----------------+--------------------------------------------------------+
Note: By default, SNAT is enabled on the gateway interface of the Neutron
router. To disable this feature, specify the --disable-snat option to the
neutron router-gateway-set subcommand.

11. Add the tenant's private networks to the router. The networks shown by neutron net-list were previously configured.

    l3-agent# keystone tenant-list
    +----------------------------------+---------+---------+
    |                id                |   name  | enabled |
    +----------------------------------+---------+---------+
    | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
    | f164220cb02465db929ce520869895fa | service |   True  |
    +----------------------------------+---------+---------+

    l3-agent# neutron net-list --tenant-id=511d4cb9ef6c40beadc3a664c20dc354
    +-------------------------------+------+------------------------------+
    | id                            | name | subnets                      |
    +-------------------------------+------+------------------------------+
    | c0c15e0a-0def-11e4-9d9f-      | HR   | c0c53066-0def-11e4-9da0-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.100.0/24|   
    | ce64b430-0def-11e4-9da2-      | ENG  | ce693ac8-0def-11e4-9da3-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.101.0/24|
    +-------------------------------+------+------------------------------+

    Note: The above two networks were pre-configured 

    l3-agent# neutron router-interface-add  \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    c0c53066-0def-11e4-9da0-e1f29f417e2f   (HR subnet UUID)
    Added interface 7843841e-0e08-11e4-9da5-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

    l3-agent# neutron router-interface-add \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    ce693ac8-0def-11e4-9da3-e1f29f417e2f   (ENG subnet UUID)
    Added interface 89289b8e-0e08-11e4-9da6-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

12. The following figure shows how the network topology looks when you log in as a service tenant user.

provider_router.png

Steps required to create and associate floating IPs as a tenant user

1. Log in to the OpenStack Dashboard using the tenant user's credentials

2. Select Project -> Access & Security -> Floating IPs

3. With external_network selected, click the Allocate IP button

allocate_floating_ip.png

4. The Floating IPs tab shows that the floating IP 10.134.13.9 has been allocated.

allocated_floating_ip.png

5. Click the Associate button and select the VM's port from the pull-down menu.

associate_fip.png

6. The Project -> Instances window shows that the floating IP is associated with the VM.

instances.png
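
The same allocation and association can be done from the CLI; a sketch with
placeholder UUIDs (use neutron port-list to find the VM's port):

    $ neutron floatingip-create external_network
    $ neutron floatingip-associate <floatingip_uuid> <vm_port_uuid>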

If you had selected a keypair (SSH public key) while launching an instance, then that SSH key would be added to root's authorized_keys file in the VM. With that done, you can ssh into the running VM.

       [gmoodalb@thunta:~] ssh root@10.134.13.9
       Last login: Fri Jul 18 00:37:39 2014 from 10.132.146.13
       Oracle Corporation SunOS 5.11 11.2 June 2014

       root@host-192-168-101-3:~# uname -a
       SunOS host-192-168-101-3 5.11 11.2 i86pc i386 i86pc
       root@host-192-168-101-3:~# zoneadm list -cv
       ID NAME              STATUS      PATH                 BRAND      IP    
        2 instance-00000001 running     /                    solaris    excl 
       root@host-192-168-101-3:~# ipadm
       NAME             CLASS/TYPE STATE        UNDER      ADDR
       lo0              loopback   ok           --         --
         lo0/v4         static     ok           --         127.0.0.1/8
         lo0/v6         static     ok           --         ::1/128
       net0             ip         ok           --         --
         net0/dhcp      inherited  ok           --         192.168.101.3/24

Under the covers:

On the node where neutron-l3-agent is running, you can use IP filter commands (ipf(1m), ippool(1m), and ipnat(1m)) and networking commands (dladm(1m) and ipadm(1m)) to observe and troubleshoot the configuration done by neutron-l3-agent.

VNICs created by neutron-l3-agent:

    l3-agent# dladm show-vnic
    LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE VIDS
    l3i7843841e_0_0     net1         1000   2:8:20:42:ed:22   fixed       200
    l3i89289b8e_0_0     net1         1000   2:8:20:7d:87:12   fixed       201
    l3ed527f842_0_0     net0         100    2:8:20:9:98:3e    fixed       0

IP addresses created by neutron-l3-agent:

    l3-agent# ipadm
    NAME                  CLASS/TYPE STATE   UNDER      ADDR
    l3ed527f842_0_0       ip         ok      --         --
      l3ed527f842_0_0/v4  static     ok      --         10.134.13.8/24
      l3ed527f842_0_0/v4a static     ok      --         10.134.13.9/32
    l3i7843841e_0_0       ip         ok      --         --
      l3i7843841e_0_0/v4  static     ok      --         192.168.100.1/24
    l3i89289b8e_0_0       ip         ok      --         --
      l3i89289b8e_0_0/v4  static     ok      --         192.168.101.1/24

IP Filter rules:

   l3-agent# ipfstat -io
   empty list for ipfilter(out)
   block in quick on l3i7843841e_0_0 from 192.168.100.0/24 to pool/4386082
   pass in on l3i7843841e_0_0 to l3ed527f842_0_0:10.134.13.1 from any to !192.168.100.0/24
   block in quick on l3i89289b8e_0_0 from 192.168.101.0/24 to pool/8226578
   pass in on l3i89289b8e_0_0 to l3ed527f842_0_0:10.134.13.1 from any to !192.168.101.0/24
   l3-agent# ippool -l
   table role = ipf type = tree number = 8226578
{ 192.168.100.0/24; };
   table role = ipf type = tree number = 4386082
{ 192.168.101.0/24; };

IP NAT rules:

   l3-agent# ipnat -l
   List of active MAP/Redirect filters:
   rdr l3i89289b8e_0_0 169.254.169.254/32 port 80 -> 192.168.101.1 port 9697 tcp
   map l3i89289b8e_0_0 192.168.101.0/24 -> 10.134.13.8/32
   rdr l3i7843841e_0_0 169.254.169.254/32 port 80 -> 192.168.100.1 port 9697 tcp
   map l3i7843841e_0_0 192.168.100.0/24 -> 10.134.13.8/32
   bimap l3ed527f842_0_0 192.168.101.3/32 -> 10.134.13.9/32
   List of active sessions:
   BIMAP 192.168.101.3  22  <- -> 10.134.13.9  22 [10.132.146.13 36405]

Thursday Jul 31, 2014

Neutron L3 Agent in Oracle Solaris OpenStack

The Oracle Solaris implementation of OpenStack Neutron supports the following deployment model: provider router with private networks deployment. You can find more information about this model here. In this deployment model, each tenant can have one or more private networks and all the tenant networks share the same router. This router is created, owned, and managed by the data center administrator. The router itself will not be visible in the tenant's network topology view. Because there is only a single router, tenant networks cannot use overlapping IPs. Thus, it is likely that the administrator would create the private networks on behalf of tenants.

By default, this router prevents routing between private networks that are part of the same tenant. That is, VMs within one private network cannot communicate with the VMs in another private network, even though they are all part of the same tenant. This behavior can be changed by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini configuration file and restarting the neutron-l3-agent SMF service.

This router provides connectivity to the outside world for the tenant VMs. It does this by performing bidirectional NAT on the interface that connects the router to the external network. Tenants create as many floating IPs (public IPs) as they need or as are allowed by the floating IP quota and then associate these floating IPs with the VMs that need outside connectivity.

The following figure captures the supported deployment model.

deployment_model.png

Figure 1 Provider router with private networks deployment

Tenant A has:

  • Two internal networks:
    HR (subnet: 192.168.100.0/24, gateway: 192.168.100.1)
    ENG (subnet: 192.168.101.0/24, gateway: 192.168.101.1)
  • Two VMs
    VM1 connected to HR with a fixed IP address of 192.168.100.3
    VM2 connected to ENG with a fixed IP address of 192.168.101.3

Tenant B has:

  • Two internal networks:
    IT (subnet: 192.168.102.0/24, gateway: 192.168.102.1)
    ACCT (subnet: 192.168.103.0/24, gateway: 192.168.103.1)
  • Two VMs
    VM3 connected to IT with a fixed IP address of 192.168.102.3
    VM4 connected to ACCT with a fixed IP address of 192.168.103.3

All the gateway interfaces are instantiated on the node that is running neutron-l3-agent.

The external network is a provider network that is associated with the subnet 10.134.13.0/24 that is reachable from outside. Tenants will create floating IPs from this network and associate them to their VMs. VM1 and VM2 have floating IPs 10.134.13.40 and 10.134.13.9 associated with them respectively. VM1 and VM2 are reachable from the outside world through these IP addresses.

Configuring neutron-l3-agent on a Network Node

Note: In this configuration, all Compute Nodes and Network Nodes in the network have been identified, and the configuration files for all the OpenStack services have been appropriately configured so that these services can communicate with each other.

The service tenant is a tenant for all the OpenStack services (nova, neutron, glance, cinder, swift, keystone, and horizon) and the users for each of the services. Services communicate with each other using these users who all have admin role. The steps below show how to use the service tenant to create a router, an external network, and an external subnet that will be used by all of the tenants in the data center. Please refer to the following table and diagram while walking through the steps.

Note: Alternatively, you could create a separate tenant (DataCenter) and a new user (datacenter) with admin role, and the DataCenter tenant could host all of the aforementioned shared resources. 

ip_address_planning.png

Table 1 Public IP address mapping

network_topology

Figure 2 Neutron L3 agent configuration

Steps required to set up the Neutron L3 agent as a data center administrator:

Note: We need to use the OpenStack CLI to configure the shared single router and associate networks/subnets from different tenants with it, because from the OpenStack dashboard you can only manage one tenant's resources at a time.

1. Enable Solaris IP filter functionality.

   l3-agent# svcadm enable ipfilter
   l3-agent# svcs ipfilter
   STATE  STIME    FMRI
   online 10:29:04 svc:/network/ipfilter:default

2. Enable IP forwarding on the entire host.

   l3-agent# ipadm show-prop -p forwarding ipv4
   PROTO PROPERTY    PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
   ipv4  forwarding  rw   on           on           off          on,off 

3. Ensure that the Solaris Elastic Virtual Switch feature is configured correctly and has the VLAN ID required for the external network. In our case, the external network/subnet uses VLAN 1.

   l3-agent# evsadm show-controlprop -p vlan-range,l2-type
   PROPERTY            PERM VALUE               DEFAULT             HOST
   l2-type             rw   vlan                vlan                --
   vlan-range          rw   200-300             --                  --

   l3-agent# evsadm set-controlprop -p vlan-range=1,200-300

Note: For more information on EVS, please refer to Chapter 5, "About Elastic Virtual Switches" and Chapter 6, "Administering Elastic Virtual Switches" in Managing Network Virtualization and Network Resources in Oracle Solaris 11.2 (http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris EVS forms the backend for OpenStack networking, and it facilitates inter-VM communication (on the same compute node or across compute nodes) using either VLANs or VXLANs.

4. Ensure that the service tenant is already there.

   l3-agent# keystone --os-endpoint=http://localhost:35357/v2.0 \
   --os-token=ADMIN tenant-list
   +----------------------------------+---------+---------+
   |                id                |   name  | enabled |
   +----------------------------------+---------+---------+
   | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
   | f164220cb02465db929ce520869895fa | service |   True  |
   +----------------------------------+---------+---------+

5. Create the provider router. Note the UUID of the new router.

   l3-agent# export OS_USERNAME=neutron
   l3-agent# export OS_PASSWORD=neutron
   l3-agent# export OS_TENANT_NAME=service
   l3-agent# export OS_AUTH_URL=http://localhost:5000/v2.0
   l3-agent# neutron router-create provider_router
   Created a new router:
   +-----------------------+--------------------------------------+
   | Field                 | Value                                |
   +-----------------------+--------------------------------------+
   | admin_state_up        | True                                 |
   | external_gateway_info |                                      |
   | id                    | 181543df-40d1-4514-ea77-fddd78c389ff |
   | name                  | provider_router                      |
   | status                | ACTIVE                               |
   | tenant_id             | f164220cb02465db929ce520869895fa     |
   +-----------------------+--------------------------------------+

6. Use the router UUID from step 5 and update the /etc/neutron/l3_agent.ini file with the following entry:

router_id = 181543df-40d1-4514-ea77-fddd78c389ff

7. Enable the neutron-l3-agent service.

   l3-agent# svcadm enable neutron-l3-agent
   l3-agent# svcs neutron-l3-agent
   STATE STIME FMRI
   online 11:24:08 svc:/application/openstack/neutron/neutron-l3-agent:default

8. Create an external network.

   l3-agent# neutron net-create --provider:network_type=vlan \
   --provider:segmentation_id=1 --router:external=true  external_network
   Created a new network:
   +--------------------------+--------------------------------------+
   | Field                    | Value                                |
   +--------------------------+--------------------------------------+
   | admin_state_up           | True                                 |
   | id                       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | name                     | external_network                     |
   | provider:network_type    | vlan                                 |
   | provider:segmentation_id | 1                                    |
   | router:external          | True                                 |
   | shared                   | False                                |
   | status                   | ACTIVE                               |
   | subnets                  |                                      |
   | tenant_id                | f164220cb02465db929ce520869895fa     |
   +--------------------------+--------------------------------------+

9. Associate a subnet with external_network.

   l3-agent# neutron subnet-create --enable-dhcp=False \
   --name external_subnet external_network 10.134.13.0/24
   Created a new subnet:
   +------------------+--------------------------------------------------+
   | Field            | Value                                            |
   +------------------+--------------------------------------------------+
   | allocation_pools | {"start": "10.134.13.2", "end": "10.134.13.254"} |
   | cidr             | 10.134.13.0/24                                   |
   | dns_nameservers  |                                                  |
   | enable_dhcp      | False                                            |
   | gateway_ip       | 10.134.13.1                                      |
   | host_routes      |                                                  |
   | id               | 5d9c8958-0de0-11e4-9d96-e1f29f417e2f             |
   | ip_version       | 4                                                |
   | name             | external_subnet                                  |
   | network_id       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f             |
   | tenant_id        | f164220cb02465db929ce520869895fa                 |
   +------------------+--------------------------------------------------+

10. Apply the workaround for not having --allocation-pool support for subnets. Because 10.134.13.2 through 10.134.13.7 IP addresses are set aside for other OpenStack API services, perform the following floatingip-create steps to ensure that no tenant will assign these IP addresses to VMs:

NOTE: This workaround is not needed if you are running S11.2 SRU5 or above, as
support for allocation pools was added in that update.

   l3-agent# for i in `seq 1 6`; do neutron floatingip-create \
   external_network; done
   l3-agent# neutron floatingip-list -c id -c floating_ip_address
   +--------------------------------------+---------------------+
   | id                                   | floating_ip_address |
   +--------------------------------------+---------------------+
   | 58fbccdd-1b60-c6ba-9a51-bbc2cbcc95f8 | 10.134.13.2         |
   | ce620f79-aed4-6d1c-b5e7-c64c5f6d1f28 | 10.134.13.3         |
   | 6442eef1-b748-cb51-8a96-98b90e264bd0 | 10.134.13.4         |
   | a9792d03-f5de-cae1-fa5a-bb614720b22c | 10.134.13.5         |
   | da18a52d-73a5-4c7d-fb98-95d292d9b0e8 | 10.134.13.6         |
   | 22e02f77-5b44-402a-d369-9e6b1d831ca0 | 10.134.13.7         |
   +--------------------------------------+---------------------+

11. Add external_network to the router.

    l3-agent# neutron router-gateway-set -h
    usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
                                      [--disable-snat]
     router-id external-network-id

    l3-agent# neutron router-gateway-set \
    181543df-40d1-4514-ea77-fddd78c389ff \  (provider_router UUID)
    f67f0d72-0ddf-11e4-9d95-e1f29f417e2f    (external_network UUID)
    Set gateway for router 181543df-40d1-4514-ea77-fddd78c389ff

    l3-agent# neutron router-list -c name -c external_gateway_info
+-----------------+--------------------------------------------------------+
| name            | external_gateway_info                                  |
+-----------------+--------------------------------------------------------+
| provider_router | {"network_id": "f67f0d72-0ddf-11e4-9d95-e1f29f417e2f"} |
+-----------------+--------------------------------------------------------+

12. Add the tenant's private networks to the router. The networks shown by neutron net-list were previously configured.

    l3-agent# keystone tenant-list
    +----------------------------------+---------+---------+
    |                id                |   name  | enabled |
    +----------------------------------+---------+---------+
    | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
    | f164220cb02465db929ce520869895fa | service |   True  |
    +----------------------------------+---------+---------+

    l3-agent# neutron net-list --tenant-id=511d4cb9ef6c40beadc3a664c20dc354
    +-------------------------------+------+------------------------------+
    | id                            | name | subnets                      |
    +-------------------------------+------+------------------------------+
    | c0c15e0a-0def-11e4-9d9f-      | HR   | c0c53066-0def-11e4-9da0-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.100.0/24|   
    | ce64b430-0def-11e4-9da2-      | ENG  | ce693ac8-0def-11e4-9da3-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.101.0/24|
    +-------------------------------+------+------------------------------+

    Note: The above two networks were preconfigured 

    l3-agent# neutron router-interface-add  \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    c0c53066-0def-11e4-9da0-e1f29f417e2f   (HR subnet UUID)
    Added interface 7843841e-0e08-11e4-9da5-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

    l3-agent# neutron router-interface-add \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    ce693ac8-0def-11e4-9da3-e1f29f417e2f   (ENG subnet UUID)
    Added interface 89289b8e-0e08-11e4-9da6-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

13. The following figure shows how the network topology looks when you log in as a service tenant user.

provider_router.png

Steps required to create and associate floating IPs as a tenant user

1. Log in to the OpenStack Dashboard using the tenant user's credentials

2. Select Project -> Access & Security -> Floating IPs

3. With external_network selected, click the Allocate IP button

allocate_floating_ip.png

4. The Floating IPs tab shows that the floating IP 10.134.13.9 has been allocated.

allocated_floating_ip.png

5. Click the Associate button and select the VM's port from the pull-down menu.

associate_fip.png

6. The Project -> Instances window shows that the floating IP is associated with the VM.

instances.png

If you had selected a keypair (SSH public key) while launching an instance, then that SSH key would be added to root's authorized_keys file in the VM. With that done, you can ssh into the running VM.

       [gmoodalb@thunta:~] ssh root@10.134.13.9
       Last login: Fri Jul 18 00:37:39 2014 from 10.132.146.13
       Oracle Corporation SunOS 5.11 11.2 June 2014

       root@host-192-168-101-3:~# uname -a
       SunOS host-192-168-101-3 5.11 11.2 i86pc i386 i86pc
       root@host-192-168-101-3:~# zoneadm list -cv
       ID NAME              STATUS      PATH                 BRAND      IP    
        2 instance-00000001 running     /                    solaris    excl 
       root@host-192-168-101-3:~# ipadm
       NAME             CLASS/TYPE STATE        UNDER      ADDR
       lo0              loopback   ok           --         --
         lo0/v4         static     ok           --         127.0.0.1/8
         lo0/v6         static     ok           --         ::1/128
       net0             ip         ok           --         --
         net0/dhcp      inherited  ok           --         192.168.101.3/24

Under the covers:

On the node where neutron-l3-agent is running, you can use IP filter commands (ipf(1m), ippool(1m), and ipnat(1m)) and networking commands (dladm(1m) and ipadm(1m)) to observe and troubleshoot the configuration done by neutron-l3-agent.

VNICs created by neutron-l3-agent:

    l3-agent# dladm show-vnic
    LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE VIDS
    l3i7843841e_0_0     net1         1000   2:8:20:42:ed:22   fixed       200
    l3i89289b8e_0_0     net1         1000   2:8:20:7d:87:12   fixed       201
    l3ed527f842_0_0     net0         100    2:8:20:9:98:3e    fixed       0

IP addresses created by neutron-l3-agent:

    l3-agent# ipadm
    NAME                  CLASS/TYPE STATE   UNDER      ADDR
    l3ed527f842_0_0       ip         ok      --         --
      l3ed527f842_0_0/v4  static     ok      --         10.134.13.8/24
      l3ed527f842_0_0/v4a static     ok      --         10.134.13.9/32
    l3i7843841e_0_0       ip         ok      --         --
      l3i7843841e_0_0/v4  static     ok      --         192.168.100.1/24
    l3i89289b8e_0_0       ip         ok      --         --
      l3i89289b8e_0_0/v4  static     ok      --         192.168.101.1/24

IP Filter rules:

   l3-agent# ipfstat -io
   empty list for ipfilter(out)
   block in quick on l3i7843841e_0_0 from 192.168.100.0/24 to pool/4386082
   block in quick on l3i89289b8e_0_0 from 192.168.101.0/24 to pool/8226578
   l3-agent# ippool -l
   table role = ipf type = tree number = 8226578
{ 192.168.100.0/24; };
   table role = ipf type = tree number = 4386082
{ 192.168.101.0/24; };

IP NAT rules:

   l3-agent# ipnat -l
   List of active MAP/Redirect filters:
   bimap l3ed527f842_0_0 192.168.101.3/32 -> 10.134.13.9/32
   List of active sessions:
   BIMAP 192.168.101.3  22  <- -> 10.134.13.9  22 [10.132.146.13 36405]

Known Issues:

1. The neutron-l3-agent SMF service goes into maintenance when it is restarted. This will be fixed in an SRU. The workaround is to restart the ipfilter service and then clear the neutron-l3-agent service:

# svcadm restart ipfilter:default
# svcadm clear neutron-l3-agent:default

2. The default gateway for the network node is removed in certain setups.

If the IP address of the network node is derived from the external_network address space and you use the neutron router-gateway-clear command to remove the external_network from the provider_router, the default gateway for the network node is deleted and the network node becomes inaccessible.

     l3-agent# neutron router-gateway-clear <router_UUID_goes_here>

To fix this problem, connect to the network node through the console and then add the default gateway again.
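
For example, from the console (the gateway address is your external subnet's
gateway, 10.134.13.1 in this setup):

     l3-agent# route -p add default 10.134.13.1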

About

Oracle OpenStack is cloud management software that provides customers an enterprise-grade solution to deploy and manage their entire IT environment. Customers can rapidly deploy Oracle and third-party applications across shared compute, network, and storage resources with ease, with end-to-end enterprise-class support. For more information, see here.
