Neutron L3 Agent in Oracle Solaris OpenStack
By Girishmoodalbail-Oracle on Jul 31, 2014
The Oracle Solaris implementation of OpenStack Neutron supports the following deployment model: provider router with private networks. You can find more information about this model in the OpenStack Networking documentation. In this deployment model, each tenant can have one or more private networks, and all the tenant networks share the same router. This router is created, owned, and managed by the data center administrator. The router itself is not visible in the tenant's network topology view. Because there is only a single router, tenant networks cannot use overlapping IP ranges; thus, it is likely that the administrator would create the private networks on behalf of the tenants.
By default, this router prevents routing between private networks that are part of the same tenant. That is, VMs in one private network cannot communicate with VMs in another private network, even though they all belong to the same tenant. This behavior can be changed by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini configuration file and restarting the neutron-l3-agent SMF service.
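For example, a sketch of that change (the option name comes from the text above; the neutron-l3-agent SMF service name is used throughout this post):

    # In /etc/neutron/l3_agent.ini: allow routed traffic between a tenant's private networks
    allow_forwarding_between_networks = True

After saving the change, restart the agent with svcadm restart neutron-l3-agent.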
This router provides connectivity to the outside world for the tenant VMs by performing bidirectional NAT on the interface that connects the router to the external network. Tenants create as many floating IPs (public IPs) as they need, up to their floating IP quota, and then associate these floating IPs with the VMs that need outside connectivity.
The following figure captures the supported deployment model.
Figure 1 Provider router with private networks deployment
Tenant A has:
- Two internal networks:
HR (subnet: 192.168.100.0/24, gateway: 192.168.100.1)
ENG (subnet: 192.168.101.0/24, gateway: 192.168.101.1)
- Two VMs:
VM1 connected to HR with a fixed IP address of 192.168.100.3
VM2 connected to ENG with a fixed IP address of 192.168.101.3
Tenant B has:
- Two internal networks:
IT (subnet: 192.168.102.0/24, gateway: 192.168.102.1)
ACCT (subnet: 192.168.103.0/24, gateway: 192.168.103.1)
- Two VMs:
VM3 connected to IT with a fixed IP address of 192.168.102.3
VM4 connected to ACCT with a fixed IP address of 192.168.103.3
All the gateway interfaces are instantiated on the node that is running neutron-l3-agent.
The external network is a provider network associated with the subnet 10.134.13.0/24, which is reachable from outside. Tenants create floating IPs from this network and associate them with their VMs. VM1 and VM2 have the floating IPs 10.134.13.40 and 10.134.13.9 associated with them, respectively, and are reachable from the outside world through these addresses.
Configuring neutron-l3-agent on a Network Node
Note: In this configuration, all Compute Nodes and Network Nodes in the network have been identified, and the configuration files for all the OpenStack services have been appropriately set up so that the services can communicate with each other.
The service tenant is a tenant for all the OpenStack services (nova, neutron, glance, cinder, swift, keystone, and horizon) and contains a user for each of the services. The services communicate with each other using these users, all of which have the admin role. The steps below show how to use the service tenant to create a router, an external network, and an external subnet that will be shared by all of the tenants in the data center. Refer to the following table and diagram while walking through the steps.
Note: Alternatively, you could create a separate tenant (DataCenter) and a new user (datacenter) with admin role, and the DataCenter tenant could host all of the aforementioned shared resources.
Table 1 Public IP address mapping
Figure 2 Neutron L3 agent configuration
Steps required to set up the Neutron L3 agent as a data center administrator:
Note: We need to use the OpenStack CLI to configure the shared single router and to associate networks and subnets from different tenants with it, because the OpenStack dashboard can manage only one tenant's resources at a time.
1. Enable Solaris IP filter functionality.
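A minimal sketch of this step, assuming the commands are run as root on the network node:

    # Enable the Solaris IP Filter SMF service and verify that it is online
    svcadm enable ipfilter
    svcs ipfilter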
2. Enable IP forwarding on the entire host.
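For example (IPv4 only, since the tenant networks in this post are all IPv4):

    # Turn on IPv4 forwarding for the whole host and verify the setting
    ipadm set-prop -p forwarding=on ipv4
    ipadm show-prop -p forwarding ipv4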
3. Ensure that the Solaris Elastic Virtual Switch feature is configured correctly and has the VLAN ID required for the external network. In our case, the external network/subnet uses VLAN 1.
Note: For more information on EVS, refer to Chapter 5, "About Elastic Virtual Switches" and Chapter 6, "Administering Elastic Virtual Switches" in Managing Network Virtualization and Network Resources in Oracle Solaris 11.2 (http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris EVS forms the back end for OpenStack networking, and it facilitates inter-VM communication (on the same compute node or across compute nodes) using either VLANs or VXLANs.
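A sketch of how you might verify this; the vlan-range value shown here is illustrative, so substitute the ranges used in your own EVS controller configuration:

    # Display the EVS controller properties (L2 type, VLAN/VXLAN ranges, uplink port)
    evsadm show-controlprop
    # Make sure VLAN 1, used by the external network, is within the allowed VLAN range
    evsadm set-controlprop -p vlan-range=1,200-300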
4. Ensure that the service tenant already exists.
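For example, with the admin credentials loaded into the environment, the keystone CLI of this release can confirm it:

    # Confirm that the service tenant exists
    keystone tenant-list | grep service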
5. Create the provider router. Note the UUID of the new router.
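A sketch of this step; provider_router is the router name used later in this post:

    # Create the shared router as the data center administrator and note its UUID
    neutron router-create provider_router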
6. Use the router UUID from step 5 and update the /etc/neutron/l3_agent.ini file with the following entry:
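The entry names the single provider router that this agent manages; a sketch (substitute the actual UUID):

    router_id = <UUID of provider_router from step 5>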
7. Enable the neutron-l3-agent service.
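For example:

    # Enable the L3 agent SMF service and check that it comes online
    svcadm enable neutron-l3-agent
    svcs neutron-l3-agent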
8. Create an external network.
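A sketch of this step, based on the VLAN 1 external network described above (the provider attribute values depend on your EVS setup):

    # Create the external provider network on VLAN 1 and mark it as external
    neutron net-create --provider:network_type=vlan \
        --provider:segmentation_id=1 \
        --router:external=true external_network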
9. Associate a subnet with external_network.
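For example, using the 10.134.13.0/24 subnet from Figure 1 (the subnet name external_subnet is illustrative; DHCP is disabled because addresses on this network are handed out as floating IPs):

    # Create the external subnet on external_network
    neutron subnet-create --name external_subnet --disable-dhcp \
        external_network 10.134.13.0/24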
10. Apply the workaround for not having --allocation-pool support for subnets. Because the IP addresses 10.134.13.2 through 10.134.13.7 are set aside for other OpenStack API services, perform the following floatingip-create steps to ensure that no tenant can assign these addresses to VMs:
Note: This workaround is not needed if you are running Solaris 11.2 SRU5 or later, because support for allocation pools was added in that update.
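A sketch of the workaround, assuming floating IPs are handed out sequentially starting at 10.134.13.2: allocate the reserved addresses as floating IPs under the service tenant so that no other tenant can receive them.

    # Reserve 10.134.13.2 through 10.134.13.7 for the service tenant
    for i in 1 2 3 4 5 6; do
        neutron floatingip-create external_network
    done
    neutron floatingip-list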
11. Add external_network to the router.
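For example:

    # Attach external_network to the provider router as its gateway
    neutron router-gateway-set provider_router external_network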
12. Add the tenants' private networks to the router. The networks shown by neutron net-list were configured previously.
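A sketch of this step; the values in angle brackets are placeholders for the subnet UUIDs reported by neutron:

    # List the tenant networks and subnets that were configured earlier
    neutron net-list
    neutron subnet-list
    # Attach each private subnet to the provider router
    neutron router-interface-add provider_router <HR subnet UUID>
    neutron router-interface-add provider_router <ENG subnet UUID>
    # Repeat for tenant B's IT and ACCT subnets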
13. The following figure shows how the network topology looks when you log in as a service tenant user.
Steps required to create and associate floating IPs as a tenant user:
1. Log in to the OpenStack Dashboard using the tenant user's credentials.
2. Select Project -> Access & Security -> Floating IPs.
3. With external_network selected, click the Allocate IP button.
4. The Floating IPs tab shows that the floating IP 10.134.13.9 has been allocated.
5. Click the Associate button and select the VM's port from the pull-down menu.
6. The Project -> Instances window shows that the floating IP is associated with the VM.
If you selected a keypair (SSH public key) while launching the instance, that SSH key is added to root's authorized_keys file in the VM, so you can ssh straight into the running VM.
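For example, using the floating IP allocated above:

    # From a host that can reach the external network
    ssh root@10.134.13.9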
Under the covers:
On the node where neutron-l3-agent is running, you can use the IP Filter commands (ipf(1M), ippool(1M), and ipnat(1M)) and the networking commands (dladm(1M) and ipadm(1M)) to observe and troubleshoot the configuration done by neutron-l3-agent: the VNICs and IP addresses it creates, and the IP Filter and IP NAT rules it installs.
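A sketch of the commands you could run there (output omitted, since it is specific to each deployment):

    # VNICs and IP addresses plumbed by neutron-l3-agent
    dladm show-vnic
    ipadm show-addr
    # Loaded IP Filter rules and address pools
    ipfstat -io
    ippool -l
    # Active IP NAT rules that implement the floating IPs
    ipnat -l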
Known Issues:
1. The neutron-l3-agent SMF service goes into maintenance when it is restarted. This will be fixed in an SRU. The workaround is to restart the ipfilter service and then clear the neutron-l3-agent service.
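A sketch of that workaround:

    # Restart IP Filter, then clear the L3 agent out of the maintenance state
    svcadm restart ipfilter
    svcadm clear neutron-l3-agent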
2. The default gateway for the network node is removed in certain setups.
If the IP address of the network node is derived from the external_network address space, and you use the neutron router-gateway-clear command to remove the external_network from the provider_router, the default gateway for the network node is deleted and the node becomes unreachable.
To fix this problem, connect to the network node through the console and then add the default gateway again.
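For example, from the console (10.134.13.1 is an assumed gateway address for the external subnet; substitute your own):

    # Persistently re-add the default route
    route -p add default 10.134.13.1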