News, tips, partners, and perspectives for the Oracle Linux operating system and upstream Linux kernel work

Neutron L3 Agent in Oracle Solaris OpenStack

The Oracle Solaris implementation
of OpenStack Neutron supports the provider router with private networks
deployment model.
In this deployment model, each tenant can have one or more private networks, and
all the tenant networks share the same router. This router is created, owned,
and managed by the data center administrator. The router itself is not
visible in a tenant's network topology view. Because there is only a single
router, tenant networks cannot use overlapping IP addresses. Thus, it is likely that the
administrator would create the private networks on behalf of the tenants.

By default, this router prevents
routing between private networks that belong to the same tenant. That is, VMs
within one private network cannot communicate with the VMs in another private
network, even though both networks are part of the same tenant. You can change
this behavior by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini
configuration file and restarting the neutron-l3-agent SMF service.
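As a minimal sketch of that change (the option name and file path are those stated above; the section header is an assumption based on the usual layout of Neutron agent configuration files):

```
# In /etc/neutron/l3_agent.ini, typically under the [DEFAULT] section:
allow_forwarding_between_networks = True
```

After saving the file, restart the service with `svcadm restart neutron-l3-agent` for the change to take effect.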

This router provides connectivity
to the outside world for the tenant VMs. It does this by performing bidirectional
NAT on the interface that connects the router to the external network. Tenants
create as many floating IPs (public IPs) as they need, or as many as the floating
IP quota allows, and then associate these floating IPs with the VMs that need outside
connectivity.
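As a sketch of that workflow from the neutron CLI (the network name matches this deployment; the UUIDs are placeholders to be filled in from your own environment):

```shell
# Allocate a floating IP from the external network
neutron floatingip-create external_network

# Find the port of the VM that needs outside connectivity
neutron port-list --device-id <VM_instance_UUID>

# Associate the floating IP with that port
neutron floatingip-associate <floatingip_UUID> <port_UUID>
```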

The following figure captures the supported deployment model.


Figure 1 Provider router with private networks deployment

Tenant A has:

  • Two internal networks:
      HR (subnet:, gateway:)
      ENG (subnet:, gateway:)
  • Two VMs:
      one connected to HR with a fixed IP address of
      one connected to ENG with a fixed IP address of

Tenant B has:

  • Two internal networks:
      IT (subnet:, gateway:)
      ACCT (subnet:, gateway:)
  • Two VMs:
      one connected to IT with a fixed IP address of
      one connected to ACCT with a fixed IP address of

All the gateway interfaces are
instantiated on the node that is running neutron-l3-agent.

The external network is a
provider network that is associated with a subnet that is reachable
from outside. Tenants create floating IPs from this network and associate
them with their VMs. VM1 and VM2 each have a floating IP associated with them
and are reachable from the outside world through those IP addresses.

Configuring neutron-l3-agent on a Network Node

Note: In this configuration, all
Compute Nodes and Network Nodes in the network have been identified, and the
configuration files for all the OpenStack services have been set up
so that the services can communicate with each other.

The service tenant is a tenant that hosts
all the OpenStack services (nova, neutron, glance, cinder, swift, keystone, and horizon)
and the users for each of those services. The services communicate with each other
using these users, all of whom have the admin role. The steps below show how to use the
service tenant to create a router, an external network, and an external subnet
that will be used by all of the tenants in the data center. Refer to the
following table and diagram while walking through the steps.

Note: Alternatively, you could create
a separate tenant (DataCenter) and a new user (datacenter) with admin role, and
the DataCenter tenant could host all of the aforementioned shared resources. 


Table 1 Public IP address mapping


Figure 2 Neutron L3 agent configuration

Steps required to set up the Neutron L3 agent as
a data center administrator:

Note: We need to use the OpenStack CLI to configure the
shared single router and to associate networks and subnets from different tenants with
it, because the OpenStack dashboard can manage only one tenant's resources
at a time.

1. Enable Solaris IP filter functionality.

   l3-agent# svcadm enable ipfilter
   l3-agent# svcs ipfilter
   STATE          STIME    FMRI
   online         10:29:04 svc:/network/ipfilter:default

2. Enable IP forwarding on the entire host.

   l3-agent# ipadm set-prop -p forwarding=on ipv4
   l3-agent# ipadm show-prop -p forwarding
   PROTO PROPERTY    PERM CURRENT PERSISTENT DEFAULT POSSIBLE
   ipv4  forwarding  rw   on      on         off     on,off

3. Ensure that the Solaris Elastic Virtual Switch
feature is configured correctly and has the VLAN ID required for the external
network. In our case, the external network/subnet uses VLAN 1.

   l3-agent# evsadm show-controlprop -p vlan-range,l2-type
   PROPERTY            PERM VALUE
   l2-type             rw   vlan
   vlan-range          rw   200-300

   l3-agent# evsadm set-controlprop -p vlan-range=1,200-300

Note: For more information on EVS, refer to Chapter 5, "About
Elastic Virtual Switches" and Chapter 6, "Administering Elastic
Virtual Switches" in Managing Network Virtualization and Network Resources
in Oracle Solaris 11.2
(http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris
EVS forms the backend for OpenStack networking, and it facilitates inter-VM communication
(on the same compute node or across compute nodes) using either VLANs or VXLANs.

4. Verify that the service tenant already exists.

keystone --os-endpoint=http://localhost:35357/v2.0 \
--os-token=ADMIN tenant-list
|                id                |   name  | enabled |
| 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
| f164220cb02465db929ce520869895fa | service |   True  |

5. Create the provider router. Note the UUID of the
new router.

export OS_USERNAME=neutron
export OS_PASSWORD=neutron
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://localhost:5000/v2.0
neutron router-create provider_router
   Created a new router:
   | Field                 | Value                                |
   | admin_state_up        | True                                 |
   | external_gateway_info |                                      |
   | id                    | 181543df-40d1-4514-ea77-fddd78c389ff |
   | name                  | provider_router                      |
   | status                | ACTIVE                               |
   | tenant_id             | f164220cb02465db929ce520869895fa     |

6. Using the router UUID from step 5, update the
/etc/neutron/l3_agent.ini file with the following entry:

router_id = 181543df-40d1-4514-ea77-fddd78c389ff

7. Enable the neutron-l3-agent service.

   l3-agent# svcadm enable neutron-l3-agent
   l3-agent# svcs neutron-l3-agent
   STATE          STIME    FMRI
   online         11:24:08 svc:/application/openstack/neutron/neutron-l3-agent:default

8. Create an external network.

   l3-agent# neutron net-create --provider:network_type=vlan \
   --provider:segmentation_id=1 --router:external=true  external_network
   Created a new network:
   | Field                    | Value                                |
   | admin_state_up           | True                                 |
   | id                       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | name                     | external_network                     |
   | provider:network_type    | vlan                                 |
   | provider:segmentation_id | 1                                    |
   | router:external          | True                                 |
   | shared                   | False                                |
   | status                   | ACTIVE                               |
   | subnets                  |                                      |
   | tenant_id                | f164220cb02465db929ce520869895fa     |

9. Associate a subnet with external_network.

neutron subnet-create --enable-dhcp=False \
external_subnet external_network
   Created a new subnet:
   | Field            | Value                                |
   | allocation_pools | {"start": "", "end": ""}             |
   | cidr             |                                      |
   | dns_nameservers  |                                      |
   | enable_dhcp      | False                                |
   | gateway_ip       |                                      |
   | id               | 5d9c8958-0de0-11e4-9d96-e1f29f417e2f |
   | ip_version       | 4                                    |
   | name             | external_subnet                      |
   | network_id       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | tenant_id        |                                      |

10. Apply
the workaround for not having --allocation-pool support for subnets. Because a
range of IP addresses is set aside for other OpenStack API services,
perform the following floatingip-create steps to ensure that no tenant can
assign these IP addresses to VMs:

NOTE: This workaround is not needed if you are running S11.2 SRU5 or later,
because support for allocation pools was added in that update.

   l3-agent# for i in `seq 1 6`; do neutron floatingip-create \
   external_network; done
   l3-agent# neutron floatingip-list -c id -c floating_ip_address
   | id                                   | floating_ip_address |
   | 58fbccdd-1b60-c6ba-9a51-bbc2cbcc95f8 |         |
   | ce620f79-aed4-6d1c-b5e7-c64c5f6d1f28 |         |
   | 6442eef1-b748-cb51-8a96-98b90e264bd0 |         |
   | a9792d03-f5de-cae1-fa5a-bb614720b22c |         |
   | da18a52d-73a5-4c7d-fb98-95d292d9b0e8 |         |
   | 22e02f77-5b44-402a-d369-9e6b1d831ca0 |         |

11. Add
external_network to the router.

    l3-agent# neutron router-gateway-set -h
    usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
     router-id external-network-id

    l3-agent# neutron router-gateway-set \
    181543df-40d1-4514-ea77-fddd78c389ff \  (provider_router UUID)
    f67f0d72-0ddf-11e4-9d95-e1f29f417e2f    (external_network UUID)
    Set gateway for router 181543df-40d1-4514-ea77-fddd78c389ff

    l3-agent# neutron router-list -c name -c external_gateway_info
| name            | external_gateway_info                                  |
| provider_router | {"network_id": "f67f0d72-0ddf-11e4-9d95-e1f29f417e2f"} |

12. Add
the tenant's private networks to the router. The networks shown by neutron
net-list were previously configured.

    l3-agent# keystone tenant-list
    |                id                |   name  | enabled |
    | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
    | f164220cb02465db929ce520869895fa | service |   True  |

    l3-agent# neutron net-list --tenant-id=511d4cb9ef6c40beadc3a664c20dc354
    | id                            | name | subnets                      |
    | c0c15e0a-0def-11e4-9d9f-      | HR   | c0c53066-0def-11e4-9da0-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f|   
    | ce64b430-0def-11e4-9da2-      | ENG  | ce693ac8-0def-11e4-9da3-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f|


    l3-agent# neutron router-interface-add  \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    c0c53066-0def-11e4-9da0-e1f29f417e2f   (HR subnet UUID)
    Added interface 7843841e-0e08-11e4-9da5-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

    l3-agent# neutron router-interface-add \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    ce693ac8-0def-11e4-9da3-e1f29f417e2f   (ENG subnet UUID)
    Added interface 89289b8e-0e08-11e4-9da6-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.
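Optionally, the interfaces just added can be verified with the standard neutron client (a sketch; the UUID is the provider_router UUID from step 5):

```shell
l3-agent# neutron router-port-list 181543df-40d1-4514-ea77-fddd78c389ff
```

The output should list one port per attached subnet, matching the interface IDs reported above.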

13. The
following figure shows how the network topology looks when you log in as a
service tenant user.


Steps required to create and associate
floating IPs as a tenant user

1. Log in to the OpenStack dashboard using the
tenant user's credentials.

2. Select Project -> Access & Security ->
Floating IPs

3. With external_network selected, click the
Allocate IP button


4. The Floating IPs tab shows that the
floating IP has been allocated.


5. Click the Associate button and select the VM's
port from the pull down menu.


6. The Project -> Instances window shows that
the floating IP is associated with the VM.


If you selected a keypair (SSH public key) while
launching the instance, then that SSH key was added to root's
authorized_keys file in the VM, and you can ssh into the running VM.

       [gmoodalb@thunta:~] ssh root@
       Last login: Fri Jul 18 00:37:39 2014
       Oracle Corporation      SunOS 5.11      11.2    June 2014

       root@host-192-168-101-3:~# uname -a
       SunOS host-192-168-101-3 5.11 11.2 i86pc i386 i86pc
       root@host-192-168-101-3:~# zoneadm list -cv
         ID NAME              STATUS      PATH                 BRAND      IP
          2 instance-00000001 running     /                    solaris    excl
       root@host-192-168-101-3:~# ipadm
       NAME             CLASS/TYPE STATE        UNDER      ADDR
       lo0              loopback   ok           --         --
         lo0/v4         static     ok           --
         lo0/v6         static     ok           --         ::1/128
       net0             ip         ok           --         --
         net0/dhcp      inherited  ok           --

Under the covers:

On the node where
neutron-l3-agent is running, you can use the IP Filter commands (ipf(1M),
ippool(1M), and ipnat(1M)) and the networking commands (dladm(1M) and ipadm(1M)) to
observe and troubleshoot the configuration done by neutron-l3-agent.

VNICs created by neutron-l3-agent:

    l3-agent# dladm show-vnic
    LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE VIDS
    l3i7843841e_0_0     net1         1000   2:8:20:42:ed:22   fixed       200
    l3i89289b8e_0_0     net1         1000   2:8:20:7d:87:12   fixed       201
    l3ed527f842_0_0     net0         100    2:8:20:9:98:3e    fixed       0

IP addresses created by neutron-l3-agent:

    l3-agent# ipadm
    NAME                  CLASS/TYPE STATE   UNDER      ADDR
    l3ed527f842_0_0       ip         ok      --         --
      l3ed527f842_0_0/v4  static     ok      --
      l3ed527f842_0_0/v4a static     ok      --
    l3i7843841e_0_0       ip         ok      --         --
      l3i7843841e_0_0/v4  static     ok      --
    l3i89289b8e_0_0       ip         ok      --         --
      l3i89289b8e_0_0/v4  static     ok      --

IP Filter rules:

   l3-agent# ipfstat -io
   empty list for ipfilter(out)
   block in quick on l3i7843841e_0_0 from to pool/4386082
   block in quick on l3i89289b8e_0_0 from to pool/8226578

   l3-agent# ippool -l
   table role = ipf type = tree number = 8226578
        {; };
   table role = ipf type = tree number = 4386082
        {; };

IP NAT rules:

   l3-agent# ipnat -l
   List of active MAP/Redirect filters:
   bimap l3ed527f842_0_0 ->
   List of active sessions:
   BIMAP  22  <- ->  22 [ 36405]

Known Issues:

1. The neutron-l3-agent SMF service goes into the
maintenance state when it is restarted. This will be fixed in an SRU. The workaround
is to restart the ipfilter service and then clear the neutron-l3-agent service.

# svcadm restart ipfilter:default
# svcadm clear neutron-l3-agent:default

2. The default gateway for the Network Node is
removed in certain setups.

If the IP address of the Network
Node is derived from the external_network address space, then using the
neutron router-gateway-clear command to remove the external_network from the
provider_router also deletes the Network Node's default gateway, making the
Network Node inaccessible.

     l3-agent# neutron router-gateway-clear <router_UUID_goes_here>

To fix this problem, connect to
the network node through the console and then add the default gateway again.
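A sketch of that recovery step (the gateway address is a placeholder; use the actual default gateway for the Network Node's subnet):

```shell
l3-agent# route -p add default <default_gateway_IP>
```

The -p flag makes the route persistent across reboots.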


Comments ( 5 )
  • guest Thursday, August 7, 2014

    Very detailed and informative.

    Thanks Girish.

  • J.Matsuda Friday, September 26, 2014

    Thanks for the neutron setting.

    I have some questions about the neutron setting.


    Why is the router created by the service tenant?

    If the router is created by the service tenant, a private network cannot be connected to the router in Horizon (GUI).

    In the case of Linux OpenStack, the external network is created by the service tenant, but the router is created by the demo tenant.


    What is the link name of the zone?

    A zone's link name is "net0" by default.

    But in the sample dladm show-vnic output, the link name is "net1".

    Please tell me how to change the link name from net0 to net1.

    Best Regards.


  • J.Matsuda Saturday, October 4, 2014

    Thanks Girish.

    May I ask another question?


    Does Solaris OpenStack not support "Scenario 2: two tenants, two networks, two routers"?


    Is the Figure 2 setting used across 2 servers?

    I want to set up OpenStack as follows.

    For example, in the case of OVM:

    Guest domain1 is the network node, and guest domain2 is the compute node.

    - "net0" of guest domain1 is the external network.

    - "net1" of guest domain1 is connected to guest domain2.

      I do not set an IP address with ipadm. I think that the l3-agent sets the IP address.

    - "net0" of guest domain2 is used for the compute node.

    Is this setting OK?

    Should I refer to "Multi-node Solaris 11.2 OpenStack on SPARC Servers"?

    If I cannot work out the multi-node setting, I will write a comment on "Multi-node Solaris 11.2 OpenStack on SPARC Servers".

    Best Regards.


  • guest Sunday, January 24, 2016

    Does the newest OpenStack (Juno) in Solaris 11.3 support Distributed Virtual Routing (DVR)? Thanks!

  • Girish Moodalbail Sunday, January 24, 2016

    The latest OpenStack in Solaris 11.3 (i.e., Juno) doesn't support DVR.
