Tuesday Jul 28, 2015

Migrating Neutron Database from sqlite to MySQL for Oracle OpenStack for Oracle Solaris

Many OpenStack development environments use sqlite as a backend to store data. However, in a production environment, MySQL is widely used, and Oracle also recommends MySQL for its OpenStack services. For many of the OpenStack services (nova, cinder, neutron, and so on) sqlite is the default backend. Oracle OpenStack for Oracle Solaris users may therefore want to migrate their backend database from sqlite to MySQL.

The general idea is to dump the sqlite database, translate the dumped SQL statements so that they are compatible with MySQL, stop the Neutron services, create the MySQL database, and replay the modified SQL statements in MySQL.

The details listed here are for the Juno release (integrated in Oracle Solaris 11.2 SRU 10.5 or newer) and Neutron is taken as an example use case.

Migrating neutron database from sqlite to MySQL

If not already installed, install MySQL:

# pkg install --accept mysql-55 mysql-55/client python-mysql

Start the MySQL service:
# svcadm enable -rs mysql

NOTE: If MySQL was already installed and running, then before running the next step double check that the Neutron database on MySQL either does not yet exist or is empty. The next step will drop the existing MySQL Neutron database if it exists and create a new one. If the MySQL Neutron database is not empty then stop at this point. The following steps are limited to the case where the MySQL Neutron database is newly created or recreated.

Create the Neutron database on MySQL:

mysql -u root -p<<EOF
DROP DATABASE IF EXISTS neutron;
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'
IDENTIFIED BY 'neutron';
FLUSH PRIVILEGES;
EOF

Enter the root password when prompted.

Identify the Neutron services that are online:

# svcs -a | grep neutron | grep online | awk '{print $3}' \
    > /tmp/neutron-svc

Disable the Neutron services:

# for item in `cat /tmp/neutron-svc`; do svcadm disable $item; done

Make a backup of the Neutron sqlite database:

# cp /var/lib/neutron/neutron.sqlite \
    /var/lib/neutron/neutron.sqlite.ORIG

Get the db dump of Neutron from sqlite:

# /usr/bin/sqlite3 /var/lib/neutron/neutron.sqlite .dump \
    > /tmp/neutron-sqlite.sql

The following steps create a neutron-mysql.sql file that is compatible with the MySQL database engine.

Suppress foreign key checks during create table/index:
# echo 'SET foreign_key_checks = 0;' > /tmp/neutron-sqlite-schema.sql

Dump the sqlite schema to a file:

# /usr/bin/sqlite3 /var/lib/neutron/neutron.sqlite .dump | \
    grep -v 'INSERT INTO' >> /tmp/neutron-sqlite-schema.sql

Remove BEGIN/COMMIT/PRAGMA lines from the file. (Oracle Solaris sed does not support the -i option, hence redirecting to a new file and then renaming it to the original file.)

# sed '/BEGIN TRANSACTION;/d; /COMMIT;/d; /PRAGMA/d' \
    /tmp/neutron-sqlite-schema.sql \
    > /tmp/neutron-sqlite-schema.sql.new \
    && mv /tmp/neutron-sqlite-schema.sql.new \
    /tmp/neutron-sqlite-schema.sql


Replace SQL identifiers that are enclosed in double quotes so that they are enclosed in back quotes, e.g. "limit" becomes `limit`:

# for item in binary blob group key limit type; do
      sed "s/\"$item\"/\`$item\`/g" /tmp/neutron-sqlite-schema.sql \
          > /tmp/neutron-sqlite-schema.sql.new \
          && mv /tmp/neutron-sqlite-schema.sql.new \
          /tmp/neutron-sqlite-schema.sql
  done

Enable foreign key checks at the end of the file

# echo 'SET foreign_key_checks = 1;' >> /tmp/neutron-sqlite-schema.sql 
Dump the data alone (INSERT statements) into another file

# /usr/bin/sqlite3 /var/lib/neutron/neutron.sqlite .dump \
| grep 'INSERT INTO' > /tmp/neutron-sqlite-data.sql
In INSERT statements, table names are in double quotes in sqlite, but in MySQL there must be no double quotes:

# sed 's/INSERT INTO \"\(.*\)\"/INSERT INTO \1/g' \
    /tmp/neutron-sqlite-data.sql > /tmp/neutron-sqlite-data.sql.new \
    && mv /tmp/neutron-sqlite-data.sql.new /tmp/neutron-sqlite-data.sql


Concatenate the schema and data files into neutron-mysql.sql:

# cat /tmp/neutron-sqlite-schema.sql \
/tmp/neutron-sqlite-data.sql > /tmp/neutron-mysql.sql 
Populate the Neutron database in MySQL:

# mysql neutron < /tmp/neutron-mysql.sql
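
Optionally, as a quick sanity check (a sketch; alembic_version is one of the tables known from the dump above), confirm that the data arrived:

# mysql -u neutron -pneutron neutron -e 'SELECT * FROM alembic_version;'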

Specify the connection under the [database] section of the /etc/neutron/neutron.conf file:

The connection string format is as follows:
connection = mysql://%SERVICE_USER%:%SERVICE_PASSWORD%@hostname/neutron 
For example:
connection = mysql://neutron:neutron@localhost/neutron
Enable the Neutron services:
# for item in `cat /tmp/neutron-svc`; do svcadm enable -rs $item; done 
Cleanup:

# rm -f /var/lib/neutron/neutron.sqlite.ORIG \
    /tmp/neutron-sqlite-schema.sql \
    /tmp/neutron-sqlite-data.sql \
    /tmp/neutron-mysql.sql

Details about translating SQL statements to be compatible with MySQL

NOTE: /tmp/neutron-sqlite-schema.sql will have the Neutron sqlite database schema as SQL statements, and /tmp/neutron-sqlite-data.sql will have the data in the Neutron sqlite database, which can be replayed to recreate the database. The SQL statements in neutron-sqlite-schema.sql and neutron-sqlite-data.sql must be made MySQL compatible so that they can be replayed on the MySQL Neutron database. The set of sed commands listed above creates MySQL-compatible SQL statements. The following text provides detailed information about the differences between sqlite and MySQL that have to be dealt with.

There are some differences in the SQL statements that sqlite and MySQL expect, as shown in the table below:

|---------------------------------------+---------------------------------------|
| sqlite                                | MySQL                                 |
|---------------------------------------+---------------------------------------|
| Reserved words are in double quotes:  | Reserved words are in back quotes:    |
| e.g. "blob", "type", "key",           | e.g. `blob`, `type`, `key`,           |
| "group", "binary", "limit"            | `group`, `binary`, `limit`            |
|---------------------------------------+---------------------------------------|
| Table names in INSERT statements      | Table names in INSERT statements      |
| are in double quotes:                 | are without quotes:                   |
| INSERT INTO "alembic_version"         | INSERT INTO alembic_version           |
| VALUES('juno');                       | VALUES('juno');                       |
|---------------------------------------+---------------------------------------|

Apart from the above, the following requirements must be met before running neutron-mysql.sql on MySQL:

The lines containing PRAGMA, 'BEGIN TRANSACTION', and 'COMMIT' are to be removed from the file.

The CREATE TABLE statements with FOREIGN KEY references must be ordered so that a referenced table is created before any table that refers to it, and the indices on tables that are referenced by FOREIGN KEY statements are created soon after those tables are created. These last two requirements are unnecessary if the foreign key check is disabled, which is why foreign_key_checks is set to 0 at the beginning of neutron-mysql.sql and enabled again (set to 1) just before the INSERT statements in the neutron-mysql.sql file.

New Oracle University course for Oracle OpenStack!

A new Oracle University course is now available: OpenStack Administration Using Oracle Solaris (Ed 1). This is a great way to get yourself up to speed on OpenStack, especially if you're thinking about getting a proof of concept, development or test, or even a production environment online!

The course is based on OpenStack Juno in Oracle Solaris 11.2 SRU 10.5. Through a series of guided hands-on labs you will learn to:

  • Describe the OpenStack Framework.
  • Configure a Single-Node OpenStack Setup.
  • Configure a Multi-Node OpenStack Setup.
  • Administer OpenStack Resources Using the Horizon UI.
  • Manage Virtual Machine Instances.
  • Troubleshoot OpenStack.

The course is 3 days long and we recommend that you have taken a previous Oracle Solaris 11 administration course. This is an excellent introduction to OpenStack that you'll not want to miss!

Thursday Jul 23, 2015

OpenStack Summit Tokyo - Voting Begins!

It's voting time! The next OpenStack Summit will be held in Tokyo, October 27-30.

The Oracle OpenStack team have submitted a few papers for the summit that can now be voted for:

If you'd like to see these talks, please Vote Now!

Monday Jul 20, 2015

OpenStack and Hadoop

It's always interesting to see how technologies get tied together in the industry. Orgad Kimchi from the Oracle Solaris ISV engineering group has blogged about the combination of OpenStack and Hadoop. Hadoop is an open source project run by the Apache Foundation that provides distributed storage and compute for large data sets - in essence, the very heart of big data. In this technical How To, Orgad shows how to set up a multi-node Hadoop cluster using OpenStack by creating a pre-configured Unified Archive that can be uploaded to the Glance Image Repository for deployment across VMs created with Nova.

Check out: How to Build a Hadoop 2.6 Cluster Using Oracle OpenStack

Sunday Jul 19, 2015

Flat networks and fixed IPs with OpenStack Juno

Girish blogged previously on the work that we've been doing to support new features with the Solaris integrations into the Neutron OpenStack networking project. One of these features provides a flat network topology, allowing administrators to plumb VMs created through an OpenStack environment directly into an existing network infrastructure. This essentially gives administrators a choice between a more secure, dynamic network using either VLAN or VXLAN and a pool of floating IP addresses, and an untagged static or 'flat' network with a set of allocated fixed IP addresses.

Scott Dickson has blogged about flat networks, along with the steps required to set up a flat network with OpenStack, using our driver integration into Neutron based on Elastic Virtual Switch. Check it out!

Sunday Jul 12, 2015

Upgrading the Solaris engineering OpenStack Cloud

Internally we've set up an OpenStack cloud environment for the developers of Solaris as a self-service Infrastructure as a Service solution. We've been running a similar service for years called LRT, or Lab Reservation Tool, that allows developers to book time on systems in our lab. Dave Miner has blogged previously about this work to set up the OpenStack cloud, initially based on Havana:

While the OpenStack team were off building the tools to make an upgrade painless, Dave was patiently waiting (and filing bugs) before he could upgrade the cloud to Juno. With the tooling in place, he had the green light. Check out Dave's experiences with his latest post: Upgrading Solaris Engineering's OpenStack Cloud.

As a reminder, OpenStack Juno is now in Oracle Solaris 11.2 SRU 10.5 onwards and also in the Oracle Solaris 11.3 Beta release we pushed out last week with some great new OpenStack features that we've added to our drivers.

Thursday Jul 09, 2015

PRESENTATION: Oracle OpenStack for Oracle Linux at OpenStack Summit Session

In this blog, we wanted to share a presentation given at the OpenStack Summit in Vancouver early in May. We have just set up our SlideShare.net account and published our first presentation there.

If you want to see more of these presentations, follow us at our Oracle OpenStack SlideShare space.

Tuesday Jul 07, 2015

Upgrading OpenStack from Havana to Juno

Upgrading from Havana to Juno - Under the Covers

Upgrading from one OpenStack release to the next is a daunting task. Experienced OpenStack operators usually only do so reluctantly. After all, it took days (for some - weeks) to get OpenStack to stand up correctly in the first place, and now they want to upgrade it? At the last OpenStack Summit in Vancouver, it wasn't uncommon to hear about companies with large clouds still running Havana. Moving forward to Icehouse, Juno or Kilo was to be an epic undertaking with lots of downtime for users and lots of frustration for operators.

In Solaris engineering, not only are we dealing with upgrading from Havana, but we actually skipped Icehouse entirely. This means we had to move people from Havana directly to Juno, which isn't officially supported upstream. Upstream only supports moving from X to X+1, so we were mostly on our own for this. Luckily, the Juno code base for each component carried the database starting point from Havana along with the SQLAlchemy-Migrate scripts through Icehouse to Juno. This ends up being a huge headache saver because 'component-manage db sync' will simply do the right thing and convert the schema automatically.
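
As a sketch of what this looks like for a single component (each component ships its own manage command; Nova shown here):

# nova-manage db sync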

We created new SMF services for each component to handle the upgrade duties. Each service performs comparable tasks: prepare the database for migration and update configuration files for Juno. The component-upgrade service is now a dependency for each other component-* service. This way, Juno versions of the OpenStack software won't run until after the upgrade service completes.

The Databases

For the most part, migration of the databases from Havana to Juno is straightforward. Since the components deliver the appropriate SQLAlchemy-Migrate scripts, we can simply enable the proper component-db SMF service and let the 'db sync' calls handle the database. We did hit a few snags along the way, however. Migration of SQLite-backed databases became increasingly error-prone as we worked on Juno. Upstream, there's a strong push to do away with SQLite support entirely. We decided that we would not support migration of SQLite databases explicitly. That is, if an operator chose to run one or more of the components with SQLite, we would try to upgrade the database automatically for them, but there were no guarantees. It's well documented both in Oracle's documentation and the upstream documentation that SQLite isn't robust enough to handle what OpenStack needs for throughput to the database.

The second major snag we hit was the forced change to 'charset = utf8' in Glance 2014.2.2 for MySQL. This required our upgrade SMF services to introspect into each component's configuration files, extract their SQLAlchemy connection string, and, if MySQL, convert all the databases to use utf8. With these checks done and any MySQL databases converted, our databases could migrate cleanly and be ready for Juno.
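
The conversion itself is plain SQL. A minimal sketch, assuming a MySQL database named glance:

# mysql -u root -p <<EOF
ALTER DATABASE glance CHARACTER SET utf8;
EOF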

The Configuration Files

Each component's configuration files had to be examined to look for deprecations or changes from Havana to Juno. We started off simply examining the default configuration files for Juno 2014.2.2 and looking for 'deprecated'. A simple Python dictionary was created to contain the renames and deprecations for Juno. We then examine each configuration file and, if necessary, move configuration names and values to the proper place. As an example, the Havana components typically set DEFAULT.sql_connection = <SQLAlchemy Connection String>. In Juno, those were all changed to database.connection = <SQLAlchemy Connection String>, so we had to make sure the upgraded configuration file for Juno brought the new variable along, including the rename.

The Safety Net

"Change configuration values automatically?!"
"Update database character sets?!!"
"Are you CRAZY?!  You can't DO that!"

Oh, but remember that you're running Solaris, where we have the best enterprise OS tools. Upgrading to Juno will create a new boot environment for operators. For anyone unfamiliar with boot environments, please examine the awesome magic here. What this means is that an upgrade to Juno is completely safe. The Havana deployment and all of the instances, databases and configurations will be saved in the current BE, while Juno will be installed into a brand new BE for you. The new BE activates on the next boot, where the upgrade process will happen automatically. If the upgrade goes sideways, the old BE is calmly sitting there ready to be called back into action.
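
A minimal sketch of that rollback, using a hypothetical BE name (beadm(1M) is the boot environment tool):

# beadm list
# beadm activate havana-BE    (the BE holding the old Havana deployment)
# init 6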

Hopefully this takes the hand-wringing out of upgrading OpenStack for operators running Solaris.  OpenStack is complicated enough as it is without also incurring additional headaches around upgrading from one release to the next.   

What's New in Solaris OpenStack Juno Neutron

The current update of Oracle OpenStack for Oracle Solaris updates existing features to the OpenStack Juno release and adds the following new features:

  1. Complete IPv6 Support for Tenant Networks
  2. Support for Source NAT
  3. Support for Metadata Services
  4. Support for Flat (untagged) Layer-2 Network Type
  5. Support for New Neutron Subcommands

1. Complete IPv6 Support for Tenant Networks

Finally, Juno provides feature parity between IPv4 and IPv6. The Juno release allows tenants to create networks and associate IPv6 subnets with these networks such that the VM instances that connect to these networks can get their IPv6 addresses in either of the following ways:

- Stateful address configuration
- Stateless address configuration

Stateful or DHCPv6 address configuration is facilitated through the dnsmasq(8) daemon.

Stateless address configuration is facilitated in either of the following ways:

- Through the provider (physical) router in the data center networks.
- Through the Neutron router and the Solaris IPv6 neighbor discovery daemon (in.ndpd(1M)). The Neutron L3 agent sets the AdvSendAdvertisements parameter to true in ndpd.conf(4) for an interface that hosts the IPv6 subnet of the tenant and refreshes the SMF service (svc:/network/routing/ndp:default) associated with the daemon.

This IPv6 support adds the following two new attributes to the Neutron Subnet: ipv6_address_mode and ipv6_ra_mode. Possible values for these attributes are: slaac, dhcpv6-stateful, and dhcpv6-stateless. The two Neutron agents - the Neutron DHCP agent and the Neutron L3 agent - work together to provide IPv6 support.

For most cases, these new attributes are set to the same value. For one use case, only ipv6_address_mode is set. The following table provides more information:

2. Support for Source NAT

The floating IPs feature in OpenStack Neutron provides external connectivity to VMs by performing a one-to-one NAT of the internal IP address of a VM to an external floating IP address.

The SNAT feature provides external connectivity to all of the VMs through the gateway public IP. The gateway public IP is the IP address of the gateway port that gets created when you execute the following command:

# neutron router-gateway-set router_uuid external_network_uuid

This external connectivity setup is similar to a wireless network setup at home, where you have a single public IP from the ISP configured on the router and all your personal devices are behind this IP on an internal network. These internal devices can reach out to anywhere on the internet through SNAT; however, external entities cannot reach these internal devices.

Example:

- Create a Public network

# neutron net-create --router:external=True --provider:network_type=flat public_net
Created a new network:
|-----------------------+--------------------------------------|
| Field                 | Value                                |
|-----------------------+--------------------------------------|
| admin_state_up        | True                                 |
| network_id            | 3c9c4bdf-2d6d-40a2-883b-a86076def1fb |
| name                  | public_net                           |
| provider:network_type | flat                                 |
| router:external       | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa     |
|-----------------------+--------------------------------------|

# neutron subnet-create --name public_subnet --disable-dhcp \
--allocation-pool start=10.134.67.241,end=10.134.67.241 \
--allocation-pool start=10.134.67.243,end=10.134.67.245 \
public_net 10.134.67.0/24
Created a new subnet:
|-------------------+----------------------------------------------------|
| Field             | Value                                              |
|-------------------+----------------------------------------------------|
| allocation_pools  | {"start": "10.134.67.241", "end": "10.134.67.241"} |
|                   | {"start": "10.134.67.243", "end": "10.134.67.245"} |
| cidr              | 10.134.67.0/24                                     |
| dns_nameservers   |                                                    |
| enable_dhcp       | False                                              |
| gateway_ip        | 10.134.67.1                                        |
| host_routes       |                                                    |
| subnet_id         | 6063613c-1008-4826-ae17-ce6a58511b2f               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | public_subnet                                      |
| network_id        | 3c9c4bdf-2d6d-40a2-883b-a86076def1fb               |
| tenant_id         | dab8af7f10504d3db582ce54a0ce6baa                   |
|-------------------+----------------------------------------------------|

- Create a private network

# neutron net-create private_net
# neutron subnet-create --name private_subnet private_net 192.168.109.0/24

- Create a router

# neutron router-create provider_router
Created a new router:
|-----------------------+--------------------------------------|
| Field                 | Value                                |
|-----------------------+--------------------------------------|
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| router_id             | b48fd525-2519-4501-99d9-9c2d51a543f1 |
| name                  | provider_router                      |
| status                | ACTIVE                               |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa     |
|-----------------------+--------------------------------------|

Note: Update the /etc/neutron/l3_agent.ini file with the following entry and restart the neutron-l3-agent SMF service (svcadm restart neutron-l3-agent):

router_id = b48fd525-2519-4501-99d9-9c2d51a543f1

- Add external network to router

# neutron router-gateway-set provider_router public_net
Set gateway for router provider_router
# neutron router-show provider_router
|-----------------------|------------------------------------------------------------------------------|
| Field                 | Value                                                                        |
|-----------------------|------------------------------------------------------------------------------|
| admin_state_up        | True                                                                         |
| external_gateway_info | {"network_id": "3c9c4bdf-2d6d-40a2-883b-a86076def1fb",                       |
|                       | "enable_snat": true,                                                         |
|                       | "external_fixed_ips": [{"subnet_id": "6063613c-1008-4826-ae17-ce6a58511b2f", |
|                       | "ip_address": "10.134.67.241"}]}                                             |
| router_id             | b48fd525-2519-4501-99d9-9c2d51a543f1                                         |
| name                  | provider_router                                                              |
| status                | ACTIVE                                                                       |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa                                             |
|-----------------------|------------------------------------------------------------------------------|


Note: By default, SNAT is enabled on the gateway interface of the Neutron router. To disable this feature, specify the --disable-snat option to the neutron router-gateway-set subcommand.

- Add internal network to router

# neutron router-interface-add provider_router private_subnet
Added interface 9bcfd21a-c751-40bb-99b0-d9274523e151 to router provider_router.

# neutron router-port-list provider_router
|--------------------------------------+-------------------+-----------------------------------------|
| id                                   | mac_address       | fixed_ips                               |
|--------------------------------------+-------------------+-----------------------------------------|
| 4b2f5e3d-0608-4627-b93d-f48afa86c347 | fa:16:3e:84:30:e4 | {"subnet_id":                           |
|                                      |                   | "6063613c-1008-4826-ae17-ce6a58511b2f", |
|                                      |                   | "ip_address": "10.134.67.241"}          |
|                                      |                   |                                         |
| 9bcfd21a-c751-40bb-99b0-d9274523e151 | fa:16:3e:df:c1:0f | {"subnet_id":                           |
|                                      |                   | "c7f99141-25f0-47af-8efb-f5639bcf6181", |
|                                      |                   | "ip_address": "192.168.109.1"}          |
|--------------------------------------+-------------------+-----------------------------------------|
Now all of the VMs that are on the internal network can reach the outside world through SNAT via 10.134.67.241.


3. Support for Metadata Services

A metadata service provides an OpenStack VM instance with information such as the following:

- The public IP/hostname
- A random seed
- The metadata that the tenant provided at install time
- and much more

Metadata requests are made by the VM instance to the well-known address 169.254.169.254, port 80. All such requests arrive at the Neutron L3 agent, which forwards them to a Neutron proxy server running at port 9697; this proxy server was spawned by the Neutron L3 agent. The proxy server forwards the requests to the Neutron metadata agent through a UNIX socket. The Neutron metadata agent interacts with the Neutron server service to determine the UUID of the instance that is making the requests. After the Neutron metadata agent gets the instance UUID, it makes a call into the Nova API metadata server to fetch the information for the VM instance. The fetched information is then passed back to the instance that made the request.
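
From inside a running VM instance you can exercise the service directly; a sketch using the OpenStack-format endpoint, which returns a JSON document describing the instance:

root@host-192-168-101-3:~# curl http://169.254.169.254/openstack/latest/meta_data.json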

4. Support for Flat (untagged) Layer-2 Network Type

A flat OpenStack network is used to place all the VM instances on the same segment without VLAN or VXLAN. This means that the VM instances will share the same network.

In the flat l2-type there is no VLAN tagging or other network segregation taking place; i.e., all the VNICs (and thus VM instances) that connect to a flat l2-type network are created with the VLAN ID set to 0. It follows that the flat l2-type cannot be used to achieve multi-tenancy. Instead, it will be used by data center admins to map directly to the existing physical networks in the data center.

One use of the flat network type is in the configuration of floating IPs. If the available floating IPs are a subset of the existing physical network's IP subnet, then you would need to create a flat network with its subnet set to the physical network's IP subnet and its allocation pool set to the available floating IPs. So, the flat network contains part of the existing physical network's IP subnet. See the examples in the previous section.

5. Support for New Neutron Subcommands

With this Solaris OpenStack Juno release, you can run multiple instances of neutron-dhcp-agent, each instance running on a separate network node. Using the dhcp-agent-network-add neutron subcommand, you can manually select which agent should serve a DHCP-enabled subnet. By default, the Neutron server automatically load balances the work among the various DHCP agents.

The following table shows the new subcommands that have been added as part of the Solaris OpenStack Juno release.
|---------------------------+-----------------------------------------|
| neutron subcommands       | Comments                                |
|---------------------------+-----------------------------------------|
| agent-delete              | Delete a given agent.                   |
| agent-list                | List agents.                            |
| agent-show                | Show information for a specified agent. |
| agent-update              | Update the admin status and             |
|                           | description for a specified agent.      |
| dhcp-agent-list-          | List DHCP agents hosting a network.     |
| hosting-net               |                                         |
| net-list-on-dhcp-agent    | List the networks on a DHCP agent.      |
| dhcp-agent-network-add    | Add a network to a DHCP agent.          |
| dhcp-agent-network-remove | Remove a network from a DHCP agent.     |
|---------------------------+-----------------------------------------|


Configuring the Neutron L3 Agent in Solaris OpenStack Juno

The Oracle Solaris implementation of OpenStack Neutron supports the following deployment model: provider router with private networks deployment. You can find more information about this model here. In this deployment model, each tenant can have one or more private networks and all the tenant networks share the same router. This router is created, owned, and managed by the data center administrator. The router itself will not be visible in the tenant's network topology view as it is owned by the service tenant. Furthermore, as there is only a single router, tenant networks cannot use overlapping IPs. Thus, it is likely that the administrator would create the private networks on behalf of tenants.

By default, this router prevents routing between private networks that are part of the same tenant. That is, VMs within one private network cannot communicate with the VMs in another private network, even though they are all part of the same tenant. This behavior can be changed by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini configuration file and restarting the neutron-l3-agent SMF service.
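
A minimal sketch of that change on the network node (assuming the option sits in the agent's main configuration section):

# grep allow_forwarding /etc/neutron/l3_agent.ini
allow_forwarding_between_networks = True
# svcadm restart neutron-l3-agent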

This router provides connectivity to the outside world for the tenant VMs. It does this by performing Bidirectional NAT and/or Source NAT on the interface that connects the router to the external network. Tenants create as many floating IPs (public IPs) as they need or as are allowed by the floating IP quota and then associate these floating IPs with the VMs that need outside connectivity.

The following figure captures the supported deployment model.

deployment_model.png

Figure 1 Provider router with private networks deployment

Tenant A has:

  • Two internal networks:
    HR (subnet: 192.168.100.0/24, gateway: 192.168.100.1)
    ENG (subnet: 192.168.101.0/24, gateway: 192.168.101.1)
  • Two VMs
    VM1 connected to HR with a fixed IP address of 192.168.100.3
    VM2 connected to ENG with a fixed IP address of 192.168.101.3

Tenant B has:

  • Two internal networks:
    IT (subnet: 192.168.102.0/24, gateway: 192.168.102.1)
    ACCT (subnet: 192.168.103.0/24, gateway: 192.168.103.1)
  • Two VMs
    VM3 connected to IT with a fixed IP address of 192.168.102.3
    VM4 connected to ACCT with a fixed IP address of 192.168.103.3

All the gateway interfaces are instantiated on the node that is running neutron-l3-agent.

The external network is a provider network that is associated with the subnet 10.134.13.0/24 that is reachable from outside. Tenants will create floating IPs from this network and associate them to their VMs. VM1 and VM2 have floating IPs 10.134.13.40 and 10.134.13.9 associated with them respectively. VM1 and VM2 are reachable from the outside world through these IP addresses.

Configuring neutron-l3-agent on a Network Node

Note: This post assumes that all Compute Nodes and Network Nodes in the network have been identified and the configuration files for all the OpenStack services have been appropriately configured so that these services can communicate with each other.

The service tenant is a tenant that has all of the OpenStack services' users, namely nova, neutron, glance, cinder, swift, keystone, heat, and horizon. Services communicate with each other using these users, who all have the admin role. The steps below show how to use the service tenant to create a router, an external network, and an external subnet that will be used by all of the tenants in the data center. Please refer to the following table and diagram while walking through the steps.

Note: Alternatively, you could create a separate tenant (DataCenter) and a new user (datacenter) with the admin role, and the DataCenter tenant could host all of the aforementioned shared resources.

ip_address_planning.png

Table 1 Public IP address mapping

network_topology

Figure 2 Neutron L3 agent configuration

Steps required to set up the Neutron L3 agent as a data center administrator:

Note: We will need to use the OpenStack CLI to configure the shared single router and associate networks/subnets from different tenants with the router, because from the OpenStack dashboard you can only manage one tenant's resources at a time.

1. Enable Solaris IP filter functionality.

   l3-agent# svcadm enable ipfilter
   l3-agent# svcs ipfilter
   STATE  STIME    FMRI
   online 10:29:04 svc:/network/ipfilter:default

2. Enable IP forwarding on the entire host.

   l3-agent# ipadm show-prop -p forwarding ipv4
   PROTO PROPERTY    PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
   ipv4  forwarding  rw   on           on           off          on,off 
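
   The output above shows forwarding already enabled; if it were off, it can
   be enabled persistently with set-prop:

   l3-agent# ipadm set-prop -p forwarding=on ipv4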

3. Ensure that the Solaris Elastic Virtual Switch feature is configured correctly and has the VLAN ID required for the external network. So, if the external network/subnet uses a VLAN ID of 15, then do the following:

   l3-agent# evsadm show-controlprop -p vlan-range,l2-type
   PROPERTY            PERM VALUE               DEFAULT             HOST
   l2-type             rw   vlan                vlan                --
   vlan-range          rw   200-300             --                  --

   l3-agent# evsadm set-controlprop -p vlan-range=15,200-300

In our case, the external network/subnet shares the same subnet as the compute node, so we can use the flat layer-2 network type and not use VLANs at all. We need to configure which datalink on the network node will be used for flat networking. In our case, it is going to be net0 (net1 is used for connecting to the internal network):

   l3-agent# evsadm set-controlprop -p uplink-port=net0,flat=yes

Note: For more information on EVS, please refer to Chapter 5, "About Elastic Virtual Switches" and Chapter 6, "Administering Elastic Virtual Switches" in Managing Network Virtualization and Network Resources in Oracle Solaris 11.2 (http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris EVS forms the backend for OpenStack networking, and it facilitates inter-VM communication (on the same compute node or across compute nodes) using VLANs, VXLANs, or flat networks.

4. Ensure that the service tenant is already there.

   l3-agent# keystone --os-endpoint=http://localhost:35357/v2.0 \
   --os-token=ADMIN tenant-list
   +----------------------------------+---------+---------+
   |                id                |   name  | enabled |
   +----------------------------------+---------+---------+
   | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
   | f164220cb02465db929ce520869895fa | service |   True  |
   +----------------------------------+---------+---------+

5. Create the provider router. Note the UUID of the new router.

   l3-agent# export OS_USERNAME=neutron
   l3-agent# export OS_PASSWORD=neutron
   l3-agent# export OS_TENANT_NAME=service
   l3-agent# export OS_AUTH_URL=http://localhost:5000/v2.0
   l3-agent# neutron router-create provider_router
   Created a new router:
   +-----------------------+--------------------------------------+
   | Field                 | Value                                |
   +-----------------------+--------------------------------------+
   | admin_state_up        | True                                 |
   | external_gateway_info |                                      |
   | id                    | 181543df-40d1-4514-ea77-fddd78c389ff |
   | name                  | provider_router                      |
   | status                | ACTIVE                               |
   | tenant_id             | f164220cb02465db929ce520869895fa     |
   +-----------------------+--------------------------------------+

6. Use the router UUID from step 5 and update the /etc/neutron/l3_agent.ini file with the following entry:

router_id = 181543df-40d1-4514-ea77-fddd78c389ff

7. Enable the neutron-l3-agent service.

   l3-agent# svcadm enable neutron-l3-agent
   l3-agent# svcs neutron-l3-agent
   STATE STIME FMRI
   online 11:24:08 svc:/application/openstack/neutron/neutron-l3-agent:default

8. Create an external network.

   l3-agent# neutron net-create --provider:network_type=flat \
   --router:external=true  external_network
   Created a new network:
   +--------------------------+--------------------------------------+
   | Field                    | Value                                |
   +--------------------------+--------------------------------------+
   | admin_state_up           | True                                 |
   | id                       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | name                     | external_network                     |
   | provider:network_type    | flat                                 |
   | router:external          | True                                 |
   | shared                   | False                                |
   | status                   | ACTIVE                               |
   | subnets                  |                                      |
   | tenant_id                | f164220cb02465db929ce520869895fa     |
   +--------------------------+--------------------------------------+

9. Associate a subnet with external_network.

   l3-agent# neutron subnet-create --disable-dhcp --name external_subnet \
   --allocation-pool start=10.134.13.8,end=10.134.13.254 external_network 10.134.13.0/24
   Created a new subnet:
   +------------------+--------------------------------------------------+
   | Field            | Value                                            |
   +------------------+--------------------------------------------------+
   | allocation_pools | {"start": "10.134.13.8", "end": "10.134.13.254"} |
   | cidr             | 10.134.13.0/24                                   |
   | dns_nameservers  |                                                  |
   | enable_dhcp      | False                                            |
   | gateway_ip       | 10.134.13.1                                      |
   | host_routes      |                                                  |
   | id               | 5d9c8958-0de0-11e4-9d96-e1f29f417e2f             |
   | ip_version       | 4                                                |
   | name             | external_subnet                                  |
   | network_id       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f             |
   | tenant_id        | f164220cb02465db929ce520869895fa                 |
   +------------------+--------------------------------------------------+

10. Add external_network to the router.

    l3-agent# neutron router-gateway-set -h
    usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
                                      [--disable-snat]
     router-id external-network-id

    l3-agent# neutron router-gateway-set \
    181543df-40d1-4514-ea77-fddd78c389ff \  (provider_router UUID)
    f67f0d72-0ddf-11e4-9d95-e1f29f417e2f    (external_network UUID)
    Set gateway for router 181543df-40d1-4514-ea77-fddd78c389ff

    l3-agent# neutron router-list -c name -c external_gateway_info
+-----------------+--------------------------------------------------------+
| name            | external_gateway_info                                  |
+-----------------+--------------------------------------------------------+
| provider_router | {"network_id": "f67f0d72-0ddf-11e4-9d95-e1f29f417e2f", |
|                 | "enable_snat": true,                                   |
|                 | "external_fixed_ips":                                  |
|                 | [{"subnet_id": "5d9c8958-0de0-11e4-9d96-e1f29f417e2f", |
|                 | "ip_address": "10.134.13.8"}]}                         |
+-----------------+--------------------------------------------------------+
Note: By default, SNAT is enabled on the gateway interface of the Neutron router. To disable this feature, specify the --disable-snat option to the neutron router-gateway-set subcommand.

11. Add the tenant's private networks to the router. The networks shown by neutron net-list were previously configured.

    l3-agent# keystone tenant-list
    +----------------------------------+---------+---------+
    |                id                |   name  | enabled |
    +----------------------------------+---------+---------+
    | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
    | f164220cb02465db929ce520869895fa | service |   True  |
    +----------------------------------+---------+---------+

    l3-agent# neutron net-list --tenant-id=511d4cb9ef6c40beadc3a664c20dc354
    +-------------------------------+------+------------------------------+
    | id                            | name | subnets                      |
    +-------------------------------+------+------------------------------+
    | c0c15e0a-0def-11e4-9d9f-      | HR   | c0c53066-0def-11e4-9da0-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.100.0/24|   
    | ce64b430-0def-11e4-9da2-      | ENG  | ce693ac8-0def-11e4-9da3-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.101.0/24|
    +-------------------------------+------+------------------------------+


    l3-agent# neutron router-interface-add  \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    c0c53066-0def-11e4-9da0-e1f29f417e2f   (HR subnet UUID)
    Added interface 7843841e-0e08-11e4-9da5-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

    l3-agent# neutron router-interface-add \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    ce693ac8-0def-11e4-9da3-e1f29f417e2f   (ENG subnet UUID)
    Added interface 89289b8e-0e08-11e4-9da6-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

12. The following figure shows how the network topology looks when you log in as a service tenant user.

provider_router.png

Steps required to create and associate floating IPs as a tenant user

1. Log into the OpenStack Dashboard using the tenant user's credentials.

2. Select Project -> Access & Security -> Floating IPs

3. With external_network selected, click the Allocate IP button

allocate_floating_ip.png

4. The Floating IPs tab shows that the floating IP 10.134.13.9 has been allocated.

allocated_floating_ip.png

5. Click the Associate button and select the VM's port from the pull-down menu.

associate_fip.png

6. The Project -> Instances window shows that the floating IP is associated with the VM.

instances.png

If you had selected a keypair (SSH public key) while launching an instance, then that SSH key would have been added to root's authorized_keys file in the VM. With that done, you can ssh into the running VM.

       [gmoodalb@thunta:~] ssh root@10.134.13.9
       Last login: Fri Jul 18 00:37:39 2014 from 10.132.146.13
       Oracle Corporation SunOS 5.11 11.2 June 2014

       root@host-192-168-101-3:~# uname -a
       SunOS host-192-168-101-3 5.11 11.2 i86pc i386 i86pc
       root@host-192-168-101-3:~# zoneadm list -cv
       ID NAME              STATUS      PATH                 BRAND      IP    
        2 instance-00000001 running     /                    solaris    excl 
       root@host-192-168-101-3:~# ipadm
       NAME             CLASS/TYPE STATE        UNDER      ADDR
       lo0              loopback   ok           --         --
         lo0/v4         static     ok           --         127.0.0.1/8
         lo0/v6         static     ok           --         ::1/128
       net0             ip         ok           --         --
         net0/dhcp      inherited  ok           --         192.168.101.3/24

Under the covers:

On the node where neutron-l3-agent is running, you can use the IP filter commands (ipf(1m), ippool(1m), and ipnat(1m)) and the networking commands (dladm(1m) and ipadm(1m)) to observe and troubleshoot the configuration done by neutron-l3-agent.

VNICs created by neutron-l3-agent:

l3-agent# dladm show-vnic
    LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE VIDS
    l3i7843841e_0_0     net1         1000   2:8:20:42:ed:22   fixed       200
    l3i89289b8e_0_0     net1         1000   2:8:20:7d:87:12   fixed       201
    l3ed527f842_0_0     net0         100    2:8:20:9:98:3e    fixed       0

IP addresses created by neutron-l3-agent:

    l3-agent# ipadm
    NAME                  CLASS/TYPE STATE   UNDER      ADDR
    l3ed527f842_0_0       ip         ok      --         --
      l3ed527f842_0_0/v4  static     ok      --         10.134.13.8/24
      l3ed527f842_0_0/v4a static     ok      --         10.134.13.9/32
    l3i7843841e_0_0       ip         ok      --         --
      l3i7843841e_0_0/v4  static     ok      --         192.168.100.1/24
    l3i89289b8e_0_0       ip         ok      --         --
      l3i89289b8e_0_0/v4  static     ok      --         192.168.101.1/24

IP Filter rules:

   l3-agent# ipfstat -io
   empty list for ipfilter(out)
   block in quick on l3i7843841e_0_0 from 192.168.100.0/24 to pool/4386082

   pass in on l3i7843841e_0_0 to l3ed527f842_0_0:10.134.13.1 from any to !192.168.100.0/24
   block in quick on l3i89289b8e_0_0 from 192.168.101.0/24 to pool/8226578
   pass in on l3i89289b8e_0_0 to l3ed527f842_0_0:10.134.13.1 from any to !192.168.101.0/24
   l3-agent# ippool -l
   table role = ipf type = tree number = 8226578
{ 192.168.100.0/24; };
   table role = ipf type = tree number = 4386082
{ 192.168.101.0/24; };

IP NAT rules:

   l3-agent# ipnat -l
   List of active MAP/Redirect filters:
   rdr l3i89289b8e_0_0 169.254.169.254/32 port 80 -> 192.168.101.1 port 9697 tcp
   map l3i89289b8e_0_0 192.168.101.0/24 -> 10.134.13.8/32
   rdr l3i7843841e_0_0 169.254.169.254/32 port 80 -> 192.168.100.1 port 9697 tcp
   map l3i7843841e_0_0 192.168.100.0/24 -> 10.134.13.8/32
   bimap l3ed527f842_0_0 192.168.101.3/32 -> 10.134.13.9/32
   List of active sessions:
   BIMAP 192.168.101.3  22  <- -> 10.134.13.9  22 [10.132.146.13 36405]

OpenStack Juno in Solaris 11.3 Beta

It's been less than a year since we announced availability of the Havana version of the OpenStack cloud infrastructure software as part of Solaris 11.2, and we've since continued to see what can only be described as a startling amount of momentum build in the OpenStack community. It's an incredibly exciting space for us, and for Oracle as a whole, as we watch the benefits of cloud based infrastructure and service management transform the way in which our customers run their Enterprises.

Fully automated self-service provisioning and orchestration of compute, network, and storage is a beautiful thing...empowering developers to self-provision in minutes the infrastructure needed to build, test or deploy applications without having to waste time trying to file tickets, procure systems, or wait on others. Administrators are able to view and manage what would otherwise be a sprawl of compute, networking, and storage as an actual system. Rather than wasting time repeatedly servicing individual requests, they can instead focus their attention on managing the cloud's resources as a pool, and ensuring smooth operation of services provided by the cloud.

We've watched this transformation happen internally in Solaris Engineering as we've shifted from ad-hoc management of the test and development systems used, to managing that infrastructure as an OpenStack cloud. Utilization efficiency of our infrastructure has dramatically improved as Engineers who formerly "camped" on systems to ensure those environments would be available when needed no longer need to, since they can easily save and later re-deploy images of their development environment in minutes. Wasted time formerly spent hunting through lists of systems trying to find one that's free, working, and sufficient, is now spent getting actual work done, or better yet, drinking coffee!

If you've been thinking you would like to get started learning about OpenStack, perhaps by experimenting and building yourself a small private cloud, there's really never been a better time. Especially since today we're very excited to announce that OpenStack Juno is now available to you as part of Oracle Solaris 11.3 Beta. You can start small, and in about 10 minutes install a Solaris Unified Archive that essentially is a fully configured OpenStack Cloud-In-A-Box. Deploy the OpenStack Unified Archive to a system, perform a few configuration steps (specific to your environment, e.g. SSH keys and such), and voila you have a functional OpenStack cloud that you can start learning how to operate.

If you are more experienced with OpenStack and are looking to build a cloud system for your Enterprise that is powered by best-of-breed Solaris technologies, such as Solaris Zones, the ZFS file system, and Solaris SDN...and that leverages SPARC systems, x86 systems, or both, you'll appreciate how well we've integrated the world's most popular open source cloud infrastructure software with the Solaris technologies you've come to know and trust.

Within Solaris 11.3 Beta, we've integrated the Juno versions of the core OpenStack Cloud Infrastructure services: Nova, Neutron, Cinder, Swift, Keystone, Glance, Heat, and Horizon, along with the drivers enabling OpenStack to drive Solaris virtualization, and ZFS backed shared storage over iSCSI or FC (both from Solaris natively or via the ZFS Storage Appliance). Within OpenStack Horizon, you'll find an integrated Zones Console interface, and you can upgrade your 11.2 Havana based OpenStack cloud via IPS to Juno based Solaris 11.3 Beta.

Post 11.3 Beta, we'll be very excited to introduce bare metal provisioning support for SPARC and x86 systems through OpenStack Ironic. In addition to being able to offer virtualized environments of varying sizes/configs (e.g. flavors) to cloud tenants, Ironic enables bare metal flavors to also be provided. We'll probably also have a few more exciting features to talk about as well. :) But in the meanwhile, we hope you enjoy OpenStack Juno on Solaris 11.3 Beta, and do let us know if you have any questions and/or run into any issues as we would be more than happy to help!

Wednesday May 20, 2015

LIVE WEBINAR (May 28): How to Get Started with Oracle OpenStack for Oracle Linux

Webinar title: Using Oracle VM VirtualBox to Get Started with Oracle OpenStack for Oracle Linux

Date: Thursday, May 28, 2015

Time: 10:00 AM PDT

Speakers: 
Dilip Modi, Principal Product Manager, Oracle OpenStack
Simon Coter, Principal Product Manager, Oracle VM and VirtualBox

You are invited to our webinar about how to get started with Oracle OpenStack for Oracle Linux. Built for enterprise applications and simplified for IT, Oracle OpenStack for Oracle Linux is an integrated solution that focuses on simplifying the building of a cloud foundation for enterprise applications and databases. In this webcast, Oracle experts will discuss how to use Oracle VM VirtualBox to create an Oracle OpenStack for Oracle Linux test environment and get you started learning about Oracle OpenStack for Oracle Linux.

Tuesday May 19, 2015

Oracle Solaris gets OpenStack Juno Release

We've just recently pushed an update to Oracle OpenStack for Oracle Solaris. Supported customers who have access to the Support Repository Updates (SRU) can upgrade their OpenStack environments to the Juno release with the availability of SRU 11.2.10.5.0.

The Juno release includes a number of new features, and in general offers a more polished cloud experience for users and administrators. We've written a document that covers the upgrade from Havana to Juno. The upgrade process involves some manual administration to copy and merge OpenStack configuration across the two releases, and to upgrade the database schemas that the various services use. We're working hard to provide a more seamless upgrade experience, so stay tuned!

-- Glynn Foster

Join us at the Oracle OpenStack booth!

We've reached the second day of the OpenStack Summit in Vancouver and our booth is now officially open. Come by and see us to talk about some of the work that we've been doing at Oracle - whether it's integrating a complete distribution of OpenStack into Oracle Linux and Oracle Solaris, Cinder and Swift storage on the Oracle ZFS Storage Appliance, integration between Swift and our Oracle HSM tape storage product, or how to quickly provision Oracle Database 12c in an OpenStack environment. We've got a lot of demos and experts there to answer your questions.

The Oracle sponsor session is on today also. Markus Flierl will be talking about "Making OpenStack Secure and Compliant for the Enterprise" at 2:50-3:30pm Tuesday in Room 116/117. Markus will talk about the challenges of deploying an OpenStack cloud while still meeting critical security and compliance requirements, and how Oracle can help you do this.

And in case anyone asks, yes, we're hiring!

How to set up a HA OpenStack environment with Oracle Solaris Cluster

The Oracle Solaris Cluster team have just released a new technical whitepaper that covers how administrators can use Oracle Solaris Cluster to set up a HA OpenStack environment on Oracle Solaris.

Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster

In a typical multi-node OpenStack environment, it's important that administrators can set up infrastructure that is resilient to service or hardware failure. Oracle Solaris Cluster is developed in lock step with Oracle Solaris to provide additional HA capabilities and is deeply integrated into the platform. Service availability is maximized with fully orchestrated disaster recovery for enterprise applications in both physical and virtual environments. Leveraging these core values, we've written some best practices for integrating clustering into an OpenStack environment, with a guide that initially covers a two-node cloud controller architecture. Administrators can then use this as a basis for a more complex architecture spanning multiple physical nodes.

-- Glynn Foster

About

Oracle OpenStack is cloud management software that provides customers an enterprise-grade solution to deploy and manage their entire IT environment. Customers can rapidly deploy Oracle and third-party applications across shared compute, network, and storage resources with ease, with end-to-end enterprise-class support. For more information, see here.
