Tuesday Oct 06, 2015

Oracle OpenStack at Oracle OpenWorld 2015

Oracle OpenWorld 2015 is nearly here. We've got a great lineup of OpenStack-related sessions and hands-on labs. If you're coming to the event and want to get up to speed on the benefits of OpenStack and the work that Oracle is doing across its product line to integrate with this cloud platform, make sure to check out the sessions below:

General Sessions

  • General Session: Oracle Linux-the State of the Penguin [GEN9479]
    Wim Coekaerts, Senior Vice President, Oracle
    Chris Kawalek, Sr. Principal Product Director, Oracle
    Tuesday, Oct 27, 11:00 a.m. | Park Central-Metropolitan II
  • General Session: Security, Speed, Simplicity - Hybrid Cloud Present & Future with Oracle Solaris [GEN8606]
    Markus Flierl, Vice President, Oracle
    Chris Riggin, Chief IT Architect, Verizon
    Tuesday, Oct 27, 11:00 a.m. | Intercontinental-Intercontinental C (5th Floor)

Conference Sessions

  • OpenStack and MySQL [CON2625]
    Matthew Lord, MySQL Product Manager, Oracle
    Thursday, Oct 29, 1:15 p.m. | Moscone South-250
  • Secure Private Cloud Done Right with Oracle and OpenStack [CON8313]
    Glynn Foster, Oracle Solaris Product Manager, Oracle
    Jeffrey Kiely, Principal Product Manager, Oracle
    Monday, Oct 26, 5:15 p.m. | Intercontinental-Intercontinental C (5th Floor)
  • Developer Cloud Made Simple: How to Build an OpenStack Developer Cloud [CON8337]
    Deepankar Bairagi, Principal Software Engineer, Oracle
    Liang Chen, Architect, Oracle Developer Studio, Oracle
    Nasser Nouri, Principal Software Engineer, Oracle
    Thursday, Oct 29, 9:30 a.m. | Intercontinental-Intercontinental B (5th Floor)
  • Maximize Your Private Cloud Investment with Oracle OpenStack for Oracle Linux [CON9574]
    Chris Kawalek, Sr. Principal Product Director, Oracle
    Dilip Modi, Principal Product Manager, Oracle OpenStack, Oracle
    Wednesday, Oct 28, 4:15 p.m. | Park Central-Metropolitan II
  • The DBaaS You’ve Been Waiting for-Oracle Database, Oracle Solaris, SPARC, and OpenStack [CON8354]
    Mehmet Kurtoglu, TT Group - BroadBand Database Operations Manager, TTNET
    Onder Ozbek, Product Manager, Oracle
    Eric Saxe, Director of Engineering, Oracle
    Thursday, Oct 29, 1:15 p.m. | Intercontinental-Intercontinental B (5th Floor)
  • Rapid Private Cloud with Oracle VM and Oracle OpenStack for Oracle Linux [CON9576]
    Michael Glasgow, Technical Product Manager, Oracle
    John Priest, Director, Oracle VM Product Management, Oracle
    Wednesday, Oct 28, 1:45 p.m. | Park Central-Metropolitan II
  • The Cutting Edge of Technology: Deploying a Secure Cloud with OpenStack [CON3225]
    Detlef Drewanz, Master Principal Sales Consultant, Oracle
    Eric Saxe, Director of Engineering, Oracle
    Thursday, Oct 29, 2:30 p.m. | Intercontinental-Intercontinental B (5th Floor)
  • Oracle Enterprise Manager? OpenStack? VSphere? Have It Your Way! [CON8059]
    Shrikanth Krupanandan, Director, Database administration, Fidelity Management & Research Company
    Scott Meadows, Director, Oracle
    Nirant Puntambekar, Senior Manager, Software Development, Oracle
    Thursday, Oct 29, 1:15 p.m. | Intercontinental-Sutter (5th Floor)
  • DevOps Done Right: Secure Virtualization with Oracle Solaris [CON8468]
    Duncan Hardie, Principal Product Manager, Oracle
    Fritz Wittwer, Service Engineer, Swisscom Schweiz AG
    Wednesday, Oct 28, 12:15 p.m. | Intercontinental-Intercontinental B (5th Floor)
  • MySQL Backup and Recovery-Use Cases and Solutions [CON2494]
    Mike Frank, Product Management Director, Oracle
    Monday, Oct 26, 12:15 p.m. | Moscone South-262

Hands on Labs

  • How to Build a Hadoop Cluster Using OpenStack [HOL1598]
    Ekine Akuiyibo, Software Engineer, Oracle
    Thursday, Oct 29, 9:30 a.m. | Hotel Nikko-Monterey
  • Oracle OpenStack for Oracle Solaris-Fast, Secure, and Compliant App Deployment [HOL10358]
    Scott Dickson, Principal Sales Engineer, Oracle
    Glynn Foster, Oracle Solaris Product Manager, Oracle
    Wednesday, Oct 28, 10:15 a.m. | Hotel Nikko-Nikko Ballroom I
  • Deploying a Multinode OpenStack Setup with a Preconfigured Oracle WebLogic Cluster [HOL6653]
    Sai spoorthy Padigi, Software Developer 1, Oracle
    Sandeep Shanbhag, Software Engineer, Oracle
    Wednesday, Oct 28, 8:45 a.m. | Hotel Nikko-Monterey
  • Oracle OpenStack for Oracle Solaris-a Complete Cloud Environment in Minutes [HOL10357]
    Scott Dickson, Principal Sales Engineer, Oracle
    Tuesday, Oct 27, 4:00 p.m. | Hotel Nikko-Nikko Ballroom I
  • In 60 Minutes: Build a Storage Cloud That Is Sustainable, Low Cost, and Secure [HOL2925]
    Donna Harland, Principal Product Manager, Oracle
    Joseph Lampitt, Solution Specialist, Oracle
    Hanli Ren, Senior Software Engineer, Oracle
    Adam Zhang, Principal Software Engineer, Oracle
    Tuesday, Oct 27, 8:45 a.m. | Hotel Nikko-Monterey
  • Build Your Own Cloud Environment with Oracle Solaris 11 RAD and REST [HOL6663]
    Gary Wang, Manager, Oracle
    Yu Wang, Software Engineer, Oracle
    Xiao-song Zhu, Principal Software Engineer, Oracle
    Monday, Oct 26, 3:30 p.m. | Hotel Nikko-Nikko Ballroom I

You can see all the session abstracts, along with the rest of the Oracle OpenWorld 2015 content, in the content catalog. We look forward to you joining us for a great event!

Friday Oct 02, 2015

Friday Spotlight: Oracle Linux, Virtualization, and OpenStack Showcase at OOW15

Happy Friday everyone!

Today's topic is our amazing showcase at Oracle OpenWorld, Oct 25-29. The Oracle Linux, Oracle VM, and OpenStack showcase is located in Moscone South, booth #121, featuring Oracle product demos and Partners. In past years, our showcase has been a great place to see demos of Oracle Linux and Oracle VM as well as solutions from our Partners. This year, it is expanded with Oracle OpenStack product demos and a theatre. Here's a list of the Oracle and Partner kiosks; don't forget to visit and talk to one of the experts who can help you with your questions:

  • SLX-007 - Access Applications Securely with Oracle Secure Global Desktop
  • SLX-008 - Oracle VM VirtualBox
  • SLX-009 - Enhance Security and Reduce Costs Using Zero-Downtime Updates with Oracle Linux and Ksplice
  • SLX-010 - Oracle OpenStack for Oracle Linux - Enterprise Ready
  • SLX-011 - Oracle Linux for the Cloud-Enabled Data Center
  • SLX-012 - Develop and Distribute Containerized Applications with Oracle Linux
  • SLX-013 - Oracle VM Server for x86
  • SLX-014 - Oracle VM Server for SPARC

The table below lists the featured Partners and their solutions:

The Oracle Linux, Oracle VM, and OpenStack Showcase will also include an in-booth theatre where Partners and Oracle experts share their solutions with customers and partners alike. For the latest listing of confirmed theatre sessions, please refer to the Schedule Builder.

Don't forget to visit us at Moscone South #121. We will give away some great software (we're keeping it a surprise - you need to come and see), and you can enter the drawing for our famous penguins and an Intel Mini PC - a NUC appliance that can do it all, from set-top boxes and video surveillance to home entertainment systems and digital signage.

Register today.

Wednesday Sep 09, 2015

Managing Nova's image cache

If you've been running an OpenStack environment for a while, you'll notice that your image cache continues to grow as the images installed into VMs are transferred from Glance over to each of the Nova compute nodes. Dave Miner, who's been the lead on setting up an internal OpenStack cloud for our Oracle Solaris engineering organization, has covered some remediation steps on his blog:

Configuring and Managing OpenStack Nova's Image Cache

In essence, his solution is to provide a periodic SMF service that routinely cleans up images from the cache. Check it out!
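
To give a flavor of what such a cleanup amounts to, here is a hand-rolled sketch (not Dave's actual service; the cache path and age threshold are assumptions, so check your nova.conf for the real location):

    #!/bin/sh
    # Hypothetical sketch: remove cached Glance images not accessed in 30 days.
    # CACHE_DIR is an assumption; use the image cache path from your nova.conf.
    CACHE_DIR=/var/lib/nova/imagecache
    find "$CACHE_DIR" -type f -atime +30 -exec rm -f {} \;

Dave's post covers how to wrap logic like this in a periodic SMF service so it runs routinely without operator intervention.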

Tuesday Aug 11, 2015

Chat to us at OpenStack Silicon Valley 2015 Event!

Oracle is sponsoring the upcoming OpenStack Silicon Valley 2015 event in a couple of weeks' time. We're looking forward to participating in the discussions, and we will have a sponsored session with Markus Flierl, VP of Solaris engineering (not currently posted on the schedule).

We've made some pretty great progress in OpenStack over the past year across all of the software and hardware portfolios that I mentioned in my recent OpenStack Silicon Valley blog post. The IT industry is moving fast, and with the recent interest in containers, agile development, and microservices, we're excited to see the standardization coming out of recent efforts, including the Open Container Initiative and our announcement that Docker is coming to Oracle Solaris. We'd love to chat with you at our booth E4 about what we're doing with OpenStack, our Software in Silicon strategy at Oracle, and some of the trends we're seeing in our customer base. Come join us!

Monday Aug 10, 2015

Swift Object Storage with ZFS Storage Appliance

Jim Kremer has written a new blog post that shows you how to configure Swift to take advantage of an Oracle ZFS Storage Appliance. Jim walks step by step through configuring OpenStack Swift as a highly available cluster using an Oracle ZFS Storage Appliance as the backend storage over NFSv4.
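
For illustration, each Swift node in such a setup mounts the shared backing store over NFSv4 before the Swift services start. The appliance hostname and share path below are hypothetical; see Jim's post for the real configuration:

    # mkdir -p /srv/node/zfssa
    # mount -F nfs -o vers=4 zfssa.example.com:/export/swift /srv/node/zfssa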

Jim summarizes the unique benefits that using a ZFS Storage Appliance brings to OpenStack environments over a typical Swift deployment:

  • Swift data will be stored on a ZFS filesystem as a backing store instead of XFS.
  • Storage will be accessed via NFS v4. Solaris NFS supports extended attributes and locking so it works great with Swift.
  • Each Solaris Swift instance will run the account server, container server and object server as well as the proxy server instead of having separate proxy servers and storage servers.
  • All of the Solaris Swift instances can access and share the same backend storage systems.
  • All the Solaris Swift servers will use the exact same Swift ring configuration.
  • Disaster recovery is supported with the built in remote replication available on the ZFS Storage Appliance.
  • Only one copy of data needs to be stored since ZFS supports different levels of mirroring as well as raidz.
  • ZFS automatically caches hot data in SSDs or in DRAM, which speeds up reads of frequently accessed blocks. A good example of such a workload is booting many VMs in a cloud computing environment.

For more information, see Solaris Swift using ZFS Storage Appliance.

Tuesday Jul 28, 2015

Migrating Neutron Database from sqlite to MySQL for Oracle OpenStack for Oracle Solaris

Many OpenStack development environments use sqlite as a backend to store data. However, in a production environment MySQL is widely used, and Oracle also recommends using MySQL for its OpenStack services. For many of the OpenStack services (nova, cinder, neutron...), sqlite is the default backend, so Oracle OpenStack for Oracle Solaris users may want to migrate their backend database from sqlite to MySQL.

The general idea is to dump the sqlite database, translate the dumped SQL statements so that they are compatible with MySQL, stop the Neutron services, create the MySQL database, and replay the modified SQL statements in the MySQL database.

The details listed here are for the Juno release (integrated in Oracle Solaris 11.2 SRU 10.5 or newer), and Neutron is taken as the example use case.

Migrating neutron database from sqlite to MySQL

If not already installed, install MySQL

# pkg install --accept mysql-55 mysql-55/client python-mysql

Start the MySQL service
# svcadm enable -rs mysql

NOTE: If MySQL was already installed and running, then before running the next step, double-check that the Neutron database on MySQL either does not yet exist or is empty. The next step will drop the existing MySQL Neutron database if it exists and then create it. If the MySQL Neutron database is not empty, stop at this point. The following steps are limited to the case where the MySQL Neutron database is newly created or recreated.

Create Neutron database on MySQL

mysql -u root -p<<EOF
DROP DATABASE IF EXISTS neutron;
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'neutron';
EOF

Enter the root password when prompted

Identify the Neutron services that are online:

# svcs -a | grep neutron | grep online | awk '{print $3}' \
> /tmp/neutron-svc

Disable the Neutron services:

# for item in `cat /tmp/neutron-svc`; do svcadm disable $item; done

Make a backup of the Neutron sqlite database:

# cp /var/lib/neutron/neutron.sqlite /var/lib/neutron/neutron.sqlite.ORIG

Get the db dump of Neutron from sqlite:

# /usr/bin/sqlite3 /var/lib/neutron/neutron.sqlite .dump \
       > /tmp/neutron-sqlite.sql

The following steps create a neutron-mysql.sql file that is compatible with the MySQL database engine.

Suppress foreign key checks during create table/index:

# echo 'SET foreign_key_checks = 0;' > /tmp/neutron-sqlite-schema.sql

Dump the sqlite schema to a file:

# /usr/bin/sqlite3 /var/lib/neutron/neutron.sqlite .dump | \
grep -v 'INSERT INTO' >> /tmp/neutron-sqlite-schema.sql


Remove the BEGIN/COMMIT/PRAGMA lines from the file.
(Oracle Solaris sed does not support the -i option, hence redirecting to a new file
 and then renaming it to the original file.)

# sed '/BEGIN TRANSACTION;/d; /COMMIT;/d; /PRAGMA/d' \
/tmp/neutron-sqlite-schema.sql > /tmp/neutron-sqlite-schema.sql.new \
&& mv /tmp/neutron-sqlite-schema.sql.new /tmp/neutron-sqlite-schema.sql

Replace SQL identifiers that are enclosed in double quotes so that they are
enclosed in back quotes instead, e.g. "limit" to `limit`:

# for item in binary blob group key limit type; do \
sed "s/\"$item\"/\`$item\`/g" /tmp/neutron-sqlite-schema.sql \
> /tmp/neutron-sqlite-schema.sql.new && \
mv /tmp/neutron-sqlite-schema.sql.new /tmp/neutron-sqlite-schema.sql; done

Enable foreign key checks at the end of the file:

# echo 'SET foreign_key_checks = 1;' >> /tmp/neutron-sqlite-schema.sql

Dump the data alone (INSERT statements) into another file:

# /usr/bin/sqlite3 /var/lib/neutron/neutron.sqlite .dump \
| grep 'INSERT INTO' > /tmp/neutron-sqlite-data.sql

In INSERT statements, table names are in double quotes in sqlite, but in MySQL
there should be no quotes:

# sed 's/INSERT INTO \"\(.*\)\"/INSERT INTO \1/g' \
/tmp/neutron-sqlite-data.sql > /tmp/neutron-sqlite-data.sql.new \
 && mv /tmp/neutron-sqlite-data.sql.new /tmp/neutron-sqlite-data.sql

Concatenate the schema and data files into neutron-mysql.sql:

# cat /tmp/neutron-sqlite-schema.sql \
/tmp/neutron-sqlite-data.sql > /tmp/neutron-mysql.sql

Populate the Neutron database in MySQL:

# mysql neutron < /tmp/neutron-mysql.sql

Specify the connection under the [database] section of the /etc/neutron/neutron.conf file:

The connection string format is as follows:
connection = mysql://%SERVICE_USER%:%SERVICE_PASSWORD%@hostname/neutron 
For example:
connection = mysql://neutron:neutron@localhost/neutron
Enable the Neutron services:

# for item in `cat /tmp/neutron-svc`; do svcadm enable -rs $item; done

Clean up the backup and the intermediate files:

# rm -f /var/lib/neutron/neutron.sqlite.ORIG \
/tmp/neutron-sqlite-schema.sql \
/tmp/neutron-sqlite-data.sql \
/tmp/neutron-mysql.sql

Details about translating SQL statements to be compatible with MySQL

NOTE: /tmp/neutron-sqlite-schema.sql will contain the Neutron sqlite database schema as SQL statements, and /tmp/neutron-sqlite-data.sql will contain the data in the Neutron sqlite database, which can be replayed to recreate the database. The SQL statements in neutron-sqlite-schema.sql and neutron-sqlite-data.sql must be MySQL compatible so that they can be replayed on the MySQL Neutron database. The set of sed commands listed above is used to create MySQL-compatible SQL statements. The following text provides detailed information about the differences between sqlite and MySQL that must be dealt with.

There are some differences in the way sqlite and MySQL expect the SQL statements to be written, as shown in the table below:

| sqlite                                | MySQL                                 |
| Reserved words are in double quotes:  | Reserved words are in back quotes:    |
| e.g. "blob", "type", "key",           | e.g. `blob`, `type`, `key`,           |
| "group", "binary", "limit"            | `group`, `binary`, `limit`            |
| Table names in INSERT statements      | Table names in INSERT statements      |
| are in double quotes:                 | are without quotes:                   |
| INSERT INTO "alembic_version"         | INSERT INTO alembic_version           |

Apart from the above, the following requirements must be met before running neutron-mysql.sql on MySQL:

The lines containing PRAGMA, 'BEGIN TRANSACTION', and 'COMMIT' must be removed from the file.


The CREATE TABLE statements with FOREIGN KEY references must be rearranged (or ordered) in such a way that the table that is REFERENCED is created earlier than the table that is REFERRING to it. The indices on tables that are referenced by FOREIGN KEY statements are created soon after those tables are created. These last two requirements are not necessary if the FOREIGN KEY check is disabled. Hence foreign_key_checks is set to 0 at the beginning of neutron-mysql.sql and enabled again by setting foreign_key_checks to 1 before the INSERT statements in the neutron-mysql.sql file.
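
To illustrate why disabling the check matters, consider a minimal sketch (made-up table names, not the actual Neutron schema): with foreign_key_checks set to 0, MySQL accepts a child table whose FOREIGN KEY references a parent table that has not been created yet.

    mysql -u root -p neutron <<EOF
    SET foreign_key_checks = 0;
    CREATE TABLE ports (
        id INT PRIMARY KEY,
        net_id INT,
        FOREIGN KEY (net_id) REFERENCES networks(id));
    CREATE TABLE networks (id INT PRIMARY KEY);
    SET foreign_key_checks = 1;
    EOF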

New Oracle University course for Oracle OpenStack!

A new Oracle University course is now available: OpenStack Administration Using Oracle Solaris (Ed 1). This is a great way to get yourself up to speed on OpenStack, especially if you're thinking about getting a proof of concept, development and test, or even production environment online!

The course is based on OpenStack Juno in Oracle Solaris 11.2 SRU 10.5. Through a series of guided hands-on labs you will learn to:

  • Describe the OpenStack Framework.
  • Configure a Single-Node OpenStack Setup.
  • Configure a Multi-Node OpenStack Setup.
  • Administer OpenStack Resources Using the Horizon UI.
  • Manage Virtual Machine Instances.
  • Troubleshoot OpenStack.

The course is 3 days long, and we recommend that you have taken a previous Oracle Solaris 11 administration course. This is an excellent introduction to OpenStack that you won't want to miss!

Thursday Jul 23, 2015

OpenStack Summit Tokyo - Voting Begins!

It's voting time! The next OpenStack Summit will be held in Tokyo, October 27-30.

The Oracle OpenStack team has submitted a few papers for the summit that can now be voted for:

If you'd like to see these talks, please Vote Now!

Monday Jul 20, 2015

OpenStack and Hadoop

It's always interesting to see how technologies get tied together in the industry. Orgad Kimchi from the Oracle Solaris ISV engineering group has blogged about the combination of OpenStack and Hadoop. Hadoop is an open source project run by the Apache Foundation that provides distributed storage and compute for large data sets - in essence, the very heart of big data. In this technical How To, Orgad shows how to set up a multi-node Hadoop cluster using OpenStack by creating a pre-configured Unified Archive that can be uploaded to the Glance image repository for deployment across VMs created with Nova.
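
As a rough sketch of the upload step (the image name and archive path here are hypothetical; see the How To for the exact options used for Solaris Unified Archives):

    # glance image-create --name hadoop-node --container-format bare \
    --disk-format raw --file /path/to/hadoop-node.uar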

Check out: How to Build a Hadoop 2.6 Cluster Using Oracle OpenStack

Sunday Jul 19, 2015

Flat networks and fixed IPs with OpenStack Juno

Girish blogged previously on the work that we've been doing to support new features with the Solaris integrations into the Neutron OpenStack networking project. One of these features provides a flat network topology, allowing administrators to plumb VMs created through an OpenStack environment directly into an existing network infrastructure. This essentially gives administrators a choice between a more secure, dynamic network using either VLAN or VXLAN and a pool of floating IP addresses, or an untagged, static, 'flat' network with a set of allocated fixed IP addresses.

Scott Dickson has blogged about flat networks, along with the steps required to set up a flat network with OpenStack, using our driver integration into Neutron based on Elastic Virtual Switch. Check it out!

Sunday Jul 12, 2015

Upgrading the Solaris engineering OpenStack Cloud

Internally, we've set up an OpenStack cloud environment for the developers of Solaris as a self-service Infrastructure as a Service solution. We've been running a similar service for years called LRT, or Lab Reservation Tool, that allows developers to book time on systems in our lab. Dave Miner has blogged previously about the work to set up this OpenStack cloud, initially based on Havana:

While the OpenStack team was off building the tools to make an upgrade painless, Dave waited patiently (and filed bugs) until he could upgrade the cloud to Juno. With the tooling in place, he had the green light. Check out Dave's experiences in his latest post: Upgrading Solaris Engineering's OpenStack Cloud.

As a reminder, OpenStack Juno is now in Oracle Solaris 11.2 SRU 10.5 onwards and also in the Oracle Solaris 11.3 Beta release we pushed out last week with some great new OpenStack features that we've added to our drivers.

Thursday Jul 09, 2015

PRESENTATION: Oracle OpenStack for Oracle Linux at OpenStack Summit Session

In this blog post, we want to share a presentation given at the OpenStack Summit in Vancouver earlier in May. We have just set up our SlideShare.net account and published our first presentation there.

If you want to see more of these presentations, follow us at our Oracle OpenStack SlideShare space.

Tuesday Jul 07, 2015

Upgrading OpenStack from Havana to Juno

Upgrading from Havana to Juno - Under the Covers

Upgrading from one OpenStack release to the next is a daunting task. Experienced OpenStack operators usually only do so reluctantly. After all, it took days (for some, weeks) to get OpenStack to stand up correctly in the first place, and now they want to upgrade it? At the last OpenStack Summit in Vancouver, it wasn't uncommon to hear about companies with large clouds still running Havana. Moving forward to Icehouse, Juno, or Kilo was seen as an epic undertaking, with lots of downtime for users and lots of frustration for operators.

In Solaris engineering, not only are we dealing with upgrading from Havana, but we actually skipped Icehouse entirely. This means we had to move people from Havana directly to Juno, which isn't officially supported upstream. Upstream only supports moving from X to X+1, so we were mostly on our own for this. Luckily, the Juno code base for each component carried the database starting point from Havana along with SQLAlchemy-Migrate scripts through Icehouse to Juno. This ends up being a huge headache saver because 'component-manage db sync' will simply do the right thing and convert the schema automatically.

We created new SMF services for each component to handle the upgrade duties. Each service does comparable tasks: prepare the database for migration and update configuration files for Juno. The component-upgrade service is now a dependency for each other component-* service. This way, Juno versions of the OpenStack software won't run until after the upgrade service completes.
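
As a sketch of the mechanism (the FMRIs below are illustrative rather than the exact ones we ship), an SMF dependency along these lines makes a component's service wait for its upgrade service:

    # svccfg -s application/openstack/neutron/neutron-server <<EOF
    addpg upgrade dependency
    setprop upgrade/grouping = astring: require_all
    setprop upgrade/restart_on = astring: none
    setprop upgrade/type = astring: service
    setprop upgrade/entities = fmri: svc:/application/openstack/neutron/neutron-upgrade
    EOF
    # svcadm refresh neutron-server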

The Databases

For the most part, migration of the databases from Havana to Juno is straightforward. Since the components deliver the appropriate SQLAlchemy-Migrate scripts, we can simply enable the proper component-db SMF service and let the 'db sync' calls handle the database. We did hit a few snags along the way, however. Migration of SQLite-backed databases became increasingly error-prone as we worked on Juno. Upstream, there's a strong push to do away with SQLite support entirely. We decided that we would not support migration of SQLite databases explicitly. That is, if an operator chose to run one or more of the components with SQLite, we would try to upgrade the database automatically for them, but there were no guarantees. It's well documented, both in Oracle's documentation and the upstream documentation, that SQLite isn't robust enough to handle what OpenStack needs for throughput to the database.

The second major snag we hit was the forced change to 'charset = utf8' in Glance 2014.2.2 for MySQL. This required our upgrade SMF services to introspect into each component's configuration files, extract its SQLAlchemy connection string, and, if MySQL, convert all the databases to use utf8. With these checks done and any MySQL databases converted, our databases could migrate cleanly and be ready for Juno.
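
The conversion itself boils down to statements like these (a minimal sketch; the set of databases and tables varies by component):

    mysql -u root -p <<EOF
    ALTER DATABASE glance CHARACTER SET utf8;
    ALTER TABLE glance.images CONVERT TO CHARACTER SET utf8;
    EOF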

The Configuration Files

Each component's configuration files had to be examined to look for deprecations or changes from Havana to Juno. We started off simply examining the default configuration files for Juno 2014.2.2 and looking for 'deprecated'. A simple Python dictionary was created to contain the renames and deprecations for Juno. We then examine each configuration file and, if necessary, move configuration names and values to the proper place. As an example, the Havana components typically set DEFAULT.sql_connection = <SQLAlchemy Connection String>. In Juno, those were all changed to database.connection = <SQLAlchemy Connection String>, so we had to make sure the upgraded configuration file for Juno brought the new variable along, including the rename.
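
For a feel of what one such rename amounts to, here is a hand-rolled sketch (not our actual tooling; it assumes the option appears once under [DEFAULT] and that no [database] section exists yet):

    # Move sql_connection from [DEFAULT] to connection under [database]
    conf=/etc/neutron/neutron.conf
    val=`grep '^sql_connection' $conf | cut -d= -f2-`
    if [ -n "$val" ]; then
        sed '/^sql_connection/d' $conf > $conf.new && mv $conf.new $conf
        printf '\n[database]\nconnection =%s\n' "$val" >> $conf
    fi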

The Safety Net

"Change configuration values automatically?!"
"Update database character sets?!!"
"Are you CRAZY?!  You can't DO that!"

Oh, but remember that you're running Solaris, where we have the best enterprise OS tools. Upgrading to Juno will create a new boot environment for operators. For anyone unfamiliar with boot environments, please examine the awesome magic here. What this means is that an upgrade to Juno is completely safe. The Havana deployment and all of the instances, databases, and configurations will be saved in the current BE, while Juno will be installed into a brand new BE for you. The new BE activates on the next boot, where the upgrade process will happen automatically. If the upgrade goes sideways, the old BE is calmly sitting there, ready to be called back into action.
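
For example, falling back is just a matter of activating the old boot environment (the BE name here is illustrative):

    # beadm list
    # beadm activate solaris-havana
    # reboot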

Hopefully this takes the hand-wringing out of upgrading OpenStack for operators running Solaris. OpenStack is complicated enough as it is without also incurring additional headaches when upgrading from one release to the next.

What's New in Solaris OpenStack Juno Neutron

The current update of Oracle OpenStack for Oracle Solaris updates existing
features to the OpenStack Juno release and adds the following new features:

  1. Complete IPv6 Support for Tenant Networks
  2. Support for Source NAT
  3. Support for Metadata Services
  4. Support for Flat (untagged) Layer-2 Network Type
  5. Support for New Neutron Subcommands

1. Complete IPv6 Support for Tenant Networks

Finally, Juno provides feature parity between IPv4 and IPv6. The Juno release
allows tenants to create networks and associate IPv6 subnets with these networks
such that the VM instances that connect to these networks can get their IPv6
addresses in either of the following ways:

- Stateful address configuration
- Stateless address configuration

Stateful or DHCPv6 address configuration is facilitated through the dnsmasq(8) daemon.

Stateless address configuration is facilitated in either of the following ways:
- Through the provider (physical) router in the data center networks.
- Through the Neutron router and the Solaris IPv6 neighbor discovery daemon
(in.ndpd(1M)). The Neutron L3 agent sets the AdvSendAdvertisements parameter to
true in ndpd.conf(4) for an interface that hosts the IPv6 subnet of the tenant
and refreshes the SMF service (svc:/network/routing/ndp:default) associated with
the daemon, as sketched below.
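
The net effect is roughly equivalent to the following manual steps (the interface name is illustrative):

    l3-agent# grep AdvSendAdvertisements /etc/inet/ndpd.conf
    if l3i7843841e_0_0 AdvSendAdvertisements true
    l3-agent# svcadm refresh svc:/network/routing/ndp:default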

This IPv6 support adds the following two new attributes to the Neutron subnet:
ipv6_address_mode and ipv6_ra_mode. Possible values for these attributes are:
slaac, dhcpv6-stateful, and dhcpv6-stateless. The two Neutron agents - the
Neutron DHCP agent and the Neutron L3 agent - work together to provide IPv6
support.

In most cases, these new attributes are set to the same value. For one use
case, only ipv6_address_mode is set. The following table provides more
information.

2. Support for Source NAT

The floating IPs feature in OpenStack Neutron provides external connectivity to
VMs by performing a one-to-one NAT of the internal IP address of a VM to the
external floating IP address.

The SNAT feature provides external connectivity to all of the VMs through the
gateway public IP. The gateway public IP is the IP address of the gateway port
that gets created when you execute the following command: neutron
router-gateway-set router_uuid external_network_uuid

This external connectivity setup is similar to a wireless network setup at home,
where you have a single public IP from the ISP configured on the router and all
your personal devices are behind this IP on an internal network. These internal
devices can reach out to anywhere on the internet through SNAT; however,
external entities cannot reach these internal devices.


- Create a Public network

# neutron net-create --router:external=True --provider:network_type=flat public_net
Created a new network:
| Field                 | Value                                |
| admin_state_up        | True                                 |
| network_id            | 3c9c4bdf-2d6d-40a2-883b-a86076def1fb |
| name                  | public_net                           |
| provider:network_type | flat                                 |
| router:external       | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa     |

# neutron subnet-create --name public_subnet --disable-dhcp \
--allocation-pool start=,end= \
--allocation-pool start=,end= public_net
Created a new subnet:
| Field             | Value                                              |
| allocation_pools  | {"start": "", "end": ""} |
|                   | {"start": "", "end": ""} |
| cidr              |                                     |
| dns_nameservers   |                                                    |
| enable_dhcp       | False                                              |
| gateway_ip        |                                        |
| host_routes       |                                                    |
| subnet_id         | 6063613c-1008-4826-ae17-ce6a58511b2f               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | public_subnet                                      |
| network_id        | 3c9c4bdf-2d6d-40a2-883b-a86076def1fb               |
| tenant_id         | dab8af7f10504d3db582ce54a0ce6baa                   |

- Create a private network

# neutron net-create private_net
# neutron subnet-create --name private_subnet private_net

- Create a router

# neutron router-create provider_router
Created a new router:
| Field                 | Value                                |
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| router_id             | b48fd525-2519-4501-99d9-9c2d51a543f1 |
| name                  | provider_router                      |
| status                | ACTIVE                               |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa     |

Note: Update the /etc/neutron/l3_agent.ini file with the following entry and
restart the neutron-l3-agent SMF service (svcadm restart neutron-l3-agent):
router_id = b48fd525-2519-4501-99d9-9c2d51a543f1

- Add external network to router

# neutron router-gateway-set provider_router public_net
Set gateway for router provider_router
# neutron router-show provider_router
| Field                 | Value                                                                        |
| admin_state_up        | True                                                                         |
| external_gateway_info | {"network_id": "3c9c4bdf-2d6d-40a2-883b-a86076def1fb",                       |
|                       | "enable_snat": true,                                                         |
|                       | "external_fixed_ips": [{"subnet_id": "6063613c-1008-4826-ae17-ce6a58511b2f", |
|                       | "ip_address": ""}]}                                             |
| router_id             | b48fd525-2519-4501-99d9-9c2d51a543f1                                         |
| name                  | provider_router                                                              |
| status                | ACTIVE                                                                       |
| tenant_id             | dab8af7f10504d3db582ce54a0ce6baa                                             |

Note: By default, SNAT is enabled on the gateway interface of the Neutron
router. To disable this feature, specify the --disable-snat option to the
neutron router-gateway-set subcommand.

- Add internal network to router

# neutron router-interface-add provider_router private_subnet
Added interface 9bcfd21a-c751-40bb-99b0-d9274523e151 to router provider_router.

# neutron router-port-list provider_router
| id                                   | mac_address       | fixed_ips                               |
| 4b2f5e3d-0608-4627-b93d-f48afa86c347 | fa:16:3e:84:30:e4 | {"subnet_id":                           |
|                                      |                   | "6063613c-1008-4826-ae17-ce6a58511b2f", |
|                                      |                   | "ip_address": ""}          |
|                                      |                   |                                         |
| 9bcfd21a-c751-40bb-99b0-d9274523e151 | fa:16:3e:df:c1:0f | {"subnet_id":                           |
|                                      |                   | "c7f99141-25f0-47af-8efb-f5639bcf6181", |
|                                      |                   | "ip_address": ""}          |
Now all of the VMs that are in the internal network can reach the outside world through SNAT.

3. Support for Metadata Services

A metadata service provides an OpenStack VM instance with information such as:

 -- The public IP/hostname
 -- A random seed
 -- The metadata that the tenant provided at install time
 -- and much more

Metadata requests are made by the VM instance to the well-known address, port 80. All such requests arrive at the Neutron L3 agent,
which forwards them to a Neutron proxy server listening on port 9697. This
proxy server is spawned by the Neutron L3 agent. The Neutron proxy server
forwards the requests to the Neutron metadata agent through a UNIX socket. The
Neutron metadata agent interacts with the Neutron Server service to determine
the UUID of the instance that is making the requests. After the Neutron metadata
agent gets the instance UUID, it makes a call into the Nova API metadata server
to fetch the information for the VM instance. The fetched information is then
passed back to the instance that made the request.
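
From inside a running instance, you can see the service in action with a simple HTTP request (assuming the guest image includes curl):

    root@host-192-168-101-3:~# curl
    root@host-192-168-101-3:~# curl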

4. Support for Flat (untagged) Layer-2 Network Type

A flat OpenStack network is used to place all the VM instances on the same segment
without VLAN or VXLAN. This means that the VM instances will share the same Layer 2 segment.

In the flat l2-type there is no VLAN tagging or other network segregation taking
place; i.e., all the VNICs (and thus VM instances) that connect to a flat
l2-type network are created with VLAN ID set to 0. It follows that the flat
l2-type cannot be used to achieve multi-tenancy. Instead, it will be used by
data center admins to map directly to the existing physical networks in the
data center.

One use of the flat network type is in the configuration of floating IPs. If
the available floating IPs are a subset of the existing physical network's IP
subnet, then you would need to create a flat network with its subnet set to the
physical network's IP subnet and its allocation pool set to the available
floating IPs. So, the flat network contains part of the existing physical
network's IP subnet. See the examples in the previous section.

5. Support for New Neutron Subcommands

With this Solaris OpenStack Juno release, you can run multiple instances of
neutron-dhcp-agent, each instance running on a separate network node. Using the
dhcp-agent-network-add neutron subcommand, you can manually select which agent
should serve a DHCP-enabled subnet, as shown below. By default, the Neutron
server automatically load balances the work among the various DHCP agents.
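
For example (the agent UUID is a placeholder):

    # neutron agent-list
    # neutron dhcp-agent-network-add <dhcp-agent-uuid> private_net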

The following table shows the new subcommands that have been added as part of
the Solaris OpenStack Juno release.
| neutron subcommand          | Comments                                 |
| agent-delete                | Delete a given agent.                    |
| agent-list                  | List agents.                             |
| agent-show                  | Show information for a specified agent.  |
| agent-update                | Update the admin status and              |
|                             | description for a specified agent.       |
| dhcp-agent-list-hosting-net | List DHCP agents hosting a network.      |
| net-list-on-dhcp-agent      | List the networks on a DHCP agent.       |
| dhcp-agent-network-add      | Add a network to a DHCP agent.           |
| dhcp-agent-network-remove   | Remove a network from a DHCP agent.      |

Configuring the Neutron L3 Agent in Solaris OpenStack Juno

The Oracle Solaris implementation of OpenStack Neutron supports the following deployment model: provider router with private networks. You can find more information about this model here. In this deployment model, each tenant can have one or more private networks, and all the tenant networks share the same router. This router is created, owned, and managed by the data center administrator. The router itself is not visible in a tenant's network topology view because it is owned by the service tenant. Furthermore, because there is only a single router, tenant networks cannot use overlapping IPs. Thus, it is likely that the administrator would create the private networks on behalf of the tenants.

By default, this router prevents routing between private networks that are part of the same tenant. That is, VMs within one private network cannot communicate with VMs in another private network, even though they are all part of the same tenant. This behavior can be changed by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini configuration file and restarting the neutron-l3-agent SMF service, as shown below.
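
For example:

    l3-agent# grep allow_forwarding /etc/neutron/l3_agent.ini
    allow_forwarding_between_networks = True
    l3-agent# svcadm restart neutron-l3-agent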

This router provides connectivity to the outside world for the tenant VMs. It does this by performing bidirectional NAT and/or source NAT on the interface that connects the router to the external network. Tenants create as many floating IPs (public IPs) as they need, or as are allowed by the floating IP quota, and then associate these floating IPs with the VMs that need outside connectivity.
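
From the CLI, that allocation and association look roughly like this (the UUIDs are placeholders; the equivalent dashboard steps appear later in this post):

    # neutron floatingip-create external_network
    # neutron floatingip-associate <floatingip-uuid> <vm-port-uuid>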

The following figure captures the supported deployment model.


Figure 1 Provider router with private networks deployment

Tenant A has:

  • Two internal networks:
    HR (subnet:, gateway:
    ENG (subnet:, gateway:
  • Two VMs
    VM1 connected to HR with a fixed IP address of
    VM2 connected to ENG with a fixed IP address of

Tenant B has:

  • Two internal networks:
    IT (subnet:, gateway:
    ACCT (subnet:, gateway:
  • Two VMs
    VM3 connected to IT with a fixed IP address of
    VM4 connected to ACCT with a fixed IP address of

All the gateway interfaces are instantiated on the node that is running neutron-l3-agent.

The external network is a provider network that is associated with a subnet that is reachable from outside. Tenants create floating IPs from this network and associate them to their VMs. VM1 and VM2 each have a floating IP associated with them and are reachable from the outside world through these IP addresses.

Configuring neutron-l3-agent on a Network Node

Note: This post assumes that all Compute Nodes and Network Nodes in the network have been identified and the configuration files for all the OpenStack services have been appropriately configured so that these services can communicate with each other.

The service tenant is a tenant that has all of the OpenStack services' users, namely nova, neutron, glance, cinder, swift, keystone, heat, and horizon. The services communicate with each other using these users, who all have the admin role. The steps below show how to use the service tenant to create a router, an external network, and an external subnet that will be used by all of the tenants in the data center. Please refer to the following table and diagram while walking through the steps.

Note: Alternatively, you could create a separate tenant (DataCenter) and a new user (datacenter) with admin role, and the DataCenter tenant could host all of the aforementioned shared resources. 


Table 1 Public IP address mapping


Figure 2 Neutron L3 agent configuration

Steps required to set up the Neutron L3 agent as a data center administrator:

Note: We need to use the OpenStack CLI to configure the shared single router and associate networks/subnets from different tenants with the router, because from the OpenStack dashboard you can only manage one tenant's resources at a time.

1. Enable Solaris IP filter functionality.

   l3-agent# svcadm enable ipfilter
   l3-agent# svcs ipfilter
   online 10:29:04 svc:/network/ipfilter:default

2. Enable IP forwarding on the entire host. If forwarding is currently off,
turn it on with ipadm set-prop -p forwarding=on ipv4, and then verify:

   l3-agent# ipadm show-prop -p forwarding ipv4
   PROTO PROPERTY    PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
   ipv4  forwarding  rw   on           on           off          on,off

3. Ensure that the Solaris Elastic Virtual Switch feature is configured correctly and has the VLAN ID required for the external network. So, if the external network/subnet uses VLAN ID of 15, then do the following:

   l3-agent# evsadm show-controlprop -p vlan-range,l2-type
   PROPERTY            PERM VALUE               DEFAULT             HOST
   l2-type             rw   vlan                vlan                --
   vlan-range          rw   200-300             --                  --

   l3-agent# evsadm set-controlprop -p vlan-range=15,200-300

In our case, the external network/subnet shares the same subnet as the compute node, so we can use the Flat Layer-2 network type and not use VLANs at all. We need to configure which datalink on the network node will be used for flat networking. In our case, it is going to be net0 (net1 is used for connecting to the internal network):

   l3-agent# evsadm set-controlprop -p uplink-port=net0,flat=yes

Note: For more information on EVS please refer to Chapter 5, "About Elastic Virtual Switches" and Chapter 6, "Administering Elastic Virtual Switches" in Managing Network Virtualization and Network Resources in Oracle Solaris 11.2 (http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris EVS forms the backend for OpenStack networking, and it facilitates inter-VM communication (on the same compute-node or across compute-node) either using VLANs or VXLANs or Flat networks.

4. Ensure that the service tenant is already there.

   l3-agent# keystone --os-endpoint=http://localhost:35357/v2.0 \
   --os-token=ADMIN tenant-list
   |                id                |   name  | enabled |
   | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
   | f164220cb02465db929ce520869895fa | service |   True  |

5. Create the provider router. Note the UUID of the new router.

   l3-agent# export OS_USERNAME=neutron
   l3-agent# export OS_PASSWORD=neutron
   l3-agent# export OS_TENANT_NAME=service
   l3-agent# export OS_AUTH_URL=http://localhost:5000/v2.0
   l3-agent# neutron router-create provider_router
   Created a new router:
   | Field                 | Value                                |
   | admin_state_up        | True                                 |
   | external_gateway_info |                                      |
   | id                    | 181543df-40d1-4514-ea77-fddd78c389ff |
   | name                  | provider_router                      |
   | status                | ACTIVE                               |
   | tenant_id             | f164220cb02465db929ce520869895fa     |

6. Use the router UUID from step 5 and update the /etc/neutron/l3_agent.ini file with the following entry:

router_id = 181543df-40d1-4514-ea77-fddd78c389ff

7. Enable the neutron-l3-agent service.

   l3-agent# svcadm enable neutron-l3-agent
   l3-agent# svcs neutron-l3-agent
   online 11:24:08 svc:/application/openstack/neutron/neutron-l3-agent:default

8. Create an external network.

   l3-agent# neutron net-create --provider:network_type=flat \
   --router:external=true  external_network
   Created a new network:
   | Field                    | Value                                |
   | admin_state_up           | True                                 |
   | id                       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | name                     | external_network                     |
   | provider:network_type    | flat                                 |
   | router:external          | True                                 |
   | shared                   | False                                |
   | status                   | ACTIVE                               |
   | subnets                  |                                      |
   | tenant_id                | f164220cb02465db929ce520869895fa     |

9. Associate a subnet to external_network

   l3-agent# neutron subnet-create --disable-dhcp --name external_subnet \
   --allocation-pool start=,end= external_network
   Created a new subnet:
   | Field            | Value                                            |
   | allocation_pools | {"start": "", "end": ""} |
   | cidr             |                                   |
   | dns_nameservers  |                                                  |
   | enable_dhcp      | False                                            |
   | gateway_ip       |                                      |
   | host_routes      |                                                  |
   | id               | 5d9c8958-0de0-11e4-9d96-e1f29f417e2f             |
   | ip_version       | 4                                                |
   | name             | external_subnet                                  |
   | network_id       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f             |
   | tenant_id        | f164220cb02465db929ce520869895fa                 |

10. Add external_network to the router.

    l3-agent# neutron router-gateway-set -h
    usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
     router-id external-network-id

    l3-agent# neutron router-gateway-set \
    181543df-40d1-4514-ea77-fddd78c389ff \  (provider_router UUID)
    f67f0d72-0ddf-11e4-9d95-e1f29f417e2f    (external_network UUID)
    Set gateway for router 181543df-40d1-4514-ea77-fddd78c389ff

    l3-agent# neutron router-list -c name -c external_gateway_info
| name            | external_gateway_info                                  |
| provider_router | {"network_id": "f67f0d72-0ddf-11e4-9d95-e1f29f417e2f", |
|                 | "enable_snat": true,                                   |
|                 | "external_fixed_ips":                                  |

|                 |[{"subnet_id": "5d9c8958-0de0-11e4-9d96-e1f29f417e2f",  |
|                 | "ip_address": ""}]}                         |
Note: By default, SNAT is enabled on the gateway interface of the Neutron
router. To disable this feature, specify the --disable-snat option to the
neutron router-gateway-set subcommand.

11. Add the tenant's private networks to the router. The networks shown by neutron net-list were previously configured.

    l3-agent# keystone tenant-list
    |                id                |   name  | enabled |
    | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
    | f164220cb02465db929ce520869895fa | service |   True  |

    l3-agent# neutron net-list --tenant-id=511d4cb9ef6c40beadc3a664c20dc354
    | id                            | name | subnets                      |
    | c0c15e0a-0def-11e4-9d9f-      | HR   | c0c53066-0def-11e4-9da0-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f|   
    | ce64b430-0def-11e4-9da2-      | ENG  | ce693ac8-0def-11e4-9da3-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f|

    Note: The above two networks were pre-configured 

    l3-agent# neutron router-interface-add  \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    c0c53066-0def-11e4-9da0-e1f29f417e2f   (HR subnet UUID)
    Added interface 7843841e-0e08-11e4-9da5-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

    l3-agent# neutron router-interface-add \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    ce693ac8-0def-11e4-9da3-e1f29f417e2f   (ENG subnet UUID)
    Added interface 89289b8e-0e08-11e4-9da6-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

12. The following figure shows how the network topology looks when you log in as a service tenant user.


Steps required to create and associate floating IPs as a tenant user

1. Log in to the OpenStack Dashboard using the tenant user's credentials.

2. Select Project -> Access & Security -> Floating IPs

3. With external_network selected, click the Allocate IP button


4. The Floating IPs tab shows that the floating IP is allocated.


5. Click the Associate button and select the VM's port from the pull-down menu.


6. The Project -> Instances window shows that the floating IP is associated with the VM.


If you had selected a keypair (SSH public key) while launching an instance, then that SSH key would have been added to root's authorized_keys file in the VM. With that done, you can ssh into the running VM.

       [gmoodalb@thunta:~] ssh root@
       Last login: Fri Jul 18 00:37:39 2014 from
       Oracle Corporation SunOS 5.11 11.2 June 2014

       root@host-192-168-101-3:~# uname -a
       SunOS host-192-168-101-3 5.11 11.2 i86pc i386 i86pc
       root@host-192-168-101-3:~# zoneadm list -cv
       ID NAME              STATUS      PATH                 BRAND      IP    
        2 instance-00000001 running     /                    solaris    excl 
       root@host-192-168-101-3:~# ipadm
       NAME             CLASS/TYPE STATE        UNDER      ADDR
       lo0              loopback   ok           --         --
         lo0/v4         static     ok           --
       lo0/v6          static     ok          --         ::1/128
       net0             ip         ok           --         --
         net0/dhcp      inherited  ok           --

Under the covers:

On the node where neutron-l3-agent is running, you can use the IP filter commands (ipf(1M), ippool(1M), and ipnat(1M)) and the networking commands (dladm(1M) and ipadm(1M)) to observe and troubleshoot the configuration done by neutron-l3-agent.

VNICs created by neutron-l3-agent:

l3-agent# dladm show-vnic
    LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE VIDS
    l3i7843841e_0_0     net1         1000   2:8:20:42:ed:22   fixed       200
    l3i89289b8e_0_0     net1         1000   2:8:20:7d:87:12   fixed       201
    l3ed527f842_0_0     net0         100    2:8:20:9:98:3e    fixed       0

IP addresses created by neutron-l3-agent:

    l3-agent# ipadm
    NAME                  CLASS/TYPE STATE   UNDER      ADDR
    l3ed527f842_0_0       ip         ok      --         --
      l3ed527f842_0_0/v4  static     ok      --
      l3ed527f842_0_0/v4a static     ok      --
    l3i7843841e_0_0       ip         ok      --         --
      l3i7843841e_0_0/v4  static     ok      --
    l3i89289b8e_0_0       ip         ok      --         --
      l3i89289b8e_0_0/v4  static     ok      --

IP Filter rules:

   l3-agent# ipfstat -io
   empty list for ipfilter(out)
   block in quick on l3i7843841e_0_0 from to pool/4386082

   pass in on l3i7843841e_0_0 to l3ed527f842_0_0: from any to !
   block in quick on l3i89289b8e_0_0 from to pool/8226578
   pass in on l3i89289b8e_0_0 to l3ed527f842_0_0: from any to !
   l3-agent# ippool -l
   table role = ipf type = tree number = 8226578
{; };
   table role = ipf type = tree number = 4386082
{; };

IP NAT rules:

   l3-agent# ipnat -l
   List of active MAP/Redirect filters:
   rdr l3i89289b8e_0_0 port 80 -> port 9697 tcp
   map l3i89289b8e_0_0 ->
   rdr l3i7843841e_0_0 port 80 -> port 9697 tcp
   map l3i7843841e_0_0 ->
   bimap l3ed527f842_0_0 ->
   List of active sessions:
   BIMAP  22  <- ->  22 [ 36405]


Oracle OpenStack is cloud management software that provides customers an enterprise-grade solution to deploy and manage their entire IT environment. Customers can rapidly deploy Oracle and third-party applications across shared compute, network, and storage resources with ease, with end-to-end enterprise-class support. For more information, see here.

