Wednesday Feb 25, 2015

Key Points To Know About Oracle OpenStack for Oracle Linux

Now generally available, the Oracle OpenStack for Oracle Linux distribution allows users to control Oracle Linux and Oracle VM through OpenStack in production environments. Based on the OpenStack Icehouse release, Oracle’s distribution provides customers with increased choice and interoperability and takes advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. Oracle OpenStack for Oracle Linux is available as part of Oracle Linux Premier Support and Oracle VM Premier Support offerings at no additional cost.

The Oracle OpenStack for Oracle Linux distribution is generally available, allowing customers to use OpenStack software with Oracle Linux and Oracle VM.

Oracle OpenStack for Oracle Linux is OpenStack software that installs on top of Oracle Linux. To help ensure flexibility and openness, it can support any guest operating system (OS) that is supported with Oracle VM, including Oracle Linux, Oracle Solaris, Microsoft Windows, and other Linux distributions.

This release allows customers to build a highly scalable, multitenant environment and integrate with the rich ecosystem of plug-ins and extensions available for OpenStack.

In addition, Oracle OpenStack for Oracle Linux can integrate with third-party software and hardware to provide more choice and interoperability for customers.

Oracle OpenStack for Oracle Linux is available as a free download from the Oracle Public Yum Server and Unbreakable Linux Network (ULN).

An Oracle VM VirtualBox image of the product is also available on Oracle Technology Network, providing an easy way to get started with OpenStack.

http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html


Here are some of the benefits:

  • Extends choice for building public or private clouds with enterprise-class components
  • Accelerates cloud deployment with ease and peace of mind
  • Provides end-to-end support from the OpenStack platform to base OS, guest OS and Oracle workloads from a single vendor
  • Delivers built-in high-availability support with Oracle Clusterware to ensure continuity and resiliency of OpenStack services
  • Reduces total cost of ownership with zero license cost and low enterprise support cost

 

Read more at Oracle OpenStack for Oracle Linux website

Download now

Monday Feb 02, 2015

New OpenStack Hands on Labs

We've just published two new Hands on Labs that we ran during last year's Oracle OpenWorld. The labs originally ran on a SPARC T5-4 system with an attached Oracle ZFS Storage Appliance. During the lab, we walked participants through how to set up an OpenStack environment on Oracle Solaris, and then showed them how to create a golden image environment of the Oracle Database to be used to rapidly clone new VMs in the cloud. We've customized the lab so that it can be run in Oracle VM VirtualBox, so check out the following labs:

Enjoy!

Tuesday Nov 18, 2014

Oracle Technology Network Virtual Tech Event

The guys over at the Oracle Technology Network are hosting a new set of virtual events that are FREE to attend:

During the event there will be different tracks on the Database, Middleware, Java and Systems. For the Systems track we've got some great content lined up from Oracle Solaris, Oracle Linux and Oracle VM.

The first two sessions of the day in the Systems track are about setting up OpenStack on Oracle Solaris. We'll walk you through how to take a standard Oracle Solaris 11.2 installation, install and configure the OpenStack packages and get a simple single-node instance up and running. After this we'll deploy our first instance in OpenStack and show you how to create an application golden image. We'll also walk you through some of the additional enhancements we've made to be able to provide read-only VM environments through OpenStack.

There's a little bit of preparation work required for the labs. In our case we'll be using Oracle Solaris 11.2 installed in a VirtualBox environment. If you're interested in joining us for the events, check out the required preparation (there will be different preparation required for some of the other sessions so check out the registration page).

Monday Nov 17, 2014

Making OpenStack Safe for Pets

Eric Saxe (Oracle) co-presented with Michael Aday (HP) and Nigel Cook (Intel) during the OpenStack Summit in Paris earlier this month on how OpenStack is evolving to allow the cloud infrastructure to also host managed enterprise workloads (pets) rather than workloads that can be easily created or destroyed as needed (cattle). Check it out:

Other sessions held during the summit are available here.

Tuesday Oct 21, 2014

Oracle OpenStack for Oracle Solaris Multi-Node Docs

For most evaluations, running OpenStack on a single node is ideal. It gives you a chance to understand the different cloud services that make up OpenStack, understand how they are configured, and how to troubleshoot errors that you may come across. We've provided an OpenStack Unified Archive to help make that much easier to do - simply modify your Automated Installer manifests to point at this archive, or use the archive to create an Oracle Solaris Kernel Zone.
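For example, standing up that single-node evaluation environment in a kernel zone might look roughly like this (a sketch only; the zone name and archive path are placeholders for whatever you downloaded):

   # zonecfg -z osc-eval create -t SYSsolaris-kz
   # zoneadm -z osc-eval install -a /path/to/openstack-unified-archive.uar
   # zoneadm -z osc-eval boot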

At some point in time, you'll want to expand this into a multi-node architecture - spreading the load of those cloud services across multiple physical systems. As part of our regular Oracle Solaris 11.2 product documentation set, we've published an Installing and Configuring OpenStack in Oracle Solaris 11.2 document to help you with that. This walks you through a typical small three-node reference architecture that includes a controller node, a network node, and a compute node (conveniently representing the architecture that's also published in the OpenStack upstream documentation).

From this initial three-node setup, it's relatively easy to add more compute nodes, or even split out the cloud storage capabilities into a separate node. This is the first revision of this document, so please give us feedback for how we can improve it!

-- Glynn Foster

Tuesday Sep 02, 2014

Building an OpenStack Cloud for Solaris Engineering

Dave Miner has started to blog his experiences in deploying OpenStack internally for the Oracle Solaris engineering organization. Here's a blurb from the first post of the blog series:

In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.

You can read the rest of the blog series here (will update this post with new links as they are published):

Wednesday Aug 27, 2014

Multi-node Solaris 11.2 OpenStack on SPARC Servers

In this blog post we are going to look at how to partition a single Oracle SPARC server and configure multi-node OpenStack on the server running OVM Server for SPARC (or LDoms).

If we are going to partition the server into multiple Root domains and, optionally, IO domains (not with SR-IOV VFs), then configuring Solaris OpenStack Havana on these domains is very similar to setting up OpenStack on multiple individual physical machines.

On the other hand, if we are going to partition the server into multiple domains such that each domain (other than the primary domain) utilizes either

   -- networking service from primary domain OR
   -- SR-IOV Virtual Function (VF)

then there are some networking constraints that dictate how these domains can be used to run OpenStack services and how they can be used as compute nodes to host zones. We will look into these constraints and see how we can use VXLAN tunneling technology to overcome them.

Note: For the purposes of this blog, any non-primary domain is a guest domain. It is assumed that the user is familiar with LDoms Virtual Networking, SR-IOV VFs, and Crossbow VNICs.

Networking Constraint

To support a solaris brand or solaris-kz brand zone inside a guest domain, or just a VNIC inside a guest domain, it is required that the VNET device (or VF device) be instantiated with several alternate MAC addresses (See here). If the devices have just one MAC address, then VNIC creation fails as below:
   +-------------------------------------------------------------------------+
   |guest_domain_1# dladm show-phys net0                                     |
   |LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE|
   |net0              Ethernet             up         0      unknown   vnet0 |
   |guest_domain_1# dladm show-phys -m net0                                  |
   |LINK                SLOT     ADDRESS            INUSE CLIENT             |
   |net0                primary  0:14:4f:fb:37:a    yes   net0               |
   |guest_domain_1# dladm create-vnic -l net0 vnic0                          |
   |dladm: vnic creation failed: operation not supported                     |
   +-------------------------------------------------------------------------+
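The alternate MAC addresses themselves are assigned from the control domain when the virtual network device is created or modified. As a rough sketch (the domain and device names are placeholders, and the exact option syntax depends on your Oracle VM Server for SPARC release), that looks something like:

   primary_domain# ldm set-vnet alt-mac-addrs=auto,auto,auto vnet0 guest_domain_1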

If the VNET device was added with several alternate MAC addresses, then one can create a VNIC:

   +--------------------------------------------------------------------------------+
   |guest_domain_1# dladm show-phys -m net1                                         |
   |LINK                SLOT     ADDRESS            INUSE CLIENT                    |
   |net1                primary  0:14:4f:fb:af:ed   no    --                        |
   |                    1        0:14:4f:fb:4c:8a   no    --                        |
   |                    2        0:14:4f:fb:ea:71   no    --                        |
   |                    3        0:14:4f:fa:e9:b8   no    --                        |
   |guest_domain_1# dladm create-vnic -l net1 vnic0                                 |
   |guest_domain_1# dladm show-vnic vnic0                                           |
   |LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS |
   |vnic0               net1              0      0:14:4f:fb:4c:8a  factory, slot 1 0|
   +--------------------------------------------------------------------------------+

However, we cannot create a VNIC with an arbitrary MAC address; the MAC address must be one of the alternate MAC addresses. This is the constraint that I was alluding to earlier. OpenStack Neutron, through Solaris EVS, assigns a random MAC address to an OpenStack Neutron port. When a VM is launched inside the guest domain, it tries to create a VNIC with this random MAC address, and the zone boot fails.

In the case of para-virtualized networking, guest domains transmit and receive packets through the primary domain's physical device. If the physical device in the primary domain is unaware of the MAC addresses used inside the guest domains, then the zones or VNICs using the random MAC address will not receive packets.

In the case of SR-IOV VF, guest domains transmit/receive packets through the VF inside the guest domain. However, these VFs are pre-programmed with MAC addresses, and the guest cannot create VNICs outside of these MAC addresses.

Upstream OpenStack has resolved this issue for other hypervisors by re-creating the port at the time of VM launch, using one of the unused hypervisor MAC addresses. However, this issue is not that straightforward to resolve in Solaris: instead of a list of MAC addresses per server, Solaris has a list of MAC addresses per device. We realize this is a gap, and we are working toward fixing it.

VXLAN (Virtual eXtensible LAN) to the rescue

VXLAN, or Virtual eXtensible LAN, is a tunneling mechanism that provides isolated virtual Layer 2 (L2) segments that can span multiple physical L2 segments. Since it is a tunneling mechanism, it uses IP (IPv4 or IPv6) as its underlying network, which means we can have isolated virtual L2 segments over networks connected by IP. This allows Virtual Machines (VMs) to be in the same L2 segment even if they are located on systems that are in different physical networks. For more info on VXLAN, do read this blog post.

VXLAN enables you to create VNICs with any MAC address on top of VXLAN datalinks, and the packets from these VNICs will be wrapped in an IP packet that uses the primary MAC address of the VNET or VF device. The inner MAC address does not matter for routing packets in and out of the guest domain.



In the above case, the packets from the VNIC (vnic0) will be wrapped in a UDP->IP->Ethernet packet before it is finally delivered out of net0.
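In the OpenStack configuration described below, EVS creates the VXLAN datalinks and VNICs automatically. Purely to illustrate the mechanism, the manual equivalent would be roughly the following (the VNI, MAC address, and link names are placeholders):

   guest_domain_1# dladm create-vxlan -p addr=10.129.192.3,vni=2000 vxlan0
   guest_domain_1# dladm create-vnic -m 2:8:20:aa:bb:cc -l vxlan0 vnic0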

Basic requirements to use VXLAN 

(a) The IP interfaces of the primary domain and all the guest domains should be in the same subnet. This is not a hard requirement, but it avoids the need for multicast routing.


In the setup above, all the domains are part of the 10.129.192.0/24 subnet. The default gateway IP is 10.129.192.1, the primary domain is assigned 10.129.192.2, and the guest domains guest_domain_1 and guest_domain_2 are assigned 10.129.192.3 and 10.129.192.4, respectively. Various VXLAN datalinks will be created on top of these IP interfaces. Note that one VXLAN datalink will be created for each OpenStack network.
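For example, that IP addressing could be set up along these lines (a sketch; net0 is assumed to be the relevant datalink in each domain):

   primary_domain# ipadm create-ip net0
   primary_domain# ipadm create-addr -T static -a 10.129.192.2/24 net0/v4
   guest_domain_1# ipadm create-ip net0
   guest_domain_1# ipadm create-addr -T static -a 10.129.192.3/24 net0/v4
   guest_domain_2# ipadm create-ip net0
   guest_domain_2# ipadm create-addr -T static -a 10.129.192.4/24 net0/v4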


(b) OpenStack services placement

Strictly speaking, only the OpenStack Neutron L3 agent needs to run in the primary domain, while the rest of the OpenStack services can run in a guest domain. The Neutron L3 agent deals with infrastructure that needs VLANs, for example, providing public addresses for tenants' VMs.

In the setup described below, the OpenStack services are placed as shown in the following list:
   +------------------------------+
   |Primary Domain:               |
   |  - Neutron server            |
   |  - Neutron L3 agent          |
   |  - Neutron DHCP agent        |
   |  - EVS controller            |
   |                              |
   |Guest Domain (guest_domain_1):|
   |  - Cinder services           |
   |  - Glance services           |
   |  - Nova services             |
   |  - Keystone services         |
   |  - Horizon services          |
   |                              |
   |Guest Domain (guest_domain_2):|
   |  - Nova compute              |
   +------------------------------+

Configuring OpenStack services on individual nodes

On the primary domain:

   - Modify the following options in /etc/neutron/neutron.conf
   +-------------------------------------------+
   |rabbit_host = 10.129.192.3                 |
   |auth_host = 10.129.192.3                   |
   |identity_uri = http://10.129.192.3:35357   |
   |auth_uri = http://10.129.192.3:5000/v2.0   |
   +-------------------------------------------+

   - Set the EVS controller to 10.129.192.2
   +-------------------------------------------------------------------------+
   |primary_domain# evsadm set-prop -p controller=ssh://evsuser@10.129.192.2 |
   +-------------------------------------------------------------------------+

     Copy neutron's, root's, and evsuser's public keys into /var/user/evsuser/.ssh/authorized_keys so that those users can
     do password-less ssh into 10.129.192.2 as evsuser.
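      One way to set that up is sketched below; the home directories used here for the neutron user and root are assumptions, so adjust the paths to your environment:

      primary_domain# su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
      primary_domain# su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
      primary_domain# ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
      primary_domain# cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub \
          /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys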

   - Set the following options on EVS controller
   +----------------------------------------------------------------+
   |primary_domain# evsadm set-controlprop -p l2-type=vxlan         |
   |primary_domain# evsadm set-controlprop -p uplink-port=net0      |
   |primary_domain# evsadm set-controlprop -p vxlan-range=2000-3000 |
   |primary_domain# evsadm set-controlprop -p vlan-range=1          |
   +----------------------------------------------------------------+

   - Enable the Solaris IP filter feature (svcadm enable ipfilter)

   - Enable IP forwarding (ipadm set-prop -p forwarding=on ipv4)

   - Enable Neutron server (svcadm enable neutron-server)

   - Enable Neutron DHCP agent (svcadm enable neutron-dhcp-agent)

On guest_domain_1:

   - Delete the keystone service endpoint that says neutron is available on 10.129.192.3,
     and add a new service endpoint for neutron as shown in the following keystone command.

    guest_domain_1# set |grep OS_
    OS_AUTH_URL=http://10.129.192.3:5000/v2.0
    OS_PASSWORD=neutron
    OS_TENANT_NAME=service
    OS_USERNAME=neutron
   +---------------------------------------------------------------------------------+
   |guest_domain_1# keystone endpoint-create --region RegionOne \                   |
   |--service 4f49dea054b46cf6f83afff4a216aa13 --publicurl http://10.129.192.2:9696 \|
   |--adminurl http://10.129.192.2:9696 --internalurl http://10.129.192.2:9696       |
   +---------------------------------------------------------------------------------+
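     To locate the neutron service ID used above and the old endpoint to delete, something like the following can be used (a sketch):

     guest_domain_1# keystone service-list | grep neutron
     guest_domain_1# keystone endpoint-list
     guest_domain_1# keystone endpoint-delete <old_neutron_endpoint_id>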

   - Set the EVS controller to 10.129.192.2
   +--------------------------------------------------------------------------+
   |guest_domain_1# evsadm set-prop -p controller=ssh://evsuser@10.129.192.2  |
   +--------------------------------------------------------------------------+
      Copy root's public key into 10.129.192.2:/var/user/evsuser/.ssh/authorized_keys so that root on this machine can ssh
      as evsuser into 10.129.192.2. This is needed by zoneadmd, which runs as root, to fetch EVS information from the EVS
      controller.

On guest_domain_2:

   - Set the EVS controller to 10.129.192.2
   +-------------------------------------------------------------------------+
   |guest_domain_2# evsadm set-prop -p controller=ssh://evsuser@10.129.192.2 |
   +-------------------------------------------------------------------------+

      Copy root's public key into 10.129.192.2:/var/user/evsuser/.ssh/authorized_keys so that root on this machine can ssh
      as evsuser into 10.129.192.2. This is needed by zoneadmd, which runs as root, to fetch EVS information from the EVS
      controller.

Create Networks

Creating an internal network for tenant demo:
    guest_domain_1# set |grep OS_
    OS_AUTH_URL=http://10.129.192.3:5000/v2.0
    OS_PASSWORD=secrete
    OS_TENANT_NAME=demo
    OS_USERNAME=admin
   +-------------------------------------------------------------------------------+
   |guest_domain_1# neutron net-create eng_net                                 |
   |guest_domain_1# neutron subnet-create --name eng_subnet eng_net 192.168.10.0/24|
   +-------------------------------------------------------------------------------+

Creating an external network for the tenant service:
    primary_domain# set |grep OS_
    OS_AUTH_URL=http://10.129.192.3:5000/v2.0
    OS_PASSWORD=neutron
    OS_TENANT_NAME=service
    OS_USERNAME=neutron
   +------------------------------------------------------------------------------+
   |primary_domain# neutron net-create --router:external=true ext_net \  |
   |--provider:network_type=vlan  |
   |                           |
   |primary_domain# neutron subnet-create --name ext_subnet --enable_dhcp=false \ |
   |ext_net 10.129.192.0/24                                                       |
   +------------------------------------------------------------------------------+

Create a router and add interfaces to it
   +------------------------------------------------------+
   |primary_domain# neutron router-create provider_router |
   +------------------------------------------------------+
       Copy the router UUID from the above output and set it to router_id in /etc/neutron/l3_agent.ini.
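        That is, l3_agent.ini ends up containing an entry like the following (the UUID is whatever router-create returned):

        # /etc/neutron/l3_agent.ini
        router_id = <router_uuid>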
   +---------------------------------------------------------------------------------+
   |primary_domain# neutron router-gateway-set <router_uuid> <external_network_uuid> |
   |primary_domain# neutron router-interface-add <router_uuid> <internal_subnet_uuid>|
   +---------------------------------------------------------------------------------+

Enable the Neutron L3 agent
   +-----------------------------------------------+
   |primary_domain# svcadm enable neutron-l3-agent |
   +-----------------------------------------------+

        At this point, the following resources have been created in primary_domain, as shown in the output below.

   +--------------------------------------------------------------------------------+
   |primary_domain# dladm show-vxlan                                           |
   |LINK                ADDR                     VNI   MGROUP                     |
   |evs-vxlan2000       10.129.192.2            2000  224.0.0.1                   |
   |primary_domain# dladm show-vnic                                             |
   |LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS |
   |ldoms-vsw0.vport0   net0              1000   0:14:4f:fb:37:a   fixed       0    |
   |evsb0abc182_2_0     evs-vxlan2000     1000   2:8:20:c9:ee:39   fixed       0    |
   |l3id27a4750_2_0     evs-vxlan2000     1000   2:8:20:af:d0:65   fixed       0    |
   |l3ec631ab64_2_0     net0              1000   2:8:20:32:84:94   fixed       0    |
   |primary_domain# ipadm                                                       |
   |NAME                 CLASS/TYPE STATE        UNDER      ADDR    |
   |evsb0abc182_2_0      ip         ok           --         --                    |
   |  evsb0abc182_2_0/v4 static     ok           --         192.168.10.2/24        |
   |l3ec631ab64_2_0      ip         ok           --         --    |
   |  l3ec631ab64_2_0/v4 static     ok           --         10.129.192.5/24        |
   |l3id27a4750_2_0      ip         ok           --         --                    |
   |  l3id27a4750_2_0/v4 static     ok           --         192.168.10.1/24        |
   |net0                 ip         ok           --         --                    |
   |  net0/v4            static     ok           --         10.129.192.2/24        |
   +--------------------------------------------------------------------------------+

Launch a VM

Launch a VM connected to the internal network (a sample command-line launch is sketched after the output below). Once the VM is in the Active state, you will see the following resources created in guest_domain_1:
   +--------------------------------------------------------------------------------+
   |guest_domain_1# dladm show-vxlan                                             |
   |LINK                ADDR                     VNI   MGROUP                      |
   |evs-vxlan2000       10.129.192.3           2000  224.0.0.1                    |
   |guest_domain_1# dladm show-vnic                                               |
   |LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS |
   |instance-00000005/net0 evs-vxlan2000  0      2:8:20:5b:ec:6b   fixed       0    |
   +--------------------------------------------------------------------------------+
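For reference, launching such a VM from the command line might look something like this (a sketch; the flavor, image UUID, and network UUID are placeholders):

   guest_domain_1# nova boot --flavor <solaris_zone_flavor> --image <glance_image_uuid> \
   --nic net-id=<eng_net_uuid> test_vm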

From within the zone, you can ping the default gateway IP of 192.168.10.1, which is hosted in the primary domain:

   +-------------------------------------------------------------------+
   |root@host-192-168-10-3:~# ping -s 192.168.10.1                    |
   |PING 192.168.10.1: 56 data bytes                                |
   |64 bytes from 192.168.10.1: icmp_seq=0. time=0.432 ms             |
   |64 bytes from 192.168.10.1: icmp_seq=1. time=0.452 ms             |
   |64 bytes from 192.168.10.1: icmp_seq=2. time=0.326 ms             |
   |^C                                                              |
   |----192.168.10.1 PING Statistics----                            |
   |3 packets transmitted, 3 packets received, 0% packet loss         |
   |round-trip (ms)  min/avg/max/stddev = 0.326/0.403/0.452/0.068      |
   +-------------------------------------------------------------------+

Create and associate a Floating IP
   +-----------------------------------------------------------------------------+
   |guest_domain_1# neutron floatingip-create <external_network_uuid>            |
   |guest_domain_1# neutron floatingip-associate <floatingip_uuid> <VM_Port_UUID>|
   +-----------------------------------------------------------------------------+
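The UUIDs used above can be looked up with the standard client listings, for example (a sketch; filter on the VM's name or fixed IP address):

   guest_domain_1# neutron net-list
   guest_domain_1# nova list
   guest_domain_1# neutron port-list | grep 192.168.10.3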

Check the IP Filter and IP NAT rules on the primary domain:
   +------------------------------------------------------------------------+
   |primary_domain# ipadm show-addr l3ec631ab64_2_0/                      |
   |ADDROBJ           TYPE     STATE        ADDR                          |
   |l3ec631ab64_2_0/v4 static  ok           10.129.192.5/24                |
   |l3ec631ab64_2_0/v4a static ok           10.129.192.6/32                |
   |                                                                   |
   |primary_domain# ipfstat -io                                         |
   |empty list for ipfilter(out)                                          |
   |block in quick on l3id27a4750_2_0 from 192.168.10.0/24 to pool/11522149 |
   |                              |
   |primary_domain# ipnat -l      |
   |List of active MAP/Redirect filters:                                  |
   |bimap l3ec631ab64_2_0 192.168.10.3/32 -> 10.129.192.6/32              |
   |                                                                   |
   |List of active sessions:      |
   +------------------------------------------------------------------------+

Now the VM should be accessible from the external network as 10.129.192.6.

   +------------------------------------------------------------------------+
   |[gmoodalb@thunta:~]                                                  |
   |>ping -ns 10.129.192.6                                                |
   |PING 10.129.192.6 (10.129.192.6): 56 data bytes                         |
   |64 bytes from 10.129.192.6: icmp_seq=0. time=0.919 ms                   |
   |64 bytes from 10.129.192.6: icmp_seq=1. time=0.854 ms                   |
   |64 bytes from 10.129.192.6: icmp_seq=2. time=0.828 ms                   |
   |^C                                                                 |
   |----10.129.192.6 PING Statistics----                                    |
   |3 packets transmitted, 3 packets received, 0% packet loss               |
   |round-trip (ms)  min/avg/max/stddev = 0.828/0.867/0.919/0.047           |
   |[gmoodalb@thunta:~]                                                  |
   |>ssh root@10.129.192.6                                                |
   |Password:                                                            |
   |Last login: Fri Aug 22 21:32:38 2014 from 10.132.146.13                 |
   |Oracle Corporation      SunOS 5.11      11.2    June 2014               |
   |root@host-192-168-10-3:~# zonename                                      |
   |instance-00000005                                                    |
   |root@host-192-168-10-3:~#                                               |
   +------------------------------------------------------------------------+

Thursday Aug 21, 2014

Solaris OpenStack Horizon customizations

In Oracle Solaris OpenStack Havana, we have customized the Horizon BUI by modifying the existing dashboards and panels to reflect only those features that we support. The modifications mostly involve:

 --  disabling a widget (checkbox, button, text area, and so on)
 --  removing a tab from a panel
 --  removing options from pull-down menus

The following table lists the customizations that we have made.

|-----------------------------+-----------------------------------------------------|
| Where                       | What                                                |
|-----------------------------+-----------------------------------------------------|
| Project => Instances =>     | Post-Creation tab is removed.                       |
| Launch Instance             |                                                     |
|                             |                                                     |
| Project => Instances =>     | Security Groups tab is removed.                     |
| Actions => Edit Instance    |                                                     |
|                             |                                                     |
| Project => Instances =>     | Console tab is removed.                             |
| Instance Name               |                                                     |
|                             |                                                     |
| Project => Instances =>     | Following actions Console, Edit Security Groups,    |
| Actions                     | Pause Instance, Suspend Instance, Resize Instance,  |
|                             | Rebuild Instance, and Migrate Instance are removed. |
|                             |                                                     |
| Project =>                  | Security Groups tab is removed.                     |
| Access and Security         |                                                     |
|                             |                                                     |
| Project =>                  | Create Volume action is removed.                    |
| Images and Snapshots =>     |                                                     |
| Images => Actions           |                                                     |
|                             |                                                     |
| Project => Networks =>      | Admin State is disabled and its value is always     |
| Create Network              | true.                                               |
|                             |                                                     |
| Project => Networks =>      | Disable Gateway checkbox is disabled, and its       |
| Create Network =>           | value is always false.                              |
| Subnet                      |                                                     |
|                             |                                                     |
| Project => Networks =>      | Allocation Pools and Host Routes text areas are     |
| Create Network =>           | disabled.                                      |
| Subnet Detail               |                                                     |
|                             |                                                     |
| Project => Networks =>      | Edit Subnet action is removed.                      |
| Network Name => Subnet =>   |                                                     |
| Actions                     |                                                     |
|                             |                                                     |
| Project => Networks =>      | Edit Port action is removed.                        |
| Network Name => Ports =>    |                                                     |
| Actions                     |                                                     |
|                             |                                                     |
| Admin => Instances =>       | Following actions Console, Pause Instance,          |
| Actions                     | Suspend Instance, and Migrate Instance are removed. |
|                             |                                                     |
| Admin => Networks =>        | Edit Network action is removed                      |
| Actions                     |                                                     |
|                             |                                                     |
| Admin => Networks =>        | Edit Subnet action is removed                       |
| Subnets => Actions          |                                                     |
|                             |                                                     |
| Admin => Networks =>        | Edit Port action is removed                         |
| Ports => Actions            |                                                     |
|                             |                                                     |
| Admin => Networks =>        | Admin State and Shared check boxes are disabled.    |
| Create Network              | Network's Admin State is always true, and Shared is |
|                             | always false.                                       |
|                             |                                                     |
| Admin => Networks =>        | Admin State check box is disabled and its value     |
| Network Name => Create Port | is always true.                                     |
|-----------------------------+-----------------------------------------------------|

Tuesday Aug 05, 2014

OpenStack Summit in Paris - Nov 3-7

The next OpenStack summit is soon approaching, hosted in Paris Nov 3-7. With a six-month cadence, it's an opportunity for developers, users, and operators to get together, talk all things OpenStack, and plan for the next release of OpenStack (codenamed 'Kilo'). The Oracle Solaris OpenStack team will be in attendance again, so please come and find us if you have any questions.

Eric and I have also submitted a session for the summit called "Making OpenStack Safe for Pets" - VOTE FOR THIS SESSION!

Many Enterprise customers are well on their way towards adopting OpenStack for (at least) the Cattle rich pastures of their test & DevOps infrastructure, and are increasingly interested in consolidation of existing enterprise applications and mission critical services into that same infrastructure and management paradigm.

Many of those applications exhibit needs and characteristics more like Pets rather than Cattle however, presenting a barrier both for consolidation and broader adoption of cloud / OpenStack by the Enterprise.

While some have argued that Cloud / OpenStack is simply the wrong infrastructure for pet-like applications, we would posit that isn't and shouldn't be the case.

In this talk, we will talk about trends that we are seeing with respect to adoption of OpenStack by Enterprise customers, and how that is driving our investment in OpenStack as well as our underlying compute, networking, storage, Operating System and virtualization technologies. We will talk about ways in which Oracle plans to contribute to OpenStack, and what we believe are the key areas of investment needed to address the needs of cloud wanting Enterprise customers, including high-availability cloud services, fault-tolerant cloud infrastructure, simplified cloud lifecycle management and more.

While the day may come when Enterprise applications can be thought of as Cattle, until then significant value exists in meeting the needs of Enterprise customers wanting their pets to thrive in the cloud, and who tend to think of their cloud infrastructure as pet-like too.

-- Glynn Foster

OpenStack Havana Updates

Today we pushed some updates to OpenStack on Oracle Solaris into the release repository. These updates provide fixes for a number of bugs that were uncovered leading up to the general release of Oracle Solaris 11.2. These fixes can be summarized as the following:

  • General robustness and fit-n-finish cleanup for Horizon
  • DHCP, L3, IPv6, and floating IP fixes for Neutron
  • Nova improvements to deal with halted zones
  • ZS3 Cinder driver fix for attaching multiple volumes
  • Package dependency fixes for minimization
  • Minor configuration file simplifications

These fixes will also be pushed into the support repository when Oracle Solaris 11.2 SRU 1 becomes available.

To update to these packages you can use the following command:

# pkg update

This will automatically apply the new package versions. You will need to manually restart the following OpenStack services (a sample restart command follows the list):
cinder-volume:default
http:apache22
keystone
neutron-dhcp-agent
neutron-l3-agent
neutron-server
nova-compute
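
For example, on a node where all of these services are present, the restarts might look like the following (a sketch; restart only the services that actually run on each node):

# svcadm restart cinder-volume:default http:apache22 keystone \
    neutron-dhcp-agent neutron-l3-agent neutron-server nova-compute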

For reference, here's the list of packages that have been updated:

cloud/openstack/cinder
cloud/openstack/glance
cloud/openstack/horizon
cloud/openstack/keystone
cloud/openstack/neutron
cloud/openstack/nova
cloud/openstack/swift
library/python-2/jsonpatch
library/python-2/jsonpatch-26
library/python-2/jsonpatch-27
service/network/dnsmasq

Happy OpenStacking!

-- Glynn Foster

Thursday Jul 31, 2014

Neutron L3 Agent in Oracle Solaris OpenStack

The Oracle Solaris implementation of OpenStack Neutron supports the following deployment model: provider router with private networks deployment. You can find more information about this model here. In this deployment model, each tenant can have one or more private networks and all the tenant networks share the same router. This router is created, owned, and managed by the data center administrator. The router itself will not be visible in the tenant's network topology view. Because there is only a single router, tenant networks cannot use overlapping IPs. Thus, it is likely that the administrator would create the private networks on behalf of tenants.

By default, this router prevents routing between private networks that are part of the same tenant. That is, VMs within one private network cannot communicate with VMs in another private network, even though they are all part of the same tenant. This behavior can be changed by setting allow_forwarding_between_networks to True in the /etc/neutron/l3_agent.ini configuration file and restarting the neutron-l3-agent SMF service.
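As a minimal sketch of that change, add the following line to /etc/neutron/l3_agent.ini and then restart the agent:

   allow_forwarding_between_networks = True

   # svcadm restart neutron-l3-agent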

This router provides connectivity to the outside world for the tenant VMs. It does this by performing bidirectional NAT on the interface that connects the router to the external network. Tenants create as many floating IPs (public IPs) as they need or as are allowed by the floating IP quota and then associate these floating IPs with the VMs that need outside connectivity.

The following figure captures the supported deployment model.

deployment_model.png

Figure 1 Provider router with private networks deployment

Tenant A has:

  • Two internal networks:
    HR (subnet: 192.168.100.0/24, gateway: 192.168.100.1)
    ENG (subnet: 192.168.101.0/24, gateway: 192.168.101.1)
  • Two VMs
    VM1 connected to HR with a fixed IP address of 192.168.100.3
    VM2 connected to ENG with a fixed IP address of 192.168.101.3

Tenant B has:

  • Two internal networks:
    IT (subnet: 192.168.102.0/24, gateway: 192.168.102.1)
    ACCT (subnet: 192.168.103.0/24, gateway: 192.168.103.1)
  • Two VMs
    VM3 connected to IT with a fixed IP address of 192.168.102.3
    VM4 connected to ACCT with a fixed IP address of 192.168.103.3

All the gateway interfaces are instantiated on the node that is running neutron-l3-agent.

The external network is a provider network that is associated with the subnet 10.134.13.0/24, which is reachable from outside. Tenants will create floating IPs from this network and associate them to their VMs. VM1 and VM2 have floating IPs 10.134.13.40 and 10.134.13.9 associated with them, respectively. VM1 and VM2 are reachable from the outside world through these IP addresses.

Configuring neutron-l3-agent on a Network Node

Note: In this configuration, all Compute Nodes and Network Nodes in the network have been identified, and the configuration files for all the OpenStack services have been appropriately configured so that these services can communicate with each other.

The service tenant is a tenant for all the OpenStack services (nova, neutron, glance, cinder, swift, keystone, and horizon) and the users for each of the services. Services communicate with each other using these users who all have admin role. The steps below show how to use the service tenant to create a router, an external network, and an external subnet that will be used by all of the tenants in the data center. Please refer to the following table and diagram while walking through the steps.

Note: Alternatively, you could create a separate tenant (DataCenter) and a new user (datacenter) with admin role, and the DataCenter tenant could host all of the aforementioned shared resources. 

ip_address_planning.png

Table 1 Public IP address mapping

network_topology

Figure 2 Neutron L3 agent configuration

Steps required to set up the Neutron L3 agent as a data center administrator:

Note: We need to use the OpenStack CLI to configure the shared single router and associate networks/subnets from different tenants with it, because from the OpenStack dashboard you can only manage one tenant's resources at a time.

1. Enable Solaris IP filter functionality.

   l3-agent# svcadm enable ipfilter
   l3-agent# svcs ipfilter
   STATE  STIME    FMRI
   online 10:29:04 svc:/network/ipfilter:default

2. Enable IP forwarding on the entire host.

   l3-agent# ipadm show-prop -p forwarding ipv4
   PROTO PROPERTY    PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
   ipv4  forwarding  rw   on           on           off          on,off 
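
If the ipv4 forwarding property had been off, it could have been enabled persistently with:

   l3-agent# ipadm set-prop -p forwarding=on ipv4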

3. Ensure that the Solaris Elastic Virtual Switch feature is configured correctly and has the VLAN ID required for the external network. In our case, the external network/subnet uses VLAN 1.

   l3-agent# evsadm show-controlprop -p vlan-range,l2-type
   PROPERTY            PERM VALUE               DEFAULT             HOST
   l2-type             rw   vlan                vlan                --
   vlan-range          rw   200-300             --                  --

   l3-agent# evsadm set-controlprop -p vlan-range=1,200-300

Note: For more information on EVS please refer to Chapter 5, "About Elastic Virtual Switches" and Chapter 6, "Administering Elastic Virtual Switches" in Managing Network Virtualization and Network Resources in Oracle Solaris 11.2 (http://docs.oracle.com/cd/E36784_01/html/E36813/index.html). In short, Solaris EVS forms the backend for OpenStack networking, and it facilitates inter-VM communication (on the same compute node or across compute nodes) using either VLANs or VXLANs.

4. Ensure that the service tenant is already there.

   l3-agent# keystone --os-endpoint=http://localhost:35357/v2.0 \
   --os-token=ADMIN tenant-list
   +----------------------------------+---------+---------+
   |                id                |   name  | enabled |
   +----------------------------------+---------+---------+
   | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
   | f164220cb02465db929ce520869895fa | service |   True  |
   +----------------------------------+---------+---------+

5. Create the provider router. Note the UUID of the new router.

   l3-agent# export OS_USERNAME=neutron
   l3-agent# export OS_PASSWORD=neutron
   l3-agent# export OS_TENANT_NAME=service
   l3-agent# export OS_AUTH_URL=http://localhost:5000/v2.0
   l3-agent# neutron router-create provider_router
   Created a new router:
   +-----------------------+--------------------------------------+
   | Field                 | Value                                |
   +-----------------------+--------------------------------------+
   | admin_state_up        | True                                 |
   | external_gateway_info |                                      |
   | id                    | 181543df-40d1-4514-ea77-fddd78c389ff |
   | name                  | provider_router                      |
   | status                | ACTIVE                               |
   | tenant_id             | f164220cb02465db929ce520869895fa     |
   +-----------------------+--------------------------------------+

6. Use the router UUID from step 5 and update the /etc/neutron/l3_agent.ini file with the following entry:

router_id = 181543df-40d1-4514-ea77-fddd78c389ff

7. Enable the neutron-l3-agent service.

   l3-agent# svcadm enable neutron-l3-agent
   l3-agent# svcs neutron-l3-agent
   STATE STIME FMRI
   online 11:24:08 svc:/application/openstack/neutron/neutron-l3-agent:default

8. Create an external network.

   l3-agent# neutron net-create --provider:network_type=vlan \
   --provider:segmentation_id=1 --router:external=true  external_network
   Created a new network:
   +--------------------------+--------------------------------------+
   | Field                    | Value                                |
   +--------------------------+--------------------------------------+
   | admin_state_up           | True                                 |
   | id                       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f |
   | name                     | external_network                     |
   | provider:network_type    | vlan                                 |
   | provider:segmentation_id | 1                                    |
   | router:external          | True                                 |
   | shared                   | False                                |
   | status                   | ACTIVE                               |
   | subnets                  |                                      |
   | tenant_id                | f164220cb02465db929ce520869895fa     |
   +--------------------------+--------------------------------------+

9. Associate a subnet to external_network

   l3-agent# neutron subnet-create --enable-dhcp=False \
   --name external_subnet external_network 10.134.13.0/24
   Created a new subnet:
   +------------------+--------------------------------------------------+
   | Field            | Value                                            |
   +------------------+--------------------------------------------------+
   | allocation_pools | {"start": "10.134.13.2", "end": "10.134.13.254"} |
   | cidr             | 10.134.13.0/24                                   |
   | dns_nameservers  |                                                  |
   | enable_dhcp      | False                                            |
   | gateway_ip       | 10.134.13.1                                      |
   | host_routes      |                                                  |
   | id               | 5d9c8958-0de0-11e4-9d96-e1f29f417e2f             |
   | ip_version       | 4                                                |
   | name             | external_subnet                                  |
   | network_id       | f67f0d72-0ddf-11e4-9d95-e1f29f417e2f             |
   | tenant_id        | f164220cb02465db929ce520869895fa                 |
   +------------------+--------------------------------------------------+

10. Apply the workaround for not having --allocation-pool support for subnets. Because the IP addresses 10.134.13.2 through 10.134.13.7 are set aside for other OpenStack API services, perform the following floatingip-create steps to ensure that no tenant will assign these IP addresses to VMs:

   l3-agent# for i in `seq 1 6`; do neutron floatingip-create \
   external_network; done
   l3-agent# neutron floatingip-list -c id -c floating_ip_address
   +--------------------------------------+---------------------+
   | id                                   | floating_ip_address |
   +--------------------------------------+---------------------+
   | 58fbccdd-1b60-c6ba-9a51-bbc2cbcc95f8 | 10.134.13.2         |
   | ce620f79-aed4-6d1c-b5e7-c64c5f6d1f28 | 10.134.13.3         |
   | 6442eef1-b748-cb51-8a96-98b90e264bd0 | 10.134.13.4         |
   | a9792d03-f5de-cae1-fa5a-bb614720b22c | 10.134.13.5         |
   | da18a52d-73a5-4c7d-fb98-95d292d9b0e8 | 10.134.13.6         |
   | 22e02f77-5b44-402a-d369-9e6b1d831ca0 | 10.134.13.7         |
   +--------------------------------------+---------------------+

11. Add external_network to the router.

    l3-agent# neutron router-gateway-set -h
    usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
                                      [--disable-snat]
     router-id external-network-id

    l3-agent# neutron router-gateway-set \
    181543df-40d1-4514-ea77-fddd78c389ff \  (provider_router UUID)
    f67f0d72-0ddf-11e4-9d95-e1f29f417e2f    (external_network UUID)
    Set gateway for router 181543df-40d1-4514-ea77-fddd78c389ff

    l3-agent# neutron router-list -c name -c external_gateway_info
+-----------------+--------------------------------------------------------+
| name            | external_gateway_info                                  |
+-----------------+--------------------------------------------------------+
| provider_router | {"network_id": "f67f0d72-0ddf-11e4-9d95-e1f29f417e2f"} |
+-----------------+--------------------------------------------------------+

12. Add the tenant's private networks to the router. The networks shown by neutron net-list were previously configured.

    l3-agent# keystone tenant-list
    +----------------------------------+---------+---------+
    |                id                |   name  | enabled |
    +----------------------------------+---------+---------+
    | 511d4cb9ef6c40beadc3a664c20dc354 |   demo  |   True  |
    | f164220cb02465db929ce520869895fa | service |   True  |
    +----------------------------------+---------+---------+

    l3-agent# neutron net-list --tenant-id=511d4cb9ef6c40beadc3a664c20dc354
    +-------------------------------+------+------------------------------+
    | id                            | name | subnets                      |
    +-------------------------------+------+------------------------------+
    | c0c15e0a-0def-11e4-9d9f-      | HR   | c0c53066-0def-11e4-9da0-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.100.0/24|   
    | ce64b430-0def-11e4-9da2-      | ENG  | ce693ac8-0def-11e4-9da3-     |
    |  e1f29f417e2f                 |      | e1f29f417e2f 192.168.101.0/24|
    +-------------------------------+------+------------------------------+

    Note: The above two networks were preconfigured 

    l3-agent# neutron router-interface-add  \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    c0c53066-0def-11e4-9da0-e1f29f417e2f   (HR subnet UUID)
    Added interface 7843841e-0e08-11e4-9da5-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

    l3-agent# neutron router-interface-add \
    181543df-40d1-4514-ea77-fddd78c389ff \ (provider_router UUID)
    ce693ac8-0def-11e4-9da3-e1f29f417e2f   (ENG subnet UUID)
    Added interface 89289b8e-0e08-11e4-9da6-e1f29f417e2f to router 181543df-40d1-4514-ea77-fddd78c389ff.

13. The following figure shows how the network topology looks when you log in as a service tenant user.

provider_router.png

Steps required to create and associate floating IPs as a tenant user

1. Log into the OpenStack Dashboard using the tenant user's credentials

2. Select Project -> Access & Security -> Floating IPs

3. With external_network selected, click the Allocate IP button

allocate_floating_ip.png

4. The Floating IPs tab shows that 10.134.13.9 Floating IP is allocated.

allocated_floating_ip.png

5. Click the Associate button and select the VM's port from the pull-down menu.

associate_fip.png

6. The Project -> Instances window shows that the floating IP is associated with the VM.

instances.png

If you had selected a keypair (SSH Public Key) while launching the instance, then that SSH key would have been added to root's authorized_keys file in the VM (a CLI sketch for registering a keypair follows the transcript below). With that done, you can ssh into the running VM.

       [gmoodalb@thunta:~] ssh root@10.134.13.9
       Last login: Fri Jul 18 00:37:39 2014 from 10.132.146.13
       Oracle Corporation      SunOS 5.11      11.2    June 2014

       root@host-192-168-101-3:~# uname -a
       SunOS host-192-168-101-3 5.11 11.2 i86pc i386 i86pc
       root@host-192-168-101-3:~# zoneadm list -cv
       ID NAME              STATUS      PATH                 BRAND      IP    
        2 instance-00000001 running     /                    solaris    excl 
       root@host-192-168-101-3:~# ipadm
       NAME             CLASS/TYPE STATE        UNDER      ADDR
       lo0              loopback   ok           --         --
         lo0/v4         static     ok           --         127.0.0.1/8
          lo0/v6         static     ok           --         ::1/128
       net0             ip         ok           --         --
         net0/dhcp      inherited  ok           --         192.168.101.3/24
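
If you prefer the CLI for the keypair step, registering a public key and referencing it at launch time looks roughly like this (a sketch; the key name, flavor, image, and network UUIDs are placeholders):

    $ nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
    $ nova boot --flavor <flavor> --image <image_uuid> --nic net-id=<net_uuid> \
      --key-name mykey myvm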

Under the covers:

On the node where neutron-l3-agent is running, you can use IP Filter commands (ipf(1m), ippool(1m), and ipnat(1m)) and networking commands (dladm(1m) and ipadm(1m)) to observe and troubleshoot the configuration done by neutron-l3-agent.

VNICs created by neutron-l3-agent:

    l3-agent# dladm show-vnic
    LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE VIDS
    l3i7843841e_0_0     net1         1000   2:8:20:42:ed:22   fixed       200
    l3i89289b8e_0_0     net1         1000   2:8:20:7d:87:12   fixed       201
    l3ed527f842_0_0     net0         100    2:8:20:9:98:3e    fixed       0

IP addresses created by neutron-l3-agent:

    l3-agent# ipadm
    NAME                  CLASS/TYPE STATE   UNDER      ADDR
    l3ed527f842_0_0       ip         ok      --         --
      l3ed527f842_0_0/v4  static     ok      --         10.134.13.8/24
      l3ed527f842_0_0/v4a static     ok      --         10.134.13.9/32
    l3i7843841e_0_0       ip         ok      --         --
      l3i7843841e_0_0/v4  static     ok      --         192.168.100.1/24
    l3i89289b8e_0_0       ip         ok      --         --
      l3i89289b8e_0_0/v4  static     ok      --         192.168.101.1/24

IP Filter rules:

   l3-agent# ipfstat -io
   empty list for ipfilter(out)
   block in quick on l3i7843841e_0_0 from 192.168.100.0/24 to pool/4386082
   block in quick on l3i89289b8e_0_0 from 192.168.101.0/24 to pool/8226578
   l3-agent# ippool -l
   table role = ipf type = tree number = 8226578
{ 192.168.100.0/24; };
   table role = ipf type = tree number = 4386082
{ 192.168.101.0/24; };

IP NAT rules:

   l3-agent# ipnat -l
   List of active MAP/Redirect filters:
   bimap l3ed527f842_0_0 192.168.101.3/32 -> 10.134.13.9/32
   List of active sessions:
   BIMAP 192.168.101.3  22  <- -> 10.134.13.9  22 [10.132.146.13 36405]

Known Issues:

1. The neutron-l3-agent SMF service goes into maintenance when it is restarted. This will be fixed in an SRU. The workaround is to restart the ipfilter service and then clear the neutron-l3-agent service:

# svcadm restart ipfilter:default
# svcadm clear neutron-l3-agent:default

2. The default gateway for the network node is removed in certain setups.

If the IP address of the network node is derived from the external_network address space, and you use the neutron router-gateway-clear command to remove the external_network from the provider_router, then the default gateway for the network node is deleted and the network node becomes inaccessible.

     l3-agent# neutron router-gateway-clear <router_UUID_goes_here>

To fix this problem, connect to the network node through the console and then add the default gateway again.

OpenStack 101 - How to get started on Oracle Solaris 11

As Eric has already mentioned, with Oracle Solaris 11.2 we've included a complete, enterprise-ready distribution of OpenStack based on the "Havana" release of the upstream project. We've talked to many customers who have expressed an interest in OpenStack generally, but also in being able to have Oracle Solaris participate in a heterogeneous mix of technologies that you'd typically see in a data center environment. We're absolutely thrilled to be providing this functionality to our customers as part of the core Oracle Solaris platform and support offering, so they can set up agile, self-service private clouds with Infrastructure-as-a-Service (IaaS), or develop Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) solutions on top of this infrastructure.

If you haven't really had much experience with OpenStack, you'll almost certainly be confused by the myriad of different project names for some of the core components of an OpenStack cloud. Here's a handy table:

Nova: OpenStack Nova provides a cloud computing fabric controller that supports a wide variety of virtualization technologies. In addition to its native API, it includes compatibility with the commonly encountered Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3) APIs.
Neutron: OpenStack Neutron provides an API to dynamically request and configure virtual networks. These networks connect "interfaces" from other OpenStack services (for example, VNICs from Nova VMs). The Neutron API supports extensions to provide advanced network capabilities, for example, quality of service (QoS), access control lists (ACLs) and network monitoring.
Cinder: OpenStack Cinder provides an infrastructure for managing block storage volumes in OpenStack. It allows block devices to be exposed and connected to compute instances for expanded storage, better performance, and integration with enterprise storage platforms.
Swift: OpenStack Swift provides object storage services for projects and users in the cloud.
Glance: OpenStack Glance provides services for discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations, from simple file systems to object-storage systems such as OpenStack Swift.
Keystone: OpenStack Keystone is the OpenStack identity service used for authentication between the OpenStack services.
Horizon: OpenStack Horizon is the canonical implementation of OpenStack's dashboard, which provides a web-based user interface to OpenStack services including Nova, Neutron, Cinder, Swift, Keystone and Glance.
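
On an Oracle Solaris 11.2 system with the OpenStack packages installed, each of these components is managed as one or more SMF services. A quick way to see which of them are present on a given node (the exact list will vary with which packages you've installed and enabled):

$ svcs -a | grep openstack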

So how do you get started? Because of OpenStack's distributed architecture, with different services able to run across multiple nodes, it isn't the easiest thing in the world to configure and get running. We've made that easier by providing a pre-configured single-node OpenStack Unified Archive and an excellent getting started guide, so you can evaluate it on one system first; a minimal deployment sketch follows below. Once you're up to speed on a single-node setup, you can use that experience to deploy OpenStack across multiple nodes. We've also got a bunch of other resources available.
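
As one possible way to try it out, assuming you've downloaded the single-node Unified Archive and want to deploy it into a kernel zone (the zone name and archive path below are placeholders, and the getting started guide covers the supported deployment options in detail):

# zonecfg -z osdemo create -t SYSsolaris-kz
# zoneadm -z osdemo install -a /export/archives/sol-11_2-openstack.uar
# zoneadm -z osdemo boot
# zlogin -C osdemo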

We're just starting our journey of providing OpenStack on Oracle Solaris with this initial integration and we expect to deliver more value over time. Ready to start your journey with OpenStack in your data center?

-- Glynn Foster

OpenStack Immutable VMs

Solaris 11 brought us the ability to have Immutable non global Zones.  With Solaris 11.2 we have extended that capability so that it works with Kernel Zones, LDOMs (OVM SPARC) and bare metal global zones.

Now what about deploying Immutable Zones via OpenStack?

The way to do this is via the Flavors facility in Nova.

From the OpenStack Dashboard (Horizon), navigate to the Admin -> Flavors page. We can either update one of the existing Solaris flavors or create a new one. Let's create a new one called 'Immutable Solaris non global Zone'.

Make sure you set the 'Flavor Access' to include the projects that you want to be able to use this flavor.

Then, from the 'More' menu on the flavor's entry in the table, select 'View Extra Specs'.


That will bring up a window like this one. Since we are creating a new entry from scratch, we also have to set the type of zone this flavor will deploy.

Select Create and fill in the key/value pair that sets a non global zone (if you want a kernel zone instead, change the value to solaris-kz):

Then do the same again and create a key/value pair for 'zonecfg:file-mac-profile', with the value being one of 'flexible-configuration', 'fixed-configuration' or 'strict', e.g.:

That's it. Close the flavor window, and now we can select this flavor when we deploy a new instance.
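
For completeness, the same flavor can be created and tagged from the command line with the nova client. A rough sketch, where the flavor name, ID and sizing values are made up and 'zonecfg:brand' is my assumption for the key shown in the screenshot above:

$ nova flavor-create Immutable-Solaris-Zone 100 2048 10 1
$ nova flavor-key Immutable-Solaris-Zone set zonecfg:brand=solaris
$ nova flavor-key Immutable-Solaris-Zone set zonecfg:file-mac-profile=fixed-configuration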

If we create a new VM instance using this flavor and look at the configuration of the zone that Nova deploys for us, we will see something like this:

$ zonecfg -z instance-0000000f info
zonename: instance-0000000f
zonepath: /system/zones/instance-0000000f
brand: solaris
autoboot: false
autoshutdown: shutdown
bootargs: 
file-mac-profile: fixed-configuration
...

It is possible to set other zonecfg global scope settings here as well.  Currently the choice is limited to a fixed set but I'm hoping to change that to allow any of the known global scope settings.  This would allow using some of the more advanced Zone resource controls via an OpenStack Nova Flavor.

 -- Darren J Moffat

 
  

OpenStack Cinder Volume encryption with ZFS

In an OpenStack deployment, the block storage for the VMs is provided by the Cinder service. In the case of Solaris, these VMs are either Kernel Zones or non global zones configured for ZOSS (Zones On Shared Storage). When Solaris 11.1 came out, I wrote about using ZFS to encrypt zones.

The Cinder volume service for OpenStack can be provided by ZFS using ZVOLs.  So it shouldn't be surprising that we get to benefit from ZFS features such as compression, encryption and deduplication.

When deploying a simple OpenStack configuration using the 'solaris.zfs.ZFSVolumeDriver', we create ZVOLs in the dataset specified by the 'zfs_volume_base' variable in /etc/cinder/cinder.conf. If the dataset specified by 'zfs_volume_base' doesn't already exist, the SMF service 'svc:/application/openstack/cinder/cinder-volume:setup' will create it for you and set the file system permissions and ZFS allow delegations for the 'cinder' user appropriately.
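
For reference, here is a minimal sketch of the relevant entries in /etc/cinder/cinder.conf; the dataset name is just a placeholder, and the full driver path is my expansion of the short name used above:

   # /etc/cinder/cinder.conf (excerpt)
   volume_driver=cinder.volume.drivers.solaris.zfs.ZFSVolumeDriver
   zfs_volume_base=rpool/cinder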

If we pre-create the ZFS dataset that zfs_volume_base points to with encryption enabled, all the ZVOLs that Cinder creates below it are automatically encrypted.

For example, if I'm using a ZFS pool called 'cloudstore' and I set 'cloudstore/cinder' as 'zfs_volume_base', I can do this:

# zfs create -o encryption=on -o keysource=passphrase,https://keys.example.com/cinder cloudstore/cinder

In the above example I'm assuming we already have an ad-hoc key manager available that provides keys/passphrases over https. You could also use a raw key file, a PKCS#11 keystore or an interactive prompt; see the ZFS encryption documentation for more guidance.
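
Depending on your environment, the same dataset could instead be keyed from a raw key file or by prompting for a passphrase; for example (the key file path is just a placeholder, and you would run only one of these in place of the command above):

# zfs create -o encryption=on -o keysource=raw,file:///etc/cinder/volkey cloudstore/cinder
# zfs create -o encryption=on -o keysource=passphrase,prompt cloudstore/cinder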

Now restart the  cinder-volume:setup service and we are ready to use our transparent encryption of Cinder volumes:

# svcadm restart cinder-volume:setup

If we look at the ZFS datasets after we have launched a VM instance and the Cinder volume for it has been created, we see this:

$ zfs get -r encryption cloudstore/cinder
NAME                                                            PROPERTY    VALUE  SOURCE
cloudstore/cinder                                               encryption  on     local
cloudstore/cinder/volume-8ae498b7-5866-60da-85f6-d22d6bc932e9   encryption  on     inherited from cloudstore/cinder

Using the above method, neither Cinder nor Nova is aware of the encryption of the volumes, nor are they involved in the key management.

We are investigating what will be required to extend the Solaris ZFS drivers for Cinder so that Cinder is involved in, or at least aware of, ZFS encryption and eventually the key management, since Cinder already has some support for this and a future OpenStack release will extend it via the Barbican project.

-- Darren J Moffat


Oracle Solaris 11 - Engineered for Cloud

Today's release of Oracle Solaris 11.2 is especially meaningful for many of us in Solaris Engineering who have been hard at work over the last few years making OpenStack cloud infrastructure a first class Solaris technology. Today we release not only one of the most significant, complete, and solid versions of Solaris ever, with many new cloud virtualization features, but also the fully integrated cloud infrastructure software itself...everything needed (from a software perspective anyway ;)) to stand up a fully functional OpenStack cloud system providing Infrastructure as a Service (IaaS) and cloud block/object storage on both SPARC and x86 based systems.

Why is the Solaris Engineering Team tackling Cloud Infrastructure? For the Enterprise, what we consider to be the "Operating System" is shifting thanks to the rise of cloud computing. When you think about the role of an Operating System, what comes to mind? What does it do, fundamentally? Of course, it's the software that manages and allocates compute resources to users and workloads. It virtualizes those resources (CPU, memory, persistent storage) to provide applications with elasticity in their resource use. It runs workloads, hosts services, and provides APIs and interfaces for both workloads and users of those services. Operating Systems have tended to do this within the confines of single physical systems (or VMs) however.

Cloud systems fundamentally need to provide all of these same basic OS services as well. From a pool of compute, networking, and storage, those resources need to be virtualized and allocated. Applications need to have the illusion of resource elasticity to enable them to scale to meet the demands of the workload and users...and the cloud system needs to run workloads and host services.

We've evolved from the time when enterprise applications were simply a collection of processes/threads running on bare metal or in a VM, consuming CPU, memory, and storage, and talking over the network...and we see the enterprise OS evolving as well. Today's and tomorrow's enterprise applications are distributed workloads and cloud services that are hosted and run on cloud systems spanning many physical nodes. OpenStack provides a standard set of interfaces which have enabled us to evolve Solaris into a fully open, yet very differentiated, platform for hosting cloud services and workloads.

That differentiation comes in part because we've built OpenStack on Solaris to seamlessly leverage many features newly available with Solaris 11.2, including Kernel Zones based virtualization offered up via OpenStack Nova, Unified Archive based image deployment served up via Glance, and Elastic Virtual Switch based SDN managed by OpenStack Neutron. Solaris also provides ZFS backed cloud block and object storage (through OpenStack Cinder and Swift) over iSCSI and Fibre Channel connected storage and/or via Oracle's ZFS Storage Appliances.

Differentiation also comes about because Solaris based OpenStack has at its foundation the platform and technology you know and trust for running your mission critical enterprise workloads. Unparalleled reliability, scalability, efficiency and performance, both for hosting mission critical cloud services and for the cloud infrastructure itself, are just as important as they've always been.

So what's the best way to get started? You don't need a massive sprawl of infrastructure to begin. With just a system or two, you can create your own Solaris based OpenStack cloud providing Infrastructure as a Service (IaaS). Check out Getting Started with OpenStack on Solaris 11.2 to get started. You can also find Solaris 11.2 in the OpenStack Marketplace.

You'll find packages for the Havana version of OpenStack available in the Solaris 11 package repositories, including Nova, Neutron, Cinder, Glance, Keystone, Horizon, and Swift.
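
If you want to see what's available in your configured IPS repository before committing, something along these lines should work; 'openstack' here refers to the cloud/openstack group package, which I believe pulls in the individual services, but verify the package names against your own repository:

$ pkg list -af 'cloud/openstack/*'
# pkg install openstack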

If you run into issues, or have questions, feel free to drop us a note at solaris_openstack_interest@openstack.java.net...we're happy to help! Enjoy!

