Monday Jul 20, 2015

OpenStack and Hadoop

It's always interesting to see how technologies get tied together in the industry. Orgad Kimchi from the Oracle Solaris ISV engineering group has blogged about the combination of OpenStack and Hadoop. Hadoop is an open source project run by the Apache Foundation that provides distributed storage and compute for large data sets - in essence, the very heart of big data. In this technical how-to, Orgad shows how to set up a multi-node Hadoop cluster using OpenStack by creating a pre-configured Unified Archive that can be uploaded to the Glance Image Repository for deployment across VMs created with Nova.

Check out: How to Build a Hadoop 2.6 Cluster Using Oracle OpenStack

Thursday Jul 09, 2015

PRESENTATION: Oracle OpenStack for Oracle Linux at OpenStack Summit Session

In this blog, we wanted to share a presentation given at the OpenStack Summit in Vancouver in May. We have just set up our SlideShare.net account and published our first presentation there.

If you want to see more of these presentations, follow us at our Oracle OpenStack SlideShare space.

Tuesday Jul 07, 2015

Upgrading OpenStack from Havana to Juno

Upgrading from Havana to Juno - Under the Covers

Upgrading from one OpenStack release to the next is a daunting task.  Experienced OpenStack operators usually do so reluctantly.  After all, it took days (for some, weeks) to get OpenStack to stand up correctly in the first place, and now they want to upgrade it?  At the last OpenStack Summit in Vancouver, it wasn't uncommon to hear about companies with large clouds still running Havana.  Moving forward to Icehouse, Juno, or Kilo looked like an epic undertaking with lots of downtime for users and lots of frustration for operators.

In Solaris engineering, not only are we dealing with upgrading from Havana, but we actually skipped Icehouse entirely.  This means we had to move people from Havana directly to Juno, which isn't officially supported upstream.  Upstream only supports moving from X to X+1, so we were mostly on our own for this.  Luckily, the Juno code base for each component carried the database starting point from Havana along with the SQLAlchemy-Migrate scripts through Icehouse to Juno.  This ends up being a huge headache saver because 'component-manage db sync' will simply do the right thing and convert the schema automatically.
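
To make that concrete, the per-component 'db sync' is just each project's manage command; a minimal sketch with a few representative components (note that some projects spell the subcommand with an underscore):

    # nova-manage db sync
    # cinder-manage db sync
    # glance-manage db_sync
    # keystone-manage db_sync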

We created new SMF services for each component to handle the upgrade duties.  Each service performs comparable tasks: prepare the database for migration and update configuration files for Juno.  The component-upgrade service is a dependency for every other component-* service.  This way, the Juno versions of the OpenStack software won't run until after the upgrade service completes.
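
As a sketch of what that dependency buys you (the FMRI below is illustrative rather than the exact name we shipped), svcs can confirm that a Juno service waits on its upgrade service:

    # svcs -d nova-compute
    STATE          STIME    FMRI
    online          9:51:04 svc:/application/openstack/nova/upgrade:default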

The Databases

For the most part, migration of the databases from Havana to Juno is straightforward.  Since the components deliver the appropriate SQLAlchemy-Migrate scripts, we can simply enable the proper component-db SMF service and let the 'db sync' calls handle the database.  We did hit a few snags along the way, however.  Migration of SQLite-backed databases became increasingly error-prone as we worked on Juno.  Upstream, there's a strong push to do away with SQLite support entirely.  We decided that we would not explicitly support migration of SQLite databases.  That is, if an operator chose to run one or more of the components with SQLite, we would try to upgrade the database automatically for them, but there were no guarantees.  It's well documented, both in Oracle's documentation and upstream, that SQLite isn't robust enough to handle the throughput OpenStack needs from its database.

The second major snag we hit was the forced change to 'charset = utf8' in Glance 2014.2.2 for MySQL.  This required our upgrade SMF services to introspect each component's configuration files, extract its SQLAlchemy connection string, and, if MySQL was in use, convert all the databases to utf8.  With these checks done and any MySQL databases converted, our databases could migrate cleanly and be ready for Juno.
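
The conversion itself is ordinary MySQL DDL; a hedged sketch of what the upgrade services effectively run (database and table names here are illustrative):

    # mysql -u root -p
    mysql> ALTER DATABASE glance CHARACTER SET utf8;
    mysql> ALTER TABLE glance.images CONVERT TO CHARACTER SET utf8;

...and so on for each table in each MySQL-backed component database.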

The Configuration Files

Each component's configuration files had to be examined for deprecations or changes from Havana to Juno.  We started off simply examining the default configuration files for Juno 2014.2.2 and looking for 'deprecated'.  A simple Python dictionary was created to contain the renames and deprecations for Juno.  We then examine each configuration file and, if necessary, move configuration names and values to the proper place.  As an example, the Havana components typically set DEFAULT.sql_connection = <SQLAlchemy Connection String>.  In Juno, those were all changed to database.connection = <SQLAlchemy Connection String>, so we had to make sure the upgraded configuration file brought the value along, including the rename.
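
For example, a Havana-era nova.conf fragment and its Juno equivalent (connection string illustrative):

    # Havana nova.conf
    [DEFAULT]
    sql_connection = mysql://nova:password@localhost/nova

    # Juno nova.conf, after the upgrade service has run
    [database]
    connection = mysql://nova:password@localhost/nova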

The Safety Net

"Change configuration values automatically?!"
"Update database character sets?!!"
"Are you CRAZY?!  You can't DO that!"

Oh, but remember that you're running Solaris, where we have the best enterprise OS tools.  Upgrading to Juno will create a new boot environment for operators.  For anyone unfamiliar with boot environments, please examine the awesome magic here.  What this means is that an upgrade to Juno is completely safe.  The Havana deployment and all of its instances, databases, and configurations are saved in the current BE, while Juno is installed into a brand new BE for you.  The new BE activates on the next boot, where the upgrade process happens automatically.  If the upgrade goes sideways, the old BE is calmly sitting there, ready to be called back into action.
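
If you've never used boot environments, the fallback amounts to a couple of commands; a sketch, assuming the pre-upgrade BE is named solaris-havana:

    # beadm list
    # beadm activate solaris-havana
    # init 6

On the next boot you're back on Havana, exactly as you left it.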

Hopefully this takes the hand-wringing out of upgrading OpenStack for operators running Solaris.  OpenStack is complicated enough as it is without also incurring additional headaches around upgrading from one release to the next.   

OpenStack Juno in Solaris 11.3 Beta

It's been less than a year since we announced availability of the Havana version of the OpenStack cloud infrastructure software as part of Solaris 11.2, and we've since continued to see what can only be described as a startling amount of momentum build in the OpenStack community. It's an incredibly exciting space for us, and for Oracle as a whole, as we watch the benefits of cloud-based infrastructure and service management transform the way in which our customers run their enterprises.

Fully automated self-service provisioning and orchestration of compute, network, and storage is a beautiful thing...empowering developers to self-provision in minutes the infrastructure needed to build, test, or deploy applications without having to waste time filing tickets, procuring systems, or waiting on others. Administrators are able to view and manage what would otherwise be a sprawl of compute, networking, and storage as an actual system. Rather than wasting time repeatedly servicing individual requests, they can instead focus their attention on managing the cloud's resources as a pool, and on ensuring smooth operation of the services provided by the cloud.

We've watched this transformation happen internally in Solaris Engineering as we've shifted from ad-hoc management of the test and development systems used, to managing that infrastructure as an OpenStack cloud. Utilization efficiency of our infrastructure has dramatically improved as Engineers who formerly "camped" on systems to ensure those environments would be available when needed no longer need to, since they can easily save and later re-deploy images of their development environment in minutes. Wasted time formerly spent hunting through lists of systems trying to find one that's free, working, and sufficient, is now spent getting actual work done, or better yet, drinking coffee!

If you've been thinking you would like to get started learning about OpenStack, perhaps by experimenting and building yourself a small private cloud, there's really never been a better time, especially since today we're very excited to announce that OpenStack Juno is now available to you as part of Oracle Solaris 11.3 Beta. You can start small, and in about 10 minutes install a Solaris Unified Archive that is essentially a fully configured OpenStack Cloud-In-A-Box. Deploy the OpenStack Unified Archive to a system, perform a few configuration steps (specific to your environment, e.g., SSH keys and such), and voilà: you have a functional OpenStack cloud that you can start learning how to operate.
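
As one hedged example of trying it without dedicating hardware (the zone name and archive path here are illustrative), the archive can be deployed into a kernel zone on an existing system:

    # zonecfg -z oscloud create -t SYSsolaris-kz
    # zoneadm -z oscloud install -a /path/to/sol-11_3-openstack.uar
    # zoneadm -z oscloud boot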

If you are more experienced with OpenStack and are looking to build a cloud system for your enterprise that is powered by best-of-breed Solaris technologies, such as Solaris Zones, the ZFS file system, and Solaris SDN...and that leverages SPARC systems, x86 systems, or both, you'll appreciate how well we've integrated the world's most popular open source cloud infrastructure software with the Solaris technologies you've come to know and trust.

Within Solaris 11.3 Beta, we've integrated the Juno versions of the core OpenStack Cloud Infrastructure services: Nova, Neutron, Cinder, Swift, Keystone, Glance, Heat, and Horizon, along with the drivers enabling OpenStack to drive Solaris virtualization, and ZFS backed shared storage over iSCSI or FC (both from Solaris natively or via the ZFS Storage Appliance). Within OpenStack Horizon, you'll find an integrated Zones Console interface, and you can upgrade your 11.2 Havana based OpenStack cloud via IPS to Juno based Solaris 11.3 Beta.

Post 11.3 Beta, we'll be very excited to introduce bare metal provisioning support for SPARC and x86 systems through OpenStack Ironic. In addition to offering virtualized environments of varying sizes/configs (e.g., flavors) to cloud tenants, Ironic enables bare metal flavors to be provided as well. We'll probably have a few more exciting features to talk about too. :) But in the meanwhile, we hope you enjoy OpenStack Juno on Solaris 11.3 Beta, and do let us know if you have any questions and/or run into any issues as we would be more than happy to help!

Tuesday May 19, 2015

Oracle Solaris gets OpenStack Juno Release

We've just recently pushed an update to Oracle OpenStack for Oracle Solaris. Supported customers who have access to the Support Repository Updates (SRU) can upgrade their OpenStack environments to the Juno release with the availability of SRU 11.2.10.5.0 onwards.

The Juno release includes a number of new features and in general offers a more polished cloud experience for users and administrators. We've written a document that covers the upgrade from Havana to Juno for those on SRU 10.5 and SRU 11.5. The upgrade process involves some manual administration to copy and merge OpenStack configuration across the two releases and to upgrade the database schemas that the various services use. We've also worked hard to provide a seamless automatic upgrade - this is now available from Oracle Solaris 11.2 SRU 12.5 onwards!
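
On a system attached to the support repository, the update itself is the usual IPS workflow; a sketch (pkg creates a new boot environment to upgrade into):

    # pkg update --accept
    # reboot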

-- Glynn Foster

How to setup a HA OpenStack environment with Oracle Solaris Cluster

The Oracle Solaris Cluster team has just released a new technical whitepaper that covers how administrators can use Oracle Solaris Cluster to set up an HA OpenStack environment on Oracle Solaris.

Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster

In a typical multi-node OpenStack environment, it's important that administrators can set up infrastructure that is resilient to service or hardware failure. Oracle Solaris Cluster is developed in lock step with Oracle Solaris to provide additional HA capabilities and is deeply integrated into the platform. Service availability is maximized with fully orchestrated disaster recovery for enterprise applications in both physical and virtual environments. Leveraging these core values, we've written some best practices for integrating clustering into an OpenStack environment, with a guide that initially covers a two-node cloud controller architecture. Administrators can then use this as a basis for a more complex architecture spanning multiple physical nodes.

-- Glynn Foster

Friday May 15, 2015

Database as a Service with Oracle Database 12c, Oracle Solaris and OpenStack

Just this morning Oracle announced a partnership with Mirantis to bring Oracle Database 12c to OpenStack. This collaboration enables Oracle Solaris and Mirantis OpenStack users to accelerate application and database provisioning in private cloud environments via Murano, the application catalog project in the OpenStack ecosystem. This effort brings Oracle Database 12c and Oracle Multitenant deployed on Oracle Solaris to Murano—the first Oracle cloud-ready products to be available in the catalog.

We've been hearing from lots of customers wanting to quickly deploy Oracle Database instances in their OpenStack environments, and we're excited to be able to make this happen. Thanks to Oracle Database 12c and Oracle Multitenant, users can quickly create new Pluggable Databases to use in their cloud applications, backed by the secure and enterprise-scale foundations of Oracle Solaris and SPARC. What's more, with the upcoming generation of Oracle systems based on the new SPARC M7 processors, users will automatically benefit from the advanced security, performance, and efficiency of Software in Silicon, with features such as Application Data Integrity and the Database In-Memory Query Accelerator.
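
As a taste of why this speeds up provisioning, creating a pluggable database is a short SQL*Plus exercise; a sketch with purely illustrative names, password, and paths:

    SQL> CREATE PLUGGABLE DATABASE salespdb
      2  ADMIN USER pdbadmin IDENTIFIED BY "secret"
      3  FILE_NAME_CONVERT = ('/pdbseed/', '/salespdb/');
    SQL> ALTER PLUGGABLE DATABASE salespdb OPEN;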

So if you're heading to Vancouver next week for the OpenStack Users and Developers Summit, stop by booths P9 and P7 to see a demo!

Update (19/05/15): A technical preview of our work with Murano is now available here on the OpenStack Application Catalog.

Monday Mar 23, 2015

Available Hands-on Labs: Oracle OpenStack for Oracle Linux and Oracle VM

Last year, the Hands-on Lab events for OpenStack at Oracle OpenWorld were completely sold out. People who had no prior experience with OpenStack could not believe how easy it was for them to launch networks and instances and exercise many features of OpenStack. Given the overwhelming demand for the hands-on lab and the positive feedback from participants, we are announcing its availability to you – all you need is a laptop to download the lab and the 21-page document using the links below.

This lab takes you through installing and exercising OpenStack. It covers basic operations, networking, storage, and guest communication. OpenStack has many more features you can explore using this setup. The lab also shows you how to transfer information into the guest, which is very important when creating templates or when trying to automate the deployment process. As we have stated, our goal is to help make OpenStack an enterprise-grade solution, and the hands-on lab gives you a very quick and easy way to learn how to transfer key information about your own application template into the guest – a key step in real-world deployments.

We encourage users to go ahead and use this setup to test more OpenStack features. OpenStack is not simple to deal with and usually requires a high level of skill, but with this VirtualBox VM users can try out almost every feature.

The Getting Started with the Hands-on Lab document is now available to you on the following websites:

- Landing page:
  http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html

- Pre-installed VirtualBox VM for testing and demo purposes: please visit the landing page above to accept the license agreement, then download either the short or the long version.

- Hands-on lab - OpenStack in VirtualBox (HTML)

- Instructions on how to use the OpenStack VirtualBox image

- Download Oracle VM VirtualBox

 If you have any questions, we have an OpenStack Community Forum where you can raise your questions and add your comments.

Wednesday Feb 25, 2015

Key Points To Know About Oracle OpenStack for Oracle Linux

Now generally available, the Oracle OpenStack for Oracle Linux distribution allows users to control Oracle Linux and Oracle VM through OpenStack in production environments. Based on the OpenStack Icehouse release, Oracle’s distribution provides customers with increased choice and interoperability and takes advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. Oracle OpenStack for Oracle Linux is available as part of Oracle Linux Premier Support and Oracle VM Premier Support offerings at no additional cost.

The Oracle OpenStack for Oracle Linux distribution is generally available, allowing customers to use OpenStack software with Oracle Linux and Oracle VM.

Oracle OpenStack for Oracle Linux is OpenStack software that installs on top of Oracle Linux. To help ensure flexibility and openness, it can support any guest operating system (OS) that is supported with Oracle VM, including Oracle Linux, Oracle Solaris, Microsoft Windows, and other Linux distributions.

This release allows customers to build a highly scalable, multitenant environment and integrate with the rich ecosystem of plug-ins and extensions available for OpenStack.

In addition, Oracle OpenStack for Oracle Linux can integrate with third-party software and hardware to provide more choice and interoperability for customers.

Oracle OpenStack for Oracle Linux is available as a free download from the Oracle Public Yum Server and Unbreakable Linux Network (ULN).

An Oracle VM VirtualBox image of the product is also available on Oracle Technology Network, providing an easy way to get started with OpenStack.

http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html


Here are some of the benefits:

  • Extends choice for building public or private clouds with enterprise-class components
  • Accelerates cloud deployment with ease and peace of mind
  • Provides end-to-end support from the OpenStack platform to base OS, guest OS and Oracle workloads from a single vendor
  • Delivers built-in high-availability support with Oracle Clusterware to ensure continuity and resiliency of OpenStack services
  • Reduces total cost of ownership with zero license cost and low enterprise support cost

 

Read more at the Oracle OpenStack for Oracle Linux website

Download now

Monday Feb 02, 2015

New OpenStack Hands on Labs

We've just published two new Hands-on Labs that we ran during last year's Oracle OpenWorld. The labs originally ran on a SPARC T5-4 system with an attached Oracle ZFS Storage Appliance. During the lab, we walked participants through how to set up an OpenStack environment on Oracle Solaris, and then showed them how to create a golden-image environment of the Oracle Database to be used to rapidly clone new VMs in the cloud. We've customized the labs so that they can be run in Oracle VM VirtualBox, so check out the following labs:

Enjoy!

Tuesday Sep 02, 2014

Building an OpenStack Cloud for Solaris Engineering

Dave Miner has started to blog his experiences in deploying OpenStack internally for the Oracle Solaris engineering organization. Here's a blurb from the first post of the blog series:

In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.

You can read the rest of the blog series here (will update this post with new links as they are published):

Wednesday Aug 27, 2014

Multi-node Solaris 11.2 OpenStack on SPARC Servers

In this blog post we are going to look at how to partition a single Oracle SPARC server and configure multi-node OpenStack on the server running Oracle VM Server for SPARC (LDoms).

If we are going to partition the server into multiple Root domains and, optionally, IO domains (not with SR-IOV VFs), then configuring Solaris OpenStack Havana on these domains is very similar to setting up OpenStack on multiple individual physical machines.

On the other hand, if we are going to partition the server into multiple domains such that each domain (other than the primary domain) utilizes either

   -- networking service from primary domain OR
   -- SR-IOV Virtual Function (VF)

then there are some networking constraints that dictate how these domains can be used to run OpenStack services and how they can be used as compute nodes to host zones. We will look into these constraints and see how we can use VXLAN tunneling technology to overcome them.

Note: For the purposes of this blog, any non-primary domain is a guest domain. It is assumed that the user is familiar with LDoms Virtual Networking, SR-IOV VFs, and Crossbow VNICs.

Networking Constraint

To support a solaris brand or solaris-kz brand zone inside a guest domain, or just a VNIC inside a guest domain, the VNET device (or VF device) must be instantiated with several alternate MAC addresses (see here). If the device has just one MAC address, then VNIC creation fails as shown below:
   +-------------------------------------------------------------------------+
   |guest_domain_1# dladm show-phys net0                                     |
   |LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE|
   |net0              Ethernet             up         0      unknown   vnet0 |
   |guest_domain_1# dladm show-phys -m net0                                  |
   |LINK                SLOT     ADDRESS            INUSE CLIENT             |
   |net0                primary  0:14:4f:fb:37:a    yes   net0               |
   |guest_domain_1# dladm create-vnic -l net0 vnic0                          |
   |dladm: vnic creation failed: operation not supported                     |
   +-------------------------------------------------------------------------+

If the VNET device was added with several alternate MAC addresses, then one can create a VNIC:

   +--------------------------------------------------------------------------------+
   |guest_domain_1# dladm show-phys -m net1                                         |
   |LINK                SLOT     ADDRESS            INUSE CLIENT                    |
   |net1                primary  0:14:4f:fb:af:ed   no    --                        |
   |                    1        0:14:4f:fb:4c:8a   no    --                        |
   |                    2        0:14:4f:fb:ea:71   no    --                        |
   |                    3        0:14:4f:fa:e9:b8   no    --                        |
   |guest_domain_1# dladm create-vnic -l net1 vnic0                                 |
   |guest_domain_1# dladm show-vnic vnic0                                           |
   |LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS |
   |vnic0               net1              0      0:14:4f:fb:4c:8a  factory, slot 1 0|
   +--------------------------------------------------------------------------------+
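
The alternate addresses are assigned from the control domain when the virtual network device is created or modified; a hedged sketch using the LDoms manager (domain, vswitch, and device names illustrative):

    primary# ldm add-vnet alt-mac-addrs=auto,auto,auto,auto vnet1 primary-vsw0 guest_domain_1
    primary# ldm set-vnet alt-mac-addrs=auto,auto,auto,auto vnet0 guest_domain_1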

However, we cannot create a VNIC with an arbitrary MAC address: the MAC address must be one of the alternate MAC addresses. This is the constraint I was alluding to earlier. OpenStack Neutron, through the Solaris Elastic Virtual Switch (EVS), assigns a random MAC address to an OpenStack Neutron port. When a VM is launched inside the guest domain, Solaris tries to create a VNIC with this random MAC address, and the zone boot fails.

In the case of para-virtualized networking, guest domains transmit/receive packets through the primary domain's physical device. If the physical device in the primary domain is unaware of the MAC addresses used inside the guest domains, then the zones or VNICs using the random MAC address will not receive packets.

In the case of SR-IOV VF, guest domains transmit/receive packets through the VF inside the guest domain. However, these VFs are pre-programmed with MAC addresses, and the guest cannot create VNICs outside of these MAC addresses.

The upstream OpenStack community has resolved this issue for other hypervisors by re-creating the port at the time of VM launch, using one of the unused hypervisor MAC addresses. However, this is not as straightforward in Solaris: instead of a list of MAC addresses per server, Solaris has a list of MAC addresses per device. We realize this is a gap, and we are working toward fixing it.

VXLAN (Virtual eXtensible LAN) to the rescue

VXLAN, or Virtual eXtensible LAN, is a tunneling mechanism that provides isolated virtual Layer 2 (L2) segments that can span multiple physical L2 segments. Since it is a tunneling mechanism, it uses IP (IPv4 or IPv6) as its underlying network, which means we can have isolated virtual L2 segments over networks connected by IP. This allows virtual machines (VMs) to be in the same L2 segment even if they are located on systems in different physical networks. For more information on VXLAN, do read this blog post.

VXLAN enables you to create VNICs with any MAC address on top of VXLAN datalinks; the packets from these VNICs are wrapped in an IP packet that uses the primary MAC address of the VNET or VF device. The inner MAC address plays no part in routing packets in and out of the guest domain.
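
A hedged sketch of this with dladm (the VNI, underlay address, and MAC address here are illustrative):

    guest_domain_1# dladm create-vxlan -p addr=10.129.192.3,vni=2000 vxlan1
    guest_domain_1# dladm create-vnic -l vxlan1 -m 2:8:20:aa:bb:cc vnic0

Since 10.129.192.3 is hosted on the guest's net0, the encapsulated frames leave the domain through net0.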



In the above case, the packets from the VNIC (vnic0) will be wrapped in a UDP->IP->Ethernet packet before finally being delivered out of net0.

Basic requirements to use VXLAN 

(a) The IP interfaces of the primary domain and all the guest domains should be in the same subnet. This is not a hard requirement, but it avoids the need for multicast routing.


In the setup above, all the domains are part of the 10.129.192.0/24 subnet. 10.129.192.1 is the default gateway IP, the primary domain is assigned 10.129.192.2, and the guest domains guest_domain_1 and guest_domain_2 are assigned 10.129.192.3 and 10.129.192.4, respectively. Various VXLAN datalinks will be created on top of these IP interfaces. Note that one VXLAN datalink is created for each OpenStack network.
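
For instance, the guest domain addressing can be put in place with ipadm (interface names illustrative):

    guest_domain_1# ipadm create-ip net0
    guest_domain_1# ipadm create-addr -T static -a 10.129.192.3/24 net0/v4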


(b) OpenStack services placement

Strictly speaking, only the OpenStack Neutron L3 agent needs to run in the primary domain; the rest of the OpenStack services can run in a guest domain. The Neutron L3 agent deals with infrastructure that needs VLANs, for example, providing public addresses for tenants' VMs.

In the setup described below, the OpenStack services are placed as shown in the following list:
   +------------------------------+
   |Primary Domain:               |
   |  - Neutron server            |
   |  - Neutron L3 agent          |
   |  - Neutron DHCP agent        |
   |  - EVS controller            |
   |                              |
   |Guest Domain (guest_domain_1):|
   |  - Cinder services           |
   |  - Glance services           |
   |  - Nova services             |
   |  - Keystone services         |
   |  - Horizon services          |
   |                              |
   |Guest Domain (guest_domain_2):|
   |  - Nova compute              |
   +------------------------------+

Configuring OpenStack services on individual nodes

On the primary domain:

   - Modify the following options in /etc/neutron/neutron.conf
   +-------------------------------------------+
   |rabbit_host = 10.129.192.3                 |
   |auth_host = 10.129.192.3                   |
   |identity_uri = http://10.129.192.3:35357   |
   |auth_uri = http://10.129.192.3:5000/v2.0   |
   +-------------------------------------------+

   - Set the EVS controller to 10.129.192.2
   +-------------------------------------------------------------------------+
   |primary_domain# evsadm set-prop -p controller=ssh://evsuser@10.129.192.2 |
   +-------------------------------------------------------------------------+

     Copy neutron's, root's, and evsuser's public keys into /var/user/evsuser/.ssh/authorized_keys so that those users can
     do password-less ssh into 10.129.192.2 as evsuser.

   - Set the following options on EVS controller
   +----------------------------------------------------------------+
   |primary_domain# evsadm set-controlprop -p l2-type=vxlan         |
   |primary_domain# evsadm set-controlprop -p uplink-port=net0      |
   |primary_domain# evsadm set-controlprop -p vxlan-range=2000-3000 |
   |primary_domain# evsadm set-controlprop -p vlan-range=1          |
   +----------------------------------------------------------------+

   - Enable the Solaris IP Filter feature (svcadm enable ipfilter)

   - Enable IP forwarding (ipadm set-prop -p forwarding=on ipv4)

   - Enable Neutron server (svcadm enable neutron-server)

   - Enable Neutron DHCP agent (svcadm enable neutron-dhcp-agent)

On guest_domain_1:

   - Delete the keystone service endpoint that says neutron is available on 10.129.192.3,
     and add a new service endpoint for neutron as shown in the following keystone command.

    guest_domain_1# set |grep OS_
    OS_AUTH_URL=http://10.129.192.3:5000/v2.0
    OS_PASSWORD=neutron
    OS_TENANT_NAME=service
    OS_USERNAME=neutron
   +---------------------------------------------------------------------------------+
   |guest_domain_1# keystone endpoint-create --region RegionOne \                    |
   |--service 4f49dea054b46cf6f83afff4a216aa13 --publicurl http://10.129.192.2:9696 \|
   |--adminurl http://10.129.192.2:9696 --internalurl http://10.129.192.2:9696       |
   +---------------------------------------------------------------------------------+

   - Set the EVS controller to 10.129.192.2
   +--------------------------------------------------------------------------+
   |guest_domain_1# evsadm set-prop -p controller=ssh://evsuser@10.129.192.2  |
   +--------------------------------------------------------------------------+
     Copy root's public key into 10.129.192.2:/var/user/evsuser/.ssh/authorized_keys so that root on this machine can ssh
     as evsuser into 10.129.192.2. This is needed by zoneadmd that runs as a root to fetch EVS information from the EVS
     controller.

On guest_domain_2:

   - Set the EVS controller to 10.129.192.2
   +-------------------------------------------------------------------------+
   |guest_domain_2# evsadm set-prop -p controller=ssh://evsuser@10.129.192.2 |
   +-------------------------------------------------------------------------+

     Copy root's public key into 10.129.192.2:/var/user/evsuser/.ssh/authorized_keys so that root on this machine can ssh
     as evsuser into 10.129.192.2. This is needed by zoneadmd that runs as a root to fetch EVS information from EVS
     controller.

Create Networks

Creating an internal network for tenant demo:
    guest_domain_1# set |grep OS_
    OS_AUTH_URL=http://10.129.192.3:5000/v2.0
    OS_PASSWORD=secrete
    OS_TENANT_NAME=demo
    OS_USERNAME=admin
   +-------------------------------------------------------------------------------+
   |guest_domain_1# neutron net-create eng_net                                     |
   |guest_domain_1# neutron subnet-create --name eng_subnet eng_net 192.168.10.0/24|
   +-------------------------------------------------------------------------------+

Creating an external network for the tenant service:
    primary_domain# set |grep OS_
    OS_AUTH_URL=http://10.129.192.3:5000/v2.0
    OS_PASSWORD=neutron
    OS_TENANT_NAME=service
    OS_USERNAME=neutron
   +-----------------------------------------------------------------------------+
   |primary_domain# neutron net-create --router:external=true ext_net \          |
   |--provider:network_type=vlan                                                 |
   |                                                                             |
   |primary_domain# neutron subnet-create --name ext_subnet --enable_dhcp=false \|
   |ext_net 10.129.192.0/24                                                      |
   +-----------------------------------------------------------------------------+

Creating a router and add interfaces to it
   +------------------------------------------------------+
   |primary_domain# neutron router-create provider_router |
   +------------------------------------------------------+
       Copy the router UUID from the above output and set it to router_id in /etc/neutron/l3_agent.ini.
   +---------------------------------------------------------------------------------+
   |primary_domain# neutron router-gateway-set <router_uuid> <external_network_uuid> |
   |primary_domain# neutron router-interface-add <router_uuid> <internal_subnet_uuid>|
   +---------------------------------------------------------------------------------+

Enable the Neutron L3 agent
   +-----------------------------------------------+
   |primary_domain# svcadm enable neutron-l3-agent |
   +-----------------------------------------------+

        At this point, the following resources have been created in primary_domain:

   +-------------------------------------------------------------------------------+
   |primary_domain# dladm show-vxlan                                               |
   |LINK                ADDR                     VNI   MGROUP                      |
   |evs-vxlan2000       10.129.192.2             2000  224.0.0.1                   |
   |primary_domain# dladm show-vnic                                                |
   |LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS|
   |ldoms-vsw0.vport0   net0              1000   0:14:4f:fb:37:a   fixed       0   |
   |evsb0abc182_2_0     evs-vxlan2000     1000   2:8:20:c9:ee:39   fixed       0   |
   |l3id27a4750_2_0     evs-vxlan2000     1000   2:8:20:af:d0:65   fixed       0   |
   |l3ec631ab64_2_0     net0              1000   2:8:20:32:84:94   fixed       0   |
   |primary_domain# ipadm                                                          |
   |NAME                 CLASS/TYPE STATE        UNDER      ADDR                   |
   |evsb0abc182_2_0      ip         ok           --         --                     |
   |  evsb0abc182_2_0/v4 static     ok           --         192.168.10.2/24        |
   |l3ec631ab64_2_0      ip         ok           --         --                     |
   |  l3ec631ab64_2_0/v4 static     ok           --         10.129.192.5/24        |
   |l3id27a4750_2_0      ip         ok           --         --                     |
   |  l3id27a4750_2_0/v4 static     ok           --         192.168.10.1/24        |
   |net0                 ip         ok           --         --                     |
   |  net0/v4            static     ok           --         10.129.192.2/24        |
   +-------------------------------------------------------------------------------+

Launch a VM

Launch a VM connected to the internal network. Once the VM is in the Active state, you will see the following resources created in guest_domain_1:
   +-------------------------------------------------------------------------------+
   |guest_domain_1# dladm show-vxlan                                               |
   |LINK                ADDR                     VNI   MGROUP                      |
   |evs-vxlan2000       10.129.192.3             2000  224.0.0.1                   |
   |guest_domain_1# dladm show-vnic                                                |
   |LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS|
   |instance-00000005/net0 evs-vxlan2000  0      2:8:20:5b:ec:6b   fixed       0   |
   +-------------------------------------------------------------------------------+

From within the zone, you can ping the default gateway IP of 192.168.10.1, which lives in the primary domain:

   +-------------------------------------------------------------+
   |root@host-192-168-10-3:~# ping -s 192.168.10.1               |
   |PING 192.168.10.1: 56 data bytes                             |
   |64 bytes from 192.168.10.1: icmp_seq=0. time=0.432 ms        |
   |64 bytes from 192.168.10.1: icmp_seq=1. time=0.452 ms        |
   |64 bytes from 192.168.10.1: icmp_seq=2. time=0.326 ms        |
   |^C                                                           |
   |----192.168.10.1 PING Statistics----                         |
   |3 packets transmitted, 3 packets received, 0% packet loss    |
   |round-trip (ms)  min/avg/max/stddev = 0.326/0.403/0.452/0.068|
   +-------------------------------------------------------------+

Create and associate a Floating IP
   +-----------------------------------------------------------------------------+
   |guest_domain_1# neutron floatingip-create <external_network_uuid>            |
   |guest_domain_1# neutron floatingip-associate <floatingip_uuid> <VM_Port_UUID>|
   +-----------------------------------------------------------------------------+

Check the IP Filter and IP NAT rules on the primary domain:
   +-----------------------------------------------------------------------+
   |primary_domain# ipadm show-addr l3ec631ab64_2_0/                       |
   |ADDROBJ             TYPE     STATE        ADDR                         |
   |l3ec631ab64_2_0/v4  static   ok           10.129.192.5/24              |
   |l3ec631ab64_2_0/v4a static   ok           10.129.192.6/32              |
   |                                                                       |
   |primary_domain# ipfstat -io                                            |
   |empty list for ipfilter(out)                                           |
   |block in quick on l3id27a4750_2_0 from 192.168.10.0/24 to pool/11522149|
   |                                                                       |
   |primary_domain# ipnat -l                                               |
   |List of active MAP/Redirect filters:                                   |
   |bimap l3ec631ab64_2_0 192.168.10.3/32 -> 10.129.192.6/32               |
   |                                                                       |
   |List of active sessions:                                               |
   +-----------------------------------------------------------------------+

Now the VM should be accessible from the external network as 10.129.192.6.

   +-------------------------------------------------------------+
   |[gmoodalb@thunta:~]                                          |
   |>ping -ns 10.129.192.6                                       |
   |PING 10.129.192.6 (10.129.192.6): 56 data bytes              |
   |64 bytes from 10.129.192.6: icmp_seq=0. time=0.919 ms        |
   |64 bytes from 10.129.192.6: icmp_seq=1. time=0.854 ms        |
   |64 bytes from 10.129.192.6: icmp_seq=2. time=0.828 ms        |
   |^C                                                           |
   |----10.129.192.6 PING Statistics----                         |
   |3 packets transmitted, 3 packets received, 0% packet loss    |
   |round-trip (ms)  min/avg/max/stddev = 0.828/0.867/0.919/0.047|
   |[gmoodalb@thunta:~]                                          |
   |>ssh root@10.129.192.6                                       |
   |Password:                                                    |
   |Last login: Fri Aug 22 21:32:38 2014 from 10.132.146.13      |
   |Oracle Corporation      SunOS 5.11      11.2    June 2014    |
   |root@host-192-168-10-3:~# zonename                           |
   |instance-00000005                                            |
   |root@host-192-168-10-3:~#                                    |
   +-------------------------------------------------------------+

Tuesday Aug 05, 2014

OpenStack Havana Updates

Today we pushed some updates to OpenStack on Oracle Solaris into the release repository. These updates provide fixes for a number of bugs that were uncovered leading up to the general release of Oracle Solaris 11.2. The fixes can be summarized as follows:

  • General robustness and fit-n-finish cleanup for Horizon
  • DHCP, L3, IPv6, and floating IP fixes for Neutron
  • Nova improvements to deal with halted zones
  • ZS3 Cinder driver fix for attaching multiple volumes
  • Package dependency fixes for minimization
  • Minor configuration file simplifications
These fixes will also be pushed into the support repository when Oracle Solaris 11.2 SRU 1 becomes available.

To update to these packages you can use the following command:

# pkg update
This will automatically apply the new package versions. You will then need to manually restart the following OpenStack services (a restart sketch follows the list):
cinder-volume:default
http:apache22
keystone
neutron-dhcp-agent
neutron-l3-agent
neutron-server
nova-compute
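
A sketch of those restarts with svcadm (FMRIs abbreviated; use svcs -a to confirm the exact instance names on your system):

    # svcadm restart cinder-volume:default
    # svcadm restart http:apache22
    # svcadm restart keystone neutron-dhcp-agent neutron-l3-agent \
          neutron-server nova-compute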

For reference, here's the list of packages that have been updated:

cloud/openstack/cinder
cloud/openstack/glance
cloud/openstack/horizon
cloud/openstack/keystone
cloud/openstack/neutron
cloud/openstack/nova
cloud/openstack/swift
library/python-2/jsonpatch
library/python-2/jsonpatch-26
library/python-2/jsonpatch-27
service/network/dnsmasq

Happy OpenStacking!

-- Glynn Foster

Thursday Jul 31, 2014

OpenStack 101 - How to get started on Oracle Solaris 11

As Eric has already mentioned, with Oracle Solaris 11.2 we've included a complete, enterprise-ready distribution of OpenStack based on the "Havana" release of the upstream project. We've talked to many customers who have expressed an interest in OpenStack generally, but also in having Oracle Solaris participate in the heterogeneous mix of technologies that you'd typically see in a data center environment. We're absolutely thrilled to be providing this functionality to our customers as part of the core Oracle Solaris platform and support offering, so they can set up agile, self-service private clouds with Infrastructure-as-a-Service (IaaS), or develop Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) solutions on top of this infrastructure.

If you haven't had much experience with OpenStack, you'll almost certainly be confused by the myriad of project names for the core components of an OpenStack cloud. Here's a handy table:

Nova: OpenStack Nova provides a cloud computing fabric controller that supports a wide variety of virtualization technologies. In addition to its native API, it includes compatibility with the commonly encountered Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3) APIs.
Neutron: OpenStack Neutron provides an API to dynamically request and configure virtual networks. These networks connect "interfaces" from other OpenStack services (for example, VNICs from Nova VMs). The Neutron API supports extensions to provide advanced network capabilities, for example, quality of service (QoS), access control lists (ACLs), and network monitoring.
Cinder: OpenStack Cinder provides an infrastructure for managing block storage volumes in OpenStack. It allows block devices to be exposed and connected to compute instances for expanded storage, better performance, and integration with enterprise storage platforms.
Swift: OpenStack Swift provides object storage services for projects and users in the cloud.
Glance: OpenStack Glance provides services for discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations, from simple file systems to object-storage systems such as OpenStack Swift.
Keystone: OpenStack Keystone is the OpenStack identity service, used for authentication between the OpenStack services.
Horizon: OpenStack Horizon is the canonical implementation of OpenStack's dashboard, which provides a web-based user interface to OpenStack services including Nova, Neutron, Cinder, Swift, Keystone, and Glance.

So how do you get started? Due to OpenStack's distributed architecture, with different services able to run across multiple nodes, it isn't the easiest thing in the world to configure and get running. We've made that easier: you can set up a single-node, pre-configured instance to evaluate initially using an OpenStack Unified Archive and an excellent getting started guide. Once you've got up to speed on a single-node setup, you can use your experience to deploy OpenStack across multiple nodes. We've also got a bunch of other resources available:

We're just starting our journey of providing OpenStack on Oracle Solaris with this initial integration and we expect to deliver more value over time. Ready to start your journey with OpenStack in your data center?

-- Glynn Foster

Oracle Solaris 11 - Engineered for Cloud

Today's release of Oracle Solaris 11.2 is especially meaningful for many of us in Solaris Engineering who have been hard at work over the last few years making OpenStack cloud infrastructure a first-class Solaris technology. Today we release not only one of the most significant, complete, and solid versions of Solaris ever, with many new cloud virtualization features, but also the fully integrated cloud infrastructure software itself...everything needed (from a software perspective, anyway ;)) to stand up a fully functional OpenStack cloud system providing Infrastructure as a Service (IaaS) and cloud block/object storage on both SPARC and x86 based systems.

Why is the Solaris Engineering Team tackling cloud infrastructure? For the enterprise, what we consider to be the "Operating System" is shifting thanks to the rise of cloud computing. When you think about the role of an operating system, what comes to mind? What does it do, fundamentally? Of course, it's the software that manages and allocates compute resources to users and workloads. It virtualizes those resources (CPU, memory, persistent storage) to provide applications with elasticity in their resource use. It runs workloads, hosts services, and provides APIs and interfaces for both workloads and users of those services. However, operating systems have tended to do this within the confines of a single physical system (or VM).

Cloud systems fundamentally need to provide all of these same basic OS services. Resources need to be virtualized and allocated from a pool of compute, networking, and storage. Applications need the illusion of resource elasticity to enable them to scale to meet the demands of the workload and users...and the cloud system needs to run workloads and host services.

We've evolved from the time when enterprise applications were simply a number of processes/threads running on bare metal or in a VM, consuming CPU, memory, and storage, and talking over the network...and we see the enterprise OS evolving as well. Today's and tomorrow's enterprise applications are distributed workloads and cloud services that are hosted and run on cloud systems spanning many physical nodes. OpenStack provides a standard set of interfaces that have enabled us to evolve Solaris into a fully open, yet very differentiated, platform for hosting cloud services and workloads.

That differentiation comes in part because we've built OpenStack on Solaris to seamlessly leverage many features newly available with Solaris 11.2, including Kernel Zones based virtualization offered up via OpenStack Nova, Unified Archive based image deployment served up via Glance, and Elastic Virtual Switch based SDN managed by OpenStack Neutron. Solaris also provides ZFS-backed cloud block and object storage (through OpenStack Cinder and Swift) over iSCSI and Fibre Channel connected storage and/or via Oracle's ZFS Storage Appliances.

Differentiation also comes about because Solaris based OpenStack has at its foundation the platform and technology you know and trust for running your mission critical enterprise workloads. Unparalleled reliability, scalability, efficiency and performance...both for hosting mission critical cloud services, as well as your mission critical cloud infrastructure, are all just as important as they've always been.

So what's the best way to get started? You don't need a massive sprawl of infrastructure to begin. With just a system or two, you can create your own Solaris-based OpenStack cloud providing Infrastructure as a Service (IaaS). Check out Getting Started with OpenStack on Solaris 11.2. You can also find Solaris 11.2 in the OpenStack Marketplace.

You'll find packages for the Havana version of OpenStack available in the Solaris 11 package repositories, including Nova, Neutron, Cinder, Glance, Keystone, Horizon, and Swift.

If you run into issues, or have questions, feel free to drop us a note at solaris_openstack_interest@openstack.java.net...we're happy to help! Enjoy!

About

Oracle OpenStack is cloud management software that provides customers an enterprise-grade solution to deploy and manage their entire IT environment. Customers can rapidly deploy Oracle and third-party applications across shared compute, network, and storage resources with ease, with end-to-end enterprise-class support. For more information, see here.
