Wednesday Feb 13, 2013

Solaris Stories

Demand for Solaris continues to increase, as shown by recent customer references.

Tuesday Feb 12, 2013

Solaris 10 1/13 (aka "Update 11") Released

Larry Wake lets us know that Solaris 10 1/13 has been released and is available for download.

Tuesday Jan 22, 2013

Analyst commentary on Solaris and SPARC

Forrester's Richard Fichera updates and confirms his earlier views on the present and future of Solaris and SPARC.

Monday Nov 12, 2012

New Solaris Cluster!

We released Oracle Solaris Cluster 4.1 recently. OSC offers both High Availability (HA) and Scalable Services capabilities. HA delivers automatic restart of software on the same cluster node and/or automatic failover from a failed node to a working cluster node. Software and support are available for both x86 and SPARC systems.

The Scalable Services features manage multiple cluster nodes that all provide a load-balanced service such as web servers or app servers.

OSC 4.1 includes the ability to recover services from software failures and from failures of hardware components such as DIMMs, CPUs, and I/O cards. It also provides a global file system, rolling upgrades, and much more.
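As a quick taste of the HA model, here is roughly what creating a simple failover resource group looks like with the Solaris Cluster object-oriented CLI. This is only a sketch; the group, resource, and hostname names are hypothetical:

GZ# clresourcegroup create web-rg
GZ# clreslogicalhostname create -g web-rg web-host
GZ# clresourcegroup online -eM web-rg

If the node hosting web-rg fails, the cluster brings the logical hostname (and any services you add to the group) online on a surviving node.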

Oracle Availability Engineering posted a brief description and links to details. Or, you can just download it now!

Thursday Jul 14, 2011

Extreme Oracle Solaris Virtualization

There will be a live webcast today explaining how to leverage Oracle Solaris' unmatched virtualization features. The webcast begins at 9 AM PT. Registration is required at Oracle.com.

Monday Jul 11, 2011

Bob's Live Upgrade Advice

Do you wish you could shorten the service outage during an operating system upgrade? Do you want the ability to easily "backout" an OS upgrade?

Do you already use Solaris Live Upgrade? No? Why not!?!? :-)

Bob Netherton has compiled a comprehensive list of Live Upgrade Survival Tips - important pieces of advice that help you minimize service downtime and shorten service recovery if something goes wrong.
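If you have never used Live Upgrade, the basic flow looks roughly like this. This is only a sketch - the BE name and media path are hypothetical, and Bob's tips cover the prerequisite patches and pitfalls you should read about first:

# Create an alternate boot environment (BE) while the system keeps running:
lucreate -n s10u11
# Upgrade the alternate BE from an install image; the live BE is untouched:
luupgrade -u -n s10u11 -s /mnt/s10-media
# Activate the new BE; the only service outage is the reboot itself:
luactivate s10u11
init 6
# To back out, luactivate the previous BE and reboot again.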

Tuesday Jun 14, 2011

Oracle VM Server for SPARC 2.1

In case you missed it: Oracle VM Server for SPARC 2.1 was released last week. The headline feature for this release is Live Migration - the ability to move a running guest OS from one server to another, with service disruption shorter than 1 second.

Learn all about it via the links in the OVMSS 2.1 press release.
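The migration itself is a single command on the source machine. A sketch, with a hypothetical domain name (ldg1) and target host (t5440-b):

# Move the running guest domain "ldg1" to another server:
ldm migrate-domain ldg1 root@t5440-b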

Tuesday Mar 01, 2011

Virtual Network - Part 4

Resource Controls

This is the fourth part of a series of blog entries about Solaris network virtualization. Part 1 introduced network virtualization, Part 2 discussed network resource management capabilities available in Solaris 11 Express, and Part 3 demonstrated the use of virtual NICs and virtual switches.

This entry shows the use of a bandwidth cap on Virtual Network Elements (VNEs). This form of network resource control can effectively limit the amount of bandwidth consumed by a particular stream of packets. In our context, we will restrict the amount of bandwidth that a zone can use.

As a reminder, we have the following network topology, with three zones and three VNICs, one VNIC per zone.

All three VNICs were created on one Ethernet interface in Part 3 of this series.

Capping VNIC Bandwidth

Using a T2000 server in a lab environment, we can measure network throughput with the new dlstat(1) command. This command reports various statistics about data links, including the quantity of packets, bytes, interrupts, polls, drops, blocks, and other data. Because I am trying to illustrate the use of commands, not optimize performance, the network workload will be a simple file transfer using ftp(1). This method of measuring network bandwidth is reasonable for this purpose, but says nothing about the performance of this platform. For example, this method reads data from a disk. Some of that data may be cached, but disk performance may impact the network bandwidth measured here. However, we can still achieve the basic goal: demonstrating the effectiveness of a bandwidth cap.

With that background out of the way, first let's check the current status of our links.

GZ# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
e1000g2     phys      1500   unknown  --         --
e1000g1     phys      1500   down     --         --
e1000g3     phys      1500   unknown  --         --
emp_web1    vnic      1500   up       --         e1000g0
emp_app1    vnic      1500   up       --         e1000g0
emp_db1     vnic      1500   up       --         e1000g0
GZ# dladm show-linkprop emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     autopush        rw   --             --             --
emp_app1     zone            rw   emp-app        --             --
emp_app1     state           r-   unknown        up             up,down
emp_app1     mtu             rw   1500           1500           1500
emp_app1     maxbw           rw   --             --             --
emp_app1     cpus            rw   --             --             --
emp_app1     cpus-effective  r-   1-9            --             --
emp_app1     pool            rw   SUNWtmp_emp-app --             --
emp_app1     pool-effective  r-   SUNWtmp_emp-app --             --
emp_app1     priority        rw   high           high           low,medium,high
emp_app1     tagmode         rw   vlanonly       vlanonly       normal,vlanonly
emp_app1     protection      rw   --             --             mac-nospoof,
                                                                restricted,
                                                                ip-nospoof,
                                                                dhcp-nospoof
<some lines deleted>
Before setting any bandwidth caps, let's determine the transfer rates between a zone on this system and a remote system.

It's easy to use dlstat to determine the data rate to my home system while transferring a file from a zone:

GZ# dlstat -i 10 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   27.99M    2.11G   54.18M   77.34G
       emp_app1       83    6.72K        0        0
       emp_app1      339   23.73K    1.36K    1.68M
       emp_app1    1.79K  120.09K    6.78K    8.38M
       emp_app1    2.27K  153.60K    8.49K   10.50M
       emp_app1    2.35K  156.27K    8.88K   10.98M
       emp_app1    2.65K  182.81K    5.09K    6.30M
       emp_app1      600   44.10K      935    1.15M
       emp_app1      112    8.43K        0        0
The OBYTES column is simply the number of bytes transferred during that data sample. I'll ignore the 1.68MB and 1.15MB data points because the file transfer began and ended during those samples. The average of the other values leads to a bandwidth of 7.6 Mbps (megabits per second), which is typical for my broadband connection.
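To make the arithmetic explicit (assuming 10-second samples and decimal units), converting a single OBYTES sample to megabits per second looks like this:

GZ# echo "scale=2; 10.50 * 8 / 10" | bc
8.40

That is, the 10.50M sample represents 10.50 MB in 10 seconds, or 8.4 Mbps for that interval.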

Let's pretend that we want to constrain the bandwidth consumed by that workload to 2 Mbps. Perhaps we want to leave all of the rest for a higher-priority workload. Perhaps we're an ISP and charge for different levels of available bandwidth. Regardless of the situation, capping bandwidth is easy:

GZ# dladm set-linkprop -p maxbw=2000k emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw       2          --             --
GZ# dlstat -i 20 emp_app1 
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   18.21M    1.43G   10.22M   14.56G
       emp_app1      186   13.98K        0        0
       emp_app1      613   51.98K    1.09K    1.34M
       emp_app1    1.51K  107.85K    3.94K    4.87M
       emp_app1    1.88K  131.19K    3.12K    3.86M
       emp_app1    2.07K  143.17K    3.65K    4.51M
       emp_app1    1.84K  136.03K    3.03K    3.75M
       emp_app1    2.10K  145.69K    3.70K    4.57M
       emp_app1    2.24K  154.95K    3.89K    4.81M
       emp_app1    2.43K  166.01K    4.33K    5.35M
       emp_app1    2.48K  168.63K    4.29K    5.30M
       emp_app1    2.36K  164.55K    4.32K    5.34M
       emp_app1      519   42.91K      643  793.01K
       emp_app1      200   18.59K        0        0
Note that for dladm, the default unit for maxbw is Mbps. The average of the full samples is 1.97 Mbps.

Between zones, the uncapped data rate is higher:

GZ# dladm reset-linkprop -p maxbw emp_app1
GZ# dladm show-linkprop  -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw   --             --             --
GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   20.80M    1.62G   23.36M   33.25G
       emp_app1      208   16.59K        0        0
       emp_app1   24.48K    1.63M  193.94K  277.50M
       emp_app1  265.68K   17.54M    2.05M    2.93G
       emp_app1  266.87K   17.62M    2.06M    2.94G
       emp_app1  255.78K   16.88M    1.98M    2.83G
       emp_app1  206.20K   13.62M    1.34M    1.92G
       emp_app1   18.87K    1.25M   79.81K  114.23M
       emp_app1      246   17.08K        0        0
This five-year-old T2000 can move at least 1.2 Gbps of data internally, but that took five simultaneous ftp sessions. (A better measurement method, one that doesn't include the limits of disk drives, would yield better results, and newer systems, whether x86 or SPARC, have higher internal bandwidth.) In any case, the maximum data rate is not the point here; our purpose is to demonstrate the ability to cap that rate.

You can often resolve a network bottleneck, while maintaining workload isolation, by moving two separate workloads onto the same system in separate zones. However, you might still choose to limit their bandwidth consumption. Fortunately, the network virtualization tools in Solaris 11 Express enable you to accomplish that:

GZ# dladm set-linkprop -t -p maxbw=25m emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw      25          --             --
Note that the change to the bandwidth cap was made while the zone was running, potentially while network traffic was flowing. Also, changes made by dladm are persistent across reboots of Solaris unless you specify "-t" on the command line.

Data moves much more slowly now:

GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   23.84M    1.82G   46.44M   66.28G
       emp_app1      192   16.10K        0        0
       emp_app1    1.15K   79.21K    5.77K    8.24M
       emp_app1   18.16K    1.20M   40.24K   57.60M
       emp_app1   17.99K    1.20M   39.46K   56.48M
       emp_app1   17.85K    1.19M   39.11K   55.97M
       emp_app1   17.39K    1.15M   38.16K   54.62M
       emp_app1   18.02K    1.19M   39.53K   56.58M
       emp_app1   18.66K    1.24M   39.60K   56.68M
       emp_app1   18.56K    1.23M   39.24K   56.17M
<many lines deleted>
The data show an aggregate bandwidth of 24 Mbps.

Conclusion

The network virtualization tools in Solaris 11 Express include various resource controls. The simplest of these is the bandwidth cap, which you can use to effectively limit the amount of bandwidth that a workload can consume. Both physical NICs and virtual NICs may be capped by using this simple method. This also applies to workloads that are in Solaris Zones - both default zones and Solaris 10 Zones which mimic Solaris 10 systems.

Next time we'll explore some other virtual network architectures.

Thursday Jan 27, 2011

Virtual Networks - Part 2

This is the second in a series of blog entries that discuss the network virtualization features in Solaris 11 Express. The first entry discussed the basic concepts and the virtual network elements, including virtual NICs, VLANs, virtual switches, and InfiniBand datalinks.

This entry adds to that list the resource controls and security features that are necessary for a well-managed virtual network.

Virtual Networks, Real Resource Controls

In Oracle Solaris 11 Express, there are four main datalink resource controls:
  1. a bandwidth cap, which limits the amount of traffic passing through a datalink in a small amount of elapsed time
  2. assignment of packet processing tasks to a subset of the system's CPUs
  3. flows, which were introduced in the previous blog post
  4. rings, which are hardware or software resources that can be dedicated to a single purpose.
Let's take them one at a time. By default, a datalink such as a VNIC can consume as much of the physical NIC's bandwidth as it wants. That might be the desired behavior, but if it isn't, you can apply the "maxbw" property to the datalink. The maximum permitted bandwidth can be specified in Kbps, Mbps or Gbps, and it can be changed dynamically: if you set the value too low, you can raise it without disrupting the traffic flowing over that link. Solaris will not allow traffic to flow over that datalink at a rate faster than you specify.

You can "over-subscribe" this bandwidth cap: the sum of the bandwidth caps on the VNICs assigned to a NIC can exceed the rated bandwidth of the NIC. If that happens, the bandwidth caps become less effective.

In addition to the bandwidth cap, packet processing computation can be constrained to the CPUs associated with a workload.

First some background. When Solaris boots, it assigns interrupt handler threads to the CPUs in the system. (See Solaris CPUs for an explanation of the meaning of "CPU".) Solaris attempts to spread the interrupt handlers out evenly so that one CPU does not become a bottleneck for interrupt handling.

If you create non-default CPU pools, the interrupt handlers will retain their CPU assignments. One unintended side effect of this is a situation where the CPUs intended for one workload will be handling interrupts caused by another workload. This can occur even with simple configurations of Solaris Zones. In extreme cases, network packet processing for one zone can severely impact the performance of another zone.
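One way to see whether this is happening is intrstat(1M), which reports per-device interrupt counts and the percentage of each CPU's time spent handling them. A sketch:

GZ# intrstat 5
# Reports per-CPU, per-device interrupt activity every 5 seconds.
# If one zone's device interrupts land on another zone's CPUs, it shows here.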

To prevent this behavior, Solaris 11 Express offers the ability to assign a datalink's interrupt handler to a set of CPUs or a pool of CPUs. To simplify this further, the obvious choice is made for you, by default, for a zone that is assigned its own resource pool. When such a zone boots, a resource pool is created for the zone, a sufficient quantity of CPUs is moved from the default pool to the zone's pool, and interrupt handlers for that zone's datalink(s) are automatically reassigned to that resource pool.

Network flows enable you to create multiple lanes of traffic, allowing network processing to be parallelized. You can assign a bandwidth cap to a flow. Flows were introduced in the previous post and will be discussed further in future posts.
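You can also make those assignments manually. A sketch, where the CPU list, flow name, and port are hypothetical:

GZ# dladm set-linkprop -p cpus=0,1,2,3 emp_app1
# Bind that VNIC's packet processing to specific CPUs.
GZ# flowadm add-flow -l emp_app1 -a transport=tcp,local_port=80 web-flow
GZ# flowadm set-flowprop -p maxbw=50M web-flow
# Cap just the HTTP traffic on that link, not the whole VNIC.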

Finally, the newest high speed NICs support hardware rings: memory resources that can be dedicated to a particular set of network traffic. For inbound packets, this is the first resource control that separates network traffic based on packet information such as destination MAC address. By assigning one or more rings to a stream of traffic, you can commit sufficient hardware resources to it and ensure a greater relative priority for those packets, even if another stream of traffic on the same NIC would otherwise cause congestion and impact packet latency of all streams.

If you are using a NIC that does not support hardware rings, Solaris 11 Express supports software rings, which provide a similar effect.

Virtual Networks, Real Security

In addition to resource controls, Solaris 11 Express offers datalink protection controls. These controls are intended to prevent a user from creating improper packets that would cause mischief on the network. The mac-nospoof property requires that outgoing packets have a MAC address which matches the link's MAC address. The ip-nospoof property implements a similar restriction, but for IP addresses. The dhcp-nospoof property prevents improper DHCP assignment.
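Applying these uses the same linkprop mechanism as the resource controls. A sketch, with a hypothetical address (allowed-ips supplies the addresses that ip-nospoof permits):

GZ# dladm set-linkprop -p protection=mac-nospoof,ip-nospoof emp_app1
GZ# dladm set-linkprop -p allowed-ips=10.1.10.5 emp_app1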

Summary (so far)

The network virtualization features in Solaris 11 Express enable the creation of virtual network devices, leading to the implementation of an entire network inside one Solaris system. Associated resource control features give you the ability to manage network bandwidth as a resource and reduce the potential for one workload to cause network performance problems for another workload. Finally, security features help you minimize the impact of an intruder.

With all of the introduction out of the way, next time I'll show some actual uses of these concepts.

Monday Dec 06, 2010

What's a Solaris CPU?

In the next few blog entries I will use the phrase "Solaris CPUs" to refer to the view that Solaris has of CPUs. In the old days, a CPU was a CPU - one chip, one computational entity, one ALU, one FPU, etc. Now there are many factors to consider - CPU sockets, CPU cores per socket, hardware threads per core, etc.

Solaris 10 and 11 Express schedule processes on "Solaris CPUs" (a phrase I made up). Solaris considers each of these a "Solaris CPU":

  • x86/x64 systems: a CPU core, or in some CPUs, a hardware thread (today, can be one to eight cores per socket, and one to 16 threads per socket), up to 128 "Solaris CPUs" in a Sun Fire X4800
  • UltraSPARC-II, -III[+], -IV[+]: a CPU core, with a maximum of 144 in an E25K
  • SPARC64-VI: a hardware thread, maximum of 256 in a Sun SPARC Enterprise M9000
  • SPARC64-VII[+]: a hardware thread, maximum of 512 in an M9000
  • SPARC CMT (SPARC-T1, -T2+, SPARC T3): a hardware thread, maximum of 512 in a SPARC T3-4
  • SPARC T4: a hardware thread, maximum of 256 in a SPARC T4-4
  • SPARC T5: a hardware thread, maximum of 1,024 in a SPARC T5-8
  • SPARC M5: a hardware thread, maximum of 1,536 in a SPARC M5-32
  • SPARC M6: a hardware thread, maximum of 3,072 in a SPARC M6-32
Each of these "Solaris CPUs" can be controlled independently by Solaris. For example, each one can be configured into a processor set.

[Edit 2013.04.25: Fixed a detail, and added T4, T5 and M5.]
[Edit 2013.11.05: Added M6.]

Monday Nov 15, 2010

Oracle Solaris 11 Express Released!

What's New in Oracle Solaris 11 Express 2010.11

Oracle Solaris 11 Express was announced today. It is a fully supported, production-ready member of the Oracle Solaris family. First, here are the major additions and enhancements. I will expand on a few of them in future blog entries.
  • IPS (Image Packaging System) - a new, network-based package management system which replaces the System V Release 4 packaging system that had been used for Solaris packages as well as non-Sun software. The SVR4 package tools are still there for non-Solaris packages. With IPS, package updates are automatically downloaded and installed, under your control, from a network-based repository. If you need custom environments or deliver software, you can create your own repositories. A system can be configured to get packages from multiple repositories. (See the short pkg(1) sketch after this list.) IPS Documentation

  • Solaris 10 Containers are Zones on a Solaris 11 Express system which mimic the operating environment of a Solaris 10 system. (This is the same concept as Solaris 8 Containers and Solaris 9 Containers, which run on Solaris 10 systems.) This feature set includes P2V and V2V tools to convert Solaris 10 systems, and native zones, respectively, into Solaris 10 Containers on an Oracle Solaris 11 Express system. Solaris 10 Containers Documentation

  • Network Virtualization and Resource Management: A comprehensive set of virtual network components (vNICs, vSwitches) and resource management and observability tools. When combined with Solaris Zones configured as routers, you can create complete networks within one system. Existing tools, such as IP Filter, complete the picture by enhancing network isolation. The possibilities are endless - worth at least a few blog entries... ;-)
    Components include:
    • Virtual NICs (VNICs): multiple VNICs can be configured to use one (physical) NIC. You can manage a VNIC with the same tools you use to manage NICs.
    • Virtual switches (vSwitches): abstract, software network elements, to which VNICs can be attached. These elements fulfill the same purpose as physical switches.
    • Virtual Routers: you can configure a Zone with multiple VNICs and configure routing in the Zone. You can also turn off all of the other services in the Zone, and configure IP Filter in that zone, to make a Router Zone behave just like a physical router - without using rack space, or using a lot of power, or being limited by the bandwidth of a physical connection.
    • Bandwidth controls: you can use the dladm(1M) command to limit the amount of bandwidth that a NIC or VNIC can use. This can be used to prevent overrunning a NIC, or to simulate the limited bandwidth of a physical NIC, or as a sanity check in sophisticated network configurations.
    Network virtualization documentation

  • ZFS now includes dataset encryption, deduplication and snapshot-diff features. ZFS Documentation
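As promised above, a quick taste of IPS. This is only a sketch - the package name is hypothetical and repository contents vary:

GZ# pkg publisher
# Lists the configured package repositories.
GZ# pkg search vim
# Finds packages matching "vim" in the repository.
GZ# pkg install editor/vim
# Downloads and installs it, dependencies included.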

Here is a list of the other major improvements.

  • System Management
    • Boot Environments: ZFS snapshots are used to create alternate boot environments, similar to those created by Live Upgrade in Solaris 10. Unlike LU, S11E BEs exist on the same root pool, so you can have multiple BEs on a single disk drive - although servers should really have mirrored root drives. (See the beadm sketch after this list.)
    • Installation
      • Automated Install: Replaces and extends JumpStart, built around IPS. Documentation
      • Interactive Text Install: Intended for GUI-less servers.
      • Distribution Constructor: Create pre-configured bootable OS images with Distro Constructor, which uses IPS. Distro Constructor Documentation
  • Virtualization
    • The functionality of the open-source 'zonestat' script has been re-written and expanded as a full, integrated Solaris 11 Express tool.
    • Delegated Administration: The global zone administrator can delegate some of the management tasks for a zone to a specific global zone user.
    • A simpler packaging model, using IPS, decreases the complexity of zoned environments.
  • Networking
    • Significant InfiniBand improvements, including SDP support.
    • Improvements to IP Multipathing (IPMP)
    • Network Auto-Magic removes the need to manually re-configure a laptop as it moves around or switches between wired and wireless networks.
    • New L3/L4 load balancer.
  • Storage
    • ZFS is the only root file system type.
    • Time Slider automatic ZFS snapshot management, with a GUI folder-like view of the past.
    • CIFS service enables highly scalable file servers for Microsoft Windows environments.
    • COMSTAR targets are now available for iSER, SRP and FCoE.
  • Security
    • Root account is now a role, not a user.
    • Labeled IPsec (and other Trusted Extensions enhancements)
    • Trusted Platform Module support
  • Hardware
    • New hardware such as SPARC T3 systems.
    • NUMA I/O features keep related processes, memory, and I/O channels "near" each other to maximize performance and minimize I/O latency.
    • DISM performance improvements, especially for databases.
    • Suspend and resume to RAM, especially for laptops.
    • GNOME 2.30
All of the Oracle Solaris 11 Express documentation is available.
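And the boot environment workflow mentioned above, with beadm. A sketch; the BE name is hypothetical:

GZ# beadm create test-be
# Clones the active BE using a ZFS snapshot - fast, and space-efficient.
GZ# beadm activate test-be
# Makes it the default boot environment at the next reboot.
GZ# beadm list
# Shows all BEs, their active/activated flags, and space used.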

Thursday Aug 12, 2010

First Light: Solaris 11

Recently, John Fowler (an Oracle Executive VP) announced some plans for Solaris 11. Plans include introduction of Solaris 11 in 2011. He also declared that Solaris 11 would be "as large a product release as Solaris 10 was." You can view the webcast and download the slide deck at: http://www.oracle.com/dm/11h1corp/53947_systems_strategy_webcast.html.

Friday Jul 30, 2010

Dell and HP to certify, resell Oracle Solaris


Yesterday Oracle announced that Dell and HP will certify and resell Oracle Solaris, Oracle Enterprise Linux and Oracle VM on their x86 servers.

Thursday Jun 24, 2010

New Oracle Solaris White Paper

Oracle has published a new white paper that discusses the optimizations made to Solaris and SPARC in recent years. These optimizations improve throughput, security, and resiliency throughout the application solution stack, driving maximum ROI and minimum TCO. This white paper is entitled Oracle Solaris and Sun SPARC Systems—Integrated and Optimized for Enterprise Computing.

Friday Apr 02, 2010

Solaris Virtualization Book

This blog has been pretty quiet lately, but that doesn't mean I haven't been busy! For the last 6 months I've been leading the writing of a book: _Oracle Solaris 10 System Virtualization Essentials_.

This book discusses all of the forms of server virtualization, not just hypervisors. It covers the forms of virtualization that Solaris 10 provides and those that it can use. These include Solaris Containers (also called Solaris Zones), VirtualBox, Oracle VM (x86 and SPARC; the latter was formerly called Logical Domains), and Dynamic Domains.

One chapter is dedicated to the topic of choosing the best virtualization technology for a particular workload or set of workloads. Another chapter shows how to use each virtualization technology to achieve specific goals, including screenshots and command sequences. The last chapter of the book describes the need for virtualization management tools and then uses Oracle EM Ops Center as an example.

The book is available for pre-order at Amazon.com.

About

Jeff Victor writes this blog to help you understand Oracle's Solaris and virtualization technologies.

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
