Tuesday Jan 06, 2015

What's up with LDoms - Article Index

In the last few years - yes, it's actually years! - I wrote a series of articles about LDoms and their various features.  It's about time to publish a small index to all those articles:

I will update this index if and when I find time for a new article.

What's up with LDoms: Part 11 - IO Recommendations

In the last few articles, I discussed various options for IO with LDoms.  Here's a very short summary:

The IO options covered in previous articles:

  • SR-IOV
  • Direct IO
  • Root Domains
  • Virtual IO

In this article, I will discuss the pros and cons of each of these options and give some recommendations for their use.

[Diagram: SR-IOV setup, from the Admin Guide]

In the case of physical IO, there are several options:  Root Domains, DirectIO and SR-IOV.  Let's start with SR-IOV.  The most recent addition to the LDom IO options, it is by far the most flexible and the most sophisticated PCI virtualization option available.  Please see the diagram on the right (from the Admin Guide) for an overview.  First introduced for Ethernet adapters, Oracle today supports SR-IOV for Ethernet, Infiniband and Fibre Channel.  Note that the exact features depend on the hardware capabilities and built-in support of the individual adapter.  SR-IOV is not a feature of a server but rather a feature of an individual IO card in a server platform that supports it.  Here are the advantages of this solution:

  • It is very fine-grained, with between 7 and 63 Virtual Functions per adapter.  The exact number depends on adapter capabilities.  This means that you can create and use as many as 63 virtual devices in a single PCIe slot!
  • It provides bare metal performance (especially latency), although hardware resources like send and receive buffers, MAC slots and other resources are divided between VFs, which might lead to slight performance differences in some cases.
  • Particularly for Fibre Channel, there are no limitations to what end-point device (disk, tape, library, etc.) you attach to the fabric.  Since this is a virtual HBA, it is administered like one.
  • Unlike Root Domains and Direct IO, most SR-IOV configuration operations can be performed dynamically, if the adapters support it.  This is currently the case for Ethernet and Fibre Channel.  This means you can add or remove SR-IOV VFs to and from domains in a dynamic reconfiguration operation, without rebooting the domain.

Of course, there are also some drawbacks:

  • First of all, you have a hard dependency on the domain owning the root complex.  Here's a little more detail about this:
    As you can see in the diagram, the IO domain owns the physical IO card.  The physical root complex (pci_0 in the diagram) remains under the control of the root domain (the control domain in this example).  This means that if the root domain reboots for whatever reason, it will reset the root complex as part of that reboot.  This reset cascades down the PCI structures controlled by that root complex and eventually resets the PCI card in the slot and all the VFs given away to IO domains.  Essentially, seen from the IO domain, its (virtual) IO card performs an unexpected reset.  The most likely consequence is a panic of the IO domain, which is actually the best possible response.  Note that the Admin Guide says that the behaviour of the IO domain is unpredictable, which means that a panic is the best, but not the only possible outcome.  Please also take note of the recommended precautions (by configuring domain dependencies, sketched after this list) documented in the same section of the Admin Guide.  Furthermore, you should be aware that this also means that any kind of multi-pathing on top of VFs is counter-productive.  While it is possible to create a configuration where one guest uses VFs from two different root domains (and thus from two different physical adapters), this does not increase the availability of the configuration.  While this might protect against external failures like link failures to a single adapter, it doubles the likelihood of a failure of the guest, because it now depends on two root domains instead of one.  I strongly recommend against any such configurations at this time.  (There is work going on to mitigate this dependency.)
  • Live Migration is not possible for domains that use VFs.  In the case of Ethernet, this can be worked around by creating an IPMP failover group consisting of one virtual network port and one Ethernet VF and manually removing the VF before initiating the migration, as described by Raghuram here (a sketch follows after this list).  Note that this is currently not possible for Fibre Channel or IB.
  • Since you are actually sharing one adapter between many guests, these guests also share the IO bandwidth of that one adapter.  Depending on the adapter, bandwidth management might be available; in any case, the side effects of sharing should be considered.
  • Not all PCIe adapters support SR-IOV.  Please consult MOS DocID 1325454.1 for details.
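
To make the root domain dependency explicit, the Admin Guide recommends configuring domain dependencies.  Here's a minimal sketch, assuming the control domain "primary" is the root domain owning the PF, and a guest called "mars" (the name used in the examples later on this page) uses one of its VFs:

root@sun:~# ldm set-domain failure-policy=reset primary
root@sun:~# ldm set-domain master=primary mars

With this in place, mars is reset in a controlled manner when its master domain goes down, instead of being left running on dead virtual hardware.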
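
And here's a rough sketch of the IPMP workaround for Ethernet, run inside the guest.  It assumes net0 is the virtual network port and net1 the Ethernet VF (the same names that appear in the dladm output in Part 10 below); the address is made up:

root@mars:~# ipadm create-ipmp ipmp0
root@mars:~# ipadm add-ipmp -i net0 -i net1 ipmp0
root@mars:~# ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4

Before a migration, you would remove the VF from the domain (ldm rm-io) and let IPMP fail the traffic over to the virtual network port; after the migration, a VF from the target system can be added back.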

SR-IOV is a very flexible solution, especially if you need a larger number of virtual devices and yet don't want to buy into the slightly higher IO latencies of virtual IO.  Due to the limitations mentioned above, I cannot currently recommend SR-IOV or Direct IO for use in domains with the highest availability requirements.  In all other situations, and definitely in test and development environments, it is an interesting alternative to virtual IO.  The performance gap between SR-IOV and virtual IO has been narrowed considerably with the latest improvements in virtual IO.  You will essentially have to weigh the availability, latency and manageability characteristics of SR-IOV against virtual IO to make your decision.

[Diagram: Direct IO setup, from the Admin Guide]

Next in line is Direct IO.  As described in an earlier post, you give one full PCI slot to the receiving domain.  The hypervisor will create a virtual PCIe infrastructure in the receiving guest and reconfigure the PCIe subsystem accordingly.  This is shown in an abstract view in the diagram (from the Admin Guide) at the right.  Here are the advantages:

  • Since Direct IO works on a per-slot basis, it is a finer-grained solution than Root Domains.  For example, you have 16 slots in a T5-4, but only 8 root complexes.
  • The IO domain has full control over the adapter.
  • Like SR-IOV, it will provide bare-metal performance.
  • There is no sharing, and thus no cross-influencing from other domains.
  • It will support all kinds of IO devices, tape drives and tape libraries being the most popular examples.

The disadvantages of Direct IO are:

  • There is a hard dependency on the domain owning the root complex.  The reason is the same as with SR-IOV, so there's no need to repeat this here.  Please make sure you understand this and read the recommendations in the Admin Guide on how to deal with this dependency.
  • Not all IO cards are supported with DirectIO.  They must not contain their own PCIe switch.  A list of supported cards is maintained in MOS DocID 1325454.1.
  • Like Root Domains, dynamic reconfiguration is not currently supported with DirectIO slots.  This means that you will need to reboot both the root domain and the receiving guest domain to change this configuration.
  • And of course, Live Migration is not possible with Direct IO devices.

DirectIO was introduced in an early release of the LDoms software.  At the time, systems like the T2000 only supported two root complexes.  The most common use case was to support tape devices in domains other than the control domain.  Today, with a much better ratio of slots to root complexes, the need for this feature is diminishing, and although it is fully supported, you should consider other alternatives first.

[Diagram: Root Domain Setup]

Finally, there are Root Domains.  Again, a diagram you already know, just as a reminder.

The advantages of Root Domains are:

  • Highest isolation of all domain types.  Since they own and control their own CPU, memory and one or more PCIe root complexes, they are fully isolated from all other domains in the system.  This is very similar to the Dynamic System Domains you might know from older SPARC systems, just that we now use a hypervisor instead of a crossbar.
  • This also means no sharing of any IO resources with other domains, and thus no cross-influence of any kind.
  • Bare metal performance.  Since there's no virtualization of any kind involved, there are no performance penalties anywhere.
  • Root Domains are fully independent of all other domains in all aspects.  The only exception is console access, which is usually provided by the control domain.  However, this is not a single point of failure, as the root domain will continue to operate and will be fully available over the network even if the control domain is unavailable.
  • They allow hot-swapping of IO cards under their control, if the chassis supports it.  Today, that means T5-4 and above.

Of course, there are disadvantages, too:

  • Root Domains are not very flexible.  You cannot add or remove PCIe root complexes without rebooting the domain.
  • You are limited in the number of Root Domains, mostly by the number of PCIe root complexes available in the system.
  • As with all physical IO, Live Migration is not possible.

Use Root Domains whenever you have an application that needs at least one socket's worth of CPU and memory and has high IO requirements, but where you'd prefer to host it on a larger system to allow some flexibility in CPU and memory assignment.  Typically, Root Domains have a memory footprint and CPU activity too high to allow sensible live migration.  They are typically used for high value applications that are secured with some kind of cluster framework.

[Diagram: Virtual IO Setup]

Having covered all the options for PCI virtualization, there is only virtual IO left to cover.  For easier reference, here's the diagram from previous posts that shows this basic setup.  This variant is probably the most widely used one.  It has been available from the very first version, and its performance has been significantly improved recently.  The advantages of this type of IO are mostly obvious:

  • Virtual IO allows live migration of guests.  In fact, a guest can only be live migrated if all of its IO is fully virtualized.
  • This type of IO is by far the most flexible from a platform point of view.  The number of virtual networks and the overall network architecture is only limited by the number of available LDCs (which has recently been increased to 1984 per domain).  There is a big choice of disk backends.  Providing disk and networking to a great number of guests can be achieved with a minimum of hardware.
  • Virtual IO fully supports dynamic reconfiguration - the adding and removing of virtual devices.
  • Virtual IO can be configured with redundant IO service domains, allowing a rolling upgrade of the IO service domains without disrupting the guest domains and without requiring live migration of the guests for this purpose.  Especially when running a large number of guests on one platform, this is a huge advantage.

Of course, there are also some drawbacks:

  • As with all virtual IO, there is a small overhead involved.  In the LDoms implementation, there is no limitation of physical bandwidth.  But there is a small amount of additional latency added to each data packet as it is processed through the stack.  Note that this additional latency, while measurable, is very small and not typically an issue for applications.
  • LDoms virtual IO currently supports virtual Ethernet and virtual disk.  While virtual Ethernet provides the same functionality as a physical Ethernet switch, the virtual disk interface works on a LUN by LUN basis.  This is different from other solutions that provide a virtual HBA, and it comes with some administrative overhead, since you have to add each virtual disk individually instead of just a single (virtual) HBA.  It also means that other SCSI devices like tapes or tape libraries cannot be connected with virtual IO.
  • As is natural for virtual IO, the physical devices (and thus their resources) are shared between all consumers.  While recent releases of LDoms do support bandwidth limitations for network traffic (see the sketch after this list), no such limits can currently be set on virtual disk devices.
  • You need to configure sufficient CPU and memory resources in the IO service domains.  The usual recommendation is one to two cores and 8-16 GB of memory.  While this doesn't strictly count as overhead for the CPU resources of the guests, these are still resources that are not directly available to the guests.
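
The exact mechanism for network bandwidth limits depends on your LDoms and Solaris versions, so treat the following as a sketch only: newer releases accept a maxbw property on the virtual network device, and as a generic alternative, the maxbw link property can be set with dladm on the vnet link inside the guest.

root@sun:~# ldm set-vnet maxbw=2G vnet0 mars      # assumes a release with the maxbw property
root@mars:~# dladm set-linkprop -p maxbw=2G net0  # plain Solaris 11 alternative inside the guest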

Some recommendations for virtual IO:

  • In general, use the latest version of LDoms, along with Solaris 11.
  • Other than general networking considerations, there are no specific tunables for networking, if you are using a recent version of LDoms.  Stick to the defaults.
  • The same is true for disk IO.  However, keep in mind what has been true for the last 20 years: More LUNs do more IOPS.  Just because you've virtualized your guest doesn't mean that a single, 10TB LUN would give you more IOPS than 10x1TB LUNs - quite the opposite!  In the special case of the Oracle database: Make sure the redo logs are on dedicated storage.  This has been a recommendation since the "bad old days", and it continues to be true, whether you virtualize or not.
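
To make the "more LUNs" point concrete, here's a minimal sketch of exporting several smaller LUNs as individual vdisks to one guest instead of a single big one.  The device paths and volume names are made up; the commands are the standard add-vdsdev/add-vdisk pair:

root@sun:~# ldm add-vdsdev /dev/dsk/c0t600A0B80001234560000000155d0s2 data0@primary-vds0
root@sun:~# ldm add-vdisk data0 data0@primary-vds0 mars
root@sun:~# ldm add-vdsdev /dev/dsk/c0t600A0B80001234560000000156d0s2 redo0@primary-vds0
root@sun:~# ldm add-vdisk redo0 redo0@primary-vds0 mars

Inside the guest, data0 and redo0 show up as separate disks with separate IO queues - which is also how the redo logs end up on dedicated storage.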

Virtual IO is best used in consolidation scenarios, where you have many smaller systems to host on one chassis.  These smaller systems tend to be lightweight in most of their resource consumption, including IO.  Hence, they will definitely work well on virtual IO.  These are also the workloads that lend themselves best to Live Migration because of their smaller memory footprint and lower overall activity.  This is not to say that domains with moderate IO requirements wouldn't be well suited for virtual IO; they are.  However, larger domains with higher overall resource consumption (CPU, memory, IO) tend to benefit less from the advantages of Live Migration and the flexibility of virtual IO.

To finalize this article, here's an overview of the different options and the most important points to consider for each:

SR-IOV

Pros:
  • Highest granularity of all PCIe-based IO solutions
  • Bare metal performance
  • Supports Ethernet, FC and IB
  • Dynamic reconfiguration

Cons:
  • Depends on support by PCIe card
  • No Live Migration
  • Dependency on root domain

When to use:
  • For larger numbers of guests that need bare metal latency and can do without live migration.
  • When administrating a great number of LUNs is a constant burden, consider FC SR-IOV.
  • When availability is not the top priority.

Direct IO

Pros:
  • Dedicated slot, no hardware sharing
  • Bare metal performance
  • Supports Ethernet, FC and IB

Cons:
  • Granularity limited by number of PCIe slots in the system
  • Not all PCIe cards supported
  • No Live Migration
  • No dynamic reconfiguration
  • Dependency on root domain

When to use:
  • If you need a dedicated or special purpose IO card

Root Domains

Pros:
  • Fully independent domains, similar to dynamic domains
  • Full bare metal performance, dedicated to each domain
  • All types of IO cards supported

Cons:
  • Granularity limited by the number of Root Complexes in the system
  • No Live Migration
  • No dynamic reconfiguration

When to use:
  • High value applications with high CPU, memory and IO requirements
  • When Live Migration is not a requirement and/or not practical because of domain size and activity

Virtual IO

Pros:
  • Allows Live Migration
  • Most flexible, including full dynamic reconfiguration
  • No special hardware requirements
  • Almost no limit to the number of virtual devices
  • Allows fully redundant virtual IO configurations for HA deployments

Cons:
  • Limited to Ethernet and virtual disk
  • Small performance overhead, mostly visible in additional latency
  • vDisk administration complexity
  • Sharing of IO hardware may have performance implications

When to use:
  • Consolidation scenarios
  • Many small guests
  • When Live Migration is a requirement

There are already quite a few links for further reading spread throughout this article.  Here is just one more:

Monday Dec 15, 2014

What's up with LDoms: Part 10 - SR-IOV

Back after a long "break" filled with lots of interesting work...  In this article, I'll cover the most flexible solution in LDoms PCI virtualization: SR-IOV.

SR-IOV - Single Root IO Virtualization - is a PCI Express standard developed and published by the PCI-SIG.  The idea here is that each PCIe card capable of SR-IOV, also called a "physical function", can create multiple virtual copies or "virtual functions" of itself and present these to the PCIe bus.  There, they appear very similar to the original, physical card and can be assigned to a guest domain very much like a whole slot in the case of DirectIO.  The domain then has direct hardware access to this virtual adapter.  Support for SR-IOV was first introduced to LDoms in version 2.2, quite a while ago.  Since SR-IOV very much depends on the capabilities of the PCIe adapters, support for the various communication protocols was added one by one, as the adapters started to support it.  Today, LDoms supports SR-IOV for Ethernet, Infiniband and FibreChannel.  Creating, assigning or de-assigning virtual functions (with the exception of Infiniband) has been dynamic since LDoms version 3.1, which means you can do all of this without rebooting the affected domains.

All of this is well documented, not only in the LDoms Admin Guide, but also in various blog entries, most of them by Raghuram Kothakota, one of the chief developers for this feature.  However, I do want to give a short example on how this is configured, pointing to a few things to note as we go along.

Just like with DirectIO, the first thing you want to do is an inventory of what SR-IOV capable hardware you have in your system:

root@sun:~# ldm ls-io
NAME                                      TYPE   BUS      DOMAIN   STATUS   
----                                      ----   ---      ------   ------   
pci_0                                     BUS    pci_0    primary           
pci_1                                     BUS    pci_1    primary           
niu_0                                     NIU    niu_0    primary           
niu_1                                     NIU    niu_1    primary           
/SYS/MB/PCIE0                             PCIE   pci_0    primary  EMP      
/SYS/MB/PCIE2                             PCIE   pci_0    primary  OCC      
/SYS/MB/PCIE4                             PCIE   pci_0    primary  OCC      
/SYS/MB/PCIE6                             PCIE   pci_0    primary  EMP      
/SYS/MB/PCIE8                             PCIE   pci_0    primary  EMP      
/SYS/MB/SASHBA                            PCIE   pci_0    primary  OCC      
/SYS/MB/NET0                              PCIE   pci_0    primary  OCC      
/SYS/MB/PCIE1                             PCIE   pci_1    primary  EMP      
/SYS/MB/PCIE3                             PCIE   pci_1    primary  EMP      
/SYS/MB/PCIE5                             PCIE   pci_1    primary  OCC      
/SYS/MB/PCIE7                             PCIE   pci_1    primary  EMP      
/SYS/MB/PCIE9                             PCIE   pci_1    primary  EMP      
/SYS/MB/NET2                              PCIE   pci_1    primary  OCC      
/SYS/MB/NET0/IOVNET.PF0                   PF     pci_0    primary           
/SYS/MB/NET0/IOVNET.PF1                   PF     pci_0    primary           
/SYS/MB/NET2/IOVNET.PF0                   PF     pci_1    primary           
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_1    primary           

We've discussed this example earlier; this time, let's concentrate on the last four lines.  Those are physical functions (PF) of two network devices (/SYS/MB/NET0 and NET2).  Since there are two PFs for each device, we know that each device actually has two ports.  (These are the four internal ports of a T4-2 system.)  To dynamically create a virtual function on one of these ports, we first have to turn on IO Virtualization on the corresponding PCI bus.  Unfortunately, this is not (yet) a dynamic operation, so we have to reboot the domain owning that bus once.  But only once.  So let's do that now:

root@sun:~# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.
root@sun:~# ldm set-io iov=on pci_0
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
root@sun:~# reboot

Once the system comes back up, we can check that everything went well:

root@sun:~# ldm ls-io
NAME                                      TYPE   BUS      DOMAIN   STATUS   
----                                      ----   ---      ------   ------   
pci_0                                     BUS    pci_0    primary  IOV      
pci_1                                     BUS    pci_1    primary        
[...]
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_1    primary      

As you can see, pci_0 now shows "IOV" in the Status column. We can use the "-d" option to ldm ls-io to learn a bit more about the capabilities of the PF we intend to use:

root@sun:~# ldm ls-io -d /SYS/MB/NET2/IOVNET.PF1
Device-specific Parameters
--------------------------
max-config-vfs
    Flags = PR
    Default = 7
    Descr = Max number of configurable VFs
max-vf-mtu
    Flags = VR
    Default = 9216
    Descr = Max MTU supported for a VF
max-vlans
    Flags = VR
    Default = 32
    Descr = Max number of VLAN filters supported
pvid-exclusive
    Flags = VR
    Default = 1
    Descr = Exclusive configuration of pvid required
unicast-slots
    Flags = PV
    Default = 0 Min = 0 Max = 32
    Descr = Number of unicast mac-address slots    

All of these capabilities depend on the type of adapter and the driver that supports it.  In this example case, we can see that we can create up to 7 VFs, the VFs support a maximum MTU of 9216 bytes and have hardware support for 32 VLANs and 32 MAC addresses.  Other adapters are likely to give you different values here.

Now we can create a virtual function (VF) and assign it to a guest domain.  We have to do this with a currently unused port - creating VFs doesn't work while there's traffic on the device.

root@sun:~# ldm create-vf /SYS/MB/NET2/IOVNET.PF1 
Created new vf: /SYS/MB/NET2/IOVNET.PF1.VF0
root@sun:~# ldm add-io /SYS/MB/NET2/IOVNET.PF1.VF0 mars
root@sun:~# ldm ls-io /SYS/MB/NET2/IOVNET.PF1    
NAME                                      TYPE   BUS      DOMAIN   STATUS   
----                                      ----   ---      ------   ------   
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_1    primary           
/SYS/MB/NET2/IOVNET.PF1.VF0               VF     pci_1    mars             

The first command here tells the hypervisor, or actually, the NIC located at /SYS/MB/NET2/IOVNET.PF1, to create one virtual function.  The command returns and reports the name of that virtual function.  There is a different variant of this command to create multiple VFs in one go (sketched below).  The second command then assigns this newly created VF to a domain called "mars".  This is an online operation - mars is already up and running Solaris at this point.  Finally, the third command just shows us that everything went well and mars now owns the VF.
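
For completeness, here's what that bulk variant could look like - a sketch, assuming the adapter supports at least three VFs ("-n max" would create as many as the adapter supports):

root@sun:~# ldm create-vf -n 3 /SYS/MB/NET2/IOVNET.PF1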

Used with the "-l" option, the ldm command tells us some details about the device structure of the PF and VF:

root@sun:~# ldm ls-io -l /SYS/MB/NET2/IOVNET.PF1
NAME                                      TYPE   BUS      DOMAIN   STATUS   
----                                      ----   ---      ------   ------   
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_1    primary           
[pci@500/pci@1/pci@0/pci@5/network@0,1]
    maxvfs = 7
/SYS/MB/NET2/IOVNET.PF1.VF0               VF     pci_1    mars             
[pci@500/pci@1/pci@0/pci@5/network@0,81]
    Class properties [NETWORK]
        mac-addr = 00:14:4f:f8:07:ad
        mtu = 1500

Of course, we also want to check if and how this shows up in mars:

root@mars:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         0      unknown   vnet0
net1              Ethernet             unknown    0      unknown   igbvf0
root@mars:~# grep network /etc/path_to_inst
"/virtual-devices@100/channel-devices@200/network@0" 0 "vnet"
"/pci@500/pci@1/pci@0/pci@5/network@0,81" 0 "igbvf"

As you can see, mars now has two network interfaces.  One, net0, is a more conventional, virtual network interface.  The other, net1, uses the VF driver for the underlying physical device, in our case igb.  Checking in /etc/path_to_inst (or, if you prefer, in /devices), we can now find an entry for this network interface that shows us the PCIe infrastructure now plumbed into mars to support this NIC. Of course, it's the same device path as in the root domain (sun).

So far, we've seen how to create a VF in the root domain, how to assign it to a guest and how it shows up there.  I've used Ethernet for this example, as it's readily available in all systems.  As I mentioned earlier, LDoms also supports Infiniband and FibreChannel with SR-IOV, so you could also add an FC HBA's VF to a guest domain.  Note that this doesn't work with just any HBA.  The HBA itself has to support this functionality.  There is a list of supported cards maintained in MOS.

There are a few more things to note with SR-IOV.  First, there's the VF's identity.  You might not have noticed it, but the VF created in the example above has its own identity - its own MAC address.  While this seems natural in the case of Ethernet, it is actually something that you should be aware of with FC and IB as well.  FC VFs use WWNs and NPIV to identify themselves in the attached fabric.  This means the fabric has to be NPIV capable, and the guest domain using the VF cannot layer further software NPIV-HBAs on top.  Likewise, IB VFs use HCA GUIDs to identify themselves.  While you can choose Ethernet MAC addresses and FC WWNs if you prefer, IB VFs choose their HCA GUIDs automatically.  If you intend to run Solaris zones within a guest domain that uses an SR-IOV VF for Ethernet, remember to assign this VF additional MAC addresses to be used by the anet devices of these zones.
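
Here's a sketch of how such additional MAC addresses could be assigned, assuming your LDoms version supports the alt-mac-addrs property on network VFs (it is the same property that Part 7 below covers for vnet devices); this is typically done while the VF is not in active use:

root@sun:~# ldm set-io alt-mac-addrs=auto,auto /SYS/MB/NET2/IOVNET.PF1.VF0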

Finally, I want to point out once more that while SR-IOV devices can be moved in and out of domains dynamically, and can be added from two different root domains to the same guest, they still depend on their respective root domains.  This is very similar to the restriction with DirectIO.  So if the root domain owning the PF reboots (for whatever reason), it will reset the PF, which will also reset all VFs and have unpredictable results in the guests using them.  Keep this in mind when deciding whether or not to use SR-IOV.  If you do, consider configuring explicit domain dependencies reflecting these physical dependencies.  You can find details about this in the Admin Guide.  Development in this area is continuing, so you may expect to see enhancements in this space in upcoming versions.

Since it is possible to work with multiple root domains and have each of those root domains create VFs of some of their devices, it is important to avoid cyclic dependencies between these root domains.  This is explicitly prevented by the ldm command, which does not allow a VF from one root domain to be assigned to another root domain.

We have now seen multiple ways of providing IO resources to logical domains: Virtual network and disk, PCIe root complexes, PCIe slots and finally SR-IOV.  Each of them have their own pros and cons and you will need to weigh them carefully to find the correct solution for a given task.  I will dedicate one of the next chapters of this series to a discussion of IO best practices and recommendations.  For now, here are some links for further reading about SR-IOV:

Wednesday Aug 20, 2014

What's up with LDoms: Part 9 - Direct IO

In the last article of this series, we discussed the most general of all physical IO options available for LDoms: root domains.  Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots.  In LDoms terminology, this feature is called "Direct IO" or DIO.  It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain.  Let's look again at the hardware available to mars in the original configuration:

root@sun:~# ldm ls-io
NAME                                      TYPE   BUS      DOMAIN   STATUS  
----                                      ----   ---      ------   ------  
pci_0                                     BUS    pci_0    primary          
pci_1                                     BUS    pci_1    primary          
pci_2                                     BUS    pci_2    primary          
pci_3                                     BUS    pci_3    primary          
/SYS/MB/PCIE1                             PCIE   pci_0    primary  EMP     
/SYS/MB/SASHBA0                           PCIE   pci_0    primary  OCC
/SYS/MB/NET0                              PCIE   pci_0    primary  OCC     
/SYS/MB/PCIE5                             PCIE   pci_1    primary  EMP     
/SYS/MB/PCIE6                             PCIE   pci_1    primary  EMP     
/SYS/MB/PCIE7                             PCIE   pci_1    primary  EMP     
/SYS/MB/PCIE2                             PCIE   pci_2    primary  EMP     
/SYS/MB/PCIE3                             PCIE   pci_2    primary  OCC     
/SYS/MB/PCIE4                             PCIE   pci_2    primary  EMP     
/SYS/MB/PCIE8                             PCIE   pci_3    primary  EMP     
/SYS/MB/SASHBA1                           PCIE   pci_3    primary  OCC     
/SYS/MB/NET2                              PCIE   pci_3    primary  OCC     
/SYS/MB/NET0/IOVNET.PF0                   PF     pci_0    primary          
/SYS/MB/NET0/IOVNET.PF1                   PF     pci_0    primary          
/SYS/MB/NET2/IOVNET.PF0                   PF     pci_3    primary          
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_3    primary

All of the "PCIE" type devices are available for DIO, with a few limitations.  If the device is a slot, the card in that slot must support the DIO feature.  The documentation lists all such cards.  Moving a slot to a different domain works just like moving a PCI root complex, as sketched below.  Again, this is not a dynamic process and involves reboots of the affected domains.  The resulting configuration is nicely shown in a diagram in the Admin Guide.
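
Since the actual commands don't appear anywhere else in this post, here's a minimal sketch, reusing the listing above: it moves the occupied slot PCIE3 (on bus pci_2) from the primary domain to mars, mirroring the root complex example from Part 8:

root@sun:~# ldm start-reconf primary
root@sun:~# ldm rm-io /SYS/MB/PCIE3 primary
root@sun:~# reboot
[ wait for the system to come back up ]
root@sun:~# ldm add-io /SYS/MB/PCIE3 mars
root@sun:~# ldm bind mars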

There are several important things to note and consider here:

  • The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
  • Solaris will create nodes for this hardware under /devices.  This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device.  It is very important to understand that all of this PCIe infrastructure is virtual only!  Only the actual endpoint devices are true physical hardware.
  • There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure:
    • Only if the root domain is up and running, will the guest domain have access to the endpoint device.
    • The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    • This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain.  The result in the guest is not predictable.  I recommend configuring the resulting behaviour of the guest using domain dependencies, as described in the Admin Guide in the chapter "Configuring Domain Dependencies".
  • Please consult the Admin Guide in Section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details!

As you can see, there are several restrictions to this feature.  It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.  Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need for this feature is declining.  I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations).

This was a rather short entry, more for completeness.  I believe that DIO can usually be replaced by SR-IOV, which is much more flexible.  I will cover SR-IOV in the next section of this blog series.

Tuesday May 20, 2014

Improved vDisk Performance for LDoms

In all the LDoms workshops I've been doing in the past years, I've always been cautioning customers to keep their expectations within reasonable limits when it comes to virtual IO.  And I'll not stop doing that today.  Virtual IO will always come at a certain cost, because of the additional work necessary to translate physical IOs to the virtual world.  Until we invent time travel, this will always need some additional time to be done.  But there's some good news about this, too:

First, in many cases the overhead involved in virtualizing IO isn't that much - the LDom implementation is very efficient.  And in many of these many cases, it doesn't hurt.  Often, because the workload involved doesn't care and virtual IO is fast enough.

Second, there are good ways to configure virtual IO, and not so good ways.  If you stick to the good ways (which I previously discussed here), you'll increase the number of cases where virtual IO is more than just good enough. 

But of course, there are always those other cases where it just isn't.  But there's more good news, too:

For virtualized network, we've introduced a new implementation utilizing large segment offload (LSO) and some other techniques to increase throughput and reduce latency to a point where virtual networking has gone away as a reason for performance issues.  This was in LDoms release 3.1.  Now is when we introduce a similar enhancement for virtual disk.

When we talk about disk IO and performance, the most important configuration best practice is to spread IO load to multiple LUNs.  This has always been the case, long before we started to even think about virtualization.  The reason for this is the limited number of IOPS a single LUN will deliver.  Whether that LUN is a single physical disk or a volume in a more sophisticated disk array doesn't matter.  IOPS delivered by one LUN are limited, and IOs will queue up in this LUN's queue in a very sequential manner.  A single physical disk might deliver 150 IOPS, perhaps 300 IOPS.  A SAN LUN with a strong array in the backend might deliver 5000 IOPS or a little more.  But that isn't enough, and has never been.  Disk striping of any kind was invented to solve this problem.  And virtualization of both servers and storage doesn't change the overall picture.  Which means that in LDoms, the best practice has always been to configure several LUNs, which means several vdisks, into a single guest system.  This often provided the required IO performance, but there were quite a few cases where this just wasn't good enough and people had to move back to physical IO.  Of course, there are several ways to provide physical IO and still virtualize using LDoms, but the situation was not ideal. 

With the release of Solaris 11.1 SRU 19 (and a Solaris 10 patch shortly afterwards) we are introducing a new implementation of the vdisk/vds software stack, which significantly improves both latency and throughput of virtual disk IO.  The improvement can best be seen in the graphs below.

This first graph shows the overall number of IOPS during a performance test, comparing bare metal with the old and the new vdisk implementation. As you can see, the new implementation delivers essentially the same performance as bare metal, with a variation that might as well be statistical deviation. Note that these tests were run on a total of 28 SAN LUNs, so please don't expect a single LUN to deliver 150k IOPS anytime soon :-) The improvement over the old implementation is significant, with differences of up to 55% in some cases. Again, note that running only a single stream of IOs against a single LUN will not show as much of an improvement as running multiple streams (denoted as threads in the graphs). This is due to the fact that parts of the new implementation have focused on de-serializing the IO infrastructure, something you'll not notice if you run single threaded IO streams. But then, most IO hungry applications issue multiple IOs.  Likewise, if your storage backend can't provide this kind of performance (perhaps because you're testing on a single, internal disk?), don't expect much change! 

So we know that throughput has been fixed (with 150k IOPS and 1.1 GB/sec of virtual IO in this test, I believe I can safely say so).  But what about IO latency?  This next graph shows a similar improvement here:

Again, response time (or service time) with the new implementation is very similar to what you get from bare metal.  The maximum difference is in the 2 thread case, with less than 4% difference between virtual IO and bare metal.  Close enough to actually start talking about zero overhead IO (at least as far as IO performance is concerned).  Talking about overhead:  I sometimes call the overhead involved in virtualization the "Virtualization Tax" - the resources you invest in virtualization itself, or, in other words, the performance (or response time) you lose because of virtualization.  In the case of LDoms disk IO, we've just seen a significant reduction in virtualization taxes:

The last graph shows how much higher the response time for virtual disk IO was with the old implementation, and how much of that we've been given back by this charming piece of engineering in the new implementation. Where we paid up to 55% of virtualization tax before, we're now down to 4% or less. A big "Thank you!" to engineering!

Of course, there's always a little disclaimer involved:  Your mileage will vary.  The results I show here were obtained on 28 LUNs coming from some kind of FC infrastructure.  The tests were done using vdbench with a read/write mix of 60%/40%, running from 2 to 20 threads doing random IO.  While this is quite a challenging load for any IO subsystem and represents the load pattern that showed the highest virtualization tax with the old implementation, this still means that real world benefits from this new implementation might not achieve the same improvements.  Although I am very optimistic that they will be similar.

In conclusion, with the new, improved virtual networking and virtual disk IO that are now available, the range of applications that can safely be run on fully virtualized IO has been expanded significantly.  This is in line with the expectations I often find in customer workshops, where high end performance is naturally expected from SPARC systems under all circumstances.

Before I close, here's how to use this new implementation:

  • Update to Solaris 11.1 SRU 19 (a quick check for the SRU level is sketched after this list) in
    • all guest domains that want to use the new implementation.
    • all IO domains that provide virtual disks to these guests
    • This will also update LDoms Manager to 3.1.1
    • If only one in the pair (guest|IO domain) is updated, virtual IO will continue to work using the old implementation.
  • A patch for Solaris 10 will be available shortly.
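
If you're unsure which SRU a domain is running, the SRU is encoded in the version of the "entire" package.  A quick check (sketched here for Solaris 11.1; output abbreviated) could look like this:

root@sun:~# pkg info entire | grep Version
       Version: 0.5.11 (Oracle Solaris 11.1.19.6.0)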

Update 2014-06-16: Patch 150400-13 has now been released for Solaris 10.  See the Readme for details.

Thursday Mar 27, 2014

A few Thoughts about Single Thread Performance



Monday Feb 24, 2014

What's up with LDoms: Part 8 - Physical IO

[Diagram: Virtual IO Setup]

Finally finding some time to continue this blog series...  And starting the new year with a new chapter for which I hope to write several sections: Physical IO options for LDoms and what you can do with them.  In all previous sections, we talked about virtual IO and how to deal with it.  The diagram at the right shows the general architecture of such virtual IO configurations.  However, there's much more to IO than that.

From an architectural point of view, the primary task of the SPARC hypervisor is partitioning of the system.  The hypervisor isn't usually very active - all it does is assign ownership of some parts of the hardware (CPU, memory, IO resources) to a domain, build a virtual machine from these components and finally start OpenBoot in that virtual machine.  After that, the hypervisor essentially steps aside.  Only if the IO components are virtual components do we need hypervisor support.  But those IO components could also be physical.  Actually, that is the more "natural" option, if you like.  So let's revisit the creation of a domain:

We always start with assigning of CPU and memory in some very simple steps:

root@sun:~# ldm create mars
root@sun:~# ldm set-memory 8g mars
root@sun:~# ldm set-core 8 mars

If we now bound and started the domain, we would have OpenBoot running and we could connect using the virtual console.  Of course, since this domain doesn't have any IO devices, we couldn't yet do anything particularly useful with it.  Since we want to add physical IO devices, where are they?

To begin with, all physical components are owned by the primary domain.  This is the same for IO devices, just like it is for CPU and memory.  So just like we need to remove some CPU and memory from the primary domain in order to assign these to other domains, we will have to remove some IO from the primary if we want to assign it to another domain.  A general inventory of available IO resources can be obtained with the "ldm ls-io" command:

root@sun:~# ldm ls-io
NAME                                      TYPE   BUS      DOMAIN   STATUS  
----                                      ----   ---      ------   ------  
pci_0                                     BUS    pci_0    primary          
pci_1                                     BUS    pci_1    primary          
pci_2                                     BUS    pci_2    primary          
pci_3                                     BUS    pci_3    primary          
/SYS/MB/PCIE1                             PCIE   pci_0    primary  EMP     
/SYS/MB/SASHBA0                           PCIE   pci_0    primary  OCC
/SYS/MB/NET0                              PCIE   pci_0    primary  OCC     
/SYS/MB/PCIE5                             PCIE   pci_1    primary  EMP     
/SYS/MB/PCIE6                             PCIE   pci_1    primary  EMP     
/SYS/MB/PCIE7                             PCIE   pci_1    primary  EMP     
/SYS/MB/PCIE2                             PCIE   pci_2    primary  EMP     
/SYS/MB/PCIE3                             PCIE   pci_2    primary  OCC     
/SYS/MB/PCIE4                             PCIE   pci_2    primary  EMP     
/SYS/MB/PCIE8                             PCIE   pci_3    primary  EMP     
/SYS/MB/SASHBA1                           PCIE   pci_3    primary  OCC     
/SYS/MB/NET2                              PCIE   pci_3    primary  OCC     
/SYS/MB/NET0/IOVNET.PF0                   PF     pci_0    primary          
/SYS/MB/NET0/IOVNET.PF1                   PF     pci_0    primary          
/SYS/MB/NET2/IOVNET.PF0                   PF     pci_3    primary          
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_3    primary

The output of this command will of course vary greatly, depending on the type of system you have.  The above example is from a T5-2.  As you can see, there are several types of IO resources.  Specifically, there are

  • BUS
    This is a whole PCI bus, which means everything controlled by a single PCI control unit, also called a PCI root complex.  It typically contains several PCI slots and possibly some end point devices like SAS or network controllers.
  • PCIE
    This is either a single PCIe slot.  In that case, its name corresponds to the slot number you will find imprinted on the system chassis.  It is controlled by a root complex listed in the "BUS" column.  In the above example, you can see that some slots are empty, while others are occupied.  Or it is an endpoint device like a SAS HBA or network controller.  An example would be "/SYS/MB/SASHBA0" or "/SYS/MB/NET2".  Both of these typically control more than one actual device; for example, SASHBA0 would control 4 internal disks and NET2 would control 2 internal network ports.
  • PF
    This is a SR-IOV Physical Function - usually an endpoint device like a network port which is capable of PCI virtualization.  We will cover SR-IOV in a later section of this blog.

All of these devices are available for assignment.  Right now, they are all owned by the primary domain.  We will now release some of them from the primary domain and assign them to a different domain.  Unfortunately, this is not a dynamic operation, so we will have to reboot the control domain (more precisely, the affected domains) once to complete this.

root@sun:~# ldm start-reconf primary
root@sun:~# ldm rm-io pci_3 primary
root@sun:~# reboot
[ wait for the system to come back up ]
root@sun:~# ldm add-io pci_3 mars
root@sun:~# ldm bind mars

With the removal of pci_3, we also removed PCIE8, SASHBA1 and NET2 from the primary domain and added all three to mars.  Mars will now have direct, exclusive access to all the disks controlled by SASHBA1, all the network ports on NET2 and whatever we choose to install in PCIe slot 8.  Since in this particular example, mars has access to internal disks and network, it can boot and communicate using these internal devices.  It does not depend on the primary domain for any of this.  Once started, we could actually shut down the primary domain.  (Note that the primary is usually the home of vntsd, the console service.  While we don't need this for running or rebooting mars, we do need it in case mars falls back to OBP or single-user.)

[Diagram: Root Domain Setup]

Mars now owns its own PCIe root complex.  Because of this, we call mars a root domain.  The diagram on the right shows the general architecture.  Compare this to the diagram above!  Root domains are truly independent partitions of a SPARC system, very similar in functionality to Dynamic System Domains in the E10k, E25k or M9000 times (or Physical Domains, as they're now called).  They own their own CPU, memory and physical IO.  They can be booted, run and rebooted independently of any other domain.  Any failure in another domain does not affect them.  Of course, we have plenty of shared components: A root domain might share a mainboard, a part of a CPU (mars, for example, only has 8 of the system's cores), some memory modules, etc. with other domains.  Any failure in a shared component will of course affect all the domains sharing that component, which is different in Physical Domains because there are significantly fewer shared components.  But beyond this, root domains have a level of isolation very similar to that of Physical Domains.

Comparing root domains (which are the most general form of physical IO in LDoms) with virtual IO, here are some pros and cons:

Pros:

  • Root domains are fully independent of all other domains (with the exception of console access, but this is a minor limitation).
  • Root domains have zero overhead in IO - they have no virtualization overhead whatsoever.
  • Root domains, because they don't use virtual IO, are not limited to disk and network, but can also attach to tape, tape libraries or any other, generic IO device supported in their PCIe slots.

Cons:

  • Root domains are limited in number.  You can only create as many root domains as you have PCIe root complexes available.  In current T5 and M5/6 systems, that's two per CPU socket.
  • Root domains can not live migrate.  Because they own real IO hardware (with all these nasty little buffers, registers and FIFOs), they can not be live migrated to another chassis.

Because of these different characteristics, root domains are typically used for applications that tend to be more static, have higher IO requirements and/or larger CPU and memory footprints.  Domains with virtual IO, on the other hand, are typically used for the mass of smaller applications with lower IO requirements.  Note that "higher" and "lower" are relative terms - LDoms virtual IO is quite powerful.

This is the end of the first part of the physical IO section, I'll cover some additional options next time.  Here are some links for further reading:

Tuesday Oct 01, 2013

CPU-DR for Zones

In my last entry, I described how to change the memory configuration of a running zone.  The natural next question is of course, if that also works with CPUs that have been assigned to a zone.  The answer, of course, is "yes".

You might wonder why that would be necessary in the first place.  After all, there's the Fair Share Scheduler, which is extremely capable of managing zones' CPU usage.  However, there are reasons to assign dedicated CPU resources to zones; licensing is one, SLAs with specified CPU requirements are another.  In such cases, you configure a fixed number of CPUs (more precisely, strands) for a zone.  Being able to change this configuration on the fly then becomes desirable.  I'll show how to do that in this blog entry.

In general, there are two ways to assign exclusive CPUs to a zone.  The classic approach is by using a resource pool with an associated processor set.  One or more zones can then be bound to that pool.  The easier solution is to use the parameter "dedicated-cpu" directly when configuring the zone.  In this second case, Solaris will create a temporary pool to manage these resources.  So effectively, the implementation is the same in both cases.  Which makes it clear how to change the CPU configuration in both cases: By changing the pool.  If you do this in the classical approach, the change to the pool will be persistent.  If working with the temporary pool created for the zone, you will also need to change the zone's configuration if you want the change to survive a zone restart.

If you configured your zone with "dedicated-cpu", the temporary pool (and also the temporary processor set that goes along with it) will usually be called "SUNWtmp_<zonename>".   If not, you'll know the name of the pool...  In both cases, everything else is the same:

Let's assume a zone called orazone, currently configured with 1 CPU.  It's to be assigned a second CPU.  The current pool configuration is like this:
root@benjaminchen:~# pooladm                

system default
	string	system.comment 
	int	system.version 1
	boolean	system.bind-default true
	string	system.poold.objectives wt-load

	pool pool_default
		int	pool.sys_id 0
		boolean	pool.active true
		boolean	pool.default true
		int	pool.importance 1
		string	pool.comment 
		pset	pset_default

	pool SUNWtmp_orazone
		int	pool.sys_id 5
		boolean	pool.active true
		boolean	pool.default false
		int	pool.importance 1
		string	pool.comment 
		boolean	pool.temporary true
		pset	SUNWtmp_orazone

	pset pset_default
		int	pset.sys_id -1
		boolean	pset.default true
		uint	pset.min 1
		uint	pset.max 65536
		string	pset.units population
		uint	pset.load 687
		uint	pset.size 3
		string	pset.comment 

		cpu
			int	cpu.sys_id 1
			string	cpu.comment 
			string	cpu.status on-line

		cpu
			int	cpu.sys_id 3
			string	cpu.comment 
			string	cpu.status on-line

		cpu
			int	cpu.sys_id 2
			string	cpu.comment 
			string	cpu.status on-line

	pset SUNWtmp_orazone
		int	pset.sys_id 2
		boolean	pset.default false
		uint	pset.min 1
		uint	pset.max 1
		string	pset.units population
		uint	pset.load 478
		uint	pset.size 1
		string	pset.comment 
		boolean	pset.temporary true

		cpu
			int	cpu.sys_id 0
			string	cpu.comment 
			string	cpu.status on-line

As we can see in the definition of pset SUNWtmp_orazone, it has been assigned CPU #0.  To add another CPU to this pool, you'll need these two commands:
root@benjaminchen:~# poolcfg -dc 'modify pset SUNWtmp_orazone \
                     (uint pset.max=2)'
root@benjaminchen:~# poolcfg -dc 'transfer to pset \
                     SUNWtmp_orazone (cpu 1)'

To remove that CPU from the pool again, use these:

root@benjaminchen:~# poolcfg -dc 'transfer to pset pset_default \
                     (cpu 1)'
root@benjaminchen:~# poolcfg -dc 'modify pset SUNWtmp_orazone \
                     (uint pset.max=1)'

That's it.   If you've used "dedicated-cpu" for your zone's configuration, you'll need to change that before the next reboot.  If not, you'd have to use the pool name you assigned to the zone.
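
For the "dedicated-cpu" case, the persistent change is a quick zonecfg session - a sketch, using the orazone example from above:

root@benjaminchen:~# zonecfg -z orazone
zonecfg:orazone> select dedicated-cpu
zonecfg:orazone:dedicated-cpu> set ncpus=2
zonecfg:orazone:dedicated-cpu> end
zonecfg:orazone> commit
zonecfg:orazone> exit

This only takes effect at the next reboot of the zone - the dynamic part is what the poolcfg commands above are for.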

Further details:

Monday Aug 19, 2013

Memory-DR for Zones

Zones allow you to limit their memory consumption.  The usual way to configure this is with the zone parameter "capped-memory" and its three sub-values "physical", "swap" and "locked".  "Physical" corresponds to the resource control "zone.max-rss", which is actual main memory.  "Swap" corresponds to "zone.max-swap", which is swap space, and "locked" corresponds to "zone.max-locked-memory", which is non-pageable memory, typically shared memory segments.  Swap and locked memory are rather hard limits that can't be exceeded.  RSS (physical memory) is not quite as hard a limit, being enforced by rcapd.  This daemon will try to page out those memory pages that are beyond the allowed amount of memory and are least active.  Depending on the activity of the processes in question, this is more or less successful, but it will always result in paging activity.  This will slow down the memory-hungry processes in that zone.

If you change any of these values using zonecfg, these changes will only be in effect after a reboot of the zone.  This is not as dynamic as one might be used to from the LDoms world.  But it can be, as I'd like to show you in a small example:

Let's assume a little zone with a memory configuration like this:

root@benjaminchen:~# zonecfg -z orazone info capped-memory
capped-memory:
    physical: 512M
    [swap: 256M]
    [locked: 512M]

To change these values while the zone is in operation, you need to interact with two different sub-systems.   For physical memory, we'll need to talk to rcapd.  For swap and locked memory, we need prctl for the normal resource controls.  So, if I wanted to double all three limits for my zone, I'd need these commands:

root@benjaminchen:~# prctl -n zone.max-swap -v 512m -r -i zone orazone
root@benjaminchen:~# prctl -n zone.max-locked-memory -v 1g -r -i zone orazone
root@benjaminchen:~# rcapadm -z orazone -m 1g

These new values will be effective immediately - for rcapd, after the next reconfigure interval.  You can also change this interval with rcapadm.  Note that these changes are not persistent - if you reboot your zone, it will fall back to whatever was configured with zonecfg.  So to have both - persistent changes and immediate effect - you'll need to touch both tools.
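
For completeness, here's what the persistent half could look like - a sketch that doubles the configured values to match the prctl/rcapadm commands above:

root@benjaminchen:~# zonecfg -z orazone
zonecfg:orazone> select capped-memory
zonecfg:orazone:capped-memory> set physical=1g
zonecfg:orazone:capped-memory> set swap=512m
zonecfg:orazone:capped-memory> set locked=1g
zonecfg:orazone:capped-memory> end
zonecfg:orazone> commit
zonecfg:orazone> exit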

Links:

  • Solaris Admin Guide:
    http://docs.oracle.com/cd/E19683-01/817-1592/rm.rcapd-1/index.html

Thursday Jul 04, 2013

What's up with LDoms: Part 7 - Layered Virtual Networking

Back for another article about LDoms - today we'll cover some tricky networking options that come up if you want to run Solaris 11 zones in LDom guest systems.  So what's the problem?

[Diagram: MAC Tables in an LDom system]

Let's look at what happens with MAC addresses when you create a guest system with a single vnet network device.  By default, the LDoms Manager selects a MAC address for the new vnet device.  This MAC address is managed in the vswitch, and ethernet packets from and to that MAC address can flow between the vnet device, the vswitch and the outside world.  The ethernet switch on the outside will learn about this new MAC address, too.  Of course, if you assign a MAC address manually, this works the same way.  This situation is shown in the diagram at the right.  The important thing to note here is that the vnet device in the guest system will have exactly one MAC address, and no "spare slots" with additional addresses.

Add zones into the picture.  With Solaris 10, the situation is simple.  The default behaviour will be a "shared IP" zone, where traffic from the non-global zone will use the IP (and thus ethernet) stack from the global zone.  No additional MAC addresses required.  Since you don't have further "physical" interfaces, there's no temptation to use "exclusive IP" for that zone, except if you'd use a tagged VLAN interface.  But again, this wouldn't need another MAC address.


[Diagram: MAC tables in previous versions]

With Solaris 11, this changes fundamentally.  Solaris 11, by default, creates a so-called "anet" device for any new zone.  This device is created using the new Solaris 11 network stack and is simply a virtual NIC.  As such, it has a MAC address, which by default is randomly generated.  However, this random MAC address will not be known to the vswitch in the IO domain or to the vnet device in the global zone, and starting such a zone will fail.


[Diagram: MAC tables in version 3.0.0.2]

The solution is to allow the vnet device of the LDoms guest to provide more than one MAC address, similar to typical physical NICs, which support numerous MAC addresses in "slots" that they manage.  This feature was added to Oracle VM Server for SPARC in version 3.0.0.2.  Jeff Savit wrote about it in his blog, showing a nice example of how things fail without this feature, and how they work with it.  Of course, the same solution also works if your global zone uses vnics for purposes other than zones.

To make this work, you need to do two things:

  1. Configure the vnet device to have more than one MAC address.  This is done using the new option "alt-mac-addrs" with either ldm add-vnet or ldm set-vnet.  You can either provide manually selected MAC addresses here, or rely on the LDoms Manager to provide one using its MAC address selection algorithm.
  2. Configure the zone to use the "auto" option instead of "random" for selecting a MAC address.  This will cause the zone to query the NIC for available MAC addresses instead of coming up with one and making the NIC accept it.  (See the example after this list.)
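
Put together, the two steps might look like this - a sketch with made-up names (vnet0, ldg1, myzone), not taken from Jeff's post.  First, in the control domain, give the existing vnet device three additional, automatically allocated MAC addresses:

primary# ldm set-vnet alt-mac-addrs=auto,auto,auto vnet0 ldg1

Then, in the guest's global zone, configure the zone's anet to pick one of the available addresses instead of inventing its own:

root@guest:~# zonecfg -z myzone
zonecfg:myzone> select anet linkname=net0
zonecfg:myzone:anet> set mac-address=auto
zonecfg:myzone:anet> end
zonecfg:myzone> commit
zonecfg:myzone> exit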

I will not go into the details of how this is configured, as this is very nicely covered by Jeff's blog entry already.  I do want to add that you might see similar issues with layered virtual networking in other virtualization solutions:  Running Solaris 11 vnics or zones with exclusive IP in VirtualBox, OVM x86 or VMware will show the very same behaviour.  I don't know if or when these technologies will provide a solution similar to what we now have with LDoms.

Tuesday Jun 18, 2013

A closer look at the new T5 TPC-H result

You've probably all seen the new TPC-H benchmark result for the SPARC T5-4 submitted to TPC on June 7.  Our benchmark guys over at "BestPerf" have already pointed out the major takeaways from the result.  However, I believe there's more to make note of.

Scalability

TPC doesn't promote the comparison of TPC-H results with different storage sizes.  So let's just look at the 3000GB results:

  • SPARC T4-4 with 4 CPUs (that's 32 cores at 3.0 GHz) delivers 205,792 QphH.
  • SPARC T5-4 with 4 CPUs (that's 64 cores at 3.6 GHz) delivers 409,721 QphH.

That's just a little short of 100% scalability, if you expect a doubling of cores to deliver twice the result.  Of course, one could also expect a factor of 2.4, taking the increased clock rate into account: 64/32 cores × 3.6/3.0 GHz = 2 × 1.2 = 2.4.  Since the TPC does not permit estimates and other "number games" with TPC results, I'll leave all the arithmetic to you.  But let's look at some more details that might offer an explanation.

Storage

Looking at the report on BestPerf as well as the full disclosure report, we find some interesting insight into the storage configuration.  For the SPARC T4-4 run, they used 12 2540-M2 arrays, each delivering around 1.5 GB/s, for a total of 18 GB/s.  These were obviously directly connected to the 24 8 GBit FC ports of the SPARC T4-4, using two cables per storage array.  Given the 8 GBit ports of the 2540-M2, this setup would be good for a theoretical maximum of 2 GB/s per array.  With 1.5 GB/s of actual throughput, they were pretty much maxed out.

In the SPARC T5-4 run, they report twice the number of disks (via expansion trays for the 2540-M2 arrays) for a total of 33 GB/s peak throughput, which isn't quite 2x the 18 GB/s achieved on the SPARC T4-4.  To actually reach 2x the throughput (36 GB/s), each array would have had to deliver 3 GB/s over its four 8 GBit ports.  The FDR only lists 12 dual-port FC HBAs, which explains the use of Brocade FC switches: they connect all four 8 GBit ports of each storage array and bundle the traffic into 24 16 GBit HBA ports, delivering the full 48x 8 GBit FC bandwidth of the arrays to the 24 FC ports of the server.  Again, the theoretical maximum of four 8 GBit ports per storage array would be 4 GB/s; considering all the protocol and "reality" overhead, the 2.75 GB/s they actually delivered isn't bad at all.  Given this, reaching twice the overall benchmark performance is good - and storage is a possible explanation for not going all the way to 2.4x.  Of course, other factors like software scalability might also play a role here.

By the way - neither the SPARC T4-4 nor the SPARC T5-4 used any flash in these benchmarks. 

Competition

Ever since the T4 systems came to market, our competitors have done their best to assure everyone that the SPARC core still lacks performance, and that large caches and high clock rates are the only key to real server performance.  Now, when I look at public TPC-H results, I see this:

TPC-H @3000GB, Non-Clustered Systems

System                 CPU                          Chips/Cores – Memory   QphH@3000GB
SPARC T5-4             3.6 GHz SPARC T5             4/64 – 2048 GB         409,721.8
SPARC T4-4             3.0 GHz SPARC T4             4/32 – 1024 GB         205,792.0
IBM Power 780          4.1 GHz POWER7               8/32 – 1024 GB         192,001.1
HP ProLiant DL980 G7   2.27 GHz Intel Xeon X7560    8/64 – 512 GB          162,601.7

So, in short, the 32-core SPARC T4-4 (3.0 GHz and 4 MB L3 cache) delivers more QphH@3000GB than IBM with their 32-core POWER7 system (4.1 GHz and 32 MB L3 cache), and also more than HP with the 64-core Intel Xeon system (2.27 GHz and 24 MB L3 cache).  So where exactly is SPARC lacking?

Right, one could argue that both competing results aren't exactly new.  So let's do some speculation:

IBM's current Performance Report lists the above-mentioned IBM Power 780 with an rPerf value of 425.5.  A successor to that Power 780 with P7+ CPUs would be the Power 780+ with 64 cores, which is available at 3.72 GHz.  It is listed with an rPerf value of 690.1, which is 1.62x more.  So based on IBM's own performance estimates, and assuming that storage is not the limiting factor (IBM tested with 177 SSDs in the submitted result; they're welcome to increase that to 400), they would not be able to double the performance of the POWER7 system.  And they'd need more than that to beat the SPARC T5-4 result.  This is even more challenging in the "per core" metric that IBM values so highly.

For x86, the story isn't any better.  Unfortunately, Intel doesn't have such handy rPerf charts, so I'll have to fall back to SPECint_rate2006 for this one.  (Note that I am not a big fan of using one benchmark to estimate another.  SPECcpu especially is not very suitable for estimating database performance, as there is almost no IO involved.)  The above HP system is listed with 1580 SPECint_rate2006.  The best result as of 2013-06-14 for the newer Intel Xeon E7-4870 with 8 CPUs is 2180 SPECint_rate2006.  That's an improvement of 1.38x.  (If we just take the increase in clock rate and core count, we'd expect 1.32x.)  I'll stop here and let you do the math yourself - it's not very promising for x86...

Of course, IBM and others are welcome to prove me wrong - but as of today, I'm still waiting for recent publications in this performance range.

So what have we learned?

  • There's some evidence that storage might have been the limiting factor that prevented the SPARC T5-4 from scaling beyond 2x.
  • The myth that SPARC cores don't perform is just that - a myth.  Next time you hear it, ask your IBM sales rep when they'll publish TPC-H results for POWER7+.
  • Cache memory isn't the magic performance switch some people think it is.
  • Scaling a CPU architecture (and the OS on top of it) beyond a certain limit is hard.  It seems to be a little harder in the x86 world.

What did I miss?  Well, price/performance is something I'll let you discuss with your sales reps ;-)

And finally, before people ask - no, I haven't moved to marketing.  But sometimes I just can't resist...


Disclosure Statements

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

TPC-H, QphH, $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org; results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.

SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.

Wednesday Jun 12, 2013

Growing the root pool

Some small in-between laptop experiences...  I finally decided to throw away that other OS (I used it so rarely that I regularly had to use the password-reset procedure...).  That gave me another 50 GB of valuable laptop disk space - fortunately on the right part of the disk.  So in theory, all I'd have to do is resize the Solaris partition, tell ZFS about it and be happy...  Of course, there are the usual pitfalls.

To avoid confusion: much of this is x86-related.  On regular SPARC servers, you don't have any of the problems for which I describe solutions here...

First of all, you should *not* try to resize the partition that hosts your rpool while Solaris is up and running.  It works, but there are nicer ways to do a shutdown.  (What happens is that fdisk will not only create the new partition, but also write a default label into it, which means that ZFS will not find its slice, which will make Solaris very unresponsive...)  The right way to do this is to boot off something else (PXE, USB, DVD, whatever) and then change the partition size.  Once that's done, re-create the slice for the ZFS rpool.  The important part is to use the very same starting cylinder; the length, naturally, will be larger.  (At least, I had to do that, since the original zpool lived in a slice.)

After that, it's back to the book:  Boot Solaris and choose one of "zpool set autoexpand=on rpool" or "zpool online -e rpool c0t0d0s0", and there you go - 50 GB more space.
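
Condensed into commands, the whole procedure looks roughly like this - a sketch, where the device c0t0d0s0 and the hostname match my laptop, so adjust to your own disk:

(booted from rescue media)
# fdisk: grow the Solaris2 partition - new size, same starting cylinder
# format -> partition: re-create slice 0 with the same starting cylinder
#                      and the new, larger size

(back in Solaris, booted from the resized rpool)
root@laptop:~# zpool set autoexpand=on rpool
root@laptop:~# zpool list rpool     # should now show the additional space

or, if you prefer the one-shot variant without setting the autoexpand property:

root@laptop:~# zpool online -e rpool c0t0d0s0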

Did I forget to mention that I actually did a full backup before all of this?  I must be getting old...

Thursday Apr 04, 2013

A few remarks about T5

By now, most of you will have seen the announcement of the T5 and M5 systems.  I don't intend to repeat any of this, but I would like to share a few early thoughts.  Keep in mind, those thoughts are mine alone, not Oracle's.

It was rather obvious during the Launch Event that we will enjoy the competition with IBM even more than before.  I will not join the battle of words here, but leave you with a very nice summary (of the first few skirmishes) found on Forbes.  It is worth two minutes of reading - I find it very interesting how IBM seems to be losing interest in performance...

Since much of the attention we are getting is based on performance claims, I thought it would be nice to have a short, clearly arranged overview of the more commonly used benchmark results that were posted.  I will not compare the results to any other systems here, but leave this as a very entertaining exercise for you ;-)

There are more performance publications, especially on the BestPerf blog.  Some of these are interesting because they compare T5 to x86 CPUs - something I recommend doing if you don't shy away from reconsidering your view of the world from time to time.  But the ones I listed here are more likely to be accepted as "independent" benchmarks than some others.  Now, we all know that benchmarking is a leap-frogging game; I wonder who will jump next?  (We've leap-frogged our own systems a couple of times, too...)  And to finish this entry off, I'd like to remind you that performance is only one part of the equation.  What usually matters just as much, if not more, is price/performance.  In the benchmarking game, we can usually only compare list prices - have a go at that!  To quote Larry here:  “You can go faster, but only if you are willing to pay 80% less than what IBM charges.”

Competition is an interesting thing, don't you think?

Monday Jan 14, 2013

LDoms IO Best Practices & T4 Red Crypto Stack

In November, I presented at DOAG Konferenz & Ausstellung 2012.  Now, almost two months later, I finally get around to posting the slides here...

  • In "LDoms IO Best Practices" I discuss different IO options for both disk and networking and give some recommens on how you to choose the right ones for your environment.  A couple hints about performance are also included.

I hope the slides are useful!

Friday Dec 21, 2012

What's up with LDoms: Part 6 - Sizing the IO Domain

Before Christmas break, let's look at a topic that's one of the more frequently asked questions: Sizing of the Control Domain and IO Domain.

By now, we've seen how to create the basic setup, create a simple domain and configure networking and disk IO.  We know that for typical virtual IO, we use vswitches and virtual disk services to provide virtual network and disk services to the guests.  The question to address here is: how much CPU and memory is required in the Control Domain and IO domain (or in any additional IO domain) to provide these services without becoming a bottleneck?

The answer to this question can be very quick: LDoms Engineering usually recommends 1 or 2 cores for the Control Domain.

However, as always, one size doesn't fit all, and I'd like to look a little closer. 

Essentially, this is a sizing question just like any other system sizing.  So the first question to ask is: What services is the Control Domain providing that need CPU or memory resources?  We can then continue to estimate or measure exactly how much of each we will need. 

As for the services, the answer is straightforward:

  • The Control Domain usually provides
    • Console Services using vntsd
    • Dynamic Reconfiguration and other infrastructure services
    • Live Migration
  • Any IO Domain (either the Control Domain or an additional IO domain) provides
    • Disk Services configured through the vds
    • Network Services configured through the vswitch

For sizing, it is safe to assume that vntsd, ldmd (the actual LDoms Manager daemon), ldmad (the LDoms agent) and any other infrastructure tasks will require very little CPU and can be ignored.  Let's look at the remaining three services:

  • Disk Services
    Disk Services have two parts:  data transfer from the IO domain to the backend devices, and data transfer from the IO domain to the guest.  Disk IO in the IO domain is relatively cheap; you don't need many CPU cycles to deal with it.  I have found 1-2 threads of a T2 CPU to be sufficient for about 15,000 IOPS.  Today we usually use T4...
    However, this also depends on the type of backend storage you use.  FC or SAS raw-device LUNs come with very little CPU overhead.  On the other hand, if you use files hosted on NFS or ZFS, you are likely to see more CPU activity.  Here, your mileage will vary, depending on the configuration and usage pattern.  Also keep in mind that backends hosted on NFS or iSCSI involve network traffic, too.
  • Network Services - vswitches
    There is a very old sizing rule that says you need 1 GHz worth of CPU to saturate 1 GBit worth of ethernet.  SAE has published a network encryption benchmark where a single T4 CPU at 2.85 GHz transmits around 9 GBit at 20% utilization.  Converted into strands and cores, 20% of that CPU's 64 strands is about 13 strands - less than 2 cores - for 9 GBit worth of traffic.  Encrypted, mind you.  Applying the old rule to this result, we would need just over 3 cores at 2.85 GHz to do 9 GBit - it seems we've made some progress in efficiency ;-)
    Applying all of this to IO domain sizing, I would consider 2 cores an upper bound for typical installations.  You might very well get along with just one core, especially on smaller systems like the T4-1, where you're not likely to have several guest systems that each require 10 GBit wirespeed networking.
  • Live Migration
    When considering Live Migration, we should understand that the Control Domains of the two systems involved do all the actual work.  They encrypt, compress and send the source system's memory to the target system, and for this, they need quite a bit of CPU.  Of course, one could argue that Live Migration happens in the background, so it doesn't matter how fast it's actually done.  However, there's still the suspend phase, where the guest system is suspended and the remaining dirty memory pages are copied over to the other side.  This phase, while typically very short, significantly impacts the "live" experience of Live Migration.  And while other factors like guest activity level and memory size also play a role, there's a direct connection between Control Domain CPU power and the length of this suspend time.  This relation has been studied and published in the whitepaper "Increasing Application Availability Using Oracle VM Server for SPARC (LDoms) An Oracle Database Example".  The conclusion: for minimum suspend times, configure 3 cores in the Control Domain.  I personally have had good experience with 2 cores, measuring suspend times as low as 0.1 seconds with a very idle domain - so again, your mileage will vary.

    Another thought here:  The Control Domain doesn't usually do Live Migration on a permanent basis.  So if a single core is sufficient for the IO domain role of the Control Domain, you are in good shape for everyday business with just one core.  When you need additional CPU for a quick Live Migration, why not borrow it from somewhere else - the domain being migrated, or any other domain not currently very busy?  CPU DR lends itself to exactly this purpose, as the sketch below shows.
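
For example, borrowing CPU for the duration of a migration could look like this - a hypothetical sketch, with domain names and core counts made up:

# Temporarily give the Control Domain a second core:
primary# ldm set-core 2 primary
# Migrate the guest - the extra CPU power shortens the suspend phase:
primary# ldm migrate-domain ldg1 root@target-host
# Afterwards, return to the everyday configuration:
primary# ldm set-core 1 primary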

As you've seen, there are some rules and there is some experience, but still, there isn't one single answer.  In many cases, you should be fine with a single core on T4 for each IO domain.  If you use Live Migration a lot, you might want to add another core to the Control Domain.  On larger systems with higher networking demands, two cores for each IO domain might be right.  If these recommendations are good enough for you, you're done.  If you want to dig deeper, simply check what's really going on in your IO domains: use mpstat(1M) to study the utilization of your IO domain's CPUs in times of high activity, and perhaps record CPU utilization over a period of time, using your tool of choice.  (I recommend DimSTAT for that.)  With these results, you should be able to adjust the amount of CPU resources of your IO domains to your exact needs.  However, when doing that, please remember those unannounced utilization peaks - don't be too stingy.  Saving one or two CPU strands won't buy you much, all things considered.

A few words about memory:  This is much more straightforward.  If you're not using ZFS as a backing store for your virtual disks, you should be well in the green with 2-4 GB of RAM.  My current test system, running Solaris 11.0 in the Control Domain, needs less than 600 MB of virtual memory.  Remember that 1 GB is the supported minimum for Solaris 11 (raised to 1.5 GB for Solaris 11.1).  If you do use ZFS, you might want to reserve a couple of GB for its ARC, so perhaps 8 GB are more appropriate.  For the Control Domain, which is the first domain to be bound, take 7680 MB, which adds up to 8 GB together with the hypervisor's own 512 MB, nicely fitting the 8 GB boundary favoured by the memory controllers.  Again, if you want to be precise, monitor memory usage in your IO domains.

Links:

Update: I just learned that the hypervisor doesn't always take exactly 512MB. So if you do want to align with the 8GB boundary, check the sizes using "ldm ls-devices -a mem". Everything bound to "sys" is owned by the hypervisor.
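
In commands, the check and the alignment might look like this (7680M being the value discussed above, and assuming the hypervisor really owns 512MB on your system):

# See how much memory the hypervisor itself owns - everything bound to "sys":
primary# ldm ls-devices -a mem
# Then size the Control Domain to fill the gap up to the 8GB boundary:
primary# ldm set-memory 7680M primary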