News, tips, partners, and perspectives for Oracle’s virtualization offerings

Oracle VM 3.4.1 and new performance features

Jeff Savit
Product Management Senior Manager

Oracle VM 3.4.1 was just released - a substantial release with multiple new features like storage live migration, support for virtual appliances, FCoE and UEFI boot support, and upgrades to system software components. Please refer to the preceding link and to the What's New page in the Release Notes.

Also, Oracle VM 3.3.4 was recently released. This maintenance release corrects bugs, and also adds new features for performance. Customers who want to stay on the Oracle VM 3.3 release should move to this level for these improvements.

New performance features

These newer releases provide higher performance and scale, especially 3.4.1. In our testing, substantial performance benefits for CPU, network, and disk were observed simply by converting to the new release.

Additionally, a new feature can improve guest disk I/O performance for Oracle Linux guests by increasing the number of in-flight I/O requests and increasing the amount of data transferred in each request. This applies to Linux guests and requires administrative steps in the guest domain.

Guests with paravirtualized drivers (PVM guests, or HVM guests with PV drivers) process disk I/O through a ring buffer shared between domU (the guest VMs) and dom0. By default, the ring buffer has 32 entries of 128KB each, with each entry representing an in-flight I/O request.

I/O requests larger than the buffer entry size are split into multiple requests, which decreases performance. With Oracle Linux, the maximum size of an I/O request can be controlled by the parameter xen-blkfront.max, measured in 4KB units. The maximum value for this setting is 256, which permits I/O requests up to 1MB (1024KB) in size (xen-blkfront.max=256) without breaking them into multiple requests. This can improve streaming sequential I/O, particularly if the application block size is bigger than the default (iostat in the guest can tell you, or you may know from the application itself). This parameter can be set on Oracle Linux 6 and Oracle Linux 7.
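The arithmetic behind this parameter is simple. A short shell sketch (the numbers come from the description above; nothing here is a kernel API call):

```shell
# xen-blkfront.max is measured in 4KB units, so a setting of 256
# allows requests of up to 256 * 4KB = 1024KB (1MB) before splitting.
max=256
unit_kb=4
request_kb=$((max * unit_kb))
echo "largest unsplit I/O request: ${request_kb}KB"
```

If the application issues writes larger than this value, each one is split into multiple ring entries, which is the overhead the parameter avoids.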

Another setting, xen-blkfront.max_ring_page_order, controls the number of pages used for the ring buffers, which permits more in-flight disk I/O operations. Pages for shared ring buffers must be allocated in powers of two, and are adjusted by setting the exponent N in 2^N (2**N for FORTRAN programmers). xen-blkfront.max_ring_page_order=4 would permit 2^4 == 16 ring buffer pages, and support as many in-flight operations as fit in that number of pages given the buffer entry size. This can improve performance if the storage devices aren't saturated and can handle more concurrent I/O. It especially benefits physical disks rather than virtual disks served from a repository. This feature is available in Oracle Linux 7.2, and requires UEK3 3.8.13-90 or later.
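A rough sketch of how the page order translates into concurrency. The 32-slots-per-page figure is an assumption based on the default of one ring page holding 32 entries, not something stated by the driver documentation:

```shell
# Sketch: estimate in-flight I/O capacity from the ring page order.
# Assumption: each ring page holds 32 request slots, matching the
# default of one page / 32 entries described above.
order=4                         # xen-blkfront.max_ring_page_order=4
pages=$((1 << order))           # 2^4 = 16 ring buffer pages
slots=$((pages * 32))           # approx. in-flight request slots
echo "ring pages: ${pages}, approx in-flight requests: ${slots}"
```

More slots only help if the backing storage can actually service more concurrent requests; a saturated device gains nothing.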

How to set

The variables can be set in /etc/modprobe.conf, for example
options xen-blkfront.max=256
or on the guest kernel boot command line in grub.
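On newer Oracle Linux releases, /etc/modprobe.conf is deprecated in favor of drop-in files under /etc/modprobe.d/. A sketch of what such a file might look like (the filename is illustrative, and in modprobe options syntax the module name and its parameters are space-separated; note that if xen-blkfront is built into the kernel rather than loaded as a module, only the boot command line form applies):

```
# /etc/modprobe.d/xen-blkfront.conf  (illustrative filename)
options xen-blkfront max=256 max_ring_page_order=4
```

When in doubt, the kernel boot command line shown below works in either case.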

The two settings can be combined on the grub kernel line like this (split over multiple lines for illustration):

kernel /vmlinuz-3.8.13-44.1.4.el6uek.x86_64 ro root=LABEL=/ \
           SYSFONT=latarcyrheb-sun16 \
           LANG=en_US.UTF-8 KEYTABLE=us \
           xen-blkfront.max=64 xen-blkfront.max_ring_page_order=4


Oracle VM 3.4.1 is an important release with many new features. It substantially improves performance for all virtual machines without additional administrative overhead, provides better response time in the Oracle VM Manager user interface, and increases scale in the number of VMs and servers supported and the resources they use. Additionally, Oracle Linux customers can tune I/O performance by changing the settings described in this blog from their defaults.

Results will depend on infrastructure and workload. In particular, a fully saturated storage environment won't be made any faster by adding these tuning parameters. However, these changes will help Oracle VM environments fully exploit the capabilities of the underlying hardware and get closer to bare-metal performance.

Join the discussion

Comments (8)
  • aKuncoro Saturday, March 26, 2016

    Thanks Jsavit,

    Is there any possibility to install Oracle VM Server and Oracle VM Manager on the same server?

    Best Regards,


  • Jeff Tuesday, April 26, 2016

    Hi aKuncoro,

    Technically it's possible, by running Oracle VM Manager in a Linux VM on the same server it manages, but it's not supported and it's not a good idea. Think about the complications of installing the manager in a VM on the very server it's going to set up to run VMs in the first place, and how hard it would be to troubleshoot when the manager or the server has problems. And not just on the same server: the rule is to not run Oracle VM Manager on any of the servers it controls.

    Instead of doing that, you can run Oracle VM Manager in a VM on servers managed by other Oracle VM Manager instances. That works fine and is very convenient. I hope that helps, Jeff

  • Eric W Wednesday, April 27, 2016

    I read that /etc/modprobe.conf is deprecated in OL6. Could you please let me know where I should add options xen-blkfront.max=256?


  • Jeff Wednesday, April 27, 2016

    Hi Eric - just add that to the kernel line as shown above (the bold-fonted text). That's the way I did it. Hope that helps, Jeff

  • guest Thursday, April 28, 2016

    Dear, I need a concept for OVM clustering.

    OPTION 1:

    I have two identical servers for OVS, and I will be using the local disks of both servers to install the OS/binaries of the applications. Suppose Oracle RAC/Microsoft Cluster for SQL. Is this the best scenario?

    OPTION 2:

    I get a 500GB iSCSI mirrored disk, discover it and create a repository on it, then create 2 VMs for clustering in a pool using the same disk.

    There is a SHARE option while creating a disk, and I am using the same shared disk for the Microsoft clustering. But cluster validation does not pass on the disk this way.

    It passes validation if I allocate the iSCSI disk directly to the VM (NOT FROM REPOSITORY).

    any idea?

    Is option 2 the right scenario for OVM clustering?

  • Jorge Velasco Tuesday, September 25, 2018
    When I add the option xen-blkfront.max_ring_page_order=4, I get the following errors in the boot process and Linux gets stuck:

    19 xenbus_dev_probe on device/vbd/51744
    2 writing ring-page-order
    failed to write error node for device/vbd/51904 (2 writing rin-page-order)
    2 writing ring-page-order
    failed to write error node for device/vbd/51920 (2 writing rin-page-order)

    I have Oracle Linux Server release 7.5 with kernel: 4.1.12-124.19.2.el7uek.x86_64

    I only had to leave the option xen-blkfront max_indirect_segments=256 in the modprobe config file.

    Any idea?
  • Jeff Tuesday, September 25, 2018
    Hi Jorge,

    I would open an SR to follow up on that, or just pull the option out to save effort unless there's evidence that more concurrent I/O is needed. The actual kernel parameter syntax may have changed since the blog was posted, so I'll have to research.
  • Jeff Tuesday, September 25, 2018
    I never saw the older comment, sorry: yes, option 2 is right. Give a LUN directly to the VM for any disk needed for Windows clustering.