Tuesday Aug 13, 2013

The life of a Linux RPM (package)

Another frequently asked question related to Oracle Linux is how versions of specific packages (RPMs) are picked.

A Linux distribution is basically a collection of a ton of open source projects that make up the operating system environment, with the Linux kernel at its core. Linux as a development project is about the Linux kernel specifically. There are then many (1000's of) other open source projects out there, and a Linux distribution is an OS made up of, at its core, the kernel, plus tons of those other projects packaged up. Now some packages are more critical than others; there's a small true core of packages that you will find in any Linux distribution: glibc (C library), gcc (compiler), filesystem utils, core utils, binutils, bash,... etc. A good guess would be about 150-200 packages that make up pretty much any more or less usable environment (yes, you can do with far fewer, but I'm talking standard OS installs here...)

All these projects have their own development cycle, their own maintainers/developers, their own project plans, and their own dependencies. They all just kind of move along at their own pace, worked on by (usually) different people, and so on. So how do these 1000's of packages get into Oracle Linux?

Oracle Linux is an exact replica of Red Hat Enterprise Linux (same packages, same source code, same versions,...)... so that's what we base our distribution on (similar to CentOS). Now Red Hat Enterprise Linux, in turn, uses the Fedora Linux distribution as its upstream baseline. Fedora is a community distribution (fedoraproject.org) that typically lives far ahead of what anyone would install for a stable environment. It's a community-driven, cutting-edge Linux distribution. Fedora as a project has very frequent releases (roughly every 6 months). The Fedora maintainers distribute maintainership of these 1000's of RPMs across a group of people, and they gather newer versions of all these projects and build them for a given version of Fedora. They then stabilize this and release it. By stabilizing, I mean they create packages, test whether all the dependencies work and whether the build environment is consistent, and if there are bugs, fix them of course.

So Fedora evolves rapidly (a new release roughly every 6 months) and as you go from Fedora 12 to 13 to 14 etc, you see the packages of gcc and glibc and all other stuff evolve version by version, gcc 3.2 to 3.3 to 4.0 to 4.4 etc. Depending on when the Fedora project starts "freezing" the package list of the next version, that's what the various versions of those 1000's of packages will be based off of. The maintainers usually will, at some point in the release cycle, take the latest "stable" version of a given project (say gcc) and check that version into the Fedora tree. What happens here is that you typically see Fedora pick up the stable versions of projects pretty regularly. It helps shake out bugs, it reaches a large (end-)userbase, it helps the Linux community that wants to be on the cutting edge by doing a lot of the packaging for them, and it helps the downstream use of Fedora because many of the base/generic obvious bugs and build issues have been resolved during the Fedora test, dev and release process.

Now, because Fedora moves so rapidly, newer versions of RHEL, and as such OL, obviously skip quite a few versions of Fedora. For instance, RHEL6 is based on Fedora 12/13, RHEL5 was based on Fedora Core 6, etc... A new version of RHEL is released every few years. So Red Hat decides at a certain point in time to take a snapshot of a given Fedora release they deem stable enough at that point in time, fork that internally into a separate repository, change the trademarks and logos, add some packages that might not be in Fedora, test the components for more commercial/server use (most of Fedora is aimed at desktop use) and then release that as the next version of RHEL. And we then similarly follow with OL.

An important point to make is that within a given release cycle of RHEL or OL, the version of the packages typically doesn't change, at least not the major version (and usually not the major.minor version either). For the lifetime of, say, OL5, the version of, say, glibc, will remain pretty static. It will include bugfixes over the lifetime of the distribution version, security fixes and sometimes minor important things that might get backported from a newer version (albeit rarely), but that's it. So you have a relatively static version of an OS; it improves in stability, quality and security, but it doesn't improve much in terms of functionality. (Most of the enhancements would probably be in the kernel.)

This also means that a Linux distribution (RHEL, OL, CentOS,...) can skip package versions, if some external project goes from version 2.1 to 2.2 to 2.3 to 3.0 to 3.1 etc... over the lifecycle of OL5 and OL6, then OL5 might contain 2.1 and OL6 might contain 3.0 or 3.1. You won't see versions in between get picked up. Or, again, in some rare cases, if there's something really important that went into 2.3 that would be really relevant, it might have gotten backported to 2.1 as part of RHEL/OL. You cannot expect that OL5 would go from 2.1 to 2.2 to 3.0 for that given package. That's just not how things work. So if you expect major enhancements or features of some package that's newer than the version that's in the current distribution, you might (likely) have to wait until the next major release. Example : OL5 contains glibc 2.5, OL6 contains glibc 2.12. If there was something really, really important in glibc 2.8, that might have gotten backported into 2.5 and gotten into OL5, but it's unlikely. And OL5 will not start adding 2.6, or 2.7 or so into the distribution. And then the same cycle starts again with OL6, it contains glibc 2.12... but the current version of glibc upstream is 2.18, and Fedora 19 contains glibc 2.17. So the future version of RHEL7/OL7 might end up with 2.16 or 2.17, and it would have skipped over 2.13-2.15. One cannot expect that the commercial distributions backport features of future package versions into prior versions. That doesn't happen for OL, or for RHEL or CentOS or SLES etc.

What does happen, and it's important to point this out, is the fixing of CVEs/security vulnerabilities and critical bugs. For example, let's say there is a security issue found in glibc 2.17 (upstream), and this is also relevant to the glibc 2.5 found in RHEL5/OL5. We obviously will end up fixing that in 2.5 (backport the security fix) and in 2.12 (OL6)... So critical fixes and security vulnerabilities found in any package version of a supported Linux distribution will get fixed in the various versions where they matter.

You can always track this by looking at the changelog of an RPM or searching for a CVE number; you will see hits on different versions of an RPM in the different versions of OL where the fix is relevant.
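
As a quick illustration (the package and CVE number here are just examples), you can grep the changelog of an installed RPM to see whether a particular fix was backported into the build you are running:

  # list every CVE mentioned in the changelog of the installed glibc build
  rpm -q --changelog glibc | grep -i cve-

  # or check for one specific CVE number
  rpm -q --changelog glibc | grep -i cve-2013-0242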

This is pretty normal stuff: new features go into new versions of a product, just like new features go into new versions of the Oracle database. We will fix problems and backport changes into older versions, but you will not see new features of a new version of the database pop up in an old version of the database. It's not rocket science. A Linux distribution is a product based on tons of small subcomponents, but in the end the major release is the overarching "feature" release.

The few exceptions (obviously there are exceptions :-)) are : (1) it's possible that new packages for new products or components are introduced during the lifetime of the OS release, so a new RPM can be introduced in 5.8 or 5.9 or so... (2) some of the backports of features I talked about earlier can introduce some enhancements, although that's rare (3) the Linux kernel is probably the most lively component in the OS, where the rate of enhancements is the largest compared to anything else.

I hope this helps.

Of updates and errata.

A frequently discussed topic inside Oracle and also outside with customers and partners is Oracle Linux versions and how to treat updates and support and certifications and minimum levels. Here's our take on it, from the Oracle Linux side.

When talking about Oracle Linux and versions, there really are 3 major components :

-1- A major new release, such as Oracle Linux 5, Oracle Linux 6,...

A major new release is an update of the entire OS: kernel, userspace, all the 1000's of packages that make up Oracle Linux. A major release is a significant change compared to the previous version. You will see pretty much every package (RPM) updated to a whole new version, like Apache, MySQL, GCC, glibc, X-windows, gnome, etc etc... In a number of cases, the owners of the packages are not so careful about maintaining backward compatibility and introduce different-style config files that could make upgrades difficult or sometimes impossible. This is one of the reasons why it's not easy to 'upgrade' from, say, Oracle Linux 5 to Oracle Linux 6. While it would be ok for a good number of RPMs, it's not guaranteed to work for everything. A config file might get overwritten, or the older edited config file is not compatible with the new version...etc. So upgrading here very often means a new/fresh install of the OS on a server. We see most customers go to new versions at the time of a hardware refresh and use that as a good opportunity, and that makes total sense.

Because a major version has significant changes to core components (a new kernel with new features, a new glibc,...), and unfortunately sometimes changes that require actual application changes and not just testing (upstart from OL5 to OL6,... potentially interesting changes due to systemd in future versions), there's typically a good reason to do certification testing of userspace applications on top of the new version. The way we work here is that we build on the lowest common version of the OS we want to support and run certification against newer versions. So we build an Oracle product on Oracle Linux 5, we won't support anything on a version that's older than Oracle Linux 5, and we do extra testing and certification for any new major version, like Oracle Linux 6. This can require us to make some changes to our application (like the database), or require OS bugfixes, in order to be able to complete that certification; this was a big reason why it took a long time to be able to consider Oracle Linux 6 certified for the database (due to, for instance, things like upstart).

ISVs typically will do the same thing; it takes some time for development teams to add support for a new major version of the OS, again because sometimes application changes might be required, or OS changes might be required, like bugfixes found during testing.

A major OS release happens every few years, not more frequently: OL4 in 2006 (for us, since we started with update 4), OL5 in mid-2007, OL6 in early 2011.

-2- A minor update, really just a point in time current snapshot of a given major release with bugfixes and security updates applied (Oracle Linux 5 update 9, Oracle Linux 6 update 4...)

A minor update, released on a regular basis (every several months), is really just a snapshot in time of the major OS release. Once a version is out, bugfixes and security updates are introduced into that release on an ongoing basis, so out of the 1000's of packages, a number of them will get a minor bugfix update every now and then. These updates really are focused on fixing bugs and fixing security vulnerabilities. New features are normally not introduced into packages as part of a given major OS release. Sometimes there are some new things added, but they are only introduced if they don't break compatibility or change the understood use of that package. Because a major OS release only happens every few years, and to make it easy to provide a good snapshot of fixes within the release cycle, updates are done on a regular basis. The update is literally a snapshot at a given date: the latest version of all packages within the release are bundled together and put onto updated installation media (ISO).

This makes it convenient for users in a number of ways : 1) As mentioned in the first point, sometimes we need to create OS bugfixes for an application to work on a given version of the OS, and these fixes go into the OS at some point or another. It is very convenient to use the update releases to point customers to a minimum version as a starting point. 2) A very important component that gets updated regularly (bugfixes, updated device drivers/hardware support) is the kernel. If new hardware is released, you typically need a new boot kernel to recognize it; an update release pretty much always has a newer version of the Linux kernel with updated device drivers on the installation media, and that's required to be able to install a given OS version on newer hardware. So it might be that you need a minimum level of Oracle Linux 5.9 to install on the latest version of a given server... but it's still "Oracle Linux 5" as a product.

So, look at update releases as minimum patch levels of an OS release, not as a product version. Oracle Linux 5.9 is not a product version; Oracle Linux 5 is the product, and update 9 just indicates a recent point in time of the fixes made in the product. Too often a customer (or product team) certifies on a very specific update version and makes the mistake of implying really just that version, not that version and newer. We all stand by the fact that if the minimum version required is, let's say, Oracle Linux 5 update 6, then that always implies Oracle Linux 5, starting from the update 6 snapshot and anything newer since then, as part of Oracle Linux 5.
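
For reference, and noting that the exact file name varies (older OL5 installs use /etc/enterprise-release, while /etc/oracle-release appears on later updates and on OL6), this is how you'd check which update snapshot a box was installed from versus what it is actually running:

  cat /etc/oracle-release      # e.g. "Oracle Linux Server release 5.9"
  uname -r                     # the running kernel, which may well be newer than the install media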

There is no point in sticking to a given update version and considering that a release; there are -always- important fixes, whether for potential crashes or security vulnerabilities, released after that update, and it really is not a best practice to stick to a certain point in time. We always do a lot of testing on any bugfix update or security fix, and there's no breakage or introduction of incompatibilities within a given OS version. Look at Oracle Linux X update Y as running Oracle Linux X; update Y is just a point in time. ISVs should point out a starting update of what is considered supported or certified, but with the understanding that it implies anything changed from that point on for the same major version X. So if 5.6 is the base certification level, then anything post 5.6 as part of Oracle Linux 5 should be OK, and it makes sense to try to remain as current as possible.

-3- An errata update, an update of a given package, either because of a bugfix or a security vulnerability fix.

At any point in time during the lifetime of a major OS version (OL5, OL6,...) we obviously fix bugs or address security issues. These fixes are introduced in a continuous stream of updates for each major OS. They always update just the minor digits of the package version, for instance the kernel: 2.6.39-300.0.1 to 2.6.39-300.0.2 etc... They do not introduce behavior changes or impact how an application runs; they make things more stable and secure. We try not to do this more often than necessary. It is highly recommended to apply these updates soon after they are released, most critically the kernel and glibc ones, as they sit under every application. Of course, with Ksplice updates we make kernel updates a breeze since there's no reboot involved. And as mentioned in -2-, these errata are regularly bundled into a newer snapshot of the OS.
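
As a minimal sketch of keeping up with errata from the command line (this assumes the yum security plugin, shipped as yum-plugin-security or yum-security depending on the release, and a repository or channel that carries errata metadata):

  yum install yum-plugin-security     # or yum-security on older releases
  yum --security check-update         # list outstanding security errata
  yum --security update               # apply only the security-relevant updates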

What's the take-away here?

We recommend that you look at an update release as a starting point, not as a product version, and we highly recommend that customers and partners be as current as possible in applying errata packages on their OS. It makes things more stable... it contains fixes... it contains patched security vulnerabilities... those all seem rather important to keep in mind.

So often, a customer service request comes into the support organization for a problem that's known and fixed in an errata update; downtime that could have been prevented by being more current...

Wednesday Aug 07, 2013

Oracle VM templates for Database 12c 12.1.0.1.0 both single instance and rac

Today we made available a few new Oracle VM templates on edelivery. A set of VM templates for database 12c and another set for database 11g 11.2.0.3.7.

You can find more information on the otn pages here.

A very important new feature added is the ability to deploy a single instance database. In the past the database templates were focused on RAC deployments (Real Application Clusters), but because of popular demand we also added support for single instance. With single instance you can really create a new VM with the database up and running in a matter of a few (very few) minutes, and with a very simple config file.

Example config file for single instance :

$ cat netconfig.ini 
NODE1=dbsingle1
NODE1IP=192.168.1.72
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=192.168.1.1
DOMAINNAME=wimmekes.net  # May be blank
DNSIP=8.8.8.8  # Starting from 2013 Templates allows multi value
CLONE_SINGLEINSTANCE=yes  # Setup Single Instance

That's literally it. You don't need to do anything other than run a few Oracle VM CLI or UI commands and run deploycluster and you're all set. After a few minutes, the VM will be pingable and you can run sqlplus against the database running inside the VM.

If you use the CLI, here is a sample workflow :

  • import the template
    - importtemplate repository name=[reponame] url=[http://myurl/template.tbz] server=[servername]
  • create vm from template
    - clone vm name=[templatename] destType=Vm destName=[vmname] serverpool=[serverpoolname]
  • create a new vnic
    - create vnic name=[macaddress] network=[network] (list network will show you the various networks)
  • remove the old vnics (you could rename or alter one, but to simplify I just remove the old vnics of the cloned vm and attach the newly created one)
    - remove vnic name=[macaddr] from vm name=[vmname]
    - show vm name=[vmname] to see the attached vnics

    And that's it, now you can use that netconfig.ini example, edit it for your environment and run deploycluster.
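
    As a rough sketch of that last step (flag names as I recall them from the deploycluster documentation; double-check them with deploycluster.py -h, and the VM name and password are obviously placeholders):

    ./deploycluster.py -u admin -p Welcome1 -M dbsingle1 -N netconfig.ini

    Here -u/-p are the Oracle VM Manager credentials, -M names the VM(s) cloned from the template, and -N points at the netconfig.ini shown above.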

    On top of single instance, the templates also give you the ability to easily configure and enable much of the new RDBMS 12c functionality :

    - Oracle Flex Cluster and/or Flex ASM, Hub/Leaf nodes
    - Container Database with x number of pluggable databases
    - Database Express
    - ACFS filesystem
    - Oracle Restart (single instance database with HA)
    - local or shared filesystem installs, including OCFS2 and ACFS
    - Admin Managed or Policy managed database creation with serverpools
    - OS kernel updated to the latest uek 2 version 2.6.39-400

    And all of the above are simple parameters in the config files. This can be 100% automated, 100% reproducible and you don't need to know how to configure them all yourself. As always, high quality work by Saar Maoz.

    Production ready, not trial, not using a random OS, all ready to go. Production-ready virtual appliances.

    Wednesday Jul 24, 2013

    The Ksplice differentiator

    It's been exactly two years since we acquired a small startup called Ksplice. Ksplice as a company created the zero downtime update technology for the Linux kernel, and they provided a service to their customers which tracked Linux kernel security fixes and provided these fixes as zero downtime Ksplice updates.

    Essentially the Ksplice technology allows us to create Linux kernel patches that can be applied in an online fashion. We are not talking about the ability to install a patch while the system is running and make it active after a reboot. We are talking about a running system with a given kernel being patched and this patch becoming active instantly, without the need for a costly reboot (costly in terms of the downtime caused by a reboot that has to be scheduled or coordinated, with systems and applications unavailable during that time).

    We offer this service as part of Oracle Linux Premier (and Premier Limited) support; there's no extra $$ add-on option for this, anyone with Premier/Premier Limited has full access to this service. We support both what we call the Red Hat Compatible Kernel (RHCK) and the Unbreakable Enterprise Kernel (UEK). So whether a customer starts from RHEL, from OL with RHCK or OL with UEK, they're covered.

    Essentially, when we release security errata for Oracle Linux, specifically for the Linux kernel itself, we release, as usual, a new kernel RPM; customers can just apply this RPM, reboot the server, and they have the errata applied/active. Or, with Ksplice, we provide what we call Ksplice zero downtime patches for each of the security fixes, which can be applied to running systems without a reboot, and the fixes are active/effective immediately. This can be done while production applications continue to run, at any point during the day or night, with no need to bring applications down or do any specific planning.

    Aside from continuing the model of providing a service for security updates, we have since done a few additional things for our customers :

    1) We integrated the Ksplice portal with the Unbreakable Linux Network (ULN). When a customer logs into ULN they can generate a Ksplice key, or when they run up2date to register a system they can automatically set it up to be enabled for the Ksplice tools. So there is no longer a need to have two separate registrations, one with ULN and one with the Ksplice update server; we now do this behind the scenes. The customer can then install the uptrack tools (these are the tools that download and apply the updates) and be ready to apply the updates. You can read more about that here.
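
    For reference, once a system is registered, day-to-day use of the uptrack tools is essentially this (a sketch; the package comes from the Ksplice channel tied to your ULN registration):

    yum install uptrack          # install the client tools
    uptrack-upgrade -y           # apply all available rebootless updates
    uptrack-show                 # list the updates currently applied
    uptrack-uname -r             # the "effective" kernel version, including the Ksplice fixes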

    2) While the above made it very convenient, it still meant that every server had to connect directly to servers hosted by us (oracle.com) and for many customers, it's very difficult to have servers in a datacenter be directly connected or have direct access to the internet. So to help these customers we created the offline client. A customer can create a local yum repository which contains RPMs that contain the ksplice updates for a given kernel and then distribute the updates locally within the company from a server on the intranet. This makes it easier to have one system that is registered with our system instead of having to register each server individually. You can read more about that here.
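
    A minimal sketch of what that looks like on a client machine (the repository URL is a placeholder for your internal mirror; the offline client and the per-kernel update packages come from the Ksplice channel you mirror locally):

    cat /etc/yum.repos.d/ksplice-local.repo
    [ksplice-local]
    name=Local Ksplice updates
    baseurl=http://yumserver.example.com/ksplice/ol6/x86_64/
    gpgcheck=1
    enabled=1

    yum install uptrack-offline              # the offline client tools
    yum install uptrack-updates-`uname -r`   # the update package matching the running kernel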

    Now - there is another fundamental use-case for the Ksplice technology that we have incorporated in the Oracle Linux support service.

    We obviously have a large and rapidly growing customer base that runs mission critical systems on Oracle Linux: database servers running on top of Oracle Linux, WebLogic servers, Oracle Applications, etc. In order to provide the very best customer experience, we have trained our engineers on the Oracle Linux side to make use of Ksplice technology for gathering diagnostics and even fixing specific problems.

    Imagine a server that cannot have downtime but we are working on diagnosing a problem. In some cases, we would typically create a kernel with some extra debug or diagnostics code, provide this to the customer and then they schedule a reboot to apply this kernel, they run their system, after gathering the data, they re-install the original kernel and continue. Then, if we find out the issue and have a fix for this, we can provide them with a kernel that has the fix, they have to schedule downtime, apply the fix and reboot. Typically these systems are interconnected. What do I mean by that? Typically a database server has an application frontend, they are multitiered environments, you have, for instance, 3 middle tier servers connecting to a database server. So in order to reboot the database server, the application admin has to first schedule downtime for the app, bring down the app, and then the database admin can bring down the database and sysadmin can reboot the server. Yes it's that complex. So if you have to do three reboots, you can imagine the cost of that and the time impact. It's not just about a quick reboot of the server, there's a whole ecosystem that goes along with this.

    In our case, when a customer has a critical production ticket open with us for their database server, we can do the following :

    1) if we cannot get diagnostics without adding code to the Linux kernel, we can create a Ksplice update for the diagnostics and provide this Ksplice update. The customer can then apply this update onto their production system, without any downtime whatsoever. (1 reboot saved on the backend, and saved an application shutdown for each app)

    2) once we gather the diagnostics, we can ask the customer to UNDO this Ksplice update, without downtime, the technology supports applying and removing patches online. (2 reboots saved and again saved an application shutdown for each app)

    3) if we then determine what the problem is, and we come up with a patch/bugfix, we can create a Ksplice update for this fix and let the customer apply this on their production system, again, without downtime, without any additional work. (3 reboots saved and yet again saved an application shutdown for each app)
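
    In practice, the apply/undo cycle above maps onto a couple of uptrack commands (the update ID is a placeholder for whatever ID the specific diagnostic or fix update gets):

    uptrack-install <update-id>      # apply the diagnostic or fix update, online
    uptrack-remove <update-id>       # back it out again once the data is gathered, still online
    uptrack-remove --all             # or remove everything uptrack has applied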

    This service is only provided in critical situations. We apply this model internally (1) on our own production systems that run Oracle Linux and (2) to provide fixes to customers running engineered systems like Exadata and Exalogic.

    Another thing that is important to point out is that you do not have to reboot your system in order to start using this service. An existing environment that's up and running can be made Ksplice-ready by just installing a few additional tools, no reboot needed. Once the tools are installed, they can be used to apply the updates to your current running system, so even prepping the server is done without downtime.

    Some customers have procedures in place that apply security updates on a regular basis, and as such they don't always want or need to be current the moment a security update is released; they might have less use for the security update service but can certainly still very much rely on the latter model of fixing critical issues. Other customers have very strict requirements to patch vulnerabilities as soon as possible; for them, the service we offer of releasing security errata both as a separate kernel RPM and as a Ksplice update is just absolutely invaluable. They are released at the same time, so you can be constantly up to date without worry.

    Let me give you an example using Oracle Linux 5 :

    I went to the extreme and installed Oracle Linux 5 update 4 (released 9-3-2009). Installed the Ksplice uptrack tools and without doing any reboot, I can now start updating my system using uptrack-upgrade.

    A timed 50 seconds later, the following errata updates were applied on this running system (without any impact on any running applications) :

    Installing [v5267zuo] Clear garbage data on the kernel stack when handling signals.
    Installing [u4puutmx] CVE-2009-2849: NULL pointer dereference in md.
    Installing [302jzohc] CVE-2009-3286: Incorrect permissions check in NFSv4.
    Installing [k6oev8o2] CVE-2009-3228: Information leaks in networking systems.
    Installing [tvbl43gm] CVE-2009-3613: Remote denial of service in r8169 driver.
    Installing [690q6ok1] CVE-2009-2908: NULL pointer dereference in eCryptfs.
    Installing [ijp9g555] CVE-2009-3547: NULL pointer dereference opening pipes.
    Installing [1ala9dhk] CVE-2009-2695: SELinux does not enforce mmap_min_addr sysctl.
    Installing [5fq3svyl] CVE-2009-3621: Denial of service shutting down abstract-namespace sockets.
    Installing [bjdsctfo] CVE-2009-3620: NULL pointer dereference in ATI Rage 128 driver.
    Installing [lzvczyai] CVE-2009-3726: NFSv4: Denial of Service in NFS client.
    Installing [25vdhdv7] CVE-2009-3612: Information leak in the netlink subsystem.
    Installing [wmkvlobl] CVE-2007-4567: Remote denial of service in IPv6
    Installing [ejk1k20m] CVE-2009-4538: Denial of service in e1000e driver.
    Installing [c5das3zq] CVE-2009-4537: Buffer underflow in r8169 driver.
    Installing [issxhwza] CVE-2009-4536: Denial of service in e1000 driver.
    Installing [kyibbr3e] CVE-2009-4141: Local privilege escalation in fasync_helper().
    Installing [jfp36tzw] CVE-2009-3080: Privilege Escalation in GDT driver.
    Installing [4746ikud] CVE-2009-4021: Denial of service in fuse_direct_io.
    Installing [234ls00d] CVE-2009-4020: Buffer overflow mounting corrupted hfs filesystem.
    Installing [ffi8v0vl] CVE-2009-4272: Remote DOS vulnerabilities in routing hash table.
    Installing [fesxf892] CVE-2006-6304: Rewrite attack flaw in do_coredump.
    Installing [43o4k8ow] CVE-2009-4138: NULL pointer dereference flaw in firewire-ohci driver.
    Installing [9xzs9dxx] Kernel panic in do_wp_page under heavy I/O load.
    Installing [qdlkztzx] Kernel crash forwarding network traffic.
    Installing [ufo0resg] CVE-2010-0437: NULL pointer dereference in ip6_dst_lookup_tail.
    Installing [490guso5] CVE-2010-0007: Missing capabilities check in ebtables module.
    Installing [zwn5ija2] CVE-2010-0415: Information Leak in sys_move_pages
    Installing [n8227iv2] CVE-2009-4308: NULL pointer dereference in ext4 decoding EROFS w/o a journal.
    Installing [988ux06h] CVE-2009-4307: Divide-by-zero mounting an ext4 filesystem.
    Installing [2jp2pio6] CVE-2010-0727: Denial of Service in GFS2 locking.
    Installing [xem0m4sg] Floating point state corruption after signal.
    Installing [bkwy53ji] CVE-2010-1085: Divide-by-zero in Intel HDA driver.
    Installing [3ulklysv] CVE-2010-0307: Denial of service on amd64
    Installing [jda1w8ml] CVE-2010-1436: Privilege escalation in GFS2 server
    Installing [trws48lp] CVE-2010-1087: Oops when truncating a file in NFS
    Installing [ij72ubb6] CVE-2010-1088: Privilege escalation with automount symlinks
    Installing [gmqqylxv] CVE-2010-1187: Denial of service in TIPC
    Installing [3a24ltr0] CVE-2010-0291: Multiple denial of service bugs in mmap and mremap
    Installing [7mm0u6cz] CVE-2010-1173: Remote denial of service in SCTP
    Installing [fd1x4988] CVE-2010-0622: Privilege escalation by futex corruption
    Installing [l5qljcxc] CVE-2010-1437: Privilege escalation in key management
    Installing [xs69oy0y] CVE-2010-1641: Permission check bypass in GFS2
    Installing [lgmry5fa] CVE-2010-1084: Privilege escalation in Bluetooth subsystem.
    Installing [j7m6cafl] CVE-2010-2248: Remote denial of service in CIFS client.
    Installing [avqwduk3] CVE-2010-2524: False CIFS mount via DNS cache poisoning.
    Installing [6qplreu2] CVE-2010-2521: Remote buffer overflow in NFSv4 server.
    Installing [5ohnc2ho] CVE-2010-2226: Read access to write-only files in XFS filesystem.
    Installing [i5ax6hf4] CVE-2010-2240: Privilege escalation vulnerability in memory management.
    Installing [50ydcp2k] CVE-2010-3081: Privilege escalation through stack underflow in compat.
    Installing [59car2zc] CVE-2010-2798: Denial of service in GFS2.
    Installing [dqjlyw67] CVE-2010-2492: Privilege Escalation in eCryptfs.
    Installing [5mgd1si0] Improved fix to CVE-2010-1173.
    Installing [qr5isvgk] CVE-2010-3015: Integer overflow in ext4 filesystem.
    Installing [sxeo6c33] CVE-2010-1083: Information leak in USB implementation.
    Installing [mzgdwuwp] CVE-2010-2942: Information leaks in traffic control dump structures.
    Installing [19jigi5v] CVE-2010-3904: Local privilege escalation vulnerability in RDS sockets.
    Installing [rg7pe3n8] CVE-2010-3067: Information leak in sys_io_submit.
    Installing [n3tg4mky] CVE-2010-3078: Information leak in xfs_ioc_fsgetxattr.
    Installing [s2y6oq9n] CVE-2010-3086: Denial of Service in futex atomic operations.
    Installing [9subq5sx] CVE-2010-3477: Information leak in tcf_act_police_dump.
    Installing [x8q709jt] CVE-2010-2963: Kernel memory overwrite in VIDIOCSMICROCODE.
    Installing [ff1wrijq] Buffer overflow in icmpmsg_put.
    Installing [4iixzl59] CVE-2010-3432: Remote denial of service vulnerability in SCTP.
    Installing [7oqt6tqc] CVE-2010-3442: Heap corruption vulnerability in ALSA core.
    Installing [ittquyax] CVE-2010-3865: Integer overflow in RDS rdma page counting.
    Installing [0bpdua1b] CVE-2010-3876: Kernel information leak in packet subsystem.
    Installing [ugjt4w1r] CVE-2010-4083: Kernel information leak in semctl syscall.
    Installing [n9l81s9q] CVE-2010-4248: Race condition in __exit_signal with multithreaded exec.
    Installing [68zq0p4d] CVE-2010-4242: NULL pointer dereference in Bluetooth HCI UART driver.
    Installing [cggc9uy2] CVE-2010-4157: Memory corruption in Intel/ICP RAID driver.
    Installing [f5ble6od] CVE-2010-3880: Logic error in INET_DIAG bytecode auditing.
    Installing [gwuiufjq] CVE-2010-3858: Denial of service vulnerability with large argument lists.
    Installing [usukkznh] Mitigate denial of service attacks with large argument lists.
    Installing [5tq2ob60] CVE-2010-4161: Deadlock in socket queue subsystem.
    Installing [oz6k77bm] CVE-2010-3859: Heap overflow vulnerability in TIPC protocol.
    Installing [uzil3ohn] CVE-2010-3296: Kernel information leak in cxgb driver.
    Installing [wr9nr8zt] CVE-2010-3877: Kernel information leak in tipc driver.
    Installing [5wrnhakw] CVE-2010-4073: Kernel information leaks in ipc compat subsystem.
    Installing [hnbz3ppf] Integer overflow in sys_remap_file_pages.
    Installing [oxczcczj] CVE-2010-4258: Failure to revert address limit override after oops.
    Installing [t44v13q4] CVE-2010-4075: Kernel information leak in serial core.
    Installing [8p4jsino] CVE-2010-4080 and CVE-2010-4081: Information leaks in sound drivers.
    Installing [3raind7m] CVE-2010-4243: Denial of service due to wrong execve memory accounting.
    Installing [od2bcdwj] CVE-2010-4158: Kernel information leak in socket filters.
    Installing [zbxtr4my] CVE-2010-4526: Remote denial of service vulnerability in SCTP.
    Installing [mscc8dnf] CVE-2010-4655: Information leak in ethtool_get_regs.
    Installing [8r9231h7] CVE-2010-4249: Local denial of service vulnerability in UNIX sockets.
    Installing [2lhgep6i] Panic in kfree() due to race condition in acpi_bus_receive_event.
    Installing [uaypv955] Fix connection timeouts due to shrinking tcp window with window scaling.
    Installing [7klbps5h] CVE-2010-1188: Use after free bug in tcp_rcv_state_process.
    Installing [u340317o] CVE-2011-1478: NULL dereference in GRO with promiscuous mode.
    Installing [ttqhpxux] CVE-2010-4346: mmap_min_addr bypass in install_special_mapping.
    Installing [ifgdet83] Use-after-free in MPT driver.
    Installing [2n7dcbk9] CVE-2011-1010: Denial of service parsing malformed Mac OS partition tables.
    Installing [cy964b8w] CVE-2011-1090: Denial of Service in NFSv4 client.
    Installing [6e28ii3e] CVE-2011-1079: Missing validation in bnep_sock_ioctl.
    Installing [gw5pjusn] CVE-2011-1093: Remote Denial of Service in DCCP.
    Installing [23obo960] CVE-2011-0726: Information leak in /proc/[pid]/stat.
    Installing [pbxuj96b] CVE-2011-1080, CVE-2011-1170, CVE-2011-1171, CVE-2011-1172: Information leaks in netfilter.
    Installing [9oepi0rc] Buffer overflow in iptables CLUSTERIP target.
    Installing [nguvvw6h] CVE-2011-1163: Kernel information leak parsing malformed OSF partition tables.
    Installing [8v9d3ton] USB Audio regression introduced by CVE-2010-1083 fix.
    Installing [jz43fdgc] Denial of service in NFS server via reference count leak.
    Installing [h860edrq] Fix a packet flood when initializing a bridge device without STP.
    Installing [3xcb5ffu] CVE-2011-1577: Missing boundary checks in GPT partition handling.
    Installing [wvcxkbxq] CVE-2011-1078: Information leak in Bluetooth sco.
    Installing [n5a8jgv9] CVE-2011-1494, CVE-2011-1495: Privilege escalation in LSI MPT Fusion SAS 2.0 driver.
    Installing [3t5fgeqc] CVE-2011-1576: Denial of service with VLAN packets and GRO.
    Installing [qsvqaynq] CVE-2011-0711: Information leak in XFS filesystem.
    Installing [m1egxmrj] CVE-2011-1573: Remote denial of service in SCTP.
    Installing [fexakgig] CVE-2011-1776: Missing validation for GPT partitions.
    Installing [rrnm0hzm] CVE-2011-0695: Remote denial of service in InfiniBand setup.
    Installing [c50ijj1f] CVE-2010-4649, CVE-2011-1044: Buffer overflow in InfiniBand uverb handling.
    Installing [eywxeqve] CVE-2011-1745, CVE-2011-2022: Privilege escalation in AGP subsystem.
    Installing [u83h3kej] CVE-2011-1746: Integer overflow in agp_allocate_memory.
    Installing [kcmghb3m] CVE-2011-1593: Denial of service in next_pidmap.
    Installing [s113zod3] CVE-2011-1182: Missing validation check in signals implementation.
    Installing [2xn5hnvr] CVE-2011-2213: Denial of service in inet_diag_bc_audit.
    Installing [fznr6cbr] CVE-2011-2492: Information leak in bluetooth implementation.
    Installing [nzhpmyaa] CVE-2011-2525: Denial of Service in packet scheduler API
    Installing [djng1uvs] CVE-2011-2482: Remote denial of service vulnerability in SCTP.
    Installing [mbg8auhk] CVE-2011-2495: Information leak in /proc/PID/io.
    Installing [ofrder8l] Hangs using direct I/O with XFS filesystem.
    Installing [tqkgmwz7] CVE-2011-2491: Local denial of service in NLM subsystem.
    Installing [wkw7j4ov] CVE-2011-1160: Information leak in tpm driver.
    Installing [1f4r424i] CVE-2011-1585: Authentication bypass in CIFS.
    Installing [kr0lofug] CVE-2011-2484: Denial of service in taskstats subsystem.
    Installing [zm5fxh2c] CVE-2011-2496: Local denial of service in mremap().
    Installing [4f8zud01] CVE-2009-4067: Buffer overflow in Auerswald usb driver.
    Installing [qgzezhlj] CVE-2011-2695: Off-by-one errors in the ext4 filesystem.
    Installing [fy2peril] CVE-2011-2699: Predictable IPv6 fragment identification numbers.
    Installing [idapn9ej] CVE-2011-2723: Remote denial of service vulnerability in gro.
    Installing [i1q0saw7] CVE-2011-1833: Information disclosure in eCryptfs.
    Installing [uqv087lb] CVE-2011-3191: Memory corruption in CIFSFindNext.
    Installing [drz5ixw2] CVE-2011-3209: Denial of Service in clock implementation.
    Installing [2zawfk0b] CVE-2011-3188: Weak TCP sequence number generation.
    Installing [7gkvlyfi] CVE-2011-3363: Remote denial of service in cifs_mount.
    Installing [8einfy3y] CVE-2011-4110: Null pointer dereference in key subsystem.
    Installing [w9l57w7p] CVE-2011-1162: Information leak in TPM driver.
    Installing [hl96s86z] CVE-2011-2494: Information leak in task/process statistics.
    Installing [5vsbttwa] CVE-2011-2203: Null pointer dereference mounting HFS filesystems.
    Installing [ycoswcar] CVE-2011-4077: Buffer overflow in xfs_readlink.
    Installing [rw8qiogc] CVE-2011-4132: Denial of service in Journaling Block Device layer.
    Installing [erniwich] CVE-2011-4330: Buffer overflow in HFS file name translation logic.
    Installing [q6rd6uku] CVE-2011-4324: Denial of service vulnerability in NFSv4.
    Installing [vryc0xqm] CVE-2011-4325: Denial of service in NFS direct-io.
    Installing [keb8azcn] CVE-2011-4348: Socket locking race in SCTP.
    Installing [yvevd42a] CVE-2011-1020, CVE-2011-3637: Information leak, DoS in /proc.
    Installing [thzrtiaw] CVE-2011-4086: Denial of service in journaling block device.
    Installing [y5efh27f] CVE-2012-0028: Privilege escalation in user-space futexes.
    Installing [wxdx4x4i] CVE-2011-3638: Disk layout corruption bug in ext4 filesystem.
    Installing [cd2g2hvz] CVE-2011-4127: KVM privilege escalation through insufficient validation in SG_IO ioctl.
    Installing [aqo49k28] CVE-2011-1083: Algorithmic denial of service in epoll.
    Installing [uknrp2eo] Denial of service in filesystem unmounting.
    Installing [97u6urvt] Soft lockup in USB ACM driver.
    Installing [01uynm3o] CVE-2012-1583: use-after-free in IPv6 tunneling.
    Installing [loizuvxu] Kernel crash in Ethernet bridging netfilter module.
    Installing [yc146ytc] Unresponsive I/O using QLA2XXX driver.
    Installing [t92tukl1] CVE-2012-2136: Privilege escalation in TUN/TAP virtual device.
    Installing [aldzpxho] CVE-2012-3375: Denial of service due to epoll resource leak in error path.
    Installing [bvoz27gv] Arithmetic overflow in clock source calculations.
    Installing [lzwurn1u] ext4 filesystem corruption on fallocate.
    Installing [o9b62qf6] CVE-2012-2313: Privilege escalation in the dl2k NIC.
    Installing [9do532u6] Kernel panic when overcommiting memory with NFSd.
    Installing [zf95qrnx] CVE-2012-2319: Buffer overflow mounting corrupted hfs filesystem.
    Installing [fx2rxv2q] CVE-2012-3430: kernel information leak in RDS sockets.
    Installing [wo638apk] CVE-2012-2100: Divide-by-zero mounting an ext4 filesystem.
    Installing [ivl1wsvt] CVE-2012-2372: Denial of service in Reliable Datagram Sockets protocol.
    Installing [xl2q6gwk] CVE-2012-3552: Denial-of-service in IP options handling.
    Installing [l093jvcl] Kernel panic in SMB extended attributes.
    Installing [qlzoyvty] Kernel panic in ext3 indirect blocks.
    Installing [8lj9n3i6] CVE-2012-1568: A predictable base address with shared libraries and ASLR.
    Installing [qn1rqea3] CVE-2012-4444: Prohibit reassembling IPv6 fragments when some data overlaps.
    Installing [wed7w5th] CVE-2012-3400: Buffer overflow in UDF parsing.
    Installing [n2dqx9n3] CVE-2013-0268: /dev/cpu/*/msr local privilege escalation.
    Installing [p8oacpis] CVE-2013-0871: Privilege escalation in PTRACE_SETREGS.
    Installing [cbdr6azh] CVE-2012-6537: Kernel information leaks in network transformation subsystem.
    Installing [1qz0f4lv] CVE-2013-1826: NULL pointer dereference in XFRM buffer size mismatch.
    Installing [s0q68mb1] CVE-2012-6547: Kernel stack leak from TUN ioctls.
    Installing [s1c6y3ee] CVE-2012-6546: Information leak in ATM sockets.
    Installing [2zzz6cqb] Data corruption on NFSv3/v2 short reads.
    Installing [kfav9h9d] CVE-2012-6545: Information leak in Bluetooth RFCOMM socket name.
    Installing [coeq937e] CVE-2013-3222: Kernel stack information leak in ATM sockets.
    Installing [43shl6vr] CVE-2013-3224: Kernel stack information leak in Bluetooth sockets.
    Installing [whoojewf] CVE-2013-3235: Kernel stack information leak in TIPC protocol.
    Installing [7vap7ys6] CVE-2012-6544: Information leak in Bluetooth L2CAP socket name.
    Installing [0xjd0c1r] CVE-2013-0914: Information leak in signal handlers.
    
    That's 190 kernel fixes that were released between 2009 and now, applied in one go, on a running system. When I go and look at the number of kernels we have released to customers for this version since 2009, I find just over 60 kernel RPMs. So that means, if you wanted to stay current using the traditional model of applying kernel updates, the model used by the other Linux vendors (or any other OS for that matter), you'd have had to schedule 60 reboots on your servers (for each server, with all the multi-tiered complexities), or if you did it once every several months, still at least 6-10 reboots per server.

    Now, with us: 0. Rebootless, zero downtime updates, active when installed, not after a reboot.

    Thursday Jul 18, 2013

    clarify oracle linux and oracle clusterware

    Someone forwarded a document to me earlier today in which Some Company made a statement implying that Oracle Clusterware was not free with Oracle Linux. I found it sort of amusing, because I think we've been rather clear on this for quite some time.

    So let me spell it out to make sure it's very, very clear.

    When a customer purchases Oracle Linux support subscriptions (Basic, Basic Limited, Premier, Premier Limited) or purchases an Oracle x86 server with Support, they have the full right-to-use included for Oracle Clusterware without any additional fees. There is no requirement to have another Oracle product in this cluster. There is no additional fee for a high availability option for Oracle Linux, it's included.

    This Clusterware is the software used as part of Oracle Real Application Clusters (RAC) and is a very solid, highly scalable, feature-rich clusterware. You can write your own extensions for your own applications or use some of the existing extensions out there for other apps.

    There is no additional fee. In fact you can see this on the clusterware homepage as well.

    Wednesday Jul 03, 2013

    easily install Oracle RDBMS 12cR1 on Oracle Linux 6

    This week we released the latest version of our database, Oracle database 12c Release 1. To make it very easy for people to start using it or trying it out, we already created the oracle-rdbms preinstall rpm and uploaded it to both ULN and public-yum.

    So in order to start the database install without trouble, these few simple steps will get you going :

    If you want to create a virtual machine

  • Download Oracle VM VirtualBox from virtualbox.org
  • or
  • Download Oracle VM Server from edelivery
  • Download Oracle Linux from edelivery
  • You can do a minimal installation of OL6, or any other installation that you prefer (Desktop, Workstation, etc). The install by default, if you don't register with ULN, will point to our free public-yum repository with all the latest RPMs (errata, bugfixes,...). It might make sense to run yum update although you don't really have to.

    Then just install the oracle-rdbms-server-preinstall RPM and your OS is completely configured to start the Oracle database installer. Simply type yum install oracle-rdbms-server-12cR1-preinstall and it will download all required dependencies, create the oracle user id, modify sysctl.conf and modify limits.conf.
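
    Put together, and assuming the public-yum repository is reachable, the preparation boils down to this (the checks at the end are just one way to see what the preinstall RPM changed):

    yum install oracle-rdbms-server-12cR1-preinstall
    id oracle                        # the database owner account it created
    sysctl fs.file-max               # one of the kernel parameters it set via sysctl.conf
    grep -r oracle /etc/security/limits.d/ /etc/security/limits.conf   # the shell limits it configured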

    Next, download the 12c R1 software, start the installer and you're good to go.

    We make it easy, or at least try to :). enjoy.

    Tuesday Jun 11, 2013

    ovm_utils 0.6.5

    Finally found some time to play with ovm_utils again and added another little tool to the package.

    ovm_utils is a collection of little tools I wrote over the last year or 2. They can help make command line use a little easier. Of course we have since introduced a real ovm_cli in Oracle VM Manager in 3.1 which is officially part of the product and officially supported. ovm_utils is provided as-is, for fun. If you find them useful, great, if not, oh well :-)

    ovm_logger (there's also a man page as part of the utilities, man/man8/...) is a little tool that you can run as a daemon or just as a log dump tool. Oracle VM Manager runs most of its tasks as jobs and handles most responses as events, so we have a joblog and an eventlog in the Oracle VM Manager database. When an action occurs from the UI, or if an error gets reported from an agent, these things create jobs and events. If you run ovm_logger with -d, it will just start up, open the joblog and eventlog and dump the history to stdout, complete with the timestamp of when it occurred. You probably want to redirect that output to a file because it can be a lot of data.

    If you run ovm_logger by itself, (without -d) then it basically starts logging events and jobs as of the time you start the tool. Any new job or event that occurs from then on, will be displayed, until you cancel the tool, kill it or use ctrl-c.

    Examples :

    ./ovm_logger -u admin -p MyPassword -h localhost -X -d > /tmp/logoutput

    ./ovm_logger -u admin -p MyPassword -h localhost -X

    # ./ovm_logger -u admin -p Manager1 -h localhost -X 
    Oracle VM Log utility 0.6.4.
    Connecting with a secure connection.
    Connected.
    Tue Jun 11 03:48:34 PDT 2013  Oracle VM Log
    Tue Jun 11 03:48:34 PDT 2013  Oracle VM Manager Version : 3.2.3.521
    Tue Jun 11 03:48:34 PDT 2013  Oracle VM Manager IP      : 192.168.1.5
    Tue Jun 11 03:48:34 PDT 2013  Oracle VM Manager UUID    : 0004fb0000010000b66b471827b0b09d
    Tue Jun 11 03:49:04 PDT 2013  Job - Rediscover Server wcoekaer-srv1
    Tue Jun 11 03:49:29 PDT 2013  Job - Refresh File Server srv4nfs
    Tue Jun 11 03:49:39 PDT 2013  Job - Start Virtual Machine ol6u3apitest
    Tue Jun 11 03:49:54 PDT 2013  Event - Job Aborted
    Tue Jun 11 03:49:54 PDT 2013  (06/11/2013 03:49:51:970 AM)
    Due to Abort by user: admin
    Tue Jun 11 03:49:54 PDT 2013  Job - Discover Server thisonedoesntexist
    Tue Jun 11 03:49:54 PDT 2013  []
    Tue Jun 11 03:50:29 PDT 2013  Event - Job Internal Error (Operation)
    Tue Jun 11 03:50:29 PDT 2013  (06/11/2013 03:50:26:420 AM)
    OVMAPI_4010E Attempt to send command: get_api_version to server: 192.168.1.10 failed. OVMAPI_4004E Server Failed Command: get_api_version , Status: org.apache.xmlrpc.XmlRpcException: I/O error while communicating with HTTP server: Connection refused [Tue Jun 11 03:50:26 PDT 2013] [Tue Jun 11 03:50:26 PDT 2013]
    Tue Jun 11 03:50:29 PDT 2013  Job - Discover Server wcoekaer-srv3
    < Tue Jun 11 03:50:29 PDT 2013  [{OPERATION_NAME=Discover Manager Server Discover, JOB_STEP=Commit, SERVER_NAME=Unknown, EXIT_STATUS=Failed:OVMAPI_4010E Attempt to send command: get_api_version to server: 192.168.1.10 failed. OVMAPI_4004E Server Failed Command: get_api_version , Status: org.apache.xmlrpc.XmlRpcException: I/O error while communicating with HTTP server: Connection refused [Tue Jun 11 03:50:26 PDT 2013] [Tue Jun 11 03:50:26 PDT 2013], MANAGED_OBJECT_NAME=OVM Foundry : Discover Manager<235>}, {OPERATION_NAME=Discover Manager Server Discover, JOB_STEP=Rollback, SERVER_NAME=Unknown, EXIT_STATUS=DONE, MANAGED_OBJECT_NAME=OVM Foundry : Discover Manager<235>}]
    
    

    Anyway, it's simple, but it helps to easily do some form of audit on operations that happened, and it highlights errors in red.
    have fun...

    Thursday May 16, 2013

    ksplice and how it really helps with 0day stuff

    So a nasty bug report came out the other day on Linux, a serious exploit. Everyone scrambled to get a kernel built and tested and released, and then there's of course the effort of bringing down applications; multi-tiered environments are way more complex in terms of orchestrating bringing down multiple systems, installing the updated kernel, rebooting, and bringing everything back up in an orderly fashion.

    Of course, for all our customers that use Ksplice and enjoy the cool zero downtime patching, they might not even have noticed if they ran Ksplice in automated mode (as many do); others just had to issue one single, very simple command and they were done. No applications to bring down, no systems to reboot... and still safe, secure, patched, current.

    some more specifics on the ksplice blog here.

    There's also the time-to-release aspect. The Ksplice patch was available on Tuesday (5/14), while the RPM for the kernel was released on Thursday (5/16) by us and the other similar distributions. No hassle...

    Tuesday Apr 30, 2013

    Oracle Secure Global Desktop 5.0

    We just released version 5.0 of Oracle Secure Global Desktop (for those that don't know what it is, it was formerly known as Tarantella...). It's a great product that I have been using for a long time now. I have it installed at home on my servers so that I can get access to my home network from anywhere... without VPN.

    Anyway, a few nice things that I personally like in the new release :

    (1) HTML5 client support. In particular, at this time, the iPad. So now I can use my iPad to log into SGD and connect to my apps without having to download and install a client. It just works with the built-in Safari browser. We will expand this over time; right now it's iPad only.

    (2) the tta RPM will automatically pull in all dependencies on Oracle Linux 6. So all you need to do is download the tta (SGD) RPM from oracle.com and type yum install tta-5.00-907.i386.rpm. When Oracle Linux is configured to connect to ULN, or just to http://public-yum.oracle.com, it will grab all the required OS RPMs. This makes it super easy to install and get going.

    To download the software, go to http://edelivery.oracle.com, go to the Oracle Desktop Virtualization Products product pack and click on Oracle Secure Global Desktop 5.0 Media Pack.

    Sunday Apr 21, 2013

    Importing Oracle VM templates through a proxy

    I am working on a little tool that makes it easy to import an Oracle VM template in a more automated fashion, using python's built-in SimpleHTTPServer. While working on this, I realized that in many environments the Oracle VM Servers might be in an isolated network so that they don't have direct access to the intranet. We're talking about the management network here.

    One simple way around this, is to take one server that's on the same network as the Oracle VM Server's management network, for instance, the Oracle VM Manager system... and install something like TinyProxy on that machine. Then, use that servername as the proxy in Oracle VM Manager when you import a VM, VM Template or VM Assembly.

    TinyProxy can be found in the EPEL repository (http://fedoraproject.org/wiki/EPEL). The tinyproxy RPM will install without issue on Oracle Linux. It is very easy/simple to configure and this can be a good workaround or solution to make it easy to import templates or VMs while the servers are on a more isolated network.
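
    A minimal sketch of that setup on the Oracle VM Manager host (the port and the allowed subnet are just examples, adjust them to your management network):

    yum install tinyproxy                    # from the EPEL repository
    vi /etc/tinyproxy/tinyproxy.conf
        Port 8888
        Allow 192.168.1.0/24                 # the Oracle VM management network
    service tinyproxy start

    Then point the proxy field of the import job in Oracle VM Manager at http://<manager-hostname>:8888.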

    Monday Mar 18, 2013

    new Oracle VDI and Oracle Sun Ray Software releases!

    A good Monday morning for Desktop Virtualization at Oracle.

    We just released a few new products :

    Oracle VDI 3.5

  • support for Ubuntu 12 and Windows 8 VMs
  • complete single server installation and the ability to just add nodes to scale
  • includes the latest version of Oracle VM VirtualBox 4.2.10
  • install on top of Solaris 11 and/or Oracle Linux 6 support added
  • hd 720p video playback on the Sun Ray thin clients with Windows Media Player

    Oracle Sun Ray Software 5.4

  • install on top of Solaris 11 and/or Oracle Linux 6 support added
  • new firmware update for the Oracle Sun Ray thin clients
  • hd 720p video playback on the Sun Ray thin clients with Windows Media Player
  • support IPsec from thin client to server with aes256
  • introducing vastly improved smart card support when using a Linux host
  • plugin for Oracle Enterprise Manager 12c for VDI and SRS

    The press release is here

    Release notes for VDI are here

    Release notes for SRS are here

    Tuesday Jan 22, 2013

    oracle vm 3.2.1 released!

    Pleased to announce the release of Oracle VM 3.2.1

    The press release is here. The documentation library can be found here.

    The release notes in the documentation show what's new and also a list of bugs fixed. Here's the summary of what's new :

    The new features and enhancements in Oracle VM Release 3.2.1 include:

    Performance, Scalability and Security

    Support for Oracle VM Server for SPARC: Oracle VM Manager can now be used to discover SPARC servers running Oracle VM Server for SPARC, and perform virtual machine management tasks.

    New Dom0 Kernel in Oracle VM Server for x86: The Dom0 kernel in Oracle VM Server for x86 has been updated so that it is now the same Oracle Unbreakable Enterprise Kernel 2 (UEK2) as used in Oracle Linux, for complete binary compatibility with drivers supported in Oracle Linux. Due to the specialized nature of the Oracle VM Dom0 environment (as opposed to the more general purpose Oracle Linux environment) some Linux drivers may not be appropriate to support in the context of Oracle VM, even if the driver is fully compatible with the UEK2 kernel in Oracle Linux. Do not install any additional drivers unless directed to do so by Oracle Support Services.

    Installation

    MySQL Database Support: MySQL Database is used as the bundled database for the Oracle VM Manager management repository for simple installations. Support for an existing Oracle SE/EE Database is still included within the installer so that you can perform a custom installation to take advantage of your existing infrastructure. Simple installation using the bundled MySQL Database is fully supported within production environments.

    Discontinued inclusion of Oracle XE Databases: Oracle VM Manager no longer bundles the Oracle XE database as a backend database. If you are currently running Oracle VM Manager using Oracle XE and you intend to upgrade you must first migrate your database to Oracle SE or Oracle EE.

    Oracle VM Server Support Tools: A meta-package is provided on the Oracle VM Server ISO enabling you to install packages to assist with support. These packages are not installed automatically, as Oracle VM Server does not depend on them. Installation of the meta-package and its dependencies may assist with the resolution of support queries and can be done at your own discretion. Note that the sudo package was previously installed as a dependency of Oracle VM Server, but this package has now been made a dependency of the ovs-support-tools meta-package. If you require sudo on your Oracle VM Server installations, you should install the ovs-support-tools meta-package.
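
    So if you do want sudo and the rest of the support tooling on an Oracle VM Server, the intent is simply something like the following (a sketch; the meta-package ships on the Oracle VM Server ISO, so whether you install it from a local ISO mount or from an internal yum repository built off the ISO is environment-specific):

    yum install ovs-support-tools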

    Improved Usability

    Oracle VM Command Line Interface (CLI): The new Oracle VM Command Line Interface can be used to perform the same functions as the Oracle VM Manager Web Interface, such as managing all your server pools, servers and guests. The CLI commands can be scripted and run in conjunction with the Web Interface, thus bringing more flexibility to help you deploy and manage an Oracle VM environment. The CLI supports public-key authentication, allowing users to write scripts without embedding passwords, to facilitate secure remote login to Oracle VM Manager. The CLI also includes a full audit log for all commands executed using the facility. See the Oracle VM Command Line Interface User's Guide for information on using the CLI.
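    For the command-line minded: the CLI is reached over ssh to the Oracle VM Manager host as the admin user. A sketch from memory (check the CLI User's Guide for the exact port and syntax; the hostname below is a placeholder):

    # connect to the Oracle VM Manager CLI; it listens on its own port on the
    # Manager host (10000 here is an assumption, verify against the CLI guide)
    ssh -l admin -p 10000 vmmanager.example.com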

    Accessibility options: Options to display the UI in a more accessible way for screen readers, improve the contrast, or increase the font size. See Oracle VM Manager user interface Accessibility Features for more information.

    Health tab: Monitor the overall health and status of your virtualization environment and view historical statistics such as memory and CPU usage. See Health Tab for information on using the Health tab.

    Multi-select of objects: Select one or more objects to perform an action on multiple objects, for example, upgrading multiple Oracle VM Servers in one step, rather than upgrading them individually. See Multi-Select Functionality for information on using the multi-select feature.

    Search for objects: In many of the tab management panes and in some of the dialog boxes you can search for objects. This is of particular benefit to large deployments with many objects such as virtual machines or Oracle VM Servers. See Name Filters for information on using the search feature.

    Tagging of objects: It is now possible to tag virtual machines, servers and server pool objects within Oracle VM Manager to create logical groupings of items, making it easier to search for objects by tag.

    Alphabetized tables and other UI listings: Items listed in tables and other UI listings are now sorted alphabetically within Oracle VM Manager by default, to make it easier to find objects in larger deployments.

    Present repository to server pools: In addition to presenting a storage repository to individual Oracle VM Servers, you can now present a repository to all Oracle VM Servers in one or more server pools. See Presenting or Unpresenting a Storage Repository for more information.

    OCFS2 timeout configuration: An additional attribute has been added to allow you to set the timeout, in seconds, for a cluster when configuring a clustered server pool within Oracle VM Manager.

    NFS refresh servers and access lists for non-uniform exports: For NFS configurations where different server pools are exposed to different exports, it is now possible to configure non-uniform exports and access lists to control how server pool refreshes are performed. For more information on this feature, please see NFS Access Groups for Non-uniform Exports.

    Configure multiple iSCSI access hosts: You can now configure multiple access hosts for iSCSI storage devices.

    Sizes of disks, ISOs and vdisks: Oracle VM Manager now shows the sizes of disks, ISOs and vdisks within the virtual machine edit dialog, to make it easier to select a disk.

    Automated backups and easy restore: Oracle VM Manager installations taking advantage of the bundled MySQL Enterprise Edition Database include fully automated database backups and a quick restore tool that can help with easy database restoration.

    Serial console access: A serial console java applet has been included within Oracle VM Manager to allow serial console access to virtual machines running on both SPARC and x86 hardware. This facility complements the existing VNC-based console access to virtual machines running on x86 hardware.

    Set preferences for recurring jobs: Facilities have been provided within Oracle VM Manager to control the preferences for recurring jobs. These include the ability to enable, disable or set the interval for tasks such as refreshing repositories and file systems; and to control the Yum Update checking task.

    Processor Compatibility Groups: Since virtual machines can only be migrated between servers that use compatible processor types, Oracle VM Manager now provides the ability to define Processor Compatibility Groups to enable you to pick which servers a virtual machine can be migrated between.

    Configure additional Utility and Virtual Machine roles: New roles are now supported on Oracle VM Servers to control the type of functionality that the server will be responsible for. The Virtual Machine role is required in order for an Oracle VM Server to run a virtual machine. Oracle VM Servers configured with the Utility role are favoured for performing operations such as file cloning, importing of templates, the creation of repositories, and other operations not directly related to running a virtual machine.

    Directly import a virtual machine: It is now possible to directly import a virtual machine using Oracle VM Manager, no longer requiring that you first import to a template and then clone.

    Virtual machine start policy: You can now specify a start policy for a virtual machine, determining whether to always start the virtual machine on the server on which it has been placed, or to start the virtual machine on the best possible server in the server pool.

    Hot-add a VNIC to a virtual machine: It is now possible to add a VNIC directly to a running virtual machine from within Oracle VM Manager.

    Send messages to a virtual machine: Facilities have been provided within Oracle VM Manager to send messages directly to a virtual machine in the form of key-value pairs.

    NTP configuration: Ensuring that time is synchronized across all servers is important. Oracle VM Manager now provides a facility to bulk configure NTP across all servers.


    My personal favorites are (1) MySQL as the repository database, (2) support for SPARC servers running Oracle VM Server for SPARC in Oracle VM Manager, (3) the CLI server, (4) the Utility versus Virtual Machine server roles, (5) cluster timeout configuration (and a better default), (6) direct VM import and (7) serial console access for a VM.

    have fun

    Monday Jan 21, 2013

    oracle linux playground channel sample

    If you have a system with Oracle Linux 6 installed but you are not using public-yum, and you want to play with our mainline kernel builds from the playground channel, then you need to create a simple, small yum repo file and you are all set.

    Some reasons could be that your system is configured for a local yum repository for updates, or you are registered directly with ULN.

    Either way, a very simple example file can be found here. Just put the file in /etc/yum.repos.d.

    # cat /etc/yum.repos.d/playground.repo 
    [ol6_playground]
    name=Oracle Linux mainline kernel playground $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1
    

    Once this file exists, you can use yum to install the new kernels. At the time of writing, this is kernel-3.7.2-3.7.y.20130115.ol6.x86_64. Just go look in the directory to see which kernels have been published and pick the one you want to install. As you can see, source, binary, devel, debug, headers, firmware and doc versions of the packages are all there.
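    If you prefer to query yum rather than browse the directory, something like this should list the kernel builds available from the channel (a sketch; the output depends on what has been published):

    # list every kernel version published in the playground channel
    yum --disablerepo='*' --enablerepo=ol6_playground --showduplicates list available kernel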

    # yum install kernel-3.7.2-3.7.y.20130115.ol6.x86_64
    Loaded plugins: refresh-packagekit, rhnplugin, security
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package kernel.x86_64 0:3.7.2-3.7.y.20130115.ol6 will be installed
    --> Processing Dependency: kernel-firmware = 3.7.2-3.7.y.20130115.ol6 
          for package: kernel-3.7.2-3.7.y.20130115.ol6.x86_64
    --> Running transaction check
    ---> Package kernel-firmware.noarch 0:2.6.32-279.19.1.el6 will be updated
    ---> Package kernel-firmware.noarch 0:3.7.2-3.7.y.20130115.ol6 will be an update
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package           Arch     Version                      Repository        Size
    ================================================================================
    Installing:
     kernel            x86_64   3.7.2-3.7.y.20130115.ol6     ol6_playground    24 M
    Updating for dependencies:
     kernel-firmware   noarch   3.7.2-3.7.y.20130115.ol6     ol6_playground   997 k
    
    Transaction Summary
    ================================================================================
    Install       1 Package(s)
    Upgrade       1 Package(s)
    
    Total download size: 25 M
    Is this ok [y/N]: y
    Downloading Packages:
    (1/2): kernel-3.7.2-3.7.y.20130115.ol6.x86_64.rpm           |  24 MB     00:18     
    (2/2): kernel-firmware-3.7.2-3.7.y.20130115.ol6.noarch.rpm  | 997 kB     00:00     
    --------------------------------------------------------------
    Total                                             1.3 MB/s |  25 MB     00:19     
    Running rpm_check_debug
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
      Updating   : kernel-firmware-3.7.2-3.7.y.20130115.ol6.noarch                 1/3 
      Installing : kernel-3.7.2-3.7.y.20130115.ol6.x86_64                          2/3 
      Cleanup    : kernel-firmware-2.6.32-279.19.1.el6.noarch                      3/3 
      Verifying  : kernel-firmware-3.7.2-3.7.y.20130115.ol6.noarch                 1/3 
      Verifying  : kernel-3.7.2-3.7.y.20130115.ol6.x86_64                          2/3 
      Verifying  : kernel-firmware-2.6.32-279.19.1.el6.noarch                      3/3 
    
    Installed:
      kernel.x86_64 0:3.7.2-3.7.y.20130115.ol6                                                                    
    
    Dependency Updated:
      kernel-firmware.noarch 0:3.7.2-3.7.y.20130115.ol6                                                           
    
    Complete!
    
    Now just a simple reboot and you are all set.
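    After the reboot, a quick check confirms the playground kernel is running (version string taken from the install above):

    # verify the running kernel after the reboot
    uname -r
    3.7.2-3.7.y.20130115.ol6.x86_64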

    Wednesday Jan 16, 2013

    Oracle Linux 5.9

    Oracle Linux 5.9 was uploaded yesterday to http://linux.oracle.com (ULN) and to http://public-yum.oracle.com. The _latest channels are current with 5.9, and the 5.9_base channels contain the base (as-released) package set.

    ISO images will be available shortly from http://edelivery.oracle.com. If there is an urgent need to get the ISOs through My Oracle Support, simply file a service request.

    Release notes are here.

    Sunday Jan 06, 2013

    oracle vm template config script example

    The programmatic way to extend Oracle VM Template Configure is to build your own module.

    To write your own module, you have to build an RPM that contains a configure script in a specific format. Let's go through the steps to do this.

    Oracle VM template configure works very similarly to the init.d and chkconfig script model. For template config we have the /etc/template.d directory; all the scripts go into /etc/template.d/scripts. Symlinks are then made to other subdirectories based on the type of target the scripts provide. At this point we handle configure and cleanup. When a script/module gets added using ovm-chkconfig, the header of the script is read to verify the name, priority and targets, and then a symlink is made to the corresponding subdirectory under /etc/template.d.

    As an example, /etc/init.d/sshd is the main sshd initscript, and when sshd is enabled you will find a symlink /etc/rc3.d/S55sshd pointing to /etc/init.d/sshd. These symlinks are created by chkconfig when you enable or disable a service. The same thing goes for Oracle VM template config and the content of /etc/template.d/scripts. You will see /etc/template.d/scripts/ssh, and since ssh (on my system) is enabled for the configure target, there is a symlink /etc/template.d/configure.d/70ssh pointing to it.

    Like init.d, the digit in front of the script name specifies the priority at which it should be run.
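    To make that concrete, here is roughly what the layout for the ssh script described above looks like (an illustrative listing, not verbatim output):

    # the script itself lives under scripts/
    ls /etc/template.d/scripts/ssh
    # enabling it for the configure target creates a prioritized symlink
    ls -l /etc/template.d/configure.d/70ssh
    # 70ssh -> ../scripts/ssh   (priority 70, configure target)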

    The most important and complex part is writing your own script for your own application. Our scripts are in python; theoretically you could write yours in a different language, as long as the input, output and argument handling remain the same. The examples here will all be in python. Each script has two main parts: (1) the script header, which contains information like the script name, targets, priorities and description, and (2) the actual script, which has to handle a small set of parameters. You can take a look at the existing scripts for examples.

    (1) script header
    Aside from a copyright header that suits your needs, the script header requires a very specific comment block. Here is an example:

    ### BEGIN PLUGIN INFO
    # name: network
    # configure: 50
    # cleanup: 50
    # description: Script to configure template network.
    ### END PLUGIN INFO
    

    You have to use the exact same format. Provide your own script name (which will be used when calling ovm-chkconfig), the targets (right now we implement configure and cleanup) and the priority for your script. The priority specifies in what order the scripts get executed. You do not have to implement all targets; if you have a configure target but not cleanup, that is OK, and the same goes for cleanup without configure. It is up to you. The configure target gets called on the first boot/initial start of the VM; cleanup happens when you manually initiate a cleanup in your VM or when you want to restore the VM to its original state.

    ### BEGIN PLUGIN INFO
    # name: [script name]
    # [target]: [priority]
    # [target]: [priority]
    # description: a description and can
    #   cross multiple lines.
    ### END PLUGIN INFO
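
    Once the header block is in place, the script can be registered and inspected with ovm-chkconfig. A sketch, where "myscript" stands in for your script name:

    # copy the script into place and register it; ovm-chkconfig reads the header
    # above to pick up the name, targets and priorities
    cp myscript /etc/template.d/scripts/myscript
    chmod +x /etc/template.d/scripts/myscript
    /usr/sbin/ovm-chkconfig --add myscript
    /usr/sbin/ovm-chkconfig --list | grep myscript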
    

    Now for the body of the script. Basically the main requirement is that it accepts a [target] parameter. Let's say we have a script called foo that needs to be run at configure time; then the script (in /etc/template.d/scripts) will have to accept and handle the parameter configure. If you also want to call it for cleanup, then it has to handle cleanup. You can have your script handle any other arguments you like; they are optional for our purposes. There is one optional parameter which is useful to implement, and that is -e or --enumerate. ovm-template-config uses this to enumerate the parameters of a given target for your script.

    Here is the firewall example:

    # ovm-template-config --human-readable --enumerate configure --script firewall
    [('41',
      'firewall',
      [{u'description': u'Whether to enable network firewall: True or False.',
        u'hidden': True,
        u'key': u'com.oracle.linux.network.firewall'}])]
    
    And if you run the script manually:

    # ./firewall configure -e
    [{"hidden": true, "description": "Whether to enable network firewall: True or False.", "key": "com.oracle.linux.network.firewall"}]
    

    In other words, the firewall script lists the parameters it expects when run as a configure target.

    Now here is an example of the script body, in python. It implements the configure and cleanup targets and handles the enumerate argument. Part of the magic is handled in templateconfig.cli.

    try:
        import json
    except ImportError:
        # older python versions do not ship the json module; fall back to simplejson
        import simplejson as json
    from templateconfig.cli import main


    def do_enumerate(target):
        # return a JSON list describing the key/value parameters this script
        # expects for the given target (empty in this skeleton)
        param = []
        if target == 'configure':
            param += []
        elif target == 'cleanup':
            param += []
        return json.dumps(param)


    def do_configure(param):
        # param arrives as a JSON string of key/value pairs; return it
        # (possibly modified) as JSON
        param = json.loads(param)
        return json.dumps(param)


    def do_unconfigure(param):
        # not registered in main() below, shown for completeness
        param = json.loads(param)
        return json.dumps(param)


    def do_cleanup(param):
        param = json.loads(param)
        return json.dumps(param)


    if __name__ == '__main__':
        # main() from templateconfig.cli parses the target and -e/--enumerate
        # arguments and dispatches to the handlers registered here
        main(do_enumerate, {'configure': do_configure, 'cleanup': do_cleanup})
    

    So now you can fill this out with your own parameters and code. Again taking the firewall script as an example, here is how to add the expected keys:

    def do_enumerate(target):
        param = []
        if target == 'configure':
            param += [{'key': 'com.oracle.linux.network.firewall',
                       'description': 'Whether to enable network firewall: True or False.',
                       'hidden': True}]
        return json.dumps(param)
    

    The above shows that this script expects the key com.oracle.linux.network.firewall to be set, along with a description of what the value means. Add an entry like this for each key/value pair that your script expects; afterwards it is easy to see what the input to your script needs to be, again by running ovm-template-config.

    To execute actions at configure time, based on values set, here's a do_configure() example:

    def do_configure(param):
        # shell_cmd() is a helper (provided elsewhere in the real script) that
        # runs a shell command, e.g. a thin wrapper around subprocess
        param = json.loads(param)
        firewall = param.get('com.oracle.linux.network.firewall')
        if firewall == 'True':
            shell_cmd('service iptables start')
            shell_cmd('service ip6tables start')
            shell_cmd('chkconfig --level 2345 iptables on')
            shell_cmd('chkconfig --level 2345 ip6tables on')
        elif firewall == 'False':
            shell_cmd('service iptables stop')
            shell_cmd('service ip6tables stop')
            shell_cmd('chkconfig --level 2345 iptables off')
            shell_cmd('chkconfig --level 2345 ip6tables off')
        return json.dumps(param)
    

    When the script is called, you can use param.get() to retrieve the key/value variables and then just make use of them. Just like in the firewall example, you can do whatever you want: call other commands, add more python code, it's up to you...

    It is also possible to alter keys or add new keys which then get sent back. So if you want your script to communicate values back, which can later be retrieved through the manager API (for instance with ovm_vmmessage -q), you can simply do this:

    param['key'] = 'some value'
    

    The key can be an existing one or a brand new one.

    And that's really it... for the script. Next up is packaging.

    In order to install and configure these template configure scripts, they have to be packaged in an RPM with a specific naming convention. Package the script(s) (there can be more than one) as ovm-template-config-[scriptname]. Ideally, the post-install scriptlet of the RPM adds the script automatically by executing # /usr/sbin/ovm-chkconfig --add [scriptname], and the pre-uninstall scriptlet removes it again with # /usr/sbin/ovm-chkconfig --del [scriptname].

    Here is an example of an RPM spec file that can be used:

    Name: ovm-template-config-example
    Version: 3.0
    Release: 1%{?dist}
    Summary: Oracle VM template example configuration script.
    Group: Applications/System
    License: GPL
    URL: http://www.oracle.com/virtualization
    Source0: %{name}-%{version}.tar.gz
    BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
    BuildArch: noarch
    Requires: ovm-template-config
    
    %description
    Oracle VM template example configuration script.
    
    %prep
    %setup -q
    
    %install
    rm -rf $RPM_BUILD_ROOT
    make install DESTDIR=$RPM_BUILD_ROOT
    
    %clean
    rm -rf $RPM_BUILD_ROOT
    
    %post
    if [ $1 = 1 ]; then
        /usr/sbin/ovm-chkconfig --add example
    fi
    
    %preun
    if [ $1 = 0 ]; then
        /usr/sbin/ovm-chkconfig --del example
    fi
    
    %files
    %defattr(-,root,root,-)
    %{_sysconfdir}/template.d/scripts/example
    
    %changelog
    * Tue Mar 22 2011 Zhigang Wang  - 3.0-1
    - Initial build.
    

    Modify the content to your liking, change the name example to your script name, and add whatever other dependencies you might have or whatever files need to be bundled along with it. If you want to bundle executables or scripts that live in other locations, that's allowed. As you can see from the spec file, it automatically calls ovm-chkconfig --add and --del at post-install and pre-uninstall time of the RPM.

    In order to create RPMs, you have to install rpmbuild: # yum install rpm-build.

    To make it easy, here's a Makefile you can use to help automate all of this:

    DESTDIR=
    PACKAGE=ovm-template-config-example
    VERSION=3.0
    
    help:
    	@echo 'Commonly used make targets:'
    	@echo '  install    - install program'
    	@echo '  dist       - create a source tarball'
    	@echo '  rpm        - build RPM packages'
    	@echo '  clean      - remove files created by other targets'
    
    dist: clean
    	mkdir $(PACKAGE)-$(VERSION)
    	tar -cSp --to-stdout --exclude .svn --exclude .hg --exclude .hgignore \
    	    --exclude $(PACKAGE)-$(VERSION) * | tar -x -C $(PACKAGE)-$(VERSION)
    	tar -czSpf $(PACKAGE)-$(VERSION).tar.gz $(PACKAGE)-$(VERSION)
    	rm -rf $(PACKAGE)-$(VERSION)
    
    install:
    	install -D example $(DESTDIR)/etc/template.d/scripts/example
    
    rpm: dist
    	rpmbuild -ta $(PACKAGE)-$(VERSION).tar.gz
    
    clean:
    	rm -fr $(PACKAGE)-$(VERSION)
    	find . -name '*.py[cdo]' -exec rm -f '{}' ';'
    	rm -f *.tar.gz
    
    .PHONY: dist install rpm clean
    

    Create a directory and copy over your script, the spec file and this Makefile. Run # make dist to create a source tarball of your code and then # make rpm. This will generate an RPM in the RPMS/noarch directory, for instance: /root/rpmbuild/RPMS/noarch/ovm-template-config-test-3.0-1.el6.noarch.rpm

    Next you can take this RPM and install it on a target system.

    # rpm -ivh  /root/rpmbuild/RPMS/noarch/ovm-template-config-test-3.0-1.el6.noarch.rpm
    Preparing...                ########################################### [100%]
       1:ovm-template-config-tes########################################### [100%]
    

    And as you can see, it's added to the ovm-chkconfig list:

    # ovm-chkconfig --list | grep test
    test                 on:75       off         off         on:25       off         off         off         off        
    

    One point of caution: the configure scripts get executed very early in the boot stage; ovmd runs as S00ovmd. This is well before many other services are (1) configured and (2) running. So if your configuration requires services such as network connectivity to be up, you have to split the configuration into two parts: first use the above to gather the configuration data remotely and store it in a form you can use later, and then add your own /etc/init.d script that picks up that data afterwards. That way your own init script runs at a late stage, when the services you depend on are available.
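    A minimal sketch of that second stage, as a plain init script. Everything here (the myapp name, the path of the stored data) is a hypothetical placeholder; the idea is that your template config script wrote /var/lib/myapp/template.conf during its configure run:

    #!/bin/sh
    # /etc/init.d/myapp-late-config (hypothetical name)
    # chkconfig: 2345 99 01
    # description: applies configuration gathered earlier by an \
    #              ovm-template-config script, once networking etc. is up

    CONF=/var/lib/myapp/template.conf

    case "$1" in
      start)
        # only act if the template configure step left us data to work with
        if [ -f "$CONF" ]; then
            . "$CONF"        # e.g. sourced shell variables written by the script
            # ... configure and start the application using those values ...
        fi
        ;;
      stop|restart|status)
        # nothing to do in this sketch
        ;;
      *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
    esac
    exit 0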

    That's really all there is to it. Thanks to Zhigang for example code I have used here.

    About

    Wim Coekaerts is the Senior Vice President of Linux and Virtualization Engineering for Oracle. He is responsible for Oracle's complete desktop to data center virtualization product line and the Oracle Linux support program.

    You can follow him on Twitter at @wimcoekaerts
