Saturday May 17, 2008
By tpm on May 17, 2008
At the risk of sounding like a late-night TV commercial for toothpaste ... I'm occasionally asked the question "which virtualization technology should I use?" As if there's a one-size-fits-all answer. Well, admittedly it's more often a desire for simplicity in the face of a complex world, and probably mostly because I've been confusing people with too many possibilities! Often, one technology does make more sense than another, depending on what the customer is trying to do. However, there are many virtualization opportunities up and down the SW stack, and sometimes combining technologies can lead to more effective solutions.
Since Solaris Containers (Zones) and hardware virtualization components like hypervisors operate using different mechanisms at different levels in the SW stack, they can be used simultaneously to provide enhanced capabilities and efficiencies. One combined usage pattern we've seen treats the hardware virtualization facility almost as a static partitioning technology, doing coarse-grain resource partitioning, while exploiting the unique capabilities that hardware virtualization brings e.g. being able to run completely different operating systems, or operating system versions side by side. Obvious examples are x86/x64 machines using VMware products, or the xVM hypervisor built into OpenSolaris, or other x86 hypervisor solutions. Or customers with our CMT SPARC systems using the Logical Domains hypervisor built into the firmware.
Now, inside the Solaris or OpenSolaris domains running in these environments, customers use Solaris Containers to encapsulate a set of applications as their unit of deployment, and manage fine-grain resource allocation for those containers from inside the global zone using e.g. processor pools, the fair share scheduler, and all the other resource management and resource accounting capabilities Solaris can bring to bear. Making the hypervisor configuration export a relatively static set of virtual hardware resources is a workaround for some of the more problematic aspects of two (or more) resource schedulers fighting over the same resource, e.g. a multiprocessor CPU scheduler in the hypervisor at odds with a multiprocessor CPU scheduler in the guest, with the latter unaware of the presence of the former.
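To make the fine-grain resource allocation idea concrete, here's a minimal sketch of giving a container CPU shares under the fair share scheduler. The zone name ("appzone") and the share value are hypothetical, and the exact zonecfg dialogue varies a little between Solaris releases:

```shell
# Make the fair share scheduler (FSS) the system default scheduling class
dispadmin -d FSS

# From the global zone, grant the (hypothetical) zone "appzone"
# 20 CPU shares via the zone.cpu-shares resource control
zonecfg -z appzone
zonecfg:appzone> add rctl
zonecfg:appzone:rctl> set name=zone.cpu-shares
zonecfg:appzone:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:appzone:rctl> end
zonecfg:appzone> exit

# Or adjust the shares of the running zone on the fly
prctl -n zone.cpu-shares -v 20 -r -i zone appzone
```

Shares are relative, not absolute: a zone with 20 shares gets twice the CPU of a zone with 10 only when both are busy, which is what makes this scheme composable with a coarse-grain partitioning layer underneath.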
Of course in this picture, all the operating systems run as hypervisor guests, and thus all of them lose a bit of performance to the overheads associated with hardware virtualization technologies. And these overheads can be non-trivial for I/O intensive workloads - at least on today's x86 systems. And yet I think it's abundantly clear that the market is prepared to accept this overhead in exchange for the new management capabilities and business agility that a hypervisor-based virtualization appliance coupled with a sophisticated systems management solution brings; this observation is also at the core of our xVM Server and OpsCenter projects, as well as related efforts around both open source and proprietary hypervisors by several other companies. And as you'd expect, we're all working to reduce the overhead with hardware and software solutions, though some of the problems are thorny.
But is that the only useful virtualization technology combination?
Well, there's an interesting "flipped-over" version of this stack that the combination of Containers and a type 2 hypervisor like VirtualBox provides. Today you can run Solaris 10 or OpenSolaris as a host operating system directly "on the metal", running several Containers at the level of efficiency and throughput that container-style solutions bring, i.e. with no significant overhead involved. But what's new in the 1.6 release of VirtualBox is that you can now run other operating systems inside VirtualBox, with the VirtualBox hypervisor itself running under the resource and namespace constraints of a Solaris Container, i.e. hosted in a Solaris Container.
As I noted in my "live" blog from Community One, VirtualBox 1.6 enables a new kind of Solaris container that can run any x86 OS that VirtualBox can support as a guest - and as people who've used VirtualBox know, that's a whole lot of different guest OSes. I was talking to a large customer two weeks ago about a workload dominated by a set of application servers. The application servers were already configured in containers; they'd done that because containers provided them with a convenient way to manage the app servers while consolidating the workload onto fewer machines. But they still had to keep a number of Windows boxes to host some proprietary Windows applications that were part of their stack and for which they hadn't yet found open source alternatives. They'd tried running their entire workload on a commercial hypervisor platform, but the performance impact on their app server workload was just too high.
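The mechanics of a VirtualBox-hosting container look roughly like this. This is a sketch only - the zone name, zonepath, and guest name are hypothetical, the zone needs access to the VirtualBox kernel driver, and the exact VBoxManage flags have changed between VirtualBox releases:

```shell
# From the global zone: create a zone that can see the VirtualBox driver
zonecfg -z vboxzone
zonecfg:vboxzone> create
zonecfg:vboxzone> set zonepath=/zones/vboxzone
zonecfg:vboxzone> add device
zonecfg:vboxzone:device> set match=/dev/vboxdrv
zonecfg:vboxzone:device> end
zonecfg:vboxzone> exit
zoneadm -z vboxzone install
zoneadm -z vboxzone boot

# Inside the zone: define and start a (hypothetical) Windows guest
zlogin vboxzone
VBoxManage createvm --name winguest --register
VBoxManage modifyvm winguest --memory 512 --ostype WindowsXP
VBoxManage startvm winguest
```

The point of the dance is that everything VirtualBox does - including the guest's CPU and memory consumption - now falls under the zone's resource controls, so the global zone can cap the Windows guest just like any other container.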
So what got both of us excited was the idea of putting those legacy Windows applications inside a VirtualBox container running Windows, because it meant the best of both worlds - their high performance applications would run as fast as native Solaris lets them - "on the metal" - while their low utilization legacy apps would run side-by-side in VirtualBox containers. The key point being that they would be paying for the overhead of hardware virtualization only where they needed to pay for it, not penalizing all the workloads running on the hardware. Which is also a reminder of an important consideration for assessing virtualization solutions - efficiency.
However, before anyone gets too excited, I have to point out that we're still working on integrating VirtualBox with the networking capabilities that arrive with the Crossbow project, so some of the server applications of this idea don't quite work yet. But if the purpose of the zone-hosted VirtualBox guest OS is to run as a client-side desktop, then the idea already works today. And that has an interesting impact on the Trusted Desktop.
So, let's combine VirtualBox, Containers and Trusted Extensions.
Recall that the Solaris Trusted Extensions technology that Glenn Faden has been blogging about here is built on zones. What that means is that you can associate a label with a container - and, with VirtualBox hosted in a container, a label with an entire OS and the system resources it is consuming. All of which leads to the following screen shot that Christoph Schuba sent around internally 10 days after we announced the acquisition; it's what Christoph was showing in one of the demos at the VirtualBox talk at Community One.
That's Vista, running in VirtualBox, hosted inside an OpenSolaris container with Trusted Extensions.
Spelling it out -- it's Vista, contained :)
Technorati Tag: OpenSolaris
Technorati Tag: VirtualBox
Wednesday Mar 19, 2008
By tpm on Mar 19, 2008
So, here I am still on the road, starting to write this in Stuttgart airport, after spending a couple of days with the VirtualBox team. We've been introducing them to Sun, and doing a first pass on the roadmap ahead. The usual balance of lots of things we'd like to do, with software physics nipping at our heels - there's always more to do than we have people or time for!
Talking about VirtualBox in Australia was great fun; I introduced it to the audience of my keynote, which as I remarked in my earlier post was one of the largest audiences I've presented to. The full talk should eventually make it to the techdays site, but there are a couple of slides I wanted to review here now for people who can't get them and don't have time to navigate the site to find them. Basically, my talk was about Open Source Virtualization and Project Indiana, and how the two are connected. At first sight, one might imagine that they're not particularly connected at all. VirtualBox is an open source desktop hypervisor, a key component in our virtualization technology portfolio, but otherwise only related to OpenSolaris as a possible host and guest OS. VirtualBox is part of a more general application lifecycle picture involving xVM Server and xVM Ops Center, as is nicely captured in this marketing graphic.
On the other hand, Project Indiana is, among other things, about bringing the technology of distribution to the OpenSolaris technology code base. This graphic from Ian Murdock's presentation at the Hyderabad techdays conference nicely captures the notion of the OpenSolaris core surrounded by additional, up-to-date packages from various open source communities.
And this one summarizes the Indiana project goals:
So what is the connection between them? The connection is in the technology called the distro builder. This is the part that takes the recipe for a distribution that performs a particular set of functions and capabilities, and translates it into the right set of interdependent packages that can perform that function. At first sight, this seems a bit of a niche interest too. After all, relatively few people want to build OS distros ... surely? Well, it's more than you might think. In fact, if you take a look at many IT organizations, they tend to build something very similar for their internal use as a way of reducing costs. They rarely if ever take a vanilla OS install from a vendor. There's always some customization: packages added, removed, administrative pre-configuration of various kinds. So one important target for our distro builder technology is those IT organizations. Another important target is the developers creating virtual appliances to demonstrate their technologies. You can find these virtual appliances on lots of companies' web sites these days - usually a file-based image that you can download and quickly instantiate on top of a hypervisor. The advantage being that you -don't- have to install and configure the software you're interested in exploring directly on your desktop operating system - you run it inside a preconfigured OS as a virtual machine. Then when you tire of the demo, you can discard it quickly, and completely, by discarding the virtual machine. So here's the graphic I kludged up to represent how the Indiana project and VirtualBox fit into that picture.
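The IPS packaging system underneath the distro builder makes the "recipe" idea fairly tangible: you can populate a fresh image from a network repository with a handful of commands. This is just a sketch - the target directory and package list are hypothetical, and the image-create flags have shifted a little between early IPS releases:

```shell
# Create a fresh, full (-F) IPS image rooted at a (hypothetical) directory,
# pointed at the opensolaris.org package repository
pkg image-create -F -a opensolaris.org=http://pkg.opensolaris.org/ /export/myimage

# Install a minimal set of packages into that image rather than
# into the running system (-R redirects the image root)
pkg -R /export/myimage install SUNWcsd SUNWcs

# From here, a distro-builder-style tool would add your application stack,
# apply configuration, and wrap the result as a bootable or appliance image
```

Everything above the base packages - the app server, the database, your application - is just more `pkg install` lines in the recipe, which is what makes the same machinery serve both internal IT "golden images" and downloadable virtual appliances.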
On the left side is the distro builder, on the right side is the developer to deployer flow using VirtualBox on the desktop, and xVM Server and xVM OpsCenter in the data center. If you want to find out more about the IPS packaging system that underpins this picture, then I'd recommend starting on Stephen Hahn's blog.
An open question we've pondered for a while is when virtual appliances will move beyond internal use and demo-ware to become a mainstream style of SW distribution. There are several vendors who would like this to be true sooner rather than later. But the question is open for several reasons: performance is a universal concern, and licensing of well-known proprietary operating systems in this context is another issue. Another important barrier is how to resolve the support issues for all the software involved that the creator of the application, or facility being demonstrated, didn't write. Let's think about a more concrete example. Assume I'm a developer at a small company, and I create a compelling application that uses various application infrastructure components e.g. an application server, a database, and, say, a chunk of ruby-on-rails to do its work. Then I install and configure it all on an OS image of my choosing, make it into a virtual appliance, and deliver it to my customers that way. The packaging and configuration work I did is a clear advantage to my customer in terms of getting started with the application and exploring the value of the offering - which is one of the reasons vendors are using it more and more for demoware. But having delivered a complete package that way, it seems that as well as being responsible for the defects in my application, I'm now -also- responsible for all the defects in the customized stack I created. Do I now have to start worrying about all the security patches I might need to include in my virtual appliance? Eeek! That seems really hard. We probably need a better answer to that support question - one that involves the vendors of the components I used - before delivering fully-customized yet fully supported software stacks as virtual appliances really takes off.
Technorati Tag: OpenSolaris
Technorati Tag: VirtualBox
Wednesday Mar 05, 2008
By tpm on Mar 05, 2008
Hmm. My first blog entry on the road; writing this in terminal 3 in Sydney airport with my battery running out. Let's see how well this works.
There are quite a number of Sun software engineers in Australia this week. We've been holding one of our TechDays developer conferences here in Sydney. If you get a chance to attend a TechDay in your city or within reach, I do recommend it: lots of information, lots of cool demos and giveaways, and real engineers presenting interesting talks about what they're passionate about. As an example, I heard George Wilson's excellent talk about the work going on in the opensolaris storage community for the first time today - and was intrigued by how interested the audience was in the ZFS-aware installer demo he showed. It's very clear that engineers all over the world are excited by ZFS.
I had a small part in the proceedings too - I did the community keynote on Wednesday morning, and talked about virtualization, Project Indiana, and the connections between them. Probably the largest audience I've ever spoken to; rather daunting; I hope it made some kind of sense to people. And this afternoon I did an opensolaris virtualization technologies talk. In between those events, I've been flying around Australia, visiting universities in Sydney, Melbourne, Canberra and (tomorrow) Brisbane. As you can tell from the title of this post, I'm a little dizzy. But as you'd probably expect, I just can't resist talking about Sun xVM and VirtualBox, and doing demos wherever I go. A few people at UTS had heard of VirtualBox, and were simply pleased to hear about the acquisition, the evolving roadmap, and our investment in this technology. Most of the others hadn't heard of it, but were definitely interested. At the conference, I started seeing it on every desktop I caught sight of.
Don Kretsch and Liang Chen are also visiting the universities, talking about HPC and HPC tools, which has sparked a lot of interesting conversations. Josh Marinacci was also with us in Sydney, talking to the students about Dynamic Languages. He showed some fun JavaFX demos - you can find them on Josh's blog.
Though it's been tough to keep track of all the people I've met, it's been wonderful meeting students and faculty across the country. Really smart people who are interested in the technologies Sun is working on around Virtualization, HPC and Dynamic Languages. And we were there to listen to them talk about the technical and scientific problems they're working on - mostly around HPC, but also in other areas e.g. the challenges of scale and parallel programming presented by multicore architectures, complex real-time systems, and more.
As an engineer, it's always been important to me to build real things that other people find useful. That's the ultimate intellectual reward for me, and I think for most of the other engineers at Sun. It's not just about sharing and community in some remote, abstract sense, it's about making positive, real, engineering contributions to a community that can then use them. I think that's fundamentally what makes all engineers tick. So, this morning at ANU, I was surprised and pleased to hear of some work they've been doing, assessing the effectiveness of the MPO subsystem (memory placement optimizations, aka interfaces to describe NUMA machines) we built in OpenSolaris for Opteron-based systems. And in particular, they'd been using the lgroup abstraction, which I had a small hand in the architecture and initial design of a few years ago. So it was great to see the lgroup API being used, the implementation assessed against real needs, and found to be doing well; being used just how we hoped it would be. And I'm looking forward to connecting these graduate students and their results with my colleagues Jonathan Chew and Bart Smaalders who put the hard work into designing and implementing the MPO code - I know they'll be interested in the results, and looking for ways to make MPO even better.
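For readers who haven't met lgroups: Solaris exposes the NUMA topology as a hierarchy of locality groups, observable and adjustable from the command line as well as through the liblgrp C API. A quick sketch (the pid 1234 is hypothetical):

```shell
# Display the full lgroup hierarchy: one leaf lgroup per locality
# (NUMA) node, with the CPUs and memory that belong to each
lgrpinfo -a

# Show the home lgroup of each thread in a (hypothetical) process
plgrp 1234

# Rehome the process's threads to lgroup 2; this is advisory -
# the kernel still balances load, but prefers the home lgroup
# for scheduling and memory allocation
plgrp -H 2 1234
```

The "home lgroup" is the heart of the MPO design: rather than pinning threads to CPUs, the kernel gives each thread a preferred locality and tries to satisfy both its scheduling and its memory allocations there.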
I think I'm completely smitten with Australia the country. It's my first visit here, and even though I haven't seen that much beyond CS department seminar rooms, the Darling Harbour Convention center, and the insides of various airports, hotels and taxis, I'm really drawn to the people I've met, and the comfortable feel of the culture. I need to come back again on a family vacation so we can sample the great outdoors too, and imbibe the history (and some of the wine). It also looks to be a very beautiful place outside of the cities - even though I've only seen tantalizing glimpses on this trip.
Technorati Tag: VirtualBox
Sunday Feb 10, 2008
By tpm on Feb 10, 2008
- I'm an AFOL - an Adult Fan Of Lego. I have far too much Lego at home, but it's impossible for me to resist the new sets every year. I just got a note from amazon.com telling me that a book my sister chose for me for Christmas just shipped - it has the coolest title - "Forbidden Lego: Build the Models Your Parents Warned You Against!" Though I don't recall my parents ever warning me ...
- I grew up in a tourist town on the coast of the north west of England. Pleasant enough, yes, but not really an intellectual hotspot. I still remember the first time I picked up and read Scientific American in W H Smiths - I was 13 and it was literally as if a whole new world opened up. First of all it contained a bunch of exotic-seeming ads for US technology companies and products. But most importantly, Martin Gardner's column, Mathematical Games, gave me a completely different perspective on what, up to that point, had been a difficult subject for me.
- A lot of my time at Cambridge as a graduate student was spent on EM fields - I guess I just got hooked on Maxwell's equations. So I was thrilled in 2002 to visit the Very Large Array in New Mexico with James, Rob Gingell, Jim Mitchell, Josh Simons and others. The VLA was designed in the late 1970s and used very long segments of circular waveguide - basically hollow pipes - to carry the signals from each antenna to the central facility. This particular waveguide was constructed from a single insulated strand of helical wire wound on the inside of an otherwise hollow tube. That construction allows a low-loss TE01 transmission mode to propagate down the guide, and prevents its conversion to other, lossier modes. Before the trip I'd only ever thought about this in theoretical form, so it was very cool to suddenly be face-to-face with a huge deployment of this idea. But I remember my colleagues thinking I was a little bit strange to be so fascinated by this stuff. Hmm - and there on the web you can find a performance evaluation of the system.
- Back in 1987 I was a Fellow of Clare College, Cambridge, and rowed in the Fellows VIII in the May Races. Since I hadn't rowed as an undergraduate, it involved learning how, and I made many early morning outings on the River Cam, getting up at 6am and cycling across town in the cold spring air. At the time I was still living the nocturnal life of the hacker, so the truly surprising part was that I stuck with it through to the day of the competition! Of course I don't remember if we were bumped, or if we bumped the boat in front of us, but I guess it's the taking part that counts.
- When I first started using Sun's software at Cambridge University, one of the things that really impressed me back then was the quality of the documentation - in particular the SunOS 4.1 manpages. We used to use them as a kind of definitive reference work, they certainly were a lot better than the offerings of the other vendors at the time. Nine years after I joined Sun, in a typical twist of geekdom, I married one of the writers that created them.
In the meantime, what else has been happening? Well, since the last entry, Bob Brewin and I have been working for Rich Green as the two CTOs responsible for Sun's Software technology portfolio. We also report to Greg Papadopoulos. I like to joke that Bob handles everything beginning with J, and I do the rest - though in reality we work pretty closely, managing the technology portfolio, reviewing what we're doing and how we're doing it, listening to customers, identifying gaps and opportunities. I've also been continuing a special interest in Virtualization and the technology underlying the Indiana program. It has been, and continues to be, sometimes frustrating, sometimes fun, but almost always interesting. Though the pace is becoming frenetic as more and more opportunity comes our way as Sun's software business expands - the truly cool MySQL acquisition being the most recent example of that.
Monday May 16, 2005
By tpm on May 16, 2005
It was fascinating to hear what things really keep our customers up at night, and their perceptions of our technology strengths and gaps. For our part, we discussed different aspects of our technology, including processors, platforms, operating systems, middleware and the Sun Grid.
Reactions varied - but apart from occasional surprises ("I didn't know Sun even did [this product or that service] .."), we heard the following (paraphrased) question: "but why are you investing in [this component]?" This theme was particularly evident when we talked about the capabilities of each area, isolated from the others. Perhaps the way we presented our technologies reinforced the common misperception that all computer system components are now just "commodities," i.e. "done," at least as far as technological innovation goes.
But by the end of the day, the customer CTOs seemed to have internalized our set of investment areas; and had a clearer idea of how each component investment related to the other, and thus improved the effectiveness of the whole offering. A great example was our CMT processor technologies, ambitious new platform designs, Solaris 10 threading and related observability capabilities like DTrace and plockstat, and so on up the stack to the Sun Grid. And then we reached one of those "Eureka!" moments, when I think the room did the synthesis step, and rediscovered the intrinsic value of a systems company - that takes an integrated approach to the coevolution of all the components of an information processing system.
Then after a brief moment basking in our little ray of sunshine, we were quickly taken to task for not communicating that message to the marketplace effectively! Sun is a company that lives and breathes innovation, and tends to focus on detailed technology messages. But for many of these CTOs and CIOs, their executive staffs need simpler, more direct, messages that make sense to less technical people.
Everyone is searching for simplicity, in particular, simple answers to complex problems, yet the levels of abstraction spanned by modern operating systems make their implementation intrinsically complex, and it's easy to get lost in the detail. For my organization, and the other members of the OpenSolaris community, our biggest challenge may simply be communicating the full value of Solaris 10 and OpenSolaris to a broad audience - both as a set of component technologies, and as a systems technology.
One of the participants suggested that Sun needs to find a "great communicator" who can take our technology portfolio, and translate it into something more accessible to non-technical people, to really get our messages and core values across.
That's an interesting idea, and one that's certainly set us thinking.
Tuesday May 10, 2005
By tpm on May 10, 2005
I'm an engineer by calling; like my colleagues, I'm in this to build software artifacts that make other people's lives better. I've worked on Solaris for many years, on numerous subsystems and problems, from architectural direction to fixing bugs. We're very excited about Solaris 10 and how much it seems to be helping customers with their problems, and changing the way people think about Solaris, Sun, and Operating Systems technology in general. I think we're living in a time of transitions, and operating systems are relevant again, as the boundaries between software system components are shifting, hardware devices become ever more capable and complex, and new business models emerge.
During Solaris 10, my principal code contributions were around modernizing the Solaris kernel port to the x86 architectures, and on bringing 64-bit Solaris up on x64 platforms - our slightly boringly named "amd64" project. Some months ago a member of that team blogged about some of the work he'd been doing on that project, and hoped that someone would spend a bit more time talking about the other work that we did during the amd64 port. That seemed like a good topic for a blog entry, so that's what I thought I'd write about first.