Friday Feb 12, 2010
Wednesday Feb 03, 2010
By hjfoxwell on Feb 03, 2010
Everybody, it seems, wants a cloud today. "Cloud Computing" has captured the imaginations of the trade press, IT managers, CTOs, and profit-hungry vendors of computing infrastructure software and hardware. But do those who claim they want cloud-like data centers really know what they are asking for and what they truly need? Probably not, from what I've observed recently, and their confusion is understandable given the myriad self-serving definitions of cloud computing.
A reasonably objective definition of this supposedly new method of providing IT services can be found at NIST, the US Government's National Institute of Standards and Technology. A fully implemented "public cloud", according to NIST, includes these essential characteristics:
On-demand self-service: cloud users can unilaterally provision computing capabilities such as server time and network storage automatically without requiring human interaction with each service's provider.
Broad network access: all services are available over the network and accessed through standard mechanisms using "thin" or "thick" client devices (smart mobile phones, laptops, and desktop PCs).
Resource pooling: computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to each consumer's demand.
Rapid elasticity: services can be rapidly and automatically provisioned to quickly scale out and be rapidly released to quickly scale in.
Measured service: resource usage can be monitored and controlled, allowing for chargeback to tenants only for the resources consumed.
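The "measured service" characteristic above can be sketched in a few lines of Python. This is purely a hypothetical illustration, not any provider's actual billing code; the resource names and rates are invented for the example, and prices are kept in integer cents to avoid floating-point rounding:

```python
# Hypothetical sketch of NIST-style "measured service": meter per-tenant
# resource usage, then bill each tenant only for what it actually consumed.
# Resource names and rates are illustrative, not from any real provider.

RATES_CENTS = {          # price per unit of each metered resource, in cents
    "cpu_hours": 10,
    "gb_storage": 5,
}

def chargeback(usage_records):
    """Aggregate (tenant, resource, amount) records into per-tenant totals."""
    bills = {}
    for tenant, resource, amount in usage_records:
        bills[tenant] = bills.get(tenant, 0) + RATES_CENTS[resource] * amount
    return bills

records = [
    ("web-team", "cpu_hours", 120),
    ("web-team", "gb_storage", 500),
    ("analytics", "cpu_hours", 40),
]
print(chargeback(records))  # → {'web-team': 3700, 'analytics': 400}
```

The point of the sketch is simply that chargeback presupposes metering: unless usage is recorded per tenant per resource, none of the rest of the model can be priced.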
Obviously, not all IT departments will shut down their data centers and host all their services on public clouds, so they express interest in building "private clouds" or in transforming existing infrastructure into cloud-like services. But are ALL the features of the NIST cloud model required for such efforts? Not if you aren't implementing a multi-tenant environment, already own hardware resources that can be repurposed, and don't need usage-based chargeback for your internal clients.
In fact, many IT departments that claim a need for private cloud computing are interested mainly in self-provisioning and efficient consolidation, requirements that can often be met with modern virtualization and distributed computing technologies, including grid computing and even traditional large-scale SMP/multicore systems.
So, think before you put your head in the clouds. Identify what your technology goals and resources are, and implement only the solution components you need. Call it a "cloud" if you must, but remember that this "new" way of computing is in part just a repackaging and renaming of traditional technologies, only some of which may be relevant to your policies and mission.
Thursday Nov 05, 2009
By hjfoxwell on Nov 05, 2009
The 9th Fallacy of Distributed Computing
While working recently with colleagues and customers to define and architect public and private "cloud computing" systems and to explore the technical challenges of implementing such systems, I was reminded of Peter Deutsch's 1994 observation of the Seven Fallacies of Distributed Computing, along with the Eighth Fallacy added in 1996 by James Gosling:
- The network is reliable.
- Latency is zero.
- Bandwidth is infinite.
- The network is secure.
- Topology doesn't change.
- There is one administrator.
- Transport cost is zero.
- The network is homogeneous.
- Location is irrelevant.
By suggesting this fallacy I mean the assumption that where computing happens and where data resides is not an issue in today's massively connected global Internet. With sufficient connectivity and bandwidth, you might assume that outsourcing your computing services, possibly even outside your home country, is simply a matter of economics. This is clearly false. End users of public cloud-based applications may neither know nor care that their computation occurs on some randomly and dynamically assigned set of virtualized servers, which may change even as they use them, nor be concerned about precisely which storage devices are dynamically assigned to host their data. Nevertheless, these resources do indeed have physical presences that tie them to specific locations with geographic and jurisdictional characteristics.
The overall stability and reliability of a cloud provider data center depends in part on its geographic location - its proximity to sufficient power and cooling resources, and its safety from natural and man-made disasters. That's why Google has built data centers close to power generating facilities and why Switch Communications built its huge SuperNAP center in geologically stable and meteorologically quiet Las Vegas.
But even more critical than physical location is the legal jurisdiction in which your computation occurs and where your data resides. Laws governing privacy, data ownership, intellectual property, monitoring, and auditing vary from state to state in the US and globally from one country to another. And pinning down the exact location of a global distributed IT service is difficult. In the event of legal disputes over liability or disclosure issues, where will cases be tried? Many such jurisdictional questions remain unanswered, and some countries are reacting with understandable caution about sharing global computing resources. Canada, for example, has prohibited the use of US data centers for certain government projects due to concerns about the provisions of the US Patriot Act, and India is considering legislation requiring IT business services to originate within the country.
So, if you haven't already frightened yourself examining the myriad cloud security issues, google for "cloud computing" with "jurisdiction" for some additional reading material. You'll find that, as with real estate, location is anything but irrelevant.
Cloud Computing Brings New Legal Challenges
The Determination of Jurisdiction in Grid and Cloud Service Level Agreements
Legal Implications of Cloud Computing
The Boundaries of Cloud Computing: World, Nation or Jurisdiction?
Tuesday Jul 07, 2009
By hjfoxwell on Jul 07, 2009
- The Age of American Unreason, by Susan Jacoby
- Why so many Americans are stupid and proud of it.
- The Day We Found the Universe, by Marcia Bartusiak
- It wasn't so long ago that humans still believed in their centrality in the universe. Those fuzzy little blobs in the sky...what were they? Clouds of gas? Evolving planetary systems within the only galaxy in the universe? No, they were galaxies like our own Milky Way. In fact, there were BILLIONS of them, and we're not so special anymore. Hubble and others finally figured it out.
- The Snows of Kilimanjaro and Other Stories, by Ernest Hemingway
- Never read much Hemingway, and having recently written my own book I thought I'd read a bit about other writers. Hemingway got the Nobel Prize for Literature in 1954, for his "...straightforward prose, his spare dialogue, and his predilection for understatement". Hmmm...not that I could see from Snows. Perhaps I should (re)read The Old Man and the Sea...
- ...until then, Ray Bradbury will still top my list of the world's best authors.
- Softwar: An Intimate Portrait of Larry Ellison and Oracle, by Matthew Symonds
- Getting to know Sun's future CEO
- Moby Dick, by Herman Melville
- Yes, read all 135 chapters, including all the exposition on whales and whaling life, and all the biblical and mythological allusions. Interesting, but tough to get through. Can't beat that final chapter however!
Monday Jun 01, 2009
By hjfoxwell on Jun 01, 2009
- Network Virtualization with Crossbow
- Turn any host into a SCSI target with COMSTAR
- Host virtual guests using xVM Hypervisor or LDoms
- SPARC® support for Distro Constructor, Auto Install, and Snap
- Intel Xeon 5500 processor support with deep power management
- MySQL and PHP DTrace Probes in the WebStack
And by the way, our book, Pro OpenSolaris, is still perfectly relevant, since it focuses on key OpenSolaris features that have not changed since the earlier 2008.11 release.
Tuesday Apr 21, 2009
By hjfoxwell on Apr 21, 2009
Apress' Pro OpenSolaris is the second English-language book to be published specifically about Sun Microsystems' OpenSolaris open source operating system. The first was the comprehensive, 1000-page OpenSolaris Bible published by Wiley in March 2009. That book purposely covered all aspects of OpenSolaris, both for those with only basic familiarity with Solaris and UNIX and for those with greater administration and developer experience; it reviewed desktop tools, networking, shell programming, and system administration along with the unique features of OpenSolaris.
Pro OpenSolaris, published in April 2009 and based on the OpenSolaris 2008.11 release, assumes the reader is already comfortable with the user and development environments of GNOME and Linux; it focuses primarily on the key OpenSolaris features that should be learned and exploited for Web development. It includes an extensive chapter detailing a sample Web stack project based on the zones, ZFS, security, and SMF topics introduced in the preceding chapters. The book also highlights relevant online references and resources for further learning. Although all of the information about OpenSolaris is available on myriad Web sites, books such as Pro OpenSolaris give you a roadmap and recommended sequence of what to learn first. It also strongly emphasizes that open source solutions can be effectively hosted on OpenSolaris as well as on Linux.
Monday Jul 07, 2008
Tuesday May 20, 2008
By hjfoxwell on May 20, 2008
No, not the source code...but the source of learning about operating systems: university CS courses. Recently I presented Real World Observability Tools in Computer Science Education at the ACM's SIGCSE08 meeting in Portland, OR. Sun was one of the symposium's Platinum Sponsors; I attended both as a Sun system engineer who specializes in OpenSolaris and as a university CS educator who teaches a graduate course in operating systems based on OpenSolaris.
My course was still in progress when I presented at the symposium; it's completed now. I focused on two major topics: OS observability and virtualization. For the observability topic we studied DTrace in OpenSolaris along with other OS tools such as Linux SystemTap. For virtualization, we studied Xen, OpenSolaris containers, and Linux features such as vServers. In general, such topics are not covered in depth in university CS curricula.
Most of the students were in the MS in CS program at GMU, although there were a few in the CS PhD program as well. The students' backgrounds were quite varied; some were already experienced and employed in highly technical IT work, some were only beginning their CS graduate studies and careers. I required the students to do two projects, one on observability and one on virtualization. Most of the projects, but not all, centered on OpenSolaris features and issues; I allowed students to choose the specific areas of investigation within the two broad topic areas.
I asked several students for permission to share their projects in order to illustrate the variety and utility of such assignments in learning about operating systems in general and about OpenSolaris in particular:
- D.Corner: Deadlock Detection Using DTrace
- K.Chuon: The Limit of DTrace - A Failed Attempt at Deadlock Detection
- K.Locher: Tracing ZFS on OS X
- M.Spainhower: Feasibility Analysis of DTrace for Rootkit Detection
- M.Revelle: Topo: An Exploration in Building Customized Utilities for Data Collection and Visualization of Operating System Behavior
- K.Locher: Storage Management and File System Differences Between Container-based Operating System Virtualization Implementations
- G.Southern: Analysis of SMP VM CPU Scheduling
As you can see, providing students with state-of-the-art open source tools inspires them to investigate real world problems and to create interesting solutions. And, since I insisted on inclusion and fair coverage of observability and virtualization technologies other than those found in OpenSolaris, students were encouraged to report on the good, bad, and ugly of these technologies.
Student feedback about this course has been very positive, although many students expressed a preference for focusing in greater detail on just one of the two topics. I'll propose such a change for the next offering of this course; for the DTrace topic, I would follow the excellent example set by the OpenSolaris-based curriculum currently being taught in China, as presented at SIGCSE08.
Friday Feb 15, 2008
By hjfoxwell on Feb 15, 2008
Well, here's another one: '*AMP'. I'm sure you've heard of or even used 'LAMP' Web solutions: Linux, Apache, MySQL, and PHP. In fact, a very large fraction of Web sites use these four technologies. But any of the four letters can be replaced with a wildcard, including the 'L'. Think '*AMP'. But what could possibly replace the '*'?
How about 'S' for 'Solaris'? Or 'O' for 'OpenSolaris'? Because that first letter represents the operating system foundation that supports the other three. And all those useful AMP applications are not only available but run fine, or even better, on other letters of the alphabet. Like 'S' and 'O'.
And now, to make using either of those two letters even easier, the 'AMP' stack, as it is generally called, is integrated with and optimized for Solaris, on Intel, AMD, and SPARC systems. Get the record-setting 'AMP' stack for Solaris along with all your favorite developer tools like Ruby, Squid, and Java. Build leading edge Web 2.0 solutions with the Solaris Express Developer Edition 1/08 release, which includes Sun's new xVM virtualization technologies.
So expand your view of '*AMP'. Think of Sun's continuing and innovative support for open source solutions. Oh, and more about that third letter, 'M'. Check out the news about how Sun has enhanced its level of support for your favorite open source database software.
Friday Nov 02, 2007
By hjfoxwell on Nov 02, 2007
I received and immediately read my November 07 subscription edition of Linux Journal several days ago. But Jon 'Maddog' Hall's rant about Sun has been bugging me since I read it...I must respond. Before I do, I should note that I worked as a UNIX specialist with Jon at Digital Equipment Corporation from 1989 through 1995, back when "Open" meant "UNIX", and we UNIX folks at DEC took a lot of heat from the VMS crowd that ran the company. I'm sure I don't need to review the history and demise of DEC nor point out its failures to respond to customer requirements. Jon was a Linux expert even back then, porting it to DEC's Alpha processor.
In '95 I joined Sun; Jon's career after DEC led him through Linux-oriented companies like VA Linux and SGI to his current role as head of Linux International. Our paths have crossed a few times in recent years. In April '05 Jon gave a talk promoting open source HPC software at a Beowulf cluster user group meeting. I attended partly to participate and contribute to the group, but at that particular meeting I mainly wanted to show him Sun's plans to open source Solaris. This was a huge step for Sun, not lightly considered, and admittedly littered with numerous technical and marketing challenges. Jon glanced at the plan and summarily dismissed it, commenting "it's not GPL".
Now from someone who claims to "believe in free and open source software", this was a rather disappointing reaction, but one that is common among the GPL and Linux fundamentalists. Essentially, Jon said at that '05 meeting and again in his article that there is only One True Open Source Way - GPL and Linux. All others are infidels to be ignored or derided for their efforts. Jon adds insult to injury by dragging up old news of Sun's past offenses: the change from BSD to System V, the admittedly wrongheaded decommitment from Solaris on Intel five years ago (and its subsequent very successful reinstatement), and the nonissue of processor endian differences. Nothing confuses customer decision-making more than comparing favored new technology with a competitor's old technology. Or ignoring significant progress in bringing innovative technology into the open source communities, such as ZFS, DTrace, and OpenSolaris itself, which Jon barely acknowledges. But then he's not an Open Source guy; he is, and I quote, "a Linux Guy". There is a difference.
I freely admit that Sun's marketing messages around open source software in general and about OpenSolaris in particular have had a rough time evolving and clarifying. But, as Ian Murdock recently pointed out to the large and growing OpenSolaris community, in less than two years Sun has open sourced its crown jewel, Solaris, opened its engineering and development processes, created a large, active, and contributing developer community, and now has a working binary distribution of OpenSolaris that has received almost universal acclaim. "Dead reckoning" from Sun's position three years ago has brought users and developers a real choice of open source operating systems. This contradicts the dogma that the only true open source platform is Linux. Users of open source applications can build their solutions on a choice of open source operating systems; some of these OS's are better than others; I guess choice is a heretical idea.
Jon also criticizes Sun for preferring and promoting its own technology, as if this were uncommon behavior for a profit-making corporation. Even Red Hat, which promotes the "Open Source = Linux" philosophy, wants Linux users to turn to their commercial product when they need mission critical support. Sun cooperates, for the mutual benefit of itself and its customers, with Red Hat, Novell/SuSE, HP, IBM, and yes, even with Microsoft. But having a bias towards one's own products is hardly unique to Sun, and cannot reasonably be written off as "bait-and-switch".
So, Jon, in the interest of promoting a more ecumenical open source world, I suggest that you sit down with us infidels and see what we are doing on behalf of the end users and developers. And remember that neither Rome, Linux, nor OpenSolaris were built in a day. You probably won't be converted, but you may become a bit more tolerant.
Monday Feb 26, 2007
By hjfoxwell on Feb 26, 2007
Tuesday Feb 13, 2007
By hjfoxwell on Feb 13, 2007
He has worked for Sun Microsystems since 1995. Prior to that he worked for 6 years as a UNIX and Internet specialist for Digital Equipment Corporation; he has worked with UNIX systems since 1979. He also maintains Sun's internal website of Linux technical information and has been a Linux user since 1995; he has been influential in developing and promoting Sun's Linux, open source, and x86 strategy and messages. He is coauthor of two Sun BluePrints: Slicing and Dicing Servers: A Guide to Virtualization and Containment Technologies, Sun BluePrints Online, October 2005, and The Sun BluePrints Guide to Solaris Containers: Virtualization in the Solaris Operating System, Sun BluePrints Online, October 2006, and author of the book Pro OpenSolaris.
He received his doctorate in Information Technology in 2003 from George Mason University (Fairfax, VA), and has since taught graduate courses at GMU in Operating Systems, Computer Architecture, Computer Security, and Electronic Commerce.
Harry is also a Vietnam veteran; he served as a platoon sergeant in the US Army's 1st Infantry Division in 1968-1969.
See more info here