Wednesday Nov 11, 2009

OpenSolaris-based JeOS VMware appliance available

Sun and VMware have just released a prototype of an OpenSolaris 2009.06 based JeOS VMware appliance for building your own lightweight virtual machines. It's only 660 MB in size and offers all the advanced OpenSolaris features you have come to love and enjoy: ZFS, DTrace, Solaris Containers, network virtualization with Crossbow, the Service Management Facility (SMF), and the new Image Packaging System (IPS). And it's free. Enjoy!

Learn more about OpenSolaris JeOS here.

Friday Nov 06, 2009

The new Mantra: Consolidate, Virtualize, Automate, Self Service

So, there you have it. Every CIO these days is being tasked with reducing costs and accelerating time to market by incorporating this mantra in the data center, either by building in-house or by buying the service from an external vendor that offers these features.

It's no news that Virtualization and Cloud Computing go together. Virtualization enables Cloud Computing and is in fact one of the key underlying technologies enabling the Cloud. However, people are often left wondering if Cloud Computing is no more than "Next-Gen Virtualization". In my mind, yes and no, especially if you are talking about Private Clouds. I got motivated to write this piece to bring some clarity to the topic.

A Cloud is virtualized by definition, but the degree of cloudiness depends on the level of automation and self service built into it. If the offering is a Public Cloud, then it must incorporate almost all principles of a Cloud Computing architecture, including multi-tenancy and a pay-as-you-go model, enabled by a virtualized self-service platform with built-in services like billing, metering, and chargeback, along with public APIs or a portal for public access to the Cloud. This is much more than an evolved virtualized environment, where you can quickly get a pre-deployed server or service, ready to use, but do not necessarily have all these other built-in services like self-service and pay-as-you-go. In fact, a cloud service should be one that can be abstracted for use on an as-needed basis, not just a service that can be used over the internet.

In the case of a Private Cloud, the users are more targeted and there is more leeway in how many Cloud Computing principles are built into the infrastructure. In fact, most enterprises are going with an incremental approach so as to leverage their existing legacy systems, yet begin to harvest the benefits of Cloud Computing. This is making Hybrid Clouds more popular, where enterprises can spin off certain types of functional workloads or burst loads to an external Public Cloud. Several Public Cloud vendors offer a feature where customers can embed compute nodes from the public cloud into the company VPN, hence making the public cloud nodes part of the company data center. This architecture has some security concerns, given that in most cases the public compute nodes are not physically separate from other nodes in the cloud, it being a virtualized environment; still, it's a good midway point between an exclusive public cloud and a private cloud. It's worthwhile to mention here that companies like GoGrid and Rackspace offer dedicated hardware in their data centers to complement an enterprise private data center. However, the more dedicated the hardware, the fewer of a cloud's cost and flexibility benefits are available to you.

Wednesday Aug 26, 2009

Cloudify your Enterprise Data Center: Two emerging models

With recent advancements and announcements in the industry, it's clear that there are two emerging models for taking an Enterprise data center into the Clouds.

The first approach, "the private cloud", requires an enterprise to purchase "Cloud in a Box" software such as vCloud and vSphere, along with virtualization software, from vendors like VMware. VMware is going a few steps further up the stack with its acquisition of SpringSource, which will enable its existing and future customer base to seamlessly develop, deploy, and manage applications in VMware-based Clouds. The Private Cloud is applicable when the cloud is confined to an enterprise-owned data center, and it provides a great way to scale existing virtualized customer data centers by adding the flexibility and utilization efficiencies of a Cloud. Vendors like Rackspace and GoGrid are building Managed Private Clouds for their enterprise customers using this approach.

While a private cloud offers the CIO the benefits of a Cloud architecture, unleashing resource management, utilization, and on-demand scaling capabilities, it still does not meet the goals of a pure Cloud: it offers only limited elasticity and does not eliminate capex. The enterprise still needs to own and manage all the resources. Werner Vogels has explained this very eloquently in his blog here. Nevertheless, it enables CIOs to better manage existing resources by means of metering, billing usage, and chargeback to other business units.

The second emerging approach is that of Hybrid architectures, where enterprises extend their existing IT infrastructure to leverage on-demand resources of an external cloud, thus adding scalability on demand. The enterprise continues to utilize its existing data center and augments it by offloading certain types of usage to an external cloud. Enterprises can also use this approach to handle occasional burst loads without having to over-provision their own infrastructure for peak demand. This approach is in line with Amazon's announcement of their Amazon Virtual Private Cloud (Amazon VPC). While it has its limitations, it's a solid first step in this direction.

The two approaches are complementary and can be combined towards building Hybrid Clouds where resources are moved between multiple Clouds seamlessly. However, I see a need for standardization of the protocols and APIs offered by clouds from different vendors before we can offer this level of flexibility to all users.

[Fig. from Wikipedia: Cloud Computing]

Friday Mar 27, 2009

Sun talks out Cloud: Open Cloud Platform

Sun's Open Cloud vision unveiled: the Open Cloud Platform, an open infrastructure powered by Java, MySQL, OpenSolaris, and Open Storage software technologies. Open APIs, open formats, and open source.

On March 18th, at CommunityONE aka CloudONE, Sun unveiled the Open Cloud Platform for powering public and private clouds. We also announced that we are building our own Public Cloud, which will include a Storage and a Compute Cloud. Our Cloud will be compatible with Amazon S3 and EC2 at the API level; meaning, we will provide S3 and EC2 compatibility APIs in addition to our own, hence enabling an easy migration from Amazon services to the Sun Cloud. All clouds - public, private, or hybrid - built on Sun's Open Cloud Platform will be interoperable, and there will be minimal vendor lock-in, given the cloud platform will be built on open standards and APIs.

Storage Cloud: The Sun Cloud Storage Service is a set of web service APIs and WebDAV protocols that provide open, standards-based, on-demand, programmatic access to highly scalable storage infrastructure via the Internet ("the cloud"). With the Sun Cloud Storage Service you will get the following (a usage sketch follows the list):

  • Ability to store and retrieve data in multiple data formats
  • Programmatic web services API operations and administration control, using industry standards that don't lock you in
  • Ability to clone and snapshot volumes
  • Ability to mount cloud drives via multiple WebDAV clients including DavFS
  • AWS S3 compatibility
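
Since the service is S3-compatible at the API level, existing S3 client libraries should be able to talk to it simply by being pointed at a different endpoint. Below is a minimal, hypothetical Python sketch; the endpoint host and credentials are placeholders, not real Sun Cloud values.

    # Hypothetical sketch: storing an object via the S3-compatible API.
    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    conn = S3Connection(
        aws_access_key_id="MY_SUN_CLOUD_KEY",         # placeholder credential
        aws_secret_access_key="MY_SUN_CLOUD_SECRET",  # placeholder credential
        host="storage.cloud.example.com",             # placeholder endpoint
        calling_format=OrdinaryCallingFormat(),
    )
    bucket = conn.create_bucket("photos")
    key = bucket.new_key("vacation/beach.jpg")
    key.set_contents_from_filename("beach.jpg")

    # WebDAV access is plain HTTP, so an authenticated PUT also works:
    import requests
    requests.put("https://storage.cloud.example.com/dav/photos/beach.jpg",
                 data=open("beach.jpg", "rb"), auth=("user", "secret"))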

At the CommunityONE conference, Zmanda's CEO Chander Kant showcased Amanda Enterprise (AE) and Zmanda Cloud Backup (ZCB) integration with the Sun Cloud APIs, providing customers with backup and recovery solutions for the Sun Cloud that combine fast installation, simplified management, enterprise-class functionality, and the benefit of open formats, ensuring that customers are not locked into a vendor to recover archived data. Zmanda engaged with us to get a storage account and completed the integration in less than one week for both the S3 and WebDAV sets of APIs.

Compute Cloud: The core of the Sun Compute Service is the Virtual Data Center (VDC), based on capabilities acquired when Sun bought Q-layer in January. The VDC is a self-service UI for orchestration and provisioning of resources. It provides everything developers need to build and run a cloud-based data center, including an integrated interface to stage an application that runs on the OpenSolaris, Linux, or Windows operating systems. The VDC enables you to design applications from pre-built components using drag-and-drop; deploy them to the cloud; and monitor, manage, and reconfigure the system. It is also accessible through programmatic APIs. The data center abstraction layer allows for seamless encapsulation of an application's system architecture, and the ability to model, save, and deploy an entire system into a cloud.
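
To make the programmatic side concrete, here is a hedged sketch of what driving a VDC through a RESTful API could look like in Python. The base URL, resource paths, and payload fields are illustrative assumptions, not the published Sun Cloud API.

    import requests

    BASE = "https://api.cloud.example.com"   # placeholder endpoint
    AUTH = ("user", "secret")                # placeholder credentials

    # Create a virtual data center, then drop a server into it.
    vdc = requests.post(BASE + "/vdc", json={"name": "my-vdc"}, auth=AUTH).json()
    requests.post(BASE + "/vdc/%s/servers" % vdc["id"],
                  json={"os": "opensolaris", "cores": 2, "memory_gb": 4},
                  auth=AUTH)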

At CommunityONE, Sun's Cloud Computing CTO, Lew Tucker demonstrated a functional virtual data center in the cloud, running the Wikipedia and Facebook design patterns. He showed how to build a VDC using the drag-and-drop GUI interface as well as the Sun Cloud RESTful APIs.

Ref. fig. left: The tool's left pane lists the different sorts of gear/virtual machine images (VMIs) that you might put into your data center as drag-and-droppable objects. The objects can be Linux servers, Windows servers, Solaris servers, firewalls, web servers, load balancers, caching servers, databases, networking switches, and so on. Some are standard configurations that Sun will offer; others will be built by the Sun Cloud community and published in a catalog that you can use. On the right is a blank pane representing an empty cloud that's waiting for you to drop your personalized virtual data center into.

What happens next could not be simpler. You start picking up servers, switches, firewalls, etc., and you just drop them into the cloud. Then you connect them. Certain objects, like servers, can be configured. For example, you can describe a server's processor attributes (GHz rating, core count, memory, etc.), and the resulting pay-as-you-go cost depends on that configuration. More cores, more memory, more GHz... more cost. The VDC is automatically assigned one public IP, and the servers in the VLAN get private IPs. The diagram on the left shows the typical Facebook design pattern built using the VDC.

A replay of the demo is available here. We are blown away by the interest in our Cloud and everyone's eagerness to give us their credit cards to get access to the services TODAY. We can't wait to roll this out this summer! In the meantime, please keep the feedback coming.


Monday Mar 16, 2009

Be our Cloud guest: Real or Virtual


CommunityOne EAST is here, and we at Sun are excited to tell you all about what is cooking at our end. Be our guest, in the Cloud, real or virtual.

If your thinking is Cloudy about cloud computing, you will get some sunshine. You have heard that we are building a Sun Public Cloud; we will tell you more about that.

March 18, 2009

Wednesday Jan 14, 2009

Cloud Computing Services Officer

So Long CIO, Welcome CCSO.

How cool is that for a Job Title?

Seems like the CIO job of managing a company's business information is expanding towards one with the additional responsibility of designing IT operations as services that can be consumed and paid for internally and externally. CCSOs might oversee the use of subscription services and might design their systems and networks to be used by others for subscription purposes.

Can't agree more with Mark Everett Hall of Computerworld that the arrival of CCSOs (or whatever they end up being called) will be the ultimate recognition by business that IT is less and less about systems, software, or even the information they support, and more about the services those tools bring to an organization.

Wednesday Jan 07, 2009

Sun expands its Cloud Computing offering: Acquires Q-layer

Sun is elevating investment in cloud computing to fulfill our vision of enabling the creation of a large number of public and private clouds that are Sun-powered, open, and compatible.

Yesterday, on January 6th, 2009, Sun officially closed the acquisition of Q-layer, a cloud computing technology provider that simplifies cloud management and lets users quickly provision and deploy applications.

The Q-layer acquisition adds unique technologies to Sun's cloud computing portfolio that will serve as key differentiators for Sun. By combining Sun's leadership in data-intensive computing, openness, interoperability, and virtualization with Q-layer's technologies for resource management and deployment, cloud computing will be simplified for customers deploying applications to public and private clouds.

For more information regarding this deal, please see Sun's official announcement.

Clouds on the Horizon..

Not Storm Clouds, but Computing Clouds.

Happy New Cloud to one and all! Check out the comments from industry-leading experts, execs, and commentators on the Shape of the Cloud to Come. Besides the common message that IaaS, PaaS, and SaaS services will grow in the coming year, many of those interviewed believe the next generation of "Middleware for the Cloud" will emerge and dominate over the traditional Java Enterprise application servers. Some other thoughts that shine through include more play from enterprise companies, the user experience of Cloud applications becoming complex and key, a surge in PaaS services, and a "2.0" version of applications in industries like healthcare, government, and others.

Tuesday Dec 23, 2008

Cloud Computing for the White House?

OK, now we are talking!

Oh well, we know the Obama team is quite technology savvy and wants to run the administration on state-of-the-art computer technologies. As an example, the Obama campaign website used MySQL on the backend.

So then, can Cloud Computing benefits lure the administration? Security and technology experts discuss on National Public Radio whether Cloud Computing will work for the White House and how its computers should run. Kevin L. Jackson further muses on whether the Obama administration should use Cloud Computing. He believes that Cloud Computing technology can indeed be used to implement the recommendations of the Center for Strategic and International Studies (CSIS), which recently released the report of its Commission on Cybersecurity for the 44th Presidency.

Provide your comments to NPR.

Monday Dec 22, 2008

2008 Wrap-up: Top 10, What Say.

YouTube announces the Top 10 YouTube videos of all time! Guitar videos are at #3 and I am not surprised. I for one have been learning guitar online with YouTube videos! (Yeah, that's me in the photo :-)).

Also check the Top 10 Web Platforms, Semantic Web Products, Consumer Web Apps, Mobile Web Products, and Enterprise Web Apps of 2008 as identified by ReadWriteWeb. While many on the list are no surprise (Facebook, OpenSocial, Google Apps, Twitter, AWS, Android, Firefox, Meebo, LinkedIn, Google Maps (Mobile), Y! Search Monkey), a few that caught my attention are: Pandora, Shazam, Fring, Brightkite, Last.fm, Ning, Hulu, Qik, Cooliris, Atlassian, DimDim, WordPress, Mindtouch, Dapper, Hakia, BooRah, Zemanta, Uptake, and Zoho. I'll let you Google them if you haven't been playing with them already.

While Apple has been identified as the Best BigCo of 2008, Zoho has been identified as the Best LittleCo of 2008. Zoho is an Indian startup that is likely to compete head-on with the likes of Microsoft Office, Google Apps, and SalesForce.com with their office productivity suite, product management tools, and CRM solutions.



Saturday Dec 20, 2008

Cloud Optimized Storage

Check out this pretty neat outline by Dave Graham of a Cloud Optimized Storage (COS) architecture/solution.

Also check out a very interesting term coined recently by Reuven Cohen: the Content Delivery Cloud.



Thursday Dec 11, 2008

Watch this Space in 2009: Sun's Cloud ChalkTalk with Analysts

In a ChalkTalk discussion with analysts, Sun executives talk about our Cloud division and more. InformationWeek's article quotes Sun Cloud VP Dave Douglas's advice: "Watch this Space in 2009".

And I echo that sentiment!

Yes, Sun is Reaching for the Clouds. 


Friday Dec 05, 2008

Sys-Con on Sun's Cloud Computing Portfolio

Sys-Con talks about Sun's Cloud Computing Portfolio. Check it out.



Thursday Dec 04, 2008

Wired Planet: Cloud Computing Interoperability

In some of my earlier blog entries, I have mentioned the need for Cloud interoperability in order to prevent cloud vendor lock-in. I am glad to see some industry movement in that direction. The Cloud Computing Interoperability Forum (CCIF) was formed recently and is a group of industry stakeholders that are active in cloud computing. Its objective is to enable a global cloud computing ecosystem whereby organizations are able to seamlessly work together for the purpose of wider industry adoption of cloud computing technology and related services. A key focus will be placed on the creation of a common, agreed-upon framework/ontology that enables two or more cloud platforms to exchange information in a unified manner.

I encourage everybody to contribute and participate as appropriate.


Sunday Nov 23, 2008

Sun appoints the new Cloud Computing Czar

Yes We Can! 

Hey folks, Sun is serious about Cloud Computing. We announced a brand new Cloud Computing organization last week, to be led by Senior VP Dave Douglas. The newly formed Cloud Computing organization will be in addition to two other organizations at Sun: Systems (includes Solaris and Storage) and Application Platform Software (includes the software stack).

Stay tuned for more! Also, we'd love to hear your ideas on Cloud services and features you'd like to see from Sun. Click here to provide feedback to us. Thanks.

Check out the special offers we have for you to develop your own Cloud.

Also see the OnTheRecord link here for an earlier update.



Friday Nov 21, 2008

Drupal For Facebook and OpenSocial

If you are a Drupal user and have been wanting to develop Facebook applications using the Facebook APIs or Facebook Connect, there is good news. Dave Cohen just released Drupal for Facebook, an add-on module for Drupal that allows Drupal sites to integrate with Facebook and cutting-edge Facebook applications to be built on top of Drupal. I attended a Drupal meetup yesterday where Dave walked us through how to build Drupal apps for Facebook, and it was very clear that Dave has put in a tremendous amount of work and due diligence to make it easy for the rest of the community to plug into Facebook. Try it out!

Sun is offering a bundled Drupal/OpenSolaris/Apache/MySQL image on EC2. For details, please click here.


The Drupal community is further looking for a volunteer to step up and build a plugin for OpenSocial. Any enthusiasts out there willing to help? The OpenSocial plugin would enable Drupal developers to build apps for Orkut, MySpace, Ning, Hi5, Hyves, Friendster, RockYou, SocialSite, and all the upcoming OpenSocial platforms. I am imagining a site that might mash up social data from all these different platforms to build a unified community across them! Cozy!

Wednesday Nov 19, 2008

Response: Comment on "Save money with Open Storage"

This is in response to a comment on the previous post, with regard to the benchmark demonstrating no performance penalty for using a NAS storage device like AmberRoad instead of LocalFS for a Web 2.0 application. The comment stated:
"I respectfully disagree with your comment that there is no performance penalty with NFS... You show a 12% increase in processing utilization which is an overall performance hit. You are doing approx. the same amount of work but chewing up more CPU... while your users are still the same, if scaled to 100% util, NFS wouldn't allow as many users as LocalFS, because you are turning blocks into packets into blocks, which is SLOW."
As per the subject matter experts at Sun, this is a fairly common argument with regard to % idle. We don't know if it's wrong in this case or not, but idle is not always an indicator of future performance or perceived headroom, and it is wrong to assume so. The reality is that you don't know how much more you can get from your system as configured unless you push it to do so. % idle is the wrong metric to focus on (a very common mistake). The metric that matters in this case is supporting concurrent users with response times for all transactions falling within the guidelines. The NAS solution does this just fine for less $$. Also, as per the detailed data here, the I/O latency reported by iostat is the same for both DAS and NAS; therefore the statement that NFS is slow is not backed up by evidence.

Does NFS I/O ultimately cost more? Sure, but the missing point here is that it doesn't matter with this test. Might it hurt us down the road pushing to 100%? Maybe, but with a cheaper solution and load scaled to 75% of the DAS solution, user response time was fine (this is what matters). And to use the idle analogy, with over 25% CPU free, we believe it could scale another 800 users, equaling the DAS result. Once we arrange for hardware (better network and drivers) to push the load beyond 2,400 users, we will know for sure. Please stay tuned.
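
For what it's worth, the back-of-envelope arithmetic behind that headroom claim looks like this, under the simplifying assumption that user capacity scales linearly with CPU utilization:

    # NAS result: 2,400 users at 72% CPU utilization.
    users, util = 2400, 0.72
    capacity = users / util        # ~3,333 users at a theoretical 100% CPU
    headroom = capacity - users    # ~933 additional users
    print(int(headroom))           # "another 800" is the conservative read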

Lastly, do most customer environments really run at 100%?  Will the purported "NAS penalty" really affect them?  This shows that even if they run at 75% (pretty high for most environments I've seen), it's not an issue. HTH.

Tuesday Nov 18, 2008

Save money with Open Storage

Open Storage helps you save time and money for Web-scale Applications.

Want Proof?

Check out this excellent benchmark for Web 2.0 run on the Sun Storage 7410 (aka AmberRoad) and CMT technology based Sun Fire T5120 servers.

It's also evident from the benchmark data that you don't suffer a performance penalty for using NAS. There is a fairly common impression that performance could or would be slower than with DAS; this benchmark shows that's just not true in this environment.

Server          Storage                                  Ch, Cr, Th  GHz  Type           Users  Util  RU  Watts/user  Users/RU
Sun Fire T5120  Sun Storage 7410 Unified storage (NFS)   1, 8, 64    1.4  UltraSPARC T2  2,400  72%   1   0.20        2,400
Sun Fire T5120  LocalFS (DAS)                            1, 8, 64    1.4  UltraSPARC T2  2,400  60%   1   0.20        2,400

(Ch, Cr, Th = chips, cores, threads)

Why OpenSocial for Social Networking applications and platforms

Today, you don't have to be a geeky engineer to develop IT applications. In fact, it almost seems as if software engineers are an endangered species and neophytes will take over all engineering jobs. Particularly in the social networking world, with the ever-growing supply of interesting web APIs and open application platforms, almost anyone and everyone can develop a web-scale application with little to no technical background, and potentially make millions if the idea is cool and clever.

The most popular application platform today seems to be the Facebook Platform, given it has the most users (over 120 million worldwide) and applications (over 40k). Facebook has its own proprietary programming API and markup language called FBML, and an application written to FBML runs only on Facebook or on a site built on the Facebook Platform.

OpenSocial, on the other hand, is an open source API originally developed by Google. It is a community effort, supported by Sun (check out Project SocialSite), MySpace, Orkut, Ning, Hi5, and Google App Engine, among others. The languages and technologies supported are HTML, JavaScript, XML, and REST.

As a developer, building an application on OpenSocial means it can be deployed on multiple social networking sites that support OpenSocial, and your total number of users across these sites might add up to more than what you would get from a single proprietary site. Plus, you might be able to tap into different geographies and age groups depending on which social networking site is more popular in a given region: e.g., Orkut is more popular in Asia, whereas MySpace is more popular in the US and Bebo is more popular in the UK. Also, you automatically build safety in numbers by spreading the risk, such that if one site is down, other sites are up and still churning money for you. A small sketch of the portable OpenSocial wire protocol follows below.
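
To give a flavor of why that portability works, here is a minimal, hypothetical Python sketch of the OpenSocial RESTful protocol: the same request shape works against any container implementing the spec. The base URL is container-specific and real requests must be OAuth-signed; both are simplified away here.

    import requests

    BASE = "https://social.example.com/social/rest"   # container-specific base

    # Fetch the current viewer's friends; "@me" and "@friends" are standard
    # OpenSocial identifiers, so the call ports across containers.
    resp = requests.get(BASE + "/people/@me/@friends", params={"format": "json"})
    for person in resp.json().get("entry", []):
        print(person["displayName"])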

Yesterday I had a chance to attend a presentation by Dave, author of BuddyPoke, one of the most popular applications on Orkut. It was very interesting to learn that BuddyPoke is an application developed by 2 developers, is live on 8 social networking sites (MySpace, NetLog, Orkut, Friendster, Hyves, Hi5), and has 26 million+ users, with a peak install rate of 260K installs in one day. It was launched just 6 months ago. Compare this to some of the most popular Facebook applications like RockYou, which I am told has about 20 million Facebook users. It takes Dave less than a few hours to get his BuddyPoke application up and running on a new OpenSocial platform.

You get my point. The web is good, and being social is good. However, the social web is even better. We all have the opportunity to shape the future of social networking via our contributions to OpenSocial.


And if you are an OpenSocial or Facebook developer, don't forget to leverage Sun's offer of free hosting on Joyent for one year.

Monday Nov 17, 2008

Cloud Computing Expo: San Jose

The Cloud Computing Expo at the Fairmont Hotel in San Jose, CA is fast approaching, and I plan to be there to learn and meet the folks interested in Cloud Computing. If you plan to be there, don't miss the hands-on Cloud Computing BootCamp, also being held at the Fairmont Hotel on Nov. 20, which is free.

Led by Williamson, the Cloud Computing BootCamp will cover all the major players and provide a hands-on program with configuration samples, live demos, and working setups you can further adapt and play with.

Hope to see you there and chat about how you can leverage the Sun Cloud offerings to build applications for the cloud or build your own clouds.



Wednesday Nov 12, 2008

Free Bebo Apps hosting on OpenSolaris

Yesterday, it was my pleasure to be a part of BeboDevNite at AOL's offices in Mountain View. Bebo is a social networking platform/website, just like Facebook, MySpace, and Hi5, and is owned by AOL. Bebo has been more popular in Europe so far but is gaining popularity in the US.


The Sun Startup Essentials team, including myself, was there, and we announced our newly launched program to offer developers free hosting of Bebo Apps with our hosting partner Joyent (learn about the Joyent-Bebo developer program). We also offered access to discounted Sun hardware, open source software, professional services, and connections with VCs.

If you are a Bebo developer, check out this program. Plus, Bebo has made it very simple and intuitive to migrate Facebook Apps to Bebo. You have an opportunity here to capture new users in the Bebo community, especially outside the US where Bebo is more popular.


Sunday Nov 09, 2008

EUCALYPTUS: Open Source Cloud Infrastructure. The Skies are Opening!

Last week I attended a fantastic talk on EUCALYPTUS at a cloud computing meetup. The presenter, Rich Wolski, is a professor in the Computer Science Department at the University of California, Santa Barbara. He created EUCALYPTUS, an open source cloud computing implementation that is interface-compatible with Amazon EC2. It was a great educational talk, very relevant to the budding cloud computing industry. Rich Wolski put the popularity of Cloud Computing in perspective when he mentioned that the term "Cloud Computing" was only coined about a year ago, by Google in a press release on Oct. 8, 2007. Today, after 1 year and 1 month, a Google search on "cloud computing" gives ~9 million results! This explosive growth in cloud computing got Prof. Wolski interested in the subject for research and gave birth to the EUCALYPTUS project.

EUCALYPTUS is an acronym that expands to: Elastic Utility Computing Architecture Linking Your Programs To Useful Systems. The infrastructure is designed to support multiple client-side interfaces besides EC2. It is implemented using commonly available Linux tools and basic web service technologies, making it easy to install and maintain.

The fig. below illustrates the EUCALYPTUS architecture.

[Fig.: EUCALYPTUS architecture]
The Cloud Controller implements all the gory details of the cloud backend provisioning, whereas the client-side API translator emulates a cloud interface like EC2. This design makes EUCALYPTUS modular and extensible to emulate clouds other than EC2; e.g., EUCALYPTUS plans to emulate Google App Engine in the near future by adding yet another API translation layer. In the version available today, the EUCALYPTUS translator is built to the EC2 WSDL published by Amazon and is 100% interface-compatible with EC2. When the RightScale management and monitoring tools connected with EUCALYPTUS, they could not identify any difference between EC2 and EUCALYPTUS.
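
Because EUCALYPTUS speaks the EC2 wire protocol, standard EC2 client tooling can simply be pointed at a EUCALYPTUS front end. Here is a minimal sketch with the Python boto library; the host name and credentials are placeholders for a local installation.

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection(
        aws_access_key_id="EUCA_KEY",          # issued at web-based signup
        aws_secret_access_key="EUCA_SECRET",
        host="cloud.example.edu", port=8773,   # placeholder front end
        path="/services/Eucalyptus",           # EUCALYPTUS's EC2 endpoint path
        is_secure=False,
    )
    # The same calls you would make against EC2 work unmodified:
    for image in conn.get_all_images():
        print(image.id, image.location)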

The security and authentication mechanism is very similar to EC2's, except there is no credit card. User signup is web based, and ssh key generation and installation are implemented just like EC2. Since Amazon does not publish administration and accounting tools for EC2, EUCALYPTUS defines its own tools for user management and cloud management.

When the EUCALYPTUS project was launched, the objective was to keep it simple, extensible, easy to install and maintain, and built on widely available and popular open source technologies. Another objective was to ensure that it is indeed a cloud, given there has been a lot of confusion about what a cloud really is or is not. To ensure that, the team decided to emulate an existing cloud and made the following design decisions:

  • EUCALYPTUS would be interface-compatible with Amazon EC2 and S3
  • It would work with command line tools directly from Amazon without any modifications
  • It would leverage existing EC2 value-added services like RightScale

Given that EUCALYPTUS is an open source project that needs to run on any hardware without prior knowledge of the underlying infrastructure, it was also designed to function as a software overlay, such that the existing installation is not disturbed too much and no assumptions are made about the hardware.

Some of the Goals of EUCALYPTUS were:

  • Foster research in elastic/cloud/utility computing
  • Serve as an experimental vehicle prior to buying commercial services from EC2 and other clouds
  • Provide a debugging and development platform for EC2 and other clouds
  • Provide a basic cloud platform for the open source community; it might evolve into a Linux-like experience
  • Not be a replacement technology for EC2 or other commercial cloud services; in its current form, it can scale up to 1,000 nodes

Some of the biggest challenges addressed by Rich Wolski and his team of 5 research students in building EUCALYPTUS were:

  • Extensibility
  • Client-side interface (a modular design so that it's compatible with EC2 and other clouds)
  • Networking
  • Security
  • Packaging and Installation (One click install)

EUCALYPTUS is hosted as a public cloud, and it's free to use. However, only pre-installed images can be run, and usage is limited to 6 hours. The EPC (EUCALYPTUS Public Cloud) configuration consists of:

  • 8 Pentium Xeon processors (3.2 GHz)
  • 2.5 GB of memory per image
  • 3.6GB disk space
  • 1 Gb Ethernet interconnect
  • Linux 2.6.18-xen-3.1
  • Xen 3.2

Yes, it's as big as an electron in the EC2 cloud. So clearly, even though it's not a replacement for a commercial cloud, cloud vendors could learn a lot from its implementation if they wish to build their own cloud. Developers and end users could use it for testing and debugging before deploying to a real cloud. Given the popularity of Cloud Computing, it could be the next Linux experience! Who knows.

IMHO, it's a fabulous piece of work done by a team of 7 engineers at UCSB, using open source technologies, working with a limited budget over a duration of about 6-8 months. Check it out!!


Wednesday Nov 05, 2008

Barack Obama and MySQL

It is widely acknowledged that Barack Obama put together the best minds and team to run his campaign. One of the proof points is that his website, www.barackobama.com, was driven by MySQL!

From Jonathan's blog:
http://blogs.sun.com/jonathan/

" On behalf of Sun Microsystems, I would like to offer my sincerest congratulations to President elect Barack Obama. What an extraordinary accomplishment.

I would also like to extend my congratulations to his web team for having chosen MySQL as the platform behind their election web site, BarackObama.com. "


Open Social Networks built on Open Identity and Open APIs

This has been a year of application platforms, and we have seen a number of social networks and XaaS offerings flourish: Facebook, MySpace, Hi5, Friendster, Orkut, Google App Engine, Y!OS, and EC2, to name a few. Everything is syndicated and social is everywhere. It's the people who are the center of the services, not the software. There is a seemingly infinite demand for applications, almost difficult to deliver. Social computing is becoming a pillar of mass culture. Software is increasingly mediating between people, and apps are a way of life.

However, it's also evident that in the current world, these social networks and platforms live alone, isolated from each other, and there is no easy way to maneuver seamlessly between different networks. This limitation is giving birth to identity platforms wherein every user has an open identity, and the social network vendors are being encouraged to implement their platforms on top of these open identity standards. In the coming year, we are likely to see more "Open Social Networks" so as to lower the barrier to entry into a social network, ease user acquisition, and make socializing a more vibrant experience.

An Open Identity is an entity built on open standards that would define some of the following user attributes:

  • My profile
  • What people say about me
  • Who my friends are
  • What is my content
  • Which sites I can go to
  • etc.

Basically, this entity would define a user. If all platforms leverage the same standard identity platform, then it would be seamless for a user to enter new networks. This would also give service providers valuable insights into a new user base by allowing users to easily plug into their networks using an open identity.

Yahoo!, MySpace, and other vendors in this space are preaching what they call the "Open Stack", which would make user data portable and enable users to carry their address books with them while signing on to a new network or platform. As an example, the figure below illustrates the Y! Open Platform.

With the web becoming more open and social, standards are needed to power the social functionality without having to reinvent the wheel every time. It only makes sense to tap into the existing semantic information and participate in the ecosystem. It's important to understand this distributed landscape and plan intelligently so that one can best leverage the building blocks.

Sunday Nov 02, 2008

Sun's Cloud Computing Portfolio

Update: Sun has expanded its Cloud Computing portfolio with the recent acquisition of Q-layer, a cloud computing company that automates the deployment and management of both public and private clouds. The Q-layer organization, based in Belgium, is now part of Sun's Cloud Computing business unit, which develops and integrates cloud computing technologies, architectures, and services.

Cloud computing is about managing petascale data. Sun's server and storage systems can radically improve the data-intensive computing emerging in the cloud. Some clouds are closed platforms that lock you in. Sun's open source philosophy and Java principles form the core of a strategy that provides interoperability for large-scale computing resources. Sun's virtualization solutions for advanced high-performance computing deployments are integrated with Solaris and Web 2.0 technologies such as Java and MySQL.

Check out Sun's Cloud Computing portfolio below:

  • MySQL is almost the de facto database of choice powering the next generation of database-driven, web-scale applications in the cloud. Cloud computing solutions for MySQL make it easy to develop, deploy, and manage your new and existing MySQL-backed applications in a virtual computing environment. The MySQL Enterprise for Amazon EC2 subscription is a comprehensive offering of database software and production support to deliver applications on Amazon EC2 with optimal performance, reliability, security, and uptime. For the first time, organizations can cost-effectively deliver database-driven, web-scale computing in the "cloud", fully backed by the MySQL database experts at Sun. You can learn more about it here.


  • The Webstack from Sun is an optimized open source software stack bundled with the latest release of OpenSolaris 2008.11. It comes pre-configured so the most popular applications (Apache, PHP, MySQL) work seamlessly out of a Solaris box. By using Solaris with these binaries in a Cloud, you can enjoy the best levels of performance while also reducing your time-to-service.
  • Performance is one of the key metrics that users are skeptical about in the cloud. While it's not a critical criterion, they still want to be able to profile their applications running in the cloud. NetBeans provides plugins to profile your application on Amazon EC2. Check out the steps here on how to use NetBeans for profiling your application in the Cloud. You can learn more at the Cloud Computing Bootcamp on Nov. 19, 2008.

  • Virtualization is key to enabling a Cloud Computing environment. The Sun xVM portfolio offers a simple and efficient way to leverage a heterogeneous, virtualized environment:
    • xVM Ops Center: Discover, provision, update, and manage globally dispersed IT environments from one console
    • xVM VirtualBox: Build, test, and run applications on one desktop or laptop for multiple OS platforms side by side
    • xVM Server: Securely and reliably virtualize systems and services in a Windows, Solaris OS, or Linux environment
    • Sun VDI Software: Securely access a virtual desktop from nearly any client on the network

Further, Solaris 10 includes the Containers technology, an implementation of operating-system-level virtualization first made available in 2005. A Solaris Container is the combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance. By consolidating multiple sets of application services onto one system and placing each into an isolated virtual server container, system administrators can reduce cost and provide all the same protections of separate machines on a single machine, making it a perfect technology for the Clouds. A small sketch of creating a zone follows.
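
As an illustration, here is a hedged sketch of carving out a zone with the standard zonecfg/zoneadm tools, driven from Python for consistency with the other examples here; the zone name and path are placeholders.

    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.check_call(cmd, shell=True)

    # Define, install, and boot an isolated virtual server ("zone").
    run("zonecfg -z webzone 'create; set zonepath=/zones/webzone; commit'")
    run("zoneadm -z webzone install")   # populate the zone's file systems
    run("zoneadm -z webzone boot")      # start it as an isolated virtual server
    run("zoneadm list -cv")             # webzone should now show as 'running'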

  • Besides the product portfolio, Sun is also offering services in the Cloud Computing space.
    • Zembly is a service from Sun, hosted on Network.com, which is a place to create social applications together. At Zembly, you easily create and host social applications of all shapes and sizes, targeting the most popular social platforms on the web, like Facebook, Meebo, iPhone, and Google Gadgets. And you do it along with other people, using just your browser and your creativity, working collaboratively with others.
    • Project Kenai is the foundation for the connected developer of tomorrow. It allows you to freely host your open source projects and code. Find and collaborate with developers of like mind and passion from around the globe.
    • Project SocialSite is an open source (CDDL/GPL2) project for building widgets and web services that make it easy for you to add social networking features to your existing web sites, including the ability to run OpenSocial gadgets and have them backed by the same social graph.
    • Project Caroline is an advanced R&D project at Sun Microsystems. It is a hosting platform for development and delivery of dynamically scalable Internet-based services. It is designed to serve an emerging market of small and medium sized software-as-a-service (SaaS) providers.


  • Sun is offering OpenSolaris on Amazon EC2. OpenSolaris, which comes with tools such as ZFS and Dynamic Tracing (DTrace), is offered for free, in contrast to some Linux offerings that cost money. ZFS allows instant rollback and continual checksumming capabilities, something developers have found lacking in the EC2 platform. In addition, Sun is offering several popular EC2 images like Drupal, Ruby on Rails, Apache, and Tomcat. For the entire list, click here. (A launch sketch follows this list.)


  • Sun is running several promotions for hosting Facebook and OpenSocial applications on OpenSolaris, free for 1 year, with some of our cloud computing partners like Joyent. For more details, check out our Startup Essentials program.
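
As a quick taste of the EC2 item above, here is a hedged sketch of launching an OpenSolaris instance with the Python boto library; the AMI ID and keypair are placeholders, not a real Sun-published image.

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection("AWS_KEY", "AWS_SECRET")   # your AWS credentials
    reservation = conn.run_instances(
        image_id="ami-00000000",    # placeholder: substitute a Sun OpenSolaris AMI
        instance_type="m1.large",
        key_name="my-keypair",      # an existing EC2 keypair for ssh access
    )
    print(reservation.instances[0].id)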

Stay tuned for more to come from Sun in this space.

Wednesday Oct 29, 2008

My 2 cents on Microsoft Azure

Microsoft recently announced their Cloud offering, called Azure. Azure is more of a PaaS Windows cloud, offering their proprietary, closed Windows products as a service. Seems to me like the traditional Software as a Service (SaaS) model. Does not REALLY excite me! Marketing dollars well spent, though, in picking the name Azure, meaning "clear, cloudless sky". In addition, they have not yet announced anything around SLAs and pricing for the service, key for successful adoption of cloud based computing.

Yahoo announced yesterday their own PaaS, the Y! Open platform based on OpenSocial APIs, which, unlike Azure, is geared more towards the social networking audience. This is more interesting to me, as they will immediately be able to capture new developers and expand their current ecosystem.

But yes, Azure is offering more than what Google App Engine offers today. To me, it would be interesting to see what M$ offers around industry standard APIs like OpenSocial, which is how they can attract more of the next-gen developers building mashup services around their PaaS. They seem to have endorsed OpenSocial anyway.

Facebook has gone from 27 million users to 140 million users in the last year after opening up their Facebook API to developers, with 40 thousand third-party applications hosted on Facebook. Now that's revolution. It's all about expanding the ecosystem of your developers in this era of internet computing, and thus monopolizing the market and making yourself indispensable. It's all about the open APIs, rather than spinning your own proprietary API clouds like Azure.

Web-scale and Cloud Computing open APIs are almost like the Open Source model which we have all come to love and thrive on. Stay tuned for my next blog on Open Web Identity leading to identity platforms and Open Social Networks.

Monday Oct 27, 2008

OpenSolaris + MySQL + ZFS Success Story in Production at SmugMug

SmugMug, a photo and video publishing site, goes into production on OpenSolaris + MySQL + ZFS. Check out this story on why a Linux geek decided to move his site from Linux to OpenSolaris.

Don MacAskill, chief geek and CEO of SmugMug, says: "ZFS is the most amazing filesystem I've ever come across. Integrated volume management. Copy-on-write. Transactional. End-to-end data integrity. On-the-fly corruption detection and repair. Robust checksums. No RAID-5 write hole. Snapshots. Clones (writable snapshots). Dynamic striping. Open source software." He is also excited about the CoolStack 5.1 stack available in OpenSolaris along with MySQL.
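
To make a couple of those features concrete, here is a small sketch driving the standard zfs command line from Python; the pool and dataset names are placeholders.

    import subprocess

    def zfs(args):
        subprocess.check_call(["zfs"] + args.split())

    zfs("snapshot tank/photos@pre-upgrade")                 # instant copy-on-write snapshot
    zfs("clone tank/photos@pre-upgrade tank/photos-test")   # writable clone for testing
    zfs("rollback tank/photos@pre-upgrade")                 # instant rollback if things go wrong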

Full story on SmugMug, powered by OpenSolaris and MySQL:

http://smugmug.com 


Wednesday Oct 22, 2008

It's all about Evolution: IDC research survey results on Cloud Computing

IDC issued a press release this week on a research survey about Cloud Computing adoption over the next five years. Good to see some numbers backing the hype.

As per IDC, Cloud computing is reshaping the IT marketplace, creating new opportunities for suppliers and catalyzing changes in traditional IT offerings. Over the next five years, IDC expects spending on IT cloud services to grow almost threefold, reaching $42 billion by 2012 and accounting for 9% of revenues in five key market segments. More importantly, spending on cloud computing will accelerate throughout the forecast period, capturing 25% of IT spending growth in 2012 and nearly a third of growth the following year.

In addition, they added that to succeed, cloud services providers need to address a mixture of traditional and cloud concerns. According to survey respondents, the two most important things a cloud services provider can offer are competitive pricing and performance level assurances. These are followed by the ability to demonstrate an understanding of the customer's industry and the ability to move cloud services back on-premises if necessary.

For more, go to http://idc.com/getdoc.jsp?containerId=prUS21480708

Monday Oct 20, 2008

Scaling Wikipedia with LAMP: 7 billion page views per month

I recently attended an interesting talk by Brion Vibber, CTO of the Wikimedia Foundation, the non-profit organization that runs the infrastructure for Wikipedia. He described how his team of 7 engineers manages the Wikipedia site, which gets an average of 7 billion page views per month. The highlights from the talk, including the architecture that scales the site to that traffic, are below. Wikipedia ranks among the top 10 sites in terms of traffic.

The site runs on the LAMP stack and you know what that is:

  • Linux
  • Apache
  • MySQL from Sun
  • Perl/PHP/Python/Pwhatever :-)

Wikimedia runs the site on about 400 x86 servers. Of those, about 250 run the web servers and the remaining run the MySQL database. Recently they acquired OpenSolaris Thumper machines from Sun, which they are exploring. The Sun Fire X4500, aka Thumper, is the world's first open source storage server running OpenSolaris and ZFS. Currently they are using the Thumpers for storing media files on the ZFS file system, and they are simply loving it. They have also begun to use the DTrace feature of OpenSolaris and can't stop raving about it!!

11/21/08 Update: Link to the recent press release: Wikimedia Selects Sun Microsystems to Enhance Multimedia Experience.

At the core, Wikipedia runs on a very simple system architecture, as shown below, and given it's a non-profit organization, almost all the software is open source and FREE.


Simple is nice, but it can be SLOW :-) In order to speed things up, the first step is to add caching at the front end as well as at the backend of the system. On the web front end, Wikipedia uses the Squid reverse proxy for caching, and at the backend, they use memcached, as shown below:


Squid is a proxy server and web cache daemon. It has a wide variety of uses, from speeding up a web server by caching repeated requests, to caching web, DNS, and other computer network lookups for a group of people sharing network resources. Squid is good for largely static dynamic sites like a wiki, where the content does not change often: the public face of a given wiki page changes rarely, so one can cache at the HTTP level. Wikipedia also uses Squid for geographical load balancing, so that they can use cheaper, faster local bandwidth.

Along with the Apache/PHP servers, Wikipedia also uses APC, the Alternative PHP Cache. Since PHP compiles scripts to bytecode and then throws it away after execution, compiling on every request adds a lot of unnecessary overhead. Hence it is recommended to always use an opcode cache with PHP; this drastically reduces the startup time for large apps.

Another speedup technique used by Wikipedia is memcached. memcached is a general-purpose distributed memory caching system, often used to speed up dynamic database-driven websites by caching data and objects in memory to reduce the number of times the database must be read. memcached allows you to share temporary data in memory across the network. Even though one needs to go over the network to get the data, the latency is still smaller than disk-based database access. Wikipedia usually stores rendered pages in memcached.
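
The pattern is the classic read-through cache: try memcached first, fall back to the database on a miss, then populate the cache. A minimal sketch with the python-memcache client; the server addresses and the render step are placeholders.

    import memcache

    mc = memcache.Client(["10.0.0.1:11211", "10.0.0.2:11211"])

    def render_from_database(title):
        # placeholder for the real database query + page-render step
        return "<html>%s</html>" % title

    def get_page(title):
        key = "page:" + title.replace(" ", "_")
        html = mc.get(key)
        if html is None:                      # cache miss: do the expensive work
            html = render_from_database(title)
            mc.set(key, html, time=3600)      # cache the rendered page for an hour
        return html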

After adding all possible cache, the next thing is to add CASH! :-) i.e., add more servers to gain scalability.

Those 250 or so web servers come with plenty of memory. The underutilized memory can be used for memcached, and it adds up to a big memcached pool.

Further, to get a speedup at the database level, Wikipedia uses simple sharding techniques: they split the data along logical partitions, such as subsites that don't interact closely, as in the sketch below.
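
The simplest version of that scheme is a static map from logical partition to database server; the host names here are placeholders.

    # Route each wiki (a logical data partition) to a fixed database server.
    SHARDS = {
        "enwiki": "db1.example.org",    # big wikis get their own servers
        "dewiki": "db2.example.org",
        "default": "db3.example.org",   # everything else shares a server
    }

    def db_host_for(wiki):
        return SHARDS.get(wiki, SHARDS["default"])

    print(db_host_for("enwiki"))   # -> db1.example.org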


They also do functional sharding and split the machines along functional boundaries for speedup.


The next popular technique they use to gain speed is replication: a master server for all writes, and slave servers for most reads. The secret, they claim, behind configuring the master and slave machines is to make sure the slaves are faster than the master, since the slaves need to keep up with the master's write stream while also serving reads. A sketch of the read/write split follows.
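
A bare-bones sketch of that read/write split: statements that modify data go to the master, everything else is spread across the slaves. The host names are placeholders and the routing is deliberately crude.

    import random

    MASTER = "db-master.example.org"
    SLAVES = ["db-slave1.example.org", "db-slave2.example.org"]

    def host_for(sql):
        # Anything that modifies data goes to the master; reads go to a slave.
        verb = sql.lstrip().split()[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return MASTER
        return random.choice(SLAVES)

    print(host_for("SELECT * FROM page"))    # one of the slaves
    print(host_for("INSERT INTO page ..."))  # the master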


As you can see, the beauty of the architecture is that it is SIMPLE and all open source, and it rocks!

