Friday Nov 06, 2009

The new Mantra: Consolidate, Virtualize, Automate, Self Service

So, there you have it. Every CIO these days is being tasked with reducing costs and accelerating time to market by incorporating this mantra into the data center, either by building it in-house or by buying the service from an external vendor that offers these features.

It's no news that Virtualization and Cloud Computing go together. Virtualization enables Cloud Computing and is in fact one of the key underlying technologies of the Cloud. However, people are often left wondering whether Cloud Computing is no more than "Next-Gen Virtualization". In my mind, yes and no, especially if you are talking about Private Clouds. I got motivated to write this piece to bring some clarity to the topic.

A Cloud is virtualized by definition, but the degree of cloudiness depends on the level of automation and self-service built into it. If the offering is a Public Cloud, it must incorporate almost all the principles of a Cloud Computing architecture, including multi-tenancy and a pay-as-you-go model, enabled by a virtualized self-service platform with built-in services such as billing, metering, and charge-back, along with public APIs or a portal for public access to the Cloud. This is much more than an evolved virtualized environment, where you can quickly get a pre-deployed server or service, ready to use, but which does not necessarily offer all these other built-in services like self-service and pay-as-you-go. In fact, a cloud service should be one that can be consumed on an as-needed basis, not just a service that happens to be used over the internet.

In the case of a Private Cloud, the users are more targeted, and there is more leeway in how many Cloud Computing principles are built into the infrastructure. In fact, most enterprises are taking an incremental approach so as to leverage their existing legacy systems while beginning to harvest the benefits of Cloud Computing. This is making Hybrid Clouds more popular, where enterprises can spin off certain types of functional workloads, or burst loads, to an external Public Cloud. Several Public Cloud vendors offer a feature where customers can embed compute nodes from the public cloud into the company VPN, making the public cloud nodes part of the company data center. This architecture has some security concerns, since the public compute nodes are in most cases not physically separate from other nodes in the cloud, it being a virtualized environment; still, it is a good middle ground between an exclusive public cloud and a private cloud. It is worth mentioning that companies like GoGrid and Rackspace offer dedicated hardware in their data centers to complement an enterprise's private data center. However, the more dedicated the hardware, the fewer the cost and flexibility benefits of a cloud available to you.

Wednesday Aug 26, 2009

Cloudify your Enterprise Data Center: Two emerging models

With recent advancements and announcements in the industry, it's clear that there are two emerging models for taking an enterprise data center into the Clouds.

The first approach, "the private cloud", requires an enterprise to purchase "Cloud in a Box" software such as vCloud and vSphere, along with virtualization software, from vendors like VMware. VMware is going a few steps further up the stack with its acquisition of SpringSource, which will enable its existing and future customer base to seamlessly develop, deploy, and manage applications in VMware-based Clouds. The Private Cloud applies when the cloud is confined to an enterprise-owned data center, and it provides a great way to scale existing virtualized customer data centers by adding the flexibility and utilization efficiencies of a Cloud. Vendors like Rackspace and GoGrid are building Managed Private Clouds for their enterprise customers using this approach.

While a private cloud offers the CIO the benefits of a Cloud architecture, unlocking resource management, utilization, and on-demand scaling capabilities, it still does not meet the goals of a pure Cloud: it offers only limited elasticity and does not eliminate capex. The enterprise still needs to own and manage all the resources. Werner Vogels has explained this very eloquently in his blog. Nevertheless, it enables CIOs to better manage existing resources by means of metering, billing usage, and charging back to other business units.

The second emerging approach is that of Hybrid architectures, where enterprises extend their existing IT infrastructure to leverage the on-demand resources of an external cloud, adding scalability on demand. The enterprise continues to utilize its existing data center and augments it by offloading certain types of usage to an external cloud. It can also use this approach for handling occasional burst loads, rather than over-provisioning its own infrastructure to meet peak demand. This approach is in line with Amazon's announcement of the Amazon Virtual Private Cloud (Amazon VPC). While it has its limitations, it's a solid first step in this direction.

The two approaches are complementary and can be combined to build Hybrid Clouds where resources move between multiple Clouds seamlessly. However, I see a need for standardization of the protocols and APIs offered by clouds from different vendors before we can offer this level of flexibility to all users.


 [Fig. from Wikipedia: Cloud Computing]

Wednesday Jan 14, 2009

Cloud Computing Services Officer

So Long CIO, Welcome CCSO.

How cool is that for a Job Title?

It seems the CIO's job of managing the business's information is expanding toward one that requires the additional responsibility of designing IT operations as services that can be consumed and paid for internally and externally. CCSOs might oversee the use of subscription services and might design their systems and networks to be used by others on a subscription basis.

I can't agree more with Mark Everett Hall of Computerworld that the arrival of CCSOs (or whatever they end up being called) will be the ultimate recognition by business that IT is less and less about systems, software, or even the information they support, and more about the services those tools bring to an organization.

Wednesday Jan 07, 2009

Sun expands its Cloud Computing Offering: Acquires Q-layer

Sun is elevating its investment in cloud computing to fulfill our vision of enabling the creation of a large number of public and private clouds that are Sun-powered, open, and compatible.

Yesterday, on January 6th, 2009, Sun officially closed the acquisition of Q-layer, a cloud computing technology provider that simplifies cloud management and lets users quickly provision and deploy applications.

The Q-layer acquisition adds unique technologies to Sun's cloud computing portfolio that will serve as key differentiators for Sun. By combining Sun's leadership in data-intensive computing, openness, interoperability, and virtualization with Q-layer's technologies for resource management and deployment, cloud computing will be simplified for customers deploying applications to public and private clouds.

For more information regarding this deal, please see Sun's official announcement.

Tuesday Dec 23, 2008

Cloud Computing for the White House?

Ok, Now we are talking!

Oh well, we know the Obama team is quite technology savvy and wants to run the administration on state-of-the-art computer technologies. As an example, the Obama campaign website used MySQL on the backend.

So then, can Cloud Computing's benefits lure the administration? Security and technology experts discuss on National Public Radio whether Cloud Computing will work for the White House and how its computers should run. Kevin L. Jackson further muses on whether the Obama Administration should use Cloud Computing. He believes that Cloud Computing technology can indeed be used to implement the recommendations made by the Center for Strategic and International Studies (CSIS), which recently released the report of its Commission on Cybersecurity for the 44th Presidency.

Provide your comments to NPR.

Monday Dec 22, 2008

2008 Wrapup: Top 10, What Say.

YouTube announces the Top 10 YouTube videos of all time! Guitar videos are at #3, and I am not surprised. I for one have been learning guitar online with YouTube videos! (Yeah, that's me in the photo :-)).

Also check out the Top 10 Web Platforms, Semantic Web Products, Consumer Web Apps, Mobile Web Products, and Enterprise Web Apps of 2008 as identified by ReadWriteWeb. While many on the list are no surprise (Facebook, OpenSocial, Google Apps, Twitter, AWS, Android, Firefox, Meebo, LinkedIn, Google Maps (Mobile), Y! Search Monkey), a few that caught my attention are: Pandora, Shazam, Fring, Brightkite, Last.fm, Ning, Hulu, Qik, Cooliris, Atlassian, DimDim, WordPress, Mindtouch, Dapper, Hakia, BooRah, Zemanta, Uptake, and Zoho. I'll let you Google them if you haven't been playing with them already.

While Apple has been identified as the Best BigCo. of 2008, Zoho has been identified as the Best LittleCo. of 2008. Zoho is an Indian startup that is likely to compete head-on with the likes of Microsoft Office, Google Apps, and Salesforce.com with its office productivity suite, product management tools, and CRM solutions.



Saturday Dec 20, 2008

Cloud Optimized Storage

Check out this pretty neat outline by Dave Graham of a Cloud Optimized Storage (COS) architecture/solution.

Also check out a very interesting term coined recently by Reuven Cohen: the Content Delivery Cloud.



Thursday Dec 11, 2008

Watch this Space in 2009: Sun's Cloud ChalkTalk with Analysts

In a ChalkTalk discussion with analysts, Sun executives talk about our Cloud Division and more. InformationWeek's article quotes Sun Cloud VP Dave Douglas's advice: "Watch this Space in 2009"

And I echo that sentiment!

Yes, Sun is Reaching for the Clouds. 


Friday Dec 05, 2008

Sys-Con on Sun's Cloud Computing Portfolio

Sys-Con talks about Sun's Cloud Computing Portfolio. Check it out.



Wednesday Nov 19, 2008

Response: Comment on "Save money with Open Storage"

This is in response to a comment on the previous post regarding the benchmark demonstrating no performance penalty for using a NAS storage device like AmberRoad instead of a LocalFS for a Web 2.0 application. The comment stated:

"I respectfully disagree with your comment that there is no performance penalty with NFS... You show a 12% increase in Processing utilization which is an overall performance hit. You are doing approx. the same amount of work but chewing up more CPU... while your users are still the same, if scaled to 100% Util, NFS wouldn't allow as many users as LocalFS, because you are turning Blocks into packets into blocks, which is SLOW."
As per the subject-matter experts at Sun, this is a fairly common argument with regard to idle %. We don't know whether it's wrong in this case or not, but idle is not always an indicator of future performance or perceived headroom, and it is wrong to assume so. The reality is that you don't know how much more you can get from your system as configured unless you push it to do so. % idle is the wrong metric to focus on (a very common mistake). The metric that matters in this case is supporting concurrent users with response times for all transactions falling within the guidelines. The NAS solution does this just fine for less $$. Also, as per the detailed data here, the I/O latency reported by iostat is the same for both DAS and NAS; therefore the statement that NFS is slow is not backed up by evidence.

Does NFS I/O ultimately cost more? Sure, but the missing point here is that it doesn't matter in this test. Might it hurt us down the road pushing to 100%? Maybe, but with a cheaper solution, and load scaled to 75% of the DAS solution, user response time was fine (and that is what matters). And to use the idle analogy: with over 25% of the CPU free, we believe it could scale another 800 users, equaling the DAS result. Once we arrange for hardware (better network and drivers) to push the load beyond 2,400 users, we will know for sure. Please stay tuned.
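As a back-of-the-envelope sketch of that headroom claim, assuming (simplistically) that user capacity scales linearly with CPU utilization, the idle CPU in the benchmark implies roughly this much room:

```python
def headroom_users(current_users, utilization):
    """Estimate additional users supportable before reaching 100% CPU,
    assuming capacity scales linearly with utilization (a simplification)."""
    return int(current_users * (1.0 - utilization) / utilization)

# NFS run from the benchmark: 2,400 users at 72% utilization
print(headroom_users(2400, 0.72))   # roughly 900 more users
# DAS run: 2,400 users at 60% utilization
print(headroom_users(2400, 0.60))
```

Linear scaling is, of course, exactly the assumption being questioned above; the real answer only comes from pushing the load.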

Lastly, do most customer environments really run at 100%? Will the purported "NAS penalty" really affect them? This shows that even if they run at 75% (pretty high for most environments I've seen), it's not an issue. HTH.

Tuesday Nov 18, 2008

Save money with Open Storage

Open Storage helps you save time and money for Web-scale Applications.

Want Proof?

Check out this excellent benchmark for Web 2.0, run on the Sun Storage 7410 (aka AmberRoad) and CMT-based Sun Fire T5120 servers.

It's also evident from the benchmark data that you don't suffer a performance penalty for using NAS. There is a fairly common impression that performance could or would be slower than with DAS; this benchmark shows that it's just not true in this environment.

System / Storage                           Ch, Cr, Th   GHz   Processor       Users   Util   RU   Watts/user   Users/RU
Sun Fire T5120 + Sun Storage 7410 (NFS)    1, 8, 64     1.4   UltraSPARC T2   2,400   72%    1    0.20         2,400
Sun Fire T5120 + LocalFS                   1, 8, 64     1.4   UltraSPARC T2   2,400   60%    1    0.20         2,400

Wednesday Nov 12, 2008

Bebo Apps hosting free on Open Solaris

Yesterday, it was my pleasure to be a part of the BeboDevNite at AOL's offices in Mountain View. Bebo is a social networking platform/website, just like Facebook, MySpace, and Hi5, and is owned by AOL. Bebo has been more popular in Europe so far but is gaining popularity in the US.


The Sun Startup Essentials team, including myself, was there, and we announced our newly launched program to offer developers free hosting of Bebo Apps with our hosting partner Joyent (learn about the Joyent-Bebo developer program). We also offered access to discounted Sun hardware, open source software, professional services, and connections with VCs.

If you are a Bebo developer, check out this program. Plus, Bebo has made it very simple and intuitive to migrate Facebook Apps to Bebo. You have an opportunity here to capture new users in the Bebo community, especially outside the US where Bebo is more popular.


Monday Oct 27, 2008

OpenSolaris + MySQL + ZFS Success Story in Production at SmugMug

SmugMug, a photo and video publishing site, goes into production on OpenSolaris + MySQL + ZFS. Check out this story on why a Linux geek decided to move his site from Linux to OpenSolaris.

Don MacAskill, chief geek and CEO of SmugMug, says: "ZFS is the most amazing filesystem I’ve ever come across. Integrated volume management. Copy-on-write. Transactional. End-to-end data integrity. On-the-fly corruption detection and repair. Robust checksums. No RAID-5 write hole. Snapshots. Clones (writable snapshots). Dynamic striping. Open source software." He is also excited about the CoolStack 5.1 stack available on OpenSolaris along with MySQL.

Full story on SmugMug powered by OpenSolaris and MySQL

http://smugmug.com 


Wednesday Oct 22, 2008

It's all about Evolution: IDC research survey results on Cloud Computing

IDC issued a press release this week on a research survey about Cloud Computing adoption over the next five years. Good to see some numbers backing the hype.

As per IDC, Cloud computing is reshaping the IT marketplace, creating new opportunities for suppliers and catalyzing changes in traditional IT offerings. Over the next five years, IDC expects spending on IT cloud services to grow almost threefold, reaching $42 billion by 2012 and accounting for 9% of revenues in five key market segments. More importantly, spending on cloud computing will accelerate throughout the forecast period, capturing 25% of IT spending growth in 2012 and nearly a third of growth the following year.

In addition, they added that to succeed, cloud services providers need to address a mixture of traditional and cloud concerns. According to survey respondents, the two most important things a cloud services provider can offer are competitive pricing and performance level assurances. These are followed by the ability to demonstrate an understanding of the customer's industry and the ability to move cloud services back on-premises if necessary.

For more, go to http://idc.com/getdoc.jsp?containerId=prUS21480708

Monday Oct 20, 2008

Scaling Wikipedia with LAMP: 7 billion page views per month

I recently attended an interesting talk by Brion Vibber, CTO of the Wikimedia Foundation, the non-profit organisation that runs the infrastructure for Wikipedia. He described how his team of 7 engineers manages the Wikipedia site, which gets an average of 7 billion page views per month and ranks among the top 10 sites in terms of traffic. The highlights from the talk, including the architecture that lets the site scale to this traffic, are below.

The site runs on the LAMP stack and you know what that is:

  • Linux
  • Apache
  • MySQL from Sun
  • Perl/PHP/Python/Pwhatever :-)

Wikimedia runs the site on about 400 x86 servers. Of those, about 250 run the web servers and the remainder run the MySQL database. Recently they acquired OpenSolaris Thumper machines from Sun, which they are exploring. The Sun Fire X4500, aka Thumper, is the world's first open source storage server, running OpenSolaris and ZFS. Currently they are using the Thumpers for storing media files on the ZFS file system, and they are simply loving it. They have also begun to use the DTrace feature of OpenSolaris and can't stop raving about it!

11/21/08 Update: Link to the recent Press Release WikiMedia selects Sun Microsystems to Enhance Multimedia Experience.

At the core, Wikipedia runs on a very simple system architecture, as shown below, and given it's a non-profit organisation, almost all the software is open source and FREE.


Simple is nice, but it can be SLOW :-) To speed things up, the first step is to add caching at the front end as well as the back end of the system. On the web front end, Wikipedia uses the Squid reverse proxy for caching, and at the back end they use memcached, as shown below:


Squid is a proxy server and web cache daemon. It has a wide variety of uses, from speeding up a web server by caching repeated requests, to caching web, DNS, and other computer network lookups for a group of people sharing network resources. Squid is a good fit for dynamic sites with largely static content, like wikis: the public face of a given wiki page does not change that often, so it can be cached at the HTTP level. Wikipedia also uses Squid for geographical load balancing, so that it can use cheaper, faster local bandwidth.
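A toy sketch of the idea behind that HTTP-level caching: serve a stored copy of a page until its TTL expires, and only then go back to the origin server. (This class and its fetch function are illustrative; they are not Squid's actual logic.)

```python
import time

class ReverseProxyCache:
    """Minimal TTL cache in the spirit of a reverse proxy like Squid."""

    def __init__(self, fetch_from_origin, ttl_seconds=300):
        self.fetch = fetch_from_origin   # callable: url -> page body
        self.ttl = ttl_seconds
        self.store = {}                  # url -> (body, expiry_time)
        self.hits = self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[1] > time.time():
            self.hits += 1
            return entry[0]              # still fresh: serve cached copy
        self.misses += 1
        body = self.fetch(url)           # miss or expired: hit the origin
        self.store[url] = (body, time.time() + self.ttl)
        return body

# Usage: repeated requests for the same wiki page hit the cache.
proxy = ReverseProxyCache(lambda url: f"<html>render of {url}</html>")
for _ in range(3):
    proxy.get("/wiki/Cloud_computing")
print(proxy.hits, proxy.misses)   # 2 1
```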

Along with the Apache/PHP servers, Wikipedia uses APC, the Alternative PHP Cache. Since PHP compiles scripts to bytecode and then throws it away after execution, recompiling on every request adds a lot of unnecessary overhead. Hence it is recommended to always use an opcode cache with PHP. This drastically reduces the startup time for large apps.
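A loose analogue of what an opcode cache buys you, in Python rather than PHP: compile a script once, then reuse the compiled object on later calls instead of recompiling the source each time.

```python
import functools

@functools.lru_cache(maxsize=None)
def compile_script(source):
    """Compile once per distinct source string; later calls are cache hits."""
    return compile(source, "<script>", "exec")

code = compile_script("x = 1 + 1")
# Second call with the same source returns the same compiled object,
# skipping the compile step entirely -- the opcode-cache effect.
print(compile_script("x = 1 + 1") is code)   # True
```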

Another speedup technique used by Wikipedia is memcached, a general-purpose distributed memory caching system often used to speed up dynamic database-driven websites by caching data and objects in memory, reducing the number of times the database must be read. memcached lets you share temporary data in networked memory; even though one needs to go over the network to get the data, the latency is still smaller than disk-based database access. Wikipedia usually stores rendered pages in memcached.
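The pattern is plain cache-aside: check the cache for a rendered page, and on a miss render it from the database and store the result. A minimal sketch, with a dict standing in for the memcached client (the key name and render function are illustrative):

```python
cache = {}   # stand-in for a memcached client's get/set interface

def render_from_database(page_id):
    # Stand-in for the expensive parse-and-render of wiki markup.
    return f"<html>page {page_id}</html>"

def get_rendered_page(page_id):
    key = f"rendered:{page_id}"
    page = cache.get(key)
    if page is None:                       # cache miss
        page = render_from_database(page_id)
        cache[key] = page                  # populate for later readers
    return page

print(get_rendered_page(42))   # <html>page 42</html>
```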

After adding all possible cache, the next thing is to add CASH! :-) That is, add more servers to gain scalability.

Those 250 or so web servers come with plenty of memory. The underutilized memory can be devoted to memcached and adds up to a big memcached store.

Further, to get a speedup at the database level, Wikipedia uses simple sharding techniques. They split the data along logical partitions, such as subsites that don't interact closely.


They also do functional sharding and split the machines along functional boundaries for speedup.
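Both kinds of sharding can be sketched as simple routing functions: data sharding hashes a logical partition key (here, a subsite name) to pick a database server, while functional sharding sends whole workloads to dedicated pools. The server and pool names below are made up for illustration, not Wikipedia's real topology.

```python
import hashlib

DATA_SHARDS = ["db1", "db2", "db3", "db4"]                 # hypothetical
FUNCTIONAL_POOLS = {"search": "searchpool", "media": "mediapool"}

def shard_for(subsite):
    """Data sharding: a stable hash maps each subsite to one DB shard."""
    digest = hashlib.md5(subsite.encode()).hexdigest()
    return DATA_SHARDS[int(digest, 16) % len(DATA_SHARDS)]

def pool_for(function, default="webpool"):
    """Functional sharding: whole workloads go to dedicated machine pools."""
    return FUNCTIONAL_POOLS.get(function, default)

print(shard_for("en.wikipedia"), pool_for("search"))
```

The key property is that the mapping is deterministic: every server agrees on where a given subsite's data lives without any lookup table.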


The next popular technique they use to gain speed is replication. They have a master server for all writes and slave servers for most reads. The trick, they claim, in configuring the master and slave machines is to make sure the slaves are faster than the masters: the slaves need to keep up with the master's write stream while also serving reads, so they must handle writes faster than the master does.
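The read/write split can be sketched as a small router that sends every write to the single master and rotates reads across the slave replicas (server names here are made up):

```python
import itertools

class ReplicatedDB:
    """Route writes to the master, spread reads round-robin over slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = itertools.cycle(slaves)

    def route(self, statement):
        if statement.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.master           # writes always hit the master
        return next(self.slaves)         # reads rotate across slaves

db = ReplicatedDB("master1", ["slave1", "slave2"])
print(db.route("UPDATE page SET title = 'x'"))   # master1
print(db.route("SELECT * FROM page"))            # slave1
```

Note what this sketch leaves out: replication lag, which is exactly why the slaves must be fast enough to keep up with the master.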


As you can see, the beauty of the architecture is that it is SIMPLE and all open source, and it rocks!


Wednesday Oct 15, 2008

World Map of Social Networks

Interesting World Map of Social Networks Distribution.

Source: ValleyMag


Test Drive MySQL on Solaris 10 for FREE in EZQual Virtual Labs. No Installation, No Cost

MySQL is the de facto database of choice for most WebScale and Cloud Computing deployments.

Every day, when you go to websites like Facebook, Craigslist, eBay, Google, PriceGrabber, Yahoo!, and Zappos, you are touching a page that uses MySQL.
MySQL's popularity is due in large part to its flexibility. MySQL supports over 20 platforms and scales to handle terabytes of data. And, because MySQL is open source, it can be customized to an application's unique specifications. This flexibility has twofold benefits for ISVs: MySQL is better able to address their applications' specific needs, and it won't impose restrictions on their future development.

Through the Sun Partner Advantage Program (SPA), ISVs can now leverage Sun's entire portfolio of offerings, including MySQL. The SPA Program connects ISVs with free or deeply discounted technology offerings as well as marketing and sales engagement opportunities so they can deliver their solutions and services to an expanded market.

We have set up a secure, remote, online test facility called the EZQual Virtual Lab, designed to make it easier for ISVs to develop, test, and qualify applications on Solaris 10 or OpenSolaris and MySQL for free! ISVs can join and get their applications tested and ready for MySQL today in our EZQual lab for free. The EZQual Virtual Lab features pre-installed SPARC or x86 processor-based Sun servers with Sun Studio development tools, Java SE, Sun's Cool Stack, MySQL, Solaris 10, OpenSolaris, and more. These servers can be accessed conveniently with Sun's Secure Global Desktop software. This is as convenient as it gets: no servers to install, no OS installations; you're ready to go and can sign up today!

Monday Oct 06, 2008

Cloud Computing and Beyond: The Web Grows up (Finally!)

I recently attended a conference titled "Cloud Computing and Beyond: The Web Grows up (Finally!)", hosted by SDForum in Santa Clara. It was a good, informative conference, with views presented from the enterprise world (Sun, IBM, HP, SAP, Salesforce), from Cloud providers/vendors (Joyent, GoGrid, Nirvanix), from Cloud application providers (Mashery, PBWiki), and from several VCs.

Lew Tucker of Sun gave a fine presentation, calling out that one of the major drivers of Cloud Computing is Web APIs.

Russ Daniels of HP called out that the economics of Cloud Computing will have a huge influence on what Clouds do, and that the value will be in targeting new markets, driving higher margins, differentiation, and increased market share.

Jayashree of IBM talked about the IBM Cloud Computing centers around the world, which enable cloud providers to use IBM technologies to build clouds and provide feedback as well.

There was an interesting panel discussion on when to use the Cloud and when not to. Participants included Joyent, Nirvanix, Elastra, and eComputer. Most agreed that Cloud Computing changes the cost and consumption model for applications and services, moving from a capex to an opex model.

Some of the challenges of Cloud Computing discussed in another panel included lock-in to specific cloud providers' proprietary APIs, hence a need for Cloud API standardization. In the absence of such next-gen standard APIs, there is plenty of opportunity for startups here to deliver software that makes migration of applications from one cloud vendor (say, EC2) to another (say, Google App Engine) seamless.

The VCs showed confidence in Cloud Computing and suggested that it is here to stay. It is not necessarily the next revolution, but an evolution of internet technologies that have been around for decades, like Grid computing, and more recent ones like Utility computing and SaaS.

The bottom line was that those who learn to do Cloud Computing profitably will survive in the long term, especially in view of the economic recession.

Friday Oct 03, 2008

WebScale and CloudComputing Defined

Cloud Computing: There are multiple definitions of Cloud Computing; here is one from Forrester Research: "A pool of highly scalable, abstracted infrastructure, capable of hosting end-customer applications, that is billed by consumption." Another simple one, from Appistry: "Cloud computing consists of shared computing resources that are virtualized and accessed as a service, through an API, on a pay-to-use basis, delivered by IP-based connectivity, providing highly scalable, reliable on-demand services with agile management capabilities."

WebScale: I am not sure when and how this term was coined, but it is quite popular in the Sun marketing community. I like to define it as the segment of applications that need to scale to millions of users on the web, the likes of YouTube and Facebook. Such apps are increasingly being deployed in cloud computing environments, because many of them need to scale dynamically to unpredictable peak loads. A classic example is Animoto, which had to scale from 50 EC2 instances to 4500 instances within 3 days of launch, due to unexpectedly high demand from end users. At the other extreme, some web applications may not take off, in which case the application provider has no long-term commitment to the cloud hosting provider for leasing the infrastructure; the upfront deployment costs can thus be avoided.

Camping in the Cloud - An Unconference

Hi, this is Alka Gupta of Sun Microsystems. I work in the area of Cloud Computing and WebScale, helping align Sun's initiatives in this space from a partner perspective. On this blog, I plan to discuss subjects related to the industry buzz "Cloud Computing".

Recently I attended a very interesting event called CloudCamp. As the name suggests, it was an event where every effort was made to keep things informal and interactive. The organisers like to call it an "Unconference", in that there is no planned agenda. The attendees meet at a venue and volunteer topics for discussion. And bingo, very soon an "on-the-fly" agenda is generated.

CloudCamp Unconference Agenda

Let me talk a bit about the topic "What is a Cloud?", which seemed like one of the most popular discussions of the evening. We were about 50 people in a room, analysing and dissecting what Cloud Computing is supposed to be and how it differs from Grid and Utility Computing et al. Below are several attributes of a Cloud that were thrown out:

Cloud Computing is:

1. A virtualized environment that can elastically scale up and down on demand

2. Instant easy access to aggregation of resources

3. Common Interface (API) hypervisor

4. Outsourcing of Data, compute and infrastructure

5. Enables handling of peak demand

6. Grid is a multi-tenant environment where nodes are shared between applications. Cloud is a multi-tenant environment as well but users get a dedicated virtualised private set of nodes.

7. Results in Cloud vendor specific API lockin

8. Many startups will emerge to help seamless migration of apps from one cloud to another

9. There will likely be specialized clouds

10. No commitment, pay for it with a Credit Card on use basis.

11. It's yet another name for Utility Computing

12. 90% of the applications being deployed in the cloud are Web 2.0 apps; the rest are enterprise

13. Most large enterprise IT deployments in the future will be hybrid: partly private, partly in the cloud.

The punch line was: We need to get a lot more cloudy before we are cloudy enough. :-)

There were some other interesting sessions like:

  1. Cloud Storage
  2. Hadoop in Cloud
  3. Testing with Cloud
  4. Google App Engine
  5. Database engineering for Clouds
  6. Real performance numbers from EC2 and Google Apps





About

alkagupta
