Wednesday Aug 26, 2009

Cloudify your Enterprise Data Center: Two emerging models

With recent advancements and announcements in the industry, it's clear that there are two emerging models for taking an enterprise data center into the Clouds.

The first approach, the "private cloud", requires an enterprise to purchase "Cloud in a Box" software such as vCloud and vSphere, along with virtualization software, from vendors like VMware. VMware is going a few steps further up the stack with its acquisition of SpringSource, which will enable its existing and future customer base to seamlessly develop, deploy and manage applications in VMware-based Clouds. The private cloud is applicable when the cloud is confined to an enterprise-owned data center, and it provides a great way to scale existing virtualized customer data centers by adding the flexibility and utilization efficiencies of a Cloud. Vendors like Rackspace and GoGrid are building managed private Clouds for their enterprise customers using this approach.

While a private cloud offers the CIO the benefits of a Cloud architecture, unlocking resource management, utilization, and on-demand scaling capabilities, it still does not meet the goals of a pure Cloud: it offers only limited elasticity and does not eliminate capex. The enterprise still needs to own and manage all the resources. Werner Vogels has explained this very eloquently in his blog here. Nevertheless, it enables CIOs to better manage existing resources by means of metering, billing usage, and chargeback to other business units.

The second emerging approach is that of hybrid architectures, where enterprises extend their existing IT infrastructure to leverage the on-demand resources of an external cloud, thus adding scalability on demand. The enterprise continues to utilize its existing data center and augments it by offloading certain types of usage to an external cloud. It can also use this approach to handle occasional burst loads without having to over-provision its own infrastructure to meet peak demand. This approach is in line with Amazon's announcement of their Amazon Virtual Private Cloud (Amazon VPC). While it has its limitations, it's a solid first step in this direction.

The two approaches are complementary, and they can be combined to build hybrid Clouds where resources are moved between multiple Clouds seamlessly. However, I see a need for standardization of the protocols and APIs offered by clouds from different vendors before we can offer this level of flexibility to all users.





[Fig. from Wikipedia: Cloud Computing]

Monday Dec 22, 2008

2008 Wrapup: Top 10, What Say.

YouTube announces the Top 10 YouTube videos of all time! Guitar videos are at #3 and I am not surprised. I for one have been learning guitar online with YouTube videos! (Yeah, that's me in the photo :-)).

Also check the Top 10 Web Platforms, Semantic Web Products, Consumer Web Apps, Mobile Web Products and Enterprise Web Apps of 2008 as identified by ReadWriteWeb. While many on the list are no surprise (Facebook, OpenSocial, Google Apps, Twitter, AWS, Android, FireFox, Meebo, LinkedIn, Google Maps (Mobile), Y! Search Monkey), a few that caught my attention are: Pandora, Shazam, Fring, Brightkite, Ning, Hulu, Qik, Cooliris, Atlassian, DimDim, WordPress, Mindtouch, Dapper, Hakia, BooRah, Zemanta, Uptake and Zoho. I'll let you Google and explore them if you haven't been playing with them already.

While Apple has been identified as the Best BigCo. of 2008, Zoho has been identified as the Best LittleCo. of 2008. Zoho is an Indian startup that is likely to compete head-on with the likes of Microsoft Office and Google Apps with its office productivity suite, product management tools and CRM solutions.


Saturday Dec 20, 2008

Cloud Optimized Storage

Check out this pretty neat outline by Dave Graham of a Cloud Optimized Storage (COS) architecture/solution.

Also check out the Content Delivery Cloud, a very interesting term coined recently by Reuven Cohen.


Wednesday Nov 19, 2008

Response: Comment on "Save money with Open Storage"

This is in response to a comment on the previous post regarding the benchmark demonstrating no performance penalty for using a NAS storage device like AmberRoad instead of a local filesystem (LocalFS) for a Web 2.0 application. The comment stated:
"I respectfully disagree with your comment that there is no performance penalty with NFS... You show a 12% increase in Processing utilization which is an overall performance hit. You are doing approx. the same amount of work but chewing up more CPU... while your users are still the same, if scaled to 100% Util, NFS wouldn't allow as many users as LocalFS, because you are turning Blocks into packets into blocks, which is SLOW."
As per the subject matter experts at Sun, this is a fairly common argument around % idle. We don't know whether it is wrong in this case or not - but idle is not always an indicator of future performance or perceived headroom, and it is wrong to assume so. The reality is that you don't know how much more you can get from your system as configured unless you push it to do so. % idle is the wrong metric to focus on (a very common mistake). The metric that matters in this case is supporting concurrent users with response times for all transactions falling within the guidelines. The NAS solution does this just fine, for less money. Also, as per the detailed data here, the I/O latency reported by iostat is the same for both DAS and NAS - therefore the statement that NFS is slow is not backed up by evidence.

To the question of whether NFS I/O ultimately costs more: sure - but the missing point here is that it doesn't matter in this test. Might it hurt us down the road pushing to 100%? Maybe - but with a cheaper solution, and load scaled to 75% of the DAS solution, user response time was fine (this is what matters). And to extend the idle analogy: with over 25% of the CPU free, we believe it could scale another 800 users, equaling the DAS result. Once we arrange for hardware (a better network and drivers) to push the load beyond 2,400 users, we will know for sure. Please stay tuned.
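The headroom estimate above is simple proportional arithmetic; here is a minimal sketch of it (the helper function is hypothetical, not part of the benchmark, and it assumes user capacity scales linearly with CPU utilization, which real systems only approximate):

```python
# Back-of-the-envelope headroom estimate (hypothetical helper, not from
# the benchmark itself). Assumes throughput scales linearly with CPU
# utilization up to target_util - real systems only approximate this.

def estimated_headroom(current_users, current_util, target_util=1.0):
    """Extra users the system could absorb if capacity scaled
    linearly with CPU utilization up to target_util."""
    return int(current_users * (target_util - current_util) / current_util)

# NAS run: 2,400 users at roughly 75% utilization (>25% CPU free)
print(estimated_headroom(2400, 0.75))  # 800 additional users
```

This is exactly the "another 800 users" figure: 2,400 users consumed 75% of the CPU, so the remaining 25% corresponds to about a third more load.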

Lastly, do most customer environments really run at 100%?  Will the purported "NAS penalty" really affect them?  This shows that even if they run at 75% (pretty high for most environments I've seen), it's not an issue. HTH.

Tuesday Nov 18, 2008

Save money with Open Storage

Open Storage helps you save time and money for Web-scale Applications.

Want Proof?

Check out this excellent benchmark for Web 2.0 workloads run on Sun Storage 7410 (aka AmberRoad) and CMT-based Sun Fire T5120 servers.

It's also evident from the benchmark data that you don't suffer a performance penalty for using NAS. There is a fairly common impression that NAS performance would be slower than DAS; this data shows that it's just not true in this environment.

System          Storage                                  Ch, Cr, Th  GHz  Processor      Users  Util  RU  Watts/user  Users/RU
Sun Fire T5120  Sun Storage 7410 Unified storage (NFS)   1, 8, 64    1.4  UltraSPARC T2  2,400  72%   1   0.20        2,400
Sun Fire T5120  LocalFS                                  1, 8, 64    1.4  UltraSPARC T2  2,400  60%   1   0.20
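The efficiency columns are simple ratios over the raw capacity figures; a minimal sketch of how they fall out (the ~480 W total draw is an assumption back-computed from the 0.20 W/user figure, not a separately published number):

```python
def per_user_metrics(users, total_watts, rack_units):
    """Derive the benchmark table's efficiency columns from raw figures."""
    return {
        "watts_per_user": total_watts / users,
        "users_per_ru": users / rack_units,
    }

# T5120 + 7410 (NFS) row: 2,400 users in 1 RU, assumed ~480 W total draw
m = per_user_metrics(users=2400, total_watts=480, rack_units=1)
print(m["watts_per_user"], m["users_per_ru"])  # 0.2 2400.0
```

In other words, the table's 0.20 W/user and 2,400 users/RU are density metrics: more users per watt and per rack unit means a cheaper, denser deployment at the same response-time target.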




