Thursday Oct 01, 2009

The Hybrid Distributed Data Center -er- Cloud?

One thing is for sure -- the cloud computing model has affected IT. Some of that change is still occurring, some of it is maturing, and other areas seem stuck -- and may never be solved. But I consider the three factors below to be influencers beyond the cloud buzz and hype:

The Cloud Effect on EVERY IT shop...


1) the effect on time to market: users of IT won't wait three weeks to get a VM or hardware installed, much less their "stack" configured


2) packaging: speaking of stacks -- virtual machine images have won out as the preferred packaging element -- for good or ill.


3) control of resources: this will continue to be a struggle between organizational units within an enterprise, but the developer is gaining more control -- IT administration may be able to wiggle some of it back if they deal with #1 and #2 above.

Towards the Hybrid Data Center...

So where are we on the cloud and the data center beyond these key elements? The cloud is changing, and will continue to change, how we view data centers and the services deployed within them. It is a new model with concerns around business and IT risk at the forefront rather than after the fact, as "provisioning" and time to market get solved by the adoption of cloud-like models. As IT users continue to adopt these models (API-driven, self-provisioning, VMIs, etc.), they will start to form a more distributed model of the data center as we know it.


Risk profiles...


At the lowest level (let's call it layer 3) sit the major concerns about how you operate IT when your business relies upon it. Risk concerns, mitigation, tradeoffs, etc. will continue to be the common phrases. If I am running a $50B business today and I need these DBs to be up all the time with zero downtime, I have a different risk profile than someone just starting out with one customer -- their spouse.


Many large-scale enterprises are concerned but have much of this under control in their own data centers or via hosting providers. Services around these core elements may grow and shrink, but core systems remain pretty stable -- think OSS/BSS (BILLING!) for a telecom. You don't mess with billing.


But what about application distribution and provisioning for my platforms: phones, home media devices, video, etc.? I'll be able to bill you -- we covered that in risk profile 3. But how do I scale and offer new, distinctive services, and do so quickly and globally?


[Figure: simplified next-generation data center model]

Layer 3 Attributes:


-- Managed risk profile


-- high level of compliance/reporting


-- less refactoring because of risk


-- tried and true -- appliances, vertically scaled, etc.


-- "platinum" support




"I'll probably use the cloud."



or at least have some of my services on the cloud -- either privately within my own DCs or, increasingly, in someone else's. My core data platforms will be in my own data centers, with my compliance rules and corporate governance laws, but I might have some needs for flexibility (and for scale beyond the core) that push me toward different models of service delivery. Maybe I'm running out of space or cooling -- every data center has a TTL (time to live). Take a look at NCompass Inc.'s health check-up -- what's your DC's TTL?


You will require the flexibility to host applications and their data across data centers, providers, and continents. Layer 2 starts to become more dynamic: apps are updated, scaled, etc. on the fly. The risk profile is lower since much of the critical data is held back in layer 3.



While layer 3 may favor risk-averse, solid solutions, layer 2 and above may have more flexibility. Since global reach and scale are among its attributes, it may make more sense here to look at open source (license-friendly) solutions, which can be more agile and are certainly more cost effective.


[Figure: next-generation data center model, snapshot]

Some Layer 2 Attributes:


-- global scale/reliability factor


-- cost/performance


-- dynamic -- the ability to change the overall footprint dynamically


-- replicated & repeatable -- "virtual appliances"


-- replicas/copies of data (COHERENCY)


-- "bronze" support contracts, more open source


Clouds become CDNs? Clients?



Layer 1 is a content delivery network -- think Akamai -- but I wonder how this changes if you have many more data centers, etc. in the mix at layer 2. Functionally, though, it is a caching layer for content that needs fast access and global availability.



So if I put SSDs and memcached on everything, what do I need here?
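A minimal sketch of what that caching tier could look like, using the open-source spymemcached client for Java. The host, port, key name, and the fetchFromOrigin helper are illustrative assumptions, not a prescription:

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class EdgeCache {
        public static void main(String[] args) throws Exception {
            // Connect to a memcached node -- host/port are placeholders
            MemcachedClient cache = new MemcachedClient(
                    new InetSocketAddress("cache01.example.com", 11211));

            String key = "content:/video/promo/meta";

            // Try the cache first; fall back to the layer 2 origin on a miss
            Object meta = cache.get(key);
            if (meta == null) {
                meta = fetchFromOrigin(key);   // hypothetical origin call
                cache.set(key, 300, meta);     // cache for 300 seconds
            }

            System.out.println("served: " + meta);
            cache.shutdown();
        }

        // Stand-in for a call back to the origin service at layer 2
        private static Object fetchFromOrigin(String key) {
            return "metadata-for-" + key;
        }
    }

SSDs make the origin faster, but a cache like this still earns its keep at the edge: whatever must sit close to the user stays there regardless of how fast the origin gets.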


Clients?


The client layer is increasingly important -- GSLB and DNS as a strategy are limited at best. Better are P2P technologies that may help with service discovery and quality of service. We are starting to see this tier changing in terms of IDEs and other "management"-related products -- loose coupling of resources, discover/re-discover versus hard-coded IPs, etc. What else needs to happen here? Thoughts?
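To make "discover/re-discover versus hard-coded IPs" concrete, here is a minimal sketch in plain Java of a client that learns service endpoints from periodic multicast announcements. The group address, port, and message format are all assumptions for illustration, not any particular product's protocol:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Listens for "serviceName=host:port" announcements and keeps a live map,
    // so callers look up endpoints by name instead of pinning IP addresses.
    public class DiscoveryListener {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.255.42.99"); // assumed group
            MulticastSocket socket = new MulticastSocket(4446);          // assumed port
            socket.joinGroup(group);

            Map<String, String> services = new ConcurrentHashMap<String, String>();
            byte[] buf = new byte[512];

            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String announcement = new String(packet.getData(), 0, packet.getLength());

                // e.g. "billing-api=10.0.1.17:8080" -- the format is an assumption
                String[] parts = announcement.split("=", 2);
                if (parts.length == 2) {
                    services.put(parts[0], parts[1]); // latest wins: endpoints move
                    System.out.println("discovered " + parts[0] + " at " + parts[1]);
                }
            }
        }
    }

Services that announce themselves can move, scale, or be redeployed without every client needing a configuration change.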



UPDATE


Please see http://archmatters.wordpress.com for further thoughts.

Monday Aug 31, 2009

Cloud Urban Planning via Virtual Appliance Stacks Update

There's an increasing trend in the compute virtualization space to package apps with OS images as "virtual appliances." A great example is VMware's virtual appliance library. As the worlds of private, cloud-like virtualization and public cloud computing continue to merge, looking at how we package and deploy apps is important.


[Figure: cloud types]


Many enterprise IT shops may not get involved with basic cloud deployments on Amazon and the like -- and frankly may not even be aware of them. Large-scale custom architectures are typically well thought out and have lots of support from vendors and professional services -- they are high risk and deserve to be treated as such. But is everything custom?



This is the struggle -- the "urban planning" of the data center, such that applications are zoned and deployed in environments created by TYPE versus single purpose. This has been the holy grail for some time, and projects like Sun's Dynamic Infrastructure, HP's Agile, etc. attempt to address it. Increasingly, cloud-like thinking is starting to come into play with so-called "private clouds." But this is really the same problem that utility computing et al. has been trying to solve for a long time.


[Figure: from servers to appliances to the cloud]


So how do we move forward? How do we start to package, restrict, automate, provision, change, and adapt our applications in a model that makes sense in a very dynamically paced environment, while maintaining some level of control over data, privacy, security, etc.?


Does it make sense to create this urban planning zone as a place to deploy these virtualized appliance stacks, as a way to help standardize and deploy apps? What kind of "auto-deploy" features could you leverage so the entire stack didn't change every time you changed some code? We know the cloud model prefers to re-deploy versus patch -- going back to the virtual machine image "master", making the changes, re-versioning the deployment, and hitting the deploy switch.
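A rough sketch of that loop, to make it concrete. The CloudApi class and its methods are entirely hypothetical stand-ins, not a real product's API:

    // Hypothetical sketch of "re-deploy, don't patch": bake the change into a
    // new version of the master image, roll it out, and retire the old one.
    public class RedeployNotPatch {
        public static void main(String[] args) {
            CloudApi cloud = new CloudApi();                      // hypothetical client

            String image = cloud.buildImage("appstack-master", "v42"); // bake change in
            cloud.deploy(image, 8);                               // roll out 8 instances
            cloud.retire("appstack-master-v41");                  // retire, don't patch
        }
    }

    // Minimal stand-in so the sketch is self-contained.
    class CloudApi {
        String buildImage(String base, String tag) {
            System.out.println("building " + base + ":" + tag);
            return base + "-" + tag;
        }
        void deploy(String image, int count) {
            System.out.println("deploying " + count + " x " + image);
        }
        void retire(String image) {
            System.out.println("retiring " + image);
        }
    }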



We could create virtual appliance stacks as a way to provide higher-order application functionality -- e.g., app-state clustering and other HA models -- as well as better control of standards in the environment. The public cloud can be a free-for-all, but we don't necessarily want that in production.


A couple of examples...


[Figure: virtual data center to service delivery network, logical view]

-- IDE to the cloud -- app stack appliances with a standard operating environment (for more on SOE, look at the DCRG here) for application deployment -- e.g., GlassFish or JEE or some other stack -- where the IDE attaches and deploys to the stack. The stack exists in an environment that provides an SLA; if you don't want that SLA, you choose a different "zoning." As the diagram above points out, this may even include a higher-order abstraction: application elements (cluster instances) working together as a clustered app tier may require you to deploy apps only to the cluster manager, which then manages the instances.
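For reference, the standard JSR-88 deployment API is roughly what IDEs of this era use to push an archive to a Java EE stack like GlassFish. This is a hedged sketch: the deployer URI scheme is server-specific, and the credentials and file name are placeholders:

    import java.io.File;
    import javax.enterprise.deploy.shared.factories.DeploymentFactoryManager;
    import javax.enterprise.deploy.spi.DeploymentManager;
    import javax.enterprise.deploy.spi.Target;
    import javax.enterprise.deploy.spi.status.ProgressObject;

    public class DeployToStack {
        public static void main(String[] args) throws Exception {
            // URI, user, and password are placeholders; each server defines its own
            DeploymentManager dm = DeploymentFactoryManager.getInstance()
                    .getDeploymentManager("deployer:example://admin-host:4848",
                                          "admin", "secret");

            Target[] targets = dm.getTargets();  // e.g. the cluster, not each instance
            File archive = new File("myapp.war");

            // Distribute the archive to the targets, then start it everywhere
            ProgressObject po = dm.distribute(targets, archive, null);
            while (po.getDeploymentStatus().isRunning()) {
                Thread.sleep(500);               // naive polling, fine for a sketch
            }
            dm.start(po.getResultTargetModuleIDs());
            dm.release();
        }
    }

The cluster-manager abstraction shows up in getTargets(): the IDE deploys to one logical target and the environment fans it out to the instances.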


[Figure: hybrid PaaS management]

-- App and enterprise management to the cloud -- lots of tools exist, like Oracle's Enterprise Manager suite of products, that offer application management. These tools should be able to discover and attach to instances in this environment, giving the administrator an access point for higher-order admin functions -- e.g., setting up clustering, DB configs, etc. But the basics of configuring a system via the command line, etc. should be taken care of by the deployment process.


(NOTE to those who build these types of products -- notice I said discover, and remember that things change from time to time, so a static IP address mapping is not sufficient. :) )
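A minimal sketch of the "discover and attach" side using the standard JMX remoting API. The host and port here are placeholders that would come from a discovery step (like the multicast sketch earlier), not from a static IP mapping:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class AttachAndInspect {
        public static void main(String[] args) throws Exception {
            // host:port should come from discovery, not a hard-coded mapping
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://discovered-host:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // Enumerate what the instance actually exposes, rather than assuming it
            Set<ObjectName> names = mbsc.queryNames(null, null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
            connector.close();
        }
    }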

Monday Aug 03, 2009

Evolving SDNA and Virtual Data Center Concepts

Cloud computing continues to evolve. We are seeing more and more critical apps deployed onto, and managed via, cloud infrastructures and platforms. A great example is RightScale's announcement a few weeks ago that it has kicked off more than 500,000 cloud VMs.


Another aspect of the evolution is moving beyond the VM to grouping servers and services together and treating them as a virtual data center. Sun's Qlayer acquisition in January provided a unique view of, and model for, how these resources can be grouped and managed together. This shifts the lifecycle to managing many if not all aspects of a deployment, versus a box or a cluster at a time. Typically during a major upgrade, code changes, but so do DB tables, presentation-tier components, etc. How do you manage the lifecycle of that change versus a component at a time?
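To make "manage the whole deployment, not a box at a time" concrete, here is a small sketch of a versioned manifest that groups everything that changes together in an upgrade. The class and component names are invented for illustration:

    import java.util.Arrays;
    import java.util.List;

    // Invented for illustration: one versioned unit covering the app code, DB
    // schema, and presentation tier, upgraded or rolled back together.
    public class VdcManifest {
        final String version;
        final List<String> components;

        VdcManifest(String version, List<String> components) {
            this.version = version;
            this.components = components;
        }

        public static void main(String[] args) {
            VdcManifest v12 = new VdcManifest("12", Arrays.asList(
                    "app-code:build-481",
                    "db-schema:migration-037",
                    "presentation-tier:theme-2009Q3"));

            // Upgrade (or roll back) the manifest as one unit
            System.out.println("deploying VDC version " + v12.version
                    + " -> " + v12.components);
        }
    }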


[Figure: VDC versions]

(NOTE: SDC = Service Delivery Controller; see the blog below for more info.)

Using VDCs, services can be deployed within a VDC and eventually even span multiple data centers. A VDC might implement a single business service, very much like the Service Delivery Network Architecture (SDNA) service module concepts (NOTE: PDF) detailed here several years ago. As we continue to evolve and release new technologies, the focus should be on the lifecycle -- and the first step is to realize that every data center service out there is transient and has a "time to live" (TTL).


Look for more "evolution" on the SDNA front at Sun's Blueprints program blog.

Monday Jun 01, 2009

New Cloud Architecture Paper Online

Many of us at Sun have worked on a new whitepaper around building cloud architectures for enterprises and service providers. This paper can be downloaded here.

It covers some nice "cloudy" principles:
-- Evolving application architectures
-- Changing approaches to architecture
-- Changing application designs
-- The goals remain the same
-- Consistent and stable abstraction layer
-- Standards help to address complexity
-- Loose-coupled, stateless, fail-in-place computing
-- Horizontal scaling
-- Parallelization
-- Divide and conquer
-- Data physics
-- The relationship between data and processing
-- Programming strategies
-- Compliance and data physics
-- Security and data physics
-- Network security practices

About

Thoughts from Jason Carolan -- Distinguished Engineer @ Sun and Global Systems Engineering Director - http://twitter.com/jtcarolan - http://archmatters.wordpress.com
