The Hybrid Distributed Data Center -er- Cloud?

One thing is for sure -- the cloud computing model has affected IT. Some of that effect is still playing out, some of it is maturing, and other areas seem stuck -- and may never be solved. But I consider the three factors below to be influencers beyond the cloud buzz and hype:

The Cloud Effect on EVERY IT shop...


1) the effect on time to market: users of IT won't wait three weeks to get a VM or hardware installed, much less their "stack" configured


2) packaging: speaking of stacks -- virtual machine images have won out as the preferred packaging element, for better or worse.


3) control of resources: this will continue to be a struggle between organizational units within an enterprise, but the developer is gaining more control -- IT administration may be able to win some of it back if they deal with #1 and #2 above.

Towards the Hybrid Data Center...

So where are we on the cloud and the data center beyond these key elements? The cloud is changing, and will continue to change, how we view data centers and the services deployed within them. As "provisioning" and time to market get solved by the adoption of cloud-like models, a new model emerges with business and IT risk concerns at the forefront rather than as an afterthought. And as IT users continue to adopt these models (API-driven, self-provisioning, virtual machine images, etc.), the data center as we know it starts to take a more distributed form.
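To make "API-driven, self-provisioning" concrete: it means a few lines of code instead of a ticket and a three-week wait. Below is a minimal sketch in Python, assuming a hypothetical REST provisioning endpoint -- the URL, token, payload fields, and image name are placeholders, not any particular provider's API.

import time
import requests

API = "https://cloud.example.com/api/v1"            # hypothetical provider endpoint
HEADERS = {"Authorization": "Bearer MY_API_TOKEN"}  # placeholder credential

def provision(image, size):
    """Request a VM built from a pre-packaged image and wait until it runs."""
    resp = requests.post(f"{API}/instances", headers=HEADERS,
                         json={"image": image, "size": size})
    resp.raise_for_status()
    instance_id = resp.json()["id"]
    while True:                                     # poll instead of filing a ticket
        state = requests.get(f"{API}/instances/{instance_id}",
                             headers=HEADERS).json()["state"]
        if state == "running":
            return instance_id
        time.sleep(10)

vm = provision(image="appserver-stack-1.2", size="medium")

The developer describes the "stack" (a virtual machine image, per #2), posts it, and polls; the provider does the rest.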


Risk profiles...


At the lowest level (let's call it layer 3) are the major concerns about how you operate IT when your business relies upon it. Risk, mitigation, tradeoffs, etc. will continue to be the common phrases. If I am running a $50B business today and need these databases to be up all the time with zero downtime, I have a different risk profile than someone just starting out with one customer -- their spouse.


Many large-scale enterprises are concerned, but they have much of this under control in their own data centers or via hosting providers. Services around these core elements may grow and shrink, but the core systems remain pretty stable -- think OSS/BSS (BILLING!) for a telecom. You don't mess with billing.


But what about application distribution and provisioning for my platforms -- phones, home media devices, video, etc.? I'll be able to bill you; we covered that in risk profile 3. But how do I scale, offer new, distinctive services, and do so quickly and globally?


[Figure: ng_dc_model01_simp_nolab.jpg]

Layer 3 Attributes:


-- Managed risk profile
-- High level of compliance/reporting
-- Less refactoring because of risk
-- Tried and true -- appliances, vertically scaled, etc.
-- "Platinum" support



--

"I'll probably use the cloud."



or at least have some of my services on the cloud -- either privately within my own DCs or, increasingly, in someone else's. My core data platforms will stay in my own data centers, under my compliance rules and corporate governance, but I may have flexibility needs (and needs to scale beyond the core) that push me to different models of service delivery. Maybe I'm running out of space or cooling -- every data center has a TTL (time to live). Take a look at NCompass Inc.'s health check: what's your DC's TTL?


You will require the flexibility to host applications and their data across data centers, providers, and continents. Layer 2 starts to become more dynamic: apps are updated, scaled, etc. on the fly. The risk profile is lower since much of the critical data is held back in layer 3.
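As a sketch of what "dynamic" means here (the site names, capacities, and policy fields below are made up for illustration), a layer-2 app's footprint becomes data: a placement decision computed against whatever capacity exists, while anything marked layer 3 is simply never eligible.

SITES = {
    "us-east": {"layer": 2, "free_slots": 40},
    "eu-west": {"layer": 2, "free_slots": 12},
    "apac":    {"layer": 2, "free_slots": 25},
    "core-dc": {"layer": 3, "free_slots": 5},   # core data platforms stay here
}

APP = {
    "name": "media-frontend",
    "replicas_per_region": 2,
    "allowed_layers": [2],                      # never scheduled onto the layer-3 core
    "regions": ["us-east", "eu-west", "apac"],
}

def place(app, sites):
    """Return the app's footprint: which sites get how many replicas."""
    footprint = {}
    for region in app["regions"]:
        site = sites[region]
        if site["layer"] in app["allowed_layers"] and \
           site["free_slots"] >= app["replicas_per_region"]:
            footprint[region] = app["replicas_per_region"]
    return footprint

print(place(APP, SITES))    # {'us-east': 2, 'eu-west': 2, 'apac': 2}

Re-running the same decision as capacity, cost, or demand changes is what lets the overall footprint grow and shrink without touching the layer-3 core.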



While layer 3 may be about risk-averse, solid solutions, layer 2 and above may have more flexibility. Since global reach and scale are among its attributes, it may make more sense to look at open source (license-friendly) solutions, which may be more agile and are certainly more cost effective.


[Figure: ng_dc_model_snap01.jpg]

Some Layer 2 Attributes:


-- Global scale/reliability factor
-- Cost/performance
-- Dynamic -- the ability to change the overall footprint dynamically
-- Replicated & repeatable -- "virtual appliances"
-- Replicas/copies of data (coherency -- see the sketch after this list)
-- "Bronze" support contracts, more open source


Clouds become CDNs? Clients?



Layer 1 is a content delivery network -- think Akamai, though I wonder how this changes if you have many more data centers in the mix at layer 2. Functionally, though, it is a caching layer for content that needs fast access and global availability.



So if I put SSDs and memcached on everything, what do I need here?
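For reference, the memcached piece of that question looks roughly like the cache-aside pattern below -- a sketch assuming the pymemcache client and a memcached daemon on localhost; the key name and origin-fetch function are placeholders.

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def fetch_from_origin(key):
    # Placeholder for the slow path back to a layer-2/3 data center.
    return b"content for " + key.encode()

def get_content(key, ttl=300):
    """Serve hot content from the local cache; fill it from the origin on a miss."""
    value = cache.get(key)
    if value is None:                       # cache miss
        value = fetch_from_origin(key)
        cache.set(key, value, expire=ttl)   # keep it warm for the next request
    return value

print(get_content("video:intro:manifest"))

Which is the point of the question: once every node can cache cheaply, what does a dedicated layer 1 still add?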


Clients?


The client layer is increasingly important -- GSLB and DNS as a strategy are limited at best. Better are P2P technologies that may help with service discovery and quality of service. We are starting to see this tier changing in terms of IDEs and other "management"-related products -- loose coupling of resources, discover/re-discover vs. hard-coded IPs, etc. What else needs to happen here? Thoughts?
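One small, standard-library example of the "discover/re-discover vs. hard-coded IPs" idea (the service name and port are placeholders): resolve the name on every connection attempt so the client follows the service as its footprint moves.

import socket

SERVICE = "media-frontend.example.com"    # hypothetical service name
PORT = 8080

def connect(service, port, attempts=3):
    for _ in range(attempts):
        # Fresh lookup each attempt -- the answer may change as the footprint moves.
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                service, port, type=socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(2.0)
                sock.connect(addr)
                return sock                  # first reachable endpoint wins
            except OSError:
                continue                     # try the next discovered address
    raise ConnectionError(f"no reachable endpoint for {service}:{port}")

# sock = connect(SERVICE, PORT)    # placeholder host, so left commented out

P2P-style discovery would replace the DNS lookup with something richer (health, locality, QoS), but the client-side habit is the same: bind to a name and a policy, not an address.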



UPDATE


Please see http://archmatters.wordpress.com for further thoughts.

About

Thoughts from Jason Carolan -- Distinguished Engineer @ Sun and Global Systems Engineering Director - http://twitter.com/jtcarolan - http://archmatters.wordpress.com
