Monday Aug 31, 2009

Cloud Urban Planning via Virtual Appliance Stacks Update

There's an increasing trend in the compute virtualization space to package apps with OS images as "virtual appliances." A great example is VMware's virtual appliance library. As the worlds of private, cloud-like virtualization and public cloud computing start to merge, it's worth looking at how we package and deploy apps.


[Image: cloud types graphic]


Many enterprise IT shops may not get involved with basic cloud deployments on Amazon and the like -- and frankly may not even be aware of them. Large-scale custom architectures are typically well thought out and have lots of support from vendors and professional services -- they are high risk and deserve to be treated as such. But is everything custom?



This is the struggle -- the "urban planning" of the data center, such that applications are zoned and deployed in environments defined by type rather than built for a single purpose. This has been the holy grail for some time, and projects like Sun's Dynamic Infrastructure, HP's Agile, etc. attempt to address it. Increasingly, cloud-like thinking is starting to come into play with so-called "private clouds." But this is really the same problem that utility computing et al. has been trying to solve for a long time.


[Image: server to appliance to cloud]


So how do we move forward? How do we start to package, restrict, automate, provision, change, and adapt our applications in a model that makes sense in a very dynamically paced environment, while maintaining some level of control over data, privacy, security, etc.?


Does it make sense to create this urban-planning zone as a place to deploy these virtualized appliance stacks, as a way to help standardize and deploy apps? What kind of "auto-deploy" features could you leverage so the entire stack didn't change every time you changed some code? We know the cloud model prefers to re-deploy rather than patch -- going back to the virtual machine image "master," making the changes, re-versioning the deployment, and hitting the deploy switch.
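To make that re-deploy-over-patch cycle concrete, here's a minimal sketch -- the image names, version scheme, and redeploy step are all hypothetical, not any particular vendor's tooling:

```python
# Sketch of "re-deploy, don't patch": changes go into a new master image
# version, and running instances are replaced rather than patched in place.
def build_master_image(base_image, app_bundle, version):
    """Bake the app bundle into a fresh copy of the master image (stubbed)."""
    return f"{base_image}-{app_bundle}-v{version}"

def redeploy(old_instances, image_id):
    """Stand up instances of the new image, then retire the old ones."""
    new_instances = [f"{image_id}-inst{i}" for i in range(len(old_instances))]
    return new_instances, list(old_instances)

image = build_master_image("appliance-base", "storefront", version=7)
live, retired = redeploy(["storefront-v6-inst0", "storefront-v6-inst1"], image)
print(image, live, retired)
```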



We could create virtual appliance stacks as a way to provide higher-order application functionality -- e.g. app-state clustering and other HA models -- as well as better control of standards in the environment. The public cloud can be a free-for-all, but we don't necessarily want that in production.
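As a strawman, the appliance stack definition itself could carry those higher-order pieces -- clustering, HA policy, the standards it conforms to -- alongside the image. The field names below are invented for illustration:

```python
# Hypothetical appliance-stack descriptor: the stack, not the admin,
# declares its clustering/HA behavior and the standards it conforms to.
appliance_stack = {
    "name": "web-tier",
    "image": "soe-glassfish-v2",        # standard operating environment image
    "cluster": {"min_instances": 2, "max_instances": 8, "state": "replicated"},
    "ha_policy": "restart-then-failover",
    "standards": ["SOE-2009.1", "hardened-os-baseline"],
}
```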


A couple of examples...


[Image: VDC to SDN logical view]

-- IDE to the cloud -- app stack appliances with a standard operating environment (for more on SOE, look at the DCRG here) for application deployment -- e.g. GlassFish, JEE, or some other stack, where the IDE attaches and deploys to the stack. The stack exists in an environment that provides an SLA -- if you don't want that SLA, you choose a different "zoning." As the diagram above points out, this may even include a higher-order abstraction: application elements (cluster instances) working together as a clustered app tier may require you to deploy apps only to the cluster manager, which then helps manage the instances.
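A rough sketch of that flow (the zone names, SLA tiers, and cluster-manager call are all assumptions for illustration): the IDE pushes the app to whichever zone matches the requested SLA, and that zone's cluster manager fans it out to its instances.

```python
# Hypothetical "IDE to the cloud" deploy: pick a zone by SLA, hand the
# application archive to that zone's cluster manager, and let it manage
# the member instances.
ZONES = {
    "gold":   {"sla": "99.99%", "cluster_manager": "cm-gold.example.internal"},
    "bronze": {"sla": "99.5%",  "cluster_manager": "cm-bronze.example.internal"},
}

def deploy_from_ide(app_archive, sla_tier):
    zone = ZONES[sla_tier]
    # a real IDE plugin would call the cluster manager's deploy API here;
    # the manager then pushes the app out to each cluster instance it owns
    return f"deployed {app_archive} via {zone['cluster_manager']} (SLA {zone['sla']})"

print(deploy_from_ide("storefront.war", "gold"))
```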


[Image: hybrid PaaS management]

-- App and enterprise management to the cloud -- lots of tools exist, like Oracle's Enterprise Manager suite of products, that offer application management. These tools should be able to discover and attach to instances in this environment, giving the administrator an access point for higher-order functions -- e.g. setting up clustering, DB configs, etc. But the basics of configuring a system via the command line should be taken care of by the deployment process.


(NOTE to those who build these types of products -- notice I said discover; remember things can change from time to time, so IP address mapping is not sufficient. :) )
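In other words, the management tool should key off a stable instance identity and rediscover the endpoint each time it attaches. A minimal sketch, with made-up names:

```python
# Sketch: track managed instances by a stable ID and rediscover their
# current address on each attach, instead of assuming a fixed IP mapping.
inventory = {}   # instance_id -> last known endpoint

def discover(instance_id, current_endpoint):
    """Record (or refresh) where this instance can be reached right now."""
    inventory[instance_id] = current_endpoint

def attach(instance_id):
    endpoint = inventory.get(instance_id)
    if endpoint is None:
        raise LookupError(f"{instance_id} not discovered yet")
    return f"managing {instance_id} at {endpoint}"

discover("app-tier-42", "10.0.3.17:7070")
discover("app-tier-42", "10.0.9.4:7070")   # moved; the IP changed, the ID did not
print(attach("app-tier-42"))
```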

Monday Aug 03, 2009

Evolving SDNA and Virtual Data Center Concepts

Cloud computing continues to evolve. We are seeing more and more critical apps deployed onto and managed via cloud infrastructures and platforms. A great example is RightScale's announcement a few weeks ago that they have kicked off more than 500,000 cloud VMs.


Another aspect of the evolution is moving beyond the VM to grouping servers and services together and treating them as a virtual data center. Sun's Qlayer acquisition in January provided a unique view and model of how these resources can be grouped and managed together. This shifts the lifecycle to managing many if not all aspects of a deployment, versus a box or cluster at a time. Typically during a major upgrade, code changes, but so do DB tables, presentation-tier components, etc. How do you manage the lifecycle of this change versus a component at a time?
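One way to picture this is a version number on the VDC itself, with the DB schema, app tier, and presentation tier all recorded under it. A made-up manifest, just to show the idea:

```python
# Hypothetical VDC version manifest: the unit of change is the whole
# virtual data center, not an individual box or cluster.
vdc_v12 = {
    "vdc": "order-processing",
    "version": 12,
    "db":           {"schema": "orders-schema-v5"},
    "app_tier":     {"image": "order-svc-v12", "instances": 6},
    "presentation": {"image": "web-frontend-v9", "instances": 4},
}
# An upgrade produces a v13 manifest as a whole; rollback is simply
# re-deploying the previous manifest.
```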


[Image: VDC versions]

(NOTE: SDC = Service Delivery Controller; see the blog below for more info.)

Using VDCs, services can be deployed within a VDC, and eventually even span multiple data centers. A VDC might implement a single business service, very much like the Service Delivery Network Architecture (SDNA) service module (NOTE: PDF) concepts detailed here several years ago. As we continue to evolve and release new technologies, the focus should be on the lifecycle -- and the first step is to realize that every data center service out there is transient and has a "time to live" (TTL).


Look for more "evolution" on the SDNA front at Sun's Blueprints program blog.

Sunday May 31, 2009

Lots of Clouds at CommunityOne West

Preparing to speak with Robert Holt at CommunityOne West in San Fran.

Check it out -- lots of cloud talks, a great keynote from Lew Tucker and Dave Douglas, and of course you might be able to catch our webcast on OpenSolaris DSC (Dynamic Service Containers), with Nimsoft's service level management helping us out. Will post the slides soon.

UPDATE:

Slides posted here.

Friday Apr 24, 2009

Speaking at Cloud Event in Denver

Statera is putting on a cloud panel in Denver next week, April 30 at the Broomfield Omni. If you get a chance come hear cloud perspectives from Sun, Salesforce, Google, and Microsoft.

Stolen from Statera's event website, here are some of the issues addressed:

COULD YOU BENEFIT FROM ANY OF THESE CLOUD COMPUTING ADVANTAGES?

* Reduced Cost - Cloud technology is paid for incrementally, saving organizations money.

* Increased Storage - Organizations can store more data than on private computer systems.

* Highly Automated - No longer do IT personnel need to worry about keeping software up to date.

* Flexibility - Cloud computing offers much more flexibility than past computing methods.

* More Mobility - Employees can access information wherever they are, rather than having to remain at their desks.

* Allows IT to Shift Focus - No longer having to worry about constant server updates and other computing issues, government organizations will be free to concentrate on innovation.

Hope you can come!

Tuesday Apr 14, 2009

Cloud Management - A Continuous Perspective

As I think about managing resources and services in the cloud (and yes, I realize that one person's resource is another's service, but...), there are a couple of thoughts I've been sharing with folks at Sun and with our customers.

Clouds are a great example of applying "continuous architecture." Continuous architecture is a subject that has a basis in architecture, but it's really the intersection of biological thinking and the human task of creation. It's the notion of complex adaptive systems applied to something that historically might have been viewed as static (though buildings and houses are anything but).

In the cloud, static really doesn't apply. A workload might be on one server one minute, on many the next, or on another in the next instant. This may or may not be apparent to the "developer" or service designer, or even to the operator of the cloud -- in fact it's most likely not.

So how is this accomplished? For years we've thought (via Dynamic Infrastructure, JxSON, and other projects) that workloads are a composite of elements -- to run, they require VLANs, IPs, CPU, storage, etc. As computer system technology continues to mature, so do the features, especially around the deployment, management, and virtualization technologies that enable much of the cloud today. For anyone that uses Grid Engine or other grid/HPC applications, this is old school: there's a resource manager, resources are mapped by the manager process, and jobs run and report back results.

But how does one assemble a bunch of virtualized resources that need to be brought together to meet the needs of a service? Are all the sub-elements assembled via a model and "put on the stack," or do you do this at runtime? How long does this process take? How does a service know when to return its composite resources back to the "pool"? Does it have a default time to live? The model map of the resources needs to ensure that they are providing the level of service the service requires.
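A toy model of that assembly-and-return question, with pool sizes and names invented for illustration: the service draws its composite elements from a shared pool up front and hands them back when its TTL runs out.

```python
import time

# Toy composite-resource model: a service draws VLANs, IPs, CPU and
# storage from a shared pool, and returns them when its TTL expires.
pool = {"vlan": 64, "ip": 256, "cpu": 128, "storage_gb": 4096}

def assemble(service, needs, ttl_seconds):
    """Draw the composite elements from the pool; fail if any are short."""
    for kind, amount in needs.items():
        if pool[kind] < amount:
            raise RuntimeError(f"not enough {kind} for {service}")
    for kind, amount in needs.items():
        pool[kind] -= amount
    return {"service": service, "needs": needs,
            "expires_at": time.time() + ttl_seconds}

def release(lease):
    """Return the composite elements to the pool, e.g. when the TTL expires."""
    for kind, amount in lease["needs"].items():
        pool[kind] += amount

lease = assemble("billing", {"vlan": 2, "ip": 8, "cpu": 4, "storage_gb": 100},
                 ttl_seconds=3600)
release(lease)   # in practice, triggered once expires_at has passed
```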

[Image: service queues]

I was speaking to a customer the other day, and every service by default is deployed in their DC with a 30-day TTL (time to live). You can "purchase" an exception... I won't get into the downstream effects of this thinking -- and there are many -- but this forethought is significant.

But it illustrates these two processes: the consumption of composite resources by the deployment (run) requests for services.
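That customer's rule, sketched as policy (the 30-day default is theirs; the exception mechanism here is imagined):

```python
# Illustrative TTL policy: 30 days by default, longer only by exception.
DEFAULT_TTL_DAYS = 30

def ttl_for(service, approved_exceptions):
    """30 days unless an exception has been 'purchased' for this service."""
    return approved_exceptions.get(service, DEFAULT_TTL_DAYS)

print(ttl_for("reporting", {"payroll": 365}))   # -> 30
print(ttl_for("payroll",   {"payroll": 365}))   # -> 365
```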

About

Thoughts from Jason Carolan -- Distinguished Engineer @ Sun and Global Systems Engineering Director - http://twitter.com/jtcarolan - http://archmatters.wordpress.com
