So you have a new project & don't want to “buy” more computers?

Depending on who you talk to, computing is either a competitive weapon or a cost of doing business; either way, it's an investment. One question every new project faces as it moves from business plan to implementation is the often excessive cost and time needed to put the infrastructure in place before you can finally realize that killer application.

With the advent of utility computing, and before that outsourcing, businesses gained the ability to shift capital costs to operating expenses, and in doing so improved their capital portfolio and cash flow positions. Now you have a new initiative, and though you have models and projections for the data transfer, loading, processing and storage rates, you are probably still not quite certain how it should be architected. One potential blueprint comes from Sun in its Sun Grid Rack System and Sun Grid Utility Services approach. This blueprint gives you the ability to take industry-standard x64 servers or the novel SPARC CoolThreads (Niagara T1) based systems and grow your computing plant just in time, in small incremental grains with incremental cost (the value engineering done by the core utility team).

Take advantage of the package density, operational automation, systemic monitoring and metering, and workload scheduling advancements currently under investigation by the Sun Grid team as your solution matures. This approach also puts you in control of your near-term deliverables (your hardware, your software, on site) and aligns with options for fiscal improvement and flexibility in the longer term. If your initiative is radically successful for the business, you can look to “join” Sun Grid, allowing your work to be distributed onto the Sun Grid mesh of data centers in addition to your own; this gives you time to market, peak scale, and geographic diversity for high availability.

How do we get started?

  1. Let's talk about the blueprinting - will your application fit this design? Horizontal computing services differ from vertically integrated services in their treatment of memory (the largest issue); specifically, the unified memory architecture of SMP machines is a critical facilitator for some large-scale transactional systems.
  2. If we can go horizontal - which most if not all new (green-field) applications can - then let's think about the data flow and look for the points of constriction: the areas where we become hardware-limited because of disk, network, and CPU speed or contention.
  3. With these “critical to scale” points in mind, let's determine a scalability strategy that helps us parse this load coherently and with availability. Many of today's applications are well suited to pipelined approaches, as in the sketch following this list.
  4. Determine the core services that need to be shared, and how these core services can be federated to benefit both your company and your partners - federation is about the sharing of responsibility and control.
  5. Go back to step 2, and continue to refactor until scalability can be addressed with some fudge factor that lends the software developers flexibility in approach.
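As a rough illustration of steps 2 and 3, here is a minimal Java sketch of a three-stage ingest/transform/load pipeline joined by bounded queues. All class names, record formats and pool sizes are hypothetical and not part of any Sun Grid API; the point is simply that the queue that stays full sits in front of the stage that is disk-, network- or CPU-bound, i.e. a “critical to scale” point.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical three-stage pipeline: ingest -> transform -> load.
    // Each stage is a thread pool joined to the next by a bounded queue,
    // so a stage can be widened (or spread across grid nodes) without
    // changing the shape of the code.
    public class PipelineSketch {
        public static void main(String[] args) {
            BlockingQueue<String> raw = new ArrayBlockingQueue<>(1024);
            BlockingQueue<String> cooked = new ArrayBlockingQueue<>(1024);

            ExecutorService ingest = Executors.newFixedThreadPool(1);
            ExecutorService transform = Executors.newFixedThreadPool(8);
            ExecutorService load = Executors.newFixedThreadPool(2);

            // Stage 1: ingest - pull records in from the outside world.
            ingest.submit(() -> {
                try {
                    for (int i = 0; i < 10000; i++) {
                        raw.put("record-" + i);   // blocks when transform lags behind
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Stage 2: transform - CPU-bound work, scaled wider than the other stages.
            for (int w = 0; w < 8; w++) {
                transform.submit(() -> {
                    try {
                        while (true) {
                            cooked.put(raw.take().toUpperCase()); // stand-in for real work
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            // Stage 3: load - persist results (shutdown/poison-pill handling omitted).
            load.submit(() -> {
                try {
                    while (true) {
                        String rec = cooked.take();
                        // persist(rec) would write to disk or the database here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

Because each stage is just a pool behind a queue, widening a constricted stage, or moving it onto additional grid nodes, is a sizing decision rather than a redesign.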

We are ready when you are, with a set of System Integrator partners, ISVs, the Client Solutions team, and a very active open community to help you take advantage of these emerging models, simplify your data center, and change the economics of corporate and research computing.
