Thursday Nov 13, 2008

The SuperNAP

I was invited to visit Switch Communications' SuperNAP facility. This must be the best data centre in the world. It is purpose-built and designed to host new-age, high-density computing. They set out to build a 35kW/rack data centre, and every decision they took was to enable this goal. There is no compromise. For instance, they have invented their own air-conditioning plant since, so they claim, the industry leader wasn't interested in innovating for them: they pump cold air into the top of the room and suck the rising hot air out, leveraging the laws of physics. They have three power distribution systems and a fixed floor which supports the PDU system. If you require high-density computing, these are the people to go to.

They have a video on their home site that explains some more, and this video on YouTube, which explains even more.

I found this by querying YouTube using the tag supernap.

They talk exclusively about cooling, power and availability, and they summarise the offering on this page. We, i.e. Sun, are quite good at fitting out data centres, yet we have rented space in theirs. If you can put up with US jurisdiction, it's fantastic. It raises the question of why anyone that's not a specialist would build another. Look at the videos.

This was posted in Jan 2009 and backdated to the time of occurrence.


Thursday Nov 06, 2008

Talking about Cloud Computing

The current technical state of systems, storage and networking, and specifically the cost of broadband networking, has created a tipping point. Over the last 10 years, organisations and people have been learning to build new distributed-computing server complexes. It may be too late to copy the leaders, but certain design criteria and the regulatory constraints may mean that there is a slower commercial adoption cycle. The privacy, availability and response-time requirements for businesses are all different. In my mind, it's commercial adoption that turns grids into clouds.

"One class of grid is where we locate one application, which has many identical parts on a distributed computing platform and we call this HPC; where we locate many copies of one application be it apache, glassfish or MySQL on a distributed computing platform we call it web 2.0 and when we locate many applications on a distributed computing platform we call it Cloud Computing"

Dave Levy

It's commerce that has the need for the cloud, because commercial organisations usually have a large portfolio of applications, some of which behave like HPC and some of which behave like Web 2.0, and it's the economics of utility that drives this. Sun's ERP solutions have leveraged our product portfolio and Moore's law to become a tiny fraction of Sun's IT estate, with the community infrastructure and the design-support solutions, implemented on Web 2.0 and HPC grids, now dominating Sun's internal network in terms of cycles, storage and cost.

Admittedly, there are other aspects of what makes a cloud different from the payroll bureau of thirty years ago.

Data centres are expensive and, as we have been discovering over the last few years, they are best built for purpose. Building and running data centres also benefits from specialisation. In "The Big Switch", Nicholas Carr argues that the efficiencies of the plant apply to IT. (I'm really going to have to read it.) Historically, applications developers have tightly coupled their code with an operating system image, specifying the version, library installs, package cluster and patch state. This is beginning to end. Developers want to, and do, develop to new contracts, be it Java, Python or another runtime. Also, with virtualisation technology such as VirtualBox and VMware, deployers can build their utility plant and take an application appliance with an integrated OS and applications runtime. This allows developers to choose whether to use modern dynamic runtimes or to tightly integrate their code with the environment.

A second driver is the amount of data coming online. This cornucopia of data is enabling and creating new applications, of which internet search is an obvious one. Google scans the web, but many companies, and increasingly social networks, are scanning their storage to discover valuable new pieces of information. Internet scale also means the "clever people work elsewhere" rule of life is generating new questions. The growing number of devices attached to the internet is also discovering and delivering new digital facts. The evolution of the internet of things will make the growth in data explosive, so it's a good time to be introducing a disruptive new storage capability and economics. The need to analyse this massive new data source is what's driving the emergence of Hadoop and map/reduce. Only parallel computing is capable of getting information out of the data in any reasonable time. A fascinating proof point is documented on the NYT blog, where Derek Gottfrid shows how he used Amazon's cloud offerings to convert the NYT's 4TB archive into PDF using Hadoop. I'd hate to think how long it might have taken using traditional techniques.
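The map/reduce pattern behind Hadoop is simple to sketch. The snippet below is a toy, single-process illustration of the three phases (map, shuffle, reduce) using a word count; the corpus and function names are my own stand-ins, not Hadoop's API, which distributes exactly this structure across a grid.

```python
from collections import defaultdict

# Toy illustration of the map/shuffle/reduce phases Hadoop runs at scale.
# Documents, mapper and reducer here are hypothetical stand-ins.

def mapper(document):
    """Emit a (word, 1) pair for each word in one document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(mapped):
    """Group intermediate pairs by key, as the framework does between phases."""
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return groups

def reducer(key, values):
    """Combine all counts for one word."""
    return key, sum(values)

documents = ["the cloud", "the grid becomes the cloud"]
counts = dict(reducer(k, v) for k, v in shuffle(map(mapper, documents)).items())
print(counts["the"])  # 3
```

Each phase is independently parallelisable, which is why the same job scales from one laptop to the hundreds of EC2 nodes Gottfrid used.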

One tendency I have observed from my work over the last year is that building grids is no longer hard, and most dramatically, Amazon and Google are turning their grids to applications hosting. A number of public-sector research institutions have also been building publicly available grids for a while, although they tend to share amongst themselves. In the public-sector world at least, they have begun to address the question of grid interoperability, and everyone is looking at how to 'slice' resource chunks out of the grid for users, on demand of course.

In the commercial world, the competitive positioning of the various players has led to them competing with different services at different levels of abstraction. Google's App Engine and Amazon's EC2 are quite different offerings. Sun believes that cloud computing offerings now need to organise above the OS level, and that developers don't want to worry about the operating system, merely their runtime execution environment. This is only possible because modern development and runtime environments can protect developers from both the CPU architecture and now the operating system implementation. I know that as I search for a new solution for the services I run on my Qube, I'm happy to configure the applications and their backups, but I don't want to worry about disk reliability and other system services.

Jim Baty made the comment that we're entering a Web 3.0 world which is chmod 777 for everyone. :)

So the economics are compelling, the state of technology is right, developers are ready to leave these decisions behind and the first movers are moving.

Can and will Sun play a role in this next stage of the maturing of IT?

This article is, I hope, the first of two, written from notes made during a presentation by Jim Baty, Chief Architect, Sun Global Sales and Services, Scott Matton, one of the senior architects in GSS, and Lew Tucker, VP & CTO. The article is backdated to about the time of occurrence.

Thursday May 17, 2007

Project Black Box, it's real you know!

Yesterday, Sun's Project Blackbox tour visited the Thames Valley at Sun's UK HQ campus, and today we have taken it to the National Army Museum so prospective customers, journalists and analysts can inspect it and 'kick the tyres'. I am one of the engineers answering the moderately hard questions. The really difficult ones have been handled by Joe Carvalho, one of the designers.


Project Black Box


One of the difficult questions was "How come you can't lock yourself in?". I took some pictures, as did Andy Williams of Easynet, and I have posted them in a set called "Project Black Box" at Flickr. (Now this should be a pretty good use of Snap Preview.) Andy's are copyright to him; I have posted mine with my normal Creative Commons licence.

This is another article backdated to the time it occurred, since it has taken the best part of a month to get Andy's permission and upload the pictures to Flickr.


Tuesday Nov 21, 2006

Blackbox is a video star!

Jonathan announced Project Black Box at the end of last month. It's a data centre in a shipping container, and he expanded on its unique value in his blog article "A picture's worth...". Jonathan said that customer reaction has varied:

Jonathan:Equal measures of a) nervous laughter, b) incredulity, c) profound curiosity and a recognition that we're working on the right problems for the future of datacenters. And we have an enviably beefy pipeline of customers and integrators wanting to talk to us, which is the right starting point.

As part of the reaction by customers in the UK, I have been asked to talk to two major UK-based customers and have thus checked out the YouTube videos published by Sun (and others; the link queries YouTube for "sun+blackbox" tags). I have also uploaded the customer presentation I use to our media caster [.pdf].

It's clear after some research that the big advantages are that it can supply and cool 25kW/rack, so racks can be full of modern, space-efficient computers; you don't have to spend your space budget on cooling. With a footprint of 30'x15', and capable of hosting 250 n-way systems with between 1000 and 2000 cores, we claim it can save 80% of your space costs and reduce the demand for space by 50%. The land is cheaper; it doesn't have to be air-conditioned, doesn't need a raised floor, etc., and Sun Blackboxes can be stacked, if you have the headroom!


Rack'em + Stack'em


All you need is power, networks and chilled water.
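The density claims above are easy to sanity-check with back-of-envelope arithmetic. The rack count below is my own assumption for illustration, not a figure from Sun's spec sheet; the footprint, per-rack power and core counts come from the text.

```python
# Back-of-envelope check of the Blackbox density figures.
# racks = 8 is an assumed count for illustration only.
footprint_sqft = 30 * 15           # 30' x 15' container footprint
racks = 8                          # assumption, not Sun's spec
kw_per_rack = 25                   # supplied and cooled per rack
systems = 250                      # n-way systems hosted
cores_low, cores_high = 1000, 2000

total_kw = racks * kw_per_rack
print(f"{total_kw} kW in {footprint_sqft} sq ft "
      f"= {total_kw / footprint_sqft:.2f} kW/sq ft")
print(f"{cores_low}-{cores_high} cores over {systems} systems "
      f"= {cores_low // systems}-{cores_high // systems} cores/system")
```

Under those assumptions the container delivers roughly 0.44 kW per square foot, which is the point: conventional raised-floor rooms struggle to reach a fraction of that.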


Saturday Mar 04, 2006


I have now (yeah really) added a picture to my article "Designing Data Centre automation solutions" illustrating the two dimensions of virtualisation & posted my notes from the N1 Architecture sessions.


Friday Mar 03, 2006

Power, Heat and Cooling in the data centre

I'm about to head for the airport to travel home, but I must let you know that Peter Snelling of Sun Canada has just presented on data centre environmentals. It is a very comprehensive presentation and I shall be borrowing from it significantly. He calculates from power draw, to heat generated, to air-conditioning requirements.
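The core of that power-to-cooling chain can be sketched in a few lines: effectively every watt an IT load draws ends up as heat, which the air conditioning must then remove. The 25 kW example load is hypothetical; the conversion factors are the standard ones.

```python
# Sketch of the power -> heat -> cooling arithmetic.
# Assumes all electrical power drawn is dissipated as heat.
BTU_PER_HOUR_PER_WATT = 3.412      # 1 W of heat = 3.412 BTU/hr
BTU_PER_HOUR_PER_TON = 12000       # 1 ton of cooling = 12,000 BTU/hr

def cooling_required(load_kw):
    """Return (BTU/hr of heat, tons of cooling) for an IT load in kW."""
    watts = load_kw * 1000
    btu_hr = watts * BTU_PER_HOUR_PER_WATT
    return btu_hr, btu_hr / BTU_PER_HOUR_PER_TON

btu_hr, tons = cooling_required(25)    # a hypothetical 25 kW rack
print(f"{btu_hr:.0f} BTU/hr, about {tons:.1f} tons of cooling")
```

A single high-density rack therefore needs on the order of seven tons of cooling, which is why the air-conditioning plant dominates data centre design.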

Thanks Peter!


Virtualisation has two dimensions

I have added a picture to my article "Designing Data Centre automation solutions" illustrating the two dimensions of virtualisation. It seems like I didn't! Sorry. I have now. Mysterious smiley


Data Centre Vision

Sohrab has sent me an edited version of his slides. You can download it here... and I have put it up as a bookmark in the right-hand sidebar. It's the "N1 (.pdf)" one.


Wednesday Mar 01, 2006

Architecting DC automation

During a presentation on N1, Garima Thockburn & Doan Nguyen showed an architectural functional model for the applications provisioning technology.

N1 Service Provisioning Architecture diagram

What's really cool is that, as I'd expect from Sun (although we can forget what made us), openness is being designed in from the ground level. It's recognised that a DC automation solution will have to utilise external capabilities. A second piece of ideology is borrowed from SOA: the products need to workflow-manage various transactions, i.e. systems management transactions in the data centre. (This is the service manager.) The whole SOA vocabulary of composition and orchestration fits the problem. The industry (or community if you prefer) is also developing business-applications design skills for the infrastructure. This is also necessary if automation is to succeed. The block diagram puts workflow at the centre of the design. NB The diagram is accurate today but may change.
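The workflow-at-the-centre idea can be sketched with a toy orchestrator: each systems-management transaction is registered as a step, and a service manager runs the composition in order. All names here (the workflow, the steps, the blade) are hypothetical illustrations, not the N1 API.

```python
# Toy sketch of workflow-centred DC automation; not the N1 API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workflow:
    """A named, ordered composition of management transactions."""
    name: str
    steps: List[Callable[[Dict], Dict]] = field(default_factory=list)

    def step(self, fn):
        """Register one management transaction as a workflow step."""
        self.steps.append(fn)
        return fn

    def run(self, context):
        """Play the steps in order, threading the shared context through."""
        for fn in self.steps:
            context = fn(context)
        return context

provision = Workflow("provision-app-server")

@provision.step
def allocate_host(ctx):
    return {**ctx, "host": "blade-07"}       # hypothetical resource

@provision.step
def install_runtime(ctx):
    return {**ctx, "runtime": "glassfish"}   # hypothetical runtime install

result = provision.run({"app": "orders"})
print(result)
```

Plugins in this picture are just extra step functions contributed from outside, which is exactly why an open plugin API (and Matthias's "plugin builder") matters.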

We were given the opportunity to workshop with members of the team. We had a discussion about solutions selling and examined the reality of the partnering dimension. I argued that we need to take more ownership of the relationship; in many cases, our "partners" don't use that word to describe us. On a good day we're suppliers; on a bad day we're competition. I have thought this through since I said it, and it's clear that this problem requires inter-company co-operation. At least Sun is comfortable working with others, and partnership will help our customers meet broader requirements today; as I said, partnering helps meet the heterogeneity requirement, and at the least we'll need help dealing with oddities. Matthias Pfuetzner (a German colleague) argued for openness of the product's APIs at both the plugin level, which determines what objects the software can manage, and at the UI level, because as the plugin capability extends, it may be necessary to customise the UI to invoke and monitor the extended functionality. I like his idea of making a "plugin builder".




