Wednesday Jan 13, 2010

Can government clouds be any other way than full of open source?

These days, I keep reading how open source will be everywhere in government clouds, and how this is, for some strange reason, supposed to be news to everybody on the planet.

The thing is... look around! Most of the cloud computing technologies we are going to be using in the years to come are open source, or directly based on it. This is not really news: you can consider cloud computing to be an evolution of what we've been doing up to now, towards much higher dynamicity. And the result of what we've been doing is a huge collection of fantastic open source software. So fantastic, actually, that in Europe in particular, open source is really what is driving the government IT business.

So of course, it seems logical that all the solutions for developing network-based applications (call it client-server, grid, 3-tier, SOA... cloud... you name it) are the basis of what we're now seeing put into practice to build the clouds of tomorrow. Or the clouds of today, actually.

Yes, some new technology has appeared, in particular around provisioning, since that is one of the key differentiators between the N-1 iteration (SOA) and the current one (cloud) of our computing model. But even that new technology is very often open source itself.

Cloud computing is a world of open source. Which is interesting, because there's a lot of money to be made there. It's a world of services, integration, very fancy support models... everything that is needed to deploy mission-critical applications.

But because cloud computing is built on open source, clouds not only offer the means to scale from zero to very high loads in just moments; they also offer the means to start at zero with zero software costs. You then scale your costs accordingly, from services and consulting at the start to full-fledged 24x7 enterprise-class support once your cloud becomes a production machine generating revenue, or any other source of value (homeland security, education of the population, tax management...).

So yes, government clouds (and any other clouds) will be based mostly on open source technology. We shouldn't be surprised. We had it coming, for quite some time. And it's a good thing.

Monday Feb 23, 2009

Solving the governmental secure SOA jigsaw puzzle with soda...

With the growing requirements for more and more complex services to be offered to citizens, and the ever increasing requirements for integrated internal processes (like integrated tax systems, for example), governments around the world are being faced with what I call the secure SOA jigsaw puzzle syndrome. (That is, of course, if one believes that SOA is still a relevant term... It seems some people, and not the least, think we need to shift away from the SOA terminology.)

How do you mash up multiple services that have been provided by different vendors? How do you ensure that this mashup remains secure? You might trust one of the vendors more than another. How do you prevent the code running on one of the services from turning rogue and doing a bit more than you expected on your network? How do you ensure that there is no data leak between services, either through the services infrastructure itself, or at the user's point of interaction?

Your typical solution to the problem becomes very complicated, as you have to jump through all kinds of network topology hoops to separate your different service providers. And even then, at the point of user interaction, it is still very hard to prevent a state revenue service worker from accessing a citizen's tax records and then copying and pasting them into a webmail client connected to the internet.

I'm currently designing an architecture that is aimed at solving these issues in a very elegant way. We call it SODA, or, to be precise, S3ODA, for Secure Shared Services Open Delivery Architecture.

The idea is very simple. Solaris 10 and OpenSolaris both share a common feature called Trusted Extensions (TX, as we call them, between friends). With TX you can assign a label to everything running on the system. You can think in military terms, with hierarchical labels like Confidential, Secret, Top Secret... but you can also use non-hierarchical labels, named after services, like ServiceA, ServiceB, ServiceC.
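To give a feel for how these two kinds of labels relate, here is a toy model in Python. This is purely illustrative: real TX labels are defined in the system's label encodings configuration, not in application code, and the level numbers and service names below are made up for the example.

```python
# Toy model of Trusted Extensions-style labels (illustration only).
# A label has a hierarchical level plus a set of non-hierarchical
# compartments (here named after services).
from dataclasses import dataclass

LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

@dataclass(frozen=True)
class Label:
    level: int                 # hierarchical component
    compartments: frozenset    # non-hierarchical component, e.g. {"ServiceA"}

def dominates(a: Label, b: Label) -> bool:
    """a dominates b when its level is at least b's and its
    compartments include all of b's."""
    return a.level >= b.level and a.compartments >= b.compartments

secret_a = Label(LEVELS["Secret"], frozenset({"ServiceA"}))
secret_b = Label(LEVELS["Secret"], frozenset({"ServiceB"}))
top_secret_a = Label(LEVELS["Top Secret"], frozenset({"ServiceA"}))

print(dominates(top_secret_a, secret_a))  # True: higher level, same compartment
print(dominates(secret_a, secret_b))      # False: disjoint compartments are incomparable
```

The second result is the important one for S3ODA: two services with disjoint labels are simply incomparable, so by default nothing flows between them.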

You basically assign a label to each service component you are going to plug into your services architecture. Either the service runs on a Solaris system, in which case you enable TX on that system and run the service inside the corresponding label; or, if the service isn't running on Solaris, you proxy it behind a Solaris + TX server which enforces the labeling; or you use a network environment supporting CIPSO labels and map your service to the corresponding label.

On the service switch side, the magic resides in implementing your ESB stacked inside a multi-label Solaris system. You create one label per service, and run one instance of the ESB per label on the Solaris machine. And behind that, you implement, as part of your policy engine, the rules that enable the different labels to communicate only when the application workflow mandates it.

That way, service A components can only talk to service B components when they are allowed to. At any other time, since they are running at different labels, the network infrastructure (CIPSO or Solaris TX servers) does not allow the labels to intercommunicate, and the ESBs can't communicate with each other until the application workflow reaches the point where the Solaris TX switch server opens up the communication for that specific task. This takes care of information leaks on the services infrastructure side of things...
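The deny-by-default, workflow-gated behavior of the policy engine can be sketched as follows. Again, this is a hypothetical Python model, not the actual implementation: the step names and service labels are invented, and in the real architecture the enforcement lives in Solaris TX and the labeled network, not in application code.

```python
# Toy sketch of the policy engine idea: inter-label communication is
# denied by default, and each workflow step whitelists only the
# (source label, destination label) flows it actually needs.

WORKFLOW_RULES = {
    "check-eligibility": {("ServiceA", "ServiceB")},
    "issue-payment":     {("ServiceB", "ServiceC")},
}

def may_communicate(step: str, src: str, dst: str) -> bool:
    """Allow a flow only when the current workflow step mandates it."""
    return (src, dst) in WORKFLOW_RULES.get(step, set())

print(may_communicate("check-eligibility", "ServiceA", "ServiceB"))  # True
print(may_communicate("check-eligibility", "ServiceB", "ServiceA"))  # False: reverse flow not mandated
print(may_communicate("issue-payment", "ServiceA", "ServiceB"))      # False: wrong step
```

Note that the rules are directional: allowing A to talk to B during one step says nothing about B talking to A, or about any other step.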

Now, how do you handle the prevention of data leaks at the user's point of interaction? Sun has been developing (initially for defense customers, but it's really usable by everybody) an environment called SNAP, for Secure Network Access Platform. The general idea behind SNAP is that you implement a Sun Ray server on top of Solaris and Trusted Extensions.

Sun Ray clients are very slick devices. They are thin clients with absolutely zero state: no hard disk, and minimal information in flash (basically just enough to boot using DHCP on a network and then figure out from there how to load their software and start being useful). The advantage is that since there is no local storage, theft of a device brings no data theft... and is also pointless, as a Sun Ray without a Sun Ray server behind it is pretty much a paperweight. :)

Now, once you have the proper server infrastructure, they are very, very useful. With SNAP, each window you see on your terminal's screen operates at a specific security label: yes, the same labels as on the network and on the services switch. It then becomes impossible to copy/paste between windows that don't share the same label (or, in defense environments, you can't paste from a high security level to a low one: you can't declassify).

So here is what you get. Your tax worker can be accessing the tax records of celebrities, but that person has no way to copy from the tax application window to, say, a web browser that might be open elsewhere for background checks: it is impossible to take the tax data and paste it into the browser. But the system may have a rule in place enabling pasting from the browser into the tax application, in order to keep track of things like pictures of expensive houses used to justify that taxes were perhaps underdeclared by the celebrity...
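The asymmetric paste rule in the tax-worker scenario can be modeled with a few lines of Python. As with the earlier sketches, this is a toy illustration under invented names ("TaxRecords", "Internet"): the real mechanism is enforced by the Sun Ray / Trusted Extensions desktop, not by application code.

```python
# Toy model of SNAP's copy/paste policy: pasting between windows at
# different labels is forbidden, except for explicitly allowed
# low-to-high flows. Pasting from high to low would be declassifying,
# so it is never allowed.

LEVELS = {"Internet": 0, "TaxRecords": 2}

# Explicit one-way exceptions: e.g. paste a photo from the browser
# into the tax application, but never the other way around.
ALLOWED_UPGRADES = {("Internet", "TaxRecords")}

def may_paste(src: str, dst: str) -> bool:
    if src == dst:
        return True               # same label: always fine
    if LEVELS[src] > LEVELS[dst]:
        return False              # would declassify: never allowed
    return (src, dst) in ALLOWED_UPGRADES

print(may_paste("TaxRecords", "Internet"))  # False: tax data can't leak to the browser
print(may_paste("Internet", "TaxRecords"))  # True: browser evidence into the tax app
```

The point of the explicit exception list is that even an upward flow is off by default; the administrator has to decide that a given low-to-high paste is part of the business process.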

Do you want to know more about this architecture? Send an e-mail!



