Tuesday Jun 13, 2006

Real Money Here!

As many have seen, Sun is offering up a $50k bounty for the coolest application.

[graphic: Cool Apps]
I'm starting to get some really good questions, which leads me to believe that there are some mighty cool things being done. If you are writing a cool app and need some help, please do query the SunGrid Developer Community. I'll keep my email off the blog, but if you are interested in contacting me directly, it's fname.lname@sun.com.

Have some fun - build something that simulates protein folding, airfoil shaping, analog circuit routing, 3D rendering... there are lots of cool things that I think can leverage all this compute power. Good luck!

Friday May 12, 2006

More on the “Google-Mart” proposition

A number of readers have contacted me to discuss my last blog on the emergence of Microsoft and Google not just as online/marketing entities and content service providers, but as players well funded and well positioned to be mega-ASPs, dis-intermediating a swath of independent service providers. My thinking in this area was initially sparked by Robert Cringely's blog back in November 2005. In that article, Cringely cites ample evidence of this strategy along the following axes:

  • Google is buying up lots of Dark Fiber (note that bandwidth is one of the critical factors in a successful application services platform)
  • Google's “shipping container” in Mountain View, which is said to contain “5000 Opteron processors and 3.5 petabytes of disk storage that can be dropped-off overnight by a tractor-trailer rig”
  • Google is building data-centers in places where there is cheap power and excellent “rights of way”
  • The re-introduction of the Google Accelerator

So I'm not saying that we should endorse Cringely's conspiracy theory, but we have to look seriously at “someone” doing this, because the opportunity is just too lucrative not to explore. All of a sudden, small self-contained data centers start cropping up as macro-level packaged solutions, located at or near power sub-stations which are also frequently co-located with networking rights of way... and Google prepares to deliver its services very, very close to the consumers. That proximity can lend tremendous contextual benefit to the service mix (demographically aligned), and at prices that traditional competitors cannot touch.

To me, the exceedingly interesting Computer Science aspect of this work is the establishment of a new layer within the traditional IT infrastructure: the Network Operating System. Where earlier attempts like NUMA systems fell short, the NOS may finally deliver the reliability, serviceability and scalability benefits that could allow one to run an orchestrated service on a “grid” of computers just as it used to run in “user space” on a single computer - the promise of Java (for homogeneity) and SOA (for dynamic orchestration and late binding), realized.

Wednesday Jun 08, 2005

Utilimatic, it's Autonomic on Steroids

As a team, we were joking around the other day, looking for new naming models that could properly reflect the self-managing, self-repairing nature of our Grid environment. What became very clear was that we needed to coin a new term reflecting the higher-order policies associated with resource management - policies like financial/revenue arbitrage.

It's one thing to talk about automated, lights-out management, in which most if not all of the precipitating events are well known and the appropriate reactions coded in scripts. Further, you could allow for cascading events and policy-based (weighted) rules tied to a federating paradigm - btw, this is starting to really reflect the state of the art here (safe harbor clause). But what happens when you actually bring financial/market-driven rules and experience into the solving of these problems? You get utility-value-optimizing automation… or the “Free Market Utilimatic” (there's a reason I'm not in branding).

Back in February, we (Sun and Archipelago Holdings) described the potential of developing a commodity exchange for the more traditional data center capabilities - network, compute and storage - as they really are beginning to become commodities. We know that they're commodities because of the competition that Sun and its competitors face across specific platforms: it's not a battle of speeds, feeds or features, but rather price that differentiates. In fact, the increasing interchangeability of compute products has been driven by the buyers - our customers - who wanted choice. As such, it's fully appropriate that these commodity elements have a market-derived value, and we all know that where there are free markets, there are people who trade in futures. So if spot commodities and futures both have inherent value, why can't we leverage this value information to help us prioritize tasking (distributed scheduling), direct our resources (straight-through provisioning) and even plan the “onlining” of capacity (resource planning)?
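
To make the idea concrete, here's a minimal sketch of what “utility-value-optimizing” scheduling could look like. Everything here - the Job fields, the $1/CPU-hr “spot” price, the dispatch/defer rule - is illustrative, not a Sun Grid API:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class UtilimaticScheduler {

    /** Illustrative job: what it's worth to the business vs. what it consumes. */
    static class Job {
        final String name;
        final double businessValue; // $ value of completing the job
        final double cpuHours;      // estimated CPU-hours required
        Job(String name, double businessValue, double cpuHours) {
            this.name = name;
            this.businessValue = businessValue;
            this.cpuHours = cpuHours;
        }
        double utilityAt(double spotPrice) {
            return businessValue - cpuHours * spotPrice;
        }
    }

    public static void main(String[] args) {
        // Pretend the exchange is quoting compute at $1/CPU-hr right now.
        final double spot = 1.00;

        // Highest net utility dispatches first; negative utility waits for a dip.
        PriorityQueue<Job> backlog = new PriorityQueue<Job>(3, new Comparator<Job>() {
            public int compare(Job a, Job b) {
                return Double.compare(b.utilityAt(spot), a.utilityAt(spot));
            }
        });
        backlog.add(new Job("risk-report", 500.0, 200.0));
        backlog.add(new Job("render-farm", 900.0, 1000.0)); // under water at $1/hr
        backlog.add(new Job("protein-fold", 5000.0, 3000.0));

        while (!backlog.isEmpty()) {
            Job j = backlog.poll();
            double u = j.utilityAt(spot);
            System.out.println(j.name + ": net utility $" + u
                    + (u >= 0 ? " -> dispatch" : " -> defer until the price drops"));
        }
    }
}
```

The few lines of queueing aren't the point; the point is that the comparator re-ranks the backlog every time the spot price moves - exactly the market signal feeding scheduling, provisioning and capacity planning.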

Friday Apr 29, 2005

Software development process, parallels in EDA world

I'm beginning to think that the software development community can benefit strongly from the innovations that have taken place in the Electronic Design Automation (EDA) world over the past 10+ years - specifically in re-use, in process, and in declarative models like VHDL (including repositories of free componentry). What is very interesting is the strong use of meta-models - very UML-like, but inclusive of systemic models/annotations. In the EDA case these cover things like heat, power and timing; in enterprise development they translate well to systemic characteristics like security, availability and scalability, which would allow us to take micro-architectural patterns and annotate them as componentry that can then be assembled, verified, synthesized and planned.
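
As a thought experiment, here's what such systemic annotations might look like using Java 5 metadata. The annotation names and attributes below are hypothetical - there's no such framework today - but they show how a micro-architectural pattern could carry its own “timing constraints”:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical systemic annotations, in the spirit of EDA block
// annotations for heat, power and timing.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Availability {
    double targetUptime(); // e.g. 0.999
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface SecurityLevel {
    String value(); // e.g. "DMZ", "internal"
}

// A micro-architectural pattern annotated as assemblable componentry;
// a deployment planner could read these at "synthesis" time the way
// EDA tools read timing constraints.
@Availability(targetUptime = 0.999)
@SecurityLevel("DMZ")
class OrderCaptureService { /* ... */ }
```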

Another interesting element, which most who monitor this community will have seen, is an increasing recognition that IP not invented within the corporate four walls is okay, so long as it can be successfully integrated - which at times can be challenging:

EETimes.com - IP assembly represents a sea change in the design of ASICs:

“This has led to the rise of a new term to describe ASIC design: IP assembly. The suggestion is that ASIC design now is different in kind from the days when we wrote new register transfer level for everything. Instead of designing, we are assembling prefabricated blocks into new configurations.

Somehow, the word doesn't match reality, though. A lot more goes into producing a working, yielding SoC than just putting things together. Not the least of the efforts is in creating new blocks to implement functions that aren't available as needed from the internal and external IP worlds. But the amount of work in 'just' connecting the dots - floor planning, interconnecting the blocks correctly, placement and routing, timing, signal integrity and power closure and rule checking - can be huge. It can also require considerable creativity. And no one, other than an IP vendor, has suggested that employing existing IP reduces the verification task in the least.”

That aside, I think that SaaS and modular software approaches are following this trend, especially with the move to virtualized infrastructures and utility models. I have spoken before on Intellectual Capital marketplaces for software components, and I think we should look to the IP foundries, the communities for IP sharing, and the customizability of IC componentry as models while we, as a software-IC cottage industry, execute on our plans for SaaS on a Sun Grid substrate.

Tuesday Apr 26, 2005

Like old wine in a new bottle

I was sent a presentation this morning outlining some of the various approaches proposed within the WS-I community for Service Oriented Architecture (SOA) service interaction. And a few of the graphics just jumped out at me:

[graphic: WS invocation]

I find it remarkably amusing that the notions of “registration”, “discovery” and “binding” are so challenging for the industry... we were doing this with CORBA 10+ years ago, and a full 7 years ago with Jini. What am I missing - is it just the need of competitive marketing groups/pre-IPOs to re-name things in order to get funding? I just feel like this is a NIH problem, or worse, that some competitors wanted to introduce discontinuous technologies so that they could fill the void. Even Adam Bosworth, who ran in that crowd for a while, points out here (thanks Simon):

'Vendors such as IBM and Microsoft, in proposing the standards, were big, institutionalized companies trying to protect themselves, [Adam] Bosworth (formerly of BEA and now a VP at Google) said. “They were deliberately, in my opinion, making something hard,” he said, citing specifications such as Web Services Reliable Messaging.' InfoWorld

[graphic: WS security]
With the simple use of proxies/stubs (which, of course, can be dynamically generated) we eliminate the need for intermediate message formats; with leasing we pick up dynamic binding and, as a plus, resource management; and with Jini Transactions and Spaces we get the ACID properties... putting the control of recovery back where it should be, with the calling (initiating) service. And without a “required” coordinator I can drive to distributed/peer-oriented scale. What the heck am I missing, except the fact that everyone thinks that Jini, by default, equals Java and RMI?

[graphic: alignment]
And now let me really get up on my stump... language neutrality vs. platform/protocol neutrality... why the heck should we use a description language to write code - even business code? There are terrific code generators for Java, moving from UML! Do Microsoft and IBM think that we, as a developer community, are that naive?

A colleague of mine, Murali, pointed out that there is little new ground in the WS space, but that it's still popular because - let's face it - Java isn't interoperable. I suggest that there are people who didn't want to be wrong in their dismissal of Java and Jini, and who now must invent their own “versions” of this venerable technology... so let's just take a look at a problem decomposition solved by WS and by Jini.

So what's the hitch... Murali suggests “The fundamental thing we need to do is to externalize Java objects into a compactly encoded binary XML, through serializers with FastInfosets and JAX-FAST implementations. We then need to implement SOAP messaging within JERI, to provide pluggable transport. Then it doesn't matter what platform the service-endpoint is or how services are invoked. You have the best of both worlds: JINI and WS.”

We can soon prove to the WS-I community - nay, the developer community at large - that it's a lot easier to discover and reuse interesting services within this platform than to waste time in politicized standardization efforts. Furthermore, recognizing that more homogeneous environments wouldn't have to take the lowest-common-denominator approach would allow us, as a community, to create hugely performant and well-behaved service-oriented networks.

Oh well, Tim, count me into the Illuminati, the Loyal Opposition!

Friday Apr 22, 2005

Jini and the NOS

We are now experiencing a drive toward a commodity “Network Operating System”, the next step in technology-assisted computing environments. We know that we have driven a huge amount of cost out of the hardware puzzle through the market move to commodity x86-based systems, but what's really next? It's gotta be the Operating System, plus the software to manage a cluster of systems - what I'm calling the NOS.

So when I talk about the need for a Network Operating System, what do I mean? Well, let's look at what kernels typically do: process provisioning (in isolation) and management, system resource management, inter-process communication enablement, and fault management and recovery (abstraction for user-level processes).

Some in the Jini community - like GigaSpaces (where my good friend Dennis Reedy has just taken a leadership role), Paremus and Intamission - and even open-source projects from the community itself, like Project Rio, plus these tutorials on JavaSpaces-based worker patterns (attrib. Tom White)

[graphic: compute farm worker pattern]

are working on workflow scheduling & distribution models, and more dynamic resource management techniques. By and large these companies have been relatively successful, but they still lack the excitement that I think they deserve. The causes of this lack of excitement/broad adoption can be traced back to a couple of things (misconceptions and misgivings):

Misconception #1: “Jini, isn't that a technology to connect networks of devices”
Yes, sure - but what do networks of devices have in common with a shared, global computing grid? Peter Deutsch's fallacies, and an intrinsic knowledge that the network will change, that participants in the network will change and fail, and that the environment must tolerate these failures in a consistent and graceful way.

Misconception #2: “Jini is reliant on RMI/JRMP as its protocol, and RMI/JRMP has problems in big networks, across firewalls, etc...”
I'm first going to point you at this article by an esteemed colleague, Dr. Jim Waldo. Basically, any Java system can use RMI/JRMP for Remote (to the current VM) Method Invocation. But - and it's a big BUT - Jini uses proxies & proxy code to interface between services, and the interface/protocol that the proxy exposes is up to the developer. Sure, it's easy to let the proxies rely on RMI, but it's not required. I've seen proxies that leverage JXTA, I've seen CORBA bindings using IIOP, and of course there is a cottage industry in HTTP/SOAP and WS interfaces across the Jini community. Besides, there are existing capabilities that enable tunneling of core Jini services across firewalls and networks: Lincoln Tunnel.
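
To make “the protocol is up to the developer” concrete, here's a minimal smart-proxy sketch that speaks plain HTTP. QuoteService, HttpQuoteProxy and the endpoint are illustrative names, not a published Jini API - the only Jini-specific requirement is that the proxy be Serializable so the lookup service can hand it to clients:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.Serializable;
import java.net.HttpURLConnection;
import java.net.URL;

// The service interface the client programs against.
interface QuoteService {
    String quoteOf(String symbol) throws Exception;
}

// The object registered with the Jini lookup service and downloaded to
// clients. It implements the interface but speaks HTTP on the wire --
// RMI/JRMP never enters the picture.
class HttpQuoteProxy implements QuoteService, Serializable {
    private final String endpoint; // e.g. "http://quotes.example.com/q"

    HttpQuoteProxy(String endpoint) {
        this.endpoint = endpoint;
    }

    public String quoteOf(String symbol) throws Exception {
        URL url = new URL(endpoint + "?s=" + symbol);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        try {
            return in.readLine(); // assume the service returns one quote per line
        } finally {
            in.close();
            conn.disconnect();
        }
    }
}
```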

Misgiving #1: “Jini is a technology that Sun has abandoned?”
Hunh, Scooby? Okay, so Jini hasn't become a mainline infrastructure yet, but it's darn sure getting there. Jini is the core backplane for our RFID event manager (grazie, Larry Mitchell); furthermore, it's been re-released under the Apache License v2.0; we continue to actively support a large community of developers; and there's more that I cannot talk about ;).

Misgiving #2: “Most of the companies with commercial projects are small, and require substantial changes to existing infrastructure to implement”
Yes, BUT the value of re-factoring the problem impacts the scalability, availability, developer productivity and other aspects of the core - I hate this term - middleware infrastructure for segments of the system. Most companies that I have consulted with are phasing Jini in incrementally, and others, like Orbitz, are moving business-critical functions.

So what is it, specific to Jini, that gets me going?

Lookup and Discovery

  • allows for decentralized lookup, and the ability to build federations of federations in order to help with “local matters” and provide for “best fit” resource management and scheduling (see the discovery sketch after this list)
  • Leasing Models match very cleanly with resource management
  • cancellation of Leases aligns well with prioritization, graceful degradation, contention and distributed partial failure
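
Here's what discovery plus lookup amounts to in code - a minimal sketch against the standard net.jini.* classes, reusing the illustrative QuoteService interface from the proxy sketch above (codebase and security-policy setup is elided):

```java
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupDiscovery;

public class QuoteClient {
    public static void main(String[] args) throws Exception {
        // Multicast-discover whatever lookup services are on the network.
        LookupDiscovery disco = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
        disco.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent ev) {
                for (ServiceRegistrar registrar : ev.getRegistrars()) {
                    try {
                        // Ask for anything implementing QuoteService; what
                        // comes back is the downloaded smart proxy.
                        ServiceTemplate tmpl = new ServiceTemplate(
                                null, new Class[] { QuoteService.class }, null);
                        QuoteService svc = (QuoteService) registrar.lookup(tmpl);
                        if (svc != null) {
                            System.out.println(svc.quoteOf("JAVA"));
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            public void discarded(DiscoveryEvent ev) {
                // a registrar went away -- exactly the partial failure
                // the environment is expected to tolerate
            }
        });
        Thread.sleep(10000); // demo scaffolding: wait for discovery, then quit
        disco.terminate();
    }
}
```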

Distributed Transaction support

  • most people today rely on Message-Oriented Middleware (MOM) to provide reliable workflow, despite the penalties paid for normalization, bussing, queuing, etc... obviously new breeds of MOM are emerging with peer messaging, but these mechanisms still rely on the message or the infrastructure to clean up when there are problems, rather than the calling service - where the failure recovery models are more appropriately managed (see the sketch below)
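
Here's a minimal sketch of that caller-owned recovery, using the standard net.jini.core.transaction classes (how you obtain the TransactionManager - e.g. Sun's mahalo service, found via lookup - is elided):

```java
import net.jini.core.transaction.Transaction;
import net.jini.core.transaction.TransactionFactory;
import net.jini.core.transaction.server.TransactionManager;

public class CallerOwnedRecovery {
    public static void doWorkflow(TransactionManager mgr) throws Exception {
        // Lease the transaction for 30 seconds; if we crash and never
        // renew, the manager aborts it for us -- distributed GC for work.
        Transaction.Created created = TransactionFactory.create(mgr, 30 * 1000);
        Transaction txn = created.transaction;
        try {
            // ... perform space operations / service calls under txn ...
            txn.commit(); // all participants see the work atomically
        } catch (Exception e) {
            txn.abort();  // the *initiating* service decides how to recover
            throw e;
        }
    }
}
```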

JavaSpaces

  • when we do need a shared memory space for optimal information distribution (as many Highly Parallel Application Grids do) JavaSpaces provides a very simple distributed worker model (see Tom White's article above).
  • JavaSpaces furthermore scale extremely well, and the pattern is quite fault tolerant (see the worker sketch below)
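
For flavor, here's the heart of that worker pattern against the standard net.jini.space.JavaSpace interface. Task and Result are illustrative entries, and obtaining the space proxy (via lookup, as above) is elided:

```java
import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Entries need public fields and a public no-arg constructor.
class Task implements Entry {
    public Integer input;
    public Task() { }
    public Task(Integer input) { this.input = input; }
}

class Result implements Entry {
    public Integer input;
    public Integer output;
    public Result() { }
}

public class Worker {
    public static void run(JavaSpace space) throws Exception {
        Task template = new Task(); // null fields act as wildcards
        while (true) {
            // Blocking, atomic take: no two workers ever get the same task
            // (and a take done under a transaction would return the task to
            // the space if the worker died -- that's the fault tolerance).
            Task t = (Task) space.take(template, null, Long.MAX_VALUE);
            Result r = new Result();
            r.input = t.input;
            r.output = t.input * t.input; // stand-in for the real work
            space.write(r, null, Lease.FOREVER);
        }
    }
}
```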

I just want to wrap up with an interesting, if older, article by Peter Coffee on the need for distributed control within IT.

Mark my words, Jini's day will come!


Wednesday Apr 20, 2005

Evolutionary Software

In the past I've talked about the need to enable a new development model... something in which agility is shared between the developers and the IT department. Recently I read this article on the move toward ASPs, and the inherent pace that can be achieved when an ASP is a software company's primary target:

BeyondVC: ASPs/Service Providers:

“In the time it takes Microsoft to deliver an application (went from 1 year to 5 years), a company delivering software as a service can deliver 60 iterations of its product. As Adam points out, 'things that breed rapidly more quickly adapt through natural selection to a changing environment.' I have never thought about software in evolutionary terms, but it certainly makes sense.”

This really brings home some of the key points around the ability to create agile business processes through an evolutionary approach and constant refinement. One thing that a utility-centric approach to development enables - one in which the develop->deploy paradigm is both iterative and shares a common runbook - is the ability to “try things out” and have the digital runbook (the functional and systemic design for a production system, like our N1 technology enables) become aware of successes and failures. This builds upon the patterns and micro-architecture work done by the patterns community, but adds a learning path which allows knowledge and experience to be documented and preserved - not in the developers per se, but in their annotations within the design.

I think that one of the things that may be holding some developers back is the fact that their development environments are frequently, substantially different from their deployment environments... something that I think Sun Grid has the opportunity to change. Why not instantiate a developer's instance(s) of a web server (ours, theirs, open - whichever containers have deployment plans for the Sun Grid), app server (ours, theirs, open...) and db container(s) on the grid, attach to them from Java Studio Creator, do initial development, and gather statistics? Then begin to instantiate additional elements of infrastructure - identity, entitlements, logging, firewalls/packet filters - as development progresses, using your corporate runbooks to find deployment errors early, gather statistics, fix incompatibilities and re-factor as necessary to make optimal use of the environment... and so on, until you have a system.
