Transistor overproduction crisis

It is, perhaps, a little-known fact that agricultural overproduction is now considered one of the biggest contributing factors to the health problems found in most developed nations. If you don't quite believe me yet, I highly recommend the eye-opening book "The Omnivore's Dilemma" by Michael Pollan (or at least his short article covering the same subject). For the purposes of this blog entry, however, suffice it to say that most concoctions in the modern supermarket are direct products of food scientists trying to sell you more corn and soy. After all, at the rate the industrial agricultural complex produces those crops, it would be foolish to try to sustain their consumption in raw form alone. Human beings simply cannot eat that many corn ears and soybeans. The industry has to play tricks on us, offering soft drinks (Coke, Pepsi, etc.) and chicken nuggets. It is still the same corn and soy, just packaged differently. And judging by the number of humans turned into "...Disposall for all the unhealthful calories that the farm bill has encouraged American farmers to overproduce", the trick works quite well.

It wasn't until I spoke with a good friend of mine (who goes by the name Fedor Sergeev and happens to be one of the key people on the Sun Studio Project) that I discovered a striking similarity: perhaps the hardware industry is trying to turn all software developers into a Disposall for the transistors that it grossly overproduces. But is there really an overproduction? After all, Moore's Law (guaranteeing us a doubling of the number of transistors available on the same chip every 18 months) used to be a "good thing". True enough. It used to be. Until about 2003. We had a nice stream of innovative CPUs coming down the pipeline: 386, 486, Pentium, Pentium Pro, Xeon and finally Itanium. All that time Intel and others knew pretty well how to put the transistors available to them to good use: that is, how to build ALUs, FPUs and pipelines of ever-increasing complexity. With perfect hindsight, perhaps it is no coincidence that putting transistors to "good use" through overengineering led to the biggest fiasco of all -- Itanium. The idea tanked, but the transistors kept coming. In fact, they kept being produced at such a rate that even the additional logic needed for features like HyperThreading could consume only so many of them.

And let's face it: when all is said and done, the hardware industry is in the business of selling silicon, much as industrial agriculture these days is in the business of selling corn and soy. Both industries have a problem: consumers have as little use for pure silicon as they have for raw corn. So how do you sell 280 million metric tons of corn? Simple: just turn it into all sorts of "useful" products ranging from corn syrup all the way to ethanol. How do you sell 500+ million transistors? Well, here it gets complicated. On the one hand, you can turn them into a really useful product; on the other, you can try to convince customers that they are getting 4 "useful" products instead of one.

The problem I have with the latter approach is pretty simple: I am not convinced yet. You see, I have no qualms about articulating the value proposition of nVidia's latest video card (and who would?). But as a software engineer I have to go to great pains to make Intel's latest quad-core CPU perform at the promised level. Could it be that all the PR around things parallel, multithreaded and concurrent is no different from the latest Coca-Cola commercials -- simply an industry's attempt to sell us more stuff we have no use for? At a conceptual level I don't think it is: after all, the most powerful analytical device known to the human race happens to be mind-bogglingly parallel, so there is definitely a need for a better understanding of how to build massively parallel computational and analytical devices. On the other hand, I am slowly starting to realize that exposing to software the fact that one CPU now looks like a dozen might be a dead end. And it doesn't matter what techniques you use to combat that complexity (be it transactional memory or Cilk); it just doesn't seem to get us much.

Yet those soon-to-be billion transistors have to be sold (and if they don't get sold, beware!), so what are the reasonable choices? In my opinion (and please, please, please don't hesitate to express your own in the comments!) the most fruitful directions that I can see are special devices hidden behind well-defined APIs (think GPUs and OpenGL) and FPGAs hidden behind a yet-to-be-invented language facilitating a spatial approach to problem solving and algorithms. Given the current trends in GPU development, it seems quite obvious that the first approach is a reasonable one. As for the second, FPGAs have long been treated as a stepchild of the hardware and software industries alike. I believe that has to be challenged. In the words of Frank Vahid, it's time to stop calling circuits "hardware". Instead, he suggests, and I quote: "computing departments in universities might reconsider how they introduce circuits to their students. We also can broaden our use of the word software beyond just microprocessors. In this way, circuits and other modeling approaches with a spatial emphasis can assume their proper role in the increasingly concurrent computation world of software development". And then, perhaps, the two approaches can even blend together, so that every time you go to your local electronics store and ask for a video card, the sales clerk reaches into a drawer full of generic FPGAs and hands one over to you coupled with the latest "driver". Could it be the ultimate software version of the ethanol revolution? I don't know. But it sure looks more promising than what you have to go through in order to efficiently compute Fibonacci numbers on a multicore Intel CPU.
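To make that closing jab concrete, here is a minimal sketch of my own (not code from any of the projects mentioned above) of the classic fork-join Fibonacci, in the style of Cilk's textbook example but written in plain Python with `concurrent.futures`, parallelizing only the top-level split. It shows how trivial the decomposition looks on paper -- and hints at why it disappoints in practice: task-spawn overhead (and, in CPython, the GIL) can easily eat whatever speedup the extra cores promise.

```python
# A sketch of fork-join Fibonacci: split the top-level call across
# two threads, keep everything below that sequential.
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    # Plain sequential recursion -- the actual "work".
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_parallel(n):
    # Spawn the two top-level subproblems on separate threads; deeper
    # levels stay sequential to avoid drowning in task overhead.
    if n < 2:
        return n
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(fib, n - 1)   # fib(n-1) on one worker
        b = pool.submit(fib, n - 2)   # fib(n-2) on another
        return a.result() + b.result()

print(fib_parallel(20))  # -> 6765
```

Even this toy exposes the problems: the two subproblems are unequal in size (so one core idles), and getting real scaling would require recursive spawning with careful cutoffs -- exactly the kind of ceremony the paragraph above complains about.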
