Wednesday Dec 16, 2009

48 Core Microprocessors... What's next? - Quantum Computing?

Intel has produced a gem with the Nehalem microprocessor.  While the architecture limits the chip to eight processing cores today, Intel's research and advanced development groups have much bigger ideas.  The 48-core research samples (see picture on left), which some in the industry envision scaling to 100-plus cores one day, are impressive.  In the old days, when I was doing large SMP server development, single-core clock rate was the major vector driving computational performance.  Then, as predicted, a few years into the twenty-first century the technology started to run into the laws of physics: increasing the clock rate was no longer an available option.  Instead, scaling problems began to be addressed with multi-core architectures, in other words placing multiple processing engines on a single die, each addressable by software.  CPU-to-cache interconnects become smaller, and parallel computing is beginning to approach commodity status.  Add software written to exploit these multi-core CPUs, and parallel computing can realize significant gains.
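To make the "software addressing multiple engines" point concrete, here is a minimal sketch of spreading one computation across all available cores, using Python's standard library.  The function names and the toy workload (summing a range in chunks) are my own illustration, not anything specific to Nehalem:

```python
# Split one job into per-core chunks and run them in parallel,
# one worker process per available core.
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=None):
    """Sum range(n) by dividing it into one chunk per worker."""
    workers = workers or cpu_count()
    step = max(1, n // workers)
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same answer as a serial sum, spread over cores
```

The result is identical to a single-threaded sum; the gain is that each core works on its own chunk at the same time.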

An impressive fact about the Intel 48-core research chip is its power consumption of 125 Watts, which averages to about 2.6 Watts per core.  The chip also has the ability to dynamically control voltage and frequency via software, so power consumption can drop well below 125 Watts.  Keep in mind that in the early 2000s, 32 microprocessors in an SMP server would alone consume roughly 2 Kilowatts!  In my opinion, the rising costs of energy and cooling will become a barrier much like the laws of physics.  If you're interested, you can find more info on extreme core architecture here.
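The power comparison above is easy to sanity-check with simple division.  The figures come from the post itself; the variable names are mine:

```python
# Per-core power: the research chip vs. an early-2000s SMP server.
chip_watts, chip_cores = 125.0, 48       # Intel 48-core research chip
legacy_watts, legacy_cpus = 2000.0, 32   # ~2 kW for 32 CPUs in an SMP server

per_core = chip_watts / chip_cores             # ~2.6 W per core
per_legacy_cpu = legacy_watts / legacy_cpus    # ~62.5 W per CPU

print(f"{per_core:.1f} W/core vs {per_legacy_cpu:.1f} W/CPU")
```

Roughly a 24x improvement in power per processing engine, before the software-controlled voltage and frequency scaling even kicks in.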

What are the folks in academia and industry research thinking 10, 20, 30 or more years out?  The semiconductor world is quickly approaching process geometry limits.  At 45 nanometers, for example, you are manipulating layer thicknesses measured in single-digit counts of atoms, and the margin of error keeps getting smaller and smaller.  Is reliability a factor too?  Absolutely.  So where could technology lead us once we are manipulating single atoms to produce silicon?  In research labs today the industry has already been able to manipulate single atoms.
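A rough scale calculation shows why "single digits of atoms" is no exaggeration.  The silicon lattice constant below is a standard physical value (~0.543 nm); the gate oxide thickness is an illustrative ballpark for the 45 nm era, not a quoted spec:

```python
# Rough scale arithmetic (approximate constants, for illustration only).
SI_LATTICE_NM = 0.543    # silicon lattice constant, ~0.543 nm
feature_nm = 45.0        # 45 nm process feature size
gate_oxide_nm = 1.2      # gate oxides of the era were on the order of ~1 nm

cells_across = feature_nm / SI_LATTICE_NM            # ~83 lattice cells wide
oxide_layers = gate_oxide_nm / (SI_LATTICE_NM / 2)   # a handful of atomic layers

print(round(cells_across), round(oxide_layers))
```

A 45 nm feature is only about 83 silicon lattice cells wide, and the thinnest layers are just a few atoms deep, which is exactly where the margin of error vanishes.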

One option would be to discover another performance vector beyond clock rate or core count.  Another option holds some promising hope.  Have you heard of Quantum Computing?  All of us in Computer Science can relate to binary, octal, hexadecimal and decimal.  Are you ready to learn the difference between bits and qubits?  Click here.
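As a small taste of that difference: a classical bit is either 0 or 1, while a qubit is a pair of amplitudes (alpha, beta) with |alpha|² + |beta|² = 1, and n qubits carry 2ⁿ such amplitudes.  Here is a minimal single-qubit sketch (the function and variable names are my own illustration):

```python
# A classical bit is 0 or 1; a qubit is a pair of amplitudes.
# Applying a Hadamard gate to |0> yields an equal superposition:
# measuring it gives 0 or 1, each with probability 1/2.
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (alpha, beta)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)                 # the |0> basis state
superposed = hadamard(zero)
probs = tuple(amp ** 2 for amp in superposed)
print(probs)   # each outcome has probability ~0.5
```

No classical bit can be "half 0 and half 1" like this; that superposition is where the promise of quantum computing starts.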

Blog is also available at:

Wednesday Oct 21, 2009

Around Oracle Open World in less than 180 hours

The turnout of customers and partners from the enterprise technology segment certainly did not disappoint at Oracle Open World this year.  While other large IT events were canceled this year due to the economic downturn, Oracle Open World's attendance of 42,000 IT professionals was essentially unchanged from 2008.  Even more impressive, virtually every enterprise vendor that partners with, competes with, or analyzes Oracle attended this yearly gathering in downtown San Francisco.  The multiple exhibit halls, sessions, events, activities and networking certainly created an environment for plenty of information exchange.

Every vendor at Oracle Open World is a cog (of varying size, from small to large) that fits into the enterprise IT stack of:

  • applications
  • middleware
  • database

Everything from storage management, computational speeds, networking feeds, disaster recovery, hosted IT, employee productivity tools and various communication mediums factors back into the three areas of the enterprise stack above.

In my opinion, enterprise IT purchasing is becoming driven much less by vendor loyalty or a great price, and much more by the vendors that can provide competitive advantage to their customers.  During an economic downturn, as well as post-recovery, competitive advantage will more than outshine a good price.


Tuesday May 26, 2009

CPU or GPU? One or the other, or both?

The transistor counts of both the CPU and the GPU are escalating almost as fast as toxic assets from the subprime mortgage meltdown.  As in every good debate, there are usually two opposing sides to a given topic.  Political parties such as Democrats and Republicans thrive on point-versus-counterpoint arguments, and the analogy certainly applies to semiconductor technology.  Gone are the days of the CPU as the center of the computer.  With the advancement of visual applications in both the commercial and entertainment sectors, graphics processing has staked its own claim as the center of the computer.  Today, 50 years after the first silicon transistor, semiconductor advancements have exceeded the industry predictions of 25 years ago.  It is truly amazing that CPU and GPU transistor counts have grown from hundreds of millions to break the billion-transistor ceiling!  That is one large mass of circuits that has to be designed, verified, placed, routed and timed for chip signoff.

As the industry has pretty much hit the ceiling on clock speed, multiple cores per chip have appeared.  However, having a quad-core CPU does not mean that your office productivity suite will run faster on your desktop, as that application is single-threaded.  Only multi-threaded applications can take advantage of multiple cores.  A good example is virtualization hypervisor software running on multiple bare-metal cores: when you are managing multiple virtual machine instances, many CPU threads come in handy.
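The single- versus multi-threaded distinction can be sketched in a few lines: the same workload run serially, then fanned out across a thread pool.  The workload here is a toy of my own choosing (and CPU-bound Python threads are further limited by the interpreter's GIL, so treat this as an illustration of the structure, not a benchmark):

```python
# The same workload, single-threaded and then multi-threaded.
from concurrent.futures import ThreadPoolExecutor

def work(item):
    return item * item

items = list(range(100))

# Single-threaded: one item at a time, on one core.
serial = [work(i) for i in items]

# Multi-threaded: four workers share the list and can overlap.
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(work, items))

print(serial == threaded)  # True: same results, different execution shape
```

A single-threaded office suite simply never forks the second path, which is why extra cores sit idle underneath it.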

It is obvious that word processing applications do not need extreme graphics processing either.  So what does require high-end graphics?  The graphics capability of today's processors is pretty impressive, but I can think of two areas that push beyond it: high-end video games, and visualization software for high-end computer modeling and manipulation.  Both of these areas have a viable market, as evidenced by the sales of popular gaming consoles such as the PlayStation 3 and the new consoles under development.  In the commercial sector, 3D crash simulations are very cost-effective for automobile manufacturers designing safer automobiles.

Both a Ferrari and a Fiat Cinquecento can go 50 mph (80 kph).  However, not everyone has the need, or the monetary opportunity, to purchase a Ferrari.  The same applies to CPUs and GPUs, depending on what you are trying to do.



The blog of Bob Porras - Vice President, Data, Availability, Scalability & HPC for Sun Microsystems, Inc.

