Java and Multiple Cores: A Brief History
By dannycoward on Oct 30, 2008
To get traffic flowing faster on roads, planners don't raise the speed limit; they build more lanes. That's what chip manufacturers have been doing for some time now, as cranking up the speed of a single core has become too hot to handle.
And that's what the HotSpot JVM has been doing ever since 1998, when the Sun Enterprise 10000 (E10K) arrived with 64 parallel processors: parallelizing the work of the JVM itself so that garbage collection, classloading, and other supporting tasks happen concurrently. Which is exactly how to get traffic flowing faster on the multi-core superhighway.
Great for scaling Java web application performance on multiple cores, because web applications are inherently multithreaded (you got your own thread to load this page). But for compute-intensive apps, like large Monte Carlo simulations or processing massive data sets, more traffic management is needed at the application level to juice the system.
With the addition of the Java Concurrency framework back in Java SE 5.0, developers with a big, gnarly task to hand to a multi-core processor have an API for breaking it down into multiple parallel threads, with task scheduling and synchronization policies handled for them. Helping them use all the lanes on the computing freeway instead of just one.
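To make that concrete, here's a minimal sketch of the pattern, using the ExecutorService and Callable/Future types from java.util.concurrent (the framework added in Java SE 5.0). The Monte Carlo pi estimator and its names (MonteCarloPi, estimatePi) are illustrative inventions for this example, not part of any library: the big task (lots of random samples) is split into one Callable per core, the pool schedules them across the available lanes, and the Futures gather the partial results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative example: estimate pi by Monte Carlo sampling,
// split across multiple cores with java.util.concurrent.
public class MonteCarloPi {

    static double estimatePi(final long samplesPerTask, int tasks) throws Exception {
        // One thread per task; a real app might size this to the core count.
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        List<Future<Long>> partials = new ArrayList<Future<Long>>();

        for (int i = 0; i < tasks; i++) {
            // Each Callable counts how many random points in the unit
            // square fall inside the quarter circle of radius 1.
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    Random rnd = new Random(); // one generator per task, no sharing
                    long hits = 0;
                    for (long s = 0; s < samplesPerTask; s++) {
                        double x = rnd.nextDouble();
                        double y = rnd.nextDouble();
                        if (x * x + y * y <= 1.0) hits++;
                    }
                    return hits;
                }
            }));
        }

        long totalHits = 0;
        for (Future<Long> f : partials) {
            totalHits += f.get(); // blocks until that task finishes
        }
        pool.shutdown();

        // hits/samples approximates pi/4
        return 4.0 * totalHits / (samplesPerTask * tasks);
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("pi ~= " + estimatePi(2000000, cores));
    }
}
```

The point isn't the math: it's that the developer only describes the independent chunks of work, and the framework takes care of the thread lifecycle, scheduling, and safe hand-off of results.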
And there's more good news in the works: a new, fully concurrent garbage collector and more concurrency utilities for developers in JDK 7, and perhaps direct support for concurrency in Java EE too.