Multi-Core or Hyper-Threaded? Or Both?
By ThinGuy on Jul 30, 2010
Recently a question was posed to the Sun Ray User Community: Intel or AMD for Linux Sun Ray Server?
Ford vs Chevy! Coke vs Pepsi! What a great blog topic for a Friday.
You could choose from dual Intel 6-core Xeon X5670 2.93 GHz ("Nehalem") processors or dual AMD 12-core Opteron 6168 1.9 GHz ("Magny-Cours") processors.
Of course the "I love my job and I'd really like to keep it" answer would be Intel, since Oracle does not offer any servers based on the 12-core Opteron (just 8-core models). But let's throw caution to the wind and think about this in the context of what a Sun Ray Server in "traditional" mode (i.e. not kiosk mode) really is. It's a desktop. Unlike kiosk mode, where applications normally execute "somewhere else" (i.e. a terminal server, a VM, etc.), in traditional mode the applications execute on the Sun Ray Server itself. And unlike a desktop, it's multi-user.
So while you definitely want something "server class", you also want something that is going to run \*your\* applications at the best price/performance ratio.
At the end of the day, both options offer 24 threads: the Intel solution by offering 6 Hyper-Threading Technology (HTT) cores per socket (2 sockets × 6 cores × 2 threads), and the AMD by offering 12 single-threaded cores per socket (2 sockets × 12 cores). There's a roughly 1 GHz clock speed difference favoring the Intel solution, but let's not fall prey to the "megahertz myth". Not just yet, anyway.
While you can go out there and find all kinds of "Bench This", "Spec That" reviews, those tests are generally written to extract the most advantage from any given platform. Most of the end-user applications we all actually use aren't.
So, which design is better for "desktop applications", Intel with HTT or AMD with all those glorious physical cores?
Here's where I get to use the most popular, catch-all answer of all time when it comes to any Server Based Computing or VDI question.
It depends on the applications. Doesn't everything?
Recent history would indicate that desktop applications prefer multiple cores over HTT. Or perhaps better stated, the developers of those applications may prefer multi-core development (or at least find it easier).
Remember that the Pentium 4 with HTT ("Northwood") was actually replaced on the desktop in favor of multi-core processors (see Core Duo). In a traditional Sun Ray environment, where a variety of "desktop applications" execute on the Sun Ray Server, understanding some of the possible reasons HTT was replaced by multi-core is interesting, if not important.
When HTT was introduced, most desktop applications simply weren't able to take advantage of it. Add to that, the HTT chips actually consumed a lot more power. The end result was a system that increased your energy costs while decreasing your applications' performance. Explain that one to your boss, Mr. Technology Influencer. Especially with "all those CPUs" showing up in mpstat or perfmon.
None of that, of course, was the fault of the technology. Well, the power was, but not the bad performance or the misconception that each thread is a physical processor sitting in a socket. Truthfully, our traditional performance monitoring tools still promote that misconception. The poor performance was due to applications not taking advantage of HTT on what was still a single core. Didn't it seem like, around 2004-05, the catch-all response to every desktop application performance query was: "Pentium 4, you say? Did you try disabling Hyper-Threading in the BIOS?"
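To see how the tools feed that misconception, it helps to compare the logical CPU count with the physical core count yourself. Here's a minimal, Linux-only sketch, not anything official: the "physical id" and "core id" fields are how x86 Linux exposes topology in /proc/cpuinfo, and on other platforms (or in a container) those fields may be missing, so the function falls back to the logical count.

```python
import os

def logical_cpus():
    # os.cpu_count() reports *logical* CPUs: with HTT enabled,
    # each physical core shows up twice, just like in mpstat/perfmon.
    return os.cpu_count()

def physical_cores():
    # Linux-only sketch (assumption): count unique
    # (physical id, core id) pairs in /proc/cpuinfo.
    # On a non-HTT box this equals logical_cpus().
    cores = set()
    physical_id = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            key, _, value = line.partition(":")
            key = key.strip()
            if key == "physical id":
                physical_id = value.strip()
            elif key == "core id":
                cores.add((physical_id, value.strip()))
    # Fall back to the logical count if the fields are absent.
    return len(cores) or logical_cpus()

if __name__ == "__main__":
    print("logical CPUs:", logical_cpus())
    print("physical cores:", physical_cores())
```

On a dual six-core Nehalem box with HTT enabled, you'd expect the logical count (24) to be double the physical count (12); on the dual twelve-core Magny-Cours box, the two numbers match.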
With Nehalem, Intel put all that bad PR behind them and brought HTT back to the desktop, but with a twist: it's also multi-core.
This is different, but is it better? Maybe. Maybe not. Probably, but...it depends. (Ha!)
We know that the OSes are better equipped for HTT (i.e. Solaris is now optimized for it, along with a million other things), and the chips don't actually consume that much more power, so they are "greener". Goodness for the data center.
From my experience, I'd say both Sun Ray Software and the Oracle VDI stack perform better with HTT (based on sizing numbers in kiosk mode and per-core VM sizing data under Solaris) than they did on the non-HTT models of those chips. Considerably better, all other things being equal (clock speed, # of cores, etc.). But those aren't typically considered "desktop applications"; they are more in the realm of pseudo-operating systems, or at least "server systems". Both of which have been HTT aware for a long time, but that doesn't exactly help \*your\* application. Which leads us to the million dollar question:
How many of the applications that you use today are parallelized so they can execute across multiple threads simultaneously (i.e. HTT aware)? If the answer is "very few", then you're not taking advantage of the Intel design, and the physical cores of the AMD solution may actually perform better for your apps even with the lower clock speed.
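To make "parallelized" concrete, here's an illustrative sketch (not a benchmark, and the workload is a made-up stand-in) of the structure such an application has: independent chunks of work handed to a pool of workers. Whether those workers land on physical cores or HTT siblings is up to the OS scheduler, not the application.

```python
from concurrent.futures import ThreadPoolExecutor

def checksum(chunk):
    # Stand-in for per-chunk work (hypothetical payload).
    return sum(chunk) % 251

# Eight independent chunks of data.
data = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]

# Serial version: one thread, so one core — HTT or not.
serial = [checksum(c) for c in data]

# Parallel version: the same chunks mapped onto a worker pool.
# The OS decides which hardware threads the workers occupy.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(checksum, data))

assert serial == parallel
```

One honest caveat: in CPython, CPU-bound threads contend on the global interpreter lock, so this exact snippet won't actually occupy multiple cores; a native application (or a process pool) would. But the structure — work split into independent tasks mapped onto workers — is what "HTT aware" or "multi-core aware" ultimately requires, and most 2010-era desktop apps simply don't have it.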
Making applications multi-core aware is fairly easy (says the non-programming "developer"), and most existing applications already support this. However, adding HTT capabilities to existing applications is considerably more work. And sure, there are those who will say that HTT can help certain multi-core aware applications, depending on what they are doing. Though I think a lot of those arguments mistake multi-threading for Hyper-Threading, which is in fact simultaneous multi-threading (SMT).
But really, to get the most out of HTT, you need to code your applications a certain way. Intel has guides and all kinds of tools to help application developers get the most out of HTT. But what if those aren't used?
In a single-user use case, the average person might never know the applications they are using aren't taking advantage of HTT, because of the multiple cores and relatively high clock rate. The HTT multi-core chip becomes a Swiss Army Knife, so to speak. If your app can take advantage of HTT, great. If it can't, we've got cores. And on top of that, we have speed! That's beautiful for a PC. A single-user PC.
But how well does it scale out when we are talking about multiple users running those non-HTT-aware apps on the same server? In the AMD design, multi-core (but non-HTT-aware) apps have 24 physical cores to work with; what's the trade-off of the "virtual" cores on the HTT chips? Is the clock speed enough to overcome the difference? Are the other features on the HTT chips enough to tip the scales? Maybe. Probably. It depends.
If you're running Sun Ray Server Software in kiosk mode, or choosing a server to be the hypervisor for Oracle VDI, go with Intel and their HTT "Nehalem" processors. You won't be disappointed. At least I haven't been. I'm sure I'd have a lot of good things to say about the AMD as well.
But if you are actually running desktop apps on the Sun Ray Server, and trying to do so at any kind of scale, I'd say it's at least worth doing some investigation and maybe even some application testing at scale. Then you can really understand what the best fit is for your environment.