Tuesday Apr 14, 2009

Nehalem Memory Configuration

Sun is announcing several new Nehalem-based servers today, so watch out for the news. We are also announcing a Sun Rapid Solution for highly scalable storage based on Lustre; I'll blog more about it soon.

Speaking of Nehalem, a few days ago I read a nice article that gives an introductory guide to the different memory configurations for the Xeon 5500, the official model name for Nehalem. Those who understand the new Nehalem architecture will know that how you configure the memory has performance implications.

Here is my summary of the main points:

\* Nehalem supports 3 memory channels per socket, with 3 DIMMs per memory channel.

\* Every memory channel should be populated with at least 1 DIMM, otherwise there will be a performance hit.

\* Memory speed varies with the number of DIMMs in each channel:
1 DIMM in each memory channel gives the best performance at 1333 MHz
2 DIMMs in any memory channel drop the speed to 1066 MHz
3 DIMMs in any memory channel drop the speed to 800 MHz

\* It is strongly recommended to use a "balanced" configuration, meaning the memory channels are filled in multiples of 3: 3 DIMMs (1 per channel), 6 DIMMs (2 per channel) or 9 DIMMs (everything populated). An unbalanced memory configuration can reduce bandwidth by as much as 23%.

\* Use same-size DIMMs, else you can lose 5-10% of memory bandwidth.

\* For Nehalem processors with a QPI speed of 6.4 GT/s and three 1333 MHz DIMMs (one per memory channel) per socket, the expected memory bandwidth is about 35 GB/s. At 1066 MHz and 800 MHz, expect about 32 GB/s and 25 GB/s respectively. (I pull these rules together in a small sketch after this list.)
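To make these rules concrete, here is a minimal Python sketch that applies them to a hypothetical per-socket DIMM layout. This is not code from Sun or Intel; the function name, the example capacities and the structure are my own, and the speed and bandwidth figures are simply the ones quoted in the list above.

```python
NUM_CHANNELS = 3            # memory channels per Nehalem socket
MAX_DIMMS_PER_CHANNEL = 3   # DIMM slots per channel

# Effective DDR3 speed is set by the most heavily populated channel.
SPEED_BY_MAX_DIMMS = {1: 1333, 2: 1066, 3: 800}      # MHz

# Rough per-socket bandwidth expectations quoted above (6.4 GT/s QPI).
BANDWIDTH_BY_SPEED = {1333: 35, 1066: 32, 800: 25}   # GB/s

def evaluate_config(dimms_per_channel, dimm_sizes_gb):
    """Return (speed in MHz, approx. bandwidth in GB/s, warnings) for one socket.

    dimms_per_channel: DIMM count for each of the 3 channels.
    dimm_sizes_gb: the set of distinct DIMM capacities used.
    """
    assert len(dimms_per_channel) == NUM_CHANNELS
    assert all(0 <= n <= MAX_DIMMS_PER_CHANNEL for n in dimms_per_channel)
    assert max(dimms_per_channel) >= 1, "need at least one DIMM"

    warnings = []
    if min(dimms_per_channel) == 0:
        warnings.append("empty channel: expect a performance hit")
    if len(set(dimms_per_channel)) > 1:
        warnings.append("unbalanced channels: up to ~23% bandwidth loss")
    if len(set(dimm_sizes_gb)) > 1:
        warnings.append("mixed DIMM sizes: expect some bandwidth loss")

    speed = SPEED_BY_MAX_DIMMS[max(dimms_per_channel)]
    return speed, BANDWIDTH_BY_SPEED[speed], warnings

# 6 x 4 GB DIMMs, 2 per channel: balanced, runs at 1066 MHz, ~32 GB/s.
print(evaluate_config([2, 2, 2], {4}))
# 4 x 4 GB DIMMs spread unevenly: still 1066 MHz, but flagged as unbalanced.
print(evaluate_config([2, 1, 1], {4}))
```

Real throughput will of course depend on the workload; the point of the sketch is just to show how the channel population drives the speed bin and, with it, the achievable bandwidth.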

If you are totally new to Nehalem and have no idea about the new memory design, or what QPI and Turbo Boost are, this other article might be useful for you.
