My swap space on an SSD?

I had an interesting discussion with two colleagues about whether it makes sense to put the swap space of a system on an SSD.

If I consider the gain in latency that an SSD brings over a capacity disk - in the region of 100x - the answer seems obvious. Swapping - or more precisely paging - must be much faster with an SSD. Since RAM is expensive compared to SSDs, I could even be tempted to design a system with a small amount of RAM and a large amount of swap space on SSDs. In other words, I can ask myself whether trying to prevent my system from paging is still a fight worth fighting.
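
A quick back-of-the-envelope calculation puts numbers on this intuition. The latencies below are assumed orders of magnitude, not measurements, chosen to match the ratios discussed in this post:

    # Assumed order-of-magnitude latencies, not measured values
    disk_s = 10e-3    # ~10 ms random access on a capacity disk
    ssd_s  = 100e-6   # ~100 us read on a flash SSD
    ram_s  = 100e-9   # ~100 ns DRAM access

    print(f"SSD vs disk: {disk_s / ssd_s:.0f}x faster")   # ~100x
    print(f"RAM vs SSD:  {ssd_s / ram_s:.0f}x faster")    # ~1000x

The first ratio is the ~100x gain mentioned above; the second one already hints at the catch discussed below.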

Let's try to shed some light on these questions.

Paging takes place when my system runs out of RAM, either because more processes are created or because existing processes require more memory (check this article for details about how to monitor paging). At some point, the operating system keeps looking for pages in RAM that can be transferred to the swap space, while at the same time it brings back into RAM pages that were paged out and that are required by running applications. This situation is commonly described as a system that is paging. At this point, the performance of my system drops sharply: copying memory pages back and forth between RAM and disk slows down the whole system, mainly because of disk performance. Moving the swap space from a disk to an SSD does not reduce this activity; it only makes it faster. Bear in mind that the CPU does not have direct access to the swap space, i.e. to the SSD. For the CPU to access data or instructions that have been paged out, they still need to be copied back into RAM, which brings us to another side effect of paging: it creates traffic on the IO bus.
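
The classic effective-access-time formula makes the point concrete. Here is a minimal sketch, reusing the assumed order-of-magnitude latencies from above and purely illustrative page-fault rates:

    # Effective access time: most accesses hit RAM, a fraction 'fault_rate'
    # has to wait for the swap device. Latencies are assumed orders of magnitude.
    RAM_NS, SSD_NS, DISK_NS = 100, 100_000, 10_000_000

    def effective_access_ns(fault_rate, swap_device_ns):
        return (1 - fault_rate) * RAM_NS + fault_rate * swap_device_ns

    for rate in (0.0001, 0.001, 0.01):
        print(f"fault rate {rate:.2%}: "
              f"disk swap {effective_access_ns(rate, DISK_NS):,.0f} ns, "
              f"SSD swap {effective_access_ns(rate, SSD_NS):,.0f} ns")

Swapping to an SSD clearly beats swapping to a disk, but even a small fraction of accesses served from swap keeps the average access time well above the no-paging case.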

In addition, before this critical paging situation is reached, when the demand for RAM starts to grow, other things happen on my system. I am using ZFS for my storage, and ZFS has its primary cache - the ARC - in RAM. When RAM comes under pressure, this cache shrinks. The data removed from the ARC goes into the ZFS level 2 cache - the L2ARC. The L2ARC can be located either on disks or on SSDs, but as soon as it is involved there is additional traffic on the IO bus that now competes with the traffic created by the paging activity. Eventually, when the L2ARC gets full, the data is simply not cached anymore. To make a long story short, if I am running an application that generates a lot of IOs, the shortage of RAM impacts its performance.
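
For illustration, here is a minimal sketch, assuming a Solaris host where kstat(1M) is available, that samples the ARC size once per second; watching it while the demand for RAM grows makes the shrinking of the cache quite visible:

    # Sample the ZFS ARC size every second via kstat(1M).
    # Assumes Solaris; zfs:0:arcstats:size reports the ARC size in bytes.
    import subprocess
    import time

    while True:
        out = subprocess.run(["kstat", "-p", "zfs:0:arcstats:size"],
                             capture_output=True, text=True).stdout
        size_bytes = int(out.split()[-1])      # last field of "-p" output is the value
        print(f"ARC size: {size_bytes / 1024**2:.0f} MB")
        time.sleep(1)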

Finally, we compared the performance of an SSD versus a disk, but in terms of latency an SSD is still about 1000x slower than RAM, so the impact of paging (i.e. moving pages between RAM and the SSD) is still noticeable. In the end, even though SSDs can improve paging performance, preventing my system from paging is still a must if I want to get the best out of it. If I have some extra money to spend on performance after increasing the RAM of my system, and if my application is IO intensive, I would rather buy an SSD for the L2ARC than for swapping. This will certainly have a positive impact on the IO performance of my application.
