Jumbo Frames for RAC Interconnect

At the moment I am investigating for a customer whether it's a good idea to use "Jumbo Frames" within a RAC environment. In the next couple of days I will post some results here.

Until then, I would like to share some thoughts.

First: why would you possibly want "Jumbo Frames"?

Answer: Jumbo Frames (in my Oracle world) are Ethernet frames with a payload of 9000 bytes instead of the conventional 1500 bytes.

As you can imagine, the standard 1500 bytes is too small for our 'regular' Oracle database block of 8k. When you make sure the frames are 9000 bytes, a block fits into a single frame, which eliminates the fragmentation overhead.

Eliminating that overhead would potentially improve performance and decrease CPU usage. This sounds like a good theory and very applicable to the RAC interconnect, where we send 8k blocks over UDP from instance A to instance B.

Investigation tells us Jumbo Frames are not (yet) an IEEE standard. This means you could (I am not saying you will) have problems between the devices that are supposed to be configured to handle frames of this size.

The OS, the network card and the switch all need to 'talk' the same Jumbo Frame size. Note that some vendors call 4000 bytes Jumbo, while others mean 9000 bytes.

Theory tells us properly configured Jumbo Frames can eliminate 10% of overhead on UDP traffic.

So how to test ?

I guess an 'end to end' test would be the best way. So my first test is a 30-minute Swingbench run against a two-node RAC, without too much stress to begin with.

The MTU of the network bond (and its slave NICs) will be 1500 initially.

After the test, collect the results: the total number of transactions, the average transactions per second, the maximum transaction rate (results.xml), the interconnect traffic (AWR) and the CPU usage. Then do exactly the same, but now with an MTU of 9000 bytes. For this we need to make sure the switch settings are also modified to use an MTU of 9000; a sketch of the host-side change follows.
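
To give an idea of what that host-side change can look like, here is a minimal sketch for a RHEL/OEL-style system. The interface names match my setup (bond1 on eth0 and eth2), but the file locations and commands are illustrative only; do this in a maintenance window, and make sure the switch ports allow an MTU of at least 9000:

# add to /etc/sysconfig/network-scripts/ifcfg-bond1, ifcfg-eth0 and ifcfg-eth2
MTU=9000

# then bounce the bond and verify (expect a short interconnect outage on this node)
[root@node01 ~]# ifdown bond1 && ifup bond1
[root@node01 ~]# ip link show bond1 | grep mtu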

B.t.w.: yes, it is possible to measure the network only, but real-life end-to-end testing with a real Oracle application talking to RAC feels like the best approach to see what the impact is on, for example, the average transactions per second.
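
If you do want a network-only measurement first, a tool like iperf can push UDP traffic between the nodes. A minimal sketch (iperf2 syntax; the bandwidth and payload size are just example values, pick ones that resemble your interconnect traffic):

# on node02: start a UDP server
[root@node02 ~]# iperf -s -u
# on node01: send 8k UDP payloads to the interconnect address of node02
[root@node01 ~]# iperf -c node02-ic -u -b 500M -l 8192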

In order to make the test as reliable as possible, a few remarks:
- use guaranteed restore points to flash back the database to its original state between runs (see the sketch after this list)
- stop/start the database (to clean the caches)
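
A minimal sketch of what that reset could look like, assuming a flash recovery area is configured; the restore point and database names are just examples, and in a RAC setup the flashback is done with only one instance mounted:

[oracle@node01 ~]$ sqlplus / as sysdba <<EOF
CREATE RESTORE POINT before_jumbo_test GUARANTEE FLASHBACK DATABASE;
EOF
... run the Swingbench test ...
[oracle@node01 ~]$ srvctl stop database -d orcl
[oracle@node01 ~]$ sqlplus / as sysdba <<EOF
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_jumbo_test;
ALTER DATABASE OPEN RESETLOGS;
SHUTDOWN IMMEDIATE
EOF
[oracle@node01 ~]$ srvctl start database -d orcl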

B.t.w.: before starting the test with an MTU of 9000 bytes, the correct setting had to be proven.

One way to do this is using ping with a packet size (-s) of 8972 bytes while prohibiting fragmentation (-M do). Those 8972 bytes plus the 8-byte ICMP header and the 20-byte IP header add up to exactly 9000 bytes, so such a ping should go through unfragmented when Jumbo Frames are configured correctly.

[root@node01 rk]# ping -s 8972 -M do node02-ic -c 5
PING node02-ic. (192.168.23.32) 8972(9000) bytes of data.
8980 bytes from node02-ic. (192.168.23.32): icmp_seq=0 ttl=64 time=0.914 ms

As you can see this is not a problem. For packets that would exceed 9000 bytes, however, it is:

[root@node01 rk]# ping -s 8973 -M do node02-ic -c 5
PING node02-ic. (192.168.23.32) 8973(9001) bytes of data.
From node02-ic. (192.168.23.52) icmp_seq=0 Frag needed and DF set (mtu = 9000)

--- node02-ic. ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.859/0.955/1.167/0.109 ms, pipe 2

Bringing the MTU back to 1500 should prevent unfragmented 9000-byte packets from being sent at all:

[root@node01 rk]# ping -s 8972 -M do node02-ic -c 5
PING node02-ic. (192.168.23.32) 8972(9000) bytes of data.
--- node02-ic. ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

With the MTU back at 1500, sending 'normal' packets should of course still work:

[root@node01 rk]# ping node02-ic -M do -c 5
PING node02-ic. (192.168.23.32) 56(84) bytes of data.
64 bytes from node02-ic. (192.168.23.32): icmp_seq=0 ttl=64 time=0.174 ms

--- node02-ic. ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.174/0.186/0.198/0.008 ms, pipe 2

Another way to verify the correct usage of the MTU size is the command 'netstat -a -i -n' (the MTU column should show 9000 when you are performing tests with Jumbo Frames):

Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0 1500 0 10371535 0 0 0 15338093 0 0 0 BMmRU
bond0:1 1500 0 - no statistics available - BMmRU
bond1 9000 0 83383378 0 0 0 89645149 0 0 0 BMmRU
eth0 9000 0 36 0 0 0 88805888 0 0 0 BMsRU
eth1 1500 0 8036210 0 0 0 14235498 0 0 0 BMsRU
eth2 9000 0 83383342 0 0 0 839261 0 0 0 BMsRU
eth3 1500 0 2335325 0 0 0 1102595 0 0 0 BMsRU
eth4 1500 0 252075239 0 0 0 252020454 0 0 0 BMRU
eth5 1500 0 0 0 0 0 0 0 0 0 BM

As you can see my interconnect is on bond1 (built on eth0 and eth2). All at 9000 bytes.

Not finished yet, no conclusions yet, but here is my first result.
You will notice the differences are not that significant.

MTU 1500:
TotalFailedTransactions : 0
AverageTransactionsPerSecond : 1364
MaximumTransactionRate : 107767
TotalCompletedTransactions : 4910834

MTU 9000:
TotalFailedTransactions : 1
AverageTransactionsPerSecond : 1336
MaximumTransactionRate : 109775
TotalCompletedTransactions : 4812122

In a chart this will look like this:
[chart: udp_traf01.png]

As you can see, the difference in the number of transactions between the two tests isn't really significant, but the UDP traffic is lower! Still, I expected more from this test, so I have to put more stress on the system.

I noticed the failed transaction and found "ORA-12155: TNS:received bad datatype in NSWMARKER packet". I verified this and I am sure it is not related to the MTU size, because I only changed the MTU for the interconnect and there is no TNS traffic on that network.

As said, I will now continue with tests that put much more stress on the systems:
- number of users changed from 80 to 150 per database
- number of databases changed from 1 to 2
- more network traffic:
  - rebuilt the Swingbench indexes without the 'REVERSE' option
  - altered the sequences: lowered the 'increment by' value to 1 and the cache size to 3 (instead of 800), as sketched after this list
- full table scans all the time on each instance
- run longer (4 hours instead of half an hour)
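
A minimal sketch of that sequence change (the SOE schema and sequence name are just examples from my Swingbench setup; a tiny cache forces the instances to coordinate on the sequence far more often, which drives up interconnect traffic):

[oracle@node01 ~]$ sqlplus / as sysdba <<EOF
ALTER SEQUENCE soe.orders_seq INCREMENT BY 1 CACHE 3;
EOF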

Now the results are already improving. For the 4-hour test, the number of extra UDP packets sent with an MTU of 1500 compared to an MTU of 9000 is about 2.5 to 3 million, see this chart:

[chart: udptraf02.png]
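
One simple way to capture such UDP packet counters on the nodes is to snapshot the kernel statistics at the start and the end of each run and take the deltas of the 'packets sent' and 'packets received' lines (a sketch; the AWR interconnect statistics tell a similar story):

[root@node01 rk]# netstat -su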

Imagine what an impact this has. Every packet you do not send saves you the network overhead of the packet itself and a lot of CPU cycles that you do not need to spend.

The load average of the Linux box also decreases, from an average of 16 to 14.
[chart: load_avg01.png]
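
The load and CPU figures come from the usual OS tools; a sketch of how such numbers can be collected during a 4-hour run (sysstat package, sampling every 60 seconds):

[root@node01 rk]# sar -q 60 240   # run queue and load averages
[root@node01 rk]# sar -u 60 240   # CPU utilization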

In terms of completed transactions at the different MTU sizes within the same timeframe, the chart looks like this:

[chart: trans01.png]

To conclude this test, two very high load runs were performed. Again, one with an MTU of 1500 and one with an MTU of 9000.

In the charts below you will see less CPU consumption when using 9000 bytes for MTU.

Also, fewer packets are sent, although I think that number is not that significant compared to the total number of packets sent.

[chart: cpu_load_01.png]

[chart: packets_01.png]

My final thoughts on this test:

1. you will hardly notice the benefits of using Jumbo Frames on a system under no stress
2. you will notice the benefits of using Jumbo Frames on a stressed system: such a system will use less CPU and have less network overhead.

This means Jumbo Frames help you scale out better than regular frames.

Depending on the interconnect usage of your applications the results may vary, of course. With interconnect-intensive applications you will see the benefits earlier than with applications that have relatively little interconnect activity.

I would use Jumbo Frames to scale better, since they save CPU and reduce network traffic, leaving room for growth.

Rene Kundersma
Oracle Expert Services, The Netherlands

Comments:

Hi Rene, Excellent article. Looking forward to the next. :) Much appreciated. Thanks, Suresh

Posted by SURESH on June 02, 2009 at 06:08 AM PDT #

How do you set the MTU to 9K from an Oracle perspective, do we do it with the SDU and TDU parameters? Can you please provide an example.

Posted by Mukesh Sharma on December 02, 2010 at 12:14 PM PST #

Excellent article, especially the last portion - where it is useful.
You said - "Depending on the interconnect usage of your applications the results may vary of course".
Typically - what is the range beyond which you would rate the interconnect usage as heavy?

Thanks & regards
Subrata Saha

Posted by subrata saha on April 05, 2013 at 02:07 PM PDT #

First, don't expect a network card to run at full capacity with 100% efficiency. Due to collisions and retransmits, efficiency can drop as utilization increases. There is no golden number, so I am not going there; you need to test that for your environment. But imagine a network utilized at 66%: that means 2 out of 3 times it's 'busy'.

Posted by Rene on April 09, 2013 at 02:36 AM PDT #

Thanks for good information!

We are currently running Oracle RAC 11.2.0.3.7 and planning to implement Jumbo Frames. Do we need a database restart after implementation?

Thanks Advance,

Posted by Ramesh Revella on April 08, 2014 at 10:33 AM PDT #

Hi,

I would bring down the database (and GI), make the change, then restart the stack in a planned maintenance window, test your change and then open it back up for service.

Changing this on the fly is not recommended: to the cluster and database it might look like the interconnect is going down, resulting in evictions.

Rene

Posted by Rene on April 08, 2014 at 02:09 PM PDT #
