Solaris Network Tuning for WebSphere Application Environment

This blog entry clarifies the Solaris network tuning recommendations discussed in the WAS v6.1 on Solaris 10 Redbook (Ch 2.2, p. 25) and the WAS v6.1 ND Tuning Guide. [Note: The following tuning parameters are for WAS v6.1 or newer.]

TCP Tuning Parameters

To sustain good network throughput and performance, it is recommended that you set adequate TCP tuning parameters on Solaris 10. You can do this with the ndd command as follows:

    # ndd -set /dev/tcp tcp_conn_req_max_q 16384 
    # ndd -set /dev/tcp tcp_conn_req_max_q0 16384 
    # ndd -set /dev/tcp tcp_max_buf 4194304 
    # ndd -set /dev/tcp tcp_cwnd_max 2097152 
    # ndd -set /dev/tcp tcp_recv_hiwat 400000 
    # ndd -set /dev/tcp tcp_xmit_hiwat 400000 
You can use the following commands to query the current TCP settings on your system:
    # ndd -get /dev/tcp tcp_conn_req_max_q 
    # ndd -get /dev/tcp tcp_conn_req_max_q0 
    # ndd -get /dev/tcp tcp_max_buf 
    # ndd -get /dev/tcp tcp_cwnd_max 
    # ndd -get /dev/tcp tcp_recv_hiwat 
    # ndd -get /dev/tcp tcp_xmit_hiwat 
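The six queries above can be collapsed into a single loop. The sketch below only prints the ndd commands so it is safe to run anywhere; on Solaris, pipe its output to sh (as root) to print each current value in order:

```shell
# Emit one "ndd -get" command per tunable; pipe to sh on Solaris to
# display the current value of each parameter.
query_cmds() {
    for p in tcp_conn_req_max_q tcp_conn_req_max_q0 tcp_max_buf \
             tcp_cwnd_max tcp_recv_hiwat tcp_xmit_hiwat; do
        echo "ndd -get /dev/tcp $p"
    done
}
query_cmds
```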
Additionally, you can apply the following settings:
    # ndd -set /dev/tcp tcp_ip_abort_interval 60000
    # ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
    # ndd -set /dev/tcp tcp_rexmit_interval_max 10000
    # ndd -set /dev/tcp tcp_rexmit_interval_min 3000 
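One caveat worth noting: ndd changes take effect immediately but do not survive a reboot. A common approach on Solaris 10 is to re-apply them from a boot-time rc script. The sketch below uses assumed file names and prints the commands (a dry run) rather than executing them; remove the echo to apply the values for real.

```shell
#!/bin/sh
# Sketch of a boot-time script (e.g. /etc/init.d/network-tuning, linked
# from /etc/rc2.d -- both names are assumptions) that re-applies tunables.
# The echo makes this a dry run; remove it to actually set the values.
apply_tcp() {
    echo ndd -set /dev/tcp "$1" "$2"
}
apply_tcp tcp_ip_abort_interval       60000
apply_tcp tcp_rexmit_interval_initial 4000
apply_tcp tcp_rexmit_interval_max     10000
apply_tcp tcp_rexmit_interval_min     3000
```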
If needed, you can adjust these parameters up or down after a number of performance test iterations. For more details on network tuning, refer to the Networking section of the Solaris Internals book.

These are system-wide settings and are applied in the global zone.

Soft Ring Count

Another recommended network-related tuning on Solaris 10 is set in /etc/system:

    set ip:ip_soft_rings_cnt = 8
    set ddi_msix_alloc_limit = 8
The details of these settings are given in the Networking section of the Solaris Internals book and are summarized below:
  • ip_soft_rings_cnt: This is a system-wide setting for how many software rings (aka soft rings) are used to process received packets. The default is 2 on Niagara systems. For optimal receive throughput, it is recommended to start with 8 to 16 software rings on CMT systems, and 16 or 32 on M-series. The optimal number of software rings depends on the network device and the workload, and you can specify a different number of software rings per network device.
  • ddi_msix_alloc_limit: This is a system-wide setting for the maximum number of MSIs (Message Signaled Interrupts) and MSI-X interrupts that can be allocated per PCI device. The default is a maximum of 2 MSIs per device. Each receive DMA channel of a NIC can generate one interrupt, and each interrupt targets one CPU. The Sun Multithreaded 10GbE has 8 receive DMA channels per port, and the Quad GbE has 4, so their interrupts can target at most 8 and 4 different CPUs, respectively. To avoid the interrupt CPU becoming a performance bottleneck, it is recommended to start with the number of receive DMA channels per port or the number of CPUs, whichever is lower, so that the interrupt load is distributed across enough CPUs.
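The sizing rule in the second bullet (take the lower of the per-port receive DMA channel count and the CPU count) can be sketched as follows. The counts here are illustrative assumptions; on a live system the CPU count would come from something like psrinfo:

```shell
# Recommended ddi_msix_alloc_limit = min(rx DMA channels per port, # of CPUs).
rx_channels=8    # e.g. Sun Multithreaded 10GbE: 8 receive DMA channels/port
ncpus=64         # illustrative value; on Solaris: psrinfo | wc -l
if [ "$rx_channels" -le "$ncpus" ]; then
    msix_limit=$rx_channels
else
    msix_limit=$ncpus
fi
echo "set ddi_msix_alloc_limit = $msix_limit"   # -> set ddi_msix_alloc_limit = 8
```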
These are system-wide settings and are applied in the global zone.
