Friday Feb 08, 2013

Improved Oracle Solaris 10 1/13 Secure Copy Performance for High Latency Networks

With Oracle Solaris 10 1/13, the performance of secure copy (scp) over high-latency networks is significantly improved.

  • With the TCP receive window size raised to 1 MB, Oracle Solaris 10 1/13 delivers transfer times up to 8 times faster over the 50 - 200 msec latency range than the previous Oracle Solaris 10 8/11 release.

  • The default TCP receive window size of 48 KB delivered similar performance in both Oracle Solaris 10 1/13 and Oracle Solaris 10 8/11.

  • In this study, settings above 1 MB for the TCP receive window size delivered similar performance to the 1 MB results.

  • The tuning of the TCP receive window has been available in Oracle Solaris for some time. This improved performance is available with Oracle Solaris 10 1/13 and Oracle Solaris 11.
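The sensitivity to window size follows from the bandwidth-delay product: a TCP sender can have at most one receive window of unacknowledged data in flight per round trip, so single-stream throughput is bounded by window size divided by round-trip time. A small sketch of that bound (the figures are illustrative, not measurements from this study):

```python
# Upper bound on single-stream TCP throughput: window size / round-trip time.
# Illustrative numbers only -- not measurements from this study.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Best-case throughput in Mb/sec for a given receive window and RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# Default 48 KB window at 200 msec latency:
default = max_throughput_mbps(48 * 1024, 200)

# Tuned 1 MB window at the same latency:
tuned = max_throughput_mbps(1024 * 1024, 200)

print(f"48 KB window: {default:.1f} Mb/sec")
print(f"1 MB window:  {tuned:.1f} Mb/sec")
```

At 200 msec the default window caps a single stream at roughly 2 Mb/sec regardless of link capacity, which is why widening the window helps so much at high latency.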

Performance Landscape

[Figure: SPARC T4-4 scp transfer time vs. network latency (T4-4_SSH_SCP.png)]

[Figure: Sun Fire X4170 M2 scp transfer time vs. network latency (X4170M2_SSH_SCP.png)]

Configuration Summary

Test Systems:

SPARC T4-4 server
4 x SPARC T4 processor 3.0 GHz
1 TB memory
Oracle Solaris 10 1/13
Oracle Solaris 10 8/11

Sun Fire X4170 M2
2 x Intel Xeon X5675 3.06 GHz
48 GB memory
Oracle Solaris 10 1/13
Oracle Solaris 10 8/11

Driver System:

Sun Fire X4170 M2
2 x Intel Xeon X5675 3.06 GHz
48 GB memory
Oracle Solaris 10

Router / Programmable Delay System:

Sun Fire X4170 M2
2 x Intel Xeon X5675 3.06 GHz
48 GB memory
Oracle Solaris 10

Switch in between the router and the 2 test systems

Cisco Linksys SR2024C

Benchmark Description

This benchmark measures scp performance between two systems with a variable router delay in the network between them. A 48 MB file was transferred while measuring the effects of varying the latency (network delay) and the TCP receive window size.

Key Points and Best Practices

  • The WAN emulator hxbt is used in the router to introduce delays. After setting the emulator, network function and characteristics were verified with Netperf latency and bandwidth tests between the driver and test systems.

  • Transfers were performed over a private, dedicated 1 GbE network.

  • Files were transferred to and from /tmp (i.e., in memory) on the test systems to minimize the effect of filesystem performance and variability on the measurements.

  • TCP receive windows larger than the default can be enabled using the system-wide parameter tcp_recv_hiwat (e.g., to enable 1024 KB windows, use the command: ndd -set /dev/tcp tcp_recv_hiwat 1048576). To make this change persistent, the command must be added to the system startup scripts.

  • sshd on the target system must be restarted after increasing the enabled TCP receive buffer size before any benefit can be observed (e.g., restart with the command /usr/sbin/svcadm restart svc:/network/ssh:default).

  • Note that tcp_recv_hiwat is a system-wide variable that adjusts the entire TCP stack. Care, therefore, must be taken to make sure that changes do not adversely affect your environment.

  • Geographically distant servers can be affected by connection latencies of the kind presented here.
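Taken together, the tuning steps above amount to a short sequence (Solaris-specific commands, run as root; the 1 MB value is the setting used in this study, not a universal recommendation):

```shell
# Raise the maximum TCP receive window to 1 MB, system-wide.
ndd -set /dev/tcp tcp_recv_hiwat 1048576

# Confirm the new value.
ndd -get /dev/tcp tcp_recv_hiwat

# Restart sshd on the target system so that new scp sessions pick up
# the larger buffer.
/usr/sbin/svcadm restart svc:/network/ssh:default
```

For persistence across reboots, place the ndd command in a system startup script as noted above.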

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 2/08/2013.

Thursday Sep 29, 2011

SPARC T4-1 Server Outperforms Intel (Westmere AES-NI) on IPsec Encryption Tests

Oracle's SPARC T4 processor has significantly greater performance than the Intel Xeon X5690 processor when both are using Oracle Solaris 11 secure IP networking (IPsec). The SPARC T4 processor using IPsec AES-256-CCM mode achieves line speed over a 10 GbE network.

  • On IPsec, SPARC T4 processor is 23% faster than the 3.46 GHz Intel Xeon X5690 processor (Intel AES-NI).

  • The SPARC T4 processor is only at 23% utilization when running at its maximum throughput making it 3.6 times more efficient at secure networking than the 3.46 GHz Intel Xeon X5690 processor.

  • The 3.46 GHz Intel Xeon X5690 processor is nearly fully utilized at its maximum throughput leaving little CPU for application processing.

  • The SPARC T4 processor using IPsec AES-256-CCM mode achieves line speed over a 10 GbE network.

  • The SPARC T4 processor approaches line speed with fewer than one-quarter the number of IPsec streams required for the Intel Xeon X5690 processor to achieve its peak throughput. The SPARC T4 processor supports the additional streams with minimal extra CPU utilization.

IPsec provides general purpose networking security that is transparent to applications, making it ideal for networking applications that do not have cryptography built in. IPsec is useful for more than the Virtual Private Network (VPN) deployments where the technology is often first encountered.

Performance Landscape

Performance was measured using the AES-256-CCM cipher in megabits per second (Mb/sec) aggregate over sufficient numbers of TCP/IP streams to achieve line rate threshold (SPARC T4 processor) or drive a peak throughput (Intel Xeon X5690).

Processor          GHz          AES Decrypt                        AES Encrypt
                          B/W (Mb/sec)  CPU Util  Streams   B/W (Mb/sec)  CPU Util  Streams
– Peak performance
SPARC T4           2.85      9,800        23%       96          9,800       20%       78
Intel Xeon X5690   3.46      8,000        83%                   4,700       81%

– Load at which SPARC T4 processor performance crosses 9000 Mb/sec
SPARC T4           2.85      9,300        19%       17          9,200       15%       17
Intel Xeon X5690   3.46      4,700        41%                   3,200       47%

Configuration Summary

SPARC Configuration:

SPARC T4-1 server
1 x SPARC T4 processor 2.85 GHz
128 GB memory
Oracle Solaris 11
Single 10-Gigabit Ethernet XAUI Adapter

Intel Configuration:

Sun Fire X4270 M2
1 x Intel Xeon X5690 3.46 GHz, Hyper-Threading and Turbo Boost active
48 GB memory
Oracle Solaris 11
Sun Dual Port 10GbE PCIe 2.0 Networking Card with Intel 82599 10GbE Controller

Driver Systems Configuration:

2 x Sun Blade 6000 chassis each with
1 x Sun Blade 6000 Virtualized Ethernet Switched Network Express Module 10GbE (NEM)
10 x Sun Blade X6270 M2 server modules each with
2 x Intel Xeon X5680 3.33 GHz, Hyper-Threading and Turbo Boost active
48 GB memory
Oracle Solaris 11
Dual 10-Gigabit Ethernet Fabric Expansion Module (FEM)

Benchmark Configuration:

Netperf 2.4.5 network benchmark adapted for testing bandwidth of multiple streams in aggregate.

Benchmark Description

The results here are derived from runs of the Netperf 2.4.5 benchmark. Netperf is a client/server benchmark measuring network performance providing a number of independent tests, including the TCP streaming bandwidth tests used here.

Netperf is, however, a single network stream benchmark and to demonstrate peak network bandwidth over a 10 GbE line under encryption requires many streams.

The Netperf documentation provides an example of using the software to drive multiple streams. That example is not sufficient for this workload because it does not scale beyond a single driver node, which limits the processing power that can be applied and, in turn, how many full-bandwidth streams can be supported. We chose to run a single server process on the target system (containing either the SPARC T4 processor or the Intel Xeon processor) and to spawn one or more Netperf client processes across a cluster of driver systems. The client processes are managed by the mpirun program of the Oracle Message Passing Toolkit.
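As a sketch of that arrangement (hostnames, process counts, durations, and the netserver path are illustrative, not the exact values used in the study):

```shell
# On the target system: one netserver instance answers all clients.
/usr/local/bin/netserver

# From a driver node: spawn 32 netperf TCP streaming clients across the
# driver cluster using mpirun from the Oracle Message Passing Toolkit.
mpirun -np 32 -host driver1,driver2,driver3,driver4 \
    netperf -H target-system -t TCP_STREAM -l 180
```

Each netperf process reports its own bandwidth; the harness aggregates them as described below.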

Tabular results include aggregate bandwidth and CPU utilization. The aggregate bandwidth is computed by dividing the total traffic of the client processes by the overall runtime. CPU utilization on the target system is the average of that reported by all of the Netperf client processes.
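The aggregation arithmetic is straightforward; a minimal sketch (the per-client byte counts and runtime here are made-up inputs, not results from the study):

```python
# Aggregate bandwidth: total traffic of all client processes divided by
# the overall runtime. Input numbers are illustrative only.

def aggregate_mbps(bytes_per_client: list[int], runtime_s: float) -> float:
    """Aggregate bandwidth in Mb/sec over all client streams."""
    return sum(bytes_per_client) * 8 / runtime_s / 1_000_000

# Example: 96 clients, each moving ~2.3 GB during a 180 s run.
print(f"{aggregate_mbps([2_300_000_000] * 96, 180):.0f} Mb/sec")
```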

IPsec is configured in the operating system of each participating server transparently to Netperf and applied to the dedicated network connecting the target system to the driver systems.

Key Points and Best Practices

  • Line speed is defined as data bandwidth within 10% of the theoretical maximum bit rate of the network line; for 10 GbE, bandwidth greater than 9000 Mb/sec qualifies as line speed.

  • IPsec provides network security that is configured completely in the operating system and is transparent to the application.

  • Peak bandwidths under IPsec are achieved only in aggregate with multiple client network streams to the target server.

  • Oracle Solaris receiver fanout must be increased from the default to support the large numbers of streams at quoted peak rates.

  • The ixgbe network driver, used on servers with Intel 82599 10GbE controllers (the driver systems and the Intel Xeon target system), was limited to a single receiver queue to maximize the benefit of the extra fanout.

  • IPsec is configured to make a unique security association (SA) for each connection to avoid a bottleneck over the large stream counts.

  • Jumbo frames are enabled (MTU of 9000) and network interrupt blanking (sometimes called interrupt coalescence) is disabled.

  • The TCP streaming bandwidth tests, which run continuously for minutes and multiple times to determine statistical significance, are configured to use message sizes of 1,048,576 bytes.

  • IPsec configuration defines that each SA is established through the use of a preshared key and Internet Key Exchange (IKE).

  • IPsec encryption uses the Solaris Cryptographic Framework which applies the appropriate accelerated provider on both the SPARC T4 processor and the Intel Xeon processor.

  • There is no need to configure a specific authentication algorithm for IPsec. With the Encapsulated Security Payload (ESP) security protocol and choosing AES-256-CCM for the encryption algorithm, the encapsulation is self-authenticating.
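As a rough illustration of the kind of policy involved (addresses are placeholders and the exact algorithm token can vary by release; consult ipsecconf(1M) before use):

```
# /etc/inet/ipsecinit.conf -- illustrative sketch, not the study's exact
# configuration. Apply ESP with AES-CCM encryption to traffic on the
# dedicated test network (placeholder addresses).
{laddr 192.0.2.1 raddr 192.0.2.2} ipsec {encr_algs aes-ccm sa shared}
```

The policy is activated with ipsecconf -a /etc/inet/ipsecinit.conf, and the per-connection security associations are then negotiated by IKE using the preshared key.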

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 9/26/2011.

SPARC T4-2 Server Beats Intel (Westmere AES-NI) on SSL Network Tests

Oracle's SPARC T4 processor is faster and more efficient than the Intel Xeon X5690 processor (with AES-NI) when running network SSL throughput tests.

  • The SPARC T4 processor at 2.85 GHz is 20% faster than the 3.46 GHz Intel Xeon X5690 processor on single stream network SSL encryption.

  • The SPARC T4 processor requires fewer streams to attain near line speed on a 10 GbE secure network, and does so with one-fifth the CPU resources of the Intel Xeon X5690 processor.

  • Oracle's SPARC T4-2 server using 8 threads achieves line speed over a 10 GbE network with only 9% CPU utilization.

  • Oracle's Sun Fire X4270 M2 with two Intel Xeon X5690 processors achieves line speed with 8 threads, but at 45% CPU utilization.

The SPARC T4 processor has hardware support via Encryption Instruction Accelerators for encryption and decryption for AES and many other ciphers. The Intel Xeon X5690 processor has AES-NI instructions which accelerate only AES ciphers.

Performance Landscape

The following table shows single stream results running encrypted (SSL Read) and unencrypted (Clear Text) messages of 1 MB in size. These tests were run with the uperf benchmark and used the AES-256-CBC cipher. They were run across a 10 GbE connection. Write messages saw similar performance.

Single Stream Network Communication with Uperf

Processor                    Clear Text (Mb/sec)   SSL Read (Mb/sec)
SPARC T4, 2.85 GHz                 4,194                 1,678
Intel Xeon X5690, 3.46 GHz         5,591                 1,398

The next table shows how many streams it takes to achieve 90% of the 10 GbE network bandwidth (9000 Mb/sec) for encrypted read messages of 1 MB in size. These tests were run with the uperf benchmark and used the AES-256-CBC cipher. Write messages saw similar performance.

Uperf SSL Read with AES-256-CBC

Processor                    Streams for 90%        CPU Utilization
                             Network Utilization
SPARC T4, 2.85 GHz                   8                     9%
Intel Xeon X5690, 3.46 GHz          12                    45%

Configuration Summary

SPARC T4 Configuration:

2 x SPARC T4-2 servers each with
2 x SPARC T4 processors, 2.85 GHz
128 GB memory
1 x 10-Gigabit Ethernet XAUI Adapter
Oracle Solaris 11
Back-to-back 10 GbE connection

Intel Configuration:

2 x Sun Fire X4270 M2 servers each with
2 x Intel Xeon X5690 processors, 3.46 GHz
48 GB memory
1 x Sun Dual Port 10GbE PCIe 2.0 Networking Card with Intel 82599 10GbE Controller
Oracle Solaris 11
Back-to-back 10 GbE connection

Software Configuration:

OpenSSL 1.0.0d
uperf 1.0.3
gcc 3.4.3

Benchmark Description

Uperf is an open source benchmark program for simulating and measuring network performance. Uperf is able to measure the performance of various protocols, including TCP, UDP, SCTP and SSL. The uperf benchmark uses an input-defined workload to test network performance. This input workload can be used to model complex situations or to isolate simple tasks. The workload used for these tests was simple network reads and simple network writes.
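A uperf workload is described by an XML profile. A minimal sketch resembling the read test described here might look like the following (the hostname, thread count, message size, and duration are illustrative, not the study's exact settings):

```
<?xml version="1.0"?>
<profile name="ssl-read">
  <group nthreads="8">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=target protocol=ssl"/>
    </transaction>
    <transaction duration="120s">
      <flowop type="read" options="size=1m"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
```

The profile is run with uperf -m profile.xml on the client while uperf -s listens on the server.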

Key Points and Best Practices

  • The Encryption Instruction Accelerators are accessed through a platform-independent API for cryptographic engines.
  • The OpenSSL libraries use this API; by default, OpenSSL does not use the Encryption Instruction Accelerators.
  • Cryptography is compute intensive. With 8 streams (one per thread), the SPARC T4 processor was able to match the bandwidth of the 10 GbE network.
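One way to check whether an accelerated provider is actually engaged is through OpenSSL's engine support. Engine availability varies by build; the pkcs11 engine shown here is the one Solaris uses to reach the Cryptographic Framework, so treat this as a sketch rather than a universal recipe:

```shell
# List the engines this OpenSSL build knows about.
openssl engine

# Time AES-256-CBC through the pkcs11 engine, which routes requests to
# the Solaris Cryptographic Framework and its hardware providers.
openssl speed -evp aes-256-cbc -engine pkcs11
```

Comparing the timings with and without -engine pkcs11 shows whether the accelerators are being used.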

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 9/26/2011.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
