Oracle's SPARC T7-1 server can encrypt/decrypt at near clear-text throughput. The SPARC T7-1 server can encrypt/decrypt on the fly and still have CPU cycles left over for the application.
The SPARC T7-1 server performed 475,123 clear-text 8K read IOPS. With AES-256-CCM enabled on the file system, 8K read IOPS drop only about 3%, to 461,038.
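As a quick check, the throughput cost of enabling AES-256-CCM can be computed directly from the IOPS figures reported above (a minimal sketch using only the published numbers):

```python
# Reported 8K random read IOPS from the benchmark results above.
clear_iops = 475_123    # clear-text (unencrypted) ZFS file system
aes256_iops = 461_038   # AES-256-CCM encrypted ZFS file system

# Relative throughput drop when encryption is enabled.
drop_pct = (clear_iops - aes256_iops) / clear_iops * 100
print(f"AES-256-CCM 8K read IOPS drop: {drop_pct:.1f}%")  # about 3%
```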
The SPARC T7-1 server performed 461,038 AES-256-CCM 8K read IOPS and a two-chip x86 E5-2660 v3 server performed 224,360 AES-256-CCM 8K read IOPS. The SPARC M7 processor result is 4.1 times faster per chip.
The SPARC T7-1 server performed 460,600 AES-192-CCM 8K read IOPS and a two-chip x86 E5-2660 v3 server performed 228,654 AES-192-CCM 8K read IOPS. The SPARC M7 processor result is 4.0 times faster per chip.
The SPARC T7-1 server performed 465,114 AES-128-CCM 8K read IOPS and a two-chip x86 E5-2660 v3 server performed 231,911 AES-128-CCM 8K read IOPS. The SPARC M7 processor result is 4.0 times faster per chip.
The SPARC T7-1 server performed 475,123 clear-text 8K read IOPS and a two-chip x86 E5-2660 v3 server performed 438,483 clear-text 8K read IOPS. The SPARC M7 processor result is 2.2 times faster per chip.
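The per-chip comparisons above follow from dividing each server's IOPS by its processor count: one SPARC M7 chip in the T7-1 versus two E5-2660 v3 chips in the x86 server. A sketch using the figures reported above:

```python
# (SPARC T7-1 IOPS, 2 x E5-2660 v3 IOPS) from the results above.
results = {
    "Clear":       (475_123, 438_483),
    "AES-256-CCM": (461_038, 224_360),
    "AES-192-CCM": (460_600, 228_654),
    "AES-128-CCM": (465_114, 231_911),
}

for mode, (t7_iops, x86_iops) in results.items():
    # The SPARC T7-1 has 1 SPARC M7 chip; the x86 server has 2 chips.
    ratio = (t7_iops / 1) / (x86_iops / 2)
    print(f"{mode}: SPARC M7 result is {ratio:.1f}x faster per chip")
```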
Results presented below are for random read performance at an 8K transfer size. All of the following results were run as part of this benchmark effort.
Read Performance – 8K

|Encryption|SPARC T7-1 IOPS|Resp Time|% Busy|2 x E5-2660 v3 IOPS|Resp Time|% Busy|
|---|---|---|---|---|---|---|
|Clear|475,123|0.8 msec|43%|438,483|0.8 msec|95%|
|AES-256-CCM|461,038|0.83 msec|56%|224,360|1.6 msec|97%|
|AES-192-CCM|460,600|0.83 msec|56%|228,654|1.5 msec|97%|
|AES-128-CCM|465,114|0.82 msec|57%|231,911|1.5 msec|96%|
IOPS – IO operations per second
Resp Time – response time
% Busy – percent CPU usage
The benchmark tests the performance of an encrypted ZFS file system compared to the non-encrypted (clear text) ZFS file system. The tests were executed with Oracle's Vdbench tool Version 5.04.03. Three different encryption methods are tested: AES-256-CCM, AES-192-CCM and AES-128-CCM.
The ZFS file system was configured with data cache disabled, meta cache enabled, 4 pools, 128 LUNs, and 192 file systems with an 8K record size. Data cache was disabled to ensure data would be decrypted as it was read from storage. This is not a recommended setting for normal customer operations.
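A configuration along these lines can be sketched with standard Solaris 11 ZFS commands; the pool and dataset names below are illustrative, not taken from the benchmark setup:

```shell
# Illustrative sketch only; pool/dataset names are hypothetical.
# Create an 8K-recordsize file system encrypted with AES-256-CCM
# (Solaris prompts for a passphrase unless a keysource is supplied).
zfs create -o encryption=aes-256-ccm -o recordsize=8k p1/fs001

# Cache only metadata in the ARC so every data read must be decrypted
# from storage (benchmark-only setting, not recommended in production).
zfs set primarycache=metadata p1/fs001
```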
The tests were executed with Oracle's Vdbench tool against 192 file systems. Each file system was run with a queue depth of 2. The script used for testing is listed below.
hd=default,jvms=16
sd=sd001,lun=/dev/zvol/rdsk/p1/vol001,size=5g,hitarea=100m
sd=sd002,lun=/dev/zvol/rdsk/p1/vol002,size=5g,hitarea=100m
#
# sd003 through sd191 statements here
#
sd=sd192,lun=/dev/zvol/rdsk/p4/vol192,size=5g,hitarea=100m

# VDBENCH work load definitions for run
# Sequential write to fill storage.
wd=swrite1,sd=sd*,readpct=0,seekpct=eof
# Random Read work load.
wd=rread,sd=sd*,readpct=100,seekpct=random,rhpct=100

# VDBENCH Run Definitions for actual execution of load.
rd=default,iorate=max,elapsed=3h,interval=10
rd=seqwritewarmup,wd=swrite1,forxfersize=(1024k),forthreads=(16)
rd=default,iorate=max,elapsed=10m,interval=10
rd=rread8k-50,wd=rread,forxfersize=(8k),iorate=curve, \
curve=(95,90,80,70,60,50),forthreads=(2)
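A parameter file like the one above is typically passed to Vdbench on the command line; the file name and output directory below are illustrative:

```shell
# Run the workloads defined in the parameter file; -f names the
# parameter file and -o a directory for Vdbench's interval reports
# (both names here are illustrative).
./vdbench -f encrypt_8k.parm -o output/encrypt_8k
```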
Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 10/25/2015.