In June of this year Oracle announced its new SPARC S7 servers, which deliver commodity x86 economics along with significant enterprise-class functionality for security and analytics through Software in Silicon. We recently completed extensive testing of the SAP ASE (Sybase) database server on these new servers. SAP ASE 16.0 shows excellent scalability and performance running high-transaction OLTP workloads on SPARC S7 servers. This post summarizes the test results and offers some tuning tips.
The solid scalability of SAP ASE on SPARC S7 servers is in line with previous results from customer UZ Leuven running ASE on a SPARC T7 server. Oracle SPARC systems continue to provide an excellent platform for running the SAP ASE database server.
System - Oracle SPARC Server
- Oracle SPARC S7-2 server with two 4.27 GHz, 8-core/64-thread SPARC S7 processors and 500GB of memory
- Oracle Flash Accelerator F160 PCIe
card for DBMS storage
- 10GbE private network connecting the S7-2 and the workload generation system
- Oracle Solaris 11.3 SRU8
Software - DBMS
- SAP ASE 16.0 SP02 PL03 HF01
- Database created with separate
data and log spaces of 40GB (ZFS files) each on the F160 card
- OLTP database consisted of 100 tables filled with random binary data, each table with 20K rows, for a total of 2 million rows
Software - Workload
- An internal database testing framework,
written in Java, was used with separate DB server and test driver systems.
- A configurable OLTP workload plug-in generated a workload with a Zipfian distribution of 70% Select, 28% Update, 1% Insert, and 1% Delete operations, which more realistically represents real-life usage.
- Various configuration options were tried, including the number of user connections and JDBC connection pool sizes, in order to maximize performance.
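The workload plug-in itself is internal, but its key-selection behavior can be sketched. The class below is a hypothetical illustration (not the framework's actual code): it draws key ranks from a Zipfian distribution and picks operations with the 70/28/1/1 mix described above.

```java
import java.util.Random;

// Hypothetical sketch of the OLTP plug-in's behavior: keys follow a Zipfian
// distribution (low ranks are the hot keys), operations follow the
// 70% Select / 28% Update / 1% Insert / 1% Delete mix described in the post.
class ZipfianOltpSketch {
    private final double[] cdf;       // cumulative probability over key ranks
    private final Random rng = new Random();

    ZipfianOltpSketch(int keys, double skew) {
        cdf = new double[keys];
        double norm = 0.0;
        for (int k = 1; k <= keys; k++) norm += 1.0 / Math.pow(k, skew);
        double running = 0.0;
        for (int k = 1; k <= keys; k++) {
            running += (1.0 / Math.pow(k, skew)) / norm;
            cdf[k - 1] = running;
        }
    }

    // 0-based rank of the next key to touch (linear scan; a real generator
    // would binary-search the CDF or precompute a lookup table).
    int nextKey() {
        double u = rng.nextDouble();
        for (int i = 0; i < cdf.length; i++) if (u <= cdf[i]) return i;
        return cdf.length - 1;
    }

    // Operation mix: 70/28/1/1 as in the workload description above.
    String nextOp() {
        double u = rng.nextDouble();
        if (u < 0.70) return "SELECT";
        if (u < 0.98) return "UPDATE";
        if (u < 0.99) return "INSERT";
        return "DELETE";
    }
}
```

With a skew of 1.0 this concentrates most accesses on a small set of hot rows, which is what makes a Zipfian workload more realistic than uniform random access.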
Tuning - SAP ASE Server
- ZFS was used for data storage, mostly following what is outlined in this blog post. The only difference: because both the database data and log files are created on the F160 SSD, a separate ZIL was not used. The following ZFS settings were applied:
- Separate zpools for data and log
were created on the F160, then ZFS file systems for data and log were
created on the zpools.
- An 8kB ASE DB page size was selected at ASE installation time. A ZFS record size of 32k was used for both DB data and DB log, as suggested in the blog post above.
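Concretely, the zpool and file-system setup above corresponds to commands like the following. This is a sketch with hypothetical device names and mount points; the actual F160 device paths on your system will differ.

```shell
# Hypothetical device names; substitute the actual F160 device paths.
zpool create datapool c0t1d0          # zpool for DB data on the F160
zpool create logpool  c0t2d0          # zpool for DB log on the F160

# 32k record size for both data and log file systems, per the blog post above.
zfs create -o recordsize=32k -o mountpoint=/ase/data datapool/asedata
zfs create -o recordsize=32k -o mountpoint=/ase/log  logpool/aselog
```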
- These settings were made via ASE configuration parameters:
- Set the data cache (default or a named cache for the DB data) large enough to hold the database. For the current test, 40GB was enough to store the OLTP database of two million rows.
- Matched the number of ASE engines to the number of vcpus (hardware threads). For the 2-socket, 16-core system, that is 128 threads, so a maximum of 128 online engines.
- Set the number of local and global cache partitions to a power of 2 smaller than or equal to the number of engines.
- Set the lock spinlock ratio = 1. The default value of 85 is too large for the latest SPARC CMT systems.
- Set the number of user connections to the largest value possible for the given memory resources. Verify the value at run time using sp_configure 'user connections'; if setting the value somehow fails, it defaults to 25, which is much smaller than necessary for most realistic OLTP workloads. For the given S7 system (memory of 1TB), 30,000 was the maximum obtainable.
- Set max network connection = 4096 and max network packet size = 4096. Make sure the network is not the bottleneck by checking netstat output on the DB server (or on the workload server in the current case); utilization should stay well below 100%.
- Set solaris async io = 1. The default of 0 disables asynchronous I/O, which causes significant performance degradation.
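Taken together, the settings above can be applied via isql roughly as follows. This is a sketch: the parameter names are as quoted in this post, the cache sizing is shown via sp_cacheconfig, and you should verify the exact names and units against sp_configure output on your own installation.

```sql
-- Sketch of the ASE settings above; run via isql as a user with sa_role.
sp_cacheconfig 'default data cache', '40G'        -- large enough to hold the DB
go
sp_configure 'max online engines', 128            -- match vcpus (hardware threads)
go
sp_configure 'global cache partition number', 128 -- power of 2 <= engine count
go
sp_configure 'lock spinlock ratio', 1             -- default of 85 is too large here
go
sp_configure 'number of user connections', 30000
go
sp_configure 'max network packet size', 4096
go
```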
Tuning - Driver
- The testing framework uses JDBC connection pools for driver connections.
- Best results on the S7 system were obtained when all of the following four parameters were set to the same value:
- connection pool size on the driver
- driver thread count on the driver
- number of online ASE engines on the server
- number of total hardware threads on the server
Therefore, for the 2-socket S7-2 system with 16 cores and 128 hardware threads, all four numbers should be set to 128 in order to reach maximum performance (greatest throughput, lowest latency).
- Note: the settings above work well for SPARC S7, where all eight hardware threads in a core have equal computational power. Other platforms, such as Linux on x86, may require different settings.
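The four-way matching rule above reduces to one number: the server's hardware thread count. The snippet below (a hypothetical helper, not part of the internal framework) shows the arithmetic for the tested S7-2.

```java
// Hypothetical helper: all four driver/server tuning values are matched to
// the server's hardware thread count, per the tuning notes above.
class DriverSizing {
    static int serverHardwareThreads(int sockets, int coresPerSocket, int threadsPerCore) {
        return sockets * coresPerSocket * threadsPerCore;
    }

    public static void main(String[] args) {
        int vcpus = serverHardwareThreads(2, 8, 8); // S7-2: 2 x 8 x 8 = 128
        int jdbcPoolSize  = vcpus;  // connection pool size on the driver
        int driverThreads = vcpus;  // driver thread count on the driver
        int aseEngines    = vcpus;  // max online engines on the server
        System.out.println(jdbcPoolSize + " " + driverThreads + " " + aseEngines);
    }
}
```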
Excellent Scalability and Performance
- A 2-socket S7-2 system showed excellent scaling for the Zipfian-distribution OLTP workload against the 2-million-row database (100 tables of 20K rows each).
- In the graph below, the X-axis is the number of active cores (inactive cores were disabled), and throughput values are shown relative to the single-core case. At 8 cores, the socket boundary, scaling efficiency is 88%; at 16 cores (a full system), 81% efficiency was achieved.
- Average response times remained below 5 msec, and at 16 cores increased by less than 25% over the single-core case.
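The efficiency figures quoted above follow from a simple calculation: relative throughput divided by core count. The example below uses illustrative arithmetic, not new measurements.

```java
// Scaling efficiency = (throughput at n cores / throughput at 1 core) / n.
class ScalingEfficiency {
    static double efficiency(double throughputAtN, double throughputAt1, int cores) {
        return (throughputAtN / throughputAt1) / cores;
    }

    public static void main(String[] args) {
        // 88% efficiency at 8 cores corresponds to ~7.04x relative throughput.
        System.out.println(efficiency(7.04, 1.0, 8)); // 0.88
    }
}
```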
Oracle SPARC continues to deliver great performance enhancements for OLTP. SPARC S7 servers, based on the latest SPARC S7 CPU, provide an excellent, scalable, and efficient platform for processing high-transaction OLTP workloads with the SAP ASE 16.0 database server.