ProMAX Performance and Throughput on Sun Fire X2270 and Sun Storage 7410
By Brian Whitney on Sep 21, 2010
Halliburton/Landmark's ProMAX 3D Prestack Kirchhoff Time Migration's single job scalability and multiple job throughput using various scheduling methods are evaluated on a cluster of Oracle's Sun Fire X2270 servers attached via QDR InfiniBand to Oracle's Sun Storage 7410 system.
Two resource scheduling methods, compact and distributed, are compared while increasing the system load with additional concurrent ProMAX jobs.
A single ProMAX job shows near-linear scaling of 5.5x on 6 nodes of a Sun Fire X2270 cluster.
A single ProMAX job shows near-linear scaling of 7.5x on a Sun Fire X2270 server when running from 1 to 8 threads.
ProMAX can take advantage of Oracle's Sun Storage 7410 system features compared to dedicated local disks. No significant difference in run time was observed when running up to 8 concurrent 16-thread jobs.
The 8-thread ProMAX job throughput using the distributed scheduling method is equivalent to or slightly faster than the compact scheme for 1 to 4 concurrent jobs.
The 16-thread ProMAX job throughput using the distributed scheduling method is up to 8% faster when compared to the compact scheme on an 8-node Sun Fire X2270 cluster.
The multiple job throughput characterization revealed in this benchmark study is key to pre-configuring Oracle Grid Engine resource scheduling for ProMAX on a Sun Fire X2270 cluster and provides valuable insight for server consolidation.
Single Job Scaling

Single job performance on a single node scales nearly linearly up to the number of cores in the node, i.e. 2 Intel Xeon X5570 processors with 4 cores each. With hyperthreading (2 active threads per core) enabled, more ProMAX threads are used, increasing the load on the CPU's memory architecture and reducing the incremental speedups.
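The near-linear scaling claims can be sanity-checked with simple parallel-efficiency arithmetic (speedup divided by worker count). The sketch below uses only the speedup figures reported above; the `efficiency` helper is illustrative, not part of ProMAX.

```python
# Parallel efficiency = achieved speedup / ideal (linear) speedup.
# Speedup figures are the ones reported in this study.

def efficiency(speedup, workers):
    """Fraction of ideal linear scaling achieved."""
    return speedup / workers

# Single job across 6 nodes: 5.5x speedup
print(f"6-node efficiency:   {efficiency(5.5, 6):.0%}")   # ~92%

# Single job, 1 -> 8 threads on one node: 7.5x speedup
print(f"8-thread efficiency: {efficiency(7.5, 8):.0%}")   # ~94%
```

Efficiencies above 90% in both dimensions are what justify calling the scaling "near linear."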
Single Job 6-Node Scalability
Hyperthreading Enabled - 16 Threads/Node Maximum
| Number of Nodes | Threads Per Node | Speedup to 1 Thread | Speedup to 1 Node |
|---|---|---|---|
\* 2 threads contend with two master node daemons
Multiple Job Throughput Scaling, Compact Scheduling

With the Sun Storage 7410 system, the performance of 8 concurrent jobs on the cluster using compact scheduling is equivalent to that of a single job.
Multiple Job Throughput Scalability
Hyperthreading Enabled - 16 Threads/Node Maximum
| Number of Nodes | Number of Nodes per Job | Threads Per Node per Job | Performance Relative to 1 Job | Total Nodes | Percent Cluster Used |
|---|---|---|---|---|---|
Multiple 8-Thread Job Throughput Scaling, Compact vs. Distributed Scheduling

These results compare several distributed-method resource scheduling levels against 1, 2, and 4 concurrent-job compact-method baselines.
Multiple 8-Thread Job Scheduling
HyperThreading Enabled - Use 8 Threads/Node Maximum
| Number of Jobs | Number of Nodes per Job | Threads Per Node per Job | Performance Relative to 1 Job | Total Nodes | Total Threads per Node Used | Percent of PVM Master 8 Threads Used |
|---|---|---|---|---|---|---|
Multiple 16-Thread Job Throughput Scaling, Compact vs. Distributed Scheduling

The results are reported relative to the performance of 1, 2, 4, and 8 concurrent 2-node, 8-thread jobs.
Multiple 16-Thread Job Scheduling
HyperThreading Enabled - 16 Threads/Node Available
| Number of Jobs | Number of Nodes per Job | Threads Per Node per Job | Performance Relative to 1 Job | Total Nodes | Total Threads per Node Used | Percent of PVM Master 16 Threads Used |
|---|---|---|---|---|---|---|
\* master PVM host; running 20 to 21 total threads (over-subscribed)
Results and Configuration Summary
48 GB memory at 1333 MHz
1 x 500 GB SATA
128 GB memory
2 Internal 233 GB SAS drives = 466 GB
2 Internal 93 GB read-optimized SSDs = 186 GB
1 External Sun Storage J4400 array with 22 x 1 TB SATA drives and 2 x 18 GB write-optimized SSDs
Parallel Virtual Machine 3.3.11
Oracle Grid Engine
Intel 11.1 Compilers
OpenWorks Database requires Oracle 10g Enterprise Edition
Libraries: pthreads 2.4, Java 1.6.0_01, BLAS, Stanford Exploration Project Libraries
The ProMAX family of seismic data processing tools is the most widely used seismic processing application in the oil and gas industry. ProMAX is used for multiple applications, from field processing and quality control, to interpretive project-oriented reprocessing at oil companies and production processing at service companies. ProMAX is integrated with Halliburton's OpenWorks Geoscience Oracle Database to index prestack seismic data and populate the database with processed seismic data.
This benchmark evaluates single job scalability and multiple job throughput of the ProMAX 3D Prestack Kirchhoff Time Migration while processing the Halliburton benchmark data set containing 70,808 traces with an 8 msec sample interval and a trace length of 4992 msec. Alternative thread scheduling methods are compared for optimizing single and multiple job throughput. The compact scheme schedules the threads of a single job on as few nodes as possible, whereas the distributed scheme schedules the threads across as many nodes as possible. The effects of load on the Sun Storage 7410 system are measured. This information provides valuable insight into determining the Oracle Grid Engine resource management policies.
Hyperthreading is enabled for all of the tests. It should be noted that every node runs a PVM daemon and a ProMAX license server daemon. On the master PVM daemon node, three additional ProMAX daemons are running.
The first test measures single job scalability across a 6-node cluster with an additional node serving as the master PVM host. The speedup is reported relative to a single node running a single thread.
The second test measures multiple job scalability running 1 to 8 concurrent 16-thread jobs using the Sun Storage 7410 system. The performance is reported relative to a single job.
The third test compares 8-thread multiple job throughput using different job scheduling methods on a cluster. The compact method places all 8 threads of a job on the same node. The distributed method spreads the 8 threads of a job across multiple nodes. The results compare several distributed-method resource scheduling levels against 1, 2, and 4 concurrent-job compact-method baselines.
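In Oracle Grid Engine, these two placement policies map naturally onto the `allocation_rule` attribute of a parallel environment: `$fill_up` packs a job's slots onto as few hosts as possible (compact), while `$round_robin` spreads them across hosts (distributed). The fragment below is a hypothetical pair of parallel environment definitions, shown in the style of `qconf -sp` output; the names `promax_compact` and `promax_spread` and the slot counts are illustrative, not from this study.

```
# Hypothetical Grid Engine parallel environments (qconf -sp style).
# $fill_up packs slots on few hosts (compact scheduling);
# $round_robin distributes slots across hosts (distributed scheduling).
pe_name            promax_compact
slots              128
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE

pe_name            promax_spread
slots              128
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
```

A job would then request one policy or the other at submission time, e.g. `qsub -pe promax_compact 8 ...` to keep all 8 threads on one node.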
The fourth test is similar to the second test except running 16-thread ProMAX jobs. The results are reported relative to the performance of 1, 2, 4, and 8 concurrent 2-node, 8-thread jobs.
The ProMAX processing parameters used for this benchmark:
Maximum output inline = 85
Inline output sampling interval = 1
Minimum output xline = 1
Maximum output xline = 200 (fold)
Xline output sampling interval = 1
Antialias inline spacing = 15
Antialias xline spacing = 15
Stretch Mute Aperture Limit with Maximum Stretch = 15
Image Gather Type = Full Offset Image Traces
No Block Moveout
Number of Alias Bands = 10
3D Amplitude Phase Correction
Maximum Number of Cache Blocks = 500000
Key Points and Best Practices
The application was rebuilt with the Intel 11.1 Fortran and C++ compilers using these flags:

```
-xSSE4.2 -O3 -ipo -no-prec-div -static -m64 -ftz -fast-transcendentals -fp-speculation=fast
```
There are additional execution threads associated with a ProMAX node. Two support threads run on each node: the license server and the PVM daemon. At least three additional daemon threads run on the PVM master server: the ProMAX interface GUI, the ProMAX job execution (SuperExec), and the PVM console and control. It is best to allocate one node as the master PVM server to handle these 5+ additional threads. Alternatively, with hyperthreading enabled, the master PVM host can also support up to 8 ProMAX job threads.
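The per-node thread load described above can be tallied with a quick sketch. The daemon counts (2 support threads per node, 3 more on the PVM master) come from this section; the `total_threads` helper is illustrative.

```python
# Per-node thread accounting for a ProMAX run, using the daemon
# counts described above: every node runs 2 support threads
# (license server + PVM daemon), and the PVM master runs 3 more
# (GUI, SuperExec, and PVM console/control).

SUPPORT_PER_NODE = 2
EXTRA_ON_MASTER = 3

def total_threads(job_threads, is_master=False):
    """Total threads a node carries for its share of a ProMAX job."""
    extra = EXTRA_ON_MASTER if is_master else 0
    return job_threads + SUPPORT_PER_NODE + extra

# A 16-thread job share placed on the PVM master node:
print(total_threads(16, is_master=True))  # 21 -> over-subscribed on 16 HW threads

# The same share on a worker node:
print(total_threads(16))                  # 18
```

This matches the table footnote above: the PVM master running a 16-thread job carries roughly 21 total threads and is over-subscribed, which is why reserving the master node (or capping it at 8 job threads) is recommended.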
When hyperthreading is enabled on one of the non-master PVM hosts, there is a 7% penalty when going from 8 to 10 threads. However, 12 threads are 11% faster than 8. This can be attributed to the two additional support threads contending for hardware threads once hyperthreading comes into play.
Users need to be aware of these performance differences and how they affect their production environment.
- Oracle Oil and Gas
- ProMAX Family Seismic Data Processing Software
- OpenWorks Geosciences Project Database Software
- GeoProbe Volume Interpretation Software
The following are trademarks or registered trademarks of Halliburton/Landmark Graphics: ProMAX. Results as of 9/20/2010.