Tuesday Mar 26, 2013

SPARC T5-2 Performance Running Oracle Fusion Middleware SOA

Oracle's SPARC T5-2 server running Oracle Fusion Middleware SOA Suite 11g on Oracle Solaris 11 demonstrated a 2.1x to 2.4x throughput improvement, at 2x the concurrency, over a similarly configured SPARC T4-2 server for the Fusion Order Demo and Oracle Service Bus (OSB) benchmark workloads using a 5 KB message size.

  • Oracle Fusion Middleware SOA was deployed in virtualized environments using Oracle VM for SPARC to demonstrate consolidation of multiple SOA services onto a single system.

  • The benchmark demonstrates SPARC hardware crypto performance within an OSB service, using a 100-byte element encrypted with AES and signed with RSA128.

Performance Landscape

OSB Tests

System            ch/co/th   OS
SPARC T5-2        1/8/64     Oracle Solaris 11
SPARC T5-2 (db)   2/32/256   Oracle Solaris 11

Test               Concurrency   T5/T4
http_passthrough   144           2.1x
dyn_transform      96            2.4x
body_encryption    64            2.3x

ch/co/th – chips, cores, threads


BPEL Test

System            ch/co/th     OS
SPARC T5-2        1.5/24/192   Oracle Solaris 11
SPARC T5-2 (db)   2/32/256     Oracle Solaris 11

Test                Users   T5/T4
Fusion Order Demo   400     2.2x

ch/co/th – chips, cores, threads

Configuration Summary

Application Server:

SPARC T5-2
2 x SPARC T5 processors, 3.6 GHz
256 GB memory
2 x 300 GB internal disks
Oracle Solaris 11.1
Oracle WebLogic 10.3.6
Oracle SOA 11.1.1.6 (PS5)
Oracle OSB 11.1.1.6 (PS5)
Oracle JDK 7

Database Server:

SPARC T5-2
2 x SPARC T5 processors, 3.6 GHz
256 GB memory
2 x 300 GB internal disks
1 x Sun Storage 6180, 16 x 146 GB SAS disks
Oracle Solaris 11.1
Oracle Database 11g Release 2 (11.2.0.3)

Benchmark Description

Three tests were performed as part of the Oracle SOA Suite profiling:

HTTP Passthrough (http_passthrough)

The client sends a 5 KB message to an HTTP Web Services Description Language (WSDL)-based proxy service on an Oracle Service Bus server. The proxy routes the message (using a route action) to the backend servlet in a WLS domain. Oracle Service Bus monitoring is enabled as the message goes through the bus. The proxy's operation selection algorithm is SOAP Action Header. This workload involves more networking load than any of the other Oracle Service Bus microbenchmarks described.

Dynamic Transformation (dyn_transformation)

In this benchmark the HTTP proxy receives a 5 KB XML document. The XML document carries an XQuery resource name in one of its leaf nodes. The pipeline uses an XPath expression to retrieve the XQuery resource name and executes the transformation on the inbound XML. The majority of CPU time is spent on XML processing.

Body Encryption (body_encryption)

This benchmark tests the crypto performance within an Oracle Service Bus service. The client sends a 5 KB message, within which a 100-byte element is encrypted, to the WSDL-based Oracle Service Bus proxy service over HTTP. The WSDL binding references an Oracle Web Services Manager policy. The business service is also WSDL-based. The element is encrypted with AES and signed with RSA128. The encrypted element is decrypted, and the message is routed to the backend service as a clear SOAP message.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 26 March 2013.

Friday Feb 08, 2013

Improved Oracle Solaris 10 1/13 Secure Copy Performance for High Latency Networks

With Oracle Solaris 10 1/13, the performance of secure copy (scp) is significantly improved for high-latency networks.

  • With the TCP receive window size increased to 1 MB, Oracle Solaris 10 1/13 delivered up to 8 times faster transfer times than the previous Oracle Solaris 10 8/11 over the latency range of 50 to 200 msec.

  • The default TCP receive window size of 48 KB delivered similar performance in both Oracle Solaris 10 1/13 and Oracle Solaris 10 8/11.

  • In this study, settings above 1 MB for the TCP receive window size delivered similar performance to the 1 MB results.

  • Tuning of the TCP receive window has been available in Oracle Solaris for some time. The improved performance is available with Oracle Solaris 10 1/13 and Oracle Solaris 11.

Performance Landscape

[Figure: T4-4_SSH_SCP.png – SPARC T4-4 scp transfer performance]

[Figure: X4170M2_SSH_SCP.png – Sun Fire X4170 M2 scp transfer performance]

Configuration Summary

Test Systems:

SPARC T4-4 server
4 x SPARC T4 processor 3.0 GHz
1 TB memory
Oracle Solaris 10 1/13
Oracle Solaris 10 8/11

Sun Fire X4170 M2
2 x Intel Xeon X5675 3.06 GHz
48 GB memory
Oracle Solaris 10 1/13
Oracle Solaris 10 8/11

Driver System:

Sun Fire X4170 M2
2 x Intel Xeon X5675 3.06 GHz
48 GB memory
Oracle Solaris 10

Router / Programmable Delay System:

Sun Fire X4170 M2
2 x Intel Xeon X5675 3.06 GHz
48 GB memory
Oracle Solaris 10

Switch (between the router and the two test systems):

Cisco Linksys SR2024C

Benchmark Description

This benchmark measures scp performance between two systems with variable router delays in the network between them. A file size of 48 MB was used while measuring the effects of varying the latency (network delays) and varying the TCP receive window size.
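As a rough guide to why the receive window matters at high latency, the throughput of a single window-limited TCP stream is bounded by the window size divided by the round-trip time. The sketch below computes that ceiling for the default 48 KB window and the tuned 1 MB window at the latencies tested; the figures are illustrative upper bounds only, since actual scp throughput is also limited by SSH protocol overhead and the 1 GbE line rate.

    # Rough ceiling for a single window-limited TCP stream: throughput <= window / RTT.
    # Illustrative only -- not measured results from this benchmark.

    def tcp_window_ceiling_mbps(window_bytes: int, rtt_sec: float) -> float:
        """Upper bound on throughput (Mb/s) for one TCP stream limited by its receive window."""
        return window_bytes * 8 / rtt_sec / 1e6

    for rtt_ms in (50, 100, 200):
        rtt = rtt_ms / 1000.0
        default_win = 48 * 1024      # 48 KB default receive window
        tuned_win = 1024 * 1024      # 1 MB window enabled via tcp_recv_hiwat
        print(f"RTT {rtt_ms:3d} ms: 48 KB window <= {tcp_window_ceiling_mbps(default_win, rtt):6.1f} Mb/s, "
              f"1 MB window <= {tcp_window_ceiling_mbps(tuned_win, rtt):7.1f} Mb/s")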

Key Points and Best Practices

  • The WAN emulator (hxbt) is used on the router to introduce the delays. Network function and characteristics were verified after setting up the emulator, using Netperf latency and bandwidth tests between the driver and test systems.

  • Transfers were performed over a private, dedicated 1 GbE network.

  • Files were transferred to and from /tmp (i.e., in memory) on the test systems to minimize the effect of file system performance and variability on the measurements.

  • Larger-than-default TCP receive windows can be enabled using the system-wide parameter tcp_recv_hiwat (e.g., to enable 1024 KB windows using this method, run: ndd -set /dev/tcp tcp_recv_hiwat 1048576). To make this change persistent, the command must be added to the system startup scripts.

  • sshd on the target system must be restarted after increasing the enabled TCP receive buffer size before any benefit can be observed (e.g., restart with the command /usr/sbin/svcadm restart svc:/network/ssh:default).

  • Note that tcp_recv_hiwat is a system-wide variable that adjusts the entire TCP stack. Care, therefore, must be taken to make sure that changes do not adversely affect your environment.

  • Geographically distant servers can be affected by connection latencies of the kind presented here.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 2/08/2013.

Wednesday Nov 02, 2011

SPARC T4-2 Server Beats 2-Socket 3.46 GHz x86 on Black-Scholes Option Pricing Test

Oracle's SPARC T4-2 server (two SPARC T4 processors at 2.85 GHz) delivered 21% better performance compared to a two-socket x86 server (with two Intel X5690 3.46 GHz processors) running a Black-Scholes options pricing test on 10 million options.

  • The hyper-threads of the Intel processor did not deliver additional performance; they actually caused a 6% reduction in performance. The performance benefit of hyper-threading on Intel processors will vary depending on the workload.

  • This test shows that delivered performance is not easily predicted by processor frequency alone. It is vital that hardware and software be designed in tandem in order to deliver the best performance.

Performance Landscape

Black-Scholes options pricing, 10 million options, results in seconds, 100 iterations of the test, smaller is better.

System Time (sec)
SPARC T4-2 (2 x SPARC T4, 2.85 GHz, 128 software threads) 9.2
2-socket x86 (2 x X5690, 3.46 GHz, 12 software threads) 11.7

Advantage: SPARC T4-2 is 21% faster

The hyper-threads of the Intel processor did not deliver additional performance, causing a reduction in performance of 6%.

Configuration Summary

SPARC Configuration:

SPARC T4-2 server
2 x SPARC T4 processor 2.85 GHz
128 GB memory
Oracle Solaris 10 8/11

Intel Configuration:

Sun Fire X4270 M2
2 x Intel Xeon X5690 3.46 GHz, Hyper-Threading and Turbo Boost active
48 GB memory
Oracle Linux 6.1

Benchmark Description

The Black-Scholes option pricing model is a financial market algorithm that uses the Black-Scholes partial differential equation (PDE) to calculate prices for European stock options. The key idea is that the value of the option fluctuates over time with the actual value of the stock. The reported time covers only the option calculations; there is no I/O component. The computation is floating-point intensive and requires the calculation of logarithms, exponentials, and square roots.
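The benchmark code itself is not published here. As a rough illustration of the kind of arithmetic involved (logarithms, exponentials, square roots, and the cumulative normal distribution), a minimal European call pricer might look like the sketch below; the parameter values are made up for the example and are not taken from the benchmark.

    from math import log, sqrt, exp, erf

    def norm_cdf(x: float) -> float:
        """Standard normal CDF expressed via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(s: float, k: float, r: float, sigma: float, t: float) -> float:
        """Black-Scholes price of a European call option (no dividends)."""
        d1 = (log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * sqrt(t))
        d2 = d1 - sigma * sqrt(t)
        return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

    # Price a small batch of options; the benchmark repeats this kind of
    # per-option calculation over 10 million options for 100 iterations.
    options = [(100.0, 95.0 + i, 0.02, 0.25, 1.0) for i in range(10)]
    print([round(black_scholes_call(*opt), 4) for opt in options])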

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/1/2011.

Thursday Sep 29, 2011

SPARC T4-1 Server Outperforms Intel (Westmere AES-NI) on IPsec Encryption Tests

Oracle's SPARC T4 processor has significantly greater performance than the Intel Xeon X5690 processor when both are using Oracle Solaris 11 secure IP networking (IPsec). The SPARC T4 processor using IPsec AES-256-CCM mode achieves line speed over a 10 GbE network.

  • On IPsec, the SPARC T4 processor is 23% faster than the 3.46 GHz Intel Xeon X5690 processor (with Intel AES-NI).

  • The SPARC T4 processor is at only 23% utilization when running at its maximum throughput, making it 3.6 times more efficient at secure networking than the 3.46 GHz Intel Xeon X5690 processor.

  • The 3.46 GHz Intel Xeon X5690 processor is nearly fully utilized at its maximum throughput leaving little CPU for application processing.

  • The SPARC T4 processor using IPsec AES-256-CCM mode achieves line speed over a 10 GbE network.

  • The SPARC T4 processor approaches line speed with fewer than one-quarter the number of IPsec streams required for the Intel Xeon X5690 processor to achieve its peak throughput. The SPARC T4 processor supports the additional streams with minimal extra CPU utilization.

IPsec provides general-purpose network security that is transparent to applications, which makes it ideal for networking applications that do not have cryptography built in. IPsec is useful for more than the Virtual Private Network (VPN) deployments in which the technology is often first encountered.

Performance Landscape

Performance was measured using the AES-256-CCM cipher in megabits per second (Mb/sec) aggregate over sufficient numbers of TCP/IP streams to achieve line rate threshold (SPARC T4 processor) or drive a peak throughput (Intel Xeon X5690).

                                AES Decrypt                        AES Encrypt
Processor          GHz    B/W (Mb/sec)  CPU Util  Streams    B/W (Mb/sec)  CPU Util  Streams

Peak performance
SPARC T4           2.85   9,800         23%       96         9,800         20%       78
Intel Xeon X5690   3.46   8,000         83%       –          4,700         81%       –

Load at which SPARC T4 processor performance crosses 9000 Mb/sec
SPARC T4           2.85   9,300         19%       17         9,200         15%       17
Intel Xeon X5690   3.46   4,700         41%       –          3,200         47%       –

(– : stream count not listed)

Configuration Summary

SPARC Configuration:

SPARC T4-1 server
1 x SPARC T4 processor 2.85 GHz
128 GB memory
Oracle Solaris 11
Single 10-Gigabit Ethernet XAUI Adapter

Intel Configuration:

Sun Fire X4270 M2
1 x Intel Xeon X5690 3.46 GHz, Hyper-Threading and Turbo Boost active
48 GB memory
Oracle Solaris 11
Sun Dual Port 10GbE PCIe 2.0 Networking Card with Intel 82599 10GbE Controller

Driver Systems Configuration:

2 x Sun Blade 6000 chassis each with
1 x Sun Blade 6000 Virtualized Ethernet Switched Network Express Module 10GbE (NEM)
10 x Sun Blade X6270 M2 server modules each with
2 x Intel Xeon X5680 3.33 GHz, Hyper-Threading and Turbo Boost active
48 GB memory
Oracle Solaris 11
Dual 10-Gigabit Ethernet Fabric Expansion Module (FEM)

Benchmark Configuration:

Netperf 2.4.5 network benchmark adapted for testing bandwidth of multiple streams in aggregate.

Benchmark Description

The results here are derived from runs of the Netperf 2.4.5 benchmark. Netperf is a client/server benchmark that measures network performance, providing a number of independent tests, including the TCP streaming bandwidth tests used here.

Netperf is, however, a single-network-stream benchmark, and demonstrating peak network bandwidth over a 10 GbE line under encryption requires many streams.

The Netperf documentation provides an example of using the software to drive multiple streams. The example is not sufficient to develop the workload because it does not scale beyond a single driver node, which limits the processing power that can be applied and, in turn, how many full-bandwidth streams can be supported. We chose to have a single server process on the target system (containing either the SPARC T4 processor or the Intel Xeon processor) and to spawn one or more Netperf client processes across a cluster of the driver systems. The client processes are managed by the mpirun program of the Oracle Message Passing Toolkit.

Tabular results include aggregate bandwidth and CPU utilization. The aggregate bandwidth is computed by dividing the total traffic of the client processes by the overall runtime. CPU utilization on the target system is the average of that reported by all of the Netperf client processes.
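A minimal sketch of that bookkeeping is shown below; the per-client numbers are hypothetical placeholders, and the field layout is illustrative rather than Netperf's actual output format.

    # Hypothetical per-client results: (bytes transferred, reported target CPU utilization %).
    clients = [(12_884_901_888, 22.5), (12_612_274_176, 23.1), (12_750_000_000, 22.8)]

    runtime_sec = 120.0  # overall wall-clock runtime of the aggregate test

    total_bytes = sum(b for b, _ in clients)
    aggregate_mbps = total_bytes * 8 / runtime_sec / 1e6      # aggregate bandwidth in Mb/sec
    avg_cpu_util = sum(u for _, u in clients) / len(clients)  # average reported CPU utilization

    print(f"aggregate bandwidth: {aggregate_mbps:,.0f} Mb/sec, target CPU: {avg_cpu_util:.1f}%")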

IPsec is configured in the operating system of each participating server transparently to Netperf and applied to the dedicated network connecting the target system to the driver systems.

Key Points and Best Practices

  • Line speed is defined as data bandwidth within 10% of the theoretical maximum bit rate of the network line. For 10 GbE, bandwidth greater than 9000 Mb/sec is considered line speed.

  • IPsec provides network security that is configured completely in the operating system and is transparent to the application.

  • Peak bandwidths under IPsec are achieved only in aggregate with multiple client network streams to the target server.

  • Oracle Solaris receiver fanout must be increased from the default to support the large numbers of streams at quoted peak rates.

  • The ixgbe network driver, used on servers with Intel 82599 10GbE controllers (the driver systems and the Intel Xeon target system), was limited to a single receive queue to maximize utilization of the extra fanout.

  • IPsec is configured to make a unique security association (SA) for each connection to avoid a bottleneck over the large stream counts.

  • Jumbo frames are enabled (MTU of 9000) and network interrupt blanking (sometimes called interrupt coalescence) is disabled.

  • The TCP streaming bandwidth tests, which run continuously for minutes and multiple times to determine statistical significance, are configured to use message sizes of 1,048,576 bytes.

  • The IPsec configuration specifies that each SA is established using a preshared key and Internet Key Exchange (IKE).

  • IPsec encryption uses the Solaris Cryptographic Framework which applies the appropriate accelerated provider on both the SPARC T4 processor and the Intel Xeon processor.

  • There is no need to configure a specific authentication algorithm for IPsec. With the Encapsulated Security Payload (ESP) security protocol and choosing AES-256-CCM for the encryption algorithm, the encapsulation is self-authenticating.

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 9/26/2011.

Tuesday Sep 27, 2011

SPARC T4 Servers Set World Record on Siebel Loyalty Batch

Oracle's SPARC T4-2 and SPARC T4-4 servers running Oracle's Siebel Loyalty Batch engine delivered a world record result for batch processing.

  • The SPARC T4-2 and SPARC T4-4 servers running Siebel Loyalty Batch engine, part of Siebel Loyalty Solution, with Oracle Database 11g Release 2 running on Oracle Solaris 10 achieved 7.65M TPH on Accrual (Reward) processing using three Siebel Servers.

  • The world record result was achieved with 24M members and 50M records in the base transaction table.

  • Siebel Loyalty Application was configured with 50 Active Promotions with three Assign Points and four Update Attributes.

  • Oracle's Siebel Server scaled nearly linearly on SPARC T4 systems, from 2.72M TPH on a single Siebel Server to 7.65M TPH with three Siebel Servers (see the quick check after this list).

  • The average CPU utilization on the database tier server was 25% and on the application tier server was 65%, leaving significant room for application growth.
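As a back-of-the-envelope check of the scaling claim, using only the published throughput figures above (not part of the benchmark itself):

    # Throughput scaling from one to three Siebel Servers, using the published TPH figures.
    single_server_tph = 2.72e6
    three_server_tph = 7.65e6

    speedup = three_server_tph / single_server_tph   # ~2.81x with 3 servers
    efficiency = speedup / 3 * 100                   # ~94% of ideal linear scaling
    print(f"speedup: {speedup:.2f}x, scaling efficiency: {efficiency:.0f}%")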

Performance Landscape

System                  Processor                    TPH     Version
3 x SPARC T4-2 (app)    SPARC T4, 2.85 GHz           7.65M   8.1.1.1FP
1 x SPARC T4-4 (db)     SPARC T4, 3.0 GHz

2 x SPARC T3-2 (app)    SPARC T3, 1.65 GHz           3.9M    8.1.1.1FP
1 x SPARC T3-1 (app)    SPARC T3, 1.65 GHz
1 x SPARC M5000 (db)    SPARC64 VII, 2.52 GHz

Customer (app)          4 x Intel E5540, 2.53 GHz    1.5M    8.1.x
Customer (db)           1 x Itanium, 1.6 GHz

Configuration Summary

Hardware Configuration:

3 x SPARC T4-2 servers, each with
2 x SPARC T4 processors, 2.85 GHz
128 GB main memory
1 x SPARC T4-4 server with
4 x SPARC T4 processors, 3.0 GHz
256 GB main memory
1 x Sun Storage 6180 array
16 disk drives
CSM200 with 16 disk drives

Software Configuration:

Oracle Solaris 10
Siebel Server 8.1.1.1FP
Oracle Database 11g Release 2 Enterprise Edition 11.2.0.1

Benchmark Description

Siebel Loyalty enables companies to simulate and process loyalty rewards for their activities across channels and to process very high-volume accrual and tier-assessment transactions via batch processing.

The benchmark simulates a workload of Accrual Batch Transactions Processing, which imports data through Enterprise Integration Manager (EIM), evaluates eligible promotions, and calculates rewards. The key performance metric is transactions per hour (TPH). Key aspects of the workload simulation include:

  • The Batch Engine evaluates all accrual promotions and applies all actions in one pass.
  • Users do not have control over the sequence in which promotions are applied.
  • Promotion actions (assign/redeem points) are rolled back in case of failure.

The number of active promotions and, in particular, the Assign Points action have a very significant impact on performance. The load simulated 50 active promotions, with 3 Assign Points and 7 Update Attribute actions configured.

The number of members and the number of queued transactions in the backend database have a significant impact on performance. The benchmark had 24 million members and 52 million records in the base transaction table. The simplified process flow of the benchmark is:

  • calculate accruals based on promotions,
  • credit points to members,
  • initiate any other actions specified in promotions.

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 9/26/2011.

SPARC T4-4 Server Sets World Record on PeopleSoft Payroll (N.A.) 9.1, Outperforms IBM Mainframe, HP Itanium

Oracle's SPARC T4-4 server achieved world record performance on the Unicode version of Oracle's PeopleSoft Enterprise Payroll (N.A.) 9.1 extra-large volume model benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 10.

  • The SPARC T4-4 server was able to process 1,460,544 payments/hour using PeopleSoft Payroll N.A. 9.1.

  • The SPARC T4-4 server UNICODE result of 30.84 minutes on Payroll 9.1 is 2.8x faster than IBM z10 EC 2097 Payroll 9.0 (UNICODE version) result of 87.4 minutes. The IBM mainframe is rated at 6,512 MIPS.

  • The SPARC T4-4 server UNICODE result of 30.84 minutes on Payroll 9.1 is 3.1x faster than HP rx7640 Itanium2 non-UNICODE result of 96.17 minutes, on Payroll 9.0.

  • The average CPU utilization on the SPARC T4-4 server was only 30%, leaving significant room for business growth.

  • The SPARC T4-4 server processed payroll for 500,000 employees, 750,000 payments, in 30.84 minutes compared to the earlier world record result of 46.76 minutes on Oracle's SPARC Enterprise M5000 server.

  • The SPARC Enterprise M5000 server configured with eight 2.66 GHz SPARC64 VII+ processors has a result of 46.76 minutes on Payroll 9.1. That is 7% better than the result of 50.11 minutes on the SPARC Enterprise M5000 server configured with eight 2.53 GHz SPARC64 VII processors on Payroll 9.0. The difference in clock speed between the two processors is ~5%, which is close to the difference in the two results, showing that the impact of the Payroll 9.1 benchmark on the overall result is about the same as that of Payroll 9.0.

Performance Landscape

PeopleSoft Payroll (N.A.) 9.1 – 500K Employees (7 Million SQL PayCalc, Unicode)

System                                    OS/Database          Payroll Processing   Run 1       Num of
                                                               Result (minutes)     (minutes)   Streams
SPARC T4-4, 4 x 3.0 GHz SPARC T4          Solaris/Oracle 11g   30.84                43.76       96
SPARC M5000, 8 x 2.66 GHz SPARC64 VII+    Solaris/Oracle 11g   46.76                66.28       32

PeopleSoft Payroll (N.A.) 9.0 – 500K Employees (3 Million SQL PayCalc, Non-Unicode)

                                                         Time in Minutes
System                               OS/Database         Payroll       Run 1     Run 2     Run 3      Num of
                                                         Processing                                   Streams
                                                         Result
Sun M5000, 8 x 2.53 GHz SPARC64 VII  Solaris/Oracle 11g  50.11         73.88     534.20    1267.06    32
IBM z10 EC 2097, 9 x 4.4 GHz Gen1    z/OS/DB2            58.96         80.5      250.68    462.6      8
IBM z10 EC 2097, 9 x 4.4 GHz Gen1    z/OS/DB2            87.4 **       107.6     -         -          8
HP rx7640, 8 x 1.6 GHz Itanium2      HP-UX/Oracle 11g    96.17         133.63    712.72    1665.01    32

** This result was run with Unicode. The IBM z10 EC 2097 UNICODE result of 87.4 minutes is 48% slower than the IBM z10 EC 2097 non-UNICODE result of 58.96 minutes, both on Payroll 9.0 and each configured with nine 4.4 GHz Gen1 processors.

Payroll 9.1 Compared to Payroll 9.0

Please note that Payroll 9.1 is Unicode based, while Payroll 9.0 had non-Unicode and Unicode versions of the workload. There are 7 million executions of an SQL statement for the PayCalc batch process in Payroll 9.1 and 3 million executions of the same SQL statement for the PayCalc batch process in Payroll 9.0. This is reflected in the PayCalc elapsed time (27.33 minutes for 9.1 versus 23.78 minutes for 9.0). The elapsed times of all other batch processes are lower (better) on 9.1.

Configuration Summary

Hardware Configuration:

SPARC T4-4 server
4 x 3.0 GHz SPARC T4 processors
256 GB memory
Sun Storage F5100 Flash Array
80 x 24 GB FMODs

Software Configuration:

Oracle Solaris 10 8/11
PeopleSoft HRMS and Campus Solutions 9.10.303
PeopleSoft Enterprise (PeopleTools) 8.51.035
Oracle Database 11g Release 2 11.2.0.1 (64-bit)
Micro Focus COBOL Server Express 5.1 (64-bit)

Benchmark Description

The PeopleSoft 9.1 Payroll (North America) benchmark is a performance benchmark established by PeopleSoft to demonstrate system performance for a range of processing volumes in a specific configuration. This information may be used to determine the software, hardware, and network configurations necessary to support processing volumes. This workload represents large batch runs typical of OLTP workloads during a mass update.

The benchmark measures the run times of five application business processes for a database representing a large organization. The five processes are:

  • Paysheet Creation: Generates payroll data worksheets consisting of standard payroll information for each employee for a given pay cycle.

  • Payroll Calculation: Looks at paysheets and calculates checks for those employees.

  • Payroll Confirmation: Takes information generated by Payroll Calculation and updates the employees' balances with the calculated amounts.

  • Print Advice Forms: This process takes the information generated by Payroll Calculation and Confirmation and produces an Advice for each employee to report Earnings, Taxes, Deductions, etc.

  • Create Direct Deposit File: The process takes information generated by the above processes and produces an electronic transmittal file that is used to transfer payroll funds directly into an employee's bank account.

Key Points and Best Practices

  • The SPARC T4-4 server with the Sun Storage F5100 Flash Array device had an average read throughput of up to 103 MB/sec and an average write throughput of up to 124 MB/sec while consuming 30% CPU on average.

  • The Sun Storage F5100 Flash Array device is a solid-state device that provides a read latency of only 0.5 msec. That is about 10 times faster than the normal disk latencies of 5 msec measured on this benchmark.

See Also

  • Oracle PeopleSoft Benchmark White Papers
    oracle.com
  • PeopleSoft Enterprise Human Capital Management (Payroll)
    oracle.com

  • PeopleSoft Enterprise Payroll 9.1 Using Oracle for Solaris (Unicode) on Oracle's SPARC T4-4 – White Paper
    oracle.com

  • SPARC T4-4 Server
    oracle.com
  • Oracle Solaris
    oracle.com
  • Oracle Database 11g Release 2 Enterprise Edition
    oracle.com
  • Sun Storage F5100 Flash Array
    oracle.com

Disclosure Statement

Oracle's PeopleSoft Payroll 9.1 benchmark, SPARC T4-4 30.84 min,
http://www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 9/26/2011.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
