Tuesday Feb 25, 2014

AIX customers: Run for the Hills ..

.. or keep your cool and embrace Solaris.

When Oracle acquired Sun, IBM, like every other competitor Sun had, tried to capitalize on the situation: doubts were raised about Oracle's ability to turn Sun's hardware business around, and Solaris customers were advised to flee SPARC. Fast forward four years, and Oracle appears to have dispelled those doubts with a proven long-term commitment to the Solaris/SPARC business, consistent investment, and delivery on established roadmaps. Besides, Oracle has been innovating in the server space with engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructure while increasing productivity and performance.

On the other hand, judging by recent events at IBM, such as the sale of critical server technologies, a declining data center business, employee furloughs and layoffs, it appears that Big Blue has struggles of its own to deal with. In any case, irrespective of what is happening at IBM, AIX customers contemplating a move to a modern operating platform that is reliable, secure, cloud-ready and offers a rich set of features to virtualize, consolidate, diagnose, debug and, most importantly, scale and perform have an attractive alternative in Oracle Solaris. Act before it is too late.

Unfortunately, migrating large deployments from one platform to another is not as easy as migrating desktop users from one operating system to another. So, Oracle put together a set of documents to make the AIX-to-Solaris transition as smooth as possible for existing AIX customers. Access the AIX-to-Solaris migration pages at:

     http://www.oracle.com/aixtosolaris
     Modernizing IBM AIX/Power to Oracle Solaris/SPARC (Oracle Technology Network)

The above pages have pointers to white papers such as the IBM AIX to Oracle Solaris Technology Mapping Guide (for system admins and power users), Simplify the Migration of Oracle Database and Oracle Applications from AIX to Oracle Solaris (for DBAs and application-specific admins) and IBM AIX Technologies Compared to Oracle Solaris 11, along with hands-on labs, training, blogs and other useful resources. Check those out, and use the contact information available on those pages to speak or chat with the relevant Oracle team(s) who can help you get started with the migration process. Good luck.

Friday May 31, 2013

Oracle Internet Directory 11g Benchmark on SPARC T5

SUMMARY

System Under Test (SUT)     Oracle's SPARC T5-2 server
Software     Oracle Internet Directory 11gR1-PS6
Target Load     50 million user entries
Reference URL     OID/T5 benchmark white paper

Oracle Internet Directory (OID) is an LDAP v3 directory server with a multi-threaded, multi-process, multi-instance architecture that uses the Oracle database as its directory store.


BENCHMARK WORKLOAD DESCRIPTION

Five test scenarios were executed in this benchmark, each exercising a different type of LDAP operation. The key metrics are throughput (the number of operations completed per second) and latency (the time, in milliseconds, to complete an operation).

TEST SCENARIOS & RESULTS

1. LDAP Search operation : search for and retrieve specific entries from the directory

In this test scenario, each LDAP Search operation matches a single unique entry. Entries are looked up randomly, in such a way that no client looks up the same entry twice and no two clients look up the same entry.
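For illustration, a search of this kind can be issued with a standard command-line LDAP client (shown here in OpenLDAP style). The host, port, bind credentials, suffix and uid below are placeholders, not the benchmark's actual values:

    % ldapsearch -x -h oidhost.example.com -p 3060 -D "cn=orcladmin" -w welcome1 \
          -b "cn=users,dc=example,dc=com" -s sub "(uid=user0000001)"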

#Clients     Throughput (Operations/Second)     Latency (milliseconds)
1,000        944,624                            1.05

2. LDAP Add operation : add entries, their object classes, attributes and values to the directory

In this test scenario, 16 concurrent LDAP clients added 500,000 entries of object class InetOrgPerson with 21 attributes to the directory.
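A sketch of an equivalent add operation, using an OpenLDAP-style client and a small LDIF file (the DN, attribute values and credentials are placeholders; the benchmark's actual LDIF is not published):

    % cat user0000001.ldif
    dn: uid=user0000001,cn=users,dc=example,dc=com
    objectclass: top
    objectclass: person
    objectclass: organizationalPerson
    objectclass: inetOrgPerson
    uid: user0000001
    cn: Test User 0000001
    sn: User
    userpassword: Password1

    % ldapadd -x -h oidhost.example.com -p 3060 -D "cn=orcladmin" -w welcome1 -f user0000001.ldif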

#Clients     Throughput (Operations/Second)     Latency (milliseconds)
16           1,000                              15.95

3. LDAP Compare operation : compare a given attribute value to the attribute value in a directory entry

In this test scenario, the userpassword attribute was compared. That is, each LDAP Compare operation checks a supplied password against a user's stored password.
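A compare of this kind, sketched with an OpenLDAP-style client (the DN and password are placeholders); the server returns TRUE or FALSE depending on whether the supplied value matches the stored one:

    % ldapcompare -x -h oidhost.example.com -p 3060 -D "cn=orcladmin" -w welcome1 \
          "uid=user0000001,cn=users,dc=example,dc=com" "userpassword:Password1"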

#Clients     Throughput (Operations/Second)     Latency (milliseconds)
1,000        594,426                            1.68

4. LDAP Modify operation : add, delete or replace attributes for entries

In this test scenario, 50 concurrent LDAP clients each updated a unique entry on every operation, for a total of 50 million updated entries. The attribute being modified was not indexed.
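A sketch of a comparable modify, again with an OpenLDAP-style client and an LDIF change record (the DN, the attribute chosen here and its value are placeholders; the benchmark does not state which unindexed attribute was modified):

    % cat modify0000001.ldif
    dn: uid=user0000001,cn=users,dc=example,dc=com
    changetype: modify
    replace: description
    description: updated by load client 07

    % ldapmodify -x -h oidhost.example.com -p 3060 -D "cn=orcladmin" -w welcome1 -f modify0000001.ldif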

#Clients     Throughput (Operations/Second)     Latency (milliseconds)
50           16,735                             2.98

5. LDAP Authentication operation : authenticates the credentials of a user

In this test scenario, 1,000 concurrent LDAP clients authenticated 50 million users.
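An authentication is essentially an LDAP bind with the user's own DN and password. A minimal sketch with an OpenLDAP-style client (the DN and password are placeholders) binds as the user and reads the root DSE:

    % ldapsearch -x -h oidhost.example.com -p 3060 \
          -D "uid=user0000001,cn=users,dc=example,dc=com" -w Password1 -b "" -s base "(objectclass=*)" dn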

#Clients     Throughput (Operations/Second)     Latency (milliseconds)
1,000        305,307                            3.27

BONUS: LDAP Mixed operations Test

In this test scenario, 1,000 LDAP clients were used to perform LDAP Search, Bind and Modify operations concurrently.
Operation breakdown (load distribution): Search 65%, Bind 30%, Modify 5%.

LDAP Operation     #Clients     Throughput (Operations/Second)     Latency (milliseconds)
Search             650          188,832                            3.86
Bind               300          87,159                             1.08
Modify             50           14,528                             12

And finally, the:

HARDWARE CONFIGURATION

1 x Oracle SPARC T5-2 Server
    » 2 x 3.6 GHz SPARC T5 sockets each with 16 Cores (Total Cores: 32) and 8 MB L3 cache
    » 512 GB physical memory
    » 2 x 10 GbE cards
    » 1 x Sun Storage F5100 Flash Array with 80 flash modules
    » Oracle Solaris 11.1 operating system

ACKNOWLEDGEMENTS

Major credit goes to our colleague, Ramaprakash Sathyanarayan.

Friday Apr 12, 2013

Siebel 8.1.1.4 Benchmark on SPARC T5

Barely six months after announcing Siebel 8.1.1.4 benchmark results on Oracle SPARC T4 servers, we have a brand new set of Siebel 8.1.1.4 benchmark results on Oracle SPARC T5 servers. There have been no updates to the Siebel benchmark kit in the last couple of years, so we continued to use the Siebel 8.1.1.4 benchmark workload to measure the performance of Siebel Financial Services Call Center and Order Management business transactions on the recently announced SPARC T5 servers.

Benchmark Details

The latest Siebel 8.1.1.4 benchmark was executed on a mix of SPARC T5-2, SPARC T4-2 and SPARC T4-1 servers. The benchmark test simulated the actions of a large corporation with 40,000 concurrent active users. To date, this is the highest user count we have achieved in a Siebel benchmark.


User Load Breakdown & Achieved Throughput

Siebel Application Module          %Total Load     #Users     Business Transactions per Hour
Financial Services Call Center     70              28,000     273,786
Order Management                   30              12,000     59,553
Total                              100             40,000     333,339

Average Transaction Response Times for both Financial Services Call Center and Order Management transactions were under one second.


Software & Hardware Specification

Test Component            Software                                   Server Model     Qty     Chips / Cores / vCPUs     CPU Speed & Type               Memory     OS
Application Server        Siebel 8.1.1.4                             SPARC T5-2       2       2 / 32 / 256              3.6 GHz SPARC-T5               512 GB     Solaris 10 1/13 (S10U11)
Database Server           Oracle 11g R2 11.2.0.2                     SPARC T4-2       1       2 / 16 / 128              2.85 GHz SPARC-T4              256 GB     Solaris 10 8/11 (S10U10)
Web Server                iPlanet Web Server 7.0.9 (7 U9)            SPARC T4-1       1       1 / 8 / 64                2.85 GHz SPARC-T4              128 GB     Solaris 10 8/11 (S10U10)
Load Generator            Oracle Application Test Suite 9.21.0043    SunFire X4200    1       2 / 4 / 4                 2.6 GHz AMD Opteron 285 SE     16 GB      Windows 2003 R2 SP2
Load Drivers (Agents)     Oracle Application Test Suite 9.21.0043    SunFire X4170    8       2 / 12 / 12               2.93 GHz Intel Xeon X5670      48 GB      Windows 2003 R2 SP2

(Chips / Cores / vCPUs, CPU and memory figures are per server.)

Additional Notes:

  • Siebel Gateway Server was configured to run on one of the application server nodes
  • Four Siebel application servers were configured in the Siebel Enterprise to handle 40,000 concurrent users
    - Each SPARC T5-2 was configured to run two Siebel application server instances
    - The Siebel application server instances on each SPARC T5-2 were isolated from one another using Solaris Zones, the built-in Solaris virtualization technology (a minimal zone setup sketch follows this list)
    - The 40,000 concurrent user sessions were load balanced across all four Siebel application server instances
  • The Siebel database was hosted on a Sun Storage F5100 Flash Array consisting of 80 x 24 GB flash modules (FMODs)
    - The Siebel 8.1.1.4 benchmark workload is not I/O intensive and does not require flash storage for better I/O performance
  • Fourteen iPlanet Web Server virtual servers were configured with the Siebel Web Server Extension (SWSE) plug-in to handle the 40,000 concurrent user load
    - All fourteen iPlanet Web Server instances forwarded HTTP requests from Siebel clients to all four Siebel application server instances in round robin fashion
  • Oracle Application Test Suite (OATS) was stable and held up amazingly well over the entire duration of the test run
  • The benchmark test results were validated and thoroughly audited by the Siebel benchmark and PSR teams
    - Nothing new here; all Sun-published Siebel benchmarks, including the SPARC T4 one, were properly audited before being released to the outside world
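As noted above, the two Siebel application server instances on each SPARC T5-2 were isolated with Solaris Zones. A minimal sketch of carving out one such zone on Solaris 10 follows; the zone name, zone path, network interface and IP address are hypothetical, not the benchmark's actual settings:

    % cat siebelzone1.cfg
    create
    set zonepath=/zones/siebelzone1
    set autoboot=true
    add net
    set physical=ixgbe0
    set address=192.168.10.101/24
    end

    # zonecfg -z siebelzone1 -f siebelzone1.cfg
    # zoneadm -z siebelzone1 install
    # zoneadm -z siebelzone1 boot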

Resource Utilization

Component #Users CPU% Memory Footprint
Gateway/Application Server 20,000 67.03 205.54 GB
Application Server 20,000 66.09 206.24 GB
Database Server 40,000 33.43 108.72 GB
Web Server 40,000 29.48 14.03 GB

Finally, how does this benchmark stack up against other published benchmarks? The short answer is "very well". Head over to the Oracle Siebel Benchmark White Papers webpage to do the comparison yourself.



[Credit to our hard working colleagues in SAE, Siebel PSR, benchmark and Oracle Platform Integration (OPI) teams. Special thanks to Sumti Jairath and Venkat Krishnaswamy for the last minute fire drill]

A copy of this blog post is also available at:
Siebel 8.1.1.4 Benchmark on SPARC T5

Tuesday Feb 12, 2013

OBIEE 11g Benchmark on SPARC T4

Just like the Siebel 8.1.x/SPARC T4 benchmark post, this one too is at least four months overdue. In any case, I hope Oracle BI customers already know about the OBIEE 11g/SPARC T4 benchmark effort. Here I will try to provide a few additional and interesting details that are not covered in the following Oracle PR, which was posted on oracle.com on 09/30/2012.

    SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g


Benchmark Details

System Under Test

The entire BI middleware stack, including the WebLogic 11g Server, OBI Server, OBI Presentation Server and Java Host, was installed and configured on a single SPARC T4-4 server consisting of four 8-core 3.0 GHz SPARC T4 processors (32 cores in total) and 128 GB of physical memory. Oracle Solaris 10 8/11 was the operating system.

BI users were authenticated against Oracle Internet Directory (OID) in this benchmark, so the OID software, which was part of Oracle Identity Management 11.1.1.6.0, was also installed and configured on the system under test (SUT). Oracle BI Server's Query Cache was turned on; as a result, most query results were cached in the OBIS layer, which kept database activity to a minimum and made it practical to run the Oracle 11g R2 database server hosting the OBIEE database on the same box as well.

The Oracle BI database was hosted on a Sun ZFS Storage 7120 appliance. The BI Web Catalog resided in a ZFS pool built on a couple of SSDs.


Test Scenario

In this benchmark, 25,000 concurrent users assumed five different business user roles: Marketing Executive, Sales Representative, Sales Manager, Sales Vice President, and Service Manager. The load was distributed equally among those five roles. Each BI user accessed five different pre-built dashboards, with each dashboard containing an average of five reports (a mix of charts, tables and pivot tables) and returning 50 to 500 rows of aggregated data. The test scenario included drilling down multiple levels from a table or chart within a dashboard. There was a 60-second think time between requests, per user.


BI Setup & Test Results

OBIEE 11g 11.1.1.6.0 was deployed on the SUT in a vertical scale-out fashion. Two Oracle BI Presentation Server processes, one Oracle BI Server process, one Java Host process and two WebLogic Managed Server instances handled 25,000 concurrent user sessions smoothly. This configuration resulted in a sub-second overall average transaction response time (an average of averages over a test duration of 120 minutes). On average, 450 business transactions were executed per second, which triggered 750 SQL executions per second.

It took only 52% CPU on average (~5% system CPU, the rest in user land) to achieve the throughput outlined above. Since 25,000 unique test/BI users were consistently hammering different dashboards, not surprisingly the bulk of the CPU was spent in the Oracle BI Presentation Server layer, which took a whopping 29%. The BI Server consumed about 10-11%, and the rest was shared by the Java Host, OID, the WebLogic Managed Server instances and the Oracle database.


So, what is the key takeaway from this whole exercise?

SPARC T4 rocks the Oracle BI world. OBIEE 11g on SPARC T4 is a combination that should work well for the majority of OBIEE deployments on the Solaris platform. Or, in marketing jargon: the excellent vertical and horizontal scalability of the SPARC T4 server gives customers the option to scale up as well as scale out, to support large BI EE installations with minimal hardware investment.

Evaluate and decide for yourself.

[Credit to our colleagues in Oracle FMW PSR, ISVe teams and SCA lab support engineers]

Wednesday Jan 30, 2013

Siebel 8.1.1.4 Benchmark on SPARC T4

Siebel is a multi-threaded native application that performs well on Oracle's T-series SPARC hardware. We have several Siebel benchmarks published on previous generation T-series servers, ranging from the Sun Fire T2000 to the Oracle SPARC T3-4. So, it is natural to see that tradition extended to the current generation SPARC T4 as well.

Benchmark Details

A 29,000-user Siebel 8.1.1.4 benchmark on a mix of SPARC T4-1 and T4-2 servers was announced during the Oracle OpenWorld 2012 event. In this benchmark, the Siebel application server instances ran on three SPARC T4-2/Solaris 10 8/11 systems, whereas the Oracle 11gR2 database server was configured on a single SPARC T4-1/Solaris 11 11/11 system. Several iPlanet Web Server 7 U9 instances with the Siebel Web plug-in (SWE) installed ran on one SPARC T4-1/Solaris 10 8/11 system. The Siebel database was hosted on a single Sun Storage F5100 Flash Array consisting of 80 flash modules (FMODs), each with a capacity of 24 GB.

Siebel Call Center and Order Management System were the modules tested in this benchmark. The workload had 70% of the virtual users running Siebel Call Center transactions and the remaining 30% running Siebel Order Management System transactions. On T4, the benchmark exhibited sub-second average response times for both the Siebel Call Center and Order Management System modules.

Load balancing at various layers including web and test client systems ensured near uniform load across all web and application server instances. All three Siebel application server systems consumed ~78% CPU on average. The database and web server systems consumed ~53% and ~18% CPU respectively.

All these details are supposed to be available in a standard Oracle|Siebel benchmark template, but for some reason I could not find it on Oracle's Siebel Benchmark White Papers web page yet. Meanwhile, check out the following PR that was posted on oracle.com on 09/28/2012.

    SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark

It looks like the large number of virtual users (29,000, to be precise) sets this benchmark apart from the other benchmarks published with the same Siebel 8.1.1.4 workload.

[Credit to our colleagues in Siebel PSR, benchmark, SAE and ISVe teams]

Monday Oct 15, 2012

Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com.

    The Oracle Optimized Solution for Oracle E-Business Suite

This solution is centered around the SPARC SuperCluster T4-4 engineered system. Check out the business and technical white papers, along with a number of other relevant resources, at the optimized solution page for EBS linked above.

What is an Optimized Solution?

Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized Solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An Optimized Solution is a well-documented architecture that has been thoroughly tested on a target platform. The technical white paper details the deployed application architecture, along with various observations ranging from installing the application on the target platform to its behavior and performance in highly available and scalable configurations.

Oracle E-Business Suite R12 Use Case

Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution: Financials (online: Oracle Forms and web requests), Order Management (online: Oracle Forms and web requests) and HRMS (online: web requests, plus payroll batch). The solution will be updated with additional application modules when they become available.

Oracle Solaris Cluster is responsible for the high availability portion of the solution.

Performance Data

For the sake of completeness, test results were also documented in the optimized solution white paper. Those results are for educational purposes only; they give a good sense of application behavior under the circumstances in which the application was tested. Since the major focus of the optimized solution is on highly available and scalable configurations, the application was configured to meet those criteria. Hence, the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle, and such a comparison may lead to skewed, incorrect conclusions.

Questions & Requests

Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on a non-engineered system such as a SPARC T4-x server, or on an engineered system such as the SPARC SuperCluster, contact the Oracle Solution Center.

Friday Aug 03, 2012

Enabling 2 GB Large Pages on Solaris 10

A few facts:

  • 8 KB is the default page size on Solaris 10 and 11 as of this writing
  • Both the hardware and the software must support 2 GB large pages
  • SPARC T4 hardware is capable of supporting 2 GB pages
  • The Solaris 11 kernel has built-in support for 2 GB pages
  • Solaris 10 has no default support for 2 GB pages
  • Memory-intensive 64-bit applications may benefit the most from using 2 GB pages

Prerequisites:

OS: Solaris 10 8/11 (Update 10) or later
Hardware: SPARC T4. eg., SPARC T4-1, T4-2 or T4-4

Steps to enable 2 GB large pages on Solaris 10:

  1. Install the latest kernel patch, or ensure that 147440-04 or later is already installed

  2. Add the following line to /etc/system and reboot
    • set max_uheap_lpsize=0x80000000

  3. Finally check the output of the following command when the system is back online
    • pagesize -a

    eg.,
    % pagesize -a
    8192		<-- 8K
    65536		<-- 64K
    4194304		<-- 4M
    268435456	<-- 256M
    2147483648	<-- 2G
    
    % uname -a
    SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v
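To confirm that a memory-intensive 64-bit application is actually backed by 2 GB pages at run time, something like the following may help (the PID and the command name are placeholders): pmap -s reports the page size backing each mapping of a process, and ppgsz can request a preferred heap page size when launching an application.

    % pmap -s 12345 | grep 2G
    % ppgsz -o heap=2G ./my_memory_hungry_app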
    


Thursday Apr 14, 2011

Oracle Solaris: Show Me the CPU, vCPU, Core Counts and the Socket-Core-vCPU Mapping

[Replaced old code with new code on 10/03/11]

It should be easy to find this information just by running an OS command. However, for some reason, that is not the case today. The user must know a few details about the underlying hardware and run multiple commands to figure out the exact number of physical processors, cores, etc.

For the benefit of our customers, here is a simple shell script that displays the number of physical processors, cores, virtual processors, cores per physical processor, number of hardware threads (vCPUs) per core and the virtual CPU mapping for all physical processors and cores on a Solaris system (SPARC or x86/x64). This script showed valid output on recent T-series and M-series hardware as well as on some older hardware such as the Sun Fire 4800 and x4600. Due to changes in the cpu_info kstat output over the years, it is possible that the script may return incorrect information in some cases. Since it is just a shell script, tweak the code as you like. The script can be executed by any OS user.

Download the script : showcpucount


% cat showcpucount

--------------------------------------- CUT HERE -------------------------------------------
#!/bin/bash

/usr/bin/kstat -m cpu_info | egrep "chip_id|core_id|module: cpu_info" > /var/tmp/cpu_info.log

nproc=`(grep chip_id /var/tmp/cpu_info.log | awk '{ print $2 }' | sort -u | wc -l | tr -d ' ')`
ncore=`(grep core_id /var/tmp/cpu_info.log | awk '{ print $2 }' | sort -u | wc -l | tr -d ' ')`
vproc=`(grep 'module: cpu_info' /var/tmp/cpu_info.log | awk '{ print $4 }' | sort -u | wc -l | tr -d ' ')`

nstrandspercore=$(($vproc/$ncore))
ncoresperproc=$(($ncore/$nproc))

speedinmhz=`(/usr/bin/kstat -m cpu_info | grep clock_MHz | awk '{ print $2 }' | sort -u)`
speedinghz=`echo "scale=2; $speedinmhz/1000" | bc`

echo "Total number of physical processors: $nproc"
echo "Number of virtual processors: $vproc"
echo "Total number of cores: $ncore"
echo "Number of cores per physical processor: $ncoresperproc"
echo "Number of hardware threads (strands or vCPUs) per core: $nstrandspercore"
echo "Processor speed: $speedinmhz MHz ($speedinghz GHz)"

# now derive the vcpu-to-core mapping based on above information #

echo -e "\n** Socket-Core-vCPU mapping **"
let linenum=2

for ((i = 1; i <= ${nproc}; ++i ))
do
        chipid=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $2 }'`
        echo -e "\nPhysical Processor $i (chip id: $chipid):"

        for ((j = 1; j <= ${ncoresperproc}; ++j ))
        do
                let linenum=($linenum + 1)
                coreid=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $2 }'`
                echo -e "\tCore $j (core id: $coreid):"

                let linenum=($linenum - 2)
                vcpustart=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $4 }'`

                let linenum=(3 * $nstrandspercore + $linenum - 3)
                vcpuend=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $4 }'`

                echo -e "\t\tvCPU ids: $vcpustart - $vcpuend"
                let linenum=($linenum + 4)
        done
done

rm /var/tmp/cpu_info.log
--------------------------------------- CUT HERE -------------------------------------------

# prtdiag | head -1
System Configuration:  Sun Microsystems  sun4u SPARC Enterprise M4000 Server

# ./showcpucount
Total number of physical processors: 4
Number of virtual processors: 32
Total number of cores: 16
Number of cores per physical processor: 4
Number of hardware threads (strands or vCPUs) per core: 2
Processor speed: 2660 MHz (2.66 GHz)

** Socket-Core-vCPU mapping **

Physical Processor 1 (chip id: 1024):
        Core 1 (core id: 0):
                vCPU ids: 0 - 1
        Core 2 (core id: 2):
                vCPU ids: 2 - 3
        Core 3 (core id: 4):
                vCPU ids: 4 - 5
        Core 4 (core id: 6):
                vCPU ids: 6 - 7

Physical Processor 2 (chip id: 1032):
        Core 1 (core id: 8):
                vCPU ids: 8 - 9
        Core 2 (core id: 10):
                vCPU ids: 10 - 11
        Core 3 (core id: 12):
                vCPU ids: 12 - 13
        Core 4 (core id: 14):
                vCPU ids: 14 - 15

Physical Processor 3 (chip id: 1040):
        Core 1 (core id: 16):
                vCPU ids: 16 - 17
        Core 2 (core id: 18):
                vCPU ids: 18 - 19
        Core 3 (core id: 20):
                vCPU ids: 20 - 21
        Core 4 (core id: 22):
                vCPU ids: 22 - 23

Physical Processor 4 (chip id: 1048):
        Core 1 (core id: 24):
                vCPU ids: 24 - 25
        Core 2 (core id: 26):
                vCPU ids: 26 - 27
        Core 3 (core id: 28):
                vCPU ids: 28 - 29
        Core 4 (core id: 30):
                vCPU ids: 30 - 31

Sunday Jan 30, 2011

PeopleSoft Financials 9.0 (Day-in-the-Life) Benchmark on Oracle Sun

It is very rare to see a vendor publish benchmarks on competing products of its own, let alone products that are 100% compatible with each other. Well, it happened at Oracle|Sun. M-series and T-series hardware were the subjects of two similar, comparable benchmarks, and the PeopleSoft Financials 9.0 Day-in-the-Life (DIL) workload was benchmarked on both.

Benchmark report URLs

PeopleSoft Financials 9.0 on Oracle's SPARC T3-1 Server
PeopleSoft Financials 9.0 on Oracle's Sun SPARC Enterprise M4000 Server

Brief description of workload

The benchmark workload simulated Financial Control and Reporting business processes that a customer typically performs when closing their books at period end. "Closing the books" generally involves Journal generation, editing & posting; General Ledger allocations, summary & consolidations and reporting in GL. The applications that were involved in this process are: General Ledger, Asset Management, Cash Management, Expenses, Payables and Receivables.

The benchmark execution simulated the processing required for closing the books (background batch processes) along with some online concurrent transaction activity by 1000 end users.

Summary of Benchmark Test Results

The following table summarizes the test results of the "close the books" business processes. For the online transaction response times, check the benchmark reports (too many transactions to summarize here). Feel free to draw your own conclusions.

As of this writing, no other vendor has published any benchmark results with the PeopleSoft Financials 9.0 workload.

(If the following table is illegible, click here for cleaner copy of test results.)

Configuration 1
  DB + Proc Sched:  1 x Sun SPARC Enterprise M5000 Server
                    (8 x 2.53 GHz QC SPARC64 VII processors, 128 GB RAM)
  App + Web:        1 x SPARC T3-1 Server
                    (1 x 1.65 GHz 16-Core SPARC T3 processor, 128 GB RAM)

                              Batch only                     Batch + 1K users
  Elapsed Time                24.15m (Reporting: 11.67m)     25.03m (Reporting: 11.98m)
  Journal Lines per Hour      6,355,093                      6,141,258
  Ledger Lines per Hour       6,209,682                      5,991,382

Configuration 2
  DB + Proc Sched:  1 x Sun SPARC Enterprise M5000 Server
                    (8 x 2.66 GHz QC SPARC64 VII+ processors, 128 GB RAM)
  App + Web:        1 x Sun SPARC Enterprise M4000 Server
                    (4 x 2.66 GHz QC SPARC64 VII+ processors, 128 GB RAM)

                              Batch only                     Batch + 1K users
  Elapsed Time                21.74m (Reporting: 11.35m)     23.30m (Reporting: 11.42m)
  Journal Lines per Hour      7,059,591                      6,597,239
  Ledger Lines per Hour       6,898,060                      6,436,236

Software Versions

Oracle’s PeopleSoft Enterprise Financials/SCM 9.00.00.331
Oracle’s PeopleSoft Enterprise (PeopleTools) 8.49.23 64-bit
Oracle’s PeopleSoft Enterprise (PeopleTools) 8.49.23 32-bit on Windows Server 2003 SP2 for generating reports using nVision
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 64-bit + RDBMS patch 9699654
Oracle Tuxedo 9.1 RP36 Jolt 9.1 64-bit
Oracle WebLogic Server 9.2 MP3 64-bit (Java version "1.5.0_12")
MicroFocus Server Express 4.0 SP4 64-bit
Oracle Solaris 10 10/09 and 09/10

Acknowledgments

This is one of the most complex and stressful benchmarks I have ever been involved in. It was a collaborative effort across different teams within Oracle Corporation. A sincere thanks to the PeopleSoft benchmark team for providing support throughout the execution of the benchmark and for the swift validation of the benchmark results numerous times (yes, "numerous"; it is not a typo).

Sunday Jan 09, 2011

Oracle 11g : Poor Performance Accessing V$SESSION_FIX_CONTROL

PeopleSoft HCM and Financials/SCM 9.x customers may have to patch their Oracle database server with RDBMS patch 9699654. The rest of the Oracle customer base: read the symptoms and decide.

In a couple of PeopleSoft deployments it was observed that the following SQL is the top query when all queries are sorted by elapsed time or CPU time. The Oracle database server version is 11.2.0.1.0.


SELECT VALUE FROM V$SESSION_FIX_CONTROL WHERE BUGNO = :B1 AND SESSION_ID = USERENV('SID')

The target query is executed thousands of times. The poor performance is due to the lack of a proper index. Here is the execution plan that exhibits the performance issue.

-------------------------------------------------------------------------------
| Id  | Operation         | Name       | Starts | E-Rows | A-Rows |   A-Time   |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |      1 |        |      1 |00:00:00.02 |
|*  1 |  FIXED TABLE FULL | X$QKSBGSES |      1 |      1 |      1 |00:00:00.02 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("BUGNO_QKSBGSEROW"=:B1 AND
	      "SID_QKSBGSEROW"=USERENV('SID') AND "INST_ID"=USERENV('INSTANCE')))

20 rows selected.

Oracle Corporation accepted this behavior as a bug and agreed to fix it in Oracle RDBMS 12.1. Meanwhile, an RDBMS patch was made available to customers running 11.2.0.1 or later. The bug number is 9699654 ("Bad performance of V$SESSION_FIX_CONTROL query"), so Solaris SPARC customers can download RDBMS patch 9699654 directly from the support web site. Customers on other platforms: please search the bug database and support web site with the appropriate keywords.

After applying RDBMS patch 9699654, the optimizer used an index and query performance improved as expected. The target SQL was also no longer the top query; in fact, no references to this particular query were found in the AWR report. The new execution plan is shown below.

----------------------------------------------------------------------------------------------
| Id  | Operation               | Name               | Starts | E-Rows | A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                    |      1 |        |      1 |00:00:00.01 |
|*  1 |  FIXED TABLE FIXED INDEX| X$QKSBGSES (ind:1) |      1 |      1 |      1 |00:00:00.01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("BUGNO_QKSBGSEROW"=:B1 AND "SID_QKSBGSEROW"=USERENV('SID') AND
              "INST_ID"=USERENV('INSTANCE')))

20 rows selected.
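For reference, runtime plans like the ones above (with the Starts, A-Rows and A-Time columns) can be pulled with DBMS_XPLAN.DISPLAY_CURSOR right after executing the statement with the gather_plan_statistics hint. A minimal sketch follows; the credentials, connect string and the literal bug number are placeholders:

    % cat fix_control_plan.sql
    SELECT /*+ gather_plan_statistics */ value
      FROM v$session_fix_control
     WHERE bugno = 9699654 AND session_id = USERENV('SID');

    SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

    % sqlplus -s scott/tiger@PSFTDB @fix_control_plan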