Tuesday Feb 12, 2013

OBIEE 11g Benchmark on SPARC T4

Just like the Siebel 8.1.x/SPARC T4 benchmark post, this one too is at least four months overdue. In any case, I hope Oracle BI customers already knew about the OBIEE 11g/SPARC T4 benchmark effort. Here I will try to provide a few additional and interesting details that aren't covered in the following Oracle PR, which was posted on oracle.com on 09/30/2012.

    SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g


Benchmark Details

System Under Test

The entire BI middleware stack, including the WebLogic 11g Server, OBI Server, OBI Presentation Server and Java Host, was installed and configured on a single SPARC T4-4 server consisting of four 8-core 3.0 GHz SPARC T4 processors (total #cores: 32) and 128 GB of physical memory. The operating system is Oracle Solaris 10 8/11.

BI users were authenticated against Oracle Internet Directory (OID) in this benchmark; hence the OID software, which was part of Oracle Identity Management 11.1.1.6.0, was also installed and configured on the system under test (SUT). Oracle BI Server's Query Cache was turned on, and as a result most of the query results were cached in the OBIS layer. That kept database activity minimal, making it practical to run the Oracle 11g R2 database server hosting the OBIEE database on the same box as well.
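As an aside, for anyone who wants to confirm whether the query cache is enabled in their own OBIEE 11g environment, the setting lives in the [CACHE] section of NQSConfig.INI. A minimal check from the shell is sketched below; the instance path is an assumption based on a typical 11g layout, so adjust it for your install.

% grep -in "enable" \
    $ORACLE_INSTANCE/config/OracleBIServerComponent/coreapplication_obis1/NQSConfig.INI

Look for ENABLE = YES; under the [CACHE] section (in 11g this parameter is normally managed centrally through Enterprise Manager).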

The Oracle BI database was hosted on a Sun ZFS Storage 7120 appliance. The BI Web Catalog was on a ZFS pool (zpool) backed by a couple of SSDs.


Test Scenario

In this benchmark, 25,000 concurrent users assumed five different business user roles -- Marketing Executive, Sales Representative, Sales Manager, Sales Vice-president, and Service Manager. The load was distributed equally among those five business user roles. Each of those BI users accessed five different pre-built dashboards, with each dashboard having an average of five reports - a mix of charts, tables and pivot tables - and returning 50-500 rows of aggregated data. The benchmark test scenario included drilling down into multiple levels from a table or chart within a dashboard. There was a 60-second think time between requests, per user.


BI Setup & Test Results

OBIEE 11g 11.1.1.6.0 was deployed on the SUT in a vertical scale-out fashion. Two Oracle BI Presentation Server processes, one Oracle BI Server process, one Java Host process and two instances of WebLogic Managed Servers handled 25,000 concurrent user sessions smoothly. This configuration resulted in a sub-second overall average transaction response time (an average of averages over a two-hour run). On average, 450 business transactions were executed per second, which triggered 750 SQL executions per second.
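For those wondering what such a vertically scaled-out BI deployment looks like at runtime, the OPMN-managed system components (Presentation Servers, BI Server, Java Host) can be listed from the BI instance home; the WebLogic managed servers are administered separately through the WLS console. A quick sketch, assuming the default instance location:

% $ORACLE_INSTANCE/bin/opmnctl status

The output shows one row per component instance (for example, two coreapplication_obips entries and one coreapplication_obis entry in a configuration like the one above) along with its current status.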

It took only 52% CPU on average (~5% system CPU, the rest in user land) to achieve the throughput outlined above. Since 25,000 unique test/BI users hammered different dashboards consistently, not surprisingly the bulk of the CPU was spent in the Oracle BI Presentation Server layer, which took a whopping 29%. The BI Server consumed about 10-11%, and the rest was shared by the Java Host, OID, the WebLogic Managed Server instances and the Oracle database.


So, what is the key takeaway from this whole exercise?

SPARC T4 rocks the Oracle BI world. OBIEE 11g on SPARC T4 is a combination that should work well for the majority of OBIEE deployments on the Solaris platform. Or, in marketing jargon: the excellent vertical and horizontal scalability of the SPARC T4 server gives customers the option to scale up as well as scale out, to support large BI EE installations with minimal hardware investment.

Evaluate and decide for yourself.

[Credit to our colleagues in Oracle FMW PSR, ISVe teams and SCA lab support engineers]

Wednesday Jan 30, 2013

Siebel 8.1.1.4 Benchmark on SPARC T4

Siebel is a multi-threaded native application that performs well on Oracle's T-series SPARC hardware. We have several Siebel benchmarks published on previous-generation T-series servers, ranging from the Sun Fire T2000 to the Oracle SPARC T3-4. So it is natural to see that tradition extended to the current-generation SPARC T4 as well.

Benchmark Details

A 29,000-user Siebel 8.1.1.4 benchmark on a mix of SPARC T4-1 and T4-2 servers was announced during the Oracle OpenWorld 2012 event. In this benchmark, Siebel application server instances ran on three SPARC T4-2/Solaris 10 8/11 systems, whereas the Oracle 11gR2 database server was configured on a single SPARC T4-1/Solaris 11 11/11 system. Several iPlanet Web Server 7 U9 instances with the Siebel Web Plug-in (SWE) installed ran on one SPARC T4-1/Solaris 10 8/11 system. The Siebel database was hosted on a single Sun Storage F5100 flash array consisting of 80 flash modules (FMODs), each with a capacity of 24 GB.

Siebel Call Center and Order Management System are the modules that were tested in this benchmark. The benchmark workload had 70% of virtual users running Siebel Call Center transactions and the remaining 30% of vusers running Siebel Order Management System transactions. This benchmark on T4 exhibited sub-second response times on average for both the Siebel Call Center and Order Management System modules.

Load balancing at various layers including web and test client systems ensured near uniform load across all web and application server instances. All three Siebel application server systems consumed ~78% CPU on average. The database and web server systems consumed ~53% and ~18% CPU respectively.

All these details are supposed to be available in a standard Oracle|Siebel benchmark template - but for some reason, I couldn't find it on Oracle's Siebel Benchmark White Papers web page yet. Meanwhile check out the following PR that was posted on oracle.com on 09/28/2012.

    SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark

It looks like the large number of vusers (29,000, to be precise) sets this benchmark apart from the other benchmarks published with the same Siebel 8.1.1.4 workload.

[Credit to our colleagues in Siebel PSR, benchmark, SAE and ISVe teams]

Friday Dec 28, 2012

Solaris Tips : CPU Cache Sizes, Changing System Date

Tip #1: Finding the CPU cache sizes from Solaris operating environment

Use the prtpicl utility to list the system configuration, and look for the cache sizes in that output.

eg.,

$ /usr/sbin/prtpicl -v |grep cache
              :l1-icache-size    0x10000
              :l1-icache-line-size       0x40
              :l1-icache-associativity   0x2
              :l1-dcache-size    0x10000
              :l1-dcache-line-size       0x40
              :l1-dcache-associativity   0x2
              :l2-cache-size     0x500000
              :l2-cache-line-size        0x100
              :l2-cache-associativity    0xa

[Updated 01/14/13] The above output was gathered from an M4000 system that has SPARC64 VII processors.
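Note that prtpicl reports these cache sizes in hexadecimal bytes. In the output above, 0x10000 corresponds to 64 KB L1 instruction and data caches and 0x500000 to a 5 MB L2 cache. Any shell with arithmetic expansion (bash or ksh93, for example) can do the conversion:

% echo $((0x10000)) $((0x500000))
65536 5242880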

Recent update releases of Solaris 10 and 11 show the prtpicl reported cache sizes in decimal numbers.

Here is a slightly improved prtpicl command that filters out unwanted output. (Courtesy: Georg)

/usr/sbin/prtpicl -v -c cpu | egrep "^ +cpu|ID|cache"

Tip #2: Changing the System Date

Use date to change the system date. For example, to set the system date to March 9, 2008 08:15 AM, run the following command. Syntax: date mmddHHMMyy

# date 0309081508

Sun Mar 9 08:15:03 PST 2008

Friday Nov 23, 2012

emca fails with "Database instance is unavailable" though available

The following example shows the symptoms of failure, and the exact error message.

$ emca -repos create

...
Password for SYSMAN user:  

Do you wish to continue? [yes(Y)/no(N)]: Y
Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
         checkDbAvailabilityImpl
WARNING: ORA-01034: ORACLE not available

Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
         throwDBUnavailableException
SEVERE: 
Database instance is unavailable. Fix the ORA error thrown and 
run EM Configuration Assistant again.

Some of the possible reasons may be : 

1) Database may not be up. 
2) Database is started setting environment variable ORACLE_HOME 
with trailing '/'. Reset ORACLE_HOME and bounce the database. 

For eg. Database is started setting environment variable 
ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db  and bounce 
the database.

Fix:

Ensure that ORACLE_HOME points to the right location in the $ORACLE_HOME/bin/emca file.

If ORACLE_HOME was copied over from another location rather than installed from scratch, it likely results in a wrong ORACLE_HOME location in several Enterprise Manager (EM) specific scripts and files. This usually happens when the directory structure on the target machine is not identical to the structure on the original/source machine, including the top-level directory where the Oracle RDBMS was properly installed using the installer.
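A quick way to spot the stale setting is to look at how ORACLE_HOME is defined inside the emca wrapper script itself. The check below is only a sketch; whatever path it reveals was recorded at install time on the source machine, so correct it (along with any other EM scripts carrying the old path) before re-running emca.

% grep -n "ORACLE_HOME" $ORACLE_HOME/bin/emca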

Monday Oct 15, 2012

Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com.

    The Oracle Optimized Solution for Oracle E-Business Suite

This solution is centered around the SPARC SuperCluster T4-4 engineered system. Check out the business and technical white papers, along with a number of other relevant resources, on the optimized solution page for EBS linked above.

What is an Optimized Solution?

Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized Solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An optimized solution is usually a well-documented architecture that was thoroughly tested on a target platform. The technical white paper details the deployed application architecture, along with various observations ranging from installing the application on the target platform to its behavior and performance in highly available and scalable configurations.

Oracle E-Business Suite R12 Use Case

Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution -- Financials (online - oracle forms & web requests), Order Management (online - oracle forms & web requests) and HRMS (online - web requests & payroll batch). The solution will be updated with additional application modules when they are available.

Oracle Solaris Cluster is responsible for the high availability portion of the solution.

Performance Data

For the sake of completeness, test results were also documented in the optimized solution white paper. Those test results are mainly for educational purposes only. They give a good sense of application behavior under the circumstances in which the application was tested. Since the major focus of the optimized solution is on highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle; such comparisons may lead to skewed, incorrect conclusions.

Questions & Requests

Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on a non-engineered system such as SPARC T4-X, or on an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

Monday Sep 24, 2012

E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work at hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view.

THREADS parameter

The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. On the downside, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which give back some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK SHUFFLE, help alleviate that database contention.

eg.,

Set a value for THREADS parameter as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'THREADS';

COMMIT;

(I am not aware of any maximum value for the THREADS parameter.)
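Since the guidance above ties THREADS to the number of virtual processors, here is a quick way to find that number on Solaris; psrinfo prints one line per virtual processor:

% /usr/sbin/psrinfo | wc -l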


CHUNK_SIZE parameter

The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of the desired size, represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any given time.

When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted.

It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into the relevant table(s). When multiple child processes insert data at the same time into the same set of tables, as explained earlier, the database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with the CHUNK_SIZE value for the initial phase in steps of +/-10 and observe the performance impact. A larger value may make sense during the main processing phase. Again, experimentation is the key to finding a suitable value for your environment. Start with a large value, such as 2000, for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found.

eg.,

Set a value for CHUNK_SIZE parameter as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'CHUNK_SIZE';

COMMIT;

The CHUNK_SIZE action parameter accepts values as low as 1 and as high as 16000.


CHUNK SHUFFLE parameter

By default, chunks of assignment actions are processed sequentially by all threads, which may not be a good thing, especially given that all child processes/threads perform similar actions against the same set of tables at almost the same time. By "not a good thing", I mean that the default behavior leads to contention in the database (in data blocks, for example).

It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured.

eg.,

Set chunk shuffling as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = 'Y'
WHERE PARAMETER_NAME = 'CHUNK SHUFFLE';

COMMIT;

Finally, I recommend checking out the following document for additional details and other tunable pay action parameters that may speed up Oracle Payroll processing.
    My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks

Also, experiment with different combinations of parameters and values until the right set of action parameters and values is found for your deployment.
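To review the current settings before and after experimenting, the same view can simply be queried. A small sketch using sqlplus from the shell; the credentials are placeholders, and parameters that do not appear in the output fall back to their defaults:

% sqlplus APPS/APPS_PASSWORD <<'EOF'
SELECT PARAMETER_NAME, PARAMETER_VALUE
  FROM PAY_ACTION_PARAMETERS
 WHERE PARAMETER_NAME IN ('THREADS', 'CHUNK_SIZE', 'CHUNK SHUFFLE');
EXIT;
EOF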

Friday Aug 03, 2012

Enabling 2 GB Large Pages on Solaris 10

A few facts:

  • 8 KB is the default page size on Solaris 10 and 11 as of this writing
  • both hardware and software must have support for 2 GB large pages
  • SPARC T4 hardware is capable of supporting 2 GB pages
  • the Solaris 11 kernel has built-in support for 2 GB pages
  • Solaris 10 has no default support for 2 GB pages
  • memory-intensive 64-bit applications may benefit the most from using 2 GB pages

Prerequisites:

OS: Solaris 10 8/11 (Update 10) or later
Hardware: SPARC T4. eg., SPARC T4-1, T4-2 or T4-4

Steps to enable 2 GB large pages on Solaris 10:

  1. Install the latest kernel patch or ensure that 147440-04 or later was installed

  2. Add the following line to /etc/system and reboot
    • set max_uheap_lpsize=0x80000000

  3. Finally, check the output of the following command when the system is back online (a quick way to verify that a running application actually uses the 2 GB pages follows the example below)
    • pagesize -a

    eg.,
    % pagesize -a
    8192		<-- 8K
    65536		<-- 64K
    4194304		<-- 4M
    268435456	<-- 256M
    2147483648	<-- 2G
    
    % uname -a
    SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v
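
To verify after the reboot that a 64-bit application is actually being backed by 2 GB pages (rather than merely that the page size is available), inspect the process address space with pmap; the -s option adds a page-size column to each mapping. The process ID below is a hypothetical placeholder:

% pmap -sx 12345 | grep 2G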
    


Sunday Jul 29, 2012

[OID] ldap_modify: Failed to find member in mandatory or optional attribute list

A sample LDAP entry and the resulting error message are shown below. The objective is simple - adding a new member (employee) to an existing group (Administrators).

% cat assigngrp.ldif

dn: cn=Administrators,ou=groups,ou=entapp
changetype: modify
add: member
member: cn=emp1234,ou=people,ou=entapp

% ldapmodify -p 3060 -h localhost -D "cn=orcladmin" -w passwd -f assigngrp.ldif
add member:
	cn=emp1234,ou=people,ou=entapp
modifying entry cn=Administrators,ou=groups,ou=entapp
ldap_modify: Object class violation
ldap_modify: additional info: Failed to find member in mandatory or \
     optional attribute list.

The above error message is a generic one. It would have been nice had it shown the expected and actual inputs as part of the error. However, it does give a hint that an object class was violated. In this example, the group "Administrators" was created under the object class groupOfUniqueNames.

% ldapsearch -p 3060 -h localhost -b "ou=groups,ou=entapp" -A "(objectclass=*)"
..
cn=Administrators,ou=groups,ou=entapp
Administrators,groups,entapp
cn
uniquemember
objectclass
..

RFC 4519 for Lightweight Directory Access Protocol (LDAP) requires the uniqueMember attribute within the groupOfUniqueNames object class. An excerpt from the original RFC:

3.6.  'groupOfUniqueNames'
	...

      ( 2.5.6.17 NAME 'groupOfUniqueNames'
         SUP top
         STRUCTURAL
         MUST ( uniqueMember $
               cn )
         MAY ( businessCategory $
               seeAlso $
               owner $
               ou $
               o $
               description ) )

Going back to the issue at hand, the attribute being added in the "modify" LDAP entry must be uniqueMember, not member. That is the object class violation in this case, and now the fix is obvious.

The modified entry and the output from Oracle Internet Directory's ldapmodify command are shown below.

% cat assigngrp.ldif

dn: cn=Administrators,ou=groups,ou=entapp
changetype: modify
add: uniqueMember
uniqueMember: cn=emp1234,ou=people,ou=entapp

$ ldapmodify -p 3060 -h localhost -D "cn=orcladmin" -w passwd -f assigngrp.ldif
add uniqueMember:
	cn=emp1234,ou=people,ou=entapp
modifying entry cn=Administrators,ou=groups,ou=entapp
modify complete

Though the above example was derived from an Oracle Internet Directory (OID) environment, the problem and the solution are applicable to all environments running LDAP servers.

Tuesday Jun 12, 2012

Oracle E-Business Suite Tip : SQL Tracing

Issue:

Attempts to enable SQL tracing from the concurrent request form fail with the error:

Function not available to this responsibility.
Change Responsibilities or contact your System Administrator

Resolution:

Switch responsibility to "System Administrator". Navigate to System -> Profiles, and query for "%Diagnostics%" (the "Utilities : Diagnostics" profile option). Once the profile is found, change its value to "Yes". Restart the web browser and try enabling SQL trace again.

Tuesday May 08, 2012

OBIEE 11g: Resolving Presentation Services Startup Failure

ISSUE:

Starting Presentation Services fails with the error:

[OBIPS] [ERROR:1] [] [saw.security.odbcuserpopulationimpl.getbisystemconnection] [ecid: ] [tid: ] Authentication Failure.
Odbc driver returned an error (SQLDriverConnectW).
State: 08004.  Code: 10018.  [NQODBC] [SQL_STATE: 08004] [nQSError: 10018] Access for the requested connection is refused.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43126] Authentication failed: invalid user/password. (08004)[[

Connecting to the metadata repository (RPD) in online mode also fails with a similar error.

Looking through the BI server log, nqserver.log, you may find an error message similar to the following:

[OracleBIServerComponent] [ERROR:1] [] [] [ecid: 0001J1LfUetFCC3LVml3ic0000pp000000] [tid: 1] 
[13026] Error in getting roles from BI Security Service:	 
'Error Message From BI Security Service: [nQSError: 46164] HTTP Server returned 404 (Not Found) for URL .' ^M

RESOLUTION:

  • Connect to WebLogic Server (WLS) Console -> Deployments. Ensure that all deployed components are in 'Active' state.

  • If any of the components is in 'Prepared' state, select that application and then click on "start servicing all requests"

  • Restart BI Server and Presentation Services

In some cases, the following additional steps might be needed to resolve the issue.

  • Access the Enterprise Manager Fusion Middleware control: http://<host.domain>:port/em

  • Navigate to Business Intelligence -> coreapplication

  • 'Capacity Management' tab -> 'Scalability' sub-tab

  • Click on 'Lock and Edit Configuration' button

  • Enter the IP address in the 'Listen Address' field

  • Click on 'Activate Changes' followed by 'Release Configuration' buttons

  • Restart BI Server and Presentation Services

Also check these My Oracle Support (MOS) documents for more clues and information.

1387283.1 Authentication failed: invalid user/password
1251364.1 Error: "[nQSError: 10018] Access .. Refused. [nQSError: 43126] Authentication Failed .." when Installing OBIEE 11g
1410233.1 How To Bind Components / Ports To A Specific IP Address On Multiple Network Interface (NIC) Machines
