Friday Apr 12, 2013

Siebel 8.1.1.4 Benchmark on SPARC T5

Barely six months after announcing Siebel 8.1.1.4 benchmark results on Oracle SPARC T4 servers, we have a brand new set of Siebel 8.1.1.4 benchmark results on Oracle SPARC T5 servers. There have been no updates to the Siebel benchmark kit in the last couple of years, so we continued to use the Siebel 8.1.1.4 benchmark workload to measure the performance of Siebel Financial Services Call Center and Order Management business transactions on the recently announced SPARC T5 servers.

Benchmark Details

The latest Siebel 8.1.1.4 benchmark was executed on a mix of SPARC T5-2, SPARC T4-2 and SPARC T4-1 servers. The benchmark test simulated the actions of a large corporation with 40,000 concurrent active users. To date, this is the highest user count we have achieved in a Siebel benchmark.


User Load Breakdown & Achieved Throughput

Siebel Application Module       %Total Load  #Users  Business Trx per Hour
------------------------------  -----------  ------  ---------------------
Financial Services Call Center  70           28,000  273,786
Order Management                30           12,000  59,553
Total                           100          40,000  333,339

Average Transaction Response Times for both Financial Services Call Center and Order Management transactions were under one second.


Software & Hardware Specification

Test Component         Software Version                         Server Model    Qty  Per-Server Spec: Chips / Cores / vCPUs / CPU / Memory   OS
---------------------  ---------------------------------------  --------------  ---  ------------------------------------------------------  ------------------------
Application Server     Siebel 8.1.1.4                           SPARC T5-2      2    2 / 32 / 256 / 3.6 GHz SPARC-T5 / 512 GB                Solaris 10 1/13 (S10U11)
Database Server        Oracle 11g R2 (11.2.0.2)                 SPARC T4-2      1    2 / 16 / 128 / 2.85 GHz SPARC-T4 / 256 GB               Solaris 10 8/11 (S10U10)
Web Server             iPlanet Web Server 7.0.9 (7 U9)          SPARC T4-1      1    1 / 8 / 64 / 2.85 GHz SPARC-T4 / 128 GB                 Solaris 10 8/11 (S10U10)
Load Generator         Oracle Application Test Suite 9.21.0043  Sun Fire X4200  1    2 / 4 / 4 / 2.6 GHz AMD Opteron 285 SE / 16 GB          Windows 2003 R2 SP2
Load Drivers (Agents)  Oracle Application Test Suite 9.21.0043  Sun Fire X4170  8    2 / 12 / 12 / 2.93 GHz Intel Xeon X5670 / 48 GB         Windows 2003 R2 SP2

Additional Notes:

  • Siebel Gateway Server was configured to run on one of the application server nodes
  • Four Siebel application servers were configured in the Siebel Enterprise to handle 40,000 concurrent users
    - Each SPARC T5-2 was configured to run two Siebel application server instances
    - The two Siebel application server instances on each SPARC T5-2 were isolated from each other using Solaris Zones virtualization (see the sketch after this list)
    - The 40,000 concurrent user sessions were load balanced across all four Siebel application server instances
  • The Siebel database was hosted on a Sun Storage F5100 Flash Array consisting of 80 x 24 GB flash modules (FMODs)
    - The Siebel 8.1.1.4 benchmark workload is not I/O intensive and does not require flash storage for better I/O performance
  • Fourteen iPlanet Web Server virtual servers were configured with the Siebel Web Server Extension (SWSE) plug-in to handle the 40,000 concurrent user load
    - All fourteen iPlanet Web Server instances forwarded HTTP requests from Siebel clients to all four Siebel application server instances in a round-robin fashion
  • Oracle Application Test Suite (OATS) was stable and held up amazingly well over the entire duration of the test run
  • The benchmark test results were validated and thoroughly audited by the Siebel benchmark and PSR teams
    - Nothing new here. All Sun-published Siebel benchmarks, including the SPARC T4 one, were properly audited before being released to the outside world
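
For reference, here is roughly what the zone layout on each SPARC T5-2 might have looked like. This is a minimal sketch; the zone names below are hypothetical, and the actual zone configurations used in the benchmark are not reproduced here.

% zoneadm list -cv
  ID NAME        STATUS   PATH               BRAND    IP
   0 global      running  /                  native   shared
   1 siebzone1   running  /zones/siebzone1   native   shared
   2 siebzone2   running  /zones/siebzone2   native   shared

Since each non-global zone hosts exactly one Siebel application server instance, a misbehaving instance can be managed or even rebooted (zoneadm -z siebzone1 reboot) without disturbing its neighbor on the same physical server.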

Resource Utilization

Component                   #Users  CPU%   Memory Footprint
--------------------------  ------  -----  ----------------
Gateway/Application Server  20,000  67.03  205.54 GB
Application Server          20,000  66.09  206.24 GB
Database Server             40,000  33.43  108.72 GB
Web Server                  40,000  29.48  14.03 GB

Finally, how does this benchmark stack up against other published benchmarks? The short answer is "very well". Head over to the Oracle Siebel Benchmark White Papers web page to do the comparison yourself.



[Credit to our hard working colleagues in SAE, Siebel PSR, benchmark and Oracle Platform Integration (OPI) teams. Special thanks to Sumti Jairath and Venkat Krishnaswamy for the last minute fire drill]

Copy of this blog post is also available at:
Siebel 8.1.1.4 Benchmark on SPARC T5

Wednesday Jan 30, 2013

Siebel 8.1.1.4 Benchmark on SPARC T4

Siebel is a multi-threaded native application that performs well on Oracle's T-series SPARC hardware. We have several versions of Siebel benchmarks published on previous-generation T-series servers, ranging from the Sun Fire T2000 to the Oracle SPARC T3-4. So, it is natural to see that tradition extend to the current-generation SPARC T4 as well.

Benchmark Details

A 29,000-user Siebel 8.1.1.4 benchmark on a mix of SPARC T4-1 and T4-2 servers was announced during the Oracle OpenWorld 2012 event. In this benchmark, the Siebel application server instances ran on three SPARC T4-2/Solaris 10 8/11 systems, whereas the Oracle 11g R2 database server was configured on a single SPARC T4-1/Solaris 11 11/11 system. Several iPlanet Web Server 7 U9 instances with the Siebel Web Extension (SWE) plug-in installed ran on one SPARC T4-1/Solaris 10 8/11 system. The Siebel database was hosted on a single Sun Storage F5100 flash array consisting of 80 x 24 GB flash modules (FMODs).

Siebel Call Center and Order Management System are the modules that were tested in the benchmark. The benchmark workload had 70% of the virtual users running Siebel Call Center transactions and the remaining 30% of the vusers running Siebel Order Management System transactions. This benchmark on T4 exhibited sub-second average response times for both the Siebel Call Center and Order Management System modules.

Load balancing at various layers, including the web and test client systems, ensured a near-uniform load across all web and application server instances. All three Siebel application server systems consumed ~78% CPU on average. The database and web server systems consumed ~53% and ~18% CPU, respectively.

All these details are supposed to be available in a standard Oracle|Siebel benchmark template - but for some reason, I couldn't find it on Oracle's Siebel Benchmark White Papers web page yet. Meanwhile, check out the following press release that was posted on oracle.com on 09/28/2012.

    SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark

It looks like the large number of vusers (29,000, to be precise) sets this benchmark apart from the other benchmarks published with the same Siebel 8.1.1.4 benchmark workload.

[Credit to our colleagues in Siebel PSR, benchmark, SAE and ISVe teams]

Friday Nov 18, 2011

Siebel Troubleshooting : An ODBC error occurred; SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl

Symptom:

A newly installed Siebel application server fails to start despite successful ODBC connectivity to the database. The SRProc process logs ODBC error messages similar to the following:


Message: GEN-13,
 Additional Message: dict-ERR-1109: 
       Unable to read value from export file (Data length (32) > Column definition (3)).

Message: GEN-13,
 Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 4 ).

GenericLog  GenericError  1     0002157..  11-11-18 13:28  Message: Generated SQL statement:,
 Additional Message: SQLFetch:
   SELECT RDOBJ.DOCK_ID, RDOBJ.RELATED_DOCK_ID, RDOBJ.SQL_STATEMENT, RDOBJ.CHECK_VISIBILITY,
          'N', RDOBJ.COMMENTS, RDOBJ.ACTIVE, RDOBJ.SEQUENCE, RDOBJ.VIS_STRENGTH,
          RDOBJ.REL_VIS_STRENGTH, RDOBJ.VIS_EVT_COLS
     FROM ORAPERF.S_DOCK_REL_DOBJ RDOBJ, ORAPERF.S_DOCK_OBJECT DOBJ
    WHERE RDOBJ.REPOSITORY_ID = (SELECT ROW_ID FROM ORAPERF.S_REPOSITORY WHERE NAME = ?)
      AND DOBJ.ROW_ID = RDOBJ.DOCK_ID
      AND (DOBJ.INACTIVE_FLG = 'N' OR DOBJ.INACTIVE_FLG IS NULL)
      AND (RDOBJ.INACTIVE_FLG = 'N' OR RDOBJ.INACTIVE_FLG IS NULL)

Message: Error: An ODBC error occurred,
 Additional Message: Function: DICGetRDObjects; ODBC operation: SQLFetch

Message: GEN-13,
 Additional Message: dict-ERR-1109: Unable to read value from export file (UTLCompressFRead (fseek)).

Message: GEN-13,
 Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 0 ).

Message: GEN-10,
 Additional Message: Calling Function: DICLoadDObjectInfo; Called Function: Calling DICGetRDObjects

Message: GEN-10,
 Additional Message: Calling Function: DICLoadDict; Called Function: DICLoadDObjectInfo

GenericError
(srpdb.cpp (860) err=3006 sys=2) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
(srpsmech.cpp (74) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
(srpmtsrv.cpp (107) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
(smimtsrv.cpp (1203) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
SmiLayerLog Error       Terminate process due to unrecoverable error: 3006. (Main Thread)

An inconsistent or corrupted dictionary file "diccache.dat" is likely the cause.

Solution:

  • Stop the application server and manually kill any remaining Siebel application-specific processes

    eg.,

    stop_server all
    
    pkill siebmtsh
    pkill siebproc
    ..
    
  • Remove the $SIEBEL_HOME/bin/diccache.dat file. It will be regenerated during application server startup (see the consolidated sketch after this list)

  • Start the application server
    start_server all
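
For convenience, here is the same procedure as a single sequence of commands. A minimal sketch, assuming the default directory layout where diccache.dat lives under $SIEBEL_HOME/bin, as noted above.

# stop the Siebel server, then kill leftover processes that survived the shutdown
stop_server all
pkill siebmtsh
pkill siebproc

# remove the dictionary cache; it is regenerated at the next startup
rm $SIEBEL_HOME/bin/diccache.dat

# restart the application server
start_server all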
    

Thursday Oct 06, 2011

Siebel Connection Broker Load Balancing Algorithm

Siebel server architecture supports spawning multiple application object manager processes. The Siebel Connection Broker, SCBroker, tries to balance the load (incoming requests) across different object manager processes running in a single Siebel server.

Least Loaded or Round Robin?

By default, SCBroker forwards each incoming request to the object manager process that is least loaded, meaning the process with the fewest running tasks. In Siebel terminology, this behavior is referred to as the "least-loaded" (LL) connection forwarding algorithm. While the default LL algorithm provides optimal behavior in the best case, it may lead to serious availability problems if one of the several object manager processes running in a Siebel server stops responding in a timely fashion for some reason. Such an object manager may still accept requests even though they eventually time out. At some point, the unresponsive or erroneous object manager will have the fewest running tasks, which prompts the SCBroker component to forward new incoming requests to that object manager process - which in turn leads to a stalemate.

To avoid such situations, it is recommended to configure the "round-robin" (RR) algorithm in the SCBroker component. When the round-robin algorithm is configured, SCBroker ignores the number of running tasks per object manager process and routes requests to all object managers in a round-robin fashion.

While both algorithms have their strengths and weaknesses, customers must weigh both options and choose the one that fits best in their deployment.

eg.,

Find the current load balancing algorithm:

srvrmgr>  list advanced param ConnForwardAlgorithm for comp SCBroker \
             show PA_ALIAS, PA_VALUE, PA_NAME

PA_ALIAS              PA_VALUE  PA_NAME                                    
--------------------  --------  -----------------------------------------  
ConnForwardAlgorithm  LL        Connection Forward algorithm for SCBroker

Configure SCBroker to use round-robin algorithm:

srvrmgr> change param ConnForwardAlgorithm=RR for comp SCBroker server SERVER_NAME
Command completed successfully.

srvrmgr> list advanced param ConnForwardAlgorithm for comp SCBroker \
            show PA_ALIAS, PA_VALUE, PA_NAME

PA_ALIAS              PA_VALUE  PA_NAME                                    
--------------------  --------  -----------------------------------------  
ConnForwardAlgorithm  RR        Connection Forward algorithm for SCBroker

Other SCBroker parameters of interest: ConnForwardTimeout and ConnRequestTimeout
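
Both can be examined with the same srvrmgr syntax shown above. A minimal sketch; the values and their defaults vary by Siebel release, so verify them in your own environment:

srvrmgr> list advanced param ConnForwardTimeout for comp SCBroker \
            show PA_ALIAS, PA_VALUE, PA_NAME

srvrmgr> list advanced param ConnRequestTimeout for comp SCBroker \
            show PA_ALIAS, PA_VALUE, PA_NAME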

Saturday Dec 04, 2010

Oracle's Optimized Solution for Siebel CRM 8.1.1

A brief explanation of what an optimized solution is and what it is not can be found in the previous blog entry Oracle's Optimized Solution for PeopleSoft HCM 9.0. We went through a similar exercise to publish another optimized solution around Siebel CRM 8.1.1.

The Siebel solution implements Oracle Siebel CRM using a unique combination of SPARC servers, Sun storage, Solaris OS virtualization, Oracle application middleware and Oracle database products.

URLs to the Siebel CRM white papers:

While you are at it, do not forget to check out the 13,000-user Siebel CRM benchmark on the latest SPARC T3 platform.

Sunday Oct 24, 2010

SPARC T3 reiterates Siebel CRM's Supremacy on T-series Hardware

It has been mentioned and proven several times that Sun/Oracle's T-series hardware is the best fit to deploy and run Siebel CRM. Feel free to browse through the list of Siebel benchmarks that Sun published in the past on T-series hardware:

        2004-2010 : A Look Back at Sun Published Oracle Benchmarks

Oracle Corporation announced the availability of SPARC T3 servers at Oracle OpenWorld 2010, and sure enough there is a Siebel CRM benchmark on the SPARC T3-1 server to support the server launch event. Check the following web page for the high-level details of the benchmark.

        SPARC T3-1 Server Posts a High Score on New Siebel CRM 8.1.1 Benchmark

I intend to provide the missing pieces of information in this blog post.

First of all, it is not a "Platform Sizing and Performance Program" (PSPP) benchmark. Siebel 8.1.1 was used to run the benchmark, and no Siebel PSPP benchmark kit is available for v8.1.1 as of today. Hence, the test results from this benchmark exercise are not directly comparable to the Siebel 8.0 PSPP benchmark results.

Workload

The benchmark workload consists of a mix of Siebel Financial Services Call Center and Siebel Web Services / EAI transactions. The FINS Call Center transactions create a number of Opportunities, Quotes and Orders, whereas the Web Services / EAI transactions submit new Service Requests (SRs) and search for and update existing SRs. The transaction mix is 40% FINS Call Center transactions and 60% Web Services / EAI transactions.

Software Versions

  • Siebel CRM 8.1.1
  • Oracle RDBMS 11g R2 (11.2.0.1), 64-bit
  • iPlanet Web Server 7.0 Update 8, 32-bit
  • Solaris 10 09/10 in the application-tier and
  • Solaris 10 10/09 in the web- and database-tiers

Hardware Configuration

  • Application Server : 1 x SPARC T3-1 Server (2 RU system)
    - One 16-core 1.65 GHz SPARC T3 processor, 128 hardware threads, 6 MB L2 cache, 64 GB RAM
  • Web Server + Database Server : 1 x Sun SPARC Enterprise T5240 Server (2 RU system)
    - Two 8-core 1.165 GHz UltraSPARC T2 Plus processors (16 cores and 128 hardware threads in total), 4 MB L2 cache, 64 GB RAM

Virtualization Technology

iPlanet Web Server and the Oracle 11g database server were configured on a single Sun SPARC Enterprise T5240 server. Those software layers were isolated from each other with the help of Oracle Solaris Containers virtualization technology. Resource allocations are shown below, followed by a sketch of how such an allocation can be expressed with zonecfg.

Tier      #vCPU  Memory (GB)
--------  -----  -----------
Database  96     48
Web       32     16
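
For illustration, the database container's share could be expressed with zonecfg along the following lines. A minimal sketch, assuming a hypothetical zone name (dbzone) and zone path; the web container would be configured the same way with ncpus=32 and physical=16g.

# zonecfg -z dbzone
zonecfg:dbzone> create
zonecfg:dbzone> set zonepath=/zones/dbzone
zonecfg:dbzone> add dedicated-cpu
zonecfg:dbzone:dedicated-cpu> set ncpus=96
zonecfg:dbzone:dedicated-cpu> end
zonecfg:dbzone> add capped-memory
zonecfg:dbzone:capped-memory> set physical=48g
zonecfg:dbzone:capped-memory> end
zonecfg:dbzone> verify
zonecfg:dbzone> commit
zonecfg:dbzone> exit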

Test Results

#vUsers  Avg Trx Resp Time (sec)  Business Trx Throughput/HR  Avg CPU Utilization (%)  Avg Memory Footprint (GB)
         FINS        EAI          FINS        EAI             App    DB    Web         App    DB + Web
-------  ----------  -----------  ----------  --------------  -----  ----  ----------  -----  ---------
13,000   0.43        0.2          48,409      116,449         58     42    37          52     35

Why stop at 13K users?

Notice that the average CPU utilization on the application server node (SPARC T3-1) is only ~58%. The application server node has room to accommodate more online vusers; however, there is not enough free memory left on the server to scale beyond 13,000 concurrent users. That is the main reason the benchmark stopped at the 13,000-user count.

Siebel Best Practices

Check the following presentation:

        Siebel on Oracle Solaris : Best Practices, Tuning Tips

Acknowledgments

Credit to all our peers at Oracle Corporation who helped us with the hardware, workload, verification, validation, etc. in a timely manner. Jenny also deserves special credit for patiently spending an enormous amount of time running the benchmark.

Thursday Sep 23, 2010

OOW 2010 : Accelerate and Bullet-Proof Your Siebel CRM Deployment with Oracle's Sun Servers

The best practices slides from today's OpenWorld presentation can be downloaded from the following location.

        Siebel on Oracle Solaris : Best Practices, Tuning Tips

The entire presentation with proper disclaimers and Oracle Solaris Cluster specific slides will be posted on Oracle's web site soon. Stay tuned.

Sunday May 17, 2009

Installing Siebel Web Extension (SWE) on top of Sun Java System Web Server 7.0

As of today, Sun Java System Web Server 7.0 is not a certified platform on which to deploy a Siebel 8.x enterprise. We are working with Oracle Corporation to make this certification happen, so our customers can take advantage of the performance optimizations that went into web server release 7.0.

Meanwhile, those who want to give it a try can do so with little effort. In the SJSWS 7.0 release, the start/stop/restart/.. scripts were relocated to the bin directory under the virtual web server instance. The installer for the Siebel 8.x Web Server Extension looks for the web server's start script under the home directory of the virtual web server instance, because that was the default location until the SJSWS 7.0 release. The installation fails if the installer cannot find the start script in the location where it expects it to be.

Due to the relocation mentioned above, the installation of the Siebel Web Server Extension fails at the very last step, where it tries to modify the start script with a bunch of LD_PRELOADs so that the Siebel Web Extension loads up and runs on the Sun Java System Web Server. To get around this failure, all you have to do is create a symbolic link in the home directory of the virtual web server instance pointing to the startserv script residing in the bin directory.

The following example shows the necessary steps.


% pwd
/export/pspp/SJWS7U5/https-siebel-pspp

% ln -s bin/startserv start

% ls -l start
lrwxrwxrwx   1 pspp     dba           13 May 17 17:01 start -> bin/startserv

Install Siebel Web Extension in the normal way. No other changes are required.

AFTER SWE INSTALLATION:


% ls -l start*
-rwxr-xr-x   1 pspp     dba         4157 May 17 17:38 start
-rwxr-xr-x   1 pspp     dba         3456 May 17 17:38 start_.bak

% mv bin/startserv bin/startserv.orig
% mv start bin/startserv

Notice that the Siebel installer actually made two copies of the startup script from the symbolic link. The original bin/startserv remained intact after the SWE installation.

Finally, start the Web Server instance by running the startserv script. It should start with no issues.


% pwd
/export/pspp/SJWS7U5/https-siebel-pspp/bin

% ./startserv
Sun Java System Web Server 7.0U5 B03/10/2009 16:38
info: swe_init reports: SWE plug-in log file
info: CORE5076: Using [Java HotSpot(TM) Server VM, Version 1.5.0_15] from [Sun Microsystems Inc.]
info: HTTP3072: http-listener-1: http://siebel-pspp:8000 ready to accept requests
info: CORE3274: successful server startup

Before we conclude, do not forget that Sun Java System Web Server 7.0 is not yet certified with the Siebel 8.x release. Use the instructions mentioned in this blog post at your own risk. However, if you would like to take that risk, consider installing the latest release of Sun Java System Web Server, which is SJSWS 7.0 Update 5 as of this writing.

Stay tuned for the certification news though.

Saturday Dec 20, 2008

Siebel on Sun Solaris: More Performance with Fewer mprotect() Calls

By default, each transaction in the Siebel CRM application makes a large number of mprotect() calls, which may degrade the performance of Siebel. When the load on the Siebel application servers is very high, the mprotect() calls are serialized by the operating system kernel, resulting in a high number of context switches and low CPU utilization.

If a Siebel deployment exhibits the pathological conditions mentioned above, the performance and scalability of the application can be improved by limiting the number of mprotect() calls made by the application server processes at run time. To achieve this behavior, set the value of Siebel CRM's AOM tunable parameter MemProtection to FALSE. Starting with the Siebel 7.7 release, MemProtection is a hidden parameter with a default value of TRUE. To set its value to FALSE, run the following command from the CLI version of the Siebel Server Manager, srvrmgr.


change param MemProtection=False for comp <component_alias_name> server <siebel_server_name>

where:

component_alias_name is the alias name of the AOM component to be configured, e.g., SCCObjMgr_enu is the alias for the Call Center Object Manager, and

siebel_server_name is the name of the Siebel Server for which the component is being configured.
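
For example, to turn memory protection off for the Call Center Object Manager (the server name siebsrvr01 below is hypothetical):

srvrmgr> change param MemProtection=False for comp SCCObjMgr_enu server siebsrvr01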

Note that this parameter is not a dynamic one; hence, the Siebel application server(s) must be restarted for the change to take effect.

Run truss -c -p <pid_of_any_busy_siebmtshmw_process> before and after the change to see how the mprotect system call count varies.
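
A minimal sketch of that verification; the process id and the counts in the summary below are illustrative only.

% pgrep siebmtshmw          # pick one of the busy object manager processes
12345

% truss -c -p 12345         # interrupt with Ctrl-C after a few minutes
^C
syscall               seconds   calls  errors
...
mprotect                 .245    1837
...

With MemProtection set to FALSE, the mprotect call count in the truss summary should drop dramatically.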

For more information about this tunable on Solaris platform, check Siebel Performance Tuning Guide Version 7.7 or later in Siebel Bookshelf.

See Also:
Siebel on Sun CMT hardware : Best Practices

(Originally posted on blogger at:
Siebel on Sun Solaris: More Performance with Less mprotect() Calls)

Monday Dec 08, 2008

Consolidating Siebel CRM 8.0 on a Single Sun SPARC Enterprise Server, T5440

.. blueprint document is now available on wikis.sun.com. Here is the direct link to the blueprint:
            Consolidating Oracle Siebel CRM 8 on a Single Sun SPARC Enterprise Server.

The Siebel 8.0 Platform Sizing and Performance Program (PSPP) benchmark workload was used for all the performance tests, which ran under Solaris Containers and Logical Domains (LDoms) on a single Sun SPARC Enterprise T5440 server running Solaris 10 5/08 (Containers) and Solaris 10 10/08 (LDoms). The blueprint focuses on three major configurations: performance numbers at 3,500-user (small configuration), 7,000-user (medium configuration) and 14,000-user (large configuration) loads. Hence, this blueprint document complements the 14,000-user Siebel 8.0 benchmark that we published back in October 2008.

The blueprint details the resource allocations across all the tiers of a typical Siebel deployment to support 3,500, 7,000 and 14,000 concurrent users, offers performance tuning tips specific to Solaris and Sun CMT systems, and shows the results of the 3,500, 7,000 and 14,000-user performance tests under Solaris 10's virtualization technologies, Solaris Containers and Logical Domains.

Notes:

  1. All the performance tests were conducted either with Solaris Containers or with Logical Domains, but not with a mix of both technologies.

  2. Resource allocations were identical in both cases -- that is, with the Solaris Containers and the Logical Domains.