Wednesday Jan 20, 2010

PeopleSoft NA Payroll 240K EE Benchmark with 16 Job Streams : Another Home Run for Sun

Poor Steve A.[1] ... This entry is not about Steve A. though. It is about the new PeopleSoft NA Payroll benchmark result that Sun published today.

First things first. Here is the direct URL to our latest benchmark results:

        PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (16 job streams[2] -- simply referred to as 'stream' from here onwards)

The summary of the benchmark test results is shown below, only for the 16 stream benchmarks. These numbers were extracted from the very first page of the benchmark results white papers, where Oracle|PeopleSoft highlights the significance of the results and the actual numbers that are of interest to customers. The results in the following table are sorted by hourly throughput (payments/hour) in descending order. The goal is to achieve as much hourly throughput as possible. Click on the link underneath the hourly throughput value to open the corresponding benchmark result.

Oracle PeopleSoft North American Payroll 9.0 - Number of employees: 240,000 & Number of payments: 360,000

Vendor   OS                Hardware Config                                          #Job Streams   Elapsed Time (min)   Hourly Throughput (Payments per Hour)
Sun      Solaris 10 5/09   1 x Sun SPARC Enterprise M4000 with 4 x 2.53 GHz              16              43.78                       493,376
                           SPARC64-VII Quad-Core processors and 32 GB memory,
                           1 x Sun Storage F5100 Flash Array with 40 Flash
                           Modules for data, indexes,
                           1 x Sun Storage J4200 Array for redo logs
HP       HP-UX             1 x HP Integrity rx6600 with 4 x 1.6 GHz Intel                16              68.07                       317,320
                           Itanium2 9000 Dual-Core processors and 32 GB memory,
                           1 x HP StorageWorks EVA 8100

This is all public information. Feel free to compare the hardware configurations and the data presented in both rows and draw your own conclusions. Since both Sun and HP used the same benchmark toolkit and workload, and ran the benchmark with the same number of job streams, the comparison should be pretty straightforward.

If you want to compare the 8 stream results, check the other blog entry: PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise. Sun used the same hardware to run both benchmark tests with 8 and 16 streams respectively. We could have gotten away with 20+ Flash Modules (FMODs), but we wanted to keep the benchmark environment consistent with our prior benchmark effort around the same workload with 8 job streams. Because the hardware setup is identical, we can now easily demonstrate the advantage of parallelism (simply by comparing the results from the 8 and 16 stream benchmarks) and how resilient and scalable the F5100 Flash Array is.

Our benchmarks showed an improvement of ~55% in overall throughput when the number of job streams was increased from 8 to 16. Our 16 stream results also showed ~55% higher overall throughput than HP's published results with the same number of streams, at a maximum average CPU utilization of 45% compared to HP's 89%. The half-populated Sun Storage F5100 Flash Array played the key role in both benchmark efforts by demonstrating superior I/O performance over traditional disk based arrays.

Before concluding, I would like to highlight a few known facts (just for the benefit of those people who may fall for the PR trickery):

  1. 8 job streams != 16 job streams. In other words, the results from an 8 stream run are not comparable to those of a 16 stream run.
  2. The throughput should go up with an increased number of job streams [ only up to a point -- do not forget that there is a saturation point for everything ]. For example, the throughput with 16 streams might be higher than the 8 stream throughput.
  3. The Law of Diminishing Returns applies to the software world too, not just to economics. So, there is no guarantee that the throughput will be much better with 24 or 32 job streams.

Other blog posts and documents of interest:

  1. Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card
  2. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (8 streams benchmark white paper)
  3. PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise
  4. App benchmarks, incorrect conclusions and the Sun Storage F5100
  5. Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance


[1] Steve A. tried his best to make everyone believe that HP's 16 job stream NA Payroll 240K EE benchmark results are on par with Sun's 8 stream benchmark results. Apparently Steve A. failed and gave up after we showed the world a few screenshots from a published and eventually withdrawn benchmark [ by HP ]. You can read all his arguments, comparisons etc., in the comments section of my other blog entry PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise, as well as in Joerg Moellenkamp's blog entries around the same topic.

[2] In PeopleSoft terminology, a job stream is something that is equivalent to a thread.

Friday Oct 09, 2009

Sun achieves the Magic Number 50,000 on T5440 with Oracle Business Intelligence EE

Less than two months ago, Sun Microsystems published an Oracle Business Intelligence benchmark with the best single system performance of 28,000 concurrent BI EE users at ~75% CPU utilization. Sun and Oracle Corporation announced another Oracle Business Intelligence benchmark result today with two identical T5440 servers in the Oracle BI Cluster serving 50,000 concurrent BI EE users.

An Oracle white paper with Sun's 50,000 user benchmark results can be accessed from Oracle's Business Intelligence web.

The hardware specifications for each of the T5440s are similar to the hardware that was used in the prior benchmark effort on a single T5440 server. However, this time the Presentation Catalog (also frequently referred to as the Web Catalog) was moved to a T5220 server where the NFS server was running. Besides this, the only other change from the earlier 28,000 user benchmark exercise is the addition of another T5440 to the test rig.

The following graph shows the scalability of the application from one node to four nodes to eight nodes running on T5440 servers.

OBIEE on T5440 : Scalability Graph

Without further ado, here is the summary of the benchmark results along with their significance and some interesting facts:

  • One of the major goals of this benchmark effort is to show the horizontal and vertical scalability of the application (OBIEE) by highlighting the superior performance and the resilience of the underlying hardware (T5440) and the operating system (Solaris). Needless to say the goal has been met.

  • Another goal of this benchmark is to show a decent number of concurrent BI EE users executing transactions with good response times. Since we already showed the maximum load that can be achieved on a single BI instance (7,500 users) and on a single T5440 server running multiple BI instances (28,000 users), this time we did not attempt to find the peak number that can be achieved from the two T5440 servers in the benchmark environment. Now that there is an additional server in the test setup taking care of the Presentation Catalog and the database server, 2 x 28,000 = 56,000 BI EE users would have been an achievable target -- but we opted to stop at the "magic" and "respectable" number of 50,000 instead.

  • The entire benchmark run lasted about 9 hours 45 minutes, of which 8 hours were ramp-up, during which the 50,000 BI virtual users logged into the application a few users at a time. The LoadRunner tool reported only 4 errors for the entire duration of the run; and there were zero errors in the 60 minute steady state period during which the statistics reported in the document were collected.

  • Two Sun SPARC Enterprise T5440 servers each with 4 x 8-Core 1.6 GHz UltraSPARC T2 Plus processors delivered the best performance of 50,000 concurrent BI EE users at around 63% CPU utilization.

  • The BI EE Cluster was deployed on two T5440 servers running Solaris 10 5/09 operating system. All the nodes in the BI Cluster were consolidated onto two T5440 servers using the free and efficient Solaris Containers virtualization technology.

  • The Presentation Catalog was hosted on a ZFS file system that was created on top of four internal Solid State Drive (SSD) disks. The Catalog was shared among all eight BI nodes in the cluster as an NFS share. One T5220 server, powered by an 8-Core 1.2 GHz UltraSPARC T2 processor, was used to run the NFS server. Due to the minimal activity of the database, the Oracle 11g database was also hosted on the same server. Solaris 10 5/09 is the operating system.

  • Solid State Drive (SSD) disks with ZFS file system showed significant I/O performance improvement over traditional disks for the Presentation Catalog activity. In addition, ZFS helped get past the UFS limitation of 32767 sub-directories in a Presentation Catalog directory.

  • Caching was turned ON at the application server, which led to minimal database activity on the server. Note that the caching mechanism was turned ON in the prior benchmark exercise as well.

  • The low end CoolThreads CMT server T5220 and the mid-range T5440 server once again proved to be ideal candidates to deploy and run multi-threaded workloads by exhibiting resilient performance when handling a large number of simultaneous requests from 50,000 BI EE virtual users. The T5220 handled a large number of concurrent asynchronous read/write requests from eight different NFS clients.

  • NFS v3 was configured at the NFS server as well as at the NFS client nodes. NFS version 4 is the default on Solaris 10, and it might have worked as expected. However, a handful of bug reports prompted us to go with the more mature version 3.

  • 3,283 watts is the average power consumption when all 50,000 concurrent BI users are in the steady state of the benchmark test. That is, for similarly configured workloads, the T5440 server supports 15.2 users per watt of energy consumed and 5,000 users per rack unit.

  • A summary of the results with system-wide averages of CPU and memory utilization is shown below. The latest (50,000 user) results are in the last row.

    #Vusers   Clustered   #BI Nodes   #CPU   #Core   RAM      Avg CPU   Avg Memory   Avg Trx Response Time   #Trx/sec
     7,500    No              1         1      8     32 GB    72.85%     18.11 GB          0.22 sec             155
    28,000    Yes             4         4     32     128 GB   75.04%     76.16 GB          0.25 sec             580
    50,000    Yes             8         8     64     256 GB   63.32%    172.21 GB          0.28 sec            1031


The topology diagram in the benchmark results white paper is almost illegible. Here is the original topology diagram that was inserted into the white paper.

OBIEE on T5440 : 50K User Benchmark Topology

Quite frankly I'm not very proud of this drawing -- but that's the best that I could come up with in a short span. Rather than showing the flow of communication between each and every component in the benchmark setup, I simplified the drawing by introducing a "black box" sort of thing - "private network" - in the middle, which protected the drawing from getting messy.


The following two-dimensional graph shows the CPU utilization patterns at all 3 nodes in the benchmark setup during the 60 minute steady state of the benchmark run. This graph was generated using the free gnuplot tool with sar data as input.

OBIEE on T5440 : 50K User Benchmark CPU Usage Graph


And finally, here is a quick summary of all the results published by different vendors so far with a similar benchmark kit. Feel free to draw your own conclusions. All of this is public information. Check the corresponding benchmark reports by clicking on the URLs under the "#Users" column.

Server                                      Chips   Cores   Threads   GHz   Processor Type       #Users   OS
2 x Sun SPARC Enterprise T5440 (APP)
1 x Sun SPARC Enterprise T5220 (NFS,DB)                                      UltraSPARC T2 Plus   50,000   Solaris 10 5/09
1 x Sun SPARC Enterprise T5440                4      32       256     1.6    UltraSPARC T2 Plus   28,000   Solaris 10 5/09
5 x Sun Fire T2000                            1       8        32     1.2    UltraSPARC T1        10,000   Solaris 10 11/06
3 x HP DL380 G4                               2       4         4     2.8    Intel Xeon            5,800   OEL
1 x IBM x3755                                 4       8         8     2.8    AMD Opteron           4,000   RHEL4

Before you go, do not forget to check the best practices for configuring / deploying Oracle Business Intelligence on top of Solaris 10 running on Sun CMT hardware.

Related Blog Posts:
T5440 Rocks [again] with Oracle Business Intelligence Enterprise Edition Workload

Monday Aug 17, 2009

Oracle Business Intelligence on Sun : Few Best Practices

(Updated on 10/16/09 with additional content and restructured the blog entry for clarity and easy navigation)

The following suggested best practices are applicable to all Oracle BI EE deployments on Sun hardware (CMT and M-class) running Solaris 10 or later. These recommendations are based on our observations from the 50,000 user benchmark on the Sun SPARC Enterprise T5440. This is not a complete list, and your mileage may vary.

Hardware : Firmware

Ensure that the system's firmware is up-to-date.

Solaris Recommendations

  • Upgrade to the latest update release of Solaris 10.

  • Solaris runs in 64-bit mode by default on the SPARC platform. Consider running 64-bit BI EE on Solaris.

      The 64-bit BI EE platform is immune to the 4 GB virtual memory limitation of the 32-bit BI EE platform -- hence it can potentially support even more users and have larger caches, as long as the hardware resources are available.

  • Enable 256M large pages on all nodes. By default, the latest update of Solaris 10 will use a maximum of 4M pages even when 256M pages are a good fit.

      256M pages can be enabled with the following /etc/system tunables.
      * 256M pages for the process heap
      set max_uheap_lpsize=0x10000000
      * 256M pages for ISM
      set mmu_ism_pagesize=0x10000000

  • Increase the file descriptor limits by adding the following lines to /etc/system on all BI nodes.
      * file descriptor limits
      set rlim_fd_cur=65536
      set rlim_fd_max=65536
  • On larger systems with more CPUs or CPU cores, try not to deploy Oracle BI EE in the global zone.

      In our benchmark testing, we observed unpredictable and abnormal behavior of the BI server process (nqsserver) in the global zone under moderate loads. This behavior is clearly noticeable when there are more than 64 vcpus allocated to the global zone; a sketch follows.
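      As an illustration, here is a minimal sketch of creating a non-global zone for a BI EE instance instead of running it in the global zone. The zone name, zonepath and CPU count are hypothetical; adjust them to your environment (the dedicated-cpu resource requires Solaris 10 8/07 or later).

        # zonecfg -z bizone1 <<EOF
        create
        set zonepath=/zones/bizone1
        set autoboot=true
        add dedicated-cpu
        set ncpus=32
        end
        verify
        commit
        EOF
        # zoneadm -z bizone1 install
        # zoneadm -z bizone1 boot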

  • If the BI presentation catalog is stored on a local file system, create a ZFS file system to hold the catalog.

      If there are more than 25,000 authorized users in a BI deployment, the default UFS file system may run into a "Too many links" error when the Presentation Server tries to create more than 32,767 sub-directories (refer to LINK_MAX on Solaris).

  • Store the Presentation Catalog on a disk with faster I/O such as a Solid State Drive (SSD). For uniform reads and writes across different disk drives [ and of course for better performance ], we recommend creating a ZFS file system on top of a zpool built from multiple SSDs.

    Here is an example that shows the ZFS file system creation steps for the BI Presentation Catalog.

    # zpool create -f BIshare c1t2d0s6 c1t3d0s0 c1t4d0s6 c1t5d0s6
    # zpool list
    BIshare   118G    97K   118G     0%  ONLINE  -
    # zfs create BIshare/WebCat
    # fstyp /dev/dsk/c1t2d0s6
    # zpool status -v
      pool: BIshare
     state: ONLINE
     scrub: none requested
            NAME        STATE     READ WRITE CKSUM
            BIshare     ONLINE       0     0     0
              c1t2d0s6  ONLINE       0     0     0
              c1t3d0s0  ONLINE       0     0     0
              c1t4d0s6  ONLINE       0     0     0
              c1t5d0s6  ONLINE       0     0     0
    errors: No known data errors

    Observe the I/O activity on the ZFS file system by running the zpool iostat -v command.

Solaris : ZFS Recommendations

  • If the file system is mainly used for storing the Presentation Catalog, consider setting the ZFS record size to 8K. This is because of the relatively small (8K or less) reads/writes from/into the BI Catalog.

            # zfs set recordsize=8K BIshare/WebCat

    In the case of a database, you may have to set the ZFS record size to the database block size.

  • Even though disabling the ZFS Intent Log (ZIL) may improve the performance of synchronous write operations, it is not a recommended practice to disable the ZIL. Doing so may compromise data integrity.

      Disabling the ZIL on an NFS Server can lead to client side corruption.

  • When running CPU intensive workloads, consider disabling ZFS metadata compression to provide more CPU cycles to the application.

      Starting with Solaris 10 11/06, metadata compression can be disabled and enabled dynamically as shown below.

      To disable the metadata compression:

              # echo zfs_mdcomp_disable/W0t1 | mdb -kw

      To enable the metadata compression:

              # echo zfs_mdcomp_disable/W0t0 | mdb -kw

      To permanently disable the metadata compression, set the following /etc/system tunable.

              set zfs:zfs_mdcomp_disable=1

Solaris : NFS Recommendations

One of the requirements of OBIEE is that the BI Presentation Catalog must be shared across different BI nodes in the BI Cluster (there will be only one copy of the Presentation Catalog). Unless the catalog has been replicated on different nodes, there is no choice but to share it across the nodes. One way to do this is to create an NFS share with the top level directory of the catalog, and then mount it over NFS on the BI nodes.

  • Version 4 is the default NFS version on Solaris 10. However, it appears that, as of this writing, NFS v4 is not as mature as v3. So we recommend experimenting with both versions to see which one best fits the needs of the BI deployment.

    To enable NFS v3 on both the server and the client, edit /etc/default/nfs and make the changes shown below for the NFS server and the NFS clients respectively (see the sketch after this item).

      NFS Server
      NFS Client
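      The relevant /etc/default/nfs settings are sketched below, assuming the stock Solaris 10 parameter names; verify them against the comments in your copy of /etc/default/nfs before editing.

      # NFS Server (/etc/default/nfs)
      NFS_SERVER_VERSMAX=3

      # NFS Client (/etc/default/nfs on each BI node)
      NFS_CLIENT_VERSMAX=3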
  • Experiment with the following NFS tunables.

      NFS Server
      NFSD_SERVERS=<desired_number>        <-- on CMT systems with a large number of hardware threads, you can go as high as 512
      NFS_SERVER_DELEGATION=[ON|OFF]       <-- ON is the default; experiment with OFF
      NFS Client
  • Monitor the DNLC hit rate and tune the directory name look-up cache (DNLC).

      To monitor the DNLC hit rate, run "vmstat -s | grep cache" command. It is ideal to see a hit rate of 95% or above.

      Add the following tunable parameter to /etc/system on the NFS server with a desired value for the DNLC cache.

              set ncsize=<desired_number>
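      For reference, vmstat -s reports the DNLC hit rate directly; the output below is a hypothetical sample, not taken from the benchmark systems.

        % vmstat -s | grep cache
        123456789 total name lookups (cache hits 97%)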
  • Mounting NFS Share

    Mount the NFS share that contains the Presentation Services Catalog on all the NFS clients (BI nodes in this context) using the following mount options:

            rw, forcedirectio, nocto
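    For example, a mount command with those options might look like the following (hypothetical server name and paths; an equivalent entry can be placed in /etc/vfstab on each BI node):

            # mount -F nfs -o rw,forcedirectio,nocto nfshost:/BIshare/WebCat /export/OracleBIData/web/catalog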

Oracle BI EE Cluster Deployment Recommendations

  • Ensure that all the BI components in the cluster are configured in a many-to-many fashion.

  • For proper load balancing, configure all the BI nodes in the BI Cluster to be identical.

  • When planning to add an identically configured new node to the BI Cluster, simply clone an existing well-configured BI node running in a non-global zone.

      Cloning a BI node that runs in a dedicated zone results in an exact copy of the BI node being cloned, as sketched below. This approach is simple, less error prone, and eliminates the need to configure the newly added node from scratch.
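      Here is a rough sketch of the cloning procedure with hypothetical zone names and paths; note that the source zone must be halted before it can be cloned.

        # zonecfg -z bizone1 export -f /tmp/bizone1.cfg
        (edit /tmp/bizone1.cfg -- at minimum, change zonepath and any zone-specific network settings)
        # zonecfg -z bizone2 -f /tmp/bizone1.cfg
        # zoneadm -z bizone1 halt
        # zoneadm -z bizone2 clone bizone1
        # zoneadm -z bizone2 boot
        # zoneadm -z bizone1 boot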

Oracle BI Presentation Services Configuration Recommendations

  • Increase the file descriptor limit. Edit SAROOTDIR/setup/ to increase the value from 1024 to any other value of your choice. In addition, you must increase the shell limit using the ulimit -n command.

    	ulimit -n 2048

  • Configure 256M large pages for the JVM heap of the Chart server and the OC4J web server (this recommendation is equally applicable to other web servers such as WebLogic or Sun Java System Web Server). Also use parallel GC, and restrict the number of parallel GC threads to 1/8th of the number of virtual CPUs.


    	-XX:LargePageSizeInBytes=256M -XX:+UseParallelGC -XX:ParallelGCThreads=8

  • The Oracle BI Presentation Server keeps the access information of all the users in the Web Catalog. When there are a large number of unique BI users, it can take a significant amount of time to look up a user if all the users reside in a single directory. To avoid this, hash the user directories. This can be achieved by having the following entry in SADATADIR/web/config/instanceconfig.xml (a sketch follows this item).



    HashUserHomeDirectories specifies the number of characters to use to hash user names into sub-directories. When this element is turned on, for example, the default name for user Steve's home directory would become /users/st/steve.
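    A sketch of such an entry is shown below; the HashUserHomeDirectories element name comes from the description above, while the enclosing <Catalog> / <ServerInstance> nesting is an assumption -- verify against your instanceconfig.xml.

        <ServerInstance>
          <Catalog>
            <HashUserHomeDirectories>2</HashUserHomeDirectories>
          </Catalog>
        </ServerInstance>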

  • The BI Server and BI Presentation Server processes create many temporary files while rendering reports and dashboards for a user. This can result in significant I/O activity on the system. The I/O waits can be minimized by pointing the temporary directories to a memory resident file system such as /tmp on Solaris. To achieve this, add the following line to the instanceconfig.xml configuration file (a sketch follows this item).



    Similarly, the Temporary directory (SATEMPDIR) can be pointed to a memory resident file system such as /tmp to minimize the I/O waits.
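    The instanceconfig.xml entry referred to above is sketched below; the <TempDir> element name and its placement are assumptions -- check the OBIEE configuration guide for your release.

        <ServerInstance>
          <TempDir>/tmp/sawtemp</TempDir>
        </ServerInstance>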

  • Consider tuning the value of CacheMaxEntries in instanceconfig.xml. A value of 20,000 was used in the 50,000 user OBIEE benchmark on T5440 servers (a sketch follows). Be aware that the Presentation Services process (sawserver64) consumes more virtual memory when this parameter is set to a high value.
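    A sketch of the corresponding entry is shown below; the CacheMaxEntries element name comes from the text above, while the enclosing <Cache> / <ServerInstance> nesting is an assumption -- verify against your instanceconfig.xml.

        <ServerInstance>
          <Cache>
            <CacheMaxEntries>20000</CacheMaxEntries>
          </Cache>
        </ServerInstance>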

  • If the Presentation Services log contains errors such as "The queue for the thread pool AsyncLogon is at it's maximum capacity of 50 jobs.", consider increasing the Presentation Services' asynchronous job queue. 50 is the default value.

    The following example increases the job queue size to 200.
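    Only the AsyncLogon pool name comes from the error message above; the surrounding element names in this sketch are assumptions -- verify against your instanceconfig.xml.

        <ServerInstance>
          <ThreadPoolDefaults>
            <AsyncLogon>
              <MaxQueue>200</MaxQueue>
            </AsyncLogon>
          </ThreadPoolDefaults>
        </ServerInstance>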

  • Increase the query cache expiry time, especially when the BI deployment is expected to handle a large number of concurrent users. The default is 60 minutes. However, under very high loads a cache entry may be removed before one hour if many queries are being run. Hence it may be necessary to tune the CacheMaxExpireMinutes parameter in the Presentation Services' instanceconfig.xml.

    The following example increases the query cache expiry time to 3 hours.
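    A sketch of such an entry is shown below; the CacheMaxExpireMinutes element name comes from the text above, while the nesting around it is an assumption -- verify against your instanceconfig.xml.

        <ServerInstance>
          <Cache>
            <CacheMaxExpireMinutes>180</CacheMaxExpireMinutes>
          </Cache>
        </ServerInstance>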

  • Consider increasing the Presentation Services' cache timeout values to keep the cached data intact for longer periods.

    The following example increases the cache timeout values to 5 hours in instanceconfig.xml configuration file.


Oracle BI Server Configuration Recommendations

  • Enable caching at the BI server and control/tune the cache expiry time for each of the tables based on your organization's needs.

  • Unless the repository needs to be edited online frequently, consider setting up "read only" mode for the repository. It may ease lock contention to some extent.

  • Increase the session limit and the number of requests per session limit, especially when the BI deployment is expected to handle a large number of concurrent users. Also increase the number of BI server threads.

    The following configuration was used in the 50,000 user OBIEE benchmark on T5440 servers.

    (Source configuration file: NQSConfig.INI)
    [ CACHE ]
    DATA_STORAGE_PATHS = "/export/oracle/OracleBIData/cache" 500 MB;
    // Cluster-aware cache
    GLOBAL_CACHE_STORAGE_PATH = "/export/oracle/OracleBIsharedRepository/GlobalCacheDirectory" 2048 MB;
    [ SERVER ]
    MAX_SESSION_LIMIT = 20000 ;
    SERVER_THREAD_RANGE = 512-2048;

Related Blog Posts

Saturday Dec 20, 2008

Siebel on Sun Solaris: More Performance with Fewer mprotect() Calls

By default, each transaction in the Siebel CRM application makes a large number of serialized mprotect() calls, which in turn may degrade the performance of Siebel. When the load on the Siebel application servers is very high, the mprotect() calls are serialized by the operating system kernel, resulting in a high number of context switches and low CPU utilization.

If a Siebel deployment exhibits the above mentioned pathological conditions, the performance / scalability of the application can be improved by limiting the number of mprotect() calls made by the application server processes at run time. To achieve this behavior, set the value of Siebel CRM's AOM tunable parameter MemProtection to FALSE. Starting with the release of Siebel 7.7, the MemProtection parameter is a hidden one with a default value of TRUE. To set its value to FALSE, run the following command from the CLI version of the Siebel Server Manager - srvrmgr.

change param MemProtection=False for comp <component_alias_name> server <siebel_server_name>


component_alias_name is the alias name of the AOM component to be configured. e.g., SCCObjMgr_enu is the alias for the Call Center Object Manager, and

siebel_server_name is the name of the Siebel Server for which the component is being configured.

Note that this parameter is not a dynamic one - hence the Siebel application server(s) must be restarted for this parameter to take effect.

Run truss -c -p <pid_of_any_busy_siebmtshmw_process> before and after the change to see how the mprotect system call count varies (a sketch follows).
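Here is a sketch of that check with a hypothetical PID; truss prints a per-syscall count summary when it detaches (e.g., on Ctrl-C), and the mprotect row is the one to compare before and after the change.

% truss -c -p 12345
    (let it run for a minute or two under load, then press Ctrl-C to detach;
     look at the count reported for mprotect in the per-syscall summary)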

For more information about this tunable on the Solaris platform, check the Siebel Performance Tuning Guide Version 7.7 or later in the Siebel Bookshelf.

See Also:
Siebel on Sun CMT hardware : Best Practices

(Originally posted on blogger at:
Siebel on Sun Solaris: More Performance with Less mprotect() Calls)

Sunday Nov 30, 2008

PeopleSoft on Solaris 10: Fixing the "msgget: No space left on device" Error

(Crossposting the 8+ month old blog entry from my other blog hosted on blogger. Source URL:

When a large number of application server processes are configured in a single PeopleSoft domain, or cumulatively across multiple domains, it is very likely that the PeopleSoft application server domain boot process will fail with errors like:

Booting server processes ...
exec PSSAMSRV -A -- -C psappsrv.cfg -D CS90SPV -S PSSAMSRV :
113954.ben15!PSSAMSRV.29746.1.0: LIBTUX_CAT:681: ERROR: Failure to create message queue
113954.ben15!PSSAMSRV.29746.1.0: LIBTUX_CAT:248: ERROR: System init function failed, Uunixerr = : 
                   msgget: No space left on device
113954.ben15!tmboot.29708.1.-2: CMDTUX_CAT:825: ERROR: Process PSSAMSRV at ben15 failed with /T 
                   tperrno (TPEOS - operating system error)

In this particular example, the PeopleSoft Enterprise is running on a Solaris 10 system. Fortunately, the error message is very clear in this case; the failure is related to the message queues. During the domain boot up process, there is a call to msgget() to create a message queue. If the call to msgget() succeeds, it returns a non-negative integer that serves as the identifier for the newly created message queue. However, in the case of a failure, it returns -1 and sets the error number to EACCES, EEXIST, ENOENT or ENOSPC depending on the underlying reason.

From the above error messages it is clear that msgget() failed with errno set to ENOSPC (No space left on device). The man page of msgget(2) has the following explanation for the ENOSPC error code on Solaris:

     The msgget() function will fail if:
     ENOSPC    A message queue identifier is to  be  created  but
               the  system-imposed limit on the maximum number of
               allowed  message  queue  identifiers  system  wide
               would be exceeded. See NOTES.


     The system-imposed limit on  the  number  of  message  queue
     identifiers  is  maintained on a per-project basis using the
     project.max-msg-ids resource control.

This gives enough clues to suspect the configured limit on message queue identifiers.

Prior to the release of Solaris 10, the /etc/system System V IPC tunable, msgsys:msginfo_msgmni, was used to control the maximum number of message queues that could be created. The default value on pre-Solaris 10 systems is 50.

With the release of Solaris 10, the majority of the System V IPC tunables were obsoleted, and equivalent resource controls were created for the remaining tunables to reduce the administrative overhead. On Solaris 10 and later versions, System V IPC can be tuned on a per project basis using the newly introduced resource controls.

On any Solaris 10 system, the resource control project.max-msg-ids replaces the old /etc/system tunable msginfo_msgmni, and the default value has been raised to 128.

Now back to the failure in PeopleSoft environment. Let's first check the current value configured for project.max-msg-ids.

  • Get the project ID.
     % id -p
    uid=222227(psft) gid=2294(dba) projid=3(default)
  • Examine the project.max-msg-ids resource control for the project with ID 3, using the prctl utility.
     % prctl -n project.max-msg-ids -i project 3
    project: 3: default
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
            privileged        128       -   deny                                 -
            system          16.8M     max   deny                                 -

Alternatively, run the command ipcs -q to check the number of active message queues. Note that the project with ID '3' is configured to create a maximum of 128 (the default) message queues. In any case, the number of active message queues in the ipcs -q output will likely be close to the configured value of project.max-msg-ids.

Since it appears the configured PeopleSoft domain(s) need more than 128 message queues in order to bring up all the application server processes that constitute the PeopleSoft Enterprise, the solution is to increase the value of the resource control, project.max-msg-ids, to some value beyond 128. For the sake of simplicity, let's increase it to 256 (that is, 2 x the default value). Again, the prctl utility can be used to set the new value for the resource control.

  • Assume the privileges of the 'root' user
     % su
  • Increase the maximum value for the message queue identifiers to 256 using the prctl utility.
     # prctl -n project.max-msg-ids -r -v 256 -i project 3
  • Verify the new maximum value for the message queue identifiers
     # prctl -n project.max-msg-ids -i project 3
    project: 3: default
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
            privileged        256       -   deny                                 -
            system          16.8M     max   deny                                 -

With this change, the PeopleSoft Enterprise should boot up without the "Failure to create message queue .. msgget: No space left on device" errors.

Before we conclude, note that the above mentioned solution is not persistent across operating system reboots. To make it persistent, create a new project using the projadd command. The man page for projadd(1M) has an example showing the creation of a project; a sketch follows.
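Here is a sketch of that approach with a hypothetical project name (PSFT) and user (psft); the resource control value is the one we arrived at above.

     # projadd -c "PeopleSoft domains" -U psft -K "project.max-msg-ids=(privileged,256,deny)" PSFT
     # usermod -K project=PSFT psft
     # su - psft -c "id -p"     <-- new logins should now show the PSFT project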

Friday Nov 21, 2008

Oracle on Solaris 10 : Fixing the 'ORA-27102: out of memory' Error

(Crossposting the 2+ year old blog entry from my other blog hosted on blogger. Source URL:


As part of a database tuning effort you increase the SGA/PGA sizes, and Oracle greets you with an "ORA-27102: out of memory" error message, even though the system has enough free memory to serve the needs of Oracle.

SQL> startup
ORA-27102: out of memory
SVR4 Error: 22: Invalid argument

$ oerr ORA 27102
27102, 00000, "out of memory"
// *Cause: Out of memory
// *Action: Consult the trace file for details

Not so helpful. Let's look at the alert log for some clues.

% tail -2 alert.log
WARNING: EINVAL creating segment of size 0x000000028a006000
fix shm parameters in /etc/system or equivalent

Oracle is trying to create a 10G shared memory segment (the size depends on the SGA/PGA sizes), but the operating system (Solaris in this example) responded with an invalid argument (EINVAL) error. There is a little hint about setting shm parameters in /etc/system.

Prior to Solaris 10, the shmsys:shminfo_shmmax parameter had to be set in /etc/system to the maximum shared memory segment size that could be created. 8M is the default value on Solaris 9 and prior versions, whereas 1/4th of the physical memory is the default on Solaris 10 and later. On a Solaris 10 (or later) system, it can be verified as shown below:

% prtconf | grep Mem
Memory size: 32760 Megabytes

% id -p
uid=59008(oracle) gid=10001(dba) projid=3(default)

% prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
        privileged      7.84GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

Now it is clear that the system is using the default value of ~8G (1/4th of the physical memory) in this scenario, whereas the application (Oracle) is trying to create a larger, 10G memory segment. Hence the failure.

So, the solution is to configure the system with a value large enough for the shared segment being created, so Oracle succeeds in starting up the database instance.

On Solaris 9 and prior releases, it can be done by adding the following line to /etc/system, followed by a reboot for the system to pick up the new value.

set shminfo_shmmax = 0x000000028a006000

However, the shminfo_shmmax parameter was obsoleted with the release of Solaris 10; and Sun doesn't recommend setting this parameter in /etc/system even though it works as expected.

On Solaris 10 and later, this value can be changed dynamically on a per project basis with the help of the resource control facilities. This is how we do it on Solaris 10 and later:

% prctl -n project.max-shm-memory -r -v 10G -i project 3

% prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
        privileged      10.0GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

Note that changes made with the prctl command on a running system are temporary, and will be lost when the system is rebooted. To make the changes permanent, create a project with projadd command and associate it with the user account as shown below:

% projadd -p 3  -c 'eBS benchmark' -U oracle -G dba  -K 'project.max-shm-memory=(privileged,10G,deny)' OASB
% usermod -K project=OASB oracle
Finally, make sure the project was created, using the projects -l or cat /etc/project commands.

% projects -l
        projid : 3
        comment: "eBS benchmark"
        users  : oracle
        groups : dba
        attribs: project.max-shm-memory=(privileged,10737418240,deny)

% cat /etc/project
OASB:3:eBS benchmark:oracle:dba:project.max-shm-memory=(privileged,10737418240,deny)

With these changes, Oracle would start the database up normally.

SQL> startup
ORACLE instance started.

Total System Global Area 1.0905E+10 bytes
Fixed Size                  1316080 bytes
Variable Size            4429966096 bytes
Database Buffers         6442450944 bytes
Redo Buffers               31457280 bytes
Database mounted.
Database opened.

Related information:

  1. What's New in Solaris System Tuning in the Solaris 10 Release?
  2. Resource Controls (overview)
  3. System Setup Recommendations for Solaris 8 and Solaris 9
  4. Man page of prctl(1)
  5. Man page of projadd

Addendum : Oracle RAC settings

Anonymous Bob suggested the following Oracle RAC settings in the form of a comment, for the benefit of others who run into similar issue(s) when running Oracle RAC. I'm pasting the comment as is (Disclaimer: I have not verified these settings):

Thanks for a great explanation, I would like to add one comment that will help those with an Oracle RAC installation. Modifying the default project covers oracle processes great and is all that is needed for a single instance DB. In RAC however, the CRS process starts the DB and it is a root owned process and root does not use the default project. To fix ORA-27102 issue for RAC I added the following lines to an init script that runs before the script fires.

# Recommended Oracle RAC system params
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536

# For root processes like crsd
prctl -n project.max-shm-memory -r -v 8G -i project system
prctl -n project.max-shm-ids -r -v 512 -i project system

# For oracle processes like sqlplus
prctl -n project.max-shm-memory -r -v 8G -i project default
prctl -n project.max-shm-ids -r -v 512 -i project default

So simple yet it took me a week working with Oracle and SUN to come up with that answer...Hope that helps someone out.

# posted by Blogger Bob : 6:48 AM, April 25, 2008

Saturday Nov 01, 2008

Ramifications of the Solaris 10 kernel patch 137111


A recent code change in Solaris 10 inadvertently exposed an inherent bug in some 32-bit applications that rely on their own memory allocators. Due to this, some 3rd party applications which were working earlier without KU 137111 may crash on Solaris 10/SPARC with KU 137111 (any revision).

Symptoms & the Cause

It was identified that the majority of such application failures are mainly due to the applications' custom memory allocators, which incorrectly return 4-byte aligned mutexes in place of the required 8-byte aligned mutexes. In Solaris, the mutex_t and pthread_mutex_t structures are defined to be aligned on an 8-byte boundary. Both of those structures contain the upad64_t member, which is a double even for 32-bit applications. The natural alignment of a double is 8 bytes; and per the SPARC Compliance Definition 2.4, structures must be aligned according to their strictest member. That is, applications which create 4-byte aligned mutexes are technically non-compliant on Solaris/SPARC (for the sake of simplicity, such code will be referred to as non-complying code for the remainder of this blog entry).

Due to a change in the implementation of the userland mutexes introduced by CR 6296770 in KU 137111-01, objects of type mutex_t and pthread_mutex_t must start at 8-byte aligned addresses. If this requirement is not satisfied, all non-compliant applications on Solaris/SPARC may fail with the signal SEGV with a callstack similar to the following one or with similar callstacks containing the function mutex_trylock_process.

  *_atomic_cas_64(0x141f2c, 0x0, 0xff000000, 0x1651, 0xff000000, 0x466d90)
  set_lock_byte64(0x0, 0x1651, 0xff000000, 0x0, 0xfec82a00, 0x0)
  fast_process_lock(0x141f24, 0x0, 0x1, 0x1, 0x0, 0xfeae5780)

Patches & the Next Steps

Note that only non-compliant 32-bit applications will be affected by KU 137111. All other, complying 32-bit applications continue to run as expected even with KU 137111 -- hence customers, partners, ISVs and other software vendors must understand that this is not a Solaris issue. Customers running into this issue must work with the respective software vendors to obtain a patch/fix. We suggest that ISVs and other software vendors proactively check their 32-bit native code for discrepancies like the one mentioned in this blog entry.

In our testing of some of the enterprise applications, we identified Oracle's Siebel CRM as one of the applications that is vulnerable to KU 137111. It appears that IBM's Lotus Domino Server is also prone to a crash on Solaris 10 with the same kernel patch. Speaking of these two known cases, Oracle/Siebel and IBM/Lotus Domino customers (running Solaris) should approach Oracle and IBM Corporations respectively, not Sun Microsystems, for a proper fix.

As it may take some time for the ISVs / software vendors to identify and fix the non-complying code in their applications, Sun is planning to provide an interim fix for the mutex byte alignment issue in the form of a Solaris kernel patch. As of this writing, we expect the fix to be integrated into KU 137137-07 [Updated 12/07/08: the fix is now expected in KU 137137-09]. The fix is already available in the latest update of Solaris, Solaris 10 10/08. Those who cannot upgrade to Solaris 10 10/08 from prior versions of Solaris 10 must wait for that kernel patch.

One must note that the fix in Solaris is a tentative one that allows the non-complying code to run on SPARC hardware for the time being. There is no guarantee that the non-complying code will continue to run 'as is' in the future with new Solaris kernel patches and/or major updates/releases of the Solaris operating system. So the best long term solution is for the software vendors to fix the non-compliant code before it is too late.


Steve S and Roger F of Sun Microsystems.

Tuesday Nov 27, 2007

Solaris/SPARC: Oracle 11gR1 client for Siebel 8.0

First things first - Oracle 11g Release 1 for Solaris/SPARC is available now; and can be downloaded from here.

In some Siebel 8.0 environments where the Oracle database is being used, customers might notice intermittent Siebel object manager crashes under high loads, when the work is actively being done by a large number of LWPs with a relatively small number of object managers. Usually the call stack looks something like:

/export/home/oracle/lib32/ [ Signal 11 (SEGV)]


Setting the Siebel environment variable SIEBEL_STDERROUT to 1 produces the following heap dump in the StdErrOut directory under the Siebel enterprise logs directory.

% more stderrout_7762_23511113.txt
********** Internal heap ERROR 17112 addr=35dddae8 *********

***** Dump of memory around addr 35dddae8:
35DDCAE0 00000000 00000000 [........]
35DDCAF0 00000000 00000000 00000000 00000000 [................]
Repeat 243 times
35DDDA30 00000000 00000000 00003181 00300020 [..........1..0. ]
35DDDA40 0949D95C 35D7A888 10003179 00000000 [.I.\\5.....1y....]
35DDDA50 0949D95C 0949D8B8 35D7A89C C0000075 [.I.\\.I..5......u]
HEAP DUMP heap name="Alloc environm" desc=949d8b8
extent sz=0x1024 alt=32767 het=32767 rec=0 flg=3 opc=2
parent=949d95c owner=0 nex=0 xsz=0x1038
EXTENT 0 addr=364fb324
Chunk 364fb32c sz= 4144 free " "
EXTENT 1 addr=364f8ebc
Chunk 364f8ec4 sz= 4144 free " "
EXTENT 2 addr=364f7d5c
Chunk 364f7d64 sz= 4144 free " "
EXTENT 3 addr=364f6d04
Chunk 364f6d0c sz= 4144 recreate "Alloc statemen " latch=0
ds 2c38df34 sz= 4144 ct= 1
EXTENT 406 addr=35ddda54
Chunk 35ddda5c sz= 116 free " "
Chunk 35dddad0 sz= 24 BAD MAGIC NUMBER IN NEXT CHUNK (6)
freeable assoc with mark prv=0 nxt=0

Dump of memory from 0x35DDDAD0 to 0x35DDEAE8
35DDDAD0 20000019 35DDDA5C 00000000 00000000 [ ...5..\\........]
35DDDAE0 00000095 0000008B 00000006 35DDDAD0 [............5...]
35DDDAF0 00000000 00000000 00000095 35DDDB10 [............5...]
EXTENT 2080 addr=d067a6c
Chunk d067a74 sz= 2220 freeable "Alloc statemen " ds=2b0fffe4
Chunk d068320 sz= 1384 freeable assoc with mark prv=0 nxt=0
Chunk d068888 sz= 4144 freeable "Alloc statemen " ds=2b174550
Chunk d0698b8 sz= 4144 recreate "Alloc statemen " latch=0
ds 1142ea34 sz= 112220 ct= 147
223784cc sz= 4144
240ea014 sz= 884
28eac1bc sz= 900
2956df7c sz= 900
1ae38c34 sz= 612
92adaa4 sz= 884
2f6b96ac sz= 640
c797bc4 sz= 668
2965dde4 sz= 912
1cf6ad4c sz= 656
10afa5e4 sz= 656
2f6732bc sz= 700
27cb3964 sz= 716
1b91c1fc sz= 584
a7c28ac sz= 884
169ac284 sz= 900
Chunk 2ec307c8 sz= 12432 free " "
Chunk 3140a3f4 sz= 4144 free " "
Chunk 31406294 sz= 4144 free " "
Bucket 6 size=16400
Bucket 7 size=32784
Total free space = 340784
Chunk 949f3c8 sz= 100 perm "perm " alo=100
Permanent space = 100
Hla: 255

ORA-21500: internal error code, arguments: [17112], [0x35DDDAE8], [], [], [], [], [], []
Errors in file :
ORA-21500: internal error code, arguments: [17112], [0x35DDDAE8], [], [], [], [], [], []

----- Call Stack Trace -----
NOTE: +offset is used to represent that the
function being called is offset bytes from
calling call entry argument values in hex
location type point (? means dubious value)
-------------------- -------- -------------------- ----------------------------
F2365738 CALL +23052 D7974250 ? D797345C ? DD20 ?
D79741AC ? D79735F8 ?
F2ECD7A0 ?
F286DDB8 PTR_CALL 00000000 949A018 ? 14688 ? B680B0 ?
F2365794 ? F2ECD7A0 ? 14400 ?
F286E18C CALL +77460 949A018 ? 0 ? F2F0E8D4 ?
1004 ? 1000 ? 1000 ?
F286DFF8 CALL +66708 949A018 ? 0 ? 42D8 ? 1 ?
D79743E0 ? 949E594 ?
__1cN_smiWorkQdDueu CALL __1cN_smiWorkQdDueu 1C8F608 ? 18F55F38 ?
30A5A008 ? 1A98E208 ?
FDBDF178 ? FDBE0424 ?
__1cQSmiThrdEntryFu PTR_CALL 00000000 1C8F608 ? FDBE0424 ?
1AB6EDE0 ? FDBDF178 ? 0 ?
1500E0 ?
__1cROSDWslThreadSt PTR_CALL 00000000 1ABE8250 ? 600140 ? 600141 ?
105F76E8 ? 0 ? 1AC74864 ?
__1cP_AfxThreadEntr PTR_CALL 00000000 0 ? FF30A420 ? 203560 ?
1AC05AF0 ? E2 ? 1AC05A30 ?
__1cIMwThread6Fpv_v PTR_CALL 00000000 D7A7DF6C ? 17F831E0 ? 0 ? 1 ?
0 ? 17289C ?
_lwp_start()+0 ? 00000000 1 ? 0 ? 1 ? 0 ? FCE6C710 ?
1AC72680 ?

----- Argument/Register Address Dump -----

Argument/Register addr=d7974250. Dump of memory from 0xD7974210 to 0xD7974350
D7974200 0949A018 00014688 00B680B0 F2365794
D7974220 F2ECD7A0 00014400 D7974260 F286DDB8 800053FC 80000000 00000002 D79743E0
D7974240 4556454E 545F3231 35303000 32000000 F23654D4 F2365604 00000000 0949A018
D7974260 00000000 0949A114 0000000A F2ECD7A0 FC873504 00001004 F2365794 F2F0E8D4
D7974280 0949A018 00000000 F2F0E8D4 00001004 00001000 00001000 D79742D0 F286E18C
D79742A0 F6CB7400 FC872CC0 00000004 00CDE34C 4556454E 545F3437 31313200 00000000
D79742C0 00000000 00000001 D79743E0 00000000 F2F0E8D4 00001004 00001000 00007530
D79742E0 00007400 00000001 FC8731D4 00003920 0949A018 00000000 000042D8 00000001
D7974300 D79743E0 0949E594 D7974330 F286DFF8 D7974338 F244D5A8 00000000 01000000
D7974320 364FBFFF 01E2C000 00001000 00000000 00001000 00000000 F2ECD7A0 F23654D4
D7974340 000000FF 00000001 F2F0E8D4 F25F9824
Argument/Register addr=d797345c. Dump of memory from 0xD797341C to 0xD797355C
D7973400 F2365738
Argument/Register addr=1eb21388. Dump of memory from 0x1EB21348 to 0x1EB21488
1EB21340 00000000 00000000 FE6CE088 FE6CE0A4 0000000A 00000010
1EB21360 00000000 00000000 00000000 00000010 00000000 00000000 00000000 FEB99308
1EB21380 00000000 00000000 00000000 1C699634 FFFFFFFF 00000000 FFFFFFFF 00000001
1EB213A0 00000000 00000000 00000000 00000000 00000081 00000000 F16F1038 67676767
1EB213C0 00000000 FEB99308 00000000 00000000 FEB99308 FEB46EF4 00000000 00000000
1EB213E0 00000000 00000000 FEB99308 00000000 00000000 00000001 0257E3B4 03418414
1EB21400 00000000 00000000 FEB99308 FEB99308 00000000 00000000 00000000 00000000
1EB21420 00000000 00000000 00000000 00000000 1EB213B0 00000000 00000041 00000000
1EB21440 0031002D 00410036 00510038 004C0000 00000000 00000000 00000000 00000000
1EB21460 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
1EB21480 00000109 00000000

----- End of Call Stack Trace -----

Although I'm not sure what exactly the underlying issue for the core dump is, my suspicion is that there is some memory corruption in the Oracle client's code, and that the Siebel Object Manager crash is due to Oracle bug 5682121 - Multithreaded OCI clients do not mutex properly for LOB operations. The fix for this particular bug would be available in a future Oracle release, and is already available as part of Oracle 11gR1. If you notice the symptoms of failure described in this blog post, upgrade the Oracle client in the application tier to Oracle 11gR1 and see if it brings stability to the Siebel environment.

Original blog post is at:

Friday Aug 10, 2007

Oracle 10gR2/Solaris x64: Must Have Patches for E-Business Suite 11.5.10

If you have an Oracle E-Business Suite 11.5.10 database running on Oracle 10gR2 ( / Solaris x86-64 platform, make sure you have the following two Oracle patches to avoid concurrency issues and intermittent Oracle shadow process crashes.

Oracle patches

1) 4770693 BUG: Intel Solaris: Unnecessary latch sleeps by default
2) 5666714 BUG: ORA-7445 ON DELETE


1) If the top 5 database timed events look similar to the following in AWR, it is very likely that the database is running into bug 4770693, "Intel Solaris: Unnecessary latch sleeps by default".

Top 5 Timed Events

Event                          Waits     Time(s)   Avg Wait(ms)   % Total Call Time   Wait Class
latch: cache buffers chains     94,301   169,403          1,796                86.1   Concurrency
CPU time                            --     5,478             --                 2.8   --
wait list latch free           247,466     4,756             19                 2.4   Other
buffer busy waits               14,928     1,382             93                  .7   Concurrency
db file sequential read         98,750       552              6                  .3   User I/O

Apply Oracle server patch 4770693 to get rid of the concurrency issue(s). Note that the fix will be part of the release

2) If the application becomes unstable and you notice core dumps in the cdump directory, have a look at the corresponding stack traces generated in the udump directory. If the call stack looks similar to the following stack, apply Oracle server patch 5666714 to overcome this problem.

The alert log will have the following errors:

Errors in file /opt/oracle/admin/VIS/udump/vis_ora_1040.trc:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [2168] [] [] []
Fri Jun 15 01:30:38 2007
Errors in file /opt/oracle/admin/VIS/udump/vis_ora_1040.trc:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [9] [] [] []
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [2168] [] [] []
Fri Jun 15 01:30:38 2007
Errors in file /opt/oracle/admin/VIS/udump/vis_ora_1040.trc:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [9] [] [] []
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [9] [] [] []
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [2168] [] [] [] 

% more vis_ora_1040.trc
*** 2007-06-15 01:30:38.403
*** SERVICE NAME:(VIS) 2007-06-15 01:30:38.402
*** SESSION ID:(1111.4) 2007-06-15 01:30:38.402
Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x878
*** 2007-06-15 01:30:38.403
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [2168] [] [] []
Current SQL statement for this session:
INSERT /*+ IDX(0) */ INTO "INV"."MLOG$_MTL_SUPPLY" (dmltype$$,old_new$$,snaptime$$,change_vector$$,m_row$$) 
VALUES (:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:m)
----- Call Stack Trace -----
calling              call     entry                argument values in hex      
location             type     point                (? means dubious value)     
-------------------- -------- -------------------- ----------------------------
ksedst()+23          ?        0000000000000001     00177A9EC 000000000 0061D0A60
ksedmp()+636         ?        0000000000000001     001779481 000000000 00000000B
ssexhd()+729         ?        0000000000000001     000E753CE 000000000 0061D0B90
_sigsetjmp()+25      ?        0000000000000001     0FDDCB7E6 0FFFFFD7F 0061D0B50
call_user_handler()  ?        0000000000000001     0FDDC0BA2 0FFFFFD7F 0061D0EF0
+589                                               000000000
sigacthandler()+163  ?        0000000000000001     0FDDC0D88 0FFFFFD7F 000000002
kglsim_pin_simhp()+  ?        0000000000000001     0FFFFFFFF 0FFFFFFFF 00000000B
173                                                000000000
kxsGetRuntimeLock()  ?        0000000000000001     001EBF830 000000000 005E5D868
+683                                               000000000
kksfbc()+7361        ?        0000000000000001     001FB60A6 000000000 005E5D868
opiexe()+1691        ?        0000000000000001     0029045D0 000000000 0FFDF9250
opiall0()+1316       ?        0000000000000001     0028E9FB9 000000000 000000001
opikpr()+536         ?        0000000000000001     00290B2DD 000000000 0000000B7
opiodr()+1087        ?        0000000000000001     000E7BE1C 000000000 000000001
rpidrus()+217        ?        0000000000000001     000E8058E 000000000 0FFDFA6B8
skgmstack()+163      ?        0000000000000001     003F611D0 000000000 005E5D868
rpidru()+129         ?        0000000000000001     000E808A6 000000000 005E6FAD0
rpiswu2()+431        ?        0000000000000001     000E7FD8C 000000000 0FFDFB278
kprball()+1189       ?        0000000000000001     000E86E6A 000000000 0FFDFB278
kntxslt()+3150       ?        0000000000000001     0030601F3 000000000 005F7C538
kntxit()+998         ?        0000000000000001     003058EBB 000000000 005F7C538
0000000001E4866E     ?        0000000000000001     001E4864B 000000000 000000000
delrow()+9170        ?        0000000000000001     0032020B7 000000000 000000002
qerdlFetch()+640     ?        0000000000000001     0033545F5 000000000 0EF38B020
delexe()+909         ?        0000000000000001     0032034EA 000000000 005E6FC50
opiexe()+9267        ?        0000000000000001     002906368 000000000 000000001
opiodr()+1087        ?        0000000000000001     000E7BE1C 000000000 0FFDFCD10
ttcpip()+1168        ?        0000000000000001     003D031AD 000000000 0FFDFEDF4
opitsk()+1212        ?        0000000000000001     000E77C41 000000000 000E7BA00
opiino()+931         ?        0000000000000001     000E7B0D8 000000000 005E5B8F0
opiodr()+1087        ?        0000000000000001     000E7BE1C 000000000 000000000
opidrv()+748         ?        0000000000000001     000E76A11 000000000 0FFDFF6D8
sou2o()+86           ?        0000000000000001     000E73E6B 000000000 000000000
opimai_real()+127    ?        0000000000000001     000E3A7C4 000000000 000000000
main()+95            ?        0000000000000001     000E3A694 000000000 000000000
0000000000E3A4D7     ?        0000000000000001     000E3A4DC 000000000 000000000
--------------------- Binary Stack Dump ---------------------
========== FRAME [1] (ksedst()+23 -> 0000000000000001) ==========
Dump of memory from 0x00000000061D0910 to 0x00000000061D0920
0061D0910 061D0920 00000000 0177A9EC 00000000  [ .........w.....]
========== FRAME [2] (ksedmp()+636 -> 0000000000000001) ==========
Dump of memory from 0x00000000061D0920 to 0x00000000061D0A60
0061D0920 061D0A60 00000000 01779481 00000000  [`.........w.....]
0061D0930 0000000B 00000000 061D0EF0 00000000  [................]
0061D0940 05E5B96C 00000000 05E5C930 00000000  [l.......0.......]
0061D0950 05E5C930 00000000 FE0D2000 FFFFFD7F  [0........ ......]
0061D0960 061D0A40 00000000 00000000 00000000  [@...............]
0061D0970 00000000 00000000 00000000 00000000  [................]

After installing the Oracle database patches, verify the installed patches by running opatch lsinventory on your database server.

% opatch lsinventory
Invoking OPatch

Oracle interim Patch Installer version
Copyright (c) 2005, Oracle Corporation.  All rights reserved..

Oracle Home       : /oracle/product/10.1.0
Central Inventory : /export/home/oracle/oraInventory
   from           : /oracle/product/10.1.0/oraInst.loc
OPatch version    :
OUI version       :
OUI location      : /oracle/product/10.1.0/oui
Log file location : /oracle/product/10.1.0/cfgtoollogs/opatch/opatch-2007_Aug_10_21-56-03-PDT_Fri.log

Lsinventory Output file location : 

Installed Top-level Products (3): 

Oracle Database 10g                                        
Oracle Database 10g Products                               
Oracle Database 10g Release 2 Patch Set 1                  
There are 3 products installed in this Oracle Home.

Interim patches (2) :

Patch  4770693      : applied on Thu Aug 02 15:27:23 PDT 2007
   Created on 12 Jul 2006, 11:52:39 hrs US/Pacific
   Bugs fixed:

Patch  5666714      : applied on Fri Jul 20 10:21:33 PDT 2007
   Created on 29 Nov 2006, 04:52:58 hrs US/Pacific
   Bugs fixed:


OPatch succeeded.
