Monday Oct 15, 2012

Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com.

    The Oracle Optimized Solution for Oracle E-Business Suite

This solution is centered around the SPARC SuperCluster T4-4 engineered system. Check out the business and technical white papers, along with a number of other relevant resources, at the optimized solution page for EBS linked above.

What is an Optimized Solution?

Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An optimized solution is a well documented architecture that has been thoroughly tested on a target platform. The technical white paper details the deployed application architecture along with various observations, from installing the application on the target platform to its behavior and performance in highly available and scalable configurations.

Oracle E-Business Suite R12 Use Case

Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution -- Financials (online - oracle forms & web requests), Order Management (online - oracle forms & web requests) and HRMS (online - web requests & payroll batch). The solution will be updated with additional application modules as they become available.

Oracle Solaris Cluster is responsible for the high availability portion of the solution.

Performance Data

For the sake of completeness, test results were also documented in the optimized solution white paper. Those test results are for educational purposes only; they give a good sense of application behavior under the circumstances in which the application was tested. Since the major focus of the optimized solution is on highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle. Such an attempt may lead to skewed, incorrect conclusions.

Questions & Requests

Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on a non-engineered system such as SPARC T4-X, or an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

Monday Sep 24, 2012

E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work at hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view.

THREADS parameter

The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. On the down side, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which give back some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK_SHUFFLE, help alleviate the database contention.

eg.,

Set a value for the THREADS parameter as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'THREADS';

COMMIT;

(I am not aware of any maximum value for THREADS parameter)


CHUNK_SIZE parameter

The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of desired size represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time.

When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted.

It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into the relevant table(s). When multiple child processes insert data at the same time into the same set of tables, as explained earlier, the database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with the CHUNK_SIZE value in increments of +/-10 for the initial phase and observe the performance impact. A larger value may make sense during the main processing phase. Again, experimentation is key to finding a suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found.

eg.,

Set a value for the CHUNK_SIZE parameter as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'CHUNK_SIZE';

COMMIT;

The CHUNK_SIZE action parameter accepts a value as low as 1 or as high as 16000.


CHUNK SHUFFLE parameter

By default, chunks of assignment actions are processed sequentially by all threads -- which may not be a good thing, especially given that all child processes/threads perform similar actions against the same set of tables at almost the same time. In other words, the default behavior can lead to contention in the database (in data blocks, for example).

It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured.

eg.,

Set chunk shuffling as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = 'Y'
WHERE PARAMETER_NAME = 'CHUNK SHUFFLE';

COMMIT;

Finally, I recommend checking out the following document for additional details and other tunable pay action parameters that may speed up Oracle Payroll processing.
    My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks

Also, experiment with different combinations of parameters and values until the right set of action parameters and values is found for your deployment.
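
To review the current settings at any time, query the same view that the UPDATE statements above operate on. Here is a minimal sketch; note that action parameters left at their defaults may not have a row in this view.

CONNECT APPS/APPS_PASSWORD

SELECT PARAMETER_NAME, PARAMETER_VALUE
  FROM PAY_ACTION_PARAMETERS
 WHERE PARAMETER_NAME IN ('THREADS', 'CHUNK_SIZE', 'CHUNK SHUFFLE');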

Friday Aug 03, 2012

Enabling 2 GB Large Pages on Solaris 10

A few facts:

  • 8 KB is the default page size on Solaris 10 and 11 as of this writing
  • Both hardware and software must have support for 2 GB large pages
  • SPARC T4 hardware is capable of supporting 2 GB pages
  • The Solaris 11 kernel has built-in support for 2 GB pages
  • Solaris 10 has no default support for 2 GB pages
  • Memory intensive 64-bit applications may benefit the most from using 2 GB pages

Prerequisites:

OS: Solaris 10 8/11 (Update 10) or later
Hardware: SPARC T4. eg., SPARC T4-1, T4-2 or T4-4

Steps to enable 2 GB large pages on Solaris 10:

  1. Install the latest kernel patch, or ensure that 147440-04 or later is installed

  2. Add the following line to /etc/system and reboot
    • set max_uheap_lpsize=0x80000000

  3. Finally check the output of the following command when the system is back online
    • pagesize -a

    eg.,
    % pagesize -a
    8192		<-- 8K
    65536		<-- 64K
    4194304		<-- 4M
    268435456	<-- 256M
    2147483648	<-- 2G
    
    % uname -a
    SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v
    

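To verify that a memory intensive 64-bit application is actually backed by 2 GB pages once the system is back up, the page sizes in its address space can be examined with the pmap utility. A quick sketch -- 1234 is just a placeholder process id, and the page size shows up in the Pgsz column:

    % pmap -s 1234 | grep 2G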

Sunday Jul 29, 2012

[OID] ldap_modify: Failed to find member in mandatory or optional attribute list

A sample LDAP entry and the resulting error message are shown below. The objective is simple - adding a new member (employee) to an existing group (Administrators).

% cat assigngrp.ldif

dn: cn=Administrators,ou=groups,ou=entapp
changetype: modify
add: member
member: cn=emp1234,ou=people,ou=entapp

% ldapmodify -p 3060 -h localhost -D "cn=orcladmin" -w passwd -f assigngrp.ldif
add member:
	cn=emp1234,ou=people,ou=entapp
modifying entry cn=Administrators,ou=groups,ou=entapp
ldap_modify: Object class violation
ldap_modify: additional info: Failed to find member in mandatory or \
     optional attribute list.

The above error message is a generic one. It would have been nice had it shown the expected and actual inputs as part of the error. However, it does give a hint that an object class was violated. In this example, the group "Administrators" was created under the object class groupOfUniqueNames.

% ldapsearch -p 3060 -h localhost -b "ou=groups,ou=entapp" -A "(objectclass=*)"
..
cn=Administrators,ou=groups,ou=entapp
Administrators,groups,entapp
cn
uniquemember
objectclass
..

RFC 4519 for Lightweight Directory Access Protocol (LDAP) requires the uniqueMember attribute within the groupOfUniqueNames object class. An excerpt from the original RFC:

3.6.  'groupOfUniqueNames'
	...

      ( 2.5.6.17 NAME 'groupOfUniqueNames'
         SUP top
         STRUCTURAL
         MUST ( uniqueMember $
               cn )
         MAY ( businessCategory $
               seeAlso $
               owner $
               ou $
               o $
               description ) )

Going back to the issue at hand, the "add" attribute must be uniqueMember, not member, in the "modify" LDAP entry. That is the object class violation in this case. Now the fix is obvious.

The modified entry and the output from Oracle Internet Directory's ldapmodify command are shown below.

% cat assigngrp.ldif

dn: cn=Administrators,ou=groups,ou=entapp
changetype: modify
add: uniqueMember
uniqueMember: cn=emp1234,ou=people,ou=entapp

% ldapmodify -p 3060 -h localhost -D "cn=orcladmin" -w passwd -f assigngrp.ldif
add uniqueMember:
	cn=emp1234,ou=people,ou=entapp
modifying entry cn=Administrators,ou=groups,ou=entapp
modify complete
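
To confirm the change, the group entry can be read back with ldapsearch. A minimal sketch using the same connection options as above, requesting only the uniquemember attribute:

% ldapsearch -p 3060 -h localhost -D "cn=orcladmin" -w passwd \
     -b "cn=Administrators,ou=groups,ou=entapp" -s base "(objectclass=*)" uniquemember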

Though the above example was derived from an Oracle Internet Directory (OID) environment, the problem and the solution are applicable to all environments running LDAP servers.

Tuesday Jun 12, 2012

Oracle E-Business Suite Tip : SQL Tracing

Issue:

Attempting to enable SQL tracing from the concurrent request form fails with the error:

Function not available to this responsibility.
Change Responsibilities or contact your System Administrator

Resolution:

Switch responsibility to "System Administrator". Navigate to System -> Profiles, and query for "%Diagnostics%" ("Utilities : Diagnostics"). Once the profile is found, change its value to "Yes". Restart the web browser and try enabling SQL trace again.
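
To quickly check the current value of this profile option from SQL*Plus, a query along the following lines can be used. This is only a sketch -- it assumes FND_DIAGNOSTICS is the internal name of the "Utilities : Diagnostics" profile option and that the query is run in an APPS session.

CONNECT APPS/APPS_PASSWORD

SELECT FND_PROFILE.VALUE('FND_DIAGNOSTICS') FROM DUAL;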

Tuesday May 08, 2012

OBIEE 11g: Resolving Presentation Services Startup Failure

ISSUE:

Starting Presentation Services fails with the following error:

[OBIPS] [ERROR:1] [] [saw.security.odbcuserpopulationimpl.getbisystemconnection] [ecid: ] [tid: ] Authentication Failure.
Odbc driver returned an error (SQLDriverConnectW).
State: 08004.  Code: 10018.  [NQODBC] [SQL_STATE: 08004] [nQSError: 10018] Access for the requested connection is refused.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43126] Authentication failed: invalid user/password. (08004)[[

Connecting to the metadata repository (RPD) in online mode also fails with a similar error.

Looking through the BI server log, nqserver.log, you may find an error message similar to the following:

[OracleBIServerComponent] [ERROR:1] [] [] [ecid: 0001J1LfUetFCC3LVml3ic0000pp000000] [tid: 1] 
[13026] Error in getting roles from BI Security Service:	 
'Error Message From BI Security Service: [nQSError: 46164] HTTP Server returned 404 (Not Found) for URL .'

RESOLUTION:

  • Connect to WebLogic Server (WLS) Console -> Deployments. Ensure that all deployed components are in 'Active' state.

  • If any of the components is in 'Prepared' state, select that application and then click on "start servicing all requests"

  • Restart BI Server and Presentation Services

In some cases, the following additional step might be needed to resolve the issue.

  • Access the Enterprise Manager Fusion Middleware control: http://<host.domain>:port/em

  • Navigate to Business Intelligence -> coreapplication

  • 'Capacity Management' tab -> 'Scalability' sub-tab

  • Click on 'Lock and Edit Configuration' button

  • Enter the IP address in the 'Listen Address' field

  • Click on 'Activate Changes' followed by 'Release Configuration' buttons

  • Restart BI Server and Presentation Services

Also check these My Oracle Support (MOS) documents for more clues and information.

1387283.1 Authentication failed: invalid user/password
1251364.1 Error: "[nQSError: 10018] Access .. Refused. [nQSError: 43126] Authentication Failed .." when Installing OBIEE 11g
1410233.1 How To Bind Components / Ports To A Specific IP Address On Multiple Network Interface (NIC) Machines

Friday Apr 27, 2012

Solaris Volume Manager (SVM) on Solaris 11

SVM is not installed on Solaris 11 by default.

# metadb
-bash: metadb: command not found

# /usr/sbin/metadb
-bash: /usr/sbin/metadb: No such file or directory

Install it using the pkg utility.

# pkg info svm
pkg: info: no packages matching the following patterns you specified are
installed on the system.  Try specifying -r to query remotely:

        svm

# pkg info -r svm
          Name: storage/svm
       Summary: Solaris Volume Manager
   Description: Solaris Volume Manager commands
      Category: System/Core
         State: Not installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.175.0.0.0.2.1
Packaging Date: October 19, 2011 06:42:14 AM 
          Size: 3.48 MB
          FMRI: pkg://solaris/storage/svm@0.5.11,5.11-0.175.0.0.0.2.1:20111019T064214Z

# pkg install storage/svm
           Packages to install:   1
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   1

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1     104/104      1.6/1.6

PHASE                                        ACTIONS
Install Phase                                168/168 

PHASE                                          ITEMS
Package State Update Phase                       1/1 
Image State Update Phase                         2/2 

# which metadb
/usr/sbin/metadb

This time metadb may fail with a different error.

# metadb
metadb: <HOST>: /dev/md/admin: No such file or directory

Check if md.conf exists.

# ls -l  /kernel/drv/md.conf 
-rw-r--r--   1 root     sys          295 Apr 26 15:07 /kernel/drv/md.conf

Dynamically re-scan md.conf so the device tree gets updated.

# update_drv -f md

# ls -l  /dev/md/admin
lrwxrwxrwx   1 root root 31 Apr 20 10:12 /dev/md/admin -> ../../devices/pseudo/md@0:admin

# metadb
metadb: <HOST>: there are no existing databases

Now Solaris Volume Manager is ready to use.

eg.,
#  metadb -f -a c0t5000CCA00A5A7878d0s0

# metadb
        flags           first blk       block count
     a        u         16              8192          /dev/dsk/c0t5000CCA00A5A7878d0s0
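
From here, metadevices can be created as usual. The following is a minimal sketch that builds a one-slice concat and checks its status -- the slice name is only an example; substitute a slice that actually exists on your system.

# metainit d10 1 1 c0t5000CCA00A5A7878d0s1

# metastat d10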

Friday Mar 30, 2012

Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

Here is a failure sample.

SQL> set serveroutput on
SQL> alter package APPS.FND_TRACE compile body;

Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY APPS.FND_TRACE:

LINE/COL ERROR
-------- -----------------------------------------------------------------
235/6    PL/SQL: Statement ignored
235/6    PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared
..

By default, the DBMS_SYSTEM package is accessible only from the SYS schema, and there is no public synonym created for this package. So, the solution is to create the public synonym and grant the "execute" privilege on the DBMS_SYSTEM package either to a specific user or to all database users.

eg.,

SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system;

Synonym created.

SQL> GRANT EXECUTE ON dbms_system TO APPS;

Grant succeeded.

- OR -

SQL> GRANT EXECUTE ON dbms_system TO PUBLIC;

Grant succeeded.

SQL>  alter package APPS.FND_TRACE compile body;

Package body altered.

Note that merely granting execute privilege is not enough -- creating the public synonym is as important to resolve this issue.

Tuesday Feb 28, 2012

Oracle RDBMS & Solaris : Few Random Tips (Feb 2012)

These tips are just some quick solutions or workarounds. Use these quickies at your own risk.

[#1] Oracle Data Pump

Q: How to exclude the table definition while importing a table using the Oracle Data Pump import utility?

A: Use the EXCLUDE=TABLE/TABLE option.

eg.,

impdp login/password DUMPFILE=<DUMP_FILENAME> LOGFILE=<LOGFILE_NAME> \
 DIRECTORY=<DB_DIR_NAME> TABLES=<TABLE_NAME> EXCLUDE=TABLE/TABLE



[#2] Workaround to ORA-01089: immediate shutdown in progress - no operations are permitted

When the database is in the middle of an instance shutdown, if another shutdown or startup is attempted, Oracle RDBMS may throw the above ORA-01089 error. The workaround is to force Oracle to start the database instance using the STARTUP FORCE option, which shuts down the database instance (if running) as with SHUTDOWN ABORT and then starts it back up.

eg.,

SQL> STARTUP FORCE



[#3] Quick steps to upgrade the Oracle database from version 11.2.0.[1 or 2] to 11.2.0.3

Execute the following in the same sequence as sysdba.

startup upgrade
!cd $ORACLE_HOME/rdbms/admin
@utlu112i.sql		/* pre-upgrade information tool */
exec dbms_stats.gather_dictionary_stats (DEGREE => 64);
@catupgrd.sql		/* create/modify data dictionary tables */
@utlu112s 		/* all components should be in VALID state */
shutdown immediate
startup
@catuppst.sql		/* upgrade actions that do not require DB in UPGRADE mode */
@utlrp.sql		/* recompile stored PL/SQL and Java code */
SELECT count(*) FROM dba_invalid_objects;		
                        /* verify that all packages and classes are valid */
exit



[#4] Q: Solaris: how to get rid of zombie processes?

A: Run the following with appropriate user privileges.

ps -eaf | grep defunct | grep -v grep | preap `awk '{ print $2 }'`

An alternative (not as good as the previous one, but it may still work as expected):

prstat -n 500 1 1 | grep zombie | preap `awk '{ print $1 }'`



[Added on 03/01/2012]

[#5] Solaris: Many TCP listen drops

eg.,

# netstat -sP tcp | grep tcpListenDrop
        tcpListenDrop       =2442553     tcpListenDropQ0     =     0

To alleviate numerous TCP listen drops, bump up the value of the tcp_conn_req_max_q tunable.

# ndd -set /dev/tcp tcp_conn_req_max_q <value>
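
The current value can be checked first by querying the same tunable without the -set option. eg.,

# ndd /dev/tcp tcp_conn_req_max_q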



[Added on 03/02/2012]

[#6] Solaris ZFS: listing all properties and values for a zpool

Run: zfs get all <zpool_name> as any OS user

eg.,

% zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool   276G   167G   109G    60%  ONLINE  -
spec    556G   168G   388G    30%  ONLINE  -

% zfs get all rpool
NAME   PROPERTY              VALUE                  SOURCE
rpool  type                  filesystem             -
rpool  creation              Fri May 27 17:06 2011  -
...
rpool  compressratio         1.00x                  -
rpool  mounted               yes                    -
rpool  quota                 none                   default
rpool  reservation           none                   default
rpool  recordsize            128K                   default
...
rpool  checksum              on                     default
rpool  compression           off                    default
...
rpool  logbias               latency                default
rpool  sync                  standard               default
rpool  rstchown              on                     default



[#7] Solaris: listing all ZFS tunables

Run: echo "::zfs_params" | mdb -k with root/super-user privileges

eg.,

# echo "::zfs_params" | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x10000000
zfs_arc_min = 0x10000000
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
..
..
zio_injection_enabled = 0x0
zvol_immediate_write_sz = 0x8000

Tuesday Dec 27, 2011

Oracle Application Testing Suite (OATS): Few Tips & Tricks

OATS is a suite of applications that can be used for performance and scalability testing as well as functional and regression testing. It is a thin-client application that runs within a web browser, so it is easy to use the tool from anywhere as long as the web server running on the host node is accessible. Hopefully the following tips and tricks will benefit some of the users of the Oracle Application Testing Suite.

A few technical details first - OATS is a 32-bit Java application that runs in a WebLogic Server (WLS) container, with an Oracle XE database as the backend store for test session data.



[Trick] Issue : OATS software fails to install on 64-bit Windows systems

Resolution:
Download and install the 64-bit .NET Framework manually before installing the OATS software. Look for the .NET Framework on Microsoft's downloads website.




[Trick] Issue : OATS software fails to install on systems with large number of [virtual] CPUs

Resolution:
On systems with many cores/vCPUs, the Oracle database in general requires a large amount of memory to be configured for the SGA - so one solution would be to allocate as much memory as required. However, Oracle XE limits memory utilization within the database to 1 GB. Besides, XE uses only one CPU even if multiple CPUs are available on the system. Hence one workaround is to limit the number of vCPUs that the system exposes during the installation of the OATS software. The steps are shown below.

  • Start button -> Run -> type "msconfig"
  • Click on Boot tab -> Advanced Options
  • Check "Number of processors" and set appropriate value (I believe we can go up to 16)
  • Reboot Windows
  • Uninstall failed OATS installation and try installing again
  • Undo the above made changes after the successful installation of OATS
  • Reboot Windows one final time

Thanks to my colleague Bao Doan for providing this workaround.




[Trick] Issue : During runtime, OATS drives the load and executes the test as expected but fails to collect runtime statistics

Resolution:
This is another limitation of the Oracle XE database. Up to 10g, XE limits the maximum amount of user data in the database to 4 GB; this limit was raised to 11 GB in Oracle 11g XE. OATS 9.x releases bundle Oracle 10g XE. To take advantage of the larger data limit, install Oracle 11g XE manually before installing the OATS software -- the OATS installer gives the option to use an existing installation of Oracle XE. Besides, it is not possible to have multiple Oracle XE installations on a single box anyway (that's another XE limitation).

For existing installations, one workaround is to remove old and unwanted sessions to make room for new sessions in the database. Listed below are the steps.

  • Connect to the Oracle Load Testing (OLT) tool
  • Click on "Manage" top-level menu (upper right corner) -> Sessions
  • Click on any unwanted session and press "Delete" button (I recommend deleting one session at a time)



[Trick] Issue : Under load, there are many network timeouts with a ton of sockets in TIME_WAIT state on OATS agent systems, including the OATS Controller node

Resolution:
Tune TCP/IP parameters on Windows as shown below.

  • Launch Windows registry
  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TcpIP\parameters
  • Configure the following two parameters. If not found, create those parameters by selecting Edit -> New -> DWORD Value from the menu bar. Select "Decimal" under Base. The same values can also be set from the command line, as shown after this list.
      TcpTimedWaitDelay : 30 [seconds]
      MaxUserPort : 65534
  • Reboot Windows
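
For those who prefer scripting the change, here is a sketch of the equivalent commands using the Windows reg utility from an elevated command prompt (a reboot is still required afterwards):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f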

Thanks to my colleagues Dino and Vishnu for sharing this workaround.




[Trick] Issue : OATS Controller does not show any graphs or analysis reports

Resolution:
Install Adobe Flash Plugin and try again.




[Trick] Issue : Under load, OATS Controller stops collecting runtime statistics at some random point

Resolution:
Check the Oracle database alert log for clues. If there is an error message such as "ORA-12516: TNS:listener could not find available handler with matching protocol stack", connect to the database, query the v$resource_limit view and compare the values reported under CURRENT_UTILIZATION and MAX_UTILIZATION for the resource "processes". If the current utilization is pretty close to the configured maximum value, raise the value of the processes parameter in the [S]PFILE.
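
A minimal sketch of that check from SQL*Plus (run it with a privileged account, e.g., as sysdba):

SQL> SELECT resource_name, current_utilization, max_utilization, limit_value
       FROM v$resource_limit
      WHERE resource_name IN ('processes', 'sessions');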




[Tip] Balancing the load among multiple OATS agent systems

One simple way is to create a VU Agent System Group based on the available agent systems. Steps listed below.

  • Connect to the Oracle Load Testing (OLT) tool
  • Click on "Manage" top-level menu (upper right corner) -> Systems
  • Click on "VU Agent System Group" in the left hand side
  • On the right hand side, click on "New" option
  • Select all the agent systems that you want to be part of the "VU Agent System Group"
  • Finally name the newly created system group and save

Note that it is not possible to attach weights to the agent systems - so, it is suggested to have agent systems with similar hardware configurations in the VU Agent System Group.



[Tip] Balancing the load among multiple web servers using OATS Controller

If there are multiple web server instances running in an enterprise application deployment, and the OATS software is being used to test the performance and scalability of the application, parameterizing the web server hostname and port number in the OATS test script will take care of the web server load balancing problem. Of course, there are many alternatives to this approach, such as using a hardware load balancer or a web server reverse proxy.



[Added on 01/19/2012]

[Tip] How to check the available space in the USERS tablespace

Run the following on OATS Controller node:

Start -> All Programs -> Oracle Database XX Express Edition -> Run SQL Command Line

SQL> connect / as sysdba

SQL> SELECT /* + RULE */  df.tablespace_name "Tablespace",
       df.bytes / (1024 * 1024) "Size (MB)",
       SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
       Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
       Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
  FROM dba_free_space fs,
       (SELECT tablespace_name,SUM(bytes) bytes
          FROM dba_data_files
         GROUP BY tablespace_name) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /* + RULE */ df.tablespace_name tspace,
       fs.bytes / (1024 * 1024),
       SUM(df.bytes_free) / (1024 * 1024),
       Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
       Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
  FROM dba_temp_files fs,
       (SELECT tablespace_name,bytes_free,bytes_used
          FROM v$temp_space_header
         GROUP BY tablespace_name,bytes_free,bytes_used) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
 ORDER BY 4 DESC;

Copy/paste the above SQL code into a text file with a .sql extension and execute it by calling the script from the SQL> command prompt. eg., assuming the above code was saved in a plain text file called chktblspcusg.sql under the C:\ drive, execute the SQL script as shown below:

SQL> @C:\chktblspcusg.sql




[Added on 06/27/2012]

[Trick] Issue : An attempt to open a test script in OpenScript fails with the error

'Failed to open script' has encountered a problem.
Failed to open <script_name>. See error log for details.

Clicking on "Details" button provides the following clue.

The project description file (.project) for '<script_name>' is missing

In addition the title bar shows "Relocating Eclipse Projects: The project description file (.project) for XXX is missing".

Resolution:
Navigate to C:\Documents and Settings\Administrator\osworkspace\.metadata\.plugins\org.eclipse.core.resources\.projects\

Look for the directory by name "<failing_script_name>" and remove it



[Added on 08/03/2012]

[Trick] Issue: Unexpected Agent exit. Code = 51 in the middle of an OLT load test

When running a load scenario in Oracle Load Testing (OLT) that uses a databank, the scenario runs fine for some time and then all of a sudden fails with the following error: Unexpected Agent exit. Code = 51.

Workaround:

The following settings may alleviate the issue.

  • Toggle/experiment with the settings for "Clear cache between iterations" and "Clear cache before playing back"
    • these settings can be found under the test script preferences -> Playback -> Web Functional -> Miscellaneous
  • Experiment with different values for the "Maximum users per process" setting
    • this setting is under OLT -> Configure all parameters -> Advanced
  • Increase the Java heap size (both min & max) in the file <OATS_HOME>\agentmanager\bin\AgentManagerService.conf
    • default values: min heap size: 16 MB; max heap size: 64 MB

Contributors: John Snyder, Richard Barry

[Added 02/25/13]

Another colleague Dave Suri has an alternate tip to resolve the Agent 51 issue.

Edit <OATS_HOME>\agentmanager\processDescriptors\JavaAgent.properties

Change the following lines:

#process.debug=y
#process.debug.suspend=y
#process.debug.port=8123
#process.debug.custom=

To:

process.debug=y
#process.debug.suspend=y
#process.debug.port=8123
process.debug.custom=-verbose:gc -XX:+HeapDumpOnOutOfMemoryError -Xms512M 
-Xmx1536M -jrockit -Xrs -XgcPrio:deterministic -XpauseTarget=50ms 
-XX:+UseCallProfiling -XX:+UseAdaptiveFatSpin -XX:+ExitOnOutOfMemoryError 
-XXnoSystemGC -XX:+UseFastTime


PS:
I'm no OATS expert - merely documented what we found along with solutions and workarounds. Feel free to post your question(s) in the comments section. Hopefully someone will respond.
