Saturday Sep 10, 2011

Oracle RDBMS : Generic Large Object (LOB) Performance Guidelines

This blog post is generic in nature and based on my recent experience with a content management system where SecureFile BLOBs are critical to storing and retrieving the checked-in content. It is strongly suggested to check the official documentation in addition to these brief guidelines. In general, Oracle Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2) is a good starting point when creating tables involving SecureFiles and LOBs.

Guidelines

  • Tablespace: create the LOB in a different tablespace isolated from the rest of the database
  • Block size: consider a larger database block size (default 8 KB) if the LOBs are expected to be large
  • Chunk size: consider a larger chunk size (default 8 KB) if larger LOBs are expected to be stored and retrieved
  • Inline or Out-of-line: choose "DISABLE STORAGE IN ROW" (out-of-line) if the average LOB size is expected to be > 4 KB. The default inlining is fine for smaller LOBs
  • CACHE or NOCACHE: consider bypassing the database buffer cache (NOCACHE) if a large number of LOBs are stored but not expected to be retrieved frequently
  • COMPRESS or NOCOMPRESS: choose the COMPRESS option if storage capacity is a concern. It saves disk space at the expense of some performance overhead. In a RAC database environment, compressing the LOBs is recommended to reduce the interconnect traffic
  • De-duplication: by default, duplicate LOBs are stored as separate copies in the database. The DEDUPLICATE option stores a single copy of identical LOB data and shares its data blocks across rows, reducing storage overhead and simplifying storage management
  • Partitioning: consider partitioning the parent table to maximize application performance. Hash partitioning is one of the options if there is no potential partition key in the table
  • Zero-Copy I/O protocol: turned on by default. Turning it off in a RAC database environment could be beneficial. Set the initialization parameter _use_zero_copy_io=FALSE to turn off the Zero-Copy I/O protocol
  • Shared I/O pool: the database uses the shared I/O pool to perform large I/O operations on SecureFile LOBs. The shared I/O pool uses shared memory segments. If this pool is not large enough, or if there is not enough memory available in it for a SecureFile LOB I/O operation, Oracle uses a portion of the PGA until sufficient memory becomes available in the shared I/O pool. Hence it is recommended to size the shared I/O pool appropriately by monitoring the database during peak activity. Relevant initialization parameters: _shared_io_pool_size and _shared_iop_max_size
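Most of the bullets above map directly to clauses of the LOB storage specification. The following DDL is a minimal sketch only, with hypothetical names (DOCS, CONTENT, LOB_TBS) and illustrative values; check the SecureFiles guide for the options licensed and supported in your release.

```sql
-- Hypothetical sketch combining the guidelines above; adjust names,
-- tablespaces and values to your environment before use.
CREATE TABLE docs (
  doc_id  NUMBER PRIMARY KEY,
  content BLOB
)
LOB (content) STORE AS SECUREFILE (
  TABLESPACE lob_tbs        -- isolate the LOB in its own tablespace
  DISABLE STORAGE IN ROW    -- out-of-line: average LOB expected > 4 KB
  CHUNK 32768               -- larger chunk size for larger LOBs
  NOCACHE                   -- bypass the database buffer cache
  COMPRESS                  -- save disk space at some CPU cost
  DEDUPLICATE               -- store identical LOBs only once
);
```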

Also see:
Oracle Database Documentation : LOB Performance Guidelines

Wednesday Apr 28, 2010

Oracle Database: Say No Thanks! to a New Index

.. unless you are working with a database that is largely read-only, or unless the new index fixes a critical performance issue with no side effects.

Two topics are covered in this blog entry, with plenty of simple examples:

  1. Why creating new indexes on a heavily modified table may not be a good idea, and
  2. How to identify unused indexes.
Read on.

Indexes are double-edged swords: they may improve the performance of the queries they target, but they can also accidentally degrade the performance of other queries. In any case, exercise caution when adding a new index to the database. After adding one, monitor the overall performance of the database, not just the targeted query.

If DML statements that modify data (INSERT, UPDATE, or DELETE) are executed a large number of times against a table, make sure that adding a new index to that table does not negatively affect the performance of those DML operations. This is usually not a problem if the SQL being executed only retrieves existing data without adding or modifying it. In all other cases, each new index induces some performance overhead. For example, if there are 10 indexes created on a table DUMMY, adding a new row of data to DUMMY may require the database management system to update all 10 indexes behind the scenes.

Here is an example demonstrating the performance overhead of a new index on a table.


SQL> CREATE TABLE VIDEO
  2  (BARCODE VARCHAR(10) NOT NULL,
  3  TITLE VARCHAR2(25) NOT NULL,
  4  FORMAT VARCHAR2(10),
  5  PRICE NUMBER,
  6  DATA_OF_RELEASE DATE)
  7  /

Table created.

SQL> insert into VIDEO values ('9301224321', 'AVATAR', 'BLU-RAY', 19.99, '22-APR-2010');

1 row created.

..

SQL> insert into VIDEO values ('3782460017', 'THE SIMPSONS - SEASON 20', 'BLU-RAY', 29.99, '04-JUL-2009');

1 row created.

SQL> select * from VIDEO;

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10
7619203043	BEN-HUR 			    VHS 		  9.79 12-MAR-63
7305832093	THE MATRIX			    DVD 		 12.29 03-DEC-99
4810218795	MEMENTO 			    DVD 		  8.49 02-FEB-02
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09

SQL> select * from USER_INDEXES where TABLE_NAME = 'VIDEO';

no rows selected

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select * from VIDEO where FORMAT = 'BLU-RAY';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09

SQL> alter session set events '10046 trace name context off';

Session altered.

SQL trace file has the following contents.


SQL ID: 0pu5s70nsdnzv
Plan Hash: 3846322456
SELECT *
FROM
 VIDEO WHERE FORMAT = :"SYS_B_0"


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0         16          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0         16          0           2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28  

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  TABLE ACCESS FULL VIDEO (cr=16 pr=0 pw=0 time=3 us cost=4 size=100 card=2)

Let's create an index and see what happens.


SQL> create index VIDEO_IDX1 on VIDEO (FORMAT);

Index created.

SQL>  alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select * from VIDEO where FORMAT = 'BLU-RAY';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09

SQL> alter session set events '10046 trace name context off';

Session altered.

The latest contents of the trace file are as follows. Notice the reduction in buffer gets from 16 to 4; that is, the new index reduced the logical I/O of this query by 75%.


SQL ID: 0pu5s70nsdnzv
Plan Hash: 2773508764
SELECT *
FROM
 VIDEO WHERE FORMAT = :"SYS_B_0"


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          4          0           2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28  (CS90)

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  TABLE ACCESS BY INDEX ROWID VIDEO (cr=4 pr=0 pw=0 time=12 us cost=2 size=100 card=2)
      2   INDEX RANGE SCAN VIDEO_IDX1 (cr=2 pr=0 pw=0 time=10 us cost=1 size=0 card=2)(object id 76899)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      2   TABLE ACCESS (BY INDEX ROWID) OF 'VIDEO' (TABLE)
      2    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'VIDEO_IDX1' (INDEX)

So far so good. Let's add a new row of data and examine the trace file one more time. From here on, keep an eye on the "current" column (logical I/Os performed by an INSERT, UPDATE, or DELETE) and notice how it changes with different actions: adding and removing indexes and rows of data.


SQL ID: dnb2d8cpdj56p
Plan Hash: 0
INSERT INTO VIDEO 
VALUES
 (:"SYS_B_0", :"SYS_B_1", :"SYS_B_2", :"SYS_B_3", :"SYS_B_4")


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1          7           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1          7           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28  (CS90)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=1 pr=0 pw=0 time=0 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      0   LOAD TABLE CONVENTIONAL OF 'VIDEO'

Now drop the index, delete and re-insert the last row, and collect the tracing data again.


SQL> drop index VIDEO_IDX1;

Index dropped.

SQL> delete from VIDEO where BARCODE ='4457332907';

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> insert into VIDEO values ('4457332907', 'KING OF THE HILL - ALL', 'DVD', 90.00, '01-JAN-2011');

1 row created.

SQL> alter session set events '10046 trace name context off';

Session altered.

The contents of the latest trace file are shown below.


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          2          5          1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0          2          5           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 28  (CS90)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=1 pr=0 pw=0 time=0 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      0   LOAD TABLE CONVENTIONAL OF 'VIDEO'

This time create two indexes and see what happens.


SQL> CREATE INDEX VIDEO_IDX1 ON VIDEO (FORMAT);

Index created.

SQL> CREATE INDEX VIDEO_IDX2 ON VIDEO (TITLE);

Index created.

Trace file contents:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1          9           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1          9           1

Notice the two additional logical IOs (look under "current" column). Those additional logical input/output operations are the result of the new indexes. The number goes up as we add more indexes and data to the table VIDEO.


SQL> delete from VIDEO where BARCODE ='4457332907';

1 row deleted.

SQL> commit;

Commit complete.

SQL> create index VIDEO_IDX3 on VIDEO (PRICE, DATA_OF_RELEASE);

Index created.

SQL>  alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> insert into VIDEO values ('4457332907', 'KING OF THE HILL - ALL', 'DVD', 90.00, '01-JAN-2011');

1 row created.

SQL> alter session set events '10046 trace name context off';

Session altered.


SQL trace:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1         11           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1         11           1

You can try other operations such as UPDATE, DELETE on your own.

Since there are only a few rows of data in the table VIDEO, it is hard to notice the real performance impact in these examples. If you really want to see the negative performance impact of a large number of indexes on a heavily updated table, try adding thousands or millions of rows of data and a few more indexes.
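To generate such a data set quickly, a row-generator INSERT can be used. This is only a sketch, assuming the VIDEO table from the earlier examples exists; the generated values are synthetic.

```sql
-- Hypothetical load generator: one million synthetic rows for VIDEO,
-- so the per-row index maintenance cost becomes visible in the traces.
INSERT INTO video
SELECT TO_CHAR(level, 'FM0999999999'),              -- unique 10-digit barcode
       'TITLE ' || level,                           -- synthetic title
       CASE MOD(level, 3) WHEN 0 THEN 'DVD'
                          WHEN 1 THEN 'BLU-RAY'
                          ELSE 'VHS' END,           -- rotate formats
       ROUND(DBMS_RANDOM.VALUE(5, 50), 2),          -- random price
       SYSDATE - MOD(level, 3650)                   -- spread release dates
FROM dual
CONNECT BY level <= 1000000;
COMMIT;
```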

Moral of the story: indexes are not always cheap; they carry maintenance overhead. Be aware of that overhead and ensure that it does not offset the performance gains from the indexes created on a particular table.


Monitoring Index Usage

Now we know the possible disadvantages of having too many indexes on a heavily updated table. One way to reduce the index maintenance overhead is to instrument the indexes so that we can monitor their usage from time to time and remove the unused ones. To start monitoring index usage, alter the index with the keywords MONITORING USAGE.


SQL> select index_name from user_indexes where table_name = 'VIDEO';

INDEX_NAME
--------------------------------------------------------------------------------
VIDEO_IDX3
VIDEO_IDX1
VIDEO_IDX2

SQL> alter index VIDEO_IDX1 MONITORING USAGE;

Index altered.

SQL> alter index VIDEO_IDX2 MONITORING USAGE;

Index altered.

SQL> alter index VIDEO_IDX3 MONITORING USAGE;

Index altered.

Once the indexes are instrumented, query the V$OBJECT_USAGE view occasionally to see if the instrumented indexes are being used in executing SQL queries.


SQL> select * from VIDEO where BARCODE LIKE '%22%';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10

SQL> select * from VIDEO where FORMAT = 'VHS';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
7619203043	BEN-HUR 			    VHS 		  9.79 12-MAR-63

SQL> select * from VIDEO where PRICE < 20;

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
4810218795	MEMENTO 			    DVD 		  8.49 02-FEB-02
7619203043	BEN-HUR 			    VHS 		  9.79 12-MAR-63
7305832093	THE MATRIX			    DVD 		 12.29 03-DEC-99
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10

SQL> select * from VIDEO where FORMAT = 'BLU-RAY' AND DATA_OF_RELEASE < '01-JAN-2010';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09


SQL> column INDEX_NAME format A25
SQL> column START_MONITORING format A30

SQL> select INDEX_NAME, USED, START_MONITORING 
  2  from V$OBJECT_USAGE 
  3  where INDEX_NAME LIKE 'VIDEO_IDX%'
  4  /

INDEX_NAME		  USED	    START_MONITORING
------------------------- --------- ------------------------------
VIDEO_IDX1		  YES	    04/27/2010 01:10:20
VIDEO_IDX2		  NO	    04/27/2010 01:10:25
VIDEO_IDX3		  YES	    04/27/2010 01:10:31

In the above example, the index VIDEO_IDX2 was not used during the monitoring period. If we are convinced that future queries will be similar to those executed during the monitoring period, we can go ahead and remove VIDEO_IDX2 to reduce the overhead of updates to the table VIDEO.
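Dropping the unused index is then a one-line operation; re-run the monitoring cycle afterwards to confirm that no query regressed.

```sql
SQL> drop index VIDEO_IDX2;

Index dropped.
```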

To stop monitoring the index usage, alter the index with the keywords NOMONITORING USAGE.


SQL> alter index VIDEO_IDX1 NOMONITORING USAGE;

Index altered.

Monday Aug 17, 2009

Oracle Business Intelligence on Sun : Few Best Practices

(Updated on 10/16/09 with additional content and restructured the blog entry for clarity and easy navigation)

The following suggested best practices are applicable to all Oracle BI EE deployments on Sun hardware (CMT and M-class) running Solaris 10 or later. These recommendations are based on our observations from the 50,000 user benchmark on the Sun SPARC Enterprise T5440. This is not a complete list, and your mileage may vary.

Hardware : Firmware

Ensure that the system's firmware is up-to-date.

Solaris Recommendations

  • Upgrade to the latest update release of Solaris 10.

  • Solaris runs in 64-bit mode by default on SPARC platform. Consider running 64-bit BI EE on Solaris.

      64-bit BI EE is immune to the 4 GB virtual memory limitation of the 32-bit BI EE platform, and hence can potentially support even more users and larger caches as long as the hardware resources are available.

  • Enable 256M large pages on all nodes. By default, the latest update of Solaris 10 will use a maximum of 4M pages even when 256M pages are a good fit.

      256M pages can be enabled with the following /etc/system tunables.
      
      * 256M pages for the process heap
      set max_uheap_lpsize=0x10000000

      * 256M pages for ISM
      set mmu_ism_pagesize=0x10000000
      
      

  • Increase the file descriptor limits by adding the following lines to /etc/system on all BI nodes.
      
      * file descriptor limits
      set rlim_fd_cur=65536
      set rlim_fd_max=65536
      
      
  • On larger systems with more CPUs or CPU cores, try not to deploy Oracle BI EE in the global zone.

      In our benchmark testing, we have observed unpredictable and abnormal behavior of the BI server process (nqsserver) in the global zone under moderate loads. This behavior is clearly noticeable when there are more than 64 vcpus allocated to the global zone.

  • If the BI presentation catalog is stored on a local file system, create a ZFS file system to hold the catalog.

      If there are more than 25,000 authorized users in a BI deployment, the default UFS file system may run into a "Too many links" error when the Presentation Server tries to create more than 32,767 sub-directories (refer to LINK_MAX on Solaris).

  • Store the Presentation Catalog on a disk with faster I/O, such as a Solid State Drive (SSD). For uniform reads and writes across the disk drives (and, of course, for better performance), we recommend creating the ZFS file system on top of a zpool with multiple SSDs.

    Here is an example that shows the ZFS file system creation steps for the BI Presentation Catalog.

    
    # zpool create -f BIshare c1t2d0s6 c1t3d0s0 c1t4d0s6 c1t5d0s6
    
    # zpool list
    NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    BIshare   118G    97K   118G     0%  ONLINE  -
    
    # zfs create BIshare/WebCat
    
    # fstyp /dev/dsk/c1t2d0s6
    zfs
    
    # zpool status -v
      pool: BIshare
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            BIshare     ONLINE       0     0     0
              c1t2d0s6  ONLINE       0     0     0
              c1t3d0s0  ONLINE       0     0     0
              c1t4d0s6  ONLINE       0     0     0
              c1t5d0s6  ONLINE       0     0     0
    
    errors: No known data errors
    
    

    Observe the I/O activity on ZFS file system by running zpool iostat -v command.

Solaris : ZFS Recommendations

  • If the file system is mainly used for storing the Presentation Catalog, consider setting the ZFS record size to 8K. This is because of the relatively small size (8K or less) reads/writes from/into the BI Catalog.

    eg.,
    
            # zfs set recordsize=8K BIshare/WebCat
    
    

    In the case of a database, you may have to set the ZFS record size to the database block size.

  • Even though disabling ZFS Intent Log (ZIL) may improve the performance of synchronous write operations, it is not a recommended practice to disable ZIL. Doing so may compromise the data integrity.

      Disabling the ZIL on an NFS Server can lead to client side corruption.

  • When running CPU-intensive workloads, consider disabling ZFS metadata compression to provide more CPU cycles to the application.

      Starting with Solaris 10 11/06, metadata compression can be disabled and enabled dynamically as shown below.

      To disable the metadata compression:

      
              # echo zfs_mdcomp_disable/W0t1 | mdb -kw
      
      

      To enable the metadata compression:

      
              # echo zfs_mdcomp_disable/W0t0 | mdb -kw
      
      

      To permanently disable the metadata compression, set the following /etc/system tunable.

      
              set zfs:zfs_mdcomp_disable=1
      
      

Solaris : NFS Recommendations

One of the requirements of OBIEE is that the BI Presentation Catalog must be shared across the BI nodes in the BI Cluster (there will be only one copy of the presentation catalog). Unless the catalog is replicated on each node, there is no choice but to share it across the nodes. One way to do this is to create an NFS share containing the top-level directory of the catalog, and then mount it over NFS on the BI nodes.

  • Version 4 is the default NFS version on Solaris 10. However, as of this writing NFS v4 appears to be less mature than v3, so we recommend experimenting with both versions to see which one best fits the needs of the BI deployment.

    To enable NFS v3 on both server and the client, edit /etc/default/nfs and make the changes as shown below.

      NFS Server
      NFS_SERVER_VERSMIN=3
      NFS_SERVER_VERSMAX=3

      NFS Client
      NFS_CLIENT_VERSMIN=3
      NFS_CLIENT_VERSMAX=3
  • Experiment with the following NFS tunables.

      NFS Server
      
      NFSD_SERVERS=<desired_number> <-- on CMT systems with large number of hardware threads you can go as high as 512
      NFS_SERVER_DELEGATION=[ON|OFF] <-- ON is the default. Experiment with OFF
      NFSMAPID_DOMAIN=<network_domain_where_BI_was_deployed>
      
      
      NFS Client
      NFSMAPID_DOMAIN=<network_domain_where_BI_was_deployed>
  • Monitor the DNLC hit rate and tune the directory name look-up cache (DNLC).

      To monitor the DNLC hit rate, run "vmstat -s | grep cache" command. It is ideal to see a hit rate of 95% or above.

      Add the following tunable parameter to /etc/system on NFS server with a desired value for the DNLC cache.

      
              set ncsize=<desired_number>
      
      
  • Mounting NFS Share

    Mount the NFS share that contains the Presentation Services Catalog on all the NFS clients (BI nodes in this context) using the following mount options:

    
            rw,forcedirectio,nocto
    
    

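For example, assuming a (hypothetical) NFS server named bishare1 exporting the catalog as /export/BIshare/WebCat and a local mount point /bicatalog, the corresponding /etc/vfstab entry on each BI node could look like this:

```
bishare1:/export/BIshare/WebCat  -  /bicatalog  nfs  -  yes  rw,forcedirectio,nocto,vers=3
```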
Oracle BI EE Cluster Deployment Recommendations

  • Ensure that all the BI components in the cluster are configured in a many-to-many fashion

  • For proper load balancing, configure all BI nodes to be identical in the BI Cluster

  • When planning to add an identically configured new node to the BI Cluster, simply clone an existing well-configured BI node running in a non-global zone.

      Cloning a BI node running in a dedicated zone results in an exact copy of the BI node being cloned. This approach is simple, less error prone and eliminates the need to configure the newly added node from scratch.

Oracle BI Presentation Services Configuration Recommendations

  • Increase the file descriptor limit. Edit SAROOTDIR/setup/systunesrv.sh to increase the value from 1024 to a value of your choice. In addition, you must increase the shell limit using the ulimit -n command

    eg.,
    
    	ulimit -n 2048
    
    

  • Configure 256M large pages for the JVM heap of the Chart server and the OC4J web server (this recommendation is equally applicable to other web servers such as WebLogic or Sun Java System Web Server). Also use the parallel garbage collector, and restrict the number of parallel GC threads to one eighth of the number of virtual CPUs.

    eg.,

    
    	-XX:LargePageSizeInBytes=256M -XX:+UseParallelGC -XX:ParallelGCThreads=8
    
    

  • The Oracle BI Presentation Server keeps the access information of all users in the Web Catalog. When there are a large number of unique BI users, looking up a user can take a significant amount of time if all the users reside in a single directory. To avoid this, hash the user directories by adding the following entry to SADATADIR/web/config/instanceconfig.xml

    eg.,

    
    	<Catalog>
    		<HashUserHomeDirectories>2</HashUserHomeDirectories>
    	</Catalog>
    
    

    HashUserHomeDirectories specifies the number of characters used to hash user names into sub-directories. When this element is turned on, for example, the default home directory for user Steve would become /users/st/steve.
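As an illustration only (the function below is not OBIEE code; its name and defaults are made up), the resulting directory layout can be sketched as:

```python
def hashed_home(user: str, chars: int = 2, root: str = "/users") -> str:
    """Bucket a user's home directory under a sub-directory named after
    the first `chars` characters of the user name, mimicking the layout
    produced by HashUserHomeDirectories."""
    return f"{root}/{user[:chars].lower()}/{user}"

# With HashUserHomeDirectories set to 2, user "steve" lands under /users/st/
print(hashed_home("steve"))   # /users/st/steve
```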

  • BI Server and BI Presentation Server processes create many temporary files while rendering reports and dashboards for a user. This can result in significant I/O activity on the system. The I/O waits can be minimized by pointing the temporary directories to a memory-resident file system such as /tmp on Solaris. To achieve this, add the following line to the instanceconfig.xml configuration file.

    eg.,

    
    	<TempDir>/tmp/OracleBISAW</TempDir>
    
    

    Similarly the Temporary directory (SATEMPDIR) can be pointed to a memory resident file system such as /tmp to minimize the I/O waits.

  • Consider tuning the value of CacheMaxEntries in instanceconfig.xml. A value of 20,000 was used in the 50,000 user OBIEE benchmark on T5440 servers. Be aware that the Presentation Services process (sawserver64) consumes more virtual memory when this parameter is set to a high value.

    eg.,
    
    	<CacheMaxEntries>20000</CacheMaxEntries>
    
    
  • If the Presentation Services log contains errors such as "The queue for the thread pool AsyncLogon is at it's maximum capacity of 50 jobs.", consider increasing the size of the Presentation Services asynchronous job queue; the default is 50.

    The following example increases the job queue size to 200.

    
    	<ThreadPoolDefaults>
    		<AsyncLogon>
    			<MaxQueue>200</MaxQueue>
    		</AsyncLogon>
    	</ThreadPoolDefaults>
    
    
  • Increase the query cache expiry time, especially when the BI deployment is expected to handle a large number of concurrent users. The default is 60 minutes; however, under very high loads a cache entry may be evicted in less than an hour if many queries are being run. Hence it may be necessary to tune the CacheMaxExpireMinutes parameter in the Presentation Services' instanceconfig.xml.

    The following example increases the query cache expiry time to 3 hours.

    
    	<CacheMaxExpireMinutes>180</CacheMaxExpireMinutes>
    
    
  • Consider increasing the Presentation Services' cache timeout values to keep the cached data intact for longer periods.

    The following example increases the cache timeout values to 5 hours in instanceconfig.xml configuration file.

    
    	<AccountIndexRefreshSecs>18000</AccountIndexRefreshSecs>
    	<AccountCacheTimeoutSecs>18000</AccountCacheTimeoutSecs>
    	<CacheTimeoutSecs>18000</CacheTimeoutSecs>
    	<CacheCleanupSecs>18000</CacheCleanupSecs>
    	<PrivilegeCacheTimeoutSecs>18000</PrivilegeCacheTimeoutSecs>
    
    

Oracle BI Server Configuration Recommendations

  • Enable caching at the BI server and control/tune the cache expiry time for each of the tables based on your organization's needs.

  • Unless the repository needs to be edited online frequently, consider setting the repository to "read only" mode. It may ease lock contention to some extent.

  • Increase the session limit and the number of requests per session, especially when the BI deployment is expected to handle a large number of concurrent users. Also increase the number of BI server threads.

    The following configuration was used in 50,000 user OBIEE benchmark on T5440 servers.

    
    (Source configuration file: NQSConfig.INI)
    
    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "/export/oracle/OracleBIData/cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 0;
    MAX_CACHE_ENTRY_SIZE = 10 MB;
    MAX_CACHE_ENTRIES = 5000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    
    // Cluster-aware cache
    GLOBAL_CACHE_STORAGE_PATH = "/export/oracle/OracleBIsharedRepository/GlobalCacheDirectory" 2048 MB;
    MAX_GLOBAL_CACHE_ENTRIES = 10000;
    CACHE_POLL_SECONDS = 300;
    CLUSTER_AWARE_CACHE_LOGGING = NO;
    
    [ SERVER ]
    READ_ONLY_MODE = YES;
    MAX_SESSION_LIMIT = 20000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 1500 ;
    SERVER_THREAD_RANGE = 512-2048;
    SERVER_THREAD_STACK_SIZE = 0;
    DB_GATEWAY_THREAD_RANGE = 512-512;
    
    #SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS";
    CLUSTER_PARTICIPANT = YES;
    
    

Related Blog Posts

Tuesday Jan 27, 2009

PHP: Memory savings with mysqlnd

mysqlnd may save memory. In the best cases it may consume only 50% of the memory that libmysql does, especially when the client application does not modify the data in the result set after executing a query. Keep in mind that the client must use ext/mysqli and treat the data returned by the query as read-only in order to fully realize mysqlnd's memory gains. If the client application modifies any of the data, mysqlnd behaves just like libmysql.

Let's have a quick look at the memory consumption in both the cases (mysqlnd and libmysql) with an example before delving into the internals. For easy comparison, the sample PHP script does not modify any of the arrays returned from the fetch method. The following example uses DTrace on Sun Solaris to monitor the calls to malloc() and prints the requested bytes of memory on the standard output.

Source code for the script: PHPmysqliClient.php. MySQL table structure and the sample data are shown in the other blog post: Demonstrating the Features of MySQL Native Driver for PHP, mysqlnd.


bash# cat monitormalloc.d

#!/usr/sbin/dtrace -s

pid$1:libc:malloc:entry
{
 	printf("\t\tSize : %d Bytes", arg0);
	ustack();
	@malloc[probefunc] = quantize(arg0);
}

CASE 1: ext/mysqli with libmysql

In one terminal window:

bash# /opt/coolstack/php5/bin/php PHPmysqliClient.php
In another terminal window, run monitormalloc.d `pgrep php` and press any key to continue ..

Retrieved 3 row(s).

CityName
--------
Hyderabad, India
San Francisco, USA
Sydney, Australia

In another terminal window:

bash# ./monitormalloc.d `pgrep php`
dtrace: script './monitormalloc.d' matched 1 probe
CPU	ID			FUNCTION:NAME
  0  80920			malloc:entry		Size : 964 Bytes
              libc.so.1`malloc
              libmysqlclient.so.16.0.0`0xce2173c1
              libmysqlclient.so.16.0.0`0xce23bb47
              mysqli.so`0xce11d292

  0  80920			malloc:entry		Size : 20 Bytes
              libc.so.1`malloc
              libmysqlclient.so.16.0.0`0xce2173c1
              libmysqlclient.so.16.0.0`0xce23da60
              mysqli.so`0xce11dc72

  0  80920			malloc:entry		Size : 20 Bytes
              libc.so.1`malloc
              libmysqlclient.so.16.0.0`0xce2173c1
              libmysqlclient.so.16.0.0`0xce21d991
              libmysqlclient.so.16.0.0`0xce21d9ce
              libmysqlclient.so.16.0.0`0xce23da72
              mysqli.so`0xce11dc72

  0  80920			malloc:entry		Size : 17 Bytes
		... elided stack traces for brevity ...
  0  80920			malloc:entry		Size : 152 Bytes
  0  80920			malloc:entry		Size : 16384 Bytes
  0  80920			malloc:entry		Size : 8199 Bytes
  0  80920			malloc:entry		Size : 7 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 18261 Bytes
  0  80920			malloc:entry		Size : 58 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 4088 Bytes
  0  80920			malloc:entry		Size : 120 Bytes
  0  80920			malloc:entry		Size : 5 Bytes
  0  80920			malloc:entry		Size : 6 Bytes
  0  80920			malloc:entry		Size : 5 Bytes
  0  80920			malloc:entry		Size : 56 Bytes
  0  80920			malloc:entry		Size : 8164 Bytes
  0  80920			malloc:entry		Size : 8164 Bytes
  0  80920			malloc:entry		Size : 92 Bytes
  0  80920			malloc:entry		Size : 56 Bytes
  0  80920			malloc:entry		Size : 8164 Bytes
              libc.so.1`malloc
              libmysqlclient.so.16.0.0`0xce2173c1
              libmysqlclient.so.16.0.0`0xce21a27b
              libmysqlclient.so.16.0.0`0xce23b8a4
              libmysqlclient.so.16.0.0`0xce23d4fa
              mysqli.so`0xce11fe56

  0  80920			malloc:entry		Size : 262144 Bytes
              libc.so.1`malloc
              php`0x856fb98

^C

  malloc		
           value  ------------- Distribution ------------- count
               2 |                                         0
               4 |@@@@@                                    4
               8 |                                         0
              16 |@@@@                                     3
              32 |@@@@                                     3
              64 |@@@                                      2
             128 |@                                        1
             256 |                                         0
             512 |@                                        1
            1024 |                                         0
            2048 |@@@@@@@@@@@@@@                           11
            4096 |@@@@                                     3
            8192 |@                                        1
           16384 |@@@                                      2
           32768 |                                         0
           65536 |                                         0
          131072 |                                         0
          262144 |@                                        1
          524288 |                                         0
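
To read this histogram, note that DTrace's quantize() aggregation groups values into power-of-two buckets: a value v is counted in the row labeled 2^n when 2^n <= v < 2^(n+1). A minimal Python sketch of the bucketing (illustrative only, not DTrace internals):

```python
from collections import Counter

def quantize_bucket(v):
    """Lower bound of the power-of-two bucket [2**n, 2**(n+1)) holding v."""
    return 1 << (v.bit_length() - 1) if v > 0 else 0

# A few of the malloc() sizes from the trace above
sizes = [964, 4088, 4088, 8199, 8164]
histogram = Counter(quantize_bucket(s) for s in sizes)
# 964 lands in the 512 row, 4088 in 2048, 8164 in 4096, 8199 in 8192
```

This is why the eleven 4088-byte allocations in the libmysql trace all show up in the 2048 row of the distribution.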

CASE 2: ext/mysqli with mysqlnd

In one terminal window:

bash# /export/home/php53/bin/php PHPmysqliClient.php
In another terminal window, run monitormalloc.d `pgrep php` and press any key to continue ..

Retrieved 3 row(s).

CityName
--------
Hyderabad, India
San Francisco, USA
Sydney, Australia

In another terminal window:

bash# ./monitormalloc.d `pgrep php`
dtrace: script './monitormalloc.d' matched 1 probe
CPU	ID			FUNCTION:NAME
  0  80920			malloc:entry		Size : 262144 Bytes
              libc.so.1`malloc
              php`0x82f702b
              php`0x82f80ab
              php`0x82f841f
              php`0x82f98c4
              php`0x82c7668
              php`0x83c30ae
              php`0x80c059c

^C

  malloc		
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
          524288 |                                         0

In the case of ext/mysqli with libmysql, there are more than 25 calls to malloc() accounting for a total of around 367 KB, whereas in the case of ext/mysqli with mysqlnd there is only one call to malloc(), for 256 KB. In other words, mysqlnd consumes about 30% less memory than libmysql for similar database operations (strictly speaking, it is incorrect to treat every allocated byte as memory consumed unless there is a corresponding memory mapping; for the sake of this discussion, let's assume that all the allocated bytes are eventually consumed).

The memory savings in the above example result from mysqlnd's ability to hold the results in memory only once. On the other hand, since libmysql is not part of PHP, some of the results fetched by libmysql are copied into memory twice. When libmysql fetches data from the MySQL server, it puts the data into its own buffers. The data is then copied from the libmysql buffers into the corresponding ext/mysqli data structures (often referred to as zvals) before it is made available to PHP clients. So with ext/mysqli and libmysql, there may be two copies of the data in main memory: one inside the libmysql buffers and the other inside the zvals. With mysqlnd, there may be only one copy. mysqlnd also uses buffers, but links the zval structures directly to the read buffers wherever possible. Therefore, in the majority of cases mysqlnd consumes less memory than libmysql; in the worst case, it consumes as much memory as libmysql does. The total memory savings depend on the size of the buffered result set.

Shown below are the simplified behind-the-scenes actions of ext/mysqli with libmysql and ext/mysqli with mysqlnd when mysqli sends a query:

ext/mysqli with libmysql

  1. mysqli sends a query
  2. result set gets fetched into libmysql buffers
  3. mysqli allocates zvals, then new buffers
  4. mysqli copies data from libmysql to its own buffers
  5. mysqli calls mysql_free_result() and deallocates libmysql buffers

ext/mysqli with mysqlnd

  1. mysqli sends a query
  2. result set gets fetched row by row -- every row is a different buffer
  3. mysqlnd creates a result set of zvals pointing to the buffers
  4. mysqli calls mysqlnd_free_result() and deallocates the row buffers

In short, ext/mysqli with libmysql does:

  • one extra allocation for mysqli buffers
  • one extra data copy
  • one extra zval allocation (which can be saved with the zval cache)

when compared to ext/mysqli with mysqlnd.
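
The difference can be pictured with a language-neutral analogy (plain Python below, not PHP internals): libmysql-style fetching copies each row out of the driver's read buffers into client-side structures, while mysqlnd-style fetching lets the result set reference the same row buffers:

```python
# Rows as they sit in the driver's read buffers
row_buffers = [b"Hyderabad, India", b"San Francisco, USA", b"Sydney, Australia"]

# libmysql-style: the client result set holds a second copy of each row
libmysql_result = [bytearray(r) for r in row_buffers]   # extra allocation + copy

# mysqlnd-style: the result set just references the existing buffers
mysqlnd_result = list(row_buffers)                      # no per-row copy

# Same logical contents either way...
assert [bytes(r) for r in libmysql_result] == row_buffers
# ...but only the mysqlnd-style result shares memory with the read buffers
assert all(a is b for a, b in zip(mysqlnd_result, row_buffers))
```

The analogy also suggests why modifying the fetched data forfeits the savings: a write to shared data forces a private copy, at which point both approaches end up holding two copies.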

Related Blog Posts:

  1. MySQL Native Driver for PHP, mysqlnd
  2. Demonstrating the Features of MySQL Native Driver for PHP, mysqlnd

Acknowledgments
Andrey Hristov & Ulf Wendel, Sun-MySQL AB

Saturday Dec 20, 2008

Siebel on Sun Solaris: More Performance with Fewer mprotect() Calls

By default, each transaction in the Siebel CRM application makes a large number of serialized mprotect() calls, which in turn may degrade the performance of Siebel. When the load on the Siebel application servers is very high, the mprotect() calls are serialized by the operating system kernel, resulting in a high number of context switches and low CPU utilization.
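
For reference, mprotect(2) is the system call that changes the access protections on a region of the process address space; each call must take address-space locks in the kernel, which is what serializes heavily loaded servers. A minimal Python/ctypes sketch of a single call (illustrative only; assumes a POSIX libc where PROT_READ is 0x1):

```python
import ctypes
import mmap

libc = ctypes.CDLL(None, use_errno=True)

# mmap(2) returns page-aligned memory, as mprotect(2) requires
page = mmap.mmap(-1, mmap.PAGESIZE)
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))

PROT_READ = 0x1  # value from <sys/mman.h>

# Mark the page read-only; mprotect() returns 0 on success
ret = libc.mprotect(ctypes.c_void_p(addr), mmap.PAGESIZE, PROT_READ)
assert ret == 0
```

Every such call is cheap in isolation; the problem described above arises only when many threads issue them concurrently.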

If a Siebel deployment exhibits the above pathological behavior, the performance and scalability of the application can be improved by limiting the number of mprotect() calls made by the application server processes at run time. To achieve this, set the value of Siebel CRM's AOM tunable parameter MemProtection to FALSE. Starting with the release of Siebel 7.7, MemProtection is a hidden parameter with a default value of TRUE. To set its value to FALSE, run the following command from the CLI version of Siebel Server Manager, srvrmgr.


change param MemProtection=False for comp <component_alias_name> server <siebel_server_name>

where:

component_alias_name is the alias of the AOM component to be configured, e.g., SCCObjMgr_enu is the alias for the Call Center Object Manager, and

siebel_server_name is the name of the Siebel Server on which the component is being configured.

Note that this parameter is not dynamic; the Siebel application server(s) must be restarted for the change to take effect.

Run truss -c -p <pid_of_any_busy_siebmtshmw_process> before and after the change to see how the mprotect() system call count varies.

For more information about this tunable on Solaris platform, check Siebel Performance Tuning Guide Version 7.7 or later in Siebel Bookshelf.

See Also:
Siebel on Sun CMT hardware : Best Practices

(Originally posted on blogger at:
Siebel on Sun Solaris: More Performance with Less mprotect() Calls)
