Oracle Business Intelligence on Sun : A Few Best Practices
By Giri Mandalika on Aug 17, 2009
The following suggested best practices are applicable to all Oracle BI EE deployments on Sun hardware (CMT and M-class) running Solaris 10 or later. These recommendations are based on our observations from the 50,000 user benchmark on the Sun SPARC Enterprise T5440. This is not a complete list, and your mileage may vary.
Hardware : Firmware
Ensure that the system's firmware is up-to-date.
- Check the Sun System Firmware Release Hub for the latest firmware.
- Upgrade to the latest update release of Solaris 10.
- Solaris runs in 64-bit mode by default on the SPARC platform. Consider running 64-bit BI EE on Solaris.
The 64-bit BI EE platform is not subject to the 4 GB virtual memory limitation of the 32-bit BI EE platform -- hence it can potentially support even more users and larger caches, as long as the hardware resources are available.
- Enable 256M large pages on all nodes. By default, the latest update of Solaris 10 will use a maximum of 4M pages even when 256M pages are a good fit.
- 256M pages can be enabled with the following /etc/system settings:

* 256M pages for the process heap
set max_uheap_lpsize=0x10000000

* 256M pages for ISM
set mmu_ism_pagesize=0x10000000

* file descriptor limits
set rlim_fd_cur=65536
set rlim_fd_max=65536
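After rebooting with these settings, it may be worth verifying that large pages are actually in use. Solaris provides pagesize(1) to list the page sizes the platform supports and pmap(1) to show the page size backing each mapping; nqsserver is used below purely as an example process name:

```
# pagesize -a
# pmap -xs `pgrep nqsserver`
```

In the pmap -xs output, the Pgsz column shows which page size backs each segment of the process's address space; heap segments should report 256M once the settings are in effect.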
In our benchmark testing, we observed unpredictable and abnormal behavior of the BI server process (nqsserver) in the global zone under moderate loads. This behavior is clearly noticeable when more than 64 vcpus are allocated to the global zone; consider running the BI components in non-global zones with fewer vcpus instead.
If the BI presentation catalog is stored on a local file system, create a ZFS file system to hold the catalog.
- If there are more than 25,000 authorized users in a BI deployment, the default UFS file system may run into a "Too many links" error when the Presentation Server tries to create more than 32,767 sub-directories (refer to LINK_MAX on Solaris).
Store the Presentation Catalog on a disk with faster I/O, such as a Solid State Drive (SSD). For uniform reads and writes across different disk drives (and, of course, for better performance), we recommend creating a ZFS file system on top of a zpool with multiple SSDs.
Here is an example that shows the ZFS file system creation steps for the BI Presentation Catalog.
# zpool create -f BIshare c1t2d0s6 c1t3d0s0 c1t4d0s6 c1t5d0s6

# zpool list
NAME      SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
BIshare   118G   97K    118G    0%    ONLINE   -

# zfs create BIshare/WebCat

# fstyp /dev/dsk/c1t2d0s6
zfs

# zpool status -v
  pool: BIshare
 state: ONLINE
 scrub: none requested
config:
        NAME          STATE    READ  WRITE  CKSUM
        BIshare       ONLINE      0      0      0
          c1t2d0s6    ONLINE      0      0      0
          c1t3d0s0    ONLINE      0      0      0
          c1t4d0s6    ONLINE      0      0      0
          c1t5d0s6    ONLINE      0      0      0

errors: No known data errors
Observe the I/O activity on the ZFS file system by running the zpool iostat -v command.
Solaris : ZFS Recommendations
If the file system is mainly used for storing the Presentation Catalog, consider setting the ZFS record size to 8K. This is because of the relatively small size (8K or less) of the reads/writes from/into the BI Catalog. e.g.,
# zfs set recordsize=8K BIshare/WebCat
In the case of a database, set the ZFS record size to match the database block size.
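As a sketch, for a database with an 8K block size (the pool and file system names below are hypothetical):

```
# zfs set recordsize=8K dbpool/oradata
```

Note that the record size only affects files created after the property is set, so set it before loading the data files.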
Even though disabling ZFS Intent Log (ZIL) may improve the performance of synchronous write operations, it is not a recommended practice to disable ZIL. Doing so may compromise the data integrity.
Disabling the ZIL on an NFS Server can lead to client side corruption.
When running CPU intensive workloads, consider disabling ZFS metadata compression to provide more CPU cycles to the application.
Starting with Solaris 10 11/06, metadata compression can be disabled and enabled dynamically as shown below.
To disable the metadata compression:
# echo zfs_mdcomp_disable/W0t1 | mdb -kw
To enable the metadata compression:
# echo zfs_mdcomp_disable/W0t0 | mdb -kw
To permanently disable the metadata compression, set the corresponding tunable in /etc/system:

set zfs_mdcomp_disable = 1
Solaris : NFS Recommendations
One of the requirements of OBIEE is that the BI Presentation Catalog must be shared across the different BI nodes in the BI Cluster (there is only one copy of the Presentation Catalog). Unless the catalog has been replicated on the different nodes, there is no choice but to share it across the nodes. One way to do this is to create an NFS share from the top-level directory of the catalog, and then mount it over NFS on the BI nodes.
Version 4 is the default NFS version on Solaris 10. However, it appears that, as of this writing, NFS v4 is not as mature as v3. We therefore recommend experimenting with both versions to see which one best fits the needs of the BI deployment.
To enable NFS v3 on both the server and the client, edit /etc/default/nfs on each system and cap the protocol version at 3.

- NFS Server

NFS_SERVER_VERSMAX=3

- NFS Client

NFS_CLIENT_VERSMAX=3
Experiment with the following NFS tunables.
- NFS Server
NFSD_SERVERS=<desired_number>            # on CMT systems with a large number of hardware threads, you can go as high as 512
NFS_SERVER_DELEGATION=[ON|OFF]           # ON is the default; experiment with OFF
NFSMAPID_DOMAIN=<network_domain_where_BI_was_deployed>
- NFS Client
Monitor the DNLC hit rate and tune the directory name look-up cache (DNLC).
To monitor the DNLC hit rate, run the "vmstat -s | grep cache" command. It is ideal to see a hit rate of 95% or above.
Add the following tunable parameter to /etc/system on the NFS server with a desired value for the DNLC size (on Solaris, the DNLC is sized by the ncsize tunable):

set ncsize = <desired_value>
Mounting NFS Share
Mount the NFS share that contains the Presentation Services Catalog on all the NFS clients (BI nodes in this context) using the following mount options:
rw,forcedirectio,nocto
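As a sketch, assuming the catalog is exported as /export/BIshare from a host named nfsserver (both names hypothetical), the mount command on each BI node would look like:

```
# mount -F nfs -o rw,forcedirectio,nocto nfsserver:/export/BIshare /export/catalog
```

forcedirectio bypasses the client-side page cache, and nocto relaxes the close-to-open consistency checks; both are reasonable for a catalog that is read and written in small chunks by the Presentation Server.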
Oracle BI EE Cluster Deployment Recommendations
- Ensure that all the BI components in the cluster are configured in a many-to-many fashion
- For proper load balancing, configure all BI nodes to be identical in the BI Cluster
- When planning to add an identically configured new node to the BI Cluster, simply clone an existing well-configured BI node running in a non-global zone.
Cloning a BI node running in a dedicated zone results in an exact copy of the BI node being cloned. This approach is simple, less error prone and eliminates the need to configure the newly added node from scratch.
Oracle BI Presentation Services Configuration Recommendations
Increase the file descriptor limit. Edit SAROOTDIR/setup/systunesrv.sh to increase the value from 1024 to a value of your choice. In addition, increase the shell limit using the ulimit command. e.g.,

ulimit -n 2048
- Configure 256M large pages for the JVM heap of the Chart server and the OC4J web server (this recommendation is equally applicable to other web servers such as WebLogic or Sun Java System Web Server). Also use the parallel garbage collector, and restrict the number of parallel GC threads to one-eighth of the number of virtual cpus. For example, with 64 vcpus:
-XX:LargePageSizeInBytes=256M -XX:+UseParallelGC -XX:ParallelGCThreads=8
- The Oracle BI Presentation Server keeps the access information of all the users in the Web Catalog. When there are a large number of unique BI users, it can take a significant amount of time to look up a user if all the users reside in a single directory. To avoid this, hash the user directories. This can be achieved by adding the following entry to SADATADIR/web/config/instanceconfig.xml:
<Catalog>
   <HashUserHomeDirectories>2</HashUserHomeDirectories>
</Catalog>
HashUserHomeDirectories specifies the number of characters to use when hashing user names into sub-directories. When this element is turned on with a value of 2, for example, the default home directory for user steve becomes /users/st/steve.
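The hashing scheme itself is simple: the first HashUserHomeDirectories characters of the user name become an extra directory level. A small shell sketch of the idea (the user name and paths are only examples, not OBIEE code):

```shell
# Illustrative only -- mimics 2-character user-directory hashing.
# With HashUserHomeDirectories set to 2, the first 2 characters
# of the user name form an intermediate directory level.
user=steve
prefix=$(printf '%s' "$user" | cut -c1-2)
printf '/users/%s/%s\n' "$prefix" "$user"   # -> /users/st/steve
```

With tens of thousands of users, this keeps each directory small enough that name lookups stay fast.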
- BI Server and BI Presentation Server processes create many temporary files while rendering reports and dashboards for a user. This can result in significant I/O activity on the system. The I/O waits can be minimized by pointing the temporary directories to a memory resident file system such as /tmp on Solaris OS. To achieve this, add the following line to the instanceconfig.xml configuration file.
Similarly the Temporary directory (SATEMPDIR) can be pointed to a memory resident file system such as /tmp to minimize the I/O waits.
Consider tuning the value of CacheMaxEntries in instanceconfig.xml. A value of 20,000 was used in the 50,000 user OBIEE benchmark on T5440 servers. Be aware that the Presentation Services process (sawserver64) consumes more virtual memory when this parameter is set to a high value.
If the Presentation Services log contains errors such as "The queue for the thread pool AsyncLogon is at it's maximum capacity of 50 jobs.", consider increasing the size of the Presentation Services' asynchronous job queue. 50 is the default value.
The following example increases the job queue size to 200.
<ThreadPoolDefaults>
   <AsyncLogon>
      <MaxQueue>200</MaxQueue>
   </AsyncLogon>
</ThreadPoolDefaults>
Increase the query cache expiry time, especially when the BI deployment is expected to handle a large number of concurrent users. The default is 60 minutes; however, under very high loads a cache entry may be removed in less than one hour when many queries are being run. Hence it may be necessary to tune the CacheMaxExpireMinutes parameter in Presentation Services' instanceconfig.xml.
The following example increases the query cache expiry time to 3 hours.
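A sketch of the corresponding entry, assuming it sits in its usual place in Presentation Services' instanceconfig.xml (3 hours = 180 minutes):

```
<CacheMaxExpireMinutes>180</CacheMaxExpireMinutes>
```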
Consider increasing the Presentation Services' cache timeout values to keep the cached data intact for longer periods.
The following example increases the cache timeout values to 5 hours in instanceconfig.xml configuration file.
<AccountIndexRefreshSecs>18000</AccountIndexRefreshSecs>
<AccountCacheTimeoutSecs>18000</AccountCacheTimeoutSecs>
<CacheTimeoutSecs>18000</CacheTimeoutSecs>
<CacheCleanupSecs>18000</CacheCleanupSecs>
<PrivilegeCacheTimeoutSecs>18000</PrivilegeCacheTimeoutSecs>
Oracle BI Server Configuration Recommendations
Enable caching at the BI server, and control/tune the cache expiry time for each of the tables based on your organization's needs.
Unless the repository needs to be edited online frequently, consider setting up "read only" mode for the repository. It may ease lock contention to some extent.
Increase the session limit and the number of requests per session limit, especially when the BI deployment is expected to handle a large number of concurrent users. Also increase the number of BI server threads.
The following configuration was used in 50,000 user OBIEE benchmark on T5440 servers.
(Source configuration file: NQSConfig.INI)

[ CACHE ]
ENABLE = YES;
DATA_STORAGE_PATHS = "/export/oracle/OracleBIData/cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 0;
MAX_CACHE_ENTRY_SIZE = 10 MB;
MAX_CACHE_ENTRIES = 5000;
POPULATE_AGGREGATE_ROLLUP_HITS = NO;
USE_ADVANCED_HIT_DETECTION = NO;

// Cluster-aware cache
GLOBAL_CACHE_STORAGE_PATH = "/export/oracle/OracleBIsharedRepository/GlobalCacheDirectory" 2048 MB;
MAX_GLOBAL_CACHE_ENTRIES = 10000;
CACHE_POLL_SECONDS = 300;
CLUSTER_AWARE_CACHE_LOGGING = NO;

[ SERVER ]
READ_ONLY_MODE = YES;
MAX_SESSION_LIMIT = 20000;
MAX_REQUEST_PER_SESSION_LIMIT = 1500;
SERVER_THREAD_RANGE = 512-2048;
SERVER_THREAD_STACK_SIZE = 0;
DB_GATEWAY_THREAD_RANGE = 512-512;
#SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS";
CLUSTER_PARTICIPANT = YES;