Thursday Oct 11, 2012

Essbase BSO Data Fragmentation

Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases that see many end-user data updates, incremental data loads, frequent lock-and-send operations, and/or many calculation executions. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

Fragmentation is likely to occur in the following situations:

  • Read/write databases where users constantly update data
  • Databases that execute calculations around the clock
  • Databases that frequently update and recalculate dense members
  • Data loads that are poorly designed
  • Databases that contain a significant number of Dynamic Calc and Store members
  • Databases that use an isolation level of uncommitted access with commit block set to zero.
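
To make "too much fragmentation" more concrete, here is a minimal Python sketch of how you might interpret the average clustering ratio reported by database statistics (for example, GETDBSTATS in ESSCMD). A ratio of 1.0 means no fragmentation and lower values mean more fragmentation; the 0.7 threshold below is an illustrative assumption, not an Essbase default.

# Minimal sketch, assuming the average clustering ratio has already been
# pulled from database statistics (e.g. ESSCMD GETDBSTATS). A ratio of 1.0
# means no fragmentation; lower values mean blocks are scattered across the
# .pag files. The 0.7 threshold is illustrative, not an Essbase default.

def looks_fragmented(avg_clustering_ratio: float, threshold: float = 0.7) -> bool:
    return avg_clustering_ratio < threshold

if __name__ == "__main__":
    print(looks_fragmented(0.55))  # True  -> consider defragmenting
    print(looks_fragmented(0.95))  # False -> layout is still reasonably compact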
[Read More]

Thursday Jul 19, 2012

Essbase “Virtual Cube” BSO Tuning

An Essbase “virtual cube” is a specific design setup that uses transparent partitioning: multiple source databases are partitioned to a single target Block Storage (BSO) database. The single target BSO database is empty, meaning it contains no data. The purpose of this design is to give end users one consolidated BSO cube for reporting. In some cases, the target BSO database may have different dynamic calc member formulas than the source databases. The target BSO database may also have more dimensions to accommodate the source databases, and it may have a different sparse/dense configuration, since the source databases are tuned around the data they actually contain.

The Essbase Database Administration Guide explains how to tune Essbase BSO databases that contain data; however, there is no documentation on how to tune a BSO database that contains no data, as in a “virtual cube” setup.

[Read More]

Wednesday Jan 11, 2012

Essbase 11.1.2 - Commit Block Tuning

The commit blocks setting is a tunable Block Storage (BSO) database setting. For a definition and explanation of what the commit blocks setting is, see the Essbase Database Administration Guide, aka Essbase DBA Guide, under the “Ensuring Data Integrity” chapter. This post provides additional guidance on how to tune commit blocks for a given database but does not supersede what is documented in the Essbase DBA Guide.

For some Essbase databases, the default commit block setting of 3,000 is not appropriate because the compressed block size is so small that transactions are committed too often, which produces too much overhead. If this is the case, Essbase adjusts the commit block interval automatically and records the adjusted value in the application log. This adjusted commit block interval provides a hint as to what a better commit block threshold might be for the given database.

The commit block adjustment interval is bounded by a range of 3,000 to 100,000; that is, commit blocks will not be adjusted higher than 100,000. Adjusted block intervals can happen with any setting if the algorithm determines that the block size is small and commits occur too often. It is also possible to set commit blocks to 0; per the Essbase DBA Guide, if you set commit blocks to 0, the synchronization point occurs at the end of the transaction. For this post, we are focusing only on the adjusted commit block intervals.

If the data committed by one of the first transactions, before any adjustment is made, is less than 10 MB, Essbase will try to enlarge the interval so that each implicit commit writes at least 10 MB of data. In other words, if a commit would write less than 10 MB, Essbase decides there is no point in committing that early and makes the commit interval larger.
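
As an illustration of the behavior described above (not Essbase's internal algorithm), the sketch below estimates an implicit-commit interval large enough that each commit flushes roughly 10 MB of compressed block data, kept within the 3,000 to 100,000 range. The compressed block size is an assumed input you would take from your own database statistics.

# Illustrative sketch only, not Essbase's internal algorithm: estimate an
# implicit-commit interval large enough that each commit flushes at least
# ~10 MB of compressed block data, kept within the 3,000-100,000 range.

TARGET_BYTES = 10 * 1024 * 1024          # ~10 MB per implicit commit
MIN_INTERVAL, MAX_INTERVAL = 3_000, 100_000

def adjusted_commit_blocks(compressed_block_bytes: float) -> int:
    blocks_for_target = int(TARGET_BYTES / compressed_block_bytes)
    return min(max(blocks_for_target, MIN_INTERVAL), MAX_INTERVAL)

if __name__ == "__main__":
    # ~1 KB compressed blocks: the default 3,000-block interval would commit
    # only ~3 MB, so the interval is pushed up to ~10,240 blocks.
    print(adjusted_commit_blocks(1024))       # 10240
    # ~64 KB compressed blocks already exceed 10 MB at 3,000 blocks,
    # so the interval stays at the 3,000 floor.
    print(adjusted_commit_blocks(64 * 1024))  # 3000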

In general, if the commit block interval is automatically adjusted above 20,000, it is recommended to reconsider the block density. Poor selection of block density creates too many blocks and is potentially a cause of increased fragmentation. Once the best outline sparse/dense configuration is determined, the next statistic to monitor is the compression ratio. If the compression ratio is too low, this typically means there is too much compression, which is a good indicator that there are too many #Missing values in the blocks.

The factors that impact whether Essbase adjusts the commit blocks interval are as follows:

  • The number of blocks dirtied by the calculation is different at the first implicit commit.
  • The number of non-missing cells in the compressed block changes at the first implicit commit, which affects the size of the data buffers to be flushed out to the page file.
  • Under parallel calculations, if the first adjusted commit block interval happens to be too low, then commit blocks will continue to re-adjust. If the adjusted commit blocks value is appropriate, the same interval may be applied to all remaining calculation threads. Basically, if parallel threads are enabled, commit blocks may adjust differently based on which thread reaches that point first.

Using the commit blocks interval to determine the data cache setting:

See the Essbase DBA Guide for additional options for sizing the data cache. However, a reasonable estimate for tuning data cache based on the commit blocks interval would be:

Data Cache = uncompressed block size * commit blocks * number of parallel calculation threads

We use the uncompressed block size because the data cache holds only uncompressed blocks.
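
A minimal Python sketch of that estimate follows; all input values are examples you would replace with your own database statistics (the uncompressed block size from database properties, the commit blocks setting, and the number of parallel calculation threads, e.g. CALCPARALLEL).

# Minimal sketch of the data cache estimate above. Replace the example
# inputs with your own database statistics.

def data_cache_bytes(uncompressed_block_bytes: int,
                     commit_blocks: int,
                     parallel_calc_threads: int) -> int:
    # The data cache holds uncompressed blocks, hence the uncompressed size.
    return uncompressed_block_bytes * commit_blocks * parallel_calc_threads

if __name__ == "__main__":
    # Example: 80 KB uncompressed blocks, commit blocks of 3,000, 2 calc threads
    cache = data_cache_bytes(80 * 1024, 3_000, 2)
    print(f"{cache / 1024 ** 2:.0f} MB")   # ~469 MB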

Determining the optimal size of commit blocks interval:

The optimal value is hard to generalize and has to be tuned for the particular application and data model; therefore, it can only be arrived at by meticulous testing.

One suggestion is to experiment with values from 3,000 to 30,000 (in general) and see which gives better performance when the same batch calculation(s) are run repeatedly. Generally, a higher value makes the initial calculation run faster and subsequent calculations run slower. The commit block setting may also need additional tuning during performance testing with high end-user concurrency. For instance, Planning applications with multiple end users running business rules concurrently would require experimenting with different settings to avoid possible block contention.
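
A hypothetical test harness for that kind of experiment is sketched below. The helpers set_commit_blocks() and run_batch_calc() are placeholder stubs, not real Essbase APIs; in practice they would call whatever automation you already use (MaxL scripts, the Java API, etc.). Only the timing loop is the point.

# Hypothetical harness: time the same batch calculation under several commit
# block settings. set_commit_blocks() and run_batch_calc() are placeholder
# stubs for your own MaxL scripts / automation, not real Essbase APIs.
import time

def set_commit_blocks(n: int) -> None:
    """Placeholder: apply the commit blocks setting (e.g. via a MaxL script)."""

def run_batch_calc() -> None:
    """Placeholder: run the same batch calculation(s) each time."""

def benchmark(settings=(3_000, 10_000, 20_000, 30_000), runs=3):
    results = {}
    for commit_blocks in settings:
        set_commit_blocks(commit_blocks)
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            run_batch_calc()
            timings.append(time.perf_counter() - start)
        results[commit_blocks] = sum(timings) / runs   # average elapsed seconds
    return results

if __name__ == "__main__":
    for setting, avg in benchmark().items():
        print(f"commit blocks {setting:>6}: {avg:.2f}s average")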


About

A blog focused on Tips & Tricks about Oracle Business Intelligence (OBI), Oracle Exalytics and Oracle Enterprise Performance Management (EPM) products.
[Blog Admin: ahmed awan]
