Essbase 11.1.2 - Commit Block Tuning

The commit blocks setting is a tunable block storage option (BSO) database setting. For additional information, including a definition and explanation of what the commit blocks setting is, see the "Ensuring Data Integrity" chapter of the Essbase Database Administrator's Guide (Essbase DBA Guide). This post provides additional guidance on how to tune commit blocks for a given database, but it does not supersede what is documented in the Essbase DBA Guide.

For some Essbase databases, the default commit blocks setting of 3,000 is not appropriate because the compressed block size is so small that transactions are committed too often, which produces too much overhead. When this happens, Essbase adjusts the commit block interval automatically and records the adjusted value in the application log. That logged value provides a hint as to what a better commit blocks threshold might be for the database.

The commit block adjustment interval is based on a range from 3,000 to 100,000; in other words, commit blocks will not adjust higher than 100,000. An adjusted interval can occur with any setting if the algorithm determines that the block size is small and commits are occurring too often. It is also possible to set commit blocks to 0; as per the Essbase DBA Guide, if you set commit blocks to 0, the synchronization point occurs at the end of the transaction. This post focuses only on the adjusted commit block intervals.

If the data committed by one of the first transactions, before any adjustment is made, amounts to less than 10 MB, Essbase will try to enlarge the commit interval. The goal of each implicit commit is to commit at least 10 MB of data; if a commit would write less than that, Essbase decides there is no point in committing so early and increases the interval.
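To make the heuristic concrete, below is a rough sketch of how such an adjustment could be estimated from the compressed block size. The function name, ceiling math and clamping to 100,000 are illustrative assumptions only; this is not the actual Essbase internal algorithm.

    # Rough illustration of the 10 MB heuristic described above.
    # The function name, rounding, and clamping range are assumptions for
    # illustration only; they are not the actual Essbase internal algorithm.

    def estimate_adjusted_commit_blocks(compressed_block_bytes: int,
                                        configured_commit_blocks: int = 3000,
                                        target_bytes: int = 10 * 1024 * 1024,
                                        max_interval: int = 100_000) -> int:
        """Estimate the interval Essbase might adjust to so that each
        implicit commit writes at least ~10 MB of compressed data."""
        committed_bytes = configured_commit_blocks * compressed_block_bytes
        if committed_bytes >= target_bytes:
            return configured_commit_blocks           # no adjustment needed
        blocks_for_target = -(-target_bytes // compressed_block_bytes)  # ceiling division
        return min(blocks_for_target, max_interval)   # capped at 100,000

    # Example: a 150-byte compressed block with the default setting of 3,000
    # commits only ~450 KB per commit, so the interval would grow to ~69,906.
    print(estimate_adjusted_commit_blocks(150))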

In general, if the commit block interval automatically adjusts to a value larger than 20,000, it is recommended to reconsider the block density. A poor sparse/dense selection creates too many blocks and is potentially a reason for increased fragmentation. Once the best sparse/dense outline configuration is determined, the next statistic to monitor is the compression ratio. A very low compression ratio typically means the blocks compress too much, which is a good indicator that the stored blocks contain too many #Missing cells.

The factors that impact whether Essbase adjusts the commit blocks interval are as follows:

  • The number of blocks dirtied by the calculation at the first implicit commit.
  • The number of non-missing cells in the compressed blocks at the first implicit commit, which affects the size of the data buffers to be flushed out to the page file.
  • Under parallel calculation, if the first adjusted commit block interval happens to be too low, commit blocks will continue to re-adjust. If the adjusted value is appropriate, the same interval may be applied to all remaining calculation threads. In other words, with parallel threads enabled, commit blocks may adjust differently depending on which thread reaches that point first.

Using the commit blocks interval to determine the data cache setting:

See the Essbase DBA Guide for additional options for sizing the data cache. However, a reasonable estimate for tuning data cache based on the commit blocks interval would be:

Data Cache = uncompressed block size * commit blocks * number of parallel calculation threads

We need to use the uncompressed block size because data cache holds only uncompressed blocks.
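As a quick illustration of this estimate, the sketch below plugs hypothetical values into the formula; the block size, commit blocks interval and thread count shown are examples, not recommendations.

    # Illustration of the data cache estimate above. The values used here are
    # hypothetical examples, not recommended settings.

    def estimate_data_cache_bytes(uncompressed_block_bytes: int,
                                  commit_blocks: int,
                                  parallel_calc_threads: int) -> int:
        """Data cache holds uncompressed blocks, so size it from the
        uncompressed block size times the blocks held per commit per thread."""
        return uncompressed_block_bytes * commit_blocks * parallel_calc_threads

    # Example: an 8,000-byte uncompressed block, commit blocks of 20,000,
    # and 2 parallel calculation threads.
    cache_bytes = estimate_data_cache_bytes(8_000, 20_000, 2)
    print(f"{cache_bytes / 1024:,.0f} KB (~{cache_bytes / 1024 ** 3:.1f} GB)")
    # -> 312,500 KB (~0.3 GB)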

Determining the optimal size of commit blocks interval:

This is hard to generalize and has to be tuned for the particular application and data model. Therefore, it can only be arrived at by meticulous testing.

One suggestion is to experiment with values from 3,000 to 30,000 (in general) and see what gives better performance when the same batch calculation(s) are run again and again; a scripted sketch of such a sweep is shown below. Generally, a higher value will make the initial calculation run faster and subsequent calculations run slower. The commit blocks setting may also need additional tuning during performance testing with high end-user concurrency. For instance, Planning applications with multiple end users running business rules concurrently would require experimenting with different settings to avoid possible block contention.
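Here is a rough sketch of how such an experiment could be scripted. The application, database, calculation script, credentials and the essmsh MaxL shell invocation are placeholders/assumptions to adapt to your environment; also remember to reset the database to the same starting data before each run so the timings are comparable.

    # A rough sketch of a commit blocks sweep. The app, database, calc script,
    # credentials, and the essmsh invocation below are placeholders/assumptions;
    # adapt them to your environment, and reset the database between runs so
    # each test starts from the same data.
    import subprocess
    import tempfile
    import time

    CANDIDATES = [3_000, 5_000, 10_000, 20_000, 30_000]

    MAXL_TEMPLATE = """
    login admin identified by password on localhost;
    alter database Sample.Basic set implicit_commit after {blocks} blocks;
    execute calculation Sample.Basic.CalcAll;
    logout;
    """

    for blocks in CANDIDATES:
        with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as script:
            script.write(MAXL_TEMPLATE.format(blocks=blocks))
            path = script.name
        start = time.time()
        subprocess.run(["essmsh", path], check=True)   # run the MaxL script
        print(f"commit blocks {blocks:>6}: {time.time() - start:,.1f} s")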


Comments:

Hi Ann,

Based on what you mentioned above, Data Cache = uncompressed block size * commit blocks * number of parallel calculation threads. I would like to confirm whether my calculation is correct. We have a database with 3 dense dimensions with total members, including Dynamic Calc, of 27, 77 and 7.

Uncompressed Block Size = 27*77*7 bytes = 86,240 bytes
Commit Block = 100,000
Parallel Calc = 4

Data Cache = 86,240 bytes * 100,000 * 4 = 33,687,500 KB, or about 33 GB. Is this correct?

Is there another way of sizing the commit blocks other than trial and error as you suggested above? For the application I use as an example above, Essbase always adjusts the commit blocks to 100,000 each time we use 4 parallel calcs. Does that mean we need to set Commit Blocks to 100,000?

How do we measure the 10 MB of data that Essbase would commit?

Thanks,

Lian

Posted by Lian on March 01, 2012 at 12:50 PM SGT #

Hi Lian

Sorry for the late response; I was out on vacation.

Your calculation is correct using the formula provided in this post to calculate the data cache from commit blocks and parallel threads. As you can see, there is a cost associated with setting commit blocks arbitrarily high and/or setting parallel threads globally at the server. Clearly, 33 GB of data cache is unrealistic; however, the point of the formula is to show the impact of setting high values for commit blocks or, as in your example, a global setting for parallel threads.

Tuning an Essbase application is a methodical, iterative process, and there are no formulas that are absolute when determining a database or configuration setting. Any and all formulas provided in the Essbase Database Administrator's Guide or in this post are meant to be a guide or rule of thumb for best practices. The goal of all Essbase application tuning is to find the right balance of disk I/O, memory utilization and CPU utilization. Assuming that setting anything in Essbase to its maximum, or setting it arbitrarily, is safe can be detrimental to performance, so all setting combinations should be tested and compared to derive the proper settings for any given application's design, data and process.

Tuning an Essbase application, which includes Planning applications, can vary from one customer application to another because tuning is design, data and process dependent. For example, tuning a Planning application can vary significantly because each outline has a different number of dimensions, hierarchies and members. The outline dimension order alone has an impact on how parallel threads are utilized and how the calculator cache is determined. What impacts parallel threads can impact commit blocks, and these two areas alone can impact what is needed in the data cache, as your sample shows. Also, which dimensions are selected as sparse or dense, and the data distribution that depends on that sparse/dense configuration, have an impact on tuning. Calculation design and the overall customer processes impact tuning as well. Therefore, tuning has to be a methodical, iterative process, so if you prefer to call it "trial and error" then yes, that is the case.

In regards to your example, it would seem that the compressed block size is really small. Also, I'm not sure what the calculation design is, but I would question the need for using 4 parallel threads. As another rule of thumb, data cache testing should include tracking the data cache hit ratio; a ratio higher than 70% is normally considered very good, although I have seen some applications perform well with a data cache hit ratio of 30% and higher.

The 10 MB value is internal to Essbase and part of the algorithm. There are no statistics available in Essbase to measure that 10 MB of data.

Thanks for your question and hope this helps.
Ann

Posted by guest on March 20, 2012 at 09:42 PM SGT #
