Tuesday Jan 29, 2013

Defining Essbase Security using Metaread Filter

Defining security in multidimensional databases like Essbase is a subject some administrators would rather avoid. Yet it is no rocket science: a clear logic governs how read, write, or no access is applied to applications, databases, or individual values in the database by means of filters, in order to make data available to users or to withhold it. But there is a fourth setting available in Essbase filters that some administrators are not yet aware of, or at least not sure how to apply correctly: the so-called Metaread access setting. So let’s take a look at this very useful option and shed some light on its efficient use.

Granting or denying access to the data and figures in an Essbase database is a frequent requirement, but in some cases you also need to block the user’s ability to see members and hierarchies, in other words, parts of the metadata. This is what the Metaread filter addresses. Metaread differs from the other three filter options, None, Read, and Write, in several ways. First, it does not apply to the data in the database as the others do, but to metadata, where it limits the visibility of members and parts of the hierarchy. Second, it does not support AND logic, only OR. Finally, it overrides definitions made with Read or Write: even granted Read or Write data access on given members or member combinations cannot be exercised when Metaread definitions exclude those members from being seen at all by the user.

So how does it work in detail? By default, users can display all metadata, meaning they can see all members in the hierarchy even when they have no Read or Write data access on those members. Metaread adds another layer to existing filter definitions and tightens them by removing certain members or branches from the user’s view. Users see only the explicitly defined members and their ancestors in the hierarchy; for the ancestors, only the hierarchy and member names are visible, while for the defined members at least read access to the respective data is always granted. If Write access to the data is granted at the same time for the members defined in Metaread, that access is kept as Write and not reduced to read-only. Siblings of the defined Metaread members are not visible at all. This is illustrated in the example below.

essbase screenshot

For all cells not specified in the Metaread filter, unless defined differently in another filter, the minimum database access applies: first the setting defined at the user access level, or, with second priority, the setting from the global database access level, just as with the common filter definitions.

Of course, overlapping definitions can occur with Metaread as well, but here we have to watch out, as they are once again treated differently, as the following example, referring to the hierarchy shown above, demonstrates:

essbase screenshot

This definition, unlike with None, Read, or Write, would not grant data access to both California and West. Instead, it would allow data access only to California, but not to its parent West, for which only the hierarchy would be shown. To avoid such conflicting settings with Metaread, it is recommended to define all members from the same dimension in one single filter row. Hence the correct way to define data access on both members would be:
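The recommended single-row definition can also be sketched in MaxL. This is an illustrative sketch, not the exact filter from the screenshots: the filter name is hypothetical, and note that MaxL spells the privilege meta_read, while Administration Services displays it as Metaread.

```
/* one Metaread row covering both Market-dimension members */
create or replace filter Sample.Basic.MetaFilt
    meta_read on '"West","California"';
```

Because both members appear in one filter row, the OR logic of Metaread applies within a single definition and the conflict described above does not arise.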

essbase screenshot

So as you can see, Metaread is a bit special, but not that complicated to use. And it adds another helpful option to the familiar None, Read, and Write settings in Essbase security filters.

Want to learn more?

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for upcoming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region).

Please drop me a note directly if you have any questions: bernhard.kinkel@oracle.com.

About the Author:

Bernhard Kinkel

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

Wednesday Nov 30, 2011

New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

After discussing new options for general backup and restore in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options.

Tip: Transaction logging and replay cannot be used with aggregate storage databases. For details, please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1).

Even if backups are taken on a regular, frequent basis, subsequent data entries, loads, or calculations are not reflected in a restored database. Activating Transaction Logging can fill that gap: it provides an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled:



To activate it, add corresponding statements to the essbase.cfg file using the TRANSACTIONLOGLOCATION setting. The complete syntax reads:

TRANSACTIONLOGLOCATION [ appname [ dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

Here appname and dbname are optional parameters that, in combination with ENABLE or DISABLE, let you switch Transaction Logging on for certain applications or databases, or exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is given, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs; this directory must already exist or must be created before log information can be written to it. NATIVE is a reserved keyword and should not be changed.

The following example first enables logging at a more general level for all databases in the application Sample, then disables it at a more granular level for only the Basic database in that application, hence excluding Basic from being logged.

TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

Tip: After applying changes to the configuration file, you must restart the Essbase server for the settings to take effect.

A replay of logged transactions, which may be required after restoring a database, can be performed only by administrators. The following options are available:

In Administration Services, select Replay Transactions from the right-click menu on the database:

Here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time.
Alternatively, you can replay transactions selectively based on a range of sequence IDs, which can be viewed using Display Transactions on the database’s right-click menu:

These sequence IDs (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transactions were performed.

This helps to ensure the integrity of the restored data after a replay, as transactions are replayed strictly in the order in which they were originally performed. For example, a calculation originally run after a data load cannot be replayed before the data load has been replayed. Once a transaction is replayed, you can replay only transactions with a greater sequence ID: replaying the transaction with sequence ID 4 includes all preceding transactions, and afterwards you can replay only transactions with a sequence ID of 5 or greater.
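For scripted environments, the same replay operations are available in MaxL. The following is a sketch; the database name and the timestamp are placeholders:

```
/* replay all transactions logged after a given point in time */
alter database Sample.Basic replay transactions after '2011-11-20_12:30:00';

/* replay selected transactions by sequence ID range */
alter database Sample.Basic replay transactions using sequence_id_range 1 to 4;
```

This is handy when replay should be part of an automated recovery script rather than a manual step in Administration Services.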

Tip: After restoring a database from a backup, you should always completely replay all logged transactions that were executed after the backup before executing any new transactions.

But the transaction information itself is not all that needs to be logged and stored in the specified directory. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

ARBORPATH/app/appname/dbname/Replay

These files are then used during the replay of a logged transaction. By default, Essbase archives data load and rules files for client data loads only; to specify which types of data to archive when logging transactions, use the TRANSACTIONLOGDATALOADARCHIVE setting as an additional entry in the essbase.cfg file. The syntax for the statement is:

TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

For the [appname [dbname]] arguments the same rules apply as for TRANSACTIONLOGLOCATION above. The valid values for the OPTION argument are CLIENT (archive data load and rules files for client data loads only; this is the default), SERVER (archive files for server-side and SQL data loads), SERVER_CLIENT (archive files for both), and NONE (archive nothing).

Choose the setting according to the locations from which transactions usually originate. Selecting the NONE option prevents Essbase from saving the respective files, and the data load cannot be replayed; in that case you must first manually load the data before you can replay the transactions.
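For instance, to archive files for both server-side and client data loads, the essbase.cfg entry might read as follows (a sketch using the Sample Basic database):

```
TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT
```

As with TRANSACTIONLOGLOCATION, omitting the database or application name widens the scope of the setting accordingly.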

Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

You can find more detailed information in the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide mentioned above, or on the Oracle Technology Network.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for upcoming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com.

About the Author:

Bernhard Kinkel

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

Disclaimer:

All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

Tuesday Jun 14, 2011

New ways for backup, recovery and restore of Essbase Block Storage databases – part 1 by Bernhard Kinkel

Backing up databases and providing the necessary files and information for a potential recovery or restore is crucial in today’s working environments. In this article I will present the interesting new options Essbase provides for this, starting from version 11, and, related to this, a powerful data export option using Calc Scripts that has been available since release 9.3.

Let’s start with the last point. If you wanted to back up just the data from your database, you could formerly use the Export utility that Essbase provides as an item in the database right-click menu of the Administration Services Console. This feature is still available and supports both Block Storage (BSO) and Aggregate Storage (ASO) databases. But some usability limitations exist: for example, the scope of the export can be set only to Level0, Input Level, or All data (the last two options are available for BSO only); more detailed selections are not possible. Also, the ASCII format causes the export files to become rather large, possibly even larger than your Page and Index files.

Still, importing these files is quite simple, as such an export can be (re)loaded without any load rule as long as the outline structure is the same, even if the database resides on another server. Modifications are also possible by using load rules in combination with an export file in column format.

Exporting data with a Calc Script, by contrast, promises more flexibility, smaller files, and faster performance. However, this option is available for BSO only, as ASO cubes do not support Calc Scripts.

For example, to focus on very detailed subsets of data, as is common in Calc Scripts, you can take advantage of familiar commands like FIX | ENDFIX and EXCLUDE | ENDEXCLUDE. In addition, the SET DATAEXPORTOPTIONS command provides further options to refine export content, formatting, and processing, including the possibility to export dynamically calculated values. You can also request statistics and an estimate of the export time before actually exporting the data. The following syntax gives an overview of the available settings:

SET DATAEXPORTOPTIONS
{
DataExportLevel ALL | LEVEL0 | INPUT;
DataExportDynamicCalc ON | OFF;
DataExportNonExistingBlocks ON | OFF;
DataExportDecimal n;
DataExportPrecision n;
DataExportColFormat ON | OFF;
DataExportColHeader dimensionName;
DataExportDimHeader ON | OFF;
DataExportRelationalFile ON | OFF;
DataExportOverwriteFile ON | OFF;
DataExportDryRun ON | OFF;
}

Most of these options will probably already give you an idea of their use and functionality. For more detailed information about the SET DATAEXPORTOPTIONS command, please see the Oracle Essbase Online Documentation (rel. 11.1.2.1) or the Enterprise Performance Management System Documentation (including previous releases) on the Oracle Technology Network.

My example focuses on binary export and import, as it provides faster export and load performance than export/import with ASCII files. Thus, in the first section of the script I use only two of the data export options, in order to export all data and to overwrite any existing old export file with the new one. The subsequent syntax for the binary export itself is DATAEXPORT "Binfile" "fileName", where "Binfile" is the required keyword and "fileName" is the full pathname of the exported binary file. The complete script reads:

SET DATAEXPORTOPTIONS
{
DataExportLevel "ALL";
DATAEXPORTOVERWRITEFILE ON;
}
DATAEXPORT "BinFile" "c:\Export\MyDB_expALL.bin";

Tip: Export file names can have more than 8 characters; the extension “.bin” is not mandatory.
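Beyond the full export above, a more selective export can combine FIX with the text-file variant of DATAEXPORT, whose syntax is DATAEXPORT "File" "delimiter" "fileName" "missingChar". The following sketch (the member names are hypothetical and follow the Sample Basic outline) writes level-0 data for one scenario and month to a comma-delimited, column-formatted file:

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportColFormat ON;
DATAEXPORTOVERWRITEFILE ON;
}
FIX ("Actual", "Jan")
DATAEXPORT "File" "," "c:\Export\Actual_Jan.txt" "#MI";
ENDFIX
```

Here "#MI" is written for missing values; a file exported in column format like this can later be reloaded with a load rule if modifications are needed.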

The import of the binary file in a Calc Script uses the command DATAIMPORTBIN fileName;. To avoid importing a wrong file or importing into a wrong database, each export file includes an outline timestamp, which the import checks by default. Should this check need to be bypassed, the command SET DATAIMPORTIGNORETIMESTAMP ON; can be placed before the DATAIMPORTBIN line. The import definition for the preceding export could look like the following:

SET DATAIMPORTIGNORETIMESTAMP ON;
DATAIMPORTBIN "c:\Export\MyDB_expALL.bin";

After this rather new option for data export and import, let’s turn to the new backup and restore option for complete databases, provided in the Administration Services Console starting with release 11. In addition to, or instead of, the common strategies and methods used previously (like running a third-party backup utility while the database is in read-only mode), this new feature provides an easy ad-hoc way to archive a database.

Select the Archive Database item from the right-click menu on the database node and, in the subsequent window, define the full path and name for the archive file; the extension “.arc” is a recommendation from Oracle, but not mandatory.
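The same archive operation can also be scripted in MaxL; a sketch, where the file path is hypothetical:

```
/* archive Sample Basic to a file (equivalent of the EAS dialog) */
alter database Sample.Basic archive to file 'c:\backup\samplebasic.arc';
```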


The process can optionally be run as a background process, and Force archive overwrites an existing file of the same name.

After starting the archive procedure, the database is set to read-only mode and a copy of the following files is written to the archive file:


Afterwards the database returns to read-write mode. However, not all files are backed up automatically by this procedure. The following table lists files and file types you would need to back up manually:


Tip: Also make a backup of the file essbase.bak_startup, which is created after a successful start of the Essbase server (formerly this file was simply named essbase.bak), as well as of the essbase.bak file, which now has a different function: while essbase.bak_startup is created only at server start and does not change until the next successful server start, essbase.bak can be compared to the security file and updated manually or by a MaxL command at any time. For a manual update in the Administration Services Console, right-click Security under the respective Essbase server and select Update security backup file.

In MaxL, run the command alter system sync security backup. The security files and the CFG file reside in the ARBORPATH\bin directory where you installed Essbase.

As the Archive option creates one large file by default, make sure that the file system you save your archive files to supports large files (e.g. NTFS on Windows). If you need smaller files, Essbase can be configured to create multiple files no larger than 2 GB each by adding the entry SPLITARCHIVEFILE TRUE to the essbase.cfg file.
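A sketch of the corresponding essbase.cfg entry, optionally scoped to a single application or database in the same way as the other configuration settings discussed in this series:

```
SPLITARCHIVEFILE Sample Basic TRUE
```

As usual, the Essbase server must be restarted for the configuration change to take effect.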

Restoring an archived database is as simple as the backup itself. First make sure that the database to be restored is stopped. Then select Restore Database from the right-click menu and provide the required information about the archive file to be imported, including the full path.


If the backed-up database used disk volumes, select Advanced. The database can be restored to the same disk volumes without further definitions, or you can define a new mapping for the volume names (e.g. “C” could be replaced by “F”); however, you can change neither the number of volumes nor the space used on each volume compared to the original backed-up database. Select to restore in the background if desired, and click OK. The restore is executed and confirmed in the Messages panel.
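Restores can be scripted as well; a MaxL sketch matching the dialog above (the file path is hypothetical):

```
/* restore Sample Basic from an archive file */
alter database Sample.Basic restore from file 'c:\backup\samplebasic.arc';
```

An optional replace disk volume clause covers the volume remapping described above for databases that use disk volumes.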

Tip: Usually you would restore the same database that was previously backed up, but this doesn’t necessarily have to be the case. You can also use the restore feature to create a copy of your database (excluding the files mentioned above, which are not included in the archive file) or to overwrite another database. In both cases you must have an existing database to overwrite. From this “target” database, select the Restore Database feature, but make sure Force Restore is checked in the Restore Database dialog box.

Depending on the frequency of your archiving cycles, the latest backup may not reflect the actual latest state of your database: after the backup you might, for example, have run Dimension Build Rules or Data Load Rules, data loads from client interfaces, or calculations. These would not be reflected in the restored database. In this case the new Transaction Logging and Replay option provides a good way to capture and replay post-backup transactions, so that a backed-up database can be recovered to its most recent state before an interruption occurred. This feature will be described in the second part of this article, coming later this year.

Or, if you can’t wait, you may want to learn how to use it, along with other important administration topics, in our Essbase for System Administrators class; please refer also to the links provided below.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for upcoming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com.


About

Expert trainers from Oracle University share tips and tricks and answer questions that come up in a classroom.
