Monday Nov 18, 2013

Using the Container Database in Oracle Database 12c by Christopher Andrews


The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

Bundling Databases Inside One Container

In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.
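To make that concrete, here is a minimal sketch of provisioning a new pluggable database inside an existing CDB (the PDB name, admin credentials and file paths are illustrative assumptions):

SQL> CREATE PLUGGABLE DATABASE pdb1
     ADMIN USER pdb_adm IDENTIFIED BY welcome1
     FILE_NAME_CONVERT = ('/oradata/cdb1/pdbseed/', '/oradata/cdb1/pdb1/');

SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;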

Minimizing Storage

Most IT professionals understand the cost of storage, whether solid state or rotating disk. Let’s take one or many databases and “plug” them into ONE designated container database. We can eliminate many redundant components that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with only the metadata specific to each pluggable database stored locally. We also won’t need as many background processes, thus reducing the overhead cost of the CPU resource.
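Because all pluggable databases share the one instance and its background processes, a single query from the root shows them side by side; a quick example:

SQL> SELECT con_id, name, open_mode FROM v$pdbs;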

Improve Security Levels within Each Pluggable Database 

We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.
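A hedged sketch of that separation of duties (user names and passwords are placeholders): a common user, prefixed C## in 12c, can administer the whole CDB, while a local user is confined to one pluggable database.

SQL> CREATE USER c##cdb_dba IDENTIFIED BY pwd1 CONTAINER=ALL;
SQL> GRANT DBA TO c##cdb_dba CONTAINER=ALL;

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> CREATE USER pdb_dba IDENTIFIED BY pwd2 CONTAINER=CURRENT;
SQL> GRANT DBA TO pdb_dba;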

The bottom line: it's a good idea to at least consider using a CDB.


About the author:

Chris Andrews is a Canadian instructor with Oracle University who teaches the Server Technologies courses for both 11g and 12c. He has been with Oracle University since 1997 and started training customers back with Oracle 7. Now a Senior Principal Instructor with OU, Chris had the opportunity to work as a DBA for about 10 years before joining Oracle. His background and experience as a DBA, database analyst and developer are occasionally shared with customers in the classroom. His skill set includes the Database Administration workshops, Backup & Recovery, Performance Tuning, RAC, Data Guard, ASM, Clusterware, and the Exadata and Database Machine administration workshop. When not teaching, Chris enjoys aviation and flying gliders, underwater photography and tennis.

Friday Oct 18, 2013

Oracle Database 12c New Partition Maintenance Features by Gwen Lazenby

One of my favourite new features in Oracle Database 12c is the ability to perform partition maintenance operations on multiple partitions. This means we can now add, drop, truncate and merge multiple partitions in one operation, and can split a single partition into more than two partitions, all in just one command. This would certainly have made my life easier had it been available when I administered a data warehouse on Oracle 9i.

To demonstrate this new functionality and syntax, I am going to create two tables, ORDERS and ORDER_ITEMS, which have a parent-child relationship. ORDERS will be partitioned using range partitioning on the ORDER_DATE column, and ORDER_ITEMS will be partitioned using reference partitioning and its foreign key relationship with the ORDERS table. Reference partitioning was a new feature in 11g and means that any partition maintenance operations performed on the ORDERS table will also take place on the ORDER_ITEMS table.

First create the ORDERS table -

SQL> CREATE TABLE orders
      ( order_id NUMBER(12),
        order_date TIMESTAMP,
        order_mode VARCHAR2(8),
        customer_id NUMBER(6),
        order_status NUMBER(2),
        order_total NUMBER(8,2),
        sales_rep_id NUMBER(6),
        promotion_id NUMBER(6),
       CONSTRAINT orders_pk PRIMARY KEY(order_id)
     )
    PARTITION BY RANGE(order_date)
   (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
    PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
    PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
    PARTITION Q4_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
    );

Table created.

Now the ORDER_ITEMS table

SQL> CREATE TABLE order_items
     ( order_id NUMBER(12) NOT NULL,
       line_item_id NUMBER(3) NOT NULL,
       product_id NUMBER(6) NOT NULL,
       unit_price NUMBER(8,2),
       quantity NUMBER(8),
       CONSTRAINT order_items_fk
       FOREIGN KEY(order_id) REFERENCES orders(order_id) ON DELETE CASCADE)
       PARTITION BY REFERENCE(order_items_fk) TABLESPACE example;

Table created.

Now look at DBA_TAB_PARTITIONS to get details of what partitions we have in the two tables –

SQL>  select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4

Just as an aside, it is also now possible in 12c to use interval partitioning on reference-partitioned tables; in 11g it was not possible to combine these two partitioning features.
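A hedged sketch of what that combination could look like (hypothetical table names; note that interval partitioning requires a DATE or NUMBER partitioning key, so a DATE column is assumed here). Oracle creates new quarterly partitions in the parent, and matching partitions in the child, automatically as data arrives:

SQL> CREATE TABLE orders_i
     ( order_id   NUMBER(12) CONSTRAINT orders_i_pk PRIMARY KEY,
       order_date DATE NOT NULL )
     PARTITION BY RANGE(order_date)
     INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
     (PARTITION p_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')));

SQL> CREATE TABLE order_items_i
     ( order_id     NUMBER(12) NOT NULL,
       line_item_id NUMBER(3)  NOT NULL,
       CONSTRAINT order_items_i_fk
         FOREIGN KEY(order_id) REFERENCES orders_i(order_id) ON DELETE CASCADE )
     PARTITION BY REFERENCE(order_items_i_fk);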

For our first example of the new 12c functionality, let us add all the partitions necessary for 2008 to the tables using one command. Notice that the partition specification part of the ADD command is identical in format to the partition specification part of the CREATE command shown above -

SQL> alter table orders add
PARTITION Q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY')),
PARTITION Q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY')),
PARTITION Q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY')),
PARTITION Q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'));

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 4 new partitions have been added to both tables –

SQL> select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q1_2008                5 TIMESTAMP' 2008-04-01 00:00:00'
ORDER_ITEMS    Q1_2008                5
ORDERS         Q2_2008                6 TIMESTAMP' 2008-07-01 00:00:00'
ORDER_ITEMS    Q2_2008                6
ORDERS         Q3_2008                7 TIMESTAMP' 2008-10-01 00:00:00'
ORDER_ITEMS    Q3_2008                7
ORDERS         Q4_2008                8 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                8

Next, we can drop or truncate multiple partitions by giving a comma-separated list in the ALTER TABLE command. Note the use of the plural ‘partitions’ in the command, as opposed to the singular ‘partition’ used prior to 12c.

SQL> alter table orders drop partitions Q3_2008,Q2_2008,Q1_2008;

Table altered.
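Truncate uses the same plural syntax. A hedged illustration only, not run against our demo tables (and note that with reference-partitioned children holding rows, a CASCADE option may also be required):

SQL> alter table orders truncate partitions Q1_2007, Q2_2007;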

Now look at DBA_TAB_PARTITIONS and we can see that the 3 partitions have been dropped from both tables –

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Now let us merge all the 2007 partitions together to form one single partition –

SQL> alter table orders merge partitions
   Q1_2007, Q2_2007, Q3_2007, Q4_2007
   into partition Y_2007;

Table altered.

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Y_2007                 1 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Y_2007                 1
ORDERS         Q4_2008                2 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                2

Splitting partitions is slightly more involved. In the case of range partitioning, one of the new partitions must have no high value defined, and in list partitioning one of the new partitions must have no list of values defined. I call these the ‘everything else’ partitions; they will contain any rows in the original partition that do not belong in any of the other new partitions.

For example, let us split the Y_2007 partition back into 4 quarterly partitions –

SQL> alter table orders split partition Y_2007 into 
(PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
PARTITION Q4_2007);

Now look at DBA_TAB_PARTITIONS to get details of the new partitions –


TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Partition Q4_2007 has a high value equal to the high value of the original Y_2007 partition, and so has inherited its upper boundary from the partition that was split.

For a list partitioning example, let us look at another table, SALES_PAR_LIST, which has two partitions, AMERICAS and EUROPE, and a partitioning key of COUNTRY_NAME.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------  ---------------  -----------------------------
SALES_PAR_LIST  AMERICAS         'Argentina', 'Canada', 'Peru',
                                 'USA', 'Honduras', 'Brazil', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

Now split the Americas partition into 3 partitions –

SQL> alter table sales_par_list split partition americas into
   (partition south_america values ('Argentina','Peru','Brazil'),
   partition north_america values('Canada','USA'),
   partition central_america);

Table altered.

Note that no list of values was given for the ‘Central America’ partition. However, it should have inherited any values in the original ‘Americas’ partition that were not assigned to either the ‘North America’ or the ‘South America’ partition. We can confirm this by looking at the DBA_TAB_PARTITIONS view.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------- ---------------  --------------------------------
SALES_PAR_LIST  SOUTH_AMERICA    'Argentina', 'Peru', 'Brazil'
SALES_PAR_LIST  NORTH_AMERICA    'Canada', 'USA'
SALES_PAR_LIST  CENTRAL_AMERICA  'Honduras', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

In conclusion, I hope that DBAs whose work involves maintaining partitions will find these operations a little more straightforward to carry out once they have upgraded to Oracle Database 12c.

Gwen Lazenby

Gwen Lazenby is a Principal Training Consultant at Oracle.

She is part of Oracle University's Core Technology delivery team based in the UK, teaching Database Administration and Linux courses. Her specialist topics include using Oracle Partitioning and Parallelism in Data Warehouse environments, as well as Oracle Spatial and RMAN.

Wednesday Feb 27, 2013

Transporting a Single Partition of a Table


A feature of the Oracle database that appeared in 11gR1 but seems to have gone largely unnoticed is the ability to transport single (or multiple) partitions of a table to another database. This is particularly useful when archiving out older data held in older partitions of a table.

In previous releases of the Oracle database, it was only possible to transport entire tables, which meant every partition in that table. Therefore the standard methodology for transporting a single partition was to perform a partition exchange at the source database, making the partition a standalone table, and then to transport this table to the target database using the normal Transportable Tablespace method. The result is a standalone table in the target database. The Oracle 11.2 documentation still states that you cannot use the Transportable Tablespace feature to transport a single partition of a table; however, we are going to use a different technique, known as ‘datafile copy’, to do exactly that.
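For contrast, a hedged sketch of that traditional exchange-based approach (table and partition names are placeholders):

SQL> CREATE TABLE orders_h1_2004 AS
     SELECT * FROM orders_part WHERE 1=0;

SQL> ALTER TABLE orders_part
     EXCHANGE PARTITION h1_2004 WITH TABLE orders_h1_2004;

The standalone ORDERS_H1_2004 table, sitting in its own tablespace, would then be moved with the normal transportable tablespace steps.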

The datafile copy technique is actually part of Data Pump. In a similar way to transporting tablespaces, Data Pump is used to export out the metadata describing the objects to be copied to the new database. The datafiles that contain these objects are then copied to the new database, and Data Pump is used to import the metadata. These objects can now be seen by the new database.

This sounds exactly like transporting a tablespace, but there is a difference: in the datafile copy technique we might only be interested in a subset of what is in the datafiles, not the entire contents. This is achieved by invoking Data Pump export in table mode rather than tablespace mode. The datafile then effectively becomes an alternative to a dumpfile containing the exported data.

If the only contents of the tablespace in question are a single partition (or subpartition) of a table, then the final result appears to be identical to that achieved using the partition exchange and transportable tablespace technique traditionally used.

Now let's look at a worked example.
First create a tablespace TRANS to hold the partitions to be transported.
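The original listing does not show the tablespace creation itself; a minimal sketch might be (the datafile path matches the one reported in the export log further down, the size is an assumption):

SQL> CREATE TABLESPACE trans DATAFILE 'd:\oradata\trans.dbf' SIZE 100M;

Then create the partitioned table, placing the two 2004 partitions in TRANS: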

SQL> CREATE TABLE "OE"."ORDERS_PART"
( "ORDER_ID" NUMBER(12),
"ORDER_DATE" TIMESTAMP(6) WITH local TIME ZONE CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6) CONSTRAINT "ORDER_PART_CUST_ID_NN" NOT NULL ,
"ORDER_STATUS" NUMBER(2),
"ORDER_TOTAL" NUMBER(8, 2),
"SALES_REP_ID" NUMBER(6),
"PROMOTION_ID" NUMBER(6))
TABLESPACE "EXAMPLE"
PARTITION BY RANGE ("ORDER_DATE")
(PARTITION "H1_2004" VALUES LESS THAN (TIMESTAMP'2004-07-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H2_2004" VALUES LESS THAN (TIMESTAMP'2005-01-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H1_2005" VALUES LESS THAN (TIMESTAMP'2005-07-01 00:00:00 +0:00'),
PARTITION "H2_2005" VALUES LESS THAN (TIMESTAMP'2006-01-01 00:00:00 +0:00'),
PARTITION "H1_2006" VALUES LESS THAN (TIMESTAMP'2006-07-01 00:00:00 +0:00'),
PARTITION "H2_2006" VALUES LESS THAN (MAXVALUE));
Table created.

And then populate it:

SQL> insert into oe.orders_part select * from oe.orders;
105 rows created.
SQL> commit;

Look at the segments that are contained inside this tablespace:

SQL> select owner, segment_name, partition_name
     from dba_segments where tablespace_name='TRANS';

OWNER    SEGMENT_NAME          PARTITION_NAME
-------- --------------------- --------------
OE       ORDERS_PART           H1_2004
OE       ORDERS_PART           H2_2004

Just as when transporting tablespaces, any tablespace whose datafiles we want to copy must first be made read only.

SQL> alter tablespace trans read only;
Tablespace altered

Now the Data Pump export can be performed. The TABLES option is used to indicate which partitions we want to export, and TRANSPORTABLE=ALWAYS indicates that the generated dumpfile should contain only the table/partition metadata, since the datafiles containing the segments involved will be copied to the new database.

expdp system/oracle tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpfiles=y

Export: Release 11.2.0.2.0 - Production on Thu Feb 21 16:20:31 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpf
s=y
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DATAPUMP\ORDERSH1_2004.DMP
******************************************************************************
Datafiles required for transportable tablespace TRANS:
  D:\ORADATA\TRANS.DBF
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:20:53

Now the datafile is copied to its new location and the TRANS tablespace is made read-write again.

copy d:\oradata\trans.dbf d:\oradata\trans.cpy
        1 file(s) copied.

(If necessary, RMAN can now be invoked to convert the datafile to a different platform format.)
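A hedged sketch of such a conversion, run on the destination host (platform name and paths are assumptions; valid platform names can be found in V$TRANSPORTABLE_PLATFORM):

RMAN> CONVERT DATAFILE 'd:\oradata\trans.cpy'
      FROM PLATFORM 'Microsoft Windows x86 64-bit'
      FORMAT 'd:\oradata\trans_converted.cpy';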

SQL> alter tablespace trans read write;

Tablespace altered.

Now move over to the second database and import the partition’s metadata. Use the option PARTITION_OPTIONS=DEPARTITION, which imports the partition as a standalone table. In this example the schema owner is also being changed.

impdp system/oracle partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/oradata/trans.cpy' remap_schema=OE:SH

Import: Release 11.2.0.2.0 - Production on Thu Feb 21 16:21:28 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/
data/trans.cpy' remap_schema=OE:SH
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:21:34

Look for the newly imported table

SQL> select table_name, tablespace_name  from dba_tables
  2  where table_name like 'ORDERS%';

TABLE_NAME			TABLESPACE_NAME
------------------------------ --------------------------------------
ORDERS_PART_H1_2004		TRANS
ORDERS_QUEUETABLE 		EXAMPLE
ORDERS				EXAMPLE
ORDERS				EXAMPLE

Its new name is the name of the original table suffixed with the name of the partition. If this partition had been the only segment originally stored in this datafile we would now be finished. But although this partition is the only segment whose metadata was imported into this database, there are other segments in this datafile (here, the H2_2004 partition) that this 'new' database knows nothing about.

SQL> select owner, segment_name, partition_name
     from dba_segments where tablespace_name='TRANS';

OWNER    SEGMENT_NAME         PARTITION_NAME
-------- -------------------- --------------
SH       ORDERS_PART_H1_2004

So move the new table to another tablespace, allowing the TRANS tablespace to be dropped.

SQL> alter table sh.orders_part_h1_2004 move tablespace example;

Table altered.

SQL> drop tablespace trans;

Tablespace dropped.

Datafile copying as a means to transport partitions is one of the many features covered in the Oracle Database 11g: Implement Partitioning course.

About the Author:

Gwen Lazenby



Gwen Lazenby is a Principal Training Consultant at Oracle. She is part of the UK and IE Core Tech Delivery team, teaching database administration, Spatial and Linux courses. Gwen is also a member of the OU EMEA SME team, with a special responsibility for Oracle Spatial.

Wednesday Nov 30, 2011

New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

After discussing new options for general backup and restore in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options.

Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1).

Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations will not be reflected in a restored database. Activating Transaction Logging fills that gap, giving you an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled:



In order to activate its usage, corresponding statements can be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

TRANSACTIONLOGLOCATION [ appname [ dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

Here appname and dbname are optional parameters that, in combination with ENABLE or DISABLE, let you switch Transaction Logging on for certain applications or databases, or exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist, or must be created before log information can be written to it. NATIVE is a reserved keyword that shouldn’t be changed.

The following example shows how to first enable logging at a general level for all databases in the application Sample, followed by a disabling statement at a more granular level for only the Basic database in Sample, hence excluding it from being logged.

TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings.

A replay of logged transactions after restoring a database, should it be required, can be performed only by administrators. The following options are available:

In Administration Services, select Replay Transactions from the right-click menu on the database:

Here you can choose to replay transactions logged after the last replay request was originally executed, or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time.
Alternatively, you can replay transactions selectively based on a range of sequence IDs, which can be viewed using Display Transactions on the right-click menu on the database:

These sequence IDs (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transactions were performed.

This helps to ensure the integrity of the restored data after a replay, as transactions are replayed in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before the data load itself has been replayed. After a transaction is replayed, you can replay only transactions with a greater sequence ID. For example, replaying the transaction with sequence ID 4 includes all preceding transactions, and afterwards you can only replay transactions with a sequence ID of 5 or greater.
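If you prefer scripting over the console, MaxL provides equivalent replay statements; a hedged sketch (the date format and exact grammar should be checked against the MaxL reference of your release):

alter database Sample.Basic replay transactions after '11_20_2013:14:00:00';
alter database Sample.Basic replay transactions using sequence_id_range 1 to 4;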

Tip: After restoring a database from a backup you should always completely replay all logged transactions that were executed after the backup, before executing any new transactions.

It is not only the transaction information itself that needs to be logged and stored in the specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

ARBORPATH/app/appname/dbname/Replay

These files are then used during the replay of a logged transaction. By default, Essbase archives only data load and rules files for client data loads. To specify the type of data to archive when logging transactions, you can use the TRANSACTIONLOGDATALOADARCHIVE command as an additional entry in the Essbase.cfg file. The syntax for the statement is:

TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

The [appname [dbname]] argument behaves just as described above for TRANSACTIONLOGLOCATION. The valid values for the OPTION argument are CLIENT (the default), SERVER, SERVER_CLIENT and NONE.

Choose the setting according to the locations from which transactions usually take place. Selecting the NONE option prevents Essbase from saving the respective files, and the data load cannot be replayed; in this case you must first manually load the data before you can replay the remaining transactions.
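A hedged configuration sketch that archives both server and client data load files for Sample.Basic (placed in Essbase.cfg next to the log location entry):

TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT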

Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

You can find more detailed information in the Oracle Hyperion Enterprise Performance Management System documentation, or on the Oracle Technology Network.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com.

About the Author:

Bernhard Kinkel

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

Disclaimer:

All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

Tuesday Jun 14, 2011

New ways for backup, recovery and restore of Essbase Block Storage databases – part 1 by Bernhard Kinkel

Backing up databases and providing the necessary files and information for a potential recovery or restore is crucial in today’s working environments. I will therefore present the interesting new options that Essbase provides for this, starting from version 11, and, related to this, a powerful data export option using Calc Scripts, which has been available since release 9.3.

Let’s start with the last point: if you wanted to back up just the data from your database, you could formerly use the Export utility that Essbase provides as an item in the database right-click menu in the Administration Services Console. This feature is still available, supporting both Block Storage (BSO) and Aggregate Storage (ASO) databases. But some usability limitations exist: for example, the focus of the export can be set only to Level0, Input Level or All data (the last two options are available only for BSO); more detailed definitions are not possible. Also, the ASCII format causes the export files to become rather large, maybe even larger than your Page and Index files.

Anyway, importing these files is quite simple, as this export can be (re)loaded without any load rule, as long as the outline structure is the same – even if the database resides on another server. Modifications are also possible by using load rules in combination with an export file in column format.

But now the way to export data using a Calc Script promises more flexibility, smaller files and faster performance. However, this option is available only for BSO, as ASO cubes do not leverage Calc Scripts.

For example, in order to focus on even very detailed subsets of data, as is usual in Calc Scripts, you can take advantage of common commands like FIX | ENDFIX and EXCLUDE | ENDEXCLUDE. In addition, the new SET DATAEXPORTOPTIONS command provides more options to refine export content, formatting and processing, including the possibility to export dynamically calculated values. You can also request statistics and an estimate of the export time before actually exporting the data. The following syntax gives you an overview of the available settings:

SET DATAEXPORTOPTIONS
{
DataExportLevel ALL | LEVEL0 | INPUT;
DataExportDynamicCalc ON | OFF;
DataExportNonExistingBlocks ON | OFF;
DataExportDecimal n;
DataExportPrecision n;
DataExportColFormat ON | OFF;
DataExportColHeader dimensionName;
DataExportDimHeader ON | OFF;
DataExportRelationalFile ON | OFF;
DataExportOverwriteFile ON | OFF;
DataExportDryRun ON | OFF;
}

Looking at most of these options will probably already give you an idea on their use and functionality. For more detailed information about the SET DATAEXPORTOPTIONS command options, please see the available Oracle Essbase Online Documentation (rel. 11.1.2.1) or the Enterprise Performance Management System Documentation (including previous releases) on the Oracle Technology Network.
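As a quick illustration before turning to the binary variant, here is a hedged sketch of a focused, comma-delimited Level-0 text export (the member names are assumptions borrowed from the familiar Sample.Basic outline):

SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportColFormat ON;
}
FIX ("Actual", "Jan")
DATAEXPORT "File" "," "c:\Export\Jan_actuals.txt";
ENDFIX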

My example, however, focuses on the binary export and import, as these provide faster export and load performance than export and import with ASCII files. Thus, in the first section of my script I use only two of the data export options, in order to export all data and to overwrite any existing old export file. The subsequent syntax for the binary export itself is DATAEXPORT "Binfile" "fileName", where "Binfile" is the required keyword and "fileName" is the full pathname for the exported binary file. So the complete script reads:

SET DATAEXPORTOPTIONS
{
DataExportLevel "ALL";
DATAEXPORTOVERWRITEFILE ON;
}
DATAEXPORT "BinFile" "c:\Export\MyDB_expALL.bin";

Tip: Export file names can have more than 8 characters; the extension “.bin” is not mandatory.

The import of the binary file via a Calc Script uses the command DATAIMPORTBIN fileName;. In order to avoid importing a wrong file, or importing into a wrong database, each export file includes an outline timestamp, which the import checks by default. Should this check need to be bypassed, the command SET DATAIMPORTIGNORETIMESTAMP ON; can be placed before the DATAIMPORTBIN line. The import definition for the preceding export could look like the following:

SET DATAIMPORTIGNORETIMESTAMP ON;
DATAIMPORTBIN "c:\Export\MyDB_expALL.bin";

After this rather new option for data export and import, let’s turn to the new backup and restore option for complete databases, provided in the Administration Services Console starting with release 11. Alongside, or instead of, the strategies and methods commonly used before (like running a third-party backup utility while the database is in read-only mode), this new feature provides an easy, ad-hoc way to archive a database.

Select the Archive Database item from the right-click menu on the database node, and in the subsequent window define the full path and name for the archive file; the extension “.arc” is a recommendation from Oracle, but not mandatory.


The process could optionally be run as a background process, while Force archive would overwrite an existing file with the same name.
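The same archive can also be scripted in MaxL; a hedged sketch (the file path is a placeholder, and ‘force’ overwrites an existing archive of the same name):

alter database Sample.Basic force archive to file 'd:/backup/sample_basic.arc';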

After starting the archive procedure, the database is set to read-only mode and a copy of the following files will be written to the archive file:


After this the database returns to read-write mode. However, not all files are backed up automatically by this procedure. The following table lists the files and file types that you would need to back up manually:


Tip: Also make a backup of the file essbase.bak_startup, which is created after each successful start of the Essbase server (formerly this file was named just essbase.bak), as well as of the essbase.bak file itself, which now has a different function: while essbase.bak_startup is created only at server start and does not change until the next successful start, essbase.bak can be compared to the security file and updated manually, or by a MaxL command, at any time. For a manual update in the Administration Services Console, right-click Security under the respective Essbase server and select Update security backup file.

In MaxL, run the command alter system sync security backup. Security files and the CFG file reside in the ARBORPATH\bin directory, where you installed Essbase.

As the Archive option by default creates one large file, you have to make sure that the file system you save your archive files to supports large files (e.g. NTFS on Windows). If you need smaller files, Essbase can be configured to create multiple files no larger than 2 GB by adding the entry SPLITARCHIVEFILE TRUE to the essbase.cfg file.

Restoring an archived database works as simply as the backup itself. First make sure that the database to be restored is stopped. Then select Restore Database from the right-click menu and provide the required information about the archive file to be imported, including the full path.


If the backed-up database used disk volumes, select Advanced. The database can be restored to the same disk volumes without any further definitions, or you can define a new mapping for the volume names (e.g. “C” could be replaced by “F”), but you can change neither the number of volumes nor the space used on each volume compared to the original backed-up database. Select restore in the background if desired, and click OK. The restore is performed and confirmed in the Messages panel.

Tip: Usually you would restore the same database that was previously backed up, but this doesn’t necessarily have to be the case. You can also use the restore feature to create a copy of your database (excluding the files mentioned above, which are not included in the archive file) or to overwrite another database. In both cases the “target” database must already exist. From this target database select the Restore Database feature, but make sure Force Restore is checked in the Restore Database dialog box.

Depending on the frequency of your archiving cycles, the latest backup may not reflect the actual latest state of your database: following the backup, you might, for example, have run Dimension Build Rules or Data Load Rules, data loads from client interfaces, or calculations. These would not be reflected in the restored database. In this case the new Transaction Logging and Replay option provides a good way to capture and replay post-backup transactions, so that a backed-up database can be recovered to the most recent state before an interruption occurred. This feature will be described in the second part of this article, coming later this year.

Or – if you can’t wait – maybe you should learn how to use it as well as other important administration topics in our Essbase for System Administrators class; please refer also to the links provided below.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com.



Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

Disclaimer:


All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.