Wednesday Feb 27, 2013

Transporting a Single Partition of a Table


A feature of the Oracle database that appeared in 11gR1 but seems to have gone largely unnoticed is the ability to transport single (or multiple) partitions of a table to another database. This is particularly useful when archiving out older data held in older partitions of a table.

In previous releases of the Oracle database it was only possible to transport entire tables, which meant every partition in that table. Therefore, the standard methodology used to transport a single partition was to perform a partition exchange at the source database, making the partition a standalone table, and then to transport this table using the normal Transportable Tablespace method to the target database. The result is a standalone table in the target database. The Oracle 11.2 documentation still states that you cannot use the Transportable Tablespace feature to transport a single partition of a table; however, we are going to use a different technique, known as ‘datafile copy’, to do exactly that.
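For reference, the partition-exchange step of that traditional approach looks roughly like the sketch below, using the ORDERS_PART table created in the worked example that follows; the exchange table name is hypothetical, and such a table must already exist with an identical structure:

SQL> -- Sketch of the traditional approach: swap the partition with an empty
SQL> -- standalone table, then transport that table's tablespace with TTS.
SQL> ALTER TABLE oe.orders_part EXCHANGE PARTITION h1_2004
  2  WITH TABLE oe.orders_h1_2004_xchg;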

The datafile copy technique is actually part of Data Pump. In a similar way to transporting tablespaces, Data Pump is used to export out the metadata describing the objects to be copied to the new database. The datafiles that contain these objects are then copied to the new database, and Data Pump is used to import the metadata. These objects can now be seen by the new database.

This sounds exactly like transporting a tablespace, but there is a difference. In the datafile copy technique we might only be interested in a subset of what is in the datafiles, not the entire contents. This is achieved by invoking Data Pump export in the table mode, rather than tablespace mode. The datafile then effectively becomes an alternative to a dumpfile containing the exported data.

If the only contents of the tablespace in question are a single partition (or subpartition) of a table, then the final result appears to be identical to that achieved using the partition exchange and transportable tablespace technique traditionally used.

Now let's look at a worked example.
First create a tablespace TRANS to hold the partition to be transported, then create a partitioned table whose two 2004 partitions are stored in TRANS.
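A minimal sketch of the tablespace creation would be the following (the datafile name matches the file referenced later in the export output, but the size is an assumption):

SQL> CREATE TABLESPACE trans
  2  DATAFILE 'd:\oradata\trans.dbf' SIZE 100M;

Tablespace created.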

SQL> CREATE TABLE "OE"."ORDERS_PART"
( "ORDER_ID" NUMBER(12),
"ORDER_DATE" TIMESTAMP(6) WITH local TIME ZONE CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6) CONSTRAINT "ORDER_PART_CUST_ID_NN" NOT NULL ,
"ORDER_STATUS" NUMBER(2),
"ORDER_TOTAL" NUMBER(8, 2),
"SALES_REP_ID" NUMBER(6),
"PROMOTION_ID" NUMBER(6))
TABLESPACE "EXAMPLE"
PARTITION BY RANGE ("ORDER_DATE")
(PARTITION "H1_2004" VALUES LESS THAN (TIMESTAMP'2004-07-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H2_2004" VALUES LESS THAN (TIMESTAMP'2005-01-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H1_2005" VALUES LESS THAN (TIMESTAMP'2005-07-01 00:00:00 +0:00'),
PARTITION "H2_2005" VALUES LESS THAN (TIMESTAMP'2006-01-01 00:00:00 +0:00'),
PARTITION "H1_2006" VALUES LESS THAN (TIMESTAMP'2006-07-01 00:00:00 +0:00'),
PARTITION "H2_2006" VALUES LESS THAN (MAXVALUE));
Table created

And then populate it:

SQL> insert into oe.orders_part select * from oe.orders;
105 rows created.
SQL> commit;

Look at the segments that are contained inside this tablespace:

SQL> select owner, segment_name, partition_name from dba_segments where tablespace_name='TRANS';

OWNER    SEGMENT_NAME          PARTITION_NAME
-------- --------------------- --------------
OE       ORDERS_PART           H1_2004
OE       ORDERS_PART           H2_2004

Just as when transporting tablespaces, any tablespace whose datafiles we want to copy must first be made read-only.

SQL> alter tablespace trans read only;
Tablespace altered

Now the Data Pump export can be performed. The ‘tables’ parameter indicates which partition we want to export, and ‘transportable=always’ indicates that the generated dumpfile is to contain only the table/partition metadata, since the datafiles containing the segments involved will be copied to the new database.

expdp system/oracle tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpfiles=y

Export: Release 11.2.0.2.0 - Production on Thu Feb 21 16:20:31 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpf
s=y
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DATAPUMP\ORDERSH1_2004.DMP
******************************************************************************
Datafiles required for transportable tablespace TRANS:
  D:\ORADATA\TRANS.DBF
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:20:53

Now the datafile is copied to its new location and the TRANS tablespace is made read-write again.

copy d:\oradata\trans.dbf d:\oradata\trans.cpy
        1 file(s) copied.

(If necessary, RMAN can now be invoked to convert the datafile to a different platform format.)
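In this example both databases are on the same platform, so no conversion is needed; if the target were a different platform (say, Linux), a conversion run on the destination host after copying the file there might look like the following sketch, where the staging path, output name, and platform string are assumptions for illustration:

RMAN> CONVERT DATAFILE '/u01/stage/trans.cpy'
2>   FROM PLATFORM 'Microsoft Windows x86 64-bit'
2>   FORMAT '/u01/oradata/trans_converted.dbf';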

SQL> alter tablespace trans read write;

Tablespace altered.

Now move over to the second database and import the partition’s metadata into the database. Use the option ‘partition_options=departition’, which will import the partition as a standalone table. In this example the schema owner is also being changed.

impdp system/oracle partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/oradata/trans.cpy' remap_schema=OE:SH

Import: Release 11.2.0.2.0 - Production on Thu Feb 21 16:21:28 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/
data/trans.cpy' remap_schema=OE:SH
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:21:34

Look for the newly imported table:

SQL> select table_name, tablespace_name  from dba_tables
  2  where table_name like 'ORDERS%';

TABLE_NAME			TABLESPACE_NAME
------------------------------ --------------------------------------
ORDERS_PART_H1_2004		TRANS
ORDERS_QUEUETABLE 		EXAMPLE
ORDERS				EXAMPLE
ORDERS				EXAMPLE

Its new name is the name of the original table suffixed with the name of the partition. If this were the only segment originally stored in this datafile we would now be finished, but although this partition is the only segment whose metadata was imported into this database, there are other segments in the datafile that this 'new' database does not know anything about.

SQL> select owner, segment_name, partition_name from dba_segments where tablespace_name='TRANS';

OWNER    SEGMENT_NAME          PARTITION_NAME
-------- --------------------- --------------
SH       ORDERS_PART_H1_2004

So move the new table to a different tablespace, enabling the TRANS tablespace to be dropped.

SQL> alter table sh.orders_part_h1_2004 move tablespace example;

Table altered.

SQL> drop tablespace trans;

Tablespace dropped.

Datafile copying as a means to transport partitions is one of the many features covered in the Oracle Database 11g: Implement Partitioning course.

About the Author:

Gwen Lazenby



Gwen Lazenby is Principal Training Consultant at Oracle. She is part of the UK and IE Core Tech Delivery team, teaching database administration, Spatial and Linux courses. Gwen is also a member of the OU EMEA SME team, with a special responsibility for Oracle Spatial.

Wednesday Apr 18, 2012

Setting a simple high availability configuration for an Oracle Unified Directory Topology

Oracle Unified Directory is Oracle's latest, fully Java-based implementation of an LDAP directory.

It offers several improvements over earlier LDAP directory offerings, such as faster read/write operations, better handling of high data volumes, easier scaling, and built-in replication and proxy capabilities.

In this post we will explore some of the replication features of Oracle Unified Directory by providing a simple setup for high availability of an OUD topology.

Production OUD topologies need to offer a continuous, synchronized, and highly available flow of the business data managed and hosted on the OUD LDAP stores. This is achieved by combining OUD LDAP data nodes with specific OUD instances acting as replication nodes. A cluster of OUD replication nodes, associated with the OUD LDAP data stores, offers a high degree of availability for the data hosted on the OUD LDAP directories.

A replication node is an Oracle Unified Directory instance that is used only to synchronize data (read/write operations) between several dedicated OUD LDAP stores. A replication node can be set up in the same JVM that hosts an OUD LDAP store, or it can run as a dedicated JVM process that handles only replication (the approach we take in this demo).


For our demo, we will create a simple 4-node topology on the same physical host.

Two OUD data nodes (each node is a separate JVM process), both of them active, will handle the user data, and two OUD replication nodes (also separate JVM processes) will handle the synchronization of any modification applied to the data on node1 or node2.

As a best practice, it is good to have at least two separate replication nodes in the topology, although we can start the replication scenario with only one replication node and then add an additional instance later.

With at least two replication nodes in our system, we ensure that any operation on the data nodes of our LDAP system (node1/node2) will be propagated to the other LDAP nodes even if one of the replication nodes fails.

The whole scenario can be run on a single physical host (or in a VirtualBox or VMware virtual machine) without any significant overhead. Tuning and monitoring operations will be discussed in other posts.



Let's start by creating the two LDAP nodes (node1 and node2) which will hold our data.

This is done by executing the oud-setup script in graphical mode:


Creation of the first node: node1 listens on port 1389, and 4444 is its admin port.


Node1 is a standalone LDAP server.


The server will manage the directory tree dc=example,dc=com, which is a sample suffix. For this suffix, the wizard will generate 2000 sample entries.


This is the final setup for node1.


At this stage, we already have one instance of our LDAP topology (node1) up and running!
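For reference, the same configuration can also be created non-interactively from the command line. This is only a sketch, assuming the standard oud-setup options; node2 and the replication nodes are created the same way, differing only in their ports and in whether sample data is generated. The root user password shown here is an assumption, chosen to match the bind password used in the dsreplication commands below.

# Hypothetical non-interactive equivalent of the node1 wizard steps
./oud-setup --cli --no-prompt \
  --baseDN dc=example,dc=com --sampleData 2000 \
  --ldapPort 1389 --adminConnectorPort 4444 \
  --rootUserDN "cn=Directory Manager" --rootUserPassword welcome1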

We will continue with the creation of the second OUD LDAP node (node2).


The setup for node2 is nearly identical to node1: the LDAP listen port is 2389, and the admin port is 5444.


For node2, we will create the same directory tree structure, but we will leave the directory database empty. During the synchronization phase (see below), we will provision the directory with data coming from node1.


This is the final setup for node2.


And at this stage we have the second node (node2) up and running!

The creation of the replication nodes is nearly the same process as for the previous LDAP nodes. We will create two OUD instances with the configuration wizard, and then we will set up the replication process as an additional step using the dsreplication command.


The first replication node will listen on port 3389; its admin port will be 6444.


The second replication node will listen on port 4389; its admin port will be 7444.

At this stage we have four OUD instances running on our system: two LDAP nodes (node1, node2) holding the business data, and two replication nodes. Node1 is already populated with data; node2 will be provisioned during the setup of the replication between it and node1.

Now let's set up the replication process between node1 and the first replication OUD server by executing the following dsreplication command. Node1 is the LDAP data node, and the first replication node will hold only the replication information (it will not hold any LDAP data).


dsreplication enable \
  --host1 localhost --port1 4444 --bindDN1 "cn=Directory Manager" \
  --bindPassword1 welcome1 --noReplicationServer1 \
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \
  --bindPassword2 welcome1 --onlyReplicationServer2 \
  --replicationPort2 8989 --adminUID admin --adminPassword password \
  --baseDN "dc=example,dc=com" -X -n


This is what we should see in our prompt:


Then we should associate the second replication node with node1.

The only parameters we have to change from the previous command are the admin port and the replication port of the second replication node:


dsreplication enable \
  --host1 localhost --port1 4444 --bindDN1 "cn=Directory Manager" \
  --bindPassword1 welcome1 --noReplicationServer1 \
  --host2 localhost --port2 7444 --bindDN2 "cn=Directory Manager" \
  --bindPassword2 welcome1 --onlyReplicationServer2 \
  --replicationPort2 9989 --adminUID admin --adminPassword password \
  --baseDN "dc=example,dc=com" -X -n



We now execute the same command against node2 in order to associate this node with the first replication node:


dsreplication enable \
  --host1 localhost --port1 5444 --bindDN1 "cn=Directory Manager" \
  --bindPassword1 welcome1 --noReplicationServer1 \
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \
  --bindPassword2 welcome1 --onlyReplicationServer2 \
  --replicationPort2 8989 --adminUID admin --adminPassword password \
  --baseDN "dc=example,dc=com" -X -n



At this stage we have associated node1 and node2 with the two replication nodes.

Before starting our operations (read/write) on node1 and node2, we have to initialize the replication topology with the following command:


dsreplication initialize-all --hostname localhost --port 4444 \
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password


Here is what we should see:


As you will notice, node2 is fully and successfully provisioned with the data from node1!

Now we can monitor our configuration by executing the following command:


dsreplication status --hostname localhost --port 4444 \
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password



To test our replication configuration, you can use any LDAP client: connect to the first instance, modify one or several entries, then connect to the second instance and check that your modifications have been applied.
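The same check can be scripted with the LDAP command-line tools shipped with OUD. This is only a sketch under assumptions: the entry uid=user.0 comes from the generated sample data, and the bind password is the one assumed earlier.

# Hypothetical test: modify an entry on node1 ...
./ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w welcome1 <<EOF
dn: uid=user.0,ou=People,dc=example,dc=com
changetype: modify
replace: description
description: updated on node1
EOF

# ... then read it back from node2 to confirm the change was replicated
./ldapsearch -h localhost -p 2389 -D "cn=Directory Manager" -w welcome1 \
  -b "dc=example,dc=com" "(uid=user.0)" description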

For additional training see: Oracle Unified Directory 11g: Services Deployment Essentials

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with telco companies such as AT&T, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualisation, Unified Communications Suite) throughout the EMEA region.
