
Master Note for Streams Downstream Capture - 10g and 11g [Video] (Doc ID 1264598.1)

Copyright (c) 2010, Oracle Corporation. All Rights Reserved.

In this Document
  Purpose
  Scope and Application
  Master Note for Streams Downstream Capture - 10g and 11g [Video]
     Downstream Capture
     Transport Considerations in Downstream Capture
     Instantiation - Implications for Primary Database
     Recommended Parameter Settings
  Troubleshooting
     Performance Issues
     Ongoing Streams Related Activities on the Primary Database
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2 - Release: 10.1 to 11.2
Information in this document applies to any platform.

Purpose

This note provides background and troubleshooting information applicable to a Streams downstream capture environment. Its main purpose as a master note is to bring together reference material in a form which allows a clear understanding of what is required in the configuration, as well as where to look when there are issues. Its focus is setup and understanding the relevant parameter settings which are required.

Scope and Application

This document is intended for anyone with an interest in understanding Streams downstream capture. Where script details are referenced, they are provided for reference by DBAs and application developers. They will likely require modification, since the examples presented are for illustration purposes only. To gain better insight into how the examples work, please ensure you have read and understood the information below.

Whilst every care has been taken to ensure that the details are correct and accurate, code should be thoroughly tested to verify it is fit for its intended purpose.

Please note, the approach adopted in relation to this knowledge has been to improve Oracle's product documentation. Thus, relevant content in relation to downstream capture is now found in the primary reference guides for Streams; where possible, please refer to the latest documentation:

- Streams Concepts and Administration;
- Streams Replication Administrator's Guide

A useful reference for downstream capture specific details is found in:

Streams Configuration Best Practices; this was created with respect to 10.2 but is applicable to all later versions.

Further questions in relation to this subject may be answered in:

FAQ on Downstreams Capture, Note:394575.1.

Master Note for Streams Downstream Capture - 10g and 11g [Video] 

Downstream Capture


Downstream capture is a configuration in which the capture process runs on a database other than the source database. The source database has to be in archive log mode, since the key aspect of downstream capture is offloading the burden of processing the redo generated on the primary to the downstream database.

The following types of downstream capture configurations are possible: real-time downstream capture and archived-log downstream capture. The main consideration for a real-time downstream capture process is that it requires standby redo logs on the downstream database.

Requirements

  • the source database must be running at least Oracle Database 10g and the downstream capture database must be running the same release of Oracle as the source database or later;
  • the downstream database must be running Oracle Database 10g Release 2 to configure real-time downstream capture. In this case, the source database must be running Oracle Database 10g Release 1 or later;
  • the operating system on the source and downstream capture sites must be the same, but the operating system release does not need to be the same. However, the downstream sites can use a different directory structure from the source site;
  • the hardware architecture on the source and downstream capture sites must be the same. For example, a downstream capture configuration with a source database on a 32-bit Sun system must have a downstream database that is configured on a 32-bit Sun system. Other hardware elements, such as the number of CPUs, memory size, and storage configuration, can be different between the source and downstream sites.

Advantages

  • capturing changes does not happen at the source database, thus the overhead of supporting Streams on the source database is greatly reduced;
  • capture process administration is greatly simplified, as you can locate capture processes with different source sites on the same database;
  • provides additional protection against database failure and corruption, as the redo logs are shipped to a downstream database.  
Note : Where multiple capture processes capture changes from a single source database, the resulting LCRs from these capture processes should never be staged in the same queue if that queue is used by an apply process that applies any of these changes. Instead, a separate queue and apply process should be used to apply the changes from each capture process.

 

Real-Time Downstream Capture


In a real-time downstream capture environment, the primary database is configured so that the transport services use the log writer process (LGWR) at the source database to send redo data to the downstream database either synchronously or asynchronously. At the same time, the LGWR records redo data in the online redo log at the source database.

A remote file server process (RFS) at the downstream database receives the redo data over the network and stores the redo data in the standby redo logs.

A log switch at the source database causes a log switch at the downstream database, and the ARCn process at the downstream database archives the current standby redo log file as necessary.

The real-time downstream capture process captures changes from the standby redo log whenever possible and from the archived standby redo log files whenever necessary. A capture process can capture changes in the archived standby redo log files if it falls behind. When it catches up, it resumes capturing changes from the standby redo log.
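As a quick sanity check (a sketch only; it assumes standby redo logs and a capture process are already configured), the state of the standby redo logs and of the capture process can be observed at the downstream database:


-- Standby redo logs receiving redo from the primary
SELECT group#, thread#, sequence#, status
FROM   v$standby_log;

-- Current state of the capture process
-- (e.g. CAPTURING CHANGES, WAITING FOR REDO)
SELECT capture_name, state
FROM   v$streams_capture;
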

Further detail on configuring real-time downstream capture, with consideration for RAC source databases, is found in: How To Configure Streams Real-Time Downstream Environment, Note:753158.1.

Please note, configuration of real-time downstream capture differs from archived-log downstream capture configuration in the following ways:

  • log_archive_* parameters are different; refer below for example parameter definitions;
  • the downstream capture database must be in archive log mode;
  • standby redo logs must be created in the downstream capture database;
  • the capture process must be altered to mine the standby redo logs in real time, as illustrated below.
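A minimal sketch of the last two steps, assuming a downstream database strm2 and a capture process named strm_capture (both names, the log file path and size are illustrative; standby log groups should be sized to match the source online redo logs, with one extra group):


-- At the downstream database: add standby redo logs
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/u01/oradata/strm2/slog4.rdo') SIZE 52428800;

-- Switch the capture process to mine the standby redo logs in real time
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');
END;
/
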
Archived-Log Downstream Capture

The primary database is configured so that the transport services use the archiver process (ARCn) at the source database. Archived logs from the source database are shipped to the downstream database and the capture process captures the relevant changes.

Multiple capture processes can be configured in the downstream database, and these capture processes can capture changes from the archived redo log files of multiple source databases.

Further details on how to set up archived-log downstream capture are found in: How To Setup Schema Level Streams Replication with a Downstream Capture Process with Implicit Log Assignment, Note:733691.1.

Video - Demonstration of Archive Log Downstream Capture based on Note:733691.1 (05:00)

Transport Considerations in Downstream Capture


A fundamental requirement is that the source can ship redo to the destination database. The SYS user on the source database must be able to connect AS SYSDBA at the downstream database. This requires configuration of the Secure Sockets Layer (SSL) protocol or, more commonly, the addition of a remote login password file at the downstream database. If the source database has a remote login password file, copy it to the appropriate directory on the downstream capture database system.

- configure archive log mode on the source; the downstream database does not need to be running in archive log mode if it is running archived-log downstream capture;
- configure Oracle Net (tnsnames.ora) entries to allow connectivity between the databases;
- create a password file to ensure that user SYS on the source can connect as SYS on the downstream capture database. The following example creates password files for both databases; however, it is the downstream database which requires the password file.


# From the operating system on each database server
cd $ORACLE_HOME/dbs
orapwd file=orapwstrm1 password=oracle entries=20
cp orapwstrm1 orapwstrm2

-- From SQL*Plus, verify the password file is in use
show parameter remote_login_passwordfile
-- should show EXCLUSIVE (non-RAC) or SHARED (RAC);
-- shutdown / startup if the parameter had to be changed


Following the above, you should be able to connect from the source to the destination as follows:


connect sys/oracle@strm1.net as sysdba
select * from global_name;
-- Following is the important connection requirement
connect sys/oracle@strm2.net as sysdba
select * from global_name;



For more details refer to the Streams documentation. The main streams activities and their significance in a downstream capture environment are outlined below.

Instantiation - Implications for Primary Database


Recall that instantiation in a Streams environment involves :

  • preparing the object for instantiation at the source database;
  • ensuring a copy of the object exists at the destination database;
  • setting the instantiation SCN for the database object at the destination (apply) database. The instantiation SCN is an SCN associated with the source database from which the apply process will apply captured changes to the object.


Although the capture process is running on a different database, there are still implications for the primary database which need to be understood so that the downstream capture environment can be created successfully. These details are discussed below.

The examples detailed in Note:753158.1 and Note:733691.1 do not separate the build operation from the creation of the capture process. Furthermore, they do not refer to a flashback SCN on the primary database corresponding to the first_scn returned from the build - refer to 'Create Capture' below.

Notes :
- any export of data from the primary should always be made as a consistent export;
- if there is little or no change on the primary database, the referenced notes serve as a reasonable guide. However, there can potentially be a gap between the changes captured and the point at which the Data Pump export was performed, which may result in confusing ORA-01403 errors after setup.

Streams can be configured so that the build activity is separated from the create capture operation. When performed as a separate operation, the build returns a first SCN which can be used in subsequent configuration activities:

  • when the capture process is created on the downstream database it should refer to the first SCN returned from the build;
  • if Data Pump export is used to move the data to the downstream database, a flashback_scn corresponding to the first SCN from the build should be used (see the example below);
  • when setting the instantiation SCN, this should again refer to the first SCN returned from the build.
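As a hedged illustration of the second point (the schema, directory, dump file name and the SCN value 4711002 are all placeholders), a Data Pump export taken consistent with the first SCN returned by the build might look like:


expdp system/manager schemas=SCOTT directory=DPUMP_DIR \
      dumpfile=scott_inst.dmp flashback_scn=4711002
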


When the above considerations are met, the data on both databases as well as the relevant capture and apply processes are in sync when the processes are started.
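For illustration (process names are placeholders), the processes can then be started with the standard Streams calls; it is common practice to start the apply process before the capture process:


-- At the downstream database, once instantiation is complete
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'strm_apply');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'strm_capture');
END;
/
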

Create Capture

In a downstream capture environment, the capture process has to be created separately from the add_{table|schema|global}_rules call. When a database link to the primary is specified, the create_capture operation will internally generate a copy of the data dictionary at the primary in the redo stream. Additionally, it creates the capture process referring to the first SCN returned from the primary.

A build operation can be performed on the primary as a separate operation using :


set serveroutput on
-- Run at the primary (source) database
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
  COMMIT;
END;
/


The first_scn returned should then be used as the first_scn parameter of the create_capture call.
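A hedged sketch of such a create_capture call, using illustrative queue and capture names, the strm2-side database link to the primary from the examples above, and a placeholder first_scn of 4711002:


-- At the downstream database, as the Streams administrator
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'strmadmin.downstream_q',
    capture_name       => 'strm_capture',
    rule_set_name      => NULL,
    source_database    => 'STRM1.NET',
    use_database_link  => TRUE,
    first_scn          => 4711002,   -- value returned by DBMS_CAPTURE_ADM.BUILD
    logfile_assignment => 'implicit');
END;
/
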

Prepare {table|schema|global} instantiation

When run at the downstream database (over a database link to the primary), this results in supplemental logging being enabled for the related objects at the primary database.

If there is no database link to the primary, perform the prepare at the primary, or manually add the necessary supplemental logging at the primary database for the related objects.
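For example (schema name illustrative; 'keys' is the default supplemental logging level), the prepare can be run directly at the primary as follows:


BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name          => 'SCOTT',
    supplemental_logging => 'keys');
END;
/
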

Add {table|schema|global} rules

This operation also performs a prepare (refer above) at the primary database for the related objects and configures the relevant rules at the downstream database, so that the related object(s) are processed by the relevant process.
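A sketch of a schema-level call at the downstream database, reusing the illustrative names from the create_capture example above:


BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'capture',
    streams_name    => 'strm_capture',
    queue_name      => 'strmadmin.downstream_q',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'STRM1.NET');
END;
/
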

Set {table|schema|global} instantiation scn

This operation sets the {table|schema|global} instantiation SCN for the apply process. This is the SCN from which changes will be applied.
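For example (names and the SCN are placeholders; the SCN should correspond to the first SCN from the build / the flashback_scn used for the export):


-- At the downstream (apply) database
BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'SCOTT',
    source_database_name => 'STRM1.NET',
    instantiation_scn    => 4711002,
    recursive            => TRUE);  -- also sets table-level SCNs; uses a
                                    -- database link back to the source
END;
/
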

An export / import does not set the instantiation SCN for user objects. Therefore, this is also a required action if you use export / import to move objects from one database to another. Once you have performed this action, verify that instantiation SCNs are set for the schema and tables where appropriate as follows before starting the apply and capture processes.

connect strmadmin/strmadmin
select * from dba_apply_instantiated_global;
select * from dba_apply_instantiated_schemas;
select * from dba_apply_instantiated_objects;

 

Note : for individual tables, an instantiation SCN must be present for each object. Similarly, for a schema, a schema instantiation SCN must exist.

 

Recommended Parameter Settings

The main issue which may be encountered relates to log transport, i.e. redo not appearing at the downstream database. Before discussing what to look at and where, consider the relevant parameters which should be examined. Understanding the related parameters and their values is important in ensuring that the setup is free from significant issues. The following outlines which parameters are required.

Notes :
- it is assumed in the following that log_archive_dest_1 is used as the archive log location and both databases are in archive log mode;
- the log_archive_* parameter settings have their meaning defined in the Oracle Database Reference guide;
- directory specification values are for illustrative purposes only
 

Real-Time Downstream Capture Parameter Values

The following serves as a guide to the parameter requirements when setting up real-time downstream capture. For further details consult the Streams Concepts and Administration Guide.

Primary database
alter system set log_archive_dest_1 = 'location=/bugmnt12/em/celclnx7/SR3.2274614446/user/strm1 mandatory reopen=5' scope=spfile
/
alter system set log_archive_dest_state_1 = enable scope=spfile
/
alter system set log_archive_format = 'strm1_%t_%s_%r.dbf' scope=spfile
/
alter system set log_archive_dest_2 = 'service=strm2.net lgwr async noregister valid_for=(online_logfile,all_roles) db_unique_name=strm2' scope=spfile -- Note 1
/
alter system set log_archive_dest_state_2 = enable scope=spfile -- Note 2
/
alter system set log_archive_config='send,dg_config=(strm1,strm2)' scope=both -- Note 3
/

 

Note 1 - the public database link for the remote database is strm2.net; with 'lgwr async' the log writer process writes to the remote database asynchronously - if you use 'lgwr sync' you are writing through the network to the remote database, which may have an effect on commit time at the source; 'noregister' means logs are not registered in the control file, as there is no need to do this; valid_for should be defined as detailed; db_unique_name indicates the unique name of the remote database (show parameter db_unique_name);

Note 2 - this can be set to 'defer' until the setup is complete, i.e. initially there is no need to ship the redo information; once the setup is complete, the parameter can be set to 'enable' to activate redo transport;

Note 3 - this details the unique names involved; again, issue show parameter db_unique_name on both databases; 'send' indicates that this is the sending database.
 
Downstream Database 
alter system set log_archive_dest_1 = 'location=/bugmnt12/em/celclnx7/SR3.2274614446/user/strm2 mandatory reopen=5 valid_for=(online_logfile, primary_role)' scope=spfile
/
alter system set log_archive_dest_state_1 = enable scope=spfile
/
alter system set log_archive_format = 'strm2_%t_%s_%r.dbf' scope=spfile
/
alter system set log_archive_dest_2 = 'location=/bugmnt12/em/celclnx7/SR3.2274614446/user/standby mandatory valid_for=(standby_logfile, primary_role)' scope=spfile -- note 4
/
alter system set log_archive_dest_state_2 = enable scope=spfile
/
alter system set log_archive_config='receive,dg_config=(strm1,strm2)' scope=both -- note 5
/

 

Note 4 - location where the related archive log files from the primary are stored. Logs will initially be written to the standby redo logs and then written out to this location in much the same way as normal archival of redo;

Note 5 - this details the unique names involved; again, issue show parameter db_unique_name on both databases; 'receive' indicates that this is the receiving database.
 

Archived-Log Downstream Capture Parameter Values


The following serves as a guide to the parameter requirements when setting up archived-log downstream capture. For further details consult the Streams Concepts and Administration Guide.

Primary database
 
alter system set log_archive_dest_1 = 'location=/bugmnt12/em/celclnx7/SR3.2274614446/user/strm1 mandatory reopen=5' scope=spfile
/
alter system set log_archive_dest_state_1 = enable scope=spfile
/
alter system set log_archive_format = 'strm1_%t_%s_%r.dbf' scope=spfile
/
alter system set log_archive_dest_2 = 'service=strm2.net arch async noregister valid_for=(online_logfile,all_roles) template=/bugmnt12/em/celclnx7/SR3.2274614446/user/logsfromstrm1/strm1_%t_%s_%r.dbf db_unique_name=strm2' scope=spfile -- Note 6
/
alter system set log_archive_dest_state_2 = enable scope=spfile -- Note 7
/
alter system set log_archive_config='send,dg_config=(strm1,strm2)' scope=both -- Note 8
/

 

Note 6 - the public database link for the remote database is strm2.net; 'arch async' indicates the archiver process (ARCn) will write to the remote database asynchronously; 'noregister' means logs are not registered in the control file, as there is no need to do this; valid_for should be defined as detailed; 'template=' details the location on the remote machine where the transferred logs should be placed; db_unique_name indicates the unique name of the remote database (show parameter db_unique_name);

Note 7 - as per Note 2 above;

Note 8 - this details the unique names involved; again, issue show parameter db_unique_name on both databases; 'send' indicates that this is the sending database.
 
Downstream Database  
alter system set log_archive_dest_1 = 'location=/bugmnt12/em/celclnx7/SR3.2274614446/user/strm2 mandatory reopen=5 valid_for=(online_logfile, primary_role)' scope=spfile
/
alter system set log_archive_dest_state_1 = enable scope=spfile
/
alter system set log_archive_format = 'strm2_%t_%s_%r.dbf' scope=spfile
/
alter system set log_archive_dest_2 = 'location=/bugmnt12/em/celclnx7/SR3.2274614446/user/standby mandatory valid_for=(standby_logfile, primary_role)' scope=spfile -- Note 9
/
alter system set log_archive_dest_state_2 = enable scope=spfile
/
alter system set log_archive_config='receive,dg_config=(strm1,strm2)' scope=both -- Note 10
/

 

Note 9 - 'valid_for=(standby_logfile, primary_role)' is in anticipation of moving to real-time downstream capture. The primary has specified, by the use of 'template=', where the transferred logs will be located on the downstream database.

Note 10 - this details the unique names involved; again, issue show parameter db_unique_name on both databases; 'receive' indicates that this is the receiving database.
 

Troubleshooting


If downstream capture is not working, it is usually the case that the primary database cannot connect to the downstream database; typically, the cause is ORA-01017: invalid username/password; logon denied.

If the remote login password file is not configured, is not the same as that on the primary, or the SYS passwords are different, you will likely encounter ORA-01017.
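Assuming log_archive_dest_2 is the remote destination, as in the examples above, a couple of simple checks (a sketch only) can confirm whether redo is leaving the primary and being registered at the downstream database:


-- At the primary: status / last error for the remote destination
SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;

-- At the downstream: logs registered for the downstream capture process
SELECT source_database, sequence#, first_scn, next_scn, dictionary_begin
FROM   dba_registered_archived_log;
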

Please also refer to the section Troubleshooting Implicit Capture in the 11.2 Streams Concepts and Administration Guide.

Errors

Once the environment is active, the most common type of error likely to be encountered in a Streams environment is data related, typically ORA-01403 (no data found).

For more information, refer to:

Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786, Note:265201.1.


During setup, it may not be obvious what is happening. Please make sure that you review the source alert.log as well as the related log writer or archiver trace files - whichever process is managing redo transport. These will likely point to the root cause of the issue, which will most often be configuration or parameter related.

Tracing Log Transport Issues

If there is still no clear indication as to why redo is not being shipped, log transport tracing can be enabled by setting the LOG_ARCHIVE_TRACE parameter. For more information, refer to the Oracle Database Reference guide for the trace levels.
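For example (the level shown is illustrative; check the Reference guide for the exact meaning of each trace level before using it):


-- At the primary: enable archive log destination tracing
alter system set log_archive_trace = 8;
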

Note : most issues will not require any reference to this parameter. Most problems will have a visible error with an obvious cause and resolution.

Performance Issues

Issues relating to performance in a downstream capture environment should be treated in the same way as issues in a local capture environment. If a capture process is waiting on redo, this suggests that there is a lag in the transport of redo information from the primary; the extent of any lag will typically depend on whether real-time or archived-log downstream capture is used. Other issues should be addressed in the same manner as with a local capture process. A useful starting reference is:

Master Note for Streams Performance Recommendations, Note:335516.1.
 

Ongoing Streams Related Activities on the Primary Database


Once downstream capture is active, there is one piece of relevant maintenance which should be scheduled at the primary database. Recall from the discussion above that when the capture process is created, it refers to a build in the redo which is linked to its first_scn value. If any problem demands that a capture process be dropped and recreated, the new capture process can then refer back to the most recent build SCN; this will be the first_scn of the new capture process.

NOTE: It is recommended to perform periodic build operations on all streams capture environments.
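One possible approach (a sketch only; the job name and interval are illustrative) is to schedule the build at the primary with DBMS_SCHEDULER:


BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'STREAMS_DICT_BUILD',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE scn NUMBER; BEGIN DBMS_CAPTURE_ADM.BUILD(first_scn => scn); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY',
    enabled         => TRUE);
END;
/
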
 

References

NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:394575.1 - FAQ on Downstreams Capture
NOTE:413353.1 - 10.2 Best Practices For Streams in RAC environment
NOTE:418755.1 - Master Note for Streams Recommended Configuration
NOTE:733691.1 - How To Setup Schema Level Streams Replication with a Downstream Capture Process with Implicit Log Assignment
NOTE:753158.1 - How To Configure Streams Real-Time Downstream Environment
NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786
