Ten questions to answer when thinking of logical data replication


Logical database replication may be an eagerly requested "toy" in your organization. The success of new reporting applications, (near) real-time data distribution and processing, and so forth may depend heavily on it - and take it for granted.

This isn't surprising at all: data is supposed to appear and be kept steadily current. Vendors, including Oracle, claim to provide very efficient, near real-time and even heterogeneous tools to achieve this result.

With such a variety of replication solutions available, take the time to evaluate how solid your concept of logical replication (and its implications) is, how ready your organization is to maintain it, and how to make sure nobody will point a finger at you when it breaks.

Go through the following 10 simple questions and have some additional fun before playing with technology!

10 (product-independent) questions to start the right way

- 1 - can you distinguish logical and physical replication?

Surely you can. Logical replication can carry data from tables in database A to even different tables in database B, possibly filtering rows and applying transformations along the way. This is possible because you've been able to define specific rules for each and every transformation, work out conflict resolution according to business rules and define appropriate recovery strategies for the unlikely event of a failure.

Physical replication just carries blocks / pages / allocation units: that's all it needs to know about. Of course, storage-level techniques cannot guarantee near real time; they provide "still pictures".
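
To make the difference tangible, here is a minimal sketch with hypothetical table names: a logical rule can express, in effect, something a block-level copy never could.

    -- Database A holds SALES.ORDERS(order_id, region, amount, ...);
    -- database B holds a differently shaped RPT.ORDERS_EMEA(order_id, amount).
    -- A logical replication rule could express, in effect:
    INSERT INTO rpt.orders_emea (order_id, amount)
    SELECT order_id, amount
    FROM   sales.orders
    WHERE  region = 'EMEA';
    -- Physical replication copies blocks as they are: no renaming,
    -- no row filtering, no transformation.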

- 2 - partial or complete?

Again, we're not talking about a specific product or technology, just logical replication in general. Ask yourself what you need to filter, in two dimensions:

  1. horizontally: all rows or a subset of them, based on possibly changing business requirements?
  2. vertically: which tables, schemas, objects in general?
In both cases, obtain a clear understanding of the requirement and, above all, of how often it will change. You'll rarely end up with an everlasting definition of what has to be replicated. Be aware that business people often express themselves in terms of "all data related to the supply chain" - a definition you'll need to map to actual objects and keep current. Both dimensions are sketched below.
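
A minimal sketch of the two dimensions, with hypothetical objects (this is the intent a replication rule carries, not a query you'd actually run):

    -- Horizontal: only a subset of rows of a replicated table
    SELECT * FROM sales.orders WHERE country_code IN ('IT', 'FR');

    -- Vertical: only a subset of objects, e.g. replicate
    -- SALES.ORDERS and SALES.CUSTOMERS but not SALES.AUDIT_TRAIL.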

- 3 - how efficiently can you identify a row to be updated on the target system?

Logical replication solutions based on the extraction of changes from the redo / transaction logs of databases have the very valuable advantage of being lightweight - at least from the perspective of "mining".
Changes in redo / transaction logs are described with minimal information: should you issue an update statement affecting 1000 rows, you'd find 1000 descriptions of changes in your logs. Remember: 1 statement, 1000 changes.
While it is easy to group 1000 changes extracted from your logs into a single transaction, it's hard to reconstruct an equivalent update operation to apply to the destination system that causes the same change. Products will likely issue 1000 hopefully small updates, say, by key. Keys must exist. For data to be updated in near real time, each small update must be roughly 1000 times faster than the original statement.
The underlying message is that data is logically replicated via automatically generated SQL, and the statements run on database B will likely be very different from those that caused the changes on database A. Having spent years tuning database A won't help. The sketch below shows the effect.
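
A hedged illustration, with made-up table names and a made-up key:

    -- On database A: 1 statement, 1000 rows affected
    UPDATE sales.orders
    SET    status = 'SHIPPED'
    WHERE  batch_id = 42;          -- matches 1000 rows

    -- On database B: the apply side typically issues 1000 of these
    UPDATE sales.orders
    SET    status = 'SHIPPED'
    WHERE  order_id = :order_id;   -- one execution per changed row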

- 4 - are you interested in replicating changes originated by batch processing?

This is an obvious consequence of question number 3. Try to figure out whether the "1000 times" factor described above is really 1000 rather than 10 or 10000. The higher the factor, the more convenient it is to "exclude" batch processing from replication (back to filtering) and run the batch on database B as well, provided you can identify an appropriate time window to do so.
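
A back-of-the-envelope example with made-up numbers: if a batch updates 5,000,000 rows with a single statement in 2 minutes on database A, and each single-row update on database B takes 1 millisecond, the apply side needs about 5,000 seconds (well over an hour) to catch up with those 2 minutes of batch activity - a factor of roughly 40.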

- 5 - are you interested in replicating DDL (changes to structures)?

This makes sense in a 1 to 1 replication scenario. Suppose you are replicating a portion of your CRM database for reporting purposes.
  • The CRM application is likely based on product S and maintained by people PS.
  • The reporting application is a custom development, maintained by people PR.
The project carried on by people PR likely assumes data is replicated somehow and kept current. People PS work independently of PR and will keep applying changes to the CRM database - as usual.
Suppose a new column is added to the "ITEMS" table: people PS will check in advance that there's enough space in the local tablespace to accommodate the data. Omitting the same check on database B - whatever the name of the tablespace you took care to remap - will lead to errors and possibly to SLA violations for the reporting application (see the sketch below). This is just an example; also ask yourself whether the replication rules still apply after the change.
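
A hedged sketch of such a change, with hypothetical names:

    -- On database A (CRM), people PS add a column as usual:
    ALTER TABLE crm.items ADD (promo_code VARCHAR2(8));

    -- If DDL is replicated, the same statement runs on database B,
    -- where the remapped tablespace may lack the space nobody
    -- thought of checking there.
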
Who will PR blame? Maybe PS? A third group of people PD? Even in this case, PD can't prevent events like the one above from occurring, unless they know about or have observed the changes produced by PS in advance.
As you can see, two things are needed regardless of the product you are using:

  • a very efficient communication process;
  • a reasonable test environment where replication is in place.

- 6 - is any table emptied and loaded again?

This is an extreme case of tables being reloaded periodically, possibly in response to a command issued by an end user. Be aware of it: a reload that is cheap on database A becomes a stream of single-row deletes and inserts to apply on database B, as sketched below.
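
A minimal sketch with hypothetical objects:

    -- On database A: a routine, cheap reload
    DELETE FROM sales.price_list;
    INSERT INTO sales.price_list
    SELECT product_id, list_price FROM staging.price_feed;
    COMMIT;
    -- On database B, a log-based product typically replays one
    -- change per deleted row and one per inserted row.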

- 7 - what about binary data?

Can binary data be found in your database's logs? Large objects may be only partially logged, or not logged at all, forcing the capture process to fetch them from the source tables at an extra cost. Will you need to fetch them?
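
A minimal sketch (hypothetical table) of the kind of column to check:

    -- Will the content of BODY show up in the logs, or must the
    -- capture process fetch it from the table itself?
    CREATE TABLE crm.documents (
      doc_id NUMBER PRIMARY KEY,
      body   BLOB
    );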

- 8 - can you clearly define a recovery strategy?

At this point you can imagine that your replication may break, or that data in database B may become so outdated that it's useless.
Before asking which tools you can use to monitor the replication lag, define what to do if replicated data is older than some age A1:

  • point to a different data source until the lag shrinks, possibly preventing heavy reports from being run;
  • just wait and publish a data accuracy disclaimer;
  • whatever you can clearly define;
Succeeding in this exercise is crucial to explaining to the people involved in the replication process what you really want.
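
As for monitoring the lag itself, a common product-independent technique (sketched here with hypothetical names) is a heartbeat table, replicated like any other:

    -- On database A, refreshed by a scheduled job (e.g. every minute):
    CREATE TABLE repl.heartbeat (id NUMBER PRIMARY KEY, beat TIMESTAMP);
    INSERT INTO repl.heartbeat VALUES (1, SYSTIMESTAMP);
    COMMIT;

    UPDATE repl.heartbeat SET beat = SYSTIMESTAMP WHERE id = 1;
    COMMIT;

    -- On database B: the observed replication lag
    SELECT SYSTIMESTAMP - beat AS lag FROM repl.heartbeat WHERE id = 1;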

- 9 - have you thought of retention of changes?

Databases A and B may be miles or meters away from each other. In any case, the communication between them may be interrupted, or B may become unavailable due to planned maintenance.
You'll need to store changes to be applied later - sometimes at site A, sometimes at site B - and properly size the filesystems / other structures that accommodate those changes before they are applied.
Also discuss how outdated your data can become before it's worth reloading it all instead of ingesting a huge backlog of changes.
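
A back-of-the-envelope check with made-up numbers: if database A produces about 50 GB of change data per day and maintenance on B can last up to 2 days, the staging structures must hold at least 100 GB of pending changes, plus headroom for the changes that keep arriving while the backlog drains.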

- 10 - why aren't there questions about bandwidth and amount of redo logs produced in here?

Those aspects are too obvious to be put at the top of a list.
The ability to mine changes efficiently is crucial but, as seen above, it may be 1000 times less challenging than applying them.

Conclusions and good luck!

Congratulations! You've read this deliberately generic post all the way to this line.
As you can imagine, this post will be followed by product-specific details, covering Oracle GoldenGate.


Any comment, correction or suggestion is very welcome.

Enrico Brambilla. 

Recommended reading

[1] Peter Brouwer, Oracle Corporation. The Art of Data Replication (2011).

[2] Wikipedia, Replication (computing).


