Migrating to Exadata / HP Oracle DB Machine
By Rene Kundersma on May 09, 2009
Presuming you have to migrate your data to Exadata and you have already done your capacity planning, your next step is probably to plan how to actually move your existing database to the Exadata / HP Oracle DB Machine (from now on simply called Exadata in this blog).
For this scenario there are a few things to take care of.
My personal experience is that preparing such a migration is all about knowing Oracle's MAA strategy: with that knowledge you know which options you do and do not have.
Off-line or On-line
First of all, one has to decide whether the migration will be 'off-line' or 'on-line'.
'On-line' in this case still means a couple of minutes of downtime: you always have to switch from source database 'A' to target database 'B'.
Think of this time, for example, as the time it takes to switch over to a standby database.
'Off-line' is the most comfortable solution, but of course only when the off-line window is large enough to migrate your data. In migrations to Exadata the bulk of data you have to migrate will probably be large, so consider yourself lucky if you have an 'off-line' option and a window large enough to do the job.
For Exadata it is recommended to have an extent size that is a multiple of 4M, and the ASM AU size should be 4M as well. This makes sure that at database level (for the extents) and at ASM level (for the AUs) chunks of at least 4M of contiguous data will be read, which is required to make Exadata perform best.
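If you are creating the disk groups and tablespaces on Exadata yourself, the recommendation can be applied at creation time. A sketch, where the disk group name, disk discovery string and tablespace name are just examples:

```sql
-- ASM disk group with a 4M allocation unit size
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK 'o/*/DATA*'
  ATTRIBUTE 'au_size' = '4M';

-- Tablespace with uniform 4M extents, so every extent is one contiguous 4M chunk
CREATE TABLESPACE app_data
  DATAFILE SIZE 10G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;
```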
So, if you want to stick to this recommendation, you have to check the current size of your AUs and extents.
When you find out the size is not the recommended one, choice number two pops up: do you do a 'logical' migration and stick to the recommendation, or not, and do a 'physical' migration? Of course, when your source system 'A' is not on ASM, there are no AUs to check.
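When the source is on ASM, the current AU size can, for example, be checked like this:

```sql
-- Allocation unit size per disk group, in MB
SELECT name, allocation_unit_size/1024/1024 AS au_size_mb
FROM v$asm_diskgroup;
```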
For the extent sizes, a query against DBA_EXTENTS could look like this:

select segment_name, count(*)
from dba_extents
where bytes < (4*1024*1024)
and owner = 'the name of the schema you are migrating'
group by segment_name;
If the extents already have the recommended size you can of course still opt for a logical migration, but my personal feeling would be to do a physical migration in that case.
Physical migration to me means 'transferring the datafiles from platform A to platform B'. This leads to new choices, such as how to transfer the datafiles from A to B; the answer depends on the architectures, because platforms A and B can differ.
As we know, Exadata/DB-Machine is based on Linux on Intel x86-64, which is little endian (as is Itanium), while the source can for example be RISC (Sun SPARC, IBM P-series), which is big endian.
Fortunately Oracle has some good answers to these kinds of questions:
- transportable tablespaces
To a certain level physical migrations can be done 'on-line' (or at least with as little downtime as possible); think of Data Guard, for example.
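For a cross-endian physical migration via transportable tablespaces, the datafile conversion step is typically done with RMAN CONVERT. A sketch, where the tablespace name, target platform string and staging path are examples:

```
-- On the source: convert the datafiles of the transportable set
-- to the Exadata (Linux x86-64, little endian) format
RMAN> CONVERT TABLESPACE app_data
      TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/stage/%U';
```

The exact platform strings can be looked up in V$TRANSPORTABLE_PLATFORM.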
As a last step: which solution you choose also depends on the infrastructure you are working in. Questions you may ask yourself are:
- Do I have staging space on the source system?
- Do I have network attached storage (NAS) available that I can use?
- Can this NAS be connected to both source and target?
- Are platforms A and B actually on the same network anyway?
- If on the same network: can I safely use that network to transfer my terabytes of data (without hurting the performance of other systems)?
These are all realistic questions you have to deal with when you are talking about migrations.
You have to investigate the available solutions for logical migrations as well.
Before I continue I should first explain what my definition of logical migration is.
To me a logical migration is 'exporting' the data from the source and 'importing' it into the target. The export can be done with any kind of tool, such as Data Pump, exp, or even SQL*Loader or a CTAS over a db-link. A 'logical' migration can even be done 'on-line' if you consider solutions like a logical standby database (same endianness) or Oracle Streams (endian independent). 'Logical' migrations often need 'staging' (a location where you temporarily dump your data). Note, however, that Data Pump exports/imports can be done over the network without dumping anything to file.
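Such a network-mode Data Pump transfer pulls the data straight over a database link, so no staging dump file is needed. A sketch, where the link name, credentials and schema are just examples:

```
-- On the target (Exadata) database: a database link back to the source
CREATE DATABASE LINK source_db CONNECT TO system IDENTIFIED BY password USING 'SOURCEDB';

-- Then, from the OS prompt on the target:
$ impdp system SCHEMAS=app_owner NETWORK_LINK=source_db
```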
As said, if you set yourself the target of changing the extent sizes to the recommended values, you automatically end up with a logical migration if you do not already have extents of this size.
There may be even more things to consider. For example:
- you need to build more than one replica of the source database on Exadata;
- you want to put as little load on the source system as possible because it is a highly critical production system.
When dealing with these kinds of questions, always remember that Exadata can do the job quicker, because it is so powerful.
Say, for instance, you realize that exporting a 4TB database with Data Pump will cause too much load on the production system, but you still want to do a logical migration. In that case my answer would be to just transfer the datafiles to Exadata (endian-converting them if needed) and perform the export (expdp) and import (impdp) there. You probably do not even need to transfer the files; perhaps you can restore them from backup.
Exporting data with Data Pump is something you prefer to do in a read-consistent way. For that, think of the expdp arguments FLASHBACK_SCN and FLASHBACK_TIME,
and of undo guarantee at tablespace level.
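A read-consistent export could then, for example, be started like this (credentials, directory, dump file and schema are placeholders):

```
$ expdp system SCHEMAS=app_owner DIRECTORY=dump_dir DUMPFILE=app_owner.dmp \
    FLASHBACK_TIME="TO_TIMESTAMP('09-05-2009 12:00:00','DD-MM-YYYY HH24:MI:SS')"
```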
Note 1. Exporting LOBs in a read-consistent way cannot be accomplished by only tuning the undo retention of the database: for LOBs, retention is a property of the column. In order to make sure the retention of these objects is set correctly, you may have to alter your table first. For more information see the "Application Developer's Guide - Large Objects".
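Switching a LOB column from PCTVERSION-based versioning to undo-retention-based versioning can be done with an ALTER TABLE; the table and column names here are examples:

```sql
ALTER TABLE app_owner.documents MODIFY LOB (doc_blob) (RETENTION);
```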
Note 2. Transferring or exporting indexes to Exadata seems rather useless to me. First, ask yourself whether the index is still needed at all. And if so, why not just recreate it on the target, since Exadata probably has the muscle to recreate it in a small fraction of the time it took on your source database!
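Recreating such an index on the target could, for instance, exploit Exadata's horsepower through parallelism; the names and the parallel degree are examples:

```sql
CREATE INDEX app_owner.orders_ix ON app_owner.orders (order_date)
  PARALLEL 16 NOLOGGING;
-- Reset the attributes after the build
ALTER INDEX app_owner.orders_ix NOPARALLEL LOGGING;
```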
Migrations to Exadata are not more complicated than normal migrations. When dealing with large amounts of data one should always think of a strategy before hitting the keyboard. As production environments have their limitations, it has proved helpful to work out alternative scenarios as well. With expdp, transportable tablespaces, transportable databases, Streams, and physical and logical standby databases, Oracle has all thinkable options available to migrate to this wonderful database machine!
Oracle Expert Services, The Netherlands