Thursday Jan 28, 2016

TDE is wonderful - Journey to the Cloud V


 What happened so far on my Journey to the Cloud?

Today's journey:
Learn about TDE (Transparent Data Encryption) and other secrets

What I really really love about my job: Every day I learn something new.

But sometimes learning can be frustrating at the beginning. And so it was for Roy and me in the past days when we explored the use of TDE (Transparent Data Encryption) in our DBaaS Cloud environments. But many thanks to Brian Spendolini for his continuous 24x7 support :-)

Never heard of Transparent Data Encryption before? Then please read on here. It's usually part of ASO (Advanced Security Option) but it is included in the cloud offering. 

But first of all, before taking care of TDE and PDBs, I tried to deploy a new DBaaS VM ...
.

PDB names can't contain underscores?

Well, one learning experience is that initially you can't create a PDB in the DBaaS environment with an underscore in the name. I wanted to name my PDB in my new env simply "TDE_PDB1" (8 characters, all should be fine) - but received this nice message:

Don't worry if you don't speak German - it basically says that the name can't be longer than 8 characters (ok, I knew that), must begin with a character (mine does of course) and can only contain characters and numbers (eh?? no underscores???). Hm ...?!?

Ok, I'll name mine "TDEPDB1".

Of course outside this page you can create PDBs containing an underscore:

SQL> create pluggable database PDB_MIKE admin user mike identified by mike
  2  file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb_mike');

Pluggable database created
.

That's what happens when application logic tries to supersede database logic.
.

(Almost) undocumented parameter: encrypt_new_tablespaces

Thanks to Robert Pastijn for telling me about this hidden secret: a new parameter which is not part of a regular database deployment but exists only in the cloud.

encrypt_new_tablespaces

First check in MOS: nothing. Interesting.

So let's check with Google.
And here it is: 7 hits, for example:

Controlling Default Tablespace Encryption

The ENCRYPT_NEW_TABLESPACES initialization parameter controls default encryption of new tablespaces. In Database as a Service databases, this parameter is set to CLOUD_ONLY.

ALWAYS: Any tablespace created will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause.

CLOUD_ONLY: Tablespaces created in a Database Cloud Service database will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause. For non-Database Cloud Service databases, tablespaces will only be encrypted if the ENCRYPTION clause is specified. This is the default value.

DDL: Tablespaces are not transparently encrypted and are only encrypted if the ENCRYPTION clause is specified.

What I found really scary is the fact that I couldn't find it in my spfile/pfile. You can alter it with an "alter system" command but you can't remove it.
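Just to illustrate the "alter system" part - a minimal sketch switching the parameter to one of the documented values from the table above:

SQL> alter system set encrypt_new_tablespaces=DDL scope=both;

System altered.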

The idea behind this is great as tablespaces should be encrypted, especially when they reside in a cloud environment. TDE is a very useful feature. And this mechanism exists regardless of your edition, whether you have Standard Edition or Enterprise Edition in any sort of flavor in the DBaaS Cloud.

A new tablespace will be encrypted by default:

SQL> CREATE TABLESPACE TS_MIKE DATAFILE 'ts_mike01.dbf' SIZE 10M;

Then check:

SQL> select TABLESPACE_NAME, ENCRYPTED from DBA_TABLESPACES;

But we'll see later if this adds some constraints to our efforts to migrate a database for testing purposes into the DBaaS cloud environment.
.

Is there anything encrypted yet?

Quick check after setting:

SQL> alter system set exclude_seed_cdb_view=FALSE scope=both;

I tried to find out if any of the tablespaces are encrypted.

SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

TABLESPACE_NAME                ENC     CON_ID
------------------------------ --- ----------
SYSTEM                         NO           1
USERS                          NO           1
SYSAUX                         NO           1
UNDOTBS1                       NO           1
TEMP                           NO           1

SYSTEM                         NO           2
USERS                          NO           2
TEMP                           NO           2
SYSAUX                         NO           2

EXAMPLE                        NO           3
USERS                          NO           3
TEMP                           NO           3
SYSAUX                         NO           3
APEX_1701140435539813          NO           3
SYSTEM                         NO           3

15 rows selected.

Looks good.  Nothing encrypted yet.
.

How does the new parameter ENCRYPT_NEW_TABLESPACES affect operation?

Ok, let's try.

SQL> show parameter ENCRYPT_NEW_TABLESPACES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------
encrypt_new_tablespaces              string      CLOUD_ONLY

And further down the road ...

SQL> alter session set container=pdb1;

SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_plays_with_tde.dbf' size 10M;

Tablespace created.

SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

TABLESPACE_NAME                ENC     CON_ID
------------------------------ --- ----------
SYSTEM                         NO           3
SYSAUX                         NO           3
TEMP                           NO           3
USERS                          NO           3
EXAMPLE                        NO           3
APEX_1701140435539813          NO           3
MIKE_PLAYS_WITH_TDE            YES          3

7 rows selected.

Ah ... so my new tablespace is encrypted. Not bad ... so far TDE has had no noticeable impact: I can create objects in this tablespace, query them etc. It is not disturbing at all. Good.
.

How does this key thing work in the DBaaS Cloud?

The documentation linked above tells us this:

Managing the Software Keystore and Master Encryption Key

Tablespace encryption uses a two-tiered, key-based architecture to transparently encrypt (and decrypt) tablespaces. The master encryption key is stored in an external security module (software keystore). This master encryption key is used to encrypt the tablespace encryption key, which in turn is used to encrypt and decrypt data in the tablespace.

When the Database as a Service instance is created, a local auto-login software keystore is created. The keystore is local to the compute node and is protected by a system-generated password. The auto-login software keystore is automatically opened when accessed.

You can change (rotate) the master encryption key by using the tde rotate masterkey  subcommand of the dbaascli  utility. When you execute this subcommand you will be prompted for the keystore password. Enter the password specified when the service instance was created.
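On the compute node this boils down to something like the following sketch - the subcommand is taken from the documentation quoted above, and the password prompt is the one it announces:

$ dbaascli tde rotate masterkey
Enter keystore password: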
.

Creating a new PDB

That's easy, isn't it?

SQL> alter session set container=cdb$root;

Session altered.

SQL> create pluggable database pdb2 admin user mike identified by mike;

Pluggable database created.

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

TABLESPACE_NAME                ENC     CON_ID
------------------------------ --- ----------
SYSTEM                         NO           4
SYSAUX                         NO           4
TEMP                           NO           4
USERS                          NO           4

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
ystem_cbn8fo1s_.dbf

/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
ysaux_cbn8fo20_.dbf

/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_u
sers_cbn8fo27_.dbf

Ah, bad thing. As I neither used the file_name_convert option nor changed the PDB_FILE_NAME_CONVERT initialization parameter, my new PDB's files get created in the "root" path of the CDB. I don't want this. But isn't there this cool new feature called ONLINE MOVE OF DATAFILES in Oracle Database 12c? Ok, it's an EE feature, but let me try it after checking the current OMF file names in DBA_DATA_FILES and DBA_TEMP_FILES:

SQL> !mkdir /u02/app/oracle/oradata/MIKEDB/PDB2

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_system_cbn8fo1s_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf';

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_sysaux_cbn8fo20_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf';

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_users_cbn8fo27_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf';

Be prepared:
This will create a 1:1 copy of the file in the designated location and synch afterwards. It may take a minute per file.

And moving the TEMP tablespace(s) file(s) will fail.

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf';

*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file
"/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf"

The temporary tablespace will have to be dropped and recreated instead. But not a big deal - see the sketch below.
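A minimal sketch of the drop-and-recreate approach (the name TEMP2 and the 100M size are my assumptions, not from the original environment):

SQL> create temporary tablespace TEMP2 tempfile '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf' size 100M;

SQL> alter database default temporary tablespace TEMP2;

SQL> drop tablespace TEMP including contents and datafiles;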

Check:

SQL> select file_name from dba_data_files;

FILE_NAME
-----------------------------------------------------
/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf
/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf
/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf

Let me fix this so I don't hit this pitfall again:

SQL> alter system set pdb_file_name_convert='/u02/app/oracle/oradata/MIKEDB/pdbseed','/u02/app/oracle/oradata/MIKEDB/PDB2';

Final verification:

SQL> select name, value from v$system_parameter where con_id=4;

NAME                   VALUE
---------------------- ----------------------------------
resource_manager_plan
pdb_file_name_convert  /u02/app/oracle/oradata/MIKEDB/pdbseed, /u02/app/oracle/oradata/MIKEDB/PDB2


Now the fun part starts ... ORA-28374: typed master key not found in wallet

Remember this command from above in my PDB1? It ran fine there. But now it fails in PDB2.

SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M;

create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M
*
ERROR at line 1:
ORA-28374: typed master key not found in wallet

Voodoo in the database? I'm worried - especially as Roy had the same issue days before. But why did the command pass through without issues before in PDB1 - and now it doesn't in PDB2? Is it because the PDB1 is precreated - and my PDB2 is not?

Kinda strange, isn't it?
So connecting back to my PDB1 and trying again: 

SQL> alter session set container=pdb1;

Session altered.

SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_still_plays_with_tde.dbf' size 10M;

Tablespace created.

Ok, now I'm worried. What is the difference between the precreated PDB1 and my new PDB2?
.

Why do I get an ORA-28374 in my fresh PDB2?

When we compare the wallet status in both PDBs we'll recognize the difference:

PDB1:

SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

WRL_TYPE             WALLET_TYPE          STATUS
-------------------- -------------------- ------------------------------
FILE                 AUTOLOGIN            OPEN

PDB2:

SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

WRL_TYPE             WALLET_TYPE          STATUS
-------------------- -------------------- ------------------------------
FILE                 AUTOLOGIN            OPEN_NO_MASTER_KEY

.
Now, thanks to Brian Spendolini, I have a working solution. But I'm not convinced that this is an obvious path ...
Remember? I would just like to create a tablespace in my new (own, non-precreated) PDB. That's all ...

SQL> alter session set container=cdb$root;

SQL> administer key management set keystore close;

keystore altered.

SQL> administer key management set keystore open identified by <your-sysadmin-pw> container=all;

keystore altered.

SQL> alter session set container=pdb2;

Session altered.

SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

WRL_PARAMETER                             STATUS             WALLET_TYPE
----------------------------------------- ------------------ -----------
/u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN_NO_MASTER_KEY PASSWORD

SQL> ADMINISTER KEY MANAGEMENT SET KEY USING TAG "tde_dbaas" identified by <your-sysadmin-pw> WITH BACKUP USING "tde_dbaas_bkup";

keystore altered.

SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

WRL_PARAMETER                             STATUS   WALLET_TYPE
----------------------------------------- -------- -----------
/u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN     PASSWORD

And finally ... 

SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_still_plays_with_tde.dbf' size 10M;

Tablespace created.

Wow!!!

That was not straightforward at all. Maybe it all happened due to my almost non-existent knowledge of TDE.

Ah ... and let me say that I find the lowercase "keystore altered." echo message quite disturbing. But this is a generic one and non-critical of course ...

--Mike 




Friday Oct 02, 2015

OOW 2015 Sessions and Labs - Oracle Open World

OMG ... only a few weeks to go ... Oracle Open World 2015 in San Francisco is coming closer and closer ...

And this year will be really tough as we have a reduced number of people there - but more work to do than in previous years: 3 talks (2 for Upgrade, 1 for Data Pump), 4 labs (all in the Nikko Hotel, a 15-minute walk from Moscone Center) - plus a good number of customer meetings already lined up. Plus the chance to meet so many great people ... and not to forget the Data Warehouse Global Leaders event at the Oracle HQ.

I have that strange feeling that I will be VERY tired when I board the plane on Friday night heading back to Germany ... ;-)
.

Focus On Upgrades/Migrations

As the fantastic application we are using for the OOW content catalog doesn't allow me to link directly to a session, Roy has built a Focus On document guiding you to some important talks around Upgrades and Migrations at OOW 2015, for your convenience:


Talks

CON6777 - Upgrade and Migrate to Oracle Database 12c: Live and Uncensored!
October 26 at 13:30h - Moscone South 102

CON8375 - How to Upgrade Hundreds or Thousands of Databases in a Reasonable Amount of Time
October 28 at 12:15h - Moscone South 102

Many customers now have database environments numbering in the hundreds or even thousands. This session addresses the challenge of maintaining technical currency of such an environment while also containing upgrade and migration costs at a reasonable level. Learn from Oracle Database upgrade experts about product features, options, tools, techniques, and services that can help you maintain control of your database environment. You will also see examples of how real customers are successfully meeting this challenge today.

CON8376 - Deep Dive: More Oracle Data Pump Performance Tips and Tricks
October 29 at 9:30h - Moscone South 305

The Oracle Data Pump development team is back with even more performance tips and tricks for DBAs! In this session, learn about Oracle Data Pump features, parameters, and patches - some added since the first patch set of Oracle Database 12c (12.1.0.2) - that will improve performance and decrease overhead for Oracle Data Pump projects. Whether you are an Oracle Data Pump novice or already an expert, you are sure to learn something new in this session that will help you maximize the throughput of your export and import operations.

HOL10348 - Upgrade, Migrate, and Consolidate to Oracle Database 12c (hands-on lab)
Oct 26 at 11:00h, Oct 27 at 11:45h, Oct 28 at 13:15h, Oct 29 at 12:30h - Hotel Nikko, Golden Gate room


Hope to see you at OOW 2015!

--Mike

Tuesday Jun 30, 2015

Some Data Pump issues:
+ DBMS_DATAPUMP Import via NETWORK_LINK fails
+ STATUS parameter giving bad performance

One of my dear Oracle ACS colleagues (thanks, Thomas!) highlighted this issue to me after one of his lead customers hit this pitfall a week ago.

DBMS_DATAPUMP Import Over NETWORK_LINK fails with ORA-39126 / ORA-31600

Symptoms are: 

KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
ORA-31600: invalid input value IN ('VIEWS_AS_TABLES/TABLE_DATA') for parameter VALUE in function SET_FILTER

This can be cured with the patch for bug 19501000 - but this patch can conflict with bug 18793246 (EXPDP slow: base object lookup during a Data Pump export causes a full table scan per object) and may therefore require a merge patch - patch 21253883 is the one to go with in this case.
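For context, a minimal DBMS_DATAPUMP import over a database link looks roughly like this sketch (the link name SRC_DB_LINK and the schema APP are placeholders):

DECLARE
  h  NUMBER;
  js VARCHAR2(30);
BEGIN
  -- network-mode import: reads directly from the source database
  -- via the database link, no intermediate dump file involved
  h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                          job_mode    => 'SCHEMA',
                          remote_link => 'SRC_DB_LINK');
  -- restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''APP'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
END;
/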

 .

Another issue Roy just came across:

Data Pump gives bad performance in Oracle 12.1.0.2 when the STATUS parameter is used on the command line

Symptoms are: 

It looks like the routines we are using to get status are significantly slower in 12c than in 11g. On 11.2.0.4 a STATUS call of expdp/impdp runs in 0.2-0.3 seconds, but in 12.1.0.2 it takes 0.8-1.6 seconds. As a result the client falls behind on 12.1.0.2; it is taking about 0.5-0.8 seconds to put out each line in the logfile because it is getting the status each time. With over 9000 tables in a test that half a second really adds up. The result in this test case was that the data pump job completed in 35 minutes, but it took another 30-35 minutes to finish putting out messages on the client (the log file was already complete) and return control to the command line. This happens only when you use STATUS on the command line.

Recommendation is:  

Don't use the STATUS parameter on the expdp/impdp command line in Oracle 12.1.0.2 until the issue is fixed. This will be tracked under Bug 21123545.
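If you still need progress information, one workaround is to leave STATUS off the command line and attach to the running job from a second session instead (a sketch - the job name is just an example):

$ expdp system attach=SYS_EXPORT_SCHEMA_01

Export> status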

--Mike 

Thursday Sep 04, 2014

OOW 2014 - Upgrade and Data Pump Talks

Oracle Open World (OOW) 2014 in San Francisco is coming ... just a few weeks to go ... everybody is in the prep phase for demos, presentations, labs etc. 

If you'd like to get in touch with us to discuss your upgrade and migration strategies please feel free to contact either Roy Swonger or myself directly. We'll be happy to assist you. And of course you are welcome to stop by at our combined Upgrade/DataPump booth at the demo grounds and visit one of our talks.

Our group is happy to deliver the following talks and labs:

How to Upgrade, Migrate, and Consolidate to Oracle Database 12c [CON7647]
Monday, Sep 29, 5:15 PM - 6:00 PM - Moscone South - 102

The most widely anticipated feature of Oracle Database 12c is now available, and you may be wondering just how you can move your current databases to pluggable databases in a multitenant architecture. Whether you are just starting to explore the world of pluggable databases or are planning a production upgrade in the near future to Oracle Database 12c, this presentation by Oracle Database upgrade and migration experts gives you all the details: what methods are available; how they work; and which is the best for your particular upgrade, migration, or consolidation scenario.

How an Oracle Database 12c Upgrade Works in a Multitenant Environment [CON7648]
Tuesday, Sep 30, 12:00 PM - 12:45 PM - Moscone South - 306

With the first patch set of Oracle Database 12c, you will be able to choose between various methods of upgrading a multitenant container database and its pluggable databases. In this session, you will hear from Oracle upgrade experts about all the details of how a database upgrade works in a multitenant environment. You will learn what your options are, how parallelism works for database upgrades, and what is new for database upgrades in the first patch set of Oracle Database 12c.

How and Why to Migrate from Schema Consolidation to Pluggable Databases [CON7649]
Wednesday, Oct 1, 11:30 AM - 12:15 PM - Moscone South - 306

One important use case for pluggable databases is to enable you to move from schema consolidation with multiple applications in the same database to a more secure environment with Oracle Multitenant and pluggable databases. In this technical session, you will hear from Oracle development experts about the methods available for migrating from schema consolidation to a multitenant database environment with Oracle Data Pump, transportable tablespaces, or new features in Oracle Multitenant.

Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support [CON8236]
This talk is not given by us but by our Global Tech Lead for Upgrades in Support, Agrim Pandit
Tuesday, Sep 30, 5:00 PM - 5:45 PM - Moscone South - 310

You’ve heard about Oracle Database 12c and its new capabilities. Now come hear from Oracle experts about all the great tools and resources Oracle offers to help you upgrade to Oracle Database 12c efficiently and effectively. This session’s presenters, from Oracle Support, bring years of database experience and recent lessons learned from Oracle Database 12c upgrades at companies of all sizes all around the world. You are sure to leave with valuable information that will help you plan and execute your upgrade. What's more, most, if not all, of the tools and resources they discuss are available to current customers at no additional cost through their standard product support coverage.

.

I will publish the schedule for the hands-on labs (4x) and the location of the demo grounds booth as soon as I get them.

-Mike

Monday Jul 22, 2013

OTN Tour Latin America 2013

While Mike is in Shanghai for OOW, I'll be participating in the 2013 OTN Tour in Latin America. This week includes events in Panama, Costa Rica, and Mexico where I will have two sessions at each:


Migrate and Consolidate your Databases using Data Pump

What's New in Database Upgrade

Both Mike and I will be participating in the August events in Uruguay, Argentina, and Brazil as well. But if you can't make it to these or other upcoming events, the slides are ready for download!

Thursday Jan 17, 2013

Potential check for corruptions

Having a corruption somewhere in the database is one of the worst-case scenarios I could ever imagine - especially if it "sleeps" somewhere in the data dictionary. Recently I talked to a customer who encountered a failing upgrade due to a data dictionary corruption that had been introduced in an earlier release.

What can you do to check your database(s) prior to an upgrade - or generally from time to time? Actually I now know two powerful possibilities:

  • hcheck.sql
    See MOS Note:136697.1
    This script will check for known problems in Oracle 8i, Oracle 9i, Oracle 10g and Oracle 11g.
    You will need to create the hOut helper package first - please see MOS Note:101468.1 to download the script hout.sql
  • RMAN validation:
    RMAN> backup check logical validate database;
    See MOS Note:836658.1 for further details - you can run this with multiple parallel channels to speed up the run (see the sketch below)
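A minimal sketch of the parallel variant (two disk channels here - the channel count is my assumption, adjust it to your I/O capabilities):

RMAN> run {
  allocate channel c1 device type disk;
  allocate channel c2 device type disk;
  backup check logical validate database;
}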

Tuesday Jul 03, 2012

Data Pump: Consistent Export?

Ouch ... I have to admit that I claimed in several workshops in the past weeks that a Data Pump export with expdp is per se consistent.

Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment - we were discussing the parameters in their expdp par file. I asked my colleagues after doing some research on MOS. And here are the results of my "research":

  • MOS Note 377218.1 has a nice example showing that a Data Pump export of a partitioned table with concurrent DELETEs on that table is inconsistent
  • Background:
    Back in the old 9i days, when Data Pump was designed, flashback technology wasn't as popular and well known as it is today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export doesn't operate in consistency mode by default
  • To get a consistent Data Pump export with expdp you'll have to set:
    FLASHBACK_TIME=SYSTIMESTAMP
    in your parameter file. Then the export will be consistent as of the timestamp when the process was started. You could use FLASHBACK_SCN instead and determine the SCN beforehand if you'd like to be exact (see the parameter file sketch below).
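A minimal parameter file sketch (schema, directory and file names are placeholders):

schemas=APP
directory=DATA_PUMP_DIR
dumpfile=app.dmp
logfile=exp_app.log
flashback_time=systimestamp

$ expdp system parfile=exp_app.par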

So sorry if I had proclaimed a feature which unfortunately is not there by default :-(

- Mike

Monday Dec 05, 2011

Exclude DBMS_SCHEDULER Jobs from expdp?

You have never thought about excluding DBMS_SCHEDULER jobs from a Data Pump export? Me neither, but I recently got a copy of an email about such a customer case from Roy, who owns Data Pump as well. And this is the code example from Dean Gagne:

exclude.par:
exclude=procobj:"IN (SELECT NAME FROM sys.OBJ$ WHERE TYPE# IN (47,48,66,67,68,69,71,72,74))"

  • This will work only on export
  • It's an all or nothing approach
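To use it, just reference the parameter file on the command line (a sketch - schema, directory and dump file names are placeholders):

$ expdp system schemas=APP directory=DATA_PUMP_DIR dumpfile=app.dmp parfile=exclude.par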

Quite interesting, isn't it? 

Tuesday Jul 19, 2011

How to get the Master Table from a Data Pump expdp?

Interesting question a customer had last week during the Upgrade Workshop in Munich. He's getting export dump files from several customers, often with hardly any information describing the contents. So how can he find out what's in there, what the source character set was, etc.?

This seems to be a simple question, but it did cost me a few searches and tests to come back with some (hopefully) useful information.

First attempt: $ strings expdp.dmp > outexpdp.txt

I bet there are better ways to do this but in my case this will give me:
"APP"."SYS_EXPORT_SCHEMA_01"
x86_64/Linux 2.4.xx
WE8ISO8859P15
LBB EMB GHC JWD SD EBE WMF DDG JG SJH SRH JGK CL EGM BJM RAP RLP RP KR PAR MS MRS JLS CET HLT
10.02.00.05.00

This does look interesting as it tells me the exporting user ('APP'), the source OS and - more importantly - the character set of the source database (WE8ISO8859P15).

But Data Pump also has a master table - and how do I read this table? We'll do a dummy import with impdp and keep the master table :-)

impdp system/oracle dumpfile=app.dmp SQLFILE=sql.txt nologfile=Y keep_master=y directory=DATA_PUMP_DIR

And voilà ... we have a sql file with all DMLs - and we have the master table:

Import: Release 11.2.0.2.0 - Production on Tue Jul 19 09:56:05 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Master table "SYSTEM"."SYS_SQL_FILE_FULL_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_FULL_02":  system/******** dumpfile=app.dmp SQLFILE=sql.txt logfile=nologfile keep_master=y:q!
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_SQL_FILE_FULL_02" successfully completed at 09:56:06

So the master table "SYSTEM"."SYS_SQL_FILE_FULL_02" is still kept and can be read to access information about the dump file.
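Once kept, the master table is an ordinary table in the importing schema, so you can query it like any other (a sketch - the exact column layout of the master table varies by version):

SQL> select distinct object_type from SYSTEM.SYS_SQL_FILE_FULL_02 where object_type is not null;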


16-AUG-2011:
Thanks to Marco Patzwahl from MuniQSoft for correcting my example to:

impdp system/oracle dumpfile=app.dmp SQLFILE=sql.txt nologfile=Y keep_master=y directory=DATA_PUMP_DIR
Using LOGFILE=NOLOGFILE will simply create a logfile called NOLOGFILE.log which was not my intention. And not specifying the directory will require that you start impdp from the actual Data Pump OS directory - by default $ORACLE_BASE/admin/<SID>/dpdump.