Thursday Sep 04, 2014

OOW 2014 - Upgrade and Data Pump Talks

Oracle Open World (OOW) 2014 in San Francisco is coming ... just a few weeks to go ... everybody is in the prep phase for demos, presentations, labs etc. 

If you'd like to get in touch with us to discuss your upgrade and migration strategies, please feel free to contact either Roy Swonger or myself directly. We'll be happy to assist you. And of course you are welcome to stop by our combined Upgrade/Data Pump booth at the demo grounds and to visit one of our talks.

Our group is happy to deliver the following talks and labs:

How to Upgrade, Migrate, and Consolidate to Oracle Database 12c [CON7647]
Monday, Sep 29, 5:15 PM - 6:00 PM - Moscone South - 102

The most widely anticipated feature of Oracle Database 12c is now available, and you may be wondering just how you can move your current databases to pluggable databases in a multitenant architecture. Whether you are just starting to explore the world of pluggable databases or are planning a production upgrade in the near future to Oracle Database 12c, this presentation by Oracle Database upgrade and migration experts gives you all the details: what methods are available; how they work; and which is the best for your particular upgrade, migration, or consolidation scenario.

How an Oracle Database 12c Upgrade Works in a Multitenant Environment [CON7648]
Tuesday, Sep 30, 12:00 PM - 12:45 PM - Moscone South - 306

With the first patch set of Oracle Database 12c, you will be able to choose between various methods of upgrading a multitenant container database and its pluggable databases. In this session, you will hear from Oracle upgrade experts about all the details of how a database upgrade works in a multitenant environment. You will learn what your options are, how parallelism works for database upgrades, and what is new for database upgrades in the first patch set of Oracle Database 12c.

How and Why to Migrate from Schema Consolidation to Pluggable Databases [CON7649]
Wednesday, Oct 1, 11:30 AM - 12:15 PM - Moscone South - 306

One important use case for pluggable databases is to enable you to move from schema consolidation with multiple applications in the same database to a more secure environment with Oracle Multitenant and pluggable databases. In this technical session, you will hear from Oracle development experts about the methods available for migrating from schema consolidation to a multitenant database environment with Oracle Data Pump, transportable tablespaces, or new features in Oracle Multitenant.

Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support [CON8236]
This talk will be delivered not by our group but by Agrim Pandit, our Global Tech Lead for Upgrades in Oracle Support
Tuesday, Sep 30, 5:00 PM - 5:45 PM - Moscone South - 310

You’ve heard about Oracle Database 12c and its new capabilities. Now come hear from Oracle experts about all the great tools and resources Oracle offers to help you upgrade to Oracle Database 12c efficiently and effectively. This session’s presenters, from Oracle Support, bring years of database experience and recent lessons learned from Oracle Database 12c upgrades at companies of all sizes all around the world. You are sure to leave with valuable information that will help you plan and execute your upgrade. What's more, most, if not all, of the tools and resources they discuss are available to current customers at no additional cost through their standard product support coverage.


I will publish the schedule for the four hands-on labs and the location of our booth at the demo grounds as soon as I get them.

-Mike

Monday Jul 22, 2013

OTN Tour Latin America 2013

While Mike is in Shanghai for OOW, I'll be participating in the 2013 OTN Tour in Latin America. This week includes events in Panama, Costa Rica, and Mexico, and I will deliver two sessions at each stop:


Migrate and Consolidate your Databases using Data Pump

What's New in Database Upgrade

Both Mike and I will be participating in the August events in Uruguay, Argentina, and Brazil as well. But if you can't make it to these or other upcoming events, the slides are ready for download!

Tuesday Jul 03, 2012

Data Pump: Consistent Export?

Ouch ... I have to admit that I said in several workshops over the past weeks that a Data Pump export with expdp is per se consistent.

Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment: we were discussing the parameters in his expdp parameter file, and after doing some research on MOS I asked my colleagues as well. Here are the results of my "research":

  • MOS Note 377218.1 has a nice example showing that a Data Pump export of a partitioned table is inconsistent while DELETEs are running against that table
  • Background:
    Back in the old 9i days when Data Pump was designed, flashback technology wasn't as popular and well known as it is today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export does not operate in consistent mode by default
  • To get a consistent Data Pump export with expdp you'll have to set:
    FLASHBACK_TIME=SYSTIMESTAMP
    in your parameter file. The export will then be consistent as of the timestamp at which it was started. If you'd like to be exact, use FLASHBACK_SCN instead and determine the SCN beforehand - see the sketch below.
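To put this together, here is a minimal sketch of what a consistent export could look like (the schema, directory, and file names are made up for illustration):

expdp_consistent.par:
schemas=APP
directory=DATA_PUMP_DIR
dumpfile=app_consistent.dmp
logfile=app_consistent.log
flashback_time=systimestamp

$ expdp system parfile=expdp_consistent.par

If you prefer an exact SCN, determine it first and then replace the flashback_time line with flashback_scn=<that SCN>:

SQL> SELECT current_scn FROM v$database;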

So I'm sorry if I proclaimed a feature which unfortunately is not there by default :-(

- Mike

Monday Dec 05, 2011

Exclude DBMS_SCHEDULER Jobs from expdp?

You have never thought about excluding DBMS_SCHEDULER jobs from a Data Pump export? Me neither, but I recently got a copy of an email about such a customer case from Roy, who owns Data Pump as well. This is the code example from Dean Gagne:

exclude.par:
exclude=procobj:"IN (SELECT NAME FROM sys.OBJ$ WHERE TYPE# IN (47,48,66,67,68,69,71,72,74))"

  • This will work only on export
  • It's an all-or-nothing approach (see the usage sketch below)
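To see it in action, here is a minimal sketch of how such an export could be invoked (the schema and dump file names are made up for illustration):

$ expdp system parfile=exclude.par schemas=APP directory=DATA_PUMP_DIR dumpfile=app_nojobs.dmp

Keeping the EXCLUDE clause in the parameter file has a nice side effect: you don't have to escape all the quotes, which you would if you passed it on the command line.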

Quite interesting, isn't it? 

Tuesday Jul 19, 2011

How to get the Master Table from a Data Pump expdp?

Interesting question a customer had last week during the Upgrade Workshop in Munich. He's getting export dump files from several customers, often with not much information describing the contents. So how can he find out what's in there, what the source character set was, etc.?

This seems to be a simple question, but it did cost me a few searches and tests to come back with some (hopefully) useful information.

First attempt: $ strings expdp.dmp > outexpdp.txt

I bet there are better ways to do this but in my case this will give me:
"APP"."SYS_EXPORT_SCHEMA_01"
x86_64/Linux 2.4.xx
WE8ISO8859P15
LBB EMB GHC JWD SD EBE WMF DDG JG SJH SRH JGK CL EGM BJM RAP RLP RP KR PAR MS MRS JLS CET HLT
10.02.00.05.00

This does look interesting, as it tells me the exporting user ('APP'), the source OS and - more importantly - the character set of the source database (WE8ISO8859P15).

But Data Pump also has a master table - so how do I read this table? We'll do a dummy import with impdp and keep the master table :-)

impdp system/oracle dumpfile=app.dmp SQLFILE=sql.txt nologfile=Y keep_master=y directory=DATA_PUMP_DIR

And voilà ... we have a SQL file with all the DDL - and we have the master table:

Import: Release 11.2.0.2.0 - Production on Tue Jul 19 09:56:05 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Master table "SYSTEM"."SYS_SQL_FILE_FULL_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_FULL_02":  system/******** dumpfile=app.dmp SQLFILE=sql.txt logfile=nologfile keep_master=y:q!
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_SQL_FILE_FULL_02" successfully completed at 09:56:06

So the master table "SYSTEM"."SYS_SQL_FILE_FULL_02" is still kept and can be read to access information about the dump file.
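If you want to peek inside, the retained master table can be queried like any regular table. A minimal sketch (the column names below are what a typical 11.2 master table exposes, but the layout is release-dependent, so DESCRIBE the table first - and drop it once you are done):

SQL> DESCRIBE "SYSTEM"."SYS_SQL_FILE_FULL_02"

SQL> SELECT object_schema, object_name, object_type
       FROM "SYSTEM"."SYS_SQL_FILE_FULL_02"
      WHERE process_order > 0
        AND object_name IS NOT NULL
      ORDER BY process_order;

SQL> DROP TABLE "SYSTEM"."SYS_SQL_FILE_FULL_02" PURGE;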


16-AUG-2011:
Thanks to Marco Patzwahl from MuniQSoft for correcting my example to:

impdp system/oracle dumpfile=app.dmp SQLFILE=sql.txt nologfile=Y keep_master=y directory=DATA_PUMP_DIR
Using LOGFILE=NOLOGFILE will simply create a log file called NOLOGFILE.log, which was not my intention - the parameter I needed is NOLOGFILE=Y. And not specifying the DIRECTORY parameter means impdp falls back to the default DATA_PUMP_DIR directory object, so the dump file has to reside in that OS directory - by default $ORACLE_BASE/admin/<SID>/dpdump.
