Tuesday Jun 30, 2015

Some Data Pump issues:
+ DBMS_DATAPUMP Import via NETWORK_LINK fails
+ STATUS parameter giving bad performance

One of my dear Oracle ACS colleagues (Danke Thomas!) highlighted this issue to me after one of his lead customers hit this pitfall a week ago.

DBMS_DATAPUMP Import Over NETWORK_LINK fails with ORA-39126 / ORA-31600

Symptoms are: 

KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
ORA-31600: invalid input value IN ('VIEWS_AS_TABLES/TABLE_DATA') for parameter VALUE in function SET_FILTER

This can be cured with the patch for bug 19501000 - but this patch can conflict with bug 18793246 (EXPDP slow, showing base object lookup during Data Pump export causing a full table scan per object) and therefore may require a merge patch - patch 21253883 is the one to go with in this case.
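Just for illustration, a network-mode import driven through the DBMS_DATAPUMP API typically looks like the minimal sketch below (the database link name SRC_LINK, the job name and the SCOTT schema are made up for this example) - this is the kind of call sequence that runs into the ORA-31600 above when the bug is present:

DECLARE
  h NUMBER;
  s VARCHAR2(30);
BEGIN
  -- open a network-mode (NETWORK_LINK) schema import job
  h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                          job_mode    => 'SCHEMA',
                          remote_link => 'SRC_LINK',
                          job_name    => 'NET_IMP_DEMO');
  -- restrict the job to one example schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, s);
END;
/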


Another issue Roy just came across:

Data Pump is giving bad performance in Oracle 12.1.0.2 when the STATUS parameter option is used.

Symptoms are: 

It looks like the routines we are using to get status are significantly slower in 12c than in 11g. On 11.2.0.4 a STATUS call of expdp/impdp runs in 0.2-0.3 seconds, but in 12.1.0.2 it takes 0.8-1.6 seconds. As a result the client falls behind on 12.1.0.2; it is taking about 0.5-0.8 seconds to put out each line in the logfile because it is getting the status each time. With over 9000 tables in a test that half a second really adds up. The result in this test case was that the data pump job completed in 35 minutes, but it took another 30-35 minutes to finish putting out messages on the client (the log file was already complete) and return control to the command line.

Recommendation is:  

Don't use the STATUS parameter on the expdp/impdp command line in Oracle 12.1.0.2 until the issue is fixed. This will be tracked under Bug 21123545.
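Just as an illustration (directory and file names are made up), the affected usage is the STATUS=n option on the command line; attaching to the job and querying the status on demand in interactive mode may be a less intrusive alternative:

$ impdp system/*** directory=dp_dir dumpfile=big.dmp logfile=imp.log status=60   # affected: client polls the status every 60 seconds
$ impdp system/*** directory=dp_dir dumpfile=big.dmp logfile=imp.log             # not affected
$ impdp system/*** attach=SYS_IMPORT_FULL_01
Import> status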

--Mike 

Monday Jun 29, 2015

Premier Support for Oracle 11.2 ended months ago

I have blogged many times about the End of Premier Support for Oracle Database 11.2 and the different stages and regulations for Extended Support for this release. But still I'm getting inquiries almost every day, internally and externally.

Let me first point out that I'm not a rep of Oracle Support. I'm just a Product Manager with some Oracle Support background. So you'll always have to consult our officially available documents and sources, as regulations may change after this blog post is published. I'm just trying to summarize what I know at the moment to help you avoid some of the usual misunderstandings.

Premier Support for Oracle 11.2 has already ended

Fact: Premier Support for Oracle 11.2 ended on Jan 31, 2015. Period. Regardless of the patch level you may be on. No further discussion here please. Just for your notes, this happened a while ago in case you've missed the date:

Extended Support for Oracle 11.2 has started on Feb 1, 2015

  • For Oracle Database 11.2.0.1 there's no such Extended Support offering available.
    This information can be found in MOS Note:742060.1.
  • For Oracle Database 11.2.0.2 there's no such Extended Support offering available.
    This information can be found in MOS Note:742060.1.
  • For Oracle Database 11.2.0.3 Oracle will waive Extended Support.
    But any patching for Oracle Database 11.2.0.3 will end on 27-AUG-2015, meaning no further bug fixes or patches for Oracle Database 11.2.0.3 (including PSUs, SPUs and BPs) after that date (please see the table below). Any bug fixing support for Oracle 11.2.0.3 (including, of course, the waived Extended Support offering) ends on that date. No extension possible.
    This information can be found in MOS Note:742060.1.
  • For Oracle Database 11.2.0.4 - the Terminal (final) Patch Set for Oracle Database 11.2 - Oracle will waive (no extra cost, no action required) the entire first year of Extended Support to every customer with a valid support contract. After 31-JAN-2016 you can purchase an additional maximum of two years of Extended Support for Oracle Database 11.2.0.4 only. 
    This information can be found in MOS Note:742060.1.

Finally please look into MOS Note:742060.1 and MOS Note:1067455.1:

End of Patching

Quoting from MOS Note:1067455.1:

Support for Patch Set 11.2.0.3
Under the new policy for overlapping Patch Set support, 11.2.0.3 will continue to be supported through 27 August 2015. The last Critical Patch Updates (Security Patch Update and Patch Set Update) for 11.2.0.3 will be released in July 2015. After then you will need to be running on 11.2.0.4 or a supported 12.1 Patch Set to get Critical Patch Updates.

I have learned in the past weeks that some information in the official Support documents (Lifetime Support Policy), such as the Brochure for Technology Products (covering of course the database, but also a clear description of the different levels of support), is not clear and obvious to everybody. These documents don't differentiate between the several patch releases.

Premier Extended Support Oracle Database

Therefore you will have to consult MOS Note:742060.1 for further clarification and specific dates per patch level. 

--Mike 

Tuesday Jun 23, 2015

Package Differences between Oracle 11.2.0.4 and 12.1.0.2?

The question sounds pretty trivial:

Which packages did exist in Oracle 11.2.0.4 but don't exist in Oracle 12.1.0.2 anymore?

And going a bit deeper into the topic,

Which procedure and function calls of SYS packages have been changed since then?


A colleague of mine (thanks to Thomas Kempkens from ACS Support for writing up a fancy procedure - all credits go to him) had to dig this out for a customer.
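If you'd like to produce a similar comparison yourself, a very simple (and much less fancy) sketch is a MINUS query over a database link pointing into the old release - here SRC11204 is a hypothetical link into the 11.2.0.4 database, and the query only covers SYS-owned packages, so your results may differ slightly from the lists below:

select object_name from dba_objects@src11204
 where owner = 'SYS' and object_type = 'PACKAGE'
minus
select object_name from dba_objects
 where owner = 'SYS' and object_type = 'PACKAGE'
 order by 1;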

See the result.
These packages don't exist in Oracle 12.1.0.2 anymore but did exist in Oracle 11.2.0.4 (without PSUs):

  • CWM2_OLAP_INSTALLER
  • DBMS_AMD
  • DBMS_APPCTX
  • DBMS_DBLINK
  • DBMS_DM_IMP_INTERNAL
  • DBMS_DM_UTIL_INTERNAL
  • DBMS_DUMA_INTERNAL
  • DBMS_JDM_INTERNAL
  • DBMS_RULE_COMPATIBLE_90
  • DBMS_SQL2
  • DBMS_XDSUTL
  • DBMS_XS_SECCLASS_INT
  • DBMS_XS_SECCLASS_INT_FFI
  • EXF$DBMS_EXPFIL_SYSPACK
  • HTMLDB_SYSTEM
  • ODM_ABN_MODEL
  • SQLJUTL2

17 packages don't exist anymore.

And this is the list of
Procedures and Functions Calls inside of Oracle SYS packages which have changed in Oracle 12.1.0.2:

  • PACKAGE_NAME               SUBTYPE    SUBTYPE_NAME
  • -------------------------- ---------- ------------------------------------------------
  • CWM2_OLAP_INSTALLER        PROCEDURE  VALIDATE_CWM2_INSTALL
  • DBMS_AMD                   PROCEDURE  MOVE_OLAP_CATALOG
  • DBMS_APPCTX                PROCEDURE  CLEAR_CONTEXT
  • DBMS_APPCTX                PROCEDURE  SET_CONTEXT
  • DBMS_AQADM_SYS             PROCEDURE  GET_CORR_MSG
  • DBMS_AQADM_SYS             PROCEDURE  RESET_CORR_MSG
  • DBMS_AQADM_SYSCALLS        PROCEDURE  KWQA_3GL_UPDPERSTATS
  • DBMS_DATA_MINING           FUNCTION   GET_MODEL_DETAILS_ABN RETURNS DM_ABN_DETAILS
  • DBMS_DATA_MINING_INTERNAL  PROCEDURE  CREATE_TEMP_META_TABLE
  • DBMS_DBLINK                PROCEDURE  UPGRADE
  • DBMS_DDL_INTERNAL          FUNCTION   GEN_NEW_OBJECT_ID RETURNS BINARY_INTEGER
  • DBMS_DM_EXP_INTERNAL       FUNCTION   GET_TRACE RETURNS BOOLEAN
  • DBMS_DM_EXP_INTERNAL       PROCEDURE  DELETE_FROM_TEMP
  • DBMS_DM_EXP_INTERNAL       PROCEDURE  DO_TEMP_TABLE
  • DBMS_DM_EXP_INTERNAL       PROCEDURE  DUMP_STATUS
  • DBMS_DM_EXP_INTERNAL       PROCEDURE  GET_LIST_MODELS
  • DBMS_DM_EXP_INTERNAL       PROCEDURE  INSERT_LIST_MODELS
  • DBMS_DM_EXP_INTERNAL       PROCEDURE  SET_TRACE
  • DBMS_DM_IMP_INTERNAL       PROCEDURE  REGISTER_TEMP_TABLE
  • DBMS_DM_IMP_INTERNAL       PROCEDURE  UNREGIST_TEMP_TABLE
  • DBMS_DM_MODEL_EXP          FUNCTION   INSTANCE_INFO_EXP RETURNS VARCHAR2
  • DBMS_DM_MODEL_EXP          FUNCTION   SCHEMA_INFO_EXP RETURNS VARCHAR2
  • DBMS_DM_MODEL_EXP          PROCEDURE  SCHEMA_CALLOUT
  • DBMS_DM_UTIL               PROCEDURE  DISPLAY_MSG
  • DBMS_DM_UTIL               PROCEDURE  DUMP_MSG
  • DBMS_DM_UTIL_INTERNAL      PROCEDURE  DUMP_DM_TRACE
  • DBMS_DUMA_INTERNAL         FUNCTION   GET_PROPERTY RETURNS NUMBER
  • DBMS_DUMA_INTERNAL         FUNCTION   READ_LONG_TO_BLOB RETURNS BLOB
  • DBMS_DUMA_INTERNAL         PROCEDURE  CONVERT_LONG
  • DBMS_DUMA_INTERNAL         PROCEDURE  SCAN_TABLE
  • DBMS_DUMA_INTERNAL         PROCEDURE  WRITE_BLOB_TO_LONG
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  EDS_EVOLVE_TABLE_1_FINISH
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  END_INSTANTIATION
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  INSTANTIATE_FEATURE
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  LOCK_TABLES
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  PREPARE_INSTANTIATION
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  REGISTER_SCHEMA
  • DBMS_INTERNAL_LOGSTDBY     PROCEDURE  UNSUPPORTED_DML
  • DBMS_JAVA                  FUNCTION   DEPLOY_INVOKE RETURNS VARCHAR2
  • DBMS_JAVA                  FUNCTION   DROP_JAR RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   FINISH_LOADING_JAR RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   GETSOURCECHUNK RETURNS VARCHAR2
  • DBMS_JAVA                  FUNCTION   HANDLEMD5 RETURNS RAW
  • DBMS_JAVA                  FUNCTION   INITGETSOURCECHUNKS RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   JAR_STATUS RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   RJBC_INIT RETURNS VARCHAR2
  • DBMS_JAVA                  FUNCTION   RJBC_NORMALIZE RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   RJBC_REQUEST RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   RJBC_RESPOND RETURNS NUMBER
  • DBMS_JAVA                  FUNCTION   START_LOADING_JAR RETURNS NUMBER
  • DBMS_JAVA                  PROCEDURE  DEPLOY_COPY
  • DBMS_JAVA                  PROCEDURE  DEPLOY_OPEN
  • DBMS_JAVA                  PROCEDURE  RJBC_DONE
  • DBMS_JAVA                  PROCEDURE  RJBC_OUTPUT
  • DBMS_JDM_INTERNAL          FUNCTION   GET_JDM_ATTR_NAMES RETURNS JDM_ATTR_NAMES
  • DBMS_JDM_INTERNAL          FUNCTION   GET_JDM_NUM_VALS RETURNS JDM_NUM_VALS
  • DBMS_JDM_INTERNAL          FUNCTION   GET_JDM_STR_VALS RETURNS JDM_STR_VALS
  • DBMS_JDM_INTERNAL          FUNCTION   TO_CHAR_VARRAY RETURNS VARCHAR2
  • DBMS_JDM_INTERNAL          FUNCTION   UNIQUE_OBJECT_NAME RETURNS VARCHAR2
  • DBMS_JDM_INTERNAL          PROCEDURE  APPLY_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  BUILD_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  EXPLAIN_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  EXPORT_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  IMPORT_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  PREDICT_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  PROFILE_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  SQL_APPLY_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  TEST_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  XFORM_SEQ_TASK
  • DBMS_JDM_INTERNAL          PROCEDURE  XFORM_TASK
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  ADDXTENDEDPKLOGGROUP
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  DROPXTENDEDPKLOGGROUP
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  KRVGDRM2
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  LOGMNR_KRVILD
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  LOGMNR_KRVIULD
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  SET_PARAMETER
  • DBMS_LOGMNR_INTERNAL       PROCEDURE  UNSET_PARAMETER
  • DBMS_LOGSTDBY              FUNCTION   VALIDATE_AUTH RETURNS BOOLEAN
  • DBMS_LOGSTDBY              PROCEDURE  EDS_EVOLVE_TABLE
  • DBMS_METADATA_INT          PROCEDURE  CLEAR_CACHE
  • DBMS_METADATA_INT          PROCEDURE  CLEAR_XSL_CACHE
  • DBMS_OBJECTS_UTILS         FUNCTION   SPLIT_SOURCE RETURNS NUMBER
  • DBMS_OBJECTS_UTILS         PROCEDURE  RECOMPILE_TYPES
  • DBMS_OBJECTS_UTILS         PROCEDURE  UPDATE_TYPES
  • DBMS_REPORT                FUNCTION   STORE_FILE RETURNS NUMBER
  • DBMS_REPORT                PROCEDURE  CREATE_SHARED_DIRECTORY
  • DBMS_REPORT                PROCEDURE  DROP_SHARED_DIRECTORY
  • DBMS_RMIN                  PROCEDURE  INSTALL
  • DBMS_RMIN                  PROCEDURE  RM$_CALIBRATE_IO
  • DBMS_RMIN                  PROCEDURE  RM$_CLRPAREA
  • DBMS_RMIN                  PROCEDURE  RM$_CONSUMER_GROUP_MAPPING_PRI
  • DBMS_RMIN                  PROCEDURE  RM$_CRTPAREA
  • DBMS_RMIN                  PROCEDURE  RM$_CUP_CATEGORY
  • DBMS_RMIN                  PROCEDURE  RM$_CUP_GROUP
  • DBMS_RMIN                  PROCEDURE  RM$_CUP_PLAN
  • DBMS_RMIN                  PROCEDURE  RM$_CUP_PLAN_DIRECTIVE
  • DBMS_RMIN                  PROCEDURE  RM$_CUP_STORAGE_POOL_MAPPING
  • DBMS_RMIN                  PROCEDURE  RM$_DRP_CATEGORY
  • DBMS_RMIN                  PROCEDURE  RM$_DRP_GROUP
  • DBMS_RMIN                  PROCEDURE  RM$_DRP_PLAN
  • DBMS_RMIN                  PROCEDURE  RM$_DRP_PLAN_CSD
  • DBMS_RMIN                  PROCEDURE  RM$_DRP_PLAN_DIRECTIVE
  • DBMS_RMIN                  PROCEDURE  RM$_SET_CAPABILITY
  • DBMS_RMIN                  PROCEDURE  RM$_SET_CONSUMER_GROUP_MAPPING
  • DBMS_RMIN                  PROCEDURE  RM$_SET_INSTANCE_CAPABILITY
  • DBMS_RMIN                  PROCEDURE  RM$_SUBPAREA
  • DBMS_RMIN                  PROCEDURE  RM$_SWTCH_SESSION_GROUP
  • DBMS_RMIN                  PROCEDURE  RM$_SWTCH_USR_GROUP
  • DBMS_RMIN                  PROCEDURE  RM$_VLDPAREA
  • DBMS_RMIN                  PROCEDURE  UNINSTALL
  • DBMS_RULE_COMPATIBLE_90    FUNCTION   INCOMPATIBLE_RULES_ENGINE_OBJ RETURNS NUMBER
  • DBMS_RULE_COMPATIBLE_90    PROCEDURE  DOWNGRADE_RULE_OBJECTS
  • DBMS_RULE_COMPATIBLE_90    PROCEDURE  INCOMPATIBLE_EVCTXS
  • DBMS_RULE_COMPATIBLE_90    PROCEDURE  INCOMPATIBLE_RULES
  • DBMS_RULE_COMPATIBLE_90    PROCEDURE  INCOMPATIBLE_RULESETS
  • DBMS_RULE_COMPATIBLE_90    PROCEDURE  UPGRADE_RULE_OBJECTS
  • DBMS_SPM_INTERNAL          FUNCTION   FETCH_PLAN_INFO RETURNS REF CURSOR
  • DBMS_SPM_INTERNAL          PROCEDURE  GET_SPM_HINTSET
  • DBMS_SQL2                  PROCEDURE  EXPAND_SQL_TEXT
  • DBMS_SQL2                  PROCEDURE  PARSE
  • DBMS_STATS                 FUNCTION   TO_CASCADE_TYPE RETURNS BOOLEAN
  • DBMS_STATS                 FUNCTION   TO_DEGREE_TYPE RETURNS NUMBER
  • DBMS_STATS                 FUNCTION   TO_ESTIMATE_PERCENT_TYPE RETURNS NUMBER
  • DBMS_STATS                 FUNCTION   TO_PUBLISH_TYPE RETURNS BOOLEAN
  • DBMS_STATS_INTERNAL        FUNCTION   COMPOSE_HASHVAL_CLOB RETURNS CLOBTAB
  • DBMS_STATS_INTERNAL        FUNCTION   GET_CHTAB RETURNS CHTAB
  • DBMS_STATS_INTERNAL        FUNCTION   GET_SYNOPSIS_BLKCNT RETURNS NUMBER(38)
  • DBMS_STATS_INTERNAL        PROCEDURE  DELETE_TABLE_SYNOPSIS
  • DBMS_STATS_INTERNAL        PROCEDURE  EXPORT_SYNOPSIS_HEAD
  • DBMS_SWRF_REPORT_INTERNAL  FUNCTION   TO_1000S RETURNS VARCHAR2
  • DBMS_SWRF_REPORT_INTERNAL  FUNCTION   TO_1024S RETURNS VARCHAR2
  • DBMS_WORKLOAD_REPLAY       FUNCTION   SYNCPOINT_WAIT_TO_POST RETURNS NUMBER
  • DBMS_WRR_INTERNAL          FUNCTION   DATE_TO_EPOCH_SECS RETURNS BINARY_INTEGER
  • DBMS_WRR_INTERNAL          FUNCTION   IS_CONSOLIDATED_DIR RETURNS BOOLEAN
  • DBMS_XDS                   PROCEDURE  REFRESH_DSD
  • DBMS_XDSUTL                PROCEDURE  INVALIDATE_DSD_CACHE
  • DBMS_XDSUTL                PROCEDURE  INVALIDATE_DSD_CACHE_BY_ACLID
  • DBMS_XS_SECCLASS_INT       PROCEDURE  DELETE_SEC_CLASS
  • DBMS_XS_SECCLASS_INT       PROCEDURE  GET_LEAF_PRIVILEGES
  • DBMS_XS_SECCLASS_INT_FFI   PROCEDURE  DELETE_SEC_CLASS
  • DBMS_XS_SECCLASS_INT_FFI   PROCEDURE  GET_LEAF_PRIVILEGES
  • DBMS_XS_SESSIONS           FUNCTION   CHECK_ROLE RETURNS BOOLEAN
  • DBMS_XS_SESSIONS           FUNCTION   CHECK_SESSION RETURNS BOOLEAN
  • DBMS_XS_SESSIONS           FUNCTION   GET_SESSIONID_FROM_COOKIE RETURNS RAW
  • DBMS_XS_SESSIONS           PROCEDURE  ABORTATTACH_SESSION
  • DBMS_XS_SESSIONS           PROCEDURE  GET_SESSION_COOKIE
  • DBMS_XS_SESSIONS           PROCEDURE  SET_NAMESPACE_ACL
  • DBMS_XS_SESSIONS           PROCEDURE  SET_NAMESPACE_HANDLER
  • DBMS_XS_SESSIONS_FFI       PROCEDURE  ABORTATTACH_SESSION
  • DBMS_XS_SESSIONS_FFI       PROCEDURE  CHECK_ROLE
  • DBMS_XS_SESSIONS_FFI       PROCEDURE  CHECK_SESSION
  • DBMS_XS_SESSIONS_FFI       PROCEDURE  GET_SESSION_COOKIE
  • DBMS_XS_SESSIONS_FFI       PROCEDURE  SET_NAMESPACE_ACL
  • DBMS_XS_SESSIONS_FFI       PROCEDURE  SET_NAMESPACE_HANDLER
  • DMP_SEC                    FUNCTION   GET_MODEL_DETAILS_ABN RETURNS VARCHAR2(32767)
  • DMP_SYS                    PROCEDURE  DUMP_MODEL_REC
  • DMP_SYS                    PROCEDURE  DUMP_TRACE
  • DMP_SYS                    PROCEDURE  DUMP_VARCHAR2A
  • DMP_SYS                    PROCEDURE  DUMP_VARCHAR2S
  • DM_QGEN                    PROCEDURE  LS_DUMP
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_CMTSCN_FOR_XID RETURNS NUMBER
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_EXPRID_4M_TEXTID RETURNS VARCHAR2
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_SPATIAL_MDATAREP RETURNS VARCHAR2
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_TAB_NAME RETURNS VARCHAR2
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_TAB_OWNER RETURNS VARCHAR2
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_TYPELV_STATUS RETURNS VARCHAR2
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   GET_TYPE_OID RETURNS RAW
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   PROC_IS_DEFINERS RETURNS NUMBER
  • EXF$DBMS_EXPFIL_SYSPACK    FUNCTION   TAB_COLUMN_EXISTS RETURNS NUMBER
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  CLR_EXP_DEPACTION
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  CLR_OBJECT_NOEXP
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  CLR_USER_NOEXP
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  DOWNGRADE_COMPILED_SPARSE
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  PARSE_PROC_AS_OWNER
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  PARSE_PROC_AS_OWNER2
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  SET_EXP_DEPACTION
  • EXF$DBMS_EXPFIL_SYSPACK    PROCEDURE  SET_OBJECT_NOEXP
  • HTMLDB_SYSTEM              FUNCTION   VERIFY_USER RETURNS BINARY_INTEGER
  • INITJVMAUX                 FUNCTION   DO_JIS_DROP RETURNS BOOLEAN
  • KUPD$DATA                  FUNCTION   CONVENTIONAL_LOAD_CALLOUT RETURNS BINARY_INTEGER
  • KUPF$FILE_INT              FUNCTION   GET_ENCPWD RETURNS RAW
  • KUPF$FILE_INT              FUNCTION   GET_OBFPWD RETURNS RAW
  • KUPF$FILE_INT              PROCEDURE  ENCRYPT_PWD
  • KUPP$PROC                  PROCEDURE  INTALGCONV
  • LOGMNR_DICT_CACHE          PROCEDURE  PURGE_SCN
  • LOGSTDBY_INTERNAL          PROCEDURE  EDS_EVOLVE_TABLE_1_I
  • ODM_ABN_MODEL              PROCEDURE  BUILD
  • ODM_ABN_MODEL              PROCEDURE  ESTIMATE_FEATUREBLD_TIME
  • ODM_ABN_MODEL              PROCEDURE  GET_MODEL_STATE
  • ODM_CLUSTERING_UTIL        FUNCTION   IS_DEBUG_ENABLED RETURNS BOOLEAN
  • ODM_MODEL_UTIL             FUNCTION   UNIQUE_SEQUENCE_NAME RETURNS VARCHAR2(30)
  • ODM_MODEL_UTIL             PROCEDURE  CREATE_SEQUENCE
  • ODM_MODEL_UTIL             PROCEDURE  DEBUG_DUMP
  • ODM_MODEL_UTIL             PROCEDURE  DROP_SEQUENCE
  • ODM_MODEL_UTIL             PROCEDURE  GET_TIME_MILLISECONDS
  • ODM_UTIL                   FUNCTION   COMPUTE_LOG_COMBINATIONS RETURNS NUMBER
  • PRVT_RTADDM                FUNCTION   EXTRACTOSSTAT RETURNS NUMBER
  • PRVT_RTADDM                FUNCTION   GETDATA RETURNS CLOB_T
  • PRVT_RTADDM                FUNCTION   GETVERSION RETURNS VARCHAR2
  • SQLJUTL2                   FUNCTION   EVALUATE RETURNS LONG RAW
  • SQLJUTL2                   FUNCTION   INVOKE RETURNS LONG RAW
  • SQLJUTL2                   FUNCTION   REFLECT RETURNS LONG
  • SQLJUTL2                   FUNCTION   REFLECT2 RETURNS CLOB
  • STANDARD                   FUNCTION   ASCII RETURNS BINARY_INTEGER

204 procedure and function calls have been changed.

-- Mike

PS: The above listing is without PSUs applied - I learned that the list may vary slightly based on the PSUs or BPs you have applied. Credits to my colleague Katsumi-san, who pointed me to the DBMS_OPTIM_BUNDLE package, which seems to have been introduced with the October 2014 PSU. Philippe Fierens has blogged about the package: http://pfierens.blogspot.co.uk/2014_10_01_archive.html

Monday Jun 22, 2015

Java in the database - OJVM non-rolling patches

Question:

How can I find out if Oracle's JVM is used in my database?

Answer:

This is unfortunately not as trivial as I thought initially ...
Let's start with:



Prior to Oracle version 11.2, there was no way to confirm that the Oracle JVM is not actively used in the database.

However, what can be said is:
1) If there are non-Oracle schemas that contain java objects, then 3rd party products or user defined java programs could be actively using the Oracle JVM.
2) If there are Oracle schemas, other than SYS, that contain java objects, then other Oracle products or Oracle Applications could be actively using the Oracle JVM.  (For example, ORDSYS schema for Oracle Intermedia and APPS schema for Oracle Applications).
3) Even if all java objects are owned by SYS schema, there might still be user defined java objects in the SYS schema. 

If the total number of java objects owned by SYS is much greater than the totals shown above, then this is likely.  However, the totals shown above are for a fully installed Oracle JVM.  If the JVM is not fully installed, then the existence of user defined java objects in the SYS schema could still make the total number of java objects exceed the above totals. Therefore, there is no way to guarantee that the Oracle JVM is not in use.

For Oracle version 11.2 or later query the DBA_FEATURE_USAGE_STATISTICS view to confirm if the Java features are being used.
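A quick sketch of such a check (the LIKE pattern is just an assumption - feature names may differ slightly between releases):

select name, version, detected_usages, currently_used, last_usage_date
  from dba_feature_usage_statistics
 where lower(name) like '%java%'
 order by name, version;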

I'm not a JAVA/OJVM expert but I'd do the following:

  1. Check how many JAVA objects exist in your database:
    select owner, status, count(*) from all_objects 
            where object_type like '%JAVA%' group by owner, status;
  2. If the result is equal to 29211 in Oracle 12c (see MOS Note: 397770.1 for the numbers in different releases) then I'd silently assume that JAVA is not in use inside the database as there are no additional user defined objects.
  3. In addition you may run the script from MOS Note:456949.1 (Script to Check the Status or State of the JVM within the Database) to check for any user defined JAVA objects in your database
  4. Anyhow, before doing anything to your JAVA installation keep in mind that there are dependencies. The following components require the existence of a valid JAVA installation in your database:
    Oracle Multimedia (formerly known as interMedia)
    Oracle Spatial
    Oracle OLAP
    And even more important: as there are dependencies between components, there may also be dependent objects belonging to these components in your database. So it's not as simple as it looked initially - you'll have to check whether any of the dependent components is in use as well - and items 4.-6. below apply to Oracle 11.2 databases only, not to Oracle 12c:
    1. How to Determine if Spatial is Being Used in the Database? (Doc ID 726929.1)
      Please be aware that having a user defined SPATIAL SDO Geometry object will NOT increase the number of existing Java objects compared to a default installation. Roy verified this (THANKS!). So you'll have to make sure that you also check the dependent components for being in use.
    2. How To Find Out If OLAP Is Being Used (Doc ID 739032.1)
    3. How To Check If Oracle Multimedia Is Being Used In Oracle Version 11.2 (Doc ID 1088032.1)
    4. How to Determine if Ultra Search is Being Used? (Doc ID 738126.1)
    5. Warehouse builder has a note about how to uninstall it, but that (very badly written) note does not tell you how to determine whether OWB is in use
    6. Rules Manager and Expression Filter document installation and deinstallation in their developers guide
  5. And even more important: before doing anything to your JAVA installation please take a backup - even though you may believe that backups are just for wimps, you'd better take one first :-)
  6. Now the question is:
    Should you remove only JAVAVM component - or CATJAVA as well? Please see the section further below on this blog posting for more information.
    To remove only JAVAVM this script could do the job - but it will leave two INVALID Package Bodies (JAVAVM_SYS, JVMRJBCINV):
    SQL> @?/xdk/admin/rmxml.sql 
    SQL> delete from registry$ where status='99' and cid = 'JAVAVM';
    SQL> commit;

  7. The execution of the removal scripts won't de-register the component from DBA_REGISTRY - that's why the manual de-registration is necessary. 
  8. Even if I'd remove the entire JAVA stack including the XDK it will leave those two invalid objects  (JAVAVM_SYS, JVMRJBCINV).
    1. SQL> @?/rdbms/admin/catnojav.sql
    2. SQL> @?/xdk/admin/rmxml.sql
    3. SQL> @?/javavm/install/rmjvm.sql
    4. SQL> @?/rdbms/admin/utlrp.sql
    5. SQL> delete from registry$ where status='99' and cid in ('XML','JAVAVM','CATJAVA');
So honestly the best choice is always not to install things you clearly don't need instead of trying to remove those things afterwards. In this case you would now need to double check with Oracle Support whether we can safely drop the two remaining package bodies JAVAVM_SYS and JVMRJBCINV. In my environment that worked well - but obviously I can't give any official statement here.

Again, please don't get me wrong:
I'm not saying that you should remove JAVA from your databases. And please check back with Oracle Support before doing this. But the question came up so often in the past months because of the OJVM patch, which does not allow a rolling PSU upgrade anymore. For further information please see the following MOS Note:

  • Document 1929745.1 Oracle Recommended Patches -- "Oracle JavaVM Component Database PSU" (OJVM PSU) Patches

So let's do a quick experiment.

Check the installed components in a standard Oracle 12.1.0.2 database first:

SQL> select substr(comp_id,1,8) COMP_ID, substr(COMP_NAME,1,36) COMP_NAME from dba_registry;

COMP_ID    COMP_NAME
---------- ------------------------------------
DV         Oracle Database Vault
APEX       Oracle Application Express
OLS        Oracle Label Security
SDO        Spatial
ORDIM      Oracle Multimedia
CONTEXT    Oracle Text
OWM        Oracle Workspace Manager
XDB        Oracle XML Database
CATALOG    Oracle Database Catalog Views
CATPROC    Oracle Database Packages and Types
JAVAVM     JServer JAVA Virtual Machine
XML        Oracle XDK
CATJAVA    Oracle Database Java Packages
APS        OLAP Analytic Workspace
XOQ        Oracle OLAP API
RAC        Oracle Real Application Clusters

Before we'd be able to safely remove JAVAVM we will need to take out Spatial, Multimedia and OLAP (exactly in this order) as well.

Spatial removal: 

SQL> drop user MDSYS cascade;
SQL> drop user MDDATA cascade;
SQL> drop user spatial_csw_admin_usr cascade;
SQL> drop user spatial_wfs_admin_usr cascade;

After this action you'll end up with 5 invalid objects in APEX in case APEX is installed. I think you can safely ignore them as those are spatial objects in the APEX (FLOWS) schema:

PACKAGE: WWV_FLOW_SPATIAL_INT, WWV_FLOW_SPATIAL_API
PACKAGE BODY: WWV_FLOW_SPATIAL_INT, WWV_FLOW_SPATIAL_API
SYNONYM: APEX_SPATIAL

Multimedia removal:

SQL> @?/rdbms/admin/catcmprm.sql ORDIM

OLAP removal:

SQL> @?/olap/admin/olapidrp.plb
SQL> @?/olap/admin/catnoxoq.sql
SQL> @?/olap/admin/catnoaps.sql
SQL> @?/rdbms/admin/utlrp.sql

Let's do the check again:

SQL> select substr(comp_id,1,8) COMP_ID, substr(COMP_NAME,1,36) COMP_NAME from dba_registry;

COMP_ID    COMP_NAME
---------- ------------------------------------
DV         Oracle Database Vault
APEX       Oracle Application Express
OLS        Oracle Label Security
CONTEXT    Oracle Text
OWM        Oracle Workspace Manager
XDB        Oracle XML Database
CATALOG    Oracle Database Catalog Views
CATPROC    Oracle Database Packages and Types
JAVAVM     JServer JAVA Virtual Machine
XML        Oracle XDK
CATJAVA    Oracle Database Java Packages
RAC        Oracle Real Application Clusters 

12 components are still there - SDO, ORDIM, XOQ and APS are gone as expected.

JAVAVM removal: 

The question now would be: remove only the JAVAVM - or CATJAVA as well?
According to MOS Note:397770.1 it seems that removing just the JAVAVM is (a) trivial and (b) avoids having to apply the OJVM patch. So removing JAVAVM only seems to be the best way in this case. As shown above this will leave two additional leftover package bodies, JAVAVM_SYS and JVMRJBCINV.

4) Oracle JVM is not installed in the database

Do not apply the DST JVM patch.

If for some reason the patch is applied, then apply the patch to the ORACLE_HOME but DO NOT run the post install steps in the database.  This will leave unwanted java objects in the database and create an incomplete non-working Oracle JVM.  See Note 414248.1 for details.

SQL> @?/xdk/admin/rmxml.sql 
SQL> delete from registry$ where status='99' and cid = 'JAVAVM';
SQL> commit;

-- Mike 

Wednesday Jun 17, 2015

Webcast "Why Upgrade to Oracle 12c" available

In April I did a webcast, mainly for Oracle partners, to explain and elaborate on:
Why Upgrade to Oracle Database 12c?

This webcast is now available without registration etc. Watch it here (click on the picture below):

Webcast Why Upgrade To Oracle Database 12c - Mike Dietrich, April 2015

--Mike


Tuesday Jun 16, 2015

New Cloud Control 12.1.0.5 is available

The new version of Oracle's Cloud Control 12c Release 5 is available for download as of today. Some of the new features include Hybrid Cloud Computing, but also support for RMAN incremental rolling forward backups (cross-platform, cross-endianness) to decrease downtime for Transportable Tablespaces and Full Transportable Export/Import migrations.

Find it here on OTN:

http://www.oracle.com/technetwork/oem/enterprise-manager/downloads/index.html

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) New!


Enterprise Manager Base Platform
(Full Installers for OMS, Agent, Repository, Management Plug-ins)


Download for Linux x86 (32-bit)
Download for Linux x86-64 (64-bit)
Download for Windows x86-64 (64-bit)
Download for Solaris Operating System (SPARC)
Download for Solaris Operating System (x86-64)
Download for IBM AIX on POWER Systems (64-bit)
Download for HP-UX Itanium (64-bit)

--Mike

Tuesday Jun 09, 2015

Recent News about Pluggable Databases - Oracle Multitenant

Here are three recent learnings about PDBs in the Oracle Single/Multitenant space you should be aware of. Thanks to my teammates and the Multitenant PMs for bringing these onto our radar.

Unplug/plug - don't forget to DROP your PDB

I've had to add a single line to my previous blog post about the upgrade solution Unplug/Plug/Upgrade for PDBs:
https://blogs.oracle.com/UPGRADE/entry/upgrade_pdbs_one_at_a

Unplug Plug Upgrade PDB Oracle Multitenant

You'll have to DROP your PDB after you have unplugged it, as otherwise the information will stay in the dictionary of the CDB you unplugged it from - (a) forever and (b) during any subsequent upgrade of the entire source CDB. The latter will cause trouble, as catcon.pl, the PERL script used to execute SQL code in all containers, will try to open the PDB you unplugged a while ago because the information about it is still kept. To me this looks like undesired behavior, but there's discussion going on internally about it.

  • alter pluggable database PDB1 close;
  • alter pluggable database PDB1 unplug into '/stage/pdb1.xml';
  • drop pluggable database PDB1 keep datafiles;

If you don't issue the DROP PLUGGABLE DATABASE ... KEEP DATAFILES command you'll get yourself into trouble sooner or later - unless this CDB gets deleted afterwards anyway because it was just meant to hold this single PDB.
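For completeness, the plug-in on the target CDB would then look roughly like this sketch (file and PDB names taken from the example above, NOCOPY assuming the datafiles stay where they are - in the unplug/plug/upgrade case you'd open the PDB with OPEN UPGRADE first and run the upgrade as described in the linked post):

SQL> create pluggable database PDB1 using '/stage/pdb1.xml' nocopy tempfile reuse;
SQL> alter pluggable database PDB1 open;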

Unplug/plug - take care of your backup

If you use the above procedure for Unplug/Plug/Upgrade (or even just Unplug/Plug without an upgrade, from one CDB into another), be aware that you will need to take a backup of your PDB right after the upgrade has finished. This is pretty clear and obvious, but it doesn't jump into your face if you don't think about it - and it's not mentioned in the docs as far as I know.

The previous backup is useless as you won't be able to use a backup of PDB1 taken on CDB1 to recover PDB1 into CDB2. Therefore a backup taken directly after the upgrade or plug-in has to be scheduled immediately - it's a must and needs to be factored into your maintenance plans.
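A minimal sketch of such a backup, with RMAN connected to the target CDB (PDB name from the example above):

RMAN> backup pluggable database PDB1;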

Every PDB must have its own TEMP tablespace

This one is fairly new to me [thanks Hector!]:
MOS Note: 2004595.1 (PDB to Use Global CDB (ROOT) Temporary Tablespace Functionality is Missing)

Basically this means that you'll be unable to drop the local temporary tablespace of a PDB and use the global temporary tablespace (the one in CDB$ROOT) instead. This functionality is described in the docs but does not exist right now. It is logged under Bug 17554022. No major issue, but I had an intense discussion with Johannes Ahrends about it a while back at the DOAG conference - so others have seen this issue as well.

--Mike 

Tuesday Jun 02, 2015

Migrating to Unicode? Get DMU 2.1!

When you find yourself needing to perform a character set migration as part of a database upgrade or migration, the Oracle Database Migration Assistant for Unicode (DMU) is a very helpful tool. It will assess your migration needs and help automate the process, all with a very nice GUI front end that makes the whole process easier.

And now comes a really nice enhancement with DMU 2.1: automation of near-zero downtime character set migration! Here is the blurb from the DMU OTN page:

New! Oracle DMU 2.1, released in May 2015, supports a near-zero downtime migration model in conjunction with the Oracle GoldenGate replication technology. Using DMU 2.1 and GoldenGate 12.1.2.1.0 or later, you can set up a migration procedure that takes advantage of the DMU data preparation and in-place conversion capabilities while leveraging GoldenGate to replicate incremental data changes on the production system during the migration process, thereby effectively eliminating the downtime window requirement. Other new features in DMU 2.1 include migration profile support, problem data report, and transparent repository upgrade. Please see the DMU 2.1 Release Notes for changes since the 2.0 release.

See the OTN Page for DMU to get all the latest information.  

Wednesday May 27, 2015

Removing Options from the Oracle Database kernel in 12c

Remove Options from the Oracle Database Kernel - chopt

Sometimes people have the desire to remove options from the database kernel (i.e. from the oracle executable).

It's a matter of fact that by default you'll get plenty of things linked into your kernel in Oracle Database 12c.

In case you'd like to remove things, the chopt utility still exists in Oracle Database 12c - but you may notice a difference between Oracle 11.2 and Oracle 12.1. Anyhow, ideally you'll make these changes directly after the installation has been completed and before you create a database. See the documentation for Post Installation Tasks first.

Now let's call chopt and see what it tells us on the command prompt:

$ chopt

usage:
chopt <enable|disable> <option>

options:
                  dm = Oracle Data Mining RDBMS Files
                olap = Oracle OLAP
        partitioning = Oracle Partitioning
                 rat = Oracle Real Application Testing
e.g. chopt enable rat 

For a first try I'm unlinking Data Mining:

$ chopt disable dm

Writing to /u01/app/oracle/product/12.1.0.2/install/disable_dm.log...
/usr/bin/Xmake -f /u01/app/oracle/product/12.1.0.2/rdbms/lib/ins_rdbms.mk dm_off ORACLE_HOME=/u01/app/oracle/product/12.1.0.2
/usr/bin/Xmake -f /u01/app/oracle/product/12.1.0.2/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/12.1.0.2

I tested all the 4 available chopt options and this is the result when you exit SQL*Plus afterwards:

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

versus before with all options still linked into the kernel:

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

But how about all the other options that were available in releases before Oracle Database 12c, such as lbac_on|off (Label Security) or dv_on|off (Database Vault)? If you refer to the list of options to link on/off published by Gokhan Atil years back in his blog you may find more things to try out.

Let's give it a try with Label Security:

$ /usr/bin/Xmake -f /u01/app/oracle/product/12.1.0.2/rdbms/lib/ins_rdbms.mk lbac_off ORACLE_HOME=/u01/app/oracle/product/12.1.0.2
lbac_off has been deprecated

Ah, very smart ;-) It signals that you can't link those things off anymore. The same would happen with dv_off.

And how about things which are from older sources, such as Spatial Data (sdo_on|off)?

$ /usr/bin/Xmake -f /u01/app/oracle/product/12.1.0.2/rdbms/lib/ins_rdbms.mk sdo_off ORACLE_HOME=/u01/app/oracle/product/12.1.0.2
Warning: sdo is always turned on. sdo_off is disabled.

In this case it would be simple: You'd take out SDO from the components inside the database - and there's no need to unlink anything.

In case you wonder why I always write Xmake in this blog post (which is not correct - it is "make" of course):
Our blog software does not allow me to publish an article saying /usr/bin/m... :-( Sorry for the inconvenience.

--Mike 

Tuesday May 26, 2015

Oracle 12.1.0.2 - Security Behavior Change with non-SYSDBA Triggers

Sometimes things get revealed on unexpected occasions. This one happened during a recent customer upgrade to Oracle Database 12c with a 3rd party geospatial application installed (ESRI).

At the very end of the upgrade the customer saw many ORA-1031 (insufficient privileges) errors and it seemed to be that nothing was working correctly anymore. 

This happened during the run of catupend.sql. The following code path in  catupend.sql causes the error.

cursor ddl_triggers is                                       
   select o.object_id from dba_triggers t, dba_objects o     
    where t.owner = o.owner and t.trigger_name = o.object_name
      and o.object_type = 'TRIGGER'                          
      and (t.triggering_event like '%ALTER%' or              
    t.triggering_event like '%DDL%');     


ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of
SDE.DB_EV_ALTER_ST_METADATA
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_UTILITY", line 1279
ORA-06512: at line 20

Apparently there's no access to an application trigger anymore - one which got deployed as a system trigger (for more information about ESRI's system triggers please click this link). Even though this is strange it doesn't seem like a big issue at first. But in fact it is, as this procedure failed and caused other objects not to be validated correctly. So subsequent actions (for instance the run of utlu121s.sql, the post upgrade script) failed with ORA-1031 as well, pointing to DBMS_UTILITY.

The customer [Danke Andy!!!] found the workaround themselves by pattern matching similar issues in MOS and trying some grants - the 3rd one did the trick:

  • GRANT ADMINISTER DATABASE TRIGGER to SDE;


So it was obvious that something in the security architecture of Oracle Database 12.1.0.2 had been changed - and somebody forgot to document it. Later on I learned that this change was introduced with the July 2013 PSU/CPU as well. I don't blame the customer for not applying PSUs for almost two years - I knew that upfront, and we are implementing a 2-PSUs-per-year strategy now with the upgrade to Oracle Database 12c.

The system trigger ESRI had created couldn't be validated anymore under the context of the SDE (ESRI's application) user. Therefore it failed but caused other actions to fail as well.  
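If you want to check upfront whether your database contains such non-SYS owned system/DDL triggers before you upgrade, a query along the lines of the catupend.sql cursor shown above will list them (just a sketch):

SQL> select owner, trigger_name, triggering_event
       from dba_triggers
      where owner <> 'SYS'
        and (triggering_event like '%ALTER%' or triggering_event like '%DDL%');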

This behavior change is related to "SYSDBA privilege should not be available in non-SYS owned DR procedure / trigger execution", which was first fixed in 12.1.0.2 and then backported as part of the July 2013 CPU.
When SYS executes a non-SYS owned definer's rights (DR) procedure or a trigger, the SYS privileges are not available during the procedure/trigger execution. The procedure/trigger owner's privileges prevail.

--Mike

Patching Best Practices - 38min Video

Ever asked yourself about Database Patching Best Practices?

I know that not everything always works that simply and easily. See the blog post by Pieter Van Puymbroeck from Exitas in Belgium about some findings when applying the April 2015 PSU to an Exadata, or mine about my experiences when applying the GI PSU from January 2015.

But this 38 min video may give you a good overview about the best and recommended techniques at first hand by Eleanor Meritt and David Price, both from the Sustaining Engineering organization that issues bug fixes, patches and patchset updates for the Oracle Database, Enterprise Manager and Fusion Middleware product area.

Building on a highly popular session from Oracle OpenWorld 2013, this 2014 OpenWorld session further explores ways to help you maintain and patch your database systems most efficiently. Learn about patch testing best practices, techniques for minimizing downtimes, how to best rollout patches in cloud environments, and more. The presentation also shares the latest Oracle Database 12c features and tooling to help ease patching processes.

-Mike 


Monday May 18, 2015

Create a PDB directly from a stand-alone database?

The documentation offers a well-hidden feature for migrating a database into the universe of Oracle Single-/Multitenant:

Remote Cloning with the NON$CDB option.

The documentation doesn't say much about this option - neither the requirements nor the exact syntax, and there is no example:
http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#SQLRF55686

Scroll down to the FROM clause:

... FROM NON$CDB@dblink ... this option will be able to plug in a stand-alone database and make it a pluggable database. Sounds interesting, let's try it.

Test 1 - Try to plugin an Oracle 11.2.0.4 database

Well, the documentation doesn't say anything anywhere about source release limitations. So I simply tried it with an Oracle 11.2.0.4 database.

  1. Created a database link from my existing CDB pointing into my 11.2.0.4 database
  2. Started my SOURCEDB in read-only mode
  3. Tried to create a pluggable database from my SOURCEDB - and failed ...
    SQL> create pluggable database PDB1 from non$cdb@sourcedb;
    create pluggable database PDB1 from non$cdb@sourcedb
                                                 *
    ERROR at line 1:
    ORA-17627: ORA-28002: the password will expire within 7 days
    ORA-17629: Cannot connect to the remote database server

Test 2 - Try to plugin an Oracle 12.1.0.2 database in file system 

Thanks to Tim Hall - his blog post did the magic trick for me:
http://oracle-base.com/articles/12c/multitenant-clone-remote-pdb-or-non-cdb-12cr1.php#cloning-remote-non-cdb

First of all, the reason why my Test 1 failed is simply that I can't have a user in an Oracle 11.2.0.4 database with the privilege CREATE PLUGGABLE DATABASE - but this is a requirement as I learned later on.

  1. You'll need a user in SOURCEDB with the privilege to CREATE PLUGGABLE DATABASE:
    GRANT CREATE PLUGGABLE DATABASE TO sourcedb_user;
  2. Start SOURCEDB in read-only mode after shutting it down:
    SHUTDOWN IMMEDIATE
    STARTUP OPEN READ ONLY;
  3. Create a database link pointing from the CDB back into the SOURCEDB:
    CREATE DATABASE LINK sourcedblink
    CONNECT TO sourcedb_user IDENTIFIED BY password USING 'upgr12';
  4. Now create the pluggable database from the stand-alone UPGR12 database:
    CREATE PLUGGABLE DATABASE pdb_upgr12 FROM NON$CDB@sourcedblink
    FILE_NAME_CONVERT=('/oradata/UPGR12','/oradata/CDB2/pdb_upgr12');
  5. But when you check the status of the new PDB you'll realize it is OPEN but only in RESTRICTED mode. Therefore noncdb_to_pdb.sql needs to be run. Connect to the new PDB and start the script:
    ALTER SESSION SET CONTAINER=pdb_upgr12;
    @?/rdbms/admin/noncdb_to_pdb.sql

Summary

What will you get from this command? Actually it will allow a simple way to plug in a stand-alone database into a container database but the following restrictions apply:

  • Source database must be at least Oracle 12.1.0.1
  • Source database must be on the same OS platform
  • Source database must be at the same (equal) version as the container database
  • Script noncdb_to_pdb.sql needs to be run

You may have a look at this MOS Note:1928653.1 Example for Cloning PDB from NON-CDB via Dblink as well [Thanks Krishnakumar for pointing me to this note].

Finally, the only simplification seems to be avoiding the extra step of creating the XML manifest file with DBMS_PDB.DESCRIBE - apart from that I can't see many other benefits, except for easing remote cloning within the above restrictions.

--Mike 


Monday May 11, 2015

New version of the BIG 12c SLIDE DECK available

I'm not on vacation right now. I'm just very busy traveling between customer onsite visits, customer meetings and workshops, internal and external workshops, testing sessions ... and so on ... 

A lot of stuff has been stacked up in my inbox regarding changes in Oracle 12c, changed behavior - and thanks for your inputs. I will blog on it as soon as I have understood the issue and the solution - so plenty of stuff should come in the next weeks ;-)

Just end of last week Roy and I uploaded a new version of our BIG slide deck - now having even more slides (but still not over 550) *haha* ;-)

Find them here - and as always - the change log at the end of the deck will explain what we've recently adjusted/added/altered.

Upgrade, Migrate and Consolidate to Oracle Database 12c

--Mike 

Thursday Apr 23, 2015

CDBs with less options now supported in Oracle 12.1.0.2

When Oracle Multitenant was launched, Roy and I - amongst many other people - always mentioned that the requirement of having all options in a Container Database (CDB$ROOT), and therefore also in the PDB$SEED (with the immediate result that all PDBs provisioned from the PDB$SEED have all options as well), would hinder customer adoption significantly.

Almost all customers I have talked to in the past 3-4 years about Oracle Multitenant mentioned immediately that it would be a huge problem for them to install all options, as (1) their policy is to install only things they are licensed for, and (2) they want to prevent developers, users and DBAs from using things accidentally without even knowing that this or that requires a license.

As it is not allowed to manipulate and change the PDB$SEED, the workaround - as PDBs were allowed to have fewer options - has been to create a stand-alone Oracle 12c database with exactly the options you'd like to have configured as your gold standard, and then plug it in under a recognizable name, for instance PDB$MASTER. Switch it to read only and make sure that from now on you always provision a new PDB as a clone from PDB$MASTER, and not from PDB$SEED.
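Provisioning a new PDB from such a master is then just a clone while PDB$MASTER is open read only - a sketch with made-up paths:

SQL> alter pluggable database PDB$MASTER open read only;
SQL> create pluggable database PDB26 from PDB$MASTER
       file_name_convert=('/oradata/CDB1/pdbmaster/','/oradata/CDB1/pdb26/');
SQL> alter pluggable database PDB26 open;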

All Options in a CDB

That would even have worked in the Single Tenant case, which does not require licensing the Oracle Multitenant option and where you have only one active PDB. For this purpose you would have unplugged your PDB$MASTER after making it a pluggable database, and provisioned new PDBs with just your desired option set by plugging in PDB$MASTER under a new name (e.g. PDB26) using the COPY option of the command.

Now this all becomes obsolete, as from now on it is allowed to have a CDB installation with fewer options. This applies to linked kernel modules (e.g. RAT) as well as to configured database components (e.g. JAVA, OWM, SPATIAL etc).

Please see the following new/rephrased MOS Notes:

MOS Note:2001512.1 basically explains the following steps:

  • Do all the click work in DBCA (Database Creation Assistant) to create a container database - but let DBCA only create the scripts
  • Edit the <SID>.sql script and remove the unwanted options according to the dependency table in the MOS Note
  • Edit the CreateDBCatalog.sql in case you want to remove OWM (Oracle Workspace Manager) creation as well 
  • Add the Oracle PERL $ORACLE_HOME/perl/bin in front of your $PATH variable
  • Start the <SID>.sh script on the shell prompt

Here's an example of a CreateDBCatalog.sql and a XXXX.sql creating a CDB with no options except XDB (which is mandatory in Oracle Database 12c):

cat CreateDBCatalog.sql

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/XXXX/scripts/CreateDBCatalog.log append
alter session set "_oracle_script"=true;
alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open;
host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catalog /u01/app/oracle/product/12.1.0.2/rdbms/admin/catalog.sql;
host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catproc /u01/app/oracle/product/12.1.0.2/rdbms/admin/catproc.sql;
host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catoctk /u01/app/oracle/product/12.1.0.2/rdbms/admin/catoctk.sql;
-- host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b owminst /u01/app/oracle/product/12.1.0.2/rdbms/admin/owminst.plb;
host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b pupbld -u SYSTEM/&&systemPassword /u01/app/oracle/product/12.1.0.2/sqlplus/admin/pupbld.sql;
connect "SYSTEM"/"&&systemPassword"
set echo on
spool /u01/app/oracle/admin/XXXX/scripts/sqlPlusHelp.log append
host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b hlpbld -u SYSTEM/&&systemPassword -a 1  /u01/app/oracle/product/12.1.0.2/sqlplus/admin/help/hlpbld.sql 1helpus.sql;
spool off
spool off


cat XXXX.sql

set verify off
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
host /u01/app/oracle/product/12.1.0.2/bin/orapwd file=/u01/app/oracle/product/12.1.0.2/dbs/orapwXXXX force=y format=12
@/u01/app/oracle/admin/XXXX/scripts/CreateDB.sql
@/u01/app/oracle/admin/XXXX/scripts/CreateDBFiles.sql
@/u01/app/oracle/admin/XXXX/scripts/CreateDBCatalog.sql
-- @/u01/app/oracle/admin/XXXX/scripts/JServer.sql
-- @/u01/app/oracle/admin/XXXX/scripts/context.sql
-- @/u01/app/oracle/admin/XXXX/scripts/ordinst.sql
-- @/u01/app/oracle/admin/XXXX/scripts/interMedia.sql
-- @/u01/app/oracle/admin/XXXX/scripts/cwmlite.sql
-- @/u01/app/oracle/admin/XXXX/scripts/spatial.sql
-- @/u01/app/oracle/admin/XXXX/scripts/labelSecurity.sql
-- @/u01/app/oracle/admin/XXXX/scripts/apex.sql
-- @/u01/app/oracle/admin/XXXX/scripts/datavault.sql
-- @/u01/app/oracle/admin/XXXX/scripts/CreateClustDBViews.sql

@/u01/app/oracle/admin/XXXX/scripts/lockAccount.sql
@/u01/app/oracle/admin/XXXX/scripts/postDBCreation.sql
@/u01/app/oracle/admin/XXXX/scripts/PDBCreation.sql
@/u01/app/oracle/admin/XXXX/scripts/plug_PDB.sql
@/u01/app/oracle/admin/XXXX/scripts/postPDBCreation_PDB.sql


This results in a database having only these components - the minimal component set in Oracle 12.1.0.2: 

COMP ID  NAME
-------- --------------------------------------
CATALOG  Oracle Database Catalog Views
CATPROC  Oracle Database Packages and Types
XDB      Oracle XML Database

-- Mike 

Tuesday Apr 21, 2015

REPLAY and Slides for Webcast "Why Upgrade to Oracle Database 12c" for ISVs/Partners - Apr 2015

Thanks for attending today's webcast about
"Upgrading to Oracle Database 12c - and why you should upgrade".

Webcast Why Upgrade 12c

Please access the slides via this link:

Webcast Slides - Why Upgrade to Oracle Database 12c

And for those who couldn't participate here's the replay of the 45 minute Webcast:

REPLAY Webcast "Why Upgrade to Oracle 12c"

Thanks :-)

 --Mike 


Monday Apr 20, 2015

Oracle PSU and BP April 2015 is available

As of April 14, 2015:

The April 2015 PSU/BP is available!

Here's the most important information:

Please find below the links to the Recommended Patches and Patch Numbers for each of your database releases:

12.1.0.2
 Availability and Known issues for 12.1.0.2      Note:1683799.1
12.1.0.1
 Availability and Known issues for 12.1.0.1      Note:1565082.1 
11.2.0.4
 Availability and Known issues for 11.2.0.4      Note:1562139.1
 
11.2.0.3
 Availability and Known issues for 11.2.0.3      Note:1348336.1

11.1.0.7
 List of fixes included in 11.1.0.7              Note:601739.1

-Mike

Thursday Apr 16, 2015

Webcast for ISVs/Partners on Apr 21, 2015, 3pm CET
Why Upgrade to Oracle Database 12c?

Oracle's latest generation of database technology, Oracle Database 12.1.0.2, has some impressive new features. It offers great potential for a fast upgrade, simple migrations and consolidation, making more efficient use of hardware and delivering major improvements in management efficiency.


Join our webcast on Upgrading to Oracle Database 12c for ISVs and you will learn:

  • Why you need to upgrade to Oracle Database 12.1.0.2
  • How to ensure that your applications are ready for Oracle Database 12c
  • How to evaluate and test all enhancements of the upgrade process plus other new features
  • Best practices to upgrade and migrate successfully

At the end of the webcast our speaker Mike Dietrich, Master Product Manager for Oracle's Database Upgrade Development Team, will be available for your questions. The presentation will run for 45 minutes, followed by a Q&A session.

Registration Link:
https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&eventid=972198&sessionid=1&key=732CC73F7BED83BF66054033AF7685C3&sourcepage=register

Date/Time:
Tuesday, April 21, 2015 at 3pm CET 

--Mike 

Tuesday Apr 14, 2015

COLLABORATE15: Hands On Lab: Bring Your Laptop!!!


On Wednesday, Apr 15, 2015, Roy and I will deliver two Hands-On Labs at COLLABORATE15 in the "Southseas Room A" in the morning. If you are signed up and read this, please don't forget to bring your laptop with a VNC client installed - and the battery charged for at least 1 hour.

You will access the lab via a VNC session - there are no laptops in the room - and the hotel doesn't provide power strips for labs shorter than 1 hour :-(

Unfortunately we don't have access to the registration data, therefore we are unable to inform you upfront directly. Just in case you arrive without meeting these requirements, you can always download the entire lab from the blog afterwards:

Hands On Lab -  Upgrade, Migrate, Consolidate to 12c

Hands On Lab Instructions 

--Mike

Friday Apr 10, 2015

Parallel Index Creation with Data Pump Import

Here is a new capability that might be interesting to anybody who is performing a migration using Data Pump. Previously, Data Pump would create indexes one at a time, specifying the PARALLEL keyword for the CREATE INDEX statement to invoke parallel query for index creation. We used to recommend a workaround to create indexes in parallel, which involved a three-step process of importing without indexes, then creating a SQLFILE of the CREATE INDEX statements, and breaking that file into multiple windows.
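Roughly, that old workaround looked like this (directory, dump file and script names are made up for illustration):

$ impdp system/*** directory=dp_dir dumpfile=mig.dmp logfile=imp_noidx.log exclude=index
$ impdp system/*** directory=dp_dir dumpfile=mig.dmp include=index sqlfile=crea_idx.sql
$ # split crea_idx.sql into several scripts and run them in concurrent SQL*Plus sessions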

Through extensive performance testing we found that it is faster to create multiple indexes in parallel (using a parallel degree of 1) instead of creating a single index using parallel query processes. This is enabled by the patch for bug 18793090, which is available as a backport for 11.2.0.4 Exadata BP 9, 12.1.0.1.3, or 12.1.0.2. If you need it for another platform, that can of course be requested. The number of indexes created will be based on the PARALLEL parameter.

Here is an example of the effects of this patch on a toy example that I created using our hands-on lab VM environment. I created a table in the SYSTEM schema with 4 columns and 14 indexes, and then inserted a couple of dozen rows into the table. Then I exported the SYSTEM schema from 11.2.0.4 and imported into a 12.1.0.2 PDB with PARALLEL=4, both with and without the patch. 

Normal (unpatched) behavior:

;;; 
Import: Release 12.1.0.2.0 - Production on Thu Apr 9 14:38:50 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
09-APR-15 14:39:08.602: Startup took 2 seconds
09-APR-15 14:39:12.841: Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
09-APR-15 14:39:13.417: Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@pdb1 directory=temp dumpfile=exp_test.dmp logfile=imp_test1.log logtime=all metrics=yes parallel=4 
09-APR-15 14:39:13.605: Processing object type SCHEMA_EXPORT/USER
09-APR-15 14:39:14.454:      Completed 1 USER objects in 1 seconds
09-APR-15 14:39:14.470: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
09-APR-15 14:39:14.541:      Completed 5 SYSTEM_GRANT objects in 0 seconds
09-APR-15 14:39:14.596: Processing object type SCHEMA_EXPORT/ROLE_GRANT
09-APR-15 14:39:14.655:      Completed 2 ROLE_GRANT objects in 0 seconds
09-APR-15 14:39:14.690: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
09-APR-15 14:39:14.728:      Completed 1 DEFAULT_ROLE objects in 0 seconds
09-APR-15 14:39:14.746: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
09-APR-15 14:39:15.275:      Completed 1 PROCACT_SCHEMA objects in 1 seconds
09-APR-15 14:39:15.377: Processing object type SCHEMA_EXPORT/TABLE/TABLE
09-APR-15 14:39:15.626:      Completed 1 TABLE objects in 0 seconds
09-APR-15 14:39:15.673: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
09-APR-15 14:39:16.031: . . imported "SYSTEM"."TAB1"                             6.375 KB      12 rows in 1 seconds
09-APR-15 14:39:16.096: Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
09-APR-15 14:39:20.740:      Completed 14 INDEX objects in 4 seconds

With Patch:

;;; 
Import: Release 12.1.0.2.0 - Production on Thu Apr 9 15:05:19 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
09-APR-15 15:05:22.590: Startup took 0 seconds
09-APR-15 15:05:23.175: Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
09-APR-15 15:05:23.613: Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@pdb1 directory=temp dumpfile=exp_test.dmp logfile=imp_test3.log logtime=all metrics=yes parallel=4 
09-APR-15 15:05:23.699: Processing object type SCHEMA_EXPORT/USER
09-APR-15 15:05:23.862:      Completed 1 USER objects in 0 seconds
09-APR-15 15:05:23.882: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
09-APR-15 15:05:23.937:      Completed 5 SYSTEM_GRANT objects in 0 seconds
09-APR-15 15:05:23.993: Processing object type SCHEMA_EXPORT/ROLE_GRANT
09-APR-15 15:05:24.071:      Completed 2 ROLE_GRANT objects in 1 seconds
09-APR-15 15:05:24.096: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
09-APR-15 15:05:24.180:      Completed 1 DEFAULT_ROLE objects in 0 seconds
09-APR-15 15:05:24.216: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
09-APR-15 15:05:24.378:      Completed 1 PROCACT_SCHEMA objects in 0 seconds
09-APR-15 15:05:24.460: Processing object type SCHEMA_EXPORT/TABLE/TABLE
09-APR-15 15:05:24.665:      Completed 1 TABLE objects in 0 seconds
09-APR-15 15:05:24.782: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
09-APR-15 15:05:25.096: . . imported "SYSTEM"."TAB1"                             6.375 KB      12 rows in 1 seconds
09-APR-15 15:05:26.291: Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
09-APR-15 15:05:26.809: Startup took 4 seconds
09-APR-15 15:05:26.896: Startup took 5 seconds
09-APR-15 15:05:27.138: Startup took 4 seconds
09-APR-15 15:05:28.398:      Completed 14 INDEX objects in 3 seconds

A few things are noteworthy here:

  1. The indexes took 4.644 seconds without the patch and 3.302 seconds with the patch. So the effect is significant even on a toy example.
  2. The messages about startup taking X seconds appear because we did not need the parallel workers for the table data. The startup time probably isn't accurate; it is just the time between the start of the job and the first use of that worker.
  3. If you apply this patch, Data Pump will no longer use PX processes (formerly known as PQ slaves) to create indexes.
If you take advantage of this patch, let me know the results in a reply here! We are excited to be able to add a bit more parallelism into Data Pump, and plan on more for the future.

Wednesday Apr 01, 2015

Upgrade Workshop incl. Hands-On in Vienna, Austria on April 9/10 - still some seats available :-)

Just in case you would like to participate in an Upgrade and Migration Workshop including Hands-On in or around Vienna, Austria :-)

I'll deliver an Upgrade workshop in the Oracle Office (IZD Tower, 3rd floor - Wagramer Straße 19 - 1223 Wien) next week on April 9 and 10. It will include Hands-On exercises, so you'll have to bring your laptop with Oracle VirtualBox installed and roughly 40 GB of free space to copy an image into your environment.

Just in case you'd like to participate ... here's the registration link which has more details about the agenda and the requirements:

 --Mike

SAP is now certified on Oracle Database 12.1.0.2

SAP certified Oracle Database 12.1.0.2 as of March 31, 2015!

Yes, it's true. Our colleagues in the Oracle/SAP Competence Center in Walldorf worked really hard to complete the certification of SAP with Oracle Database 12.1.0.2 - and finally it happened. It was announced as planned well in advance, and it happened right on time.

Please find the official documents here:

SAP Service Marketplace ==> Products ==> Installation & Upgrade Guides ==> Database Upgrades (login required) ==> Oracle 

And here's the link to the official announcement:

 

Great job by an excellent Oracle team in Walldorf - and please, no more excuses from anybody saying that you can't certify your application on Oracle Database 12.1.0.2. If SAP can do it, everybody can do it :-)

If you are looking for more information and useful documents please see this discussion here - scroll down a bit (thanks to Upgrade Expert Andreas Becker from the Oracle-SAP-CC in Walldorf):

See also the ROADMAP for Oracle/SAP regarding future developments, feature support, In-Memory etc:

--Mike


PS:
I'm expressing my sincere condolences as one of the five founders of SAP, Klaus Tschira, died on March 31, 2015, way too early at the age of 74. May he rest in peace!

Thursday Mar 19, 2015

Migration of an EM Repository cross-platform?

Can you migrate your EM Cloud Control Repository to another OS platform? Cross-platform and cross-Endianness?

This question sounds so simple that you probably won't even give it a second thought. Same for us: use Data Pump. Or Transportable Tablespaces. Or Full Transportable Export/Import if your source is at least 11.2.0.3.
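
For readers who haven't used Full Transportable Export/Import before, here is a generic sketch of the export side (this illustrates the general feature, not an EM-specific recipe; the directory and file names are placeholders):

$> expdp system/*** directory=dp_dir dumpfile=ftex.dmp logfile=ftex.log full=y transportable=always version=12

The tablespace datafiles still have to be copied to the target platform and referenced via TRANSPORT_DATAFILES on the impdp side.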

But sometimes things only seem simple, and as soon as you look closer you'll find a whole bunch of issues. The fact is that the EM Cloud Control repository is quite complicated and uses plenty of database technologies.

All credit goes to Roy here, as he has worked with the EM group on this topic for the past six months.

You can migrate an EM Cloud Control Repository cross-platform but not cross-Endianness (e.g. HP-UX to OL, big to little Endian). The latter is scheduled to be supported in EM 13.2.
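
If you are unsure which Endianness group your current platform belongs to, a quick check against the data dictionary (a generic query, not part of the original post) looks like this:

SQL> select d.platform_name, tp.endian_format
       from v$database d
       join v$transportable_platform tp
         on tp.platform_name = d.platform_name;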

 

Summary:

As an EM Cloud Control Repository migration is currently possible only within the same Endianness group, you should decide carefully where you store your EM Cloud Control Repository.

 --Mike

 

Friday Mar 13, 2015

EM Cloud Control 12.1.0.4 (or newer) OMS Repository certified with Oracle Database 12.1.0.2 plus patches

The certification for Oracle Database 12.1.0.2 as Repository Database underneath of Oracle Enterprise Manager Cloud Control 12.1.0.4 (OMS) had been revoked for a while. One reason may have been the potential removal of the SYSMAN user during a DBUA upgrade ;-) But this is just a rumor. I don't know the exact reasons.

After checking with the certification matrix earlier today I realized:

Cloud Control Repository Database Certification Oracle 12.1.0.2

Oracle Database 12.1.0.2 is certified as a repository database for Cloud Control, i.e. for the OMS.

But ... with a few important restrictions (see MOS Note:1987905.1 - 12c Database has been Certified as a 12cR4 Repository with Certain Patchset Restrictions):

  • Database PSU April 2015 (12.1.0.2.3)

    OR:

  • Database PSU October 2014 (12.1.0.2.1) or newer
  • Database Patch 20243268 on top of the PSU
    • Bug20243268:
      EM QUERY WITH SQL_ID 4RQ83FNXTF39U PERFORMS POORLY ON ORACLE 12C RELATIVE TO 11G

Please be aware that the above-mentioned MOS Note has a typo: it refers to PSU 12.1.0.2.3 (correct) as the October PSU (incorrect - it is the April 2015 one).
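
To verify which PSU and interim patches are actually installed in your repository database, a quick check (a generic sketch, not taken from the MOS Note) could look like this:

$> $ORACLE_HOME/OPatch/opatch lspatches

SQL> select patch_id, action, status, description
       from dba_registry_sqlpatch
      order by action_time;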

--Mike 

PS: Thanks for the anonymous comment ... I wasn't aware of the MOS Note and have updated the patch requirements.

Tuesday Mar 10, 2015

Another Upgrade Workshop in Reston on March 31!

For those who were unable to attend the previous Database Upgrade workshop in Reston, VA (just outside DC), we have added another event in three weeks on Tuesday, March 31. This time we will have the nice auditorium with a larger capacity, so we should be able to accommodate everybody. If you would like to sign up, please see the evite at

http://www.oracle.com/us/dm/seo100396221-na-us-ip-ipi1-ev-2436170.html 

You will notice that this time the registration is being handled via telephone, to avoid the problems we had last time with the email system.  If you have any problems registering, please let me know.

So, apologies to anybody who was shut out last month, and I hope we get a nice big audience in three weeks! 

Monday Mar 09, 2015

Applying a PSU or BP to a Single-/Multitenant Environment

I have already explained in broad detail a while ago how to:

But one may miss the steps for applying a PSU (Patch Set Update) or BP (Bundle Patch) to a Single-/Multitenant Environment. At first everything will work just the same if you choose the Everything-at-Once strategy, as datapatch will adjust all the required things regardless of whether it is executed in a stand-alone or a Single-/Multitenant environment.

But what happens if you apply a PSU or a BP to one of your Multitenant environments and want to move PDBs one after another (or a few at a time) to the new environment?
Or revert a PSU by plugging a PDB out of a CDB that has the PSU installed - and plugging it back into a CDB with a lower version or no PSU at all?

First step - Check Plug In Compatibility 

Before you can even start your unplug/plug operation you should always perform the plug-in check. This is divided into two simple steps:

  1. Create the XML description file for your PDB in CDB_SOURCE
    exec DBMS_PDB.DESCRIBE ('/home/oracle/PDB1_unplug.xml', 'PDB1');
  2. Run the plug in check in CDB_DEST
    begin
      if DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/home/oracle/PDB1_unplug.xml','PDB1') then
        DBMS_OUTPUT.PUT_LINE('No violations found - you can relax');
      else
        DBMS_OUTPUT.PUT_LINE('Violations found - check PDB_PLUG_IN_VIOLATIONS');
      end if;
    end;
    /

No Plugin Violations?

Then please follow the procedure described in:
http://blogs.oracle.com/UPGRADE/entry/upgrade_pdbs_one_at_a
without the upgrade part as you don't need to upgrade anything in this case of course. 
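
In case you just need the bare unplug/plug commands, here is a minimal sketch using the same XML file and PDB name as in the check above (adjust the names and the NOCOPY/COPY clause to your storage setup):

-- in CDB_SOURCE:
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 unplug into '/home/oracle/PDB1_unplug.xml';
drop pluggable database PDB1 keep datafiles;

-- in CDB_DEST:
create pluggable database PDB1 using '/home/oracle/PDB1_unplug.xml' nocopy tempfile reuse;
alter pluggable database PDB1 open;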


Higher patch in CDB_DEST than in CDB_SOURCE?

Then run this query:

select TYPE,MESSAGE,ACTION from PDB_PLUG_IN_VIOLATIONS where NAME='PDB1';

It will tell you to execute datapatch:

TYPE    MESSAGE
------  ----------------------------------------------------------------------------
ERROR   PSU bundle patch 1 (PSU Patch 4711): Installed in the CDB but not in the PDB

ACTION
------------------------------------------------
Call datapatch to install in the PDB or the CDB
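
In this case a datapatch run restricted to the freshly plugged-in PDB should be sufficient - a sketch, assuming the PDB is named PDB1 and is already open:

$> cd $ORACLE_HOME/OPatch
$> ./datapatch -verbose -pdbs PDB1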

Lower patch in CDB_DEST than in CDB_SOURCE?

Now this becomes a bit more tricky. See the output of PDB_PLUG_IN_VIOLATIONS:

TYPE  MESSAGE
----- ----------------------------------------------------------------------------
ERROR PSU bundle patch 1 (PSU Patch 4711): Installed in the PDB but not in the CDB

ACTION
------------------------------------------------
Call datapatch to install in the PDB or the CDB

Huh? Install???
What does this mean? Should I now install the current CDB/PDB's PSU into my target environment before being able to step down?

Actually I think this message is misleading. When you look into the My Oracle Support Note describing this under scenario 3 (MOS Note:1935365.1 - Multitenant Unplug/Plug Best Practices), you'll see that the author silently assumed as well that it is more likely that you'll remove the patch from the PDB.

But how do you remove changes which came in with datapatch from within a PDB only?

You will need to run datapatch -rollback on the affected PDBs only:

$> datapatch -rollback <patch id> -force [-bundle_series] -pdbs <pdb1,pdb2,...,pdbn>
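
As a hypothetical example, rolling the fictitious "PSU Patch 4711" from the output above back out of PDB1 only, and then verifying the result from within the PDB, could look like this:

$> cd $ORACLE_HOME/OPatch
$> ./datapatch -rollback 4711 -force -pdbs PDB1

SQL> alter session set container=PDB1;
SQL> select patch_id, action, status from dba_registry_sqlpatch;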

For further information see:

--Mike 
