
Archived Database Upgrade Blog

Recent Posts

Workshop

Upgrade Workshops in California Next Week!

With a major winter storm hitting the east coast today, there is no better time to think about flying to California! I will be delivering workshops in Redwood Shores (Tuesday, March 21) and El Segundo (Thursday, March 23). Remember that these are free workshops, but please do register if you plan on attending. That way we can be sure to have enough space (not to mention enough food!) for everybody.

As a reminder, here is the general workshop agenda:

- Introduction, Preparation Steps, Parameter Settings
- Break
- Upgrade cases:
  - Command-line upgrade (live demo)
  - RAC/GI upgrade
- Data Pump migration
- Lunch
- Transportable Tablespaces migration
- Zero-Downtime Migration with GoldenGate
- Consolidation with Multitenant
- Migration to the cloud
- Break
- Fallback Strategies
- New features in 12.2 for DBAs & geeks
- Ensuring good performance after the upgrade

As you can see, it's a full day of technical information. I hope to see you there! And if you aren't in California, please check the events list in the right sidebar for other upcoming workshops, such as those in Milano, Columbus, Cleveland, Cincinnati, Raleigh, and Atlanta.

If you'd like to comment please use the new and refreshed upgrade blog: https://MikeDietrichDE.com/2017/03/14/upgrade-workshops-in-california-next-week/


Oracle Database 12.2

Non-CDB still exists in Oracle Database 12.2.

I received a question from three different colleagues in the past week: "Does Oracle Database 12.2 still allow us to create non-CDB databases?" As I get this question at every customer-facing workshop these days as well, before being able to talk about Single-/Multitenant, I think it is important to clarify it.

Is the non-CDB architecture still available and supported in Oracle Database 12.2? I did blog about it a while ago - but Google is not always your friend to direct you to the right answer: https://blogs.oracle.com/UPGRADE/entry/oracle_non_cdb_architecture_may

The Database Upgrade Guide 12.2 contains a paragraph confirming this. And in Oracle Database 12.1 we had announced the deprecation of the non-CDB architecture already - the difference between "Deprecated" and "Desupported" is described here: What does DEPRECATED mean? And what is DESUPPORTED?

Summary

The non-CDB (or stand-alone) architecture is still available in Oracle Database 12.2. You can install/deploy your databases exactly as you did in previous releases. Even though I would recommend that you test and experiment with the Single-/Multitenant (CDB) architecture already today to learn about its benefits and flexibility, there's no forced push to adopt Single-/Multitenant at the moment.

--Mike

If you'd like to comment please use the new and refreshed upgrade blog: https://MikeDietrichDE.com/2017/03/14/non-cdb-still-exists-in-oracle-database-12-2/
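To illustrate the point: whether a 12.2 database is created as a non-CDB or a CDB is simply a choice at creation time. A hedged sketch of a DBCA silent-mode call (database name, template, and passwords are illustrative placeholders, not from this post):

```shell
# Create a classic non-CDB database in 12.2 - the deciding switch is
# -createAsContainerDatabase; set it to true to create a CDB instead.
dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName DB122 -sid DB122 \
    -createAsContainerDatabase false \
    -sysPassword MySysPwd1 -systemPassword MySystemPwd1
```

The same decision appears as a checkbox ("Create as Container database") in the interactive DBCA.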


Single-/Multitenant

Converting a 12.1 non-CDB and plugging it into a 12.2 CDB

Two customers requested a bit more explanation on how to convert an Oracle 12.1 non-CDB database and plug it into an Oracle 12.2 container database. Cool, I like that :-) I will do this right away and use the approach to upgrade first, then plug in afterwards.

Upgrading an Oracle 12.1.0.2 database to Oracle 12.2.0.1

First the preupgrade.jar must be executed - please always download the most recent version from MOS Note:884522.1, as the version we publish there is usually newer (including important improvements and fixes) than the version you get with the release drop. We are working on always including the newest drop with future PSUs and BPs.

$ java -jar /u01/app/oracle/product/12.2.0.1/rdbms/admin/preupgrade.jar TEXT TERMINAL

Report generated by Oracle Database Pre-Upgrade Information Tool Version 12.2.0.1.0
Upgrade-To version: 12.2.0.1.0

=======================================
Status of the database prior to upgrade
=======================================

     Database Name:  DB12
    Container Name:  DB12
      Container ID:  0
           Version:  12.1.0.2.0
        Compatible:  12.1.0.2.0
         Blocksize:  8192
          Platform:  Linux x86 64-bit
     Timezone File:  18
 Database log mode:  NOARCHIVELOG
          Readonly:  FALSE
           Edition:  EE

 Oracle Component                       Upgrade Action    Current Status
 ----------------                       --------------    --------------
 Oracle Server                          [to be upgraded]  VALID
 Oracle Workspace Manager               [to be upgraded]  VALID
 Oracle XML Database                    [to be upgraded]  VALID

==============
BEFORE UPGRADE
==============

 Run <preupgradeLogDirPath>/preupgrade_fixups.sql to complete all
 of the BEFORE UPGRADE action items below marked with '(AUTOFIXUP)'.

 REQUIRED ACTIONS
 ================
  + Adjust TABLESPACE SIZES as needed.
                                               Auto      12.2.0.1.0
    Tablespace                        Size     Extend    Min Size    Action
    ----------                     ----------  --------  ----------  ------
    SYSAUX                             550 MB  DISABLED      500 MB  None
    SYSTEM                             700 MB  ENABLED       765 MB  None
    TEMP                                20 MB  ENABLED       150 MB  None
    UNDOTBS1                           295 MB  ENABLED       400 MB  None

    Note that 12.2.0.1.0 minimum sizes are estimates.
    If you plan to upgrade multiple pluggable databases concurrently,
    then you must ensure that the UNDO tablespace size is equal to at least
    the number of pluggable databases that you upgrade concurrently,
    multiplied by that minimum.  Failing to allocate sufficient space can
    cause the upgrade to fail.

 RECOMMENDED ACTIONS
 ===================
  + (AUTOFIXUP) Gather stale data dictionary statistics prior to database
    upgrade in off-peak time using:

      EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

    Dictionary statistics do not exist or are stale (not up-to-date).
    Dictionary statistics help the Oracle optimizer find efficient SQL
    execution plans and are essential for proper upgrade timing. Oracle
    recommends gathering dictionary statistics in the last 24 hours before
    database upgrade.

    For information on managing optimizer statistics, refer to the 12.1.0.2
    Oracle Database SQL Tuning Guide.

=============
AFTER UPGRADE
=============

 Run <preupgradeLogDirPath>/postupgrade_fixups.sql to complete all
 of the AFTER UPGRADE action items below marked with '(AUTOFIXUP)'.

 REQUIRED ACTIONS
 ================
 None

 RECOMMENDED ACTIONS
 ===================
  + Upgrade the database time zone version using the DBMS_DST package.
    The database is using timezone datafile version 18 and the target
    12.2.0.1.0 database ships with timezone datafile version 26.
    Oracle recommends using the most recent timezone data.
    For further information, refer to My Oracle Support Note 1585343.1.

  + (AUTOFIXUP) Gather dictionary statistics after the upgrade using the
    command:

      EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

    Oracle recommends gathering dictionary statistics after upgrade.
    Dictionary statistics provide essential information to the Oracle
    optimizer to help it find efficient SQL execution plans. After a
    database upgrade, statistics need to be re-gathered as there can now be
    tables that have significantly changed during the upgrade or new tables
    that do not have statistics gathered yet.

Preupgrade generated files:
    /u01/app/oracle/cfgtoollogs/DB12/preupgrade/preupgrade_fixups.sql
    /u01/app/oracle/cfgtoollogs/DB12/preupgrade/postupgrade_fixups.sql

The next step is to follow the guidelines displayed by the output, i.e. changing parameters, gathering stats, running preupgrade_fixups.sql etc., then shutting down the database, copying the SPFILE, and creating a new password file in the 12.2 Oracle Home's dbs directory.

[oracle@localhost ~]$ . db121
[DB12] oracle@localhost:~$ s

SQL*Plus: Release 12.1.0.2.0 Production on Tue Mar 7 14:09:25 2017
Copyright (c) 1982, 2014, Oracle.
All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
PL/SQL procedure successfully completed.

SQL> @/u01/app/oracle/cfgtoollogs/DB12/preupgrade/preupgrade_fixups.sql
Executing Oracle PRE-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 12.2.0.1.0 Build: 3
Generated on:            2017-03-07 13:46:51
For Source Database:     DB12
Source Database Version: 12.1.0.2.0
For Upgrade to Version:  12.2.0.1.0

                          Fixup
Check Name                Status  Further DBA Action
----------                ------  ------------------
dictionary_stats          Passed  None

PL/SQL procedure successfully completed.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[DB12] oracle@localhost:~$ cp /u01/app/oracle/product/12.1.0.2/dbs/spfileDB12.ora $OH12/dbs
[DB12] oracle@localhost:~$ cp /u01/app/oracle/product/12.1.0.2/dbs/orapwDB12 $OH12/dbs
[DB12] oracle@localhost:~$ -- it's Best Practice to create a new password file instead of copying the old one!

Now the database is ready to get upgraded to Oracle Database 12.2.0.1.

Database Upgrade to Oracle Database 12.2.0.1

The database must be in STARTUP UPGRADE mode - and the upgrade gets executed from the command line in my case. The alternative would be the DBUA, of course. You could also use the shortcut "dbupgrade" instead of typing all the characters below.

[DB12] oracle@localhost:~$ . db122
[DB12] oracle@localhost:~$ s

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 7 14:16:20 2017
Copyright (c) 1982, 2016, Oracle.
All rights reserved.

Connected to an idle instance.

SQL> startup upgrade
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  8628640 bytes
Variable Size             369100384 bytes
Database Buffers          662700032 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[DB12] oracle@localhost:~$ cd /u01/app/oracle/product/12.2.0.1/rdbms/admin/
[DB12] oracle@localhost:/u01/app/oracle/product/12.2.0.1/rdbms/admin$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -l /home/oracle catupgrd.sql

Argument list for [catctl.pl]
Run in                c = 0
Do not run in         C = 0
Input Directory       d = 0
Echo OFF              e = 1
Simulate              E = 0
Forced cleanup        F = 0
Log Id                i = 0
Child Process         I = 0
Log Dir               l = /home/oracle
Priority List Name    L = 0
Upgrade Mode active   M = 0
SQL Process Count     n = 6
SQL PDB Process Count N = 0
Open Mode Normal      o = 0
Start Phase           p = 0
End Phase             P = 0
Reverse Order         r = 0
AutoUpgrade Resume    R = 0
Script                s = 0
Serial Run            S = 0
RO User Tablespaces   T = 0
Display Phases        y = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0

catctl.pl VERSION: [12.2.0.1.0]
           STATUS: [production]
            BUILD: [RDBMS_12.2.0.1.0_LINUX.X64_170107]

/u01/app/oracle/product/12.2.0.1/rdbms/admin/orahome = [/u01/app/oracle/product/12.2.0.1]
/u01/app/oracle/product/12.2.0.1/bin/orabasehome = [/u01/app/oracle/product/12.2.0.1]
catctlGetOrabase = [/u01/app/oracle/product/12.2.0.1]

Analyzing file /u01/app/oracle/product/12.2.0.1/rdbms/admin/catupgrd.sql

Log file directory = [/home/oracle]

catcon: ALL catcon-related output will be written to [/home/oracle/catupgrd_catcon_13547.lst]
catcon: See [/home/oracle/catupgrd*.log] files for output generated by scripts
catcon: See
[/home/oracle/catupgrd_*.lst] files for spool files, if any

Number of Cpus        = 2
Database Name         = DB12
DataBase Version      = 12.1.0.2.0
Parallel SQL Process Count            = 6
Components in [DB12]
    Installed [CATALOG CATPROC OWM XDB]
Not Installed [APEX APS CATJAVA CONTEXT DV EM JAVAVM MGW ODM OLS ORDIM RAC SDO WK XML XOQ]

------------------------------------------------------
Phases [0-115]         Start Time:[2017_03_07 14:17:11]
------------------------------------------------------
***********   Executing Change Scripts   ***********
Serial   Phase #:0    [DB12] Files:1    Time: 29s
***************   Catalog Core SQL   ***************
Serial   Phase #:1    [DB12] Files:5    Time: 43s
Restart  Phase #:2    [DB12] Files:1    Time: 1s
***********   Catalog Tables and Views   ***********
Parallel Phase #:3    [DB12] Files:19   Time: 22s
Restart  Phase #:4    [DB12] Files:1    Time: 0s
*************   Catalog Final Scripts   ************
Serial   Phase #:5    [DB12] Files:6    Time: 16s
*****************   Catproc Start   ****************
Serial   Phase #:6    [DB12] Files:1    Time: 12s
*****************   Catproc Types   ****************
Serial   Phase #:7    [DB12] Files:2    Time: 11s
Restart  Phase #:8    [DB12] Files:1    Time: 0s
****************   Catproc Tables   ****************
Parallel Phase #:9    [DB12] Files:69   Time: 38s
Restart  Phase #:10   [DB12] Files:1    Time: 0s
*************   Catproc Package Specs   ************
Serial   Phase #:11   [DB12] Files:1    Time: 37s
Restart  Phase #:12   [DB12] Files:1    Time: 0s
**************   Catproc Procedures   **************
Parallel Phase #:13   [DB12] Files:97   Time: 12s
Restart  Phase #:14   [DB12] Files:1    Time: 1s
Parallel Phase #:15   [DB12] Files:118  Time: 21s
Restart  Phase #:16   [DB12] Files:1    Time: 1s
Serial   Phase #:17   [DB12] Files:13   Time: 2s
Restart  Phase #:18   [DB12] Files:1    Time: 0s
*****************   Catproc Views   ****************
Parallel Phase #:19   [DB12] Files:33   Time: 31s
Restart
Phase #:20   [DB12] Files:1    Time: 0s
Serial   Phase #:21   [DB12] Files:3    Time: 7s
Restart  Phase #:22   [DB12] Files:1    Time: 0s
Parallel Phase #:23   [DB12] Files:24   Time: 117s
Restart  Phase #:24   [DB12] Files:1    Time: 0s
Parallel Phase #:25   [DB12] Files:11   Time: 67s
Restart  Phase #:26   [DB12] Files:1    Time: 0s
Serial   Phase #:27   [DB12] Files:1    Time: 0s
Serial   Phase #:28   [DB12] Files:3    Time: 3s
Serial   Phase #:29   [DB12] Files:1    Time: 0s
Restart  Phase #:30   [DB12] Files:1    Time: 0s
***************   Catproc CDB Views   **************
Serial   Phase #:31   [DB12] Files:1    Time: 1s
Restart  Phase #:32   [DB12] Files:1    Time: 0s
Serial   Phase #:34   [DB12] Files:1    Time: 0s
*****************   Catproc PLBs   *****************
Serial   Phase #:35   [DB12] Files:283  Time: 18s
Serial   Phase #:36   [DB12] Files:1    Time: 0s
Restart  Phase #:37   [DB12] Files:1    Time: 0s
Serial   Phase #:38   [DB12] Files:1    Time: 5s
Restart  Phase #:39   [DB12] Files:1    Time: 0s
***************   Catproc DataPump   ***************
Serial   Phase #:40   [DB12] Files:3    Time: 48s
Restart  Phase #:41   [DB12] Files:1    Time: 1s
******************   Catproc SQL   *****************
Parallel Phase #:42   [DB12] Files:13   Time: 69s
Restart  Phase #:43   [DB12] Files:1    Time: 0s
Parallel Phase #:44   [DB12] Files:12   Time: 12s
Restart  Phase #:45   [DB12] Files:1    Time: 1s
Parallel Phase #:46   [DB12] Files:2    Time: 0s
Restart  Phase #:47   [DB12] Files:1    Time: 0s
*************   Final Catproc scripts   ************
Serial   Phase #:48   [DB12] Files:1    Time: 4s
Restart  Phase #:49   [DB12] Files:1    Time: 0s
**************   Final RDBMS scripts   *************
Serial   Phase #:50   [DB12] Files:1    Time: 20s
************   Upgrade Component Start   ***********
Serial   Phase #:51   [DB12] Files:1    Time: 0s
Restart  Phase #:52   [DB12] Files:1    Time: 0s
****************   Upgrading Java   ****************
Serial   Phase #:53   [DB12] Files:1    Time:
1s
Restart  Phase #:54   [DB12] Files:1    Time: 0s
*****************   Upgrading XDK   ****************
Serial   Phase #:55   [DB12] Files:1    Time: 0s
Restart  Phase #:56   [DB12] Files:1    Time: 1s
*********   Upgrading APS,OLS,DV,CONTEXT   *********
Serial   Phase #:57   [DB12] Files:1    Time: 0s
*****************   Upgrading XDB   ****************
Restart  Phase #:58   [DB12] Files:1    Time: 0s
Serial   Phase #:60   [DB12] Files:3    Time: 8s
Serial   Phase #:61   [DB12] Files:3    Time: 5s
Parallel Phase #:62   [DB12] Files:9    Time: 3s
Parallel Phase #:63   [DB12] Files:24   Time: 4s
Serial   Phase #:64   [DB12] Files:4    Time: 6s
Serial   Phase #:65   [DB12] Files:1    Time: 0s
Serial   Phase #:66   [DB12] Files:30   Time: 3s
Serial   Phase #:67   [DB12] Files:1    Time: 0s
Parallel Phase #:68   [DB12] Files:6    Time: 2s
Serial   Phase #:69   [DB12] Files:2    Time: 14s
Serial   Phase #:70   [DB12] Files:3    Time: 31s
Restart  Phase #:71   [DB12] Files:1    Time: 0s
*********   Upgrading CATJAVA,OWM,MGW,RAC   ********
Serial   Phase #:72   [DB12] Files:1    Time: 32s
****************   Upgrading ORDIM   ***************
Restart  Phase #:73   [DB12] Files:1    Time: 0s
Serial   Phase #:75   [DB12] Files:1    Time: 1s
Parallel Phase #:76   [DB12] Files:2    Time: 0s
Serial   Phase #:77   [DB12] Files:1    Time: 0s
Restart  Phase #:78   [DB12] Files:1    Time: 0s
Parallel Phase #:79   [DB12] Files:2    Time: 1s
Serial   Phase #:80   [DB12] Files:2    Time: 0s
*****************   Upgrading SDO   ****************
Restart  Phase #:81   [DB12] Files:1    Time: 0s
Serial   Phase #:83   [DB12] Files:1    Time: 1s
Serial   Phase #:84   [DB12] Files:1    Time: 0s
Restart  Phase #:85   [DB12] Files:1    Time: 0s
Serial   Phase #:86   [DB12] Files:1    Time: 0s
Restart  Phase #:87   [DB12] Files:1    Time: 1s
Parallel Phase #:88   [DB12] Files:3    Time: 0s
Restart  Phase #:89   [DB12] Files:1    Time: 0s
Serial   Phase #:90   [DB12] Files:1    Time: 1s
Restart  Phase #:91   [DB12] Files:1    Time:
0s
Serial   Phase #:92   [DB12] Files:1    Time: 0s
Restart  Phase #:93   [DB12] Files:1    Time: 1s
Parallel Phase #:94   [DB12] Files:4    Time: 0s
Restart  Phase #:95   [DB12] Files:1    Time: 0s
Serial   Phase #:96   [DB12] Files:1    Time: 1s
Restart  Phase #:97   [DB12] Files:1    Time: 0s
Serial   Phase #:98   [DB12] Files:2    Time: 0s
Restart  Phase #:99   [DB12] Files:1    Time: 0s
Serial   Phase #:100  [DB12] Files:1    Time: 1s
Restart  Phase #:101  [DB12] Files:1    Time: 0s
***********   Upgrading Misc. ODM, OLAP   **********
Serial   Phase #:102  [DB12] Files:1    Time: 0s
****************   Upgrading APEX   ****************
Restart  Phase #:103  [DB12] Files:1    Time: 1s
Serial   Phase #:104  [DB12] Files:1    Time: 0s
Restart  Phase #:105  [DB12] Files:1    Time: 0s
***********   Final Component scripts    ***********
Serial   Phase #:106  [DB12] Files:1    Time: 0s
*************   Final Upgrade scripts   ************
Serial   Phase #:107  [DB12] Files:1    Time: 118s
**********   End PDB Application Upgrade   *********
Serial   Phase #:108  [DB12] Files:1    Time: 0s
*******************   Migration   ******************
Serial   Phase #:109  [DB12] Files:1    Time: 32s
Serial   Phase #:110  [DB12] Files:1    Time: 0s
Serial   Phase #:111  [DB12] Files:1    Time: 40s
*****************   Post Upgrade   *****************
Serial   Phase #:112  [DB12] Files:1    Time: 73s
****************   Summary report   ****************
Serial   Phase #:113  [DB12] Files:1    Time: 0s
Serial   Phase #:114  [DB12] Files:1    Time: 0s
Serial   Phase #:115  [DB12] Files:1    Time: 22s
------------------------------------------------------
Phases [0-115]         End Time:[2017_03_07 14:34:45]
------------------------------------------------------

Grand Total Time: 1055s

LOG FILES: (/home/oracle/catupgrd*.log)

Upgrade Summary Report Located in:
/home/oracle/upg_summary.log

Grand Total Upgrade Time:    [0d:0h:17m:35s]
[DB12] oracle@localhost:/u01/app/oracle/product/12.2.0.1/rdbms/admin$

Database DB12 is now
upgraded to Oracle Database 12.2.0.1, but it needs some post-upgrade treatment.

Post-Upgrade Treatments

These include recompilation, postupgrade_fixups.sql and the time zone adjustment.

$ s

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 7 14:36:06 2017
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  8628640 bytes
Variable Size             369100384 bytes
Database Buffers          662700032 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
SQL> @?/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN          2017-03-07 14:36:40

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes.
DOC>   Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>         WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>         WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END          2017-03-07 14:41:52

DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of exceptions caught during
DOC> recompilation.
DOC> If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC>       logged into this table: they go into DBA_ERRORS instead.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0

Function created.
PL/SQL procedure successfully completed.
Function dropped.
PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
PL/SQL procedure successfully completed.

SQL> @/u01/app/oracle/cfgtoollogs/DB12/preupgrade/postupgrade_fixups.sql
Session altered.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Package created.
No errors.
Package body created.
No errors.
Package created.
No errors.
Package body created.
No errors.

Executing Oracle POST-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 12.2.0.1.0 Build: 3
Generated on:            2017-03-07 13:46:52
For Source Database:     DB12
Source Database Version: 12.1.0.2.0
For Upgrade to Version:  12.2.0.1.0

                          Fixup
Check Name                Status  Further DBA Action
----------                ------  ------------------
old_time_zones_exist      Failed  Manual fixup recommended.
post_dictionary           Passed  None

PL/SQL procedure successfully completed.
Session altered.

SQL> @/home/oracle/DST/upg_tzv_check.sql
INFO: Starting with RDBMS DST update preparation.
INFO: NO actual RDBMS DST update will be done by this script.
INFO: If an ERROR occurs the script will EXIT sqlplus.
INFO: Doing checks for known issues ...
INFO: Database version is 12.2.0.1 .
INFO: Database RDBMS DST version is DSTv18 .
INFO: No known issues detected.
INFO: Now detecting new RDBMS DST version.
A prepare window has been successfully started.
INFO: Newest RDBMS DST version detected is
DSTv26 .
INFO: Next step is checking all TSTZ data.
INFO: It might take a while before any further output is seen ...
A prepare window has been successfully ended.
INFO: A newer RDBMS DST version than the one currently used is found.
INFO: Note that NO DST update was yet done.
INFO: Now run upg_tzv_apply.sql to do the actual RDBMS DST update.
INFO: Note that the upg_tzv_apply.sql script will
INFO: restart the database 2 times WITHOUT any confirmation or prompt.

SQL> @/home/oracle/DST/upg_tzv_apply.sql
INFO: If an ERROR occurs the script will EXIT sqlplus.
INFO: The database RDBMS DST version will be updated to DSTv26 .
WARNING: This script will restart the database 2 times
WARNING: WITHOUT asking ANY confirmation.
WARNING: Hit control-c NOW if this is not intended.
INFO: Restarting the database in UPGRADE mode to start the DST upgrade.
Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  8628640 bytes
Variable Size             369100384 bytes
Database Buffers          662700032 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
INFO: Starting the RDBMS DST upgrade.
INFO: Upgrading all SYS owned TSTZ data.
INFO: It might take time before any further output is seen ...
An upgrade window has been successfully started.
INFO: Restarting the database in NORMAL mode to upgrade non-SYS TSTZ data.
Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  8628640 bytes
Variable Size             369100384 bytes
Database Buffers          662700032 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
INFO: Upgrading all non-SYS TSTZ data.
INFO: It might take time before any further output is seen ...
INFO: Do NOT start any application yet that uses TSTZ data!
INFO: Next is a list of all upgraded tables:
Table list:
"GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
INFO: Total failures during update of TSTZ data: 0 .
An upgrade window has been successfully ended.
INFO: Your new Server RDBMS DST version is DSTv26 .
INFO: The RDBMS DST update is successfully finished.
INFO: Make sure to exit this sqlplus session.
INFO: Do not use it for timezone related selects.
SQL>

The database is now fully upgraded and ready to go. You may create fixed objects statistics a while after the database is up and running. But I can proceed now with the plugin operation.

Plugging the upgraded database as a PDB into a CDB

This step is simple and straightforward, though one part - the sanity script noncdb_to_pdb.sql, which is run only once in the life span of a database and is irreversible (but restartable in Oracle Database 12.2.0.1 - it wasn't before) - may take a while to complete depending on the number of objects and some other constraints.

First I'm creating the XML manifest file, which basically describes the physical layout of my database. The database has to be in read-only mode - and it has to be at least an Oracle 12.1.0.1 database, as otherwise the package DBMS_PDB wouldn't exist.

$ . db122
[DB12] oracle@localhost:~$ s

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 7 22:58:38 2017
Copyright (c) 1982, 2016, Oracle.
All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup open read only;
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  8628640 bytes
Variable Size             369100384 bytes
Database Buffers          662700032 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
SQL> exec DBMS_PDB.DESCRIBE('/tmp/pdb1.xml');
PL/SQL procedure successfully completed.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Once the file has been created and the database is shut down, I'll connect to my already existing container database and progress from there. I will now plug in the DB12 database, give it a new name (not required, but I'd recommend some obvious name identifying it as a PDB), do a compatibility check (for errors please see PDB_PLUG_IN_VIOLATIONS) and then kick off the sanity script noncdb_to_pdb.sql.

[DB12] oracle@localhost:~$ . cdb2
[CDB2] oracle@localhost:~$ s

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 7 23:07:19 2017
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> set serverout on
SQL> DECLARE
       compatible CONSTANT VARCHAR2(3) := CASE
         DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/tmp/pdb1.xml',
           pdb_name => 'PDB1')
         WHEN TRUE THEN 'YES' ELSE 'NO'
       END;
     BEGIN
       DBMS_OUTPUT.PUT_LINE('Is UPGR compatible? ' || compatible);
     END;
     /
Is UPGR compatible? YES

PL/SQL procedure successfully completed.

SQL> create pluggable database PDB1 using '/tmp/pdb1.xml' nocopy tempfile reuse;
Pluggable database created.

SQL> alter session set container=PDB1;
Session altered.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql
--
-- ...
<this can take a while>
--

SQL> startup
Pluggable Database opened.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB1                           READ WRITE NO
SQL>

Important annotation: When I run the CREATE PLUGGABLE DATABASE command I use the NOCOPY option intentionally. But be aware that in this case, where the database stays in place and does not get moved, you have to have a valid backup of the database, as in case of failure you may be lost in outer space with no way back or forth. In addition, after this operation is completed, the sanity script has run successfully and the new PDB is OPEN, you must take a fresh backup, as all your archive logs and backups from "before" are useless from now on. Don't forget this - it's key!

Yep - all done! Database DB12 got upgraded to Oracle Database 12.2.0.1 - and then plugged into CDB2 as a new PDB1.

--Mike

If you'd like to comment please use the new and refreshed upgrade blog: https://mikedietrichde.com/2017/03/08/converting-an-12-1-non-cdb-and-plug-it-into-an-12-2-cdb/
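As a hedged sketch of the kind of sanity checks I would run after the plugin described above (these are standard dictionary views; the PDB name PDB1 follows the example in this post):

```sql
-- Check for plug-in violations recorded during the plugin operation
-- (WARNING rows are often acceptable; ERROR rows need action)
SELECT type, status, message
  FROM pdb_plug_in_violations
 WHERE name = 'PDB1';

-- Confirm all registered components are VALID after upgrade and plugin
ALTER SESSION SET CONTAINER = PDB1;
SELECT comp_id, version, status
  FROM dba_registry;

-- Verify the time zone data was updated to DSTv26 as shown above
SELECT property_value
  FROM database_properties
 WHERE property_name = 'DST_PRIMARY_TT_VERSION';
```

These queries require a live 12.2 CDB, so treat them as a checklist rather than a copy-and-paste script.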


Single-/Multitenant

Install components in Multitenant ALWAYS with catcon.pl

I did blog several times about how to remove an unwanted component from a database. But yesterday I came across an interesting scenario worth a blog post.

How to install a component afterwards in Single-/Multitenant?

I haven't checked all the MOS Notes, but I recognized that some MOS Notes explaining how to (re-)create a component such as JVM have not been updated to deal with a Single and/or Multitenant environment: they simply call the scripts from SQL*Plus. But the key to script execution in such an environment is catcon.pl, the Perl driver meant to execute database scripts not only in the CDB$ROOT but also in the PDB$SEED and in all opened PDBs. If you don't use catcon.pl for script execution, you may get something in the CDB$ROOT but not in the PDB$SEED - and therefore not in any future PDB.

Example: Create JVM after database creation

Creating JVM is pretty well handled and described in MOS Note:1612279.1. With one exception: it does not talk about what to do in a Multitenant environment. I learned about this as one of my colleagues ran the steps described in the note from SQL*Plus connected to the CDB$ROOT - and recognized that JVM did not get created in the existing PDB(s).
The MOS Note:1612279.1 proposes the following steps:

-- Start of File full_jvminst.sql
spool full_jvminst.log;
set echo on
connect / as sysdba
startup mount
alter system set "_system_trig_enabled" = false scope=memory;
alter database open;
select obj#, name from obj$
where type#=28 or type#=29 or type#=30 or namespace=32;
@?/javavm/install/initjvm.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/xdk/admin/initxml.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/xdk/admin/xmlja.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/rdbms/admin/catjava.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
shutdown immediate
set echo off
spool off
exit
-- End of File full_jvminst.sql

But if you execute it, you'll get the Java components only in the CDB$ROOT - not in the PDB$SEED. And as the PDB$SEED is untouchable for users, you can't simply force it open and run the script again.
Install JVM correctly in a Single-/Multitenant environment

Assuming that the above script does create everything correctly, it needs to be transformed to deal with a Single-/Multitenant environment by using catcon.pl:

-- Start of File full_jvminst_mike.sql
spool full_jvminst.log;
set echo on
alter pluggable database all open;
alter system set "_system_trig_enabled" = false scope=memory;
select obj#, name from obj$
where type#=28 or type#=29 or type#=30 or namespace=32;
host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b initjvm /u01/app/oracle/product/12.1.0.2/javavm/install/initjvm.sql;
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b initxml /u01/app/oracle/product/12.1.0.2/xdk/admin/initxml.sql;
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b xmlja /u01/app/oracle/product/12.1.0.2/xdk/admin/xmlja.sql;
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b catjava /u01/app/oracle/product/12.1.0.2/rdbms/admin/catjava.sql;
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
shutdown immediate
set echo off
spool off
exit
-- End of File full_jvminst_mike.sql

By the way, the (still) undocumented "-n 1" option simply starts only one worker and creates only one logfile. It is usually used to execute database standard scripts.
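To avoid spelling out the Oracle home four times, the repetitive catcon.pl calls can be wrapped in a tiny shell helper. This is a sketch of my own making (the run_catcon name, the CATCON_LOG and DRY_RUN variables are assumptions, not part of the MOS note); it builds the same command line shown above from $ORACLE_HOME:

```shell
#!/bin/sh
# Sketch: wrap the repetitive catcon.pl calls in one helper.
# run_catcon, CATCON_LOG and DRY_RUN are my own names, not Oracle's.
run_catcon() {
  base="$1"; script="$2"
  cmd="$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
-n 1 -l ${CATCON_LOG:-/home/oracle} -b $base $ORACLE_HOME/$script"
  echo "$cmd"                 # show the command that would run
  [ -n "$DRY_RUN" ] || $cmd   # execute unless DRY_RUN is set
}

# Dry-run example: print the initjvm.sql invocation
DRY_RUN=1 ORACLE_HOME=/u01/app/oracle/product/12.1.0.2 \
  run_catcon initjvm javavm/install/initjvm.sql
```

With DRY_RUN unset, each call runs one catcon.pl worker and writes one logfile per -b base name, just like the four host commands in the script above.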
Testing it in my environment

Of course I did execute the script in my environment - an Oracle Database 12.1.0.2 container database:

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 2 10:03:36 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  2932336 bytes
Variable Size             369099152 bytes
Database Buffers          671088640 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open;
Pluggable database altered.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

SQL> create pluggable database PDB1
  2  admin user adm identified by adm
  3  file_name_convert=('/u02/oradata/CDB1/pdbseed','/u02/oradata/CDB1/pdb1');
Pluggable database created.
SQL> alter pluggable database pdb1 open;
Pluggable database altered.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> @full_jvminst_mike.sql
SQL> alter pluggable database all open;
Pluggable database altered.
SQL> alter system set "_system_trig_enabled" = false scope=memory;
System altered.
SQL> select obj#, name from obj$
  2  where type#=28 or type#=29 or type#=30 or namespace=32;
no rows selected
SQL> host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b initjvm /u01/app/oracle/product/12.1.0.2/javavm/install/initjvm.sql;
catcon: ALL catcon-related output will be written to /home/oracle/initjvm_catcon_13696.lst
catcon: See /home/oracle/initjvm*.log files for output generated by scripts
catcon: See /home/oracle/initjvm_*.lst files for spool files, if any
catcon.pl: completed successfully
SQL> select count(*), object_type from all_objects
  2  where object_type like '%JAVA%' group by object_type;

  COUNT(*) OBJECT_TYPE
---------- ------------------------
       302 JAVA DATA
       832 JAVA RESOURCE
     25893 JAVA CLASS

SQL> host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b initxml /u01/app/oracle/product/12.1.0.2/xdk/admin/initxml.sql;
catcon: ALL catcon-related output will be written to /home/oracle/initxml_catcon_15160.lst
catcon: See /home/oracle/initxml*.log files for output generated by scripts
catcon: See /home/oracle/initxml_*.lst files for spool files, if any
catcon.pl: completed successfully
SQL> select count(*), object_type from all_objects
  2  where object_type like '%JAVA%' group by object_type;

  COUNT(*) OBJECT_TYPE
---------- ------------------------
       302 JAVA DATA
       908 JAVA RESOURCE
     27097 JAVA CLASS

SQL> host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b xmlja /u01/app/oracle/product/12.1.0.2/xdk/admin/xmlja.sql;
catcon: ALL catcon-related output will be written to /home/oracle/xmlja_catcon_16210.lst
catcon: See /home/oracle/xmlja*.log files for output generated by scripts
catcon: See /home/oracle/xmlja_*.lst files for spool files, if any
catcon.pl: completed successfully
SQL> select count(*), object_type from all_objects
  2  where object_type like '%JAVA%' group by object_type;

  COUNT(*) OBJECT_TYPE
---------- ------------------------
       302 JAVA DATA
       908 JAVA RESOURCE
     27097 JAVA CLASS

SQL> host $ORACLE_HOME/perl/bin/perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /home/oracle -b catjava /u01/app/oracle/product/12.1.0.2/rdbms/admin/catjava.sql;
catcon: ALL catcon-related output will be written to /home/oracle/catjava_catcon_16273.lst
catcon: See /home/oracle/catjava*.log files for output generated by scripts
catcon: See /home/oracle/catjava_*.lst files for spool files, if any
catcon.pl: completed successfully
SQL> select count(*), object_type from all_objects
  2  where object_type like '%JAVA%' group by object_type;

  COUNT(*) OBJECT_TYPE
---------- ------------------------
       302 JAVA DATA
       911 JAVA RESOURCE
     27473 JAVA CLASS
         2 JAVA SOURCE

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> set echo off
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[CDB1] oracle@localhost:/u01/app/oracle/product/12.1.0.2/rdbms/admin$ s

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 2 10:28:53 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size                  2932336 bytes
Variable Size             369099152 bytes
Database Buffers          671088640 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open;
Pluggable database altered.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> select comp_id from dba_registry order by 1;

COMP_ID
--------------------------------------------------------------------------------
CATALOG
CATJAVA
CATPROC
JAVAVM
RAC
XDB
XML

7 rows selected.

SQL> alter session set container=pdb1;
Session altered.
SQL> select comp_id from dba_registry order by 1;

COMP_ID
--------------------------------------------------------------------------------
CATALOG
CATJAVA
CATPROC
JAVAVM
RAC
XDB
XML

7 rows selected.

So it looks good - the required components got all created within each container. But will the component be there as well when I provision a new pluggable database?

Provisioning a new pluggable database

SQL> alter session set container=cdb$root;
Session altered.
SQL> create pluggable database PDB2
  2  admin user adm identified by adm
  3  file_name_convert=('/u02/oradata/CDB1/pdbseed','/u02/oradata/CDB1/pdb2');
Pluggable database created.
SQL> alter session set container=pdb2;
Session altered.
SQL> startup
Pluggable Database opened.
SQL> select comp_id from dba_registry order by 1;

COMP_ID
--------------------------------------------------------------------------------
CATALOG
CATJAVA
CATPROC
JAVAVM
RAC
XDB
XML

7 rows selected.

This looks good as well.

Summary

It's important to change habits when dealing with Single Tenant or Multitenant environments.
You must execute scripts with catcon.pl to ensure that they run in all containers, not only in the CDB$ROOT. Of course it is hard to find the right balance between too many components and the need to recreate components here and there. Especially in a Multitenant environment it is key that you don't alter component sets too heavily - otherwise you'll end up in situations where you want to unplug the above PDB1 and plug it in somewhere else, but if the destination CDB has no JVM installed, you can't really plug it in.

--Mike

If you'd like to comment please use the new and refreshed upgrade blog: https://mikedietrichde.com/2017/03/02/install-components-in-multitenant-always-with-catcon-pl/


Best Practice

Remove components from the Oracle kernel with chopt

I've had an interesting discussion today. Somebody removed OLAP with chopt - and got issues afterwards. My guess: chopt only removes the binary option and is meant to be used before a database gets created. But if you use it afterwards, you'll have to make sure to remove the dictionary components as well.

A while back I blogged about chopt already: Removing Options from the Oracle Database kernel in 12c

And there's a helpful Support Note out there:
MOS Note: 948061.1 - How to Check and Enable/Disable Oracle Binary Options

"This note WILL NOT address product specific scripts that are required at the database level for specific components to further complete the install/deinstall process. Please refer to the appropriate Oracle Component Guide or as a starting point Note 472937.1 Information On Installed Database Components and Schemas."

This applies especially to OLAP. When you remove it with chopt, make sure you have removed all OLAP components from the database beforehand (AMD, APS, XOQ). Check COMP_ID in DBA_REGISTRY. And extra attention is required for CDBs, as removing components from the dictionary afterwards may not be straightforward.

An interesting finding on the side for Oracle Database 12.2: chopt can now remove the following options from the database kernel:
olap
oaa (Advanced Analytics - this is a new chopt option in 12.2)
partitioning
rat

--Mike
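The dictionary check mentioned above (COMP_ID in DBA_REGISTRY) can be sketched like this. The component IDs listed are the OLAP-related ones named in the post; which ones you actually have to remove depends on your release and on MOS Note 472937.1:

```sql
-- Which OLAP-related components are registered in this database?
select comp_id, comp_name, status, version
from   dba_registry
where  comp_id in ('AMD', 'APS', 'XOQ')
order  by comp_id;

-- In a CDB, check every container, not just CDB$ROOT:
select con_id, comp_id, status
from   cdb_registry
where  comp_id in ('AMD', 'APS', 'XOQ')
order  by con_id, comp_id;
```

Only after these queries come back empty is it reasonably safe to strip the binary option with chopt.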


Oracle Database 12.2

Client Certification for Oracle Database 12.1.0.2/12.2.0.1

I've received a question about client certification for Oracle Database 12.2.0.1 the other day. And this reminded me of this very helpful MOS Note:

MOS Note: 207303.1 - Client / Server Interoperability Support Matrix for Different Oracle Versions

It spans client interoperability back to the Oracle 9.2 days and is a very helpful resource. And don't forget that sometimes the client software may need a patch as well. A customer I exchange emails with on a regular basis just upgraded the database server to Oracle Database 12.1.0.2 and saw several strange errors with the JDBC client, despite the fact that they were already using the most recent JDBC client. ORA-904, ORA-923 and ORA-920 were signaled together with:

oracle.jdbc.driver.OracleParameterMetaDataParser.computeBasicInfo(OracleParameterMetaDataParser.java:277)

It turned out that the JDBC client needed some patch treatment with merge patch 21623553 on top of the 12.1.0.2 client. It got combined together with some other useful fixes into merge patch 24012252 for the JDBC client: JDBC Patch for Bug# 24012252 for Generic Platform. Afterwards the errors were gone. For further JDBC client patch recommendations please see:

MOS Note: 2227214.1 - Recommended patches on JDBC 12.1.0.2.0 version

Furthermore don't forget that there's the Instant Client as well, which is easy to deploy and downloadable from OTN.

--Mike


Patch Recommendation

The OJVM Patching Saga - and how to solve it - Part IV

Related Posts on "The OJVM Patching Saga - and how to solve it":
Part I - The OJVM Basics
Part II - Important Notes and Information
Part III - The Mitigation Patch
Part IV - What you may have missed
Part V - MOS Note explaining "Conditional Rolling Install"

What is missing with the "Mitigation Patch"?

Michael, the team lead from a large Oracle customer, mailed me the other day. He wanted to disable Java on all their 600+ databases by using the Mitigation Patch for OJVM. But he received a strange error when trying to enable the mitigation patch, which is clearly included in the April 2016 PSU he is using on his Oracle 12.1.0.2 databases:

SQL> exec dbms_java_dev.disable
BEGIN dbms_java_dev.disable; END;
      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_JAVA_DEV.DISABLE' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

MOS didn't reveal an immediate solution. And we exchanged some emails, detecting some issues with one-offs applied only on one cluster node. But actually that was not the root cause for the mitigation patch not allowing to disable Java.
We did check if the SQL changes from the April 2016 PSU really had been applied, with the script I published here: https://blogs.oracle.com/UPGRADE/entry/dba_registry_history_vs_dba

But the results were ok - datapatch had been executed in the database:

SQL> SET LINESIZE 400
SQL> COLUMN action_time FORMAT A20
SQL> COLUMN action FORMAT A10
SQL> COLUMN status FORMAT A10
SQL> COLUMN description FORMAT A60
SQL> COLUMN version FORMAT A10
SQL> COLUMN bundle_series FORMAT A10
SQL> SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
  2         action,
  3         status,
  4         description,
  5         version,
  6         patch_id,
  7         bundle_series
  8  FROM   sys.dba_registry_sqlpatch
  9  ORDER by action_time;

ACTION_TIME          ACTION   STATUS  DESCRIPTION                                            VERSION  PATCH_ID BUN
-------------------- -------- ------- ------------------------------------------------------ -------- -------- ---
02-MAR-2016 08:59:35 APPLY    SUCCESS Database Patch Set Update : 12.1.0.2.5 (21359755)      12.1.0.2 21359755 PSU
02-MAY-2016 09:44:45 APPLY    SUCCESS Database Patch Set Update : 12.1.0.2.160419 (22291127) 12.1.0.2 22291127 PSU
13-MAY-2016 12:52:01 ROLLBACK SUCCESS Database Patch Set Update : 12.1.0.2.160419 (22291127) 12.1.0.2 22291127 PSU
17-MAY-2016 11:16:56 APPLY    SUCCESS Database Patch Set Update : 12.1.0.2.160419 (22291127) 12.1.0.2 22291127 PSU

The Solution

Finally Michael found the solution. This tiny little script didn't get run: dbmsjdev.sql

It is only mentioned in the PSUs' and BPs' READMEs and can easily be overlooked. Furthermore, I couldn't find it mentioned in any other MOS note.

July 2016 - README PSU for Oracle Database 12.1.0.2 - Section 3.3.2 under 3.
https://updates.oracle.com/Orion/Services/download?type=readme&aru=20592212

October 2016 - README PSU for Oracle Database 12.1.0.2 - Section 3.3.2 under 3.
https://updates.oracle.com/Orion/Services/download?type=readme&aru=20919988#CHDCBBFE
January 2017 - README BP for Oracle Database 12.1.0.2 - Section 2.6.2 under 4.
https://updates.oracle.com/Orion/Services/download?type=readme&aru=20803829#CIHEEDCG

And I missed it on my previous blog post as well - I have added it there now. Once you have executed dbmsjdev.sql, you can disable Java by using:

SQL> exec dbms_java_dev.disable
PL/SQL procedure successfully completed.

Further information

MOS Note:1929745.1 - Oracle Recommended Patches -- "Oracle JavaVM Component Database PSU" (OJVM PSU) Patches
This MOS Note has a section about ISSUES as well.

--Mike
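To check up front whether dbmsjdev.sql has already been run before attempting the disable call, a simple dictionary lookup helps. This is my own sketch - the object name is taken from the PLS-00201 error message above:

```sql
-- Does the DBMS_JAVA_DEV package exist yet?
select owner, object_name, object_type, status
from   dba_objects
where  object_name = 'DBMS_JAVA_DEV';

-- If it is missing, run the script from the patched home first:
--   SQL> @?/rdbms/admin/dbmsjdev.sql
-- and only then:
--   SQL> exec dbms_java_dev.disable
```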


Single-/Multitenant

DBMS_QOPATCH does not work in PDBs (right now)

Thanks to Murthy who commented on this blog post and Jeannette Holland (SimCorp) who opened an SR resulting in an ER.

DBMS_QOPATCH in Multitenant

DBMS_QOPATCH will deliver useful information about installed patches only when executed within the CDB$ROOT. It has been designed this way for security reasons in Oracle Database 12.1, but I can easily see a need to check for installed patches within a PDB as well.

Testcase

I "borrowed" this test case from Jeannette's SR:

SQL> COLUMN NAME FORMAT A8
SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;

NAME     CON_ID DBID       CON_UID    GUID
-------- ------ ---------- ---------- ------------------
CDB$ROOT      1 3424772713 1          47C8525C0DFE49...
PDB$SEED      2 3983775695 3983775695 E6204BB1F6EB4F...
MYPDB1        3 7270044002 7270044002 B975668B860049...
MYPDB2        4 1943363979 1943363979 BCD7AAFAF3F641...

In a PDB:

SQL> ALTER SESSION SET container = myPDB;
Session altered.
SQL> select * from OPATCH_XML_INV;
ERROR:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04080: directory object OPATCH_LOG_DIR not found
no rows selected
SQL> select dbms_qopatch.get_opatch_install_info from dual;
ERROR:
ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at "SYS.DBMS_QOPATCH", line 1986
ORA-06512: at "SYS.DBMS_QOPATCH", line 133

In the CDB:

SQL> ALTER SESSION SET container = cdb$root;
Session altered.
SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><InventoryInstance>

SQL> select dbms_qopatch.get_opatch_install_info from dual;

GET_OPATCH_INSTALL_INFO
--------------------------------------------------------------------------------
<oracleHome><UId>OracleHome-2d1c0910-36ac-429b-98db-96a353d423b6</UId><targetTyp

Solution

There's no solution available right now for Oracle Database 12.1.0.2.
And this behavior does not seem to be documented yet. The SR resulted in an (unpublished) Enhancement Request. In a PDB the following workaround may help in Oracle Database 12.1.0.2:

select patch_id, patch_uid, version, action, action_time, status, description
from dba_registry_sqlpatch;

But this is not as fancy and easy to deal with as an API call to a DBMS package.

I tested in Oracle Database 12.2.0.1 - and everything seems to work fine there :-)

SQL> create pluggable database PDB3 admin user adm identified by adm
     file_name_convert=('/u02/oradata/CDB2/pdbseed', '/u02/oradata/CDB2/pdb3');
Pluggable database created.
SQL> alter pluggable database pdb3 open;
Pluggable database altered.
SQL> alter session set container=pdb3;
Session altered.
SQL> select dbms_qopatch.get_opatch_install_info from dual;

GET_OPATCH_INSTALL_INFO
--------------------------------------------------------------------------------
<oracleHome><UId>OracleHome-3cb04a3a-3999-4767-86f1-bc845cab158e</UId><targetTyp

SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<InventoryInstance> <ora

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;

XMLTRANSFORM(DBMS_QOPATCH.GET_OPATCH_LSINVENTORY,DBMS_QOPATCH.GET_OPATCH_XSLT)
--------------------------------------------------------------------------------
Oracle Querayable Patch Interface 1.0
-----------------------------------------

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB3                           READ WRITE NO

Cheers
--Mike


Oracle Database 12c

New SPFILE parameters in Oracle Database 12.1.0.2 with July 2016 (and newer) PSU/BP

New Parameters in Oracle Database 12.1.0.2 with July 2016 PSU/BP

By following an internal discussion and checking parameter changes between Patch Set Updates (PSU) and Proactive Bundle Patches (BP), I learned that we introduced two new SPFILE parameters in Oracle Database 12.1.0.2 with the July 2016 PSU and BP. One is documented in the patch readme; the other can currently be found only in the Oracle Database 12.2.0.1 manual:

ALLOW_GROUP_ACCESS_TO_SGA
ENCRYPT_NEW_TABLESPACES

The Oracle 12.2 documentation about ALLOW_GROUP_ACCESS_TO_SGA, the parameter which does not appear in the Oracle 12.1 documentation right now, says:

"ALLOW_GROUP_ACCESS_TO_SGA controls group access to shared memory on UNIX platforms. The default value is FALSE, which means that database shared memory is created with owner access only. In Oracle Database releases prior to Oracle Database 12c Release 2 (12.2.0.1), database shared memory was created with owner and group access. When this parameter is set to TRUE, database shared memory is created with owner and group access. This behavior grants permissions to DBAs to manage shared memory outside the database, but also allows DBAs to read and write to shared memory, which may not be desirable for certain installations."

So there's a tiny correction required: it should say "prior to Oracle Database 12.1.0.2 with the July 2016 PSU/BP".

The ENCRYPT_NEW_TABLESPACES parameter came in for cloud deployments and is documented in the description of the July Proactive BP:

12.1.0.2.DBBP:160719 21281607E Transparently encrypt tablespace at creation in Cloud (adds "encrypt_new_tablespaces")

Why does ALLOW_GROUP_ACCESS_TO_SGA appear in Oracle Database 12.1.0.2? Simple reason: it makes Oracle Database 12.1.0.2 more secure. The default for this parameter is FALSE - which actually changes behavior, but may not affect you at first sight. And that's why I blog about it.
You will recognize that the database's shared memory segments are now created with permission "600" - whereas it was "640" before. See my example of a 12.1.0.2 database with the January 2017 BP in place:

$ ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 23298063   oracle     600        2932736    76
0x00000000 23330832   oracle     600        1040187392 38
0x00000000 23363601   oracle     600        5455872    38
0xc8969114 23396370   oracle     600        20480      38
...

... whereas my 11.2.0.4 database runs with different permissions:

$ ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 22872084   oracle     640        12582912   21
0x00000000 22904853   oracle     640        721420288  21
0xd41b1c5c 22937622   oracle     640        2097152    21
...

Does this affect the connection of your applications to the database? No, of course not. It only has an effect if you try to access the SGA from the OS level, i.e. attaching to the shared memory segment. With the old behavior, an OS user in the same group could attach to and read from the SGA. With the new "600" protection, only the OWNER can attach to it - and read the SGA. This is the standard behavior in Oracle Database 12.2.0.1 onward. And it has been backported to the July 2016 PSU and Proactive Bundle Patches, which are cumulative, i.e. it is included in all following PSUs and BPs as well.

--Mike
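If you want to scan a whole box for Oracle shared memory segments that are still group-accessible, something along these lines works. This is my own sketch, not an Oracle-provided check; it only parses ipcs -m output shaped like the listings above, so the column positions are an assumption:

```shell
#!/bin/sh
# List shared memory segments owned by 'oracle' whose permissions
# still grant group access (i.e. not the new 600-only behavior).
check_sga_perms() {
  # reads "ipcs -m"-style output on stdin; prints "shmid perms"
  awk '$3 == "oracle" && $4 != 600 { print $2, $4 }'
}

# Example with canned output (normally: ipcs -m | check_sga_perms)
check_sga_perms <<'EOF'
key        shmid      owner      perms      bytes      nattch     status
0x00000000 22872084   oracle     640        12582912   21
0x00000000 23330832   oracle     600        1040187392 38
EOF
```

On a box where everything already runs with "600", the function prints nothing.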


Patch Recommendation

Where is the Jan 2017 PSU for Oracle Database 11.2.0.4?

Where is it?

This question reached me a week ago: "Where is the January 2017 PSU for Oracle Database 11.2.0.4?"

"What's the deal?" was my first thought - just go and download it from MOS. Well, not really ...

Where to download Proactive Bundle Patches, Patch Set Updates and Patch Sets?

My favorite note is: MOS Note:1454618.1 - Quick Reference to Patch Numbers for PSU, SPU(CPU), BPs and Patchsets

It gives me access to all the download links for the patch bundles I'm looking for. But for 11.2.0.4 there's only "NA" (not available) stated - and a very small asterisk "m".

Why is there no January 2017 PSU for Oracle Database 11.2.0.4?

The comment says it all - but I was looking for a more precise explanation. And Roy knew where to look (thanks!):

MOS Note: 854428.1 - Patch Set Updates for Oracle Products

Section 3 (Patch Set Updates Lifecycle) says:

"PSUs will reach a plateau of non-security content as the version stabilizes. Once this is reached, there will be a tapering off of non-security content. Oracle expects this plateau to be reached with the third or fourth PSU. PSU content will be primarily security-related once the next patch set in the series is released."

So yes, this is possible and somewhat documented. As there are no security fixes for Oracle Database 11.2.0.4 this quarter, no SPU or PSU was released.

Further Information?

Please see the blog post of Rodrigo Jorge from Enkitec with more details: http://www.dbarj.com.br/en/2017/01/11-2-0-4-170117-db-psu-dbbp/

Obrigado, Rodrigo!

--Mike


Cloud

Gather SQL Statements from AWR and Cursor Cache without need for Diag and Tuning Packs

When we talk about database upgrades and migrations, the most important (and unfortunately time- and resource-consuming) activity is testing. But often testing resources are limited or, even worse, don't exist. I've worked with customers where we migrated a core EBS system off AIX to Linux - but only got a 6-year-old Tru64 box for testing purposes (where the recompilation took 20x as long as later on the production environment). Or the classic one: the database is too big, thus we test only with 10% of the data. I know that often your hands are tied. Everybody wants a 1:1 duplicate test system - but not everybody has one.

And here our DBaaS Cloud comes into play. Why? Because you can use one of our best and strongest testing tools, SQL Performance Analyzer (part of the Real Application Testing Pack), without the need to license RAT, as it is included in the High and Extreme Performance Cloud offerings.

Even better: if you have an Enterprise Edition database license on the source, you won't even need Diagnostic and Tuning Pack licenses to offload your statements from your local environment's AWR and Cursor Cache. SQL Tuning Sets can also be accessed by way of database server APIs and command-line interfaces. The following subprograms, part of the DBMS_SQLTUNE package, provide an interface to manage SQL Tuning Sets and are part of Oracle Database Enterprise Edition:

ADD_SQLSET_REFERENCE
CAPTURE_CURSOR_CACHE_SQLSET
CREATE_SQLSET
CREATE_STGTAB_SQLSET
DELETE_SQLSET
DROP_SQLSET
LOAD_SQLSET
PACK_STGTAB_SQLSET
REMOVE_SQLSET_REFERENCE
SELECT_CURSOR_CACHE
SELECT_SQLSET
SELECT_WORKLOAD_REPOSITORY
UNPACK_STGTAB_SQLSET
UPDATE_SQLSET

But please note: Diagnostic and Tuning Packs are still available and key to performance management and diagnosability on your systems. If you'd like to use the above packages and evaluate your statements on-premises (locally), for instance with a scheduled tuning task with the SQL Tuning Advisor, you will still need a license for the Diagnostic and Tuning Packs.
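The capture-and-transport flow with those EE-licensed subprograms can be sketched like this. The STS and staging table names (MY_STS, STS_STAGE) and the capture window are made up for illustration:

```sql
-- Create an STS and fill it from the cursor cache for 5 minutes
exec DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
BEGIN
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'MY_STS',
    time_limit      => 300,    -- capture window in seconds
    repeat_interval => 60);    -- poll the cursor cache every minute
END;
/

-- Package it into a staging table for transport (e.g. via Data Pump) ...
exec DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STAGE');
exec DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'MY_STS', staging_table_name => 'STS_STAGE');

-- ... and unpack it on the target side (e.g. the DBaaS Cloud)
exec DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => 'MY_STS', replace => TRUE, staging_table_name => 'STS_STAGE');
```

The analysis step (SQL Tuning Advisor, SQL Performance Analyzer) is what requires the extra pack licenses on-premises; the capture and transport calls above are part of Enterprise Edition.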
If you plan to use SQL Performance Analyzer locally, you will need to get a license for the Real Application Testing Pack first. See the documentation links below for the exact description (scroll down to SQL Tuning Sets):

https://docs.oracle.com/database/121/DBLIC/editions.htm#DBLIC109
http://docs.oracle.com/cd/E11882_01/license.112/e47877/editions.htm#DBLIC117

For a short example of how to capture your SQL statements directly from AWR, please see this fairly old blog post: https://blogs.oracle.com/UPGRADE/entry/how_to_select_statements_from

But I will publish a complete example within the next days, including not only the capture from AWR and Cursor Cache but also the packaging into staging tables, the unpacking - and of course the SPA runs.

--Mike

PS: I also checked the DBA_FEATURE_USAGE_STATISTICS view, and I don't see any offending entries when you use the listed packages/calls from above.


Oracle Database 12.2

Restarting a failed Database Upgrade with DBUA 12.2

In my previous blog post I showcased how to restart a failed database upgrade on the command line before Oracle Database 12c, with Oracle Database 12.1.0.x and with the new Oracle Database 12.2.0.x: Restarting a failed Database Upgrade with catctl.pl (Upgrade Blog - Jan 24, 2017). Here I'd like to show the new capability of the Database Upgrade Assistant (DBUA) to restart an upgrade, a functionality the DBUA was missing until now. Please note that you can always go back to the command line, no matter which version of the DBUA you have used. I won't describe an upgrade with the DBUA in this blog post as this is showcased in the documentation already: Upgrading a database with the DBUA.

Starting the database upgrade with DBUA

I will upgrade the UPGR database, well known from our Hands-On Lab. And please don't put your credentials in the fields below - if you are logged in as the oracle user then this will lead to failure and drive you crazy ... Another thing which puzzles me: I still have to execute the olspreupgrade.sql script from the newer (in my case the 12.2) home by myself. I wish the DBUA would do this for me, as otherwise I'll have to open an xterm, set my environment and type in a very long path name pointing to the new 12.2 home in order to execute this script in my source environment. Ok, let's kick off the upgrade: The progress bar is very imprecise in relation to the duration - you can more or less ignore it. And - very sad - the Alert and Activity Monitor buttons have disappeared - but they may reappear in a later release of the DBUA.

The Error Scenario

It's always fun to kill pmon and find out how cool this database is ;-) It survives the deadly attack :-) Of course it does ... it's the Oracle Database ;-) Bang!

The DBUA recognizes the failure

Even though the DBUA recognized the failure quite quickly, it still tries to complete the upgrade - which of course results in a ton of errors.
It just means that you'll have to wait until the DBUA has "progressed" the upgrade till the end - and DON'T CLOSE THE DBUA as otherwise you'll lose the ability to RETRY. You'll see the error count going up until the DBUA has reached the "end" of the (failed) upgrade. And then it displays the RETRY button.

Rerun the Database Upgrade

Once you hit RETRY the DBUA will try to resolve the situation - in my case it starts up my source database in STARTUP UPGRADE mode again and then processes the upgrade using the -R option of catctl.pl (described in the previous blog post). You'll also find a new set of logfiles in $ORACLE_BASE/cfgtoollogs/dbua/ subdirectories, indicated with a number (here: 1) and an "R" showing that these are the first restart attempt's logs. --Mike
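As a sketch, the manual olspreupgrade.sql step described above - run against the source database but using the script shipped in the new home - could look like this (the 12.2 home path is a made-up example):

```sql
-- Connected to the SOURCE database as SYSDBA, execute the script
-- from the NEW 12.2 Oracle home before starting the DBUA:
SQL> @/u01/app/oracle/product/12.2.0.1/rdbms/admin/olspreupgrade.sql
```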


Oracle Database 12.2

Restarting a failed Database Upgrade with catctl.pl

What if ... What if a database upgrade fails in between? First of all you should have enabled one of the common protections against issues happening during an upgrade:

Guaranteed Restore Point to Flashback Database
Restore an RMAN Online Backup
Restore a partial Offline Backup

See our slides' Fallback section for further details. But actually there are ways to restart the upgrade without the need to fall back in case of a non-destructive incident such as running out of archivelog space.

Restart the Upgrade before Oracle Database 12c

Before Oracle Database 12c the simplest way to restart an upgrade was to restart catupgrd.sql from SQL*Plus, making sure the database is back in upgrade mode. Upgrade (and downgrade) scripts are designed to run again ... and again ... if necessary. Even if you kicked off the upgrade with the Database Upgrade Assistant (DBUA), which by itself is not able to rerun the upgrade, you can invoke the command-line upgrade and rerun it without the need for a restore.

SQL> startup upgrade
SQL> spool /home/oracle/catupgrd.log
SQL> @?/rdbms/admin/catupgrd.sql

This will restart the upgrade from the beginning and run it again.

Restart the Upgrade in Oracle Database 12.1.0.x

With this release we introduced the parallel upgrade utility, catctl.pl. The database needs to be in startup upgrade mode again, but the upgrade will be driven from the command line, with the Perl tool running the upgrade with a maximum of 8 parallel workers. And if the upgrade fails non-destructively for whatever reason, you invoke catctl.pl from the phase where it stopped, with the -p option specifying the phase number. In the following example the upgrade has been stopped in phase 100 (in my example by a CTRL-C):

*********** Upgrading Misc. ODM, OLAP **********
Serial Phase  #:95 [UPGR] Files:1 Time: 0s
**************** Upgrading APEX ****************
Restart Phase #:96 [UPGR] Files:1 Time: 0s
Serial Phase  #:97 [UPGR] Files:1 Time: 1s
Restart Phase #:98 [UPGR] Files:1 Time: 0s
*********** Final Component scripts ***********
Serial Phase  #:99 [UPGR] Files:1 Time: 0s
************* Final Upgrade scripts ************
Serial Phase #:100 [UPGR] Files:1   ^C

Now I can simply restart it from this phase without the need to rerun all the previous steps. Make sure the database is in STARTUP UPGRADE mode, then invoke from ?/rdbms/admin:

$ORACLE_HOME/perl/bin/perl catctl.pl -p 100 -l /home/oracle/ catupgrd.sql

************* Final Upgrade scripts ************
Serial Phase #:100 [UPGR] Files:1 Time: 142s
********** End PDB Application Upgrade *********
Serial Phase #:101 [UPGR] Files:1 Time: 1s
******************* Migration ******************
Serial Phase #:102 [UPGR] Files:1 Time: 60s
Serial Phase #:103 [UPGR] Files:1 Time: 0s
Serial Phase #:104 [UPGR] Files:1 Time: 71s
***************** Post Upgrade *****************
Serial Phase #:105 [UPGR] Files:1 Time: 30s
**************** Summary report ****************
Serial Phase #:106 [UPGR] Files:1 Time: 1s
Serial Phase #:107 [UPGR] Files:1 Time: 0s
Serial Phase #:108 [UPGR] Files:1 Time: 38s

This also works when the upgrade was started with the DBUA and failed. The DBUA is unable to rerun the upgrade, but you can always invoke the command-line upgrade by looking at the logfiles the DBUA produced to find the phase where it stopped.

Restart the Upgrade in Oracle Database 12.2.0.x

In Oracle Database 12.2.0.1 we have again improved the rerun capability of the upgrade - now you don't have to know the phase where it stopped; instead we introduced the -R option for catctl.pl to resume the upgrade automatically after the last completed phase.
$ORACLE_HOME/perl/bin/perl catctl.pl -R -l /home/oracle/ catupgrd.sql

See the documentation for further details: Rerunning Upgrade for Oracle Databases - Oracle Database Upgrade Guide 12.2. And the DBUA has been improved as well: it can now rerun a failed upgrade, but only from within the same DBUA session (so please don't close it). And of course you can always fall back to the command line and complete the upgrade from there. See the separate blog post: Rerunning the Upgrade with the DBUA in Oracle Database 12.2. --Mike


Oracle Database 12.2

Default Changes SPFILE Parameters - Oracle 12.2

Parameters in Oracle Database 12.2.0.1 - part 4 of the series:

New SPFILE Parameters in Oracle Database 12.2.0.1
Obsolete SPFILE Parameters in Oracle Database 12.2.0.1
Deprecated SPFILE Parameters in Oracle Database 12.2.0.1
Default SPFILE Parameter changes between Oracle Database 11.2.0.4, 12.1.0.2 and 12.2.0.1

Roy and I did a comparison between default parameter settings in Oracle Database 11.2.0.4 vs Oracle Database 12.1.0.2 vs Oracle Database 12.2.0.1. And some changes are quite interesting - of course the memory-driven parameters are left out of this list. The databases were all on OL (Oracle Linux) 6.8 and located on a file system, not in ASM - therefore certain values may differ when the database is located in ASM and/or on a different OS port.

Parameter                         Oracle 11.2.0.4  Oracle 12.1.0.2  Oracle 12.2.0.1
audit_sys_operations              FALSE            TRUE             TRUE
compatible                        11.2.0.4         12.1.0.2.0       12.2.0
control_file_record_keep_time     7                7                30
db_securefile                     PERMITTED        PREFERRED        PREFERRED
dml_locks                         616              1416             2076
filesystemio_options              NONE             NONE             setall
job_queue_processes               1000             1000             4000
object_cache_optimal_size         102400           102400           10240000
optimizer_features_enable         11.2.0.4         12.1.0.2         12.2.0.1
parallel_max_servers              48               80               80
parallel_min_servers              0                8                8
parallel_servers_target           64               32               32
parallel_adaptive_multi_user      TRUE             TRUE             FALSE
pre_page_sga                      FALSE            TRUE             TRUE
resource_limit                    FALSE            TRUE             TRUE
sec_max_failed_login_attempts     10               3                4
sec_protocol_error_trace_action   CONTINUE         TRACE            LOG
spatial_vector_acceleration       FALSE            FALSE            TRUE
sql92_security                    FALSE            FALSE            TRUE

--Mike


Single-/Multitenant

Having fun with PDB LOCKDOWN PROFILES

In Oracle Database 12.2 (available in the Oracle DBaaS Cloud) there's a new functionality called "PDB LOCKDOWN PROFILES". Actually the parameter already appeared - undocumented and not functional - in Oracle Database 12.1.0.2 (see here).

PDB Lockdown Profiles

PDB Lockdown Profiles are meant as a granular way to restrict or enable access - mostly under the aspect of using the database in a highly shared environment while requiring security. The idea is to embed restrictions on top of a grant, i.e. taking away certain things. As an example, you could allow a user logged into a certain PDB to change only the optimizer_mode and cursor_sharing parameters when doing an ALTER SYSTEM. The glossary in the Concepts Guide says: "A security mechanism to restrict operations that are available to local users connected to a specified PDB." And here is a good overview in the Concepts Guide and the Security Guide for details: Lockdown Profiles in Oracle Multitenant; Lockdown Profiles - Create, Drop, Alter; How-to-Lockdown-Profile.

Let's start with a fresh PDB:

SQL> create pluggable database PDB2 admin user adm identified by adm file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb2');

First we need to create a lockdown profile:

SQL> create lockdown profile P1;

Then we can ALTER the profile and allow only optimizer_mode and cursor_sharing to be changed with an ALTER SYSTEM:

SQL> alter lockdown profile P1 disable statement=('ALTER SYSTEM') clause=('SET') OPTION ALL EXCEPT=('optimizer_mode','cursor_sharing');

And finally, the PDB Lockdown Profile needs to be enabled:

SQL> alter system set PDB_LOCKDOWN=P1;

Check:

SQL> show parameter pdb_l

NAME          TYPE    VALUE
------------- ------- ------
pdb_lockdown  string  P1

Where the fun begins ...
Now let's connect with my default SYS user - SYS per definitionem is a common user - and switch to my PDB2:

$> sqlplus / as sysdba
SQL> alter session set container=PDB2;
SQL> alter system set sql_trace=TRUE;
*
ERROR at line 1:
ORA-01031: insufficient privileges

Aha! Ok, so let's try ...

SQL> alter system set cursor_sharing='FORCE';
System altered.
SQL> alter system set optimizer_mode='FIRST_ROWS_10';
System altered.

Ok, this works. But can I still change SQL_TRACE on the session level? Of course I can ...

SQL> alter session set SQL_TRACE=TRUE;
Session altered.

Makes sense, as I restricted only ALTER SYSTEM but not ALTER SESSION. So let's do this as well:

SQL> alter session set container=cdb$root;
Session altered.
SQL> alter lockdown profile P1 disable statement=('ALTER SESSION') clause=('SET') OPTION ALL EXCEPT=('optimizer_mode','cursor_sharing');
Lockdown Profile altered.

Another tiny exercise taken directly from the documentation:

CREATE LOCKDOWN PROFILE medium;
ALTER LOCKDOWN PROFILE medium DISABLE STATEMENT=('ALTER SYSTEM');
ALTER LOCKDOWN PROFILE medium ENABLE STATEMENT=('ALTER SYSTEM') CLAUSE=('FLUSH SHARED POOL');

This will allow ONLY the ALTER SYSTEM FLUSH SHARED POOL command with ALTER SYSTEM.

SQL> alter system set pdb_lockdown='MEDIUM';
alter system set pdb_lockdown='MEDIUM'
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> alter system set sql_trace=true;
alter system set sql_trace=true
*
ERROR at line 1:
ORA-01031: insufficient privileges

And of course you can add more things to an existing profile, such as disabling certain features:

SQL> alter session set container=cdb$root;
Session altered.
SQL> ALTER LOCKDOWN PROFILE medium DISABLE FEATURE=('XDB_PROTOCOLS');
Lockdown Profile altered.

Which profiles do exist and what's defined?

First of all, the parameter PDB_LOCKDOWN is modifiable on a PDB level, meaning you can have different profiles for different PDBs. But as far as I tested, only one profile per PDB can be active and enabled.
You can look up existing profiles and their contents:

SQL> select profile_name, rule_type, rule, clause, clause_option, status from DBA_LOCKDOWN_PROFILES order by 1;

PROFILE_NAME   RULE_TYPE  RULE           CLAUSE     CLAUSE_OPTION   STATUS
-------------- ---------- -------------- ---------- --------------- -------
MEDIUM         STATEMENT  ALTER SYSTEM                              DISABLE
MEDIUM         FEATURE    XDB_PROTOCOLS                             DISABLE
P1             STATEMENT  ALTER SESSION  SET        CURSOR_SHARING  ENABLE
P1             STATEMENT  ALTER SYSTEM   SET        OPTIMIZER_MODE  ENABLE
P1             STATEMENT  ALTER SYSTEM   SET        CURSOR_SHARING  ENABLE
P1             STATEMENT  ALTER SESSION  SET        OPTIMIZER_MODE  ENABLE
P1             STATEMENT  ALTER SESSION  SET                        DISABLE
P1             STATEMENT  ALTER SYSTEM   SET                        DISABLE
PRIVATE_DBAAS                                                       EMPTY
PUBLIC_DBAAS                                                        EMPTY
SAAS                                                                EMPTY

11 rows selected.

Summary

This is a very powerful feature - but it can be a bit tricky in the future to find out why certain things don't work. The ORA-01031 error may not always point you in the right direction. One thing which puzzles me a bit: we always talk about simplifying things and easing administration. And with PDB Lockdown Profiles you can highly complicate things. Or tease your colleagues :-) (please don't, ok?!). Just don't over-complicate things ... :-) --Mike


Workshop

DOAG Noon2Noon - Upgrade/Multitenant - 2.-3.2.2017

++++++ Sorry - this is a German-only event ++++++ I'm really looking forward to this one. On February 2, 2017, the DOAG Noon2Noon on "Upgrade, Migration and Multitenant" will take place at the Intercity Hotel in Mainz - from 12:00 noon until 12:00 noon the following day. Hands-on practice is the focus, garnished with short talks. In this practice-oriented "Noon2Noon" event we want to tackle the challenges of an upgrade to Oracle 12c or Oracle 12c Release 2 (if it has been released for on-premises databases by then). The unusual format aims at being solution-oriented. In the interactive sessions the motto is: roll up your sleeves and join in! From noon to noon we reduce the slide battles to a minimum. Our goal is that by the end of the event every participant will have successfully performed a database upgrade from version 11.2 to 12.1 with the Multitenant architecture. We will be supported by Ernst Leber (MT AG), Mike Dietrich (Oracle Corporation), Martin Bach (Enkitec/Accenture) and Uwe Hesse (ex-Oracle University). Please be sure to bring your laptop and, above all, a database - either installed locally or, even easier, in an Oracle VBox or VMware environment - so that we can perform the upgrade or a migration together, live and on site, and then convert the database to Oracle Single/Multitenant. Please register via the DOAG page: http://www.doag.org/termine/termine.php?tid=527684 And a little motivation up front: Erfolgreiches Upgrade nach Oracle 12c - "Mr. Upgrade" verrät, wie es geht. See you there :-) --Mike


Off Topic

Happy New Year!

A Happy New Year 2017 to all of you! Thanks for visiting our blog so often - we promise to keep up that level of technical information in the new year as well. But let me review some things and give you an outlook for 2017.

Flashback

2016 was an exciting year in terms of technology. Oracle moves strongly towards more and more cloud offerings - and Oracle Database 12.2 got released in the Oracle DBaaS Cloud as well. Roy and I did a good number of workshops internally with our Support and Presales people all over the world. And some of them were the best workshops we've ever done, with such an amazing audience. Thanks to everybody for coming by, playing with our lab and challenging us. That was pure fun - and sorry for the frosty temperatures in early November in Dalian in China - it was still a wonderful visit and we hope to be back soon. There were also a lot of great conferences such as DOAG, UKOUG, OUGN, AUOG and SANGAM, just to name a few. And of course the OTN Tours. Thanks to all those great people who organize all these events, mostly in their spare time. You do such a great job bringing Oracle people together - thank you very much! These events - although they are usually hard work for us as well - help us to recharge our batteries. Just visiting excellent talks by other people, learning from customers and partners, but also discussing so many tech topics in the breaks or at meetings is simply great. If you've never been to such an event, give it a try. It's really worth it. Writing the blog is still a lot of fun as well - we learn a lot about things you saw and experienced. Please keep on sending us comments if you have seen issues you think might be worth investigating further or highlighting to others as well. I can't stress enough how important this is to us. And it helps us to learn about new stuff as well.

Outlook

I just received some emails and comments asking for the availability of Oracle Database 12.2 on-premises - all we can do is direct you to the official MOS Note: 742060.1. And I'd bet 2017 will be as exciting as 2016 has been. The year will start off with a workshop at a partner, Roy will do a huge number of workshops in the US - and again, no marketing will be done in our workshops, just purely TECH stuff. And we may add more User Group driven events if necessary and requested.

Have a great start into 2017! And see you soon at an event or a conference! --Mike


Cloud

APEX is in CDB$ROOT again - Journey to the Cloud VII

Well ... it's been a while ... but I would like to continue my journey to the cloud ... What happened so far on my Journey to the Cloud?

Part I - Push a Button (Dec 3, 2015)
Part II - Switch On/Off and Remove (Dec 4, 2015)
Part III - Patch, patch, patch (Dec 22, 2015)
Part IV - Clean Up APEX (Jan 19, 2016)
Part V - TDE is wonderful (Jan 28, 2016)
Part VI - Patching does not work (Apr 18, 2016)
==> Part VII - APEX is in CDB$ROOT again (Dec 20, 2016)

Oracle Database 12.2.0.1 in the Cloud

Since November 2016 Oracle Database 12.2.0.1 has been available in the Oracle DBaaS Cloud. And I received this question in my inbox yesterday: "I have a customer who wants to migrate Apex 4.2 applications to DBCS and ORDS.war to Weblogic on Compute. I recently went through your blog on removing Apex from CDB Root. The customer is planning to do something similar but had questions on the repercussions of doing so in Oracle Public Cloud. What are the factors that need to be considered? Also how would DBCS patching work in this scenario?"

Is APEX in CDB$ROOT again?

I hadn't checked this yet, as we'd had a very productive conversation with the APEX folks a while back. And I'm 100% sure that the APEX group wasn't involved in this decision, as they clearly recommend in the doc NOT to have APEX installed in the CDB$ROOT ;-) Ok, I connected to our Cloud environment and ... voila ...

COLUMN VERSION FORMAT A15
COLUMN NAME FORMAT A12
COLUMN COMP_ID FORMAT A10
SQL> select r.comp_id, r.version, c.name from cdb_registry r, v$containers c where r.con_id=c.con_id and r.comp_id='APEX' order by 3;

COMP_ID    VERSION         NAME
---------- --------------- ------------
APEX       5.0.4.00.12     CDB$ROOT
APEX       5.0.4.00.12     ROYPDB1

Ouch ... The presence of APEX in the CDB$ROOT may have to do with the Oracle DBaaS Monitor Console. This is just an assumption, but when I removed APEX in my old 12.1.0.2 cloud deployment I had to clean up the DBaaS Monitor as well.
See: Journey to the Cloud Part IV - Clean Up APEX (Jan 19, 2016) for my experience a few months back.

Solution?

Well, having APEX in the CDB$ROOT is still a brilliant [IRONY!] idea. As soon as you start unplug/plug operations with APEX in the PDB only, or with a different APEX version, you are asking for trouble. See this blog post for the potential pitfalls: Why you should remove APEX from the CDB$ROOT - Nov 15, 2015. Which options does the customer have, assuming that his APEX 4.2 application is in a non-CDB?

1. Upgrade APEX locally to 5.0.4.00.12 before migrating the database to the DBaaS cloud. This would be the easiest workaround if it didn't involve an application software upgrade. Through the Oracle glasses this looks simple - but from a customer's and application developer's standpoint this isn't trivial, as most likely it will involve application testing.

2. Export the APEX application and import it. I haven't done this myself, but first of all with APEX 4.2 (or below) you must take care to move the image files as well - and you'll have to move data as well. And, of course, you won't end up on APEX 4.2 but on APEX 5.0, so the above mentioned application upgrade will hit you as well. I don't see any benefit over solution 1.

3. Remove APEX from the 12.2.0.1 DBaaS Cloud deployment's CDB$ROOT. This is - in my humble opinion - the only viable solution here if the customer can't upgrade APEX in the current environment for whatever reason. But this will most likely remove the DBaaS Monitor as well. I can live without it, but I know that it offers a lot of good stuff, especially when dealing with encrypted tablespaces, which is otherwise hard to handle on the command line. The good part of this solution is the freedom and flexibility you'll get for future unplug/plug operations once APEX is removed from the CDB$ROOT.

Finally, regarding patching: I don't see any issues.
And the same goes for a future upgrade, as we decouple APEX upgrades from the database upgrade with Oracle Database 12.2.0. --Mike


Tech

Create a database with NON-DEFAULT Time Zone

One requirement of Transportable Tablespaces (and of course Full Transportable Export/Import as well) is to have identical database character sets AND identical time zone settings.

Problem

The source database has a lower time zone version than the default in the destination home. Let's assume you'd like to take an Oracle 11.2.0.4 database off IBM AIX and migrate it into Oracle 12.1.0.2 on an Exadata, meaning Oracle Linux 6. The source time zone version - if it has never been upgraded - would be TZ V14, whereas the new Oracle 12.1.0.2 database you'd create would get TZ V18 by default.

Solution 1 - Upgrade the Time Zone version in the Source

You could now "simply" upgrade the time zone version in the source database, meaning you'd apply the time zone patch matching the default time zone of the destination database home (see MOS Note: 412160.1 for the list of available time zone patches). Then you would have to adjust the data with the scripts provided in MOS Note: 1585343.1. Or you could go a step further and apply the most recent available time zone patches to both homes, which I'd recommend in this case. Then you'll adjust the source database before initiating the transport steps, including the copy operations. I have put "simply" in quotation marks on purpose, as often this may not be an option. Changing the source's time zone version will incur downtime and a restart of the database. Furthermore, in case somebody has a very large amount of time zone dependent data, it may take a while.

Solution 2 - Create the Destination Database with a lower Time Zone version

Therefore I'd consider the second solution the more interesting and practical one: create the destination database with a different time zone version than the default. In Oracle Database 12.1.0.2 the default would be TZ V18.
Step 1: Create database creation scripts with DBCA

I haven't tried whether it would also work to let the DBCA create a custom database directly, but I don't trust Java applications using my environment variables, so I prefer the script version. It does not work to use one of the seed databases in DBCA (DWH, OLTP and GENERAL PURPOSE) as those all use the default version of the release without an option to override it.

Step 2: Set the environment variable ORA_TZFILE

The time zone files are located in $ORACLE_HOME/oracore/zoneinfo. By using the environment variable ORA_TZFILE we can override the default of using the highest numbered time zone file in this directory:

$ export ORA_TZFILE=/u01/app/oracle/product/12.1.0.2/oracore/zoneinfo/timezone_14.dat

Step 3: Create the database with the "sh" script

DBCA has created the "create database" scripts, one of them named <SID>.sh. Start the creation with:

$ ./MIKE.sh

Step 4: Verify the correct time zone setting of your new database

The following query shows the current time zone version of the database:

SQL> select value$ "TZ Version" from SYS.PROPS$ where name='DST_PRIMARY_TT_VERSION';

In my example the result will now be 14 instead of 18 (the default for an Oracle 12.1.0.2 database). --Mike


Data Pump

Transportable Tablespaces and READ ONLY in Oracle Database 12c

We recently worked with a customer who noticed that they were not able to use transportable tablespaces to connect the same tablespace data files to two databases at the same time, even after setting the tablespaces READ ONLY in SQL*Plus. This is new behavior in 12c, and many customers are not yet aware of this change. Here are the details of what changed, why, and how you might want to deal with it if the changes affect your environment. What Changed? Starting in 12.1, data pump sets tablespaces read write during the import phase of a transportable tablespace migration. This means that a tablespace cannot be hooked into two different databases at the same time using transportable tablespaces. Why Was This Change Made? There were a couple of motivations for this change. First, as databases have grown, we encountered performance hits when dealing with tablespaces that contain many partitions or subpartitions for either tables or indexes. The reason for this (apologies if this gets too deep) is that we try to ensure that we can reclaim free space in cases where a tablespace is being moved, but where not all tables within that tablespace are part of the operation. For example, you could move a tablespace data file which includes partitions from 5 tables, but you may be interested in only 2 of those tables. The segments used by the other 3 tables would be dead space that we should reclaim. Prior to 12c, we would reclaim this space by first recording all the segments being exported during the export phase, and then those imported during the import phase. This allowed us to free up all the space for segments that were not imported. This worked, but as bigfile tablespaces grew into the tens of terabytes, performance really suffered. We ran into cases where all that accounting for segments took literally days. 
In 12c we implemented a different technique where we no longer record the segments on export (this is available as a backport to 11g as well), and then upon import we recompute the bitmap for the tablespace. Recomputing the bitmap means calling DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS, and generally takes a matter of seconds compared to the previous method, which could take hours. Therefore, this change is a very large performance improvement for both export and import of transportable tablespaces when there are large numbers of data segments involved. The second reason for this change was to enable transportable tablespaces to handle import of a tablespace into a database that used a different version of Timestamp with Timezone (TSTZ) data. Prior to 12c, there were many restrictions on moving TSTZ data between databases. We have progressively been able to relax and eliminate these restrictions over time. To quote from the 11.2.0.4 Database Utilities Guide:

"Jobs performed in transportable tablespace mode have the following requirements concerning time zone file versions:

If the source is Oracle Database 11g release 2 (11.2.0.2) or later and there are tables in the transportable set that use TIMESTAMP WITH TIMEZONE (TSTZ) columns, then the time zone file version on the target database must exactly match the time zone file version on the source database.

If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TSTZ columns.

If these requirements are not met, then the import job aborts before anything is imported. This is because if the import job were allowed to import the objects, there might be inconsistent results when tables with TSTZ columns were read."

Starting in 12.1 we are able to handle TSTZ data when moving a tablespace to a database with a higher timezone version.
This is done by opening the tablespace datafile and fixing the TSTZ data using features created in 12c for this purpose. This means that with 12c we can use transportable tablespaces in more scenarios, and that the resulting import will be more complete when there is Timestamp with Timezone data involved. In summary, the change to opening data files read write during a transportable tablespace import makes the process faster and the technique more broadly applicable.

But Neither of those Benefits Affects Me, and I Like the Old Behavior!

It might be possible for us to allow the old behavior by implementing a parameter explicitly for that purpose. However, there would be the following shortcomings to using this parameter:

We would not be able to account for the segments imported during the transportable impdp operation, because we would have no list of exported segments against which to compare them. Those segments would be dead space until the user explicitly called the DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS procedure.

If any of the tablespaces being imported contained TSTZ data, then the timezone version of the importing database would have to exactly match that of the exporting database, or we would abort the import.

However, we don't require a new parameter in order to achieve the goal of preserving the old behavior. If the files are set READ ONLY at the operating system level (or in ASM) then this prevents any attempt to set them read write at the database level. This means that we will not reclaim free space from unused segments, and the TSTZ fix-ups will not be performed. In this case, any tables with TSTZ data will be dropped upon import when the timezone version of the target database is not the same as that of the imported data. We (Data Pump development) tend to view the benefits gained by this change as far outweighing the loss of functionality.
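If you go the OS-level READ ONLY route and later want to reclaim the dead space yourself, the procedure mentioned above can be called manually. A hedged sketch (the tablespace name USERS_TTS is a made-up example):

```sql
-- Recompute the free-space bitmap for the transported tablespace;
-- this typically takes seconds even for large bigfile tablespaces
EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS('USERS_TTS');
```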
However, we do recognize that we have hundreds of thousands of customers who may have varying use cases and priorities for Data Pump, so if there is a desire to provide options as described above, we would be happy to accept enhancement requests for consideration.
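If you want to force the old read-only behavior without any new parameter, the OS-level trick described above can be as simple as removing the write bits from the transported datafiles before running impdp. A minimal sketch - the file path is a hypothetical stand-in, not from the post; against a real migration you would chmod the actual transported datafiles:

```shell
# Stand-in for a transported datafile (hypothetical path).
f=/tmp/users01_demo.dbf
touch "$f"

# Remove all write bits so the database cannot reopen the file read write;
# the import then behaves like the pre-12c read-only import.
chmod 0440 "$f"

# Show the resulting permission bits.
stat -c '%a' "$f"
```

In ASM the equivalent would be making the files read-only on the ASM side. Remember the trade-off stated above: no free-space reclamation, and tables with TSTZ data are dropped on import if the timezone versions differ.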


Workshop

UKOUG TECH 16 is coming - and I'm speaking

Uhhh ... time flies. I look into my calendar - and the UKOUG TECH 16 Conference is almost here. Just a few days to go (and hopefully no Lufthansa strike on the upcoming weekend - but I have already looked for alternatives). Anyhow, I'm excited and look forward to my 2nd TECH conference. Last year was the first time for me and I enjoyed it a lot. Very good audience, good location - even though I can't say much about the city of Birmingham, as I haven't seen anything except the venue, my hotel and an Indian restaurant. Well, business travel is usually not meant for sightseeing. But the weather this year is supposed to be fine - chilly but sunny ;-) Just in case you'll be there as well, these are my three talks at UKOUG TECH 16:

Monday, December 5, 2016 - 14:10h - HALL 9
Ensure Performance Stability When Upgrading Oracle Databases
Nobody likes unforeseeable surprises when it comes to database upgrades and migrations. As the upgrade or migration usually is not complicated by itself, the most important topic is to ensure good performance afterwards. This presentation will discuss a straightforward step-by-step approach to ensure not only plan stability but also proper testing, secret tweaks and helpful tips and tricks around database upgrades and migrations. It spans the entire project lifetime, from things to do before approaching such a move to important settings in your new environment during or after the upgrade. Just to avoid unforeseeable surprises ...

Tuesday, December 6, 2016 - 15:25h - HALL 5
Upgrade to the Next Generation of Oracle Database: Live & Uncensored!
An upgrade on PowerPoint slides will always work fine. But how does a database upgrade to the next generation of the Oracle Database really work? What is different compared to Oracle Database 12.1, and what is new? And is it faster? This talk will briefly overview database upgrades and new upgrade and migration features, but focus mainly on a LIVE and UNCENSORED (and potentially INTERACTIVE) demonstration of a database upgrade to the next generation of the Oracle Database.

Wednesday, December 7, 2016 - 8:50h - HALL 9
The Best Upgrade & Migration Strategies - or Things to Avoid
What if you have more than a few databases? What is the best strategy to keep them current, upgrade and/or migrate them in a regular cycle without consuming all your available resources? And what can be automated? This talk will not only highlight successful customer upgrade projects but also showcase techniques to automate your upgrades and your patching, with two examples from customers with between 300 and 1700 databases. Furthermore, there are things and habits to definitely avoid, as otherwise things get really tricky, become very hard to handle - and very expensive as well. Real-life cases will showcase things to avoid by all means.

Hope to see you there! --Mike


Patch Recommendation

DBA_REGISTRY_HISTORY vs DBA_REGISTRY_SQLPATCH

At the DOAG Conference in Nürnberg in November 2016, a customer asked me right after my talk "Upgrade to Oracle Database 12.2 - Live and Uncensored" why DBA_REGISTRY_HISTORY does not get updated when he applies a Bundle Patch and follows all instructions, including the "./datapatch -verbose" call. I was wondering as well and asked him to open an SR. Which he did. And he received the message from Support that in Oracle 12c this information is no longer supposed to appear there, but only in DBA_REGISTRY_SQLPATCH. I dug a bit deeper internally to get a clear statement (thanks to Carol (my boss), Rae (my teammate) and Scott (the man who owns datapatch) for following up!).

Patch Query in Oracle Database 11g

Tim Hall has published this simple and quite helpful script to query applied PSUs and BPs in Oracle Database 11g: Script to monitor DBA_REGISTRY_HISTORY. The output in my environment looked like this:

ACTION_TIME          ACTION  NAMESPE VERSION  ID      COMMENTS             BUN
-------------------- ------- ------- -------- ------- -------------------- ---
01-JUL-2016 15:24:56 APPLY   SERVER  11.2.0.4 160419  PSU 11.2.0.4.160419  PSU
21-OCT-2016 17:40:32 APPLY   SERVER  11.2.0.4 161018  PSU 11.2.0.4.161018  PSU

But running the same script on Oracle Database 12.1.0.2 returns (as it did for the customer) "no rows selected".

Patch Query for Oracle Database 12c

Since Oracle Database 12.1.0.1 we use DBA_REGISTRY_SQLPATCH instead of DBA_REGISTRY_HISTORY to track PSUs and BPs applied to the database. My output in Oracle Database 12.1.0.2 looks like this:

ACTION_TIME          ACTION  STATUS   DESCRIPTION          VERSION  PATCH_ID BUND
-------------------- ------- -------- -------------------- -------- -------- ----
21-OCT-2016 17:29:36 APPLY   SUCCESS  DBP: 12.1.0.2.161018 12.1.0.2 24340679 DBBP

when using this tiny script:

SET LINESIZE 400
COLUMN action_time FORMAT A20
COLUMN action FORMAT A10
COLUMN status FORMAT A10
COLUMN description FORMAT A40
COLUMN version FORMAT A10
COLUMN bundle_series FORMAT A10
SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
       action,
       status,
       description,
       version,
       patch_id,
       bundle_series
FROM   sys.dba_registry_sqlpatch
ORDER BY action_time;

But the question remains whether - as in Oracle Database 12.1.0.1 - both views should get updated.

Explanation

In 11.2.0.4 we used the script catbundle.sql to apply bundle patches. It uses DBA_REGISTRY_HISTORY only. With the introduction of datapatch in 12.1.0.1, we now have the (much better) DBA_REGISTRY_SQLPATCH. This is used for both bundle and non-bundle patches. In Oracle Database 12.1.0.1 we actually called catbundle internally for bundle patches, so in 12.1.0.1 both registries were updated for bundle patches. Starting in 12.1.0.2, however, only DBA_REGISTRY_SQLPATCH is queried and updated, for bundle and non-bundle patches alike.

Update [Dec 23, 2016]

After discussing this and other issues with the owners of datapatch, my teammate Rae logged a bug for this issue, as we believe both views should be updated as happened in 12.1.0.1. Bug# 25269268 tracks the issue. --Mike


Off Topic

Lufthansa - 14th pilot strike in 2 years - lovely :-(

I don't blame anybody. Really, I don't. But nevertheless this is awkward. Source: Facebook - https://scontent.fmuc2-1.fna.fbcdn.net/v/t1.0-9/15193604_10210663792047910_5774221704440071551_n.jpg?oh=99f7242c9283bf664db779354d594b3b&oe=58B51EA1

I was affected by the most recent - the 14th (!!!) in two years - strike of the Lufthansa pilots. And not for the first time: I have had my share of strike experience in the past years with my main airline carrier. Last week I had a hard time rescheduling my flight to Brussels, and I got stuck in the Netherlands the other day as the pilots' union VC, which represents an estimated 5400 Lufthansa pilots, extended their strike day by day. And as my ticket had been adjusted by Lufthansa for the first leg, our travel agency couldn't alter it anymore. The Lufthansa hotline was not helpful, as it simply broke down, either saying "wrong number" or disconnecting me after 10 seconds. And all the flight alternatives Lufthansa offered me via their app or booking page were sold out, or asked me to put myself on the waiting list just to deny it a second later. I ended up with a new ticket bringing me home at least on Friday by midnight, thanks to SAS and a brave Lufthansa CityLine pilot who did not go on strike. Yes, there are pilots who fly and don't agree with the union's position. But the ticket was not issued by Lufthansa, so I will have to do the paperwork myself to claim it back. Today I wanted to fly out to a future reference customer in Scandinavia. I read about the new strike on Monday morning (for me it counts as the 15th), grounding all short-haul flights on Tuesday plus all short- and long-haul flights on Wednesday. But even though our travel agency has a good relationship with Lufthansa, there was no way to get a useful flight connection. I got booked on an earlier flight - and an hour later Lufthansa canceled this one as well.

I could still have flown to my destination today - with just a tiny little 5-hour layover in Brussels. But there wouldn't have been any way back tomorrow allowing enough time for a useful customer meeting working together on a challenging project. And all the valid flight options with SAS were already taken. So I had to cancel a long-planned meeting. This time Lufthansa didn't offer any alternatives - not even sold-out ones - as their web site broke down yesterday as well. The hotline showed the same behavior as on Friday ... tuut-tuut-tuut-tuut ... I'm not in a position to judge who's right or wrong. But I get asked several times a day why these strikes happen. And I don't see either of the parties moving towards the other. Lufthansa is a great company, no doubt, with plenty of tradition and history. And the right to strike is guaranteed by our constitution. But showing no movement towards each other's positions is a bad thing. No dispute can be solved this way. All involved parties should understand that this will lead to losing more and more customers. Which will lead to more savings. Which will lead to losing more customers. More cost cuts. Losing more customers. And so on. I will book all my future flights on alternate airlines, as I value our customers a lot. If I schedule a workshop on the other side of the globe or across the street, I would like to ensure I'll be there - and not depend on an airline which has not been reliable for many years. Please find a solution! --Mike


Oracle Database 12c

Enabling ADAPTIVE Features of Oracle 12.2 in 12.1

Oracle Database 12.2 introduces the new split-up adaptive parameters, OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS. For more information please see:
https://blogs.oracle.com/UPGRADE/entry/optimizer_adaptive_features_obsolete_in
https://blogs.oracle.com/optimizer/entry/optimizer_adaptive_features_in_the

But Oracle Database 12.2 on-premises is not out yet - so what should you do when upgrading to Oracle Database 12.1, or when struggling with some of the "adaptive" features in Oracle 12.1?

MOS Note: 2187449.1 - Recommendations for Adaptive Features in Oracle Database 12.1

It recommends adopting the Oracle Database 12.2 defaults when upgrading to Oracle Database 12.1. This can be achieved by installing two patches - we call it the recommended approach:

The patch for bug# 22652097 introduces the two parameters OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS, and in addition removes the parameter OPTIMIZER_ADAPTIVE_FEATURES.

The patch for bug# 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STATS_EXTENSIONS is set to ON.

Please make sure you remove OPTIMIZER_ADAPTIVE_FEATURES from your spfile when applying the patches:

alter system reset optimizer_adaptive_features;

Both patches should help as well if you have already upgraded to Oracle Database 12.1 but are encountering performance issues. Please note that it's not necessary to set OPTIMIZER_DYNAMIC_SAMPLING to a non-default value, because the patches will disable the use of adaptive dynamic sampling to match the default behavior of Oracle Database 12.2 when both new parameters are at their default settings.

Further Information: https://blogs.oracle.com/optimizer/entry/optimizer_adaptive_features_and_upgrading

--Mike
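After applying both patches, a quick sanity check in SQL*Plus might look like this - a sketch, with the expected defaults taken from the 12.2 behavior the patches emulate:

```sql
SQL> alter system reset optimizer_adaptive_features;
SQL> show parameter optimizer_adaptive

-- Expectation after patching (per the 12.2 defaults):
--   optimizer_adaptive_plans      = TRUE
--   optimizer_adaptive_statistics = FALSE
-- and optimizer_adaptive_features should no longer be listed.
```

If the old parameter still shows up, the patch for bug# 22652097 is not in place in that home.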


Single-/Multitenant

SPFILE Parameter: max_pdbs - a must for Single Tenant

Sometimes my job has an aspect that makes me smile at the end of the day ;-) I sat together with Johannes Ahrends during a talk at the OUGN Conference on the boat from Oslo towards Kiel. Afterwards we were discussing why there's no official way to limit the number of PDBs - which will be essential for customers wanting to go the Single Tenant route. I had my hands-on environment up, and we played a bit in the break, recognizing that a constraint on CONTAINER$ won't be the correct solution, as unplug/plug operations leave leftovers in it unless you DROP PLUGGABLE DATABASE. And even if you drop the remains, the constraint solution does not work. At breakfast the next morning Johannes mentioned a trigger - and he published one soon after. But fiddling around with the data dictionary is not a good idea in terms of keeping support for your database. So I asked my contacts internally and received a message saying clearly: "Somebody doesn't want this." :-) Well, I have worked long enough for Oracle to know how to read that. It's a common thing to blame it on "Somebody" when you don't want to discuss things further. One can hide perfectly well behind "Somebody". My surprise was all the greater when I started testing and playing with Oracle Database 12.2 - collecting init.ora parameters between release labels to detect changes and additions - and this one appeared: MAX_PDBS, with the parameter description "max number of pdbs allowed in CDB or Application ROOT". I was (a) surprised, (b) happy and (c) had to try it out immediately in my environment, where I already had 3 PDBs (the PDB$SEED does not count):

SQL> alter system set max_pdbs=3;

System altered.
SQL> show pdbs

CON_ID CON_NAME               OPEN MODE  RESTRICTED
------ ---------------------- ---------- ----------
2      PDB$SEED               READ ONLY  NO
3      PDB1                   READ WRITE NO
4      PDB2                   READ WRITE NO
5      CLONE_PDB              MOUNTED

SQL> alter system set max_pdbs=2;
alter system set max_pdbs=2
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65331: DDL on a data link table is outside an application action

So even though the error message is a bit rough (at least in my release drop), the parameter does what we want. In a single tenant environment you'll set it to "1" and prevent the creation or plugin of a 2nd PDB in this container database.

Another test with a fresh container database:

SQL> show pdbs

CON_ID CON_NAME      OPEN MODE  RESTRICTED
------ ------------- ---------- ----------
2      PDB$SEED      READ ONLY  NO

SQL> alter system set max_pdbs=1;

System altered.

SQL> show parameter max_pdbs

NAME                     TYPE     VALUE
------------------------ -------- -----
max_pdbs                 integer      1

SQL> create pluggable database pdb1 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb1');

Pluggable database created.

SQL> create pluggable database pdb2 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb2');
create pluggable database pdb2 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb2')
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

SQL> drop pluggable database pdb1 including datafiles;

Pluggable database dropped.

SQL> create pluggable database pdb2 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb2');

Pluggable database created.

Looks like a solid solution to me.
But please see also this blog post by Oracle ACE Director Franck Pachot about an issue with the parameter: http://blog.dbi-services.com/oracle-12cr2-max_pdbs/ --Mike


Oracle Database 12.2

Obsolete SPFILE Parameters in Oracle Database 12.2.0.1

This is the 2nd posting in my series about init.ora/SPFILE parameters in Oracle Database 12.2.0.1:

New SPFILE Parameters in Oracle Database 12.2.0.1
Obsolete SPFILE Parameters in Oracle Database 12.2.0.1
Deprecated SPFILE Parameters in Oracle Database 12.2.0.1

Find the list of the 159 obsolete parameters here (and of course in V$OBSOLETE_PARAMETERS):

_app_ctx_vers, _average_dirties_half_life, _aw_row_source_enabled, _compatible_no_recovery, _data_transfer_cache_size, _db_no_mount_lock, _dlm_send_timeout, _dtree_bintest_id, _dtree_compressbmp_enabled, _evolve_plan_baseline_report_level, _fast_start_instance_recovery_target, _fic_max_length, _fic_outofmem_candidates, _idl_conventional_index_maintenance, _kgl_latch_count, _kks_free_cursor_stat_pct, _kspptbl_mem_usage, _lm_direct_sends, _lm_multiple_receivers, _lm_rcv_buffer_size, _lm_statistics, _log_archive_buffer_size, _log_io_size, _max_log_write_io_parallelism, _module_action_old_length, _optimizer_adaptive_plans, _optimizer_choose_permutation, _oracle_trace_events, _oracle_trace_facility_version, _plan_verify_local_time_limit, _plsql_conditional_compilation, _px_async_getgranule, _px_slaves_share_cursors, _seq_process_cache_const, _spr_use_hash_table, _sqlexec_progression_cost, _use_hidden_partitions, _very_large_partitioned_table, allow_partial_sn_results, always_anti_join, always_semi_join, arch_io_slaves, b_tree_bitmap_plans, backup_disk_io_slaves, cache_size_threshold, cell_partition_large_extents, cleanup_rollback_entries, close_cached_open_cursors, complex_view_merging, db_block_checkpoint_batch, db_block_lru_extended_statistics, db_block_lru_latches, db_block_lru_statistics, db_block_max_dirty_target, db_file_simultaneous_writes, dblink_encrypt_login, ddl_wait_for_locks, delayed_logging_block_cleanouts, discrete_transactions_enabled, distributed_recovery_connection_hold_time, distributed_transactions, drs_start, enqueue_resources, exclude_seed_cdb_view, fast_full_scan_enabled, freeze_DB_for_fast_instance_recovery, gc_defer_time, gc_files_to_locks, gc_latches, gc_lck_procs, gc_releasable_locks, gc_rollback_locks, hash_join_enabled, hash_multiblock_io_count, instance_nodeset, job_queue_interval, job_queue_keep_connections, large_pool_min_alloc, lgwr_io_slaves, lm_locks, lm_procs, lm_ress, lock_sga_areas, log_block_checksum, log_files, log_parallelism, log_simultaneous_copies, log_small_entry_max_size, logmnr_max_persistent_sessions, max_commit_propagation_delay, max_rollback_segments, max_transaction_branches, mts_circuits, mts_dispatchers, mts_listener_address, mts_max_dispatchers, mts_max_servers, mts_multiple_listeners, mts_servers, mts_service, mts_sessions, ogms_home, ops_admin_group, ops_interconnects, optimizer_adaptive_features, optimizer_max_permutations, optimizer_percent_parallel, optimizer_search_limit, oracle_trace_collection_name, oracle_trace_collection_path, oracle_trace_collection_size, oracle_trace_enable, oracle_trace_facility_name, oracle_trace_facility_path, parallel_automatic_tuning, parallel_broadcast_enabled, parallel_default_max_instances, parallel_degree_level, parallel_io_cap_enabled, parallel_min_message_pool, parallel_server, parallel_server_idle_time, parallel_server_instances, parallel_transaction_resource_timeout, partition_view_enabled, plsql_compiler_flags, plsql_native_c_compiler, plsql_native_library_dir, plsql_native_library_subdir_count, plsql_native_linker, plsql_native_make_file_name, plsql_native_make_utility, push_join_predicate, remote_archive_enable, row_cache_cursors, row_locking, sequence_cache_entries, sequence_cache_hash_buckets, serializable, shared_pool_reserved_min_alloc, snapshot_refresh_interval, snapshot_refresh_keep_connections, snapshot_refresh_processes, sort_direct_writes, sort_multiblock_read_count, sort_read_fac, sort_spacemap_size, sort_write_buffer_size, sort_write_buffers, spin_count, sql_version, standby_preserves_names, temporary_table_locks, text_enable, transaction_auditing, undo_suppress_errors, use_indirect_data_buffers, use_ism

--Mike
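You don't have to rely on this blog post for the list - every instance carries it. A small sketch to generate it yourself on a 12.2 instance; V$OBSOLETE_PARAMETERS also flags whether an obsolete parameter is still specified in your spfile:

```sql
SELECT name, isspecified
FROM   v$obsolete_parameters
ORDER  BY name;
```

Any row with ISSPECIFIED = TRUE is a leftover setting worth cleaning out of your spfile before the upgrade.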


Oracle Database 12.2

New SPFILE parameters in Oracle Database 12.2.0.1

Oracle Database 12.2.0.1 is available now in the Oracle Cloud. And this is the list of 46 new init.ora/spfile parameters compared to Oracle Database 12.1.0.2 - including the links (where possible) to the Oracle Database 12.2 Reference documentation.

allow_global_dblinks - LDAP lookup for DBLINKS
allow_group_access_to_sga - Allow read access for SGA to users of Oracle owner group
approx_for_aggregation - Replace exact aggregation with approximate aggregation
approx_for_count_distinct - Replace count distinct with approx_count_distinct
approx_for_percentile - Replace percentile_* with approx_percentile
asm_io_processes - number of I/O processes per domain in the ASM IOSERVER instance
autotask_max_active_pdbs - Setting for Autotask Maximum Maintenance PDBs
awr_pdb_autoflush_enabled - Enable/Disable AWR automatic PDB flushing
cdb_cluster - [undocumented] if TRUE startup in CDB Cluster mode
cdb_cluster_name - [undocumented] CDB Cluster name
clonedb_dir - CloneDB Directory
containers_parallel_degree - Parallel degree for a CONTAINERS() query
cursor_invalidation - default for DDL cursor invalidation semantics
data_guard_sync_latency - Data Guard SYNC latency
data_transfer_cache_size - Size of data transfer cache
default_sharing - Default sharing clause
disable_pdb_feature - [undocumented] Disable features
enable_automatic_maintenance_pdb - Enable/Disable Automated Maintenance for Non-Root PDB
enable_dnfs_dispatcher - Enable DNFS Dispatcher
enabled_PDBs_on_standby - List of Enabled PDB patterns
encrypt_new_tablespaces - whether to encrypt newly created tablespaces
exafusion_enabled - Enable Exafusion
external_keystore_credential_location - external keystore credential location
inmemory_adg_enabled - Enable IMC support on ADG
inmemory_expressions_usage - Controls which In-Memory Expressions are populated in-memory
inmemory_virtual_columns - Controls which user-defined virtual columns are stored in-memory
instance_abort_delay_time - time to delay an internal initiated abort (in seconds)
instance_mode - indicates whether the instance is read-only or read-write or read-mostly
long_module_action - Use longer module and action
max_datapump_jobs_per_pdb - maximum number of concurrent Data Pump Jobs per PDB
max_idle_time - maximum session idle time in minutes
max_iops - MAX IO per second
max_mbps - MAX MB per second
max_pdbs - max number of pdbs allowed in CDB or Application ROOT
ofs_threads - Number of OFS threads
one_step_plugin_for_pdb_with_tde - [undocumented] Facilitate one-step plugin for PDB with TDE encrypted data
optimizer_adaptive_plans - controls all types of adaptive plans
optimizer_adaptive_statistics - controls all types of adaptive statistics
outbound_dblink_protocols - Outbound DBLINK Protocols allowed
remote_recovery_file_dest - default remote database recovery file location for refresh/relocate
resource_manage_goldengate - goldengate resource manager enabled
sga_min_size - Minimum, guaranteed size of PDB's SGA
shrd_dupl_table_refresh_rate - duplicated table refresh rate (in seconds)
standby_db_preserve_states - Preserve state cross standby role transition
target_pdbs - [undocumented] Parameter is a hint to adjust certain attributes of the CDB
uniform_log_timestamp_format - use uniform timestamp formats vs pre-12.2 formats

--Mike
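The descriptions above come straight from the parameter metadata, so you can pull them from the instance yourself. A sketch - run on a 12.2 instance; the three parameter names are just examples picked from the list above:

```sql
SELECT name, description
FROM   v$parameter
WHERE  name IN ('max_pdbs', 'clonedb_dir', 'default_sharing')
ORDER  BY name;
```

Dropping the WHERE clause lists all documented parameters of the release, which is how you can diff two releases for changes.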


Data Pump

Full Transportable Export/Import - PAR File Examples

Roy and I blogged about Full Transportable Export/Import in the past:

Full Transportable Export/Import White Paper
Full Transportable Export/Import - Things to Know
Full Transportable Export/Import - Migrating an 11.2.0.4 database to Oracle Database 12c - into the Oracle Cloud
Exclude Stats differently for TTS and FTEX

If you haven't heard of this database feature: it allows you to migrate a full database by using Transportable Tablespaces as a base technology, but letting Data Pump do all the manual steps for you in a one-command migration. And if needed, it works with RMAN Incremental Backups as well in order to decrease the downtime for copying larger databases. It works cross-platform and cross-Endianness - and with a source version as early as Oracle Database 11.2.0.3 - but only towards Oracle Database 12c. Often there are questions regarding the correct syntax for the parameter files. As I have to look it up from time to time as well, I thought I'd share some example par files, as there are two different approaches: with or without NETWORK_LINK.

With expdp/impdp - without NETWORK_LINK

expdp.par:

directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_exp.log
transportable=always
full=y
metrics=y
logtime=all
exclude=table_statistics,index_statistics

impdp.par:

directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_imp.log
metrics=y
logtime=all
TRANSPORT_DATAFILES='/oradata/wspmst/data/WSPMSTCON/3ABC7F153DB55A4DE0539ED0C80A102D/datafile/advsecmon.331.880913493'
TRANSPORT_DATAFILES='/oradata/wspmst/data/WSPMSTCON/3ABC7F153DB55A4DE0539ED0C80A102D/datafile/advsecmon.555.880923697'
...

Only impdp (no expdp) - using NETWORK_LINK

Two examples here - one dealing with an Oracle 11.2 source database, the other moving directly from an Oracle 12c non-CDB into a pluggable database (PDB).

impdp1.par:

impdp mike/passwd@v121
NETWORK_LINK=v112
FULL=Y
TRANSPORTABLE=ALWAYS
VERSION=12
METRICS=Y
EXCLUDE=table_statistics,index_statistics
LOGTIME=ALL
LOGFILE=ftex_dir:v112fullimp.log
TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts1.dbf'
TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts2.dbf'

impdp2.par:

impdp system/oracle@pdb2
network_link=sourcedb
full=y
transportable=always
metrics=y
exclude=table_statistics,index_statistics
directory=mydir
logfile=pdb2.log
transport_datafiles='/u02/oradata/CDB2/pdb2/users01.dbf'

Please note: The VERSION=12 option must be used only when an Oracle 11.2 database is the source, as in this case Data Pump on the 11.2 side must be triggered to access this 12c feature and create an export in a form the 12c Data Pump can understand. There's no expdp necessary when using Data Pump over a database link (NETWORK_LINK). But in this case you will need to give impdp the keywords FULL=Y and TRANSPORTABLE=ALWAYS, as the export portion of Data Pump on the source side will be triggered under the covers.

--Mike
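To try the syntax, you can simply write the expdp.par file shown above to disk and hand it to expdp. A sketch - the parameter values come from the post's example, while the /tmp location and the expdp invocation at the end are my assumptions:

```shell
# Write the example expdp parameter file to disk (values from the post).
cat > /tmp/wspmst_ftex_exp.par <<'EOF'
directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_exp.log
transportable=always
full=y
metrics=y
logtime=all
exclude=table_statistics,index_statistics
EOF

# Show what will be passed to Data Pump.
cat /tmp/wspmst_ftex_exp.par
```

On the source side you would then run something like `expdp system parfile=/tmp/wspmst_ftex_exp.par` (user and the `oradump` directory object are assumptions from the example above).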


Patch Recommendation

October 2016 Proactive BP got replaced

Just received a message from Oracle Support early this morning, as I had installed the Proactive Bundle Patch from October 2016 into my Oracle Database 12.1.0.2 environment:

"Dear Oracle Customer,
You are receiving this email because our records indicate you downloaded the following patch:
Patch number: 24448103
Release: DB Proactive Bundle 12.1.0.2.161018
Platform: Linux x86-64
This patch has been replaced and is now available for download. Please review section 1.1 of the following My Oracle Support note for further technical details and instructions:
Note: 2171506.1 - Oracle Database Proactive Patch 12.1.0.2.161018 Known Issues
Issue found: SCAN Listener or local listener fails to start"

The symptom of a SCAN listener resource failing to start happens in environments that have been upgraded from 11.2 to 12.1. The Oct 2016 Proactive Bundle Patch 12.1.0.2.161018, Patch 24448103, has been uploaded again with a fix for this issue as of 30-Oct-2016 8am PST. MOS Note: 2166451.1 - "SCAN Listener or local listener fails to start after applying Patch 23273629 - Oracle Grid Infrastructure Patch Set Update 12.1.0.2.160719 (Jul2016) or Oct DB BP patch 24448103" has more information.

It didn't affect me, as I don't have a SCAN listener in my environments. But you should be aware of this. --Mike


Single-/Multitenant

Gather Fixed Objects Stats in PDBs as well?

Yesterday I received an interesting customer question: "Do we have to gather fixed objects stats inside a PDB as well, or only in the CDB$ROOT?" The customer told me he couldn't find anything in the documentation. Neither could I. And this morning I also checked the usual suspect, "Best Practices for Gathering Statistics" - but it does not say anything about PDBs and CDBs. Therefore I did a short test:

Create a PDB in my Oracle 12.1.0.2 CDB:
SQL> create pluggable database PDB3 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB1/pdbseed', '/u02/oradata/CDB1/pdb3');

Open the PDB:
SQL> alter pluggable database PDB3 open;

Check how many X$ tables have statistics:
SQL> select count(*) from sys.dba_tab_statistics where owner='SYS' and table_name like 'X$%' and last_analyzed is not null;

Result in CDB$ROOT: 0
Result in PDB3: 0

Gather fixed objects stats only in PDB3:
SQL> exec dbms_stats.gather_fixed_objects_stats;

Check the results with the same query:

Result in CDB$ROOT: 0
Result in PDB3: 1025

Gather fixed objects stats in CDB$ROOT:
SQL> exec dbms_stats.gather_fixed_objects_stats;

Check the results again:

Result in CDB$ROOT: 1027
Result in PDB3: 1025

Conclusion

Yes, you'll have to gather fixed objects stats in PDBs independently from the CDB$ROOT. But generally in Oracle Database 12c the Automatic Statistics Gathering job will take care of fixed objects stats as well (see here). In a Multitenant environment you may just have to take care to stagger your default maintenance windows, as otherwise this will happen in all PDBs at the same time, generating plenty of noise on your system.

"Oracle Database automatically gathers fixed object statistics as part of automated statistics gathering if they have not been previously collected (see "Controlling Automatic Optimizer Statistics Collection")."

--Mike
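To stagger the maintenance windows per PDB, you could shift the default windows inside each PDB with DBMS_SCHEDULER. A sketch - MONDAY_WINDOW is one of the default window names, while the 02:00 start hour and the PDB name are hypothetical choices of mine:

```sql
-- Switch into the PDB whose window should be shifted (name is an example).
ALTER SESSION SET container = PDB3;

-- Move this PDB's Monday maintenance window to a 02:00 start,
-- so it does not collide with the other PDBs' windows.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'repeat_interval',
    value     => 'freq=daily;byday=MON;byhour=2;byminute=0;bysecond=0');
END;
/
```

Repeat with different hours per PDB (and for the other weekday windows) to spread the statistics-gathering load across the night.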


Patch Recommendation

October 2016 PSU and BP - Database Patching?

What will you get when you download the most recent Oracle Database PSU or BP from October 2016? MOS Note: 1683799.1 - 12.1.0.2 Patch Set - Availability and Known Issues is not entirely clear. Therefore lets shed some light ... The Matrix This matrix in MOS Note: 1683799.1 tells you about the availability of PSUs and BPs for a regular database installation (non-RAC, non-Exadata). But it doesn't clearly tell you what's included - and the names being used aren't very revealing either. Non Exadata Non RAC Document Description Rolling RAC Patch Download Note:24448103.8 12.1.0.2.161018 Database Proactive Bundle Patch (Oct 2016) Yes Patch:24448103 Note:24436306.8 Combo of 12.1.0.2.161018 OJVM PSU and 12.1.0.2.161018 DBBP (Oct 2016) Part Patch:24436306 Note:24433133.8 Combo of 12.1.0.2.161018 OJVM PSU and 12.1.0.2.161018 DB PSU (Oct 2016) Part Patch:24433133 Note:24315824.8 Oracle JavaVM Component 12.1.0.2.161018 Database PSU (Oct 2016) (OJVM PSU) No Patch:24315824 Note:24006101.8 12.1.0.2.161018 (Oct 2016) Database Patch Set Update (DB PSU) Yes Patch:24006101 My Matrix I translate this into: Non Exadata Non RAC Document Description Rolling RAC Patch Download DB PSU DB BP GI PSU OJVM Note:24448103.8 12.1.0.2.161018 Database Proactive Bundle Patch (Oct 2016) Yes Patch:24448103 X X Note:24436306.8 Combo of 12.1.0.2.161018 OJVM PSU and 12.1.0.2.161018 DBBP (Oct 2016) Part Patch:24436306 X X X Note:24433133.8 Combo of 12.1.0.2.161018 OJVM PSU and 12.1.0.2.161018 DB PSU (Oct 2016) Part Patch:24433133 X X Note:24315824.8 Oracle JavaVM Component 12.1.0.2.161018 Database PSU (Oct 2016) (OJVM PSU) No Patch:24315824 X Note:24006101.8 12.1.0.2.161018 (Oct 2016) Database Patch Set Update (DB PSU) Yes Patch:24006101 X How to apply a Proactive Bundle Patch? If you've never done it before, applying a Proactive Bundle Patch to a database-only installation is not very complicated. 
Please see my own step-by-step instructions here (but don't forget to check the current readme as well, please!).

More Information:
- General information: Critical Patch Update - October 2016 Risk Matrix for Oracle Database Server, versions 11.2.0.4 and 12.1.0.2
- Patch Availability: MOS Note: 2171485.1
- Download the Database PSU or BP: MOS Note: 1683799.1 - 12.1.0.2 Patch Set - Availability and Known Issues
- MOS Note: 1454618.1 - Quick Reference to Patch Numbers for Database PSU, SPU (CPU), Bundle Patches and Patch Sets
- Example of applying the April 2016 Proactive Bundle Patch: https://blogs.oracle.com/UPGRADE/entry/oracle_database_bp_april16_applied

--Mike
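As a footnote, the "My Matrix" translation above can be captured as a tiny lookup table. The patch numbers are the ones from MOS Note: 1683799.1; the data structure and the helper `patches_containing` are purely illustrative, not an Oracle tool - just a sketch of which download contains which components.

```python
# Hypothetical lookup table summarizing "My Matrix" above:
# which components each October 2016 download actually contains.
PATCH_CONTENTS = {
    24448103: {"name": "Database Proactive Bundle Patch", "contains": {"DB BP", "GI PSU"}},
    24436306: {"name": "Combo OJVM PSU + DBBP",           "contains": {"DB BP", "GI PSU", "OJVM"}},
    24433133: {"name": "Combo OJVM PSU + DB PSU",         "contains": {"DB PSU", "OJVM"}},
    24315824: {"name": "OJVM PSU",                        "contains": {"OJVM"}},
    24006101: {"name": "Database PSU",                    "contains": {"DB PSU"}},
}

def patches_containing(component):
    """Return the patch numbers whose download includes the given component."""
    return sorted(p for p, info in PATCH_CONTENTS.items()
                  if component in info["contains"])

# Which downloads bring the OJVM PSU along?
print(patches_containing("OJVM"))  # [24315824, 24433133, 24436306]
```

So for example, if you need OJVM fixes on top of a Proactive Bundle Patch, the table shows only the OJVM PSU or the OJVM+DBBP combo will get you there.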


Patch Recommendation

_rowsets_enabled - Apply patch and use the default

A while back I blogged about issues with "rowsets", a new Oracle 12c feature which unfortunately had two known wrong result (WQR) bugs:
- Switch off "_rowsets_enabled" in Oracle Database 12c (Nov 10, 2015)
- UPDATE: _rowsets_enabled in Oracle Database 12c (Nov 16, 2015, extended with updates on Dec 2, 2015 and Sep 14, 2016)

What does "rowsets" actually mean?

I'd like to thank Sankar, our development manager, for providing this explanation, which sheds some light on how important this feature actually is:

"Rowsets is a SQL execution performance enhancement introduced in Oracle RDBMS release 12.1 and further extended in a future release of the Oracle Database. Prior to 12.1, data processing in the SQL layer was done on a row-by-row basis. That is, each row source in the row source tree performing a specific SQL operation (scan, join, aggregation, window function, connect by etc.) consumed and produced one row at a time. Starting in 12.1, we introduced the concept of rowsets, where many row sources were enhanced to consume and produce rows in batches called rowsets. It is an internal performance optimization project. That is why there isn't much documentation. This is a fundamental change in the SQL layer row processing. The benefits of rowsets are not only instruction cache locality, but it also enables other performance optimizations such as vectorization and prefetch in row sources and expression evaluation."

Which issues did we see?

- Bug 22173980 - WRONG RESULTS WHEN "_ROWSETS_ENABLED" = TRUE: a hash join operation receives rowsets from its right input but then produces one row at a time as output, resulting in a WQR.
- Bug 21608238 - Wrong results with rowsets (enabled by default): this issue was found with more than 255 columns in a table, but there may be other ways to hit it.

Which patches should you apply?
- Patches for bug 22173980 (WRONG RESULTS (NUMBER OF ROWS) WHEN "_ROWSETS_ENABLED" = TRUE): included in PSUs/BPs since April 2016; several one-off patches are available in addition for other releases.
- Patches for bug 21608238 (WRONG RESULTS WITH ROWSETS): a one-off patch is available on Linux for several Oracle 12.1.0.2 patch levels, and on AIX on top of BP160419 only. You may have to request a one-off for your platform until the fix gets included in a Proactive BP.

Summary

My recommendation would be to make sure that you are on the April/July 2016 Bundle Patch or PSU (or the one coming in October 2016) to cure the first bug mentioned above. The second issue (21608238) may need a one-off on top of your Proactive Bundle Patch or PSU, so you may have to request it via an SR if it is not available on your platform. Nevertheless, the feature by itself is really important and shouldn't be turned off if possible, so that you benefit from the positive performance impact. It may be extended in the next release of the database.

--Mike
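The batching idea behind rowsets can be illustrated outside the database. The sketch below is NOT Oracle internals - just a toy Python model showing how handing an operator batches of rows, instead of single rows, cuts the number of operator invocations while producing the same result:

```python
# Toy illustration (not Oracle internals): a "filter" operator is invoked
# once per row in the classic model, but once per batch in a rowset-style
# model. Same output, far fewer operator calls.
calls = {"row_at_a_time": 0, "rowsets": 0}

rows = list(range(10_000))

def filter_row(row):                  # classic model: one call per row
    calls["row_at_a_time"] += 1
    return row % 2 == 0

def filter_batch(batch):              # rowset-style: one call per batch
    calls["rowsets"] += 1
    return [r for r in batch if r % 2 == 0]

out1 = [r for r in rows if filter_row(r)]

batch_size = 256
out2 = []
for i in range(0, len(rows), batch_size):
    out2.extend(filter_batch(rows[i:i + batch_size]))

assert out1 == out2                   # identical results
print(calls)                          # 10000 calls vs. 40 calls
```

The real optimization is of course much deeper (instruction cache locality, vectorization, prefetch), but the call-count reduction is the intuition behind why the feature matters for performance.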


Workshop

UKOUG: SIG (RAC, Cloud, Database, ...) - 12 Oct 2016

Wow ... time flies ... I think it was February or March this year when I got invited to the UKOUG SIG for RAC, Database, Cloud, and Infrastructure in London on Oct 12, 2016. Just a week ago I looked into my calendar and realized: book a flight. Now! We have to arrange travel at least 7 days before departure, as otherwise extra approvals may be required. Time flies ... (The room number in the picture above was the one of my hotel room in Baku earlier this year in May ;-))

And of course on Wednesday I will talk about (and demo) Oracle Database 12.2.0.1. Live and uncensored. On stage. And I hope to meet you there:

- Location: Amsterdam Room at Tenter House, 45 Moorfields, Moorgate, London
- Date and Time: October 12, 2016, 14:00-20:00h
- Agenda: see here
- Registration: Link

Besides my talks "Database Upgrade or Migration - And Everything is Slow?" and "What's New in Database Upgrade: Live and Uncensored!", and news from the user group leaders, there are three other talks I'm really looking forward to:

- David Hickson from BT about Setting up an Oracle 12c Migration Factory
- Julian Dyke about IO Revisited
- Jeff and Tania from Oracle about An Introduction to Oracle Management Cloud - Next Generation Unified Systems Management

Looking forward to seeing you on Wednesday this week!

--Mike


Patch Recommendation

RMAN Recovery Catalog with July PSU: How to resolve RASCHEMAVER and BASCHEMAVER schema mismatch

Related blog posts:
- RMAN Catalog Upgrade fails - ORA-02296 - error creating modify_ts_pdbinc_key_not_null (April 19, 2016)
- RMAN Catalog requires Enterprise Edition (EE) since Oracle Database 12.1.0.2 (April 22, 2016)
- RMAN Catalog Upgrade to Oracle 12.1.0.2 (August 1, 2014)

Thanks to Ah Huat Tan from Amway IT Services for keeping me updated! As I see more people getting hit by this issue, I consider it worth writing about.

Problem and Analysis

You apply the July 2016 PSU or BP. According to the readme, you are required to upgrade your RMAN catalog afterwards. The readme says:

If you are using the Oracle Recovery Manager, the catalog needs to be upgraded. Enter the following command to upgrade it:
$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;

I think a second "UPGRADE CATALOG" command is necessary to confirm, but I may be wrong. Anyhow, if you do this after applying the July 2016 PSU or BP, you may see the following error:

RMAN-06435: recovery catalog owner is CATALOG5
RMAN-06442: enter UPGRADE CATALOG command again to confirm catalog upgrade
RMAN-06444: error creating dbms_rcvcat package body
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06443: error upgrading recovery catalog

When you now try to recompile the package manually, you'll get:

SYS@CATDB> alter package catalog5.DBMS_RCVCAT compile body;

Warning: Package Body altered with compilation errors.
SYS@CATDB> show error
Errors for PACKAGE BODY CATALOG5.DBMS_RCVCAT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
11328/12 PL/SQL: Item ignored
11328/12 PLS-00201: identifier 'BASCHEMAVER.VERSION' must be declared
11340/6  PL/SQL: SQL Statement ignored
11342/13 PL/SQL: ORA-00942: table or view does not exist
11345/9  PL/SQL: Statement ignored
11345/9  PLS-00320: the declaration of the type of this expression is incomplete or malformed
11348/3  PL/SQL: Statement ignored
11348/7  PLS-00320: the declaration of the type of this expression is incomplete or malformed
11353/3  PL/SQL: Statement ignored
11353/54 PLS-00320: the declaration of the type of this expression is incomplete or malformed

Explanation

This was caused by a wrong library shipped with the July PSU and BP - a library meant for ZDLRA only.

Solution

Please apply the patch for Bug 24461826 - RMAN-06443: ERROR UPGRADING RECOVERY CATALOG AFTER APPLYING PATCH. The bug is non-public and therefore not visible. Sorry for the inconvenience.

--Mike


OOW

OOW 2016: Slides and HOL Download

Sorry for any delay, but Roy and I promised to make our OOW 2016 slides available right after the talks, and some people were already asking where they are. Internal people will find them via the usual sources on database.us. But customers will have to wait until Oracle Database 12.2 is available in the cloud, as announced at OOW during the keynotes.

What you can get right now:

- How to Ensure Performance Stability when Upgrading to Oracle Database 12c, our Thursday OOW16 talk. I had to mask the Oracle 12.2 information, but the presentation should be very useful without it anyhow, as it is mostly about the methodology and generally available features. Thanks also to Nigel Bayliss and Mike Hallas for the reviews and input.
- Upgrade to Oracle Database 12c (12.1.0.2), which is our OOW15 talk. I promise that as soon as 12.2 is available I will publish a video showing you what we did live and uncensored for the upgrade to 12.2. In the meantime you can watch Roy's real-time video about the migration piece with FTEX for migrating to the DBaaS Cloud. And of course I will also upload the slides from the OOW16 talk as soon as possible.
- Data Pump Best Practice will be available at the same time. It contains too much 12.2 information, so masking it would lead to a not-very-useful presentation. Please be patient.
- The Hands-On Lab will be available on OTN at around the same time as the presentations, maybe with a slight delay. For those who would like to play with it already, please take the 12.1 lab environment and the 12.1 instructions. The only real difference is that you will upgrade to 12.1 instead of 12.2 - and of course you can't experiment with the new Multitenant features.

Thanks for your patience!

--Mike


Data Pump

Data Pump's amazingly useful METRICS=Y and LOGTIME=ALL parameters

Now that I am back from OpenWorld, I will hijack the upgrade blog while Mike is traveling. :-) Thank you to everybody who came to our presentations or stopped at the demo booth to chat last week. We had a great many conversations, and we always learn from talking to customers!

One of the common questions about Data Pump came in the form, "I have a Data Pump job that used to run in X minutes, but now takes <multiple of X> minutes. Can you tell me what might be happening?" Of course with that much information we can do very little. This is true even if we know that you moved from one version to another, or from one operating system to another. What we really need is some data to start with. That is where the METRICS and LOGTIME parameters are very useful.

Let's start by looking at a fragment of a logfile. This is from a metadata-only export of an E-Business Suite database on our small test system. If I told you that this particular export took 2h 20m to complete, how would you know which parts took a long time and which completed faster? Here is part of the default logfile:

Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/SCHEMA/XMLSCHEMA/XMLSCHEMA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PROCACT_INSTANCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT

The default logfile gives you an accounting of what was done in which order, but precious little in the way of performance data. Now let's see what happens when we specify METRICS=Y for that same export.
The METRICS parameter has been around for a long time, and has been documented since Oracle Database 11g Release 2 (11.2). It has the very useful effect of adding information to the logfile that tells you (and us) how long it took to export a particular type of database object:

Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
Completed 1 MARKER objects in 88 seconds
Processing object type DATABASE_EXPORT/SCHEMA/XMLSCHEMA/XMLSCHEMA
Completed 3 XMLSCHEMA objects in 1 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
Completed 64 TABLE objects in 98 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
Completed 10 TABLE objects in 121 seconds
Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
Completed 1 MARKER objects in 20 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PROCACT_INSTANCE
Completed 48 PROCACT_INSTANCE objects in 2 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Completed 33263 TABLE objects in 343 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Completed 358621 OBJECT_GRANT objects in 614 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
Completed 102 OBJECT_GRANT objects in 7 seconds

Here we get a sense of what took time in the processing. But there are cases where the rounding error of reporting in full seconds is too large - particularly when you have a job containing thousands of small tables. Let's get more granular measurements by specifying the LOGTIME parameter, which was introduced in Oracle Database 12c.
Here is the logfile for that same job, but this time specifying LOGTIME=ALL in addition to METRICS=Y:

18-SEP-16 11:05:43.156: Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
18-SEP-16 11:05:44.815: Completed 1 MARKER objects in 88 seconds
18-SEP-16 11:05:44.818: Processing object type DATABASE_EXPORT/SCHEMA/XMLSCHEMA/XMLSCHEMA
18-SEP-16 11:05:46.755: Completed 3 XMLSCHEMA objects in 1 seconds
18-SEP-16 11:05:46.768: Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
18-SEP-16 11:07:44.876: Completed 64 TABLE objects in 98 seconds
18-SEP-16 11:07:44.879: Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
18-SEP-16 11:09:43.716: Completed 10 TABLE objects in 121 seconds
18-SEP-16 11:09:43.717: Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
18-SEP-16 11:09:45.477: Completed 1 MARKER objects in 20 seconds
18-SEP-16 11:09:45.478: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PROCACT_INSTANCE
18-SEP-16 11:09:49.218: Completed 48 PROCACT_INSTANCE objects in 2 seconds
18-SEP-16 11:09:49.219: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
18-SEP-16 11:15:35.060: Completed 33263 TABLE objects in 343 seconds
18-SEP-16 11:15:35.061: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
18-SEP-16 11:25:48.739: Completed 358621 OBJECT_GRANT objects in 614 seconds
18-SEP-16 11:25:48.741: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
18-SEP-16 11:25:50.554: Completed 102 OBJECT_GRANT objects in 7 seconds

As you can see, the LOGTIME parameter adds a timestamp down to the millisecond for every line in your logfile. This can be helpful in diagnosing issues with I/O, where the difference between a table taking 0.123 seconds and 0.059 seconds to export might seem small ... unless you have 50,000 or so tables in your database.
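Once METRICS=Y is in place, the logfile also becomes machine-readable enough to rank object types by elapsed time. The following is an illustrative Python sketch (not an official Data Pump utility) that parses the "Completed ... objects in ... seconds" lines; the embedded sample is taken from the log excerpt above:

```python
import re

# Rank object types by total elapsed seconds from a METRICS=Y logfile.
# Sample lines taken from the log excerpt above; parsing is illustrative.
LOG = """\
Completed 1 MARKER objects in 88 seconds
Completed 64 TABLE objects in 98 seconds
Completed 33263 TABLE objects in 343 seconds
Completed 358621 OBJECT_GRANT objects in 614 seconds
Completed 102 OBJECT_GRANT objects in 7 seconds
"""

totals = {}
for count, otype, secs in re.findall(
        r"Completed (\d+) (\w+) objects in (\d+) seconds", LOG):
    totals[otype] = totals.get(otype, 0) + int(secs)

# Print the most expensive object types first.
for otype, secs in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{otype:15s} {secs:5d} s")
```

Running this against a real logfile (read with `open(...).read()` instead of the embedded sample) shows at a glance where the export time went - usually the first thing Support will ask about.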
So, next time you open an SR or talk to us about Data Pump performance, we will give you a big (virtual) hug if you specify METRICS=Y and LOGTIME=ALL for your Data Pump jobs. Even if we eventually need to ask for tracing, these parameters will help us identify the exact tracing needed and speed up analysis immensely!


OOW

OOW 2016: Where are the slides? And the lab?

OOW16 is not over yet - still one day to go. Or walk: my average walking distance per day so far is almost 11 km, or 13,500 steps, according to my cell phone. We've had one talk so far, our "Upgrade and Migrate to Oracle Database 12.2 - Live and Uncensored" on Monday, and 3 of 4 Hands-On Labs, all fully sold out. Tomorrow we'll have the final lab and the 2nd presentation, "Ensure Performance Stability When Upgrading to Oracle Database 12c", in Moscone South 302 at 9:30am.

Today I realized that the "wonderful" OOW iPhone app does not display this talk, nor does it display any of the customer talks we'd recommend visiting tomorrow. Maybe a wrong result bug in whatever database is running underneath this app (it is run by an external company). Poor ...

Anyhow, these are the talks to see tomorrow:

- Best Practice to Ensure Performance Stability When Upgrading Oracle Database [CON6069] - Thursday, Sep 22, 9:30 a.m. - 10:15 a.m. | Moscone South - 302
- Migrating Oracle Real Application Clusters Databases to Oracle Multitenant [CON2718] - David Hickson, Database Administrator, British Telecom - Thursday, Sep 22, 1:15 p.m. - 2:00 p.m. | Park Central - Franciscan I
- Agile Development, Cost Saving and More with Oracle's Multitenancy Solutions [CON1668] - Alain Fuhrer, Oracle Database Administrator, Mobiliar Versicherungen AG - Thursday, Sep 22, 2:30 p.m. - 3:15 p.m. | Park Central - Franciscan I

Well, and you may ask yourself: where are the slides, and where's the Hands-On Lab? We will upload the Hands-On Lab including the instructions as soon as Oracle Database 12.2 is available for download from the usual sources. In the meantime please use the lab for Oracle 12.1, as more or less the only real difference is the new preupgrade.jar tool. I will upload the slides as well - minus the Oracle Database 12.2 content, which will be made available as soon as 12.2 is available for download.

Thanks!
--Mike


OOW

OOW16 - Focus on Upgrade / Migrate / Consolidate

Almost ready to fly out - I just have to pack my suitcase - and looking forward to an exciting Oracle Open World 2016 conference in San Francisco. If you are interested in a selection of really useful presentations, labs, and meetups around upgrade / migration (incl. DMU) / consolidation and the Optimizer in Oracle Database 12c, please see this Focus On document with links to the presentations of people we know, customers we have worked with - and of course our talks as well. Click on the picture above or use this link.

Furthermore, please find two highly recommended customer presentations here: https://blogs.oracle.com/UPGRADE/entry/oracle_open_world_2016_customer

And our own upgrade/migrate talks and labs here: https://blogs.oracle.com/UPGRADE/entry/oracle_open_world_2016_upgrade

And just a note regarding the labs: if you are on the waiting list or didn't get a seat this time, don't feel disappointed. We'll make the lab available publicly as soon as Oracle Database 12.2 is available for download. And if you really, really need a seat in the lab but had no luck during signup, try to be in front of the room 20 minutes before the lab starts. We'll give away the open seats to the waiting line approximately 10 minutes before the lab starts in order to allow people to sit down. This means for all others who have a seat allocated: please try to be there 10 minutes in advance. We had a few intense discussions last year with people showing up too late and realizing their seat had been given to somebody else.

Looking forward to seeing you all at OOW16!

--Mike


Patch Recommendation

The OJVM Patching Saga - and how to solve it - Part III

Related posts on "The OJVM Patching Saga - and how to solve it":
- Part I - The OJVM Basics
- Part II - Important Notes and Information
- Part III - The Mitigation Patch
- Part IV - What you may have missed
- Part V - MOS Note explaining "Conditional Rolling Install"

What is the Mitigation Patch?

In MOS Note: 1929745.1 - Oracle JavaVM Component Database Patches there's a section called "What is the Mitigation Patch". It says:

"For situations where the latest OJVM PSU cannot be installed immediately there is a "Mitigation Patch" that can be used. The "Mitigation Patch" is an interim solution to protect against all currently known (Jul 2015) Oracle JavaVM security vulnerabilities in the database until such time as the OJVM PSU can be installed. It can also be used to protect database versions no longer covered by error correction support."

It is applicable to database homes only, and it makes sense only in databases where OJVM is present (see the previous blog post, Part II - Important Notes and Information). It has no dependency on the PSU or BP, it can be installed RAC rolling, and it's a SQL patch. Finally, it has been reviewed for use from the January 2015 PSU onward. The Mitigation Patch can be downloaded via Patch:19721304.

How to apply the Mitigation Patch

This is the official procedure for using the Mitigation Patch, documented in MOS Note: 1929745.1 - Oracle JavaVM Component Database Patches - although I flipped around some of the steps, as steps 5/6 in my list below are documented AFTER datapatch in the note, which won't work.
1. Only if the patch is not included in your PSU/BP yet: download the Mitigation Patch (Patch:19721304) and follow the readme to unzip it and do the patch prechecks.
2. Shut down the instances and services on the local node only.
3. Apply the Database PSU (or Bundle Patch) but DO NOT RUN the post-install steps.
4. Only if the patch is not included in your PSU/BP yet: apply the Mitigation Patch.
5. Restart all databases on this node.
6. Proceed with the next node(s).
7. Run the post-install steps (for 12c: datapatch - for 11g: @?/sqlpatch/19721304/postinstall.sql).
8. This step is sorely missing in MOS and the patch documentation itself - it is mentioned only in the PSU/BP readme:
   SQL> @?/rdbms/admin/dbmsjdev.sql on all databases running from this patched Oracle home(s)
   SQL> exec DBMS_JAVA_DEV.DISABLE; on all databases running from this patched Oracle home(s)
   Note: This will enable the Mitigation Patch.

At a later point, when you can take a complete outage:
1. Disable public access to the database(s).
2. SQL> exec DBMS_JAVA_DEV.ENABLE; in each database
   Note: This will disable the Mitigation Patch - without this step the OJVM PSU or BP can't be applied.
3. Shut down the database(s).
4. Apply the OJVM PSU/BP patch.
5. Run the OJVM PSU/BP post-install steps on all databases in the patched home.

Final Comments

First of all, let me repeat that this will affect only customers with RAC systems who are used to applying PSUs or BPs in a rolling fashion. The Mitigation Patch is a workaround, not more than that. Second, personally I think the cleanest way is to not have OJVM (the JAVAVM component in DBA_REGISTRY) in your database when you don't need it. This shouldn't devalue OJVM at all. Even though I'm not a Java expert, I know that OJVM has great value for some customers. But based on my internal polls, I think only a smaller number of customers really relies on it. Still, removing OJVM from an existing database can be quite complicated, especially due to the dependencies of other components.
So the best way would be - if you know you won't need OJVM - to not install it when you create a database. It can always be created later quite easily if required. But again, the devil is in the detail: you may need components depending on OJVM, and therefore you may have no choice.

--Mike


Patch Recommendation

The OJVM Patching Saga - and how to solve it - Part II

Related posts on "The OJVM Patching Saga - and how to solve it":
- Part I - The OJVM Basics
- Part II - Important Notes and Information
- Part III - The Mitigation Patch
- Part IV - What you may have missed
- Part V - MOS Note explaining "Conditional Rolling Install"

Important Notes

- MOS Note: 1929745.1 - Oracle JavaVM Component Database Patches
- MOS Note: 1940702.1 - Database JVM Vulnerabilities FAQ
- MOS Note: 2165212.1 - What to do if the Database JAVAVM Component becomes INVALID After installing an OJVM Patch? This note contains a pretty detailed removal procedure for JAVAVM as well - I haven't tried it out this way.

Facts and Things and Questions and Answers

Q: When will I be affected by this issue of OJVM patching not being rolling and not Standby-First applicable?
A: If you have a RAC system and/or if you don't use a physical standby database for patching. If you have a regular stand-alone database, no RAC, and no intention to do Standby-First Patching, you can stop reading now.

Q: Is this a one-time-only thing?
A: No, the OJVM patching is a recurring topic.

Q: Why does the Oracle JavaVM patch incur downtime?
A: When OJVM gets patched we have to (a) update the Oracle executable, and (b) run "CREATE OR REPLACE JAVA SYSTEM", which is a global step that updates classes.bin. The first part can be done on a per-node basis of course, but the second part requires an interruption.

Q: Do I have to patch all my homes?
A: No, only the database homes are affected by the OJVM patch. But please be aware of the JDBC patch for client and GI homes. See MOS Note: 1929745.1 for a matrix and further details.

Q: How do I find out if OJVM is present in my database?
A: Please query:

SELECT version, status FROM dba_registry WHERE comp_id='JAVAVM';

If you get a result, OJVM is present in your database.
Q: The combo patches combine OJVM and PSU together - should I take them?
A: Well ... very experienced ACS engineers told me they'll always take the separate patches, never the combos. If you don't have OJVM anyway, there's no need to take the combo patches. And personally, if I had to patch a critical system, I'd prefer to keep things piece by piece. Just a gut feeling ... So for example, see MOS Note: 1683799.1 - 12.1.0.2 Patch Set - Availability and Known Issues to patch a RAC, non-Exadata system. You'll find this matrix (non-Exadata, Real Application Clusters (RAC)):

- Note:23615334.8 - Combo of 12.1.0.2.160719 OJVM PSU and 12.1.0.2.160719 DBBP (Jul 2016) - Rolling RAC: Part - Patch:23615334
- Note:23615308.8 - Combo of 12.1.0.2.160719 OJVM PSU and 12.1.0.2.160719 GI PSU (Jul 2016) - Rolling RAC: Part - Patch:23615308
- Note:23273686.8 - 12.1.0.2.160719 Database Proactive Bundle Patch (Jul 2016) - Rolling RAC: Yes - Patch:23273686
- Note:23273629.8 - 12.1.0.2.160719 (Jul 2016) Grid Infrastructure Patch Set Update (GI PSU) - Rolling RAC: Yes - Patch:23273629
- Note:23177536.8 - Oracle JavaVM Component 12.1.0.2.160719 Database PSU (Jul 2016) (OJVM PSU) - Rolling RAC: No - Patch:23177536

I'd take the Proactive Bundle Patch (RAC rolling) and, if necessary, the OJVM Component PSU instead. And even though the name does not say it, Patch:23273686 does contain the Database Bundle Patch and the PSU for the Grid Infrastructure (OCW is the acronym for Oracle Cluster Ware). But it does not include the OJVM patch.

Q: If I don't use Oracle JVM, can I remove it?
A: In theory, yes. But it is not as simple as it looks. First of all, it's not that easy to detect whether there's REALLY nobody using OJVM in your database. And second, there are a lot of other components and features depending on JAVAVM, such as Spatial and Graph and Multimedia. I tried to picture this here. And MOS Note: 2165212.1 contains detailed steps as well.
Q: Will OJVM patching be rolling and standby-first again?
A: There may be plans for a future release or patch set of the Oracle Database.

Stay tuned ...
--Mike


Patch Recommendation

The OJVM Patching Saga - and how to solve it - Part I

Related posts on "The OJVM Patching Saga - and how to solve it":
- Part I - The OJVM Basics
- Part II - Important Notes and Information
- Part III - The Mitigation Patch
- Part IV - What you may have missed
- Part V - MOS Note explaining "Conditional Rolling Install"

Who's the Product Manager for Patching?

First of all, I'm neither a patching expert nor the Product Manager for Patching. There's no such role as far as I know, but there are people inside Oracle who have far more clue about this topic. There's a group inside Oracle Development owning it, but I'm not working in this team. Still, I'm interested in patching. Why? Simply because patching usually goes in conjunction with an upgrade or a migration. Our recommendation for years has been this straightforward approach:

1. Download and install the most recent release (in most cases the newest patch set, which is a full release since Oracle 11.2.0.2).
2. Then apply the most recent Bundle Patch (or Patch Set Update) on top.
3. Then apply the important one-off (single) patches on top (see our slide deck for a guided how-to).
4. Then upgrade or migrate.

This way you'll make sure to avoid the most terrible issues already known to Oracle. Of course there's no guarantee that this will avoid all potential issues and pitfalls. But when dealing with Oracle Support later on (or with myself, in case we work together on a reference project), it's much easier for you and us, as we won't waste time diagnosing issues which are already fixed. Ah, and there's my background of 6 years of RDBMS mission-critical support when I started at Oracle almost 20 years ago.

The OJVM Drama or Saga

Well ... procrastination is a terrible but very human habit. The OJVM topic has been on my radar for 6 months now - actually even longer, but more and more people have approached me in the past months asking so many questions about OJVM patching. And I realized that I don't have an answer in all cases. As I said above, I'm not a patching expert.
What is this thing I'm calling the OJVM drama or saga? And what the heck is OJVM?

Very important: if you don't have a RAC (Real Application Clusters) system or intend to do Standby-First Patching (see MOS Note: 1265700.1 - Data Guard Standby-First Patch Apply), you can stop reading here. This "drama" won't affect you at all.

OJVM is the Oracle Java Virtual Machine, which gives you the ability to execute Java code directly in the database. The glossary says: "A standard, Java-compatible environment that runs any pure Java application". If you intend to search for it, please use "Oracle JVM" or "JVM" but not "OJVM", as it is not known in the documentation as OJVM.

The saga actually started with the October 2014 PSU (Patch Set Update). It included a patch for the Oracle JVM which required the database to be restarted. And this meant that the patch was not RAC rolling installable anymore.

PSU October 2014:
- https://blogs.oracle.com/UPGRADE/entry/psu_october_2014 (Oct 15, 2014)
- https://blogs.oracle.com/security/entry/october_2014_critical_patch_update

"Due to the nature of the fixes required, Oracle development was not able to produce a normal RAC-rolling fix for these issues. To help protect customers until they can apply the Oracle JavaVM component Database PSU, which requires downtime, Oracle produced a script that introduces new controls to prevent new Java classes from being deployed or new calls from being made to existing Java classes, while preserving the ability of the database to execute the existing Java stored procedures that customers may rely on."

Older blog posts about OJVM? Yes, of course, I did blog about this already:
- Where it all began - PSU October 2014: https://blogs.oracle.com/UPGRADE/entry/psu_october_2014 (Oct 15, 2014)
- Java in the Database - OJVM non-rolling patches: https://blogs.oracle.com/UPGRADE/entry/jvm_java_in_the_database (Jun 22, 2015)
OJVM - Standby-First Patching: Yes or No?
https://blogs.oracle.com/UPGRADE/entry/ojvm_patch_standby_first_patching (Aug 8, 2016)

But my focus in these posts is to show you potential solutions and share my experience with workarounds. So please stay tuned ;-)

--Mike
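To follow the recommended "install release, then BP/PSU, then one-offs" approach above, it helps to know what is already installed in your Oracle Home. A quick, read-only way to check is the standard OPatch listing (output will of course differ per system):

```shell
# List the patches currently installed in this Oracle Home
$ORACLE_HOME/OPatch/opatch lspatches

# More detail, including the list of bugs fixed by each patch
$ORACLE_HOME/OPatch/opatch lsinventory -details
```

Comparing this output against the most recent PSU/BP note for your release tells you how far behind the home is before you plan the upgrade.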


OOW

Oracle Open World 2016 - Recommended Customer Talks

At OOW 2016 there are talks from customers we worked with or at least whose project plans we were involved in. We highly recommend the talks below.

British Telecom has such an ambitious upgrade and migration project for really MANY databases. I'm looking forward to David's talk. We met at UKOUG TECH15, and I promise it will be a very insightful presentation.

Migrating Oracle Real Application Clusters Databases to Oracle Multitenant [CON2718]
David Hickson, Database Administrator, British Telecom
Thursday, Sep 22, 1:15 p.m. - 2:00 p.m. | Park Central - Franciscan I

Since April 2015, BT has deployed new databases using an Oracle Database 12c pluggable database as the standard. However, it still has a large existing Oracle Real Application Clusters estate running Oracle Database 11.2 or earlier. In this session, learn how the company plans to move all its databases onto Oracle Database 12c using multitenant-based container database/pluggable database configurations. The session also explains why the move to an Oracle Database 12c multitenant architecture makes good business sense and then delves into the technical detail of the processes to move databases with minimal application downtime.

You've never heard of the oldest Swiss insurer, Die Mobiliar? Die Mobiliar has existed since 1826 - but their IT, and especially their database landscape, is more than up to date. Mobiliar really sets the pace for others. They upgraded their entire database landscape of almost 300 databases with a very small but skilled team in less than 16 months - and introduced Oracle In-Memory as well. And Mobiliar will set the pace again for Oracle Database 12.2 and Oracle Multitenant. I'm so looking forward to Alain's talk. We have been working together for two years now. Mobiliar became an official Oracle Database 12c and Oracle In-Memory reference customer. And you'll find more information about the upgrade project in our 12c slide deck.
Agile Development, Cost Saving and More with Oracle's Multitenancy Solutions [CON1668]
Alain Fuhrer, Oracle Database Administrator, Mobiliar Versicherungen AG
Time and location changed: Thursday, Sep 22, 2:30 p.m. - 3:15 p.m. | Park Central - Franciscan I

The insurance industry must now provide a rapidly evolving portfolio of specific offerings. And IT departments can only keep pace if we adopt agile development methods. But how can this be accomplished without taking additional risks? In this session learn about a new era of agile development that keeps costs under strict control. Instead of relying merely on consolidated schemata and achieving savings in IT equipment, high software license utilization, and automation, a consolidation approach to multitenancy can achieve additional goals. Learn how it works with Oracle Database 12.2.

CU soon at OOW16 :-)

--Mike


OOW

Oracle Open World 2016 - Upgrade Talks / HOL

Just a few weeks until the Oracle Open World 2016 conference kicks off: September 18-22, 2016 in San Francisco. For us it is usually the busiest week of the year. But it is also a great experience every year, with so many good discussions and meetings. It's very intense. Since a few hours ago the OOW16 Session Catalog is online with the schedule (but it is still missing the rooms).

This is the list of our talks and HOL sessions:

Upgrade and Migrate to Oracle Database 12c: Live and Uncensored! [CON6068]
Monday, Sep 19, 4:15 p.m. - 5:00 p.m. (Moscone South - 102)
This session introduces the new features in the next generation of Oracle Database. As an added bonus, you will see a live upgrade from Oracle Database 11g to the next generation of Oracle Database and also a live migration using the full transportable export/import feature, both in the span of 45 minutes.

Best Practice to Ensure Performance Stability When Upgrading Oracle Database [CON6069]
Thursday, Sep 22, 9:30 a.m. - 10:15 a.m. (Moscone South - 302)
Nobody likes surprises when it comes to database upgrades and migrations. Because the upgrade or migration itself usually is not complicated, the most important task is ensuring good performance afterward. This presentation discusses a straightforward, step-by-step approach to ensure not only plan stability but also proper testing; secret tweaks; and helpful tips and tricks around database upgrades and migrations. It spans the entire project lifetime, from things to do before approaching such a move to important settings in your new environment during or after the upgrade. Follow best practice and get the performance you expect.

Hands-on Lab: Upgrade and Migrate to Oracle Database 12c Release 2 [HOL7377]
Monday, Sep 19, 2:30 p.m. - 3:30 p.m. (Hotel Nikko - Bay View - 25th Floor)
Tuesday, Sep 20, 11:30 a.m. - 12:30 p.m. (Hotel Nikko - Bay View - 25th Floor)
Wednesday, Sep 21, 11:30 a.m. - 12:30 p.m. (Hotel Nikko - Bay View - 25th Floor)
Thursday, Sep 22, 11:30 a.m. - 12:30 p.m. (Hotel Nikko - Bay View - 25th Floor)

In this hands-on lab, learn about new ways to upgrade, migrate, and consolidate databases in Oracle Database 12c Release 2. Learn to:
• Upgrade a database to Oracle Database 12c Release 2 using the new command-line parallel upgrade script
• Plug that database into an Oracle Database 12c Release 2 container database as a pluggable database (PDB)
• Migrate a second database to a PDB, using the new Oracle Database 12c full transportable export/import feature
The end result is a CDB with two PDBs, which is the first step toward database as a service (DBaaS) or toward building a private cloud.

CU soon :-)

--Mike


Patch Recommendation

OJVM patch: Standby-First patching, yes or no?

I have blogged more than once in the past about the "wonderful" topic of OJVM patching. And please don't beat me - you can be sure that even our execs are aware of this topic. Anyhow, I'd like to summarize a few typical questions sent to me in the past months regarding OJVM patching.

Do I need to apply the OJVM patch every time I apply a PSU or BP?
Unfortunately this is not a one-time-and-then-you-are-set operation. You'll have to do it every time.

How do I find out if OJVM is used in my database?
I tried to nail this down in this blog post: https://blogs.oracle.com/UPGRADE/entry/jvm_java_in_the_database
But the best solution would be to not install OJVM in the first place if you have no intention of using it. It's far simpler to install it later on than to remove it.

Can the OJVM patch be applied rolling or standby-first?
No, unfortunately the OJVM patch is neither rolling installable in a RAC database nor can it be applied with the Standby-First patching method. Please see MOS Note: 1929745.1 - Oracle Recommended Patches -- OJVM PSU Patches.

Is the Mitigation Patch a valid workaround for the OJVM patching downtime?
Well, I leave this up to you. But if you have never heard of the OJVM Mitigation Patch before, please see again MOS Note: 1929745.1 - Oracle Recommended Patches -- OJVM PSU Patches:

For situations where the latest OJVM PSU cannot be installed immediately there is a "Mitigation Patch" that can be used. The "Mitigation Patch" is an interim solution to protect against all currently known (Jul 2015) Oracle JavaVM security vulnerabilities in the database until such time as the OJVM PSU can be installed. It can also be used to protect database versions no longer covered by error correction support.
The "Mitigation Patch":
- is applicable only to database homes, not client nor Grid homes
- is only applicable to databases that have JavaVM installed
- has no dependency on the DB PSU (or equivalent) level
- can be installed in a RAC Rolling manner
- is a SQL-only patch that needs to be installed and activated in each database - hence it can be installed standby-first, but it requires SQL steps to be executed to be effective, which cannot be done on a read-only standby
- affects use of Java and Java development in the database
- has been reviewed for January 2015, April 2015, July 2015, October 2015, January 2016, April 2016 and July 2016 and provides mitigation against all currently known OJVM vulnerabilities
- can be downloaded here: Patch:19721304

Does OJVM patching affect Grid Infrastructure rolling patching?
No, it doesn't. Oracle Grid Infrastructure patching is always rolling and is not affected by the OJVM patch.

I'm pretty sure this does not answer all your questions, but please don't hesitate to open SRs with Oracle Support. I will update this FAQ from time to time.

--Mike
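To complement the "How do I find out if OJVM is used in my database?" question above, here is a minimal sketch of the queries I would start with. It only shows whether the JavaVM component is registered and whether any schemas own Java objects - treat it as a first check, not a definitive answer (see the linked blog post for the full reasoning):

```sql
-- Is the JavaVM component installed in this database?
SELECT comp_id, version, status
  FROM dba_registry
 WHERE comp_id = 'JAVAVM';

-- Do any schemas own Java objects (a hint that OJVM is actually used)?
SELECT owner, COUNT(*) AS java_objects
  FROM dba_objects
 WHERE object_type LIKE 'JAVA%'
 GROUP BY owner
 ORDER BY owner;
```

If the first query returns no row, the component is not installed and the OJVM patch discussion does not apply to that database.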


Workshop

OTN Tour LAD - Panama, Costa Rica, Mexico, Guatemala

OTN Tour Latin America 2016 is ready to take off. And for me it's the first time in 4 years that I won't be able to participate. Which is very sad, as it is such a great thing to meet all these wonderful people - not only the User Groups doing all the organizational work, but also discussing with customers, learning from skilled and knowledgeable co-presenters, the entire travel, the wonderful cities ... I miss this already. Maybe next year ...

Nevertheless, the Database Upgrade team will present - and thanks to the tour organizers for accepting the talks of Byron and Hector, two of my team mates based in Mexico. And of course they will present in Spanish :-)

Our Speakers

Let me introduce my two team mates first.

Hector Vieyra
Software Development Manager at Oracle. An Oracle senior-level DBA with more than 15 years of experience, contributing as a software engineer to the Database Upgrade team. Broad range of experience in architecting, designing, implementing, managing and supporting complex business-critical databases and related server infrastructure at transnational companies like Bank of America Merrill Lynch, General Electric and Cadbury Adams. Ample experience in different Oracle database technologies like Data Guard, RAC and Exadata.

Byron Motta
Principal Member of Technical Staff at Oracle, Database Upgrade Development Team. 15+ years of experience as a DBA and as a software developer. OCP and OCE GI & RAC. Software architect. Consultant for companies such as Banco Santander, Inter-American Development Bank, Bank of America, etc. Upgrade/downgrade and migration specialist; RAC, high availability and Exadata. SME for Oracle on Windows. Preupgrade developer in the Database Upgrade Development team.

The Cities

Hector and Byron will present in these cities. Please find the links to the events with agenda, location etc. below.
Ciudad de Panamá, Panamá - 01-Aug-16
San Jose, Costa Rica - 03-Aug-16
Ciudad de México, Mexico - 05-Aug-16
Ciudad de Guatemala, Guatemala - 08-Aug-16

And of course the OTN Tour will also visit Buenos Aires, Montevideo, São Paulo, Santiago de Chile, Lima and Pereira. Find all the cities and stops and further information HERE.

The Talks

Hector and Byron will give the following talks:

What's New in Database Upgrade: Live and Uncensored!
This session will introduce the new features in database upgrade for the latest version of Oracle Database. As an added bonus you will see a live upgrade from Oracle Database 11g Release 2 to the newest version of Oracle Database, and also a live migration using the new Full Transportable Export/Import feature, both in the span of 45 minutes!

How to Ensure Good Performance After your Database Upgrade
The top concern of customers when approaching a database upgrade is how their server will perform under the new version of Oracle. This session will show you Best Practice for using a variety of database parameters, features, and options to achieve better performance on the same hardware when you upgrade your Oracle Database.

I'm so glad that our team gets the chance to attend - and all the best to Byron and Hector throughout their travels :-) Hope to be part of the OTN LAD Tour again in 2017 :-)

--Mike


Patch Recommendation

July 2016 - Proactive BPs and PSUs are available

Last night the July 2016 patches were released. Not all of them, actually: in case you miss the AIX, Intel Solaris and zLinux versions, those should be available by Friday, July 22, 2016. See the Oracle Critical Patch Update Advisory July 2016 for further details, and especially the Database announcement on MOS.

What's new, what's important?

First of all, the renaming of the DBIM and Exadata Bundle Patches to PROACTIVE BUNDLE PATCHES is now settled in more MOS notes:

2.1 Database patch for Engineered Systems and Database In-Memory 12.1.0.2 renamed to "Proactive Bundle Patch 12.1.0.2"
Starting from Apr 2016 onwards, the prior Database Bundle that was called "Database patch for Engineered Systems and Database In-Memory 12.1.0.2" will now be called "Proactive Bundle Patch 12.1.0.2". This patch will continue to be a cumulative patch and will include all prior fixes. The Apr 2016 Proactive BP can also be applied on top of the Jan 2016 "Database patch for Engineered Systems and Database In-Memory 12.1.0.2".

Does Oracle really recommend the Proactive Bundle Patches?

Well, I blogged about it almost 3 months ago and received several emails and comments from customers and colleagues sending me either complaints or SRs where somebody in Oracle Support gave them a hard time because they had applied a Proactive Bundle Patch on a non-Exadata system. Hm ... the fact that the Proactive BPs were available on Solaris, AIX and HP-UX as well simply got ignored. Sorry for the inconvenience - but Oracle is a big ship and sometimes it takes a while until the message really reaches everybody. Anyhow, in case you get into a discussion with Oracle Support people in an SR, please direct them to MOS Note: 1962125.1 - Oracle Database - Overview of Database Patch Delivery Methods:

Oracle makes the following recommendation for which patch method to use for Database related installations: Every customer should at least install PSUs. Minimal testing required.
12.1.0.2 customers wanting a more comprehensive set of fixes should install the Database Proactive Bundle Patch. The "Database Proactive Bundle Patch" requires a bit more testing than a Patch Set Update (PSU), as it delivers a larger set of fixes. [The above text is taken from MOS Note: 1962125.1 as of July 20, 2016]

Other changes you should be aware of?

You'll also find significant changes in the naming in MOS Note: 1683799.1 - 12.1.0.2 Patch Set - Availability and Known Issues. The Recommended Patches section now differentiates between Exadata, RAC and non-RAC systems, making your choice much easier, and removes the misleading naming for DBIM. For non-Exadata, non-RAC systems:

Note:23615334.8 - Combo of 12.1.0.2.160719 OJVM PSU and 12.1.0.2.160719 DBBP (Jul 2016) - RAC rolling: Part - Patch:23615334
Note:23615289.8 - Combo of 12.1.0.2.160719 OJVM PSU and 12.1.0.2.160719 DB PSU (Jul 2016) - RAC rolling: Part - Patch:23615289
Note:23273686.8 - 12.1.0.2.160719 Database Proactive Bundle Patch (Jul 2016) - RAC rolling: Yes - Patch:23273686
Note:23177536.8 - Oracle JavaVM Component 12.1.0.2.160719 Database PSU (Jul 2016) (OJVM PSU) - RAC rolling: No - Patch:23177536
Note:23054246.8 - 12.1.0.2.160719 (Jul 2016) Database Patch Set Update (DB PSU) - RAC rolling: Yes - Patch:23054246

I'd recommend the Database Proactive Bundle Patch (Note:23273686.8), unless you use OJVM and require the OJVM patch in addition.

And finally ... the summary!

For those who have no time to read such a lengthy blog post, here are the important facts:

Where do you find Critical Patch Update Advisory information and the root to all further notes?
Critical Patch Update Advisory July 2016

Where does Oracle recommend Proactive BPs over PSUs?
MOS Note: 1962125.1 - Oracle Database - Overview of Database Patch Delivery Methods

Where do you find the link to YOUR patch for the Database and GI for 12.1.0.2?
MOS Note: 2136219.1 - Patch Set Update and Critical Patch Update July 2016 Availability Document
--Mike

[Addition/Update] Please be aware of Bug 24332805 - "OUI-67124: RE-LINK FAILS ON TARGET 'ISQORA' DURING JUL 2016 PSU APPLY". Two workarounds:

1. Add the unixODBC package to the server:
Install the "unixODBC" packages: yum install unixODBC
Then re-run ins_odbc.mk:
cd $ORACLE_HOME/odbc/lib/
make -f ins_odbc.mk isqora

2. Remove the sqora relinking from the 'actions.xml' file of the Linux x86-64 12.1.0.2.160719 DB PSU (it is already removed from all other platforms).
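For reference, the basic flow of applying one of these patches to a single-instance home looks roughly like this. The patch number and staging path are examples only - always follow the README of the actual patch you download:

```shell
# Example: 23273686 = Database Proactive Bundle Patch 12.1.0.2.160719
cd /u01/stage/23273686

# Check for conflicts against the Oracle Home before applying
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

# Apply the binary part (databases running from this home must be shut down)
$ORACLE_HOME/OPatch/opatch apply

# After restarting the database, load the SQL changes into each database
$ORACLE_HOME/OPatch/datapatch -verbose
```

The datapatch step is what actually brings the SQL portion of a PSU/BP into the database, so don't skip it after the binary apply.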


Optimizer

Global Temporary Tables - Change in Oracle 12c

A few weeks back I was copied on an email conversation about an important change to Global Temporary Tables (GTTs) in Oracle Database 12c. Something you need to be aware of when using GTTs in Oracle Database 12c: prior to this release, statistics on GTTs were SHARED between different sessions. Since Oracle Database 12c this is different by default - statistics on GTTs are visible to the SESSION only. This can of course be changed. And there are pros and cons. But as I'm not an optimizer expert, I'm glad that my colleague Nigel Bayliss, our Optimizer Product Manager, has blogged about it. If you use this functionality, please read all the details about this change here:

Global Temporary Tables and Upgrading to Oracle Database 12c - Don't Get Caught Out
https://blogs.oracle.com/optimizer/entry/gtts_and_upgrading_to_oracle

"[..] If your database applications are upgraded without taking into account the new default, then workloads relying on shared statistics won't see statistics on GTTs where they had seen them before. Dynamic sampling might kick in where it hadn't before [..]"

Thanks Nigel!
--Mike

PS: The original post said Oracle Database 12.1.0.2, but I misread it - and thanks for the hint - the change was introduced with Oracle Database 12.1.0.1 :-)
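If a workload really depends on the old behavior, the statistics preference can be switched per table. A minimal sketch - the schema and table names are made up for illustration; see Nigel's post for the full discussion of when this is a good idea:

```sql
-- Revert one GTT to pre-12c behavior: statistics shared between sessions
EXEC DBMS_STATS.SET_TABLE_PREFS('APPUSER', 'MY_GTT', 'GLOBAL_TEMP_TABLE_STATS', 'SHARED');

-- Verify the current setting ('SESSION' is the 12c default)
SELECT DBMS_STATS.GET_PREFS('GLOBAL_TEMP_TABLE_STATS', 'APPUSER', 'MY_GTT') FROM dual;
```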


Flaws and Pitfalls

Speed up Upgrade Phase 65 with a new catuposb.sql

Credits to Chris Smids from Proximus in Belgium :-) Thanks, Chris!!!

Upgrade to Oracle 12.1.0.2 is slow in phase #65?

You are wondering why phase #65 of the database upgrade to Oracle Database 12.1.0.2 takes quite a while. You dig into catupgrd0.log and recognize this statement taking a while:

dbms_output.put_line('catuposb, update 4 - rows updated ' || rows_updated);
END;  -- end of update for system internally generated objs
/

The cause of this issue is buried in the script catuposb.sql hitting stale histograms which did not get refreshed, even if you gathered dictionary stats before the upgrade as recommended.

Offered Workarounds

Gather table stats (DBMS_STATS.GATHER_TABLE_STATS) on SYS.USER$ first, or exchange catuposb.sql with the version distributed via MOS Note: 2147232.1 - Upgrade to 12c: CATUPOSB.SQL SLOW DURING UPGRADE.

Important things to know

Bug 21744290 - "catuposb.sql can be slow during upgrade depending on no. of objects in the DB" is supposed to be fixed in the April 2016 BPs and PSU - but in Chris' case Oracle Support confirmed that even though Chris had the BP, the issue still happened. And this observation is correct: the fix from December 2015 is missing in the catuposb.sql distributed with the April 2016 PSU and BP. When you compare it with the version Oracle Support is distributing, you'll find that fix included - but the two more recent fixes in the current catuposb.sql (see above) are missing. We'll sort this out and I will update you via this blog post.

These are the lines missing in the version of catuposb.sql distributed with the April 2016 PSU/BP:

-- bug 22178855: gather table stats on user$ after update to avoid slow
-- select in update 4
dbms_stats.gather_table_stats('SYS', 'USER$');

What should you do?

Actually it is not supported to edit "our" scripts in ?/rdbms/admin, so please wait for a clean version of catuposb.sql to be distributed. I will update the blog asap.
In the meantime I think it should be OK to gather stats on USER$ beforehand in order to prevent this from happening:

SQL> exec dbms_stats.gather_table_stats('SYS', 'USER$');

Or wait for the July 2016 Proactive BP and PSU, which should be available on July 19, 2016. They will contain the correct version of the script including all fixes.

--Mike


Best Practice

Full Transportable Export/Import - Things to Know

This blog post is an addition to: Full Transportable Export/Import - Migrating an 11.2.0.4 database to Oracle Database 12c - into the Oracle Cloud

Seth Miller commented on the pitfall of hitting a serious issue during the Data Pump run - Data Pump exiting and not finishing - after which you have to do the entire backup/restore/incremental-roll-forward exercise again. Without any doubt, this is no fun at all. So let me point out a few things to take into consideration - and some of them are not obvious, I guess.

Do you need to backup/restore/incremental-roll-forward again?

When you let Data Pump do all the manual tasks of Transportable Tablespaces (TTS), this is called Full Transportable Export/Import (FTEX, as I abbreviate it). With regular TTS you take your tablespaces read-write in your destination at the end of the entire process. So whenever something goes wrong during the metadata rebuild, you won't have to restore all your tens of terabytes.

With FTEX things are a bit different. Data Pump will take the tablespaces read-write several times during the impdp run. This has to happen. And the job is not restartable right now. This may change in a future release or patch set. We'll see.

What are your options to prevent a full repeat of all your backup/restore/incremental-roll-forward activities? FLASHBACK DATABASE unfortunately is not an option: you could set a Guaranteed Restore Point, but as soon as the data file headers get adjusted, FLASHBACK won't be able to revert this. Setting the data files to read-only on the OS level is not an option either, as you might force Data Pump to fail when it wants to write something into a tablespace and gets an OS return code for not being able to complete the write operation. Therefore, right now the only valid option I can think of is leveraging storage snapshot technologies in order to restore the files back to the state from before Data Pump ran.
We are aware of this pitfall and we are discussing alternatives internally for the future.

Character Sets and Time Zone?

Another topic to pay close attention to is the database's character set and national character set. Please find all the guidelines about character sets and national character sets here:

Transportable Tablespaces - Character Sets - Same same but different? (May 25, 2016)

For time zone files, please make sure you have the same time zone version in source and destination. As you can't downgrade the time zone file in the destination, usually you'll have to apply the matching time zone patch in the source before initiating the transport. If you don't pay attention, Data Pump will deny the metadata import of the data in the tablespaces, as otherwise you might end up with corrupted time zone data. Please find the precise description in the Oracle 12c documentation: General Limitations on Transporting Data

Tablespace Encryption

When your tablespaces are encrypted, you'll have to be a bit more careful when using TTS or FTEX. Even though the FAQ on Oracle.com mentions that there are no issues, there are still some limitations. The documentation is more precise:

Limitations on Transportable Tablespaces:
- Transportable tablespaces cannot transport encrypted tablespaces.
- Transportable tablespaces cannot transport tablespaces containing tables with encrypted columns.

Limitations on Full Transportable Export/Import:
- You cannot transport an encrypted tablespace to a platform with different endianness.
- To transport an encrypted tablespace to a platform with the same endianness, during export set the ENCRYPTION_PWD_PROMPT export utility parameter to YES, or use the ENCRYPTION_PASSWORD export utility parameter. During import, use the equivalent import utility parameter, and set the value to the same password that was used for the export.

--Mike
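Checking the time zone prerequisite mentioned above is quick - run this on both source and destination and compare; the versions must match before you start the transport:

```sql
-- Time zone file version in use by this database
SELECT filename, version FROM v$timezone_file;
```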


Data Pump

Full Transportable Export/Import - Migrating an 11.2.0.4 database to Oracle Database 12c- into the Oracle Cloud

Full Transportable Export/Import - one of the coolest features in Oracle Database 12c

We blogged about Full Transportable Export/Import a while back. It is - no doubt - one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands-On Lab - Upgrade, Migrate, Consolidate to Oracle Database 12c) as well. It utilizes the technique of Transportable Tablespaces - cross-platform, cross-endianness, cross-version - but lets Oracle Data Pump do all the "dirty" work of rebuilding everything kept in your SYSTEM and SYSAUX tablespaces, including views, synonyms, public objects, grants, sequences etc. You can even combine it with RMAN incremental backups - of course cross-platform and cross-endianness as well - in order to decrease the downtime for large databases. Please find our presentation about the combination of these features in the Slides Download Center to the right: Full Transportable Export/Import with RMAN Incrementals

In addition: you can use this Oracle Database 12c feature with your source database being Oracle 11.2.0.3 or 11.2.0.4. Just the destination database you are upgrading or migrating to must be at least an Oracle 12.1.0.1 database (or newer, of course).

See the feature in action

Roy recorded a 20-minute video demonstrating how to migrate an Oracle Database 11.2.0.4 database into a Multitenant pluggable database using Full Transportable Export/Import into the Oracle DBaaS Cloud. And of course the feature also works without migrating into a Container Database environment - and without migrating to the Oracle DBaaS Cloud. If you need to migrate a database from Oracle 11.2.0.3 or newer to Oracle Database 12c, Full Transportable Export/Import can become your best friend.

--Mike
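As a rough sketch, the two Data Pump calls for an FTEX migration look like this. The directory object, dump file names and datafile path are placeholders; the combination of FULL=Y and TRANSPORTABLE=ALWAYS, plus VERSION=12 on an 11.2.0.3/11.2.0.4 source, is what triggers the full transportable path:

```shell
# On the 11.2.0.4 source (user tablespaces set read-only first)
expdp system full=y transportable=always version=12 \
      directory=dp_dir dumpfile=ftex.dmp logfile=ftex_exp.log

# On the 12c destination, after copying (and converting, if needed) the datafiles
impdp system full=y directory=dp_dir dumpfile=ftex.dmp \
      transport_datafiles='/u02/oradata/cdb1/pdb1/users01.dbf' \
      logfile=ftex_imp.log
```

On an 11.2.0.x source the VERSION=12 parameter is required, because full transportable export is a 12c feature being driven from an older release.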


Best Practice

Minor Upgrade? Going from 11.2.0.1 to 11.2.0.4?

My Belgian friend Phillipe Fierens raised a great question on Twitter last week and dropped me an email again after a discussion with his client: is going from 11.2.0.1 to 11.2.0.4 a "minor" upgrade?

For Phillipe and myself the answer is pretty clear and straightforward: there is no minor upgrade anymore, since every (patch set) release is a full release, and since new parameters, parameter values, features and whatnot appear even in patch sets. But the following discussion on Twitter, with comments from all sides, made me think about why people would honestly declare going, for instance, from 11.2.0.1 to 11.2.0.3 a minor upgrade, whereas going to 12.1.0.2 is seen as a major upgrade. Let me summarize why I completely disagree - and actually Dom Giles nailed it:

Since Oracle Database 11.2.0.1 we deliver patch sets as full releases.
A patch set can contain not only new parameters or parameter values but may occasionally also contain new features (typical examples in Oracle 11.2.0.4 are the new value optimizer_dynamic_sampling=11 or the DBMS_REDACT package).
Therefore you will have to do exactly the same amount of testing regardless of going to Oracle Database 11.2.0.4 vs Oracle Database 12.1.0.2 - there is ZERO difference in the tests, the time, the effort, the manpower ...

You don't believe me? Then please check MOS Note: 1962125.1 (Oracle Database - Overview of Database Patch Delivery Methods). Scroll down a bit to "Testing Recommendations By Patch Type" and see the rightmost column of the table, headlined "Patch Set Release".

I hope this clarifies it all. There is no "minor" patch set upgrade anymore. Even though I would love to tell you that going from 11.2.0.3 to 11.2.0.4 is just a minor hop, it will require exactly the same testing and evaluation work as going to Oracle Database 12.1.0.2. But going to Oracle Database 12.1.0.2 means you are under Premier Support until end of June 2018 - whereas free Extended Support for Oracle Database 11.2.0.4 will end in May 2017.

--Mike


Best Practice

DMU - Tips and Tricks - Database Migration Assistant for Unicode

Please find previous posts about the Database Migration Assistant for Unicode (DMU) here:

We included the DMU in our slide deck a while ago: Upgrade, Migrate & Consolidate to Oracle Database 12c

DMU 2.1 is available (June 2, 2015)
https://blogs.oracle.com/UPGRADE/entry/migrating_to_unicode_get_dmu

Since Roy and I subscribed to the Hot Topics support email, we find a very helpful note neither of us was aware of almost every second day. One of these recent finds is: MOS Note: 2018250.1 - Tips and Known Issues - Database Migration Assistant for Unicode (DMU) 2.1

For those who have never heard of the DMU before, you'll find a very brief overview of this tiny little - and very helpful - tool for database character set migrations (not only to Unicode) here: http://www.oracle.com/technetwork/database/database-technologies/globalization/dmu/overview/index.html

Why is this note so helpful? Simply because it combines the most important topics about the DMU in one note - from getting started to known issues. And this is really helpful when you plan to use the tool. It is structured into:
A) Things to check before using the DMU
B) FAQ and common issues
C) Problems starting the DMU or during scan
D) Typical errors such as "invalid binary conversion" and others
E) Post conversion phase
F) Steps to do afterwards plus some extras

--Mike


Tech

Transportable Tablespaces - Character Sets - Same same but different?

All credits go to Don Wolf, an Oracle Advanced Customer Support engineer from Ohio, as he dug out this information :-) Thanks Don!

Do database character sets have to match EXACTLY for Transportable Tablespaces?

That sounds like a simple question. When you look into our big slide deck the answer will be a straight "Yes". No doubts. Regardless of whether you use Transportable Tablespaces or Full Transportable Export/Import, your source and target database character sets must be equal. Otherwise Data Pump won't allow you to process the metadata import.

But Don was wondering about slightly differing information in MOS notes and the documentation, and asked me if I could clarify things. I couldn't.

MOS Note:1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable - tells you:

Character Set and National Character Set Limitation
Restriction: The source and target database must use a compatible character set and national character set.
Reference: Oracle Database Administrator's Guide - Transporting Tablespaces Between Databases - Limitations on Transportable Tablespace Use
Affected Version: All versions.

And the above documentation link then tells you:

The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
- The database character sets of the source and the target databases are the same.
- The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
  - The source database is in version 10.1.0.3 or higher.
  - The tablespaces to be transported contain no table columns with character length semantics, or the maximum character width is the same in both the source and target database character sets.
  - The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
- The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
  - The source database is in a version lower than 10.1.0.3.
  - The maximum character width is the same in the source and target database character sets.

And furthermore, from the "Choosing a Character Set" section of the Oracle 12.1 Database Globalization Support Guide:

Oracle Database does not maintain a list of all subset-superset pairs, but it does maintain a list of binary subset-superset pairs that it recognizes in various situations, such as checking compatibility of a transportable tablespace or a pluggable database. Table A-11 and Table A-12 list the binary subset-superset pairs recognized by Oracle Database.

In those tables the only binary subset-superset pairs involving AL32UTF8 are:
- UTF8/AL32UTF8
- US7ASCII/AL32UTF8

This is not 100% congruent with the information provided in MOS Note 371556.1 - How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN - saying: "The source and target database must use the same character set and national character set."

What is correct, what's not?

First of all, the wording "compatible character sets" seems to have been introduced with Oracle Database 11.2. In fact, the scope for the target system has become broader in Oracle Database 11.2. The rules below are correct as soon as your target database is an Oracle Database 11.2 or newer release:

The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
- The database character sets of the source and the target databases are the same.
- The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
  - The source database is in version 10.1.0.3 or higher.
  - The tablespaces to be transported contain no table columns with character length semantics, or the maximum character width is the same in both the source and target database character sets.
  - The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
- The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
  - The source database is in a version lower than 10.1.0.3.
  - The maximum character width is the same in the source and target database character sets.

--Mike


Best Practice

MOS Note:136697.1 - New HCHECK.SQL for Oracle Database 12c

A while back we added this slide to our big slide deck:

The story behind this slide

A large and important customer in the US tested a patch set upgrade - but when they approached the production upgrade from 11.2.0.2 to 11.2.0.3 on a large RAC cluster, they had to cancel the attempt and revert to the previous state.

Reason? They had hit a dictionary corruption silently sleeping in the database, causing no trouble at all so far - until the upgrade touched the broken structures.

They asked us: "How can we ensure the database is really healthy and in good state?"

Unfortunately there's no 100% confirmation, as there may always be an unknown issue we are not aware of. But we can test two things:
- Let RMAN verify the logical integrity of the Oracle blocks
- Let the hcheck.sql script provided by Oracle Support check and test for known dictionary issues

It's actually a general recommendation we have been giving customers for a long time: check your really important and business-critical databases from time to time during off-peak hours.

Recently Oracle Support updated hcheck.sql to version 4 to include Oracle Database 12c checks as well. Please find it here:

MOS Note:136697.1 - hcheck.sql - Script to Check for Known Problems in Oracle8i, Oracle9i, Oracle10g, Oracle 11g and Oracle 12c

--Mike
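The two checks mentioned above can be sketched like this - assuming hcheck.sql has already been downloaded from MOS Note:136697.1 into the current directory:

```
$ rman target /
RMAN> VALIDATE CHECK LOGICAL DATABASE;
RMAN> EXIT

$ sqlplus / as sysdba
SQL> @hcheck.sql
```

Both checks only read; still, schedule them during off-peak hours on large databases, as the RMAN validation touches every block.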


Workshop

Recap - OTN Tour EMEA - Baku, Azerbaijan - 2016

Working on a Saturday is not common for me, but it happens from time to time - mostly when I travel abroad or when a customer has a critical upgrade or migration on a weekend. Having 120 skilled and enthusiastic people at a conference day on a Saturday is VERY UNCOMMON. And having them stay from 9am to 6pm is a sign that the selection of topics matched the interests of the audience - and that the presenters kept their attention the entire day.

Please find the slides and the Hands-On Lab here:
- Ensure Performance Stability When Upgrading Oracle Databases
- Upgrade, Migrate, Consolidate to Oracle Database 12c
- Hands On Lab - 11.2.0.4, 12.1.0.1, 12.1.0.2 - Upgrade, Migrate, Consolidate

I'm full of a ton of positive emotions (you know that Germans are ice cold and keep their feelings mostly under control, don't you *haha*) - still, two days later. It was such an amazing day in Azerbaijan. I hadn't been to this country before, nor had I visited Baku. The only thing I knew about Baku was from my youth, when I used to play chess on our school team and tried to learn from Garri Kasparov's books. He was born in Baku.

The city is beautiful. Unfortunately I didn't see as much as I wished, as I had to rush back to Germany. But judging from the thousands of pictures Oracle ACE Director Joél Perez has taken in two days, it is very scenic.

Speaking of the other presenters, I learned a lot from everybody: Chris Antonini from Trivadis, Björn Rost from Pythian and of course Joel. And ... I now know - thanks to the AzerOUG's president Kamran Agayev - that ALT+N advances the Java-based DBUA when the NEXT button is outside of your screen's scope and does not appear no matter what you do. Thanks my friend, you saved me :-)

Congrats also to all the organizers - everything worked perfectly well. The conference even got attention on television ;-)

I hope I can come back soon with a bit more time to look around. CU all soon!

--Mike


Patch Recommendation

Are BPs, PSUs and Proactive BPs cumulative?

Are Bundle Patches (BPs) and Patch Set Updates (PSUs) cumulative?

That is a question that sounds trivial to many people, but it actually gets asked quite often. And sometimes I forget to mention this during the workshops - and luckily usually somebody asks the question, reminding me to explain it.

Yes, Bundle Patches and Patch Set Updates (and of course Proactive Bundle Patches and Critical/Security Patch Updates (CPUs/SPUs)) are all cumulative. You'll find this mentioned in the first paragraph of MOS Note:854428.1 - Patch Set Updates for Oracle Products.

Interesting note on the side: I would have expected this important piece of information in MOS Note:1962125.1 - Oracle Database - Overview of Database Patch Delivery Methods, but I couldn't find it. So it's no wonder people ask such a "trivial" question ... [irony!]

Two simple examples:
- You have the October 2015 PSU applied and you'd like to apply the April 2016 PSU on top. Then you don't need the January 2016 PSU, as it is already included in the April 2016 PSU.
- You never applied a Proactive Bundle Patch and you'd like to apply the April 2016 Proactive Bundle Patch, because a guy recommended it on an Oracle blog - and MOS notes mention it as highly recommended as well. You don't need to apply anything beforehand. The April 2016 Proactive BP includes all the fixes from all previous BPs on top of Oracle Database 12.1.0.2.0.

Further Information?

You'll find recent related postings on this blog here:
- Can I apply a BP on top of a PSU? Or vice versa? (May 3, 2016) https://blogs.oracle.com/UPGRADE/entry/can_i_apply_a_bp
- Oracle April 2016 PSU and Proactive BPs are there (Apr 21, 2016) https://blogs.oracle.com/UPGRADE/entry/oracle_april_2016_psu_and
- Oracle Database BP April 2016 applied successfully (Apr 21, 2016) https://blogs.oracle.com/UPGRADE/entry/oracle_database_bp_april16_applied
- MOS Note 1454618.1: Quick Reference to Database PSUs, CPUs, BPs and Patchsets (Apr 26, 2016) https://blogs.oracle.com/UPGRADE/entry/mos_note_1454618_1_quick

--Mike
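Because PSUs, BPs and Proactive BPs are cumulative, all you need to know is the most recent one applied to an Oracle Home. A quick way to check (sketch, assuming OPatch is installed under $ORACLE_HOME):

```
$ $ORACLE_HOME/OPatch/opatch lspatches
```

opatch lsinventory gives the same information with more detail, including sub-patches.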


Patch Recommendation

Can I apply a BP on top of a PSU? Or vice versa?

This question was in my inbox this morning, raised by a customer via a colleague.

Our feeling said: Yes, but you'll have to deinstall the PSU first before applying the BP. Having a feeling is one thing, knowing the truth is better. And as I have had so much fun applying PSUs and BPs back and forth for the past two weeks, I thought "let's give it a try". So here we go ...

Apply a BP on top of a PSU

This is my starting point - Oracle Database 12.1.0.2 with PSU October 2015, the last state in my VBox environment, since I'm experimenting with parameters and packages at the moment. And this is the result when trying to apply the BP from April 2016 on top:

[CDB2] oracle@localhost:/media/sf_CTEMP/22899531/22899531/22806133$ opatch apply
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version    : 12.1.0.1.10
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2016-05-03_10-26-37AM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:
Sub-Patch : 20243804
        Bug Conflict with Sub-Patch 21359755
        Conflicting bugs are:
        18966843, 19468991, 19032777, 19587324
[..]
Following patches have conflicts: [   21359755   22806133 ]
Refer to My Oracle Support Note 1299688.1 for instructions on resolving patch conflicts.

UtilSession failed: Patch 21359755 is a composite patch which could not be rollback.
Log file location: /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2016-05-03_10-26-37AM_1.log

OPatch failed with error code 73

Ok, our suspicion was correct. There are a lot of conflicts - I have to remove the PSU first. And of course the recommended conflict check would have given me the same result. Furthermore, I think I read this a while back in one of the central MOS notes about PSUs and BPs as well.

Removing the PSU from October 2015 first

Removing a PSU or BP is very simple and straightforward (and well described in the README.html coming with the patch).

[CDB2] oracle@localhost:/media/sf_CTEMP/21359755/21359755$ opatch rollback -id 21359755
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version    : 12.1.0.1.10
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/21359755_May_03_2016_10_41_54/rollback2016-05-03_10-41-54AM_1.log

Patches will be rolled back in the following order:
   21359755   20831110   20299023   19769480
The following patch(es) will be rolled back: 21359755  20831110  20299023  19769480
Sub-patches of a composite series are being rolled back. The system will be returned to a state where all subpatches are rolled back.
[..]
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Rolling back patch 21359755...
[..]
RollbackSession removing interim patch '19769480' from inventory
Log file location: /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/21359755_May_03_2016_10_41_54/rollback2016-05-03_10-41-54AM_1.log

OPatch succeeded.

Afterwards please make sure to run ./datapatch -verbose to roll back the SQL changes, as explained in section 4 of the README.html (and of course all other steps mentioned there).

Apply the BP from April 2016

I don't want to repeat myself, as I already wrote about this positive experience a few days ago:
- Oracle Database BP April 2016 applied successfully (Apr 21, 2016) https://blogs.oracle.com/UPGRADE/entry/oracle_database_bp_april16_applied

I continued this blog post on a later occasion here:
- How to rollback Sub Patches? (Oct 4, 2016)

Further Information?

You'll find recent related postings on this blog here:
- Oracle April 2016 PSU and Proactive BPs are there (Apr 21, 2016) https://blogs.oracle.com/UPGRADE/entry/oracle_april_2016_psu_and
- Oracle Database BP April 2016 applied successfully (Apr 21, 2016) https://blogs.oracle.com/UPGRADE/entry/oracle_database_bp_april16_applied
- MOS Note 1454618.1: Quick Reference to Database PSUs, CPUs, BPs and Patchsets (Apr 26, 2016) https://blogs.oracle.com/UPGRADE/entry/mos_note_1454618_1_quick

--Mike
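For reference, the conflict precheck mentioned earlier - which would have flagged the same conflicts without touching the Oracle Home - looks like this, using the unzipped patch directory from this example:

```
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
      -phBaseDir /media/sf_CTEMP/22899531/22899531/22806133
```

Running this before any apply attempt saves you a failed OPatch session and an inventory left untouched.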


Oracle Database 12c

Upgrade to Oracle Database 12c: We don't insist :-)

It's so funny - for years I have discussed minimal-downtime upgrade strategies with customers, back and forth, up and down. I have seen DBAs really hunting to save a few seconds of potential downtime - and I always take this seriously, as there is usually a real reason behind it. Just to learn a few days later, by repeated experience, that my work Windows 7 laptop goes down to apply upgrades ... and this takes looooooooong ... sometimes a lengthy 30+ minutes of quiet time. Whereas my OL6 VBox image applies upgrades in the background and needs a simple restart to be back in less than a minute or so. Different strategies, of course.

For those who think I'm writing nonsense about the upgrade of my Win7 on my Lenovo x230 with 16GB of RAM, an i5-320M cpu and a 5400rpm WD spinning disk, here's a screenshot taken from one of those upgrades where I had already sat waiting for 15 minutes ... waiting ... waiting ... As Win7 had to configure the upgrades afterwards, once it was done restarting I had to wait another 10 minutes for completion. And no, my laptop is not broken (not that I know of) - it's 3 years old and heavily used, of course. But usually everything runs fine ... :-)

Anyhow ... Of course I have discussed with a ton of people why you should upgrade to Oracle Database 12.1.0.2 now! - and stay away from that "we go live on the 2nd release only" thinking, as there's no such thing as THE 2nd release anymore. But whatever my position on upgrades is, I can assure you that we don't discuss methods like this internally to force you to upgrade ...

... not yet *haha* !!!

--Mike

PS: The picture is not a fake - you can watch the 41-second youtube video here :-)


Flaws and Pitfalls

Incremental Statistics Collection in Oracle 12.1.0.2 - A True Story

Recently I came across a really interesting customer case in the UK dealing with Incremental Statistics Collection issues in regard to an upgrade to Oracle Database 12.1.0.2. This is the follow-up blog post to:
- Incremental Statistics Collection in Oracle 12.1.0.2 - Upgrade Pitfalls (Apr 27, 2016) https://blogs.oracle.com/UPGRADE/entry/incremental_statistics_collection_in_oracle2

A while back I already blogged about Incremental Statistics collection in Oracle Database 12.1.0.2:
- Incremental Statistics Collection Improved in Oracle Database 12c (Nov 13, 2014) https://blogs.oracle.com/UPGRADE/entry/incremental_statistics_collection_improved_in

And you'll find more information in our documentation and in posts by our optimizer folks:
- Database SQL Tuning Guide 12c: Gathering Incremental Statistics on Partitioned Objects
- Incremental Statistics Maintenance - what statistics will be gathered after DML occurs on the table? (Mar 18, 2012) https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

The Basics

The idea of Incremental Statistics Collection is simply to save time and resources when gathering statistics for partitioned tables in order to update the global stats. In Oracle Database 12c we added two very important features:
- Incremental stats now work with partition exchange as well.
- "Changed" partitions won't be eligible for new stats generation until a certain stale percentage (default: 10%) has been reached - this has to be enabled and can be tweaked:

SQL> exec DBMS_STATS.SET_DATABASE_PREFS('INCREMENTAL_STALENESS','USE_STALE_PERCENT');
SQL> exec DBMS_STATS.SET_DATABASE_PREFS('STALE_PERCENT','12');

Furthermore we always recommend to:
- Not enable incremental stats collection globally but only for specific tables. Otherwise the footprint of the synopses on disk can grow fairly large - the biggest footprint I've seen so far was almost 1TB in size in the SYSAUX tablespace.
- Enable it mostly for range-partitioned tables where only a few partitions undergo DML changes.

The Case

Actually, the synopsis table in this particular case contained "only" 300GB of data. As the starting point was Oracle Database 11.2.0.3, just a change from Range-Hash to List-Hash partitioning would happen during the upgrade. As this happens via metadata swapping, the impact should be small. But the issue coming up after the upgrade didn't have to do with this change in partitioning layout.

Issue No. 1

During the maintenance window the incremental stats job did not finish, and actually this statement caused plenty of trouble:

delete from sys.wri$_optstat_synopsis$
 where bo# = :tobjn and group# in (select * from table(:groups))

Not completing this statement within the 4 hours of the default maintenance window led to a rollback of the delete - and its rollback alone took 14 hours. It turned out that the delete has to happen (and complete) before the regathering of stats can start.

I recommended patch 21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2 (see also MOS Note:2041541.1 - Gather_Database_Stats_Job_Proc Taking More Time in 12.1.0.2 Than 11.2.0.4), and the customer requested Patch 22893653: MERGE REQUEST ON TOP OF DATABASE PSU 12.1.0.2.160119 FOR BUGS 19450139 20807398 on top of their January 2016 PSU - the merge included the patch I mentioned.

Besides that, another issue got discovered.

Issue No. 2

The daily purge of statistics didn't really work on large synopses, as the default degree of parallelism introduced with Oracle 12c gets derived from the number of blocks of the synopsis table - a bigger table means a higher parallel degree for the purge. It ended up with a PARALLEL hint of 60 - and that was counterproductive. Once a purge got started manually in serial mode or with a low DOP, it completed in less than 1 minute.

With a trace similar to:

set serveroutput on
EXEC dbms_output.enable(999999999);
EXEC dbms_stats.set_global_prefs('trace',1+4);
EXEC dbms_stats.gather_table_stats(ownname=>'&TABLE_OWNER',tabname=>'&TABLE_NAME');
EXEC dbms_stats.set_global_prefs('trace',0);

the issue could be identified as:
- Bug 21258096 - UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS

The customer requested another merge patch, 22926433, which contains the following fixes:
- 19450139: KN:LNX:PERFORMANCE ISSUE WHEN RUNNING GATHER TABLE STATS WITH INCREMENTAL STATS
- 20807398: ORA-00600 [KGL-HASH-COLLISION] WITH FIX TO BUG 20465582
- 21258096: UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS
- 21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2

Finally the customer agreed with Support's recommendation to truncate the two synopsis tables, WRI$_OPTSTAT_SYNOPSIS_HEAD$ and WRI$_OPTSTAT_SYNOPSIS$, and regathered incremental statistics the following weekend. Of course they validated this action plan on their performance testing environment first - with the merge patch applied - and it had the desired effect and solved the issue. Incremental statistics gathering now works as expected, and the job fits into the maintenance window.

Lessons Learned

Actually, Oracle Support released a very helpful and important note just a few weeks ago (too late for this customer):
- MOS Note:2107602.1 - Things to Consider When Using Incremental Statistics
It contains not only links to the patches for the issues the customer hit here, but also a long list for Oracle 11.2.0.4 as well.

Another MOS Note is worth mentioning here:
- MOS Note:1953961.1 - How to Delete Unwanted Incremental Partition Statistics Synopsis Information From WRI$_OPTSTAT_SYNOPSIS$ in the SYSAUX Tablespace

But these were not all the issues the customer faced - so I may write another blog post in addition within the next days.

--Mike

PS: All credits go to David Butler and Rob Dawley - thanks for your hard work, sorry for all the inconvenience - and especially thanks for writing it all up and forwarding it to me!!!
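The recommendation from "The Basics" above - enabling incremental statistics per table instead of database-wide - can be sketched like this (owner and table name are made up for illustration):

```sql
-- Enable incremental stats only for one large partitioned table
-- (hypothetical SALES_OWNER.SALES_FACT)
exec DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER','SALES_FACT','INCREMENTAL','TRUE');

-- Verify the preference afterwards
SELECT DBMS_STATS.GET_PREFS('INCREMENTAL','SALES_OWNER','SALES_FACT') FROM dual;
```

This way only the tables that really benefit pay the synopsis storage cost in SYSAUX.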


Flaws and Pitfalls

Incremental Statistics Collection in Oracle 12.1.0.2 - Upgrade Pitfalls

A while back I already blogged about Incremental Statistics collection in Oracle Database 12.1.0.2:
- Incremental Statistics Collection Improved in Oracle Database 12c (Nov 13, 2014) https://blogs.oracle.com/UPGRADE/entry/incremental_statistics_collection_improved_in

And you'll find more information in our documentation and in posts by our optimizer folks:
- Database SQL Tuning Guide 12c: Gathering Incremental Statistics on Partitioned Objects
- Incremental Statistics Maintenance - what statistics will be gathered after DML occurs on the table? (Mar 18, 2012) https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

And you may read this follow-up blog post about a related real-world customer example:
- Incremental Statistics Collection in Oracle 12.1.0.2 - A True Story (Apr 27, 2016) https://blogs.oracle.com/UPGRADE/entry/incremental_statistics_collection_in_oracle1

Database Upgrade

Important to know is the fact that during a database upgrade the underlying tables containing the synopses for incremental stats collection may be reorganized. And depending on the amount of data, this can take a while. The largest synopsis tables I have seen so far were almost 1TB in size, at a financial customer in Europe. But I have seen ranges around 300GB quite often in the past months.

What happens during the database upgrade?

Incremental Statistics Collection got introduced with Oracle Database 11.1.0.6 and improved from release to release. But during a database upgrade a reorganization of the synopsis table can happen:

- Upgrade from Oracle 11.1.0.6/7 or 11.2.0.1 to Oracle 11.2.0.2/3/4:
  - Restructuring of WRI$_OPTSTAT_SYNOPSIS$ to use range-hash partitioning.
  - Most data movement will happen here.
  - As two synopsis tables exist for the interim period, this will consume 2x the space of the synopsis table during the movement.
- Upgrade from Oracle 11.2.0.2/3/4 to Oracle 12.1.0.x:
  - Restructuring of WRI$_OPTSTAT_SYNOPSIS$ from range-hash partitioning to list-hash partitioning.
  - There is little data movement in this case, as the move happens with the help of metadata swapping.

Which symptoms may you see?

Actually, very simple and obvious symptoms: Phase 1 of the parallel upgrade to Oracle Database 12c takes unusually long. It should normally complete within less than a few minutes, but in those cases it can literally take hours. If that happens, check your catupgrd0.log and watch out for the long-running statements. It does not necessarily mean a huge synopsis table is the cause. For instance, one of my German reference customers, DVAG, had leftovers in SYSAUX because of bugs in earlier releases they had worked with. But if you spot results such as this (quoting a colleague here):

"The table WRI$_OPTSTAT_SYNOPSIS$ has 20420 partitions, 344618 subpartitions and 921207 MB size. [..] This transformation step lasts for 6.5 hours, so the whole upgrade process duration has an important impact from this step."

then you should be alerted.

How can you check this upfront?

We haven't included a check into preupgrd.sql yet. But the following queries will tell you whether you may see issues - watch out for large numbers as results:

How many tables have incremental stats on?
SQL> select count(distinct bo#) from sys.wri$_optstat_synopsis_head$;

How many partitions does your WRI$_OPTSTAT_SYNOPSIS$ have?
SQL> select partition_count from dba_part_tables where table_name='WRI$_OPTSTAT_SYNOPSIS$';

How large is your synopsis table?
SQL> select sum(bytes/(1024*1024)) "MB" from dba_segments where segment_name='WRI$_OPTSTAT_SYNOPSIS$';

Which tables have incremental stats ON?
SQL> select u.name "OWNER", o.name "TABLE_NAME", p.valchar
       from sys.optstat_user_prefs$ p
       inner join sys.obj$ o on p.obj#=o.obj#
       inner join sys.user$ u on o.owner#=u.user#
      where p.pname = 'INCREMENTAL';

Synopses for tables which don't exist anymore?
SQL> select distinct h.bo# from sys.wri$_optstat_synopsis_head$ h
      where not exists (select 1 from sys.tab$ t where t.obj# = h.bo#);

Especially a large number of monitored tables and a size of tens or hundreds of GBs indicate that you may have to plan for a longer upgrade duration.

How do you cure this?

Support sometimes gives the recommendation to look at MOS Note:1055547.1 - SYSAUX Grows Because Optimizer Stats History is Not Purged - and asks for a manual purge of stats, for instance:

begin
  for i in reverse 1..31
  loop
    dbms_stats.purge_stats(sysdate-i);
  end loop;
end;
/

But this won't clean up the synopsis tables - only the stats history for object statistics. And it may create some noise in your UNDO. So in any case you may be better off generally setting your stats retention policy to something such as 10 days instead of the default of 31 days.

First of all, make sure that this patch got applied to your target home before the upgrade - it adds parallel index capabilities which speed up the rebuild a lot:
- Bug 19855835 - Upgrade slow when reorganizing large stats history tables

Check the most important MOS Note:2034610.1 - Things to Consider to Avoid Poor Performance or Wrong Results on 12.1.0.2 - for its inclusion in your current or future PSU or BP. As it turns out, the fix did not get included in a PSU yet - but it is included since BP7 for 12.1.0.2, and as BPs are cumulative as well, it is included since October 2015 already.

Be aware: Truncating WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ is strictly not recommended. If you plan to do it the hard way, please check back with Oracle Support for their approval first.

Further Information?

Please read on here about a real-world customer example:
- Incremental Statistics Collection in Oracle 12.1.0.2 - A True Story (Apr 27, 2016) https://blogs.oracle.com/UPGRADE/entry/incremental_statistics_collection_in_oracle1

--Mike
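The retention change suggested above can be done via DBMS_STATS - a small sketch, with 10 days being just the example value from the text:

```sql
-- Check the current statistics history retention (default: 31 days)
SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;

-- Lower it to 10 days - this limits the stats history, not the synopses
exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(10);
```

Remember that this only reduces future growth of the stats history in SYSAUX; it does not shrink the synopsis tables discussed in this post.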


Patch Recommendation

MOS Note 1454618.1: Quick Reference to Database PSUs, CPUs, BPs and Patchsets

Sometimes my mouse arm gets tired from clicking myself through MOS notes just to download a specific PSU or BP - and as I'm experimenting a lot with PSUs and BPs right now, I clicked a lot in the past days and nights.

Usually I'd start with MOS Note:161818.1, then click on the release link (e.g. 12.1.0.x) in the left-most column, then into the Availability and Known Issues note (e.g. MOS Note:1683799.1 for Oracle Database 12.1.0.2.0), and then select the most recent patch from the list of Current Recommended Patches.

Even though we all agree that you should ideally always be on the most recent BP (or at least PSU), there may be situations where you need to access an older PSU or BP, or just the CPU. So what if you need, for instance, the October 2015 PSU? This is obviously not linked from the above note. My usual click routine guides me via the Critical Patch Updates and Security Alerts page, from there via the particular release date (e.g. October 2015) to the Database Server Version (e.g. linked from here is MOS Note:2037108.1), and then to the patch number I'm searching for.

Just a few days ago I learned by accident that there's a MOS Note for this, which I have had on my favorites since then:

MOS Note:1454618.1 - Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets

It contains all this information - even for 8.1.7.4 patches, if you still need them ;-) After listing the base release first, followed by the available patch sets, it offers you the links to the PSUs, CPUs and BPs - and if you are looking for the patches containing the OJVM, you'll find them in a separate section towards the end of the note as well.

*** Don't get puzzled by the note's title - it lists only the Bundle Patches for Windows inside, not the general link to all BPs. A colleague and I have already left feedback for the note owner to add the BP links (or Proactive Bundle Patch links) as well ***

In fact, a very useful MOS Note.

--Mike


Patch Recommendation

Oracle Database BP April16 applied successfully

Usually I don't post twice a day but as my post scriptum for the previous blog post got longer and longer I decided to write an entry about it - maybe simply because I feel soooo happy that my patch application succeeded flawless. For many of you the following steps may look very boring as you have done this many times. But I use the blog also to brain-dump information for myself ;-) And for those who'd like to play with it, I summarized the steps fitting exactly into our Hands-On-Lab environment..  Download I downloaded the following patches:  Database Proactive Bundle Patch 12.1.0.2.160419 (Apr2016) Patch 22899531 Oracle OPatch 12.1.0.1.10 via Patch 6880880 .  Unzip Then unzipped the patch into an empty directory and OPatch directly into the $ORACLE_HOME/OPatch directory in my 12.1.0.2 home.  As I run everything in our HOL environment (download it here) I stored the files on my local host and mapped the directories via a VBox shared folder.  BP into: /media/sf_CTEMP/22899531/22899531/ OPatch into: unzip <OPATCH-ZIP> -d <ORACLE_HOME>. Prechecks Created an OCM response file (ocm.rsp) See MOS Note: 966023.1 (How To Create An OCM Response File For Opatch Silent Installation) Run the conflict checks: $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /media/sf_CTEMP/22899531/22899531/22806133 $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /media/sf_CTEMP/22899531/22899531/23006522 Not sure if the 2nd conflict check was really necessary as it is a CRS patch.  System Space Check  Created a file /tmp/patch_list_dbhome.txt with the following content:/media/sf_CTEMP/22899531/22899531/22806133/media/sf_CTEMP/22899531/22899531/23006522 Run opatch to check if enough free space is available in the Database Home:$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt My first attempt failed - I had to clean up a bit. 
After I removed some stuff (I actually deleted the 12.1.0.1 home with $ORACLE_HOME/deinstall/deinstall after switching to the 12.1.0.1 environment with ". cdb1" in the HOL environment), I could install the BP.

Patch Installation
It bothered me a bit that the logical layout of the patch's readme is still misleading. I was scrolling up and down to find the section for a "Database Home only" install, but realized that it is still buried under "Patching Oracle RAC Database Homes and GI Together", which does not make much sense. Anyhow, I stopped arguing with the owners. The section I'm interested in here is under "Case 3: Non-Oracle RAC Database homes".
Shut down my database(s) and my listener - in the lab environment this means:
. cdb2
sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
lsnrctl stop
Add OPatch to the PATH:
export PATH=$PATH:/u01/app/oracle/product/12.1.0.2/OPatch
Switch to the patch directory and apply the BP:
cd /media/sf_CTEMP/22899531/22899531/22806133
opatch apply
And after a while ... done.

SQL Changes
Almost done. But don't forget to run ./datapatch -verbose afterwards - see the readme's section 2.6.2, "Loading Modified SQL Files into the Database". As it is a Multitenant database in my case, I have to start the database and the listener first, then open all pluggable databases:
. cdb2
sqlplus / as sysdba
SQL> startup
SQL> alter pluggable database all open;
SQL> exit
lsnrctl start
cd $ORACLE_HOME/OPatch
./datapatch -verbose
And finally ... after another little while ... Done!

--Mike
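Putting the pieces of this post together, the whole run can be sketched as one shell session. This is a sketch only - the paths, patch numbers, and the HOL-specific environment (the cdb2 alias, the /u01 and /media/sf_CTEMP locations) are taken from the walkthrough above and will differ in your environment:

```shell
# Prechecks: conflict and space checks against the 12.1.0.2 home
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
    -phBaseDir /media/sf_CTEMP/22899531/22899531/22806133
cat > /tmp/patch_list_dbhome.txt <<EOF
/media/sf_CTEMP/22899531/22899531/22806133
/media/sf_CTEMP/22899531/22899531/23006522
EOF
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace \
    -phBaseFile /tmp/patch_list_dbhome.txt

# Stop the database and the listener, then apply the BP
sqlplus / as sysdba <<EOF
shutdown immediate
exit
EOF
lsnrctl stop
export PATH=$PATH:/u01/app/oracle/product/12.1.0.2/OPatch
cd /media/sf_CTEMP/22899531/22899531/22806133
opatch apply

# Restart, open all PDBs, then load the modified SQL into the database
sqlplus / as sysdba <<EOF
startup
alter pluggable database all open;
exit
EOF
lsnrctl start
cd $ORACLE_HOME/OPatch
./datapatch -verbose
```

This cannot run outside an Oracle home with the BP staged, so treat it as a checklist rather than a copy-paste script.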


Patch Recommendation

Oracle April 2016 PSU and Proactive BPs are there

Hurray, it's Patching Day! Sounds a bit like D-Day ;-) On April 19, 2016 the most recent April PSUs (Patch Set Updates) and BPs (Bundle Patches) got released. Find all the necessary information via these links:
Critical Patch Update - April 2016
Patch Availability - Oracle Database - April 2016
The important change in the April PSU/BP release: the database patch for "Engineered Systems and Database In-Memory 12.1.0.2" luckily got renamed to "Proactive Bundle Patch 12.1.0.2". That is not only a rebranding - it should express that we would like to encourage you to apply the Bundle Patches instead of the PSUs, for the simple reason that the BPs contain optimizer fixes.
In MOS Note 2102148.1 (Patch Set Update and Critical Patch Update April 2016 Availability Document) you'll find section 3.1.4 linking to the database patches. This is the recommended one for Oracle Database 12.1.0.2:
Database Proactive Bundle Patch 12.1.0.2.160419 (Apr2016) - Patch 22899531
Right now it is available for Linux x86-64, zLinux and Intel Solaris only; I'm not sure when the other ports will be released. Please find links to the regular PSUs and to other ports and releases (such as 11.2.0.4, Windows etc.) in the above MOS Note 2102148.1.
This is the list of fixes included in this Bundle Patch:
MOS Note 1937782.1 - 12.1.0.2 Bundle Patches for Engineered Systems and DB In-Memory - List of Fixes in each Bundle
And don't worry about the name - I found out yesterday that not all MOS Notes have yet adopted the new naming convention, renaming "Bundle Patches for Engineered Systems and DB In-Memory" (which was very misleading anyway) to the new "Proactive Bundle Patches" naming. This may take a few additional days, I'd guess. I will download it right now and patch my HOL environment. And as usual, don't forget the most recent version of OPatch (Patch 6880880).

--Mike
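Before applying the new BP it's worth confirming which OPatch version is already in the home (the minimum required version is listed in each patch readme). A quick check, assuming the 12.1.0.2 home path from the HOL environment:

```shell
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lsinventory   # lists the patches already applied to this home
```

If the version shown is older than the readme requires, unzip the latest Patch 6880880 over $ORACLE_HOME/OPatch first.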


Data Pump

Data Pump - Exclude Stats differently for TTS and FTEX

A nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export/Import (credits to Roy and Dean Gagne).

Transport Statistics via a Staging Table
First of all, we always recommend excluding statistics from a Data Pump export, as importing such stats takes far longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases, the Oracle Performance Tuning Guide has a nice tutorial: Importing and Exporting Optimizer Statistics. The basic steps to transport statistics from one database to another quickly and efficiently are:
1. Create a staging table in your source database with DBMS_STATS.CREATE_STAT_TABLE
2. Export your local stats into this staging table using DBMS_STATS.EXPORT_SCHEMA_STATS
3. Export the staging table and import it into your destination database with Data Pump
4. Import the statistics held in the staging table using DBMS_STATS.IMPORT_SCHEMA_STATS
For regular Data Pump exports we always recommend setting:
EXCLUDE=STATISTICS
to avoid performance penalties during impdp. But this does not affect Transportable Tablespaces and Full Transportable Export/Import.

How to exclude Statistics for TTS and FTEX?
For reasons I don't know, the metadata heterogeneous object for "transportable" is different from all of the others. Therefore, in order to exclude statistics for Transportable Tablespaces and Full Transportable Export/Import, you must set:
EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS
instead. See also MOS Note 1517267.1 - TTS Using Datapump Expdp and Impdp Exports Statistics Despite Setting EXCLUDE=STATISTICS - thanks Pete! Keep this in mind during your next migration :-)
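The four steps above can be sketched in SQL and Data Pump commands. Note that the schema name SCOTT, the staging table name MYSTATS, and the dump file/directory names are hypothetical placeholders, not part of the original post:

```sql
-- Step 1+2, on the source database: create the staging table
-- and export the schema's optimizer stats into it
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MYSTATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MYSTATS');
END;
/

-- Step 3, from the shell: move the staging table with Data Pump
-- $ expdp scott tables=MYSTATS dumpfile=mystats.dmp directory=DATA_PUMP_DIR
-- $ impdp scott tables=MYSTATS dumpfile=mystats.dmp directory=DATA_PUMP_DIR

-- Step 4, on the destination database: import the stats
-- from the transported staging table
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MYSTATS');
END;
/
```

The point of the staging table is that the stats travel as ordinary table rows, which Data Pump moves far faster than it can recreate statistics metadata during import.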
Other Useful Articles About Data Pump
30-June-2015: Some Data Pump issues: DBMS_DATAPUMP Import via NETWORK_LINK fails + STATUS parameter giving bad performance
3-July-2012: Consistent Export
10-April-2015: Parallel Index Creation with Data Pump
19-July-2011: How to get the Master Table from Data Pump?
5-December-2011: Exclude Scheduler Jobs from Export?
18-July-2013: Full Transportable Export/Import White Paper

--Mike


Flaws and Pitfalls

RMAN Catalog Upgrade fails - ORA-02296 - error creating modify_ts_pdbinc_key_not_null

This issue got raised to me by a customer I have known for quite a while - all credits go to Andy Kielhorn for digging into the issue and solving it.

Failed RMAN Catalog Upgrade from 11.2.0.4 to 12.1.0.2
The RMAN catalog upgrade:
SQL> @?/rdbms/admin/dbmsrmansys.sql
$ rman CATALOG rman/xxx@rman01
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;
failed with the following sequence of error messages:
error creating modify_ts_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found
error creating modify_tsatt_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found
error creating modify_df_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found
error creating modify_tf_pdb_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found
error creating modify_bs_pdb_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found
Andy also found these bugs in MOS - but they include no helpful information:
Bug 20861957 - ORA-2296 DURING UPGRADE CATALOG TO 12.1.0.1 IN AN 11.2 DATABASE
Bug 19677999 - CATALOG SCHEMA UPGRADE TO 12.1.0.2 FAILED

The Solution
There seems to be a potential inconsistency in the RMAN catalog, introduced when the PDB/CDB mechanisms were added. It does not necessarily happen - but it can happen. The workaround is described in Bug 19677999 (CATALOG SCHEMA UPGRADE TO 12.1.0.2 FAILED):
==> Connect to the catalog schema and clear the tables having null details:
delete bdf where not exists (select 1 from dbinc where dbinc.dbinc_key = bdf.dbinc_key);
delete bcf where not exists (select 1 from dbinc where dbinc.dbinc_key = bcf.dbinc_key);
commit;
==> After clearing the offending rows, the catalog upgrade worked.
But please use this workaround only under Oracle Support's supervision. I document it here to ease your verification. Andy fixed it with:
update <rmancat_owner>.dbinc set PARENT_DBINC_KEY=NULL where (DBINC_KEY) in (select DBINC_KEY from <rmancat_owner>.ts where pdbinc_key is null);
commit;
but please open an SR and point Oracle Support to the bug and the potential workarounds in case you hit this issue.

--Mike
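If you suspect you are hitting this, a sanity check before running the destructive workaround could count the orphaned rows first. This is a hypothetical check of my own, built from the same tables the workaround above touches, and should be run as the catalog owner:

```sql
-- Backup datafile / controlfile rows whose database incarnation no longer exists
select count(*) from bdf
 where not exists (select 1 from dbinc where dbinc.dbinc_key = bdf.dbinc_key);
select count(*) from bcf
 where not exists (select 1 from dbinc where dbinc.dbinc_key = bcf.dbinc_key);

-- Tablespace rows missing the PDB incarnation key that the
-- NOT NULL constraints in the upgrade script would reject
select count(*) from ts where pdbinc_key is null;
```

Non-zero counts would indicate the inconsistency described in the bug; zero counts everywhere suggest your failed upgrade has a different cause and deleting rows would be the wrong move.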

