Monday Feb 20, 2017

Workshop in Vancouver - Venue has changed

I arrived safely in Canada, greeted by the longest immigration line I've ever seen - but luckily the immigration process itself is very efficient, so it didn't take over an hour.

Vancouver Football Stadium

Vancouver seems to be a very nice city - only the weather could be a bit better ;-) I'm looking forward to the "Upgrade / Migrate / Consolidate to Oracle Database 12.2 and the Cloud" workshop tomorrow. And just in case you haven't been alerted yet: there was a venue change over a week ago, in order to accommodate all registered participants, from the Oracle office (the previous location) to:

Hyatt Regency Vancouver
Grouse Room, 34th floor
655 Burrard Street
V6C 2R7

Get Directions

Start time will be 9:00am on Tuesday, Feb 21, 2017 - please arrive a bit earlier for registration.

Looking forward to seeing you soon :-)


Friday Feb 17, 2017

Client Certification for Oracle Database

I've received a question about client certification for Oracle Database the other day.
And this reminded me of this very helpful MOS Note:

It covers client/server interoperability back to the Oracle 9.2 days and is a very helpful resource.

Client Certification Oracle Database

And don't forget that sometimes the client software may need a patch as well. A customer I exchange emails with on a regular basis had just upgraded the database server and saw several strange errors with the JDBC client - despite the fact that they were already using the most recent JDBC client.

ORA-904, ORA-923 and ORA-920 were signaled, among other errors.

It turned out that the JDBC client needed some patch treatment with merge patch 21623553 on top of the client. It got combined with some other useful fixes into merge patch 24012252 for the JDBC client:

JDBC Patch for Bug# 24012252 for Generic Platform 

Afterwards the errors were gone. For further JDBC client patch recommendations please see:

Furthermore, don't forget that there's the Instant Client as well, which is easy to deploy and downloadable from OTN:

Oracle Instant Client Download OTN
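By the way, if you are unsure which client drivers and versions actually connect to your database, the V$SESSION_CONNECT_INFO view can help. A minimal sketch (run with appropriate privileges; the output naturally depends on which sessions are currently connected):

```sql
-- List the distinct client drivers and versions of the current sessions
SELECT DISTINCT client_driver, client_version
  FROM v$session_connect_info
 WHERE client_version IS NOT NULL;
```

This is a quick way to verify whether old JDBC or OCI clients are still connecting after a server upgrade.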


Thursday Feb 16, 2017

The OJVM Patching Saga - and how to solve it - Part V

Related Posts on
"The OJVM Patching Saga - and how to solve it":

MOS Note released explaining patching without STARTUP UPGRADE mode

An important note regarding OJVM rolling patching has been published on My Oracle Support:

MOS Note: 2217053.1
RAC Rolling Install Process for the "Oracle JavaVM Component Database PSU" (OJVM PSU) Patches

"Beginning with the Jan2017 OJVM PSU patchset for and for, this document defines a few specific situations where the OJVM PSU patchset can be postinstalled into each database while the database remains in unrestricted "startup" mode. This will allow a "Conditional Rolling Install" ability for the OJVM PSU patchsets for and for"


Monday Feb 13, 2017

Oracle Database 12.2 for Exadata/SuperCluster available

Oracle Database 12.2 on-premises for Exadata and SuperCluster is now available for download from Oracle eDelivery (Software Delivery Cloud).

Connect to eDelivery and login:

Search for:

"Oracle Database" and mark "Oracle Database Enterprise Edition ..."


Click on "Select Platforms" and select "Linux x86-64", then hit "Continue":


Once you have accepted the license agreement you'll be able to download the software:


Please be aware that the software can be installed on Oracle Exadata and Oracle SuperCluster ONLY. To take advantage of the full potential of Oracle Database 12c Release 2, you should upgrade to the latest Exadata Software release before upgrading databases to 12c Release 2. The Exadata Software release supporting Smart Scan functionality for Oracle Database 12c Release 2 is available via OSDC.

For the release dates of Oracle Database 12.2 on-premises for other platforms please see MOS Note:742060.1.


Wednesday Feb 08, 2017

DBMS_QOPATCH does not work in PDBs (right now)

Thanks to Murthy who commented on this blog post and Jeannette Holland (SimCorp) who opened an SR resulting in an ER.

DBMS_QOPATCH in Multitenant

DBMS_QOPATCH will deliver useful information about installed patches only when executed within the CDB$ROOT. It has been designed this way for security reasons in Oracle Database 12.1 but I can easily see a need to check for installed patches within a PDB as well.


I "borrowed" this test case from Jeannette's SR:


-------- ------ ---------- ---------- ------------------
CDB$ROOT      1 3424772713 1          47C8525C0DFE49...
PDB$SEED      2 3983775695 3983775695 E6204BB1F6EB4F...
MYPDB1        3 7270044002 7270044002 B975668B860049...
MYPDB2        4 1943363979 1943363979 BCD7AAFAF3F641...

In a PDB:

SQL> ALTER SESSION SET container = myPDB;

Session altered.

SQL> select * from OPATCH_XML_INV ;
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04080: directory object OPATCH_LOG_DIR not found

no rows selected

SQL> select dbms_qopatch.get_opatch_install_info from dual;
ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at "SYS.DBMS_QOPATCH", line 1986
ORA-06512: at "SYS.DBMS_QOPATCH", line 133

In the CDB:

SQL> ALTER SESSION SET container = cdb$root;

Session altered.

SQL> select * from OPATCH_XML_INV ;

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

SQL> select dbms_qopatch.get_opatch_install_info from dual;



There's no solution available right now, and this behavior does not seem to be documented yet. The SR resulted in an (unpublished) Enhancement Request. In a PDB, the following workaround may help:

select patch_id, patch_uid, version, action, action_time, status, description from dba_registry_sqlpatch;

But this is not as fancy and easy to deal with as an API call to a DBMS package.

I tested in Oracle Database - and everything seems to work fine there :-)

SQL> create pluggable database PDB3 admin user adm identified by adm file_name_convert=( '/u02/oradata/CDB2/pdbseed', '/u02/oradata/CDB2/pdb3');

Pluggable database created.

SQL> alter pluggable database pdb3 open;

Pluggable database altered.

SQL> alter session set container=pdb3;

Session altered.

SQL> select dbms_qopatch.get_opatch_install_info from dual;


SQL>   select * from OPATCH_XML_INV ;

<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Inv
entoryInstance> <ora

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Querayable Patch Interface 1.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB3                           READ WRITE NO


Friday Feb 03, 2017

February 2017 version of PREUPGRD.SQL is available

The February 2017 version of preupgrd.sql (including the utluppkg.sql package) is available for download from MOS Note:884522.1.

New Preupgrd.sql - MOS Note 884522.1

If you are going to upgrade your database, please replace your current preupgrd.sql/utluppkg.sql - either in ?/rdbms/admin or in the location on the source system you've copied it to - with the new version.
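As a quick reminder, a minimal sketch of running the check against the source database (the copy location /home/oracle/preupgrade is an assumption - adjust it to wherever you placed the downloaded scripts):

```sql
-- Connected to the SOURCE database as SYSDBA,
-- run the freshly downloaded preupgrade script
SQL> @/home/oracle/preupgrade/preupgrd.sql

-- Then review the generated preupgrade log and fixup scripts
```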


Wednesday Feb 01, 2017

New SPFILE parameters in Oracle Database with July 2016 (and newer) PSU/BP

New Parameters in Oracle Database with July 2016 PSU/BP

By following an internal discussion and checking parameter changes between Patch Set Updates (PSU) and Proactive Bundle Patches (BP), I learned that we introduced two new SPFILE parameters with the July 2016 PSU and BP. One is documented in the patch readme, the other one can right now only be found in the Oracle Database manual:

The Oracle 12.2 documentation about ALLOW_GROUP_ACCESS_TO_SGA, the parameter which does not appear in the Oracle 12.1 documentation right now, says:

ALLOW_GROUP_ACCESS_TO_SGA controls group access to shared memory on UNIX platforms.

The default value is FALSE, which means that database shared memory is created with owner access only. In Oracle Database releases prior to Oracle Database 12c Release 2 (12.2), database shared memory was created with owner and group access.

When this parameter is set to TRUE, database shared memory is created with owner and group access. This behavior grants permissions to DBAs to manage shared memory outside the database, but also allows DBAs to read and write to shared memory, which may not be desirable for certain installations.

So there's a tiny correction required:
It should say "prior to Oracle Database July 2016 PSU/BP".

The ENCRYPT_NEW_TABLESPACES parameter came in for cloud deployments and is documented in the description of the July Proactive BP: 21281607E - Transparently encrypt tablespace at creation in Cloud (adds "encrypt_new_tablespaces").


Why does ALLOW_GROUP_ACCESS_TO_SGA appear in Oracle Database

Simple reason: it makes Oracle Database more secure. The default for this parameter is FALSE - which actually changes behavior, but may not affect you at first sight. And that's why I blog about it.

You will recognize that the database's shared memory segments are now created with permission "600" - whereas it was "640" before. See my example of a database with the January 2017 BP in place:

$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status     
0x00000000 23298063   oracle     600        2932736    76                     
0x00000000 23330832   oracle     600        1040187392 38                     
0x00000000 23363601   oracle     600        5455872    38                     
0xc8969114 23396370   oracle     600        20480      38      


whereas my database runs with different permissions:

$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status    
0x00000000 22872084   oracle     640        12582912   21                     
0x00000000 22904853   oracle     640        721420288  21                     
0xd41b1c5c 22937622   oracle     640        2097152    21

Does this affect the connection of your applications to the database?
No, of course not.

It only has an effect if you try to access the SGA from the OS level, i.e. attaching to the shared memory segment. With the old behavior, an OS user in the same group could attach to and read from the SGA. With the new "600" protection only the OWNER can attach to it - and read out the SGA.

This is the standard behavior in Oracle Database 12.2 and onward. And it has been backported to the July 2016 PSU and Proactive Bundle Patches - which are cumulative, i.e. it is included in all subsequent PSUs and BPs as well.
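In case a tool on your system legitimately needs group read access to the SGA (e.g. external shared memory monitoring), the old behavior can be restored. A minimal sketch - ALLOW_GROUP_ACCESS_TO_SGA is a static parameter, so a restart is required:

```sql
-- Re-enable group access to the SGA (static parameter, needs a restart)
SQL> ALTER SYSTEM SET allow_group_access_to_sga=TRUE SCOPE=SPFILE;

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
```

Afterwards ipcs -m should show the segments with permission "640" again.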


Monday Jan 30, 2017

Where is the Jan 2017 PSU for Oracle Database

Where is it?

This question reached me a week ago:
"Where is the January 2017 PSU for Oracle Database"

"What's the deal?" was my first thought - just go and download it from MOS.
Well, not really ...

Where to download Proactive Bundle Patches, Patch Set Updates and Patch Sets?

My favorite note is:
MOS Note:1454618.1
Quick Reference to Patch Numbers for PSU, SPU (CPU) and BPs

It gives me access to all the download links for the patch bundles I'm looking for.

But for this release there's only "NA" (not available) stated:

And a very small asterisk "m":


Why is there no January 2017 PSU for Oracle Database

The comment says it all - but I was looking for a more precise explanation. And Roy knew where to look (thanks!):

  • MOS Note: 854428.1
    Patch Set Updates for Oracle Products

    Section 3 (Patch Sets Updates Lifecycle) says:

    • PSUs will reach a plateau of non-security content as the version stabilizes. Once this is reached, there will be a tapering off of non-security content. Oracle expects this plateau to be reached with the third or fourth PSU.
    • PSU content will be primarily security-related once the next patch set in the series is released.

So yes, this is possible and somehow documented.
As there are no security fixes for Oracle Database this quarter, no SPU or PSU has been released.

Further Information?

Please see the blog post of Rodrigo Jorge from Enkitec with more details:

Obrigado, Rodrigo!


Friday Jan 27, 2017

Release Dates Oracle Database on-prem - Extended Support Waiving

Last night, the most important MOS Note:742060.1 got updated with the planned release dates for Oracle Database on-premises.

In addition, the dates for Waived Extended Support for Oracle Database got extended as well.

Please see:

In summary:

  • Oracle Database for Exadata and SuperCluster is supposed to be released soon
  • Oracle Database for Intel Linux x86 and Solaris platforms (SPARC and Intel) is supposed to be released soon after the Exadata/SuperCluster release


  • The Free (Waived) Extended Support for Oracle Database will be extended until end of December 2018 (was end of May 2017 before)

  • The first year of Extended Support for Oracle Database will be waived (free) as well until end of July 2019

Oracle 12.2 12.1 Release Map

For further details please see the MOS Note:742060.1.


Thursday Jan 26, 2017

Gather SQL Statements from AWR and Cursor Cache without need for Diag and Tuning Packs

When we talk about database upgrades and migrations the most important (and unfortunately time-/resource consuming) activity is testing.

But often testing resources are limited or, even worse, don't exist. I've worked with customers where we migrated a core EBS system from AIX to Linux - but got only a six-year-old Tru64 box for testing purposes (where the recompilation took 20x as long as it did later on the production environment). Or the classic one: the database is too big, thus we test with only 10% of the data.

I know that often your hands are tied. Everybody wants a 1:1 duplicate test system - but not everybody has one.

And here our DBaaS Cloud is coming into play.
Why? Because you can use one of our best and strongest testing tools, SQL Performance Analyzer (part of the Real Application Testing Pack), without the need to license RAT, as it is included in the High and Extreme Performance Cloud offerings.

RAT in the Cloud - without the need to license SPA SQL Performance Analyzer

Even better:
If you have an Enterprise Edition database license on source you won't even need Diagnostic and Tuning Pack licenses to offload your statements from your local environment's AWR and Cursor Cache.

SQL Tuning Sets can also be accessed by way of database server APIs and command-line interfaces. The following subprograms, part of the DBMS_SQLTUNE package, provide an interface to manage SQL Tuning Sets and are part of Oracle Database Enterprise Edition:
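As a hedged sketch (the subprograms named below - CREATE_SQLSET, CAPTURE_CURSOR_CACHE_SQLSET, SELECT_WORKLOAD_REPOSITORY and LOAD_SQLSET - are DBMS_SQLTUNE calls; see the documentation link further down for the complete, authoritative list), capturing statements into a SQL Tuning Set could look like this:

```sql
-- Create a SQL Tuning Set and fill it from the cursor cache,
-- polling every 60 seconds for 5 minutes
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'MY_STS',
    time_limit      => 300,
    repeat_interval => 60);
END;
/

-- Alternatively, load the set from AWR (the snapshot range is an assumption)
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap => 100, end_snap => 110)) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MY_STS', populate_cursor => cur);
END;
/
```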


But please note:
Diagnostic and Tuning Packs are still available and key to performance management and diagnosability on your systems. If you'd like to use the above packages and evaluate your statements on-premises (locally), for instance with a scheduled tuning task with the SQL Tuning Advisor, you will still need a license for the Diagnostic and Tuning Packs. If you plan to use the SQL Performance Analyzer locally, you will need to get a license for the Real Application Testing Pack first.

See the documentation link below for the exact description (scroll down to SQL Tuning Sets):

For a short example of how to capture your SQL statements directly from AWR, please see this fairly old blog post:

But I will publish a complete example within the next few days, covering not only the capture from AWR and the cursor cache, but also the packaging into staging tables, the unpacking - and of course the SPA runs.


PS: I also checked the DBA_FEATURE_USAGE_STATISTICS view, and I don't see any offending entries when you use the packages/calls listed above.

Wednesday Jan 25, 2017

Restarting a failed Database Upgrade with DBUA 12.2

In my previous blog post I showcased how to restart a failed database upgrade on the command line - before Oracle Database 12c, with Oracle Database 12.1.0.x, and with the new Oracle Database 12.2.0.x:

Here I'd like to show the new capability of the Database Upgrade Assistant (DBUA) to restart an upgrade - a functionality the DBUA was missing until now. Please note that you can always go back to the command line, no matter with which version you have used the DBUA.

I won't describe an upgrade with the DBUA in this blog post as this is showcased in the documentation already:

Starting the database upgrade with DBUA

I will upgrade the UPGR database well known from our Hands-On Lab.

DBUA Database Selection Oracle 12.2

And please don't put your credentials in the fields below - if you are logged in as the oracle user, this will lead to failure and drive you crazy ...

Another thing which puzzles me:
I still have to execute the olspreupgrade.sql script from the newer (in my case the 12.2) home by myself. I wish the DBUA would do this for me, as I'll have to open an xterm, set my environment and type in a very long path name pointing to the new 12.2 home in order to execute this script in my source environment.
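For reference, a minimal sketch of that manual step, run in the SOURCE environment (the 12.2 home path below is an assumption - adjust it to your installation):

```sql
-- Execute the OLS/DV preupgrade script from the NEW 12.2 home
-- while connected to the SOURCE database as SYSDBA
SQL> @/u01/app/oracle/product/12.2.0.1/rdbms/admin/olspreupgrade.sql
```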

DBUA Database Selection Oracle 12.2

Ok, let's kick off the upgrade:

Oracle 12.2 DBUA Database Upgrade

The progress bar is very imprecise in relation to the actual duration - you can more or less ignore it.

And - very sad - the Alert and Activity Monitor buttons disappeared - but they may reappear in a later release of the DBUA.

The Error Scenario

It's always fun to kill pmon finding out how cool this database is ;-) It survives the deadly attack :-) Of course it does ... it's the Oracle Database ;-)

kill -9 <pid of the pmon background process>


The DBUA recognizes the failure

Even though the DBUA recognized the failure quite quickly, it still tries to complete the upgrade - which of course results in a ton of errors. It just means that you'll have to wait until the DBUA has "progressed" the upgrade till the end - and DON'T CLOSE THE DBUA, as otherwise you'll lose the ability to RETRY.

DBUA - upgrade failure

You'll see the error count going up until the DBUA has reached the "end" of the (failed) upgrade.

DBUA Oracle 12.2 - upgrade failure scenario 2

And then it displays the RETRY button:

DBUA Oracle 12.2 - upgrade failure scenario


Rerun the Database Upgrade

Once you hit RETRY the DBUA will try to resolve the situation - in my case it starts up my source database in STARTUP UPGRADE mode again and then processes the upgrade using the -R option of catctl.pl (described in the previous blog post).

DBUA Oracle 12.2 - upgrade failure scenario

You'll also find a new set of logfiles in $ORACLE_BASE/cfgtoollogs/dbua/ subdirectories - indicated with a number (here: 1) and an "R" - showing that these are the first restart attempt's logs:

Logfiles from DBUA rerun 12.2


Tuesday Jan 24, 2017

Restarting a failed Database Upgrade with catctl.pl

What if ...

What if a database upgrade fails in between? First of all you should have enabled one of the common protections for issues happening during an upgrade:

  • Guaranteed Restore Point to Flashback Database
  • Restore an RMAN Online Backup
  • Restore a partial Offline Backup

See the Fallback section of our slides for further details.

But actually there are ways to restart the upgrade without the need to fall back in case of a non-destructive incident, such as running out of archivelog space.

Restart the Upgrade before Oracle Database 12c

Before Oracle Database 12c the simplest way to restart an upgrade was to rerun catupgrd.sql from SQL*Plus, making sure the database is back in upgrade mode. Upgrade (and downgrade) scripts are designed to run again ... and again ... if necessary. Even if you kicked off the upgrade with the Database Upgrade Assistant (DBUA), which by itself is not able to rerun the upgrade, you can invoke the command line upgrade and rerun it without the need for a restore.

SQL> startup upgrade

SQL> spool /home/oracle/catupgrd.log

SQL> @?/rdbms/admin/catupgrd.sql

This will restart the upgrade from the beginning and rerun it again.

Restart the Upgrade in Oracle Database 12.1.0.x

With this release we introduced the parallel upgrade utility, catctl.pl. The database needs to be in startup upgrade mode again, but the upgrade will be driven from the command line, with the Perl tool running the upgrade with a maximum of 8 parallel workers. And if the upgrade fails non-destructively for whatever reason, you can invoke catctl.pl from the phase where it stopped, using the -p option to specify the phase number.

In the following example the upgrade has been stopped in phase 100 (in my example by a CTRL-C):

*********** Upgrading Misc. ODM, OLAP **********

Serial Phase  #:95 [UPGR] Files:1 Time: 0s

**************** Upgrading APEX ****************

Restart Phase #:96 [UPGR] Files:1 Time: 0s

Serial Phase  #:97 [UPGR] Files:1 Time: 1s

Restart Phase #:98 [UPGR] Files:1 Time: 0s

*********** Final Component scripts ***********

Serial Phase  #:99 [UPGR] Files:1 Time: 0s

************* Final Upgrade scripts ************

Serial Phase #:100 [UPGR] Files:1   ^[c

Now I can simply restart it from this phase, without the need to rerun all the previous steps:

Make sure the database is in STARTUP UPGRADE mode. Then invoke catctl.pl from ?/rdbms/admin:

$ORACLE_HOME/perl/bin/perl catctl.pl -p 100 -l /home/oracle/ catupgrd.sql

************* Final Upgrade scripts ************

Serial Phase #:100 [UPGR] Files:1 Time: 142s

********** End PDB Application Upgrade *********

Serial Phase #:101 [UPGR] Files:1 Time: 1s

******************* Migration ******************

Serial Phase #:102 [UPGR] Files:1 Time: 60s

Serial Phase #:103 [UPGR] Files:1 Time: 0s

Serial Phase #:104 [UPGR] Files:1 Time: 71s

***************** Post Upgrade *****************

Serial Phase #:105 [UPGR] Files:1 Time: 30s

**************** Summary report ****************

Serial Phase #:106 [UPGR] Files:1 Time: 1s

Serial Phase #:107 [UPGR] Files:1 Time: 0s

Serial Phase #:108 [UPGR] Files:1 Time: 38s

This works also when the upgrade has been started with the DBUA and failed. The DBUA is unable to rerun the upgrade, but you can always invoke the command line upgrade by looking at the logfiles the DBUA produced and finding the phase where it stopped.

Restart the Upgrade in Oracle Database 12.2.0.x

In Oracle Database 12.2 we have again improved the rerun ability of the upgrade - now you don't have to know the phase where it stopped, as we introduced the -R option for catctl.pl to rerun the upgrade automatically after the last completed phase.

$ORACLE_HOME/perl/bin/perl catctl.pl -R -l /home/oracle/ catupgrd.sql

See the documentation for further details:

Rerunning Upgrade for Oracle Databases - Oracle Database Upgrade Guide 12.2

And the DBUA has been improved as well. It can now rerun a failed upgrade, but only from within the same DBUA session (so please don't close it). And of course you can always fall back to the command line and complete the upgrade from there.

See a separate blog post about:

Rerunning the Upgrade with the DBUA in Oracle Database 12.2


Wednesday Jan 18, 2017

Default Changes SPFILE Parameters - Oracle 12.2

Parameters in Oracle Database - part 4 of the series:

Roy and I did a comparison of the default parameter settings across Oracle Database releases. And some changes are quite interesting - of course the memory-driven parameters are left out of this list.

Color RED marks a change between releases.
The databases were all on OL (Oracle Linux) 6.8 and located on a file system, not in ASM - therefore certain values may differ when the database is located in ASM and/or on a different OS port.

Parameter Oracle Oracle Oracle.
audit_sys_operations FALSE TRUE TRUE
compatible 12.2.0
control_file_record_keep_time 7 7 30
dml_locks 616 1416 2076
filesystemio_options NONE NONE setall
job_queue_processes 1000 1000 4000
object_cache_optimal_size 102400 102400 10240000
parallel_max_servers 48 80 80
parallel_min_servers 0 8 8
parallel_servers_target 64 32 32
parallel_adaptive_multi_user TRUE TRUE FALSE
pre_page_sga FALSE TRUE TRUE
resource_limit FALSE TRUE TRUE
sec_max_failed_login_attempts 10 3 4
sec_protocol_error_trace_action CONTINUE TRACE LOG
spatial_vector_acceleration FALSE FALSE TRUE
sql92_security FALSE FALSE TRUE


Tuesday Jan 17, 2017


PDB Lockdown Profiles in Oracle Database 12.2

In Oracle Database 12.2 (available in the Oracle DBaaS Cloud) there's a new functionality called "PDB LOCKDOWN PROFILES". Actually the parameter already came in earlier - undocumented and not functional (see here).

PDB Lockdown Profiles

PDB Lockdown Profiles are meant as a granular way to restrict or enable access - mostly under the aspect of using the database in a highly shared environment while still requiring security. The idea is to embed restrictions on top of a grant, i.e. taking away certain things. As an example, you could allow a user logged into a certain PDB to change only the optimizer_mode and cursor_sharing parameters when doing an ALTER SYSTEM.

The glossary in the Concepts Guide says:

A security mechanism to restrict operations that are available to local users connected to a specified PDB.

And here is a good overview in the Concepts Guide and the Security Guide for details:

Lockdown Profiles in Oracle Multitenant

Lockdown Profiles - Create, Drop, Alter


Let's start with a fresh PDB:

SQL> create pluggable database PDB2 admin user adm identified by adm file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb2');

First we need to create a lockdown profile:

SQL> create lockdown profile P1;

Then we can ALTER the profile and allow only changes to optimizer_mode and cursor_sharing via ALTER SYSTEM:

SQL> alter lockdown profile P1 disable statement=('ALTER SYSTEM') clause=('SET') OPTION ALL EXCEPT=('optimizer_mode','cursor_sharing');

And finally, the PDB Lockdown Profile needs to be enabled:

SQL> alter system set PDB_LOCKDOWN=P1;


SQL> show parameter pdb_l

NAME          TYPE    VALUE
------------- ------- ------
pdb_lockdown  string  P1

Where the fun begins ...

Now let's connect with my default SYS user - SYS per definitionem is a common user - and switch to my PDB2:

$> sqlplus / as sysdba

SQL> alter session set container=PDB2;

SQL> alter system set sql_trace=TRUE;
ERROR at line 1:
ORA-01031: insufficient privileges

Aha! Ok, so let's try ...

SQL> alter system set cursor_sharing='FORCE';
System altered.

SQL> alter system set optimizer_mode='FIRST_ROWS_10';
System altered.

Ok, this works. But can I still change SQL_TRACE on the session level? Of course I can ...

SQL> alter session set SQL_TRACE=TRUE;
Session altered.

Makes sense as I restricted only ALTER SYSTEM but not ALTER SESSION.
So let's do this as well:

SQL> alter session set container=cdb$root;
Session altered.

SQL> alter lockdown profile P1 disable statement=('ALTER SESSION') clause=('SET') OPTION ALL EXCEPT=('optimizer_mode','cursor_sharing');
Lockdown Profile altered.

Another tiny exercise taken directly from the documentation:



SQL>   alter system set pdb_lockdown='MEDIUM';
  alter system set pdb_lockdown='MEDIUM'
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set sql_trace=true;
alter system set sql_trace=true
ERROR at line 1:
ORA-01031: insufficient privileges

And of course you can add more things to an existing profile, such as disabling certain features:

SQL> alter session set container=cdb$root;
Session altered.

Lockdown Profile altered
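A hedged example of what such a feature restriction could look like - disabling the XDB_PROTOCOLS feature bundle, the same bundle that shows up as disabled in the MEDIUM profile:

```sql
-- Disable a feature bundle in an existing lockdown profile
SQL> alter lockdown profile P1 disable feature=('XDB_PROTOCOLS');
```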


Which profiles do exist and what's defined?

First of all, the parameter PDB_LOCKDOWN is modifiable at the PDB level, meaning you can have different profiles for different PDBs. But as far as I tested, only one profile can be active and enabled per PDB.

You can look up existing profiles and their contents:

SQL>  select profile_name, rule_type, rule, clause, clause_option, status from DBA_LOCKDOWN_PROFILES order by 1;

-------------- ---------- -------------- ---------- --------------- -------
MEDIUM         STATEMENT  ALTER SYSTEM                              DISABLE
MEDIUM         FEATURE    XDB_PROTOCOLS                             DISABLE
P1             STATEMENT  ALTER SESSION  SET                        DISABLE
P1             STATEMENT  ALTER SYSTEM   SET                        DISABLE
PRIVATE_DBAAS                                                       EMPTY
PUBLIC_DBAAS                                                        EMPTY
SAAS                                                                EMPTY

11 rows selected.



This is a very powerful feature - but in the future it can be a bit tricky to find out why certain things don't work. The ORA-1031 error may not always guide you in the right direction.

One thing which puzzles me a bit:
We always speak about simplifying things and easing administration. And with PDB Lockdown Profiles you can highly complicate things. Or tease your colleagues :-) (please don't, ok?!). Just don't over-complicate things ... :-)


Monday Jan 09, 2017

DOAG Noon2Noon - Upgrade/Multitenant - 2.-3.2.2017

++++++ This is a German-only event ++++++

DOAG Noon2Noon

I'm already very excited.

On February 2, 2017, the DOAG Noon2Noon on "Upgrade, Migration and Multitenant" will take place at the Intercity Hotel in Mainz - from 12:00 noon until 12:00 noon the following day. The focus is on hands-on practice, garnished with short talks.

In this practice-oriented "Noon2Noon" event we want to tackle the challenges of upgrading to Oracle 12c, or to Oracle 12c Release 2 (if it has been released for on-premises databases by then). The unusual format aims at being solution-oriented, and in the interactive sessions the motto is: roll up your sleeves and join in!
From noon to noon we reduce the slide battles to a minimum. Our goal is that by the end of the event every participant has successfully performed a database upgrade from version 11.2 to 12.1 with the Multitenant architecture. We will be supported by Ernst Leber (MT AG), Mike Dietrich (ORACLE Corporation), Martin Bach (Enkitec/Accenture) and Uwe Hesse (ex-Oracle University).

You should definitely bring your laptop and, above all, a database - either installed locally or, even easier, in an Oracle VBox or VMware environment - so that we can perform the upgrade or a migration together, live and on site, and then convert the database to Oracle Single/Multitenant.

Please register via the DOAG page:


Here's a little motivation in advance:

See you then :-)


Wednesday Dec 21, 2016

APEX is in CDB$ROOT again - Journey to the Cloud VII

Journey to the Oracle Cloud

Well ... it's been a while ... but I would like to continue my journey to the cloud ...

What happened so far on my Journey to the Cloud?

Oracle Database in the Cloud

Since November 2016, Oracle Database 12.2 has been available in the Oracle DBaaS Cloud. And I received this question in my inbox yesterday:

I have a customer who wants to migrate Apex 4.2 applications to DBCS and ORDS.war to Weblogic on Compute.

I recently went through your blog on removing Apex from CDB Root. The customer is planning to do something similar but had questions on repercussions of doing so in Oracle Public Cloud.

What are the factors that need to be considered. Also how would DBCS patching work in this scenario.


Is APEX in CDB$ROOT again?

I hadn't checked this yet, as we'd had a very productive conversation with the APEX folks a while back. And I'm 100% sure that the APEX group wasn't involved in this decision, as they clearly recommend in the doc to NOT HAVE APEX installed in the CDB$ROOT ;-)
Ok, I did connect to our Cloud environment and ... voila ...


SQL> select r.comp_id, r.version, c.name from cdb_registry r, v$containers c where r.con_id=c.con_id and r.comp_id='APEX' order by 3;

---------- --------------- ------------

Ouch ...

The presence of APEX in the CDB$ROOT may have to do with the Oracle DBaaS Monitor Console. This is just an assumption, but when I removed APEX in my old cloud deployment I had to clean up the DBaaS Monitor as well.


See my earlier blog post for my experience a few months back.


Well, having APEX in the CDB$ROOT is still a brilliant [IRONY!] idea. As soon as you start unplug/plug operations with APEX in the PDB only or with a different APEX version you are asking for trouble.

See this blog post for the potential pitfalls:

Which options does the customer have assuming that his APEX 4.2 application is in a non-CDB?

  • Upgrade APEX locally before migrating the database to the DBaaS cloud
    This would be the easiest workaround if it didn't involve an application software upgrade. Through the Oracle glasses this looks simple - but from a customer's and application developer's standpoint it isn't trivial, as it will most likely involve application testing
  • Export the APEX application and import it
    I haven't done this myself, but first of all, with APEX 4.2 (or below) you must take care to move the image files as well - and you'll have to move the data, too. And, of course, you won't end up on APEX 4.2 but on APEX 5.0, so the above mentioned application upgrade will hit you as well. I don't see any benefit over solution 1.
  • Remove APEX from the DBaaS Cloud deployment's CDB$ROOT
    This is - in my humble opinion - the only viable solution here if the customer can't upgrade APEX in the current environment for whatever reason. But this will most likely remove the DBaaS Monitor as well. I can live without it but I know that it offers a lot of good stuff especially when dealing with encrypted tablespaces which is otherwise hard to handle on the command line. The good part of this solution is the freedom and flexibility you'll get once APEX is removed from the CDB$ROOT for unplug/plug operations in the future.
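The actual removal in option 3 is done with the apxremov.sql script shipped in the Oracle home - a minimal sketch, assuming a default APEX installation under $ORACLE_HOME/apex:

```sql
-- Sketch: remove APEX from the CDB$ROOT (assumes $ORACLE_HOME/apex exists).
-- Connect as SYSDBA to the CDB$ROOT first:
SQL> @?/apex/apxremov.sql
```

Afterwards APEX can be installed locally into only those PDBs that actually need it.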

Finally, regarding patching:
I don't see any issues. And the same for a future upgrade as we decouple APEX upgrades from the database upgrade with Oracle Database 12.2.0.


Thursday Dec 08, 2016

Create a database with NON-DEFAULT Time Zone

One requirement of Transportable Tablespaces (and of course Full Transportable Export/Import as well) is to have identical database character sets AND identical time zone settings.


Source database has a lower time zone setting than the default target database in the destination home.

Let's assume you'd like to move an Oracle database off IBM AIX and migrate it to Oracle on an Exadata, meaning Oracle Linux 6. The source time zone version - if it has never been upgraded - would be TZ V14, whereas the new Oracle database you'd create would get TZ V18 by default.

Time Zone Setting Oracle Database

Solution 1 - Upgrade Time Zone version in Source

You could now "simply" upgrade the time zone version in the source database, meaning you'd apply the time zone patch matching the default time zone of the destination database home (see MOS Note 412160.1 for the list of available time zone patches). Then you would have to adjust the data with the scripts provided in MOS Note 1585343.1.

Or you could go a step further and apply the most recent available time zone patches to both homes which I'd recommend in this case. Then you'll adjust the source database before initiating the transport steps including the copy operations.

I have put "simply" in quotation marks on purpose, as often this may not be an option. Changing the source's time zone version will incur downtime and a restart of the database. Furthermore, in case somebody has a very high amount of time zone dependent data, it may take a while.
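The adjustment in MOS Note 1585343.1 is built around the DBMS_DST package. As a rough outline (the note wraps these steps in scripts with additional checks - treat this as a sketch, not a runbook):

```sql
-- Outline of a DST data upgrade to a newer time zone version, e.g. V18.
-- The database must be restarted in UPGRADE mode first:
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
EXEC DBMS_DST.BEGIN_UPGRADE(new_version => 18);
SHUTDOWN IMMEDIATE
STARTUP
-- Then adjust the TIMESTAMP WITH TIME ZONE data:
DECLARE
  num_fail PLS_INTEGER;
BEGIN
  DBMS_DST.UPGRADE_DATABASE(num_fail);
  DBMS_DST.END_UPGRADE(num_fail);
END;
/
```

This is exactly why the downtime mentioned above cannot be avoided in solution 1.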

Solution 2 - Create Destination Database with lower Time Zone version

Therefore I'd consider the second solution as the more interesting and practical one: Create the destination database to transport to with a different time zone version than the default. In Oracle Database the default would be TZ V18.

Step 1: Create database creation scripts with DBCA

I haven't tried whether it would work to let the DBCA create a custom database directly, but I don't trust Java applications using my environment variables, so I prefer the script version. It does not work to use one of the seed databases in DBCA (DWH, OLTP and GENERAL PURPOSE) as those all use the default time zone version of the release without an option to override it.

Step 2: Set environment variable ORA_TZFILE

The time zone files are located in $ORACLE_HOME/oracore/zoneinfo. By using the environment variable ORA_TZFILE we can override the default of using the highest numbered time zone file in this directory:

$ export  ORA_TZFILE=/u01/app/oracle/product/
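Expanded, with the home path and file name as assumptions (pick the file matching the time zone version you need from the zoneinfo directory):

```shell
# Hedged example - the Oracle home path and the exact file name are
# assumptions; check $ORACLE_HOME/oracore/zoneinfo for the files you have.
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORA_TZFILE=$ORACLE_HOME/oracore/zoneinfo/timezlrg_14.dat
echo "$ORA_TZFILE"
```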

Step 3: Create the database with the "sh" script

DBCA has created the "create database" scripts, one of them named <SID>.sh. Start the creation by running:

$ ./<SID>.sh

Step 4: Verify the correct time zone setting of your new database

The following query reflects the current time zone version of this database:

SQL> select value$ "TZ Version" from SYS.PROPS$ where name='DST_PRIMARY_TT_VERSION';

In my example the result will be 14 now instead of 18 (the default for an Oracle database).
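As an alternative to poking SYS.PROPS$ directly, the documented V$TIMEZONE_FILE view reports the version too, and also reveals which file in the zoneinfo directory is in use:

```sql
SQL> SELECT filename, version FROM v$timezone_file;
```

With the setup above it should report the version 14 file.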


Tuesday Dec 06, 2016

Upgrade to Oracle Database 12.2 - Slides are available

The workshops in Brussels and Utrecht were awesome with great audience - I enjoyed both events a lot. And thanks for your patience with my voice and the microphone issues we've had in Brussels before lunch.

I promised the workshop slides - and here they are, ready for download:

Upgrade, Migrate and Consolidate to Oracle Database 12.2 and the Cloud

Slides Oracle 12.2 Upgrade Migrate Consolidate

Please be aware: it's the first drop of the slides - not everything has been updated yet for Oracle Database 12.2, and of course there are no 12.2 customer examples in the slides yet. But as usual, if you plan to go live early on Oracle Database 12.2 - in the Oracle Cloud or as soon as it's available on-premises - please get in touch with me, as we are looking for future reference customer cases.

--Mike & Roy

Thursday Dec 01, 2016

UKOUG TECH 16 is coming - and I'm speaking

Uhhh ... time flies. I look into my calendar - and the UKOUG TECH 16 conference is almost here. Just a few days to go (and hopefully no Lufthansa strike on the upcoming weekend - but I have already looked for alternatives). Anyhow, I'm excited and looking forward to my 2nd TECH conference. Last year was my first time and I enjoyed it a lot. Very good audience, good location - even though I can't say much about the city of Birmingham, as I haven't seen anything except the venue, my hotel and an Indian restaurant. Well, business travel usually isn't meant for sightseeing. But the weather this year is supposed to be fine - chilly but sunny ;-)

Just in case you'll be there as well these are my three talks at UKOUG TECH 16:

  • Monday, December 5, 2016 - 14:10h - HALL 9
    Ensure Performance Stability When Upgrading Oracle Databases
    • Nobody likes unforeseeable surprises when it comes to database upgrades and migrations. As the upgrade or migration usually is not complicated by itself, the most important topic is to ensure good performance afterwards. This presentation will discuss a straightforward step-by-step approach to ensure not only plan stability but also proper testing, secret tweaks and helpful tips and tricks around database upgrades and migrations. It spans the entire project lifetime, from things to do before approaching such a move to important settings in your new environment during or after the upgrade. Just to avoid unforeseeable surprises ...
  • Tuesday, December 6, 2016 - 15:25h - HALL 5
    Upgrade to the Next Generation of Oracle Database: Live & Uncensored!
    • An upgrade on PowerPoint slides will always work fine.
      But how does a database upgrade to the Next Generation of the Oracle Database work? What is different compared to Oracle Database 12.1, and what is new? And is it faster?
      This talk will briefly overview database upgrades and new upgrade and migration features but focus mainly on a LIVE and UNCENSORED (and potentially INTERACTIVE) demonstration of a database upgrade to the Next Generation of the Oracle Database.
  • Wednesday, December 7, 2016 - 8:50h - HALL 9
    The Best Upgrade & Migration Strategies - or Things to Avoid
    • What if you have more than a few databases? What is the best strategy to keep them current, upgrade and/or migrate them in a regular cycle without taking all your available resources? And what can be automated?
      This talk will not only highlight successful customer upgrade projects but also showcase techniques to automate your upgrade and your patching with two different examples of customers with a range between 300 and 1700 databases.
      Furthermore there are things and habits to definitely avoid as otherwise things get really tricky and become very hard to handle - and very expensive as well.
      Real life cases will showcase things to avoid by all means.

Hope to see you there!


Wednesday Nov 30, 2016


At the DOAG Conference in Nürnberg in November 2016, a customer asked me right after my talk "Upgrade to Oracle Database 12.2 - Live and Uncensored" why DBA_REGISTRY_HISTORY does not get updated when he applies a Bundle Patch and follows all instructions, including the "./datapatch -verbose" call.

I was wondering as well and asked him to open an SR - which he did. And he received the message from Support that in Oracle 12c the information is not supposed to appear this way anymore but only in DBA_REGISTRY_SQLPATCH. So I dug a bit deeper internally to get a clear statement (thanks to Carol (my boss), Rae (my teammate) and Scott (the man who owns datapatch) for following up!).

Patch Query in Oracle Database 11g

Tim Hall has published this simple and quite helpful script to query applied PSUs and BPs in Oracle Database 11g:
Script to monitor DBA_REGISTRY_HISTORY
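The query is roughly along these lines - a sketch reconstructed from the column layout of the output below; see Tim Hall's script for the original formatting:

```sql
COLUMN action_time FORMAT A20
COLUMN comments FORMAT A20

SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
       action,
       namespace,
       version,
       id,
       comments,
       bundle_series
FROM   sys.dba_registry_history
ORDER  BY action_time;
```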

And the output in my environment looked like this:

-------------------- ------- ------- -------- ------- -------------------- ---
01-JUL-2016 15:24:56 APPLY   SERVER 160419  PSU  PSU
21-OCT-2016 17:40:32 APPLY   SERVER 161018  PSU  PSU

But running the same script on an Oracle Database 12c instance returns (as for the customer) "no rows selected".

Patch Query for Oracle Database 12c

Since Oracle Database 12c we use DBA_REGISTRY_SQLPATCH instead of DBA_REGISTRY_HISTORY to track PSUs and BPs applied to the database. My output now looks like this:

-------------------- ------- -------- -------------------- -------- -------- ----
21-OCT-2016 17:29:36 APPLY   SUCCESS  DBP: 24340679 DBBP

when using this tiny script:


COLUMN action_time FORMAT A20
COLUMN description FORMAT A40
COLUMN bundle_series FORMAT A10

SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
       action,
       status,
       description,
       bundle_series
FROM   sys.dba_registry_sqlpatch
ORDER  BY action_time;

But the question remains whether - as in earlier releases - both views should get updated.


In Oracle Database 11g we used the script catbundle.sql to apply bundle patches. It uses DBA_REGISTRY_HISTORY only. With the introduction of datapatch in Oracle Database 12c we now have the (much better) DBA_REGISTRY_SQLPATCH, which is used for both bundle and non-bundle patches. Initially, datapatch still called catbundle internally for bundle patches, so both registries were updated for them.
Since then, however, only DBA_REGISTRY_SQLPATCH is queried and updated, for bundle and non-bundle patches alike.

Update [Dec 23, 2016]

After discussing this and other issues with the owners of datapatch, my teammate Rae logged a bug for this issue, as we believe both views should be updated as before. Bug# 25269268 tracks the issue.



Thursday Nov 24, 2016

New PREUPGRADE.JAR is available - MOS 884522.1

As promised - and thanks to everybody from the Database Upgrade Development Team - the new preupgrade.jar for Oracle Database 12.2 is available:

And of course the previous preupgrd.sql with the package utluppkg.sql, and the old utlu112i.sql for upgrades to Oracle 11.2 (which I wouldn't use anymore), are available as well.

New Preupgrd.sql - MOS Note 884522.1

Please ALWAYS use the version downloaded from this MOS note, as it is much newer and better than the version shipped with the release (due to code freeze dates). And you will recognize that the new preupgrade.jar is now identical on all platforms.
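For reference, the new tool is started with the Java shipped in the Oracle home - a minimal sketch, assuming the environment points to the source database:

```shell
# Sketch - requires ORACLE_HOME and ORACLE_SID set for the source database.
# TERMINAL TEXT prints the plain-text report to the terminal.
$ORACLE_HOME/jdk/bin/java -jar preupgrade.jar TERMINAL TEXT
```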

In the next days I will explain a bit what will change with preupgrade.jar.


Wednesday Nov 23, 2016

Enabling ADAPTIVE Features of Oracle 12.2 in 12.1

Oracle Database 12.2 introduces the new split-up adaptive parameters, OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS.

For more information please see:

But Oracle Database 12.2 on-premises is not out yet - so what should you do when upgrading to Oracle Database 12.1 - or struggling with some of the "adaptive" features in Oracle 12.1?

Oracle recommends adopting the Oracle Database 12.2 defaults when upgrading to Oracle Database 12.1. This can be achieved by installing two patches - we call this the recommended approach.

  • The patch for bug# 22652097 introduces the two parameters OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS, and in addition removes the parameter OPTIMIZER_ADAPTIVE_FEATURES.
  • The patch for bug# 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STATS_EXTENSIONS is set to ON.
  • Please make sure you'll remove OPTIMIZER_ADAPTIVE_FEATURES from your spfile:
    alter system reset optimizer_adaptive_features;
    when applying the patches.
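Once both patches are in place and the old parameter has been reset, a quick sanity check should show only the two new parameters with the 12.2 defaults (output shape assumed):

```sql
SQL> show parameter optimizer_adaptive

NAME                            TYPE     VALUE
------------------------------- -------- -----
optimizer_adaptive_plans        boolean  TRUE
optimizer_adaptive_statistics   boolean  FALSE
```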

Both patches should also help if you have already upgraded to Oracle Database 12.1 and are encountering performance issues.

Please note that it's not necessary to set OPTIMIZER_DYNAMIC_SAMPLING to a non-default value because the patches will disable the use of adaptive dynamic sampling to match the default behavior in Oracle Database 12.2 when both new parameters are used in their default settings.


Tuesday Nov 22, 2016


The Oracle Database 12.1 parameter OPTIMIZER_ADAPTIVE_FEATURES has been made OBSOLETE (i.e. must be removed from the SPFILE when upgrading) in Oracle Database 12.2.

It gets replaced with two parameters, of which one is enabled and the other one disabled by default:


Nigel Bayliss, our Optimizer Product Manager, has already blogged about it with far more detailed insights.

But as Oracle Database 12.2 on-premises is not available yet, how should you deal with this feature in Oracle Database 12.1?


Friday Nov 18, 2016

SANGAM and DOAG Conferences 2016 - Recap + Slides

The past weeks were very intense. I've been to Japan for customer meetings and a Dev Day and an internal workshop, then on to China for an internal training, then to India for SANGAM conference and afterwards to Nürnberg for DOAG, the German Oracle User's Group conference. And UKOUG is just in two weeks ...

You'll find the slides of my talks here:

and in the Slides Download Center to the right.

Thanks again to the organizers of SANGAM and DOAG for such intense conferences with an incredibly good program and so many excellent speakers and talks. Sometimes I wished I could divide myself, as there was so much I wanted to see.

And thanks to all the people I've had the chance to talk to and discuss things with. It is amazing to see how many of you are interested in all our great technologies. I enjoy learning from customers and partners and experts about their setups and challenges and things they saw and experienced.

Hope to see you all soon again!


Monday Nov 14, 2016

SPFILE Parameter: max_pdbs - a must for Single Tenant

Sometimes my job has an aspect that makes me smile at the end of the day ;-)

I sat together with Johannes Ahrends during a talk at the OUGN Conference on the boat from Oslo to Kiel. Afterwards we were discussing why there's no official way to limit the number of PDBs - something that will be essential for customers wanting to go down the Single Tenant track. I had my hands-on environment up, and we played a bit during the break, recognizing that a constraint on CONTAINER$ won't be the correct solution, as unplug/plug operations leave leftovers in it unless you DROP PLUGGABLE DATABASE. And even if you drop the remains, the constraint solution does not work.

At breakfast the next morning Johannes mentioned a trigger - and he published one soon after. But fiddling around with the data dictionary is not a good idea in terms of keeping support for your database.

So I did ask my contacts internally and received a message saying clearly
"Somebody doesn't want this.". :-)
Well, I've worked at Oracle long enough to know how to read that. It's a common thing to blame "Somebody" when you don't want to discuss things further. One can hide perfectly well behind "Somebody".

So my surprise was all the greater when I started testing and playing with Oracle Database 12.2, collecting init.ora parameters between release labels to detect changes and additions. And this one appeared:

saying "max number of pdbs allowed in CDB or Application ROOT" in its parameter description.
I was (a) surprised and (b) happy and (c) had to try it out immediately in my environment where I had already 3 PDBs (the PDB$SEED does not count):

SQL> alter system set max_pdbs=3;
System altered.

SQL> show pdbs

CON_ID CON_NAME               OPEN MODE  RESTRICTED
------ ---------------------- ---------- ----------
2      PDB$SEED               READ ONLY  NO
3      PDB1                   READ WRITE NO
4      PDB2                   READ WRITE NO
5      CLONE_PDB              MOUNTED

SQL> alter system set max_pdbs=2;
alter system set max_pdbs=2
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65331: DDL on a data link table is outside an application action.

So even though the error message is a bit rough (at least in my release drop), the parameter does what we want. In a Single Tenant environment you'll set it to "1" and prevent the creation or plugin of a 2nd PDB in this container database.

Another test with a fresh container database:

SQL> show pdbs

CON_ID CON_NAME      OPEN MODE  RESTRICTED
------ ------------- ---------- ----------
2      PDB$SEED      READ ONLY  NO

SQL> alter system set max_pdbs=1;
System altered.

SQL> show parameter max_pdbs

NAME                     TYPE     VALUE
------------------------ -------- -----
max_pdbs                 integer      1

SQL> create pluggable database pdb1 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb1');
Pluggable database created.

SQL> create pluggable database pdb2 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb2');

create pluggable database pdb2 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb2')
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

SQL> drop pluggable database pdb1 including datafiles;
Pluggable database dropped.

SQL> create pluggable database pdb2 admin user adm identified by adm file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb2');
Pluggable database created.

Looks like a solid solution to me.

But please see also this blog post by Oracle ACE Director Franck Pachot about an issue with the parameter:



Mike Dietrich
Master Product Manager - Database Upgrade & Migrations - Oracle

Based in Germany. Interlink between customers/partners and the Upgrade Development team. Running workshops between the Arctic and Antarctica. Assisting customers in their reference projects onsite and remotely.

