Monday Jun 22, 2015

Java in the database - OJVM non-rolling patches


How can I find out if Oracle's JVM is used in my database?


This is unfortunately not as trivial as I thought initially ...
Let's start with:

Before Oracle version 11.2, there was no way to confirm that the Oracle JVM was not actively being used in the database.

However, what can be said is:
1) If there are non-Oracle schemas that contain java objects, then 3rd party products or user defined java programs could be actively using the Oracle JVM.
2) If there are Oracle schemas, other than SYS, that contain java objects, then other Oracle products or Oracle Applications could be actively using the Oracle JVM.  (For example, ORDSYS schema for Oracle Intermedia and APPS schema for Oracle Applications).
3) Even if all java objects are owned by SYS schema, there might still be user defined java objects in the SYS schema. 

If the total number of java objects owned by SYS is much greater than the totals shown above, then this is likely.  However, the totals shown above are for a fully installed Oracle JVM.  If the JVM is not fully installed, then the existence of user defined java objects in the SYS schema could still make the total number of java objects exceed the above totals. Therefore, there is no way to guarantee that the Oracle JVM is not in use.

For Oracle version 11.2 or later query the DBA_FEATURE_USAGE_STATISTICS view to confirm if the Java features are being used.
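Both checks can be sketched like this (just a sketch - the feature name in DBA_FEATURE_USAGE_STATISTICS may differ slightly between releases):

SQL> select name, version, detected_usages, currently_used
       from dba_feature_usage_statistics
      where name like '%Java%';

SQL> select owner, object_type, count(*)
       from dba_objects
      where object_type like '%JAVA%'
      group by owner, object_type;

The first query shows whether the Java features have been detected as used; the second shows which schemas own Java objects at all - anything outside SYS deserves a closer look.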

I'm not a JAVA/OJVM expert but I'd do the following:

  1. Check how many JAVA objects exist in your database:
    select owner, status, count(*) from all_objects 
            where object_type like '%JAVA%' group by owner, status;
  2. If the result equals 29211 in Oracle 12c (see MOS Note: 397770.1 for the numbers in different releases), then I'd silently assume that JAVA is not in use inside the database as there are no additional user defined objects. 
  3. In addition you may run the script from MOS Note:456949.1 (Script to Check the Status or State of the JVM within the Database) to check for any user defined JAVA objects in your database
  4. Anyhow, before doing anything to your JAVA installation, keep in mind that there are dependencies. The following components require the existence of a valid JAVA installation in your database:
    Oracle Multimedia (formerly known as Intermedia)
    Oracle Spatial
    Oracle OLAP
    And even more important: as there are dependencies between components, there may also be dependent objects belonging to these components in your database. So it's not as simple as it looked initially - you'll have to check whether any of the dependent components is in use as well. Note that items 4.-6. apply to 11.2 databases only, not to Oracle 12c:
    1. How to Determine if Spatial is Being Used in the Database? (Doc ID 726929.1)
      Please be aware that having a user defined SPATIAL SDO Geometry object will NOT increase the number of existing Java objects compared to a default installation. Roy verified this (THANKS!). So you'll have to make sure that you also check the dependent components for being in use. 
    2. How To Find Out If OLAP Is Being Used (Doc ID 739032.1)
    3. How To Check If Oracle Multimedia Is Being Used In Oracle Version 11.2 (Doc ID 1088032.1)
    4. How to Determine if Ultra Search is Being Used? (Doc ID 738126.1)
    5. Warehouse Builder has a note about how to uninstall it, but that (very badly written) note does not tell you how to determine whether OWB is in use.
    6. Rules Manager and Expression Filter document installation and deinstallation in their developers guide
  5. And even more important: before doing anything to your JAVA installation, please take a backup - even though you may believe that backups are just for wimps, you'd better take one first :-)
  6. Now the question is:
    Should you remove only JAVAVM component - or CATJAVA as well? Please see the section further below on this blog posting for more information.
    To remove only JAVAVM this script could do the job - but it will leave two INVALID Package Bodies (JAVAVM_SYS, JVMRJBCINV):
    SQL> @?/javavm/install/rmjvm.sql 
    SQL> delete from registry$ where status='99' and cid = 'JAVAVM';
    SQL> commit;

  7. The execution of the removal scripts won't de-register the component from DBA_REGISTRY - that's why the manual de-registration is necessary. 
  8. Even if I removed the entire JAVA stack including the XDK, it would still leave those two invalid objects (JAVAVM_SYS, JVMRJBCINV):
    1. SQL> @?/rdbms/admin/catnojav.sql
    2. SQL> @?/xdk/admin/rmxml.sql
    3. SQL> @?/javavm/install/rmjvm.sql
    4. SQL> @?/rdbms/admin/utlrp.sql
    5. SQL> delete from registry$ where status='99' and cid in ('XML','JAVAVM','CATJAVA');
So honestly, the best choice is always not to install things you clearly don't need, instead of trying to remove them afterwards. In this case one would now need to double-check with Oracle Support whether the two remaining package bodies JAVAVM_SYS and JVMRJBCINV can safely be dropped. In my environment that worked well - but obviously I can't give any official statement here.
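To see what is actually left over after such a removal, a quick check like this sketch lists the remaining invalid objects and the registry state:

SQL> select owner, object_name, object_type
       from dba_objects
      where status = 'INVALID';

SQL> select comp_id, status from dba_registry;

If everything went as described above, only the two package bodies mentioned should show up as INVALID.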

Again, please don't get me wrong:
I'm not saying that you should remove JAVA from your databases. And please check back with Oracle Support before doing this. But the question came up so often in the past months because of the OJVM patch, which does not allow a rolling PSU upgrade anymore. For further information please see the following MOS Note and refer to my previous post quoting one of our Java experts: 

So let's do a quick experiment.

Check the installed components in a standard Oracle database first:

SQL> select substr(comp_id,1,8) COMP_ID, substr(COMP_NAME,1,36) COMP_NAME from dba_registry;

---------- ------------------------------------
DV         Oracle Database Vault
APEX       Oracle Application Express
OLS        Oracle Label Security
SDO        Spatial
ORDIM      Oracle Multimedia
CONTEXT    Oracle Text
OWM        Oracle Workspace Manager
XDB        Oracle XML Database
CATALOG    Oracle Database Catalog Views
CATPROC    Oracle Database Packages and Types
JAVAVM     JServer JAVA Virtual Machine
XML        Oracle XDK
CATJAVA    Oracle Database Java Packages
APS        OLAP Analytic Workspace
XOQ        Oracle OLAP API
RAC        Oracle Real Application Clusters

Before we'd be able to safely remove JAVAVM we will need to take out Spatial, Multimedia and OLAP (exactly in this order) as well.

Spatial removal: 

SQL> drop user MDSYS cascade;
SQL> drop user MDDATA cascade;
SQL> drop user spatial_csw_admin_usr cascade;
SQL> drop user spatial_wfs_admin_usr cascade;

After this action you'll end up with 5 invalid objects in APEX in case APEX is installed. I think you can safely ignore them as those are spatial objects in the FLOWS/APEX schema.


Multimedia removal:

SQL> @?/rdbms/admin/catcmprm.sql ORDIM

OLAP removal:

SQL> @?/olap/admin/olapidrp.plb
SQL> @?/olap/admin/catnoxoq.sql
SQL> @?/olap/admin/catnoaps.sql
SQL> @?/rdbms/admin/utlrp.sql

Let's do the check again:

SQL> select substr(comp_id,1,8) COMP_ID, substr(COMP_NAME,1,36) COMP_NAME from dba_registry;

---------- ------------------------------------
DV         Oracle Database Vault
APEX       Oracle Application Express
OLS        Oracle Label Security
CONTEXT    Oracle Text
OWM        Oracle Workspace Manager
XDB        Oracle XML Database
CATALOG    Oracle Database Catalog Views
CATPROC    Oracle Database Packages and Types
JAVAVM     JServer JAVA Virtual Machine
XML        Oracle XDK
CATJAVA    Oracle Database Java Packages
RAC        Oracle Real Application Clusters 

12 components are still there - SDO, ORDIM, XOQ and APS are gone as expected.

JAVAVM removal: 

The question now would be: remove only the JAVAVM - or CATJAVA as well?
According to MOS Note:397770.1 it seems that removing the JAVAVM is (a) trivial and (b) will avoid having to apply the OJVM patch. So removing JAVAVM only seems to be the best way in this case. As shown above this will leave two additional leftover package bodies, JAVAVM_SYS and JVMRJBCINV.

4) Oracle JVM is not installed in the database

Do not apply the DST JVM patch.

If for some reason the patch is applied, then apply the patch to the ORACLE_HOME but DO NOT run the post install steps in the database.  This will leave unwanted java objects in the database and create an incomplete non-working Oracle JVM.  See Note 414248.1 for details.

SQL> @?/javavm/install/rmjvm.sql 
SQL> delete from registry$ where status='99' and cid = 'JAVAVM';
SQL> commit;

-- Mike 

Tuesday Jun 09, 2015

Recent News about Pluggable Databases - Oracle Multitenant

Three recent lessons about PDBs in the Oracle Single/Multitenant space you should be aware of. And thanks to my teammates and the Multitenant PMs for bringing this onto our radar.

Unplug/plug - don't forget to DROP your PDB

I've had to add a single line to my previous blog post about the upgrade solution Unplug/Plug/Upgrade for PDBs:

Unplug Plug Upgrade PDB Oracle Multitenant

You'll have to DROP your PDB after you have unplugged it, as otherwise the information will stay in the CDB's dictionary you have unplugged it from (a) forever and (b) through a subsequent upgrade of the entire source CDB. The latter will cause trouble, as the PERL script that executes SQL code in all the containers will try to open the PDB you unplugged a while ago, because the information about it is still kept. To me this looks like undesired behavior, but there's discussion going on internally about it. 

  • alter pluggable database PDB1 close;
  • alter pluggable database PDB1 unplug into '/stage/pdb1.xml';
  • drop pluggable database PDB1 keep datafiles;

If you don't issue the DROP command above you'll get yourself into trouble sooner or later - unless this CDB will be deleted afterwards anyway because it was just meant to hold this single PDB.
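For completeness, the plug-in on the target CDB side would then look roughly like this sketch (NOCOPY assumes the datafiles are accessible from the target - which is why KEEP DATAFILES is used in the DROP above):

SQL> create pluggable database PDB1 using '/stage/pdb1.xml' nocopy;
SQL> alter pluggable database PDB1 open;

In the Unplug/Plug/Upgrade case the PDB will of course still need the upgrade run after the plug-in.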

Unplug/plug - take care of your backup

If you use the above procedure for Unplug/Plug/Upgrade (or even just Unplug/Plug without upgrade, from one CDB into another), be aware that you will need to take a backup of your PDB right after the upgrade has finished. This is pretty clear and obvious, but it's easy to overlook - and it's not mentioned in the docs as far as I know.

The previous backup is useless, as you won't be able to use a backup of PDB1 taken on CDB1 to recover PDB1 into CDB2. Therefore a backup has to be scheduled immediately after the upgrade or plug-in - it's a must and needs to be factored into your maintenance plans.
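Scheduling that backup can be as simple as this RMAN sketch, run while connected to the target CDB:

RMAN> backup pluggable database PDB1;

Alternatively, a full backup of the target CDB taken right after the plug-in covers the new PDB as well.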

Every PDB must have its own TEMP tablespace

This one is fairly new to me [thanks Hector!].
MOS Note: 2004595.1 (PDB to Use Global CDB (ROOT) Temporary Tablespace Functionality is Missing)

Basically this means that you'll be unable to drop the local temporary tablespace of a PDB and use the global temporary tablespace (the one in CDB$ROOT) instead. This functionality is described in the docs but does not exist right now. It is logged under Bug 17554022. It's no major issue, but I had an intense discussion with Johannes Ahrends about it a while back at the DOAG conference - so others have seen this issue as well.
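You can verify that every container indeed keeps its own temporary tablespace with a query such as this sketch:

SQL> select con_id, tablespace_name
       from cdb_tablespaces
      where contents = 'TEMPORARY'
      order by con_id;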


Tuesday Jun 02, 2015

Migrating to Unicode? Get DMU 2.1!

When you find yourself needing to perform a character set migration as part of a database upgrade or migration, the Oracle Database Migration Assistant for Unicode (DMU) is a very helpful tool. It will assess your migration needs and help automate the process, all with a very nice GUI front end that makes the whole process easier.

And now comes a really nice enhancement with DMU 2.1: automation of near-zero downtime character set migration! Here is the blurb from the DMU OTN page:

New! Oracle DMU 2.1, released in May 2015, supports a near-zero downtime migration model in conjunction with the Oracle GoldenGate replication technology. Using DMU 2.1 and GoldenGate or later, you can set up a migration procedure that takes advantage of the DMU data preparation and in-place conversion capabilities while leveraging GoldenGate to replicate incremental data changes on the production system during the migration process, thereby effectively eliminating the downtime window requirement. Other new features in DMU 2.1 include migration profile support, problem data report, and transparent repository upgrade. Please see the DMU 2.1 Release Notes for changes since the 2.0 release.

See the OTN Page for DMU to get all the latest information.  

Wednesday May 27, 2015

Removing Options from the Oracle Database kernel in 12c

Remove Options from the Oracle Database Kernel - chopt

Sometimes people have the desire to remove options from the database kernel (i.e. from the oracle executable).

It's a matter of fact that by default you'll get plenty of things linked into your kernel in Oracle Database 12c.

In case you'd like to remove things, the chopt utility still exists in Oracle Database 12c - but you may recognize a difference between Oracle 11.2 and Oracle 12.1. Anyhow, ideally you'll make these changes directly after the installation has been completed, before you create a database. See the documentation for Post Installation Tasks first:

Now let's call chopt and see what it tells us on the command prompt:

$ chopt

chopt <enable|disable> <option>

                  dm = Oracle Data Mining RDBMS Files
                olap = Oracle OLAP
        partitioning = Oracle Partitioning
                 rat = Oracle Real Application Testing
e.g. chopt enable rat 

For a first try I'm unlinking Data Mining:

$ chopt disable dm

Writing to /u01/app/oracle/product/
/usr/bin/make -f /u01/app/oracle/product/ dm_off ORACLE_HOME=/u01/app/oracle/product/
/usr/bin/make -f /u01/app/oracle/product/ ioracle ORACLE_HOME=/u01/app/oracle/product/

I tested all 4 available chopt options and this is the result when you exit SQL*Plus afterwards:

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release - 64bit Production

versus before with all options still linked into the kernel:

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

But how about all the other options that were available in releases before Oracle Database 12c, such as lbac_on|off (Label Security) or dv_on|off (Database Vault)? If you refer to the list of options to link on/off published by Gokhan Atil years back in his blog you may find more things to try out.

Let's give it a try with Label Security:

$ /usr/bin/make -f /u01/app/oracle/product/ lbac_off ORACLE_HOME=/u01/app/oracle/product/
lbac_off has been deprecated

Ah, very smart ;-) It signals that you can't link those things off anymore. The same would happen with dv_off.

And how about things which are from older sources, such as Spatial Data (sdo_on|off)?

$ /usr/bin/make -f /u01/app/oracle/product/ sdo_off ORACLE_HOME=/u01/app/oracle/product/
Warning: sdo is always turned on. sdo_off is disabled.

In this case it would be simple: You'd take out SDO from the components inside the database - and there's no need to unlink anything.


Tuesday May 26, 2015

Patching Best Practices - 38min Video

Patch Ever asked yourself about Database Patching Best Practices?

I know that not everything always works that simply and easily. See the blog post by Pieter Van Puymbroeck from Exitas in Belgium about some findings when applying the April 2015 PSU to an Exadata, or mine about my experiences when applying the GI PSU from Jan 2015.

But this 38 min video may give you a good overview about the best and recommended techniques at first hand by Eleanor Meritt and David Price, both from the Sustaining Engineering organization that issues bug fixes, patches and patchset updates for the Oracle Database, Enterprise Manager and Fusion Middleware product area.

Building on a highly popular session from Oracle OpenWorld 2013, this 2014 OpenWorld session further explores ways to help you maintain and patch your database systems most efficiently. Learn about patch testing best practices, techniques for minimizing downtimes, how to best rollout patches in cloud environments, and more. The presentation also shares the latest Oracle Database 12c features and tooling to help ease patching processes.


Monday May 18, 2015

Create a PDB directly from a stand-alone database?

The documentation offers a well hidden feature for migrating a database into the universe of Oracle Single-/Multitenant:

Remote Cloning with the NON$CDB option.

If you read the documentation you'll find it doesn't say much about this option - neither the requirements, nor the exact syntax, nor an example:

Scroll down to the FROM clause:

... FROM NON$CDB@dblink ... this option will be able to plug in a stand-alone database and make it a pluggable database. Sounds interesting - let's try it.

Test 1 - Try to plugin an Oracle database

Well, the documentation doesn't say anything anywhere about source release limitations. So I simply tried it with an Oracle database. 

  1. Created a database link from my existing CDB pointing into my database
  2. Started my SOURCEDB in read-only mode
  3. Tried to create a pluggable database from my SOURCEDB - and failed ...
    SQL> create pluggable database PDB1 from non$cdb@sourcedb;
    create pluggable database PDB1 from non$cdb@sourcedb
    ERROR at line 1:
    ORA-17627: ORA-28002: the password will expire within 7 days
    ORA-17629: Cannot connect to the remote database server

Test 2 - Try to plugin an Oracle database in file system 

Thanks to Tim Hall - his blog post did the magic trick for me:

First of all, the reason why my Test 1 failed is simply that I can't have a user in an Oracle database with the privilege CREATE PLUGGABLE DATABASE - but this is a requirement as I learned later on.

  1. You'll need a user in SOURCEDB with the privilege CREATE PLUGGABLE DATABASE:
  2. Start SOURCEDB in read-only mode after shutting it down:
  3. Create a database link pointing from the CDB back into the SOURCEDB:
    CREATE DATABASE LINK sourcedblink
    CONNECT TO sourcedb_user IDENTIFIED BY password USING 'upgr12';
  4. Now create the pluggable database from the stand-alone UPGR12 database:
    CREATE PLUGGABLE DATABASE pdb_upgr12 FROM NON$CDB@sourcedblink
  5. But when you check the status of the new PDB you'll realize it is OPEN but only in RESTRICTED mode. Therefore noncdb_to_pdb.sql needs to be run. Connect to the new PDB and start the script:
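Putting the steps above together, the whole procedure is roughly this sketch (user name, link name and password are placeholders):

In SOURCEDB (the stand-alone UPGR12 database):

SQL> create user sourcedb_user identified by password;
SQL> grant create session, create pluggable database to sourcedb_user;
SQL> shutdown immediate
SQL> startup open read only

In the CDB:

SQL> create database link sourcedblink
       connect to sourcedb_user identified by password using 'upgr12';
SQL> create pluggable database pdb_upgr12 from non$cdb@sourcedblink;
SQL> alter pluggable database pdb_upgr12 open;
SQL> alter session set container = pdb_upgr12;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database pdb_upgr12 close;
SQL> alter pluggable database pdb_upgr12 open;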


What will you get from this command? It allows a simple way to plug a stand-alone database into a container database, but the following restrictions apply:

  • Source database must be at least Oracle
  • Source database must be on the same OS platform
  • Source database must be at the same (equal) version as the container database
  • Script noncdb_to_pdb.sql needs to be run

You may have a look at this MOS Note:1928653.1 Example for Cloning PDB from NON-CDB via Dblink as well [Thanks Krishnakumar for pointing me to this note].

Finally, the only simplification seems to be avoiding the extra step of creating the XML manifest file with DBMS_PDB.DESCRIBE - but apart from that I can't see many other benefits, except for the easing of remote cloning under the above restrictions.



Thursday Apr 23, 2015

CDBs with less options now supported in Oracle

When Oracle Multitenant was launched, Roy and I - amongst many other people - always mentioned that the requirement of having all options in a Container Database (CDB$ROOT), and therefore also in the PDB$SEED (with the immediate result that all PDBs provisioned from the PDB$SEED have all options as well), would hinder customer adoption significantly. 

Almost all customers I have talked to about Oracle Multitenant in the past 3-4 years mentioned immediately that it would be a huge problem for them to install all options, as (1) their policy is to install only things they are licensed for, in order to (2) prevent developers, users and DBAs from accidentally using things without even knowing that this or that requires a license.

As it is not allowed to manipulate and change the PDB$SEED, the workaround - as PDBs were always allowed to have fewer options - has been to create a stand-alone Oracle 12c database with exactly the options you'd like to have as your gold standard, and then plug it in under a recognizable name, for instance PDB$MASTER. Switch it to read only and make sure that from now on you always provision a new PDB as a clone from PDB$MASTER, not from PDB$SEED.
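Provisioning from such a gold master is then a simple local clone - a sketch, with PDB$MASTER kept read only as described (PDB_NEW is a placeholder name):

SQL> alter pluggable database PDB$MASTER open read only;
SQL> create pluggable database PDB_NEW from PDB$MASTER;
SQL> alter pluggable database PDB_NEW open;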

All Options in a CDB

That would even have worked in the Single Tenant case, which does not require licensing the Oracle Multitenant option and where you have only one active ("customer-created") PDB. For this purpose you would have unplugged your PDB$MASTER after making it a pluggable database, and provisioned new PDBs with just your desired option set by plugging in PDB$MASTER under a new name (e.g. PDB26) using the COPY option of the command.

Now this all becomes obsolete, as from now on it is allowed to have a CDB installation with fewer options. This applies to linked kernel modules (e.g. RAT) as well as to configured database components (e.g. JAVA, OWM, SPATIAL etc.).

Please see the following new/rephrased MOS Notes:

MOS Note:2001512.1 basically explains the following steps:

  • Do all the click work in DBCA (Database Creation Assistant) to create a container database - but let DBCA only create the scripts
  • Edit the <SID>.sql script and remove the unwanted options according to the dependency table in the MOS Note
  • Edit the CreateDBCatalog.sql in case you want to remove OWM (Oracle Workspace Manager) creation as well 
  • Add the Oracle PERL $ORACLE_HOME/perl/bin in front of your $PATH variable
  • Start the <SID>.sh script on the shell prompt

Here's an example of a CreateDBCatalog.sql and a XXXX.sql creating a CDB with no options except XDB (which is mandatory in Oracle Database 12c):

cat CreateDBCatalog.sql

connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/XXXX/scripts/CreateDBCatalog.log append
alter session set "_oracle_script"=true;
alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open;
host perl /u01/app/oracle/product/ -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catalog /u01/app/oracle/product/;
host perl /u01/app/oracle/product/ -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catproc /u01/app/oracle/product/;
host perl /u01/app/oracle/product/ -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catoctk /u01/app/oracle/product/;
-- host perl /u01/app/oracle/product/ -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b owminst /u01/app/oracle/product/;
host perl /u01/app/oracle/product/ -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b pupbld -u SYSTEM/&&systemPassword /u01/app/oracle/product/;
connect "SYSTEM"/"&&systemPassword"
set echo on
spool /u01/app/oracle/admin/XXXX/scripts/sqlPlusHelp.log append
host perl /u01/app/oracle/product/ -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b hlpbld -u SYSTEM/&&systemPassword -a 1  /u01/app/oracle/product/ 1helpus.sql;
spool off
spool off


cat XXXX.sql

set verify off
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
host /u01/app/oracle/product/ file=/u01/app/oracle/product/ force=y format=12
-- @/u01/app/oracle/admin/XXXX/scripts/JServer.sql
-- @/u01/app/oracle/admin/XXXX/scripts/context.sql
-- @/u01/app/oracle/admin/XXXX/scripts/ordinst.sql
-- @/u01/app/oracle/admin/XXXX/scripts/interMedia.sql
-- @/u01/app/oracle/admin/XXXX/scripts/cwmlite.sql
-- @/u01/app/oracle/admin/XXXX/scripts/spatial.sql
-- @/u01/app/oracle/admin/XXXX/scripts/labelSecurity.sql
-- @/u01/app/oracle/admin/XXXX/scripts/apex.sql
-- @/u01/app/oracle/admin/XXXX/scripts/datavault.sql
-- @/u01/app/oracle/admin/XXXX/scripts/CreateClustDBViews.sql



This results in a database having only these components - the minimal component set in Oracle:

-------- --------------------------------------
CATALOG  Oracle Database Catalog Views
CATPROC  Oracle Database Packages and Types
XDB      Oracle XML Database


-- Mike 

Friday Apr 10, 2015

Parallel Index Creation with Data Pump Import

Here is a new capability that might be interesting to anybody who is performing a migration using Data Pump. Previously, Data Pump would create indexes one at a time, specifying the PARALLEL keyword for the CREATE INDEX statement to invoke parallel query for index creation. We used to recommend a workaround to create indexes in parallel, which involved a three-step process of importing without indexes, then creating a SQLFILE of the CREATE INDEX statements, and breaking that file up to run the pieces in multiple windows.

Through extensive performance testing we found that it is faster to create multiple indexes at once (each using a parallel degree of 1) than to create a single index using parallel query processes. This is enabled by the patch for bug 18793090, superseded by patch 21539301, which is available as a backport for Exadata BP 9. If you need it for another platform, that can of course be requested. The number of indexes created concurrently will be based on the PARALLEL parameter.

Here is an example of the effects of this patch on a toy example that I created using our hands-on lab VM environment. I created a table in the SYSTEM schema with 4 columns and 14 indexes, and then inserted a couple of dozen rows into the table. Then I exported the SYSTEM schema from and imported into a PDB with PARALLEL=4, both with and without the patch. 

Normal (unpatched) behavior:

Import: Release - Production on Thu Apr 9 14:38:50 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
09-APR-15 14:39:08.602: Startup took 2 seconds
09-APR-15 14:39:12.841: Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
09-APR-15 14:39:13.417: Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@pdb1 directory=temp dumpfile=exp_test.dmp logfile=imp_test1.log logtime=all metrics=yes parallel=4 
09-APR-15 14:39:13.605: Processing object type SCHEMA_EXPORT/USER
09-APR-15 14:39:14.454:      Completed 1 USER objects in 1 seconds
09-APR-15 14:39:14.470: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
09-APR-15 14:39:14.541:      Completed 5 SYSTEM_GRANT objects in 0 seconds
09-APR-15 14:39:14.596: Processing object type SCHEMA_EXPORT/ROLE_GRANT
09-APR-15 14:39:14.655:      Completed 2 ROLE_GRANT objects in 0 seconds
09-APR-15 14:39:14.690: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
09-APR-15 14:39:14.728:      Completed 1 DEFAULT_ROLE objects in 0 seconds
09-APR-15 14:39:14.746: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
09-APR-15 14:39:15.275:      Completed 1 PROCACT_SCHEMA objects in 1 seconds
09-APR-15 14:39:15.377: Processing object type SCHEMA_EXPORT/TABLE/TABLE
09-APR-15 14:39:15.626:      Completed 1 TABLE objects in 0 seconds
09-APR-15 14:39:15.673: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
09-APR-15 14:39:16.031: . . imported "SYSTEM"."TAB1"                             6.375 KB      12 rows in 1 seconds
09-APR-15 14:39:16.096: Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
09-APR-15 14:39:20.740:      Completed 14 INDEX objects in 4 seconds

With Patch:

Import: Release - Production on Thu Apr 9 15:05:19 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
09-APR-15 15:05:22.590: Startup took 0 seconds
09-APR-15 15:05:23.175: Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
09-APR-15 15:05:23.613: Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@pdb1 directory=temp dumpfile=exp_test.dmp logfile=imp_test3.log logtime=all metrics=yes parallel=4 
09-APR-15 15:05:23.699: Processing object type SCHEMA_EXPORT/USER
09-APR-15 15:05:23.862:      Completed 1 USER objects in 0 seconds
09-APR-15 15:05:23.882: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
09-APR-15 15:05:23.937:      Completed 5 SYSTEM_GRANT objects in 0 seconds
09-APR-15 15:05:23.993: Processing object type SCHEMA_EXPORT/ROLE_GRANT
09-APR-15 15:05:24.071:      Completed 2 ROLE_GRANT objects in 1 seconds
09-APR-15 15:05:24.096: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
09-APR-15 15:05:24.180:      Completed 1 DEFAULT_ROLE objects in 0 seconds
09-APR-15 15:05:24.216: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
09-APR-15 15:05:24.378:      Completed 1 PROCACT_SCHEMA objects in 0 seconds
09-APR-15 15:05:24.460: Processing object type SCHEMA_EXPORT/TABLE/TABLE
09-APR-15 15:05:24.665:      Completed 1 TABLE objects in 0 seconds
09-APR-15 15:05:24.782: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
09-APR-15 15:05:25.096: . . imported "SYSTEM"."TAB1"                             6.375 KB      12 rows in 1 seconds
09-APR-15 15:05:26.291: Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
09-APR-15 15:05:26.809: Startup took 4 seconds
09-APR-15 15:05:26.896: Startup took 5 seconds
09-APR-15 15:05:27.138: Startup took 4 seconds
09-APR-15 15:05:28.398:      Completed 14 INDEX objects in 3 seconds

A few things are noteworthy here:

  1. The indexes took 4.644 seconds without the patch and 3.302 seconds with the patch. So the effect is significant even on a toy example.
  2. Those messages about startup taking X seconds appear because we did not need the parallel workers for the table data. The startup time probably isn't correct; it's just the time between the start of the job and the first use of the worker.
  3. If you apply this patch, we will no longer use PX processes (formerly known as PQ slaves) to create indexes.

If you take advantage of this patch, let me know the results in a reply here! We are excited to be able to add a bit more parallelism into Data Pump, and plan on more for the future.

Thursday Mar 19, 2015

Migration of an EM Repository cross-platform?

Can you migrate your EM Cloud Control Repository to another OS platform? Cross-platform and cross-Endianness?

This question sounds so incredibly simple that you won't even start thinking I guess. Same for ourselves. Use Data Pump. Or Transportable Tablespaces. Or Full Transportable Export/Import if your source is at least or newer.

But sometimes in life things seem to be simple, and as soon as you unmask them you'll find a whole bunch of issues. It's a fact that the repository of EM Cloud Control is quite complicated - and uses plenty of database technologies. 

Actually all credits go to Roy here as he has worked with the EM group for the past 6 months on this topic.

You can migrate an EM Cloud Control Repository cross-platform but not cross-Endianness (e.g. HP-UX to OL, big to little Endian). The latter is scheduled to be supported in EM 13.2.



As EM Cloud Control Repository migration is possible right now only within the same Endianness group, you should decide carefully where you store your EM Cloud Control Repository.



Friday Mar 13, 2015

EM Cloud Control (or newer) OMS Repository certified with Oracle Database plus patches

The certification of Oracle Database as the repository database underneath Oracle Enterprise Manager Cloud Control (OMS) had been revoked for a while. One reason may have been the potential removal of the SYSMAN user during a DBUA upgrade ;-) But this is just a rumor. I don't know the exact reasons.

After checking with the certification matrix earlier today I realized:

Cloud Control Repository Database Certification Oracle

Oracle Database is certified as a repository database for Cloud Control resp. OMS.

But ... with a few important restrictions (see MOS Note:1987905.1 - 12c Database has been Certified as a 12cR4 Repository with Certain Patchset Restrictions):

  • Database PSU April 2015 (
  • Database PSU October 2014 ( or newer
  • Database Patch 20243268 on top of the PSU
    • Bug 20243268:

Please be aware that the above-mentioned MOS Note has a typo - it refers to the PSU (correct) as the Oct PSU (incorrect - it's the April 2015 one). 


PS: Thanks for the anonymous comment ... I wasn't aware of the MOS Note and updated the patch requirements

Monday Mar 09, 2015

Applying a PSU or BP to a Single-/Multitenant Environment

I have already explained in broad details a while ago how to:

But one may miss the steps for applying a PSU (Patch Set Update) or BP (Bundle Patch) to a Single-/Multitenant environment. At first everything will work just the same if you choose the Everything-at-Once strategy, as datapatch will adjust all the required things regardless of being executed in a stand-alone or a Single-/Multitenant environment.

But what happens if you apply a PSU or a BP to one of your Multitenant environments and want to move PDBs one after another (or a few at the same time) to the new environment?
Or revert a PSU by plugging out from a CDB with the PSU inside - and plug it back into a CDB with a lower version or no PSU at all? 

First step - Check Plug In Compatibility 

Before you can even start your unplug/plug operation you should always perform the plugin check. This is divided into two simple steps:

  1. Create the XML description file for your PDB in CDB_SOURCE
    exec DBMS_PDB.DESCRIBE ('/home/oracle/PDB1_unplug.xml', 'PDB1');
  2. Run the plug in check in CDB_DEST
      set serveroutput on
      begin
        if DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/home/oracle/PDB1_unplug.xml','PDB1') then
          DBMS_OUTPUT.PUT_LINE('No violations found - you can relax');
        else
          DBMS_OUTPUT.PUT_LINE('Violations found - check PDB_PLUG_IN_VIOLATIONS');
        end if;
      end;
      /

No Plugin Violations?

Then please follow the procedure described in:
without the upgrade part as you don't need to upgrade anything in this case of course. 

Higher patch in CDB_DEST than in CDB_SOURCE?

Then run this query:


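A query along these lines against PDB_PLUG_IN_VIOLATIONS does the job (a sketch - the view is standard, but the column selection and WHERE clause are my choice):

```sql
-- Run in CDB_DEST after the plugin check to list unresolved findings
select name, type, message, status
  from pdb_plug_in_violations
 where status <> 'RESOLVED';
```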
It will tell you to execute datapatch:

------  ----------------------------------------------------------------------------
ERROR   PSU bundle patch 1 (PSU Patch 4711): Installed in the CDB but not in the PDB

Call datapatch to install in the PDB or the CDB

Lower patch in CDB_DEST than in CDB_SOURCE?

Now this becomes a bit more tricky. See the output of PDB_PLUG_IN_VIOLATIONS:

----- ----------------------------------------------------------------------------
ERROR PSU bundle patch 1 (PSU Patch 4711): Installed in the PDB but not in the CDB

Call datapatch to install in the PDB or the CDB

Huh? Install???
What does this mean? Should I install now the current CDB/PDB's PSU into my target environment before being able to step down? 

Actually I think this message is misleading. And when you look into the MyOracle Support Note describing this under scenario 3 (MOS Note:1935365.1 - Multitenant Unplug/Plug Best Practices) you'll see that the author silently assumed as well that it is more likely that you'll remove the patch from the PDB. 

But how do you remove changes which came in with datapatch from within a PDB only?

You will need to run datapatch -rollback on the affected PDBs only:

$> datapatch -rollback <patch id> -force [-bundle_series] -pdbs <pdb1,pdb2,...,pdbn>
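With made-up values plugged in - patch id 20243268 and two PDBs - the call could look like this (a sketch; verify the options against datapatch -help in your version):

```shell
# Roll back the patch's SQL changes only in the listed PDBs
cd $ORACLE_HOME/OPatch
./datapatch -rollback 20243268 -force -pdbs PDB1,PDB2
```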

For further information see:


Tuesday Mar 03, 2015

There is still time to register for next week's Upgrade Workshop in Omaha!

Upgrade Workshop Banner

While the event in San Francisco is a sell-out (metaphorically speaking -- Upgrade Workshops are of course free of charge), we do still have room at next week's seminar in Omaha on Wednesday, March 11:

Oracle Database 12c Upgrade Seminar

This full-day event at the Embassy Suites in La Vista will include discussions of many ways to upgrade and migrate to Oracle Database 12c. I will also talk about Oracle Multitenant and other new features. We also include a guide for how to ensure you will have good performance after the upgrade.

I have visited Omaha as a tourist before, but this is my first upgrade workshop in the area. I hope to see you there! 

ORAchk with new checks + Restart support


Never used or heard about ORAchk? Then it's time to give it a try! 

ORAchk has now been released. ORAchk has so many great features - especially more than 50 new health checks, OL7 support - and is now aware of Oracle Restart environments.

New features include:

  • Linux on System Z (RHEL 6, RHEL 7, SuSE 12)
  • Oracle Enterprise Linux 7
  • Single Instance ASM Checks
  • Upgrade Validation checks for
  • Enhanced Golden Gate Checks 
  • Enterprise Manager Agent Checks
  • Enhanced Reporting to highlight checks that ORAchk cannot gain full visibility for
    (checks that require manual validation)
  • Improved health Score Calculation (you can now hit 100%)
  • Over 50 new health checks
  • Bug Fixes

Download the most recent version via MOS Note:1268927.1.

Learn more about the Upgrade Readiness Assessment with ORAchk in MOS Note:1457357.1


PS: Updated the post on Mar 7, 2015, as ORAchk became production!

Monday Mar 02, 2015

New Behaviour in Oracle Database 12c and SELECT ANY DICTIONARY with reduced privilege set

You've just upgraded to Oracle Database 12c - but your favorite admin tool receives an ORA-1031: Insufficient Privileges after connection?

Then the reason may be the reduced set of privileges for the SELECT ANY DICTIONARY privilege. This privilege no longer allows access to the tables USER$, ENC$, DEFAULT_PWD$, LINK$, USER_HISTORY$, CDB_LOCAL_ADMINAUTH$, and XS$VERIFIERS. Actually such changes are not new. For instance, in Oracle 10.1 we removed the access to LINK$ from SELECT ANY DICTIONARY (well, this may have happened because the dblink's password was stored in clear text in LINK$ - a misbehavior which has been fixed since Oracle 10.2).

Please be very careful with granting this privilege. Furthermore, you need to be aware that it can neither be granted through a role, nor is it included in GRANT ALL PRIVILEGES.

Oracle 11.2:

Oracle 12.1:

Documentation can be found here: 

  1. SELECT ANY DICTIONARY Privilege No Longer Accesses Some SYS Data Dictionary Tables
    For better security, the SELECT ANY DICTIONARY system privilege no longer permits you to query the SYS schema system tables DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$, CDB_LOCAL_ADMINAUTH$, and XS$VERIFIERS. Only user SYS has access to these tables, but user SYS can grant object privileges (such as GRANT SELECT ON USER$ TO sec_admin) to other users.
  2. Increased Security When Using SELECT ANY DICTIONARY

Please be aware that you can't query anywhere inside the database which privileges are included in the SELECT ANY DICTIONARY privilege as this is embedded in our code.


PS: Credits go to Andy Kielhorn for highlighting this to me and thanks to Gopal for providing me with the doc links

Friday Feb 27, 2015

Why you seriously can't wait for the second release!

Premier Support for Oracle 11.2 ended 4 weeks ago, on 31-January-2015. 
Read here if you didn't know or don't believe it.

I think most Oracle DBAs are aware of it. And I have stressed the topic about the need to upgrade to Oracle Database a lot in the past months via the blog, in workshops and in discussions and customer meetings.

But there are still plenty of people out there who would like to wait for Oracle Database 12.2, the so-called "second" release. Looking backwards, I can understand this thinking. Neither Oracle 9.0 nor Oracle 10.1 were the best and most stable releases ever. If you waited a while then you could expect at least the first or sometimes even the 2nd patch set for the SECOND release being available. And then most people started going live. And some did wait for the terminal patch set.

But this has changed. You can't seriously wait anymore for the second release. Why? There are several reasons and it's fairly easy to explain.

  • Reason 1 - Every release is a full release
  • Reason 2 - Every release has a significant number of changes
  • Reason 3 - Every release has a significant number of new features
  • Reason 4 - Oracle is the TERMINAL patch set 
  • Reason 5 - The time span between releases has grown too large
  • Reason 6 - Important application providers will certify Oracle


Especially Reason 5 is very important. You can't seriously wait for Oracle Database 12.2 as you will potentially see a period with no bug fixing support for Oracle 11.2. 

So let's be honest:
You don't wait for the second release. You'll wait at least for the first patch set.
Patch sets in the past got released roughly 12 months after the initial drop had been put out (please don't get this wrong: I'm not saying that it will be released 12 months after - I'm just trying to project from the past!).

We announced already the planned availability of Oracle 12.2 for H1CY16 (first half of calendar year 2016).
See the Release Schedule MOS Note:742060.1 for further details.

And keep in mind that this is a plan and not a fixed schedule. So let's project the usual patch set cycle from the past. Then we may be in the 2nd half of 2017. If you start your tests (I hope you'll test :-) ) by then, you may be ready to go live in 2018 - and Extended Support for Oracle will end 31-January-2018.

Ouch ...

Look at the release cycles:

Oracle Release Cycle

It has grown from 18 months in the past to 45 months for Oracle Database. For Oracle Database 12.2 we may be at 30-36 months based on the currently announced plan.

Any further questions?

Be smart and transform from "We'll go live on the 2nd release only" into "We'll go live with the current release's first or terminal patch set!". This will be Oracle Database. There's a reason why large application providers such as SAP will announce support for Oracle Database soon.


Thursday Feb 26, 2015

Oracle In-Memory Advisor with Oracle Multitenant? Issues?

Do you want to use the Oracle In-Memory Advisor in an Oracle Multitenant environment?

This is possible, but with the current version it will require a workaround. The next version of the IM Advisor is slated to support it without any workarounds.

At the moment the IM Advisor does not install into the CDB$ROOT container. But it can be installed into any PDB.

  • Create the user IMADVISOR in the PDB as local user first
  • Do not place any objects into the IMADVISOR schema - the user just needs a default and a temporary tablespace
  • Install the IM Advisor locally in this PDB by using the install script imstimadv.sql
    • The script will detect the presence of the IMADVISOR schema. If the IMADVISOR schema is empty, it will proceed. It will then ask you to press ENTER instead of prompting you to set the password and default tablespace. Unless you set up the IMADVISOR user with an automated authentication method, it will still prompt you for the IMADVISOR's password when it connects to the IMADVISOR user

An issue which I learned about in the past days:

ORA-24817 gets raised when using the IM Advisor's imadvisor_fetch_recommendations script unless you have a massive shared pool configured. The script does a SET LONGCHUNKSIZE 2000000000; and this value needs to be reduced.
A bug has been filed for it and I think it's supposed to be fixed in the next release of the IM Advisor.


Monday Feb 23, 2015

Upgrade the Operating System in a RAC Environment

Last week at the upgrade workshop in Budapest a customer had an interesting and - I believe - not uncommon question.

How can I upgrade my Linux Operating System in my RAC environment without taking the entire cluster down?
In this specific case the customer wanted to upgrade from RHEL 5.10 to RHEL6 or RHEL7.

Let's assume it's the typical 2-node-RAC where one wants to upgrade in a rolling fashion. And the data is stored within ASM.

  1. Drain Node 1 (i.e. take the workload off) - this will be the node getting upgraded first
  2. Remove Node 1 from the cluster (deleteNode procedure)
  3. Upgrade the OS (which is most likely a reimaging of the node). If the OS upgrade does not wipe out the entire server, you can follow MOS Note:1559762.1 as it shows an OS upgrade from OL 6.2 to 6.4 which leaves the Oracle installations intact 
  4. Add Node 1 back to the cluster (addNode procedure)
  5. Extend the Database Home to Node 1 using either cloning or addNode
  6. Make the database available on the newly added Node 1 and drain Node 2
  7. Repeat steps 2-6 for Node 2 
  8. Ideally now you'll perform a rolling upgrade of Oracle GI to Oracle Database 12c. Please apply the most recent PSU as well.
  9. Furthermore you may now look into upgrading your databases to Oracle Database as well.
See the following documentation: 
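Steps 2 and 4 could be sketched like this (node names and paths are placeholders, and the exact syntax differs between GI versions - always follow the documentation for yours):

```shell
# Step 2 - remove node1 from the cluster (run as root from a surviving node)
$GRID_HOME/bin/crsctl delete node -n node1

# Step 4 - after the OS upgrade, add node1 back (run from a surviving node)
$GRID_HOME/addnode/addnode.sh -silent \
    "CLUSTER_NEW_NODES={node1}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node1-vip}"
# ... then run the generated root scripts on node1 when prompted
```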

Friday Feb 20, 2015

Upgrade Workshops coming in March in North America

The days are getting longer, the snow is still getting deeper, and the upgrade workshops keep coming in North America! Remember that you can always go to and plug in a keyword of "upgrade" to find upcoming live events. Roy will be the speaker at some of these events, while others will be delivered by some of our top presales technical folks like Dan Wittry:

March 5: Oracle 12c Database Upgrade Seminar-Detroit (Troy, MI)

March 11: Oracle Database 12c Upgrade Seminar - Omaha (La Vista, NE)

March 12: Oracle Database 12c Upgrade Seminar - Redwood Shores, CA Sold out!

March 17: Oracle 12c Database Upgrade Seminar-Calgary

March 19: Oracle Database 12c Upgrade Seminar - Seattle (Bellevue, WA)

March 19: Oracle Database 12c Upgrade Seminar Tampa

As you can see, we do sometimes reach capacity and have to shut down registrations. If you are interested in any of these events, sign up now and don't be left out in the cold (literally in the north, or metaphorically in other regions).

Downgrade Oracle Restart 12c - Grid Infrastructure only?

Can you downgrade your Oracle Restart installation from Oracle 12c back to Oracle 11g?

Actually there's no real direct downgrade supported for Oracle Restart. But of course there's a way to do it.

Basically it is: 

  • Deconfigure Oracle Restart in 12c
  • Configure Oracle Restart in 11g 

But wait a minute. It is very important to know if you have upgraded your database already. If that is the case then first you MUST downgrade your database(s) as you can't manage a higher version Oracle Database with a lower version Clusterware.

So please downgrade your Oracle database(s) first: 

At the next stage you'll need to "downgrade" the Oracle Clusterware resp Grid Infrastructure for Oracle Restart: 

Before you attempt this you'll need to deconfigure the Restart resources - and please be aware that here's a small difference in commands between Oracle and Oracle

This is from the documentation for Oracle

  • Deconfigure Oracle Restart:
    • Log in as root
    • cd /u01/app/
    • -deconfig -force
  • Once this is complete you can now deinstall the Grid Infrastructure with the deinstall tool
  • Then you will need either to reinstall the previous - for instance Oracle - Grid Infrastructure or - if it's still present on the machine - reconfigure it by running from the previous Clusterware's home
  • And finally reconfigure the database(s) again with Clusterware
    • $ srvctl downgrade database -d db-unique-name -o oraclehome -t to_version

In case you'd plan to do this exercise back from Oracle instead then you'll have different steps to follow to deconfigure Oracle Restart

  • Deconfigure Oracle Restart
    • # cd /u01/app/
    • # /u01/app/ /u01/app/ -deconfig

Regarding ASM backwards compatibility please be aware that changing the COMPATIBLE attributes within ASM will get you in trouble - see the Oracle Database Upgrade Guide for further information:


PS: See bug 18160024 or the GI install guide, section A11.4, for the Standalone GI downgrade instructions from 

Monday Feb 16, 2015

Oracle Fail Safe 4.1.1 released - supports Multitenant

Oracle Fail Safe 4.1.1 is now released. It will be included in future Oracle Database 12c media packs. Customers can download the kit and documentation from the Oracle Technology Network (OTN) now by starting at the main Fail Safe page.  

The main changes in this release are:

  • Basic multi-tenant (container database, or CDB) support
  • This release of Oracle Fail Safe adds support for the container database feature that was introduced in Oracle Database 12c. Fail Safe will recognize that a database is a root container and will start and stop individual pluggable databases owned by the container database. When a database is failed over or moved to another node in the cluster, Oracle Fail Safe will start each pluggable database using the state that was saved by the last SQL ALTER PLUGGABLE DATABASE ALL SAVE STATE command. Oracle database 12c patch set 1 ( is required to have the ability to save the state of the pluggable databases. 

  • Improved net (TNS) configuration
  • Cluster Shared Volume (CSV) support

 All documentation for this release, including the release notes, can be found here.


Friday Feb 13, 2015

Hands-On-Lab "Upgrade, Migrate & Consolidate to Oracle Database 12c" available for DOWNLOAD now!

Wow ... that was a hard piece of work. Roy put a lot of effort into getting our Hands-On-Lab onto OTN for download. We promised to have it available after OOW - or at least a description of how to create it yourself. And finally it's there. Find it here:

A few important things to mention before you start the download: 

  • It's a Virtual Box image
  • You will need to install Oracle Virtual Box first - and please install also the VBox Extensions
  • Your PC must have a 64-bit host operating system
  • You need to enable Virtualization options in your computer's BIOS
  • Your PC should have at least 4GB of RAM - having 8GB is very helpful
  • A fast disk (SSD) will speed up things
  • The instructions are available for download but are included in the download as well
  • The lab will guide you through the following tasks:
    1. Upgrade an database to Oracle
    2. Plug in this database into a CDB
    3. Migrate an database with Full Transportable Export into another PDB
    4. Unplug an PDB and plug/upgrade it into an CDB

You'll find a picture as screen background inside the VBox image always giving you guidance about "what to accomplish" and "how to switch environments".

Enjoy :-)




Monday Feb 09, 2015

Virtual Technology Summit with Hands-On Labs!

VTS Virtual Technology Summit

If you are at all interested in upgrade to Oracle Database 12c, you really should sign up for the Virtual Technology Summit for February. This is a "pseudo-live" event with recorded webcasts and live experts online to answer questions. Even better, it includes a hands-on lab that will take you through the process of:

  1. Upgrading from to
  2. Plugging that upgraded database in as a PDB
  3. Migrating a second database to a second PDB using the new Full Transportable Export/Import feature

The Virtual Technology Summit will run three times: 

and it has content for developers and sysadmins as well as DBAs.

So, sign up now and get a head start by downloading the hands-on lab VM image here:

We think you will find the VM environment really useful to keep around for those times when you want to try something out, without having to install binaries or create databases. 


Friday Feb 06, 2015

New MOS Note:1962125.1 - Overview of Database Patch Delivery Methods

Usually I don't announce MOS Notes, but this one may be very helpful for sorting out all the different patches available these days for the database alone - from Patch Sets to PSUs to SPUs to Interim Patches to Bundle Patches and so on.

Plus it also includes recommendations for testing, and whether you should apply them on a regular basis.

A very important MOS Note for most DBAs:

MOS Note:1962125.1
Overview of Database Patch Delivery Methods

MOS Note - Patch Delivery Methods


Thursday Feb 05, 2015

Oracle Multitenant: New SQL Container Clause

Tiny little enhancement in Oracle Database:
The new CONTAINERS clause to access data from different containers within one SQL statement. This may be very helpful, especially in case of schema consolidation. Similar things could have been done in Oracle already by using database links - but resulting in way more complicated SQLs.

This is the new clause: 

SELECT ename FROM CONTAINERS(scott.emp) WHERE CON_ID IN (45, 49); 

See the documentation for more info about it.


Tuesday Feb 03, 2015

How to migrate to Unified Auditing?


What is Unified Auditing and is it on by default?

Unified Auditing is the new auditing facility since Oracle Database 12c. But the "old" auditing is still working. And there are a few things to mention if you'd like to make the right choice. I have written some things about it a while ago, but as I discovered yesterday, my previous blog post doesn't satisfy all my needs.

The initial motivation to move towards the new Unified Audit trail is audit performance. The audit records will be written into the read-only AUDSYS table in the SYSAUX tablespace. But there are other benefits, such as no dependency on init.ora parameters, one location - one format, and close interaction with Oracle Audit Vault and Database Firewall. And of course tiny things such as the immediate write, which avoids losing any audit records during an instance crash.

Audit records are coming from those sources:

  • Audit records (including SYS audit records) from unified audit policies and AUDIT settings
  • Fine-grained audit records from the DBMS_FGA PL/SQL package
  • Oracle Database Real Application Security audit records
  • Oracle Recovery Manager audit records
  • Oracle Database Vault audit records
  • Oracle Label Security audit records
  • Oracle Data Mining records
  • Oracle Data Pump
  • Oracle SQL*Loader Direct Load 

In addition to user SYS all users having the roles AUDIT_ADMIN and AUDIT_VIEWER can query the AUDSYS table.

After upgrading to Oracle Database 12c, Unified Auditing is not enabled by default in order to prevent customers who have "old" auditing on already from enabling both auditing facilities at the same time. This is something you need to be aware of: Unified Auditing can be on together with the "old" auditing at the same time.

Check if Unified Auditing is linked into the oracle kernel:

SQL> select parameter, value from v$option where parameter = 'Unified Auditing';

PARAMETER         VALUE
----------------  ---------- 
Unified Auditing  FALSE

To link it into the kernel or enable it use the following commands/actions - and the documentation states that you'll have to shut down the listener and restart it again afterwards:

  • UNIX
    • cd $ORACLE_HOME/rdbms/lib
      make -f uniaud_on ioracle
  • Windows
    • Rename the file %ORACLE_HOME%/bin/orauniaud12.dll.option to %ORACLE_HOME%/bin/orauniaud12.dll

The tricky part is now that - even though Unified Auditing is not enabled by default - Unified Auditing is enabled in a Mixed Mode, i.e. there are two auditing policies enabled - but the option is not linked into the kernel.

To disable these policies you'll execute:

  • SQL> noaudit policy ORA_SECURECONFIG;
  • SQL> noaudit policy ORA_LOGON_FAILURES;

Don't get me wrong: This is not a recommendation to disable Unified Auditing. I just would like to explain what's on and the possibilities to turn things into the desired direction. The documentation says about Mixed Mode:

Mixed mode is intended to introduce unified auditing, so that you can have a feel of how it works and what its nuances and benefits are. Mixed mode enables you to migrate your existing applications and scripts to use unified auditing. Once you have decided to use pure unified auditing, you can relink the oracle binary with the unified audit option turned on and thereby enable it as the one and only audit facility the Oracle database runs. 

How do you enable a Unified Auditing Policy?

The documentation offers a straightforward tutorial (which is a bit EM driven):

How to change between IMMEDIATE and QUEUED WRITE mode?

For a performance evaluation please see Szymon's blog post at the CERN blogs. To switch between the different modes please see the Oracle Documentation:

  • To use immediate write mode use this procedure:
  • To use queued write mode run this procedure:

By default the size of the queue is 1MB. If you'd like to change it (maximum: 30MB), the initialization parameter UNIFIED_AUDIT_SGA_QUEUE_SIZE has to be adjusted.
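The procedure in question is DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY - a sketch of both mode switches plus the queue size change (my wording; double-check against the Security Guide for your release):

```sql
-- Switch the unified trail to immediate write mode
begin
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE,
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);
end;
/

-- Switch back to queued write mode
begin
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE,
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_QUEUED_WRITE);
end;
/

-- Grow the queue from the default 1MB to e.g. 10MB (static, needs a restart)
alter system set unified_audit_sga_queue_size = 10485760 scope = spfile;
```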

What happens now to the traditional AUDIT_TRAIL parameter and what effect does it have?

AUDIT_TRAIL will still trigger and direct the "old" auditing facility (SYS.AUD$ for the database audit trail, SYS.FGA_LOG$ for fine-grained auditing, DVSYS.AUDIT_TRAIL$ for Oracle Database Vault, Oracle Label Security, and so on). So be aware of having both auditing facilities on at the same time, as this won't make much sense. Our recommendation since Oracle Database 11g is generally to set AUDIT_TRAIL in every 11g/12c database explicitly to the value you want. Otherwise it could always happen (and happens many times) that your database accidentally writes audit records into AUD$. The reason why this happens so often: the default setting for AUDIT_TRAIL since Oracle Database 11g is "DB" unless you change it in the non-default parameters section of the DBCA (Database Configuration Assistant).

Therefore always set AUDIT_TRAIL explicitly to the value you want to prevent the database from accidental auditing.

Summary - Steps to migrate to Unified Auditing?

  1. Turn off traditional auditing with AUDIT_TRAIL=NONE
  2. Link Unified Auditing into the kernel or enable it on Windows
  3. Define your auditing policies 
  4. Monitor it with the views UNIFIED_AUDIT_TRAIL and in multitenant environments with CDB_UNIFIED_AUDIT_TRAIL
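Steps 1 and 4 in SQL terms might look like this (a sketch; step 2 is the relinking shown earlier in this post):

```sql
-- Step 1: turn off traditional auditing (static parameter, needs a restart)
alter system set audit_trail = none scope = spfile;

-- Step 4: after defining policies, monitor the unified trail
select event_timestamp, dbusername, action_name
  from unified_audit_trail
 order by event_timestamp desc;
```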

A final question remains unanswered:
What happens to your existing audit records in AUD$ and the other auditing tables?

Actually I can't answer this question but to me there seems to be no way to migrate existing audit records into the new Unified Auditing facility. But I don't think that this will cause any issues as you can keep and safely store the contents of the traditional auditing. They don't get overwritten or deleted during an upgrade.

Further information required?



Mike Dietrich - Oracle Mike Dietrich
Master Product Manager - Database Upgrade & Migrations - Oracle

Based in Germany. Interlink between customers/partners and the Upgrade Development. Running workshops between Arctic and Antarctica. Assisting customers in their reference projects onsite and remotely. Connect via:


