Friday May 13, 2016

Recap - OTN Tour EMEA - Milano, Italy - 2016

Milano Centrale - OTN Tour EMEA 2016

Thanks for this excellent day yesterday in Milano for the OTN EMEA Tour 2016. What a great audience, excellent speakers covering a broad spectrum - and also a perfect organization.

In case you need the slides, please find them here - or many more, including the Hands-On Lab - in the Slides Download Center to your right (scroll down a bit):

I'm looking forward to Baku, Azerbaijan now - but I also hope that the Italian User Group will now become more active. There's such huge potential and so many great customers and people ...!!!

Ciao!

--Mike

Thursday Mar 31, 2016

DROP PLUGGABLE DATABASE - things you need to know

Directly after my DOAG (German Oracle User Group) Conference presentation about "How Single-/Multitenant will change a DBA's life" Martin Bach (Enkitec) approached me and told me about his experiences with the DROP PLUGGABLE DATABASE command and future recoverability.

Martin discovered that once you issued the DROP PLUGGABLE DATABASE command you can't reuse a previously taken backup of this particular PDB anymore and recover the PDB into this existing CDB. I wasn't aware of this and I'm glad that Martin told me about it.

Actually, only the meta information in the controlfile or the RMAN catalog gets deleted. The archive logs and backups still persist.
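For illustration, the command in question looks like this (a sketch; the PDB name is made up, and the PDB must be closed before dropping). Note that even the INCLUDING DATAFILES variant removes only the files and the metadata - existing backup pieces stay on disk but become unusable for a restore into this CDB:

```sql
-- Connected to the CDB$ROOT as SYSDBA
ALTER PLUGGABLE DATABASE pdb2drop CLOSE IMMEDIATE;

-- KEEP DATAFILES (the default) removes only the PDB's metadata;
-- INCLUDING DATAFILES deletes the datafiles from disk as well.
DROP PLUGGABLE DATABASE pdb2drop KEEP DATAFILES;
```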

See also my blog post from Jan 9, 2015:
Recent News about Pluggable Databases - Oracle Multitenant

This is the error message you'll see when you try to recover a dropped pluggable database:

RMAN> restore pluggable database pdb2drop;

Starting restore at 01-JUN-15
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/01/2015 10:10:40
RMAN-06813: could not translate pluggable database pdb2drop

Just a few weeks ago a colleague from German presales asked me if I knew a more convenient way to restore a PDB once the magic command (DROP PLUGGABLE DATABASE) has been issued than recovering it into an auxiliary container database and unplugging/plugging it back. I didn't.

But Nik (thanks!!!) told me that he had pushed a MOS Note to publication explaining how to work around this issue:

MOS Note: 2034953.1
How to Restore Dropped PDB in Multitenant

In brief this MOS Note describes how to:

  • Create an auxiliary container database
  • Recover the backup (yes, you will have to have a backup of your container database) including this particular PDB
  • Unplug the PDB after recovery has been finished and plug it back into the original CDB
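In SQL/RMAN terms, the workaround could be sketched like this (names, paths and times are made up; please follow the MOS Note for the exact procedure, as the details depend on your backup situation):

```sql
-- 1. On an auxiliary instance: restore and recover the CDB backup
--    (via RMAN) to a point in time before the DROP PLUGGABLE DATABASE:
--    RMAN> restore database;
--    RMAN> recover database until time "to_date('2015-06-01 09:00','YYYY-MM-DD HH24:MI')";
--    RMAN> alter database open resetlogs;

-- 2. In the auxiliary CDB: unplug the recovered PDB
ALTER PLUGGABLE DATABASE pdb2drop CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb2drop UNPLUG INTO '/tmp/pdb2drop.xml';

-- 3. In the original CDB: plug it back in and open it
CREATE PLUGGABLE DATABASE pdb2drop USING '/tmp/pdb2drop.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb2drop OPEN;
```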

Now some will say: Hey, that's simple and obvious. For me it wasn't ;-) That's why I write about it to remind myself of this workaround ...

--Mike



Thursday Jan 28, 2016

TDE is wonderful - Journey to the Cloud V

DBaaS Oracle Cloud

 What happened so far on my Journey to the Cloud?

Today's journey:
Learn about TDE (Transparent Data Encryption) and other secrets

What I really really love about my job: Every day I learn something new.

But sometimes learning can be frustrating at the beginning. And so it was for Roy and myself in the past days when we explored the use of TDE (Transparent Data Encryption) in our DBaaS Cloud environments. But many thanks to Brian Spendolini for his continuous 24x7 support :-)

Never heard of Transparent Data Encryption before? Then please read on here. It's usually part of ASO (Advanced Security Option) but it is included in the cloud offering. 

But first of all, before taking care of TDE and PDBs, I tried to deploy a new DBaaS VM ...

PDB names can't contain underscores?

Well, one learning experience is that initially you can't create a PDB in the DBaaS environment with an underscore in the name. I wanted to simply name the PDB in my new env "TDE_PDB1" (8 characters, all should be fine) - but received this nice message:

Don't worry if you don't speak German - it basically says that the name can't be longer than 8 characters (ok, I knew that), must begin with a character (mine does, of course) and can only contain characters and numbers (eh?? no underscores???). Hm ...?!?

Ok, I'll name mine "TDEPDB1".

Of course outside this page you can create PDBs containing an underscore:

SQL> create pluggable database PDB_MIKE admin user mike identified by mike
  2  file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb_mike');

Pluggable database created

That's what happens when application logic tries to supersede database logic.

(Almost) undocumented parameter: encrypt_new_tablespace

Thanks to Robert Pastijn for telling me about this hidden secret: a new parameter which is not present in a regular database deployment but only in the cloud.

encrypt_new_tablespaces

First check in MOS:

Interesting.

So let's check with Google.
And here it is: 7 hits, for example:

Controlling Default Tablespace Encryption

The ENCRYPT_NEW_TABLESPACES initialization parameter controls default encryption of new tablespaces. In Database as a Service databases, this parameter is set to CLOUD_ONLY.

  • ALWAYS - Any tablespace created will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause.

  • CLOUD_ONLY - Tablespaces created in a Database Cloud Service database will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause. For non-Database Cloud Service databases, tablespaces will only be encrypted if the ENCRYPTION clause is specified. This is the default value.

  • DDL - Tablespaces are not transparently encrypted and are only encrypted if the ENCRYPTION clause is specified.

What I found really scary is the fact that I couldn't find it in my spfile/pfile. You can alter it with an "alter system" command but you can't remove it.

The idea behind this is great as tablespaces should be encrypted, especially when they reside in a cloud environment. TDE is a very useful feature. And this mechanism exists regardless of your edition, whether you have Standard Edition or Enterprise Edition in any sort of flavor in the DBaaS Cloud.
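Given the documented values above, switching the default behavior is an ALTER SYSTEM away (a sketch - remember that the parameter can be changed but apparently not removed):

```sql
-- Encrypt every new tablespace, cloud or on-premises:
ALTER SYSTEM SET encrypt_new_tablespaces = ALWAYS SCOPE=BOTH;

-- Or encrypt only when an ENCRYPTION clause is specified:
ALTER SYSTEM SET encrypt_new_tablespaces = DDL SCOPE=BOTH;
```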

A new tablespace will be encrypted by default:

SQL> CREATE TABLESPACE TS_MIKE DATAFILE 'ts_mike01.dbf' SIZE 10M;

Then check:

SQL> select TABLESPACE_NAME, ENCRYPTED from DBA_TABLESPACES;

But we'll see later if this adds some constraints to our efforts to migrate a database for testing purposes into the DBaaS cloud environment.

Is there anything encrypted yet?

Quick check after setting the following (so that the seed's tablespaces appear in the CDB_* views as well):

SQL> alter system set exclude_seed_cdb_view=FALSE scope=both;

I tried to find out if any of the tablespaces are encrypted.

SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

TABLESPACE_NAME                ENC     CON_ID
------------------------------ --- ----------
SYSTEM                         NO           1
USERS                          NO           1
SYSAUX                         NO           1
UNDOTBS1                       NO           1
TEMP                           NO           1

SYSTEM                         NO           2
USERS                          NO           2
TEMP                           NO           2
SYSAUX                         NO           2

EXAMPLE                        NO           3
USERS                          NO           3
TEMP                           NO           3
SYSAUX                         NO           3
APEX_1701140435539813          NO           3
SYSTEM                         NO           3

15 rows selected.

Looks good.  Nothing encrypted yet.

How does the new parameter ENCRYPT_NEW_TABLESPACES affect operation?

Ok, let's try.

SQL> show parameter ENCRYPT_NEW_TABLESPACES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------
encrypt_new_tablespaces              string      CLOUD_ONLY

And further down the road ...

SQL> alter session set container=pdb1;

SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_plays_with_tde.dbf' size 10M;

Tablespace created.

SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

TABLESPACE_NAME                ENC     CON_ID
------------------------------ --- ----------
SYSTEM                         NO           3
SYSAUX                         NO           3
TEMP                           NO           3
USERS                          NO           3
EXAMPLE                        NO           3
APEX_1701140435539813          NO           3
MIKE_PLAYS_WITH_TDE            YES          3

7 rows selected.

Ah ... so my new tablespace is encrypted. Not bad ... so far TDE has no influence. I can create objects in this tablespace, query them etc. It is not disturbing at all. Good.

How does this key thing work in the DBaaS Cloud?

The documentation in the white paper above tells us this:

Managing the Software Keystore and Master Encryption Key

Tablespace encryption uses a two-tiered, key-based architecture to transparently encrypt (and decrypt) tablespaces. The master encryption key is stored in an external security module (software keystore). This master encryption key is used to encrypt the tablespace encryption key, which in turn is used to encrypt and decrypt data in the tablespace.

When the Database as a Service instance is created, a local auto-login software keystore is created. The keystore is local to the compute node and is protected by a system-generated password. The auto-login software keystore is automatically opened when accessed.

You can change (rotate) the master encryption key by using the tde rotate masterkey  subcommand of the dbaascli  utility. When you execute this subcommand you will be prompted for the keystore password. Enter the password specified when the service instance was created.

Creating a new PDB

That's easy, isn't it?

SQL> alter session set container=cdb$root;

Session altered.

SQL> create pluggable database pdb2 admin user mike identified by mike;

Pluggable database created.

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

TABLESPACE_NAME                ENC     CON_ID
------------------------------ --- ----------
SYSTEM                         NO           4
SYSAUX                         NO           4
TEMP                           NO           4
USERS                          NO           4

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
ystem_cbn8fo1s_.dbf

/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
ysaux_cbn8fo20_.dbf

/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_u
sers_cbn8fo27_.dbf

Ah, bad thing. As I neither used the file_name_convert option nor changed the PDB_FILE_NAME_CONVERT initialization parameter, my new PDB's files got created in the "root" path of the CDB. I don't want this. But isn't there this cool new feature called ONLINE MOVE OF DATAFILES in Oracle Database 12c? Ok, it's an EE feature, but let me try it after checking the current OMF file names in DBA_DATA_FILES and DBA_TEMP_FILES:

SQL> !mkdir /u02/app/oracle/oradata/MIKEDB/PDB2

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_system_cbn8fo1s_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf';

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_sysaux_cbn8fo20_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf';

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_users_cbn8fo27_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf';

Be prepared:
This will create a 1:1 copy of the file in the designated location and synch afterwards. It may take a minute per file.

And moving the TEMP tablespace(s) file(s) will fail.

SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf';

*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file
"/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf"

The temporary tablespace will have to be dropped and recreated. But not a big deal. 
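Recreating TEMP inside the PDB could look like this (a sketch with made-up file names; switch the default temporary tablespace before dropping the old one):

```sql
-- Inside the PDB (alter session set container=pdb2)
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf' SIZE 100M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
```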

Check:

SQL> select file_name from dba_data_files;

FILE_NAME
-----------------------------------------------------
/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf
/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf
/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf

Let me fix this so I don't hit this pitfall again:

SQL> alter system set pdb_file_name_convert='/u02/app/oracle/oradata/MIKEDB/pdbseed','/u02/app/oracle/oradata/MIKEDBPDB2';

Final verification:

SQL> select name, value from v$system_parameter where con_id=4;

NAME                   VALUE
---------------------- ----------------------------------
resource_manager_plan
pdb_file_name_convert  /u02/app/oracle/oradata/MIKEDB/pdbseed, /u02/app/oracle/oradata/MIKEDBPDB2


Now the fun part starts ... ORA-28374: typed master key not found in wallet

Remember this command from above in my PDB1? It ran fine. But now it fails in PDB2.

SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M;

create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M
*
ERROR at line 1:
ORA-28374: typed master key not found in wallet

Voodoo in the database? I'm worried - especially as Roy had the same issue days before. But why did the command pass through without issues before in PDB1 - and now it doesn't in PDB2? Is it because the PDB1 is precreated - and my PDB2 is not?

Kinda strange, isn't it?
So connecting back to my PDB1 and trying again: 

SQL> alter session set container=pdb1;

Session altered.

SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_still_plays_with_tde.dbf' size 10M;

Tablespace created.

Ok, now I'm worried.
What is the difference between the precreated PDB1 and my new PDB2?

Why do I get an ORA-28374 in my fresh PDB2?

When we compare the wallet status in both PDBs we'll recognize the difference:

PDB1:

SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

WRL_TYPE        WALLET_TYPE          STATUS
--------------- -------------------- -----------------------
FILE            AUTOLOGIN            OPEN

PDB2:

SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

WRL_TYPE        WALLET_TYPE          STATUS
--------------- -------------------- -----------------------
FILE            AUTOLOGIN            OPEN_NO_MASTER_KEY

Now thanks to Brian Spendolini I have a working solution. But I'm not convinced that this is an obvious path ...
Remember? I just would like to create a tablespace in my new (own, non-precreated) PDB. That's all ... 

SQL> alter session set container=cdb$root;

SQL> administer key management set keystore close;

keystore altered.

SQL> administer key management set keystore open identified by <your-sysadmin-pw> container=all;

keystore altered.

SQL> alter session set container=pdb2;

Session altered.

SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

WRL_PARAMETER                             STATUS             WALLET_TYPE
----------------------------------------- ------------------ -----------
/u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN_NO_MASTER_KEY PASSWORD

SQL>  ADMINISTER KEY MANAGEMENT SET KEY USING TAG  "tde_dbaas" identified by <your-sysadmin-pw>  WITH BACKUP USING "tde_dbaas_bkup"; 

keystore altered.

SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

WRL_PARAMETER                             STATUS   WALLET_TYPE
----------------------------------------- -------- -----------
/u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN     PASSWORD

And finally ... 

SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_still_plays_with_tde.dbf' size 10M;

Tablespace created.

Wow!!!

That was not straightforward at all. Maybe it's all down to my almost non-existent knowledge about TDE.

Ah ... and let me say that I find the lowercase "keystore altered." echo messages quite disturbing. But this is a generic and, of course, non-critical issue ...

--Mike

PS: Read on about Seth Miller's experience here on Seth's blog:
http://sethmiller.org/oracle-2/oracle-public-cloud-ora-28374-typed-master-key-not-found-in-wallet/ 




Thursday Nov 19, 2015

Why you should remove APEX from the CDB$ROOT

Upgrade Blog posts about Oracle Application Express:


Oracle APEX (Application Express) is a great piece of software. But it gets installed by default into the container database's CDB$ROOT unless you customized your CDB creation via scripts in Oracle 12.1.0.2. See:

on how to customize a Single-/Multitenant Database with fewer options.

But that is not the topic I would like to write about. I came across a pitfall twice in the past days.

Think of having a local APEX (Application Express) application in a standalone database or already inside a PDB. You'd like to plug it into a CDB, regardless of whether this CDB is local or in the Oracle DBaaS Cloud. But your current APEX is on a lower version than the APEX inside the target CDB.

Let's assume you'd like to migrate this standalone database or PDB over into the cloud. But as soon as you plug it in - even though you'll have Oracle Database 12.1.0.2 on both sides, locally and in the cloud - you can't open your PDB in read/write mode. It will open RESTRICTED only.

Then you'll check PDB_PLUG_IN_VIOLATIONS and get a plug in error:

"APEX mismatch: PDB installed version 4.2.5.00.08 CDB installed version 5.0.0.00.31" 

The reason is simply that a standard Oracle Database 12.1.0.2 deployment contains APEX 4.2, whereas the Oracle Cloud deployment in this scenario already uses the more recent APEX version 5.0.

Having APEX installed in the CDB$ROOT was not a great idea. Actually it will cause you a lot of trouble, and it does not make much sense within the Multitenant concept.

Oracle APEX Multitenant

I see why it has been done this way: not having APEX in the CDB$ROOT would have meant not having it in the PDB$SEED either - and all further provisioned PDBs would not have had APEX deployed by default. In this case every PDB would have required a separate APEX installation - but then you'd gain the freedom of different APEX versions within the same Multitenant environment, and way more freedom during unplug/plug. I'll elaborate on this further down below.

Different APEX versions in Multitenant

First of all check your current APEX version with this query:

SQL> select COMP_ID, VERSION, STATUS from CDB_REGISTRY where COMP_ID='APEX' order by CON_ID;

COMP_ID  VERSION      STATUS
-------- ------------ -------
APEX     4.2.5.00.08  VALID

Find out in which container APEX is currently installed into:

SQL> select r.COMP_NAME, r.VERSION, c.NAME, c.CON_ID from CDB_REGISTRY r, V$CONTAINERS c where r.CON_ID=c.CON_ID and r.COMP_ID='APEX' order by CON_ID;

COMP_NAME                   VERSION   NAME      CON_ID
--------------------------- --------- --------- ------
Oracle Application Express  4.2.5.00  CDB$ROOT       1

Then choose the best solution for your environment to avoid trouble with APEX in a Multitenant environment.

Scenario

Migrate a PDB containing an APEX 4.2 deployment into another CDB with APEX 5.0 already installed in the CDB$ROOT container (the standard deployment in the Oracle DBaaS Cloud).

Two Solutions

Basically I see two potential solutions:

  1. Remove APEX from the CDB$ROOT before plugging in your PDB with a different APEX version
  2. Upgrade your APEX installation in the PDB after plugging it into the CDB to match the CDB's APEX version

Solution 1 - Remove APEX from the CDB$ROOT 

I'd prefer Solution 1 as this will give you way more freedom. It will even allow you to have different APEX versions in different PDBs within the same Multitenant environment. But you'll have to do these steps right after creation of the container database - or before you've worked with APEX in any of your PDBs, as the following steps will remove APEX from all your PDBs!!! If you already have APEX applications inside one of your PDBs you must export them before the APEX removal. Keep in mind that the "application export" facility in APEX 4 does not carry your image files automatically (APEX 5 is smarter).

Removing APEX from the CDB$ROOT will remove it from all PDBs as well

The Oracle documentation describes the path pretty straightforwardly.

  1. Make sure to change into the local $ORACLE_HOME/apex directory first before starting SQL*Plus, as otherwise the removal process won't work and will error out - I'd assume that path variables are not carried over correctly.
    cd $ORACLE_HOME/apex

  2. Connect to your CDB$ROOT:
    sqlplus / as sysdba

  3. Run the "Remove APEX from the CDB$ROOT" script:
    SQL> @apxremov_con.sql

  4. Check afterwards if APEX has been removed - also check for invalid objects. If necessary recompile. 
    In my tests two objects were INVALID after the removal but could be easily fixed
    (this part is not mentioned in the documentation):

    SQL> select COMP_ID, STATUS from DBA_REGISTRY where COMP_ID='APEX';
    no rows selected

    SQL> select object_name, status from dba_objects where status='INVALID';

    OBJECT_NAME                      STATUS
    -------------------------------- -------------
    APEX_PKG_APP_INSTALL_LOG         INVALID
    APEX_SPATIAL                     INVALID

    SQL> @?/rdbms/admin/utlrp.sql

    SQL> select object_name, status from dba_objects where status='INVALID';
    no rows selected

  5. Once you have done this, your PDB containing the APEX application should not give you APEX errors anymore upon plugin, as there's no longer a mismatch between the CDB$ROOT's and the PDB's APEX.

  6. Just in case - independently of my example - you'd like to install APEX locally into a PDB, you'll have to use the apexins.sql script:

    cd $ORACLE_HOME/apex
    sqlplus / as sysdba
    SQL> alter session set container=PDB1;
    SQL> @apexins.sql SYSAUX SYSAUX TEMP /i/


    Be aware - this step may run a long time (in my VBox image it took over 22 minutes to complete).

    timing for: Complete Installation
    Elapsed: 00:22:16.29


    Check the existence of the component finally:

    SQL> select comp_id, status , con_id from cdb_registry where comp_id='APEX';

    COMP_ID   STATUS  CON_ID
    --------- ------- -------
    APEX      VALID         3

Solution 2 - Upgrade APEX in the PDB after plugin

Even though I'd prefer Solution 1, there may be cases where you'll have to deal with Solution 2 and upgrade APEX after plugin. One simple reason could be that somebody already uses APEX in another PDB and you can't export and preserve it for later import - then you'd better not follow Solution 1, as it would force you to test your backup/restore strategy concept soon after ...

  1. Run apexins.sql (from the APEX 5.0 installation - $ORACLE_HOME/apex) only in this particular PDB after plugin. You'll have to use catcon.pl

  2. The documentation https://docs.oracle.com/cd/E59726_01/install.50/e39144/db_pluggable.htm#HTMIG29441
    may not be correct as it mentions all parameters without the --p prefix. Jason Straub corrected it:

    host &OH_HOME/perl/bin/perl -I &OH_HOME/rdbms/admin &OH_HOME/rdbms/admin/catcon.pl -b apexins -c '<PDB_name>' apexins_nocdb.sql --pSYSAUX --pSYSAUX --pTEMP --p/i/

    whereas &OH_HOME represents the full path to the Oracle home.


More Scenarios 

If you look at some of the potential scenarios you may see when trying to move PDBs around including APEX applications it is pretty obvious why having APEX only locally inside each PDB is a wise move. It will give you way more flexibility, less trouble, actually less upgrades (and APEX upgrades can take a while) - and simply less headache.

Assume that my CDB1 and CDB2 below are at the same database version - having different database versions will require upgrade/downgrade tasks in addition. 

APEX installed commonly within CDB$ROOT

  • CDB1 has APEX 4.2 in CDB$ROOT
    CDB2 has APEX 5.0 in CDB$ROOT
    • Unplug a PDB from CDB1 and want to plug it into CDB2 but you'll get an error in PDB_PLUG_IN_VIOLATIONS:
      “APEX mismatch: PDB installed version 4.2.5.00.08 CDB installed version 5.0.0.00.31”
    • That is actually the simple case as you'd only have to upgrade APEX inside the PDB:
      host &OH_HOME/perl/bin/perl -I &OH_HOME/rdbms/admin &OH_HOME/rdbms/admin/catcon.pl -b apexins -c '<PDB_name>' apexins_nocdb.sql --pSYSAUX --pSYSAUX --pTEMP --p/i/

  • CDB1 has APEX 5.0 in CDB$ROOT
    CDB2 has APEX 4.2 in CDB$ROOT
    • You unplug a PDB from CDB1 and want to plug it into CDB2 - but you can't actually plug in.
    • As APEX can't be downgraded to 4.2 in your PDB you'll have to upgrade APEX in the CDB2 to 5.0 first
    • Then you can plugin your PDB into CDB2
  • CDB1 has APEX in CDB$ROOT
    CDB2 has NO APEX in CDB$ROOT
    • Unplug a PDB with an APEX application from CDB1 and want to plug it into CDB2 - fails
    • You will need to install APEX into CDB2's CDB$ROOT before being able to plug in the PDB
      • And it has to be the same version of APEX as used in CDB1

APEX installed locally within the PDB only

  • CDB1 has NO APEX in CDB$ROOT
    CDB2 has APEX  in CDB$ROOT
    • Unplug a PDB from CDB1 (having APEX locally in the PDB only) and want to plug it into CDB2 - works if ...
      • the PDB's local APEX version matches CDB2's common APEX version
      • If they don't match ...
        • PDB has APEX 4.2
          CDB2 has APEX 5.0
          Works, but you will need to upgrade APEX in the PDB after plugin
        • PDB has APEX 5.0
          CDB2 has APEX 4.2
          Fails, and you'll have to upgrade CDB2's common APEX 4.2 to 5.0 first before being able to plug in.

  • CDB1 has NO APEX in CDB$ROOT
    CDB2 has NO APEX  in CDB$ROOT
    • That is actually the best case as you don't have to care.
      • The only thing you will need to be aware of is that you'll need to patch your ?/apex home in case the PDB already has a higher APEX version than the one we ship by default

More Information

You can find more information via these links:

And Finally ...

I'd recommend NEVER having APEX in any CDB$ROOT. I do see the advantage: a new PDB will always have APEX provisioned with it. That is convenient, no doubt. But honestly, if you really need APEX in most or all of your PDBs then create your own PDB$MASTER and provision from that one instead of using the PDB$SEED. In all other cases it's by far easier to install APEX inside the PDB when you really need it - even with the penalty that it takes some extra 10 or more minutes until it is ready to go.

Given also the fact that APEX upgrade may take a while as well (see this blog post: https://blogs.oracle.com/UPGRADE/entry/save_upgrade_downtime_upgrade_apex  ) there's another advantage which personally I consider way more important:

Having APEX locally in the PDB only offers you way more freedom and flexibility. All the complicated scenarios come up only when either the source has no APEX in the CDB$ROOT (APEX locally in the PDB) but the target has some version (most likely a different one), or the target has no APEX in the CDB$ROOT but the source had it. And, even more fun, when both CDBs have APEX in the CDB$ROOT but at different versions.

This can be totally avoided by NEVER having APEX installed in your CDB$ROOT.

Which means also for the Oracle Cloud DBaaS databases:
Remove APEX from the CDB$ROOT and install it into your PDB before really starting to do APEX stuff inside. APEX is such a cool piece of software - and it's free - and powerful - and easy!!!

And (thanks to Jason again for this hint) if you look into the APEX 5.0 Installation Guide you'll find this tip:

Oracle recommends removing Oracle Application Express from the root container database for the majority of use cases, except for hosting companies or installations where all pluggable databases (PDBs) utilize Oracle Application Express and they all need to run the exact same release and patch set of Oracle Application Express. To learn more, see "Installing Application Express into Different PDBs."


--Mike

Friday Sep 04, 2015

Oracle non-CDB architecture may be desupported after Oracle Database 12.2

You may wonder about the headline of this blog post.
Haven't I blogged about this a long time ago?

Yes, I did. On January 22, 2015 - over 7 months ago:
https://blogs.oracle.com/UPGRADE/entry/non_cdb_architecture_of_oracle

But in the past weeks this message has been refreshed and has become a bit more precise (which I'd guess everybody appreciates). Now we are clearly saying that such a change won't happen with Oracle 12.2. Before, we only said "in a later release".

See the Oracle Database 12c Upgrade Guide

Deprecation Note non-CDB architecture

In case you'd like to explore the world of (at least) Oracle Single Tenant (no extra license required, available even with SE2) - which I'd highly recommend - you'll find some useful advice in this slide deck:

  • How Oracle Single Tenant will change a DBA's life 
--Mike

Thursday Sep 03, 2015

Can you have Oracle Multitenant in Oracle 12.1.0.2 SE2?

Please find all our articles about Oracle Database Standard Edition 12.1.0.2 SE2:

Can you have Oracle Multitenant in Oracle Standard Edition 12.1.0.2 SE2?

No, you can't, as Oracle Multitenant is a licensable option for Enterprise Edition (EE) databases only.

But wait a second ...

You can of course do Oracle Single Tenant with SE2, meaning you can have one active pluggable database within one container database at a time. That is possible and does not require any additional licenses.

See the DBCA (Database Configuration Assistant) screen when you try to create a container database:

DBCA - Single Tenant

More information?

You'll find more information about the differences in handling and such between Oracle non-CDB, Oracle Single Tenant and Oracle Multitenant databases in this presentation:

--Mike

    Friday Jul 24, 2015

    Why does a PDB require an upgrade?

    Why do I need to upgrade (or downgrade) a Pluggable Database (PDB) once I unplug it from version A and replug into version B?

    This question is one of the most often asked questions in our workshops when Roy and I present Upgrades and Oracle Multitenant.

    If you take a look into the documentation you'll find:

    But unfortunately this is not 100% correct for every object.

    When you do a simple query for basic dictionary structures such as OBJ$ you'll easily find out that a PDB has its own Data Dictionary.  Some things are really linked into the PDB only - check the SHARING column of DBA_OBJECTS for instance. If it says "METADATA LINK" it will give you an indication that it exchanges information with its parent in the CDB$ROOT. But that doesn't mean necessarily that it's just an empty meta object. Some tables do exist in the PDB as well and allocate space. Check it by yourself and you'll find out that OBJ$ inside a PDB has in fact the same attribute - but still owns segments in the PDB and in the CDB$ROOT - as other objects do allocate space as well. And of course the object's definition exists in the PDB as well. 

    • CDB$ROOT

      OBJ$ in CDB$ROOT

    • PDB 

      OBJ$ in PDB

    So the answer is simple:
    A Pluggable Database (PDB) must be upgraded (or downgraded) when it gets moved between versions simply because it has its own data dictionary - and this dictionary does not get upgraded automatically.
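    You can see this dictionary versioning in action with a hedged sketch - PDB1 is a placeholder name - by comparing the registered component versions inside a PDB with those in the root:

    ```sql
    -- Switch into the PDB and list its own component registry:
    ALTER SESSION SET CONTAINER = PDB1;
    SELECT comp_id, version, status FROM dba_registry;

    -- Then compare with the root's registry:
    ALTER SESSION SET CONTAINER = CDB$ROOT;
    SELECT comp_id, version, status FROM dba_registry;
    ```

    After an unplug/plug across versions, the two listings will disagree until the PDB has been upgraded.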

     

    --Mike 

    Monday May 18, 2015

    Create a PDB directly from a stand-alone database?

    The documentation offers a well hidden feature for migrating a database into the universe of Oracle Single-/Multitenant:

    Remote Cloning with the NON$CDB option.

    If you read the documentation it doesn't say much about this option - neither the requirements, nor the exact syntax, nor an example:
    http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#SQLRF55686

    Scroll down to the FROM clause:

    ... FROM NON$CDB@dblink ... this option will plug in a stand-alone database and make it a pluggable database. Sounds interesting - let's try it.

    Test 1 - Try to plugin an Oracle 11.2.0.4 database

    Well, the documentation doesn't say anything anywhere about source release limitations. So I simply tried it with an Oracle 11.2.0.4 database.

    1. Created a database link from my existing CDB pointing into my 11.2.0.4 database
    2. Started my SOURCEDB in read-only mode
    3. Tried to create a pluggable database from my SOURCEDB - and failed ...
      SQL> create pluggable database PDB1 from non$cdb@sourcedb;
      create pluggable database PDB1 from non$cdb@sourcedb
                                                   *
      ERROR at line 1:
      ORA-17627: ORA-28002: the password will expire within 7 days
      ORA-17629: Cannot connect to the remote database server

    Test 2 - Try to plugin an Oracle 12.1.0.2 database in file system 

    Thanks to Tim Hall - his blog post did the magic trick for me:
    http://oracle-base.com/articles/12c/multitenant-clone-remote-pdb-or-non-cdb-12cr1.php#cloning-remote-non-cdb

    First of all, the reason my Test 1 failed is simply that an Oracle 11.2.0.4 database cannot have a user with the privilege CREATE PLUGGABLE DATABASE - but this is a requirement, as I learned later on.

    1. You'll need a user in SOURCEDB with the privilege to CREATE PLUGGABLE DATABASE:
      GRANT CREATE PLUGGABLE DATABASE TO sourcedb_user;
    2. Start SOURCEDB in read-only mode after shutting it down:
      SHUTDOWN IMMEDIATE
      STARTUP OPEN READ ONLY;
    3. Create a database link pointing from the CDB back into the SOURCEDB:
      CREATE DATABASE LINK sourcedblink
      CONNECT TO sourcedb_user IDENTIFIED BY password USING 'upgr12';
    4. Now create the pluggable database from the stand-alone UPGR12 database:
      CREATE PLUGGABLE DATABASE pdb_upgr12 FROM NON$CDB@sourcedblink
      FILE_NAME_CONVERT=('/oradata/UPGR12','/oradata/CDB2/pdb_upgr12');
    5. But when you check the status of the new PDB you'll realize it is OPEN, but only in RESTRICTED mode. Therefore noncdb_to_pdb.sql needs to be run. Connect to the new PDB and start the script:
      ALTER SESSION SET CONTAINER=pdb_upgr12;
      @?/rdbms/admin/noncdb_to_pdb.sql
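    To verify the result - a hedged sketch; pdb_upgr12 matches the example above - check the open mode before and after running the script:

    ```sql
    -- Before noncdb_to_pdb.sql the PDB shows OPEN with RESTRICTED = YES;
    -- after the script (and a close/open cycle) it should open unrestricted:
    SELECT name, open_mode, restricted FROM v$pdbs;

    ALTER PLUGGABLE DATABASE pdb_upgr12 CLOSE;
    ALTER PLUGGABLE DATABASE pdb_upgr12 OPEN;
    ```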

    Summary

    What will you get from this command? Actually it will allow a simple way to plug in a stand-alone database into a container database but the following restrictions apply:

    • Source database must be at least Oracle 12.1.0.1
    • Source database must be on the same OS platform
    • Source database must be at the same (equal) version as the container database
    • Script noncdb_to_pdb.sql needs to be run

    You may have a look at this MOS Note:1928653.1 Example for Cloning PDB from NON-CDB via Dblink as well [Thanks Krishnakumar for pointing me to this note].

    Finally, the only real simplification seems to be avoiding the extra step of creating the XML manifest file with DBMS_PDB.DESCRIBE - apart from that I can't see many other benefits beyond easing remote cloning within the above restrictions.

    And not to forget:
    The clone-via-db-link command does not allow you to specify the NOCOPY option. So you'll always get a copy of your source database - which may not be desired in every situation ...

    --Mike 


    Thursday Apr 23, 2015

    CDBs with less options now supported in Oracle 12.1.0.2

    When Oracle Multitenant was launched, Roy and I - amongst many other people - always mentioned that requiring all options in a Container Database (CDB$ROOT), and therefore also in the PDB$SEED (with the immediate result that all PDBs provisioned from the PDB$SEED have all options as well), would hinder customer adoption significantly.

    Almost all customers I have talked to about Oracle Multitenant in the past 3-4 years mentioned immediately that installing all options would be a huge problem for them, as (1) their policy is to install only what they are licensed for, in order to (2) prevent developers, users and DBAs from accidentally using features without even knowing that this or that requires a license.

    As it is not allowed to manipulate or change the PDB$SEED, the workaround - since PDBs were always allowed to have fewer options - has been to create a stand-alone Oracle 12c database with exactly the options you'd like to have as your gold standard, and then plug it in under a distinctive name, for instance PDB$MASTER. Switch it to read-only and make sure that from now on you provision every new PDB as a clone from PDB$MASTER, not from PDB$SEED.
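    The workaround can be sketched as follows - a hedged example; the names PDB$MASTER and PDB26 and the paths are placeholders, not a definitive procedure:

    ```sql
    -- Plug in the stand-alone gold-standard database as PDB$MASTER
    -- (XML manifest created beforehand with DBMS_PDB.DESCRIBE):
    CREATE PLUGGABLE DATABASE "PDB$MASTER" USING '/stage/master.xml' NOCOPY;

    -- Keep the master read-only so it can serve as a clone source:
    ALTER PLUGGABLE DATABASE "PDB$MASTER" OPEN READ ONLY;

    -- Provision new PDBs from the master instead of PDB$SEED:
    CREATE PLUGGABLE DATABASE PDB26 FROM "PDB$MASTER"
      FILE_NAME_CONVERT = ('/oradata/master', '/oradata/pdb26');
    ```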

    All Options in a CDB

    That would even have worked in the Single Tenant case, which does not require licensing the Oracle Multitenant option and where you have only one active ("customer-created") PDB. For this purpose you would have unplugged your PDB$MASTER after making it a pluggable database, and provisioned new PDBs with just your desired option set by plugging in PDB$MASTER under a new name (e.g. PDB26) using the COPY option of the command.

    Now this all becomes obsolete, as from now on it is allowed to have a CDB installation with fewer options. This applies to linked kernel modules (e.g. RAT) as well as to configured database components (e.g. JAVA, OWM, SPATIAL etc.).

    Please see the following new/rephrased MOS Notes:

    MOS Note:2001512.1 basically explains the following steps:

    • Do all the click work in DBCA (Database Configuration Assistant) to create a container database - but let DBCA only create the scripts
    • Edit the <SID>.sql script and remove the unwanted options according to the dependency table in the MOS Note
    • Edit the CreateDBCatalog.sql in case you want to remove OWM (Oracle Workspace Manager) creation as well 
    • Add the Oracle PERL $ORACLE_HOME/perl/bin in front of your $PATH variable
    • Start the <SID>.sh script on the shell prompt

    Here's an example of a CreateDBCatalog.sql and an XXXX.sql creating a CDB with no options except XDB (which is mandatory in Oracle Database 12c):

    cat CreateDBCatalog.sql

    SET VERIFY OFF
    connect "SYS"/"&&sysPassword" as SYSDBA
    set echo on
    spool /u01/app/oracle/admin/XXXX/scripts/CreateDBCatalog.log append
    alter session set "_oracle_script"=true;
    alter pluggable database pdb$seed close;
    alter pluggable database pdb$seed open;
    host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catalog /u01/app/oracle/product/12.1.0.2/rdbms/admin/catalog.sql;
    host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catproc /u01/app/oracle/product/12.1.0.2/rdbms/admin/catproc.sql;
    host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b catoctk /u01/app/oracle/product/12.1.0.2/rdbms/admin/catoctk.sql;
    -- host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b owminst /u01/app/oracle/product/12.1.0.2/rdbms/admin/owminst.plb;
    host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b pupbld -u SYSTEM/&&systemPassword /u01/app/oracle/product/12.1.0.2/sqlplus/admin/pupbld.sql;
    connect "SYSTEM"/"&&systemPassword"
    set echo on
    spool /u01/app/oracle/admin/XXXX/scripts/sqlPlusHelp.log append
    host perl /u01/app/oracle/product/12.1.0.2/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/XXXX/scripts -b hlpbld -u SYSTEM/&&systemPassword -a 1  /u01/app/oracle/product/12.1.0.2/sqlplus/admin/help/hlpbld.sql 1helpus.sql;
    spool off
    spool off


    cat XXXX.sql

    set verify off
    ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
    ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
    host /u01/app/oracle/product/12.1.0.2/bin/orapwd file=/u01/app/oracle/product/12.1.0.2/dbs/orapwXXXX force=y format=12
    @/u01/app/oracle/admin/XXXX/scripts/CreateDB.sql
    @/u01/app/oracle/admin/XXXX/scripts/CreateDBFiles.sql
    @/u01/app/oracle/admin/XXXX/scripts/CreateDBCatalog.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/JServer.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/context.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/ordinst.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/interMedia.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/cwmlite.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/spatial.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/labelSecurity.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/apex.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/datavault.sql
    -- @/u01/app/oracle/admin/XXXX/scripts/CreateClustDBViews.sql

    @/u01/app/oracle/admin/XXXX/scripts/lockAccount.sql
    @/u01/app/oracle/admin/XXXX/scripts/postDBCreation.sql
    @/u01/app/oracle/admin/XXXX/scripts/PDBCreation.sql
    @/u01/app/oracle/admin/XXXX/scripts/plug_PDB.sql
    @/u01/app/oracle/admin/XXXX/scripts/postPDBCreation_PDB.sql


    This results in a database having only these components - the minimal component set in Oracle 12.1.0.2: 

    COMP_ID  NAME
    -------- ----------------------------------
    CATALOG  Oracle Database Catalog Views
    CATPROC  Oracle Database Packages and Types
    XDB      Oracle XML Database

    -- Mike 

    Monday Mar 09, 2015

    Applying a PSU or BP to a Single-/Multitenant Environment

    I have already explained in broad details a while ago how to:

    But one may miss the steps for applying a PSU (Patch Set Update) or BP (Bundle Patch) to a Single-/Multitenant environment. At first everything works just the same if you choose the everything-at-once strategy, as datapatch will adjust all the required things regardless of whether it is executed in a stand-alone or a Single-/Multitenant environment.

    But what happens if you apply a PSU or a BP to one of your Multitenant environments and want to move PDBs one after another (or a few at a time) to the new environment?
    Or revert a PSU by unplugging from a CDB with the PSU installed - and plugging back into a CDB with a lower version or no PSU at all?

    First step - Check Plug In Compatibility 

    Before you even start your unplug/plug operation you should always perform the plugin check. This is divided into two simple steps:

    1. Create the XML description file for your PDB in CDB_SOURCE
      exec DBMS_PDB.DESCRIBE ('/home/oracle/PDB1_unplug.xml', 'PDB1');
    2. Run the plug in check in CDB_DEST
      begin
        if DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/home/oracle/PDB1_unplug.xml','PDB1') then
          DBMS_OUTPUT.PUT_LINE('No violations found - you can relax');
        else
          DBMS_OUTPUT.PUT_LINE('Violations found - check PDB_PLUG_IN_VIOLATIONS');
        end if;
      end;
      /

    No Plugin Violations?

    Then please follow the procedure described in:
    http://blogs.oracle.com/UPGRADE/entry/upgrade_pdbs_one_at_a
    without the upgrade part as you don't need to upgrade anything in this case of course. 


    Higher patch in CDB_DEST than in CDB_SOURCE?

    Then run this query:

    select TYPE,MESSAGE,ACTION from PDB_PLUG_IN_VIOLATIONS where NAME='PDB1';

    It will tell you to execute datapatch:

    TYPE    MESSAGE
    ------  ----------------------------------------------------------------------------
    ERROR   PSU bundle patch 1 (PSU Patch 4711): Installed in the CDB but not in the PDB

    ACTION
    ------------------------------------------------
    Call datapatch to install in the PDB or the CDB

    Lower patch in CDB_DEST than in CDB_SOURCE?

    Now this becomes a bit more tricky. See the output of PDB_PLUG_IN_VIOLATIONS:

    TYPE  MESSAGE
    ----- ----------------------------------------------------------------------------
    ERROR PSU bundle patch 1 (PSU Patch 4711): Installed in the PDB but not in the CDB

    ACTION
    ------------------------------------------------
    Call datapatch to install in the PDB or the CDB

    Huh? Install???
    What does this mean? Should I now install the current CDB/PDB's PSU into my target environment before being able to step down?

    Actually I think this message is misleading. And when you look into the My Oracle Support note describing this under scenario 3 (MOS Note:1935365.1 - Multitenant Unplug/Plug Best Practices) you'll see that the author silently assumed as well that it is more likely that you'll remove the patch from the PDB.

    But how do you remove changes that came in with datapatch from within a PDB only?

    You will need to run datapatch -rollback on the affected PDBs only:

    $> datapatch -rollback <patch id> -force [-bundle_series] -pdbs <pdb1,pdb2,...,pdbn>
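    To verify what datapatch has applied or rolled back per container - a hedged sketch; the view follows the 12c SQL patching registry, but the exact columns vary by patch level:

    ```sql
    -- Run from CDB$ROOT as a common user; shows the SQL patch state
    -- in every container so you can confirm the rollback took effect:
    SELECT con_id, patch_id, action, status, description
      FROM cdb_registry_sqlpatch
     ORDER BY con_id, action_time;
    ```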

    For further information see:

    --Mike 

    Thursday Jan 29, 2015

    Upgrade to 12c and Plugin - one fast way to move into the world of Oracle Multitenant

    What is the best way to move into Oracle Multitenant (which includes Single Tenant with just one PDB)?

    This question came up on the blog and is part of the Multitenant chapter in our big slide deck, but let me elaborate on it a bit more. Of course there are several ways to move a database into a Single-/Multitenant environment, such as Data Pump, but one fact is certain: you can't convert a stand-alone database into a container database (CDB$ROOT). A container database has to be built from scratch as the first step. Please use the DBCA to create it, as this is the easiest path.

    Upgrade and Plugin

    But how can your stand-alone database be moved now?

    1. Upgrade your stand-alone database to Oracle Database 12c, ideally to Oracle 12.1.0.2 with the newest PSU included already. Upgrade it to exactly the same version including the same PSU as your container database (CDB) got created from.
      --
    2. Then start your stand-alone database in read-only mode - of course at this step you'll encounter downtime:
      startup open read only;
      --
    3. Generate the XML description file - this file contains the information describing the database structure. To create it the database has to be in read-only mode:
      exec DBMS_PDB.DESCRIBE('/tmp/pdb1.xml');
      --
    4. Shutdown the database
      shutdown immediate
      --
    5. Change into your new CDB and do a compatibility check:
      SET SERVEROUTPUT ON
      DECLARE
        compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
            pdb_descr_file => '/tmp/pdb1.xml',
            pdb_name       => 'PDB1')
          WHEN TRUE THEN 'YES' ELSE 'NO'
        END;
      BEGIN
        DBMS_OUTPUT.PUT_LINE(compatible);
      END;
      /
      --
    6. Plugin your stand-alone database (in my case with a new name PDB1):
      create pluggable database PDB1 using '/tmp/pdb1.xml' nocopy tempfile reuse;
      --
      Be aware that using the NOCOPY option requires that you have a valid backup of your database. If you use the COPY option instead, you will need additional disk space. With NOCOPY the files remain at their location; with COPY you'll combine it with the FILE_NAME_CONVERT option, converting source paths to destination paths.
      It's always recommended to keep the TEMP tablespace. If you'd like to remove it, you have to drop the tablespace and its tempfile beforehand - otherwise a reference remains in the XML manifest file, making it impossible to plug in your new PDB.

      --
    7. Connect to this new PDB1 and perform sanity operations:
      alter session set container=PDB1;
      @?/rdbms/admin/noncdb_to_pdb.sql

      This script performs the required sanity operations to connect the PDB with the CDB - it changes object IDs, marks common objects and so on. Its runtime can vary between 5 minutes and several hours, depending on the complexity of the former stand-alone database's data dictionary. It also recompiles many invalid objects. Please test it carefully.
      If this step is skipped, the PDB can only be opened in read-only mode.
    8. Now the database is plugged in - but not open yet. It needs to be started:
      startup
      show pdbs

      exit
      --
    9. To connect to the consolidated PDB1 from the command prompt, use the following syntax:
      sqlplus "sys/oracle@pdb1 as sysdba"
      As an alternative you could use the EZconnect syntax:
      sqlplus "sys/oracle@//localhost:1521/pdb1 as sysdba"

    -Mike

    Thursday Jan 22, 2015

    Non-CDB architecture of Oracle databases is DEPRECATED since Oracle Database 12.1.0.2

    Beginning with Oracle Database 12.1.0.2 a non-CDB architecture is deprecated.

    non-CDB deprecated in Oracle 12.1.0.2


    What does this mean?

    Deprecation, first of all, does not mean "desupported". In this case it means that you can of course still have stand-alone Oracle databases with the architecture you know from previous releases. But you can also have a single-tenant deployment (a CDB with one PDB - no Multitenant license required) or a multitenant deployment (a CDB with up to 252 PDBs - Multitenant option license required).

    But in a future Oracle release only single-tenant and multitenant databases may be allowed. Right now there's no information available on when this might happen.

    Which features are currently not supported with the multitenant architecture?

    • Database Change Notification
    • Continuous Query Notification (CQN)
    • Client Side Cache
    • Heat Map
    • Automatic Data Optimization
    • Oracle Streams
    • Oracle Fail Safe
    • Flashback Pluggable Database (Flashback Database works but will flashback CDB$ROOT including all PDBs)
    • DBVERIFY
    • Data Recovery Advisor (DRA)
    • Flashback Transaction Backout 

    -Mike 


    From the Upgrade Guide:
    By deprecate, we mean that the feature is no longer being enhanced but is still supported for the full life of the 12.1 release. By desupported, we mean that Oracle will no longer fix bugs related to that feature and may remove the code altogether. Where indicated, a deprecated feature may be desupported in a future major release.

    Tuesday Jan 13, 2015

    New MOS Notes on Database Upgrades for 12c with or without Oracle Multitenant

    Please find some new My Oracle Support (MOS) notes about database upgrades to Oracle Database 12c, with or without Oracle Multitenant:

    Credits for the CDB/PDB notes go to my colleague Krishnakumar Kunka :-)

    -Mike 

    Thursday Nov 27, 2014

    New Slide Decks Uploaded:
    Upgrade/Migrate/Consolidate to Oracle 12c and
    Parallel Multitenant Upgrade Internals

    It's time for a few updates and slide uploads :-) And thanks again to all the great people in Madrid earlier this week. It was a fantastic day - I enjoyed it a lot and wish you all successful upgrades and migrations :-)

    What's new? 

    • A refreshed version of our huge Oracle Database 12c "Upgrade, Migrate & Consolidate" slide deck. Now with 530 slides - kudos to Tom Kyte, who gave me a lot of food for thought after his excellent talk at the DB TECH SHOWCASE 2014 TOKYO. I had to add a few slides later on, making it now even harder to fit the content into a 1-day workshop :-)
      As usual see the Change Log at the end of the deck about what has been added/changed.
    • A new and extended version of Joe Errede's OOW talk about how an Oracle Multitenant upgrade works. All details about the parallel upgrade, options and tweaks of catctl.pl. Kudos to Joe, as he built the basis for the slides - I reused many of them and added some material - and thanks to the (always) great audience at the DOAG Conference in Nuernberg :-)
    • A refreshed version of the Hitchhiker's Guide, especially prepared for the (amazingly) huge audience at the DOAG Conference in Nuernberg (I was really impressed!!)
    -Mike 

    Friday Aug 22, 2014

    Automatic Maintenance Jobs in every PDB?
    New SPM Evolve Advisor Task in Oracle 12.1.0.2

    A customer checking out our slides from the OTN Tour in August 2014 asked me a tricky question the other day:

    "According to the documentation the Automatic SQL Tuning Advisor maintenance task gets executed only within the CDB$ROOT, but not within each PDB - but the slides are not clear here. So what is the truth?"

    Ok, that's a good question. In my understanding all tasks get executed within each PDB - that's why we recommend (based on experience) breaking up the default maintenance windows when using Oracle Multitenant. Otherwise all PDBs have the same maintenance windows - and guess what happens when 25 PDBs start gathering object statistics at the same time ...

    The documentation indeed says:

    Automatic SQL Tuning Advisor data is stored in the root. It might have results about SQL statements executed in a PDB that were analyzed by the advisor, but these results are not included if the PDB is unplugged. A common user whose current container is the root can run SQL Tuning Advisor manually for SQL statements from any PDB. When a statement is tuned, it is tuned in any container that runs the statement.

    This sounds reasonable. But when we have a look into our PDBs or into the CDB_AUTOTASK_CLIENT view the result is different from what the doc says. In my environment I did create just two fresh empty PDBs (CON_ID 3 and 4):

    SQL> select client_name, status, con_id from cdb_autotask_client;

    CLIENT_NAME                           STATUS         CON_ID
    ------------------------------------- ---------- ----------
    auto optimizer stats collection       ENABLED             1
    sql tuning advisor                    ENABLED             1
    auto space advisor                    ENABLED             1
    auto optimizer stats collection       ENABLED             4
    sql tuning advisor                    ENABLED             4
    auto space advisor                    ENABLED             4
    auto optimizer stats collection       ENABLED             3
    sql tuning advisor                    ENABLED             3
    auto space advisor                    ENABLED             3

    9 rows selected.

    I haven't verified the reason why this differs from the docs, but it may be related to one change in Oracle Database 12.1.0.2: the new SPM Evolve Advisor Task (SYS_AUTO_SPM_EVOLVE_TASK) for automatic plan evolution with SQL Plan Management. This new task doesn't appear as a stand-alone job (client) in the maintenance window but runs as a sub-entity of the Automatic SQL Tuning Advisor task. And (I'm just guessing) this may be one of the reasons why every PDB has to have its own Automatic SQL Tuning Advisor task.

    Here you'll find more information about how to enable, disable and configure the new Oracle 12.1.0.2 SPM Evolve Advisor Task:
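    If you'd rather not have a given maintenance client run in every PDB - a hedged sketch using the documented DBMS_AUTO_TASK_ADMIN package; PDB1 is a placeholder, and disabling affects the whole client in that container, which may not suit every site:

    ```sql
    -- Switch into the PDB in question, then disable the
    -- Automatic SQL Tuning Advisor client for that PDB only:
    ALTER SESSION SET CONTAINER = PDB1;
    BEGIN
      DBMS_AUTO_TASK_ADMIN.DISABLE(
        client_name => 'sql tuning advisor',
        operation   => NULL,
        window_name => NULL);
    END;
    /
    ```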

    -Mike

    Tuesday Aug 05, 2014

    Upgrade PDBs - One at a Time (unplug/plug)

    *** I have added an important change on May 26, 2015 ***
    ***      Please see below marked in YELLOW           *** 
    ********************************************************

    Basically there are two techniques to upgrade an Oracle Multitenant environment:

    In this post I refer to the "One at a Time" approach and describe the steps. During some presentations and discussions, people were left with the impression that it would be a very simple approach to unplug one or many PDBs from a CDB in, let's say, Oracle 12.1.0.1 and plug them into an Oracle 12.1.0.2 container database. Bingo, upgraded!

    Well, unfortunately this is not true. In fact it is completely wrong.


    If you want to upgrade via unplug/plug the following steps will have to be followed:

    • In CDB1 environment - e.g. Oracle 12.1.0.1 with an PDB1
      • In SQL*Plus: 
        • alter session set container=PDB1;
        • @$ORACLE_HOME_12102/rdbms/admin/preupgrd.sql
          (The output of the preupgrade.log will show you the location of the fixups)
        • @/u01/app/oracle/cfgtoollogs/CDB1/preupgrade/preupgrade_fixups.sql
          (If ORACLE_BASE is not set the files will be created under $ORACLE_HOME/cfgtoollogs instead of $ORACLE_BASE/cfgtoollogs)
        • exec dbms_stats.gather_dictionary_stats;
          (plus include all additional treatments recommended by the preupgrade.log)
        • alter session set container=CDB$ROOT; 
        • alter pluggable database PDB1 close;
        • alter pluggable database PDB1 unplug into '/stage/pdb1.xml';
        • drop pluggable database PDB1 keep datafiles;
          The reason why you need to DROP the PDB afterwards is simply to clean up leftovers in the CDB views. It is under observation whether this is a bug or not. The information does not get removed, to allow a quick plugin again, but the leftovers may cause plenty of trouble once you try to upgrade this CDB1 later on. But be aware (thanks to Martin Bach from Enkitec): once you have dropped the PDB from its original CDB, you can no longer restore it there from a previously taken backup. So it is best practice to back up your PDB in the destination CDB first, and only then issue the DROP command on the source - otherwise you'd sail for a while without a safety net.
        • exit
          .
    • In CDB2 environment - e.g. Oracle 12.1.0.2
      • In SQL*Plus:
        • alter session set container=CDB$ROOT;
        • At this point we "could" do a plug-in check, but as the COMPATIBLE setting of the new CDB2 - created per recommendation with DBCA - defaults to "12.1.0.2", the plug-in check will return "NO", although the plugin operation will obviously work. Just for the record, here's the procedure to check plugin compatibility:
          • SET SERVEROUTPUT ON
            DECLARE
              compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
              pdb_descr_file => '/stage/pdb1.xml',
              pdb_name => 'PDB1')
              WHEN TRUE THEN 'YES' ELSE 'NO'
            END;
            BEGIN
            DBMS_OUTPUT.PUT_LINE(compatible);
            END;
            /

            select message, status from pdb_plug_in_violations where type like '%ERR%';
        • create pluggable database pdb1 using '/stage/pdb1.xml' file_name_convert=('/oradata/CDB1/pdb1', '/oradata/CDB2/pdb1');
        • alter pluggable database PDB1 open upgrade;
        • exit
      • On the command prompt:
        • cd $ORACLE_HOME/rdbms/admin 
        • $ORACLE_HOME/perl/bin/perl catctl.pl -c "PDB1" -l /home/oracle/upgrade catupgrd.sql
      • Back into SQL*Plus:
        • alter session set container=pdb1;
        • startup
        • @?/rdbms/admin/utlrp.sql
        • @/u01/app/oracle/cfgtoollogs/CDB1/preupgrade/postupgrade_fixups.sql
          (If ORACLE_BASE is not set the files will be created under $ORACLE_HOME/cfgtoollogs instead of $ORACLE_BASE/cfgtoollogs)
    Of course this technique also works with more than one PDB at a time. You'll have to repeat the steps, and your upgrade call on the command line will look like this:

        • $ORACLE_HOME/perl/bin/perl catctl.pl -c "PDB1, PDB2" -l /home/oracle/upgrade catupgrd.sql

    Well, not really unplug+plug=upgraded ;-)

    -Mike 

    PS: I did add a few pieces of information based on the excellent feedback given to me by Frank Kobylanski from the MAA Team - cheers, Frank!!! 

    Monday May 26, 2014

    New interesting White Paper:
    Oracle Multitenant Efficiency Study

    There's a new white paper available, interesting for those who'd like to learn a bit more about the efficiency and overhead of Oracle Multitenant in a fairly large environment:

    http://www.oracle.com/technetwork/database/multitenant/learn-more/oraclemultitenantt5-8-final-2185108.pdf

    -Mike

    Tuesday May 13, 2014

    More than one PDB in the same directory?

    Can you create more than one pluggable database (PDB) within the same directory?
    And how does the file naming work? Considering that each PDB's SYSTEM tablespace datafile is named system01.dbf by default, the question is not trivial.

    This question was asked by a customer during one of the workshops in Switzerland last week. And the solution is straightforward. Thanks to Roy for trying it out yesterday at 170 km/h on our way back from Stuttgart :-)
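    The key - a hedged sketch; paths and PDB names are placeholders - is to let Oracle-Managed Files (OMF) place each PDB's files in their own GUID-named subdirectory, so identically named datafiles never clash:

    ```sql
    -- With DB_CREATE_FILE_DEST set, OMF creates a GUID-named
    -- subdirectory per PDB under the same base path:
    ALTER SYSTEM SET db_create_file_dest = '/oradata/CDB1';

    CREATE PLUGGABLE DATABASE pdb_a ADMIN USER pdbadm IDENTIFIED BY secret;
    CREATE PLUGGABLE DATABASE pdb_b ADMIN USER pdbadm IDENTIFIED BY secret;

    -- Both PDBs get a SYSTEM datafile, but in distinct GUID
    -- directories - compare the paths:
    SELECT con_id, file_name FROM cdb_data_files ORDER BY con_id;
    ```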

    Thanks :-)

    -Mike 

    Additional information:

    Within ASM with OMF the file structure looks like this:

    select con_id, substr(file_name,1,90), tablespace_name
      from cdb_data_files
     order by 1;

        CON_ID SUBSTR(FILE_NAME,1,90)                                                           TABLESPACE_NAME
    ---------- -------------------------------------------------------------------------------- ---------------
             1 +DA1/CDBUPGR/DATAFILE/system.394.845632641                                       SYSTEM
             1 +DA1/CDBUPGR/DATAFILE/users.475.845632685                                        USERS
             1 +DA1/CDBUPGR/DATAFILE/undotbs4.448.845632683                                     UNDOTBS4
             1 +DA1/CDBUPGR/DATAFILE/sysaux.392.845632651                                       SYSAUX
             1 +DA1/CDBUPGR/DATAFILE/undotbs2.393.845632679                                     UNDOTBS2
             1 +DA1/CDBUPGR/DATAFILE/undotbs1.471.845632657                                     UNDOTBS1
             1 +DA1/CDBUPGR/DATAFILE/undotbs3.478.845632681                                     UNDOTBS3
             2 +DA1/CDBUPGR/F7B70DCBF2D4ECEAE0437A28890AE4D8/DATAFILE/sysaux.472.845632655      SYSAUX
             2 +DA1/CDBUPGR/F7B70DCBF2D4ECEAE0437A28890AE4D8/DATAFILE/system.398.845632647      SYSTEM
             3 +DA1/CDBUPGR/F6A142792168D540E0437A28890A4707/DATAFILE/system.493.845643325      SYSTEM
             3 +DA1/CDBUPGR/F6A142792168D540E0437A28890A4707/DATAFILE/sysaux.468.845643325      SYSAUX
             3 +DA1/CDBUPGR/F6A142792168D540E0437A28890A4707/DATAFILE/soets.452.845643325       SOETS
             4 +DA1/CDBUPGR/F7B9BDC2AEC4411EE0437A28890A2B81/DATAFILE/system.491.845643937      SYSTEM
             4 +DA1/CDBUPGR/F7B9BDC2AEC4411EE0437A28890A2B81/DATAFILE/sysaux.488.845643937      SYSAUX
             4 +DA1/CDBUPGR/F7B9BDC2AEC4411EE0437A28890A2B81/DATAFILE/soets.484.845643937       SOETS
             5 +DA1/CDBUPGR/F7B9CA6B92804A56E0437A28890A2721/DATAFILE/system.485.845644149      SYSTEM
             5 +DA1/CDBUPGR/F7B9CA6B92804A56E0437A28890A2721/DATAFILE/sysaux.490.845644149      SYSAUX
             5 +DA1/CDBUPGR/F7B9CA6B92804A56E0437A28890A2721/DATAFILE/soets.487.845644149       SOETS
             6 +DA1/CDBUPGR/F7B9D727715B5B4AE0437A28890AB3D9/DATAFILE/system.486.845644363      SYSTEM
             6 +DA1/CDBUPGR/F7B9D727715B5B4AE0437A28890AB3D9/DATAFILE/sysaux.483.845644363      SYSAUX
             6 +DA1/CDBUPGR/F7B9D727715B5B4AE0437A28890AB3D9/DATAFILE/soets.481.845644363       SOETS
             7 +DA1/CDBUPGR/F7B9E3D23CFC67F1E0437A28890A5A68/DATAFILE/system.453.845644575      SYSTEM
             7 +DA1/CDBUPGR/F7B9E3D23CFC67F1E0437A28890A5A68/DATAFILE/sysaux.482.845644575      SYSAUX
             7 +DA1/CDBUPGR/F7B9E3D23CFC67F1E0437A28890A5A68/DATAFILE/soets.467.845644575       SOETS
             8 +DA1/CDBUPGR/F7B9F051E81B7892E0437A28890AD3A3/DATAFILE/system.465.845644785      SYSTEM
             8 +DA1/CDBUPGR/F7B9F051E81B7892E0437A28890AD3A3/DATAFILE/sysaux.455.845644785      SYSAUX
             8 +DA1/CDBUPGR/F7B9F051E81B7892E0437A28890AD3A3/DATAFILE/soets.479.845644785       SOETS
             9 +DA1/CDBUPGR/F7BA2D0F2F17A755E0437A28890A72C6/DATAFILE/system.464.845645805      SYSTEM
             9 +DA1/CDBUPGR/F7BA2D0F2F17A755E0437A28890A72C6/DATAFILE/sysaux.500.845645805      SYSAUX
             9 +DA1/CDBUPGR/F7BA2D0F2F17A755E0437A28890A72C6/DATAFILE/soets.498.845645805       SOETS
            10 +DA1/CDBUPGR/F7BA3A179DAFB12FE0437A28890ABBF3/DATAFILE/system.499.845646023      SYSTEM
            10 +DA1/CDBUPGR/F7BA3A179DAFB12FE0437A28890ABBF3/DATAFILE/sysaux.504.845646023      SYSAUX
            10 +DA1/CDBUPGR/F7BA3A179DAFB12FE0437A28890ABBF3/DATAFILE/soets.502.845646023       SOETS
            11 +DA1/CDBUPGR/F7BA46A1A6B7B9C2E0437A28890AE021/DATAFILE/system.503.845646233      SYSTEM
            11 +DA1/CDBUPGR/F7BA46A1A6B7B9C2E0437A28890AE021/DATAFILE/sysaux.508.845646233      SYSAUX
            11 +DA1/CDBUPGR/F7BA46A1A6B7B9C2E0437A28890AE021/DATAFILE/soets.506.845646233       SOETS
    ...


    Thursday Mar 06, 2014

    Free Webcast available now On Demand --- Upgrade and Migrate to Oracle Database 12c and Consolidate with Oracle Multitenant

    Almost 90 minutes about Upgrade, Migrate and Consolidate to Oracle Database 12c with or without Multitenant Option.
    Available now on demand. Just register yourself and watch it whenever it is convenient for you:

    Register to receive the On Demand Link and Watch It!

    -Mike 

    Wednesday Mar 05, 2014

    PSU1 and PSU2: Datapatch Issues covered in MOS Note

    You may have read a posting earlier this week advising against PSU1 and PSU2 for Oracle Multitenant, especially in RAC/GI environments. Following a lot of internal discussions I will post some advice and clarification later this week.

    Meanwhile I have a useful update:
    Datapatch issues are now covered in a separate MOS Note, making it easier to keep track of known issues and find workarounds.
    Please see MOS Note:1609718.1 Datapatch Known Issues

    -Mike

    Friday Feb 07, 2014

    Airfare Pricing vs. Oracle Multitenant for DBaaS?

    I'm currently evaluating flight options to and from India for the 3 workshops in March in Mumbai, Delhi and Bangalore.

    As everything at Oracle is fully self-serviced, I've been stuck in our booking tool for over an hour now just wondering ... wondering ... wondering ...

    For instance I wonder why an Economy class ticket with Lufthansa and Swiss to Mumbai, returning from Bangalore, costs over EUR 5000 (no joke!!!) even though Swiss is a 100% subsidiary of Lufthansa,

    whereas I can fly a slightly different route with Delta Airlines from Germany to the US, back to Amsterdam and then further to Mumbai for less than half the price - even though this also involves different airlines (KLM and Delta), takes more than twice as long and covers almost triple the distance:

    Roy (kudos!) and I had this great idea: what if YOU as a customer matched the airlines' pricing strategy to your internal Database-as-a-Service (DBaaS) strategy?

    • First of all you make the price dependent on the point in time somebody ordered a fresh PDB
      • The earlier the cheaper - except for the last week before a fixed date, as now you'll have to max out allocated resources
    • Second you make it completely intransparent, so nobody will be able to prove that your pricing strategy is insane
    • Third you make the price also depend on the process somebody used to order a PDB
    • Furthermore you introduce some extra components, such as "serviced by a lead DBA", which make it more expensive
      • Same for "served by another company's expert" - even though you own that company as well
    • And don't forget to include some components which give you perfect flexibility, such as "Energy prices climbed this week, so unfortunately we'll have to make the provisioning and operation of a PDB more expensive" - and never take back this price increase (or if you do, then just a portion of it)
      • Or tell people they'll get a special price, with the only downside that the department working with this PDB will have to start work at 3:40am and the PDB won't be accessible after 9am anymore

    Wouldn't this be a wonderful pricing model?

    Of course you noticed my irony and sarcasm. You know the answer: You'd be in real trouble if you offered such a service and pricing internally to anybody. But I'll never understand airline pricing models ...

    -Mike

    Tuesday Dec 03, 2013

    Starting up 252 PDBs automatically?

    In my recent posting I explained what happens when you start up many PDBs at the same time.

    But once you start up the container database CDB$ROOT, the PDBs will stay in MOUNTED status. So how do you open them during CDB$ROOT startup (or immediately afterwards) in an automatic fashion?

    A startup trigger will do this job.

    CREATE OR REPLACE TRIGGER startup_all_pdbs
    AFTER STARTUP ON DATABASE
    BEGIN
       EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
    END;
    /

    And of course you can use the EXCEPT clause to exclude one or more PDBs from the automatic startup.

    CREATE OR REPLACE TRIGGER startup_all_pdbs_except_a_few
    AFTER STARTUP ON DATABASE

    BEGIN
    EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN EXCEPT PDB100, PDB101';
    END;
    /
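
    To verify after the next instance restart that the trigger did its job, you can query the open mode of every PDB - a quick sanity check, assuming you are connected as a privileged common user in CDB$ROOT:

    ```sql
    -- Show the open mode of all PDBs in this CDB
    SELECT name, open_mode
      FROM v$pdbs
     ORDER BY name;
    ```

    Excluded PDBs (PDB100 and PDB101 in the example above) should show MOUNTED, all others READ WRITE.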

    How does this work in an Oracle Real Application Clusters environment?
    In a RAC environment you won't need the startup trigger, as the clusterware takes over this role and ensures the automatic startup of a PDB's service on the designated nodes within the CDB$ROOT's instances.

    srvctl add service -db rac -service pdbrac_srv -pdb pdbrac -preferred "rac1,rac2"

    A snippet from the crsctl status output will look like this:

       crsctl status resource -t
        :
       ora.rac.db
             1    ONLINE  ONLINE   rac-server01       Open,STABLE
             2    ONLINE  ONLINE   rac-server02       Open,STABLE
       ora.rac.pdbrac_srv.svc
             1    ONLINE  ONLINE   rac-server01       STABLE
             2    ONLINE  ONLINE   rac-server02       STABLE
        :

    -Mike

    Friday Nov 29, 2013

    Starting up 252 PDBs in Oracle Multitenant

    What happens when you start up 252 PDBs (Pluggable Databases) with the Oracle Multitenant Option for the first time?

    Interesting question, isn't it? The expectation would be that this will happen within seconds as the SGA and therefore the shared memory segments are already allocated from within the CDB$ROOT (Container Database). But ...

    The following happens:
    It takes minutes ... hours ... In my tiny lab environment, with just 20 PDBs due to space constraints, it takes over 30 minutes to start up 21 PDBs. Takashi Ikeda from Fujitsu Hokuriku Systems, who did a great demo with the new Fujitsu M10 servers at OOW this year, told me that it took over two hours to start up 252 PDBs for the first time.
    Why is that?

    Let's have a closer look into the alert.log during startup. After issuing the command:

    ALTER PLUGGABLE DATABASE ALL OPEN;

    I'd expect all PDBs to get started. With an EXCEPT PDB1, PDB2, PDB3 clause I could exclude some PDBs from this action. Now a look into the alert.log shows a very promising message:
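
    Just to make the EXCEPT clause concrete, it would look like this (the PDB names being placeholders):

    ```sql
    -- Open all PDBs except the three named ones
    ALTER PLUGGABLE DATABASE ALL OPEN EXCEPT PDB1, PDB2, PDB3;
    ```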

    I'm just wondering about the opening sequence of PDBs. I'd expect 1 ... 2 ... 3 ... 4 ... ... ... 21. But the "order" is 3 ... 10 ... 16 ... 15 ... 20 ... 21 etc., telling me that the Resource Manager is not active (which is a must if you take Multitenant seriously).
    OK, for that strange order there's an explanation:
    The open action gets distributed to slaves, so PDBs may be opened in a random order.
    Funny thing apart from that: I can access the PDBs, but the system seems to be under really heavy pressure. CPUs are all at 100%. What the heck is going on here in the background?

    Well, XDB needs to be installed (at least that is what the message says). Strange, isn't it, as the PDB$SEED has XDB in it and all my PDBs got provisioned from it. The awkward thing here is that the XDB messages appear over 20 minutes AFTER the PDBs signaled the Opening message into the alert.log (see the time stamps above).

    Now after exchanging a few emails with some very helpful people in development there's an explanation for the XDB messages as well. Actually it doesn't get really installed but the SGA needs to be initialized for XDB. And I'm guessing that this action takes a lot of resources plus may cause contention when many PDBs get opened at the same time. And there's optimization work going on right now meaning that a problem with port initialization within the PDB will get fixed in a future patch set. So this issue with the very long startups of PDBs because of XDB should disappear in 12.1.0.2 most likely :-)

    Finally it took another while to get the PDBs really into OPEN mode, even though V$PDBS had been showing them as OPEN before. And as the CPUs all went to 100% while XDB got installed/initialized at more or less the same time in all PDBs, you really can't do anything meanwhile.

    Finally ...

    ... all PDBs got opened and the command ALTER PLUGGABLE DATABASE ALL OPEN returned completed.

    The good news:
    It takes just this long during the initial startup of a newly provisioned PDB. And you may see this issue only when you try to open many PDBs at the same time. But have a close look into your alert.log to see whether you spot the message after creating a fresh PDB.

    And btw, just for the records: I was using Oracle Database 12.1.0.1 with Oct 2013 PSU in it.

    -Mike

    Wednesday Jul 17, 2013

    Oracle Multitenant (Pluggable Database) White Paper

    The feature we have been referring to for a while as Pluggable Database has officially been named Oracle Multitenant - and if you'd like to read more about it, the newly released White Paper will give you a good overview:

    http://www.oracle.com/technetwork/database/multitenant-wp-12c-1949736.pdf

    -Mike

    About

    Mike Dietrich
    Master Product Manager - Database Upgrade & Migrations - Oracle

    Based in Germany. Interlink between customers/partners and the Upgrade Development. Running workshops between the Arctic and Antarctica. Assisting customers in their reference projects onsite and remotely.
