Welcome to the Oracle Database Upgrade blog!!!

If your browser doesn't automatically go there within a few seconds, you may want to go to The New Upgrade Blog manually.

I wrote the text below almost 8 years ago. The blog started out of the blue - and based on the feedback I received and the stats I see, it has been used a lot. Still, the platform it settled on for many years, Apache Roller, is very limited and old-fashioned. So as the blogs take a step forward to another new platform, I took the chance and moved this one to a more flexible environment.

The blog will live on but from now on at:


(which matches my Twitter handle).

I hope you'll still follow this blog, as we will offer more value and useful resources on the new platform.


This page can now be found at:





Glad you found your way to my Oracle Database Upgrade Blog :-)

"Uhhh ... yet another blog ..." you might think. But the intention is to give you the most recent database upgrade tech news and findings. Discuss with you about best practices. And offer you access to upgrade workshops we are running right around the world - at no extra cost.

So I hope to see you more often - and maybe we'll meet one day in one of our workshops in person.

Enjoy - thanks and kind regards


Hi Mike, I attended one of your 11g upgrade workshops and found it to be highly informative; I would encourage anyone to get along to one of these. Testing 11gR1 now. Having to use 2 underscore parameters due to regressions introduced - both marked as 11gR2. Could be better. Jason.

Posted by jason arneil on July 29, 2009 at 07:15 AM CEST #

Hello Mr. Dietrich,
A question: during the upgrade from to , all Oracle Connection Pooling values were reset to their defaults. The configuration had to be rebuilt. An automatic start of the processes was not performed either. Is this documented anywhere?

Posted by Peter on May 16, 2013 at 01:27 PM CEST #

Hello Peter,

I found several documents in MOS that cover the topic "Connection Pool" during upgrades. Without more detailed knowledge of the matter, however, I can't give you an answer. Do you have an SR number you could mail me?

Thanks and kind regards
Mike Dietrich

Posted by Mike Dietrich on May 21, 2013 at 11:37 AM CEST #

Thank you

Posted by LAMOTTE on June 27, 2013 at 10:16 AM CEST #

Hi Mike,

Thanks a lot for the interesting workshop today in Brussels; it gave us a brief and clear view of several topics regarding upgrade and migration strategies...

FCN or Bayern? I hope you wear the correct shirt tonight.

Posted by guest on April 29, 2014 at 07:59 PM CEST #

I wore my regular shirt - but as you may have seen from the current league standings, it may have been the wrong decision ... :-(

But good luck to Belgium for the World Cup - the team is good enough to surprise everybody and make it into the top 4 :-)

Thanks for your feedback!!!


Posted by Mike on May 06, 2014 at 01:12 PM CEST #

Hello Mike,
Thanks again for the upgrade day in Brussels on the 29th of April. Since then I heard that Bayern lost 4-0... Ouch!
I have a question about a work project. We have bought 2 Exadata machines (a 1/4 and a 1/8 rack), X4 generation. All our servers are Windows-based with Microsoft Cluster, and we'll have to migrate and upgrade most databases onto the Exadata machines (we have a mix of 9, 10 and 11 DB versions). The contact we have here in Belgium for Oracle said he'll ask Johan Vandenbosch to come see us. He mentioned that the project might interest you and that he'll ask you as well. Did you hear anything about it? (More information by email if you need some.)
Greetings, and keep up the very good work.

Posted by guest on May 09, 2014 at 08:27 AM CEST #

Thanks a lot for the interesting workshop yesterday in Warsaw, Poland; it gave us a brief and clear view of several topics regarding upgrade and migration. This was the first workshop organized by Oracle I have been to.

Posted by Lukasz on July 16, 2014 at 12:32 PM CEST #


thanks a lot - I did enjoy the day as well :-)

Cheers, Mike

Posted by Mike on July 16, 2014 at 04:02 PM CEST #

We have a two-node RAC version running on Red Hat Linux x86_64. We want to upgrade the OS to RHEL 5 and the database to 11.2.

We want to do a RAC rolling upgrade. We are told by Oracle Support that we should complete the upgrade on both nodes within 24 hours, or it is likely we will encounter critical problems. Our system engineers are telling us that it takes 2 to 3 days to upgrade each node from RHEL 4 to RHEL 5, even though we know this could be done in 2 to 3 hours.

Any suggestions ?

My name is Harvey Jefferson. I work for Acxiom Corporation, and I attended one of your upgrade workshops a few years ago.

Thanks for any help

Posted by guest on September 10, 2014 at 09:45 PM CEST #


this is more a question for Support than for me ;-)

Actually, I don't believe that you can do a rolling upgrade from 10.2 RAC to 11.2. As far as I know, 11.1 would be the minimum requirement to do such a thing (and you'd still need additional patches). You may check our (not-updated-anymore) slide deck about 11.2 upgrades in the Slides Download Center for the MOS notes about rolling RAC upgrades to 11.2.

One of the reasons this won't work is the desupport of OCR/Voting on RAW in 11.2.

My question no. 1 would be: are you staying on the same hardware, or will you get a new cluster for the move? If the latter is true, then my proposal would be:
- configure your new RAC with the latest patches/PSUs
- install the database software (again, get PSUs)
- build up a physical standby (you won't need 10.2 sw)
- stop production when in sync
- activate the standby and upgrade it
Overall downtime will usually be between 20 and 60 minutes (depending on installed options). And you can test this approach several times.
If you don't get new hardware ... hm ... then it will get tough. Minimal downtime usually requires a bit more effort ;-) Do you have a spare system which can take the load for a weekend or so? Then I'd build up a standby on the spare, switch over, reconstruct your current RAC with new Linux, new GI with PSUs and database with PSUs, set up a physical standby from the SPARE prod back to the old/new cluster, then activate and upgrade it. Downside here: you can't really test, as you'll wipe out your current prod.
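The final two steps (stop production when in sync, then activate and upgrade the standby) can be sketched roughly like this - a hedged outline only, since the exact procedure depends on your versions and Data Guard configuration:

```sql
-- On the standby, after production has been stopped and all redo is applied
-- (sketch only; verify the exact steps in the upgrade and Data Guard docs):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE STANDBY DATABASE;  -- the standby becomes the new primary
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
-- then run the release's upgrade scripts (e.g. catupgrd.sql) from the new home
```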

Hope this helps, cheers

Posted by Mike on September 11, 2014 at 10:33 AM CEST #

Thanks for replying.

We are going to stay on the same servers/hardware. No spare servers that we could use.

Here are the latest steps we are researching.

I am not including all the detailed RAC steps, but this is a basic outline.

1. Shut down Oracle and RAC on NODE1
2. Upgrade the OS on NODE1 --- since two different OS versions are not running, the 24-hour guideline does not apply.
3. Reinstall/relink Oracle 10.2 on NODE1
4. Bring up RAC on NODE1
At this time you need to meet Oracle's 24-hour guideline, since two different OS versions are running until step 5 is done, which should only take a few minutes.
5. Shut down Oracle and RAC on NODE2
Once step 5 is done, you are no longer running two different OS versions, so the 24-hour guideline does not apply.
6. Upgrade the OS on NODE2 --- the 24-hour guideline does not apply
7. Reinstall/relink Oracle 10.2 on NODE2
8. Bring up RAC on NODE2
9. Upgrade Oracle RAC from 10.2 to 11.2

Posted by guest on September 11, 2014 at 05:06 PM CEST #

OK, for upgrading the OS only on both nodes, your solution sounds fine to me. But please double-check with Oracle Support.

Posted by Mike on September 12, 2014 at 11:01 AM CEST #

Hi Mike,

We currently have 11gR2 GI Standalone running and want to upgrade to 12cR1 GI Standalone.
Is it possible to do a 'software only' installation of 12cR1 GI Standalone, patch that software set to the latest PSU and, in a separate step (when we have downtime), do the actual upgrade of 11gR2 GI to 12cR1 GI?

Regards, Jaco

Posted by Jaco de Graaf on November 19, 2014 at 11:02 AM CET #


Can I also use GoldenGate for the initial load of the data from the source to the target DB, or is GG only responsible for transporting the redo data (or whatever that is called for non-Oracle DBs)?

Thanks and regards

Posted by Christian on February 06, 2015 at 02:43 PM CET #


In theory, OGG can move small databases by itself, but I would avoid that above a size of 20 GB. Generally speaking, OGG is the synchronization solution that makes a move with near-zero or even zero downtime possible. The database still has to be migrated or upgraded beforehand with the available technologies such as Transportable Tablespaces or Data Pump.


Posted by Mike on February 06, 2015 at 02:46 PM CET #

Thanks for the confirmation!
The documentation is somewhat open to interpretation here.


Posted by guest on February 06, 2015 at 04:22 PM CET #

After you activate the standby and upgrade it, you will need to:

• Attach the databases to the new CRS cluster


Posted by guest on March 13, 2015 at 01:54 PM CET #

Hi Mike ,
We have to upgrade (in-place) from to (non-CDB mode).
Do you recommend leaving the parameter COMPATIBLE= to keep the downgrade option open?
What is the best practice?
Best regards.

Posted by kais on June 18, 2015 at 09:09 PM CEST #


yes, our recommendation would be:
(1) Leave COMPATIBLE= for 7-10 days after the upgrade. This will allow you to downgrade in case of significant issues you haven't seen during testing.
(2) Then change COMPATIBLE 7-10 days after the upgrade. But keep in mind that this will cause downtime, as you'll have to restart the database. If you can't get this additional downtime, it's a tough decision: the ability to downgrade vs. the new features.
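As a sketch, the change in step (2) would look like this in SQL*Plus (the version string is only an example, since the target release isn't named above):

```sql
-- during the 7-10 day soak period: leave COMPATIBLE at the old value
SHOW PARAMETER compatible

-- afterwards, raise it - this needs a restart and cannot be undone:
ALTER SYSTEM SET COMPATIBLE = '12.1.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```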

Hope this helps - cheers

Posted by Mike on June 19, 2015 at 10:10 AM CEST #

Hi Mike ,
Thanks a lot for your valuable response !
What about issues during the upgrade operation itself?
Is it possible to downgrade in this case?
Best regards.

Posted by kais on June 19, 2015 at 11:18 PM CEST #


yes, you can downgrade back down to as long as you leave COMPATIBLE unchanged. For issues and such, please see the posts on the blog and the big slide deck "Upgrade, Migrate and Consolidate to 12c" in the Slides Download Center to your right on this page.


Posted by Mike on June 22, 2015 at 09:53 AM CEST #


I would like to upgrade multiple (around 50) databases to . The databases are hosted in a Linux environment in a 3-node RAC setup. I would like to know the fastest way of doing the upgrade with as little downtime as possible. Please note that the Grid Infrastructure has already been upgraded and the Oracle binaries have been installed in a separate Oracle Home.
Is there a way to do an online upgrade? Our company maintenance window is only around 2 hours. I went through the standard Oracle documentation on this, which requires downtime.
I have a script I am testing which runs the standard Oracle scripts as per the readme file. It takes roughly 1 hour per database.

Please note the above is just one example. We have other environments (test, prod, etc.) which have slightly fewer databases. The underlying storage is ASM. Is there a way to do something at the ASM level which will in turn upgrade the databases?

Please let us know if other, faster options are available.


Posted by SURESH RAMASWAMY on August 19, 2015 at 09:17 PM CEST #


thanks for your comment - but actually this goes far beyond what I can deliver via the blog.

And let me point out one thing:
Oracle will go out of FREE EXTENDED SUPPORT (meaning no CPUs/SPUs/PSUs/BPs after that without PAID EXTENDED SUPPORT) in 5 months. I don't see any good reason to upgrade to these days, as I'd silently assume that this will involve testing for 50 (!!!) databases. The resources (aka MONEY) spent on this effort are simply wasted, as the amount of work would be exactly the same for a upgrade.

I see your point about having GI already on - but still, my advice would be to move GI to NOW - and afterwards move the databases to as soon as possible as well.

Not sure about your ASM question - ASM runs out of the GI home and is independent of the database homes.

And unfortunately I can't give you a precise answer, as I know far too little about your environments.

Basically the upgrade can be fairly fast - 1 hour per database is fine.

But I'd emphasize TESTING - the upgrade itself is the simplest part - 90% of your effort should go into testing your applications, as brings in a ton of optimizer changes over (and this is another reason why I would highly encourage you to move to instead - your testing effort will be the same).

Cheers :-)

Posted by Mike on August 20, 2015 at 10:06 AM CEST #


I do not believe you have the luxury of automating database upgrades with automated scripts, as every database upgrade is different and requires some level of sanity checks before you proceed to the next step. However, some parts can be automated to speed things up. You could also run the upgrade on multiple databases at the same time; that depends on the server capacity, however.

Alternatively, you could use the Multitenant option (which requires a license) to help your cause. This can be a very useful and viable feature given your available downtime of 2 hours (for 50 databases). With Multitenant you may be able to upgrade a DB in a short period of time; however, this is a 12c feature and it also depends on your database size.
Hope this helps.


Posted by guest on August 30, 2015 at 01:49 PM CEST #

Hi Mike,
Usually I save all parameters (to a CSV file) before an upgrade, because parameter values can be changed by a DBA during or after the upgrade.
What do you think about this practice?
Do you think it is enough to compare the changes before and after the upgrade procedure?

Posted by Kais on September 29, 2015 at 12:21 AM CEST #


sure, you can do that (saving and comparing).
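A minimal sketch of such a before/after comparison in Python, assuming the parameters were spooled as name,value pairs (all file names and parameter values below are illustrative, not taken from a real system):

```python
import csv

def load_params(path):
    """Load a name,value CSV (as spooled before/after the upgrade) into a dict."""
    with open(path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f) if row}

def diff_params(before, after):
    """Return (added, removed, changed) parameters between two snapshots."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return added, removed, changed

# Inline sample data; a real run would call load_params() on the two spool files.
before = {"processes": "300", "compatible": "11.2.0.4",
          "sec_case_sensitive_logon": "FALSE"}
after = {"processes": "600", "compatible": "11.2.0.4",
         "optimizer_adaptive_features": "TRUE"}
added, removed, changed = diff_params(before, after)
```

Reviewing the three resulting dicts after each upgrade catches both what the upgrade itself changed and what a DBA changed along the way.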


Posted by Mike on September 29, 2015 at 01:11 PM CEST #

Hi Mike,
we are currently in the process of testing an upgrade ( to ) in combination with a platform change (Linux to AIX) of a DWH (approx. 2 TB). We are using transportable tablespaces and have excluded statistics. What is your recommendation for gathering statistics on the new system? We are thinking about schema (of course), system, fixed-objects and data dictionary stats. At present we are facing huge performance problems on the new system, even though we have more memory, CPU etc.

Best regards
Axel D.

Posted by guest on November 20, 2015 at 10:01 AM CET #

Hi Axel,

thanks for your message.

I'll drop you an email.


Posted by Mike on November 20, 2015 at 10:50 AM CET #

I am upgrading from to . During the DBUA setup screens, I checked "Set User Tablespaces to Read Only During the Upgrade". It just seemed like the right thing to do. All of my tablespaces were ONLINE. All tablespace files were autoextensible. During the upgrade I got this error:
Context component upgrade error
ORA-01647: tablespace xxxxx is read-only. Cannot allocate space in it.
There was plenty of space. I re-ran without the box checked and it ran OK. Just curious if anyone else has seen this.

Posted by guest on January 28, 2016 at 09:19 PM CET #

This is expected as soon as a repository is in a tablespace which is not treated as an Oracle-supplied repository tablespace.

Which objects belonging to TEXT reside in this tablespace?


Posted by Mike on January 28, 2016 at 11:54 PM CET #

Hi Mike,

Today I have a question regarding cross-platform DB migration - specifically from AIX to Oracle Linux 12.1.

I am aware of the new RMAN features introduced in 12c for cross-platform data transport.

For example, "Cross-platform Transportable Tablespaces using Read-Only Tablespaces"
as explained in MOS Note:
"How to restore a pre-12c backup to a cross-platform, cross-endian 12c database (Doc ID 1644693.1)"

In the example above they put the tablespaces in read-only mode.

Is it possible to use the same technique, but with incremental backup sets, to reduce the downtime when migrating from 11.2 to 12.1?

The new RMAN syntax in 12c using "backup for transport allow inconsistent" and "restore from platform ..."
seems to be valid only when the COMPATIBLE parameter of both source and destination is set to 12.1.

But I have successfully performed a normal RMAN backup of a database on AIX, without using the clauses "for transport" or "allow inconsistent",
and I could successfully restore it on a 12.1 DB on Linux using the RMAN "from platform" clause.

I am not sure if this is officially supported - but it seems the platform conversion was done even without using "backup for transport".

Could you please confirm whether this is supported?

From the document: http://docs.oracle.com/database/121/BRADV/rcmxplat.htm#BRADV726

it is indicated that before performing 'cross-platform transportable tablespaces using inconsistent backup sets',
the COMPATIBLE parameter must be set to 12.0 or later on both the source and target DB.

This does not seem to be true, because I have seen examples where a cross-platform TTS backup was made on and restored to 12.1.
At least it does not have to be set to 12.0 on the source DB - it can be set on the target DB.

Can you please confirm whether this is true, or whether it is a documentation bug?

I know it's not possible to use the RMAN clause "for transport" from an 11.2 database, but it seems the clause "from platform ..." on a 12c destination DB can still read those inconsistent RMAN backups and convert them automatically.

Best Regards,


Posted by Pascal Phillip on February 08, 2016 at 05:52 PM CET #


thanks for your comment - and for all your questions.

As I don't work in RMAN Development or Support, I think you'll have to open an SR to get your questions answered in a way you can rely on.

I know there are several open topics regarding COMPATIBLE and source versions which require clarification. The notes say "12.1 to 12.1" only - and I know from other customers that some things work even when your source is . But I'm not the instance to add or correct this. There may be a reason for this restriction - but I don't know it.

Now regarding your questions:

(1) Is it possible to use the same technique, but with incremental backup sets, to reduce the downtime when migrating from 11.2 to 12.1?

Yes, this is possible. Please see MOS Note 1389592.1.
It offers a PERL script which allows you to do this with RMAN incremental backups.
Please see our presentation about "Full Transport with RMAN Inc" on the right side of this page:

(2) Could you please confirm whether this is supported?

Sorry, but you'll have to open an SR to get a confirmation.
To me it looks as if this restriction (COMPATIBLE=12.1 on both sides) is a requirement of the specific command used for 'cross-platform transportable tablespaces using inconsistent backup sets'. As stated above, you can do cross-platform TTS regardless of version since Oracle 10.2 - and when using RMAN incremental backups, since as the target with (as far as I remember correctly) as the minimum source. If you want to add some automation, you can rely on the PERL scripts. And if you want to avoid the crucial TTS part and your source is at least , you can add the Full Transportable feature I described in the presentation mentioned above.

(3) Can you please confirm whether this is true or whether it is a documentation bug?

Please clarify with Oracle Support ;-)
I hate to say/write this, but such questions need to be clarified by Support - and if it's a doc bug they'll file it for you and get you the bug ID.
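For reference, the 12c cross-platform clauses discussed above look roughly like this - an illustrative sketch only (the tablespace name, paths and options are made up; check the Backup and Recovery documentation and the MOS notes for the exact syntax in your release):

```sql
# on the source (the clauses quoted in the question above):
RMAN> BACKUP FOR TRANSPORT ALLOW INCONSISTENT
        INCREMENTAL LEVEL 0
        TABLESPACE users
        FORMAT '/stage/users_lvl0.bck';

# on the 12.1 destination, restoring the foreign (cross-platform) backup set:
RMAN> RESTORE FOREIGN TABLESPACE users
        FORMAT '/oradata/users_%U.dbf'
        FROM BACKUPSET '/stage/users_lvl0.bck';
```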


Posted by Mike on February 09, 2016 at 04:44 PM CET #

Hi Mike ,
We encountered a serious problem after an interruption of an upgrade from 11g to 12c due to a network outage ...
The database is in migrate (upgrade) state.
It is not possible to retry catupgrade.sql.
Is a full restore the only way to continue the upgrade from 11g? Any tip to rebuild the dictionary?

Posted by guest on February 20, 2016 at 08:42 PM CET #

Hi Mike ,
After the upgrade from 11g to 12c, we find invalid objects like
csvm$columns ...
They are not resolved even when compiling with utlrp ...

Posted by Kais on February 21, 2016 at 10:21 PM CET #

Sorry to hear/read that, but with so little information, how should I give you advice? I silently assume that you are upgrading to 11.2, as you mention catupgrd.sql.

Not sure in which state your outage happened, but catupgrd.sql is rerunnable - bring the database into UPGRADE mode again and restart the script.

If that doesn't help, then either flash back to your guaranteed restore point, or restore the backup and run the upgrade again.
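The restart described here can be sketched like this (illustrative only - run from the target Oracle Home, and check the upgrade guide for your exact release; in 12c the upgrade is driven by catctl.pl instead):

```sql
-- bring the half-upgraded database back into upgrade mode
STARTUP UPGRADE
-- rerun the upgrade script (it is rerunnable)
@?/rdbms/admin/catupgrd.sql
-- afterwards, recompile invalid objects
@?/rdbms/admin/utlrp.sql
```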


Posted by Mike on February 22, 2016 at 09:54 AM CET #


did you check catupgrd0.log?
Any errors in there?
Was the ORACLE SERVER component INVALID after the completion of catctl.pl?
Did you apply any PSUs to the 12c home prior to the upgrade - if yes, which PSU or BP?


Posted by Mike on February 22, 2016 at 09:56 AM CET #

I ran into something related to my 12c upgrade that cost me days. I ran an upgrade of my to database. It failed on the time zone update. I rolled the upgrade back using Flashback. When I tried to use DBUA again, the second screen in DBUA would not pull from my oratab; the drop-down only showed paths. I worked on this for days and found nothing online that would help. I do use Grid/ASM. I ran srvctl commands to first stop and then remove my listener, and then my database, from ASM. This fixed my problem. I thought your readers might benefit from this.

Posted by Marvin King on March 02, 2016 at 03:41 PM CET #

Hi Mike,
I would like to know your opinion and get some suggestions about the following:

I need to create two small production databases on a newly installed Oracle SE2 on Linux (OEL 6.5).

Since SE2 allows a single-tenant configuration only, I have the following choices:

1) create two non-CDB databases (this is deprecated by Oracle)
2) create one CDB plus a PDB, using both databases for the
3) create two CDBs with one PDB each, which means more

In the first case: what will happen with future upgrades?
In the second case: can I use a CDB as a normal database?
In the third case: is there a significant overhead in terms of performance?

Let me say I really appreciate your blog a lot.
Thanks in advance

Giuseppe Lottini
3Lobyte S.A.S.

Posted by guest on March 11, 2016 at 11:53 AM CET #

Hi Mike, this is Pavan. I am working as an Oracle DBA, and in my environment all the databases are multitenant databases. I have gone through some of your blogs regarding upgrading to 12c from using full transportable tablespaces. Since I have multitenant databases, taking a transportable tablespace export for every database would be difficult for us, considering the downtime granted by the client. Could you please suggest the best Oracle 12c upgrade method for a multitenant database?

Posted by guest on March 14, 2016 at 05:47 AM CET #

What actually happens when I run runInstaller to install the 12c Oracle binaries? Does it only copy the binaries to the new folder, or does it compile and link them?
Another way to ask: does ASM/Grid have to be upgraded to release 12c before I install the 12c binaries?

Posted by guest on March 16, 2016 at 08:58 PM CET #

Hi Mike, I used transportable tablespaces and everything migrated except for the OSB tablespaces - specifically the IAS_TEMP tablespace. It seems to want to find it in the old directory instead of the new one. Is there a trick to migrating OSB from Oracle to Oracle ?

ORA-39083: Object type TABLESPACE:"UNDOTBS1" failed to create with error:
ORA-01516: nonexistent log file, data file, or temporary file "/tnet/u02/oradata/undotbs01.dbf"
Failing sql is:
ALTER DATABASE DATAFILE '/tnet/u02/oradata/undotbs01.dbf' RESIZE 1258291200
ORA-31684: Object type TABLESPACE:"TEMP" already exists
ORA-39083: Object type TABLESPACE:"DEV_IAS_TEMP" failed to create with error:
ORA-01119: error in creating database file '/tnet/u02/oradata/dev_iastemp_data01.dbf'
ORA-27038: created file already exists
Additional information: 1
Failing sql is:
ORA-39083: Object type TABLESPACE:"TOSB_IAS_TEMP" failed to create with error:
ORA-01119: error in creating database file '/tnet/u02/oradata/TOSB_iastemp.dbf'
ORA-27038: created file already exists
Additional information: 1

Posted by David Montoya on March 19, 2016 at 02:22 AM CET #


there is no need to adopt single tenant right now. "Deprecated" only means that we may desupport it sometime after Oracle 12.2. See my blog post getting published tomorrow morning at 9am (March 23).

What we recommend to people today is to play around with Single Tenant (license-free). You can't do Multitenant with SE2 anyhow. Therefore, if you want my true and honest recommendation: play with Single Tenant to gain experience - but I don't see a magic reason for single tenant right now, unless you plan to plug into our DBaaS cloud and want to make that move easier.


Posted by Mike on March 22, 2016 at 03:14 PM CET #


simply upgrade and then plug in.
See the big slide deck (Upgrade, Migrate, Consolidate to Oracle 12c) in the Slides Download Center and check the Multitenant chapter for the simple step-by-step approach.

I'd use Full Transportable or any other technology apart from upgrade/plug-in only if my database needs to be lifted across OS architectures.


Posted by Mike on March 22, 2016 at 03:17 PM CET #


the OUI will tell you that it copies and links the binaries.


Posted by Mike on March 22, 2016 at 03:18 PM CET #

Please download it again if you receive unzip errors.
This is far beyond my power ;-)


Posted by Mike on March 22, 2016 at 03:18 PM CET #


the log says that the two tablespaces already exist.
If you have precreated them, you may need to drop them in order to allow TTS (or are you doing Full Transportable Export/Import into 12c?) to proceed.
The error with UNDOTBS is more or less expected, as I'd assume that your new database already has an UNDO tablespace.


Posted by Mike on March 22, 2016 at 03:21 PM CET #

Hi Mike,

we are planning to upgrade our database from to . Are there any options available to do a zero-downtime or minimum-downtime upgrade using standby databases?

Posted by guest on April 19, 2016 at 07:21 PM CEST #

Yes, of course - Transient Standby is the way to go.
Please check the first slide deck, "Upgrade/Migrate/Consolidate to 12c", and search for "Transient". It has the link to the white paper(s) etc.


Posted by guest on April 19, 2016 at 07:33 PM CEST #

Thank you Mike. I will review the docs and get back to you in case of further queries. Thank you for your help.

Posted by guest on April 19, 2016 at 07:39 PM CEST #

Hi Mike,
Can you recommend a good reference for sizing the SGA? ADDM unfortunately at times makes recommendations that are adverse, and I would like to know if you can recommend a good sizing model with documentation to back up its claims.


Posted by guest on May 18, 2016 at 11:14 PM CEST #


Sorry, I can't give such recommendations blindly without seeing AWR reports, cache hit ratios and such.

The Oracle Performance Tuning Guide has some very good advice:

Hope this helps - cheers

Posted by Mike Dietrich on June 15, 2016 at 04:16 PM CEST #

Hi Mike ,

I appreciate your help so far. I need to ask whether there is a way we can upgrade 11gR2 to 12c using the RMAN duplicate technique and apply an incremental backup later on to roll it forward. Given that I believe duplicate changes the DBID during the process, how can I achieve this?


Posted by Samer on July 13, 2016 at 04:34 PM CEST #


what are you trying to achieve?
Let me know your downtime requirements, the size of your database, the (exact) version you are starting with and the OS platform of source and destination.


Posted by Mike Dietrich on July 15, 2016 at 06:47 PM CEST #

Hi Mike,

do you have any idea when release 12cR2 for Linux will be available? "1HCY2016" has not been met.
So far I have not been able to find anything on oracle.com ...


Posted by To.Ni. on July 29, 2016 at 08:22 AM CEST #

After the upgrade, we get the following warning (see below). What needs to be done?

WARNING: unknown state for DB spfile location resource, Return Value: 3                             
Starting ORACLE instance (normal) (OS id: 16166)
CLI notifier numLatches:3 maxDescs:931                                                                                         
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)
Per process system memlock (soft) limit = 8388608T
Expected per process system memlock (soft) limit to lock
SHARED GLOBAL AREA (SGA) into memory: 3072M
Available system pagesizes:
Supported system pagesize(s):
        4K       Configured          786436          786436        NONE

Posted by Ulrich Rothenbuehler on July 29, 2016 at 11:40 AM CEST #

No - but I can recommend checking MOS Note 742060.1:

It will tell you 2HCY2016 (2nd half of calendar year 2016).

But please take into consideration that usually people wait at least for the first patch set. I don't know of any plans, but in the past it took 11-14 months until the first patch set arrived.

There is no such thing as a 2nd release anymore. This has been superseded by the stretched release cycles ...

Just saying ... ;-)


Posted by Mike Dietrich on August 01, 2016 at 02:15 PM CEST #

Hi Ulrich,

I just googled a bit:

Please search for "Richtige Antwort" ("correct answer") on the page.

Does this solve your issue?


Posted by Mike Dietrich on August 01, 2016 at 02:20 PM CEST #

Hi Mike,

thanks for your help and the information.


Posted by To.Ni. on August 01, 2016 at 03:07 PM CEST #

Hello Mike,
We have one Solaris box with a DB.
We have to migrate it to OEL 7, which doesn't support .
Hence we have to perform a DB upgrade to on Linux as well.
Can we do this using TTS, and if yes, what extra steps do we need to perform for the DB upgrade that are not part of the cross-platform OS migration tasks?


Posted by guest on September 06, 2016 at 10:41 AM CEST #


First of all, TTS works without restrictions on versions and patching (as long as you don't hit known issues).

So for instance you can easily transport from directly into . Is this an Intel Solaris box or a SPARC Solaris machine? If it's Intel Solaris, you can migrate much more easily with a physical standby directly on the Linux box, then activate it and upgrade it. There's no need to have software on the Linux side. Please see our BIG 12c slide deck and scroll to around slide 200 for the RAC upgrade case. Ignore the fact that it's about the upgrade, but see the slides on "heterogeneous physical standby" and on "RMAN commands and procedures for DUPLICATE FOR STANDBY FROM ACTIVE DATABASE", plus the MOS notes.

If it's SPARC Solaris, then either Data Pump or xTTS will be your friend. In both cases there's no need to take care of the source version.

And just by the way: if I were you, I would move to . Please be aware that Oracle goes out of FREE/WAIVED EXTENDED SUPPORT on May 31, 2017.


Posted by Mike on September 06, 2016 at 11:53 AM CEST #

Many thanks for your prompt response!
The source server is SPARC Solaris, so it would be a cross-endian platform migration with a minor DB upgrade.

Also to confirm: yes, we'll go to 12c, but per the requirements the associated application (PeopleSoft) needs to be upgraded first on DB 11g.


Posted by Rishi on September 06, 2016 at 02:02 PM CEST #

ORA-00600: internal error code, arguments: [723], [147576], [5754312], [memory leak]

This appeared after the upgrade to 12c. We opened an SR.

Posted by SCS on September 26, 2016 at 10:37 AM CEST #

@ ORA-600 (723)

could you please share the SR number with me? This shouldn't be an upgrade issue at all, but I'd like to take a look.


Posted by Mike on October 03, 2016 at 12:35 AM CEST #

Hi Mike,
After I upgrade the DB from to , if I don't perform a new backup immediately and the DB crashes a few hours or days later, can I restore the 11g backup and recover both the 11g and 12c archive logs up to the point of failure, so I don't have to upgrade again? If yes, what is the procedure? I didn't find it in MOS.

Posted by Rodrigo on October 17, 2016 at 06:40 PM CEST #


yes, this should work, as you'd do nothing different from upgrading a physical standby database - you simply apply the archive logs from production, including all archives generated during the upgrade. You may have to be a bit careful if you have changed COMPATIBLE - and you'll have to use the higher version's RMAN to recover (or do it from SQL*Plus). But I fear that if it does not work straightforwardly, you may need to open a Sev. 1 SR with Oracle Support.


Posted by Mike on October 18, 2016 at 04:35 PM CEST #

Hi Mike,
we upgraded V11.2.0.4 to V12.1.0.2 and had big performance problems with the optimizer. We recalculated statistics (gather_system_stats, gather_database_stats, gather_dictionary_stats). Performance is okay with the parameter optimizer_features_enable('') - the statement runs in 16 seconds. When I set it to '', the same statement runs for 16 minutes.
The statement has 264 lines - and very many tables (and subselects). What could be the reason the optimizer is not working fine?

Posted by Bruno Gorski on November 21, 2016 at 01:31 PM CET #


Please have a look at my newest blog posts from 22 Nov and 23 Nov - this may be the issue. As a quick remedy you may alter OPTIMIZER_ADAPTIVE_FEATURES and switch OPTIMIZER_DYNAMIC_SAMPLING to a non-default value. But honestly, there are no blind recommendations. The best thing would be to open an SR with Support and get their advice.
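For reference, such a quick test could look like this - these are temporary diagnostics, not blind recommendations, and the dynamic sampling level shown is just an example non-default value:

```sql
-- 12.1: disable the adaptive optimizer features as a whole for testing
ALTER SYSTEM SET optimizer_adaptive_features = FALSE;

-- Try a non-default dynamic sampling level (the default is 2)
ALTER SYSTEM SET optimizer_dynamic_sampling = 4;
```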


Posted by Mike on November 22, 2016 at 11:30 PM CET #

Hi Mike,

Thx for the workshop today, again a very nice workshop.
Hopefully your detour via Denmark brings you home nicely.

Two additions on your presentation.

-The spfile showing all the underscore parameters, as you mentioned, is probably due to a bug. Create the spfile from memory; I had the same problem earlier this year at a customer. See MOS 1268588.1.

-For sqlplus/dgmgrl/rman command-line history you can use rlwrap, as stated by Robert. See Harold's blog on it: https://prutser.files.wordpress.com/2008/12/sqlplus_forgotten_history.pdf. There are some security flaws and solutions; see for instance http://stackoverflow.com/questions/31659680/is-there-a-way-to-have-rlwrap-automatically-delete-history-files
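For completeness, the rlwrap tip usually boils down to a few shell aliases (assuming rlwrap is installed; add them to ~/.bashrc or similar):

```shell
# Wrap the Oracle command-line tools with rlwrap for history/editing:
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
alias dgmgrl='rlwrap dgmgrl'
```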

Thanx again and see you next time.


Posted by Laurens Wagemakers on November 25, 2016 at 04:40 PM CET #

Hi Mike,

I'm migrating Oracle 12c from Linux to Solaris. On the Linux side we used PSUs for patching, but after attending your upgrade workshop in Utrecht I'm wondering if I can use the Database Proactive Bundle Patches on the Solaris side.

I want to install on Solaris and, on top of it, the latest Database Proactive Bundle Patch. It will be Solaris on Intel, and I'm planning to use RMAN incrementally updated backups as the migration strategy.

Kind regards,

Posted by guest on December 01, 2016 at 10:29 AM CET #


We are planning to upgrade our 11gR2 database to 12cR2. By any chance, do you have a complete manual upgrade checklist for this process? I'd appreciate it if you could share it.


Posted by guest on January 19, 2017 at 07:51 PM CET #

Hi BA,

do you really mean "12.2"? Or 12.1?

The note for 12.1 is out on MOS - please search for "upgrade checklist".

The note for 12.2 is included in our slides already but will only become visible once 12.2 is available on-premises:
MOS Note: 2173141.1 Complete Checklist for Manual Upgrades to Oracle Database 12.2


Posted by Mike on January 26, 2017 at 04:59 PM CET #

Hi Mike,

I have two questions for you:
Question 1) I am doing tests to convert a non-CDB database to a PDB. I cloned my non-CDB using a database link.


Since I am using OMF, I did NOT use the FILE_NAME_CONVERT parameter. After I executed the noncdb_to_pdb.sql script and opened my new PDB, all the datafiles were created in the following directory in ASM

Do you know why it created that long directory name, and how to avoid it?

Question 2) We have been doing upgrade tests and would like to know what happens behind the scenes when converting the non-CDB to PDB in ASM? Are the datafiles moved from the old data directory to the new location in ASM, or are they copied? If they are copied, does it mean that we need additional storage during this step?

Thank you,

Posted by Nidia on February 06, 2017 at 09:29 PM CET #


This is expected with ASM/OMF. And this is why you'll get an even "nicer" experience when dealing with standby databases with CDBs in ASM.

Use the cool 12c feature "Online Move of Data Files" which we introduced for EE customers with

ALTER DATABASE MOVE DATAFILE '/data/user1.dbf' TO '/test/user1.dbf';

Works of course with ASM as well. Create a directory in ASM with your desired subdir name and relocate the files to it.
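For example, with placeholder ASM paths (the source name is whatever OMF generated; the target is your desired directory - both are made up here for illustration):

```sql
-- Move an OMF-named datafile into a nicer ASM directory, online:
ALTER DATABASE MOVE DATAFILE
  '+DATA/CDB1/0FE8.../DATAFILE/users.280.930000000'
  TO '+DATA/CDB1/MYPDB/users01.dbf';
```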

A potential workaround would have been the init.ora parameter PDB_FILE_NAME_CONVERT - then you could convert the seed's directory pattern to your desired pattern. Downside: you'll have to reset it every time, as otherwise you'd try to create the next PDB in the same directory again - which may get you errors.

Regarding your 2nd question:
It depends on the option you are using: COPY is the default; NOCOPY is my preferred option - but if you use NOCOPY you'll have to make sure you have a valid backup, as you'd be lost in space otherwise if the plugin operation fails. So with the default, your database's files (the non-CDB's in your case) will be copied - meaning you temporarily need additional storage. In the NOCOPY case they stay where they are. Both solutions have their advantages - but in the latter case you may be able to use the ONLINE move feature afterwards to relocate the datafiles without downtime.
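A NOCOPY plugin could be sketched like this (the PDB name and manifest path are placeholders, not from this thread; again, make sure you have a valid backup first):

```sql
-- Plug in the former non-CDB, leaving its datafiles where they are:
CREATE PLUGGABLE DATABASE mypdb
  USING '/home/oracle/noncdb.xml'
  NOCOPY
  TEMPFILE REUSE;

-- Then, connected to the new PDB:
-- @?/rdbms/admin/noncdb_to_pdb.sql
```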


Posted by Mike on February 08, 2017 at 11:05 AM CET #

Hi Mike,

I appreciate your prompt response and the valuable information.

I will test the online move/relocate feature.

Have a nice day, and thank you.

Posted by Nidia on February 08, 2017 at 05:07 PM CET #

Comments are closed for this entry.

Mike Dietrich
Master Product Manager - Database Upgrade & Migrations - Oracle
