Thursday Jul 07, 2016

Global Temporary Tables - Change in Oracle 12c

A few weeks back I was copied on an email conversation about an important change with Global Temporary Tables (GTT) in Oracle Database 12c.

Something you need to be aware of when using GTTs in Oracle Database 12.1.0.2:
Prior to Oracle Database 12c, statistics on GTTs were SHARED between different sessions. Since Oracle Database 12c this is different by default - statistics on GTTs are visible to the SESSION only.

This can of course be changed. And there are pros and cons. But as I'm not an optimizer expert I'm glad that my colleague Nigel Bayliss, our Optimizer Product Manager, has blogged about it. If you use this functionality please read all the details about this change here:
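
To give an idea, switching a GTT back to the pre-12c shared behavior is a one-liner via a DBMS_STATS preference (owner and table names here are made up):

SQL> exec DBMS_STATS.SET_TABLE_PREFS('SCOTT','MY_GTT','GLOBAL_TEMP_TABLE_STATS','SHARED');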

Thanks Nigel!

--Mike


PS: Original post said: Oracle Database 12.1.0.2 but I misread it - and thanks for the hint - it got introduced with Oracle Database 12.1.0.1 :-)

Monday Jul 04, 2016

Full Transportable Export/Import - Things to Know

This blog post is an addition to:

Seth Miller commented on the pitfall of having a serious issue during the Data Pump run - Data Pump exiting and not finishing - meaning you'd have to do the entire backup/restore/incremental-roll-forward exercise again. Without any doubt, this is no fun at all.

So let me point out a few things to take into consideration - and some of them are not obvious, I guess.

Do you need to backup/restore/incremental-roll-forward again?

When you let Data Pump do all the manual tasks of Transportable Tablespaces (TTS) this is called Full Transportable Export/Import (FTEX as I abbreviate it). With regular TTS you will take your tablespaces read-write in your destination at the end of the entire process. So whenever something goes wrong during the metadata rebuild you won't have to restore all your tens of terabytes.

With FTEX things are a bit different. Data Pump will switch the tablespaces read-write several times during the impdp run. This has to happen. And the job is not restartable right now. This may change in a future release or patch set. We'll see.

What are your options now to prevent a full repeat of all your backup/restore/incremental-roll-forward activities?

FLASHBACK DATABASE unfortunately is not an option: you could set a Guaranteed Restore Point, but as soon as the data file headers get adjusted FLASHBACK won't be able to revert this change.

Setting the data files to read-only on the OS level is not an option either, as you might force Data Pump to fail when it wants to write something into a tablespace and gets an OS return code for not being able to complete the write operation.

Therefore right now the only valid option I can think of is leveraging storage snapshot technologies in order to restore the files back to the state from "before Data Pump ran".

We are aware of this pitfall and are discussing alternatives internally for the future.

Character Sets and Time Zone?

Another topic to pay close attention to is the database's character set and national character set. Please find all the guidelines about character sets and national character sets here:

For time zone files please make sure you have the same time zone version in source and destination. As you can't downgrade your time zone file in the destination, you'll usually have to apply the matching time zone patch in the source before initiating the transport. If you don't pay attention Data Pump will deny the metadata import of the data in the tablespaces, as otherwise you might end up with corrupted time zone data.
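
A quick way to compare the versions on both sides - the view exists in 11.2 and 12c:

SQL> select * from v$timezone_file;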

Please find the precise description in the Oracle 12c documentation: 


Tablespace Encryption

When your tablespaces are encrypted you'll have to be a bit more careful when using TTS or FTEX. Even though the FAQ on Oracle.com mentions that there are no issues, there are still some limitations. The documentation is more precise:

  • Limitations on Transportable Tablespace
    • Transportable tablespaces cannot transport encrypted tablespaces.
    • Transportable tablespaces cannot transport tablespaces containing tables with encrypted columns.
  • Limitations on Full Transportable Export/Import
    • You cannot transport an encrypted tablespace to a platform with different endianness.
      To transport an encrypted tablespace to a platform with the same endianness, during export set the ENCRYPTION_PWD_PROMPT export utility parameter to YES, or use the ENCRYPTION_PASSWORD export utility parameter. During import, use the equivalent import utility parameter, and set the value to the same password that was used for the export.
--Mike

Monday Jun 27, 2016

Full Transportable Export/Import - Migrating an 11.2.0.4 database to Oracle Database 12c - into the Oracle Cloud

Full Transportable Export/Import - one of the coolest features in Oracle Database 12c 

We blogged about Full Transportable Export/Import a while back. It is - no doubt - one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands On Lab - Upgrade, Migrate, Consolidate to Oracle Database 12c) as well.

It utilizes the technique of Transportable Tablespaces - cross-platform, cross-endianness, cross-version - but lets Oracle Data Pump do all the "dirty" work of rebuilding everything kept in your SYSTEM and SYSAUX tablespaces, including views, synonyms, public objects, grants, sequences etc.

You can even combine it with RMAN Incremental Backups - of course cross-platform and cross-endianness as well - in order to decrease the downtime for large databases.
Please find our presentation about the combination of these features in the Slides Download Center to the right:

In addition: You can use this Oracle Database 12c feature with a source database on Oracle 11.2.0.3 or 11.2.0.4. Just the destination database you are upgrading or migrating to must be at least an Oracle 12.1.0.1 database (or newer of course).
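
As a rough sketch, the export call on such an 11.2.0.4 source could look like this (directory object and file names are examples; FULL=Y together with TRANSPORTABLE=ALWAYS and VERSION=12 switches on the full transportable export):

$ expdp system full=y transportable=always version=12 \
    directory=DATA_PUMP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log \
    exclude=table_statistics,index_statistics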

See the feature in action

Roy recorded a 20 minute video demonstrating how to:

  • Migrate an Oracle Database 11.2.0.4 database into a Multitenant pluggable database
  • using Full Transportable Export/Import
  • into the Oracle DBaaS Cloud

And of course the feature also works without migrating into a Container Database environment - and without migrating to the Oracle DBaaS Cloud.

If you need to migrate a database from Oracle 11.2.0.3 or newer to Oracle Database 12c, Full Transportable Export/Import can become your best friend.

--Mike

Monday Jun 20, 2016

Minor Upgrade? Going from 11.2.0.1 to 11.2.0.4?

My Belgian friend Philippe Fierens raised a great question on Twitter last week and dropped me an email again after having a discussion with his client:

For Philippe and myself the answer is pretty clear and straightforward:

There is no minor upgrade anymore, since every (patch set) release is a full release and since new parameters, parameter values and even features can appear in patch sets.

But the following discussion on Twitter, with comments from all sides, made me think about why people would honestly declare going, for instance, from 11.2.0.1 to 11.2.0.3 a minor upgrade whereas going to 12.1.0.2 is seen as a major upgrade.

Let me summarize why I completely disagree - and actually Dom Giles nailed it:

  • Since Oracle Database 11.2.0.1 we deliver patch sets as a full release
  • A patch set can contain not only new parameters or parameter values but may occasionally also contain new features (typical examples in Oracle 11.2.0.4 are the new value for optimizer_dynamic_sampling=11 or the DBMS_REDACT package)
  • Therefore you will have to do exactly the same amount of testing, regardless of whether you go to Oracle Database 11.2.0.4 or Oracle Database 12.1.0.2 - there is ZERO difference in the tests, the time, the effort, the manpower ...

You don't believe me? Then please check MOS Note:1962125.1 (Oracle Database - Overview of Database Patch Delivery Methods). Scroll down a bit to Testing Recommendations By Patch Type and see the rightmost column of the table, headlined "Patch Set Release":

I hope this clarifies it all.

There is no "minor" patch set upgrade anymore. Even though I would love to tell you that going from 11.2.0.3 to 11.2.0.4 is just a minor hop it will require exactly the same testing and evaluation work then going to Oracle Database 12.1.0.2.

But going to Oracle Database 12.1.0.2 will mean that you are under Premier Support until end of June 2018 - whereas Free Extended Support for Oracle Database 11.2.0.4 will end in May 2017.

--Mike


Monday May 30, 2016

DMU - Tips and Tricks - Migration Assistant for Unicode

DMU OTN Logo

Please find previous posts about the Database Migration Assistant for Unicode (DMU) here:


Since Roy and I subscribed to the Hot Topics support email, almost every second day we find a very helpful note neither of us was aware of.

One of these recent finds is:

For those who have never heard of the DMU before: you'll find a very brief overview of this tiny little - and very helpful - tool for database character set migrations (not only to Unicode) here:

Why is this note so helpful?

Simply because it combines the most important topics about the DMU in one note - from getting started to troubleshooting. And this is really helpful when you plan to use the tool.

It is structured into:

  • A) Things to check before using the DMU
  • B) FAQ and common issues
  • C) Problems starting the DMU or during scan
  • D) Typical errors such as "invalid binary conversion" and others
  • E) Post conversion phase
  • F) Steps to do afterwards
  • plus some extras

--Mike

Thursday May 19, 2016

MOS Note:136697.1 - New HCHECK.SQL for Oracle Database 12c

A while back we added this slide to our big slide deck:

The story behind this slide

A large and important customer in the US tested a patch set upgrade - but when they approached the production upgrade from 11.2.0.2 to 11.2.0.3 on a large RAC cluster they had to cancel the attempt and revert to the previous state.

Reason

They had hit a dictionary corruption silently sleeping somewhere in the database, causing no trouble at all so far - until the upgrade touched the broken structures.

They asked us:
"How could we ensure the database is really healthy and in good state?"

And unfortunately there's no 100% confirmation as there may be an unknown issue we are not aware of. But we can test two things:

  • Let RMAN verify the logical integrity of the Oracle blocks
  • Let the hcheck.sql script provided by Oracle Support check and test for known dictionary issues
It's actually a general recommendation we have been giving to customers for a long time: check your really important and business-critical databases from time to time during off-peak hours.
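
Both checks are easy to script. A minimal sketch (the hcheck.sql script itself comes with MOS Note:136697.1):

RMAN> backup validate check logical database;

SQL> @hcheck.sql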

Recently Oracle Support updated hcheck.sql to version 4 to include Oracle Database 12c checks as well.

Please find it here:

--Mike

Wednesday May 11, 2016

OTN Tour EMEA 2016 - Milan and Baku

Time to travel a bit.

OTN Tour EMEA 2016 started yesterday in Düsseldorf, and tomorrow, May 12, Milan will be on the route.

OTN Tour Milan, Italy


Then on to Baku, Azerbaijan for the event on Saturday, May 14.

First time for me. And I'm so looking forward to it.

OTN Tour EMEA 2016 Baku

For Baku you will need to hurry as only 20 tickets are left.

Thanks to all the people like Ludovico Caldara and Kamran Agayev and all the others for their hard work setting up those events.

And I look forward to seeing you and learning a lot from the Oracle ACE Directors presenting throughout the entire day.

CU soon :-)

--Mike

Wednesday May 04, 2016

Are BPs, PSUs and Proactive BPs cumulative?

Are Bundle Patches (BPs) and Patch Set Updates (PSUs) cumulative?

That is a question which sounds trivial to many people, but it actually does get asked quite often. And sometimes I forget to mention this during the workshops - and luckily usually somebody asks the question, reminding me to explain it.

Yes, Bundle Patches and Patch Set Updates (and of course Proactive Bundle Patches and Critical/Security Patch Updates (CPUs/SPUs)) are all cumulative.

You'll find this mentioned in the first paragraph of MOS Note: 854428.1 - Patch Set Updates for Oracle Products:


...

Interesting note on the side:
I would have expected this important piece of information in MOS Note:1962125.1 - Oracle Database - Overview of Database Patch Delivery Methods, but I couldn't find it. So it's no wonder people ask such a trivial question ... [irony!]

Two simple examples:

  • You have the October 2015 PSU applied
  • You'd like to apply the April 2016 PSU on top
    • Then you don't need the January 2016 PSU as it is included in the April 2016 PSU already

  • You never applied a Proactive Bundle Patch
  • You'd like to apply the April 2016 Proactive Bundle Patch because a guy recommended it on an Oracle blog - and actually MOS notes mention it as highly recommended as well
    • You don't need to apply anything beforehand.
      The April 2016 Proactive BP has all the fixes from all previous BPs included on top of Oracle Database 12.1.0.2.0
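
And if you are unsure what is currently applied before patching, OPatch will tell you - for instance:

$ opatch lspatches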

Further Information?

You'll find recent related postings on this blog here: 

 --Mike

Tuesday May 03, 2016

Can I apply a BP on top of a PSU? Or vice versa?

This question was in my inbox this morning raised by a customer via a colleague. 

Our feeling said:
Yes, but you'll have to deinstall the PSU first before applying the BP.

Having a feeling is one thing, knowing the truth is better. And as I've had so much fun applying PSUs and BPs back and forth for the past two weeks, I thought "let's give it a try". So here we go ...

Apply a BP on top of a PSU

This is my starting point - Oracle Database 12.1.0.2 with PSU October 2015, the last state in my VBox environment since I'm experimenting with parameters and packages at the moment.

And this is the result when trying to apply the BP from April 2016 on top: 

[CDB2] oracle@localhost:/media/sf_CTEMP/22899531/22899531/22806133
$ opatch apply
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version    : 12.1.0.1.10
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2016-05-03_10-26-37AM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Sub-Patch : 20243804

        Bug Conflict with Sub-Patch 21359755
        Conflicting bugs are:
        18966843, 19468991, 19032777, 19587324

[..]


Following patches have conflicts: [   21359755   22806133 ]
Refer to My Oracle Support Note 1299688.1 for instructions on resolving patch conflicts.

UtilSession failed: Patch 21359755 is a composite patch which could not be rollback.

Log file location: /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2016-05-03_10-26-37AM_1.log

OPatch failed with error code 73


OK, our suspicion was correct.

There are a lot of conflicts - I have to remove the PSU first. And of course the recommended conflict check would have given me the same result. Furthermore I think I read this a while back in one of the central notes about PSUs and BPs in MOS as well.
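
For reference, that conflict check is run from the unzipped patch directory before applying - a sketch:

$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./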

Removing the PSU from October 2015 first

Removing a PSU or BP is very simple and straightforward (and well described in the ReadMe.html coming with the patch).

[CDB2] oracle@localhost:/media/sf_CTEMP/21359755/21359755
$ opatch rollback -id 21359755
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version    : 12.1.0.1.10
OUI version       : 12.1.0.2.0

Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/21359755_May_03_2016_10_41_54/rollback2016-05-03_10-41-54AM_1.log

Patches will be rolled back in the following order:
   21359755   20831110   20299023   19769480
The following patch(es) will be rolled back: 21359755  20831110  20299023  19769480
Sub-patches of a composite series are being rolled back. The system will be returned to a state where all subpatches are rolled back.

[..]

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2')

Is the local system ready for patching? [y|n]
y

User Responded with: Y
Rolling back patch 21359755...

[..]

RollbackSession removing interim patch '19769480' from inventory

Log file location: /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/21359755_May_03_2016_10_41_54/rollback2016-05-03_10-41-54AM_1.log

OPatch succeeded.

Apply the BP from April 2016

I don't want to repeat myself, as I already wrote about this positive experience a few days ago:

Oracle Database BP April 2016 applied successfully (Apr 21, 2016)
https://blogs.oracle.com/UPGRADE/entry/oracle_database_bp_april16_applied

Further Information?

You'll find recent related postings on this blog here:

--Mike

Thursday Apr 28, 2016

Incremental Statistics Collection in Oracle 12.1.0.2 - A True Story

Recently I came across a really interesting customer case in the UK dealing with Incremental Statistics Collection issues with regard to an upgrade to Oracle Database 12.1.0.2.

This is the follow-up blog post to: 

A while back I blogged already about Incremental Statistics collection in Oracle Database 12.1.0.2: 

And you'll find more information in our documentation and in posts by our optimizer folks:  

The Basics

The idea of Incremental Statistics Collection is simply to save time and resources when gathering statistics for partitioned tables in order to update the global stats. In Oracle Database 12c we added these very important features:

  • Incremental stats working now with partition exchange as well
  • "Changed" partitions won't be eligible for new stats generation until a certain stale percentage (default: 10%) has been reached - this has to be enabled and can be tweaked
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('INCREMENTAL_STALENESS','USE_STALE_PERCENT');
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('STALE_PERCENT','12');

Furthermore we always recommend to:

  • Not enable incremental stats collection globally but only for specific tables (see the sketch below this list) - otherwise the footprint of the synopses on disk can grow fairly large. The biggest footprint I've seen so far was almost 1TB in size in the SYSAUX tablespace
  • Enable it mostly for range-partitioned tables where only a few partitions undergo DML changes
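
Enabling it selectively per table looks like this (owner and table names are made up):

SQL> exec DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER','SALES_RANGE','INCREMENTAL','TRUE');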

The Case

Actually the synopsis table in this particular case contained "only" 300GB of data. And as the starting point was already Oracle Database 11.2.0.3, just a change from Range-Hash to List-Hash Partitioning would happen. As this happens via metadata swapping the impact should be small.

But the issue coming up after the upgrade didn't have to do with this change in partitioning layout.

Issue No.1

During the maintenance window the incremental stats job did not finish and actually this statement caused plenty of trouble:

delete from sys.wri$_optstat_synopsis$ where bo# = :tobjn and group# in (select * from table(:groups))

Not completing this statement within the 4 hours of the default maintenance window led to a rollback of the delete - and the rollback alone took 14 hours. It turned out that the delete has to happen (and complete) before the regathering of stats can start.

I did recommend:
patch 21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2  (see also MOS Note:2041541.1 - Gather_Database_Stats_Job_Proc Taking More Time in 12.1.0.2 Than 11.2.0.4)

and the customer requested:
Patch 22893653: MERGE REQUEST ON TOP OF DATABASE PSU 12.1.0.2.160119 FOR BUGS 19450139 20807398
on top of their January 2016 PSU - the merge included the patch I mentioned.

Besides that another issue got discovered.

Issue No.2

The daily purge of statistics didn't really work on a large synopsis table, as the default degree of parallelism introduced with Oracle 12c gets derived from the number of blocks of the synopsis table - a bigger table means a higher parallel degree for the purge. It ended up with a PARALLEL hint of 60 - and that was counterproductive. Once the purge got started manually in serial mode or with a low DOP it completed in less than 1 minute.

With a trace similar to:

set serveroutput on;
EXEC dbms_output.enable(999999999);
EXEC dbms_stats.set_global_prefs('trace',1+4);
EXEC dbms_stats.gather_table_stats(ownname=>'&TABLE_OWNER',tabname=>'&TABLE_NAME');
EXEC dbms_stats.set_global_prefs('trace',0);

the issues could be identified as: 

Bug 21258096 - UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS

The customer requested another merge patch 22926433 which contains the following fixes:

19450139: KN:LNX:PERFORMANCE ISSUE WHEN RUNNING GATHER TABLE STATS WITH INCREMENTAL STATS
20807398: ORA-00600 [KGL-HASH-COLLISION] WITH FIX TO BUG 20465582
21258096: UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS
21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2

Finally the customer agreed with Support's recommendation to truncate the two synopsis tables, WRI$_OPTSTAT_SYNOPSIS_HEAD$ and WRI$_OPTSTAT_SYNOPSIS$, and regather incremental statistics the following weekend. Of course they validated this action plan on their performance testing environment first - with the merge patch applied - and it had the desired effect and solved the issue.

Incremental statistics gathering now works as expected, and the job fits into the maintenance window.

Lessons Learned

Actually Oracle Support released a very helpful and important note just a few weeks ago (too late for this customer): 

It contains not only links to the patches for the issues the customer hit here - but also a long list for Oracle 11.2.0.4 as well. 

Another MOS Note worth mentioning here:

But these were not all the issues the customer faced - so I may write another follow-up blog post within the next days.

--Mike

PS: All credits go to David Butler and Rob Dawley - thanks for your hard work, sorry for all the inconvenience - and especially thanks for writing it all up and forwarding it to me!!!

Wednesday Apr 27, 2016

Incremental Statistics Collection in Oracle 12.1.0.2 - Upgrade Pitfalls

A while back I blogged already about Incremental Statistics collection in Oracle Database 12.1.0.2: 

And you'll find more information in our documentation and in posts by our optimizer folks:  

And you may read in this follow-up blog post about a related real-world customer example ...

Database Upgrade

Important to know is the fact that during a database upgrade the underlying tables containing the synopses for incremental stats collection may be reorganized. And depending on the amount of data this can take a while.

The largest synopsis tables I have seen so far were almost 1TB in size, at a financial customer in Europe. But I have seen ranges around 300GB quite often in the past months.

What happens during the database upgrade? 

Incremental Statistics Collection got introduced with Oracle Database 11.1.0.6 and has been improved from release to release. But during a database upgrade a reorganization of the synopsis table can happen.

  • Upgrade from Oracle 11.1.0.6/7 or 11.2.0.1 to Oracle 11.2.0.2/3/4:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ to use range-hash partitioning
    • Most data movement will happen here
    • As two synopsis tables exist for the interim period, this will consume 2x the space of the synopsis table during the movement

  • Upgrade from Oracle 11.2.0.2/3/4 to Oracle 12.1.0.x:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ from range-hash partitioning to list-hash partitioning
    • There is little data movement in this case as the move happens with the help of metadata swapping

Which symptoms may you see?

Actually very simple and obvious symptoms:
Phase 1 of the parallel upgrade to Oracle Database 12c takes unusually long. It normally completes within a few minutes - but in those cases it can literally take hours.

If that happens, check your catupgrd0.log and watch out for the long running statements. It does not necessarily mean a huge synopsis table is the cause. For instance one of my German reference customers, DVAG, had leftovers in the SYSAUX tablespace because of bugs in earlier releases they had worked with.

But if you spot such results (quoting a colleague here):

"The table WRI$_OPTSTAT_SYNOPSIS$ has 20420 partitions, 344618 subpartitions and 921207 MB size. [..] This transformation step lasts for 6,5 hours, so the whole upgrade process duration has an important impact from this step." 

then you should be alerted. 

How can you check this upfront? 

We haven't included a check into the preupgrd.sql yet. But the following queries will tell you whether you may see issues - the larger the numbers, the more likely:

  • How many tables have incremental stats on?
    SQL> select count(distinct bo#) from sys.wri$_optstat_synopsis_head$;

  • How many partitions does your WRI$_OPTSTAT_SYNOPSIS$ have?
    SQL> select partition_count from dba_part_tables where table_name='WRI$_OPTSTAT_SYNOPSIS$';

  • How large is your synopsis table?
    SQL> select sum(bytes/(1024*1024)) "MB" from dba_segments where segment_name='WRI$_OPTSTAT_SYNOPSIS$';

  • Which tables have incremental stats ON?
    SQL> select u.name "OWNER" ,o.name "TABLE_NAME" ,p.valchar
    from sys.OPTSTAT_USER_PREFS$ p
    inner join sys.obj$ o on p.obj#=o.obj#
    inner join sys.user$ u on o.owner#=u.user#
    where p.PNAME = 'INCREMENTAL';

  • Synopses for tables which don't exist anymore?
    SQL> select distinct h.bo# from sys.wri$_optstat_synopsis_head$ h where not exists (select 1 from sys.tab$ t where t.obj# = h.bo#);

Especially a large number of tables being monitored and a size of tens or hundreds of GB will indicate that you may have to plan for a longer upgrade duration.

How do you cure this?

Support sometimes gives the recommendation to look at MOS Note: 1055547.1 - SYSAUX Grows Because Optimizer Stats History is Not Purged and asks for a manual purge of stats, for instance:

begin
  for i in reverse 1..31
  loop
    dbms_stats.purge_stats(sysdate-i);
  end loop;
end;
/

But this won't clean up the synopsis tables - it purges only the history of object statistics. And it may create some noise in your UNDO. So in any case you may be better off setting your stats retention policy to something such as 10 days instead of the default of 31 days.
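
Adjusting the retention is a one-liner (10 days as an example value):

SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(10);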

First of all you have to make sure that this patch got applied to your target before the upgrade - it adds parallel index capabilities which speed up the rebuild a lot:

Be aware:
Truncating WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ is strictly not recommended. If you plan to do it the hard way please check back with Oracle Support for their approval first.

Further Information?

Please read on here about a real-world customer example ...

 


--Mike

Tuesday Apr 26, 2016

MOS Note 1454618.1: Quick Reference to Database PSUs, CPUs, BPs and Patchsets

Sometimes my mouse arm gets tired from clicking through MOS notes just to download a specific PSU or BP - and as I'm experimenting a lot with PSUs and BPs right now, I clicked a lot in the past days and nights.

Usually I'd start with MOS Note:161818.1 - then click on the release link (e.g. 12.1.0.x) in the left-most column, then into the Availability and Known Issues note (e.g. MOS Note:1683799.1 for Oracle Database 12.1.0.2.0), and then select the most recent patch from the list of Current Recommended Patches.

Even though we all agree that you should ideally always be on the most recent BP (or at least PSU), there may be situations where you need to access an older PSU or BP, or just the CPU.
So what if you need, for instance, the October 2015 PSU?
This is obviously not linked from the above note.

My usual click routine guides me via the Critical Patch Updates and Security Alerts page, from there via the particular release date (e.g. October 2015) to the Database Server Version (e.g. linked from here is MOS Note:2037108.1) and then to the patch number I'm searching for.

I just learned a few days ago, by accident, that there's a MOS Note which I've had in my favorites since then:

containing all this information - even for 8.1.7.4 patches if you still need them ;-)

After listing the base release first, followed by the available patch sets, it offers you the links to the PSUs, CPUs and BPs - and if you are looking for the patches containing the OJVM you'll find them in a separate section by scrolling down towards the end of the note.

*** Don't get puzzled by the note's title - it lists only the Bundle Patches for Windows inside, not the general link to all BPs. A colleague and I have already left feedback for the note owner to add the BP links (or Proactive Bundle Patch links) as well ***

MOS Note Patches PSUs CPUs SPUs BPs

In fact a very useful MOS Note.

--Mike

Friday Apr 22, 2016

New PREUPGRD.SQL is available - Upgrade 12c - Apr16

It's time for a new and improved version of our team's preupgrd.sql (comes with the preupgrade package utluppkg.sql). 

The April 2016 preupgrd.sql 

Please always download and use the most recent version from:

as this version is 3 years newer than the one you get with a fresh install of Oracle Database 12.1.0.2.
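
Running it is straightforward: copy preupgrd.sql and utluppkg.sql into the same directory and execute it in the database you plan to upgrade, for instance:

SQL> @preupgrd.sql

The script prints where it writes its results - typically the preupgrade subdirectory under $ORACLE_BASE/cfgtoollogs/<SID>.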

preupgrd.sql - April 2016 - MOS Note:884522.1

Included in the April 2016 PSU and BP as well

Great news - and very important: once you apply the most recent April 2016 PSU or BP (recommended - please see here: April 2016 PSU/BP are here!) you'll automatically get the most recent version of preupgrd.sql with utluppkg.sql as well.

No need to download it separately anymore once you are on this PSU or BP.

--Mike

Thursday Apr 21, 2016

Oracle April 2016 PSU and Proactive BPs are there

Hurray, it's Patching Day!

Sounds a bit like D-Day ;-) But on April 19, 2016 the most recent April PSUs (Patch Set Updates) and BPs (Bundle Patches) got released.

Find all the necessary information with the below links: 

The important change in the April PSU/BP release:
The database patch for "Engineered Systems and Database In-Memory 12.1.0.2" luckily got renamed to "Proactive Bundle Patch 12.1.0.2". That is not only a rebranding - it should express that we would like to encourage you to apply the Bundle Patches instead of the PSUs. The simple reason is that the BPs contain optimizer fixes as well.

In MOS Note: 2102148.1 (Patch Set Update and Critical Patch Update April 2016 Availability Document) you'll find a section 3.1.4 linking to the database patches.

This is the recommended one for Oracle Database 12.1.0.2:

  • Database Proactive Bundle Patch 12.1.0.2.160419 (Apr 2016) Patch 22899531

But right now it is available for Linux x86-64, zLinux and Intel Solaris only. Not sure when the others will get released. Please find links to the regular PSUs and other ports and releases, such as 11.2.0.4 and Windows, in the above MOS Note: 2102148.1.

This is the list of fixes included in this Bundle Patch:

And don't worry about the name - I found out yesterday that not all MOS Notes have adopted the new naming convention yet, renaming "Bundle Patches for Engineered Systems and DB In-Memory" (which was very misleading anyway) into "Proactive Bundle Patches". This may take a few additional days, I'd guess ...

I will download it right now and patch my HOL environment.

And as usual don't forget the most recent version of opatch (Patch 6880880).

opatch download MOS


--Mike 


Wednesday Apr 20, 2016

Data Pump - Exclude Stats differently for TTS and FTEX

A nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export/Import (credits to Roy and Dean Gagne).

Transport Statistics via a Staging Table

First of all, we always recommend to exclude statistics when doing a Data Pump export, as the import of such stats takes way longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases please see the Oracle Performance Tuning Guide, which has a nice tutorial.

The basic steps to transport statistics from one database to another quickly and efficiently consist of (a sketch follows after this list):

  1. Create a staging table in your source database with DBMS_STATS.CREATE_STAT_TABLE
  2. Export your local stats into this staging table by using DBMS_STATS.EXPORT_SCHEMA_STATS
  3. Export the staging table and import it into your destination database with Data Pump
  4. Import the statistics held in the staging table by using DBMS_STATS.IMPORT_SCHEMA_STATS
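
A sketch of steps 1, 2 and 4 (schema and staging table names are invented):

SQL> exec DBMS_STATS.CREATE_STAT_TABLE('SCOTT','MY_STATS_TAB');
SQL> exec DBMS_STATS.EXPORT_SCHEMA_STATS(ownname=>'SCOTT',stattab=>'MY_STATS_TAB');
-- transport SCOTT.MY_STATS_TAB to the destination with Data Pump, then:
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS(ownname=>'SCOTT',stattab=>'MY_STATS_TAB');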

For the regular Data Pump exports we always recommend to set:

EXCLUDE=STATISTICS

to avoid performance penalties during impdp.

But this does not affect Transportable Tablespaces and Full Transportable Export/Import.

How to exclude Statistics for TTS and FTEX?

For reasons I don't know, the metadata heterogeneous object for "transportable" is different from all the others. Therefore, in order to exclude statistics for Transportable Tablespaces and Full Transportable Export/Import you must set:

EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS

Tuesday Apr 19, 2016

RMAN Catalog Upgrade fails - ORA-02296 - error creating modify_ts_pdbinc_key_not_null

This issue got raised to me via a customer I've known for quite a while - all credits go to Andy Kielhorn for digging down into the issue and solving it.

Failed RMAN Catalog Upgrade from 11.2.0.4 to 12.1.0.2

The RMAN catalog upgrade:

SQL> @?/rdbms/admin/dbmsrmansys.sql

$ rman CATALOG rman/xxx@rman01

RMAN> UPGRADE CATALOG; 

RMAN> UPGRADE CATALOG;

failed with the following sequence of error messages: 

error creating modify_ts_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_tsatt_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_df_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_tf_pdb_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_bs_pdb_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

Andy also found these bugs in MOS - but with no helpful information included:

  • Bug 20861957
    ORA-2296 DURING UPGRADE CATALOG TO 12.1.0.1 IN AN 11.2 DATABASE
  • Bug 19677999
    CATALOG SCHEMA UPGRADE TO 12.1.0.2 FAILED

The Solution

There seems to be a potential inconsistency in the RMAN catalog, introduced when the PDB/CDB mechanisms got added. This does not necessarily happen - but it can happen.

The workaround is described in:

  • Bug 19677999 : CATALOG SCHEMA UPGRADE TO 12.1.0.2 FAILED

==> Connect to catalog schema and clear the table having null details 

delete bdf where not exists (select 1 from dbinc where dbinc.dbinc_key = bdf.dbinc_key);
delete bcf where not exists (select 1 from dbinc where dbinc.dbinc_key = bcf.dbinc_key);
commit;

==> After clearing the offending rows , upgrade catalog worked

But please use this workaround only under Oracle Support's supervision. I documented it here only to ease your verification.

Andy fixed it with:

update <rmancat_owner>.dbinc set PARENT_DBINC_KEY=NULL where (DBINC_KEY) IN (SELECT DBINC_KEY from <rmancat_owner>.ts where pdbinc_key is null);
commit;

but please open an SR and point Oracle Support to the bug and the potential workarounds in case you hit the issue.

--Mike

Monday Apr 18, 2016

Patching does not work - Journey to the Cloud VI

DBaaS Oracle Cloud

What happened so far on my Journey to the Cloud?


Patching in the Cloud

I would like to patch my Oracle DBaaS Cloud database today. It was so simple a few weeks ago. But I haven't patched it up to the January 2016 PSU (12.1.0.2.160119) yet - shame on me :-(

Please don't be scared by the weird mix of German and English - I filed a bug for this months ago ... and maybe one magic sunny day I can switch the language back to pure English ...

Anyhow, I chose my VM and it highlighted an available patch to apply, the January 2016 PSU:



Let's do the PreCheck 

I chose PRECHECK first from the drop-down hamburger menu to the right and received the following message: 

But where are the logs? There is no link, no hint, nothing.
.

Let's check the README 

At this point I decided to check the README to find out whether I had missed something important. So I clicked on README and received this meaningful message:

Potentially not a fault of the Cloud folks, as I realized by myself that the MOS facility to link patches directly has been broken for weeks. A manual interaction in MOS to locate the README is necessary - but it gave me no indication regarding the above failed precheck.

Force to apply the PSU 

Next step: FORCE the patch apply (2nd option in the hamburger menu). But the result is again not positive, showing me once more a weird mix of Denglish (Deutsch and English).

Result?

Well, that is somewhat unexpected. After consulting our documentation and googling around, I checked with some people who are way more familiar with our DBaaS Cloud than I am - and learned that this functionality has been broken for several weeks.

No further comment necessary.

--Mike

Thursday Apr 07, 2016

Collaborate16 - See you on Monday!

Collaborate Conference 2016

Time flies.

I already started packing stuff for COLLABORATE16 - and I hope to see you in Las Vegas from April 10-14, 2016 :-)

These are the sessions I'll present: 

And if you'd like to discuss your topics in more detail feel free to visit me at the:

Oracle Booth #1053
Exhibit Hall - Bayside C/D, Level 1 – Mandalay Bay South Convention Center

  • Wednesday, April 13
    • 10:15 a.m. - 11:15 a.m.
    • 12:15 p.m. - 1:15 p.m.

     CU soon!

    --Mike

    Thursday Mar 31, 2016

    DROP PLUGGABLE DATABASE - things you need to know

    Directly after my DOAG (German Oracle User Group) Conference presentation about "How Single-/Multitenant will change a DBA's life" Martin Bach (Enkitec) approached me and told me about his experiences with the DROP PLUGGABLE DATABASE command and future recoverability.

    Martin discovered that once you've issued the DROP PLUGGABLE DATABASE command you can't reuse a previously taken backup of this particular PDB anymore to recover the PDB into the existing CDB. I wasn't aware of this and I'm glad that Martin told me about it.

    Actually only the meta information in the controlfile or the RMAN catalog gets deleted. But archive logs and backups still persist.

    See also my blog post from Jan 9, 2015:
    Recent News about Pluggable Databases - Oracle Multitenant

    This is the error message you'll see when you try to recover a dropped pluggable database:

    RMAN> restore pluggable database pdb2drop;

    Starting restore at 01-JUN-15
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 06/01/2015 10:10:40
    RMAN-06813: could not translate pluggable database pdb2drop

    Just a few weeks ago a colleague from German presales asked me if I knew a more convenient way to restore a PDB - once the magic command (DROP PLUGGABLE DATABASE) has been issued - than recovering it into an auxiliary container database and unplugging/plugging it. I didn't.

    But Nik (thanks!!!) told me that he pushed a MOS Note to be published explaining how to work around this issue:

    MOS Note: 2034953.1
    How to Restore Dropped PDB in Multitenant

    In brief this MOS Note describes how to:

    • Create an auxiliary container database
    • Recover the backup (yes, you will have to have a backup of your container database) including this particular PDB
    • Unplug the PDB after recovery has been finished and plug it back into the original CDB

    Now some will say: Hey, that's simple and obvious. For me it wasn't ;-) That's why I write about it - to remind myself of this workaround ...

    --Mike



    Wednesday Mar 30, 2016

    DBUA and Read-Only Tablespaces - Things to Know II

    Related Blog Posts:



    Thanks to Rodolfo Baselli commenting on a previous blog post about the DBUA and Read-Only Tablespaces, I dug a bit deeper and found out that "assuming silently" does not mean "works as intended".

    But one piece after another.

    Rodolfo commented that if he triggers the DBUA to switch all data tablespaces into read-only mode for the duration of the upgrade, it is still up to him to create the correct backup - the DBUA won't do it.

    This is the option in the Oracle Database 12.1.0.2 DBUA (Database Upgrade Assistant):

    DBUA - Tablespaces in Read Only Mode

    I silently assumed that the DBUA would choose the correct backup strategy automatically when it offers (enabled by default) to create an Offline Backup on the Recovery Options screen a while later in the same dialogue.

    Backup Strategy DBUA

    But in fact it doesn't.

    When you choose the default, "Create a New Offline RMAN Backup", it will create a full offline RMAN backup to the desired location - but not a partial offline backup as intended by the optional trigger to have the tablespaces in read-only mode during the upgrade, which would allow a fast and simple restore without the need for a recovery. Please note that I would recommend this option generally only in cases where the database is in noarchivelog mode on purpose, or where the RTO (Recovery Time Objective) can be met only by restoring a partial offline backup.

    What are your options now?

    If you switch on the Read-Only option on purpose you'll have to choose "I have my own backup and restore strategy" and do the partial offline backup by yourself - before you start the DBUA.

    Personally I'd consider this option as not useful when used within the DBUA right now. We have discussed this internally; it may work correctly in a future patch set or in the upcoming next release of the database - therefore no offense to anybody. It's just important to know that you'll have to do the partial offline backup by yourself at the moment.

    --Mike

    Wednesday Mar 23, 2016

    What does DEPRECATED mean? And DESUPPORTED?

    There's sometimes a misunderstanding about what we mean by the term DEPRECATED - and what the difference to DESUPPORTED is. Actually there's a full chapter in the Database Upgrade Guide listing deprecated and desupported features.

    Deprecated

    Especially this message, stating that the non-CDB architecture is deprecated in Oracle Database 12c, puzzled a lot of customers.


    In the Database Upgrade Guide we clearly explain what deprecated means:

    "By deprecate, we mean that the feature is no longer being enhanced but is still supported"

    So for you it just means: be aware that we don't further develop or enhance something. But you are still fully supported when using this feature.

    Another well known example is Oracle Streams which is fully supported in Oracle Database 12c - but not in Oracle Multitenant - and is deprecated and therefore not enhanced or developed any further. 

    Or to name another example: Oracle Restart, which has been deprecated for some time - but still isn't desupported. And I know a good bunch of customers using it in production, even with Oracle Database 12.1.0.2, on several hundred databases.

    Desupported

    Even if something is desupported - remember the Rule Based Optimizer? - you can still use the feature. But at your own risk, as we don't fix any bugs or issues anymore.

    Again the Database Upgrade Guide clarifies the term:

    "By desupported, we mean that Oracle will no longer fix bugs related to that feature and may remove the code altogether"

    Other common examples in Oracle Database 12c are Enterprise Manager Database Control, which simply does not exist anymore in Oracle Database 12c, or the desupport of Raw Devices.

    Summary

    Deprecated is a signal that something may disappear in the future and does not get enhanced anymore. No activity is required except taking note for your future plans. Desupported means that we don't fix anything anymore for a desupported feature or product - and it may even disappear. But often desupported features are still there and can be used at your own risk.

    --Mike


    Tuesday Mar 22, 2016

    GC Freelist Session Waits causing slowness and hangs

    Best Practice Hint

    One of the best things in my job:
    I learn from you folks out there. Every day.

    Credits here go to Maciej Tokar, who explained the below topic to me via LinkedIn - thanks a lot, Maciej!

    Locks are not being closed fast enough, resulting in gc freelist waits

    You can find a reference for Global Cache Freelist in the Oracle documentation. This issue can or will lead to the database being slow, up to complete hangs. Based on my research it looks as if the issue is not related to RAC only but is a general thing. In your session waits you'll spot this:

    Event                               Event Class        % Event   Sessions
    ----------------------------------- --------------- ---------- ----------
    gc freelist                         Cluster              41.37       8.61

    This has been logged as bugs 21352465 (public) and 18228629 (not public). It causes locks not being closed fast enough, resulting in gc freelist waits. In conjunction, the default for _gc_element_percent seems to be too low at 120 (or 110 in 11.2.0.4).

    Actually the issue can affect not only Oracle Database 12.1.0.2 but also Oracle Database 11.2.0.3 and 11.2.0.4.

    See MOS Note:2055409.1 (Database Hangs with High "gc freelist" wait ) for further details.
    .

    Solution

    • Apply the patch for bug 18228629 on top of a PSU or BP where available
      • See the drop-down list to the right labeled "Release" to access the correct patch for your release
      • Unlike the above MOS Note states, in Oracle Database 12.1.0.2 it is only available on top of the January 2016 PSU and BP and two other Exadata BPs - and on Linux only!
    • Use the workaround and set _gc_element_percent = 200
      • This will require an instance restart as the parameter can't be changed dynamically:
        alter system set "_gc_element_percent"=200 scope=spfile;

    Epilogue

    We've had a lot of discussions about underscore parameters in the past weeks. And I'm not a big fan of underscores, especially when it comes to upgrades, as experience has shown that having underscores set one day may make it hard to remove them another day - and underscores can significantly impact the upgrade duration in a negative way.

    But on the other hand, if an issue is seriously affecting many customers and there's no patch available for your platform and environment right now, what else can one do?

    --Mike


    Wednesday Mar 09, 2016

    OUGN Conference - On the boat again

    OUGN Spring Conference 2016

    Last year influenza took me down and out just a couple of days before my planned departure for the famous OUGN Spring Conference. But this year (so far) I'm still happy and healthy and on my way to beautiful Oslo. I'm really looking forward to this year's OUGN Spring Conference, which will again happen on the boat departing from Oslo and sailing over to Kiel - and then returning back.

    In case you plan to visit my talks and demos:

    • Thursday, 10-March-2016 - 14:00-14:45h - Parliament 1+2
      How Oracle Single Tenant will change a DBA's life
    • Friday, 11-March-2016 - 10:30-11:15h - Parliament 1+2
      Oracle Database Upgrade: Live and Uncensored

    Looking forward to this wonderful event with so many good talks and presentations and such a great group of people. And thanks to the organizers of OUGN!

    --Mike


    Tuesday Mar 08, 2016

    Parameter Recommendations for Oracle Database 12c - Part II


    Best Practice Hint

    Time for a new round of Parameter Recommendations for Oracle Database 12.1.0.2. The focus of this blog post is on very well-known parameters with interesting behavior. This can be a behavior change or simply something we'd like to point out. And even if you still work on Oracle Database 11g, some of the below recommendations may apply to your environment as well.

    Preface

    Again, please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

    We strongly recommend Real Application Testing, especially the SQL Performance Analyzer but also Database Replay to verify the effect of any of those parameters. 

    Known Parameters - Interesting Behavior

    • parallel_min_servers
      • What it does?
      • Default:
        • CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2
      • Behavior Change:
        • Setting it to a value below the default will let the database ignore it.
        • In Oracle Database 11g the default was 0
        • Compare 11.2.0.4 vs 12.1.0.2 on the same box:
          • 11g:
            SQL> show parameter parallel_min_servers
            NAME                  TYPE     VALUE
            --------------------- -------- ------
            parallel_min_servers  integer  0

          • 12c:
            SQL> show parameter parallel_min_servers
            NAME                  TYPE     VALUE
            --------------------- -------- ------
            parallel_min_servers  integer  8
      • Explanation:

    • job_queue_processes
      • What it does?
        • See the Oracle Documentation - value specifies the maximum number of job slaves to be created to execute jobs started by either DBMS_JOBS or DBMS_SCHEDULER
      • Default:
        • 1000
      • Recommendation:
        • Set it to a rough equivalent of 2 * CPU cores
      • Explanation:
        • In Oracle Database 12c we introduced automatic stats gathering during CTAS and IAS (into an empty table only) operations. This can potentially lead to too many jobs doing the stats gathering. Furthermore issues can happen due to the default of concurrent stats gathering.
          Therefore limiting this parameter seems to be a good idea.
        • Be aware when switching it to 0 - this will block all recompilation attempts. Furthermore, generally no jobs can be executed anymore with DBMS_JOBS or DBMS_SCHEDULER.
        • Multitenant behavior change:
          In 12.1.0.1, job_queue_processes was a Container Database (CDB) modifiable parameter (i.e. at a global level). However, in 12.1.0.2, the job_queue_processes parameter is not CDB modifiable; instead it's PDB modifiable, which means each PDB can have its own job_queue_processes value.
      • More Information:
      • Annotation:
        I've had an email exchange with Stefan Köhler about the stats behavior for CTAS. As I couldn't reproduce the behavior myself which we saw at two customers with job_queue_processes=1000 and heavy CTAS activity (and which could be remedied by setting job_queue_processes to a lower value), I would put a question mark behind my above statement.

    • recyclebin
      • What it does?
        • See the Oracle Documentation - controls whether the Flashback Drop capability is turned on or off. If the parameter is set to OFF, then dropped tables do not go into the recycle bin. If this parameter is set to ON, then dropped tables go into the recycle bin and can be recovered.
      • Default:
        • ON
      • Recommendation:
        • If the recyclebin is ON (the default) in your environment then empty it at least once per week. Create a default job in all your environments emptying the recycle bin every Sunday morning at 3am for instance (a scheduler sketch follows below this parameter's section):
          SQL> purge DBA_RECYCLEBIN;
      • Explanation:
        • The recycle bin is on in every database by default since Oracle 10g. The danger is that it may not be emptied - and especially on developer databases many objects may be created and dropped again. As a result the dropped objects and their dependents stay in the database until the space needs to be reclaimed. That means they exist in the data dictionary as well, for instance in TAB$. Their names are different now, starting with "BIN$..." instead of "EMP" - but they will blow up your dictionary. And not emptying it often enough may introduce a performance dip to your system, as the cleanup of many objects can be quite resource-intensive
        • Check your current recycle bins:
          SQL> SHOW RECYCLEBIN;
          ORIGINAL NAME RECYCLEBIN NAME              OBJECT TYPE DROP TIME
          ------------- ---------------------------- ----------- -------------------
          TEST_RBIN     BIN$2e51YTaSK8TL/mPy+FuA==$0 TABLE       2010-05-27:15:23:45
          TEST_RBIN     BIN$5dF60S3GSEOSSYREaqCg==$0 TABLE       2010-05-27:15:23:43
          TEST_RBIN     BIN$JHCDN9YwQRXjXGOJcCIg==$0 TABLE       2010-05-27:15:23:42
      • More Information:
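
    For the weekly cleanup mentioned above, a DBMS_SCHEDULER job could look like this - a minimal sketch, job name invented, to be created by a suitably privileged user:

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'PURGE_RECYCLEBIN_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN EXECUTE IMMEDIATE ''PURGE DBA_RECYCLEBIN''; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
        enabled         => TRUE);
    END;
    /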

    • deferred_segment_creation
      • What it does?
        • See the Oracle Documentation - set to the default (TRUE), then segments for tables and their dependent objects (LOBs, indexes) will not be created until the first row is inserted into the table
      • Default:
        • TRUE
      • Recommendation:
        • Set it to FALSE unless you plan to create a larger number of tables/indexes knowing that you won't populate many of them.
      • Explanation/Risk:
        • If my understanding is correct, this parameter got introduced with Oracle Database 11.2 in order to save space when applications such as EBS, Siebel or SAP create tons of tables and indexes which may never get used, as you don't work with the matching module of the software
        • The risk can be that certain queries check DBA_SEGMENTS and/or DBA_EXTENTS - and if there's no segment allocated you won't find an indication of the object's existence in there, even though it actually exists. Furthermore we have seen issues with Data Pump workers getting contention, and some other things.
      • More Information:
        • The documentation has become now pretty conservative as well since Oracle 11.2.0.4 and I'll second that:
          Before creating a set of tables, if it is known that a significant number of them will not be populated, then consider setting this parameter to true. This saves disk space and minimizes install time.
     --Mike

    Friday Mar 04, 2016

    Parameter Recommendations for Oracle Database 12c - Part I

    Best Practice Hint

    A few weeks ago we published some parameter recommendations, including several underscores, but based on an internal discussion (still ongoing) we decided to remove that entry and split up the tasks. The optimizer team will take over parts of it and I'll post an update as soon as something is published.


    Preface

    Please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

    We strongly recommend SQL Performance Analyzer to verify the effect of any of those parameters. 

    How to read this blog post?

    Never ever blindly set any underscore or hidden parameters because "somebody said" or "somebody wrote on a blog" (including this blog!) or "because our country has the best tuning experts worldwide" ... Only trust Oracle Support if it's written in a MOS Note or an official Oracle White Paper, or if you have worked with a particular support or consulting engineer for quite a long time and they understand your environment.

    Important Parameter Settings

      • _kks_obsolete_dump_threshold
        • What it does?
          • Introduced in Oracle 12.1.0.2 as an enhancement to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times.
        • Problem:
          • Trace files can grow like giant mushrooms due to cursor invalidations
        • Solution:
        • Patches:
          • Fix included in DBBP 12.1.0.2.160216
          • Fix on top of 12.1.0.2.13 DBEngSysandDBIM
          • Since Feb 13, 2016 there's a one-off available but on Linux only - and only on top of a fresh 12.1.0.2 
        • Remarks:
          • The underlying cursor sharing problem needs to be investigated - always.
            If you have cursor sharing issues you may set this parameter higher so that not every invalidation causes a dump, then investigate and solve the issue, and finally switch the parameter to 0 once the issue is taken care of.
            Please be aware that switching the parameter to 0 will lead to a lack of diagnostic information in case of cursor invalidations.


      • _use_single_log_writer
      • memory_target
        • What it does?
        • Problem:
          • Unexpectedly failing database upgrades with settings of memory_target < 1GB, where equal settings of sga_target and pga_aggregate_target didn't cause issues
          • It prevents the important use of HugePages
        • Solution:
          • Avoid memory_target by any chance
          • Better use sga_target and pga_aggregate_target instead (a sketch follows below)


      • pga_aggregate_limit

      Essential MOS Notes for Oracle Database 12.1.0.2

      --Mike

