Thursday Jul 07, 2016

Global Temporary Tables - Change in Oracle 12c

A few weeks back I was copied on an email conversation about an important change with Global Temporary Tables (GTT) in Oracle Database 12c.

Something you need to be aware of when using GTTs in Oracle Database 12.1.0.2:
Prior to this release, statistics on GTTs were SHARED between different sessions. Since Oracle Database 12c this is different by default: statistics on GTTs are visible to the SESSION only.
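
If you need the pre-12c behavior back, the DBMS_STATS preference GLOBAL_TEMP_TABLE_STATS can be switched per table - a minimal sketch, assuming a GTT called APP.MY_GTT (hypothetical name):

SQL> exec DBMS_STATS.SET_TABLE_PREFS('APP','MY_GTT','GLOBAL_TEMP_TABLE_STATS','SHARED');

Setting the preference back to SESSION restores the 12c default.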

This can of course be changed - and there are pros and cons. But as I'm not an optimizer expert I'm glad that my colleague Nigel Bayliss, our Optimizer Product Manager, has blogged about it. If you use this functionality please read all the details about this change here:

Thanks Nigel!

--Mike


PS: Original post said: Oracle Database 12.1.0.2 but I misread it - and thanks for the hint - it got introduced with Oracle Database 12.1.0.1 :-)

Monday Jul 04, 2016

Full Transportable Export/Import - Things to Know

This blog post is an addition to:

Seth Miller commented on the pitfall of hitting a serious issue during the Data Pump run - Data Pump exiting and not finishing - which forces you to do the entire backup/restore/incremental-roll-forward exercise again. Without any doubt, this is no fun at all.

So let me point out a few things to take into consideration - and some of them are not obvious, I guess.

Do you need to backup/restore/incremental-roll-forward again?

When you let Data Pump do all the manual tasks of Transportable Tablespaces (TTS), this is called Full Transportable Export/Import (FTEX, as I abbreviate it). With regular TTS you will take your tablespaces read-write in your destination at the end of the entire process. So whenever something goes wrong during the metadata rebuild you won't have to restore all your tens of terabytes.

With FTEX things are a bit different. Data Pump will take the tablespaces read-write several times during the impdp run. This has to happen. And the job is not restartable right now. This may change in a future release or patch set. We'll see.

What are your options now to prevent a full repeat of all your backup/restore/incremental-roll-forward activities?

FLASHBACK DATABASE unfortunately is not an option: you could set a Guaranteed Restore Point, but as soon as the data file headers have been adjusted, FLASHBACK won't be able to revert them.

Setting the data files to read-only on the OS level is not an option either, as you might force Data Pump to fail when it wants to write something into a tablespace and gets an OS return code for not being able to complete the write operation.

Therefore right now the only valid option I can think of is leveraging storage snapshot technologies in order to restore the files to their state from before Data Pump ran.

We are aware of this pitfall and are discussing alternatives internally for the future.

Character Sets and Time Zone?

Another topic to pay close attention to is the database's character set and national character set. Please find all the guidelines about character sets and national character sets here:

For time zone files please make sure you have the same time zone version in source and destination. As you can't downgrade your time zone file in the destination, you'll usually have to apply the matching time zone patch in the source before initiating the transport. If you don't pay attention, Data Pump will deny the metadata import of the data in the tablespaces, as otherwise you might end up with corrupted time zone data.
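
A quick sanity check upfront doesn't hurt - this query displays the time zone file version in use, and is worth running on both source and destination:

SQL> select filename, version from v$timezone_file;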

Please find the precise description in the Oracle 12c documentation: 


Tablespace Encryption

When your tablespaces are encrypted you'll have to be a bit more careful when using TTS or FTEX. Even though the FAQ on Oracle.com suggests that there are no issues, there are some limitations. The documentation is more precise:

  • Limitations on Transportable Tablespace
    • Transportable tablespaces cannot transport encrypted tablespaces.
    • Transportable tablespaces cannot transport tablespaces containing tables with encrypted columns.
  • Limitations on Full Transportable Export/Import
    • You cannot transport an encrypted tablespace to a platform with different endianness.
      To transport an encrypted tablespace to a platform with the same endianness, during export set the ENCRYPTION_PWD_PROMPT export utility parameter to YES, or use the ENCRYPTION_PASSWORD export utility parameter. During import, use the equivalent import utility parameter, and set the value to the same password that was used for the export.
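
A hedged sketch of such an export for a same-endianness move - the directory and file names are made up for illustration:

expdp system full=y transportable=always encryption_pwd_prompt=yes directory=dp_dir dumpfile=ftex_enc.dmp logfile=ftex_enc.log

Data Pump will then prompt for the encryption password instead of exposing it on the command line; use the same password again on import.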
--Mike

Monday Jun 27, 2016

Full Transportable Export/Import - Migrating an 11.2.0.4 database to Oracle Database 12c - into the Oracle Cloud

Full Transportable Export/Import - one of the coolest features in Oracle Database 12c 

We blogged about Full Transportable Export/Import a while back. It is - no doubt - one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands On Lab - Upgrade, Migrate, Consolidate to Oracle Database 12c) as well.

It utilizes the technique of Transportable Tablespaces - cross-platform, cross-Endianness, cross-version - but lets Oracle Data Pump do all the "dirty" work of rebuilding everything kept in your SYSTEM and SYSAUX tablespace including views, synonyms, public objects, grants, sequences etc.

You can even combine it with RMAN Incremental Backups - of course cross-platform and cross-Endianness as well - in order to decrease the downtime for large databases.
Please find our presentation about the combination of these features in the Slides Download Center to the right:

In addition: You can use this Oracle Database 12c feature with source databases Oracle 11.2.0.3 or 11.2.0.4. Only the destination database you are upgrading or migrating to must be at least an Oracle 12.1.0.1 database (or newer of course).
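
A minimal sketch of such a migration over a database link - the link name, directory and file paths are made up for illustration. On the 12c destination you'd run impdp directly against the 11.2.0.4 source:

impdp system network_link=source11g_link full=y transportable=always version=12 transport_datafiles='/u02/oradata/mypdb/users01.dbf' metrics=y logfile=dp_dir:ftex_imp.log

The VERSION=12 parameter is required because the source is an 11.2 database being exported in full transportable mode.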

See the feature in action

Roy recorded a 20 minute video demonstrating how to:

  • Migrate an Oracle Database 11.2.0.4 database into a Multitenant pluggable database using
  • Full Transportable Export/Import
  • into the Oracle DBaaS Cloud

And of course the feature also works without migrating into a Container database environment - and without migrating to the Oracle DBaaS Cloud.

If you need to migrate a database from Oracle 11.2.0.3 or newer to Oracle Database 12c, Full Transportable Export/Import can become your best friend.

--Mike

Monday Jun 20, 2016

Minor Upgrade? Going from 11.2.0.1 to 11.2.0.4?

My Belgian friend Phillipe Fierens raised a great question on Twitter last week and dropped me an email again after having a discussion with his client:

For Phillipe and myself the answer is pretty clear and straight forward:

There is no minor upgrade anymore since every (patch set) release is a full release, and since new parameters, parameter values and even new features appear in patch sets.

But the following discussion on Twitter, with comments from all sides, made me think about why people would honestly declare going, for instance, from 11.2.0.1 to 11.2.0.3 a minor upgrade, whereas going to 12.1.0.2 is seen as a major upgrade.

Let me summarize why I completely disagree - and actually Dom Giles nailed it:

  • Since Oracle Database 11.2.0.1 we deliver patch sets as a full release
  • A patch set can contain not only new parameters or parameter values but may occasionally also contain new features (typical examples in Oracle 11.2.0.4 are the new value for optimizer_dynamic_sampling=11 or the DBMS_REDACT package)
  • Therefore you will have to do exactly the same amount of testing, regardless of going to Oracle Database 11.2.0.4 vs Oracle Database 12.1.0.2 - there is ZERO difference in the tests, the time, the effort, the manpower ...

You don't believe me? Then please check MOS Note:1962125.1 (Oracle Database - Overview of Database Patch Delivery Methods). Scroll down a bit to Testing Recommendations By Patch Type and see the rightmost column of this table headlined "Patch Set Release":

I hope this clarifies it all.

There is no "minor" patch set upgrade anymore. Even though I would love to tell you that going from 11.2.0.3 to 11.2.0.4 is just a minor hop it will require exactly the same testing and evaluation work then going to Oracle Database 12.1.0.2.

But going to Oracle Database 12.1.0.2 will mean that you are under Premier Support until end of June 2018 - whereas Free Extended Support for Oracle Database 11.2.0.4 will end in May 2017.

--Mike


Thursday Jun 16, 2016

EM 13c - How to Upgrade from EM Cloud Control 12c

I'm not a Cloud Control expert but I use the tool from time to time - and most of my customers are heavy users of it, especially in larger deployments.

If you use Oracle Enterprise Manager Cloud Control 12c (12.1.0.3, 12.1.0.4 or 12.1.0.5) and would like to evaluate your options to upgrade to Oracle Enterprise Manager Cloud Control 13c then please consult the following very useful documentation:

If you are searching for the software please find it here:

Just be aware:
As of July 15, 2016 I received several emails and comments by customers about issues with the migration to OEM 13c, e.g.:

So please check with Oracle Support first - and test the migration before doing it on an actual live system.

--Mike

Monday May 30, 2016

DMU - Tips and Tricks - Migration Assistant for Unicode


Please find previous posts about the Data Migration Assistant for Unicode (DMU) here:


Since Roy and I subscribed to the Hot Topics support email, almost every other day we find a very helpful note neither of us was aware of.

One of these recent finds is:

For those who have never heard of the DMU before: you'll find a very brief overview of this tiny little - and very helpful - tool for database character set migrations (not only to Unicode) here:

Why is this note so helpful?

Simply because it combines the most important topics about DMU in one note - from getting started to troubleshooting issues. And this is really helpful when you plan to use the tool.

It is structured into:

  • A) Things to check before using the DMU
  • B) FAQ and common issues
  • C) Problems starting the DMU or during scan
  • D) Typical errors such as "invalid binary conversion" and others
  • E) Post conversion phase
  • F) Steps to do afterwards
  • plus some extras

--Mike

Wednesday May 25, 2016

Transportable Tablespaces - Characters Sets - Same same but different?

All credits go to Don Wolf, an Oracle Advanced Customer Support engineer from Ohio as he dug out this information :-) Thanks Don!

Do database character sets have to match EXACTLY for Transportable Tablespaces?

That sounds like a simple question. When you look into our big slide deck the answer will be a straight "Yes". No doubts. Regardless of whether you would like to do Transportable Tablespaces or Full Transportable Export/Import, your source's and your target's database character sets must be equal. Otherwise Data Pump won't allow you to process the metadata import.

But Don was wondering about slightly differing information in MOS notes and the documentation, and asked me if I could clarify things. I couldn't.

  • MOS Note:1454872.1
    Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable

    tells you:
  •  And the above documentation link then tells you:
    • The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
      • The database character sets of the source and the target databases are the same.
      • The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
        • The source database is in version 10.1.0.3 or higher.
        • The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
        • The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
      • The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
        • The source database is in a version lower than 10.1.0.3.
        • The maximum character width is the same in the source and target database character sets.
  • And furthermore from "Choosing a Character Set" section of Oracle 12.1  Database Globalization Support Guide:
    • Oracle Database does not maintain a list of all subset-superset pairs but it does maintain a list of binary subset-superset pairs that it recognizes in various situations such as checking compatibility of a transportable tablespace or a pluggable database. Table A-11 and Table A-12 list the binary subset-superset pairs recognized by Oracle Database.
    • In those tables the only binary subset-superset pairs involving AL32UTF8 are:
      • UTF8/AL32UTF8
      • US7ASCII/AL32UTF8
  • This is not 100% congruent to the information provided in
    MOS Note 371556.1 - How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN
    which says:
    "The source and target database must use the same character set and national character set."

What is correct, what's not?

First of all, the wording "compatible character sets" seems to have been introduced with Oracle Database 11.2.

In fact the scope for the target system has become broader in Oracle Database 11.2. The rules below are correct as soon as your target database is an Oracle Database 11.2 or newer release.

  • The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
    • The database character sets of the source and the target databases are the same.
    • The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
      • The source database is in version 10.1.0.3 or higher.
      • The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
      • The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
    • The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
      • The source database is in a version lower than 10.1.0.3.
      • The maximum character width is the same in the source and target database character sets.

--Mike

Thursday Apr 28, 2016

Incremental Statistics Collection in Oracle 12.1.0.2 - A True Story

Recently I came across a really interesting customer case in the UK dealing with Incremental Statistics Collection issues in the context of an upgrade to Oracle Database 12.1.0.2.

This is the follow-up blog post to: 

A while back I blogged already about Incremental Statistics collection in Oracle Database 12.1.0.2: 

And you'll find more information in our documentation and in posts by our optimizer folks:  

The Basics

The idea of Incremental Statistics Collection is simply to save time and resources when gathering statistics for partitioned tables in order to update the global stats. In Oracle Database 12c we added the very important features of: 

  • Incremental stats working now with partition exchange as well
  • "Changed" partitions won't be eligible for new stats generation until a certain stale percentage (default: 10%) has been reached - this has to be enabled and can be tweaked
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('INCREMENTAL_STALENESS','USE_STALE_PERCENT');
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('STALE_PERCENT','12');

Furthermore we always recommend to:  

  • Not enable incremental stats collection globally but only for specific tables - see the sketch after this list. Otherwise the footprint of the synopses on disk can grow fairly large. The biggest footprint I've seen so far was almost 1TB in size in the SYSAUX tablespace
  • Enable it mostly for range-partitioned tables where only a few partitions undergo DML changes 
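
Per-table enabling could look like this - a minimal sketch, assuming a range-partitioned table APP.SALES (hypothetical names):

SQL> exec DBMS_STATS.SET_TABLE_PREFS('APP','SALES','INCREMENTAL','TRUE');
SQL> select dbms_stats.get_prefs('INCREMENTAL','APP','SALES') from dual;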

The Case

Actually the synopsis table in this particular case contained "only" 300GB of data. And as the starting point was already Oracle Database 11.2.0.3, just a change from Range-Hash to List-Hash Partitioning would happen. As this happens via metadata swapping, the impact should be small.

But the issue coming up after the upgrade didn't have to do with this change in partitioning layout.

Issue No.1

During the maintenance window the incremental stats job did not finish and actually this statement caused plenty of trouble:

delete from sys.wri$_optstat_synopsis$ where bo# = :tobjn and group# in (select * from table(:groups))

Not completing this statement within the 4 hours of the default maintenance window led to a rollback of the delete - and its rollback alone took 14 hours. It turned out that the delete has to happen (and complete) before the regathering of stats could start. 

I did recommend:
patch 21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2  (see also MOS Note:2041541.1 - Gather_Database_Stats_Job_Proc Taking More Time in 12.1.0.2 Than 11.2.0.4)

and the customer requested:
Patch 22893653: MERGE REQUEST ON TOP OF DATABASE PSU 12.1.0.2.160119 FOR BUGS 19450139 20807398
on top of their January 2016 PSU - the merge included the patch I mentioned.

Besides that another issue got discovered.

Issue No.2

The daily purge of statistics didn't really work on the large synopsis table, as the default degree of parallelism introduced with Oracle 12c gets derived from the number of blocks of the synopsis table - a bigger table means a higher parallel degree for the purge. It ended up with a PARALLEL hint of 60 - and that was counterproductive. Once a purge got started manually in serial mode or with a low DOP it completed in less than 1 minute.

With a trace similar to:

set serveroutput on;
EXEC dbms_output.enable(999999999);
-- switch DBMS_STATS tracing on (flag 1: write trace to dbms_output, flag 4: trace table stats gathering)
EXEC dbms_stats.set_global_prefs('trace',1+4);
EXEC dbms_stats.gather_table_stats(ownname=>'&TABLE_OWNER',tabname=>'&TABLE_NAME');
-- switch DBMS_STATS tracing off again
EXEC dbms_stats.set_global_prefs('trace',0);

the issues could be identified as: 

Bug 21258096 - UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS

The customer requested another merge patch 22926433 which contains the following fixes:

19450139: KN:LNX:PERFORMANCE ISSUE WHEN RUNNING GATHER TABLE STATS WITH INCREMENTAL STATS
20807398: ORA-00600 [KGL-HASH-COLLISION] WITH FIX TO BUG 20465582
21258096: UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS
21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2

Finally the customer agreed with Support's recommendation to truncate the two synopsis tables, WRI$_OPTSTAT_SYNOPSIS_HEAD$ and WRI$_OPTSTAT_SYNOPSIS$, and to regather incremental statistics the following weekend. Of course they validated this action plan on their performance testing environment first - with the merge patch applied - and it had the desired effect and solved the issue.

Incremental statistics gathering now works as expected, and the job fits into the maintenance window.

Lessons Learned

Actually Oracle Support released a very helpful and important note just a few weeks ago (too late for this customer): 

It contains not only links to the patches for the issues the customer hit here - but also a long list for Oracle 11.2.0.4 as well. 

Another MOS Note is worth mentioning here:

But these were not all the issues the customer faced - so I may write another blog post in addition within the next few days.

--Mike

PS. All credits go to David Butler and Rob Dawley - thanks for your hard work, sorry for all the inconvenience - and especially thanks for writing it all together and forwarding it to me!!!

Wednesday Apr 27, 2016

Incremental Statistics Collection in Oracle 12.1.0.2 - Upgrade Pitfalls

A while back I blogged already about Incremental Statistics collection in Oracle Database 12.1.0.2: 

And you'll find more information in our documentation and in posts by our optimizer folks:  

And you may read about a related real-world customer example in this follow-up blog post ...

Database Upgrade

It is important to know that during a database upgrade the underlying tables containing the synopses for incremental stats collection may be reorganized. And depending on the amount of data this can take a while.

The largest synopsis tables I have seen so far were almost 1TB in size at a financial customer in Europe. But I have seen ranges around 300GB quite often in the past months.

What happens during the database upgrade? 

Incremental Statistics Collection got introduced with Oracle Database 11.1.0.6 and improved from release to release. But during a database upgrade a reorganization of the synopsis table can happen.

  • Upgrade from Oracle 11.1.0.6/7 or 11.2.0.1 to Oracle 11.2.0.2/3/4:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ to use range-hash partitioning
    • Most data movement will happen here
    • As two synopsis tables exist for the interim period, this will consume twice the space of the synopsis table during the movement

  • Upgrade from Oracle 11.2.0.2/3/4 to Oracle 12.1.0.x:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ from range-hash partitioning to list-hash partitioning
    • There is little data movement in this case as the move happens with the help of metadata swapping

Which symptoms may you see?

Actually very simple and obvious symptoms:
Phase 1 of the parallel upgrade to Oracle Database 12c takes unusually long. It usually completes within a few minutes - but in these cases it can take literally hours.

If that happens, check your catupgrd0.log and watch out for the long running statements. It does not necessarily mean that it happens because of a huge synopsis table. For instance one of my German reference customers, DVAG, had leftovers in SYSAUX because of bugs in earlier releases they had worked with.

But if you spot such results (quoting a colleague here):

"The table WRI$_OPTSTAT_SYNOPSIS$ has 20420 partitions, 344618 subpartitions and 921207 MB size. [..] This transformation step lasts for 6,5 hours, so the whole upgrade process duration has an important impact from this step." 

then you should be alerted. 

How can you check this upfront? 

We haven't included a check into the preupgrd.sql yet. But the following queries will tell you whether you may see issues, namely when you get a larger number as a result:

  • How many tables have incremental stats on?
    SQL> select count(distinct bo#) from sys.wri$_optstat_synopsis_head$;
  • How many partitions does your WRI$_OPTSTAT_SYNOPSIS$ have?
    SQL> select partition_count from dba_part_tables where table_name='WRI$_OPTSTAT_SYNOPSIS$';
  • How large is your synopsis table?
    SQL> select sum(bytes/(1024*1024)) "MB" from dba_segments where segment_name='WRI$_OPTSTAT_SYNOPSIS$';
  • Which tables have incremental stats ON?
    SQL> select u.name "OWNER" ,o.name "TABLE_NAME" ,p.valchar
    from  sys.OPTSTAT_USER_PREFS$ p
    inner join sys.obj$ o on p.obj#=o.obj#
    inner join sys.user$ u on o.owner#=u.user#
    where p.PNAME = 'INCREMENTAL';

  • Synopsis for tables which don't exist anymore?
    SQL> select distinct h.bo# from sys.wri$_optstat_synopsis_head$ h where not exists (select 1 from sys.tab$ t where t.obj# = h.bo#);

Especially a large number of monitored tables and a size of tens or hundreds of GB indicate that you may have to plan for a longer upgrade duration.

How do you cure this?

Support sometimes recommends looking at MOS Note:1055547.1 - SYSAUX Grows Because Optimizer Stats History is Not Purged - and asks for a manual purge of stats, for instance:

begin
  for i in reverse 1..31
  loop
    dbms_stats.purge_stats(sysdate-i);
  end loop;
end;
/

But this won't clean up the synopsis tables - it only purges history for object statistics. And it may create some noise in your UNDO. So in any case you may be better off setting your stats retention policy to something such as 10 days instead of the default of 31 days.
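
Checking and changing the retention could look like this:

SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(10);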

First of all, you have to make sure that this patch got applied to your target before the upgrade - it adds parallel index capabilities which will speed up the rebuild a lot:

Be aware:
Truncating WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ is strictly not recommended. If you plan to do it the hard way, please check back with Oracle Support for their approval first.

Further Information?

Please read on here about a real world customer example ... 

 


--Mike

Tuesday Apr 26, 2016

MOS Note 1454618.1: Quick Reference to Database PSUs, CPUs, BPs and Patchsets

Sometimes my mouse arm gets tired by clicking myself through MOS notes just to download a specific PSU or BP - and as I experiment a lot with PSUs and BPs right now I clicked a lot in the past days and nights. 

Usually I'd start with MOS Note:161818.1 - then click on the release link (e.g. 12.1.0.x) in the left-most column, then into the Availability and Known Issues note (e.g. MOS Note:1683799.1 for Oracle Database 12.1.0.2.0), and then select the most recent patch from the list of Current Recommended Patches.

Even though we all agree that you should ideally always be on the most recent BP (or at least PSU), there may be situations where you need to access an older PSU or BP, or just the CPU.
So what if you need for instance the October 2015 PSU?
This is obviously not linked from the above note.

My usual click routine guides me via the Critical Patch Updates and Security Alerts page, from there via the particular release date (e.g. October 2015) to the Database Server Version (e.g. linked from here is MOS Note:2037108.1) and then to the patch number I'm searching for.

Just learned a few days ago by accident that there's a MOS Note which I have on my favorites since then:

containing all this information - even for 8.1.7.4 patches if you still need them ;-)

After listing the base release first followed by the available patch sets it offers you the links to the PSUs, CPUs and BPs - and if you are looking for the patches containing the OJVM you'll find them by scrolling down towards the end of the note as well in a separate section. 

*** Don't get puzzled by the note's title - it lists only the Bundle Patches for Windows, not the general link to all BPs. A colleague and I have already left feedback for the note owner to add the BP (or Proactive Bundle Patch) links as well ***


In fact a very useful MOS Note.

--Mike

Wednesday Apr 20, 2016

Data Pump - Exclude Stats differently for TTS and FTEX

A nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export/Import (credits to Roy and Dean Gagne).

Transport Statistics via a Staging Table

First of all, we always recommend excluding statistics from a Data Pump export, as importing stats takes way longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases, please see the nice tutorial in the Oracle Performance Tuning Guide.

The basic steps to transport statistics from one database to another quickly and efficiently consist of (see the sketch after this list):

  1. Create a staging table in your source database with DBMS_STATS.CREATE_STAT_TABLE
  2. Export your local stats into this staging table by using DBMS_STATS.EXPORT_SCHEMA_STATS
  3. Export the staging table and import it into your destination database with Data Pump
  4. Import the statistics held in the staging table by using DBMS_STATS.IMPORT_SCHEMA_STATS
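
In SQL this could look like the following sketch - the schema and staging table names are made up for illustration:

SQL> exec DBMS_STATS.CREATE_STAT_TABLE('APP','STATS_STAGE');
SQL> exec DBMS_STATS.EXPORT_SCHEMA_STATS('APP','STATS_STAGE');
-- transport table APP.STATS_STAGE with expdp/impdp, then in the destination:
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS('APP','STATS_STAGE');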

For the regular Data Pump exports we therefore always recommend setting:

EXCLUDE=STATISTICS

to avoid performance penalties during impdp.

But this does not affect Transportable Tablespaces and Full Transportable Export/Import.

How to exclude Statistics for TTS and FTEX?

For reasons I don't know, the metadata heterogeneous object for "transportable" is different from all the others. Therefore, in order to exclude statistics for Transportable Tablespaces and Full Transportable Export/Import, you must set:

EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS

Thursday Apr 07, 2016

Collaborate16 - See you on Monday!


Time flies.

 I already started packing stuff for COLLABORATE16 - and I hope to see you in Las Vegas from April 10-14, 2016 :-)

These are the sessions I'll present: 

And if you'd like to discuss your topics in more detail feel free to visit me at the:

Oracle Booth #1053
Exhibit Hall - Bayside C/D, Level 1 – Mandalay Bay South Convention Center

  • Wednesday, April 13
    • 10:15 a.m. - 11:15 a.m.
    • 12:15 p.m. - 1:15 p.m.

     CU soon!

    --Mike

    Tuesday Mar 01, 2016

    Differences between Automatic Statistics Gathering job and GATHER_SCHEMA_STATS

    Recently a customer raised a question whether there are differences between the Automatic Statistics Gathering job and a manual creation of stats via the GATHER_SCHEMA_STATS procedure.

    The performance results were quite interesting: performance after an upgrade from Oracle Database 11.2.0.3 to Oracle Database 11.2.0.4 was not good when the automatic stats job got used. But performance changed significantly for the better when schema stats were gathered - with the downside of taking more resources during the gathering.

    Is the Automatic Stats Gathering job enabled?

    That question can be answered quite easily. There's a very good MOS Note:1233203.1 - FAQ: Automatic Statistics Collection displaying this query:

    SELECT CLIENT_NAME, STATUS FROM DBA_AUTOTASK_CLIENT WHERE CLIENT_NAME='auto optimizer stats collection';

    The MOS Note also has the code to enable (or disable) the job.

    Which parameters/settings are used?

    That question is a bit more tricky as the Note says: "The automatic statistics-gathering job uses the default parameter values for the DBMS_STATS procedures". But how do I display them?

    The following script will display the parameters being used during the Automatic Statistics Gathering:

    SET ECHO OFF
    SET TERMOUT ON
    SET SERVEROUTPUT ON
    SET TIMING OFF
    DECLARE
       v1  varchar2(100);
       v2  varchar2(100);
       v3  varchar2(100);
       v4  varchar2(100);
       v5  varchar2(100);
       v6  varchar2(100);
       v7  varchar2(100);
       v8  varchar2(100);
       v9  varchar2(100);
       v10 varchar2(100);        
    BEGIN
       dbms_output.put_line('Automatic Stats Gathering Job - Parameters');
       dbms_output.put_line('==========================================');
       v1 := dbms_stats.get_prefs('AUTOSTATS_TARGET');
       dbms_output.put_line(' AUTOSTATS_TARGET:  ' || v1);
       v2 := dbms_stats.get_prefs('CASCADE');
       dbms_output.put_line(' CASCADE:           ' || v2);
       v3 := dbms_stats.get_prefs('DEGREE');
       dbms_output.put_line(' DEGREE:            ' || v3);
       v4 := dbms_stats.get_prefs('ESTIMATE_PERCENT');
       dbms_output.put_line(' ESTIMATE_PERCENT:  ' || v4);
       v5 := dbms_stats.get_prefs('METHOD_OPT');
       dbms_output.put_line(' METHOD_OPT:        ' || v5);
       v6 := dbms_stats.get_prefs('NO_INVALIDATE');
       dbms_output.put_line(' NO_INVALIDATE:     ' || v6);
       v7 := dbms_stats.get_prefs('GRANULARITY');
       dbms_output.put_line(' GRANULARITY:       ' || v7);
       v8 := dbms_stats.get_prefs('PUBLISH');
       dbms_output.put_line(' PUBLISH:           ' || v8);
       v9 := dbms_stats.get_prefs('INCREMENTAL');
       dbms_output.put_line(' INCREMENTAL:       ' || v9);
       v10:= dbms_stats.get_prefs('STALE_PERCENT');
       dbms_output.put_line(' STALE_PERCENT:     ' || v10);
    END;
    /

    The settings of the DBMS_STATS.GATHER_SCHEMA_STATS procedure are documented:
    https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68577 

    When you compare the two you'll see that the settings/defaults are identical. 

    But what is the difference between these two?

    Both activities use the same parameters. So the stats will look the same - IF they get created. The real difference between the Automatic Statistics Gathering job and a manual invocation of GATHER_SCHEMA_STATS is that the latter will refresh ALL statistics whereas the Automatic Statistics Gathering job will refresh only statistics on objects where statistics are missing or marked as STALE.

    The same behavior appears when you compare the recommendation to gather dictionary statistics before the upgrade by using DBMS_STATS.GATHER_DICTIONARY_STATS versus a DBMS_STATS.GATHER_SCHEMA_STATS('SYS') call. The latter will refresh all statistics, whereas the first one will take fewer resources but refresh only STALE and missing statistics.

    A simple example

    This script is kept as simple as possible.

    • It creates a test user
    • It creates two tables within this user - tablespace USERS
    • It inserts and updates information in the two tables
    • It flushes the monitoring information (how many DMLs got run?) out
    • It gathers stats on only one table to verify that STALE is working as intended
    • It kicks off the automatic stats gathering job
    • It kicks off the schema stats gathering call
    • It compares results before/after in the stats history table 

    set timing on
    set serverout on
    set echo on
    set termout on
    column table_name Format a5
    column owner      Format a6
    column stale_stats Format a4
    column last_analyzed Format a15
    column sample_size format 9999999
    drop user test1 cascade;
    create user test1 identified by test1;
    grant connect, resource, dba to test1;
    alter user test1 default tablespace USERS;
    create table TEST1.TAB1 as select * from dba_objects where rownum<50001;
    exec dbms_stats.gather_table_stats('TEST1','TAB1');
    create table TEST1.TAB2 as select * from dba_objects where rownum<50001;
    exec dbms_stats.gather_table_stats('TEST1','TAB2');
    insert into TEST1.TAB1 select * from dba_objects where rownum<50001;
    commit;
    insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
    commit;
    insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
    commit;
    update TEST1.TAB1 set object_id=object_id+0;
    commit;
    update TEST1.TAB2 set object_id=object_id+1;
    commit;
    exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    exec DBMS_STATS.GATHER_TABLE_STATS('TEST1','TAB1');
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    exec DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS;
    pause Wait a bit - then press return ...
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    exec dbms_stats.gather_schema_stats('TEST1');
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    prompt End ...


    The results

    exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  YES  29-FEB 22:37:07       50000
    TAB2  TEST1  YES  29-FEB 22:37:07       50000

    exec DBMS_STATS.GATHER_TABLE_STATS('TEST1','TAB1');

    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  NO   29-FEB 22:37:12      100000
    TAB2  TEST1  YES  29-FEB 22:37:07       50000

    exec DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS;

    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  NO   29-FEB 22:37:12      100000
    TAB2  TEST1  NO   29-FEB 22:37:13      150000

    exec dbms_stats.gather_schema_stats('TEST1');

    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  NO   29-FEB 22:37:43      100000
    TAB2  TEST1  NO   29-FEB 22:37:43      150000

    The results can be interpreted this way:

    • The sample size of 50k is based on the first activity during the CTAS
    • Once table TAB1 gets analyzed the sample size is now correct - and the time stamp got updated - statistics on TAB2 are still marked STALE of course as the underlying table has changed by more than 10%
    • The Automatic Statistics Gathering job will refresh only stats for objects where stats are missing or marked STALE - in this example here TAB2. Table TAB1's statistics remain unchanged.
    • When the GATHER_SCHEMA_STATS job gets invoked it will refresh all statistics - regardless if they were STALE or not. 

    This is the behavior the customer who raised the question about differences in these two ways to create statistics may have seen. The GATHER_SCHEMA_STATS job took longer and consumed more resources as it will refresh all statistics regardless of the STALE attribute.

    And it's hard to figure out why the refresh of statistics created in a previous release may have led to suboptimal performance, especially as we talk about a patch set upgrade - and not a full release upgrade. Thanks to Wissem El Khlifi who tweeted the following annotations I forgot to mention:

    • The Automatic Statistics Gathering job prioritizes objects with NO statistics over objects with STALE statistics
    • The Automatic Statistics Gathering job may get interrupted or skip objects leaving them with NO statistics gathered. You can force this by locking statistics - so the Auto job will skip those completely

    You'll find more information about the Automatic Statistics Gathering job here:

    And another strange finding ...

    When I played with this example in 12c I encountered the strange behavior of the GATHER_OPTIMIZER_STATS call taking exactly 10 minutes until it returns to the command prompt.

    First I thought this was a Multitenant-only issue. But I realized quickly: this happens in non-CDB databases in Oracle 12c as well. And when searching the bug database I came across the following unpublished bug:

    • Bug 14840737
      DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS RETURNS INCORRECTLY

    which got logged in Oct 2012 and describes this exact behavior. I kick off the job - it updates the stats pretty soon after - but it still takes 10 minutes to return control to the command prompt. It is supposed to be fixed in a future release of Oracle Database ...

     

    --Mike 

    Monday Jan 11, 2016

    New Webcast "Why Upgrade to Oracle 12c" available

    In December 2015 I did a recording for a webcast about "Why Upgrade to Oracle Database 12c" refreshing the content from a few months back.

    Webcast "Why Upgrade to Oracle Database 12c"

    Oracle's latest generation of database technology, Oracle Database 12.1.0.2, has some impressive new features. It offers great potential for a fast upgrade, simple migrations and consolidation, making more efficient use of hardware and delivering major improvements in management efficiency.

    Join our webcast on Upgrading to Oracle Database 12c and you will learn:

    • Why you still need to upgrade to Oracle Database 12.1.0.2
    • How to ensure that your applications are ready for Oracle Database 12c
    • How to evaluate and test all enhancements of the upgrade process plus other new features
    • Best practices to upgrade and migrate successfully 

    If you would like to watch it please access the webcast via this link: 

    And please excuse the suboptimal audio quality - there were some technical difficulties to work around.

    --Mike 

    Tuesday Dec 01, 2015

    UKOUG Tech 2015 - 3 Talks - Upgrades & Consolidation


    UKOUG Tech 2015 Conference will begin on Monday, Dec 7, 2015. But there will be some activities on Sunday as well.  

    It will be my first time at UKOUG's conference, which takes place at the ICC in Birmingham, UK.

    Looking forward to meeting great customers, experts and colleagues.

    I'm happy to be invited to deliver 3 talks:

    Hope to meet you there!

    --Mike

    Monday Nov 23, 2015

    Network ACLs and Database Upgrade to Oracle 12c

    What has been changed in Oracle Database 12c with Network ACLs?

    Starting from 12c, network access control in the Oracle database is implemented using Real Application Security access control lists (ACLs). Existing 11g network ACLs in XDB will be migrated. Existing procedures and functions of the DBMS_NETWORK_ACL_ADMIN PL/SQL package and catalog views have been deprecated and replaced with new equivalents.

    In 12c, a network privilege can be granted by appending an access control entry (ACE) to a host ACL using DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE.  If you append an ACE to a host that has no existing host ACL, a new host ACL will be created implicitly. If the host ACL already exists, the ACE will be appended to the existing ACL.

    (both paragraphs taken from MOS Note: 2078710.1)
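
    A sketch of granting such a privilege with the 12c API - the host name and database user are made up for illustration:

    BEGIN
      DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host => 'mail.example.com',
        ace  => xs$ace_type(privilege_list => xs$name_list('resolve','connect'),
                            principal_name => 'APP_USER',
                            principal_type => xs_acl.ptype_db));
    END;
    /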

    What happens during/after upgrade?

    • Existing network ACLs will be migrated from XDB in Oracle 11g to Real Application Security in Oracle 12c.
      All privileges of the existing ACLs will be preserved
    • Existing ACLs will be renamed
    • Mapping between the old / new names is reflected in DBA_ACL_NAME_MAP.


    Issues before/during Database Upgrade?

    First of all, the current preupgrd.sql does not warn you correctly if such ACLs exist. This fix is getting added to preupgrd.sql - you'll need to download the most recent version from MOS Note 884522.1. The one from January 2015 does not have it yet, but this is addressed and will be implemented soon.

    Here's an issue which happened to one of my very experienced colleagues from Oracle Consulting in an upgrade project:

    "Customer had network ACLs defined and Privileges (resolve,connect) granted for several hosts to several DB users in 11.2.0.3.

    With the first DB, we observed the ACL renaming as you described it, but, much worse: 4 out of 9 privileges granted were completely gone away after the upgrade performed by DBUA (to 12.1.0.2.4). We then were able to evaluate the missing privileges and re-grant them again. Warned by that, for the next databases to be upgraded, we copied all the 11.2.0.3 content of the DBA_NETWORK_ACLS and DBA_NETWORK_ACL_PRIVILEGES to helper tables in order to be able to restore lost privileges (which was a good idea, as in one of the databases, only 87 out of 240 formerly existing privileges survived the upgrade)." 

    Solution?

    Check for existing Network ACLs before the upgrade or get the most recent preupgrd.sql once it contains the check.

    Preserve the existing network ACLs and privileges (DBA_NETWORK_ACLS and DBA_NETWORK_ACL_PRIVILEGES) in an intermediate staging table so that you have the possibility to restore them afterwards in case the automatic migration fails or does not happen.
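
    Such a staging copy is quickly created before the upgrade - a minimal sketch, with made-up backup table names:

    SQL> create table system.acl_backup as select * from dba_network_acls;
    SQL> create table system.acl_priv_backup as select * from dba_network_acl_privileges;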

    If you encounter a situation where your Network ACLs don't get migrated correctly, disappear and/or don't exist in the mapping table DBA_ACL_NAME_MAP afterwards please open an SR and let Oracle Support check. There are known issues with mappings and migrations not done correctly (find some bugs below) so needs to be verified if you have hit a known issue or encountered a new one.

    More Information?

    --Mike


    Friday Oct 23, 2015

    OOW 2015 - Upgrade and Migration Talks

    Oracle Open World 2015
    will kick off in a day and a bit.
    And still some work to do ;-)

    A few things I would like to mention:

    • Our four HOLs are all overbooked already with many people on the waiting lists. You can always come by and wait in line at the Nikko Hotel's Golden Gate lab room as some people don't show up. But no guarantee. What I would recommend to you in case you are interested in the lab but didn't get a seat:

      Come to our talk instead on Monday at 1:30pm as Roy and I will demonstrate LIVE and UNCENSORED parts of the lab. Then you'll download it from the blog (find it in the Slides Download Center) and try it out by yourself afterwards: Hands On Lab Upgrade, Migrate, Consolidate to 12c

      Upgrade and Migrate to Oracle Database 12c: Live and Uncensored! [CON6777]
      Mike Dietrich, Master Product Manager, Oracle
      Roy Swonger, Sr Director, Software Development, Oracle

      Monday, Oct 26, 1:30 p.m. | Moscone South—102
    • Our 2nd talk will happen on Wednesday at 12:15pm (skip the unhealthy lunch).
      We'll contrast some nasty things with some very good and detailed customer examples included in it. Real world examples, no artificial lab exercises. Again uncensored (mostly ;-) ) and just from first hand experience.

      How to Upgrade Hundreds or Thousands of Databases in a Reasonable Amount of Time [CON8375]
      Mike Dietrich, Master Product Manager, Oracle
      Roy Swonger, Sr Director, Software Development, Oracle

      Wednesday, Oct 28, 12:15 p.m. | Moscone South—102
    • The 3rd talk of our group is the Data Pump Performance Tips and Tricks talk delivered by Data Pump experts from Development.

      Deep Dive: More Oracle Data Pump Performance Tips and Tricks [CON8376]
      Dean Gagne, Consulting Member of Technical Staff, Oracle
      Jim Stenoish, Senior Director, Software Development, Oracle

      Thursday, Oct 29, 9:30 a.m. | Moscone South—305
    • And the 4th talk I'd like to highlight is the Database Upgrade 12c - Oracle Support talk from Support Architect Roderick Manalac.

      Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support [CON8664]
      Roderick Manalac, Architect - Database Support, Oracle
      Thursday, Oct 29, 12:00 p.m. | Moscone South—305


    CU soon :-)

    --Mike

    Wednesday Sep 16, 2015

    Script: Is your database ready for Oracle GoldenGate?

    Oracle GoldenGate can be a good addition to a lot of upgrade and migration projects to decrease the downtime to a bare minimum, or even zero in some cases.

    But before you consider Oracle GoldenGate as THE solution to decrease your downtime, you should evaluate whether your database is ready for OGG. For this purpose you can download scripts from My Oracle Support (MOS) to check exactly this.

    Complete Database Profile OGG readiness check for Classic Extract 

    MOS Note:1298562.1 - Oracle GoldenGate database Complete Database Profile check script for Oracle DB (All Schemas) Classic Extract

    This script is intended to query all of the non-default database users to identify the current configuration and identify any unsupported data types or types that may need special considerations for Oracle GoldenGate in an Oracle environment.

    Check OGG readiness for Schema Only

    MOS Note:1296168.1 - Oracle GoldenGate database Schema Profile check script for Oracle DB

    This script is intended to query the database by schema to identify current configuration and identify any unsupported data types or types that may need special considerations for Oracle GoldenGate in an Oracle environment. 

    Difference and Execution 

    The main difference between the two scripts is the amount of data being processed. The Schema Script is more targeted and therefore should return fewer items that need additional checking.

    Log into sqlplus as sysdba and run the script:

    SQL> @full-DB_CheckOracle_15092015.sql

    SQL> @full-schemaCheckOracle_15092015.sql

    Review the output.

    --Mike

    Monday Sep 14, 2015

    SQL Plan Management - Known Issues in Oracle 12.1.0.2

    Our Support colleagues released the patch recommendation note for SQL Plan Management (SPM) for Oracle Database 12.1.0.2.

     


    In the unlikely event you'd like to upgrade to Oracle Database 11.2.0.4 or (very very unlikely hopefully) 11.2.0.3 please see these matching notes:

     

    Some additional things to mention:

    • SPM is an Oracle Enterprise Edition feature at no extra cost
    • SPM is THE feature to ensure plan stability tackling changes such as (of course) upgrades and migrations
    • SPM has been improved a lot internally in Oracle Database 12.1.0.2:
      • We now store entire plans instead of a large accumulation of hints in the SQL Management Base (SMB) in SYSAUX tablespace
      • The "Evolve" task does happen automatically (SYS_AUTO_SPM_EVOLVE_TASK) as part of the Automatic SQL Tuning Task 
    • You should always adjust the retention when starting to play with SQL Plan Management as the default retention of 53 weeks may lead to a too large LOB segment in SYSAUX tablespace (and LOB segments never shrink)
      • SQL> exec DBMS_SPM.CONFIGURE('plan_retention_weeks',5);
    • See the Oracle Database 12c documentation about SPM:
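
    You can verify the currently configured SMB parameters, including the plan retention, with this query:

    SQL> select parameter_name, parameter_value from dba_sql_management_config;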

    --Mike

     

    Friday Sep 04, 2015

    Oracle non-CDB architecture may be desupported after Oracle Database 12.2

    You may wonder about the headline of this blog post.
    Haven't I blogged about this a long time ago?

    Yes, I did. On January 22, 2015 - over 7 months ago:
    https://blogs.oracle.com/UPGRADE/entry/non_cdb_architecture_of_oracle

    But in the past weeks this message has been refreshed and made a bit more precise (which I'd guess everybody appreciates): now we are clearly saying that such a change won't happen with Oracle 12.2. Before, we said only "in a later release".

    See the Oracle Database 12c Upgrade Guide

    Deprecation Note non-CDB architecture

    In case you'd like to explore the world of (at least) Oracle Single Tenant (no extra license required, available even with SE2) - which I'd highly recommend - you'll find some useful advice in this slide deck:

  • How Oracle Single Tenant will change a DBA's life 

    --Mike 

    Thursday Jul 30, 2015

    SQL Monitoring - Limitation at 300 lines per statement

    One of the best parts of my job at Oracle:
    I still learn something new every day.

    Yesterday I learned from my colleague from Oracle Switzerland, Thomas Teske, that SQL Real Time Monitoring has an embedded default limitation on the number of lines in the statement: if the limit (default: 300 lines) is exceeded, the statement won't be monitored. We both work with a leading Swiss company and we wanted to monitor a complex plan.

    Now you may think: Who the heck has statements longer than 300 lines?
    Well ... sometimes that is beyond your influence as in this particular case this is of course done by the application.

    Solution:

    SQL> alter system set "_sqlmon_max_planlines"=800 scope=both;

    or set in your spfile:

    _sqlmon_max_planlines=800

    This limitation is described in:

    MOS Note:1613163.1
    How to Monitor SQL Statements with Large Plans Using Real-Time SQL Monitoring?

    If you'd like to read a bit more about SQL Real Time Monitoring please follow one of these links - and be aware that it's part of the Tuning Pack license and VERY helpful in many everyday situations. You'll have to have STATISTICS_LEVEL set to either TYPICAL (the default) or ALL, and CONTROL_MANAGEMENT_PACK_ACCESS='DIAGNOSTIC+TUNING' (the default as well).
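
    A quick way to verify both prerequisites on the database you want to monitor:

    SQL> show parameter statistics_level
    SQL> show parameter control_management_pack_access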

     

    --Mike 

    Things to consider BEFORE upgrading to Oracle 12.1.0.2 to AVOID poor performance and wrong results

    Finally it got published on the My Oracle Support (MOS) portal - one of the most helpful notes when we prep for an upgrade or a migration - and potentially the one with the longest title in MOS:

    MOS Note: 2034610.1
    Things to consider BEFORE upgrading to Oracle Database 12.1.0.2
    to AVOID poor performance and wrong results 

    Avoid Poor Performance and Wrong Results when upgrading to 12.1.0.2

    Just in case you are NOT upgrading/migrating to Oracle Database 12.1.0.2 there are three other helpful notes for earlier releases available as well:

    Document 1320966.1
    Things to Consider Before Upgrading to 11.2.0.2
    to Avoid Poor Performance or Wrong Results

    Document 1392633.1
    Things to Consider Before Upgrading to 11.2.0.3
    to Avoid Poor Performance or Wrong Results

    Document 1645862.1
    Things to Consider Before Upgrading to 11.2.0.4
    to Avoid Poor Performance or Wrong Results

    --Mike 

    Wednesday Jul 29, 2015

    ORAchk - How to log SRs and ERs for ORAchk

    I have recently recommended using ORAchk in its new version 12.1.0.4 - and I can only emphasize this again.
    https://blogs.oracle.com/UPGRADE/entry/orachk_12_1_0_4

    During a conf call with a lead customer from the UK, Roy and I learned about the uncertainty about how to log a Service Request (SR) or Enhancement Request (ER) against ORAchk in case something is not working correctly or is missing.

    Especially as the documentation of ORAchk states:

    Appendix B - How to Obtain Support

    If problems are encountered either at runtime or if there are questions about the content of the findings of the tool, please post your issues/questions/concerns to the ORAchk thread within the ORAchk Thread of the Scalability RAC My Oracle Support Community.

    Roy explored this topic in the past days, and this is the outcome:

    • Of course you can file SRs via MOS (and ERs as well).
      If a Support Engineer told you that the product is only supported via the Community Forum this is not correct. Just insist :-)
    • There's an official product ID (10655), component (ORACHK) and subcomponent (FRAMEWORK) for filing ORAchk bugs and enhancements internally in our bug database. But of course a customer will need to file an SR first via MOS.

    We hope this helps :-)

    --Mike 

    Monday May 11, 2015

    New version of the BIG 12c SLIDE DECK available

    I'm not on vacation right now. I'm just very busy traveling between customer onsite visits, customer meetings and workshops, internal and external workshops, testing sessions ... and so on ... 

    A lot of stuff has stacked up in my inbox regarding changes and changed behavior in Oracle 12c - thanks for your input. I will blog about it as soon as I have understood the issue and the solution - so plenty of stuff should come in the next weeks ;-)

    Just end of last week Roy and I uploaded a new version of our BIG slide deck - now having even more slides (but still not over 550) *haha* ;-)

    Find them here - and as always - the change log at the end of the deck will explain what we've recently adjusted/added/altered.

    Upgrade, Migrate and Consolidate to Oracle Database 12c

    --Mike 

    Thursday Mar 19, 2015

    Migration of an EM Repository cross-platform?

    Can you migrate your EM Cloud Control Repository to another OS platform? Cross-platform and cross-Endianness?

    This question sounds so incredibly simple that you won't even start thinking I guess. Same for ourselves. Use Data Pump. Or Transportable Tablespaces. Or Full Transportable Export/Import if your source is at least 11.2.0.3 or newer.

    But sometimes in life things seem to be simple, but as soon as you unmask them you'll find a whole bunch of issues. It's a fact that the repository of EM Cloud Control is quite complicated, and it uses plenty of database technologies.

    Actually all credits go to Roy here as he has worked with the EM group for the past 6 months on this topic.

    You can migrate an EM Cloud Control Repository cross-platform, but not cross-Endianness (e.g. HP-UX to OL, big to little Endianness). The latter is scheduled to be supported in EM 13.2.

     

    Summary:

    As EM Cloud Control Repository migration is possible right now only within the same Endianness group, you should decide carefully where you store your EM Cloud Control Repository.

     --Mike

     
