Wednesday Apr 20, 2016

Data Pump - Exclude Stats differently for TTS and FTEX

Nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export-Import (credits to Roy and Dean Gagne).
.

Transport Statistics via a Staging Table

First of all, we always recommend excluding statistics when doing a Data Pump export, as importing stats takes way longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases, please see the Oracle Performance Tuning Guide, which includes a nice tutorial.

The basic steps to transport statistics from one database to another quickly and efficiently are: 

  1. Create a staging table in your source database with DBMS_STATS.CREATE_STAT_TABLE
  2. Export your local stats into this staging table by using DBMS_STATS.EXPORT_SCHEMA_STATS
  3. Export the staging table and import it into your destination database with Data Pump
  4. Import the statistics held in the staging table by using DBMS_STATS.IMPORT_SCHEMA_STATS
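As a sketch, the four steps could look like this (the schema name, staging table name and Data Pump details are just examples, not from the original post):

```sql
-- On the source database: stage the HR schema statistics
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'HR', stattab => 'STATS_STAGE');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'STATS_STAGE');
END;
/

-- Move the staging table with Data Pump, e.g.:
--   expdp hr DIRECTORY=dp_dir TABLES=HR.STATS_STAGE DUMPFILE=stats.dmp
--   impdp hr DIRECTORY=dp_dir DUMPFILE=stats.dmp    (on the destination)

-- On the destination database: import the staged statistics
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'STATS_STAGE');
END;
/
```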

For regular Data Pump exports we always recommend setting:

EXCLUDE=STATISTICS

to avoid performance penalties during impdp.

But this does not affect Transportable Tablespaces and Full Transportable Export/Import.
.

How to exclude Statistics for TTS and FTEX?

For reasons I don't know, the metadata heterogeneous object for "transportable" is different from all of the others. Therefore, in order to exclude statistics for Transportable Tablespaces and Full Transportable Export/Import you must set:

EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS

Tuesday Apr 19, 2016

RMAN Catalog Upgrade fails - ORA-02296 - error creating modify_ts_pdbinc_key_not_null

This issue got raised to me by a customer I have known for quite a while - all credits go to Andy Kielhorn for digging down into the issue and solving it. 
.

Failed RMAN Catalog Upgrade from 11.2.0.4 to 12.1.0.2

The RMAN catalog upgrade:

SQL> @?/rdbms/admin/dbmsrmansys.sql

$ rman CATALOG rman/xxx@rman01

RMAN> UPGRADE CATALOG; 

RMAN> UPGRADE CATALOG;

failed with the following sequence of error messages: 

error creating modify_ts_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_tsatt_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_df_pdbinc_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_tf_pdb_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

error creating modify_bs_pdb_key_not_null
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-02296: cannot enable (RMAN.) - null values found

Andy also found these bugs in MOS - but they contain no helpful information:

  • Bug 20861957
ORA-2296 DURING UPGRADE CATALOG TO 12.1.0.1 IN AN 11.2 DATABASE
  • Bug 19677999
    CATALOG SCHEMA UPGRADE TO 12.1.0.2 FAILED
.

The Solution

There seems to be a potential inconsistency in the RMAN catalog when the PDB/CDB mechanisms got introduced. This does not necessarily happen - but it can happen.

The workaround is described in:

  • Bug 19677999 : CATALOG SCHEMA UPGRADE TO 12.1.0.2 FAILED

==> Connect to catalog schema and clear the table having null details 

delete bdf where not exists (select 1 from dbinc where dbinc.dbinc_key = bdf.dbinc_key);
delete bcf where not exists (select 1 from dbinc where dbinc.dbinc_key = bcf.dbinc_key);
commit;

==> After clearing the offending rows, the catalog upgrade worked

But please use this workaround only under Oracle Support's supervision. I did document it here to ease your verification.

Andy fixed it with:

update <rmancat_owner>.dbinc set PARENT_DBINC_KEY=NULL where (DBINC_KEY) IN (SELECT DBINC_KEY from <rmancat_owner>.ts where pdbinc_key is null);
commit;

but please open an SR and point Oracle Support to the bug and the potential workarounds in case you hit the issue.
.

--Mike

Monday Apr 18, 2016

Patching does not work - Journey to the Cloud VI

DBaaS Oracle Cloud

What happened so far on my Journey to the Cloud?


Patching in the Cloud

I would like to patch my Oracle DBaaS Cloud today. It was so simple a few weeks ago. But I haven't patched it yet up to the January 2016 PSU (12.1.0.2.160119PSU) - shame on me :-(

Please don't be scared by the weird mix of German and English - I filed a bug for this months ago ... and maybe one magic sunny day I can switch the language back to pure English ...

Anyhow, I chose my VM and it highlighted an available patch to apply, the January 2016 PSU:


.

Let's do the PreCheck 

I chose PRECHECK first from the drop-down hamburger menu to the right and received the following message: 

But where are the logs? There is no link, no hint, nothing.
.

Let's check the README 

At this point I decided to check the README to find out whether I missed something important. So I clicked on README and received this meaningful message:

Potentially not the fault of the Cloud folks, as I realized myself that the MOS facility to link patches directly has been broken for weeks. A manual search in MOS to locate the README is necessary - but it gave me no indication regarding the above failed precheck.
.

Force to apply the PSU 

Next step: FORCE to apply the patch (2nd option in the hamburger menu). But again the result is not positive, showing me once more a weird mix of Denglish (Deutsch and English).

.

Result?

Well, that is somewhat unexpected. After consulting our documentation and googling around, I checked with some people who are way more familiar with our DBaaS Cloud than I am - and learned that this functionality has been broken for several weeks.

No further comment necessary.
.

--Mike

Thursday Apr 07, 2016

Collaborate16 - See you on Monday!

Collaborate Conference 2016

Time flies.

 I already started packing stuff for COLLABORATE16 - and I hope to see you in Las Vegas from April 10-14, 2016 :-)

These are the sessions I'll present: 

And if you'd like to discuss your topics in more detail feel free to visit me at the:

Oracle Booth #1053
Exhibit Hall - Bayside C/D, Level 1 – Mandalay Bay South Convention Center

  • Wednesday, April 13
    • 10:15 a.m. - 11:15 a.m. 
    • 12:15 p.m. - 1:15 p.m.

     CU soon!

    --Mike
    .

    Thursday Mar 31, 2016

    DROP PLUGGABLE DATABASE - things you need to know

    Directly after my DOAG (German Oracle User Group) Conference presentation about "How Single-/Multitenant will change a DBA's life" Martin Bach (Enkitec) approached me and told me about his experiences with the DROP PLUGGABLE DATABASE command and future recoverability.

    Martin discovered that once you issued the DROP PLUGGABLE DATABASE command you can't reuse a previously taken backup of this particular PDB anymore and recover the PDB into this existing CDB. I wasn't aware of this and I'm glad that Martin told me about it.

    Actually only the meta information in the controlfile or the RMAN catalog will be deleted. But archive logs and backups still persist.

    See also my blog post from Jan 9, 2015:
    Recent News about Pluggable Databases - Oracle Multitenant

    This is the error message you'll see when you try to recover a dropped pluggable database:

    RMAN> restore pluggable database pdb2drop;

    Starting restore at 01-JUN-15
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 06/01/2015 10:10:40
    RMAN-06813: could not translate pluggable database pdb2drop

    Just a few weeks ago a colleague from German presales asked me if I knew a more convenient way to restore a PDB - once the magic command (DROP PLUGGABLE DATABASE) has been issued - than recovering it into an auxiliary container database and unplugging/plugging it. I didn't.

    But Nik (thanks!!!) told me that he pushed a MOS Note to be published explaining how to work around this issue: 

    MOS Note: 2034953.1
    How to Restore Dropped PDB in Multitenant

    In brief this MOS Note describes how to:

    • Create an auxiliary container database
    • Recover the backup (yes, you will need a backup of your container database) including this particular PDB
    • Unplug the PDB after recovery has been finished and plug it back into the original CDB
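In sketch form (the PDB name is taken from the example above, file locations are my own), the workaround boils down to something like this - the full procedure is in MOS Note 2034953.1:

```sql
-- 1. Create an auxiliary CDB (e.g. with DBCA) and restore/recover the
--    original CDB's backup into it (RMAN steps omitted here).

-- 2. In the auxiliary CDB: unplug the recovered PDB
ALTER PLUGGABLE DATABASE pdb2drop CLOSE;
ALTER PLUGGABLE DATABASE pdb2drop UNPLUG INTO '/tmp/pdb2drop.xml';

-- 3. In the original CDB: plug it back in and open it
CREATE PLUGGABLE DATABASE pdb2drop USING '/tmp/pdb2drop.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb2drop OPEN;
```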

    Now some will say: Hey, that's simple and obvious. For me it wasn't ;-) That's why I write about it to remind myself of this workaround ...

    --Mike
    .



    Wednesday Mar 30, 2016

    DBUA and Read-Only Tablespaces - Things to Know II

    Related Blog Posts:



    Thanks to Rodolfo Baselli commenting on a previous blog post about the DBUA and Read-Only Tablespaces, I dug a bit deeper and found out that "assuming silently" does not mean "works as intended".

    But one piece after another.

    Rodolfo commented that if he tells the DBUA to switch all data tablespaces into read-only mode for the duration of the upgrade, it will still be him who has to create the correct backup - the DBUA won't do it.

    This is the option in the Oracle Database 12.1.0.2 DBUA (Database Upgrade Assistant):

    DBUA - Tablespaces in Read Only Mode

    I silently assumed that the DBUA would choose the correct backup strategy automatically when it offers (enabled by default) to create an Offline Backup on the Recovery Options screen a while later in the same dialog.

    Backup Strategy DBUA

    But in fact it doesn't.

    When you choose the default, "Create a New Offline RMAN Backup", it will create a full offline RMAN backup to the desired location - but not a partial offline backup as intended by the optional trigger to have the tablespaces in read-only mode during the upgrade, which would allow a fast and simple restore without the need for a recovery. Please note that I would generally recommend this option only in cases where the database is in noarchivelog mode on purpose, or where the RTO (Recovery Time Objective) can be met only by restoring a partial offline backup.

    What are your options now?

    If you switch on the Read-Only option on purpose, you'll have to choose "I have my own backup and restore strategy" and do the partial offline backup by yourself - before you start the DBUA.
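Such a manual partial offline backup could be sketched like this (tablespace names and the backup location are placeholders; the idea is that with the application tablespaces read-only, only SYSTEM, SYSAUX, UNDO and the controlfile need to be backed up):

```sql
-- SQL*Plus: switch the application tablespaces to read-only
ALTER TABLESPACE users READ ONLY;

-- RMAN: back up only the read-write part of the database
BACKUP AS COPY TABLESPACE SYSTEM, SYSAUX, UNDOTBS1 FORMAT '/backup/%U';
BACKUP CURRENT CONTROLFILE FORMAT '/backup/ctl_%U';
```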

    Personally, I'd consider this option as not useful when used within the DBUA right now. We have discussed this internally; it may work correctly in a future patch set or in the upcoming next release of the database - therefore no offense to anybody. It's just important to know that at the moment you'll have to do the partial offline backup by yourself.
    .

    --Mike

    Tuesday Mar 29, 2016

    Disable Transparent Hugepages on SLES11, RHEL6, RHEL7, OL6, OL7 and UEK2 Kernels

    This blog post is not related to database upgrades and migrations. But still I think it is very useful for many customers operating on modern Linux systems.

    Recommendation 

    Support just published an ALERT strongly recommending disabling Transparent Hugepages on Linux systems. And the information below does not apply to RAC systems only, but also to single instance environments.
    .

    Which Linux Distributions/Kernels are affected? 

    • SLES11
    • RHEL6 and RHEL7
    • OL6 and OL7
    • UEK2 Kernels
      .

    What are the Issues? 

    I'm quoting MOS Note: 1557478.1 (ALERT: Disable Transparent HugePages on SLES11, RHEL6, RHEL7, OL6, OL7 and UEK2 Kernels):

    Because Transparent HugePages are known to cause unexpected node reboots and performance problems with RAC, Oracle strongly advises to disable the use of Transparent HugePages. In addition, Transparent Hugepages may cause problems even in a single-instance database environment with unexpected performance problems or delays. As such, Oracle recommends disabling Transparent HugePages on all Database servers running Oracle.

    This ONLY applies to the new feature Transparent HugePages, Oracle highly recommends the use of standard HugePages that were recommended for previous releases of Linux.  See MOS Note:361323.1 for additional information on HugePages. 

    As far as I can see, you'll have to reboot the server in order to disable Transparent Hugepages - the default is usually ALWAYS.
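For illustration only (the sysfs path is standard on the affected kernels; the grub configuration file location varies by distribution - please follow MOS Note 1557478.1 for the supported procedure):

```shell
# Check the current setting - "[never]" in brackets means THP is disabled:
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable persistently by appending the kernel boot parameter
#   transparent_hugepage=never
# to the kernel line in the grub configuration, then reboot the server.
```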
    .

    More Information?

     --Mike

    Wednesday Mar 23, 2016

    What does DEPRECATED mean? And DESUPPORTED?

    There's sometimes a misunderstanding about what we mean by the term DEPRECATED. And what is the difference from DESUPPORTED? Actually there's a full chapter in the Database Upgrade Guide listing deprecated and desupported features.

    Deprecated

    This message in particular, stating that the non-CDB architecture is deprecated in Oracle Database 12c, puzzled a lot of customers.


    In the Database Upgrade Guide we clearly explain what deprecated means:

    "By deprecate, we mean that the feature is no longer being enhanced but is still supported"

    So for you it just means: be aware that we don't further develop or enhance something. But you are still fully supported when using this feature.

    Another well-known example is Oracle Streams, which is fully supported in Oracle Database 12c - but not in Oracle Multitenant - and is deprecated and therefore not enhanced or developed any further. 

    Or, to name another example, Oracle Restart, which has been deprecated for some time - but is still not desupported. And I know a good bunch of customers using it in production, even with Oracle Database 12.1.0.2 on several hundred databases.
    .

    Desupported

    Even if something is desupported - remember the Rule Based Optimizer? - you can still use the feature. But at your own risk, as we don't fix any bugs or issues anymore. 

    Again the Database Upgrade Guide clarifies the term:

    "By desupported, we mean that Oracle will no longer fix bugs related to that feature and may remove the code altogether"

    Other common examples in Oracle Database 12c are Enterprise Manager Database Control, which simply does not exist anymore in Oracle Database 12c, or the desupport of Raw Devices.
    .

    Summary

    Deprecated is a signal that something may disappear in the future and does not get enhanced anymore. No activity is required except taking note for your future plans. Desupported means that we don't fix anything anymore for a desupported feature or product - and it may even disappear. But often desupported features are still there and can be used at your own risk only. 
    .

    --Mike
    .


    Tuesday Mar 22, 2016

    GC Freelist Session Waits causing slowness and hangs

    Best Practice Hint

    One of the best things in my job:
    I learn from you folks out there. Every day. 

    Credits here go to Maciej Tokar who did explain the below topic to me via LinkedIn - thanks a lot, Maciej! 
    .

    Locks are not being closed fast enough, resulting in gc freelist waits

    You can find a reference for Global Cache Freelist in the Oracle Documentation. This issue can lead to the database being slow, up to complete hangs. Based on my research it looks as if the issue is not related to RAC only but is a general thing. In your session waits you'll spot this:

    Event                               Event Class        % Event   Sessions
    ----------------------------------- --------------- ---------- ----------
    gc freelist                         Cluster              41.37       8.61

    This has been logged as bugs 21352465 (public) and 18228629 (not public). It causes locks not being closed fast enough, resulting in gc freelist waits. In addition, the default for _gc_element_percent seems to be too low at 120 (or 110 in 11.2.0.4).

    Actually the issue can affect not only Oracle Database 12.1.0.2 but also Oracle Database 11.2.0.3 and 11.2.0.4.

    See MOS Note:2055409.1 (Database Hangs with High "gc freelist" wait ) for further details.
    .

    Solution

    • Apply the patch for bug 18228629 on top of a PSU or BP where available
      • See the drop-down list to the right labeled "Release" to access the correct patch for your release
      • Unlike what the above MOS Note states, in Oracle Database 12.1.0.2 it is only available on top of the January 2016 PSU and BP and two other Exadata BPs - and on Linux only!
        .
    • Use the workaround and set _gc_element_percent = 200
      • This will require an instance restart as the parameter can't be changed dynamically:
        alter system set "_gc_element_percent"=200 scope=spfile;
        .
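To check the current value of the underscore parameter before and after the restart, a query against the X$ views can be used (a sketch; it requires a SYS connection):

```sql
SELECT a.ksppinm  AS parameter,
       c.ksppstvl AS value,
       c.ksppstdf AS is_default
  FROM x$ksppi a, x$ksppcv c
 WHERE a.indx = c.indx
   AND a.ksppinm = '_gc_element_percent';
```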

    Epilogue

    We've had a lot of discussions about underscore parameters in the past weeks. And I'm not a big fan of underscores, especially when it comes to upgrades, as experience has shown that having underscores set one day may make it hard to remove them later - and underscores can significantly impact the upgrade duration in a negative way.

    But on the other hand, if an issue is seriously affecting many customers, and there's no patch available for your platform and environment right now, what else can one do?
    .

    --Mike

    .

    Wednesday Mar 09, 2016

    OUGN Conference - On the boat again

    OUGN Spring Conference 2016

    Last year influenza took me down and out just a couple of days before my planned departure for the famous OUGN Spring Conference. But this year (so far) I'm still happy and healthy and on my way towards beautiful Oslo. I'm really looking forward to this year's OUGN Spring Conference which will happen again on the boat departing from Oslo and sailing over to Kiel - and then returning back.

    In case you plan to visit my talks and demos:

    • Thursday, 10-March-2016 - 14:00-14:45h - Parliament 1+2
      How Oracle Single Tenant will change a DBA's life
      .
    • Friday, 11-March-2016 - 10:30-11:15h - Parliament 1+2
      Oracle Database Upgrade: Live and Uncensored
      .

    Looking forward to this wonderful event with so many good talks and presentations and such a great group of people. And thanks to the organizers of OUGN!

    --Mike

    .

    Tuesday Mar 08, 2016

    Parameter Recommendations for Oracle Database 12c - Part II


    Best Practice Hint

    Time for a new round of Parameter Recommendations for Oracle Database 12.1.0.2. The focus of this blog post is on very well known parameters with interesting behavior. This can be a behavior change or simply something we'd like to point out. And even if you still work with Oracle Database 11g, some of the recommendations below may apply to your environment as well.

    Preface

    Again, please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

    We strongly recommend Real Application Testing, especially the SQL Performance Analyzer but also Database Replay to verify the effect of any of those parameters. 
    .

    Known Parameters - Interesting Behavior

    • parallel_min_servers
      • What it does?
      • Default:
        • CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2
      • Behavior Change:
        • Setting it to a value below the default will let the database ignore it.
        • In Oracle Database 11g the default was 0
        • Compare 11.2.0.4 vs 12.1.0.2 on the same box:
          • 11g:
            SQL> show parameter parallel_min_servers
            NAME                  TYPE     VALUE
            --------------------- -------- ------
            parallel_min_servers  integer  0

          • 12c:
            SQL> show parameter parallel_min_servers
            NAME                  TYPE     VALUE
            --------------------- -------- ------
            parallel_min_servers  integer  8
      • Explanation:

    • job_queue_processes
      • What it does?
        • See the Oracle Documentation - value specifies the maximum number of job slaves to be created to execute jobs started by either DBMS_JOBS or DBMS_SCHEDULER
      • Default:
        • 1000
      • Recommendation:
        • Set it to a rough equivalent of 2 * CPU cores
      • Explanation:
        • In Oracle Database 12c we introduced the automatic stats gathering during CTAS and IAS (into an empty table only) operations. This can potentially lead to too many jobs doing the stats gathering. Furthermore issues can happen due to the default of concurrent stats gathering.
          Therefore a limitation of this parameter seems to be a good idea. 
        • Be aware when switching it to 0 - this will block all recompilation attempts. Furthermore, generally no jobs can be executed anymore with DBMS_JOBS or DBMS_SCHEDULER.
        • Multitenant behavior change:
          In 12.1.0.1, job_queue_processes was a Container Database (CDB) modifiable parameter (i.e. at a global level). However, in 12.1.0.2, the job_queue_processes parameter is not CDB modifiable; instead it is PDB modifiable, which means each PDB can have its own job_queue_processes value.  
      • More Information:
      • Annotation:
        I've had an email exchange with Stefan Köhler about the stats behavior for CTAS. As I couldn't reproduce the behavior myself which we saw at two customers with job_queue_processes=1000 and heavy CTAS activity (and which could be remedied by setting JQP to a lower value), I would put a question mark behind my above statement.

        .
        .
    • recyclebin
      • What it does?
        • See the Oracle Documentation - controls whether the Flashback Drop capability is turned on or off. If the parameter is set to OFF, then dropped tables do not go into the recycle bin. If this parameter is set to ON, then dropped tables go into the recycle bin and can be recovered.
      • Default:
        • ON
      • Recommendation:
        • If the recyclebin is ON (the default) in your environment then empty it at least once per week. Create a default job in all your environments emptying the recycle bin every Sunday morning at 3am for instance:
          SQL> purge DBA_RECYCLEBIN;
      • Explanation:
        • The recycle bin has been on by default in every database since Oracle 10g. The danger is that it may not get emptied, while especially on developer databases many objects may be created and dropped again. As a result the dropped objects and their dependents stay in the database until the space needs to be reclaimed. That means they exist in the data dictionary as well, for instance in TAB$. Their names now start with "BIN$..." instead of "EMP" - but they will blow up your dictionary. And not emptying it often enough may introduce a performance dip to your system, as the cleanup of many objects can be quite resource intensive.
        • Check your current recycle bins:
          SQL > SHOW RECYCLEBIN;
          ORIGINAL NAME RECYCLEBIN NAME              OBJECT TYPE DROP TIME
          ------------- ---------------------------- ----------- -------------------
          TEST_RBIN     BIN$2e51YTaSK8TL/mPy+FuA==$0 TABLE       2010-05-27:15:23:45
          TEST_RBIN     BIN$5dF60S3GSEOSSYREaqCg==$0 TABLE       2010-05-27:15:23:43
          TEST_RBIN     BIN$JHCDN9YwQRXjXGOJcCIg==$0 TABLE       2010-05-27:15:23:42
      • More Information:
    .
    .
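The weekly purge suggested above could be scheduled like this (a sketch; the job name and the exact schedule are my own choices):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_RECYCLEBIN_WEEKLY',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN EXECUTE IMMEDIATE ''PURGE DBA_RECYCLEBIN''; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
    enabled         => TRUE);
END;
/
```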

    • deferred_segment_creation
      • What it does?
        • See the Oracle Documentation - set to the default (TRUE), then segments for tables and their dependent objects (LOBs, indexes) will not be created until the first row is inserted into the table
      • Default:
        • TRUE
      • Recommendation:
        • Set it to FALSE unless you plan to create a larger number of tables/indexes knowing that you won't populate many of them.
      • Explanation/Risk:
        • If my understanding is correct, this parameter got introduced with Oracle Database 11.2 in order to save space when applications such as EBS, Siebel or SAP create tons of tables and indexes which may never get used, as you don't work with the matching module of the software
        • The risk can be that certain queries check DBA_SEGMENTS and/or DBA_EXTENTS - and if there's no segment allocated you won't find an indication of the object's existence in there, although it actually exists. Furthermore we have seen issues with Data Pump workers getting contention, and some other things. 
      • More Information:
        • The documentation has now become pretty conservative as well, since Oracle 11.2.0.4, and I'll second that:
          "Before creating a set of tables, if it is known that a significant number of them will not be populated, then consider setting this parameter to true. This saves disk space and minimizes install time."
          ..
     --Mike

    Friday Mar 04, 2016

    Parameter Recommendations for Oracle Database 12c - Part I

    Best Practice Hint

    A few weeks ago we published some parameter recommendations, including several underscores, but based on an internal discussion (still ongoing) we decided to remove that entry and split up the tasks. The optimizer team will take over parts of it, and I'll post an update as soon as something is published.

    .

    Preface

    Please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

    We strongly recommend SQL Performance Analyzer to verify the effect of any of those parameters. 
    .

    How to read this blog post?

    Never ever blindly set any underscore or hidden parameters because "somebody said" or "somebody wrote on a blog" (including this blog!) or "because our country has the best tuning experts worldwide" ... Only trust Oracle Support if it's written into a MOS Note or an official Oracle White Paper or if you work with a particular support or consulting engineer for quite a long time who understands your environment.
    .

    Important Parameter Settings

      • _kks_obsolete_dump_threshold
        • What it does?
          • Introduced in Oracle 12.1.0.2 as an enhancement  to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times. 
        • Problem:
          • Trace files can grow like giant mushrooms due to cursor invalidations
        • Solution:
        • Patches:
          • Fix included in DBBP 12.1.0.2.160216
          • Fix on-top of 12.1.0.2.13DBEngSysandDBIM
          • Since Feb 13, 2016 there's a one-off available but on Linux only - and only on top of a fresh 12.1.0.2 
        • Remarks:
          • The underlying cursor sharing problem needs to be investigated - always.
            If you have cursor sharing issues, you may set this parameter higher so that not every invalidation causes a dump; then investigate and solve the issue, and finally switch the parameter to 0 once the issue is taken care of. 
            Please be aware that switching the parameter to 0 will lead to a lack of diagnostic information in case of cursor invalidations.


      • _use_single_log_writer
      • memory_target
        • What it does?
        • Problem:
          • Unexpected failing database upgrades with settings of memory_target < 1GB, where equal settings of sga_target and pga_aggregate_target didn't cause issues 
          • It prevents the important use of HugePages
        • Solution:
          • Avoid memory_target whenever possible
          • Better use sga_target and pga_aggregate_target instead
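For example (the sizes below are placeholders, not recommendations - pick values matching your workload):

```sql
ALTER SYSTEM RESET memory_target SCOPE=SPFILE SID='*';
ALTER SYSTEM SET sga_target=8G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target=2G SCOPE=SPFILE;
-- restart the instance afterwards so the spfile changes take effect
```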


      • pga_aggregate_limit

      Essential MOS Notes for Oracle Database 12.1.0.2

      --Mike
      .

      Tuesday Mar 01, 2016

      Differences between Automatic Statistics Gathering job and GATHER_SCHEMA_STATS

      Recently a customer raised a question whether there are differences between the Automatic Statistics Gathering job and a manual creation of stats via the GATHER_SCHEMA_STATS procedure.

      The results in performance were quite interesting. Performance after an upgrade from Oracle Database 11.2.0.3 to Oracle Database 11.2.0.4 was not good when the automatic stats job got used. But performance changed significantly for the better when schema stats were created - with the downside of taking more resources during the gathering.

      Is the Automatic Stats Gathering job enabled?

      That question can be answered quite easily. There's a very good MOS Note:1233203.1 - FAQ: Automatic Statistics Collection displaying this query:

      SELECT CLIENT_NAME, STATUS FROM DBA_AUTOTASK_CLIENT WHERE CLIENT_NAME='auto optimizer stats collection';

      The MOS Note has also the code to enable (or disable) the job.
      .

      Which parameters/settings are used?

      That question is a bit more tricky as the Note says: "The automatic statistics-gathering job uses the default parameter values for the DBMS_STATS procedures". But how do I display them?

      The following script will display the parameters being used during the Automatic Statistics Gathering:

      SET ECHO OFF
      SET TERMOUT ON
      SET SERVEROUTPUT ON
      SET TIMING OFF
      DECLARE
         v1  varchar2(100);
         v2  varchar2(100);
         v3  varchar2(100);
         v4  varchar2(100);
         v5  varchar2(100);
         v6  varchar2(100);
         v7  varchar2(100);
         v8  varchar2(100);
         v9  varchar2(100);
         v10 varchar2(100);        
      BEGIN
         dbms_output.put_line('Automatic Stats Gathering Job - Parameters');
         dbms_output.put_line('==========================================');
         v1 := dbms_stats.get_prefs('AUTOSTATS_TARGET');
         dbms_output.put_line(' AUTOSTATS_TARGET:  ' || v1);
         v2 := dbms_stats.get_prefs('CASCADE');
         dbms_output.put_line(' CASCADE:           ' || v2);
         v3 := dbms_stats.get_prefs('DEGREE');
         dbms_output.put_line(' DEGREE:            ' || v3);
         v4 := dbms_stats.get_prefs('ESTIMATE_PERCENT');
         dbms_output.put_line(' ESTIMATE_PERCENT:  ' || v4);
         v5 := dbms_stats.get_prefs('METHOD_OPT');
         dbms_output.put_line(' METHOD_OPT:        ' || v5);
         v6 := dbms_stats.get_prefs('NO_INVALIDATE');
         dbms_output.put_line(' NO_INVALIDATE:     ' || v6);
         v7 := dbms_stats.get_prefs('GRANULARITY');
         dbms_output.put_line(' GRANULARITY:       ' || v7);
         v8 := dbms_stats.get_prefs('PUBLISH');
         dbms_output.put_line(' PUBLISH:           ' || v8);
         v9 := dbms_stats.get_prefs('INCREMENTAL');
         dbms_output.put_line(' INCREMENTAL:       ' || v9);
         v10:= dbms_stats.get_prefs('STALE_PERCENT');
         dbms_output.put_line(' STALE_PERCENT:     ' || v10);
      END;
      /
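The ten GET_PREFS calls above can also be written as a loop - a compact alternative sketch producing the same listing (the preference names are exactly the ones from the script above):

```sql
SET SERVEROUTPUT ON
BEGIN
   dbms_output.put_line('Automatic Stats Gathering Job - Parameters');
   dbms_output.put_line('==========================================');
   -- iterate over the same preference names as in the script above
   FOR p IN (SELECT column_value AS pref
               FROM TABLE(sys.odcivarchar2list(
                    'AUTOSTATS_TARGET','CASCADE','DEGREE','ESTIMATE_PERCENT',
                    'METHOD_OPT','NO_INVALIDATE','GRANULARITY','PUBLISH',
                    'INCREMENTAL','STALE_PERCENT')))
   LOOP
      dbms_output.put_line(' ' || rpad(p.pref || ':', 18) || ' ' ||
                           dbms_stats.get_prefs(p.pref));
   END LOOP;
END;
/
```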

      The settings of the DBMS_STATS.GATHER_SCHEMA_STATS procedure are documented:
      https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68577 

      When you compare the two you'll see that the settings/defaults are identical. 
      .

      But what is the difference between these two?

      Both activities use the same parameters. So the stats will look the same - IF they get created. The real difference between the Automatic Statistics Gathering job and a manual invocation of GATHER_SCHEMA_STATS is that the latter will refresh ALL statistics whereas the Automatic Statistics Gathering job will refresh only statistics on objects where statistics are missing or marked as STALE.

      The same behavior appears when you compare the recommendation to gather dictionary statistics before the upgrade by using DBMS_STATS.GATHER_DICTIONARY_STATS versus a DBMS_STATS.GATHER_SCHEMA_STATS('SYS') call. The latter will refresh all statistics whereas the first one will consume fewer resources but refresh only STALE and missing statistics.
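In SQL*Plus terms the two calls compared here are simply:

```sql
-- refreshes only missing and STALE dictionary statistics (fewer resources):
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- refreshes ALL statistics of the SYS schema, stale or not:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
```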
      .

      A simple example

      This script is kept as simple as possible.

      • It creates a test user
      • It creates two tables within this user - tablespace USERS
      • It inserts and updates information in the two tables
      • It flushes the table monitoring information (how many DMLs have been run) to the dictionary
      • It gathers stats on only one table to verify that STALE is working as intended
      • It kicks off the automatic stats gathering job
      • It kicks off the schema stats gathering call
      • It compares results before/after in the stats history table 

      set timing on
      set serverout on
      set echo on
      set termout on
      column table_name Format a5
      column owner      Format a6
      column stale_stats Format a4
      column last_analyzed Format a15
      column sample_size format 9999999
      drop user test1 cascade;
      create user test1 identified by test1;
      grant connect, resource, dba to test1;
      alter user test1 default tablespace USERS;
      create table TEST1.TAB1 as select * from dba_objects where rownum<50001;
      exec dbms_stats.gather_table_stats('TEST1','TAB1');
      create table TEST1.TAB2 as select * from dba_objects where rownum<50001;
      exec dbms_stats.gather_table_stats('TEST1','TAB2');
      insert into TEST1.TAB1 select * from dba_objects where rownum<50001;
      commit;
      insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
      commit;
      insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
      commit;
      update TEST1.TAB1 set object_id=object_id+0;
      commit;
      update TEST1.TAB2 set object_id=object_id+1;
      commit;
      exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
      select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
      exec DBMS_STATS.GATHER_TABLE_STATS('TEST1','TAB1');
      select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
      exec DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS;
      pause Wait a bit - then press return ...
      select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
      exec dbms_stats.gather_schema_stats('TEST1');
      select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
      prompt End ...

      .

      The results

      exec
      DBMS_STATS.
      FLUSH_DATABASE_MONITORING_INFO;
      TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
      ----- ------ ---- --------------- -----------
      TAB1  TEST1  YES  29-FEB 22:37:07       50000
      TAB2  TEST1  YES  29-FEB 22:37:07       50000

      exec
      DBMS_STATS.
      GATHER_TABLE_STATS('TEST1','TAB1');
      TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
      ----- ------ ---- --------------- -----------
      TAB1  TEST1  NO   29-FEB 22:37:12      100000
      TAB2  TEST1  YES  29-FEB 22:37:07       50000

      exec
      DBMS_AUTO_TASK_IMMEDIATE.
      GATHER_OPTIMIZER_STATS;

      TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
      ----- ------ ---- --------------- -----------
      TAB1  TEST1  NO   29-FEB 22:37:12      100000
      TAB2  TEST1  NO   29-FEB 22:37:13      150000

      exec
      dbms_stats.
      gather_schema_stats('TEST1');

      TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
      ----- ------ ---- --------------- -----------
      TAB1  TEST1  NO   29-FEB 22:37:43      100000
      TAB2  TEST1  NO   29-FEB 22:37:43      150000

      The results can be interpreted this way:

      • The sample size of 50k is based on the first activity during the CTAS
      • Once table TAB1 gets analyzed the sample size is now correct - and the time stamp got updated - statistics on TAB2 are still marked STALE of course as the underlying table has changed by more than 10%
      • The Automatic Statistics Gathering job will refresh only stats for objects where stats are missing or marked STALE - in this example here TAB2. Table TAB1's statistics remain unchanged.
      • When the GATHER_SCHEMA_STATS job gets invoked it will refresh all statistics - regardless if they were STALE or not. 

      This is the behavior the customer who raised the question about differences in these two ways to create statistics may have seen. The GATHER_SCHEMA_STATS job took longer and consumed more resources as it will refresh all statistics regardless of the STALE attribute.

      And it's hard to figure out why the refresh of statistics created in a previous release may have led to suboptimal performance, especially as we are talking about a patch set upgrade - and not a full release upgrade. Thanks to Wissem El Khlifi who tweeted the following annotations I forgot to mention:

      • The Automatic Statistics Gathering job prioritizes objects with NO statistics over objects with STALE statistics
      • The Automatic Statistics Gathering job may get interrupted or skip objects leaving them with NO statistics gathered. You can force this by locking statistics - so the Auto job will skip those completely
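Locking statistics, as mentioned in the last bullet, is a one-liner - a sketch using the TEST1.TAB1 table from the example above:

```sql
-- the automatic job will skip this table completely from now on:
EXEC DBMS_STATS.LOCK_TABLE_STATS('TEST1','TAB1');

-- revert when regular gathering should resume:
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('TEST1','TAB1');
```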

      You'll find more information about the Automatic Statistics Gathering job here:

      And another strange finding ...

      When I played with this example in 12c I encountered the strange behavior of the GATHER_OPTIMIZER_STATS call taking exactly 10 minutes to return to the command prompt.

      First I thought this is a Multitenant-only issue. But I realized quickly: this happens in non-CDB databases in Oracle 12c as well. And when searching the bug database I came across the following unpublished bug:

      • Bug 14840737
        DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS RETURNS INCORRECTLY

      which got logged in Oct 2012 and describes this exact behavior. I kick off the job - it updates the stats pretty soon after - but it still takes 10 minutes to return control to the command prompt. It is supposed to be fixed in a future release of Oracle Database ... 

       

      --Mike 

      Friday Feb 26, 2016

      Collaborate16 - See you soon!!!

      Collaborate Conference 2016

      If you haven't signed up for COLLABORATE16 yet, then please do so :-)

      And I hope to meet you and many other Oracle experts in Las Vegas from April 10-14, 2016.

      If you plan to attend one of our sessions mark them down in your conference scheduler:

      CU soon!

      --Mike
      .

      Thursday Feb 25, 2016

      What happened to the blog post about "12c parameters"?

      Best Practice Hint

      Two weeks ago I published a blog post about Parameter Recommendations for Oracle Database 12.1.0.2. And I took it down a day later. Why that?

      I've got a lot of input from external sources for the "Parameter" blog post. And I'd like to thank everybody who contributed to it, especially Oracle ACE Ludovico Caldara.

      Generally there was a bit of a misunderstanding internally about whether we should "advertise" underscore parameters to cure some misbehavior of the database. In 99% of all cases I'd agree that underscores are not a good solution - especially when it comes to database upgrades, as our slide deck still contains a real-world example of what happens when you keep old underscore parameters in your spfile. It can not only slow down the entire upgrade but also make it very hard for Oracle Support to reproduce issues in case something goes wrong. 

      But in some situations an underscore seems to be the only remedy when a patch is not available for a particular release - the release you are using at the moment. And even if a patch is available, or if the fix is included in a future PSU or BP, that does not necessarily mean that one can apply it, for several reasons.

      We still have a lot of very productive discussions going on internally between many groups. That is very good as it means that we have plenty of smart people around, especially in Oracle's Database Development :-)

      Furthermore we agreed that the Optimizer PM team will take over the part of my (taken down) blog post targeting wrong query results and other optimizer topics. We are in constant exchange and I'll link it as soon as something gets published.

      --Mike

      Monday Feb 15, 2016

      Upgrade Workshop on March 2, 2016 in Switzerland

      Grüezi alle miteinand!

      There are just a few seats left for the Upgrade / Migrate / Consolidate to Oracle Database 12c workshop on March 2, 2016 in Zürich, Switzerland. If you would like to attend but haven't registered yet, please use this link to sign up:

      Workshop language will be German, slides will be in English.

      Looking forward to meeting you there!

      --Mike 

      Wednesday Feb 03, 2016

      DBUA and Read-Only Tablespaces - Things to Know - I

      Related Blog Posts:


      Some people prefer the manual upgrade on the command line, others prefer the graphical tool Database Upgrade Assistant (DBUA).

      DBUA and Read-Only Tablespaces 

      The DBUA offers you an option of setting your non-Oracle tablespaces read-only during the upgrade.

      DBUA Read Only 1

      What the option doesn't tell you is the purpose - and the use case when to "click" it on.

      Partial Offline Backup 

      The purpose of setting data tablespaces which don't belong to the Oracle-supplied components read-only is simply to allow an offline backup - and, in case of a failing upgrade, a quick restore. You'll find this in our big slide deck under "Fallback Strategies - Partial Offline Backup". We have used this method in several scenarios:

      • Large telco systems where the time to restore/recover the entire database would have taken hours or days
      • DWHs where the database is large and intentionally operated in NOARCHIVELOG mode
      • Standard Edition databases where Guaranteed Restore Points in combination with FLASHBACK DATABASE are not available

      FLASHBACK DATABASE is my all-time favorite as it is simple, fast and easy to use. You'll set a Guaranteed Restore Point - and in case of failure during the upgrade you'll flash back. Just don't forget to drop the restore point later when you don't need it anymore. Otherwise your FRA will run out of space sooner or later. The only real caveat in this case is the fact that you can't change COMPATIBLE.
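A minimal sequence for this fallback could look like this (the restore point name is just an example):

```sql
SQL> CREATE RESTORE POINT grp_before_upgrade GUARANTEE FLASHBACK DATABASE;

-- ... run the upgrade; in case of failure:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT grp_before_upgrade;
SQL> ALTER DATABASE OPEN RESETLOGS;

-- after a successful upgrade, don't forget:
SQL> DROP RESTORE POINT grp_before_upgrade;
```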

      When setting data tablespaces read-only the idea is to take an offline backup of the "heart" of the database, consisting of all files belonging to the SYSTEM, SYSAUX and UNDO tablespaces plus the redo logs plus the controlfiles. The tricky part: you'll also have to back up all other repository tablespaces. Those can exist for instance when the database has seen several upgrades already and started its life maybe in the Oracle 8i or 9i days. So you may also see XDB, DRSYS and ODM. You'll have to leave them in read-write as well during the upgrade and back up their files offline beforehand.


      The Customer Case

      The real tricky part is something Marvin hit and commented on the upgrade blog:

      I am upgrading from 11.2.0.3 to 12.1.0.2. During the DBUA setup screens, I checked "Set User Tablespaces to Read Only During the Upgrade". It just seemed like the right thing to do. All of my tablespaces were ONLINE. All tablespace files were autoextendable. During the upgrade I got this error.

      Context component upgrade error
      ORA-01647 tablespace xxxxx is read-only.
      Cannot allocate space in it.

      There was plenty of space. I re-ran without the box checked and it ran ok. Just curious if anyone else has seen this.

      The read-only option in DBUA has an issue - it does not detect all repository tablespaces right now.

      DBUA Upgrade - Read Only Tablespaces

      Marvin and I exchanged some mails and from the DBUA logs I could see what happened:

      [AWT-EventQueue-0] [ 2016-01-28 11:07:03.768 CST ] [ProgressPane$RunNextProgressItem.run:1151]  Progress Item passedCONTEXT

      [AWT-EventQueue-0] [ 2016-01-28 11:07:03.768 CST ] [ProgressPane$RunNextProgressItem.run:1151]  Progress Item passedCONTEXT

      [AWT-EventQueue-0] [ 2016-01-28 11:07:03.768 CST ] [ProgressPane$RunNextProgressItem.run:1151]  Progress Item passedCONTEXT

      [AWT-EventQueue-0] [ 2016-01-28 11:07:03.781 CST ] [ProgressPane$RunNextProgressItem.run:1154]  progress to next step CONTEXT

      [Thread-364] [ 2016-01-28 11:07:43.758 CST ] [BasicStep.handleNonIgnorableError:479]  oracle.sysman.assistants.util.InteractiveMessageHandler@5bd44e0b:messageHandler

      [Thread-364] [ 2016-01-28 11:07:43.759 CST ] [BasicStep.handleNonIgnorableError:480]  CONTEXT component upgrade error:

      ORA-01647: tablespace 'WCI_OCS' is read-only, cannot allocate space in it

      :msg

      [Thread-364] [ 2016-01-28 11:15:42.179 CST ] [SummarizableStep$StepSummary.addDetail:783]  Adding detail: CONTEXT component upgrade error:

      ORA-01647: tablespace 'WCI_OCS' is read-only, cannot allocate space in it

      The repository of TEXT (or CONTEXT) is not in SYSAUX, as it would be by default, but in another tablespace. And this tablespace obviously was set to read-only as DBUA did not discover it as a repository tablespace but treated it as a regular user data tablespace. Bang!!!

      Simple workaround:
      Run the upgrade without the Read-Only Option. And this worked out fine. 

      You can create the TEXT component by yourself and decide in which tablespace it should be created:

      SQL> connect SYS/password as SYSDBA
      SQL> spool text_install.txt
      SQL> @?/ctx/admin/catctx.sql change_on_install SYSAUX TEMP NOLOCK

      Thanks to my team mates Cindy, Hector and Byron

      Yesterday I forwarded the email to our Upgrade Team and I received three replies within minutes explaining:

      • The query the DBUA uses is not as sophisticated as you would think:
        select tablespace_name from dba_tablespaces where contents != 'UNDO' and contents != 'TEMPORARY' and status = 'ONLINE' and tablespace_name != 'SYSTEM' and tablespace_name != 'SYSAUX' and tablespace_name != (select PROPERTY_VALUE from database_properties where PROPERTY_NAME = 'DEFAULT_PERMANENT_TABLESPACE') 

      • We have proposed an improved query already

      • It should be included in a future release of the database 


      Summary

      The option of setting data tablespaces read-only during an upgrade is meant for a fast fallback in case of a failure during the upgrade. But your first option should always be a Guaranteed Restore Point instead. If you still need the read-only solution then please be careful as you may have repositories in non-standard tablespaces. DBA_USERS' DEFAULT_TABLESPACE column may give you an indication - but you should also check DBA_SEGMENTS. And I personally would use this option in conjunction with a command-line approach.
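A quick sanity check before setting anything read-only could look like this - a sketch; the owner list is only an example and should contain the component schemas actually present in your database:

```sql
-- which tablespaces outside SYSTEM/SYSAUX hold segments of component schemas?
SELECT DISTINCT owner, tablespace_name
  FROM dba_segments
 WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX')
   AND owner IN ('CTXSYS','XDB','MDSYS','ORDSYS','OLAPSYS','WMSYS')
 ORDER BY owner, tablespace_name;
```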

      --Mike

      Tuesday Feb 02, 2016

      How to find out if a PSU has been applied? DBMS_QOPATCH

      Since we changed the PSU and BP patch numbering from Oracle Database 12.1.0.2.PSU6 to 12.1.0.2.160119 it is almost impossible to distinguish from the patch name alone whether you have applied a PSU or a BP.

      But:
      In Oracle Database 12c there's a package available which is very useful to query plenty of information about patches from within the database: DBMS_QOPATCH.

      Here are a few helpful examples which I created by checking in our DBaaS Cloud database.

      Which patches have been applied (or rolled back)?

      SQL> set serverout on

      SQL> exec dbms_qopatch.get_sqlpatch_status;

      Patch Id : 20415564
              Action : APPLY
              Action Time : 24-JUN-2015 06:19:23
              Description : Database PSU 12.1.0.2.3, Oracle JavaVM Component (Apr2015)
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20415564/18617752/
                        20415564_apply_ORCL_CDBRO
      OT_2015Jun24_06_18_09.log
              Status : SUCCESS

      Patch Id : 20299023
              Action : APPLY
              Action Time : 24-JUN-2015 06:19:23
              Description : Database Patch Set Update : 12.1.0.2.3 (20299023)
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20299023/18703022/
                        20299023_apply_ORCL_CDBRO
      OT_2015Jun24_06_18_11.log
              Status : SUCCESS

      Patch Id : 20848415
              Action : APPLY
              Action Time : 24-JUN-2015 06:19:23
              Description :
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20848415/18918227/
                        20848415_apply_ORCL_CDBRO
      OT_2015Jun24_06_18_15.log
              Status : SUCCESS

      Patch Id : 20848415
              Action : ROLLBACK
              Action Time : 24-JUN-2015 06:52:31
              Description :
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20848415/18918227/
                        20848415_rollback_ORCL_CD
      BROOT_2015Jun24_06_52_29.log
              Status : SUCCESS

      Patch Id : 20618595
              Action : APPLY
              Action Time : 24-JUN-2015 13:52:13
              Description :
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20618595/18956621/
                        20618595_apply_ORCL_CDBRO
      OT_2015Jun24_13_52_12.log
              Status : SUCCESS

      Patch Id : 20618595
              Action : ROLLBACK
              Action Time : 24-JUN-2015 14:37:11
              Description :
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20618595/18956621/
                        20618595_rollback_ORCL_CD
      BROOT_2015Jun24_14_37_10.log
              Status : SUCCESS

      Patch Id : 20415564
              Action : ROLLBACK
              Action Time : 27-JAN-2016 17:43:18
              Description : Database PSU 12.1.0.2.3, Oracle JavaVM Component (Apr2015)
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20415564/18617752/
                        20415564_rollback_MIKEDB_
      CDBROOT_2016Jan27_17_42_16.log
              Status : SUCCESS

      Patch Id : 21555660
              Action : APPLY
              Action Time : 27-JAN-2016 17:43:18
              Description : Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21555660/19361790/
                        21555660_apply_MIKEDB_CDB
      ROOT_2016Jan27_17_42_17.log
              Status : SUCCESS

      Patch Id : 21359755
              Action : APPLY
              Action Time : 27-JAN-2016 17:43:18
              Description : Database Patch Set Update : 12.1.0.2.5 (21359755)
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21359755/19194568/
                        21359755_apply_MIKEDB_CDB
      ROOT_2016Jan27_17_42_18.log
              Status : SUCCESS

      Patch Id : 21962590
              Action : APPLY
              Action Time : 27-JAN-2016 17:43:18
              Description :
              Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21962590/19426224/
                        21962590_apply_MIKEDB_CDB
      ROOT_2016Jan27_17_42_21.log
              Status : SUCCESS

      PL/SQL procedure successfully completed.
      .

      Where's my home and inventory?

      SQL> set pagesize 0

      SQL> set long 1000000 

      SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) "Home and Inventory" from dual;

      Home and Inventory
      -------------------------------------------------------------

      Oracle Home     : /u01/app/oracle/product/12.1.0/dbhome_1
      Inventory    
          : 
      /u01/app/oraInventory


      Has a specific patch been applied?

      Let's check for the latest PSU. 

      SQL> select xmltransform(dbms_qopatch.is_patch_installed('21359755'), dbms_qopatch.get_opatch_xslt) "Patch installed?" from dual;

      Patch installed?
      -------------------------------------------------------

      Patch Information:
               21359755:   applied on 2015-10-22T21:48:17Z

      .

      What's tracked in my inventory?

      The equivalent of opatch lsinventory -detail ...

      SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual; 

      Oracle Querayable Patch Interface 1.0
      ----------------------------------------------------------------
      Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
      Inventory         : /u01/app/oraInventory
      ----------------------------------------------------------------

      Installed Top-level Products (1):
                                          12.1.0.2.0
      Installed Products ( 135)
                                     ...

      .

      Additional Information and Patches

      If you need more helpful examples you may check this excellent blog post by Simon Pane (Pythian):

      And credits to Martin Berger for sending me this important information:

      Just in case there are multiple DBs running from the same O_H, and someone queries dbms_qopatch.get_opatch_lsinventory in an automated fashion from all DBs (as in automated monitoring/reporting scripts), I'd recommend Patch 20599273 - otherwise there might be strange XML errors due to race conditions. 

      .

      --Mike 

      Monday Feb 01, 2016

      New PREUPGRD.SQL is available for Upgrades to 12c


      As of today a new version of our upgrade tool preupgrd.sql (including the package utluppkg.sql) for upgrades to Oracle Database 12.1.0.2 is available as download from MOS:

      Download it and exchange the existing preupgrd.sql and utluppkg.sql in your current Oracle 12.1.0.2 ?/rdbms/admin directory.

      --Mike

      Thursday Jan 28, 2016

      TDE is wonderful - Journey to the Cloud V

      DBaaS Oracle Cloud

       What happened so far on my Journey to the Cloud?

      Today's journey:
      Learn about TDE (Transparent Data Encryption) and other secrets

      What I really really love about my job: Every day I learn something new.

      But sometimes learning can be frustrating at the beginning. And so it was for Roy and myself in the past days when we explored the use of TDE (Transparent Data Encryption) in our DBaaS Cloud environments. But many thanks to Brian Spendolini for his continuous 24x7 support :-)

      Never heard of Transparent Data Encryption before? Then please read on here. It's usually part of ASO (Advanced Security Option) but it is included in the cloud offering. 

      But first of all, before taking care of TDE and PDBs, I tried to deploy a new DBaaS VM ...
      .

      PDB names can't contain underscores?

      Well, one learning experience is that initially you can't create a PDB in the DBaaS environment with an underscore in the name. I wanted to name my PDB in my new env simply "TDE_PDB1" (8 characters, all should be fine) - but received this nice message:

      Don't worry if you don't speak German - it basically says that it can't be longer than 8 characters (ok, I knew that), must begin with a character (mine does of course) and can only contain characters and numbers (eh?? no underscores???). Hm ...?!?

      Ok, I'll name mine "TDEPDB1".

      Of course outside this page you can create PDBs containing an underscore:

      SQL> create pluggable database PDB_MIKE admin user mike identified by mike
        2  file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb_mike');

      Pluggable database created
      .

      That's what happens when application logic tries to supersede database logic.
      .

      (Almost) undocumented parameter: encrypt_new_tablespace

      Thanks to Robert Pastijn for telling me about this hidden secret. A new parameter which is not in the regular database deployment but only in the cloud.

      encrypt_new_tablespaces

      First check in MOS:

      Interesting.

      So let's check with Google.
      And here it is: 7 hits, for example:

      Controlling Default Tablespace Encryption

      The ENCRYPT_NEW_TABLESPACES initialization parameter controls default encryption of new tablespaces. In Database as a Service databases, this parameter is set to CLOUD_ONLY.

      Value
      Description

      ALWAYS

      Any tablespace created will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause.

      CLOUD_ONLY

      Tablespaces created in a Database Cloud Service database will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause. For non-Database Cloud Service databases, tablespaces will only be encrypted if the ENCRYPTION clause is specified. This is the default value.

      DDL

      Tablespaces are not transparently encrypted and are only encrypted if the ENCRYPTION clause is specified.

      What I found really scary is the fact that I couldn't find it in my spfile/pfile. You can alter it with an "alter system" command but you can't remove it.
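For example, switching to the DDL behavior described in the table above (tablespaces encrypted only when an ENCRYPTION clause is specified) - the change sticks, it just can't be removed from the spfile:

```sql
SQL> ALTER SYSTEM SET encrypt_new_tablespaces = DDL SCOPE = BOTH;

SQL> SHOW PARAMETER encrypt_new_tablespaces
```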

      The idea behind this is great as tablespaces should be encrypted, especially when they reside in a cloud environment. TDE is a very useful feature. And this mechanism exists regardless of your edition, whether you have Standard Edition or Enterprise Edition in any sort of flavor in the DBaaS Cloud.

      A new tablespace will be encrypted by default:

      SQL> CREATE TABLESPACE TS_MIKE DATAFILE 'ts_mike01.dbf' SIZE 10M;

      Then check:

      SQL> select TABLESPACE_NAME, ENCRYPTED from DBA_TABLESPACES;

      But we'll see later if this adds some constraints to our efforts to migrate a database for testing purposes into the DBaaS cloud environment.
      .

      Is there anything encrypted yet?

      Quick check after setting:

      SQL> alter system set exclude_seed_cdb_view=FALSE scope=both;

      I tried to find out if any of the tablespaces are encrypted.

      SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

      TABLESPACE_NAME                ENC     CON_ID
      ------------------------------ --- ----------
      SYSTEM                         NO           1
      USERS                          NO           1
      SYSAUX                         NO           1
      UNDOTBS1                       NO           1
      TEMP                           NO           1

      SYSTEM                         NO           2
      USERS                          NO           2
      TEMP                           NO           2
      SYSAUX                         NO           2

      EXAMPLE                        NO           3
      USERS                          NO           3
      TEMP                           NO           3
      SYSAUX                         NO           3
      APEX_1701140435539813          NO           3
      SYSTEM                         NO           3

      15 rows selected.

      Looks good.  Nothing encrypted yet.
      .

      How does the new parameter ENCRYPT_NEW_TABLESPACES affect operation?

      Ok, let's try.

      SQL> show parameter ENCRYPT_NEW_TABLESPACES

      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- ---------------
      encrypt_new_tablespaces              string      CLOUD_ONLY

      And further down the road ...

      SQL> alter session set container=pdb1;

      SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_plays_with_tde.dbf' size 10M;

      Tablespace created.

      SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

      TABLESPACE_NAME                ENC     CON_ID
      ------------------------------ --- ----------
      SYSTEM                         NO           3
      SYSAUX                         NO           3
      TEMP                           NO           3
      USERS                          NO           3
      EXAMPLE                        NO           3
      APEX_1701140435539813          NO           3
      MIKE_PLAYS_WITH_TDE            YES          3

      7 rows selected.

      Ah ... so my new tablespace is encrypted. Not bad ... so far TDE has no influence. I can create objects in this tablespace, query them etc. It is not disturbing at all. Good.
      .

      How does this key thing work in the DBaaS Cloud?

      The documentation in above WP tells us this:

      Managing the Software Keystore and Master Encryption Key

      Tablespace encryption uses a two-tiered, key-based architecture to transparently encrypt (and decrypt) tablespaces. The master encryption key is stored in an external security module (software keystore). This master encryption key is used to encrypt the tablespace encryption key, which in turn is used to encrypt and decrypt data in the tablespace.

      When the Database as a Service instance is created, a local auto-login software keystore is created. The keystore is local to the compute node and is protected by a system-generated password. The auto-login software keystore is automatically opened when accessed.

      You can change (rotate) the master encryption key by using the tde rotate masterkey  subcommand of the dbaascli  utility. When you execute this subcommand you will be prompted for the keystore password. Enter the password specified when the service instance was created.
      .

      Creating a new PDB

      That's easy, isn't it?

      SQL> alter session set container=cdb$root;

      Session altered.

      SQL> create pluggable database pdb2 admin user mike identified by mike;

      Pluggable database created.

      SQL> alter pluggable database pdb2 open;

      Pluggable database altered.

      SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

      TABLESPACE_NAME                ENC     CON_ID
      ------------------------------ --- ----------
      SYSTEM                         NO           4
      SYSAUX                         NO           4
      TEMP                           NO           4
      USERS                          NO           4

      SQL> select file_name from dba_data_files;

      FILE_NAME
      --------------------------------------------------------------------------------
      /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
      ystem_cbn8fo1s_.dbf

      /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
      ysaux_cbn8fo20_.dbf

      /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_u
      sers_cbn8fo27_.dbf

      Ah, bad thing. As I neither used the file_name_convert option nor changed the PDB_FILE_NAME_CONVERT initialization parameter my new PDB files get created in the "root" path of the CDB. I don't want this. But isn't there this cool new feature called ONLINE MOVE OF DATAFILES in Oracle Database 12c? Ok, it's an EE feature but let me try this after checking the current OMF file names in DBA_DATA_FILES and DBA_TEMP_FILES:
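For completeness, the pitfall could have been avoided at creation time with the FILE_NAME_CONVERT clause - a sketch, assuming the seed files live under the pdbseed path used elsewhere in this post:

SQL> create pluggable database pdb2 admin user mike identified by mike
  2  file_name_convert=('/u02/app/oracle/oradata/MIKEDB/pdbseed','/u02/app/oracle/oradata/MIKEDB/PDB2');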

      SQL> !mkdir /u02/app/oracle/oradata/MIKEDB/PDB2

      SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_system_cbn8fo1s_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf';

      SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_sysaux_cbn8fo20_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf';

      SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_users_cbn8fo27_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf';

      Be prepared:
This will create a 1:1 copy of the file in the designated location and synchronize it afterwards. It may take a minute per file.

      And moving the TEMP tablespace(s) file(s) will fail.

      SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf';

      *
      ERROR at line 1:
      ORA-01516: nonexistent log file, data file, or temporary file
      "/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf"

      The temporary tablespace will have to be dropped and recreated. But not a big deal. 
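A minimal sketch of that drop/recreate, assuming PDB2 has a single default temporary tablespace named TEMP (the new tablespace and file names are illustrative):

SQL> alter session set container=pdb2;
SQL> create temporary tablespace temp2 tempfile '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf' size 100M;
SQL> alter database default temporary tablespace temp2;
SQL> drop tablespace temp including contents and datafiles;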

Check:

      SQL> select file_name from dba_data_files;

      FILE_NAME
      -----------------------------------------------------
      /u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf
      /u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf
      /u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf

      Let me fix this so I don't hit this pitfall again:

      SQL> alter system set pdb_file_name_convert='/u02/app/oracle/oradata/MIKEDB/pdbseed','/u02/app/oracle/oradata/MIKEDBPDB2';

      Final verification:

      SQL> select name, value from v$system_parameter where con_id=4;

      NAME                   VALUE
      ---------------------- ----------------------------------
      resource_manager_plan
      pdb_file_name_convert  /u02/app/oracle/oradata/MIKEDB/pdbseed, /u02/app/oracle/oradata/MIKEDBPDB2


      Now the fun part starts ... ORA-28374: typed master key not found in wallet

Remember this command from above in my PDB1? It ran fine there. But now it fails in PDB2.

      SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M;

      create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M
      *
      ERROR at line 1:
      ORA-28374: typed master key not found in wallet

      Voodoo in the database? I'm worried - especially as Roy had the same issue days before. But why did the command pass through without issues before in PDB1 - and now it doesn't in PDB2? Is it because the PDB1 is precreated - and my PDB2 is not?

      Kinda strange, isn't it?
      So connecting back to my PDB1 and trying again: 

      SQL> alter session set container=pdb1;

      Session altered.

      SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_still_plays_with_tde.dbf' size 10M;

      Tablespace created.

      Ok, now I'm worried.
      What is the difference between the precreated PDB1 and my new PDB2?
      .

      Why do I get an ORA-28374 in my fresh PDB2?

      When we compare the wallet status in both PDBs we'll recognize the difference:

      PDB1:

      SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

      WRL_TYPE        WALLET_TYPE          STATUS
      --------------- -------------------- -----------------------
      FILE            AUTOLOGIN            OPEN

      PDB2:

      SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

      WRL_TYPE        WALLET_TYPE          STATUS
      --------------- -------------------- -----------------------
      FILE            AUTOLOGIN            OPEN_NO_MASTER_KEY

      .
      Now thanks to Brian Spendolini I have a working solution. But I'm not convinced that this is an obvious path ...
      Remember? I just would like to create a tablespace in my new (own, non-precreated) PDB. That's all ... 

      SQL> alter session set container=cdb$root;

      SQL> administer key management set keystore close;

      keystore altered.

      SQL> administer key management set keystore open identified by <your-sysadmin-pw> container=all;

      keystore altered.

      SQL> alter session set container=pdb2;

      Session altered.

      SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

      WRL_PARAMETER                             STATUS             WALLET_TYPE
      ----------------------------------------- ------------------ -----------
      /u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN_NO_MASTER_KEY PASSWORD

      SQL>  ADMINISTER KEY MANAGEMENT SET KEY USING TAG  "tde_dbaas" identified by <your-sysadmin-pw>  WITH BACKUP USING "tde_dbaas_bkup"; 

      keystore altered.

      SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

      WRL_PARAMETER                             STATUS   WALLET_TYPE
      ----------------------------------------- -------- -----------
      /u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN     PASSWORD

      And finally ... 

      SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_still_plays_with_tde.dbf' size 10M;

      Tablespace created.

      Wow!!!

That was not straightforward at all. Maybe it all happens due to my almost non-existent knowledge about TDE.

Ah ... and let me say that I find the missing uppercase letter in all the "keystore altered." echo messages quite disturbing. But this is a generic one and non-critical of course ...

      --Mike
      .

      PS: Read on about Seth Miller's experience here on Seth's blog:
      http://sethmiller.org/oracle-2/oracle-public-cloud-ora-28374-typed-master-key-not-found-in-wallet/ 




      Thursday Jan 21, 2016

      SuSE SLES 12 certified with Oracle Database 12.1.0.2

      Puh ... I've got many mails over several months asking about the current status of certification of SuSE SLES12 for Oracle Database 12.1.0.2. It took a while - and I believe it was not in our hands. But anyhow ... finally ...

      See Release Notes for additional package requirements

      Minimum kernel version: 3.12.49-11-default
Minimum PATCHLEVEL: 1

      Additional Notes

      • Edit CV_ASSUME_DISTID=SUSE11 parameter in database/stage/cvu/cv/admin/cvu_config & grid/stage/cvu/cv/admin/cvu_config
      • Apply Patch 20737462 to address CVU issues relating to lack of reference data
      • Install libcap1 (libcap2 libraries are installed by default); i.e. libcap1-1.10-59.61.x86_64 & libcap1-32bit-1.10-59.61.x86_64
      • ksh is replaced by mksh; e.g. mksh-50-2.13.x86_64
      • libaio has been renamed to libaio1 (i.e. libaio1-0.3.109-17.15.x86_64); ensure that libaio1 is installed


Note: OUI may be invoked with -ignoreSysPrereqs to temporarily work around ongoing CVU check failures

      I had a SuSE Linux running on my previous laptop as dual-boot for quite a while. And I still like SuSE way more than any other Linux distributions potentially because of the fact that it was the Linux I started developing some basic Unix skills. I picked up my first Linux at the S.u.S.E. "headquarters" near Fürth Hauptbahnhof in 1994. I used to live just a few kilometers away and the version 0.9 a friend had given to me on a bunch of 3.5'' floppies had a disk failure. I believe the entire package did cost DM 19,90 by then - today roughly 10 Euro when you don't consider inflation - and was distributed on floppy disks. The reason for me to buy it was simply that I had no clue about Linux - but SuSE had a book delivered with the distribution.

      This is a distribution I had purchased later on as well - they've had good discounts for students by then.


      Picture source: Wikipedia - https://en.wikipedia.org/wiki/SUSE_Linux_distributions

      --Mike

      PS: Updated with more recent information on 15-02-2016 

      Wednesday Jan 20, 2016

      Oracle January 2016 CPU PSU BP available now - BE AWARE OF CHANGES IN PATCH NUMBERING

      Last night the PSUs and BPs for January 2016 have been made available for download on support.oracle.com.

      Oracle Critical Patch Update Advisory - January 2016

      http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html 

It contains 248 security fixes across all products and platforms. And of course important non-security fixes as well - that's why we recommend applying the PSUs (or the BPs in case you are on Exadata or use Oracle Database In-Memory) as soon as possible. 

      Change in Patch Numbering

Please be aware that as of November 2015 there's been a change in patch numbering which most of you may not be aware of. A database PSU was named 12.1.0.2.5 before (I used to call it 12.1.0.2.PSU5 to make clear that a PSU and not a BP had been applied). The new notation replaces the 5th digit with a 6-digit number that includes the date. See MOS Note:2061926.1 for details.

      Example:

      • Before: Oracle Database 12c PSU October 2015 ... 12.1.0.2.5
      • Now: Oracle Database 12c PSU January 2016 ... 12.1.0.2.160119 
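You can check which notation your database reports by querying the patch registry - a sketch for 12c (DBA_REGISTRY_SQLPATCH is populated by datapatch when a PSU or BP gets applied):

SQL> select patch_id, version, action, status, description
  2  from dba_registry_sqlpatch;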

      More Information? 

      --Mike

      Tuesday Jan 19, 2016

      Clean up APEX - Journey to the Cloud IV

      DBaaS Oracle Cloud

      What happened so far on my Journey to the Cloud?

      Today's journey: Cleanup APEX removal leftovers 

      When you read my "Patch" blog post from Dec 22, 2015 you'll see that I was left with an incomplete Oracle Application Express (APEX) removal situation. Something looked really strange and didn't work as expected and documented. 

But thanks to my APEX colleagues Jason Straub and Joel Kallman there is a fix: the solution below was provided by Jason. And let me say upfront that it's not APEX's fault ;-) Somebody mixed up stuff in the current DBaaS deployment, and thus the correct scripts to remove things in the right order are simply missing.

      How to remove Oracle APEX from the DBaaS Cloud's CDB$ROOT

      1. Download APEX 4.2.0.6 and PDBSS 2.0 (Multitenant Self Service Provisioning Application):
        http://www.oracle.com/technetwork/developer-tools/apex/application-express/apex-archive-42-1885734.html
        http://www.oracle.com/technetwork/database/multitenant/downloads/multitenant-pdbss-2016324.html

        Actually if you have removed PDBSS already as I did in my "Patch" blog post from Dec 22, 2015 you don't have to download, unzip and execute the PDBSS removal script again.


      2. Copy the zip files to the Cloud environment

        scp -i ./yourCloudKey /media/sf_CTEMP/Download/apex_4.2.6_en.zip oracle@<your_cloud_IP>:/home/oracle

        Enter passphrase for key './yourCloudKey':

        apex_4.2.6_en.zip              100%   82MB 116.9KB/s   11:58

        Repeat the same with PDBSS in case you didn't remove it already.


      3. Connect to your Cloud environment and unzip both files.

  ssh -i ./yourCloudKey oracle@<your_cloud_IP>

        cd
        unzip apex_4.2.6_en.zip


        Repeat the same with PDBSS in case you didn't remove it already.


      4. Remove PDBSS by using your unzipped archive

        cd /home/oracle/pdbss
        sqlplus / as sysdba
        SQL> @pdbss_remove.sql


      5. Remove APEX 5 by using the existing default installation
        (this is the part I did already in my previous blog post from Dec 22)

        cd $ORACLE_HOME/apex
        sqlplus / as sysdba
        SQL> @apxremov.sql 



      6. Remove APEX 4.2 parts by using your unzipped archive
        (this is the part I didn't do yet)

        cd /home/oracle/apex
        sqlplus / as sysdba
        SQL> @apxremov_con.sql


      7. Drop the DBaaS Monitor common users

  As the DBaaS Monitor is based on APEX as well, removing APEX will "kill" the DBaaS Monitor, too. So in order to have a properly working environment you must drop the common users for the DBaaS Monitor as well. Otherwise you'll receive ORA-959 sync errors in PDB_PLUG_IN_VIOLATIONS whenever you deploy a new PDB and - even worse - the PDB won't open unrestricted.

        SQL> drop user C##DBAAS_BACKUP cascade;
        SQL> drop user C##DBAAS_MONITOR cascade;

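  If you skip this step, the sync errors turn up in the violations view. A hypothetical verification query to run after deploying a new PDB:

  SQL> select name, cause, message from pdb_plug_in_violations where status <> 'RESOLVED';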

      8. Finally recompilation and check for invalid objects

        sqlplus / as sysdba
        SQL> @?/rdbms/admin/utlrp.sql

        SQL> select object_name, owner from dba_objects where status='INVALID';
        no rows selected


      9. Now you are free to install APEX 5.0 in the PDBs of your choice.

  As there's a PDB1 already I will create APEX inside this PDB first.

        cd $ORACLE_HOME/apex
        sqlplus / as sysdba
        SQL> alter session set container=pdb1;
  SQL> create tablespace APEX5 datafile '/u02/app/oracle/oradata/CDB1/PDB1/apex5_01.dbf' size 100M autoextend ON next 1M;


  Be aware: PDB_FILE_NAME_CONVERT is not set, and your database is not in ASM. Therefore, omitting the data file path and name will leave you with a file named after OMF standards in:
        /u02/app/oracle/oradata/CDB1/26A65D56D16F21A1E05336036A0A1AD8/datafile/o1_mf_apex5_c9wg9kly_.dbf

        Create APEX 5.0 in PDB1, change the admin password, create the APEX listener and REST service and unlock the public user:

        SQL> @apexins.sql APEX5 APEX5 TEMP /i/
        SQL> @apxchpwd.sql


        Be aware about the password requirements:

        --------------------------------------------------------------------------------
        Password does not conform to this site's password complexity rules.
        * Password must contain at least 6 characters.
        * Password must contain at least one numeric character (0123456789).
        * Password must contain at least one punctuation character
        (!"#$%&()``*+,-/:;?_).
        * Password must contain at least one upper-case alphabetic character.
        * Password must not contain username.
        --------------------------------------------------------------------------------


        SQL> @apex_rest_config.sql
        SQL> alter user APEX_PUBLIC_USER identified by SecretPWD account unlock;

Let me add that it is possible to have different APEX versions in different PDBs. The only thing you'll really need to take care of is the images directory and the different APEX homes.

      That's it ... and in a future cloud deployment the extra step to remove APEX 4.2 shouldn't be necessary anymore. 

      --Mike

      Thursday Jan 14, 2016

      VBox 5.0.10/12 issues with PERL and Seg Faults - UPDATE

A bit more than two months ago I heard from several people having issues with our Hands-On Lab environment. And it became clear that only those who use Oracle VirtualBox 5 see such errors. 

Then I read Danny Bryant's blog post (thanks to Deiby Gomez for pointing me to it) about similar issues and a potential solution yesterday.

And interestingly one of my colleagues, our PL/SQL product manager Bryn Llewellyn, started an email thread and a test internally yesterday as well. The issue seems to occur only on newer versions of Apple's MacBooks.
      .

      Potential Root Cause

The PERL issues seem to happen only on specific new Intel CPUs with a so-called 4th level cache.

      The current assumption is that Intel CPUs with Iris Pro graphics are affected. Iris Pro means eDRAM (embedded DRAM) which is reported as 4th level cache in CPUID. We have confirmed that Crystal Well and Broadwell CPUs with Iris Pro are affected. It is likely that the Xeon E3-1200 v4 family is also affected.

It seems there's a bug in the perl binary: it links against ancient code from the Intel compiler suite that does optimizations according to the CPU features. Very recent Intel CPUs have 4 cache descriptors.

People who encountered this used VirtualBox 5.0.x - and it passes this information to the guest. This leads to a problem within the perl code. You won't see it on VBox 4.3 as this version does not pass the information to the guest. 

      But actually it seems that this issue is independent of Virtual Box or any other virtualization software. It simply happens in this case as many people use VBox on Macs - and some Macs are equipped with this new CPU model. But people run Oracle in VBox environments and therefore see the issue as soon as they upgraded to VBox 5.0.x.
      .

      Potential Solutions

      If you are using Oracle in VBox there are actually two solutions:

      • Revert to VBox 4.3 as this won't get you in trouble
        This problem was not triggered on VBox 4.3.x because this version did not  pass the full CPUID cache line information to the guest.
        .
      • Run this sequence of commands in VBox 5.0 to tweak the CPUID bits passed to the guest:
        .
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/Leaf" "0x4"
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/SubLeaf" "0x4"
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/eax"  "0"
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/ebx" "0" 
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/ecx" "0" 
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/edx"  "0"
        VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/SubLeafMask" "0xffffffff" 

        • Of course you'll need to replace VM_NAME by the name of your VM
          .

If the error happens on a bare metal machine - meaning not inside a virtual image but in a native environment - then the only chance you have (to my knowledge right now) is to exchange the perl binary before doing anything substantial, such as running root.sh or rootupgrade.sh in your Grid Infrastructure installation, or before using the DBCA or the catctl.pl tool to create or upgrade a database.
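A rough sketch of such a swap - all paths are illustrative, and /usr/local/bin/perl stands for a known-good perl build you provide yourself (e.g. self-compiled as Laurent describes in the post linked below):

$ cd $ORACLE_HOME/perl/bin
$ mv perl perl.orig
$ cp /usr/local/bin/perl .
$ chmod 755 perl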

      In this case please refer to the blog post of Laurent Leturgez:

      Issues with Oracle PERL causing segmentation faults:

      http://laurent-leturgez.com/2015/05/26/oracle-12c-vmware-fusion-and-the-perl-binarys-segmentation-fault

      .

      Further Information

This issue is currently tracked internally as bug 22539814: ERRORS INSTALLING GRID INFRASTRUCTURE 12.1.0.2 ON INTEL CPUS WITH 4 CACHE LEVEL.

      So far we have not seen reports by people encountering this in a native environment but only by people using VBox 5.0.x or Parallels or VMware on a very modern version of Apple hardware.

      .
      --Mike 



      Wednesday Jan 13, 2016

      Have an Excellent Start into 2016!!!

      Roy and I and the entire Database Upgrade Team would like to wish you a very good start into 2016!!!

And thanks for your confidence in our work in the past year(s). The Upgrade Your Database - NOW! blog had over 440,000 hits in 2015, which is quite a surprise as neither Roy nor I update the blog full time. But the many mails and comments we consistently get demonstrate your interest. 

      Upgrade Your Database - NOW! Statistics 2015

      We will continue to report about things related to upgrades and migrations and performance and testing and the cloud and ... and ... and regarding Oracle Database 12c in 2016 - I promise :-)

      Thanks for your support and all the helpful emails and comments.
      We learn a lot from you folks out there!!!

      Have a great start into 2016! 

      --Mike 

      About

Mike Dietrich
      Master Product Manager - Database Upgrade & Migrations - Oracle

Based in Germany. Interlink between customers/partners and the Upgrade Development. Running workshops between the Arctic and Antarctica. Assisting customers in their reference projects onsite and remotely.
