Wednesday Mar 30, 2016

DBUA and Read-Only Tablespaces - Things to Know II

Related Blog Posts:



Thanks to Rodolfo Baselli's comment on a previous blog post about the DBUA and Read-Only Tablespaces, I dug a bit deeper and found out that "silently assuming" does not mean "works as intended".

But one piece at a time.

Rodolfo commented that if he tells the DBUA to switch all data tablespaces into read-only mode for the duration of the upgrade, it is still up to him to create the correct backup - the DBUA won't do it.

This is the option in the Oracle Database 12.1.0.2 DBUA (Database Upgrade Assistant):

DBUA - Tablespaces in Read Only Mode

I had silently assumed that the DBUA would choose the correct backup strategy automatically when it offers (enabled by default) to create an Offline Backup on the Recovery Options screen a bit later in the same dialog.

Backup Strategy DBUA

But in fact it doesn't.

When you choose the default, "Create a New Offline RMAN Backup", it will create a full offline RMAN backup in the desired location - but not a partial offline backup as intended by the optional choice to have the tablespaces in read-only mode during the upgrade, which allows a fast and simple restore without the need for a recovery. Please note that I would recommend this option generally only in cases where the database is on purpose in NOARCHIVELOG mode or where the RTO (Recovery Time Objective) can be met only by restoring a partial offline backup.

What are your options now?

If you switch on the Read-Only option on purpose you'll have to choose "I have my own backup and restore strategy" and do the partial offline backup by yourself - before you start the DBUA.
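
If you're unsure which files make up that partial offline backup, a quick inventory helps. A minimal sketch, assuming your undo tablespace is named UNDOTBS1 - adjust to your environment, and don't forget additional repository tablespaces (see Part I of this series):

select file_name from dba_data_files
 where tablespace_name in ('SYSTEM','SYSAUX','UNDOTBS1');
select member from v$logfile;
select name from v$controlfile;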

Personally, I'd consider this option not useful within the DBUA right now. We have discussed this internally, and it may work correctly in a future patch set or in the upcoming release of the database - so no offense to anybody. It's just important to know that, at the moment, you'll have to do the partial offline backup yourself.

--Mike

Tuesday Mar 29, 2016

Disable Transparent Hugepages on SLES11, RHEL6, RHEL7, OL6, OL7 and UEK2 Kernels

This blog post is not related to database upgrades and migrations. But still I think it is very useful for many customers operating on modern Linux systems.

Recommendation 

Oracle Support just published an ALERT strongly recommending to disable Transparent Hugepages on Linux systems. And the information below does not apply to RAC systems only but also to single-instance environments.

Which Linux Distributions/Kernels are affected? 

  • SLES11
  • RHEL6 and RHEL7
  • OL6 and OL7
  • UEK2 Kernels

What are the Issues? 

I'm quoting MOS Note: 1557478.1 (ALERT: Disable Transparent HugePages on SLES11, RHEL6, RHEL7, OL6, OL7 and UEK2 Kernels):

Because Transparent HugePages are known to cause unexpected node reboots and performance problems with RAC, Oracle strongly advises to disable the use of Transparent HugePages. In addition, Transparent Hugepages may cause problems even in a single-instance database environment with unexpected performance problems or delays. As such, Oracle recommends disabling Transparent HugePages on all Database servers running Oracle.

This ONLY applies to the new feature Transparent HugePages, Oracle highly recommends the use of standard HugePages that were recommended for previous releases of Linux.  See MOS Note:361323.1 for additional information on HugePages. 

As far as I can see, you'll have to reboot the server in order to disable Transparent Hugepages - the default is usually ALWAYS.

More Information?

 --Mike

Wednesday Mar 23, 2016

What does DEPRECATED mean? And DESUPPORTED?

There's sometimes a misunderstanding about what we mean by the term DEPRECATED - and what the difference to DESUPPORTED is. Actually, there's a full chapter in the Database Upgrade Guide listing deprecated and desupported features.

Deprecated

This message in particular puzzled a lot of customers: it states that the non-CDB architecture is deprecated in Oracle Database 12c.


In the Database Upgrade Guide we clearly explain what deprecated means:

"By deprecate, we mean that the feature is no longer being enhanced but is still supported"

So for you it just means: be aware that we don't further develop or enhance the feature. But you are still fully supported when using it.
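
By the way, deprecation is also flagged inside the database itself for initialization parameters. A minimal sketch to list the deprecated ones in your release:

select name from v$parameter where isdeprecated = 'TRUE' order by name;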

Another well known example is Oracle Streams which is fully supported in Oracle Database 12c - but not in Oracle Multitenant - and is deprecated and therefore not enhanced or developed any further. 

Or, to name another example, Oracle Restart, which has been deprecated for some time - but is still not desupported. And I know a good number of customers using it in production, even with Oracle Database 12.1.0.2, on several hundred databases.

Desupported

Even if something is desupported - remember the Rule Based Optimizer? - you can still use the feature. But at your own risk, as we don't fix any bugs or issues anymore.

Again the Database Upgrade Guide clarifies the term:

"By desupported, we mean that Oracle will no longer fix bugs related to that feature and may remove the code altogether"

Other common examples in Oracle Database 12c are Enterprise Manager Database Control, which simply does not exist anymore, and the desupport of Raw Devices.

Summary

Deprecated is a signal that something may disappear in the future and does not get enhanced anymore. No activity is required except taking note of it for your future plans. Desupported means that we don't fix anything anymore for a desupported feature or product - and it may even disappear. But often desupported features are still there and can be used at your own risk.

--Mike


Tuesday Mar 22, 2016

GC Freelist Session Waits causing slowness and hangs

Best Practice Hint

One of the best things in my job:
I learn from you folks out there. Every day. 

Credits here go to Maciej Tokar who explained the topic below to me via LinkedIn - thanks a lot, Maciej! 

Locks are not being closed fast enough, resulting in gc freelist waits

You can find a reference for Global Cache Freelist in the Oracle documentation. This issue can lead to the database being slow, up to complete hangs. Based on my research it looks as if the issue is not related to RAC only but is a general thing. In your session waits you'll spot this:

Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
gc freelist                         Cluster              41.37       8.61

This has been logged as bugs 21352465 (public) and 18228629 (not public). Locks are not being closed fast enough, resulting in gc freelist waits. In conjunction, the default for _gc_element_percent seems to be too low at 120 (or 110 in 11.2.0.4).

Actually the issue can affect not only Oracle Database 12.1.0.2 but also Oracle Database 11.2.0.3 and 11.2.0.4.

See MOS Note:2055409.1 (Database Hangs with High "gc freelist" wait) for further details.

Solution

  • Apply the patch for bug 18228629 on top of a PSU or BP where available
    • See the drop-down list to the right labeled "Release" to access the correct patch for your release
    • Contrary to what the above MOS Note states, for Oracle Database 12.1.0.2 the patch is only available on top of the January 2016 PSU and BP and two other Exadata BPs - and on Linux only!
  • Use the workaround and set _gc_element_percent = 200 (see the sketch below)
    • This will require an instance restart as the parameter can't be changed dynamically:
      alter system set "_gc_element_percent"=200 scope=spfile;
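
If you want to verify the current value of the underscore parameter before and after the restart, here's a minimal sketch (run it as SYS - the x$ views are not visible to normal users):

select a.ksppinm name, b.ksppstvl value
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm = '_gc_element_percent';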

Epilogue

We've had a lot of discussions about underscore parameters in the past weeks. And I'm not a big fan of underscores, especially when it comes to upgrades, as experience has shown that underscores set one day may be hard to remove another day - and underscores can significantly impact the upgrade duration in a negative way.

But on the other hand, if an issue is seriously affecting many customers, and there's no patch available for your platform and environment right now, what else can one do?

--Mike


Wednesday Mar 09, 2016

OUGN Conference - On the boat again

OUGN Spring Conference 2016

Last year influenza took me down and out just a couple of days before my planned departure for the famous OUGN Spring Conference. But this year (so far) I'm still happy and healthy and on my way towards beautiful Oslo. I'm really looking forward to this year's OUGN Spring Conference, which will happen again on the boat departing from Oslo and sailing over to Kiel - and then returning.

In case you plan to visit my talks and demos:

  • Thursday, 10-March-2016 - 14:00-14:45h - Parliament 1+2
    How Oracle Single Tenant will change a DBA's life
  • Friday, 11-March-2016 - 10:30-11:15h - Parliament 1+2
    Oracle Database Upgrade: Live and Uncensored

Looking forward to this wonderful event with so many good talks and presentations and such a great group of people. And thanks to the organizers of OUGN!

--Mike


Tuesday Mar 08, 2016

Parameter Recommendations for Oracle Database 12c - Part II


Best Practice Hint

Time for a new round of Parameter Recommendations for Oracle Database 12.1.0.2. This blog post focuses on very well-known parameters with interesting behavior. This can be a behavior change or simply something we'd like to point out. And even if you still work on Oracle Database 11g, some of the recommendations below may apply to your environment as well.

Preface

Again, please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

We strongly recommend Real Application Testing, especially the SQL Performance Analyzer but also Database Replay to verify the effect of any of those parameters. 

Known Parameters - Interesting Behavior

  • parallel_min_servers
    • What it does?
    • Default:
      • CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2
    • Behavior Change:
      • Setting it to a value below the default will let the database ignore it.
      • In Oracle Database 11g the default was 0
      • Compare 11.2.0.4 vs 12.1.0.2 on the same box:
        • 11g:
          SQL> show parameter parallel_min_servers
          NAME                  TYPE     VALUE
          --------------------- -------- ------
          parallel_min_servers  integer  0

        • 12c:
          SQL> show parameter parallel_min_servers
          NAME                  TYPE     VALUE
          --------------------- -------- ------
          parallel_min_servers  integer  8
    • Explanation: see the sketch below for how the default value is derived on your system
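
    To see how the 12c default is derived on your own box, you can compare the formula inputs directly - a minimal sketch:

    select name, value from v$parameter
     where name in ('cpu_count','parallel_threads_per_cpu','parallel_min_servers');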

  • job_queue_processes
    • What it does?
      • See the Oracle Documentation - the value specifies the maximum number of job slaves to be created to execute jobs started by either DBMS_JOBS or DBMS_SCHEDULER
    • Default:
      • 1000
    • Recommendation:
      • Set it to a rough equivalent of 2 * CPU cores (see the sketch below this list)
    • Explanation:
      • In Oracle Database 12c we introduced automatic stats gathering during CTAS and IAS (into an empty table only) operations. This can potentially lead to too many jobs doing the stats gathering. Furthermore, issues can happen due to the default of concurrent stats gathering.
        Therefore limiting this parameter seems to be a good idea. 
      • Be aware that switching it to 0 will block all recompilation attempts. Furthermore, generally no jobs can be executed anymore with DBMS_JOBS or DBMS_SCHEDULER.
      • Multitenant behavior change:
        In 12.1.0.1, job_queue_processes was a Container Database (CDB) modifiable parameter (i.e., at a global level). However, in 12.1.0.2, the job_queue_processes parameter is not CDB modifiable; instead it's PDB modifiable, which means each PDB can have its own job_queue_processes value.
    • More Information:
    • Annotation:
      I've had an email exchange with Stefan Köhler about the stats behavior for CTAS. As I couldn't reproduce myself the behavior we saw at two customers with job_queue_processes=1000 and heavy CTAS activity (which could be remedied by setting job_queue_processes to a lower value), I would put a question mark behind my above statement.
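
    A minimal sketch of the recommendation above - the value 16 assumes a server with 8 CPU cores, so adjust it to your hardware:

    -- assumption: 8 CPU cores, therefore 2 * 8 = 16
    alter system set job_queue_processes=16 scope=both;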
  • recyclebin
    • What it does?
      • See the Oracle Documentation - controls whether the Flashback Drop capability is turned on or off. If the parameter is set to OFF, then dropped tables do not go into the recycle bin. If this parameter is set to ON, then dropped tables go into the recycle bin and can be recovered.
    • Default:
      • ON
    • Recommendation:
      • If the recyclebin is ON (the default) in your environment then empty it at least once per week. Create a default job in all your environments emptying the recycle bin every Sunday morning at 3am, for instance (see the scheduler sketch below this list):
        SQL> purge DBA_RECYCLEBIN;
    • Explanation:
      • The recycle bin has been on by default in every database since Oracle 10g. The danger is that it may not get emptied, while especially on development databases many objects may be created and dropped again. As a result the dropped objects and their dependent objects stay in the database until the space needs to be reclaimed. That means they exist in the data dictionary as well, for instance in TAB$. Their names are different now, starting with "BIN$..." instead of "EMP" - but they will blow up your dictionary. And emptying it not often enough may introduce a performance dip to your system, as the cleanup of many objects can be quite resource-intensive.
      • Check your current recycle bins:
        SQL> SHOW RECYCLEBIN;
        ORIGINAL NAME RECYCLEBIN NAME              OBJECT TYPE DROP TIME
        ------------- ---------------------------- ----------- -------------------
        TEST_RBIN     BIN$2e51YTaSK8TL/mPy+FuA==$0 TABLE       2010-05-27:15:23:45
        TEST_RBIN     BIN$5dF60S3GSEOSSYREaqCg==$0 TABLE       2010-05-27:15:23:43
        TEST_RBIN     BIN$JHCDN9YwQRXjXGOJcCIg==$0 TABLE       2010-05-27:15:23:42
    • More Information:
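
    A minimal sketch of such a job using DBMS_SCHEDULER - the job name and schedule are examples only, and the job owner needs the privileges to purge DBA_RECYCLEBIN (e.g., SYSDBA):

    BEGIN
      dbms_scheduler.create_job(
        job_name        => 'PURGE_RECYCLEBIN_WEEKLY',   -- hypothetical job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN execute immediate ''purge dba_recyclebin''; END;',
        start_date      => systimestamp,
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
        enabled         => TRUE,
        comments        => 'Empty the recycle bin every Sunday at 3am');
    END;
    /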

  • deferred_segment_creation
    • What it does?
      • See the Oracle Documentation - set to the default (TRUE), then segments for tables and their dependent objects (LOBs, indexes) will not be created until the first row is inserted into the table
    • Default:
      • TRUE
    • Recommendation:
      • Set it to FALSE unless you plan to create a large number of tables/indexes knowing that you won't populate many of them (see the sketch below this list).
    • Explanation/Risk:
      • If my understanding is correct, this parameter got introduced with Oracle Database 11.2 in order to save space when applications such as EBS, Siebel or SAP create tons of tables and indexes which may never get used because you don't work with the matching module of the software
      • The risk can be that certain queries check DBA_SEGMENTS and/or DBA_EXTENTS - and if there's no segment allocated you won't find any indication of the object's existence in there, even though it actually exists. Furthermore we have seen issues with Data Pump workers getting contention, and some other things. 
    • More Information:
      • The documentation has now become pretty conservative as well, since Oracle 11.2.0.4, and I'll second that:
        Before creating a set of tables, if it is known that a significant number of them will not be populated, then consider setting this parameter to true. This saves disk space and minimizes install time.
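
    To see deferred segment creation in action, a minimal sketch (run as a test user; TAB_DEF is a made-up table name):

    alter session set deferred_segment_creation = true;
    create table tab_def (c number);
    select count(*) from user_segments where segment_name = 'TAB_DEF';  -- 0: no segment yet
    insert into tab_def values (1);
    select count(*) from user_segments where segment_name = 'TAB_DEF';  -- 1: segment got created
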
 --Mike

Friday Mar 04, 2016

Parameter Recommendations for Oracle Database 12c - Part I

Best Practice Hint

 A few weeks ago we published some parameter recommendations, including several underscores, but based on an internal discussion (still ongoing) we decided to remove that entry and split up the tasks. The optimizer team will take over parts of it and I'll post an update as soon as something is published.


Preface

Please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

We strongly recommend SQL Performance Analyzer to verify the effect of any of those parameters. 

How to read this blog post?

Never ever blindly set any underscore or hidden parameters because "somebody said" or "somebody wrote on a blog" (including this blog!) or "because our country has the best tuning experts worldwide" ... Only trust Oracle Support if it's written into a MOS Note or an official Oracle White Paper or if you work with a particular support or consulting engineer for quite a long time who understands your environment.

Important Parameter Settings

    • _kks_obsolete_dump_threshold
      • What it does?
        • Introduced in Oracle 12.1.0.2 as an enhancement  to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times. 
      • Problem:
        • Trace files can grow like giant mushrooms due to cursor invalidations
      • Solution:
      • Patches:
        • Fix included in DBBP 12.1.0.2.160216
        • Fix on top of 12.1.0.2.13 DBEngSysandDBIM
        • Since Feb 13, 2016 there's a one-off available but on Linux only - and only on top of a fresh 12.1.0.2 
      • Remarks:
        • The underlying cursor sharing problem always needs to be investigated
          If you have cursor sharing issues you may set this parameter higher so that not every invalidation causes a dump; then investigate and solve the issue, and finally switch the parameter to 0 once the issue is taken care of (a sketch follows below).
          Please be aware that switching the parameter to 0 will lead to a lack of diagnostic information in case of cursor invalidations.
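
          A minimal sketch of that sequence - the threshold value 16 is just an example, and as with any underscore parameter you should only set it in agreement with Oracle Support:

          -- raise the threshold so that not every obsoleted parent cursor triggers a dump
          alter system set "_kks_obsolete_dump_threshold" = 16;
          -- ... investigate and fix the underlying cursor sharing issue ...
          -- then switch the diagnostic dumps off entirely
          alter system set "_kks_obsolete_dump_threshold" = 0;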


    • _use_single_log_writer
    • memory_target
      • What it does?
      • Problem:
        • Unexpectedly failing database upgrades with memory_target < 1 GB, where equal settings of sga_target and pga_aggregate_target didn't cause issues 
        • It prevents the important use of HugePages
      • Solution:
        • Avoid memory_target whenever possible
        • Use sga_target and pga_aggregate_target instead (a sketch follows below)
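
        A minimal sketch of the switch - the sizes below are placeholders, so derive yours from your current memory usage; the change requires an instance restart:

        alter system set memory_target = 0 scope=spfile;
        alter system set sga_target = 4G scope=spfile;
        alter system set pga_aggregate_target = 1G scope=spfile;
        -- restart the instance afterwards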


    • pga_aggregate_limit

    Essential MOS Notes for Oracle Database 12.1.0.2

    --Mike

    Tuesday Mar 01, 2016

    Differences between Automatic Statistics Gathering job and GATHER_SCHEMA_STATS

    Recently a customer raised the question of whether there are differences between the Automatic Statistics Gathering job and a manual creation of stats via the GATHER_SCHEMA_STATS procedure.

    The performance results were quite interesting. Performance after an upgrade from Oracle Database 11.2.0.3 to Oracle Database 11.2.0.4 was not good when the automatic stats job was used. But performance changed significantly for the better when schema stats were created - with the downside of taking more resources during the gathering.

    Is the Automatic Stats Gathering job enabled?

    That question can be answered quite easily. There's a very good MOS Note:1233203.1 - FAQ: Automatic Statistics Collection displaying this query:

    SELECT CLIENT_NAME, STATUS FROM DBA_AUTOTASK_CLIENT WHERE CLIENT_NAME='auto optimizer stats collection';

    The MOS Note also has the code to enable (or disable) the job.
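
    For reference, a sketch of the enable call - use dbms_auto_task_admin.disable with the same arguments to switch the job off:

    BEGIN
      dbms_auto_task_admin.enable(
        client_name => 'auto optimizer stats collection',
        operation   => NULL,
        window_name => NULL);
    END;
    /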

    Which parameters/settings are used?

    That question is a bit more tricky as the Note says: "The automatic statistics-gathering job uses the default parameter values for the DBMS_STATS procedures". But how do I display them?

    The following script will display the parameters being used during the Automatic Statistics Gathering:

    SET ECHO OFF
    SET TERMOUT ON
    SET SERVEROUTPUT ON
    SET TIMING OFF
    DECLARE
       v1  varchar2(100);
       v2  varchar2(100);
       v3  varchar2(100);
       v4  varchar2(100);
       v5  varchar2(100);
       v6  varchar2(100);
       v7  varchar2(100);
       v8  varchar2(100);
       v9  varchar2(100);
       v10 varchar2(100);        
    BEGIN
       dbms_output.put_line('Automatic Stats Gathering Job - Parameters');
       dbms_output.put_line('==========================================');
       v1 := dbms_stats.get_prefs('AUTOSTATS_TARGET');
       dbms_output.put_line(' AUTOSTATS_TARGET:  ' || v1);
       v2 := dbms_stats.get_prefs('CASCADE');
       dbms_output.put_line(' CASCADE:           ' || v2);
       v3 := dbms_stats.get_prefs('DEGREE');
       dbms_output.put_line(' DEGREE:            ' || v3);
       v4 := dbms_stats.get_prefs('ESTIMATE_PERCENT');
       dbms_output.put_line(' ESTIMATE_PERCENT:  ' || v4);
       v5 := dbms_stats.get_prefs('METHOD_OPT');
       dbms_output.put_line(' METHOD_OPT:        ' || v5);
       v6 := dbms_stats.get_prefs('NO_INVALIDATE');
       dbms_output.put_line(' NO_INVALIDATE:     ' || v6);
       v7 := dbms_stats.get_prefs('GRANULARITY');
       dbms_output.put_line(' GRANULARITY:       ' || v7);
       v8 := dbms_stats.get_prefs('PUBLISH');
       dbms_output.put_line(' PUBLISH:           ' || v8);
       v9 := dbms_stats.get_prefs('INCREMENTAL');
       dbms_output.put_line(' INCREMENTAL:       ' || v9);
       v10:= dbms_stats.get_prefs('STALE_PERCENT');
       dbms_output.put_line(' STALE_PERCENT:     ' || v10);
    END;
    /

    The settings of the DBMS_STATS.GATHER_SCHEMA_STATS procedure are documented:
    https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68577 

    When you compare the two you'll see that the settings/defaults are identical. 

    But what is the difference between these two?

    Both activities use the same parameters. So the stats will look the same - IF they get created. The real difference between the Automatic Statistics Gathering job and a manual invocation of GATHER_SCHEMA_STATS is that the latter will refresh ALL statistics whereas the Automatic Statistics Gathering job will refresh only statistics on objects where statistics are missing or marked as STALE.

    The same behavior appears when you compare the recommendation to gather dictionary statistics before the upgrade by using DBMS_STATS.GATHER_DICTIONARY_STATS versus a DBMS_STATS.GATHER_SCHEMA_STATS('SYS') call. The latter will refresh all statistics, whereas the former will take fewer resources but refresh only STALE and missing statistics.

    A simple example

    This script is kept as simple as possible.

    • It creates a test user
    • It creates two tables within this user - tablespace USERS
    • It inserts and updates information in the two tables
    • It flushes the monitoring information (how many DMLs got run?) out
    • It gathers stats on only one table to verify that STALE is working as intended
    • It kicks off the automatic stats gathering job
    • It kicks off the schema stats gathering call
    • It compares results before/after in the stats history table 

    set timing on
    set serverout on
    set echo on
    set termout on
    column table_name Format a5
    column owner      Format a6
    column stale_stats Format a4
    column last_analyzed Format a15
    column sample_size format 9999999
    drop user test1 cascade;
    create user test1 identified by test1;
    grant connect, resource, dba to test1;
    alter user test1 default tablespace USERS;
    create table TEST1.TAB1 as select * from dba_objects where rownum<50001;
    exec dbms_stats.gather_table_stats('TEST1','TAB1');
    create table TEST1.TAB2 as select * from dba_objects where rownum<50001;
    exec dbms_stats.gather_table_stats('TEST1','TAB2');
    insert into TEST1.TAB1 select * from dba_objects where rownum<50001;
    commit;
    insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
    commit;
    insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
    commit;
    update TEST1.TAB1 set object_id=object_id+0;
    commit;
    update TEST1.TAB2 set object_id=object_id+1;
    commit;
    exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    exec DBMS_STATS.GATHER_TABLE_STATS('TEST1','TAB1');
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    exec DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS;
    pause Wait a bit - then press return ...
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    exec dbms_stats.gather_schema_stats('TEST1');
    select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
    prompt End ...


    The results

    exec
    DBMS_STATS.
    FLUSH_DATABASE_MONITORING_INFO;
    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  YES  29-FEB 22:37:07       50000
    TAB2  TEST1  YES  29-FEB 22:37:07       50000

    exec
    DBMS_STATS.
    GATHER_TABLE_STATS('TEST1','TAB1');
    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  NO   29-FEB 22:37:12      100000
    TAB2  TEST1  YES  29-FEB 22:37:07       50000

    exec
    DBMS_AUTO_TASK_IMMEDIATE.
    GATHER_OPTIMIZER_STATS;

    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  NO   29-FEB 22:37:12      100000
    TAB2  TEST1  NO   29-FEB 22:37:13      150000

    exec
    dbms_stats.
    gather_schema_stats('TEST1');

    TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
    ----- ------ ---- --------------- -----------
    TAB1  TEST1  NO   29-FEB 22:37:43      100000
    TAB2  TEST1  NO   29-FEB 22:37:43      150000

    The results can be interpreted this way:

    • The sample size of 50k is based on the first activity during the CTAS
    • Once table TAB1 gets analyzed the sample size is now correct - and the time stamp got updated - statistics on TAB2 are still marked STALE of course as the underlying table has changed by more than 10%
    • The Automatic Statistics Gathering job will refresh only stats for objects where stats are missing or marked STALE - in this example here TAB2. Table TAB1's statistics remain unchanged.
    • When the GATHER_SCHEMA_STATS job gets invoked it will refresh all statistics - regardless if they were STALE or not. 

    This is the behavior the customer who raised the question about differences in these two ways to create statistics may have seen. The GATHER_SCHEMA_STATS job took longer and consumed more resources as it will refresh all statistics regardless of the STALE attribute.

    And it's hard to figure out why the refresh of statistics created in a previous release may have led to suboptimal performance, especially as we are talking about a patch set upgrade - and not a full release upgrade. Thanks to Wissem El Khlifi, who tweeted the following annotations I forgot to mention:

    • The Automatic Statistics Gathering job prioritizes objects with NO statistics over objects with STALE statistics
    • The Automatic Statistics Gathering job may get interrupted or skip objects leaving them with NO statistics gathered. You can force this by locking statistics - so the Auto job will skip those completely

    You'll find more information about the Automatic Statistics Gathering job here:

    And another strange finding ...

    When I played with this example in 12c I encountered the strange behavior of the GATHER_OPTIMIZER_STATS call taking exactly 10 minutes until it returned to the command prompt.

    First I thought this is a Multitenant only issue. But I realized quickly: this happens in non-CDB databases in Oracle 12c as well. And when searching the bug database I came across the following unpublished bug:

    • Bug 14840737
      DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS RETURNS INCORRECTLY

    which got logged in Oct 2012 and describes this exact behavior. I kick off the job - it updates the stats pretty soon after - but it still takes 10 minutes to return control to the command prompt. It is supposed to be fixed in a future release of Oracle Database ... 

     

    --Mike 

    Friday Feb 26, 2016

    Collaborate16 - See you soon!!!

    Collaborate Conference 2016

    If you haven't signed up for COLLABORATE16 yet, then please do so :-)

    And I hope to meet you and many other Oracle experts in Las Vegas from April 10-14, 2016.

    If you plan to attend one of our sessions mark them down in your conference scheduler:

    CU soon!

    --Mike

    Thursday Feb 25, 2016

    What happened to the blog post about "12c parameters"?

    Best Practice Hint

    Two weeks ago I published a blog post about Parameter Recommendations for Oracle Database 12.1.0.2. And I took it down a day later. Why that?

    I've got a lot of input from external sources for the "Parameter" blog post. And I'd like to thank everybody who contributed to it, especially Oracle ACE Ludovico Caldara.

    Generally there was a bit of a misunderstanding internally about whether we should "advertise" underscore parameters to cure some misbehavior of the database. In 99% of all cases I'd agree that underscores are not a good solution - especially when it comes to database upgrades, as our slide deck still contains a real-world example of what happens when you keep old underscore parameters in your spfile. They can not only slow down the entire upgrade but also make it very hard for Oracle Support to reproduce issues in case something goes in the wrong direction. 

    But in some situations an underscore seems to be the only remedy when a patch is not available for a particular release - the release you are using at the moment. And even if a patch is available, or if the fix is available in a future PSU or BP, that does not necessarily mean that one can apply it, for several reasons.

    We still have a lot of very productive discussions going on internally between many groups. That is very good as it means that we have plenty of smart people around, especially in Oracle's Database Development :-)

    Furthermore we agreed that the Optimizer PM team will take over the part of my (taken down) blog post targeting wrong query results and other optimizer topics. We are in constant exchange and I'll link it as soon as something gets published.

    --Mike

    Monday Feb 15, 2016

    Upgrade Workshop on March 2, 2016 in Switzerland

    Grüezi alle miteinand!

    There are just a few seats left for the Upgrade / Migrate / Consolidate to Oracle Database 12c workshop on March 2, 2016, in Zürich, Switzerland. If you would like to attend but haven't registered yet, please use this link to sign up:

    Workshop language will be German, slides will be in English.

    Looking forward to meeting you there!

    --Mike 

    Wednesday Feb 03, 2016

    DBUA and Read-Only Tablespaces - Things to Know - I

    Related Blog Posts:


    Some people prefer the manual upgrade on the command line, others prefer the graphical tool Database Upgrade Assistant (DBUA).

    DBUA and Read-Only Tablespaces 

    The DBUA offers you the option of setting your non-Oracle tablespaces read-only during the upgrade.

    DBUA Read Only 1

    What the option doesn't tell you is its purpose - and the use case for switching it on.

    Partial Offline Backup 

    The purpose of setting data tablespaces which don't belong to the Oracle-supplied components to read-only is simply to take an offline backup beforehand and - in case of a failing upgrade - restore quickly. You'll find this in our big slide deck under "Fallback Strategies - Partial Offline Backup". We have used this method in several scenarios:

    • Large telco systems where the time to restore/recover the entire database would have taken hours or days
    • DWHs where the database is large and intentionally operated in NOARCHIVELOG mode
    • Standard Edition databases where Guaranteed Restore Points in combination with FLASHBACK DATABASE are not available

    FLASHBACK DATABASE is my all-time favorite as it is simple, fast and easy to use. You'll set a Guaranteed Restore Point - and in case of failure during the upgrade you'll flash back. Just don't forget to drop the restore point later when you don't need it anymore; otherwise your FRA will run out of space sooner or later. The only real caveat in this case is the fact that you can't change COMPATIBLE.

    When setting data tablespaces read-only, the idea is to take an offline backup of the "heart" of the database, consisting of all files belonging to the SYSTEM, SYSAUX and UNDO tablespaces, plus the redo logs and the controlfiles. The tricky part: you'll have to back up all other repository tablespaces as well. Those can exist, for instance, when the database has seen several upgrades already and started its life maybe in the Oracle 8i or 9i days. So you may also see XDB, DRSYS and ODM. You'll have to leave them read-write as well during the upgrade and back up their files offline beforehand.


    The Customer Case

    The really tricky part is something Marvin hit and commented on the upgrade blog:

    I am upgrading from 11.2.0.3 to 12.1.0.2. During the DBUA setup screens, I checked "Set User Tablespaces to Read Only During the Upgrade". It just seemed like the right thing to do. All of my tablespaces were ONLINE. All tablespace files were autoextendable. During the upgrade I got this error.

    Context component upgrade error
    ORA-01647 tablespace xxxxx is read-only.
    Cannot allocate space in it.

    There was plenty of space. I re-ran without the box checked and it ran ok. Just curious if anyone else has seen this.

    The read-only option in DBUA has an issue - it does not detect all repository tablespaces right now.

    DBUA Upgrade - Read Only Tablespaces

    Marvin and I exchanged some mails and from the DBUA logs I could see what happened:

    [AWT-EventQueue-0] [ 2016-01-28 11:07:03.768 CST ] [ProgressPane$RunNextProgressItem.run:1151]  Progress Item passedCONTEXT

    [AWT-EventQueue-0] [ 2016-01-28 11:07:03.781 CST ] [ProgressPane$RunNextProgressItem.run:1154]  progress to next step CONTEXT

    [Thread-364] [ 2016-01-28 11:07:43.758 CST ] [BasicStep.handleNonIgnorableError:479]  oracle.sysman.assistants.util.InteractiveMessageHandler@5bd44e0b:messageHandler

    [Thread-364] [ 2016-01-28 11:07:43.759 CST ] [BasicStep.handleNonIgnorableError:480]  CONTEXT component upgrade error:

    ORA-01647: tablespace 'WCI_OCS' is read-only, cannot allocate space in it

    :msg

    [Thread-364] [ 2016-01-28 11:15:42.179 CST ] [SummarizableStep$StepSummary.addDetail:783]  Adding detail: CONTEXT component upgrade error:

    ORA-01647: tablespace 'WCI_OCS' is read-only, cannot allocate space in it

    The repository of TEXT (or CONTEXT) is not in SYSAUX, as it would be by default, but in another tablespace. And this tablespace obviously was set read-only, as the DBUA did not recognize it as a repository tablespace but treated it as a regular user data tablespace. Bang!!!

    Simple workaround:
    Run the upgrade without the Read-Only Option. And this worked out fine. 

    You can create the TEXT component by yourself and decide in which tablespace it should be created:

    SQL> connect SYS/password as SYSDBA
    SQL> spool text_install.txt
    SQL> @?/ctx/admin/catctx.sql change_on_install SYSAUX TEMP NOLOCK

    Thanks to my team mates Cindy, Hector and Byron

    Yesterday I forwarded the email to our Upgrade Team and I received three replies within minutes explaining:

    • The query the DBUA uses is not as sophisticated as you would think:
      select tablespace_name from dba_tablespaces
       where contents != 'UNDO'
         and contents != 'TEMPORARY'
         and status = 'ONLINE'
         and tablespace_name != 'SYSTEM'
         and tablespace_name != 'SYSAUX'
         and tablespace_name != (select PROPERTY_VALUE from database_properties
                                 where PROPERTY_NAME = 'DEFAULT_PERMANENT_TABLESPACE')

    • We have proposed an improved query already

    • It should be included in a future release of the database 


    Summary

    The option of having data tablespaces read-only during an upgrade is meant for a fast fallback in case of a failure during the upgrade. But your first option should always be a Guaranteed Restore Point instead. If you still need the read-only solution then please be careful, as you may have repositories in non-standard tablespaces. DBA_USERS' DEFAULT_TABLESPACE column may give you an indication - but you should also check DBA_SEGMENTS. And I personally would use this option only in conjunction with a command-line approach.
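
    A minimal sketch of such a check - the list of component schemas below is not complete, so extend it with whatever DBA_REGISTRY shows in your database:

    select distinct owner, tablespace_name
      from dba_segments
     where owner in ('SYS','SYSTEM','CTXSYS','XDB','MDSYS','ORDSYS','OLAPSYS','WMSYS')
       and tablespace_name not in ('SYSTEM','SYSAUX')
     order by 1, 2;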

    --Mike

    Tuesday Feb 02, 2016

    How to find out if a PSU has been applied? DBMS_QOPATCH

    Since we changed the PSU and BP patch numbering from Oracle Database 12.1.0.2.PSU6 to 12.1.0.2.160119 it is almost impossible to tell from the patch name alone whether you have applied a PSU or a BP.

    But:
    In Oracle Database 12c there's a package available which is very useful to query plenty of information about patches from within the database: DBMS_QOPATCH.

    Here are a few helpful examples which I created by checking our DBaaS Cloud database.

    Which patches have been applied (or rolled back)?

    SQL> set serverout on

    SQL> exec dbms_qopatch.get_sqlpatch_status;

    Patch Id : 20415564
            Action : APPLY
            Action Time : 24-JUN-2015 06:19:23
            Description : Database PSU 12.1.0.2.3, Oracle JavaVM Component (Apr2015)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20415564/18617752/
                      20415564_apply_ORCL_CDBRO
    OT_2015Jun24_06_18_09.log
            Status : SUCCESS

    Patch Id : 20299023
            Action : APPLY
            Action Time : 24-JUN-2015 06:19:23
            Description : Database Patch Set Update : 12.1.0.2.3 (20299023)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20299023/18703022/
                      20299023_apply_ORCL_CDBRO
    OT_2015Jun24_06_18_11.log
            Status : SUCCESS

    Patch Id : 20848415
            Action : APPLY
            Action Time : 24-JUN-2015 06:19:23
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20848415/18918227/
                      20848415_apply_ORCL_CDBRO
    OT_2015Jun24_06_18_15.log
            Status : SUCCESS

    Patch Id : 20848415
            Action : ROLLBACK
            Action Time : 24-JUN-2015 06:52:31
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20848415/18918227/
                      20848415_rollback_ORCL_CD
    BROOT_2015Jun24_06_52_29.log
            Status : SUCCESS

    Patch Id : 20618595
            Action : APPLY
            Action Time : 24-JUN-2015 13:52:13
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20618595/18956621/
                      20618595_apply_ORCL_CDBRO
    OT_2015Jun24_13_52_12.log
            Status : SUCCESS

    Patch Id : 20618595
            Action : ROLLBACK
            Action Time : 24-JUN-2015 14:37:11
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20618595/18956621/
                      20618595_rollback_ORCL_CD
    BROOT_2015Jun24_14_37_10.log
            Status : SUCCESS

    Patch Id : 20415564
            Action : ROLLBACK
            Action Time : 27-JAN-2016 17:43:18
            Description : Database PSU 12.1.0.2.3, Oracle JavaVM Component (Apr2015)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20415564/18617752/
                      20415564_rollback_MIKEDB_
    CDBROOT_2016Jan27_17_42_16.log
            Status : SUCCESS

    Patch Id : 21555660
            Action : APPLY
            Action Time : 27-JAN-2016 17:43:18
            Description : Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21555660/19361790/
                      21555660_apply_MIKEDB_CDB
    ROOT_2016Jan27_17_42_17.log
            Status : SUCCESS

    Patch Id : 21359755
            Action : APPLY
            Action Time : 27-JAN-2016 17:43:18
            Description : Database Patch Set Update : 12.1.0.2.5 (21359755)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21359755/19194568/
                      21359755_apply_MIKEDB_CDB
    ROOT_2016Jan27_17_42_18.log
            Status : SUCCESS

    Patch Id : 21962590
            Action : APPLY
            Action Time : 27-JAN-2016 17:43:18
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21962590/19426224/
                      21962590_apply_MIKEDB_CDB
    ROOT_2016Jan27_17_42_21.log
            Status : SUCCESS

    PL/SQL procedure successfully completed.
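
    The package reads this information from the SQL patch registry, so if you prefer a plain query over the PL/SQL output, here's a minimal sketch:

    select patch_id, action, status, description
      from dba_registry_sqlpatch
     order by action_time;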

    Where's my home and inventory?

    SQL> set pagesize 0

    SQL> set long 1000000 

    SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) "Home and Inventory" from dual;

    Home and Inventory
    -------------------------------------------------------------

    Oracle Home     : /u01/app/oracle/product/12.1.0/dbhome_1
    Inventory       : /u01/app/oraInventory


    Has a specific patch been applied?

    Let's check for the latest PSU. 

    SQL> select xmltransform(dbms_qopatch.is_patch_installed('21359755'), dbms_qopatch.get_opatch_xslt) "Patch installed?" from dual;

    Patch installed?
    -------------------------------------------------------

    Patch Information:
             21359755:   applied on 2015-10-22T21:48:17Z


    What's tracked in my inventory?

    The equivalent of opatch lsinventory -detail ...

    SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual; 

    Oracle Querayable Patch Interface 1.0
    ----------------------------------------------------------------
    Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
    Inventory         : /u01/app/oraInventory
    ----------------------------------------------------------------

    Installed Top-level Products (1):
                                        12.1.0.2.0
    Installed Products ( 135)
                                   ...


    Additional Information and Patches

    If you need more helpful examples you may check this excellent blog post by Simon Pane (Pythian):

    And credits to Martin Berger for sending me this important information:

    Just in case there are multiple DBs running from the same O_H, and someone      
    queries dbms_qopatch.get_opatch_lsinventory automated from all DBs (as in       
    automated monitoring/reporting scripts) I'd recommend Patch 20599273 -          
    otherwise there might be strange XM errors due to race conditions. 


    --Mike 

    Monday Feb 01, 2016

    New PREUPGRD.SQL is available for Upgrades to 12c

    preupgrd.sql

    As of today a new version of our upgrade tool preupgrd.sql (including the package utluppkg.sql) for upgrades to Oracle Database 12.1.0.2 is available for download from MOS:

    Download it and exchange the existing preupgrd.sql and utluppkg.sql in your current Oracle 12.1.0.2 ?/rdbms/admin directory.

    --Mike

    Thursday Jan 28, 2016

    TDE is wonderful - Journey to the Cloud V

    DBaaS Oracle Cloud

     What happened so far on my Journey to the Cloud?

    Today's journey:
    Learn about TDE (Transparent Data Encryption) and other secrets

    What I really really love about my job: Every day I learn something new.

    But sometimes learning can be frustrating at the beginning. And so it was for Roy and me in the past days when we explored the use of TDE (Transparent Data Encryption) in our DBaaS Cloud environments. But many thanks to Brian Spendolini for his continuous 24x7 support :-)

    Never heard of Transparent Data Encryption before? Then please read on here. It's usually part of ASO (Advanced Security Option) but it is included in the cloud offering. 

    But first of all, before taking care of TDE and PDBs, I tried to deploy a new DBaaS VM ...

    PDB names can't contain underscores?

    Well, one learning experience is that initially you can't create a PDB in the DBaaS environment with an underscore in the name. I wanted to name my PDB in my new env simply "TDE_PDB1" (8 characters, all should be fine) - but received this nice message:

    Don't worry if you don't speak German - it basically says that the name can't be longer than 8 characters (ok, I knew that), must begin with a character (mine does, of course) and can only contain characters and numbers (eh?? no underscores???). Hm ...?!?

    Ok, I'll name mine "TDEPDB1".

    Of course outside this page you can create PDBs containing an underscore:

    SQL> create pluggable database PDB_MIKE admin user mike identified by mike
      2  file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb_mike');

    Pluggable database created

    That's what happens when application logic tries to supersede database logic.

    (Almost) undocumented parameter: encrypt_new_tablespaces

    Thanks to Robert Pastijn for telling me about this hidden secret. A new parameter which is not in the regular database deployment but only in the cloud.

    encrypt_new_tablespaces

    First check in MOS:

    Interesting.

    So let's check with Google.
    And here it is: 7 hits, for example:

    Controlling Default Tablespace Encryption

    The ENCRYPT_NEW_TABLESPACES initialization parameter controls default encryption of new tablespaces. In Database as a Service databases, this parameter is set to CLOUD_ONLY.

    • ALWAYS - Any tablespace created will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause.

    • CLOUD_ONLY - Tablespaces created in a Database Cloud Service database will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause. For non-Database Cloud Service databases, tablespaces will only be encrypted if the ENCRYPTION clause is specified. This is the default value.

    • DDL - Tablespaces are not transparently encrypted and are only encrypted if the ENCRYPTION clause is specified.

    What I found really scary is the fact that I couldn't find it in my spfile/pfile. You can alter it with an "alter system" command but you can't remove it.

    The idea behind this is great as tablespaces should be encrypted, especially when they reside in a cloud environment. TDE is a very useful feature. And this mechanism exists regardless of your edition, whether you have Standard Edition or Enterprise Edition in any sort of flavor in the DBaaS Cloud.

    A new tablespace will be encrypted by default:

    SQL> CREATE TABLESPACE TS_MIKE DATAFILE 'ts_mike01.dbf' SIZE 10M;

    Then check:

    SQL> select TABLESPACE_NAME, ENCRYPTED from DBA_TABLESPACES;

    But we'll see later if this adds some constraints to our efforts to migrate a database for testing purposes into the DBaaS cloud environment.

    Is there anything encrypted yet?

    Quick check after setting:

    SQL> alter system set exclude_seed_cdb_view=FALSE scope=both;

    I tried to find out if any of the tablespaces are encrypted.

    SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

    TABLESPACE_NAME                ENC     CON_ID
    ------------------------------ --- ----------
    SYSTEM                         NO           1
    USERS                          NO           1
    SYSAUX                         NO           1
    UNDOTBS1                       NO           1
    TEMP                           NO           1

    SYSTEM                         NO           2
    USERS                          NO           2
    TEMP                           NO           2
    SYSAUX                         NO           2

    EXAMPLE                        NO           3
    USERS                          NO           3
    TEMP                           NO           3
    SYSAUX                         NO           3
    APEX_1701140435539813          NO           3
    SYSTEM                         NO           3

    15 rows selected.

    Looks good.  Nothing encrypted yet.

    How does the new parameter ENCRYPT_NEW_TABLESPACES affect operation?

    Ok, let's try.

    SQL> show parameter ENCRYPT_NEW_TABLESPACES

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ---------------
    encrypt_new_tablespaces              string      CLOUD_ONLY

    And further down the road ...

    SQL> alter session set container=pdb1;

    SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_plays_with_tde.dbf' size 10M;

    Tablespace created.

    SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

    TABLESPACE_NAME                ENC     CON_ID
    ------------------------------ --- ----------
    SYSTEM                         NO           3
    SYSAUX                         NO           3
    TEMP                           NO           3
    USERS                          NO           3
    EXAMPLE                        NO           3
    APEX_1701140435539813          NO           3
    MIKE_PLAYS_WITH_TDE            YES          3

    7 rows selected.

    Ah ... so my new tablespace is encrypted. Not bad ... so far TDE has no influence. I can create objects in this tablespace, query them etc. It is not disturbing at all. Good.

    How does this key thing work in the DBaaS Cloud?

    The documentation in the above White Paper tells us this:

    Managing the Software Keystore and Master Encryption Key

    Tablespace encryption uses a two-tiered, key-based architecture to transparently encrypt (and decrypt) tablespaces. The master encryption key is stored in an external security module (software keystore). This master encryption key is used to encrypt the tablespace encryption key, which in turn is used to encrypt and decrypt data in the tablespace.

    When the Database as a Service instance is created, a local auto-login software keystore is created. The keystore is local to the compute node and is protected by a system-generated password. The auto-login software keystore is automatically opened when accessed.

    You can change (rotate) the master encryption key by using the tde rotate masterkey  subcommand of the dbaascli  utility. When you execute this subcommand you will be prompted for the keystore password. Enter the password specified when the service instance was created.

    Creating a new PDB

    That's easy, isn't it?

    SQL> alter session set container=cdb$root;

    Session altered.

    SQL> create pluggable database pdb2 admin user mike identified by mike;

    Pluggable database created.

    SQL> alter pluggable database pdb2 open;

    Pluggable database altered.

    SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

    TABLESPACE_NAME                ENC     CON_ID
    ------------------------------ --- ----------
    SYSTEM                         NO           4
    SYSAUX                         NO           4
    TEMP                           NO           4
    USERS                          NO           4

    SQL> select file_name from dba_data_files;

    FILE_NAME
    --------------------------------------------------------------------------------
    /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
    ystem_cbn8fo1s_.dbf

    /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
    ysaux_cbn8fo20_.dbf

    /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_u
    sers_cbn8fo27_.dbf

    Ah, bad thing. As I neither used the file_name_convert option nor changed the PDB_FILE_NAME_CONVERT initialization parameter my new PDB files get created in the "root" path of the CDB. I don't want this. But isn't there this cool new feature called ONLINE MOVE OF DATAFILES in Oracle Database 12c? Ok, it's an EE feature but let me try this after checking the current OMF file names in DBA_DATA_FILES and DBA_TEMP_FILES:

    SQL> !mkdir /u02/app/oracle/oradata/MIKEDB/PDB2

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_system_cbn8fo1s_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf';

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_sysaux_cbn8fo20_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf';

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_users_cbn8fo27_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf';

    Be prepared:
    This will create a 1:1 copy of the file in the designated location and synch afterwards. It may take a minute per file.

    And moving the TEMP tablespace(s) file(s) will fail.

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf';

    *
    ERROR at line 1:
    ORA-01516: nonexistent log file, data file, or temporary file
    "/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf"

    The temporary tablespace will have to be dropped and recreated. But not a big deal. 

    Check:

    SQL> select file_name from dba_data_files;

    FILE_NAME
    -----------------------------------------------------
    /u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf
    /u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf
    /u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf

    Let me fix this so I don't hit this pitfall again:

    SQL> alter system set pdb_file_name_convert='/u02/app/oracle/oradata/MIKEDB/pdbseed','/u02/app/oracle/oradata/MIKEDBPDB2';

    Final verification:

    SQL> select name, value from v$system_parameter where con_id=4;

    NAME                   VALUE
    ---------------------- ----------------------------------
    resource_manager_plan
    pdb_file_name_convert  /u02/app/oracle/oradata/MIKEDB/pdbseed, /u02/app/oracle/oradata/MIKEDBPDB2


    Now the fun part starts ... ORA-28374: typed master key not found in wallet

    Remember this command from above in my PDB1? It ran fine. But now it fails in PDB2.

    SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M;

    create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M
    *
    ERROR at line 1:
    ORA-28374: typed master key not found in wallet

    Voodoo in the database? I'm worried - especially as Roy had the same issue days before. But why did the command pass through without issues before in PDB1 - and now it doesn't in PDB2? Is it because the PDB1 is precreated - and my PDB2 is not?

    Kinda strange, isn't it?
    So connecting back to my PDB1 and trying again: 

    SQL> alter session set container=pdb1;

    Session altered.

    SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_still_plays_with_tde.dbf' size 10M;

    Tablespace created.

    Ok, now I'm worried.
    What is the difference between the precreated PDB1 and my new PDB2?

    Why do I get an ORA-28374 in my fresh PDB2?

    When we compare the wallet status in both PDBs we'll recognize the difference:

    PDB1:

    SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

    WRL_TYPE        WALLET_TYPE          STATUS
    --------------- -------------------- -----------------------
    FILE            AUTOLOGIN            OPEN

    PDB2:

    SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

    WRL_TYPE        WALLET_TYPE          STATUS
    --------------- -------------------- -----------------------
    FILE            AUTOLOGIN            OPEN_NO_MASTER_KEY

    Now thanks to Brian Spendolini I have a working solution. But I'm not convinced that this is an obvious path ...
Remember? I would just like to create a tablespace in my new (own, non-precreated) PDB. That's all ...

    SQL> alter session set container=cdb$root;

    SQL> administer key management set keystore close;

    keystore altered.

    SQL> administer key management set keystore open identified by <your-sysadmin-pw> container=all;

    keystore altered.

    SQL> alter session set container=pdb2;

    Session altered.

    SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

    WRL_PARAMETER                             STATUS             WALLET_TYPE
    ----------------------------------------- ------------------ -----------
    /u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN_NO_MASTER_KEY PASSWORD

SQL> ADMINISTER KEY MANAGEMENT SET KEY USING TAG "tde_dbaas" identified by <your-sysadmin-pw> WITH BACKUP USING "tde_dbaas_bkup";

    keystore altered.

    SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

    WRL_PARAMETER                             STATUS   WALLET_TYPE
    ----------------------------------------- -------- -----------
    /u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN     PASSWORD

    And finally ... 

    SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_still_plays_with_tde.dbf' size 10M;

    Tablespace created.

    Wow!!!

That was not straightforward at all. Maybe it's all down to my almost non-existent knowledge of TDE.

Ah ... and let me say that I find the missing uppercase letter in all the "keystore altered." echo messages quite disturbing. But that's a generic and non-critical issue of course ...

    --Mike
    .

    PS: Read on about Seth Miller's experience here on Seth's blog:
    http://sethmiller.org/oracle-2/oracle-public-cloud-ora-28374-typed-master-key-not-found-in-wallet/ 




    Thursday Jan 21, 2016

    SuSE SLES 12 certified with Oracle Database 12.1.0.2

Puh ... I've got many emails over several months asking about the current certification status of SuSE SLES12 for Oracle Database 12.1.0.2. It took a while - and I believe it was not in our hands. But anyhow ... finally ...

    See Release Notes for additional package requirements

    Minimum kernel version: 3.12.49-11-default
Minimum PATCHLEVEL: 1

    Additional Notes

• Edit the CV_ASSUME_DISTID parameter to SUSE11 in database/stage/cvu/cv/admin/cvu_config & grid/stage/cvu/cv/admin/cvu_config (a scripted sketch follows after these notes)
    • Apply Patch 20737462 to address CVU issues relating to lack of reference data
    • Install libcap1 (libcap2 libraries are installed by default); i.e. libcap1-1.10-59.61.x86_64 & libcap1-32bit-1.10-59.61.x86_64
    • ksh is replaced by mksh; e.g. mksh-50-2.13.x86_64
    • libaio has been renamed to libaio1 (i.e. libaio1-0.3.109-17.15.x86_64); ensure that libaio1 is installed


Note: OUI may be invoked with -ignoreSysPrereqs to temporarily work around ongoing CVU check failures
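One possible way to script the cvu_config edit and the package installation - a sketch only, where the media path is a placeholder and the exact package versions on your system may differ:

    cd /path/to/unzipped/media
    for f in database/stage/cvu/cv/admin/cvu_config grid/stage/cvu/cv/admin/cvu_config
    do
      # set CV_ASSUME_DISTID=SUSE11, adding the line if it is not there yet
      grep -q '^CV_ASSUME_DISTID=' $f \
        && sed -i 's/^CV_ASSUME_DISTID=.*/CV_ASSUME_DISTID=SUSE11/' $f \
        || echo 'CV_ASSUME_DISTID=SUSE11' >> $f
    done

    # install the 64-bit and 32-bit libcap1 compatibility packages
    zypper install libcap1 libcap1-32bit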

I had SuSE Linux running on my previous laptop as dual-boot for quite a while. And I still like SuSE way more than any other Linux distribution, potentially because it was the Linux on which I started developing some basic Unix skills. I picked up my first Linux at the S.u.S.E. "headquarters" near Fürth Hauptbahnhof in 1994. I used to live just a few kilometers away, and the version 0.9 a friend had given to me on a bunch of 3.5'' floppies had a disk failure. I believe the entire package cost DM 19.90 back then - roughly 10 Euro today if you don't consider inflation - and was distributed on floppy disks. The reason for me to buy it was simply that I had no clue about Linux - but SuSE shipped a book with the distribution.

This is a distribution I purchased later on as well - they had good discounts for students back then.


    Picture source: Wikipedia - https://en.wikipedia.org/wiki/SUSE_Linux_distributions

    --Mike

    PS: Updated with more recent information on 15-02-2016 

    Wednesday Jan 20, 2016

    Oracle January 2016 CPU PSU BP available now - BE AWARE OF CHANGES IN PATCH NUMBERING

Last night the PSUs and BPs for January 2016 were made available for download on support.oracle.com.

    Oracle Critical Patch Update Advisory - January 2016

    http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html 

It contains 248 security fixes across all products and platforms. And of course important non-security fixes as well - which is why we recommend applying the PSUs (or the BPs in case you are on Exadata or are an Oracle In-Memory user) as soon as possible.

    Change in Patch Numbering

Please be aware that as of November 2015 a change in patch numbering has been introduced which most of you may not be aware of. A database PSU used to be named 12.1.0.2.5 (or I used to call it 12.1.0.2.PSU5 to make clear that a PSU and not a BP had been applied). The new notation changes the 5th digit to a 6-digit number encoding the date. See MOS Note:2061926.1 for details.

    Example:

    • Before: Oracle Database 12c PSU October 2015 ... 12.1.0.2.5
    • Now: Oracle Database 12c PSU January 2016 ... 12.1.0.2.160119 
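A quick way to see which notation your own database reports is to query the SQL patch registry - a minimal example:

    SQL> select patch_id, version, description from dba_registry_sqlpatch;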


    --Mike

    Tuesday Jan 19, 2016

    Clean up APEX - Journey to the Cloud IV

    DBaaS Oracle Cloud

    What happened so far on my Journey to the Cloud?

Today's journey: Clean up APEX removal leftovers

    When you read my "Patch" blog post from Dec 22, 2015 you'll see that I was left with an incomplete Oracle Application Express (APEX) removal situation. Something looked really strange and didn't work as expected and documented. 

But thanks to my APEX colleagues Jason Straub and Joel Kallman there's a way out - the solution below was provided by Jason. And let me say upfront that it's not APEX's fault ;-) Somebody mixed up stuff in the current DBaaS deployment, and thus the correct scripts to remove things in the right order are simply missing.

    How to remove Oracle APEX from the DBaaS Cloud's CDB$ROOT

    1. Download APEX 4.2.0.6 and PDBSS 2.0 (Multitenant Self Service Provisioning Application):
      http://www.oracle.com/technetwork/developer-tools/apex/application-express/apex-archive-42-1885734.html
      http://www.oracle.com/technetwork/database/multitenant/downloads/multitenant-pdbss-2016324.html

  Actually, if you have already removed PDBSS as I did in my "Patch" blog post from Dec 22, 2015, you don't have to download, unzip and execute the PDBSS removal script again.


    2. Copy the zip files to the Cloud environment

      scp -i ./yourCloudKey /media/sf_CTEMP/Download/apex_4.2.6_en.zip oracle@<your_cloud_IP>:/home/oracle

      Enter passphrase for key './yourCloudKey':

      apex_4.2.6_en.zip              100%   82MB 116.9KB/s   11:58

      Repeat the same with PDBSS in case you didn't remove it already.


    3. Connect to your Cloud environment and unzip both files.

  ssh -i ./yourCloudKey oracle@<your_cloud_IP>

      cd
      unzip apex_4.2.6_en.zip


      Repeat the same with PDBSS in case you didn't remove it already.


    4. Remove PDBSS by using your unzipped archive

      cd /home/oracle/pdbss
      sqlplus / as sysdba
      SQL> @pdbss_remove.sql


    5. Remove APEX 5 by using the existing default installation
      (this is the part I did already in my previous blog post from Dec 22)

      cd $ORACLE_HOME/apex
      sqlplus / as sysdba
      SQL> @apxremov.sql 



    6. Remove APEX 4.2 parts by using your unzipped archive
      (this is the part I didn't do yet)

      cd /home/oracle/apex
      sqlplus / as sysdba
      SQL> @apxremov_con.sql


    7. Drop the DBaaS Monitor common users

  As the DBaaS Monitor is based on APEX as well, removing APEX will "kill" the DBaaS Monitor too. So in order to have a properly working environment you must drop the common users for the DBaaS Monitor as well. Otherwise you'll receive ORA-959 sync errors in PDB_PLUG_IN_VIOLATIONS whenever you deploy a new PDB and - even worse - the PDB won't open unrestricted (see the check query below).

      SQL> drop user C##DBAAS_BACKUP cascade;
      SQL> drop user C##DBAAS_MONITOR cascade;
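  A quick sanity check afterwards - a minimal query sketch:

  SQL> select name, cause, message from pdb_plug_in_violations where status <> 'RESOLVED';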


    8. Finally recompilation and check for invalid objects

      sqlplus / as sysdba
      SQL> @?/rdbms/admin/utlrp.sql

      SQL> select object_name, owner from dba_objects where status='INVALID';
      no rows selected


    9. Now you are free to install APEX 5.0 in the PDBs of your choice.

  As there's a PDB1 already, I will install APEX inside this PDB first.

      cd $ORACLE_HOME/apex
      sqlplus / as sysdba
      SQL> alter session set container=pdb1;
  SQL> create tablespace APEX5 datafile '/u02/app/oracle/oradata/CDB1/PDB1/apex5_01.dbf' size 100M autoextend ON next 1M;


  Be aware: PDB_FILE_NAME_CONVERT is not set, and your database is not in ASM. Therefore omitting the data file path and name will leave you with an OMF-style file name such as:
  /u02/app/oracle/oradata/CDB1/26A65D56D16F21A1E05336036A0A1AD8/datafile/o1_mf_apex5_c9wg9kly_.dbf

      Create APEX 5.0 in PDB1, change the admin password, create the APEX listener and REST service and unlock the public user:

      SQL> @apexins.sql APEX5 APEX5 TEMP /i/
      SQL> @apxchpwd.sql


  Be aware of the password requirements:

      --------------------------------------------------------------------------------
      Password does not conform to this site's password complexity rules.
      * Password must contain at least 6 characters.
      * Password must contain at least one numeric character (0123456789).
      * Password must contain at least one punctuation character
      (!"#$%&()``*+,-/:;?_).
      * Password must contain at least one upper-case alphabetic character.
      * Password must not contain username.
      --------------------------------------------------------------------------------


      SQL> @apex_rest_config.sql
      SQL> alter user APEX_PUBLIC_USER identified by SecretPWD account unlock;

Let me add that it is possible to have different APEX versions in different PDBs. The only thing you'll really need to take care of is the images directory and the different APEX homes.

    That's it ... and in a future cloud deployment the extra step to remove APEX 4.2 shouldn't be necessary anymore. 

    --Mike

    Thursday Jan 14, 2016

    VBox 5.0.10/12 issues with PERL and Seg Faults - UPDATE

A bit more than two months ago I heard from several people having issues with our Hands-On Lab environment. And it became clear that only those who use Oracle VirtualBox 5 see such errors.

Then yesterday I read Danny Bryant's blog post (thanks to Deiby Gomez for pointing me to it) about similar issues and a potential solution.

And interestingly one of my colleagues, our PL/SQL product manager Bryn Llewellyn, started an email thread and a test internally yesterday as well. The issue seems to occur only on newer versions of Apple's MacBooks.
    .

    Potential Root Cause

The PERL issues seem to happen only on specific new Intel CPUs with a so-called 4th-level cache.

    The current assumption is that Intel CPUs with Iris Pro graphics are affected. Iris Pro means eDRAM (embedded DRAM) which is reported as 4th level cache in CPUID. We have confirmed that Crystal Well and Broadwell CPUs with Iris Pro are affected. It is likely that the Xeon E3-1200 v4 family is also affected.

It seems there's a bug in the perl binary: it links against ancient code from the Intel compiler suite that performs optimizations according to the CPU features. Very recent Intel CPUs have 4 cache descriptors.

People who encountered this used VirtualBox 5.0.x - which passes this information to the guest. This leads to a problem within the perl code. You won't see it on VBox 4.3 as that version does not pass the information to the guest.

But actually it seems that this issue is independent of VirtualBox or any other virtualization software. It simply surfaces here because many people use VBox on Macs - and some Macs are equipped with this new CPU model. People run Oracle in VBox environments and therefore see the issue as soon as they upgrade to VBox 5.0.x.
    .

    Potential Solutions

    If you are using Oracle in VBox there are actually two solutions:

• Revert to VBox 4.3 as this won't get you in trouble.
  This problem was not triggered on VBox 4.3.x because that version did not pass the full CPUID cache line information to the guest.
      .
• Run this sequence of commands in VBox 5.0 to tweak the CPUID bits passed to the guest (a loop sketch for several VMs follows after this list):
      .
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/Leaf" "0x4"
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/SubLeaf" "0x4"
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/eax"  "0"
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/ebx" "0" 
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/ecx" "0" 
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/edx"  "0"
      VBoxManage setextradata VM_NAME "VBoxInternal/CPUM/HostCPUID/Cache/SubLeafMask" "0xffffffff" 

      • Of course you'll need to replace VM_NAME by the name of your VM
        .
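If you have several VMs to fix, a small shell loop saves typing - a sketch only, replace the hypothetical VM names with your own:

    for vm in "VM_NAME_1" "VM_NAME_2"
    do
      VBoxManage setextradata "$vm" "VBoxInternal/CPUM/HostCPUID/Cache/Leaf" "0x4"
      VBoxManage setextradata "$vm" "VBoxInternal/CPUM/HostCPUID/Cache/SubLeaf" "0x4"
      # zero out all four CPUID cache registers
      for reg in eax ebx ecx edx
      do
        VBoxManage setextradata "$vm" "VBoxInternal/CPUM/HostCPUID/Cache/$reg" "0"
      done
      VBoxManage setextradata "$vm" "VBoxInternal/CPUM/HostCPUID/Cache/SubLeafMask" "0xffffffff"
    done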

If the error happens on a bare metal machine - meaning not inside a virtual image but in a native environment - then the only chance you'll have (to my knowledge right now) is to exchange the perl binary before doing anything substantial, such as running root.sh or rootupgrade.sh in your Grid Infrastructure installation, or before using the DBCA or the catctl.pl tool to create or upgrade a database.

    In this case please refer to the blog post of Laurent Leturgez:

    Issues with Oracle PERL causing segmentation faults:

    http://laurent-leturgez.com/2015/05/26/oracle-12c-vmware-fusion-and-the-perl-binarys-segmentation-fault

    .

    Further Information

This issue is currently tracked internally as bug 22539814: ERRORS INSTALLING GRID INFRASTRUCTURE 12.1.0.2 ON INTEL CPUS WITH 4 CACHE LEVEL.

So far we have not seen reports from people encountering this in a native environment, only from people using VBox 5.0.x, Parallels or VMware on very recent Apple hardware.

    .
    --Mike 



    Wednesday Jan 13, 2016

    Have an Excellent Start into 2016!!!

    Roy and I and the entire Database Upgrade Team would like to wish you a very good start into 2016!!!

And thanks for your confidence in our work in the past year(s). The Upgrade Your Database - NOW! blog had over 440,000 hits in 2015, which is quite a surprise as neither Roy nor I update the blog full time. But the many emails and comments we consistently get demonstrate your interest.

    Upgrade Your Database - NOW! Statistics 2015

    We will continue to report about things related to upgrades and migrations and performance and testing and the cloud and ... and ... and regarding Oracle Database 12c in 2016 - I promise :-)

    Thanks for your support and all the helpful emails and comments.
    We learn a lot from you folks out there!!!

    Have a great start into 2016! 

    --Mike 

    Monday Jan 11, 2016

    New Webcast "Why Upgrade to Oracle 12c" available

In December 2015 I recorded a webcast about "Why Upgrade to Oracle Database 12c", refreshing the content from a few months back.

    Webcast "Why Upgrade to Oracle Database 12c"

    Oracle's latest generation of database technology, Oracle Database 12.1.0.2, has some impressive new features. It offers great potential for a fast upgrade, simple migrations and consolidation, making more efficient use of hardware and delivering major improvements in management efficiency.

    Join our webcast on Upgrading to Oracle Database 12c and you will learn:

    • Why you still need to upgrade to Oracle Database 12.1.0.2
    • How to ensure that your applications are ready for Oracle Database 12c
    • How to evaluate and test all enhancements of the upgrade process plus other new features
    • Best practices to upgrade and migrate successfully 

If you would like to watch it, please access the webcast via this link:

    And please excuse the suboptimal audio quality - there were some technical difficulties to work around.

    --Mike 

    Thursday Jan 07, 2016

    Oracle Database certification on Microsoft Windows 10

The MOS Note: 1307195.1 about

    Certification Information for Oracle Database on Microsoft Windows x64 (64-bit)

    got updated right before the 2015 Christmas holidays with the official certification information for Oracle Database 12c on Microsoft Windows 10.

    And it says:

    Windows 10 O/S Information:
    RAC is not certified.
    The earliest release certified on Windows 10 is 12.1.0.2.

    --Mike 



    Friday Dec 25, 2015

    Best Albums in 2015

    Yes, from time to time I write something fully off-topic.

    If you look for tech stuff please scroll down just a bit ;-)

I love listening to music. Every day. Especially when I travel, my two friends (Pono Player and B&O H6 headphones) are always with me. And at home I have a decent stereo system in my man cave. But this blog post is not about gear - it's about music released in 2015.

When I look at "Best Record of the Year" lists based on popularity or sales I always get a bit scared. So please take the following list as purely based on my own views and likes. And let me say that I bought all these records or CDs as real media.
    .

    2015 - A Vinyl Year 

For me 2015 was a true vinyl year. I purchased more than 75 LPs, some new, some from used record stores. I never sold my record collection as some of my friends did - I just kept it in good shape. And since this year I also own a record washing machine, which allows me to freshen up especially the used copies quite easily. I have a fairly good British turntable with an almost 10-year-old Dutch cartridge mounted to it. One thing you'll recognize immediately when listening to music on vinyl: you don't skip songs. No playlists. And you'll discover hidden gems on records you haven't listened to for many, many years.

But be aware when rushing down the road to your next record store: a lot of the oh-so-popular reissues settle for digital transfer copies, sometimes not even from the original master tapes but from a second-sourced copy. Positive examples are the recent editions of Simon and Garfunkel and the huge Queen box. But others, such as the Roxy Music collection or - the worst of the worst, shame on you, Klaus Meine and Rudolf Schenker - the 50th Anniversary Scorpions box and re-editions, are a letdown. Terrible sound. No dynamics at all. Luckily I have my copies of "Love at First Sting", "Virgin Killer", "Blackout" and "World Wide Live" in good shape from the past.

Vinyl has another great aspect besides often sounding a bit better and warmer than CD:
    I can read the lyrics without looking for a magnifying glass ;-)

If you miss Adele: well, she has a great voice, no doubt. But I can't really listen to deadly compressed music anymore. It makes me feel sick as all dynamics, all transparency gets completely lost. And music lives on things such as dynamics ... if you don't believe me, look up Adele's "25" CD edition in the Loudness War DR Database. Then you'll see what I mean - the CD version will only sound good in your car or on an iPhone, but on a real stereo system it sounds like a dead fish ... If you still don't believe it, come by and bring your "21" CD and we'll compare it to my "21" vinyl edition (which is not a good pressing in terms of quality either).

    Enough said. 
    .

    Here's my top 5 list of records in 2015

    5. David Crosby - Croz (Vinyl)

Good ol' David Crosby still has a marvelous voice. No idea how he kept it when you read what he has to say about his crazy years in the 60s and 70s. It's a miracle that he survived them. And his 2015 record titled CROZ is a very good collection of laid-back folk and rock songs in the tradition of CSNY. The vinyl edition has an extra goody on side 4: an audiophile pressing at 45rpm of "What's Broken" (with Mark Knopfler on guitar) and "The Clearing". Plus it contains the digital download codes for WAV files. I wish other record companies would follow this outstanding example and deliver WAV or FLAC/ALAC files and not MP3 with their records.

    4. Kamasi Washington - The Epic (CD)

So many things have been written about this jazz record, most likely because it's hip and cool and very unusual. People who are usually not into jazz can enjoy this 3-hour mammoth piece on 3 CDs. I'd say the critics overrated it a bit, as if it were the only good jazz record in 2015 (which is not true). But actually all the other jazz records I bought in 2015 were from earlier years, such as Marcin Wasilewski Trio's "Spark of Life" from 2014 or Stefano Bollani's "Stone in the Water" from 2008.

Nevertheless, Kamasi Washington's "Epic" is worth a listen. It's fun. It's fresh and old-fashioned and funky and full of soul. It's available on vinyl as well, but when I purchased it the vinyl was not yet available. Rumor has it the pressing is quite good. You may read on in my favorite US audio magazine, Stereophile.

    3. Anathema - A Sort of Homecoming (Vinyl) 

My favorite rock band - from Liverpool, England. They started their career 25 years ago as a death/doom metal band - and when you listen today you won't realize it. A Sort Of Homecoming was recorded in 2015 during their anniversary show in Liverpool Cathedral. I saw Anathema live some years ago in Munich, and I would say it was one of the 3 best concerts I've ever been to. And I have been to many concerts so far. I have almost all of their CDs. For me Anathema is one of the best rock bands on the planet. If you ever have the chance to visit one of their shows, spend the €30-40 - you'll be amazed.

I bought the concert on vinyl on 3 records. But for those who prefer visual impressions and have a decent surround system, the 5.1 mix of the BluRay is supposed to be excellent. They play the entire show acoustically with many of their die-hard fans in the audience. I wish I had had a chance to be there as well.

    2. Bruce Springsteen - Album Collection (Vinyl) 

Bob Ludwig remastered the first 7 Springsteen albums. And I had to get this set as many of Springsteen's LPs never got a decent remastering treatment. "Nebraska" did not sound any good - given the circumstances under which The Boss recorded it, this was expected. But the remastering makes it sound way better than any version I had heard before.

    Before I write too many things you may read what Michael Fremer from analogplanet.com wrote about this awesome edition:

    Bruce Springsteen: The Album Collection Vol. 1 1973-1984 Reviewed 

    Springsteen Complete Album Collection

    The only real caveats are:

    • The download code allows you to download only 320k MP3s - who needs that? 
    • A bit more space in the box would have been better.
    • Two of my copies were warped.
    • The included "book" is more or less unreadable as pictures are generally overlapping the text.

    But the overall sound quality is very good compared to the original editions.

    .

    1. Steven Wilson - Hand. Cannot. Erase. (CD, DVD-A)

    This is THE album of 2015.

    Not only for myself. But also for many magazines.

You may never have heard of Steven Wilson as he's usually not present on your regular radio station. But for almost everybody interested in prog rock he's called the "God of Prog" for a reason. He is/was the mastermind of Porcupine Tree, he runs many side projects such as No-Man and Blackfield - and he recently did awesome remasters of some of Jethro Tull's, Yes' and Gentle Giant's classic records. No idea when he finds time to sleep, because he seems to be on tour almost half of the year.

I've seen him live several times, with Porcupine Tree as well as with his recent line-ups featuring guitar hero Guthrie Govan and drum legend Marco Minnemann. And I'll see him again on January 13, 2016 in Munich. He's fantastic. His concerts are a celebration of rock played at perfection level, but with a lot of energy and full of heart and passion.

    Just as an update:
The show happened last night in Munich at the Philharmonie classic concert hall - a quite unusual and suboptimal setting for a prog rock show. But it was truly awesome. Fantastic band (with Dave Kilminster from Roger Waters' Wall tour band replacing Guthrie Govan, and Craig Blundell replacing Marco Minnemann). Wonderful 2.5 hours of fantastic prog music ...

Hand.Cannot.Erase. is inspired by a true story. In 2003 young London-based Joyce Vincent died - but she was found only more than two years later. Nobody had noticed her disappearance. And this concept album is not only about this woman but also about isolation and a society that doesn't take notice of anybody else. My favorite songs are Happy Returns and Perfect Life. But the entire album is full of great songs. It's an awesome piece of music.

It's actually the album I have listened to most often while on the road.

    If you'd like to read more:

    Happy New Year :-)

     --Mike
    .

    Tuesday Dec 22, 2015

    Patch, patch, patch - Journey to the Cloud - III

    DBaaS Oracle Cloud

    What happened so far on my Journey to the Cloud?

I haven't stopped my journey, in case you wondered about two weeks of inactivity. I just had to learn a bit as my pretty naive approach caused me some trouble. Based on my last experience I had to find out why APEX left so many leftovers when I removed it the friendly way. And I wanted to patch my environment to a more recent PSU as - learning experience - a new deployment does not come with the most recent PSU applied from scratch. So there's still plenty of stuff to do.
    .

    Patching my DBaaS database with the most recent PSU

If you've ever gotten frustrated while applying a PSU yourself, please read on - I guarantee you'll smile more than once ...

    First of all what is my expectation from a cloud environment?
    Yes, Push A Button.

    Well ... with the July 2015 PSU (12.1.0.2.4) this worked just fine even though it took a bit of time to download the PSU. But it got applied flawlessly by just pushing a button.

    PSU Choice

But it's December 2015 already. So I would like to apply the Oct 2015 PSU (12.1.0.2.5) to my environment. And it turns out that currently this is the only one offered in the DBaaS environment.

    First step: Execute the precheck

    See the results ...

Oh, it failed. But why? It does not tell me anything about why it failed. Was there no space left? Or a patch conflict? Or something else? No idea, as Error [1] isn't very useful.

And what's next? Live without the PSU even though I've been preaching for ages in our workshops that you MUST apply PSUs on a regular basis? Ok, so I have to sort this out.
.

Let's find out

I logged in to my environment via SSH. Then:

    cd /u01

    Ah, and there's a psu.n subdirectory - that looks promising. And there's a file conflictlog in it - that does look even more promising. 

    It tells me:

    Invoking prereq "checkconflictagainstohwithdetail"

    ZOP-47: The patch(es) has supersets with other patches installed in the Oracle Home (or) among themselves.

    ZOP-40: The patch(es) has conflicts with other patches installed in the Oracle Home (or) among themselves.

    Prereq "checkConflictAgainstOHWithDetail" failed.

    Summary of Conflict Analysis:
    There are no patches that can be applied now.

    Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
    21359755, 21627366

    Following patches will be rolled back from Oracle Home on application of the patches in the given list :
    20281121

    Whole composite patch Conflicts/Supersets are:
    Composite Patch : 21359755

            Conflict with 21627366
            Bug Superset of 20281121

    Detail Conflicts/Supersets for each patch are:

    Sub-Patch : 21359755
            Conflict with 21627366
            Conflict details:
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kzan.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kspt.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kcb.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kcrfw.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kokt.o

            Bug Superset of 20281121
            Super set bugs are:
            20281121

    Patch failed with error code 1

Ah, my friend Error [1] again. Now all is clear, isn't it? There's a conflict, as the installation deployed in the cloud seems to have gotten some extra treatments - which is good in one way but bad in another, as I now have to solve this. The Cloud Console does not offer me anything to solve it.
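By the way, if you want to check for yourself which one-off patches are already in the Oracle Home and whether a patch would conflict, the standard opatch commands help (run the prereq check from the unzipped patch directory):

    $ORACLE_HOME/OPatch/opatch lsinventory
    $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./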
    .

    Luckily ...

I'm subscribed to our internal cloud mailing lists. And other people are way smarter than I am, so I found an email linking to an explanation in the official documentation (Known Issues for Database Cloud As A Service). There are quite a few known issues, and it's very useful to have such a document. And here we go with the solution to my problem:

Ok, I have two options: one in the graphical interface, the other on the command line. I'll go with the first option as this is meant to be Push A Button style, not typing on the command line.

    So first I click on PATCH in the hamburger menu: 

    And then I chose the FORCE option.
    May the force be with me.

    The alternative would have been on the command line using the dbpatchm subcommand of the dbaascli utility:

    Before applying the patch, set the value of the ignore_patch_conflict key to 1 in the /var/opt/oracle/patch/dbpatchm.cfg patching configuration file; for example:

    ignore_patch_conflict=1
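A one-liner for that edit could look like this - a sketch only, assuming the key is already present in the file (otherwise just append it):

    sed -i 's/^ignore_patch_conflict=.*/ignore_patch_conflict=1/' /var/opt/oracle/patch/dbpatchm.cfg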
    .

    Et voilà ...

It took a bit ...

    Actually a bit over 30 minutes ... but finally ...

    The most recent PSU 12.1.0.2.5 from October 2015 has been applied to my DBaaS Cloud installation. It wasn't that complicated - if I had known upfront to hit the magic "FORCE" button ;-)

    Finally let's check:

COLUMN PATCH_ID FORMAT 99999999
COLUMN PATCH_UID FORMAT 99999999
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A12
COLUMN DESCRIPTION FORMAT A30

select PATCH_ID, PATCH_UID, VERSION, STATUS, DESCRIPTION from DBA_REGISTRY_SQLPATCH order by BUNDLE_SERIES;


    .

    Remove APEX from my database and install it into my PDB

In my previous journey, removing APEX from my cloud database didn't work out well. I was left with leftovers afterwards, mainly from the Multitenant Self Service Provisioning APEX application (owner: C##PDBSS) and from the Database-As-A-Service Monitor.

Removing the Multitenant Self Service Provisioning Application

    From some internal email conversation (and from the readme.pdf included in the download of the Multitenant Self Service Provisioning Application) I learned that there's a pdbss_remove.sql. And a find showed me the unusual location in the cloud deployment:

    /var/opt/oracle/log/pdbss/pdbss
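In case you need to hunt for it yourself, a plain find does the job (expect it to take a moment):

    find / -name pdbss_remove.sql 2>/dev/null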

    So first of all I connected to my CDB$ROOT and started the removal script:

    sqlplus / as sysdba
    spool /tmp/pdbss_remove.log
    @/var/opt/oracle/log/pdbss/pdbss/pdbss_remove.sql

It took 1:45 minutes to complete. Spool off is not necessary as the script exits SQL*Plus at the end.

    Then I started the APEX removal script from the $ORACLE_HOME/apex subdirectory:

    cd $ORACLE_HOME/apex
    sqlplus / as sysdba

    @apxremov_con.sql

Well ... but again something seemed to fail, as I ended up with a good bunch of invalid objects.

First I recompiled:

    @?/rdbms/admin/utlrp.sql 

But even then these 3 objects remained INVALID after the removal:

    SQL> select owner, object_name from dba_objects where status='INVALID';

    OWNER         OBJECT_NAME
    ------------- --------------------------
    SYS           WWV_DBMS_SQL
    FLOWS_FILES   WWV_BIU_FLOW_FILE_OBJECTS
    APEX_040200   APEX$_WS_ROWS_T1

And furthermore, a good number of APEX user schemas did not get removed either (the query sketch below shows how to list them):

    CON_ID USERNAME
    ------ --------------------------
         1 APEX_REST_PUBLIC_USER
         1 APEX_PUBLIC_USER
         1 APEX_LISTENER
         1 APEX_040200
         3 APEX_REST_PUBLIC_USER
         3 APEX_PUBLIC_USER
         3 APEX_LISTENER
         3 APEX_040200

    8 rows selected.
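For reference, the list above comes from a query along these lines, run from CDB$ROOT:

    SQL> select con_id, username from cdb_users where username like 'APEX%' order by con_id, username;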

I will update this blog post as soon as I have more news about how to remove APEX flawlessly from the cloud deployment. The issue is under investigation.

So for now, better not remove APEX from the DBaaS cloud deployment.

    --Mike


    Friday Dec 18, 2015

    Tech Tip: Get updates about new/important MOS Notes

    There's nothing more annoying than unwanted and useless email newsletters in your inbox.

    No doubt.

    But sometimes I wish to have an alert about important MOS Note changes, recent additions to my favorite notes, updates to the bugs I monitor etc. I would like to know about helpful new MOS articles - when they get published ;-)

And this is what you need to do (and I'll have to apologize for the small pictures, but our fancy, modern blog design template does not allow including larger pictures ...).
    .

    Log into MOS and change your SETTINGS

    Once you've logged in you'll click on MORE and then navigate to SETTINGS.

    Change SETTINGS in MOS
    .

    Choose HOT TOPICS EMAIL in the left navigation menu

    HOT TOPICS EMAIL
    .

    Update the settings

Here it is important that you choose "Send with Selected Options" first and then "HTML", as otherwise the email does not seem to get sent (the "Text" option does not seem to work).

    HTML Option
    .

    Add Products you'd like to get updates about 

    Add Product Selection

    Of course you may add more products - I chose only the "Database" in my example. 
    .

    Change the number of items you'd like to see 

    No of Items

    *** AND DON'T FORGET TO HIT THE "APPLY" BUTTON FURTHER DOWN ON THE SCREEN ***
    .

     Check your current Favorites

Then it's important to check your current "Favorites". Click on the yellow/orange STAR in the upper menu close to the search field and select "Documents". If "Documents" does not get displayed there's no need to clean up, as you haven't marked anything as a favorite yet. But in my example below you see that I have a lot of very old docs in my list.

    Check Favorites

    So I will clean them up first.
    .

    Clean Up the favorite documents list

    Click on "Manage Favorites" via the yellow/orange STAR and mark all unwanted documents by using either the usual SHIFT or STRG keys. Hit "Remove".

    In my case the list is empty now.  
    .

    Mark documents as FAVORITES 

The important work now is to mark certain documents or bugs as FAVORITES. In the example below I choose the most important MOS Note 161818.1 via the plain ID search (right side of the screenshot), and once the note gets displayed I push the white STAR in the upper left corner above the document - the STAR will turn yellow/orange.

    Empty fav list

Once I've done that I will find this note in my list of favorite documents:

Repeat this with the notes whose changes you'd like to be informed about. For instance, I always monitor the "Known Issues and Alerts" notes for Oracle Database 12.1.0.2 and 11.2.0.4:

    Issues and Alerts
    .

    Receive the email update

Once you've done that you'll get one email per day with the recent changes - and it will contain information about the updates to your favorite notes.


    .

    Summary

It's hard to keep track of recent changes and updates to important MOS Notes. And it's even harder to learn about newly published articles in your areas of interest. The Hot Topics email feature in MOS is pretty cool and extremely helpful. I like it a lot.

    But you'll have to enable it first in your settings as MOS does not want to spam you. 

    Credits to Roy who dug out all the tiny pieces such as the HTML switch ;-)

    --Mike
