Tuesday May 17, 2016

Recap - OTN Tour EMEA - Baku, Azerbaijan - 2016

Working on a Saturday is not common for me, but it happens from time to time, mostly when I travel abroad or when a customer has a critical upgrade or migration on a weekend.

Having 120 skilled and enthusiastic people at a conference on a Saturday is VERY UNCOMMON. And having them stay from 9am to 6pm is a sign that the selection of topics fit the interests of the audience - and that the presenters held their attention the entire day.

Please find the slides and the Hands-On-Lab here:

I'm still full of a ton of positive emotions (you know that Germans are ice cold and keep their feelings mostly under control, don't you *haha*) - even two days later. That was such an amazing day in Azerbaijan. I had never been to this country before, nor had I visited Baku. The only thing I knew about Baku was from my youth, when I played chess on our school team and tried to learn from Garri Kasparov's books. He was born in Baku.

The city is beautiful. Unfortunately I didn't see as much of it as I wished, since I had to rush back to Germany. But judging from the thousands of pictures Oracle ACE Director Joel Perez took in two days, it is very scenic. Speaking of the other presenters, I learned a lot from everybody: Chris Antonini from Trivadis, Björn Rost from Pythian, and of course Joel.

And ... I know now - thanks to AzerOUG's president Kamran Agayev - that ALT+N advances the Java-based DBUA when the NEXT button is outside your screen's visible area and won't appear no matter what you do. Thanks my friend, you saved me :-)

Congrats also to all the organizers - everything worked perfectly well.

The conference even got attention on television:


;-)

I hope I can come back soon with a bit more time to look around.

CU all soon!

--Mike

Wednesday May 11, 2016

OTN Tour EMEA 2016 - Milan and Baku

Time to travel a bit.

OTN Tour EMEA 2016 has started yesterday in Düsseldorf, and tomorrow, May 12, Milan will be on the route.

OTN Tour Milan, Italy

.

Then on to Baku, Azerbaijan for the event on Saturday, May 14.

First time for me. And I'm so looking forward to it.

OTN Tour EMEA 2016 Baku

For Baku you will need to hurry as only 20 tickets are left.

Thanks to all the people like Ludovico Caldara and Kamran Agayev and all the others for their hard work setting up those events.

And I look forward to seeing you and learning a lot from the Oracle ACE Directors presenting throughout the entire day.

CU soon :-)

--Mike

Friday May 06, 2016

Upgrade NOW! - OTN Interview at Collaborate16

Thanks again to Laura for this interview at Collaborate 2016 :-)

Why you need to Upgrade to Oracle Database 12c - NOW!

--Mike

Thursday Mar 31, 2016

DROP PLUGGABLE DATABASE - things you need to know

Directly after my DOAG (German Oracle User Group) Conference presentation about "How Single-/Multitenant will change a DBA's life" Martin Bach (Enkitec) approached me and told me about his experiences with the DROP PLUGGABLE DATABASE command and future recoverability.

Martin discovered that once you have issued the DROP PLUGGABLE DATABASE command, you can't reuse a previously taken backup of this particular PDB anymore to recover the PDB into the existing CDB. I wasn't aware of this, and I'm glad that Martin told me about it.

Actually, only the meta information in the controlfile or the RMAN catalog gets deleted. The archive logs and backups still persist.

See also my blog post from Jan 9, 2015:
Recent News about Pluggable Databases - Oracle Multitenant

This is the error message you'll see when you try to recover a dropped pluggable database:

RMAN> restore pluggable database pdb2drop;

Starting restore at 01-JUN-15
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/01/2015 10:10:40
RMAN-06813: could not translate pluggable database pdb2drop

Just a few weeks ago a colleague from German presales asked me if I knew a more convenient way to restore a PDB, once the magic command (DROP PLUGGABLE DATABASE) has been issued, than recovering it into an auxiliary container database and unplugging/plugging it. I didn't.

But Nik (thanks!!!) told me that he pushed to get a MOS Note published explaining how to work around this issue: 

MOS Note: 2034953.1
How to Restore Dropped PDB in Multitenant

In brief this MOS Note describes how to:

  • Create an auxiliary container database
  • Recover the backup (yes, you will have to have a backup of your container database) including this particular PDB
  • Unplug the PDB after recovery has been finished and plug it back into the original CDB
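
The steps above can be sketched roughly as follows; this is an illustrative outline only (the auxiliary CDB name AUXCDB, the PDB name pdb2drop and all paths are hypothetical) - follow MOS Note 2034953.1 for the authoritative procedure:

```sql
-- Illustrative sketch of the workaround; names and paths are hypothetical.
-- Step 1 (RMAN, not shown): restore/recover the CDB backup into an auxiliary
--        CDB "AUXCDB" to a point in time before DROP PLUGGABLE DATABASE ran.

-- Step 2: in the auxiliary CDB, unplug the recovered PDB
ALTER PLUGGABLE DATABASE pdb2drop CLOSE;
ALTER PLUGGABLE DATABASE pdb2drop UNPLUG INTO '/tmp/pdb2drop.xml';

-- Step 3: plug it back into the original CDB
CREATE PLUGGABLE DATABASE pdb2drop USING '/tmp/pdb2drop.xml'
  FILE_NAME_CONVERT = ('/oradata/AUXCDB/pdb2drop', '/oradata/CDB1/pdb2drop');
ALTER PLUGGABLE DATABASE pdb2drop OPEN;
```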

Now some will say: Hey, that's simple and obvious. For me it wasn't ;-) That's why I'm writing about it - to remind myself of this workaround ...

--Mike
.



Tuesday Mar 08, 2016

Parameter Recommendations for Oracle Database 12c - Part II


Best Practice Hint

Time for a new round of parameter recommendations for Oracle Database 12.1.0.2. This blog post focuses on very well known parameters with interesting behavior. This can be a behavior change or simply something we'd like to point out. And even if you still work on Oracle Database 11g, some of the recommendations below may apply to your environment as well.

Preface

Again, please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

We strongly recommend Real Application Testing, especially the SQL Performance Analyzer but also Database Replay to verify the effect of any of those parameters. 
.

Known Parameters - Interesting Behavior

  • parallel_min_servers
    • What it does?
    • Default:
      • CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2
    • Behavior Change:
      • Setting it to a value below the default will cause the database to ignore it.
      • In Oracle Database 11g the default was 0
      • Compare 11.2.0.4 vs 12.1.0.2 on the same box:
        • 11g:
          SQL> show parameter parallel_min_servers
          NAME                  TYPE     VALUE
          --------------------- -------- ------
          parallel_min_servers  integer  0

        • 12c:
          SQL> show parameter parallel_min_servers
          NAME                  TYPE     VALUE
          --------------------- -------- ------
          parallel_min_servers  integer  8
    • Explanation:
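
A quick illustration of the behavior change described above - the values assume a hypothetical box where the 12c default computes to 8:

```sql
-- Hypothetical box: CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2 = 8
ALTER SYSTEM SET parallel_min_servers = 2 SCOPE = SPFILE;
-- After a restart, a value below the computed default is ignored
-- and SHOW PARAMETER parallel_min_servers still reports 8.
```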

  • job_queue_processes
    • What it does?
      • See the Oracle Documentation - value specifies the maximum number of job slaves to be created to execute jobs started by either DBMS_JOBS or DBMS_SCHEDULER
    • Default:
      • 1000
    • Recommendation:
      • Set it to a rough equivalent of 2 * CPU cores
    • Explanation:
      • In Oracle Database 12c we introduced automatic stats gathering during CTAS and IAS (into an empty table only) operations. This can potentially lead to too many jobs doing the stats gathering. Furthermore, issues can happen due to the default of concurrent stats gathering.
        Therefore limiting this parameter seems to be a good idea. 
      • Be aware that switching it to 0 will block all recompilation attempts. Furthermore, generally no jobs can be executed anymore with DBMS_JOBS or DBMS_SCHEDULER.
      • Multitenant behavior change:
        In 12.1.0.1, job_queue_processes was a Container Database (CDB) modifiable parameter (i.e. at a global level). However, in 12.1.0.2, the job_queue_processes parameter is not CDB modifiable; instead it's PDB modifiable, which means each PDB can have its own job_queue_processes value.
    • More Information:
    • Annotation:
      I've had an email exchange with Stefan Köhler about the stats behavior for CTAS. As I couldn't reproduce myself the behavior we saw at two customers with job_queue_processes=1000 and heavy CTAS activity (which could be remedied by setting job_queue_processes to a lower value), I would put a question mark behind my above statement.
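
Following the recommendation above, a sketch of limiting the parameter - the value 16 assumes a hypothetical 8-core server (2 * CPU cores):

```sql
-- Hypothetical 8-core server: limit job slaves to 2 * 8 = 16
ALTER SYSTEM SET job_queue_processes = 16 SCOPE = BOTH;

-- In 12.1.0.2 the parameter is PDB modifiable, so a single PDB
-- (the name "pdb1" is illustrative) can get its own lower value:
ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM SET job_queue_processes = 8;
```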

      .
      .
  • recyclebin
    • What it does?
      • See the Oracle Documentation - controls whether the Flashback Drop capability is turned on or off. If the parameter is set to OFF, then dropped tables do not go into the recycle bin. If this parameter is set to ON, then dropped tables go into the recycle bin and can be recovered.
    • Default:
      • ON
    • Recommendation:
      • If the recycle bin is ON (the default) in your environment, then empty it at least once per week. Create a default job in all your environments emptying the recycle bin every Sunday morning at 3am, for instance:
        SQL> purge DBA_RECYCLEBIN;
    • Explanation:
      • The recycle bin has been on by default in every database since Oracle 10g. The danger is that it may not get emptied, while especially on developer databases many objects may be created and dropped again. As a result, the dropped objects and their dependents stay in the database until the space needs to be reclaimed. That means they exist in the data dictionary as well, for instance in TAB$. Their names now start with "BIN$..." instead of "EMP" - but they will blow up your dictionary. And emptying it not often enough may introduce a performance dip on your system, as the cleanup of many objects can be quite resource-intensive.
      • Check your current recycle bins:
        SQL > SHOW RECYCLEBIN;
        ORIGINAL NAME RECYCLEBIN NAME              OBJECT TYPE DROP TIME
        ------------- ---------------------------- ----------- -------------------
        TEST_RBIN     BIN$2e51YTaSK8TL/mPy+FuA==$0 TABLE       2010-05-27:15:23:45
        TEST_RBIN     BIN$5dF60S3GSEOSSYREaqCg==$0 TABLE       2010-05-27:15:23:43
        TEST_RBIN     BIN$JHCDN9YwQRXjXGOJcCIg==$0 TABLE       2010-05-27:15:23:42
    • More Information:
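
The weekly purge job mentioned above could be sketched with DBMS_SCHEDULER like this (job name and schedule are illustrative):

```sql
-- Illustrative job: purge the recycle bin every Sunday at 3am
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_RECYCLEBIN_WEEKLY',  -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN EXECUTE IMMEDIATE ''PURGE DBA_RECYCLEBIN''; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3; BYMINUTE=0',
    enabled         => TRUE);
END;
/
```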
.
.

  • deferred_segment_creation
    • What it does?
      • See the Oracle Documentation - set to the default (TRUE), then segments for tables and their dependent objects (LOBs, indexes) will not be created until the first row is inserted into the table
    • Default:
      • TRUE
    • Recommendation:
      • Set it to FALSE unless you plan to create a larger number of tables/indexes knowing that you won't populate many of them.
    • Explanation/Risk:
      • If my understanding is correct, this parameter was introduced with Oracle Database 11.2 in order to save space when applications such as EBS, Siebel or SAP create tons of tables and indexes which may never get used because you don't work with the matching module of the software
      • The risk is that certain queries check DBA_SEGMENTS and/or DBA_EXTENTS - and if there's no segment allocated, you won't find any indication of the object's existence in there, even though it actually exists. Furthermore, we have seen issues with Data Pump workers getting contention, and some other things. 
    • More Information:
      • The documentation has now become pretty conservative as well, since Oracle 11.2.0.4, and I'll second that:
        Before creating a set of tables, if it is known that a significant number of them will not be populated, then consider setting this parameter to true. This saves disk space and minimizes install time.
        ..
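
To see the deferred segment creation effect described above, a minimal sketch (the table name is illustrative):

```sql
-- With deferred_segment_creation = TRUE (the default):
CREATE TABLE t_deferred (id NUMBER);   -- hypothetical table

SELECT segment_name FROM user_segments
 WHERE segment_name = 'T_DEFERRED';    -- no rows: no segment allocated yet

INSERT INTO t_deferred VALUES (1);
COMMIT;                                -- first row triggers segment creation
```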
 --Mike

Friday Mar 04, 2016

Parameter Recommendations for Oracle Database 12c - Part I

Best Practice Hint

A few weeks ago we published some parameter recommendations, including several underscore parameters, but based on an internal (still ongoing) discussion we decided to remove that entry and split up the tasks. The optimizer team will take over parts of it, and I'll post an update as soon as something is published.

.

Preface

Please be advised - the following parameter list is mostly based on personal experience only. Some of them are officially recommended by Oracle Support. Always use proper testing mechanisms.

We strongly recommend SQL Performance Analyzer to verify the effect of any of those parameters. 
.

How to read this blog post?

Never ever blindly set any underscore or hidden parameters because "somebody said so" or "somebody wrote it on a blog" (including this blog!) or "because our country has the best tuning experts worldwide" ... Only trust Oracle Support if it's written in a MOS Note or an official Oracle White Paper, or if you have worked for quite a long time with a particular support or consulting engineer who understands your environment.
.

Important Parameter Settings

    • _kks_obsolete_dump_threshold
      • What it does?
        • Introduced in Oracle 12.1.0.2 as an enhancement  to improve cursor sharing diagnostics by dumping information about an obsolete parent cursor and its child cursors after the parent cursor has been obsoleted N times. 
      • Problem:
        • Trace files can grow like giant mushrooms due to cursor invalidations
      • Solution:
      • Patches:
        • Fix included in DBBP 12.1.0.2.160216
        • Fix on top of 12.1.0.2.13 DBEngSysandDBIM
        • Since Feb 13, 2016 there's a one-off available but on Linux only - and only on top of a fresh 12.1.0.2 
      • Remarks:
        • The underlying cursor sharing problem needs to be investigated - always
          If you have cursor sharing issues, you may set this parameter higher so that not every invalidation causes a dump; then investigate and solve the issue, and finally switch the parameter to 0 once the issue is taken care of.
          Please be aware that switching the parameter to 0 will lead to a lack of diagnostic information in case of cursor invalidations.
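
A sketch of the approach described in the remarks - as with any underscore parameter, set it only in agreement with Oracle Support (the value 8 is illustrative):

```sql
-- While investigating: raise the threshold so that not every
-- cursor invalidation dumps diagnostics (value illustrative)
ALTER SYSTEM SET "_kks_obsolete_dump_threshold" = 8 SCOPE = BOTH;

-- Once the cursor sharing issue is solved: switch dumping off
ALTER SYSTEM SET "_kks_obsolete_dump_threshold" = 0 SCOPE = BOTH;
```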


    • _use_single_log_writer
    • memory_target
      • What it does?
      • Problem:
        • Unexpected failing database upgrades with settings of memory_target < 1GB, where equal settings of sga_target and pga_aggregate_target didn't cause issues 
        • It prevents the important use of HugePages
      • Solution:
        • Avoid memory_target whenever possible
        • Better use sga_target and pga_aggregate_target instead
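
A sketch of the recommended alternative - the sizes are purely illustrative and must be derived from your own workload:

```sql
-- Replace memory_target with explicit SGA/PGA sizing (values illustrative)
ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 4G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = SPFILE;
-- Restart the instance afterwards; with memory_target gone,
-- the SGA can be backed by HugePages again.
```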


    • pga_aggregate_limit

    Essential MOS Notes for Oracle Database 12.1.0.2

    --Mike
    .

    Tuesday Feb 02, 2016

    How to find out if a PSU has been applied? DBMS_QOPATCH

    Since we changed the PSU and BP patch numbering from Oracle Database 12.1.0.2.PSU6 to 12.1.0.2.160119, it is almost impossible to distinguish from the patch name alone whether you have applied a PSU or a BP.

    But:
    In Oracle Database 12c there's a package available which is very useful to query plenty of information about patches from within the database: DBMS_QOPATCH.

    Here are a few helpful examples which I created by trying things out in our DBaaS Cloud database.

    Which patches have been applied (or rolled back)?

    SQL> set serverout on

    SQL> exec dbms_qopatch.get_sqlpatch_status;

    Patch Id : 20415564
            Action : APPLY
            Action Time : 24-JUN-2015 06:19:23
            Description : Database PSU 12.1.0.2.3, Oracle JavaVM Component (Apr2015)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20415564/18617752/
                      20415564_apply_ORCL_CDBRO
    OT_2015Jun24_06_18_09.log
            Status : SUCCESS

    Patch Id : 20299023
            Action : APPLY
            Action Time : 24-JUN-2015 06:19:23
            Description : Database Patch Set Update : 12.1.0.2.3 (20299023)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20299023/18703022/
                      20299023_apply_ORCL_CDBRO
    OT_2015Jun24_06_18_11.log
            Status : SUCCESS

    Patch Id : 20848415
            Action : APPLY
            Action Time : 24-JUN-2015 06:19:23
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20848415/18918227/
                      20848415_apply_ORCL_CDBRO
    OT_2015Jun24_06_18_15.log
            Status : SUCCESS

    Patch Id : 20848415
            Action : ROLLBACK
            Action Time : 24-JUN-2015 06:52:31
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20848415/18918227/
                      20848415_rollback_ORCL_CD
    BROOT_2015Jun24_06_52_29.log
            Status : SUCCESS

    Patch Id : 20618595
            Action : APPLY
            Action Time : 24-JUN-2015 13:52:13
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20618595/18956621/
                      20618595_apply_ORCL_CDBRO
    OT_2015Jun24_13_52_12.log
            Status : SUCCESS

    Patch Id : 20618595
            Action : ROLLBACK
            Action Time : 24-JUN-2015 14:37:11
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20618595/18956621/
                      20618595_rollback_ORCL_CD
    BROOT_2015Jun24_14_37_10.log
            Status : SUCCESS

    Patch Id : 20415564
            Action : ROLLBACK
            Action Time : 27-JAN-2016 17:43:18
            Description : Database PSU 12.1.0.2.3, Oracle JavaVM Component (Apr2015)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/20415564/18617752/
                      20415564_rollback_MIKEDB_
    CDBROOT_2016Jan27_17_42_16.log
            Status : SUCCESS

    Patch Id : 21555660
            Action : APPLY
            Action Time : 27-JAN-2016 17:43:18
            Description : Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21555660/19361790/
                      21555660_apply_MIKEDB_CDB
    ROOT_2016Jan27_17_42_17.log
            Status : SUCCESS

    Patch Id : 21359755
            Action : APPLY
            Action Time : 27-JAN-2016 17:43:18
            Description : Database Patch Set Update : 12.1.0.2.5 (21359755)
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21359755/19194568/
                      21359755_apply_MIKEDB_CDB
    ROOT_2016Jan27_17_42_18.log
            Status : SUCCESS

    Patch Id : 21962590
            Action : APPLY
            Action Time : 27-JAN-2016 17:43:18
            Description :
            Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/21962590/19426224/
                      21962590_apply_MIKEDB_CDB
    ROOT_2016Jan27_17_42_21.log
            Status : SUCCESS

    PL/SQL procedure successfully completed.
    .

    Where's my home and inventory?

    SQL> set pagesize 0

    SQL> set long 1000000 

    SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) "Home and Inventory" from dual;

    Home and Inventory
    -------------------------------------------------------------

    Oracle Home     : /u01/app/oracle/product/12.1.0/dbhome_1
    Inventory    
        : 
    /u01/app/oraInventory


    Has a specific patch been applied?

    Let's check for the latest PSU. 

    SQL> select xmltransform(dbms_qopatch.is_patch_installed('21359755'), dbms_qopatch.get_opatch_xslt) "Patch installed?" from dual;

    Patch installed?
    -------------------------------------------------------

    Patch Information:
             21359755:   applied on 2015-10-22T21:48:17Z

    .

    What's tracked in my inventory?

    The equivalent of opatch lsinventory -detail ...

    SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual; 

    Oracle Querayable Patch Interface 1.0
    ----------------------------------------------------------------
    Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
    Inventory         : /u01/app/oraInventory
    ----------------------------------------------------------------

    Installed Top-level Products (1):
                                        12.1.0.2.0
    Installed Products ( 135)
                                   ...

    .

    Additional Information and Patches

    If you need more helpful examples you may check this excellent blog post by Simon Pane (Pythian):

    And credits to Martin Berger for sending me this important information:

    Just in case there are multiple DBs running from the same O_H, and someone      
    queries dbms_qopatch.get_opatch_lsinventory automated from all DBs (as in       
    automated monitoring/reporting scripts) I'd recommend Patch 20599273 -          
    otherwise there might be strange XML errors due to race conditions. 

    .

    --Mike 

    Thursday Jan 28, 2016

    TDE is wonderful - Journey to the Cloud V

    DBaaS Oracle Cloud

     What happened so far on my Journey to the Cloud?

    Today's journey:
    Learn about TDE (Transparent Data Encryption) and other secrets

    What I really really love about my job: Every day I learn something new.

    But sometimes learning can be frustrating at the beginning. And so it was for Roy and me in the past days when we explored the use of TDE (Transparent Data Encryption) in our DBaaS Cloud environments. But many thanks to Brian Spendolini for his continuous 24x7 support :-)

    Never heard of Transparent Data Encryption before? Then please read on here. It's usually part of ASO (Advanced Security Option), but it is included in the cloud offering. 

    But first of all, before taking care of TDE and PDBs, I tried to deploy a new DBaaS VM ...
    .

    PDB names can't contain underscores?

    Well, one learning experience is that you can't create a PDB with an underscore in its name through the DBaaS provisioning page. I wanted to name the PDB in my new env simply "TDE_PDB1" (8 characters, all should be fine) - but received this nice message:

    Don't worry if you don't speak German - it basically says that the name can't be longer than 8 characters (ok, I knew that), must begin with a character (mine does, of course) and can only contain characters and numbers (eh?? no underscores???). Hm ...?!?

    Ok, I'll name mine "TDEPDB1".

    Of course outside this page you can create PDBs containing an underscore:

    SQL> create pluggable database PDB_MIKE admin user mike identified by mike
      2  file_name_convert=('/oradata/CDB2/pdbseed', '/oradata/CDB2/pdb_mike');

    Pluggable database created
    .

    That's what happens when application logic tries to supersede database logic.
    .

    (Almost) undocumented parameter: encrypt_new_tablespace

    Thanks to Robert Pastijn for telling me about this hidden secret. A new parameter which is not in the regular database deployment but only in the cloud.

    encrypt_new_tablespaces

    First check in MOS:

    Interesting.

    So let's check with Google.
    And here it is: 7 hits, for example:

    Controlling Default Tablespace Encryption

    The ENCRYPT_NEW_TABLESPACES initialization parameter controls default encryption of new tablespaces. In Database as a Service databases, this parameter is set to CLOUD_ONLY.

    The parameter accepts three values:

      • ALWAYS - Any tablespace created will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause.

      • CLOUD_ONLY - Tablespaces created in a Database Cloud Service database will be transparently encrypted with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause. For non-Database Cloud Service databases, tablespaces will only be encrypted if the ENCRYPTION clause is specified. This is the default value.

      • DDL - Tablespaces are not transparently encrypted and are only encrypted if the ENCRYPTION clause is specified.

    What I found really scary is the fact that I couldn't find it in my spfile/pfile. You can alter it with an "alter system" command but you can't remove it.

    The idea behind this is great as tablespaces should be encrypted, especially when they reside in a cloud environment. TDE is a very useful feature. And this mechanism exists regardless of your edition, whether you have Standard Edition or Enterprise Edition in any sort of flavor in the DBaaS Cloud.

    A new tablespace will be encrypted by default:

    SQL> CREATE TABLESPACE TS_MIKE DATAFILE 'ts_mike01.dbf' SIZE 10M;

    Then check:

    SQL> select TABLESPACE_NAME, ENCRYPTED from DBA_TABLESPACES;

    But we'll see later if this adds some constraints to our efforts to migrate a database for testing purposes into the DBaaS cloud environment.
    .

    Is there anything encrypted yet?

    Quick check after setting:

    SQL> alter system set exclude_seed_cdb_view=FALSE scope=both;

    I tried to find out if any of the tablespaces are encrypted.

    SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

    TABLESPACE_NAME                ENC     CON_ID
    ------------------------------ --- ----------
    SYSTEM                         NO           1
    USERS                          NO           1
    SYSAUX                         NO           1
    UNDOTBS1                       NO           1
    TEMP                           NO           1

    SYSTEM                         NO           2
    USERS                          NO           2
    TEMP                           NO           2
    SYSAUX                         NO           2

    EXAMPLE                        NO           3
    USERS                          NO           3
    TEMP                           NO           3
    SYSAUX                         NO           3
    APEX_1701140435539813          NO           3
    SYSTEM                         NO           3

    15 rows selected.

    Looks good.  Nothing encrypted yet.
    .

    How does the new parameter ENCRYPT_NEW_TABLESPACES affect operation?

    Ok, let's try.

    SQL> show parameter ENCRYPT_NEW_TABLESPACES

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ---------------
    encrypt_new_tablespaces              string      CLOUD_ONLY

    And further down the road ...

    SQL> alter session set container=pdb1;

    SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_plays_with_tde.dbf' size 10M;

    Tablespace created.

    SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

    TABLESPACE_NAME                ENC     CON_ID
    ------------------------------ --- ----------
    SYSTEM                         NO           3
    SYSAUX                         NO           3
    TEMP                           NO           3
    USERS                          NO           3
    EXAMPLE                        NO           3
    APEX_1701140435539813          NO           3
    MIKE_PLAYS_WITH_TDE            YES          3

    7 rows selected.

    Ah ... so my new tablespace is encrypted. Not bad ... so far TDE has no influence. I can create objects in this tablespace, query them etc. It is not disturbing at all. Good.
    .

    How does this key thing work in the DBaaS Cloud?

    The documentation in the White Paper above tells us this:

    Managing the Software Keystore and Master Encryption Key

    Tablespace encryption uses a two-tiered, key-based architecture to transparently encrypt (and decrypt) tablespaces. The master encryption key is stored in an external security module (software keystore). This master encryption key is used to encrypt the tablespace encryption key, which in turn is used to encrypt and decrypt data in the tablespace.

    When the Database as a Service instance is created, a local auto-login software keystore is created. The keystore is local to the compute node and is protected by a system-generated password. The auto-login software keystore is automatically opened when accessed.

    You can change (rotate) the master encryption key by using the tde rotate masterkey  subcommand of the dbaascli  utility. When you execute this subcommand you will be prompted for the keystore password. Enter the password specified when the service instance was created.
    .

    Creating a new PDB

    That's easy, isn't it?

    SQL> alter session set container=cdb$root;

    Session altered.

    SQL> create pluggable database pdb2 admin user mike identified by mike;

    Pluggable database created.

    SQL> alter pluggable database pdb2 open;

    Pluggable database altered.

    SQL> select tablespace_name, encrypted, con_id from CDB_TABLESPACES order by 3;

    TABLESPACE_NAME                ENC     CON_ID
    ------------------------------ --- ----------
    SYSTEM                         NO           4
    SYSAUX                         NO           4
    TEMP                           NO           4
    USERS                          NO           4

    SQL> select file_name from dba_data_files;

    FILE_NAME
    --------------------------------------------------------------------------------
    /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
    ystem_cbn8fo1s_.dbf

    /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_s
    ysaux_cbn8fo20_.dbf

    /u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_u
    sers_cbn8fo27_.dbf

    Ah, bad thing. As I neither used the file_name_convert option nor changed the PDB_FILE_NAME_CONVERT initialization parameter, my new PDB's files got created in the "root" path of the CDB. I don't want this. But isn't there this cool new feature called ONLINE MOVE OF DATAFILES in Oracle Database 12c? Ok, it's an EE feature, but let me try it after checking the current OMF file names in DBA_DATA_FILES and DBA_TEMP_FILES:

    SQL> !mkdir /u02/app/oracle/oradata/MIKEDB/PDB2

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_system_cbn8fo1s_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf';

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_sysaux_cbn8fo20_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf';

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_users_cbn8fo27_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf';

    Be prepared:
    This will create a 1:1 copy of the file in the designated location and sync it afterwards. It may take a minute per file.

    And moving the TEMP tablespace(s) file(s) will fail.

    SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf' TO '/u02/app/oracle/oradata/MIKEDB/PDB2/temp01.dbf';

    *
    ERROR at line 1:
    ORA-01516: nonexistent log file, data file, or temporary file
    "/u02/app/oracle/oradata/MIKEDB/2A6680A0D990285DE053BA32C40AED53/datafile/o1_mf_temp_cbn8fo25_.dbf"

    The temporary tablespace will have to be dropped and recreated. But not a big deal. 

    Check:

    SQL> select file_name from dba_data_files;

    FILE_NAME
    -----------------------------------------------------
    /u02/app/oracle/oradata/MIKEDB/PDB2/sysaux01.dbf
    /u02/app/oracle/oradata/MIKEDB/PDB2/users01.dbf
    /u02/app/oracle/oradata/MIKEDB/PDB2/system01.dbf

    Let me fix this so I don't hit this pitfall again:

    SQL> alter system set pdb_file_name_convert='/u02/app/oracle/oradata/MIKEDB/pdbseed','/u02/app/oracle/oradata/MIKEDBPDB2';

    Final verification:

    SQL> select name, value from v$system_parameter where con_id=4;

    NAME                   VALUE
    ---------------------- ----------------------------------
    resource_manager_plan
    pdb_file_name_convert  /u02/app/oracle/oradata/MIKEDB/pdbseed, /u02/app/oracle/oradata/MIKEDBPDB2


    Now the fun part starts ... ORA-28374: typed master key not found in wallet

    Remember this command from above in my PDB1? It ran fine. But now it fails in PDB2.

    SQL> create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M;

    create tablespace MIKE_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_plays_with_tde.dbf' size 10M
    *
    ERROR at line 1:
    ORA-28374: typed master key not found in wallet

    Voodoo in the database? I'm worried - especially as Roy had the same issue days before. But why did the command pass through without issues before in PDB1 - and now it doesn't in PDB2? Is it because the PDB1 is precreated - and my PDB2 is not?

    Kinda strange, isn't it?
    So connecting back to my PDB1 and trying again: 

    SQL> alter session set container=pdb1;

    Session altered.

    SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB1/mike_still_plays_with_tde.dbf' size 10M;

    Tablespace created.

    Ok, now I'm worried.
    What is the difference between the precreated PDB1 and my new PDB2?

    Why do I get an ORA-28374 in my fresh PDB2?

    When we compare the wallet status in both PDBs we'll recognize the difference:

    PDB1:

    SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

    WRL_TYPE        WALLET_TYPE          STATUS
    --------------- -------------------- -----------------------
    FILE            AUTOLOGIN            OPEN

    PDB2:

    SQL> select wrl_type, wallet_type, status from v$encryption_wallet;

    WRL_TYPE        WALLET_TYPE          STATUS
    --------------- -------------------- -----------------------
    FILE            AUTOLOGIN            OPEN_NO_MASTER_KEY

    Now, thanks to Brian Spendolini, I have a working solution. But I'm not convinced that this is an obvious path ...
    Remember? I just want to create a tablespace in my new (own, non-precreated) PDB. That's all ... 

    SQL> alter session set container=cdb$root;

    SQL> administer key management set keystore close;

    keystore altered.

    SQL> administer key management set keystore open identified by <your-sysadmin-pw> container=all;

    keystore altered.

    SQL> alter session set container=pdb2;

    Session altered.

    SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

    WRL_PARAMETER                             STATUS             WALLET_TYPE
    ----------------------------------------- ------------------ -----------
    /u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN_NO_MASTER_KEY PASSWORD

    SQL> ADMINISTER KEY MANAGEMENT SET KEY USING TAG "tde_dbaas" identified by <your-sysadmin-pw> WITH BACKUP USING "tde_dbaas_bkup"; 

    keystore altered.

    SQL> SELECT WRL_PARAMETER,STATUS,WALLET_TYPE FROM V$ENCRYPTION_WALLET;

    WRL_PARAMETER                             STATUS   WALLET_TYPE
    ----------------------------------------- -------- -----------
    /u01/app/oracle/admin/MIKEDB/tde_wallet/  OPEN     PASSWORD

    And finally ... 

    SQL> create tablespace MIKE_STILL_PLAYS_WITH_TDE datafile '/u02/app/oracle/oradata/MIKEDB/PDB2/mike_still_plays_with_tde.dbf' size 10M;

    Tablespace created.

    Wow!!!

    That was not straightforward at all. Maybe it all happened due to my almost non-existent knowledge about TDE.

    Ah ... and let me say that I find the missing uppercase letter in all the "keystore altered." echo messages quite disturbing. But this is a cosmetic issue and non-critical of course ...

    --Mike

    PS: Read on about Seth Miller's experience here on Seth's blog:
    http://sethmiller.org/oracle-2/oracle-public-cloud-ora-28374-typed-master-key-not-found-in-wallet/ 




    Tuesday Dec 22, 2015

    Patch, patch, patch - Journey to the Cloud - III

    DBaaS Oracle Cloud

    What happened so far on my Journey to the Cloud?

    I haven't stopped my journey, in case you wondered about the two weeks of inactivity. I just had to learn a bit as my pretty naive approach caused me some trouble. Based on my last experience I had to find out why APEX left so many leftovers when I removed it the friendly way. And I wanted to patch my environment to a more recent PSU as - learning experience - a new deployment does not come with the most recent PSU applied from scratch. So there's still plenty of stuff to do.

    Patching my DBaaS database with the most recent PSU

    If you've ever been frustrated while applying a PSU then please read on - I guarantee you'll smile more than once ...

    First of all what is my expectation from a cloud environment?
    Yes, Push A Button.

    Well ... with the July 2015 PSU (12.1.0.2.4) this worked just fine even though it took a bit of time to download the PSU. But it got applied flawlessly by just pushing a button.

    PSU Choice

    But we have December 2015 already. So I would like to apply the Oct 2015 PSU (12.1.0.2.5) to my environment. And it turns out that currently this is the only one getting offered in the DBaaS environment to be applied. 

    First step: Execute the precheck

    See the results ...

    Oh, it failed. But why? It does not tell me anything about the reason. No space left? A patch conflict? Something else? No idea, as Error [1] isn't very useful.  

    And what's next? Live without the PSU even though I've been preaching for ages in our workshops that you MUST apply PSUs on a regular basis? Ok, so I have to sort this out.

    Let's find it out

    I logged in to my environment via SSH. Then:

    cd /u01

    Ah, and there's a psu.n subdirectory - that looks promising. And there's a file conflictlog in it - that does look even more promising. 

    It tells me:

    Invoking prereq "checkconflictagainstohwithdetail"

    ZOP-47: The patch(es) has supersets with other patches installed in the Oracle Home (or) among themselves.

    ZOP-40: The patch(es) has conflicts with other patches installed in the Oracle Home (or) among themselves.

    Prereq "checkConflictAgainstOHWithDetail" failed.

    Summary of Conflict Analysis:
    There are no patches that can be applied now.

    Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
    21359755, 21627366

    Following patches will be rolled back from Oracle Home on application of the patches in the given list :
    20281121

    Whole composite patch Conflicts/Supersets are:
    Composite Patch : 21359755

            Conflict with 21627366
            Bug Superset of 20281121

    Detail Conflicts/Supersets for each patch are:

    Sub-Patch : 21359755
            Conflict with 21627366
            Conflict details:
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kzan.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kspt.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kcb.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kcrfw.o
            /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kokt.o

            Bug Superset of 20281121
            Super set bugs are:
            20281121

    Patch failed with error code 1

    Ah, my friend Error [1] again. Now all is clear, isn't it? There's a conflict as the previous installation deployed in the cloud does seem to have gotten some extra treatments - which is good in one way but bad in another as I will have to solve this now. The Cloud Console does not offer me anything to solve this.

    Luckily ...

    I'm subscribed to our internal cloud mailing lists. And other people are way smarter than I am, so I found an email linking to an explanation in the official documentation (Known Issues for the Database Cloud As A Service). There are quite a few known issues, and it's very useful to have such a document. And here we go with the solution to my problem:

    Ok, I have two options, one in the graphical interface, the other one on the command line. I'll go with the first option as this is meant to be Push A Button style and not type on the command line.

    So first I click on PATCH in the hamburger menu: 

    And then I chose the FORCE option.
    May the force be with me.

    The alternative would have been on the command line using the dbpatchm subcommand of the dbaascli utility:

    Before applying the patch, set the value of the ignore_patch_conflict key to 1 in the /var/opt/oracle/patch/dbpatchm.cfg patching configuration file; for example:

    ignore_patch_conflict=1
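
    So the command-line alternative boils down to something like this (a sketch only - verify the exact dbpatchm apply syntax against the tooling version on your cloud image):

    $ vi /var/opt/oracle/patch/dbpatchm.cfg      # set ignore_patch_conflict=1
    $ dbaascli dbpatchm --run -apply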

    Et voilà ...

    It took a bit ...

    Actually a bit over 30 minutes ... but finally ...

    The most recent PSU 12.1.0.2.5 from October 2015 has been applied to my DBaaS Cloud installation. It wasn't that complicated - if I had known upfront to hit the magic "FORCE" button ;-)

    Finally let's check:

    COLUMN PATCH_ID FORMAT 99999999
    COLUMN PATCH_UID FORMAT 99999999
    COLUMN VERSION FORMAT A12
    COLUMN STATUS FORMAT A12
    COLUMN DESCRIPTION FORMAT A30

    select PATCH_ID, PATCH_UID, VERSION, STATUS, DESCRIPTION from DBA_REGISTRY_SQLPATCH order by BUNDLE_SERIES;



    Remove APEX from my database and install it into my PDB

    In my previous journey removing APEX from my cloud database didn't work quite well. I had leftovers afterwards, mainly from the Multitenant Self Service Provisioning APEX application (owner: C##PDBSS) and from the Database-As-A-Service-Monitor. 

    Removing  Multitenant Self Service Provisioning Application

    From some internal email conversation (and from the readme.pdf included in the download of the Multitenant Self Service Provisioning Application) I learned that there's a pdbss_remove.sql. And a find showed me the unusual location in the cloud deployment:

    /var/opt/oracle/log/pdbss/pdbss

    So first of all I connected to my CDB$ROOT and started the removal script:

    sqlplus / as sysdba
    spool /tmp/pdbss_remove.log
    @/var/opt/oracle/log/pdbss/pdbss/pdbss_remove.sql

    It took 1:45 minutes to complete. A spool off is not necessary as the script exits SQL*Plus at the end.

    Then I started the APEX removal script from the $ORACLE_HOME/apex subdirectory:

    cd $ORACLE_HOME/apex
    sqlplus / as sysdba

    @apxremov_con.sql

    Well ... but again something seemed to fail as I ended up with a good bunch of invalid objects. 

    First I recompiled:

    @?/rdbms/admin/utlrp.sql 

    But even then these 3 objects remained INVALID after the removal.

    SQL> select owner, object_name from dba_objects where status='INVALID';

    OWNER         OBJECT_NAME
    ------------- --------------------------
    SYS           WWV_DBMS_SQL
    FLOWS_FILES   WWV_BIU_FLOW_FILE_OBJECTS
    APEX_040200   APEX$_WS_ROWS_T1

    And furthermore a good number of APEX user schemas did not get removed either.

    CON_ID USERNAME
    ------ --------------------------
         1 APEX_REST_PUBLIC_USER
         1 APEX_PUBLIC_USER
         1 APEX_LISTENER
         1 APEX_040200
         3 APEX_REST_PUBLIC_USER
         3 APEX_PUBLIC_USER
         3 APEX_LISTENER
         3 APEX_040200

    8 rows selected.

    I will update this blog post as soon as I know how to remove APEX flawlessly from the cloud deployment. The issue is under investigation.

    So better don't remove APEX from the DBaaS cloud deployment at the moment. 

    --Mike


    Friday Dec 18, 2015

    Tech Tip: Get updates about new/important MOS Notes

    There's nothing more annoying than unwanted and useless email newsletters in your inbox.

    No doubt.

    But sometimes I wish to have an alert about important MOS Note changes, recent additions to my favorite notes, updates to the bugs I monitor etc. I would like to know about helpful new MOS articles - when they get published ;-)

    And this is what you need to do (and I'll have to apologize for the small pictures, but our fancy, modern blog design template does not allow including larger pictures ...).

    Log into MOS and change your SETTINGS

    Once you've logged in you'll click on MORE and then navigate to SETTINGS.

    Change SETTINGS in MOS

    Choose HOT TOPICS EMAIL in the left navigation menu

    HOT TOPICS EMAIL

    Update the settings

    Here it is important that you choose "Send with Selected Options" first and then "HTML" as otherwise the email does not seem to get sent ("Text" option does not seem to work).

    HTML Option

    Add Products you'd like to get updates about 

    Add Product Selection

    Of course you may add more products - I chose only the "Database" in my example. 

    Change the number of items you'd like to see 

    No of Items

    *** AND DON'T FORGET TO HIT THE "APPLY" BUTTON FURTHER DOWN ON THE SCREEN ***

     Check your current Favorites

    Then it's important to check your current "Favorites". Click on the yellow/orange STAR in the upper menu close to the search field and select "Documents". If "Documents" does not get displayed there's no need to clean up as you haven't marked anything as a favorite yet. But in my example below you see that I have a lot of very old docs in my list. 

    Check Favorites

    So I will clean them up first.

    Clean Up the favorite documents list

    Click on "Manage Favorites" via the yellow/orange STAR and mark all unwanted documents using the usual SHIFT or CTRL keys. Hit "Remove".

    In my case the list is empty now.  

    Mark documents as FAVORITES 

    The important work is now to mark certain documents or bugs as FAVORITES. In the example below I choose the most important MOS Note 161818.1 via the plain ID search (right side of the screenshot), and once the note gets displayed I push the white STAR in the upper left corner above the document - the STAR will turn yellow/orange.

    Empty fav list

    Once I've done that I'll find this note in my list of favorite documents:

    Repeat this with the notes whose changes you'd like to be informed about. For instance I always monitor the "Known Issues and Alerts" notes for Oracle Database 12.1.0.2 and 11.2.0.4:

    Issues and Alerts

    Receive the email update

    Once you've done that you'll get one email per day with the recent changes - and it will contain information about the updates to your favorite notes.



    Summary

    It's hard to keep track of recent changes and updates to important MOS Notes. And it's even harder to learn about newly published articles in your areas of interest. The Hot Topics email feature in MOS is pretty cool and extremely helpful. I like it a lot.

    But you'll have to enable it first in your settings as MOS does not want to spam you. 

    Credits to Roy who dug out all the tiny pieces such as the HTML switch ;-)

    --Mike

    Wednesday Dec 16, 2015

    New version of our Hands-On Lab available for download "Upgrade/Migrate/Consolidate to Oracle 12c" HOL

    Yes! Finally ...

    Roy did all the hard work and patched all our environments with the most recent PSUs. And he also managed to upload our Hands-On Lab not only internally (to retriever) but also to OTN.

    What will you be able to do?

    • Upgrade an 11.2.0.4 database UPGR to Oracle Database 12.1.0.2
    • Plug UPGR into CDB2, an Oracle Database 12.1.0.2 container database, and make it a PDB
    • Migrate an 11.2.0.4 database FTEX into another PDB using the Full Transportable Export/Import feature
    • Unplug an Oracle Database 12.1.0.1 PDB and plug it into an Oracle Database 12.1.0.2 container database and finally upgrade it
    • And of course you can play around with the entire environment

    What have we done for your convenience? 

    • Switched to a new PDF viewer which allows better copy&paste - see the instructions document.
      Unfortunately we are not allowed to distribute the relatively old version of Adobe PDF Reader for Linux. 
    • Applied the most recent PSUs to the 11.2.0.4 and 12.1.0.2 homes
    • Fixed some minor things
    • Updated the HOL Instructions
    • Added an "artificial" environment variable $OH12 to shorten typing
    • And some other tiny little things ...
    • And the lab is packaged with Oracle VBox 5.0.10 including the matching VBox Extensions; the previous lab environment was based on VBox 4.3, which caused us trouble - so we hope that now all will go well.

    Where can you download the lab and the instructions?

    All credits to Roy!!!

    Thanks
    --Mike

    Friday Dec 04, 2015

    Switch on/off + Remove - Journey to the Oracle Cloud - II

    DBaaS Oracle Cloud

    What happened so far on my Journey to the Cloud?

    Day 2 of my Cloud exploration journey.

    In my blog posts about my DBaaS Cloud journey I won't explain a lot about how to login to the Oracle Cloud or how to setup a simple database as others have done this already including the SSH key topic:

    But let me show you some findings I've had when logging in via SSH to my DBaaS environment. In the following article I will describe how to:

    • Switch off Flashback Database
    • Clean up expired Backups and Archivelogs
    • Clean up traces and set the retention policy
    • Remove APEX from the CDB$ROOT

    Flashback Database is ON by default

    After playing around with my DBaaS environment for a few days all got stuck. The database hung. A quick look into the alert.log revealed the simple reason: my database had no space anymore on the Flash Recovery Area mount point. "Why that?" I did ask myself.

    Well,

    SQL> select FLASHBACK_ON from V$DATABASE;
    YES

    told me the answer. By default FLASHBACK DATABASE is ON. Great idea, but in playground mode you may want to switch it OFF. Furthermore, in my (developer) environment the logs get written to /u03/app/oracle/flash_recovery_area/flashback (DB_RECOVERY_FILE_DEST) - and DB_RECOVERY_FILE_DEST_SIZE is defined at only 7.5GB. That got consumed quickly.  

    You may switch FLASHBACK DATABASE OFF if you don't need it at the moment. Or at least increase DB_RECOVERY_FILE_DEST_SIZE.

    SQL> alter database flashback off; 
    SQL> exit
    $  rm /u03/app/oracle/flash_recovery_area/flashback/*
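
    If you'd rather keep Flashback Database enabled, the alternative is to give the recovery area more room (the 20G here is just an example value - size it for your workload):

    SQL> alter system set db_recovery_file_dest_size=20G;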

    And just for the records:
    You don't need FLASHBACK DATABASE switched ON in order to use GUARANTEED RESTORE POINTS (one of the best features - I use it almost daily before I "destroy" something in my databases).

    Quick check:

    SQL> select FLASHBACK_ON from V$DATABASE;
    NO

    SQL> create restore point DESTROY_ME guarantee flashback database;

    If you'd check now:

    SQL> select flashback_on from v$database;

    FLASHBACK_ON
    ------------------
    RESTORE POINT ONLY

    Don't forget to drop your GUARANTEED RESTORE POINT later on. 

    SQL> drop restore point DESTROY_ME; 


    Clean up your expired BACKUPS and ARCHIVE LOGS 

    Every database should be run in archivelog mode - no doubt. But if you are short on disk space for the archive logs you may need to clean up from time to time. Furthermore archivelog mode is necessary in order to support certain actions in Oracle Multitenant.

    Keep in mind:
    Backups and Archivelogs don't clean up themselves.

    Clean up expired backups:

    $ rman target /
    RMAN> crosscheck backup;
    RMAN> delete noprompt obsolete;
    RMAN> delete expired backup;

    Clean up archivelogs: 

    $ rman target /
    RMAN> crosscheck archivelog all;
    RMAN> delete noprompt expired archivelog all;
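
    And to make the cleanup less of a recurring chore, you can let RMAN age out backups automatically via the retention policy (the 7-day recovery window here is just an example - pick what matches your backup strategy):

    $ rman target /
    RMAN> configure retention policy to recovery window of 7 days;
    RMAN> delete noprompt obsolete;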


    Clean up Trace and Log Files and set the Retention

    The default retention for traces is set to standard values in the cloud as well. You may change this to purge traces more frequently. The interface to tweak these settings is ADRCI. Find a nice ADRCI Survival Guide for DBAs, written by my colleague Uwe Hesse, here.

    adrci> show control

    To solve DIA-48448 you'll have to set the correct diag directory you want to look at:

    adrci> show homes
    ADR Homes:
    diag/rdbms/mike/MIKE
    diag/tnslsnr/slc01qnv/listener
    diag/tnslsnr/MIKEVM/listener

    adrci> set home diag/rdbms/mike/MIKE

    adrci> show control

    ADR Home = /u01/app/oracle/diag/rdbms/mike/MIKE:
    *************************************************************************
    ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
    -------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
    4162796853           720                  8760 2015-09-17 13:51:04.699486 +00:00        2015-11-30 17:53:33.062134 +00:00                                                 1                    2                    82                   1                    2015-09-17 13:51:04.699486 +00:00
    1 rows fetched

    This means SHORTP_POLICY and LONGP_POLICY are set to 720 hours (30 days) and 8760 hours (365 days, ~1 year). 

    I'd usually change it (that is pretty much up to your personal preferences) and purge whatever is older than a few days for a start.

    adrci> set control (SHORTP_POLICY=96, LONGP_POLICY=720)

    switching the short term policy to 4 days, the long term policy (alert.log, incidents) to 30 days. And finally cleaning up also older traces for a fresh start:

    adrci> purge -age 24

    This took a while (~10 minutes) in my environment.

    Remove Oracle Application Express (APEX) from the CDB$ROOT

    As I wrote a few weeks ago, Oracle APEX is a fantastic - and unfortunately still the most undervalued - Oracle tool. The only real treatment you should give your Cloud database first is removing APEX from the CDB$ROOT before either plugging something in or creating pluggable databases. If you have created pluggable databases (PDBs) already, please make sure that APEX is not used in any of them, as removing APEX from the CDB$ROOT will remove it from all existing PDBs as well.

    See: https://blogs.oracle.com/UPGRADE/entry/apex_in_pdb_does_not 
    on Why and How to remove Oracle APEX from the CDB$ROOT.

    In short:

    1. Make sure to change into the local $ORACLE_HOME/apex directory first before starting SQL*Plus:
      cd $ORACLE_HOME/apex

    2. Connect to your CDB$ROOT:
      sqlplus / as sysdba

    3. Run the "Remove APEX from the CDB$ROOT" script:
      SQL> @apxremov_con.sql

    4. Recompile to get rid of any invalid objects left over by the removal script:
      SQL> @?/rdbms/admin/utlrp.sql

    That sounds simple, doesn't it?

    Well ... but my journey ends here for today as I'm getting 28 invalid objects (which don't compile) afterwards - and plenty of nice ORA-600 errors in the alert.log looking fantastically similar to Data Dictionary corruptions. I think Part 3 of my journey will be a patching cycle. 

    ORA-00600: internal error code, arguments: [ktsircinfo_num1], [1], [4], [71210]
    ORA-00600: internal error code, arguments: [6711], [4228381], [1], [4228380], [3]

    A look into the list of INVALID objects gives me this result - from the Cloud documentation I see that C##PDBMGR is the owner of the self-service provisioning environment:

    The Oracle Pluggable Database Self-Service Provisioning application provides an interface to Oracle Database 12c Multitenant option and allows for the provisioning of Pluggable Databases (PDBs). You can perform PDB operations including create, clone, open/close, plug/unplug, and drop. Additionally, you can grant others access to the application, giving them rights to create and manage their own PDBs within the quota limits you set.

    So maybe it is not a good idea to remove APEX from the CDB$ROOT in our Oracle DBaaS Cloud environment? I will find out next week ...

    SQL> column object_type heading "Type" Format a15
    SQL> column object_name heading "Name" format a30
    SQL> column owner heading "Owner" format a20
    SQL> set pagesize 100
    SQL> select owner, object_type, object_name from dba_objects where status='INVALID';

    Owner                Type            Name
    -------------------- --------------- ------------------------------
    SYS                  PACKAGE BODY    WWV_DBMS_SQL
    FLOWS_FILES          TRIGGER         WWV_BIU_FLOW_FILE_OBJECTS
    APEX_040200          TRIGGER         APEX$_WS_ROWS_T1
    C##DBAAS_MONITOR     FUNCTION        DBM_IS_CDB
    C##PDBMGR            PACKAGE BODY    PDBSS_PDB_MGR
    C##PDBMGR            TRIGGER         PDBSS_APPLICATION_LOG_BI
    C##PDBMGR            TRIGGER         PDBSS_ERROR_LOG_BI
    C##PDBMGR            TRIGGER         PDBSS_PREFERENCES_BIU
    C##PDBMGR            TRIGGER         PDBSS_RESERVED_DB_NAMES_BIU
    C##PDBMGR            TRIGGER         PDBSS_DB_SIZES_BIU
    C##PDBMGR            TRIGGER         PDBSS_RESOURCE_PLANS_BIU
    C##PDBMGR            TRIGGER         PDBSS_USERS_BIU
    C##PDBMGR            TRIGGER         PDBSS_USER_LOG_BI
    C##PDBMGR            TRIGGER         PDBSS_ACCESS_REQUESTS_BIU
    C##PDBMGR            TRIGGER         PDBSS_DATABASES_BIU
    C##PDBMGR            TRIGGER         PDBSS_DATABASE_LOG_BI
    C##PDBMGR            TRIGGER         PDBSS_DATABASE_ADMINS_BIU
    C##PDBMGR            TRIGGER         BIU_PDBSS_EMAIL_TEMP_TYPES
    C##PDBMGR            TRIGGER         BIU_PDBSS_EMAIL_TEMPLATES
    C##PDBMGR            PACKAGE         PDBSS_FILTER_FW
    C##PDBMGR            PACKAGE         PDBSS_FW
    C##PDBMGR            TRIGGER         PDBSS_HISTORY_BIU
    C##PDBMGR            TRIGGER         PDBSS_NOTIF_BIU
    C##PDBMGR            TRIGGER         PDBSS_TZ_PREF_BIU
    C##PDBMGR            PACKAGE BODY    PDBSS
    C##PDBMGR            PACKAGE BODY    PDBSS_FILTER_FW
    C##PDBMGR            PACKAGE BODY    PDBSS_FW
    C##PDBMGR            PACKAGE BODY    PDBSS_EMAIL_API

    28 rows selected.

    I certainly can drop the user C##PDBMGR - but removing the Database Monitor user C##DBAAS_MONITOR may cause some trouble as the DBaaS Monitor won't work anymore. And furthermore three APEX objects still remain INVALID.

    I'll update you as soon as I have my environment patched and refreshed sometime next week after UKOUG Conference.

    --Mike

    Friday Oct 02, 2015

    OOW 2015 Sessions and Labs - Oracle Open World

    OOW 2015

    OMG ... only a few weeks to go ... Oracle Open World 2015 in San Francisco is coming closer and closer ...

    And this year will be really tough as we have a reduced number of people there - but more work to do than in previous years. 3 talks (2 on Upgrade, 1 on Data Pump), 4 labs (all in the Hotel Nikko, a 15-minute walk from Moscone Center) - plus a good number of customer meetings already lined up. Plus the chance to meet so many great people ... and not to forget the Data Warehouse Global Leaders event at the Oracle HQ. 

    I have that strange feeling that I will be VERY tired when I board the plane on Friday night heading back to Germany ... ;-)

    Focus On Upgrades/Migrations

    As the fantastic application we are using for the OOW content catalog doesn't allow me to link directly to a session, Roy has built a Focus On document for your convenience to guide you to some important talks around Upgrades and Migrations at OOW 2015:


    Talks

    CON6777 - Upgrade and Migrate to Oracle Database 12c: Live and Uncensored!
    October 26 at 13:30h - Moscone South, Room 102

    CON8375 - How to Upgrade Hundreds or Thousands of Databases in a Reasonable Amount of Time
    October 28 at 12:15h - Moscone South, Room 102

    Many customers now have database environments numbering in the hundreds or even thousands. This session addresses the challenge of maintaining technical currency of such an environment while also containing upgrade and migration costs at a reasonable level. Learn from Oracle Database upgrade experts about product features, options, tools, techniques, and services that can help you maintain control of your database environment. You will also see examples of how real customers are successfully meeting this challenge today.

    CON8376 - Deep Dive: More Oracle Data Pump Performance Tips and Tricks
    October 29 at 9:30h - Moscone South, Room 305

    The Oracle Data Pump development team is back with even more performance tips and tricks for DBAs! In this session, learn about Oracle Data Pump features, parameters, and patches - some added since the first patch set of Oracle Database 12c 12.1.0.2 - that will improve performance and decrease overhead for Oracle Data Pump projects. Whether you are an Oracle Data Pump novice or already an expert, you are sure to learn something new in this session that will help you maximize the throughput of your export and import operations.

    HOL10348 - Upgrade, Migrate, and Consolidate to Oracle Database 12c
    Oct 26 at 11:00h
    Oct 27 at 11:45h
    Oct 28 at 13:15h
    Oct 29 at 12:30h
    Hotel Nikko - Golden Gate


    Hope to see you at OOW 2015!

    --Mike

    Thursday Oct 01, 2015

    Some Seoul Impressions - 09/2015

    A very intense week in Seoul, Korea, is over - and Roy and I both returned a bit sick ;-) The pleasures of traveling ...

    But thanks to all our colleagues who did all the orga work, to everybody who traveled from near and far to our 2-day internal workshop (and I still remember many names :-) - it was great fun to spend those two days with all of you) - and of course to the customers and partners we had the pleasure either to visit or to welcome at our workshop. Upgrade is in your hands now - just do it!

    Finally some Seoul impressions - and even though it was my 5th visit to Seoul I realize I still haven't seen much of the city :-( Next time ...

    Roy and I ended our trip again - our usual ritual - with a visit to the Best Bar on the Planet :-) 

    Seoul 1

    Seoul 2

    Seoul 3

    Seoul 4

    Seoul 5

    Seoul 6

    Seoul 7

    Seoul 8

    Seoul 9

    Seoul 10

    Pictures were taken in the Bongeunsa Temple, and in and around the COEX Center.

    CU soon again!

    --Mike

    PS. Forgot to mention this one here ... looks a bit like row-organization inside ... :-)

    Hana

    Monday Sep 28, 2015

    Upcoming Upgrade Workshops in Prague and Bucharest

    Oracle Upgrade Workshops Fall 2015

    I've just returned from a work-intense week in Seoul, South Korea - and there are upcoming Upgrade / Migrate / Consolidate to Oracle Database 12c workshops already on my radar - and both are almost completely booked out. So if you are based in either the Prague or the Bucharest region you may quickly sign up (and show up as well) :-)

    • Upgrade / Migrate / Consolidate to Oracle Database 12c
      Wednesday, October 7, 2015
      Prague, Czech Republic
      @Oracle Czech
      Registration Link is here

    • Upgrade / Migrate / Consolidate to Oracle Database 12c
      Tuesday, October 13, 2015
      Bucharest, Romania
      @Oracle Romania
      Registration Link is here
    Hope to see you there :-)

    --Mike

    Friday Feb 13, 2015

    Hands-On-Lab "Upgrade, Migrate & Consolidate to Oracle Database 12c" available for DOWNLOAD now!

    Wow ... that was a hard piece of work. Roy put a lot of effort into getting our Hands-On-Lab on OTN for download. We promised to have it available after OOW - or at least a description how to create it by yourself. And finally it's there. Find it here:

    A few important things to mention before you start the download: 

    • It's a VirtualBox image
    • You will need to install Oracle VirtualBox first - and please also install the VBox Extensions
    • Your PC must have a 64-bit host operating system
    • You need to enable the virtualization options in your computer's BIOS
    • Your PC should have at least 4GB of RAM - having 8GB is very helpful
    • A fast disk (SSD) will speed up things
    • The instructions are available for download but are included in the download as well
    • The lab will guide you through the following tasks:
      1. Upgrade an 11.2.0.4 database to Oracle 12.1.0.2
      2. Plug this database into a 12.1.0.2 CDB
      3. Migrate an 11.2.0.4 database with Full Transportable Export into another PDB
      4. Unplug a 12.1.0.1 PDB and plug/upgrade it into a 12.1.0.2 CDB

    You'll find a picture as screen background inside the VBox image always giving you guidance about "what to accomplish" and "how to switch environments".
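To give a feeling for what task 2 involves under the hood, here is a minimal sketch of plugging a non-CDB into a CDB. This is not the lab's exact script - the PDB name NCDB12 and the XML manifest path are placeholders:

```
-- In the non-CDB (after the upgrade to 12.1.0.2), generate the manifest
-- while the database is open read-only:
SHUTDOWN IMMEDIATE
STARTUP OPEN READ ONLY
exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb12.xml');
SHUTDOWN IMMEDIATE

-- In the target 12.1.0.2 CDB, plug it in reusing the existing datafiles,
-- convert the data dictionary, and open the new PDB:
CREATE PLUGGABLE DATABASE ncdb12 USING '/tmp/ncdb12.xml' NOCOPY TEMPFILE REUSE;
ALTER SESSION SET CONTAINER = ncdb12;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE ncdb12 OPEN;
```

The lab instructions walk you through each of these steps in detail, including the checks before and after.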

    Enjoy :-)

    --Mike 


    Monday Jan 12, 2015

    Upcoming Upgrade Workshops Jan/Feb 2015

    The new year will start with a good bunch of Upgrade, Migrate & Consolidate to Oracle Database 12c workshops in Japan, USA, Canada and Europe: 

    January 20, 2015 - Tokyo, Japan
    Oracle Database 12c Upgrade Seminar including Hands-On for Partners (by invitation only)

    February 04, 2015 - Irving (Dallas), TX, USA
    Oracle Database 12c Upgrade Seminar - Dallas - Register

    February 05, 2015 - Houston, TX, USA
    Oracle Database 12c Upgrade Seminar - Houston - Register

    February 09, 2015 - Bratislava, Slovakia
    Oracle 12c Database Upgrade Seminar - Bratislava - Register

    February 10, 2015 - Budapest, Hungary
    Oracle 12c Upgrade Seminar Budapest - Register

    February 10, 2015 - Reston, VA, USA
    Oracle 12c Upgrade Seminar Reston - Register

    February 11, 2015 - Atlanta, GA, USA
    Oracle Database 12c Upgrade Seminar - Atlanta - Register

    February 19, 2015 - Nürnberg, Germany
    Franconian User Group: Oracle Database 12c Upgrade
    17:30-21:00 - TU Nürnberg GSO - Room WE.012, Wasertorstraße 10 - Register

    Sunday Nov 30, 2014

    VBOX Hands-on-Lab image - build your own :-)

    Oh ... I know ... I promised to post all the details of how I built the pretty straightforward Hands-On-Lab that Roy, Carol, Cindy, Joe and I used at OOW and on some other occasions to let you upgrade, migrate and consolidate databases to Oracle Database 12c and into Oracle Multitenant.

    And well, some of you have emailed me already ... and I had a feeling my schedule would be very tight after OOW. Even right now (Sunday evening) I'm back at my second home, the Lufthansa Senator Lounge at Munich Airport, waiting for my flight to Rome in an hour or so. Honestly speaking, I really had no time in the past weeks to sit down for two hours and write down all the steps to guide you through the rebuild. And I didn't want to throw out just a few nuggets - my intention is always to give you detailed steps that really work and don't miss anything.

    But I have very good news for all who are waiting for the HOL Image :-)
    Roy is working hard (and I'm confident that he'll succeed) to get the image published on OTN within the next few weeks. So please stay tuned. Even with the Christmas holidays coming up I'm tied to a schedule visiting Rome, Torino, Milan and Brussels, assisting some customers in their final go-live phase for Oracle Database 12c - and I'm really looking forward to that vacation.

    Stay tuned - and thanks again for your patience :-)

    -Mike 

    Thursday Nov 06, 2014

    Beijing, Seoul - and OTN Tour Tokyo - just in one week

    Uhhh ... next week will be a tough traveling experience. 19870 km (12347 mi), all in cattle class on 4 different airlines.

    Picture rights remain at The New York Times

    I tried to avoid Lufthansa as they recently canceled flights I heavily relied on - for no reason! And there's still a huge risk that they'll announce their 9th strike in little more than a year.

    But I'm really looking forward to the workshops, to seeing all the colleagues, customers and of course some great friends. Roy and I will do two internal two-day workshops in Beijing and Seoul, each with more than 50 participants - and we'll include a nice hands-on experience as well. And mid-week I'll head out to Tokyo to present at the DB TECH SHOWCASE 2014 - which is part of the OTN ACE Tour.

    CU soon - thanks!!!

    -Mike 


    Monday Apr 14, 2014

    Upcoming UPGRADE Workshops in Europe - Register Now

    There will be a lot of travel within the next few weeks. So please keep your fingers crossed that Lufthansa pilots don't go on strike for their ridiculous goals again. And I hope to see you at one of our upcoming workshops in one of the cities below.

    Click on the date/location to the RIGHT to register, see location and agenda etc. 

    CU soon :-)

    - Mike and Roy 

    Thursday Mar 06, 2014

    Free Webcast available now On Demand --- Upgrade and Migrate to Oracle Database 12c and Consolidate with Oracle Multitenant

    Almost 90 minutes about Upgrade, Migrate and Consolidate to Oracle Database 12c, with or without the Multitenant Option.
    Available now on demand. Just register and watch it whenever it is convenient for you:

    Register to receive the On Demand Link and Watch It!

    -Mike 

    Tuesday Jan 07, 2014

    Upcoming ASIA Upgrade to Oracle 12c workshops


    Happy New Year to all of you and your families!!!


    We still have a few remaining seats for the upcoming workshops in January 2014.

    Oracle Database 12c Upgrade, Migrate and Consolidate Workshops


    • Singapore - Jan 21, 2014
      • Register Here!
        @Singapore Post Centre Theatrette - Level 5 - 10 Eunos Road 8 - Singapore 408600

    • Jakarta/Indonesia - Jan 22, 2014
    • Kuala Lumpur/Malaysia - Jan 23, 2014

    Oracle's Global Server Technology Upgrade Development Group offers you an opportunity to attend this exclusive full day seminar on how to Upgrade, Migrate & Consolidate to Oracle Database 12c.

    Oracle's latest generation of database technology has significant new features. It offers great potential for migrations and consolidation, making more efficient use of hardware and delivering major improvements in management efficiency.

    In this seminar the Oracle Database Upgrade Development team will guide you through several different examples of upgrade, migration and consolidation techniques and strategies - shown with real world customer cases.

    You will learn:
    • How to take full advantage of the new features and options in Database 12c
    • The changes and enhancements to the Oracle Database upgrade process
    • Best practices to upgrade and migrate successfully

    So register asap and spread the word :-)
    Looking forward to meeting you there!

    -Mike

    About

    Mike Dietrich
    Master Product Manager - Database Upgrade & Migrations - Oracle

    Based in Germany. Interlink between customers/partners and the Upgrade Development. Running workshops between the Arctic and Antarctica. Assisting customers in their reference projects onsite and remotely.

