Friday Feb 13, 2015

Is it always the Optimizer? Should you disable Group By Elimination in 12c?

I wouldn't say it's always the optimizer - but sometimes one or two tiny little things are broken, making it necessary to turn off new functionality for a while.

Please don't misinterpret this posting!
As far as I can see (and I really do work with customers!) the Oracle Database 12c optimizer is more stable, faster and more predictable than in 11.2.0.x. Our Optimizer Architects and Developers have done a great job. But with all the complexity involved, it sometimes takes a few fixes or incarnations until a great new feature really matures. The Group-By-Elimination feature in Oracle Database 12c seems to be such a candidate. 

What does the feature do? A simple example demonstrates it.
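The examples below query a table called dualcopy, which isn't defined in this posting. For a quick test you could create it as a plain copy of DUAL (an assumption on my side - any small table with a low-cardinality column would do):

```sql
-- Hypothetical setup for the examples below:
-- dualcopy as a simple copy of the DUAL table
CREATE TABLE dualcopy AS SELECT * FROM dual;
```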

First the elimination is OFF (FALSE): 

SQL> explain plan for
  2  select /*+ opt_param('_optimizer_aggr_groupby_elim', 'false')*/
  3   dummy, sum(cnt)
  4    from (select dummy,
  5                 count(*) cnt
  6            from dualcopy
  7           group by dummy)
  8   group by dummy
  9  ; 

|  Id | Operation            | Name     |
|   0 | SELECT STATEMENT     |          |
|   1 |  HASH GROUP BY       |          |
|   2 |   VIEW               |          |
|   3 |    HASH GROUP BY     |          |

And now it's ON (TRUE):

SQL> explain plan for
  2  select /*+ opt_param('_optimizer_aggr_groupby_elim', 'true')*/
  3   dummy, sum(cnt)
  4    from (select dummy,
  5                 count(*) cnt
  6            from dualcopy
  7           group by dummy)
  8   group by dummy
  9  ;
| Id  | Operation          | Name     |
|   0 | SELECT STATEMENT   |          |
|   1 |  HASH GROUP BY     |          |

By comparing the two execution plans you'll see the difference immediately: with the feature enabled, the inner HASH GROUP BY and the VIEW are eliminated.

But there seem to be a few issues with this new feature, such as several wrong-query-result bugs. The issues will be tracked under the non-public Bug 20508819. Support may release a note as well.

At the moment we'd recommend to set:

_optimizer_aggr_groupby_elim=FALSE
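Assuming the parameter in question is the same underscore parameter used in the opt_param hints above (my assumption), disabling it would look like this:

```sql
-- Disable group-by elimination until the wrong-results fixes are available.
-- As with any underscore parameter, set it only on advice of Oracle Support.
ALTER SYSTEM SET "_optimizer_aggr_groupby_elim"=FALSE;

-- Or, for testing, just at session level:
ALTER SESSION SET "_optimizer_aggr_groupby_elim"=FALSE;
```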



Wednesday Jul 11, 2012

Upgrade to - OCM: ORA-12012 and ORA-29280

OCM is the Oracle Configuration Manager, a tool which proactively monitors your Oracle environment and provides this information to Oracle Software Support. As OCM is installed by default in many databases but is somewhat independent of the database's version, you wouldn't expect any issues during or after a database upgrade ;-)

But after the upgrade from Oracle to Oracle on Exadata X2-2 one of my customers found the following error in the alert.log every 24 hours:

Errors in file /opt/oracle/diag/rdbms/db/trace/db_j001_26027.trc:
ORA-12012: error on auto execute of job "ORACLE_OCM"."MGMT_CONFIG_JOB_2_1"
ORA-29280: invalid directory path
ORA-06512: at "ORACLE_OCM.MGMT_DB_LL_METRICS", line 2436
ORA-06512: at line 1

Why is that happening - and how do you solve the issue?

OCM is trying to write to a local directory which does not exist. Besides that, the OCM version delivered with the Oracle Database patch set is older than the newest available OCM Collector, 10.3.7 - the one which has that issue fixed.

So you'll either drop OCM completely if you don't use it:

SQL> drop user ORACLE_OCM cascade;

or you'll disable the collector jobs:

SQL> exec dbms_scheduler.disable('ORACLE_OCM.MGMT_CONFIG_JOB');

SQL> exec dbms_scheduler.disable('ORACLE_OCM.MGMT_STATS_CONFIG_JOB');
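You can then check the state of the collector jobs (a simple sketch - job names may vary slightly per installation):

```sql
-- Verify that the OCM collector jobs are now disabled
SELECT owner, job_name, enabled
  FROM dba_scheduler_jobs
 WHERE owner = 'ORACLE_OCM';
```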

or you'll have to reconfigure OCM - please see MOS Note 1453959.1 for a detailed description of how to do that. It's basically a matter of executing the script ORACLE_HOME/ccr/admin/scripts/installCCRSQL - but there may be other things to consider, especially in a RAC environment.

- Mike

Just for the record: Bug 12927935 - ORA-12012, ORACLE_OCM.MGMT_DB_LL_METRICS, ORA-29280

Tuesday Jul 03, 2012

Data Pump: Consistent Export?

Ouch ... I have to admit that in several workshops in the past weeks I said that a Data Pump export with expdp is per se consistent.

Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment: we were discussing parameters in the expdp par file, so I did some research on MOS and asked my colleagues. Here are the results of my "research":

  • MOS Note 377218.1 has a nice example showing that a Data Pump export of a partitioned table with concurrent DELETEs on that table is inconsistent
  • Background:
    Back in the old 9i days, when Data Pump was designed, flashback technology wasn't as popular and well known as it is today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export doesn't operate in consistency mode by default
  • To get a consistent data pump export with expdp you'll have to set:
    FLASHBACK_TIME=SYSTIMESTAMP
    in your parameter file. Then the export will be consistent as of the timestamp when the process was started. You could use FLASHBACK_SCN instead and determine the SCN beforehand if you'd like to be exact.
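A minimal parameter file along these lines might look as follows (directory, dump file and log file names are placeholders I made up):

```
# consistent.par - hypothetical example parameter file
directory=DATA_PUMP_DIR
dumpfile=full_consistent.dmp
logfile=full_consistent.log
full=y
flashback_time=systimestamp
```

If you prefer FLASHBACK_SCN, you can determine the current SCN beforehand with: SELECT current_scn FROM v$database;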

So sorry if I had proclaimed a feature which unfortunately is not there by default :-(

- Mike


Mike Dietrich - Oracle
Master Product Manager - Database Upgrade & Migrations - Oracle Corp

Based near Munich, Germany, and spending plenty of time in airplanes to run upgrade workshops or to work onsite or remotely with reference customers. Acting as the interface between customers/partners and the Upgrade Development team.


