Wednesday Jul 01, 2015

Log Writer Slave Issues in 12.1.0.2 - mainly on IBM AIX

Currently we see a lot of issues with the Multiple Logwriter feature in Oracle Database 12.1.0.2, especially on the IBM AIX platform.

What are Multiple LGWRs?

You will see multiple LGnn (Log Writer Worker) processes on multiprocessor systems. The LGWR (Log Writer) process creates these worker processes (slaves) to improve the performance of writing to the redo log. LGnn (Log Writer Worker) processes are not used when there is a SYNC standby destination. Possible processes include LG00-LG99.
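
A quick way to check whether log writer workers are actually running on an instance is to look at V$PROCESS - a minimal sketch:

SQL> select pname from v$process where pname like 'LG%' order by pname;

On a multiprocessor system you should see LGWR plus a number of LG00, LG01, ... worker processes in the output.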

Here is a list of new background processes in Oracle Database 12.1:
MOS Note 1625912.1 - New Background Processes In 12c 

For deep dive information about how to trace the LGWR (and DBWR), the differences between 11.2 and 12.1, the relevant wait events and much more, please see this Deep Dive PDF from a UKOUG talk by Frits Hoogland.

Known Symptoms:


Solution:

Turn off multiple log writers on IBM AIX for the moment. In some cases this may solve issues on other platforms as well, but please don't take this as a generic recommendation for all platforms - we are seeing the issues mainly on IBM AIX.

Set in your spfile: 

_use_single_log_writer=true
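
For example (a minimal sketch - the double quotes are required because of the leading underscore, and the change takes effect only after an instance restart):

SQL> alter system set "_use_single_log_writer"=true scope=spfile;
SQL> shutdown immediate
SQL> startup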


--Mike 

Tuesday Jun 30, 2015

Some Data Pump issues:
+ DBMS_DATAPUMP Import via NETWORK_LINK fails
+ STATUS parameter giving bad performance

One of my dear Oracle ACS colleagues (Danke Thomas!) highlighted this issue to me after one of his lead customers hit this pitfall a week ago.

DBMS_DATAPUMP Import Over NETWORK_LINK fails with ORA-39126 / ORA-31600

Symptoms are: 

KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
ORA-31600: invalid input value IN ('VIEWS_AS_TABLES/TABLE_DATA') for parameter VALUE in function SET_FILTER
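
For context, a network-mode import of this kind is typically driven via the DBMS_DATAPUMP API - a minimal sketch, where the database link MY_SOURCE_LINK and the HR schema are made-up examples:

declare
  h  number;
  js varchar2(30);
begin
  -- open an import job that reads directly from the source database
  -- over the database link - no dump file is involved
  h := dbms_datapump.open(operation   => 'IMPORT',
                          job_mode    => 'SCHEMA',
                          remote_link => 'MY_SOURCE_LINK');
  -- restrict the job to the (hypothetical) HR schema
  dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''HR'')');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, js);
end;
/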

This can be cured with the patch for bug 19501000 - but this patch can conflict with bug 18793246 (EXPDP slow, showing base object lookup during Data Pump export, which causes a full table scan per object) and therefore may require a merge patch - patch 21253883 is the one to go with in this case.


Another issue Roy just came across:

Data Pump is giving bad performance in Oracle 12.1.0.2 when the STATUS parameter is used on the command line

Symptoms are: 

It looks like the routines we are using to get status are significantly slower in 12c than in 11g. On 11.2.0.4 a STATUS call of expdp/impdp runs in 0.2-0.3 seconds, but in 12.1.0.2 it takes 0.8-1.6 seconds. As a result the client falls behind on 12.1.0.2; it takes about 0.5-0.8 seconds to put out each line in the logfile because it is getting the status each time. With over 9,000 tables in a test, that half a second really adds up. The result in this test case was that the Data Pump job completed in 35 minutes, but it took another 30-35 minutes to finish putting out messages on the client (the log file was already complete) and return control to the command line. This happens only when you use STATUS on the command line.

Recommendation is:  

Don't use the STATUS parameter on the expdp/impdp command line in Oracle 12.1.0.2 until the issue is fixed. This will be tracked under Bug 21123545.
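
For illustration, this is the pattern to avoid for now (directory, dump file and schema names are made up):

$ expdp system schemas=HR directory=DP_DIR dumpfile=hr.dmp logfile=hr.log STATUS=10

If you need progress information, you can still attach to a running job from a second session and request the status on demand in interactive mode (the job name will differ on your system):

$ expdp system attach=SYS_EXPORT_SCHEMA_01
Export> STATUS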

--Mike 

Tuesday May 26, 2015

Oracle 12.1.0.2 - Security Behavior Change with non-SYSDBA Triggers

Sometimes things get revealed on unexpected occasions. This one happened during a recent customer upgrade to Oracle Database 12c with a 3rd party geospatial application installed (ESRI).

At the very end of the upgrade the customer saw many ORA-1031 (insufficient privileges) errors and it seemed to be that nothing was working correctly anymore. 

This happened during the run of catupend.sql. The following code path in  catupend.sql causes the error.

cursor ddl_triggers is                                       
   select o.object_id from dba_triggers t, dba_objects o     
    where t.owner = o.owner and t.trigger_name = o.object_name
      and o.object_type = 'TRIGGER'                          
      and (t.triggering_event like '%ALTER%' or              
    t.triggering_event like '%DDL%');     


ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of
SDE.DB_EV_ALTER_ST_METADATA
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_UTILITY", line 1279
ORA-06512: at line 20

Apparently there's no access to an application trigger anymore - one which got deployed as a system trigger (for more information about ESRI's system triggers please click this link). Even though this is strange it doesn't seem like a big issue at first. But in fact it is, as this procedure failed and caused other objects not to get validated correctly. So subsequent actions (for instance the run of utlu121s.sql, the post-upgrade script) failed with ORA-1031 as well, pointing to DBMS_UTILITY.

The customer [Danke Andy!!!] found the workaround himself by pattern matching similar issues in MOS and trying some grants - the third one did the trick:

  • GRANT ADMINISTER DATABASE TRIGGER to SDE;


So it was obvious that something in the security architecture in Oracle Database 12.1.0.2 had been changed - and somebody forgot to document it. Later on I learned that this change had also been introduced with the July 2013 PSU/CPU. I don't blame the customer for not applying PSUs for almost two years - I knew that upfront, and we are implementing a 2-PSUs-per-year strategy now with the upgrade to Oracle Database 12c.

The system trigger ESRI had created couldn't be validated anymore under the context of the SDE (ESRI's application) user. Therefore it failed and caused other actions to fail as well.

This behavior change is related to "SYSDBA privilege should not be available in non-SYS owned DR procedure / trigger execution", which was first fixed in 12.1.0.2 and then backported as part of the July 2013 CPU. When SYS executes a non-SYS-owned DR procedure or trigger, the SYS privileges are not available during the procedure/trigger execution - the privileges of the procedure/trigger owner prevail.
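
To check upfront whether a database contains such application-owned DDL/system triggers, you could run a query along the lines of the catupend.sql cursor shown above - a sketch:

SQL> select owner, trigger_name, triggering_event
  2    from dba_triggers
  3   where triggering_event like '%ALTER%'
  4      or triggering_event like '%DDL%';

Any non-SYS owner showing up here may need the ADMINISTER DATABASE TRIGGER privilege after this change.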

--Mike

Friday Feb 13, 2015

Is it always the Optimizer? Should you disable Group By Elimination in 12c?

I wouldn't say it's always the optimizer - but sometimes one or two tiny little things are broken, making it necessary to turn off new functionality for a while.

Please don't misinterpret this posting!
As far as I can see (and I really do work with customers!) the Oracle Database 12.1.0.2 Optimizer is more stable, faster and more predictable than in 11.2.0.x. Our Optimizer Architects and Developers have done a great job. But with all the complexity involved it sometimes takes a few fixes or incarnations until a great new feature really matures. The Group-By-Elimination feature in Oracle Database 12c seems to be such a candidate.

What does the feature do? A simple example demonstrates it.

First the elimination is OFF (FALSE): 

SQL> explain plan for
  2  select /*+ opt_param('_optimizer_aggr_groupby_elim', 'false')*/
  3   dummy, sum(cnt)
  4    from (select dummy,
  5                 count(*) cnt
  6            from dualcopy
  7           group by dummy)
  8   group by dummy
  9  ; 
Explained

-----------------------------------------
|  Id | Operation            | Name     |
-----------------------------------------
|   0 | SELECT STATEMENT     |          |
|   1 |  HASH GROUP BY       |          |
|   2 |   VIEW               |          |
|   3 |    HASH GROUP BY     |          |
|   4 |     TABLE ACCESS FULL| DUALCOPY |
-----------------------------------------


And now it's ON (TRUE):

SQL> explain plan for
  2  select /*+ opt_param('_optimizer_aggr_groupby_elim', 'true')*/
  3   dummy, sum(cnt)
  4    from (select dummy,
  5                 count(*) cnt
  6            from dualcopy
  7           group by dummy)
  8   group by dummy
  9  ;
Explained
 
---------------------------------------
| Id  | Operation          | Name     |
---------------------------------------
|   0 | SELECT STATEMENT   |          |
|   1 |  HASH GROUP BY     |          |
|   2 |   TABLE ACCESS FULL| DUALCOPY |
---------------------------------------

By comparing the two execution plans you'll see the difference immediately.

But there seem to be a few issues with this new feature, such as several wrong-query-result bugs. The issues will be tracked under the non-public bug 20508819. Support may release a note as well.

At the moment we'd recommend setting:

"_optimizer_aggr_groupby_elim"=false

--Mike 

Wednesday Jul 11, 2012

Upgrade to 11.2.0.3 - OCM: ORA-12012 and ORA-29280

OCM is the Oracle Configuration Manager, a tool which proactively monitors your Oracle environment and provides this information to Oracle Software Support. As OCM is installed by default in many databases but is somewhat independent of the database's version, you wouldn't expect any issues during or after a database upgrade ;-)

But after the upgrade from Oracle 11.1.0.7 to Oracle 11.2.0.3 on Exadata X2-2 one of my customers found the following error in the alert.log every 24 hours:

Errors in file /opt/oracle/diag/rdbms/db/trace/db_j001_26027.trc:
ORA-12012: error on auto execute of job "ORACLE_OCM"."MGMT_CONFIG_JOB_2_1"
ORA-29280: invalid directory path
ORA-06512: at "ORACLE_OCM.MGMT_DB_LL_METRICS", line 2436
ORA-06512: at line 1

Why is that happening - and how do you solve the issue?

OCM is trying to write to a local directory which does not exist. Besides that, the OCM version delivered with Oracle Database patch set 11.2.0.3 is older than the newest available OCM Collector 10.3.7 - the one that has this issue fixed.

So you can either drop OCM completely if you don't use it:

SQL> drop user ORACLE_OCM cascade;

or disable the collector jobs:

SQL> exec dbms_scheduler.disable('ORACLE_OCM.MGMT_CONFIG_JOB');

SQL> exec dbms_scheduler.disable('ORACLE_OCM.MGMT_STATS_CONFIG_JOB');

or reconfigure OCM - please see MOS Note 1453959.1 for a detailed description of how to do that. It basically comes down to executing the script $ORACLE_HOME/ccr/admin/scripts/installCCRSQL - but there may be other things to consider, especially in a RAC environment.
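
Just as a rough sketch of what the reconfiguration looks like - please follow the MOS note for the exact syntax on your platform and release, and note that the SID below is only a placeholder:

$ cd $ORACLE_HOME/ccr/admin/scripts
$ ./installCCRSQL collectconfig -s ORCL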

- Mike


Just for the record: Bug 12927935: ORA-12012, ORACLE_OCM.MGMT_DB_LL_METRICS, ORA-29280


Tuesday Jul 03, 2012

Data Pump: Consistent Export?

Ouch ... I have to admit that I did say in several workshops in the past weeks that a Data Pump export with expdp is per se consistent.

Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment. We were discussing parameters in the expdp par file, and I asked my colleagues after doing some research on MOS. Here are the results of my "research":

  • MOS Note 377218.1 has a nice example showing that a Data Pump export of a partitioned table with concurrent DELETEs on that table is inconsistent
  • Background:
    Back in the old 9i days when Data Pump was designed, flashback technology wasn't as popular and well known as today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export does not operate in consistent mode by default
  • To get a consistent Data Pump export with expdp you'll have to set:
    FLASHBACK_TIME=SYSTIMESTAMP
    in your parameter file (see the example parameter file after this list). Then the export will be consistent as of the timestamp when the process was started. You could use FLASHBACK_SCN instead and determine the SCN beforehand if you'd like to be exact.
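
As an illustration, such a parameter file could look like this (a minimal sketch - schema, directory and file names are made up):

$ cat exp_consistent.par
schemas=HR
directory=DATA_PUMP_DIR
dumpfile=exp_hr.dmp
logfile=exp_hr.log
flashback_time=systimestamp

$ expdp system parfile=exp_consistent.par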

So sorry if I had proclaimed a feature which unfortunately is not there by default :-(

- Mike

About

Mike Dietrich - Oracle
Master Product Manager - Database Upgrade & Migrations - Oracle Corp

Based near Munich/Germany and spending plenty of time on airplanes running upgrade workshops or working onsite or remotely with reference customers. Acting as the interlink between customers/partners and Upgrade Development.

Follow me on TWITTER

Contact me via LinkedIn or XING
