How do I capture a 10053 trace for a SQL statement called in a PL/SQL package?

Traditionally if you wanted to capture an Optimizer trace (10053) for a SQL statement you would issue an alter session command to switch on a 10053 trace for that entire session, and then issue the SQL statement you wanted to capture the trace for. Once the statement completed you would exit the session to disable the trace. You would then look in the USER_DUMP_DEST directory for the trace file.
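On older releases that traditional approach looks something like this (the statement shown is just an illustration):

```sql
-- Enable the Optimizer trace for the whole session
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Issue the statement you want traced (it must be hard parsed)
SELECT SUM(amount_sold) FROM sales WHERE cust_id = 10;

-- Disable the trace (or simply exit the session)
ALTER SESSION SET EVENTS '10053 trace name context off';
```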

But what if the SQL statement you were interested in was actually called as part of a PL/SQL package? Oracle Database 11g introduced a new diagnostic events infrastructure, which greatly simplifies the task of generating a 10053 trace for a specific SQL statement in a PL/SQL package. All you need to know is the SQL_ID of the statement you are interested in. Instead of turning on the trace event for the entire session, you can now switch it on for a specific SQL_ID. Oracle will then capture a 10053 trace for the corresponding SQL statement when it is issued in that session. Remember, the SQL statement still has to be hard parsed for the 10053 trace to be generated. Let's begin our example by creating a PL/SQL package called 'cal_total_sales'.
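The package listing did not survive here; a minimal sketch of what it might look like (the function name comes from a later comment, the table and columns from the statement traced below) is:

```sql
CREATE OR REPLACE PACKAGE cal_total_sales AS
  FUNCTION get_total_sales(p_cust_id IN NUMBER) RETURN NUMBER;
END cal_total_sales;
/

CREATE OR REPLACE PACKAGE BODY cal_total_sales AS
  FUNCTION get_total_sales(p_cust_id IN NUMBER) RETURN NUMBER IS
    v_total NUMBER;
  BEGIN
    -- This is the statement we want the 10053 trace for;
    -- the PL/SQL bind p_cust_id shows up as :B1 in V$SQL
    SELECT SUM(amount_sold)
      INTO v_total
      FROM sales
     WHERE cust_id = p_cust_id;
    RETURN v_total;
  END get_total_sales;
END cal_total_sales;
/
```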

The SQL statement we are interested in is the same as the one in our original example, SELECT SUM(AMOUNT_SOLD) FROM SALES WHERE CUST_ID = :B1. We need to know the SQL_ID of this SQL statement to set up the trace, and we can find it in V$SQL.
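Assuming the statement has been executed at least once, a lookup along these lines will return its SQL_ID:

```sql
SELECT sql_id, sql_text
  FROM v$sql
 WHERE sql_text LIKE 'SELECT SUM(AMOUNT_SOLD) FROM SALES%';
```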

We now have everything we need to generate the trace.
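Putting it together, the trace is enabled for just that SQL_ID, the package is executed, and the event is switched off again. The SQL_ID below is a placeholder taken from elsewhere in this post, and TRACEFILE_IDENTIFIER simply makes the trace file easier to find:

```sql
-- Make the trace file easy to identify
ALTER SESSION SET tracefile_identifier = 'cal_total_sales_10053';

-- Enable the Optimizer trace for this SQL_ID only
ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:a8phhr6w1an53]';

-- Execute the package so the statement is hard parsed in this session
DECLARE
  v_sales NUMBER;
BEGIN
  v_sales := cal_total_sales.get_total_sales(10);
END;
/

-- Switch the event back off
ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';
```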

Finally, you would look in the USER_DUMP_DEST directory for the trace file with the name you specified.
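If you are unsure where the trace file went, a query such as the following (available from 11g via the Automatic Diagnostic Repository) shows the session's trace file location:

```sql
SELECT value
  FROM v$diag_info
 WHERE name = 'Default Trace File';
```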

Maria Colgan


Interesting feature, but the point would be to find the sql_id of a SQL statement which was not hard parsed. In your example, how could you find the sql_id in v$sql if the query was not hard parsed?

Posted by guest on November 16, 2011 at 07:45 AM PST #

Eric, assuming the statement was parsed at some point in the past you could find the SQL_ID in the AWR repository.
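For example, assuming the statement was captured by an AWR snapshot, something along these lines would locate it:

```sql
SELECT sql_id, sql_text
  FROM dba_hist_sqltext
 WHERE sql_text LIKE 'SELECT SUM(AMOUNT_SOLD) FROM SALES%';
```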

Posted by Maria Colgan on November 16, 2011 at 07:52 AM PST #

In practice, we should call a script which contains all of those commands: starting the trace, executing the package, and ending the trace.

Otherwise, the time spent typing out those three commands will be included in the trace, giving you inaccurate TIM values in your trace file.
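A SQL*Plus script along those lines might look like this (file name, package call, and SQL_ID are illustrative):

```sql
-- trace_pkg.sql: run the whole sequence back-to-back
ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:a8phhr6w1an53]';

DECLARE
  v_total NUMBER;
BEGIN
  v_total := cal_total_sales.get_total_sales(10);
END;
/

ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';
```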

Posted by jimmyb on November 16, 2011 at 08:53 AM PST #

I agree with Jimmy. It is far wiser to create a script to run these commands, especially if you misspell things as often as I do! : )

Posted by Maria Colgan on November 16, 2011 at 08:55 AM PST #

Great post as usual. I have only one question: where is the "new diagnostic events infrastructure" documented, i.e. all the various options of the events 'trace ...' command?
Of course, by this question I reveal that I haven't read (all) the documentation.

Posted by Wolfgang Breitling on November 16, 2011 at 11:44 PM PST #

Wolfgang, that is a really good question. I found out about the new event infrastructure from one of the developers. I searched the Oracle documentation and there are references to aspects of it in the Performance Tuning Guide as well as in the Reference Guide, but I can't find a section that clearly discusses the new tracing in detail. However, I suspect support has created a MOS note on this, so I will try to find it later today and update the blog with it!

Posted by Maria Colgan on November 17, 2011 at 06:00 AM PST #

2 questions:

1) Can I set this at the SYSTEM level? That way, any session that hard parses a particular sql_id will generate a 10053 trace, and I don't have to run the PL/SQL in my own session.

2) How can I do a similar thing for a 10046 trace for a specific sql_id?

Posted by dba on November 17, 2011 at 06:18 AM PST #

Yes, you can set it at the system level and have it capture the trace for the SQL_ID in any session.
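At the system level the same event syntax applies (SQL_ID illustrative):

```sql
-- Any session that hard parses this SQL_ID will produce a 10053 trace
ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:a8phhr6w1an53]';
```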

Posted by Maria Colgan on November 17, 2011 at 09:49 AM PST #

If I set it at the system level, then how do I unset it after a while?

Posted by dba on November 18, 2011 at 04:29 AM PST #

Prashant, you can unset it at the system level using:
alter system set events 'trace [SQL_Optimizer.*] off';

Posted by Maria Colgan on November 18, 2011 at 05:26 AM PST #

Hi Maria,
Great presentation at NCOUG last week.

I was trying to find some clue about how the CBO breaks a tie when two index range scans have the same cost in a 10053 trace file. I have heard it is object_id, or simply the order in which the CBO evaluates the possible plans, but I have no solid source to confirm that.

For instance, from the trace file I could not figure out why foo_IX1 was picked over foo_UX. It seems both have cost 12.30 (I masked the real index names for privacy, as this is from work).

Thank you!

Access Path: index (AllEqGuess)
  Index: foo_IX1
  resc_io: 41.00  resc_cpu: 0
  ix_sel: 0.000016  ix_sel_with_filters: 0.000016
  Cost: 12.30  Resp: 12.30  Degree: 1
Access Path: index (RangeScan)
  Index: foo_UX
  resc_io: 41.00  resc_cpu: 0
  ix_sel: 0.000000  ix_sel_with_filters: 0.000000
  Cost: 12.30  Resp: 12.30  Degree: 1
Best:: AccessPath: IndexRange
  Index: foo_IX1
  Cost: 13.00  Degree: 1  Resp: 12.30  Card: 1.57  Bytes:

Posted by jianhui on November 18, 2011 at 08:34 AM PST #

Hi Jianhui,

The Optimizer selected foo_IX1 because the cost was slightly lower than that of foo_UX. Cost for foo_IX1 was 'Cost:12.30' while the cost for foo_UX was actually 'Cost:13.00'.

But if it had been a true tie, then the Optimizer begins by picking the index which has the highest number of columns matching the query predicates. For example, if you have predicates on columns A and B only, then we pick the index on (A,B) rather than the index on (A,B,C). If there is still a tie, for example when the available indexes are (A,B,C) and (A,B,D), then we favor the index with the highest number of distinct keys (combinations of A,B values) and the lowest number of leaf blocks.

Posted by Maria Colgan on November 18, 2011 at 09:10 AM PST #

Thank you for the comment on how the CBO breaks the tie. Is there any official doc or MOS article discussing the tie-breaking rules further? (I was hoping the 10053 trace file would directly dump out the rule it uses.)

Secondly, I read the trace file differently than what you said: it says the foo_UX cost was 12.30, not 13.00. Did I interpret it incorrectly? You can see the 12.30 value appears twice in the trace file copy I pasted here.

But I did get an answer about why IX1 was picked, if this is indeed a tie situation: IX1 does have more columns matching the WHERE clause.


Posted by Jianhui on November 18, 2011 at 09:55 AM PST #

Wolfgang, Maria,
if you search for the full syntax of this great new feature, just ask Tanel Poder:
happy tracing,

Posted by Martin Berger on November 19, 2011 at 01:40 AM PST #

Hi Maria.

I was wondering if anyone with the alter session privilege can do this for any SQL_ID, or is it tied to security at all?

A lot of times I may be logged in with a non-DBA account and want to see the trace for a session running under a different ID.

I think I'll have to go play with this.



Posted by Lewis Cunningham on November 21, 2011 at 02:19 AM PST #

Hi Lewis,

You need the DBA privilege to issue this ALTER SYSTEM command. Without it you get the following error:

SQL> alter system set events 'trace[rdbms.SQL_Optimizer.*][sql:a8phhr6w1an53]';
alter system set events 'trace[rdbms.SQL_Optimizer.*][sql:a8phhr6w1an53]'
ERROR at line 1:
ORA-01031: insufficient privileges


Posted by Maria Colgan on November 21, 2011 at 09:09 AM PST #

An alternative method:

Posted by Kevin on November 24, 2011 at 11:39 PM PST #

An alternative is to use the API DBMS_SQLDIAG.DUMP_TRACE.
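A sketch of that approach, assuming the DBMS_SQLDIAG.DUMP_TRACE parameters as documented for 11g (the SQL_ID is illustrative); it re-parses the cached statement, so you do not need to re-execute it yourself:

```sql
BEGIN
  DBMS_SQLDIAG.DUMP_TRACE(
    p_sql_id       => 'a8phhr6w1an53',
    p_child_number => 0,
    p_component    => 'Optimizer',        -- 10053-style Optimizer trace
    p_file_id      => 'dump_trace_10053'  -- becomes part of the trace file name
  );
END;
/
```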

Posted by zebonsoft on November 24, 2011 at 11:43 PM PST #


Could you please explain where in the syntax below we indicate whether the trace should be a 10053 or a 10046 trace event?

alter session set events 'trace[rdbms.SQL_Optimizer.*][sql:sql_id]';


Posted by harsha on November 29, 2011 at 04:09 PM PST #
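For reference, the component name inside the brackets is what selects the kind of trace: SQL_Optimizer corresponds to the old 10053 event, while sql_trace (shown with its options in a later comment) corresponds to 10046. The SQL_ID below is illustrative:

```sql
-- Optimizer (10053-style) trace for one SQL_ID
ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:a8phhr6w1an53]';

-- SQL trace (10046-style) for one SQL_ID, with waits and binds
ALTER SESSION SET EVENTS 'sql_trace[SQL:a8phhr6w1an53] wait=true,bind=true';
```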

Thanks for this Maria.

When using a statement such as this one:
alter system set events 'sql_trace[SQL:aua4zfxn0953q] plan_stat=all_executions,wait=true,bind=true';

... I found that the trace could not be disabled with this:
alter system set events 'trace[rdbms.SQL_Execution.*] off';

Rather, an explicit command for the SQL_ID was required:

alter system set events 'sql_trace[SQL:aua4zfxn0953q] off';

Also, in this case (sql_id tracing) when the trace is enabled, all subsequent SQL statements for the session are traced as well.

Posted by guest on July 11, 2012 at 09:42 AM PDT #

I had not intended the previous comment to be anonymous, guess I forgot to fill in my name.

Posted by Jared on July 11, 2012 at 10:10 AM PDT #

Hi Maria, just to note an anomaly I found using dump_trace.
There is a bug/feature in using materialized views: see note 1326110.1, where a piece of SQL works fine but hangs when it is used to create a materialized view.
Why is that relevant here? Well, read on...

Run explain on a piece of SQL that will trigger that bug and you get a plan; let's call it GOOD_PLAN.
Wrap it in a "create materialized view as <SQL select statement>" and, according to the above note, the query will hang, which it did for me. However, during my troubleshooting I also found, by querying v$sql_plan for the sql_id, that the plan had changed at run time to BAD_PLAN.
In trying to see why the plan had changed, I ran DBMS_SQLDIAG.DUMP_TRACE against the sql_id, but the resulting trace showed GOOD_PLAN, as it only reparsed the SQL itself. It didn't regenerate the BAD_PLAN as there wasn't a "create mat. view" in the reparse.
Yes, this is related to a bug and it shouldn't happen in a normal working day, but the behavior should be noted in case any bug/feature causes a run-time change of a plan, i.e. believe your eyes and contact support. Hopefully this will save someone some time in the future.

Posted by guest on December 28, 2012 at 02:00 AM PST #

Hi Maria,
Thank you for all the information you share with us. Under what specific circumstances do you believe your function GET_TOTAL_SALES could raise either a NO_DATA_FOUND or a TOO_MANY_ROWS exception?
Marius NITU

Posted by guest on March 21, 2013 at 02:47 AM PDT #


The Oracle Optimizer blog is written by members of the Optimizer development team. The goal of this blog is to provide an insight into the workings of the Optimizer and the statistics it relies on. The views expressed on this blog are our own and do not necessarily reflect the views of Oracle and its affiliates. The views and opinions expressed by visitors on this blog are theirs solely and may not reflect ours.

