When you’re trying to get SQL to run faster there’s one thing that’s vital: an execution plan. In this post we’ll investigate four ways to create these:
But before we begin let's answer the question:
The plan for a SQL statement is a set of instructions. This tells the database how to access the data and join it together.
Plans come in two varieties:
These sound similar. But they’re different. To understand why consider this example.
You’re making a car journey. You plan it beforehand. This is the route you expect to take.
But just as you’re going to leave, you hear on the news that there’s been an accident on your chosen route. This will make it much slower. So you go a different way.
There are two routes here. The one you expected to take and the one you actually took.
After you arrive you wonder whether you could have completed the journey faster. To figure this out, you need to look at where you went. Not where you planned to go.
An explain plan predicts how Oracle will process your query.
An execution plan describes the steps it actually took.
Just as in the driving example above, Oracle may use a different route than the one it predicted. As Tom Kyte discusses, there are several reasons this could happen. So to diagnose your queries, you need to know what Oracle did. Not what it guessed it might do!
"Explain plan" sounds a lot like "execution plan". This can lead to confusion. Many times when I ask people to supply an execution plan they provide one of the explain variety instead.
I think it would be better if we gave these very different names. For example, calling execution plans “described paths” or similar. This would help alert people to the fact these are different things (though they look similar) and reduce the chance of confusion.
Top tip: If someone asks you for “the plan” they’re usually looking for one of the execution variety. I.e. what really happened.
So with that cleared up, how do you go about gathering them?
First up, autotrace. This is a freely available tool. You can enable it in SQL*Plus using the “set autotrace” command.
With this on, after each query Oracle displays its output. Below this is the plan along with its performance stats:
If your query returns hundreds or thousands of rows you have to wait for Oracle to display them all. This can be a pain.
Luckily you can suppress the results with the trace[only] option. You can also choose to show just the plan or the stats. For example, to hide the query output and show just the stats, use:
set autotrace trace stat
To include the output and the plan, enter
set autotrace on exp
Once you're finished you can switch it off with:
set autotrace off
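Putting these commands together, a quick SQL*Plus session might look like this (a minimal sketch; the employees query is just a made-up example):

set autotrace traceonly

select * from employees where department_id = 50;

-- SQL*Plus suppresses the rows, showing the plan (of the explain variety!)
-- along with the run-time statistics

set autotrace off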
While using SQL*Plus is better than nothing, it has several limitations:
So while it’s useful for a quick finger in the air, there are better versions available.
Such as:
This is a step up from the SQL*Plus version. It displays the stats for every step of the plan. So for each operation you can see metrics such as:
This makes it much easier to see which steps are consuming the most resources.
To see how to configure and run it, check out this video:
Besides the step-by-step breakdown, autotrace in SQL Developer has more advantages over SQL*Plus. These include:
This is a big step up from the SQL*Plus version. So from now on when I refer to autotrace I mean the SQL Developer version.
A quick word of warning. By default SQL Developer only fetches the number of rows according to the "SQL array fetch size" parameter. For queries that return a large number of rows this could miss a big chunk of the work. Ensure you set it to "Fetch all rows".
Often when you're tuning queries you want to save the plan for later reference or to share with others. To do this, right click the plan and you'll get an option to "Export HTML":
Save this and you'll get a file like this:
This all sounds great.
But there is one big drawback to autotrace: you have to wait for the query to finish!
If your SQL takes minutes or hours to complete waiting for this is a pain. It would be awesome if you could see query progress in real time.
Which brings us to the next method:
The SQL Monitor levels up autotrace. It provides similar operation-level stats. But with an added bonus. You can view the progress of execution plans in real time!
Got a troublesome full tablescan? You can watch while Oracle churns away at it. Instead of having to wait for the query to finish, you can see the plan immediately. This makes identifying bottlenecks easy.
Even better, unlike autotrace which you need to run manually, Oracle will capture the plan for you automatically.
So how do you view the plans?
They're available in SQL Developer by going to Tools -> Monitor SQL…
Or you can view them in the Performance tab of Enterprise Manager:
Clicking on the SQL ID for your statement brings up the full details:
As with SQL Developer, you can expand and collapse the sections of your plan. For more details on reading these reports see this post on monitoring parallel execution.
For those of you who prefer text, you can output the plans using SQL. To do so, use the following query:
select dbms_sqltune.report_sql_monitor( sql_id => '4htx5uyx0gxxx', type => 'TEXT', report_level => 'ALL' ) as report from dual;
You'll need to replace the SQL ID parameter with the ID of your query. You can find this with the following statement:
select sql_id, sql_text from v$sql where sql_text like '%some text from your query%' and sql_text not like '%not this%';
If there are many matches, check the SQL text to see which one is your statement.
Note sql_text is limited to the first 1,000 characters of the statement. So for really large statements, you may need to include the sql_fulltext column too!
If it’s so great, why isn’t this my favourite approach?
Well, there are a couple of caveats:
The first limitation is easy to get around. One way to do this is by adding the monitor hint:
select /*+ monitor */… from …
For licensing you'll need to speak to your local sales rep ;)
Assuming you are licensed, the SQL Monitor is a fantastic way to do early troubleshooting on those hour-long queries. Often you can spot which parts of the plan are doing the most damage within a few minutes. This enables early diagnosis. You can plan new approaches without waiting for the statement to finish!
Both of these methods are great. But they work on a single statement at a time. What if you want to analyze the performance of several pieces of SQL in a transaction?
Enter:
TKPROF is a command line utility that analyzes trace files and turns them into readable form. It gives you the execution stats for all the SQL in the file. Which begs the question:
There are several methods. The easiest is to set SQL tracing on. The command to do this is:
alter session set sql_trace = true;
Oracle will capture all statements you execute after this in the trace file. To stop this, either disconnect or turn tracing off with:
alter session set sql_trace = false;
This method is easy but limited. For example, it only traces your session.
A more powerful method is to call DBMS_monitor.session_trace_enable. This has five parameters:
Pass in the relevant session_id and serial_num to trace another session. If you leave these null, Oracle will trace your current session. Setting waits and binds to true includes information about these in the file.
To stop tracing, call DBMS_monitor.session_trace_disable. As with the enable procedure, pass the relevant session_id and serial_num. Or leave blank if tracing your current session.
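For example, to trace another user's session you could look up its details in v$session and pass them in. A sketch (the SID, serial number and username below are made-up values):

-- find the session you want to trace
select sid, serial# from v$session where username = 'APP_USER';

exec DBMS_monitor.session_trace_enable ( 123, 45678, true, true );

-- ...and when you're done, stop tracing that session
exec DBMS_monitor.session_trace_disable ( 123, 45678 );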
So to generate a trace file for your current session, including waits and bind details, do the following:
exec DBMS_monitor.session_trace_enable ( null, null, true, true );

***your code here***

exec DBMS_monitor.session_trace_disable;
DBMS_monitor also includes procedures to trace all statements:
Note that tracing adds overhead. So avoid enabling it for the whole database. And remember to disable it when you've finished!
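For instance, DBMS_monitor can trace everything run under a given client identifier, across all the sessions that set it. A sketch (the identifier value is made up):

exec DBMS_monitor.client_id_trace_enable ( client_id => 'batch_load', waits => true, binds => true );

-- switch it off again once you have what you need
exec DBMS_monitor.client_id_trace_disable ( client_id => 'batch_load' );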
Once you've traced your code you need to get the file so you can analyze it.
“But where can I find the trace file?” I hear you ask.
It lives on the database server. This means you need access to it (or help from someone who does!). You also need to know where it is and what it’s called!
Connor explains how to find them in this video:
Getting the files can be fiddly, especially if you don't have access to the server. You can get around this by configuring a file reader that lets you query their contents (and so save them to your local machine).
With the trace file in hand you can parse it with TKPROF. The basic syntax for this is:
tkprof <trace_file_name> <output_file_name>
For example:
tkprof ORCL_ora_27883.trc trace.log
This parses the contents of ORCL_ora_27883.trc into trace.log. You're now ready to start analyzing the queries!
Quick note: TKPROF also includes an explain option. This will show you explain plans, not execution plans. Use this with caution.
Compared to autotrace and the SQL Monitor this is a lot of work.
So you may be wondering: is this extra hassle worth it?
Well, TKPROF has some advantages over these:
The first benefit is great if you have a slow transaction that includes several statements. You can sort them in the output file from slowest to fastest. This helps you spot which takes the longest. To do this, use the sort option:
tkprof <trace_file_name> <output_file_name> sort=prsela,exeela,fchela
This is particularly useful for spotting fast queries that you execute many times. This could be because you're doing row-by-row processing. Or you have a SQL statement with many calls to PL/SQL which itself calls SQL (something you should generally avoid). Or trigger logic you weren't aware of. Though, as Jonathan Lewis notes in the comments, be aware that a SQL statement could have a different plan for each execution. TKPROF will only report one of these.
The parse, execution and fetch breakdown helps you spot issues unrelated to the plan itself. For example, in an overloaded system parse times can be higher. If a significant fraction of a statement’s runtime is parsing then you should start looking for issues other than the plan itself.
So TKPROF enables you to see information not visible in the previous two tools. Sometimes the extra hassle is worth it!
The final method we’ll look at uses the package DBMS_XPlan. This includes several display functions. These are pipelined, so you can query these like normal tables using the table() operator.
Using these, you can get the performance stats for each step of the plan: actual rows, time taken, etc.
To see this information, you need to get Oracle to capture extra run-time statistics. One way to do this is to increase the statistics level for your session:
alter session set statistics_level = all;
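Another option is to add the gather_plan_statistics hint to the statement itself, following the same pattern as the monitor hint above:

select /*+ gather_plan_statistics */… from …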
Once you’ve done one of these and run the statement to completion you can get your plan!
Call display_cursor to get the plan immediately after executing your SQL (taking care to ensure you’ve set serveroutput off):
select * from table(dbms_xplan.display_cursor);
This fetches the plan for the last statement in your session. This is good but misses many of the juicy details. To include these, use the format parameter. This enables you to include or exclude parts of the plan. Some of my favourite options are:
If you want to exclude parts of the plan, just prefix the format with the minus sign '-'.
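For example, to show the actual row and timing stats for the last execution while hiding the predicate section, you could run something like this (a sketch using standard format options):

select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST -PREDICATE'));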
You can also use this to get the plans for previous statements. You just need to find the SQL ID and pass it in:
select * from table(dbms_xplan.display_cursor('sql_id', null, 'ALLSTATS LAST'));
The plan must still be in the cursor cache to use this. Head to LiveSQL for an example of how to do this. If you're using the Automatic Workload Repository, you can get older plans using display_AWR:
select * from table(dbms_xplan.display_awr('sql_id', null, null, 'ALLSTATS LAST'));
This is all well and good. But the output is static text. You can’t expand and collapse the plan like you can in SQL Developer. This can make large plans tricky to read.
So why would you choose this over SQL Developer?
An obvious answer is because this is part of a scripted process. You need pure SQL for these!
Another is if you want to post the plan somewhere (*cough* Ask Tom *cough*) to get tuning help from others. The simple text format makes it great for sharing.
We’ve looked at four methods of creating execution plans.
These all come with their own pros and cons. In the end which you use is largely a matter of personal taste.
For short running queries, the ability to expand, collapse and compare execution plans makes autotrace in SQL Developer the winner for me. For longer queries – where licensed – I’d go for the SQL monitor.
But according to my recent Twitter poll, DBMS_XPlan is the most popular among my followers:
There are many other ways to generate Oracle execution plans. Notably SQLTXPLAIN. This is a powerful tool that also includes lots of supporting details, such as the current table statistics. This is most useful if you need help from other people (e.g. support!). This ensures they have all the information they need to diagnose the problem.
The most important thing is to ensure that you are getting execution plans, not explain plans. If the output is missing details such as actual rows and time taken then there's a good chance you've got the wrong type. At the very least, you're missing vital information to help you tune your SQL.
How about you? Which do you like best? Are there any other tools you use I haven't mentioned? I've heard there are tools from other vendors for creating execution plans too ;) Do any of these have significant benefits?
Let us know in the comments!
UPDATED: 19 July 2017: added Jonathan Lewis' comment about multiple plans, link to LiveSQL demo for DBMS_XPlan and fixed some formatting
Chris,
Thanks for the walk-through (on SQL Developer in particular).
Can I put in a bid for an option to "Export Text" for the SQL Dev execution plans - preferably in a format similar to the dbms_xplan form. When people try to post plans to the OTN database forums the HTML looks like garbage and their attempts to take screenshots rarely work well.
So... which methods give you the actual execution plan?
Thanks Jonathan. I'll pass on your request for plain text export in SQL Developer.
Guest - all the methods above except autotrace in SQL*Plus give you the actual execution plan! If your output includes A (actual) rows and similar stats then it's an execution plan.
Hi,
Thanks for quick walk-through on methods to generate execution plan.
I have a query on the TKProf method. I always felt that TKProf gives the explain plan of the queries traced in a session rather than the execution plan, because the plans are actually created when the trace file is converted to readable format using the TKProf utility with the EXPLAIN usernm/passwd@db parameter.
Can you please clarify this and confirm whether trace files give explain or execution plans.
Thanks,
Imran
I see you've already asked this on AskTom, Imran. For the benefit of anyone else coming here:
The explain option of tkprof will give an explain (not execution) plan in the output file.
If you've set statistics_level = all or similar, the file itself will contain the actual execution plans.
So you could have two plans in your output file! The actual plan (with row source information) and an explain plan.
Even though tkprof without the explain option gives you only plans that really were used by your session, you still have to be a little careful. A single statement might have used several execution plans in the course of a session, and some plans may have been flushed from memory before they could be written to the trace file. Tkprof will only show you one of the plans for a statement, so the plan you really need to see might be in the trace file but not reported, or it might have been lost.
Great point, thanks for clarifying tkprof plans Jonathan.
Hi, Chris.
Very useful article.
I have to disagree on your "classification" of plans. I definitely agree that we need to make a distinction of "predicted" plans vs. actually used ones, but I would say that both of them are execution plans.
In my opinion, "explain" is not a type of plan, but only a method you use to generate a plan. The result is still an execution plan, or in other words, the plan that the database predicts it will use to execute the statement. It is a plan of execution, even though the statement has not been executed yet.
The documentation seems to see it that way too. This is from the official documentation on Explain Plan (in 11g):
"The EXPLAIN PLAN statement displays execution plans chosen by the Oracle optimizer for SELECT, UPDATE, INSERT, and DELETE statements. A statement's execution plan is the sequence of operations Oracle performs to run the statement".
And from the 12c docs:
"Use the EXPLAIN PLAN statement to determine the execution plan Oracle Database follows to execute a specified SQL statement. This statement inserts a row describing each step of the execution plan into a specified table. You can also issue the EXPLAIN PLAN statement as part of the SQL trace facility."
Thanks for your comments. Personally I think statements like those you've quoted from the docs confuse the issue.
They imply that Oracle will choose the plan shown by EXPLAIN PLAN. But for many reasons it could do something completely different.
In any case, explain plan has no execution stats (actual rows, gets, etc.) which are critical when deciding whether Oracle used an optimal plan.
Yes, the docs should clearly state that the execution plan generated by EXPLAIN PLAN is just a prediction, because the statement is not actually executed.
And yes, it is definitely more useful to get the plan that was actually used when running the statement, and the stats. That is why I prefer to autotrace (from SQL Developer).
-Carlos
Hi Chris,
This post of yours is purely enlightening.
Could you please add to this post, about the below.
Given that I can't run the query (it's a scheduled job in production, so running it at any time just to see the EXECUTION PLAN is not allowed), how do I find out which execution plan was used when the query ran yesterday, the day before, and earlier? And once I've found the EXECUTION PLAN, what should I be looking at next, for example "why is it using a costly plan"? Where is this information recorded, I mean where is the reason recorded for the optimizer choosing that EXECUTION PLAN (which is different from the ideal explain plan)?
Thanks and Regards,
Sandeep
Thanks Sandeep.
If you're looking for historic execution plans, you can get them from AWR. Note you need to be licensed for the Diagnostic and Tuning packs for this. You can see an example at:
https://uhesse.com/2010/03/26/retrieve-sql-and-execution-plan-from-awr-snapshots/
Hi Chris,
Thank you for the response.
Could you please also help in the below question.
The information about "why the optimizer is using a particular execution plan" is recorded where in the database ?
Thanks and Regards,
Sandeep
No, it isn't. If you want to know why the optimizer picked a given plan, you need to enable the 10053 event and inspect the trace files:
https://blogs.oracle.com/optimizer/entry/how_do_i_capture_a
https://asktom.oracle.com/pls/asktom/f?p=100:11:0%3A%3A%3A%3AP11_QUESTION_ID:63445044804318
After running a user's SQL query in a given schema, I locate the appropriate SQL_ID value and I then run the following:-
SELECT * FROM TABLE(dbms_xplan.display_cursor(sql_id => '23qb4mfwqa1wc', cursor_child_no => 0, FORMAT => 'TYPICAL -ROWS -BYTES +COST +PARALLEL +PARTITION +IOSTATS +MEMSTATS +ALIAS +PEEKED_BINDS +OUTLINE +PREDICATE -PROJECTION +REMOTE +NOTE'));
Is there a way to "hide" or "suppress" the entire SQL Query Text from the output? It is currently taking a lot of real-estate space.
I'm not aware of an option to suppress the text of the SQL statement sadly.
Wonderful Post.
I may be asking a basic question, but can you please help me with this: when I get an execution plan, what should I actually look at or take into consideration when analysing whether the execution plan is good or bad?
Thanks
Pol
One of the key indicators of whether a plan is "good" is whether the row estimates ~ the actuals. So what you want to check is:
E-rows * starts ~ A-rows
If these figures are different by orders of magnitude, it's likely there's a better plan available (though not guaranteed).
Another thing to look at is how much work each step does. This can be either in terms of I/O (buffer gets) or time. Or both!
Then look to see how you can reduce (or remove) the work these steps do.
There's a lot more to this, worthy of a blog post in its own right ;)
Chris