I have a query that is not very complex but is executed very often. All the appropriate indexes are in place and are being used. The query is
SELECT COUNT(*)
  FROM T1, T2, T3
 WHERE T2.id = T1.id
   AND T2.s_id = T3.s_id (+)
   AND T3.related_s_id = TO_NUMBER(:v0);
How can I tune this query to do less work?
I frequently receive “tune my query” questions. They are hard to answer, for a couple of reasons.
Mostly they are hard to answer because I don’t know the query’s context. Why is it being executed? Could it be merged into some other process? Is the query in question being executed “very often” because it is inside some procedural loop and shouldn’t be there at all (because it should be in some larger query—executed just once—not inside a loop)? Answering a question about tuning an individual query typically leads to other questions.
My favorite way to tune this particular query is very simple. My initial answer (on AskTom at asktom.oracle.com) was “just don’t execute it; there is no faster way to do something than not to do it.” That is a quote I picked up from a friend of mine, Cary Millsap (carymillsap.blogspot.com). Whenever I encounter a COUNT(*) query, my initial reaction is to remove it, for the simple reason that when I see a COUNT (*) query, I immediately envision the surrounding code looking something like this:
select count(*) into some_variable from table …;
if (some_variable > 0)
then
   do_something();
end if;
If I guessed correctly, and the surrounding code does look something like that, I would suggest rewriting that code as simply

do_something();
Just the call to do_something(), nothing more, nothing less. The do_something routine must be looping over some result set and returning when the result set is exhausted. If that is the case, then calling do_something when there is no data to process would simply encounter an empty result set right at the beginning, and the routine would return. If there is data to process, the routine wins resourcewise, because it skips an unnecessary query and gets straight to processing the data. If there is no data to process, the routine still wins, because it has done no more work than the COUNT(*) version would have.
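To make that reasoning concrete, here is a small sketch in Python using the standard sqlite3 module. The table and the "process a row" step are made up purely for illustration; the point is only that a routine looping over a result set handles the empty case for free, so the up-front COUNT(*) query buys nothing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")  # hypothetical table

def do_something(conn):
    """Process pending rows; an empty result set just falls through the loop."""
    processed = 0
    for (order_id,) in conn.execute(
            "SELECT id FROM orders WHERE status = 'NEW'"):
        processed += 1   # real work would go here
    return processed

def count_then_process(conn):
    """The antipattern: an extra COUNT(*) query that changes nothing."""
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE status = 'NEW'").fetchone()
    if n > 0:
        return do_something(conn)
    return 0
```

Both functions return the same answer whether or not data exists; the count-first version simply runs one query more every time there is work to do.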
But in the case we’re considering, let’s assume that the query must be executed, for whatever reason. How can I tune it? Given the information I have, I can—at best—simply remove the outer join. This won’t affect the runtime performance of the query (the optimizer will have already done that for us), but it will make the query more correct.
Whenever you see a construct such as
AND T2.s_id = T3.s_id (+)
AND T3.related_s_id = TO_NUMBER(:v0);
you should know automatically that it can (and should!) be coded as follows:
AND T2.s_id = T3.s_id /* NO outer join! */
AND T3.related_s_id = TO_NUMBER(:v0);
The reason is quite simple: if the outer join from T2 to T3 were necessary and you in fact made up a row in T3 to join to T2 (that is what an outer join does if there is no matching data in T3 for T2), T3.related_s_id would be NULL, as would all the attributes of T3 for that record. And because NULL is never equal to anything, you know that the last bit of the predicate could never be true. Therefore, the outer join is not needed and the only thing it does by being there is make the coder look bad!
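That NULL argument can be verified outside Oracle. The following sketch uses SQLite via Python's sqlite3 module (the data is invented; SQLite's LEFT OUTER JOIN plays the role of the (+) join): any row the outer join makes up carries NULL in t3.related_s_id, NULL is never equal to 42, so the equality predicate filters those rows back out and the outer join degenerates to an inner join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t2 (s_id INTEGER PRIMARY KEY);
    CREATE TABLE t3 (s_id INTEGER, related_s_id INTEGER);
    INSERT INTO t2 VALUES (1), (2), (3);
    INSERT INTO t3 VALUES (1, 42), (2, 99);   -- no T3 match for t2.s_id = 3
""")

# Outer join: the made-up row for s_id = 3 has NULL in related_s_id,
# so "related_s_id = 42" can never be true for it.
outer = conn.execute("""
    SELECT COUNT(*) FROM t2 LEFT OUTER JOIN t3 ON t2.s_id = t3.s_id
    WHERE t3.related_s_id = 42
""").fetchone()[0]

# Plain inner join returns exactly the same count.
inner = conn.execute("""
    SELECT COUNT(*) FROM t2 JOIN t3 ON t2.s_id = t3.s_id
    WHERE t3.related_s_id = 42
""").fetchone()[0]
```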
Fortunately, however, the cost-based optimizer is generally smart enough to figure that out itself and will have already removed the outer join. So that leaves me with the original query minus the outer join. It won’t run any faster (or slower), but it is more correct. Is there anything else I can do?
The answer to that right now, knowing what I know, is “no, there isn’t.” The query is as tuned as it will ever be. I know nothing about the schema and therefore cannot make any assumptions. But what if I know more about the schema and the relationships between the tables? Might there be something more I can do?
As often happens on AskTom, I asked for more information—in this case, I asked for the schema. I received three create tables and some create indexes. I replied and asked again for the schema, but with more detail. I requested primary keys, foreign keys, not-null constraints, and the like. The information I got back told me that there were no constraints in the database and that the application included constraint equivalents and enforced them. The information also included a list of what the database constraints would look like if there were any. With this information, I was able to make some huge changes to the query.
The first thing I saw was that ID was a primary key in table T1 and a foreign key from T2 to T1 and, furthermore, that T2.ID was defined as NOT NULL.
CREATE TABLE T1
( ID NUMBER(18) NOT NULL,
  … data varchar2(1000)
);
ALTER TABLE T1 ADD CONSTRAINT T1_PK1 PRIMARY KEY (ID);

CREATE TABLE T2
( S_ID NUMBER(18) NOT NULL,
  ID NUMBER(18) NOT NULL,
  … data varchar2(1000)
);
ALTER TABLE T2 ADD CONSTRAINT T2_PK1 PRIMARY KEY (S_ID);
ALTER TABLE T2 ADD CONSTRAINT T2_OSO_FK1
  FOREIGN KEY (ID) REFERENCES T1 (ID);
Once I’d determined that, I was able to start rewriting the query, given that I now knew the following:

- Every row in T2 has a non-null ID (T2.ID is NOT NULL).
- Every T2.ID has a matching row in T1 (the foreign key).
- Each T2.ID matches exactly one row in T1 (T1.ID is the primary key).
That is, given these constraints, the query SELECT T2.* FROM T1, T2 WHERE T1.ID = T2.ID is the same as SELECT T2.* FROM T2, and I can remove T1 from the query altogether. So now the query is simply
SELECT COUNT(*) FROM T2, T3 WHERE T2.s_id = T3.s_id AND T3.related_s_id = TO_NUMBER(:v0);
Now that query stands a chance of performing better than the initial query—it certainly hits fewer database objects. But I am not done yet. Looking further at the definition of T3 I was provided with—
CREATE TABLE T3
( S_ID NUMBER(18),
  RELATED_S_ID NUMBER(18),
  data varchar2(1000)
);
ALTER TABLE T3 ADD CONSTRAINT T3_ORDER_PK1 PRIMARY KEY (S_ID);
ALTER TABLE T3 ADD CONSTRAINT T3_OLS_S_FK1
  FOREIGN KEY (S_ID) REFERENCES T2 (S_ID);
CREATE INDEX T3_OLS_RS_1 ON T3 (RELATED_S_ID);
—I could do some more tuning. Given the preceding information, I now know the following:

- Every row in T3 has a non-null S_ID (it is the primary key).
- Every T3.S_ID has a matching row in T2 (the foreign key).
- Each T3.S_ID matches exactly one row in T2 (T2.S_ID is the primary key).

That means the join from T3 to T2 neither adds nor removes rows, so T2 can be eliminated just as T1 was.
So I can now simplify the query to
SELECT COUNT(*) FROM T3 WHERE T3.related_s_id = TO_NUMBER(:v0);
Yes, the original three-table outer join was rewritten as a single-table query with a very simple predicate. The query plan for the final query will be simple: an index range scan followed by an aggregate. It won’t hit three tables, and it won’t use multiple indexes—in three or four I/Os, I’ll have the answer.
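The equivalence of the three forms can be sanity-checked with a quick sketch in Python and sqlite3, using invented data that honors the constraints described above. SQLite will not eliminate the joins for you the way Oracle's optimizer can, but the matching counts show the three queries are interchangeable when the constraints hold.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER PRIMARY KEY);
    CREATE TABLE t2 (s_id INTEGER PRIMARY KEY,
                     id INTEGER NOT NULL REFERENCES t1(id));
    CREATE TABLE t3 (s_id INTEGER PRIMARY KEY REFERENCES t2(s_id),
                     related_s_id INTEGER);
    -- sample data that respects the PK/FK/NOT NULL rules
    INSERT INTO t1 VALUES (10), (11);
    INSERT INTO t2 VALUES (100, 10), (101, 10), (102, 11);
    INSERT INTO t3 VALUES (100, 7), (101, 7), (102, 8);
""")

def count(sql):
    return conn.execute(sql, (7,)).fetchone()[0]

three = count("""SELECT COUNT(*) FROM t1, t2, t3
                 WHERE t2.id = t1.id AND t2.s_id = t3.s_id
                   AND t3.related_s_id = ?""")
two   = count("""SELECT COUNT(*) FROM t2, t3
                 WHERE t2.s_id = t3.s_id AND t3.related_s_id = ?""")
one   = count("SELECT COUNT(*) FROM t3 WHERE t3.related_s_id = ?")
```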
So the question you may be asking is, “Why couldn’t the database do that for us?” The answer is that it can and it would have (that is, Oracle Database 11g Release 1 and later would have) if it had had the constraints. The problem was that the system I was working on didn’t use constraints—at least not database constraints. The system enforced the above rules in the application. It did not have foreign keys, not-null constraints, and the like in place.
That had two consequences: First, no one could write a sensible query unless he or she had memorized the entire data model, because the data dictionary didn’t provide any information. To tune the preceding query, I needed to know a lot more than just the query. I also needed to understand the data, the schema itself. If you take away the not-null constraints, the foreign key information, and the existence of primary keys, you cannot tune that query. In fact, you really cannot write any sensible queries, because you need to understand the schema before you can write any query “safely” (correctly).
Second, the optimizer is prevented from doing its job! The optimizer uses constraints when it optimizes queries—it uses every bit of available data to optimize a query. If you do not give the optimizer this metadata—the constraints—it cannot fully optimize a query. It cannot remove tables from a plan, and it might not be able to use indexes (not-null constraints can have a huge impact on the optimizer’s ability to use an index). In short, the optimizer has fewer alternatives to consider when developing the plan.
Listing 1 shows two query plans: one for the untuned three-table outer join query against the preceding schema created in Oracle Database 11g with all the metadata in place and the other for the same query against a schema with no metadata available to the optimizer.

Code Listing 1: Two query plans—with and without metadata for the optimizer
Query plan for the untuned three-table outer join query against the preceding schema created in Oracle Database 11g with all the metadata in place:
------------------------------------------------------------------------------
| Id | Operation          | Name        | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |             |    1 |    26 |     1   (0)| 00:00:01 |
|  1 |  SORT AGGREGATE    |             |    1 |    26 |            |          |
|* 2 |   INDEX RANGE SCAN | T3_OLS_RS_1 |    1 |    26 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------
Query plan for the same query, but against a schema with no metadata in place (and available to the optimizer):
------------------------------------------------------------------------------
| Id | Operation                       | Name        | Rows | Bytes | Cost |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                |             |    1 |    65 |    2 |
|  1 |  SORT AGGREGATE                 |             |    1 |    65 |      |
|  2 |   NESTED LOOPS                  |             |    1 |    65 |    2 |
|  3 |    NESTED LOOPS                 |             |    1 |    52 |    2 |
|  4 |     TABLE ACCESS BY INDEX ROWID | T3          |    1 |    26 |    1 |
|* 5 |      INDEX RANGE SCAN           | T3_OLS_RS_1 |    1 |       |    1 |
|  6 |     TABLE ACCESS BY INDEX ROWID | T2          |    1 |    26 |    1 |
|* 7 |      INDEX UNIQUE SCAN          | T2_PK1      |    1 |       |      |
|* 8 |    INDEX UNIQUE SCAN            | T1_PK1      |    1 |    13 |      |
------------------------------------------------------------------------------
I don’t know about you, but I prefer the first plan in Listing 1. I’d like it best if the query were written correctly, because I would know that the developer actually understood the data, the question to be answered, and SQL in general. But I’d still like the optimizer to be able to figure it all out.
Metadata matters, not just for data integrity but also for getting the best-possible plan. See the following article for even more examples: oracle.com/technetwork/issue-archive/2009/09-may/o39asktom-096149.html.

Partition Plans
Do “partitioned plans” exist? What I mean by partitioned plans is the ability of the optimizer to come up with different methods for accessing different partitions—all in the same plan. For example, the optimizer might use an index to access partition 1 and a full scan to access partition 2.
Not exactly, but they are getting close. True “partitioned plans” would be expensive in terms of optimization, because the number of possible plans goes up rather quickly if the optimizer has to consider every access method for each partition individually.
But the framework for partitioned plans does exist in current database releases. For example, in Oracle Database 11g Release 2, you can have indexes on some partitions but not others, and the optimizer can in some cases take advantage of that. For example, suppose I have the following table:
CREATE TABLE t
( dt date,
  x  int,
  y  varchar2(30)
)
PARTITION BY RANGE (dt)
( PARTITION part1 VALUES LESS THAN
    (to_date('01-jan-2010','dd-mon-yyyy')),
  PARTITION part2 VALUES LESS THAN
    (to_date('01-jan-2011','dd-mon-yyyy')),
  PARTITION junk VALUES LESS THAN (MAXVALUE)
)
and I place some data in it:
insert into t select to_date('01-jun-2010', 'dd-mon-yyyy'), rownum, object_name from all_objects;
and then I gather statistics on the table and create a local index on column X. Note that this index is to be created as UNUSABLE, meaning that it will be registered in the data dictionary but won’t actually be populated with data. The index will “exist,” but it will not consume any storage—it will not actually be built.
exec dbms_stats.gather_table_stats(user,'T');

create index t_idx on t(x) local unusable;
Now I’ll rebuild (or in this case, build for the first time) a single partition of this local index—partition PART2:
alter index t_idx rebuild partition part2;
If I execute a query against this table now—one that can make use of the index on column X—I’ll see something surprising, as shown in Listing 2.
Code Listing 2: Different plan for different partitions
SQL> set autotrace traceonly explain
SQL> select * from t where x = 42;

--------------------------------------------------------------------------------------
| Id | Operation                             | Name    | Rows | Cost | Pstart | Pstop  |
--------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                      |         |    1 |    4 |        |        |
|  1 |  VIEW                                 | VW_TE_2 |    2 |    4 |        |        |
|  2 |   UNION-ALL                           |         |      |      |        |        |
|  3 |    PARTITION RANGE SINGLE             |         |    1 |    2 |      2 |      2 |
|  4 |     TABLE ACCESS BY LOCAL INDEX ROWID | T       |    1 |    2 |      2 |      2 |
|* 5 |      INDEX RANGE SCAN                 | T_IDX   |    1 |    1 |      2 |      2 |
|  6 |    PARTITION RANGE OR                 |         |    1 |    2 | KEY(OR)| KEY(OR)|
|* 7 |     TABLE ACCESS FULL                 | T       |    1 |    2 | KEY(OR)| KEY(OR)|
--------------------------------------------------------------------------------------
This demonstrates that the optimizer can develop a query plan with different methods for accessing different partitions. In this case, the different methods reflect the fact that some of the local index partitions are unusable whereas others are usable. Currently, the only way to get these sorts of “plans by partitions” is to use this capability, new in Oracle Database 11g Release 2, with unusable index partitions.

Resource Use
We want to calculate the resources used by database users based on their user IDs. For example, at a given time, what amount of CPU, memory, or disk I/O is being consumed by a database user on the database server? Some of our databases are shared by different applications, and we want to identify how many resources each application is using. I think v$session_time_model or v$sesstat can be used to get the CPU and memory usage details.
You cannot get precisely what you asked for (on any system). You asked for a “given time,” as in a given point in time, as in “right now,” for example. You cannot tell what percentage of any resource an individual is using at a point in time, because points in time are infinitely small. At a given point in time, a given user is either on the CPU or off it, either performing an I/O or not performing one.
Resource use at a point in time is binary: you are either using a resource or you are not. What you can achieve, and probably what you want, is to compute the amount of some resource consumed over a period of time (between two points in time). By far the easiest way to achieve that is to use active session history (ASH) data or Automatic Workload Repository (AWR) data. Using ASH information, you can get a really good feel for how much CPU a given session (associated with a user) has been using over a period of time. The V$ACTIVE_SESSION_HISTORY view contains a short-term memory of what each session has been doing for roughly the last hour. You can easily set up a process that queries that view from time to time and saves the metrics you are interested in. Additionally, the DBA_HIST_ACTIVE_SESS_HISTORY view contains the ASH data for the last few days, but at a much coarser level of detail. The history data in this view paints a fuzzy picture of what each session has been doing; it isn’t accurate enough to state definitively how much CPU a given session used.
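The aggregation idea is simple enough to sketch. The following Python/sqlite3 snippet uses a toy stand-in for the ASH view; the table layout and column names here are illustrative only, not the real V$ACTIVE_SESSION_HISTORY definition. Since ASH samples active sessions roughly once a second, each 'ON CPU' sample approximates one second of CPU, so counting samples per session over a time window estimates CPU consumed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Toy stand-in for ASH: one row per sampled second per active session.
# (Columns are simplified; the real view has many more.)
conn.executescript("""
    CREATE TABLE ash (sample_time INTEGER, session_id INTEGER,
                      session_state TEXT);
    INSERT INTO ash VALUES
        (1, 101, 'ON CPU'), (1, 102, 'WAITING'),
        (2, 101, 'ON CPU'), (2, 102, 'ON CPU'),
        (3, 101, 'WAITING');
""")

# Count 'ON CPU' samples per session over the window [1, 3]:
# an estimate of CPU seconds consumed by each session.
cpu_by_session = dict(conn.execute("""
    SELECT session_id, COUNT(*) FROM ash
    WHERE session_state = 'ON CPU'
      AND sample_time BETWEEN 1 AND 3
    GROUP BY session_id
""").fetchall())
```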
Another approach is to use a logoff trigger. Such a trigger can capture any metric (such as CPU time) from V$MYSTAT and save it to an audit trail table of sorts. Although this would be very accurate, it might lose a few observations here and there. For example, if a session were terminated in an unusual fashion (by crashing or being killed, for example), the logoff trigger would not fire.

Dynamic Search Queries with Contexts
Our system has many search packages, which have dynamic queries with joins on multiple tables. DBMS_SESSION.SET_CONTEXT and SYS_CONTEXT are used in building the dynamic search queries.
Can you tell me what the difference is in building a dynamic search query with and without contexts? That is, I’d like to know why I should create a query such as

WHERE x = SYS_CONTEXT( 'my_ctx', 'x' );

instead of

WHERE x = 42;
Why use the SYS_CONTEXT function instead of just literals?
This is the “most popular question ever on Ask Tom” raising its head again! I’ve written in the past about how to construct dynamic queries using application contexts (Oracle Magazine, July/August 2009), and it is in fact the most read question/answer on asktom.oracle.com. In that 2009 column, I described how to use the context, and in the original column (Oracle Magazine, November/December 2006), I hinted about why you want to do this—and not use literals—with a simple list:
Note how in the WHERE clause, I always use the SYS_CONTEXT function. I never put the literal value into the query; that would be very bad for

- Performance
- Scalability
- Shared pool utilization
- Perhaps most important: security (SQL injection)
To expand on that a little, I’ll explain how using contexts is all about performance, scalability, and security.
If you were to create queries using literals—
where x = 42;
where x = 55;
.... and lots more literals here, one for each search ....
where x = 10325432;

—you would create a unique SQL statement for each unique set of inputs. Each unique SQL statement necessitates a hard parse (which is very, very, very bad). A hard parse takes much longer, and uses many more resources, than a “no parse” or a “soft parse” does. (See bit.ly/ToGbO for details on hard/soft/no parse.)
If you create a query using SYS_CONTEXT (or bind variables), you will generate just one query:
where x = sys_context('my_ctx','x')
or at least a small number of queries (if you have more than one possible column against which to run the predicate). You will generate a reasonable number of unique SQL statements, each of which will be hard-parsed once and soft-parsed from then on. You will have dramatically reduced the amount of hard parsing you do.
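The shared-pool effect can be sketched in a couple of lines. This toy Python snippet just treats SQL texts as strings, the way a statement cache keyed on SQL text would: every distinct literal yields a brand-new statement, while a bind placeholder yields one reusable statement no matter how many values are searched.

```python
# Hypothetical search values; with literals, each one produces a
# distinct SQL text that a shared pool must hard-parse separately.
values = (42, 55, 10325432)
literal_sqls = {f"where x = {v}" for v in values}

# With a bind placeholder, every search produces the same SQL text:
# one hard parse ever, soft parses thereafter.
bound_sqls = {"where x = ?" for v in values}
```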
You may be surprised to find that for small, quick SQL statements, as much as 95 percent of your runtime can be spent in parsing SQL, not actually executing it. So reducing hard parsing is definitely good for single-user performance.
Using contexts or bind variables is also imperative for multiuser scalability. To hard-parse, you have to latch—that is, lock or serialize—data structures in the system global area (SGA) frequently. This is a major (I might say the major) scalability inhibitor I see in systems today. Applications that hard-parse frequently cannot execute hundreds of statements per second, because they cannot hard-parse hundreds of statements per second. Because of the serialization, your system will grind to a halt if it tries to hard-parse every SQL statement using literals.
Last, contexts and bind variables are relevant from a security perspective. If you are using the famous “single application user account in a middle-tier application” (whereby everyone logs in to the database as a single user), your system will be subject to SQL injection if you place literals in your SQL. SQL injection is insidious, hard to detect, and hard to protect against. If you use bind variables or the SYS_CONTEXT approach, your SQL will not be subject to SQL injection, because the input to your procedure will not become part of the SQL statement—the bind variables or contexts will only be inputs to the SQL statement.
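The injection point is easy to demonstrate. Here is a sketch using Python and sqlite3 with an invented users table; the mechanics are the same in any database. With string concatenation, the attacker's input becomes part of the SQL text and rewrites the predicate; with a bind variable, the same input remains pure data and matches nothing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (name TEXT, secret TEXT);
    INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2');
""")

evil = "nobody' OR 'a' = 'a"   # a classic injection payload

# Literal concatenation: the payload joins the SQL text, the quote
# closes the string, and OR 'a' = 'a' subverts the predicate, so
# every row comes back.
injected = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = '" + evil + "'"
).fetchone()[0]

# Bind variable: the payload never becomes part of the SQL statement,
# so it is compared as a plain (non-matching) name.
bound = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (evil,)
).fetchone()[0]
```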
I believe that in the year 2011, the most important aspect of the performance, scalability, and security trio is security. Far too many developers are not aware of the consequences of SQL injection, but a quick search on the internet will turn up many widely publicized instances of SQL injection.