Thursday Apr 18, 2013

Changing the Explain plan for one SQL statement to another with NO code changes or Hints

While on a client site, an issue arose with a third-party product: a specific query built into the application was taking forever to run and its execution plan was not ideal. Nothing obvious, such as hints or indexes, pointed to anything useful.

As the code is submitted by the product itself, it could not be changed. Table and index statistics were up to date, and examining the AWR history showed the query had always performed poorly.

However, during the initial analysis a much better-performing execution plan was located. The only difference between the two pieces of code was that one used a bind variable while the other used a hard-coded value supplied as a test case. There are documented examples of how to simulate bind variables in PL/SQL so that this issue does not occur; the interesting thing was that without the bind variable the plan was exactly what the customer wanted.

The real answer would be to look at dynamic sampling and statistics gathering to ensure nothing is being influenced by the bind variables (such as bind variable peeking or histograms). However, a really simple workaround was found: carry the execution plan of SQL A over to SQL B with no actual code change.

There is already a blog entry (Mar 15, 2013, below) showing how to introduce hints and also parallelism when a code change is not possible.

However, this is a test case where we simply want to link the execution plan of SQL A to that of SQL B.

Please Note these scripts were written only for demonstration purposes. They are not optimized and they have almost no error checking, so be careful!

  1. The first thing required is the SQL_ID of the SQL whose execution plan you wish to change (the poor performer).
  2. The second is the SQL_ID and PLAN_HASH_VALUE of the SQL that has the desired execution plan.

 Then it is simply a matter of running the following PL/SQL block:
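The original block was not preserved here, so the following is a minimal sketch of the technique using the documented DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE function; the substitution variables for the SQL IDs, plan hash value and handle are placeholders. If the poor performer does not yet have a baseline (and therefore no sql_handle), first load its own plan from the cursor cache, note the resulting handle in dba_sql_plan_baselines, and disable that original plan.

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Attach the desired plan (from the well-performing SQL) to the
  -- baseline handle of the poor performer
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '&good_sql_id',       -- SQL with the desired plan
               plan_hash_value => &good_plan_hash_value,
               sql_handle      => '&poor_sql_handle');  -- handle of the poor performer
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/
```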

Once executed, the above created a baseline entry in dba_sql_plan_baselines, which could be seen by executing the following query:

SQLPLUS > select fixed,accepted,enabled,created,sql_text,sql_handle from dba_sql_plan_baselines;

Other useful baseline queries are:

SQLPLUS> select * from table(dbms_xplan.display_cursor('&1',&2));

NOTE:  the above needs &1 set to the sql_id and &2 to the child number. At the end of the explain plan you can see whether the baseline has been used or not.

SQLPLUS> select * from table(dbms_xplan.display_sql_plan_baseline(sql_handle=>'xxxxxx', format=>'basic'));

NOTE:   xxxxxx is the sql_handle from dba_sql_plan_baselines.

As could be seen, the baseline was taken and being used, so the final step was to fix the baseline:
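The fixing step itself did not survive here; a minimal sketch using the documented DBMS_SPM.ALTER_SQL_PLAN_BASELINE function follows, with the handle and plan name as placeholders taken from dba_sql_plan_baselines:

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Mark the accepted plan as FIXED so the optimizer prefers it
  l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
               sql_handle      => '&sql_handle',
               plan_name       => '&plan_name',
               attribute_name  => 'fixed',
               attribute_value => 'YES');
END;
/
```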

If the baseline is no longer applicable or did not work, it is easy to tidy up and remove it as below:
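The removal code is sketched below using the documented DBMS_SPM.DROP_SQL_PLAN_BASELINE function; the handle is a placeholder:

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Drops all plans attached to the handle; pass plan_name to drop just one
  l_plans := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
               sql_handle => '&sql_handle');
END;
/
```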

Conclusion: the long-term solution is still to review the code and database to understand why the bind-variable execution produced a considerably poorer plan than the hard-coded SQL. In the meantime, the issue was resolved with a neat workaround: using a baseline to influence the execution plan as desired.

Wednesday Apr 17, 2013

11gR2 Flashback Guaranteed Restore Point without enabling Flashback logging

Plenty has been documented and published about Flashback Database since its first implementation in Oracle 10g. However, little is documented about using flashback restore points when flashback logging is not enabled.

At a client site flashback logging was turned off due to its performance implications; in 11gR2 it is a dynamic online feature, so flashback can now be turned on and off without a database bounce. A situation arose and questions were asked about using guaranteed restore points for testing purposes while flashback logging remained off.
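As a minimal illustration of the feature the paper covers, a guaranteed restore point can be created even with flashback logging off, provided the database is in ARCHIVELOG mode; the restore point name here is a placeholder:

```sql
-- Works in 11gR2 even when V$DATABASE.FLASHBACK_ON is NO
CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;

-- Later, to rewind (database must be mounted, not open):
-- FLASHBACK DATABASE TO RESTORE POINT before_test;
-- DROP RESTORE POINT before_test;
```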

For reference this paper gives a short summary and example of the Oracle documentation:

Download The Full PDF here: 11gR2_Flashback_Guaranteed_Restore_Point_without_enabling_Flashback_logging.pdf

Friday Mar 15, 2013

Applying a Parallel Hint to SQL Code that cannot change

Last night, during investigation of a performance issue for a client, it was found that due to significant growth in the database a select query was suddenly taking hours to run instead of minutes. Obvious things like statistics and the database itself were checked, and it was clear the SQL code needed a review. However, this code is from a vendor and cannot be changed, so the client was interested in anything that could assist them in the short term. This blog entry covers the analysis and the final answer, which was to introduce the parallel hint into the SQL code with NO actual code change, using the 'dbms_sqldiag' package and its 'i_create_patch' procedure.

The first and quickest answer to implement was to run the SQL Tuning Advisor against the code, which suggested a couple of SQL profiles. The most promising of these was a PX_PROFILE which would introduce parallelism at a DOP of 48 and, although it warned about increased resource usage, looked good enough to investigate further. To take it a stage forward, the query was tested using the PARALLEL hint to see how much benefit parallelism would bring. It should be noted that care is required before jumping into parallelism as a solution: for example, in Oracle RAC, what is the value of 'parallel_force_local', and what CPU, memory, 'large_pool' and so on are available?

Developers executed the code from SQL Developer using the PARALLEL hint, as below, to see the real effect of the parallelism suggestion.

Note this is not the actual code but just an example of using the PARALLEL hint.

Select /*+PARALLEL */ count(*) from test_table;

The test was very positive, with execution now taking around four minutes to complete, a large difference from over two hours previously. However, the explain plan with the PARALLEL hint was very different from the plan of the SQL profile produced by the SQL Tuning Advisor. This meant the SQL profile was not the answer: the requested approach was now to implement the PARALLEL hint as tested and proven, not the execution the PX_PROFILE suggested.

Easy: create a baseline was the next answer to adopt. Baselines are very easy to generate from the SQL cache, linking the execution plan of the SQL with the hint to the SQL without the hint. References on how to do this are in the acknowledgements.

After the baseline was introduced, the code was re-tested to ensure correct results. Although the baseline was enabled and accepted, the original code refused to pick up the new baseline; execution was not going parallel as desired. What was wrong?

Researching further, the reason the baseline was not being selected is that the PARALLEL hint does not make it into the outline hints stored with the baseline, yet it is essential to force the parallel plan and obtain the matching plan hash value. This is not discussed further here as it is covered in other blog entries; see the acknowledgements.

So a profile and a baseline were now ruled out as possible solutions. It was then that an alternative arose: SQL PATCH. There are some prerequisites in what SQL PATCH will and won't do, also referenced in the acknowledgements section. However, for a straight /*+PARALLEL */ hint it worked a treat!

To add the parallel hint to the query with SQL PATCH:


DECLARE
  sqltext CLOB;
BEGIN
  SELECT sql_text INTO sqltext
    FROM dba_hist_sqltext
   WHERE sql_id = '9h8a1m1a4fwzc';

  DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH(
    sql_text  => sqltext,
    hint_text => 'PARALLEL',
    name      => 'PARALLEL Hint Patch');
END;
/



NOTE: The SQL text was obtained from the view DBA_HIST_SQLTEXT with the sql_id as it was a large query.

After submitting the above and testing with the explain plan and execution, we finally had the PARALLEL hint being used as desired, without any code change. The explain plan even includes a note showing the patch name given above, which aids identification later.

If required at a later stage, the above can be dropped with the code below:
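The drop code did not survive here; a minimal sketch uses the documented DBMS_SQLDIAG.DROP_SQL_PATCH procedure, with the patch name matching the one created above:

```sql
BEGIN
  DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'PARALLEL Hint Patch');
END;
/
```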






The following blogs were referenced in this research piece:





Tuesday Feb 26, 2013

Oracle LogMiner: Seven Steps to Mine Redo data

Whilst on a client site, it was requested to investigate a data issue where data had been deleted from an Oracle database table by a source unknown to the application team. After attempting to use Oracle Flashback techniques, it was clear these were insufficient because the changes had occurred a few days previously. Therefore Oracle LogMiner was used to locate the required information, analyzing the Oracle redo and archive logs. The usage of this tool is widely unknown to a lot of administrators, so this paper gives a simple seven-step process on how to safely run an Oracle LogMiner session to look at database redo data. Although this can be implemented through Oracle Enterprise Manager (OEM), the techniques shown here use Oracle SQL and PL/SQL.
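The full seven steps are in the PDF; as a hedged sketch, the core of a LogMiner session using the documented DBMS_LOGMNR package looks like the following, where the archive log path and table name are placeholders:

```sql
-- 1. Add the log files covering the period of interest
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/arch/ARCH_0001_1234.arc',   -- placeholder path
    options     => DBMS_LOGMNR.NEW);
END;
/

-- 2. Start mining, translating object names via the online catalog
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- 3. Look for the offending deletes
SELECT scn, timestamp, username, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE operation  = 'DELETE'
   AND table_name = 'MY_TABLE';   -- placeholder table

-- 4. Tidy up
BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```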

Please Note these scripts were written only for demonstration purposes. They are not optimized and they have almost no error checking, so be careful!

Download The Full PDF here: Oracle_Log_Mining_Data.pdf

Tuesday Feb 12, 2013

RMAN Backup Strategy utilising multiple media (disk and tape)

Most database administrators are comfortable with setting up a simple Oracle RMAN backup strategy: something along the lines of a full backup to tape, or just a full backup to disk. This may be scripted or a standard configuration from OEM Grid Control. But what if someone wanted something more complex than this?

An existing strategy at a client site was exactly this, and with the new solution the client ended up with backup runtimes reduced from hours to minutes, a performance gain of over 10 times. Tape contention was also removed from the solution, and having the latest backup on disk made faster restores possible. This paper discusses the issues and the solution in detail.

The client had performed a full backup at the weekend followed by an incremental backup on the other days, direct to tape. This solution had many issues, with tape media failures forcing backups to be re-executed, as well as excessive completion times. A typical full backup of just one of the many databases direct to tape, if it was lucky enough to complete, took in the region of 19 hours un-tuned. The over-running RMAN backups also had a large negative impact on online and batch performance. The site had also seen many issues with tape restores and access to backups, and restore times were extremely excessive. It was clear from a quick review that the direct tape media was a major bottleneck and, for capacity reasons, the strategy had to change.

The new backup requirement from the client became a strategy utilizing multiple media at the same time: an RMAN level-0 full backup to disk weekly, with that backup then pushed off to tape (retention 7 years) and the disk copy retained for only another week. On subsequent days a level-1 incremental follows the same strategy. The disk backups are housekept only as part of the following week's full backup. This strategy is laid out in a calendar in the full paper. It should also be noted that the project did not want to use the Fast Recovery Area for backup storage; due to the capacity required, they chose a separate tier-3 storage disk group, +BACKUP, as the archive log and backup piece destination. The Fast Recovery Area (tier-2 storage) was then used only for flashback logs (when they wanted flashback on) and multiplexed control and online redo logs. This allowed a reduction in the tier-2 storage required and a cost saving.

From a recovery point of view, if a failure were to occur the client wanted the ability to restore from the local disk group using the last level-0 full backup plus any incremental backups, archive log backups and archive logs as required. Only if there were an issue, or they needed to go further back, would there be a requirement to utilize the tape media management.
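As a hedged sketch (not the client's actual scripts), the weekly disk-then-tape flow might look like this in RMAN, assuming an SBT channel to the media manager is already configured:

```
-- Weekly: level-0 to disk, then push the disk backup sets to tape
RMAN> BACKUP INCREMENTAL LEVEL 0 DEVICE TYPE DISK
        TAG 'WEEKLY_L0' DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DEVICE TYPE SBT BACKUPSET ALL;

-- Subsequent days: level-1 to disk, same push to tape
RMAN> BACKUP INCREMENTAL LEVEL 1 DEVICE TYPE DISK
        TAG 'DAILY_L1' DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DEVICE TYPE SBT BACKUPSET ALL;
```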

For support of this and further implementation help please contact the author of the paper as required.
The aims are purely for demonstration and training purposes.

Please Note these scripts were written only for demonstration purposes. They are not optimized and they have almost no error checking, so be careful!

Download The Full PDF here: backup_strategy_paper.pdf

Thursday Feb 07, 2013

Utilising ASM with an ACFS volume

As part of a recent data migration project it was decided to test out an Oracle ACFS disk volume as an alternative to existing Oracle OCFS volumes for external table dumps in the ETL migration processing. The attached paper covers the steps to set up ACFS and links to some other resources on it.

It is hoped to show how easy this technology is to implement, as well as how to make further use of existing Oracle ASM disk already in place for Oracle RAC clusters at versions greater than 11.1. Care needs to be taken with prerequisites and kernel versions before deciding to implement; please see MOS note: ACFS Supported On OS Platforms [ID 1369107.1].
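The detailed steps are in the paper; as a hedged outline (disk group, size, volume name, device path and mount point are all placeholders), creating and mounting an ACFS volume follows this pattern:

```
# As the grid infrastructure owner: create an ADVM volume in an ASM disk group
asmcmd volcreate -G DATA -s 10G etlvol
asmcmd volinfo -G DATA etlvol            # note the volume device path

# As root: put an ACFS file system on the volume device and mount it
mkfs -t acfs /dev/asm/etlvol-123         # device path from volinfo above
mkdir -p /u01/app/acfs_etl
mount -t acfs /dev/asm/etlvol-123 /u01/app/acfs_etl
```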

Download The Full PDF here: ACFS_TestCase.pdf

Friday Feb 01, 2013

How to setup a simple Oracle streams example: 1 table to another table in the same schema and database

Whilst recently on site with a client, it was requested to provide a simple Oracle Streams demonstration, with documentation, showing how to replicate a single table to another table within the same database and schema.

The attached document gives the scripts and step-by-step workings for the example. It is aimed at those new to Oracle Streams who want a simple working example to follow. It should be noted that anyone looking at replication technology is recommended to look first at the Oracle GoldenGate product.

For support of this and further implementation help please contact the author of the paper as required.  The aims of the supplied scripts are purely for demonstration and training purposes.


Download The Full PDF here: Oracle_Streams_Example.pdf

Wednesday Jan 30, 2013

Taking the Black Magic Out Of Oracle Database Recovery



One of the biggest issues seen today with Oracle database recovery is human error. More damage is done by administrators executing commands without a true understanding or appreciation of the real condition of the existing Oracle database. Since the release of 11g it is interesting to ask how many administrators actually use the RMAN recovery advisor, a tool now built into the database.
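For context, the advisor referred to is driven from the RMAN prompt with three documented commands; add PREVIEW to the repair step to review the generated script before anything runs:

```
RMAN> LIST FAILURE;            -- diagnose: what does the database think is broken?
RMAN> ADVISE FAILURE;          -- list manual and automatic repair options
RMAN> REPAIR FAILURE PREVIEW;  -- show the repair script without executing it
RMAN> REPAIR FAILURE;          -- execute the repair
```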

The aim of the attached paper is to show that database recovery is not black magic. If a deep breath is taken at the point of failure, time is taken to properly diagnose the real condition of the database, and a firm action plan is formed from what is available, then recovery becomes a simple, straightforward process. That is, providing any required backups are actually available, of course!

Experience has shown that the main issue at the time of any fault is an administrator who jumps straight to the recovery phase with little or no analysis of the actual situation, potentially causing a bad situation to suddenly become worse.

The views expressed on this blog are those of the author and do not necessarily reflect the views of Oracle.  Please download the full paper for a detailed discussion as well as some recovery tricks you may not have come across.

Download The Full PDF here: Magic_Backup_Recovery.pdf

What Is Baker's Byte?

Hi, I'm Andy Baker and this blog is my attempt to give an Oracle production database administrator's view of life. I hope to provide interesting updates covering general items of interest, the issues I face, and some I have previously solved. It is all based on my experiences at real client sites and will cover the Oracle RDBMS, RAC, Exadata and general Oracle DBA administration tasks, as well as tips and tricks.

Although I am currently based in Oracle Consulting Services (OCS), my previous experience within Oracle was with Oracle Support Services (OSS) as a senior backup and recovery engineer. I have been working with Oracle databases for over 18 years, across a wide range of industries and roles: banking, oil and gas, insurance and telecoms for many major global organizations. I have worked in software development but mainly focus on production support, dealing with management and staff at varying levels while meeting rigorous performance targets and demanding schedules.


Andy Baker, Senior Principal Consultant for Oracle Consulting Services (@Bakers_byte), shares his news, views and ideas about the Oracle Database with a focus on innovation and emerging technologies.

