AWR Data Extract
By Brian Diehl on May 28, 2013
The Oracle RDBMS ships with an excellent repository of automatically collected performance data: the Automatic Workload Repository, or AWR for short. AWR is a snapshot-based collector of database information. This includes not only the cumulative statistics from V$SYSSTAT, but also snapshots of configuration information such as database parameter values. This information is invaluable when diagnosing issues at the database level. AWR data is typically exchanged as HTML reports (created with awrrpt.sql). While these reports are useful, it is often better, and more flexible, to have the raw data used to create them.
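To see what raw data is already sitting in the repository, you can query the snapshot catalog directly. A minimal sketch, run as a DBA user (the one-day window is just an example):

SELECT snap_id,
       begin_interval_time,
       end_interval_time
  FROM dba_hist_snapshot
 WHERE begin_interval_time > SYSDATE - 1
 ORDER BY snap_id;

DBA_HIST_SNAPSHOT is the same catalog that both awrrpt.sql and the extract utility draw their snapshot lists from.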
This is possible using the AWR extract and import utilities. The scripts live in the same rdbms/admin directory under the Oracle Home as awrrpt.sql. The extraction script is awrextr.sql; run it as a DBA user. It first displays a list of the databases available in the AWR repository. Unless you have previously imported AWR data from other databases, only one DB ID should be listed.
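If you want to preview the DB IDs the script will offer, the repository can be queried directly. A sketch (the DISTINCT simply collapses the per-instance rows):

SELECT DISTINCT dbid, db_name
  FROM dba_hist_database_instance;

Only the local database's DBID will normally appear here; additional rows show up after loading extracts from other databases.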
Just as with the HTML report, the script prompts for the number of days of snapshots to display. Choose a number of days that covers the data you want to export.
Enter value for num_days: 1

Listing the last day's Completed Snapshots

DB Name        Snap Id    Snap Started
------------ --------- ------------------
ORCL               402 28 May 2013 09:06
                   403 28 May 2013 10:00
                   404 28 May 2013 11:00
                   405 28 May 2013 12:00
                   406 28 May 2013 13:00
                   407 28 May 2013 14:00
                   408 28 May 2013 15:00
                   409 28 May 2013 16:00
                   410 28 May 2013 17:00
                   411 28 May 2013 18:00
Enter the begin and end snapshot IDs when prompted.
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 403
Begin Snapshot Id specified: 403

Enter value for end_snap: 411
End   Snapshot Id specified: 411
Next, the script displays a list of available directory objects. The extraction process ultimately creates a dump file using Oracle Data Pump, and the resulting file is written to the directory you choose here. There is no default; I usually choose DATA_PUMP_DIR since it exists in instances by default.
Choose a Directory Name from the above list (case-sensitive).

Enter value for directory_name: DATA_PUMP_DIR

Using the dump directory: DATA_PUMP_DIR
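The list the script shows is the set of directory objects visible to a DBA, which you can inspect ahead of time with a quick sketch like this:

SELECT directory_name, directory_path
  FROM dba_directories
 ORDER BY directory_name;

The DIRECTORY_PATH column tells you where on the server's filesystem the dump file will actually land, which is worth checking before you start a large extract.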
Accept the default file name (format awrdat_<start_snap>_<end_snap>). What happens next is important to understand: a temporary schema is created, data from the core AWR tables is copied into tables in that schema, and the schema is then exported with Data Pump. The script finishes by reporting the location of the new dump file.
Using the dump file prefix: awrdat_403_411
|
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| The AWR extract dump file will be located
| in the following directory/file:
|   /u01/app/oracle/admin/orcl/dpdump/
|   awrdat_403_411.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
| *** AWR Extract Started ...
|
| This operation will take a few moments. The
| progress of the AWR extract operation can be
| monitored in the following directory/file:
|   /u01/app/oracle/admin/orcl/dpdump/
|   awrdat_403_411.log

End of AWR Extract
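Besides tailing the log file the script points you at, you can watch the underlying export from inside the database. A sketch, from a second DBA session while the extract runs:

SELECT owner_name, job_name, operation, state
  FROM dba_datapump_jobs;

The Data Pump job row disappears once the export completes and the temporary schema is cleaned up.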
The resulting dump file can be imported into another Oracle database using awrload.sql; I will cover the import process in the next post. Because the AWR tables contain such an extensive collection of information about the database, having the raw data on hand can greatly reduce the time required to diagnose database-related issues.