Database, SQL and PL/SQL

My First Day with the New Release


Our technologist talks about his first experience with Oracle Database 10g Release 2.

By Tom Kyte

September/October 2005


Can you tell us about your experience with Oracle Database 10g Release 2 and point out some of the important new features?

First, this column will not even attempt to cover everything new under the sun with Oracle Database 10g Release 2—that would be an impossible task in just a few pages. For the entire story, see the Oracle Database 10g Release 2 New Features Guide (see Next Steps). Rather, what you'll see here are some of the things I noticed and felt deserved pointing out.


In my first couple of minutes using Oracle Database 10g Release 2, I immediately found this improvement—shown in Listing 1—to Autotrace. Autotrace now uses the DBMS_XPLAN package to display explain plans, which gives us a much more detailed plan than in previous releases. We could have gotten this plan in Oracle9i Database Release 2 and above by calling the supplied DBMS_XPLAN package ourselves, but having Autotrace use DBMS_XPLAN for us just makes it so much easier. Of particular note in the DBMS_XPLAN output is the addition of the predicates at the bottom of the plan, showing exactly which step Oracle Database applies them in. This is great.

Code Listing 1: Autotrace with DBMS_XPLAN

SQL> set autotrace traceonly explain
SQL> select *
  2  from emp, dept
  3  where emp.deptno = dept.deptno
  4  and emp.job = 'CLERK';
Execution Plan
Plan hash value: 877088642
| Id |   Operation        | Name| Rows  | Bytes| Cost (%CPU)   | Time     |
|  0 |  SELECT STATEMENT  |     |     4 |   468|     7     (15)| 00:00:01 | 
|* 1 |  HASH JOIN         |     |     4 |   468|     7     (15)| 00:00:01 |
|* 2 |  TABLE ACCESS FULL | EMP |     4 |   348|     3      (0)| 00:00:01 |
Predicate Information (identified by operation id):
   1 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
   2 - filter("EMP"."JOB"='CLERK')
   - dynamic sampling used for this statement

Conditional Compilation

PL/SQL has lots of new stuff. The first thing I noticed was conditional compilation, something I missed from my days as a C programmer. Conditional compilation is the ability to make the compiler effectively ignore code (or not) at will. It may not sound too useful at first, but it is. With conditional compilation,

  • You can leave your really slow debug code in your application now—and turn it on and off at will.

  • You can program assertions as you might in C. Each subprogram can test the values of its inputs, for example, and verify that they meet some criteria. These tests can be active during the whole development cycle and inactive for production. However, they remain as formal documentation of the conditions upon which the unit depends, and can be simply reactivated for debugging a production-time bug.

  • You can write version-independent code—by compiling one set of code for version X and a different set of code for version Y—without having two sets of code. Check out the new DBMS_DB_VERSION supplied package.

  • You can support best practices during unit testing. For production, helper subprograms will be declared only in a package's body. But when you want to invoke them from a unit-testing framework, you need to declare them in the specification. By conditionally compiling these declarations, you can have your cake and eat it too: you honor your modular design by hiding what should be hidden in your shipped code, but you allow separate compilation units to see the hidden elements for testing purposes. Further, you can cause helper subprograms to produce expected (data-dependent) exceptions so that you can test the robustness of code that calls your code without the effort of contriving the exceptional data conditions.

Listing 2 shows a quick view of what conditional compilation implies and how it works.

Code Listing 2: Conditional PL/SQL compilation example

SQL> create or replace procedure p
  2  as
  3  begin
  4      $IF $$debug_code $THEN
  5          dbms_output.put_line( 'Our debug code' );
  6          dbms_output.put_line( 'Would go here' );
  7      $END
  8      dbms_output.put_line( 'And our real code here' );
  9  end;
 10  /
Procedure created.

Notice how the "debug" code is not printed and the code compiles without the $$debug_code value being defined.

SQL> exec p
And our real code here
PL/SQL procedure successfully completed.

By simply enabling the debug_code variable, we can enable that debug code. Note that "debug_code" is my name—it is not a special name; you can define your own variables at will.

SQL> alter procedure P compile 
  2 plsql_ccflags = 'debug_code:true' reuse settings;
Procedure altered.
SQL> exec p
Our debug code
Would go here
And our real code here
PL/SQL procedure successfully completed.
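Conditional compilation also pairs naturally with the DBMS_DB_VERSION package mentioned above. Here's a minimal sketch of the version-independent idea (the procedure name which_code is mine, for illustration):

SQL> create or replace procedure which_code
  2  as
  3  begin
  4      $IF dbms_db_version.ver_le_10_1 $THEN
  5          dbms_output.put_line( 'Release 10.1-and-earlier code' );
  6      $ELSE
  7          dbms_output.put_line( 'Release 10.2-and-later code' );
  8      $END
  9  end;
 10  /
Procedure created.

Because the selection happens at compile time, only one branch exists in the compiled unit—there is no runtime cost for the branch not taken.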


DBMS_OUTPUT

DBMS_OUTPUT has been upgraded in Oracle Database 10g Release 2. Not only can you specify a buffer size of unlimited (no more 1,000,000-byte limit!), but you can also print lines much longer than 255 characters—the line length limit is now 32K. The following demonstrates the DBMS_OUTPUT upgrade:

SQL> set serveroutput on -
   > size unlimited
SQL> begin
  2  dbms_output.put_line
  3  ( rpad('*',2000,'*') );
  4  end;
  5  /
PL/SQL procedure successfully completed.

Oracle Fast-Start Failover

Another new capability in Oracle Database 10g Release 2 is automatic failover to a standby database. Instead of a human being running a sequence of commands or pushing a button, Oracle Data Guard can now automatically fail over to the standby database upon failure of the production site, storage, or network.
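Fast-start failover is driven by the Data Guard broker and a separate observer process that watches both sites. As a rough sketch of the setup (assuming a broker configuration is already in place; the threshold value is just an example), the DGMGRL commands look like:

DGMGRL> edit configuration set property FastStartFailoverThreshold = 30;
DGMGRL> enable fast_start failover;
DGMGRL> start observer;

If the observer loses contact with the primary for longer than the threshold (in seconds), it automatically fails over to the standby.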

Database Transports

Oracle Database 10g Release 1 introduced the cross-platform transportable tablespace, but Oracle Database 10g Release 2 takes it to the next level. Now you can transport an entire database across platforms that share the same endianness (byte ordering). That means you can move an entire database (not just transport individual tablespaces) from Apple Macintosh to HP-UX or from Solaris x86 to OpenVMS. No more dump and reload.
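The heavy lifting is done by RMAN's new CONVERT DATABASE command. A sketch of what the source-side step looks like (the database is opened read-only first; the names and target platform here are mine, for illustration):

RMAN> convert database
      new database 'newdb'
      transport script '/tmp/transport_newdb.sql'
      to platform 'Linux IA (32-bit)';

RMAN writes out converted datafiles along with a transport script that you then run on the destination platform to plug the database in.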

Data Pump Compression

When Oracle Database 10g Release 1 came out, the good news was that the export (EXP) and import (IMP) utilities had been totally rewritten, and the new Oracle Data Pump utilities EXPDP and IMPDP were introduced. One downside of the EXPDP and IMPDP tools was that compressing the DMP files during the export process was impossible: with EXPDP, you had to create the DMP file and then compress it afterward. The older EXP tool, by contrast, could be told to write to a named pipe, and the data written to that pipe could be compressed as it arrived, all in one step.

Fortunately, Oracle Database 10g Release 2 makes it easier to create and partially compress DMP files than the old tools ever did. EXPDP itself will now compress all metadata written to the dump file and IMPDP will decompress it automatically—no more messing around at the operating system level. And Oracle Database 10g Release 2 gives Oracle Database on Windows the ability to partially compress DMP files on the fly for the first time (named pipes are a feature of UNIX/Linux).
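Metadata compression is controlled by the new COMPRESSION parameter on EXPDP, and METADATA_ONLY is the default. A quick sketch (the schema, directory object, and file name are mine):

$ expdp scott/tiger directory=dump_dir dumpfile=scott.dmp compression=metadata_only

The only other value in this release is NONE; table data itself is still written uncompressed.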

Asynchronous Commit

Normally, when a Java, C, or Visual Basic application issues a COMMIT, that causes a wait—specifically, a log file sync wait. This wait occurs because the client is waiting for the log writer process to flush the redo to disk—to make the transaction permanent. Normally this is exactly what you want to have happen: when you COMMIT, it should be permanent.

However, there are exceptions to all rules. What about a system that is processing incoming records as fast as possible, perhaps from a sensor or a network feed? This program's goal in life is to batch up a couple of records, INSERT them, COMMIT them (to make them visible), and continue on. It doesn't really need to wait for the COMMIT to complete; in fact, by waiting, this program is not running as fast as it can.

Enter the asynchronous COMMIT in Oracle Database 10g Release 2. We can now say "commit, but don't wait" or "commit, and please do wait." If you use the commit-but-don't-wait mode, however, you must be prepared at some point in the future to have data that was "committed" be lost. If you commit but don't wait and the system fails, that data may not have been committed. Only you can decide whether this is acceptable to you. But there are many classes of applications—including high-speed data ingest programs whose only goal is to get the stream of data into the database—where commit-but-don't-wait is not only acceptable but very desirable performancewise.

As a small example, I wrote the Java routine in Listing 3.

Code Listing 3: Commit but don't wait—asynchronous commit

import java.sql.*;

public class instest
{
    static public void main(String args[]) throws Exception
    {
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        // username/password were elided in the original listing;
        // substitute your own credentials
        Connection conn = DriverManager.getConnection
               ("jdbc:oracle:thin:@desktop:1521:ora10gr2",
                "username", "password");
        conn.setAutoCommit( false );
        Statement sql_trace = conn.createStatement();
        sql_trace.execute( "truncate table t" );
        System.out.println( "Table truncated" );
        sql_trace.execute
        ( "begin " +
          "  dbms_monitor.session_trace_enable( WAITS=>TRUE );" +
          "end;" );
        // create table t(x char(2000))
        // create index t_idx on t(x)
        PreparedStatement pstmt = conn.prepareStatement
            ("insert into t(x) values(?)" );
        pstmt.setString( 1, "x" );
        PreparedStatement commit = conn.prepareStatement
            ("commit work write batch nowait");
            // ("commit work write immediate wait");
        for( int i = 0; i < 1000; i++ )
        {
            pstmt.executeUpdate();
            commit.executeUpdate();
        }
        conn.close();
    }
}

Note how the COMMIT statement I'm using in Listing 3 is either COMMIT WORK WRITE BATCH NOWAIT or COMMIT WORK WRITE IMMEDIATE WAIT. The former is the new feature—it shows how to COMMIT without waiting for the COMMIT to be finalized. The latter is the way it has historically worked. When I ran the test program in Listing 3 (the CREATE TABLE and CREATE INDEX appear as comments before the INSERT statement, for reference), I observed a lot of time spent waiting on log file sync waits with the WAIT option and virtually no time spent waiting with the NOWAIT option. The results are shown in Listing 4.

Code Listing 4: Comparing WAIT and NOWAIT results

Event Waited On         Times Waited         Max. Wait     Total Waited
----------------        ----------------     ----------    ------------      
log file sync (nowait)    19                 0.02          0.04
log file sync (wait)    1017                 0.04          1.43

This will make a measurable difference in high-speed data load programs that need to COMMIT periodically during the load.
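Note that nothing about this is Java-specific—the new WRITE clause is plain SQL, available to any client:

SQL> insert into t ( x ) values ( 'x' );
1 row created.
SQL> commit work write batch nowait;
Commit complete.

The default behavior can also be set session- or system-wide via the new COMMIT_WRITE initialization parameter.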

Transparent Data Encryption

Now for a revolutionary feature in Oracle Database 10g Release 2—Transparent Data Encryption. This is the ability to easily and—as the name implies—transparently encrypt stored information. Authorized users need not deal with encryption keys; the data will be decrypted (and encrypted) for them quite transparently. The data is stored encrypted on-disk, so that even if someone steals your database, the information is protected. My account of my experience with Transparent Data Encryption is not intended to be a thorough "here is everything you can do" example of this new feature, but rather it shows how the feature is set up and how you would use it.

The first thing I had to do was to set up a wallet. This is where the encryption keys will be stored—and this file is password-protected.

For my quick-and-dirty demonstration, I simply created a directory named "wallet" off my Oracle home and put the following into my sqlnet.ora file:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/home/ora10gr2/wallet)))

Once that was done, all I needed to do was to open the wallet:

SQL> alter system 
  2  set encryption wallet open 
  3  identified by foobar;
System altered.

This is something you must do at database startup once, when you start using this feature (or else the encryption keys won't be available to the system, and therefore the encrypted data won't be available either).

Now, since I had not set up any encryption keys, I needed to do that:

SQL> alter system 
  2  set encryption key 
  3  identified by foobar;
System altered.

In this case I let the system generate the key, but I could just as easily have specified a key of my own choice—something I might do if I wanted to move this data around from system to system easily.

And that was it for setup. I was ready to store encrypted data:

SQL> create table t 
  2  ( c1 varchar2(80), 
  3    c2 varchar2(80) encrypt ) 
  4  tablespace test;
Table created.
SQL> insert into t values 
  2  ( 'this_is_unencrypted', 
  3    'this_is_encrypted' );
1 row created.

This data was transparently encrypted on-disk and decrypted when we queried it:

SQL> select *
  2  from t;
C1                         C2
----------------------     ---------------------
this_is_unencrypted        this_is_encrypted

How could I confirm that this data was, in fact, encrypted? I used strings and grep (UNIX utilities for extracting and searching text in files) on the Oracle datafile to see if I could find the string. First, I made sure the block containing the encrypted data was written to disk:

SQL> alter system checkpoint;
System altered.

And then I looked for the strings. I looked for any string in the encrypt.dbf datafile that had "this_is" in it:

$ strings -a encrypt.dbf|grep this_is
this_is_unencrypted

As you can see, only the "this_is_unencrypted" string appeared. The other string was not visible because it was, in fact, scrambled before being stored.

This quick example exposes the tip of the iceberg with this new encryption capability. Transparent Data Encryption works with external table unloads, data pump, and so on, and GUI tools—including Oracle Wallet Manager—let you manage your wallet and passwords. There are also commands to re-key data in the event that you feel the keys have been compromised. I believe Transparent Data Encryption is one of the more revolutionary features in Oracle Database 10g Release 2.


Log Errors

LOG ERRORS is a new clause in DELETE, INSERT, MERGE, and UPDATE statements in Oracle Database 10g Release 2. LOG ERRORS allows a bulk statement that affects many rows to record the rows that failed in processing, rather than failing the entire statement. You can issue a statement such as INSERT INTO T SELECT A,B,C FROM T2 LOG ERRORS INTO BAD_ROWS_FOR_T REJECT LIMIT UNLIMITED. What this statement does is log into the table BAD_ROWS_FOR_T any row that fails in processing; for example, it will log errors caused by column values that are too large, constraint violations (NOT NULL, unique, referential, and check constraints), errors raised during trigger execution, errors resulting from type conversion between a column in a subquery and the corresponding column of the table, partition mapping errors, and certain MERGE operation errors (ORA-30926: unable to get a stable set of rows for MERGE operation). When an error occurs, the row causing the failure is logged into the "bad" table along with the Oracle error number, the text of the error message, and the type of operation (INSERT, UPDATE, or DELETE), as well as the ROWID of the failed row for UPDATE and DELETE operations.
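The error-logging table itself is created ahead of time with the supplied DBMS_ERRLOG package. A minimal sketch, using the table names from the statement above:

SQL> begin
  2      dbms_errlog.create_error_log
  3      ( dml_table_name     => 'T',
  4        err_log_table_name => 'BAD_ROWS_FOR_T' );
  5  end;
  6  /
PL/SQL procedure successfully completed.

CREATE_ERROR_LOG builds BAD_ROWS_FOR_T with ORA_ERR_* control columns (error number, message, ROWID, operation type, tag) plus VARCHAR2 renditions of T's columns, so even values that would not fit in T can be recorded.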

This new feature is destined to be my favorite new feature of Oracle Database 10g Release 2. Performing large bulk operations rather than row-by-row processing is superior for speed and resource usage, but error logging of failed rows has always been an issue in the past. It's no longer an issue with this new feature.

Restore Points

Oracle Database 10g Release 2 includes a new ability to create restore points. Oracle Database 10g Release 1 introduced the ability to flash back a database, but you needed to figure out the system change number (SCN) or the time to flash back to. Now in Oracle Database 10g Release 2, you can CREATE RESTORE POINT "X", then do something potentially damaging—perhaps an upgrade of some application—and if the upgrade fails, you simply FLASHBACK DATABASE TO RESTORE POINT "X". You no longer have to run a SELECT to find and record the SCN before flashback operations, or guess what time to flash back to.
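In SQL, the round trip looks something like this sketch (the restore point name is mine; as with Release 1 flashback, the database must be mounted—not open—and have flashback logging enabled):

SQL> create restore point before_upgrade;
Restore point created.

-- ...attempt the risky application upgrade...

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_upgrade;
SQL> alter database open resetlogs;

You can also add GUARANTEE FLASHBACK DATABASE to the CREATE RESTORE POINT statement, which tells the database to retain the flashback logs needed to honor that restore point.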

Native XQuery Support

Oracle Database 10g Release 2 adds a native implementation of XQuery, W3C's emerging standard for querying XML. This new feature introduces two new functions, XMLQUERY and XMLTABLE. XMLQUERY gives you the ability to query Oracle XML DB repository documents or XML views over relational data using the XQuery language from a SQL client. XMLTABLE maps the result of an XQuery expression to relational rows and columns, so that result can be queried using SQL.
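As a tiny taste of the syntax (these queries generate their own XML rather than querying the repository, just to show the shape of the two functions):

SQL> select xmlquery( 'for $i in 1 to 3 return <id>{$i}</id>'
  2                   returning content ) as result
  3  from dual;

SQL> select *
  2  from xmltable( 'for $i in 1 to 3 return <r>{$i}</r>'
  3                 columns n number path '.' );

The first returns a single XML fragment; the second turns each <r> element into a relational row with column N.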

In Summary

That was just a quick glance at some of the new features in Oracle Database 10g Release 2. Now I'm off to read the documentation in full so I can take advantage of all the new stuff.

Next Steps

Oracle Vice President Tom Kyte answers your most difficult technology questions on the Ask Tom forum. Highlights from that forum appear in this column.

READ more about
Oracle Database 10g Release 2
Oracle Database 10g Release 2 New Features Guide
Transparent Data Encryption

READ more Tom
Expert Oracle: 9i and 10g Programming Techniques and Solutions

