It's a rare event. Removing data from a table. But every now and then you may need to do a bit of spring cleaning and clear down data.
This is easy to do with delete.
But this presents a problem. Deleting lots of rows can be slow. And there's a chance it'll take even longer because another session has locked the data you want to remove.
Luckily there's a trick to speed up the process:
Turn the DML into DDL!
In this post we'll start with a quick recap of how delete works. Then look at several alternatives you can use in Oracle Database to remove rows faster:

- truncate to wipe all the data from a table
- create-table-as-select to wipe a large fraction of the data
- dropping or truncating partitions to remove rows quickly
- a filtered table move to keep only the rows you want
If you want to see which is quickest, you can skip straight to the performance comparison.
Removing rows is easy. Use a delete statement. This lists the table you want to remove rows from. Make sure you add a where clause that identifies the data to wipe, or you'll delete all the rows!
delete from table_to_remove_data where rows_to_remove = 'Y';
I discuss how delete works - including why you probably don't want to do this - in more detail in this video.
But when you do want to erase data, delete can take a while. Particularly if you're clearing down most of the rows in a table.
So how can you make delete faster?
It's time to switch the SQL statements to DDL.
Let's start with the easiest case: emptying all the data from a table.
If you want to wipe all the data in a table, the fastest, easiest way is with a truncate:
truncate table to_empty_it;
This is an instant metadata operation. It will also reset the high-water mark for the table. By default it also deallocates all the space above the minextents for the table. You can change this behaviour with the storage clause:

- truncate table ... reuse storage leaves all the space allocated to the table
- truncate table ... drop storage (the default) deallocates all the space above the minextents for the table
- truncate table ... drop all storage deallocates all space from the table
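For example, if you're going to reload the table straight away, keeping the allocated space saves the database re-extending the table during the load. A quick sketch, using the to_empty_it table from above:

truncate table to_empty_it reuse storage;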
Before using truncate you'll also need to check if there are any foreign keys pointing to this table. If there are - even if the child tables are empty – truncate will throw the following error:
truncate table to_empty_it;

ORA-02266: unique/primary keys in table referenced by enabled foreign keys
To get around this you'll need to disable the foreign keys referencing the table first. And re-enable them afterwards. For example:
alter table to_empty_it_child modify constraint fk disable;
truncate table to_empty_it;
alter table to_empty_it_child modify constraint fk enable;
Of course, the child tables must be empty too. If there are rows in them, you can't revalidate the foreign keys!
If you want to clear down the parent and child tables in one fell swoop, from Oracle Database 12c there is a trick. Add the cascade clause:
truncate table to_empty_it cascade;
Beware! The cascade option will remove all the rows from your table. And the child tables referencing it. This is an easy way to end up with an empty database! Use extreme caution when doing this.
Fortunately you need to declare the foreign keys as on delete cascade for this to work. This is a rare option. So it's unlikely you'll be able to do this. Make sure you double-check before running a cascaded truncate!
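As a sketch, this is what such a declaration looks like (the column names here are hypothetical):

alter table to_empty_it_child
  add constraint fk
  foreign key ( parent_id )
  references to_empty_it
  on delete cascade;

To check an existing foreign key, query the delete_rule column of user_constraints: it shows CASCADE for constraints declared this way.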
But deleting everything in a table is unusual. Normally you want to remove a subset of the rows. There are several other DDL tricks you can use to make this faster.
The most widely available is to save the rows you want to keep in a temporary table. Then switch the data over.
Hang on. Removing data by creating a table? How does that work?
Bear with me.
Inserting rows in a table is faster than deleting them. Loading data into a new table using create-table-as-select (CTAS) is faster still.
So if you're removing most of the rows from a table, instead of issuing a delete you can:

- create a new table saving the rows you want to keep
- truncate the original table
- load the saved rows back in with insert as select
create table rows_to_keep as
  select * from massive_table
  where  save_these = 'Y';

truncate table massive_table;

insert into massive_table
  select * from rows_to_keep;
Or there's another version of this technique which can be even faster.
Switch the tables over.
The process for this is similar:
create table rows_to_keep as
  select * from massive_table
  where  save_these = 'Y';

rename massive_table to massive_archived;
rename rows_to_keep to massive_table;
This only loads the data once. So it can be even faster than using truncate + insert to swap the rows over as in the previous method.
To complete the switch you also need to copy any indexes, constraints, grants, etc. from the old table to the new. These steps could take a long time. So you'll need to test on your tables to see which CTAS method is quickest.
It's also more likely you'll make a mistake copying the dependencies.
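For example, after the rename you'd need to recreate everything that belonged to the old table on the new one. A sketch of the kind of statements involved (the index, constraint, column, and user names here are hypothetical):

create index massive_table_i on massive_table ( save_these );

alter table massive_table
  add constraint massive_table_pk primary key ( massive_table_id );

grant select on massive_table to reporting_user;

Miss one of these and you may only find out when queries slow down or the application hits errors later.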
But both of these options are fiddly. And come with a huge drawback:
You'll need to take your application offline to do these safely.
This makes CTAS methods a non-starter for many scenarios.
Luckily Oracle Database has a couple of other tricks available.
When you partition a table, you logically split it into many sub-tables. You can then do operations which only affect rows in a single partition.
This gives an easy, fast way to remove all the rows in a partition. Drop or truncate it!
alter table to_archive_from drop partition to_remove_it;

alter table to_archive_from truncate partition to_empty_it;
As with a regular truncate, you'll need to disable foreign keys on child tables before you can truncate a partition.
To use this method the following criteria need to be true:

- the table is partitioned
- the rows you want to remove make up the whole contents of one or more partitions
So the big question:
Is it worth partitioning a table to make deletes faster?
To answer this, you first need to ask a follow-up question:
Will you want to repeat this delete on a regular schedule?
If this is a one-off cleanse, there's little to gain by partitioning the table first.
But for regular data removal, this can help a lot. For example, an information lifecycle management process. With this you archive the oldest data to another table, database or storage system.
Provided you've partitioned the table on insert date, it's simple to wipe out the data in the oldest partition.
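As a sketch of how that might look, here's a hypothetical version of to_archive_from range-partitioned by month on its insert date, and the statement that removes the oldest month:

create table to_archive_from (
  insert_date date not null,
  payload     varchar2(100)
) partition by range ( insert_date ) (
  partition p_2024_01 values less than ( date'2024-02-01' ),
  partition p_2024_02 values less than ( date'2024-03-01' )
);

alter table to_archive_from drop partition p_2024_01;

The drop is a metadata operation on just that partition, however many rows it holds.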
But remember: partitioning a table affects all operations against it. Partitioning by date may make your archival process simple and fast, but it may also make key queries against the table slower.
Test your entire application workload against the table before diving in with partitioning! You may find the impact to other SQL statements makes partitioning infeasible for many of your tables.
Luckily Oracle Database 12c Release 2 added another fast way to remove lots of data:
A filtered table move.
Typically you use alter table ... move to change which tablespace you store rows in. Or other physical properties of a table such as compression settings.
With a filtered table move you can migrate a subset of the data. Do this by adding a where clause to the statement:
alter table to_delete_from move including rows where rows_to_keep = 'Y';
This only moves the rows matching the where clause. Provided you want to remove a large chunk of data, this can be much faster than delete.
And it has an online clause. So unlike the CTAS methods, you can do this while the application is still running.
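A sketch of the online form, reusing the statement from above (assuming Oracle Database 12.2 or higher):

alter table to_delete_from
  move including rows where rows_to_keep = 'Y'
  online;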
Though - like truncate - it will fail with an ORA-02266 if there are enabled foreign keys pointing to the table. So this needs careful planning to use on parent tables. Or an application outage.
So, the all-important question: which method is quickest?
To find out, I loaded a table with 100,000 rows. Then measured how long it took to delete 90 percent, 50 percent, and 10 percent of the data.
Except for the partitioning comparison, the tests were on a non-partitioned table. The partitioned table was split into ranges of 10,000 rows. So the tests truncated 9, 5, and 1 partitions respectively.
This graph shows the findings:
A regular delete was by far and away the slowest operation. When deleting 90% of the data it took 11 seconds on average. It only beat any of the DDL methods when removing as little as 10% of the rows. And even then it had a similar runtime to many of them.
So it can be worth checking if there's a faster option than delete when removing relatively few rows.
Truncating partitions gave the best performance overall. Though it was marginally slower than the other DDL methods when removing most of the data. This makes it a great option if you want to schedule regular deletes. For example, to remove the oldest month of data.
But partitioning has other implications. Adding it to a table may be inappropriate for other use cases. Provided you're on Oracle Database 12.2 or higher, a filtered table move will often give the best performance to simplicity trade-off.
A final note. My test tables only had one index: the primary key. It's likely your tables have at least a couple of other indexes. Which will make the processes slower. And may change the relative speed of each.
So if you need to find the fastest way to delete your data, test each method on your table!
If you want to repeat the tests I ran, use the remove_rows procedure in this Live SQL script. Note the storage quota on Live SQL is too small to test deleting 100,000 rows! So you'll need to try another environment.
Removing most of the rows in a table with delete is a slow process. One that gets slower the more data you're wiping. Add in other user activity such as updates that could block it, and deleting millions of rows could take minutes or hours to complete.
Switching from DML to DDL can make the process orders of magnitude faster. By far and away the safest of these is a filtered table move.
But if fast SQL is your goal, it's worth trying out a few of the techniques above to find which makes the process the quickest for you.
Just remember: DDL commits in Oracle Database!
So while using DDL can give great performance gains compared to delete, it's also more risky. Should anything go wrong in the process, you can't roll back the statement to recover the lost data. You have to restore from a backup.
Ensure you double, triple or even quadruple test your code to avoid accidentally removing lots of data from your production database!
Got any other questions on how you can make your deletes faster? Ask in the comments, or submit your question on Ask TOM.
Want to try these scripts out yourself? Grab them over on Live SQL.