Celebrating the joy and power of Oracle SQL with the Oracle Developer Advocate team

How to Delete Millions of Rows Fast with SQL

Chris Saxon
Developer Advocate

It's a rare event. Removing data from a table. But every now and then you may need to do a bit of spring cleaning and clear down data.

This is easy to do with delete.

But this presents a problem. Deleting lots of rows can be slow. And there's a chance it'll take even longer because another session has locked the data you want to remove.

Luckily there's a trick to speed up the process:

Turn the DML into DDL!

In this post we'll start with a quick recap of how delete works. Then look at several alternatives you can use in Oracle Database to remove rows faster.

If you want to see which is quickest, you can skip straight to the performance comparison.

How to Delete Rows with SQL

Removing rows is easy.

Use a delete statement. This lists the table you want to remove rows from. Make sure you add a where clause that identifies the data to wipe, or you'll delete all the rows!

delete from table_to_remove_data
where  rows_to_remove = 'Y';

I discuss how delete works - including why you probably don't want to do this - in more detail in this video.

But when you do want to erase data, delete can take a while. Particularly if you're clearing down most of the rows in a table.

So how can you make delete faster?

It's time to switch the SQL statements to DDL.

Let's start with the easiest case: emptying all the data from a table.

Delete all the Rows Fast with Truncate

If you want to wipe all the data in a table, the fastest, easiest way is with a truncate:

truncate table to_empty_it;

This is an instant metadata operation. It also resets the high-water mark for the table. By default it deallocates all the space above the minextents for the table. You can change this behaviour with the storage clause:

  • truncate table ... reuse storage leaves all the space allocated to the table
  • truncate table ... drop storage (the default) deallocates all the space above the minextents for the table
  • truncate table ... drop all storage deallocates all space from the table

Before using truncate you'll also need to check if there are any foreign keys pointing to this table. If there are - even if the child tables are empty - truncate will throw the following error:

truncate table to_empty_it;

ORA-02266: unique/primary keys in table referenced by enabled foreign keys

To get around this you'll need to disable the foreign keys referencing the table first. And re-enable them afterwards. For example:

alter table to_empty_it_child 
  modify constraint fk disable;

truncate table to_empty_it;

alter table to_empty_it_child 
  modify constraint fk enable;

Of course, the child tables must be empty too. If there are rows in them, you can't revalidate the foreign keys!

If you want to clear down the parent and child tables in one fell swoop, from Oracle Database 12c there is a trick. Add the cascade clause:

truncate table to_empty_it cascade;

The cascade option will remove all the rows from your table. And the child tables referencing it. This is an easy way to end up with an empty database!

Use extreme caution when doing this.

Fortunately you need to declare the foreign keys as on delete cascade for this to work. This is a rare option. So it's unlikely you'll be able to do this. Make sure you double-check before running a cascaded truncate!
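For a cascaded truncate to even be possible, the child table's foreign key must have been created with the on delete cascade option. A sketch of such a constraint, using a hypothetical parent_id column and constraint name:

```sql
-- hypothetical column and constraint names; the foreign key must be
-- declared on delete cascade for truncate ... cascade to succeed
alter table to_empty_it_child
  add constraint to_empty_it_fk
    foreign key ( parent_id )
    references to_empty_it ( parent_id )
    on delete cascade;
```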

But deleting everything in a table is unusual. Normally you want to remove a subset of the rows. There are several other DDL tricks you can use to make this faster.

The most widely available is to save the rows you want to keep in a temporary table. Then switch the data over.

Remove Rows with Create-Table-as-Select

Hang on. Removing data by creating a table? How does that work?

Bear with me.

Inserting rows in a table is faster than deleting them. Loading data into a new table using create-table-as-select (CTAS) is faster still.

So if you're removing most of the rows from a table, instead of issuing a delete you can:

  • Create a new table saving the rows you want to keep
  • Truncate the original table
  • Load the saved rows back in with insert as select

For example:

create table rows_to_keep as
  select * from massive_table
  where  save_these = 'Y';

truncate table massive_table;

insert into massive_table
  select * from rows_to_keep;

Or there's another version of this technique which can be even faster.

Switch the tables over.

The process for this is similar:

  • Create a new table saving the rows you want to keep
  • Rename the original table out of the way
  • Rename the new table to the original's name

For example:

create table rows_to_keep as
  select * from massive_table
  where  save_these = 'Y';

rename massive_table
  to massive_archived;

rename rows_to_keep
  to massive_table;

This only loads the data once. So it can be even faster than using truncate + insert to swap the rows over as in the previous method.

To complete the switch you also need to copy any indexes, constraints, grants, etc. from the old table to the new. These steps could take a long time. So you'll need to test on your tables to see which CTAS method is quickest.

It's also more likely you'll make a mistake copying the dependencies.
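As a sketch of what that copying involves - all the object names below are hypothetical, and your table will have its own set of dependencies:

```sql
-- after the rename, recreate each dependency on the new massive_table
create unique index massive_table_pk on massive_table ( id );

alter table massive_table
  add constraint massive_table_pk
  primary key ( id );

grant select on massive_table to reporting_user;
```

You can use dbms_metadata.get_ddl to extract the definitions from the old table to work from.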

But both of these options are fiddly. And come with a huge drawback:

You'll need to take your application offline to do these safely.

This makes CTAS methods a non-starter for many scenarios.

Luckily Oracle Database has a couple of other tricks available.

Delete All Rows in a Partition Fast

When you partition a table, you logically split it into many sub-tables. You can then do operations which only affect rows in a single partition.

This gives an easy, fast way to remove all the rows in a partition. Drop or truncate it!

alter table to_archive_from
  drop partition to_remove_it;

alter table to_archive_from
  truncate partition to_empty_it;

As with a regular truncate, you'll need to disable foreign keys on child tables before you can truncate a partition.
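The pattern mirrors the full-table truncate shown earlier. Assuming a hypothetical child table with a constraint named fk:

```sql
alter table to_archive_from_child
  modify constraint fk disable;

alter table to_archive_from
  truncate partition to_empty_it;

alter table to_archive_from_child
  modify constraint fk enable;
```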

To use this method the following criteria need to be true:

  • The table is partitioned
  • You want to remove all the rows in a partition

So the big question:

Is it worth partitioning a table to make deletes faster?

To answer this, you first need to ask a follow-up question:

Will you want to repeat this delete on a regular schedule?

If this is a one-off cleanse, there's little to gain by partitioning the table first.

But for regular data removal, this can help a lot. For example, an information lifecycle management process. With this you archive the oldest data to another table, database or storage system.

Provided you've partitioned the table on insert date, it's simple to wipe out the data in the oldest partition.
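For instance, with a table interval-partitioned by month on its load date (the table and column names here are hypothetical), archiving the oldest month is a single statement:

```sql
create table sales_history (
  sale_id     integer,
  insert_date date not null,
  amount      number
) partition by range ( insert_date )
  interval ( interval '1' month ) (
  partition p_start
    values less than ( date'2020-01-01' )
);

-- wipe the oldest month's data in one instant operation
alter table sales_history
  drop partition p_start
  update indexes;
```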

But remember: partitioning a table affects all operations against it. Partitioning by date may make your archival process simple and fast, but it may also make key queries against the table slower.

Test your entire application workload against the table before diving in with partitioning! You may find the impact to other SQL statements makes partitioning infeasible for many of your tables.

Luckily Oracle Database 12c Release 2 added another fast way to remove lots of data:

A filtered table move.

Delete Data with a Filtered Table Move

Typically you use alter table … move to change which tablespace you store rows in. Or other physical properties of a table such as compression settings.

With a filtered table move you can migrate a subset of the data. Do this by adding a where clause to the statement:

alter table to_delete_from
  move including rows
  where rows_to_keep = 'Y';

This only moves the rows matching the where clause. Provided you want to remove a large chunk of data, this can be much faster than delete.

And it has an online clause. So unlike the CTAS methods, you can do this while the application is still running.
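For example, adding the online keyword to the filtered move:

```sql
alter table to_delete_from
  move online including rows
  where rows_to_keep = 'Y';
```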

Though - like truncate - it will fail with an ORA-02266 if there are enabled foreign keys pointing to the table. So this needs careful planning to use on parent tables. Or an application outage.

Which is the Fastest Way to Delete Lots of Data?

So, the all-important question: which method is quickest?

To find out, I loaded a table with 100,000 rows. Then measured how long it took to delete 90 percent, 50 percent, and 10 percent of the data.

Except for the partitioning comparison, the tests were on a non-partitioned table. The partitioned table was split into ranges of 10,000 rows. So the tests truncated 9, 5, and 1 partitions respectively.

This graph shows the findings:

A regular delete was by far and away the slowest operation. When deleting 90% of the data it took 11 seconds on average. It only beat any of the DDL methods when removing as little as 10% of the rows. And even then its runtime was similar to many of them.

So it can be worth checking if there's a faster option than delete when removing relatively few rows.

Truncating partitions gave the best performance overall. Though it was marginally slower than the other DDL methods when removing most of the data. This makes it a great option if you want to schedule regular deletes. For example, to remove the oldest month of data.

But partitioning has other implications. Adding it to a table may be inappropriate for other use cases. Provided you're on Oracle Database 12.2 or higher, a filtered table move will often give the best performance to simplicity trade-off.

A final note. My test tables only had one index: the primary key. It's likely your tables have at least a couple of other indexes. Which will make the processes slower. And may change the relative speed of each.

So if you need to find the fastest way to delete your data, test each method on your table!

If you want to repeat the tests I ran, use the remove_rows procedure in this Live SQL script. Note the storage quota on Live SQL is too small to test deleting 100,000 rows! So you'll need to try another environment.


Removing most of the rows in a table with delete is a slow process. One that gets slower the more data you're wiping. Add in other user activity such as updates that could block it and deleting millions of rows could take minutes or hours to complete.

Changing the process from DML to DDL can make the process orders of magnitude faster. By far and away the safest of these is a filtered table move.

But if fast SQL is your goal, it's worth trying out a few of the techniques above to find which makes the process the quickest for you.

Just remember: DDL commits in Oracle Database!
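To see the difference, a quick sketch contrasting delete with truncate:

```sql
delete from to_empty_it;
rollback;   -- the deleted rows come back

truncate table to_empty_it;
rollback;   -- no effect: the truncate has already committed
```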

So while using DDL can give great performance gains compared to delete, it's also more risky. Should anything go wrong in the process, you can't roll back the statement to recover the lost data. You have to restore from a backup.

Ensure you double, triple or even quadruple test your code to avoid accidentally removing lots of data from your production database!

Learn more about SQL performance tuning in Databases for Developers: Performance

Got any other questions on how you can make your deletes faster? Ask in the comments, or submit your question on Ask TOM.

Want to try these scripts out yourself? Grab them over on Live SQL.

UPDATED 27 July 2020 - Adding Databases for Developers Link

Join the discussion

Comments ( 20 )
  • ROBERT CARLIN Wednesday, March 18, 2020
    Thee CTAS example can be extended to the Update of Millions of rows.
  • Chris Saxon Thursday, March 19, 2020
    Indeed it can, great point Robert!
  • Frank Friday, March 27, 2020
    Thanks for the info! Question here: For partitioned tables with global indexes, and to ensure there's no interruption to ongoing DML and queries, is partition truncation still a good choice? If not, what's the best choice for such scenario? Thanks!
  • Chris Saxon Friday, March 27, 2020
    Hi Frank,

    Yes you can use truncate partition with global indexes. Oracle Database will maintain them for you - provided you use the UPDATE INDEXES clause:

    alter table ...
    truncate partition ...
    update indexes;

    Same applies when dropping partitions.
  • chad Friday, May 29, 2020
    It doesn't seem to like the online clause its throwing an ora-14811, can you test? tried it on a 12.2 database. Works great without it, just remember to rebuild any indexes. Still faster than a delete.
  • Chris Saxon Friday, May 29, 2020
    Sorry - you've lost me Chad!

    What would throw an ORA-14811? I've not referenced that error in the post...
  • chad Friday, May 29, 2020
    must be a bug in 12.2, same test but with online works in 19
  • Jinay Friday, May 29, 2020
    Does filtered table move work on hash partitioned tables? I tried it and got an error. ORA-14511: cannot perform operation on a partitioned
  • chad Saturday, May 30, 2020
    I tested this with the online feature and I get an error in 12.2, Ive opened a support call. It may be because we use supplemental logging, it works without ONLINE, we still have to rebuild the indexes after but its still faster than either a delete or CTAS

    alter table to_delete_from
    move ONLINE including rows
    where rows_to_keep = 'Y';

    ORA-14811: Online DDL with the INCLUDING ROWS clause is not supported under supplemental logging.
  • Chris Saxon Monday, June 1, 2020
    Chad - the error seems clear to me there, it's not supported in 12.2!

    There are several DDL commands where the ONLINE clause is unsupported when supplemental logging is enabled.

    If you want this to change in 12.2, you'll have to work with support.
  • Chris Saxon Monday, June 1, 2020
    Jinay - it does - at the partition level:

    alter table t
    move partition p1
    including rows where c2 = 0;

    This applies to all alter table ... move operations. You can't move the whole table, only its partitions!
  • chad Wednesday, June 3, 2020
    Chris just to finish this one off, yes, its the supplemental logging that has the issue and its carried forward to at least 19c, I added supplemental logging to my 19c database and it fails there as well.

    Note: Im only referring to the ONLINE part of it.
    thanks for bringing this feature to us, will be used!
  • Chris Saxon Wednesday, June 3, 2020
    Thanks for clarifying Chad. Glad you find this useful :)
  • vijay Thursday, November 12, 2020
    I have a table with almost 2 billion rows.
    It has about 5 indexes.
    tables and indexes are consuming space in TBs.
    I am wondering if i purge the say 1 billion rows,
    do i need to
    1. gather the stats on the table
    2. will index get deleted themselves and if yes do i need to gather index stats too?
    now able to find a good article which explains whats happens on a delete
  • Chris Saxon Thursday, November 12, 2020
    Hi Vijay,

    If you remove half the rows in a table, the default stats gathering process should pick this table up next time around. This will also update index stats.

    The database also maintains the index entries too (assuming you mean by using delete, not one of the DDL options above). With non-online options (e.g. offline table move), you'll need to rebuild the indexes yourself.

    You might want to gather stats manually yourself straight away, but it's not strictly necessary.
  • TIRUMALA R GUNDALA Tuesday, November 24, 2020
    So the big question:

    Is it worth adding partitioning a table to make deletes faster? Yes

    Can you please provide examples to do this via partitions ?

    what if i have to separate data on some conditions instead of simple Date range or list ?

    how to achieve partition with that where criteria and i am interested to delete rows filtered with that where criteria. is it possible ?
  • Harisai M Wednesday, November 25, 2020
    We have 19c SE2 there i found expdp -impdp is best for deleting 7days data from 97days table data, which can have 1+billion rows.
  • Chris Saxon Wednesday, November 25, 2020
    Tirumala - using partitioning to make deletes faster is most effective when you want to remove ALL the rows in a partition. Then you can wipe them all instantly with drop/truncate partition.

    If you're using regular delete, you write it in the same way as against an unpartitioned table. To be most effective you must include the partitioning columns in the where clause.

    Whether this is faster or slower than the same delete on an unpartitioned table depends on what exactly you're doing.
  • Elavarashan VR Thursday, December 24, 2020
    Thank you very much for this article. I was looking for a faster way of removing the data without locking the table and found this option "Delete Data with a Filtered Table Move" found to be very useful. Its taking only 2 mins to delete 21 million data.
  • Chris Saxon Tuesday, January 5, 2021
    Awesome, great to hear Elavarashan! :)