In the last few weeks I had the opportunity to work on two disaster recovery cases.
There are a few lessons learned from both:
- Check your backups
- Back up your archived logs
- Make backups of the database structure (full export without data)
- Make backups of your controlfiles (backup controlfile to trace)
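The last two items can be scripted in a nightly job; a minimal sketch, where the connection credentials, paths, and the DATA_PUMP_DIR directory object are placeholders:

```shell
# Metadata-only full export: the whole database structure, no rows
expdp system/password full=y content=metadata_only \
      directory=DATA_PUMP_DIR dumpfile=struct_$(date +%Y%m%d).dmp

# Write a controlfile re-creation script to a known location
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/backup/ctl_recreate.sql' REUSE;
EXIT
EOF
```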
The first case was a development environment where a month's worth of work by a team of several developers was lost, even though the database was running in archivelog mode and cold backups were being taken.
The most recent backup was corrupted, and the archived logs needed to roll the previous backup forward were missing.
The undo tablespace and one of the data tablespace datafiles were lost.
In this case I opened the old backup and created a full database export, which I used to create the application user's objects in a new database. Then I dumped the data from the corrupted database using DUL and imported it into the new database.
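The rebuild followed the classic exp/imp path; a sketch, with APP standing in for the real schema name:

```shell
# On the instance opened from the old backup: export the structure only, no rows
exp system/password owner=APP rows=n file=app_ddl.dmp log=app_ddl_exp.log

# On the new database: re-create the schema objects from that export
imp system/password fromuser=APP touser=APP file=app_ddl.dmp log=app_ddl_imp.log

# The row data itself was then unloaded from the corrupted datafiles with DUL
# (an Oracle-internal tool) and loaded into the new database.
```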
The second case involved a production clone that was periodically refreshed by an automatic job. As part of the job, the metadata of one tablespace was exported so that the tablespace could be plugged back into the refreshed database. That tablespace contained critical information and the delta of the developers' work.
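For reference, a transportable tablespace refresh of this kind typically exports the metadata along these lines; the tablespace name, datafile path, and directory object are placeholders:

```shell
# The tablespace must be read-only before its metadata can be exported
sqlplus -s / as sysdba <<'EOF'
ALTER TABLESPACE app_data READ ONLY;
EXIT
EOF

# Export only the tablespace metadata; the datafile itself is copied separately
expdp system/password transport_tablespaces=APP_DATA \
      directory=DATA_PUMP_DIR dumpfile=app_data_tts.dmp logfile=app_data_tts.log

# Later, the datafile plus this dump are plugged back in:
# impdp system/password transport_datafiles='/u01/app_data01.dbf' \
#       directory=DATA_PUMP_DIR dumpfile=app_data_tts.dmp
```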
In this case both the metadata export and the database refresh failed, and the transportable tablespace datafile was left orphaned, without its metadata export.
Here I didn't have the data dictionary, so the data dumps lacked the original CREATE TABLE statements; the only references I had were the object numbers and an old metadata export.
I extracted the CREATE TABLE statements from that export, linked them to their original object dumps, and imported the data. Some newer objects, not contained in the old metadata export, were left unnamed, to be identified by the application owners.
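If the old metadata export is a classic exp dump, the DDL can be listed without importing anything; a sketch, where the dump and output file names are placeholders:

```shell
# show=y lists the DDL in the log without executing it
imp system/password file=old_metadata.dmp full=y show=y log=ddl_listing.log

# Alternatively, indexfile writes the CREATE statements to a SQL script
# (table DDL appears commented out with REM; index DDL is left active)
imp system/password file=old_metadata.dmp full=y indexfile=create_objects.sql
```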
It's good to be able to recover valuable data when almost all hope is lost. But it's even better to have good, validated backups!