Snap Clone: Instant, self-serviced database-on-demand
By Sudip Datta-Oracle on Mar 06, 2013
Snap Clone: Introduction
Oracle just released Enterprise Manager 12c Release 2 Plugin Update 1 in February 2013. This release has several new cloud management features, such as Schema as a Service and Snap Clone. While Schema as a Service is relevant in the context of new database services, Snap Clone is useful for performing functional testing on pre-existing data.
One big consumer group of the cloud is QA engineers and testers, who perform User Acceptance Tests (UAT) for various applications. To perform a UAT, they need to create copies of the production database. For intense testing, such as pre-upgrade scenarios, they need a full updateable copy of the production data. In other situations, such as functional testing, they make only minimal updates to the data but need multiple functional copies at the same time. Enterprise Manager 12c supports both scenarios. In the former case, it leverages RMAN backups to clone the data. In the latter case, it leverages "Copy on Write" technology at the storage layer to perform an Enterprise Manager 12c Snap Clone (or just Snap Clone). Currently, the NAS technologies NetApp and ZFS Storage Appliance are supported for Snap Clone. With this technology, the entire data set does not need to be cloned; the new database can physically point to the source blocks within the same filer and only needs to allocate new blocks when there are updates to the cloned copy.
Underlying “Copy on Write” technology
To cover the underlying technology, let us look at the NetApp and Sun ZFS storage technologies. First of all, NetApp supports pooling of storage resources and creating volumes on top of those, called FlexVol volumes. NetApp FlexClone technology enables true data cloning: instant replication of FlexVol volumes without requiring additional storage space at the time of creation. Each cloned volume is a transparent, virtual copy that can be used for a wide range of operations such as product/system development testing, bug fixing, upgrade checks, data set simulations, and so on. FlexClone volumes have all the capabilities of a FlexVol volume, including growing, shrinking, and being the source of a snapshot copy or even another FlexClone volume. Data ONTAP makes this happen through Copy on Write technology. When a volume is cloned, ONTAP does not allocate any new physical space but simply updates the metadata to point to the existing blocks of the parent volume. NetApp filers use the Write Anywhere File Layout (WAFL) to manage disk storage. When a file is changed, the snapshot copy still points to the disk blocks where the file existed before it was modified, and only the changes (deltas) are written to new disk blocks. A block in WAFL can currently have a maximum of 255 pointers to it, which means a single FlexVol volume can be cloned up to 255 times. All the metadata updates are just pointer changes, and the filer takes advantage of locality of reference, NVRAM, and RAID technology to keep everything fast and reliable. I found this documentation on the NetApp site especially useful for understanding the concept. The following picture provides a graphical illustration of how this works.
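The pointer mechanics described above can be sketched in a few lines of Python. This is a deliberately simplified toy model, not the actual WAFL or ZFS implementation; the `Volume` class and its methods are hypothetical names used purely for illustration:

```python
# Toy model of copy-on-write cloning: a clone initially shares every
# block with its parent and allocates new storage only when written to.
# (Illustration only; not NetApp's or Oracle's actual implementation.)

class Volume:
    def __init__(self, blocks):
        # block_map: logical block number -> block contents
        self.block_map = dict(enumerate(blocks))

    def clone(self):
        # Metadata-only copy: the clone's pointers reference the
        # parent's existing blocks; no data is duplicated.
        clone = Volume([])
        clone.block_map = dict(self.block_map)
        return clone

    def write(self, lbn, data):
        # Copy on write: the changed block gets new storage; the
        # parent's copy of that block is untouched.
        self.block_map[lbn] = data

parent = Volume(["A", "B", "C"])
child = parent.clone()          # instant, consumes no data space
child.write(1, "B'")            # only this delta consumes new space

print(parent.block_map[1])      # "B"  (parent unchanged)
print(child.block_map[1])       # "B'" (delta written to a new block)
```

The key point the sketch captures is that `clone()` copies only pointers, so its cost is independent of the amount of data in the volume.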
Oracle ZFS employs a similar copy-on-write methodology, creating clones that point to the source blocks of data. When a block needs to be modified, the data is never overwritten in place. Oracle Solaris ZFS instead creates new pointers to the new data and a new master block (uberblock) that points to the modified data tree; only then does it switch to using the new uberblock and tree. In addition to providing data integrity, having both new and previous versions of the data on disk allows services such as snapshots to be implemented very efficiently.
The best way to think of a storage snapshot is as a point-in-time view of the data. It’s a time machine, letting you look into the past. Because it’s all just pointers, you can actually look at the snapshot as if it were the active filesystem. It’s read-only, because you can’t change the past, but you can look at it and read the data. NetApp and Sun ZFS snapshots write the new information to a special bit of disk reserved for storing these changes, called the SnapReserve. Then, the pointers that tell the system where to find the data are updated to point to the new data in the SnapReserve.
Since we are only recording the deltas, you get the disk savings of copy-on-write snapshots (typically a few hundred kilobytes for a 1 TB volume). Because the snapshot is just pointers, restoring data (using SnapRestore) simply updates the pointers to point to the original data again, which is far faster than copying all the data back. So taking a snapshot completes in seconds, even for really large volumes (terabytes), and so do restores. A typical terabyte database therefore takes only a couple of minutes to clone, back up, and restore.
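The same toy-model style can sketch why snapshots and restores are near-instant. Again, this is a conceptual illustration with made-up names, not the filer's actual data structures:

```python
# Toy model of point-in-time snapshots: a snapshot is a frozen copy of
# the pointer table, so taking or restoring one never moves data blocks.
# (Conceptual sketch only; not the actual SnapRestore implementation.)

class SnapVolume:
    def __init__(self, blocks):
        self.block_map = dict(enumerate(blocks))
        self.snapshots = {}

    def snapshot(self, name):
        # Freeze the current pointers: cost is O(pointers), not O(data).
        self.snapshots[name] = dict(self.block_map)

    def write(self, lbn, data):
        # New data goes to new blocks; the snapshot keeps old pointers.
        self.block_map[lbn] = data

    def restore(self, name):
        # Rollback is just swapping the pointer table back; no data
        # is copied, which is why restores finish in seconds.
        self.block_map = dict(self.snapshots[name])

vol = SnapVolume(["A", "B", "C"])
vol.snapshot("before_test")
vol.write(0, "A'")              # delta lands in new storage
vol.restore("before_test")      # instant: pointers move back
print(vol.block_map[0])         # "A"
```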
So, what is the additional benefit of Enterprise Manager Snap Clone over storage level cloning?
Snap Clone is complementary to the copy-on-write technologies described above: it leverages them, while providing additional value in:
- Automated registration and association with the Test Master database: Snap Clone registers the storage with Enterprise Manager in the context of the Test Master database. For example, it queries the filer to find the storage volumes and then associates them with the volumes backing the database's datafiles. It also gives administrators granular control over which databases can be cloned, since there may be databases that DBAs do not want cloned.
- Database as a Service using a self-service paradigm: Snap Clone enables a self-service user (typically a functional tester) to provision a clone based on the Test Master. The self-service capability includes administrator-side features such as setting up the pool of servers that will host the databases, creating a zone, creating service templates for provisioning, and setting access controls for users at both the zone level and the service template level.
- Time travel: Functional testers often need to go back to an earlier incarnation of a database. Enterprise Manager allows self-service users to take multiple snapshots of the database as backups. The users can then easily restore from an earlier snapshot. Since a snapshot is only a thin copy, backup and restore are almost instantaneous, typically a couple of minutes. During a restore, a large part of that time is actually spent starting the database and discovering its state in Enterprise Manager, not in the restore itself.
- Manageability: Finally, Enterprise Manager provides complete manageability of these clones, including performance management, lifecycle management, and so on. For example, when cloning at the storage volume level, sysadmin tools have little visibility into the databases and applications that consume those volumes. From an inventory management, capacity planning, and compliance perspective, it is important to track the storage association and lineage of the clones at the database level. Enterprise Manager provides that rich set of manageability features.
So how does this work in Enterprise Manager 12c?
In order to understand the Snap Clone feature of Enterprise Manager and its relevance to DBaaS, it is important to understand the sequence of steps that enable the feature.
Step 1: Setting up the DBaaS Pool
First of all, the sysadmin has to designate a few servers (which become Enterprise Manager hosts when the agent is deployed on them) to constitute the PaaS Infrastructure Zone. Each of these servers must have the connectivity needed to mount the volumes participating in the Snap Clone process.
The DBA intrinsically knows the exact versions and flavors of the databases used within each LoB, along with their operating system compatibility. As the next level of streamlining, he/she can group each unique database configuration into a single place called a Pool: for example, single-instance databases of one version, cluster databases of another, and so on.
A database pool contains a set of homogeneous resources that can be used to provision a database instance within a PaaS Infrastructure Zone. For Snap Clone in particular, the administrator needs to pre-provision the same version of Oracle Homes either on standalone hosts or in a RAC cluster, which should be a part of the PaaS Infrastructure Zone.
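The homogeneity requirement above can be expressed as a small check. The `OracleHome` record and `homogeneous` helper below are hypothetical names for illustration, not Enterprise Manager's actual data model, and the version strings are made-up examples:

```python
# Illustrative sketch (not EM's actual model): a database pool groups
# homogeneous resources, i.e. Oracle Homes that all share the same
# database version and platform within a PaaS Infrastructure Zone.

from dataclasses import dataclass

@dataclass(frozen=True)
class OracleHome:
    host: str
    version: str    # e.g. "11.2.0.3" (hypothetical example value)
    platform: str

def homogeneous(pool):
    # A valid pool has exactly one (version, platform) combination.
    return len({(h.version, h.platform) for h in pool}) == 1

pool = [OracleHome("host1", "11.2.0.3", "Linux x86-64"),
        OracleHome("host2", "11.2.0.3", "Linux x86-64")]
print(homogeneous(pool))    # True: both homes match, so placement
                            # requests only need to name the pool
```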
Step 2: Setting up the Test Master
First, the administrator has to set up a Test Master as a clone of production. Sometimes the administrator also has to create another copy of production at the source itself for masking and subsetting. The solution varies with the customer's specific needs and infrastructure: one can use RMAN, Data Guard, GoldenGate, or even storage technologies such as NetApp SnapMirror, and most customers have already settled on one approach or another. Within Enterprise Manager, they can use the Database Clone feature (which leverages RMAN behind the scenes) or the data synchronization feature of Change Manager (part of the Database Lifecycle Management Pack) to keep production and the Test Master consistent. There is no single way of accomplishing this; it all depends on the specific use case. Where the data needs to be masked or subsetted at the source, the corresponding EM features can be used as well.
The Test Master has to be created on a ZFS Storage Appliance or NetApp filer. Currently, the supported versions are:
· ZFS Storage Appliance models 7410 and 7420
· Any NetApp storage model where ONTAP® version 188.8.131.52P1D18 or above is supported. The NetApp interoperability matrix is available here.
Here’s a sample of database files on a Netapp filer that could constitute the Test Master database.
· /vol/oradata (datafiles and indexes): [8-16 LUNs]
· /vol/oralog (redo logs only): [2-4 LUNs]
· /vol/orarch (archived redo logs): [2-4 LUNs]
· /vol/controlfiles (small volume for controlfiles): [2-4 LUNs]
· /vol/oratemp (temp tablespace): [4-8 LUNs]
Step 3: Register the storage and designate the Test Master
Once the Test Master database has been created, one has to
1. Discover the Test Master database as an EM target
2. Register the storage with Enterprise Manager. Enterprise Manager uses an agent installed on Linux x86-64 to communicate with the filer. For NetApp storage, the connection is over HTTP or HTTPS; for Sun ZFS storage, it is over SSH.
Enterprise Manager associates a database with a filer by deriving the volumes from the data files and then matching them against the volumes seen by the filer. For a database to participate in Snap Clone, it must be wholly located on FlexVol volumes or shares with Copy on Write enabled; Enterprise Manager performs the necessary validations for this.
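The validation described above can be pictured as follows. This is a hypothetical sketch, not EM's actual code: the `volume_of` helper assumes a NAS path layout like `/vol/<volume>/<file>` (matching the sample layout shown earlier), and `validate_snap_clone_ready` is an invented name:

```python
# Hypothetical sketch of the validation described above: derive the set
# of volumes backing a database's datafiles, then check that every one
# of them is a copy-on-write-capable volume known to the filer.

def volume_of(datafile_path):
    # Assumption for this illustration: on a layout like
    # /vol/<volume>/<file>, the volume is the first two path components.
    parts = datafile_path.strip("/").split("/")
    return "/" + "/".join(parts[:2])

def validate_snap_clone_ready(datafiles, cow_volumes):
    used = {volume_of(f) for f in datafiles}
    # The database qualifies only if it is *wholly* located on
    # copy-on-write-enabled volumes.
    return used <= set(cow_volumes)

datafiles = ["/vol/oradata/system01.dbf",
             "/vol/oradata/users01.dbf",
             "/vol/oralog/redo01.log"]

# All volumes COW-enabled -> database is eligible for Snap Clone.
print(validate_snap_clone_ready(datafiles, ["/vol/oradata", "/vol/oralog"]))  # True
# One volume missing COW -> validation fails.
print(validate_snap_clone_ready(datafiles, ["/vol/oradata"]))                 # False
```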
Step 4: Creating the service template using the Profile
Finally, the Test Master needs to be exposed to self-service users as a source for functional clones. This is done by creating a provisioning profile. A provisioning profile, in general, is an Enterprise Manager concept that denotes a gold image, whether in the form of a "tarball" archive, an RMAN backup, or a Test Master. The profile makes the process repeatable by several users, such as QA engineers testing different parts of the application.
The profile is exposed to the service catalog via a service template, which also includes the provisioning procedure and the pre- and post-scripts for deploying the image.
Finally comes the user-side experience. Enterprise Manager supports a self-service model where users can provision databases without being gated by a DBA. The self-service user picks a service template (which links indirectly, via the provisioning profile, to the Test Master), specifies the zone in which to deploy, and the database gets provisioned. This new database is actually a "thin clone" of the Test Master; new blocks are allocated only when the data is updated. The user can also take backups of the cloned database, which are essentially read-only snapshots. If the user needs to restore the database, the latest incarnation of the database is simply pointed to the snapshot, so the restore is instantaneous. This literally enables the self-service user to go back in time, in a "time travel" fashion. In addition to provisioning and backup, self-service users can also monitor the databases: check their status, look at session statistics, and so on.
Before concluding this blog entry, let me point to a bunch of collateral related to DBaaS that we recently published. Check out the new whitepaper, demos, and presentation. We will soon publish a technical whitepaper on performing E-Business Suite testing using Snap Clone. Till then...