This short post is a response to a comment on the "Why Use PL/SQL" post. It said "Nice read. Thanks. How do you go about addressing automating schema deployments and rollbacks."
Let's take "schema deployment" as an umbrella term for two things: the first-time installation of the complete set of user-defined artifacts, inside an Oracle Database, that implement an application's backend; and the subsequent patching and upgrading of that set. We prefer the loose term "artifact", rather than "object", because it covers things like constraints, and rows in tables that act as configuration data, as well as what DBA_Objects lists.
In practical terms, "schema deployment" simply means running a sequence of SQL statements. Here I'm using the operational definition of "SQL Statement" as any statement that you submit to Oracle using the Oracle Call Interface (hereinafter the OCI), or something that builds upon it. To be brief here, we'll take this to cover thin JDBC too. I explain this more carefully in my "Why Use PL/SQL" paper. Some people prefer to say that "SQL Statement" covers only insert, update, delete, and select. But my operational definition includes all sorts of DDL statements, alter session statements, the call statement, the anonymous PL/SQL block, and so on.
The complete set of SQL statements that together install an application's backend is, quite simply, its source code definition. These definitions are always (in my experience) grouped into several files. Such a file is commonly called an installation script. Most shops seem to be happy just to use the old warhorse, SQL*Plus, as the script runner, and we therefore usually hear the files referred to as SQL*Plus scripts, or SQL scripts for short. (I do know of shops who use their own installation programs built on the OCI. This makes no difference to what I say here.)
In general, ordering matters. For example, you cannot create an index before the column list that defines it is valid. Usually this just means that the table containing the column or columns in question must already have been created. In more ambitious cases, when the expression that defines the index references a PL/SQL function, that function must already exist. Getting the order right is generally very straightforward, and the right order emerges naturally during development. Ideally, all the sources are held in a source control system, and its facilities allow a definition to be stored that will generate a specified set of scripts to run in a specified order.
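As a minimal sketch of why ordering matters (all names here are hypothetical, invented for illustration), consider a function-based index. The function must exist, and the table must exist, before the index can be created:

```sql
-- Step 1: the table must exist first.
create table customers(
  id    number primary key,
  email varchar2(320));

-- Step 2: the function the index expression references must exist next.
create or replace function normalized_email(p_email in varchar2)
  return varchar2
  deterministic
is
begin
  return lower(trim(p_email));
end normalized_email;
/

-- Step 3: only now can the index be created. Run this statement before
-- step 2 and it fails, because normalized_email does not yet exist.
create index customers_norm_email_ix
  on customers(normalized_email(email));
```

Run in this order, the three statements succeed; run the index creation first and it fails at once. This is the dependency discipline that, scaled up, the installation scripts encode.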
Though the deployment details vary (for example, in some cases pre-built executable programs are shipped) all important computer systems are, ultimately, so defined and so installed.
When the application's backend includes lots of PL/SQL (which architecture style I applaud), installation time can be reduced by first installing all the objects in namespace 1 (tables, views, synonyms, package specs, type specs, and so on). For brevity, I'll refer collectively to objects of all these types as specs. To minimize the time for this phase, the objects should be created in topologically sorted order.
Maybe the term "topological sort" is unfamiliar. It denotes the sort that produces the installation order that leaves behind it the minimum number of invalid objects that then need to be validated by running one of the Utl_Recomp subprograms. When you have a model installation, as you necessarily must have, in a test environment, it's straightforward to compute this optimal ordering from the information exposed by the DBA_Dependencies view. You start with the objects that depend only upon already-installed objects; then you install their immediate dependants; and you continue in this fashion until you are done. You can guess that the query involves "connect by".
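Here is one way such a query might be sketched (it is an illustration, not a definitive implementation; the schema name APP_OWNER is hypothetical, and a production version would need to handle cycles between package specs and bodies, and objects with no intra-schema dependencies at all):

```sql
-- A sketch of deriving an installation rank from DBA_Dependencies.
-- The hierarchical "level" pseudocolumn ranks each object by the depth
-- of its dependency chain; installing in ascending order of the maximum
-- level per object leaves the fewest invalid objects behind.
with deps as (
  select d.name, d.type, d.referenced_name, d.referenced_type
  from   dba_dependencies d
  where  d.owner = 'APP_OWNER'
  and    d.referenced_owner = 'APP_OWNER')
select name, type, max(level) as install_rank
from   deps
-- Start with objects that depend only on objects outside the set
-- (i.e. on already-installed objects), then walk the dependants.
start with referenced_name not in (select name from deps)
connect by prior name = referenced_name
       and prior type = referenced_type
group by name, type
order by install_rank;
```

Sorting the spec-creation scripts by install_rank gives the topologically sorted order that the paragraph above describes.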
The spec installation phase is run serially. Now we come to bodies. A spec doesn't depend upon its body, so body installation can never cause invalidation. In the same way, a table doesn't depend on any indexes that it might have. This means that installation order is unimportant. The scripts that define the bodies can therefore be parallelized up to whatever degree your kit can sustain. The scripts for creating, patching, and upgrading Oracle Database are run in this way: first the specs, serially; then the bodies, in parallel. You'll always finish up by running one of the Utl_Recomp subprograms; this, of course, supports parallelization too.
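The finishing step can be sketched like this (Utl_Recomp ships with Oracle Database; the schema name APP_OWNER and the thread count are hypothetical, chosen for illustration):

```sql
-- Revalidate whatever the spec and body phases left invalid,
-- using as many recompilation threads as the kit can sustain.
begin
  utl_recomp.recomp_parallel(
    threads => 8,
    schema  => 'APP_OWNER');
end;
/
```

Passing null for threads lets the procedure choose a degree of parallelism itself; passing null for schema recompiles invalid objects database-wide.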
There's a general consensus that "patching" means changing an existing system to make it conform to its present specification when non-conformity has been detected in ongoing use. To put this more straightforwardly, patching means fixing bugs. In the same spirit, the consensus is that "upgrading" means changing an existing system to make it conform to a new version of its specification by introducing brand new functionality. In strict operational terms, both the patching and the upgrading of the user-created artifacts in an Oracle Database are done in the same way: by running some SQL scripts, serially, in a pre-defined order and then others, in parallel, in arbitrary order. From now on, therefore, I'll simply use the one term "patching" as the umbrella both for "patching" in the strict sense and for "upgrading".
There's very little in what I've said so far that doesn't apply to the installation and patching of any software system.
Initial installation excites far less interest than does the patching of an installed system. It's hard to define "downtime" or "unavailability" before you already have a system in use! But when patching is done in downtime, the duration of the outage is a major concern. Here's where the term "rollback" is heard. It sometimes happens that, despite all best efforts to test the patch installation scripts, a condition is met when they're run in the production database that hadn't been met in rehearsal, and the consequence is so bad that the application cannot be opened up for regular use until the problem is fixed. Because the clock is ticking, a brute-force remedy is often preferred: simply restore the database to the state it was in at the start of the downtime window, open up for use, and work out the solution back in the development shop. The result is that an outage has been experienced whose net effect is ZERO benefit.
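One common way (though not the only way) to implement this brute-force restore is Flashback Database with a guaranteed restore point. The restore point name below is hypothetical:

```sql
-- Before the downtime window opens:
create restore point before_patch_42 guarantee flashback database;

-- If the patch must be abandoned: shut down, start up in mount mode,
-- then rewind the whole database to the restore point and reopen.
flashback database to restore point before_patch_42;
alter database open resetlogs;

-- If the patch succeeds, drop the restore point so that the
-- accumulated flashback logs can be released.
drop restore point before_patch_42;
```

This rewinds everything, which is exactly the point: the patch is erased along with any trace of the failed attempt, and also along with any benefit the outage was meant to deliver.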
There is a critical, and exciting, difference between the task of patching the user-defined artifacts inside an Oracle Database and the corresponding task for installations into other environments. Installation into an Oracle Database can be done with zero downtime by taking advantage of edition-based redefinition (hereinafter EBR). In general, both patching classically in downtime, and patching with zero downtime as an EBR exercise, are done the same way: by running SQL scripts in an appropriate mixture of serial and parallel. The difference is that the SQL scripts that implement an EBR exercise are spelled differently from those that implement classic downtime patching. The key feature is that all changes that are made in an EBR exercise are made in a purely additive fashion in the privacy of the new edition, while the unchanged pre-patch artifacts continue to be used.
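A sketch of what "spelled differently" means in practice (the edition name and the function are hypothetical; a real exercise also involves editioning the schema's objects and, eventually, making the new edition the database default):

```sql
-- Create a new edition, a private workspace for the patch.
create edition patch_42;

-- Point this session at the new edition. Online users' sessions
-- remain on the parent edition and are unaffected.
alter session set edition = patch_42;

-- Re-create the changed PL/SQL. This is purely additive: the old
-- version continues to exist, and to run, in the parent edition.
create or replace function order_total(p_order_id in number)
  return number
is
begin
  return 0;  -- placeholder body for the sketch
end order_total;
/
```

Nothing the pre-patch sessions depend on is modified in place; that additive discipline is what makes the zero-downtime guarantee, and the clean-abandonment option discussed next, possible.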
This has a crucial benefit for the "patch rollback" proposition. If the EBR exercise hits a problem, the online users don't know that it's happened. In most cases, the problem can be diagnosed and fixed in situ so that the EBR exercise can then be completed as intended. The only bad consequence is that the moment at which the patched system becomes available has been delayed; but no-one has suffered downtime. In an extreme case, advantage can be taken of the purely additive nature of the exercise and everything can be abandoned cleanly, leaving no trace, so that presently a new fixed-up EBR exercise can be conducted. People always like to hear this when they first learn about EBR. But I have never heard of such a drastic step being needed in practice. Rather, in situ fix-up to allow ordinary progress to completion always seems to be viable.