
PL/SQL and edition-based redefinition: the perfect marriage.

Recent Posts

“Why use PL/SQL—the Movie”

“Why use PL/SQL—the Movie” is now available as a playlist on the Oracle Database Product Management YouTube Channel HERE.

These two conference presentations are the core of the production:
—“Why Use PL/SQL” by Bryn, recorded at Kscope16 (60 minutes)
—“A Real World Comparison of the NoPlsql and Smart Database Paradigms” by Toon Koppelaars, recorded at Kscope17 (a 90-minute deep dive)

Three informal conversations argue that the database should be used as a processing engine rather than just as a bag of tables. Advocates of the bag-of-tables approach (usually referred to in the videos as the NoPlsql approach) call the database the “persistence layer”—presumably in the hope that using this term will dress up that sow’s ear as a silk purse. Check out the playlist for these:
—“Andrew Holdsworth, Bryn, Bikes and Scenery”
—“Bryn and Cary Millsap discuss SmartDB vs NoPlsql”
—“From Helsinki Declaration to SmartDB vs NoPlsql”

You might like, mentally, to add THIS other YouTube video to the playlist. It’s called “NoPlsql and Thick Database Approaches with Toon Koppelaars”. Because it was published in the Fall of 2016, and has already received thousands of views, we decided to leave it as a stand-alone piece. It gives the full detail of the “flame graph experiment” that was conducted and analyzed by members of Oracle’s Real World Performance Group. Toon refers to this experiment in his Kscope17 deep dive session; but the duration of that session allowed no more than a summary.

A note on “Smart” vs “Thick”

The earlier pieces in the playlist use the term “Thick Database”. Both this term and “Fat Database” have been used by various well-respected speakers and bloggers for a decade or more. Both “thick” and “fat” resonate with the visual metaphor of drawing a software stack as horizontal layers, like a stack of pancakes. When all the SQL statements that manipulate the application’s tables are issued by PL/SQL code in the database, the database pancake is thick and, correspondingly, the middle-tier pancake is thin. However, we switched to “smart” because it better captures another crucial aspect of the paradigm. Not only is all the SQL issued from PL/SQL code; but also, set-based SQL and bulk-binding techniques are used to the full, and the full power of Oracle Database’s rich feature set is exploited. In other words, not only are we encapsulating the database with a hard shell PL/SQL API; but also, we’re using the database in a smart way. This stands in stark contrast to the approach that aims to be database-vendor agnostic and that limits itself to a universally available, and therefore very basic, set of features.


How to install a #SmartDB application back-end

In this essay, we use the term DB-user as shorthand for what the Oracle RDBMS Sys.DBA_Users view lists. We also define these subclasses:
—Power-DB-user: a DB-user that has at least one Any privilege (either directly or via a role)
—DBA-DB-user: a DB-user that has (at least) the DBA role
—Ordinary-DB-user: a DB-user that has not a single Any privilege (neither directly nor via a role)
—Connect-DB-user: a DB-user that starts with just Create Session, that will never own objects, that doesn’t have a partner password-protected role (see below), and that (following application back-end installation) will have only the Execute privilege on some or all of the PL/SQL subprograms that jointly define the database’s hard shell API.

Some additional useful definitions:
—DBA: a person who knows the password for a DBA-DB-user
—App-Admin: a person who knows the password for an Ordinary-DB-user and (see below) for its partner password-protected role—but who doesn’t know the password for a single DBA-DB-user.

Experts agree that a nefarious person who knows the password for a Power-DB-user can find a way to elevate its privileges to turn it into a DBA-DB-user—in other words, to turn her/himself into a DBA. They agree, too, that a nefarious App-Admin cannot find a way to turn an Ordinary-DB-user into a DBA-DB-user. This leads to a clear recommendation:
—never allow the direct grant of even a single Any privilege to a DB-user
—rather, simply meet the perceived need by granting the DBA role to that DB-user
—limit the group of DBAs to a tiny few, each of whom has been scrupulously screened.

The Development Shop presents a request to the DBAs:
(1) To create a set of Ordinary-DB-users, each with its specified list of explicit non-Any privileges. (I prefer to avoid Oracle-maintained roles to make the analysis simpler.) For ordinary OLTP applications, just Create Session is sufficient for those that will own only code; and this, together with quotas on appropriate tablespaces, is sufficient for those that will own quota-consuming objects like tables and indexes.
(2) To create a set of password-protected roles in one-to-one correspondence with the Ordinary-DB-users, each with its specified list of explicit non-Any privileges (like Create Table, Create Procedure, and the like). This may include some object privileges with grant option, like Execute on the Sys.DBMS_Lock package.
(3) To create a set of empty roles (i.e. roles to which no privileges or roles are yet granted), to be used as CBAC roles. Then to grant these roles with delegate option to those Ordinary-DB-users that will own singleton packages that define the database’s API.
(4) To tell the team of App-Admins their passwords. This can be fine-grained: for example, App-Admin-1 knows only the passwords for Ordinary-DB-user-1 and its partner password-protected role.

Thereafter, the App-Admins can run the scripts that the Development Shop has provided, both those that do the initial fresh install and those that later patch and upgrade it. Each script is designed to run as a single designated Ordinary-DB-user and, by construction, will be able to create, modify, and drop only objects that this Ordinary-DB-user owns, and to grant object privileges on these to other DB-users and (by intention) to CBAC roles. Each script is run by an App-Admin who connects as the given Ordinary-DB-user and then enables its partner password-protected role.

Within this regime, every single PL/SQL unit will be invoker’s rights. And each of the jacket PL/SQL singleton packages that implement the API—and only these—will be granted its partner CBAC role, to which the minimum object privileges have been granted that allow it to fulfill its purpose. Critically, engineers who install client-side code will know the passwords only for Connect-DB-users. Notice that a deliberately designed consequence of this scheme is that, at run time, the invoker’s rights code executes with only object privileges—i.e. not a single system privilege.
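To make the request concrete, here is a minimal sketch of the DBAs' setup. All names (App_Code, App_Code_Role, App_CBAC, App_Connect) are hypothetical, and the privilege lists are illustrative, not prescriptive:

    -- A minimal sketch of the DBAs' setup. All names are hypothetical and the
    -- privilege lists are illustrative, not prescriptive.

    -- (1) An Ordinary-DB-user that will own the API code: just Create Session.
    create user app_code identified by app_code_pwd;
    grant create session to app_code;

    -- (2) Its partner password-protected role, carrying the explicit non-Any
    -- privileges that are needed only at install and patch time.
    create role app_code_role identified by app_code_role_pwd;
    grant create procedure to app_code_role;
    grant app_code_role to app_code;

    -- (3) An empty CBAC role, granted with delegate option to the API owner.
    -- Object privileges are granted to it later by their owners; the API owner
    -- then attaches it to a jacket package, thus:
    --   grant app_cbac to package app_code.customer_api;
    create role app_cbac;
    grant app_cbac to app_code with delegate option;

    -- The Connect-DB-user for client-side code: Create Session now, and (after
    -- installation) just Execute on the hard shell API's subprograms.
    create user app_connect identified by app_connect_pwd;
    grant create session to app_connect;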


Reserved words, keywords, and the ends of labeled blocks

@ToonKoppelaars drew our attention to a question posted to oracle-l@freelists.org by knecht.stefan@gmail.com. I’ve paraphrased it thus:

<< ...personally I feel that this should not compile. What do you guys think? Simple case in point:

    package foo is
      procedure bar;
    end foo;

    package body foo as
      procedure bar is
      begin
        begin
          null;
        end loop;
      end bar;
    end foo;

This compiles and executes without error... in my opinion, with a keyword [sic] like “loop” that should not be happening. In essence, the above is interpreted as:

    package body foo as
      procedure bar is
      begin
        begin
          null;
        end anon_block_name;
      end bar;
    end foo;

If we switch the “end loop” to “end if” (which is a similar flow control keyword [sic]) it reacts (in my opinion) correctly:

    package body foo as
      procedure bar is
      begin
        begin
          null;
        end if;
      end bar;
    end foo;

and fails to compile. What’s going on? >>

The morals of this story are:
—Unless you use terms of art correctly and with precision, you’ll never be able to reason soundly.
—Never, ever, ask anyone—and especially yourself—about the behavior of some PL/SQL code before compiling it with warnings turned on. You learn so much about your errors in thinking when you do.
—Never try to reason about a program whose whole effect is “null”. The optimizer might subvert the intention in your head because you told it “I do not care what this program does.”

Anyone who’s been to any of my conference talks will have heard me bang on about all of these points. And anyone who hasn’t been to any of my talks should.

There are two separable concerns here.

(1) “Reserved Words” vs “Keywords”

Notice that the Oracle-l questioner doesn’t use these terms of art carefully and precisely. In fact, he doesn’t mention “reserved word” at all. So he’s doomed already.

There are those who think that the design of a computer language shouldn’t reserve any words. But it’s too late for that with SQL and PL/SQL. Both have both. At least, to meet its requirement not to break or change the meaning of extant code on upgrade, Oracle never introduces new reserved words.

Start this script in SQL*Plus. I ran it in 12.2.

Reserved_Words_Keywords_and_the_Ends_of_Labeled_Blocks.sql

Notice that p1 compiles without error and draws this warning:

PLW-06010: keyword "LOOP" used as a defined name

Of course, therefore, it runs without error. Notice that p2 fails to compile, with this error:

PLS-00103: Encountered the symbol "IF" when expecting one of the following...

In other words, it found the reserved word “if” at a syntax spot where it isn’t legal.

The point, of course, is that “if” is a reserved word and “loop” is a keyword. See Appendix D, “PL/SQL Reserved Words and Keywords”, in the PL/SQL Language Reference book.

Notice that p3 compiles clean and runs without error. Which brings us to separable point #2.

(2) What you might write in the spot between “end” and the semicolon that finishes the block statement

Only procedures, functions, package specs, package bodies, and compound triggers require that the optional identifier between the final “end” and the final semicolon match the unit’s name. The final example demonstrates the intent, albeit in a contrived, but effective, way.

Please check out this enhancement request. I filed it on 31-Mar-2008.

ER 6931048 - Implement new warning when block label doesn't match “end ... ;”

Sadly, not a single customer has added their voice to this by filing a Service Request and explaining their business case. I fear that unless customers start to do this, my ER will remain open.
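The script itself isn't reproduced in this post. Here, for self-containedness, is my reconstruction (a sketch, not the script's verbatim text) of what its p1, p2, and p3 plausibly look like:

    -- A sketch of p1, p2, and p3 (my reconstruction, not the script's text).
    -- Compile with: alter session set plsql_warnings = 'enable:all';

    create or replace procedure p1 authid definer is
    begin
      begin
        null;
      end loop;  -- "loop" is a mere keyword: legal as a block's end-identifier.
    end p1;      -- Compiles; with warnings on, draws PLW-06010.
    /

    create or replace procedure p2 authid definer is
    begin
      begin
        null;
      end if;    -- "if" is a reserved word: compilation fails with PLS-00103.
    end p2;
    /

    create or replace procedure p3 authid definer is
    begin
      <<b1>>
      begin
        null;
      end b2;  -- Not checked against the label <<b1>>: compiles clean and runs.
    end p3;    -- This silent mismatch is exactly what ER 6931048 targets.
    /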


Abstracts and Slides for OpenWorld 2017 Presentations

The “hot patching” myth—or why you have no choice but to use EBR for zero-downtime patching

A myth has arisen that certain changes can be made safely with “create or replace” while the application is still available for normal, highly concurrent use. Managers seem to be especially attached to this myth. This presentation goes through a couple of examples that seem, on intellectual analysis by the uninitiated, to be safe. We then show you that naively designed tests fail to disprove the “no danger” hypothesis. Of course, these are simulated multi-user tests. But then, by tweaking some timing knobs, we bring the hidden danger out into the open. It’s all about relatively unlikely race conditions that, nevertheless, are bound, sooner or later, to occur in a busy production system. Then I’ll explain that, should Oracle Database implement a much more pessimistic locking strategy, the race condition errors could be prevented. But if we did that, you’d hate us because everything would slow down. I’ll remind you that Oracle Database, therefore, is designed to perform very well in the optimistic, common-sense scenario: that you “just say no” to doing DDLs on an application’s within-database artifacts while it’s available for normal, highly concurrent use. I’ll conclude by reminding you that EBR isn’t a luxury. Rather, you have a simple, stark choice: always take downtime for all patching; or adopt and use EBR to allow safe zero-downtime patching. HERE are the slides.

Ten rules for doing a PL/SQL performance experiment

I’m often asked to comment on the speed difference between two PL/SQL tests. Usually, the tests are presented as SQL*Plus spool files, which differ in just a couple of tiny details. The times are measured for just a single test run using SET TIMING ON, and are shown as “Elapsed: 00:00:02.36” with indefensible precision. Sometimes the times are shorter. This is no way to conduct a performance experiment! This session explains how to do your experiment properly, with reference to a real, and interesting, case study. Though the title specifies PL/SQL, the method described applies to any single-user test that runs entirely inside Oracle Database. The title specifies 10 rules to get your attention. Of course, there are actually more rules. HERE are the slides.

Guarding Your Data Behind a Hard Shell PL/SQL API—the detail

My “Why Use PL/SQL?” paper presents a high-level appeal to common sense and to the abstract principles of computer science. This session examines in practical detail how to ensure that the hard shell of a database’s PL/SQL API is impenetrable. It advocates strict adherence to the principle of least privilege by using a four-schema model (data, code implementation, API, and connect) and invoker’s rights units together with code-based access control. Scrupulous care is taken to ensure that the privileges needed for installation and patching are not available at runtime, and the approach is reinforced by secure error-handling. HERE are the slides.
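As a taster for the “Ten rules” abstract above, here is a minimal sketch of my own (not the session's actual method, and Test_To_Be_Timed is a hypothetical stand-in): run the test many times and report a robust statistic, rather than trusting one SET TIMING ON reading.

    -- A minimal sketch: repeat the test and report the best observed CPU time.
    -- Needs: set serveroutput on
    declare
      runs constant pls_integer := 11;
      t0   integer;
      this integer;
      best integer;

      -- Hypothetical stand-in for the code under test.
      procedure test_to_be_timed is
      begin
        null;
      end test_to_be_timed;
    begin
      for j in 1..runs loop
        t0 := dbms_utility.get_cpu_time;  -- centiseconds of CPU time
        test_to_be_timed();
        this := dbms_utility.get_cpu_time - t0;
        if best is null or this < best then
          best := this;
        end if;
      end loop;
      dbms_output.put_line('Best of '||runs||' runs: '||best||' cs');
    end;
    /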


NoPlsql versus ThickDB

What do we mean by the terms NoPlsql and ThickDB? We use the terms NoPlsql paradigm and Thick Database paradigm—hereinafter ThickDB—in the "Why Use PL/SQL?" paper. They're reasonably self-explanatory; but they imply just a bit more than the bare words.

Of course, NoPlsql implies that there's absolutely no PL/SQL anywhere in the picture: there are no PL/SQL units in the database, and no anonymous PL/SQL blocks are issued from client-side code. This, in turn, implies (for a normal application that does no DDL at run time) that the only database calls from client-side code are select, insert, update, and delete statements—together with commit and rollback. But we mean yet more by the term. There's a huge correlation between the avoidance of PL/SQL and the avoidance of anything but naïve single-row SQL statements.

In contrast, ThickDB implies that every database call from client-side code invokes just a single PL/SQL subprogram, and that therefore all select, insert, update, delete, commit, and rollback statements are issued from PL/SQL code. But we mean yet more by this term too. We imply that the SQL that is issued by PL/SQL code is humanly written (so you might also see merge statements), and that the relational design of the tables that these statements query and change is also the result of intelligent, explicit human design. This implies, in turn, that bulk retrieval of results, and bulk batching of changes, is implemented as a matter of course; and, more significantly, that the full power of SQL is used, when appropriate, to specify operations on potentially very large sets of rows as single statements. A sketch of this contrast follows the abstracts below.

Toon's teaser: The NoPlsql and ThickDB Approaches

Toon delivered this talk at the OakTable World event that took place at the same time as, and next door to, Oracle OpenWorld 2016. I asked him to repeat it at Oracle HQ on the Friday after OpenWorld so that we could make a good recording to post on the Oracle Learning Library YouTube channel. Here's the description: Toon Koppelaars describes an experiment to measure the work done by Oracle Database to complete a specific task using different approaches. The NoPlsql approach treats the database as no more than a persistence layer, using only naïve single-row SQL statements; it implements all business logic outside of it. The ThickDB approach treats the database as a processing engine; it uses a combination of sophisticated set-based SQL statements and PL/SQL to implement all business logic inside it. "No business logic in the database" advocates take note: the ThickDB approach gets the task done with far less database work than the NoPlsql approach.

A Real-World Comparison of the NoPlsql and Thick Database Paradigms

The NoPlsql paradigm starts with an object-oriented domain model in the middle tier. This leads naturally to treating the database as a bag of tables, so that primitive SQL statements express its API. The ThickDB paradigm starts with the design of the relational model. This leads naturally to allowing client-side code only to call PL/SQL subprograms, so that all SQL is done from PL/SQL. These two paradigms are mutually incompatible. In our first session, we show, based on real-world use cases, how the NoPlsql paradigm brings problems with correctness, security, and performance; and we note that projects that adopt the NoPlsql paradigm use most of their resources on developing plumbing code rather than code that implements actual business functionality. These problems are just the tip of the iceberg, and we explain how the NoPlsql paradigm also brings maintenance nightmares. In our second session, we turn to the ThickDB paradigm, rehearse the reasons for its superiority, and explain that it implies more than just allowing database calls to invoke only PL/SQL subprograms. We formalize a layered code classification scheme which leads to optimal understandability and maintainability of both your PL/SQL and your SQL code—which properties bring the maximum probability of correct behavior. We will convince participants that adopting the ThickDB paradigm guarantees the avoidance of the awful problems brought by the NoPlsql paradigm and establishes the database tier as a reliable central service provider in the application landscape. We will debunk the myth that "moving business logic out of the data tier and into the middle tier" improves scalability.
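To make the contrast concrete, here is a minimal sketch of my own (the Orders_API package and the order_lines table are hypothetical, not taken from the talks):

    -- NoPlsql style: client-side code loops, issuing one naive statement per row:
    --   update order_lines set unit_price = :new_price where line_id = :id
    --
    -- ThickDB style: client-side code makes one call to a PL/SQL subprogram, and
    -- a single set-based statement does the whole job inside the database.
    create or replace package orders_api authid current_user is
      procedure raise_prices(category_in in varchar2, pct_in in number);
    end orders_api;
    /
    create or replace package body orders_api is
      procedure raise_prices(category_in in varchar2, pct_in in number) is
      begin
        update order_lines ol
        set    ol.unit_price = ol.unit_price*(1 + pct_in/100)
        where  ol.category = category_in;
      end raise_prices;
    end orders_api;
    /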


Schema deployments and rollbacks in the regime of the Thick Database paradigm

This short post is a response to a comment on the "Why Use PL/SQL" post. It said "Nice read. Thanks. How do you go about addressing automating schema deployments and rollbacks."

Let's take "schema deployment" as an umbrella term for both the first-time installation of the total set of user-defined artifacts, inside an Oracle Database, that implement an application's backend, and the subsequent patching and upgrading of this set. We prefer to use the loose term "artifact", rather than "object", because it covers things like constraints, and rows in tables that act as configuration data, as well as what DBA_Objects lists.

In practical terms, "schema deployment" simply means running a sequence of SQL statements. Here I'm using the operational definition of "SQL statement" as any statement that you submit to Oracle using the Oracle Call Interface (hereinafter the OCI), or something that builds upon it. To be brief here, we'll take this to cover thin JDBC too. I explain this more carefully in my "Why Use PL/SQL" paper. Some people prefer to say that "SQL statement" covers only insert, update, delete, and select. But my operational definition includes all sorts of DDL statements, alter session statements, the call statement, the anonymous PL/SQL block, and so on.

The complete set of SQL statements that together install an application's backend is, quite simply, its source code definition. These definitions are always (in my experience) grouped into several files. Such a file is commonly called an installation script. Most shops seem to be happy just to use the old warhorse, SQL*Plus, as the script runner, and we therefore usually hear the files referred to as SQL*Plus scripts, or SQL scripts for short. (I do know of shops that use their own installation programs built on the OCI. This makes no difference to what I say here.)

In general, ordering matters. For example, you cannot create an index before the column list that defines it is valid. Usually this just means that the table that contains the one or few columns in question has already been created. In more ambitious cases, when the expression that defines the index references a PL/SQL function, then this function must already exist. Getting the order right is generally very straightforward, and this emerges naturally during development. Ideally, all the sources are held in a source control system, and its facilities allow a definition to be stored that will generate a specified set of scripts to run in a specified order. Though the deployment details vary (for example, in some cases pre-built executable programs are shipped), all important computer systems are, ultimately, so defined and so installed.

When the application's backend includes lots of PL/SQL (which architecture style I applaud), installation time can be reduced by first installing all the objects in namespace 1 (tables, views, synonyms, package specs, type specs, and so on). For brevity, I'll refer collectively to objects of all these types as specs. To minimize the time for this phase, the objects should be created in topologically sorted order. Maybe the term "topological sort" is unfamiliar. It denotes the sort that produces the installation order that leaves behind it the minimum number of invalid objects that then need to be validated by running one of the Utl_Recomp subprograms.
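As a concrete sketch (mine, not the author's actual scripts; the schema name App_Owner and the thread count are hypothetical), here is the shape of the ordering query that the next paragraph describes, together with the finishing Utl_Recomp step:

    -- A dependency-respecting install order for the namespace-1 objects,
    -- derived from the model installation. Objects that never appear as a
    -- dependent (the Name column) can simply go first, in any order.
    with edges as (
      select d.name, d.referenced_name
      from   dba_dependencies d
      where  d.owner = 'APP_OWNER'
      and    d.referenced_owner = 'APP_OWNER'
      and    d.name <> d.referenced_name)
    select e.name, max(level) as install_step
    from   edges e
    start with e.referenced_name not in (select e2.name from edges e2)
    connect by prior e.name = e.referenced_name
    group by e.name
    order by install_step;

    -- And the finishing step, after the body scripts have run in parallel:
    begin
      utl_recomp.recomp_parallel(threads => 8, schema => 'APP_OWNER');
    end;
    /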
When you have a model installation, as you necessarily must have, in a test environment, it's straightforward to compute this optimal ordering from the information exposed by the DBA_Dependencies view. You start with the objects that depend only upon already-installed objects; then you install their immediate dependents; and you continue in this fashion until you are done. You can guess (as the sketch above shows) that the query involves "connect by". The spec installation phase is run serially.

Now we come to bodies. A spec doesn't depend upon its body, so body installation can never cause invalidation. In the same way, a table doesn't depend on any indexes that it might have. This means that installation order is unimportant. The scripts that define the bodies can therefore be parallelized up to whatever degree of parallelization your kit can sustain. The scripts for creating, patching, and upgrading Oracle Database are run in this way: first the specs, serially; then the bodies, in parallel. You'll always finish up by running one of the Utl_Recomp subprograms. Of course, this supports parallelization too.

There's a general consensus that "patching" means changing an existing system to make it conform to its present specification when non-conformity has been detected in ongoing use. To put this more straightforwardly, patching means fixing bugs. In the same spirit, the consensus is that "upgrading" means changing an existing system to make it conform to a new version of its specification by introducing brand new functionality. In strict operational terms, both the patching and the upgrading of the user-created artifacts in an Oracle Database are done in the same way: by running some SQL scripts, serially, in a pre-defined order and then others, in parallel, in a random order. From now on, therefore, I'll simply use the one term "patching" as the umbrella both for "patching" in the strict sense and for "upgrading".

There's very little in what I've said so far that doesn't apply to the installation and patching of any software system. Initial installation excites far less interest than does the patching of an installed system. It's hard to define "downtime" or "unavailability" before you already have a system in use! But when patching is done in downtime, the duration of the outage is a major concern. Here's where the term "rollback" is heard. It sometimes happens that, despite all best efforts to test the patch installation scripts, a condition is met when they're run in the production database that hadn't been met in rehearsal, and the consequence is so bad that the application cannot be opened up for regular use until the problem is fixed. Because the clock is ticking, a brute-force remedy is often preferred: simply restore the database to the state it was in at the start of the downtime window, open up for use, and work out the solution back in the development shop. The result is that an outage has been experienced whose net effect is ZERO benefit.

There is a critical, and exciting, difference between the task of patching the user-defined artifacts inside an Oracle Database and the corresponding task for installations into other environments. Installation into an Oracle Database can be done with zero downtime by taking advantage of edition-based redefinition (hereinafter EBR). In general, both patching classically in downtime, and patching with zero downtime as an EBR exercise, are done the same way: by running SQL scripts in an appropriate mixture of serial and parallel.
The difference is that the SQL scripts that implement an EBR exercise are spelled differently from those that implement classic downtime patching. The key feature is that all changes that are made in an EBR exercise are made in a purely additive fashion in the privacy of the new edition, while the unchanged pre-patch artifacts continue to be used.

This has a crucial benefit for the "patch rollback" proposition. If the EBR exercise hits a problem, the online users don't know that it's happened. In most cases, the problem can be diagnosed and fixed in situ so that the EBR exercise can then be completed as intended. The only bad consequence is that the moment at which the patched system becomes available has been delayed; but no-one has suffered downtime. In an extreme case, advantage can be taken of the purely additive nature of the exercise and everything can be abandoned cleanly, leaving no trace, so that presently a new fixed-up EBR exercise can be conducted. People always like to hear this when they first learn about EBR. But I have never heard of such a drastic step being needed in practice. Rather, in situ fix-up to allow ordinary progress to completion always seems to be viable.
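For readers who haven't yet seen EBR in action, the bare mechanics look like this. This is a bare-bones sketch with a hypothetical edition name; a real exercise that changes tables also involves editioning views and cross-edition triggers.

    -- A bare-bones sketch; the edition name is hypothetical.
    create edition patch_20171001;
    alter session set edition = patch_20171001;

    -- Run the patch scripts here, in the privacy of the new edition, while the
    -- online users continue to use the unchanged pre-patch artifacts...

    -- When the exercise completes successfully, expose the new edition:
    alter database default edition = patch_20171001;

    -- Or, in the extreme case, abandon everything cleanly, leaving no trace:
    -- drop edition patch_20171001 cascade;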


2b, or No2b, that is the question

[Written by a former PL/SQL product manager] I examine a thorny question: should we say "sequel" or "ess-cue-ell"? And I realize that Rhett's parting shot to Scarlett sums up my feelings on the matter.

I decided to rewrite my little essay, prompted by various remarks made during an exchange of Tweets that followed mine to announce that I'd posted the previous version of this. Thanks especially to @DBoriented and @ChandlerDBA for pointing out that I'd neglected to do a literature survey.

Let's start by going to the horse's mouth: the Oracle Database SQL Language Reference (version 12.1). Look for the section "History of SQL". It mentions Codd's seminal paper and IBM's Structured English Query Language (SEQUEL). It then says that SEQUEL later became SQL (still pronounced "sequel"). Arguably this is a doc bug, because it should say that SQL is still pronounced "sequel" by many people. However, it's crystal clear how Uncle Larry wants us to pronounce SQL. (Of course, the doc doesn't mention the trademark war.)

Now let's answer @oraclenerd's original question by an empirical test. Searching in the SQL Language Reference for "a SQL" gets about 70 hits, finding things like "a SQL statement", "a SQL subquery", and "a SQL expression". This form of the indefinite article is consistent with the "sequel" pronunciation. The other form, "an SQL", consistent with the "ess-cue-ell" pronunciation, gets not a single hit.

For sport, let's do it using Google, using the site:docs.oracle.com qualifier. I know that the number is somewhat untrustworthy, but it does say that "a SQL" finds about 33K hits. Here's a cute example: "A SQL tuning set (STS) is a database object that...". So the abbreviation SQL is itself abbreviated! What about "an SQL"? I wish that I could say that it found zero hits. But Google claims to have found about 12K. This is the herding cats syndrome deluxe. But let's look deeper. The Application Express SQL Workshop and Utilities Guide (Release 3.2) contains "To create an SQL script in the Script Editor". But it also has "Creating a SQL Script" and "What is SQL Scripts?". How embarrassing. There's no lesson to be learned from that book. Embarrassment aside, the overall vote within all of the Oracle docs seems to be three to one in support of the "sequel" pronunciation.

Of course, you can just search the Internet for "SQL or sequel". It turns out that many people have written about this topic. (Now there's a surprise.) HERE is one that I liked, mainly for THIS graphic. Here's a little taster:

<< The standard says that ess-cue-ell is the appropriate way of speaking SQL. However, many English-speaking database professionals still use the nonstandard pronunciation sequel. Looking to the authorities: Prof. Jennifer Widom, co-author of four popular database books, pronounced SQL Language as sequel in her Stanford database course. Christopher J. Date in "A guide to the SQL Standard" (1987) also pronounced the term as sequel. Who is the winner? I guess nobody. Sequel gets the most votes, but Chamberlin says ess-cue-ell, and he gets an extra vote because he's the co-developer of SQL. >>

HERE is another. I particularly like this for its story of how the author asked Chamberlin for a ruling. Chamberlin replied to say "ess-cue-ell". But the author found a video HERE, where Chamberlin pronounces it "sequel"! And so does the MC. Time to stop the survey, I think. Readers can continue the sport at their leisure. My own preference still stands.

Such is our preference for pronounceability as a word that, with the new crop of "as a service" abbreviations, we'll choose a hybrid over saying all the letters. Thus we pronounce SaaS as "sass" because we can. And we pronounce DBaaS as "dee-bass" because at least that's easier to say than dee-bee-eh-eh-ess. So it is, of course, with the hybrid pee-ell-sequel. This tendency is no surprise. All languages look to maximize the signal-to-noise ratio. If you shout a word across a crowded room, it'll penetrate the hubbub more effectively than a string of letters.

And it's not just me. All my colleagues on the seventh floor of the Database Tower at Oracle HQ consistently say sequel and pee-ell-sequel, both as bare terms and in all compounds like view names, column names, package names, subprogram names, and subsystems of Oracle Database. If it's good enough for them, it's good enough for me.


Transforming one table to another: SQL or PL/SQL?

[From a former PL/SQL product manager] This mantra is often recited by famous speakers at conferences about Oracle Database: "If you can do it in SQL, do so; only if you can't, do it in PL/SQL." But should it be slavishly obeyed in all circumstances? As with all best practice prescriptions, the answer is "It depends".

This paper examines alternative solutions to a well-known problem: how to compute non-overlapping RowID ranges that completely cover a nominated table and that all contain, as nearly as is possible, the same number of rows. It's relatively straightforward to implement a solution that simply orders the rows in the table by RowID and then allocates them into chunks that all have either S or S+1 rows; but for candidate tables of typical size, this exact method takes unacceptably long to execute. Therefore, an approximate method is called for—approximate, that is, by being uneven in the number of rows that each chunk holds. You can find examples of such approximate solutions using Internet search. They all use the information maintained by Oracle Database and exposed in DBA_Extents rather than the table itself. However, all solutions that I have seen impose special limitations in pursuit of brevity of exposition—and they leave generalization as an exercise for the reader. It turns out that there's more to the generalization than meets the eye.

I use the vehicle of the various implementations of each kind of method to discuss issues of best practice—both SQL versus PL/SQL, and within PL/SQL. Each competing choice is implemented as a Create_Chunks() procedure in a dedicated PL/SQL package. The choice between SQL and PL/SQL is therefore made in the context of the implementation of the Create_Chunks() procedure. It would be entirely foolish to implement, outside of the database, the procedural code that is non-negotiably needed to issue the SQL statements. All solutions examined pass the same correctness tests; but there are performance differences. More significantly, there are big differences in source code size and in general clarity. My conclusion is that, for this particular example, it is better to implement the core logic of the algorithms in PL/SQL, even when a single insert... select approach is feasible. My preference is inevitably subjective—and reflects some rather subtle considerations. Your opinion might be different.

My presentation slides are no use without me presenting them. I have therefore written a whitepaper, HERE, that gives a careful and detailed account of what my presentation addresses. I also provided a zip file, HERE, of working versions of all the implementations that the paper discusses, together with the test harness.

Note added 22-May-2016

I presented this topic at the HOTSOS Symposium in March 2016. OTN's Laura Ramsey interviewed me afterwards and posted the video HERE. We talked about the backstory of the Oracle Database Community's effort that generated many interesting SQL solutions. The ones that my paper discusses came from Jonathan Lewis (@JLOracle) and Stew Ashton (@StewAshton). Stew credits Sven Weller (@SvenWOracle) for a major insight that kick-started his solutions. Jonathan seeded the crowd-sourcing work with this OTN forum posting. And Stew wrote up the account of his work in this blog post. It was a fun project. I've presented the work at quite a number of Oracle User Group conferences. And I had the privilege and pleasure of bringing the contributors to the stage to explain their work at several of the gigs: at OpenWorld 2015 and at UKOUG 2015, I was joined by Jonathan and Stew; at DOAG 2015 I was joined by Jonathan and Sven; and at BGOUG Fall 2015 I was joined by Jonathan.
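For orientation, here is a minimal sketch of my own (not one of the paper's implementations) of the exact method that the paper rejects as too slow for big tables: order the rows by RowID and let ntile() deal them into chunks that have either S or S+1 rows.

    -- The exact method as a sketch: the table name t and the chunk count 8 are
    -- hypothetical. Too slow for big tables, but it defines "correct".
    select chunk,
           min(rid) as lo_rowid,
           max(rid) as hi_rowid,
           count(*) as n_rows
    from (
      select rowid as rid, ntile(8) over (order by rowid) as chunk
      from   t)
    group by chunk
    order by chunk;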


How to write a safe result-cached PL/SQL function

Jonathan Lewis has recently posted Result Cache and Result Cache 2 on his blog. These are very carefully written accounts that show what can go wrong if you break the rules for the safe use of the PL/SQL Function Result Cache. Jonathan's analysis, and a very fruitful private follow-on discussion, showed that we'd neglected to state these safety rules clearly in our documentation. To bridge the gap until the version of the Oracle Database documentation that fixes the account is available, and to provide a rather different perspective on the whole business than Jonathan's posts did, we offer the following essay.

How to write a safe result-cached PL/SQL function

A result-cached PL/SQL function is safe if, for every input, it produces the same output that it would produce were it not marked with the result_cache keyword. This safety is guaranteed when, and only when, these conditions are met:
* all tables that the function accesses are ordinary, non-Sys-owned permanent tables in the same database as the function
* the function's result is determined only by the vector of input actuals together with the committed content, at the current SCN, of the tables that it references
* the function, when it is run, has no side-effects.
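By way of illustration, here is a minimal sketch of a function that meets all three conditions (the Countries table is hypothetical): it reads a single ordinary permanent table, its result depends only on its input together with that table's committed content, and it has no side-effects.

    -- A minimal sketch; the Countries table is hypothetical.
    create or replace function country_name(iso_code_in in varchar2)
      return varchar2
      result_cache
      authid definer
    is
      name_out varchar2(80);
    begin
      select c.name
      into   name_out
      from   countries c
      where  c.iso_code = iso_code_in;
      return name_out;
    end country_name;
    /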
