Tuesday Nov 29, 2016

On issuing commit from database PL/SQL

There was something of a storm on Twitter on 12-Oct-2016. Twitter is a nice medium for, say, showing a photo to your followers or referring them to an interesting article. But it is shockingly awful for sustaining a debate. The character limit is a huge pain; and the inability to support a threaded discussion is an even greater problem. Nevertheless, I don’t have enough self-discipline to hold back when I read something with which I violently disagree—especially when it's about PL/SQL! The storm that I’m referring to concerned the advisability of issuing commit from database PL/SQL.

To make matters worse, the storm broke when I was on vacation in China. I was finally driven to write this: All this is starting to ruin my holiday. I’ll ensure [soon] that my “Why Use PL/SQL?” paper provides the required informed clarity on this.

Naturally, it took me a lot, lot longer than I wanted before I could make time to write this post. I've done so now. HERE it is.

Saturday Nov 26, 2016

Oren Nakdimon on #ThickDB

I just downloaded and watched an excellent talk entitled "Oracle Database Development at Moovit" given by Moovit's database expert, Oren Nakdimon (twitter: @DBoriented). It was delivered on 16-Nov-2016 in the Code Talk series ("Sponsored by ODTUG and IOUG, provided by Oracle Developer Advocates"). Steven Feuerstein ‏(twitter: @sfonplsql) interviews Oren about the overall architecture of the Moovit app and in particular how the database is exposed to all of the outside-of-the-database entities.

It turns out that Moovit's scheme is strict #ThickDB. Oren explains why they chose this architecture and rehearses the significant benefits that this choice has brought. To my great pleasure, Oren attributes the benefits that the Moovit app has enjoyed in the real world to what I explain (as self-evident common sense) in my Why Use PL/SQL? post.

Interestingly, they use EBR. Of course, this is easy to adopt within a #ThickDB scheme. They've averaged one zero-downtime application patching exercise every three days, for the past three years. On occasion, they've even done two such exercises in one day. This is truly agile deluxe!

Download the talk HERE (42MB).

Tuesday Nov 01, 2016

NoPlsql versus ThickDB

The Talk is dead, long live the Talk

It's just over a year since I published my "Why Use PL/SQL?" blog post and paper. At the start of the paper, I say that Toon Koppelaars has, for at least the last decade and a half, been a practical exponent of, and a vigorous advocate for, the Thick Database paradigm. (These days, Toon is a Consulting Member of Technical Staff in Oracle Corporation's Real-World Performance Group.)

I've given the companion talk more than a dozen times over the last year or so, and it's now time to take this advocacy project to the next level. I'm going to support Toon in a new season of conference speaking with a two-session talk called "A Real-World Comparison of the NoPlsql and Thick Database Paradigms". Toon has already delivered a teaser entitled "The NoPlsql and Thick Database Approaches—Which One Do You Think Requires a Bigger Database Server?" It's posted on YouTube.

What do we mean by the terms NoPlsql and ThickDB?

I use the terms NoPlsql paradigm and Thick Database paradigm—hereinafter ThickDB—in my "Why Use PL/SQL?" paper. They're reasonably self-explanatory; but they imply just a bit more than the bare words.

Of course, NoPlsql implies that there's absolutely no PL/SQL anywhere in the picture: there are no PL/SQL units in the database, and no anonymous PL/SQL blocks are issued from client-side code. This, in turn, implies (for a normal application that does no DDL at run-time) that the only database calls from client-side code are select, insert, update, and delete statements—together with commit and rollback. But we mean yet more by the term. There's a huge correlation between the avoidance of PL/SQL and the avoidance of anything but naïve single-row SQL statements. Toon says more about this in the YouTube video.
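To make the contrast concrete, here is a hedged caricature of the NoPlsql style; the table and column names are invented for illustration. The client fetches every row and then issues one single-row update per row, so the row count equals the round-trip count.

```sql
-- Invented example of the NoPlsql style: crediting interest to every account.
-- The client-side code first fetches all the rows:
select account_id, balance from accounts;

-- ...then, looping in client-side code, computes each new balance there and
-- issues one single-row update per account:
update accounts set balance = :new_balance where account_id = :account_id;

-- ...and finally commits. One network round trip per row changed.
commit;
```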

In contrast, ThickDB implies that every database call from client-side code invokes just a single PL/SQL subprogram, and that therefore all select, insert, update, delete, commit, and rollback statements are issued from PL/SQL code. But we mean yet more by this term too. We imply that the SQL that is issued by PL/SQL code is humanly written (so you might also see merge statements), and that the relational design of the tables that these statements query and change is also the result of intelligent, explicit human design. This implies, in turn, that bulk retrieval of results, and bulk batching of changes, is implemented as a matter of course; and, more significantly, that the full power of SQL is used, when appropriate, to specify operations on potentially very large sets of rows as single statements.
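Again as a hedged sketch (the package, table, and column names are invented), the ThickDB counterpart exposes a single subprogram, and the one set-based statement inside it replaces the client-side loop entirely:

```sql
-- Invented example of the ThickDB style: the client makes exactly one call,
-- Accounts_API.Credit_Interest(p_rate => 0.05), and all SQL is issued here.
create or replace package accounts_api is
  procedure credit_interest(p_rate in number);
end accounts_api;
/
create or replace package body accounts_api is
  procedure credit_interest(p_rate in number) is
  begin
    -- One humanly written set-based statement operates on the whole set of
    -- rows; no row-by-row traffic ever crosses the network.
    update accounts set balance = balance * (1 + p_rate);
    commit;
  end credit_interest;
end accounts_api;
/
```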

Toon's teaser: The NoPlsql and ThickDB Approaches

Toon delivered this talk in the OakTable World event that took place at the same time as, and next door to, Oracle OpenWorld 2016. I asked him to repeat it at Oracle HQ on the Friday after OpenWorld so that we could make a good recording to post on the Oracle Learning Library YouTube channel. Here's the description:

Toon Koppelaars describes an experiment to measure the work done by Oracle Database to complete a specific task using different approaches. The NoPlsql approach treats the database as no more than a persistence layer, using only naïve single-row SQL statements; it implements all business logic outside of it. The ThickDB approach treats the database as a processing engine; it uses a combination of sophisticated set-based SQL statements and PL/SQL to implement all business logic inside it. 'No business logic in the database' advocates take note: the ThickDB approach gets the task done with far less database work than the NoPlsql approach.

I'm pleased to see that, just a day after publication, the video has already received a couple of hundred views.

Update added 28-Nov-2016: the video has now been viewed almost 2,000 times.

A Real-World Comparison of the NoPlsql and Thick Database Paradigms

Toon (with a little editorial help from me) has put together a two-session talk and we shall be presenting it this year at the upcoming Oracle User Group conferences in Bulgaria, Germany, and the UK, and next year (so far) at the Hotsos Symposium. We've also submitted it for Kscope17 and we're keeping our fingers firmly crossed that the committee will say "yes, please". Here's the abstract:

The NoPlsql paradigm starts with an object oriented domain model in the middle tier. This leads naturally to treating the database as a bag of tables, so that primitive SQL statements express its API. The ThickDB paradigm starts with the design of the relational model. This leads naturally to allowing client-side code only to call PL/SQL subprograms, so that all SQL is done from PL/SQL. These two paradigms are mutually incompatible.

In our first session, we show, based on real-world use cases, how the NoPlsql paradigm brings problems with correctness, security, and performance, and we note that projects that adopt the NoPlsql paradigm use most of their resources on developing plumbing code rather than code that implements actual business functionality. These problems are just the tip of the iceberg and we explain how the NoPlsql paradigm also brings maintenance nightmares.

In our second session, we turn to the ThickDB paradigm, rehearse the reasons for its superiority, and explain that it implies more than just allowing database calls to invoke only PL/SQL subprograms. We formalize a layered code classification scheme that leads to optimal understandability and maintainability of both your PL/SQL and your SQL code; these properties bring the maximum probability of correct behavior. We will convince participants that adopting the ThickDB paradigm guarantees the avoidance of the awful problems brought by the NoPlsql paradigm and establishes the database tier as a reliable central service provider in the application landscape. We will debunk the myth that "moving business logic out of the data tier and into the middle tier" improves scalability.

Sunday May 01, 2016

What’s in a Word? Lexical Analysis for PL/SQL and SQL

HERE is a new essay from my colleague Charles Wetherell. This is the abstract:

A friend wrote a PL/SQL source code analyzer. He was surprised when his tool worked on return 'Y'; but failed on return'Y';. Is the blank-free version legal PL/SQL? Does his tool have a bug? The answer lies in the lexical structure of PL/SQL and SQL. This note explains lexical analysis and provides an answer to our friend’s question.

And this is how the introduction ends:

Most programmers never worry about the rules [of lexical analysis]; they write sensible programs and don’t notice the odd cases. But for folks who write tools to create, analyze, or manage PL/SQL and SQL, the details do matter. Mondegreens are to be avoided. The knowledge may also help those who set coding standards, write elegant code, or are curious about everything to do with the two languages.

You might be wondering what a mondegreen is; you'll soon find out by reading the whole of Charles's introduction. I really hope that you are curious about everything to do with PL/SQL and SQL, and that this will be enough to motivate you to read Charles's essay.

I encourage you to read it for an entirely different reason: it's a shining example of how to write about a technical subject in a direct and clear way. Both Charles and I are fans of Steven Pinker's book "The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century". Pinker presents an abridged version of his thesis in a lecture given in October 2015 at the Royal Institution, London. Watch it HERE. The lecture is a nice appetizer for the book. I warmly recommend the lecture and the book to anyone who writes about any aspect of Oracle Database – and this must surely include everyone who visits this blog. After all, you must, now and then, discuss technical topics with colleagues by email, post questions to various forums, and – I'm sure – answer such questions.

Friday Apr 01, 2016

Schema deployments and rollbacks in the regime of the Thick Database paradigm

I wrote this short post in response to a comment on my "Why Use PL/SQL" post. It said "Nice read. Thanks. How do you go about addressing automating schema deployments and rollbacks." So I used its wording for the title of this post.

I had fun reading all the April Fools Day tweets and blog posts. THIS was my personal favorite. Despite the coincidence of posting date, what I say here is meant only seriously.

I'll take "schema deployment" as an umbrella term for both the first-time installation of the total set of user-defined artifacts, inside an Oracle Database, that implement an application's backend and the subsequent patching and upgrading of this set. I prefer to use the loose term "artifact", rather than "object", because it covers things like constraints and rows in tables that act as configuration data as well as what DBA_Objects lists.

In practical terms, "schema deployment" simply means running a sequence of SQL statements. Here I'm using the operational definition of "SQL Statement" as any statement that you submit to Oracle using the Oracle Call Interface (hereinafter the OCI), or something that builds upon it. To be brief here, I'll take this to cover thin JDBC too. I explain this more carefully in my "Why Use PL/SQL" paper. Some people prefer to say that "SQL Statement" covers only insert, update, delete, and select. But my operational definition includes all sorts of DDL statements, alter session statements, the call statement, the anonymous PL/SQL block, and so on.

The complete set of SQL statements that together install an application's backend is, quite simply, its source code definition. These definitions are always (in my experience) grouped into several files. Such a file is commonly called an installation script. Most shops seem to be happy just to use the old warhorse, SQL*Plus, as the script runner, and we therefore usually hear the files referred to as SQL*Plus scripts, or SQL scripts for short. (I do know of shops who use their own installation programs built on the OCI. This makes no difference to what I say here.)

In general, ordering matters. For example, you cannot create an index before the column list that defines it is valid. Usually this just means that the table that contains the one or few columns in question has already been created. In more ambitious cases, when the expression that defines the index references a PL/SQL function, then this function must already exist. Getting the order right is generally very straightforward, and this emerges naturally during development. Ideally, all the sources are held in a source control system, and its facilities allow a definition to be stored that will generate a specified set of scripts to run in a specified order.

Though the deployment details vary (for example, in some cases pre-built executable programs are shipped) all important computer systems are, ultimately, so defined and so installed.

When the application's backend includes lots of PL/SQL (which architecture style I applaud), installation time can be reduced by first installing all the objects in namespace 1 (tables, views, synonyms, package specs, type specs, and so on). For brevity, I'll refer collectively to objects of all these types as specs. To minimize the time for this phase, the objects should be created in topologically sorted order.

Maybe the term "topological sort" is unfamiliar. It denotes the sort that produces the installation order that leaves behind it the minimum number of invalid objects that then need to be validated by running one of the Utl_Recomp subprograms. When you have a model installation, as you necessarily must have, in a test environment, it's straightforward to compute this optimal ordering from the information exposed by the DBA_Dependencies view. You start with the objects that depend only upon already-installed objects; then you install their immediate dependants; and you continue in this fashion until you are done. You can guess that the query involves "connect by".
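Here is a hedged sketch of such a query, not the exact one I use. The schema name is invented; objects that appear in no dependency edge at all (which install first), and various edge cases such as cycles across schemas, are elided.

```sql
-- Hedged sketch only: rank schema APP's objects by the depth of their
-- longest within-schema dependency chain; installing in ascending rank
-- order leaves the fewest invalid objects behind.
select name, type, max(level) as install_rank
from   dba_dependencies
where  owner = 'APP'
start  with referenced_owner <> 'APP'   -- chains rooted on outside objects
connect by nocycle
       prior name  = referenced_name
   and prior type  = referenced_type
   and prior owner = referenced_owner
group  by name, type
order  by install_rank, name;
```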

The spec installation phase is run serially. Now we come to bodies. A spec doesn't depend upon its body, so body installation can never cause invalidation. In the same way, a table doesn't depend on any indexes that it might have. This means that installation order is unimportant. The scripts that define the bodies can therefore be parallelized up to whatever degree of parallelization your kit can sustain. The scripts for creating, patching, and upgrading Oracle Database are run in this way: first the specs, serially; then the bodies, in parallel. You'll always finish up by running one of the Utl_Recomp subprograms. Of course, this supports parallelization too.
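That finishing step can be as simple as this; the schema name and the thread count are illustrative.

```sql
-- Revalidate whatever remains invalid in schema APP, using as many
-- recompilation threads as the kit can sustain (8 is illustrative).
begin
  utl_recomp.recomp_parallel(threads => 8, schema => 'APP');
end;
/
```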

There's a general consensus that "patching" means changing an existing system to make it conform to its present specification when non-conformity has been detected in ongoing use. To put this more straightforwardly, patching means fixing bugs. In the same spirit, the consensus is that "upgrading" means changing an existing system to make it conform to a new version of its specification by introducing brand new functionality. In strict operational terms, both the patching and the upgrading of the user-created artifacts in an Oracle Database are done in the same way: by running some SQL scripts, serially, in a pre-defined order and then others, in parallel, in a random order. From now on, therefore, I'll simply use the one term "patching" as the umbrella both for "patching" in the strict sense and for "upgrading".

There's very little in what I've said so far that doesn't apply to the installation and patching of any software system.

Initial installation excites far less interest than does the patching of an installed system. It's hard to define "downtime" or "unavailability" before you already have a system in use! But when patching is done in downtime, the duration of the outage is a major concern. Here's where the term "rollback" is heard. It sometimes happens that, despite all best effort to test the patch installation scripts, a condition is met when they're run in the production database that hadn't been met in rehearsal, and the consequence is so bad that the application cannot be opened up for regular use until the problem is fixed. Because the clock is ticking, a brute-force remedy is often preferred: simply restore the database to the state it was in at the start of the downtime window, open up for use, and work out the solution back in the development shop. The result is that an outage has been experienced whose net effect is ZERO benefit.

There is a critical, and exciting, difference between the task of patching the user-defined artifacts inside an Oracle Database and the corresponding task for installations into other environments. Installation into an Oracle Database can be done with zero downtime by taking advantage of edition-based redefinition (hereinafter EBR). In general, both patching classically in downtime, and patching with zero downtime as an EBR exercise, are done the same way: by running SQL scripts in an appropriate mixture of serial and parallel. The difference is that the SQL scripts that implement an EBR exercise are spelled differently from those that implement classic downtime patching. The key feature is that all changes that are made in an EBR exercise are made in a purely additive fashion in the privacy of the new edition, while the unchanged pre-patch artifacts continue to be used. This has a crucial benefit for the "patch rollback" proposition. If the EBR exercise hits a problem, the online users don't know that it's happened. In most cases, the problem can be diagnosed and fixed in situ so that the EBR exercise can then be completed as intended. The only bad consequence is that the moment at which the patched system becomes available has been delayed; but no-one has suffered downtime. In an extreme case, advantage can be taken of the purely additive nature of the exercise and everything can be abandoned cleanly, leaving no trace, so that presently a new fixed-up EBR exercise can be conducted. People always like to hear this when they first learn about EBR. But I have never heard of such a drastic step being needed in practice. Rather, in situ fix-up to allow ordinary progress to completion always seems to be viable.
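For readers who haven't yet met EBR, here is a minimal, hedged sketch of the mechanics. The edition and procedure names are invented, and important machinery (editioning views, cross-edition triggers) is elided.

```sql
-- Create the new edition. Nothing that online users run is touched by this.
create edition patch_2;

-- The patch scripts then run in the privacy of the new edition...
alter session set edition = patch_2;

-- ...where every change is purely additive: the changed code replaces the
-- old code only as seen from patch_2; pre-patch sessions are unaffected.
create or replace procedure some_proc is
begin
  null;  -- the patched implementation goes here
end;
/

-- In the extreme case, the whole exercise can be abandoned without trace:
-- drop edition patch_2 cascade;
```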

I won't try to explain EBR in this post. I do describe it briefly in my "Why Use PL/SQL" paper. Please go to oracle.com/ebr and download my paper on the topic for the full story.

Wednesday Mar 16, 2016

2b, or No2b, that is the question

I examine a thorny question: should we say "sequel" or "ess-cue-ell"? And I realize that Rhett's parting shot to Scarlett sums up my feelings on the matter.

I decided to rewrite my little essay, prompted by various remarks made during an exchange of Tweets that followed mine to announce that I'd posted the previous version of this. Thanks especially to @DBoriented and @ChandlerDBA for pointing out that I'd neglected to do a literature survey.

Let's start by going to the horse's mouth: the Oracle Database SQL Language Reference (version 12.1). Look for the section "History of SQL". It mentions Codd's seminal paper and IBM's Structured English Query Language (SEQUEL). It then says that SEQUEL later became SQL (still pronounced "sequel"). Arguably this is a doc bug because it should say that SQL is still pronounced "sequel" by many people. However, it's crystal clear how Uncle Larry wants us to pronounce SQL. (Of course, the doc doesn't mention the trademark war.)

Now let's answer @oraclenerd's original question by an empirical test. Searching in the SQL Language Reference for "a SQL" gets about 70 hits, finding things like "a SQL statement", "a SQL subquery", and "a SQL expression". This form of the indefinite article is consistent with the "sequel" pronunciation. The other form "an SQL", consistent with the "ess-cue-ell" pronunciation, gets not a single hit.

For sport, let's do it using Google, using the site:docs.oracle.com qualifier. I know that the number is somewhat untrustworthy, but it does say that "a SQL" finds about 33K hits. Here's a cute example: "A SQL tuning set (STS) is a database object that...". So the abbreviation SQL is itself abbreviated! What about "an SQL"? I wish that I could say that it found zero hits. But Google claims to have found about 12K. This is the herding cats syndrome deluxe. But let's look deeper. The Application Express SQL Workshop and Utilities Guide (Release 3.2) contains "To create an SQL script in the Script Editor". But it also has "Creating a SQL Script" and "What is SQL Scripts?". How embarrassing. There's no lesson to be learned from that book.

Embarrassment aside, the overall vote within all of the Oracle docs seems to be three to one in support of the "sequel" pronunciation.

Of course, you can just search the Internet for "SQL or sequel". It turns out that many people have written about this topic. (Now there's a surprise.) HERE is one that I liked, mainly for THIS graphic. Here's a little taster:

The standard says that ess-cue-ell is the appropriate way of speaking SQL. However, many English-speaking database professionals still use the nonstandard pronunciation sequel.

Looking to the authorities: Prof. Jennifer Widom, co-author of four popular database books, pronounced SQL Language as sequel in her Stanford database course. Christopher J. Date in "A guide to the SQL Standard" (1987) also pronounced the term as sequel.

Who is the winner?

I guess nobody. Sequel gets the most votes, but Chamberlin says ess-cue-ell, and he gets an extra vote because he's the co-developer of SQL.

HERE is another. I particularly like this for its story of how the author asked Chamberlin for a ruling. Chamberlin replied to say "ess-cue-ell". But the author found a video HERE, where Chamberlin pronounces it "sequel"! And so does the MC.

Time to stop the survey, I think. Readers can continue the sport at their leisure.

My own preference still stands. Such is our preference for pronounceability as a word that, with the new crop of "as a service" abbreviations, we'll choose a hybrid over saying all the letters. Thus we pronounce SaaS as sass because we can. And we pronounce DBaaS as dee-bass because at least that's easier to say than dee-bee-eh-eh-ess. So it is, of course, with the hybrid pee-ell-sequel. This tendency is no surprise. All languages look to maximize the signal-to-noise ratio. If you shout a word across a crowded room, it'll penetrate the hubbub more effectively than a string of letters.

And it's not just me. All my colleagues on the seventh floor of the Database Tower at Oracle HQ consistently say sequel and pee-ell-sequel both as bare terms and in all compounds like view names, column names, package names, subprogram names, and subsystems of Oracle Database. If it's good enough for them, it's good enough for me.

Thursday Nov 05, 2015

Oracle OpenWorld 2015 retrospect

To borrow a line from Margaret Mitchell's Gone With the Wind:

Death, taxes and OpenWorld! There's never any convenient time for any of them.

It's a bit dull to stay indoors writing slides and whitepapers while the rest of San Francisco is out and about having fun making the most of the glorious weather that seems always to coincide with Oracle's annual spectacle. And the sheer size of the thing makes getting from place to place, even by bicycle, quite a strain. Nevertheless, I do find that the event brings some masochistic enjoyment. My personal technical highlights were these talks:

CON8704: American Express’s Road to Zero-Downtime Releases via Edition-Based Redefinition


CON2082: Best Practices for Interpreting PL/SQL Hierarchical Profiles for Effective Tuning

(I mentioned them in my Picks for OpenWorld 2015 post.) The slides are available from the OpenWorld site. American Express's EBR talk was a very nice sequel to the one given by Salesforce.com at OpenWorld 2014:

CON7521: Salesforce.com. Using Edition-Based Redefinition for Near-Zero-Downtime Releases

I was pleased by the response to each of my two talks. As I said from the rostrum, my slides are no use without me presenting them; but I've written whitepapers that give a careful and detailed account of what I address. The abstracts and paper downloads are in this blog:

CON8729: Why Use PL/SQL?


CON8055: Transforming One Table to Another: SQL or PL/SQL?

The "Transforming" paper explains the contribution made to the solutions it discusses by Jonathan Lewis, Stew Ashton, and others who had fun in this Oracle Community discussion. By a stroke of luck, both Jonathan and Stew were around and agreed to come up on stage to explain their part in the project. Thanks to them both for enlivening what might otherwise have been a rather nerdy talk.

I also enjoyed "CON8416: Optimizing PL/SQL for Performance and Maintainability—a Panel Discussion".

It's all very well to talk about Oracle Database technology and to listen to others do the same; but it's nicer still to cycle around the much loved Paradise Loop with twenty or so OpenWorld attendees. This year was the second go at #BikeB4OOW. People seem to want it to become a regular fixture.

It was last year's inaugural ride that enticed me into the Twittersphere, as you can tell by the cute spelling of its name. And I had fun with Twitter at this year's show. If you can bear it, have a look at my Tweets & replies and my Favorites.

Wednesday Oct 28, 2015

Transforming one table to another: SQL or PL/SQL?

Here is the long version of the abstract of my OpenWorld 2015 talk.

This mantra is often recited by famous speakers at conferences about Oracle Database:

“If you can do it in SQL, do so; only if you can’t, do it in PL/SQL”

But should it be slavishly obeyed in all circumstances? As with all best practice prescriptions, the answer is “It depends”.

This paper examines alternative solutions to a well-known problem:

How to compute non-overlapping RowID ranges that completely cover a nominated table and that all contain, as nearly as is possible, the same number of rows.

It’s relatively straightforward to implement a solution that simply orders the rows in the table by RowID and then allocates them into chunks that all have either S or S+1 rows; but for candidate tables of typical size, this exact method takes unacceptably long to execute. Therefore, an approximate method is called for – approximate, that is, by being uneven in the number of rows that each chunk holds. You can find examples of such approximate solutions using Internet search. They all use the information maintained by Oracle Database and exposed in DBA_Extents rather than the table itself. However, all solutions that I have seen impose special limitations in pursuit of brevity of exposition – and they leave generalization as an exercise for the reader. It turns out that there’s more to the generalization than meets the eye.
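The implementations my paper compares are hand-written, but for orientation it's worth noting (as context, not as one of the paper's solutions) that Oracle Database ships a built-in package with the same goal; the schema and table names here are invented.

```sql
-- dbms_parallel_execute computes rowid chunks of a nominated table; like
-- the approximate methods discussed above, the chunks it produces are only
-- roughly equal in row count.
begin
  dbms_parallel_execute.create_task(task_name => 'chunk_demo');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'chunk_demo',
    table_owner => 'APP',        -- invented names
    table_name  => 'BIG_TABLE',
    by_row      => true,         -- aim for about chunk_size rows per chunk...
    chunk_size  => 10000);       -- ...rather than chunk_size blocks
end;
/
-- The computed ranges are then visible in user_parallel_execute_chunks.
```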

I use the vehicle of the various implementations of each kind of method to discuss issues of best practice – both SQL versus PL/SQL, and within PL/SQL. Each competing choice is implemented as a Create_Chunks() procedure in a dedicated PL/SQL package. The choice between SQL and PL/SQL is therefore made in the context of the implementation of the Create_Chunks() procedure. It would be entirely foolish to implement the procedural code that is non-negotiably needed to issue SQL statements outside of the database.

All solutions examined pass the same correctness tests; but there are performance differences. More significantly, there are big differences in source code size and in general clarity. My conclusion is that, for this particular example, it is better to implement the core logic of the algorithms in PL/SQL, even when a single insert... select approach is feasible. My preference is inevitably subjective – and reflects some rather subtle considerations. Your opinion might be different.

My presentation slides are no use without me presenting them. I have therefore written a whitepaper, HERE, that gives a careful and detailed account of what my presentation addresses.

I also provided a zip file, HERE, of working versions of all the implementations that the paper discusses together with the test harness.

Note added 22-May-2016

I presented this topic at the HOTSOS Symposium in March 2016. OTN's Laura Ramsey interviewed me afterwards and posted the video HERE. We talked about the backstory of the Oracle Database Community's effort that generated many interesting SQL solutions. The ones that my paper discusses came from Jonathan Lewis ‏(@JLOracle) and Stew Ashton (@StewAshton). Stew credits Sven Weller (@SvenWOracle) for a major insight that kick-started his solutions.

Jonathan seeded the crowd-sourcing work with this OTN forum posting. And Stew wrote up the account of his work in this blog post.

It was a fun project. I've presented the work at quite a number of Oracle User Group conferences. And I had the privilege and pleasure of bringing the contributors to the stage to explain their work at several of the gigs: at OpenWorld 2015 and at UKOUG 2015, I was joined by Jonathan and Stew; at DOAG 2015 I was joined by Jonathan and Sven; and at BGOUG Fall 2015 I was joined by Jonathan.

Monday Oct 26, 2015

Why use PL/SQL?

Here is the abstract of a talk that I've so far given at more than a dozen Oracle User Group conferences, starting in Israel in Jun 2015:

“Large software systems must be built from modules. A module hides its implementation behind an interface that exposes its functionality. This is computer science’s most famous principle. For applications that use an Oracle Database, the database is, of course, one of the modules. The implementation details are the tables and the SQL statements that manipulate them. These are hidden behind a PL/SQL interface. This is the Thick Database paradigm: select, insert, update, delete, merge, commit, and rollback are issued only from database PL/SQL. Developers and end-users of applications built this way are happy with their correctness, maintainability, security, and performance. But when developers follow the NoPlsql paradigm, their applications have problems in each of these areas and end-users suffer. This session provides PL/SQL devotees with unassailable arguments to defend their beliefs against attacks from skeptics; skeptics who attend will see the light.”

My presentation slides are no use without me presenting them. I have therefore written a whitepaper that gives a careful and detailed account of what my presentation addresses. HERE it is.

Note added May-2016

I gave this talk at the DOAG conference, in November 2015. (DOAG is the German Oracle User Group.) I summarized what I said in an interview. The six-minute video is published HERE.

Note added November-2016

I just published the post "Oren Nakdimon on #ThickDB". Do please read it and then download and watch Oren's talk. It substantiates, with real-world experience, everything I say in my "Why Use PL/SQL?" whitepaper.

Friday Oct 23, 2015

Bryn's Picks for OpenWorld 2015

If you're coming to OpenWorld, you know that the schedule of sessions is HERE. Here are my recommendations. No apologies for the fact that I picked some of my own gigs. Listed in schedule order.

9:00 am Monday | Oaktable World, Creativity Museum
Mark W. Farnham
Minimizing the concurrency footprint of transactions with Logical Units of Work stored in PL/SQL

(Oaktable World is hosted at the Creativity Museum, 221 Fourth Street, about 100 yards from the entrance from Oracle Open World.)

12:15 p.m. Monday | Moscone South—306
Dominic Giles, Connor Mcdonald, Gerald Venzl (Oracle)
Modern Application Development with Oracle Database

Development approaches, tools, and frameworks are rapidly advancing, and the tools used to build modern database applications are evolving just as quickly. In this session, you will see a demo and learn how Oracle Database integrates with modern development environments and how you can exploit technologies such as Node.js, Python, JavaScript, APEX 5.0, REST, and Java.

1:30 p.m. Monday | Moscone South—306
Richard Miley, Penny Stanisavljevich (American Express)
introduced by Bryn
American Express’s Road to Zero-Downtime Releases via Edition-Based Redefinition

The internet-facing application for customers of American Express’s prepaid card products is backed by Oracle Database. Customers want to use their cards at any time, but before the company began using the Edition-Based Redefinition feature in Oracle Database, application releases required three hours of downtime early in the morning to do feature testing that might determine a rollback decision. Now customer spending is uninterrupted, and additional significant benefits were achieved by providing early feature testing before release and more-frequent upgrades. This session presents design considerations for Edition-Based Redefinition implementation and lessons learned, and concludes by describing the use of Edition-Based Redefinition to move from a single-schema to a multischema model.

2:45 p.m. Monday | Moscone South—302
Martin Buechi (Avaloq Evolution AG)
Best Practices for Interpreting PL/SQL Hierarchical Profiles for Effective Tuning

The PL/SQL hierarchical profiler is the primary tool for identifying performance problems in PL/SQL programs. The profiler creates dynamic execution profiles of PL/SQL programs in any environment without compile-time preparation. However, while the creation of profiles is simple, the effective interpretation requires experience. Hierarchical profiles show subprogram execution sequence, duration, count, and memory allocation. Profiles can be presented and analyzed as raw text, in SQL and HTML, and graphically. Values for individual executions, averages, and skew show algorithmic and implementation issues. This presentation shows 10 best practices for getting the most out of profiles based on the speaker’s eight years of experience with a 20-million–line PL/SQL application.

4:00 p.m. Monday | Park Central—Metropolitan III
Steven Feuerstein, Andy Mendelsohn (Oracle)
YesSQL Celebration and Oracle Database Developer Choice Awards

At our second-annual YesSQL celebration, Andy Mendelsohn, executive vice president of Database Server Technologies at Oracle, regales us with stories of Oracle’s past, present, and future. Steven Feuerstein announces the winners (chosen through popular vote) of the first-ever Oracle Database Developer Choice Awards. Attendees then have an opportunity to share a light meal with members of the development teams for SQL, PL/SQL, Oracle Application Express, Oracle REST Data Services, Oracle SQL Developer, and more.

11:00 a.m. Tuesday | Moscone South—306
Bryn Llewellyn (Oracle)
Why Use PL/SQL?

Large software systems must be built from modules. A module hides its implementation behind an interface that exposes its functionality. This is computer science’s most famous principle. For applications that use an Oracle Database, the database is the persistence module. The tables and the SQL statements that manipulate them are the implementation details. The interface is expressed with PL/SQL. Developers and users of applications built this way are happy with their correctness, maintainability, security, and performance. But when developers follow the NoPlsql paradigm, their applications have problems in each of these areas and users suffer. This session provides PL/SQL devotees with unassailable arguments to defend their beliefs against attacks from skeptics. Skeptics who attend will see the light.

11:00 a.m. Wednesday | Moscone South—306
Bryn Llewellyn (Oracle)
Transforming One Table to Another: SQL or PL/SQL?

You’ve heard the mantra, “If you can do it in SQL, do so; only if you can’t, do it in PL/SQL.” Really? What if you want to compute nonoverlapping RowID ranges that completely span a table? An exact solution is trivial, but it’s too slow for a table of typical size. Approximate solutions, using only SQL, are posted on the internet. But they impose limitations, and leave generalization to the reader. This session examines alternative general solutions—some that use only SQL and others that combine SQL and PL/SQL. All pass the same correctness tests and all have comparable performance. Choosing one, therefore, requires some rather subtle and subjective considerations. Your choice might be different, and the speaker will welcome the debate.

12:15 p.m. Wednesday | Moscone South—307
Mike Hallas, Andrew Holdsworth, Robyn Sands (Oracle)
Developing Applications for High Performance: Does One Size Fit All?

Applications are often designed and tested without consideration for data volumes, relying on infrastructure scale-out to process large volumes. In this session, learn why it is important to consider different algorithms for batch versus online, understand the impact of treating a "many rows" problem like a "small rows" problem, and see the benefits of using appropriate modern data management algorithms. This session includes traditional demos, examples, and humor.

1:45 p.m. Wednesday | Moscone South—306
Martin Buechi (Avaloq Evolution AG), Kim Berg Hansen (T.Hansen Gruppen A/S), Marcelle Kratochvil (Piction)
Hosted by Steven Feuerstein, Bryn Llewellyn (Oracle)
Optimizing PL/SQL for Performance and Maintainability—a Panel Discussion

After fully leveraging SQL in your Oracle Database applications, it’s time to completely utilize PL/SQL to implement business rules, create APIs on top of your data structures, and implement key processes within the database. Usually when developers think about optimization, they focus on performance, and this session does, too. But it’s also critical to think about how to write code today, so that tomorrow and years from now, future developers will be able to adapt that code to changing user requirements and database features. Join the experts for a thought-provoking discussion and lots of Q&A.

4:15 p.m. Wednesday | Moscone South—309
Tim Hall (Haygrays Limited)
John Clarke, Gerard Laker, Connor McDonald, Mohamed Zait (Oracle)
Optimizing SQL for Performance and Maintainability—a Panel Discussion

SQL is at the heart of every enterprise running on Oracle Database, so it is critical that our application's SQL statements are optimized both for performance and for maintainability. Surprised to see the mention of maintainability? You shouldn't be. Too much attention is paid to getting things working NOW. We need to think about the future, too. Join this session's speaker and other SQL experts for a discussion (and lots of Q&A) on how to optimize your SQL statements.

9:30 a.m. Thursday | Moscone South—306
Connor McDonald (Oracle)

Oracle’s database cloud services solve so many concerns for organizations—keeping infrastructure up to date, ensuring the integrity of backup/recovery, elastic resource capacity, and high availability. Finally, IT professionals can focus on the true art of software development: writing quality SQL to provide data to upstream services. This session covers some of the best and latest SQL features available in Oracle Database that allow developers to supercharge their SQL skill set. Join this session to discuss such topics as temporal validity, inlining PL/SQL into your SQL, pagination options, cross-apply/lateral, pattern matching, query blocks, SQL transposition, and flashback data archive.

And don't forget to visit the engineers who built, and enhance, PL/SQL and EBR at "Using PL/SQL and Edition-Based Redefinition Together in Oracle Database 12c" in the Database Development area in the DEMOgrounds.

Saturday Oct 10, 2015

How to write a safe result-cached PL/SQL function

Jonathan Lewis has recently posted Result Cache and Result Cache 2 on his blog.

These are very carefully written accounts that show what can go wrong if you break the rules for the safe use of the PL/SQL Function Result Cache. It's easy for me to say this, because -- as an Oracle insider -- I know the rules. Jonathan's analysis, and a very fruitful private follow-on discussion that he and I had, showed me that we'd neglected to state these safety rules clearly in our documentation. I've filed a doc bug to complain about this. Our discussion prompted me to create my own tests, and as a result, I filed three product bugs. Jonathan cited the doc bug and the product bugs in his blog posting. Anyone with access to My Oracle Support can read them.

To bridge the gap until the version of the Oracle Database documentation that corrects this account is available, and to provide a rather different perspective on the whole business than Jonathan's posts did, I have written this essay:

How to write a safe result-cached PL/SQL function

The essay cites the doc bug and the product bugs that I filed too.

My tweet to announce this post was rather cryptic: What did Leo Tolstoy have to say about the PL/SQL Function Result Cache? Read my new blog post to find out. You'll actually have to read my essay to find out. But for those of you who are too busy to read essays, here are the rules that I hope my essay convinces you are self-evident.

A result-cached PL/SQL function is safe if, for any input, it always produces the same output that it would produce were it not marked with the result_cache keyword. This safety is guaranteed when, and only when, these conditions are met:

* all tables that the function accesses are ordinary, non-Sys-owned permanent tables in the same database as the function

* the function’s result is determined only by the vector of input actuals together with the committed content, at the current SCN, of the tables that it references

* when the function is run, it has no side-effects.
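A minimal sketch of a function that meets all three conditions might look like this. The rates table is invented for illustration; what matters is that it is an ordinary, non-Sys-owned permanent table in the same database, that the result depends only on the input actual and the table's committed content, and that the function has no side-effects:

```sql
-- Hypothetical example of a safe result-cached function.
create or replace function exchange_rate(p_currency in varchar2)
  return number
  result_cache
  authid definer
is
  l_rate number;
begin
  select r.rate
  into   l_rate
  from   rates r  -- an ordinary, non-Sys-owned permanent table
  where  r.currency = p_currency;
  return l_rate;
end exchange_rate;
/
```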

Monday Sep 28, 2015

A Surprising Program

A correspondent recently sent the PL/SQL team a small code example and reported surprise at the program's behavior. Once the example was stripped of superfluities, the behavior was easily explained by a simple rule that every PL/SQL programmer should know. The rule is:

When a call terminates with an unhandled exception, the value of an actual argument associated with an out formal parameter becomes undefined.

This note shows the example code and provides a detailed explanation. I hope you will enjoy reading it.
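The rule can be demonstrated with a small hypothetical example (the names p and v are invented for illustration). After p() terminates with an unhandled exception, code must not rely on any assignment that p made to its out formal before the exception was raised:

```sql
-- The out formal is assigned before the exception is raised; even so,
-- after the unhandled exception propagates, the actual's value is undefined.
create or replace procedure p(x out number) is
begin
  x := 42;
  raise program_error;  -- not handled inside p
end p;
/
declare
  v number := 0;
begin
  p(v);
exception
  when program_error then
    -- v might still be 0, might be null, might be 42: it is undefined.
    dbms_output.put_line('the value of v is undefined here');
end;
/
```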

Monday Sep 14, 2015

A PL/SQL Inlining Primer

Do look at this primer on PL/SQL inlining. It explains how inlining works and gives guidance for its use in practical programs.

It was written by my colleague Charles Wetherell. Charles was the driving force behind the project that delivered the brand-new PL/SQL optimizing compiler, way back in Oracle Database version 10.1. This was a daring exercise in organ replacement. The operation was an unqualified success, and the patient continues to thrive. With modern optimizing technology in place, the essential foundations are laid for other improvements. Inlining is just such an example.

We've seen various questions about the knobs that fine-tune this feature. Charles's primer answers these within the context of a crystal clear account of how it all works. The main message is that inlining is a safe and effective optimization that almost always improves application performance and has no important disadvantages. The basic recommendation, therefore, is that all units should be compiled at optimization level 3 -- that is, with inlining on -- and that the fine-tuning knobs should be used only sparingly. You must read it. I promise that you'll enjoy doing so.
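For readers who want to see the knobs before reading the primer, here is a minimal hypothetical sketch. Note that inlining applies to subprograms within the same program unit, so double_it is declared locally; pragma inline is the fine-tuning knob, used here only to show its placement immediately before the statement it affects:

```sql
-- Compile at optimization level 3 so that inlining is on.
alter session set plsql_optimize_level = 3;

create or replace procedure demo authid definer is
  l number;
  function double_it(n in number) return number is
  begin
    return 2 * n;
  end double_it;
begin
  pragma inline(double_it, 'YES');  -- request inlining of the call below
  l := double_it(21);
  dbms_output.put_line(l);
end demo;
/
```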

Welcome to the PL/SQL and EBR Blog


These blog entries are typed up by Bryn Llewellyn, Distinguished Product Manager in the Oracle Database Division, Oracle HQ, responsible for PL/SQL and EBR. But the thinking usually comes from the engineers who built, and continue to enhance, these capabilities.

Twitter: @BrynLite

