
PL/SQL and edition-based redefinition:
the perfect marriage.

Recent Posts

Welcome and overview

You can expect to see posts of small, medium, and large size here. A small post might be just a few sentences, for example, to answer a specific question. A medium post might be a short essay to give a glimpse of how a particular feature might help you or to correct a common minor misunderstanding. And a large post could be a complete article presented as a linked self-contained PDF. I intend to post only when I have something to say. I won’t, therefore, commit to any particular posting frequency. You’ll come to this blog, I assume, because you’re already interested in PL/SQL or edition-based redefinition. (I hope that you're interested in both PL/SQL and EBR. Interest in one implies interest in the other.) And I hope that what you read here will inspire you to use PL/SQL and EBR both more happily and more intelligently.

List of posts

Announcing “Why use PL/SQL—the Movie”
A sketch of my thinking on how to install a #SmartDB application back-end
Reserved words, keywords, and the ends of labeled blocks
Why does application architecture matter?
Abstracts and slides for my talks at OpenWorld 2017
#BikeB4OOW 2017—All you need to know
On issuing commit from database PL/SQL
Oren Nakdimon on #ThickDB
NoPlsql versus ThickDB
What’s in a Word? Lexical Analysis for PL/SQL and SQL
Schema deployments and rollbacks in the regime of the Thick Database paradigm
2b, or No2b, that is the question
Transforming one table to another: SQL or PL/SQL?
Why use PL/SQL?
How to write a safe result-cached PL/SQL function
A Surprising Program
A PL/SQL Inlining Primer


Announcing “Why use PL/SQL—the Movie”

I’m delighted to announce that we’ve just posted “Why use PL/SQL—the Movie” as a playlist on the Oracle Database Product Management YouTube Channel HERE. It’s been a long time in the making. But I hope that you’ll think that, as with all good things, the outcome was worth the wait. These two conference presentations are the meat of the production:

“Why Use PL/SQL” by Bryn, recorded at Kscope16 (60 minutes)
“A Real World Comparison of the NoPlsql and Smart Database Paradigms” by Toon Koppelaars, recorded at Kscope17 (90 minutes deep dive)

We’ve also added three informal conversations to give you a glimpse of the back-story that brought Toon and me to our decision to ramp up our advocacy for the proper use of the Oracle Database at the heart of classic OLTP applications. We argue, each in our own rather different ways, that the database should be used as a processing engine rather than just as a bag of tables. Advocates of the bag-of-tables approach usually refer to the database as the “persistence layer”—presumably in the hope that using this term will dress up that sow’s ear as a silk purse. But Toon and I prefer to call a spade a spade—or, for brevity, to call it the NoPlsql approach. Check out the playlist for these:

“Andrew Holdsworth, Bryn, Bikes and Scenery”
“Bryn and Cary Millsap discuss SmartDB vs NoPlsql”
“From Helsinki Declaration to SmartDB vs NoPlsql”

There are a few other pieces of light-hearted footage, including “Jenny Tsai-Smith and Bryn Make a Plan”, where I try—and succeed—to persuade Jenny to provide videography resources from her team to make our movie.

You might like, mentally, to add THIS other YouTube video to the playlist. It’s called “NoPlsql and Thick Database Approaches with Toon Koppelaars”. Because it was published in the Fall of 2016, and has already received thousands of views, we decided to leave it as a stand-alone piece. It gives the full detail of the “flame graph experiment” that was conducted and analyzed by members of Oracle’s Real World Performance Group. Toon refers to this experiment in his Kscope17 deep dive session. But the duration of that session didn’t allow more than just a summary.

A note on “Smart” vs “Thick”

The earlier pieces in the playlist use the term “Thick Database” for the approach that Toon and I advocate. Both this term and “Fat Database” have been used by various well-respected speakers and bloggers for a decade or more. Both “thick” and “fat” resonate with the visual metaphor of drawing a software stack as horizontal layers, like a stack of pancakes. When all the SQL statements that manipulate the application’s tables are issued by PL/SQL code in the database, the database pancake is thick and, correspondingly, the middle tier pancake is thin. However, we were persuaded by our Oracle colleagues to switch to “smart” because it better captures another crucial aspect of the paradigm. Not only is all the SQL issued from PL/SQL code; but also, set-based SQL and bulk-binding techniques are used to the full, and the full power of Oracle Database’s rich feature set is exploited. In other words, not only are we encapsulating the database with a hard shell PL/SQL API; but also, we’re using the database in a smart way. This stands in stark contrast to the approach that aims to be database vendor agnostic and that limits itself to just a universally available, and therefore very basic, set of features.
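
The phrase “bulk-binding techniques” deserves a tiny illustration. Here is a minimal sketch, using hypothetical table and column names (orders, order_id, status), of the pattern that the SmartDB approach uses as a matter of course: one bulk collect fetches many rows at once, and one forall statement sends many changes to SQL at once, instead of paying one context switch per row.

declare
  type ids_t is table of orders.order_id%type index by pls_integer;
  ids ids_t;
begin
  -- Fetch all the candidate rows in a single round trip to SQL
  select order_id
  bulk collect into ids
  from orders
  where status = 'NEW';

  -- Send all the changes back in a single round trip to SQL
  forall j in 1..ids.count
    update orders
    set    status   = 'QUEUED'
    where  order_id = ids(j);

  commit;
end;
/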


A sketch of my thinking on how to install a #SmartDB application back-end

I’ll use the term DB-user, in this essay, as short-hand for what the Oracle RDBMS Sys.DBA_Users view lists. And I’ll define these subclasses:

—Power-DB-user: a DB-user that has at least one Any privilege (either directly or via a role)
—DBA-DB-user: a DB-user that has (at least) the DBA role
—Ordinary-DB-user: a DB-user that has not a single Any privilege (neither directly nor via a role)
—Connect-DB-user: a DB-user that starts with just Create Session, that will never own objects, that doesn’t have a partner password-protected role (see below), and that (following application back-end installation) will have only the Execute privilege on some or all of the PL/SQL subprograms that jointly define the database’s hard shell API.

To save breath, I’ll add these definitions:

—DBA: a person who knows the password for a DBA-DB-user
—App-Admin: a person who knows the password for an Ordinary-DB-user and (see below) for its partner password-protected role—but who doesn’t know the password for a single DBA-DB-user.

Experts agree that a nefarious person who knows the password for a Power-DB-user can find a way to elevate its privileges to turn it into a DBA-DB-user—in other words, to turn her/himself into a DBA. They agree, too, that such a nefarious App-Admin cannot find a way to turn an Ordinary-DB-user into a DBA-DB-user. This leads to a clear recommendation:

—never allow the direct grant of even a single Any privilege to a DB-user
—rather, simply meet the perceived need by granting the DBA role to that DB-user
—limit the group of DBAs to a tiny few, each of whom has been scrupulously screened.

This thinking determines everything I now say. The Development Shop presents a request to the DBAs:

(1) To create a set of Ordinary-DB-users, each with its specified list of explicit non-Any privileges. (I prefer to avoid Oracle-maintained roles to make the analysis simpler.) For ordinary OLTP applications, just Create Session is sufficient for those that will own only code; and this, together with quotas on appropriate tablespaces, is sufficient for those that will own quota-consuming objects like tables and indexes.

(2) To create a set of password-protected roles in one-to-one correspondence with the Ordinary-DB-users, each with its specified list of explicit non-Any privileges (like Create Table, Create Procedure, and the like). This may include some object privileges with grant option, like Execute on the Sys.DBMS_Lock package.

(3) To create a set of empty roles (i.e. roles to which no privileges or roles are yet granted), to be used as CBAC roles. Then to grant these roles with delegate option to those Ordinary-DB-users that will own singleton packages that define the database's API.

(4) To tell the team of App-Admins their passwords. This can be fine-grained: for example, App-Admin-1 knows only the passwords for Ordinary-DB-user-1 and its partner password-protected role.

Thereafter the App-Admins can run the scripts that the Development Shop has provided, both those that do the initial fresh install and those that later patch it and upgrade it. Each script is designed to run as a single designated Ordinary-DB-user and, by construction, will be able to create, modify, and drop only objects that this Ordinary-DB-user owns and to grant object privileges on these to other DB-users and (by intention) to CBAC roles. Each script is run by an App-Admin who connects as the given Ordinary-DB-user and then enables its partner password-protected role.

Within this regime, every single PL/SQL unit will be invoker's rights. And each of the set of jacket PL/SQL singleton packages that implement the API—and only these—will be granted its partner CBAC role, to which the minimum object privileges have been granted that allow it to fulfill its purpose. Critically, engineers who install client-side code will know the passwords only for Connect-DB-users. Notice that a deliberately designed consequence of this scheme is that, at run time, the invoker's rights code executes with only object privileges—i.e. not a single system privilege.

—Bryn
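
To make the request concrete, here is a minimal sketch, run by a DBA, of provisioning one such trio: an Ordinary-DB-user that will own API code, its partner password-protected role, and an empty CBAC role granted with delegate option. All of the names and passwords are, of course, hypothetical; real scripts would parameterize them and follow the site's password rules.

-- The Ordinary-DB-user: no Any privilege, ever; it owns only code, so
-- Create Session is all it gets directly.
create user app_api identified by app_api_pwd_1;
grant create session to app_api;

-- Its partner password-protected role: it holds the privileges needed
-- only at install and patch time; the App-Admin enables it for the
-- duration of a deployment and it is never enabled at run time.
create role app_api_install identified by app_api_role_pwd_1;
grant create procedure, create synonym to app_api_install;
grant app_api_install to app_api;

-- An empty CBAC role, granted with delegate option so that app_api can
-- later grant it to the jacket packages that it owns.
create role app_api_cbac;
grant app_api_cbac to app_api with delegate option;

Later, in an installation script run by the App-Admin connected as app_api with app_api_install enabled, the CBAC grant would be made with something like "grant app_api_cbac to package app_api.customer_api" (the package name, again, is hypothetical); and the table-owning Ordinary-DB-users would grant to app_api_cbac just the object privileges that the API needs.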


Reserved words, keywords, and the ends of labeled blocks

My partner in crime @ToonKoppelaars drew my attention to a question posted to oracle-l@freelists.org by knecht.stefan@gmail.com. I’ve paraphrased it thus:

<< …personally I feel that this should not compile. What do you guys think? Simple case in point:

package foo is
  procedure bar;
end foo;

package body foo as
  procedure bar is
  begin
    begin
      null;
    end loop;
  end bar;
end foo;

This compiles and executes without error… in my opinion, with a keyword [sic] like “loop” that should not be happening. In essence, the above is interpreted as:

package body foo as
  procedure bar is
  begin
    begin
      null;
    end anon_block_name;
  end bar;
end foo;

If we switch the “end loop” to “end if” (which is a similar flow control keyword [sic]) it reacts (in my opinion) correctly:

package body foo as
  procedure bar is
  begin
    begin
      null;
    end if;
  end bar;
end foo;

and fails to compile. What’s going on? >>

The morals of this story are:

Unless you use terms of art correctly and with precision, you’ll never be able to reason soundly.

Never, ever, ask anyone—and especially yourself—about the behavior of some PL/SQL code before compiling it with warnings turned on. You learn so much about your errors in thinking when you do.

Never try to reason about a program whose whole effect is “null”. The optimizer might subvert the intention in your head because you told it “I do not care what this program does.”

Anyone who’s been to any of my conference talks will have heard me bang on about all of these points. And anyone who hasn’t been to any of my talks should.

There are two separable concerns here.

(1) “Reserved Words” vs “Keywords”

Notice that the Oracle-l questioner doesn’t use these terms of art carefully and precisely. In fact, he doesn’t mention “reserved word” at all. So he’s doomed already.

There are those who think that the design of a computer language shouldn’t reserve any words. But it’s too late for that with SQL and PL/SQL: both languages have both reserved words and keywords. At least, to meet its requirement not to break or change the meaning of extant code on upgrade, Oracle never introduces new reserved words.

Start this script in SQL*Plus. I ran it in 12.2.

Reserved_Words_Keywords_and_the_Ends_of_Labeled_Blocks.sql

Notice that p1 compiles without error and draws this warning:

PLW-06010: keyword "LOOP" used as a defined name

Of course, therefore, it runs without error. Notice that p2 fails to compile with this error:

PLS-00103: Encountered the symbol "IF" when expecting one of the following...

In other words, it found the reserved word “if” at a syntax spot where it isn’t legal.

The point, of course, is that “if” is a reserved word and “loop” is a keyword. See Appendix D, “PL/SQL Reserved Words and Keywords”, in the PL/SQL Language Reference book.

Notice that p3 compiles clean and runs without error. That’s my segue to separable point #2.

(2) What you might write in the spot between "end" and the semicolon that finishes the block statement

Only procedures, functions, package specs, package bodies, and (as I recall) compound triggers require that the text of the optional identifier between the final “end” and the final semicolon denotes the text of the block’s name. Notice how careful I’m being with the way I use my terms of art. Precision is essential. My final example shows what I mean in a contrived, but effective, way.

Please check out this enhancement request. I filed it on 31-Mar-2008.

ER 6931048 - Implement new warning when block label doesn't match “end ... ;”

Sadly, not a single customer has added their voice to this by filing a Service Request and explaining their business case. I fear that unless customers start to do this, my ER will remain open.
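
If you'd like to see the central point without downloading the script, here is a minimal standalone sketch along the lines of the questioner's example (it is not the script's actual p1 and p2, but it should draw the same warning and the same error). Note that the warnings are turned on first, as moral #2 demands.

alter session set plsql_warnings = 'enable:all';

-- "loop" is merely a keyword, so it is accepted as the optional
-- identifier after "end"; the compiler just warns with PLW-06010.
create or replace procedure p1 is
begin
  begin
    null;
  end loop;
end p1;
/
show errors

-- "if" is a reserved word, so compilation fails with PLS-00103.
create or replace procedure p2 is
begin
  begin
    null;
  end if;
end p2;
/
show errors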



Why does application architecture matter?

I speak at Oracle User Group meetings in several countries each year, and I've been doing this for several years. I attend talks that focus on various aspects of developing, and running, an application that uses an Oracle Database as its database of record. More importantly, I talk to very many developers and administrators of such applications. Some describe applications that do what's needed, and they want to talk about the best ways to make smallish specific improvements. Others refer to applications that don't do very well and mention difficulties that seem to be consequences of unfortunate high-level architecture choices.

Let's talk about this

Yes indeed: application architecture matters! And because it does, Toon Koppelaars and I will be hosting a regular web conference under the AskTOM Office Hours umbrella to talk about—and invite you all to talk about—this topic. The first session is on Tuesday 20-February-2018; and thereafter we'll be doing one on the third Tuesday in each calendar month. Sign up HERE—and, by extension, sign up to making high-level architecture choices that minimize the problems you'll have when your application is built, run, and maintained.

A teaser

I won't say anything in this post about how high-level architecture choices bear on the kinds of problems that Toon and I hear about. Rather, I'll simply list some examples in a moment. I claim that the size of all of these problems, and the extent to which ameliorating them is tractable, depends critically on the high-level choices made about the application's architecture. But first, what kind of choices do I have in mind? Here are two examples:

—Is the information model conceived of within the classical paradigm of relational thinking, and implemented directly in the RDBMS, so that the tables and their columns map directly to real phenomena in the business world that they model? Or is it modeled within the object-oriented paradigm, where a class hierarchy maps to phenomena that the user interface manipulates?

—Are SQL statements explicitly written by skilled professionals and issued from hand-written PL/SQL code? Or are they, at least in part, generated robotically by outside-of-the-database code, and issued from outside-of-the-database code?

Some example problems

1. You can't do useful impact analysis. In other words, when tables need to be modified, you have no mechanical way to discover which SQL statements need to be re-written, where they originate, and how the code that issues them needs to be changed.

2. You can't answer the question "is your application vulnerable to SQL injection attacks?" Therefore you can't conceive of how to set things up so that you can prove that your application has no SQL injection vulnerabilities.

3. You've heard about the principle of least privilege, and about the wisdom of minimizing your attack surface, but your overall environment provides you with no explicit mechanisms to implement such notions.

4. You can't write mechanical tests for business function correctness because user-interface code and data manipulation code aren't cleanly separated. The best you can do is to use a scheme to drive the user interface mechanically. But this is testing the joint effect of the UI code and the data manipulation code. And anyway, how can you test that the results that the human user sees on the browser screen are correct?

5. In the same way, you can't write mechanical tests for the performance of the actual business functions without muddying the waters with the performance characteristics of the UI code. Notice that for both #4 and #5, there's a need both for testing that simulates a single-user scenario and, critically, for testing that simulates a highly concurrent multi-user scenario. Some correctness concerns rest on proper enforcement of data rules when different users simultaneously attempt to make conflicting changes. And some performance concerns, discussed by using words like "contention", manifest only in multi-user tests.

6. When performance investigations identify a problem SQL statement, you can't find where, in your code, it is issued from and therefore what its purpose is in the context of the other SQL statements that, jointly with the problem statement, implement a higher-order business goal.

7. You can't do top-down performance analysis because the necessary low-level instrumentation (note the clock on "call into" and "return from" every subprogram, regarding a SQL statement just as one kind of subprogram) is infeasible when all the if-then-else code runs client-side.

8. Even if you can see how a SQL statement should be modified—or especially when the purpose that several jointly achieve can be better implemented by a different set of SQL statements (in the limit, by just a single statement)—you can't change the regime of statements because they are robotically generated.

9. Client-side developers can't implement new required functionality because this requires changing tables—and issuing table DDLs is regarded as a DBA task. DBAs resist all such changes for well-known reasons. You find yourself tempted to use Oracle Database as a [JSON] document store.

10. You can't respond promptly to changes in UI fashion because the UI code and the business logic code aren't cleanly separated. Rigorous modular discipline is too hard to enforce when all code is linked together and any subprogram can call any other.
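
To make problem #1 concrete: when all the SQL statements live in database PL/SQL, the data dictionary itself gives you the first cut of an impact analysis. Here is a minimal sketch, with a hypothetical owner and table name; no equivalent query is possible when the SQL is generated and issued from outside-of-the-database code.

-- Which stored units would be affected if we changed the Orders table?
select owner, type, name
from   dba_dependencies
where  referenced_owner = 'APP_DATA'
and    referenced_name  = 'ORDERS'
and    referenced_type  = 'TABLE'
order by owner, type, name;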


Abstracts and slides for my talks at OpenWorld 2017

I did three talks during the week of OpenWorld 2017. Here are the abstracts, and links to the slides, in the order that I did them.

The “hot patching” myth—or why you have no choice but to use EBR for zero-downtime patching

I did this talk on the Monday as part of the OakTable World event in the Children’s Creativity Museum, right next door to the Moscone Center. My scope is doing patching among the set of artifacts inside an Oracle Database that implement an application’s back end. A myth has arisen that certain changes can be made safely with “create or replace” while the application is still available for normal, highly concurrent use. Managers seem to be especially attached to this myth. I’ll go through a couple of examples that seem, on intellectual analysis by the uninitiated, to be safe. And I’ll show you, with naively designed tests, that the “no danger” hypothesis isn’t disproved. Of course, these are simulated multi-user tests. But then, by tweaking some timing knobs, I’ll bring the hidden danger out into the open. It’s all about relatively unlikely race conditions that, nevertheless, are bound, sooner or later, to occur in a busy production system. Then I’ll explain that, should Oracle Database implement a much more pessimistic locking strategy, the race condition errors could be prevented. But if we did that, you’d hate us because everything would slow down. I’ll remind you that Oracle Database, therefore, is designed to perform very well in the optimistic, common-sense, scenario: that you “just say no” to doing DDLs on an application’s within-database artifacts while it’s available for normal highly concurrent use. I’ll conclude by reminding you that EBR isn’t a luxury. Rather you have a simple stark choice: always take downtime for all patching; or adopt and use EBR to allow safe zero-downtime patching. HERE are the slides.

Ten rules for doing a PL/SQL performance experiment

I’m often asked to comment on the speed difference between two PL/SQL tests. Usually, the tests are presented as SQL*Plus spool files, which differ in just a couple of tiny details. The times are measured for just a single test run using SET TIMING ON, and are shown as “Elapsed: 00:00:02.36” with indefensible precision. Sometimes the times are shorter. This is no way to conduct a performance experiment! This session explains how to do your experiment properly with reference to a real, and interesting, case study. Though the title specifies PL/SQL, the method described applies to any single-user test that runs entirely inside Oracle Database. The title specifies 10 rules to get your attention. Of course, there are actually more rules. HERE are the slides.

Guarding Your Data Behind a Hard Shell PL/SQL API—the detail

My “Why Use PL/SQL?” paper posted HERE on this blog presents a high-level appeal to common sense and to the abstract principles of computer science. This session examines in practical detail how to ensure that the hard shell of a database’s PL/SQL API is impenetrable. It advocates strict adherence to the principle of least privilege by using a four-schema model (data, code implementation, API, and connect) and invokers rights units together with code-based access control. Scrupulous care is taken to ensure that the privileges needed for installation and patching are not available at runtime, and the approach is reinforced by secure error-handling. HERE are the slides. In order to prepare this talk, I implemented a working model of the scheme that I describe. I intend to write a paper that will present a proper prose version of the talk and that will act as the external documentation for the code. I will then update this post with links to the paper and the companion code.
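
As a taste of what the ten rules address, here is a minimal sketch of a fairer harness than a single set timing on run: warm up once, time several runs, and look at the spread before drawing any conclusion. The procedure candidate() stands for whatever code is under test and is, of course, hypothetical.

declare
  t0 integer;
begin
  candidate();  -- warm-up run, not timed
  for j in 1..5 loop
    t0 := dbms_utility.get_time();
    candidate();
    dbms_output.put_line('Run '||j||': '||(dbms_utility.get_time() - t0)||' cs');
  end loop;
end;
/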


#BikeB4OOW 2017—All you need to know

Please send me a Twitter DM if you intend to come. It’ll help me to have an idea of the numbers beforehand. The forecast promises perfect weather! I’ve recruited a strong, young cyclist to help with the herding. She has nothing to do with Oracle. She’ll ride at the front and hold back the racers. I’ll ride at the back and make sure no-one gets lost. We have to keep the line of riders fairly compact; else riders in the middle will get lost. She’s giving up her day for us. Please be ready to show your appreciation in the usual way.

Start and end location: Bridge Cafe (Bridge Plaza—at the south end of the Golden Gate Bridge) HERE.

Start time: 10:00, Saturday 30-September. Please try to get there by 09:45 for meet-and-greet and briefing.

Expected return time: 16:00. Interpret this as a software developer’s response to “How long will it take?”

See HERE for the route. The photos are the important navigation choices. 15.5 mi (25 km) to Tiburon—1:35 moving time. 37 mi (60 km) for the whole loop—approx 4:00 moving time. This was at the modest average moving speed of 9.6 mph (15 kmph).

Bike rental (if you don’t have a bike already): there are very many choices. Simply search HERE. Lots of people like the Sports Basement. Electric bikes HERE and HERE.

Lunch stop: Caffe Acri, Tiburon. Some riders decide to stop there for cocktails and lunch. Then they get the ferry back to San Francisco. Look for the photo I took of the schedule, on the GPS ROUTE that I uploaded.

Facebook page HERE.

Looking forward to it. Hope you all are too.


NoPlsql versus ThickDB

The Talk is dead, long live the Talk

It's just over a year since I published my "Why Use PL/SQL?" blog post and paper. At the start of the paper, I say that Toon Koppelaars has, for at least the last decade and a half, been a practical exponent of, and a vigorous advocate for, the Thick Database paradigm. (These days, Toon is a Consulting Member of Technical Staff in Oracle Corporation's Real-World Performance Group.) I've given the companion talk more than a dozen times over the last year or so, and it's now time to take this advocacy project to the next level. I'm going to support Toon in a new season of conference speaking with a two-session talk called "A Real-World Comparison of the NoPlsql and Thick Database Paradigms". Toon has already delivered a teaser entitled "The NoPlsql and Thick Database Approaches—Which One Do You Think Requires a Bigger Database Server?" It's posted on YouTube.

What do we mean by the terms NoPlsql and ThickDB?

I use the terms NoPlsql paradigm and Thick Database paradigm—hereinafter ThickDB—in my "Why Use PL/SQL?" paper. They're reasonably self-explanatory; but they imply just a bit more than the bare words.

Of course, NoPlsql implies that there's absolutely no PL/SQL anywhere in the picture: neither are there any PL/SQL units in the database; and nor are anonymous PL/SQL blocks issued from client-side code. This, in turn, implies (for a normal application that does no DDL at run-time) that the only database calls from client-side code are select, insert, update, and delete statements—together with commit and rollback. But we mean yet more by the term. There's a huge correlation between the avoidance of PL/SQL and the avoidance of anything but naïve single-row SQL statements. Toon says more about this in the YouTube video.

In contrast, ThickDB implies that every database call from client-side code invokes just a single PL/SQL subprogram, and that therefore all select, insert, update, delete, commit, and rollback statements are issued from PL/SQL code. But we mean yet more by this term too. We imply that the SQL that is issued by PL/SQL code is humanly written (so you might also see merge statements), and that the relational design of the tables that these statements query and change is also the result of intelligent, explicit human design. This implies, in turn, that bulk retrieval of results, and bulk batching of changes, is implemented as a matter of course; and, more significantly, that the full power of SQL is used, when appropriate, to specify operations on potentially very large sets of rows as single statements.

Toon's teaser: The NoPlsql and ThickDB Approaches

Toon delivered this talk in the OakTable World event that took place at the same time as, and next door to, Oracle OpenWorld 2016. I asked him to repeat it at Oracle HQ on the Friday after OpenWorld so that we could make a good recording to post on the Oracle Learning Library YouTube channel. Here's the description:

Toon Koppelaars describes an experiment to measure the work done by Oracle Database to complete a specific task using different approaches. The NoPlsql approach treats the database as no more than a persistence layer, using only naïve single-row SQL statements; it implements all business logic outside of it. The ThickDB approach treats the database as a processing engine; it uses a combination of sophisticated set-based SQL statements and PL/SQL to implement all business logic inside it. 'No business logic in the database' advocates take note: the ThickDB approach gets the task done with far less database work than the NoPlsql approach.

I'm pleased to see that, just a day after publication, the video has already received a couple of hundred views.

Update added 28-Nov-2016: the video has now been viewed almost 2,000 times.

A Real-World Comparison of the NoPlsql and Thick Database Paradigms

Toon (with a little editorial help from me) has put together a two-session talk and we shall be presenting it this year at the upcoming Oracle User Group conferences in Bulgaria, Germany, and the UK, and next year (so far) at the Hotsos Symposium. We've also submitted it for Kscope17 and we're keeping our fingers firmly crossed that the committee will say "yes, please". Here's the abstract:

The NoPlsql paradigm starts with an object oriented domain model in the middle tier. This leads naturally to treating the database as a bag of tables, so that primitive SQL statements express its API. The ThickDB paradigm starts with the design of the relational model. This leads naturally to allowing client-side code only to call PL/SQL subprograms, so that all SQL is done from PL/SQL. These two paradigms are mutually incompatible.

In our first session, we show, based on real-world use cases, how the NoPlsql paradigm brings problems with correctness, security, and performance, and we note that projects that adopt the NoPlsql paradigm use most of their resources on developing plumbing code rather than code that implements actual business functionality. These problems are just the tip of the iceberg, and we explain how the NoPlsql paradigm also brings maintenance nightmares.

In our second session, we turn to the ThickDB paradigm, rehearse the reasons for its superiority, and explain that it implies more than just allowing database calls to invoke only PL/SQL subprograms. We formalize a layered code classification scheme which leads to optimal understandability and maintainability of both your PL/SQL and your SQL code -- which properties bring the maximum probability of correct behavior. We will convince participants that adopting the ThickDB paradigm guarantees the avoidance of the awful problems brought by the NoPlsql paradigm and establishes the database tier as a reliable central service provider in the application landscape. We will debunk the myth that "moving business logic out of the data tier and into the middle tier" improves scalability.
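
A tiny sketch of the difference in style, with a hypothetical orders table: under NoPlsql, client-side code typically fetches the candidate rows and then issues one single-row update per row, each in its own database call; under ThickDB, one database call invokes a PL/SQL subprogram that does the whole job with a single, humanly written, set-based statement.

create or replace procedure close_shipped_orders is
begin
  -- One set-based statement does the work that the NoPlsql style would
  -- spread over one single-row update per order, each issued in its own
  -- database call from client-side code.
  update orders
  set    status    = 'CLOSED'
  where  status    = 'SHIPPED'
  and    ship_date <= sysdate - 30;
  commit;
end close_shipped_orders;
/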


What’s in a Word? Lexical Analysis for PL/SQL and SQL

HERE is a new essay from my colleague Charles Wetherell. This is the abstract:

A friend wrote a PL/SQL source code analyzer. He was surprised when his tool worked on return 'Y'; but failed on return'Y';. Is the blank-free version legal PL/SQL? Does his tool have a bug? The answer lies in the lexical structure of PL/SQL and SQL. This note explains lexical analysis and provides an answer to our friend’s question.

And this is how the introduction ends:

Most programmers never worry about the rules [of lexical analysis]; they write sensible programs and don’t notice the odd cases. But for folks who write tools to create, analyze, or manage PL/SQL and SQL, the details do matter. Mondegreens are to be avoided. The knowledge may also help those who set coding standards, write elegant code, or are curious about everything to do with the two languages.

You might be wondering what a mondegreen is; you'll soon find out by reading the whole of Charles's introduction. I really hope that you are curious about everything to do with PL/SQL and SQL, and that this will be enough to motivate you to read Charles's essay. I encourage you to read it for an entirely different reason: it's a shining example of how to write about a technical subject in a direct and clear way. Both Charles and I are fans of Steven Pinker's book "The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century". Pinker presents an abridged version of his thesis in a lecture given in October 2015 at the Royal Institution, London. Watch it HERE. The lecture is a nice appetizer for the book. I warmly recommend the lecture and the book to anyone who writes about any aspect of Oracle Database – and this must surely include everyone who visits this blog. After all, you must, now and then, discuss technical topics with colleagues by email, post questions to various forums, and – I'm sure – answer such questions.


Schema deployments and rollbacks in the regime of the Thick Database paradigm

I wrote this short post in response to a comment on my "Why Use PL/SQL" post. It said "Nice read. Thanks. How do you go about addressing automating schema deployments and rollbacks." So I used its wording for the title of this post. I had fun reading all the April Fools Day tweets and blog posts. THIS was my personal favorite. Despite the coincidence of posting date, what I say here is meant only seriously.

I'll take "schema deployment" as an umbrella term for both the first-time installation of the total set of user-defined artifacts, inside an Oracle Database, that implement an application's backend and the subsequent patching and upgrading of this set. I prefer to use the loose term "artifact", rather than "object", because it covers things like constraints and rows in tables that act as configuration data as well as what DBA_Objects lists. In practical terms, "schema deployment" simply means running a sequence of SQL statements. Here I'm using the operational definition of "SQL statement" as any statement that you submit to Oracle using the Oracle Call Interface (hereinafter the OCI), or something that builds upon it. To be brief here, I'll take this to cover thin JDBC too. I explain this more carefully in my "Why Use PL/SQL" paper. Some people prefer to say that "SQL statement" covers only insert, update, delete, and select. But my operational definition includes all sorts of DDL statements, alter session statements, the call statement, the anonymous PL/SQL block, and so on.

The complete set of SQL statements that together install an application's backend is, quite simply, its source code definition. These definitions are always (in my experience) grouped into several files. Such a file is commonly called an installation script. Most shops seem to be happy just to use the old warhorse, SQL*Plus, as the script runner, and we therefore usually hear the files referred to as SQL*Plus scripts, or SQL scripts for short. (I do know of shops who use their own installation programs built on the OCI. This makes no difference to what I say here.)

In general, ordering matters. For example, you cannot create an index before the column list that defines it is valid. Usually this just means that the table that contains the one or few columns in question has already been created. In more ambitious cases, when the expression that defines the index references a PL/SQL function, then this function must already exist. Getting the order right is generally very straightforward, and this emerges naturally during development. Ideally, all the sources are held in a source control system, and its facilities allow a definition to be stored that will generate a specified set of scripts to run in a specified order. Though the deployment details vary (for example, in some cases pre-built executable programs are shipped) all important computer systems are, ultimately, so defined and so installed.

When the application's backend includes lots of PL/SQL (which architecture style I applaud), installation time can be reduced by first installing all the objects in namespace 1 (tables, views, synonyms, package specs, type specs, and so on). For brevity, I'll refer collectively to objects of all these types as specs. To minimize the time for this phase, the objects should be created in topologically sorted order. Maybe the term "topological sort" is unfamiliar. It denotes the sort that produces the installation order that leaves behind it the minimum number of invalid objects that then need to be validated by running one of the Utl_Recomp subprograms. When you have a model installation, as you necessarily must have, in a test environment, it's straightforward to compute this optimal ordering from the information exposed by the DBA_Dependencies view. You start with the objects that depend only upon already-installed objects; then you install their immediate dependants; and you continue in this fashion until you are done. You can guess that the query involves "connect by". (I sketch such a query at the end of this post.) The spec installation phase is run serially.

Now we come to bodies. A spec doesn't depend upon its body, so body installation can never cause invalidation. In the same way, a table doesn't depend on any indexes that it might have. This means that installation order is unimportant. The scripts that define the bodies can therefore be parallelized up to whatever degree of parallelization your kit can sustain. The scripts for creating, patching, and upgrading Oracle Database are run in this way: first the specs, serially; then the bodies, in parallel. You'll always finish up by running one of the Utl_Recomp subprograms. Of course, this supports parallelization too.

There's a general consensus that "patching" means changing an existing system to make it conform to its present specification when non-conformity has been detected in ongoing use. To put this more straightforwardly, patching means fixing bugs. In the same spirit, the consensus is that "upgrading" means changing an existing system to make it conform to a new version of its specification by introducing brand new functionality. In strict operational terms, both the patching and the upgrading of the user-created artifacts in an Oracle Database are done in the same way: by running some SQL scripts, serially, in a pre-defined order and then others, in parallel, in a random order. From now on, therefore, I'll simply use the one term "patching" as the umbrella both for "patching" in the strict sense and for "upgrading".

There's very little in what I've said so far that doesn't apply to the installation and patching of any software system. Initial installation excites far less interest than does the patching of an installed system. It's hard to define "downtime" or "unavailability" before you already have a system in use! But when patching is done in downtime, the duration of the outage is a major concern. Here's where the term "rollback" is heard. It sometimes happens that, despite all best effort to test the patch installation scripts, a condition is met when they're run in the production database that hadn't been met in rehearsal, and the consequence is so bad that the application cannot be opened up for regular use until the problem is fixed. Because the clock is ticking, a brute-force remedy is often preferred: simply restore the database to the state it was in at the start of the downtime window, open up for use, and work out the solution back in the development shop. The result is that an outage has been experienced whose net effect is ZERO benefit.

There is a critical, and exciting, difference between the task of patching the user-defined artifacts inside an Oracle Database and the corresponding task for installations into other environments. Installation into an Oracle Database can be done with zero downtime by taking advantage of edition-based redefinition (hereinafter EBR). In general, both patching classically in downtime, and patching with zero downtime as an EBR exercise, are done the same way: by running SQL scripts in an appropriate mixture of serial and parallel. The difference is that the SQL scripts that implement an EBR exercise are spelled differently from those that implement classic downtime patching. The key feature is that all changes that are made in an EBR exercise are made in a purely additive fashion in the privacy of the new edition, while the unchanged pre-patch artifacts continue to be used.

This has a crucial benefit for the "patch rollback" proposition. If the EBR exercise hits a problem, the online users don't know that it's happened. In most cases, the problem can be diagnosed and fixed in situ so that the EBR exercise can then be completed as intended. The only bad consequence is that the moment at which the patched system becomes available has been delayed; but no-one has suffered downtime. In an extreme case, advantage can be taken of the purely additive nature of the exercise and everything can be abandoned cleanly, leaving no trace, so that presently a new fixed-up EBR exercise can be conducted. People always like to hear this when they first learn about EBR. But I have never heard of such a drastic step being needed in practice. Rather, in situ fix-up to allow ordinary progress to completion always seems to be viable.

I won't try to explain EBR in this post. I do describe it briefly in my "Why Use PL/SQL" paper. Please go to oracle.com/ebr and download my paper on the topic for the full story.
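
As promised, here is a minimal sketch of computing the spec installation order from DBA_Dependencies in the model (test) installation. It assumes a single, hypothetical owner App_Owner, considers only in-schema dependencies, and ignores other refinements (such as restricting the object types to the namespace-1 kinds); a real script would be more careful.

with in_schema_deps as (
  select d.name, d.referenced_name
  from   dba_dependencies d
  where  d.owner            = 'APP_OWNER'
  and    d.referenced_owner = 'APP_OWNER'
  and    d.name            <> d.referenced_name)
select name, max(level) + 1 as install_level
from   in_schema_deps
start with referenced_name not in (select name from in_schema_deps)
connect by nocycle prior name = referenced_name
group by name
union all
-- Objects with no in-schema prerequisites are installed first
select object_name, 1
from   dba_objects
where  owner = 'APP_OWNER'
and    object_name not in (select name from in_schema_deps)
order by install_level, name;

Objects with a larger install_level depend on objects with a smaller one, so creating the specs in ascending install_level order leaves behind the fewest invalid objects.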


2b, or No2b, that is the question

I examine a thorny question: should we say "sequel" or "ess-cue-ell"? And I realize that Rhett's parting shot to Scarlett sums up my feelings on the matter.

I decided to rewrite my little essay, prompted by various remarks made during an exchange of Tweets that followed mine to announce that I'd posted the previous version of this. Thanks especially to @DBoriented and @ChandlerDBA for pointing out that I'd neglected to do a literature survey.

Let's start by going to the horse's mouth: the Oracle Database SQL Language Reference (version 12.1). Look for the section "History of SQL". It mentions Codd's seminal paper and IBM's Structured English Query Language (SEQUEL). It then says that SEQUEL later became SQL (still pronounced "sequel"). Arguably this is a doc bug because it should say that SQL is still pronounced "sequel" by many people. However, it's crystal clear how Uncle Larry wants us to pronounce SQL. (Of course, the doc doesn't mention the trademark war.)

Now let's answer @oraclenerd's original question by an empirical test. Searching in the SQL Language Reference for "a SQL" gets about 70 hits, finding things like "a SQL statement", "a SQL subquery", and "a SQL expression". This form of the indefinite article is consistent with the "sequel" pronunciation. The other form, "an SQL", consistent with the "ess-cue-ell" pronunciation, gets not a single hit.

For sport, let's do it using Google, using the site:docs.oracle.com qualifier. I know that the number is somewhat untrustworthy, but it does say that "a SQL" finds about 33K hits. Here's a cute example: "A SQL tuning set (STS) is a database object that...". So the abbreviation SQL is itself abbreviated! What about "an SQL"? I wish that I could say that it found zero hits. But Google claims to have found about 12K. This is the herding cats syndrome deluxe. But let's look deeper. The Application Express SQL Workshop and Utilities Guide (Release 3.2) contains "To create an SQL script in the Script Editor". But it also has "Creating a SQL Script" and "What is SQL Scripts?". How embarrassing. There's no lesson to be learned from that book. Embarrassment aside, the overall vote within all of the Oracle docs seems to be three to one in support of the "sequel" pronunciation.

Of course, you can just search the Internet for "SQL or sequel". It turns out that many people have written about this topic. (Now there's a surprise.) HERE is one that I liked, mainly for THIS graphic. Here's a little taster:

<< The standard says that ess-cue-ell is the appropriate way of speaking SQL. However, many English-speaking database professionals still use the nonstandard pronunciation sequel. Looking to the authorities: Prof. Jennifer Widom, co-author of four popular database books, pronounced SQL Language as sequel in her Stanford database course. Christopher J. Date in "A guide to the SQL Standard" (1987) also pronounced the term as sequel. Who is the winner? I guess nobody. Sequel gets the most votes, but Chamberlin says ess-cue-ell, and he gets an extra vote because he's the co-developer of SQL. >>

HERE is another. I particularly like this one for its story of how the author asked Chamberlin for a ruling. Chamberlin replied to say "ess-cue-ell". But the author found a video HERE, where Chamberlin pronounces it "sequel"! And so does the MC. Time to stop the survey, I think. Readers can continue the sport at their leisure. My own preference still stands.

Such is our preference for pronounceability as a word that, with the new crop of "as a service" abbreviations, we'll choose a hybrid over saying all the letters. Thus we pronounce SaaS as sass because we can. And we pronounce DBaaS as dee-bass because at least that's easier to say than dee-bee-eh-eh-ess. So it is, of course, with the hybrid pee-ell-sequel. This tendency is no surprise. All languages look to maximize the signal-to-noise ratio. If you shout a word across a crowded room, it'll penetrate the hubbub more effectively than a string of letters. And it's not just me. All my colleagues on the seventh floor of the Database Tower at Oracle HQ consistently say sequel and pee-ell-sequel, both as bare terms and in all compounds like view names, column names, package names, subprogram names, and subsystems of Oracle Database. If it's good enough for them, it's good enough for me.


Oracle OpenWorld 2015 retrospect

To borrow a line from Margaret Mitchell's Gone With the Wind: Death, taxes and OpenWorld! There's never any convenient time for any of them.

It's a bit dull to stay indoors writing slides and whitepapers while the rest of San Francisco is out and about having fun making the most of the glorious weather that seems always to coincide with Oracle's annual spectacle. And the sheer size of the thing makes getting from place to place, even by bicycle, quite a strain. Nevertheless, I do find that the event brings some masochistic enjoyment. My personal technical highlights were these talks:

CON8704: American Express’s Road to Zero-Downtime Releases via Edition-Based Redefinition

and

CON2082: Best Practices for Interpreting PL/SQL Hierarchical Profiles for Effective Tuning

(I mentioned them in my Picks for OpenWorld 2015 post.) The slides are available from the OpenWorld site. American Express's EBR talk was a very nice sequel to the one given by Salesforce.com at OpenWorld 2014:

CON7521: Salesforce.com. Using Edition-Based Redefinition for Near-Zero-Downtime Releases

I was pleased by the response to each of my two talks. As I said from the rostrum, my slides are no use without me presenting them; but I've written whitepapers that give a careful and detailed account of what I address. The abstracts and paper downloads are in this blog:

CON8729: Why Use PL/SQL?

and

CON8055: Transforming One Table to Another: SQL or PL/SQL?

The "Transforming" paper explains the contribution made to the solutions it discusses by Jonathan Lewis, Stew Ashton, and others who had fun in this Oracle Community discussion. By a stroke of luck, both Jonathan and Stew were around and agreed to come up on stage to explain their part in the project. Thanks to them both for enlivening what might otherwise have been a rather nerdy talk. I also enjoyed "CON8416: Optimizing PL/SQL for Performance and Maintainability—a Panel Discussion".

It's all very well to talk about Oracle Database technology and to listen to others do the same; but it's nicer still to cycle around the much loved Paradise Loop with twenty or so OpenWorld attendees. This year was the second go at #BikeB4OOW. People seem to want it to become a regular fixture. It was last year's inaugural ride that enticed me into the Twittersphere, as you can tell by the cute spelling of its name. And I had fun with Twitter at this year's show. If you can bear it, have a look at my Tweets & replies and my Favorites.


Transforming one table to another: SQL or PL/SQL?

Here is the long version of the abstract of my OpenWorld 2015 talk.

<< This mantra is often recited by famous speakers at conferences about Oracle Database: “If you can do it in SQL, do so; only if you can’t, do it in PL/SQL.” But should it be slavishly obeyed in all circumstances? As with all best practice prescriptions, the answer is “It depends”. This paper examines alternative solutions to a well-known problem: how to compute non-overlapping RowID ranges that completely cover a nominated table and that all contain, as nearly as is possible, the same number of rows. It’s relatively straightforward to implement a solution that simply orders the rows in the table by RowID and then allocates them into chunks that all have either S or S+1 rows; but for candidate tables of typical size, this exact method takes unacceptably long to execute. Therefore, an approximate method is called for – approximate, that is, by being uneven in the number of rows that each chunk holds. You can find examples of such approximate solutions using Internet search. They all use the information maintained by Oracle Database and exposed in DBA_Extents rather than the table itself. However, all solutions that I have seen impose special limitations in pursuit of brevity of exposition – and they leave generalization as an exercise for the reader. It turns out that there’s more to the generalization than meets the eye. I use the vehicle of the various implementations of each kind of method to discuss issues of best practice – both SQL versus PL/SQL, and within PL/SQL. Each competing choice is implemented as a Create_Chunks() procedure in a dedicated PL/SQL package. The choice between SQL and PL/SQL is therefore made in the context of the implementation of the Create_Chunks() procedure. It would be entirely foolish to implement, outside of the database, the procedural code that is non-negotiably needed to issue the SQL statements. All solutions examined pass the same correctness tests; but there are performance differences. More significantly, there are big differences in source code size and in general clarity. My conclusion is that, for this particular example, it is better to implement the core logic of the algorithms in PL/SQL, even when a single insert... select approach is feasible. My preference is inevitably subjective – and reflects some rather subtle considerations. Your opinion might be different. >>

My presentation slides are no use without me presenting them. I have therefore written a whitepaper, HERE, that gives a careful and detailed account of what my presentation addresses. I also provided a zip file, HERE, of working versions of all the implementations that the paper discusses together with the test harness.

Note added 22-May-2016

I presented this topic at the HOTSOS Symposium in March 2016. OTN's Laura Ramsey interviewed me afterwards and posted the video HERE. We talked about the backstory of the Oracle Database Community's effort that generated many interesting SQL solutions. The ones that my paper discusses came from Jonathan Lewis ‏(@JLOracle) and Stew Ashton (@StewAshton). Stew credits Sven Weller (@SvenWOracle) for a major insight that kick-started his solutions. Jonathan seeded the crowd-sourcing work with this OTN forum posting. And Stew wrote up the account of his work in this blog post. It was a fun project. I've presented the work at quite a number of Oracle User Group conferences.
And I had the privilege and pleasure of bringing the contributors to the stage to explain their work at several of the gigs: at OpenWorld 2015 and at UKOUG 2015, I was joined by Jonathan and Stew; at DOAG 2015 I was joined by Jonathan and Sven; and at BGOUG Fall 2015 I was joined by Jonathan.
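
By the way, if all you need is a workable set of approximate RowID chunks, Oracle ships a built-in that addresses much the same problem: DBMS_Parallel_Execute. Here is a minimal sketch (the table name T and the chunk size are just illustrative). My paper's interest, of course, is in how you'd build such a thing yourself, and in the SQL-versus-PL/SQL choices that doing so forces.

begin
  dbms_parallel_execute.create_task(task_name => 'chunk_t');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'chunk_t',
    table_owner => user,
    table_name  => 'T',
    by_row      => true,
    chunk_size  => 10000);
end;
/

-- The computed non-overlapping RowID ranges
select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'chunk_t'
order by chunk_id;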


Why use PL/SQL?

Here is the abstract of a talk that I've so far given at more than a dozen Oracle User Group conferences, starting in Israel in June 2015:

“Large software systems must be built from modules. A module hides its implementation behind an interface that exposes its functionality. This is computer science’s most famous principle. For applications that use an Oracle Database, the database is, of course, one of the modules. The implementation details are the tables and the SQL statements that manipulate them. These are hidden behind a PL/SQL interface. This is the Thick Database paradigm: select, insert, update, delete, merge, commit, and rollback are issued only from database PL/SQL. Developers and end-users of applications built this way are happy with their correctness, maintainability, security, and performance. But when developers follow the NoPlsql paradigm, their applications have problems in each of these areas and end-users suffer. This session provides PL/SQL devotees with unassailable arguments to defend their beliefs against attacks from skeptics; skeptics who attend will see the light.”

My presentation slides are no use without me presenting them. I have therefore written a whitepaper that gives a careful and detailed account of what my presentation addresses. HERE it is.

Note added May-2016

I gave this talk at the DOAG conference, in November 2015. (DOAG is the German Oracle User Group.) I summarized what I said in an interview. The six-minute video is published HERE.

Note added November-2016

I just published the post "Oren Nakdimon on #ThickDB". Do please read it and then download and watch Oren's talk. It substantiates, with real-world experience, everything I say in my "Why Use PL/SQL?" whitepaper.
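
To make the picture concrete, here is a minimal sketch of the shape that the Thick Database paradigm implies, with hypothetical names: the table is an implementation detail owned by one schema; the client-facing interface is a PL/SQL package owned by another; and the schema that client-side code connects as gets Execute on the package and not a single privilege on the table underneath it.

-- Hidden implementation detail, owned by the (hypothetical) app_data schema
create table app_data.orders(
  order_id  integer       primary key,
  customer  varchar2(100) not null,
  status    varchar2(10)  not null,
  ship_date date);

-- The interface, owned by the (hypothetical) app_api schema; the body,
-- which issues the insert and the commit, is omitted for brevity
create or replace package app_api.orders_api is
  procedure place_order(customer in varchar2, order_id out integer);
end orders_api;
/

-- The schema that client-side code connects as can call the API;
-- it has no privilege at all on the table underneath it
grant execute on app_api.orders_api to app_connect;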


Bryn's Picks for OpenWorld 2015

If you're coming to OpenWorld, you know that the schedule of sessions is HERE. Here are my recommendations. No apologies for the fact that I picked some of my own gigs. Listed in schedule order.

________________________________________

9:00 a.m. Monday | Oaktable World, Creativity Museum
Mark W. Farnham
Minimizing the concurrency footprint of transactions with Logical Units of Work stored in PL/SQL
(Oaktable World is hosted at the Creativity Museum, 221 Fourth Street, about 100 yards from the entrance to Oracle OpenWorld.)

________________________________________

12:15 p.m. Monday | Moscone South—306
Dominic Giles, Connor Mcdonald, Gerald Venzl (Oracle)
CON8753: Modern Application Development with Oracle Database
Development approaches, tools, and frameworks are rapidly advancing, and the tools used to build modern database applications are evolving just as quickly. In this session, you will see a demo and learn how Oracle Database integrates with modern development environments and how you can exploit technologies such as Node.js, Python, JavaScript, APEX 5.0, REST, and Java.

________________________________________

1:30 p.m. Monday | Moscone South—306
Richard Miley, Penny Stanisavljevich (American Express), introduced by Bryn
CON8704: American Express’s Road to Zero-Downtime Releases via Edition-Based Redefinition
The internet-facing application for customers of American Express’s prepaid card products is backed by Oracle Database. Customers want to use their cards at any time, but before the company began using the Edition-Based Redefinition feature in Oracle Database, application releases required three hours of downtime early in the morning to do feature testing that might determine a rollback decision. Now customer spending is uninterrupted, and additional significant benefits were achieved by providing early feature testing before release and more-frequent upgrades. This session presents design considerations for Edition-Based Redefinition implementation and lessons learned, and concludes by describing the use of Edition-Based Redefinition to move from a single-schema to a multischema model.

________________________________________

2:45 p.m. Monday | Moscone South—302
Martin Buechi (Avaloq Evolution AG)
CON2082: Best Practices for Interpreting PL/SQL Hierarchical Profiles for Effective Tuning
The PL/SQL hierarchical profiler is the primary tool for identifying performance problems in PL/SQL programs. The profiler creates dynamic execution profiles of PL/SQL programs in any environment without compile-time preparation. However, while the creation of profiles is simple, effective interpretation requires experience. Hierarchical profiles show subprogram execution sequence, duration, count, and memory allocation. Profiles can be presented and analyzed as raw text, in SQL and HTML, and graphically. Values for individual executions, averages, and skew show algorithmic and implementation issues. This presentation shows 10 best practices for getting the most out of profiles, based on the speaker’s eight years of experience with a 20-million-line PL/SQL application.

________________________________________

4:00 p.m. Monday | Park Central—Metropolitan III
Steven Feuerstein, Andy Mendelsohn (Oracle)
CON10770: YesSQL Celebration and Oracle Database Developer Choice Awards
At our second-annual YesSQL celebration, Andy Mendelsohn, executive vice president of Database Server Technologies at Oracle, regales us with stories of Oracle’s past, present, and future. Steven Feuerstein announces the winners (chosen through popular vote) of the first-ever Oracle Database Developer Choice Awards. Attendees then have an opportunity to share a light meal with members of the development teams for SQL, PL/SQL, Oracle Application Express, Oracle REST Data Services, Oracle SQL Developer, and more.

________________________________________

11:00 a.m. Tuesday | Moscone South—306
Bryn Llewellyn (Oracle)
CON8729: Why Use PL/SQL?
Large software systems must be built from modules. A module hides its implementation behind an interface that exposes its functionality. This is computer science’s most famous principle. For applications that use an Oracle Database, the database is the persistence module. The tables and the SQL statements that manipulate them are the implementation details. The interface is expressed with PL/SQL. Developers and users of applications built this way are happy with their correctness, maintainability, security, and performance. But when developers follow the NoPlsql paradigm, their applications have problems in each of these areas and users suffer. This session provides PL/SQL devotees with unassailable arguments to defend their beliefs against attacks from skeptics. Skeptics who attend will see the light.

________________________________________

11:00 a.m. Wednesday | Moscone South—306
Bryn Llewellyn (Oracle)
CON8055: Transforming One Table to Another: SQL or PL/SQL?
You’ve heard the mantra, “If you can do it in SQL, do so; only if you can’t, do it in PL/SQL.” Really? What if you want to compute nonoverlapping RowID ranges that completely span a table? An exact solution is trivial, but it’s too slow for a table of typical size. Approximate solutions, using only SQL, are posted on the internet. But they impose limitations and leave generalization to the reader. This session examines alternative general solutions—some that use only SQL and others that combine SQL and PL/SQL. All pass the same correctness tests and all have comparable performance. Choosing one, therefore, requires some rather subtle and subjective considerations. Your choice might be different, and the speaker will welcome the debate.

________________________________________

12:15 p.m. Wednesday | Moscone South—307
Mike Hallas, Andrew Holdsworth, Robyn Sands (Oracle)
CON8770: Developing Applications for High Performance: Does One Size Fit All?
Applications are often designed and tested without consideration for data volumes, relying on infrastructure scale-out to process large volumes. In this session, learn why it is important to consider different algorithms for batch versus online, understand the impact of treating a "many rows" problem like a "small rows" problem, and see the benefits of using appropriate modern data management algorithms. This session includes traditional demos, examples, and humor.

________________________________________

1:45 p.m. Wednesday | Moscone South—306
Martin Buechi (Avaloq Evolution AG), Kim Berg Hansen (T.Hansen Gruppen A/S), Marcelle Kratochvil (Piction)
Hosted by Steven Feuerstein, Bryn Llewellyn (Oracle)
CON8416: Optimizing PL/SQL for Performance and Maintainability—a Panel Discussion
After fully leveraging SQL in your Oracle Database applications, it’s time to completely utilize PL/SQL to implement business rules, create APIs on top of your data structures, and implement key processes within the database. Usually when developers think about optimization, they focus on performance, and this session does, too. But it’s also critical to think about how to write code today, so that tomorrow and years from now, future developers will be able to adapt that code to changing user requirements and database features. Join the experts for a thought-provoking discussion and lots of Q&A.

________________________________________

4:15 p.m. Wednesday | Moscone South—309
Tim Hall (Haygrays Limited), John Clarke, Gerard Laker, Connor Mcdonald, Mohamed Zait (Oracle)
CON8415: Optimizing SQL for Performance and Maintainability—a Panel Discussion
SQL is at the heart of every enterprise running on Oracle Database, so it is critical that our application’s SQL statements are optimized both in terms of performance and maintainability. Surprised to see the mention of maintainability? You shouldn’t be. Too much attention is paid to getting things working NOW. We need to think about the future, too. Join this session’s speaker and other SQL experts for a discussion (and lots of Q&A) on how to optimize your SQL statements.

________________________________________

9:30 a.m. Thursday | Moscone South—306
Connor Mcdonald (Oracle)
CON8417: SQL-lectric
Oracle’s database cloud services solve so many concerns for organizations—keeping infrastructure up to date, ensuring the integrity of backup/recovery, elastic resource capacity, and high availability. Finally, IT professionals can focus on the true art of software development: writing quality SQL to provide data to upstream services. This session covers some of the best and latest SQL features available in Oracle Database that allow developers to supercharge their SQL skill set. Join this session to discuss such topics as temporal validity, inlining PL/SQL into your SQL, pagination options, cross-apply/lateral, pattern matching, query blocks, SQL transposition, and flashback data archive.

________________________________________

And don't forget to visit the engineers who built, and enhance, PL/SQL and EBR at "Using PL/SQL and Edition-Based Redefinition Together in Oracle Database 12c" in the Database Development area in the DEMOgrounds.


How to write a safe result-cached PL/SQL function

Jonathan Lewis has recently posted Result Cache and Result Cache 2 on his blog. These are very carefully written accounts that show what can go wrong if you break the rules for the safe use of the PL/SQL Function Result Cache. It's easy for me to say this because, as an Oracle insider, I know the rules. Jonathan's analysis, and a very fruitful private follow-on discussion that he and I had, showed me that we'd neglected to state these safety rules clearly in our documentation. I've filed a doc bug to complain about this. Our discussion prompted me to create my own tests, and as a result I filed three product bugs. Jonathan cited the doc bug and the product bugs in his blog posting. Anyone with access to My Oracle Support can read them.

To bridge the gap until the version of the Oracle Database documentation that fixes the account is available, and to provide a rather different perspective on the whole business than Jonathan's post did, I have written this essay: How to write a safe result-cached PL/SQL function. The essay cites the doc bug and the product bugs that I filed, too. My tweet to announce this post was rather cryptic: What did Leo Tolstoy have to say about the PL/SQL Function Result Cache? Read my new blog post to find out. You'll actually have to read my essay to find out.

But for those of you who are too busy to read essays, here are the rules that I hope my essay convinces you are self-evident. A result-cached PL/SQL function is safe if, for every possible input, it always produces the same output that it would produce were it not marked with the result_cache keyword. This safety is guaranteed when, and only when, these conditions are met (a minimal sketch that honors all three follows the list):

* every table that the function accesses is an ordinary, non-Sys-owned permanent table in the same database as the function
* the function's result is determined only by the vector of input actuals together with the committed content, at the current SCN, of the tables that it references
* when the function is run, it has no side-effects.
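To make those rules concrete, here's a minimal sketch of a function that obeys all three. The table Countries and the function Country_Name are invented here purely for illustration; any ordinary, non-Sys-owned permanent table in the same database would do.

-- Hypothetical table, invented purely for illustration.
create table countries(
  iso_code  varchar2(2)  primary key,
  name      varchar2(60) not null);

create or replace function country_name(p_iso_code in countries.iso_code%type)
  return countries.name%type
  authid definer
  result_cache
is
  v_name countries.name%type;
begin
  -- The result depends only on the single input actual and on the committed
  -- content of the ordinary permanent table Countries. The function reads no
  -- package-global state, calls neither Sysdate nor Sys_Context, and writes
  -- nothing; so it has no side-effects.
  select c.name
  into   v_name
  from   countries c
  where  c.iso_code = p_iso_code;

  return v_name;
end country_name;
/

Notice that nothing here lists the tables that the function depends upon: from Oracle Database 11.2 onward, the relies_on clause is deprecated and the dependencies are detected automatically. Notice, too, how easy it would be to break the rules: add a call to Sysdate, read a package global, reference a table over a database link, or write to a log table, and the function no longer qualifies.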

