Should commits be performed in code that is outside the database?
This topic was explored in a "Twitter storm" in October 2016.
Follow this link to a detailed exploration of the topic.
Most of this makes perfect sense to me, but I'm a bit concerned about the "select from a table function" workaround to invoke a function which also issues a commit. I know this works, as we have tried it before. Are there any known (performance or other) implications of this approach as opposed to doing a direct procedure call (or anonymous block)?
I fully agree and have zero comments to make on this. I am curious to see if anyone else will, or dares ;-)
Thanks, Toon. I'm pleased to see that you and I agree. But, then—as co-champions of #ThickDB—we'd better!
Nice to hear from you Marcus. Not sure I get you. The "select from a table function" workaround is exactly that—a second best choice. Of course this approach:
call My_XAPI(....) into :r
or, if you prefer, this:
begin :r := My_XAPI(....); end;
is hugely to be preferred. But I describe the workarounds for the case that some stupidity in the framework that got chosen simply makes these ideal choices impossible. In that case, using the workaround is your only way to follow the ThickDB paradigm. I'm thinking here of its critical necessity for the robust implementation of data-changing transactions that affect two or more tables. Like I said, the "wasted" extra commit that the framework will probably insist on doing adds just a trivial cost. And so does one level of jacketing. After all, I'm proposing a pass-through jacket anyway, for the reasons I explain in my "Why use PL/SQL?" paper.
The bad "implications" are simply that you no longer say what you mean in an explicit way. The word "select" doesn't exactly imply a side effect of changing data in possibly several tables, and then committing. Nor, for that matter, does the word "insert" imply this general outcome.
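For concreteness, here is a sketch of the "select from a table function" workaround under discussion. All names are hypothetical (my_xapi stands for the real data-changing XAPI function; call_my_xapi is the pass-through jacket). Note that a function invoked from a SELECT may not perform DML in the invoking transaction, which is why such a jacket is forced to use an autonomous transaction:

```sql
-- Hypothetical sketch, not a recommendation: a jacket that lets a
-- SELECT-only framework invoke a data-changing XAPI function.
create or replace function call_my_xapi(p_arg in varchar2)
  return sys.odcivarchar2list pipelined
is
  pragma autonomous_transaction;  -- DML is disallowed in a function
                                  -- called from SQL without this
  l_status varchar2(100);
begin
  l_status := my_xapi(p_arg);     -- my_xapi changes data and commits
  pipe row (l_status);
  return;
end call_my_xapi;
```

The framework then issues something like: select column_value from table(call_my_xapi('...')); which is exactly the indirection being criticized here: the word "select" gives no hint that data is being changed and committed.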
Let me start by noting that writing “a commit statement in *your* PL/SQL package is a bug” doesn’t mean I’m opposing commit from database PL/SQL. Of course PL/SQL has to commit. I’m just saying that *you* shouldn’t be the one writing that commit.
Also, let me state clearly that my statement was worded too strongly. That was deliberate: in my world the exceptions are rare, and Twitter's 140-character constraint leaves little room for nuance. Never say always. There are two rare exceptions I can think of, besides the already mentioned autonomous transaction. Probably others can think of more rare exceptions.
1) In case you have a batch transaction that hits undo limits and you need to manually cut the one large transaction into multiple smaller transactions.
2) In case there is no calling framework that does transaction handling the way you want it: commit when the PL/SQL is successfully executed, and log/display (rollback is automatic) when an error pops up. In this case, I really like your section 3, in which you clearly separate the code layer from the interface layer. I would even go one step further for better security: place the “interface-defining jacket subprograms” in their own database schema, and enforce with privileges that the code layer cannot call the interface layer, only the other way round.
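Exception 1 above can be sketched as a loop that cuts one undo-busting transaction into smaller ones by committing every N rows. This is only an illustration; the table and procedure names are hypothetical:

```sql
-- Hypothetical sketch of exception 1: a batch job too large for one
-- transaction, committed in chunks of 10,000 rows.
declare
  l_count pls_integer := 0;
begin
  for r in (select id from staging_rows) loop
    process_one_row(r.id);          -- hypothetical worker procedure
    l_count := l_count + 1;
    if mod(l_count, 10000) = 0 then
      commit;                       -- cut the big transaction here
    end if;
  end loop;
  commit;                           -- finish the final partial chunk
end;
```

Note that this pattern trades all-or-nothing atomicity for undo economy, so the job must be restartable from wherever the last commit left it.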
The database systems I’m working on, typically use APEX (used to be Forms) as their front end, and use a scheduler for back end processing. In those environments, point 2 is an exception. APEX, Forms, dbms_scheduler, and 3rd party schedulers are all frameworks/environments that handle transactions and errors fine (mostly) and they do so inside PL/SQL. So there is no need to place extra commits yourself in the code. In fact, when you do add commits yourself, you’ll experience the mentioned disadvantages of your program units not being reusable and hard to test. Your context is different and this is where we differ. You mention “This paradigm is non-negotiably ruled out in today’s stateless paradigm supported by connection pooling”. In that context I fully agree you should have an interface layer (XAPI) with your own commits, as you described in section 3. And in that case, my point 2 is not a rare exception anymore, but a rule. However, in the type of systems I work with, the number of commits found in version control in your PL/SQL packages should probably be exactly one: the one for logging using an autonomous transaction.
Reading your section 3, I think we agree that the code below, in the code layer, is bad practice:
procedure p1 is
begin
  lots and lots of code;
  commit;
end p1;
It’s better to remove the commit from procedure p1, and have a second procedure p2 in the interface layer, in case you need to call the procedure from outside the database. You can skip procedure p2 if it is called from the scheduler for example.
procedure p2 is
begin
  p1;
  commit;
[exception
  when others then
    log and raise;]
end;
To wrap up, I agree with everything in your paper - also and especially your stance regarding TAPIs - AND I think we can have a code layer that is free of commits to allow easy reuse and testing.
Twitter indeed seems an awful medium for this kind of discussion. On the other hand, if it wasn’t for Twitter, you wouldn’t have written this thoughtful essay. Thank you very much for making that effort.
Thanks for this, Rob. I like what you say at the end: "I agree with everything in your [On issuing commit from database PL/SQL] paper." However, the detail of your comment implies that, in fact, you disagree with me!
I argue for a strict definition of ThickDB—at least as far as business functions that change data are concerned. This rests on a strict separation between the database module and the various modules that are its clients. I say this in my "Why Use PL/SQL?" paper:
<<The completeness and correctness criteria are prescribed by the specification of the business functions, and so the database module cannot be responsible for meeting its requirement without the authority brought by owning the entire implementation of the business function logic together with the persistence and retrieval of the data.>>
And I stand by it. The authority I refer to depends entirely on owning the whole transaction. This is what XAPI means, and the XAPI must therefore own the commit (or rollback).
One example of a client of the database module is a "calling framework", as you use the term. And you argue that, taking APEX as an example, the framework must own the ultimate responsibility for the integrity of the business transaction. This, then, is where we disagree. Sure, APEX happens to run inside the database. But this doesn't grant it the right to own "commit".
Finally, it seems that your mention of "commit" in connection with testing and re-use is a red herring. Testing philosophy demands that micromodules are tested individually and that macromodules are also tested. Of course an XAPI subprogram has a modular decomposition, so the individual subprograms it uses (which never commit) need to be tested. But so must the XAPIs themselves be tested. And this means testing that the right stuff really is committed. Of course this implies a larger regime that sets the data to a known starting point for each XAPI test. This is simply a non-negotiable ingredient of the testing process. Similarly with re-use. The subprograms that implement an XAPI are candidates for re-use. But the XAPI itself cannot be reused within the database module. Of course, an XAPI may be (re)used by various different clients of the database module.
I thought your story left some room for a code layer free of commits that is allowed to be called from APEX/scheduler. But apparently I misunderstood. Let me please try to rephrase using my p1/p2 example above.
Assuming procedure p1 contains lots of code without a commit in the code layer, and procedure p2 just contains a call to p1 including transaction and error handling in the XAPI layer. Then, even though you know APEX or the scheduler handles transactions and errors just fine, you’d always call p2 from APEX or from the scheduler, instead of p1. Because business function atomicity dictates that there must be a single program unit that does all DML and transaction handling together. Leaving the commit to APEX or the scheduler would violate this business function atomicity.
In that case we do disagree indeed. Having an interface layer adhering to the business function atomicity principle is fine in my opinion when called from outside the database, because it relieves clients of the burden of having to handle the transaction. And you have made it impossible for them to do it the wrong way. Note that exposing to the client only code that is allowed to be committed, but that does not commit itself, also goes a long way. But leaving that responsibility in the database is a better idea, so no discussion in that area.
The goal of business function atomicity is to prevent partial client rollbacks and thereby to prevent database inconsistencies. If you know this cannot happen when called in an environment that handles transactions fine, like APEX or the scheduler, then implementing business function atomicity -procedure p2 in the example above- becomes a theoretical exercise. You’ll just introduce a layer of code that doesn’t provide additional value. It’s certainly not wrong, but code without value just adds costs and should be removed in my opinion.
Testing and re-use are indeed a red herring in this discussion, because I see you’re using commit statements in a correct way. However, I still encounter PL/SQL code on a regular basis that contains commit statements in the middle of the execution section. And I also encounter commit statements at the end of the execution section for code that is not in the XAPI layer. Both of these situations are the reason why I started tweeting about it.
I believe that this is the essence of Rob's feedback:
"We do disagree indeed... You’ll just introduce a layer of code that doesn’t provide additional value. It’s certainly not wrong, but code without value just adds costs and should be removed in my opinion."
I argue for a philosophy for a design of the database API that allows immediate and safe re-use by any scheme that interfaces with the outside world. And Rob argues for a design of the database API that relies upon specific idiosyncrasies of APEX or of the Oracle Scheduler. This is a religious difference. Further discussion is therefore fruitless.
I finally found some time to respond to this. As you are aware, I am part of the "commit in plsql is almost always a bug" faction. I'm very sorry that this discussion disturbed your holidays.
The rest of this comment is addressed to all readers of this.
I firmly believe that the two positions are not mutually exclusive. I will try to explain that in a moment. However there might be some misunderstandings and there was some bad timing involved.
First let me rephrase what the two positions are in my eyes. I think the advice from Rob was certainly meant as a "rule of thumb". The problem with Twitter is that it is easy to state such a rule, but hard to explain why the rule or "best practice" is there in the first place (and under which circumstances we are allowed to break it).
Position/Rule #1) Commit in plsql is almost always a bug!
Position/Rule #2) The business transaction belongs in plsql and therefore plsql must do the commit!
Representatives for #1 were Rob, myself, even Tom with his 2007 statement and others.
Representatives for #2 were Bryn, Toon and others.
Now let's put a bit of context around the two statements, but more importantly around the people behind those statements.
I have known Rob for a long time from the Oracle SQL and PLSQL forum. He is not active there anymore but was extremely active several years ago. I only met him once in person, at one OOW. Rob, myself, and even more so Tom have an extremely long history of answering hundreds or thousands of technical questions in a forum. Naturally, the questions there come from the more novice programmers. Almost every time a piece of code shown there includes a commit, that commit is plain wrong or at least highly questionable. Showing all the examples would be too big not only for Twitter, but also for this blog comment.
On the other side, there is Bryn. He talks with all kinds of senior developers, systems engineers, and application architects about the best place and the best way to work with data. Currently he is doing a wonderful job together with Toon to encourage people to follow the thick database paradigm. The background story here is that Bryn fights the notion of treating the database as a simple data dump, which is a fairly common attitude in the Java world.
We should recognize that the typical target audience for the representatives of the two positions is different! This is one of the misunderstandings. Rob's statement was not specifically addressed to Bryn.
Additionally, we should accept that every rule can be broken (apart from this one, of course). I think rule #2 is the accepted exception to rule #1.
So let me try to slightly rephrase the two positions as two general best practices.
#1) A commit in plsql is a bug, unless it is accompanied by at least three lines of comment that explain why the commit is there and which transaction it finishes.
#2) Commits belong in a business transaction layer. This transaction layer should be in plsql (XAPI). Commits do not belong in any other layer.
If we also keep the different audiences for both recommendations in mind, then suddenly the two positions seem not to contradict anymore.
Ah... what is it about this thread that encourages people to contribute while I'm on holiday? Anyway, thanks, Sven. Amazingly, while I was preparing my reply, there was a renewed flurry on Twitter. I'd hoped that I'd laid this to rest! I'll make my very last attempt now.
When the top-level database call invokes a PL/SQL subprogram that changes data, the design concept MUST be that it makes ALL the changes appropriate for the business txn in question and that it takes responsibility for the txn's atomicity. In other words, it must implement the SINGLE "commit" or the SINGLE "rollback" that the notion of atomicity implies. This is a non-negotiable correctness principle.
NOTE: I'm excluding all forms of logging into tables here. Code that does this can, and MUST, "commit". And to do this it must use an autonomous txn.
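A minimal sketch of such a logging procedure, with hypothetical names (app_log and log_msg are illustrative):

```sql
-- Logging must commit without disturbing the caller's transaction,
-- so it runs as an autonomous transaction.
procedure log_msg(p_msg in varchar2) is
  pragma autonomous_transaction;
begin
  insert into app_log(logged_at, msg)
  values (systimestamp, p_msg);
  commit;  -- ends only this autonomous txn, not the caller's work
end log_msg;
```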
Typically the PL/SQL subprogram that starts the top-level call invokes one or more other subprograms. Obviously, atomicity is ensured by the rule that the very last action must be the "commit" (or the "rollback").
Of course, "exactly one commit as the very last action in the call" is another way of saying that "commit" must never be programmed anywhere except here. This point is at the heart of our debate. Some people focus on the obvious error of executing "commit" anywhere except as the final action (and worry about what novices might do in error). And others focus on the necessity of executing "commit" as the final action to ensure that the atomicity contract is visibly met. This was at the heart of Tom Kyte's little "almost always a bug" joke. I really thought that I'd dealt with this—in particular by citing Tom's follow-up piece. Sven, please re-read what I wrote on this. And please do not attribute to him a view that is the diametric opposite of what he believes.
While a top-level call that terminates with an error automatically issues a "rollback" as its last action, I see no reason to rely on this to save the "effort" of coding your own "rollback". And doing your own "rollback" also allows a design where the return of an ordinary status code is used to signal an error in preference to letting the error escape the database.
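That design can be sketched as follows, with hypothetical names throughout (do_step_one, do_step_two, and log_msg stand for ordinary non-committing subprograms and an autonomous-transaction logger): the XAPI traps the error, rolls back explicitly, and returns an ordinary status code instead of letting the exception escape the database.

```sql
-- Hypothetical XAPI shape: single commit on success, explicit
-- rollback plus a status code on failure.
function my_xapi(p_arg in varchar2) return pls_integer is
begin
  do_step_one(p_arg);   -- these subprograms never commit
  do_step_two(p_arg);
  commit;               -- the single commit, as the very last action
  return 0;             -- success
exception
  when others then
    rollback;           -- explicit, rather than relying on the implicit one
    log_msg(sqlerrm);   -- autonomous-transaction logging
    return 1;           -- error signaled by status code, not by exception
end my_xapi;
```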
The middle-tier web-server code of APEX constructs an anonymous block dynamically and executes this to start the database call. In all generations of this technology (i.e. for about the last two decades) the generated anonymous block ends with a "commit". The block has no exception section, so if an unhandled exception gets this far, the implicit "rollback" takes place and the "commit" never happens. Knowing this, Rob wants to trust this because he promises himself that there will never be a re-use of his code from any other client-side scheme and he wants to avoid "unnecessary" code. As I said, this is religion.
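Schematically, the generated block has this shape (the procedure name and bind variable are illustrative, not what APEX actually emits verbatim):

```sql
-- Schematic form of the anonymous block APEX generates for a page
-- process. There is no exception section: an unhandled error triggers
-- the implicit rollback, and the trailing commit is never reached.
begin
  my_pkg.handle_page_submit(p_id => :P1_ID);
  commit;
end;
```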
Finally, let me stress that I hate the schemes that I described in my paper that subvert "insert" into a view, or even "select" from a table function, to let you call a PL/SQL subprogram. Seems to me that you've already gone horribly wrong if you're using some kind of "framework" technology that makes it impossible, or stupidly hard, simply to invoke a PL/SQL subprogram when you want to. But if such a jacketing scheme is used then you should still ensure that the subprogram that the jacket invokes owns the responsibility for the atomicity of the txn that it implements. This simply forces the ugly, but harmless, use of an autonomous txn to persist real operational data rather than reserving it only to persist logging data.