Thursday Aug 02, 2012

New recording on using JMeter to test ADF applications

Before joining Oracle I maintained an older ADF blog where I covered using Apache JMeter to load test ADF.  That post has been picked up by a number of people over the years and it's nice to see it was useful.

Unfortunately one of the problems in using JMeter to test ADF is there's an extreme amount of fussy configuration to get right.  As a result, to this day I continue to get hit with questions: why don't our tests work?  From my own investigation, 99% of the time it's a configuration error on the developer's part.  Like I said, there's lots of fussy configuration you must get exactly right, otherwise ADF gets confused by the malformed HTTP requests it receives from JMeter (more precisely, ADF reports the user session has expired, which is just ADF's way of saying it doesn't know who the current session is because the ADF HTTP state parameters JMeter is sending to the ADF server are not what it expected).

While the original blog post was useful in teaching people the technique of using JMeter, it really could do with a recorded demonstration to show all the steps involved in a live test.  Lucky for you, as I'm now an ADF product manager with far too much time on my hands, I've taken time out to record such a demo as part of our ever expanding ADF Insider series.

At the conclusion of the demo you may decide it all sounds like too much effort.  If so, this is exactly why you should look at Oracle's Application Test Suite (OATS).  OATS has ADF intelligence built in and requires far less fussy configuration, so you can focus on the job of testing rather than configuring the test tool.  I hope to publish some demos on using OATS soon.

One final caveat: I don't expect the existing JMeter configurations to survive every future version of ADF.  So if you do find your old JMeter tests stop working on adopting a future ADF version, it's time to look under the covers, discover how the JMeter tests need to change, and most importantly share your knowledge by blogging about it, posting it on the ADF EMG, and leaving a comment here for people to find.

Post edit 12th March 2013: Jan Vervecken has provided a very useful update for JMeter, check out the following OTN forums post.

Post edit 2nd September 2013: Ray Tindall has provided updates for using this under a later ADF release. The following changes need to be made to the JMeter solution:

Previously afrLoop was extracted from:
query = query.replace(/_afrLoop=[^&]*/,"_afrLoop=21441675777790");
query += "_afrLoop=21441675777790";

Under the newer release it should be extracted from:
query = _addParam(query, "_afrLoop", "21137373554065");

As such afrLoop should now be extracted using:
_afrLoop", "([-_0-9A-Za-z]{13,16})
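As a quick sanity check, that expression can be exercised outside JMeter before wiring it into a Regular Expression Extractor. A minimal Java sketch (the sample _addParam line and loop id below are illustrative values, not taken from a real server response):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: verify the suggested afrLoop extraction regex against the
// newer _addParam style of ADF page source.
public class AfrLoopExtractor {

    // The regular expression suggested above, escaped for Java.
    static final Pattern AFR_LOOP =
        Pattern.compile("_afrLoop\", \"([-_0-9A-Za-z]{13,16})");

    // Returns the captured loop id, or null if the page source doesn't match.
    public static String extract(String pageSource) {
        Matcher m = AFR_LOOP.matcher(pageSource);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String source = "query = _addParam(query, \"_afrLoop\", \"21137373554065\");";
        System.out.println(extract(source)); // the captured loop id
    }
}
```

In JMeter the same pattern goes into a Regular Expression Extractor with the capture group referenced as $1$.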

Thanks to both Jan and Ray for these updates.

Thursday Jul 12, 2012

Perth reveals itself as an ADF hotspot - ADF Community Event

I don't know if you've ever visited Perth, but it's a loooooong way from anywhere.  As a result sandgropers often feel like we're left out of the rest of the world's excitement (which isn't a bad thing sometimes either).

In the IT industry this is as true as for other worldly events.  We read all about those exciting Silicon Valley conferences and huge European technical user groups, then look locally to find that, well, with a population of only 1.2 million and the next major city 2500kms away, IT events are on a much smaller scale.

So I'm happy to announce that regardless of the tyranny of distance, Perth proved itself a little ADF hotspot yesterday, which marked the first ADF Community Event trial in Perth, open to only 4 of our local customers to test whether this would work locally, and I must say the event was a success!

The ADF Community Events are designed not to be Oracle sales events, but rather gatherings of parties interested in ADF to discuss, collaborate and network, while learning more about what ADF and other FMW products have to offer (think: SIG).  All in all this is an idea we "borrowed" from Frank Nimphius and our German colleagues, who are running their own successful event series.

Over 3 hours we discussed how to build large scale ADF applications, and enjoyed what I thought was an excellent hands-on session by Tim Middleton on integrating Coherence with ADF (Tim, I finally get Coherence, thanks!).

So how do I know the event was a success? Well firstly Oracle staff were trying to push their way in too, so I had an overly full room.  Secondly I've already lined up our customer speakers for the next event and they volunteered themselves without (much) prompting! ;-)

The next event is tentatively scheduled for Wednesday 12th September.  I'm deliberately controlling the invites, but if you're desperate to attend please email me at chris DOT muir AT oracle DOT com.

Thanks to everyone who attended yesterday and I look forward to seeing everyone at the next event.

Saturday Jul 07, 2012

Do you know your ADF "grace period?"

What does the term "support" mean to you in the context of vendors such as Oracle giving your organization support with our products? Over the last few weeks I've taken a straw poll to discuss this very question with customers, and much to my surprise I've received a wide array of answers (which I've paraphrased):

"Support means my staff can access dedicated resources to assist them solve problems"

"Support means I can call Oracle at anytime to request assistance"

"Support means we can expect fixes and patches to bugs in Oracle software"

The last expectation is the one I'd like to focus on in this post; keep it in mind while reading this blog.

From Oracle's perspective, as we're in the business of support, we in fact offer numerous services, which are captured in the table on the following page.

As the text under the table indicates, you should consult the relevant Oracle Lifetime Support brochures to understand the length of time Oracle will support Oracle products. As I'm a product manager for ADF that sits under the FMW tree of Oracle products, let's consider ADF in particular. The FMW brochure is found here.

On pages 8 and 9 you'll see the current "Application Development Framework 11gR1 (11.1.1.x)" and "Application Development Framework 11gR2 (11.1.2)" releases are supported out to 2017 for Extended Support. This timeframe is pretty standard for Oracle's current released products, though as new releases roll in we should see those dates extended.

On page 8 of the PDF, note the comment at the end of the page that refers to Oracle Support document 209768.1:

For more-detailed information on bug fix and patch release policies, please refer to the “Error Correction Support Policy” on MyOracle Support.

This policy document is important as it introduces Oracle's Error Correction Support Policy which addresses "patches and fixes". You can find it attached to the previous Oracle Support document 209768.1.

Broadly speaking while Oracle does provide "generalized support" up to 2017 for ADF, the Error Correction Support Policy dictates when Oracle will provide "patches and fixes" for Oracle software, and this is where the concept of the "grace period" comes in.

As Oracle releases different versions of Oracle software, you are fully supported for patches and fixes for that specific version. However when we release the next version, Oracle provides a minimum of 3 months to a maximum of 1 year "grace period" where we'll continue to provide patches and fixes for the previous version. This gives you time to move to the new version without being unsupported for patches and fixes.

The last paragraph does generalize, as I've attempted to highlight the concept of the grace period rather than the specific dates for any version. For specific ADF and FMW versions, their respective grace periods and when they terminated, you must visit Oracle Support Note 1290894.1. I'd like to include a screenshot here of the relevant table from that Oracle Support Note, but as it will be frequently updated it's better I force you to visit that note.

Be careful to heed the comment in the note:

According to policy, the Grace Period has passed because a newer Patch Set has been released for more than a year. Its important to note that the Lifetime Support Policy and Error Correction Support Policy documents are the single source of truth, subject to change, and will provide exceptions when required. This My Oracle Support document is providing a summary of the Grace Period dates and time lines for planning purposes.

So remember to return to the policy document for all definitions, note 1290894.1 is a summary only and not guaranteed to be up to date or correct.

A last point from Oracle's perspective: why doesn't Oracle provide patches and fixes for all releases as long as they're supported? Amongst other reasons, it's a matter of practicality. Consider JDeveloper 10.1.3, released in 2005. JDeveloper 10.1.3 is still supported to 2017, but since that version was released there have been just under 20 newer releases of JDeveloper. Now multiply that across all Oracle's products and imagine the number of releases Oracle would have to provide fixes and patches for, the environments to maintain to test and build them, the staff to write them and more; it's simply beyond the capabilities of even a large software vendor like Oracle. So the "grace period" restricts the patches and fixes window to something manageable.

In conclusion does the concept of the "grace period" matter to you? If you define support as "getting assistance from Oracle" then maybe not. But if patches and fixes are important to you, then you need to understand the "grace period" and operate within the bounds of Oracle's Error Correction Support Policy.

Disclaimer: this blog post was written July 2012. Oracle Support policies do change from time to time so the emphasis is on you to double check the facts presented in this blog.

Sunday Jun 24, 2012

466 ADF sample applications and growing - ADF EMG Kaleidoscope announcement

Interested in finding more ADF sample applications?  How does 466 applications take your fancy?

Today at ODTUG's Kaleidoscope conference in San Antonio the ADF EMG announced the launch of a new ADF Samples website, an index of 466 ADF applications gathered from expert ADF bloggers including customers and Oracle staff.

For more details on this great ADF community resource head over to the ADF EMG announcement.

Thursday Jun 14, 2012

An invitation to join a JDeveloper and ADF productivity clinic (and more!) at KScope

Would you like a chance to influence Oracle's decisions on tool usability and productivity?

If you're attending ODTUG's Kaleidoscope conference this year in San Antonio, Oracle would like to invite you to participate in our Usability Activity Research and separately our JDeveloper and ADF Productivity Clinics with our experienced user experience teams.  The teams are keen to hear what you have to say about your experiences with our tools in general and specifically JDeveloper and ADF.  The details of each event are described below.

Invitation to Usability Activity - Sunday June 24th to Wednesday June 27th

Oracle is constantly working on new tools and new features for developers, and invites YOU to become a key part of the process!  As a special addition to Kscope 12, Oracle will be conducting onsite usability research in the Alyssum room, from Sunday June 24 to Wednesday June 27.

Usability activities are scheduled ahead of time for participants' convenience.  If you would like to take part, please fill out this form to let us know of the session(s) that you would like to attend and your development experience. You will be emailed with your scheduled session before the start of the conference.

JDeveloper and ADF Productivity Clinic - Thursday June 28th

Are you concerned that Java, Oracle ADF or JDeveloper is difficult? Is JDeveloper making you jump through hoops?  Do you hate a particular dialog or feature of JDeveloper? Well, come and get things off your chest! Oracle is hosting a product management and user experience clinic where we want to hear about your issues and concerns. What's difficult to use?  What doesn't work the way you want, and how would you want it to work?  What isn't behaving like your current favorite tool?  If we can't help you on the spot, we'll take your feedback and use it to improve the product experience.  A great opportunity to get answers, or get improvements.

Drop by the Alyssum room, anytime from 8:30 to 10:30 on Thursday, June 28.

We look forward to seeing you at KScope soon! 

Sunday May 27, 2012

Page based Prematurely Terminating Task Flow scenario

In a previous blog post I highlighted the issue of ADF Prematurely Terminating Task Flows, essentially where ADF page fragment based task flows embedded in regions can be terminated early by their enclosing page causing some interesting side effects.  In that post I concluded the behavior was restricted to task flows embedded in regions, and to be honest besides a log out/timeout scenario, I thought this issue could only occur in regions.

While reading our documentation on the CLIENT_STATE_MAX_TOKENS and browser back button support I realized there is indeed another prematurely terminating task flow scenario for page based task flows rather than fragment based task flows which we'll describe here.  For anyone who hasn't read the previous blog post, I suggest you read it before reading this post as it won't make much sense otherwise.

Let's describe the application we'll use to demonstrate the scenario:

1) First it contains an unbounded task flow which includes a ViewCountries.jspx page to show data from the Countries table, followed by a call to a countries-task-flow.xml.

2) The ViewCountries.jspx page contains a read-only af:table showing countries data with the ability to select a record, an edit button to navigate to the countries-task-flow, and finally a plain old submit button.

3) The countries-task-flow includes an EditCountries.jspx and an exit Task Flow Return Commit activity.

Note the countries-task-flow transaction options are set to Always Begin New Transaction and a shared Data Control Scope:

4) Finally the EditCountries.jspx page includes an editable af:form for the countries data, and a button to exit the task flow via the Task Flow Return Commit activity.

Similar to the last blog post we'll use ADFLoggers on the underlying Application Module to show what's happening under the hood.

On running the application and accessing the ViewCountries.jspx page we see the Application Module initialized in the logs:

<AppModuleImpl> <create> AppModuleImpl created as ROOT AM
<AppModuleImpl> <prepareSession> AppModuleImpl prepareSession() called as ROOT AM

We'll pick the Brazil record....

....then the edit button which navigates us to the EditCountries.jspx page within the countries-task-flow.  Note the Brazil record is showing as the current row as the countries-task-flow is using a shared data control scope:

Now if we use the browser back button to return to the previous page we see something interesting in the logs as soon as we click the button.....

<AppModuleImpl> <beforeRollback> AppModuleImpl beforeRollback() called as ROOT AM

....and because of the rollback note that the current row has reset to the first row:

As promised this is another prematurely terminating task flow scenario, this time with pages rather than fragments.  As we can see the framework on detecting the back button press terminates the task flow's transaction by automatically issuing the rollback.

You can download the sample application from here.

Thursday May 17, 2012

Which JDeveloper is right for me?

Developers downloading JDeveloper will notice that there are two "current" releases to download, 11g Release 1 and 11g Release 2 (abbreviated to 11gR1 and 11gR2 respectively).  11gR1 encompasses the 11.1.1.X.0 JDeveloper versions including the latest release.  11gR2 encompasses the 11.1.2.X JDeveloper versions including the latest release.

What's the difference between the two releases and when would you want to use them?

JDeveloper 11g Release 2 includes support for JavaServer Faces 2.0 and was released for customers who are specifically interested in using this contemporary Java EE technology.  Oracle plans to bring full Java EE 6 support, which JSF 2.0 is a part of, into JDeveloper 12c, but in listening to customers there was interest in obtaining the JSF 2.0 support earlier.  Thus the 11gR2 release.

This begs the question: why would you want 11gR1 if 11gR2 includes the latest Java EE JSF standards?  Surely 11gR1 only supports the older JSF 1.2?  The answer revolves around JDeveloper's Fusion Middleware (FMW) support.  Only 11gR1 and the yet-to-be-released 12c versions of JDeveloper will support the full FMW tools including WebCenter, SOA Suite and so on.

So if you want the latest JSF2.0 support go 11gR2, but if you're happy with 11gR1 or need the rest of the FMW stack stay on the 11gR1 platform for now as Oracle is continuing to actively improve it.  Eventually JDeveloper 12c will arrive where the 11gR1 and 11gR2 releases will converge, and your choice will again be a simple one.

Friday May 04, 2012

ADF UI Shell update

Developers who use the ADF UI Shell (aka Dynamic Tab Shell) will be interested to know it now has support for multi browser tabs.  What does multi browser tab support mean?

Separate from the dynamic tab feature provided by the ADF UI Shell, contemporary browsers give the user the ability to open multiple tabs within the browser. Each browser tab can view different URLs, allowing the user to browse different websites simultaneously, or even the same website multiple times.

There are effectively two ways you can currently be using the ADF UI Shell: either you're using the version coupled with JDeveloper, having selected the Dynamic Tab Shell template through the New Page dialog, or you've downloaded the source code via the ADF UI Shell patterns page.

If you're using the former option, note that multi browser tab support within the Shell became available in JDeveloper (patchset 5).  To make use of this support you will need to add the context parameter USE_PAGEFLOW_TAB_TRACKING to your web.xml to turn on multi browser tab support in the shell.  By default the Shell does not turn this on, for backwards compatibility reasons.
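A sketch of what that web.xml entry might look like follows. The parameter name comes from the Shell itself; whether a fully-qualified name or a different value is required is an assumption you should verify against the Shell's own documentation:

```xml
<!-- Hedged sketch: check the ADF UI Shell docs for the exact
     parameter name and value before relying on this. -->
<context-param>
  <param-name>USE_PAGEFLOW_TAB_TRACKING</param-name>
  <param-value>true</param-value>
</context-param>
```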

Alternatively, if you're using the ADF UI Shell source code as downloaded via the original pattern web page, you will not only need to configure this new parameter, but also download the updated source code (via the zip in the link above) and modify your local copy of the template.  For reference the only code change has been to the class.

Note while this will make the ADF UI Shell ready for multi browser tab support, it does not mean your entire application suddenly can support multi browser tabs. You need to have taken special care in determining your application's bean scopes as detailed in one of my old blogs.

Friday Apr 20, 2012

ADF Prematurely Terminated Task Flows

In this post I'll describe some interesting side effects on task flow transactions if a task flow terminates/finalizes earlier than expected.  To demonstrate this we'll use the following application built in JDev.

The app based on the Oracle HR schema renders a single page:

Before describing the prematurely terminating task flow behavior let's describe the characteristics of the application first:

1) The app makes use of 2 independent view objects DepartmentsView and EmployeesView.

2) The overall page is Main.jspx which has an embedded region calling the departments-task-flow which itself has another embedded region calling the employees-task-flow.  The departments task flow has the editable departments form and navigation buttons to walk the departments, the employees task flow the table of relating employees for the department.

3) As the user navigates between departments records, the department ID is passed to the employees task flow, which calls an ExecuteWithParams activity then displays the resulting employees.  The employees task flow binding has its refresh property set to ifNeeded and the associated region has partialTriggers on the navigation buttons, ensuring the employees task flow is updated as the user navigates the departments using the supplied buttons.

4) Of particular interest, the departments-task-flow is using the Always Begin New Transaction task flow transaction behaviour and has an isolated data control scope:

And the employees-task-flow is using Use Existing Transaction if Possible and a shared data control scope:

If you run this application and navigate amongst the departments using the navigation buttons, the application works as expected.  Both the departments and employees records move onto the next departments ID after each button click.

In the application I've also added some ADFLoggers which help capture the current behaviour:

<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/departments-task-flow.xml#departments-task-flow initialized
<AppModuleImpl> <create> AppModuleImpl created as ROOT AM
<AppModuleImpl> <prepareSession> AppModuleImpl prepareSession() called as ROOT AM
<AppModuleImpl> <create> AppModuleImpl created as NESTED AM under AppModule
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized
<TaskFlowBean> <taskFlowFinalizer> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow finalized
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized

As the page first renders we can see the departments task flow initialized, then the associated application module created and prepared to serve the data for the DepartmentsView.  Subsequently we can see the employees task flow initialized.  We don't see a new application module as the employees task flow is sharing the data control. From here, each time we step onto another departments record we'll see the employees task flow finalizer called, then the employees task flow initializer called.  This occurs because the ifNeeded refresh property on the employees task flow is restarting the task flow each time the department ID is changed.

This restarting of the task flow is what I coin the "premature termination" of the task flow.  Essentially the calling parent has forced the framework to terminate the employees task flow, rather than the task flow gracefully exiting via a task flow return activity.
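For readers wondering where the TaskFlowBean log lines above come from, they are produced by methods registered as the task flows' initializer and finalizer. A minimal sketch follows; the real code would use oracle.adf.share.logging.ADFLogger, which this sketch replaces with plain java.util.logging so it runs standalone, and everything beyond the method names is an assumption:

```java
import java.util.logging.Logger;

// Sketch of a bean whose methods are registered as a bounded task
// flow's initializer and finalizer (via the task flow definition's
// Initializer/Finalizer properties), logging each lifecycle event.
public class TaskFlowBean {

    private static final Logger logger =
        Logger.getLogger(TaskFlowBean.class.getName());

    // Called by the framework when the task flow starts
    public String taskFlowInit(String taskFlowId) {
        String msg = "Task flow " + taskFlowId + " initialized";
        logger.info(msg);
        return msg;
    }

    // Called by the framework when the task flow terminates,
    // gracefully or otherwise
    public String taskFlowFinalizer(String taskFlowId) {
        String msg = "Task flow " + taskFlowId + " finalized";
        logger.info(msg);
        return msg;
    }
}
```

In the real application the initializer/finalizer methods take no arguments and would derive the task flow id from the ADF controller context; the parameter here just keeps the sketch self-contained.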

At the moment though, this is still a "So what?" scenario.  What do we care?  Everything appears to work?

Let's change the setup slightly to demonstrate something unexpected.  Return to the application and set the departments task flow transaction option to <No Controller Transaction> (and leave the data control scope option = isolated/unselected):

Rerun the application.  Note now when it runs and we press one of the navigation buttons, besides a screen refresh nothing happens.  We don't walk onto a new departments record in the departments task flow, and we don't see the associated employees for the expected new department.  The application seems stuck on the first department.

A clue to what's going on occurs in the logs:

<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/departments-task-flow.xml#departments-task-flow initialized
<AppModuleImpl> <create> AppModuleImpl created as ROOT AM
<AppModuleImpl> <prepareSession> AppModuleImpl prepareSession() called as ROOT AM
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized
<TaskFlowBean> <taskFlowFinalizer> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow finalized
<AppModuleImpl> <beforeRollback> AppModuleImpl beforeRollback() called as ROOT AM
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized

Notice in between the last employees task flow finalizer/initializer pair we see the application module has performed a rollback.  This partially explains the behaviour we're seeing.  When a rollback is issued, a rollback resets the current row indicators for all view objects attached to the application module.  This is why we can't move onto another record.

But why is the rollback called in this scenario?

The answer is wrapped around the concept of task flow transactions and the associated data control frame.

In the first scenario the departments task flow initiated the task flow transaction and associated data control frame.  In turn the employees task flow joined the departments transaction and data control frame.  Only the initiator of a task flow transaction can commit/rollback the overall transaction associated with the data control frame.  In the case where the employees task flow is prematurely terminated, as it is a secondary citizen in the overall task flow transaction, the framework leaves the initiator of the task flow transaction to tidy up the transaction. No automatic rollback occurs on the work done by the secondary task flow.

In the second scenario the departments task flow is not initiating the task flow transaction as it's chosen the <No Controller Transaction> option.  Instead the employees task flow initiates the transaction because when the Use Existing Transaction if Possible option finds no transaction open it defaults to the equivalent of Always Begin New Transaction behaviour.

Remembering that the initiator of a task flow transaction can commit/rollback the overall transaction, the framework automatically rolls back the employees task flow, and this is the cause of the behaviour we're seeing.  Even though the departments task flow is using <No Controller Transaction>, this doesn't mean the underlying view object doesn't participate in a transaction; it just doesn't participate in a task flow transaction (which is an abstraction sitting above the data control transactions).  As the two task flows share data controls, there is only a single application module shared by both task flows, so a rollback from one task flow results in a rollback in both task flows.

The solution? Either revert to the original settings where the departments task flow uses Always Begin New Transaction, or alternatively use an isolated data control scope for the employees task flow.

Saturday Apr 07, 2012

Oracle JDeveloper 11gR2 Cookbook book review

I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook published by Packt Publishing for review.

Readers of technical cookbooks would know this genre of text covers problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF).  Books like this excel through comprehensive coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code.

This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning.  Each recipe had a clear introduction and I especially enjoyed the "There's more" follow up sections for some recipes that leads the reader onto related ideas and issues the reader really needs to be aware of.

Also worthy of comment, having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before; this book gets bonus points for that.

As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization."  ADF is such a large and sophisticated technology that this book with 100 recipes barely scrapes the surface.  Don't expect all your ADF problems to be solved here.

In turn there is some inconsistency in the level of problems and solutions.  I felt at the beginning the book was pitching itself at advanced problems (that's great for me), but then it introduces topics like building a static View Object or train.  These topics in my opinion are fairly simple and are covered just as well by the Oracle documentation; they shouldn't have been included here.

In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts will appreciate the fact they can point junior programmers at the book for certain problems and say "get on with it".

Is there scope for more ADF tomes like this?  Yes!  I'd love to see a cookbook specializing on ADF Business Components (hint hint to budding authors).

Tuesday Apr 03, 2012

Solution for developers wanting to run a standalone WLS 10.3.6 server against JDev

In my previous post I discussed how to install the ADF Runtimes into a standalone WLS 10.3.6 server by using the ADF Runtime installer, not the JDeveloper installer.  Yet there's still a problem for developers here because JDeveloper comes coupled with a WLS 10.3.5 server.  What if you want to develop, deploy and test with a 10.3.6 server?  Have we lost the ability to integrate the IDE and the WLS server, where we can run and stop the server, deploy our apps automatically to the server and more?

JDeveloper actually solved this issue sometime back but not many people will have recognized the feature for what it does as it wasn't needed until now.

Via the Application Server Navigator you can create 2 types of connections, one to a remote "standalone WLS" and another to an "integrated WLS".  It's this second option that is useful because what we can do is install a local standalone WLS 10.3.6 server on our developer PC, then create a separate "integrated WLS" connection to the standalone server.  Then by accessing your Application's properties through the Application menu -> Application Properties -> Run -> Bind to Integration Application Server option we can choose the newly created WLS server connection to work with our application.

In this way JDeveloper will now treat the new server as if it was the integrated WLS.  It will start when we run and deploy our applications, terminate on request and so on.  Of course don't forget you still need to install the ADF Runtimes for the server to be able to work with ADF applications.

Note there is bug 13917844 lurking in the Application Server Navigator for at least JDev and earlier.  If you right click the new connection and select "Start Server Instance" it will often start one of the other existing connections instead (typically the original IntegratedWebLogicServer connection).  If you want to manually start the server you can bypass this by using the Run menu -> Start Server Instance option which works correctly.

Friday Mar 30, 2012

Solution for installing the ADF Runtimes onto a standalone WLS 10.3.6
Wednesday Mar 21, 2012

The case of the phantom ADF developer (and other yarns)

A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

ADF Business Components - triggers, default table values and INSTEAD OF views.

Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in.

The crime

Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible when there are no other users?  It must be the phantom ADF developer! [insert dramatic music here]

The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page:

A false conclusion

What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How could our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record?  Why do we see JBO-25014 instead?

Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database.

First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic.

Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console.  At runtime this option turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.

Setting our locking mode to pessimistic, then opening the Business Component Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see among others the following log entries:

[423] Where binding param 1: 1206 

As can be seen on line 422 of the log, a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ... 
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... 
[449] EntityCache close prepared statement

....and as a result the changes are saved to the database, and the lock is released.

Let's see what happens when we use the optimistic locking mode, this time to change a BOOKINGS record's CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs: nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row.

However when we save our records by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ... 
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... 
[521] EntityCache close prepared statement

Even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 514, and the lock is released as part of the commit issued around line 519.  Therefore with either optimistic or pessimistic locking a lock is indeed issued.

Our conclusion at this point must be that, unless the LOCK statement is somehow never really hitting the database, or the even less likely case that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could modify the record, even if they tried, without at least someone receiving a lock error.

Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem. 

Who is the phantom?

At this point we'll need to concede that the error message "JBO-25014: Another user has changed the row" is somehow legit, even though we don't yet understand what's causing it. This leads on to two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

To answer the first question (how does ADF know another user has changed the row?), the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data, which details the Entity Object Change-Indicator property, gives us the answer:

At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database. 

The guide further suggests to make this solution more efficient:

You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values.

We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against lost updates, but rather it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If the values don't match, be it via the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.
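That comparison can be sketched in plain Java.  The following is purely an illustrative simulation of the compare-on-lock idea, not ADF BC's actual implementation; all class and method names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative simulation of ADF BC's "lost update" check: the framework
// compares the original attribute values it cached at query time against
// the current column values read from the database at lock time.
public class LostUpdateCheck {

    // Throws if any original value no longer matches the database row,
    // mimicking JBO-25014 "Another user has changed the row".
    static void verifyRowUnchanged(Map<String, Object> originalValues,
                                   Map<String, Object> databaseValues) {
        for (Map.Entry<String, Object> e : originalValues.entrySet()) {
            if (!Objects.equals(e.getValue(), databaseValues.get(e.getKey()))) {
                throw new IllegalStateException(
                    "JBO-25014: Another user has changed the row (column "
                    + e.getKey() + ")");
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> original = new LinkedHashMap<>();
        original.put("BOOKING_NO", 1206);
        original.put("CHARGEABLE", "N");

        // A database trigger silently flipped CHARGEABLE after our insert...
        Map<String, Object> inDatabase = new LinkedHashMap<>(original);
        inDatabase.put("CHARGEABLE", "Y");

        try {
            verifyRowUnchanged(original, inDatabase);
            System.out.println("row unchanged");
        } catch (IllegalStateException ex) {
            System.out.println(ex.getMessage());
        }
    }
}
```

When the values match nothing happens; when any column differs, the check fails exactly once regardless of who (or what) changed the database copy, which is why the error says nothing about the real culprit.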

This leaves one last question.  Now we know the mechanism by which ADF identifies a changed row, but we don't know what's changed and who changed it.

The real culprit

What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed mid-tier record to compare to the database record, but rather a copy of the original record before it was changed.  This leads us to conclude the database record has changed; but how, and by whom?

There are three potential causes:
  • Database triggers
Among other uses, a database trigger can be configured to fire PL/SQL code on a database table insert, update or delete.  In particular, on an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action.

Why this causes an issue specific to our ADF use becomes clear when we insert or update a record in the database via ADF: ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date, as the database trigger has modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and it detects the record has been changed, giving us JBO-25014.

This is probably the most common cause of this problem.

  • Default values

A second reason this issue can occur is another database feature, default column values.  When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_DATE column could default to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value.

As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can specifically only occur in an insert-commit-update-commit scenario, not an update-commit-update-commit scenario.

  • Instead of trigger views
I must admit I haven't double checked this scenario but it seems plausible: the Oracle database's INSTEAD OF trigger view (sometimes simply referred to as "instead of views").  A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updatable and deletable views, Oracle introduced the INSTEAD OF trigger, giving the view designer the ability to define not only the view query, but a set of programmatic PL/SQL triggers where the developer can define their own logic for inserts, updates and deletes.
While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record through the INSTEAD OF view, the record and its data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

Readers are reminded at this point that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database.  The article is not considering how the error occurs in a production environment where multiple users can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are in play), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

At this point, under project timeline pressures, the obvious fix for developers is to drop both the database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping a database trigger or default value that an existing Oracle Forms application assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise.  Not ideal.

Solving the mystery once and for all

Luckily ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also builds the database and is fully aware of the Oracle database's feature set.  At the Entity Object attribute level you'll find the Refresh After Insert and Refresh After Update properties.  Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record.  Thus next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed the row" is no longer an issue.
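Under the covers those checkboxes simply set flags on the attribute in the Entity Object's XML definition, roughly as follows.  This is a hand-written sketch rather than JDeveloper-generated XML, with the attribute details simplified:

```xml
<!-- Sketch of a BOOKINGS EO attribute with Refresh After Insert/Update
     selected; ADF BC re-reads the column after DML via the SQL
     RETURNING clause. -->
<Attribute
  Name="Chargeable"
  ColumnName="CHARGEABLE"
  SQLType="VARCHAR"
  RetrievedOnInsert="true"
  RetrievedOnUpdate="true"/>
```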

[Post edit - as per the comment from Oracle's Steven Davelaar below, the above solution will not work for INSTEAD OF trigger views, as it relies on the SQL RETURNING clause which is incompatible with this type of view]

Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database isn't itself inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator off the original issue will return.

Thursday Mar 15, 2012

How competent in Java do I need to be for ADF?

I recently received the following question via email:

"Chris - what competency level in Java does a developer need to have in order to develop medium to complex ADF applications?  Looking forward to your future postings."

This is a common question asked of ADF and I think a realistic one too as it puts emphasis on medium to complex developments rather than simple applications.

In my experience a reasonable answer for this comes from Sten Vesterli's Oracle ADF Enterprise Application Development - Made Simple:

Getting Organized - Skills required - Java programming

"Not everybody who writes needs the skills of Shakespeare. But everybody who writes needs to follow the rules of spelling and grammar in order to make themselves understood.

All serious frameworks provide some way for a programmer to add logic and functionality beyond what the framework offers. In the case of the ADF framework, this is done by writing Java code. Therefore, every programmer on the project needs to know Java as a programming language and be able to write syntactically correct Java code. But this is a simple skill for everyone familiar with a programming language. You need to know that Java uses { curly brackets } for code blocks instead of BEGIN-END, you need to know the syntax for if-then-else constructs, and how to build a loop and work with an array.

But not everyone who writes Java code needs to be a virtuoso with full command of inheritance, interfaces and inner classes."
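To make concrete the level Sten is describing, here's a trivial, made-up snippet exercising exactly those basics (curly-bracket blocks, if-then-else, a loop and an array):

```java
public class JavaBasics {
    // The "spelling and grammar" level of Java an ADF developer needs:
    // code blocks in curly brackets, if-then-else, loops and arrays.
    static int sumOfEvens(int[] numbers) {
        int total = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                total += n;
            }
            // odd numbers are simply skipped
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6};
        System.out.println(sumOfEvens(data)); // prints 12
    }
}
```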

Sten's book is a recommended read for teams looking to commence large ADF projects.

From my own experience it's hard to comment on the specifics of every project; what constitutes medium to complex requirements for one ADF team may be complex to "yeeks!" for another. But having worked as an independent ADF developer for several years, I'm willing to share the level of Java skills I think is required.

In addressing the question I think a good approach is to look at the Java SE and Java EE certification exams, what topics they cover, and note which topics I think are valuable. Before doing this, readers should note that JDeveloper at the time this blog was written still runs on Java SE 1.6 and Java EE 1.5. However I'm going to link to the later Java SE 1.7 exams, as that'll increase the lifetime relevance of this post. Note those exams are currently in beta so are subject to change, and the list of topics I've got below might not be in the final exams.

As such from the Oracle Certified Associate, Java SE 7 Programmer I certification exam topics, in my honest opinion ADF developers need to know *all* of the following topics:
  • Java Basics
  • Working with Java Data Types
  • Using Operators and Decision Constructs
  • Creating and Using Arrays
  • Using Loop Constructs
  • Working with Methods and Encapsulation
  • Working with Inheritance
  • Handling Exceptions
I might have a few people argue with me on the list above, particularly on inheritance and exceptions. But in my experience, ADF developers who don't know about inheritance (and in particular type casting), as well as exception handling in general, will struggle.  In reality all of the topics above are Java basics taught to first-year IT undergraduates, so nobody should be surprised by the list.
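For the sake of beginners, the sort of inheritance and type casting knowledge I'm referring to is no more exotic than this (a made-up example, not an ADF API, though ADF frequently hands you a general type that must be safely downcast before use):

```java
public class CastingExample {
    static class Employee {
        String name() { return "emp"; }
    }

    static class Manager extends Employee {
        @Override String name() { return "mgr"; }
        int reports() { return 3; }
    }

    // Code is often given the general type and must check the actual
    // runtime type before downcasting to reach subclass behaviour.
    static int reportsFor(Employee e) {
        if (e instanceof Manager) {
            Manager m = (Manager) e;   // explicit downcast
            return m.reports();
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(reportsFor(new Manager()));  // prints 3
        System.out.println(reportsFor(new Employee())); // prints 0
    }
}
```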

When we move to the Java SE 7 Programmer II exam topics, the list is as follows.  You'll note the numbers next to each topic, 1 being mandatory, 2 not mandatory but knowledge in this area will certainly help most projects, and 3 not required.
  • 1- Java Class Design
  • 1- Java Advanced Class Design
  • 1 - Object-Oriented Principles
  • 2 - String Processing
  • 1 - Exceptions
  • 3 - Assertions
  • 2 - Java I/O Fundamentals
  • 2 - Java File I/O
  • 1 - Building Database Applications with JDBC
  • * - Threads
  • * - Concurrency
  • * - Localization

In the #1 list there are no surprises except maybe JDBC. From my own personal experience, even though ADF BC and EJB/JPA abstract away from knowing the language of the database, at customer sites I've frequently had to build solutions that interface with legacy database PL/SQL using JDBC. Your site might not have this requirement, but the next site you work at probably will.
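For those who haven't done it, a legacy PL/SQL call over JDBC looks something like the following sketch.  The hr_legacy.calc_bonus package and procedure are invented for illustration:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;

// A minimal sketch of calling legacy PL/SQL over JDBC. The package and
// procedure name (hr_legacy.calc_bonus) are hypothetical.
public class LegacyPlsqlCall {

    // Builds the JDBC escape syntax for a stored call with n parameters.
    static String buildCall(String procName, int paramCount) {
        StringBuilder sb = new StringBuilder("{ call ").append(procName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(") }").toString();
    }

    // Executes the hypothetical procedure; requires a live Connection.
    static double calcBonus(Connection conn, int empId) throws Exception {
        try (CallableStatement cs =
                 conn.prepareCall(buildCall("hr_legacy.calc_bonus", 2))) {
            cs.setInt(1, empId);                       // IN parameter
            cs.registerOutParameter(2, Types.NUMERIC); // OUT parameter
            cs.execute();
            return cs.getDouble(2);
        }
    }

    public static void main(String[] args) {
        System.out.println(buildCall("hr_legacy.calc_bonus", 2));
        // prints: { call hr_legacy.calc_bonus(?, ?) }
    }
}
```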

The #2 list is more interesting. String processing is useful because without some internal knowledge of the standard Java APIs you can write some poorly performing code. Java I/O is not an uncommon requirement either, such as being able to read and write uploaded/downloaded files on WLS.

As for the #3 list, assertions simply don't work in the Java EE world in which ADF runs.

Finally, the topics marked with stars require special explanation. First localization, often called internationalization, which really depends on the requirements of your project. For me sitting in Australia, I've never worked on a system that requires any type of localization support besides some daylight-saving calculations. For you this requirement might be totally the opposite if you sit in Europe, so as a requirement it depends.

Then the topics of threading and concurrency. Threading and concurrency are useful topics only because there "be demons in thar" (best said in a pirate voice) for future Java projects. ADF actually isolates programmers from the issues of threading and concurrency. This isolation is risky as it may give ADF programmers a false belief they can code anything in Java. You'll quickly find issues of thread safety, and collection classes that support concurrency, are a prime concern for non-ADF Java solutions.
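As a taste of what ADF hides from you, the following sketch shows the kind of shared-state care non-ADF Java code needs. It uses AtomicInteger, which gives a deterministic count; replace it with a plain int field and increments can be silently lost:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Outside ADF's comfort zone, shared mutable state must be guarded.
// AtomicInteger makes each increment an atomic read-modify-write, so
// the final count is deterministic no matter how threads interleave.
public class ConcurrencyDemo {

    static int countWithAtomic(int threads, int incrementsPerThread)
            throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    counter.incrementAndGet(); // atomic increment
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for all workers to finish
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithAtomic(4, 10000)); // prints 40000
    }
}
```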

So do you need to be an expert Java programmer for ADF? The answer is no, but a reasonable level of Java is required. To cap it off, the more Java you know the better, and not just for your ADF project! Java remains in my opinion a popular language and something to have on your resume (or is that LinkedIn profile these days?).

Tuesday Mar 13, 2012

ADF EMG at Collaborate 2012

I'm happy to announce the ADF EMG will have sessions at this year's Collaborate conference in Las Vegas April 22-26th 2012.  This is the first time the ADF EMG has presented at Collaborate.

Chad Thompson, Chris Ostrowski and Penny Cookson will be leading the charge presenting the following topics on the Wednesday:

1) ADF: A Path to the Future for Dinosaur Nerds - Penny Cookson - Session 173 - Wednesday 11:00am-12:00pm
2) Getting Started with ADF - Chad Thompson - Session 655 - Wednesday 1:00pm-2:00pm
3) JDeveloper ADF and the Oracle Database - Friends Not Foes - Session 172 - Wednesday 3:00pm-4:00pm
4) ADF + Faces: Do I Have to Write ANY Java Code - Session 164 - Wednesday 4:15pm-5:15pm

Penny Cookson won best paper for presentation 3 at the Aussie AUSOUG Perth conference in 2011, so the calibre of speakers here is high and well worth attending.  Even if you can't make the sessions it would be great if you could just pop your head in and say hi & thanks to these speakers for presenting at Collaborate.

Note the above session times are subject to change, you can find more information here.

If anybody is interested in ADF EMG speakers presenting at their conference, please let an EMG representative know so we can see what we can arrange.

Chris Muir
Oracle Mobility and Development Tools Product Manager

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.


