Friday Apr 20, 2012

ADF Prematurely Terminated Task Flows

In this post I'll describe some interesting side effects on task flow transactions if a task flow terminates/finalizes earlier than expected.  To demonstrate this we'll use the following application PrematurelyTerminatingTaskFlows.zip built in JDev 11.1.1.6.0.

The app, based on the Oracle HR schema, renders a single page:



Before describing the prematurely terminating task flow behaviour, let's first describe the characteristics of the application:

1) The app makes use of 2 independent view objects DepartmentsView and EmployeesView.

2) The overall page is Main.jspx which has an embedded region calling the departments-task-flow, which itself has another embedded region calling the employees-task-flow.  The departments task flow has the editable departments form and navigation buttons to walk the departments, and the employees task flow the table of related employees for the department.

3) As the user navigates between departments records, the department ID is passed to the employees task flow, which calls an ExecuteWithParams activity then displays the resulting employees.  The employees task flow binding has its refresh property set to ifNeeded, and the associated region has partialTriggers on the navigation buttons, ensuring the employees task flow is updated as the user navigates the departments using the supplied buttons (see the configuration sketch below).

4) Of particular interest, the departments-task-flow is using the Always Begin New Transaction task flow transaction behaviour and has an isolated data control scope:



And the employees-task-flow is using Use Existing Transaction if Possible and a shared data control scope:
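For those curious where these options live, the region binding and the two task flow definitions contain XML along the following lines (a sketch only, from memory; the binding id and parameter names are mine, and ids and unrelated metadata are omitted):

<!-- pageDef entry for the employees region, with the ifNeeded refresh -->
<taskFlow id="employeestaskflow1"
          taskFlowId="/WEB-INF/employees-task-flow.xml#employees-task-flow"
          Refresh="ifNeeded"
          xmlns="http://xmlns.oracle.com/adf/controller/binding">
  <parameters>
    <parameter id="departmentId" value="#{bindings.DepartmentId.inputValue}"/>
  </parameters>
</taskFlow>

<!-- departments-task-flow: Always Begin New Transaction, isolated scope -->
<transaction>
  <new-transaction/>
</transaction>
<data-control-scope>
  <isolated/>
</data-control-scope>

<!-- employees-task-flow: Use Existing Transaction if Possible, shared scope -->
<transaction>
  <requires-transaction/>
</transaction>
<data-control-scope>
  <shared/>
</data-control-scope>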



If you run this application and navigate amongst the departments using the navigation buttons, the application works as expected.  Both the departments and employees records move on to the next department ID after each button click.

In the application I've also added some ADFLoggers which help capture the current behaviour:

<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/departments-task-flow.xml#departments-task-flow initialized
<AppModuleImpl> <create> AppModuleImpl created as ROOT AM
<AppModuleImpl> <prepareSession> AppModuleImpl prepareSession() called as ROOT AM
<AppModuleImpl> <create> AppModuleImpl created as NESTED AM under AppModule
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized
<TaskFlowBean> <taskFlowFinalizer> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow finalized
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized

As the page first renders we can see the departments task flow initialized, then the associated application module created and prepared to serve the data for the DepartmentsView.  Subsequently we can see the employees task flow initialized.  We don't see a new application module as the employees task flow is sharing the data control. From here each time we step onto another departments record, we'll see the employees task flow finalizer called, then the employees task flow initializer called.  This occurs because the ifNeeded refresh property on the employees task flow is restarting the task flow each time the department ID is changed.
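For those wanting to reproduce the logging, the entries above match a bean along these lines, with the two methods registered as each task flow's initializer and finalizer in its definition (a sketch from memory, particularly the ControllerContext lookup):

import oracle.adf.controller.ControllerContext;
import oracle.adf.share.logging.ADFLogger;

public class TaskFlowBean {
    private static final ADFLogger logger = ADFLogger.createADFLogger(TaskFlowBean.class);

    // One way to derive the id of the task flow currently being entered/exited
    private String currentTaskFlowId() {
        return ControllerContext.getInstance().getCurrentViewPort()
                   .getTaskFlowContext().getTaskFlowId().getFullyQualifiedName();
    }

    public void taskFlowInit() {
        logger.info("Task flow " + currentTaskFlowId() + " initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow " + currentTaskFlowId() + " finalized");
    }
}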

This restarting of the task flow is what I coin the "premature termination" of the task flow.  Essentially the calling parent has forced the framework to terminate the employees task flow, rather than the task flow gracefully exiting via a task flow return activity.

At the moment though, this is still a "So what?" scenario.  Why do we care?  Everything appears to work.

Let's change the setup slightly to demonstrate something unexpected.  Return to the application and set the departments task flow transaction option to <No Controller Transaction> (and leave the data control scope option = isolated/unselected):



Rerun the application.  Note that now when we press one of the navigation buttons, besides a screen refresh nothing happens.  We don't walk onto a new departments record in the departments task flow, and we don't see the associated employees for the expected new department.  The application seems stuck on the first department.

A clue to what's going on occurs in the logs:

<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/departments-task-flow.xml#departments-task-flow initialized
<AppModuleImpl> <create> AppModuleImpl created as ROOT AM
<AppModuleImpl> <prepareSession> AppModuleImpl prepareSession() called as ROOT AM
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized
<TaskFlowBean> <taskFlowFinalizer> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow finalized
<AppModuleImpl> <beforeRollback> AppModuleImpl beforeRollback() called as ROOT AM
<TaskFlowBean> <taskFlowInit> Task flow /WEB-INF/employees-task-flow.xml#employees-task-flow initialized

Notice in between the last employees task flow finalizer/initializer pair we see the application module has performed a rollback.  This partially explains the behaviour we're seeing.  When a rollback is issued, it resets the current row indicators for all view objects attached to the application module.  This is why we can't move onto another record.

But why is the rollback called in this scenario?

The answer is wrapped around the concept of task flow transactions and the associated data control frame.

In the first scenario the departments task flow initiated the task flow transaction and associated data control frame.  In turn the employees task flow joined the departments transaction and data control frame.  Only the initiator of a task flow transaction can commit/rollback the overall transaction associated with the data control frame.  In the case where the employees task flow is prematurely terminated, as it is a secondary citizen in the overall task flow transaction, the framework leaves the initiator of the task flow transaction to tidy up.  No automatic rollback occurs on the work done by the secondary task flow.

In the second scenario the departments task flow is not initiating the task flow transaction as it's chosen the <No Controller Transaction> option.  Instead the employees task flow initiates the transaction because when the Use Existing Transaction if Possible option finds no transaction open it defaults to the equivalent of Always Begin New Transaction behaviour.

Remembering that only the initiator of a task flow transaction can commit/rollback the overall transaction, the framework automatically rolls back the transaction when the employees task flow (this time the initiator) is terminated, and this is the cause of the behaviour we're seeing.  Even though the departments task flow is using <No Controller Transaction>, this doesn't mean the underlying view object doesn't participate in a transaction; it just doesn't participate in a task flow transaction (which is an abstraction sitting above the data control transactions).  As the two task flows share data controls there is only a single application module shared by both task flows, so a rollback from one task flow will result in a rollback in both task flows.

The solution? Either revert back to the original settings where the departments task flow uses Always Begin New Transaction, or alternatively use an isolated data control scope for the employees task flow.

Saturday Apr 07, 2012

Oracle JDeveloper 11gR2 Cookbook book review

I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook published by Packt Publishing for review.

Readers of technical cookbooks will know this genre of text includes problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF).  Books like this excel through comprehensive coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code.

This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning.  Each recipe has a clear introduction, and I especially enjoyed the "There's more" follow-up sections for some recipes, which lead the reader on to related ideas and issues the reader really needs to be aware of.

Also worthy of comment: having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before; this book gets bonus points for that.

As a reviewer what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization."  ADF is such a large and sophisticated technology that this book with 100 recipes barely scrapes the surface.  Don't expect all your ADF problems to be solved here.

In turn there is inconsistency in the level of problems and solutions.  I felt at the beginning the book was pitching itself at advanced problems to solve (that's great for me), but then it introduces topics like building a static View Object or train.  These topics in my opinion are fairly simple and are covered by the Oracle documentation just as well; they shouldn't have been included here.

In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts will find it worthwhile for the simple fact they can point junior programmers at the book for certain problems and say "get on with it".

Is there scope for more ADF tomes like this?  Yes!  I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

Tuesday Apr 03, 2012

Solution for developers wanting to run a standalone WLS 10.3.6 server against JDev 11.1.1.6.0

In my previous post I discussed how to install the 11.1.1.6.0 ADF Runtimes into a standalone WLS 10.3.6 server by using the ADF Runtime installer, not the JDeveloper installer.  Yet there's still a problem for developers here because JDeveloper 11.1.1.6.0 comes coupled with a WLS 10.3.5 server.  What if you want to develop, deploy and test with a 10.3.6 server?  Have we lost the ability to integrate the IDE and the WLS server, where we can run and stop the server, deploy our apps automatically to the server and more?

JDeveloper actually solved this issue some time back, but not many people will have recognized the feature for what it does as it wasn't needed until now.

Via the Application Server Navigator you can create 2 types of connections, one to a remote "standalone WLS" and another to an "integrated WLS".  It's this second option that is useful, because what we can do is install a local standalone WLS 10.3.6 server on our developer PC, then create a separate "integrated WLS" connection to the standalone server.  Then via the Application menu -> Application Properties -> Run -> Bind to Integration Application Server option, we can choose the newly created WLS server connection to work with our application.


In this way JDeveloper will now treat the new server as if it were the integrated WLS.  It will start when we run and deploy our applications, terminate on request and so on.  Of course don't forget you still need to install the ADF Runtimes for the server to be able to work with ADF applications.

Note there is bug 13917844 lurking in the Application Server Navigator for at least JDev 11.1.1.6.0 and earlier.  If you right click the new connection and select "Start Server Instance" it will often start one of the other existing connections instead (typically the original IntegratedWebLogicServer connection).  If you want to manually start the server you can bypass this by using the Run menu -> Start Server Instance option which works correctly.


Friday Mar 30, 2012

Solution for installing the ADF 11.1.1.6.0 Runtimes onto a standalone WLS 10.3.6


Wednesday Mar 21, 2012

The case of the phantom ADF developer (and other yarns)

A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

ADF Business Components - triggers, default table values and instead-of views.

Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in.

The crime

Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here]

The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page:


A false conclusion

What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record?  Why do we see JBO-25014?


Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database.

First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0 if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic.


Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console.  At runtime this option turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
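As an example, for a quick experiment both settings can be supplied as Java options on the project's run configuration (jbo.locking.mode is more typically set via the Application Module configuration, i.e. bc4j.xcfg, but a system property override works for testing):

-Djbo.debugoutput=console -Djbo.locking.mode=pessimistic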

Setting our locking mode to pessimistic, and opening the Business Components Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see among others the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206 

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ... 
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... 
[449] EntityCache close prepared statement

....and as a result the changes are saved to the database, and the lock is released.

Let's see what happens when we use the optimistic locking mode, this time changing a BOOKINGS record's CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row.

However when we save our records by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ... 
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... 
[521] EntityCache close prepared statement

Again even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519.  Therefore with either optimistic or pessimistic locking a lock is indeed issued.

Our conclusion at this point must be that, unless there's the unlikely cause that the LOCK statement is never really hitting the database, or the even less likely cause that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record without at least someone receiving a lock error.

Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem. 

Who is the phantom?

At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it. This leads onto two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data, which details the Entity Object Change-Indicator property, gives us the answer:

At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database. 

The guide further suggests to make this solution more efficient:

You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values.

We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather keeps a copy of the original record fetched, separate to the user-changed version of the record, and compares the original record against the one in the database when the lock is taken out.  If the values don't match, be it via the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

This leaves one last question.  Now we know the mechanism by which ADF identifies a changed row, what we don't know is what's changed and who changed it.

The real culprit

What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by whom?

There are three potential causes:
  • Database triggers
A database trigger, among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular on an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action.

Why this causes an issue specific to our ADF use is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database trigger has modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and detects the record has been changed, giving us JBO-25014.

This is probably the most common cause of this problem.
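As an illustration only, a trigger as simple as the following hypothetical example (the LAST_UPDATED column is made up) is enough to cause JBO-25014 on the next update of the same row:

CREATE OR REPLACE TRIGGER bookings_biu
BEFORE INSERT OR UPDATE ON bookings
FOR EACH ROW
BEGIN
  -- Silently overrides whatever value the mid-tier supplied, so ADF's
  -- cached copy of the row no longer matches the database
  :new.last_updated := SYSDATE;
END;
/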

  • Default values

A second reason this issue can occur is another database feature, default column values.  When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_DATE column could default to SYSDATE, or a flag column to Y or N.  Default values are only applied by the database when a user inserts a new record without supplying a value for the specific column.  The database in this case will populate the column with the default value.

As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.
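Again purely as an illustration, defaults as innocuous as the following (a hypothetical table) will trip the insert-commit-update-commit scenario if the insert doesn't supply values for these columns:

CREATE TABLE audit_example (
  id           NUMBER PRIMARY KEY,
  created_date DATE DEFAULT SYSDATE,    -- applied when the INSERT omits this column
  flag         VARCHAR2(1) DEFAULT 'N'
);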

  • Instead-of trigger views
I must admit I haven't double-checked this scenario but it seems plausible: the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view).  A view in the database is based on a query, and depending on the query's complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to not only define the view query, but a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes.
While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record via the instead-of view, the record and its data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.
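Though as admitted unverified, a hypothetical instead-of trigger against the HR schema shows the potential for mischief, with the trigger free to transform the incoming values (the view and trigger names are made up):

CREATE OR REPLACE TRIGGER emp_summary_v_ioi
INSTEAD OF INSERT ON emp_summary_v
FOR EACH ROW
BEGIN
  -- What ADF later reads back is not necessarily what it wrote
  INSERT INTO employees (employee_id, last_name, email, hire_date, job_id)
  VALUES (:new.employee_id, UPPER(:new.last_name), :new.email, SYSDATE, 'SA_REP');
END;
/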

Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of 1 developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record.

At this point, under project timeline pressures, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping a database trigger or default value that an existing Oracle Forms application assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers will recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise, not ideal.

Solving the mystery once and for all

Luckily ADF has built-in functionality to deal with this issue, and it's no surprise, as Oracle as the author of ADF also built the database, and is fully aware of the Oracle database's feature set.  At the Entity Object attribute level sit the Refresh After Insert and Refresh After Update properties.  Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record.  Thus next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

[Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out the above solution will not work for instead-of-trigger views as it relies on the SQL RETURNING clause, which is incompatible with this type of view]

Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off the original issue will return.
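For reference, in the entity object's XML these options surface as attribute-level properties along these lines (a sketch with the property names quoted from memory; the column is hypothetical and other attribute metadata is omitted):

<Attribute
  Name="LastUpdated"
  ColumnName="LAST_UPDATED"
  RetrievedOnInsert="true"
  RetrievedOnUpdate="true"
  ChangeIndicator="true"/>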

Thursday Mar 15, 2012

How competent in Java do I need to be for ADF?

I recently received the following question via email:

"Chris - what competency level in Java does a developer need to have in order to develop medium to complex ADF applications?  Looking forward to your future postings."

This is a common question asked of ADF and I think a realistic one too as it puts emphasis on medium to complex developments rather than simple applications.

In my experience a reasonable answer for this comes from Sten Vesterli's Oracle ADF Enterprise Application Development - Made Simple:

Getting Organized - Skills required - Java programming

"Not everybody who writes needs the skills of Shakespeare. But everybody who writes need to follow rules of spelling and grammar in order to make themselves understood.

All serious frameworks provide some way for a programmer to add logic and functionality beyond what the framework offers. In the case of the ADF framework, this is done by writing Java code. Therefore, every programmer on the project needs to know Java as a programming language and to be able to write syntactically correct Java code. But this is a simple skill for everyone familiar with a programming language. You need to know that Java uses { curly brackets } for code blocks instead of BEGIN-END, you need to know the syntax for if-then-else constructs, and how to build a loop and work with an array.

But not everyone who writes Java code needs to be a virtuoso with full command of inheritance, interfaces and inner classes."

Sten's book is a recommended read for teams looking to commence large ADF projects.

From my own experience it's hard to comment on the specifics of every project; what constitutes medium to complex requirements for one ADF team may be complex to "yeeks!" for another. But having worked as an independent ADF developer for several years, I'm willing to share the level of Java skills I think is required.

In addressing the question I think a good way is to look at the Java SE and Java EE certification exams, what topics they cover, and note which topics I think are valuable. Before doing this readers need to note that JDeveloper, at the time this blog was written, still runs on Java SE 1.6 and Java EE 1.5. However I'm going to link to the later Java SE 1.7 exams, as that'll increase the lifetime relevance of this post. Note those exams are currently beta so subject to change, and the list of topics I've got below might not be in the final exams.

As such from the Oracle Certified Associate, Java SE 7 Programmer I certification exam topics, in my honest opinion ADF developers need to know *all* of the following topics:
  • Java Basics
  • Working with Java Data Types
  • Using Operators and Decision Constructs
  • Creating and Using Arrays
  • Using Loop Constructs
  • Working with Methods and Encapsulation
  • Working with Inheritance
  • Handling Exceptions
I might have a few people argue with me on the list above, particularly inheritance and exceptions. But in my experience ADF developers who don't know about inheritance, and in particular type casting, as well as exception handling in general, will struggle.  In reality all of the topics above are Java basics taught to first-year IT undergraduates, so nobody should be surprised by the list.

When we move to the Java SE 7 Programmer II exam topics, the list is as follows.  You'll note the numbers next to each topic: 1 being mandatory, 2 not mandatory but knowledge in this area will certainly help most projects, and 3 not required; the topics marked with a star are discussed separately below.
  • 1- Java Class Design
  • 1- Java Advanced Class Design
  • 1 - Object-Oriented Principles
  • 2 - String Processing
  • 1 - Exceptions
  • 3 - Assertions
  • 2 - Java I/O Fundamentals
  • 2 - Java File I/O
  • 1 - Building Database Applications with JDBC
  • * - Threads
  • * - Concurrency
  • * - Localization

In the #1 list there are no surprises except maybe JDBC. From my own personal experience, even though ADF BC and EJB/JPA abstract away from knowing the language of the database, at customer sites I've frequently had to build solutions that need to interface with legacy database PL/SQL using JDBC. Your site might not have this requirement, but the next site you work at probably will.
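As a tangible (if hypothetical) example of the kind of JDBC bridging code I mean, here's a sketch of calling a legacy PL/SQL procedure from an Application Module; legacy_pkg.recalc and the method name are made up:

import java.sql.CallableStatement;
import java.sql.SQLException;
import oracle.jbo.JboException;
import oracle.jbo.server.DBTransaction;

// Inside a custom ApplicationModuleImpl subclass
public void recalcBooking(int bookingNo) {
    DBTransaction txn = getDBTransaction();
    // The second argument is the row prefetch size, unused for a procedure call
    CallableStatement stmt =
        txn.createCallableStatement("BEGIN legacy_pkg.recalc(?); END;", 0);
    try {
        stmt.setInt(1, bookingNo);
        stmt.execute();
    } catch (SQLException e) {
        throw new JboException(e);
    } finally {
        try { stmt.close(); } catch (SQLException e) { /* ignore close failures */ }
    }
}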

The #2 list is more interesting. String processing is useful because without some internal knowledge of the standard Java APIs you can write some poorly performing code. Java I/O is not an uncommon requirement either, such as being able to read/write uploaded/downloaded files on WLS.

As for the #3 list, assertions simply don't work in the Java EE world that ADF runs in.

Finally the topics marked with stars require special explanation. First, localization, often called internationalization, really depends on the requirements of your project. For me sitting down in Australia, I've never worked on a system that requires any type of localization support besides some daylight saving calculations. For you this requirement might be totally the opposite if you sit in Europe, so as a requirement it depends.

Then the topics of threading and concurrency. Threading and concurrency are useful topics only because there "be demons in thar" (best said in a pirate voice) for future Java projects. ADF actually isolates programmers from the issues of threading and concurrency. This isolation is risky as it may give ADF programmers a false belief they can code anything in Java. You'll quickly find issues of thread safety and collection classes that support concurrency are a prime concern for non-ADF Java solutions.

So do you need to be an expert Java programmer for ADF? The answer is no, but a reasonable level of Java is required. To cap this off, the more Java you know the better, and not just for your ADF project! Java remains in my opinion a popular language and something to have on your resume (or is that LinkedIn profile these days?).

Tuesday Mar 13, 2012

ADF EMG at Collaborate 2012

I'm happy to announce the ADF EMG will have sessions at this year's Collaborate conference in Las Vegas April 22-26th 2012.  This is the first time the ADF EMG has presented at Collaborate.

Chad Thompson, Chris Ostrowski and Penny Cookson will be leading the charge presenting the following topics on the Wednesday:

1) ADF: A Path to the Future for Dinosaur Nerds - Penny Cookson - Session 173 - Wednesday 11:00am-12:00pm
2) Getting Started with ADF - Chad Thompson - Session 655 - Wednesday 1:00pm-2:00pm
3) JDeveloper ADF and the Oracle Database - Friends Not Foes - Session 172 - Wednesday 3:00pm-4:00pm
4) ADF + Faces: Do I Have to Write ANY Java Code - Session 164 - Wednesday 4:15pm-5:15pm

Penny Cookson won best paper for presentation 3 at the Aussie AUSOUG Perth conference in 2011, so the calibre of speakers here is high and the sessions well worth attending.  Even if you can't make the sessions it would be great if you could just pop your head in and say hi & thanks to these speakers for presenting at Collaborate.

Note the above session times are subject to change, you can find more information here.

If anybody is interested in ADF EMG speakers presenting at their conference, please let an EMG representative know so we can see what we can arrange.

Friday Mar 09, 2012

ADF Runtimes vs WLS versions as of JDeveloper 11.1.1.6.0

The following blog post attempts to give Oracle WebLogic Server (WLS) administrators and Oracle Application Development Framework (ADF) customers some guidance on the pairing of ADF Runtime versions to WLS, in order to assist future planning and project management.

The blog post discusses two different branches of Oracle's JDeveloper, namely the 11.1.1.X.0 branch including versions 11.1.1.1.0 through the current 11.1.1.6.0, and separately the 11.1.2.X.0 branch including 11.1.2.0.0 through the current 11.1.2.1.0.  In reading this post readers must be clear on the two different branches.

The recent Oracle JDeveloper 11.1.1.6.0 release shows a small change in Oracle's pairing of ADF Runtime versions to WebLogic Server which WLS administrators should be aware of.

Since the inception of JDeveloper 11g each new release has required a new version of WLS too.  For example:

  • ADF Runtimes 11.1.1.1.0 required WLS 10.3.1
  • ADF Runtimes 11.1.1.2.0 required WLS 10.3.2
  • ADF Runtimes 11.1.1.3.0 required WLS 10.3.3
  • ADF Runtimes 11.1.1.4.0 required WLS 10.3.4
  • ADF Runtimes 11.1.1.5.0 required WLS 10.3.5

This "history" is articulated in summary form in the 11.1.1.6.0 Certification and Support Matrix under the Application Server heading.

Note with the release of JDeveloper 11.1.1.6.0 there is a subtle change in the ADF Runtime to WLS version pairing.  The latest ADF Runtimes 11.1.1.6.0 can run against WLS 10.3.6 and 10.3.5.  This is the first time in the 11.1.1.X branch we've seen a version run on two versions of WLS.  As such if you have a 10.3.5 WLS server or have just installed WLS 10.3.6 you can also happily install the 11.1.1.6.0 ADF Runtimes on either.

Customers need to be careful though as this does not imply the opposite.  If you install WLS 10.3.6, only the ADF Runtimes 11.1.1.6.0 are certified, the 11.1.1.5.0 ADF Runtimes are not (though the 11.1.1.5.0 ADF Runtimes are still of course certified against WLS 10.3.5).

While I'm not in a position to comment publicly in detail on future JDeveloper versions beyond those revealed in the roadmaps at OOW, in terms of future releases in the 11.1.1.X.0 branch you should see this trend continue (note the italics on "should", there's no guarantees), namely the 11.1.1.6.0+ runtimes running on both WLS 10.3.5 and WLS 10.3.6.  Obviously to customers having some indication of the trend here is useful, as in previous releases customers had to build a new set of WLS servers for each JDeveloper 11.1.1.X.0 release which was considerable effort.

On considering the other 11.1.2.X.0 branch of JDeveloper, as per its Certification and Support Matrix, the current requirement is a WLS 10.3.5 server with the ADF Runtimes 11.1.1.5.0 installed and an ADF Runtime 11.1.2.X.0 patch applied over the top.

Observant readers referring to Oracle's roadmap from OOW will note the upcoming 12c JDeveloper release.  There are no specifics I can give on versions and release dates at all, but it is reasonable to say the ADF 12c runtimes will only run on WLS 12c, not WLS 10.3.X.  There is no information available beyond the general release numbers, so readers should not assume any of the existing or future WLS 12c versions will be satisfactory at this time for ADF 12c - essentially this is to-be-advised at the official release.  The only thing to take from this last paragraph is the 12c release of JDeveloper will require a new stripe of 12c WLS servers, which should assist your future planning efforts if you wish to move to that platform when available.

For customers interested in Fusion Middleware (FMW) including SOA Suite etc over ADF, note the same rules apply across the board. However I recognize my reader base is mostly ADF developers thus my focus on the ADF Runtimes.

If there's anything unclear in the explanation or in the Certification and Support Matrixes please leave a comment and we'll endeavour to rectify this.

Thanks to Brian Fry for his assistance on this blog post.

Thursday Mar 08, 2012

Who is afraid of the big bad "MVC"?

Okay, the title is tongue in cheek and not meant to stir anyone's blood. The quality of writing a good blog is among other things attracting readership with a catchy title.

Recently on the ODTUG LinkedIn group there was a thread entitled How many of you are developing in Oracle Forms and Reports?.  The thread encompasses a number of answers including a discussion on ADF, which initially piqued my interest.  Within that thread the following comment was posted:

"With respect to the ADF environment...I've been to a couple of workshops and what I see missing is the discussion of the impact of the decisions you make when starting a new ADF application. I'll bet the MVC based IDE makes a whole lot of sense to a Java developer but from the Forms side looking a J-Dev is like looking at the Rosetta Stone...where the heck do you start? And why? When building an ADF application why do you pick some components and not others? Is there a specific set that I should always pick? If I miss one can I go back and add it? Why is it in one tutorial I can move a field on the form to another spot but in another I can't move the field anywhere...it's stuck to the frame (if that's what it's called). Mind you....this is just the tip of the iceberg."

Now I've just recently started wearing the hat of an Oracle Product Manager for ADF.  And it's possible you'll be thinking straight away "here we go, the brainwashing has done its job, Chris is going to start rabbiting on about the virtues of ADF".

Not today I'm afraid.

What I wanted to do instead was address the comment about MVC. I don't intend to pick on the original poster and his opinions. I'd just like to take the ideas and run with them, and express some opinions of my own (which aren't necessarily sanctioned by Oracle Corp either!).

What I'm specifically worried about is that I think Oracle Forms programmers will be doing themselves a disservice to discount MVC.

Why?
  1. Because Oracle Forms is MVC-like
  2. The general principles of MVC will assist Forms programmers too
  3. From experience the best Oracle Forms solutions I worked on were working towards an MVC ideal

Firstly let's consider my point that "Oracle Forms is MVC-like".  MVC dictates loose coupling and separation of concerns of the business/model logic from the view (user interface) logic.  Oracle Forms as a framework built into the IDE attempts to present this same concept, to a greater or lesser degree.  Blocks and items represent the model, and the canvases represent the view.  Now it's not the most ideal MVC solution to be sure, but you can see Oracle's original Forms designers had an inkling of MVC in their thoughts.

What I'm *not* surprised about is that the original Forms designers didn't implement a full MVC solution for customers.  While MVC is an old concept going back to 1979 from Xerox PARC and Smalltalk, it really didn't take off until relatively recently (I'll take a punt and say it was the explosion of J2EE just before 2000, though I have no evidence to support this).  As such there probably was no directive within Oracle to "build something MVC-like so developers can create a UI for our database".

Yet even though it appears MVC only recently "had its day in the sunshine", it shouldn't be pegged to the Java arena only; by my reckoning it's a popular concept. And on the note of popularity we'll return to the original poster's comments:

"I'll bet the MVC based IDE makes a whole lot of sense to a Java developer but from the Forms side looking a J-Dev is like looking at the Rosetta Stone"

Here's the thing.  MVC is not limited to the realms of Java (or Smalltalk for that matter).  .Net has it.  PHP has it in oodles. Ruby on Rails is based on it.  It appears to be pretty popular all round.   So I'm not sure there are grounds for dismissing it as a Java peculiarity; there must be something worthy about it if it's used so widely.

This doesn't mean of course MVC is the be-all-and-end-all of architectural solutions.  Indeed Google "Disadvantages of MVC" to find the counter arguments, and there are even variations of it such as MVVM.  But to just place MVC in the Java camp and close yourselves to its popularity is to not give yourself a chance to learn the pros and cons of MVC, and maybe grab some of its benefits rather cheaply for your own use.

This leads onto my second point, that "the general principles of MVC will assist Forms programmers too".  Where Forms fails in the eyes of MVC is that it too easily allows developers to write tightly coupled code where the model and view layers combine logic (in other words, poor separation of concerns).  The fact that Forms' triggers for UI items reside at the model layer (i.e. the block) means developers can easily intermingle business/model logic with the UI representation.  For example a WHEN-VALIDATE-ITEM trigger with an enforced business rule that turns the erroneous field red on a violation shows such tight coupling.  Why should the business rule code care that the field must be red?  What if the user is color blind and we need to change the color to blue; we're going to need to change the business rule code to fix a color.  This coupling is a problem, a small one admittedly, but still a problem, especially if we need to change the color in hundreds of WHEN-VALIDATE-ITEM triggers.  Solution: don't couple your UI and business logic.
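To make the coupling concrete, a hypothetical WHEN-VALIDATE-ITEM trigger of this kind might read as follows (the block, item and visual attribute names are made up):

-- WHEN-VALIDATE-ITEM on BOOKINGS.COST
BEGIN
  IF :bookings.cost < 0 THEN
    -- The business rule and its UI presentation intermingled in one trigger
    SET_ITEM_PROPERTY('bookings.cost', VISUAL_ATTRIBUTE, 'ERROR_RED_VA');
    MESSAGE('Cost must not be negative');
    RAISE form_trigger_failure;
  END IF;
END;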

By the way let's be very clear here.  I'm not saying that's wrong all the time, indeed in an application made up of 1 Oracle Form with 1 piece of code, who cares. What I am doing is highlighting this is wrong from the MVC point of view.  But again why should you care?  What can Forms programmers learn from MVC?

This question leads onto my third and final point, "from experience the best Oracle Forms solutions I worked on were working towards an MVC ideal", whether the programmers knew it or not.

One of the common problems tackled at Oracle Forms sites, where Forms has been a proven technology for years, is that the complete system (Forms, database and all) needs to be extended with other technologies.  Today organizations don't rely on just Oracle Forms, but a combination of technologies to maintain and integrate with their systems.  Maybe a .NET team implemented an ASP solution to backend the same database the Forms system used.  Maybe a group of keen graduates implemented part of the solution in Ruby on Rails.  From an Oracle perspective maybe even an APEX or ADF or SOA solution has been put in place.  Point being, Oracle Forms is only one of the moving parts in the overall system architecture.

Now there's an inherent problem in the WHEN-VALIDATE-ITEM Oracle Forms example I gave before.  Regardless of the technology used to update the underlying database table relating to the Oracle Form's block in the previous example, the business rule must still apply.  Hmm, that's a clincher.  We may have a .NET solution, a Ruby on Rails package and even our Oracle Forms system; they all need to implement that same business logic.  But the business rule is stuck in our Oracle Forms solution.  What do we do? Do we copy the same logic into every platform, and then if a change is required have to duplicate that change across all platforms?

Doesn't sound ideal does it?  And this is where experienced Forms customers come to the fore, because sites I've visited have strict guidelines on separating out the Forms UI logic from the business logic, where the business logic goes into PLSQL PLLs attached to the Forms, or more ideally down into the database where it can be used by anyone who can connect.

Now putting the logic in the database is a whooooole separate set of arguments (which has yet to be resolved and I don't even want to go there, but Google the concepts of "thin vs thick databases" or "thin vs thick middletier" if you're interested, or maybe even the idea of "service oriented architecture").  Yet the fact that the business logic is decoupled from the UI logic is what I want to focus on.  Experienced Oracle Forms sites will be familiar with the pattern I just described.  In turn Oracle Designer developers will be familiar with the Table API approach.  Indeed many Forms programmers will already understand the what-and-why of this approach.  By design you were separating concerns and creating a looser coupling, which enables greater reuse.  And it's those ideas MVC teaches.

And it's at this last point I'd like to finish.  The concepts of MVC aren't something Forms programmers should isolate themselves from.  Your tool of choice and your experience have probably led you down the MVC path, choosing the same ideals, all by your own effort and reckoning.

In a previous life as a software developer and consultant, on occasion I'd sit down with a client who would describe a neat solution they had come up with all by themselves, and I'd reply "oh, that's similar to [insert solution X here]".  Kudos to you.  You've worked out a best practice without guidance, by your own effort and brawn.  Maybe it's time to read some more on MVC to see what else you could learn from it?

Monday Mar 05, 2012

Running Oracle's ADF Faces Skin Editor under Mac OS

Last year I bought my first Mac and have been slowly learning the ins and outs of Mac OS. My failsafe when I can't get something to work has been to drop to Windows running under an Oracle VirtualBox guest VM. But over time I've succeeded in getting most things running under Mac OS.

Today's challenge was running Oracle's ADF Faces Skin Editor 11.1.2.1.0 natively under Mac OS 10.7 Lion. As a result I've documented a couple of minor issues I overcame here for my own notes; hopefully they're useful to you too.

The generic instructions for installing the 11.1.2.1.0 Skin Editor can be found here; ensure you follow the Mac installation section.

Yet I hit three snags during the installation:

1) The default process prompts you for the location of the 1.6.0 JDK required for 11.1.2.1.0. Under later versions of Mac OS finding the location has become a little difficult because by default Mac OS now attempts to hide the Library directory from you. The following StackOverflow post gave me the location:

/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

2) On entering this location the Skin Editor still failed to start, stating "Running Skin Editor under a JRE is unsupported".  This error is incorrect as we're correctly pointing at a JDK. Luckily the resulting error tells you the solution, namely to place the following flag in the <install-dir>/skineditor/bin/skineditor.conf file:

SetSkipJ2SDKCheck true


3) Finally, when the Skin Editor started natively, virtually no toolbar buttons, menus or windows were displayed (making it a little hard to use):

The solution via Kevin Angus in the OTN Forums was to include the following additional line in the skineditor.conf:

AddVMOption -Dapple.laf.useScreenMenuBar=true

Voila! 

Wednesday Feb 29, 2012

Free Advanced ADF eCourse now available

A common past complaint on the ADF EMG is that it's hard for members to attend Oracle Open World and other user group events.  This means you can miss out on ADF presentations from Oracle staff which often have all those little tips and tricks, in-depth discussions and personal war stories that make learning and using the product easier.

I'm happy to say Oracle has taken steps to solve this by releasing a free Advanced ADF eCourse with combined content from many presentations.  This online course is available to everyone regardless of where you're located on the big round blue-brown thing.

If ADF users are interested in other high level training courses beyond the how-do-I-get-this-poplist-to-work type questions, feel free to suggest ideas to your favourite product manager or post something similar on the ADF EMG.

Monday Feb 27, 2012

Minimising the Impact of Data Model Changes in ADF Application Deployment

In the complete lifecycle of an ADF application backed by a database, it's not uncommon for the data model to change. New columns are added to tables, datatypes are expanded; there are many changes that can take place in the database. Yet as the database is core to the overall application, such small changes ripple up the three-tier stack, having a wider impact. This is as true for ADF applications as for any other database-centric technology, as the change causes disruption to the model layer (e.g. ADF Business Components) and the view-controller layers (e.g. ADF Faces RC).

Depending on your ADF application deployment setup, building and deploying your application can already take a considerable time. For data model changes as small as an additional column included in an ADF BC Entity Object (EO), it certainly will be undesirable to have to go through another large build and deploy exercise for what amounts to a single new field on the screen.

This raises the obvious question: can we architect our ADF applications in such a manner as to minimize the impact of data model changes on the build and deployment of our application?

This challenge was put to me in my first few days at Oracle.  The following post describes one such solution I came up with using ADF libraries and WebLogic Server shared libraries.  Hopefully I passed the "give the new employee something difficult to do" test but I'm sure readers will set me straight regardless ;-)

Why can't ADF automatically detect this change?

One argument that comes up from time to time is that ADF should be able to automatically detect such schema changes and run with them. Surely something as simple as an additional table column for example could be added to the ADF Business Components and JSF pages dynamically at runtime?

The problem with this is unless we're writing some sort of database-to-web query tool like Oracle's SQL Developer, where you want to see all the columns in any table regardless, dynamically changing to take into account any database change is a dangerous proposition for an application. Imagine if the table EMPLOYEES added a Blob column allowing up to 4GB images to be stored against each employee with their latest favourite pic. Should all ADF applications showing employee data automatically make use of this Blob column even if our application doesn't want to show the employee's portrait? Can our servers handle loading 4GB worth of data for each employee?

The answer is obviously no: we could easily break our application's ability to scale, and in many cases we don't even want to show the employee's picture anyhow. As such it's prudent at design time to accommodate database changes into our application on a case-by-case basis, rather than allowing our application to dynamically evolve.

Angels in the Architecture

Last year for my previous employer I had the fortune to present at Open World on ADF architectural blueprints that I had observed at different sites (See: Angels in the Architecture). The presentation explored 6 architectural patterns, of which the 3rd, known as the "Master Application-Multi Bounded Task Flow Application" (abbreviated to Master-App-Multi-BTF-App), presented the following application composition:


From the diagram we can see the overall application is broken into several JDeveloper workspaces:

  • One Common ADF BC Application Workspace - containing the majority of reusable ADF Business Components
  • One to many BTF Application Workspaces - each containing BTFs that mimic the user tasks of the system, dependent on the ADF BC Workspace common components through an ADF Library.
  • One Master Application - essentially the composite application that brings the BTF and ADF BC workspaces together into a presentable whole, again dependent on the individual ADF Libraries.

ADF Libraries and the Resource Palette are key to this architectural pattern. While this pattern splits the application into separate workspaces, it doesn't dictate a deployment model. By default when you add ADF Libraries to another application's projects, the destination application's WAR profile is updated as follows:


In the example above the three ADF Library JARs have been included for deployment with the main application's WAR, and as a result will be deployed in the overall EAR file for the application. This is ideal from a simplistic deployment point of view, a build-and-deploy-everything approach. But it doesn't satisfy our requirement to not build and redeploy the whole application if a simple database change occurs.

Using WLS Shared Libraries with ADF

A potential solution which has been documented before (See: Andrejus Baranovskis's blog Deploying ADF Applications as Shared Libraries on WLS) makes use of deploying ADF Libraries separately as Shared Libraries to WLS. Without unnecessarily reiterating the current documentation, the basic steps are:

- For the application workspace to be shared -

1) In the application workspace create a separate custom project

2) Add the ADF Library for the workspace to the new project via the Resource Palette

3) Add a WEB deployment profile to the project

4) Set the context-root to empty

5) Add a MANIFEST.MF file with the following options:

Manifest-Version: 1.0
Created-By: <author>
Implementation-Title: <module title>
Extension-Name: <module package name>
Specification-Version: <version>
Implementation-Version: <version>
Implementation-Vendor: <author>

6) On deployment via JDev or the WLS console ensure to select the Deploy as Shared Library option


- For the application workspace that's consuming the ADF Library -

If the consumer workspace is created as an ADF Library itself (to be further consumed by another module), you need to:

1) Follow the previous steps for a workspace to be shared

2) Add a weblogic.xml file under WEB-INF

3) Add a library-ref entry referencing the shared library's Extension-Name (see the weblogic.xml sketch after these steps)

If the consuming workspace is the final application, you need only do the previous steps 2 and 3 plus the following step:

4) In the WAR profile uncheck the attached ADF libraries
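For reference, the weblogic.xml from steps 2 and 3 might look as follows, here referencing the emp.taskflows Extension-Name that appears later in this post (a sketch; adjust the library name and version to your own modules):

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <library-ref>
    <library-name>emp.taskflows</library-name>
    <specification-version>1</specification-version>
  </library-ref>
</weblogic-web-app>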

Example Application

The following zip file provides a demonstration application built in JDeveloper 11.1.2.1.0, based on 3 shared libraries, using the Oracle HR database schema.


To test this setup you must have the Oracle HR database schema available to you, a JDeveloper Resource Palette file connection to the "libs" directory as extracted from the zip file, and a preconfigured connection to your WLS server of choice.

In order to show the ADF Libraries working as Shared Libraries, follow these steps:

1) Start your WLS server

2) Ensure a data source is configured on the server matching that used by CommonModel

3) In JDeveloper open all 4 workspaces

4) In the CommonModel workspace:

4.1) Deploy the ADF Library for the Model project ... this will write the ADF Library to the libs directory above

4.2) Deploy the SharedLibs project to your WLS server as a shared library

5) Repeat the previous steps 4.1 and 4.2 for the DeptTaskFlows and EmpTaskFlows workspaces

6) Deploy the MasterApp EAR to the server

7) Access the application via http://<wls-host>:<port>/MasterApp/faces/Splash

8) Within the application press each button to see each BTF in action

Now that we've deployed and tested the existing application, we'll investigate a scenario with a data model change:

9) In the database add a new VARCHAR2 column named TEST to the EMPLOYEES table

10) In the associated CommonModel ADF BC Employees Entity Object and Employees View Object add the new database column as an attribute

11) Deploy the ADF Library for the Model project

12) Open the EmpTaskFlows workspace

13) Refresh the Data Control palette

14) Locate and open the EditEmp.jsf in the ViewController project

15) Add the new VO attribute Test to the page via the Data Control Palette

16) Deploy the ADF Library for the ViewController project

At this point we want to upload the new CommonModel and EmpTaskFlows to the server, so let's try the following:

17) Deploy the CommonModel and EmpTaskFlows SharedLibs projects to the server

During this operation the 2nd one will fail with the following error message:

[03:52:21 PM] Weblogic Server Exception: weblogic.deploy.event.DeploymentVetoException: Cannot undeploy library Extension-Name: emp.taskflows, Specification-Version: 1, Implementation-Version: 1.0.0 from server DefaultServer, because the following deployed applications reference it: MasterApp.war
[03:52:21 PM] See server logs or server console for more details.
[03:52:21 PM] weblogic.deploy.event.DeploymentVetoException: Cannot undeploy library Extension-Name: emp.taskflows, Specification-Version: 1, Implementation-Version: 1.0.0 from server DefaultServer, because the following deployed applications reference it: MasterApp.war
[03:52:21 PM] Deployment cancelled.

While WLS wasn't smart enough to enforce the indirect dependency on CommonModel, it did enforce the direct dependency on EmpTaskFlows, as the still-running MasterApp references it.

The solution is to temporarily stop the MasterApp, then attempt the deployment again. Once finished, restart the MasterApp and all should be fine.
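
If you'd prefer to script this rather than click through the console, a rough sketch using the standard weblogic.Deployer tool follows; the admin URL, credentials and shared library WAR path are placeholders to substitute for your own environment:

java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 -stop -name MasterApp
java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 -deploy -library /path/to/EmpTaskFlowsSharedLib.war
java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 -start -name MasterApp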

Now when we access the application and navigate to the EmpTaskFlow we can see the change come through.

A copy of the final application can be downloaded here.

Conclusion and Final Thoughts

The key point to take from the example is that even though we changed the base CommonModel, on which all the modules directly or indirectly depend, it was not necessary to redeploy every module to pick up the change. Instead we redeployed only the CommonModel and EmpTaskFlows modules where the changes occurred. Our goal has been met.

There is one potentially undesirable issue with the above solution: we need to stop the MasterApp to achieve the redeployment. For a high availability site this isn't ideal (read: understatement).

This raises the question: can we use the WebLogic Server Production Redeployment feature to avoid having to stop the application? According to the documentation section Restrictions for Updating J2EE Modules in an EAR:

"If redeploying a single J2EE module in an Enterprise application would affect other J2EE modules loaded in the same classloader, weblogic.Deployer requires that you explicitly redeploy all of the affected modules."

...it would appear the only solution here is to redeploy all the parts of the updated application to the server, which defeats the point of the whole exercise.

With this limitation in mind I'll research this further for customers and post any solution here in the future. Of course, if you don't have such HA requirements then the current solution is satisfactory.

Thursday Feb 23, 2012

Classifying ADF Task Flow Navigation Choices

Writing the Angels in the Architecture: An ADF Application Architectural Blueprint presentation in 2011 spawned a number of side projects which I had scribbled down but taken no further. Starting at Oracle has given me a little more time to rummage through my notebooks and turn these ideas into blog posts that will hopefully help others.

In the Angels in the Architecture presentation there was an in-depth look at how Bounded Task Flows (BTFs) in JDeveloper 11g+ could be placed in their own workspaces and published as ADF Libraries for reuse in a master composite ADF application. When consuming the BTFs in the master application, it isn't uncommon to wrap the consumed BTFs in a parent composite BTF that brings the moving parts together. This is truly one of the delights of BTFs, the ability to shuffle the bits around like Lego to build any application you want.

It was in this composition that I discovered another interesting area of BTFs yet to be documented, that of the different navigation models used beyond just the concepts of Unbounded Task Flows (UTFs) vs Bounded Task Flows (BTFs). This blog post takes a stab at describing the different models. It shouldn't be considered complete, just a starting point to help you understand the options, and a chance for me to change my scribbled notes into something more substantial.

Unbounded Task Flows vs Bounded Task Flows

Of course for ADF beginners it's worth going over the basics and describing the characteristics of Unbounded Task Flows (UTFs) and Bounded Task Flows (BTFs).

Unbounded Task Flows, of which every application has at least one, comprise the main page flow of your application. Whether you're building an application with many separate pages, each with their own URL, or a single-page desktop-like application with portals/regions, you'll have a UTF.

In terms of navigation an example UTF looks as follows:


The navigation characteristics of a UTF, many of which have been documented before, include:

  • There is no set start or end to the UTF (thus the name "unbounded"); the user can enter the application at any activity.
  • Navigation is a combination of user free-form and design-time structured navigation (explained next).
  • Free-form allows the user to access any view activity via a URL.
  • Because of the free-form navigation model, the minimum number of steps to get to any view activity is 1.
  • Isolated activities are still accessible thanks to their URLs.
  • Structured allows developers to optionally define uni- or bi-directional navigation between nodes.
  • Wildcards provide a uni-directional leap from a source activity to a defined destination activity.
  • The UTF has no defined exit points for the user. In fact every activity is an exit point; the user can leave the application at any point.

Bounded Task Flows take a more constrained approach to navigation:


The characteristics of navigation within BTFs include:

  • As the name suggests, they're bounded, with one entry point and one or more exit points for the user.
  • There is no free-form navigation, all navigation (both uni and bi-directional) must be through predefined navigation rules or wildcards.
  • You cannot access any activity inside the BTF by an addressable URL.
  • Because of the structured navigation model, the minimum number of steps to get to any activity within the BTF is dictated by the developer (unlike the free-form nature of UTFs).
  • Isolated nodes are inaccessible.

Inter-Task Flow navigation - task flow calls and regions

Before we investigate task flow navigation further, readers need to be familiar with the two mechanisms for task flows to call each other:

1) To call a task flow based on pages we must use a task flow call

2) To call a task flow based on page fragments, we must embed the page fragment task flow as a region in a page or another page fragment.

Note how I use the term task flow here rather than Unbounded or Bounded Task Flows. The mechanisms for the different types of task flows to call each other are the same across both.

Point 2 above is an interesting one, as the idea of embedding brings us to the idea of the "stack".
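
To make the two mechanisms concrete, here are minimal sketches using hypothetical flow and binding names. The first shows a task flow call activity in the calling task flow's definition; the second shows a fragment-based BTF embedded via a taskFlow executable in the consuming page's page definition and surfaced with an ADF Faces region:

<!-- Mechanism 1: a task flow call activity in the calling task flow -->
<task-flow-call id="callEmpFlow">
  <task-flow-reference>
    <document>/WEB-INF/emp-flow.xml</document>
    <id>emp-flow</id>
  </task-flow-reference>
</task-flow-call>

<!-- Mechanism 2: a taskFlow executable in the consuming page's page definition -->
<taskFlow id="empFlow1" taskFlowId="/WEB-INF/emp-flow.xml#emp-flow"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"/>

<!-- ...surfaced in the page or fragment via a region -->
<af:region value="#{bindings.empFlow1.regionModel}" id="r1"/>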

Stack navigation

At its simplest "stack navigation" is when one task flow calls another without terminating the first:


To be precise, stack navigation occurs when:

  • A source task flow calls a destination task flow
  • Control is passed to the destination task flow until it terminates
  • Upon which control is passed back to the source/caller
  • During the stack the state of the source task flow is persisted
  • The state of the destination task flow exists only for its own lifetime

The easy analogy here for developers to understand is the 3GL equivalent of functions calling functions.

Of course the "stack" model can be extended and we can have a set of task flows calling each other in a deep stack:

Some points on the stack:

  • It's suitable for both page and page fragment task flows.
  • Task flow calls and returns are what allow the stack to grow and shrink.
  • As we progress deeper into the stack, because the previous task flows are still live and their state is stored in memory, we consume more memory.
  • On returning to a previous item in the stack, its state is restored intact without modifications needed.
  • It is well suited to logical drill up/down solutions.
  • There are no shortcuts out of the stack.
  • It's messy at design time to reorganize the stack if we get the stack order wrong.
  • Task flow parent actions or contextual events to manipulate the calling task flow are not possible.
  • Task flow calls allow a terminating task flow to pass parameters back to the caller (see the sketch below).
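
As a sketch of that last point, using hypothetical names: the called BTF declares a return value definition in its task flow definition, and the caller's task flow call activity maps the returned value into its own scope:

<!-- In the called BTF: declare the value to hand back on return -->
<return-value-definition id="__rv1">
  <name>selectedEmployeeId</name>
  <value>#{pageFlowScope.selectedEmployeeId}</value>
</return-value-definition>

<!-- In the caller's task flow call activity: capture the value on return -->
<task-flow-call id="callEmpFlow">
  <task-flow-reference>
    <document>/WEB-INF/emp-flow.xml</document>
    <id>emp-flow</id>
  </task-flow-reference>
  <return-value id="__rv2">
    <name>selectedEmployeeId</name>
    <value>#{pageFlowScope.returnedEmployeeId}</value>
  </return-value>
</task-flow-call>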

Network navigation

"Network navigation" is where we chain a number of task flows together in one composite master.

Relevant points of the network navigation model:

  • It is suitable for both page and page fragment task flows.
  • Navigation between flows is controlled by a master composite task flow.
  • It's very easy to reorganize the calling order in the composite task flow.
  • At most there are only two task flows on the stack, the composite and the called task flow, reducing concerns about the memory consumed.
  • If we do return to a previously visited task flow in the composite, to provide a seamless experience for the user where it appears we never left the task flow, we need to reestablish a state similar to when we left it. This may require more task flow parameters and more internal logic to re-execute previous processing.
  • Task flow parent actions or contextual events to manipulate the calling task flow are not possible.
  • Task flow calls still allow a terminating task flow to pass parameters back to the caller.
  • Better suited to logical-path or wizard-style interfaces (noting the similarity to trains).

At this point we can start to see that one of the key technical differentiators between stack and network navigation is that the stack model takes more memory (depending on its depth), while the network model takes less but may require more processing. Readers should be careful not to make an ill-formed decision here, as I've not given you any empirical evidence on which one is better or worse from an overhead point of view. As an example, if stack navigation only takes up 1k per user, who cares. But if it takes up megabytes, there's something to worry about. The actual numbers will depend on your custom solution and you need to take your own measurements to make this judgement.
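
To illustrate why the composite is so easy to reorganize, here's a minimal sketch of a master composite task flow with hypothetical names. The calling order lives entirely in the composite's control flow rules, so reordering the child flows means editing only this one file:

<task-flow-definition id="master-flow">
  <default-activity>callFlowA</default-activity>
  <!-- The composite calls each child flow in turn -->
  <task-flow-call id="callFlowA">
    <task-flow-reference>
      <document>/WEB-INF/flow-a.xml</document>
      <id>flow-a</id>
    </task-flow-reference>
  </task-flow-call>
  <task-flow-call id="callFlowB">
    <task-flow-reference>
      <document>/WEB-INF/flow-b.xml</document>
      <id>flow-b</id>
    </task-flow-reference>
  </task-flow-call>
  <!-- The calling order is defined by control flow rules alone -->
  <control-flow-rule id="__cfr1">
    <from-activity-id>callFlowA</from-activity-id>
    <control-flow-case id="__cfc1">
      <from-outcome>next</from-outcome>
      <to-activity-id>callFlowB</to-activity-id>
    </control-flow-case>
  </control-flow-rule>
</task-flow-definition>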

Hybrid navigation

Of course it's possible to have a combination of both stack and network navigation:


I won't go into the details of the pros and cons here as they are just a combination of the stack and network navigation characteristics.

Nested region navigation

"Nested regions navigation" is my name for when a page or page fragment embeds one or more separate regions to one or more separate Bounded Task Flows based on fragments. Unfortunately there's not an easy JDeveloper screenshot to describe this so we'll use a diagram instead:


The characteristics of this model:

  • The call from a region to a BTF can be thought of as a 2-level stack, but where the state of the caller and the nested region BTF run in parallel.
  • Navigation within each BTF is independent of the parent task flow and as such can be any combination of the navigation models: stack, network or hybrid.
  • The nested BTF can communicate with the parent and other nested BTFs through parent actions or contextual events (see the sketch after this list).
  • On termination of a nested BTF there is no way for the BTF to return parameters.
  • This includes the notion of inline popups containing regions within the parent page or fragment.
  • The more regions you have, the more memory and processing required for the page.
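
As a sketch of the parent action mechanism mentioned above, using hypothetical names: the nested BTF includes a parent action activity that queues an outcome in the parent task flow, and the parent handles that outcome with a matching control flow rule or wildcard:

<!-- In the nested BTF: raise an outcome in the parent task flow -->
<parent-action id="notifyParent">
  <root-outcome>employeeSelected</root-outcome>
</parent-action>

The parent task flow then needs its own navigation case for the employeeSelected outcome to decide where to go next.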

Parallel navigation

Finally, returning to the model where one task flow calls another through a task flow call, task flow calls allow BTFs based on pages to be called as either an inline popup or an external window.

The inline popup navigation is akin to the "Nested region navigation" previously described.

The external window navigation is more complicated, as it occurs in a new browser window separate from the current one, thus the title "parallel navigation". While it doesn't have a separate HTTP session, it does have its own pageFlowScope, and its operation is separate from that of the main window.

Conclusion

What can be seen from the different navigation models is that they support different user experiences, present different technical challenges, and offer different features that can be utilised in each. It's not simply an understanding of task flows and their features that ADF architects need; rather, an understanding of the different navigation models will help architects design new ADF applications.

If any readers come up with different navigation models I'd be glad to hear about them.

And we're back.....

The informal laws of blogging require every new blog to start with a special announcement "this is my first blog" so here's mine. (Phew, now the blogging police won't come after me.)

Except of course this isn't my first blog at all.  As some readers will know, this is just a continuation of my previous ADF blog on Blogger.  But now everything I say is from the other side of the fence, as an Oracle employee rather than a consultant.

Expect more ADF content soon!
