Minimising the Impact of Data Model Changes in ADF Application Deployment
By Chris Muir-Oracle on Feb 27, 2012
In the complete lifecycle of an ADF application backed by a database, it's not uncommon for the data model to change. New columns are added to tables, datatypes are expanded; many changes can take place in the database. Yet because the database is core to the overall application, even small changes ripple up the three-tier stack and have a wider impact. This is as true for ADF applications as for any other database-centric technology, as the change causes disruption to the model layer (e.g. ADF Business Components) and the view-controller layers (e.g. ADF Faces RC).
Depending on your ADF application deployment setup, building and deploying your application can already take a considerable time. For data model changes as small as an additional column included in an ADF BC Entity Object (EO), it certainly will be undesirable to have to go through another large build and deploy exercise for what amounts to a single new field on the screen.
This raises the obvious question: can we architect our ADF applications in such a manner as to minimize the impact of data model changes on the build and deployment of our application?
This challenge was put to me in my first few days at Oracle. The following post describes one such solution I came up with using ADF Libraries and WebLogic Server shared libraries. Hopefully I passed the "give the new employee something difficult to do" test, but I'm sure readers will set me straight regardless ;-)
Why can't ADF automatically detect this change?
One argument that comes up from time to time is that ADF should be able to automatically detect such schema changes and run with them. Surely something as simple as an additional table column for example could be added to the ADF Business Components and JSF pages dynamically at runtime?
The problem with this is that unless we're writing some sort of database-to-web query tool like Oracle's SQL Developer, where you want to see all the columns in any table regardless, dynamically changing to take into account any database change is a dangerous proposition for an application. Imagine if the table EMPLOYEES added a Blob column allowing up to 4GB images to be stored against each employee with their latest favourite pic. Should all ADF applications showing employee data automatically make use of this Blob column even if our application doesn't want to show the employee's portrait? Can our servers handle loading 4GB worth of data for each employee?
The answer is obviously no, we could easily break our application's ability to scale, and in many cases we don't even want to show the employee's picture anyhow. As such it's prudent at design time to accommodate database changes into our application on a case by case basis, rather than allowing our application to dynamically evolve.
Angels in the Architecture
Last year for my previous employer I had the fortune to present at OpenWorld on ADF architectural blueprints that I had observed at different sites (See: Angels in the Architecture). The presentation explored 6 architectural patterns, of which the 3rd, known as the "Master Application-Multi Bounded Task Flow Application" (abbreviated to: Master-App-Multi-BTF-App), presented the following application composition:
From the diagram we can see the overall application is broken into several JDeveloper workspaces:
- One Common ADF BC Application Workspace - containing the majority of reusable ADF Business Components
- One to many BTF Application Workspaces - each containing BTFs that mimic the user tasks of the system, dependent on the ADF BC Workspace common components through an ADF Library.
- One Master Application - essentially the composite application that brings the BTF and ADF BC workspaces together into a presentable whole, again dependent on the individual ADF Libraries.
ADF Libraries and the Resource Palette are key to this architectural pattern. While this pattern splits the application into separate workspaces, it doesn't dictate a deployment model. By default when you add ADF Libraries to another application's projects, the destination application's WAR profile is updated as follows:
In the example above the three ADF Library JARs have been included for deployment with the main application's WAR, and as a result will be deployed in the overall EAR file for the application. This is ideal from a simplistic deployment point of view, a build-and-deploy-everything approach. But it doesn't satisfy our requirement to not build and redeploy the whole application if a simple database change occurs.
Using WLS Shared Libraries with ADF
A potential solution which has been documented before (See: Andrejus Baranovskis's blog Deploying ADF Applications as Shared Libraries on WLS) makes use of deploying ADF Libraries separately as Shared Libraries to WLS. Without unnecessarily reiterating the current documentation, the basic steps are:
- For the application workspace to be shared -
1) In the application workspace create a separate custom project
2) Add the ADF Library for the workspace to the new project via the Resource Palette
3) Add a WEB deployment profile to the project
4) Set the context-root to empty
5) Add a MANIFEST.MF file with the following options:

```
Manifest-Version: 1.0
Implementation-Title: <module title>
Extension-Name: <module package name>
```
6) On deployment via JDev or the WLS console ensure to select the Deploy as Shared Library option
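As a concrete sketch, a filled-in MANIFEST.MF for the CommonModel workspace might look like the following. The Extension-Name and version attributes mirror the pattern WLS reports at deployment time; the exact values chosen here (common.model, 1, 1.0.0) are illustrative assumptions:

```
Manifest-Version: 1.0
Implementation-Title: CommonModel
Extension-Name: common.model
Specification-Version: 1
Implementation-Version: 1.0.0
```

The Extension-Name is the name consumers will reference via a library-ref, so pick something stable and unique per workspace.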
- For the application workspace that's consuming the ADF Library -
If the consumer workspace is created as an ADF Library itself (to be further consumed by another module), you need to:
1) Follow the previous steps for a workspace to be shared
2) Add a weblogic.xml file under WEB-INF
3) Add a library-ref option to the shared library Extension-Name
If the consuming workspace is the final application, you need only do the previous steps 2 and 3 plus the following step:
4) In the WAR profile uncheck the attached ADF libraries
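To illustrate steps 2 and 3, a minimal weblogic.xml for a consumer might look like the following, where the library-name must match the shared library's Extension-Name (common.model here is an assumed name for the CommonModel library; the namespace shown is the one used by WLS 10.3.x-era servers):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- Reference the separately deployed shared library by its Extension-Name -->
  <library-ref>
    <library-name>common.model</library-name>
  </library-ref>
</weblogic-web-app>
```

A consumer can list one library-ref per shared library it depends on.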
The following zip file provides a demonstration application built in JDeveloper 184.108.40.206.0, based on 3 shared libraries, using the Oracle HR database schema.
To test this setup you must have the Oracle HR database schema available to you, a JDeveloper Resource Palette file connection to the "libs" directory as extracted from the zip file, and a preconfigured connection to your WLS server of choice.
In order to show the ADF Libraries working as Shared Libraries, follow these steps:
1) Start your WLS server
2) Ensure a data source is configured on the server matching the one used by CommonModel
3) In JDeveloper open all 4 workspaces
4) In the CommonModel workspace:
4.1) Deploy the ADF Library for the Model project ... this will write the ADF Library to the libs directory above
4.2) Deploy the SharedLibs project to your WLS server as a shared library
5) Repeat the previous steps 4.1 and 4.2 for the DeptTaskFlows and EmpTaskFlows workspaces
6) Deploy the MasterApp EAR to the server
7) Access the application via http://<wls-host>:<port>/MasterApp/faces/Splash
8) Within the application press each button to see each BTF in action
Now that we've deployed and tested the existing application, we'll investigate a scenario with a data model change:
9) In the database add a new VARCHAR2 column to the employees table TEST
10) In the associated CommonModel ADF BC Employees Entity Object and Employees View Object add the new database column as an attribute
11) Deploy the ADF Library for the Model project
12) Open the EmpTaskFlows workspace
13) Refresh the Data Control palette
14) Locate and open the EditEmp.jsf in the ViewController project
15) Add the new VO attribute Test to the page via the Data Control Palette
16) Deploy the ADF Library for the ViewController project
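The database change in step 9 amounts to a single DDL statement along these lines (the column length of 50 is an arbitrary assumption; any VARCHAR2 size will do for the demonstration):

```sql
-- Add a new nullable VARCHAR2 column named TEST to the HR employees table
ALTER TABLE employees ADD (test VARCHAR2(50));
```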
At this point we want to upload the new CommonModel and EmpTaskFlows to the server, so let's try the following:
17) Deploy the CommonModel and EmpTaskFlows SharedLibs projects to the server
During this operation the 2nd one will fail with the following error message:

[03:52:21 PM] Weblogic Server Exception: weblogic.deploy.event.DeploymentVetoException: Cannot undeploy library Extension-Name: emp.taskflows, Specification-Version: 1, Implementation-Version: 1.0.0 from server DefaultServer, because the following deployed applications reference it: MasterApp.war
[03:52:21 PM] See server logs or server console for more details.
[03:52:21 PM] weblogic.deploy.event.DeploymentVetoException: Cannot undeploy library Extension-Name: emp.taskflows, Specification-Version: 1, Implementation-Version: 1.0.0 from server DefaultServer, because the following deployed applications reference it: MasterApp.war
[03:52:21 PM] Deployment cancelled.
While WLS wasn't smart enough to enforce the indirect dependency on CommonModel, it did enforce the direct dependency on EmpTaskFlows, as the MasterApp referencing it is still live.
The solution is to temporarily stop the MasterApp, then attempt the deployment again. Once finished restart the MasterApp and all should be fine.
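The same stop/redeploy/start cycle can also be scripted with the weblogic.Deployer command-line tool, as a sketch. The admin URL, credentials, deployment names and file paths below are assumptions for illustration, not values from the demo application:

```
# Stop the master application so the shared library it references can be replaced
java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 \
    -stop -name MasterApp

# Redeploy the updated ADF Library WAR as a shared library (note the -library flag)
java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 \
    -deploy -library EmpTaskFlowsSharedLib.war

# Restart the master application once the library is back in place
java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 \
    -start -name MasterApp
```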
Now when we access the application and navigate to the EmpTaskFlow we can see the change come through.
A copy of the final application can be downloaded here.
Conclusion and Final Thoughts
The key point to realize from the example is that even though we changed the base CommonModel, which is directly and indirectly related to all the modules, it was not necessary to rebuild and redeploy all the modules to pick up the change. Instead we only deployed the CommonModel and the EmpTaskFlows where the changes occurred. Our goal has been met.
There is one potentially undesirable issue with the above solution: we need to stop the MasterApp to achieve the redeployment. For a high-availability site this isn't ideal (read: understatement).
Could we use the WebLogic Server Production Redeployment feature to avoid having to stop the application? According to the section Restrictions for Updating J2EE Modules in an EAR:
"If redeploying a single J2EE module in an Enterprise application would affect other J2EE modules loaded in the same classloader, weblogic.Deployer requires that you explicitly redeploy all of the affected modules."
With this limitation in mind I'll look to further research a solution for customers in the future and post it here. Of course, if you don't have such HA requirements then the current solution is satisfactory.