Just a quick article today on ADF Controller Scopes, and specifically on ensuring that your application correctly propagates state stored in PageFlow and View scope across the cluster. This information can be found in the product doc and in Jobinesh Purushothaman's excellent book (Chapter 12 - Ensuring High Availability); however, more references mean more eyes and fewer mistakes!
When you store state in a managed bean scope, how long does it live and where does it live? Hopefully you already know the basic answers here, and for scopes such as Session and Request we're dealing with very standard stuff. One thing that might be less obvious, though, is how PageFlow and View scope are handled. These scopes (generally) persist for more than one request, so there is obviously the possibility of a fail-over between two of those requests.

A Java EE server of whatever flavour doesn't know anything about these extra ADF memory scopes, so it can't automatically manage the propagation of their contents, can it? Well, the answer is yes and no. These "scopes" that we reference from the ADF world are ultimately stored on the Session (albeit with a lifetime managed by the framework), so you'd think that everything should be OK and no further work is needed to ensure that any state in these scopes is propagated - right? Well no, not quite; it turns out that several key tasks are often missed. So let's look at those.
First of All - Vanilla Session Replication
Assuming that WebLogic is all configured, this bit at least is all automatic, right? Well, no. In order to "know" that an object in the session needs to be replicated, WebLogic relies on the HttpSession.setAttribute() API being used to put it onto the session. If you instantiate a managed bean in SessionScope through the standard JSF mechanisms then this will be done for you and you're golden. Likewise, if you grab the Faces ExternalContext, obtain the Session through that (e.g. using the getSession() API), and then call the setAttribute() API on HttpSession, you've correctly informed WebLogic of the new object to propagate.
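As a quick sketch of that second route (the attribute name "userPrefs" and the UserPrefs class are invented for illustration):

```java
// Sketch only - "userPrefs" and UserPrefs are invented for illustration.
ExternalContext ec = FacesContext.getCurrentInstance().getExternalContext();
HttpSession session = (HttpSession) ec.getSession(true);

// This setAttribute() call is what tells WebLogic that the new
// object is part of the session and must be replicated.
session.setAttribute("userPrefs", new UserPrefs());
```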
You might already see, though, that there is a potential problem in the case where the object stored in the session is a bean and you're changing one of its properties. Just calling an attribute setter on an object stored on the session is not a sufficient trigger to have that updated object re-propagated, so the version of the object elsewhere in the cluster will be stale. So when you update a bean on the session in this way and want to ensure that the change is propagated, re-call the setAttribute() API.
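In code terms, the pattern looks something like this (again, the "userPrefs" attribute and UserPrefs bean are invented for the sake of the example):

```java
// Assume a UserPrefs bean was previously stored under "userPrefs".
UserPrefs prefs = (UserPrefs) session.getAttribute("userPrefs");

prefs.setTheme("dark");                   // the setter alone will NOT trigger re-replication
session.setAttribute("userPrefs", prefs); // re-calling setAttribute() marks it for propagation
```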
Got it? OK, on to the ADF scopes:
Five Steps to Success For the ADF Scopes
The View and PageFlow scopes are, as I mentioned, ultimately stored on the session. Just as with any other object stored in that way, changing an internal detail of those representative objects will not trigger replication. So we need some extra steps, and of course we need to observe some key design principles whilst we're at it:
1. Observe the UI Manager Pattern and only store state in View and PageFlow scope that is actually needed and is allowed (see 2).
2. As for any replicatable Session scoped bean, any bean in View or PageFlow scope must be serializable (there are audits in JDeveloper to gently remind you of this).
3. Only mark for storage that which cannot be re-constructed. Again a general principle: we wish to replicate as little as possible, so use the transient marker in your beans to exclude anything that you could possibly reconstruct over on the other side (so to speak).
4. In the setters of any non-transient attributes in these beans, call the ControllerContext markScopeDirty(scope) API, e.g. ControllerContext.getInstance().markScopeDirty(AdfFacesContext.getCurrentInstance().getViewScope());. This does the actual work of making sure that the server knows to refresh this state across the cluster.
5. Finally, set the HA flag for the controller scopes in the .adf/META-INF/adf-config.xml file. This corresponds to the following section inside of the file:
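The relevant section is the ADF Controller configuration element; as a sketch (do check the exact element names against your own adf-config.xml and the product doc), it should contain the adf-scope-ha-support flag set to true:

```xml
<adf-controller-config xmlns="http://xmlns.oracle.com/adf/controller/config">
  <adf-scope-ha-support>true</adf-scope-ha-support>
</adf-controller-config>
```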
If this flag is not set, the aforementioned markScopeDirty() API will be a no-op. So this flag provides a master switch to throw when you need HA support and to avoid the cost when you do not.
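Putting the serialization, transient, and markScopeDirty() guidance together, a viewScope bean might look something like this sketch (the class and property names are invented for illustration):

```java
import java.io.Serializable;
import java.util.List;

import oracle.adf.controller.ControllerContext;
import oracle.adf.view.rich.context.AdfFacesContext;

// Illustrative viewScope-managed bean; SearchStateBean, searchTerm and
// cachedResults are invented names, not part of any real application.
public class SearchStateBean implements Serializable {
    private String searchTerm;                 // must survive a fail-over
    private transient List<?> cachedResults;   // reconstructable, so excluded from replication

    public void setSearchTerm(String searchTerm) {
        this.searchTerm = searchTerm;
        // Tell the ADF Controller that the containing scope has changed so
        // it is re-replicated. Note this call is a no-op unless the HA flag
        // is enabled in adf-config.xml.
        ControllerContext.getInstance().markScopeDirty(
            AdfFacesContext.getCurrentInstance().getViewScope());
    }

    public String getSearchTerm() {
        return searchTerm;
    }
}
```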
So if you've not done so already, take a moment to review your managed beans and check that you really are doing all of this correctly. Even if you don't need to support HA today, you might tomorrow...