
Announcements and Technical Advice for the Oracle
Utilities product community from the Product Management team

Recent Posts

Use Of Oracle Coherence in Oracle Utilities Application Framework

In the batch architecture for the Oracle Utilities Application Framework, a Restricted Use License of Oracle Coherence is included in the product. The Distributed and Named Cache functionality of Oracle Coherence is used by the batch runtime to implement clustering of threadpools and submitters, to help support the simple and complex architectures necessary for batch. Partners ask about the libraries and their potential use in their implementations. There are a few things to understand:

- Restricted Use License conditions. The license is exclusively for managing the executing members (i.e., submitters and threadpools) across hardware licensed for use with Oracle Utilities Application Framework based products. It cannot be used in any code outside of that restriction. Partners cannot use the libraries directly in their extensions. It is all embedded in the Oracle Utilities Application Framework.
- Limited libraries. The Oracle Coherence libraries shipped are restricted to the subset needed by the license; this is not a full implementation of Oracle Coherence. As it is a subset, Oracle does not recommend using the Oracle Coherence Plug-in available for Oracle Enterprise Manager with the Oracle Utilities Application Framework implementation of the Oracle Coherence cluster. Using the plug-in against the batch cluster will result in missing and incomplete information being presented to the plug-in, causing inconsistent results.
- Patching. The Oracle Coherence libraries are shipped with the Oracle Utilities Application Framework and are therefore managed by patches for the Oracle Utilities Application Framework, not Coherence directly. Unless otherwise directed by Oracle Support, do not manually manipulate the Oracle Coherence libraries.
The Oracle Coherence implementation within the Oracle Utilities Application Framework has been optimized for use with the batch architecture via a combination of prebuilt Oracle Coherence and Oracle Utilities Application Framework based configuration files.

Note: If you need to find out the version of the Oracle Coherence libraries used in the product at any time, the libraries are listed in the file $SPLEBASE/etc/ouaf_jar_versions.txt. The following command can be used to see the version:

  cat $SPLEBASE/etc/ouaf_jar_versions.txt | grep coh

For example, in the latest version of the Oracle Utilities Application Framework (4.4.0.0.0):

  $ cat /u01/umbk/etc/ouaf_jar_versions.txt | grep coh
  coherence-ouaf                   12.2.1.3.0
  coherence-work                   12.2.1.3.0
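To make the note above concrete, the sketch below recreates a sample ouaf_jar_versions.txt in the current directory (content taken from the 4.4.0.0.0 example) and filters it. On a real installation you would point at $SPLEBASE/etc/ouaf_jar_versions.txt instead of creating a file.

```shell
#!/bin/sh
# Create a sample jar-versions file for illustration only; on a real
# install this file already exists at $SPLEBASE/etc/ouaf_jar_versions.txt.
cat > ouaf_jar_versions.txt <<'EOF'
coherence-ouaf                   12.2.1.3.0
coherence-work                   12.2.1.3.0
EOF

# Filter for the Coherence libraries, as in the article.
grep coh ouaf_jar_versions.txt
```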


Special Tables in OUAF based products

Long-time users of the Oracle Utilities Application Framework might recognize two common table types, identified by their name suffixes, that are attached to most Maintenance Objects within the product:

- Language Tables (suffix _L). The Oracle Utilities Application Framework is multi-lingual and can support multiple languages at a particular site (for example, customers who have multi-lingual call centers or operate across jurisdictions where multiple languages are required). The language table holds the tags for each language for any fields that need to display text on a screen. The Oracle Utilities Application Framework matches the right language records based upon the user's language profile (and active language code).
- Key Tables (suffix _K). These tables hold the key values (and the now less used environment code) used in the main object tables. The original use for these tables was key tracking in the original Archiving solution (which has since been replaced by ILM). Now that the original Archiving is no longer available, these tables are used in a number of areas:
  - Conversion. The conversion toolkit in Oracle Utilities Customer Care and Billing, and now in the Cloud Service Foundation, uses the key table for efficient key generation and blacklisting of identifiers.
  - Key Generation. The key generation utilities now use the key tables to quickly ascertain the uniqueness of a key. This is far more efficient than using the main table, especially with caching support in the database.
  - Information Lifecycle Management. The ILM capability uses the key tables to drive some of its processes, including recognizing when something has been archived and when it has been restored.

These tables are important for the operation of the Oracle Utilities Application Framework across many parts of the product. When you see them now, you understand why they are there.
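As an illustration of how a language table is typically joined to its main table, the sketch below just prints a hypothetical query. The table and column names (ci_acct, ci_acct_l, language_cd) are assumptions for illustration only; check your product's data dictionary for the actual names.

```shell
#!/bin/sh
# Print a hypothetical join between a maintenance object's main table and
# its language (_L) companion. Names are illustrative assumptions only.
SQL="SELECT m.acct_id, l.description
  FROM ci_acct m
  JOIN ci_acct_l l ON l.acct_id = m.acct_id
 WHERE l.language_cd = 'ENG'"
echo "$SQL"
```

The pattern is always the same: join on the main table's key and filter on the language code matching the user's active language.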


Batch Scheduler Integration Questions

One of the most common questions I get from partners is around batch scheduling and execution. The Oracle Utilities Application Framework has a flexible set of methods for managing, executing and monitoring batch processes. The alternatives available are as follows:

- Third Party Scheduler Integration. If the site has an investment in a third party batch scheduler to define schedules and execute product batch processes alongside non-product processes at an enterprise level, the Oracle Utilities Application Framework includes a set of command line utilities, via scripts, that can be invoked by a wide range of third party schedulers to execute the processes. This allows scheduling to be managed by the third party scheduler while the scripts are used to execute and manage product batch processes. The scripts return standard return codes that the scheduler can use to determine next actions if necessary. For details of the command line utilities, refer to the Server Administration Guide supplied with your version of the product.
- Oracle Scheduler Integration. The Oracle Utilities Application Framework provides a dedicated API to allow implementations to use the Oracle DBMS Scheduler, included in all editions of the database, as a local or enterprise wide scheduler. The advantage is that the scheduler is already included in your existing database license and has inbuilt management capabilities provided via the base functionality of Oracle Enterprise Manager (12+) (via Scheduler Central) and also via Oracle SQL Developer. Oracle uses this scheduler in the Oracle Utilities SaaS Cloud solutions. Customers of those cloud services can use the interface provided by the included Oracle Utilities Cloud Service Foundation to manage their schedules, or use the provided REST based scheduler API to execute schedules and/or processes from a third party scheduler. For more details of the scheduler interface, refer to the Batch Scheduler Integration (Doc Id: 2138193.1) whitepaper available from My Oracle Support.
- Online Submission. The Oracle Utilities Application Framework provides a development and testing tool to execute individual batch processes from the online system. It is basic and only supports execution of individual processes (not groups of jobs like the alternatives do). This online submission capability is designed for cost effective developer and non-production testing, if desired, and is not supported for production use. For more details, refer to the online documentation provided with the version of the product you are using.

Note: For customers of legacy versions of Oracle Utilities Customer Care and Billing, a basic workflow based scheduler was provided for development and testing purposes. This interface is not supported for production use and one of the alternatives outlined above should be used instead.

All of the above methods use the same architecture for executing batch processes (though some have additional features that need to be enabled). For details of each of the configurations, refer to the Server Administration Guide supplied with your version of the product.

When asked which technology should be used, I tend to recommend the following:

- If you have an existing investment in a third party scheduler that you want to retain, use the command line interface. This retains your existing investment, and you can integrate across products or even integrate non-product batch, such as backups, from the same scheduler.
- If you do not have an existing scheduler, consider using the DBMS Scheduler provided with the database. It is likely your DBAs are already using it for their tasks, and it is used by a lot of Oracle products already. The advantage of this scheduler is that you already have the license somewhere in your organization. It can be deployed locally within the product database or remotely as an enterprise wide solution. It has a lot of good features, and Oracle Utilities uses this scheduler as a foundation of our cloud implementations.
- If you are on the cloud, use the provided interface in Oracle Utilities Cloud Service Foundation and, if you have an external scheduler, the REST based Scheduler API.
- If you are on-premise, use the Oracle Enterprise Manager (12+) interface (Scheduler Central) in preference to the SQL Developer interface (though the latter is handy for developers). Oracle also ships a command line interface to the scheduler objects if you prefer PL/SQL style administration. Note: Scheduler Central is included in the base functionality of Oracle Enterprise Manager and does not require any additional packs.
- I would only recommend using online submission for demonstrations, development and perhaps testing (where you are not using Oracle Utilities Testing Accelerator or have not implemented the scheduler). It has very limited support and will only execute individual processes.
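The third party scheduler option above boils down to a wrapper that invokes the product script and branches on its standard return code. The sketch below shows that pattern only; the 'true' command is a stand-in for the real product submitter script (which varies by version), and F1-FLUSH is just an example batch control code.

```shell
#!/bin/sh
# Generic third-party scheduler wrapper pattern: run a product batch
# process and branch on its standard return code. Substitute the product's
# command line submitter script as documented in the Server Administration
# Guide for your version.
BATCH_CODE="F1-FLUSH"   # example batch control code

run_batch() {
  # Placeholder for the real product submitter invocation for $BATCH_CODE;
  # 'true' simply simulates a successful run here.
  true
}

if run_batch; then
  echo "batch $BATCH_CODE completed ok"
else
  rc=$?
  echo "batch $BATCH_CODE failed, return code $rc" >&2
  exit "$rc"
fi
```

Most enterprise schedulers treat a non-zero exit code as a failed step, which is exactly what this pattern propagates.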


Configuration Management for Oracle Utilities

An updated series of whitepapers is now available for managing configuration and code in Oracle Utilities products, whether the implementation is on-premise, hybrid or using Oracle Utilities SaaS. It has been updated for the latest Oracle Utilities Application Framework release. The series highlights the generic tools, techniques and practices available for use in Oracle Utilities products. The series is split into a number of documents:

- Concepts. Overview of the series and the concept of Configuration Management for Oracle Utilities products.
- Environment Management. Establishing and managing environments for use on-premise, hybrid and on the Oracle Utilities SaaS Cloud. Some practices and techniques to reduce implementation costs are discussed.
- Version Management. Understanding the inbuilt and third party integration for managing versions of individual extension assets. There is a discussion of managing code on the Oracle Utilities SaaS Cloud.
- Release Management. Understanding the inbuilt release management capabilities for creating extension releases and accelerators.
- Distribution. Installation advice for releasing extensions across environments on-premise, hybrid and on the Oracle Utilities SaaS Cloud.
- Change Management. A generic change management process to approve extension releases, including assessment criteria.
- Configuration Status. The information available for reporting the state of extension assets.
- Defect Management. A generic defect management process to handle defects in the product and extensions.
- Implementing Fixes. A process and advice on implementing single fixes individually or in groups.
- Implementing Upgrades. The common techniques and processes for implementing upgrades.
- Preparing for the Cloud. Common techniques and assets that need to be migrated prior to moving to the Oracle Utilities SaaS Cloud.
For more information and for the whitepaper associated with these topics refer to the Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.


Oracle Utilities Application Framework V4.4.0.0.0 Released

The latest release of the Oracle Utilities Application Framework, V4.4.0.0.0, is now available, with the first products becoming available on-premise and on the Oracle Utilities Cloud. This release is significant as it forms the micro-services based foundation of the next generation of the Oracle Utilities Cloud offering, and whilst the bulk of the changes are cloud based, there are some significant changes for on-premise customers as well:

- New Utilities User Experience. Last year we previewed our directions in terms of the user experience across the Oracle Utilities product portfolio. Oracle Utilities Application Framework V4.4.0.0.0 delivers the initial set of this experience with a new look and feel which forms the basis of the new user experience. The new user experience is based upon feedback from various user experience teams, customers and partners to deliver a better experience, including reducing eye strain and supporting a wider range of platforms/devices now and in the future.
- New To Do Portals. Based upon customer feedback, the first of the new management portals, for To Do Management, is now included with Oracle Utilities Application Framework V4.4.0.0.0. These portals can be used alongside the legacy To Do functionality and can be migrated to over time. The idea of these portals is to focus on finding, managing and resolving To Dos with the minimum amount of effort. The new portals support dynamic criteria based upon data and To Do Type, date ranges, saved views, search on text and multi-query.
- Improved User Error Field Marking. In line with the new Utilities user experience, the indication of fields in error has been improved both visually and programmatically. This allows implementations to be flexible in how fields in error are indicated in error routines and how they are indicated in the user experience.
- To Do Pre-creation Algorithm. In past releases, it was possible to use a To Do Pre-creation algorithm, which resided exclusively on the Installation record, to implement logic targeting when, in a lifecycle, a To Do can be created. This was seen as inefficient if implementations had a large number of To Do Types. It is now possible to introduce this logic at the To Do Type level to override the global algorithm.
- Cloud Enhancements. This release contains a set of enhancements for the Oracle Utilities Cloud SaaS versions which are not typically applicable to non-cloud implementations. These enhancements are shipped with the product in an appropriate format (some features are not available to non-cloud implementations).
- More Granular To Do Security. The Complete All functionality in To Do now has a separate security application service to provide a more focused capability. This allows more granular security to be implemented, if desired.
- External Message Enhancements. The External Messages capability has been extended to support URI substitutions, to support cloud/hybrid implementations and to isolate developers from configuration changes to reduce costs.

Products using this new version of the framework, including Oracle Utilities Customer Care And Billing 2.7.0.1.0, are now available from Oracle Software Delivery Cloud. Refer to the release notes shipped with those products for details of these and other enhancements available in this release.


Registering a Service Request With Oracle Support

As with other vendors, Oracle provides a web site, My Oracle Support, for customers to find information as well as register service requests for bugs and enhancements they want us to pursue. Effective use of this facility can save you time as well as help you find the information you might need to answer your questions. Most customers think My Oracle Support is just the place to get patches, but it is far more of a resource than that. Apart from patches for all its products, it provides some important resources for customers and partners, including:

- Knowledge Base. A set of articles that cover all the Oracle products with announcements and advice. For example, all the whitepapers you see available from this blog end up as Knowledge Base articles. Product Support and Product Development regularly publish to that part of MOS to provide customers and partners up-to-date information.
- Communities. For most Oracle products, there are communities of people who can answer questions. Some partners actually share implementation tips in these communities, and they can become self-sustaining with advice about features that have actually been implemented and tips on how best to implement them from partners and customers. Oracle Product Support and Development monitor those communities to see trends as well as determine possible optimizations to our products. They are yet another way you can contribute to the product.

Now, to help you navigate the site, I have compiled a list of the most common articles that you might find useful. This list is not comprehensive, and I would recommend that you look at the site to find more than what is listed here:

- My Oracle Support Resource Center (Doc ID 873313.1)
- How to use My Oracle Support - How-to Training Video Series (Doc ID 603505.1)
- My Oracle Support (MOS) or Cloud Support Portal for New Users - A Getting Started Resource Center (Doc ID 1959163.1)
- How to create Service Request Reports (SR Reports), Asset Reports, Inventory Reports in My Oracle Support (Doc ID 1496117.1)
- Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Functional Issues (Doc ID 2057204.1)
- Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Products (Doc ID 2064324.1)
- Details Required when Logging an Oracle Utilities Framework Based Product Service Request (Doc ID 1905747.1)
- Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Products - Batch Issues (Doc ID 2064310.1)
- Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Patch Installation Issues (Doc ID 2064389.1)
- Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Installation Issues (Doc ID 2058433.1)
- Oracle Utilities Framework Support Utility (Doc ID 1079640.1)
- Certification Matrix for Oracle Utilities Products (Doc ID 1454143.1)
- Optimizing Log levels In OUAF Based Applications (Doc ID 2090031.1)
- Supporting OUAF date time format in IWS without XAI compatibility enabled (Doc ID 2229893.1)
- OUAF Batch Commit Strategy (Doc ID 1482116.1)
- Usage Of G1GC In OUAF Product (Doc ID 2444942.1)
- Details around Storage.xml file for OUAF based products (Doc ID 1482635.1)
- Required Troubleshooting Information for OUAF Performance Problems [Video] (Doc ID 1233173.1)
- How to Capture Fiddler Logs in Oracle Utilities Application Framework Based Products (Doc ID 1293483.1)
- How to apply a custom stylesheet to the Oracle Utilities Application Framework? (Doc ID 1557818.1)
- Configuration And Log Files Checklist For Oracle Utilities Application Framework Based Products (Doc ID 797594.1)
- How To Install Patches On Oracle Utilities Application Framework Based Products (Doc ID 974985.1)

For more articles, I suggest you use the terms OUAF or Oracle Utilities Application Framework in the search. For specific product advice, use the product acronym or product name in the search to find articles.


Building Your Batch Architecture with Batch Edit

The introduction of Oracle Coherence into our batch architecture brings both power and flexibility, supporting a wide range of batch workloads. But, to quote one of the late Stan Lee's iconic characters, "with great power comes great responsibility": configuration of this architecture can be challenging for complex scenarios. To address this, the product introduced a text based utility called Batch Edit. The Batch Edit utility is a text based wizard to help implementers build simple and complex batch architectures without the need for complex editing of configuration files. The latter is still supported for more experienced implementers, but the main focus is for inexperienced implementers to be able to build and maintain a batch architecture easily using this wizard.

The use of Batch Edit (the bedit.sh command) is totally optional; to use it you must change the Enable Batch Edit Functionality setting to true using the configureEnv.sh -a command. This is necessary for backward compatibility. Once it is enabled, the wizard can be used to build the configuration. Here are a few tips on using it.

The bedit.sh command has options that need to be specified to edit parts of the architecture:

- bedit.sh -c - Edit the cluster configuration.
- bedit.sh -w - Edit the threadpoolworker configuration.
- bedit.sh -s - Edit the submitter configuration.

Use the bedit.sh -h option for a full list of options and combinations. Within bedit.sh you can set up different scenarios using predefined templates:

- Cluster. The following templates are supported for clusters:
  - Single Server (ss) - Ideal for simple non-production environments where the cluster is restricted to a single server.
  - Unicast Server (wka) - Well Known Address based clusters, with the ability to define the nodes in your cluster within the wizard.
  - Multi-cast Server (mc) - Uses multi-cast for dynamic node management and configuration.
- Threadpool. It is possible to set up individual threadpools or groups of threadpools using the tool with the following templates:
  - Standard Threadpools - Setting up individual threadpools or groups of threadpools with macro and micro level configuration (including sizing and caching).
  - Cache Threadpools - Support for caching threadpools, which are popular in complex setups to drastically reduce the network traffic across nodes.
  - Admin Threadpools - Support for reserving threadpools for management and monitoring capabilities (this reduces the JMX overhead).
- Submitter. If you are not using the DBMS_SCHEDULER interface, which uses the Batch Control for parameters, then properties files for the submitters will be required. The bedit.sh utility allows the following types of submitter files to be generated and maintained:
  - Global configuration - Sets global defaults for all batch controls. For example, it is possible to specify the product user used for authorization purposes.
  - Specific Batch Control configuration - Sets the parameters for specific Batch Controls.

Within each template there is a list of settings, with help on each setting to assist in deciding the values. The bedit.sh utility allows each parameter to be set individually using in-line commands: use the set command to set values, and use help to get context sensitive help on individual parameters. One of the most important commands is save, which applies the changes you have made.

The Batch Edit utility uses templates provided with the product to build configuration files without manual editing. It is highly recommended for customers who do not want to manually manage templates or configurations for batch, or who do not have in-depth Oracle Coherence knowledge.
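To give a feel for the output, a specific Batch Control submitter configuration generated by bedit.sh is a simple properties file. The fragment below is an illustrative assumption only; the exact property names and defaults are generated by bedit.sh and documented in the Server Administration Guide for your version:

```properties
# Hypothetical submitter properties for a single batch control (F1-FLUSH
# is just an example batch control code; names are illustrative).
com.splwg.batch.submitter.batchCd=F1-FLUSH
com.splwg.batch.submitter.threadCount=1
com.splwg.batch.submitter.userId=SYSUSER
```

In practice you rarely edit these files by hand; bedit.sh's set and save commands maintain them for you.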
For more information about the Batch Edit utility refer to the Server Administration Guide shipped with the product and Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support. Customers wanting to know about the DBMS_SCHEDULER interface should refer to Batch Scheduler Integration (Doc Id: 2196486.1) available from My Oracle Support.


Revision Control Basics

One of the features of the Oracle Utilities Application Framework is Revision Control. This is an optional facility where you can version manage your ConfigTools objects natively within the browser. Revision Control supports the following capabilities:

- Adding an object for the first time. The new object is automatically checked out by the system on behalf of the current user. The revision is finalized when the user checks in the object or reverts all changes. The latter causes the object to be restored to the version at check-out time.
- Updating an object requires the object to be checked out prior to making any change. A user can either manually check out an object, or the first save confirms an automatic check out with the user. The revision is finalized when the user checks in the object or reverts all changes.
- Deletion is captured. Deleting an object results in a new revision record capturing the object at deletion time. This does not remove the object from Object Revision. It also allows for restores in the future if necessary.
- Restoring generates a new revision. Restoring an object also results in a new revision capturing the object after being restored to an older version.
- State is important. An object is considered checked out if it has a revision in a non-final state.
- Algorithms control the revision. A Revision Control Maintenance Object plug-in spot is introduced to enforce revision rules, such as preventing a user from working on an object checked out by another user. Revision control is an optional feature that can be turned on for each eligible maintenance object. To turn the feature on for a Maintenance Object, a revision control algorithm has to be plugged into it.
- Automatic revision dashboard zones. A new Revision Control context sensitive dashboard zone is provided to manage the above revision events, except the restore, for the current object being maintained. The zone is visible only if the Maintenance Object has this feature turned on. A hyperlink from this zone navigates to a new Revision History portal listing the history of revision events for the current object.
- Tracking objects. A dashboard zone is provided that shows all the objects currently checked out by the user.
- Search revision history. In addition, a Revision History Search portal is provided to search and see information about a user's historical work and the historical revisions to an object.

Revision Control supports the ability to check in, check out and revert versions of objects you develop in ConfigTools from within the browser interface. Additionally, Revision Control supports team based development, with supervisor functions to force the state of versions allocated to individuals.

Note: Revision Control is disabled by default; the F1-REVCTL algorithm must be added as a Revision Control algorithm on the ConfigTools Maintenance Objects it applies to. Once enabled, whenever the configured object is edited, a Revision Control dashboard zone is displayed depending on the state of the object within the Revision Control system. The state of a revision can be queried using the Revision Control Search.

This has just been a summary of some of the features of Revision Control. Refer to the online documentation for additional advice and a full description of the features.


Oracle Utilities Documentation

One of the most common questions I get from partners and customers is the location of the documentation for the product. In line with most Oracle products, there are three locations for documentation:

- Online Help. The product ships with online help which has information about the screens and advice on the implementation and extension techniques available. Of course, this assumes you have installed the product first. Help can be accessed using the assigned keyboard shortcut or the help icon on the product screens.
- Oracle Software Delivery Cloud. Along with the installation media for the products, it is possible to download PDF versions of all the documentation for offline use. This is usually indicated on the download when selecting the version of the product, though it can be downloaded at any time.
- Oracle Utilities Help Center. As with other Oracle products, all the documentation is available online via the Oracle Help Center (under Industries --> Utilities).

The following documentation is available:

- Release Notes. Summary of the changes and new features in the Oracle Utilities product. Audience: Implementation Teams.
- Quick Install. Summary of the technical installation process, including prerequisites. Audience: UNIX Administrators.
- Installation Guide. Detailed software installation guide for the Oracle Utilities product. Audience: UNIX Administrators.
- Optional Products Installation. Summary of any optional additional or third party products used with the Oracle Utilities product. This guide only exists if optional products are certified/supported with the product. Audience: UNIX Administrators.
- Database Administrator's Guide. Installation, management and guidelines for use with the Oracle Utilities product. Audience: Database Administrators.
- Licensing Information User Manual. Legal license information relating to the Oracle Utilities product and related products. Audience: UNIX Administrators.
- Administrative User Guide. Offline copy of the Administration documentation for the Oracle Utilities product. This is also available via the online help installed with the product. Audience: Implementation Teams, Developers.
- Business User Guide. Offline copy of the Business and User documentation for the Oracle Utilities product. This is also available via the online help installed with the product. Audience: Implementation Teams, Developers.
- Package Organization Summary. Summary of the different packages included in the Oracle Utilities product. This may not exist for single product installations. Audience: Implementation Teams.
- Server Administration Guide. Guide to the technical configuration settings, management utilities and other technical architecture aspects of the Oracle Utilities product. Audience: UNIX Administrators.
- Security Guide. Guide to the security aspects of the Oracle Utilities product, centralized in a single document. Covers both security functionality and technical security capabilities. This is designed for use by security personnel to design their security solutions. Audience: Implementation Teams.
- API Reference Notes. Summary of the APIs provided with the Oracle Utilities product. This is also available via online features. Audience: Developers.
- Developers Guide. The Oracle Utilities SDK guide for using the Eclipse based development tools for extending the Oracle Utilities product using Java. Partners using ConfigTools or Groovy should use the Administrative User Guide instead. Audience: Developers.

Be familiar with this documentation, as well as My Oracle Support, which has additional Knowledge Base articles.


Running OUAF Database Installation in Non-Interactive Mode

Over the past few releases, the Oracle Utilities Application Framework introduced Java versions of our installers which were originally shipped as part of the Oracle Application Management Pack for Oracle Utilities (for Oracle Enterprise Manager). To use these utilities you need to set the CLASSPATH as outlined in the DBA Guides shipped with the product. Each product ships an Install-Upgrade sub-directory which contains the install files. Change to that directory to perform the install. If you want custom storage parameters, update the Storage.xml and StorageOps.xml files.

Use the following command line to install the database components:

java -Xmx1500M com.oracle.ouaf.oem.install.OraDBI -d jdbc:oracle:thin:@<DB_SERVER>:<PORT>/<SID>,<DBUSER>,<DBPASS>,<RW_USER>,<R_USER>,<RW_USER_ROLE>,<R_USER_ROLE>,<DBUSER> -l 1,2 -j $JAVA_HOME

Where:

<DB_SERVER> - Host name for the database server
<PORT> - Listener port for the database server
<SID> - Database service name (PDB or non-PDB)
<DBUSER> - Administration account for the product (owns the schema) (created in an earlier step)
<DBPASS> - Password for the administration account (created in an earlier step)
<RW_USER> - Database read-write user for the product (created in an earlier step)
<R_USER> - Database read-only user for the product (created in an earlier step)
<RW_USER_ROLE> - Database role for read-write access (created in an earlier step)
<R_USER_ROLE> - Database role for read access (created in an earlier step)

That will run the install directly. If you added additional users to your installation and want to generate the security definitions for those users, run the new oragensec utility:

java -Xmx1500M com.oracle.ouaf.oem.install.OraGenSec -d <DBUSER>,<DBPASS>,jdbc:oracle:thin:@<DB_SERVER>:<PORT>/<SID> -a A -r <R_USER_ROLE>,<RW_USER_ROLE> -u <RW_USER>,<R_USER>

Where <RW_USER> is the additional user that you want to generate security for. You will need to provide <R_USER> as well.
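The OraDBI command line above can be wrapped in a small script so the install runs non-interactively, for example from a provisioning pipeline. The sketch below is illustrative only: every connection value, user name and role is a hypothetical placeholder for your environment, and DRY_RUN=1 (the default) just prints the assembled command instead of executing it.

```shell
#!/bin/sh
# Illustrative wrapper for the non-interactive OraDBI install.
# All connection values below are hypothetical placeholders.
DB_SERVER=dbserver.example.com   # database host
PORT=1521                        # listener port
SID=OUAFPDB                      # service name (PDB or non-PDB)
DBUSER=CISADM                    # administration (schema owner) account
DBPASS=changeit                  # administration account password
RW_USER=CISUSER                  # read-write user
R_USER=CISREAD                   # read-only user
RW_USER_ROLE=CIS_USER            # read-write role
R_USER_ROLE=CIS_READ             # read-only role

# Assemble the documented OraDBI command line from the values above.
CMD="java -Xmx1500M com.oracle.ouaf.oem.install.OraDBI \
  -d jdbc:oracle:thin:@${DB_SERVER}:${PORT}/${SID},${DBUSER},${DBPASS},${RW_USER},${R_USER},${RW_USER_ROLE},${R_USER_ROLE},${DBUSER} \
  -l 1,2 -j ${JAVA_HOME}"

# DRY_RUN=1 prints the command for review; set DRY_RUN=0 to execute it
# once the CLASSPATH has been set as outlined in the DBA Guide.
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$CMD"
else
  eval "$CMD"
fi
```

Keeping the connection values in one place like this also makes it easier to diff the install command between environments before running it.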


New Process Flow Functionality

In Oracle Utilities Application Framework V4.3.0.6.0 we introduced an exciting new capability to model and process multi-step, long-running business processes solely using the ConfigTools functionality. The capability allows implementations to specify a complex step-by-step process, with associated objects, to perform a business process with the following capabilities:

Forward and Back. The ability to move forward through a process and also backtrack if the process permits it. For example, you might be interacting with a customer on the phone or via chat and the customer changes their mind about a particular part of the process. The operator can move back to the relevant part of the process to correct the interaction. A train UI element has been added to visually emphasize the location in the process.

Save At Any Time. The ability to pause and save a business process in transit at any time. The Process Flow will pick up from the save point to continue the process.

Model Complex Processing. It is possible to model simple and complex processing with branching, multiple panel support and panel group support. The latter allows groups of panels within a single step to be modeled.

State Query. A Process Flow State Query has been introduced to allow users to find processes in progress and deal with them appropriately.

Flexible Configuration. From a single maintenance dialog the Process Flow can be configured, including flow attributes, panel sequences, conditions and the buttons visible at each stage. It should be noted that a library of common objects, including UI Maps, is provided with the capability to reduce the configuration effort.

The Process Flow capability is used in the Oracle Utilities Application Framework based applications to deliver key processes out of the box for on-premise implementations and extensively in the Oracle Utilities Cloud Services (SaaS) accelerators. It is also used in the Oracle Utilities Cloud Service Foundation, a component provided with each Oracle Utilities Cloud Service (SaaS). Refer to the online documentation for details of the capability as well as the sequence of objects to create to use this facility.


Cube Viewer - A new way of viewing your data

One of the most common uses for spreadsheets these days is to pivot information. It is a way of displaying information using multiple dimensions in a way that makes it easier to analyze. Oracle Utilities Application Framework V4.3.0.6.0 introduces a new zone paradigm called the Cube Viewer which allows pivot-style analysis to be configured and made available to authorized users. The Cube Viewer is a new way of displaying and analyzing information with the following capabilities:

Dynamic Configuration. End users can dynamically change the look and subset of data available to the cube within the user interface.

Save and Load Views. As with other zones, it is possible to save all the dynamic configuration for reuse across sessions. This allows optimizations at the user level.

Configurable Selection Criteria. Allows users to supply criteria for sub-setting the data to be displayed.

Comparison Support. Allows two sets of criteria data to be available for comparison purposes.

Different Views. It is possible to view the data in a variety of formats, including data views or graphically. This helps users understand data in the style they prefer. It is even possible to see the raw data that the analysis was based upon.

Multiple Dimensions Supported. The dimensions for the cube are available, including any dimensions not shown but available.

Configurable View Columns. A list of columns to display can be selected.

Flexible Formatting. The Cube Viewer allows custom formulas to be dynamically added for on-the-fly analysis.

Dynamic Filtering. The Cube Viewer allows filters to be dynamically imposed at both the global and hierarchical levels to allow for focused analysis.

To use the Cube Viewer, the following are configured:

Query Portal - A query portal is built containing the SQL and any derived columns for use in the analysis. Filters (user and hidden) are specified for use in the Cube Viewer. The columns displayed are used as the basis for the Cube Viewer display.

Business Service - The query portal is defined as a Business Service using the standard FWLZDEXP application service.

Cube Type (New Object) - Defines the service and capabilities of the cube in a new object.

Cube View (New Object) - Defines the portal to display the Cube Viewer/Cube Type.

The Cube Viewer is an exciting new way of allowing users to interact with data and provide value-added analysis. Note: This capability can be used independently of the Oracle Database In-Memory capability. Use with Oracle Database In-Memory can allow more advanced analytics to be implemented with increased performance. For more information about the Cube Viewer, refer to the online documentation supplied with the product.


New Batch Level Of Service Algorithms

Batch Level Of Service allows implementations to register a service target on individual batch controls and have the Oracle Utilities Application Framework assess the current performance against that target. In past releases of the Oracle Utilities Application Framework the concept of Batch Level Of Service was introduced. This is an algorithm point that assesses the value of a performance metric against a target and returns the appropriate response for that target. By default, if configured, the return value is shown on the Batch Control maintenance screen or whenever the algorithm is called. It was possible to build an algorithm to set and check the target and return the appropriate level. This can be called manually, be made available via the Health Service API or be used in custom query portals.

In this release a number of enhancements have been introduced:

Possible to specify multiple algorithms. If you want to model multiple targets it is now possible to link more than one algorithm to a batch control. The appropriate level (the worst case) will be returned across the multiple algorithms.

More out-of-the-box algorithms. In past releases a basic generic algorithm was supplied, but additional algorithms are now provided to cover additional metrics:

F1-BAT-LVSVC - Original generic algorithm to evaluate error count and time since completion
F1-BAT-ERLOS - Compare count of records in error to a threshold
F1-BAT-RTLOS - Compare total batch run time to a threshold
F1-BAT-TPLOS - Compare throughput to a threshold

For more information about these algorithms refer to the algorithm entries themselves and the online documentation.
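Because the level of service can be surfaced through the Health Service API, a monitoring script could poll it and alert on a degraded level. The sketch below only assembles a hypothetical curl invocation: the host, port, path and credential variables are assumptions for illustration, not the documented endpoint, so check the online documentation for the actual Health Check API URL in your release.

```shell
#!/bin/sh
# Hypothetical health poll; the endpoint path below is an assumption,
# not the documented Health Check API URL for any specific release.
OUAF_HOST=ouaf-host.example.com   # placeholder host
OUAF_PORT=6500                    # placeholder port
HEALTH_PATH=/ouaf/rest/health     # placeholder path - verify in your release

HEALTH_URL="https://${OUAF_HOST}:${OUAF_PORT}${HEALTH_PATH}"
# Credentials are deliberately left as unexpanded shell references.
POLL="curl -s -u \$OUAF_USER:\$OUAF_PASS ${HEALTH_URL}"

# Print the assembled command for review rather than executing it here.
echo "$POLL"
```

A scheduler (cron or similar) could run such a poll and raise an alert whenever the returned level of service is not the normal value.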


New File Adapter - Native File Storage

In Oracle Utilities Application Framework V4.3.0.6.0, a new File Adapter has been introduced to parameterize file locations across environments. In previous releases, environment variables or hard-coded paths were used to implement the locations of files. With the introduction of the Oracle Utilities Cloud SaaS Services, the locations of files are standardized and, to reduce maintenance costs, these paths are now parameterized using an Extendable Lookup (F1-FileStorage) defining the path alias and the physical location. The on-premise version of Oracle Utilities Application Framework V4.3.0.6.0 supports local storage (including network storage) using this facility. The Oracle Utilities Cloud SaaS version supports both local (predefined) storage and the Oracle Object Storage Cloud.

To use the alias in any FILE-PATH (for example), the following URL format is used:

file-storage://MYFILES/mydirectory (if you want to specify a subdirectory under the alias) or file-storage://MYFILES

Now, if you migrate to another environment (the lookup is migrated using the Configuration Migration Assistant) then this record can be altered. If you are moving to the Cloud then this adapter can change to the Oracle Object Storage Cloud. This reduces the need to change the individual places that use the alias. It is recommended to take advantage of this capability:

Create an alias per location you read or write files from in your Batch Controls. Define it using the Native File Storage adapter. Try to create the minimum number of aliases possible to reduce maintenance costs.

Change all the FILE-PATH parameters in your batch controls to use the relevant file-storage URL.

If you decide to migrate to the Oracle Utilities SaaS Cloud, these Extendable Lookup values will be the only thing that changes to realign the implementation to the relevant location on the Cloud instance. For on-premise implementations and the cloud, these definitions can now be migrated using the Configuration Migration Assistant.
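As a hypothetical before-and-after, a batch control's FILE-PATH parameter might change as follows (the alias name MYFILES and both paths are illustrative only, not values shipped with the product):

```
Before (hard-coded path in the batch control):
  FILE-PATH = /u01/interfaces/inbound

After (Native File Storage alias defined in Extendable Lookup F1-FileStorage):
  FILE-PATH = file-storage://MYFILES/inbound
```

If the environment later moves to the cloud, only the F1-FileStorage lookup entry behind MYFILES changes; the FILE-PATH values in the batch controls stay the same.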


Object Erasure capability introduced in 4.3.0.6.0

With data privacy regulations around the world being strengthened, data management principles need to be extended to most objects in the product. In the past, Information Lifecycle Management (ILM) was introduced for transaction object management and continues to be used today in implementations for effective data management. When designing the ILM capability, it did not make sense to extend it to Master data such as Accounts, Persons, Premises, Meters, Assets, Crews etc., as data management and privacy rules tend to be different for these types of objects. In Oracle Utilities Application Framework V4.3.0.6.0, we have introduced Object Erasure to support Master Data, taking into account purging as well as obfuscation of data. This new capability is complementary to Information Lifecycle Management to offer a full data management capability. It does not replace Information Lifecycle Management, nor does it depend on Information Lifecycle Management being licensed. Customers using Information Lifecycle Management in conjunction with Object Erasure can implement full end-to-end data management capabilities.

The idea behind Object Erasure is as follows:

An algorithm can call the Manage Erasure algorithm on the associated Maintenance Object to check the conditions that make the object eligible for erasure. Implementations have the flexibility to initiate the process from a wide range of possibilities. This can be as simple as checking some key fields or some key data on an object (you decide the criteria).

The Manage Erasure algorithm is used to detect the conditions, collate relevant information and call the F1-ManageErasureSchedule Business Service to create an Erasure Schedule Business Object in a Pending state to initiate the process. A set of generic Erasure Schedule Business Objects is provided (for example, a generic purge object for use in purging data) and you can create your own to record additional information.

The Erasure Schedule BO has three states which can be configured with algorithms (usually Enter algorithms; a set is provided for reuse with the product):

Pending - The initial state of the erasure.
Erased - The most common final state, indicating the object has been erased or obfuscated.
Discarded - An alternative final state where the record can be parked (for example, if the object is no longer eligible, an error has occurred in the erasure, or reversal of obfuscation is required).

A new Erasure Monitor (F1-OESMN) Batch Control can be used to transition the Erasure Schedule through its states and perform the erasure or obfuscation activity.

Note: The base supplied Purge Enter algorithm (F1-OBJERSPRG) can be used for most requirements. It should be noted that it does not remove the object from the _K key tables, to avoid conflicts when reallocating identifiers.

The solution has been designed with a portal to link all the elements together easily, and the product comes with a set of predefined objects ready to use. The portal also allows an implementer to configure Erasure Days, which is effectively the number of days a record remains in the Erasure Schedule before being considered by the Erasure Monitor (basically a waiting period). As an implementer you can just build the Manage Erasure algorithm to detect the business event, or you can also write the algorithms to perform all of the processing (and every variation in between). The erasure will respect any business rules configured for the Maintenance Object, so the erasure or obfuscation will only occur if the business rules permit it. Customers using Information Lifecycle Management can manage the storage of Erasure Schedule objects using Information Lifecycle Management.

Objects Provided

The Object Erasure capability supplies a number of objects you can use for your implementation:

Set of Business Objects. A number of Erasure Schedule Business Objects such as F1-ErasureScheduleRoot (base object), F1-ErasureScheduleCommon (generic object for purges) and F1-ErasureScheduleUser (for user record obfuscation). Each product may ship additional Business Objects.

Common Business Services. A number of Business Services including F1-ManageErasureSchedule to use within your Manage Erasure algorithm to create the necessary Erasure Schedule object.

Set of Manage Erasure Algorithms. For each predefined Object Erasure object provided with the product, a set of Manage Erasure algorithms is supplied to be connected to the relevant Maintenance Object.

Erasure Monitor Batch Control. The F1-OESMN Batch Control is provided to manage the Erasure Schedule object state transitions.

Enter Algorithms. A set of predefined Enter algorithms to use with the Erasure Schedule object to perform common outcomes including purge processing.

Erasure Portal. A portal to display and maintain the Object Erasure configuration.

Refer to the online documentation for further advice on Object Erasure.


Inbound Web Services - REST Services

In Oracle Utilities Application Framework V4.3.0.6.0, the Inbound Web Services object has been extended to support both SOAP and REST based services. This has a lot of advantages:

Centralized web services registration. The interface Application Programming Interfaces (APIs) are now centralized in the Inbound Web Services object. This means you can manage all your programmatic interfaces from a single object. This helps when using the Web Service Catalog used for the Oracle Integration Cloud Service as well as any API management capabilities.

Isolation from change. One of the major features of the REST capability within Inbound Web Services is that the URI is no longer fixed but can be different from the underlying service. This means you can isolate your interface clients from changes.

Standardization. The Inbound Web Services object has inherent standards that can be reused across both SOAP and REST based services. For example, the ConfigTools object model can be directly wired into the service, reducing time.

Reduced cost of maintenance. One of the features of the new capability is the grouping of all your interfaces into a minimal number of registrations. This reduces maintenance and allows you to control groups of interfaces easily.

Inbound Web Services now supports two Web Service Classes:

SOAP - Traditional XAI and IWS based services based around the SOAP protocol. These services will be deployed to the Oracle WebLogic Server.

REST - RESTful services that are now registered for use. These services are NOT deployed, as they are used directly by the REST execution engine.

For REST services, a new optimized maintenance function is now available. This facility has the following capabilities:

Multiple Services in one definition. It is now possible to define multiple REST services in one registration. This reduces maintenance effort, and the interfaces can be enabled and disabled at the Inbound Web Service level. Each REST service is regarded as an operation on the Inbound Web Service.

Customizable URI for service. The URL used for the REST service can be the same as or different from the operation.

Business Object Support. In past releases, Business Objects were not supported. In this release, there is some limited support for Business Objects. Refer to the Release Notes and online documentation for clarification of the level of support.

Open API Support. This release introduces Open API support for documenting the REST API.

Active REST services are available to the REST execution engine. Open API (OAS3) support has been introduced, which provides the following:

Documentation of the API in various formats. The documentation of the REST based API is based upon the metadata stored in the product.

Ability to authorize Inbound Web Services directly in Open API. It is possible to authorize the API directly from the Open API documentation. Developers can check the API prior to making it active.

Multiple formats supported. Developers can view payloads in various formats including Model format.

Ability to download the API. You can download the API directly from the documentation in Open API format. This allows the API to be imported into development IDEs.

Ability to test inline. Active APIs can be tested directly within the documentation.

Examples of the documentation include the API header with authorization (the server URL shown is generic when the server is not active), the operation/API list, the request API with the inbuilt testing facility, the response API with response codes, and the Model format.

For more information about REST support, refer to the online documentation or Web Services Best Practices (Doc Id: 2214375.1) from My Oracle Support.
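As an illustration of calling an active REST-class Inbound Web Service from the command line, the sketch below assembles a curl request. The host, port, context path and service URI are hypothetical placeholders (the actual base URL of the REST execution engine is environment-specific and shown in the Open API documentation), and the command is printed rather than executed.

```shell
#!/bin/sh
# Hypothetical REST Inbound Web Service call; the base URL and
# service URI below are placeholders, not a documented endpoint.
BASE_URL=https://ouaf-host.example.com:6500/ouaf/rest  # placeholder base
SERVICE_URI=myRestService/myOperation                  # placeholder operation

# Credentials are left as unexpanded shell references for the reader.
REQUEST="curl -s -u \$OUAF_USER:\$OUAF_PASS \
  -H 'Content-Type: application/json' \
  ${BASE_URL}/${SERVICE_URI}"

# Print the assembled command for review rather than executing it here.
echo "$REQUEST"
```

Because the URI is decoupled from the underlying service, the SERVICE_URI part can stay stable for clients even if the implementation behind it changes.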


Oracle Utilities Application Framework V4.3.0.6.0 Release

Oracle Utilities Application Framework V4.3.0.6.0 based products will be released over the coming months. As with past release the Oracle Utilities Application Framework has been enhanced with new and updated features for on-premise, hybrid and cloud implementations of Oracle Utilities products. The Oracle Utilities Application Framework continues to provide a flexible and wide ranging set of common services and technology to allow implementations the ability to meet the needs of their customers.  The latest release provides a wide range of new and updated capabilities to reduce costs and introduce exciting new functionality. The products ships with a complete listing of the changes and new functionality but here are some highlights: Improved REST Support - The REST support for the product has been enhanced in this release. It is now possible to register REST Services in Inbound Web Services as REST. Inbound Web Services definitions have been enhanced to support both SOAP and REST Services. This has the advantage that the registration of integration is now centralized and the server URL for the services can be customized to suit individual requirements. It is now possible to register multiple REST Services within a single Inbound Web Services to reduce costs in management and operations. Execution of the REST Services has been enhanced to use the Registry as the first reference for a service. No additional deployment effort is necessary for this capability. A separate article on this topic will provide additional information. Improved Web Registry Support for Integration Cloud Service - With the changes in REST and other integration changes such as Categories and supporting other adapters, the Web Service Catalog has been expanded to add support REST and other services directly for integration registration for use in the Oracle Integration Cloud. 
File Access Adapter - In this release a File Adapter has been introduced to allow implementations to parameterize all file integration to reduce costs of management of file paths and ease the path to the Oracle Cloud. In Cloud implementations, an additional adapter is available to allow additional storage on the Oracle Object Storage Cloud to supplement cloud storage for Oracle Utilities SaaS solutions. The File Access Adapter includes an Extendable Lookup to define alias and physical location attributes. That lookup can then be used an alias for file paths in Batch Controls, etc.. A separate article on this topic will provide additional information. Batch Start/End Date Time now part of Batch Instance Object - In past releases the Batch Start and End Dates and times where located as data elements with the thread attributes. This made analysis harder to perform. In this release these fields have been promoted as reportable fields directly on the Batch Instance Object for each thread. This will improve capabilities for reporting performance of batch jobs. For backward compatibility, these fields are only populated for new executions. The internal Business Service F1-GetBatchRunStartEnd has been extended to support the new columns and also detect old executions to return the correct values regardless. New Level of Service Algorithms - In past releases, Batch Level Of Service required the building of custom algorithms for checking batch levels. In this release additional base algorithms for common scenarios like Total Run Time, Throughput and Error Rate are now provided for use. Additionally, it is now possible to define multiple Batch Level Of Service algorithms to model complex requirements. The Health Check API has been enhanced to return the Batch Level Of Service as well as other health parameters. A separate article on this topic will provide additional information. 
Job Scope in DBMS_SCHEDULER interface - The DBMS_SCHEDULER Interface allowed for specification Batch Control and Global level of parameters as well as at runtime. In this release, it is possible to pre-define parameters within the interface at the Job level, allowing for support for control of individual instances Batch Controls that are used more than once across chains. Ad-hoc Recalculation of To Do Priority - In the past release of the Oracle Utilities Application Framework an algorithm to dynamically reassess ad recalculate a To Do Priority was introduced. In this release, it is possible to invoke this algorithm in bulk using the new provided F1-TDCLP Batch Control.  This can be used with the algorithm to reassess To Do's to improve manual processing. Introduction of a To Do Monitor Process and Algorithm - One of the issues with To Do's in the field has been that users can forget to manually close the To Do when the issue that caused the condition has been resolved. In this release a new batch control F1-TDMON and a new Monitor algorithm on the To Do Type has been added so that logic can be introduced to detect the resolution of the issue can lead to the product automatically closing the To Do. New Schema Editor - Based upon feedback from partners and customers, the usability and capabilities of the Schema Editor have been improved to provide more information as part of the basic views to reduce rework and support cross browser development. Process Flow Editor - A new capability has been added to the Oracle Utilities Application Framework to allow for complex workflows to be modeled and fully capable workflow introduced. This includes train support (including advanced navigation), saving incomplete work support, branching and object integration. 
This process flow editor was introduced internally successfully to use for our cloud automation in the Oracle Utilities Cloud Services Foundation and has now been introduced, in a new format, for use across the Oracle Utilities Application Framework based products. A separate article on this topic will provide additional information. Improved Google Chrome Support - This release introduces extensive Google Chrome for Business support. Check the availability with each of the individual Oracle Utilities Application Framework based products. New Cube Viewer - In the Oracle Utilities Market Settlements product we introduced a new Cube Viewer to embed advanced analytics into our products. That capability has been made generic and now included in the Oracle Utilities Application Framework so that products and implementations can now build their own cube analytical capabilities. In this release a series of new objects and ConfigTools objects have been introduce to build Cube Viewer based solutions. Note: The Cube Viewer has been built to operate independently of Oracle In-Memory Database support but would greatly benefit from use with Oracle In-Memory Database. A separate article on this topic will provide additional information. Object Erasure Support - To support various data privacy regulations introduced across the world, a new Object Erasure capability has been introduced to manage the erasure or obfuscation of master objects within the Oracle Utilities Application Framework based products. This capability is complementary to the Information Lifecycle Management (ILM) capability introduced to manage transaction objects within the product. A number of objects and ConfigTools objects have been introduced to allow implementations to add Object Erasure to their implementations. A separate article on this topic will provide additional information. 
Proactive Update ILM Switch Support - In past release, ILM eligibility and the ILM switch was performed in bulk exclusively by the ILM batch processes or using the Automatic Data Optimization (ADO) feature of the Oracle Database. To work more efficiently, it is now possible use the new BO Enter Status plug-in and BO Exit Status plug-in to proactively assess the eligibility and set the ILM switch as part of processing, thus reducing ILM workloads. Mobile Framework Auto Deploy Support - This releases includes a new optional parameter to auto deploy mobile content automatically when a deployment is saved. This can avoid the extra manual deployment step, if desired. Required Indicator on Legacy Screens - In past releases, the required indicator, based upon meta data, has been introduced for ConfigTools based objects, in this release it has been extended to Oracle Utilities Application Framework using legacy screens built using the Oracle Utilities SDK or custom JSP (that confirm to the standards required by the Oracle Utilities Application Framework). Note: Some custom JSP's may contain logic to prevent the correct display the required indicator. Oracle Identity Manager integration improved - In this release the integration with Oracle Identity Manager has been improved with multiple adapters supported and the parameters are now located as a Feature Configuration rather than properties settings. This allows the integration setup to be migrated using Configuration Migration Assistant. Outbound Message Mediator Improvements - In previous releases, implementations were required to use the Outbound Message Dispatcher (F1-OutmsgMediator) business services to send an outbound message without instantiating it but where the outbound message Business Object pre-processing algorithms need to be executed.  This business service orchestrated a creation and deletion of the outbound message, which is not desirable for performance reasons. 
The alternate business service Outbound Message Mediator (F1-OutmsgMediator) routes a message without instantiating anything, so is preferred when the outbound message should not be instantiated.  However, the Mediator did not execute the Business Object pre-processing algorithms.  In this release the Mediator business service has been enhanced to also execute the Business Object pre-processing algorithms. Deprecations - In this release a few technologies and capabilities will be removed as they were announced in previous releases. These include: XAI Servlet/MPL - After announcing the deprecation of XAI and MPL in 2012, the servlet and MPL software are no longer available in this release. XAI Objects are retained for backward compatibility and last minute migrations to IWS and OSB respectively. Batch On WebLogic - In the Oracle Cloud, batch threadpools were managed under Oracle WebLogic. Given changes to the architecture over the last few releases, the support for threadpools is no longer supported. As this functionality was never released for use on-premise customers, this change does not have any impact to on-premise customers. WebLogic Templates - With the adoption of Oracle WebLogic 12.2+, the necessity of custom WebLogic templates was no longer necessary. It is now possible to use the standard Fusion Middleware templates supplied with Oracle WebLogic with a few manual steps. These additional manual steps are documented in the new version of the Installation Guide supplied with the product. Customers may continue to use the Domain Builder supplied with Oracle WebLogic to build custom templates post Oracle Utilities Application Framework product installation. 
Customers should stop using the Native Installation and Clustering whitepapers for Oracle Utilities Application Framework V4.3.0.5.0 and above, as this information is now in the Installation Guide directly or in the Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) available from My Oracle Support. A number of additional articles going over some of these topics, as well as updates to key whitepapers, will be published over the next few weeks.


New Oracle Utilities Testing Accelerator (6.0.0.0)

I am pleased to announce the next chapter in automated testing solutions for Oracle Utilities products. In the past, some Oracle Utilities products have used Oracle Application Testing Suite with prebuilt content to provide an amazing functional and regression testing solution. Building upon that success, a new solution named the Oracle Utilities Testing Accelerator has been introduced: a new optimized and focused solution for Oracle Utilities products. The new solution has the following benefits: Component Based. As with Oracle's other testing solutions, this new solution is based upon testing components and flows, with flow generation and databank support. Those capabilities were popular with our existing testing solution customers and exist in expanded forms in the new solution. Comprehensive Content for Oracle Utilities. As with Oracle's other testing solutions, supported products provide pre-built content to significantly reduce the cost of adopting automation. In this solution, the number of products within the Oracle Utilities portfolio providing content has greatly expanded. This now includes both on-premise products as well as our growing portfolio of cloud-based solutions. Self Contained Solution. The Oracle Utilities Testing Accelerator architecture has been simplified to allow customers to quickly deploy the product with a minimum of fuss and prerequisites. Used by Product QA. The Oracle Utilities Product QA teams use this product on a daily basis to verify the Oracle Utilities products. This means that the content provided has been certified for use on supported Oracle Utilities products, which reduces the risk of adopting automation. Behavior-Driven Development Support. One of the most exciting capabilities introduced in this new solution is support for Behavior-Driven Development (BDD), which is popular with newer Agile-based implementation approaches.
One of the major goals of the new testing capability is to reduce rework from the Agile process in the building of test assets. This new capability introduces Machine Learning into the testing arena for generating test flows from the Gherkin syntax documentation produced by Agile approaches. A developer can reuse their Gherkin specifications to generate a flow quickly without the need for rework. As the capability uses Machine Learning, it can be corrected if the assumptions it makes are incorrect for the flow, and those corrections will be reused for any future flow generations. Selenium Based. The Oracle Utilities Testing Accelerator uses a Selenium-based scripting language for greater flexibility across the different channels supported by the Oracle Utilities products. The script is generated automatically and does not need any alteration to be executed correctly. Data Independence. As with Oracle's other testing products, data is supported independently of the flow and components. This translates into greater flexibility and greater levels of reuse in automated testing. It is possible to change data at any time during the process to explore greater possibilities in testing. Beyond Functional Testing. Whilst the focus of the Oracle Utilities Testing Accelerator is functional and/or regression testing, it is designed to be used beyond just functional testing. It can be used to perform testing in flexible scenarios including: Patch Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of product patches on business processes, using the flows as a regression test. Extension Release Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of releases of extensions from the Oracle Utilities SDK (via the migration tools in the SDK) or after a Configuration Migration Assistant (CMA) migration.
Sanity Testing. In the Oracle Cloud, the Oracle Utilities Testing Accelerator is being used to assess the state of a new instance of the product, including its availability and that the necessary data is set up, ensuring the instance is ready for use. Cross Oracle Utilities Product Testing. The Oracle Utilities Testing Accelerator supports flows that cross Oracle Utilities product boundaries to model end-to-end processes when multiple Oracle Utilities products are involved. Blue/Green Testing. In the Oracle Cloud, zero-outage upgrades are a key part of the solution offering. The Oracle Utilities Testing Accelerator supports the concept of blue/green deployment testing, allowing multiple versions to be tested to facilitate smooth upgrade transitions. Lower Skills Required. The Oracle Utilities Testing Accelerator has been designed with testing users in mind. Traditional automation involves recording with a scripting language that embeds the data and logic into a script, which a programmer can then alter to make it more flexible. The Oracle Utilities Testing Accelerator uses an orchestration metaphor to allow a lower-skilled person, not a programmer, to build test flows and generate, no touch, the scripts to be executed. New Architecture. The Oracle Utilities Testing Accelerator has been re-architected to be optimized for use with Oracle Utilities products: Self Contained Solution. The new design is built around simplicity; as much as possible, the product is designed to be used with minimal configuration. Minimal Prerequisites. The Oracle Utilities Testing Accelerator only requires Java to execute and a database schema to store its data. Allocations for non-production use under existing Oracle Utilities product licenses are sufficient for this solution; no additional database licenses are required by default. Runs on the Same Platforms as Oracle Utilities Applications.
The solution is designed to run on the same operating system and database combinations supported with the Oracle Utilities products. The architecture is simple: Product Components. A library of components from the Product QA teams, ready to use with the Oracle Utilities Testing Accelerator. You decide which libraries you want to enable. Oracle Utilities Testing Accelerator Workbench. A web-based design toolset to manage and orchestrate your test assets. It includes the following components: Embedded Web Application Server. A preset simple configuration and runtime to house the Workbench. Testing Dashboard. A new home page outlining the state of the components and flows installed, as well as notifications for any approvals and assets ready for use. Component Manager. A Component Manager that allows you to add custom components and manage the components available for use in flows. Flow Manager. A Flow Manager allowing testers to orchestrate flows and manage their lifecycle, including generation of Selenium assets for execution. Script Management. A script manager used to generate scripts and databanks for flows. Security. A role-based model to support administration, development of components/flows, and approvals of components/flows. Oracle Utilities Testing Accelerator Schema. A set of database objects that can be stored in any edition of Oracle (PDB or non-PDB is supported) for storing assets and configuration. Oracle Utilities Testing Accelerator Eclipse-Based Plug-In. An Oxygen-compatible Eclipse plug-in that executes the tests, including recording of performance and payloads for detailed test analysis. New Content. The Oracle Utilities Testing Accelerator has expanded the number of products supported in this release and now includes Oracle Utilities Application Framework based products and Cloud Services products. New content will be released on a regular basis to provide additional coverage for components and a set of prebuilt flows that can be used across products.
Note: Refer to the release notes for the supported Oracle Utilities products and the assets provided. Conclusion. The Oracle Utilities Testing Accelerator provides a comprehensive testing solution, optimized for Oracle Utilities products, with content provided by Oracle to allow implementations to realize lower cost and lower risk adoption of automated testing. For more information about this solution, refer to the Oracle Utilities Testing Accelerator Overview and Frequently Asked Questions (Doc Id: 2014163.1) available from My Oracle Support. Note: The Oracle Utilities Testing Accelerator is a replacement for the older Oracle Functional Testing Advanced Pack for Oracle Utilities. Customers on that product should migrate to this new platform. Utilities to convert any custom components from the Oracle Application Testing Suite platform are provided with this tool.


Updated Technical Best Practices

The Oracle Utilities Application Framework Technical Best Practices have been revamped and updated to reflect new advice, new versions and the cloud implementations of the Oracle Utilities Application Framework based products. The following summary of changes has been performed: Formatting Change. The whitepaper uses a new template for the content, which is being rolled out across Oracle products. Removed Out-of-Date Advice. Advice that applied to older versions and is no longer appropriate has been removed from the document. This is ongoing, to keep the whitepaper current and optimal. Added Configuration Migration Assistant Advice. With the increased emphasis on the use of CMA, we have added a section outlining some techniques on how to optimize the use of CMA in any implementation. Added Optimization Techniques Advice. With the implementation of the cloud, there are various techniques we use to reduce our costs and risks on that platform. We added a section outlining some common techniques that can be reused for on-premise implementations. This is based upon a series of talks given at customer forums over the last year or so. Added Preparing Your Implementation For the Cloud Advice. This is a new section outlining the various techniques that can be used to prepare an on-premise implementation for moving to the Oracle Utilities Cloud SaaS Services. This is also based upon that series of talks. The new version of the whitepaper is available as Technical Best Practices (Doc Id: 560367.1) from My Oracle Support.


Oracle Utilities and the Oracle Database In-Memory Option

A few years ago, Oracle introduced an In-Memory option for the database to optimize analytical style applications. In Oracle Database 12c and above, the In-Memory option has been enhanced to support other types of workloads. All Oracle Utilities products are now certified to use the Oracle In-Memory option, on Oracle Database 12c and above, to allow customers to optimize the operational and analytical aspects of the products. The Oracle In-Memory option is a memory-based column store that co-exists with the existing caching schemes used within Oracle to deliver faster access speeds for complex queries across the products. It is transparent to the product code and can be implemented with a few simple changes to the database to nominate the objects to store in memory. Once configured, the Oracle Cost Based Optimizer becomes aware of the data loaded into memory and adjusts the execution plan directly, delivering much better performance in almost all cases. There are just a few changes that need to be made: Enable the In-Memory Option. The In-Memory capability is already in the database software (no relinking necessary) but is disabled by default. After licensing the option, you can enable it by setting the amount of the SGA you want to use for the In-Memory store, via a few database initialization parameters. Remember to ensure that the SGA is large enough to cover the existing memory areas as well as the In-Memory store. Enable Adaptive Plans. To tell the optimizer to take the In-Memory option into account, you need to enable Adaptive Plans. This is flexible: you can turn off In-Memory support without changing the In-Memory settings. Decide the Objects to Load into Memory. Now that the In-Memory option is enabled, the next step is to decide what is actually loaded into memory. Oracle provides an In-Memory Advisor that analyzes workloads to make suggestions.
Alter Objects to Load into Memory. Create the SQL DDL statements that instruct the loading of objects into memory. These include priority and compression options for the objects, to maximize the flexibility of the option. The In-Memory Advisor can be configured to generate these statements from its analysis. No changes to the code are necessary to use the option to speed up common queries in the products as well as analytical queries. A new Implementing Oracle In-Memory Option (Doc Id: 2404696.1) whitepaper, available from My Oracle Support, has been published which outlines the details of this process as well as specific guidelines for implementing this option. PS. The Oracle In-Memory option has been significantly enhanced in Oracle Database 18c.
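To illustrate the shape of the DDL produced in the last step, here is a minimal sketch that builds `ALTER TABLE ... INMEMORY` statements for a candidate list of tables. The table names, priorities and compression levels below are purely illustrative assumptions, not advisor output; in practice you would take the statements generated by the In-Memory Advisor for your own workload.

```python
# Sketch: building In-Memory DDL for a candidate list of tables.
# Table names, priorities and compression levels are hypothetical examples.

def inmemory_ddl(table, priority="NONE", compression="FOR QUERY LOW"):
    """Build an ALTER TABLE statement that loads a table into the
    In-Memory column store with the given compression and priority."""
    return (f"ALTER TABLE {table} INMEMORY "
            f"MEMCOMPRESS {compression} PRIORITY {priority}")

# Hypothetical candidates: a hot table loaded eagerly with light
# compression, and a colder table loaded lazily with heavy compression.
candidates = [
    ("CI_BILL", "HIGH", "FOR QUERY LOW"),
    ("CI_FT",   "LOW",  "FOR CAPACITY HIGH"),
]

for table, priority, compression in candidates:
    print(inmemory_ddl(table, priority, compression))
```

Running the statements (via SQL*Plus or any SQL client) is all that is needed; as noted above, no product code changes are required.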


Data Management with Oracle Utilities products

One of the most common questions I receive is about how to manage data volumes in the Oracle Utilities products. The Oracle Utilities products are designed to scale no matter how much data is present in the database, but obviously the storage costs and management of large amounts of data are not optimal. A few years ago we adopted the Information Lifecycle Management (ILM) capabilities of the Oracle Database as well as developed a unique spin on the management of data. Like biological life, data has a lifecycle. It is born when it is created; it has an active life while the business uses or manipulates it; it goes into retirement but is still accessible; and eventually it dies when it is physically removed from the database. The length of that lifecycle will vary from data type to data type and implementation to implementation. The length of the life is dictated by its relevance to the business, company policies and even legal or government legislation. The data management (ILM) capabilities of Oracle Utilities take this into account: Data Retention Configuration. The business configures how long the active life of each individual data type is for their business. This defines what is called the Active Period. This is when the data needs to be in the database and accessible to the business for update and active use. ILM Eligibility Rules. Once the data retention period is reached, before the data can enter retirement, the system needs to know that anything outstanding, from a business perspective, has been completed. This is the major difference from most data management approaches. I hear DBAs saying that they would rather the data were just deleted after a specific period. Whilst that would cover most situations, it would not cover a situation where the business is not finished with the data. Let's explain with an example. In CCB, customers are billed, and you can also record a complaint against a bill if there is a dispute.
Depending on the business rules and legal processes, an old bill may be in dispute. You should not remove anything related to that bill until the complaint is resolved, regardless of its age. Legal issues can be drawn out for lots of reasons. If you used a retention rule only, then the data used in the complaint would potentially be lost. In the same situation, the base ILM Eligibility rules would detect something outstanding and bypass the applicable records. Remember, these rules are protecting the business and ensuring that the ILM solution adheres to the complex rules of the business. ILM Features in the Database. Oracle, like a lot of vendors, introduced ILM features into the database to help, what I like to call, storage-manage the data. This provides a set of flexible options and features that allow database administrators a full range of possibilities for their data management needs. Here are the capabilities (refer to the Database Administration Guide for details of each): Partitioning. One of the most common capabilities is the Partitioning option. This allows a large table to be split up, storage wise, into parts or partitions using a partitioned tablespace. This breaks the table into manageable pieces and allows the database administrator to optimize the storage using hardware and/or software options. Some hardware vendors have inbuilt ILM facilities, and this option allows you to target specific data partitions at different hardware capabilities or just split the data into tranches (for example, to separate the retirement stages of data). Partitioning is also a valid option if you want to use hardware storage-tiering solutions to save money. In this scenario you would put the less used data on cheaper storage (if you have it) to save costs. For partitioning advice, refer to the product DBA Guides, which outline the most common partitioning schemes used by customers. Advanced Compression.
One of the popular options is the Advanced Compression option. This allows administrators to set compression rules against the database based upon data usage. The compression is transparent to the product, and compressed data can be co-located with uncompressed data with no special processing needed by the code. The compression covers a wide range of techniques, including CLOB compression as well as data compression. Customers using Oracle Exadata can also use Hybrid Columnar Compression (HCC) for hardware-assisted compression for greater flexibility. Heat Map. One of the features added in Oracle Database 12c and above to help DBAs is the Heat Map. This is a facility where the database tracks the usage patterns of the data in your database and gives you feedback on the activity of the individual rows in the database. This is an important tool, as it helps the DBA identify which data is actually being used by the business, and it is useful for determining what is important to optimize. It is even useful in the active period to determine which data can be safely compressed because it has reduced update activity against it. It is part of the autonomous capabilities of the database. Automatic Data Optimization. Automatic Data Optimization (ADO) is a feature of the database that allows database administrators to implement rules to manage storage capabilities based upon various metrics, including the Heat Map. For example, the DBA can put in a rule that says if data in a specific table is not touched for X months, then it should be compressed. The rules cover compression, partition movement, storage features, etc., and can be triggered by the Heat Map or any other valid metric (even SQL procedure code can be used). Transportable Tablespaces. One of the most expensive things you can do in the database is issue a DELETE statement.
To avoid this in bulk in any ILM-based solution, Oracle offers the ability to use the Partitioning option and create a virtual trash bin via a transportable tablespace. Using ADO or other capabilities you can move data into this tablespace and then, using basic commands, switch off the tablespace to do bulk removal quickly. An added advantage is that you can archive that tablespace and reconnect it later if needed. The Oracle Utilities ILM solution is comprehensive and flexible, using both a business aspect, where the business defines its retention and eligibility rules, and the various ILM capabilities of the database, where the database administrator factors in their individual site's hardware and support policies. It is not as simple as removing data in most cases, and the Oracle Utilities ILM solution reduces the risk of managing your data, taking into account both your business and storage needs. For more information about the Oracle Utilities ILM solution, refer to the ILM Planning Guide (Doc Id: 1682436.1) available from My Oracle Support, and read the product DBA Guides for product-specific advice.
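The interplay between the retention period and the eligibility rules described above can be sketched in a few lines of code. This is a conceptual illustration only, not the product's actual schema or algorithm: the field names, the two-year retention period, and the "open dispute" check are all hypothetical stand-ins for the configured retention and ILM eligibility rules.

```python
from datetime import date, timedelta

# Conceptual sketch: a record past its retention period is only flagged
# for retirement if nothing is outstanding against it (e.g. an open
# complaint on an old bill). All names and values here are hypothetical.

RETENTION = timedelta(days=365 * 2)  # illustrative 2-year active period

def ilm_eligible(record, today):
    """Past retention AND nothing outstanding -> eligible for retirement."""
    past_retention = (today - record["created"]) > RETENTION
    return past_retention and not record["has_open_dispute"]

bills = [
    {"id": "B1", "created": date(2014, 1, 1), "has_open_dispute": False},
    {"id": "B2", "created": date(2014, 1, 1), "has_open_dispute": True},
    {"id": "B3", "created": date(2018, 1, 1), "has_open_dispute": False},
]

today = date(2018, 6, 1)
eligible = [b["id"] for b in bills if ilm_eligible(b, today)]
print(eligible)  # only B1: old enough, nothing outstanding
```

Note how B2 is bypassed despite its age: a retention-only rule would have removed the disputed bill's data, which is exactly the situation the eligibility rules protect against.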



EMEA Edge Conference 2018

I will be attending the EMEA Oracle Utilities Edge Conference on the 26 - 27 June 2018 in the Oracle London office. This year we are running an extended set of technical sessions around on-premise implementations and the Oracle Utilities Cloud Services. This forum is open to Oracle Utilities customers and Oracle Utilities partners. The sessions mirror the technical sessions held at the conference in the USA earlier this year, with the following topics: Reducing Your Storage Costs Using Information Life-cycle Management. With increasing storage costs, satisfying business data retention rules can be challenging. Using the Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs. Integration using Inbound Web Services and REST with Oracle Utilities. Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used. Optimizing Your Implementation. Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost Of Ownership. Testing Your On-Premise and Cloud Implementations. Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on-premise and cloud. Securing Your Implementations. With the increase in cybersecurity and privacy concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option. The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance. Developing Extensions using Groovy. Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: This session will be very technical in nature. Ask Us Anything Session. Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps. Note: These sessions are not recorded, and materials are not distributed outside this forum. This year we have decided to not only discuss capabilities but also give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs, for you to use as a template for on-premise and hybrid implementations. See you there if you are attending. If you wish to attend, contact your local Oracle Utilities sales representative for details of the forum and the registration process.


Why Is the XAI Staging Not in the OSB Adapters?

With the replacement of the Multi-Purpose Listener (MPL) by the Oracle Service Bus (OSB), with its additional OSB Adapters for Oracle Utilities Application Framework based products, customers have asked about transaction staging support. One of the most common questions I have received is why there is no OSB Adapter for the XAI Staging table. Let me explain the logic. One Pass versus Two Passes. The MPL processed its integration by placing the payload from the integration into the XAI Staging table. The MPL would then process the payload in a second pass. The staging record would be marked as complete or error. The complete ones would need to be removed using the XAI Staging purge process, run separately. You then used the XAI Staging portals to correct the data coming in for the ones in error. On the other hand, the OSB Adapters treat the product as a "black box" (i.e. like any other product): they call the relevant service directly (for inbound) and poll the relevant Outbound or NDS table directly for outbound processing records. This is a single-pass process rather than the multiple passes the MPL used. OSB is far more efficient and scalable than the MPL because of this. Error Hospital. The idea behind the XAI Staging is that error records remain there for possible correction and reprocessing. This was a feature of the MPL. In the OSB world, if a process fails for any reason, the OSB can be configured to act as an Error Hospital. This is effectively the same as the MPL, except you can configure the hospital to ignore any successful executions, which reduces storage. In fact, OSB has features where you can detect errors anywhere in the process and determine which part of the integration was at fault in a more user-friendly manner. OSB effectively already includes the staging functionality, so adding it to the adapters would just duplicate processing. The only difference is that error correction, if necessary, is done within the OSB rather than the product.
More Flexible Integration Model. One of the major reasons to move from the MPL to the OSB is the role that the product plays in integration. If you look at the MPL model, any data that was passed to the product from an external source automatically became the responsibility of the product (that is how most partners implemented it). This means the source system had no responsibility for the cleanliness of its data, as you had the means of correcting the data as it entered the system. The source system could send bad data over and over, and as you dealt with it in the staging area, that would increase costs on the target system. This is not ideal. In the OSB world, you can choose your model. You can continue to use the Error Hospital to keep correcting the data if you wish, or you can configure the Error Hospital to compile the errors and send them back, using any adapter, to the source system for correction. With OSB there is a choice; the MPL did not really give you a choice. With these considerations in place, it was not efficient to add an XAI Staging Adapter to OSB, as it would duplicate effort and decrease efficiency, which negatively impacts scalability.


Capacity Planning Connections

Customers and partners regularly ask me questions about the capacity of traffic on their Oracle Utilities product implementations and how best to handle their expected volumes. The key to answering this question is to understand a number of key concepts: Capacity is related to the number of users, threads, etc. (let's call them actors, to be generic) that are actively using the system. As the Oracle Utilities Application Framework is stateless, actors only consume resources when they are active on any part of the architecture. If they are idle, they are not consuming resources. This is important, as the number of logged-on users does not dictate capacity. The goal of capacity planning is to have enough resources to handle peak loads and to minimize capacity when the load drops to the minimum expected. This makes sure you have enough for the busy times but also that you are not wasting resources. Capacity is not just online users; it is also batch threads, Web Service clients, REST clients and mobile clients (for mobile application interfaces). It is a combination of each channel, and each channel can be monitored individually to determine its capacity. This is the advice I tend to give customers who want to monitor capacity: For channels using Oracle WebLogic, you want to use Oracle WebLogic MBeans such as ThreadPoolRuntimeMBean (using ExecuteThreads) for protocol-level monitoring. If you want to monitor each server individually to get an idea of capacity, then you might want to try ServerChannelRuntimeMBean (using ConnectionsCount). In the latter case, look at each channel individually to see what your traffic looks like. For batch, when using it with Oracle Coherence, use the inbuilt batch monitoring API (via JMX) and the sum of the NumberOfMembers attribute to determine the active number of threads, etc., running in your cluster. Refer to the Server Administration Guide shipped with the Oracle Utilities product for details of this metric and how to collect it.
For database connections, it is more complex, as connection pools (regardless of the technique used) rely on a maximum size limit. If this limit is exceeded, then you want to know how many pending requests are waiting, to detect how much bigger the pool should be. The calculations are as follows: Oracle WebLogic JDBC Data Sources: Use the MBean JDBCDataSourceRuntimeMBean with the CurrCapacity + WaitingForConnectionCurrentCount attributes. Oracle Universal Connection Pool (12.x): Use the UCP MBean UniversalConnectionPoolMBean with the (maxPoolSize – availableConnectionsCount) + PendingRequestsCount attributes. Note: You might notice that the database active connections are actually calculations. This is because the metrics capture the capacity within a limit, so they need to take into account when the limit is reached and requests are waiting. The above metrics should be collected at peak and non-peak times. This can be done manually or using Oracle Enterprise Manager. Once the data is collected, it is recommended to use it for the following: Connection Pool Sizes – The connection pools should be sized using the minimum values experienced, and the maximum values with some tolerance for growth. Number of Servers to Set Up – For each channel, determine the number of servers based upon these numbers and the capacity of each server. Typically, a minimum of two servers should be set up for a minimal high-availability solution. Refer to the Oracle Maximum Availability Architecture for more advice.
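The two connection-demand calculations above can be expressed as simple formulas. The sketch below mirrors the MBean attribute names mentioned in the text; the sample values are made up for illustration and are not from any real implementation.

```python
# Sketch of the connection-demand calculations described above.
# Function and parameter names mirror the MBean attributes in the text;
# the sample values are illustrative only.

def weblogic_demand(curr_capacity, waiting_for_connection_current_count):
    """Demand on a WebLogic JDBC data source: connections currently
    in the pool plus requests queued waiting for one."""
    return curr_capacity + waiting_for_connection_current_count

def ucp_demand(max_pool_size, available_connections_count,
               pending_requests_count):
    """Demand on a UCP pool: borrowed connections (max size minus
    those still available) plus requests waiting for a connection."""
    return (max_pool_size - available_connections_count) + \
        pending_requests_count

# Peak sample: a pool of 50 fully drawn down with 5 requests waiting
# shows a demand of 55, i.e. the pool limit is too small at peak.
print(ucp_demand(max_pool_size=50, available_connections_count=0,
                 pending_requests_count=5))   # 55
print(weblogic_demand(curr_capacity=40,
                      waiting_for_connection_current_count=0))  # 40
```

Sampling these values at peak and non-peak times, as recommended above, gives the range from which to size the pool minimum and maximum.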


Managing Your Environments

With the advent of easier and easier techniques for creating and maintaining Oracle Utilities environments, the number of environments will start to grow, increasing costs and introducing more risk into a project. This applies to on-premise as well as cloud implementations, though cloud implementations have more visible costs. An environment is a copy of the Oracle Utilities product (one software installation and one database at a minimum). To minimize your costs and optimize the number of environments to manage, there are a few techniques that may come in handy:

Each Environment Must Be On Your Plan - Environments typically support an activity or group of activities on some implementation plan. If an environment does not support any activities on a plan, it should be questioned.

Each Environment Must Have An Owner - When I started working in IT a long time ago, the CIO of the company I worked for noticed the company had over 1500 IT systems. To rationalize, he suggested shutting them all down and seeing who screamed to have them back on. That way he could figure out what was important to which part of the business. While this technique is extreme, it raises an interesting thought: if you can identify the owner of each environment, then that owner is responsible for determining the life of that environment, including its availability and performance. Consider removing environments not owned by anyone.

Each Environment Should Have A Birth Date And End Date - As an extension of the first point, each environment should have a date when it is needed and a date when it is no longer needed. It is possible for an environment to be perpetual, for example Production, but generally environments are needed for a particular time frame. For example, you might be creating environments to support progressive builds, where you keep a window of builds (a minimal set, I hope). That would dictate the life-cycle of the environment.
This is very common in cloud environments, where you can reserve capacity dynamically, so you can impose time limits to enforce regular reassessment.

Reuse Environments - I have been on implementations where individual users wanted their own personal environments. While this can be valid in some situations, it is much better to encourage reuse of environments across users and across activities. If you plan out your implementation, you can identify how best to reuse environments to save time and costs.

Ask Questions; Don't Assume - When agreeing to create and manage an environment, ask the above questions and more to ensure that the environment is needed and will support the project appropriately for the right amount of time.

I have been on implementations where 60 environments existed initially, and after applying these and other techniques we were able to reduce that to around 20. That saved a lot of costs. So why the emphasis on keeping your environments to a minimal number, given that the techniques for building and managing them are getting easier? No matter how easy it is to build one, keeping an environment consumes resources (computing and people), and keeping their number at a minimum keeps costs minimized. The techniques outlined above apply to Oracle Utilities products but can be applied to other products with appropriate variations. For additional advice on this topic, refer to the Software Configuration Management Series (Doc Id: 560401.1) whitepapers available from My Oracle Support.
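The ownership and end-date checks above lend themselves to a simple automated review. As an illustration only (the environment names, owners and dates below are made up, and in practice the register would come from your environment tracking spreadsheet or CMDB), a script can flag environments due for reassessment:

```python
from datetime import date

# Hypothetical environment register (illustrative data only).
environments = [
    {"name": "DEV1", "owner": "Build Team", "end_date": date(2018, 6, 30)},
    {"name": "UAT1", "owner": None,         "end_date": date(2018, 3, 31)},
    {"name": "PROD", "owner": "Operations", "end_date": None},  # perpetual
]

def needs_review(env, today):
    # Flag environments with no owner, or whose end date has passed.
    if env["owner"] is None:
        return True
    return env["end_date"] is not None and env["end_date"] < today

flagged = [e["name"] for e in environments if needs_review(e, date(2018, 7, 1))]
print(flagged)  # ['DEV1', 'UAT1']
```

DEV1 is flagged because its end date has passed; UAT1 because nobody owns it; PROD, owned and perpetual, survives the review.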


Clarification of XAI, MPL and IWS

A few years ago, we announced that XML Application Integration (XAI) and the Multi-Purpose Listener (MPL) were being retired from the product and replaced with Inbound Web Services (IWS) and Oracle Service Bus (OSB) adapters. In the next service pack of the Oracle Utilities Application Framework, XAI and MPL will finally be removed from the product. The following applies:

The MPL software and XAI servlet will be removed from the code. This is the final step in the retirement process.

The tables associated with XAI and MPL will not be removed from the product, for backward compatibility with the newer adapters. Maintenance functions that are retained will be prefixed with Message rather than XAI. Menu items that are not retained will be disabled by default. Refer to the release notes of the service packs (latest and past) for details of the menu item changes.

Customers using XAI should migrate to Inbound Web Services using the following guidelines:

XAI services using the legacy Base and CorDaptix adapters will be automatically migrated to Inbound Web Services. These services will be auto-deployed using the Inbound Web Services Deployment online screen or the iwsdeploy utility.

XAI services using the Business adapter can either have their definitions migrated manually to Inbound Web Services, or use a technique similar to the one outlined in Converting your XAI Services to IWS using scripting. Partners should take the opportunity to rationalize their number of web services using the multi-operation capability in Inbound Web Services.

XAI services using any adapter other than those listed above cannot be migrated, as they are typically internal services for use with the MPL.

Customers using the Multi-Purpose Listener should migrate to Oracle Service Bus with the relevant adapters installed.
There are a number of key whitepapers that can assist in this process: Web Services Best Practices (Doc Id: 2214375.1), Migrating from XAI to IWS (Doc Id: 1644914.1) and Oracle Service Bus Integration (Doc Id: 1558279.1).



Using the Infrastructure Version of Oracle WebLogic for Oracle Utilities Products

When using Oracle Utilities Application Framework V4.3.x with any Oracle Utilities product, you need to use the Oracle Fusion Middleware 12c Infrastructure version of Oracle WebLogic, not the vanilla release of Oracle WebLogic. The Infrastructure version contains the Java Required Files (JRF) profile, which is used by the Oracle Utilities Application Framework to display the enhanced help experience and for standardization within the Framework. The installation experience for the Infrastructure version is the same as for the vanilla Oracle WebLogic version, but it contains the applyJRF profile, which applies the extra functionality and libraries necessary for the Oracle Utilities Application Framework to operate. The Infrastructure version contains the following additional functionality:

An additional set of Java libraries, typically used by Oracle products to provide standard connectors and integration to Oracle technology.

Diagnostic frameworks (via the WebLogic Diagnostic Framework) that can be used with Oracle Utilities products to proactively detect issues and provide diagnostic information to reduce problem resolution times. This requires the profile to be installed and enabled on the domain post release. The standard Fusion Diagnostic Framework can be used with Oracle Utilities products.

Fusion Middleware Control, shipped as an alternative console for advanced configuration and monitoring.

As with all Oracle software, the Oracle Fusion Middleware 12c Infrastructure software is available from the Oracle Software Delivery Cloud.



Optimizing CMA - Linking the Jobs

One of the recent changes to the Configuration Migration Assistant (CMA) is the ability to configure the individual jobs to work as a group, reducing the amount of time and effort in migrating configuration data from a source system to a target. This is a technique we use in our Oracle Utilities cloud implementations to reduce costs. After this configuration is complete, you just have to execute the F1-MGDIM (Migration Data Set Import Monitor) and F1-MGDPR (Migration Data Set Export Monitor) jobs to complete all your CMA needs. The technique is available for Oracle Utilities Application Framework V4.3.0.4.0 and above, using some new batch control features: changing the Enter algorithms on the state transitions, and setting up Post Processing algorithms on the relevant batch controls. The latter kicks off each process within the same execution, removing the need to execute each process individually.

Set Enter Algorithms

The first step is to configure the import process, which is a multi-step process, to auto-transition data where necessary to save time. This is done on the F1-MigrDataSetImport business object by setting the Enter algorithm on the following states:

Status        Enter Algorithm
PENDING       F1-MGDIM-SJ
READY2COMP    F1-MGOPR-SJ
READY2APPLY   F1-MGOAP-SJ
APPLYING      F1-MGTAP-SJ
READYOBJ      F1-MGOPR-SJ
READYTRANS    F1-MGTPR-SJ

Save the changes.

Set Post Processing Algorithms

The next step is to set the Post Processing algorithms on the import jobs to instruct the Monitor to run multiple steps within its execution:

Batch Control   Post Processing Algorithm
F1-MGOPR        F1-MGTPR-NJ
F1-MGTPR        F1-MGDIM-NJ
F1-MGOAP        F1-MGDIM-NJ (*)
F1-MGTAP        F1-MGDIM-NJ (*)

(*) Note: For multi-lingual solutions, consider adding an additional Post Processing algorithm, F1-ENG2LNGSJ, to copy any missing language entries.

Now you can run the monitors for import and export with minimal interaction, which simplifies the process.
Note: To take full advantage of this new configuration, enable Automatically Apply on Imports.
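Since the configuration above is just data, it can help to capture the two mappings somewhere checkable before keying them in. A small cross-check sketch (Python, purely illustrative; the status and algorithm codes come from the tables above):

```python
# Enter algorithms set on the F1-MigrDataSetImport business object states.
enter_algorithms = {
    "PENDING":     "F1-MGDIM-SJ",
    "READY2COMP":  "F1-MGOPR-SJ",
    "READY2APPLY": "F1-MGOAP-SJ",
    "APPLYING":    "F1-MGTAP-SJ",
    "READYOBJ":    "F1-MGOPR-SJ",
    "READYTRANS":  "F1-MGTPR-SJ",
}

# Post Processing algorithms set on the import batch controls.
post_processing = {
    "F1-MGOPR": ["F1-MGTPR-NJ"],
    "F1-MGTPR": ["F1-MGDIM-NJ"],
    "F1-MGOAP": ["F1-MGDIM-NJ"],  # add F1-ENG2LNGSJ for multi-lingual solutions
    "F1-MGTAP": ["F1-MGDIM-NJ"],  # add F1-ENG2LNGSJ for multi-lingual solutions
}

# Print the chain for review before configuring it in the product.
for state, algorithm in enter_algorithms.items():
    print(state, "->", algorithm)
```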


Oracle Utilities Application Framework V4.3.0.5.0 Release Summary

The latest release of the Oracle Utilities Application Framework, namely 4.3.0.5.0 (or 4.3 SP5 for short), will be included in new releases of Oracle Utilities products over the next few months. This release is quite diverse, with a range of new and improved capabilities that can be used by implementations of the new releases. The key features in the release include the following:

Mobile Framework release - The initial release of a new REST-based channel allowing Oracle Utilities products to provide mobile device applications. This release is a port of the Mobile Communication Platform (MCP), used in the Oracle Mobile Workforce Management product, to the Oracle Utilities Application Framework. This initial release is restricted to allowing Oracle Utilities products to provide mobile experiences for use within an enterprise. As with other channels in the Oracle Utilities Application Framework, it can be deployed alone or in conjunction with other channels.

Support for Chrome for Business - In line with Oracle direction, the Oracle Utilities Application Framework supports Chrome for Business as a browser alternative. A new browser policy, in line with Oracle direction, has been introduced to clarify the support arrangements for Chrome and other supported browsers. Check individual product release notes for supported versions.

Improved Security Portal - To reduce the effort in managing security definitions within the product, the application service portal has been extended to show the secured objects, or the objects an application service is related to.

Attachment Changes - In the past, adding attachments to an object required custom UI maps to link attachment types to objects. In this release, a generic zone has been added, reducing the need for custom UI maps. The attachment object now also records the extension of the attachment, to reduce issues where an attachment type can have multiple extensions (e.g. DOC vs DOCX).
Support for File Imports in Plug-In Batch - In past releases, Plug-In Batch was introduced as a configuration-based approach replacing the need for Java programming in batch; SQL processing and file exports were supported. In this release, importing files in CSV, fixed or XML format is now supported using Plug-In Batch (using Groovy-based extensions). Samples are supplied with the product that can be copied and altered accordingly.

Improvements in identifying related To Do's - The logic determining related To Do's has been enhanced to provide additional mechanisms for finding related To Do's, to improve closing related work. This will allow a wider range of To Do's to be found than previously.

Web Service Categories - To aid in API management (e.g. when using Integration Cloud Service and other cloud services), web service categories can be attached to Inbound Web Services, Outbound Message Types and legacy XAI services that are exposed via Inbound Web Services. A given web service or outbound message can be associated with more than one category. Categories are supplied with the product release, and custom categories can be added.

Extended Oracle Web Services Manager support - In past releases, Oracle Web Services Manager could provide additional transport and message security for Inbound Web Services. In this release, Oracle Web Services Manager support has been extended to include Outbound Messages and REST services.

Outbound Message Payload Extension - In this release it is possible to include the Outbound Message Id as part of the payload, as a reference for use in the target system.

Dynamic URL support in Outbound Messages - In the past, Outbound Message destinations were static to the environment. In this release, the URL used for the destination can vary according to the data, or be dynamically assembled programmatically if necessary.
SOAP Header support in Outbound Messages - In this release it is possible to dynamically set SOAP Header variables in Outbound Messages.

New Groovy Imports step type - A new step type has been introduced to define classes to be imported for use in Groovy members. This promotes reuse and allows for coding without the need for fully qualified package names in Groovy Library and Groovy Member step types.

New Schema Designer - A newly redesigned Schema Editor has been introduced to reduce total cost of ownership and improve schema development. Color coding is now included in the raw format editor.

Oracle JET library optimizations - To improve integration with the Oracle JET libraries used by the Oracle Utilities Application Framework, a new UI map fragment has been introduced for inclusion in any JET-based UI map, to reduce maintenance costs.

YUI library removal - With the desupport of the YUI libraries, they have been removed from this release of the Oracle Utilities Application Framework. Any custom code directly referencing the YUI libraries should use the equivalent Oracle Utilities Application Framework function.

Proxy settings now at JVM level - In past releases, proxy settings were required on individual connections where needed. In this release, the standard HTTP proxy JVM options are now supported at the container/JVM layer, to reduce maintenance costs.

This is just a summary of some of the new features in the release. A full list is available in the release notes of the products using this service pack. Note: Some of these enhancements have been backported to past releases. Check My Oracle Support for those patches. Over the next few weeks, I will be writing articles about a few of these enhancements to illustrate the new capabilities.



Edge Conference 2018 is coming - Technical Sessions

It is that time of year again: Customer Edge conference time. This year we will once again hold a technical stream focusing on the Oracle Utilities Application Framework and related products, and once again I will be holding the majority of the sessions at the various conferences. The sessions this year are focused on giving valuable advice as well as a window into our future plans for the various technologies we are focusing upon. As usual, there will be a general technical session covering our road map, as well as a specific set of sessions targeting important topics. The technical sessions planned for this year include:

Reducing Your Storage Costs Using Information Life-cycle Management - With increasing storage costs, satisfying business data retention rules can be challenging. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.

Integration using Inbound Web Services and REST with Oracle Utilities - Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where each is best used.

Optimizing Your Implementation - Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce total cost of ownership.

Testing Your On-Premise and Cloud Implementations - Our Oracle testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans, both on premise and in the cloud.
Securing Your Implementations - With the increase in cybersecurity concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.

Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option - The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.

Mobile Application Framework Overview - The Oracle Utilities Application Framework has introduced a new Mobile Framework for use in the Oracle Utilities products. This session gives an overview of the mobile framework capabilities for future releases.

Developing Extensions using Groovy - Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the ways Groovy can be used in building extensions. Note: This session will be very technical in nature.

Ask Us Anything Session - Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

This year we have decided not only to discuss capabilities but also to give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs, for you to use as a template for on-premise and hybrid implementations. For customers and partners interested in attending the USA Edge Conference, registration is available.


Happy New Year to my blog readers

Welcome to 2018, ShortenSpot readers. This year is looking like another exciting year for the Oracle Utilities Application Framework and a new direction for the blog overall. In the past the blog has been a mixture of announcements and some advice with examples. While it will still provide important technical announcements, this year we plan to have lots of exciting advice, with plenty of example code to illustrate some amazing features you can use in cloud, hybrid and on-premise implementations, to inspire you to use the facilities provided to you. This year we will also be doing a major refit of all the whitepapers, including rationalizing their number (it was fast approaching 50 at one stage) and making them more relevant, with more examples. This will also remove the duplication those whitepapers have with the online documentation, which is now the main source of advice for implementations. The whitepapers will act as supplemental material, complementary to the online documentation. The next few months are the busy months, as we also prepare for the annual Edge conferences in the USA, APAC and Europe, which will include a technical stream with a series of sessions on major technical features and some implementation advice. This year we decided to make it more beneficial for you by focussing on key implementation challenges and offering advice on how to solve implementation issues and business requirements. Each session will cover capabilities, offer general direction, and offer advice garnered from our cloud implementations and from our implementations and partners over the years. Hopefully you can come back from the sessions with some useful advice. The details of the 2018 Oracle Utilities Edge Customer Conference Product Forum are located at this site. This year looks like an amazing year, and I look forward to publishing a lot more often to benefit us all.


Oracle Help Patches

In Oracle Utilities Application Framework V4.3.0.1.0, we introduced the new Oracle Help engine to provide a better online help experience for online users. Due to a conflict in common libraries, a series of patches has been released to ensure the correct instances of the libraries are used for a number of Oracle Utilities Application Framework V4.3.x releases. The patches outlined below allow the Oracle Help engine to continue to be used with the correct libraries. Note: These patches apply to Oracle WebLogic 12.x installations only.

The following patches, available from My Oracle Support, apply to the following releases:

Version     Patch     Comments
4.3.0.1.0   27051899  UPDATE OHELP TO BE THIN CLIENT
4.3.0.2.0   26354064  COPY OF 27051899 - UPDATE OHELP TO BE THIN CLIENT
4.3.0.3.0   26354238  COPY OF 26354064 - COPY OF 27051899 - UPDATE OHELP TO BE THIN CLIENT
4.3.0.4.0   26354259  COPY OF 26354238 - COPY OF 26354064 - COPY OF 27051899 - UPDATE OHELP TO BE THIN CLIENT

These patches migrate the online help to use the thin client libraries. Customers on Oracle WebLogic 12.2 should also apply patch 27112347 - OPTIONAL SPECIAL PATCH FOR REMOVAL OF OHW THICK CLIENT JAR FILES-4.3 SP1,2,3,4, available from My Oracle Support. The patches apply in the following ways:

If you are on Oracle WebLogic 12.1.3, the patch will ensure the correct Oracle Help libraries are used.

If you are on Oracle WebLogic 12.2.1, the patch will replace the default libraries with the thin client libraries. The additional patch (27112347) outlined above will clean up any conflicting libraries.

Customers on earlier versions of the Oracle Utilities Application Framework do not need to apply the above patches. Customers on Oracle Utilities Application Framework V4.3.0.5.0 and above do not need to apply these patches, as the fix is already included in those releases.


Updated Whitepapers for 4.3.0.5.0

With the anticipated releases of the first products based upon Oracle Utilities Application Framework V4.3.0.5.0 starting to appear soon, the first set of whitepapers has been updated to reflect new functionality, updated functionality, and experiences from the field and our Oracle Utilities cloud implementations. The following whitepapers have been updated and are now available from My Oracle Support:

ConfigTools Best Practices (Doc Id: 1929040.1) - This has been updated with the latest advice from our implementation and cloud teams. There are a few new sections around Groovy and a new section highlighting the ability to write batch programs using the Plug-In Batch architecture. In Oracle Utilities Application Framework 4.3.0.5.0, we added the capability to implement file import functionality using Groovy in Plug-In Batch, with a mechanism to support delimited, fixed or XML-based files within the algorithms. Samples of each are supplied with the product.

Identity Management Suite Integration (Doc Id: 1375600.1) - This whitepaper has been greatly simplified to reflect the latest Oracle Identity Management Suite changes and the newer interface, which has been migrated from XAI to IWS. The new interface has two new algorithms, which are used in our cloud implementations and are now part of the F1-IDMUser object supplied with the product:

Generation of Authorization Identifier - The F1-IDMUser object now supports the ability to generate the unique authorization identifier (the 8-character one) if the identifier is not provisioned from Oracle Identity Manager itself. This provides some flexibility in where this identifier can be provisioned as part of the Oracle Identity Manager solution. In the past, the only place this was available was within Oracle Identity Manager itself. This enhancement means that the user can be provisioned from Oracle Identity Manager or as part of the Identity Management interface to the Oracle Utilities Application Framework.
Duplication of User now supported within the interface - In past releases, the use of template users was a common way of quickly provisioning users. This release also allows the duplication function within the User object to be used in isolation, or in conjunction with template users, for more flexible provisioning options. If this method is used, a characteristic is added to the duplicated user to indicate it was duplicated from another user (for auditing purposes).

As we get closer to the release of products using Oracle Utilities Application Framework 4.3.0.5.0, you will see more and more updated whitepapers reflecting the new and improved changes in the releases.


Batch History Portal - Driving the Portal with a flexible search

In a previous post, I illustrated a zone displaying batch history. This is part of a prototype I am working on to illustrate some advanced UI features of the ConfigTools scripting language and zone features. Many people have asked me for the zone definition, so here are the steps I took to create the zone.

I created a service script, called CMLOSM, that returns the full message from the batch Level of Service:

10: edit data
      if ("parm/batchControlId = $BLANK")
            terminate;
      end-if;
      move "parm/batchControlId" to "BatchLevelOfService/input/batchControlId";
      //
      // Get Level Of Service
      //
      invokeBS 'F1-BatchLevelOfService' using "BatchLevelOfService";
      move "BatchLevelOfService/output/levelOfService" to "parm/levelOfService";
      //
      // Get Level Of Service Description
      //
      move 'F1_BATCH_LEVEL_OF_SERVICE_FLG' to "LookupDescription/fieldName";
      move "BatchLevelOfService/output/levelOfService" to "LookupDescription/fieldValue";
      invokeBS 'F1-GetLookupDescription' using "LookupDescription";
      //
      // Get Message
      //
      move "BatchLevelOfService/output/messageCategory" to "ReturnMessage/input/messageCategory";
      move "BatchLevelOfService/output/messageNumber" to "ReturnMessage/input/messageNumber";
      move '0' to "ReturnMessage/input/messageParmCollCount";
      move "$LANGUAGE" to "ReturnMessage/input/language";
      //
      // Set Substitution Parms.. I have only coded 4 for now
      //
      if ("string(BatchLevelOfService/output/messageParameters/parameters[1]/parameterValue) != $BLANK")
         move "BatchLevelOfService/output/messageParameters/parameters[1]/parameterValue" to "ReturnMessage/input/messageParms/messageParm1";
         move '1' to "ReturnMessage/input/messageParmCollCount";
      end-if;
      if ("string(BatchLevelOfService/output/messageParameters/parameters[2]/parameterValue) != $BLANK")
         move "BatchLevelOfService/output/messageParameters/parameters[2]/parameterValue" to "ReturnMessage/input/messageParms/messageParm2";
         move '2' to "ReturnMessage/input/messageParmCollCount";
      end-if;
      if ("string(BatchLevelOfService/output/messageParameters/parameters[3]/parameterValue) != $BLANK")
         move "BatchLevelOfService/output/messageParameters/parameters[3]/parameterValue" to "ReturnMessage/input/messageParms/messageParm3";
         move '3' to "ReturnMessage/input/messageParmCollCount";
      end-if;
      if ("string(BatchLevelOfService/output/messageParameters/parameters[4]/parameterValue) != $BLANK")
         move "BatchLevelOfService/output/messageParameters/parameters[4]/parameterValue" to "ReturnMessage/input/messageParms/messageParm4";
         move '4' to "ReturnMessage/input/messageParmCollCount";
      end-if;
      //
      // Compile the Message
      //
      invokeBS 'F1-ReturnMessage' using "ReturnMessage";
      move "ReturnMessage/output/expandedMessage" to "parm/fullMessage";
end-edit;

Schema:

<schema>
    <batchControlId dataType="string"/>
    <levelOfService mdField="F1_BATCH_LEVEL_OF_SERVICE_FLG"/>
    <levelOfServiceDesc mdField="LOS_DESC"/>
    <fullMessage dataType="string"/>
</schema>

Data Areas:

Schema Type       Object                    Data Area Name
Business Service  F1-BatchLevelOfService    BatchLevelOfService
Business Service  F1-GetLookupDescription   LookupDescription
Business Service  F1-ReturnMessage          ReturnMessage

I created a script that would set the color of the level of
service, called CMCOLOR:

10: move 'black' to "parm/foreColor";
11: move 'white' to "parm/bgColor";
20: edit data
      if ("parm/levelOfService = 'DISA'")
          move 'white' to "parm/foreColor";
          move '#808080' to "parm/bgColor";
      end-if;
end-edit;
30: edit data
      if ("parm/levelOfService = 'ERRO'")
          move 'white' to "parm/foreColor";
          move 'red' to "parm/bgColor";
      end-if;
end-edit;
40: edit data
      if ("parm/levelOfService = 'NORM'")
          move 'white' to "parm/foreColor";
          move 'green' to "parm/bgColor";
      end-if;
end-edit;
50: edit data
      if ("parm/levelOfService = 'WARN'")
          move 'black' to "parm/foreColor";
          move 'yellow' to "parm/bgColor";
      end-if;
end-edit;

Schema:

<schema>
    <levelOfService mdField="F1_BATCH_LEVEL_OF_SERVICE_FLG" dataType="lookup" lookup="F1_BATCH_LEVEL_OF_SERVICE_FLG"/>
    <foreColor dataType="string" mdField="COLOR"/>
    <bgColor dataType="string" mdField="BG_COLOR"/>
</schema>

I created a script, called CMLOSRF, to post-process the records for advanced filtering:

     if ("string(parm/levelOfServiceFilter) = $BLANK")
        if ("parm/hideDisabled = $BLANK")
            move 'true' to "parm/result";
            terminate;
        end-if;
        move 'true' to "parm/result";
        if ("string(parm/hideDisabled) = 'Y'")
          if ("string(parm/levelOfService) = 'DISA'")
             move 'false' to "parm/result";
             terminate;
          end-if;
        end-if;
        terminate;
     end-if;
     move 'false' to "parm/result";
     if ("parm/levelOfServiceFilter = parm/levelOfService")
           move 'true' to "parm/result";
     end-if;

Schema:

<schema>
    <levelOfService dataType="string"/>
    <levelOfServiceFilter dataType="string"/>
    <hideDisabled/>
    <result dataType="string"/>
</schema>

I then built a zone, called CMBH01, with the following attributes (Parameter: Value):

Description: Batch History Query
Zone Type: F1-DE
Application Service: (Choose an appropriate one)
Width: Full
Height Of Report: 60
Display Row Number Column: false
User Filter 1: label=BATCH_CD likeable=S divide=below
User Filter 2: label=LEVEL_OF_SERVICE type=LOOKUP lookup=F1_BATCH_LEVEL_OF_SERVICE_FLG
User Filter 3: label='Hide Disabled' type=LOOKUP lookup=F1_YESNO_FLG divide=below
User Filter 4: label=F1_BATCH_CTGY_FLG type=LOOKUP lookup=F1_BATCH_CTGY_FLG
User Filter 5: label=F1_BATCH_CTRL_TYPE_FLG type=LOOKUP lookup=F1_BATCH_CTRL_TYPE_FLG
No SQL Execute: nosql=IGNORE
Initial Display Columns: C1 C2 C8 C5 C12 C10 C13 C14
SQL 1 Broadcast Columns: BATCH_CD=C1
SQL Statement 1:
SELECT UNIQUE I.BATCH_CD, B.F1_BATCH_CTRL_TYPE_FLG, B.F1_BATCH_CTGY_FLG, B.LAST_UPDATE_DTTM, B.NEXT_BATCH_NBR
FROM CI_BATCH_INST I, CI_BATCH_CTRL B
WHERE I.BATCH_CD = B.BATCH_CD
 [ (F1) AND I.BATCH_CD LIKE :F1]
 [ (F4) AND B.F1_BATCH_CTGY_FLG = :F4]
 [ (F5) AND B.F1_BATCH_CTRL_TYPE_FLG = :F5]
Column 1 for SQL 1: source=SQLCOL sqlcol=BATCH_CD label=BATCH_CD
Column 2 for SQL 1: source=FKREF fkref='F1-BTCCT' input=[BATCH_CD=BATCH_CD] label=BATCH_CD_DESCR
Column 3 for SQL 1: source=BS bs='F1-BatchLevelOfService' input=[input/batchControlId=C1] output=output/levelOfService suppress=true suppressExport=true
Column 4 for SQL 1: source=BS bs='F1-GetLookupDescription' input=[fieldName='F1_BATCH_LEVEL_OF_SERVICE_FLG' fieldValue=C3] label=LEVEL_OF_SERVICE color=C6 bgColor=C7 output=description suppress=true suppressExport=true
Column 5 for SQL 1: source=SS ss='CMLOSM' input=[batchControlId=C1] label=LEVEL_OF_SERVICE_REASON output=fullMessage
Column 6 for SQL 1: source=SS ss='CMCOLOR' input=[levelOfService=C3] label=COLOR output=foreColor suppress=true suppressExport=true
Column 7 for SQL 1: source=SS ss='CMCOLOR' input=[levelOfService=C3] label=BG_COLOR output=bgColor suppress=true suppressExport=true
Column 8 for SQL 1: source=SPECIFIED spec=['<div style=" font-weight:bold; background-clip: content-box; border-radius: 10px; padding: 2px 8px; text-align: center; background-color:' C7 '; color:' C6 ';">' C4 '</div>'] label=LEVEL_OF_SERVICE
Column 9 for SQL 1: source=SQLCOL sqlcol=2 label=F1_BATCH_CTRL_TYPE_FLG suppress=true suppressExport=true
Column 10 for SQL 1: source=BS bs='F1-GetLookupDescription' input=[fieldName='F1_BATCH_CTRL_TYPE_FLG' fieldValue=C9] label=F1_BATCH_CTRL_TYPE_FLG output=description
Column 11 for SQL 1: source=SQLCOL sqlcol=3 label=F1_BATCH_CTGY_FLG suppress=true suppressExport=true
Column 12 for SQL 1: source=BS bs='F1-GetLookupDescription' input=[fieldName='F1_BATCH_CTGY_FLG' fieldValue=C11] label=F1_BATCH_CTGY_FLG output=description
Column 13 for SQL 1: source=SQLCOL sqlcol=4 label='Last Executed' type=DATE/TIME
Column 14 for SQL 1: source=SQLCOL sqlcol=5 label=NEXT_BATCH_NBR type=NUMBER dec=0
Allow Row Service Script 1: ss=CMLOSRF input=[levelOfServiceFilter=F2 levelOfService=C3 hideDisabled=F3] output=result

Saving the zone and adding it to a menu will implement the zone in that menu and allow it to be invoked. Make sure the Application Service you use is connected to the users via a user group, so that users can access the zone.

Understanding the solution

I also want you to understand a few of the decisions I made in building this zone:

The zone type (F1-DE) was just a personal choice. In a typical use case you would display the batch controls you favor using the filters. By using F1-DE, the SQL runs without asking for filters first, as I assume you would start with a full list and then use filters to refine what you want to see. Once you get to a smaller subset, you can use the Save View functionality to set those as your preferred filters. In other zone types you can filter first and then display the records; it is up to your personal preferences and business requirements.

The solution was built up over time. I started with some basic SQL and then moved to scripting to reformat the data and provide advanced functionality in the zone. This is a good example of iterative development of zones.
You start simple and build more and more into it until you are happy with the result.

The SQL Statement returns the list of batch controls that have been executed at least once. This is intentional, to filter out jobs that are never run from the product for an implementation. We supply lots of jobs in a product to cover all situations in the field, but I have not encountered a customer that runs them all. As I am looking for at least one execution, I added the UNIQUE clause to ignore multiple executions.

I added Batch Category and Batch Control Type to allow filtering on things that are important at different stages of an implementation, as well as targeting products that have a mix of different job types.

The Last Executed Date and Time is mainly for information purposes, but it can also be used as a sortable column so you can quickly find jobs that were executed recently.

The Next Run Number might seem a strange field to include, but it gives you an idea of which batch controls have been executed more frequently. In the screen above I can see that F1-MGDIM has been executed far more than the other batch controls.

There are a lot of suppressed columns in the list above. This is intentional, as the values of these columns can be used by other columns. For example, Columns 6 and 7 calculate the foreground and background colors for the Level Of Service. These are never displayed in the list or the export; they are intermediate columns used in the formatting of Column 8.

The Allow Row Service Script is really handy as it allows for complex processing outside the SQL. For example, as I do not know the Level Of Service value in the SQL (it is calculated) and I want to include it as a filter, I can use the Allow Row Service Script to decide whether a row is actually included in the final result set (even though the SQL would return it).

You might have noticed that I hardcoded some labels. This is typically not recommended; I would normally have created custom fields to hold the labels so that I could translate the labels or change the text without changing this zone. If you ever create your own zones, I would strongly suggest avoiding hardcoding. I just used it to make the publishing of this article much easier.

The code in the service scripts is really just an example. It is probably not optimal. I am sure you are looking at the code and working out better ways of doing it, and that is fine; the code is just to give you some ideas. The script CMLOSM, which builds the full Level Of Service message, is not really optimal either, and I am sure there are easier methods to achieve the same result, but it is functional for illustrative purposes.

You will notice that Column 8 is actually some dynamic HTML enclosed in div tags. That is correct: it is possible to use some HTML in a column for formatting. Just so you know, the HTML in ANY column has to conform to the HTML whitelist that is enabled across the product. You cannot put just any code in there; you are limited to formatting and some basic post-processing. My development team helped me with some of the possibilities as I wanted a specific look without resorting to graphics. It is both visual and functional (for sorting).

You might also notice a Broadcast column (BATCH_CD). That is so this zone can be part of a larger solution, which I will expand upon in future blog entries to show off some very nice new functionality (actually most of it is available already).

In a previous post, I illustrated a zone on displaying batch history. This is part of a prototype I am working on to illustrate some advanced UI features in the ConfigTools scripting language...

Extendable Lookups vs Lookups

The Oracle Utilities Application Framework avoids hardcoding of values for maintenance, multi-lingual and configuration purposes. One of the features that supports this requirement is the Lookup object, which lists the valid values for a field (and associated values such as the description/override description and the java code name for SDK use). Lookups can be exclusively owned by the product (where you can only change the override description and not add any additional values) or can be customized, where you can add new values. You are also free to use F1-GetLookupDescription to get the description for a lookup value in any query zone, business service, business object (though you can do this on the element definition directly) or script. There is a maintenance function to maintain Lookups. For example:

The Lookup object is ideal for simple fields with valid values, but if you need to add additional elements to the lookup, the Lookup object cannot be extended. For this reason the concept of an Extendable Lookup was introduced. It allows implementations to build complex configurations similar to a lookup and introduce extended features for their custom configuration settings. To use Extendable Lookup the following is typically done:

- You create a Business Object based upon the F1-EXT LKUP Maintenance Object. You can define the structure you want to configure for the lookup. There are numerous examples of this in the base product that you can use to get ideas for what you might need to support.
- It is highly recommended to use UI Hints on the BO Schema to build your user interface for the lookup.
- You can refer to the Extendable Lookup using the F1-GetExtLookUpVal common business service, which can return up to five attributes from your Extendable Lookup (if you need more, you can develop your own call to return the values directly - like calling the BO directly).

Here are some delivered examples of Extendable Lookups:

Extendable Lookup is very powerful where you not only want to put valid values in a list but also want to configure additional settings to influence the outcomes of your custom code. It is recommended to use Extendable Lookup instead of Lookup when the requirements for the valid value configuration go beyond what Lookup can record. For more information on both Lookups and Extendable Lookups, refer to the online documentation.
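As a sketch only, an extendable lookup value might be retrieved in an edit-data script step like the following. The data area name and the element names passed to the business service are illustrative assumptions, not the actual F1-GetExtLookUpVal schema; check the business service schema in your environment for the real element names before using this pattern.

```
10: edit data
     // Hedged sketch: retrieve attributes for an extendable lookup value.
     // 'ExtLookupVal' is an assumed data area for the F1-GetExtLookUpVal
     // business service; verify the real schema element names before use.
     move 'CM-MyExtendableLookup' to "ExtLookupVal/businessObject";
     move 'MY_VALUE' to "ExtLookupVal/lookupValue";
     invokeBS 'F1-GetExtLookUpVal' using "ExtLookupVal";
     // Up to five returned attributes can now be moved into your own elements
end-edit;
```

If you need more than the five attributes the common business service returns, invoking your Extendable Lookup Business Object directly (with invokeBO for read) is the alternative the article mentions.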


Advice

Converting your XAI Services to IWS using scripting

With the deprecation announcement surrounding XML Application Integration (XAI), it is possible to convert to using Inbound Web Services (IWS) manually or using a simple script. This article will outline the process of building a script to bulk transfer the definitions over from XAI to IWS. Ideally, it is recommended that you migrate each XAI Inbound Service to Inbound Web Services manually so that you can take the opportunity to rationalize your services and reduce your maintenance costs, but if you simply want to transfer over to the new facility in bulk, this can be done via a service script. This can be done using a number of techniques:

- You can drive the migration via a query portal that can be called via a Business Service from a BPA or batch process.
- You can use the Plug-In Batch to pump the services through a script to save time.

In this article I will outline the latter example to illustrate the migration as well as highlight how to build a Plug-In Batch process using configuration alone.

Note: The code and design in this article are provided for illustrative purposes and only cover the basic functionality needed for the article. Variations on this design are possible through the flexibility and extensibility of the product. These are not examined in any detail except to illustrate the basic process.

Note: The names of the objects in this article are just examples. Alternative values can be used, if desired.

Design

The design for this is as follows:

Build a Service Script that will take the XAI Inbound Service identifier to migrate and perform the following:

- Read the XAI Inbound Service definition to load the variables for the migration.
- Check that the XAI Inbound Service is valid to be migrated. This means it must be owned by Customer Modification and use the Business Adaptor XAI Adapter.
- Transfer the XAI Inbound Service definition to the relevant fields in the Inbound Web Service and add the service.
- Optionally activate the service ready for deployment. The deployment activity itself should not be part of the script as it is usually not a per-service activity.

By default the following is transferred:

- The Web Service name will be the Service Name on the XAI Inbound Service, not the identifier, as the identifier is randomly generated.
- Common attributes are transferred across from the existing definition.
- A single operation, with the same name as the Inbound Web Service, is created as a minimalist migration option.

Build a Plug-In Batch definition to include the following:

- The Select Records algorithm will identify the list of services to migrate. It should be noted that only services that are owned by the Customer Modification (CM) owner should be migrated, as ownership should be respected.
- The script described above will be used in the Process Record algorithm.

The following diagram illustrates the overall process:

The design of the Plug-In Batch will only work for Oracle Utilities Application Framework V4.3.0.4.0 and above, but the Service Script used for the conversion can be used with any implementation of Oracle Utilities Application Framework V4.2.0.2.0 and above. On older versions you can hook the script into another script such as a BPA, or drive it from a query zone.

Note: This process should ONLY be used to migrate XAI Inbound Services that are Customer Modifications. Services owned by the product itself should not be migrated, to respect record ownership rules.

XAI Inbound Service Conversion Service Script

The first part of the process is to build a service script that establishes an Inbound Web Service for an XML Application Integration Inbound Service. To build the script the following process should be used:

Create Business Objects - Create a Business Object, using Business Object maintenance, based upon XAI SERVICE (XAI Inbound Service) and F1-IWSSVC (Inbound Web Service) to be used as Data Areas in your script.
You can leave the schemas as generated with all the elements defined, or remove the elements you do not need (as this is only a transient piece of functionality). I will assume that the schema is the default generated using the Schema generator in the Dashboard. Remember to allocate the Application Service for security purposes (I used F1-DFLTS as that is provided in the base meta data). The settings for the Business Objects are summarized as follows:

Setting: XAI Inbound Service BO Values / IWS Service BO Values
Business Object: CMXAIService / CMIWSService
Description: XAI Service Conversion BO / IWS Service Conversion BO
Detailed Description: Conversion BO for XML Application Integration / Conversion BO for Inbound Web Services
Maintenance Object: XAI SERVICE / F1-IWSSVC
Application Service: F1-DFLTS / F1-DFLTS
Instance Control: Allow New Instances / Allow New Instances

Build Script - Build a Service Script with the following attributes:

Setting: Value
Script: CMConvertXAI
Description: Convert an XAI Service to IWS Service
Detailed Description: Script that converts the passed in XAI Service Id into an Inbound Web Service.
- Reads the XAI Inbound Service definition
- Copies the relevant attributes to the Inbound Web Service
- Adds the Inbound Web Service
Script Type: Service Script
Application Service: F1-DFLTAPS
Script Engine Version: 3.0
Data Area: CMIWSService - Data Area Name IWSService
Data Area: CMXAIService - Data Area Name XAIService
Schema (this is the input value and some temporary variables):

<schema>
  <xaiInboundService mdField="XAI_IN_SVC_ID"/>
  <operations type="group">
    <iwsName/>
    <operationName/>
    <requestSchema/>
    <responseSchema/>
    <requestXSL/>
    <responseXSL/>
    <schemaName/>
    <schemaType/>
    <transactionType/>
    <searchType/>
  </operations>
</schema>

The Data Area section looks like this:

Add the following code to your script (this is in individual edit-data steps):

Note: The code below is very basic and there are optimizations that can be done to make it smaller and more efficient. This is just some sample code to illustrate the process.

10: edit data
     // Jump out if the inbound service Id is blank
     if ("string(parm/xaiInboundService) = $BLANK")
       terminate;
     end-if;
end-edit;
20: edit data
     // populate the key value from the input parameter
     move "parm/xaiInboundService" to "XAIService/xaiServiceId";
     // invoke the XAI Service BO to read the service definition
     invokeBO 'CMXAIService' using "XAIService" for read;
     // Check that the Service Name is populated at a minimum
     if ("XAIService/xaiInServiceName = $BLANK")
       terminate;
     end-if;
     // Check that the Service type is correct
     if ("XAIService/xaiAdapter != BusinessAdaptor")
       terminate;
     end-if;
     // Check that the owner flag is CM
     if ("XAIService/customizationOwner != CM")
       terminate;
     end-if;
end-edit;
30: edit data
     // Copy the key attributes from XAI to IWS
     move "XAIService/xaiInServiceName" to "IWSService/iwsName";
     move "XAIService/description" to "IWSService/description";
     move "XAIService/longDescription" to "IWSService/longDescription";
     move "XAIService/isTracing" to "IWSService/isTracing";
     move "XAIService/postError" to "IWSService/postError";
     move "XAIService/shouldDebug" to "IWSService/shouldDebug";
     move "XAIService/xaiInServiceName" to "IWSService/defaultOperation";
     // Assume the service will be Active (this can be altered)
     // For example, set this to false to allow for manual checking of the
     // setting. That way you can confirm the service is set correctly and then
     // manually set Active to true in the user interface.
     move 'true' to "IWSService/isActive";
     // Process the list for the operation to the temporary variables in the schema
     move "XAIService/xaiInServiceName" to "parm/operations/iwsName";
     move "XAIService/xaiInServiceName" to "parm/operations/operationName";
     move "XAIService/requestSchema" to "parm/operations/requestSchema";
     move "XAIService/responseSchema" to "parm/operations/responseSchema";
     move "XAIService/inputXSL" to "parm/operations/requestXSL";
     move "XAIService/responseXSL" to "parm/operations/responseXSL";
     move "XAIService/schemaName" to "parm/operations/schemaName";
     move "XAIService/schemaType" to "parm/operations/schemaType";
     // move "XAIService/transactionType" to "parm/operations/transactionType";
     move "XAIService/searchType" to "parm/operations/searchType";
     // Add the parameters to the operation list object
     move "parm/operations" to "IWSService/+iwsServiceOperation";
end-edit;
40: edit data
     // Invoke BO for Add
     invokeBO 'CMIWSService' using "IWSService" for add;
end-edit;

Note: The code example above does not add annotations to the Inbound Web Service to attach policies for true backward compatibility. It is assumed that policies are set globally rather than on individual services.
If you want to add annotation logic to the script, it is recommended to add an annotations group to the script's internal data area and add the annotations list logic in the script.

One thing to point out for XAI: to use the same payload for an XAI service in Inbound Web Services, a single operation must exist with the same name as the Service Name. This is the design pattern for a one-to-one conversion. It is possible to vary from that if you manually convert from XAI to IWS, as it is possible to reduce the number of services in IWS using multiple operations. Refer to Migrating from XAI to IWS (Doc Id: 1644914.1) and Web Services Best Practices (Doc Id: 2214375.1) from My Oracle Support for a discussion of the various techniques available. The attribute mapping looks like this:

The Service Script has now been completed. All that is needed is to pass the XAI Inbound Service Identifier (not the name) to the parm/xaiInboundService structure.

Building The Plug-In Batch Control

In past releases, the only way to build a batch process that is controlled via a Batch Control was to use the Oracle Utilities SDK using Java. It is now possible to define what is termed a Plug-In based Batch Control, which allows you to use ConfigTools and some configuration to build your batch process. The fundamental principle is that batch is basically selecting a set of records to process and then passing those records into something that processes them. In our case, we will provide an SQL statement to subset the services to convert from XAI and pass them to the service we just built in the previous step.

Select Records Algorithm

The first part of the Plug-In Batch process is to define the Select Records algorithm, which defines the parameters for the batch process, the commit strategy and the SQL used to pump the records into the process. The first step is to create a script to be used for the Algorithm Type of Select Records to define the parameters and the commit strategy.
For this example I created a script with the following parameters:

Setting: Value
Script: CMXAISEL
Description: XAI Select Record Script - Parameters
Detailed Description: This script is the driver for the Select Records algorithm for the XAI to IWS conversion
Script Type: Plug-In Script
Algorithm Entity: Batch Control - Select Records
Script Version: 3.0
Script Step:

10: edit data
     // Set strategy and key field
     // Strategy values are dictated by BATCH_STRATEGY_FLG lookup
     // Set JOBS strategy as this is a single threaded process
     // I could use THDS strategy but then would have to put in logic for
     // restart in the SQL. The current SQL has that logic already implied.
     move 'JOBS' to "parm/hard/batchStrategy";
     move 'XAI_IN_SVC_ID' to "parm/hard/keyField";
end-edit;

Note: I have NO parameters for this job. If you wish to add processing for parameters, take a look at some examples of this algorithm type to see the processing necessary for bind variables.

The next step is to create an algorithm type. This will be used by the algorithm itself to define the process. Typically, an algorithm type is the definition of the physical aspects of the algorithm and its parameters. For the select algorithm the following algorithm type was created:

Setting: Value
Algorithm Type: CMXAISEL
Description: XAI Selection Algorithm
Detailed Description: This Algorithm Type is a generic wrapper to set the job parameters
Algorithm Entity: Batch Control - Select Records
Program Type: Plug-In Script
Plug-In Script: CMXAISEL
Parameter: SQL (Sequence 1 - Required) - This is the SQL to pass into the process

The last step is to create the Algorithm to be used in the Batch Control. This will use the Algorithm Type created earlier. Create the algorithm definition as follows:

Setting: Value
Algorithm Code: CMXAISEL
Description: XAI Conversion Selection
Algorithm Type: CMXAISEL
Effective Date: Any valid date in the past is acceptable
SQL Parameter:

SELECT xai_in_svc_id
  FROM ci_xai_in_svc
 WHERE xai_adapter_id = 'BusinessAdaptor'
   AND xai_in_svc_name NOT IN (SELECT in_svc_name FROM f1_iws_svc)
   AND owner_flg = 'CM'

You might notice the SQL used in the driver. It passes the XAI_IN_SVC_IDs for XAI Inbound Services that use the Business Adaptor, are not already converted (for restart) and are owned by Customer Modification.

Process Records Algorithm

The next step is to link the script created earlier to the Process Records algorithm. As with the Select Records algorithm, a script, an algorithm type and an algorithm entry need to be created. The first part of the process is to build a Plug-In Script to pass the data from the Select Records algorithm to the Service Script that does the conversion. The parameters are as follows:

Setting: Recommended Value
Script: CMXAIProcess
Description: Process XAI Records in Batch
Detailed Description: This script reads the parameters from the Select Records algorithm and passes them to the XAI Conversion script
Script Type: Plug-In Script
Algorithm Entity: Batch Control - Process Record
Script Version: 3.0
Data Area: Service Script - CMConvertXAI - Data Area Name ConvertXAI
Script Step:

if ("parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value != $BLANK")
    move "parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value" to "ConvertXAI/xaiInboundService";
    invokeSS 'CMConvertXAI' using "ConvertXAI";
end-if;

The script above basically takes the parameters passed to the algorithm and passes them to the Service Script for processing. The next step is to define this script as an Algorithm Type:

Setting: Value
Algorithm Type: CMXAIPROC
Description: XAI Conversion Algorithm
Detailed Description: This algorithm type links the algorithm to the service script to drive the process.
Algorithm Entity: Batch Control - Process Record
Program Type: Plug-In Script
Plug-In Script: CMXAIProcess

The last step in the algorithm process is to create the Algorithm entry itself:

Setting: Value
Algorithm Code: CMXAIPROCESS
Description: XAI Conversion Process Record
Algorithm Type: CMXAIPROC

Plug-In Batch Control Configuration

The last part of the process is to bring all the configuration into a single place, the Batch Control. This pulls the algorithms into a configuration ready for use.

Setting: Value
Batch Control: CMXAICNV
Description: Convert XAI Services to IWS
Detailed Description: This batch control converts the XAI Inbound Services to Inbound Web Services to aid in the mass migration of the meta data to the new facility. This batch job only converts the following:
- XAI Services that are owned by Customer Modification, to respect record ownership.
- XAI Services that use the Business Adaptor XAI Adapter. Other types are auto-converted in IWS.
- XAI Services that are not already defined as Inbound Web Services.
Application Service: F1-DFLTAPS
Batch Control Type: Not Timed
Batch Category: Adhoc
Algorithm - Select Records: CMXAISEL
Algorithm - Process Records: CMXAIPROCESS

The Plug-In batch process is now defined.

Summary

The conversion process can be summarized as follows:

- A Service Script is required to transfer the data from the XAI Inbound Service to the Inbound Web Service definition. This converts only services that are owned by Customer Modification, have not been migrated already and use the Business Adaptor XAI Adapter. The script sets the same parameters as the XAI Service for backward compatibility and creates a SINGLE operation Web Service with the same payload as the original.
- The Select Records algorithm is defined, which identifies the subset of records to process, with a script that defines the job properties, an algorithm type entry to define the script to the framework and an algorithm, with the SQL to use, to link to the Batch Control.
- The Process Records algorithm is defined, which processes the records from Select Records and links in the Service Script from the first step. As with any algorithm, the code is built (in this case a Plug-In Script to link the data to the script), an algorithm type entry defines the script, and then an algorithm definition is created to link to the Batch Control.
- The last step is to create the Batch Control that links the Select Records and Process Records algorithms.
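As a quick sanity check after running the batch process, the driving SQL from the Select Records algorithm can be re-run manually; an empty result means every eligible service has been migrated. This is simply the article's own selection SQL with the service name added for readability, not a new query:

```sql
-- Re-run the selection criteria manually: any rows returned are CM-owned
-- Business Adaptor XAI services that have not yet been created as IWS.
SELECT xai_in_svc_id, xai_in_svc_name
  FROM ci_xai_in_svc
 WHERE xai_adapter_id = 'BusinessAdaptor'
   AND xai_in_svc_name NOT IN (SELECT in_svc_name FROM f1_iws_svc)
   AND owner_flg = 'CM';
```

Because the batch job is restartable, the same criteria also explain why re-running the job is safe: already-converted services drop out of the driving set.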


Information

Single Submitter Support in Oracle Scheduler Integration

The Oracle Scheduler integration was released for the Oracle Utilities Application Framework to provide an interface to the DBMS_SCHEDULER package in the Oracle Database. By default, when submitting a multi-threaded job where the thread_limit is set to a number greater than 1 and the thread_number on the submission is set to zero (to spawn threads), the interface submits each thread individually, one after the other. For a large number of threads, this may lead to a high level of lock contention on the Batch Control table.

To resolve this issue we have enhanced the interface to include a new feature that reduces the lock contention by using a single submitter. To use this facility you can either use a new command line override:

OUAF_BATCH.Submit_Job( ... single_submitter => true, ... )

Or it can be set using the Set_Option facility (globally or on individual jobs). For example, for a Global scope:

OUAF_BATCH.Set_Option(scope => 'GLOBAL', name => 'single_submitter', value => true);

The default for this facility is false (for backward compatibility). If the value is set to true, you cannot restart an individual thread until all running threads have ended.

This patch is available from My Oracle Support for a number of releases:

Release: Patch
4.2.0.3.0: 24299479
4.3.0.1.0: 26440254
4.3.0.2.0: 26452535
4.3.0.3.0: 26452546
4.3.0.4.0: 26452556
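Putting the two calls together, a minimal PL/SQL sketch of a single-submitter setup might look like the following. The batch code 'F1-MGDIM' and the exact Submit_Job parameter names other than single_submitter are illustrative assumptions; check the OUAF_BATCH package header in your release for the supported parameters.

```sql
BEGIN
  -- Global default: use a single submitter for all multi-threaded jobs
  OUAF_BATCH.Set_Option(scope => 'GLOBAL',
                        name  => 'single_submitter',
                        value => true);

  -- Per-submission override (batch code and parameter name batch_code
  -- are illustrative assumptions for this sketch)
  OUAF_BATCH.Submit_Job(batch_code       => 'F1-MGDIM',
                        single_submitter => true);
END;
/
```

Remember that with single_submitter set to true, an individual thread cannot be restarted until all running threads have ended, so factor that into your restart procedures.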


Advice

Team based To Do Management

One of the interesting discussions I have with customers and partners about the To Do functionality in the Oracle Utilities Application Framework based products is team management. Most partners and customers think that the To Do functionality is limited to one role per To Do type. This is because most examples they see in training or in demonstrations show one role per To Do type. There is "more than meets the eye" to the functionality. The To Do functionality can be configured in different ways to implement different allocation mechanisms. Let me discuss an alternative configuration that may appeal to some implementations.

- Create a To Do Role for each organizational team in your organization. These do not have to be whole parts of your organization; they can simply be groups of people with similar skills or work responsibilities. You decide the number of groups and their composition. I will use the word "team" rather than To Do Role in the rest of this article to emphasize the alternative view. By using teams you might actually reduce your maintenance costs, as you will probably have fewer teams than the number of To Do types to manage. Remember, at this point most people assume you can only have one team per To Do Type.
- Allocate people to those teams. You have full flexibility here. A person can be a member of any team you wish and, of course, can be a member of multiple teams (even overlapping ones - more about this later).
- Allocate the teams to the To Do Types they will be working on. Now that you have teams you can allocate multiple teams per To Do type. Remember one of the teams should be allocated as the Default so that your algorithms, batch jobs etc. have a default to allocate.

Now your implementation will be using teams of people rather than one role per To Do Type. This means you can allocate work to teams (or individuals) and supervisors can manage teams.
Remember the use of a capability in the product is not restricted to what is shown in demonstrations. Think outside the box.


Advice

High Availability Designs

One of the most common tasks in any implementation of an Oracle Utilities Application Framework product is the design of a high availability environment to ensure business continuity and availability. The Oracle Utilities Application Framework is designed to allow implementations to use a wide variety of high availability and business continuity solutions available in the market. As the product is housed in Oracle WebLogic and Oracle Database then we can utilize the high availability features of those products. If you are considering designing a high availability architecture here are a few guidelines: Consider the Oracle Maximum Availability Architecture which has guidelines for designing high availability and business continuity solutions for a variety of solutions available. Design for your business requirements and hardware platform. Solutions can vary to low cost solutions with minimal hardware to highly configured complex hardware/software solutions. Do not discount solutions built into your hardware platform. Redundancy and high availability features of hardware can be part of the solution that you propose for an implementation. These are typically already in place so offer a cost effective component of any solution. Design for your budget. I have seen implementations where they design a complex high availability solution only to get "sticker shock" when the price is discussed. I usually temper costs of a solution against the estimated business loss from an availability issue or a business continuity issue. It is very similar to discussions around insurance you might have personally. Customers of Oracle Utilities Application Framework based product have used both hardware and/or software based availability and business continuity solutions. This includes hardware at the load balancing level, such as routers, to implement high availability. Oracle typically recommends clustering as one of the techniques to consider in your solutions. 
Oracle Utilities Application Framework supports clustering for Oracle WebLogic, Oracle Coherence and the Oracle Database. We support clusters within user channels (online, web services and batch) and across those channels as well. Oracle typically recommends Real Application Clusters (RAC), including One Node implementations, as part of an availability solution. Oracle Utilities Application Framework supports RAC and includes support for newer implementations of that technology through features such as the Oracle Notification Service (ONS). One of the most common business continuity solutions customers have chosen is to use Oracle Data Guard or Oracle Active Data Guard to keep a backup database in synchronization with the primary database. Customers wanting to use the backup database for reporting tend to choose Oracle Active Data Guard as their preferred solution. Batch can be clustered using Oracle Coherence (with flexibility in the architecture) and, in Oracle Cloud SaaS implementations, we support batch clustering via Oracle WebLogic clustering. For customers interested in batch architecture, refer to Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support. The following MAA references may help you design your solution: Oracle Fusion Middleware High Availability Guide, Oracle Database High Availability Guide and Oracle Coherence High Availability Guide.


Securing Your JNDI Resources for Other Groups

As with other applications, the Oracle Utilities Application Framework respects the settings within the Oracle WebLogic domain, including any default settings. One of the default settings for the domain is access to the JNDI resources within the domain. By default, Oracle WebLogic grants access to Everyone that is defined in the security realm definition of the domain. Whilst this is generally acceptable in the vast majority of domains that are set up (remember you tend to set up a lot of non-production copies in any implementation of the products), it may not be appropriate for production domains. There is a simple setup to correct that:

- Create a group to designate the specific users, outside the application users, you want to give access to the JNDI resources. Allocate the user identities to that group in your security repository. If you use the internal LDAP of Oracle WebLogic then you can add them using the console. If you want to designate different groups of people, create different groups. Remember you already have groups for other users, Administrators and the product group. For this documentation we will use the Administrators and cisusers groups. You can vary the values according to your site setup. These will be reused for the setup.
- Create a Global Role which refers to the above group. If you created multiple groups then specify each group in the role.
- On the product server(s) or cluster, select the View JNDI Tree option on the Configuration --> General tab.
- On the root node of the server definition in the tree, remove Everyone from the node using the Remove button. Administrators should be the only group that has access at the root level. Do NOT remove Administrators as this will corrupt your access to the domain. All child nodes in the JNDI tree inherit the root node setup.
Now, for the product to work, you need to add cisusers to the following JNDI objects:

- The servicebean must be accessible to cisusers. This will be under the context value set for your domain.
- The Data Sources (OUAF_DS in my example) must be accessible to cisusers.
- The JMX nodes should be accessible to cisusers if you are using JMX monitoring (directly or via OEM).
- If you are using the internal JMS processing, whether that is the JMS Senders or MDB, then you must allow cisusers access to the JMS resources in the domain.
- Add your custom group to the relevant JNDI objects it needs to have access to.
- Set the Enable Remote JDBC Connection property to false. This can be done using the JAVA_OPTIONS setting in the setDomainEnv[.sh] script shipped with Oracle WebLogic in the bin directory of your domain home (add -Dweblogic.jdbc.remoteEnabled=false to JAVA_OPTIONS). Check that the variable WLS_JDBC_REMOTE_ENABLED is not set incorrectly.
- If you are using SSL, you need to set the RMI JDBC Security to Secure to ensure Administrators use SSL for connections as well.

The domain is now more secure.


Calling Batch Level Of Service

As a followup to my Batch Level Of Service article, I want to illustrate how to call your new algorithm from other scripts and as part of query zones. In the base product we ship a Business Service, F1-BatchLevelOfService, that allows a script or query zone to call the Batch Level Of Service algorithm attached to a Batch Control, if it exists, to return the level of service. I should point out that if a Batch Level Of Service algorithm is not configured on the Batch Control, this call will return the Disabled state. To use this service you need to populate the batchControlId input parameter when calling the service, and the service will return the message and levelOfService (please use the View Schema feature on your version to see the schema). Now, how do you call this from other objects:

- Service Scripts - Include the F1-BatchLevelOfService service as a Data Area attached to the script and use invokeBS to call the business service. For example: move "parm/batchControlId" to "F1-BatchLevelOfService/input/batchControlId"; invokeBS 'F1-BatchLevelOfService' using "F1-BatchLevelOfService";
- Query Portal - Use the source=BS tag in your column with a call to the F1-BatchLevelOfService service, passing the column that contains the Batch Control Id. For example: source=BS bs='F1-BatchLevelOfService' input=[input/batchControlId=C1] output=output/levelOfService

Additionally, you can use F1-ReturnMessage to format the message that is returned as well.
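The call semantics described above can be modeled conceptually. The following is a minimal Python sketch, not the actual OUAF API (the registry, function name and messages are hypothetical stand-ins; the real call is made via invokeBS as shown above). It illustrates that the caller supplies a batchControlId and receives a levelOfService and message, with the Disabled state returned when no algorithm is configured:

```python
# Conceptual sketch only: models the documented behavior of the
# F1-BatchLevelOfService call. Names and messages are hypothetical.

# Hypothetical registry of Batch Level Of Service algorithms keyed by Batch Control.
ALGORITHMS = {
    "CM-BILL": lambda: ("NORM", "Execution time within target"),
}

def batch_level_of_service(batch_control_id):
    """Return (levelOfService, message) for a batch control.

    Mirrors the documented behavior: if no Batch Level Of Service
    algorithm is configured on the Batch Control, return Disabled.
    """
    algorithm = ALGORITHMS.get(batch_control_id)
    if algorithm is None:
        return ("DISA", "Batch Level Of Service is not configured")
    return algorithm()

print(batch_level_of_service("CM-BILL"))   # configured: the algorithm's result
print(batch_level_of_service("CM-OTHER"))  # not configured: the Disabled default
```

The key point of the sketch is the default branch: callers do not need to check whether an algorithm exists before invoking the service.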


Building a Batch Level of Service Algorithm

One of the features of the Oracle Utilities Application Framework is the Batch Level Of Service. This is an optional feature where the Oracle Utilities Application Framework can assess the current execution metrics against some target metrics and return whether the batch job met its targets or failed to meet them (including the reason). This facility requires some configuration on the Batch Control using a Batch Level Of Service algorithm. The algorithm takes in a Batch Code (batchControlId) as input, performs the necessary processing to check the level of service (any way you wish) and passes back the following:

- The Level Of Service, levelOfService (as expressed by the system lookup F1_BATCH_LEVEL_SERVICE_FLG):
  - DISA (Disabled) - The Batch Level Of Service is disabled as the algorithm is not configured on the Batch Control record. This is the default.
  - NORM (Normal) - The execution of the batch job is within the service level you are checking.
  - ERRO (Error) - The execution of the batch job exceeds the service level you are checking.
  - WARN (Warning) - This can be used to detect that the job is close to the service level (if you require this functionality).
- The reason for the Level Of Service, expressed as a message (via Message Category, Message Number and Message Parameters). This allows you to customize the information passed to express why the target was within limits or exceeded.

So it is possible to use any metric in your algorithm to measure the target performance of your batch controls. This information will be displayed on the Batch Control or via the F1-BatchLevelOfService Business Service (for query portals). Now, I will illustrate the process for building a Batch Level Of Service algorithm with an example. This sample will just take a target value and assess the latest completed execution.
The requirements for the sample algorithm are as follows: A target will be set on the parameters of the algorithm, which is the target value in seconds. Seconds was chosen as that is the lowest common denominator for all types of jobs. The algorithm will determine the latest batch number or batch rerun number (to support reruns) for completed jobs only. We have an internal business service, F1-BatchRunStatistics, that returns the relevant statistics if given the batch code, batch number and batch rerun number. The duration returned will be compared to the target and the relevant levelOfService set with the appropriate message. Here is the process I used to build my algorithm: I created three custom messages that would hold the reason for the NORM, ERRO and WARN states. I do not use the last state in my algorithm, though in a future set of articles I might revisit that. You might notice that in the message for when the target is exceeded I include the target as part of the message (to tell you how far you are away from the target). The first parameter will be the target and the second will be the value returned from the product. The next step is to define the Business Service that will return the batch identifiers of the execution I want to evaluate for the statistic. In this case I want to find the latest run number for a given batch code. Now, there are various ways of doing this but I will build a business service to bring back the right value.
In this case I will do the following: I will build a query zone with the following configuration to return the batch run number and batch rerun number:

- Zone: CMBHZZ
- Description: Return Batch Last Run Number and Rerun Number
- Zone Type: F1-DE-SINGLE
- Application Service: F1-DFLTS
- Width: Full
- Hidden Filter 1: label=BATCH_CD
- Initial Display Columns: C1 C2 C3
- SQL Statement: select b1.batch_cd, max(b1.batch_nbr), max(b2.batch_rerun_nbr) from ci_batch_inst b1, ci_batch_inst b2 where b1.batch_cd = :H1 and b1.batch_cd = b2.batch_cd and b1.batch_nbr = b2.batch_nbr group by b1.batch_cd
- Column 1: source=SQLCOL sqlcol=1 label=BATCH_CD
- Column 2: source=SQLCOL sqlcol=2 label=BATCH_NBR
- Column 3: source=SQLCOL sqlcol=3 label=BATCH_RERUN_NBR

I will convert this to a Business Service using FWLZDEXP. I then need to create a Data Area to hold my input variables. I could do this inline but I might want to reuse the Data Area for other algorithms in the future. I now have all the components to start my algorithm via a Plug-In Script. I create a Batch Level Of Service script and attach the Data Areas used by the various calls in the script. Note: The script code is for illustrative purposes. It is not a supported part of the product, just an example. I now create the Algorithm Type that will define the algorithm parameters and the interface for the Algorithm entries. Notice the only parameter is the Target Value. Now I create the Algorithm entries to set the target value. I can create many different algorithm entries to reuse across the batch controls. The final step is to add it to the Batch Controls ready to be used. As I wrote the script as a Plug-In Script there is no deployment needed as it auto deploys.
For example, on the Batch Control, I can add the algorithm. Now the Batch Level Of Service will be invoked whenever I open the Batch Control. This example is just one use case to illustrate the use of Batch Level Of Service. This article is the first in a new series that will use this as a basis for a new set of custom portals to help plan and optimize your batch experience.
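The core check in the sample algorithm, comparing the latest execution's duration to the target in seconds, can be sketched as follows. This is a Python illustration only (the real implementation is an OUAF Plug-In Script; the function name and the warn threshold are hypothetical, since the sample above does not use the WARN state):

```python
def evaluate_level_of_service(duration_seconds, target_seconds, warn_ratio=0.9):
    """Compare a completed execution's duration against a target in seconds.

    Returns a (levelOfService, message) pair using the lookup values
    described above: NORM within target, ERRO when exceeded, and WARN
    when close to the target (warn_ratio is an illustrative threshold,
    not part of the product sample).
    """
    if duration_seconds > target_seconds:
        return ("ERRO", f"Target of {target_seconds}s exceeded: took {duration_seconds}s")
    if duration_seconds > target_seconds * warn_ratio:
        return ("WARN", f"Close to target of {target_seconds}s: took {duration_seconds}s")
    return ("NORM", f"Within target of {target_seconds}s: took {duration_seconds}s")
```

Note how the message carries both the target and the actual value, matching the two message parameters described in the sample requirements.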


Design Guidelines

The Oracle Utilities Application Framework is both flexible and powerful in terms of the extensibility of the products that use it. As the famous saying goes though, "With Great Power comes Great Responsibility". Flexibility does not mean that you have carte blanche in terms of design when it comes to using the facilities of the product. Each object in the product has been designed for a specific purpose, and using the extension facilities with those objects must also respect those purposes. Let me give some advice that may help guide your design work when building extensions: Look at the base - The most important piece of advice I give partners and customers is to look at the base product facilities first. I am amazed how many times I see an enhancement that has been implemented by a partner only to find that the base product already did that. This is particularly important when upgrading to a newer version. We spend a lot of time adding new features and updating existing ones (and sometimes replacing older features with newer ones), so what you built as enhancements in previous versions may now be part of the base product. It is a good idea to revert to the base to reduce your maintenance costs. Respect the objects - We have three types of objects in the product: Configuration, Master and Transaction. The configuration objects are designed to hold metadata and configuration that influence the behavior of the product. They are cached in an L2 Cache that is designed for performance, and are generally static data used as reference and guidance for the other objects. They tend to be low volume and are the domain of your Administrators or Power Users (rather than end users). A simple rule here is that they tend to exist on the Admin menu of the product. The master objects are medium volume, with low growth, and define the key identifier or root data used by the product. For example: Accounts, Meters, Assets, Crews, etc.
The transaction objects are high volume and high growth; they are added by processes in the product or interfaces and directly reference master objects. For example: bills, payments, meter reads, work activities, tasks, etc. These objects tend to also support Information Lifecycle Management. Now you need to respect each of them. For example, do not load transaction data into a configuration object. Each type has its own place and its own resource profile and behaviors. Avoid overuse of the CLOB field - The CLOB field was introduced across most objects in the product and is a great way of extending the product. Just understand that while these fields are powerful they are not unlimited. They are limited in size for performance reasons and they are not a replacement for other facilities like characteristics or even building custom tables. Remember they are XML, with limited maintenance and search capabilities compared to other methods. Avoid long term issues - This one is hard to explain so let me try. When you design something, think about the other issues that may arise from your design. For example, lots of implementers forget about volume increases over time and run into issues such as long term storage. Remember data in certain objects has different lifecycles and needs to be managed accordingly. Factor that into your design. Too many times I see extensions that forget this rule, and then the customer calls support for advice only to hear they need to redesign the extension to cater for the issue. I have been in the industry over 30 years and made a lot of those mistakes myself early in my career, so they are easy to make. Just learn and make sure you do not repeat your mistakes over time. One more piece of advice: talk about your designs with a few people (of various ages as well) to see if they make sense. Do not take this as a criticism, as a lot of great designers bounce ideas off others to see if they make sense.
Doing that as part of any design process helps make the design more robust. Otherwise it can look rushed and, from the other side, like lazy design. I have seen great designs and bad designs, but it is possible to transform a requirement into a great design with some forethought.


Updates to Oracle Utilities Testing solution

We are pleased to announce the availability of new content for the Oracle Functional Testing Advanced Pack for Oracle Utilities. This pack allows customers of supported Oracle Utilities products to adopt automated testing quickly and easily by providing the testing components used by Product Development for use in the Oracle Application Testing Suite. We have released, as patches available from My Oracle Support, the following content patches:

- Oracle Utilities Customer Care And Billing v2.6.0.0.0 (available as patch 26075747)
- Oracle Utilities Customer To Meter v2.6.0.0.0 (available as patch 26075823)
- Oracle Utilities Meter Data Management / Oracle Utilities Smart Grid Gateway v2.2.0.1 (available as patch 26075799)

This means the current release of the pack, v5.0.1.0, supports the following products and versions:

- Oracle Utilities Customer Care And Billing 2.4.0.3, 2.5.0.1, 2.5.0.2 & 2.6.0.0
- Oracle Utilities Mobile Workforce Management 2.2.0.3, 2.3.0.0 & 2.3.0.1
- Oracle Real Time Scheduler 2.2.0.3, 2.3.0.0 & 2.3.0.1
- Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2, 4.3.0.3 & 4.3.0.4
- Oracle Utilities Meter Data Management 2.1.0.3, 2.2.0.0 & 2.2.0.1
- Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3, 2.2.0.0 & 2.2.0.1
- Oracle Utilities Work And Asset Management 2.1.1 & 2.2.0
- Oracle Utilities Operational Device Management 2.1.1 & 2.2.0
- Oracle Utilities Customer To Meter 2.6.0.0

The pack continues to support the ability to build flows for these products, including flows across multiple products and packaged integrations, and supports all channels of access including online, web services and batch. We also support mobile testing for the Oracle Utilities Mobile Workforce Management and Oracle Real Time Scheduler products running on Android and iOS devices. The pack also includes the sanity flows, used by the Oracle Utilities cloud deployments, that test that the installation of the products is complete and operational.


The VERSION column - An unsung treasure

If you use an Oracle Utilities Application Framework based product, you will notice that the column VERSION exists on all objects in the product. There is a very important reason that this column exists on the tables. One of the common scenarios in an online system is a problem called the lost update problem. Let me explain. Say we have two users (there can be more), User A and User B: User A reads Object A to edit it. User B reads Object A as well, to edit it at the same time. User B saves the object changes first. User A then saves the object changes. Now, without protection, the changes that User B made would be overwritten by User A's changes. We have lost User B's changes. This is the lost update problem in a nutshell. Using the VERSION column changes the above scenario: When User A and User B read the object, the current value of VERSION is noted. Whenever the object is updated, the value of VERSION is checked. If it is the same as the value of VERSION when the record was read, then the value of VERSION is incremented as part of the update. If the value of VERSION does not match, the product will issue a "Concurrency Error" and ask the user to retry the transaction (after reloading the changed object). In our scenario, User A would receive the message, as the value of VERSION has been incremented, and therefore differs, since it was read by that user. VERSION is a standard column on all objects in the system and applies no matter what channel (online, web services or batch) updates the object.
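The scenario above is classic optimistic locking. Here is a minimal Python sketch of the check-and-increment behavior (the class and method names are hypothetical; the product handles all of this internally):

```python
class ConcurrencyError(Exception):
    """Raised when the VERSION a user read no longer matches the stored value."""

class VersionedObject:
    """Minimal optimistic-locking sketch of the VERSION column behavior."""

    def __init__(self, data):
        self.data = data
        self.version = 1

    def read(self):
        # A user reads the object and notes the current VERSION.
        return self.data, self.version

    def update(self, new_data, version_read):
        # On update, the stored VERSION is checked against the value read.
        if version_read != self.version:
            raise ConcurrencyError("Concurrency Error: retry after reloading the object")
        self.data = new_data
        self.version += 1  # incremented as part of the successful update

# User A and User B read the same object at the same time.
obj = VersionedObject({"name": "original"})
_, version_a = obj.read()
_, version_b = obj.read()

obj.update({"name": "user B change"}, version_b)  # User B saves first: succeeds
try:
    obj.update({"name": "user A change"}, version_a)  # User A's save is rejected
except ConcurrencyError as e:
    print(e)
```

User B's change survives, and User A is forced to reload and reapply, exactly the protection the VERSION column provides.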


Hidden gems in OUAF 4.3.0.4.0

Oracle Utilities Application Framework V4.3.0.4.0 has just been released with a few products and you will find a few hidden gems in the installation which provide a couple of useful features for those upgrading. Here is a summary of some of those features: You will notice that the product now requires the Oracle Java Required Files (JRF). These are additional libraries Oracle uses in its products to standardize diagnostics and administration. The JRF is provided as a profile you apply to your Oracle WebLogic domain to provide additional facilities and features. To install JRF, it is recommended to download the Fusion Middleware Infrastructure release of Oracle WebLogic as it includes all the files necessary to apply the template. These libraries are used by various components in the product, and with each release we will implement more and more of the advanced functionality they provide. One of the biggest gems is that JRF implements an additional console in the form of Fusion Middleware Control. Customers familiar with Oracle SOA Suite will be familiar with this console. It is a companion console and has some additional features around Web Services management and other administration features (including recording for replays) for common tasks. The JRF includes a prebuilt diagnostics framework (FMWDFW) setup for use with WLDF. The WebLogic Diagnostic Framework (WLDF) is a framework where you configure rules for detecting issues in your domain. When an issue arises, WLDF automatically collects the relevant information into a Diagnostics Package which can be sent to Oracle Support for diagnosis. This collects any relevant information (including flight recordings, if you enable that) and creates a zip file full of diagnostic information to help solve the issue. The prebuilt setup can be used with OUAF products and can be altered to detect additional issues if necessary.
At present it helps detect the following:

- Deadlocks
- Heapspace (memory issues)
- Stuck Threads (it can be configured to detect hogging threads as well)
- UncheckedException (general errors)

The JRF is a collection of useful libraries and utilities that are now enabled with the Oracle Utilities Application Framework to help you be more efficient and also detect issues for you to manage.


Scripting, Groovy and Java for extending the product

In a recent release of the Oracle Utilities Application Framework, we introduced Groovy as an alternative development technology for server side extensions to our products. This means we now have three technologies that can be used to extend our products: the XPath/XQuery based scripting engine (known as scripting), Java and Groovy. Now, the question becomes which technology to use for your extensions. Here are a few guidelines to help you: In terms of performance, there is not much difference between the technologies as, at the end of the day, they all result in byte code that is executed by the product. The product runtime does not discriminate between the technologies at that level. There is a slight advantage of Java/Groovy over scripting for extremely large volumes. If you are doing complex algorithmic work or operating system level interaction, it is recommended to use either Groovy or Java instead of scripting. While scripting can satisfy the most common extensions, it may not be as efficient as Java/Groovy. If you are intending to move to the Oracle Utilities SaaS offerings, you cannot use Java for any extensions. This is because Java tends to be low level and you cannot deploy your own JAR/WAR/EAR files in a SaaS environment. If you use Oracle PaaS then you have full access, so you can use Java in those cases. Groovy was adopted as a language as it is the foundation for extensions across the Oracle Cloud offerings in general. The Groovy implementation across the Oracle Cloud is whitelisted so that it is restricted to accessing classes that do not have direct access to operating system resources. In this case we supply Groovy libraries to provide a contained integration with these resources. One of the major considerations is total cost of ownership. Typically, if you use a mixture of languages in your implementation, the cost of maintaining those extensions tends to be higher than if you had chosen a single language.
This is true for any product that has multiple ways of extension: while flexibility is a great asset, it can come with additional costs. I usually recommend that you pick one of the technologies and stick with it for your extensions unless, for some reason, you need to use a mixture. In terms of best practices, a lot of implementation partners tend to use scripting for the vast majority of their extensions and only use Groovy/Java when scripting is not applicable for some reason. One of the big advantages of scripting and Groovy is that the code assets are actually contained in the database, and migration is all handled by either Bundling (for small migrations) or the Configuration Migration Assistant (CMA). The use of Java for extensions typically requires a manual synchronization of code as well as data. From a vendor perspective, it does not matter which technology you choose to use. Personally, I would use scripting and then only use Groovy as necessary; it is easier to manage, and you do not have physical JAR/WAR/EAR files to manage, which makes your code/data synchronization much less of an issue in a complex migration strategy. It also means you can move to the cloud more easily in the future.


High and Maximum Availability Architectures

One of the most common questions I get from partners is what are the best practices Oracle recommends for implementing high availability and business continuity. Oracle has a set of flexible architectures and capabilities to support a wide range of high availability and business continuity solutions available in the marketplace. The Oracle Utilities Application Framework supports Oracle WebLogic, the Oracle Database and related products, with features inherited from the architecture or native facilities that allow features to be implemented. In summary, the Oracle Utilities Application Framework supports the following: Oracle WebLogic clustering and high availability architectures are supported natively, including support for the load balancing facilities available, whether they are hardware or software based. This support extends to the individual channels supported by the Framework and to individual J2EE resources such as JMS, Data Sources, MDBs etc. Oracle Coherence high availability clustering is available natively for the batch architecture. We now also support using Oracle WebLogic to cluster and manage our batch architecture (though it is exclusively used in our Oracle Cloud implementations at the moment). The high availability and business continuity features of the Oracle Database are also supported. For example, it is possible to implement Oracle Notification Service support within the architecture to implement Fast Connection Failover etc. Oracle publishes a set of guidelines for Oracle WebLogic, Oracle Coherence and the Oracle Database that can be used with the Oracle Utilities Application Framework to implement high availability and business continuity solutions. Refer to the following references for this information: Maximum Availability Architecture, Oracle Database High Availability, Fusion Middleware Clustering Guide, Oracle Coherence Clustering Guide and the Oracle Fusion Middleware High Availability Guide.


REST Support clarifications

In the Oracle Utilities Application Framework V4.3.0.3.0 release, support for REST has been enabled for use as a complementary interface method, adding to the SOAP support we already have in the product. The REST support in the Oracle Utilities Application Framework was originally developed to support our new generation of the mobile connection platform used for the Oracle Utilities Mobile Workforce Management platform and was initially limited to that product. Subsequently, we have decided to open up the support for general use. As the REST support was designed for its original purpose, the current release of REST is limited to specific aspects of that protocol, but it is at a sufficient level to be used for general purpose functions. It is designed to be an alternative to SOAP integration for customers who want to use a mixture of SOAP and REST in their integration architectures. In the initial release, the REST support has been implemented as part of the online channel to take advantage of the Oracle WebLogic facilities and share the protocol and security setup of that channel. In a future release, we have plans to incorporate enhanced REST features in a separate channel dedicated to integration. For more information about the REST platform support, including the limitations of this initial release, refer to the Web Services Best Practices whitepaper from My Oracle Support (Doc Id: 221475.1).


Multiple Policy Support (4.3.0.4.0)

One of the features of the latest Oracle Utilities Application Framework (V4.3.0.4.0) is support for multiple WS-Policy compliant policies on Inbound Web Services. There are a number of ways to achieve this:

- Annotations - It is now possible to specify multiple inline policies (standard ones and custom ones), with order of precedence also supported via a Sequence. It is also now possible to delegate security within Annotations to Oracle Web Services Manager. This means it is now possible to mix inline with external policies.
- Oracle WebLogic - It is possible to attach the policies supported by Oracle WebLogic to the individually deployed Web Services at the container level. This supports multiple policies (order of precedence is designated by the order they appear in the Web Service) on the individual Web Service.
- Oracle Web Services Manager - It is possible to attach additional policies using the container (Web Services Manager includes the Oracle WebLogic supported policies, additional advanced policies and access controls) and, like Oracle WebLogic, the order of precedence for multiple policies is the order they are attached to the individual Web Service.

Now, why have multiple policies in the first place? Well, you do not have to use multiple policies, but there are a few use cases where it makes sense: Some WS-Policies cover transport security and some cover message security only. Using a combination allows you to specify both using different policies. I should point out that most WS-Policies contain a transport and message combination, which reduces the need for multiple policies in the container. You can create WS-Policy compliant custom policies, as long as they are supported by Oracle WebLogic or Oracle Web Services Manager, and those can have separate transport or message security definitions. You should reuse web services as much as possible.
You can choose not to expose the WS-Policy in your service but then use different policies for different interface systems. This might sound illogical but you may have different levels of security depending on the source of the call. In this case you would tell your sources the different policies they must adhere to. Multiple policies are an optional feature but can be used to support a wide range of different interface styles.
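To illustrate the order-of-precedence idea described above, here is a minimal conceptual sketch in Python. This is not the Oracle Utilities Application Framework API; the policy file names and the numeric "sequence" attribute are illustrative assumptions only.

```python
# Conceptual sketch (not the OUAF API): resolving the effective order of
# multiple WS-Policies attached to a single Inbound Web Service.
# Policy names and the numeric sequence values below are assumptions.

def effective_policy_order(policies):
    """Return policy names sorted by their precedence sequence.

    `policies` is a list of (name, sequence) tuples; lower sequence
    numbers take precedence, mirroring the Sequence idea above.
    """
    return [name for name, seq in sorted(policies, key=lambda p: p[1])]

attached = [
    ("Wssp1.2-2007-Https-UsernameToken-Plain.xml", 20),  # transport-oriented policy
    ("CM-CustomMessagePolicy.xml", 10),                  # hypothetical custom message policy
]

print(effective_policy_order(attached))
# -> ['CM-CustomMessagePolicy.xml', 'Wssp1.2-2007-Https-UsernameToken-Plain.xml']
```

The point of the sketch is simply that, when several policies apply to one service, the runtime evaluates them in a well-defined order rather than arbitrarily.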


Advice

SOA Suite Security with Inbound Web Services

With the introduction of Inbound Web Services, the integration between these services and Oracle SOA Suite now has a few more options in terms of security. It is possible to specify the WS-Policy used to secure the transport and message sent to the product web service on the SOA Composite. The product supports more than one WS-Policy per service, and any composite must conform to one of those policies. As with older versions of the product and SOA Suite, you can specify the csf-key within the domain itself. This key holds the credentials of the interface in metadata, which avoids hardcoding the credentials in each call. It also means you can manage credentials from the console independently of the composite. In the latest releases it is possible to specify the csf-map as well (in past releases you had to use oracle.wsm.security as the map). The configuration process is as follows:

Using Oracle Fusion Middleware Control, select the Oracle SOA Suite domain (usually soa_domain) and add the credentials (and map) to the domain. The credentials can be shared across composites, or you can choose to set up multiple credentials (one for each interface, for example). In the example below, the map is the default oracle.wsm.security map and the key is ouaf.key.

Specify the credentials and the WS-Policies on the composite within Oracle SOA Suite. This can be done within SOA Composer or Oracle JDeveloper. In Oracle JDeveloper, for example, you link the WS-Policies using Configure SOA WS Policies at the project level for each external reference. You then select the policy you want to use for the call from the list of policies displayed. Remember, you only use one of the policies you have configured on the Inbound Web Service. If you have a custom policy, it must be deployed to Oracle SOA Suite and your Oracle JDeveloper instance to be valid for your composite.

Edit the policy to specify additional information. At this point, specify which csf-map and csf-key you want to use for the call in the Override Value.

The security has now been set up for the composite. You have indicated the credentials (which can be managed from the console), and the policy can be attached to the composite to ensure that your security specification is implemented. Depending on the WS-Policy you choose, there may be additional transport and message protection settings you will need to specify (for example, if you use policy-specific encryption outside the transport layer, you may need to specify the encryption parameters for the message). For full details of Oracle SOA Suite facilities, refer to the Oracle SOA Suite documentation.
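The csf-map/csf-key arrangement described above can be pictured as a two-level lookup: a map groups related keys, and each key holds a credential pair. The sketch below is a conceptual model only, not the Oracle Platform Security Services API (the real store is managed through Fusion Middleware Control); the key name and credentials are example values.

```python
# Conceptual model only: how a csf-map groups csf-keys that hold
# credentials, so composites reference a key instead of hardcoding
# a username/password. Map and key names below are examples.

credential_store = {
    "oracle.wsm.security": {               # default csf-map used by OWSM
        "ouaf.key": ("intuser", "s3cret"), # csf-key -> (username, password)
    }
}

def lookup(csf_map, csf_key):
    """Fetch credentials for a composite reference by map and key."""
    try:
        return credential_store[csf_map][csf_key]
    except KeyError:
        raise KeyError("No credentials for %s/%s" % (csf_map, csf_key))

user, password = lookup("oracle.wsm.security", "ouaf.key")
print(user)  # the composite itself never embeds these values
```

Because the composite only names the map and key, rotating the password is a console operation with no change to (or redeployment of) the composite.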


Testing, the Oracle difference

Recently I attended the customer forums in London to discuss the future of our product lines and to outline the work we have done over the last year. One of the questions that came up was a discussion of the major advantages of using the Oracle Functional Testing Advanced Pack for Oracle Utilities, which is part of the Oracle Testing solution. In the industry, functional testing, in an initial implementation and in the subsequent upgrades of any product, is a major part of the implementation. To meet deadlines, implementations commonly decide to reduce the scope of testing, which increases the overall risk. One way of addressing this is to adopt automated testing. While this sounds logical, it can have hidden costs:

Traditional tools use user interface based scripting, which basically records the screen and the interaction with the screen. Earlier in my career I used to call this screen scraping. I am sure it is more sophisticated than that, but effectively it uses the screen recording, including the data entered, as a rerunnable test.

Typically, the data entered during the recording is embedded in the script. This means that if you wanted to reuse the script you would probably need to record it again, or have a programming resource change the script. Effectively, you need a specialist script programmer to maintain the testing assets for you.

If the user experience changes, even due to a patch, the script may not work as originally intended, may return inconsistent results, or will need to be re-recorded. This is more likely when you upgrade, as new modern user experiences are introduced over time.

Testing assets are really programmable objects that are typically maintained by a programmer rather than a testing resource. Whilst these scripting languages are made easier and easier to use, they are still programming.
Now, whilst it is possible to use the Oracle Application Testing Suite in the traditional sense as outlined above, when it is coupled with the Oracle Functional Testing Advanced Pack for Oracle Utilities the approach is much different and addresses the issues seen in traditional automated testing:

Oracle Functional Testing Advanced Pack for Oracle Utilities includes a full set of reusable components that are the SAME components used by the QA teams at Oracle on a day to day basis. The fact that product QA uses them daily reduces the risk that they will not execute against the product versions.

The solution is based upon the Oracle Application Testing Suite, which is used by hundreds of Oracle customers across many Oracle products such as E-Business Suite, PeopleSoft, Fusion and JD Edwards. Oracle Utilities is just one of the latest products to use the Oracle Application Testing Suite. In fact, some of those products have licensed packs as well that can be used in conjunction with the Oracle Utilities pack.

The components cover the main functionality of the product they are supplied for. The only components we do not provide are those covering the administration objects. These objects are typically not cost effective to automate in an implementation, due to their very low usage after implementation.

The supplied components are customization aware: algorithms, change handlers, etc. are handled by the component automatically. The Oracle Functional Testing Advanced Pack for Oracle Utilities also supplies a number of utilities that allow partners and implementations to add custom components for any customization not handled by the base components (this should be relatively rare).

The process of using the pack with the Oracle Application Testing Suite is more assembly (orchestration) than programming. Oracle Flow Builder, which is included in the solution, is a simple browser based tool that allows business processes to be modeled by dragging and dropping the components in the order that represents the business process. This allows a lower skilled person, rather than a programmer, to build the flows.

A testing flow becomes a test script through a generator. The resulting script does not need to be altered or maintained by a developer after it is generated.

Data for the flow is independent of the flow, which encourages reuse. For example, it is possible to attach different data representing different scenarios to a single flow, and flows can also contain multiple scenarios if desired. This separation extends even after the flow is expressed as a test script, where the physical data is separated out so it can be replaced at runtime rather than design time.

The whole solution is designed for reuse, so the number of assets you need is far less than with traditional methods. This reduces costs and risk. It is also possible to reuse your flows across product versions. For example, you can test multiple releases of products, reducing your upgrade risk, by aligning the same flows to different versions of the supplied components.

The testing solution from Oracle Utilities is far more cost effective than traditional methods, with the supplied content allowing implementations to quickly adopt automated testing with lower implementation risk. Customers who have used the solution have found that they tested more, reduced their testing costs and increased the accuracy of their solutions.
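The flow/data separation described above is the key design idea, and it can be sketched in a few lines of Python. This is not Oracle Flow Builder itself; the component names and data fields are hypothetical, purely to show how one assembled flow can be run against many data sets.

```python
# Sketch of the design idea only (not Oracle Flow Builder): a test flow
# is an ordered list of reusable components, and the data for each run
# is supplied separately, so one flow serves many scenarios.

def run_flow(flow, data):
    """Execute each component in order against one scenario's data set."""
    return [component(data) for component in flow]

# Reusable "components" (hypothetical names for illustration).
create_account = lambda d: "account created for %s" % d["customer"]
create_bill    = lambda d: "bill created for %s" % d["amount"]

flow = [create_account, create_bill]   # assembled once, reused many times

scenario_a = {"customer": "Smith", "amount": 100}
scenario_b = {"customer": "Jones", "amount": 250}

print(run_flow(flow, scenario_a))
print(run_flow(flow, scenario_b))
```

Because the data is attached at run time rather than baked into the flow, adding a new scenario means adding a data set, not re-recording or re-programming the flow.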


Announcements

Oracle Utilities Work And Asset Management V2.2.0.0.0 Released

Oracle Utilities Work And Asset Management (WAM) V2.2.0.0.0 has been released and is available from the Oracle Software Delivery Cloud. This version is based upon Oracle Utilities Application Framework V4.3.0.4.0 (also known as 4.3 SP4). Included in this release are usability enhancements, an update to the Esri GIS Integration, Preventive Maintenance Event processing, and Construction Work Management. With these new additions we are now able to support the full asset lifecycle, from design and construction to retirement, opening up the gas and electric distribution market. Construction Work Management adds the final piece to the asset lifecycle process.

Asset Performance Management - The Asset Performance Management features have been enhanced to offer new ways to calculate Asset Health Index scores and to set up Preventive Maintenance triggers based on the Asset Health Index. We also offer integration points for third party predictive maintenance products to affect the Asset Health Index.

Compatible Units - Compatible Units are job standards that can be used to provide consistency and assistance when creating work designs. Compatible Units can be created for either internal resources or for contractors.

Construction Work Design - Work designs are critical to utility distribution companies. The work design process leverages the compatible units to quickly scope and estimate the costs of work. You are able to create multiple versions of a design to compare various construction options, such as overhead or underground work. You can also create design versions to compare contractor work. When you pick a design to execute, you can easily transition the work design into a work package without having to create new work orders from scratch.

Construction Work Orders - Construction work orders differ from regular work orders because they create new assets rather than maintain existing assets. A construction work order also manages Construction Work in Progress (CWIP) accounting to ensure that work in progress is accounted for correctly. The closeout process allows you to create new WAM assets to start their lifecycle in WAM, and also creates the fixed asset property unit data to feed the corporate accounting system.

"As Built" Reconciliation - One of the big challenges for organizations is the reconciliation of the work design against the actual construction. The actual construction work often diverges from the estimate due to the wide variety of variables that occur on a project. WAM V2.2 offers a full reconciliation process that allows you to revise the values of assets, move costs between construction and maintenance accounts, and review and adjust property unit valuation, and it provides support for mass asset valuations.

PM Event Processing - You can now package a group of work templates into a PM Event and trigger that event as a group, rather than one work template at a time. This can be used for outage work or any repetitive work that requires multiple work orders to be created.

Esri GIS Integration - The user experience of the Esri GIS Integration has been completely revised to provide a more intuitive experience. Esri map viewer components are directly integrated into the Work and Asset Management product. Customers can publish any map component as an Esri Web Map and enroll that Web Map into WAM. This includes feature layer maps as well as any thematic maps or metrics that customers choose to publish.


Announcements

Oracle Utilities Customer Care and Billing V2.6.0.0.0 is now available

Oracle Utilities Customer Care And Billing V2.6.0.0.0 is now available for download and installation from the Oracle Software Delivery Cloud. This is the first Oracle Utilities product released on Oracle Utilities Application Framework V4.3.0.4.0, also known as 4.3 SP4. The latest Oracle Utilities Application Framework includes the latest updates, new functionality, content we have delivered from our cloud offerings, and new versions of platforms. The release media includes a new set of updated documentation:

Updated versions of the online documentation, which are available using the Oracle Help engine online and in offline format as well.

New technical documentation about installation, operations and security.

A new API Guide for the management APIs now included in the release documentation. These APIs are used by our new management interfaces and our next release of the OEM Management Pack for Oracle Utilities.

As outlined in my post OUAF 4.3.0.4.0 Release Summary, Oracle Utilities Customer Care And Billing customers can now utilize the Framework features described there. With the general availability of Oracle Utilities Application Framework V4.3.0.4.0, a series of articles and new versions of whitepapers will be released over the coming months to highlight the new features available for cloud and on-premise implementations of these products.


Advice

OUAF 4.3.0.4.0 Release Summary

The next release of the Oracle Utilities Application Framework (4.3.0.4.0) is in its final implementation across our product lines over the next few months. This release improves the existing Oracle Utilities Application Framework with exciting new features and enhances existing features for our cloud and non-cloud implementations. Here is a summary of the key features of the new Oracle Utilities Application Framework.

CMA Improvements

The following highlights some improvements to Configuration Migration Assistant (CMA) processing:

Ad-hoc Migration Requests - A new migration request BO has been provided to allow building 'ad-hoc' migration requests using a list of specific objects. It is called the "entity list" migration request. A special zone is included to find records to include in the migration request. This zone allows you to choose a maintenance object that is configured for CMA and enter search criteria to get a list of objects to choose from. The zone supports linking one or more objects for the same MO en masse. Once records are linked, a zone allows you to view the existing records and remove any if needed.

Grouping Migration Requests - Migration requests may now be grouped so that you can maintain more granular migration requests that are grouped together to orchestrate a single export of data for a 'wholesale' migration. The framework supplies a new 'group' migration request that includes other migration requests that logically group migration plans. Edge products or implementations may include this migration request in their own migration requests.

Mass Actions During Migration Import Approval - When importing data sets, a user may now perform mass actions on migration objects to approve, reject or mark as 'needs review'.

Groovy Library Support

Implementers may now define a Groovy library script for common functionality that may be included in other Groovy scripts. There is a new script type for this purpose. Scripts of this type define a Groovy Library Interface step type to list the Groovy methods defined within the script that are available for use by other scripts. Additional script steps using the Groovy Member step type define the Groovy code that the script implements. Groovy scripts that choose to reference the Groovy library script can use the createLibraryScript method provided by the system to instantiate the library interface.

Search Menu Capability

A new option in the toolbar allows a user to search for a page rather than using the menu to find the desired page. All menu items whose label matches what the user types are shown as you type.

Additional Features

The following is a subset of additional features that are included. Refer to the published release notes for more details:

URI validation and substitution - Any place where a URI is configured can now use substitution variables to support transparency across environments. The fully substituted value can also be validated against a whitelist for added security.

Minimizing the dashboard suppresses refresh - This allows a user to improve response when navigating throughout the system by delaying the refresh of zones in the dashboard while it is minimized.

New support for UI design - Input maps may now support half width sections. Both display and input maps may support "floating" half width sections that fill in available space on the UI based on what is displayed.

Individual batch controls may now be secured independently.

Ad-hoc batch parameters are supplied to all batch related plug-in spots. Additionally, plug-in driven batch programs may now support ad-hoc parameters.

Elements in a schema that include the private=true attribute will no longer appear in the WSDL of any Inbound Web Service based upon that schema.
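The URI validation and substitution feature mentioned above lends itself to a small illustration. The sketch below is an assumption-laden Python model, not the actual framework implementation: the @VAR@ token syntax, the variable names and the whitelisted host are all examples chosen for the sketch.

```python
# Illustrative sketch of URI substitution plus whitelist validation
# (token syntax, variable names and hosts are assumptions, not the
# actual OUAF configuration).
import re

WHITELIST = {"files.example.com"}   # hosts permitted after substitution
VARIABLES = {
    "SPLOUTPUT": "/u01/app/output",     # per-environment values
    "F1_HOST": "files.example.com",
}

def substitute(uri):
    """Replace @VAR@ tokens with the current environment's values."""
    return re.sub(r"@(\w+)@", lambda m: VARIABLES[m.group(1)], uri)

def validate(uri):
    """Check the substituted URI's host against the whitelist."""
    host = re.match(r"https?://([^/]+)", uri)
    return bool(host) and host.group(1) in WHITELIST

uri = substitute("https://@F1_HOST@/drop")
print(uri, validate(uri))  # -> https://files.example.com/drop True
```

The benefit shown here is transparency: the same configured URI string works in every environment because only the variable table changes, and the whitelist check runs on the fully substituted value.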


Advice

Oracle Utilities 2017 Edge Customer Conference Product Forum

I will be attending the USA and APAC Oracle Utilities 2017 Edge Customer Conference Product Forums this year, conducting a number of technical sessions. This year, to make the sessions more relevant, the content has been tweaked to cover a number of aspects of each subject area chosen. The sessions are a combination of information, future plans, best practices and tips/techniques for getting the most out of your products. The information is based upon feedback from implementations across the world as well as information on how Oracle itself is implementing the products in the cloud. The sessions this year are as follows:

TECH-001 Data Management Strategies - Using ILM and CMA to manage your data. This session will not be conducted at the APAC conference.

TECH-002 Integration Techniques - Using the various techniques available to build an integration solution, including Web Services, REST and the Oracle Integration Cloud Adapter.

TECH-003 Extending your implementation - Various techniques for extending your product on site and in the cloud.

TECH-004 Testing your implementation - Outlining testing accelerators with the Oracle Utilities Advanced Testing Pack (co-presented with a customer).

TECH-005 Utilities in the Cloud - An architectural overview of the Oracle Utilities offerings in the cloud, to understand the capabilities and learn how to apply the same architectures to your onsite or cloud implementations.

TECH-006 Securing your implementation - Understanding the security aspects of the products as well as options for extending the security capabilities.

TECH-007 General Question and Answer session - A panel session where you can ask product experts questions about implementation issues and directions.

TECH-009 Batch Scheduling - A session outlining the new integration with the Oracle Scheduler.
If you are attending the forum, feel free to attend and catch up with me at the sessions or the various other avenues during the conference.


Advice

Whitepaper List as at December 2016

The following Oracle Utilities Application Framework technical whitepapers are available from My Oracle Support at the Doc Ids mentioned below. Some have been updated in the last few months to reflect new advice and new features. Refer to Whitepaper Strategy Now and In the Future for the direction of the documentation. Note: If a link on this page does not work, the whitepaper may have been retired. In that case, refer to the online documentation provided with your product for more information. Unless otherwise marked, the technical whitepapers listed below are applicable to the following products (with versions):

Oracle Utilities Customer Care And Billing (V2 and above)
Oracle Utilities Meter Data Management (V2 and above)
Oracle Utilities Mobile Workforce Management (V2 and above)
Oracle Utilities Smart Grid Gateway (V2 and above) - All Adapters
Oracle Utilities Operational Device Management (V2 and above)
Oracle Utilities Work and Asset Management (V2 and above)
Oracle Public Service Revenue Management (all versions)
Oracle Revenue Management and Billing (all versions)
Oracle Real Time Scheduler (V2 and above)

559880.1 ConfigLab Design Guidelines - This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility. It currently only applies to the following products: Oracle Utilities Customer Care And Billing, Oracle Enterprise Taxation Management, Oracle Public Service Revenue Management, Oracle Revenue Management and Billing. Note: ConfigLab is no longer supported in OUAF 4.3.x; use the Configuration Migration Assistant instead.

560367.1 Technical Best Practices for Oracle Utilities Application Framework Based Products - Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.

560382.1 Performance Troubleshooting Guideline Series - A set of whitepapers on tracking performance at each tier in the framework.
The individual whitepapers are as follows:

Concepts - General concepts and performance troubleshooting processes.
Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions.
Network Troubleshooting - General troubleshooting of the network with common issues and resolutions.
Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions.
Server Troubleshooting - General troubleshooting of the operating system with common issues and resolutions.
Database Troubleshooting - General troubleshooting of the database with common issues and resolutions.
Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions.

560401.1 Software Configuration Management Series - A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. Topics include Revision Control, SDK Migration/Utilities, Bundling and Configuration Migration Assistant. The individual whitepapers are as follows:

Concepts - General concepts and introduction.
Environment Management - Principles and techniques for creating and managing environments.
Version Management - Integration of version control and version management of configuration items.
Release Management - Packaging configuration items into a release.
Distribution - Distribution and installation of releases across environments.
Change Management - Generic change management processes for product implementations.
Status Accounting - Status reporting techniques using product facilities.
Defect Management - Generic defect management processes for product implementations.
Implementing Single Fixes - Discussion of the single fix architecture and how to use it in an implementation.
Implementing Service Packs - Discussion of service packs and how to use them in an implementation.
Implementing Upgrades - Discussion of the upgrade process and common techniques for minimizing the impact of upgrades.

773473.1 Oracle Utilities Application Framework Security Overview - A whitepaper summarizing the security facilities in the framework. Now includes references to other supported Oracle security products.

774783.1 LDAP Integration for Oracle Utilities Application Framework based products - A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework.

789060.1 Oracle Utilities Application Framework Integration Overview - A whitepaper summarizing the various common integration techniques used with the product (with case studies).

799912.1 Single Sign On Integration for Oracle Utilities Application Framework based products - A whitepaper outlining a generic process for integrating an SSO product with the framework.

807068.1 Oracle Utilities Application Framework Architecture Guidelines - This whitepaper outlines the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.

836362.1 Batch Best Practices - This whitepaper outlines the common and best practices implemented by sites all over the world.

856854.1 Technical Best Practices V1 Addendum - Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only.

942074.1 XAI Best Practices - This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

970785.1 Oracle Identity Manager Integration Overview - This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager, used to provision user and user group security information. FW4.x customers should use whitepaper 1375600.1 instead.
1068958.1 Production Environment Configuration Guidelines - A whitepaper outlining common production level settings for the products, based upon benchmarks and customer feedback.

1177265.1 What's New In Oracle Utilities Application Framework V4? - Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2.

1290700.1 Database Vault Integration - Whitepaper outlining the Database Vault integration solution provided with Oracle Utilities Application Framework V4.1.0 and above.

1299732.1 BI Publisher Guidelines for Oracle Utilities Application Framework - Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework.

1308161.1 Oracle SOA Suite Integration with Oracle Utilities Application Framework based products - This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products.

1308165.1 MPL Best Practices - A guidelines whitepaper for products shipping with the Multi-Purpose Listener. It currently only applies to the following products: Oracle Utilities Customer Care And Billing, Oracle Enterprise Taxation Management, Oracle Public Service Revenue Management, Oracle Revenue Management and Billing. Note: MPL has been deprecated in OUAF 4.3.x; please use Oracle Service Bus as an alternative.

1308181.1 Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework - This whitepaper covers the native integration between Oracle WebLogic JMS and the Oracle Utilities Application Framework using the new Message Driven Bean functionality and real time JMS adapters.

1334558.1 Oracle WebLogic Clustering for Oracle Utilities Application Framework - This whitepaper covers the process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products.

1359369.1 IBM WebSphere Clustering for Oracle Utilities Application Framework - This whitepaper covers the process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products.

1375600.1 Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework - This whitepaper covers the integration between the Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory.

1375615.1 Advanced Security for the Oracle Utilities Application Framework - This whitepaper covers common security requirements and how to meet them using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application Server and/or facilities available in Oracle Identity Management Suite.

1486886.1 Implementing Oracle Exadata with Oracle Utilities Customer Care and Billing - This whitepaper covers advice when implementing Oracle Exadata for Oracle Utilities Customer Care And Billing.

878212.1 Oracle Utilities Application FW Available Service Packs - This entry outlines ALL the service packs available for the Oracle Utilities Application Framework.

1454143.1 Certification Matrix for Oracle Utilities Products - This entry outlines the software certifications for all the Oracle Utilities products.

1474435.1 Oracle Application Management Pack for Oracle Utilities Overview - This whitepaper covers the Oracle Application Management Pack for Oracle Utilities, a pack for Oracle Enterprise Manager.

1506855.1 Integration Reference Solutions - This whitepaper covers the various Oracle technologies you can use with the Oracle Utilities Application Framework.

1544969.1 Native Installation Oracle Utilities Application Framework - This whitepaper describes the process of installing Oracle Utilities Application Framework based products natively within Oracle WebLogic.
1558279.1 Oracle Service Bus Integration - This whitepaper describes direct integration with Oracle Service Bus, including the new Oracle Service Bus protocol adapters available. Customers using the MPL should read this whitepaper, as Oracle Service Bus replaces the MPL in the future, and it outlines how to manually migrate your MPL configuration into Oracle Service Bus. Note: In Oracle Utilities Application Framework V4.2.0.1.0 and above, Oracle Service Bus Adapters for Outbound Messages and Notification/Workflow are available.

1561930.1 Using Oracle Text for Fuzzy Searching - This whitepaper describes how to use the name matching and fuzzy operator facilities in Oracle Text to implement fuzzy searching using the @fuzzy helper function available in Oracle Utilities Application Framework V4.2.0.0.0.

1606764.1 Audit Vault Integration - This whitepaper describes the integration with Oracle Audit Vault to centralize and separate audit information from OUAF products. Audit Vault integration is available in OUAF 4.2.0.1.0 and above only.

1644914.1 Migrating XAI to IWS - Migration from XML Application Integration to the new native Inbound Web Services in Oracle Utilities Application Framework 4.2.0.2.0 and above.

1643845.1 Private Cloud Planning Guide - Planning guide for implementing Oracle Utilities products on private clouds using Oracle's Cloud Foundation set of products.

1682436.1 ILM Planning Guide - Planning guide for the new Oracle Utilities ILM based data management and archiving solution.

207303.1 Client / Server Interoperability Support Matrix - Certification matrix.

1965395.1 Cache Nodes Configuration using BatchEdit utility - Using the new Batch Edit wizard to configure batch quickly and easily.

1628358.1 Overview and Guidelines for Managing Business Exceptions and Errors - Best practices for To Do management.

1929040.1 ConfigTools Best Practices - Best practices for using the configuration tools facility.

2014161.1 Oracle Utilities Application Framework - Keystore Configuration - Managing the keystore.

2014163.1 Oracle Functional/Load Testing Advanced Pack for Oracle Utilities Overview - Outlines the Oracle Application Testing Suite based testing solution for functional and load testing available for Oracle Utilities Application Framework based products. Updated for 5.0.0.1.0.

2132081.1 Migrating From On Premise To Oracle Platform As A Service - Outlines the process of moving an Oracle Utilities product from on-premise to Oracle Cloud Platform As A Service (PaaS).

2196486.1 Batch Scheduler Integration - Outlines the Oracle Utilities Application Framework based integration with Oracle's DBMS_SCHEDULER to build, manage and execute complex batch schedules.

2211363.1 Enterprise Manager for Oracle Utilities: Service Pack Compliance - Outlines the process of converting service packs to allow the Application Management Pack for Oracle Utilities to install service packs using the patch management capabilities.

2214375.1 Web Services Best Practices - Outlines the best practices for the web services capabilities available for integration.


Advice

Connection Pools

The Oracle Utilities Application Framework uses connection pooling to manage the number of connections in the architecture. The advantage of pooling is that connections are shared across concurrent users rather than allocated to individual users, where they may stay allocated while the user is not active. This ensures that the connections allocated are actually being used, so the number of connections can be less than the number of users using the product at any time. A pool has a number of configuration settings:

Minimum Size - The initial size of the connection pool at startup. For non-production this is typically set to 1, but in production it is set to a size that represents the minimum number of concurrent users at any time. The number of connections in the pool will not fall below this setting.
Maximum Size - The maximum number of connections to support in the pool. This number typically represents the expected peak concurrent connections in the pool. If this number is too small, connections will queue, causing delays, and eventually connections against the pool will be refused.
Growth Rate - The number of new connections to add to the pool when the pool is busy and new connections are required, up to the Maximum Size. Some pool technologies allow you to configure the number of connections to add each time.
Inactive Detection - Pools have a collection of settings that define when a connection in the pool can be shut down due to inactivity. This allows the pool to dynamically shrink when traffic becomes off peak.

There are typically two main connection pools used in the product:

Client Connection Pool - The connections between the client machines (browser, web services client, etc.) and the server. This connection pool can be direct or via proxies (if you are using a proxy generally or as part of a high availability solution).
Now, if you are using Oracle WebLogic, this is managed by default by a global Work Manager, which allows unlimited connections. While that sounds ideal, it means you may run into an out of memory condition as the number of connections increases, before you ever hit a connection limit. It is possible to configure custom Work Managers to restrict the number of connections and prevent out of memory conditions. Most customers use the global Work Manager with no issues, but it may be worth investigating Work Managers to see if the Capacity Constraint or Maximum Threads Constraint capabilities should be used to restrict connection numbers.

Database Connection Pool - The connections between the product and the database. These pools can use UCP or JDBC data sources. The latter uses Oracle WebLogic JNDI to define the attributes of the connection pools as well as advanced connection properties such as Fast Connection Failover (FCF) and GridLink. The choice between UCP and JDBC data sources depends on your site standards and how often you want to be able to change the connections to the database. JDBC data sources are more flexible in terms of maintaining pool information, whereas UCP is more desirable where fixed configurations are typical. Additionally, if you are using a proxy in front of the product, most proxies have connection number restrictions to consider when determining pool sizes.

When deciding the size of the pools and their attributes, there are a number of considerations. The goal is to have enough connections in a pool to satisfy the number of concurrent users at any time, across peak and non-peak periods. When designing pool sizes and other attributes, remember that wasted connections are a burden on resources. Making the pool dynamic ensures resources are used optimally as traffic fluctuates. Conversely, the establishment of new connections is an overhead when traffic grows.
In terms of overall performance, the cost of establishing new connections is minimal. Connections in the pool are only needed for users actively using server resources. They are not used while users are idle or interacting with the client aspects of the product (for example, moving the mouse across the screen, interacting with non-dynamic fields, etc.). Some guidelines:

Set the minimum number of connections to the absolute minimum you want to start with, or the number you want to always have available. It is not recommended to use the non-production default of one (1), as that would force the pool to create lots of new connections as traffic ramps up during the business day.
Set the maximum number to the expected peak concurrent connections at any time, with some headroom for growth. Active pool connections take resources (CPU, memory, etc.), so make sure this number is reasonable for your business. Some customers use test figures as a starting point, talk to their management to determine the number of peak user connections, or use performance testing to decide the figure. I have heard implementation partners talk about rules of thumb where they estimate based upon total users.
Set the inactivity settings to destroy connections when they become idle for a period of time. This period can vary, but a low value is generally not recommended given the nature of typical traffic seen on sites. For example, partners will generally look at between 30-60 seconds of inactivity time, and maybe more. The idea is to gradually remove inactive connections as traffic drops off peak.
If the pool allows you to specify the number of new connections to create at a time, consider using a number other than one (1) for online channels. Whilst a low value seems logical, it will result in a slower ramp up rate.
Monitor the connection pools for connection queuing, as that may indicate your maximum is too low for your site.
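The pool semantics described above (minimum, maximum, growth rate and inactivity shrinking) can be sketched as a small simulation. This is purely illustrative; the class, method names and numbers are mine, not any Oracle API:

```python
# Illustrative sketch of generic connection pool behavior:
# minimum size, maximum size, growth rate and off-peak shrinking.
# All names and numbers are hypothetical, not an Oracle API.

class ConnectionPool:
    def __init__(self, minimum=10, maximum=100, growth=5):
        self.minimum = minimum  # connections held at all times
        self.maximum = maximum  # hard ceiling; beyond this, requests queue
        self.growth = growth    # connections added per expansion
        self.size = minimum     # current physical connections
        self.in_use = 0

    def acquire(self):
        """Hand out a connection, growing the pool if needed."""
        if self.in_use == self.size:
            if self.size >= self.maximum:
                return False    # pool exhausted: caller queues or is refused
            self.size = min(self.size + self.growth, self.maximum)
        self.in_use += 1
        return True

    def release(self):
        self.in_use = max(0, self.in_use - 1)

    def reap_idle(self):
        """Off-peak shrink: drop idle connections, never below the minimum."""
        self.size = max(self.minimum, self.in_use)

pool = ConnectionPool(minimum=10, maximum=20, growth=5)
granted = sum(pool.acquire() for _ in range(25))  # 25 concurrent requests
print(granted, pool.size)  # 20 20 -> only 20 fit; pool grew to its maximum
```

Note how a too-low maximum (20 here against 25 requests) produces exactly the queuing/refusal symptom described above, and how `reap_idle` models the inactivity settings shrinking the pool back toward the minimum off peak.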
One other piece of advice from my troubleshooting days: do not assume the figures you use today will still be valid in a year's time. I have found that as a product implementation ages, end users use the product very differently over time. I have chatted to consultants about the fact that I have personally seen traffic double in the first 18 months of an implementation. Now, that is not a hard and fast rule, just an observation: when a product is initially implemented, end users are conservative in its use, but over time, as they get more accustomed to the product, their usage, and therefore traffic volume, increases. This must be reflected in the pool sizing and attributes.


Authentication and Authorization Identifiers

In the Oracle Utilities Application Framework, user identification is actually divided into two parts:

Authentication Identifier (aka Login Id) - This is the identifier used for authentication (challenge/response) in the product. It is up to 256 characters in length and must be matched by the configured security repository for it to be checked. By default, if you are using Oracle WebLogic, there is an internal LDAP based security system that can be used for this purpose. It is possible to link to external security repositories using the wide range of Oracle WebLogic security providers included in the installation. This applies to Single Sign-On solutions as well.
Authorization Identifier (aka UserId) - This is the short user identifier (up to 8 characters in length) used for all service and action authorization as well as low level access.

The two identifiers are separated for a couple of key reasons:

Authentication Identifiers can be changed. Use cases like name changes, business changes, etc. mean that the authentication identifier needs to be able to change. As long as the security repository is also changed, this identifier will stay in synchronization for correct login. Authentication Identifiers are typically email addresses, which can vary and are subject to change. For example, if the company is acquired, the user domain will most probably change.
Changes to Authentication Identifiers do not affect any existing audit or authorization records. As the authorization user is used for internal processing, the authentication identifier, while tracked, is not used for security internally once you have been successfully authenticated.
Authorization Identifiers are not changeable. They can be related to the Authentication Identifier, such as using the first initial and the first 7 characters of the surname, or be randomly generated by an external Identity Management solution.
One of the main reasons the Authorization Identifier is limited in size is to allow a wide range of security solutions to be hooked into the architecture and to provide an efficient means of tracking. For example, the identifier is propagated in the connection across the architecture to allow end to end tracking of transactions. Security has been augmented in the last few releases of the Oracle Utilities Application Framework to allow various flexible levels of control and tracking. Each implementation can decide which aspects of security to track, using the tools available or third party tools (if they want that).
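One convention mentioned above for deriving the short Authorization Identifier (first initial plus up to seven characters of the surname) can be sketched as follows. The function name is mine, for illustration only; it is not a product API:

```python
def derive_userid(first_name: str, surname: str) -> str:
    """Sketch of the convention described above: first initial plus
    up to seven characters of the surname, upper-cased, max 8 chars.
    Illustrative helper only, not part of the product."""
    candidate = (first_name[:1] + surname[:7]).upper()
    return candidate[:8]

print(derive_userid("Jane", "Robertson"))  # JROBERTS
```

In practice many sites instead have their Identity Management solution generate the identifier, as noted above; either way the result must fit the 8 character limit.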


Using ADO and HeatMap in the Utilities ILM Solution

The ILM features of the Oracle Database are used in the Oracle Utilities ILM capability to implement the technical side of the solution. In Oracle Database 12c, two new facilities were added to the already available ILM features to make the implementation of ILM easier: Automatic Data Optimization (ADO) and Heat Map.

The Heat Map feature allows Oracle itself to track the use of blocks and segments in your database. Every time a program or user touches a row in the database, such as via a SELECT, UPDATE or DELETE SQL statement, Heat Map records that it was touched. This information is important as it profiles the actual usage of the data in your database, and it can be used by Automatic Data Optimization. Heat Map is disabled by default and requires a database initialization parameter to be changed.

Automatic Data Optimization is a facility where DBAs can set ILM rules, known as policies, to perform certain ILM actions on the data. For example: if the data has not been touched, using Heat Map data, within X months, then COMPRESS it to save space; or if ILM_ARCH_SW is set to Y, move the data to partition X. There are a lot of combinations and facilities in the ADO rules to give DBAs flexibility. ADO allows DBAs to specify the rules and then supplies a procedure that can be scheduled, at the convenience of the site, to implement them.

ADO and Heat Map are powerful data management tools that DBAs should get used to. They allow simple specification of rules and use features in the database to manage your data. For more information about Heat Map and ADO refer to the following: Heat Map, Automatic Data Optimization and ILM with Oracle Database; the Automatic Data Optimization whitepaper; the Oracle VLDB Guide, which includes advice on partitioning and time based data management (ADO/Heat Map).
Enabling and Disabling Heat Map
Using Oracle Enterprise Manager for Managing ADO and Heat Map
Limitations and Restrictions with ADO and Heat Map (Note: In Oracle Database 12.1, you cannot use ADO/Heat Map on a multitenant database. This is supported in Oracle 12.2.)
Video: Exploring Oracle 12c ADO Features
Video: Manage ADO from Oracle Enterprise Manager Cloud Control
Video: Manage Compression from Oracle Enterprise Manager Cloud Control
ILM Planning Guide (Doc Id: 1682436.1)
Oracle Training Course: Oracle Database 12c R1: New Features for Administrators Ed 2 (covers ADO and Heat Map)
Oracle Training Course: Oracle Database 12c: Implement Partitioning Ed 1
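As a concrete illustration of the facilities described above, enabling Heat Map and attaching an ADO compression policy look roughly like the following. The table name is illustrative and the exact policy syntax should be checked against the VLDB Guide referenced above:

```sql
-- Enable Heat Map tracking instance-wide (it is disabled by default).
ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

-- Illustrative ADO policy on a hypothetical ILM-enabled table:
-- compress row data not modified for 3 months, using Heat Map statistics.
ALTER TABLE my_ilm_table ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 3 MONTHS OF NO MODIFICATION;
```

The policy is then evaluated by the ADO maintenance procedure, which the DBA schedules at the site's convenience, as described above.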


Overload Protection Support

One of the features we support in Oracle Utilities Application Framework V4.3.x and above is the Oracle WebLogic Overload Protection feature. By default, Oracle WebLogic is set up with a global Work Manager, which gives you unlimited connections to the server. Whilst this is reasonable for non-production systems, Oracle generally encourages limiting connections in production to avoid overloading the server. In production, it is generally accepted that the Oracle WebLogic servers will be either clustered or a set of managed servers, as this is the typical setup for the high availability requirements of that environment. Using these configurations, it is recommended to set limits on individual servers to enforce capacity requirements across your cluster/managed servers. There are a number of recommendations when using Overload Protection:

The Oracle Utilities Application Framework automatically sets the panic action to system-exit. This is the recommended setting so that the server will stop and restart if it is overloaded. In a clustered or managed server environment, end users are routed to other servers in the configuration while the server is restarted by the Node Manager. This is set at the ENVIRON.INI level as part of the install in the WLS_OVERRIDE_PROTECT variable, which is set via the WebLogic Overload Protection setting in the configureEnv utility.
Ensure you have set up a high availability environment, either using clustering or multiple managed servers with a proxy (like Oracle HTTP Server or Oracle Traffic Director). Oracle has Maximum Availability Architecture guidelines that can help you plan your HA solution.
By default, the product ships with a single global Work Manager within the domain (this is the default from Oracle WebLogic). It is possible to create custom Work Manager definitions with a Capacity Constraint and/or Maximum Threads Constraint, allocated to product servers, to provide additional capacity controls.
For more information about Overload Protection and Work Managers refer to Avoiding and Managing Overload and Using Work Managers to Optimize Scheduled Work.
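A custom Work Manager with a maximum threads constraint, as mentioned above, can be declared in a deployment descriptor along these lines. The names and count are illustrative only; consult the Work Manager documentation referenced above for the full schema and for domain-level configuration:

```xml
<!-- Illustrative weblogic.xml fragment: a custom Work Manager capped
     by a max-threads-constraint. Names and count are examples only. -->
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <work-manager>
    <name>OUAFWorkManager</name>
    <max-threads-constraint>
      <name>OUAFMaxThreads</name>
      <count>50</count>
    </max-threads-constraint>
  </work-manager>
</weblogic-web-app>
```

Capping the thread count in this way bounds the server's concurrent work, which is the capacity control that complements the overload panic action described above.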


ILM Planning - The First Steps

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products, using the ILM functionality provided, is to decide the business retention periods for your data. Before discussing the first steps, a couple of concepts need to be understood:

Active Period - The period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
Data Groups - The various stages in which the data is managed after the Active Period and before archival. In these groups the ILM solution will use a combination of tiered storage solutions, partitioning and/or compression to realize cost savings.
Archival - Typically the final state of the data, where it is either placed on non-disk archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

How long should the active period be? In other words, how long does the business need access to update the data?
How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember the data is still accessible by the business whilst it is in the database.

The decisions here are affected by a number of key considerations:

How long the data needs to be available for update by business processes - This can be how long the business needs to rebill, or how long update activity is allowed on a historical record. Remember this is the requirement for the BUSINESS to get update access.
How long you legally need to be able to access the records - Each jurisdiction will have legal and government requirements on how long data must remain available. For example, there may be a government regulation around rebilling or how long a meter read can be available for change.
The overall data retention periods are dictated by the business and legal requirements for access to the data. This can be tricky as tax requirements vary from country to country. For example, in most countries the data needs to be available to tax authorities for, say, 7 years, in machine readable format. This does not mean it needs to be in the system for 7 years; it just needs to be available when requested. I have seen customers use tape storage, off site storage or even the old microfiche storage (that is showing my age!). Retention means that the data is available on the system even after update access is no longer required. This means read only access is needed, and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups, where each group of data, usually based on a date range, has different storage/compression/access characteristics. This can be expressed as a partition per data group to allow for physical separation of the data. Remember that the data is still accessible, but it is not on the same physical storage and location as the more active data. The best way of starting this process is working with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution. Once agreement has been reached, the first part of the configuration in ILM is to update the Master Configuration for ILM with the retention periods agreed for the active period. This enables the business part of the process to be initiated. The ILM configuration is on each object, and in some cases on subsets of objects, to set the retention period in days. This is used by the ILM batch jobs to decide when to assess records for the next data groups.
There will be additional articles in this series which walk you through the ILM process.


ILM Clarification

Lately I have received a lot of partner and customer questions about the ILM capability that we ship with our solutions. Our ILM solution is a combined business and technical capability that allows customers to implement cost effective data management for product transaction tables. These tables grow quickly, and the solution allows a site to define its business retention rules as well as implement storage solutions to realize cost savings whilst retaining data appropriately. There are several aspects to the solution:

In-built functionality - These are the retention definitions, contained in a Master Configuration record, that you configure, as well as some prebuilt algorithms and ILM batch jobs. The prebuilt algorithms are called by the ILM batch jobs to assess the age of a row as well as check for any outstanding related data for ILM enabled objects. Additional columns are added to the ILM enabled objects to help track the age of the record and to set flags for the technical aspects of the solution to use. The retention period defines the ACTIVE period of the data for the business, which is typically the period the business needs fast update access to the data.

New columns - Two columns are added: ILM_DATE and ILM_ARCH_SW. ILM_DATE is the date used to determine the age of the row. By default, it is typically set to the creation date of the row, but as it is part of the object, implementers can optionally alter this value after it is set to influence the retention period of individual rows. ILM_ARCH_SW is set to "N" by default, indicating the business is still using the row. When a row becomes eligible, in other words, when ILM_DATE plus the retention period configured for the object has passed, the ILM batch jobs assess the row against the ILM algorithms to determine whether any business rules indicate the record is still active.
If the business rules indicate nothing is outstanding for the row, ILM_ARCH_SW is set to "Y". This value effectively tells the system that the business has finished with that row in the ACTIVE period. Conversely, if a business rule indicates the row needs to be retained, ILM_ARCH_SW remains "N".

Technical aspects of the solution - Once ILM_ARCH_SW is set to "Y", the ILM features within the database are used, so some licensing aspects apply:

Oracle Database Enterprise Edition is needed to support the ILM activities. Other editions do not support the features used.
The Partitioning option of Oracle Database Enterprise Edition is a minimum requirement. This is used for data group isolation and for allowing storage characteristics to be set at the partition level for effective data management.
Optionally, it is recommended to license the Oracle Advanced Compression option. This option allows for greater cost savings by allowing higher levels of compression to be used as a tool to realize further savings. The base compression in Oracle can be used as well, but it is limited and not optimized for some activities.
Optionally, customers can use the free ILM Assistant add-on to the database (training for ILM Assistant is available). This is a web based planning tool, based upon Oracle APEX, that allows DBAs to build different storage scenarios and assess the cost savings of each. It does not implement the scenarios, but it will generate some basic partitioning SQL. Generally, for Oracle 12c customers, ILM Assistant is not recommended as it does not cover all the additional ILM capabilities of that version of the database. Personally, I only tend to recommend it to customers who have tiered storage solutions, which is not a lot of customers generally. Oracle 12c includes additional (and free) capabilities built into the database.
These are Automatic Data Optimization and Heat Map. They are disabled by default and can be enabled using initialization parameters on your database. Heat Map automatically tracks the usage profile of the data in your database. Automatic Data Optimization can use Heat Map information and other information to define and implement rules for data management. These can be as simple as instructions on compression, or rules that move data across partitions based upon your criteria. For example: if ILM_ARCH_SW is "Y" and the data has not been touched in 3 months, then compress the data using the OLTP compression in Oracle Advanced Compression. These rules are maintained using the free functionality in Oracle Enterprise Manager or, if you prefer, SQL commands can be used to set policies.

Support for storage solutions - Third party hardware based storage solutions (including Oracle's storage solutions) have additional ILM based capabilities built in at the hardware level. Typically those solutions can be used in an ILM based solution with Oracle. Check with your hardware vendor directly for capabilities in this area.

There are a number of resources that can help you understand ILM further:

ILM Planning Guide (Doc Id: 1682436.1) available from My Oracle Support - An overview of the solution as well as a generic process to help implement it.
Oracle VLDB Guide (covers ADO/Heat Map/Partitioning/ILM) - A DBA guide covering the various database facilities you can use for ILM. There is an Oracle 11gR2, Oracle 12.1 and Oracle 12.2 version.
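The eligibility rule described above (a row becomes a candidate once ILM_DATE plus the retention period has passed, and only if no business rule still claims it) can be sketched as follows. The function and rule representation are mine, for illustration; this is not the product's actual algorithm API:

```python
from datetime import date, timedelta

def is_archive_candidate(ilm_date: date, retention_days: int,
                         business_rules, today: date) -> bool:
    """Sketch of the ILM batch assessment described above.
    A row is eligible when ILM_DATE + retention has passed AND
    no business rule still claims the row is active.
    Illustrative only, not the product's algorithm."""
    if today < ilm_date + timedelta(days=retention_days):
        return False  # still inside the ACTIVE period
    # Each rule is a callable returning True if the row is still needed.
    return not any(rule() for rule in business_rules)

# A row created over a year ago, 365-day retention, no objections:
row_eligible = is_archive_candidate(
    date(2017, 1, 1), 365, business_rules=[], today=date(2018, 2, 5))
print(row_eligible)  # True -> ILM_ARCH_SW would be set to "Y"
```

If any rule had objected, the function would return False, mirroring the case above where ILM_ARCH_SW remains "N".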


Whitepapers now and in the future

The whitepapers available for the product will be changing over the next few months to reflect changes in the product documentation. The following changes will happen: The online documentation provided with the product has been enhanced to encompass some of the content previously contained in the whitepapers. This means that when you install the product, you will get the information automatically in the online help and the PDF versions of the documentation. If the online help fully encompasses a whitepaper's contents, the whitepaper will be retired to avoid confusion. Always refer to the online documentation first as it is the most up to date. If some of the whitepaper information is not in the online help, then either the new version of the whitepaper will contain the information you need, or other whitepapers such as the Best Practices series will be updated with it. I will be making announcements on this blog as each whitepaper is updated to reflect this strategy. This means you will not have to download most of the whitepaper information separately; the information will be available online with the product, on Oracle's documentation site, or as a PDF download from Oracle's Delivery Cloud. The first whitepaper to be retired is the Configuration Migration Assistant Overview, which is no longer available from My Oracle Support but is available in the documentation supplied with the product. Remember the first rule: check the documentation supplied with the product before using the whitepapers. The documentation provided with the product is always up to date, whereas the whitepapers are only updated on a semi-regular basis.


New Utilities Testing Solution version available (5.0.1.0)

We have released a new version (5.0.1.0) of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities (OFTAPOU), which is available from Oracle Delivery Cloud for customers and partners. This new OFTAPOU version includes support for more versions of our products. The packs are now cloud compatible, i.e. they can be used for testing applications on Oracle Utilities Cloud services. The pack now supports the following:

Oracle Utilities Customer Care And Billing 2.4.0.3 (updated), 2.5.0.1 (updated) and 2.5.0.2 (updated)
Oracle Utilities Mobile Workforce Management 2.2.0.3 (updated)
Oracle Real Time Scheduler 2.2.0.3 (updated)
Oracle Utilities Mobile Workforce Management 2.3.0 (updated) - with added support for Android/iOS mobile testing
Oracle Real Time Scheduler 2.3.0 (updated) - with added support for Android/iOS mobile testing
Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2 and 4.3.0.3
Oracle Utilities Meter Data Management 2.1.0.3 (updated)
Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3 (updated)
Oracle Utilities Meter Data Management 2.2.0 (new)
Oracle Utilities Smart Grid Gateway (all adapters) 2.2.0 (new)
Oracle Utilities Work And Asset Management 2.1.1 (updated)
Oracle Utilities Operational Device Management 2.1.1 (updated)

The pack now includes integration components that can be used for creating flows spanning multiple applications, known as integration functional flows. Components for testing the mobile application of ORS/MWM have been added; using the latest packs, customers can execute automated test flows of the ORS/MWM application on Android and iOS devices. In addition to the product pack content, the core test automation framework has been enhanced with more ease of use features. For example, the pack now includes sanity flows to verify installations of individual products. These sanity flows are the same flows used by our cloud teams to verify cloud installations.
The pack includes 1000+ prebuilt testing components that can be used to model business flows using Flow Builder and to generate test scripts that can be executed by OpenScript, Oracle Test Manager and/or Oracle Load Testing. This allows customers to adopt automated testing to accelerate their implementations and upgrades whilst reducing their overall risk. The pack also includes support for the latest Oracle Application Testing Suite release (12.5.0.3), as well as a set of utilities to allow partners and implementers to upgrade their custom built test automation flows from older product packs to the latest ones.


Oracle Scheduler Integration Whitepaper available

As part of Oracle Utilities Application Framework V4.3.0.2.0 and above, a new API has been released to allow customers and partners to schedule and execute Oracle Utilities jobs using the DBMS_SCHEDULER package (Oracle Scheduler), which is part of the Oracle Database (all editions). This API allows control and monitoring of product jobs within the Oracle Scheduler so that they can be managed individually or as part of a schedule and/or job chain. Note: It is highly recommended that the Oracle Scheduler objects be housed in an Oracle Database 12c database for maximum efficiency. This has a few advantages:

Low cost - The Oracle Scheduler is part of the Oracle Database license (all editions), so there is no additional license cost for existing instances.
Simple but powerful - The Oracle Scheduler has simple concepts which make it easy to implement, but do not be fooled by its simplicity. It has optional advanced facilities for features like resource profiling and load balancing for enterprise wide scheduling and resource management.
Local or enterprise - There are many ways to implement the Oracle Scheduler, from managing just product jobs to acting as an enterprise wide scheduler. It supports remote job execution using the Oracle Scheduler Agent, which can be enabled as part of the Oracle Client installation. One of the prerequisites of the Oracle Utilities product installation is the Oracle Client, so this just adds the agent to the install. Once the agent is installed, it is registered as a target with the Oracle Scheduler to execute jobs on that remote resource.
Mix and match - The Oracle Scheduler can execute a wide range of job types, so you can mix non-product jobs with product jobs in schedules and/or chains.
Flexible scheduling engine - The calendaring aspect of the scheduling engine is very flexible, with overlaps supported as well as exclusions (for example, to prevent jobs running on public holidays).
Multiple Management Interfaces - The Oracle Utilities products do not include a management interface for the Oracle Scheduler as there are numerous ways the Oracle Scheduler objects can be maintained including command line, Oracle SQL Developer and Oracle Enterprise Manager (base install no pack needed). Email Notification - Individual jobs can send status via email based upon specific conditions. The format of the email is now part of the job definition which means it can be customized far more easier. Before using the Oracle Scheduler it is highly recommended that you read the Scheduler documentation provided with the database: Oracle Scheduler Concepts Scheduling Jobs with Oracle Scheduler Administrating Oracle Scheduler We have published a new whitepaper which outlines the API as well as some general advice on how to implement the Oracle Scheduler with Oracle Utilities products. It is available from My Oracle Support at Batch Scheduler Integration for Oracle Utilities Application Framework (Doc id: 2196486.1). Update: Customers on Oracle Utilities Application Framework V4.3.0.2.0 should install Patch 23639775 to install the installation files. After the patch is installed and customers using Oracle Utilities Application Framework V4.3.0.3.0 and above, should install the interface using the installer contained in the $SPLEBASE/tools/bin/sched subdirectory.
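To give a feel for the kind of definition involved, the following is a minimal sketch of scheduling a nightly job with DBMS_SCHEDULER. The job and program names here are purely hypothetical; the actual callable objects for product jobs are supplied by the API documented in Doc Id: 2196486.1.

```sql
-- Illustrative sketch only: the program name BATCH_BILLING_PRG is a
-- hypothetical placeholder for a program object supplied by the product API.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BILLING',
    program_name    => 'BATCH_BILLING_PRG',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1; BYMINUTE=0',  -- run at 01:00 each day
    enabled         => TRUE,
    comments        => 'Nightly product batch billing run');
END;
/
```

Jobs defined this way can then be combined into chains, attached to job classes, or monitored through the *_SCHEDULER_JOB_RUN_DETAILS views.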


Advice

Architecture Guidelines - Same Domain Issues

After a long leave of absence to battle cancer, I am back, and the first article I wanted to publish is one about some architectural principles that may help in planning your production environments. Recently I was asked by a product partner about the possibility of housing more than one Oracle Utilities product, along with other Oracle products, on the same machine, in the same WebLogic domain and in the same Oracle database. The idea was that the partner wanted to save hardware costs by combining installations. This is technically possible (to varying extents) but not necessarily practical for certain situations, such as production. One of my mentors once told me, "even though something is possible, does not mean it is practical".

Let me clarify the situation. We are talking about multiple products on the same WebLogic domain on the same non-virtualized hardware, sharing the database via different schemas. That means non-virtualized sharing of CPU, memory and disk. Let me explain why housing multiple products in the same domain and/or on the same hardware is not necessarily a good idea:

- Resource profiles - Each product typically has a different resource profile in terms of CPU, memory and disk usage. By placing multiple products in this situation, you would have to compromise on the shared settings to take all the products into account. For example, as the products might share the database instance, the instance-level parameters would represent a compromise across the products. This may not be optimal for the individual products.
- Scalability issues - By limiting your architecture to specific hardware, you constrain any possible future expansion. As your transaction volumes grow, you need to scale, and you do not want to limit your solutions.
- Incompatibilities - Whilst the Oracle Utilities products are designed to interact at the platform level, not all products are compatible when sharing resources. Let me explain with an example.

Over the last few releases we have been replacing our internal technology with Oracle technology. One of the things we replaced was the Multi-Purpose Listener (MPL), which was superseded by the Oracle Service Bus to provide industry-level integration possibilities. Now, it is not possible to house the Oracle Service Bus within the same domain as the Oracle Utilities products. This is not a design flaw but intentional, as a single instance of the Oracle Service Bus can be shared across products and scaled separately. The Oracle Service Bus is only compatible with Oracle SOA Suite, as it builds domain-level configuration which should not be compromised by sharing that domain with other products.

There is a better approach to this issue:

- Virtualization - Using a virtualization technology can address the separation of resources and scalability. It allows for many combinations of configuration whilst allocating resources appropriately for profiles and scalability as your business changes over time.
- Clustering and server separation - Oracle Utilities products can live on the same WebLogic domain, but there are some guidelines to make that work appropriately. For example, each product should have its own cluster and/or servers within the domain. This allows for individual product configuration and optimization. Remember to put non-Oracle Utilities products such as Oracle SOA Suite and Oracle Service Bus on their own domains, as they are typically shared enterprise-wide and have their own pre-optimized domain setups.

This is the first in a series of articles on architecture I hope to impart over the next few weeks.


Advice

Embedded mode limitations for Production systems

In most implementations of Oracle Utilities products the installer creates an embedded mode installation. This is called embedded because the domain configuration is embedded in the application, which is ideal for demonstration and development environments as the default setup is sufficient for those types of activities. Over time, though, customers and partners will want to use more and more of the Oracle WebLogic domain facilities, including advanced setups such as multiple servers, clusters and advanced security configurations.

Here are a few important things to remember about embedded mode. The embedded mode domain setup is fixed, with a single server that houses the product and the administration server with the internal basic security setup. In non-production this is reasonable, as the requirements for the environment are simple. The domain file (config.xml) is generated by the product from a template, assuming it is embedded only. When implementations have additional requirements within the domain, there are three alternatives:

- Make the changes in the domain from the administration console and then convert the new config.xml generated by the console into a custom template. This needs to be done because, whenever Oracle delivers ANY patches or upgrades (or when you make configuration changes), initialSetup[.sh] must be run to apply the patch, upgrade or configuration to the product. This resets the file back to the factory-provided template unless you are using a custom template. Basically, if you decide to use this option and do not implement a custom template, you will lose your changes each time.
- In later versions of the Oracle Utilities Application Framework we introduced user exits. These allow implementations to add to the configuration using XML snippets. Using this method means that you make changes to the domain using the configuration files, examine the changes to the domain file, decide which user exit is available to reflect that change, and add the relevant XML snippet. This does require you to understand the configuration file being manipulated, and we have sprinkled user exits all over the configuration files to allow extensions. Again, you must understand the configuration file to make sure you do not corrupt the domain.
- The easiest option is to migrate to native mode. This removes the embedded nature of the domain and houses it within Oracle WebLogic. This is explained in the Native Installation whitepaper (Doc Id: 1544969.1) available from My Oracle Support. Native installations allow you to use the full facilities of Oracle WebLogic without the restrictions of embedded mode.

The advantages of native installations are the following:

- The domain can be set up according to your company standards.
- You can implement clusters and multiple servers, including dynamic clustering.
- You can use the security features of Oracle WebLogic to implement complex security setups, including SSO solutions.
- You can lay out the architecture according to your volumes to manage within your SLAs.
- You can implement JDBC connection pooling, Work Managers, advanced diagnostics etc.

Oracle recommends that native installations be used for environments where you need to take advantage of the domain facilities. Embedded mode should only be used within the restrictions it poses.
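As an illustration of the user exit approach, an implementation might keep a snippet like the following in the user exit include file for config.xml. The exact include file name and exit point vary by product version, and the element names below follow the general WebLogic config.xml conventions; treat the whole fragment and its names as hypothetical and consult your generated domain file before adapting it.

```xml
<!-- Hypothetical user exit snippet: adds a Work Manager with a thread cap
     to the generated config.xml without editing the base template. -->
<self-tuning>
  <max-threads-constraint>
    <name>ProductMaxThreads</name>
    <target>myserver</target>
    <count>50</count>
  </max-threads-constraint>
  <work-manager>
    <name>ProductWorkManager</name>
    <target>myserver</target>
    <max-threads-constraint>ProductMaxThreads</max-threads-constraint>
  </work-manager>
</self-tuning>
```

Because the snippet is merged at template-generation time, it survives each run of initialSetup[.sh], which is the whole point of the user exit mechanism.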


Advice

DISTRIBUTED mode deprecated

Based upon feedback from partners and customers, the DISTRIBUTED mode used in the batch architecture has been deprecated in Oracle Utilities Application Framework V4.3.x and above. The DISTRIBUTED mode was originally introduced to the batch cluster architecture back in Oracle Utilities Application Framework V2.x and was popular, but it suffered from a number of restrictions. Given that the flexibility of the batch architecture was expanded in newer releases, it was decided to deprecate the DISTRIBUTED mode to encourage more effective use of the architecture. It is recommended that customers using this mode migrate to CLUSTERED mode using a few techniques:

- For non-production environments, it is recommended to use CLUSTERED mode with the single server (ss) template used by the Batch Edit facility. This is a simple cluster that uses CLUSTERED mode without the advanced configurations of a clustered environment. It is restricted to a single host server, so it is not typically recommended for production or for clustered environments that use more than one host server.
- For production environments, it is recommended to use CLUSTERED mode with the unicast (wka) template used by the Batch Edit facility. This allows flexible configuration without the use of multi-cast, which can be an issue on some implementations using CLUSTERED mode.

The advantage of Batch Edit when building your new batch configurations is that it provides a simple interface for defining the configuration without too much fuss, and it generates an optimized set of configuration files that can be used directly by the batch architecture. To use the new architecture, the DISTRIBUTED tags must be removed from the job command lines or configuration files. Customers should read the Batch Best Practices whitepaper (Doc Id: 836362.1) and the Server Administration Guide shipped with your product for advice on Batch Edit as well as the templates mentioned in this article.


Advice

Migrating Oracle Utilities products from On Premise to Oracle Public Cloud

A while back, Oracle Utilities announced that the latest releases of the Oracle Utilities Application Framework applications were supported on Platform As A Service (PaaS) on the Oracle Public Cloud. As part of that support, a new whitepaper has been released outlining the process of migrating an on-premise installation of the product to the relevant Platform As A Service offering on the Oracle Public Cloud. The whitepaper covers the following from a technical point of view:

- The Oracle Cloud services to obtain to house the products, including the Oracle Java Cloud Service and Oracle Database As A Service with associated related services.
- Setup instructions on how to configure the services in preparation to house the product.
- Instructions on how to prepare the software for transfer.
- Instructions on how to transfer the product schema to an Oracle Database As A Service instance using various techniques.
- Instructions on how to transfer the software and make configuration changes to realign the product installation for the cloud. The configuration must follow the instructions in the Native Installation Oracle Utilities Application Framework whitepaper (Doc Id: 1544969.1) available from My Oracle Support, which has also been updated to reflect the new process.
- Basic instructions on using the native cloud facilities to manage your new PaaS instances. More information is available in the cloud documentation.

The whitepaper applies to the latest releases of the Oracle Utilities Application Framework based products only. Customers and partners wanting to establish new environments (with no previous installation) can use the same process, with the addition of actually running the installation on the cloud instance. Customers and partners considering using Oracle Infrastructure As A Service can use the same process, with the addition of installing the prerequisites. The Migrating From On Premise To Oracle Platform As A Service (Doc Id: 2132081.1) whitepaper is available from My Oracle Support.
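One common technique for the schema transfer step is Oracle Data Pump. The sketch below is a minimal, dry-run example; the schema name, connect strings and directory object are hypothetical placeholders, and the whitepaper above should be followed for the actual procedure.

```shell
#!/bin/sh
# Sketch of moving a product schema to Database As A Service with Data Pump.
# Assumptions (all hypothetical): schema CISADM, a DATA_PUMP_DIR directory
# object on both databases, and onprem_db / dbaas_db connect identifiers.
SCHEMA=CISADM
DUMPFILE=${SCHEMA}_migration.dmp
DRY_RUN=1    # set to 0 to actually execute the commands

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "WOULD RUN: $*"     # print the command instead of executing it
  else
    "$@"
  fi
}

# 1. Export the schema from the on-premise database.
run expdp system@onprem_db schemas=$SCHEMA directory=DATA_PUMP_DIR \
    dumpfile=$DUMPFILE logfile=${SCHEMA}_exp.log

# 2. After transferring the dump file, import into the DBaaS instance.
run impdp system@dbaas_db schemas=$SCHEMA directory=DATA_PUMP_DIR \
    dumpfile=$DUMPFILE logfile=${SCHEMA}_imp.log
```

Running the script as-is only prints the two commands, which is a useful way to review them before pointing the tool at real databases.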
This will be the first in a series of cloud based whitepapers.


Announcements

Oracle Utilities Customer Care And Billing 2.5 Benchmark available

Oracle Utilities Customer Care and Billing v2.5.x marked a major change in application technology as it is an all Java-based architecture. In past releases, both Java and COBOL were supported; over the last few releases, COBOL support has been progressively replaced to optimize the product.

In recently conducted performance benchmark tests, it was demonstrated that the performance of Oracle Utilities Customer Care and Billing v2.5.x, an all-Java release, is at least 15 percent better, in all use cases tested, than that of the already high-performing Oracle Utilities Customer Care and Billing v2.4.0.2, which included the COBOL-based architecture for key objects. The performance tests simulated a utility with 10 million customers, with both versions running the same workloads.

Additionally, Oracle Utilities Customer Care and Billing v2.5.x processed 500,000 bills within just 45 minutes. This represents the nightly batch billing for a utility serving 10 million customer accounts divided into twenty groups, so that 5% of all customers are billed each night on each of the 20 working days of the month. The improved performance ultimately reduces the utility staff overtime hours required to oversee batch billing, allows utilities to consolidate tasks on fewer servers and reduce the data center size and cost required, and enables utilities to confidently explore new business processes and revenue sources, such as running billing services for smaller utilities.

A whitepaper summarizing the results and the details of the architecture used is available.


Advice

Using Database Resource Plans for effective resource management

In a past article we announced support for Database Resource Plans. This facility can be used by implementations to set limits and other resource constraints on processing, helping to optimize resource usage for implementations of Oracle Utilities products. I have been asked a couple of follow-up questions about use cases that can exploit this facility. Here are a few things that might encourage its use:

- Database Resource Plans help multiple channels share resources, helping to avoid database contention between channels. For example, most utilities typically do not run batch processes during online hours, as the batch processes may cause contention with online users, causing both channels to run slower. Using Database Resource Plans you can tell the database to share the resources more effectively and constrain batch to have minimal impact on online users. Of course, batch will borrow resources used by online, but by using resource plans you can constrain it as much as is practical.
- Database Resource Plans are very flexible. You can set plans for time periods to reflect different resource profiles by channel and by time of day. Using the batch/online use case from the last point, you can set batch to use fewer resources during the day and more at night; conversely, you can set online to use more resources during the day and fewer at night. This balances resources with their optimal use.
- Database Resource Plans can be set globally or at lower levels. In past releases of the Oracle Utilities Application Framework, a set of database session visibility variables was introduced so that each database connection can be identified for monitoring. These same variables can now be used with resource plans. They include the program/batch job, threadpool/thread, client authorization user, client user tag etc. This means, if you desire, you can set fine-grained rules based upon session characteristics in your database resource plans using Consumer Groups.
- Database Resource Plans offer monitoring at the plan, directive and consumer group levels to assess the effectiveness of those resource plans. This is available from database monitoring products including Oracle Enterprise Manager.

Database Resource Plans are another feature you can use from the database to effectively manage your resource usage and ensure each channel stays within its allocated resource profile. It is all about sharing the available resources and minimizing contention whilst harnessing the available processing power more effectively.
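To make the batch/online use case concrete, the following is a minimal sketch of a resource plan built with the standard DBMS_RESOURCE_MANAGER package. The plan and consumer group names are hypothetical, and how sessions are mapped into the batch group (for example, by module name or client identifier) depends on the session attributes your product version sets, so check your own connections before adapting this.

```sql
-- Illustrative sketch only: cap batch work so online users keep most of
-- the CPU during the day. PROD_BATCH and DAYTIME_PLAN are made-up names.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'PROD_BATCH',
    comment        => 'Product batch sessions');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAYTIME_PLAN',
    comment => 'Favour online work during business hours');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'PROD_BATCH',
    comment          => 'Limit batch to roughly 20% of CPU',
    mgmt_p1          => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OTHER_GROUPS',   -- mandatory catch-all directive
    comment          => 'Everything else, including online users',
    mgmt_p1          => 80);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

A complementary night-time plan could invert the percentages, with a Scheduler window switching the active plan at the start and end of the batch window.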

