
Announcements and Technical Advice for the Oracle
Utilities product community from the Product Management team

Recent Posts

Using Batch Level Of Service

In past articles I discussed the Batch Level Of Service capability, and one of the most common questions I am asked is how best to implement it for on-premise and cloud implementations. Here is some advice for adopting this capability quickly with a minimum of work:

Target Tolerances. Create algorithm entries using the provided base algorithm types, setting your warning tolerance and error tolerance for each target value and method of evaluation. These represent the target values for warning and error tolerances based on metrics such as elapsed time, error rate and throughput. Remember these algorithm types are reusable across many batch controls. The base algorithms support both a warning and an error tolerance, and they support using either the latest completed execution or an in-progress execution. The latter is useful for live tracking during the execution.

Link Targets to Batch Controls. Add the appropriate algorithm (with the appropriate target value) as the Batch Level Of Service algorithm. Multiple algorithms are supported to cater for multiple criteria. To optimize the effort in managing the configuration of Batch Controls, it is recommended to configure the jobs most important or critical to your business first, as this is the subset to focus upon, and then configure the other processes as needed. Not all batch controls need targets.

The Batch Level Of Service will now be assessed against the latest execution (depending on the configuration of the algorithm itself) and the target values, returning the relevant status. Once you configure those targets, you can use the Health Check screen (or its REST API, for integration with other monitoring tools) and other capabilities to determine the batch level of service. Note: In the Oracle Utilities Cloud Services you can set the level of service on the schedule itself to check batch windows.
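Conceptually, each target tolerance compares a metric from the relevant execution against a warning threshold and an error threshold. The sketch below is a hypothetical illustration of that evaluation logic only; the function name, statuses and thresholds are assumptions, not the base algorithm's actual API.

```python
# Hypothetical sketch of a Batch Level Of Service target tolerance check.
# Names and thresholds are illustrative only, not the product's API.

def evaluate_level_of_service(metric_value, warning_tolerance, error_tolerance):
    """Return a level-of-service status for a metric such as elapsed
    time (seconds) or error rate, given warning/error tolerances."""
    if metric_value >= error_tolerance:
        return "ERROR"
    if metric_value >= warning_tolerance:
        return "WARNING"
    return "NORMAL"

# Example: an elapsed-time target of 30 minutes (warning) / 60 minutes (error).
print(evaluate_level_of_service(25 * 60, 30 * 60, 60 * 60))  # NORMAL
print(evaluate_level_of_service(45 * 60, 30 * 60, 60 * 60))  # WARNING
print(evaluate_level_of_service(75 * 60, 30 * 60, 60 * 60))  # ERROR
```

Attaching several such checks to one batch control corresponds to the multiple-algorithm support described above, with each algorithm evaluating a different metric.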
For more information about the Batch Level Of Service, refer to the following related articles: Configuring Batch Level Of Service, Building A Batch Level Of Service Algorithm, Using Batch Level Of Service - Batch History Portal, and Calling a Batch Level Of Service.


UTA - A Different Approach to Automation

The Oracle Utilities Testing Accelerator was introduced over a year ago and the tool continues to be popular with customers; it is included in all Oracle Utilities SaaS Cloud Services available on Oracle Cloud Infrastructure. The tool takes a different approach to providing testing capabilities than other types of testing tools tend to offer. There are three regimes of testing that apply to Oracle Utilities products:

Manual Testing. This is the baseline where a group of human testers simply use the product and test it manually. Whilst this is a valid way of testing for a lot of implementations, it is high cost (due to the labor involved) and high risk, as it tends to be timeboxed to meet deadlines, so things are missed in the testing. When regular upgrades are involved it may soon become hard to sustain.

Automated Testing (also known as Traditional Automation). Over the last decade or so, automated testing tools have appeared on the market to try and reduce the manual effort involved in testing. They tend to concentrate on recording screens, creating scripts from those recordings and embedding data into those scripts. The idea is that the recording can then be played back over and over again. Some vendors have tried to separate the data from the script to make it reusable with different data. No matter what, though, these tools have a fundamental flaw in that they only test online, and any change to the online screens, for ANY reason, means manually manipulating the script or re-recording it. This means the time to test (the time from installing to actually testing) in each iteration has to include refactoring test asset changes.

Component Based Approach with Prebuilt Assets. This is where the Oracle Utilities Testing Accelerator takes the advantages of automation but addresses the time to test concerns.
The Oracle Utilities Testing Accelerator provides prebuilt content from Oracle, which reduces test asset building times, and tests all channels used in Oracle Utilities including online, batch, web services and mobile. Those assets tend to be upgradeable automatically (using the Oracle Utilities philosophy of backward compatibility) to retain their usefulness and save costs. The product also uses an orchestration metaphor to build tests, rather than programming, to reduce the technical skills needed for the tool. The figure below summarizes the discussion above. The Oracle Utilities Testing Accelerator is a testing solution optimized for use with the Oracle Utilities product suite, with prebuilt content including the Oracle Utilities Reference Model. It is available for on-premise use and is now included in all Oracle Utilities SaaS Cloud Services.


Announcements

UTA - Building Custom Content Whitepaper available

It is possible to extend the Oracle Utilities Testing Accelerator with custom components and custom libraries. This allows customers to include extensions that may not be supported by a base supplied prebuilt test asset. The capability allows for the following:

Building Custom Components on Base Capabilities. Whilst the packs provide the majority of the base functionality, some functions may not be available in the release of the pack you are using. Instead of waiting for that function to be released, you can use this capability to build a custom component and continue testing.

Building Custom Components on Extensions. This is the most common use case, where a provided prebuilt component does not cover your extension. Generally, if your extension alters the structure (schema) of the product, not just its data, then a custom component is required to educate the Oracle Utilities Testing Accelerator on how to interface to the extension.

Building Custom Libraries. The Oracle Utilities Testing Accelerator ships with a library of functions to allow manipulation or generation of individual data. Custom libraries can be built inside the tool, using Groovy, to add your own functions if necessary.

A whitepaper that outlines, with examples, the process for all of the above is now available for customers and partners using the Oracle Utilities Testing Accelerator. The UTA - Building Custom Components And Functions for Oracle Utilities Application Framework Based Products (Doc Id: 2662058.1) is available from My Oracle Support.


UTA - Test Strategy Whitepaper available

The Oracle Utilities Testing Accelerator uses a unique approach to test automation for Oracle Utilities products, which uses techniques and content used at Oracle, to reduce the risk and cost of taking advantage of test automation over traditional methods. Given this unique approach, the techniques used in traditional automation need to be optimized to take full advantage of the platform and prebuilt content provided by the Oracle Utilities teams. Developing a coherent test strategy should be part of your initial implementation of test automation. This strategy outlines objectives and techniques and sets the stage for using test automation to meet that strategy. A whitepaper from the developers of the Oracle Utilities Testing Accelerator is now available from My Oracle Support, titled UTA - Test Strategy Best Practices Guidance for Oracle Utilities Application Framework Based Products (Doc Id: 2659556.1). This whitepaper summarizes the following:

Discussions of the different types of testing to be considered part of a strategy, taking into consideration the Oracle Utilities Testing Accelerator. This introduces a typical quadrant system with the standard principle "Do the High Importance and High Value Business Processes First".

Techniques for designing test scenarios and test cases to take advantage of the Oracle Utilities Testing Accelerator prebuilt content, including the Oracle Utilities Reference Model content.

Techniques to consider when designing test data to be used in testing, including generation of data, importing data and how to manage that data effectively to help maximize reuse.

This whitepaper is part of a series which includes existing whitepapers and a set of new whitepapers outlining common practices when using the unique approach of the Oracle Utilities Testing Accelerator.


Advice

UTA - Installation Summary

The installation of the on-premise version of the Utilities Testing Accelerator (UTA) has been improved in versions 6.0.0.1.0 and 6.0.0.2.0 to allow customers to quickly establish product environments. This article is a summary of this process, which is also outlined in the Oracle Utilities Testing Accelerator Installation Guides.

Prerequisites

Before installing there are a number of prerequisites necessary for the installation:

Database. The database to house the Utilities Testing Accelerator objects must be available and accessible to the server you want to install the Utilities Testing Accelerator upon. This can be local or remote, and can be a separate schema or a PDB in the same location as the Oracle Utilities product. The Utilities Testing Accelerator supports the UTA Repository as a separate schema existing on a PDB or non-PDB database, or even as a separate PDB on a multi-tenant database. It does not require any database access to the Oracle Utilities products it is being used against. Note: The Oracle Utilities Testing Accelerator does not ship with a database license and reuses the existing database license for supported Oracle Utilities products. Customers with Unlimited License Agreement licenses for the Oracle Database have fewer restrictions in this regard.

Java. The Java JDK or JRE must be installed on the machine that will house the Utilities Testing Accelerator. A Java support license is included with all Oracle products, including the Oracle Utilities Testing Accelerator.

XWindows. The Oracle installer used by the Oracle Utilities Testing Accelerator uses XWindows. This is the same installer used by the Oracle Database, Oracle WebLogic and a host of other Oracle products, so staff familiar with those products will be familiar with the installer. Note: XWindows may be invoked directly or via virtualization such as VNC, according to your site standards.
Installation Process

After downloading the software from Oracle Software Delivery Cloud, the following steps are performed:

Create Database Resources. Create the Oracle Utilities Testing Accelerator schema user (UTA) and tablespaces in the database to house the UTA Repository objects, using the database tool of your choice as per the Installation Guide. The Installation Guide outlines the minimum setup for the Oracle Utilities Testing Accelerator; alter it as per your site standards. Remember the database settings used, as the installer will ask you for them later. Create the following:

Data Tablespace

CREATE TABLESPACE uta_data DATAFILE '<datafile>' SIZE 500M
  AUTOEXTEND ON NEXT 200M MAXSIZE 1024M
  DEFAULT STORAGE (INITIAL 10M NEXT 1M PCTINCREASE 10)
  PERMANENT ONLINE LOGGING;

where <datafile> is the location and filename in the Oracle database to store the Oracle Utilities Testing Accelerator repository.

Index Tablespace

CREATE TABLESPACE uta_idx DATAFILE '<indexfile>' SIZE 250M
  AUTOEXTEND ON NEXT 50M MAXSIZE 512M
  DEFAULT STORAGE (INITIAL 10M NEXT 1M PCTINCREASE 10)
  PERMANENT ONLINE LOGGING;

where <indexfile> is the location and filename in the Oracle database to store the Oracle Utilities Testing Accelerator repository.

Execute the Installer. Within XWindows, as an appropriate user (usually the same user used to install Oracle Utilities products), execute the installer command. If this command fails, examine the logs and correct the error. The command to start the installer is as follows:

java -jar UTA_6.0.0.2.0_generic.jar

Step By Step Installer Process

The following section outlines the installer process step by step:

The Welcome Page is displayed, outlining the steps that the installer will follow. Click Next to start the process.

Inventory Location. As per Oracle products, provide the location of the Inventory directory and the group used for the installer.
This will default to the group used by the user that initiated the install, but can be altered to a group you desire. Click Next after providing the information.

Installation Location. Provide the directory you want to install the product into. This is effectively the UTA home directory. Click Next after specifying a location.

Java Home and Application Details. Provide the following information as requested by the installer:

Java Home. Home location of the Java JDK/JRE for use with the Oracle Utilities Testing Accelerator. This will default to the location used by the installer, but may reference other Java installations if required.

Application Server Port. The port number to be used by the Oracle Utilities Testing Accelerator. Default: 8082.

Application Administrator User Name. The default user for the administrator. This is the initial user used to create other users. Default: system.

Application Administrator User Password. The password (and confirmation password) for the Application Administrator User Name. Note: This is NOT the database system user. It is a user defined to the Utilities Testing Accelerator product.

Click Next after specifying the information.

Application Keystore Details. As with all Oracle products, the Oracle Utilities Testing Accelerator is installed in secure by default mode. For this to occur, a default keystore needs to be generated as part of the process. The information provided generates a unique key used by UTA for communications and encryption. Therefore the following information, used by the keytool utility for X.500 name generation via the RFC 1779 standard, needs to be provided:

Common Name. Name of the organization, machine name (for machine-scoped installs) or company root website name. For example: utility.com

Organization Unit. Name of the unit associated with the use of the Utilities Testing Accelerator.

Organization Name. Name of the company (not the web site name).

City. Name of the suburb or city.

State. Name of the state.

Country Code.
The ISO-3166 two-character country code. Default: US (for the USA). For other codes, refer to the ISO standard.

Keystore Password. Password (and confirmation) to secure the keystore. This is used by the product to access the keystore. Note: It is possible to replace the keystore after installation if your site has a company keystore. Refer to the Utilities Testing Accelerator documentation for post-installation steps on how to replace the default keystore and use advanced keystore attributes.

Click Next after specifying the information.

Target Database Connection Details. Provide the details of the database you want to use for the installation of the UTA Repository. The following information needs to be provided:

Database Host. The host name of the database to house the UTA Repository.

Database Port. The listener port number of the database to house the UTA Repository.

Database Service Name. The PDB name or database service name of the database to house the UTA Repository.

Database Administrator User Name. The DBA user to be used to install the UTA Repository objects. This user MUST have DBA access to create the objects.

Database Administrator Password. The associated password (and confirmation) for the DBA user.

Application Database Schema Password. The password (and confirmation) for the UTA schema user created in the prerequisite steps.

Click Next after specifying the information.

Installation Summary. A summary of the information provided is presented prior to executing the installation. Click Install to start the installation.

Installation Progress. The installation progress will be shown visually. If any steps error, refer to the log outlined in the previous step, correct the issue and restart.

Installation Complete. A dialog will indicate the success or failure of the installation process, with a link to the log (if desired). Click Finish to close the installer.
After finishing the install, it is possible to start the Oracle Utilities Testing Accelerator using the following commands: Navigate to the home directory specified on page 3 of the installation (Oracle Home) as the user that installed the Oracle Utilities Testing Accelerator, then use the ./startUTA.sh command to start UTA. Refer to the Installation Guide for optional post-installation processes, including loading content packs into the Oracle Utilities Testing Accelerator.
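The keystore fields collected by the installer correspond to the parts of an X.500 distinguished name of the kind passed to the keytool utility. As a hedged illustration only (the helper function and all field values below are invented for this example, not part of the installer), the resulting DN might be assembled like this:

```python
# Illustrative only: assemble an RFC 1779-style X.500 distinguished name
# from the fields the installer collects for the default keystore.
# The function and sample values are hypothetical, not the installer's code.

def build_dname(common_name, org_unit, org, city, state, country="US"):
    """Build a keytool-style -dname string from the keystore fields."""
    parts = [
        ("CN", common_name),  # Common Name, e.g. company root website name
        ("OU", org_unit),     # Organization Unit
        ("O", org),           # Organization Name (not the web site name)
        ("L", city),          # City or suburb
        ("ST", state),        # State
        ("C", country),       # ISO-3166 two-character country code
    ]
    return ", ".join(f"{key}={value}" for key, value in parts)

print(build_dname("utility.com", "Testing", "Example Utility Co", "Springfield", "IL"))
# CN=utility.com, OU=Testing, O=Example Utility Co, L=Springfield, ST=IL, C=US
```

Seeing the fields laid out this way can help when later replacing the default keystore with a company keystore, since the same DN attributes apply.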


Performance Troubleshooting Guides Updated for latest information

A few years ago I published a set of guides for Performance Troubleshooting of Oracle Utilities Application Framework based products for on-premise implementations. These guides outlined the metrics and capabilities to monitor those products, as well as general advice from our Black Belt teams. The whitepapers have been updated with the following changes:

The guides have been rewritten to flow better and make them easier to understand. This was based upon feedback from partners and customers.

Some of the guides have been combined from the previous releases to reduce duplicate information.

The guides now include many more links to more detailed information. A Related Reading section at the top of each topic points you to very important advice on each topic available on the Oracle Support site. This is to keep the information up to date and help you understand the support process.

The guides have been updated for the latest releases of the Oracle Utilities Application Framework. Some advice has been removed that was deemed out of date or no longer relevant to Oracle Utilities Application Framework products.

The guides are split into a number of documents:

Concepts Guide. Introduces the topics and some key concepts used in the series.

Client Monitoring. Monitoring the browser client, with some advice on setting up the browser.

Application Server Monitoring. Monitoring the Oracle WebLogic domain components used by the Oracle Utilities Application Framework. This guide strictly covers only the components directly used by the Oracle Utilities Application Framework within the domain. This used to be a number of guides, now combined into one.

Batch Monitoring. Monitoring the batch architecture, including Coherence monitoring.

Database Monitoring. Monitoring the database components. This whitepaper is a companion to the database documentation and the DBA Guides shipped with each product.

Server Monitoring.
Monitoring the CPU, memory and disk access, with tolerances recommended generically by Oracle for those components.

Network Monitoring. Monitoring the network to the server and within the server architecture.

The Performance Troubleshooting Guides are available from My Oracle Support at Doc Id: 560382.1.


Oracle Utilities Reference Model content available in Utilities Testing Accelerator

Oracle is delighted to announce the availability of exclusive content within the Oracle Utilities Testing Accelerator for the Oracle Utilities Reference Model. This means that the Oracle Utilities Testing Accelerator has flow packs available representing the processes implemented by the Oracle Utilities Reference Model. The Oracle Utilities Reference Model was originally released as documentation a few years ago and represents generic common practices for business processes typically experienced by a range of utilities across the world. The models are maintained by a specialist team within the Oracle Utilities development organization and have been popular with partners and customers alike, keen not only to implement Oracle Utilities products but also to benefit from these pre-built business models. Since the inception of the models, it has been a goal to make them more usable and more relevant to a wider Oracle Utilities customer audience. Inspired by the flow and component pack experiences with the Oracle Application Testing Suite, which realized significant cost and risk savings, a strategy was formed to express the models in the Oracle Utilities Testing Accelerator as an exclusive content pack. With the inclusion of the Oracle Utilities Reference Model content, Oracle Utilities Testing Accelerator customers not only have access to pre-built product components but now also have access to flows representing best practices for using Oracle Utilities products. The flows are designed using the features of the latest release of the Oracle Utilities Testing Accelerator, with flow fragments and complete flows, so that customers can get to testing quicker and with lower risk and cost. The release of the Oracle Utilities Reference Model content includes not only the test assets but also comprehensive documentation on how to best use those assets to test more effectively.
Customers who licensed the Oracle Utilities Testing Accelerator are also licensed to use this Oracle Utilities Reference Model pack with the relevant product packs, and can download the pack and related information from Oracle's documentation site, and Patch 30427094 for updates to the Testing API from My Oracle Support. Note: This content is updated regularly, so check the above site for new content and new releases. Note: The first release of the pack will be exclusively based around the Oracle Utilities SaaS Cloud Services, with subsequent releases covering additional processes and on-premise customers. Note: Oracle Utilities SaaS Cloud Service customers will receive the pack in the next relevant patch window in the Oracle Utilities Testing Accelerator instances included in that service. For more information about the Oracle Utilities Testing Accelerator, refer to the Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) from My Oracle Support, which includes an overview and Frequently Asked Questions.


UTA 6.0.0.2.0 Features - Iterative Execution Support

In Oracle Utilities Testing Accelerator 6.0.0.1.0, the server-based execution engine was introduced as the default execution engine, to complement the original Eclipse-based engine. This engine was restricted to a single manual execution of a flow with the related data. In this release, the engine has been expanded to not only support Flow Test Data Sets and Flow Subroutines but also multiple executions, known as Iterative Executions. This capability allows multiple test executions to be run, iterating through the relevant Flow Test Data Sets. This allows testers to test complex scenarios or simply prepare data for other tests. When specifying an execution, there is now an Iterative execution type that allows the number of iterations and the list of available Flow Test Data Sets to use for those iterations to be specified. For more information about Iterative Execution, refer to the Flow Subroutines and Test Data Sets whitepaper (Doc Id: 2632033.1) available from My Oracle Support. For more information about the Oracle Utilities Testing Accelerator, refer to the Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) from My Oracle Support, which includes an overview and Frequently Asked Questions.
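Conceptually, an iterative execution runs the same flow once per Flow Test Data Set, up to the requested number of iterations. The sketch below is a hypothetical model of that behavior; the function and data set names are assumptions for illustration, not the product's API.

```python
# Hypothetical model of iterative execution: run a flow once per
# Flow Test Data Set, up to the requested number of iterations.

def run_iterative(flow, data_sets, iterations):
    """Execute `flow` against each data set in turn, returning the results."""
    results = []
    for data_set in data_sets[:iterations]:
        results.append(flow(data_set))
    return results

# Example flow that just records which data set it was run with.
flow = lambda data_set: f"executed with {data_set}"
print(run_iterative(flow, ["CustomerA", "CustomerB", "CustomerC"], 2))
# ['executed with CustomerA', 'executed with CustomerB']
```

This is why the capability is useful both for testing complex scenarios and for bulk-preparing data for other tests: each iteration is the same flow with different data.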


Understanding the difference between infrastructure and application

One of the most common sets of questions I get from partners and customers is things like "How do I set up load balancing with your products?" or "How do I link my security repository to your products?". People are surprised when my response is a link to the Oracle WebLogic and Oracle Database documentation. This emphasizes the confusion over the role of the infrastructure versus the application. Let me explain. The Oracle Utilities products are applications housed in a container, such as Oracle WebLogic, and store their data in a database, namely the Oracle Database. These are infrastructure, and the Oracle Utilities products should not be confused with them, as the application simply uses their services. Infrastructure provides a series of services that interface to the operating system and provide services to the applications housed within it. There is a role for infrastructure and a role for the applications housed in that infrastructure. These infrastructure services provide the interface into a lot of capabilities that the applications use. Some of the capabilities provided are:

Security. Oracle WebLogic is responsible for the direct security of the applications that it houses. It provides a wide range of connectors to security, such as its own internal LDAP server, external LDAP based repositories, SSO, OAuth2 and other security repositories. Oracle WebLogic provides the authentication and policy services for all the channels used by the Oracle Utilities products.

High Availability. Oracle WebLogic and the Oracle Database support a large variety of architectures, including high availability architectures such as those outlined in Oracle's Maximum Availability Architecture. The Oracle Utilities products can take advantage of this capability for high availability and scalability by being housed in a highly available Oracle WebLogic or Oracle Database cluster.
This includes the ability to support the various hardware and software load balancers supported by Oracle WebLogic and Oracle Database.

JMS. Oracle WebLogic includes an inbuilt scalable JMS capability. This capability can be used by the Oracle Utilities products for inbound and outbound communications, directly or via middleware.

JDBC. Oracle WebLogic includes basic and advanced capabilities when connecting to the Oracle Database via JDBC. The Oracle Utilities products can take advantage of both the basic and advanced capabilities offered by these drivers.

Other capabilities. Oracle WebLogic offers a wide range of additional capabilities that can be utilized by the Oracle Utilities products, including caching, specialist networking, etc.

Remember, the role of the application is to provide functionality for the business; the role of the infrastructure around the application is to support the application and its interfaces to the various capabilities provided by that infrastructure. It is not the responsibility of the product to take over the role filled by the infrastructure. A simple example I use with people is that you don't expect MSPaint to have the capability to change your Windows password. Windows itself provides that capability, and MSPaint provides the user with the capability to compose and manipulate graphics. If you want to remind yourself, always remember that the application is housed in a container; in the case of the Oracle Utilities products that container is Oracle WebLogic. To get to the application, you must go through that container. My recommendation to partners and customers is to learn as much as you can about the capabilities of the infrastructure before learning about the capabilities of the application, to avoid confusion.


UTA 6.0.0.2.0 Features - Flow Test Data Set Support

In Oracle Utilities Testing Accelerator 6.0.0.1.0 we introduced the concept of saved and reusable Component Test Data Sets, which represent reusable test data at a component level. The idea is that as you use a component you can design different reusable test data sets and save those for reuse across flows. This data can be manually entered in the workbench, imported from an environment and/or generated using keyword/tag libraries provided by the product teams. This was very useful and allowed data experimentation to test different scenarios. Based upon feedback from our customers and partners, this capability has been expanded in Oracle Utilities Testing Accelerator 6.0.0.2.0 to allow test data sets to be associated with flows as well as components. This means you can group a series of component test data sets into a flow-based test data set for reuse across instances of that flow. This translates to greater configurability and greater levels of control. For example, you can now design a set of reusable test data that simulates a specific type of customer or type of business scenario you want to test a flow against. This can be saved as a named Flow Test Data Set and reused whenever that flow is executed. This enhancement allows for maintenance of the Component Test Data Sets both at an individual level and when they are used as part of a Flow Test Data Set. Whenever a Flow Test Data Set is used, it is indicated on the user interface. The introduction of Flow Test Data Sets is strategic, to encourage reuse and manage test data in a more cost-effective way within the workbench. For more information about Flow Test Data Sets, refer to the Flow Subroutines and Test Data Sets whitepaper (Doc Id: 2632033.1) available from My Oracle Support.
For more information about the Oracle Utilities Testing Accelerator, refer to the Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) from My Oracle Support, which includes an overview and Frequently Asked Questions.
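The grouping described above can be modelled simply: a Flow Test Data Set is a named collection of Component Test Data Sets, one per component in the flow. The sketch below is a conceptual illustration only; the component and scenario names are invented, not product content.

```python
# Hypothetical model: a Flow Test Data Set groups named Component Test
# Data Sets so the whole collection can be reused across flow executions.

# Component-level test data, maintained individually (names invented).
component_data_sets = {
    "CreatePerson": {"name": "Jane Example", "type": "Residential"},
    "CreateAccount": {"currency": "USD", "customer_class": "R"},
}

# Group the component-level data into a named, flow-level data set.
flow_test_data_sets = {
    "ResidentialCustomerScenario": component_data_sets,
}

# Reuse the same flow data set for any execution of the flow.
scenario = flow_test_data_sets["ResidentialCustomerScenario"]
print(scenario["CreatePerson"]["type"])  # Residential
```

Because the flow-level set merely references the component-level sets, maintaining a Component Test Data Set in one place updates every scenario that uses it, which is the reuse benefit described above.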


UTA 6.0.0.2.0 - Installing the Product

The installation of the on-premise version of the Utilities Testing Accelerator has been improved in versions 6.0.0.1.0 and 6.0.0.2.0 to allow customers to quickly establish product environments. This article is a summary of this process, which is also outlined in the Oracle Utilities Testing Accelerator Installation Guides. Before installing there are a number of prerequisites necessary for the installation:

Database. The database to house the UTA objects must be available and accessible to the server you want to install UTA upon. This can be local or remote, and can be a separate schema or a PDB in the same location as the Oracle Utilities product. UTA supports the UTA Repository as a separate schema existing on a PDB or non-PDB database, or even as a separate PDB on a multi-tenant database. It does not require any database access to the Oracle Utilities products it is being used against.

Java. The Java JDK or JRE must be installed on the machine that will house UTA. A Java support license is included with all Oracle products, including the Oracle Utilities Testing Accelerator.

XWindows. The Oracle installer used by the Oracle Utilities Testing Accelerator uses XWindows. This is the same installer used by the Oracle Database, Oracle WebLogic and a host of other Oracle products, so staff familiar with those products will be familiar with the installer. Note: XWindows is supported natively or via virtualization such as VNC.

The Oracle Utilities Testing Accelerator is available for on-premise customers from Oracle Software Delivery Cloud, which includes the documentation and software. After downloading the software from Oracle Software Delivery Cloud, the following steps are performed:

Create Database Resources. Create the UTA schema user (UTA) and tablespaces in the database to house the UTA Repository objects, using the database tool of your choice as per the installation guide. The installation guide outlines the minimum setup for UTA; alter it as per your site standards.
Remember the database settings used, as the installer will ask you for them later. Note: The SQL below are examples only and can be altered to suit site standards.

Data Tablespace:

CREATE TABLESPACE uta_data DATAFILE '<datafile>' SIZE 500M AUTOEXTEND ON NEXT 200M MAXSIZE 1024M DEFAULT STORAGE (INITIAL 10M NEXT 1M PCTINCREASE 10) PERMANENT ONLINE LOGGING;

where <datafile> is the location and filename in the Oracle database to store the Oracle Utilities Testing Accelerator repository data.

Index Tablespace:

CREATE TABLESPACE uta_idx DATAFILE '<indexfile>' SIZE 250M AUTOEXTEND ON NEXT 50M MAXSIZE 512M DEFAULT STORAGE (INITIAL 10M NEXT 1M PCTINCREASE 10) PERMANENT ONLINE LOGGING;

where <indexfile> is the location and filename in the Oracle database to store the Oracle Utilities Testing Accelerator repository indexes.

Execute the Installer. Within XWindows as an appropriate user (usually the same user used to install Oracle Utilities products), execute the installer command. If this command fails, examine the logs and correct the error. The command to start the installer is as follows:

java -jar UTA_6.0.0.2.0_generic.jar

The Welcome Page is displayed outlining the steps that the installer will follow. Click Next to start the process. For example:

Inventory Location. As per other Oracle products, provide the location of the Inventory directory and the group used for the installer. This will default to the group of the user that initiated the install but can be altered to a group you desire. Click Next after providing the information. For example:

Installation Location. Provide the directory you want to install the product into. This is effectively the UTA home directory. Click Next after specifying a location. For example:

Java Home and Application Details. Provide the following information as requested by the installer:

Java Home. Home location of the Java JDK/JRE for use with Oracle Utilities Testing Accelerator.
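The tablespace statements above can be collected into a single script that a DBA reviews before running. The sketch below generates such a script; the datafile paths, the schema password and the grants on the example CREATE USER statement are illustrative assumptions only, and the installation guide remains the authoritative source for the required setup.

```shell
#!/bin/sh
# Generate the prerequisite SQL as a reviewable script (uta_setup.sql).
# DATAFILE/INDEXFILE paths, the password and the grants are placeholders
# only -- adjust to your site standards per the installation guide.
DATAFILE=${DATAFILE:-/u01/oradata/UTAPDB/uta_data01.dbf}
INDEXFILE=${INDEXFILE:-/u01/oradata/UTAPDB/uta_idx01.dbf}

cat > uta_setup.sql <<EOF
CREATE TABLESPACE uta_data DATAFILE '${DATAFILE}' SIZE 500M
  AUTOEXTEND ON NEXT 200M MAXSIZE 1024M
  DEFAULT STORAGE (INITIAL 10M NEXT 1M PCTINCREASE 10)
  PERMANENT ONLINE LOGGING;

CREATE TABLESPACE uta_idx DATAFILE '${INDEXFILE}' SIZE 250M
  AUTOEXTEND ON NEXT 50M MAXSIZE 512M
  DEFAULT STORAGE (INITIAL 10M NEXT 1M PCTINCREASE 10)
  PERMANENT ONLINE LOGGING;

-- Example schema user; the actual privileges required are listed in the
-- installation guide. The password below is a placeholder.
CREATE USER uta IDENTIFIED BY "ChangeMe#1"
  DEFAULT TABLESPACE uta_data QUOTA UNLIMITED ON uta_data
  QUOTA UNLIMITED ON uta_idx;
GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE TO uta;
EOF

echo "Wrote uta_setup.sql"
```

The resulting uta_setup.sql can then be run with your preferred database tool (for example, sqlplus as a DBA user).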
This will default to the location used for the installer but may reference other Java installations if required.

Application Server Port. The port number to be used by the Oracle Utilities Testing Accelerator. Default: 8082.

Application Administrator User Name. The default user for the administrator. This is the initial user used to create other users. Default: system.

Application Administrator User Password. The password (and confirmation password) for the Application Administrator User Name. Note: This is NOT the database system user. It is a user defined to the Utilities Testing Accelerator product.

Click Next after specifying the information.

Application Keystore Details. As with all Oracle products, the Oracle Utilities Testing Accelerator is installed in secure-by-default mode. For this to occur, a default keystore needs to be generated as part of the process. The information provided generates a unique key used by UTA for communications and encryption. Therefore the following information, used by the keytool utility for X.500 name generation via the RFC 1779 standard, needs to be provided:

Common Name. Name of the organization, machine name (for machine-scoped installs) or company root website name. For example: utility.com.

Organization Unit. Name of the unit associated with the use of Utilities Testing Accelerator.

Organization Name. Name of the company (not the web site name).

City. Name of the suburb or city.

State. Name of the state.

Country Code. The ISO 3166 two-character country code. Default: US (for USA). For other codes, refer to the ISO standard.

Keystore Password. Password (and confirmation) to secure the keystore. This is used by the product to access the keystore.

Note: It is possible to replace the keystore after installation if your site has a company keystore. Refer to the UTA documentation for post-installation steps on how to replace the default keystore and use advanced keystore attributes. Click Next after specifying the information.

Target Database Connection Details.
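To see how those keystore prompts fit together, the sketch below assembles them into an RFC 1779 distinguished name of the kind keytool uses. The field values are invented examples, and the commented-out -genkeypair invocation (with the uta alias and utaKeystore.jks name from this article) is only an approximation of what the installer does for you; it is not the installer's actual internal command.

```shell
#!/bin/sh
# Map the installer's keystore prompts onto a keytool distinguished name
# (RFC 1779). All field values below are examples, not recommendations.
CN="utility.com"        # Common Name
OU="Testing"            # Organization Unit
O="Example Utility Co"  # Organization Name
L="Redwood City"        # City
ST="California"         # State
C="US"                  # ISO 3166 country code

DNAME="CN=${CN}, OU=${OU}, O=${O}, L=${L}, ST=${ST}, C=${C}"
echo "$DNAME"

# The installer runs key generation for you; a comparable manual command
# (an assumption, shown for illustration only) would be:
# keytool -genkeypair -alias uta -keyalg RSA -keysize 2048 \
#   -dname "$DNAME" -keystore utaKeystore.jks
```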
Provide the details of the database you want to use for the installation of the UTA Repository. The following information needs to be provided:

Database Host. The host name of the database to house the UTA Repository.

Database Port. The listener port number of the database to house the UTA Repository.

Database Service Name. The PDB name or database service name of the database to house the UTA Repository.

Database Administrator User Name. The DBA user to be used to install the UTA Repository objects. This user MUST have DBA access to create the objects.

Database Administrator Password. The associated password (and confirmation) for the DBA user.

Application Database Schema Password. The password (and confirmation) for the UTA schema user created in the prerequisite steps.

Click Next after specifying the information.

Installation Summary. A summary of the information provided is presented prior to executing the installation. Click Install to start the installation. For example:

Installation Progress. The installation progress is shown visually. If any step errors, refer to the log outlined in the previous step, correct the problem and restart. An example of the Installation Progress is shown below.

Installation Complete. A dialog will indicate the success or failure of the installation process with a link to the log (if desired). Click Finish to close the installer. For example:

After finishing the install, it is possible to start UTA using the following commands: Navigate to the home directory specified on Page 3 of the installation (Oracle Home) as the user that installed UTA, then use the ./startUTA.sh command to start UTA.

Manual Post Installation Steps

The default certificate generated by the installation is limited in its use. It can be replaced or changed using the following process: After logging into UTA, use the browser to view and export the certificate to your local drive.
For example:

Alter the certificate as necessary using your site's preferred certificate editor. Copy the file to the UTA server and execute the following command to set the certificate:

keytool -import -alias <aliasname> -keystore <certificate store location> -file <certificate location>

where:

<aliasname> - The alias in the keystore to replace. For UTA this value is typically uta.
<certificate store location> - The store location, which is the UTA home directory and the keystore name utaKeystore.jks.
<certificate location> - The location of the file you altered.

Note: You will be prompted for the keystore password when executing this command.

Note: Other cacerts can be found in your JDK location under the $JAVA_HOME/jre/lib/security subdirectory.

Common Operations

The following commands are available from the UTA home directory:

./startUTA.sh - Starts the Utilities Testing Accelerator
./stopUTA.sh - Stops the Utilities Testing Accelerator
./startUTARuntimeExectutor.sh - Starts the Server Execution Engine
./stopUTARuntimeExectutor.sh - Stops the Server Execution Engine

UTA Eclipse Client

Note: Use of this client is optional as server execution is the preferred method of execution. The UTA Eclipse client is located in the UTA home directory and is named UTA_Client.zip. Download it from this location and refer to the installation guide for additional instructions if your site wants to use this client.
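Because ./startUTA.sh returns before the application server has finished booting, a small helper that polls the configured port (Default: 8082) can confirm when UTA is ready. This is a sketch using bash's built-in /dev/tcp redirection; the host, port and timeout are assumptions to adjust to your install.

```shell
#!/bin/bash
# Poll a host:port until it accepts TCP connections, or give up after
# a timeout (in seconds). Uses bash's /dev/tcp special files, so no
# extra tools are required on the server.
wait_for_port() {
  host=$1; port=$2; timeout=${3:-60}
  i=0
  while [ "$i" -lt "$timeout" ]; do
    # The subshell opens (and auto-closes) a probe connection.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Example usage after installation:
# ./startUTA.sh && wait_for_port localhost 8082 120 \
#   && echo "UTA is accepting connections"
```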


UTA 6.0.0.2.0 Features - Flow Subroutines

One of the most innovative new capabilities introduced into the Oracle Utilities Testing Accelerator (6.0.0.2.0 and above) is the support for Flow Subroutines. The key concept to understand is that it is now possible, using this capability, for any flow to be reused within another flow, yet still retain its independence. This is very useful as it allows test engineers to model repeatable processes and reuse those processes across test assets more effectively. The concept has a key capability in the form of the Subroutine Interface. This takes a flow and defines the interface and the amount of data sharing within the flow it is included in. You can define the inputs and outputs of the flow in this interface. More importantly, it allows this specification to be optimized for the flow it is to be embedded in. This opens up a lot of possibilities. You can take a common flow, test it independently, and then include it in any way in other flows. It can be the initial part of another flow, somewhere inside the flow or at the end of the flow. In each style you can define how that subroutine flow interacts with the flow it is embedded in. The most interesting part of this enhancement is that flows do not have to be designed to be reused. They can be designed to be used as-is and then reused as needed. This means that as test assets are built you can choose to design them as you wish and use subroutines where appropriate. The end execution is the same, with the same information processed, whether a flow is embedded as a subroutine or simply included in the flow. Customers familiar with Oracle Application Testing Suite will be familiar with Component Groups. These were reusable groups of components you placed into flows, designed for reuse. When we built the Oracle Utilities Testing Accelerator on the design of the Oracle Application Testing Suite, we decided not to implement Component Groups.
We found the idea that you had to design an asset that could only be used one way limiting, so we decided to add Flow Subroutines as an alternative. This has the same reuse concept as Component Groups but leaves the assets reusable and independent. We felt it was the best of both worlds. When we opened up Oracle Utilities Testing Accelerator to the Oracle Utilities Reference Model, working with that team made the concept even more relevant. The Oracle Utilities Reference Model content packs use Flow Subroutines to support a wider range of use cases and utility types than ever. The use of Flow Subroutines is optional, but we have extended the Oracle Utilities Testing Accelerator workbench to not only allow flows to be built, but also their interface to be defined when using them as a subroutine. Let me illustrate the power of this capability using an example. Here we have three generic simple flows. As you can see, the grouping of Component A, Component B and Component C is common to each flow. They are in different places in each flow but the sequence is the same. This is a perfectly normal situation and you can execute these flows as per this configuration. The grouping of Component A, Component B and Component C might represent a common sequence of events you repeat in different processes. The disadvantage of repeating this group of components is that if you want to change it across the flows, you would have to edit it a number of times. With the introduction of Flow Subroutines, you would create a new flow holding those common components. This flow can be executed separately to test the sequence and becomes a flow in its own right. The new flow would look something like this: Now you would edit the original flows to replace the original sequences with this new flow as a flow subroutine. You will notice that as part of replacing the components with a flow we needed to add a flow subroutine interface.
This happens when you include a flow into another flow, and it defines the interface (what data is passed between flows) based upon the context. In the first flow example, the flow subroutine is at the start of the flow, so you define the output from the subroutine exposed to the rest of the flow. In the second flow example, the flow subroutine is in the middle of the flow, so you need to define the inputs and outputs in the context of the flow. In the third flow example, the flow subroutine is at the end of the flow, so you just need to define the data input into the subroutine. As pointed out, any flow can be used as a subroutine within another flow, and the flow subroutine interface defines the interactions between the flow and its subroutines depending on the context of the test. For more information about Flow Subroutines, refer to Flow Subroutines and Test Data Sets (Doc Id: 2632033.1) available from My Oracle Support. For more information about Oracle Utilities Testing Accelerator, refer to Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) from My Oracle Support, which includes an overview and a Frequently Asked Questions section.


Utilities Testing Accelerator 6.0.0.2.0 Available

Oracle is delighted to announce that the Oracle Utilities Testing Accelerator (UTA) Version 6.0.0.2.0 is now available for licensed customers to use. This release is significant as it is considered a foundation release for customers who wish to use the upcoming Utilities Reference Model (URM) content. This release realizes one of the major goals of the Utilities Testing Accelerator by providing reusable content with the tool to greatly reduce the risk and cost of implementing Oracle Utilities products. This contrasts with the premise of traditional automation tools, where the risk and cost of building and maintaining content still rest with the testers. This release contains the following major enhancements:

Foundation Release. This release is required as a minimum for customers considering using the Oracle Utilities Reference Model UTA content packs. It is highly recommended that UTA customers upgrade to this release prior to using the Oracle Utilities Reference Model content. A subset of this pack is shown below: Note: The Utilities Reference Model content will not work on older versions of the Oracle Utilities Testing Accelerator. The initial release of the Oracle Utilities Reference Model pack is targeted at Oracle Utilities SaaS Cloud Services only. In subsequent releases of the pack other variations, including on-premise releases, will be supported.

Flow Test Data Set Support. In prior releases, it was possible to associate test data set content with individual components within a flow for reuse. In this release, test data sets can also be associated with flows as flow data sets. Once established, the flow test data sets can be managed and used as a set at any time. This reduces data management at a flow level and promotes reuse. This is a foundation feature for the test planning capabilities planned for future releases.
For example: For more information about Flow Test Data Sets, refer to Flow Subroutines and Test Data Sets (Doc Id: 2632033.1) available from My Oracle Support.

Iterative Flow Support. In some test scenarios, multiple flow executions are needed to populate enough data for additional tests. In this release, it is now possible at the Flow Group level to iterate a flow using different data banks to populate data in a single execution event. This capability is used in association with Flow Set Support to easily target relevant data with each iteration. For example:

Flow Subroutine Support. To promote reuse, it is now possible to include a flow within a flow, in a similar manner to components. This allows flows to model specific micro-level business processes and then be incorporated into longer-duration processes to reduce testing costs and encourage reuse. Customers familiar with Oracle Application Testing Suite might remember Component Groups, which had a similar concept. Oracle Utilities Testing Accelerator does not support Component Groups, as we wanted to take advantage of the concepts and capabilities of flows rather than groups of components. For example, flows can act standalone and/or as subroutines and can be independently executed. The Oracle Utilities Reference Model content fully exploits this capability. It allows an implementation to treat a flow as a subroutine with a configurable Subroutine Interface, unique to each reuse. This means that flows can be standalone and/or reused wherever appropriate, with a configurable interface to maximize reuse. For example: For more information about Flow Subroutines, refer to Flow Subroutines and Test Data Sets (Doc Id: 2632033.1) available from My Oracle Support.

Improvements to the User Experience. With each release of the product, the user interface is improved based upon feedback from existing customers and the direction of our UX design team. In this release the following has been updated:

Updated Oracle Jet Library.
The rendering engine and libraries used for the product have been updated to the latest Oracle Jet release to offer greater browser compatibility, address user experience inconsistencies and offer new capabilities.

Dynamic Zone Sizing. To support a wider range of form factors, user zones can be dynamically resized at run-time.

Zone Hiding Support. To maximize the effectiveness of flow and component maintenance, it is now possible to hide the flow/component tree zones to maximize the canvas.

Improved Error Messages. As part of the standardization, messages from the product are progressively being migrated to a new panel-style interface. For example:

Inbuilt Purge Capability. With the inclusion of additional data capabilities, such as test data sets at the component and flow level as well as results for all executions, data retention needs to be managed. This release includes a generic purge capability for test data sets and execution information to keep the database manageable. This is the first in a series of inbuilt data management capabilities planned for the product. For example:

Improved Categorization of Flows. With the introduction of the new capabilities and the Utilities Reference Model, the module system has been extended to allow users to manage their own modules to aid in the organization of test assets. A Default module is now shipped with all packs to act as a default capability. In this release the implementation of flow modules has been restricted to a single level. This is planned to be expanded in future releases. For example:

Improved Email Results Component. For backward compatibility, Oracle Utilities Testing Accelerator supplied an optional component to email a summary of the results. In this release this component has been altered to provide additional information in the email, including performance metrics, request payload and response payload.
Use of this component is optional, as all this information is also available from the results section of the Oracle Utilities Testing Accelerator Workbench and the associated optional Eclipse plugin.

Improved Results Reporting. The results user experience has been revamped in anticipation of the Test Planning capabilities planned for future releases. This interface makes the results easier to understand, with more summary screens to reduce the need to look into details. For example:

There are additional enhancements and fixes from previous versions of the product. For a full list refer to the provided documentation. For on-premise customers this release is available from Oracle Software Delivery Cloud; cloud customers will receive this release and content automatically as scheduled. Content Packs are available to on-premise customers via My Oracle Support. For more information about Oracle Utilities Testing Accelerator, refer to Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) from My Oracle Support, which includes an overview and a Frequently Asked Questions section.


Configuring Batch Level Of Service

One of the most interesting features in the Oracle Utilities Application Framework (4.3.0.6.0 and above) is the Batch Level Of Service feature. This feature allows an algorithm to be configured to return the level of service based upon any calculated metrics. The service returns a code that describes whether the metric used met the configured target and, if not, the reason it did not (as configured in the algorithm). The Batch Level Of Service algorithm is an information-only algorithm and is not executed as part of the batch processing for the batch control it is attached to. It is called by monitoring functions and portals to display the level of service. Typically this algorithm will compare the last execution of a process against a specific target metric and then return whether the metric exceeded some target value or not. The use of this algorithm is simple:

Identify Algorithm Types. The code behind the Batch Level Of Service is contained in a Batch Control - Level Of Service algorithm type. The product supplies a number of these algorithm types to compare the last execution of a batch control against elapsed time, error count or throughput metrics. You can write your own in Groovy, scripting or Java (the latter for non-cloud implementations) to set up your own targets. The base algorithms take the worst-performing thread in a batch execution to assess against the metric for the entire execution. For example, if the metric is elapsed time, then the longest elapsed time of any thread in an execution is used as the basis for assessing the metric. For example:

Configure Algorithm for Target. For the algorithm to work, you must set a target value. This is done with an Algorithm entry for each distinct target value. For example, you might want to set a target of 1 hour of elapsed time. You would create an algorithm entry using the F1-BAT-RTLOS base algorithm type with a target value of 3600 seconds (1 hour).
That algorithm entry can be reused for ANY batch process you want to assess against that 1-hour elapsed time goal. For example:

Attach Algorithm to Batch Control. For each batch control you want to check with the Level Of Service, attach the configured algorithm that has the appropriate target metric and target value. For example:

You are now ready. Any portal that uses the Level Of Service will now display the level of service. For examples of monitoring portals, refer to Building Portals, Building A Level Of Service Algorithm, the base Level Of Service algorithms and Calling Batch Level Of Service manually for additional examples and advice.
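Because the level of service is also exposed to external monitoring, a common integration pattern is to translate the returned status into a monitor-friendly exit code (Nagios-style: 0 OK, 1 warning, 2 critical). The sketch below assumes status values of Normal, Warning, Error and Disabled; verify the actual lookup values and monitoring payload on your release before relying on this mapping.

```shell
#!/bin/sh
# Translate a Batch Level Of Service status into a Nagios-style exit
# code for an external monitoring tool. The status strings here are
# assumptions -- check the values returned on your release.
los_to_exit_code() {
  case "$1" in
    Normal)   return 0 ;;  # OK: within target
    Warning)  return 1 ;;  # WARNING: warning tolerance breached
    Error)    return 2 ;;  # CRITICAL: error tolerance breached
    Disabled) return 0 ;;  # no target configured; treat as OK
    *)        return 3 ;;  # UNKNOWN status
  esac
}

# Example: los_to_exit_code "Warning"; echo $?   # prints 1
```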


Transitioning to the Cloud Mindset

A few months ago I was asked to educate a few employees on how to transition their mindset from an on-premise implementation to take advantage of the Oracle Utilities SaaS Cloud Services. They wanted to understand how they could think differently to take advantage of all the capabilities of the cloud to realize risk and cost savings. Most people assume that the cloud implementation of an Oracle Utilities product is just an installation of the product. While technically this is correct, this view short-changes the value of the cloud and reduces the cost and risk savings. They wanted to understand how they could change their implementation philosophy so that they could best take advantage of the service. Some of the advice I offered is obvious, but the context makes it more relevant. Here is a summary of what I outlined in the session:

Understand the different responsibilities in the cloud. The cloud has variations such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Each of these is understood in terms of platform, but you also need to appreciate them in terms of implementation responsibilities. Whilst IaaS and PaaS are straightforward, SaaS does need some implementation clarifications. In the Oracle Utilities SaaS Cloud, whilst it is a full-featured service, there is an expectation that the implementation team will run the product like they would run their business. For example: All database work, including maintenance of the database, is handled as part of the service. Extension work is still the responsibility of the implementation, and there are tools to manage extensions in a lower-cost and lower-risk manner. The majority of security is handled by the service, but the implementation is responsible for loading and maintaining the security definitions. Batch processes and schedules can be loaded, but the implementation may modify the schedule and is responsible for the scheduling.

Understand the SaaS Solution.
The SaaS Cloud service is not just an installation of the product. It is far more than that. The components in the solution include:

Oracle Utilities Product. Obviously the core part of the Oracle Utilities SaaS Cloud Service is the Oracle Utilities product central to that service. The technology configuration is optimized for multi-channel scalability with optimizations for the high performance, high availability and business continuity features of the Oracle Cloud.

Oracle Utilities Accelerator. The Oracle Utilities SaaS Cloud Service includes an accelerator, which is a set of data and accelerator code unique to each service offering. This allows existing customers to potentially reduce their extension footprint, and offers new customers a potentially accelerated implementation.

Oracle Utilities Testing Accelerator. To support the cloud implementation, the Oracle Utilities Testing Accelerator is provided with content related and unique to the Cloud Service it is attached to. This gives cloud customers the ability to rapidly test each release to keep up with the schedule of releases.

Oracle BI Publisher. The Oracle Utilities SaaS Cloud Service includes an ad-hoc query and reporting capability that allows the data in the service to be reported effectively. The advantage of using BI Publisher is that it offers superior report-writing capabilities as well as resource governance to reduce performance risks.

Oracle Utilities Cloud Service Foundation. In any product implementation, there are processes that the implementation team needs to perform to effectively manage the implementation. This is no different in the Oracle Utilities SaaS Cloud Service. Therefore, the Oracle Utilities Cloud Service Foundation is provided exclusively for the Oracle Utilities SaaS Cloud Service to provide the following management capabilities:

Process Automation. There are tasks that need to be performed by the implementation team on a regular basis.
These have been implemented as Process Flows to give the implementation team traceability of the process. The steps in the process flows have been pre-configured and utilize exclusive cloud adapters to perform routine tasks. The Oracle Utilities Cloud Service Foundation Administrative Users Guide outlines the delivered processes for each cloud release.

Extension Management. Extensions in the cloud are managed using the Configuration Migration Assistant (CMA). The Oracle Utilities SaaS Cloud Service includes exclusive configuration to allow the processing of accelerators, extensions as releases, etc. within this tool. The process is largely automated using Process Automation to reduce risk and cost.

Schedule Management. The Oracle Utilities SaaS Cloud Service includes an inbuilt scheduler, namely the Oracle Scheduler. The Oracle Utilities Cloud Service Foundation includes the user interface to manage and monitor the schedule for implementations.

Conversion Toolkit. The Oracle Utilities SaaS Cloud Service includes a conversion toolkit with a staging schema and a set of conversion-related batch processes that implement both "big bang" and "incremental" conversions. The principles are based upon the conversion capability which has been used across all traditional on-premise implementations.

Operations Capability. The Oracle Utilities SaaS Cloud Service includes a set of operational capabilities, including environment management and monitoring, to allow implementations to perform routine operational tasks. Most of these tasks are automated using the Process Automation capabilities to reduce risk and cost.

Oracle Cloud API. As with all Oracle Cloud Services, there is a standard REST-based API set to interface to the Oracle Utilities SaaS Cloud Service, including its related components. This allows flexible integration scenarios to be implemented.

Oracle Identity Cloud Service (options).
The identity of users in the Oracle Utilities SaaS Cloud Service can be managed by a variety of security configurations:

Embedded Identity. By default, a pre-built Oracle Identity Cloud Service can be included in the service, which can be used exclusively to manage all identity for the Oracle Utilities SaaS Cloud Service.

Existing Oracle Identity Cloud. If the customer already takes advantage of other Oracle Cloud Services, they can reuse their existing Oracle Identity Cloud Service to manage identity for this service.

Federated Security. If the customer has an external identity solution (external to the Oracle Cloud) or wants to use an on-premise identity solution, there is a federated option (using OAuth2).

Database for all components. All of the above components require database-level storage. All of the databases are housed on Oracle Exadata servers to maximize performance and data management options.

Oracle Object Storage Cloud. All implementations require data storage for interfaces and integration. The Oracle Utilities SaaS Cloud Service includes a flexible amount of raw storage.

Environments On The Cloud. One of the major differences with the Oracle Utilities SaaS Cloud Services is the quick provisioning of new environments to support the implementation. There are three classes of environment:

Development. These are a set of environments where extension work is performed. This can be one environment or extended to other environments. This class has fewer restrictions but is sized smaller than other classes of environments to keep risk and costs low.

Testing. This class of environments is provided to verify the configuration and functionality prior to use in production. These environments can be used for a cross-spectrum of testing or related activities (including training if necessary). One of the big advantages of the Oracle Utilities SaaS Cloud is that it is possible to provide a production-sized test environment.

Production.
This is a single environment class reserved for production use. This environment includes additional high availability and business continuity capabilities. Each environment is a complete solution, isolated for use for that class of activities:

Oracle Utilities Product. The Oracle Utilities product at the basis of the service, including the Oracle Utilities Application Framework to extend the service.

Oracle Utilities Accelerator. A cloud-exclusive accelerator, preloaded upon provisioning, to accelerate the implementation of the service.

Oracle BI Publisher. An optimized business reporting and query tool for the service. This can be used to build reports or simply query data within the service, with in-built resource governance.

Oracle Utilities Testing Accelerator. An inbuilt testing solution for the service with content optimized for the service. This component is not installed on Production environments.

Oracle Utilities Cloud Service Foundation. A cloud-exclusive set of operational and implementation capabilities reserved for use with each Oracle Utilities SaaS Cloud Service, for example conversion capabilities, operational workflows, etc. This aspect is typically used by personnel administering the service.

Oracle Utilities Databases. A set of databases, running on Oracle Exadata hardware, to support all the products in the service.

Oracle Object Storage. Raw storage is provided via the Object Storage Cloud. The Oracle Utilities SaaS Cloud Service is pre-configured to use this service.

Oracle Utilities Data Connect. Data in and out of the service can be defined as part of the Data Connect capability built into the service.

Oracle Utilities Cloud API. The Oracle Utilities SaaS Cloud Service is fronted by a REST-based API to allow integration and greater flexibility in implementation options. For example:

Advanced Security.
Security is one of the most important aspects of the Oracle Cloud, with superior cloud infrastructure security as well as advanced security configuration within the Oracle Utilities SaaS Cloud Service itself. One of the fundamental security practices is managing who accesses your cloud service by managing identity. Oracle has identity products traditionally used on-premise to manage identity in a centralized, cost-effective way. These tools are now available as a fundamental building block in the Oracle Utilities SaaS Cloud Service. To meet the diverse needs for managing identity, the service offers a number of identity possibilities: Embedded Identity. The Oracle Utilities SaaS Cloud Service can include an embedded identity solution to use exclusively with the service. This option is available to customers who only have one service on the Oracle Cloud. Shared Identity. If the customer already owns another Cloud Service and uses the Oracle Identity Cloud Service, it is possible to connect the Oracle Utilities SaaS Cloud Service to that instance to manage identity and take advantage of existing investments. Federated Identity. If the customer already has an external identity solution or wishes to use a security repository external to the Oracle Cloud, then the Oracle Utilities SaaS Cloud Service can be configured to support identity federation. Understanding the advantages of the Cloud. There are unique advantages of the cloud that you must be aware of to fully exploit the capability: Hardware On Demand. Given the fluctuations in demand for hardware during typical upgrade life-cycles and business volume fluctuations, having the ability to tap into hardware resources quickly is a huge benefit of the cloud. On-premise implementations can have long lead times, leaving the project at risk. Self Service Capabilities. The cloud implementation includes a set of native cloud tools and tools designed specifically for the services to reduce costs through self service. Scheduled Patching and Upgrades. 
One of the big advantages of the cloud is that the schedule for patching and upgrades is known beforehand and has been optimized for each service. This greatly reduces costs and risks. Transitioning Your Skills. One of the last things I talked about was how to transition an on-premise skill set to an on-cloud skill set. Here were my tips: Take advantage of the online information available with the service. Oracle supplies additional online documentation with each service to help you understand how to manage your service with respect to your business, as well as patch and upgrade information for cloud customers. Change your extension mindset to a reuse mindset. Whilst it is possible to transition your existing extensions to the cloud using various techniques, a huge cost saving is to take advantage of the base and/or cloud accelerator functionality to reduce your extension risk and costs. Understand every aspect of your service. Understand what you are getting with the service to know what you can and cannot do. Do not worry about the things you do not need to worry about. This is my number one piece of advice. The service provides a lot of capability and reduces risk by shifting responsibility for some processes to Oracle. For example, Oracle manages performance and backup for you. You don't have to test for those as they have already been tested for you. By taking advantage of what you can, and not worrying about the things handled by Oracle, you can go a long way toward realizing that lower risk and lower cost.


Using UTA to experiment with Business Processes

One of the use cases I pointed out in my last blog post, UTA Beyond Testing, was to assist in transitioning extensions (i.e. customizations) to base functionality. Customers with a large number of extensions, perhaps written when they originally went live on an older version of the product, are keen to reassess each extension to see if they can replace it with newer base functionality. Reducing the need for an extension has huge cost and risk reduction benefits and also allows you to take advantage of core functions. The issue in the past was finding a reliable means of assessing impact and testing the change. Somehow you need to show that the business process you had before would not change (or would change only in a small way) when you replaced an extension with base functionality. You needed the confidence that your business can operate without the need for the extension. This is where the Oracle Utilities Testing Accelerator (UTA) comes into play. The key concept in the tool is the modelling of business processes using flows to represent the business process and the components, base and custom, forming the basis of that process. When using UTA, you can model your business process (as is, or how you want it to be) in the tool and experiment with configuration and code changes to remove extensions and replace them with base code, reflecting those changes as necessary. You can compare the results of tests at various levels to make the go/no-go decision on migrating off an extension. There are three outcomes of using UTA in this fashion: You get the same results. Replacing the extension with the base did not impact your results, so it is safe to move to the base component. This is the best case scenario. You get different results but they are close. In this scenario, the results are different but close. This situation may require a change to the business process to accommodate the difference, or retaining the extension. You get completely different results. 
This scenario is possible and may be used to reinforce the need for the extension, or at least to work out how the difference can be resolved. The advantage of using UTA is that you can set the pace of your migration to match your risk tolerance and your deployment plans. For more information about the Oracle Utilities Testing Accelerator, refer to the Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) available from My Oracle Support. This contains an overview, FAQ and datasheet for the product.
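The go/no-go decision described above ultimately comes down to comparing the results of a flow run with the extension against a run with the base component. As a minimal sketch of that triage logic (the field names, tolerance and classification rules here are hypothetical illustrations, not part of UTA itself):

```python
# Hypothetical triage of flow results: compare the outputs of a business
# process run using an extension against a run using base components, and
# classify the outcome as 'same', 'close' or 'different'.
def classify_migration(extension_results: dict, base_results: dict,
                       tolerance: float = 0.01) -> str:
    """Numeric fields are compared within a relative tolerance; all other
    fields must match exactly. The thresholds are illustrative only."""
    if set(extension_results) != set(base_results):
        return "different"          # the base run produced different fields
    close = False
    for field, expected in extension_results.items():
        actual = base_results[field]
        if isinstance(expected, (int, float)) and isinstance(actual, (int, float)):
            if expected == actual:
                continue
            denom = max(abs(expected), abs(actual), 1e-9)
            if abs(expected - actual) / denom <= tolerance:
                close = True        # within tolerance: review the difference
            else:
                return "different"
        elif expected != actual:
            return "different"
    return "close" if close else "same"
```

In practice the comparison would be driven by the flow's component schemas, and the tolerance would reflect what your business considers an acceptable difference.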


UTA Beyond Testing

As one of the product managers of the Oracle Utilities Testing Accelerator, I am constantly amazed at the different ways customers and partners are using the product to go beyond simple testing of business processes. Remember, the Oracle Utilities Testing Accelerator is essentially about modelling business processes and testing them against the product. Here are a few of the outcomes the Oracle Utilities Testing Accelerator has been used to achieve: Assessing Change via Patch/Fix Testing. One of the most interesting is assessing the impact of a patch or fix against your business process. Checking this using UTA can ascertain whether the implementation of a patch or fix will adversely affect the success of your business process. Extension Release Testing. Partners release changes to extensions on a regular basis. Assessing those changes for impacts to your business processes is also critical. Migration to Base from Extensions. This use case is particularly exciting. The idea is that after you test your business processes with your extensions, you can use UTA to progressively substitute base components (or accelerator components) for extensions to assess whether you can migrate to that capability and replace an extension. Reducing the use of extensions reduces both cost and risk. Even if the base substitution fails to successfully replace your extension as a result of testing using UTA, it will serve to help assess how far you need to go to adopt the base function, or at least justify the use of your extension. Preparing for the Oracle Cloud. As part of the remediation to the cloud, assessing extensions migrated to cloud-supported technologies against their originals is critical to reducing the risk of migrating to the cloud. UTA can be part of that migration to reduce your costs and risks. 
Note: The above use case can also apply to on-premise applications wishing to reduce their risks and costs by taking advantage of the capabilities of the Oracle Utilities Application Framework used for the cloud implementation and delivered in on-premise releases, such as ConfigTools, File Adapter etc. Optimizing your Business Process. One of the key features of UTA is that we collect a lot of additional data when executing. Some of this data is used internally by the product and some of it has additional uses. When we designed the results, we included capabilities to track the performance of every call as a byproduct. One of the amazing use cases we have seen is analyzing the call times for each part of a business process, finding bottlenecks in the process and re-optimizing them. This is especially relevant in the cloud, as the Oracle Utilities SaaS Cloud Services include at least one production-sized testing database. This means you can see the effectiveness of your business process and make adjustments to optimize it. These are all exciting uses of the tool, and they have inspired us to make these and more use cases both easier to use and more effective. We are looking at adding to and modifying the roadmap to fully exploit these new use cases and go beyond the testing capabilities we already have. For more information about the Oracle Utilities Testing Accelerator, refer to the Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) available from My Oracle Support.
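To illustrate the bottleneck analysis use case, the per-call timings captured in the results could be mined along these lines (a sketch only; the step names and timings are invented, and UTA's actual result format will differ):

```python
# Illustrative bottleneck analysis over per-step call times captured during
# a flow execution. The step names and timings below are invented examples.
def find_bottlenecks(step_times_ms: dict, share: float = 0.5) -> list:
    """Return the smallest set of steps, slowest first, that together
    account for at least `share` of the total elapsed time."""
    total = sum(step_times_ms.values())
    picked, running = [], 0.0
    for step, ms in sorted(step_times_ms.items(), key=lambda kv: -kv[1]):
        picked.append(step)
        running += ms
        if running / total >= share:
            break
    return picked

timings = {"Create Person": 120, "Create Account": 150,
           "Create Service Agreement": 900, "Complete Bill": 300}
```

Pointing this kind of analysis at each flow run makes it easy to see which component dominates the elapsed time and is therefore the best candidate for re-optimization.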


Business Process Based Testing

Over the last 30 years of my career, I have been involved in lots of projects across many industries. Over that time I have developed a set of techniques and principles that I tend to reuse over and over to help stay successful. At every opportunity, I strive to learn more and hone those skills in an effort of constant improvement. I learnt very early on in my career, thanks to a collection of great mentors, that locking down some fundamental principles will always steer me away from making wrong decisions. Though one of my mentors once commented, "I have learnt so much from my mistakes that I look forward to making more mistakes in the future". One of the fundamental principles I have been working with over the last few years is around testing. As the late Stephen R Covey once stated, taking an idea and boiling it down to its fundamental base principles allows you to understand that idea and also come up with successful approaches to it (I am paraphrasing him). Working with the Oracle Utilities Testing Accelerator over the last few years, I have had an opportunity to focus on the fundamentals of testing. I work with this fundamental testing principle: "Testing is verifying that your business process will work (or not work) with your configuration and extensions of the product with your data". This recognizes that implementing a product is about the automation of process. In the days before computers (yes, I am that old), a lot of that was paperwork. Computers, when they arrived, automated paperwork. Now, once you understand that your product should represent your business process, test automation needs to verify that the process works (or not) using various data scenarios. Once you understand that test automation must represent your business process, a few interesting possibilities open up with this style of testing: A Working Business Process. Obviously the most important part of the strategy is that your business processes are proven to work with your data. 
This is the primary focus of business process testing tools. Timings of Business Processes. How long it takes to complete a business process becomes important. This can translate to call times and the number of staff you need to complete an expected volume in a specified time. By modelling and testing your business process, you can see which components of that process take what time. This allows optimizations and experimentation around "what ifs" for changes to the process. This is especially important in the cloud to ensure you have purchased enough capacity, and it is very achievable in the Oracle Utilities SaaS Cloud with the provision of production-sized testing environments as part of the service. Migration to Base Technology. Extensions typically address gaps in a base product when implementing a business process. Over time, products are enhanced and may actually support what you implemented in an extension originally, or get close to what you need (with some business process changes). Using the base reduces risk and costs, and business process tools can be used to explore where extensions can be replaced with base functionality. Carefully substituting base components where extensions have been used can assess whether moving to base for that situation is possible. Risk Assessment for Change. One of the most innovative ways of using a business process testing platform is to assess the impact on any business process of any change. That change may be a single patch, patch sets, extension releases or upgrades. You can quickly run your business processes after a change is implemented to assess the impact of that change on your business processes. Implementing Blue/Green Testing. One of the big advantages of the cloud has been the opportunity of implementing blue/green style deployment testing to keep up to date with regular changes. 
Whilst this seems to be exclusive to the cloud, due to the advantages of that platform in terms of readiness of infrastructure, partners have started implementing it on-premise to prepare the business for rapid change and also take advantage of new functionality in newer versions. The list above is just some of the key advantages of a business process testing approach over more traditional automation that supports spot testing. Saving costs and reducing risk in testing means you can test more and be more confident that what you have implemented will ultimately implement your business processes. The Oracle Utilities Testing Accelerator is a business process testing tool optimized for Oracle Utilities products on-premise and in the Oracle Cloud. It is based upon the popular Oracle Application Testing Suite, which was implemented at thousands of eBusiness Suite customers to save up to 90% of testing costs and significantly reduce risk. For more information about the Oracle Utilities Testing Accelerator, refer to the Oracle Utilities Testing Accelerator Overview (Doc Id: 2014163.1) available from My Oracle Support.
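The translation from process timings to staffing mentioned above is simple arithmetic. As an illustration (all numbers here are hypothetical, not drawn from any real implementation):

```python
import math

# Back-of-the-envelope staffing estimate from business process timings:
# how many staff are needed to complete an expected volume of a process
# within a given window. All numbers used with this are hypothetical.
def staff_needed(process_minutes: float, volume: int, window_hours: float) -> int:
    """Staff required to run `volume` instances of a process that takes
    `process_minutes` each, finishing within `window_hours`."""
    per_person = (window_hours * 60) / process_minutes  # instances per person
    return math.ceil(volume / per_person)
```

For example, a 12-minute process with 2,000 expected instances in an 8-hour day needs 50 staff; shaving the process to 10 minutes drops that to 42, which is exactly the kind of "what if" experimentation the timing data supports.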


Updated Whitepapers

In line with the release of Oracle Utilities Application Framework V4.4.0.2.0, a series of updates to the technical whitepapers have been completed to reflect the new version. This includes updated information for the new version as well as updates for other versions based upon feedback from the field. The following whitepapers have been updated and are available from My Oracle Support. Web Services Best Practices (Doc Id: 2214375.1). Updates for REST and the latest changes to the Web Services capabilities. Technical Best Practices (Doc Id: 560367.1). Generic best practices from the field. Software Configuration Management Series (Doc Id: 560401.1). A series of documents covering code management practices for on-premise and cloud implementations. The series includes: Concepts. General concepts and introduction. Environment Management. Principles and techniques for creating and managing environments. Version Management. Integration of version control and version management of configuration items. Release Management. Packaging configuration items into a release. Distribution. Distribution and installation of releases across environments. Change Management. Generic change management processes for product implementations. Configuration Status. Status reporting techniques using product facilities. Defect Management. Generic defect management processes for product implementations. Implementing Fixes. Discussion of the fix architecture and how to use it in an implementation. Implementing Upgrades. Discussion of the upgrade process and common techniques for minimizing the impact of upgrades. Preparing for the Oracle Cloud. Discussion of techniques designed to help transition from on-premise, IaaS or PaaS implementations to Oracle Utilities SaaS offerings. Performance Troubleshooting Guideline Series (Doc Id: 560382.1). An updated series of whitepapers on the techniques, capabilities and metrics available for each layer. This series covers: Concepts. 
General concepts and performance troubleshooting processes. Client Troubleshooting. General troubleshooting of the browser client with common issues and resolutions. Network Troubleshooting. General troubleshooting of the network with common issues and resolutions. Application Server Troubleshooting. General troubleshooting of the application server with common issues and resolutions. Server Troubleshooting. General troubleshooting of the operating system with common issues and resolutions. Database Troubleshooting. General troubleshooting of the database with common issues and resolutions. Batch Troubleshooting. General troubleshooting of the background processing component of the product with common issues and resolutions. These documents cover multiple versions of the Oracle Utilities Application Framework and all Oracle Utilities Application Framework based products.


REST URI Customization

One of the key changes in the latest Oracle Utilities Application Framework (4.4.0.2.0) is the ability to influence the URI used to access the REST APIs in the product. This is a combination of some key environmental parameters and parameters on the service itself. The URI takes the form:

{http_verb} https://{base_uri}/{owner}/{category}/{service_uri}/{operation_uri}/{data}

Where:

{http_verb}. Sets the verb used to access the service. Oracle Utilities Application Framework supports GET, POST, PUT and PATCH. POST should be used for Business Service and Service Script based operations.
{base_uri}. Base level URI. There are two possible values: Autogenerated (the default for most installs), which uses https://{host}:{port}/{context}/rest/apis/, or the value of the CLOUD_LOCATION_F1_BASE_REST_URL environment variable.
{owner}. Owner code for the record; cm is used for custom services. Uses the F1-RESTOwnerURLComponent Extended Lookup.
{category}. Resource category. Uses the F1-RESTResourceCategory Extended Lookup.
{service_uri}. Service level URI. This is configured on the REST Inbound Web Service at the service level.
{operation_uri}. Operation level URI. This is configured on the REST Inbound Web Service at the operation level.
{data}. Query or embedded parameters for the REST call. This is configured on the REST Inbound Web Service at the operation level.

In the REST specification, parameters can be provided in the payload (the default), embedded in the URI (for simple parameters) or as a query on the URI (for multiple parameters). This capability can be configured at the operation level. It allows for the specification of the parameter appearing on the URL, the method supported and the mapping to the underlying schema for the parameter. For more information about REST and other Web Service capabilities, refer to the online documentation and Web Services Best Practices (Doc Id: 2214375.1) available from My Oracle Support.
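To make the assembly concrete, here is a small sketch of how the components combine into a final URI. The host, context and component values below are examples only; in a real service the base URI is autogenerated (or taken from CLOUD_LOCATION_F1_BASE_REST_URL) and the service/operation URIs come from the Inbound Web Service configuration:

```python
# Sketch of assembling the documented URI components into a full REST URI.
def build_rest_uri(base_uri: str, owner: str, category: str,
                   service_uri: str, operation_uri: str, data: str = "") -> str:
    parts = [base_uri.rstrip("/"), owner, category, service_uri, operation_uri]
    if data:
        parts.append(data)   # embedded or query parameters, if any
    return "https://" + "/".join(parts)

# Example values only: the base follows the autogenerated
# https://{host}:{port}/{context}/rest/apis/ pattern.
uri = build_rest_uri("myhost:8443/ouaf/rest/apis",
                     owner="cm", category="billing",
                     service_uri="getBill", operation_uri="read",
                     data="billId/1234")
```

The resulting URI for this hypothetical custom service would be https://myhost:8443/ouaf/rest/apis/cm/billing/getBill/read/billId/1234.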


Global Configuration File - cm_properties.ini

One of the common processes on-premise with Oracle Utilities products is altering the various properties files to implement site-specific settings. Some of the settings in these files are inherited from the various configuration files (such as etc/ENVIRON.INI) and others are unique enough to be defaulted for you. To change the latter settings, the product allowed custom templates to be created to implement special or other custom settings. Whilst this is still supported, a global capability has been introduced in 4.4.x implementations to handle global configuration settings in a single override file. This file is etc/cm_properties.ini, which holds the additional instructions for the configuration utilities (such as initialSetup) to implement changes. This file has four styles of entry:

<properties_file>:<properties_name>=<value>. Override the setting <properties_name> to <value> in configuration file <properties_file>. If the property does not exist in the file, it will be added.
<properties_name>=<value>. Override the setting <properties_name> to <value> in all properties files where the property exists.
<properties_file>:<properties_name>=[DELETE]. Remove the setting <properties_name> from the configuration file <properties_file>.
<properties_name>=[DELETE]. Remove the setting <properties_name> from all properties files where the property exists.

Note: Removal of properties will revert to implied defaults, so it should be used with caution. For example:

hibernate.service.properties.template:hibernate.user=myuser
hibernate.password=mypwd
hibernate.iws.properties.template:hibernate.user=myuser
hibernate.service.properties.template:hibernate.ucp.validate_connection=[DELETE]
hibernate.service.properties.template:new.property=test

This would override and delete entries from various hibernate properties files. 
Note: The product teams may deliver overrides for their products, but the cm file is the last file applied, so it overrides settings across all properties files. For more examples of this capability, refer to the Installation Guide and Server Administration Guides shipped with the products or on the Oracle documentation site.
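To illustrate how the four entry styles resolve, here is a small model of the documented behaviour (this is illustrative Python mimicking the rules above, not the actual configuration utility):

```python
# Illustrative model of the four cm_properties.ini entry styles applied to
# an in-memory map of properties files: {file_name: {property: value}}.
def apply_overrides(files: dict, overrides: list) -> dict:
    for line in overrides:
        key, _, value = line.partition("=")
        if ":" in key:                      # scoped to one properties file
            fname, _, prop = key.partition(":")
            targets = [fname]
        else:                               # applies wherever the property exists
            prop = key
            targets = [f for f, props in files.items() if prop in props]
        for fname in targets:
            if value == "[DELETE]":
                files.get(fname, {}).pop(prop, None)   # revert to implied default
            else:
                files.setdefault(fname, {})[prop] = value   # added if absent
    return files
```

Running the example lines from the post through this model would change hibernate.user in the service file only, change hibernate.password wherever it already exists, delete the scoped validate_connection setting, and add new.property to the service file.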


Using Metadata to Identify Code to Remediate for the Cloud

The Oracle Utilities Application Framework is metadata based. Some extension types are defined, in metadata, to the Oracle Utilities Application Framework to augment base functionality to satisfy specific requirements. It is possible, using a number of SQL statements against that metadata, to identify the code. The following configuration objects can be identified in this manner:

Custom Java/COBOL Based Batch Programs (key object: Batch Control). Convert to Plug-In Batch.
Custom Java/COBOL Based Algorithms (key object: Algorithm Type). Convert to Scripting and/or Groovy.
Custom Java/COBOL Based Foreign Key References (Legacy) (key object: Foreign Key). Convert to Scripting and/or Groovy.
Custom Java Based Audit Programs (key object: Audit). Convert to Scripting and/or Groovy.
Custom COBOL Program Components (key object: Program Components). Transition to Algorithms.
Custom FILE-PATHs on Batch Programs (key object: Batch Control). Convert to File Adapter definitions.
Custom Tables (key object: Table). Convert to base objects.
Custom External System XSL (key object: External System). Convert XSL to Managed Content.
Custom Inbound Web Service XSL (key object: Inbound Web Services). Convert XSL to Managed Content.
Custom Database Objects (key object: Data Dictionary). Remove custom database objects and replace the code using those objects.

The SQL can be used to identify the objects to remediate and then the appropriate action taken to remediate each object. Additionally, the following objects are not available in the Oracle Utilities SaaS Cloud:

Custom XAI Classes. Not necessary in Oracle Utilities SaaS Cloud implementations.
Custom Route Types. MPL/XAI is not supported in the Oracle Utilities SaaS Cloud.
Custom XAI Inbound Web Services (Business Adapter). XAI should be migrated prior to moving to the Oracle Utilities SaaS Cloud.
Custom XAI Rules. MPL/XAI is not supported in the Oracle Utilities SaaS Cloud.
Custom JMS Senders. Direct JMS integration is not possible. Consider using Oracle Messaging Service with Oracle Integration Cloud. 
Custom JMS Receivers. Direct JMS integration is not possible. Consider using Oracle Messaging Service with Oracle Integration Cloud.
Custom JDBC. MPL/XAI is not supported in the Oracle Utilities SaaS Cloud.
Custom JNDI. JNDI resources have been replaced with equivalents.

For details of the SQL statements and remediation advice to use, refer to the Planning for the Cloud whitepaper in the Software Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.
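As an illustration of the metadata-query approach, the sketch below runs a simplified query against a stand-in table to flag batch controls that reference custom programs. The table name, columns and "cm" naming convention here are hypothetical simplifications, not the actual product metadata schema; the whitepaper documents the real SQL to use:

```python
import sqlite3

# Illustrative metadata query: flag batch controls that reference a custom
# program, i.e. candidates for conversion to Plug-In Batch. The schema below
# is a made-up stand-in for the product's metadata tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE batch_control (batch_cd TEXT, program_name TEXT)")
conn.executemany("INSERT INTO batch_control VALUES (?, ?)", [
    ("F1-SYNC", "com.oracle.ouaf.Sync"),         # base program: no action
    ("CM-BILL", "com.mycompany.cm.BillBatch"),   # custom program: remediate
])

def custom_batch_programs(conn) -> list:
    rows = conn.execute(
        "SELECT batch_cd FROM batch_control "
        "WHERE batch_cd LIKE 'CM%' OR program_name LIKE '%.cm.%' "
        "ORDER BY batch_cd")
    return [r[0] for r in rows]
```

The same pattern applies to the other extension types: query the key object's metadata, list the custom entries, then apply the remediation advice for that type.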


Information

Whitepaper List As At October 2019

It has been a while since I published a full list of all the whitepapers available for the Oracle Utilities Application Framework and related products from My Oracle Support. The list below is current as at October 2019:

Security

Oracle Utilities Application Framework Security Overview (Doc Id: 773473.1)
LDAP Integration for Oracle Utilities Application Framework based products (Doc Id: 774783.1)
Single Sign On Integration for Oracle Utilities Application Framework based products (Doc Id: 799912.1)
Database Vault Integration (Doc Id: 1290700.1)
Oracle Utilities Application Framework Advanced Security (Doc Id: 1375615.1)
Oracle Identity Management Suite Integration with Oracle Utilities Application Framework based products (Doc Id: 1375600.1)
Audit Vault Integration (Doc Id: 1606764.1)
Oracle Utilities SaaS Cloud Security (Doc Id: 2595978.1)

Best Practices

Technical Best Practices for Oracle Utilities Application Framework Based Products (Doc Id: 560367.1)
Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1)
ConfigTools Best Practices (Doc Id: 1929040.1)
Web Services Best Practices for Oracle Utilities Application Framework (Doc Id: 2214375.1)

Note: Additional Best Practices documentation does exist for legacy technology such as XAI, MPL and V1.x of the products. 
Integration

Oracle Utilities Application Framework Integration Overview (Doc Id: 789060.1)
BI Publisher Integration Guidelines (Doc Id: 1299732.1)
Oracle SOA Suite Integration with Oracle Utilities Application Framework based products (Doc Id: 1308161.1)
Oracle WebLogic JMS Integration and Oracle Utilities Application Framework (Doc Id: 1308181.1)
Integration Reference Solutions for Oracle Utilities Application Framework (Doc Id: 1506855.1)
Batch Scheduler Integration for Oracle Utilities Application Framework (Doc Id: 2196486.1)
Oracle Service Bus Integration with Oracle Utilities Application Framework (Doc Id: 1558279.1)

Other

Oracle Utilities Testing Accelerator (Doc Id: 2014163.1). Data sheet, overview whitepaper and FAQ.
Performance Troubleshooting Guideline Series (Doc Id: 560382.1). A series of documents covering each tier of the architecture and the metrics available at each tier.
Software Configuration Management Series (Doc Id: 560401.1). A series covering extension management, including transition strategies to the Oracle Cloud.
Oracle Utilities Application Framework Architecture Guidelines (Doc Id: 807068.1)
What's New In Oracle Utilities Application Framework V4 (Doc Id: 1177265.1)
Oracle Application Management Pack for Oracle Utilities Overview (Doc Id: 1474435.1)
Using Oracle Text for Fuzzy Searching in Oracle Utilities Application Framework (Doc Id: 1561930.1)
Private Cloud Planning Guide (Doc Id: 1643845.1)
Migrating from XAI to IWS (Doc Id: 1644914.1)
Multiple CM Development (Doc Id: 1901471.1)
ILM Planning Guide (Doc Id: 1682436.1)
Overview and Guidelines for Managing Business Exceptions and Errors (Doc Id: 1628358.1)
Migrating From On Premise To Oracle Platform As A Service (Doc Id: 2132081.1)

Each whitepaper also contains important links to other articles of interest. Lastly, to find the latest Oracle Utilities online documentation, go to the Oracle Utilities Documentation Library for on-premise and cloud documentation.


REST Support Improvements in V4.4.0.2.0

In past releases, RESTful services were introduced as a complementary integration protocol for Inbound Web Services, which traditionally only supported SOAP. Over the last few releases we have endeavored to standardize our REST support to support both Oracle and industry standards. Supporting these standards reduces both cost and risk for implementations using REST. In Oracle Utilities Application Framework V4.4.0.2.0, a number of significant changes have been implemented to help standardize the REST interface. The changes include the following: Support for Business Objects. In past releases, the REST support has centered around Business Services and Service Scripts. In Oracle Utilities Application Framework V4.4.0.2.0, we not only added direct support for Business Objects but extended the model to include additional methods and URL changes to fully utilize the capabilities of REST and the underlying functionality. Support for Additional HTTP Methods. In past releases, the Oracle Utilities Application Framework exclusively supported the most common HTTP method, namely POST. In Oracle Utilities Application Framework V4.4.0.2.0, we introduced support for the GET, PUT and PATCH HTTP methods, configurable on the operation. For backward compatibility, the POST method will continue to be the default. This capability is now supported via configuration on individual operations. Note: The POST method should be used for all Business Service and Service Script based operations; PUT, GET and PATCH are applicable to Business Object based operations only. Advanced Support for Parameters. In past releases, the JSON/XML payload for the REST service indicated the record information used to identify the object to process. In Oracle Utilities Application Framework V4.4.0.2.0, we introduced the capability to identify the parameters on the URI or as query parameters, including support for multiple parameters, to support a wide range of REST styles. 
For backward compatibility, if this capability is not used, the identifiers will default to the payload as per previous releases. This capability is now configurable on individual services and allows for flexibility in parameter naming, the style of the parameter and the mapping to the schema associated with the operation. Flexible URI. In past releases, the URI used for REST services was hard-coded to reflect the URI in the container. In Oracle Utilities Application Framework V4.4.0.2.0, the URI is now a combination of the service level URI Component, Resource Category and operation level URI Component. All of these settings are configurable on the Inbound Web Service. Note: For backward compatibility, the default URI used in past releases will be supported for legacy services, but is deprecated in favor of this new capability to promote standardization. All these capabilities, as well as other integration capabilities, are outlined in the Web Services Best Practices (Doc Id: 2214375.1) available from My Oracle Support.
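As an illustration of the new parameter styles, a GET against a REST Inbound Web Service with the identifier passed as a query parameter might be constructed as follows (the host, service URI, operation URI and parameter name are all illustrative, not taken from any shipped service):

```python
import urllib.parse
import urllib.request

# Sketch of calling a REST Inbound Web Service using the new GET support,
# with the record identifier as a query parameter instead of in the payload.
def build_request(base: str, service_uri: str, operation_uri: str,
                  params: dict, method: str = "GET") -> urllib.request.Request:
    query = urllib.parse.urlencode(params)
    url = f"{base}/{service_uri}/{operation_uri}?{query}"
    req = urllib.request.Request(url, method=method)
    req.add_header("Accept", "application/json")
    return req

req = build_request("https://myhost:8443/ouaf/rest/apis/cm/billing",
                    "getBill", "read", {"billId": "1234-5678"})
```

Passing the request to urllib.request.urlopen would then perform the call; a POST-based operation would instead carry the identifiers in the JSON payload, as in previous releases.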


XML Transformations as Managed Content

eXtensible Stylesheet Language Transformations (XSLT) provide a way of transforming XML into an alternative format (XML, etc.). Oracle Utilities Application Framework supports these transformations for both inbound and outbound transmission of data, at both the request and response level. This allows the product to support a wide range of integration scenarios with other applications, market hubs, etc. Typically implementation teams create the necessary transformation, in the tool of their choice, and then configure the Oracle Utilities Application Framework to use the transformation with the relevant transaction at the relevant time.

In past releases, this meant creating an .xsl file containing the transformation, depositing it on the file system, using the relevant utility to incorporate that file into the build of the deployment of the product, and finally redeploying the product to enable the transformation to be used. Whilst this technique can continue to be used for on-premise implementations of the products, it is not appropriate for the Oracle Utilities SaaS Cloud as it introduces inefficiencies.

In Oracle Utilities Application Framework V4.4.0.2.0 and above, to support XSL transformations more efficiently, they are now supported as Managed Content. The Managed Content object is part of the Oracle Utilities Application Framework and was introduced to support a wide range of content types used across the products. The capability stores a number of content types used in ConfigTools objects and has now been extended to store and manage XSL transformations. The Oracle Utilities Application Framework has now been extended to use the Managed Content versions of those transformations rather than the more traditional file based solution. This has the following cost advantages:

Deployment Costs are avoided.
The use of Managed Content means no deployment activity is necessary, which reduces deployment costs significantly and means that changes to objects can be made without the need for an outage.

Content can be migrated with all other extensions. To reduce costs and risk, Managed Content is automatically included in the Configuration Migration Assistant requests that migrate configuration. This means that these objects are change managed with any related object changes, which reduces synchronization costs.

Cache Managed. As with other administration objects, Managed Content is managed in the self-managed configuration cache. This means it is loaded into the cache upon use and the loaded objects can be managed using cache commands to handle changes effectively. In the traditional file based method, as the object was managed by the container itself, it typically required a reset of the container to force a refresh, resulting in an outage in most cases.

The Managed Content solution for XSL transformations extends to Inbound Web Services (SOAP) and Outbound Messages. Other areas where XSL transformations are available are not supported using this capability as they are either not appropriate or have been announced as deprecated (for example, XAI has been replaced by IWS, so Managed Content XSL is not supported on XAI). Whilst this change will greatly benefit cloud customers, it can also be used by on-premise implementations that want to save deployment costs.

Note: Unlike previous service packs, the default setting for this capability is to use Managed Content. As the setting is global, customers who have on-premise implementations can retain use of file based XSL transformations by setting the XSL Location parameter of the External Messages Feature Configuration to F1FL. Setting this value will retain backward compatibility. Customers moving to an Oracle Utilities SaaS Cloud Service must use the Managed Content capability as part of the migration to the cloud.
Migrating to this new capability. If you wish to migrate to this capability, use the following process for each XSL transformation used on SOAP based Inbound Web Services and Outbound Messages configured on External Systems:

Open the XSL file in an appropriate editor. Create a new Managed Content object for the style sheet. The name of the object should reflect the name of the original file or its purpose. Copy and paste the XSL code into the Schema of the new Managed Content object. Save the object.

In the object referring to the XSL transformation, change the name to the Managed Content identifier created in the last step. This change applies to the following objects: Inbound Web Services (SOAP) - Request XSL; Inbound Web Services (SOAP) - Response XSL; Outbound Message/External System - Message XSL; Outbound Message/External System - Response XSL (for Real-Time Adapters only).

Once the XSL code has been migrated, it should be physically removed from the deployment to avoid costs and risk. The Managed Content can be migrated using Configuration Migration Assistant with all the other configuration objects.
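As a sketch of the kind of style sheet that would be pasted into a Managed Content object, the minimal transformation below copies a document through unchanged and renames one element. The element names (accountId, externalAccountId) are illustrative assumptions, not names from any product schema:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity template: copy everything through unchanged by default -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- Rename a hypothetical source element to the name the target expects -->
  <xsl:template match="accountId">
    <externalAccountId>
      <xsl:apply-templates select="@*|node()"/>
    </externalAccountId>
  </xsl:template>
</xsl:stylesheet>
```

The whole stylesheet, exactly as it appeared in the .xsl file, becomes the body of the Managed Content object; nothing in it needs to change for the migration.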


Oracle Utilities Application Framework 4.4.0.2.0 Released

The latest version of the Oracle Utilities Application Framework (4.4.0.2.0) will be released with the latest on-premise and Oracle Utilities SaaS Cloud Services over the next few weeks. This release contains a series of new and improved capabilities that products using this release can take full advantage of. The key capabilities in this release are as follows:

Introduction of Market Transaction Management into the Oracle Utilities Application Framework (Oracle Utilities SaaS Cloud exclusive). The Market Transaction Management (MTM) objects are now included in the objects delivered with the Oracle Utilities Application Framework for use exclusively by the new MTM capabilities delivered in the Oracle Utilities SaaS Cloud Services. This also includes additional ILM capabilities to manage market message objects as part of the ILM solution.

Integration API Improvements. More detail is now included in the REST API specification in the Open API documentation, including more descriptions and valid values for lookups.

Mobile Framework Changes (Oracle Utilities SaaS Cloud exclusive). For the Oracle Utilities SaaS Cloud, the Mobile Framework has been enhanced to support additional capabilities in the Oracle Identity Cloud, including federation, and to support Oracle Object Storage for managing mobile deployment files.

REST URI Change. The URI for the REST API has been a configurable component in past releases. In this release a new generated URI will be used, built from configuration at both the service and operation level, in an effort to implement a consistent interface across product lines.
The new format for the URI, by default, is as follows:

https://<host>:<port>/rest/<context>/api/iws/<owner>/<resourceCategory>/<component>/<operation>

Where:
<host> - Host Name
<port> - Port Number
<context> - Server Context
<owner> - Product Owner
<resourceCategory> - Service Category
<component> - URI Component
<operation> - Operation Name

Note: Older URI functionality, mainly used by the Mobile Framework, will be supported for backward compatibility in the interim but will be slowly phased out in favor of this capability.

New REST HTTP Method Support. In past releases of the Oracle Utilities Application Framework, only the POST HTTP method was supported for all REST calls, regardless of the transaction type or operation used. In this release, the PUT, GET and PATCH HTTP methods are now supported on REST based services by specifying the relevant method on each operation in the REST operation management capability. Note: For backward compatibility, the POST method is defaulted for existing services.

Security Improvements. The User Group Services Management Portal has been improved to provide lower cost management of membership, including dynamic portals for improved maintenance of application services, covering both linked and non-linked relationships.

Privacy Improvements. In past releases Object Erasure was introduced to support privacy legislation across the world. In this release, it is now possible to manually transition the life-cycle of an erasure object to support on-demand privacy situations.

Update Of UI Libraries. This release updates the Oracle JET libraries to a new version that supports a wider range of browsers and device types and provides additional widget support as a foundation release for future planned features.

To Do Improvements. The new To Do portal has been enhanced to support functionality to denote explicit work on an individual To Do.
By using this functionality, an authorized user can go directly to the data referenced in the To Do to reduce navigation time. The Current To Do dashboard zone now supports quick completion of a To Do, detection of any related To Dos and the ability to close the To Do and its related To Dos in a simpler manner. These changes are part of the planned To Do management changes rolled out over a number of releases.

New ILM Enabled Objects. This release adds ILM support for the Process Flow and Statistics Snapshot objects.

Style Sheets as Managed Content. On the Oracle Utilities SaaS Cloud, direct file references for code are not permitted for the management and storage of eXtensible Stylesheet Language Transformation (XSLT) files. In this release, the transformations used in Inbound Web Services and Outbound Messages on External Systems can now be stored and referenced as Managed Content objects in the database. This means these previously file based transformations can be defined as managed content and managed using the migration tools, such as Configuration Migration Assistant (CMA), to reduce maintenance costs and risks on the Oracle Utilities SaaS Cloud. This capability is not available for other uses of XSL files in the Oracle Utilities Application Framework as they are not applicable to the Oracle Utilities SaaS Cloud. Note: As this setting defaults to using managed content at the global level, on-premise customers wanting to continue to use the file based XSL file support MUST alter the XSL Location setting of the External Messages Feature Configuration to F1FL to configure backward compatibility.

Data Conversion Improvements (Oracle Utilities SaaS Cloud exclusive). As part of the Oracle Utilities SaaS Cloud Service, the Oracle Utilities Cloud Services Foundation product provides a capability to perform conversion activities as part of a migration to the cloud, if necessary.
In this release a new set of conversion specific processes are now included with each service to provide a comprehensive conversion experience. These new batch programs can be used as provided or extended to support more complex situations.

Date And Time Formatting Improvement. By default, date and time fields are retrieved and extracted in an internal 'OUAF' format. In this release, various changes have been implemented to support indicating that all date and time fields should be converted to standard XSD format. The plug-in driven extract batch program supplied by the product has been enhanced to include a new parameter to indicate the date format. The F1-ConvertXMLToDelimited and F1-ConvertXMLToFileFormat business services have also been enhanced to include a new parameter to indicate the date format.

Improved Batch Error Handling. Batch jobs may fail because of technical or environmental reasons that are transient in nature, such as an interruption in the availability of the database. With the previous batch implementation, temporary failures would result in the batch thread being marked in Error status and a customer alert. Since these issues are often transient, batch processing now supports automatic thread re-submission when a failure occurs for technical reasons. The thread will now move to a status of "Interrupted" and the system will attempt a number of retries before moving the thread to "Error".

Cloud Deprecated Features. To take full advantage of the Oracle Cloud Infrastructure, a number of Oracle Utilities Application Framework features are no longer applicable on the Oracle Utilities Cloud SaaS Service, to avoid duplication.
In this release, the following features are no longer supported on Oracle Utilities Cloud SaaS implementations only:

The log analysis features of the Oracle Cloud, available from Oracle Utilities Cloud Service Foundation, have rendered the Debug Mode buttons (Start Debug, Stop Debug, Clear Trace, and Show Trace) superfluous, and they have been disabled on Oracle Utilities Cloud SaaS implementations. On Inbound SOAP Web Services, in the Operation collection, the Request and Response Schema elements are not supported. Implementations may still use the Request and Response XSL fields to support this requirement. On the External System/Outbound Message profile, the Response Schema and the W3C Schema are not supported. Implementations may still use the Request and Response XSL fields to support this requirement.

Development and UX Improvements. The Oracle Utilities Application Framework introduces new and updated development capabilities in line with the user experience changes to reduce coding time. In this release the following capabilities were introduced or enhanced:

Alternate Row Headers on Data Explorers. For accessibility, it is now possible to add the rowheader=true tag to a data explorer so it is picked up by the accessibility features of the platform. Updated Icon Set. The icon set has been updated to include new and updated icons. More Visible Error Messages. Error messages have been optimized to be more efficient and more visible, to prevent unnecessary scrolling. Query Filter Area Minimized. The filter area of query zones can now minimize automatically to maximize the space available. Escape Key Support. A pop-up help window can now be dismissed using the escape key. Timeline Zone Redesign. The look of the timeline zone has been changed to reflect the principles of the new user experience introduced in 4.4.0.0.0, with a more visible calendar and a new icon set. Marking a List Element in Error.
The error trapping has been enhanced to allow the marking of a specific list element in error. This also extends to support for Groovy, allowing a Groovy script to denote which element is in error. Online Help Engine UX Overhaul. The first stages of the online help overhaul have been completed to align the major parts of the engine with the new experience.

Note: This release includes new and updated capabilities introduced in Oracle Utilities Application Framework 4.4.0.1.0, which was a cloud exclusive release, as well as new and updated capabilities introduced in this service pack. Note: Whilst some of these changes are best used on the Oracle Utilities SaaS Cloud, they can also be applied to on-premise, IaaS and PaaS implementations of the products, unless otherwise indicated as Cloud Exclusive.
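The interrupted/retry behavior described under Improved Batch Error Handling can be sketched as follows. The status names mirror the article, but the retry count, back-off delay and thread API are illustrative assumptions for this sketch, not the product's actual implementation:

```python
import time

# Hypothetical transient failure, standing in for e.g. a brief database outage.
class TransientError(Exception):
    pass

def run_thread_with_retry(work, max_retries=3, delay_seconds=0):
    """Run a batch thread, retrying transient failures before erroring out.

    Returns the final thread status: "Complete" on success, or "Error"
    once the retries are exhausted. Between attempts the thread sits in
    the "Interrupted" status rather than "Error".
    """
    for attempt in range(max_retries + 1):
        try:
            work()
            return "Complete"
        except TransientError:
            status = "Interrupted"          # transient: eligible for re-submission
            if attempt < max_retries:
                time.sleep(delay_seconds)   # back off before the retry
    return "Error"                           # retries exhausted

# A job that fails twice with a transient error, then succeeds.
attempts = {"n": 0}
def flaky_job():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("database temporarily unavailable")

print(run_thread_with_retry(flaky_job))  # succeeds on the third attempt
```

The point of the pattern is that only exhausted retries surface as an Error status and alert; a blip that clears on re-submission never does.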


SYSUSER Explained

The initial installation of an Oracle Utilities product is not delivered with a completely blank database to start your implementation upon. The Oracle Utilities Application Framework is a metadata driven framework, which requires base metadata that is owned by the various products using the framework. Like all metadata, it must be owned by an application user as the custodian of that information. In the case of the Oracle Utilities Application Framework, that user is SYSUSER. This is the base userid delivered with the product and it is unique in terms of security in the product:

SYSUSER is designed as the initial user. When you first install a product, you need an identity to first log in to the system and add additional users for your implementation. It is no different, in this respect, to the privileged accounts on the database that are used to establish other accounts.

SYSUSER should not be used to process data. After other users are configured, the SYSUSER account should not be used for any processing in the product. The account is not designed for use past the initial requirement. One of the big reasons for this is that SYSUSER is owned by the product, so it cannot be changed significantly. This severely limits its usefulness.

SYSUSER should never be deleted. The SYSUSER account should not be deleted as it is used as the identity for all the relevant metadata delivered by the product.

SYSUSER should be disabled. Past its initial use it should be disabled (this is allowed by an appropriate administration account) to prevent its use beyond its original intent.

I am aware of some early users of the products actively using SYSUSER for some operations. We recommend that you consider moving those operations to another user and disabling the account.


Oracle Utilities SaaS Cloud Security Summary

The Oracle Cloud Infrastructure provides a comprehensive security infrastructure to protect all Oracle Cloud Services. Oracle Utilities SaaS Cloud Services take advantage of that security infrastructure, with native capabilities, to allow customers to secure their services. The additional key capabilities used by the Oracle Utilities SaaS Cloud Services include:

Flexible Identity Solution. Identity and access control can be established using a flexible identity solution. This solution includes options for using the embedded Oracle Identity Cloud Service, an existing Oracle Identity Cloud Service or a federated identity solution.

Accelerator Provided Security. The accelerators provided with Oracle Utilities SaaS Cloud implementations include a predefined, pre-loaded authorization model that can be adapted to suit individual needs. Customers migrating from on-premise solutions can migrate to the accelerator to reduce migration costs or retain their existing authorization definitions.

Pre-built Identity Provisioning. The Oracle Identity Cloud Service (embedded or existing) includes a prebuilt adapter to optimize the provisioning and de-provisioning processes to save time and costs. The adapter supports coarse grained or fine grained provisioning.

Encryption At Rest. The Oracle Utilities SaaS Cloud Services utilize Oracle's Transparent Data Encryption capability to protect the storage of all data. This includes protection of any extracts and backups to prevent data loss.

Encryption On The Wire. The Oracle Utilities SaaS Cloud Services take advantage of the network encryption capabilities of Oracle Cloud Infrastructure to protect the transmission of data between all network layers in the architecture.

Key Rotation. With the implementation of encryption, the keys used to provide this encryption are rotated automatically in accordance with Oracle Cloud Infrastructure guidelines.

Protecting Privileged Accounts.
By default, privileged accounts, such as database administrators, have SQL data manipulation language (DML) access to the data within the database schemas they manage. In accordance with Oracle Cloud Infrastructure policy, Oracle Database Vault has been implemented to limit privileged accounts to the access appropriate to manage the database without accessing the data within it.

Whitelist Enabled and Locked. Capabilities to extend the Oracle Utilities SaaS Cloud are subject to several security based whitelists to protect the integrity of the service, reduce risk and reduce costs. The whitelists are consistent with other Oracle Cloud Services on Oracle Cloud Infrastructure. The whitelists cover the following areas: Groovy Whitelist. This whitelist defines the subset of the Groovy language permitted for use in extensions on the Oracle Utilities SaaS Cloud Services. URL Whitelist. Interfaces, including protocols, into and out of the service via URL are controlled via a whitelist to prevent data leakage. SQL Functions Whitelist. Use of functions within SQL statements used in queries and code is subject to a whitelist to prevent the bypass of access controls in code. HTML Whitelist. The tags used in any HTML based extension, including generated content, are subject to a whitelist to maintain security compliance.

Security Checked Code. In line with Oracle policy, all product code is inspected, using Oracle's Software Security Assurance practices, as part of the build process, for compliance against a raft of security standards and security attacks. These checks are performed using internal tools used for all Oracle Cloud Services as well as third party compliance tools.

Utilizes Oracle Security Practices. Oracle implements corporate security practices that encompass all the functions related to security, safety, and business continuity for Oracle's internal operations and its provision of services to customers, across all its products.
They include a suite of internal information security policies as well as different customer-facing security practices that apply to different service lines, including the cloud.

Data Masking Support. Data in the service can be masked, using configuration, to ensure properly authorized users have appropriate access to data.

Inbuilt Information Lifecycle Management/Object Erasure Support. The life-cycle and state of storage of key master and transaction data is predefined, with the option of additional configuration to support specific privacy and data retention legislation.

Key Chain Support. Security key integration with other Oracle Cloud services is managed internally to ensure compliance and availability.

Security Policy Support. Integration using the SOAP or REST protocols supports a wide range of compliant security policies, including specific policies supported by the integration services provided on the Oracle Cloud Infrastructure.

Privacy Support. The Oracle Utilities SaaS Cloud services take advantage of the privacy capabilities of the platform and the underlying products to help implement industry recognized privacy controls and Oracle's Privacy Policy.

Backup and Recovery. The Oracle Utilities SaaS Cloud service takes advantage of the Oracle Cloud Infrastructure backup and recovery mechanisms to allow flexible management of data and to protect data state using the techniques and principles outlined in Oracle's Maximum Availability Architecture.

Cloud Service Foundation Extended Support. Compliance and management of security configuration at a service level is provided by the Oracle Utilities Cloud Service Foundation provided with each service.

The Oracle Utilities SaaS Cloud Services extend the security provided by the Oracle Cloud Infrastructure to provide flexible security capabilities that enhance and protect the cloud services. Refer to the Oracle Utilities SaaS Cloud Service documentation and Cloud Service Descriptions for further details.
This information is available via Oracle Utilities SaaS Cloud Security (Doc Id: 2595978.1) available from My Oracle Support. Additionally, the following links are useful for more information related to this topic:

Oracle Cloud Infrastructure Security
Oracle Cloud Infrastructure and GDPR
Oracle Security Practices
Oracle Cloud Security Practices
Oracle Hosting and Delivery Policies
Oracle Data Processing Agreement for Cloud Services / Moat Analytics Data Processing Agreement
Oracle Maximum Availability Architecture
E.U. General Data Protection Regulation (GDPR) Resource Center (Doc Id: 111.1)
Records of Processing Activities (Doc Id: 115.2)
Privacy and Security Feature Guidance for all Oracle Services (Doc Id: 114.2)


The Oracle Utilities SaaS Cloud Difference

I just completed a set of Customer Edge Conferences across the world and met a lot of customers and partners. One of the focuses of the sessions I ran was the transition from on-premise to the cloud, and they were well attended with lots of questions about that transition. One common theme of the questions was clarification of what the Oracle Utilities SaaS Cloud offerings, on Oracle Cloud Infrastructure, offer customers from a technical point of view. I wanted to summarize the discussions I had with various partners to help transitions.

The Oracle Utilities SaaS Cloud Services are a solution, not just an installation. One common misconception is that the Oracle Utilities SaaS Cloud offerings are simply an installation of the relevant Oracle Utilities product on the Oracle Cloud Infrastructure. Whilst we do in fact install the Oracle Utilities product on that infrastructure, the service is far more than just the product installation. The service contains the following additional capabilities:

Oracle Utilities Cloud Service Foundation. This is a cloud exclusive product, provided with the service, that provides additional services like conversion tools, batch scheduling, code management and monitoring metrics for the cloud service. It is the exclusive console for the service and is directly linked into it. It provides automation tools for managing your service for common tasks, to reduce your costs and risks in the cloud environment.

Integrated Reporting. There is an integrated reporting capability that not only allows operational reporting for the business but also allows site operations people access to the underlying infrastructure to provide advanced diagnosis capabilities.

Integrated Testing. The Oracle Utilities Cloud Services now include integrated testing capabilities in the form of the Oracle Utilities Testing Accelerator, as well as preloaded content designed exclusively for that Cloud Service.

Cloud Accelerator.
The Oracle Utilities SaaS Cloud Service includes a pre-loaded cloud accelerator for each service. This is important for new cloud customers but also allows customers to reduce their extension risk by using the service capabilities natively.

Identity Capability. The Oracle Utilities SaaS Cloud Service is managed and protected by an in-built identity solution provided as part of the Oracle Cloud Infrastructure. This capability can be used natively in the service, via an external Oracle Identity Cloud Service, or via a federated identity solution.

Superior Hardware. The Oracle Utilities SaaS Service is built upon Oracle's next generation cloud, Oracle Cloud Infrastructure, which provides scalable hardware including Oracle Exadata (for all databases) and access to fast object storage.

Tight Integration with PaaS Services. To keep the Oracle Utilities SaaS Service as cost effective as possible for the wide range of solutions (including hybrid solutions), the service is compatible with a wide range of Oracle Cloud PaaS Services. This allows partners and customers to augment the solution with additional cloud services as necessary.

REST API Capability. The service uses a REST based API which exposes its functions as necessary for integration. This is in line with other Oracle Cloud Services, which use a similar approach.

Partners ask me for my recommendation in moving to the cloud, and my personal advice is that you assess what is provided in the service against your current extensions and use as much of the service, which you have paid for, as possible. This will greatly reduce your extension risk and costs and allow you to take advantage of the regular upgrades provided by the cloud offering. Everything that you do not use in the service is just an additional cost to you. The service has been designed to be as comprehensive as possible, with cost and risk taken into account.
Note: The techniques used for extensions on the cloud can also be used in on-premise implementations, regardless of whether a cloud migration is happening now, in the future, or at all. For information about the techniques refer to Preparing for the Cloud in the Software Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.


Workflow Scheduler and Oracle Scheduler

In legacy versions of Oracle Utilities Customer Care And Billing, a Workflow based batch scheduler was made available. The scope of that implementation was very limited: it could execute a chain of batch jobs. It was implemented to aid customers migrating from the PeopleSoft CIS product to the Oracle Utilities Customer Care And Billing product. With the introduction of Outbound Messages and the Oracle Scheduler, the use of both workflow and the workflow based scheduler has been declining. It is recommended that customers who continue to use the workflow scheduler migrate to the Oracle Scheduler, as it has the following advantages:

Broader Scope. The Oracle Scheduler can be used with all Oracle Utilities Application Framework based products, whereas the workflow scheduler only supported Oracle Utilities Customer Care And Billing. The Oracle Scheduler also supports local or remote invocation from a wide set of technologies, which means it can be used for third party application scheduling as well.

Local and Enterprise Wide Implementation. The Oracle Scheduler can be attached to the Oracle Utilities Application Framework based product, or that product can be just one of the applications using the scheduler. This means the Oracle Scheduler can be used locally for an application or be shared globally for enterprise deployments (remote use requires an additional agent installation, which is part of the Oracle Client installation).

Robust. The Oracle Scheduler is part of all editions of the Oracle Database and is used across many Oracle products including the Oracle Database, Oracle Enterprise Manager, etc.

Cloud Friendly. The Oracle Scheduler is automatically deployed inside the Oracle Utilities SaaS Cloud Services as the main scheduler. Schedule maintenance is provided via REST API or via the Oracle Utilities Cloud Services Foundation that is supplied exclusively with the Oracle Utilities SaaS Cloud Services.
The workflow scheduler is not supported on the Oracle Utilities SaaS Cloud Services.

Extensive Calendaring Support. The Oracle Scheduler uses an extensive calendaring syntax to allow flexible scheduling of work. This supports the time zone and daylight savings capabilities built into the database.

Broad Management Capabilities. The management of the schedule within the Oracle Scheduler can be performed via the command line, Oracle SQL Developer and/or Oracle Enterprise Manager. In the Oracle Utilities SaaS Cloud Services, the schedule can be maintained via the Oracle Utilities Cloud Service Foundation.

For more information about the Oracle Scheduler refer to Oracle Scheduler Concepts and Batch Scheduler Integration (Doc Id: 2196486.1) available from My Oracle Support.
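To give a flavor of the calendaring syntax, a schedule in the Oracle Scheduler is expressed as a calendaring expression supplied as the repeat interval of a job. The expressions below are illustrative examples only, not schedules shipped with any product:

```
FREQ=DAILY;BYHOUR=2;BYMINUTE=30;BYSECOND=0
FREQ=WEEKLY;BYDAY=MON,WED,FRI;BYHOUR=5
FREQ=MONTHLY;BYMONTHDAY=-1
```

The first runs every day at 02:30:00, the second on Monday, Wednesday and Friday at 5 AM, and the third on the last day of each month (negative BYMONTHDAY values count back from the end of the month). The expressions are evaluated against the schedule's time zone, which is how the daylight savings support mentioned above comes into play.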


Saving Costs with Middleware Integration

Over my career of (ahem)... 30 years, I have seen integration technologies come and go with varying levels of success. One of the most persistent and more successful integration technologies has been middleware. In basic terms, middleware sits in the middle (hence the name) between the source and target applications in an integration. It handles the transport method, the way the source delivers the payload to the target, and may translate the payload into a format that the target supports, if necessary. It can also deal with the response, or lack of one, from the target for the source. The figure below summarizes this relationship:

While most people associate costs with purchasing middleware, I personally find the benefits tend to reduce the costs significantly in any implementation. There are a number of reasons why middleware is very cost effective:

Simplifies Your Integration. Middleware is flexible in the transports supported between the source and target applications. This means there are flexible ways of delivering payloads between applications in both asynchronous and synchronous modes.

Scalability To Your Volume. Most middleware is based upon scalable and highly available architectures, allowing you to support the full range of volumes and service levels.

Isolates Targets and Sources. Source and target applications need to concentrate on delivering superior functionality rather than worrying about how to deliver payloads. By integrating with middleware, the applications are then free to share their functionality across application data topologies. This is particularly important in the cloud, where the services should concentrate on being superior services and let integration be best served by integration services. For on-premise implementations, middleware can isolate the technical aspects of an integration and act as a proxy to reduce the costs of change.

Flexible Payload Formatting. Source and target applications cannot be assumed to talk in the same structures.
Therefore translation to and from payload formats may be necessary. Middleware products support a wide range of formats and translation languages (both simple and complex). Again isolating the payloads saves development costs in the source and target applications and hides that complexity in the middleware. Simplifies Error Detection. One of the most underestimated benefits of middleware is acting as an error hospital when integration goes wrong. being able to use the middleware to help poinpoint an issue in integration is critical. In some cases, you can configure the middleware to handle the exception automatically for you to further reduce costs. Partners have asked me for advice on reducing their costs in integration and I usually point out that the introduction of a middleware oriented architecture as the central conduit for integration is ideal for most situations. Certainly Oracle supplies a number of cloud and on-premise solutions and the products are open enough that they can be used or alternatives that support the relevant industry standards. Oracle offers a wide range of middleware solutions that can be used with on-premise and Oracle Utilities SaaS Cloud implementations. The most common of these are: Oracle Data Integration Cloud. Data level integration for large volumes in single events or across multiple events. Oracle Data Integration Platform Cloud. This is the Oracle Data Integration Cloud but with additional Data Quality and Machine Learning capabilities. Oracle SOA Cloud. This is a cloud implementation of SOA Suite, B2B, Service Bus and Integration Analytics for comprehensive middleware solutions. Oracle Integration Cloud. This is the premier integration capability allowing simple but powerful integration between source and targets including many industry standard cloud applications. Oracle SOA Suite. This is the on-premise implementation of a number of integration technologies to allow full levels of integration. 
For more information about integration with SOA Suite refer to Oracle SOA Suite Integration (Doc Id: 1308161.1) available from My Oracle Support. Additionally Oracle Utilities Application Framework has an outbound message adapter for the Oracle Service Bus part of the suite for use on-premise. Refer to Oracle Service Bus Integration (Doc Id: 1558279.1) available from My Oracle Support for more information about this adapter. Oracle Service Bus. This is an on-premise implementation of Oracle Service Bus for large volume interaction. Oracle Utilities Application Framework has an outbound message adapter for use with this capability. Refer to Oracle Service Bus Integration (Doc Id: 1558279.1) available from My Oracle Support for more information about this adapter. Oracle Data Integrator. This is Oracle's premier ELT integration tool for data based integration from multiple sources and targets. The figure below illustrates the integration with the Oracle Utilities Application Framework: The main points of contact with Oracle Utilities Application Framework can be summarized as follows: Inbound Web Services. This is the premier method for inbound integration using either SOAP or REST based services using Inbound Web Services. This also supports WS-Policy for security and registries for integrations. XML Application Integration is not supported for newer customers and cloud customers. Customers should migrate using the guidelines in Migrating from XAI to IWS (Doc Id: 1644914.1) available from My Oracle Support. Staging (Upload and Download). Using native database or Inbound Web Services to push data into the product/service or extract data from the product/service. Outbound Messages. Allowing real time (and batch) integration upon business events. SQL Access (on-premise only). Using the native database adapters in middleware to extract data in various formats. 
Middleware does cost money but the benefits it brings in terms of cost savings and simplifying your architecture can realize benefits beyond the license value. For more information about integration refer to Oracle Utilities Application Framework Integration Overview (Doc Id: 789060.1) available from My Oracle Support.
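The payload translation role described above can be illustrated with a minimal sketch: middleware receives a payload in the source's structure and maps it into the structure the target expects. The field names and mapping below are invented for illustration only.

```python
import json

# Hypothetical source payload (field names are invented for this sketch).
source_payload = json.dumps({"custNo": "12345", "mtrRead": 1042, "readDt": "2019-06-01"})

# The mapping a middleware translation step might apply between the
# source schema and the target schema (illustrative, not a product format).
FIELD_MAP = {"custNo": "customerId", "mtrRead": "meterReading", "readDt": "readingDate"}

def translate(payload: str, field_map: dict) -> str:
    """Rename source fields to the names the target application expects."""
    data = json.loads(payload)
    return json.dumps({field_map.get(k, k): v for k, v in data.items()})

print(translate(source_payload, FIELD_MAP))
# {"customerId": "12345", "meterReading": 1042, "readingDate": "2019-06-01"}
```

In practice the middleware product performs this mapping declaratively (for example via XSLT or a mapping editor) rather than in hand-written code, which is precisely the development cost it removes from the source and target applications.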


OUAF Integration Architecture

One of the most important aspects of the Oracle Utilities Application Framework is the ability to integrate with other technologies and applications using a flexible set of techniques. With each release of the Oracle Utilities Application Framework, the integration architecture is refined to support new and emerging standards for on-premise, hybrid and Oracle Utilities SaaS Cloud implementations of Oracle Utilities products. The integration architecture is summarized as follows:

- Screen Pop. The ability to pop into a pre-populated screen of the product from an external product (for example, your IVR/CTI system). A URL can be formulated to jump to the screen and even pre-populate it, giving the end user quick access to the information.
- Client User Exit. This allows the existing user experience to be altered to integrate, in the browser, with any other browser-based application. For example, integrating an inline address validation capability at the user experience level.
- URL Characteristic. This allows integration to other sites or external document control systems from individual objects. Sites with content solutions can reuse those solutions from objects.
- Navigation Keys. This allows sites to add links to internal sites with supplementary materials that augment the user experience. For example, links to document repositories for policies and the like.
- Inbound Web Services. One of the major integration points is SOAP or REST based integration, either directly or via software (such as middleware) that supports Web Services. The Oracle Utilities Application Framework has been enhancing this capability since before it became popular and continues to enhance it to support new and emerging standards.
- Staging. Utilities need to pass data into the product and also extract data from it. To protect data integrity, the products provide a set of prebuilt staging tables that can serve as conduits in and out of the product.
- Outbound Message. The Oracle Utilities Application Framework provides a configurable capability to detect business events and hand a payload to a capability that can send it, using various formats and transports, in real time, in batch or via middleware to external systems.
- Algorithm. For on-premise and hybrid implementations, it is possible to integrate third party libraries directly to provide dedicated interfaces at the processing level.
- SQL, Stored Procedures and Triggers. For on-premise and hybrid implementations, it is possible to use database objects to perform integration at the database level. This is used for reporting tools, database level adapters and database add-on products such as Oracle Audit Vault.

A summary of the integration capabilities is shown below. A new version of the Oracle Utilities Application Framework Integration Overview (Doc Id: 789060.1), available from My Oracle Support, is now available that covers the aspects of integration described above as well as the techniques available for cloud implementations.
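The Screen Pop technique mentioned above boils down to formulating a URL that carries enough context to land on a pre-populated screen. A hedged sketch follows; the base URL, portal name and parameter names are invented for illustration and are not actual product URLs (consult your product documentation for the real format).

```python
from urllib.parse import urlencode

# Hypothetical base URL for a screen-pop link; the real URL format is
# product- and version-specific.
BASE_URL = "https://ouaf.example.com/ouaf/portal"

def screen_pop_url(portal: str, **context) -> str:
    """Build a URL that jumps to a named screen pre-populated with context."""
    return f"{BASE_URL}?{urlencode({'portal': portal, **context})}"

# e.g. an IVR/CTI system popping the caller's account screen
print(screen_pop_url("accountMaintenance", accountId="1234567890"))
# https://ouaf.example.com/ouaf/portal?portal=accountMaintenance&accountId=1234567890
```

The external system (the IVR/CTI software in this example) simply opens this URL in the agent's browser; all the context it knows about the caller travels as query parameters.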


Clarification of Application Management Pack for Oracle Utilities

As the Product Manager for a number of products, including Oracle Application Management Pack for Oracle Utilities (the Oracle Enterprise Manager plugin for managing Oracle Utilities Application Framework products), I see a number of requests come across my desk that need clarification. I wanted to post a few responses to some common issues we are seeing in the field and how to address them:

- Read the Installation Documentation. This is obvious, but the installation documentation not only covers the installation of the pack but also which features to enable on the Oracle Utilities Application Framework to maximize the benefits of the pack. For customers on older versions of the Oracle Utilities Application Framework, some of the advanced features of the pack are not available, as the underlying capabilities do not exist in those versions. For example, most of the metrics were added in Oracle Utilities Application Framework 4.x.
- Clustering Support Changes. In earlier versions of Oracle Utilities Application Framework with older versions of Oracle WebLogic, to use clustering you needed to use the localhost hostname. Whilst this worked for those versions (and has been used with later versions), Oracle recommends using the actual host name in the configuration, particularly for the JMX integration used by the Oracle Utilities Application Framework. Using localhost as the host name may prevent Oracle Enterprise Manager and the pack from recognizing the active ports and will result in the target not being recognized.
- Native Install Transition. A few years ago, the Oracle Utilities Application Framework transitioned to using Oracle WebLogic natively rather than the original embedded mode, which was popular in legacy versions of the product. The main reason for moving to native mode was to allow customers full flexibility when using the Oracle WebLogic Domain, increasing the license value and supporting advanced configuration. To support that transition, a decision was made in respect to the pack:
  - Embedded Mode Customers. Customers still on the old embedded mode who have not yet transitioned to a native installation should use the Start/Stop functionality on the Oracle Utilities targets to manage the availability of those targets.
  - Native Mode Customers. Customers who have transitioned to the native installation should use the Oracle WebLogic targets to start and stop the product (as this is one of the benefits of native mode). It is NOT recommended to use the start and stop functionality on the Oracle Utilities targets during the transition period. The current release of Enterprise Manager and the Application Management Pack for Oracle Utilities cannot manage across target types at present.

Essentially, if you are in embedded mode, use start/stop on the Oracle Utilities targets; if you are native, use start/stop on the Oracle WebLogic targets.


Hash Keys and Security

One of the areas that has changed over the last few releases of the Oracle Utilities Application Framework is security. To keep up with security requirements across the industry, the Oracle Utilities Application Framework utilizes the security features of the infrastructure (Operating System, Oracle WebLogic and Oracle Database) as well as providing inbuilt security capabilities. One of the major capabilities is support for hash keys on the user identity.

On the user object, there is a hash key that is managed by the Oracle Utilities Application Framework. The goal of this hash key is to detect any unauthorized changes to the user identity and prevent a user from being used after an unauthorized change has been made. From an Oracle Utilities Application Framework point of view, an unauthorized change is a change made without going through the user object itself. For example, issuing an UPDATE statement against the user tables directly does not go through the user object, and is therefore an unauthorized change.

When a user record is accessed, for example at login time, the Oracle Utilities Application Framework recalculates the hash key and compares it against the stored hash key. If they match, the user is authorized, using the authorization model, to access the product. If the hash key does not match, the user record has been compromised and the user action is rejected. In the case of a login, the user is refused access to the product and the log will contain the message:

User security hash doesn't match for userid

From time to time we get customers reporting issues with these characteristics. In most cases, this is caused by one of two practices:

- User Object Updated Directly. Some implementations update the user object via direct SQL for a particular reason. This technique is discouraged, as it bypasses the business rules configured for the user object within the product. We recommend that customers update the user object via the provided methods to prevent the user record from being recognized as compromised. The user object is protected by the authentication and authorization model used.
- Encryption Key Has Been Changed. At some sites, the encryption key is rotated on a regular basis. When this happens, the hash key becomes stale and needs to be rebuilt to reflect the new key.

These are the only two use cases where the hash key becomes invalid. So what can be done about it? There are two suggested techniques to resolve the issue:

- Manually resolve the issue. The hash key is rebuilt upon update of the user object using the maintenance function (user interface or the user object directly). Use an alternative valid, authenticated and authorized user to edit the invalid user object; saving the user object will regenerate the hash key. This is recommended for spot problems with low numbers of users. This technique is available, with configuration, in web services, plug-in batch or request/request type processing if necessary.
- Synchronize Data Encryption Utility. The Oracle Utilities Application Framework provides a command line utility to reset hash keys (and other encryption related information) en masse. This utility is documented in the Security Guide supplied with your product, in the section named Synchronize Data Encryption. It is recommended to use this utility to reset the user hashes when the problem is widespread. See also CCB and C2M V2.6.0.0.0 Demo Install Error User Security Hash Doesn't Match For Userid When Logging Into Environment (Doc Id: 2270728.1), MDM 2.2 Hash Key Generation (Doc Id: 2359462.1) and 'resource [/loginError.jsp], because it is stale' Because User Security Hash Doesn't Match (Doc Id: 2054791.1) available from My Oracle Support. Note: The utility will reset all the hashes, not just the invalid ones.

In short, it is recommended not to alter the user tables directly; always go through the user object to avoid security hash issues.
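The detection mechanism described above (recalculate the hash over the record's fields and compare it with the stored value) can be sketched generically. The field names, key and hash algorithm below are assumptions for illustration; they are not the actual OUAF implementation.

```python
import hashlib
import hmac

# Assumed key and field list for this sketch only -- the real product
# derives these from its own encryption configuration.
SECRET_KEY = b"example-encryption-key"

def compute_user_hash(user: dict) -> str:
    """Hash the security-sensitive fields of a user record."""
    material = "|".join([user["user_id"], user["login_id"],
                         ",".join(sorted(user["user_groups"]))])
    return hmac.new(SECRET_KEY, material.encode(), hashlib.sha256).hexdigest()

def is_compromised(user: dict, stored_hash: str) -> bool:
    """True when the stored hash no longer matches the recalculated one."""
    return not hmac.compare_digest(compute_user_hash(user), stored_hash)

user = {"user_id": "FRED01", "login_id": "fred", "user_groups": ["ALL_SERVICES"]}
stored = compute_user_hash(user)      # hash written by the maintenance object
user["user_groups"].append("ADMIN")   # direct table update, bypassing the object
print(is_compromised(user, stored))   # the unauthorized change is detected
```

This also shows why rotating the encryption key invalidates every stored hash at once: change `SECRET_KEY` and no previously stored hash will match, which is exactly the situation the Synchronize Data Encryption utility repairs.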


Cube Viewer - Designing Your Cube

In the last Cube Viewer article we outlined a generic process for building a cube; the first step in this process is to design the data we want to analyze in the cube. A common misconception with Cube Viewer is that you can take any query and convert it into a cube for analysis. This is not exactly true, as Cube Viewer is really designed for particular types of analysis and should be used for those types to take full advantage of the capability.

The easiest way of deciding which types of analysis are optimal for the Cube Viewer is to visualize your end result. This is not a new technique, as most designers work backwards from what they want to determine the best approach. The easiest way I tend to visualize the Cube Viewer is to visualize the data in a more analytical view. If you are familiar with the "Pivot" functionality popular in spreadsheet programs, that is the idea: a pivot allows different columns in a list to be combined in such a way as to provide more analytical information. A very simple example is shown below:

In the above example we have three columns: two are considered dimensions (how we "cut" the data) and one is the value we want to analyze. The pivot relationship in the example is between Column A and Column B. In Cube Viewer there are three concepts:

- Dimensions. These are the columns used in the analysis. Typically dimensions represent the different ways you want to view the data in relation to other dimensions.
- Filters. These act on the master record set (the data you want to use in the analysis) to refine the subset to focus upon. For example, you might want to limit your analysis to specific date ranges. By their nature, filters can also become dimensions in a cube.
- Values. These are the numeric values (including any functions) to be analyzed.

Note: Filters and values can be considered dimensions as well, due to the interactivity allowed in the Cube Viewer.

When designing your cube, consider the following guidelines:

- Dimensions (including filters) define the data to analyze. The dimensions and filters are used to define the data to focus upon. The SQL will be designed around all of these concepts.
- Interactivity means analysis is fluid. Whilst dimensions, filters and values are defined in the cube definition, their roles can be altered at runtime through the user's interaction with the cube. The user can, within limits, interactively define how the data is represented.
- Dimensions can be derived. It is possible to add ad-hoc dimensions that may not even be data in the database directly. The ConfigTools capability allows additional columns, not present directly in the SQL, to be added during configuration. For example, it is possible to pull in a value from a related object that is not in the SQL but is in the ConfigTools objects. Note: For large amounts of data to include or process as part of the cube, it is highly recommended to build that logic into the cube query itself to improve performance.
- Values need to be numeric. The value to be analyzed should be numeric so it can be analyzed correctly.

In the next series of articles we will explore building the SQL statement and then translating it into the ConfigTools objects to complete the cube.
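The spreadsheet-style pivot described above is easy to sketch: two dimension columns cutting one numeric value column. The column names and data below are invented purely to illustrate the dimensions/values distinction.

```python
from collections import defaultdict

# Illustrative data: "region" and "status" are the dimensions,
# "amount" is the numeric value to analyze.
rows = [
    {"region": "North", "status": "Open",   "amount": 100.0},
    {"region": "North", "status": "Closed", "amount": 50.0},
    {"region": "South", "status": "Open",   "amount": 75.0},
    {"region": "North", "status": "Open",   "amount": 25.0},
]

def pivot_sum(rows, row_dim, col_dim, value):
    """Sum `value` for each (row_dim, col_dim) combination -- a minimal pivot."""
    grid = defaultdict(float)
    for r in rows:
        grid[(r[row_dim], r[col_dim])] += r[value]
    return dict(grid)

print(pivot_sum(rows, "region", "status", "amount"))
# {('North', 'Open'): 125.0, ('North', 'Closed'): 50.0, ('South', 'Open'): 75.0}
```

A filter in the cube sense would simply restrict `rows` before the pivot (for example, to a date range), which is why filters can double as dimensions: they act on the same columns.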


UTA Components and License Restrictions

The Oracle Utilities Testing Accelerator is fast becoming a part of many cloud and on-premise implementations, as partners and customers recognize the value of pre-built assets in automated testing to reduce costs and risk. This growth necessitates a clarification in respect to licensing of the Oracle Utilities Testing Accelerator to ensure compliance with the license:

- Oracle Utilities product exclusive. The focus of the Oracle Utilities Testing Accelerator is to provide an optimized solution for testing Oracle Utilities products. The Oracle Utilities Testing Accelerator is licensed for exclusive use with the Oracle Utilities products it is certified against. It will not work with products that are not certified, as there is no content or capability built into the solution for products outside that realm.
- Named User Plus License. The Oracle Utilities Testing Accelerator uses the Named User Plus license metric. Refer to the License Definitions and Rules for a definition of the restrictions of that license. The license cannot be shared across physical users. The license gives each licensed user access to any relevant content available for any number of supported non-production copies of certified Oracle Utilities products (including multiple certified products and multiple certified versions).
- Non-Production Use. The Oracle Utilities Testing Accelerator is licensed for use against non-production copies of the certified products. It cannot be used against a Production environment.
- All components are covered by the license. The Oracle Utilities Testing Accelerator is provided as a number of components, including the browser based Oracle Utilities Testing Workbench (including the execution engine), the Oracle Utilities Testing Repository (storing test assets and results), the Oracle Utilities Test Asset libraries provided by Oracle, the Oracle Utilities Testing Accelerator Eclipse Plug In and the Oracle Utilities Testing Accelerator Testing API implemented on the target copy of the Oracle Utilities product you are testing. All of these are subject to the conditions of the license. For example, you cannot use the Oracle Utilities Testing Accelerator Testing API without the Oracle Utilities Testing Accelerator itself; therefore you cannot install the Testing API on a Production environment or use the API in any respect other than with the Oracle Utilities Testing Accelerator (and selected other Oracle testing products).

Oracle Utilities Testing Accelerator continues to be a cost effective way of reducing testing costs associated with Oracle Utilities products on-premise and on the Oracle Cloud.


Moving from Change Handlers to Algorithms

One of the most common questions I receive from partners is why Java based Change Handlers are not supported on the Oracle Utilities SaaS Cloud. Change Handlers are not supported for a number of key reasons:

- Java is not supported on the Oracle Utilities SaaS Cloud. As pointed out previously, Java based extensions are not supported on the Oracle Utilities SaaS Cloud, to reduce costs associated with deployment activities on the service and to restrict access to raw devices and information at the service level. We replaced Java with enhancements to scripting and the introduction of Groovy support.
- Change Handlers are a legacy of the product's history. Change Handlers were introduced in early versions of the products to compensate for the limited algorithm entities in those versions. Algorithm entities are points in the logic, or process, where customers and partners can manipulate data and processing for extensions using algorithms. In early versions, algorithm entities were limited to the common points of extension made available in those versions. Over time, based upon feedback from customers and partners, the Oracle Utilities products introduced a wider range of algorithm entities that can be exploited for extensions. In fact, in the latest release of C2M, there are over 370 algorithm entities available. Over the years, the need for Change Handlers has slowly been replaced by the provision of these new or improved algorithm entities, to the point where they are no longer as relevant as they once were.

On the Oracle Utilities SaaS Cloud, it is recommended to use the relevant algorithm entity with an appropriate algorithm, written in Groovy or ConfigTools based scripting, rather than Change Handlers. Customers using Change Handlers today are strongly encouraged to replace them with the appropriate algorithms.

Note: Customers and partners not intending to use the Oracle Utilities SaaS Cloud can continue to use the Change Handler functionality, but it is highly recommended to also consider moving to the appropriate algorithms to reduce maintenance costs and risks.
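At its most generic, an algorithm entity is a named extension point where configured logic runs at a defined step in a process. The registry, entity name and payload below are invented to illustrate the pattern only; they are not OUAF or Groovy APIs.

```python
# Illustrative plug-in-spot pattern: a registry maps an "algorithm entity"
# (a named extension point) to the algorithms configured for it.
registry = {}

def algorithm(entity):
    """Register a function as an algorithm on the given entity."""
    def wrap(fn):
        registry.setdefault(entity, []).append(fn)
        return fn
    return wrap

def run_entity(entity, payload):
    """Run every algorithm configured on the entity, in order."""
    for fn in registry.get(entity, []):
        payload = fn(payload)
    return payload

# A hypothetical validation algorithm plugged into a hypothetical entity.
@algorithm("ACCOUNT_VALIDATION")
def require_name(acct):
    if not acct.get("name"):
        raise ValueError("Account name is required")
    return acct

print(run_entity("ACCOUNT_VALIDATION", {"name": "Acme Energy"}))
```

The point of the pattern is the same one the article makes: the product defines the extension points, and implementations supply small, replaceable pieces of logic rather than intercepting object changes wholesale as a Change Handler does.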


Utilities Testing Accelerator 6.0.0.1.0 Now Available

Oracle Utilities is pleased to announce the general availability of Oracle Utilities Testing Accelerator Version 6.0.0.1.0 via the Oracle Software Delivery Cloud, with exciting new features that provide improved test asset building and execution capabilities. This is a foundation release for future releases, with key new and improved features. Last year, the first release of the Oracle Utilities Testing Accelerator replaced the Oracle Functional Testing Advanced Pack for Oracle Utilities product to optimize the functional testing of Oracle Utilities products. The new version extends the existing feature set and adds new capabilities for testing Oracle Utilities products. The key changes and new capabilities in this release include the following:

- Accessibility. This release is now accessible, making the product available to a wider user audience.
- Extensions to the Test Accelerator Repository. The Oracle Utilities Testing Accelerator ships with a database repository, the Test Accelerator Repository, to store test assets. This repository has been extended to accommodate new objects introduced in this release, including a newly redesigned Test Results API that provides comprehensive test execution information.
- New! Server Execution Engine. In past releases, the only way to execute tests was via the provided Oracle Utilities Testing Accelerator Eclipse Plugin. Whilst that plugin is still available and will continue to be provided, an embedded, scalable server execution engine has been implemented directly in the Oracle Utilities Testing Accelerator Workbench. This allows testers to build and execute test assets without leaving the browser. This engine will be the premier method of executing tests in this and future releases of the Oracle Utilities Testing Accelerator.
- New! Test Data Management. One of the identified bottlenecks in automation is the provision and re-usability of test data for testing activities. The Oracle Utilities Testing Accelerator extends the original test data capabilities by allowing test users to extract data from non-production sources for reuse as test data. The principle is that it is quicker to update data than to create it. The tester can specify a secure connection to a non-production source to pull the data from, and can manipulate it at the data level for testing complex scenarios. This test data can be stored at the component level to create reusable test data banks, or at the flow level to save a particular set of data for reuse. With this capability, testers can quickly get sets of data to be reused within and across flows. The capability includes the ability to save and name test data within the extended Test Accelerator Repository.
- New! Flow Groups. The Oracle Utilities Testing Accelerator now supports the concept of Flow Groups: groups of flows that can be executed as a set, in parallel or serially, to reduce test execution time. This capability is used by the Server Execution Engine to execute groups of flows efficiently and is also the foundation of future functionality.
- New! Groovy Support for Validation. In this release, it is possible to use Groovy to express validation rules in addition to the component validation language already supported. This allows partners and testers to add complex rule logic at the component and flow level. As with the Groovy support within the Oracle Utilities Application Framework, the language is whitelisted and does not support external Groovy frameworks.
- Annotation Support. In the component API, it is possible to annotate each step in the process to make it more visible. This information, if populated, is now displayed on the flow tree for greater visibility. For backward compatibility, this information may be blank on the tree unless it is already populated.
- New! Test Dashboard Zones. An additional set of test dashboard zones has been added to cover the majority of the queries needed for test execution and results.
- New! Security Enhancements. For the Oracle Utilities SaaS Cloud releases of the product, the Oracle Utilities Testing Accelerator has been integrated with Oracle Identity Cloud Service to manage identity as part of the related Oracle Utilities SaaS Cloud Services.

Note: This upgrade is backward compatible with test assets built with previous Oracle Utilities Testing Accelerator releases, so no rework is anticipated on existing assets as part of the upgrade process. For more details of this release and the capabilities of the Oracle Utilities Testing Accelerator product, refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1) available from My Oracle Support.
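The Flow Group idea (running a set of flows serially or in parallel to cut total execution time) can be sketched generically. The flow names and runner below are invented for illustration; this is not the UTA API.

```python
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of a "flow group": run a set of test flows either
# serially or in parallel. Flow functions and names are illustrative.
def run_flow_group(flows, parallel=False):
    """Return {flow_name: result} for every flow in the group."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fn) for name, fn in flows.items()}
            return {name: f.result() for name, f in futures.items()}
    return {name: fn() for name, fn in flows.items()}

flows = {
    "create_account": lambda: "PASS",
    "start_service":  lambda: "PASS",
}
print(run_flow_group(flows, parallel=True))
# {'create_account': 'PASS', 'start_service': 'PASS'}
```

Serial execution suits flows with ordering dependencies; parallel execution suits independent flows, which is where the execution-time saving comes from.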


CubeViewer - Process to Build the Cube Viewer

As pointed out in the last post, the Cube Viewer is a new way of displaying data for advanced analysis. The Cube Viewer functionality extends the existing ConfigTools (a.k.a. Task Optimization) objects to allow the analysis to be defined as a Cube Type and Cube View. Those definitions are used by the widget to display correctly and define what level of interactivity the user can enjoy.

Note: Cube Viewer is available in Oracle Utilities Application Framework V4.3.0.6.0 and above.

The process of building a cube introduces new concepts and new objects to ConfigTools to allow for an efficient method of defining the analysis and interactivity. In summary form, the process is described by the figure below:

1. Design Your Cube. Decide the data and related information to be used in the Cube Viewer for analysis. This is not just a typical list of values but a design of dimensions, filters and values. This is an important step, as it helps determine whether the Cube Viewer is appropriate for the data to be analyzed.
2. Design the Cube SQL. Translate the design into cube based SQL. This SQL statement is formatted specifically for use in a cube.
3. Set Up the Query Zone. The SQL statement designed in the last step needs to be defined in a ConfigTools Query Zone for use in the Cube Type later in the process. This also allows for the configuration of additional information, not contained in the SQL, to be added to the cube.
4. Set Up the Business Service. The Cube Viewer requires a Business Service based upon the standard FWLZDEXP application service. This is also used by the Cube Type later in the process.
5. Set Up the Cube Type. Define a Cube Type object specifying the Query Zone, Business Service and other settings to be used by the Cube Viewer at runtime. This brings all the configuration together into a new ConfigTools object.
6. Set Up the Cube View. Define an instance of the Cube Type with the relevant predefined settings for use in the user interface as a Cube View object. Appropriate users can use this as the initial view into the cube and as a basis for any Saved Views they want to implement.

Over the next few weeks, a number of articles will outline each of these steps to help you understand the feature and be on your way to building your own cubes.


Cube Viewer - A new way of analyzing operational data in OUAF

In past releases of Oracle Utilities Application Framework, Query Zones have been a flexible way of displaying lists of information, with flexible filters and dynamic interaction including creating and saving views of the lists for reuse. In Oracle Utilities Application Framework V4.3.0.6.0 and above, we introduced the Cube Viewer, which extends the query model to support pivot-style analysis and visualization of operational data. The capability extends ConfigTools (a.k.a. Task Optimization) to allow implementations to define cubes and provide interactivity for end users on operational data. The Cube Viewer brings together a number of ConfigTools objects to build an interactive visualization with the following capabilities:

- Toolbar. An interactive toolbar where the user decides the view of the cube to be shown. This includes saving a view, including its criteria, for reuse.
- Settings. The view and criteria/filters to use on the data set to help optimize the analysis. For example, you might want to see the raw data, a pivot grid, a line chart or a bar chart. You can modify the dimensions shown and even add rules for how certain values are highlighted using formats.
- Filters. You can decide the filters and values shown in the grid within the selection criteria.
- View. The above configuration results in a number of views of the data.

An example of the Cube Viewer is shown below:

The Cube Viewer has many features that allow configuration to optimize and highlight critical data whilst allowing users to interact with the information presented. In summary, the key features are:

- Flexible View Configuration. It is possible to use the configuration at runtime to determine the subset of the data to analyze and the display format as a saved view. As with query portals, views can be saved and reused. These views can be Private, Shared (within an Access Group) or Public.
- Formatting Support. To emphasize particular data values, it is possible at runtime to alter their display using simple rules. For example:
- Visual and Analytical Views. The data to be shown can be expressed in a number of view formats, including a variety of graph styles, a grid format and/or a raw format. This allows users to interpret the data according to their preferences.
- Configurable using ConfigTools. The Cube Viewer uses and extends existing ConfigTools objects to allow greater flexibility and configuration control. This allows existing resources with ConfigTools skills to build and maintain cubes.
- Comparison Feature. Allows different selection criteria sets to be used for comparison purposes. This allows for difference comparison between two sets of data.
- Save View as "Snapshot". It is possible to isolate data using the interactive elements of the Cube Viewer to find the data you want to analyze. Once found, you can save the configuration, filters, etc. for recall later, very similar to the concept of a "Snapshot". For example, if you find some data that needs attention, you can save the view and then reuse it to show others later if necessary.
- Function Support. In the details, additional functions such as Average Value, Count, Maximum Value, Median Value, Minimum Value, Standard Deviation and Sum are supported at the row and column levels. For example:

Cube Views may be available with each product (refer to the documentation shipped with the product) and Cube Views can be configured by implementers and reused across users as necessary. Over the next few weeks a step-by-step guide will be published here and in other locations to show the basic process and some best practices for building a Cube Viewer.
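To make the Function Support feature concrete, here is a generic sketch of what those row- and column-level aggregates mean on a small pivot grid. This is not Cube Viewer code and the data is invented; it only illustrates the aggregate functions named above using the Python standard library.

```python
# Illustrative only: a generic sketch of the row/column aggregate
# functions the Cube Viewer supports (Average, Count, Maximum, Median,
# Minimum, Standard Deviation, Sum). Not product code; the grid data
# and names are hypothetical.
from statistics import mean, median, stdev

# A tiny pivot grid: rows are a dimension, columns are periods.
grid = {
    "NORTH": [120.0, 135.0, 128.0],
    "SOUTH": [90.0, 88.0, 101.0],
}

def row_summary(values):
    """Aggregate a single row, mirroring the supported functions."""
    return {
        "Average": mean(values),
        "Count": len(values),
        "Maximum": max(values),
        "Median": median(values),
        "Minimum": min(values),
        "Std Dev": stdev(values),
        "Sum": sum(values),
    }

def column_sums(grid):
    """Column-level aggregate: sum each column across all rows."""
    return [sum(col) for col in zip(*grid.values())]

north = row_summary(grid["NORTH"])
print(north["Sum"], north["Count"])   # 383.0 3
print(column_sums(grid))              # [210.0, 223.0, 229.0]
```

In the actual Cube Viewer these aggregates are selected interactively in the view settings rather than coded.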


Utilities Testing Accelerator - A new way to test quickly lowering your cost and risk

One of the most common pieces of feedback I got from attending the Oracle Utilities User Group and the Oracle Utilities Customer Edge Conference in Austin recently was that the approach that Oracle Utilities Testing Accelerator is taking is different and logical. Traditionally, test automation is really coding in the language provided by the tool. The process of using it is recording a screen, with the data for the test, and having that become a script in whatever language is supported by the tool. To use the script for other tests, you either have to record it again or get a programmer, fluent in the scripting language, to modify the script. The issue arises when the user interface changes for any reason. This will most likely invalidate the script, so the whole process is repeated. You end up spending more time building scripts than actually testing. Oracle Utilities Testing Accelerator takes a different and more cost effective approach:

- Component Based Testing. Oracle Utilities Testing Accelerator uses the tried and tested Oracle Testing approach by exposing test assets as reusable components in a pre-built library. These components do not drive the online interface as the API; they use the same underlying API used by the online and other channels into the product. This isolates the test from changes to any of the channels as it is purely focused on functionality rather than user experience testing.
- More Than Online Testing. Oracle Utilities Testing Accelerator can test all channels (online, web service, mobile and batch). This allows maximum flexibility in testing across diverse business processes.
- Orchestrate Not Build. Oracle Utilities Testing Accelerator allows your business processes to be orchestrated as a sequence of components, which can be within a single supported Oracle Utilities product, across supported Oracle Utilities products, within the same channel or across multiple channels. The orchestration reduces the reliance on traditional recording and maximizes flexibility.
- No Touch Scripting. Oracle Utilities Testing Accelerator generates Selenium based code that requires no adjustment to operate and can be executed from an Eclipse plugin or directly from the tool (the latter is available in 6.0.0.1.0).
- Data is Independent of Process. During the orchestration, data is not required to build out the business process. Data can be added at any time, including after the business process is completed and/or at runtime. This allows business processes to be reused with multiple test data sets. Test data can be entered using the Workbench directly, via an external file and, in the latest release (6.0.0.1.0), via test data extractors.
- Content from Product QA. The Oracle Utilities Testing Accelerator comes with pre-built component libraries, provided by Product QA, for over 30 version/product combinations of Oracle Utilities products. The license, which is user based, gives you access to any of the libraries appropriate for your site, regardless of the number of non-production environments or the number of Oracle Utilities products it is used against.

These differences reduce your costs and risk when adopting automated testing. For more information about Oracle Utilities Testing Accelerator refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1) available from My Oracle Support.
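The "orchestrate, don't build" and "data is independent of process" ideas above can be sketched generically. This is emphatically NOT Oracle Utilities Testing Accelerator code; every class and name below is hypothetical, intended only to show a flow assembled once from reusable components and then run against multiple, separately supplied data sets.

```python
# Hypothetical sketch of component-based test orchestration.
# A real component would call the product's underlying API; here a
# component just records what it was asked to do.

class Component:
    """A reusable test component exposing a named operation."""
    def __init__(self, name):
        self.name = name

    def execute(self, data):
        # Stand-in for invoking the real API with the supplied data.
        return f"{self.name}({data})"

class Flow:
    """A business process: an ordered sequence of components."""
    def __init__(self, components):
        self.components = components

    def run(self, data_set):
        # Data is bound at runtime, independent of the flow itself.
        return [c.execute(data_set.get(c.name, {})) for c in self.components]

# Orchestrate the business process once...
flow = Flow([Component("CreatePerson"), Component("CreateAccount")])

# ...then reuse it with multiple, independently supplied data sets.
run1 = flow.run({"CreatePerson": {"name": "Ann"}})
run2 = flow.run({"CreatePerson": {"name": "Bob"},
                 "CreateAccount": {"type": "RES"}})
```

Because the flow and the data are separate objects, a user interface change affects neither; only the component's internal binding to the API would need attention, which is the isolation the tool's component library provides.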


Schedule Management for Oracle Scheduler Integration

One of the most common questions I get from my product users is how to manage your batch schedules when using the Oracle Scheduler Integration. As Oracle Scheduler is part of the Oracle Database, Oracle provides a number of ways of managing your schedule:

- Command Line. If you are an administrator who manages your database using commands and PL/SQL calls, then you can use the DBMS_SCHEDULER interface directly from any SQL tool. You have full access to the scheduler objects.
- Oracle SQL Developer. The latest versions of Oracle SQL Developer include capabilities to manage your schedule directly from that tool. The advantage is that the tool supports techniques such as drag and drop to simplify the management of scheduler objects. For example, you can create a chain and then drop the programs into the chain and "wire" them together. This interface generates the direct DBMS_SCHEDULER calls to implement your changes. Refer to the Oracle SQL Developer documentation for details of maintaining individual scheduler objects. For example:
- Oracle Enterprise Manager. From Oracle Database 12c and above, Oracle Enterprise Manager automatically includes DBA functions and is the recommended tool for all database work. Most DBAs will use this capability to manage the database, including Oracle Scheduler management. For example:

Implementations therefore have a range of options for managing their schedule. Customers on the cloud use the Oracle Utilities Cloud Service Foundation to manage their schedule in a similar interface to Enterprise Manager via our Scheduler API.
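For the command line route, a minimal sketch of the kind of DBMS_SCHEDULER call involved is shown below. The job name and the stored procedure it invokes are hypothetical, for illustration only; refer to the Batch Scheduler Integration whitepaper for the interface objects actually shipped with the product.

```sql
-- Hypothetical example: create and enable a daily job with
-- DBMS_SCHEDULER from any SQL tool. All names are illustrative.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BILLING',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MY_SCHEMA.SUBMIT_BATCH',  -- hypothetical wrapper
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1',
    enabled         => TRUE);
END;
/
```

The SQL Developer and Enterprise Manager options generate calls of this shape for you, which is why they are often the more comfortable choice for day-to-day schedule management.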


Batch Architecture - Designing Your Submitters - Part 3

If you are using the command line submission interface, the last step in the batch architecture is to configure the Submitter configuration settings.

Note: Customers using the Oracle Scheduler Integration or Online Submission (the latter is for non-production only) can skip this article as the configuration outlined here is not used by those submission methods.

As with the cluster and threadpool configurations, the use of the Batch Edit (bedit.sh) utility is recommended to save costs and reduce risk, with the following set of commands:

$ bedit.sh -s
$ bedit.sh -b <batch_control>

- Default option (-s). This sets up a default configuration file used for any batch control where no specific batch properties file exists. It creates a submitbatch.properties file located in the $SPLEBASE/splapp/standalone/config subdirectory.
- Batch Control Specific configuration (-b). This creates a batch control specific configuration file named job.<batchcontrol>.properties, where <batchcontrol> is the Batch Control identifier (Note: Make sure it is the same case as the identifier in the meta-data). This file is located in the $SPLEBASE/splapp/standalone/config/cm subdirectory. In this option, the soft parameters on the batch control can be configured as well.

Use the following guidelines:

- Use the -s option where possible. Set up a global configuration to cover as many processes as possible and then create specific process parameter files for batch controls that require specific soft parameters.
- Minimize the use of command line overrides. The advantage of setting up submitter configuration files is to reduce your maintenance costs. Whilst it is possible to use command line overrides to replace all the settings in the configuration, avoid overuse of overrides to stabilize your configuration and minimize your operational costs.
- Set common batch parameters. Using the -s option, specify the parameters for the common settings.
- Change the User used. The default user AUSER is not a valid user. This is intentional, to force the appropriate configuration for your site. Avoid using SYSUSER as that is only to be used to load additional users into the product.
- Setup soft parameters in process specific configurations. For batch controls that have parameters, these need to be set in the configuration file or as overrides on the command line. To minimize maintenance costs and potential command line issues, it is recommended to set the values in a process specific configuration file using the add soft command in bedit.sh, with the following recommendations: the parameter name in the parm setting must match the name and case of the Parameter Name on the Batch Control; the value set in the value setting must be the same as, or valid for, the Parameter Value on the Batch Control; optional parameters do not need to be specified unless used.

For example:

$ bedit.sh -s
Editing file /u01/demo/splapp/standalone/config/submitbatch.properties using template /u01/demo/etc/submitbatch.be
Batch Configuration Editor [submitbatch.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  maxtimeout (15)
  user (AUSER)
  lang (ENG)
  storage (false)
  role ({batchCode})
>

$ bedit.sh -b MYJOB
File /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties does not exist. Create? (y/n) y
Editing file /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties using template /u01/demo/etc/job.be
Batch Configuration Editor [job.MYJOB.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (AUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>

For more advice on individual parameters use the help <parameter> command. To use the configuration, use the submitjob.sh -b <batch_code> command.
Refer to the Server Administration Guide supplied with your product for more information.


Batch Scheduler Integration (Doc Id: 2196486.1) Updated

In line with the Batch Best Practices whitepaper being updated, the Batch Scheduler Integration whitepaper has also been updated to reflect the new advice. The Batch Scheduler Integration whitepaper explains the DBMS_SCHEDULER (also known as Oracle Scheduler) interface implemented within the Oracle Utilities Application Framework. The Oracle Scheduler is included in the Database licensing for the Oracle Utilities Application Framework and can be deployed locally or enterprise wide (the latter situation may incur additional licensing depending on the deployment model). The Oracle Utilities Application Framework includes a prebuilt interface that allows the Oracle Scheduler to use its objects to execute Oracle Utilities Application Framework based processes. The advantages of the Oracle Scheduler:

- Licensing is included in Oracle Database licenses already. It is already available to those customers.
- Someone in your organization is already using it. The Oracle Scheduler is used by a variety of products, including the Database itself, Oracle Enterprise Manager etc., to schedule and perform work. It is a key element in the autonomous database.
- It is used by Oracle Utilities SaaS Cloud implementations. We use it for our Oracle Utilities SaaS Cloud implementations natively and via an API for external usage. We also built a scheduling interface within the Oracle Utilities Cloud Services Foundation which is included exclusively for all Oracle Utilities SaaS Cloud implementations.
- Choice of interfaces to manage your schedule. As the Oracle Scheduler is part of the database, Oracle provides a management and monitoring interface via the command line, Oracle SQL Developer, Oracle JDeveloper and/or Oracle Enterprise Manager.

The new version of the whitepaper is available from My Oracle Support at Batch Scheduler Integration (Doc Id: 2196486.1).


Batch Best Practices (Doc Id: 836362.1) Updated

The Batch Best Practices whitepaper has been completely rewritten from scratch to optimize the information and provide a simpler mechanism for helping implementations configure and manage their batch architecture. The new whitepaper covers building and maintaining an effective batch architecture and then guidelines for optimizations around that architecture. It also separates the different techniques for the various submission methods. The whitepaper now covers the following topics:

- Batch Concepts. How the batch architecture and its objects work together to form the batch functionality in the Oracle Utilities Application Framework.
- Batch Architecture. A new, simpler view of the various layers in the batch architecture.
- Configuration. A look at the configuration process and guidelines using Batch Edit to simplify the process.
- Batch Best Practices. These are generic but important best practices collected from our cloud and on-premise implementations that may prove useful to implementations.
- Plug In Batch. This is a primer for the Plug In Batch capability (it will be explored in detail in other documents).

The whitepaper is available from My Oracle Support at Batch Best Practices (Doc Id: 836362.1).


Batch Architecture - Designing Your Threadpools - Part 2

In the last article we discussed the setup of a cluster. Once the cluster is set up, the next step is to design and configure your threadpools. Before I illustrate how to quickly configure your threadpools, here are a few things to understand about them:

- Threadpools are Long Running JVMs. The idea behind threadpools is that they are long running JVMs that accept work (from submitters) and process that work. Each individual instance of a threadpool is an individual running JVM on a host (or hosts).
- Threadpools are Named. Each threadpool is named with a tag (known as the Threadpool Name). This tag is used when running a batch process to target specific JVMs to perform the processing. The names are up to the individual implementation.
- Threadpools Can Have Multiple Instances. Threadpools can be a single instance or have multiple instances within a host or across hosts.
- Threadpools Have Thread Limits. Each instance of a threadpool has a physical thread limit. This is not the Java thread limit but the maximum number of threads that can be safely executed in the instance.
- Threadpools With the Same Name Have Cumulative Limits When Clustered. When multiple instances of the same threadpool name are available, the number of threads available is the sum total of all instances. This is regardless of whether the instances are on the same host or across hosts, as long as they are in the same cluster.

A summary of the above is shown in the figure below. For the above scenarios:

- Scenario A: Single Thread Pool on a Single Host (example pool POOL1). This is the simplest scenario.
- Scenario B: Multiple Thread Pool Instances on a Single Host (example pool POOL2). The number of threads in this scenario is cumulative. In this example there are ten (10) threads available.
- Scenario C: Multiple Thread Pool Instances on Multiple Hosts (example pool POOL3). This is a clustered setup across hosts. Again the threads available are cumulative; in this case there are twelve (12) threads available.
Note: The second instance of POOL3 can have a different thread limit to reflect the capacity of its host. In most cases the number of threads will be the same, but it is possible to change the configuration on a host to reflect the capacity of that individual machine.

Note: You can combine any of these scenarios for a complex batch architecture.

Building Your Threadpool Configuration

As with the cluster configuration, the best way of building your threadpool configuration is using the Batch Edit (bedit.sh) utility. There are two commands available to you:

- The bedit.sh -w command is recommended as your initial command to create a set of default thread configurations in the threadpoolworker.properties file. This file is used by the threadpoolworker.sh command by default.
- The bedit.sh -l <arg> command is used to create custom threadpools, with <arg> denoting the template to base the configuration on. The product ships with a cache and a job template, and it is possible to create custom templates directly from the command. The utility generates a threadpool.<arg>.properties file. To use the template, use the threadpoolworker.sh -l <arg> command line.

Here are some guidelines when building the threadpool configuration:

- Some Settings Are At The Template Level. Some of the settings are common to all threadpools using the template. For example, JVM settings, caching and internal threads are specified at the template level.
- Create An Appropriate List Of Threadpool Names And Thread Limits In The File. The list of threadpools can be configured with names and limits.
- Use the Cache Template for Cache Threadpools. In a complex architecture with lots of threadpool instances, it is recommended to invest in creating a cache threadpool to optimize network traffic.
- Use the Job Template for Focused Threadpools. If optimization of JVM settings, caching etc. is required, use the job template or create a custom template to configure these settings.
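The cumulative thread limits described above reduce to a simple sum. The sketch below uses the pool names and thread counts from the scenarios; it is purely illustrative arithmetic, not product code.

```python
# Illustrative: threads available for a pool name are the sum of the
# thread limits of all clustered instances with that name, regardless
# of which host each instance runs on.
instances = [
    # (pool name, host, thread limit)
    ("POOL1", "host1", 5),   # Scenario A: single instance
    ("POOL2", "host1", 5),   # Scenario B: two instances, one host
    ("POOL2", "host1", 5),
    ("POOL3", "host1", 6),   # Scenario C: instances across two hosts
    ("POOL3", "host2", 6),
]

def available_threads(pool):
    """Sum the thread limits of every instance carrying this name."""
    return sum(limit for name, _, limit in instances if name == pool)

print(available_threads("POOL2"))  # 10
print(available_threads("POOL3"))  # 12
```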
The bedit.sh utility allows for configuration of settings from the templates. For example:

$ bedit.sh -w
Batch Configuration Editor [threadpoolworker.properties]
--------------------------------------------------------------------------
Current Settings
  minheap (1024m)
  maxheap (1024m)
  daemon (false)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_TPW)
  jmxstartport (7540)
  l2 (READ_WRITE)
  devmode (false)
  ollogdir (/u02/sploutput/demo)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (POOL1)
      threads (10)
  pool.2
      poolname (POOL2)
      threads (5)
  pool.3
      poolname (POOL3)
      threads (7)
>

Use the help command on each setting for advice. When designing your threadpools there are several guidelines that apply:

- Simple Is Best. One of the key capabilities of the architecture is that you have access to a full range of alternative configurations to suit your needs. To quote a famous movie, "With Great Power Comes Great Responsibility", so it is recommended not to go overboard with the complexity of the configuration. For example, in non-production environments use a small number of threadpools to keep it simple. When designing your threadpool architecture, balance maintenance against technical elegance. More complex solutions can increase maintenance costs, so keep the solution as simple as you can.
- Consider Specialist Threadpools for L2 Caching. Some of the ancillary processes in the product require the L2 cache to be disabled. This is because they are updating the information and the cache will actually adversely affect the performance of these processes. Processes such as Configuration Migration Assistant, LDAP Import and some of the conversion processes require the caching to be disabled. In this case create a template for a dedicated threadpool that turns off L2 caching.
- Threadpools Only Need To Be Running As Needed. One misconception about threadpools is that they must be up ALL the time to operate. This is not true. They only need to be active when a batch process needs access to the particular threadpool. When they are not being used, threadpool JVMs are still consuming memory and CPU (even if just a little). There is a fundamental principle in Information Technology: "Thou Shalt Not Waste Resources". An idle threadpool is still consuming resources, including memory, that could be better used by other active processes.
- Consider Specialist Threadpools for Critical Processing. A threadpool will accept any work from any process it is targeted for. Whilst this is flexible, it can cause issues with critical resources. When a critical process is executed in your architecture, it is best to make sure there are resources available to process it efficiently. If other processes are sharing the same threadpools then those critical processes are competing for resources with less critical processes. One technique is to set up dedicated threadpools, with optimized settings, for the exclusive use of critical processes.
- There Are Limits. Even though it is possible to run many threadpools in your architecture, there are limits to consider. The most obvious is memory. Each threadpool instance is a running JVM reserving memory for use by threads. By default, this is between 768MB and 1GB (or more) per JVM. Your physical memory may be the limit that decides how many active JVMs are possible on a particular host (do not forget the operating system also needs some memory as well). Another limit will be contention on resources such as records and disk. One technique that has proven useful is to monitor throughput (records per second) and to increase threading or threadpools until this starts to reduce.

The above techniques are but a few that are useful in designing your threadpools.
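The memory limit discussed above can be turned into a rough capacity check. This is a back-of-the-envelope sketch only: the 1GB-per-JVM figure comes from the range quoted in the text, and the operating system reserve is an assumption, not a recommendation.

```python
# Rough, illustrative sizing: how many threadpool JVMs fit on a host,
# given the ~768MB-1GB per-JVM figure quoted above and an assumed
# reserve for the operating system. All figures are assumptions.
def max_jvms(host_memory_gb, per_jvm_gb=1.0, os_reserve_gb=4.0):
    """Floor of usable memory divided by the per-JVM reservation."""
    usable = host_memory_gb - os_reserve_gb
    return max(0, int(usable // per_jvm_gb))

print(max_jvms(32))                    # 28
print(max_jvms(16, per_jvm_gb=0.75))   # 16
```

Real sizing should also account for the contention limits mentioned above (records, disk), which this arithmetic does not model.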
The next step in the process is to decide the submitters and the number of threads to consider across these threadpools. This will be the subject of the next part of the series.


Batch Architecture - Designing Your Cluster - Part 1

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes, the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command line utility, named bedit.sh, that provides a wizard style capability to build and maintain your configuration. By default the capability is disabled; it can be enabled by setting Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a
*************************************************
* Environment Configuration demo                *
*************************************************
  50. Advanced Environment Miscellaneous Configuration
...
       Enable Batch Edit Functionality:                    true
...

Once enabled, the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. The capability is easy to use, with the following recommendations:

- Flexible Options. When invoking the command you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options will vary; use the -h option for a full list.
- In Built Help. If you do not know what a parameter, or even an object type, is about, you can use the help <topic> command. For example, when configuring threadpools, help threadpoolworker gives you advice about the approaches you can take. If you want a list of topics, type help with no topic.
- Simple Commands. The utility has a simple set of commands to interact with the settings. For example, if you want to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
- Save the Configuration. There is a save command to make all changes in the session reflect in the relevant file; conversely, if you make a mistake you can exit without saving the session.
- Informative. It will tell you which file you are editing at the start of the session, so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w
Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw
Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------
Current Settings
  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster. This is the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement different architectures, from simple to complex. Using Batch Edit, there are three cluster types supported (you must choose one type per environment).
- Single Server (template code ss). The cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing, as it is the simplest to implement.
- Uni-Cast (template code wka). The cluster uses the unicast protocol, with the hosts that are part of the cluster explicitly named. This is recommended for sites wanting to lock down a cluster to specific hosts or that do not want to use multi-cast protocols. Administrators have to name the list of hosts, known as Well Known Addresses, as part of this configuration.
- Multi-Cast (template code mc). The cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites who want a dynamic configuration where threadpools and submitters are accepted on demand. This requires the least configuration for product clusters, as the threadpools can join a cluster dynamically from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration, with the cluster restricted to a single host. Network communication for the cluster is restricted to that host within the configuration. To use this cluster type, simply use the following command and follow the configuration generated for you from the template:

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly as host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where the site does not want to use the multi-cast protocol or wants to micro-manage their configuration. To use this cluster type, simply use the following command and follow the configuration generated for you from the template:
bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values, use the command:

set socket.<socketnumber> <parameter> <value>

where:
  <socketnumber> - the host number to edit
  <parameter> - either wkaaddress (host or IP address of the server) or wkaport (port number on that host to use)
  <value> - the value for the parameter

For example:

set socket.1 wkaaddress host1

To use this cluster style, ensure the following:

- Use the same port number per host. Try and use the same broadcast port on each host in the cluster. If they are different, then the port number in the main file for the machines that are in the cluster has to be changed to define that port.
- Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).

Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast port and IP address are broadcast across your network per cluster. It requires very little configuration and the threadpools can connect to that cluster dynamically. It uses the multi-cast protocol, which network administrators either love or hate. The configuration is similar to the Single Server type, but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administration Guide for additional configuration advice.

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

- Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster type to save configuration effort.
- Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment to prevent unintentional cross environment clustering.
- Set a Cluster Type and Stick with It. It is possible to migrate from one cluster type to another (without changing other objects), but to save time it is better to lock in one type and stick with it for the environment.
- Avoid Using Prod Mode. There is a mode setting in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross cluster issues. The prod mode is recommended for Production systems only. Note: There are further safeguards built into the Oracle Utilities Application Framework to prevent cross cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster. Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).


Use Of Oracle Coherence in Oracle Utilities Application Framework

In the batch architecture for the Oracle Utilities Application Framework, a Restricted Use License of Oracle Coherence is included in the product. The Distributed and Named Cache functionality of Oracle Coherence is used by the batch runtime to implement clustering of threadpools and submitters, supporting both the simple and complex architectures necessary for batch. Partners ask about these libraries and their potential use in their implementations. There are a few things to understand:

Restricted Use License conditions. The license is exclusively for managing executing members (i.e. submitters and threadpools) across hardware licensed for use with Oracle Utilities Application Framework based products. It cannot be used in any code outside of that restriction. Partners cannot use the libraries directly in their extensions; the functionality is embedded in the Oracle Utilities Application Framework.
Limited Libraries. The Oracle Coherence libraries shipped are restricted to the subset needed by the license; this is not a full implementation of Oracle Coherence. Because it is a subset, Oracle does not recommend using the Oracle Coherence Plug-In for Oracle Enterprise Manager against the Oracle Utilities Application Framework implementation of the Oracle Coherence cluster. Running that plug-in against the batch cluster will present it with missing and incomplete information, causing inconsistent results.
Patching. The Oracle Coherence libraries are shipped with the Oracle Utilities Application Framework and are therefore managed by patches for the Oracle Utilities Application Framework, not Coherence directly. Unless otherwise directed by Oracle Support, do not manually manipulate the Oracle Coherence libraries. 
The Oracle Coherence implementation within the Oracle Utilities Application Framework has been optimized for the batch architecture using a combination of prebuilt Oracle Coherence and Oracle Utilities Application Framework based configuration files.

Note: If you need to find out the version of the Oracle Coherence libraries used in the product, they are listed in the file $SPLEBASE/etc/ouaf_jar_versions.txt. The following command can be used to see the version:

cat $SPLEBASE/etc/ouaf_jar_versions.txt | grep coh

For example, in the latest version of the Oracle Utilities Application Framework (4.4.0.0.0):

cat /u01/umbk/etc/ouaf_jar_versions.txt | grep coh
coherence-ouaf                   12.2.1.3.0
coherence-work                   12.2.1.3.0
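To see that pipeline end to end without a live environment, the sketch below simulates the version file and runs the same grep. Only the grep is the real technique; the file contents here are mocked for illustration:

```shell
# Sketch: querying the bundled library versions file for Coherence entries.
# The file is simulated here; on a real installation, SPLEBASE already
# points at your environment root and the file already exists.
SPLEBASE="${SPLEBASE:-/tmp/ouaf_demo}"
mkdir -p "$SPLEBASE/etc"
cat > "$SPLEBASE/etc/ouaf_jar_versions.txt" <<'EOF'
coherence-ouaf                   12.2.1.3.0
coherence-work                   12.2.1.3.0
log4j                            2.11.1
EOF

# The actual check: list only the Coherence libraries and their versions
grep coh "$SPLEBASE/etc/ouaf_jar_versions.txt"
```

On a real system you would skip the mock setup and run only the final grep against $SPLEBASE/etc/ouaf_jar_versions.txt.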


Special Tables in OUAF based products

Long time users of the Oracle Utilities Application Framework might recognize two common table types, identified by their name suffixes, that are attached to most Maintenance Objects within the product:

Language Tables (Suffix _L). The Oracle Utilities Application Framework is multi-lingual and can support multiple languages at a single site (for example, customers who have multi-lingual call centers or operate across jurisdictions where multiple languages are required). The language table holds the tags, for each language, for any fields that need to display text on a screen. The Oracle Utilities Application Framework matches the right language records based upon the user's language profile (and active language code).
Key Tables (Suffix _K). These tables hold the key values (and the now less used environment code) used in the main object tables. The original use for these tables was key tracking in the original Archiving solution (which has since been replaced by ILM). Now that the original Archiving is not available, the role of these tables has changed and they are used in a number of areas:
Conversion. The conversion toolkit in Oracle Utilities Customer Care and Billing, and now in the Cloud Service Foundation, uses the key table for efficient key generation and blacklisting of identifiers.
Key Generation. The key generation utilities now use the key tables to quickly ascertain the uniqueness of a key. This is far more efficient than using the main table, especially with caching support in the database.
Information Lifecycle Management. The ILM capability uses the key tables to drive some of its processes, including recognizing when something has been archived and when it has been restored.

These tables are important to the operation of the Oracle Utilities Application Framework across all parts of the product. When you see them now, you understand why they are there.


Batch Scheduler Integration Questions

One of the most common questions I get from partners is around batch scheduling and execution. The Oracle Utilities Application Framework has a flexible set of methods for managing, executing and monitoring batch processes. The alternatives available are as follows:

Third Party Scheduler Integration. If the site has an investment in a third party batch scheduler, to define schedules and execute product batch processes alongside non-product processes at an enterprise level, then the Oracle Utilities Application Framework includes a set of command line utilities, via scripts, that can be invoked by a wide range of third party schedulers to execute the processes. This allows scheduling to be managed by the third party scheduler, with the scripts used to execute and manage product batch processes. The scripts return standard return codes that the scheduler can use to determine next actions if necessary. For details of the command line utilities, refer to the Server Administration Guide supplied with your version of the product.
Oracle Scheduler Integration. The Oracle Utilities Application Framework provides a dedicated API allowing implementations to use the Oracle DBMS Scheduler, included in all editions of the database, as a local or enterprise-wide scheduler. The advantage of this is that the scheduler is already included in your existing database license and has inbuilt management capabilities provided via the base functionality of Oracle Enterprise Manager (12+) (via Scheduler Central) and also via Oracle SQL Developer. Oracle uses this scheduler in the Oracle Utilities SaaS Cloud solutions. Customers of those cloud services can use the interface provided by the included Oracle Utilities Cloud Service Foundation to manage their schedules, or use the provided REST based scheduler API to execute schedules and/or processes from a third party scheduler. 
For more details of the scheduler interface, refer to the Batch Scheduler Integration (Doc Id: 2138193.1) whitepaper available from My Oracle Support.
Online Submission. The Oracle Utilities Application Framework provides a development and testing tool to execute individual batch processes from the online system. It is basic and only supports execution of individual processes (not groups of jobs like the alternatives do). This online submission capability is designed for cost effective developer and non-production testing, if desired, and is not supported for production use. For more details, refer to the online documentation provided with the version of the product you are using.

Note: For customers of legacy versions of Oracle Utilities Customer Care and Billing, a basic workflow based scheduler was provided for development and testing purposes. This interface is not supported for production use and one of the alternatives outlined above should be used instead.

All the above methods use the same architecture for executing batch processes (though some have additional features that need to be enabled). For details of each of the configurations, refer to the Server Administration Guide supplied with your version of the product. When asked which technology should be used, I tend to recommend the following:

If you have an existing investment in a third party scheduler that you want to retain, then use the command line interface. This retains your existing investment, and you can integrate across products or even integrate non-product batch such as backups from the same scheduler.
If you do not have an existing scheduler, then consider using the DBMS Scheduler provided with the database. It is likely your DBAs are already using it for their own tasks, and it is used by a lot of Oracle products already. The advantage of this scheduler is that you already have the license somewhere in your organization. 
It can be deployed locally within the product database or remotely as an enterprise-wide solution. It has a lot of good features, and Oracle Utilities uses this scheduler as a foundation of our cloud implementations.
If you are on the cloud, then use the provided interface in Oracle Utilities Cloud Service Foundation and, if you have an external scheduler, the REST based Scheduler API. If you are on-premise, then use the Oracle Enterprise Manager (12+) interface (Scheduler Central) in preference to the SQL Developer interface (though the latter is handy for developers). Oracle also ships a command line interface to the scheduler objects if you prefer PL/SQL style administration.

Note: Scheduler Central is included in the base functionality for Oracle Enterprise Manager and does not require any additional packs.

I would only recommend using online submission for demonstrations, development and perhaps testing (where you are not using Oracle Utilities Testing Accelerator or have not implemented the scheduler). It has very limited support and will only execute individual processes.
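To illustrate the third party scheduler pattern above: a scheduler job typically wraps the product's command line utility and branches on its return code. The sketch below mocks that utility with a shell function, since the real script names and return codes vary by version (check the Server Administration Guide for the actual utilities):

```shell
# Sketch: a scheduler wrapper acting on a batch script's return code.
# run_batch is a mock standing in for the real product command line
# utility; MOCK_RC simulates the exit status it would return.
run_batch() {
  # In real use this would invoke the documented product script here.
  return "${MOCK_RC:-0}"
}

if run_batch; then
  echo "batch succeeded: release dependent jobs"
else
  rc=$?
  echo "batch failed (rc=$rc): alert operations and hold the schedule"
fi
```

The key design point is that the scripts return standard return codes, so any scheduler that can branch on an exit status (cron wrappers, enterprise schedulers, CI tools) can drive the next action.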


Configuration Management for Oracle Utilities

An updated series of whitepapers is now available for managing configuration and code in Oracle Utilities products, whether the implementation is on-premise, hybrid or on the Oracle Utilities SaaS Cloud. The series has been updated for the latest Oracle Utilities Application Framework release and highlights the generic tools, techniques and practices available for use in Oracle Utilities products. It is split into a number of documents:

Concepts. Overview of the series and the concept of Configuration Management for Oracle Utilities products.
Environment Management. Establishing and managing environments for use on-premise, hybrid and on the Oracle Utilities SaaS Cloud, including some practices and techniques to reduce implementation costs.
Version Management. Understanding the inbuilt and third party integration for managing versions of individual extension assets, with a discussion of managing code on the Oracle Utilities SaaS Cloud.
Release Management. Understanding the inbuilt release management capabilities for creating extension releases and accelerators.
Distribution. Installation advice for releasing extensions across environments on-premise, hybrid and on the Oracle Utilities SaaS Cloud.
Change Management. A generic change management process to approve extension releases, including assessment criteria.
Configuration Status. The information available for reporting the state of extension assets.
Defect Management. A generic defect management process to handle defects in the product and extensions.
Implementing Fixes. A process and advice on implementing single fixes individually or in groups.
Implementing Upgrades. The common techniques and processes for implementing upgrades.
Preparing for the Cloud. Common techniques and assets that need to be migrated prior to moving to the Oracle Utilities SaaS Cloud. 
For more information and for the whitepaper associated with these topics refer to the Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.


Oracle Utilities Application Framework V4.4.0.0.0 Released

The latest release of the Oracle Utilities Application Framework, V4.4.0.0.0, is now available, with the first products becoming available on-premise and on the Oracle Utilities Cloud. This release is significant as it forms the micro-services based foundation of the next generation of the Oracle Utilities Cloud offering, and whilst the bulk of the changes are cloud based, there are some significant changes for on-premise customers as well:

New Utilities User Experience. Last year we previewed our directions for the user experience across the Oracle Utilities product portfolio. Oracle Utilities Application Framework V4.4.0.0.0 delivers the initial set of this experience with a new look and feel which forms the basis of the new user experience. It is based upon feedback from various user experience teams, customers and partners to deliver a better experience, including reducing eye strain and supporting a wider range of platforms and devices now and in the future. For example (edited for publishing purposes):
New To Do Portals. Based upon customer feedback, the first of the new management portals, for To Do Management, is now included with Oracle Utilities Application Framework V4.4.0.0.0. These portals can be used alongside the legacy To Do functionality and can be migrated to over time. The idea of these portals is to allow To Do's to be found, managed and resolved with the minimum amount of effort, from the portals as much as possible. The new portals support dynamic criteria based upon data and To Do Type, date ranges, saved views, text search and multi-query. For example:
Improved User Error Field Marking. In line with the New Utilities User Experience, the indication of fields in error has been improved both visually and programmatically. This allows implementations to be flexible in how fields in error are indicated in error routines and how they are indicated in the user experience.
To Do Pre-creation Algorithm. 
In past releases, it was possible to use a To Do Pre-creation algorithm, residing exclusively on the Installation record, to implement logic targeting when, in a lifecycle, a To Do can be created. This was inefficient for implementations with a large number of To Do Types. It is now possible to introduce this logic at the To Do Type level to override the global algorithm.
Cloud Enhancements. This release contains a set of enhancements for the Oracle Utilities Cloud SaaS versions which are not typically applicable to non-cloud implementations. These enhancements are shipped with the product in an appropriate format (some features are not available to non-cloud implementations).
More Granular To Do Security. The Complete All functionality in To Do now has a separate security application service to provide a more focused capability. This allows more granular security to be implemented, if desired.
External Message Enhancements. The External Messages capability has been extended to support URI substitutions, supporting Cloud/Hybrid implementations and isolating developers from configuration changes to reduce costs.

Products using this new version of the framework, including Oracle Utilities Customer Care And Billing 2.7.0.1.0, are now available from Oracle Software Delivery Cloud. Refer to the release notes shipped with those products for details of these and other enhancements available in this release.


Registering a Service Request With Oracle Support

As with other vendors, Oracle provides a web site, My Oracle Support, for customers to find information as well as register service requests for bugs and enhancements they want us to pursue. Effective use of this facility can save you time as well as help you find the information you need to answer your questions. Most customers think My Oracle Support is just the place to get patches, but it is far more of a resource than that. Apart from patches for all Oracle products, it provides some important resources for customers and partners, including:

Knowledge Base - A set of articles that cover all the Oracle products with announcements and advice. For example, all the whitepapers you see available from this blog end up as Knowledge Base articles. Product Support and Product Development regularly publish to that part of My Oracle Support to provide customers and partners up-to-date information.
Communities - For most Oracle products, there are communities of people who can answer questions. Some partners actually share implementation tips in these communities, and they can become self sustaining with advice about features that have actually been implemented and tips on how best to implement them from partners and customers. Oracle Product Support and Development monitor those communities to see trends as well as determine possible optimizations to our products. They are yet another way you can contribute to the product.

Now, to help you navigate the site, I have compiled a list of the most common articles that you might find useful. 
This list is not comprehensive and I would recommend that you look at the site to find more than what is listed here:

My Oracle Support Resource Center (Doc ID 873313.1)
How to use My Oracle Support - How-to Training Video Series (Doc ID 603505.1)
My Oracle Support (MOS) or Cloud Support Portal for New Users - A Getting Started Resource Center (Doc ID 1959163.1)
How to create Service Request Reports (SR Reports), Asset Reports, Inventory Reports in My Oracle Support (Doc ID 1496117.1)
Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Functional Issues (Doc ID 2057204.1)
Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Products (Doc ID 2064324.1)
Details Required when Logging an Oracle Utilities Framework Based Product Service Request (Doc ID 1905747.1)
Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Products - Batch Issues (Doc ID 2064310.1)
Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Patch Installation Issues (Doc ID 2064389.1)
Collecting Diagnostic Data to Troubleshoot Oracle Utilities Framework Based Installation Issues (Doc ID 2058433.1)
Oracle Utilities Framework Support Utility (Doc ID 1079640.1)
Certification Matrix for Oracle Utilities Products (Doc ID 1454143.1)
Optimizing Log levels In OUAF Based Applications (Doc ID 2090031.1)
Supporting OUAF date time format in IWS without XAI compatibility enabled (Doc ID 2229893.1)
OUAF Batch Commit Strategy (Doc ID 1482116.1)
Usage Of G1GC In OUAF Product (Doc ID 2444942.1)
Details around Storage.xml file for OUAF based products (Doc ID 1482635.1)
Required Troubleshooting Information for OUAF Performance Problems [Video] (Doc ID 1233173.1)
How to Capture Fiddler Logs in Oracle Utilities Application Framework Based Products (Doc ID 1293483.1)
How to apply a custom stylesheet to the Oracle Utilities Application Framework? (Doc ID 1557818.1)
Configuration And Log Files Checklist For Oracle Utilities Application Framework Based Products (Doc ID 797594.1)
How To Install Patches On Oracle Utilities Application Framework Based Products (Doc ID 974985.1)

For more articles, I suggest you use the terms OUAF or Oracle Utilities Application Framework in the search. For specific product advice, use the product acronym or product name in the search to find articles.


Building Your Batch Architecture with Batch Edit

The introduction of Oracle Coherence to our batch architecture brings both power and flexibility, supporting a wide range of batch workloads. But, to quote one of the late Stan Lee's iconic characters, "With great power comes great responsibility": configuration of this architecture can be challenging for complex scenarios. To address this, the product introduced a text based utility called Batch Edit. The Batch Edit utility is a text based wizard that helps implementers build simple and complex batch architectures without the need for complex editing of configuration files. The latter is still supported for more experienced implementers, but the main focus is to allow inexperienced implementers to build and maintain a batch architecture easily using this wizard. The use of Batch Edit (the bedit.sh command) is totally optional; to use it, you must change the Enable Batch Edit Functionality setting to true using the configureEnv.sh -a command. For example: This is necessary for backward compatibility. Once it is enabled, the wizard can be used to build the configuration. Here are a few tips on using it:

The bedit.sh command has command options that need to be specified to edit parts of the architecture:

bedit.sh -c  Edit the cluster configuration.
bedit.sh -w  Edit the threadpoolworker configuration.
bedit.sh -s  Edit the submitter configuration.

Use the bedit.sh -h option for a full list of options and combinations.

Within bedit.sh you can set up different scenarios using predefined templates:

Cluster. The following cluster templates are supported:
Single Server (ss) - Ideal for simple non-production environments, restricting the cluster to a single host.
Unicast Server (wka) - Well Known Address based clusters, with the ability to define the nodes in your cluster within the wizard. 
Multi-cast Server (mc) - Uses multi-cast for dynamic node management and configuration.
Threadpool. It is possible to set up individual threadpools or groups of threadpools using the tool with the following templates:
Standard Threadpools - Setting up individual threadpools or groups of threadpools with macro and micro level configuration (including sizing and caching).
Cache Threadpools - Support for caching threadpools, which are popular in complex setups to drastically reduce the network traffic across nodes.
Admin Threadpools - Support for reserving threadpools for management and monitoring capabilities (this reduces the JMX overhead).
Submitter. If you are not using the DBMS_SCHEDULER interface, which uses the Batch Control for parameters, then properties files for the submitters will be required. The bedit.sh utility allows the following types of submitter files to be generated and maintained:
Global configuration - Set global defaults for all batch controls. For example, it is possible to specify the product user used for authorization purposes.
Specific Batch Control configuration - Set the parameters for specific Batch Controls. For example:

Within each template there is a list of settings, with help on each setting to aid in deciding the values. The bedit.sh utility allows each parameter to be set individually using in-line commands. Use the set command to set values. Use help to get context sensitive help on individual parameters. For example:

One of the most important commands is save, which applies the changes you have made. The Batch Edit utility uses templates provided with the product to build the configuration files for you. It is highly recommended for customers who do not want to manually manage templates or configurations for batch, or who do not have in-depth Oracle Coherence knowledge. 
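Putting those tips together, an edit session follows a set-help-save rhythm like the transcript below. The set, help and save commands come from the text; the parameter names and values shown are purely illustrative, since the available settings vary by template and release (use help inside the utility to see the real ones):

```text
$ bedit.sh -w                 (open the threadpoolworker configuration)
> set <setting> <value>       (set an individual parameter; names vary by template)
> help <setting>              (context sensitive help on that parameter)
> save                        (apply the changes to the configuration files)
> exit
```

Nothing is written until save is issued, so it is safe to explore settings with help before committing a change.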
For more information about the Batch Edit utility refer to the Server Administration Guide shipped with the product and Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support. Customers wanting to know about the DBMS_SCHEDULER interface should refer to Batch Scheduler Integration (Doc Id: 2196486.1) available from My Oracle Support.


Revision Control Basics

One of the features of the Oracle Utilities Application Framework is Revision Control. This is an optional facility where you can version manage your ConfigTools objects natively within the browser. Revision Control supports the following capabilities:

Adding an object for the first time. The new object is automatically checked out by the system on behalf of the current user. The revision is finalized when the user checks in the object or reverts all changes. The latter restores the object to its version at check-out time.
Updating an object requires the object to be checked out prior to making any change. A user can either manually check out an object, or the first save confirms an automatic check out with the user. The revision is finalized when the user checks in the object or reverts all changes.
Deletion is captured. Deleting an object results in a new revision record capturing the object at deletion time. This does not remove the object from Revision Control, and it allows for restores in the future if necessary.
Restoring generates a new revision. Restoring an object also results in a new revision, capturing the object after being restored to an older version.
State is important. An object is considered checked out if it has a revision in a non-final state.
Algorithms control the revision. A Revision Control Maintenance Object plug-in spot is introduced to enforce revision rules, such as preventing a user from working on an object checked out by another user. Revision control is an optional feature that can be turned on for each eligible maintenance object. To turn the feature on for a Maintenance Object, a revision control algorithm has to be plugged into it.
Automatic revision dashboard zones. A new Revision Control context sensitive dashboard zone is provided to manage the above revision events, except the restore, for the current object being maintained. The zone is visible only if the Maintenance Object has this feature turned on. 
A hyperlink from this zone navigates to a new Revision History portal listing the history of revision events for the current object.
Tracking objects. A dashboard zone is provided that shows all the objects currently checked out by the user.
Search revision history. In addition, a Revision History Search portal is provided to search and see information about a user's historical work and the historical revisions to an object.

Revision Control supports the ability to check in, check out and revert versions of objects you develop in ConfigTools from within the browser interface. Additionally, Revision Control supports team based development, with supervisor functions to force the state of versions allocated to individuals. The diagram below summarizes the facilities:

Note: Revision Control is disabled by default; the F1-REVCTL algorithm must be added as a Revision Control algorithm on the ConfigTools Maintenance Objects it applies to. For example:

Once enabled, whenever the configured object is edited, a Revision Control dashboard zone will be displayed, depending on the state of the object within the Revision Control system. The state of a revision can be queried using the Revision Control Search. For example:

This has been just a summary of some of the features of Revision Control. Refer to the online documentation for additional advice and a full description of the features.


Oracle Utilities Documentation

One of the most common questions I get from partners and customers is the location of the documentation for the product. In line with most Oracle products, there are three locations for documentation:

Online Help. The product ships with online help which has information about the screens and advice on the implementation and extension techniques available. Of course, this assumes you have installed the product first. Help can be accessed using the assigned keyboard shortcut or using the help icon on the product screens.
Oracle Software Delivery Cloud. Along with the installation media for the products, it is possible to download PDF versions of all the documentation for offline use. This is usually indicated on the download when selecting the version of the product to download, though it can be downloaded at any time.
Oracle Utilities Help Center. As with other Oracle products, all the documentation is available online via the Oracle Help Center (under Industries --> Utilities).

The following documentation is available:

Release Notes. Summary of the changes and new features in the Oracle Utilities product. Audience: Implementation Teams.
Quick Install. Summary of the technical installation process, including prerequisites. Audience: UNIX Administrators.
Installation Guide. Detailed software installation guide for the Oracle Utilities product. Audience: UNIX Administrators.
Optional Products Installation. Summary of any optional additional or third party products used with the Oracle Utilities product. This guide only exists if optional products are certified/supported with the product. Audience: UNIX Administrators.
Database Administrator's Guide. Installation, management and guidelines for the database used with the Oracle Utilities product. Audience: DBA Administrators.
Licensing Information User Manual. Legal license information relating to the Oracle Utilities product and related products. Audience: UNIX Administrators.
Administrative User Guide. Offline copy of the Administration documentation for the Oracle Utilities product. This is also available via the online help installed with the product. Audience: Implementation Teams, Developers.
Business User Guide. Offline copy of the Business and User documentation for the Oracle Utilities product. This is also available via the online help installed with the product. Audience: Implementation Teams, Developers.
Package Organization Summary. Summary of the different packages included in the Oracle Utilities product. This may not exist for single product installations. Audience: Implementation Teams.
Server Administration Guide. Guide to the technical configuration settings, management utilities and other technical architecture aspects of the Oracle Utilities product. Audience: UNIX Administrators.
Security Guide. Guide to the security aspects of the Oracle Utilities product, centralized in a single document. Covers both security functionality and technical security capabilities. This is designed for use by security personnel to design their security solutions. Audience: Implementation Teams.
API Reference Notes. Summary of the APIs provided with the Oracle Utilities product. This is also available via online features. Audience: Developers.
Developers Guide. The Oracle Utilities SDK guide for using the Eclipse based development tools for extending the Oracle Utilities product using Java. Partners using ConfigTools or Groovy should use the Administrative User Guide instead. Audience: Developers.

Be familiar with this documentation as well as My Oracle Support, which has additional Knowledge Base articles.


Running OUAF Database Installation in Non-Interactive Mode

Over the past few releases, the Oracle Utilities Application Framework introduced Java versions of our installers, which were originally shipped as part of the Oracle Application Management Pack for Oracle Utilities (for Oracle Enterprise Manager). To use these utilities you need to set the CLASSPATH as outlined in the DBA Guides shipped with the product. Each product ships an Install-Upgrade sub-directory which contains the install files; change to that directory to perform the install. If you want custom storage parameters, update the Storage.xml and StorageOps.xml files. Use the following command line to install the database components:

java -Xmx1500M com.oracle.ouaf.oem.install.OraDBI -d jdbc:oracle:thin:@<DB_SERVER>:<PORT>/<SID>,<DBUSER>,<DBPASS>,<RW_USER>,<R_USER>,<RW_USER_ROLE>,<R_USER_ROLE>,<DBUSER> -l 1,2 -j $JAVA_HOME

Where:

<DB_SERVER> - Host name for the database server
<PORT> - Listener port for the database server
<SID> - Database service name (PDB or non-PDB)
<DBUSER> - Administration account for the product (owns the schema) (created in an earlier step)
<DBPASS> - Password for the administration account (created in an earlier step)
<RW_USER> - Database read-write user for the product (created in an earlier step)
<R_USER> - Database read-only user for the product (created in an earlier step)
<RW_USER_ROLE> - Database role for read-write access (created in an earlier step)
<R_USER_ROLE> - Database role for read access (created in an earlier step)

That will run the install directly. If you added additional users to your installation and want to generate the security definitions for those users, you need to run the new oragensec utility:

java -Xmx1500M com.oracle.ouaf.oem.install.OraGenSec -d <DBUSER>,<DBPASS>,jdbc:oracle:thin:@<DB_SERVER>:<PORT>/<SID> -a A -r <R_USER_ROLE>,<RW_USER_ROLE> -u <RW_USER>,<R_USER>

Where <RW_USER> is the additional user that you want to generate security for. You will need to provide <R_USER> as well.
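The connection argument packs the JDBC URL and the user, password and role values into one comma-separated string, which is easy to get wrong by hand. The following sketch assembles the OraDBI command line above from named parameters; the class name and flags are taken from the article, but the helper function and the sample values (dbhost, cisadm, etc.) are illustrative, not utilities shipped with the product.

```python
# Illustrative helper: build the non-interactive OraDBI command line from
# named parameters so the comma-separated connection string stays consistent.
# The sample host, users and roles below are placeholders, not product defaults.

def build_oradbi_command(db_server, port, sid, dbuser, dbpass,
                         rw_user, r_user, rw_role, r_role,
                         java_home="$JAVA_HOME"):
    jdbc = f"jdbc:oracle:thin:@{db_server}:{port}/{sid}"
    # Order matters: JDBC URL, admin user/password, RW user, R user,
    # RW role, R role, then the admin (schema owner) user again.
    conn = ",".join([jdbc, dbuser, dbpass, rw_user, r_user,
                     rw_role, r_role, dbuser])
    return ["java", "-Xmx1500M", "com.oracle.ouaf.oem.install.OraDBI",
            "-d", conn, "-l", "1,2", "-j", java_home]

cmd = build_oradbi_command("dbhost", 1521, "OUAFPDB", "cisadm", "secret",
                           "cisuser", "cisread", "cis_user", "cis_read")
print(" ".join(cmd))
```

Printing the joined list yields a command in exactly the shape shown above, ready to paste into a shell or pass to a process launcher.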


New Process Flow Functionality

In Oracle Utilities Application Framework V4.3.0.6.0 we introduced an exciting new capability to model and process multi-step, long-running business processes solely using the ConfigTools functionality. The capability allows implementations to specify a complex step-by-step process, with associated objects, to perform a business process with the following capabilities: Forward and Back. The ability to move forward through a process and also backtrack if the process permits it. For example, you might be interacting with a customer on the phone or via chat and the customer changes their mind about a particular part of the process. The operator can move back to the relevant part of the process to correct the interaction. A train UI element has been added to visually emphasize the location in the process. Save At Any Time. The ability to pause and save a business process in transit at any time. The Process Flow will pick up from the save point to continue the process. Model Complex Processing. It is possible to model simple and complex processing with branching, multiple panel support and panel group support. The latter allows groups of panels within a single step to be modeled. State Query. A Process Flow State Query has been introduced to allow users to find processes in progress and deal with them appropriately. Flexible Configuration. From a single maintenance dialog the Process Flow can be configured, including flow attributes, panel sequences, conditions and the buttons visible at each stage. It should be noted that a library of common objects, including UI Maps, is provided with the capability to reduce the configuration effort. The Process Flow capability is used in the Oracle Utilities Application Framework based applications to deliver key processes out of the box for on-premise implementations and extensively in the Oracle Utilities Cloud Services (SaaS) accelerators. It is also used in the Oracle Utilities Cloud Service Foundation, which is a component provided with each Oracle Utilities Cloud Service (SaaS). Refer to the online documentation for details of the capability as well as the sequence of objects to create to use this facility.
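The forward/back and save-at-any-time behaviors described above can be pictured as a simple state machine over an ordered set of panels. The following sketch is purely conceptual, assuming a linear flow; the class, method names and panel names are illustrative and are not part of the ConfigTools object model.

```python
# Conceptual sketch of a process flow: an ordered list of panels with a
# current position (the train UI marker) and an optional save point.
# Illustrative only; the real capability is configured, not coded.

class ProcessFlow:
    def __init__(self, panels):
        self.panels = list(panels)
        self.position = 0          # current panel, shown on the train UI
        self.save_point = None     # position recorded by "save at any time"

    def forward(self):
        if self.position < len(self.panels) - 1:
            self.position += 1
        return self.panels[self.position]

    def back(self):
        # Backtrack, e.g. when a customer changes their mind mid-call
        if self.position > 0:
            self.position -= 1
        return self.panels[self.position]

    def save(self):
        self.save_point = self.position

    def resume(self):
        # Pick up from the save point to continue the process
        if self.save_point is not None:
            self.position = self.save_point
        return self.panels[self.position]

flow = ProcessFlow(["Customer", "Address", "Services", "Confirm"])
flow.forward(); flow.forward()   # advance to the "Services" panel
flow.save()                      # pause the process here
flow.back()                      # operator backtracks one step
print(flow.resume())             # resuming returns to the saved panel
```

A branching flow would replace the linear index with per-panel successor conditions, which is essentially what the panel sequence and condition configuration expresses.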


Cube Explorer - A new way of viewing your data

One of the most common uses for spreadsheets these days is to pivot information: a way of displaying information across multiple dimensions that makes it easier to analyze. Oracle Utilities Application Framework V4.3.0.6.0 introduces a new zone paradigm called the Cube Viewer, which allows pivot-style analysis to be configured and made available to authorized users. The Cube Viewer is a new way of displaying and analyzing information with the following capabilities: Dynamic Configuration. End users can dynamically change the look and the subset of data available to the cube within the user interface. Save and Load Views. As with other zones, it is possible to save all the dynamic configuration for reuse across sessions. This allows optimizations at the user level. Configurable Selection Criteria. Allows users to supply criteria for sub-setting the data to be displayed. Comparison Support. Allows two sets of criteria data to be available for comparison purposes. Different Views. It is possible to view the data in a variety of formats, including data views or graphically. This helps users understand data in the style they prefer. It is even possible to see the raw data that the analysis was based upon. Multiple Dimensions Supported. The dimensions for the cube are available, including any dimensions not shown but available. Configurable View Columns. A list of columns to display is available for selection. Flexible Formatting. The Cube Viewer allows custom formulas to be dynamically added for on-the-fly analysis. Dynamic Filtering. The Cube Viewer allows filters to be dynamically imposed at both the global and hierarchical levels for focused analysis. To use the Cube Viewer, the following are configured: Query Portal - A query portal is built containing the SQL and any derived columns for use in the analysis. Filters (user and hidden) are specified for use in the Cube Viewer. Columns displayed are used as the basis for the Cube Viewer display. Business Service - The query portal is defined as a Business Service using the standard FWLZDEXP application service. Cube Type (new object) - Defines the service and capabilities of the cube. Cube View (new object) - Defines the portal to display the Cube Viewer/Cube Type. The Cube Viewer is an exciting new way of allowing users to interact with data and provide value-added analysis. Note: This capability can be used independently of the Oracle Database In-Memory capability, though using it with Oracle Database In-Memory can allow more advanced analytics to be implemented with increased performance. For more information about the Cube Viewer, refer to the online documentation supplied with the product.
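At its core, the pivot-style analysis a cube performs is an aggregation of a measure over chosen dimensions. The sketch below illustrates that idea only; in the product the data comes from the query portal's SQL, and the dimension/measure names here (region, month, usage) are invented for the example.

```python
# Minimal illustration of pivot-style aggregation: roll a measure up
# over two chosen dimensions. The records and field names are made up;
# the real Cube Viewer is driven by a configured query portal, not code.

from collections import defaultdict

rows = [
    {"region": "North", "month": "Jan", "usage": 120},
    {"region": "North", "month": "Feb", "usage": 90},
    {"region": "South", "month": "Jan", "usage": 75},
    {"region": "South", "month": "Jan", "usage": 25},
]

def pivot(data, row_dim, col_dim, measure):
    cube = defaultdict(int)
    for rec in data:
        # Each (row, column) cell accumulates the measure
        cube[(rec[row_dim], rec[col_dim])] += rec[measure]
    return dict(cube)

result = pivot(rows, "region", "month", "usage")
print(result[("South", "Jan")])   # the two South/Jan rows roll up to 100
```

Swapping `row_dim` and `col_dim`, or filtering `data` first, mirrors what the dynamic configuration and filtering options let an end user do interactively.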


New Batch Level Of Service Algorithms

Batch Level Of Service allows implementations to register a service target on individual batch controls and have the Oracle Utilities Application Framework assess the current performance against that target. In past releases of the Oracle Utilities Application Framework, the concept of Batch Level Of Service was introduced. This is an algorithm point that assesses the value of a performance metric against a target and returns the appropriate response. By default, if configured, the return value is shown on the Batch Control maintenance screen or whenever the algorithm is called. It was possible to build an algorithm to set and check the target and return the appropriate level. This can be called manually, be available via the Health Service API or be used in custom query portals. In this release a number of enhancements have been introduced: Possible to specify multiple algorithms. If you want to model multiple targets, it is now possible to link more than one algorithm to a batch control. The appropriate level (the worst case) will be returned across the multiple algorithms. More out-of-the-box algorithms. In past releases a basic generic algorithm was supplied, but additional algorithms are now provided to cover additional metrics: F1-BAT-LVSVC - Original generic algorithm to evaluate Error Count and Time Since Completion. F1-BAT-ERLOS - Compare count of records in error to a threshold. F1-BAT-RTLOS - Compare total batch run time to a threshold. F1-BAT-TPLOS - Compare throughput to a threshold. For more information about these algorithms, refer to the algorithm entries themselves and the online documentation.
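Conceptually, each of these algorithms compares a metric from the latest execution against a warning tolerance and an error tolerance and returns a level-of-service value. The sketch below mirrors that logic; the function name, the return strings and the sample tolerances are illustrative and do not reflect the base algorithms' actual interfaces.

```python
# Conceptual sketch of a threshold-based level-of-service check:
# a metric (run time, error count, throughput shortfall, ...) is
# compared against warning and error tolerances. Illustrative only.

def level_of_service(metric_value, warning_tolerance, error_tolerance):
    if metric_value >= error_tolerance:
        return "Error"
    if metric_value >= warning_tolerance:
        return "Warning"
    return "Normal"

# e.g. a total run time of 95 minutes against 60 (warn) / 90 (error)
print(level_of_service(95, warning_tolerance=60, error_tolerance=90))
```

Linking several such checks to one batch control and reporting the worst result is exactly the "multiple algorithms, worst case returned" behavior this release adds.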


New File Adapter - Native File Storage

In Oracle Utilities Application Framework V4.3.0.6.0, a new File Adapter has been introduced to parameterize file locations across environments. In previous releases, environment variables or hard-coded paths were used to specify the locations of files. With the introduction of the Oracle Utilities Cloud SaaS Services, the locations of files are standardized and, to reduce maintenance costs, these paths are now parameterized using an Extendable Lookup (F1-FileStorage) defining the path alias and the physical location. The on-premise version of Oracle Utilities Application Framework V4.3.0.6.0 supports local storage (including network storage) using this facility. The Oracle Utilities Cloud SaaS version supports both local (predefined) storage and the Oracle Object Storage Cloud. To use the alias in any FILE-PATH parameter (for example), the URL is used in the FILE-PATH: file-storage://MYFILES/mydirectory (if you want to specify a subdirectory under the alias) or file-storage://MYFILES. Now, if you migrate to another environment (the lookup is migrated using Configuration Migration Assistant), this record can be altered; if you are moving to the Cloud, the adapter can be changed to the Oracle Object Storage Cloud. This removes the need to change each individual place that uses the alias. It is recommended to take advantage of this capability: Create an alias for each location you read or write files from in your batch controls. Define it using the Native File Storage adapter. Try to create the minimum number of aliases possible to reduce maintenance costs. Change all the FILE-PATH parameters in your batch controls to use the relevant file-storage URL. If you decide to migrate to the Oracle Utilities SaaS Cloud, these Extendable Lookup values will be the only thing that changes to realign the implementation to the relevant location on the cloud instance. For both on-premise and cloud implementations, these definitions can now be migrated using Configuration Migration Assistant.
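The value of the alias is the indirection: batch configuration references file-storage://ALIAS and only a per-environment lookup maps the alias to a physical location. The sketch below models that resolution; the dictionary stands in for the F1-FileStorage Extendable Lookup, and the alias name and path are invented for the example.

```python
# Sketch of file-storage:// alias resolution. The mapping below stands in
# for the F1-FileStorage extendable lookup; only this mapping would change
# when migrating environments. Alias and path are illustrative.

FILE_STORAGE = {
    "MYFILES": "/spl/data/interfaces",   # e.g. on-premise local storage
}

def resolve(url):
    prefix = "file-storage://"
    if not url.startswith(prefix):
        return url                       # plain paths pass through unchanged
    alias, _, subdir = url[len(prefix):].partition("/")
    base = FILE_STORAGE[alias]
    return f"{base}/{subdir}" if subdir else base

print(resolve("file-storage://MYFILES/mydirectory"))
```

Repointing MYFILES at an object-storage location changes one lookup entry rather than every FILE-PATH parameter that references it, which is exactly the maintenance saving described above.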


Object Erasure capability introduced in 4.3.0.6.0

With data privacy regulations around the world being strengthened, data management principles need to be extended to most objects in the product. In the past, Information Lifecycle Management (ILM) was introduced for transaction object management and continues to be used today in implementations for effective data management. When designing the ILM capability, it did not make sense to extend it to master data such as Accounts, Persons, Premises, Meters, Assets, Crews etc., as data management and privacy rules tend to be different for these types of objects. In Oracle Utilities Application Framework V4.3.0.6.0, we have introduced Object Erasure to support master data, taking into account purging as well as obfuscation of data. This new capability is complementary to Information Lifecycle Management, offering a full data management capability; it does not replace Information Lifecycle Management, nor does it depend on Information Lifecycle Management being licensed. Customers using Information Lifecycle Management in conjunction with Object Erasure can implement full end-to-end data management capabilities. The idea behind Object Erasure is as follows: An algorithm calls the Manage Erasure algorithm on the associated Maintenance Object to check the conditions that make the object eligible for erasure. This gives implementations the flexibility to initiate the process from a wide range of possibilities; it can be as simple as checking some key fields or some key data on an object (you decide the criteria). The Manage Erasure algorithm is used to detect the conditions, collate relevant information and call the F1-ManageErasureSchedule Business Service to create an Erasure Schedule Business Object in a Pending state to initiate the process. 
A set of generic Erasure Schedule Business Objects is provided (for example, a generic purge object for use in purging data) and you can create your own to record additional information. The Erasure Schedule BO has three states, which can be configured with algorithms (usually Enter algorithms; a set is provided for reuse with the product): Pending - The initial state of the erasure. Erased - The most common final state, indicating the object has been erased or obfuscated. Discarded - An alternative final state where the record can be parked (for example, if the object becomes ineligible, an error has occurred in the erasure or a reversal of obfuscation is required). A new Erasure Monitor (F1-OESMN) Batch Control can be used to transition the Erasure Schedule through its states and perform the erasure or obfuscation activity. Note: The base supplied Purge Enter algorithm (F1-OBJERSPRG) can be used for most requirements. It should be noted that it does not remove the object from the _K key tables, to avoid conflicts when reallocating identifiers. The solution has been designed with a portal to link all the elements together easily, and the product comes with a set of pre-defined objects ready to use. The portal also allows an implementer to configure Erasure Days, which is the number of days a record remains in the Erasure Schedule before being considered by the Erasure Monitor (a waiting period, basically). As an implementer you can simply build the Manage Erasure algorithm to detect the business event, or you can also write the algorithms to perform all of the processing (and every variation in between). The erasure will respect any business rules configured for the Maintenance Object, so the erasure or obfuscation will only occur if the business rules permit it. 
Customers using Information Lifecycle Management can manage the storage of Erasure Schedule objects using Information Lifecycle Management. Objects Provided: The Object Erasure capability supplies a number of objects you can use for your implementation: Set of Business Objects. A number of Erasure Schedule Business Objects such as F1-ErasureScheduleRoot (base object), F1-ErasureScheduleCommon (generic object for purges) and F1-ErasureScheduleUser (for user record obfuscation). Each product may ship additional Business Objects. Common Business Services. A number of Business Services, including F1-ManageErasureSchedule, to use within your Manage Erasure algorithm to create the necessary Erasure Schedule object. Set of Manage Erasure Algorithms. For each predefined Object Erasure object provided with the product, a set of Manage Erasure algorithms is supplied to be connected to the relevant Maintenance Object. Erasure Monitor Batch Control. The F1-OESMN Batch Control, provided to manage the Erasure Schedule object state transitions. Enter Algorithms. A set of predefined Enter algorithms to use with the Erasure Schedule object to perform common outcomes, including purge processing. Erasure Portal. A portal to display and maintain the Object Erasure configuration. Refer to the online documentation for further advice on Object Erasure.
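The lifecycle described above (created Pending, held for the configured Erasure Days, then transitioned by the monitor) can be sketched as a small state machine. Everything below is conceptual: the class, field names and dates are invented for the example, and the real objects are Business Objects transitioned by the F1-OESMN batch process, not Python code.

```python
# Conceptual sketch of the Erasure Schedule lifecycle: Pending, then after
# the Erasure Days waiting period the monitor moves the record to Erased
# (or Discarded if business rules do not permit erasure). Illustrative only.

from datetime import date, timedelta

class ErasureSchedule:
    def __init__(self, created, erasure_days):
        self.created = created
        self.erasure_days = erasure_days   # waiting period before erasure
        self.state = "Pending"

    def monitor(self, today, business_rules_permit=True):
        # Mimics one pass of the Erasure Monitor over this record
        if self.state != "Pending":
            return self.state              # final states are not revisited
        if today < self.created + timedelta(days=self.erasure_days):
            return self.state              # still inside the waiting period
        self.state = "Erased" if business_rules_permit else "Discarded"
        return self.state

sched = ErasureSchedule(created=date(2019, 1, 1), erasure_days=30)
print(sched.monitor(date(2019, 1, 15)))   # too early: stays Pending
print(sched.monitor(date(2019, 2, 15)))   # waiting period elapsed: Erased
```

The `business_rules_permit` flag stands in for the Maintenance Object's configured business rules, which the real erasure respects before erasing or obfuscating.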


Inbound Web Services - REST Services

In Oracle Utilities Application Framework V4.3.0.6.0, the Inbound Web Services object has been extended to support both SOAP and REST based services. This has a lot of advantages: Centralized web services registration. The interface Application Programming Interfaces (APIs) are now centralized in the Inbound Web Services object. This means you can manage all your programmatic interfaces from a single object. This helps when using the Web Service Catalog used for Oracle Integration Cloud Service as well as any API management capabilities. Isolation from change. One of the major features of the REST capability within Inbound Web Services is that the URI is no longer fixed but can differ from the underlying service. This means you can isolate your interface clients from changes. Standardization. The Inbound Web Services object has inherent standards that can be reused across both SOAP and REST based services. For example, the ConfigTools object model can be directly wired into the service, reducing time. Reduced cost of maintenance. One of the features of the new capability is the ability to group all your interfaces into a minimal number of registrations. This reduces maintenance and allows you to control groups of interfaces easily. Inbound Web Services now supports two Web Service Classes: SOAP - Traditional XAI and IWS based services based around the SOAP protocol. These services are deployed to the Oracle WebLogic Server. REST - RESTful services that are now registered for use. These services are NOT deployed, as they are used directly by the REST execution engine. For REST services, a new optimized maintenance function is now available. This facility has the following capabilities: Multiple Services in one definition. It is now possible to define multiple REST services in one registration. This reduces maintenance effort, and the interfaces can be enabled and disabled at the Inbound Web Service level. 
Each REST service is regarded as an operation on the Inbound Web Service. Customizable URI for service. The URI used for the REST service can be the same as or different from the operation. Business Object Support. In past releases, Business Objects were not supported. In this release there is some limited support for Business Objects; refer to the Release Notes and online documentation for clarification of the level of support. Open API Support. This release introduces Open API support for documenting the REST API. Active REST services are available to the REST execution engine. Open API (OAS3) support provides the following: Documentation of the API in various formats. The documentation of the REST API is based upon the metadata stored in the product. Ability to authorize Inbound Web Services directly in Open API. It is possible to authorize the API directly from the Open API documentation. Developers can check the API prior to making it active. Multiple formats supported. Developers can view payloads in various formats, including Model format. Ability to download the API. You can download the API directly from the documentation in Open API format. This allows the API to be imported into development IDEs. Ability to test inline. Active APIs can be tested directly from the documentation. The documentation examples include the API header with authorization (the server URL shown is generic, as that server is not active), the operation/API list, request and response APIs with response codes, and the Model format. For more information about REST support, refer to the online documentation or Web Services Best Practices (Doc Id: 2214375.1) from My Oracle Support.
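The grouping idea (several REST operations under one Inbound Web Service registration, each with a URI that may differ from the operation name) can be pictured as a small registry. The service name, operations and URIs below are entirely invented for illustration; they are not product-shipped definitions.

```python
# Sketch of the registration model: one Inbound Web Service groups
# several REST operations, each mapped to a customizable URI that the
# execution engine resolves. All names below are hypothetical examples.

IWS_REGISTRY = {
    "CM-CustomerServices": {              # one IWS registration
        "getCustomer": "/customer/{id}",  # URI differs from operation name
        "updateCustomer": "/customer/{id}",
        "searchCustomer": "/customers",
    },
}

def resolve_operation(service, operation):
    # Clients depend on the registered URI, not the underlying service,
    # which is what isolates them from changes to the implementation.
    return IWS_REGISTRY[service][operation]

print(resolve_operation("CM-CustomerServices", "searchCustomer"))
```

Enabling or disabling the whole `CM-CustomerServices` entry would correspond to controlling the group of interfaces at the Inbound Web Service level.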


Oracle Utilities Application Framework V4.3.0.6.0 Release

Oracle Utilities Application Framework V4.3.0.6.0 based products will be released over the coming months. As with past releases, the Oracle Utilities Application Framework has been enhanced with new and updated features for on-premise, hybrid and cloud implementations of Oracle Utilities products. The Oracle Utilities Application Framework continues to provide a flexible and wide-ranging set of common services and technology to allow implementations to meet the needs of their customers. The latest release provides a wide range of new and updated capabilities to reduce costs and introduce exciting new functionality. The product ships with a complete listing of the changes and new functionality, but here are some highlights: Improved REST Support - The REST support for the product has been enhanced in this release. It is now possible to register REST services in Inbound Web Services. Inbound Web Services definitions have been enhanced to support both SOAP and REST services. This has the advantage that the registration of integrations is now centralized, and the server URL for the services can be customized to suit individual requirements. It is now possible to register multiple REST services within a single Inbound Web Service to reduce management and operations costs. Execution of REST services has been enhanced to use the registry as the first reference for a service. No additional deployment effort is necessary for this capability. A separate article on this topic will provide additional information. Improved Web Registry Support for Integration Cloud Service - With the changes in REST and other integration changes, such as categories and support for other adapters, the Web Service Catalog has been expanded to support REST and other services directly for integration registration for use in the Oracle Integration Cloud. 
File Access Adapter - In this release a File Adapter has been introduced to allow implementations to parameterize all file integration, reducing the cost of managing file paths and easing the path to the Oracle Cloud. In Cloud implementations, an additional adapter is available to allow additional storage on the Oracle Object Storage Cloud to supplement cloud storage for Oracle Utilities SaaS solutions. The File Access Adapter includes an Extendable Lookup to define alias and physical location attributes. That lookup can then be used as an alias for file paths in Batch Controls, etc. A separate article on this topic will provide additional information. Batch Start/End Date Time now part of Batch Instance Object - In past releases, the batch start and end dates and times were located as data elements within the thread attributes. This made analysis harder to perform. In this release these fields have been promoted to reportable fields directly on the Batch Instance Object for each thread. This will improve capabilities for reporting the performance of batch jobs. For backward compatibility, these fields are only populated for new executions; the internal Business Service F1-GetBatchRunStartEnd has been extended to support the new columns and also detect old executions, returning the correct values regardless. New Level of Service Algorithms - In past releases, Batch Level Of Service required building custom algorithms for checking batch levels. In this release, additional base algorithms for common scenarios like Total Run Time, Throughput and Error Rate are now provided. Additionally, it is now possible to define multiple Batch Level Of Service algorithms to model complex requirements. The Health Check API has been enhanced to return the Batch Level Of Service as well as other health parameters. A separate article on this topic will provide additional information. 
Job Scope in DBMS_SCHEDULER interface - The DBMS_SCHEDULER interface allowed parameters to be specified at the Batch Control and Global levels as well as at runtime. In this release, it is possible to pre-define parameters within the interface at the Job level, allowing control of individual instances of Batch Controls that are used more than once across chains. Ad-hoc Recalculation of To Do Priority - In a past release of the Oracle Utilities Application Framework, an algorithm to dynamically reassess and recalculate a To Do Priority was introduced. In this release, it is possible to invoke this algorithm in bulk using the newly provided F1-TDCLP Batch Control. This can be used with the algorithm to reassess To Dos to improve manual processing. Introduction of a To Do Monitor Process and Algorithm - One of the issues with To Dos in the field has been that users can forget to manually close the To Do when the issue that caused the condition has been resolved. In this release, a new batch control F1-TDMON and a new Monitor algorithm on the To Do Type have been added, so that logic can be introduced to detect the resolution of the issue and have the product automatically close the To Do. New Schema Editor - Based upon feedback from partners and customers, the usability and capabilities of the Schema Editor have been improved to provide more information as part of the basic views, to reduce rework and support cross-browser development. Process Flow Editor - A new capability has been added to the Oracle Utilities Application Framework to allow complex workflows to be modeled and a fully capable workflow introduced. This includes train support (including advanced navigation), support for saving incomplete work, branching and object integration. 
This process flow editor was used internally with success for our cloud automation in the Oracle Utilities Cloud Services Foundation and has now been introduced, in a new format, for use across the Oracle Utilities Application Framework based products. A separate article on this topic will provide additional information. Improved Google Chrome Support - This release introduces extensive Google Chrome for Business support. Check the availability with each of the individual Oracle Utilities Application Framework based products. New Cube Viewer - In the Oracle Utilities Market Settlements product we introduced a new Cube Viewer to embed advanced analytics into our products. That capability has been made generic and is now included in the Oracle Utilities Application Framework, so that products and implementations can build their own cube analytical capabilities. In this release, a series of new objects and ConfigTools objects have been introduced to build Cube Viewer based solutions. Note: The Cube Viewer has been built to operate independently of Oracle In-Memory Database support but would greatly benefit from use with the Oracle In-Memory Database. A separate article on this topic will provide additional information. Object Erasure Support - To support the various data privacy regulations introduced across the world, a new Object Erasure capability has been introduced to manage the erasure or obfuscation of master objects within the Oracle Utilities Application Framework based products. This capability is complementary to the Information Lifecycle Management (ILM) capability introduced to manage transaction objects within the product. A number of objects and ConfigTools objects have been introduced to allow implementations to add Object Erasure to their implementations. A separate article on this topic will provide additional information. 
Proactive Update ILM Switch Support - In past releases, ILM eligibility and the ILM switch were assessed in bulk exclusively by the ILM batch processes or the Automatic Data Optimization (ADO) feature of the Oracle Database. To work more efficiently, it is now possible to use the new BO Enter Status and BO Exit Status plug-ins to proactively assess eligibility and set the ILM switch as part of processing, thus reducing ILM workloads. Mobile Framework Auto Deploy Support - This release includes a new optional parameter to deploy mobile content automatically when a deployment is saved. This can avoid the extra manual deployment step, if desired. Required Indicator on Legacy Screens - In past releases, the required indicator, based upon metadata, was introduced for ConfigTools based objects; in this release it has been extended to legacy screens built using the Oracle Utilities SDK or custom JSPs (that conform to the standards required by the Oracle Utilities Application Framework). Note: Some custom JSPs may contain logic that prevents the correct display of the required indicator. Oracle Identity Manager Integration Improved - In this release, the integration with Oracle Identity Manager has been improved, with multiple adapters supported and the parameters now located in a Feature Configuration rather than properties settings. This allows the integration setup to be migrated using Configuration Migration Assistant. Outbound Message Mediator Improvements - In previous releases, implementations were required to use the Outbound Message Dispatcher business service to send an outbound message without instantiating it in cases where the outbound message Business Object pre-processing algorithms needed to be executed. This business service orchestrated the creation and deletion of the outbound message, which is not desirable for performance reasons. 
The alternative business service, Outbound Message Mediator (F1-OutmsgMediator), routes a message without instantiating anything, so it is preferred when the outbound message should not be instantiated. However, the Mediator did not execute the Business Object pre-processing algorithms. In this release, the Mediator business service has been enhanced to also execute the Business Object pre-processing algorithms. Deprecations - In this release, a few technologies and capabilities have been removed, as announced in previous releases. These include: XAI Servlet/MPL - After announcing the deprecation of XAI and MPL in 2012, the servlet and MPL software are no longer available in this release. XAI objects are retained for backward compatibility and last-minute migrations to IWS and OSB respectively. Batch on WebLogic - In the Oracle Cloud, batch threadpools were managed under Oracle WebLogic. Given changes to the architecture over the last few releases, threadpools under WebLogic are no longer supported. As this functionality was never released for use by on-premise customers, this change does not have any impact on them. WebLogic Templates - With the adoption of Oracle WebLogic 12.2+, custom WebLogic templates are no longer necessary. It is now possible to use the standard Fusion Middleware templates supplied with Oracle WebLogic with a few manual steps. These additional manual steps are documented in the new version of the Installation Guide supplied with the product. Customers may continue to use the Domain Builder supplied with Oracle WebLogic to build custom templates after Oracle Utilities Application Framework product installation. 
Customers should stop using the Native Installation or Clustering whitepapers for Oracle Utilities Application Framework V4.3.0.5.0 and above, as this information is now included directly in the Installation Guide or in the Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) available from My Oracle Support. A number of additional articles covering these topics, as well as updates to key whitepapers, will be published over the next few weeks.


New Oracle Utilities Testing Accelerator (6.0.0.0)

I am pleased to announce the next chapter in automated testing solutions for Oracle Utilities products. In the past, some Oracle Utilities products have used Oracle Application Testing Suite with product content to provide a functional and regression testing solution. Building upon that success, a new solution named the Oracle Utilities Testing Accelerator has been introduced: a new, optimized and focused solution for Oracle Utilities products. The new solution has the following benefits:

- Component Based. As with Oracle's other testing solutions, this new solution is based upon testing components and flows, with flow generation and databank support. Those capabilities were popular with our existing testing solution customers and exist in expanded forms in the new solution.
- Comprehensive Content for Oracle Utilities. As with Oracle's other testing solutions, pre-built content is provided for supported products to significantly reduce the cost of adopting automation. In this solution, the number of products within the Oracle Utilities portfolio providing content has greatly expanded. This now includes on-premise products as well as our growing portfolio of cloud-based solutions.
- Self Contained Solution. The Oracle Utilities Testing Accelerator architecture has been simplified to allow customers to quickly deploy the product with the minimum of fuss and prerequisites.
- Used by Product QA. The Oracle Utilities Product QA teams use this product on a daily basis to verify the Oracle Utilities products. This means that the content provided has been certified for use on supported Oracle Utilities products, reducing the risk of adopting automation.
- Behavior-Driven Development Support. One of the most exciting capabilities introduced in this new solution is the support for Behavior-Driven Development (BDD), which is popular with the newer Agile-based implementation approaches.
One of the major goals of the new testing capability is to reduce rework when bringing Agile process artifacts into test assets. This new capability introduces Machine Learning into the testing arena to generate test flows from Gherkin-syntax documentation produced by Agile approaches. A developer can reuse their Gherkin specifications to generate a flow quickly without the need for rework. As the capability uses Machine Learning, it can be corrected if the assumptions it makes are incorrect for the flow, and those corrections will be reused for any future flow generations. An example of this approach is shown below:

- Selenium Based. The Oracle Utilities Testing Accelerator uses a Selenium-based scripting language for greater flexibility across the different channels supported by the Oracle Utilities products. The script is generated automatically and does not need any alteration to be executed correctly.
- Data Independence. As with Oracle's other testing products, data is maintained independently of the flow and components. This translates into greater flexibility and greater levels of reuse in automated testing. It is possible to change data at any time during the process to explore greater possibilities in testing.
- Support for Flexible Deployments. Whilst the focus of the Oracle Utilities Testing Accelerator is functional and/or regression testing, it is not limited to a single deployment style.
- Beyond Functional Testing. The Oracle Utilities Testing Accelerator is designed to be used for testing beyond just functional testing. It can be used to perform testing in flexible scenarios including:
- Patch Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of product patches on business processes, using the flows as a regression test.
- Extension Release Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of releases of extensions from the Oracle Utilities SDK (via the migration tools in the SDK) or after a Configuration Migration Assistant (CMA) migration.
- Sanity Testing. In the Oracle Cloud, the Oracle Utilities Testing Accelerator is being used to assess the state of a new instance of the product, including its availability and whether the necessary data is set up, ensuring the instance is ready for use.
- Cross Oracle Utilities Product Testing. The Oracle Utilities Testing Accelerator supports flows that cross Oracle Utilities product boundaries to model end-to-end processes when multiple Oracle Utilities products are involved.
- Blue/Green Testing. In the Oracle Cloud, zero-outage upgrades are a key part of the solution offering. The Oracle Utilities Testing Accelerator supports the concept of blue/green deployment testing, allowing multiple versions to be tested to facilitate smooth upgrade transitions.
- Lower Skills Required. The Oracle Utilities Testing Accelerator has been designed with testing users in mind. Traditional automation involves recording with a scripting language that embeds the data and logic into a script, which a programmer can then alter to make it more flexible. The Oracle Utilities Testing Accelerator uses an orchestration metaphor that allows a lower-skilled person, not a programmer, to build test flows and generate no-touch scripts for execution. An example of the Oracle Utilities Testing Accelerator Workbench:

New Architecture

The Oracle Utilities Testing Accelerator has been re-architected to be optimized for use with Oracle Utilities products:

- Self Contained Solution. The new design centers on simplicity: as much as possible, the product is designed to be used with minimal configuration.
- Minimal Prerequisites. The Oracle Utilities Testing Accelerator only requires Java to execute and a database schema to store its data. Allocations for non-production use under existing Oracle Utilities product licenses are sufficient for this solution; no additional database licenses are required by default.
- Runs on the same platforms as Oracle Utilities applications.
The solution is designed to run on the same operating system and database combinations supported by the Oracle Utilities products. The architecture is simple:

- Product Components. A library of components from the Product QA teams, ready to use with the Oracle Utilities Testing Accelerator. You decide which libraries you want to enable.
- Oracle Utilities Testing Accelerator Workbench. A web-based design toolset to manage and orchestrate your test assets. It includes the following components:
- Embedded Web Application Server. A preset, simple configuration and runtime to house the workbench.
- Testing Dashboard. A new home page outlining the state of the components and flows installed, as well as notifications for any approvals and assets ready for use.
- Component Manager. A Component Manager that allows you to add custom components and manage the components available for use in flows.
- Flow Manager. A Flow Manager allowing testers to orchestrate flows and manage their lifecycle, including generation of Selenium assets for execution.
- Script Management. A script manager used to generate scripts and databanks for flows.
- Security. A role-based model to support administration, development of components/flows and approval of components/flows.
- Oracle Utilities Testing Accelerator Schema. A set of database objects that can be stored in any edition of Oracle (PDB or non-PDB is supported) for storing assets and configuration.
- Oracle Utilities Testing Accelerator Eclipse-based Plug-in. An Oxygen-compatible Eclipse plugin that executes the tests, including recording of performance and payloads for detailed test analysis.

New Content

The Oracle Utilities Testing Accelerator has expanded the number of products supported and now includes Oracle Utilities Application Framework based products and Cloud Services products. New content will be released on a regular basis to provide additional coverage for components and a set of prebuilt flows that can be used across products.
Note: Refer to the release notes for supported Oracle Utilities products and assets provided.

Conclusion

The Oracle Utilities Testing Accelerator provides a comprehensive testing solution, optimized for Oracle Utilities products, with content provided by Oracle to allow implementations to realize lower-cost and lower-risk adoption of automated testing. For more information about this solution, refer to the Oracle Utilities Testing Accelerator Overview and Frequently Asked Questions (Doc Id: 2014163.1) available from My Oracle Support. Note: The Oracle Utilities Testing Accelerator is a replacement for the older Oracle Functional Testing Advanced Pack for Oracle Utilities. Customers on that product should migrate to this new platform. Utilities to convert custom components from the Oracle Application Testing Suite platform are provided with this tool.
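To give a feel for the BDD capability mentioned above, the sketch below extracts the ordered steps from a Gherkin scenario, the raw material the flow generator works from. The scenario text and the simple keyword parser are hypothetical illustrations only; the product's actual generator uses Machine Learning, not this parser.

```python
# Minimal sketch: extract ordered steps from a Gherkin scenario. The
# scenario and parser are illustrative, not the product's ML generator.
SCENARIO = """
Scenario: Create a new person
  Given an authenticated user
  When the user adds a person with name "Jane Doe"
  Then the person record is created
"""

KEYWORDS = ("Given", "When", "Then", "And", "But")

def gherkin_steps(text):
    """Return (keyword, step text) pairs in the order they appear."""
    steps = []
    for line in text.splitlines():
        line = line.strip()
        for kw in KEYWORDS:
            if line.startswith(kw + " "):
                steps.append((kw, line[len(kw) + 1:]))
                break
    return steps

for kw, step in gherkin_steps(SCENARIO):
    print(f"{kw}: {step}")
```

Each extracted step would then be matched to a testing component, giving the generated flow the same shape as the specification.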


Updated Technical Best Practices

The Oracle Utilities Application Framework Technical Best Practices whitepaper has been revamped and updated to reflect new advice, new versions and the cloud implementations of the Oracle Utilities Application Framework based products. The following is a summary of the changes:

- Formatting change. The whitepaper uses a new template for its content, which is being rolled out across Oracle products.
- Removed out-of-date advice. Advice that applied to older versions and is no longer appropriate has been removed from the document. This is ongoing, to keep the whitepaper current and optimal.
- Added Configuration Migration Assistant advice. With the increased emphasis on the use of CMA, we have added a section outlining some techniques for optimizing the use of CMA in any implementation.
- Added Optimization Techniques advice. There are various techniques we use in our cloud implementations to reduce costs and risks on that platform. We added a section outlining some common techniques that can be reused by on-premise implementations. This is based upon a series of talks given at customer forums over the last year or so.
- Added Preparing Your Implementation for the Cloud advice. This is a new section outlining the various techniques that can be used to prepare an on-premise implementation for moving to the Oracle Utilities Cloud SaaS Services. This is also based upon that series of talks.

The new version of the whitepaper is available as Technical Best Practices (Doc Id: 560367.1) from My Oracle Support.


Oracle Utilities and the Oracle Database In-Memory Option

A few years ago, Oracle introduced an In-Memory option for the database to optimize analytical-style applications. In Oracle Database 12c and above, the In-Memory option has been enhanced to support other types of workloads. All Oracle Utilities products are now certified to use the Oracle In-Memory option, on Oracle Database 12c and above, to allow customers to optimize the operational and analytical aspects of the products. The Oracle In-Memory option is an in-memory column store that co-exists with the existing caching schemes used within Oracle to deliver faster access speeds for complex queries across the products. It is transparent to the product code and can be implemented with a few simple changes to the database specifying the objects to store in memory. Once configured, the Oracle Cost Based Optimizer becomes aware of the data loaded into memory and adjusts the execution plan directly, delivering much better performance in almost all cases. There are just a few changes that need to be made:

- Enable the In-Memory Option. The In-Memory capability is already in the database software (no relinking necessary) but it is disabled by default. After licensing the option, you enable it by setting the amount of the SGA you want to use for the In-Memory store. Remember to ensure that the SGA is large enough to cover the existing memory areas as well as the In-Memory store. This involves setting a few database initialization parameters.
- Enable Adaptive Plans. To tell the optimizer that you now want it to take the In-Memory option into account, you need to enable adaptive plans. This is flexible: you can turn off the In-Memory support without changing the In-Memory settings themselves.
- Decide the Objects to Load into Memory. Now that the In-Memory option is enabled, the next step is to decide what is actually loaded into memory. Oracle provides an In-Memory Advisor that analyzes workloads to make suggestions.
- Alter Objects to Load into Memory. Create the SQL DDL statements that instruct the database to load the chosen objects into memory. This includes priority and compression options for the objects, to maximize the flexibility of the option. The In-Memory Advisor can be configured to generate these statements from its analysis.

No changes to the code are necessary to use the option to speed up common queries in the products and analytical queries. A new Implementing Oracle In-Memory Option (Doc Id: 2404696.1) whitepaper, which outlines details of this process as well as specific guidelines for implementing this option, has been published and is available from My Oracle Support. PS. The Oracle In-Memory Option has been significantly enhanced in Oracle Database 18c.
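The steps above can be sketched as the kind of statements involved. The snippet below generates illustrative examples: the table name, SGA sizing and priority/compression choices are hypothetical, and real values should come from the In-Memory Advisor and the whitepaper referenced above.

```python
# Sketch: generate In-Memory enablement statements. The table name, SGA
# sizing and priority/compression choices are illustrative only; use the
# In-Memory Advisor recommendations for real values.
def enable_inmemory_statements(inmemory_size="4G"):
    """Initialization parameter changes enabling the In-Memory store."""
    return [
        f"ALTER SYSTEM SET INMEMORY_SIZE = {inmemory_size} SCOPE=SPFILE;",
        # Adaptive plans let the optimizer react to in-memory data.
        "ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_PLANS = TRUE SCOPE=BOTH;",
    ]

def inmemory_table_ddl(table, priority="HIGH", compression="FOR QUERY LOW"):
    """DDL instructing the database to populate a table into the column store."""
    return (f"ALTER TABLE {table} INMEMORY "
            f"MEMCOMPRESS {compression} PRIORITY {priority};")

for stmt in enable_inmemory_statements():
    print(stmt)
print(inmemory_table_ddl("CI_BILL"))  # hypothetical example table
```

Because the population priority and compression level are per object, the DBA can keep hot operational tables uncompressed for speed while loading colder analytical tables with heavier compression.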


Data Management with Oracle Utilities products

One of the most common questions I receive is how to manage data volumes in the Oracle Utilities products. The Oracle Utilities products are designed to scale no matter how much data is present in the database, but obviously the storage costs and management of large amounts of data are not optimal. A few years ago we adopted the Information Lifecycle Management (ILM) capabilities of the Oracle Database as well as developed a unique spin on the management of data. Like biological life, data has a lifecycle. It is born when it is created, it has an active life while the business uses or manipulates it, it goes into retirement but is still accessible, and eventually it dies when it is physically removed from the database. The length of that lifecycle will vary from data type to data type and implementation to implementation. The length of the life is dictated by its relevance to the business, company policies and even legal or government legislation. The data management (ILM) capabilities of Oracle Utilities take this into account:

- Data Retention Configuration. The business configures how long the active life of each data type is for their business. This defines what is called the Active Period. This is when the data needs to be in the database and accessible to the business for update and active use.
- ILM Eligibility Rules. Once the data retention period is reached, before the data can enter retirement, the system needs to know that anything outstanding, from a business perspective, has been completed. This is the major difference from most data management approaches. I hear DBAs saying that they would rather the data just be deleted after a specific period. Whilst that would cover most situations, it would not cover a situation where the business is not finished with the data. Let's explain with an example. In CCB, customers are billed and you can also record complaints against a bill if there is a dispute.
Depending on the business rules and legal processes, an old bill may be in dispute. You should not remove anything related to that bill until the complaint is resolved, regardless of its age. Legal issues can be drawn out for lots of reasons. If you used a retention rule only, then the data used in the complaint would potentially be lost. In the same situation, the base ILM Eligibility rules would detect something outstanding and bypass the applicable records. Remember, these rules are protecting the business and ensuring that the ILM solution adheres to the complex rules of the business.
- ILM Features in the Database. Oracle, like a lot of vendors, introduced ILM features into the database to help, what I like to call, "storage manage" the data. This provides a set of flexible options and features to give database administrators a full range of possibilities for their data management needs. Here are the capabilities (refer to the Database Administration Guide for details of each):
- Partitioning. One of the most common capabilities is the Partitioning option. This allows a large table to be split up, storage-wise, into parts or partitions using a partitioned tablespace. This breaks the table into manageable pieces and allows the database administrator to optimize the storage using hardware and/or software options. Some hardware vendors have inbuilt ILM facilities, and this option allows you to target specific data partitions at different hardware capabilities or just split the data into tranches (for example, to separate the retirement stages of data). Partitioning is also a valid option if you want to use tiered hardware storage solutions to save money. In this scenario you would put the less-used data on cheaper storage (if you have it) to save costs. For partitioning advice, refer to the product DBA Guides, which outline the most common partitioning schemes used by customers.
- Advanced Compression.
One of the popular options is the Advanced Compression option. This allows administrators to set compression rules against the database based upon data usage. The compression is transparent to the product, and compressed data can be co-located with uncompressed data with no special processing needed by the code. The compression covers a wide range of techniques, including CLOB compression as well as data compression. Customers using Oracle Exadata can also use Hybrid Columnar Compression (HCC) for hardware-assisted compression for greater flexibility.
- Heat Map. One of the features added in Oracle Database 12c and above to help DBAs is the Heat Map. This is a facility where the database tracks the usage patterns of the data in your database and gives you feedback on the activity of the individual rows. This is an important tool, as it helps the DBA identify which data is actually being used by the business and is useful for determining what is important to optimize. It is even useful in the active period, to determine which data can be safely compressed because it has reduced update activity against it. It is a useful tool and is part of the autonomous capabilities of the database.
- Automatic Data Optimization. Automatic Data Optimization (ADO) is a feature of the database that allows database administrators to implement rules to manage storage based upon various metrics, including the Heat Map. For example, the DBA can put in a rule that says if data in a specific table is not touched for X months then it should be compressed. The rules cover compression, partition movement, storage features etc. and can be triggered by the Heat Map or any other valid metric (even SQL procedure code can be used).
- Transportable Tablespaces. One of the most expensive things you can do in the database is issue a DELETE statement.
To avoid this in bulk in any ILM-based solution, Oracle offers the ability to use the Partitioning option and create a virtual trash bin via a transportable tablespace. Using ADO or other capabilities you can move data into this tablespace and then, using basic commands, switch off the tablespace to do bulk removal quickly. An added advantage is that you can archive that tablespace and reconnect it later if needed.

The Oracle Utilities ILM solution is comprehensive and flexible, combining an aspect for the business to define their retention and eligibility rules with the various ILM capabilities in the database for the database administrator to factor in their individual site's hardware and support policies. It is not as simple as removing data in most cases, and the Oracle Utilities ILM solution reduces the risk of managing your data, taking into account both your business and storage needs. For more information about the Oracle Utilities ILM solution, refer to the ILM Planning Guide (Doc Id: 1682436.1) available from My Oracle Support and read the product DBA Guides for product-specific advice.
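The retention-plus-eligibility idea above can be shown as a toy sketch: a record retires only when it is both past its retention period and has nothing outstanding against it. The record fields and the dispute check are hypothetical; the product's actual eligibility rules are delivered as algorithms.

```python
from datetime import date

# Toy sketch of retention + eligibility. Field names and the dispute check
# are hypothetical; real eligibility rules ship as product algorithms.
RETENTION_DAYS = 730  # e.g. a two-year active period for bills

def ilm_eligible(record, today):
    """Eligible for retirement only when past the retention period AND
    nothing (e.g. a complaint) is outstanding against the record."""
    past_retention = (today - record["created"]).days > RETENTION_DAYS
    nothing_outstanding = not record["open_complaint"]
    return past_retention and nothing_outstanding

today = date(2018, 7, 1)
old_bill = {"created": date(2015, 1, 1), "open_complaint": False}
disputed_bill = {"created": date(2015, 1, 1), "open_complaint": True}

print(ilm_eligible(old_bill, today))       # True: old and settled
print(ilm_eligible(disputed_bill, today))  # False: complaint outstanding
```

Note how the disputed bill is bypassed even though it is well past retention; that is exactly the protection a pure age-based purge would not give.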



EMEA Edge Conference 2018

I will be attending the EMEA Oracle Utilities Edge Conference on 26 - 27 June 2018 in the Oracle London office. This year we are running an extended set of technical sessions around on-premise and Oracle Utilities Cloud Services implementations. This forum is open to Oracle Utilities customers and Oracle Utilities partners. The sessions mirror the technical sessions held at the conference in the USA earlier this year, with the following topics:

- Reducing Your Storage Costs Using Information Life-cycle Management. With the increasing cost of storage, satisfying business data retention rules can be challenging. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
- Integration using Inbound Web Services and REST with Oracle Utilities. Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where each is best suited to be used.
- Optimizing Your Implementation. Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost of Ownership.
- Testing Your On-Premise and Cloud Implementations. Our Oracle testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on-premise and cloud.
- Securing Your Implementations. With the increase in cybersecurity and privacy concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
- Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option. The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
- Developing Extensions using Groovy. Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the ways Groovy can be used in building extensions. Note: This session will be very technical in nature.
- Ask Us Anything Session. Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

Note: These sessions are not recorded, and materials are not distributed outside this forum. This year we have decided to not only discuss capabilities but also give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs, for you to use as a template for on-premise and hybrid implementations. See you there if you are attending. If you wish to attend, contact your local Oracle Utilities sales representative for details of the forum and the registration process.


Why the XAI Staging is not in the OSB Adapters?

With the replacement of the Multi-Purpose Listener (MPL) with the Oracle Service Bus (OSB), with additional OSB Adapters for Oracle Utilities Application Framework based products, customers have asked about transaction staging support. One of the most common questions I have received is why there is no OSB Adapter for the XAI Staging table. Let me explain the logic.

One Pass versus Two Passes. The MPL processed its integration by placing the payload from the integration into the XAI Staging table. The MPL would then process the payload in a second pass. The staging record would be marked as complete or error. The complete ones would need to be removed using the XAI Staging purge process, run separately. You then used the XAI Staging portals to correct any incoming data in error. The OSB Adapters, on the other hand, treat the product as a "black box": they directly call the relevant service (for inbound) and poll the relevant Outbound or NDS table for outbound records. This is a single-pass process rather than the multiple passes the MPL performed, which makes OSB far more efficient and scalable than the MPL.

Error Hospital. The idea behind the XAI Staging is that error records remain there for possible correction and reprocessing. This was a feature of the MPL. In the OSB world, if a process fails for any reason, the OSB can be configured to act as an Error Hospital. This is effectively the same as the MPL, except you can configure the hospital to ignore successful executions, which reduces storage. In fact, OSB has features where you can detect errors anywhere in the process, allowing you to determine which part of the integration was at fault in a more user-friendly manner. OSB effectively already includes the staging functionality, so adding it to the adapters would just duplicate processing. The only difference is that error correction, if necessary, is done within OSB rather than the product.
More Flexible Integration Model. One of the major reasons to move from the MPL to the OSB is the role that the product plays in integration. In the MPL model, any data that was passed to the product from an external source automatically became the responsibility of the product (that is how most partners implemented it). This meant the source system had no responsibility for the cleanliness of its data, since you had the means of correcting the data as it entered the system. The source system could send bad data over and over, and as you dealt with it in the staging area, that would increase costs on the target system. This is not ideal. In the OSB world, you can choose your model. You can continue to use the Error Hospital to keep correcting the data if you wish, or you can configure the Error Hospital to compile the errors and send them back, using any adapter, to the source system for correction. With OSB there is a choice; the MPL did not really give you one.

With these considerations in place, it was not efficient to add an XAI Staging Adapter to OSB, as it would duplicate effort and decrease efficiency, which negatively impacts scalability.


Capacity Planning Connections

Customers and partners regularly ask me questions about capacity for traffic on their Oracle Utilities product implementations and how best to handle their expected volumes. The key to answering this question is to understand a number of key concepts:

- Capacity is related to the number of users, threads etc. (let's call them actors, to be generic) actively using the system. As the Oracle Utilities Application Framework is stateless, actors only consume resources when they are active on some part of the architecture. If they are idle, they are not consuming resources. This is important, as the number of logged-on users does not dictate capacity.
- The goal of capacity planning is to have enough resources to handle peak loads and to minimize capacity when the load drops to the expected minimum. This makes sure you have enough for the busy times but are not wasting resources.
- Capacity is not just online users; it is also batch threads, Web Service clients, REST clients and mobile clients (for mobile application interfaces). It is a combination of each channel, and each channel can be monitored individually to determine its capacity.

This is the advice I tend to give customers who want to monitor capacity:

- For channels using Oracle WebLogic, use Oracle WebLogic MBeans such as ThreadPoolRuntimeMBean (using ExecuteThreads) for protocol-level monitoring. If you want to monitor each server individually to get an idea of capacity, then you might want to try ServerChannelRuntimeMBean (using ConnectionsCount). In the latter case, look at each channel individually to see what your traffic looks like.
- For batch, when using Oracle Coherence, use the inbuilt batch monitoring API (via JMX) and the sum of the NumberOfMembers attribute to determine the active number of threads etc. running in your cluster. Refer to the Server Administration Guide shipped with the Oracle Utilities product for details of this metric and how to collect it.
- For database connections, it is more complex, as connection pools (regardless of the technique used) rely on a maximum size limit. If this limit is exceeded, you want to know how many pending requests are waiting, to detect how much bigger the pool should be. The calculations are as follows:
- Oracle WebLogic JDBC Data Sources: Use the MBean JDBCDataSourceRuntimeMBean with the CurrCapacity + WaitingForConnectionCurrentCount attributes.
- Oracle Universal Connection Pool (12.x): Use the UCP MBean UniversalConnectionPoolMBean with the (maxPoolSize - availableConnectionsCount) + PendingRequestsCount attributes.

Note: You might notice that the database active connections are actually calculations. This is because these metrics capture the capacity within a limit and need to take into account when the limit is reached and requests are waiting.

The above metrics should be collected at peak and non-peak times, either manually or using Oracle Enterprise Manager. Once the data is collected, it is recommended it be used for the following:

- Connection Pool Sizes - The connection pools should be sized using the minimum values experienced, and the maximum values with some tolerance for growth.
- Number of Servers to Set Up - For each channel, determine the number of servers based upon the numbers and the capacity of each server. Typically a minimum of two servers should be set up for a minimal high-availability solution. Refer to the Oracle Maximum Availability Architecture for more advice.
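The two pool calculations above reduce to simple arithmetic, sketched below. The sample readings are hypothetical; in practice the attribute values would be read via JMX from the running servers.

```python
# Sketch of the connection demand calculations above. The sample values
# are hypothetical; in practice they are read via JMX.
def weblogic_demand(curr_capacity, waiting_for_connection):
    """JDBCDataSourceRuntimeMBean: CurrCapacity + WaitingForConnectionCurrentCount."""
    return curr_capacity + waiting_for_connection

def ucp_demand(max_pool_size, available_connections, pending_requests):
    """UniversalConnectionPoolMBean:
    (maxPoolSize - availableConnectionsCount) + PendingRequestsCount."""
    return (max_pool_size - available_connections) + pending_requests

# Hypothetical peak readings:
print(weblogic_demand(curr_capacity=50, waiting_for_connection=5))  # 55
print(ucp_demand(max_pool_size=60, available_connections=10,
                 pending_requests=3))                               # 53
```

The waiting/pending terms are what turn these from simple utilization counts into demand figures: a non-zero value at peak means the pool maximum itself is the bottleneck.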


Managing Your Environments

With the advent of easier and easier techniques for creating and maintaining Oracle Utilities environments, the number of environments will start to grow, increasing costs and introducing more risk into a project. This applies to on-premise as well as cloud implementations, though cloud implementations have more visible costs. An environment is a copy of the Oracle Utilities product (one software installation and one database at a minimum). To minimize your costs and optimize the number of environments to manage, there are a few techniques that may come in handy:

- Each Environment Must Be On Your Plan - Environments are typically used to support an activity or group of activities on some implementation plan. If an environment does not support any activities on a plan, its existence should be questioned.
- Each Environment Must Have An Owner - When I started working in IT a long time ago, the CIO of the company I worked for noticed the company had over 1500 IT systems. To rationalize, he suggested shutting them all down and seeing who screamed to have them back on. That way he could figure out what was important to which part of the business. While this technique is extreme, it makes an interesting point: if you can identify the owner of each environment, that owner is responsible for determining the life of that environment, including its availability and performance. Consider removing environments not owned by anyone.
- Each Environment Should Have a Birth Date And End Date - As an extension to the first point, each environment should have a date when it is needed and a date when it is no longer needed. It is possible for an environment to be perpetual (for example, Production), but generally environments are needed for a particular time frame. For example, you might be creating environments to support progressive builds, where you keep a window of builds (a minimal set, I hope); that would dictate the life-cycle of the environments. This is very common in cloud environments, where you can reserve capacity dynamically, so time limits can be imposed to enforce regular reassessment.
- Reuse Environments - I have been on implementations where individual users wanted their own personal environments. While this can be valid in some situations, it is much better to encourage reuse of environments across users and across activities. If you can plan out your implementation, you can identify how best to reuse environments to save time and costs.
- Ask Questions; Don't Assume - When agreeing to create and manage an environment, ask the above questions (and more) to ensure that the environment is needed and will support the project appropriately for the right amount of time.

I have been on implementations where 60 environments existed initially, and after applying these techniques (and others) we were able to reduce that to around 20, which saved a lot of costs. So why the emphasis on keeping environments to a minimal number, given that the techniques for building and managing them are getting easier? No matter how easy it becomes, keeping an environment consumes resources (computing and people), and keeping environments to a minimum keeps costs minimized. The techniques outlined above apply to Oracle Utilities products but can be applied to other products with appropriate variations. For additional advice on this topic, refer to the Software Configuration Management Series (Doc Id: 560401.1) whitepapers available from My Oracle Support.
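The checks above lend themselves to a simple inventory review. This is a hypothetical sketch: the environment records, field names and `review` function are invented for illustration and are not part of any Oracle tooling.

```python
# Hypothetical sketch: flag environments that break the rules above -
# no owner, or past their end date. All records here are invented.
from datetime import date

environments = [
    {"name": "DEV1", "owner": "build team", "end": date(2018, 12, 31)},
    {"name": "TEST7", "owner": None, "end": date(2018, 6, 30)},
    {"name": "PROD", "owner": "operations", "end": None},  # perpetual
]

def review(envs, today):
    """Return names of environments that should be questioned."""
    flagged = []
    for env in envs:
        if env["owner"] is None:                 # no owner: candidate for removal
            flagged.append(env["name"])
        elif env["end"] is not None and env["end"] < today:  # past its end date
            flagged.append(env["name"])
    return flagged

print(review(environments, today=date(2018, 7, 1)))  # ['TEST7']
```

Running such a review against the plan on a regular cycle is one way to make the "birth date and end date" rule enforceable rather than aspirational.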


Clarification of XAI, MPL and IWS

A few years ago, we announced that XML Application Integration (XAI) and the Multi-Purpose Listener (MPL) were being retired from the product and replaced with Inbound Web Services (IWS) and Oracle Service Bus (OSB) adapters. In the next service pack of the Oracle Utilities Application Framework, XAI and MPL will finally be removed from the product. The following applies:

- The MPL software and the XAI servlet will be removed from the code. This is the final step in the retirement process.
- The tables associated with XAI and MPL will not be removed from the product, for backward compatibility with the newer adapters.
- Maintenance functions that are retained will be prefixed with Message rather than XAI. Menu items that are not retained will be disabled by default. Refer to the release notes of the service packs (latest and past) for details of the menu item changes.

Customers using XAI should migrate to Inbound Web Services using the following guidelines:

- XAI services using the legacy Base and CorDaptix adapters will be automatically migrated to Inbound Web Services. These services will be auto-deployed using the Inbound Web Services Deployment online screen or the iwsdeploy utility.
- XAI services using the Business adapter (sic) can either have their definitions migrated manually to Inbound Web Services, or use a technique similar to the one outlined in Converting your XAI Services to IWS using scripting. Partners should take the opportunity to rationalize their number of web services using the multi-operation capability in Inbound Web Services.
- XAI services using any adapter other than those listed above cannot be migrated, as they are typically internal services for use with the MPL.

Customers using the Multi-Purpose Listener should migrate to Oracle Service Bus with the relevant adapters installed.

There are a number of key whitepapers that can assist in this process:

- Web Services Best Practices (Doc Id: 2214375.1)
- Migrating from XAI to IWS (Doc Id: 1644914.1)
- Oracle Service Bus Integration (Doc Id: 1558279.1)


Advice

Using the Infrastructure Version of Oracle WebLogic for Oracle Utilities Products

When using Oracle Utilities Application Framework V4.3.x with any Oracle Utilities product, you need to use the Oracle Fusion Middleware 12c Infrastructure version of Oracle WebLogic, not the vanilla release of Oracle WebLogic. The Infrastructure version contains the Java Required Files (JRF) profile, which is used by the Oracle Utilities Application Framework to display the enhanced help experience and for standardization within the Framework. The installation experience for the Infrastructure version is the same as for the vanilla Oracle WebLogic version, but it includes the applyJRF profile, which applies the extra functionality and libraries necessary for the Oracle Utilities Application Framework to operate. The Oracle Fusion Middleware 12c Infrastructure version contains the following additional functionality:

- An additional set of Java libraries, typically used by Oracle products to provide standard connectors and integration to Oracle technology.
- Diagnostic frameworks (via the WebLogic Diagnostic Framework) that can be used with Oracle Utilities products to proactively detect issues and provide diagnostic information, reducing problem resolution times. This requires the profile to be installed and enabled on the domain after release. The standard Fusion Diagnostic Framework can be used with Oracle Utilities products.
- Fusion Middleware Control, shipped as an alternative console for advanced configuration and monitoring.

As with all Oracle software, the Oracle Fusion Middleware 12c Infrastructure software is available from the Oracle Software Delivery Cloud.


Advice

Optimizing CMA - Linking the Jobs

One of the recent changes to the Configuration Migration Assistant (CMA) is the ability to configure the individual jobs to work as a group, reducing the amount of time and effort in migrating configuration data from a source system to a target. This is a technique we use in our Oracle Utilities Cloud implementations to reduce costs. Once this configuration is complete, you only have to execute the F1-MGDIM (Migration Data Set Import Monitor) and F1-MGDPR (Migration Data Set Export Monitor) jobs to complete all your CMA needs. The technique is available for Oracle Utilities Application Framework V4.3.0.4.0 and above, using some new batch control features. The features used are changing the Enter algorithms on the state transitions and setting up Post Processing algorithms on the relevant batch controls. The latter kicks off each process within the same execution, removing the need to execute each process individually.

Set Enter Algorithms

The first step is to configure the import process, which is a multi-step process, to auto-transition data where necessary to save time. This is done on the F1-MigrDataSetImport business object by setting the Enter Algorithm on the following states:

Status       Enter Algorithm
PENDING      F1-MGDIM-SJ
READY2COMP   F1-MGOPR-SJ
READY2APPLY  F1-MGOAP-SJ
APPLYING     F1-MGTAP-SJ
READYOBJ     F1-MGOPR-SJ
READYTRANS   F1-MGTPR-SJ

Save the changes to reflect the new configuration.

Set Post Processing Algorithms

The next step is to set the Post Processing algorithms on the import jobs to instruct the Monitor to run multiple steps within its execution:

Batch Control  Post Processing Algorithm
F1-MGOPR       F1-MGTPR-NJ
F1-MGTPR       F1-MGDIM-NJ
F1-MGOAP       F1-MGDIM-NJ (*)
F1-MGTAP       F1-MGDIM-NJ (*)

(*) Note: For multi-lingual solutions, consider adding an additional Post Processing algorithm, F1-ENG2LNGSJ, to copy any missing language entries.

Now you can run the Monitors for Import and Export with minimal interaction, which greatly simplifies the process.

Note: To take full advantage of this configuration, enable Automatically Apply on imports.
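The mappings described above can be captured as plain data for review before keying them in. This is a hypothetical sketch only: the dictionaries mirror the configuration values above, and none of this is an Oracle API.

```python
# Hypothetical sketch: the CMA wiring described above, expressed as plain
# data so the intended configuration can be reviewed or diffed. These are
# NOT Oracle APIs - just the state and batch-control mappings from above.

# Enter algorithms on the F1-MigrDataSetImport business object states
ENTER_ALGORITHMS = {
    "PENDING": "F1-MGDIM-SJ",
    "READY2COMP": "F1-MGOPR-SJ",
    "READY2APPLY": "F1-MGOAP-SJ",
    "APPLYING": "F1-MGTAP-SJ",
    "READYOBJ": "F1-MGOPR-SJ",
    "READYTRANS": "F1-MGTPR-SJ",
}

# Post Processing algorithms on the import batch controls
POST_PROCESSING = {
    "F1-MGOPR": ["F1-MGTPR-NJ"],
    "F1-MGTPR": ["F1-MGDIM-NJ"],
    "F1-MGOAP": ["F1-MGDIM-NJ"],
    "F1-MGTAP": ["F1-MGDIM-NJ"],
}

def multilingual(post_processing):
    """Return a copy with the language-copy algorithm added where flagged (*)."""
    extra = "F1-ENG2LNGSJ"
    return {bc: algs + [extra] if bc in ("F1-MGOAP", "F1-MGTAP") else algs
            for bc, algs in post_processing.items()}
```

Keeping the intended wiring in a reviewable form like this makes it easier to verify each environment against the plan after the configuration is applied.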


Oracle Utilities Application Framework V4.3.0.5.0 Release Summary

The latest release of the Oracle Utilities Application Framework, namely 4.3.0.5.0 (or 4.3 SP5 for short), will be included in new releases of Oracle Utilities products over the next few months. This release is quite diverse, with a range of new and improved capabilities that can be used by implementations of the new releases. The key features included in the release are the following:

- Mobile Framework release - The initial release of a new REST-based channel that allows Oracle Utilities products to provide mobile device applications. This release is a port of the Mobile Communication Platform (MCP), used in the Oracle Mobile Workforce Management product, to the Oracle Utilities Application Framework. This initial release is restricted to allowing Oracle Utilities products to provide mobile experiences for use within an enterprise. As with the other channels in the Oracle Utilities Application Framework, it can be deployed alone or in conjunction with other channels.
- Support for Chrome for Business - In line with Oracle direction, the Oracle Utilities Application Framework supports Chrome for Business as a browser alternative. A new browser policy, in line with Oracle direction, has been introduced to clarify the support arrangement for Chrome and other supported browsers. Check individual product release notes for supported versions.
- Improved Security Portal - To reduce the effort of managing security definitions within the product, the application service portal has been extended to show the secured objects, or the objects that an application service is related to.
- Attachment Changes - In the past, adding attachments to an object required custom UI Maps to link attachment types to objects. In this release, a generic zone has been added, reducing the need for custom UI Maps. The attachment object now also records the extension of the attachment, to reduce issues where an attachment type can have multiple extensions (e.g. DOC vs DOCX).
- Support for File Imports in Plug-In Batch - In past releases, Plug-In Batch was introduced as a configuration-based approach that replaces the need for Java programming in batch. Previously, SQL processing and file exports were supported. In this release, importing files in CSV, fixed format or XML format is also supported in Plug-In Batch (using Groovy-based extensions). Samples are supplied with the product that can be copied and altered accordingly.
- Improvements in identifying related To Do's - The logic determining related To Do's has been enhanced with additional mechanisms for finding related To Do's, to improve the closing of related work. This allows a wider range of To Do's to be found than previously.
- Web Service Categories - To aid in API management (e.g. when using Integration Cloud Service and other cloud services), Web Service categories can be attached to Inbound Web Services, Outbound Message Types and legacy XAI services that are exposed via Inbound Web Services. A given web service or outbound message can be associated with more than one category. Categories are supplied with the product release, and custom categories can be added.
- Extended Oracle Web Services Manager Support - In past releases, Oracle Web Services Manager could provide additional transport and message security for Inbound Web Services. In this release, Oracle Web Services Manager support has been extended to include Outbound Messages and REST services.
- Outbound Message Payload Extension - In this release it is possible to include the Outbound Message Id as part of the payload, as a reference for use in the target system.
- Dynamic URL support in Outbound Messages - In the past, Outbound Message destinations were static for the environment. In this release, the destination URL can vary according to the data, or can be assembled dynamically and programmatically if necessary.
- SOAP Header Support in Outbound Messages - In this release it is possible to dynamically set SOAP Header variables in Outbound Messages.
- New Groovy Imports Step Type - A new step type has been introduced to define classes to be imported for use in Groovy members. This promotes reuse and allows coding without the fully qualified package name in Groovy Library and Groovy Member step types.
- New Schema Designer - A redesigned Schema Editor has been introduced to reduce the total cost of ownership and improve schema development. Color coding is now included in the raw-format editor.
- Oracle JET Library Optimizations - To improve integration with the Oracle JET libraries used by the Oracle Utilities Application Framework, a new UI Map fragment has been introduced for inclusion in any JET-based UI Map, to reduce maintenance costs.
- YUI Library Removal - With the desupport of the YUI libraries, they have been removed from this release of the Oracle Utilities Application Framework. Any custom code directly referencing the YUI libraries should use the equivalent Oracle Utilities Application Framework function.
- Proxy Settings now at JVM level - In past releases, proxy settings were required on each individual connection where needed. In this release, the standard HTTP proxy JVM options are supported at the container/JVM layer, to reduce maintenance costs.

This is just a summary of some of the new features in the release. A full list is available in the release notes of the products using this service pack. Note: Some of these enhancements have been backported to past releases; check My Oracle Support for those patches. Over the next few weeks, I will be writing articles about some of these enhancements to illustrate the new capabilities.


Announcements

Edge Conference 2018 is coming - Technical Sessions

It is that time of year again: Customer Edge conference time. This year we will once again hold a Technical stream, which focuses on the Oracle Utilities Application Framework and related products. As usual, I will be holding the majority of the sessions at the various conferences. The sessions this year are focused on giving valuable advice as well as a window into our future plans for the various technologies we are focusing upon. As usual, there will be a general technical session covering our road map as well as a specific set of sessions targeting important topics. The technical sessions planned for this year include:

- Reducing Your Storage Costs Using Information Life-cycle Management - With increasing storage costs, satisfying business data retention rules can be challenging. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
- Integration using Inbound Web Services and REST with Oracle Utilities - Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all of these facilities and where each is best used.
- Optimizing Your Implementation - Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost of Ownership.
- Testing Your On-Premise and Cloud Implementations - Our Oracle testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on-premise and the cloud.
- Securing Your Implementations - With the increase in cybersecurity concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
- Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option - The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
- Mobile Application Framework Overview - The Oracle Utilities Application Framework has introduced a new Mobile Framework for use in the Oracle Utilities products. This session gives an overview of the mobile framework capabilities for future releases.
- Developing Extensions using Groovy - Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the ways Groovy can be used in building extensions. Note: This session will be very technical in nature.
- Ask Us Anything Session - Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

This year we have decided not only to discuss capabilities, but also to give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs, for you to use as a template for on-premise and hybrid implementations. For customers and partners interested in attending the USA Edge Conference, registration is available.
