Wednesday Dec 24, 2014

A prosperous New Year

It has been a very, very busy 2014, and 2015 is shaping up to be a bumper year for a number of the products we deliver. I have not updated this blog as much as I wanted to over the last few months for various reasons, mainly because I have been very busy getting new versions and new products out the door. More about that in the new year.

2015 is shaping up to be a stellar year for the products I manage personally with announcements and exciting new features I am sure customers and partners will embrace.

I wish all my readers, our partners and our customers happy holidays and a prosperous new year.

Monday Jul 07, 2014

Introduction to BatchEdit

BatchEdit is a new wizard-style utility to help you build a batch architecture quickly with little fuss and minimal technical knowledge. Customers familiar with the WLST tool shipped with Oracle WebLogic will recognize the style of utility I am talking about. The idea behind BatchEdit is simple: it provides a simpler method of configuring batch by boiling the process down to its simplest form. The power of the utility lies in the utility itself and the set of pre-optimized templates shipped with it, which generate as much of the configuration as possible while still allowing a flexible approach to configuration.

First of all, the BatchEdit utility, shipped with OUAF 4.2.0.2.0 and above, is disabled by default for backward compatibility. To enable it, execute the configureEnv[.sh] -a utility, set option 50 (Enable Batch Edit Functionality) to true and save the changes. The facility is then available for use.
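For example, a minimal session to enable the facility would look something like the sketch below. The command itself is as documented above; the placeholder lines simply restate the menu steps, and the exact menu wording depends on your OUAF version:

$ configureEnv.sh -a
  ... select option 50 ...
  ... set Enable Batch Edit Functionality to true ...
  ... save the changes and exit ...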

Once enabled, the BatchEdit facility can be executed using the bedit[.sh] <options> utility, where <options> are the options you want to use with the command. The most useful are -h and --h, which display the help for the command options and the extended help respectively. You will find lots of online help in the utility; just typing help <topic> gives you an explanation and further advice on a specific topic.
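For example (commands only, as documented above; the help text itself is not reproduced here):

$ bedit.sh -h
$ bedit.sh --h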

The next step is using the utility. The best approach is to think of the configuration as a set of layers. The first layer is the cluster. The next layer is the definition of threadpools in that cluster, and then the submitters (or jobs) that are submitted to those threadpools. Each of those layers has configuration files associated with it.

Concepts

Before exploring the utility, let's discuss a few basic concepts:

  • BatchEdit allows "labels" to be assigned to each layer. This means you can group similarly configured components together. For example, say you wanted to set up a specific threadpoolworker for a specific set of processes, and that threadpoolworker had unique characteristics such as unique JVM settings. You can create a label template for that set of jobs and build it dynamically. At runtime you would tell the threadpoolworker[.sh] command to use that template (using the -l option); see the sketch after this list. For submitters, the label is the Batch Code itself.
  • BatchEdit tracks whether changes are made during a session. If you try to exit without saving, a warning is displayed to remind you of unsaved changes. Customers of the Oracle Enterprise Manager pack for Oracle Utilities will be able to track configuration file version changes within Oracle Enterprise Manager, if desired.
  • BatchEdit essentially edits existing configuration files (e.g. tangosol-coherence-override.xml for the cluster, threadpoolworker.properties for threadpoolworkers, etc.). To ascertain which file is being configured during a session, use the what command.
  • BatchEdit will only show the valid options for the scope of the command and the template used. This also applies to the online help, which is context sensitive.
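The following is an illustrative sketch only (output omitted). It uses the supplied cache threadpool template as the label; the what command confirms which configuration file and template are in use, and threadpoolworker[.sh] -l starts a worker with that labelled configuration:

$ bedit.sh -w -l cache
> what
> exit
$ threadpoolworker.sh -l cache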

Using the utility

The BatchEdit utility has two distinct modes to build and maintain various configuration files.

  • Initiation Mode - The first mode of the utility is to invoke it with the scope or configuration file to create and/or manage. This is done by specifying the valid options at the command line. This mode is recorded in a preferences file to remember specific settings across invocations. For example, once you decide which cluster type you want to adopt, the utility will remember this preference and show the options for that preference only. It is possible to switch preferences by re-invoking the command with the appropriate options.
  • Edit Mode - Once you have invoked the command, a list of valid settings is presented which can be altered using the set command. For example, the set port 42020 command sets the port parameter to 42020. You can add new sections using the add command, and so forth. Online help shows the valid commands. The most important is the save command, which saves all changes.

Process for configuration

To use the command effectively, here is a summary of the process to follow:

  • Decide your cluster type first. Oracle Utilities Application Framework supports multi-cast (mc), uni-cast (wka) and single-server (ss) clusters. Use the bedit[.sh] -c [-t wka|mc|ss] command to set and manage the cluster parameters. For example:
$ bedit.sh -c
Editing file /oracle/FW42020/splapp/standalone/config/tangosol-coherence-override.xml using template /oracle/FW42020/etc/tangosol-coherence-override.ss.be

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (1)
  mode (dev)

> help loglevel

loglevel
--------
Specifies which logged messages will be output to the log destination.

Legal values are:

  0    - only output without a logging severity level specified will be logged
  1    - all the above plus errors
  2    - all the above plus warnings
  3    - all the above plus informational messages
  4-9  - all the above plus internal debugging messages (the higher the number, the more the messages)
  -1   - no messages

> set loglevel 2

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (2)
  mode (dev)

> save
Changes saved
> exit
  • Set up your threadpoolworkers. For each group of threadpoolworkers use the bedit[.sh] -w [-l <label>] command, where <label> is the group name. We supply default (no label) and cache threadpool templates. For example:
$ bedit.sh -w
Editing file /oracle/FW42020/splapp/standalone/config/threadpoolworker.properties using template /oracle/FW42020/etc/threadpoolworker.be

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)

> set pool.2 poolname FRED

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)

> add pool

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (DEFAULT)
      threads (5)

> set pool.3 poolname LOCAL

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (5)

> set pool.3 threads 0

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (0)

>
  • Set up your global submitter settings using the bedit[.sh] -s command, or batch-job-specific settings using the bedit[.sh] -b <batchcode> command, where <batchcode> is the Batch Control Id of the job (a sketch of a global settings session follows the example). For example:
$ bedit.sh -b F1-LDAP
File /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties does not exist - create? (y/n) y
Editing file /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties using template /oracle/FW42020/etc/job.be

Batch Configuration Editor 1.0 [job.F1-LDAP.properties]
-------------------------------------------------------

Current Settings

  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (SYSUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>
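The global submitter settings mentioned above are edited in the same way. The following is a minimal sketch only; the actual file and template names are reported by the editor itself, and the what command simply confirms them before exiting:

$ bedit.sh -s
> what
> exit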

The BatchEdit facility is an easier way of creating and maintaining the batch configuration files with very little effort. More examples, including how to migrate to this new facility, are documented in the Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) whitepaper available from My Oracle Support.

Friday Jul 04, 2014

New ConfigTools Training available on Youtube

The Oracle Public Sector Revenue Management product team has released a series of training videos for the Oracle Utilities Application Framework ConfigTools component. This component allows customers to use meta data and scripting to enhance and customize Oracle Utilities Application Framework based solutions without the need for Java programming.

The series uses examples and each recording is around 30-40 minutes in duration.

The channel for the videos is Oracle PSRM Training. The videos are not a substitute for the ConfigTools training courses available through Oracle University, but they are useful for people trying to grasp individual concepts while getting an appreciation for the power of this functionality.

At the time of publication, the recordings available are:


    Tuesday Feb 18, 2014

    Using Oracle Test Data Manager with OUAF

    The Oracle Test Data Management Pack allows the quick and safe copying of a subset of data from a production database to a non-production database. The pack can be used standalone or in association with the Oracle Data Masking Pack to comply with data privacy and data protection rules mandated by regulation or policy that restrict the use of actual customer data for non-production purposes.

    Oracle Utilities Application Framework based products can utilize this pack using the following technique:

    • A copy of the production schema, with no data, should be created on the production database. It is important not to load the data, as an empty schema keeps this step quick and aids the creation of the model. A copy of the schema can be built using Oracle SQL Developer or using tools included in Oracle Database Control/Oracle Database 12c EM Express (see the sketch after this list).

    Note: Oracle highly recommends not using the live production schema for the definition process.

    • Create an Application Model on the copied and prepared schema using the instructions in the Data Discovery And Modeling documentation.
    • Optionally, remove any tables or objects you do not want managed with the Oracle Test Data Management Pack Application Data Model you just loaded. For example, you might want to remove administration tables to optimize the time for the extract. This can be done within the Oracle Test Data Management Pack interface available within Oracle Enterprise Manager.
    • The Application Data Model can now be used against any production schema (as the source) at execution time.
    • Define the data subset you wish to extract as outlined in the Data Subsetting documentation. This can be a fixed subset, a percentage, or a complex SQL condition that determines the active subset to extract.
    • Optionally, identify the sensitive data you want to mask and associate the formatting to be used for handling the masked data. This will automatically mask the data in the extract as outlined in the Masking Sensitive Data documentation.
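    One possible way to build the empty copy of the schema described in the first step is an Oracle Data Pump metadata-only export and import. This is a sketch only, not the documented procedure; the schema names, credentials and directory object below are placeholders for your own values:

    $ expdp system/<password> schemas=CISADM content=metadata_only directory=DATA_PUMP_DIR dumpfile=cisadm_md.dmp
    $ impdp system/<password> remap_schema=CISADM:CISADM_COPY directory=DATA_PUMP_DIR dumpfile=cisadm_md.dmp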

    It is recommended that the Oracle Test Data Management Pack be used only against production environments to minimize licensing arrangements.

    Note: If there is a need to comply with local privacy and data protection laws, the Oracle Data Masking Pack should also be used with the Oracle Test Data Management Pack.

    Note: This technique can be used with any release of the products or any release of the Oracle Utilities Application Framework.

    Friday Aug 09, 2013

    Batch Best Practices Updated

    The Batch Best Practices whitepaper has been updated with advice about memory management and custom log file naming features.

    It is available in Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) available from My Oracle Support.

    Friday Jan 11, 2013

    Oracle Access Manager Integration Landing Sample

    In the Oracle Identity Management Suite Integration with Oracle Utilities Application Framework based products (Doc Id: 1375600.1) whitepaper the Oracle Access Manager integration section mentions a custom landing page that can be used to complete the integration.

    A sample landing page is now available from My Oracle Support for customers to use as a basis for their own landing pages. It is located within My Oracle Support under Sample Code oamlanding.jsp - refer to the Instructions in Note 1375600.1 (Doc Id 1518856.1).

    This is a sample only and should be tested and modified to suit your individual site needs. Refer to Doc Id 1375600.1 for instructions on how to use the landing page.

    Configuration Migration Assistant Part 1 - Features

    One of the main features of Oracle Utilities Application Framework V4.2.0.0.0 is the Configuration Migration Assistant. The Configuration Migration Assistant is a new facility to allow customer-owned configuration data to be migrated from one environment to another. Customers using Oracle Utilities Customer Care and Billing and Oracle Enterprise Taxation and Policy Management will use this new facility instead of the Configuration Lab for versions of those products using Oracle Utilities Application Framework V4.2.0.0.0 and above.

    The features of this new facility are:

    • Meta Data Driven Migration - The Configuration Migration Assistant uses the meta data within the product to understand the data and the relationships. A set of new migration objects has been added to define reusable data relationships, the sequence of migration and the groups of data to migrate.
    • Reusability - The Configuration Migration Assistant emphasizes reusability across migrations by providing reusable migration plans, allowing customers and partners to combine base and custom migration plans into reusable migrations.
    • Simple design - The Configuration Migration Assistant simplifies the specification and execution of migrations. No technical setup outside the product is required.
    • Support for different relationship types - Relationships between objects can be expressed using Constraints, SQL statements or XPATH statements. This allows Configuration Migration Assistant to support the wide variety of configuration objects in the products.
    • Export Data to a File - The export process now exports data to a file rather than using database links. This allows the export to be checked in to a code repository to match the code components involved in a configuration. It also allows the exports to be reused and imported across many environments, and even to be used to roll back configuration changes on a global basis within an environment.
    • Approval/Rejection of Changes - Individual changes can be forced to be approved before they are applied, giving customers fine-grained control over changes in their target environments.
    • Data Manipulation upon Import - Data can be manipulated upon import, using algorithms, to avoid configuration conflicts. For example, when importing Batch Controls the batch run numbers can be manipulated upon import to ensure they are consistent in the target environment.

    Over the next few weeks there will be a series of articles on this blog highlighting the Configuration Migration Assistant and its features and configuration. For more details about the facility refer to Configuration Migration Assistant Overview (Doc Id: 1506830.1) available from My Oracle Support.

      Monday Oct 15, 2012

      Internal Data Masking

      By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities within the product can be used to mask the data in an appropriate fashion.

      The inbuilt Data Masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements:

      • An algorithm, of type F1-MASK, is specified to configure the elements of the data masking including the masking character, number of suffix characters left unmasked, characters to ignore in the string, the application service, security type and authorization levels applicable to the mask.
      • A Data Masking Feature Configuration is created to define where the algorithm applies.
      • The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier).
      • The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not.

      For example, say there is a field called CCNBR in the product which holds credit card numbers. I would create an algorithm, say CMformatCC, to mask the credit card number leaving the last few digits unmasked (as the standard in most systems dictates). I would specify on the Field Mask the following:

      field="CCNBR", alg="CMformatCC"

      On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card number unmasked.

      To finish the configuration and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and the appropriate authorization level for each group.

      Whenever a user accesses the CCNBR field, any maintenance screens, searches and other screens that use the CCNBR meta data definition would then mask the value according to the user group the user is a member of.
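      For illustration only, assuming a masking character of '*' and the last four digits left unmasked (the actual behaviour depends entirely on the algorithm parameters and the user's authorization level):

      Stored value    : 4111111111111111
      Displayed value : ************1111   (user group without the unmask authorization level)
      Displayed value : 4111111111111111   (user group granted the unmask authorization level)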

      Refer to the documentation supplied with F1-MASK algorithm type entry for more examples of what is possible.

      Friday Oct 28, 2011

      Sending JMS messages in real time

      One of the features of the framework is the ability to send data to a JMS resource (Queue or Topic) from a business event in the product in real time. The idea is that some business event occurs, such as data changing or a status changing, which is picked up by an algorithm or other program object, and an outbound message is created to be sent by the JMS Real Time Senders we supply in the product. The real time sender deposits the message on the JMS resource within the transaction context.

      To do this, a number of objects need to be set up:

      • Create an XAI JNDI Server entry with the JNDI server for your JMS resources.
      • Create an XAI JMS Connection entry with your JMS connection factory, referring to the XAI JNDI Server you created.
      • Create an XAI JMS Queue/Topic entry to connect to the JMS Connection and JNDI Server. You will also need to provide credentials on the Context tab if your JMS resource is secured.
      • Configure an XAI JMS Sender entry of type RTJMSQSNDR referring to the JMS Connection and JMS Queue/Topic you defined.
      • Create an Outbound Message Type entry to define your transaction and refer to the object you want to use as the primary object to send the data out on.
      • Create an External System definition which links the Outbound Message Type to the JMS Sender you created. This tells the product to send messages of that type to the JMS resources you defined.
      • Decide the appropriate algorithm to use to detect the business event that occurs. Of course you can use class extensions or server user exits as well as objects to detect a business event, but algorithms are the most commonly used.
      • Build a Business Object schema against the Outbound Message (F1-OUTMSG) maintenance object. In the XML_SOURCE XML object you can define the structure you want to send to the JMS resource. Specify the External System in the NT_XID_CD element and the Outbound Message Type in the OUTMSG_TYPE_CD element as defaults. The schema should be somewhat similar to the example below (this is used to send To Do status messages to BAM, for example):

      <schema>
          <outboundMessageId mapField="OUTMSG_ID"/> 
          <externalSystem mapField="NT_XID_CD" fkRef="F1-ESTMP" default="XXXSYS"/> 
          <outboundMessageType mapField="OUTMSG_TYPE_CD" fkRef="F1-OMTYP" default="XXXMSG"/>  
          <todoPackage type="group" mapXML="XML_SOURCE">
              <toDoId mdField="TD_ENTRY_ID"/> 
              <toDoType mdField="TD_TYPE_CD"/> 
              <toDoRole mdField="ROLE_ID"/> 
              <toDoEntryStatus mdField="ENTRY_STATUS_FLG"/>
          </todoPackage>
      </schema>

      • Create the algorithm to populate the elements of the schema above and use invokeBO to create the outbound message. This will then appear in the JMS Queue or Topic in real time. Note: If the JMS Queue/Topic is not available at runtime then the transaction may fail as it assumes the JMS is now part of the transaction.
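      For illustration, the XML_SOURCE portion of a message built from the schema above might look something like the following; the element values are purely hypothetical:

      <todoPackage>
          <toDoId>12345678901234</toDoId>
          <toDoType>CM-RVTD</toDoType>
          <toDoRole>CM-REVIEW</toDoRole>
          <toDoEntryStatus>C</toDoEntryStatus>
      </todoPackage>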

      The first part of the process is the definition of the JMS resources within XAI. The Sender, Business Object and Algorithm are the key elements of the interface. The data for the outbound message is not restricted to one object; your algorithm can get data from as many objects as you want to build the message. The key is to define the message elements in the XML_SOURCE XML element.

      For more information about JMS Outbound refer to the Oracle WebLogic JMS Integration and Oracle Utilities Application Framework (Doc Id: 1308181.1) whitepaper available from My Oracle Support.

      Tuesday Sep 27, 2011

      Database Technology Integration

      One of the features of the Oracle Utilities Application Framework is the ability to integrate external pieces of technology across the architecture to augment a solution at a client site. This blog entry summarizes some of the Oracle database features and products that can be used with the Oracle Utilities Application Framework.

      • Real Application Clusters - Oracle Utilities Application Framework V2.1 and above can use Oracle RAC to provide additional levels of performance and availability. In Oracle Utilities Application Framework V4.x we added native support for the Fast Connection Failover (FCF) facility, via the Oracle Notification Service (ONS), in Oracle Database 11g and above.
      • Real Application Testing - If you are planning a database upgrade you can use Oracle's Real Application Testing facility to capture product SQL to replay on an upgraded database to assess the impact of the upgrade.
      • Data Guard/Active Data Guard/Streams/Goldengate - Oracle offers a comprehensive set of technologies that can provide high availability and business continuity solutions for the database.
      • Advanced Compression - With increased volumes and history to store, the ability to use storage efficiently may become a requirement. Using Advanced Compression on the Oracle Utilities Application Framework based product can reduce your disk and memory footprint for data. 
      • In Memory Database Cache - Whilst Oracle ships with an effective cache (SGA) it is possible to get more out of the cache using the In Memory Database Cache option.
      • Partitioning - Splitting data into more manageable chunks (partitions) can mean easier management of the data and introduce information lifecycle management into your implementation.
      • Database Vault - By default, DBA and database SYSDBA users have access to all application data. If this is a concern, you can use Database Vault to restrict this and similar access whilst retaining appropriate security levels for the product users.
      • Transparent Data Encryption - If protecting data at the disk (and subsequently backup) level is important, then Transparent Data Encryption can be implemented to provide protection at that level.
      • Audit Vault - Whilst the Oracle Utilities Application Framework contains an audit facility it is possible to centralize the audit information across an enterprise using Audit Vault.

      The features summarized above are just a subset of what is possible with the Oracle Database and Oracle Utilities Application Framework based products. Any combination of the above can be used to satisfy a range of business requirements.

      Friday Jun 24, 2011

      Extended JMS Support

      In a previous post I discussed the real time JMS integration we added in FW4.1 and also as patches for FW2.2. There are some additional aspects of this integration I did not mention which may be of interest:

      • JMS Topic Support - In the post I concentrated on JMS Queue support but failed to mention that the MDB and the outgoing real time JMS sender also support JMS Topics. JMS Queues are typically used for point-to-point decoupled integration, while JMS Topics are used for hub-style integration using publish and subscribe.
      • JMS Selector Support - By default the MDB will process every message from a JMS resource (Queue or Topic). If you want to filter JMS messages selectively, you can use JMS Selectors to specify the conditions under which the MDB processes a message. JMS Selectors allow filters to be specified on elements in the JMS header and JMS message properties using SQL-like syntax (see the example after this list). Note: JMS Selectors do not support filters on the body elements.
      • JMS Header Support - It is possible to place custom information in the JMS Header and JMS Message Properties for outgoing messages (so that other applications can use JMS selectors if necessary as well). This is only available when installing Patches 11888040 (FW4.1) and 11850795 (FW2.2).
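      For illustration, a selector that only passes higher-priority messages of a particular (hypothetical) message type could be expressed using the standard JMS header fields like this:

      JMSType = 'CM-TodoUpdate' AND JMSPriority > 4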

      These facilities, coupled with the JMS facilities described in previous posts, give the product JMS integration capabilities that can be used with configuration rather than coding. Of course, the JMS facility I have described can also be used in conjunction with SOA Suite to provide greater levels of traceability and management.

      Thursday Jun 16, 2011

      Script Cache Enhancements for OUAF V2.2 and OUAF V4.1

      With the popularity of the Configuration Tools facility within the product for customer extensions, the increased load of XPath processing may cause memory issues under particular user transaction conditions (in particular, high volume patterns). As with most technology in the Oracle Utilities Application Framework, the XPath statements used by the Configuration Tools are cached for improved performance, and increased load on the cache may cause memory issues at higher volumes.

      To minimize this, the Oracle Utilities Application Framework has introduced two new settings in the spl.properties file for the Business Application Server, where the dimensions of the XPath statement cache are defined. These settings allow a site to control the XPath cache so that commonly used XPath statements are cached while the cache size is kept optimal (to help prevent memory issues).

      The settings are as follows (a sample spl.properties fragment follows the list):

      • com.oracle.XPath.LRUSize - The maximum number of XPath queries to hold in the cache across all threads. A zero (0) value indicates no caching, a minus one (-1) value indicates unlimited, and other positive values indicate the number of queries stored in the cache. The cache is managed on a least recently used basis (for memory requirements, assume approximately 7k per query). The default in the template is 2000 queries.
      • com.oracle.XPath.flushTimeout - The time, in seconds, after which the cache is automatically cleared. A zero (0) value indicates the cache is never automatically flushed and a positive value indicates the number of seconds. The default in the template is 86400 seconds (24 hours).
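      For example, a spl.properties fragment using the template defaults would look like the following sketch (the values shown are the documented defaults; tune them using the guidelines below):

      # XPath statement cache dimensions (Business Application Server)
      com.oracle.XPath.LRUSize=2000
      com.oracle.XPath.flushTimeout=86400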

      In most cases the defaults are sufficient, but they can be altered using the following guidelines:

      • If there are memory issues (e.g. out of memory errors), decreasing com.oracle.XPath.LRUSize or com.oracle.XPath.flushTimeout may help. com.oracle.XPath.LRUSize has a greater impact on memory than com.oracle.XPath.flushTimeout.
      • If decreasing the value of com.oracle.XPath.LRUSize causes performance issues, consider changing only com.oracle.XPath.flushTimeout initially and ascertain whether that works for your site.
      • There are no strict guidelines on values for both parameters as cache performance is subject to the user traffic profile and the amount and types of XPath queries executed. Experimentation will assist in determining the right mix of both settings for your site.

      Note: This facility is available for Oracle Utilities Application Framework V2.2 and above after installing patches 11885007 (for V2.2) or 12357553 for (V4.1) from My Oracle Support.

      Friday Aug 27, 2010

      New User Interface - Summary


      Wednesday Aug 25, 2010

      Important OUAF Post Service Pack 8 fix


      Tuesday Aug 24, 2010

      Single Fix Lists in Service Packs

      About

      Anthony Shorten
      Hi, I am Anthony Shorten. I am the Principal Product Manager for the Oracle Utilities Application Framework. I have been working in the IT business for over 20 years and am the author of many technical whitepapers, manuals and training materials. I am one of the product managers working on strategy and designs for the next generation of the technology used for the Utilities and Tax markets. This blog is provided to announce new features, document tips and techniques and also outline features of the Oracle Utilities Application Framework based products. These products include Oracle Utilities Customer Care and Billing, Oracle Utilities Meter Data Management, Oracle Utilities Mobile Workforce Management and Oracle Enterprise Taxation and Policy Management. I am the product manager for the Management Pack for these products.
