Wednesday Dec 11, 2013

Upgrading Versions

When running the CC&B (and OUAF) framework upgrade scripts, it is not uncommon to encounter "duplicate record" type errors. These are usually caused by custom records inserted with values that do not conform to the standard naming convention (custom fields are recommended to be prefixed with CM), or by instances where accelerator-related fields are bundled into the base product. Where these errors are encountered, it is recommended that the application architects review the records and either change the existing values to an unused code, or alternatively delete the records altogether, before rerunning the script step (the script should pause once these errors are listed, allowing the review and correction to occur).

These errors are typically encountered as part of the initial Package Release/Config Master upgrade, and as a result any rollbacks required are usually fast, because these instances hold extremely low volumes of transactional data.

It is vitally important to recognise that all data errors must be analysed before taking any corrective action, as the potential to corrupt the installed instance is very high, and any direct data fixes circumvent the business object based validation rules built into the product. Ideally all deletions should be performed via the CC&B front-end (which will require a database restore back to a known version that aligns with the application version) to ensure that referential integrity is maintained.
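By way of illustration, the kind of review query involved might look like the following; the table and columns come from the error listed by the script, and CI_LOOKUP_VAL is used here purely as an assumed example:

    -- Illustrative only: substitute the table and field named in the upgrade error.
    -- Lists customer-owned lookup values that do not follow the CM prefix convention.
    SELECT field_name, field_value
      FROM ci_lookup_val
     WHERE owner_flg = 'CM'
       AND field_value NOT LIKE 'CM%'
     ORDER BY field_name, field_value;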

I have also encountered issues with these scripts when running against full volume database instances, specifically where the upgrade script is attempting to add or change column definitions on tables with large row counts. In these instances there are really only a few methods available to speed up the upgrade:

1. Make use of Oracle 11g features which speed up column changes (when compared to the older Oracle 10g method of applying these changes).

2. Do a full table extract, change the table definition manually, and then reload the data back into the table.

3. Turn on parallelism for DDL operations (ALTER SESSION ENABLE PARALLEL DDL); note that this may not be appropriate depending on the number of partitions, processors and disk channels available to you.

I tend to recommend option 2, as it is the easiest to implement across all installations and is guaranteed to have the desired effect, as long as the changes applied manually align with what the upgrade script expects; otherwise the script will simply attempt to re-align the table definition itself and negate all of the manual effort expended.
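For what it is worth, a rough sketch of options 2 and 3 follows; the table, column and parallel degree are illustrative only, and the manual definition must end up matching what the upgrade script expects:

    -- Option 3 (illustrative): enable parallel DDL for the session running the expensive step.
    ALTER SESSION ENABLE PARALLEL DDL;
    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

    -- Option 2 (illustrative): extract the data, apply the column change to the now-empty
    -- table (which is fast), then reload.
    CREATE TABLE ci_ft_bak AS SELECT * FROM ci_ft;       -- full table extract
    TRUNCATE TABLE ci_ft;
    ALTER TABLE ci_ft MODIFY (cur_amt NUMBER(15,2));     -- the change the script would have made
    INSERT /*+ APPEND */ INTO ci_ft SELECT * FROM ci_ft_bak;
    COMMIT;
    DROP TABLE ci_ft_bak;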

Thursday Jan 10, 2013

Data Extraction for Sub-sets of Accounts v2

As a follow-up to my earlier posting in relation to the extraction of sub-sets of data, I have been able to extend the solution to extract larger volumes of data (by spooling each extract to its own output file), and also to perform mass purges of data via the same framework. This tool is currently intended for use with CC&B, but the solution should support users extending it to cover all OUAF products (as long as the records can be linked back to an Account ID).

Once again, post a comment if you are interested in getting a bit more information in relation to this set of scripts.

Wednesday May 30, 2012

Changing the past

I have recently updated an earlier posting in relation to feature configuration differences between CC&B 2.2 and CC&B 2.3.1. There are a couple of new settings to be aware of, as well as a number of 2.2 settings which are no longer supported. Check out the updates at https://blogs.oracle.com/stusbraindump/entry/feature_configuration

Friday Oct 14, 2011

Budgets Update

Budgets Overview

Back in 2009 I generated a slide in an attempt to ease some of the confusion around what each of the various budget subsystems does in Oracle Utilities CC&B. Over time I have come to realise that it required a couple of updates to clarify some of the differences between the various Customer Care and Billing budget types.

As a result, please use this updated slide in preference to my earlier posting.

Monday Feb 01, 2010

I can't remember if I had a memory leak..

Memory leaks seem to be one of our biggest issues on my current project (CC&B 2.2.0 on Windows Server x64 2003 R2). A review of the components experiencing these leaks has led us to upgrade the following (with thanks to Josh, Ken and Andre for their assistance):
  • Sun Java has now been patched from 1.5.0_09 to 1.5.0_22
  • BEA Weblogic 10.0 has been patched from MP1 to MP2
  • Single Fix 8882447 is recommended if your implementation includes complex plug-in scripts, since these appeared to suffer from a defect in the caching of PreparedXQuery elements, resulting in them consuming large amounts of memory.
  • Review all custom batch modules to ensure that they make use of the createJobWorkForEntityQuery method to build the JobWork object, or use createJobWorkForQueryIterator and createWorkUnit if you need to add supplemental data. These methods cache the thread work units to a temporary disk file to reduce memory consumption, instead of managing the ThreadWorkUnits list inside the code. Further details are available in the PFD for Single Fix 7447321.

Sunday Dec 20, 2009

My Oracle Support Community Link

The "My Oracle Support" Community Customer Care and Billing page is now available at https://communities.oracle.com/portal/server.pt/community/customer_care_and_billing for all registered "My Oracle Support" users.

Wednesday Dec 02, 2009

Service with a Smile

Oracle Utilities Framework 2.2.0 SP6 (Patch number 9042811) and Customer Care & Billing 2.2.0 SP6 (Patch number 9042819) are now available for download from My Oracle Support. 

Wednesday Nov 18, 2009

Handy Links

A couple of handy My Oracle Support (Metalink) links for TUGBU customers...

ID 804664.1 - "Important" Patches for Customer Care & Billing V2.1.0
ID 804612.1 - "Important" Patches for Customer Care & Billing V2.2.0
ID 804706.1 - "Important" Patches for Enterprise Taxation Management V2.1.5

Tuesday Sep 22, 2009

Better than a Tardis..

Changing the System Date in CC&B (to cater for testing requirements where the system has to be artificially rolled backwards/forwards) used to be done via the Database initialisation parameters, and the Batch Run Date parameter. But it appears that FEATURE CONFIGURATION now supports this requirement.

Simply create a "General System Configuration" Feature Configuration instance (we tend to call ours CMTIMETRAVEL, but there is no restriction on this in the system), and define Option "System Override Date" with the required value.

Once again, be aware that rolling your dates back and forward is not recommended, since it makes investigation of defects a bit more complicated. It definitely has its place in relation to test execution, however (especially Collection and Severance testing, where the process runs across a large number of days and test cycles are typically defined for a much smaller period).

Thursday Sep 17, 2009

Bundle Corrections.

We have identified an issue with the use of bundling and UI Maps, whereby the resulting Bundle was wrapping the UI Map in a set of CDATA tags, but only after it had already removed all formatting from the source records, resulting in corruption of the UI Map HTML fragments.

This has now been corrected by implementing Single Fix 8228025 on FW/CC&B 2.2.0.

If you are using the Bundling subsystem in your implementation, I strongly recommend that you implement this patch.

Monday Sep 07, 2009

A Little Bundle of Joy..

The Bundle entity was introduced in CC&B 2.2.0 as a single fix (Patch 7009383 and part of Service Pack 2), and can prove useful in relation to the management of the more complex "Cool Tool" reference data components. This entity supports simple Version Control structures by taking a snapshot of the data elements at a point in time, and wrapping this as an XML component for implementation in a target environment.

1. A quick overview of the concepts...
1.1. Base functionality in bundling supports some of the more complex data structures, e.g. Data Area, Script, Business Service, etc.
1.2. We can extend this functionality by:
1.2.1 generating a custom BO for the Maintenance Object and associating this with a complete schema (by clicking on the Generate Schema button on the dashboard)
1.2.2 navigating to the Maintenance Object associated with the underlying records we are trying to migrate, and defining the following MO Options:
1.2.2.1 Eligible for Bundling = 'Y'
1.2.2.2 Physical BO = the business object created in 1.2.1 above.
1.2.2.3 FK Reference = the primary key of the Maintenance Object (if none exists, create one in the Foreign Key Metadata).
NB: Ensure that the custom Business Objects created above exist in all environments and are named consistently.

2. To Create an Export Bundle:
2.1. Logon to the Source Environment (e.g. CONTROL)
2.2. Navigate to Admin Menu, Export Bundle+
2.3. Provide a descriptive Name for your bundle.
2.4. Save the bundle.
2.5. Navigate to the components to be added to the bundle (via standard Admin menu navigation).
2.6. Retrieve the records which form part of this bundle and click on the Add button in the Bundle zone on the dashboard (the Bundling dashboard option only appears when you are maintaining an object against which bundling is supported, see 1.1 and 1.2 above).
2.7 Navigate to the Bundle Export function, retrieve your bundle and click on the 'Bundle' button.

3. To Create an Import Bundle:
3.1. Logon to the target environment.
3.2. Navigate to Admin Menu, Import Bundle+
3.3. Provide a descriptive Name for your bundle.
3.4. Cut-and-Paste the contents of the 'Bundle Entities' component from the Export Bundle created in 2 above into your new bundle created in 3.3 above.
3.5 Save the bundle.
3.6 Click on 'Apply' to submit this bundle to your environment.

Thursday Apr 23, 2009

My Generation..

I am aware of a number of performance issues with converting a large amount of transactional data, specifically around the Key Generation routines and the fact that these jobs cannot be multi-threaded (resulting in massive rollback segments as a single SQL statement attempts to process all transactions in one hit).

As a result, I have looked into the ability to tune this conversion phase and recommend the following:
1. Review the code in CIPVBSGK (via AppViewer) for an example of how CC&B conversion generates keys.
2. Consider dropping all indexes on the CK_* tables before running KeyGen to reduce the amount of I/O, and rebuild them all as soon as KeyGen has completed so that dependent processes do not suffer slowdowns as a result of full table scans (see the sketch after this list).
3. Ensure that Rollback Segments, Transaction tables (i.e. Bills, FTs, Payments, Meter Reads, etc.), Master Data tables (i.e. Account, SA, etc.) and Indexes are allocated their own table-spaces and I/O channels at the database level to maximise throughput (my personal preference is to split the Meter Read and Interval-related data sets out to yet another table-space and I/O channel, but this will be dependent on the disk structure on your machines).
4. Assess whether your partitioning is efficient (or even needed). Partitioning is done to speed up DB accesses where the application is I/O bound; it is not required in order for CC&B to run. If the database is configured correctly on a single table-space, feel free to leave it as-is.
5. Be aware that running the validation processes against only every x-th record does expose you to the risk that not all errors will be encountered, but it does allow you to quickly identify the big ticket items that are incorrect in your mapping and get these out of the way easily. I recommend validating every x-th record as part of your initial conversion runs and setting the value to 1 for all of your dress-rehearsal iterations.
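As a hedged sketch of point 2 (index and column names vary by site and are illustrative here; the first statement only generates DDL for review rather than executing anything):

    -- Generate DROP statements for the indexes on the CK_* key-map tables (review before running).
    SELECT 'DROP INDEX ' || index_name || ';'
      FROM user_indexes
     WHERE table_name LIKE 'CK\_%' ESCAPE '\';

    -- Once KeyGen has completed, recreate the indexes, e.g. (index/column names illustrative):
    CREATE UNIQUE INDEX cm_ck_acct_p0 ON ck_acct (old_acct_id) PARALLEL 8 NOLOGGING;
    ALTER INDEX cm_ck_acct_p0 NOPARALLEL;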

Failing this, some sites have elected to customise the KeyGen. If this is your preferred approach, take care to:
1. Consider running custom SQL to perform your key generation, aligned with your partition ranges (i.e. 10 concurrent SQL statements with high and low ranges aligned with your 10 partitions, or a multiple of this split, e.g. 20 or 30 SQL statements each performing bite-sized chunks of each partition range); see the sketch after this list.
2. Consider running the base KeyGen process in De-duplication mode after your custom SQL has completed to ensure that no duplicates are encountered (if they are this process will 'bounce' the SQL-generated value to another unique value).
3. Note that no manipulation of the input keys is expected as part of your Extract or Load processes in order to complete any of this functionality.
4. Make sure that KeyGen (or your custom SQL) is executed in the order specified by the online help to ensure that inherited keys are generated correctly (e.g. Account should be run first, since most transactional data inherits a component of the primary key from the Account Id).
5. Ensure that you create the initial "null-key" value on the CK_* table to support optional foreign key logic performed as part of the insert into PROD.
6. Ensure that your key fields are all zero-filled numeric values with no trailing spaces (see my earlier posting in relation to Staging Table Keys).
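If the custom SQL route is chosen, a minimal sketch covering points 1, 5 and 6 might look like the following; CK_ACCT and its column names, the staging table and the bind variables are all assumed for illustration, and the ranges must align with your own partitioning:

    -- Point 1 (illustrative): one of N concurrent statements; each takes a slice of the source
    -- rows and assigns new keys inside a single partition's low/high range.
    INSERT INTO ck_acct (old_acct_id, acct_id)
    SELECT old_acct_id,
           LPAD(TO_CHAR(:partition_low_key + ROWNUM), 10, '0')   -- point 6: zero-filled, no trailing spaces
      FROM ci_acct                                               -- the staging copy of the account table
     WHERE MOD(ORA_HASH(old_acct_id), :total_slices) = :this_slice;

    -- Point 5 (illustrative): seed the "null-key" row used by optional foreign keys during the
    -- Production insert; the exact form of this row should match what the base KeyGen produces.
    INSERT INTO ck_acct (old_acct_id, acct_id) VALUES (' ', ' ');
    COMMIT;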

Regardless of the approach adopted:
1. Consider bringing down the Staging CC&B application during your runs to reduce the risk of table/record locking by individuals logged on to the environment while the batches are running (feel free to bring it up at various points in the overall Conversion process to perform sanity checks, but these must be well defined milestones in the conversion plan, and the application should then be shut down again once this phase has completed).
2. The performance of the Validation steps seems to be affected, to a large degree, by the existence of Characteristics on entities; as a result, expect your runs to take longer if your data model calls for a large number of characteristics.
3. Financial Transactions that are converted with REDUNDANT_SW = 'Y' will normally be exempt from all further processing in CC&B (with the exception of bill segment processing, direct queries and archiving); as a result, the actual key value assigned is of less importance than it is for active transactions.
4. Cross partition population of transactions for a single account will not "break" CC&B; it will simply incur a minor performance hit when retrieving these transactions.
5. Because historic Bill Segments are normally accessed by the next bill creation process regardless of the REDUNDANT_SW, it is recommended that inherited key structures be retained for this entity.

Wednesday Dec 03, 2008

Database Upgrades and their impact on Conversion

Hint: It appears that the current strategy in relation to application of Single Fixes and Service Packs no longer supports application of these scripts against a staging schema (at least as part of CC&B 2.x.x). As a result it is recommended that you drop and rebuild the Conversion schema from the Production instance once this has been upgraded with the latest patch-sets.

We have also noted that application of these single fixes may require a redelivery of all custom Java code from the development environment, since the Java compilation is not performed as part of the patch install process (unlike the COBOL implementation process).

Sunday Nov 23, 2008

Foreign Key Characteristic Values

We have encountered issues where Conversion runs in CC&B 2.2.0 (and possibly previous versions) result in foreign key characteristics being populated with a value which contains trailing spaces. Whilst CC&B handles these correctly, it appears that the majority of reporting steps defined as part of the reconciliation processes do not do the required trimming of values, resulting in no match being found.

I recommend that all reconciliation reporting of foreign key values includes the necessary TRIM function calls in the selection criteria.
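For example, a reconciliation query along the following lines (the characteristic table, characteristic type and column names are assumed purely for illustration) would catch the trailing-space mismatches:

    -- Illustrative reconciliation check: trim the foreign key characteristic value before
    -- joining to the target table, and report any rows that still fail to match.
    SELECT c.acct_id, c.char_type_cd, c.char_val_fk1
      FROM ci_acct_char c
      LEFT JOIN ci_prem p ON p.prem_id = TRIM(c.char_val_fk1)
     WHERE c.char_type_cd = 'CM-PREM'     -- hypothetical characteristic type
       AND p.prem_id IS NULL;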

Sunday Oct 19, 2008

Running Keygen for Partial Conversion Runs

Hint: Whilst the majority of the CC&B Conversion process can be run as a subset of steps dependent on the type of conversion run being executed (i.e. a Person-only conversion will only require that the Validation and Production steps specific to the Person entity be executed), it is important to note that once a project moves on to a conversion run involving a more complete 'V' structure, running all KeyGen steps becomes mandatory.

KeyGen will attempt to allocate a new CC&B key value for each record on the associated parent table, but it also creates a single 'blank' key value which is used by all Production Insert steps to link to optional key values. A prime example of this is CI_ACCT.MAILING_PREM_ID: this field is optional on the CI_ACCT records, but unless the associated CI_PREM KeyGen has been executed to create a ' ' entry on the CK_PREM table, none of the CI_ACCT records will be transferred from Staging to Production.

As a result, I recommend that once a site reaches the point where they are considering moving data from Staging to Production, they include the full suite of KeyGen steps in the Conversion run schedule (regardless of how many of the CC&B data structures they are converting over). Given that these runs are being executed against a set of empty source tables, it is not anticipated that the overall run times will be extended to an unmanageable level.
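A quick sanity check before kicking off the Production insert steps is to confirm that the blank entry exists on each key table; for the premise example above, something like the following will do (the column holding the blank key is assumed here, so adjust to your CK_PREM layout):

    -- Confirm the blank "null-key" row exists on CK_PREM before CI_ACCT is moved to Production.
    SELECT COUNT(*) FROM ck_prem WHERE old_prem_id = ' ';
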
About

Stuart Ramage

I am a Consulting Technical Manager for Oracle Corporation, and a member of the OU Black Belt Team, based in Hobart, Tasmania.
I have worked in the Utility arena since 1999 on the Oracle UGBU product line, in a variety of roles including Conversion, Technical and Functional Architect.
