Tuesday May 21, 2013

Implementing Oracle BI Applications using Universal Adaptors

Authored by : Sushanta Chakraborty

1. Implementing Oracle BI Applications – Universal Adaptors

The purpose of this document is to provide the information you need when implementing one or more offerings and subject areas using the Oracle BI Applications Universal Adaptors. There are several myths around what needs to be done when implementing Universal Adaptors, where things can go wrong if not configured correctly, which columns must be populated, and how to provide a ‘delta’ data set for incremental ETL runs. All of these topics are discussed in this document.

Understanding the ‘entry points’ required to implement a subject area is the key to a successful implementation of Universal Adaptors.

2. General Background of Universal Adaptors

Oracle BI Applications provides packaged ETL mappings against source OLTP systems such as Oracle Fusion Applications, Oracle E-Business Suite, PeopleSoft, JD Edwards and Siebel, across business areas such as Human Resources, Supply Chain & Procurement, Order Management, Financials, Service and so on. However, Oracle BI Applications acknowledges that there are quite a few other source systems, including home-grown ones, typically used by SMB customers; some enterprise customers may also be using SAP as their source. Until Oracle BI Applications can deliver pre-built ETL adaptors against each of these source systems, the Universal Adaptor is a viable choice.

A mixed OLTP landscape, where one system has a pre-built adaptor and the other does not, also calls for the Universal Adaptors. For instance, the core portion of Human Resources may be in a PeopleSoft system, while the Talent Management portion is maintained in an application that is not (yet) supported, such as “Taleo”.

To let customers pull data from non-supported source systems into the Data Warehouse, Oracle BI Applications created the so-called “Universal Adaptor”. This was possible in the first place because the ETL architecture of Oracle BI Applications was designed to support it. The Oracle BI Applications Data Warehouse consists of a large set of fact, dimension and aggregate tables. The portion of the ETL that loads these ‘end’ tables is source independent (loaded using the folder SILOS, an acronym of “Source Independent Load Objects”). These ETL maps/interfaces start from a staging table and load data incrementally into the corresponding end table. Aggregates are created further upstream and have no dependency on the source system the data came from. The ETL logic in the SDE folder (an acronym of “Source Dependent Extract”) that extracts into these staging tables (also called universal staging tables) is what runs against a given source system, such as EBS, PeopleSoft or Fusion. For Universal, it runs against an identically structured CSV file. Take any adaptor: the universal staging tables are structurally exactly the same, and the grain expectation is exactly the same. As long as these conditions are met, the SILOS logic loads the data extracted by the Universal Adaptor from the universal staging tables seamlessly.

Why did Oracle BI Applications decide to source from CSV files? In short, the answer is “to complete the end-to-end extract-transform-load story”. We will cover this, and what the options are, in a bit more detail in the next section.

3. General Implementation Considerations

One myth that implementers have while implementing Universal Adaptors is “Data for the universal staging tables should always be presented to Oracle BI Applications in the required CSV file format”.

If your source data is already in a relational database, why dump it to CSV files and hand those to Oracle BI Applications? You will have to write brand new ETL mappings that read from those relational tables anyway, to get to the right grain and the right columns. Why target those to CSV files and then use the Oracle BI Applications Universal Adaptor to read them back into the universal staging tables? Why not point those custom ETL maps directly at the universal staging tables? In fact, when your source data is in relational tables, this is the preferred approach.
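The sketch below, in Python, illustrates that preferred approach under stated assumptions: the source table (hr_worker_jobs), its columns, the staging table name (W_EMPLOYEE_DS) and the DATASOURCE_NUM_ID value are hypothetical placeholders, and sqlite3 merely stands in for your real database driver. It is a sketch of the idea, not the packaged ETL logic.

```python
import sqlite3  # stand-in for your actual database driver

# Hypothetical custom extract: read from a source table and write straight
# into a universal staging table, filtered to the incremental window.
SRC_TO_STAGE = """
INSERT INTO W_EMPLOYEE_DS (INTEGRATION_ID, EMPLOYEE_NAME, CHANGED_ON_DT, DATASOURCE_NUM_ID)
SELECT  w.worker_id,           -- natural key -> INTEGRATION_ID
        w.full_name,
        w.last_update_date,    -- drives incremental change detection
        999                    -- hypothetical DATASOURCE_NUM_ID for this source
FROM    hr_worker_jobs w
WHERE   w.last_update_date >= :last_extract_date
"""

def extract_to_staging(conn: sqlite3.Connection, last_extract_date: str) -> int:
    """Run the custom extract directly into the universal staging table."""
    cur = conn.execute(SRC_TO_STAGE, {"last_extract_date": last_extract_date})
    conn.commit()
    return cur.rowcount
```

The key design point is simply that the custom mapping owns the grain and the incremental filter, and the packaged SILOS logic takes over from the staging table onwards.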

However, if your source data comes from a 3rd party to which you have outsourced a function, with an agreement that they send you data files/reports/feeds periodically, and that 3rd party does not let you access their relational schema, then CSV files are probably the only alternative. A typical example is Payroll data: many organizations outsource their Payroll to 3rd party companies such as ADP. In those cases, ask for the data in the same shape you expect in the Oracle BI Applications CSV files. Another valid example is SaaS applications, which usually do not allow direct database access; files are the best option for them.

Also, if your source data lies in IBM mainframe systems, where it is typically easier to write COBOL (or similar) programs that extract the data into flat files, presenting CSV files to the Oracle BI Applications Universal Adaptor is probably easier. Irrespective of how you populate the universal staging tables (relational sources or CSV sources), five very important points should always be kept in mind; a validation sketch follows the list:

  • The grain of the universal staging tables is met properly.
  • Records are unique on the (typically) INTEGRATION_ID column.
  • The mandatory columns are populated the way they should be.
  • The relational constraints are met while populating data for facts. In other words, the natural keys that you provide in a fact universal staging table must exist in the dimensions. This relates to FK resolution (resolving dimension keys into the end fact table).
  • The incremental extraction policy is well set up. Some overlap of data is OK, but populating the entire dataset into the universal staging tables every time will not perform.
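The following is a minimal Python sketch of how you might sanity-check a fact CSV against the points above before handing it to the Universal Adaptor. The file names, the mandatory-column list, and the foreign-key column names are illustrative assumptions; adapt them to the actual universal staging tables you populate.

```python
import csv

def load_keys(path: str, key: str = "INTEGRATION_ID") -> set:
    """Collect the natural keys from a dimension CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key] for row in csv.DictReader(f)}

def validate_fact(fact_csv: str, mandatory: list, dim_keys: dict) -> list:
    """Return a list of human-readable problems found in the fact CSV."""
    problems, seen = [], set()
    with open(fact_csv, newline="", encoding="utf-8") as f:
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            key = row.get("INTEGRATION_ID", "")
            if key in seen:                                   # uniqueness / grain
                problems.append(f"line {line_no}: duplicate INTEGRATION_ID {key}")
            seen.add(key)
            for col in mandatory:                             # mandatory columns
                if not row.get(col):
                    problems.append(f"line {line_no}: {col} is empty")
            for fk_col, keys in dim_keys.items():             # FK resolution
                if row.get(fk_col) and row[fk_col] not in keys:
                    problems.append(f"line {line_no}: {fk_col}={row[fk_col]} not in dimension")
    return problems

# Example usage (file and column names are hypothetical):
# dims = {"EMPLOYEE_ID": load_keys("file_employee.csv")}
# print(validate_fact("file_payroll_fact.csv", ["INTEGRATION_ID"], dims))
```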

Note: For the rest of the document, we will assume that you are taking the CSV file approach, although, to re-iterate, if your source data is stored in a relational database it is recommended that you write your own extract mappings.

4. Know your steps towards a successful implementation of Universal Adaptor

There are several entry points when implementing a subject area using the Universal Adaptors. The base dimension and base fact tables have corresponding CSV files where you configure the data at the right grain and to the right expectations. Other kinds of tables include ‘Exchange Rate’ and ‘Codes’ (Domains). Exchange Rate (W_EXCH_RATE_G) has a single CSV file of its own, whereas the Codes table (W_CODE_D) has one CSV file per code category. To see all code names properly in the dashboards/reports, you should configure all the ‘required’ code CSV files for the subject area in question.

Note: The Codes table has been replaced with Domains architecture in newer releases of BI Applications.

Key points:

  • Start with populating the offering-specific common dimension CSV files, such as Job, Pay Grade, HR Position, GL Account, etc., depending on your offering (HR, Financials, and so on). Not every offering has such a set of common dimensions used by all facts within the offering.
  • Then configure the subject-area-specific dimensions, such as Job Requisition and Recruitment Source (when implementing Recruitment), Learning Grade and Course (when implementing Learning), Pay Type (when implementing Payroll), or Absence Event and Absence Type Reason (when implementing Absence). These are examples from the HR offering; the same applies to Financials subject areas such as AR or AP, where you should consider the area-specific dimension needs at this time.
  • Then configure the related COMMON class dimensions applicable to all, such as Internal Organizations (logical/applicable partitions being “Department”, “Legal Entity”, “Business Unit”, etc.) or Business Locations (logical/applicable partitions being “Employee Locations”, “Inventory Locations”, etc.).
  • Consider other shared dimensions and helper dimensions such as Status, Transaction Type, and Exchange Rate. Special handling of the time dimension should be addressed here (fiscal or enterprise calendars, for example).
  • Then consider the code dimensions. By this time you already know which dimensions you plan to implement, so you can save time by configuring the CSVs for only the corresponding code categories. In newer releases, you would configure the domain setup at this point.
  • For facts, start with configuring the primary fact for your offering. Since the dimensions are already configured, the natural keys of the dimensions are already known to you, and it should be easy to configure them in the fact. Some offerings do not have the concept of a “primary fact”; if so, go to the next step.
  • The primary fact should be followed by the subject-area-specific facts, such as Payroll, Job Requisition Event and Applicant Event (under the HR offering) or AR and AP (under the Financials offering), and so on.
  • Now that all the CSV files for facts, dimensions, and helper tables are populated, you should move your focus to Domains. For the E-Business Suite and PeopleSoft Adaptors, we do mid-stream lookups against preconfigured lookup CSV files; the map between source values/codes and their corresponding domain values/codes comes pre-configured in these lookup files. For the Universal Adaptor, however, no such lookup files exist. This is because we expect the accurate domain values/codes to be configured along with the base dimension tables where they apply; since everything comes from a CSV file, there is no need for the overhead of an additional lookup file in the middle. Domain value columns begin with “W_” [excluding the system columns like W_INSERT_DT and W_UPDATE_DT]; they are normally mandatory, cannot be null, and their value set cannot be changed or extended. We relax the extension part on a case-by-case basis, but in no way can the values be changed. The recommendation at this stage is to go to the DMR guide (Data Model Reference Guide), get the list of table-wise domain values, understand the relationships clearly where hierarchical or orthogonal relations exist, identify the tables where they apply and their corresponding CSV files, look at the source data and configure the domain values in those same CSV files. Note that if your source data is in a relational database and you have chosen the ‘alternate’ route of creating all extract mappings yourself, the recommendation is to follow what we have done for the E-Business Suite and PeopleSoft Adaptors: create separate domain value lookup CSV files and do a mid-stream lookup (a minimal lookup sketch follows these key points).
  • Note that the Domains discussion above has been revamped in newer releases of BI Applications. Now we only have the CODE columns in our data model; the NAME and DESC columns have been taken away. We still do a mid-stream lookup in the EBS/PSFT adaptors to resolve the target domain code against the source domain code, but we no longer use lookups to resolve the target domain names and descriptions based on source/target domain codes. Rather, these name/description lookups happen on the fly at the RPD level, where the name or description is brought back to the report depending on the code and the language preferred in the report. Domain mapping now happens in Configuration Manager, no longer in CSV files, and the implementation flow will take you to the step of configuring or mapping them in Configuration Manager.
  • Last, but not least, configure the ETL parameters. Read the documentation for these parameters, understand their expectations, study your own business requirements, and then set the values accordingly.
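For implementers who take the ‘alternate’ route of writing their own extract mappings, the following Python sketch shows the shape of a separate domain value lookup, as recommended above. The lookup file name and its columns (SOURCE_CODE, DOMAIN_CODE) are assumptions; the actual domain values must come from the DMR guide.

```python
import csv

def load_domain_lookup(path: str) -> dict:
    """Load SOURCE_CODE -> DOMAIN_CODE pairs from a lookup CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["SOURCE_CODE"]: row["DOMAIN_CODE"] for row in csv.DictReader(f)}

def resolve_domain(source_code: str, lookup: dict, default: str = "UNSPECIFIED") -> str:
    """Mid-stream resolution of a source code to its conformed domain code."""
    return lookup.get(source_code, default)

# Example usage (file name and codes are hypothetical):
# status_lookup = load_domain_lookup("domain_worker_status.csv")  # e.g. "A" -> "ACTIVE"
# resolve_domain("A", status_lookup)                              # -> "ACTIVE"
```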

5. Impact of incorrect configurations of domain values

Domain values constitute a very important foundation of Oracle Business Intelligence Applications. We use this concept heavily across the board to ‘equalize’ similar aspects from a variety of source systems. Oracle Business Intelligence Applications provides packaged data warehouse solutions for source systems such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. We provide a “source dependent extract” mapping that leads into a “source independent load” mapping, followed by a “post load” (also source independent) mapping. With data possibly coming in from a variety of source systems, this equalization is necessary. Moreover, the reporting metadata (the OBIEE RPD) is also source independent, and the metric calculations are obviously source independent.

The following diagram shows how a worker status code/value is mapped onto a warehouse domain to conform to a single target set of values. The domain is then re-used by any measures that are based on worker status.

Domain values help us equalize similar aspects or attributes as they come from different source systems. We use these values in our ETL logic, sometimes even as hard-coded filters, and we use them in defining our reporting layer metrics. Hence, not configuring these domain value columns, configuring them incorrectly, or changing their values from what we expect will lead to unpredictable results. You may have a single source system to implement, but you still have to go through all the steps and configure the domain values based on your source data. This is a small price to pay for going the “buy” route vs. the traditional “build” route for your data warehouse.

One very frequently asked question is: “What is the difference between domain value code/name pairs and the regular code/name pairs that are stored in W_CODE_D (or W_DOMAIN_MEMBER_G and W_DOMAIN_MEMBER_MAP_G, along with their corresponding translation tables, in newer versions of BI Applications)?”

If you look at the structure of the W_CODE_D table, it also appears capable of standardizing code/name pairs to something common. This is correct. However, we wanted to give users extensive freedom to do that standardization (not necessarily equalization) of their codes/names, and possibly use it for cleansing as well. For example, if the source-supplied code/name is CA/CALIF or CA/California, you can choose the W_CODE_D approach (using the Master Code and Master Map tables; see the configuration guide for details) to standardize on CA/CALIFORNIA. The equivalent in newer versions of BI Applications is the W_DOMAIN_MEMBER_MAP_G table.

To understand the difference between domain value code/name pairs and regular source code/name pairs, it is enough to understand the significance of the domain value concept. To keep it simple: wherever we (Oracle Business Intelligence Applications) felt that we should equalize two similar concepts that give us analytic value, metric calculation possibilities and so on, we have “promoted” a regular code/name pair to a domain value code/name pair.

If we have a requirement to provide a metric called “Male Headcount”, we cannot calculate it accurately unless we know which headcount is “Male” and which is “Female”. The metric has a simple calculation: sum of headcount where sex = Male. Since PeopleSoft may call it “M” and EBS may have “male”, we promoted this to a domain value code/name pair, W_SEX_MF_CODE (available in the employee dimension table). Needless to say, if you do not configure your domain value for this column accurately, you will not get this metric right.
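A minimal sketch of that calculation, assuming a hypothetical source-to-domain map and row layout; it simply shows why the metric is only as good as the W_SEX_MF_CODE mapping behind it.

```python
# Hypothetical map of source-specific values to the conformed W_SEX_MF_CODE domain.
SEX_DOMAIN_MAP = {"M": "M", "male": "M", "F": "F", "female": "F"}

def male_headcount(employee_rows: list) -> int:
    """Sum of headcount where the conformed sex domain code is 'M'."""
    return sum(
        1
        for row in employee_rows
        if SEX_DOMAIN_MAP.get(row["source_sex_code"]) == "M"
    )

# Example usage:
# rows = [{"source_sex_code": "M"}, {"source_sex_code": "male"}, {"source_sex_code": "F"}]
# male_headcount(rows)   # -> 2; an unmapped source value would silently drop out
```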

6. Impact of incorrect population of CODE-NAME columns

The Oracle BI Applications mostly use Name and Description columns in the out-of-the-box dashboards and reports; we use Codes only during calculations, wherever required. Therefore, if the names and descriptions did not resolve against their codes during the ETL, you will see blank attribute values (or, in some cases, depending on the parameter setting, strings like <Source_Code_Not_Supplied> or <Source_Name_Not_Supplied>). In newer versions of BI Applications, where names and descriptions are resolved using on-the-fly RPD lookups and not in the ETL, reports might show values such as ‘__UNASSIGNED__’ if the domains are not mapped properly in Configuration Manager.

Another point to keep in mind: all codes should have distinct name values. If two or more codes share the same name value, they will appear merged at the report level, yet the metric values may still appear on different lines of the report, because the OBIEE Server typically adds a GROUP BY on the lowest-level attribute (the code).

Once implemented, you are free to promote the source code columns from the logical layer to the presentation layer. You might do this when your business users are more acquainted with the source code values than with the name values, but that is a separate business decision; it is not the default behavior.


7. Best practices for extracting incremental changes

Although you can choose to supply the entire dataset during incremental runs, for all practical purposes this is not recommended. First, the ETL would have to process all the records and determine what needs to be applied and what can be rejected. Second, the decision the ETL takes may not be accurate. ETL decisions are based only on the values of the system date columns CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT, AUX3_CHANGED_ON_DT and AUX4_CHANGED_ON_DT. We do not explicitly compare column by column to determine whether an update is required; we assume that if something has changed, one of these date columns has probably changed, and in that case we simply update. If all five date columns are the same, we reject the record. The basis of this decision is the correctness of the date columns. If your source system does not reliably track the last updated date on a record, it becomes your responsibility to force an update, no matter what. An easy way to do this is to set SESSSTARTTIME in one of these columns during extract; this forces a change to be detected, and we end up updating.
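A minimal sketch of that decision logic, using the system date columns named above; the record structures and the choice of which column to stamp are illustrative assumptions, and datetime.now() stands in for SESSSTARTTIME.

```python
from datetime import datetime

DATE_COLS = ["CHANGED_ON_DT", "AUX1_CHANGED_ON_DT", "AUX2_CHANGED_ON_DT",
             "AUX3_CHANGED_ON_DT", "AUX4_CHANGED_ON_DT"]

def needs_update(incoming: dict, existing: dict) -> bool:
    """Update only if at least one of the five system date columns differs."""
    return any(incoming.get(col) != existing.get(col) for col in DATE_COLS)

def force_update(incoming: dict) -> dict:
    """The SESSSTARTTIME trick: stamp one date column so a change is always detected."""
    incoming = dict(incoming)
    incoming["AUX4_CHANGED_ON_DT"] = datetime.now()  # stand-in for SESSSTARTTIME
    return incoming
```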

Needless to say, forcing updates this way is not the best idea. By all means, you should provide a true “delta” data set during every incremental run. A small amount of overlap is acceptable, especially when you deal with flat files. Our generally accepted rules for facts or large dimensions are either:

· The customer does their own version of “persisted staging”, so they can detect changes at the earliest opportunity and load only the changes into the universal staging tables.

· If it is absolutely impossible to determine the “delta” or to go the “persisted staging” route, the customer does full loads only; doing a full extract every time while processing it incrementally will take longer.

Follow the principles below to decide on your incremental strategy:

  • (Applies to relational table sources) Does your source system accurately capture the last update date/time on the source records that change? If so, extracting based on this column is the best idea. Your extract mapping may use two or three different source tables; decide which one is primary and which are secondary. The last update date of the primary table goes to the CHANGED_ON_DT column in the staging table, and those of the other tables go to the auxiliary changed-on date columns. If you design your extract mapping this way, you are almost done; just make sure you add the filter criterion “primary_table.last_update_date >= LAST_EXTRACT_DATE”. The value of this parameter is usually maintained within the ETL orchestration layer.
  • (Applies to CSV file sources) Assuming there is a mechanism you can trust that gives you the delta file during each incremental load, does the ‘delta’ file come with changed values in the system date columns? If yes, you are OK. If not, you should add an extra piece of logic to the out-of-the-box SDE_Universal** mappings that sets SESSSTARTTIME in one of the system date columns. This forces an update (where possible) no matter what.
  • (Applies to CSV file sources) If there is no mechanism to easily produce your delta file during incremental runs, and it is easier to get a complete ‘dump’ every time, you actually have a couple of choices:

a. Pass the whole file in every load, but run true incremental loads. Note that this is not an option for large dimensions or facts.

b. Pass the whole file each time and run full load always.

c. Do something at the back-office to process the files and produce the ‘delta’ file yourselves.

Choices (a) and (b) may sound like a bad idea, but we have seen them to be a worthwhile alternative to (c) when the source data volume is not very high. For an HR Analytics implementation (as an example), this could be OK as long as your employee strength is no more than 5000 and you have no more than 5 years of data. The benchmark may be different for a Financial Analytics implementation.

Choice (c) is more involved but produces the best results. The idea is simple. You store the last image of the full file your source gave you [call it ‘A’]. You get your new full file today [call it ‘B’]. Compare A and B. There are quite a few data-diff tools available in the market, or better, you can write a Perl or Python script of your own. The result of this script should be a new delta file [call it ‘C’] containing the lines of B that have changed compared to A. Use this new file C as your delta data for your incremental runs. Also, discard A and rename B as A, ready for the next batch.
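A minimal Python sketch of choice (c), assuming the first column of each line is the INTEGRATION_ID and ignoring header handling for brevity; treat it as a starting point rather than a finished utility.

```python
import csv, os, shutil

def build_delta(a_path: str, b_path: str, c_path: str, key_col: int = 0) -> None:
    """Write to C the lines of B that are new or changed compared to A."""
    with open(a_path, newline="", encoding="utf-8") as fa:
        old = {row[key_col]: row for row in csv.reader(fa) if row}
    with open(b_path, newline="", encoding="utf-8") as fb, \
         open(c_path, "w", newline="", encoding="utf-8") as fc:
        writer = csv.writer(fc)
        for row in csv.reader(fb):
            if row and old.get(row[key_col]) != row:   # new or changed line
                writer.writerow(row)

def rotate(a_path: str, b_path: str) -> None:
    """Discard A and rename B as A, ready for the next batch."""
    os.remove(a_path)
    shutil.move(b_path, a_path)

# Example usage (file names are hypothetical):
# build_delta("file_payroll_A.csv", "file_payroll_B.csv", "file_payroll_C.csv")
# rotate("file_payroll_A.csv", "file_payroll_B.csv")
```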

Having said that, it is worth re-iterating that “persisted staging” is the better approach, as it is simpler and uses the ETL logic to do the comparison. Oracle BI Applications has used this technique in the HR adaptors for E-Business Suite and PeopleSoft, in case you want to refer to them.

If there are other options not considered here, by all means try them out. This list is not comprehensive; it is indicative rather than exhaustive.

Tuesday May 14, 2013

Load Plan Generator – An Inside Look

Authored By : Don Co Seng

Oracle BI Applications (BIAPPS) 11.1.1.7.1 was recently released and is now available for download. One of the biggest features in this release is the introduction of Oracle Data Integrator (ODI) as the embedded ETL tool. With the introduction of ODI, a new method of orchestrating the task of loading your data warehouse is now required. This requirement brings about the introduction of Load Plan Generator (LPG).

LPG is a utility for generating ODI load plans that populate a desired subset of fact tables in the BIAPPS Data Warehouse from one or more source systems. An ODI load plan is an executable object in ODI that lets you organize tasks in a pre-defined order, based on the fact tables being loaded. LPG is invoked from Configuration Manager (CM) and makes use of metadata stored in CM and the ODI repository. No separate repository is required for LPG, which results in significantly lower metadata development and maintenance costs, as LPG uses the same metadata as the ETL tool repository.

Key Concepts

Before we get to the inner workings of LPG, let’s take a quick look at a few key concepts that drive the LPG behavior.

BI Apps taxonomy

At the top of the BIAPPS taxonomy are the different offerings available, such as Financial Analytics, HR, CRM, etc. Under the offerings are the functional areas, such as Accounts Payable in Finance or Payroll in HR. Below the functional areas are the different fact groups, like “AP Transactions and Balance” or “Payroll Balance”. Each fact group has dimension groups associated with it; a dimension group can be specific to a particular fact group or shared across different fact groups. The BI Apps taxonomy drives both load plan generation and the setup flows in Functional Setup Manager.

Offering (contains) -> Functional Areas (contains) -> Fact Groups (associated to) -> Dimension Groups
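As a minimal illustration, the taxonomy can be pictured as nested data; the HR entries below are hypothetical examples, not the shipped metadata.

```python
# Offering -> Functional Area -> Fact Group -> associated Dimension Groups
BIAPPS_TAXONOMY = {
    "Human Resources": {
        "Payroll": {
            "Payroll Balance": ["Pay Type", "Job", "Department"],
        },
    },
}

def dimension_groups(offering: str, area: str, fact_group: str) -> list:
    """Walk the taxonomy from offering down to a fact group's dimension groups."""
    return BIAPPS_TAXONOMY[offering][area][fact_group]
```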

Phase

The BIAPPS load phases are similar to those of previous BIAPPS releases. There are 3 main phases: Source Data Extract (SDE), Source Independent Load (SIL), and Post Load Process (PLP). The SDE phase consists of tasks extracting data from your different source systems; this phase loads all your staging tables and requires source system downtime. The SIL phase loads data from your staging tables into their respective dimension or fact tables. The PLP phase loads data into aggregate tables or other facts requiring additional data processing. The SIL and PLP phases require data warehouse downtime to complete the load process.

With BI Apps 11.1.1.7.1, the clear separation of the SDE and SIL/PLP phases makes it possible to control and optimize the source system and warehouse downtime for the corresponding phases.

Design Time vs. Runtime dependencies

The LPG algorithm deals with design time dependencies and run time dependencies separately. Intra-entity design time dependencies are seeded in the repository; for example, if a dimension has multiple steps to load the target table in the SIL phase, these steps are seeded once in the ODI repository, as they are known at design time and never change. The run time dependencies, i.e. the association of dimensions to facts, or the association of a dimension or fact to its corresponding staging tables on the basis of sources, are calculated by the LPG engine.

This approach reduces the number of iterations and the metadata required to get task ordering correct at the level of the entire graph, as the design time dependencies are always consistently honored. LPG behavior is deterministic and does not require multiple iterations to get your entire graph to behave correctly.

Load Plan Generator

Now, let’s take a closer look at how LPG works. At the core of LPG are load plan components, which capture the design time dependencies. In general there is a load plan component per dimension or fact per phase. Load plan components are used as building blocks by LPG to generate a complete load plan, and they are classified into two categories:

Development components are defined at the grain of fact groups or dimension groups, as described earlier. Each component contains one or more references to actual ODI scenarios and includes all the steps required for loading a particular dimension or fact group; the order of scenarios is pre-defined during development. Most development components are for a specific dimension or fact group, but some are defined for supporting tables such as persistent staging or general tables.

System components are defined to capture the phases and the ordering of phases. They are mostly static, since BIAPPS load phases rarely change. For special cases, dependencies across dimension groups and fact groups are defined here, and the inclusion of supporting tables such as persistent staging tables is controlled here. System components contain references to development components.

Generating a Load Plan

As mentioned earlier, LPG is invoked from CM. Generating a load plan is a two-step process. First, a user creates a load plan definition in CM; a load plan definition is essentially a list of one or more fact groups per source system that the user wishes to load. The user then invokes LPG to generate a load plan based on this definition. When complete, the generated load plan is stored in a load plan folder in the ODI repository.

Load Plan generation logic

So, how does LPG figure out which dimension or fact group components to include in the generated load plan? LPG generates a load plan using the following information:

1. Fact tables belonging to selected fact groups

o LPG determines the fact table to fact group association via an ODI flex field at the fact table level in the ODI data store.

2. Dimensions dependencies to Facts

o The main source of dependency information between dimensions and facts is the foreign key constraints between them. Dimension-to-dimension dependencies (snowflakes) are also supported, up to the 2nd level.

3. Staging tables related to Facts and Dimensions

o Staging and warehouse tables are resolved via the BIAPPS data model naming standard. For example, XXX_DS is the staging table for the XXX_D dimension table (see the sketch after this list).

4. Related PS or TMP tables used as source in scenarios

o PS and TMP dependencies are resolved based on which of them are used as a source or lookup table in the scenarios required for the dimension or fact group.

5. Keywords in load plan steps for domains and class dimensions

o An additional resolution mechanism is the use of keywords in load plan steps, mainly for resolving steps within class dimensions or domains. These are generic tables used across facts/dimensions, and the keywords help associate subset tasks for these tables with specific facts or dimensions.
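As a minimal illustration of rules 2 and 3 above, the Python sketch below derives staging tables from the naming standard and pulls in a fact's dimensions via foreign key metadata. The table names, the constraint list, and the _FS convention for fact staging tables are assumptions for the example, not the LPG implementation itself.

```python
def staging_table_for(warehouse_table: str) -> str:
    """Naming-standard resolution: XXX_D -> XXX_DS (and, by assumption, XXX_F -> XXX_FS)."""
    return warehouse_table + "S"

# Hypothetical FK constraint metadata: fact table -> referenced dimension tables
FK_CONSTRAINTS = {
    "W_PAYROLL_F": ["W_EMPLOYEE_D", "W_JOB_D", "W_PAY_TYPE_D"],
}

def tables_for_fact(fact_table: str) -> list:
    """Warehouse and staging tables a load plan would need for one fact table."""
    dims = FK_CONSTRAINTS.get(fact_table, [])
    return [fact_table, staging_table_for(fact_table)] + \
           [t for d in dims for t in (d, staging_table_for(d))]

# Example usage:
# tables_for_fact("W_PAYROLL_F")
# -> ['W_PAYROLL_F', 'W_PAYROLL_FS', 'W_EMPLOYEE_D', 'W_EMPLOYEE_DS', ...]
```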

Table Maintenance

For table maintenance, additional steps are added in the generated load plan to perform truncate, index management, and table analysis based on pre-defined settings and table types. We will cover this in more detail in a follow up post.

Multisource Support

For multi-source support, the user can create a load plan definition containing one or more source systems for a particular fact group. There are 3 sequencing options in this release:

1. Extract-Load, Extract-Load, Extract-Load (EL, EL, EL) – LPG generates a separate load plan containing all phases for each source system. The load plans are executed serially, one after the other.

2. Extract, Extract, Extract, Load (E, E, E, L) – LPG generates an SDE-only load plan for each source and a single SIL/PLP load plan. Each SDE-only load plan sources from a particular adaptor. The SIL/PLP load plan should be executed after all of the SDE load plans. The load plans are executed serially, one after the other.

3. Extract-Extract-Extract-Load (E-E-E-L) – LPG generates a single load plan containing multiple SDE phases and one SIL phase. This load plan extracts data from the different source systems simultaneously. After the SDE phases complete, the SIL phase runs, followed by the PLP phase.

For more details and information on LPG, please refer to the documentation available on OTN.

Thursday May 09, 2013

Oracle BI apps 11.1.1.7.1 (BI apps on ODI)

From the announcement today


Oracle Business Intelligence (BI) Applications 11.1.1.7.1 (also known as 11g PS1) is now available on the Oracle Software Delivery Cloud (eDelivery) and on the Oracle Technology Network (OTN). This is the first major release on the 11g code line leveraging the power of ODI, and certified with the latest version of Oracle BI Foundation 11.1.1.7.

What’s New

This major new release of Oracle Business Intelligence Applications 11g adds:

1. New in-memory analytic applications

2. Significantly expanded functional content across existing CRM and ERP Oracle BI Applications

3. Re-design to leverage the power of Oracle Data Integrator and, optionally, GoldenGate

4. New out-of-the-box utilities to increase productivity and further drive down Total Cost of Ownership (TCO).

New Oracle BI Applications:

· Oracle Student Information Analytics enables academic institutions to improve recruitment, lower dropout rates by tracking student attendance in real time, monitor graduation rates, manage student financial transactions, and match school resources with student needs more effectively.

· Oracle Indirect Spend Planning – a prebuilt planning application that teams with Oracle Procurement and Spend Analytics to enable businesses to optimize spend through what-if modeling

New Content Enhancements: Significant expansion of content improves existing front and back-office Oracle BI Applications including:

· Oracle Human Resources Analytics expands analysis on HR programs and workforce performance with new Time and Labor Analysis, global Payroll Analysis, and Headcount Gain and Loss.

· Oracle Financial Analytics improves company financial performance with new Fixed Assets Analysis, providing the complete lifecycle of assets from acquisition and capitalization to retirement. Budgetary Control Analysis, ideal for the public sector, delivers insight into expense & revenue budgets, including analysis of budgeted amounts, encumbrances, expenditures, funds available, overspending, and recognized revenues.

· Oracle Project Analytics expands insight into project-based data with new Resource Management Analysis, providing visibility into project requirements, trending, utilization, competencies and alignment. Reconciliation of the project sub-ledger and General Ledger assists project accountants in reconciling differences between the two ledgers for cost and revenue transactions. Perspective-based dashboards provide an optimized Project Analytics user interface that orients the content around specific user perspectives.

· Oracle Supply Chain & Order Management Analytics introduces new Enhanced Inventory Analysis, providing deep insight into Inventory Cycle Count, Inventory Aging, Inventory Expiration/Obsolescence tracking, and inventory balances and transaction details at the lot level. Item Cost History Analysis provides insight into the historical cost of Supply Chain related items.

· Oracle Procurement & Spend Analytics introduces new Sourcing Analysis, providing a comprehensive Sourcing business process that includes negotiations, awards, savings and performance. Award decisions help identify trends and check whether there is margin for further savings opportunities.

· Oracle CRM Analytics expands insight into front office analytics with Indirect Forecasting and Service Contracts support. Forecasting Analysis is now possible when submissions are made up a forecast hierarchy that lies outside the reporting hierarchy. Oracle Price Analytics has added E-Business Suite to the list of supported sources, so customers now have a ready means to assess pricing performance and arrest revenue and margin leakage. Robust price waterfall analysis is possible with the new Waterfall view.

Using New Oracle BI Foundation Suite 11.1.1.7 Features:

· Visualization Enhancements, and User Interaction Improvements

· BI Mobile HD Enhancements: BI Applications available on mobile without additional development

New Oracle Data Integration: BI Apps are completely re-architected to leverage Oracle Data Integrator and Oracle GoldenGate. 

· BI Applications is now available with next-generation Data Integration. Oracle Data Integrator (ODI) delivers unique next-generation, Extract Load and Transform (E-LT) technology that improves performance and reduces data integration costs—even across heterogeneous systems, and improves productivity.

· Functional Setup Manager provides a Web based Administrative User Interface to track and manage implementation projects and their required functional setup steps

· Configuration Manager provides a centralized Web based administrative user interface to simplify the setup and maintenance of Oracle Business Intelligence Applications.

· A new optional pre-packaged solution, that leverages GoldenGate, eliminates the need for an ETL batch window and minimizes the impact on OLTP systems, while providing near real-time reporting and enabling more reliable change data capture and delete support.

· Alternatively, customers may continue to use Oracle BI Apps 7.9.6.x and Informatica ETL.
