Thursday Dec 11, 2014

Why Aren't My Code Changes Being Reflected in Analytics?

The scenario: you logged into P6 and made a series of activity code changes on your project.  An hour or two later you ran the ETL process, expecting to see the code changes reflected when you viewed the data in OBI or directly in your Star schema.  However, they are not updated. Why?

Let's walk through how the data gets there and why code changes are handled differently than direct updates to activities.

Step 1:  Project Publication - what counts as a change.

You are dependent on the publication thresholds being crossed before a project is updated in the Extended schema (see an earlier blog for more information on thresholds). Basically, changes to a project are counted; if you reach the change threshold (say 100 changes) or the time threshold (say 24 hrs. and at least 1 change), the project will be published.  However, items like code assignments do not count as a change. The only updates that count as a change are updates to the following tables:

PROJWBS (WBS rows), TASK (Activities), TASKRSRC (Resource Assignments), and TASKPRED (Relationships)
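
As a rough illustration, the publication decision can be thought of like the sketch below. The threshold values and names are placeholders for your own publication settings, not the actual P6 publication service implementation.

```python
# Hypothetical sketch of the project publication decision.
# Threshold values and names are placeholders, not P6 internals.
CHANGE_THRESHOLD = 100        # "number of changes" publication setting
TIME_THRESHOLD_HOURS = 24     # "time since last publication" setting

def should_publish(change_count: int, hours_since_last_publish: float) -> bool:
    """change_count only increments for PROJWBS, TASK, TASKRSRC and TASKPRED
    updates; code assignments alone never increment it."""
    if change_count >= CHANGE_THRESHOLD:
        return True
    # The time threshold still requires at least one counted change.
    return hours_since_last_publish >= TIME_THRESHOLD_HOURS and change_count >= 1

# A project with only code assignment changes has change_count == 0,
# so the thresholds alone never queue it for publication.
print(should_publish(0, 48))   # False - why code-only changes don't show up
print(should_publish(1, 48))   # True
```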

Step 2: Publishing the Project

The design rationale is that these are the most commonly updated areas in a schedule.  To throttle the system and avoid overloading the queue with unnecessary updates, these core areas are what drive publication.  If you are making major changes to codes it is good to be aware of this; you can choose 'Publish Now', which will force a publication of your project and capture your code changes.

Step 3: Run the ETL

OK, now that the Extended schema (project publication) side is understood and worked out, you can run the ETL as normal.  Remember, the Extended schema is the heart of all the data that makes it into P6 Analytics.  P6 Analytics and your STAR schema are only as up to date as your Extended schema.

Friday Nov 21, 2014

History Selections (Part 2)

In the last blog we discussed how the history selections you make can affect the size of your Star schema.  Now let's talk about how to make the selections and what they mean.  In P6, go to Projects, then the EPS view.  Right click and choose Project Preferences, then click on Analytics.

Here you will be presented with History Level and History Interval.   History Level is the most important setting here.

Choosing Activity:

-This will populate daily activity level history for the Primavera - Activity History subject area.  This is always daily; there is no setting to give Activity History a granularity other than daily.

-When you choose Activity level history the History Interval drop down is still available.  It controls the granularity for WBS and Project History, which are turned on by default when you choose Activity level history.

-Slowly Changing Dimensions will be turned on for changes to this Project.

Choosing WBS:

-If you choose WBS for History Level, this will also turn on Project Level History and will make both available in the Primavera - Project History subject area.

-The History Interval selection will now affect both WBS and Project Level History.

-Activity Level history will be off as well as Slowly Changing Dimensions. 

Choosing Project: 

-If you choose Project for History Level, this will turn on only Project Level History and will make it available in the Primavera - Project History subject area.

-The History Interval selection will now affect only Project Level History.

-Activity and WBS Level history will be off as well as Slowly Changing Dimensions. 

Thursday Oct 30, 2014

How Do Historical Selections Affect the Size of the Star Schema?

For P6 Reporting Database, the Planning and Sizing guide covers expectations for the size of your Star schema based on the size of your P6 PMDB and gives recommendations.  History is discussed in this document as well, but one thing worth going a little deeper on is how the history selection per project affects your ETL process as well as the size of your Star schema.

If you select Activity History on a project this will:
- record any change to the Project, Activity, and any related Dimension each time the ETL is run (for example, a change in activity name). This enables Slowly Changing Dimensions for rows related to this project.
- record activity history in w_activity_history_f at a daily level.  A row will be captured for each activity for each day the ETL process is run. Depending on the number of projects you have opted in, this can grow quite quickly, which is why partitioning is a requirement if you are using Activity level history.
- also opt you in for WBS and Project level history at the selected granularity (Weekly, Monthly), recording a row for each WBS and Project each time the ETL process is run and bucketing the results into the chosen granularity.

Because choosing Activity History can have such a dramatic effect on the size and growth of your Star schema, it is suggested that this be limited to a small subset of your projects. Be aware of your partitioning selections as well, as this will help with performance.

For determining the number of rows for Activity History, take the number of projects opted in, times the number of activities in each project, times the number of ETL runs over the duration of the project (for a daily ETL, that is the number of days in the project duration). This will give you a ballpark idea of the number of rows that will be added per project.  A similar calculation can be done for WBS history as well, adjusting for the number of WBS rows and the selected granularity.
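
For example, a quick back-of-the-envelope estimate (assuming one ETL run per day) might look like the sketch below; the input values are made-up illustration numbers, not recommendations.

```python
# Rough row-count estimate for w_activity_history_f, assuming one ETL run per day.
# All input values are made-up illustration numbers.
projects_opted_in = 10
avg_activities_per_project = 2_000
avg_project_duration_days = 365          # ~ number of daily ETL runs per project

activity_history_rows = (projects_opted_in
                         * avg_activities_per_project
                         * avg_project_duration_days)
print(f"Estimated activity history rows: {activity_history_rows:,}")  # 7,300,000

# A similar estimate for WBS history, bucketed weekly instead of daily.
avg_wbs_rows_per_project = 100
weekly_buckets = avg_project_duration_days // 7
wbs_history_rows = projects_opted_in * avg_wbs_rows_per_project * weekly_buckets
print(f"Estimated WBS history rows: {wbs_history_rows:,}")  # 52,000
```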

In summary, be cautious about turning on Activity History for a project unless you absolutely need daily level history per activity.  If it is on, turn off Activity History once the project has ended.  Keeping these settings accurate and up to date helps keep the ETL process efficient and the Star schema under control.

Friday Oct 03, 2014

P6 Analytics and P6 Reporting Database Security Enforcement

In P6 Analytics, row level security is used to enforce data security. The security is calculated in P6 by the Extended schema global security service, which means P6 Analytics uses the same security defined in P6: if you gave a user access to a project in P6, they have access to it in P6 Analytics. It is calculated, stored, and used across both products.  The calculated security data for projects, resources, and costs is pulled over during the ETL process and stored in the STAR schema.

When querying the STAR schema for data using OBI, the OBI user is validated against a user that should exist from P6. Once the user is identified, the data is filtered based on the calculated data they have access to. This is handled using a package in the STAR schema - SECPAC. Security policies for row level security are applied during the ETL process and can be found in the \scripts folder. This enables enforcement across the Facts and Dimensions. Using OBI and allowing the ETL process to handle the enforcement is the easiest way.

However, if you do not use OBI but still want to use the STAR schema for data with the security enforcement, you can use just the P6 Reporting Database.  You would set up your STAR schema and ETL process just as you would with the full P6 Analytics.  Once the ETL is completed you will need to execute the SECPAC package and pass a username to set the context. In a session with this defined, the user will only see the data they have access to as defined in P6. This is an additional step and needs to be built into any process that accesses the STAR data.  Using OBI is a much easier option, and you can take advantage of all the benefits of OBI in creating analyses and dashboards, but it is still possible to set up STAR and use the calculated P6 security even without OBI in your environment.
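
As a rough sketch of that extra step, a custom reporting script might look like the following. The SECPAC procedure name and the column selected here are placeholders (check the SECPAC package in your STAR schema and the \scripts folder for the real names); only the overall flow - set the user context, then query - is the point.

```python
# Minimal sketch: query the STAR schema directly (no OBI) with P6 security applied.
# SECPAC.SET_USER_CONTEXT is a hypothetical procedure name - verify the actual
# procedure exposed by the SECPAC package before using this.
import cx_Oracle

conn = cx_Oracle.connect("star_report_user", "password", "dbhost/starpdb")
cur = conn.cursor()

# 1. Set the session context to a P6 username so the row level security
#    policies filter every subsequent query to that user's data.
cur.callproc("SECPAC.SET_USER_CONTEXT", ["jsmith"])

# 2. Any query in this session now returns only rows jsmith can see in P6.
cur.execute("SELECT project_name FROM w_project_d")
for (project_name,) in cur:
    print(project_name)

cur.close()
conn.close()
```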

Wednesday Oct 01, 2014

Incorporating an HTML Weather Feed into an OBIEE Analysis

We've created a whitepaper that details how to embed an HTML weather feed within an OBIEE analysis, and then integrate that weather feed with a spatial view. The end result is an interactive analysis that displays the weather for the location selected in the spatial map. To download the white paper, follow the link below:

P6 Analytics: Weather Feed in OBIEE

Wednesday Sep 24, 2014

UDFs: what is available and where?

This blog covers what is available for User Defined Fields as of P6 Analytics version 3.3. 

Areas available:

Project
WBS
Activity
Resource
Resource Assignment

These five are the subject areas available for User Defined Fields. Each of these areas has a corresponding subject area for UDFs that are treated as facts (Cost and Number).

Ex. Primavera - Project User Defined Fields.  

Types of UDFs:

Date and Text - Date and Text UDFs are treated as dimensional values and are grouped into the specific area of each subject area containing that dimension.
For example, in the Primavera - Activity subject area, under Project you will see UDF-Date and UDF-Text.  These will be available in analyses in those subject areas. Because Date and Text UDFs are dimensional, they are also available historically as slowly changing dimension values when Activity history is activated for that project.  Activity history should only be enabled for a small subset of projects because of the impact it has on database growth.  Please see the Planning and Sizing guide for more information.

Number and Cost - Number and Cost UDFs are treated as facts.  They are the core of each UDF subject area. For example, Primavera - Project User Defined Fields will have Number and Cost as the fields available in the Facts section.  The Date and Text UDFs will be available as dimensional values, the same as in the other subject areas. Because these are facts they are not part of the slowly changing dimensions and do not have historical records.

Friday Aug 08, 2014

Change to Logging for New Web Configuration Utility

Since P6 Reporting Database 2.0 and P6 Analytics 1.0, the ETL process log has been appended to with each ETL run; in 3.2, staretlprocess.log contains all ETL runs. We received customer feedback on a desire to have a separate log for each ETL run.  With the web configuration utility added in 3.3, this was implemented.  In the new configuration tool you can schedule your ETL runs and view the results and logs from the web config. The logs were separated out so you can see the log for each individual run.  This also gives you the option to clean up old logs from your system directory for a given time period.

The existing (non web based) configuration utility will still append each run to the same log, if that is the way you choose to have your ETL process logged. However, you should choose one method - either the existing configuration utility or the web based configuration utility - and not use both.

The new web based configuration utility can build up a queue of ETL jobs and avoids any kind of collision. This is a major advantage for those using multiple data sources: you can queue up multiple runs and it will handle the execution.  The existing configuration utility does not have a queue mechanism for ETL runs; they are controlled by cron jobs or kicked off manually, so collisions are possible there, and you cannot have more than one ETL process running at a time. Also, because of the different logging methods, you should not be running ETLs from both methods, as the web configuration utility will not know about the logs for ETL runs kicked off from the STAR installation directory.  Each log from the new queue method will have a time stamp and unique name on it for easy identification.

Increase to Default Values of Codes and UDFs in P6 Analytics

In P6 Analytics 3.3, the default number of UDFs for every subject area (Project, Activity, etc.) and type (Date, Text, etc.) was increased from 20 to 40.  This includes the RPD changes.  When choosing your UDFs from the configuration utility, these changes will automatically be reflected in the schema and no changes will be needed in the RPD until you exceed 40.   We also increased the default number of Codes from 20 to 50.  The same applies as with UDFs: all schema and RPD changes are automatically reflected up to 50. If adding more than 50 Codes or 40 UDFs, the appropriate RPD changes will need to be made to add the additional values, but all schema side changes will be handled by the ETL process.

Monday Jul 28, 2014

Deleted data showing in History?

In the Star ETL process there are two main types of updates: the current data (dimensions and facts) and the historical data (_HD, _HF, and history tables).

The current data is overwritten on every ETL update, which keeps your data as up to date as the last ETL.  If you worked on a project, deleted a project, updated tasks, deleted tasks, or scheduled, that will all be reflected. In the historical tables, these changes, once captured, will always remain.  For example, if you deleted a project, it will still appear in your historical list of projects. Once the data is captured it is always in the database. This way you truly have a historical perspective.

In P6 Analytics, when viewing historical data, you may want to filter out this deleted data.  You can do so by adding a code to these values with a value of 'Deleted' and then applying that filter. If you truly want to see just the most recent data, the Primavera - Activity subject area may be the subject area you want to report on; the historical subject areas are more for trending and seeing changes over time. In a future blog we will discuss how to add a delete_flag to allow for easier filtering in P6 Analytics. The main takeaway from this entry is that once the ETL is run the data is captured as is and is always available in the database if it needs to be reviewed. There was a high demand for being able to see all data, even if deleted, to help show the total historical picture of the life of a project.
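
If you query the STAR schema directly, one rough way to spot projects that no longer exist in the current data is to compare a history table against the current dimension. This is only a sketch: the history table name (following the w_activity_history_f pattern mentioned in an earlier post) and the project_object_id column are assumptions you should verify against your own schema.

```python
# Sketch: list projects that appear in project history but are gone from the
# current w_project_d dimension (i.e. deleted in P6 after history was captured).
# Table name w_project_history_f and column project_object_id are assumed.
import cx_Oracle

conn = cx_Oracle.connect("star_report_user", "password", "dbhost/starpdb")
cur = conn.cursor()
cur.execute("""
    SELECT DISTINCT h.project_object_id
    FROM   w_project_history_f h
    WHERE  NOT EXISTS (SELECT 1
                       FROM   w_project_d d
                       WHERE  d.project_object_id = h.project_object_id)
""")
for (project_object_id,) in cur:
    print("Deleted project still present in history:", project_object_id)
```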

Wednesday Jun 25, 2014

P6 Links appearing as text instead of HTML hyperlinks

With the latest version of the P6 Analytics catalog, the P6 Link fields (the fields used to launch P6 Action Links directly from an Oracle Business Intelligence analysis) appear as text fields as opposed to normal hyperlinks. This is because the default data type for these fields is text instead of HTML. There are a few options for correcting this behavior and having the P6 Link fields appear as hyperlinks in analyses. The first option is to change the P6 Link field on an analysis by analysis basis. This is very time consuming and not very practical, especially if these fields are used in a lot of analyses. The second option (recommended) is to change the data type to HTML and save this as the system wide default for the field. This option is also a bit tedious because you have to do it for most of the subject areas, but once it's done, there is nothing else to do.

Follow these steps to change the data type for these fields:

1. Create a new analysis that contains ALL of the P6 Links for a specific Subject Area

2. For each link, choose Column Properties.

3. Select the Data Format tab, and check the "Override Default Data Format" option. Change the "Treat Text as" to HTML from the drop down box.

4. After Step 3 is complete, at the bottom right of the Properties window, there is a button to the left of the "Ok" button called "Save as Default". Click this button, and choose "Save as the system-wide default for <whichever Link you are changing the properties for>".

5. Click "Ok" on the Properties Window to save this setting as the default for this field system wide. Repeat steps 2-5 for the remaining fields in this Subject Area. Now anytime you add this field to an analysis, it should appear as a hyperlink.

6. Repeat Steps 1-5 for each subject area.

Friday Jun 20, 2014

Star Schema Table Descriptions

In the P6 Reporting Database STAR schema there are several acronyms used in the table names. In this blog we will give a high level description of some of these tables and what the acronyms represent. In each description there are key words you can use to search for more information about the specific data warehousing terminology.

Staging Tables

_DS, STG, _%S 

These tables are used during the ETL process and are the first stop for data being pulled from the P6 Extended schema.  They are overwritten during each ETL run.


Dimensions

_D

These _D tables are the foundation dimension tables used with P6 Analytics.  During the ETL process these are repopulated each time. For more information, search on data warehouse dimensions and facts; at a high level, a dimension contains information about a specific object.  For example, Project (w_project_d) contains information about the project - project name, description, dates, etc.



Facts

_F

These _F tables are the foundation fact tables used with P6 Analytics.  These are also repopulated during each ETL run.  For more information, search on data warehouse facts. An example is w_activity_spread_f, the fact table with activity spread data, which contains fields like costs and units.  The exception is the fact tables which contain History in their name.  These are NOT repopulated each ETL run; they capture the historical records at the pre-defined intervals. History can be captured at the Project, WBS, or Activity level.

Historical Dimensions


_HD

Historical dimensions are our Slowly Changing Dimension (SCD) tables.  SCDs capture changes over time. These are NOT repopulated during each ETL run; the historical values are saved and stored to represent when each row was valid.  For example, in w_project_hd, say you changed the project name.  There will be a row representing what the project name was originally, with the period start and finish range during which that was the project name.  On the next ETL run a new row will be inserted with the new project name and its own period start date, so you can determine which record was current as of a given date.
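
As a sketch of how those period columns are typically used (the column names period_start_date and period_end_date here are assumptions - check the _HD table definitions in your own STAR schema), an 'as of' lookup might look like this:

```python
# Sketch: find the project name that was current on a given date, using the
# slowly changing dimension table w_project_hd. Column names are assumed.
import datetime
import cx_Oracle

conn = cx_Oracle.connect("star_report_user", "password", "dbhost/starpdb")
cur = conn.cursor()

as_of = datetime.date(2014, 6, 1)
cur.execute("""
    SELECT project_name
    FROM   w_project_hd
    WHERE  project_object_id = :proj_id
    AND    :as_of >= period_start_date
    AND    (:as_of < period_end_date OR period_end_date IS NULL)
""", proj_id=1234, as_of=as_of)
row = cur.fetchone()
print("Project name as of", as_of, "was", row[0] if row else "not captured yet")
```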

Historical Facts


_HF

Historical fact tables were added as part of the Slowly Changing Dimension implementation and store historical fact data in a similar way. Like the _HD tables, these rows are NOT repopulated during each ETL; a row represents each change captured for a fact row, for example a change to a cost on an activity. These differences are captured during ETL runs and carry similar period start and end dates so you can determine when a value was the current record. _HF and _HD tables are not automatically populated for all project data; projects need to be opted in by setting the project to Activity level history.  Please plan accordingly before doing so and check the P6 Reporting Database Planning and Sizing guide, as this will have a long term effect on the size of your Star schema.

Internal Usage 


ETL_

Tables prefixed with ETL_ are used for processing events, such as expanding out and populating dynamic codes, or for capturing process related information. Generally these are not used for reporting, as most serve an internal purpose.  Examples are the ETL_PROCESSMASTER table, which captures a row and result for every ETL run, and ETL_PARAMETERS, which captures the settings supplied from the configuration setup.  Other tables contain metadata and mappings.

Thursday May 29, 2014

Partitioning Strategies for P6 Reporting Database

Prior to P6 Reporting Database version 3.2 SP1, range partitioning was used. This was applied only to the history tables. The ranges were defined during installation, and additional ranges would need to be added manually once your data entered the final defined range.

As of P6 Reporting Database version 3.2 SP1, interval partitioning was implemented. Interval partitioning was applied to the existing history tables as well as the Slowly Changing Dimension tables. One of the major advantages of interval partitioning is that there is no more manual addition of ranges: partitions are created automatically for the defined interval when data inserted into the table exceeds the existing partitions. The 3.2 SP1 documentation includes steps on how to update your partitioning. For all versions after 3.2 SP1, interval partitioning is the only partitioning option used, so when upgrading it is important to be aware of these changes. For more information on the types of partitioning and their advantages, see the Oracle Database documentation.
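
To illustrate the difference, here is a generic Oracle example (not the actual P6 history table DDL): a monthly interval partitioned table only needs one starting partition, and Oracle creates new partitions automatically as rows arrive.

```python
# Generic illustration of Oracle interval partitioning - NOT the P6 table definitions.
# With INTERVAL, new monthly partitions are created automatically on insert,
# unlike plain range partitioning where each range must be added by hand.
import cx_Oracle

ddl = """
CREATE TABLE demo_history_f (
    history_date  DATE NOT NULL,
    object_id     NUMBER,
    value         NUMBER
)
PARTITION BY RANGE (history_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
    PARTITION p_first VALUES LESS THAN (DATE '2014-01-01')
)
"""

conn = cx_Oracle.connect("demo_user", "password", "dbhost/demopdb")
conn.cursor().execute(ddl)
# Inserting a row dated after the last existing partition now silently creates
# the partition for that month - no manual ALTER TABLE ... ADD PARTITION needed.
```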

Friday May 09, 2014

Indicator type UDFs Part 2

In a previous blog, we covered how to bring indicator type UDFs (User Defined Fields) into the STAR data warehouse by manually adding them as text type UDFs in the file and adding a script into the ETL process. In this post we’ll cover how to use conditional formatting in Oracle Business Intelligence to display these indicator UDFs in analyses.

First, create a new analysis in Oracle Business Intelligence. In this example we're using the Primavera - Activity subject area and a simple selection of two indicator type Project UDFs, Schedule Status and Overall Status.

Since the indicator UDFs are stored as text in the STAR, the initial output from Oracle Business Intelligence is not what we want; we'll need to apply some conditional formatting in order to display the indicators correctly.

To accomplish this, we first need to add some placeholder columns that we can use to display the indicator images. You can just add a second occurrence of each indicator column and rename them so you know which column will contain the text and which will be used to display the actual indicator image.

Next, go to the Schedule Status column, click on the menu drop down and select Edit Formula.

We don’t want to display the text along with the indicator image, so select the Custom Headings option, then in place of the actual column name substitute a blank value(‘ ‘) in the Column Formula section.

Now we can take care of the conditional formatting. For the Schedule Status column, click on the drop down and select Column Properties.

Then go to the Conditional Format tab, click Add Condition, and select the column you want to base the formatting on, in this case Schedule Status Text.

Select the condition and click OK. From the Edit Format window click on Image.

Select an image to be displayed based on the Schedule Status Text, click OK.

Enter any additional format changes, in this case we’ve selected Center horizontal and vertical alignment for the column, then click OK.

Repeat the conditional formatting steps for the other values of the indicator, and then repeat the process for any other indicators in the analysis, in this example we also selected Overall Status.

Once finished, the only remaining step is to hide the text columns from the analysis so only the indicators will be displayed. From the Results tab, right-click on the heading for the text columns and select Hide Column from the drop down list.

Once the text columns have been hidden you will be left only with the indicator columns and the images that were selected in the conditional formatting steps.

Tuesday Apr 22, 2014

ETL Scheduling in a Multiple Data Source Environment

A multiple data source environment is one where the Star schema is populated by either multiple P6 instances or a single P6 instance that has been split into unique sources of data.  You could split a single P6 instance based on specific criteria: maybe you want a group of projects under one EPS to be updated in Star only on Fridays, while another group of projects needs to be updated daily.  You could split these into separate ETLs.  See previous blogs for more information on filtering and multiple data sources.

For this blog we are going to cover how the ETLs are executed. If you have two data sources, you need to run two separate ETL processes at different times.  ETL #1 must be run first and complete before ETL #2 can be started.  You do NOT want to allow both ETL processes to be executed at the same time.  This can be accomplished with a batch process or another queueing mechanism that makes sure ETL #1 completes before ETL #2 is executed.
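
A minimal sketch of such a batch approach is shown below; the installation directories and script names are placeholders for whatever your two data source installations actually use.

```python
# Sketch: run two data source ETLs strictly one after the other.
# The paths below are placeholders - substitute the actual ETL run script
# from each of your P6 Reporting Database installations.
import subprocess

etl_runs = [
    r"C:\star_ds1\etl_ds1.bat",   # data source 1 (placeholder path)
    r"C:\star_ds2\etl_ds2.bat",   # data source 2 (placeholder path)
]

for script in etl_runs:
    print(f"Starting {script} ...")
    # check=True stops the chain if an ETL fails, so ETL #2 never starts
    # while ETL #1 is unfinished or in an error state.
    subprocess.run([script], check=True)
    print(f"Finished {script}")
```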

If the ETLs were run at the same time you could see data issues, because they share staging tables.  While the data in the facts and dimensions is contained in rows that are unique to each data source, the staging tables are not.  That staging data could be clobbered if both ETLs were running at the same time, and the clobbered data could then be pulled into the rows for an existing data source.

To help control this problem, a new web configuration utility was created in P6 Reporting Database and P6 Analytics 3.3.  It provides a queueing mechanism to prevent ETLs from running at the same time.

You can set up separate tabs for each ETL and define the schedule for each.  They will then queue up and be displayed on the home tab, where the running and queued ETLs are shown.  They can also be stopped or removed from the queue. The main takeaway is that in multiple data source environments the ETLs are sequential, not parallel.

Monday Apr 07, 2014

Handling Codes and UDFs in a Multiple Data Source Environment

In a single data source environment, codes and UDFs are quite easy.  Log into the configuration utility, select your codes and UDFs, run your ETL process, and as long as you don't exceed the default number of columns in your RPD, the codes and UDFs show up in OBI.  In the configuration utility, the list of codes and UDFs presented to you is populated by reading directly from the P6 Extended schema you provided connection information for.  When you make your selections this list is written to your .properties file in the <rdb installation>\res directory.  During the ETL process these selections are read from the .properties file and inserted into staging tables and eventually dimension tables in the STAR schema.

One thing to note about staging tables is that there is only one set. In a multiple data source environment, staging tables are shared, meaning they are overwritten during each ETL run. This is the main reason why in a multiple data source environment ETL processes cannot be run at the same time; one ETL process has to finish before the next can be run. Which leads us to how to handle codes and UDFs in a multiple data source environment. Say you make your code selections for data source 1 and run your ETL process, then make your code selections for data source 2 and run the ETL process.  The selection you made for data source 2 is now your final selection; the values and order you chose for data source 1 have been overwritten.

Accommodating multiple data sources for codes and UDFs requires a little coordination. Let's say you have 10 codes from your 1st data source and 10 codes from your 2nd data source that you would like to use in P6 Analytics in OBI. Run the configuration utility for data source 1 and choose your 10 codes using slots 1-10. Go to your \res folder, make a copy of the .properties file, and save it to a different location (we're going to come back to this copy for data source 1 later). Now for data source 2, log into the configuration utility. Choose your codes; slots 1-10 don't really matter, but choose slots 11-20 as the ones you want to represent this data source in OBI. Make a copy of this .properties file as well and save it in a different location. You are saving these backups so you can easily rebuild the codes list at a later time if runSubstitution or the configuration utility is executed.

Now you have 2 properties files that contain all the codes you are going to use; we just need to combine them into one. Go to data source 1's properties file and change the lines for codes 11-20 to match codes 11-20 from data source 2's properties file. You can find code 11 in data source 2's properties file, copy its three lines, and paste them where code 11 is in data source 1's properties file (or, if there is no code 11, just add it below the other codes). Do this for the rest of the codes from data source 2 until data source 1's properties file contains all the codes you want from data source 2.

Now copy the whole codes section from this data source 1 properties file and overwrite the codes section in data source 2's properties file.  You will do this for each type of code (project, activity, resource). This combined section is the 'master copy' of your codes. Do not run the configuration utility on either data source 1 or 2 after you have put this list together, or it will be overwritten. If this 'master copy' is overwritten you will need to coordinate and create it again, so be careful about when and where the configuration utility is run after this point.
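
If you have to rebuild the combined list more than once, the merge can be scripted. The sketch below assumes one thing not stated above - a key naming pattern for the code slots in the .properties file (shown as a placeholder prefix); check your own file for the real key names before relying on this.

```python
# Sketch: build a 'master copy' codes section from two data source .properties files.
# KEY_PREFIX and the slot-number key format are placeholders - inspect your
# .properties file for the real key names before adapting this.
KEY_PREFIX = "activitycode"   # hypothetical prefix, e.g. keys like activitycode.11.name

def read_props(path):
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                props[key.strip()] = value.strip()
    return props

def slot(key):
    return int(key.split(".")[1])   # assumes <prefix>.<slot>.<field> keys

ds1 = read_props(r"C:\backup\ds1.properties")   # codes chosen in slots 1-10
ds2 = read_props(r"C:\backup\ds2.properties")   # codes chosen in slots 11-20

# Keep slots 1-10 from data source 1 and slots 11-20 from data source 2.
master = {k: v for k, v in ds1.items() if k.startswith(KEY_PREFIX) and slot(k) <= 10}
master.update({k: v for k, v in ds2.items() if k.startswith(KEY_PREFIX) and slot(k) >= 11})

with open(r"C:\backup\master_codes_section.properties", "w") as out:
    for key in sorted(master):
        out.write(f"{key}={master[key]}\n")
# Paste this combined section into both data source properties files.
```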

If you have a scenario where you have the same codes in data source 1 and data source 2, this process is much easier: you only need to make the 'master copy' in one of the data sources and then add it to the other properties file.  If the data sources are truly unique and there are codes that don't exist in one or the other, you must follow the steps above. Do not worry that when the ETL process runs for data source 1 it might not have the codes for data source 2 that are now defined in the properties file; this simply adds them to the staging tables and mappings for OBI to use with the data associated with data source 2. When you add such a code to an analysis along with data from data source 1, it will not return any results, just like any other project or activity that is not associated with a chosen code.

