Monday Jan 26, 2015

Summary Data and the Summarizer, What is Needed for P6 Analytics?

In P6 there is the Summarizer, which runs scheduled jobs to do calculations and aggregations of data. This data is visible when looking at Projects and EPS in the P6 client or web applications.
There are also the Global Services associated with the Extended Schema: jobs for Enterprise Summaries, Enterprise Data, Resource data, and Security. These jobs are completely separate from the Summarizer and do not depend on or use any of its data. P6 Analytics only uses data coming from the Extended Schema, so it is not necessary to run the Summarizer to populate data for P6 Analytics. Only the Global services and Project Publication services need to be run.

Friday Jan 09, 2015

Capturing Historical Financial Periods for Analytics

In P6 Analytics you can use Financial Periods as an element of time, similar to Week, Month, Year, etc.  Financial Periods are defined in P6.  They can be a week, two weeks, a month, or any other user-defined period.
Financial periods are available wherever the time dimension is available, but they can behave differently depending on the subject area being used. In this blog we will cover two main areas where they might be used and how the two subject areas work differently.

Primavera - Activity

This subject area is not historical.  The data is as of the last ETL run and comes from the Activity Spreads.  If you have spread data that falls into these financial period buckets, it will be shown.  If you do not, those financial periods will not be shown for this project. Remember, once a fact is added to the analysis (or you join to another dimension, and therefore to the fact table), data will be filtered to honor the rows in the fact tables.  Analytics is fact driven: there must be a fact that fits the criteria for it to be displayed.
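As a rough sketch of what fact driven means at the query level (w_activity_spread_f is a real STAR fact table mentioned later in this blog; the financial period dimension and join columns below are assumptions for illustration only):

```sql
-- Illustrative sketch: a financial-period bucket only appears in an
-- analysis when at least one spread fact row falls into it.
-- w_financial_period_d and the join columns are assumed names.
SELECT fp.financial_period_name,
       SUM(f.actual_total_cost) AS actual_cost
  FROM w_activity_spread_f f
  JOIN w_financial_period_d fp
    ON fp.financial_period_id = f.financial_period_id
 GROUP BY fp.financial_period_name;
-- Periods with no matching fact rows simply do not appear in the
-- result, which is why some financial periods seem to be missing.
```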




Primavera - Project History

This subject area is historical.  As with the Primavera - Activity subject area, it is still fact driven.  The difference in behavior is that these facts are captured historically: this subject area does NOT use spread data, only historical data. For example, suppose you have a project that runs from 1/1/2013 to 1/1/2016, with financial periods defined every two weeks over that range.  Today is Jan 9th, 2015.  If you are setting up P6 Analytics today and have never captured any history, then when you run the ETL process you will only see data for the financial period of Jan 1 - Jan 15th, 2015.  Historical facts are added into whichever bucket the day of the ETL run falls into.  If you capture history for Week, Month, Quarter, and a Financial Period (every two weeks), you would see period_start and period_end_date values in w_project_history_f like the image below:




Because you only have facts for those time periods, you will only see the time periods where historical data was captured. If you only have history captured starting today (Jan 9th), you will only see data that falls into that bucket.  As time goes on and you continue to capture history, you will see additional buckets of history stored and made available for viewing.
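To see which buckets have actually been captured, you could inspect the history fact table directly (a sketch; period_start_date and period_end_date are assumed spellings based on the column names described above):

```sql
-- Illustrative: list the captured history buckets and their row counts.
SELECT period_start_date,
       period_end_date,
       COUNT(*) AS history_rows
  FROM w_project_history_f
 GROUP BY period_start_date, period_end_date
 ORDER BY period_start_date;
```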


Things to remember

Here are a few items to remember to do in P6 to make sure your projects are set up to capture history for financial periods:

-If you define new financial period buckets, run the Enterprise Data global service so this data is calculated, stored in the Extended Schema, and available to be pulled over during the ETL.

-Projects must be published so spread data and project changes are calculated, pulled into the STAR schema, and made available in the Primavera - Activity and Primavera - Project History subject areas.

-History must be enabled for the projects where you want history recorded.  This is off by default.

-The history interval must be set to Financial Periods.

Monday Dec 22, 2014

How Could I Get a Project to Publish After a Certain Amount of Activity Code Changes?

Following up on the previous blog discussing why a change to a code assignment isn't enough to trigger the publication of a project: here is an option if there is a business need around a P6 object that gets updated often and is considered as important as the tables we currently count changes for (PROJWBS (WBS rows), TASK (Activities), TASKRSRC (Resource Assignments), and TASKPRED (Relationships)), which determine if a project needs to be refreshed.

Say activity codes are changed very often on your projects. Maybe this is how activities are moved into different phases, or an important statusing practice. Because the Project publication infrastructure is based on changes, you would want to queue up a project based on changes as well. In the SQL example below we update px_next_date on the row for this project in the PROJECT table.  When px_next_date is set to a date later than the last run of the arbiter (project publication) service, the project will be added to the publication queue.


-- Queue projects for publication when their activity code assignments
-- (TASKACTV) have changed more than 50 times in the last day.
update project
   set px_next_date = sysdate
 where proj_id in
       (select proj_id
          from taskactv
         where update_date > sysdate - 1 -- updates in the last day; extend to 7 days or any other window
         group by proj_id
        having count(*) > 50); -- publish when there are 50+ changes


In this example we are saying: at the time of running this SQL, get a count per project of activity code updates within the last day, and if that count is greater than 50, mark the project to be published.  You can change the numbers to better fit your needs.  You could do a similar update for other project-related objects; this is just a high level example (it is not meant to be production code).

Thursday Dec 11, 2014

Why Aren't My Code Changes Being Reflected in Analytics?

The scenario: you logged into P6 and made a series of activity code changes on your project.  An hour or two later you ran the ETL process, expecting to see the code changes reflected in OBI or directly in your Star schema.  However, they are not updated. Why?

Let's walk through the process of how the data gets there and why codes are different than updating activities directly.

Step 1:  Project Publication - what counts as a change.

You are dependent on the thresholds being crossed for updating projects in the Extended schema (see the earlier blog for more info on thresholds). Basically, changes to a project are counted; if you reach the change threshold (say 100 changes) or the time threshold (say 24 hrs. and 1 change), the project will be published.  However, items like code assignments do not count as a change. The only updates that count as changes are updates to the following tables:

PROJWBS (WBS rows), TASK (Activities), TASKRSRC (Resource Assignments), and TASKPRED (Relationships)
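A hedged way to approximate that change count yourself is to count recent updates in those four tables (the publication service's real bookkeeping is internal to P6; filtering on update_date here is only an approximation of how it could be measured):

```sql
-- Illustrative: recent changes per project across the four tracked tables.
SELECT proj_id, COUNT(*) AS recent_changes
  FROM (SELECT proj_id, update_date FROM projwbs
        UNION ALL SELECT proj_id, update_date FROM task
        UNION ALL SELECT proj_id, update_date FROM taskrsrc
        UNION ALL SELECT proj_id, update_date FROM taskpred)
 WHERE update_date > sysdate - 1  -- changes in the last day
 GROUP BY proj_id;
```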

Step 2: Publishing the Project

The design rationale is that these are the most commonly updated areas in a schedule.  To throttle the system and avoid overloading the queue with unnecessary updates, these core areas are what drive updates.  If you are making major changes to codes, it is good to be aware of this; you can choose 'Publish Now', which will force a publication of your project and capture your code changes.

Step 3: Run the ETL

Now that the Extended Schema (Project publication) side is understood and worked out, you can run the ETL as normal.  Remember, the Extended Schema is the heart of all the data that makes it into P6 Analytics.  P6 Analytics and your STAR schema are only as up to date as your Extended Schema.

Friday Nov 21, 2014

History Selections (Part 2)

In the last blog we discussed how the history selections you make can affect the size of your Star schema.  Now let's talk about how we make the selections and what they mean.  In P6 go to Projects, then the EPS view.  Right click, choose Project Preferences, and click on Analytics.

Here you will be presented with History Level and History Interval.   History Level is the most important setting here.

Choosing Activity:

-This will populate daily activity level history for the Primavera - Activity History subject area.  This is always daily; there is no setting for Activity History at a granularity other than daily.

-When you choose Activity History the History Interval drop down is still available.  This is for choosing the granularity for WBS and Project History, which are turned on by default when you choose Activity Level History.  

-Slowly Changing Dimensions will be turned on for changes to this Project.

Choosing WBS:

-If you choose WBS for History Level, this will also turn on Project Level History and make both available in the Primavera - Project History subject area.

-The History Interval selection will now affect both WBS and Project Level History.

-Activity Level history will be off as well as Slowly Changing Dimensions. 

Choosing Project: 

-If you choose Project for History Level, this will turn on only Project Level History and make it available in the Primavera - Project History subject area.

-The History Interval selection will now affect only Project Level History.

-Activity and WBS Level history will be off as well as Slowly Changing Dimensions. 

Thursday Oct 30, 2014

How Do Historical Selections Affect the Size of the Star Schema?

For P6 Reporting Database, the Planning and Sizing guide covers expectations for the size of your Star schema based on the size of your P6 PMDB and gives recommendations.  History is discussed in this document as well, but one thing to go a little deeper on is how the selection of history per project can have an effect on your ETL process as well as the size of your Star schema.

If you select Activity History on a project this will:
- record any change on the Project, Activity, and any related Dimension each time the ETL is run (ex. change in activity name). This enables Slowly Changing Dimensions for rows related to this project.
- Activity history will be recorded in w_activity_history_f at a daily level.  A row is captured for each activity for each day the ETL process is run. Depending on the number of projects you have opted in, this can grow quite quickly, which is why partitioning is a requirement when using Activity level history.
- Choosing Activity level history also opts you in to WBS and Project level history at the selected granularity (Weekly, Monthly), recording a row for each WBS and Project each time the ETL process is run and bucketing the results into the chosen granularity.

Because choosing Activity History can have such a dramatic effect on the size and growth of your Star Schema, it is suggested that this be a small subset of your projects. Be aware of your partitioning selections, as this will help with performance.

For determining the number of rows for Activity History: take the number of projects opted in, times the number of activities in each project, times the number of daily ETL runs over each project's duration (roughly the number of days in the duration).  This gives a ballpark idea of the rows that will be added per project.  A similar calculation can be done for WBS history, adjusting for the number of WBS rows and the selected granularity.
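As a back-of-the-envelope example (all numbers here are made up): 10 opted-in projects with 500 activities each, with the ETL run daily for a year:

```sql
-- 10 projects x 500 activities x 365 daily ETL runs
SELECT 10 * 500 * 365 AS est_history_rows FROM dual;
-- Roughly 1,825,000 rows added to w_activity_history_f in a year.
```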

In summary, be cautious about turning on Activity History for a project unless you absolutely need daily history per activity.  Once a project has ended, turn off its Activity History.  Keeping project controls accurate and up to date can help improve the ETL process and control the size of the Star schema.

Friday Oct 03, 2014

P6 Analytics and P6 Reporting Database Security Enforcement

In P6 Analytics, row level security is used to enforce data security. The security is calculated in P6 by the Extended Schema global security service, which means P6 Analytics uses the same security defined in P6: if you give a user access to a project in P6, they have access to it in P6 Analytics. It is calculated, stored, and used across both products.  The calculated security data for projects, resources, and costs is pulled over during the ETL process and stored in the STAR schema. When querying the STAR schema through OBI, the OBI user is validated against a user that should exist from P6. Once the user is identified, the data is filtered based on the access calculated for them. This is handled using a package in the STAR schema - SECPAC. Security policies for row level security are applied during the ETL process and can be found in the \scripts folder. This enables enforcement across the Facts and Dimensions. Using OBI and letting the ETL process handle the enforcement is the easiest way.

However, if you do not use OBI but still want to use the STAR schema and its security enforcement, you can use just the P6 Reporting Database.  Set up your STAR schema and ETL process just as you would with the full P6 Analytics.  Once the ETL is completed, execute SECPAC and pass a username to set the context. In a session with this defined, the user will only see the data they have access to as defined in P6. This is an additional step and needs to be pre-defined in any process that accesses the STAR data.  Using OBI is a much easier option, and you can take advantage of all the benefits of OBI in creating Analyses and Dashboards, but it is still possible to set up STAR and use the calculated P6 security even without OBI in your environment.
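A sketch of what that extra step looks like in a SQL session (the SECPAC package exists in STAR, but the procedure name and signature below are assumptions; check the package spec in your installation):

```sql
-- Hypothetical call: set the row-level security context for a P6 user.
BEGIN
  secpac.set_user_context('jsmith');  -- assumed procedure name
END;
/
-- Queries in this session would then return only rows jsmith can see:
SELECT project_name FROM w_project_d;
```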


Wednesday Oct 01, 2014

Incorporating an HTML Weather Feed into an OBIEE Analysis

We've created a whitepaper that details how to embed an HTML weather feed within an OBIEE analysis, and then integrate that weather feed with a spatial view. The end result is an interactive analysis that displays the weather for the location selected in the spatial map. To download the white paper, follow the link below:

P6 Analytics: Weather Feed in OBIEE

Wednesday Sep 24, 2014

UDFs: What is Available and Where?

This blog covers what is available for User Defined Fields as of P6 Analytics version 3.3. 

Areas available:

Project
WBS
Activity 
Resource
Resource Assignment

These 5 are the subject areas available for User Defined Fields. Each area has a corresponding subject area for UDFs that are treated as facts (Cost and Number).

Ex. Primavera - Project User Defined Fields.  



Types of UDFs:

Date and Text - Date and Text UDFs are treated as dimensional values and are grouped into the specific area in each subject area containing that dimension.
For example, in the Primavera - Activity subject area, under Project you will see UDF-Date and UDF-Text.  These will be available in Analyses in these subject areas. Because Date and Text are dimensional, they are also available historically as slowly changing dimension values when activity history is activated for that project.  Activity History should only be enabled for a small subset of projects because of the impact it has on database growth.  Please see the Planning and Sizing guide for more information.


Number and Cost - Number and Cost UDFs are treated as facts.  They are the core of each UDF subject area. For example, Primavera - Project User Defined Fields will have Number and Cost as the fields available in the Facts section.  The Date and Text UDFs will be available as dimensional values, the same as in other subject areas. Because these are facts, they are not part of the slowly changing dimensions and do not have historical records.

Friday Aug 08, 2014

Change to Logging for New Web Configuration Utility

Since P6 Reporting Database 2.0 and Analytics 1.0, the ETL process log has appended each run to a single file; in 3.2, staretlprocess.log contains all ETL runs. We received customer feedback asking for a log per ETL run, and with the web configuration utility added in 3.3 this was implemented.  In the new configuration tool you can schedule your ETL runs and view the results and logs from the web config. The logs are separated out so you can see the log for each individual run.  This also gives you the option to clean up old logs from your system directory for a given time period.

The existing (non web based) configuration utility will still append each run to the same log if that is how you choose to log your ETL process. However, you should choose one method, either the existing configuration utility or the web based configuration utility, and not use both.  The new web based config can build up a queue of ETL jobs and avoids collisions. This is a major advantage for those using multiple data sources: you can queue up multiple runs and it will handle the execution.  The existing configuration utility has no queue mechanism; ETL runs are controlled by cron jobs or kicked off manually, so collisions are possible, and you cannot have more than one ETL process running at a time. Also, because of the different logging methods, you should not run ETLs from both methods: the web configuration utility will not know about logs for ETL runs kicked off from the STAR installation directory.  Each log from the new queue method has a time stamp and unique name for easy identification.

Increase to Default Values of Codes and UDFs in P6 Analytics

In P6 Analytics 3.3, the default number of UDFs of every subject area (Project, Activity, etc.) and type (Date, Text, etc.) was increased from 20 to 40.  This includes the RPD changes.  When choosing your UDFs from the configuration utility, these changes are automatically reflected in the schema, and no changes are needed in the RPD until you exceed 40.   We also increased the default number of Codes from 20 to 50.  The same applies as with UDFs: all schema and RPD changes are automatically reflected up to 50. If adding more than 50 Codes or 40 UDFs, the appropriate RPD changes will need to be made to add the additional values, but all schema side changes will be handled by the ETL process.

Monday Jul 28, 2014

Deleted data showing in History?

In the Star ETL process there are 2 main types of updates: the current data (dimensions and facts) and the historical data (_HD, _HF, and history tables).

The current data is overwritten on every ETL update, which keeps your data as up to date as the last ETL.  If you worked on a project, deleted a project, updated tasks, deleted tasks, or scheduled, all of that will be reflected. In the historical tables, changes once captured will always remain.  For example, if you deleted a project, it will still be available in your list of projects. Once the data is captured, it is always in the database. This way you truly have a historical perspective.

In P6 Analytics, when viewing historical data you may want to filter out this deleted data.  You can do so by adding a code to these values, setting a value of 'Deleted', and applying that as a filter. If you truly want to see just the most recent data, the Primavera - Activity subject area may be the one you want to report on.  The historical subject areas are more for trending and seeing changes over time. In a future blog we will discuss how to add a delete_flag to allow easier filtering in P6 Analytics. The main takeaway from this entry is that once the ETL is run, the data is captured as-is and is always available in the database if it needs to be reviewed. There was high demand for being able to see all data, even if deleted, to help show the total historical picture of the life of a project.
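In the meantime, one hedged way to identify deleted projects is to compare the historical dimension with the current one (the table names come from this blog series; the join key below is an assumption):

```sql
-- Illustrative: projects present in history but absent from the current
-- dimension were deleted in P6. project_object_id is an assumed key.
SELECT DISTINCT hd.project_name
  FROM w_project_hd hd
 WHERE NOT EXISTS (SELECT 1
                     FROM w_project_d d
                    WHERE d.project_object_id = hd.project_object_id);
```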

Wednesday Jun 25, 2014

P6 Links appearing as text instead of HTML hyperlinks

With the latest version of the P6 Analytics catalog, the P6 Link fields (the fields used to launch P6 Action Links directly from an Oracle Business Intelligence analysis) appear as text fields as opposed to normal hyperlinks. This is because the default data type for these fields is text instead of HTML. There are a few options to correct this behavior and have the P6 Link fields appear as hyperlinks in an analysis. The first option is to change the P6 Link field on an analysis-by-analysis basis. This is very time consuming and not very practical, especially if these fields are used in a lot of analyses. The second option (recommended) is to change the data type to HTML and save this as the system wide default for the field. This option is also a bit tedious because you have to do it for most of the subject areas, but once it's done, there is nothing else to do.

Follow these steps to change the data type for these fields:

1. Create a new analysis that contains ALL of the P6 Links for a specific Subject Area

2. For each link, choose Column Properties.

3. Select the Data Format tab, and check the "Override Default Data Format" option. Change the "Treat Text as" to HTML from the drop down box.

4. After Step 3 is complete, at the bottom right of the Properties window, there is a button to the left of the "Ok" button called "Save as Default". Click this button, and choose "Save as the system-wide default for <whichever Link you are changing the properties for>".

5. Click "Ok" on the Properties Window to save this setting as the default for this field system wide. Repeat steps 2-5 for the remaining fields in this Subject Area. Now anytime you add this field to an analysis, it should appear as a hyperlink.

6. Repeat Steps 1-5 for each subject area.

Friday Jun 20, 2014

Star Schema Table Descriptions

In the P6 Reporting Database STAR schema there are several acronyms on the tables. In this blog we will give a high level description of some of these tables and what the acronyms represent. Each description includes key words you can use to search for more information about the specific data warehousing terminology.


Staging Tables

_DS, STG, _%S 

These tables are used during the ETL process and are the first stop for data being pulled from the P6 Extended schema.  They are overwritten during each ETL run.


Dimensions
_D

These are the foundation dimensional tables used with P6 Analytics.  During the ETL process they are repopulated each time. For more information, search on Dimensions and Facts; at a high level, a dimension contains information about a specific object.  For example, Project (w_project_d) contains information about the project: Project Name, Description, Dates, etc.


Facts

_F

These _F tables are the foundation fact tables used with P6 Analytics.  They are also repopulated during each ETL run.  For more information, search on data warehouse facts. An example is w_activity_spread_f; this fact table with activity spread data contains fields like costs and units.  The exception is the fact tables with History in the name.  These are NOT repopulated each ETL run; they capture historical records at the pre-defined intervals. History can be captured at the Project, WBS, or Activity level.


Historical Dimensions

_HD

Historical dimensions are our Slowly Changing Dimension tables.  SCDs capture changes over time. These are NOT repopulated during each ETL run; the historical values are saved and stored to represent when each row was valid.  For example, in w_project_hd, say you changed the project name.  There will be a row representing the original project name, with a period start and end range for when that was the project name.  On the next ETL run a new row will be inserted with the new project name and a period start date, so you can determine which record was the most recent as of a given date.
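A point-in-time lookup against such a table might look like this (a sketch; the period column names and key column are assumptions based on the description above):

```sql
-- Illustrative: the project name that was current on a given date.
SELECT project_name
  FROM w_project_hd
 WHERE project_object_id = :proj_id           -- assumed key column
   AND :as_of_date >= period_start_date
   AND (:as_of_date < period_end_date OR period_end_date IS NULL);
```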


Historical Facts

_HF

Historical fact tables were added as part of the Slowly Changing Dimension implementation and store historical fact data in a similar way. Like the _HD tables, these rows are NOT repopulated during each ETL; a row represents each change captured for a fact row, for example a change to a cost on an activity. These differences are captured during ETL runs and have similar period start and end dates so you can determine when a value was the current record. _HF and _HD tables are not automatically populated for all project data; projects must be opted in by setting them to Activity level history.  Please plan accordingly before doing so and check the P6 Reporting Database Planning and Sizing guide, as this will have a long term effect on the size of your Star schema.


Internal Usage 

ETL_

Tables with the ETL_ prefix are used for processing events, such as expanding out and populating dynamic codes, or for capturing process related information. Generally these are not used for reporting, as most serve an internal purpose.  An example is the ETL_PROCESSMASTER table, which captures a row and result for every ETL run, or ETL_PARAMETERS, which captures the settings supplied from the configuration setup.  Other tables contain metadata and mappings.

Thursday May 29, 2014

Partitioning Strategies for P6 Reporting Database

Prior to P6 Reporting Database version 3.2 sp1, range partitioning was used and applied only to the history tables. The ranges were defined during installation, and additional ranges would need to be added once your dates entered the final defined range.

As of P6 Reporting Database version 3.2 sp1, interval partitioning was implemented and applied to the existing history tables as well as the Slowly Changing Dimension tables. One of the major advantages of interval partitioning is that there is no more manual addition of ranges: partitions are created automatically for the defined interval when inserted data exceeds the existing partitions. The 3.2 sp1 documentation includes steps on how to update your partitioning. For all versions after 3.2 sp1, interval partitioning is the only partitioning option used, so when upgrading it is important to be aware of these changes. Here is a link with more information on partitioning, the types, and the advantages:

http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm
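For reference, a minimal, generic example of Oracle interval partitioning (this is not the actual STAR DDL, just the syntax the feature uses):

```sql
-- Generic example: Oracle automatically creates a new monthly partition
-- whenever inserted rows fall beyond the existing partitions.
CREATE TABLE history_demo (
  capture_date DATE NOT NULL,
  metric_value NUMBER
)
PARTITION BY RANGE (capture_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_first VALUES LESS THAN (DATE '2014-01-01'));
```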
