Thursday Aug 08, 2013

How to Generate Your Own RPD from P6 Analytics UDML

Starting with the P6 Analytics 3.1 release, we began shipping the UDML that is used to generate the RPD containing all of the Primavera subject areas. With the UDML file you can generate your own RPD. This gives you some flexibility with versioning if a change or new release of Oracle Business Intelligence conflicts with our shipped RPD. You can also edit or update the UDML directly and generate a new RPD based on your changes. We have seen others take advantage of this to make it easier to add more than 20 codes and UDFs; with the UDML, copy and paste makes it a lot easier to add those additional codes and UDFs than using the GUI OBI Administration tool on the RPD.

To generate a new RPD you just need an OBI installation. On that machine, copy the UDML file (primavera_analytics_rpd.udml) from the P6_R31_Analytics\obi\rpd folder in the media pack. You will supply your own password when generating the new RPD. Open a terminal or command prompt, go to the directory <OBI HOME>\bifoundation\server\bin, and run the following command (shown for Windows; adjust the directories as needed for Linux installations):

nqudmlexec.exe -P prima123vera -I C:\temp\test\primavera_analytics_rpd.udml -O C:\temp\test\PrimaveraAnalytics.rpd

In this example:

-P is the password for the new RPD

-I is the path to the UDML file you copied

-O is the path where your new RPD will be created.

If there are other options you would like to set, the utility's help lists the available parameters. Now that the RPD has been generated, you can deploy it through WebLogic just as you would the RPD that is shipped with P6 Analytics.
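For reference, a Linux equivalent of the command above might look like the following. This is a sketch: the paths are illustrative, and on Linux the utility has no .exe extension.

./nqudmlexec -P prima123vera -I /tmp/test/primavera_analytics_rpd.udml -O /tmp/test/PrimaveraAnalytics.rpd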



Wednesday Jul 17, 2013

Security for Multiple Data Sources

The default method of security for the P6 Reporting Database is row level security. With this approach, security policies are applied at each logical level of enforcement - project, resource, and costs. Security is calculated for each user who has the Analytics module access: a row is written into STAR for each project the user has access to, along with whether or not they have cost access, and a similar calculation based on resource access is written to a resource security table.

Since the introduction of multiple data sources into the STAR data warehouse, security must also be enforced per data source id. If user Jeff has access to Project Philadelphia in DS1, that security will be written out and applied by the policies. If user Jeff also exists in DS2, and there is also a Project Philadelphia in DS2 that Jeff does not have access to, then he will not have access to it through P6 Analytics\P6 Reporting Database either. The project id and data source id are different, so when filtering, user Jeff does not gain access to that project: his access is already calculated out, and the query applies only to the data sources he was granted in. This type of enforcement keeps data sources separated by security, but still allows aggregation across multiple data sources when the user does have access to projects or resources in both. In the end, a user with access to both sees a unified view of enterprise-wide data across all data sources.
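To make the idea concrete, here is a minimal sketch of the kind of filter the policies effectively apply. The security table and its columns are hypothetical stand-ins, not the shipped STAR schema; only w_project_d and its project_object_id and datasource_id columns appear elsewhere on this blog.

-- Hypothetical sketch: one security row per user, project, and data source
SELECT p.project_name, p.datasource_id
  FROM w_project_d p
  JOIN user_project_security s   -- hypothetical table name
    ON s.project_object_id = p.project_object_id
   AND s.datasource_id     = p.datasource_id
 WHERE s.user_name = 'Jeff';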

Thursday Jul 11, 2013

Slowly Changing Dimensions (Type 2) and Historical Facts-Part 2

In the first part we discussed the types of slowly changing dimensions available in P6 Analytics and P6 Reporting Database. 

Now let's look at how you would view the data. If you connected to your Staruser schema and queried w_project_hd, you would be looking at your type 2 SCD project table. In this table you could have multiple rows per project if there were changes on that project and the project was opted in for slowly changing dimensions (see Part 1 of this blog for how to opt in).


In this screen shot we are looking at sample data for the project Baytown. You'll notice three rows here for the same project id. The main columns we want to focus on are:

effective_start_date, effective_end_date, and current_flag

When a project is first inserted into the w_project_hd table, its current flag is set to 'Y' to confirm this is the most recent row for the project. The effective start date will be when it was inserted and the effective end date will be 01-Jan-3000. If you made a change to this project and ran the ETL, you would get a second row. The original row would then have its effective_end_date set to the effective_start_date of the new row and its current flag set to 'N', while the new row carries the current flag value of 'Y', and so on as more rows are added. It is a basic concept to follow, and something you will want to be aware of if directly querying any _HF or _HD tables in the STAR schema.
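For example, a minimal query that returns just the current version of each project row, using the table and columns described above (project_object_id is assumed to be the project identifier column, as in w_project_d):

-- Only the most recent version of each project row
SELECT project_object_id, effective_start_date, effective_end_date
  FROM w_project_hd
 WHERE current_flag = 'Y';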

Another thing to be aware of: if you are using P6 Analytics, the Primavera - Activity History subject area uses these type 2 dimension tables along with a historical fact table, so it will show all versions of these historical rows. You can filter on just the current flag if necessary, but the ability to see and compare all historical dimension data is available in this subject area.

Slowly changing dimensions and historical facts provide a very powerful tool for analyzing how data changes over time. Together with the existing and new history subject areas added in P6 Analytics 3.1 and P6 Reporting Database 3.1, the ability to see historical dimensional changes gives a total project picture.

Friday Jun 14, 2013

Slowly Changing Dimensions (Type 2) and Historical Facts-Part 1

This is going to be a two part series. In part 1 we'll cover what a SCD (slowly changing dimension) is and how it is determined in the P6 Reporting Database. In part 2 we'll cover, from the database side, how to determine which is the most up to date row and how to look at your historical rows.

In P6 Reporting Database version 3.1 we added type 2 slowly changing dimensions. We always had type 1. Type 1 is an overwrite: it doesn't store any history.
For example, you are working on Project ABCD. The original budget on the project was $1 million. If this was changed to $2 million, the row would be updated in place with the current value. So type 1 SCDs don't keep a historical value.

Type 2 SCDs leave the old record and insert a new record. This way you can see what the value was in the past and what it is now. One thing to be aware of when opting in for type 2 SCD dimensions is that you can generate a large number of rows quickly. The ETL process is meant to run once per day. If, say, you change about 15% of your data daily, new rows will be added to track those changes and close out the old records. Certain changes can have cascading effects: a change to an EPS could cause changes at all lower levels - project, WBS, activities - and changes to the project could cause creation of new spread records. So be careful about which projects really need SCDs and historical fact data.

You can opt a project in to track type 2 SCDs and historical facts by setting the History Level to Activity for the project (see the History Settings post above). Again, this should be a limited set of projects that gets this level of granularity; by default projects are not set at this level. Once opted in, most changes on a dimension cause the capture of a new row of data. The field by field list of what captures new rows can be viewed in the \star\res directory, in the mappings.tcsv file. This file lists each field mapping to a staging table.

Let's take this line for example:

83,W_ACTIVITY_DS,ACTIVITY_NAME,ACTIVITY,name,43,Y

For the 8.3 schema mappings of the Activity dimension, any change on the name field, as seen through the Pxrtpuser Activity view, will result in a new row in this dimension. The 'Y' flag at the end of this line is the on/off switch for the field. You have the ability to flip the switch and turn off fields so they do not cause new SCD rows. If I didn't want activity name changes to create a new row, I would set the flag on this line to 'N', as shown below.
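The same line with the field switched off would read:

83,W_ACTIVITY_DS,ACTIVITY_NAME,ACTIVITY,name,43,N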

You can also turn off type 2 SCDs entirely. There is a 'kill switch' so you don't have to update every row in mappings.tcsv. Go to your \res directory, edit the properties file, and look for the line:

etl.scd.type2=

If the line exists, set it to false; if it does not exist, add the line and set it to false. Also check your ETL_PARAMETERS table and validate that this value is set to false after the ETL process runs. This overrides the type 2 SCD feature and turns it off for everything.
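The finished line in the properties file would simply read:

etl.scd.type2=false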

Thursday Jun 13, 2013

Validating Metadata Updates

In P6 Reporting Database and P6 Analytics version 3.1, a metadata feature was added to allow altering existing calculations and adding new ones. There is already an existing path for adding new scripts - user_scripts. User_scripts lets you add new steps to the ETL process, such as adding additional tables or fields and custom calculations on those new objects. The metadata feature, by contrast, lets you change calculations on existing fields in the historical, burndown, or workplanning fact tables.

See documentation for more information:

http://docs.oracle.com/cd/E38975_01/English/Install_and_Config/Admin_PDF_Library/p6_analytics_and_star_database_installation_and_configuration_guide.pdf

Validating the metadata before running the ETL is key. If any of your changes fail, the ETL process reverts all metadata calculations back to their defaults. For example, if you change the metadata calculation on two fields and one of the calculations causes a failure, both revert back to the default calculation. Here are the steps to validate your metadata calculations:

  • After making changes, run the StarETL for steps 17 through 20:

- Linux: staretl.sh -from 17 -to 20

- Windows: staretl.bat 17 20

  • Check the log output for any errors or warnings.

A few more items to be aware of:

  • Calculations are per data warehouse, not per data source
  • Drop STARUSER and you will lose all custom calculations
  • Metadata calculations are stored in the ETL_CALCULATIONS table
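As a quick sanity check after the validation steps, you can look at what is actually stored. ETL_CALCULATIONS is named above; selecting everything avoids assuming its column layout:

SELECT * FROM etl_calculations;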

Thursday May 23, 2013

Explanation of History Settings for P6 Analytics

This blog applies primarily to P6 Analytics\P6 Reporting Database 3.1. As of 3.1, history can be captured daily at the activity level. Activity level history existed as of P6 Analytics 2.0\P6 Reporting Database 3.0, but only at an interval of week or higher. The settings in P6 for history level and history interval haven't changed much, but their meaning has. These history settings are defined project by project, which gives you the flexibility to capture lower levels of history for ONLY those projects that require it. They are defined in the Project Preferences section of P6.

Choosing activity level history can cause your data warehouse to grow extensively, depending on the amount of changes and the size of your projects. Before setting projects to activity level history, review the Planning and Sizing guide and make sure you have the space required for that growth.

http://www.oracle.com/us/products/applications/primavera/p6-reporting-database-wp-399110.pdf

If you choose history level = Activity (internally HL_Task), this project now has daily history turned on, as well as:

- Type 2 slowly changing dimensions for all aspects of this project
- Activity spread data on daily level and historical facts
- Resource assignment spread data on the daily level and historical facts

This can produce a large amount of data. If the project finishes, or gets to a point where you no longer need to see this level of granularity, it is recommended to turn the history level down to either Project or WBS.

If you have chosen history level of Activity, the setting for history interval now becomes relevant for the Project and WBS history.  You can choose week, month, etc. for the interval of time to be captured for Project and WBS history. This history interval does not apply to the Activity history because Activity is always Daily once chosen.

If you choose Project or WBS level history, the history interval setting applies. History always rounds up: if you choose Activity, you are automatically opted in for Project and WBS history as well, and if you choose WBS history you get Project history as well.

Friday May 03, 2013

Extended Schema Data - How to force specific projects at a designated time

As you know, the Extended Schema services (the Project service specifically) run on a designated time interval based on your Publish Project settings. This could be set to run every 1 minute (the default); at each run the 'arbiter' determines which projects need to be queued up for processing. The arbiter evaluates each project that has PX_ENABLE_PUBLICATION_FLAG set to 'Y' and checks whether the threshold of changes on the project, or the threshold of time, has been crossed. In your database you can see how many changes there are on a specific project by reviewing the PLPROJREF table in the Admuser schema.

Another reason a project would be queued up is that you right clicked on the project and chose Publish Now. When you choose Publish Now, what actually occurs is that a time stamp is written to the PX_NEXT_DATE field on that project's row in the PROJECT table. In the application there is no way to separate projects into different batches that follow different publish project settings. The design is to let data continually update in the system: a project is calculated when it has accumulated an appropriate amount of changes, when enough time has passed, or when you need it now. Let the system take care of itself and keep the data up to date in a near real time environment. On the P6 Analytics\P6 Reporting Database side, controls such as filters and additional data sources were added to let you control which projects get pulled over, and possibly set them up on different ETL runs.


A scenario that was recently proposed was this: I have a group of projects under a specific EPS. I want these projects to be published at midnight, every night, even if they did not cross a threshold, but all other projects should obey the publish project settings. One way to accomplish this is through the back end.

1- Determine the list of these projects and get the project id for each. You may even want to determine this list of project ids dynamically, in case any new projects are added under this EPS.

2- Put together an update statement that sets PX_NEXT_DATE = sysdate for these project rows (see the sketch after this list).

3- Set up a batch or cron job that runs every night at midnight and executes steps 1 and 2.
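Here is a minimal sketch of step 2, run in the P6 database as an administrative user. PROJECT and PX_NEXT_DATE are as described above; proj_id is assumed to be the project key column, and the id list is a placeholder for however you derive the projects under your EPS in step 1:

-- Stamp PX_NEXT_DATE so the arbiter queues these projects on its next run
UPDATE project
   SET px_next_date = SYSDATE
 WHERE proj_id IN (1001, 1002, 1003);  -- example ids produced by step 1
COMMIT;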

By stamping PX_NEXT_DATE with sysdate you are basically saying publish now. The next time the arbiter runs, it will add those projects to the queue. Be careful: there is no limit to the number of projects that can be added through this method, so the queue could become full and may take some time to process if there are many projects. This approach gives you a way to force certain groups of projects to be published automatically at a predetermined time, regardless of threshold.

Friday Apr 26, 2013

Adding custom data to Analytics - Part 2

As an add-on to Jeff’s previous blog about adding additional fields from P6 EPPM to the STAR schema, it may be necessary to add the new fields to your RPD file so they appear within P6 Analytics in Oracle Business Intelligence.

Adding new fields to the RPD file is a pretty straightforward process, assuming the new fields already exist in the STAR schema. The process involves using the Oracle BI Administration tool to modify the OraclePrimaveraAnalytics.rpd file: manually add the field to the Physical layer, then drag it to the Business Layer, and finally to the Presentation Layer.

Two recommendations before we get started. First, make a copy of your existing RPD file before moving forward. Even though the newest version of Oracle Business Intelligence handles this for you with automatic RPD iterations, I still think it’s a good idea to keep a known “working” copy of the RPD in a safe place, just in case.

Second, I would recommend doing this type of RPD manipulation in offline mode. Although you could quite possibly get through it all with the RPD “online”, my experience has been, more times than not, that offline is the way to go when performing RPD manipulations.

Let’s Begin!

First, a quick check in SQL Developer of my STAR schema:

select * from w_project_d;

Once I have validated that I have successfully added the fields correctly during my ETL run, the next step is to open the OraclePrimaveraAnalytics.RPD file in Oracle BI Administration tool.

In the Oracle BI Administration tool, the far right column is the Physical section. This is where the connection pool to the database exists, as well as the physical table objects. Expand the connection for your STAR schema all the way down until you reach the physical tables for STAR, and locate the W_PROJECT_D table. Right click on the W_PROJECT_D table name and, on the resulting menu, choose New Object, then select Physical Column…


In the resulting physical column window, we need to add/change the following 3 options (NOTE: these must match the field that was added to the STAR schema identically):

Name: PROJECT_OWNER
Type: VARCHAR
Length: 255

Click OK. This will add the new physical field to the RPD. You should also notice that after clicking OK, if you expand the logical representation of the physical table (Dim_W_PROJECT_D_Project), you should see the newly added PROJECT_OWNER field there as well. From here, it’s an exercise in dragging and dropping the newly added field into the other layers of the RPD so the field appears where it’s needed.

Expand the Dim_W_PROJECT_D_Project logical table in the Physical Side of the RPD. Find the PROJECT_OWNER Field, highlight it and drag it from this location to the Business Model Layer under the Dim – Project Section.

The newly added field should appear at the bottom of the column list. You can right click on the PROJECT_OWNER field and select Properties to see the details of this new field. On the General tab of the field properties window, you can also change the name of the field to match how you want it to appear in the Presentation Layer. For example, you could remove the underscore and change the case so it reads more naturally (e.g. "Project Owner").

The final step in the RPD Administration tool is to drag the field from the Business Layer onto the Presentation Layer in the Subject Area(s) you want to have the new field available in Oracle Business Intelligence. In this example, I am only going to add the new field to the Primavera – Activity Subject Area, but if I needed it in the other ones, I could drag it to those as well.

Save the RPD, and deploy the updated RPD file in Oracle Business Intelligence Enterprise Manager. The new field should now be available in the Primavera – Activity subject area.

Until Next Time...

 


Friday Apr 05, 2013

Adding custom data to Analytics - Part 1 - adding to the ETL

Starting with the Analytics 2.0\RDB 3.0 releases, a new user scripts section was added, which appends a final step to the ETL process to run any additional scripts you might have. These user scripts can include SQL that adds dimensional data to your Star that doesn't currently exist. For example, Project Owner is a field that is related to projects but is not currently exposed in the Extended schema or in the Project dimension in Star. If this is a field you need, user scripts are the way to add it. In this blog we will walk through an example of how to add Project Owner to your Project dimension. Please be aware this is just an example, not a patch or supported version. The user scripts section is simply an opening to allow for expansion: make sure all scripts are approved by your DBA and tested thoroughly for performance and data implications. If you add scripts to the user scripts folder and the ETL process begins to slow down at the final step where these scripts are executed, remove the scripts and work on the SQL performance. Oracle does not support custom SQL that is not part of the product. Make sure users adding scripts to this section have the appropriate permissions; this step connects as STARUSER unless otherwise changed, and the scripts directory should only be accessible to administrators.

Project Owner:

1- You need to add the new columns to the dimension table. Add a new script to your \scripts\user_scripts folder called create_columns.sql. Here is a sample of SQL that adds the columns, skipping them if they already exist:

DECLARE
  v_new_col1 number := 0;
  v_new_col2 number := 0;
BEGIN
  -- Add PROJECT_OWNER only if it does not already exist
  Select count(*) into v_new_col1 from user_tab_cols where column_name = 'PROJECT_OWNER' and table_name = 'W_PROJECT_D';
  if (v_new_col1 = 0) then
      execute immediate 'alter table W_PROJECT_D add PROJECT_OWNER varchar2(255)';
  end if;
  -- Add PROJECT_RSRC_OWNER_OBJECT_ID only if it does not already exist
  Select count(*) into v_new_col2 from user_tab_cols where column_name = 'PROJECT_RSRC_OWNER_OBJECT_ID' and table_name = 'W_PROJECT_D';
  if (v_new_col2 = 0) then
      execute immediate 'alter table W_PROJECT_D add PROJECT_RSRC_OWNER_OBJECT_ID varchar2(255)';
  end if;
  commit;
END;
/
EXIT

In this example, we need to do some transformation of the data. In the Extended schema's Project view there is a field, "ownerresourceobjectid", which is the object id of the resource that owns the project. For some this may be enough, but you probably want the actual username of the resource that owns the project. Because of this, we are going to create two columns in the dimension but expose only one to Analytics: the first column will store the project resource owner id, and then we will look up the username for that object id and write it to the project owner column.
2- Now we need to populate these columns. Add a new script to your \scripts\user_scripts folder called project_owner.sql. Here is a sample of SQL to populate these fields:

set serveroutput on size 100000
-- add project owner values
declare
  v_src_id number;
  v_link_name varchar2(25);
  vsql varchar2(200);
BEGIN
  -- helper functions provided by STAR for the data source id and db link name
  v_src_id := get_source_db_id;
  v_link_name := get_link_name;

-- pull the owner resource object id from the Project view over the db link
vsql := 'update w_project_d set PROJECT_RSRC_OWNER_OBJECT_ID = (select ownerresourceobjectid from project@' || v_link_name || ' where objectid = w_project_d.project_object_id) WHERE datasource_id =' || v_src_id;
execute immediate vsql;
-- resolve the owner's username from the resource dimension already in STAR
UPDATE w_project_d set PROJECT_OWNER = (select user_name from w_resource_d where resource_object_id = w_project_d.project_rsrc_owner_object_id) WHERE datasource_id = v_src_id;
commit;
end;
/
Exit

The content of this SQL is just an example. The first update gets the ownerresourceobjectid from the Project view in the Extended schema through the database link. The second update matches on the resource object id to find the username from a table already existing in STAR. Then we commit and exit the process.
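After the ETL runs, a quick spot check using only the columns created above confirms the new data was populated:

SELECT project_object_id, project_rsrc_owner_object_id, project_owner
  FROM w_project_d
 WHERE project_owner IS NOT NULL;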

3- We need to define the order in which the scripts in the user_scripts folder run. In P6 Reporting Database you can define the order in which scripts get executed; this is helpful in a case like this, or if you see deadlock issues. Go to \res\priority\ and edit user_scripts.txt (if it does not exist, add it). Set the following priority:

# Order of user scripts
create_columns.sql
project_owner.sql

4- Now these scripts are part of the process; run STARETL.bat or .sh. This final step of the ETL process executes ANY scripts in this folder, so beware of adding anything you do not want executed. Test your scripts in a non production environment to make sure there are no side effects.

This is a basic sample of how to add data to your STAR data warehouse that may not be part of the product. Keep copies of these scripts so you do not lose them when upgrading: the user scripts concept has continued into more recent versions of the P6 Reporting Database\P6 Analytics, but during upgrades and new installations the scripts need to be re-added. An upcoming blog will cover part 2 of this process: how to get this new data to show in P6 Analytics.

Thursday Mar 28, 2013

Fact Driven

The P6 Analytics product is driven by facts. When you are looking at a subject area - take Primavera - Activity, for example - you can add Project Id and Project Name from the Project dimension, which lists out all projects in the w_project_d dimension table. If you then add a fact to your analysis, such as actual cost, this joins the Project dimension to the Activity Spread fact table. For this join, a fact record must exist, or you will get "No result set found" returned. You would hit this situation if the project you are looking for either has no activities or has no spread records.
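Conceptually, the generated query behaves like an inner join between the dimension and the fact table, so a project with no fact rows simply drops out of the result. A simplified sketch follows; the fact table name and join column are illustrative, not the exact SQL OBI generates:

SELECT p.project_id, p.project_name, SUM(f.actual_cost) AS actual_cost
  FROM w_project_d p
  JOIN w_activity_spread_f f   -- hypothetical fact table name
    ON f.project_object_id = p.project_object_id
 GROUP BY p.project_id, p.project_name;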

In P6 Analytics versions 1.2 and higher, this can occur if the project has not been published in the Extended schema. If you know there are activities that should contain spread data, the best place to begin evaluating is the Extended schema. In the P6 application, go to the Projects section and add columns for enable publication (make sure it is checked) and last published (make sure this date is populated and recent). You can right click on the project in P6 and choose Publish Now, then check back in a few minutes to see if the last published date has been updated. For more information on the Extended schema, check out the P6 Extended Schema white paper.

Friday Mar 22, 2013

P6 Reporting Database and additional Java options

When executing the ETL process for the P6 Reporting Database you run staretl.bat or .sh. In this file there is a JAVA_INVOKER_PREFIX parameter that holds the pre-determined options needed by the ETL process. If you need to include additional Java options, they can be added to this parameter. One item that may need attention is entropy on the server: the ETL process needs to make both TNS and JDBC connections, and if issues are encountered making a JDBC connection, it could be related to low entropy on the machine. Validate that the server has enough entropy; if the amount of entropy is low, contact the system administrator for the server.
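For example, on Linux you can check the available entropy with:

cat /proc/sys/kernel/random/entropy_avail

A commonly used JVM option for avoiding blocking reads from /dev/random, which could be appended to the options in JAVA_INVOKER_PREFIX, is shown below. This is a general Java workaround, not a product-specific setting, so verify it with your system administrator before editing the script:

-Djava.security.egd=file:/dev/./urandom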

Friday Mar 01, 2013

Changing Lines at the data date

Have you ever wanted to show actual values vs. remaining on the same line, using different colors for each? This can be a tricky exercise in OBI. First, you have to create two separate date sets: one set of dates before the data date, for actual values, and another set of dates after the data date, for whatever is remaining. Second, you want to clearly show the data date itself, so it is obvious where the actual values end and the remaining begin. After you have created the time separation, you can begin to assemble your graph. In OBI, the order in which you place your lines matters, for the simple reason that one bar will overlay the other. Since our actual bar stops at the data date, we want it to "appear" on top of our remaining bar, so we see actual values up to the data date and remaining after it. The last part involves the display of the data date. I accomplished this by placing a large number in a field used for the data date; this value essentially represents the height of the bar, so when you try this in your environment, you may need to adjust the value according to your projects' values.
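In SQL terms, the two date sets amount to splitting one measure at the data date with conditional logic. This is a sketch with hypothetical column and view names, not the product's actual logical model:

SELECT period_date,
       CASE WHEN period_date <  data_date THEN actual_cost    END AS actual_to_dd,
       CASE WHEN period_date >= data_date THEN remaining_cost END AS remaining_after_dd
  FROM my_spread_view;  -- hypothetical source view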

If you want to see the analysis (OBI 11.1.1.6.7), download the files here


Wednesday Feb 06, 2013

Path too long when copying P6 Analytics Catalog

With a default installation of Oracle Business Intelligence and P6 Analytics 3.1, some objects in the P6 Analytics 3.1 default catalog create errors because the path length is too long. Please consult section 17.1.1.1, Object Name Guidelines, in the following Oracle Business Intelligence document: http://docs.oracle.com/cd/E23943_01/bi.1111/e10541/prescatadmin.htm

If you took the default paths when installing Oracle Business Intelligence, your catalog path will be /home/oracle/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog

This path may cause errors when copying the P6Analytics catalog.

To work around this issue, create a new folder in a shorter path location (e.g. /home/oracle/Middleware/catalog), copy the P6Analytics folder from the P6 Analytics 3.1 release media to the new folder, and change the catalog path location in Enterprise Manager for your Oracle Business Intelligence installation to use the shorter path.

Alternatively, if you installed Oracle Business Intelligence on Linux/UNIX, you can create a symbolic link and reference it in Oracle Business Intelligence for the catalog path. On Windows, map a drive to the catalog location and reference the mapped drive instead.
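For example, on Linux the symbolic link approach might look like this (paths taken from the defaults above; adjust to your installation):

ln -s /home/oracle/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog /home/oracle/Middleware/catalog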

Tuesday Feb 05, 2013

Information around the P6 Extended Schema

This blog covers the P6 Extended schema and some general topics related to the settings that affect the P6 Extended schema services. Because the P6 Extended schema is the data source for P6 Analytics, understanding how it works and its settings is important to a successful P6 Analytics implementation. The Extended schema applies to all P6 Reporting Database versions 2.2 and higher and P6 Analytics 1.2 and higher. If the data is not in the Extended schema, it will not end up in P6 Analytics.

What causes a project to get published?
  • Automatic publishing: projects are automatically published after a user right clicks on the project in P6 and chooses Publish Now; the next time the project service runs, the project is added to the queue. Some operations cause an internal update of fields on the project level that has the same effect and forces the project into the queue on the next project service run.
  • Crossing the threshold '# of changes exceeded'
  • Crossing the threshold 'time since last publication exceeds', when the project has had a change since the last threshold period
Where are the threshold counts being tracked?

  • Based on the threshold settings, when a project crosses a certain number of changes it is added to the queue to be published. These changes are counted from changes on activities in the project, the WBS levels of the project, resource assignments on the project, and relationships on the activities. Changes in these areas add up in an internal table in the PMDB until either the threshold of changes is crossed, or the threshold of time is crossed for a project with changes. At that point the project is added to the queue for calculation.

What needs to occur for a new project to make it into P6 Analytics?
  • The project must be created with publication enabled, and the project must be published. After the project has been successfully published, the Security global service must run to calculate security on the project. Over the course of a normal day the global services run and this is taken care of, but if you need the new project immediately you must run the Security global service yourself after publishing the project. Once you have confirmed all these processes have occurred, run the StarETL. At that point the project should be in the Star database and available to the P6 Analytics application in OBI.

Thursday Jan 17, 2013

Common reasons why data may not appear to be updated in P6 Analytics when compared to P6

The most common reason is that caching is on in OBI and the query has been run before: the cache stores the result of the same query, so if a dashboard or the same analysis is run again, even after the STAR ETL process has run, cached data can be returned. Turning off the cache queries directly against the database and returns the most up to date data, though this can have performance impacts. Depending on the environment, customers can enable, disable, or flush the cache based on their needs for data freshness and performance. See the OBI documentation for how to turn off or flush the cache.

For P6 Analytics version 1.2 and greater, the P6 Extended schema is used for calculating the data that is then pulled into STAR. If the P6 Extended schema services are not running, the data there can be stale. Check the P6 Extended schema services - global services and project services - and make sure they have run recently. For changes on a project to be published, the project must cross the thresholds defined for the project service, such as the amount of changes or the time since the last change. See the P6 Extended Schema white paper or the P6 EPPM Installation and Configuration guide for more details on the services. Services can also be viewed in the JOBSVC table. The Extended schema services account for a large amount of the updates that make it into the data warehouse: some values map directly from physical fields in the PMDB, but denormalized and calculated data is held primarily in the tables updated by the Extended schema services.
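As a hedged spot check in the P6 database, you can look at the publication columns on the PROJECT table. PX_ENABLE_PUBLICATION_FLAG is discussed elsewhere on this blog; PX_LAST_UPDATE_DATE is an assumption for the 'last published' column, so verify it against your schema:

SELECT proj_id, px_enable_publication_flag, px_last_update_date
  FROM project
 WHERE px_enable_publication_flag = 'Y';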

Another reason is if the ETL process failed. Generally the ETL process runs on a regularly scheduled interval without any failures. Causes of failures could be loss of connection, bad data, change in system privileges, etc. It is good practice to monitor the staretlprocess.log and staretl.html files located in the <installation directory>\star\log folder. An undetected failure could make P6 Analytics appear as if there were no updates to the data. 