
Deep dive into various configurations with Oracle WebLogic Server - WLS installation, plugin configurations, JMS, SSL,...

Recent Posts


Data Science and Primavera Analytics...Introduction

An effort has been underway to incorporate data science and advanced analytics into Primavera Analytics. This effort started a few releases ago with sample analysis using the built-in Oracle Business Intelligence advanced analytics functions, including Trend line, Outliers, Cluster, and Regression. These samples can be found within the Primavera Analytics catalog on the More dashboard, under the Advanced Analytics page. But we wanted to continue moving forward with this effort, going beyond the out-of-the-box functions. This has led us to incorporate the use of Oracle R (https://www.oracle.com/technetwork/database/database-technologies/r/r-distribution/overview/index.html) for even more advanced calculations. In Primavera Analytics 19.1, we have started to show what is possible using R, and how to take the data you already have to the next advanced level of analysis. In 19.1, we have added 2 new sub-pages to the More dashboard called "Project Success (Adv A)" and "Resource (Adv A)". If you have any questions, feel free to add them to the comments section below. Additionally, if there are specific advanced calculations that you think would be beneficial to include in our catalog, please leave those in the comments section as well, or you can add them to the My Oracle Support Enhancement Request community (https://community.oracle.com/community/support/primavera/primavera_-_prim), using the "Create an Idea" link... In the next series of blog postings, I will detail which analyses have been released and some of the technical details...



Is your environment "Secure by Default"?

With the release of Primavera Analytics 18.8, we have introduced a couple of new settings in the Administration app that provide insight into whether or not your configuration is considered "secure by default". What is "Secure by Default"? There are 2 main components used by Primavera Analytics. The first is Transparent Data Encryption (TDE). Transparent Data Encryption (TDE) enables you to encrypt data stored in tables and tablespaces within the Primavera Data Warehouse. Encrypted data is transparently decrypted for a database user or application that has access to the data. For detailed information on Transparent Data Encryption (TDE), see the following link: https://docs.oracle.com/cd/E11882_01/network.112/e40393/asotrans.htm#ASOAG600 The second component of "Secure by Default" is ensuring that communication between the client and server is done using Secure Sockets Layer (SSL). Secure Sockets Layer (SSL) is a standard security protocol for establishing encrypted links between a web server and a browser in an online communication. Neither of these security settings is new in version 18.8. What is new is that during the setup/installation/configuration of Primavera Data Warehouse, the configuration will alert the user that a non-secure configuration is being used, and that the recommendation from Oracle is to operate in a secure configuration (by enabling TDE and SSL). If the environment is not set up using TDE or SSL, the Primavera Data Warehouse configuration will allow you to continue with installation, and the software will run without issue. However, it is strongly recommended that your installation employ both of these security protocols to ensure the safety and security of your environment.
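If you are unsure whether TDE is already in place for your Data Warehouse database, a quick check against the standard Oracle data dictionary views can help. This is only a minimal sketch using generic Oracle views (not a Primavera-specific utility), and tablespace names will vary by installation:

    -- Is a TDE wallet/keystore configured and open?
    SELECT wrl_type, status FROM v$encryption_wallet;

    -- Which tablespaces are encrypted? (Data Warehouse tablespaces should show YES)
    SELECT tablespace_name, encrypted FROM dba_tablespaces ORDER BY tablespace_name;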


If you don't learn History you are doomed to repeat it... (Part 3)

In this post, I will be discussing fact level history. In part 2, I discussed how Primavera Analytics handles dimensional history. Factual history applies to both Primavera P6 and Unifier. The biggest difference between the two is that P6 has the ability to capture changes at the daily level; the lowest level of data capture in Unifier is weekly. Factual history is handled in much the same way as dimensional history. Historical data is captured with each run of the data capture process (ETL), provided that the data in the base product (P6 or Unifier) has changed since the last ETL process run. If the data has not changed since the last ETL process run, then the data that exists in the fact history tables is still "effective", and therefore still has the designation of being the current record. How is weekly history handled if the data capture process is run daily? In this scenario, the weekly window is based on settings within P6 and Unifier for the start of a week. P6 has a setting which specifies which day of the week (Sunday, Monday, etc...) is the start of a week. The data capture process reads this value and follows the same "window" designation. As the data capture process moves along daily, the data is updated within the bounds of that week. Once the defined start day of the week is reached again, whatever the last capture for the previous week was is what will be stored for that week of data. Take, for instance, a data capture process that is run Monday, Wednesday, and Friday. If P6 is set for the week to start on Sunday, then the data stored for a week will be the data captured by the run on Friday. Once the data capture process is run on Monday, that capture will start a new week of captures. The same is also true for daily runs: since the data capture process can be run more than one time per day, whatever the last data capture is for that day is what the data will be for that day in Primavera Analytics. In part 4, I will be discussing the appropriate settings in both P6 and Unifier that control the time granularity for how data is captured.
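As a small illustration of the week-window idea described above, the following generic Oracle SQL derives the Sunday that starts the week containing a given capture date. This is only a sketch of the windowing logic, not the actual ETL code, and it assumes a week starting on Sunday:

    -- Sunday that begins the week containing a given capture date (here, SYSDATE)
    -- note: the day name is NLS_DATE_LANGUAGE dependent
    SELECT NEXT_DAY(TRUNC(SYSDATE) - 7, 'SUNDAY') AS week_start FROM dual;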


If you don't learn History you are doomed to repeat it... (Part 2)

In Part 1 of this series, I discussed the 2 types of histories that are available in Primavera Analytics. This post will discuss in detail the first type of history, dimensional history, or Slowly Changing Dimensions (SCD). What is a "Slowly Changing Dimension"? A slowly changing dimension is any non-aggregated bit of data that typically does not change frequently. This data might change a lot at first as a project is being set up and configured, but as the project moves on in time, this data does not change very often, if at all. An example of dimensional data is a Project Name, which typically does not change once the project is created and the project moves forward in time. How does Primavera Analytics handle a slowly changing dimension? When enabled, slowly changing dimensions work as follows. (Assume the following: Publication Services and all relevant configurations have been done in Primavera P6 to enable slowly changing dimensions. Also, assume that this is the very first run of the data migration process (ETL).) Since this is the first run of the data migration process (ETL), a record is created that has the following bits of information: the dimensional value (e.g. Project Name, Activity ID, etc...), the effective start date (which is set as the date the ETL process was/is run), the effective end date (since this is the first run, the effective end date will be set to 01/01/3000), and a field called "current flag", which will be set to a value of 1. The next day comes along, and the project name, or any other captured bit of data that is configured as a slowly changing dimension, is changed. The ETL process is run. There are now 2 records in the data warehouse. The first record has the effective end date changed to the day 2 ETL run date (i.e. this ETL's run date), and the current flag is now set to 0, since this record is no longer the "current" record because the data has changed. The second record is created with the new Project Name, the effective start date set to this ETL's run date, the effective end date set to 01/01/3000, and the current flag set to 1. This behavior is repeated with each subsequent run of the data capture (ETL) process. In Primavera Analytics, in the P6-specific History subject areas, there are fields for the effective start date, effective end date, and current flag. It is important to note that when creating an analysis, you should use a filter (e.g. current flag = 1) on a slowly changing dimensional field (where necessary) to avoid a fact metric being counted more than once. In the next post, I will discuss how fact history works with Primavera Analytics.
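To make the record layout concrete, here is a minimal SQL sketch of how the versioned records can be queried and how a current-flag filter avoids double counting. The table and column names (w_project_d, project_name, effective_start_date, effective_end_date, current_flag) follow the naming pattern described in this series, but treat them as illustrative rather than an exact schema reference:

    -- all versions of a project row, oldest first (column names illustrative)
    SELECT project_name, effective_start_date, effective_end_date, current_flag
      FROM w_project_d
     ORDER BY effective_start_date;

    -- only the record that is currently "effective", which is what most analyses
    -- should filter on to avoid counting facts more than once
    SELECT project_name
      FROM w_project_d
     WHERE current_flag = 1;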


If you don't learn History you are doomed to repeat it... (Part 1)

We have had a lot of questions recently on how History works in Primavera Analytics, so we wanted to discuss this topic in this forum and bring clarity to it. This is part 1 in what will likely be a series of posts dedicated to explaining how to configure history: the settings in CEGBU products, how history is captured and stored in Primavera Analytics, and what behavior can be expected when using history in Primavera Analytics. Primavera Analytics has 2 types of history, each of which can be configured in different ways. Additionally, you can choose to use both, either, or neither, depending on your business requirements. Furthermore, we will discuss the differences between Primavera P6 and Unifier, where appropriate. The first type of history is dimensional history. You may have heard this called Slowly Changing Dimensions (SCDs) for short. Slowly Changing Dimensions allow dimensional data (data that cannot be aggregated, e.g. Project Name, Activity ID, Cost Sheet Name, etc...) to be stored historically with each data capture (ETL) process run, if and only if the dimensional data has changed. This type of history only applies to Primavera P6. The second type of history is factual history. Factual history stores, at a user-configured time granularity (Monthly, Weekly, or Daily), fact data (data that can be aggregated, e.g. Actual Total Cost, Planned Units, Earned Value, etc...). This type of history applies to both Primavera P6 and Unifier. At the time of this post, Primavera Analytics contains twelve (12) unique subject areas for history. They are as follows: Project History, Activity History, Resource Assignment History, Business Process History, Cash Flow History, Cost Sheet History, Space Management History, Project User Defined Fields History, WBS User Defined Fields History, Activity User Defined Fields History, Resource User Defined Fields History, and Resource Assignment User Defined Fields History. In Part 2, we will discuss all of the settings that can be set for configuring Slowly Changing Dimensional history, and where this data is accessible within Primavera Analytics.



Immediate Updates (part 5) Summary Project Data vs. Activity/Aggregate Data

In the Primavera Activity Subject Area there are certain ways to use activity level data (which can be part of immediate updates) aggregated up to the project level or higher. Or you can use project level data, which pulls from the project spread table rather than aggregating fact data. To help with performance in this Subject Area, there is a project spread table that is used when there is no dimensional value below project. You can still use the same fact selection and columns, but the system knows to use the pre-populated project spread table. An example of this is if you add Project Id, Project Name, and Planned Labor Units: you did not go down to the lower activity spread level, so this analysis will use the project spread table. Where this can get confusing is that the project spread table is populated only during the daily ETL run, and only by the Enterprise Summaries global publication service. To get this pre-populated aggregate data at the project level for a project that is marked for immediate update, you would need to run the Enterprise Summaries service, then run your daily ETL. Another option to make sure you get the immediate changes for such a project is to force the analysis to use the activity spread data (which can be updated immediately) by adding a field that touches the activity dimension, such as # of Activities from the available fact fields, and then hiding it in the analysis. This way the output doesn't show this field, but you are getting live, immediate activity-level spread data.
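For readers who like to see the difference at the database level, the sketch below contrasts aggregating activity-level spread rows with reading a pre-summarized project-level table. The activity spread fact table name (w_activity_spread_f) appears elsewhere on this blog; the project-level table and column names used here (w_project_spread_f, plan_labor_units, project_object_id) are purely illustrative and may not match your schema:

    -- roll the activity-level spread rows up to project (column names illustrative)
    SELECT project_object_id, SUM(plan_labor_units) AS planned_labor_units
      FROM w_activity_spread_f
     GROUP BY project_object_id;

    -- versus reading a pre-summarized project-level spread table (name illustrative),
    -- which is only refreshed by the daily ETL / Enterprise Summaries service
    SELECT project_object_id, plan_labor_units
      FROM w_project_spread_f;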



Immediate Updates (part 4) Codes and UDFs

Because of the dynamic nature of code and UDF updates for P6 sources, the behavior needs to be handled differently. Project and Activity code assignments are captured as part of the Real Time Project service capture and updated in the w_xxx_codeassignment_ds tables. These changes are stored in a sort of 'parking lot': they are not processed and committed until the daily ETL is run. Resource codes are part of Real Time Global and treated differently from project-related updates. UDFs are part of the Real Time Global process because UDF type and UDF value cover all UDF object types, not just project-level UDFs. When a UDF assignment change is made, Real Time Global picks it up and stores those changes. These are similar to codes, where the changes are stored in the 'parking lot', and the daily ETL completes the process of applying these UDF updates to the dimension and fact tables. Parking lots are necessary for items like codes and UDFs where there could be rapid changes and where the same value could be changed multiple times very quickly. Based on the way the data is stored in the source system, we need to gather these changes and apply them at an appropriate time to handle most of them in one shot. Passing every change every time for these areas would cause a large amount of redo log creation and an influx of dimension updates. As mentioned earlier, codes and UDFs are unique areas because of their dynamic behavior. Codes and UDFs generally are not changed as often as schedule data, so we ensure that they are taken care of in the daily ETL and will not cause any system problems if there is a period of large updates in these areas.
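If you want to see how many code-assignment changes are sitting in the 'parking lot' waiting for the daily ETL, a simple count against the staging tables can help. The w_xxx_codeassignment_ds pattern comes from the post above; the concrete table name used here is an assumption, so substitute whatever staging tables exist in your data warehouse:

    -- pending (not yet applied) code assignment changes parked for the daily ETL
    -- (table name assumed from the w_xxx_codeassignment_ds pattern)
    SELECT COUNT(*) AS pending_changes
      FROM w_activity_codeassignment_ds;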



Immediate Updates (part 2) - Global

As of Primavera Analytics 17.7, the ODI ETL process includes a LOAD_PLAN_REALTIME_GLOBAL load plan. This Global Real Time process handles global data such as Resources, Users, Risks, Roles, OBS, Code Types, etc. Any change to these dictionary-level items will be detected by the Global Real Time load plan and pulled over right away. This doesn't mean all related calculations will be completed, but the staging tables and the changes will be detected and brought over. A look at the mappings gives a good idea of which global objects will be updated immediately: if it is a dimension table, it should get an immediate update. Staging tables generally pull over these global changes, but they are stored and incrementally updated; the full process is not completed until the daily ETL runs. These staging tables are just housing the changes for the full updates in the daily ETL. Running the extended schema global services makes sure these changes come across and generally is the trigger. For example, updating a resource or adding a portfolio would require the Enterprise Data or Resource Management global service to be run. When the global service is run, ODI detects and pulls over these global changes right away. Doing these incrementally takes some of the load off the daily ETL run.



Immediate Project Updates (part 1) - Introduction and Time Zones

The Primavera Analytics 17.7 release introduced a new feature for immediate project updates. This is currently available for projects in P6 and is configurable from the project preferences in P6. By default a project is set as 'Scheduled'; this can also be set to 'Immediate'. This feature is available ONLY for an ODI (Oracle Data Integrator) ETL process. With Immediate, as soon as certain changes occur on that project, those changes are sent directly to the data warehouse. These projects don't require the daily ETL process in order to be updated. In Analytics, project-related data is comprised of data from P6 core tables (e.g. PROJECT) and from P6 extended schema tables (e.g. PROJECTX). Certain changes (like Project Id) will come right over when the change is saved. Other changes that are calculated, or that need to be processed by the project publication service, won't come over until the project has been published. As a best practice, if you are using a project as 'Immediate' and you want the changes to come over right away, you should publish your project after those changes are done. Once the project is finished publishing and ODI on the data warehouse side has detected the changes, they will come across. Fields like Project Name are actually stored in the PROJWBS table and are 'calculated' into the Project view of the extended schema. But you can make changes to your activities, assignments, project, WBS, etc., and when the project is published those changes will be visible shortly in Analytics. Time Zones: You need to make sure the server hosting the database where your STAR schema resides and the server hosting your ODI schemas have the same time zone configured. This is also applicable to the ODI server running the processes. 'Real Time' functionality for project and global objects depends on timestamps; for this reason these servers need to be in the same time zone to work properly. Please keep this in mind during configuration.
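A quick way to sanity-check the time zone alignment mentioned above is to compare what each database reports. The statements below are generic Oracle SQL, offered only as a sketch; you would still need to confirm the operating system time zone on the ODI agent host separately:

    -- run against the STAR database and against the ODI repository database,
    -- then compare the results (and the OS clock on the ODI agent server)
    SELECT dbtimezone, sessiontimezone FROM dual;
    SELECT systimestamp FROM dual;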


History Dates vs Non-History Dates in Primavera Analytics

It seems that recently a few questions have come in regarding differences in how Primavera Analytics handles dates, specifically the difference in dates between current subject areas (e.g. Primavera - Cash Flow, Primavera - Resource Assignment, etc...) and their history subject area counterparts (e.g. Primavera - Cash Flow History, Primavera - Resource Assignment History). In the history subject areas, the dates that exist in the Time folder of the subject area correspond to the date when the capture of history occurred (i.e. the ETL run date). In these history subject areas, those dates have nothing to do with "spread" values or buckets within the products. Additionally, in the history subject areas, we have included dimensional dates that can be used for the "spread" buckets at the lowest level of fact granularity. In the current subject areas, the dates in the Time folder are the spread/bucket values for the lowest level of fact data granularity. This is because there is only 1 level of dates in these subject areas as opposed to 2 levels (history captures and spread/bucket) in the history subject areas. The following example explains these differences. The first analysis is a Cash Flow analysis created from the Primavera - Cash Flow subject area. The Calendar Date is selected from the Time, Calendar folder in the Primavera - Cash Flow subject area and shows the Actual facts for a particular month, which are defined in Unifier. The second analysis is built from the Primavera - Cash Flow History subject area. The Calendar Date, selected from the Time, Calendar folder in this analysis, represents the date on which the data was captured by the running of the ETL data capture process. However, adding in the Cash Flow Date field, located under the Cash Flow folder in the Primavera - Cash Flow History subject area, gives us an exact match between the history subject area (for the latest historical capture) and the current subject area. In conclusion, when 2 levels of dates (spread/bucket dates and history capture dates) are exposed in a single historical subject area, the dates under the Time, Calendar folder represent the ETL data capture dates. In these cases, secondary dates are provided under the folder of the lowest level of granularity for the fact data (Cash Flow, Resource Assignment, etc..).



ETL Process Fails with error message "Error while Refreshing variable ...VAR_DATASOURCE_ID"

It has come to our attention recently that ETL processes have been failing to run, either during initial setup or at random times after installation, with the error message "No value to be set to variable PRIMAVERA_ANALYTICS_PROJECT.VAR_DATASOURCE_ID". The error can surface in several ways:
- During a traditional ETL, at step 7 in the Create_code_columns script, as "End of File Communication".
- During OdiImport, in the ETL section at step 7 in the Create_code_columns script, as "End of File Communication".
- In the Analytics Administration app, ODI load plans show as failed.
Upon closer inspection in ODI Studio, you will see an error message similar to: Step VAR_DATASOURCE_ID fails after 1 attempt(s). Error while refreshing variable PRIMAVERA_ANALYTICS_PROJECT.VAR_DATASOURCE_ID. This error is caused by the version of the Oracle Database being used for the Data Warehouse. The Data Warehouse Oracle DBMS version needs to be at least 12.1.0.2.160831 (http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html). Additionally, a new Data Warehouse patch set has been created to address additional causes of this issue. To obtain this patch set, perform the following:
1. Go to MOS and sign in: https://support.oracle.com
2. Choose the Patches & Updates tab.
3. In the Patch Search section, change the dropdown to "Patch Name or Number".
4. Enter 26836227 and click the Search button.
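Before applying a patch, you can confirm what the Data Warehouse database is actually running with a couple of generic Oracle dictionary queries. These are standard views, not a Primavera-specific check, and the exact patch description text will vary by platform and patch level:

    -- base database version
    SELECT banner FROM v$version;

    -- patches applied to the database (12c and later), to confirm the
    -- October 2016 (or later) database patch level is in place
    SELECT patch_id, description, action, status FROM dba_registry_sqlpatch;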



Unifier Historical Captures - Alternative Way to Get Monthly on Cost Sheet and Cash Flow

For the Unifier Cash Flow and Cost Sheet subject areas in Primavera Analytics, the default (and currently only available) historical capture option is weekly. The weekly capture is determined by a start day of the week of Sunday. That system date is what is used for filling the historical bucket and for closing it and opening a new one. A new historical bucket is created as of 12:00am on Sunday and will be continuously updated until 11:59pm on Saturday night. If you try to aggregate to monthly in OBI, you could see a discrepancy if the month ends in the middle of the week: if the beginning of the week is in the previous month and the end of the week is in the next month, then the capture will be stored in the previous month. This could result in a slightly larger previous month bucket. Some alternative ways to get a true monthly capture are:
1- In 16.2 an option was added to the staretl.properties file so Unifier history can be turned off. The intention was to allow shutting history off to preserve space and ETL time if history was not needed; however, this can also be used to regulate historical captures. If you shut this option off for the entire month except for the last day of the month, you will get one capture for that month. When you build a monthly bucket in OBI, it will display just this one capture, giving you your monthly history. After the history capture for the month, turn this option back off.
2- As of the 17.7 release this option is exposed in the Analytics Administration application, so it can easily be turned on and off without requiring any backend changes, and it is stored in the properties settings in the data warehouse.
3- Going along with the setting options above, you could also configure an additional data source with a filter for just the projects you want to capture history for, then set that ETL to run just once a month, with the historical capture option on for this ETL #2 and off for ETL #1. When reporting you will need to make sure to filter specifically on datasource id = 2, but this way you can control your historical captures in a separate ETL process and only have to configure it one time.
These are a few examples of how to capture historical data on a monthly interval while still working within the default weekly settings.


Configuring Codes and UDFs in a Multiple Datasource Environment

When setting up an Analytics deployment with multiple P6 instances, you need to take extra care when configuring codes and UDFs to ensure that your mappings for different sources won't clobber each other. I can't stress enough how crucial it is to get this right the first time when configuring multiple datasources. Once you begin capturing historical data, modifying the position of the existing code and UDF mappings becomes an extremely difficult task. Prior to release 17, the code and UDF mappings were stored in properties files, and you could maintain consistency across multiple datasources by copying the code/UDF portions of the properties file between the ETL homes within your Analytics installation (Handling Codes and UDFs in a Multiple Data Source Environment). Now that those mappings are stored only in the database, the process for keeping them consistent has changed. The simplest example is two P6 instances with identical codes/UDFs. Let's assume in this case you have the following activity codes to map: In this scenario, you would configure the first datasource against Instance 1 via the configuration utility, selecting these activity codes as 1-7. You would then do the same for the other code types and UDFs you needed to map. At that point you could repeat this entire process for the second datasource against Instance 2, but keep in mind you would need to replicate all of the code/UDF mappings exactly as they were configured for the first datasource. There's an easier and safer way to accomplish this. After you have configured the first datasource against Instance 1, create a second datasource against Instance 2, but don't map over any codes or UDFs. After the second datasource is created, connect to the data warehouse via SQL Developer or a similar tool. Then, run the following statement to fully replicate the code/UDF mappings between the first and second datasource.

    insert into etl_parameter (p_feature, p_1, datasource_id)
    select p_feature, p_1, 2
      from etl_parameter
     where datasource_id = 1
       and (p_feature like 'code.%' or p_feature like 'udf.%');
    commit;

Once that completes, open a terminal and navigate to the run substitution folder within the ETL home for datasource 2 (i.e. <DW_HOME>/star/etl_homes/staretl2/etl/common). Execute the runSubstitution.sh/cmd script based on your OS. Now your code/UDF configuration is identical between both datasources. Things get considerably tougher with a more complex example. Let's say you have two P6 instances with slightly different codes/UDFs, and you have the following activity codes to map: In this type of scenario, you'll still want to ensure that any shared codes are mapped to identical code positions. But you'll also need the codes that are unique to each datasource to have their own slots. One option is to configure the first datasource against Instance 1 while leaving gaps for the codes/UDFs that are unique to any subsequent datasources. In this case you would map codes 1-5 for datasource 1, then make sure that when Work Order Priority and Work Order Type were mapped the column numbers were changed to 8-9. This would leave 6-7 available when you configured the second datasource against Instance 2, so those slots could be used for the Team and Work Order Criticality codes. Datasource 1: Datasource 2: The other way to deal with this type of scenario is to just map the first datasource without worrying about leaving specific slots open for subsequent datasources. Then, when adding the second datasource, you would map any unique codes to the first open slots after those used by datasource 1. You would also still need to ensure any common codes were mapped to the same location that was used for the first datasource. Datasource 1: Datasource 2: Either approach will work as long as the common codes/UDFs share the same slots and the unique codes/UDFs are given their own. But unfortunately, there are no shortcuts when you're dealing with this more complex scenario. You'll need to carefully track all of the mappings needed for all datasources, and set up the code/UDF mappings manually for each source using the configuration utility.
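As a quick sanity check after running the insert above and the runSubstitution script, you can compare the copied mappings side by side. This is just a hedged verification sketch against the etl_parameter table referenced in the post; adjust the datasource IDs to match your environment:

    -- mappings should now exist for both datasources with matching p_1 positions
    SELECT p_feature, p_1, datasource_id
      FROM etl_parameter
     WHERE (p_feature LIKE 'code.%' OR p_feature LIKE 'udf.%')
     ORDER BY p_feature, datasource_id;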



Creating a DB User, With Row Level Security, For Access to Star Schema Data For a Specific Application User

The goal here is to create a database user based on an application user and have row level security still apply, so that this database user can only see the data the application user can see. This is helpful if you want to use another reporting system besides OBI or want to create database-specific users for authenticating and viewing data. Below is an example of how to do so.

Step 1: Create a new DB user for this application user. In this scenario, the application user is jeffm. (connect as SYSTEM)

    create user jeffm identified by password
      temporary tablespace TEMP
      default tablespace STAR_DAT1
      quota unlimited on STAR_DAT1
      quota unlimited on STAR_HST1;
    grant create session to jeffm;
    alter user jeffm grant connect through staruser;
    -- select sys_context('userenv', 'current_schema') from dual;  -- this will help find out staruser
    -- **staruser is your schema owner for the data warehouse

Step 2: Create roles. Create a role to assign. (connect as SYSTEM)

    create role starreports;
    grant create session to starreports;
    grant starreports to staruser with admin option;
    grant starreports to jeffm;

Step 3: Create synonyms. You will want to create synonyms to be queried for access by this user. Here are some examples. Create a synonym for accessing a table; create a synonym for the Day dimension too as a test, since no security should block viewing it. If you can query the Day dimension but not Project, it points to a remaining security issue.

    create or replace public synonym PROJECT for staruser.w_project_d;
    grant select on staruser.w_project_d to starreports;
    create or replace public synonym DAYDIM for staruser.w_day_d;
    grant select on staruser.w_day_d to starreports;
    commit;

Step 4: Test it out and make sure you can retrieve data. Connect as the schema owner (ex. staruser).

    -- overall number of projects:
    select count(*) from w_project_d;
    select * from w_project_security_s where user_name = 'jeffm';

    -- now switch over to use the security package to see what this user should see:
    execute secpac.set_user('jeffm')
    select count(*) from w_project_d;

Validate as the new user. Connect as jeffm.

    select * from project;

You'll see that security is now being enforced through the synonyms this user can query. The data returned when connected as this user and querying the synonym should match what is returned when executing through secpac as that same user.

Terminology: secpac = security package in the STAR schema that controls how and who can see what data. w_project_security_s = table in the STAR schema where security for each user is broken out by specific projects. If 1 user can see 100 projects, there should be 100 rows for that user, 1 for each project. If 100 users each see 100 projects, then you will see 100 rows for each user.



Custom Project Publication Arbiter - How to Publish Projects Only When You Want

All project publication in P6 is kicked off by a process called the Arbiter. The arbiter is a job that executes a couple of stored procedures to detect changes, thresholds, and settings on projects to determine if they need to be queued up for publication. In the P6 application we can define how often the arbiter runs. Generally it is run every couple of minutes, and if projects have changes and have crossed the set thresholds they are then submitted to the job service queue for publication. If you would like projects to publish ONLY at a specific time, below is an example of how to do so. In this example we are going to create a new procedure. This procedure does the work the arbiter does and turns off any existing recurring arbiter jobs; it runs only when it is called. You can have the procedure called from a database job, so as long as the database is up this custom arbiter can run. In this scenario, if you set the database job to run at midnight every night, then projects will ONLY publish at midnight, which could result in more project jobs being submitted at once and possibly taking longer to process. You can increase the number of project job threads from the P6 administration application. If the server resources are available, it is recommended to increase the number of project publication jobs so you can complete the total job load faster. For example, if the thread count is set to 2, then the system can only publish two projects at a time.

Steps to install:

Add new procedure:

    ----PROCEDURE-------------------
    create or replace procedure PX_ARBITER_ASAP
    as
      v_job_id number := 9999;
      v_sql    varchar2(4000);
      v_sql2   varchar2(4000);
      v_count  number;
    begin
      --build insert statement
      v_sql := 'INSERT INTO JOBSVC (job_id, parent_job_id, seq_num, audit_flag, job_type, job_type_key, job_name, user_id, status_code, recur_data, recur_type, submitted_date, LAST_ERROR_DESCR, AUDIT_FILE_PATH) VALUES ';
      v_sql2 := v_sql || '(' || v_job_id || ', null, 60, ''N'', ''JT_ProjectArbiter'', null, ''PXArbiterASAP'', 91, ''JS_Pending'', ''MinuteOfDay=1,DayOfWeek=0,WeekOrdinal=0,DaysInPeriod=60,PredecessorBool=0'', ''RT_WebASAP'', sysdate, null, null)';
      --check if any other arbiter jobs are enabled
      execute immediate 'select count(*) from jobsvc where job_type = ''JT_ProjectArbiter'' and RECUR_TYPE = ''RT_WebEnabled''' into v_count;
      --clean out row for ASAP usage
      execute immediate 'DELETE from jobsvc where job_id = ' || v_job_id;
      --if another arbiter job is enabled, shut it down
      if v_count > 0 then
        update jobsvc set RECUR_TYPE = 'RT_RecurDisabled'
         where job_type = 'JT_ProjectArbiter' and RECUR_TYPE = 'RT_WebEnabled';
      end if;
      --insert row
      execute immediate v_sql2;
      commit;
    end;
    /

    ----HOW TO CALL PROCEDURE-------------------
    execute PX_ARBITER_ASAP;

    --This procedure will:
    --  turn off the recurring arbiter job
    --  insert a new ASAP arbiter job, and remove an old one if it exists (hardcoding the name and job_id for tracking)

Add new DBMS job:

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB (
        job_name        => 'pxArbiter2',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'ADM172.PX_ARBITER_ASAP',
        start_date      => '25-APR-17 11.35.00 AM US/Pacific',
        repeat_interval => 'FREQ=DAILY;INTERVAL=1', /* every day */
        end_date        => '30-APR-17 11.35.00 AM US/Pacific',
        enabled         => TRUE,
        comments        => 'ExtendedSchema Update');
    END;
    /

    --view if the job has been run
    select * from dba_scheduler_jobs;


Unifier Data Elements supported in Primavera Analytics

We have been receiving a lot of questions recently about the types and amount of data elements that are available to be Extracted, Transformed, and Loaded (ETL) from Oracle Primavera Unifier into the Oracle Primavera Data Warehouse. I will try to clarify what data elements are supported in each version.

Oracle Primavera Unifier was first supported in Oracle Primavera Data Warehouse version 15.1. Primavera Analytics provides a predefined set of fields. Some fields require data mapping. The Analytics log displays predefined sets in which you can map data sources or data elements (with matching data definitions) of Unifier objects to predefined fields in Analytics. Data mapping enables data extraction, analyses, and dashboard display in OBI (launched from Unifier).
Business Processes: Select business processes and map their fields (upper and detail forms) to predefined fields.
Cash Flow: Select cash flow detail curves (WBS and Summary WBS) and map their data sources to predefined Analytics cash flow data sources (baseline, forecast, actuals, and user-defined curves).
Cost Sheet: Map cost sheet column data (single and logical data sources) to predefined cost-related Analytics fields.
Shells: Map shell attributes to Analytics project fields.
Vendors: Select a business process with vendor information and map its upper form fields to predefined Analytics fields.

Oracle Primavera Unifier 15.2 added the following enhancements: Unifier Analytics in 15.2 is further enhanced to support features such as custom attributes for business processes and shells, custom labels in the cost sheet, workflow-related data of business processes, and data from P6 summary sheets.
Cost Sheet: Custom labels for custom cost sheet columns are now supported. Cost sheet-related data is sent to Analytics based on the mappings set in the Cost Sheet under Analytics in Unifier. There are also UI changes in the Cost Sheet Setup window. The Data Mapping tab has two sections: the top section displays system-defined fields; the bottom section includes 10 user-defined columns. Select the data source for a cost sheet column and provide a label.
Business Processes: In previous versions of Unifier, data for predefined fields on Unifier business processes was sent to Analytics for reporting. Unifier users can now design their own business process attributes to be used as dimensions and facts for Analytics reporting. A new section called User Defined Attributes has been introduced in the Analytics - Business Processes Setup window. New fields can be added by clicking the Add button. Fields for Data Type, Source, Name, Data Source, and Label are displayed. The data type selected for a field governs whether it will be a dimension or a fact: if the data type is a string or a time-stamp, the field will be a dimension object in Analytics; if the data type is numeric, the field will be a fact object in Analytics. The number of attributes for each data type is limited by a predefined number. Analysis of workflows is now supported. Note that there is no UI change: if workflow business processes have been selected on the Business Processes tab of the setup window, their task-related information is sent to Analytics -- e.g. Task Due Date.
Oracle Primavera Unifier 16.1 added the following enhancements: Unifier 16.1 extends Primavera Analytics support to the Space Manager module, which is part of the Facilities Management and Real Estate Management (FM/RE) application that was released as part of 15.2.2. In this release, Primavera Analytics also supports company-level business processes, associations between business processes, and additional custom attributes in the existing Business Processes, Cash Flow, Cost Sheet, and Shells modules. Enhancements for the Business Processes module in Analytics include:
Support for company-level business processes: The Business Processes tab now displays both company- and project-level business processes. Prior to this release, only project-level business processes were available for analysis and reporting.
Support for Summary Payment Application (SPA) SOV type business processes: Cost allocation line item details (e.g. cost code, short description, etc.) can be passed to Analytics for reporting. Summary line fields must be mapped to cost allocation line fields. If there are fields that are common to both summary and cost allocation lines, the latter will be retained. SPA business process types were introduced in Unifier 15.2.
Additional custom attributes: Additional custom attributes (string, date, and numeric) can be set for dimensions and facts. The numbers of user-defined attributes now supported are: String: Main Form (60) and Line Items (40); Date: Main Form (40) and Line Items (20); Numeric: Main Form (40) and Line Items (20).
Association between parent and child business processes: There is no UI component for this feature; Primavera Analytics retrieves references that exist between business processes via BP pickers or BP data pickers, regardless of their location on the forms (upper or detail). The associated business processes must be added to the setup for OBI analyses. For example, add contracts and referencing change orders/invoices to report on them.
Additional Custom Attributes: In Unifier 16.1, the Cost Sheet setup module in Analytics has 10 additional columns under the User Defined Columns section in the Data Mapping tab, bringing the total number of user-defined columns to 20. The Cash Flow module has a new section, User Defined Curves, in the Data Mapping tab. There are now 10 user-defined curves in this section, which includes the previous five and an additional five curves.
Oracle Primavera Unifier 16.2 added the following enhancements: Starting with Unifier release 16.2, Primavera Analytics supports reporting and analyzing Generic Cost Sheet (CM0) data, related to generic cost codes. This is particularly useful for users of the Facilities Management and Real Estate Management (FM/RE) application, where shell types are typically based on CM0. A new Generic Cost Sheet module is available in the Analytics log. The UI displays Name, Data Source, and Label columns, where Name is the field used in OBI, and Data Source is a single or logical generic cost sheet data source. Label auto-populates from the selected data source and is editable. Map OBI custom fields to corresponding cost sheet columns that represent facts and dimensions. Up to 40 columns can be configured.
Oracle Primavera Unifier 17 added the following enhancements: Add multiple custom attributes for business processes for facts and dimensions. Configure the number of fields in the new Custom Fields tab in the Business Processes module.


Some Causes Why I Might Not See My Unifier Data in Analytics

There are three possible causes for this:
1- Security. Security is populated from the Unifier services, but access to Unifier data is also based on project-level access. Users must have access to the projects this data is included in to be able to see the data in Analytics. Make sure your Unifier user is marked as having Analytics access as well as access to the relevant projects. In the STAR schema, the w_project_security_s table will have 1 row for each project for each user who has access to it. This can be reviewed to see which projects the user has access to. To get a fresh set of security rows, this table can be truncated and repopulated on the next ETL; however, during that ETL processing no user will be able to access any of the STAR schema data, since the data warehouse operates off of row level security.
2- Unifier services. The Unifier services didn't run or the data isn't mapped. The movement of data from Unifier into Analytics flows from the Unifier source schema into staging tables in that same schema (SYS_STG_xxx), then into the data warehouse. If the services that populate this data are not run, the data will not show in Analytics. A good first step is to rerun the services to make sure this data is populated, then rerun the ETL process. You could have had a scenario where the ETL process ran before the services and the data didn't make it over yet.
3- OBI cache. This is generally the cause. OBI caching could be on and need to be cleared, and the results you are seeing could be a stale set of cached data.
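To check the first cause in the list above, you can look at the row-level security rows directly. This is a hedged sketch against the w_project_security_s table mentioned in the post; 'jsmith' is a placeholder Unifier/Analytics user name:

    -- one row per accessible project is expected for the user; zero rows means
    -- the user has no project access (or security has not been repopulated yet)
    SELECT COUNT(*) AS accessible_projects
      FROM w_project_security_s
     WHERE user_name = 'jsmith';  -- placeholder user name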


How to Apply Row Level Security to New Tables

The Primavera Data Warehouse uses row level security for data security. By controlling access at the database level with row level security, we can apply a more granular level of security. The permission to see certain data comes from the access a user has in P6 or Unifier. In P6, security is calculated by the extended schema global security service and applied at the project and resource levels. In Unifier, security is applied at the project level. By applying it at the project level, the security is calculated out per project, per user. This is then stored in security tables in the STAR schema, and security policies are applied. Applying the policies is the key to the row level enforcement. These policies are applied at a level above STARUSER. You can verify the user who applies these policies by looking in Setup.sh\bat or Staretl.sh\bat (depending on what version you are using) and searching for security_policies. It will give a username of %STARDBA_SR%; at the top of this file you can see who that user is. This is important to know because if you are creating or applying new policies you will need to connect as that user to apply them properly. Based on what type of table you are adding (dimension or fact), you can find an example in the security_policies.sql script. Let's try a fact example. Search for: --METADATA NAME:W_ACTIVITY_SPREAD_F_P_POLICY TYPE:ADHOC. Make a copy of this section, then replace w_activity_spread_f in all spots with your new table. This example will add a project-level security policy to your new table (a sketch of what such a policy looks like is shown below). Save this file, then connect and run it as %STARDBA_SR% (mentioned earlier) to apply the policy. Now only users with project object id access to values in this table will be able to see the data. Similarly, this can be done with dimensional data as well. The key is the project_object_id and datasource_id: these fields must exist in your new custom tables and be populated accordingly for this security enforcement to work. Because the policies are applied at the STARDBA_SR access level, they should remain in place without having to be reapplied with each ETL, unless they are dropped at some point. If you want them to be rerun automatically, you can duplicate the user_scripts step, change the connection string to this higher-level connection, and add a new folder for this script.
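For reference, the sketch below shows the general shape of a project-level row security policy using Oracle's standard DBMS_RLS package. The actual policy definitions ship in security_policies.sql, so treat this only as an illustration: the table name W_MY_CUSTOM_F and the policy function name are hypothetical, and the shipped script's policy function and options may differ.

    -- a minimal sketch, assuming a custom fact table W_MY_CUSTOM_F that has
    -- PROJECT_OBJECT_ID and DATASOURCE_ID columns (both named in the post above);
    -- the policy function name below is illustrative, not the shipped one
    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'STARUSER',
        object_name     => 'W_MY_CUSTOM_F',
        policy_name     => 'W_MY_CUSTOM_F_P_POLICY',
        function_schema => 'STARUSER',
        policy_function => 'PROJECT_SECURITY_PREDICATE',  -- hypothetical function
        statement_types => 'SELECT');
    END;
    /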


How to add more than 50 Codes or UDFs to Primavera Analytics

1 Background
Primavera Analytics has the ability to Extract, Transform, and Load (ETL) up to 240 Project, Activity and Resource codes, as well as User Defined Fields (UDFs). These codes and UDFs are configured during the installation process of the P6 Reporting Database or by running the Primavera Data Warehouse configuration utility. However, out of the box, the Primavera Analytics Repository Definition File (.rpd) is only configured to take advantage of 50 codes and UDFs. Additional steps are necessary to allow more than 50 codes and UDFs to be visible in Oracle Business Intelligence (OBI). The steps for configuring the .rpd file to use and see these additional codes and UDFs are documented below.

2 Launching and Logging into the Admin Tool
1. Launch the Admin Tool. Example: All Programs → Oracle Business Intelligence Edition Plus Client → Administrator.
2. Go to File → Open → "Online".
3. Log in using WebLogic administrator credentials. You will then see three panels from right to left: Physical, Business Model and Mapping, and Presentation.

3 Adding additional code and/or UDF columns beyond 50
Use this process to add one or multiple UDF and/or code mappings, after launching and logging into the Admin Tool as described in the previous section.

3.1 Add Column(s) to the Physical Layer
1. In the Physical window, expand "Oracle Primavera P6 Data Warehouse" → Catalog → dbo.
2. Expand the table where you want to support additional columns, for example W_UDF_ACTIVITY_D and W_UDF_ACTIVITY_HD (the Activity UDF Dimension and Activity UDF Dimension History tables).
For each table you are adding columns to:
1. Right-click the table and select Properties (ex. W_UDF_ACTIVITY_HD → Properties).
2. On the Columns tab, press the green plus button or press ALT-Insert.
3. Provide the name of the new physical column in the Name field (ex. UDF_COST_51).
4. Select the correct type from the Type combo box (ex. DOUBLE).
5. Check the Nullable check box.
6. Press OK.
7. Repeat steps 1-6 for additional columns and tables.

3.2 Drag Columns over to the Business Layer
1. Right-click the table name you added columns to and click Query Related Objects → Physical → Physical Table.
2. Select all table aliases in the list, press Mark, then Go To.
For each "marked" table alias in the Physical Layer (marked has a red check mark next to the icon):
1. Expand the table alias to reveal its columns.
2. Right-click the table alias name and click Query Related Objects → Logical → Logical Table.
3. Select the logical table in the list and press Go To.
4. Expand the logical table that was selected.
5. Drag the added columns (use Shift or CTRL click to select multiple columns) from the marked alias in the Physical Layer onto the name of the logical table in the Business Layer.
For each newly dragged column in the Business Layer:
1. Double-click the column (still in the Business Layer) and update the Name field to be like the other columns (ex. Activity UDF Date 51).
2. Click on the Aggregation tab. Depending on the type of the new column, the default aggregation rule may need to be modified. An example of this would be a user-defined cost field that uses "SUM" as the default aggregation rule.

3.3 Drag Columns to the Presentation Layer
For each new column in the Business Layer:
1. Right-click the table the new columns were added to in the Business Layer.
2. Click Query Related Objects → Presentation → Presentation Table.
3. Select all tables in the list.
4. Mark each table by selecting every table in the list and pressing Mark.
5. Press Go To.
For each "marked" Presentation Table in the Presentation Layer (marked has a red check mark next to the icon):
6. Drag the new columns from the Business Layer over to the Presentation Table.
7. Highlight the new column in the Presentation Layer and right-click Properties.
8. From the Admin menu, click File → Save. Check in the changes and run the consistency checker. The RPD file has now been updated with the new column.

4 Restart the OBIEE Services
1. Launch Oracle Business Intelligence Enterprise Manager (http://<hostname>:<port number>/em, where <hostname> is the server name or IP address and <port number> is the port number for the Oracle Business Intelligence deployment).
2. Log in to Oracle Business Intelligence Enterprise Manager with a user account that has Administrator privileges (e.g. weblogic).
3. After successfully logging in, on the left hand navigation tree, expand the Business Intelligence folder, then single left mouse click on coreapplication.
4. On the coreapplication page, select the Overview tab. In the System Shutdown & Startup portlet, click the Restart button to restart all of the Oracle Business Intelligence services.
5. Once the Oracle Business Intelligence services have successfully been restarted, log out of Oracle Business Intelligence Enterprise Manager.

5 Verify New Column is in Oracle Business Intelligence Analytics
1. Launch the Oracle Business Intelligence Analytics application.
2. Select the Analysis link on the left and then select the subject area which uses the new column.
3. Expand the folder that uses the new column.
4. Look for the new column and verify the column and the column value.


Upgrade Considerations When Using Multiple Data Sources

1- Number of data sources: If you are upgrading but would like to increase or decrease the number of data sources, you must do so when installing the latest version. For example, if you have a Primavera Data Warehouse setup with 4 data sources in 15.1 and you are upgrading to 15.2 but only choose 3 data sources during the install, you will see issues trying to upgrade. The reason is that you are trying to force 4 data sources' worth of data into 3. During the installation/configuration there is a box that allows you to enter the number of data sources; by default in 15.2 it was set to 3, so it must be increased if the data warehouse you are upgrading has more. Even if you don't intend to use those extra data sources from 15.1, they must initially come over or be removed from the 15.1 data warehouse first. It is always safer to set the number of data sources to a higher number than expected, to cover this scenario and to allow for growth.
2- upgradeSchema vs. upgradeStar: Generally upgradeSchema is the faster option. upgradeSchema usually takes the existing STAR schema and applies just the database-level changes (new tables, new columns, etc.) but does not back up or move any data. upgradeStar makes a backup of all historical data, then rebuilds the STAR schema, reinserts the data, and runs an ETL. upgradeStar will usually take longer than a normal ETL run because it has to back up and restore all this data. In the 15.2 release, because there was a major expansion of partitioning, upgradeSchema will have a similar runtime to upgradeStar because we need to back up all the data, create new partitioned tables, and then reinsert the data. Moving forward, and in previous releases, upgradeSchema is generally the faster option and can be run while still allowing the ETL to run at its regularly scheduled time.
3- Partitioning vs. non-partitioning: If you were using a partitioned schema, you cannot now go to a non-partitioned schema, as the data is already stored that way. You would need to install fresh and run the ETL processes again. With the non-partitioning option you would also not have the option for detailed history or slowly changing dimensions, so that data in the partitioned schema would not be used anyway.


Why is Oracle_Home Required? What Would Happen if it is Not Set?

If an ORACLE_HOME has not been defined you will run into errors with the ETL process, because sqlldr (SQL*Loader) is required by the ETL process. These errors may show either directly in the console or in the log if running the ETL from the \star installation directory. If running the ETL process from a schedule in the Primavera Analytics web configuration utility, you may see the job listed as Failed in the Status tab; check the log to get the specific details. If you receive errors about sqlldr, generally this means there is an issue with the ORACLE_HOME environment variable. In some environments those variables might be reset occasionally. To ensure these are set for the web configuration utility, it is recommended to define them in setDomainEnv.sh in such a way that the environment variables will be available to the p6rdb application for all user sessions. You could append the following to the file:

        export ORACLE_HOME="/xxx/xxx/xxxxx"
        export PATH="${PATH}:${ORACLE_HOME}/bin"
        export JAVA_HOME="/xxx/xxx/xxx/xxxx"

Ensuring during setup that these variables are defined and sustained will help ensure your ETL and ETL scheduling run smoothly.


Top Questions to Answer Before Setting Up Primavera Analytics

The questions below are aimed at larger installations but can also be applicable to single data source installations. The purpose of thinking about these areas before installing and implementing is to prepare your environment for expansion, to prevent capturing data you don't require, and to prevent creating an environment that is not suitable to your business needs and may need to be rebuilt.

#1 - How many data sources do I plan to have? Even if you don't know for sure but think you might have 2 to 3 (for example), plan for it. Better to overestimate than underestimate; overestimating by a few will leave those slots partitioned and available for future use. Partitioning is required if using multiple data sources.

#2 - Partitioning ranges. This can affect performance. It is not recommended to set a large partitioning range (for example, 6 months) if you are capturing a large amount of lower-level historical data (SCDs and Activity History). Partitioning is designed to help performance by querying only the data in the relevant partition rather than all of the data, where possible. Setting partitions equal to 1 month might be better for a specific environment, based on size and history-capturing frequency.

#3 - Activity Daily History. How many projects have this option set? This can heavily affect performance and storage. It is recommended that this is only on for a small percentage of projects and then turned off when the need for this level of detail has passed. This has cascading effects beyond history capture: it also turns on slowly changing dimensions for any data related to the project, and slowly changing dimensions will capture any change on objects that support SCDs. By default Activity History is not on. It is recommended to opt in a few projects at a time based on requirements.

#4 - What is the P6 Extended Schema data range? If P6 is your primary data source, what is the date range set on the project publication service? A large date range represents a larger set of spread data that would need to be pulled over. Generally this range is 1 year in the past and 2 years in the future. This means you will have daily spread rows for each day, for each activity and resource assignment spread bucket, for each activity in each project that is opted in. That can become a large number of rows quickly. If you have units or costs outside this range, that data is not lost and it will still come over to Analytics; you just won't have a daily bucket for it. The data outside of the range is lumped onto the beginning or end of the date range, so it is still represented in the totals.

#5 - What are my data requirements, how often do I need my data refreshed, and at what level of granularity? Decisions based on these questions will shape your entire Analytics environment and Primavera ecosystem. For example, if you have 10 data sources, do you need to report at Activity level and have those analyses up to date every day? Do you also require global reporting for all 10 data sources, but those reports are only run weekly? A scenario like this would lend itself to each data source having its own individual STAR with its own ETL, and then a master environment created to pull from all 10 data sources on a weekly interval for global reporting. The advantage is that each individual data source can then get an ETL run every day. If all 10 data sources were pumping into the same STAR and each ETL took 2 hours to run, it would take 20 total hours before ETL #1 could be run again. Having this global version (while still pulling out of the same data sources) allows individual sites to have their own ETL rules while still having a consolidated global data warehouse which is updated once a week for each ETL. For a rough sense of the spread-row volume discussed in #4, see the sketch below.
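As a minimal sketch (illustrative only, not production code), the following query estimates an upper bound on the daily assignment spread rows, assuming the default publication range of roughly 1 year back and 2 years forward (about 1,095 days) and that every resource assignment in the PMDB belongs to an opted-in project; run it against the P6 database with an account that can read TASKRSRC.

-- Rough upper bound on daily resource assignment spread rows.
-- Assumes every assignment is in an opted-in project and a publication
-- range of ~1095 days (1 year back, 2 years forward).
SELECT COUNT(*)        AS assignments,
       COUNT(*) * 1095 AS estimated_daily_spread_rows
FROM   taskrsrc;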


Refreshing RPD Translation Changes Without Restarting OBI Services

In Primavera Analytics, when making changes to the selected UDFs, Codes, and other dynamic fields that come across, generally the OBI services would need to be restarted to see these changes reflected. The reason is that these fields actually have underlying translation values, and those are what need to be updated. Translations are managed differently compared to making a change to an existing field in the RPD. The following steps cover how to disable the caching on the RPD initialization block for translations. This allows translation updates to be seen in OBIEE without requiring a full restart of the BI Server.

Steps for Updating RPD and Refreshing Translations
1. Open the RPD using the BI Admin Tool, in Online Mode.
2. From the Manage menu, select Variables.
3. Under Session, select Initialization Blocks, then double-click on the External Metadata Strings initialization block.
4. In the Variable Target section, click Edit Data Target.
5. Uncheck the Use Caching option, under Row-wise initialization, then click OK.
6. Save the RPD changes. When prompted to check in changes, click OK. When prompted to check global consistency, click No.
7. This example tests mapping a new Activity UDF. First log into OBIEE and create a new analysis using the Primavera - Activity subject area. Expand the Text UDFs - (Activity) folder and note the position of the first unmapped text type activity UDF, in this case Text Activity UDF 6.
8. Run the configuration utility (<ETL_HOME>/configStar.sh/bat), and on the UDF mapping screen add a 6th Activity UDF. Finish the configuration utility, then run the ETL.
9. Once the ETL has completed, log into OBIEE and again create an analysis using the Primavera - Activity subject area. From the refresh drop-down, select Reload Server Metadata.
10. Once that completes, expand the Text UDFs - (Activity) folder and you will see that the translation for the newly added Activity UDF is now displayed.


I Added a User in P6, How To Make Sure That User Gets into Analytics

A common question is: what is the data flow for adding a user and getting that user into Primavera Analytics? For this example we are going to follow the P6 user flow.
1- Log into the P6 application and add the new user. Give this user module access for P6 Analytics. Assign Project Access and a Responsible Manager/OBS to this user.
2- Run the Global Services Security service.
3- Run the StarETL process.
4- Make sure the user exists on the OBI side. If you are using LDAP or SSO you want your OBI server to be configured to the same. When logging into OBI for Primavera Analytics, you want the username to match exactly as it is in P6. Make sure there is a user in your OBI, WebLogic, LDAP, or selected security model with the same name as in P6. The names need to match exactly because there is a security package loaded as a predefined process in OBI for that selected data source. From here the user should match what is in P6 and see the same data as in P6.
The global security service is a key component here. The Primavera Analytics data is pulled directly from the extended schema, so you need to make sure this service has run. It is set on a defined schedule; it is recommended to set this schedule to run and finish before the ETL process runs. Keeping your global services scheduled so they finish before the ETL and capture all changes is key to keeping a fully updated Primavera Analytics environment. A quick sanity check is sketched below.
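As a hedged, illustrative sanity check, you can confirm the new user exists in the P6 database before the security service and ETL run. The USERS table and its USER_ID/USER_NAME columns are standard P6 PMDB objects, but verify the column names in your release; the user name below is a placeholder.

-- Confirm the newly added P6 user exists in the PMDB (run as admuser or another
-- account with read access). The user name must match the OBI login exactly.
SELECT user_id, user_name
FROM   users
WHERE  user_name = 'jsmith';   -- 'jsmith' is a placeholder user name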


Can I Include P6 and Unifier Data in a Single STAR Schema?

In Primavera Data Warehouse 15.1, Unifier was added as an optional data source. There are certain subject areas in Primavera Analytics that are geared towards Unifier data (for example, Cash Flow) and certain subject areas geared towards P6 (for example, Burn Down). If you are running an ETL with only P6 or only Unifier, the subject areas that are specific to the other product may be empty; this is expected. There are some dimensions that are common, such as Project. When we use common dimensions, adding a field from that dimension (Project) in one of these other subject areas (Primavera - Activity) will return data, but once a fact is added to the analysis the data will be filtered out based on the joins in the RPD. If you have an existing Primavera Data Warehouse installation with P6 as your data source and you would like to add a Unifier source, the steps are similar to adding an additional data source. Besides some different options in the configuration utility, there is really no difference in the ETL or the storage of the data. By keeping each installation to its own data source you could have any number of combinations of installation types or orders of installation (for example, DS1 = P6, DS2 = Unifier, DS3 = P6, etc.). When P6 and Unifier share common data (Project Id), the data can be combined from different subject areas, or aggregated or grouped together. This gives you the ability to combine P6 and Unifier data in certain analyses.


P6 Global Security Service... More Info, How to Run a 'Full'

If you are using the Primavera Data Warehouse (previously Primavera Reporting Database) and pulling data from P6, then the Global and Project publication services are your lifeline for data. If those services are not running, your data in Primavera Data Warehouse/Analytics will be out of date. If you encounter an issue where your Security service seems to be failing repeatedly, which can be caused by bad data (such as duplicate rows in the USERS table), it may be appropriate to force a 'Full'. By forcing a 'Full' update you are essentially telling the global services that security has never been run and to calculate everything again. The extended schema is all based on update dates: it compares the related tables against the entries in the SETTINGS table. To make it recalculate everything, you remove the related rows in SETTINGS. Be very careful here; removing all rows from SETTINGS would cause a global-wide Full, which may take many hours to run, use significant server resources, and could have an impact on P6. Delete only what is necessary. In this example we are going to do a 'Full' for security. Here are the steps:

1- Connect as ADMUSER and run the following:

delete from SETTINGS where NAMESPACE in (
  'PxService.ProfilePrivilege.PxProfilePrivilege.Task',
  'PxService.User.PxUser.Task',
  'PxService.User.adminusersecurity.Sql.Task',
  'PxService.User.baseline_security.Sql.Task',
  'PxService.User.users.Sql.Task',
  'PxService.User.usersecurity.Sql.Task',
  'PxService.UserObs.userobs.Sql.Task');
commit;

2- Run the global service for Security. Be prepared: this will take much longer than the usual daily run as it will be recalculating all security. After the service has completed, run your ETL process. This way you will have all the latest data on both the extended schema and data warehouse/Analytics side. A query to review the security-related SETTINGS rows before deleting them is sketched below.
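Before deleting anything, it can be worth reviewing which security-related rows currently exist. A minimal sketch, using only the SETTINGS columns referenced in these posts (NAMESPACE, SETTING_NAME, SETTING_VALUE):

-- Review the security-related SETTINGS rows before removing them (run as ADMUSER).
SELECT namespace, setting_name, setting_value
FROM   settings
WHERE  namespace LIKE 'PxService.%'
ORDER  BY namespace;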


Parallel Loading

The Primavera Data Warehouse uses parallel loading of tables in several steps. If you look in the \scripts folder you'll notice in the dim_load folder there are several scripts. Those load data into different dimension tables and can run in parallel as long as they are not blocked (the number of threads can be turned down in the properties file, which would stop parallel processing, and a priority can be set on the order of the scripts). This also depends on the database and how many processes the DB instance allows. If this instance is used for more than just the Primavera Data Warehouse, these processes could be used up. If you begin to see errors like ORA-12516: TNS:listener could not find available handler with matching protocol stack showing up for multiple scripts in the same step, the likely cause is the processes limit. Have your DBA check the number of processes (show parameter processes;). If this is around 100-200 you could run into an issue (depending on what else the instance is being used for). The DBA can then increase the number of processes, with 600 as a starting point (alter system set processes=600 scope=spfile;), then restart the instance. After this increase the ETL process should have enough processes and complete. A quick way to see how close you are to the limit is sketched below.
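As an illustrative check (this uses a standard Oracle dictionary view, nothing specific to Primavera), a DBA can compare current and peak usage against the configured limits:

-- Compare current/peak process and session usage against the configured limits
-- (requires access to V$RESOURCE_LIMIT, e.g. as a DBA user).
SELECT resource_name,
       current_utilization,
       max_utilization,
       limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');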


Spreads in Primavera Analytics

We get a lot of questions about spreads and how they are handled in Primavera Analytics. Spreads are really the core of P6 resource assignment and planning. What are spreads? Simply put, spreads are the distribution of units and costs over the life of an activity or resource assignment. This includes both the early dates (forward scheduling pass) and the late dates (backward scheduling pass). Spreads can include curves and calendars, but we will leave those for another time. Consider the following example in P6 EPPM:

Activity ID: MN1000 --- Assigned Resource: Automation System Engineer
Assignment Remaining Early Start Date: 01-MAY-2015 --- Early Finish: 11-MAY-2015
Assignment Remaining Late Start Date: 27-JUL-2015 --- Late Finish: 31-JUL-2015
Remaining Units: 6.78d

Looking at the same activity, MN1000, in Primavera Analytics, we see that the Remaining Units are being "spread", or distributed, over the Early and Late Dates accordingly. We use Remaining Units for spreads over Early Dates (01-MAY-2015 to 11-MAY-2015) and Remaining Late Units for spreads over Late Dates (23-JUL-2015 to 31-JUL-2015). For Remaining Labor Units, we have 8 hours per day distributed between May 1, 2015 and May 8, 2015. May 2, May 3, May 9 and May 10, 2015 are not shown because they are non-work weekend days, and the remaining 6.20 hours land on May 11, 2015 (6.78 days at 8 hours per day is roughly 54.2 hours: six full 8-hour days plus the remaining 6.20 hours). For Remaining Late Labor Units, we have 8 hours per day distributed between July 31, 2015 and July 23, 2015 (backwards, since this is the backward pass of the schedule). July 25 and July 26 are not shown because they are non-work weekend days, and the remaining 6.20 hours appear on July 23, 2015.


Having Trouble Publishing Projects?

Because the P6 Extended Schema is the lifeline of data for Primavera Analytics, it is an important topic to cover. There are two white papers which go into depth about the feature, how to set it up, how it works, and some advanced configurations and techniques:
http://docs.oracle.com/cd/E20686_01/English/Technical_Documentation/Reporting_Database/P6%20Extended%20Schema%20White%20Paper.pdf
http://www.oracle.com/webfolder/technetwork/tutorials/primavera/OnlineLearning/WhitePapers/P6_Extended_Schema_Advanced_Techniques.pdf

The key to having projects publish is a job service called the Arbiter. If you query the JOBSVC table as admuser you will see this job:

select * from jobsvc where job_type = 'JT_ProjectArbiter';

When this job runs it looks through your projects to see which ones have changes, which have crossed the time threshold, and which have been marked as 'Publish Now', and adds these projects to the queue. Without this service running, no projects would be published. In a scenario where your database has crashed, or other environmental issues have caused this job to be stuck in a running or bad state, you will want to get it running again as quickly as possible. One approach is to put a monitor on this row and trigger a database alert if it is stuck in a running state for an extended period of time or does not run on the designated schedule. If the Arbiter is running fine but some projects are not getting processed, there are a few possible reasons:

1- There are no changes on the project, so there is no need to process it. As discussed in an earlier blog, not all changes on the project count as a change; changing a code or UDF assignment wouldn't count.

2- There is a setting called 'Publish Idle Projects'. This really only comes into play when you first turn on publication. At that time it will go through all projects with Enable Publication checked and process them; when it has completed, this is turned off. It is turned off by setting a value, ProjectNonDemandStopKey, in the SETTINGS table to zero. If this setting is zero, then new projects will not be published until they have changes, have been scheduled, or have been opted in by choosing 'Publish Now'. You can turn this back on from the database by running the following as admuser:

update SETTINGS set setting_value = (select max(proj_id) from project) where SETTING_NAME = 'ProjectNonDemandStopKey';

This will force the Arbiter to go through and add any unpublished projects into the queue. Once it has completed those projects it will set itself back to zero and turn off.

3- A rare scenario where none of the above conditions have been met and you just want to force in all unpublished projects. You could run the following:

update project set px_next_date = sysdate where proj_id in (select proj_id from project where PX_ENABLE_PUBLICATION_FLAG = 'Y' and PX_LAST_UPDATE_DATE is null);

We will have more blogs discussing the Project and Global publication services. This one is intended to give a little more background on the services. A query to list the unpublished projects before forcing them in is sketched below.
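To see how many projects are in that unpublished state before forcing them in, here is a minimal sketch. The PX_ columns are the ones referenced above; PROJ_SHORT_NAME is assumed to be the project ID column in your PMDB release, so verify it before relying on it.

-- List projects that have publication enabled but have never been published.
SELECT proj_id,
       proj_short_name,          -- assumed project ID column; verify in your schema
       px_last_update_date
FROM   project
WHERE  px_enable_publication_flag = 'Y'
AND    px_last_update_date IS NULL;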


When Configuring Multiple Data Sources...

When you are setting up additional data sources to be applied to your STAR schema (for example, multiple P6 databases being combined into a single STAR schema), there are a few things you need to make sure are configured.

1- New installation directory
This new data source should have its own installation directory with a new folder name (especially if it is on the same machine). Edit the following files and change the path for the new staretl directory:
    config.cmd or .sh
    \etl\common\runSubstitution.cmd or .sh
    \res\staretl.properties - change any paths that are incorrect. If there is a filter, define it for DS2 (star.project.filter.ds2=filtername).
    staretl.bat or .sh - add the data source number into this file so each time it executes it runs with the proper data source, for example:
        call staretl.bat "-s2" %1....
In the scripts folder, locate staretl.bat or .sh. Edit the file, search for 'primary_link', remove the step where you find this from staretl, then save. The reason for this is that the script this step runs drops and recreates database link 01 each time, and it also has parameters which could be populated with another source if executed.

2- Create new database link - DSLINK02
Create a new database link called DSLINK02 for the 2nd data source; the same pattern repeats for additional data sources. This new DSLINK should have the connection defined to the PXRPTUSER or filtered view for this new source. A sketch of the statement is shown below.

It is critical to make sure these steps are followed when creating a new data source and running the ETL processes. If steps are missed, the ETLs could run against the wrong source for the specific data source. These steps are also documented in the Installation and Configuration guide.
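A minimal sketch of creating that link using standard Oracle syntax; the user name, password, and TNS alias below are placeholders for your second P6 reporting user and database:

-- Create the database link for the second data source (run as the STAR schema owner).
-- pxrptuser, its password, and P6DB2 are placeholders for your environment.
CREATE DATABASE LINK dslink02
  CONNECT TO pxrptuser IDENTIFIED BY "password"
  USING 'P6DB2';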


I Created an ETL Schedule Through the Web Configuration, How Do I turn the Schedule Off?

In P6 Analytics 3.3, a web configuration utility that can be deployed in WebLogic was introduced. Through this utility you can define an ETL schedule or run the ETL on demand. This web utility will also create a queue to prevent ETL processes from colliding. If you have multiple data sources you can define a schedule for each, and the queue will be built so each ETL process waits for the previous or current ETL to finish before starting the next. When creating the schedule you can choose to run daily at a designated time, or define which days of the week to run and at what time each day. When you define your schedule it is actually written into your ETL installation directory (staretl\res) in a properties file (schedule.properties). Any changes you make to your schedule will be reflected there. You can only have one schedule per ETL installation.

What if you created a schedule to run every week on Monday, Wednesday, and Friday at 10pm and now you want to run daily? All you need to do is make a change on the Schedule page in the web application. By unchecking Weekly, selecting Daily, selecting your time, and clicking 'Schedule ETL', you will have overwritten the previous schedule. You can verify this by checking schedule.properties.

What if you want to turn off all scheduled ETLs? You can choose Weekly and leave all days unchecked. This will write to schedule.properties to turn off all options, daily and each day.


Capturing Historical Financial Periods for Analytics

In P6 Analytics you can use Financial Periods as an element of time, similar to Week, Month, Year, etc. Financial Periods are defined in P6. They can be a week, two weeks, a month - it is a user-defined period. The financial periods are available wherever the time dimension is available, but they can behave differently depending on the subject area being used. In this blog we will cover two main areas where this might be used and how the two subject areas work differently.

Primavera - Activity
This subject area is not historical. This data is as of the last ETL run. The data in this subject area comes from the Activity Spreads. If you have spread data that falls into these financial period buckets then it will be shown. If you do not, then certain financial periods will not be shown for this project. Remember, once a fact is added into the analysis, or you join to another dimension (hence joining to the fact table), data will be filtered out to honor the data in the fact tables. Analytics is fact driven: you must have a fact that fits the criteria for it to be displayed.

Primavera - Project History
This subject area is historical. As we discussed with the Primavera - Activity subject area, this is still fact driven. The change in behavior is that these facts are all about being historically captured; this is NOT using spread data, it is only using historical data. For example, you have a project that ranges from 1/1/2013 to 1/1/2016, and you have financial periods established for every two weeks during this time period. Today is Jan 9th, 2015. If you are just setting up P6 Analytics today and have never captured any history up to this point, when you run the ETL process you will only see data for the financial period of Jan 1 - Jan 15th, 2015. The historical facts are added into whichever bucket the day of the ETL run falls into. If you have history for Week, Month, Quarter, and a Financial Period (every two weeks), you would see in w_project_history_f a period start and period end date for each captured bucket. Because you only have facts for those time periods, you would only see the time periods where historical data was captured. If you only have history captured starting today (Jan 9th) you would only see data that falls into that bucket. As time goes on and you continue to capture history, you will see additional buckets of history stored and made available for viewing. A sketch of what those history rows look like in the fact table follows this post.

Things to remember
Here are a few items to remember to do in P6 to make sure your projects are set up to capture history for financial periods. In the P6 Extended Schema, if you are defining new financial period buckets, remember to run the Enterprise Data global service so this data is calculated, stored in the Extended Schema, and available to be pulled over during the ETL. Projects must be published to calculate the spread data and changes to the project so they can be pulled over into the STAR schema and made available in the Primavera - Activity and Primavera - Project History subject areas. History must be enabled for the projects where you want history recorded; this is off by default. When setting the history interval it will need to be set to Financial Periods.
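As a minimal sketch of inspecting those history buckets directly, assuming column names based on the description above (period_start_date, period_end_date, and a project_wid key); verify the exact names against your w_project_history_f definition before using this:

-- Inspect which history buckets have been captured for projects.
-- Column names below are assumptions based on the description above.
SELECT project_wid,
       period_start_date,
       period_end_date
FROM   w_project_history_f
ORDER  BY project_wid, period_start_date;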


How Could I Get a Project to Publish After a Certain Amount of Activity Code Changes?

Following up on the previous blog discussing why making a change to a code assignment isn't enough to trigger the publication of a project, here is an option if there is a business need around a certain P6 object that gets updated often and is considered as important as the tables we currently count changes for (PROJWBS (WBS rows), TASK (Activities), TASKRSRC (Resource Assignments), and TASKPRED (Relationships)), which determine whether a project needs to be refreshed. Say activity codes are changed very often on your projects; maybe this is a way of moving activities into different phases, or an important statusing practice. Because the Project publication infrastructure is based on changes, you would want to queue up a project based on changes as well. In the SQL example below we update px_next_date on the row for this project in the PROJECT table. When px_next_date is updated with a date later than the last run of the Arbiter (project publication) service, the project will be added to the queue.

update project
   set px_next_date = sysdate   -- set to the current timestamp to trigger publication for the projects returned below
 where proj_id in
       (select p.proj_id
          from (select proj_id, count(*) cnt
                  from taskactv
                 where update_date > (sysdate - 1)   -- updates in the last day; extend to 7 days or any other window as needed
                 group by proj_id) p
         where p.cnt > 50);                          -- when there are 50+ changes, publish this project

In this example we are saying: at the time of running this SQL, get a count (per project) of the activity code updates within the last day, and if that count is greater than 50, mark the project to be published. You could change the numbers to better fit your needs. You could do a similar update based on other project-related objects; this is just a high-level example (it is not meant to be production code).


Why Aren't My Code Changes Being Reflected in Analytics?

The scenario: you logged into P6 and made a series of activity code changes on your project. An hour or two later you ran the ETL process, expecting to see the code changes reflected when you viewed them in OBI or directly in your STAR schema. However they were not updated. Why? Let's walk through the process of how the data gets there and why codes are different from updating activities directly.

Step 1: Project Publication - what counts as a change.
You are dependent on the thresholds being crossed for updating projects in the Extended Schema (see the earlier blog for more info on thresholds). Basically, changes to a project are counted, and if you reach the change threshold (say 100 changes) or the time threshold (say 24 hours and 1 change), the project will be published. However, items like code assignments do not count as a change. The only updates that count as a change are updates to the following tables: PROJWBS (WBS rows), TASK (Activities), TASKRSRC (Resource Assignments), and TASKPRED (Relationships).

Step 2: Publishing the Project.
The design rationale is that these are the most commonly updated areas in a schedule. To throttle the system and avoid overloading the queue with unnecessary updates, these are the core areas that drive updates. If you are making major changes to codes it is good to be aware of this, and you can choose 'Publish Now', which will force a publication of your project and capture your code changes.

Step 3: Run the ETL.
Now that the Extended Schema (project publication) side is understood and worked out, you can run the ETL as normal. Remember, the Extended Schema is the heart of all the data that makes it into P6 Analytics. P6 Analytics and your STAR schema are only as up to date as your Extended Schema.


History Selections (Part 2)

In the last blog we discussed how the history selections you make can affect the size of your Star schema. Now let's talk about how to make the selections and what they mean. In P6 go to Projects, then the EPS view. Right-click and choose Project Preferences, and click on Analytics. Here you will be presented with History Level and History Interval. History Level is the most important setting here.

Choosing Activity:
- This will populate daily activity-level history for the Primavera - Activity History subject area. This is always daily; there is no setting for Activity history to use a granularity other than daily.
- When you choose Activity history, the History Interval drop-down is still available. This is for choosing the granularity for WBS and Project history, which are turned on by default when you choose Activity-level history.
- Slowly Changing Dimensions will be turned on for changes to this project.

Choosing WBS:
- If you choose WBS for the History Level, this will also turn on Project-level history and make both available in the Primavera - Project History subject area.
- The History Interval selection will now affect both WBS and Project-level history.
- Activity-level history will be off, as will Slowly Changing Dimensions.

Choosing Project:
- If you choose Project for the History Level, this will turn on only Project-level history and make it available in the Primavera - Project History subject area.
- The History Interval selection will now affect Project-level history.
- Activity and WBS-level history will be off, as will Slowly Changing Dimensions.


How Historical Selections Affect the Size of the Star Schema

For the P6 Reporting Database, the Planning and Sizing guide covers expectations for the size of your Star schema based on the size of your P6 PMDB and gives recommendations. History is discussed in this document as well, but one thing worth going a little deeper on is how the selection of history per project can affect your ETL process as well as the size of your Star schema.

If you select Activity history on a project, this will:
- Record any change on the Project, Activity, and any related dimension each time the ETL is run (e.g. a change in activity name). This enables Slowly Changing Dimensions for rows related to this project.
- Record Activity history in w_activity_history_f at a daily level. A row will be captured for each activity for each day the ETL process is run. Depending on the number of projects you have opted in, this can grow quite quickly; this is why partitioning is a requirement if you are using Activity-level history.
- Also opt you in for WBS and Project-level history at the selected granularity (Weekly, Monthly), recording a row for each WBS and Project each time the ETL process is run and bucketing the results into the chosen granularity.

Because choosing Activity history can have such a dramatic effect on the size and growth of your Star schema, it is suggested that it be enabled for only a small subset of your projects. Be aware of your partitioning selections, as this will help with performance. For estimating the number of Activity history rows, take the number of projects opted in, times the number of activities in each project, times the number of days in the duration of the project, times the number of ETL runs per day. This will give you a ballpark idea of the number of rows that will be added per project. A similar calculation can be done for WBS history, adjusting for the number of WBS rows and the selected granularity. In summary, be cautious about turning on Activity history on a project unless you absolutely need daily-level history per activity, and once the project has ended, turn Activity history off for that project. Keeping project controls accurate and up to date can help improve the ETL process and control the size of the Star schema. A sketch of this row estimate as a query follows.
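A minimal sketch of that estimate, assuming one ETL run per day across each project's planned duration and using the P6 TASK planned date columns (TARGET_START_DATE / TARGET_END_DATE) as a stand-in for that duration; treat the result as a rough upper bound, not an exact forecast.

-- Ballpark estimate of activity history rows per project, assuming one ETL run
-- per day across each project's planned duration.
SELECT t.proj_id,
       COUNT(t.task_id) AS activities,
       ROUND(MAX(t.target_end_date) - MIN(t.target_start_date)) AS planned_days,
       COUNT(t.task_id) * ROUND(MAX(t.target_end_date) - MIN(t.target_start_date)) AS est_history_rows
FROM   task t
GROUP  BY t.proj_id;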


P6 Analytics and P6 Reporting Database Security Enforcement

In P6 Analytics, row-level security is used to enforce data security. The security is calculated in P6 by the extended schema global security service. This means that P6 Analytics uses the same security defined in P6: if you gave a user access to a project in P6, they would have access to it in P6 Analytics. It is calculated, stored, and used across both products. The calculated security data for projects, resources, and costs is pulled over during the ETL process and stored in the STAR schema. When querying the STAR schema for data using OBI, the OBI user is validated against a user that should exist from P6, and once the user is identified the data is filtered based on the calculated data they have access to. This is handled using a package in the STAR schema, SECPAC. Security policies for row-level security are applied during the ETL process and can be found in the \scripts folder. This enables enforcement across the Facts and Dimensions. Using OBI and allowing the ETL process to handle the enforcement is the easiest way. However, if you do not use OBI but still want to use the STAR schema for data along with the security enforcement, you can still use just the P6 Reporting Database. You would set up your STAR schema and ETL process just as you would with the full P6 Analytics. Once the ETL is completed, you will need to execute SECPAC and pass a username to set the context. In a session with this defined, the user would only see the data they have access to as defined in P6. This is an additional step and needs to be pre-defined in any process that accesses the STAR data. Using OBI is a much easier option, and you can take advantage of all the benefits of OBI in creating analyses and dashboards, but it is still possible to set up STAR and use the calculated P6 security even without OBI in your environment.


UDFs: What is Available and Where?

This blog covers what is available for User Defined Fields as of P6 Analytics version 3.3.

Areas available: Project, WBS, Activity, Resource, and Resource Assignment. These 5 are the areas available for User Defined Fields. Each of them has its own subject area for the UDFs that are treated as facts (Cost and Number), for example Primavera - Project User Defined Fields.

Types of UDFs:

Date and Text - date and text UDFs are treated as dimensional values and are grouped into the specific area in each subject area containing that dimension. For example, in the Primavera - Activity subject area, under Project you will see UDF - Date and UDF - Text. These will be available in analyses in those subject areas. Because Date and Text are dimensional, they are also available historically as slowly changing dimension values when Activity history is activated for that project. Activity history should only be enabled for a small subset of projects because of the impact it has on database growth; please see the Planning and Sizing guide for more information.

Number and Cost - number and cost UDFs are treated as facts. They are the core of each UDF subject area. For example, Primavera - Project User Defined Fields will have number and cost as the fields available in the Facts section. The date and text UDFs will be available as dimensional values, the same as in other subject areas. Because these are facts, they are not part of the slowly changing dimensions and do not have historical records.


Change to Logging for New Web Configuration Utility

Since P6 Reporting Database 2.0 and Analytics 1.0, the ETL process log has been appended to during ETL runs; in 3.2, staretlprocess.log contains all ETL runs. We received customer feedback on the desire to have a log for each ETL run, and with the web configuration utility added in 3.3 this was implemented. In the new configuration tool you can schedule your ETL runs and view the results and logs from the web config. The logs were separated out so you can see the log for each individual run. This also gives you the option to clean up old logs from your system directory for a given time period. The existing (non-web-based) configuration utility will still do the logging based on appending each run to the same log, if that is how you choose to have your ETL process logged. However, you should choose one method, either the existing configuration utility or the web-based configuration utility, and not use both. The new web-based config can build up a queue of ETL jobs and avoids any kind of collision. This is a major advantage for those using multiple data sources: you can queue up multiple runs and it will handle the execution. The existing configuration utility does not have a queue mechanism for ETL runs; they are controlled by cron jobs or kicked off manually, so collisions there are possible, and you cannot have more than one ETL process running at a time. Also, because of the different logging methods, you should not be running ETLs from both methods, since the web configuration utility will not know about the logs for ETL runs kicked off from the STAR installation directory. Each log from the new queue method will have a timestamp and unique name on it for easy identification.


Deleted data showing in History?

In the Star ETL process there are 2 main types of updates: the current data (dimensions and facts) and the historical data (_HD, _HF, history tables) - see the earlier blog for more details: https://blogs.oracle.com/P6Analytics/entry/star_schema_table_descriptions. The current data is overwritten on every ETL update. This keeps your data as up to date as the last ETL: if you worked on a project, deleted a project, updated tasks, deleted tasks, or scheduled, that will all be reflected. In the historical tables, these changes, once captured, will always remain. For example, if you deleted a project it will still appear in your list of projects. Once the data is captured it is always in the database, so you truly have a historical perspective. In P6 Analytics, when viewing historical data you may want to filter out this data. You can do so by adding a code to these values, setting a value of 'Deleted', and then applying this filter. If you truly want to see just the most recent data, the Primavera - Activity subject area may be the subject area you want to report on; the historical subject areas are more for trending and seeing changes over time. In a future blog we will discuss how to add a delete_flag to allow for easier filtering in P6 Analytics. The main takeaway from this entry is that once the ETL is run, the data is captured as-is and is always available in the database if it needs to be reviewed. There was a high demand for being able to see all data, even if deleted, to help show the total historical picture of the life of a project.


P6 Links appearing as text instead of HTML hyperlinks

With the latest version of the P6 Analytics catalog, the P6 Link fields (those fields that are used to launch P6 Action Links directly from an Oracle Business Intelligence analysis) appear as text fields as opposed to normal hyperlinks. This is because the default data type for these fields is text instead of HTML. There are a few options to correct this behavior and have the P6 Link fields appear as hyperlinks in analyses. The first option is to change the P6 Link field on an analysis-by-analysis basis. This is very time consuming and not very practical, especially if these fields are used in a lot of analyses. The second option (recommended) is to change the data type to HTML and save this as the system-wide default for the field. This option is also a bit tedious because you have to do it for most of the subject areas, but once it's done there is nothing else to do. Follow these steps to change the data type for these fields:
1. Create a new analysis that contains ALL of the P6 Links for a specific Subject Area.
2. For each link, choose Column Properties.
3. Select the Data Format tab, and check the "Override Default Data Format" option. Change "Treat Text as" to HTML from the drop-down box.
4. After Step 3 is complete, at the bottom right of the Properties window there is a button to the left of the "Ok" button called "Save as Default". Click this button and choose "Save as the system-wide default for <whichever Link you are changing the properties for>".
5. Click "Ok" on the Properties window to save this setting as the default for this field system wide. Repeat steps 2-5 for the remaining fields in this Subject Area. Now any time you add this field to an analysis, it should appear as a hyperlink.
6. Repeat Steps 1-5 for each subject area.


Star Schema Table Descriptions

In the P6 Reporting Database STAR schema there are several acronyms on the tables. In this blog we will give a high-level description of some of these tables and what the acronyms represent. In each description there are key words you can use to search for more information about the specific data warehousing terminology.

Staging Tables (_DS, STG, _%S)
These tables are used during the ETL process and are the first stop for data being pulled from the P6 Extended Schema. They are overwritten during each ETL run.

Dimensions (_D)
These are the foundation dimension tables used with P6 Analytics. During the ETL process these are repopulated each time. For more information search on dimensions and facts; at a high level, a dimension contains information about a specific object. For example, Project (w_project_d) contains information about the project - project name, description, dates, etc.

Facts (_F)
These _F tables are the foundation fact tables used with P6 Analytics. These are also repopulated during each ETL run. For more information search on data warehouse facts. An example is w_activity_spread_f, a fact table with activity spread data containing fields like costs and units. An exception is the fact tables which contain History in the name: these are NOT repopulated each ETL run. They capture the historical records at the pre-defined intervals. History can be captured at the Project, WBS, or Activity level.

Historical Dimensions (_HD)
Historical dimensions are our Slowly Changing Dimension tables. SCDs capture changes over time. These are NOT repopulated during each ETL run. The historical values are saved and stored to represent when each row was valid. For example, in w_project_hd, say you changed the project name. There will be a row representing what the project name was originally, with the period start and finish range of when that was the project name. On the next ETL run a new row will be inserted with the new project name and a period start date, so you can determine as of which date a row was the most recent record.

Historical Facts (_HF)
Historical fact tables were added as part of the Slowly Changing Dimension implementation and store historical fact data in a similar way. Like the _HD tables, these rows are NOT repopulated during each ETL. A row represents each change captured for a fact row, for example a change to a cost on an activity. These differences are captured during ETL runs and have similar period start and end dates so you can determine when a value was the current record. _HF and _HD are not automatically populated for all project data; projects need to be opted in by setting the project to Activity-level history. Please plan accordingly before doing so and check the P6 Reporting Database Planning and Sizing guide, as this will have a long-term effect on the size of your Star schema.

Internal Usage (ETL_)
Tables with the ETL_ prefix are used for processing events, such as expanding out and populating dynamic codes, or capturing process-related information. Generally these are not used for reporting, as most serve an internal purpose. Examples are the ETL_PROCESSMASTER table, which captures a row and result for every ETL run, and ETL_PARAMETERS, which captures the settings supplied from the configuration setup. Other tables contain metadata and mappings. A small example of reading the SCD history is sketched below.
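As a minimal sketch of how the _HD tables are typically read; the effective-date column names below are assumptions based on the description above, and the object id is a placeholder, so check your w_project_hd definition for the exact names before relying on this:

-- Trace how a project's name changed over time using the SCD history.
-- effective_start_date / effective_end_date are assumed column names; verify
-- them against your w_project_hd table before using this.
SELECT project_name,
       effective_start_date,
       effective_end_date
FROM   w_project_hd
WHERE  project_object_id = 4527      -- placeholder object id
ORDER  BY effective_start_date;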


Indicator type UDFs Part 2

In a previous blog, we covered how to bring indicator type UDFs (User Defined Fields) into the STAR data warehouse by manually adding them as text type UDFs in the staretl.properties file and adding a script into the ETL process. In this post we'll cover how to use conditional formatting in Oracle Business Intelligence to display these indicator UDFs in analyses. First, create a new analysis in Oracle Business Intelligence; in this example we're using the Primavera - Activity subject area and a simple selection of two indicator type Project UDFs, Schedule Status and Overall Status. Since the indicator UDFs are being stored as text in the STAR, the initial output from Oracle Business Intelligence is not what we want; we'll need to apply some conditional formatting in order to display the indicators correctly. To accomplish this, we first need to add some placeholder columns that we can use to display the indicator images. You can just add a second occurrence of each indicator column and rename them so you know which column will contain the text and which will be used to display the actual indicator image. Next, go to the Schedule Status column, click on the menu drop-down and select Edit Formula. We don't want to display the text along with the indicator image, so select the Custom Headings option, then in place of the actual column name substitute a blank value (' ') in the Column Formula section. Now we can take care of the conditional formatting. For the Schedule Status column, click on the drop-down and select Column Properties. Then go to the Conditional Format tab, click Add Condition, and select the column you want to base the formatting on, in this case Schedule Status Text. Select the condition and click OK. From the Edit Format window click on Image. Select an image to be displayed based on the Schedule Status Text, then click OK. Enter any additional format changes; in this case we've selected Center horizontal and vertical alignment for the column, then click OK. Repeat the conditional formatting steps for the other values of the indicator, and then repeat the process for any other indicators in the analysis; in this example we also selected Overall Status. Once finished, the only remaining step is to hide the text columns from the analysis so only the indicators will be displayed. From the Results tab, right-click on the heading for the text columns and select Hide Column from the drop-down list. Once the text columns have been hidden you will be left only with the indicator columns and the images that were selected in the conditional formatting steps.


ETL Scheduling in a Multiple Data Source Environment

A multiple data source environment is one where the Star schema is populated by either multiple P6 instances or a single P6 instance that has been split into unique sources of data. You could split a single P6 instance based on specific criteria: maybe you want a group of projects under one EPS to be updated in Star only on Fridays, but another group of projects needs to be updated daily. You could split these into separate ETLs; see previous blogs for more information on filtering and multiple data sources. For this blog we are going to cover how the ETLs are executed. If you have two data sources, you need to run two separate ETL processes at different times. ETL #1 must be run first and complete before ETL #2 can be started. You do NOT want both ETL processes to execute at the same time. This can be accomplished with a batch process or another queueing mechanism that makes sure ETL #1 completes, then executes ETL #2. If the ETLs were run at the same time you could see data issues, because they share staging tables. While the data in the facts and dimensions is contained in rows that are unique to each data source, the staging tables are not. That staging data could be clobbered if both ETLs were running at the same time, and the clobbered data could then be pulled into the rows for an existing data source. To help control this problem, a new web configuration utility was created in P6 Reporting Database and P6 Analytics 3.3. There is now a queueing mechanism to prevent ETLs from running at the same time. You can set up separate tabs for each ETL and define the schedule for each. They will then queue up and be displayed on the home tab, where the running and queued ETLs are shown. They can also be stopped or removed from the queue. The main takeaway is that for multiple data source environments the ETLs are sequential, not parallel.


Handling Codes and UDFs in a Multiple Data Source Environment

In a single data source environment codes and UDFs are quite easy. Log into the configuration utility, select your codes and UDFs, and run your ETL process; as long as you don't exceed the default number of columns in your RPD, the codes and UDFs show up in OBI. In the configuration utility, the list of codes and UDFs presented to you is read directly from the P6 Extended Schema you supplied connection information for. When you make your selections, the list is written to your .properties file in the <rdb installation>\res directory. During the ETL process these selections are read from the .properties file and inserted into staging tables and eventually dimension tables in the STAR schema.

One thing to note about staging tables is that there is only one set. In a multiple data source environment the staging tables are shared, meaning they are overwritten during each ETL run. This is the main reason ETL processes cannot be run at the same time in a multiple data source environment: one ETL process has to finish before the next can start. Which leads us to how to handle codes and UDFs in a multiple data source environment. Say you make your code selections for data source 1 and run your ETL process, then make your code selections for data source 2 and run its ETL process. The selection you made for data source 2 is now your final selection; the values and order you chose for data source 1 have been overwritten.

Accommodating codes and UDFs from multiple data sources therefore requires a little coordination. Let's say you have 10 codes from your first data source and 10 codes from your second data source that you would like to use in P6 Analytics in OBI. Run the configuration utility for data source 1 and choose your 10 codes using slots 1-10. Go to your \res folder, make a copy of staretl.properties, and save it to a different location (for example ds1staretl.properties); we will come back to this data source 1 file later. Now log into the configuration utility for data source 2. Slots 1-10 don't really matter here; choose the codes you want to represent this data source in OBI in slots 11-20. Make a copy of this staretl.properties as well and save it in a different location (ds2staretl.properties). You are saving these backups so you can easily rebuild the codes list later if runSubstitution or the configuration utility is executed again.

Now you have two properties files that together contain all the codes you are going to use; they just need to be combined into one. In the data source 1 properties file, change the lines for codes 11-20 to match codes 11-20 from the data source 2 properties file: find code 11 in the data source 2 file, copy its three lines, and paste them where code 11 is in the data source 1 file (or, if there is no code 11, add it below the other codes). Repeat this for the rest of the codes from data source 2 until the data source 1 properties file contains all the codes you want from both data sources. Then copy the whole codes section from the data source 1 properties file and overwrite the codes section in the data source 2 file. You will do this for each type of code (project, activity, resource). This combined section is the 'master copy' of your codes. Do not run the configuration utility again on either data source after you have put this list together, or it will be overwritten.
If the 'master copy' is overwritten you will need to coordinate and create it again, so be careful when and where the configuration utility is run after this point. If you have the same codes in data source 1 and data source 2, the process is much easier: build the 'master copy' in just one of the data sources and then add it to the other properties files. If the data sources are truly unique and there are codes that don't exist in one of the other data sources, you must follow the steps above. Don't worry that when the ETL process runs for data source 1 it might not have data for the codes defined for data source 2; this simply adds them to the staging tables and mappings so OBI can use them for the data associated with data source 2. If you use such a code against data from data source 1 it will not return any results, just like any other project or activity that is not associated with a chosen code.
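To make the merge a little more concrete, here is a minimal sketch of what a combined codes section might look like. The key names below are hypothetical, shown only to illustrate the three-lines-per-slot layout; copy the actual lines from your own staretl.properties files rather than typing these.

# slot 1 came from the data source 1 file (hypothetical key names)
code.project.1.name=Project Location
code.project.1.id=101
code.project.1.type=Project Code
# slot 11 was copied in from the data source 2 file (hypothetical key names)
code.project.11.name=Funding Source
code.project.11.id=214
code.project.11.type=Project Code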


Updated Burn Down Whitepapers

An effort was made to explain in detail how information flows from P6 into P6 Analytics, specifically the Primavera - Burn Down Subject Area within Oracle Business Intelligence. The Primavera - Burn Down subject area captures data at a specific point in time (user defined), then uses that point in time capture to compare how a project should progress over time if everything goes "according to plan". P6 Analytics can show which activities are progressing according to plan and which ones are ahead of or behind schedule by comparison to the original point in time capture of the project. The Primavera - Burn Down Subject Area captures units and costs, as well as activity status counts (Not Started, In Progress, Completed, etc.). For details on the exact measures that P6 Analytics displays in Oracle Business Intelligence, and how those measures are calculated and stored, we have created three new whitepapers. The whitepapers detail how units and activity counts are calculated and stored on a daily basis; in addition, one details how the overall flow of data moves through the data capture process (ETL) and what the expected results will be in Oracle Business Intelligence within the Primavera - Burn Down Subject Area of P6 Analytics. To download the three new whitepapers, follow the links below:
Primavera - Burn Down Counts Details
Primavera - Burn Down Units Details
Primavera - Burn Down Data Flow Details


What's a WID? Where is it used?

In the Primavera PMDB tables, most primary key columns are named after the specific table plus _ID, for example task_id on TASK and proj_id on PROJECT; this identifies the unique row. In the early versions of our STAR schema we had object ids as well: W_PROJECT_D (Project Dimension) had project_object_id, W_ACTIVITY_D (Activity Dimension) had activity_object_id, and fact tables had a combination of all the associated object ids - project, task, etc. The object ids still exist, but with the introduction of multiple data sources in P6 Reporting Database 3.0 an object_id was no longer unique enough. Take the Project Dimension for example: project_object_id is unique within each data source, but once you add a second data source that uniqueness no longer holds across the warehouse. You could have a project_object_id of 4527 in data source 1 and in data source 2. To make sure this scenario never causes a collision, row_wids were added. The row_wid is calculated from the specific object id multiplied by 100 plus the data source id, so if this project came from data source 2 the row_wid would be 452702. There is the possibility of having 100 data sources, however it is unlikely and not recommended. The row_wid is then used to help join other tables: the row_wid from w_activity_d is referenced as activity_wid in other tables, and similarly for project_wid and eps_wid. The STAR schema is not the only place these wids are used. They are also used in the RPD for joining between dimensions and facts, so in a multiple data source environment the wids make sure the data returned in OBI is correct and unique to its data source. If you are doing manual manipulation of the RPD, be aware that the wids may be the fields you want to join on. Consider your environment, its needs, and its future - if you foresee an expansion to more than one data source, plan ahead.
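As a quick illustration, here is a sketch of a couple of ad hoc queries against the STAR schema. The table and column names are taken from the examples in this and other posts; treat it as an illustration of the idea rather than an exact recipe for your version.

-- the same project_object_id in two data sources produces two distinct row_wids
select project_object_id, datasource_id, row_wid
  from w_project_d
 where project_object_id = 4527;
-- data source 1 returns row_wid 452701, data source 2 returns 452702 (object id * 100 + data source id)

-- facts join back to their dimensions on the wid columns, not the object ids
select r.resource_name, sum(f.at_completion_units) as at_completion_units
  from w_resource_hd r
  join w_assignment_spread_hf f
    on f.resource_wid = r.row_wid
 where r.last_run_per_day_flag = 1
   and f.last_run_per_day_flag = 1
 group by r.resource_name;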


Before you upgrade...what to expect (P6 Reporting Database)

If you are upgrading to P6 Reporting Database 3.1 or higher, you'll want to allow adequate time for the upgrade process. In this blog we will focus on upgrading the Star schema. The first major item to be aware of is that an ETL process is initiated during the upgrade. If you have a job scheduled to run your ETL process at a given time, make sure you allow enough time for the upgrade to finish, or cancel the job until after the upgrade has completed. Having two ETLs running at the same time, or overlapping at any point, could be problematic for your STAR schema and the integrity of its data. Generally an upgrade will take longer than a traditional daily ETL. During the upgrade process a set of BCK (backup) tables is created. Backups are taken for all history (_history_f) and slowly changing dimension (_hd and _hf) tables. The schema is recreated and the data is then reinserted into the new tables. The ETL_CALCULATIONS table that contains the metadata calculations is also backed up. After the backups are completed, the ETL process is run to populate the remaining dimension and fact tables. Before running an upgrade, make sure to back up your STAR schema. You'll want to make sure this data, especially your history, is preserved in case of any kind of failure during the upgrade.
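Before starting, it can also be useful to capture a quick inventory of the history, slowly changing dimension, and backup tables so you can compare row counts afterwards. Here is a simple sketch using the Oracle data dictionary; num_rows comes from optimizer statistics, so gather stats first or run count(*) per table if you need exact figures.

-- list the tables the upgrade will back up and recreate, with their last-analyzed row counts
select table_name, num_rows
  from user_tables
 where table_name like '%\_HISTORY\_F' escape '\'
    or table_name like '%\_HD' escape '\'
    or table_name like '%\_HF' escape '\'
    or table_name like '%BCK%'
    or table_name = 'ETL_CALCULATIONS'
 order by table_name;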


Report on ETL Status of a Multiple Datasource Environment

Do you have a Multiple Datasource Reporting Database environment? In a scenario where you have multiple datasources and multiple ETL processes running, you may want to keep track of those ETL runs in an automated fashion. In an earlier blog we discussed how to set up your own ETL monitoring system; in this blog we will discuss how to use OBI to monitor your ETL runs. Currently we have an ETL_PROCESSMASTER table in the STAR schema that tracks every ETL run - the start time, the finish time, and whether it was successful. However, there is not a column for datasource id. You can get this information by calling a function in the STAR schema - get_source_db_id. If you would like an OBI report to show this information, here is a way to do it.

First let's talk about how this can be accomplished in OBI. The data in ETL_PROCESSMASTER is not exposed in any of the subject areas. The reason is that it is not for general user consumption: the ETL_ tables in STAR are for internal tracking and usage during the ETL process. While they can hold useful information, their main intention is not exposure in the subject areas of P6 Analytics, where the focus is on your schedule and resource data, not the internal workings of the ETL process. But a report can be created outside of the subject areas by using OBI's ability to query a data source directly. In this type of Analysis you do not use any of the subject areas; you use an OBI direct database request and formulate your own SQL. After the SQL has been created, this Analysis can be treated like any other, and an administrator could use it to oversee the ETL process for all the organization's datasources.

We need to add a new column to the ETL_PROCESSMASTER table. We will add a column called DSID. You can either run the SQL before running the ETL process:

ALTER TABLE etl_processmaster ADD (dsid number);
commit;

Or another way is to add a script into the USER_SCRIPTS folder (located in the \star\scripts directory). All contents of the USER_SCRIPTS folder will be executed after the ETL process has completed. Create a script called addcolumn.sql and add the following contents:

DECLARE
  v_new_col1 number := 0;
BEGIN
  Select count(*) into v_new_col1 from user_tab_cols where column_name = 'DSID' and table_name = 'ETL_PROCESSMASTER';
  if (v_new_col1 = 0) then
      execute immediate 'ALTER TABLE etl_processmaster ADD (DSID number)';
  end if;
  commit;
END;
/
EXIT

This script will check whether the DSID column exists; if it does not, it will be added, and if it does exist this step will be skipped. Now you have the new column added. Next we need to populate the datasource id for the last\most recent ETL run. In the USER_SCRIPTS folder add another new script called datasource.sql:

set serveroutput on size 100000
-- update datasourceid
declare
  v_src_id number;
BEGIN
  v_src_id := get_source_db_id;
  update ETL_PROCESSMASTER set dsid = v_src_id
   where processstartdate = (select max(processstartdate) from etl_processmaster);
  commit;
end;
/
Exit

This script will find the datasource id of this ETL run and update the new DSID column on the most recent ETL run row. Now you have all the data in place. Next you just need to write a simple query to extract it, such as:

select processid, processstartdate, processenddate, processtype, DSID from etl_processmaster;
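If you want the direct database request to be a little more useful as a monitoring report, the same table can also give a rough duration per run. Here is a sketch, assuming the start and end columns are date or timestamp types:

select processid,
       dsid,
       processtype,
       processstartdate,
       processenddate,
       round((cast(processenddate as date) - cast(processstartdate as date)) * 24 * 60, 1) as duration_minutes
  from etl_processmaster
 order by processstartdate desc;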


How to find resources without any assignments?

In P6 Analytics, if you create an Analysis and use a value from a Dimension - the Resource Dimension for example - you will see all resources regardless of whether they have any assignments. However, when you join to another dimension or a fact, you may no longer see that resource in your Analysis if it doesn't have at least one fact row. In the OBI RPD, joins are constructed around the Facts; Dimensions are joined through the Fact tables (see the RPD Physical Layer diagram). For this reason, if you are looking for resources without assignments you may need to use another method to view this data. The data is all there. One of the great things about the P6 Reporting Database is that the data is there, we just need to access it. This is true for many areas - slowly changing dimensions, history, etc. In OBI you can use a direct SQL query against the STAR schema to get your desired result (New > Create Direct Database Request) and choose:

Connection Pool: Oracle Primavera P6 Data Warehouse Connection Pool

Add the following SQL:

select r.resource_name, nvl(count(distinct ra.activity_object_id),0) num_assignments
from w_resource_d r
left outer join w_resource_assignment_d ra
on r.resource_object_id = ra.resource_object_id
where r.resource_name is not null
group by r.resource_name
order by 1;

This will give you an output of all resources and the number of assignments that occur in the STAR schema. You can add additional filtering or ordering to satisfy your requirements.
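If you only want the resources that have no assignments at all, the same query can be narrowed with a HAVING clause (same tables and columns as above):

select r.resource_name
  from w_resource_d r
  left outer join w_resource_assignment_d ra
    on r.resource_object_id = ra.resource_object_id
 where r.resource_name is not null
 group by r.resource_name
having count(ra.activity_object_id) = 0
 order by 1;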


Role Usage in P6 Analytics

In P6 Analytics there are some excellent reporting abilities for Roles I would like to focus on. The ability in P6 Analytics to slice and dice data by different areas allows you to combine information related to Roles and view it in an Analysis. For example, you are staffing multiple projects and you do not know the specific resource that will work on a task, so a Role is assigned at that time. Later on, as you staff those Role assignments, you will need to see across your enterprise how your Role assignments look and which ones still require staffing. Here is a screenshot of a very simple analysis which can show this. With P6 Analytics you can also add in other areas to help further slice your Analysis, such as Project. If you are interested in seeing a static Resource Limit alongside the Resource assigned to the Role, you can accomplish this by adding a new join into your RPD. In the Physical Layer of the RPD you add a join between w_resource_role_d and w_resource_limit_f. This will allow you to join the Primavera - Resource Assignment and Primavera - Resource Utilization subject areas and add Resource Limit into this Analysis. This is just a static Resource Limit, not related to Role or a Role limit. Here is a screenshot of the join in the RPD. After making the change in the RPD, the OBI services must be restarted. There is more that can be accomplished with the Resource Assignment subject area around Role and Resource usage and staffing. By using this subject area you can see enterprise-wide role and resource usage, or filter down by assignments.


EPS changes in Analytics (and P6 Extended Schema)

I moved this project to a new EPS, but it still shows an older EPS name. Why? The main culprit of most data mismatches is that the ETL process was run before the P6 Extended Schema services were able to update the data. But don't worry, it is just temporary: the services will continue to run and the data will be updated. One of the keys is understanding the data flow and which service controls which data. If you are working in P6, updating your project and activities, the project services will pick up these changes; the services run in the background, so you wouldn't need to do anything manually and the environment updates itself based on the thresholds you defined. Some items, like EPS, are controlled by a different service. There are four global services - Enterprise Data, Resource Management, Security, and Enterprise Summaries. Enterprise Data consists of dictionary level data like Codes, Calendars, and Portfolios. Resource Management, as expected, handles Resources, Roles, and other related dictionary level resource data. Security handles new, removed, or updated users and access. Enterprise Summaries handles summary data along with updating EPS. For the example above, if someone moved projects around between EPS levels and wanted to view the data immediately, they would want the Enterprise Summaries service to run before they ran the ETL process. Planning out when the global services run and when the ETL process runs is a good idea. Generally the global services are set to run daily, same as the ETL process. You can schedule your global services to run at midnight and your ETL process at 1am, and in that case all your data will be up to date.


Analytics Date Range is Out of Date

If you are working in the P6 Reporting Database or P6 Analytics and you notice that either your spread range or your resource utilization and limit ranges haven't adjusted, more than likely it has to do with your P6 Extended Schema. First check in your STAR schema:

select min(day_dt), max(day_dt) from w_day_d;

In the P6 Extended schema an initial date range is set when the project services and global services are run the first time. Generally the range is not changed after that point. The date range creates a rolling window of time: you set your start date and then your finish range as plus a certain number of months or years in the future. An example is setting your start date to Jan-1-2012 with a rolling 3 year window (today's date + 3 years). Looking at the ReportDate table, you want to see whether this matches your expected date range:

select min(daydate), max(daydate) from reportdate;

If all global services and project services were run today, this table should be updated and the max should reflect the rolling window calculated from today's date. If for some reason it does not, make sure the Enterprise Data service has been run and that you are still publishing projects. If all seems functional, as a last resort you could truncate the reportdate table and then rerun the Enterprise Data service; this will force a recalculation of the date range for this table. After this is completed you can run the StarETL and you should see w_day_d updated with this date range. If for any reason you are still seeing an issue, then w_day_d can be truncated and repopulated on the next ETL run. Again, truncating these tables should not be necessary and is only for extreme cases. In a majority of cases, running the Enterprise Data service and the ETL will make sure all date ranges in the P6 Reporting Database and P6 Analytics are up to date.


Why Does Adding a UDF or Code Truncate the # of Resources in a List?

Go to the Primavera - Resource Assignment History subject area. Go under Resources, General, and add the fields Resource Id, Resource Name, and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for a resource if there have been any changes on it. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for all resources; current flag = 1 means this is the most up to date row for the resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension (query from the nqquery log):

select distinct 0 as c1,
     D1.c1 as c2,
     D1.c2 as c3,
     D1.c3 as c4
from
     (select distinct T10745.CURRENT_FLAG as c1,
               T10745.RESOURCE_ID as c2,
               T10745.RESOURCE_NAME as c3
          from
               W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
          where ( T10745.LAST_RUN_PER_DAY_FLAG = 1 )
     ) D1

If you add a resource code to the query, you are now forcing the OBI server to include data from W_RESOURCE_HD, W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because Resources and Resource Codes are in different dimensions, they must be joined through a common fact table. So any time you pull data from different dimensions, the query will ALWAYS pass through the fact table in that subject area. One rule is that if there is no fact value related to that dimensional data, nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name, and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts. Here is the query returned from the OBI server when querying Resource Id, Resource Name, Current Flag, and a Resource Code. You'll see there is an actual fact included (AT_COMPLETION_UNITS) even though it is never returned when viewing the data through the Analysis:

select distinct 0 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c1 as c6
from
     (select sum(T10754.AT_COMPLETION_UNITS) as c1,
               T10706.CODE_VALUE_02 as c2,
               T10745.CURRENT_FLAG as c3,
               T10745.RESOURCE_ID as c4,
               T10745.RESOURCE_NAME as c5
          from
               W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
               W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
               W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
          where ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID and T10706.LAST_RUN_PER_DAY_FLAG = 1 and T10745.ROW_WID = T10754.RESOURCE_WID and T10745.LAST_RUN_PER_DAY_FLAG = 1 and T10754.LAST_RUN_PER_DAY_FLAG = 1 )
          group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
     ) D1
order by c4, c5, c3, c2

When you are querying in any subject area and you cross different dimensions, especially Type II slowly changing dimensions, if the result set appears to be short, the first place to look is whether that object has associated facts.


Usage of Historical UDFs and Current Flags When Creating an Analysis

In P6 Reporting Database 3.1 and higher, type 2 slowly changing dimensions were added. These changes are captured in tables with the suffix _HD. In each of these tables there is a field called current_flag. With type 2 SCDs the current flag lets you determine which is the most recent row, versus the other rows which are the historical rows. For example, take a project: when the project is created it has a row. The next day you change something on the project, maybe the project name, and then run the ETL process. Now you'll have two rows. The most recent row will have a current flag of '1', the original row will have a current flag of '0'. Having a current flag of '0' is a quick way to determine a record is historical. In the STAR schema it is easy to evaluate these historical versions.

In P6 Analytics 3.1 you can see the current flag on some historical records, but not in every analysis. Take the Primavera - Activity History subject area: if you add Project Id, Project Name, and Current Flag from under the Project \ General section, you will see the current flag behave as described above. If there was a change in the project name you will see two rows, and the current flag will have a '0' or '1' to represent the historical record and the current record. This is because the current flag that has been exposed is tied directly to the Project dimension (w_project_hd). If you use the current flag from under the General area of any section, it will be the current flag representing that section.

Now let's look at how this behaves when combining different type 2 slowly changing dimensions. Stay in the Primavera - Activity History subject area, and for now remove the current flag and add a Project Text UDF to the Analysis containing Project Id and Project Name. If there are multiple changes to the text UDF (for example, Primavera UDF1), there will be multiple rows showing the history of what this text UDF contained. UDFs are contained in their own type 2 slowly changing dimension tables (w_udf_project_hd). When you add the current flag back from the Project \ General section you will see a slight change in the Analysis, and you may see some of the UDF historical records with a 1 next to them. This is because the current flag actually represents the project rows, not the UDF rows. If you moved the Current Flag next to Project Name it would fit more appropriately in the Analysis. In the example of UDFs, you will see the projects listed, and if there is a UDF assignment you will see the value; if there is no assignment for that project you will see an empty column. If you never had an assignment and then added a UDF, you will see two rows: one for the previously empty row on that project and another for the new row containing the UDF assignment. This allows you to see all the changes to your project over its life.

The Primavera - Activity subject area is a good place to work if you are more interested in always seeing the current values. The Primavera - Activity History subject area gives you a look at the historical versions of a great deal of information. Using the current flag to determine the most current version works great when the current flag comes from the same dimensional area you are reporting on. When combining historical dimensions you just need to be aware of where the current flag is coming from and what it represents; the current flag isn't always available for all historical dimensions.
The history and data that exist in P6 Reporting Database and P6 Analytics 3.1 are extremely powerful, helping to give a total picture of what has happened with your projects and associated data. How have these values changed over time? How has the project changed over time? You can get this information from the STAR schema or very easily from the P6 Analytics dashboards and Analyses.
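For anyone who prefers to see this directly in the STAR schema, here is a rough sketch of the idea: each _HD table carries its own current_flag, so putting them side by side makes it obvious which flag belongs to which dimension. The join column below is an assumption on my part; verify the actual keys in your version of the schema.

-- project rows and project UDF rows each carry their own current_flag
select p.project_id,
       p.project_name,
       p.current_flag as project_current_flag,
       u.current_flag as udf_current_flag
  from w_project_hd p
  left outer join w_udf_project_hd u
    on u.project_object_id = p.project_object_id   -- assumed join key
 order by p.project_id;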


Indicator UDFs

Indicator UDFs are not supported in P6 Analytics. The main reason is that there isn't currently a way to move those indicator images over into OBI; the images used for indicators in P6 don't exist in OBI. There is a workaround, however, that lets you display an image in OBI based on the UDF indicator value from P6. There are two main pieces to this: first, the STAR data warehouse side; second, how to have OBI display an indicator based on a text value. In this blog posting we'll look at the data warehouse side; in a later blog we'll look at the OBI side.

Let's start back at P6. In P6 you can add a Project UDF with a type of indicator. Add this indicator as a column and then, for a specific project, assign a value - for this example let's say the Blue Star indicator is assigned. If you query the Extended Schema UDFVALUE view for the projectobjectid of this project:

select * from udfvalue where projectobjectid = 4518;

you will get a result set back that shows UDFTEXT as BLUE, which is the text for the Blue Star icon. It will also give the subject area and the type of UDF - FT_STATICTYPE. In the P6 Reporting Database these UDFs are not available in the configuration utility because they are not supported, but UDFs can be added directly to the staretl.properties file in the \res folder. This file is used for populating all dynamic code and UDF values on the OBI side. In this example, this is the entry I added into my 6th position:

udf.proj.text.6.name=Cost Status
udf.proj.text.6.type=Text

After you have added this to the .properties file you will want to execute runSubstitution.cmd or .sh in the \etl\common folder. Running this will propagate the new UDF row you just added through the .properties file into the other necessary files.

The next step is to get the tables to populate this value. We are going to treat it like a text UDF. The only goal from the data warehouse side is to get it acting like a text UDF, so a conditional statement can be added in OBI along the lines of: if text = 'Blue Star' then display a certain icon. Go to your \scripts\udf_load folder and add a new script called change_indicators.sql with the following:

BEGIN
update w_udfvalue_ds set DATATYPE = 'FT_TEXT' where DATATYPE = 'FT_STATICTYPE' and SUBJECTAREA = 'PROJECT';
commit;
end;
/
exit

Save this file. (If all works out for you with your UDFs as indicators, go back and add this same file as a template in \etl\oracle\templates\udf_load and save the file as .tsql. This will save you from having to edit this script each time the config is run, because running the config overwrites files in the \scripts directory.) Now run the ETL process and you will see this indicator UDF appear in the w_udf_project_d table as if it were a text UDF. Check back for a future blog on how to base a conditional format statement in OBI off this value.


How to Generate Your Own RPD from P6 Analytics UDML

Starting in the P6 Analytics 3.1 release we began shipping the UDML that is used to generate the RPD that includes all the Primavera subject areas. If you have the UDML file you can generate your own RPD. This allows for some flexibility with versioning if there are any changes or releases of Oracle Business Intelligence that might conflict with our RPD. Also, with the UDML you can directly edit or update the UDML and generate a new RPD based on your changes. We have seen others take advantage of this to make it easier to add more than 20 codes and UDFs; through the UDML, copy and paste can make it a lot easier to add those additional codes and UDFs than using the GUI OBI Admin tool on the RPD.

To generate a new RPD you just need an OBI installation. On that machine, copy the UDML file (primavera_analytics_rpd.udml) from the P6_R31_Analytics\obi\rpd folder in the media pack. When generating the new RPD you will supply your own password. Open a terminal or cmd prompt, go to the directory <OBI HOME>\bifoundation\server\bin, and run the following command (shown for Windows; adjust the directories as needed for Linux installations):

nqudmlexec.exe -P prima123vera -I C:\temp\test\primavera_analytics_rpd.udml -O C:\temp\test\PrimaveraAnalytics.rpd

In this example:
-P is the password for the new RPD
-I is the path to the UDML file you copied
-O is the path where your new RPD will be created

If there are other options you would like to set, you can use the help in this utility to find the parameters. Now that the RPD has been generated, you can deploy it through Weblogic like you would the RPD that is shipped with P6 Analytics.


Security for Multiple Data Sources

The default method of security for the P6 Reporting Database is row level security. With this, security policies are applied at each logical level of security enforcement - project, resource, costs. Security is calculated for each user with Analytics module access. Security is written into STAR for each project the user has access to, along with whether or not they have cost access; a similar calculation is made based on resource access and written to a resource security table. Since the introduction of the ability to have multiple data sources in the STAR data warehouse, we need to enforce security based on these qualifiers as well as the data source id. If user Jeff has access to Project Philadelphia in DS1, that security will be written out and applied by the policies. If user Jeff also exists in DS2, and there is also a Project Philadelphia in DS2 but user Jeff does not have access, then he will not have access through P6 Analytics\P6 Reporting Database either. The project id and data source id would be different, and when filtering, user Jeff would not gain access to that project because security is already calculated out and the query applies only to that data source. This type of security enforcement allows for separation of data sources based on security, but also allows for the aggregate abilities of multiple data sources if the user does have access to projects or resources in both data sources. In the end, with access to both, you can show a unified view of the enterprise-wide data for all data sources.


Slowly Changing Dimensions (Type 2) and Historical Facts-Part 2

In the first part we discussed the types of slowly changing dimensions available in P6 Analytics and P6 Reporting Database. Now let's look at how you would view the data. If you connect to your Staruser schema and query w_project_hd, you are looking at your type 2 SCD project table. In this table you could have multiple rows per project if there were changes on that project and that project was opted in for slowly changing dimensions (see Part 1 of this blog for how to opt in). In this screenshot we are looking at a sample project - Baytown. You'll notice three rows for the same project id. The main columns we want to focus on are effective_start_date, effective_end_date, and current_flag. When a project is first inserted into the w_project_hd table it will have the current flag set to 'Y' to confirm this is the most recent value for this project row. The effective start date will be when it was inserted and the effective end date will be 01-Jan-3000. If you made a change to this project and ran the ETL, you would get a second row: the original row would now have an effective_end_date set to the effective_start_date of the new row and its current flag would be set to 'N', the new row would have the current flag value of 'Y', and so on as more rows are added. It is a basic concept to follow and something you will want to be aware of if directly querying any _HF or _HD tables in the STAR schema. Another thing to be aware of is that if you are using P6 Analytics and the Primavera - Activity History subject area, it is using the type 2 dimension tables along with a historical fact table. This will show all versions of these historical rows. You can add filtering on just the current flag if necessary, but the ability to see and compare all historical dimension data is available in this subject area. Slowly changing dimensions and historical facts provide a very powerful tool to analyze data change over time. The ability to see historical dimensional changes gives a total project picture, along with the existing and new history subject areas added in P6 Analytics 3.1 and P6 Reporting Database 3.1.
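To follow along against your own Staruser schema, a simple query like the sketch below lists the version history of a single project using the three columns discussed above (filter on whichever identifier you prefer):

select project_id,
       project_name,
       effective_start_date,
       effective_end_date,
       current_flag
  from w_project_hd
 where project_id = :project_id
 order by effective_start_date;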


Slowly Changing Dimensions (Type 2) and Historical Facts-Part 1

This is going to be a two part series. In part 1 we'll cover what a SCD (slowly changing dimension) is and how it is determined in the P6 Reporting Database. In part 2 we'll cover, from the database side, how to determine which is the most up to date row and how to look at your historical rows.

In P6 Reporting Database version 3.1 we added type 2 slowly changing dimensions. We always had type 1. Type 1 is an overwrite; it really doesn't store any history. For example, you are working on Project ABCD and the original budget on the project was $1 million. If this was changed to $2 million, the row would be removed and updated with the current value, so type 1 SCDs didn't really keep a historical value. Type 2 SCDs leave the old record and insert a new record. This way you can see what the value was in the past and what it is now.

One thing to be aware of when opting in for type 2 SCD dimensions is that you can generate a large number of rows quickly. The ETL process is meant to run once per day. Say on average you change about 15% of your data daily; new rows will be added to track those changes and close out old records. Certain changes can have cascading effects: a change to an EPS could cause changes to all lower levels - project, WBS, activities - and changes to the project could cause creation of new spread records. So be careful about which projects really need SCDs and historical fact data. You can opt a project in to track type 2 SCDs and historical facts by setting the History Interval to Activity for the project. Again, this should be a limited set that gets this level of granularity; by default projects are not set at this level.

By default, most changes on a dimension will cause the capture of a new row of data. The field by field list of what captures new rows can be viewed in the \star\res directory, in the mappings.tcsv file. This file has a list of each field mapping to a staging table. Let's take this line for example:

83,W_ACTIVITY_DS,ACTIVITY_NAME,ACTIVITY,name,43,Y

For the 8.3 schema mappings of the Activity Dimension, any change on the name field through the Pxrptuser Activity view will result in a new row in this dimension. The 'Y' flag at the end of this line is the on\off switch for this field. You have the ability to flip the switch and turn off fields so they do not cause new SCD rows: if I didn't want activity name changes to cause a new row to be created, I would set the flag on this row to 'N'.

You can also turn off all type 2 SCDs for everything. We have a 'kill switch' so you don't have to update every row in mappings.tcsv. Go to your \res directory and edit the properties file. Look for the line:

etl.scd.type2=

If it exists, set it to false; if it does not exist, insert the line and set it to false. Also check your ETL_PARAMETERS table and validate after the ETL process that this value is set to false. This will override the type 2 SCD feature and turn it off for all.


Validating Metadata Updates

In P6 Reporting Database and P6 Analytics version 3.1, a metadata feature was added to allow for altering calculations and a way to add new calculations. There is already an existing path for adding new scripts - user_scripts. User_scripts allow you to establish new steps in the ETL process, such as adding additional tables or fields and custom calculations on those new objects. The metadata feature allows you to change calculations on existing fields in the historical, burndown, or workplanning fact tables. See the documentation for more information:

http://docs.oracle.com/cd/E38975_01/English/Install_and_Config/Admin_PDF_Library/p6_analytics_and_star_database_installation_and_configuration_guide.pdf

Validating the metadata before running the ETL is key. If the changes you added fail, the ETL process will revert to the default calculation for all metadata calculations. For example, if you change the metadata calculation on two fields and one of the calculations causes a failure, both will revert to the default calculation column. Here are the steps to validate your metadata calculations. After making a change, run the StarETL for steps 17 through 20:

Linux: staretl.sh -from 17 -to 20
Windows: staretl.bat 17 20

Check the log output for any errors or warnings. A few more items to be aware of:
- Calculations are per data warehouse, not per data source
- Drop STARUSER and you will lose all custom calculations
- Metadata calculations are stored in the ETL_CALCULATIONS table
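Because a failed validation reverts all metadata calculations to their defaults, and dropping STARUSER loses them entirely, it can be worth snapshotting the table before you experiment. A minimal sketch (export the copy elsewhere if you plan to drop the schema):

-- snapshot the current custom calculations so they can be compared or restored after a validation run
create table etl_calculations_bck as select * from etl_calculations;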


Explanation of History Settings for P6 Analytics

This blog applies primarily to P6 Analytics\P6 Reporting Database 3.1. As of 3.1, history was introduced at a daily level for activities. Activity level history existed as of P6 Analytics 2.0\P6 Reporting Database 3.0, but only at an interval level of week or higher. The settings in P6 for history level and history interval haven't really changed too much, but their meaning has. These history settings are defined project by project, which gives flexibility to include lower levels of history for ONLY those projects that require it. These settings are defined in the Project Preferences section of P6. Choosing activity level history can cause your data warehouse to grow extensively based on the amount of changes and the size of projects. If you are setting projects to activity level history, you should first review the Planning and Sizing guide and make sure you have the required space for growth: http://www.oracle.com/us/products/applications/primavera/p6-reporting-database-wp-399110.pdf

If you choose history level = Activity (internally HL_Task), then this project now has daily history turned on, as well as:
- Type 2 slowly changing dimensions for all aspects of this project
- Activity spread data at the daily level and historical facts
- Resource assignment spread data at the daily level and historical facts

This can produce a large amount of data. If the project finishes, or gets to a point where you no longer need this level of granularity, it is recommended to turn down the level of history to either Project or WBS. If you have chosen a history level of Activity, the history interval setting now becomes relevant for the Project and WBS history: you can choose week, month, etc. for the interval of time to be captured for Project and WBS history. The history interval does not apply to the Activity history, because Activity history is always daily once chosen. If you choose Project or WBS level history, the history interval setting applies. History always rounds up: if you choose Activity, you are automatically opted in for Project and WBS history as well, and if you choose WBS history you get Project history as well.


Extended Schema Data - How to force specific projects at a designated time

As you know, the Extended Schema services (the Project service specifically) run on a designated time interval based on your Publish Projects settings. This could be set to run every 1 minute (the default), and at that time the 'arbiter' runs and determines which projects need to be queued up to be processed. The arbiter determines this by evaluating each project that has PX_ENABLE_PUBLICATION_FLAG set to 'Y' and checking whether the threshold of changes on the project or the threshold of time has been crossed. In your database you can see how many changes there are on a specific project by reviewing the PLPROJREF table in the Admuser schema. Another reason a project would be queued up is that you right clicked on the project and chose Publish Now; when you choose Publish Now, what actually occurs is a timestamp is written to the PX_NEXT_DATE field on that project's row in the PROJECT table.

In the application there is not a way to separate projects into different batches that follow different publish project settings. The design behind this was to let data continually update in the system whenever a project has an appropriate amount of changes, enough time has passed, or you need it now; let the system take care of itself and keep the data up to date in a near real time environment. On the P6 Analytics\P6 Reporting Database side, controls such as filters and additional data sources were added to let you control which projects get pulled over, and possibly set them up on different ETL runs.

A scenario that was recently proposed was this: I have a group of projects under a specific EPS. I want these projects to be published at midnight every night, even if they did not cross a threshold, but all other projects should obey the publish project settings. One way to accomplish this is through the back end:
1- Determine a list of these projects and get their project ids. You may even want to determine this list of project ids dynamically in case new projects are added under this EPS.
2- Put together an update statement that sets PX_NEXT_DATE = sysdate for these project rows (see the sketch below).
3- Have a batch or cron job run every night at midnight that executes steps 1 and 2.

By stamping PX_NEXT_DATE with sysdate you are basically saying publish now. The next time the arbiter runs, it will add those projects to the queue. Be careful, because there is no limit to the number of projects that can be added through this method; the queue could become full and may take some time to process all the projects if there are a lot of them. Using this approach gives you some control over forcing certain groups of projects to be published automatically at a predetermined time, regardless of threshold.
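Here is a rough sketch of what steps 1 and 2 could look like in SQL. The PROJWBS parent linkage used to find the projects under the EPS is an assumption on my part; confirm the column names against your PMDB version before scheduling anything.

-- flag every project under a given EPS node for publication on the next arbiter run
update project p
   set p.px_next_date = sysdate
 where p.px_enable_publication_flag = 'Y'
   and p.proj_id in (select w.proj_id
                       from projwbs w
                      where w.proj_node_flag = 'Y'      -- the project's root WBS node (assumed)
                        and w.parent_wbs_id = :eps_id); -- assumed: id of the EPS node the projects sit under
commit;

A cron entry or a DBMS_SCHEDULER job that runs this block at midnight would cover step 3.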


Adding custom data to Analytics - Part 2

As an add-on post to Jeff's previous blog regarding how to add additional fields from P6 EPPM to the STAR schema, it may be necessary to add the new fields to your RPD file so they appear within P6 Analytics in Oracle Business Intelligence. Adding new fields to the RPD file is a pretty straightforward process, assuming that the new fields already exist in the STAR schema. The process involves using the Oracle BI Administration tool to modify the OraclePrimaveraAnalytics.rpd file, manually adding the field to the Physical layer, then dragging it to the Business Layer and finally to the Presentation Layer.

Two recommendations before we get started. First, I would recommend that you make a copy of your existing RPD file prior to moving forward here. Even though the newest version of Oracle Business Intelligence handles this for you with automatic RPD iterations, I still think it's a good idea to keep a good "working" copy of the RPD in a safe place, just in case. Second, I would recommend doing this type of RPD manipulation in offline mode. Although it is quite possible that you could get through it all with the RPD "online", my experience has been, more times than not when performing RPD manipulations, that offline is the way to go.

Let's begin! A quick check in SQL Developer of my STAR schema:

select * from w_project_d;

Once I have validated that I successfully added the fields during my ETL run, the next step is to open the OraclePrimaveraAnalytics.rpd file in the Oracle BI Administration tool. In the Oracle BI Administration tool, the far right column is the Physical section. This is where the connection pool to the database exists, as well as the physical table objects. Expand the connection for your STAR schema all the way down until you reach the physical tables for STAR, and locate the W_PROJECT_D table. Right click on the W_PROJECT_D table name, and on the resulting menu choose New Object, then select Physical Column… In the resulting physical column window, we need to add/change the following 3 options (NOTE: these need to match identically the field that was added to the STAR schema):

Name: PROJECT_OWNER
Type: VARCHAR
Length: 255

Click OK. This adds the new physical field to the RPD. You should also notice that after clicking OK to add the new field, if you expand the logical representation of the physical table (Dim_W_PROJECT_D_Project), you should see the newly added PROJECT_OWNER field here as well. From here, it's an exercise in dragging and dropping the newly added field into the other layers of the RPD so the field appears where it's needed. Expand the Dim_W_PROJECT_D_Project logical table on the physical side of the RPD, find the PROJECT_OWNER field, highlight it, and drag it from this location to the Business Model Layer under the Dim - Project section. The newly added field should appear at the bottom of the column list. You can right click on the PROJECT_OWNER field and select Properties to see the details of this new field. On the General tab of the Field Properties window, you can also change the name of the field here to match how you want it to appear in the Presentation Layer; for example, you could remove the underscore and change the case so it appears more natural (see below).

The final step in the RPD Administration tool is to drag the field from the Business Layer onto the Presentation Layer in the Subject Area(s) where you want the new field available in Oracle Business Intelligence.
In this example, I am only going to add the new field to the Primavera - Activity Subject Area, but if I needed it in the other ones, I could drag it to those as well. Save the RPD, and deploy the updated RPD file in Oracle Business Intelligence Enterprise Manager. The new field should now be available in the Primavera - Activity Subject Area. Until next time...
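Once the RPD has been deployed, a quick sanity check back in SQL Developer (the same kind of spot check as at the start of this post) confirms the custom column actually has data to show:

select project_id, project_name, project_owner
  from w_project_d
 where project_owner is not null;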


Adding custom data to Analytics - Part 1 - adding to the ETL

Starting with the Analytics 2.0\RDB 3.0 releases, a new user scripts section was added which appends a final step to the ETL process to run any additional scripts you might have. These user scripts could include SQL to add dimensional data to your Star that currently doesn't exist. For example, Project Owner is a field that is related to Projects but is currently not exposed in the Extended schema or the Project Dimension in Star. If this is a field you need, the user scripts section is the way to accomplish it. In this blog we will walk through an example of how to add Project Owner into your Project dimension. Please be aware this is just an example, not a patch or supported version. Also, the user scripts section is just an opening to allow for expansion. Make sure all scripts are approved by your DBA and are tested thoroughly for performance and data implications. If you add scripts to the user scripts folder and the ETL process begins to slow down at the final step where these scripts are executed, remove the scripts and work on the SQL performance. Oracle does not support custom SQL that is not part of the product. Make sure users adding scripts to this section have the appropriate permissions; this step will connect as STARUSER unless otherwise changed, and the scripts directory should only be accessible to Administrators.

Project Owner:

1- You need to add new columns to the dimension table. Add a new script to your \scripts\user_scripts folder called create_columns.sql. Here is a sample of SQL that adds the columns and skips them if they already exist:

DECLARE
  v_new_col1 number := 0;
  v_new_col2 number := 0;
BEGIN
  Select count(*) into v_new_col1 from user_tab_cols where column_name = 'PROJECT_OWNER' and table_name = 'W_PROJECT_D';
  if (v_new_col1 = 0) then
      execute immediate 'alter table W_PROJECT_D add PROJECT_OWNER varchar2(255)';
  end if;
  Select count(*) into v_new_col2 from user_tab_cols where column_name = 'PROJECT_RSRC_OWNER_OBJECT_ID' and table_name = 'W_PROJECT_D';
  if (v_new_col2 = 0) then
      execute immediate 'alter table W_PROJECT_D add PROJECT_RSRC_OWNER_OBJECT_ID varchar2(255)';
  end if;
  commit;
END;
/
EXIT

In this example we need to do some transformation of the data. In the Extended schema, the Project view has a field "ownerresourceobjectid", which is the object id of the resource that owns the project. For some this may be enough, but you probably want the actual username of the resource that owns the project. Because of this we are going to create two columns in the dimension but only expose one to Analytics: the first column stores the project resource owner id, then we find the username based on that object id and update the other new column, project owner, with the username.

2- Now we need to populate these columns. Add a new script to your \scripts\user_scripts folder called project_owner.sql.
Here is a sample of SQL to populate these fields:

set serveroutput on size 100000
-- add project owner values
declare
  v_src_id number;
  v_link_name varchar2(25);
  vsql varchar2(200);
BEGIN
  v_src_id := get_source_db_id;
  v_link_name := get_link_name;
  vsql := 'update w_project_d set PROJECT_RSRC_OWNER_OBJECT_ID = (select ownerresourceobjectid from project@' || v_link_name || ' where objectid = w_project_d.project_object_id) WHERE datasource_id =' || v_src_id;
  execute immediate vsql;
  UPDATE w_project_d set PROJECT_OWNER = (select user_name from w_resource_d where resource_object_id = w_project_d.project_rsrc_owner_object_id) WHERE datasource_id = v_src_id;
  commit;
end;
/
Exit

The content of this SQL is just an example. The first update gets the ownerresourceobjectid from the Project view in the Extended schema through the database link. Next we match on the resource object id to find the username from a table already existing in STAR. Then we commit and exit the process.

3- We need to define the order the scripts are run in the user_scripts folder. In P6 Reporting Database you can define the order that scripts get executed, which is helpful in a case like this or if you see deadlock issues. Go to \res\priority\ and edit user_scripts.txt (if it does not exist, add it). Set the following priority:

# Order of user scripts
create_columns.sql
project_owner.sql

4- Now these scripts are part of the process. Run STARETL.bat or .sh. This final step of the ETL process will execute ANY scripts in this folder, so beware of adding anything you do not want executed. Test your scripts in a non-production environment to make sure there are no side effects of your custom scripts.

This is a basic sample of how to add data to your STAR data warehouse that is not part of the product. Keep copies of these scripts so you do not lose them when upgrading; the user scripts concept has continued into the more recent versions of P6 Reporting Database\P6 Analytics, but during an upgrade or new installation these scripts will need to be re-added to your new installation. Upcoming will be another blog about part 2 of this process: how to get this new data to show in P6 Analytics.


Fact Driven

The P6 Analytics product is driven by Facts. When you are looking at a subject area, take Primavera - Activity for example, you can add Project Id and Project Name from the Project Dimension. This will list out all projects in the w_project_d dimension table. If you then add a Fact to your Analysis - such as actual cost - this joins the Project Dimension to the Activity Spread Fact table. When this join is made, a fact record must exist or you will get "No result set found" returned. You would have this situation if the project you are looking for either has no activities or does not have any spread records. In P6 Analytics versions 1.2 and higher this could occur if the project has not been published in the Extended schema. If you know there are activities that should contain spread data, the best place to begin evaluating is the Extended schema. In the P6 application, go to the Projects section and add columns for enable publication (make sure it is checked) and last published (make sure this date is populated and recent). You can right click on the project in P6 and choose Publish Now; check back in a few minutes and see if the last published date has been updated. For more information on the Extended schema, check out the P6 Extended Schema white paper.


Changing Lines at the data date

Have you ever wanted to show actual values vs. remaining on the same line, using different colors for each? This can be a very tricky exercise in OBI. First, you have to create 2 separate date sets: one set of dates that represents time before the data date for actual values, and another set of dates after the data date for whatever is remaining. After this, you want to make sure that you can clearly show the data date, so it is known where the actual values end and where the remaining begins <see chart below>. After you have created the time separation, you can begin to assemble your graph. In OBI, the order in which you place your lines matters, for the simple reason that it determines which bar will overlay the other. Since our actual bar will stop at the data date, we want this bar to "appear" on top of our remaining bar, so we will see actual values up to the data date, then remaining after the data date. The last part of this involves the display of the data date. I accomplished this by placing a large number in a field that I used for the data date. This value essentially represents the height of the bar, so when you try this in your environment you may need to adjust the value according to your project's values. If you want to see the analysis (OBI 11.1.1.6.7), download the files here

Path too long when copying P6 Analytics Catalog

With a default installation of Oracle Business Intelligence and P6 Analytics 3.1, some objects in the P6 Analytics 3.1 default catalog create errors because the path length is too long. Please consult section 17.1.1.1, Object Name Guidelines, of the following Oracle Business Intelligence document (http://docs.oracle.com/cd/E23943_01/bi.1111/e10541/prescatadmin.htm). If you chose the default paths when installing Oracle Business Intelligence, then by default your catalog path will be /home/oracle/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog. This path may cause errors when copying the P6Analytics catalog. To work around this issue, create a new folder in a shorter path location (e.g. /home/oracle/Middleware/catalog), copy the P6Analytics folder from the P6 Analytics 3.1 release media to the new folder, and change the path location in Enterprise Manager for your install of Oracle Business Intelligence to use the shorter path. Additionally, if you installed Oracle Business Intelligence on Linux/UNIX, you can alternatively create a symbolic link and then reference the symbolic link in Oracle Business Intelligence for the catalog path. On Windows, map a drive to the catalog location and then reference the mapped drive in Oracle Business Intelligence for the catalog path.

Information around the P6 Extended Schema

The following blog is about the P6 Extended schema and covers some general topics related to settings that affect the P6 Extended schema services. Because the P6 Extended schema is the data source for P6 Analytics, understanding how it works and its settings is important to a successful P6 Analytics implementation. The Extended schema applies to all P6 Reporting Database versions 2.2 and higher and P6 Analytics 1.2 and higher. If the data is not in the Extended schema, it will not end up in P6 Analytics.

What causes a project to get published? Publication is triggered in the following ways:
- A user right clicks on a project in P6 and chooses Publish Now; the next time the project service runs, the project is added to the queue. Some operations also cause an internal update of fields at the project level that has the same effect and forces the project into the queue on the next project service run.
- The project crosses the '# of changes exceeded' threshold.
- The project crosses the 'time since last publication exceeds' threshold and has had a change since the last threshold period.

Where are the threshold counts being tracked? Based on the threshold settings, when a project crosses a certain number of changes it is added to the queue to be published. These changes are calculated from changes to activities in the project, WBS levels for the project, resource assignments on the project, and relationships on the activities. Changes in these areas are added up in an internal table in the PMDB until the project crosses the change threshold, or until the time threshold has been crossed for a project with changes. At either of those points the project is added to the queue for calculation.

What needs to occur for a new project to make it into P6 Analytics? The project must be created with Enable Publication set, and the project must be published. After the project has been successfully published, the Security global service must be run to calculate security on the project. Naturally, over the course of a day the global services run and this process is taken care of, but if you need the new project immediately you must run the Security global service after publishing the project. When you have confirmed all these processes have occurred, run the StarETL. At this point the project should be in the Star database and available to the P6 Analytics application in OBI.
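If you want to confirm a project's publication state directly, a query along these lines against the Extended schema PROJECT view (over the same database link the ETL uses) can help. The link name pxlink and the publication-related column names are hypothetical and should be checked against the PROJECT view definition in your environment:

-- Hypothetical sketch: confirm a project's publication status over the db link.
-- 'pxlink', enablepublication, and lastpublisheddate are assumptions; describe
-- the PROJECT view in your environment to confirm the real names.
SELECT objectid,
       name,
       enablepublication,
       lastpublisheddate
FROM   project@pxlink
WHERE  objectid = :project_object_id;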

Common reasons why data may not appear to be updated in P6 Analytics when compared to P6

The most common reason is that caching is enabled in OBI and the query has been run previously. The cache stores the result of that same query, so if a dashboard or the same analysis is run again, even after the STAR ETL process has completed, cached data can be returned. Turning off caching forces the query to run directly against the database and return the most up to date data, but this can have performance impacts. Depending on the environment, customers can enable, disable, or flush the cache based on their data and performance needs. See the OBI documentation for how to turn off or flush the cache.

For P6 Analytics version 1.2 and greater, the P6 Extended schema is used for calculating the data that is then pulled into STAR. If the P6 Extended schema services are not running, the data there could be stale. Check the P6 Extended schema services - the global services and project services - and make sure they have run recently. For changes on a project to be published, the project must cross the thresholds defined for the project service, such as the number of changes or the time since the last change. See the P6 Extended Schema White Paper or the P6 EPPM Installation and Configuration guide for more details on the services. Services can also be viewed in the JOBSVC table. The Extended schema services are the source of most of the updates that make it into the data warehouse. Some of the values from the Extended schema map directly to physical fields in the PMDB, but denormalized and calculated data is held primarily in the tables updated by the Extended schema services.

Another reason is that the ETL process failed. Generally the ETL process runs on a regularly scheduled interval without any failures; causes of failures could be loss of connection, bad data, a change in system privileges, etc. It is good practice to monitor the staretlprocess.log and staretl.html files located in the <installation directory>\star\log folder. An undetected failure could make P6 Analytics appear as if there were no updates to the data.
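To see when the publication services last ran, you can look at the JOBSVC table mentioned above. The column names in this sketch (job_type, status_code, last_run_date) are hypothetical, so check the actual JOBSVC definition in your PMDB before relying on it:

-- Hypothetical sketch against the PMDB: recent publication service runs.
-- Column names are assumptions; verify them against your JOBSVC table.
SELECT job_type,
       status_code,
       last_run_date
FROM   jobsvc
ORDER  BY last_run_date DESC;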

What is the difference between Row Level Security and RPD security?

Row level security (RLS) is a feature of Oracle Database Enterprise Edition. RLS enforces security policies at the database level, which means any query executed against the database will respect the specific security applied through these policies. For P6 Reporting Database, these policies are applied during the ETL process. This gives database users the ability to access data with security enforcement even outside of the Oracle Business Intelligence application. RLS is a new feature of P6 Reporting Database starting in version 3.0 and allows for maximum security enforcement both outside of the ETL and inside of Oracle Business Intelligence (Analysis and Dashboards). Policies are defined against the STAR tables based on Primavera project and resource security. RLS is the security method for Oracle Enterprise Edition customers. See previous blogs and the P6 Reporting Database Installation and Configuration guide for more on the security specifics. To allow the use of Oracle Standard Edition database for those with a small database (as defined in the P6 Reporting Database Sizing and Planning guide), an RPD without RLS is also available. RPD security is enforced by adding specific criteria to the physical and business layers of the RPD for the tables that contain projects and resources, and for the fields that are cost fields vs. non-cost fields. With the RPD security method, Oracle Business Intelligence enforces security. RLS is the default security method. Additional steps are required at installation and ETL run time for Oracle Standard Edition customers who use RPD security. The RPD method of security enforcement existed from P6 Reporting Database 2.0/P6 Analytics 1.0 up until RLS became available in P6 Reporting Database 3.0\P6 Analytics 2.0.
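Because RLS is implemented through standard Oracle security policies, you can see what has been applied to the STAR tables through the data dictionary. A minimal sketch is below; the STARUSER owner name is an assumption based on a default installation, so substitute your own STAR schema name:

-- List the row level security policies applied to the STAR tables.
-- 'STARUSER' is an assumed schema owner; substitute your STAR schema name.
SELECT object_name,
       policy_name,
       function
FROM   all_policies
WHERE  object_owner = 'STARUSER'
ORDER  BY object_name;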

Data that has been deleted in P6, how is it updated in Analytics

In P6 Reporting Database 2.0 the ETL process looked at the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs or those deletes would be lost; after the ETL process has run, refrdel can be cleared out. It is important to keep any purging of refrdel on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup against ETL runs no longer applies as of that release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present. The Extended Schema views join to the primary PMDB tables (e.g. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the clean up procedure documented in the P6 Extended Schema white paper. This can be run occasionally but does not need to run often unless large amounts of data have been deleted.
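If you are on P6 Reporting Database 2.0 and want to gauge how much delete activity is pending before an ETL run, a rough count against refrdel can help. The table_name and delete_date column names here are assumptions and should be verified against the refrdel table in your PMDB:

-- Hypothetical sketch against the PMDB: pending deletes by table.
-- Column names (table_name, delete_date) are assumptions; verify against refrdel.
SELECT table_name,
       COUNT(*)         AS pending_deletes,
       MIN(delete_date) AS oldest_delete
FROM   refrdel
GROUP  BY table_name
ORDER  BY pending_deletes DESC;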

Where are my date ranges in Analytics coming from?

In the P6 Reporting Database there are two main tables to consider when viewing time - W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day, such as calendar year, month, week, quarter, etc., so it can be used as the time element when creating requests in Analytics that group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is based on the same date range. The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the defined date range, which is a start date plus a rolling interval - generally the start date plus 3 years.

In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6, and the same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 is the min date because we always back-fill to the beginning of the year. On April 2nd the Extended schema services run and the date range is adjusted there, moving the max date forward to 4/2/2013. When the ETL process runs, the Reporting Database picks up this change and also adjusts the max date in W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limits may not be adjusted until a change occurs to cause a recalculation, but based on general system usage the dates in these tables will progress forward with the rolling interval.

Choosing a large date range can affect the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into STAR, and the date range defines how far out activity and resource assignment spread data is spread in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data beyond that 3 year range is bucketed into the last day of the range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is the important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally that amount of granularity years into the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would simply be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
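A quick way to confirm the date range your STAR schema is actually carrying is to check the bounds of both tables directly, for example:

-- Check the effective date range in the STAR time tables.
SELECT MIN(day_dt)  AS min_day,  MAX(day_dt)  AS max_day  FROM w_day_d;
SELECT MIN(daydate) AS min_date, MAX(daydate) AS max_date FROM w_calendar_fs;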

Security Controls on data for P6 Analytics

The Star database and P6 Analytics calculate security based on P6 security, using OBS, global, project, cost, and resource security considerations. If there is a concern that users are not seeing the data they expect in P6 Analytics, here are some areas to review:
1. Whether a user has cost security is based on the project level security privileges - either View Project Costs/Financials or Edit EPS Financials. If users expect to see costs, make sure one of these privileges is allocated.
2. The user must have OBS access on a project, not at the WBS level. WBS level security is not supported. Make sure the user has OBS access at the project level.
3. Resource access is determined by what is granted in P6. Verify the resource access granted to this user in P6. Resource security is hierarchical, and project access will override resource access based on the way the security policies are applied.
4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. For earlier versions of the Reporting Database there was a report_user_flag on the Users table; this flag field is no longer used after P6 Reporting Database 2.1.
5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any changes to privileges or security, this service must be rerun before the next ETL.
6. In P6 Analytics 2.0 or higher, a WebLogic user must exist that matches the P6 username. For example, the user Tim must exist in both P6 and WebLogic for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD. (A query sketch for checking the P6 side of this appears after this list.)
7. Cache in OBI is another area that can sometimes make it seem as if a user isn't seeing the data they expect. While cache can be beneficial for performance in OBI, if the data is outdated it can return older, stale data. Clearing or turning off cache when rerunning a query can determine whether the returned result set came from cache or from the database.
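For item 6, a quick way to confirm the P6 side of the username match is to look it up in the PMDB USERS table; the user_name value is what must match the WebLogic account. The table and column names here reflect a typical PMDB and should be verified in your environment:

-- Confirm the P6 username that must match the WebLogic user (item 6 above).
-- Run against the PMDB; verify table and column names for your P6 version.
SELECT user_id, user_name
FROM   users
WHERE  UPPER(user_name) = UPPER('Tim');  -- 'Tim' is the example user from the text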

Database Partitioning and Multiple Data Source Considerations

With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with their ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.

Considerations:

1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

2. Number of data source ids for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation because there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows (see the sketch after this list for what an added partition might look like):
a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on; these appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)
b) Run starETL -r. This recreates each table with the new partition key. The effect of this step is that all table data will be lost except for history related tables.
c) Run starETL for each of the 3 data sources, with the data source # (e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).
The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with the possibility of growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

3. The Number of Partitions and the Number of Months per Partition are not specific to multi-data source. These settings work in accordance with a sub-partition of the larger tables with regard to time related data, and they are dataset specific for optimization. The number of months per partition is self explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting. For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with:
- 1st partition, 2 months
- 2nd partition, 2 months
- 3rd partition, all the remaining records
Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges fill up and when additional partitions need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
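As an illustration of the idea behind step a) above, the data source partitioning works roughly like a list-partitioned table keyed on datasource_id, with one partition per data source. The DDL below is only a hedged sketch under that assumption, not the actual product script; the real edits belong in create_star_tables_part.sql and are version specific, so follow the Installation and Configuration guide:

-- Hedged illustration of the partitioning concept (not the product DDL).
-- example_fact_part and its columns are invented for this sketch.
CREATE TABLE example_fact_part (
  datasource_id     NUMBER,
  project_object_id NUMBER,
  actual_cost       NUMBER
)
PARTITION BY LIST (datasource_id) (
  PARTITION p1 VALUES (1),
  PARTITION p2 VALUES (2),
  PARTITION p3 VALUES (3),
  PARTITION p4 VALUES (4)  -- the additional partition added for a new data source
);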

How to automate a monitoring system for ETL runs

Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process and can be used as validation of process completion. To automate the monitoring of these log files, perform the following steps:
1. Write a custom application to parse through the log file and search for [ERROR]. In most cases a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.
2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (the number could differ based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end; either way this is good cause for an alert.
3. Check the last line in the log file. The last line of the log file should indicate that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on the Reporting Database version): [INFO] (Message) Finished Writing Report
4. You could write an Ant script to execute the ETL process, set failonerror="true", and from there send the results to an external tool to monitor the jobs, send them to email, or write them to a database.
With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process so that only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.
Another way to determine whether the ETL process completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records an entry for each ETL run along with a start and finish time. If the ETL process has failed, the finish date will be null. This table can be queried at the time the ETL process is expected to be finished, and if the finish date is null, an alert can be sent. These are just some options.
There are additional ways this can be accomplished based around these two areas - log files or the database. Here is an additional query to gather more information about your ETL run (connect as Staruser):

SELECT SYSDATE,
       test_script,
       decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
       duration duration
FROM (
  select (e.endtime - b.starttime) * 1440 duration,
         to_char(b.starttime, 'hh24:mi:ss') starttime,
         to_char(e.endtime, 'hh24:mi:ss') endtime,
         b.PROCESSNAME,
         instr(b.PROCESSNAME, ']') loc,
         b.infotype test_script
  from (
         select processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
         from etl_processinfo
         where processid = (select max(PROCESSID) from etl_processinfo)
           and infotype = 'BEGIN'
       ) b
  inner join (
         select processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
         from etl_processinfo
         where processid = (select max(PROCESSID) from etl_processinfo)
           and infotype = 'END'
       ) e
    on b.processid = e.processid
   and b.PROCESSNAME = e.PROCESSNAME
  order by b.starttime
)
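To automate the database-side check described above, something like the following could be scheduled against the Star database. The etl_processmaster column names (starttime, finishtime) are assumptions; describe the table in your environment and adjust accordingly:

-- Hedged sketch: flag the most recent ETL run if it has not finished.
-- Column names starttime/finishtime are assumptions; verify against etl_processmaster.
SELECT processid,
       starttime,
       finishtime
FROM   etl_processmaster
WHERE  processid = (SELECT MAX(processid) FROM etl_processmaster)
  AND  finishtime IS NULL;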

Controlling what data populates STAR

Beginning with the Primavera Reporting Database 2.2\P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, the ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in every ETL run, and by default all projects with Enable Publication set are still included. Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that limits the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or only projects with a project code assignment of 'For Analytics' (a sketch of such a view appears at the end of this post). Any criteria that can be defined in a Where clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases, because unnecessary projects can cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key for which projects are targeted during the ETL process. To set up the filter, perform the following steps:

1. Connect to your Primavera P6 Project Management Database as Pxrptuser (the extended schema owner) and create a new view:

create or replace view star_project_view
as
select PROJECTOBJECTID objectid
from projectportfolio pp, projectprojectportfolio ppp
where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
and pp.name = 'STAR Projects'

The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will make the view invalid for this purpose and it will not work. Any Where clause can be used, but projectobjectid is the key.

2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it already exists:

star.project.filter.ds1=star_project_view

3. When the staretl.cmd or staretl.sh process runs, the database link to Pxrptuser is accessed and this view is used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

As of Primavera Analytics 15.1, Unifier was added as a possible source. The same filtering criteria based on projectobjectid through a view still apply; the view would need to be applied to the Unifier schema and the staretl.properties file so it can be used by the ETL process, just as it is for a P6 source.
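As another illustration of the same idea, a filter based on the 'For Analytics' project code mentioned above could look roughly like the following. The projectcodeassignment and projectcodevalue view names and their join columns are assumptions about the Extended schema, so confirm them before use; as noted in step 1, only projectobjectid should be selected:

-- Hedged sketch: filter on a project code assignment instead of a portfolio.
-- View and column names below are assumptions about the Extended schema.
create or replace view star_project_view
as
select pca.projectobjectid objectid
from   projectcodeassignment pca,
       projectcodevalue pcv
where  pca.projectcodevalueobjectid = pcv.objectid
and    pcv.codevalue = 'For Analytics';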
