Monday Apr 07, 2014

Handling Codes and UDFs in a Multiple Data Source Environment


In a single data source environment, codes and UDFs are quite easy. Log into the configuration utility, select your codes and UDFs, run your ETL process, and as long as you don't exceed the default number of columns in your RPD, the codes and UDFs show in OBI. In the configuration utility, the list of codes and UDFs presented to you is populated by reading directly from the P6 Extended Schema you provided connection information for. When you make your selections, this list is written to your .properties file in the <rdb installation>\res directory. During the ETL process these selections are read from the .properties file and inserted into Staging tables and eventually Dimensional tables in the STAR schema.

One thing to note about Staging tables is that there is only one set. In a multiple data source environment the Staging tables are shared, meaning during each ETL run they are overwritten. This is the main reason why, in a multiple data source environment, ETL processes cannot be run at the same time: one ETL process has to finish before the next can begin. Which leads us to how to handle codes and UDFs in a multiple data source environment. Say you make your code selections for data source 1 and run your ETL process. Now you make your code selections for data source 2 and run the ETL process. The selections you made for data source 2 are now your final selections; the values and order you chose for data source 1 have been overwritten.

Accommodating multiple data sources for codes and UDFs requires a little coordination. Let's say you have 10 codes from your 1st data source and 10 codes from your 2nd data source that you would like to use in P6 Analytics in OBI. Run the configuration utility for data source 1 and choose your 10 codes using slots 1-10. Go to your \res folder, make a copy of staretl.properties, and save it to a different location (ds1staretl.properties). We're going to come back later and use this staretl.properties file for data source 1. Now for data source 2, log into the configuration utility. The codes in slots 1-10 don't really matter here; choose slots 11-20 for the codes you want to represent this data source in OBI. Make a copy of this staretl.properties and save it in a different location (ds2staretl.properties). You are saving these backups so you can easily rebuild the codes list at a later time if runSubstitution or the configuration utility is executed.

Now you have 2 properties files. These 2 files contain all the codes you are going to use; we just need to combine them into 1 file. Go to data source 1's properties file and change the lines for codes 11-20 to match codes 11-20 from data source 2's properties file. Find code 11 in data source 2's properties file, copy its three lines, and paste them where code 11 is in data source 1's properties file (or, if there is no code 11, just add the lines below the other codes). Do this for the rest of the codes from data source 2 until data source 1's properties file contains all the codes you want from both data sources, as sketched below.
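As a purely illustrative sketch (the actual property keys and values come from your own files and can vary by release, so treat every name below as a placeholder; copy your real three-line entries verbatim), the merged codes section in data source 1's properties file ends up shaped like this:

# slots 1-10: original entries from ds1staretl.properties (unchanged)
# slots 11-20: three-line entries copied from ds2staretl.properties
# placeholder keys only - use the real lines from your own file
code.project.11.name=<code name from ds2staretl.properties>
code.project.11.type=<code type from ds2staretl.properties>
code.project.11.column=<column slot from ds2staretl.properties>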

Now copy the whole codes section from this data source 1 properties file and overwrite the codes section in data source 2's. You will do this for each type of code (project, activity, resource). Together, this section is the 'master copy' of your codes. Do not run the configuration utility on either data source 1 or 2 after you have put this list together, or it will overwrite it. If this 'master copy' is overwritten you will need to coordinate and create it again, so be careful about when and where the configuration utility is run after this point.

If you have a scenario where you have the same codes in data source 1 and data source 2, this process is much easier: you only need to make the 'master copy' in one of the data sources and then add it to the other properties files. If the data sources are truly unique and there are codes that don't exist in one of the other data sources, you must follow the steps above. Do not worry that when the ETL process runs for data source 1 it might not have the codes for data source 2 that are now defined in the properties file. This simply adds them into the staging tables and mappings for OBI to use with the data associated with data source 2. When you query such a code along with data from data source 1, it will not return any results, just like any other project or activity that is not associated with a chosen code.



Wednesday Mar 26, 2014

Updated Burn Down Whitepapers

An effort was made to explain in detail how information flows from P6 into P6 Analytics, specifically the Primavera - Burn Down subject area within Oracle Business Intelligence. The Primavera - Burn Down subject area captures data at a specific (user-defined) point in time, then uses that capture to compare how a project should progress over time if everything goes "according to plan". P6 Analytics can show which activities are progressing according to plan and which are ahead of or behind schedule by comparison to the original point-in-time capture of the project. The Primavera - Burn Down subject area captures units and costs, as well as activity status counts (Not Started, In Progress, Completed, etc.).

For details on the exact measures that P6 Analytics displays in Oracle Business Intelligence, and how those measures are calculated and stored, we have created three new whitepapers. The three whitepapers detail how units and activity counts are calculated and stored on a daily basis. In addition, we have a whitepaper that details how the overall flow of data moves through the data capture process (ETL), and what the expected results will be in Oracle Business Intelligence within the Primavera - Burn Down Subject Area of P6 Analytics.

To download the three new whitepapers, follow the links below:

Primavera - Burn Down Counts Details

Primavera - Burn Down Units Details

Primavera - Burn Down Data Flow Details

Monday Mar 17, 2014

What's a WID? Where is it used?

In the Primavera PMDB tables, most primary key columns are identified by that specific table's _ID, for example task_id on TASK and proj_id on PROJECT. This identifies the unique row. In the early versions of our STAR schema we had object_id's as well: W_PROJECT_D (Project Dimension) had project_object_id, W_ACTIVITY_D (Activity Dimension) had activity_object_id, and fact tables had a combination of all the associated object id's - project, task, etc. The object id's still exist, but with the introduction of Multiple Datasources in P6 Reporting Database 3.0 an object_id was no longer unique enough. Take the Project Dimension for example: project_object_id is unique within a single data source, but if you have a second data source that rule no longer holds. You could have a project_object_id of 4527 in both data source 1 and data source 2. To make sure this scenario never occurs, row_wid's were added. The row_wid is calculated from that specific object id multiplied by 100, plus the data source id. If this project was from data source 2, the row_wid would be 452702. This allows for up to 100 data sources, although that many is unlikely and not recommended. This row_wid is then used to join to other tables. The row_wid from w_activity_d is referenced as activity_wid in other tables, and similarly for project_wid and eps_wid.
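A quick way to see the derivation for yourself (a sketch only; it assumes the dimension exposes a datasource_id column, which may vary by release):

-- row_wid = object_id * 100 + data source id
-- e.g. project_object_id 4527 in data source 2 -> 452702
select project_object_id,
       datasource_id,
       row_wid,
       project_object_id * 100 + datasource_id as derived_wid
from w_project_d;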



The STAR schema is not the only place these wid's are used. They are also used in the RPD for joining between dimensions and facts. In a multiple data source environment, the wid's make sure the data being returned in OBI is correct and unique to its data source. If you are doing manual manipulation of the RPD, be aware that the wid's may be the fields you want to join on. Consider your environment, its needs, and its future - if you foresee an expansion to more than one data source, plan ahead.
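As a sketch of the join pattern (using the resource dimension and assignment spread fact tables and flag columns that appear in the OBI-generated queries shown in a later post), a hand-written query joining a dimension to a fact on the wid would look like:

-- the dimension's row_wid joins to the fact's corresponding *_wid column
select r.resource_name,
       sum(f.at_completion_units) as at_completion_units
from w_resource_hd r
join w_assignment_spread_hf f
  on f.resource_wid = r.row_wid
where r.current_flag = 1
  and r.last_run_per_day_flag = 1
  and f.last_run_per_day_flag = 1
group by r.resource_name;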



Monday Feb 24, 2014

Before you upgrade...what to expect (P6 Reporting Database)

If you are upgrading to P6 Reporting Database 3.1 or higher, you'll want to allow adequate time for the upgrade process. In this blog we will focus on upgrading the STAR schema. The first major item to be aware of is that during the upgrade an ETL process is initiated. If you have a job scheduled to run your ETL process at a given time, make sure you allow enough time for the upgrade to finish, or cancel the job until after the upgrade has completed. Having two ETLs running at the same time, or overlapping at any point, could be problematic for your STAR schema and the integrity of the data.

Generally, running an upgrade will take longer than a traditional daily ETL. During the upgrade process a set of BCK (backup) tables is created. Backups are taken for all history (_history_f) and slowly changing dimension (_hd and _hf) tables. The schema is recreated and then the data is reinserted into the new tables. The ETL_CALCULATIONS table, which contains the metadata calculations, is also backed up. After the backups are completed, the ETL process is run to populate the remaining dimension and fact tables.
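If you want to confirm the backup tables were created, a query along these lines can help (the exact BCK naming convention is an assumption here and may differ by release):

-- list backup tables created during the upgrade (naming is an assumption)
select table_name from user_tables where table_name like '%BCK%';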

Before running an upgrade, make sure to back up your STAR schema. You'll want to make sure this data, especially your history, is preserved in case of any kind of failure during the upgrade.

Wednesday Feb 19, 2014

P6 Extended Schema - advanced techniques and additional information

With the P6 Extended Schema being the lifeline of data for the P6 Reporting Database and P6 Analytics, it is a good idea to get an understanding of how it works, along with some techniques that can make your usage of P6 Analytics easier and more efficient. I want to call your attention to a recently released white paper that contains advanced techniques and new information on the P6 Extended Schema.

http://www.oracle.com/webfolder/technetwork/tutorials/primavera/OnlineLearning/WhitePapers/P6_Extended_Schema_Advanced_Techniques.pdf

Highlights include:

Top Five Things You Need to Know about the P6 Extended Schema
Starting Over
Publishing too often
Dependencies
I Want to Build My Own Queue
and more...

Wednesday Jan 22, 2014

Report on ETL Status of a Multiple Datasource Environment

Do you have a multiple datasource Reporting Database environment? In a scenario where you have multiple datasources and multiple ETL processes running, you may want to keep track of those ETL runs in an automated fashion. In an earlier blog we discussed how to set up your own ETL monitoring system; in this blog we will discuss how to use OBI to monitor your ETL runs.

Currently we have an ETL_PROCESSMASTER table in the STAR schema that tracks every ETL run - the start time, the finish time, and whether it was successful. However, there is no column for datasource id. You can get this information by calling a function in the STAR schema - get_source_db_id.
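As a quick sanity check (assuming your user has execute privileges on the function), you can call it directly:

-- returns the datasource id of the STAR schema you are connected to
select get_source_db_id from dual;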

If you would like an OBI report to show this information, here is a way to do it. First let's talk about how this can be accomplished in OBI. The data in ETL_PROCESSMASTER is not exposed in any of the subject areas. The reason is that it is not for general user consumption. The ETL_ tables in STAR are for internal tracking and usage during the ETL process. While they can contain useful information, their main purpose is not exposure in the subject areas of P6 Analytics, where the focus is on your schedule and resource data, not the internal workings of the ETL process.

But a report can be created outside of the subject areas by using OBI's ability to query a data source directly. In this type of Analysis you do not use any of the subject areas; you use an OBI direct database query and formulate your own SQL. After the SQL has been generated, this Analysis can be treated like any other, and an administrator could use it to oversee the ETL process for all the organization's datasources.

We need to add a new column to the ETL_PROCESSMASTER table; we will call it DSID. You can run the SQL before running the ETL process:

ALTER TABLE etl_processmaster ADD (dsid number);

Or you can add a script to the USER_SCRIPTS folder (located in the \star\scripts directory); all contents of the USER_SCRIPTS folder are executed after the ETL process has completed.

Create a script called addcolumn.sql with the following contents:

DECLARE
  v_new_col1 number := 0;
BEGIN
  -- check whether the DSID column already exists
  select count(*) into v_new_col1
    from user_tab_cols
   where column_name = 'DSID' and table_name = 'ETL_PROCESSMASTER';
  -- add the column only if it is missing (DDL commits implicitly)
  if (v_new_col1 = 0) then
    execute immediate 'ALTER TABLE etl_processmaster ADD (DSID number)';
  end if;
END;
/
EXIT


This script checks whether the DSID column exists; if it does not, the column is added, and if it does, the step is skipped. Now you have the new column. Next we need to populate the datasource id for the last/most recent ETL run.

In the USER_SCRIPTS folder, add another new script called datasource.sql:

set serveroutput on size 100000
-- update the datasource id on the most recent ETL run row
declare
  v_src_id number;
begin
  v_src_id := get_source_db_id;
  update etl_processmaster
     set dsid = v_src_id
   where processstartdate = (select max(processstartdate) from etl_processmaster);
  commit;
end;
/
EXIT


This script will find the datasource id of this ETL run and update the new DSID column on the most recent ETL run row. Now you have all the data in place; you just need a simple query to extract it, such as:

select processid, processstartdate, processenddate, processtype, DSID from etl_processmaster;
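If you want the report to show the most recent runs first for each datasource, a small variation of the same query works:

select processid, processstartdate, processenddate, processtype, dsid
from etl_processmaster
order by dsid, processstartdate desc;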

Thursday Jan 09, 2014

How to find resources without any assignments?

In P6 Analytics, if you create an Analysis and use a value from a dimension - the Resource dimension, for example - you will see all resources regardless of whether they have any assignments. However, when you join to another dimension or fact, you may no longer see a resource in your Analysis if it doesn't have at least one fact row. In the OBI RPD, joins are constructed around the facts: dimensions are joined through the fact tables (see the RPD Physical Layer diagram).

For this reason, if you are looking for resources without assignments, you may need to use another method to view this data. The data is all there - one of the great things about the P6 Reporting Database is that the data is there, we just need to access it. This is true for many areas - slowly changing dimensions, history, etc.

In OBI you can use a Direct Database Request to query directly against the STAR schema and get your desired result (New > Create Direct Database Request), and choose:

Connection Pool: Oracle Primavera P6 Data Warehouse Connection Pool 

Add the following SQL: 

select r.resource_name, nvl(count(distinct ra.activity_object_id),0) num_assignments
from w_resource_d r left outer join w_resource_assignment_d ra
on r.resource_object_id = ra.resource_object_id
where r.resource_name is not null
group by r.resource_name
order by 1;

This will give you an output of all resources and the number of assignments each has in the STAR schema. You can add additional filtering or ordering to satisfy your requirements.
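And to answer the title question directly - restricting the output to only those resources with no assignments at all - the same query can be extended with a HAVING clause:

-- only resources that have no assignment rows in STAR
select r.resource_name
from w_resource_d r left outer join w_resource_assignment_d ra
  on r.resource_object_id = ra.resource_object_id
where r.resource_name is not null
group by r.resource_name
having count(ra.resource_object_id) = 0
order by 1;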

Wednesday Jan 08, 2014

Role Usage in P6 Analytics

In P6 Analytics there are some excellent reporting abilities for Roles I would like to focus on. The ability in P6 Analytics to slice and dice data by different areas allows you to combine information related to Roles and view it in an Analysis. For example, you are staffing multiple projects and do not yet know the specific resource that will work on a task, so a Role is assigned for the time being. Later on, as you staff those Role assignments, you will need to see across your enterprise how your Role assignments look and which ones still require staffing.

Here is a screenshot of a very simple analysis which can show this.  



With P6 Analytics you can also add in other areas to help further slice your Analysis, such as Project.

If you are interested in seeing a static Resource Limit alongside the Resource assigned to the Role, you can accomplish this by adding a new join into your RPD.
In the Physical Layer of the RPD, add a join between w_resource_role_d and w_resource_limit_f. This will allow you to join the Primavera - Resource Assignment and Primavera - Resource Utilization subject areas and add Resource Limit into this current Analysis. This is just a static Resource Limit, not related to a Role or a Role limit.

Here is a screenshot of the join in the RPD.


After making the change in the RPD the OBI services must be restarted.

There is more that can be accomplished with the Resource Assignment subject area around Role and Resource usage and staffing. By using this subject area you can see enterprise wide role and resource usage, or filter down by assignments. 



Thursday Dec 12, 2013

EPS changes in Analytics (and P6 Extended Schema)

I moved this project to a new EPS, but it still shows an older EPS name? Why?

The main culprit of most data mismatches is that the ETL process was run before the P6 Extended Schema services were able to update the data. But don't worry, it is just temporary: the services will continue to run and the data will be updated. One of the keys is understanding the data flow and which service controls which data.
If you are working in P6, updating your projects and activities, the project services will pick up these changes. The services run in the background, so you wouldn't need to do anything manually; the environment updates itself based on the thresholds you defined.

Some items, like EPS, are controlled by a different service. There are four global services - Enterprise Data, Resource Management, Security, and Enterprise Summaries. Enterprise Data consists of dictionary-level data like Codes, Calendars, and Portfolios. Resource Management, as expected, handles Resources, Roles, and other related dictionary-level resource data. Security handles new, removed, or updated users and access. Enterprise Summaries handles summary data along with updating EPS.

For the example above, if someone moved projects around between EPS levels, they would want the Enterprise Summaries service to run before the ETL process if they want to view the data immediately. Planning out when the global services run and when the ETL process runs is a good idea. Generally the global services are set to run daily, the same as the ETL process. You could schedule your global services to run at midnight and your ETL process at 1am, and in that case all your data will be up to date.

Wednesday Dec 04, 2013

Analytics Date Range is Out of Date

If you are working in the P6 Reporting Database or P6 Analytics and you notice that your spread range or your resource utilization and limit ranges haven't adjusted, more than likely it has to do with your P6 Extended Schema.

First check in your STAR schema:

select min(day_dt),max(day_dt) from w_day_d;

In the P6 Extended Schema, an initial date range is set when the project services and global services are run the first time. Generally the range is not changed after that point. The date range creates a rolling window of time: you set your start date, and your finish range is a certain number of months or years in the future. An example is setting your start date to Jan-1-2012 with a rolling 3 year window (today's date + 3 years).

Looking at the ReportDate table you want to see if this matches your expected date range:

select min(daydate),max(daydate) from reportdate;

If all global services and project services were run today, this should be updated and the max should reflect today's date. If for some reason it does not, make sure the Enterprise Data service has been run, and make sure you are still publishing projects. If all seems functional, as a last resort you can truncate the reportdate table and then rerun the Enterprise Data service. This will force a recalculation of the date range for this table.
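For reference, that last-resort step is just the following (run it only in the extreme case described above, and rerun the Enterprise Data service immediately afterward):

-- extreme cases only: clears the stored range so it is recalculated
truncate table reportdate;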

After this is completed you can run the StarETL, and you should see w_day_d updated with this date range. If for any reason you are still seeing an issue, w_day_d can also be truncated and repopulated on the next ETL run. Again, truncating these tables should not be necessary and is only for extreme cases. In the majority of cases, running the Enterprise Data service and the ETL will make sure all date ranges in the P6 Reporting Database and P6 Analytics are up to date.

Wednesday Nov 06, 2013

Why Does Adding a UDF or Code Truncate the # of Resources in a List?

Go to the Primavera - Resource Assignment History subject area. Under Resources, General, add the fields Resource Id, Resource Name, and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for every resource. Current flag = 1 means this is the most up to date row for this resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension.

(Query from nqquery log)

select distinct 0 as c1,
     D1.c1 as c2,
     D1.c2 as c3,
     D1.c3 as c4
from 
     (select distinct T10745.CURRENT_FLAG as c1,
               T10745.RESOURCE_ID as c2,
               T10745.RESOURCE_NAME as c3
          from 
               W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ 
          where  ( T10745.LAST_RUN_PER_DAY_FLAG = 1 ) 
     ) D1

If you add a resource code to the query, it now forces the OBI server to include data from W_RESOURCE_HD, W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because Resources and Resource Codes are in different dimensions, they must be joined through a common fact table. So any time you pull data from different dimensions, it will ALWAYS pass through the fact table in that subject area.

One rule is that if there is no fact value related to that dimensional data, nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name, and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts.

Here is a look at the query returned from the OBI server when querying Resource Id, Resource Name, Current Flag, and a Resource Code. You'll see the query includes an actual fact (AT_COMPLETION_UNITS) even though it is never returned when viewing the data through the Analysis.


select distinct 0 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c1 as c6
from 
     (select sum(T10754.AT_COMPLETION_UNITS) as c1,
               T10706.CODE_VALUE_02 as c2,
               T10745.CURRENT_FLAG as c3,
               T10745.RESOURCE_ID as c4,
               T10745.RESOURCE_NAME as c5
          from 
               W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
               W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
               W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */ 
          where  ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID and T10706.LAST_RUN_PER_DAY_FLAG = 1 and T10745.ROW_WID = T10754.RESOURCE_WID and T10745.LAST_RUN_PER_DAY_FLAG = 1 and T10754.LAST_RUN_PER_DAY_FLAG = 1 ) 
          group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
     ) D1
order by c4, c5, c3, c2


When querying in any subject area, if you cross different dimensions, especially Type II slowly changing dimensions, and the result set appears short, the first place to look is whether the objects have associated facts.

Monday Oct 14, 2013

Reference for STAR field mappings

Occasionally questions come up about the STAR schema and how it maps to fields in P6. The STAR schema is composed of data pulled from the P6 EPPM Extended Schema views; the rest of the data is calculated as part of the transformation process. Calculated data includes historical captures, hierarchies, burn down and work planning calculations, and many others.

If you would like to see how the Extended Schema fields are mapped, a good place to start is the ETL_MAPPING table. Filter on your source schema version (example: EPPM 8.3 = 83):

select * from etl_mapping where source_version = 83;


From here you can view how these fields are directly mapped to the staging tables (example: W_PROJECT_DS). These fields map similarly to the corresponding dimension or fact tables. This information is also available in the P6 Reporting Database installation directory's \res folder, in the file mapping.tcsv.
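If you are only interested in the mappings that feed one staging table, you can narrow the query; note that the target-table column name used here is an assumption and may differ in your release (describe the etl_mapping table to confirm):

-- column name target_table is an assumption - verify against etl_mapping
select * from etl_mapping
where source_version = 83
  and target_table = 'W_PROJECT_DS';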

Wednesday Oct 02, 2013

Usage of Historical UDFs and Current Flags When Creating Analyses

In P6 Reporting Database 3.1 and higher, Type II slowly changing dimensions were added. These changes are captured in tables with the suffix _HD. In each of these tables there is a field called current_flag. With Type II SCDs, the current flag lets you determine which is the most recent row vs. the other rows, which are the historical rows. Take a project, for example: when the project is created, it has one row. The next day you change something on the project, maybe the project name, then run the ETL process. Now you'll have two rows. The most recent row will have a current flag of '1'; the original row will have a current flag of '0'. Having a current flag of '0' is a quick way to determine it is a historical record. In the STAR schema it is easy to evaluate these historical versions.
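For example, to pull only the most recent version of each project straight from the STAR schema, a minimal sketch against the w_project_hd table discussed below (column names are assumed to mirror the OBI fields):

-- current_flag = 1 marks the most recent row; 0 marks historical rows
select project_id, project_name, current_flag
from w_project_hd
where current_flag = 1;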

In P6 Analytics 3.1 you can see the current flag on some historical records, but not in every analysis. Take the Primavera - Activity History subject area: if you add Project Id, Project Name, and Current Flag from under the Project \ General section, you will see the current flag behave as described above. If there was a change in the project name, you will see two rows, and the current flag will be '0' or '1' to distinguish the historical record from the current record. This is because the current flag that has been exposed is tied directly to the Project dimension (w_project_hd).

If you use the current flag from under the General area of any section, it will be the current flag representing that section. Now let's look at how this behaves when combining different Type II slowly changing dimensions. Stay in the Primavera - Activity History subject area, and for now remove the current flag and add a Project Text UDF to the Analysis containing Project Id and Project Name. If there were multiple changes to the text UDF (for example, Primavera UDF1), there will be multiple rows showing the history of what this text UDF contained. UDFs are contained in their own Type II slowly changing dimension tables (w_udf_project_hd). When you add the current flag back from the Project \ General section, you will see a slight change in the Analysis, and you may see some of the UDF historical records with a 1 next to them. This is because the current flag actually represents the project rows, not the UDF rows. If you moved the Current Flag next to Project Name, it would fit more appropriately in the Analysis.

In the example of UDFs, you will see the projects listed, and where there is a UDF assignment you will see its value. If there is no assignment for a project, you will see an empty column. If you never had an assignment and then added a UDF, you will see two rows: one for the previously empty row on that project, another for the new row containing the UDF assignment. This allows you, over the life of your project, to see all the changes to it.

The Primavera - Activity subject area is a good place to work if you are more interested in always seeing the current values. The Primavera - Activity History subject area gives you a look at the historical versions of a great deal of information. Using the current flag to determine the most current version works great when you use the current flag under those dimensional areas. When combining historical dimensions, you just need to be aware of where the current flag is coming from and what it represents; the current flag isn't always available for all historical dimensions. The history and data that exist in P6 Reporting Database and P6 Analytics 3.1 are extremely powerful, helping to give a total picture of what has happened with your projects and associated data. How have these values changed over time? How has the project changed over time? You can get this information from the STAR schema, or very easily from the P6 Analytics dashboards and Analyses.


Friday Aug 30, 2013

Indicator UDFs

Indicator UDFs are not supported in P6 Analytics. The main reason is that there isn't currently a way to move those same indicator images over into OBI; the images used for indicators in P6 don't exist in OBI. There are some workarounds, however, so that based on the UDF indicator from P6, an image can be displayed in OBI.

There are two main pieces to this: first, the STAR data warehouse side; second, how to have OBI display an indicator based on a text value. In this blog posting we'll look into the data warehouse side. In a later blog we'll look at the OBI side.

Let's start back in P6. In P6 you can add a Project UDF with a type of indicator. Add this indicator as a column, and then for a specific project assign a value - for this example let's say the Blue Star indicator is assigned.

If you query the Extended Schema view for UDFVALUE for the projectobjectid of this project:

select * from udfvalue where projectobjectid = 4518;

You will get a result set back showing UDFTEXT as BLUE, which is the text for the Blue Star icon. It will also give the subject area and the type of UDF - FT_STATICTYPE.

In the P6 Reporting Database, these UDFs are not available in the configuration utility because they are not supported. But UDFs can be added directly to the staretl.properties file in the \res folder. This file is used for populating all dynamic code and UDF values on the OBI side. In this example, this is the entry I added in my 6th position:

udf.proj.text.6.name=Cost Status

udf.proj.text.6.type=Text

After you have added this to the .properties file, execute runSubstitution.cmd or .sh in the \etl\common folder. Running this will propagate the new UDF row you just added through the .properties file into the other necessary files. The next step is to get the tables to populate this value. We are going to treat it like a text UDF. The only goal from the data warehouse side is to get it acting like a text UDF so a conditional statement can be added in OBI: if text = 'Blue Star', then display a certain icon.

Go to \scripts\udf_load and add a new script called change_indicators.sql with the following:

BEGIN
  -- re-type project indicator UDFs as text so OBI can evaluate the value
  update w_udfvalue_ds set DATATYPE = 'FT_TEXT'
   where DATATYPE = 'FT_STATICTYPE' and SUBJECTAREA = 'PROJECT';
  commit;
END;
/
exit

Save this file. (If everything works out for you with your UDFs as indicators, go back and add this same file as a template in \etl\oracle\templates\udf_load, saving the file as .tsql. This will save you from having to edit this script each time the configuration utility is run, since running the configuration utility overwrites files in the \scripts directory.)

Now run the ETL process and you will see this indicator UDF appear in the w_udf_project_d table as if it were a text UDF. Check back for a future blog on how to make changes in OBI to base a conditional format statement on this value.

Tuesday Aug 20, 2013

P6 Reporting Database versioning

How can you tell what version of the P6 Reporting Database you have installed?   There are two main ways.

1- Query the ETL_PARAMETER table with the following:

select p_feature, p_1 from etl_parameter

where p_feature = 'db.star.version'

or p_feature = 'etl.source.version';

The output will be similar to:

db.star.version 3.1

etl.source.version 83

DB.STAR.VERSION gives the major release that this STAR database was installed from. ETL.SOURCE.VERSION tells you which P6 version the ETL process is running against, which helps determine the mappings.

2- In the <star installation directory> there is a version.txt file that also gives the version number. The version.txt was added in the last couple of releases and is updated when fix packs are applied.
