Tuesday Mar 01, 2016

How to Apply Row Level Security to New Tables


The Primavera Data Warehouse uses row level security for data security.  By enforcing security at the database level with row level security we can apply a more granular level of control.  The permission to see certain data comes from the access a user has in P6 or Unifier.  In P6, security is calculated by the Extended Schema Global Security service and applied at the project and resource levels. In Unifier, security is applied at the project level. Applying it at the project level means security is calculated per project, per user.  The results are stored in security tables in the STAR schema, and security policies are then applied on top of them. Applying the policies is the key to the row level enforcement.  These policies are applied at a level above STARUSER.

You can verify the user who applies these policies by looking in Setup.sh\bat or Staretl.sh\bat (depending on which version you are using) and searching for security_policies.  It will show a username of %STARDBA_SR%; at the top of that file you can see who that user is. This is important to know because if you are creating or applying new policies you will need to connect as that user to apply them properly.
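
You can also list the policies currently applied to the warehouse tables with a standard Oracle data dictionary query (this is generic Oracle, not Primavera-specific, and it assumes STARUSER owns the secured tables):

    SELECT object_name, policy_name
      FROM dba_policies
     WHERE object_owner = 'STARUSER'
     ORDER BY object_name;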

Based on the type of table you are adding (Dimension or Fact), you can find an example in the security_policies.sql script.
Let's try a fact example.  Search for:

--METADATA NAME:W_ACTIVITY_SPREAD_F_P_POLICY TYPE:ADHOC

Make a copy of this section, then replace every occurrence of w_activity_spread_f with the name of your new table.  This example adds a project level security policy to your new table. Save the file, then connect and run it as %STARDBA_SR% (mentioned earlier) to apply the policy.  Now only users with project object id access to values in this table will be able to see the data. The same can be done with dimensional data as well. The key is the project_object_id and datasource_id: these fields must exist in your new custom tables and be populated accordingly for this security enforcement to work.
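
For orientation, here is a rough sketch of what such a policy section amounts to in Oracle terms. This is not the actual contents of security_policies.sql: the function name (star_project_security_fn), the security table name (user_project_security), and the custom fact table (W_MY_CUSTOM_F) are all hypothetical placeholders. A row level security policy is a predicate function registered against a table with DBMS_RLS.ADD_POLICY:

    -- Hedged sketch only; the real definitions live in security_policies.sql.
    -- Function, table, and schema names below are illustrative assumptions.
    CREATE OR REPLACE FUNCTION star_project_security_fn (
      p_schema IN VARCHAR2,
      p_table  IN VARCHAR2
    ) RETURN VARCHAR2
    AS
    BEGIN
      -- Limit rows to projects the session user has access to. A real
      -- Primavera policy would also account for datasource_id.
      RETURN 'project_object_id IN (SELECT project_object_id'
          || ' FROM user_project_security'
          || ' WHERE user_name = SYS_CONTEXT(''USERENV'',''SESSION_USER''))';
    END;
    /

    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'STARUSER',
        object_name     => 'W_MY_CUSTOM_F',
        policy_name     => 'W_MY_CUSTOM_F_P_POLICY',
        function_schema => 'STARUSER',
        policy_function => 'STAR_PROJECT_SECURITY_FN',
        statement_types => 'SELECT');
    END;
    /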

Policies applied at the STARDBA_SR access level should persist without having to be reapplied with each ETL, unless they are dropped at some point.  If you want them rerun automatically, you can duplicate the user_scripts step, change its connection string to this higher level connection, and add a new folder for this script.

Monday Feb 29, 2016

How to add more than 50 Codes or UDFs to Primavera Analytics

1 Background

Primavera Analytics has the ability to Extract, Transform, and Load (ETL) up to 240 Project, Activity and Resource codes, as well as User Defined Fields (UDFs). These codes and UDFs are configured during the installation process of the P6 Reporting Database or by running the Primavera Data Warehouse configuration utility. However, out of the box, the Primavera Analytics Repository Definition File (.rpd) is only configured to take advantage of 50 codes and UDFs. Additional steps are necessary to allow more than 50 codes and UDFs to be visible in Oracle Business Intelligence (OBI). The steps for configuring the .rpd file to use and see these additional codes and UDFs are documented below.


2 Launching and Logging into the Admin Tool

1. Launch the Admin tool (Example: All Programs → Oracle Business Intelligence Edition Plus Client → Administrator).

2. Go to File → Open → “Online”.

3. Login using weblogic administrator credentials.

You will then see three panes, from right to left: Physical, Business Model and Mapping, and Presentation.

3 Adding additional code and/or UDF columns beyond 50

Use this process to add one or multiple UDF and/or code mappings.

After following the previous section to launch and log into the Admin Tool:

3.1 Add Column(s) to the Physical Layer

1. In the Physical window, expand “Oracle Primavera P6 Data Warehouse” → Catalog → dbo.

2. Expand the table where you want to support additional columns, for example W_UDF_ACTIVITY_D and W_UDF_ACTIVITY_HD (Activity UDF Dimension and the Activity UDF Dimension History tables).

For each table you are adding columns to:

  1. Right-click on the table you are adding columns to and select Properties (ex. W_UDF_ACTIVITY_HD → Properties).
  2. On the Columns tab, press the green plus button or press ALT-Insert.
  3. Provide the name of the new physical column in the Name field (ex. UDF_COST_51). You can confirm the column exists in the database with the query following this list.
  4. Select the correct type from the Type combo box (ex. DOUBLE).
  5. Check the Nullable check box.
  6. Press OK.
  7. Repeat steps 1-6 for additional columns and tables.
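
The physical column you add must correspond to a real column in the warehouse table. A quick data dictionary check (assuming the table lives in the STARUSER schema) confirms the column exists before you map it:

    SELECT column_name, data_type, nullable
      FROM all_tab_columns
     WHERE owner = 'STARUSER'
       AND table_name = 'W_UDF_ACTIVITY_HD'
       AND column_name = 'UDF_COST_51';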



3.2 Drag Columns over to the Business Layer

1. Right-click the table name you added columns to and click on Query Related Objects → Physical → Physical Table.

2. Select all table aliases in the list and press Mark, then Go To.

For each “marked” table alias in the Physical Layer (marked has a red check mark next to the icon):

1. Expand the table alias to reveal its Columns.

2. Right-click the table alias name and click on Query Related Objects → Logical → Logical Table.

3. Select the logical table in the list and press Go To.

4. Expand the logical table that was selected.

5. Drag the added columns (use Shift or CTRL click to select multiple columns) from the marked alias in the Physical Layer onto the name of the logical table in the Business Layer.

For each newly dragged column in the Business Layer:

1. Double-click on the column (in the Business Layer still) and update the Name field to be like the other columns (ex. Activity UDF Date 51).

2. Click on the Aggregation tab. Depending on the type of the new column, the default aggregation rule may need to be modified; for example, a user defined cost field uses “SUM” as its default aggregation rule.



3.3 Drag Columns to the Presentation Layer

For each new column in the Business Layer:

1. Right-click on the table the new columns were added to in the Business Layer.

2. Click Query Related Objects → Presentation → Presentation Table

3. Select all tables in the list and press Mark.

4. Press Go To.

For each “marked” Presentation Table in the Presentation Layer (marked has a red check mark next to the icon):

1. Drag the new columns from the Business Layer over to the Presentation Table.

2. Highlight the new column in the Presentation layer, right-click, and select Properties.

3. In the Admin Tool, click File → Save. Check in the changes and run the consistency checker. The RPD file has now been updated with the new columns.



4 Restart the OBIEE Services

1. Launch Oracle Business Intelligence Enterprise Manager (http://<hostname>:<port number>/em, where <hostname> is the server name or IP address and <port number> is the port number for the Oracle Business Intelligence deployment).

2. Login to Oracle Business Intelligence Enterprise Manager with a user account that has Administrator privileges (e.g. weblogic).

3. After successfully logging in to Oracle Business Intelligence Enterprise Manager, in the left hand navigation tree, expand the Business Intelligence folder, then click coreapplication.

4. On the coreapplication page, select the Overview tab. In the System Shutdown & Startup portlet, click the Restart button to restart all of the Oracle Business Intelligence services.

5. Once the Oracle Business Intelligence services have successfully restarted, log out of Oracle Business Intelligence Enterprise Manager.

5 Verify New Column is in Oracle Business Intelligence Analytics

1. Launch the Oracle Business Intelligence Analytics application.

2. Select the Analysis link on the left and then select the Subject area which uses the new column.

3. Expand the folder that uses the new column.

4. Look for the new column and verify the column and its values.

Friday Feb 05, 2016

Logging in the Data Warehouse Configuration Web Application

As of the Primavera Data Warehouse 15.2 release, logging in the Configuration web application behaves as follows in a multiple data source environment.

If you have multiple data sources configured you will see multiple tabs along the top, next to Home.  For each tab you can define an ETL schedule, which will result in each ETL running at a different time.  However, the Status section (on both tabs) reads from the same internal table, ETL_PROCESSMASTER.  For this reason you will see the results of all ETLs run on this STARUSER.  To determine which ETL is related to which data source, you will only see a View Report and View Log link for the ETLs related to that specific data source.  If you see rows for an ETL run with a blank area for Report and Log, it means that run was for the other data source.
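
If you want to inspect the runs directly, ETL_PROCESSMASTER can be queried as STARUSER. The column layout varies by version, so this minimal sketch just pulls everything for inspection:

    -- All data sources record their runs in this one table.
    SELECT * FROM etl_processmaster;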

Thursday Jan 21, 2016

Upgrade Considerations When Using Multiple Data Sources

1-  Number of data sources:

If you are upgrading but would like to increase or decrease the number of data sources, you must do so when installing the latest version.  For example, if you have a Primavera Data Warehouse setup with 4 data sources in 15.1 and you are upgrading to 15.2 but only choose 3 data sources during the install, you will see issues trying to upgrade.  The reason is that you are trying to force 4 data sources worth of data into 3.  During the installation\configuration there is a box that allows you to enter the number of data sources; by default in 15.2 it is set to 3, so it must be increased if the data warehouse you are upgrading has more.  Even if you don't intend to use those extra data sources from 15.1, they must initially come over or be removed from the 15.1 data warehouse first. It is always safer to set the number of data sources higher than expected, to cover this scenario and to allow for growth.

2-  upgradeSchema vs. upgradeStar

Generally upgradeSchema is the faster option.  UpgradeSchema takes the existing STAR schema and applies just the database level changes (new tables, new columns, etc.) but does not back up or move any data.  UpgradeStar makes a backup of all historical data, rebuilds the STAR schema, reinserts the data, and runs an ETL.  UpgradeStar will usually take longer than a normal ETL run because it has to back up and restore all this data.  In the 15.2 release, because there was a major expansion of partitioning, upgradeSchema has a similar runtime to upgradeStar: we need to back up all the data, create new partitioned tables, and then reinsert the data.  In previous releases, and moving forward, upgradeSchema is generally the faster option; it can be run on its own and the ETL can then run at its regularly scheduled time.

3- Partitioning vs. Non Partitioning

If you were using a partitioned schema you cannot now switch to a non-partitioned schema, as the data is already stored that way.  You would need to install fresh and rerun the ETL processes. With the non-partitioned option you would also lose the options for detailed history and slowly changing dimensions, so the data that might be in the partitioned schema would not be usable anyway.
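
To confirm whether your existing STAR schema was installed with partitioning, you can ask the Oracle data dictionary, for example:

    -- Run as STARUSER; lists partitioned tables and their partitioning type.
    SELECT table_name, partitioning_type
      FROM user_part_tables
     ORDER BY table_name;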

Friday Jan 15, 2016

History Options for Unifier Data

As of the 15.2 release, historical captures of Unifier data in Primavera Analytics\Data Warehouse are available for Business Processes, Cash Flows, and Cost Sheets.   History in these subject areas is captured at a weekly interval; weekly is currently the only historical interval available for these subject areas, and it is the default.

 

Wednesday Dec 23, 2015

Why is Oracle_Home Required? What Would Happen if it is Not Set?

If ORACLE_HOME has not been defined you will run into errors with the ETL process.  SQL*Loader (sqlldr) is required by the ETL process. These errors may show directly in the console or in the log if you are running the ETL from the \star installation directory.
If you are running the ETL from a schedule in the Primavera Analytics web configuration utility, you may see the job listed as Failed in the Status tab; check the log for the specific details.
If you receive errors about sqlldr, it generally means there is an issue with the ORACLE_HOME environment variable.

In some environments these variables might be reset occasionally.  To ensure they are set for the web configuration utility, it is recommended to define them in setDomainEnv.sh in such a way that the environment variables are available to the p6rdb application for all user sessions.  You can append the following to the file:

        # Point ORACLE_HOME at the Oracle client or database home so sqlldr can be found.
        export ORACLE_HOME="/xxx/xxx/xxxxx"
        # Put the Oracle binaries (including sqlldr) on the PATH.
        export PATH="${PATH}:${ORACLE_HOME}/bin"
        export JAVA_HOME="/xxx/xxx/xxx/xxxx"

Ensuring during setup that these variables are defined and persistent will help your ETL and ETL scheduling run smoothly.

Thursday Nov 19, 2015

Top Questions to Answer Before Setting Up Primavera Analytics

The questions below are aimed at larger installations but are also applicable to single data source installations.  The purpose of thinking about these areas before installing and implementing is to prepare your environment for expansion, to avoid capturing data you don't require, and to avoid building an environment that is not suited to your business needs and may need to be rebuilt.

#1-  How many data sources do I plan to have?
Even if you don't know for sure but think you might have 2 to 3 (for example), plan for it.  It is better to overestimate than underestimate: overestimating by a few will leave those slots partitioned and available for future use. Partitioning is required if using multiple data sources.
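
To see which data source slots are already holding data, you can group any STAR table by its datasource_id. The table name below (w_project_d) is illustrative; any table carrying datasource_id works:

    SELECT datasource_id, COUNT(*) AS row_count
      FROM w_project_d
     GROUP BY datasource_id
     ORDER BY datasource_id;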

#2-  Partitioning ranges
This can affect performance.  Setting a large partitioning range (for example, 6 months) is not recommended if you are capturing a large amount of lower level historical data (SCDs and Activity History).  Partitioning is designed to help performance by querying only the data in a given partition rather than all data where it can. Setting partitions equal to 1 month might be better for a specific environment, based on size and historical capture frequency.

#3-  Activity Daily History
How many projects have this option set?  This can heavily affect performance and storage. It is recommended that this be on for only a small percentage of projects, and then turned off when the need for this level of detail has passed. It has cascading effects beyond history capture: it also turns on slowly changing dimensions for any data related to the project, and slowly changing dimensions capture every change on objects that support SCDs.  By default Activity History is off.  It is recommended to opt in a few projects at a time based on requirements.

#4-  What is the P6 Extended Schema date range?
If P6 is your primary data source, what date range is set on the project publication? A larger date range means a larger set of spread data that needs to be pulled over.
Generally this range is 1 year in the past and 2 years in the future. This means you will have daily spread rows for each day, for each activity and resource assignment spread bucket, for each activity in each project that is opted in.  That can become a large number of rows quickly. If you have units or costs outside this range the data is not lost; it still comes over to Analytics, you just won't have a daily bucket for it.  Data outside the range is lumped onto the beginning or end of the date range, so it is still represented in the totals.
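
To gauge how quickly daily spread rows are accumulating per project, a simple count against the spread fact table can help (w_activity_spread_f and project_object_id are both referenced earlier in this blog):

    SELECT project_object_id, COUNT(*) AS spread_rows
      FROM w_activity_spread_f
     GROUP BY project_object_id
     ORDER BY spread_rows DESC;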

#5-  What are my data requirements, how often do I need my data refreshed, and at what level of granularity?
Decisions based on these questions will shape your entire Analytics environment and Primavera ecosystem. For example, if you have 10 data sources, do you need to report at the Activity level and have those analyses up to date every day?
Do you also require global reporting across all 10 data sources, with those reports only run weekly?
A scenario like this lends itself to each data source having its own individual STAR with its own ETL, plus a master environment created to pull from all 10 data sources on a weekly interval for global reporting.  The advantage is that each individual data source can get an ETL run each day.  If all 10 data sources were pumping into the same STAR and each ETL took 2 hours to run, it would take 20 total hours before ETL #1 could run again.  Having this global version (while still pulling from the same data sources) allows individual sites to have their own ETL rules while still having a consolidated global data warehouse that is updated once a week.

Monday Nov 02, 2015

Refreshing RPD Translation Changes Without Restarting OBI Services

In Primavera Analytics, when making changes to the selected UDFs, Codes, and other dynamic fields, the OBI services generally need to be restarted for the changes to be reflected. The reason is that these fields have underlying translation values, and those are what need to be updated; translations are managed differently from a change to an existing field in the RPD. The following steps cover how to disable caching on the RPD initialization block for translations. This allows translation updates to be seen in OBIEE without requiring a full restart of the BI Server.

Steps for Updating RPD and Refreshing Translations

1. Open the RPD using the BI Admin Tool, in Online Mode.

2. From the Manage menu, select Variables.



3. Under Session, select Initialization Blocks, then double-click on the External Metadata Strings initialization block.



4. In the Variable Target section, click Edit Data Target.



5. Uncheck the Use Caching option under Row-wise initialization, then click OK.


6. Save the RPD changes. When prompted to check-in changes, click OK. When prompted to check global consistency, click No.

7. This example tests mapping a new Activity UDF. First log into OBIEE and create a new analysis using the Primavera - Activity subject area. Expand the Text UDFs - (Activity) folder and note the position of the first unmapped text type activity UDF, in this case Text Activity UDF 6.


8. Run the configuration utility (<ETL_HOME>/configStar.sh/bat), and on the UDF mapping screen add a 6th Activity UDF. Finish the configuration utility, then run the ETL.



9. Once the ETL has completed, log into OBIEE and again create an analysis using the Primavera - Activity subject area. From the refresh drop-down, select Reload Server Metadata.


10. Once that completes, expand the Text UDFs - (Activity) folder and you will see that the translation for the newly added Activity UDF is now displayed.

Thursday Oct 08, 2015

User Defined Field Fact History

New to the Analytics 15.2 release is the addition of historical facts and subject areas for user defined field fact data, such as project, WBS, activity, resource, and resource assignment cost and number UDFs.  For these UDFs to be available they must have been selected, either from configStar or from the Analytics web configuration utility, and opted in.
Opting them in means history will now be recorded on them. History is captured at different intervals depending on the object.

Because Resource is a global object, its history interval is always set to Week.
Activity and Resource Assignment UDFs are turned on for daily capture based on the setting on the project associated with the object; the project must be opted in with activity daily history, and then so will the UDFs for these objects.
The Project and WBS history interval is determined by the history interval setting on the project for the Project\WBS history setting.  The lowest level of granularity here is Weekly.

I Added a User in P6, How To Make Sure That User Gets into Analytics

A common question is what is the data flow for adding a user and getting that user into Primavera Analytics. For this example we are going to follow the P6 user flow.

1- Log into P6 application and add new user.  Give this user module access for P6 Analytics.  Assign Project Access and Responsible Manager\OBS to this user.
2- Run the Global Services Security Service.
3- Run StarETL process.
4- Make sure user exists on OBI side.

If you are using LDAP or SSO, you want your OBI server configured to the same.  When logging into OBI for Primavera Analytics, the username must match exactly as it is in P6.
Make sure there is a user in OBI, WebLogic, LDAP, or whichever security model you are using, with the same name as in P6.

The names need to match exactly because a security package is loaded as a predefined process in OBI against the selected data source.

From here the user should match what is in P6 and see the same data as in P6. The global security service is a key component here. Primavera Analytics data is pulled directly from the extended schema, so you need to make sure this service has run. It runs on a defined schedule; it is recommended to schedule it to run and finish before the ETL process runs. Keeping your global services scheduled so they finish before the ETL and capture all changes is key to keeping a fully updated Primavera Analytics environment.

Tuesday Sep 15, 2015

History Intervals for Unifier Data in Analytics 15.1

In Primavera Analytics\Data Warehouse 15.1 Unifier was added as a possible source database.  There were a few subject areas added specific to Unifier:

Business Process, Cash Flow, and Cost Sheet.


There are also historical versions of these subject areas. The historical versions are captured by default (there is no on/off switch in Unifier), always at a weekly interval. In Unifier there is no optional start day of the week, so the beginning of the week is always Sunday.

 

Tuesday Sep 01, 2015

Can I Include P6 and Unifier Data in a Single STAR Schema?

In Primavera Data Warehouse 15.1 Unifier was added as an optional data source. There are certain subject areas in Primavera Analytics that are geared towards Unifier data (example: Cash Flow) and certain subject areas geared towards P6 (example: BurnDown).  If you are running an ETL with only P6 or only Unifier, the subject areas specific to the other product may be empty; this is OK. There are some dimensions that are common, such as Project.  When common dimensions are used, adding a field from that dimension (Project) in one of these other subject areas (Primavera - Activity) will return data, but once a fact is added to the analysis the data will be filtered out based on the joins in the RPD.

If you have an existing Primavera Data Warehouse installation with P6 as your data source and you would like to add a Unifier source, the steps are similar to adding any additional data source.  Besides some different options in the configuration utility, there is really no difference in the ETL or the storage of the data.  By keeping each installation to its own data source you can have any combination of installation types or orders of installation (example: DS1 = P6, DS2 = Unifier, DS3 = P6, etc.).

When P6 and Unifier share common data (Project Id), the data can be combined across subject areas, aggregated, or grouped together. This gives you the ability to combine P6 and Unifier data in certain analyses.

Tuesday Aug 25, 2015

Link Between Slowly Changing Dimensions and Activity (Daily) History

Slowly Changing Dimensions for specific projects are activated in P6 by setting the History Level to Activity.  Setting the History Level to Activity also triggers a daily recording of each activity for that project in the historical tables.  This can cause significant database growth depending on how many projects have activity level history activated.

In the Primavera Data Warehouse, there are certain ETL steps where all contents of a folder are executed; if a script is removed from that folder it is not executed. If you are interested in using slowly changing dimensions to capture changes on a specific project but have no plans to use activity level history, you can remove the merge_activity_history.sql script from the \staretl\scripts\hist_merge folder.  If you do so, no activity history will be captured and written to w_activity_history_f.  The script can be added back in; rerunning config or runSubstitution will also restore it from the \etl\oracle\templates\hist_merge folder.

Wednesday Aug 12, 2015

P6 Global Security Service... More Info, How to Run a 'Full'

If you are using the Primavera Data Warehouse (previously the P6 Reporting Database) and pulling data from P6, then the Global and Project publication services are your lifeline for data. If those services are not running, your data in Primavera Data Warehouse\Analytics will be out of date. If you encounter an issue where your Security service seems to be failing repeatedly, which can be caused by bad data (such as duplicate rows in the USERS table), it may be appropriate to force a 'Full.'

By forcing a 'Full' update you are essentially telling the global services that security has never been run and to calculate everything again. The extended schema is all based on update dates: it compares rows in the related tables with entries in the SETTINGS table.  To make it recalculate everything, you remove the related rows from SETTINGS.  Be very careful here: removing all rows from SETTINGS would cause a system wide Full, which may take many hours to run, consume server resources, and could impact P6.  Delete only what is necessary.
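
Before forcing a Full, it can be worth checking for the kind of bad data mentioned above. A hedged example that looks for duplicate user names (assuming the P6 USERS table with a USER_NAME column):

    -- Connect as ADMUSER; flags any user_name appearing more than once.
    SELECT user_name, COUNT(*)
      FROM users
     GROUP BY user_name
    HAVING COUNT(*) > 1;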

In this example we are going to do a 'Full'  for security.  Here are the steps:

1- Connect as ADMUSER and run the following: 

delete from SETTINGS where NAMESPACE in (
    'PxService.ProfilePrivilege.PxProfilePrivilege.Task',
    'PxService.User.PxUser.Task',
    'PxService.User.adminusersecurity.Sql.Task',
    'PxService.User.baseline_security.Sql.Task',
    'PxService.User.users.Sql.Task',
    'PxService.User.usersecurity.Sql.Task',
    'PxService.UserObs.userobs.Sql.Task');

commit;

2- Run the global service for Security.  Be prepared: this will take much longer than the usual daily run, as it will be recalculating all security.

After the service has completed, run your ETL process. This way you will have all the latest data on both the extended schema and the data warehouse\Analytics side.

 

 

Tuesday Jun 09, 2015

Upgrading Older Analytics\Data Warehouse Versions

This blog will discuss upgrading from P6 Reporting Database version 3.0 all the way to Primavera Data Warehouse 15.1.   


There are several releases between 3.0 and 15.1 (3.1, 3.2, 3.3).  Generally, upgrades are supported for up to two releases back. There were a few major changes along the way which make intermediate upgrades necessary if you are trying to come directly from 3.0 to 15.1.

In version 3.1, Slowly Changing Dimensions were added.  In version 3.2 SP1 the type of partitioning switched from range to interval, and partitioning was also applied to the Slowly Changing Dimension tables.  In 3.4 we introduced a new upgradeSchema path, which makes it possible to upgrade just the STAR schema with the necessary alterations and then let the normal ETL run as scheduled.  This is in place of upgrade.bat or .sh, which does full table backups and reinserts of history and other data; that upgrade process generally takes longer than a normal ETL run. In 15.1, Unifier was introduced as a source for the data warehouse.

The ideal upgrade path if coming from 3.0 to 15.1 is:
3.0 --> 3.2 (to cover slowly changing dimension infrastructure)
3.2 --> 3.3 (to cover partitioning changes)
3.3 --> 15.1 (the easiest upgrade is the upgradeSchema method.  There are two scripts depending on the version you are coming from, upgradeSchema33 and upgradeSchema34; on this path, run upgradeSchema33)
