
Oracle Customer Engineering & Advocacy Lab (CEAL) Blog covers BI Tech, EPM, BI Cloud and EPM Cloud.

Recent Posts

EPM

A Simple Guide to HFM Java API

Oracle EPM 11.1.2.4 has been available for quite a while now. A lot of customers have adopted this release and are starting to look into automating common tasks to further improve the user experience and lower maintenance effort. Using the Java API, common HFM tasks can be automated easily. This post provides the first steps to log into HFM and run a Consolidation using the Java API.

In an earlier blog post, I described the 11.1.2.4 HFM architecture. Some may remember that one of the changes was in regards to HFM components (e.g. HFM Web, HFM application server) communicating with each other. In previous releases Microsoft DCOM was used, which was replaced by a more common communication system that can be used across operating systems. This is the reason why the old Visual Basic or Visual Basic Script programs do not work anymore. Oracle published a “Developer and Customization Guide” as part of the “Oracle Enterprise Performance Management System Documentation”. While this documentation even describes how to set up Oracle JDeveloper to get started with the HFM Java API, this post will just cover a small example as well as the information on how to set up the environment to run HFM API programs without setting up JDeveloper on the server. The full example can be downloaded here.

Let's look into the different sections of the code, ignoring the import section at the top. After reading the System Properties into variables, a verification is done to check if all required Java System properties and command line options exist. Otherwise the program exits after showing usage instructions.
public class runConsol {
    // Read System properties into variables
    static String EPM_ORACLE_INSTANCE = System.getProperty("EPM_ORACLE_INSTANCE");
    static String HFM_CLUSTER = System.getProperty("HFM_CLUSTER");
    static String HFM_APPLICATION = System.getProperty("HFM_APPLICATION");
    static private String ssoToken;
    static private SessionOM sessionOM;

    public static void main(String[] args) throws HFMException {
        // Check if all System properties and arguments are set
        if (args.length != 4 || EPM_ORACLE_INSTANCE == null || HFM_CLUSTER == null || HFM_APPLICATION == null
                || (!args[2].equals("1") && !args[2].equals("2") && !args[2].equals("3"))) {
            System.out.println("Usage: java -DEPM_ORACLE_INSTANCE=<pathToOracleEPMInstance> -DHFM_CLUSTER=<HFMClustername> -DHFM_APPLICATION=<HFMApplicationname> runConsol <USER> <PASSWORD> <ConsolidationType> <POV>");
            System.out.println("  With <ConsolidationType>: 1 - Consolidate (Impacted)  2 - Consolidate All with Data  3 - Consolidate All");
            System.exit(0);
        }

User name, password, the consolidation type and the point of view, which are command line arguments, are stored in variables:

        String username = args[0];
        String password = args[1];
        int consolidationType = Integer.parseInt(args[2]);
        WEBOMDATAGRIDTASKMASKENUM consolidationTypeString = WEBOMDATAGRIDTASKMASKENUM.WEBOM_DATAGRID_TASK_CONSOLIDATE;
        String pov = args[3];
        switch (consolidationType) {
            case 1: consolidationTypeString = WEBOMDATAGRIDTASKMASKENUM.WEBOM_DATAGRID_TASK_CONSOLIDATE;
                    break;
            case 2: consolidationTypeString = WEBOMDATAGRIDTASKMASKENUM.WEBOM_DATAGRID_TASK_CONSOLIDATEALLWITHDATA;
                    break;
            case 3: consolidationTypeString = WEBOMDATAGRIDTASKMASKENUM.WEBOM_DATAGRID_TASK_CONSOLIDATEALL;
                    break;
        }

Afterwards an Oracle EPM session is created:

        try {
            Map context = new HashMap(5);
            CSSSystem system = CSSSystem.getInstance(context, EPM_ORACLE_INSTANCE);
            CSSAPIIF css = system.getCSSAPI();
            CSSUserIF user = css.authenticate(context, username, password);
            ssoToken = user.getToken();
        } catch (CSSException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }

This is followed by the creation of a session in HFM:

        try {
            sessionOM = new SessionOM();
            SessionInfo session = sessionOM.createSession(ssoToken, Locale.ENGLISH, HFM_CLUSTER, HFM_APPLICATION);

And finally the launch of the Consolidation:

            DataOM dataOM = new DataOM(session);
            List<String> povs = new ArrayList<String>(1);
            povs.add(pov);
            System.out.println("Running consolidation with POV: " + pov);
            ServerTaskInfo taskInfo = dataOM.executeServerTask(consolidationTypeString, povs);

Obviously, we need to close our session after we are done:

            sessionOM.closeSession(session);
        } catch (HFMException e) {
            System.out.println(e.getMessage());
        }

Now we have a rough idea what the code does, but how can we run the program? Before even running the Java compiler, the environment needs to be set up, so the compiler as well as the Java Virtual Machine can find all required libraries and environmental information. Here are scripts for Unix and Windows environments, which can be used to set up the environment. Download the script to set up the environment on UNIX. Download the script to set up the environment on Windows. Make sure you change the ORACLE_MIDDLEWARE_HOME variable in the first line according to the path in your environment.

After the environment is set up, Java bytecode can be generated by running the compiler. The syntax is slightly different on Windows and Unix.

Windows: javac -cp %EPM_JAVA_CLASSPATH% runConsol.java
UNIX:    javac -cp $EPM_JAVA_CLASSPATH runConsol.java

Now you can launch a Consolidation using the HFM Java API.

Windows: java -cp %EPM_JAVA_CLASSPATH% -DEPM_ORACLE_INSTANCE=<PATH_TO_EPM_ORACLE_INSTANCE> -DHFM_CLUSTER=<HFM_CLUSTER_NAME> -DHFM_APPLICATION=<HFM_APPLICATION_NAME> runConsol <USER> <PASSWORD> 1 "S#<SCENARIO>.Y#<YEAR>.P#<PERIOD>.E#<ENTITY>"

UNIX: java -cp $EPM_JAVA_CLASSPATH -DEPM_ORACLE_INSTANCE=<PATH_TO_EPM_ORACLE_INSTANCE> -DHFM_CLUSTER=<HFM_CLUSTER_NAME> -DHFM_APPLICATION=<HFM_APPLICATION_NAME> runConsol <USER> <PASSWORD> 1 "S#<SCENARIO>.Y#<YEAR>.P#<PERIOD>.E#<ENTITY>"

Make sure you run the above command on a single line without any line breaks. While the Consolidation is running, the progress can be followed in the application's “Running Tasks” screen.
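The POV argument passed to runConsol must follow the member-specification format shown above. As an illustration only (this hypothetical helper class is not part of the original example), the string can be assembled from its four parts like this:

```java
public class PovBuilder {
    // Assembles an HFM point-of-view string in the
    // "S#<SCENARIO>.Y#<YEAR>.P#<PERIOD>.E#<ENTITY>" format used above
    static String pov(String scenario, String year, String period, String entity) {
        return "S#" + scenario + ".Y#" + year + ".P#" + period + ".E#" + entity;
    }

    public static void main(String[] args) {
        // Member names below are examples only
        System.out.println(pov("Actual", "2017", "Jan", "TotalEntity"));
    }
}
```

For instance, pov("Actual", "2017", "Jan", "TotalEntity") produces "S#Actual.Y#2017.P#Jan.E#TotalEntity", which can then be passed as the fourth command line argument.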


System Security

SAML2.0 SSO for PBCS and Oracle Cloud

SSO for Oracle Cloud: Oracle Cloud uses the SAML 2.0 protocol to integrate internal and external users. Oracle Cloud doesn’t support all features of this standard. Oracle Cloud uses the SAML standard to enable secure, cross-domain communication between Oracle Cloud and other SAML-enabled Oracle systems, as well as a selected number of non-Oracle identity management systems located on-premises or in a different cloud.

Oracle Cloud SSO also includes a failback mechanism. If SSO becomes inoperable, then administrators can log in to their identity domains directly, bypassing the SSO identity provider. This is necessary to resolve problems.

SAML Identity Provider Requirements

To work with the service provider, which in this case is Oracle Cloud, an identity provider must support SAML 2.0. Oracle Cloud supports any SAML 2.0–compliant identity provider. The following identity providers have been certified with Oracle Cloud:

- Microsoft Active Directory Federation Services 2.0, 2.1, and 3.0
- Oracle Identity Federation 11gR1 and Oracle Access Manager and Identity Federation 11gR2
- Shibboleth 2.4.0

Below are references to document links on steps to configure SAML 2.0 SSO for PBCS:

Configuring Active Directory Federation Services 3.0 as an Identity Provider with Oracle Cloud as Service Provider:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/sharedidm/cloud_sso_idp_configuration/ADFS3.0/ADFS3.0__IdPConfig_CloudSP.html#overview

Configuring Active Directory Federation Services 2.0 as an Identity Provider with Oracle Cloud as Service Provider:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/sharedidm/cloud_sso_idp_configuration/ADFS2.0/ADFS2.0_IdPConfig_CloudSP.html#section5

Configuring Oracle Access Management Identity Federation 11gR2 PS2 as an Identity Provider with Oracle Cloud as Service Provider:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/sharedidm/cloud_sso_idp_configuration/OAM11gR2/OAMIF11gR2_IdPConfig_CloudSP.html

Configuring Azure AD Base Version as an Identity Provider with Oracle Planning and Budgeting Cloud Service:
https://community.oracle.com/docs/DOC-997863

Manage Users and Roles in Oracle Cloud:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/sharedidm/cloud_sso_idp_configuration/manage_user_roles/manage_users_roles.html#overview

After configuring and testing SSO, importing users and setting roles (videos):
https://blogs.oracle.com/emeapartnerbiepm/entry/pbcs_how_to_videos

Overview: Understanding Security and Roles in Planning and Budgeting Cloud:
https://www.youtube.com/watch?time_continue=17&v=-Q4VSCJksO0

Managing Users in Oracle Planning and Budgeting Cloud Service:
https://www.youtube.com/watch?v=RoCJVeFzIV0

Predefined Roles in Oracle Planning and Budgeting Cloud Service:
https://www.youtube.com/watch?v=uPOB1vT-TZc

Creating Users and Assigning Roles in Oracle Planning and Budgeting Cloud:
https://www.youtube.com/watch?v=K-zPWZGUKh4

Import a Batch of Users into Oracle Planning and Budgeting Cloud Service:
https://www.youtube.com/watch?v=xgWlHVuchmg

Below are references to document links with more info on SAML 2.0 SSO for Oracle Cloud:

Configuring Federated SSO and Authentication:
https://docs.oracle.com/en/cloud/paas/process-cloud/cprcw/configuring-federated-sso-and-authentication.html

Understanding Identity Concepts:
https://docs.oracle.com/en/cloud/get-started/subscriptions-cloud/ocuid/oracle-single-sign.html

Secure Your Oracle Cloud Applications with In-house Identity and Access Management:
https://blogs.oracle.com/the-cloud-front/entry/secure_your_oracle_cloud_applications

Getting Started with Oracle Cloud:
http://docs.oracle.com/en/cloud/get-started/subscriptions-cloud/csgsg/toc.htm


OBIEE

OBIEE 12c: Use HTTP Compression / Caching in OHS 12c for DV mobile

If you intend to use DV on mobile, it is recommended to install Oracle HTTP Server (OHS) 12c and implement the following HTTP compression / caching settings in the httpd.conf file, in order to improve the end user experience over slow mobile 4G/LTE networks (with high latency):

#un-comment or add this line
LoadModule deflate_module "${PRODUCT_HOME}/modules/mod_deflate.so"

<IfModule mod_deflate.c>
      SetOutputFilter DEFLATE
</IfModule>

<IfModule mod_expires.c>
      ExpiresActive On
      ExpiresByType image/gif "access plus 3 months"
      ExpiresByType image/jpeg "access plus 3 months"
      ExpiresByType application/x-javascript "access plus 3 months"
      ExpiresByType application/javascript "access plus 3 months"
      ExpiresByType text/css "access plus 3 months"
      ExpiresByType text/javascript "access plus 3 months"
      ExpiresByType text/js "access plus 3 months"
      ExpiresByType image/png "access plus 3 months"
      ExpiresByType application/x-shockwave-flash "access plus 3 months"
</IfModule>

<IfModule mod_headers.c>
<FilesMatch "\.(gif|jpeg|png|x-javascript|javascript|css|swf|js)$">
       Header set Cache-Control "max-age=7889231"
</FilesMatch>
</IfModule>

SendBufferSize 8123292
ReceiveBufferSize 8123292
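To get a feel for why SetOutputFilter DEFLATE helps on high-latency mobile links, the standalone sketch below (an illustration, not OHS code) compresses a block of repetitive HTML-like markup, the kind of content a dashboard page serves, with the same DEFLATE/gzip algorithm and prints the size before and after:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class DeflateDemo {
    // Gzip-compresses a byte array in memory (same DEFLATE algorithm mod_deflate uses)
    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Build some repetitive, HTML-like content (hypothetical markup)
        StringBuilder page = new StringBuilder();
        for (int i = 0; i < 500; i++) {
            page.append("<div class=\"dashboard-cell\">value ").append(i).append("</div>\n");
        }
        byte[] raw = page.toString().getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(raw);
        System.out.println("uncompressed=" + raw.length + " bytes, gzip=" + compressed.length + " bytes");
    }
}
```

Markup like this typically compresses to a small fraction of its original size, which is exactly the saving that matters most on slow, high-latency connections.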


OBIEE

OBIEE 12c: How to Configure OBIEE to Use Static Contents in Other Web Server

Note: In BIEE 11g the static files / styles location is the <ORACLE_HOME>/bifoundation/web/app/res directory, but in BIEE 12c the location is ../domains/bi/servers/obips1/tmp/earmanager/analytics/../res/

Below are the steps to configure Oracle HTTP Server as the location for static file resources:

1. Define a virtual path '/staticfiles' and point it to a physical path '/ohs/bi/bifoundation/web/appv2' by adding the below configuration to the httpd.conf file in the OHS instance:

   Alias /staticfiles /ohs/bi/bifoundation/web/appv2

2. Copy the OBIEE static files (images, script files, style sheets, etc.) from the OBIEE server to the other web server.

   Note: Run a Fiddler trace to get the dynamic folder name under /res; for example it can be "v-ZLH62aWed7Y". Once it is identified, create the "v-ZLH62aWed7Y" virtual directory as a physical directory under the 'res' directory present on the other web server, and move the static files into this directory. The dynamic path is necessary if you have an Akamai type of cache: after BI patching the dynamic folder name changes, so that a new URL will force the cache to be updated.

3. Restart OHS.

4. Add the below lines in instanceconfig.xml:

   <ServerInstance>
   ...
   <URL>
   <ForceAbsoluteResourceURL>true</ForceAbsoluteResourceURL>
   <ResourceServerPrefix>http://<OHS host name>:<OHS port></ResourceServerPrefix>
   <ResourceVirtualPath>/staticfiles</ResourceVirtualPath>
   </URL>
   ...
   </ServerInstance>

5. Save the file and restart Presentation Services.


OBIEE

OBIEE Login Fails while searching users from a Large MSAD

Author: veera.raghavendra.rao@oracle.com

Problem: OBIEE login fails due to a timeout while searching and retrieving a user from the base DN when there are multiple sub trees and a large user store.

Oracle Support Doc ID: Using User Filter to Reduce the Number of Users Retrieved from Microsoft Active Directory (MSAD) (Doc ID 1227144.1)

Example: Users come from multiple sub trees like below:

CN=auuser1,OU=Users,OU=AU,DC=paceal,DC=oracle,DC=com
CN=causer1,OU=Users,OU=CA,DC=paceal,DC=oracle,DC=com
CN=cnuser1,OU=Users,OU=CN,DC=paceal,DC=oracle,DC=com
CN=fruser1,OU=Users,OU=FR,DC=paceal,DC=oracle,DC=com
CN=inuser1,OU=Users,OU=IN,DC=paceal,DC=oracle,DC=com
CN=jpuser1,OU=Users,OU=JP,DC=paceal,DC=oracle,DC=com
CN=nzuser1,OU=Users,OU=NZ,DC=paceal,DC=oracle,DC=com
CN=uaeuser1,OU=Users,OU=UAE,DC=paceal,DC=oracle,DC=com
CN=ukuser1,OU=Users,OU=UK,DC=paceal,DC=oracle,DC=com
CN=ususer1,OU=Users,OU=US,DC=paceal,DC=oracle,DC=com

Here, all the users are members of a group “AU Hyperion EPM Users”. The Distinguished Name of the group:

CN=AU Hyperion EPM Users,OU=Common,OU=Groups,OU=AU,DC=paceal,DC=oracle,DC=com

Problem Statement: The number of users who actually log in to OBIEE is limited (e.g. some 200 users) and they are scattered across multiple sub trees, but we are using the user base DN DC=paceal,DC=oracle,DC=com, where the server has to search 10,000+ users. The users might not be retrieved in time, i.e. within 120 seconds, and so the login fails. In this situation we can configure the Authentication Provider with a User Filter that retrieves only users who are members of a certain group.

User Base DN: DC=paceal,DC=oracle,DC=com

All Users Filter & User From Name Filter:
(&(objectCategory=Person)(objectClass=user)(memberOf=CN=AU Hyperion EPM Users,OU=Common,OU=Groups,OU=AU,DC=paceal,DC=oracle,DC=com))

The users who are members of the group are searched and retrieved immediately. This helps retrieve only the users that are required to log in to OBIEE, instead of searching the entire directory with (&(cn=%u)(objectClass=user))
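The group-membership filter above is plain string composition. As a hedged illustration (the helper class is hypothetical; the group DN matches the example directory above), it can be assembled like this:

```java
public class AdUserFilter {
    // Builds an LDAP filter that restricts user lookups to members of one AD group,
    // as recommended above, instead of searching the entire directory
    static String filterForGroup(String groupDn) {
        return "(&(objectCategory=Person)(objectClass=user)(memberOf=" + groupDn + "))";
    }

    public static void main(String[] args) {
        String groupDn = "CN=AU Hyperion EPM Users,OU=Common,OU=Groups,OU=AU,DC=paceal,DC=oracle,DC=com";
        System.out.println(filterForGroup(groupDn));
    }
}
```

Because AD resolves memberOf server-side, such a filter lets the directory return only the couple of hundred relevant users rather than the full 10,000+ user population.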


OBIEE

OBIEE 12c: Configuring Kerberos SSO for OBIEE 12c

Author: veera.raghavendra.rao@oracle.com

Problem Statement: The existing whitepaper to configure Kerberos SSO for OBIEE is not yet updated for OBIEE 12c (1274953.1). Many customers are raising lots of questions when deploying using DES and AES256 encryption. Additionally, since OBIEE 12c there have been some changes in the security configuration while enabling SSO, as the SSO Provider option disappeared in the OBIEE 12c UI, which is creating a lot of confusion.

Attached are the documents to configure Kerberos SSO for OBIEE 12c with RC4, AES128 and AES256 encryption. These documents answer the main issues that are most often met by customers and partners by providing solutions to the problem statement mentioned above.

Highlights:

Part1.pdf:
- Types of encryption supported by Active Directory domain levels
- Create a service account for the WebLogic Server
- Generate the keytab file using the ktpass command
- Run setspn with the short name of the WebLogic Server
- Configure Active Directory as Authentication Provider
- Adding virtualize=true

Part2.pdf:
- Assigning Active Directory users to Application Roles
- Testing the AD user login to OBIEE
- Replacing the JCE files for Java 8 to achieve unlimited strength for supporting AES256 encryption
- Configure WebLogic Server to communicate with the KDC server
- Configure the WebLogic login module
- Configure the Single Pass Negotiate (SPNEGO) Identity Assertion Provider
- Configure the BI app for SSO (edit the analytics.ear file for the 401 challenge)

Part3.pdf:
- Deploying the edited analytics.ear (with the Upgrade option)
- Configure the BI instance for SSO (no need to specify any SSO provider)
- Configure client machines for Kerberos SSO
- Configure Smart View to work with Kerberos SSO (1900485.1)
- Known issues in WebLogic 12.2.1.0 (shipped with OBIEE 12.2.1.0)
- Syntax differences if OBIEE is on AIX using the IBM JDK

Document Downloads:

Configuring Kerberos SSO for OBIEE 12c Part1.pdf (here)
Configuring Kerberos SSO for OBIEE 12c Part2.pdf (here)
Configuring Kerberos SSO for OBIEE 12c Part3.pdf (here)

Patch for OBIEE 12.2.1.0.0 and not for 12.2.1.1.0:
http://aru.us.oracle.com:8080/ARU/ViewPatchRequest/process_form?aru=20164970

If the patch cannot be downloaded from the above link, click here.


OBIEE

11g to 12c Migration Fails - "common.components.home" is not supplied

If you are getting the "oracle.security.jps.service.idstore.IdentityStoreException: JPS-02600: libOvd required system parameter of "common.components.home" is not supplied in the JVM command line" error while trying to create the 11g BAR file, please try adding the following parameter as JAVA_OPTIONS to the bi-migration command:

-Dcommon.components.home=$MW_HOME/oracle_common

For example:

java -jar -Dcommon.components.home=$MW_HOME/oracle_common $MW_HOME/Oracle_BI1/bi-migration-tool.jar out $MW_HOME/Oracle_BI1 $MW_HOME/user_projects/domains/bifoundation_domain /tmp/DEV_11gto12cexport.jar

Error Message:

2016-09-07 15:48:16 SEVERE  Export failed during execution of plugin: oracle.bi.publisher Due to: Execption in exportPassWords  <oracle.bi.migration.orchestration.BundleExporter doExport>
2016-09-07 15:48:16 SEVERE  Problem during export: oracle.bi.migration.exporter.ExportProblemException: Execption in exportPassWords  <oracle.bi.migration.tool.MigrationTool alertUser>
2016-09-07 15:48:16 SEVERE  Export failed: Execption in exportPassWords
oracle.bi.migration.exporter.ExportProblemException: Execption in exportPassWords
    at oracle.xdo.tools.upgrade.BIPMigrationPlugin.export(BIPMigrationPlugin.java:97)
    at oracle.bi.migration.api.adaptor.exporter.V1ExporterPluginAdaptor.export(V1ExporterPluginAdaptor.java:58)
    at oracle.bi.migration.orchestration.BundleExporter.doExport(BundleExporter.java:287)
    at oracle.bi.migration.orchestration.BundleExporter.doExport(BundleExporter.java:228)
    at oracle.bi.migration.orchestration.BundleExporter.runExport(BundleExporter.java:115)
    at oracle.bi.migration.orchestration.actionspecification.ExportAction.performAction(ExportAction.java:95)
    at oracle.bi.migration.orchestration.MigrationActionExecutor$1.run(MigrationActionExecutor.java:32)
    at java.lang.Thread.run(Thread.java:662)
    at oracle.bi.migration.orchestration.MigrationActionExecutor.executeMigrationAction(MigrationActionExecutor.java:52)
    at oracle.bi.migration.orchestration.MigrationConductor.performMigrationAction(MigrationConductor.java:120)
    at oracle.bi.migration.tool.MigrationTool.go(MigrationTool.java:195)
    at oracle.bi.migration.tool.MigrationTool.main(MigrationTool.java:251)
Caused by: oracle.security.jps.service.idstore.IdentityStoreException: JPS-02600: libOvd required system parameter of "common.components.home" is not supplied in the JVM command line
    at oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider$UseLibOvd.getInstance(LdapIdentityStoreProvider.java:466)
    at .........
    ... 11 more <oracle.bi.migration.tool.MigrationTool alertUser>
2016-09-07 15:48:16 SEVERE  Migration action failed <oracle.bi.migration.tool.MigrationTool alertUser>


EPM

EPM 11.1.2.x – Planning/PBCS Best Practices for BSO Business Rule Optimisation

1. Introduction

This document is intended to provide best practices for Business Rule Optimisation for Planning and PBCS models. It will cover items which can be edited within the script only, i.e. syntax. Essbase.cfg settings for on-premises Planning will not be discussed in this blog.

2. Environment Setting

Below are recommended Calculation Commands, which are the elements that instruct the business rule how to execute the calculations.

- SET UPDATECALC OFF turns off intelligent calculation, which is best practice for business rules which use cross dimensional operators and where there may be multiple users accessing the same data block combinations. Using SET UPDATECALC OFF is considered best practice for Planning/PBCS applications where multiple users write to the database. If intelligent calculation is used, ensure it is producing expected results.
- SET AGGMISSG OFF should be set for Planning/PBCS designs where versions are Standard Target or where non-leaf data regions are used and cannot be excluded during your calculation. If the Planning/PBCS design has versions set up as Standard Bottom Up, then data is loaded at level 0, where SET AGGMISSG ON will benefit.
- SET NOTICE and SET MSG SUMMARY should only be used in a development environment for individual calculation analysis. These calc commands should be removed once in production and/or after analysis is completed.
- SET EMPTYMEMBERSETS ON should be used when Run Time Prompts (RTPs) are included in FIX statements for Planning/PBCS models, so that empty sets are not calculated. This prevents a calculation from running on all members when the set is empty.

3. Optimisation/Performance

- Use templates in Calc Manager business rules to avoid repeating sections of code and make best use of RTPs.
- Review dependencies on dynamic calc members within each calculation. If possible, change the calculation to avoid including repeated use of dynamic calc, or remove the dynamic calc.
- Use FIX and IF to focus your calculation, to ensure only the data required is calculated.
- Avoid the creation of a 0 (zero) data result (unless you want to see a zero, e.g. inventory levels). This will create a block which will be included in all FIX/IF statements and will be calculated. Remove zeros on data load, if possible, or prevent their creation in business rules.
- Many rules have a check to see if, for example: IF (("Budget"==#missing) OR ("Budget"==0)). IF ("Budget"+1==1) or IF ("Budget"/"Budget"==#missing) will give the same check but avoids the use of Boolean logic within the IF statement.
- Where possible, perform dense calculations, which do not create blocks, before sparse calculations, which do create blocks.
- If you need to perform aggregations (e.g. to get a total for an allocation calculation), ensure that you only aggregate the section of the data that is required for that allocation calculation.
- Minimise the passes on a database where possible.
- Avoid recursive formulas. Excessive recursion can create environment performance issues as well as adding a requirement to clear values to produce consistent results.

4. FIX Statements

- FIX statements are used to focus the business rule, i.e. to keep the number of blocks being calculated as low as possible: only calculate blocks that are needed.
- FIX is recommended for sparse dimensions because, when used on sparse dimensions, it reduces the number of blocks that are required for the calc.
- Ensure calculations are done on level 0 of all dimensions when versions are bottom-up in Planning/PBCS models.
- Use outer FIX statements on sparse dimensions with inner IF statements on dense dimensions where possible.
- All FIX statements should include members from ALL dimensions, except dimensions that are within the calculation. If a dimension is excluded, all members from that dimension will be included, and it is likely that this is not required.
- Nest FIX statements where possible to reduce the number of passes on the database. Each full FIX requires a pass on the database. For example, use an outer FIX for Version, Scenario, and/or any other dimension selections that are static throughout the business rule.
- For Planning/PBCS business rules associated with web forms, leverage the selected page and POV members in FIX statements to reduce the number of blocks calculated.

5. IF Statements

- IF can be used in a member formula; FIX cannot.
- IF should be used within FIX statements to reduce the number of blocks that need to be accessed. IF brings all blocks within the FIX into memory.
- Use outer FIX statements on sparse dimensions with inner IF statements on dense dimensions where possible.
- Use ELSE instead of a combination of NOT and ELSEIF where possible, to avoid unnecessary analysis of member values during the calculation. However, if an ELSE is not required, it is not necessary.
- Order IF statements, if possible, so that the largest number of cases hits the first IF in the block. Use NOT within the IF to ensure this, if applicable. See the blog https://blogs.oracle.com/pa/entry/essbase_11_1_2_optimisation for more information on using NOT in IF statements.
- Review the calc member block choice. A sparse member without dynamic calc dependencies would be a better choice.

6. Only Calculate Blocks Required

- For Planning/PBCS models, use RTPs to ensure that only the data required is included in the business rule.
- Only aggregate/calculate data that is required at each stage of the calculation, to ensure you keep the number of blocks included in the calculation as low as possible for as long as possible.

7. Level of Calculations

- For Planning/PBCS models, ensure calculations are done on level 0 of all dimensions when versions are bottom-up.
- For Planning/PBCS models, aggregations should only be included in the BSO Plan Type if required for the approval process. Other aggregations should be moved to the ASO Plan Type.
- Try to keep the number of blocks included in your calculations as low as possible, for as long as possible.
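The environment-setting and outer-FIX / inner-IF guidance in sections 2, 4 and 5 can be pulled together in a short calc script sketch. This is an illustration only; the member, dimension and RTP names are hypothetical (Entity and Scenario sparse, Accounts and Period dense):

```
SET UPDATECALC OFF;
SET EMPTYMEMBERSETS ON;

/* Outer FIX on sparse selections that are static for the whole rule */
FIX ("Budget", "Working", @LEVMBRS("Entity", 0))
    /* Inner IF inside a dense member block: the IF only evaluates within
       blocks the FIX has already narrowed down */
    "Sales" (
        IF (@ISMBR("Jan"))
            "Sales" = "Sales"->"Actual" * 1.05;
        ENDIF
    )
ENDFIX
```

Keeping the sparse restriction in the FIX and the dense test in the IF means far fewer blocks are brought into memory than if the whole selection were expressed as IF conditions.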
8. Syntax Always use @LEVMBRSrather than @RELATIVEif used on the entire dimension. Use @CHILDRENinstead of @RELATIVE, if applicable. Use @REMOVE and @LEVMBRS if you only want to exclude some members from a FIX. 9. Block vs Cell Mode Using block mode, where cells are grouped within the block and simultaneously calculated, is generally faster but data dependencies must be carefully considered e.g. SalesYTD = CurMth + PriorMth would have to be calculated in cell mode so that each month is calculated in the order of the outline. Using cell mode, each cell is calculated sequentially in the order of the dense dimensions in the outline, is generally slower than block mode. Use @CALCMODE to manually control block vs. cell mode. Use debug mode application logs to verify calc mode. If a calculation is performed in block mode, no message will appear. A log message will be shown where calculations are performed in cell mode. 10. BottomUp vs TopDown Add calculation function @CALCMODE(BOTTOMUP); or calculation command SET FRMLBOTTOMUP to calculate existing blocks only (BOTTOMUP) instead of potential blocks (TOPDOWN). TOPDOWN calculations will calculate all potential data blocks with the member. For this reason, it is important that any irrelevant members within sparse dimensions are removed. Thoroughly test calculations using BOTTOMUP to ensure that blocks are created correctly when using @CALCMODE. Ensure testing is completed with clearing data and re-running the calculation to ensure all blocks are created correctly, especially when using BOTTOMUP. Use debug mode application logs to verify calcmode. If a calculation is performed BOTTOMUP, no message will appear. A log message will be shown where calculations are performed TOPDOWN. 11. Create Blocks Blocks, generally, will be created on the following actions: Data load DATACOPY Sparse calculations e.g. 
A sparse calculation is triggered:
- where sparse members are on the left of the "=", e.g. an AGG of a sparse dimension or SparseMember = X * X/X;
- where the formula is within a sparse calc member block, e.g. "Budget"("Sales" = "Sales"->"Actual" * 0.95;) where Scenario is sparse and Measures is dense.

Blocks can be created using the calculation commands SET CREATEBLOCKONEQ and SET CREATENONMISSINGBLK, or the calculation function @CREATEBLOCK. If these settings are required, use them very sparingly and within a tight FIX statement. Test whether their use can be avoided by changing the type of calculation being performed. Block creation is a design-related topic: where there is an issue, it is important to prove that it really is a block-creation issue before using these calculation commands or the calculation function. Submit a 0 into the target block and re-run the calc to prove a block-creation issue.

12. Aggregations

A sparse dimension aggregation should be ordered starting with the dimension that creates the fewest blocks and ending with the one that creates the most, in order to keep the number of blocks as low as possible for as long as possible.
- In Planning/PBCS models, end-user business rules should not aggregate entire sparse dimensions.
- In Planning/PBCS models, any aggregations required for reporting only should be moved to the ASO plan type.
- In Planning/PBCS models, only aggregate data that is required for the Planning approval process.

AGG vs CALC DIM calculation commands:
- CALC DIM will execute any member formula and will aggregate dense or sparse dimensions.
- AGG performs aggregations based on the outline structure, does NOT execute member formulas, and will only aggregate sparse dimensions.
Test both AGG and CALC DIM, as performance can differ depending on the levels of aggregation involved in the calculation. Exclude dimensions with dynamic calc on upper levels from all aggregations. Only aggregate data that is required.

13. SET CALCPARALLEL / FIXPARALLEL Calculation Commands

For Planning/PBCS models, i.e. multi-user applications with the potential for rules running concurrently, it is best practice for end-user business rules to be run in serial mode. Only use SET CALCPARALLEL around full sparse dimension aggregations in batch calculations. Parallel calculation is not recommended for small scripts (for example, less than 10 or 20 seconds) as the overhead of creating parallelism may outweigh the benefit. When used, always test SET CALCPARALLEL to ensure that it does give a benefit; sometimes serial calculation, or calculation with a lower degree of parallelism, gives better results. Test whether FIXPARALLEL would provide better results than SET CALCPARALLEL, and use Calc Manager debug mode to review the logs. Always consider user concurrency when using SET CALCPARALLEL or FIXPARALLEL. For more information on FIXPARALLEL, please refer to the documentation: http://docs.oracle.com/cd/E66975_01/doc.1221/essbase_tech_ref/fixparallel.html

14. Debug Methodology for Developing Business Rules

Always create a unit test, i.e. a small subset of data where you know the source data and the expected results and can easily follow the calculation through manually for this set of data. Always use Calc Manager debug mode, or the application logs, to view calculation entries to help with debugging during development. Ensure all data that is required is present (e.g. a total needed for allocations) and has been pre-calculated prior to the calculation taking place. If the script is long, start debugging at the top of the script and work down: an issue further up a script may be creating an issue further down. Debug section by section to ensure that all data is created correctly, and check that later sections of the script do not overwrite earlier sections. Use debug mode (or the application log) to ensure that each section of the script is calculated sequentially where required. Always clear data and reload (i.e. do not use a clear script) when testing any business rule, in order to ensure that all blocks are created successfully. Always test re-running a script to ensure that the syntax creates correct results for input data and/or populated data, and always test all data results with more than one data set. Where user input may change data sets (e.g. allocations), also test data changing from #missing to a value and from a value to #missing, to ensure that previously calculated results are removed (if required) in second and subsequent runs: e.g. if a user inputs against Products A, B, C in their first calculation run and then B, C, D (and not A) in their second, is the allocation result for Product A correctly removed in the second run?
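The aggregation-ordering advice in section 12 (fewest blocks first) can be sketched outside Essbase. A minimal Python illustration, using hypothetical block-count estimates per sparse dimension:

```python
def aggregation_order(dims_with_block_counts):
    """Order sparse dimensions for aggregation from the one that creates
    the fewest blocks to the one that creates the most, as recommended
    above. Block counts here are illustrative estimates, not values
    Essbase reports directly."""
    return [d for d, _ in sorted(dims_with_block_counts.items(),
                                 key=lambda kv: kv[1])]

# Hypothetical model: aggregate Cost Centre first, Entity last.
print(aggregation_order({"Entity": 50_000, "Product": 2_000, "Cost Centre": 400}))
# ['Cost Centre', 'Product', 'Entity']
```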

1. Introduction This document is intended to provide best practices for Business Rule Optimisation for Planning and PBCS models. It will cover items which can be edited within the script only i.e....

1. OBIEE

OBIEE 12c: Increasing the Node Manager JAVA Heap Size

In BI 12c, Node Manager monitors not only the WebLogic managed servers but also the BI system components. We noticed that when starting all the processes using start.sh, Node Manager sometimes takes a long time to start. To fix this, increase the Java heap size for Node Manager. The file "commBaseEnv.sh", which contains the Node Manager Java heap settings, is located at <ORACLE_HOME>/oracle_common/common/bin. Update the MEM_ARGS parameter from the default of "-Xms80m -Xmx200m" to higher values according to the memory available on the system:

...
else
  case $JAVA_VENDOR in
  Oracle)
    if [ "${VM_TYPE}" = "JRockit" ]; then
      JAVA_VM=-jrockit
      MEM_ARGS="-Xms128m -Xmx256m"
      UTILS_MEM_ARGS="-Xms32m -Xmx1024m"
    else
      JAVA_VM=-server
      MEM_ARGS="-Xms80m -Xmx200m"
      UTILS_MEM_ARGS="-Xms32m -Xmx1024m"
    fi
    VERIFY_NONE="-Xverify:none"
...

to

...
else
  case $JAVA_VENDOR in
  Oracle)
    if [ "${VM_TYPE}" = "JRockit" ]; then
      JAVA_VM=-jrockit
      MEM_ARGS="-Xms128m -Xmx256m"
      UTILS_MEM_ARGS="-Xms32m -Xmx1024m"
    else
      JAVA_VM=-server
      MEM_ARGS="-Xms256m -Xmx512m"
      UTILS_MEM_ARGS="-Xms32m -Xmx1024m"
    fi
    VERIFY_NONE="-Xverify:none"
...

Save the file and restart the services (using stop.sh and start.sh, located at <ORACLE_HOME>/user_projects/domains/bi/bitools/bin). If the BI processes still take a long time to start, please look at "Random Number Generator May Be Slow on Machines With Inadequate Entropy".
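The edit above is a single-string substitution, so it can also be scripted. A hedged Python sketch (a hypothetical helper, not an Oracle-supplied tool) that applies the change to the commBaseEnv.sh text:

```python
def bump_nodemanager_heap(script_text: str) -> str:
    """Replace the default non-JRockit Node Manager heap settings
    (-Xms80m -Xmx200m) with the larger values recommended in the post.
    Operates on the file's text; writing it back is left to the caller."""
    return script_text.replace('MEM_ARGS="-Xms80m -Xmx200m"',
                               'MEM_ARGS="-Xms256m -Xmx512m"')

sample = 'JAVA_VM=-server\nMEM_ARGS="-Xms80m -Xmx200m"'
print(bump_nodemanager_heap(sample))
```

Back up commBaseEnv.sh before editing it, whether by hand or with a script like this.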


2. EPM

EPM 11.1.2.x - Populating Today’s Date in Planning/PBCS with a Business Rule

In a previous CEAL blog, we discussed how to calculate dates in Planning/PBCS: https://blogs.oracle.com/pa/entry/calculations_using_dates_stored_in As discussed in that blog, for dates in Planning/PBCS, Essbase stores the date as a numerical value, e.g. a Start Date of 21st May 2015 would be stored as 20150521. In a non-Planning/PBCS Essbase database, date functionality exists using the number of seconds elapsed since midnight, January 1, 1970, i.e. a non-numerical format different from the way Planning/PBCS stores dates to display on web forms. It is possible, however, to use the Essbase date format and then convert that date into the Planning/PBCS format for use on a web form, in order to populate a Planning/PBCS date member with today's date, i.e. the system date.

So, as an example, say you want to add a date to a Planning/PBCS member "Start_Date" with a web form calculation, and that Start Date should be today's date. The requirement is that this should be a dynamic, system-generated date based on today's system date. One possible reason for this use case could be to dynamically add a non-user-input, read-only date against a dynamically added record, to audit when that record was created: e.g. when was a new contract added, when was a new customer added, etc.

Essbase dates are formed using day, month and year parts. Functionality exists within Essbase to get today's date (@TODAY) and to take a part of that date (@DATEPART, specifying day, month or year). Take 21st May 2015 as an example. This would be stored in Essbase with date parts of 21, 05 and 2015 for day, month and year respectively. The equivalent date in Planning/PBCS would be a single numerical value of 20150521. To convert the Essbase date parts to a Planning/PBCS numerical value, it is necessary to multiply the month and year parts up. This is the mathematics, using the same example of 21st May 2015:

Year = 2015 * 10000 = 20150000
Month = 05 * 100 = 500
Day = 21
20150000 + 500 + 21 = 20150521

i.e. the Planning/PBCS numerical format for the same date. You can do this in a business rule by holding the Essbase elements in variables until you write the result to a Planning/PBCS member. The following example shows how to take the system @TODAY date (in Essbase format) and convert it to the Planning/PBCS date format:

Var_Day = @DATEPART(@TODAY(), DP_DAY);
Var_Month = @DATEPART(@TODAY(), DP_MONTH) * 100;
Var_Year = @DATEPART(@TODAY(), DP_YEAR) * 10000;
Var_TodayDate = @sum(Var_Day, Var_Month, Var_Year);

To then write this to a Planning/PBCS date member:

Start_Date = Var_TodayDate;

So, as discussed above, if you wanted to add today's date against a new record, a date which is today's system date and cannot be input by the user, you could use a formula like this, using the syntax above to create that date:

@MEMBER(@HspNthMbr(@name(@descendants("AllMembers")), nextMember))->"Start_Date" = Var_TodayDate;
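The same day/month/year arithmetic can be verified outside a business rule. A small Python sketch of the conversion (this mirrors the math above; it is not the calc-script syntax itself):

```python
from datetime import date

def to_planning_date(d: date) -> int:
    """Combine day, month and year parts into the numeric YYYYMMDD value
    Planning/PBCS stores, mirroring the business-rule math:
    year * 10000 + month * 100 + day."""
    return d.year * 10_000 + d.month * 100 + d.day

print(to_planning_date(date(2015, 5, 21)))  # 20150521
```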


2. EPM

Zero Based Budgeting (ZBB) Considerations within Hyperion Planning

Zero based budgeting (ZBB) applications are becoming increasingly popular as a way to develop a budget, especially for lower-growth organizations that are interested in cutting costs. The most significant difference between ZBB applications and traditional budget applications is the level of detail captured. Where traditional budgeting applications plan an expense item directly or using a price x quantity formula, ZBB applications will plan every line item related to that expense. For example, when budgeting supply expenses, a ZBB application will include values for each detailed line item, including pencils, pens, paper clips, etc. Given the level of detail required for ZBB applications, careful consideration needs to be taken in order to have optimal performance within Hyperion Planning.

The following questions need to be considered before designing a ZBB application within Hyperion Planning:
- Does the additional ZBB detail require control over how the data is entered? If yes, then add a separate line item dimension. If no, then supporting detail functionality within Planning may be sufficient.
- Does the detail have a direct relationship to an account? If yes, then smart lists within the account dimension could be leveraged.
- Is the ZBB detail relevant for the current budget? The application should include only those line items needed for the current budget; remaining line items should be purged.
- Do all accounts require additional ZBB detail? If no, consider having a separate ZBB plan type to store line item information for the account subset.

As indicated above, ZBB applications tend to require a separate line item dimension for storing the additional detail required. In addition, ZBB applications use smart lists extensively to capture non-numeric driver values used to evaluate the proposed budget items. The following lists the design characteristics of a ZBB application within Hyperion Planning:
- ZBB applications require planning at line item detail. For non-ZBB applications, supporting detail is usually sufficient. ZBB applications, however, need more control over how the line item detail is entered, making a sparse Line Item dimension necessary.
- The number of line items for each account, which are known as packages in ZBB applications, will vary. The supplies account or package might have 5 line items while the travel account or package might have 10. As a result, the Account dimension, typically called the Package dimension in ZBB, will need to be sparse to account for the differences in the amount of line item detail for a particular sub-account or sub-package.
- ZBB applications typically store multiple drivers, both numeric and non-numeric, for a particular sub-package to provide sufficient justification for each budget line item. Since the bulk of the ZBB values are calculated using driver values, the drivers are separated from the sparse Package dimension and placed in a dense Driver dimension to ensure the driver calculations are dense calculations. The Driver dimension is also typically marked with the Accounts property member tag.
- The design of the ZBB application assumes that all sub-package calculations are self-contained and have no dependencies on other sub-packages. If the design does show sub-package dependencies, then the customer's definition of a sub-package does not follow ZBB best practices and should be reviewed.

As described above, ZBB applications, unlike traditional budgeting applications, tend to place accounts (known as packages in ZBB applications) and account drivers in separate dimensions, where the account members are stored in a sparse dimension and the account drivers are stored in a dense dimension marked with the Accounts property member tag. This design characteristic and the extensive use of smart lists to store non-numeric account driver values have the following performance impacts:
- The main impact of the ZBB design from a PBCS and Planning perspective is that the Driver dimension stores the drivers across all sub-packages, and the drivers used for a particular sub-package are typically unique. Therefore, the number of drivers populated for a particular sub-package will be a small subset of the total number of drivers. As a result, a given block of cells will always be sparsely populated for a ZBB application, and performance could suffer by having PBCS and Essbase process more data cells than is necessary.
- Another impact is on reporting ZBB data. The non-numeric drivers are stored in the Driver dimension as smart lists. A ZBB application can potentially have more than 50 smart lists, and each of these smart list items is typically queried in a standard or ad hoc report. If each smart list is mapped to a dimension in an ASO reporting database, the ASO database could potentially have more than 60 dimensions. However, smart list reporting is typically mutually exclusive: a particular report will typically contain members of only one smart list dimension, and users do not typically perform analyses across multiple smart list dimensions.

As described above, ZBB applications within Hyperion Planning will tend to have sparsely populated blocks as well as large numbers of smart lists that will be mapped to dimensions in an ASO reporting plan type. Careful consideration will need to be given when designing ZBB applications to minimize the impact of these potential performance items.


2. EPM

Financial Management 11.1.2.4

The new Oracle EPM System 11.1.2.4 was released recently and it has some great enhancements, especially for Financial Management. As one can see from the new Support Matrix (http://www.oracle.com/technetwork/middleware/bi-foundation/oracle-hyperion-epm-system-certific-131801.xls), Financial Management is now available on the Exalytics platform. Several changes to the architecture were needed to make this work. Here is an updated architecture diagram, which is valid for the Exalytics as well as the Windows platform:

Previously, Financial Management depended on DCOM technology, which is only available on Windows operating systems. Starting with 11.1.2.4, TCP/IP is used to communicate between Financial Management processes. As a result, there is no longer a need for Internet Information Server, which acted as a kind of translator between the Java-based web user interface and the backend Financial Management application server.

As one can see from the above diagram, the process names have also changed slightly. On Windows operating systems, the process previously known as HsxServer.exe can now be found as HyS9FinancialManagementJavaServer_INSTANCE_NAME.exe, and HsvDataSource.exe is now called XFMDataSource.exe. On Exalytics, the Java server can be found by searching for the string HsxServer; the data source process is known as xfmdatasource.

With the new Consolidation Administration web user interface, tuning is significantly simplified, no longer requiring, for example, the addition of long keys to the Windows Registry. Financial Management can now be tuned easily using Consolidation Administration in the web user interface. Going through the list of options and looking at the default values and the possible ranges shows that the defaults have been increased from previous releases; Financial Management will therefore utilize system resources better by default. Since servers nowadays are getting bigger and bigger, Financial Management is now able to scale up much better on big machines. Up to release 11.1.2.3, the maximum number of consolidation threads (configuration setting: NumConsolidationThreads) was eight. This can now be increased up to 24, which will utilize big servers like the Exalytics box much better. Have fun with the new release!


2. EPM

Essbase BSO Parallel Calculation and Calculator Cache

Essbase BSO parallel calculation utilizes calculator cache on a per-thread basis. This post shows the relationship between using parallel calculation and the calculator cache. For CALCPARALLEL and the calculator (calc) cache:
- CALCPARALLEL specifies the number of threads per calculation execution.
- The calc cache is memory used per thread.

If your calculations contain, for example, CALCPARALLEL 9 and 5 users run the script, then 9 threads * 5 users = 45 concurrent threads will be used (assuming 45 threads are available and the content of the calculation does not force serial calculation). If threads are not available, the calculations will wait until threads become available. If CALCCACHEHIGH is set to 2 MB (as an example), then every thread will use 2 MB, i.e. (9 threads * 5 users) = 45 concurrent threads * 2 MB calc cache per thread, meaning up to 90 MB of memory would be used for these 5 concurrent calculations. Although, in some circumstances, it may be possible to increase these settings, it is important to be aware of this memory usage and thread availability when considering the use of CALCPARALLEL and the calc cache, especially with applications that have high user concurrency. Testing should be performed to prove that the performance improvement outweighs the additional memory requirements. Please see the following documentation for more details on CALCPARALLEL and the calculator cache:
Calc Parallel: http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_tech_ref/set_calcparallel.html
Sizing the Calculator Cache: http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_db/frameset.htm?dstcache.html
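The memory arithmetic above is simple enough to capture in a helper. A minimal Python sketch of the sizing estimate (illustrative only; actual Essbase memory use depends on more than the calc cache):

```python
def calc_cache_memory_mb(threads_per_calc: int, concurrent_users: int,
                         cache_mb_per_thread: float) -> float:
    """Estimate total calculator-cache memory when each user's calculation
    runs with CALCPARALLEL threads and each thread holds its own calc
    cache, as described in the post."""
    return threads_per_calc * concurrent_users * cache_mb_per_thread

# The example from the post: CALCPARALLEL 9, 5 users, CALCCACHEHIGH 2 MB.
print(calc_cache_memory_mb(9, 5, 2))  # 90 MB
```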


1. OBIEE

OBIEE 11.1.1 - Tuning Guide Script v1

The OBIEE tuning guide available here https://blogs.oracle.com/proactivesupportEPM/entry/wp_obiee_tuning_guide lists a number of checks for optimization. The script attached here //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/861e8391c6334e32f4038fc42b47b8c7/checkobieetuningguidev4_01_2014.zip automates these checks. It only reads from the OBIEE server and will not modify any values. The script outputs the current values, which you can compare with the recommended ones.

Usage:
* Copy the Python script and tuningGuide.properties into a folder on the OBIEE server. This machine contains an oracle_common/common/bin/wlst.sh (.cmd).
* Modify tuningGuide.properties:
  WLUsername - WebLogic console administrator username
  WLPassword - WebLogic console administrator password
  WLstUrl - WebLogic WLST URL to connect to the WebLogic console. Example: t3://adminserver:7001
  httpdconf - Location of the Oracle HTTP Server httpd.conf file (if no OHS, leave the default). Example: /oracle/user_projects/epmsystem1/httpConfig/ohs/config/OHS/ohs_component/httpd.conf
  sslconf - Location of the Oracle HTTP Server SSL configuration file. Example: /oracle/user_projects/epmsystem1/httpConfig/ohs/config/OHS/ohs_component/ssl.conf
  mod_wl_ohsconf - Location of the Oracle HTTP Server WebLogic proxy plugin configuration file. Example: /oracle/user_projects/epmsystem1/httpConfig/ohs/config/OHS/ohs_component/mod_wl_ohs.conf
  tcp_fin_timeout - Unix TCP parameter. Example: /proc/sys/net/ipv4/tcp_fin_timeout
  tcp_max_syn_backlog - Unix TCP parameter. Example: /proc/sys/net/ipv4/tcp_max_syn_backlog
  limitsconf - Unix TCP parameter. Example: /etc/security/limits.conf
  solarissystem - /etc/system
  essbase_cfg - Location of the essbase.cfg file. Example: /oracle/biee/instances/instance1/Essbase/essbaseserver1/bin/essbase.cfg
  essbase_application_log - Location of the Essbase application log you want to check. Example: /oracle/biee/instances/instance1/diagnostics/logs/Essbase/essbaseserver1/essbase/app/BISAMPLE/BISAMPLE.log
  OHS or Essbase values are ignored if not found. Unix config values are ignored when a Windows server is detected.
* Launch the script from the folder containing the properties file. Open a shell:
  export ORACLE_HOME=/oracle/Middleware/Oracle_Home
  $ORACLE_HOME/oracle_common/common/bin/wlst.sh CheckObieeTuningGuidev4_01-2014.py
* Review the output


4. OBIA

Adding trim functions to the column expressions in ODI

Background and Issue Description

Product: Oracle Business Intelligence Applications (OBIA)
Release: 11.1.1.8.1 and above
OLTP Source: JDE 9.0/JDE 9.1

While working with a customer who was implementing OBIA 11.1.1.8.1 for the JDE 9.0 adaptor, it was noticed that some of the data warehouse columns had trailing spaces after the data was extracted from the JDE source system and loaded into the warehouse. On further analysis it was observed that the JDE system stores the data with padded spaces in the database. During the ETL process of OBIA, when the columns from the JDE DB are moved into the OBIA warehouse, the corresponding warehouse columns are loaded with trailing spaces.

The trailing spaces in warehouse columns can cause a potential issue when joining two warehouse tables, either in the ETL or in the RPD. For example, if W_INVENTORY_PRODUCT_DS.INVENTORY_ORG_ID was loaded from the JDE source without applying the trim function and W_INT_ORG_DS.ORG_ID was loaded from the JDE source after applying the trim function, and later these two tables are joined using the above-mentioned columns, the join will return no rows because of the appended spaces in W_INVENTORY_PRODUCT_DS.INVENTORY_ORG_ID. To avoid this and any other issues caused by padded spaces in warehouse columns, it is advisable to add a trim function in the ODI mappings while populating the OBIA warehouse.

Some warehouse columns that need trimming

While working with the customer, it was noticed that the following warehouse target columns needed the trim function applied. The trim function needs to be applied in the corresponding ODI maps on the source columns while populating these warehouse columns:

Table Name - Field Name
W_GL_OTHER_FS - JOURNAL_SOURCE_ID
W_AR_XACT_FS - RECEIPT_NUM
W_USER_D - FULL_NAME
W_INT_ORG_D_TL - ORG_NAME
W_DOMAIN_MEMBER_CODE_DS - DOMAIN_MEMBER_CODE
W_INVENTORY_PRODUCT_DS - INVENTORY_ORG_ID
W_INT_ORG_D - INTEGRATION_ID

Instruction overview:

The following steps give the details of how to apply a fix in the ODI maps to remove the padded spaces while populating a warehouse column. In the example below, LEDGER_ID is used for applying the trim function. A similar approach should be applied to all the necessary fields in the corresponding ODI maps. Note: before applying the fix, please refer to the OBIA suggested customization guidelines and follow them.

Solution for the example field LEDGER_ID:
1) Open the SDE JDE task folder SDE_JDE_APInvoiceLineFact in the custom adaptor folder and open the main interface SDE_JDE_APInvoiceLineFact.
2) To apply the TRIM function for the LEDGER_ID column, the original expression SQ_F0411.RPCO||'~'||'#AP_LEDGER_TYPE' should be modified to TRIM(SQ_F0411.RPCO)||'~'||'#AP_LEDGER_TYPE'. The "Execute On" should be set to 'Staging'.
3) Apply and save the interface.
4) For SDE_JDE_APInvoiceLineFact in the custom folder, go to Packages and generate the scenario using the option to generate the scenario as if all underlying objects are materialized. Set the version number to 002.

Conclusion: while working with OBIA warehouse columns populated from a JDE source system, if you notice that a warehouse column has padded spaces, apply the above workaround in your warehouse. A subsequent release of OBIA will handle the trimming of the columns that were not handled in this release.
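The join failure described above is easy to reproduce in miniature. A Python sketch with hypothetical rows, showing how an untrimmed column breaks an equality join and how trimming restores the match:

```python
def join_on_org_id(inventory_rows, org_rows, trim=False):
    """Join the two (hypothetical) warehouse tables on org id, optionally
    right-trimming both sides first, as the TRIM() fix in the ODI
    mapping would."""
    norm = (lambda s: s.rstrip()) if trim else (lambda s: s)
    org_ids = {norm(r["ORG_ID"]) for r in org_rows}
    return [r for r in inventory_rows if norm(r["INVENTORY_ORG_ID"]) in org_ids]

inv = [{"INVENTORY_ORG_ID": "M30   "}]   # padded, loaded without TRIM
orgs = [{"ORG_ID": "M30"}]               # loaded with TRIM applied
print(len(join_on_org_id(inv, orgs)))            # 0 - join returns no rows
print(len(join_on_org_id(inv, orgs, trim=True))) # 1 - join succeeds after trimming
```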


2. EPM

Using the @CURRMBR Function

Using the @CURRMBR Function

Essbase provides a suite of calculation functions to facilitate the definition and development of complex member formula calculations. The @CURRMBR calculation function is especially useful for scoping and managing complex calculations. The @CURRMBR(dimension) function returns the name of the member currently being calculated in the specified dimension. When using it, take into consideration whether the calculation is dense or sparse. The @CURRMBR function is expected to be slower when used on a dense dimension, since the function calculates at the block level, not the cell level. As a result, the function will calculate all of the dense dimension members, even if a particular dense dimension member is not present in the query. Conversely, performance is not an issue when using @CURRMBR on a sparse dimension, since the block corresponds to only one sparse dimension member.

In addition, @CURRMBR on a dense dimension could produce unexpected results or errors when used in conjunction with other functions such as @CONCATENATE. For example, if a dynamically calculated member in a query contains the formula "Actual"->@MEMBER(@CONCATENATE(@NAME(@CURRMBR("Account")), "_Total")) and the Account dimension is sparse, the query will run without error as long as the account in the query generates a valid outline member. However, if the Account dimension is dense, the query will result in the following error, even if the account in the query generates a valid outline member:

Error executing formula for [member name] (line 0): attempt to cross a null @ member in function [@X]

The error is produced because @CURRMBR calculates at the block level and therefore calculates on all accounts in the dense block. Not all accounts within the block will generate a valid outline member, resulting in the error message above. For further information on the @CURRMBR function, see http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_tech_ref/frameset.htm?currmbr.html.


2. EPM

Essbase Outline Rebuilding vs. Migrating After Major Release Upgrade

The 11.1.2.3 Essbase Readme document has been updated to include a change in recommendation with regard to rebuilding versus migrating Essbase outlines between major release upgrades. This represents a change in the upgrade process to ensure outlines take advantage of any new structures, features and functionality introduced in the major release. After a major release upgrade, be sure to rebuild all outlines on the new release to eliminate any potential issue where the outline may not be compatible with, or properly take advantage of, any new functionality. OPatch upgrades are not considered a major release and therefore do not need to adhere to the new guidelines; the traditional outline upgrade or migration process can continue to be used for OPatch upgrades.

If moving between installable versions, you need to rebuild outlines (11.1.2.1.000 --> 11.1.2.2.000 --> 11.1.2.3.000 --> ...).
If applying an OPatch, there is no need to rebuild outlines (11.1.2.3.500 --> 11.1.2.3.501 --> 11.1.2.3.502 --> ...).

Here is the statement added to the Essbase 11.1.2.3 Readme document: "Recommendation: When upgrading to a version of Essbase in which you use the Oracle Hyperion Enterprise Performance Management System Installer and Oracle Hyperion Enterprise Performance Management System Configurator, as in this release, database outlines that were created in any of the previous Essbase releases listed in this topic should be recreated in this release to take advantage of any new structures, features, and functionality." http://docs.oracle.com/cd/E40248_01/epm.1112/readme/essbase_11123000_readme.html
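One way to encode the rebuild-versus-OPatch distinction in an upgrade checklist script, assuming (as in the examples above) that an OPatch changes only the fifth version segment while an installable release changes one of the first four:

```python
def needs_outline_rebuild(from_ver: str, to_ver: str) -> bool:
    """Per the readme guidance sketched above: rebuild when moving between
    installable releases (first four version segments differ); an OPatch
    that only changes the final segment does not require a rebuild."""
    return from_ver.split(".")[:4] != to_ver.split(".")[:4]

print(needs_outline_rebuild("11.1.2.1.000", "11.1.2.2.000"))  # True  - rebuild
print(needs_outline_rebuild("11.1.2.3.500", "11.1.2.3.501"))  # False - OPatch only
```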


2. EPM

Error Connecting to Planning Application from Smartview

When the OU of a user is changed in the external provider, users can potentially have issues connecting to Planning applications from Smartview, even though they can still log into Planning itself. Below are the error messages found in Smartview, Essbase and Shared Services when the OU of a user is changed in the external provider and not synced with Essbase security:

- Planning user connecting from Smartview: "Cannot open cube view. Essbase error 1054060. Essbase failed to select application <appname> because user is not completely provisioned by Planning"
- Essbase application log: [Tue Mar 15 12:16:22 2011]Local/ESSBASE0///Error(1054060) Essbase failed to select application <application name> because <user@provider> Directory is not completely provisioned by Planning
- SharedServices_Security_Client log: [....@Failed to get user with identity @ ldap://GUID=xxxxxxxxxxxxxx?USER. User not available in configured user directories.... ]

When the OU of a user is changed in the external provider, perform the following steps to sync the external provider changes to Essbase security:
1. Export the Essbase security from the Essbase Administration Services (EAS) Console; the export can be found under \Oracle\Middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\bin.
2. In the exported security file, locate each OU user that is having Smartview login issues.
3. Perform the following steps to remove the impacted users from Essbase security and then refresh from Planning:
   - Log in to MaxL as an admin user.
   - Execute: display user <username>; (each username can be found in the security dump file; the display user command gives the full user name with provider, <username@provider>, needed for the drop command)
   - Execute: drop user <username@provider> from security_file;
   - Run another security dump to be sure the users were removed from Essbase.
   - Run a Planning security refresh to sync the users back to Essbase.

This should correct the Smartview connection issue to the Planning application for each user.
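With many affected users, the MaxL statements in step 3 can be generated from the list pulled out of the security dump. A hedged Python sketch (a hypothetical helper that only builds the statement text; running them in a real MaxL session as an admin is still up to you):

```python
def maxl_resync_commands(users):
    """Build the display/drop MaxL statements from the steps above for a
    list of '<username@provider>' entries taken from the security dump."""
    cmds = []
    for u in users:
        cmds.append(f"display user {u.split('@')[0]};")
        cmds.append(f"drop user {u} from security_file;")
    return cmds

# 'jdoe@Corp_LDAP' is a made-up example entry.
for c in maxl_resync_commands(["jdoe@Corp_LDAP"]):
    print(c)
```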


1. OBIEE

BI Web service security client - authentication operations for 11.1.1.6 and 11.1.1.7.x

This WLST/Python script connects to the BI security web service to authenticate a user specified in the "ObieeWebServiceClient.properties" file. In 11.1.1.7, it calls the getAuthenticatedUserWithLanguageAndProperties operation of the /bisecurity/service web service. In 11.1.1.6, it calls the authenticate operation of the /bimiddleware/security/service web service. The XML response includes the user's unique identifier (GUID) and the permissions/roles for the user, which is useful when debugging an issue with OBIEE security. The script has to be run with %ORACLE_HOME%\oracle_common\common\bin\wlst.cmd (sh). The script is available here: //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/0dc22ee1558e9aeb90fa86a0f52d84fa/obieewebservicesecurityclient.zip

Example:
SET ORACLE_HOME="C:\Oracle\Middleware\Oracle_Home"
call %ORACLE_HOME%\oracle_common\common\bin\wlst.cmd ObieeWebServiceClient.py > ObieeWebServiceClient.log

Results with 11.1.1.7.1:
** getting obiee version **
Connecting to t3://server.mycompany.com:7001 with userid weblogic ...
OBIEE version: 11.1.1.7.0
** getting system.user **
System user used for connection to bi web service: BISystemUser
** Calling bi web service for authentication **
Calling url: http://server.mycompany.com:9704/bisecurity/service with user:test...
<result><GUID><value>8418E120AF5911E38F3585A97A3473DB</value></GUID><name>test</name><description></description><displayName>test</displayName><properties><name></name></properties><language>en</language><timezone></timezone><tenantGUID xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/><roles><GUID><value>AuthenticatedUser</value></GUID><name>AuthenticatedUser</name><description></description><displayName>AuthenticatedUser</displayName></roles><roles><GUID><value>BIConsumer</value></GUID><name>BIConsumer</name><description></description><displayName></displayName></roles><permissions><resourceName>AtAGlance</resourceName><resourceType>rtd_dc_persp</resourceType><actions>dc_perspective</actions></permissions><permissions><resourceName>/</resourceName><resourceType>oracle.fr</resou...
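When debugging, the interesting pieces of that response are the GUID and the role names. A Python sketch that pulls them out of a simplified response (the element structure is assumed from the sample output above, not from a published schema):

```python
import xml.etree.ElementTree as ET

def parse_auth_result(xml_text: str):
    """Extract the user GUID and role names from a (simplified) response
    of the BI security web service's authentication call."""
    root = ET.fromstring(xml_text)
    guid = root.findtext("./GUID/value")
    roles = [r.findtext("name") for r in root.findall("roles")]
    return guid, roles

sample = ("<result><GUID><value>8418E120AF5911E38F3585A97A3473DB</value></GUID>"
          "<name>test</name>"
          "<roles><GUID><value>BIConsumer</value></GUID><name>BIConsumer</name></roles>"
          "</result>")
print(parse_auth_result(sample))
```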


1. OBIEE

OBIEE 11.1.1 - Advanced Usage of nqcmd command

The following are useful nqcmd command line arguments:
a. In your environment, set "export SA_NQCMD_ADVANCED=yes".
b. Description of the nqcmd command line arguments for BI Server (nqsserver) testing:
-d = data source
-u = username
-p = password
-s = input SQL file
-o = output file; counters will be in outputfile_Counters.txt
-td <secs> = time duration in seconds; nqcmd will run until <secs> elapses
-qsel r = select queries in random order from the input SQL file
-ds <secs> = dump statistics to outputfile_Counters.txt every <secs> seconds
-T = timer on; otherwise you won't get correct statistics
-t <number> = number of users to generate; if you give -t 50, there will be 50 users
-q = turn off row output (mandatory flag for load testing)
-n <number> = used for login tests; run for <number> iterations
-w = think time (in seconds)

For example, the following command runs 50 users for 200 seconds, uses logical.sql as the input file and fires queries randomly (-qsel r). It dumps statistics to stats_sessions_Counters.txt every 20 seconds (nqcmd for load testing will append _Counters.txt to the filename given by the -o option):

./nqcmd -d AnalyticsWeb -u weblogic -p welcome1 -s logical.sql -o stats_sessions -td 200 -qsel r -ds 20 -q -T -t 50 -utf16

Sample output of nqcmd and its interpretation: TotalSqls/TimeInSecs gives SQLs/sec; CumulativeRT/TotalSqls gives the average response time. For example:

nQcmd Testing
TimeStamp   TotalSqls   CumulativeRT   CumulativePrepareTime   CumulativeExecuteTime   CumulativeFetchTime
00:00:20    62          84.88          19.40                   17.55                   47.93
00:00:40    124         181.07         44.17                   38.66                   98.24
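The two derived metrics described above can be computed directly from the _Counters.txt columns. A small Python sketch, applied to the last sample row:

```python
def throughput_and_avg_rt(total_sqls: int, elapsed_secs: float,
                          cumulative_rt: float):
    """Derive SQLs/sec (TotalSqls / elapsed seconds) and average response
    time (CumulativeRT / TotalSqls) from an nqcmd counters row."""
    return total_sqls / elapsed_secs, cumulative_rt / total_sqls

# Last sample row above: 124 SQLs in 40 s, cumulative RT 181.07 s.
sqls_per_sec, avg_rt = throughput_and_avg_rt(124, 40, 181.07)
print(round(sqls_per_sec, 2), round(avg_rt, 2))  # 3.1 1.46
```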

1. OBIEE

BI Checks with WLST - Enable debug, login and parse logs - Part 3

This third post continues with the BIEE 11.1.1.7 Security checks using WLST and Python scripting. This time it's all about:

* Checking and displaying whether the Virtualize=true property is added in the identity store, and comparing with the number of authentication providers. You can use the virtualize_add_remove.py script to change this automatically, but a restart is needed if it is changed.
* Enabling debug in the target server (e.g. bi_server1) for WebLogic security authentication and authorization
* Setting the log level to TRACE:32 for the oracle.bi.security and oracle.ods components in the FMW Enterprise Manager Console
* Testing Analytics login with both an external LDAP user and the weblogic user, using a direct connection to the WebLogic server where the target managed server is running, as well as a specified FQDN analytics URL
* On login failure, collecting the logs with Errors, Exceptions and Incident Errors from the relevant log files on the target managed server

To run the scripts:

- Download the files here: https://blogs.oracle.com/pa/resource/BICheckWithWLST-Part3.zip
- Edit automation.properties and change the parameters. You need the following extra parameters, in addition to those in the Part 1 automation.properties file:

VirtualizeValue true              # adds Virtualize=true
enableTRACE true                  # sets LogLevel to TRACE:32
enableDEBUG true                  # enables Debug for atn and atz
externalLDAPusername weblogic     # user defined in weblogic
externalLDAPuserpassword password

NOTE: This assumes that an external LDAP provider is configured (hence the requirement to have Virtualize=true). If no external LDAP provider is configured, use the weblogic user details in the properties file for the externalLDAPusername and externalLDAPuserpassword parameters.
NOTE: VirtualizeValue true adds the Virtualize=true property; VirtualizeValue false removes it.
NOTE: enableDEBUG true enables debug for atn and atz; enableDEBUG false disables it.
NOTE: enableTRACE true sets the log level to TRACE:32 for oracle.bi.security and oracle.ods; enableTRACE false sets it back to NOTIFICATION:1.

**********************************
automation.properties
**********************************
WLUsername weblogic
WLPassword weblogicpassword
WLServer wlserver.yourcompany.com
WLPort 7001
WLScheme t3
WLUrl t3://wlserver.yourcompany.com:7001
TARserver bi_server1
FQDNExtension .yourcompany.com
ObieeAnalytocsExternalLoadBalancerUrl http://wlserver.yourcompany.com/analytics
obieeServerScheme http
HttpProxyHost yourproxyserver
HttpProxyPort yourproxyserverport
VirtualizeValue true
enableTRACE true
enableDEBUG true
externalLDAPusername externalLDAPusername
externalLDAPuserpassword externalLDAPuserpassword
**********************************

- In a shell, set MIDDLEWARE_HOME and then launch all four scripts in sequence:

%MIDDLEWARE_HOME%\oracle_common\common\bin\wlst.cmd (or .sh) virtualize_display.py
# Optional and requires a manual restart
# %MIDDLEWARE_HOME%\oracle_common\common\bin\wlst.cmd (or .sh) virtualize_add_remove.py
%MIDDLEWARE_HOME%\oracle_common\common\bin\wlst.cmd (or .sh) enable_debug_wls_security.py
%MIDDLEWARE_HOME%\oracle_common\common\bin\wlst.cmd (or .sh) set_loglevel_bisecurity_ods.py
%MIDDLEWARE_HOME%\oracle_common\common\bin\wlst.cmd (or .sh) loginAnalytics_getLogs.py

You can also use the provided script (TestLoginAnalytics_n_CollectLogs.cmd) after changing the MIDDLEWARE_HOME variable in it.

* loginAnalytics_getLogs.py will try to log in to analytics, and it will generate a compiled log containing the errors from the bi_serverX.log, bi_serverX-diagnostics.log and bi_serverX.out files on the bi_serverX machine. This compiled log is named Errors_n_Exceptions.log.

- After all scripts have been run, revert your changes.
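The scripts in this series read a simple "key value" properties format like the automation.properties sample above. A minimal sketch of such a reader, assuming one whitespace-separated pair per line with optional # comments (the real WLST scripts may parse it differently):

```python
# Minimal sketch of reading an automation.properties-style file:
# one "key value" pair per line, '#' starts a comment, blank lines ignored.
# (Assumption: this mirrors, but is not taken from, the downloaded scripts.)
def read_properties(lines):
    props = {}
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)  # split on the first run of whitespace
        key = parts[0]
        value = parts[1].strip() if len(parts) > 1 else ""
        props[key] = value
    return props

# Parameter names taken from the sample automation.properties above.
sample = [
    "WLUsername weblogic",
    "VirtualizeValue true   # adds Virtualize=true",
    "",
    "TARserver bi_server1",
]
props = read_properties(sample)
```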

1. OBIEE

BI Check with WLST - SQL Datasources part 2

This second post continues with the BIEE security checks using WLST and Python. This time, the following information is gathered about BISQLGroupProvider:

* looking for SQLGroupProvider in the configured providers
* retrieving the SQL queries
* retrieving the JDBC source details
* testing the connection pool, for each server that uses it (make sure your BI server is up and running)
* testing the SQL queries directly against the database, with the test.user and test.group provided in the automation.properties file

To run the script:

- Download the files here: //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/7e5a142d4c03d650a0593804455afe0d/bicheckwithwlst_part2.zip
- Edit automation.properties and change the parameters.
- In a shell, set ORACLE_HOME and then launch:

%ORACLE_HOME%\oracle_common\common\bin\wlst.cmd (or .sh) SGPtesting.py

The output looks like this:

*** Reading properties file
    WLPassword=password
    datasource.database.password=password
    test.user=User12
    test.group=Group1
    test.nbgroups=10
    WLUrl=t3://server.us.oracle.com:7001
    WLUsername=weblogic
    datasource.database.user=MY
*** Connecting
Connecting to t3://server.us.oracle.com:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "bifoundation_domain".
Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
WebLogic Server 10.3.5.0  Fri Apr 1 20:20:06 PDT 2011 1398638
*** Reading SQLGroupProvider settings
    SQLGroupProvider is configured
*** SQL queries
    GetGroupDescription: SELECT GROUP_DESCRIPTION FROM MY.GROUPS WHERE GROUP_NAME = ?
    GroupExists: SELECT GROUP_NAME FROM MY.GROUPS WHERE GROUP_NAME = ?
    IsMember: SELECT USER FROM MY.GROUPMEMBERS WHERE MEMBER_OF = ? AND USER = ?
    ListGroups: SELECT GROUP_NAME FROM MY.GROUPS WHERE GROUP_NAME LIKE ?
    ListMemberGroups: SELECT MEMBER_OF FROM MY.GROUPMEMBERS WHERE USER = ?
*** JDBC Resource details
    BIDatabaseGroupDS
    jdbc/BIDatabaseGroupDS
    jdbc:oracle:thin:@dbserver.us.oracle.com:1521:orcl
    oracle.jdbc.OracleDriver
*** Connection pool test
    server : AdminServer
    dataSource : BIDatabaseGroupDS
    DatabaseProductVersion : Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
    State : Running
    Pool test: OK
*** Testing SQL queries
SQLGetGroupDescription Group1
('Group One',)
SQLGroupExists Group1
('Group1',)
SQLIsMember Group1 User12
('User12',)
SQLListGroups first 10
('Group1',)
('Group2',)
('Group3',)
SQLListMemberGroups User12
('Group1',)
('Group2',)
Ending the script
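To see how provider queries of this shape behave, here is a self-contained sketch against an in-memory SQLite database with a hypothetical GROUPS/GROUPMEMBERS schema mirroring the one implied by the queries above. The real provider runs these queries against your configured JDBC datasource; table contents here are made up:

```python
import sqlite3

# Hypothetical in-memory schema mirroring the tables implied by the
# BISQLGroupProvider queries above (the real provider uses your JDBC source).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE GROUPS (GROUP_NAME TEXT, GROUP_DESCRIPTION TEXT);
    CREATE TABLE GROUPMEMBERS (MEMBER_OF TEXT, USER TEXT);
    INSERT INTO GROUPS VALUES ('Group1', 'Group One'), ('Group2', 'Group Two');
    INSERT INTO GROUPMEMBERS VALUES ('Group1', 'User12'), ('Group2', 'User12');
""")

# GroupExists-style query: does the group exist?
exists = conn.execute(
    "SELECT GROUP_NAME FROM GROUPS WHERE GROUP_NAME = ?", ("Group1",)
).fetchone()

# IsMember-style query: is this user in this group?
member = conn.execute(
    "SELECT USER FROM GROUPMEMBERS WHERE MEMBER_OF = ? AND USER = ?",
    ("Group1", "User12"),
).fetchone()

# ListMemberGroups-style query: all groups for a user.
groups = [row[0] for row in conn.execute(
    "SELECT MEMBER_OF FROM GROUPMEMBERS WHERE USER = ?", ("User12",)
)]
```

The tuple results (e.g. ('Group1',)) match the row format shown in the SGPtesting.py output above.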

1. OBIEE

BI Check with WLST - Providers and Technical users part 1

Checking the OBIEE 11.1.1.7.1 security configuration manually is relatively time consuming. Using WLST and Python, it is possible to automate most of these tasks. The following script performs checks on the OBIEE configuration for version 11.1.1.7.1+ (it will not work for 11.1.1.7.0 or 11.1.1.6). It is the first of a three-part series; Part 1 covers providers, technical users and login tests:

* getting the provider list from WebLogic
* getting system.user from EM JMX
* listing provider and group membership recursively for the users system.user (above), weblogic and OracleSystemUser
* listing WebLogic role expressions for the roles Admin and OracleSystemRole
* listing app role members for the BIAdministrator and BISystem roles
* checking if SSO is enabled in EM
* checking the bisecurity#11.1.1 and wsm-pm web app state
* performing a GET request to the bisecurity web service
* performing a WS-Security login to the bisecurity web service using the system.user and weblogic admin accounts
* performing a login to analytics using direct WebLogic connections, using the system.user and weblogic admin accounts
* performing a login to analytics using the external FQDN URL, using the system.user and weblogic admin accounts

Part 2 will cover checking BISQLGroupProvider specifically --> https://blogs.oracle.com/pa/entry/bi_check_with_wlst_sql
Part 3 covers automation for enabling debug, login tests and log parsing.

The script for Part 1 is available here: //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/8f0f1a96b6739dab335be15a9bdf6b0e/bicheckwithwlst_part1.zip

Usage:
* Copy the Python scripts and automation.properties into a folder on the server, or on a client. This machine has to contain an oracle_common/common/bin/wlst.sh (.cmd).
* Modify automation.properties:

WLUsername weblogic                   # a user with a WebLogic Admin role
WLPassword password                   # password for this user
WLServer server1.mycompany.com        # WebLogic console server name
WLPort 7001                           # WebLogic console port
WLScheme t3                           # scheme to connect to the admin console
WLUrl t3://server1.mycompany.com:7001 # URL for the wlst connect command
TARserver bi_server1                  # target managed server for the checks
FQDNExtension .mycompany.com          # string appended after the machine name for client calls, if short names are used in the WebLogic server
ObieeAnalytocsExternalLoadBalancerUrl http://externalurl.mycompany.com/analytics  # external load balancer URL for the login test
obieeServerScheme http                # http or https for direct connections to the bisecurity web service or analytics; the URL is constructed from the managed server information (listen address, or machine name completed with FQDNExtension if necessary)
HttpProxyHost                         # HTTP proxy host for client calls
HttpProxyPort                         # HTTP proxy port for client calls

* Edit BICheckPart1.cmd (.sh) and change ORACLE_HOME.
* Run BICheckPart1.cmd (.sh).
* Review the generated checkBI.log.

Here is a sample output of the script:

--- Reading properties file automation.properties ---
    WLScheme=t3
    obieeServerScheme=http
    HttpProxyHost=
    WLServer=server1.mycompany.com
    WLPassword=password
    HttpProxyPort=
    ObieeAnalytocsExternalLoadBalancerUrl=http://external.mycompany.com/analytics
    TARserver=bi_server1
    WLPort=7001
    WLUrl=t3://server1.mycompany.com:7001
    WLUsername=weblogic
    FQDNExtension=.mycompany.com
--- Connecting to weblogic ---
Connecting to t3://server1.mycompany.com:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "bifoundation_domain".
Warning: An insecure protocol was used to connect to the server.
To ensure on-the-wire security, the SSL port or Admin port should be used instead.
Location changed to serverRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help('domainConfig')
Checking servers targets in domain:bifoundation_domain
AdminServer-> http port:7001 or SSL:7002
bi_server1-> http port:9704 or SSL:9804
***  BI Check Utility running for managed server:bi_server1 ***
Already in Domain Config Tree
--- Getting providers lists for default realm:myrealm ---
Providers list:
    SQLGroupProvider Type: BI SQL Group Provider
    DefaultAuthenticator Type: WebLogic Authentication Provider
    DefaultIdentityAsserter Type: WebLogic Identity Assertion provider
    OID Type: Provider that performs LDAP authentication
        Host:oidserver.mycompany.com
        Port:3060
        Provider Class Name:weblogic.security.providers.authentication.LDAPAuthenticationProviderImpl
        UserBaseDN:ou=People,dc=us,dc=mycompany,dc=com
        Users filters:(&(cn=*)(objectclass=person))
        GroupBaseDN:ou=People,dc=us,dc=mycompany,dc=com
        Groups filters:(&(cn=dummygroup)(|(objectclass=groupofUniqueNames)(objectclass=orcldynamicgroup)))
        Use Retrieved UserName As Principal (0 false, 1 true):0
            Use retrieved username as principal is not ticked in provider. Please review the following docid:
            OBIEE 11g: Weblogic Microsoft ADSI (Active Directory) Case Sensitivity on OPSS Application Roles (Doc ID 1299220.1)
Done getting providers list
--- Getting the system.user username configured in Enterprise manager ---
system.user is defined as:BISystemUser
Done getting system.user
--- Getting OBIEE technical users information ---
BI Technical users check for provider:DefaultAuthenticator
    weblogic found in provider:DefaultAuthenticator
    Exact case in provider:weblogic
    Member: weblogic is member of the following Groups:
        Administrators
        BIAdministrators
    Member: BIAdministrators is member of the following Groups:
        BIAuthors
    Member: BIAuthors is member of the following Groups:
        BIConsumers
    oraclesystemuser found in provider:DefaultAuthenticator
    Exact case in provider:OracleSystemUser
    Member: oraclesystemuser is member of the following Groups:
        OracleSystemGroup
    BISystemUser found in provider:DefaultAuthenticator
    Exact case in provider:BISystemUser
    Member: BISystemUser is member of the following Groups:
        Administrators
BI Technical users check for provider:OID
Done with BIEE technical Users/groups listing
--- Listing OBIEE technical roles -weblogic and opss- information ---
    Role expression for weblogic global role:Admin
        Grp(Administrators)|Usr(smuser1)
    Role expression for weblogic global role:OracleSystemRole
        Grp(OracleSystemGroup)
    <---- Members for opss role:BISystem
Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help('domainRuntime')
[Principal Clz Name : weblogic.security.principal.WLSUserImpl, Principal Name : BISystemUser, Type : ENT_USER]
None
    ---->
    <---- Members for opss role:BIAdministrator
Already in Domain Runtime Tree
[Principal Clz Name : weblogic.security.principal.WLSGroupImpl, Principal Name : BIAdministrators, Type : ENT_ROLE]
None
    ---->
Done listing technical roles members
--- Checking SSO config ---
Location changed to domain custom tree. This is a writable tree with No root. For more help, use help('domainCustom')
SSO is not enabled in EM
Done SSO
--- Checking bisecurity webservice deployment ---
Application: bisecurity#11.1.1 state:STATE_ACTIVE
Application: wsm-pm state:STATE_ACTIVE
-->bisecurity and wsm-pm deployments are running -> Performing bisecurity web service tests
dr--   server1
* Calling url:http://server1.mycompany.com:9704/bisecurity
GET HTML response:
<html><head><title>BI Security Service</title></head><body><h1>The presence of this page indicates the BI Security Service has been successfully deployed.</h1></body></html>
Login tests to bisecurity web service url: http://server1.mycompany.com:9704/bisecurity using BISystemUser account
Calling url:http://server1.mycompany.com:9704/bisecurity/service with user:BISystemUser
Web Service Response:
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"><env:Header/><env:Body><ns3:getAuthenticatedUserWithLanguageAndPropertiesResponse xmlns="" xmlns:ns3="http://oracle/bi/security/"><result><GUID><value>2A7A0AB053A411E3BFA56112E3E6A324</value></GUID><name>BISystemUser</name>... ws response cut ...e>/EssbaseCluster-1</resourceName><resourceType>oracle.essbase.server</resourceType><actions>access</actions></permissions><permissions><resourceName>/EssbaseCluster-1</resourceName><resourceType>oracle.essbase.application</resourceType><actions>use_filter</actions></permissions></result></ns3:getAuthenticatedUserWithLanguageAndPropertiesResponse></env:Body></env:Envelope>
Web service call succeeded: user:BISystemUser has guid:<GUID><value>2A7A0AB053A411E3BFA56112E3E6A324</value></GUID>
Login tests to bisecurity web service url: http://server1.mycompany.com:9704/bisecurity using weblogic account
Calling url:http://server1.mycompany.com:9704/bisecurity/service with user:weblogic
Web Service Response:
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"><env:Header/><env:Body><ns3:getAuthenticatedUserWithLanguageAndPropertiesResponse xmlns="" xmlns:ns3="http://oracle/bi/security/"><result><GUID><value>52BFE81053A311E3AF712DB8C1A98C51</value></GUID><name>weblogic</name>... ws response cut ...<actions>_all_</actions></permissions><permissions><resourceName>/EssbaseCluster-1</resourceName><resourceType>oracle.essbase.application</resourceType><actions>use_calculation,use_filter</actions></permissions></result></ns3:getAuthenticatedUserWithLanguageAndPropertiesResponse></env:Body></env:Envelope>
Web service call succeeded: user:weblogic has guid:<GUID><value>52BFE81053A311E3AF712DB8C1A98C51</value></GUID>
dr--   server1
Login tests to analytics weblogic url: http://server1.mycompany.com:9704/analytics using system.user account/pw:BISystemUser
Calling url:http://server1.mycompany.com:9704/analytics/saw.dll?bieehome with user:BISystemUser
Login success. Valid NQID session:ORA_BIPS_NQID=5levoka2p.......q4fsdcm1a3eoq6i204vp7ri;
Login tests to analytics weblogic url: http://server1.mycompany.com:9704/analytics using weblogic account
Calling url:http://server1.mycompany.com:9704/analytics/saw.dll?bieehome with user:weblogic
Login success. Valid NQID session:ORA_BIPS_NQID=lh9ahsrj.......6jt9pcrvb0l7fsf73b5b32;
Login tests to analytics external FQDN url: http://server1.mycompany.com:9704/analytics using system.user account/pw:BISystemUser
Calling url:http://server1.mycompany.com:9704/analytics/saw.dll?bieehome with user:BISystemUser
Login success. Valid NQID session:ORA_BIPS_NQID=rjr2p3emgm......pk3qslkntij6p6nosaa;
Login tests to analytics external FQDN url: http://server1.mycompany.com:9704/analytics using weblogic account
Calling url:http://server1.mycompany.com:9704/analytics/saw.dll?bieehome with user:weblogic
Login success. Valid NQID session:ORA_BIPS_NQID=t3tdlcimbt4c.....fojdhd91h0fu5c7o1a;
Disconnected from weblogic server: AdminServer
Done BI Check Utility

1. OBIEE

OBIEE Web services with javascript

OBIEE web services provide a large set of operations on ibots, jobs, metadata, analysis, webcat...
http://docs.oracle.com/cd/E21764_01/bi.1111/e16364/soa_overview.htm
http://docs.oracle.com/cd/E23943_01/bi.1111/e16364/methods.htm

A WSDL is provided for these services, and it is common to call them from a process running in the WebLogic server (a custom Java application). However, it is also possible to call them using JavaScript in a browser, or on a server with a JavaScript engine (Nodejs.org, or others). The main limitation in the browser is that cross-domain calls are blocked for security reasons. In this case, the JavaScript code has to be deployed in a webapp in the same domain as OBIEE (like http://obieeserver:port/yourwebapprootcontext). It is rather practical for test purposes to use a JavaScript OBIEE web service client.

To create the JavaScript client from the OBIEE WSDL, use the Apache CXF wsdl2js utility: http://cxf.apache.org/docs/wsdl-to-javascript.html

For BI Publisher, services are available here: http://server:port/xmlpserver/services
For example, you can use this WSDL for the scheduler: http://server:port/xmlpserver/services/v2/ScheduleService?wsdl

In a shell:

set PATH=C:\apache-cxf-2.7.7\bin;C:\Oracle\Middleware\Oracle_Home\oracle_common\jdk\bin;%PATH%
set JAVA_HOME=C:\Oracle\Middleware\Oracle_Home\oracle_common\jdk
cd /d c:\mywsdltojsfiles
wsdl2js -p soap http://server:port/xmlpserver/services/v2/ScheduleService?wsdl

This creates the file ScheduleService.js.

For the OBIEE SOAP web services, create the js files using wsdl2js and http://server:port/analytics/saw.dll?privateWSDL
The operations are described here: http://docs.oracle.com/cd/E21764_01/bi.1111/e16364/methods.htm
This generates these files: SAWSessionService.js for getting a session id, and the other services: WebCatalogService.js, IBotService.js and so on.

For the OBIEE bisecurity web service: this uses WS-Security for login, which requires a change in the generated js files for the login to work (it needs a specific SOAP header). This is not covered in this blog, since the bisecurity web service is not supported as a client API for use by customers/partners.

Then you can call the generated functions this way (example with an OBIEE SOAP web service). Make sure you include the generated js files, as well as cxf-utils.js. Create a new js:

// Get a sessionid
var loginPort = new SAWSOAP_SAWSessionServiceSoap();
if (url) loginPort.url = url + "?SoapImpl=nQSessionService";
var sessionID = "";
loginPort.logon(
    function (responseObject) {
        console.log("Call succeeded -> Sessionid:" + responseObject.getSessionID());
        sessionID = responseObject.getSessionID();
    }, errorCallback, user, pass);
console.log("Done sessionid:" + sessionID);

// Call the ibot service to send a message through the default delivery channels
var ibotPort = new SAWSOAP_IBotServiceSoap();
if (url) ibotPort.url = url + "?SoapImpl=ibotService";
var message = "This is my message";
var groupArray = [];
groupArray.push(recipient);
ibotPort.sendMessage(successCallbackSendMessage, errorCallback, '', groupArray, 'OBIEE IBOT message', message, 'Normal', sessionID);

With the callbacks for the async calls:

function errorCallback(httpStatus, httpStatusText) {
    globalErrorStatus = httpStatus;
    globalStatusText = httpStatusText;
    console.log("Error:" + httpStatusText + " " + httpStatus);
}

function successCallbackSendMessage(responseObject) {
    console.log("Call succeeded: message sent");
}

Example with getting the most recently used items through the web service:

function getMostRecentUserItemsLists(loginPort, sessionID) {
    loginPort.getUserLists(successCallbackUserItemsLists, errorCallback, sessionID);
    console.log("Done users list");
}

function successCallbackUserItemsLists(responseObject) {
    console.log("Call succeeded: userlists:" + responseObject.getReturn());
    var mruList = responseObject.getReturn().getMruList();
    // display the most recent items
    if (mruList.length > 0) {
        var recentItem = mruList[0];
        var path = recentItem.getPath();
        var sig = recentItem.getSignature();
        console.log("mrulist length:" + mruList.length);
        if (mruList.length > 1) {
            recentItem = mruList[1];
            path = recentItem.getPath();
            sig = recentItem.getSignature();
        }
        console.log(path);
        console.log(sig);
    }
}

2. EPM

Calculations Using Dates Stored in Planning

Planning allows users to enter date values in Planning. For example, a start date can be entered as 11/01/2013 if the format is MM/DD/YYYY. Although the values are entered in date format, it is possible to calculate values based on the dates entered. Essbase stores values having a date format as numeric values. In the example above, the “Start Date” 11/01/2013 is stored in Essbase as the value 20131101. If the “End Date” is 06/30/2014, you can calculate the number of months between the dates using Essbase functions. The following shows how to calculate the number of months between the “Start Date” and “End Date” using the @ROUND function: 1. Start with (@ROUND (“End Date”,-4) - @ROUND (“Start Date”,-4))/10000*12 – This step calculates the number of months between the “End Date” year and the “Start Date” year. The result of this step will be (20140000 – 20130000)/10000 *12 or 12. 2. Add (@ROUND (“End Date”,-2) - @ROUND (“End Date”,-4))/100 – This step calculates the number of months between the start of the “End Date” year and the “End Date”. The result of this step will be (20140600 – 20140000)/100 or 6. 3. Subtract (@ROUND (“Start Date”,-2) - @ROUND (“Start Date”,-4))/100 – This step calculates the number of months between the start of the “Start Date” year and the “Start Date”. The result of this step will be (20131100 – 20130000)/100 or 11. Combining the three steps into a single formula results in 12 + 6 – 11 or 7 months. A similar formula can also be written using the @INT or @TRUNCATE Essbase functions.
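The three @ROUND steps above translate directly into integer arithmetic on the stored YYYYMMDD values; here is a minimal sketch in Python (the function name is illustrative, not part of Planning or Essbase):

```python
# Months between two Planning dates stored as YYYYMMDD integers,
# mirroring the three @ROUND steps above. Rounding to -4 digits keeps
# the year (20131101 -> 20130000); rounding to -2 keeps year+month
# (20131101 -> 20131100). Days are 1-31, so rounding always truncates.
def months_between(start_date, end_date):
    year_part = (round(end_date, -4) - round(start_date, -4)) // 10000 * 12
    end_months = (round(end_date, -2) - round(end_date, -4)) // 100
    start_months = (round(start_date, -2) - round(start_date, -4)) // 100
    return year_part + end_months - start_months
```

For the post's example, months_between(20131101, 20140630) reproduces the 12 + 6 - 11 = 7 result.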

Performance Tuning

EPM Infrastructure Tuning Guide v11.1.2.2 / 11.1.2.3

Applies To: This edition applies only to 11.1.2.2 and 11.1.2.3. One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle EPM System performance, all components need to be monitored, analyzed, and tuned. This guide describes the techniques used to monitor performance and the techniques for optimizing the performance of EPM components.

TOP TUNING RECOMMENDATIONS FOR EPM SYSTEM: Performance tuning an Oracle Hyperion EPM system is a complex and iterative process. To get you started, we have created a list of recommendations to help you optimize your Oracle Hyperion EPM system performance. This chapter includes the following sections that provide a quick start for performance tuning Oracle EPM products. Note that these performance tuning techniques are applicable to nearly all Oracle EPM products, such as Financial PM Applications, Essbase, Reporting and Foundation services.

1. Tune Operating System parameters.
2. Tune Oracle WebLogic Server (WLS) parameters.
3. Tune 64-bit Java Virtual Machines (JVM).
4. Tune 32-bit Java Virtual Machines (JVM).
5. Tune HTTP Server parameters.
6. Tune HTTP Server Compression / Caching.
7. Tune Oracle Database parameters.
8. Tune Reporting And Analysis Framework (RAF) Services.
9. Tune Oracle ADF parameters.

Click to download the EPM 11.1.2.3 Infrastructure Tuning Whitepaper (right-click or option-click the link and choose "Save As..." to download the PDF file).

2. EPM

EPM Client cert authentication

If you are planning to use client cert authentication against EPM (http://docs.oracle.com/cd/E17236_01/epm.1112/epm_security_11121/frameset.htm?ch02s13s04.html), there are a few additional elements to consider on top of the documentation:

"SSLVerifyClient required|optional" in Oracle HTTP Server (OHS). This SSL parameter, amongst other tasks, uses the trusted certificates in the OHS wallet to filter the client certificates offered in the browser's certificate prompt. That is, if you have client certificates signed by a trusted root that is not in the OHS wallet, these certificates won't show up in the browser prompt when picking a certificate.

The HYPLOGIN header is sent by OHS through the following entry:

RequestHeader set HYPLOGIN "%{SSL_CLIENT_CERT}e"

The OHS HYPLOGIN header value turns out to look like this (note the question marks):

?----BEGIN CERTIFICATE---- MII....-----END CERTIFICATE---- ?

When using the Java certificate factory ((X509Certificate)CertificateFactory.getInstance("X.509").generateCertificate) to get the DN, you need to remove these unneeded question marks in your CSS custom login class.

It is good practice to add RequestHeader set HYPLOGIN "" at the top of your VirtualHost, while the SSL_CLIENT_CERT header value will be set in your Location entry.
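A minimal sketch of that cleanup step, in Python for brevity (the post does this inside a Java CSS custom login class; the header sample and the stray '?' characters are as described above, and the certificate body here is a placeholder):

```python
# Strip the stray '?' characters OHS puts around the PEM block in the
# HYPLOGIN header so a certificate parser will accept the value.
# Safe because base64 PEM content never contains '?'.
# (Sketch only; the post does this in a Java CSS custom login class.)
def clean_hyplogin(header_value):
    return header_value.replace("?", "").strip()

# Placeholder header value shaped like the OHS sample above.
raw = '?-----BEGIN CERTIFICATE----- MIIC... -----END CERTIFICATE----- ?'
pem = clean_hyplogin(raw)
```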

2. EPM

Essbase 11.1.2.x - Changing Essbase ODL Logging Levels

ODL (Oracle Diagnostic Logging) information is recorded for both the Essbase agent and the Essbase application server. The level of the logging that is recorded can be amended if required. These are the two ODL logs for Essbase:

EssbaseAgentODLLogger is for the Essbase agent. This writes to the ESSBASE_ODL.log in MIDDLEWARE_HOME/user_projects/epmsystem1/diagnostics/logs/Essbase/Essbase

DefSvrLogger is for the Essbase application server (ESSSVR). This writes to the <appname>_ODL.log in MIDDLEWARE_HOME/user_projects/epmsystem1/diagnostics/logs/Essbase/Essbase/app/<appname>

The level of logging for these Essbase ODL logs is controlled by the logging.xml file. The logging.xml for Essbase is stored here: EPM_ORACLE_INSTANCE/EssbaseServer/essbaseserver1/bin/logging.xml

The log level can be changed from the default of ‘TRACE:1’ to another setting, e.g. ‘ERROR:1’. The lower the log level, the less information will be recorded; the higher the log level, the more information will be recorded. The log levels, and information about what is recorded at each level, are here: http://docs.oracle.com/cd/E40248_01/epm.1112/epm_troubleshooting/ch03s04s01.html

To change the logging levels, edit the logging.xml file and find the “<loggers>” section. This is an example from a default logging.xml file:

<loggers>
  <logger name='EssbaseAgentODLLogger' level='TRACE:1' useParentHandlers='false'>
    <handler name='agenthandler'/>
  </logger>
  <logger name='DefSvrLogger' level='TRACE:1' useParentHandlers='false'>
    <handler name='serverhandler'/>
  </logger>
</loggers>

To change the logging level, change the ‘TRACE:1’ setting to the required setting (e.g. ‘ERROR:1’). After changing logging.xml, restart the Essbase service for the configuration change to take effect.
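The edit can also be scripted; here is a minimal sketch using Python's xml.etree against a loggers fragment like the example above. This is a sketch only: a real logging.xml has more surrounding elements, so adapt before running it against your actual file, and remember the restart afterwards.

```python
import xml.etree.ElementTree as ET

# Flip the Essbase ODL loggers from TRACE:1 to ERROR:1 in a logging.xml
# fragment shaped like the example above. Logger names are from the post;
# the fragment itself is a stand-in for the real file.
fragment = """
<loggers>
  <logger name='EssbaseAgentODLLogger' level='TRACE:1' useParentHandlers='false'>
    <handler name='agenthandler'/>
  </logger>
  <logger name='DefSvrLogger' level='TRACE:1' useParentHandlers='false'>
    <handler name='serverhandler'/>
  </logger>
</loggers>
"""

root = ET.fromstring(fragment)
for logger in root.iter("logger"):
    if logger.get("level") == "TRACE:1":
        logger.set("level", "ERROR:1")  # reduce the amount logged

levels = {lg.get("name"): lg.get("level") for lg in root.iter("logger")}
```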

2. EPM

Grid Diagnostics Replaces MassGridStatistics

An earlier blog entry, https://blogs.oracle.com/pa/entry/planning_11_1_2_massgridstatistics, discussed the Planning 11.1.2.1 MassGridStatistics utility for displaying web form load times. Starting with the 11.1.2.2 release, MassGridStatistics has been replaced by the grid diagnostics functionality located within the Planning web interface. In order to access grid diagnostics, select the Diagnostics item followed by the Grids item within the Tools menu. Select the forms you want diagnostics for and click “Run Diagnostics”. You can select forms individually or all forms. A pie chart will display showing the percentage of load times greater than 5 seconds (colored red), more than 1 second and less than 5 seconds (colored yellow), and less than 1 second (colored green). Forms colored red or yellow could potentially have scalability issues in a multi-user environment and should be considered for re-design. Clicking on a section of the pie chart provides a detailed list of the forms within that category as well as the specific load times. The detailed information can also be presented in chart format as well as tabular format. The types of charts available are Area, Horizontal Bar, Bar and Line. For further information on grid diagnostics, please see the Oracle Hyperion Planning Administrator’s Guide at http://docs.oracle.com/cd/E40248_01/epm.1112/planning_admin.pdf.

2. EPM

Disabling Implied Share for an Essbase Application

Implied share is the default behavior in Essbase where a data relationship is implicitly shared between parent and child members in the outline. Implied share occurs when a parent has a single child, or when only one of a parent's children consolidates to the parent in an Essbase outline. If you need to disable all implied shares in a given outline, there is an Essbase configuration setting that can be set in the Essbase.cfg file. As part of the supported steps, it is recommended to rebuild the outline after the IMPLIED_SHARE <ApplicationName> FALSE setting is in place. This post is intended to help the field in situations where rebuilding the outline may be challenging. There is a way to disable implied share for an existing outline; however, there are risks associated with these steps and no guarantee that the metadata and data implied share settings stay in sync. If there are any issues after following the "unsupported" steps, Oracle Support will require that the supported steps be followed first. The Essbase Technical Reference Guide has the supported "best practice" steps to ensure the metadata and data implied share settings are in sync. See the Essbase Technical Reference Guide: http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_tech_ref/frameset.htm?impliedshare.html

Supported Steps (per the Essbase Technical Reference Guide)
The following steps must be performed any time the IMPLIED_SHARE setting is changed in essbase.cfg:
1. Add IMPLIED_SHARE FALSE to essbase.cfg
2. Restart Essbase Server
3. Create a new application and database, with the IMPLIED_SHARE setting in place
4. Rebuild the outline, with the IMPLIED_SHARE setting in place
5. Reload the data
6. Run aggregation or calculation scripts
7. Restart the application

Unsupported Steps
If rebuilding the outline is not possible (step 4 above), you can disable implied share for the existing outline using the ESSCMDQ utility:
1. Download and install ESSCMDQ from the following URL. Follow the installation instructions on the website to select the correct version and OS of the ESSCMDQ utility: http://www.oracle.com/technetwork/middleware/bi-foundation/esscmdq-sampleapps-093105.html
2. Follow steps 1 – 3 in the Essbase Technical Reference Guide
3. In EAS, open the existing outline and save it to the newly created application and database
4. In EAS, unlock the outline after closing it
5. Start the ESSCMDQ utility to modify the outline so that it picks up the Essbase.cfg setting. (You can enter the following commands on one line, or press Enter at each prompt.)
   Login <HostNode> <UserId> <Password>
   Select <AppName> <DbName>
   Openotl 2 1 <AppName> <DbName> <OTLname> y n 0
   Writeotl 0 2 1 <AppName> <DbName> <OTLname>
   Restructotl 1
   Exit
6. Optionally, use the ESSCMDQ utility to check that the outline accepted the Essbase.cfg setting:
   Login <HostNode> <UserId> <Password>
   Select <AppName> <DbName>
   Openotl 2 1 <AppName> <DbName> <OTLname> n n 0
   GetImpliedShareSetting
   Exit
   If the IMPLIED_SHARE FALSE setting took effect, GetImpliedShareSetting should show the following:
   Application Name : Demo2
   Database Name : Basic
   Database Implied Share Setting : 2 (Default - currently OFF)
   Cfg-file Implied Share Setting : 2 (Default - currently OFF)
7. Continue following steps 5 – 7 in the Essbase Technical Reference Guide
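The interactive session in steps 5 and 6 can also be driven from a script file, since ESSCMDQ follows ESSCMD conventions (semicolon-terminated commands in a script passed on the command line). This is only a sketch under that assumption; the host, credentials, application, database and outline names below are hypothetical placeholders you must replace:

```
Login "essbasehost" "admin" "password";
Select "Demo2" "Basic";
Openotl 2 1 "Demo2" "Basic" "Basic" "y" "n" 0;
Writeotl 0 2 1 "Demo2" "Basic" "Basic";
Restructotl 1;
Exit;
```

Run it with ESSCMDQ <scriptfile>, then repeat step 6 interactively to confirm the setting took effect.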


System Security

Importing server and private key in Oracle wallet

You want to create a wallet containing the server certificate and private key provided by your PKI administrator as a yourcert.p12 file. Let's assume the password for the private key is "mypassword". One way is to:

a/ Convert the p12 to a JKS keystore:
keytool -v -importkeystore -srckeystore yourcert.p12 -srcstoretype PKCS12 -destkeystore yournewkeystore.jks -deststoretype JKS
You must use the same password ("mypassword") for the new JKS keystore and the private key. Then import the intermediate and root certificates for your server certificate into this keystore; this is required to create a valid wallet:
keytool -import -alias Root -keystore yournewkeystore.jks -trustcacerts -file root.cer
keytool -import -alias Intermediate -keystore yournewkeystore.jks -trustcacerts -file intermediate.cer
Validate that all entries are there using:
keytool -list -keystore yournewkeystore.jks

b/ Create an empty wallet:
mw_home\oracle_common\bin\orapki wallet create -wallet ./ -pwd "mypassword"

c/ Convert the JKS keystore to a wallet:
mw_home\oracle_common\bin\orapki wallet jks_to_pkcs12 -wallet ./ -pwd "mypassword" -keystore ./yournewkeystore.jks -jkspwd "mypassword"
Make sure the private key password and the wallet password match ("mypassword").

d/ Open the newly created ewallet.p12 with Oracle Wallet Manager and, in the Wallet menu, tick "Autologin", then save. This creates cwallet.sso alongside ewallet.p12.

Your wallet is ready to be used by OHS!
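Once the wallet is built, you can sanity-check its contents before pointing OHS at it. A minimal check, assuming orapki (from mw_home\oracle_common\bin) is on your PATH and the wallet is in the current directory:

```
orapki wallet display -wallet ./ -pwd "mypassword"
```

The output should list your user certificate, plus the imported root and intermediate certificates under the trusted certificates.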


2. EPM

Highlights of HPCM 11.1.2.3 New Features

The latest release of HPCM provides several new features as well as performance improvements. V11.1.2.3 provides the following:
Standard Profitability – Smart View Integration
Detailed Profitability – New Calculation Rules
Detailed Profitability – Essbase Integration
Detailed Profitability – Stage Balancing Report

Standard Profitability – Smart View Integration
There is now Smart View integration included within the HPCM Standard Costing user interface. This provides a dramatically reduced learning curve for reporting, as the integration takes you to the correct combination of dimension members within Smart View. This means that less time is required to see meaningful data. Within the Stage Balancing report, numbers on the report now show as data-point hyperlinks. Click once and Smart View will open, showing the relevant data point in your spreadsheet. It is also now possible to create custom queries in HPCM that can be launched in Smart View from the new "Manage Query" screen in the new "Reporting" task area. Queries are easily set up using a setup wizard covering query options, dimension layout and member selections. Queries can be saved and made accessible to any user. One click takes you to Smart View. They can be used for profitability analysis and contribution analysis.

Detailed Profitability – Calculation Rules
New calculation rules have been added to Detailed Profitability in v11.1.2.3. These rules provide far more efficient modelling, which should in turn provide simpler, faster model design as well as easier model maintenance. Faster calculation times may also be seen. The new calculation rules are visible in a new task area within "Manage Calculations" and allow the definition of assignment logic for groups of sources at once. The current single-source methods are still supported for exceptions.
Calculation rules combine driver definitions, which define the function to be performed, with a source assignment rule to define the sources and destination assignment rules to define the destination that receives the output of the allocation. Calculation rules can provide more processing control, as rules can be single or multi source. The rule's sequence number can control the order of the calculation.

Detailed Profitability – Essbase Integration
From v11.1.2.3, Essbase integration is included for Detailed Profitability. This integration enables users to use Smart View to analyse data in Detailed Profitability, which means far quicker access to meaningful data and helps to shorten the reporting learning curve. Three ASO databases are generated for each model: one for the source stage, one for contribution detail between stages 1 and 2, and one for the destination stage. Deployment of the new Essbase integration databases is similar to the Essbase deploy screens in Standard Profitability.

Detailed Profitability – Stage Balancing Report
From v11.1.2.3, there is now a stage balancing report for Detailed Profitability. It can provide customisable balancing views to match user customisations of the destination stage as a datamart, which helps to make the stage balancing report a much more meaningful report for users. The Stage Balancing report for Detailed Profitability differs from Standard Costing in that target measures are part of the driver. The user must choose all the target measures for stages 1 and 2 to appear on the screen.

For Further Information
For further information on HPCM new features, please see the v11.1.2.3 New Features guide: http://docs.oracle.com/cd/E40248_01/epm.1112/hpm_new_features.pdf


2. EPM

Highlights of Planning 11.1.2.3 New Features

The latest release of Planning includes several performance improvements as well as some new features not present in earlier releases. Some highlights of the new features include:

"Member on the Fly" functionality – For Workforce Planning, Capital Asset Planning, Project Financial Planning and Public Sector Budgeting customers, Planning offers "member on the fly" functionality, where end users can dynamically add sparse dimension members and use the added members on a form without performing a Planning refresh. The administrator determines how many new members can be added prior to the next regularly scheduled refresh. Below is an example of how the "member on the fly" functionality can be implemented:
1. The Planning administrator determines the number of dimension members and the parent containing the new members. The number of dimension members should be based on how many members will be added between database refreshes, to eliminate the need for additional refreshes.
2. The administrator runs the regularly scheduled database refresh to add placeholders for the members in Essbase. Although the placeholders are in Essbase, they do not appear in Planning until a user decides to add a new member.
3. A user with access to the dimension, as defined by Planning security, adds a new member and can now enter data for it.
4. The administrator runs the regularly scheduled database refresh. The placeholder designation in Essbase is now changed to the member name defined by the Planning user.

Aggregate Storage Option (ASO) Plan Type – Starting with the 11.1.2.3 release, Planning customers with a full-use Essbase license can now have plan types which use aggregate storage databases. You can write back to the ASO database and set up forms against an ASO database.
Aggregate storage can offer significant performance improvements for plan types that have large dimensionality and large volumes of data, where all data is loaded at level 0 and upper levels are aggregations of the lower levels. Note that an administrator will need to access EAS or MaxL periodically to merge the data slices resulting from users entering data on forms concurrently. ASO plan types are set up either within the Planning application creation process or within the web administration interface using the new Manage Plan Types function. Although the ASO plan type is part of the same application from a Planning perspective, a separate application is created within Essbase, since ASO databases need to be in separate applications from BSO databases. The Planning repository keeps track of the Essbase applications and databases within the Planning application.

Smart View Metadata Update – A Hyperion Planning Admin Extension for Smart View allows Planning administrators to update metadata in Smart View. The advantage of using Smart View is that member properties can be changed for multiple members simultaneously, as opposed to one member at a time on the Planning web. For example, if 10 members need to have the consolidation property changed from (+) to (~), the properties can all be changed at the same time using Smart View instead of accessing each member individually and changing the property. The Hyperion Planning Admin Extension for Smart View can also be used for other metadata changes, such as moving members and designating members as shared. Administrative functions such as creating and refreshing databases are also available through the extension.
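The periodic slice-merge maintenance mentioned above can be scripted in MaxL rather than performed in EAS. A sketch, assuming a hypothetical ASO plan type deployed as Essbase application/database Plan1/Plan1 (substitute your own names):

```
alter database Plan1.Plan1 merge all data;
```

Running this periodically, for example from a scheduled essmsh job, folds the incremental data slices created by concurrent form saves back into the main database slice.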
For further information on the new Planning 11.1.2.3 features, please see the Oracle Hyperion Planning 11.1.2.3 New Features Guide at http://docs.oracle.com/cd/E40248_01/epm.1112/planning_new_features.pdf. For further information on the new Planning features within Smart View 11.1.2.3, please see the Oracle Hyperion Smart View for Office New Features Guide at http://docs.oracle.com/cd/E40248_01/epm.1112/smart_view_new_features.pdf.


1. OBIEE

OBIEE web SSO with ADFS IdP SAMLv2

The intent of this document is to provide a step-by-step guide for the configuration and installation of a passive claims-based authentication application. A simple passive claims-based mechanism is illustrated in the list below:
1. A user accesses a website (https://obiee-server.us.oracle.com:9804/analytics) via a web browser to consume its services. Such websites are called relying parties.
2. If the user is not authorized to use the relying party, the web application redirects the user to a token issuer / identity provider (AD FS 2.0 – https://adfs-server.us.oracle.com/adfs/ls).
3. The token issuer prompts the user to enter his credentials.
4. The identity provider uses these credentials to query a claim (such as Name, Common Name, email, sAMAccountName, etc.) from an attribute store (Active Directory).
5. The issuer then produces a signed SAML 2.0 token with these claims and sends this token to the browser.
6. The browser sends this signed token to the relying party, which validates the token, authorizes the user to consume its services, and sends a cookie (to be used for single sign-on) and the requested data back to the user.
Download the document here: //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/82cfa14f8f5617c24e5ad41f66a82779/adfs_idp_saml_2_0_web_sso_implementation_for_obiee_single_node.pdf
It describes, step by step, how to configure the ADFS SAML v2 identity provider with the OBIEE service provider for web SSO.


2. EPM

Essbase Aggregate Storage (ASO) - Incremental Outline Updates May Trigger a Full or Light Outline Restructure

When performing incremental updates to an Essbase Aggregate Storage (ASO) outline, the nature of the incremental updates determines whether a full or light outline restructure is triggered. When an incremental update triggers a full ASO outline restructure while input data and aggregate views exist in the database, Essbase will clear the aggregate views, but the input data will remain. When an incremental update triggers a light ASO outline restructure while input data and aggregate views exist in the database, Essbase will not clear the aggregate views or the input data. The Essbase Database Administration (DBA) Guide describes what triggers an ASO full vs. light outline restructure. See the following section of the Essbase DBA Guide for more details: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/frameset.htm?asysadmn.html#asysadmn1066094 The above section, specifically Table 235, "Aggregate Storage Restructuring Levels", will be updated in the Essbase v11.1.2.3.000 release documentation to reflect this expected behavior. This post is only to confirm that, depending on their nature, incremental updates to an ASO outline can trigger a full restructure, which means aggregate views will be cleared; the performance impact is therefore higher than if the incremental update had triggered a light outline restructure.
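Because a full restructure clears the aggregate views, they must be rebuilt afterwards. A MaxL sketch, using a hypothetical application/database Sample/Basic (substitute your own names), that recreates recommended aggregate views after a full restructure:

```
execute aggregate process on database Sample.Basic;
```

Scheduling this after outline maintenance windows avoids leaving the database without aggregations until the next ad hoc aggregation run.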


2. EPM

HPCM 11.1.2.2.x - Changing Task Flow Refresh Intervals

In HPCM, users access the "Job Status" task area to view the status of submitted jobs/task flows. In 11.1.2.2.x, the task flow information viewed on these screens within HPCM is cached in order to provide better performance. A refresh occurs after each configured interval, and new entries are shown after this interval. By default this interval is set to 1 minute in 11.1.2.2.x. However, this property is configurable and can be changed if required.

Check your refresh interval by generating the registry.html report:
1. Open a shell on the EPM Foundation Services server
2. Change directory to %EPM_ORACLE_INSTANCE%/bin. Typically, this directory will be C:\Oracle\Middleware\user_projects\epmsystem1\bin
3. Run epmsys_registry.bat
4. Check the property workflowEngine.ces.refreshInterval available under SHARED_SERVICES_PRODUCT. This will show the interval in minutes.

Update the refresh interval to whatever value you want (in minutes) using the epmsys_registry utility as follows:
1. Open a shell on the EPM Foundation Services server
2. Change directory to %EPM_ORACLE_INSTANCE%/bin. Typically, this directory will be C:\Oracle\Middleware\user_projects\epmsystem1\bin
3. Run epmsys_registry.bat updateproperty SHARED_SERVICES_PRODUCT/@workflowEngine.ces.refreshInterval <time in minutes>
For example, set <time in minutes> to 0.5 if you want to reduce the refresh interval from 1 minute to 30 seconds.
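As a concrete instance of the update in step 3, setting a 30-second interval looks like this (run from %EPM_ORACLE_INSTANCE%/bin; the value is expressed in minutes):

```
epmsys_registry.bat updateproperty SHARED_SERVICES_PRODUCT/@workflowEngine.ces.refreshInterval 0.5
```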


Performance Tuning

Using Essbase Query Governors with Forms Containing User Variables

Many Planning customers today are designing forms containing user variables. User variables allow users to customize the information presented on a form based on their security. In addition, the number of forms maintained in the application is reduced, since multiple versions of the same form with different dimension members do not need to be created. The issue with forms containing user variables is that form retrieval times can vary considerably depending on the dimension member selected, especially when the user variable is defined as a row dimension member. Since the user variable can be selected based on the user's read and write access, choosing a member from the upper levels of the dimension hierarchy can increase the number of potential rows significantly if the user variable is defined on a large sparse dimension. As a result, performance can degrade significantly if many users who have selected a high-level user variable open the form concurrently.

You can evaluate the impact of selecting an upper-level dimension member by running Tools|Diagnostics in Planning 11.1.2.2 (or MassGridStatistics in earlier releases – see http://blogs.oracle.com/pa/entry/planning_11_1_2_massgridstatistics for additional information) for a lower-level member, a middle-level member and a higher-level member. The diagnostics will show the form load times and how the load time increases as higher and higher level members are selected. Although users may have the ability to select a high-level dimension member for the user variable, a high-level selection is typically not necessary as far as data entry is concerned. Administrators can limit the load times and performance impact of a high-level selection by adding either the QRYGOVEXECTIME or QRYGOVEXECBLK query governor setting to the essbase.cfg file. When loading a form, QRYGOVEXECTIME will time out the form loading process if the specified retrieval time in Essbase is exceeded.
QRYGOVEXECBLK will time out the form loading process if the number of data blocks is exceeded. Note that adding the query governor settings will only impact activity on the Essbase server. Activity on the Planning server will still occur to generate the MDX query and set up the grid, so the time out message will take slightly longer to generate. The load on the Planning server could be decreased by adjusting the security and decreasing read access to upper level dimension members. This alternative, however, might prevent users from opening other forms at a higher dimension level, which might not be feasible.
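An essbase.cfg fragment illustrating the two governors is shown below. The application name and limits are hypothetical examples, not recommendations; QRYGOVEXECTIME is expressed in seconds and QRYGOVEXECBLK in data blocks:

```
; essbase.cfg - example query governors for a hypothetical application "PlanApp"
QRYGOVEXECTIME PlanApp 120
QRYGOVEXECBLK PlanApp 100000
```

Restart the Essbase Server after editing essbase.cfg so the settings take effect. Omitting the application name applies the limit to all applications on the server.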


Installs and Config

OBIEE 11g Installation & Best Practices: BAPI Access for SAP/BW Data Sources

(Special thanks to Roxane Wiart from Oracle)

Table of Contents
1. Global Architecture
2. Communication Layer
3. Supported Environment
3.1 OTN
3.2 Data Sources SAP BW
4. Installation and Configuration
4.1 SAP/BW Client Layer
4.2 Oracle BI SAP BAPI Adapter
4.3 Windows Patch OS Security
4.4 Windows Variable Env
5. SAP User Authorizations
6. Admin Tool Import Metadata
6.1 Create the Connection
6.2 Select the SAP Data Sources
6.3 Work with SAP Data Hierarchies
6.4 Work with Aggregation Rules
6.5 View Members
7. Observations/Best Practices
7.1 SAP RFC INI File
7.2 RPD and Time Series
7.3 Filters in OBI Answers
7.4 SAP/BW Query Volume
7.5 SAP/BW Query Export
7.6 SAP/BW Date in RPD
8. Troubleshooting
8.1 Network Connection
8.2 SAP NW RFC SDK Client Layer
8.3 Limitations of Numbers of Members
8.4 Config Files OBI
8.5 Known Issues on SAP BW

1. GLOBAL ARCHITECTURE
SAP/BW integrates with BI tools via MDX (multidimensional queries); MDX is the de facto industry standard for querying OLAP servers. The SAP/BW server includes an MDX processor that can directly query any BW InfoCube and InfoQuery. SAP/BW supports multiple protocols for submitting MDX requests. With OBI EE 11g, both the XMLA adapter and the BAPI adapter are supported.

2. COMMUNICATION LAYER
Two OBI components need to access SAP/BW: the Admin Tool (client admin layer) and the BI Server services (web queries). The configuration is the same for these two components if the solution is deployed on separate machines. The BW (Business Warehouse) data sources from the BW server available in OBI are: SAP/BW InfoCubes, SAP/BW InfoProviders, SAP/BW MultiProviders and SAP/BW InfoQueries. You can connect to SAP/BW data sources using either the XMLA connection type or the SAP/BW Native connection type (BAPI). SAP/BW Native connections might not be available on certain platforms.
To connect to SAP/BW data sources using the SAP/BW Native connection type, you must first download the BAPI Adapter for OBI package from the Bristlecone web site and then follow the configuration instructions in the Readme provided with the download. No reconfiguration steps are required to connect to SAP/BW over XMLA. The communication layer with the BAPI connection type sits between the OBI client and the SAP/BW server.

3. SUPPORTED ENVIRONMENT
3.1 OTN
The latest supported requirements are available on the Oracle OTN site: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html
3.2 DATA SOURCES SAP BW
For the SAP/BW data sources: in the XLS file, go to the tab and check the SAP/BW certified level. Supported SAP/BW source versions (Q1 CY13):
• BI 11.1.1.5+ with SAP/BW 7.0 SP17+
• BI 10.1.3.4.1 with SAP/BW 3.5 SP20+ or SAP/BW 7.0 SP17+
N.B. Variable support:
• Variables are parameters to BW queries used to filter members and other objects.
• The BI Server supports the following variable types: characteristic, hierarchy and numeric.
• The BI Server will rewrite filtered requests to use variables. Example:
Logical query:
SELECT Material."Material Level 01" saw_0, "0SD_C03/Z0SD_C03_Q001"."Gross weight in kg" saw_1 FROM "0SD_C03" WHERE Material."Material Level 01" = 'CD ROM Drive' ORDER BY saw_0
MDX query:
With set [0MATERIAL2] as ' hierarchize ({{[0MATERIAL].[LEVEL00].members},{[0MATERIAL].[LEVEL01].members}})' set [Axis1Set] as '{[0MATERIAL2]}' member [Measures].[MS1] as '[Measures].[D5NXUO6GTYVLD646QLP8RR9EF]' select {[Measures].[MS1]} on columns, NON EMPTY {[Axis1Set]} properties MEMBER_UNIQUE_NAME, PARENT_UNIQUE_NAME on rows from [0SD_C03/Z0SD_C03_Q001] SAP VARIABLES [0P_ITEM] INCLUDING [0MATERIAL].[R-1180]

For 11.1.1.6 specifically (please see http://www.bristleconelabs.com/edel/docs/obia/11.1.1.6/Release%20Notes%2011.1.1.6.txt ):
Bcone BAPI Adaptor Release Notes (version 11.1.1.6)
About Bcone BAPI Adaptor: the Bcone BAPI Adaptor is a customized wrapper over the NW RFC SDK. A client that wants to call BAPI functions using the Bcone BAPI Adaptor needs to load "BconeAdaptor.dll". The Bcone BAPI Adaptor can be loaded dynamically or linked statically to the client.
About this release: this release has some additional features for OBIEE and supports OBIEE version 11.1.1.6.2 BP1 and later.
Changes/fixes: this is a full release of the Bcone BAPI Adaptor. The following features have been added:
1) The GetMembers API of the NW RFC SDK has been added.
2) The GetStructures API of the NW RFC SDK has been added.
3) Improved error handling is implemented with this release.
4) NW RFC SDK calls have been made safer.

4. INSTALLATION AND CONFIGURATION
4.1 SAP/BW CLIENT LAYER
This layer is a purely SAP step. Connect to the SAP Marketplace web site (customer login needed). Go to -> Support Packages and Patches - Entry by Application Group -> Additional Components -> SAP NW RFC SDK -> SAP NW RFC SDK 7.20 -> <platform> (for the OS of the BI server target) -> NWRFC_1-20004xxx.SAR and download the latest pack for the SAR package (level 5 as of 30/06/2011). If the deployment includes several OS types (32/64 bit), download all the SARs needed. If the Admin Tool is on the same machine as the full 64-bit OBI installation, only the 64-bit libraries need to be downloaded. Unzip the SAR package with the SAP CAR tool. If the server does not have this executable, download it from the SAP Marketplace web site. Once unzipped, the content of the SAP NW RFC SDK layer is in a folder named "nwrfcsdk". Some OSs need additional parameters in the SAPRFC.ini file: copy saprfc.ini from the demo folder, put it in the nwrfcsdk folder, then open and modify it with the SAP/BW customer login parameters.

4.2 ORACLE BI SAP BAPI ADAPTER
Download the BAPI Adapter for OBI package from the Bristlecone web site at: http://www.bcone.com/content.php?pagename=Download-Oracle-Adapter-For-SAP Select the BAPI adapter needed for your OS per the Bristlecone documentation. Unzip the BconeAdaptor zip files on the OBI machine. For a 32-bit OS this is one file (BconeAdaptor.dll); for a 64-bit OS it is two files (BconeAdaptor.dll and BconeAdaptor.lib). Copy this content into a folder (either the same folder as the SAP SDK from the previous step, nwrfcsdk, or a different one).

4.3 WINDOWS PATCH OS SECURITY
Depending on the OS, a Windows security patch level is mandatory. This is specified in the Bristlecone PDF documentation: "BAPI Adapter Bristlecone Components Installation and Configuration Readme.pdf". Download the patch from the Microsoft site and execute it at the OS level.

4.4 WINDOWS VARIABLE ENV
In the OS system variable configuration, add the "bin" and "lib" folders of "nwrfcsdk" to the Path variable. Depending on the OS, an additional variable may need to be created: in the OS system variable configuration, create an environment variable RFC_INI whose value is the path to the rfc.ini file location.

5. SAP USER AUTHORIZATIONS
Some specific authorizations must be assigned to the SAP user used by OBI in order to access and import SAP/BW metadata and use the BAPI connection.
On the SAP system, check the technical authorization on the S_RFC object: "Contrôle des authorisations lors de l'accès RFC" (authorization check on RFC access; French version)
- ACTVT: 16
- RFC_NAME: RFC1, RSOB, SDIFRUNTIME, SYST
- RFC_TYPE: FUGR
This is mandatory for any access to SAP/BW from the OBI BAPI adapter.

Optional step, to trace user activity between the SAP/BW server and the BAPI client: object S_RS_RSTT "Objet d'autorisation pour outil Trace RS" (authorization object for the RS trace tool; French version)
- ACTVT: 01, 02, 03
- RSTTBOBJ: AUTHTRACE, TRACE
- USER: <communication account>

BI analytics authorizations (on the data): object S_RS_AUTH "Autorisations analyse BI dans rôle" (BI analysis authorizations in role; French version)
- BIAUTH: <analytic authorization>
Put 0BI_ALL if no analytic restriction is required, or create a specific authorization on the functional area.

BI authorizations on the objects: object S_RS_COMP "Business Explorer - composantes" (Business Explorer components; French version)
- ACTVT: 03, 16
- RSINFOAREA: <application area>
- RSINFOCUBE: <InfoCube>
- RSZCOMPID: $<InfoCube>, <query>
- RSZCOMPTP: REP
For query access, grant the same authorizations as if the query were executed by SAP BI. For direct InfoCube access, the SAP system checks the query as a "pseudo query" named $<InfoCube> (field RSZCOMPID). If the whole list of SAP InfoCubes needs to be accessible from the OBI Admin Tool, set the RSINFOCUBE field to "$*"; otherwise, list the InfoCube names to make available.

Object S_RS_COMP1 "Business Explorer - Composantes: Extension du propriétaire" (Business Explorer components: owner extension; French version)
- ACTVT: 03, 16
- RSZCOMPID: $<InfoCube>, <query>
- RSZCOMPTP: REP
- RSZOWNER: <owner of the queries>
This complements S_RS_COMP and allows limiting access to queries by query owner. If no restriction is necessary, put "*" in the RSZOWNER field and the same values as S_RS_COMP in the other fields.

6. ADMIN TOOL IMPORT METADATA
6.1 CREATE THE CONNECTION
Define the connection to SAP BW. See also: http://docs.oracle.com/cd/E23943_01/bi.1111/e10540/datasource.htm#CACEJJHC
Provide the following information:
■ System IP or Hostname: The host name or IP address of the SAP data server. This field corresponds to the parameter ashost in the SAP/BW connect string.
■ System Number: The SAP system number. This is a two-digit number assigned to an SAP instance, also called Web Application Server, or WAS. This field corresponds to the parameter sysnr in the SAP/BW connect string.
■ Client Number: The SAP client number. This is a three-digit number assigned to the self-contained unit called Client in SAP. A Client can be a training, development, testing, or production client, or it can represent different divisions in a large company. This field corresponds to the parameter client in the SAP/BW connect string.
■ Language: The SAP language code used when logging in to the data source (for example, EN for English or DE for German). This field corresponds to the parameter lang in the SAP/BW connect string.
■ Additional Parameters: Additional connection string parameters in the format param=value. Delimit multiple parameters with a colon. This field is optional.
■ User Name: A valid user name for the data source.
■ Password: The corresponding user password. The password is case-sensitive.
The first five fields constitute the elements of the SAP/BW connect string, in the format:
ashost=value:sysnr=value:client=value:lang=value:additional_param=value
For example: ashost=10.30.0.19:sysnr=00:client=100:lang=EN
Note, for testing an XMLA connection: the URL property is only displayed for XMLA data sources. Specify the URL to connect to the XMLA provider. This URL points to the XMLA virtual directory of the computer hosting the cube. This virtual directory must be associated with msxisapi.dll (part of the Microsoft XML for Analysis SDK installation).
For example, the URL might look like the following: http://<ServerName>/xmla/msxisap.dll The authorization and connection information on sap system must be check if the connection failed. 6.2 SELECT THE SAP DATA SOURCES Select the data sources (infocubes, query cubes) Depend of the connection parameter and the data sources, the metadata import don’t generate the same thing. If the language is unknown, the technical names from the SAP/BW catalog are imported Example : data sources Infocubes with LANGAGE = EN Example : data sources Query cubes with LANGAGE = FR : 6.3 WORK WITH SAP DATA HIERARCHIES Check any update from the repository admin documentation e10540.pdf For SAP/BW data sources, all hierarchies default to fully balanced hierarchies on import. The hierarchy type for two-level hierarchies (which typically correspond to characteristic primary hierarchies) should not be changed. Review all SAP/BW multi-level (external) hierarchies to determine whether any are parent-child hierarchies, and set them to Value as needed. Working with Cube Variables for SAP/BW Data Sources In SAP/BW data sources, cube variables are used as a means of configuring queries. Cube variable objects are imported into the Physical layer when metadata is imported from Querycubes/Bex Queries in SAP/BW data sources. Typically, you do not edit these objects directly except to keep them synchronized with the Bex queries in the data source, and except to specify overrides for key characteristics values. The Cube Variables tab of the Cube Table dialog lists the cube variables for the given cube table, along with the cube variable caption. Double-click a cube variable to see more detailed information, or click the Add button to define a new cube variable. Description of the properties of CUBE VARIABLES for SAP/BW data sources. See your SAP/BW documentation for additional information. Properties Description Name Name of the cube variable. 
Caption: A description (label or caption) associated with the cube variable, mainly used for display purposes.
Variable Type: The type of cube variable. Variable types include:
■ SAP_VAR_TYPE_MEMBER: A placeholder for a selection for MEMBER_UNIQUE_NAMES.
■ SAP_VAR_TYPE_HIERARCHY: A placeholder for a HIERARCHY_UNIQUE_NAME.
■ SAP_VAR_TYPE_NUMERIC: A placeholder for a numeric value in formulas.
Selection Type: The selection type of the cube variable, for cube variables of type SAP_VAR_TYPE_MEMBER. Selection types include:
■ SAP_VAR_SEL_TYPE_VALUE: The variable is replaced by a single value. Cube variables of type NUMERIC must have this selection type.
■ SAP_VAR_SEL_TYPE_INTERVAL: A placeholder for an interval.
■ SAP_VAR_SEL_TYPE_COMPLEX: A placeholder for a complex selection.
Entry Type: Indicates whether replacing variables is optional or mandatory. Entry types include:
■ SAP_VAR_INPUT_TYPE_OPTIONAL: Specifying a value is optional for this variable.
■ SAP_VAR_INPUT_TYPE_MANDATORY: You must specify a value for this variable.
■ SAP_VAR_INPUT_TYPE_MANDATORY_NOT_INITIAL: You must specify a value for this variable. An initial field is not a valid entry.
Reference Dimension: Contains a DIMENSION_UNIQUE_NAME for the variable type SAP_VAR_TYPE_HIERARCHY.
Reference Hierarchy: Contains a HIERARCHY_UNIQUE_NAME for the variable type SAP_VAR_TYPE_MEMBER.
Default Low: Contains a default value for the variable, or is zero.
Default High: Contains a default value for the variable, or is zero. This property is only important for variables with the selection type SAP_VAR_SEL_TYPE_INTERVAL and SAP_VAR_SEL_TYPE_SELECTION.
Override Default Low: Provide a default value for the cube variable in this field if Default Low is zero. You must specify a value for this property for mandatory variables that do not specify a default value.
Override Default High: Provide a default value for the cube variable in this field if Default High is zero.
You must specify a value for this property for mandatory variables that do not specify a default value.

6.4 WORK WITH AGGREGATION RULES

Information from: Oracle® Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition 11g, E10540-02
http://docs.oracle.com/cd/E23943_01/bi.1111/e10540/busmodlayer.htm#BIEMG1335

Defining Aggregation Rules for Multidimensional Data Sources: By default, when you import SAP/BW (and some other multidimensional cubes) into the Physical layer, Oracle Business Intelligence cannot read the aggregation rules set within the data source. Because of this, the measures are imported automatically with the default aggregation rule of External Aggregation. Note that this rule is only available for multidimensional data sources. External Aggregation means that the Oracle BI Server is not aware of the underlying aggregation rule for the specific measure and will not compute it internally. Instead, the Oracle BI Server will always ship the query to the underlying multidimensional data source for aggregation. Because the underlying data sources are extremely efficient, pushing the aggregation rules down to the data source ensures that the Oracle BI Server returns the results without adding any additional overhead in processing. However, it is recommended that you update the aggregation rule for each measure in Oracle Business Intelligence with the corresponding aggregation rule defined in the data source. Doing so ensures that the Oracle BI Server can do additional computations when needed. There is no query performance impact, since the Oracle BI Server still pushes down optimized queries wherever possible.

6.5 VIEW MEMBERS

The View Member values feature in the Physical layer is only available in online repository mode.

7. OBSERVATIONS/BEST PRACTICES

7.1 SAP RFC INI FILE

The "saprfc.ini" file is not used on a Windows 2008 OS configuration. The step to define the OS variable RFC_INI is not mandatory in this case.
7.2 RPD AND TIME SERIES

In the RPD, OBIEE cannot directly use time series functions (AGO, TODATE) because SAP/BW does not store data in a date format. It is necessary to first define CAST or transformation functions on the data source columns, and then use them in derived columns with time series functions.

7.3 FILTERS IN OBI ANSWERS

In Answers, filter on the characteristic key, not on a display attribute (display attributes cannot be filtered on SAP BW).

7.4 SAP/BW QUERY VOLUME

Define small SAP/BW queries in the physical layer (and federate them in the logical/presentation layer) rather than one big SAP/BW query. This limits the number of joins between SAP/BW (data values) and the extension tables (data properties).

7.5 SAP/BW QUERY EXPORT

During the metadata import step in the Admin tool, the list of query cubes may be empty or partial: only InfoCubes are visible. Query cubes (created in the SAP BEx Query Designer) are only visible from other BI tools if the checkbox called "Allow External Access to this Query" is checked when the query is saved. Once this option is saved in the SAP BEx Query Designer, the query is visible in the Admin tool. In SAP BEx, this option is set under Query Properties, Advanced tab: "Release for OLE DB for OLAP".

7.6 SAP/BW DATE IN RPD

In order to use time series functions, it is necessary to work with Admin tool functions on date field columns from SAP. Why? All date information in SAP/BW is stored in VARCHAR format, not in date format. All values need to be converted to date format before calling date functions in the OBI Admin tool.
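To make the substring rearrangement in the RPD example below easier to follow, here is a minimal Java sketch of the same transformation. It assumes the source column arrives as MM/DD/YYYY text; that layout is an assumption for illustration, since the actual format depends on the InfoObject:

```java
// Hypothetical illustration: SAP/BW calendar-day values arrive as VARCHAR,
// so they must be rearranged into a parseable date string before CAST AS DATE.
// Assumes an MM/DD/YYYY input layout (illustrative only).
public class SapDateConvert {

    public static String toYmd(String raw) {
        // Mirrors the RPD expression:
        // SUBSTRING(FROM 7 FOR 4) || SUBSTRING(FROM 6 FOR 1)
        //   || SUBSTRING(FROM 1 FOR 3) || SUBSTRING(FROM 4 FOR 2)
        return raw.substring(6, 10)   // year,      e.g. "2011"
             + raw.substring(5, 6)    // separator, e.g. "/"
             + raw.substring(0, 3)    // month + separator, e.g. "01/"
             + raw.substring(3, 5);   // day,       e.g. "15"
    }
}
```

Under that assumed layout, "01/15/2011" is rearranged into "2011/01/15", which the BI Server can then cast to DATE.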
Example of syntax:

CAST (
  SUBSTRING( CAST ("ZMHCMES1/ZHCM_ZMHCMES1_ESS_ERC_STAT1"."Calendar day"."Calendar day Level 01" AS VARCHAR ( 20 )) FROM 7 FOR 4)
  || SUBSTRING( CAST ("ZMHCMES1/ZHCM_ZMHCMES1_ESS_ERC_STAT1"."Calendar day"."Calendar day Level 01" AS VARCHAR ( 20 )) FROM 6 FOR 1)
  || SUBSTRING( CAST ("ZMHCMES1/ZHCM_ZMHCMES1_ESS_ERC_STAT1"."Calendar day"."Calendar day Level 01" AS VARCHAR ( 20 )) FROM 1 FOR 3)
  || SUBSTRING( CAST ("ZMHCMES1/ZHCM_ZMHCMES1_ESS_ERC_STAT1"."Calendar day"."Calendar day Level 01" AS VARCHAR ( 20 )) FROM 4 FOR 2)
AS DATE )

8. TROUBLESHOOTING

8.1 NETWORK CONNECTION

Before any installation process, check the physical connection between the machines. From a command line, launch:

telnet <SAP host or IP address> 32xx (where xx is the SAP/BW system number)

By default the SAP/BW system number is 01, so: telnet <TCP/IP address or host name> 3201

If an error appears at this step, no connection will be possible. The issue can be TCP/IP name resolution, or a firewall in place between the OBI and SAP layers; check the open ports. If a black screen appears, the communication layer is OK.

8.2 SAP NW RFC SDK CLIENT LAYER

The connection between OBI and SAP is made through the SAP NW RFC SDK client.
The test is to check the SAP communication layer between the NW RFC SDK client and the server. For this, try to execute STARTRFC.exe in command-line mode from the nwrfcsdk\bin folder, with the connection parameters to SAP BW.

If an error occurs at this step:
- Check the bin folder and the existing files
- Check the loading of the DLLs by the OS (a reboot can help)
- Check that the correct libraries are installed (32/64-bit RFC SDK)

8.3 LIMITATION OF THE NUMBER OF MEMBERS

During filter creation in Answers, the tool can limit the context depending on the volume of data in the filter, and users get an informational message.

8.4 OBI CONFIG FILES

Check the parameters in these two files:
- DBFeatures.INI: MW_HOME\instances\instancex\config\OracleBIServerComponent\coreapplication_obis1
- NQSConfig.INI: MW_HOME\instances\instancex\config\OracleBIServerComponent\coreapplication_obis1

For the number of members in a section, or export to Excel, modify this file:
- instanceconfig.xml: MW_HOME\instances\instancex\config\OracleBIPresentationServicesComponent\coreapplication_obips1

8.5 KNOWN ISSUES ON SAP/BW

Once you import the cubes/BEx queries, the multidimensional source is largely a black box; there is not much you can do in the physical layer, such as alias renaming. Re-importing a BEx query/MultiProvider can change the column IDs, resulting in re-creation of the BMM and Presentation layers, followed by re-creation of the reports. Presentation variables cannot be used on the key columns, as they generate incorrect MDX. Filters on non-key columns are not added to the MDX query; see note 1288595.1 on Oracle Support. After importing SAP BW cubes or BEx queries, all non-integer columns get converted to VARCHAR data types. Due to the non-availability of a date data type, date functions like AGO and TODATE cannot be used. In some cases OBIEE creates sub-optimal queries.
Using SAP BI, unable to pass filter selection [ID 942641.1] (modified 06-10-2011; affects OBI 10, OK in 11): The following solution is suggested to pass the filter parameters along with the query to SAP BI. The following change is made to the NQSConfig.INI file: under the MDX_MEMBER_CACHE section, change Enable Cache to YES.
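As a sketch of the change described in the note above, the relevant NQSConfig.INI fragment might look roughly like this. The exact section header and parameter name should be verified against your OBIEE release and the support note itself:

```
# NQSConfig.INI - sketch of the suggested change (assumption: section and
# parameter names as shown; verify against Doc ID 942641.1 for your release)
[ MDX_MEMBER_CACHE ]
ENABLE = YES;
```

Restart the BI Server after editing NQSConfig.INI so the change takes effect.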

(Special thanks to Roxane Wiart from Oracle)

2. EPM

EPM 11.1.2.2 Architecture: Essbase

Since a lot of components exist to access or administer Essbase, there are also a couple of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate using a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component.

The Administration Services Console, Integration Services Console and Essbase Studio are clients which are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata, or simply for data loads, Administration Services is utilized for all kinds of Essbase administration. All clients use only one or two ports to talk to their server counterparts, which makes them work through firewalls easily.

Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server: Integration Services uses the CAPI, while Essbase Studio uses the JAPI. However, both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are either based on ODBC (Integration Services, Essbase) or JDBC (Essbase Studio).
As for all other components discussed previously, when setting up firewall rules, be aware that all services may need to talk to the external authentication sources; this is not only needed for Shared Services.


Performance Tuning

OBIEE 11.1.1 - (Updated) Best Practices Guide for Tuning Oracle® Business Intelligence Enterprise Edition (Whitepaper)

Applies To: This whitepaper applies to OBIEE releases 11.1.1.6 and 11.1.1.7.

Introduction: One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle® Business Intelligence Enterprise Edition performance, you need to monitor, analyze, and tune all the Fusion Middleware / BI components. This guide describes the tools that you can use to monitor performance and the techniques for optimizing the performance of Oracle® Business Intelligence Enterprise Edition components. You can find the updated copy of the OBIEE Tuning Guide whitepaper in Support note OBIEE 11g Infrastructure Performance Tuning Guide (Doc ID 1333049.1).

Disclaimer: All tuning information stated in this guide is for orientation only; every modification has to be tested and its impact should be monitored and analyzed. Before implementing any of the tuning settings, it is recommended to carry out end-to-end performance testing: obtain baseline performance data for the default configurations, make incremental changes to the tuning settings, and then collect performance data again. Otherwise, changes may worsen system performance.


2. EPM

EPM 11.1.2.2 Architecture: Financial Performance Management Applications

Financial Management can be accessed either by a browser based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface.

Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides to forward this request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF), or to the .NET based application serving SmartView retrievals running on Internet Information Server (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.

Unlike most other components, Essbase Analytics Link (EAL) does not have an end user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization server using the EAL API. The Data Synchronization server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.

As with most Oracle EPM System products, browser based clients and SmartView can be used to access Planning. The Java based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware of the fact that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.

Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. The Essbase JAPI is utilized to talk to Essbase.

For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.

Disclosure Management has three kinds of clients. While the browser based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), the Taxonomy Designer does not connect to the Disclosure Management server. Communication to relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.


Performance Tuning

HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

HPCM Standard Profitability calculation scripts are named based on a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + Stage Order Number (1 digit) + "_" + index (2 digits) (please see the documentation for more information: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in the name being 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage. In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100 and therefore there will be no issue. However, in some cases, the number of scripts generated can exceed 99.

It is unusual for an application to generate more than 99 calculation scripts for one stage. This may indicate that explicit assignments are being used extensively. An assessment should be made of the design to see if assignment rules can be used instead. Assignment rules will reduce the need for so many calculation script lines, which will reduce the requirement for such a large number of calculation scripts.

In cases where the number of scripts generated exceeds 99, the length of the name of the 100th calculation script differs from the 99th, as the calculation script name goes from being 8 characters long to 9 characters long (e.g. A6811_100 rather than A6811_99). A name of 9 characters is not permitted in non-Unicode applications. It is "too long". When this occurs, an error will show in the hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file will give the name of the calculation script which is causing the issue. In my example, this is A6811_100 and you can see this is 9 characters in length.

It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If the size of each calculation script is larger, the number of scripts generated will be reduced and, therefore, fewer than 100 scripts will be generated, which means that the name of the calculation script will remain 8 characters long. To increase the size of the generated calculation scripts for an application, in the HPM_APPLICATION_PREFERENCE table for the application, find the row where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304. This can be increased, e.g. 7340032 will increase this to 7 MB. Please restart the Profitability service after making the change.

Addendum – Added 29th April 2014: From 11.1.2.3.300, the calc script naming convention has been changed. The underscore after the Stage Order Number has been removed, which means that the index at the end of the calculation script name is now 3 digits in length rather than 2 digits. This means that the index can now increase to a maximum of 999 rather than 99. The calc script name is now = a script suffix (1 letter) + POV identifier (3 digits) + Stage Order Number (1 digit) + index (3 digits). For example, A6811_100 would now show as A6811100 and, therefore, would only be 8 characters in length.
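The two naming conventions described above can be sketched in Java. This is a hypothetical helper that only models the documented name format and the 8-character non-Unicode limit; it is not part of the HPCM product:

```java
// Hypothetical helper modelling the HPCM calc script naming conventions
// described above. Illustration only - not an Oracle API.
public class HpcmScriptName {

    // 11.1.2.2.x: suffix (1 letter) + POV (3 digits) + stage (1 digit)
    //             + "_" + index (2 digits)
    public static String oldStyle(char suffix, int pov, int stage, int index) {
        return String.format("%c%03d%d_%02d", suffix, pov, stage, index);
    }

    // 11.1.2.3.300+: suffix (1 letter) + POV (3 digits) + stage (1 digit)
    //                + index (3 digits), no underscore
    public static String newStyle(char suffix, int pov, int stage, int index) {
        return String.format("%c%03d%d%03d", suffix, pov, stage, index);
    }

    // Non-Unicode Essbase BSO databases cap calc script names at 8 characters.
    public static boolean fitsNonUnicode(String name) {
        return name.length() <= 8;
    }
}
```

With the old convention, index 100 pushes the name to 9 characters (A6811_100) and fails the limit, while the new convention keeps the same script at 8 characters (A6811100).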


Installs and Config

Using Transaction Logging to Recover Post-Archived Essbase data

Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.

The following information is recorded within a transaction log entry:
- Sequence ID
- Username
- Start Time
- End Time
- Request Type

A request type can be one of the following categories:
- Calculations, including the default calculation as well as both server and client side calculations
- Data loads, including data imports as well as data loaded using a load rule
- Data clears as well as outline resets
- Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

You can use the Display Transactions command in the EAS console or the query database MaxL command to view the transaction log entries.

Enabling Transaction Logging

Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:

TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

Configuring Transaction Log Replay

Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

Replaying the Transaction Log and Transaction Log Security Considerations

To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MaxL command using the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

Removing Transaction Logs and Archived Replay Data Load and Rules Files

Transaction logs and archived replay data load and rules files are not automatically removed; they can only be removed manually. Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

Partitioned Database Considerations

For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

References

For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
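Pulling the settings discussed in this post together, an essbase.cfg fragment might look roughly like the sketch below. The log path and the TRANSACTIONLOGDATALOADARCHIVE value are illustrative assumptions; check the Essbase Technical Reference for the exact values supported by your release:

```
; essbase.cfg - sketch combining the transaction logging settings above
; (paths and archive value are illustrative; verify against your release)

; Enable logging for all applications, but not for Sample
TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

; Archive both client and server data loads for replay (assumed value)
TRANSACTIONLOGDATALOADARCHIVE SERVER_CLIENT

; Replay using the administrator's security settings
REPLAYSECURITYOPTION 2
```

Place the log location on a physical disk other than the ARBORPATH disk, as recommended above, and restart Essbase after changing essbase.cfg.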


2. EPM

EPM 11.1.2.2 Architecture: Reporting and Analysis

Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java based Reporting and Analysis web application deployed on WebLogic, it enables users to browse through reports for all kinds of Oracle EPM reporting components. Typical users access the web application by browser through Oracle HTTP Server (OHS). The Reporting and Analysis web application talks to the Reporting and Analysis Agent using the CORBA protocol on various ports. All communication to the repository databases (EPM System Registry and Reporting and Analysis database) from the web and application layers is done using JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to store individual reports. While the reporting artifacts are stored on the file system, the folder structure and report based security information is stored in the relational database. The file system can be either local or remote (e.g. network share, network file system). If an external user directory is used, Reporting and Analysis services also communicate with this directory.

WebAnalysis is the ad hoc reporting utility that is part of Oracle EPM System. Users access WebAnalysis either by browser using the web based client or by WebAnalysis Studio, the Java based client. The Java based web application is deployed on WebLogic. As WebAnalysis is part of Reporting and Analysis, communication to the Reporting and Analysis Agent is done using CORBA. To access the Reporting and Analysis repository as well as the EPM System Registry, WebAnalysis uses JDBC connections. Typically WebAnalysis is used on top of Essbase or Financial Management, but it also has the ability to connect to relational databases. To access Essbase, WebAnalysis uses the Essbase JAPI. Financial Management is accessed using the ADM driver. WebAnalysis always connects to the Financial Management application server to retrieve data; it does not connect to the Financial Management database directly. Financial Management will be covered in a later post in more detail. To access its repository, the Financial Management application server uses native database clients through OLE DB connections.

Users of Financial Reporting access the application using a browser through Oracle HTTP Server. Report designers need to use the Financial Reporting Studio, a Windows application, which talks to the Financial Reporting Server through the HTTP and RMI protocols. RMI ports are dynamic by default; to make Financial Reporting Studio work through firewalls, these ports can be configured using FRconfig.cmd on Windows or FRconfig.sh on Unix. Financial Reporting is deployed on WebLogic and includes all components (e.g. Report Server, Print Server, Scheduler, Web application) that were distributed across several processes in prior releases. Communication to the Reporting and Analysis Agent is done via the CORBA protocol. The EPM System Registry and the Reporting and Analysis repository are accessed using JDBC. The data sources for Financial Reporting in the context of Oracle EPM are Essbase and Financial Management. To access Essbase, the Java API is used. Consolidation data is pulled from the Financial Management application server using the ADM driver on top of the Microsoft DCOM protocol. In case Planning supporting details are needed within a report, Planning can be a data source as well.

Interactive Reporting is the relational reporting tool within Oracle EPM System. There are two browser based clients, the HTML and the Web Client. While the HTML client can be used with a compatible browser right away, the Interactive Reporting Web Client is a plug-in that needs to be installed first. As a third client, Interactive Reporting Studio, a Windows client, is available as well. The Interactive Reporting Studio is mainly used by report and dashboard designers. Users accessing Interactive Reporting using the HTML or Web Client go through Oracle HTTP Server and talk to Reporting and Analysis Web deployed on WebLogic. The Reporting and Analysis Web application communicates with the Reporting and Analysis Agent as well as the Interactive Reporting Services on various ports using the CORBA protocol. The Interactive Reporting Services consist of the Data Access Service and the BI Service, which are used to query data and render reports on top of relational and multidimensional data sources. Data is typically accessed using native database clients or via ODBC connections.


2. EPM

EPM 11.1.2.2 Architecture: Foundation

If applicable, the component will have its standard port next to its name in brackets. EPM Foundation is Java based and consists of two web applications, Shared Services and Workspace. Both applications are accessed by browser through Oracle HTTP Server (OHS) or Internet Information Services (IIS). Communication to the backend database is done by JDBC. The file system used to store Lifecycle Management (LCM) artifacts can be either local or remote (e.g. NFS, network share). For authentication purposes, the EPM Product Suite can connect to external directories or databases. Interaction with other EPM Suite components like product specific Lifecycle Management connectors or Reporting and Analysis Web happens through the HTTP protocol.

There are two kinds of clients available for the Enterprise Performance Management Architect (EPMA). The most commonly used is the browser based web interface. In addition to the web interface, a Batch Client is available for automation purposes. Both clients use the HTTP(S) protocol to communicate with the EPMA WebServer through Oracle HTTP Server. The Java based EPMA WebServer deployed on WebLogic is the central point of contact that interacts with other components. Interaction with the EPMA DataSync Server, Planning, Profitability and Cost Management, as well as the EPMA Web Services running on IIS, is done using the HTTP protocol. JDBC is used to communicate with the repository databases. Various protocols are used by the EPMA WebServer to communicate with the application layer. Communication to Financial Management is done through the Financial Management DME Listener based on the TCP/IP protocol. Essbase Studio is accessed based on the TCP/IP protocol as well. The Essbase JAPI is used to communicate with the Essbase server. As described above, the EPMA WebServer talks to the EPMA Web Services using the HTTP protocol. Communication from the EPMA Web Services to the EPMA Dimension Server is based on NET.tcp. Keep in mind that both services, EPMA Web Services and EPMA Dimension Server, need to be on the same server. The EPMA Dimension Server uses the HTTP protocol to talk to the Data Relationship Management (DRM) Web Services. Communication to the database layer is done using native database protocols; therefore a database client needs to be installed on the EPMA Dimension Server.

Calculation Manager is accessed by a browser based client through the Oracle HTTP Server. The Java based application is deployed on WebLogic. Interaction between Calculation Manager and EPMA, Planning, as well as Financial Management, is done using the HTTP protocol. Essbase is accessed using the Essbase JAPI. All communication to repository databases is done using JDBC.


Performance Tuning

Essbase BSO Data Fragmentation

Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, frequent lock-and-send activity, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

Fragmentation is likely to occur in the following situations:
- Read/write databases where users are constantly updating data
- Databases that execute calculations around the clock
- Databases that frequently update and recalculate dense members
- Data loads that are poorly designed
- Databases that contain a significant number of Dynamic Calc and Store members
- Databases that use an isolation level of uncommitted access with commit block set to zero

There are two types of data block fragmentation:
- Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
- Block order on disk, which is measured using the Average Cluster Ratio statistic.

Average Fragmentation Quotient

The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and is either appended at the end of the file or placed in another empty space that is large enough. These empty spaces take up room in the .PAG files. The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

Average Cluster Ratio

Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, i.e. the order of the outline. As you load and calculate data, the block sequence can start to fall out of order, because when a block is written back it may not fit in the exact spot it occupied before. The lower this number, the more out of order the blocks become and the more performance is affected. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1, e.g. 0.01032828, means the data blocks are increasingly out of line with the outline order.

Eliminating Data Block Fragmentation

Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:

1. Implicit Restructures
An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or a Dimension Build. Essbase creates new .PAG files, restructuring the data blocks within them. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures.

2. Explicit Restructures
An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

Empty Blocks vs. Fragmentation

The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts shown in the database properties. There are no statistics for empty blocks.
The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database then import the exported data.  If the block count decreased, the difference is the number of empty blocks that had existed in the database.
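The checks above can be sketched in code. Here is a minimal Java sketch, assuming you have already read the Average Fragmentation Quotient, Average Cluster Ratio and block counts from MaxL or EAS; the class name and thresholds are illustrative, not an Essbase API:

```java
// Hypothetical helper: decides whether a BSO database is worth restructuring,
// given statistics read from MaxL/EAS. Thresholds are illustrative only --
// the DBA Guide discusses acceptable ranges relative to .PAG file size.
public class FragmentationCheck {

    // Average Fragmentation Quotient: percentage of free space in the .PAG files.
    // Average Cluster Ratio: 1.0 means blocks are in outline order.
    public static boolean shouldRestructure(double avgFragQuotient, double avgClusterRatio) {
        return avgFragQuotient > 10.0 || avgClusterRatio < 0.9;
    }

    // Empty blocks leave no statistic; the only measure is the block-count
    // difference across an export/clear/import cycle.
    public static long emptyBlocks(long blocksBefore, long blocksAfter) {
        return blocksBefore - blocksAfter;
    }

    public static void main(String[] args) {
        System.out.println(shouldRestructure(3.17, 1.0));   // healthy database: false
        System.out.println(emptyBlocks(120_000, 118_500));  // 1500 empty blocks existed
    }
}
```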


2. EPM

HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database

When working with a Hyperion Profitability and Cost Management (HPCM) Standard Costing application, there can often be a requirement to check data or allocated results using reporting tools, e.g. Smart View. To do this, you retrieve data directly from the Essbase databases related to your HPCM model. For information, running reports is covered in Chapter 9 of the HPCM User documentation. The aim of this blog is to provide a quick guide to finding this data for reporting in the HPCM-generated Essbase database in v11.1.2.2.x of HPCM.

In order to retrieve data from an HPCM-generated Essbase database, it is important to understand each of the following dimensions in the Essbase database and where data is located within them:
- Measures dimension – identifies Measures
- AllocationType dimension – identifies Direct Allocation data or Genealogy Allocation data
- Point of View (POV) dimensions – there must be at least one, maximum of four
- Business dimensions:
  - Stage Business dimensions – identified by the Stage prefix
  - Intra-Stage dimensions – identified by the _Intra suffix

Essbase outlines and reporting are explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s02.html
For additional details on reporting measures, please review this section of the documentation: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/apas03.html

Reporting requirements in HPCM quite often start with identifying non-balanced items in the Stage Balancing report. The following documentation link provides help with identifying some of the items within the Stage Balancing report: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/generatestagebalancing.html

The following are some types of data upon which you may want to report:
- Stage Data: Direct Input, Assigned Input Data, Assigned Output Data, Idle Cost/Revenue, Unassigned Cost/Revenue, Over Driven Cost/Revenue
- Direct Allocation Data
- Genealogy Allocation Data

Stage Data

Stage Data consists of:
- Direct Input, i.e. input data, the starting point of your allocation, e.g. in Stage 1
- Assigned Input Data, i.e. the cost/revenue received from a prior stage (stage 2 and higher)
- Assigned Output Data, i.e. for each stage, the data that will be assigned forward (post-stage data)

Reporting on this data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s03.html

Dimension selections for Stage Data:
- Measures:
  - Direct Input: CostInput, RevenueInput
  - Assigned Input (from previous stages): CostReceivedPriorStage, RevenueReceivedPriorStage
  - Assigned Output (to subsequent stages): CostAssignedPostStage, RevenueAssignedPostStage
- AllocationType: DirectAllocation
- POV: one member from each POV dimension
- Stage Business dimensions: any members of the stage business dimensions for the stage you wish to see Stage data for
- All other dimensions: NoMember

Idle/Unassigned/OverDriven

To view Idle, Unassigned or OverDriven Cost/Revenue, first select the stage for which you want to view this data. If multiple stages have unassigned/idle data, resolve the earliest first and re-run the calculation, as differences in early stages will create unassigned/idle data in later stages.

Dimension selections:
- Measures:
  - Idle: IdleCost, IdleRevenue
  - Unassigned: UnAssignedCost, UnAssignedRevenue
  - OverDriven: OverDrivenCost, OverDrivenRevenue
- AllocationType: DirectAllocation
- POV: one member from each POV dimension
- Dimensions in the stage with Unassigned/Idle/OverDriven cost: all the stage business dimensions in that stage; zoom in on each dimension to find which individual members have Unassigned/Idle/OverDriven data
- All other dimensions: NoMember

Direct Allocation Data

Direct allocation data shows the data received by a destination intersection from a source intersection where a direct assignment exists. Reporting on direct allocation data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s04.html

You would select the following to report direct allocation data:
- Measures: CostReceivedPriorStage
- AllocationType: DirectAllocation
- POV: one member from each POV dimension
- Stage Business dimensions: any members of the SOURCE and DESTINATION stage business dimensions for the direct allocations of the stage you wish to report on
- All other dimensions: NoMember

Genealogy Allocation Data

Genealogy allocation data shows the indirect data relationships between stages. Genealogy calculations run in the HPCM Reporting database only. Reporting on genealogy data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s05.html

Dimension selections:
- Measures: CostReceivedPriorStage
- AllocationType: GenealogyAllocation for releases prior to 11.1.1.3.500; IndirectAllocation for releases 11.1.1.3.500/11.1.2.x and after
- POV: one member from each POV dimension
- Stage Business dimensions: any stage business dimension members from the STARTING, INTERMEDIATE and ENDING stages in the genealogy path
- All other dimensions: NoMember

Notes

If you still don't see data after checking the above, please check the following:
- Check that the calculation has been run. A couple of indicators can help with that:
  - Note the size of the Essbase cube before and after calculations to ensure that a calculation was run against the database you are examining.
  - Export the Essbase data to a text file to confirm that some data exists.
  - Examine the date and time in the task area to see when, if ever, calculations were run and which choices were used (e.g. Genealogy choices).
- If data does not exist where expected, it could be that:
  - No calculations/genealogy were run
  - No calculations were successfully run
  - The model/data at the feeder location were absent or incompatible, resulting in no allocation, e.g. no driver data

Smart View Invocation from HPCM

From version 11.1.2.2.350 of HPCM (this version will be GA shortly), it is possible to invoke Smart View directly from HPCM. There is guided navigation before the Smart View invocation, and it is then possible to see the selected value(s) in Smart View.

Click to Download HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database (Right-click or option-click the link and choose "Save As..." to download this PDF file)
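The member-selection guidance above can also be encoded as a simple lookup structure, e.g. when scripting retrievals. Here is a hedged Java sketch of the Stage Data selections; the class name and the angle-bracketed placeholders are illustrative, not an HPCM API:

```java
import java.util.*;

// Illustrative only: the Stage Data selections described above, encoded as a
// map from dimension to member(s), as you would set them in a Smart View POV.
public class StageDataSelection {

    public static Map<String, List<String>> stageDataPov() {
        Map<String, List<String>> sel = new LinkedHashMap<>();
        sel.put("Measures", Arrays.asList("CostInput", "RevenueInput",
                "CostReceivedPriorStage", "RevenueReceivedPriorStage",
                "CostAssignedPostStage", "RevenueAssignedPostStage"));
        sel.put("AllocationType", Arrays.asList("DirectAllocation"));
        sel.put("POV", Arrays.asList("<one member from each POV dimension>"));
        sel.put("Stage business dimensions", Arrays.asList("<members for the stage of interest>"));
        sel.put("All other dimensions", Arrays.asList("NoMember"));
        return sel;
    }

    public static void main(String[] args) {
        stageDataPov().forEach((dim, members) ->
                System.out.println(dim + " -> " + members));
    }
}
```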


1. OBIEE

Exalytics v1, not able to vertically scale-out Oracle Business Intelligence system components

In Exalytics v1, Oracle Business Intelligence system components cannot be vertically scaled out. This issue exists only if Oracle Business Intelligence system components 11.1.1.6.0 are installed in Exalytics using the ANT installation script.

Solution: To fix vertical scale-out for the BI system components (OBIPS / OBIS / JavaHost), two corrections are needed in the opmn.xml file: remove the invalid element (</variable>) and add the missing character ('/'):

<variable id="LD_LIBRARY_PATH" value="$ORACLE_HOME/common/ODBC/Merant/5.3/lib$:$ORACLE_HOME/bifoundation/server/bin$:$ORACLE_HOME/bifoundation/web/bin$:$ORACLE_HOME/clients/epm/Essbase/EssbaseRTC/bin$:$ORACLE_HOME/bifoundation/odbc/lib$:$ORACLE_INSTANCE$:$ORACLE_HOME/lib:/home/oracle/EXALYTICS_MWHOME/../TimesTen/tt1122/lib" append="true"> (add a trailing '/' to close this entry)
<variable id="TIMESTEN_DLL" value="/home/oracle/EXALYTICS_MWHOME/../TimesTen/tt1122/lib/libttclient.so"/>
</variable> (remove this </variable> entry)

In summary: add a trailing '/' to the LD_LIBRARY_PATH entry and remove the </variable> entry after the TIMESTEN_DLL entry. Then restart the BI system components; you should now be able to vertically scale out the BI system components via Enterprise Manager (EM).


Performance Tuning

Essbase “Virtual Cube” BSO Tuning

An Essbase “virtual cube” is a specific design setup where transparent partitioning is used: multiple source databases are partitioned to a single target Block Storage (BSO) database. The single target BSO database is empty, meaning it contains no data. The purpose of this design is to give end users one consolidated BSO cube for reporting. In some cases, the target BSO database may have different dynamic calc member formulas than the source databases. The target BSO database may also have more dimensions to accommodate the source databases, and may have a different sparse/dense configuration, since the source databases are tuned according to their own data.

The Essbase Database Administrator's Guide describes how to tune Essbase BSO databases that contain data; however, there is no documentation on how to tune a BSO database that contains no data, as in a “virtual cube” setup. For the most part, the assumption is that little to no tuning is needed with “virtual cubes” because there is no data in the cube. However, we have found from past experience that query performance against an empty target BSO cube may still need basic outline tuning, i.e. sparse/dense configuration changes. Since there is no data in the target BSO cube, meaning IND/PAG files do not exist and there are no data blocks, “virtual cube” tuning is limited: we cannot rely on statistics that assume data exists in the database. With that constraint in mind, the following are some “virtual cube” tuning considerations.

Sparse/Dense Configuration (Dense Block)

In most cases, the virtual target BSO database outline is created based on one of the existing source database outlines. The sparse/dense configuration of an existing source database outline is derived from tuning against the data distribution of that particular database, which may not follow general best practices, where the Accounts and Time dimensions are dense and the outline is in hourglass dimension order. Here is an example of what a virtual target BSO outline might look like if derived from an existing source database outline:

Dimension     Type    Total Members  Stored Members
Time          Dense   17             12    <-- dynamic calc formulas
Drivers       Dense   230            230
Scenario      Sparse  15             7     <-- dynamic calc formulas
Accounts      Sparse  500            250   <-- dynamic calc formulas
Version       Sparse  25             25
Years         Sparse  6              5
Currency      Sparse  12             12
Product       Sparse  3000           2500
CostCenters   Sparse  5000           4500
Entities      Sparse  10000          9500

The dense block size for this outline is roughly 22 KB, with Time and Drivers as the dense dimensions. Note that there are sparse dynamic calculations in this outline, because the Accounts and Scenario dimensions are set to sparse. Here is the same virtual target BSO outline with general outline best practices applied:

Dimension     Type    Total Members  Stored Members
Accounts      Dense   500            250   <-- dynamic calc formulas
Time          Dense   17             12    <-- dynamic calc formulas
Scenario      Dense   15             7     <-- dynamic calc formulas
Years         Sparse  6              5
Currency      Sparse  12             12
Version       Sparse  25             25
Drivers       Sparse  230            230
Product       Sparse  3000           2500
CostCenters   Sparse  5000           4500
Entities      Sparse  10000          9500

The dense block size for this outline is roughly 164 KB, with Accounts, Time and Scenario as the dense dimensions. All of the dynamically calculated formulas are now in the dense block, as opposed to being sparse calculations. The query performance of this virtual target BSO outline will far exceed the performance of the previous one.

Using the above customer test-case outlines, a simulated 10-user test scenario produced failures and unacceptable query performance with the original outline configuration. The same test scenario with the tuned virtual BSO outline configuration could ramp up to 25 users with no failures and acceptable query performance. The virtual target BSO outline tuning changed the following:
1. The number of requests from the target to the source databases was reduced.
2. The size of each request increased.
3. The total number of cells queried remained the same.
By including the dynamic calculations in the dense block, the query layer processed fewer requests, but the queries extract data in bigger pieces, thereby improving performance by reducing I/O traffic.
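The block-size figures quoted above follow from simple arithmetic: multiply the stored member counts of the dense dimensions and allow 8 bytes per cell. A small Java sketch (the class name is illustrative):

```java
// Sketch of the dense-block-size arithmetic used above: block size is the
// product of the stored member counts of the dense dimensions, times 8 bytes
// per cell (Essbase stores each cell as an 8-byte double).
public class BlockSize {

    public static long blockSizeBytes(int... storedDenseMembers) {
        long cells = 1;
        for (int m : storedDenseMembers) cells *= m;
        return cells * 8;
    }

    public static void main(String[] args) {
        // Original outline: Time (12 stored) and Drivers (230 stored) dense
        System.out.println(blockSizeBytes(12, 230) / 1024.0);    // ~21.6 KB ("roughly 22 KB")
        // Tuned outline: Accounts (250), Time (12), Scenario (7) dense
        System.out.println(blockSizeBytes(250, 12, 7) / 1024.0); // ~164 KB
    }
}
```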


3. Exalytics

Architecture history: WebLogic Admin Server failover capabilities on a different machine (Active/passive mode) from version 8 to 12

All descriptions here concern making the Admin Server restart in Active/Passive mode in a vertical scaling case.

WebLogic compatibility versions with EPM/BI:

EPM
- EPM 9.3.3+ is certified with Oracle WebLogic Server 8.1.6, 9.1+, 9.2+: http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/hsys9_install_start_here_933.pdf
- EPM 11.1.1.3 is certified with Oracle WebLogic Server 9.2 MP1+
- EPM 11.1.1.4 is certified with Oracle WebLogic Server 9.2 MP4+: http://www.oracle.com/technetwork/middleware/bi-foundation/oracle-hyperion-epm-system-certific-2-128342.xls
- EPM 11.1.2.x is certified with Oracle WebLogic Server 11gR1 (10.3.5 and 10.3.6): http://www.oracle.com/technetwork/middleware/bi-foundation/oracle-hyperion-epm-system-certific-131801.xls

OBIEE
- OBIEE 10.1.3.3+ is certified with Oracle WebLogic 9.x, 10.0+, 10.3+, 10.3.x+: http://download.oracle.com/docs/cd/E10415_01/doc/bi.1013/e10417.pdf
- OBIEE 11.1.1.3+ is certified with Oracle WebLogic Server 11gR1 (10.3.3)
- OBIEE 11.1.1.5+ is certified with Oracle WebLogic Server 11gR1 (10.3.5)
- OBIEE 11.1.1.6+ is certified with Oracle WebLogic Server 11gR1 (10.3.5/10.3.6; OBIEE 11.1.1.6.0 on Exalytics is not supported with 10.3.5): http://www.oracle.com/technetwork/middleware/bi-enterprise-edition/bi-11gr1certmatrix-166168.xls

BIApps
- Oracle Business Intelligence Applications 7.9.6.0, 7.9.6.1 and 7.9.6.2 are supported on Oracle Business Intelligence Enterprise Edition 10.1.3.4.1 and 10.1.3.4.2, so they are certified with Oracle WebLogic 9.x, 10.0+, 10.3+, 10.3.x+: http://download.oracle.com/docs/cd/E14223_01/bia.796/e14221.pdf
- Oracle Business Intelligence Applications for SAP (7.9.7.1) is certified with Oracle Business Intelligence Enterprise Edition 11.1.1.5 and 11.1.1.6, so with Oracle WebLogic Server 11gR1 (10.3.5/10.3.6): http://www.oracle.com/technetwork/middleware/bi/obia7971cert-matrix-1523675.xls
- Oracle BI Applications 7.9.6.3 is certified with Oracle Business Intelligence Enterprise Edition 11.1.1.5 and 11.1.1.6, so with Oracle WebLogic Server 11gR1 (10.3.5/10.3.6): http://download.oracle.com/docs/cd/E14223_01/bia.796/e14221.pdf
- Oracle BI Applications 11.1.1.5.1 is certified with Oracle WebLogic Server 11gR1 (10.3.5) and Oracle Business Intelligence Enterprise Edition 11.1.1.5.0: http://www.oracle.com/technetwork/middleware/bi/oracle-obia-111151-cert-matrix-525376.xls

Failover for the WebLogic Administration Server:

To prevent the Administration Server from becoming a single point of failure: Managed Servers can always function without the presence of an Administration Server, but an Administration Server is required to manage and monitor the domain. By maintaining backups of the config.xml file and certain other resources for a domain, you can replace a failed Administration Server with a backup WebLogic Server instance that can assume the role of Administration Server.

WebLogic 8.1 failover capabilities:
http://docs.oracle.com/cd/E13222_01/wls/docs81b/adminguide/failures.html#1118669
http://docs.oracle.com/cd/E13222_01/wls/docs81b/adminguide/failures.html#1121509

WebLogic 9.2 failover capabilities:
http://docs.oracle.com/cd/E13222_01/wls/docs90/server_start/failures.html#1059133
http://docs.oracle.com/cd/E13222_01/wls/docs90/server_start/failures.html#1060813

WebLogic 10g:
http://docs.oracle.com/cd/E21764_01/core.1111/e10105/br_rec.htm#ASADM10159
http://docs.oracle.com/cd/E21764_01/core.1111/e10105/br_rec.htm#BCGDADHC

WebLogic 12c:
Blog for 12c: https://blogs.oracle.com/WebLogicServer/entry/weblogic_server_weekly_for_march1
http://docs.oracle.com/cd/E24329_01/web.1211/e21048/failures.htm
http://docs.oracle.com/cd/E24329_01/web.1211/e21048/failures.htm#i1060779
http://docs.oracle.com/cd/E24329_01/web.1211/e21048/failures.htm#i1064077

Conclusion: Since WebLogic version 8, there has been consistent guidance on how to restart/restore a WebLogic Admin Server (domain) on another machine (in case the machine or site goes down). The ONLY requirement is to make sure the configuration data for the WebLogic domain has been backed up, manually or automatically. A shared drive can also be useful if you want to reduce the time needed for the DR process and keep an up-to-date backup available. For EPM and OBIEE, this process works ONLY for Admin Server failover; please refer to the EPM and OBIEE documentation for additional details:
- EPM: http://docs.oracle.com/cd/E17236_01/epm.1112/epm_high_avail_11121/ch01s04s01.html
- OBIEE: http://docs.oracle.com/cd/E23943_01/core.1111/e10106/bi.htm#sthref981


Installs and Config

Charts in BI not rendering after applying OBIEE 11.1.1.6.2BP1

Issue: Charts in BI are no longer rendering in IE or Firefox after applying OBIEE 11.1.1.6.2 BP1.

Charts fail to render because the update process does not touch the custom directories (skins, styles); it only updates the default skins and styles directories (s_blafp and sk_blafp). The reason is that we do not want to overwrite any customizations done by customers/partners.

Note: If you are using a custom skins directory, you will have to copy BIChart.swf from the default directory or the install directory into your custom skins directory. If you do not replace the file, the charts will not render. This change requires that the Oracle BI Presentation Services component be restarted via opmnctl and that the browser cache be cleared. Once that has been done, the charts will render.

In a vanilla installation, the BIChart.swf file is updated in the following two locations (NOTE: verify that BIChart.swf is 457 KB (466,944 bytes) with a date of 6/4/2012 in both):
1. %Middleware_Home%\Oracle_BI1\bifoundation\web\app\res\sk_blafp\dvt
2. %Middleware_Home%\user_projects\domains\bifoundation_domain\servers\bi_server1\tmp\_WL_user\analytics_11.1.1\7dezjl\war\res\sk_blafp\dvt

You will need to update any additional locations (custom directories) manually by copying BIChart.swf from one of the above locations to the custom directory, for example:
%Middleware_Home%\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\analyticsRes\sk_<customdirectory>\dvt

Do not update the BIChart.swf file located in the \Oracle_BI1\.patch_storage directories, as these are the backup files from the opatch process. Restart the Oracle BI Presentation Services component using opmnctl (opmnctl stopproc ias-component=coreapplication_obips1, then opmnctl startproc ias-component=coreapplication_obips1).

For more information, please refer to the 11.1.1.6.2 BP1 readme file, located here: https://updates.oracle.com/Orion/Services/download?type=readme&aru=15096099
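The manual copy step above can be scripted. Here is a hedged Java sketch using standard java.nio.file calls, assuming a Unix-style Middleware home; the paths and the skin name sk_custom are illustrative only and must be adjusted for your environment:

```java
import java.io.IOException;
import java.nio.file.*;

// Hedged sketch: compute the custom-skin target directory and copy BIChart.swf
// into it. The Middleware home and skin name below are illustrative only.
public class CopyChartSwf {

    // Builds <analyticsRes>/<customSkin>/dvt, the directory that must
    // receive the patched BIChart.swf.
    public static String targetDvtDir(String analyticsRes, String customSkin) {
        return analyticsRes + "/" + customSkin + "/dvt";
    }

    public static void main(String[] args) throws IOException {
        // Assumed paths -- substitute your own %Middleware_Home% locations.
        String defaultDvt = "/u01/Middleware/Oracle_BI1/bifoundation/web/app/res/sk_blafp/dvt";
        String analyticsRes = "/u01/Middleware/instances/instance1/bifoundation/"
                + "OracleBIPresentationServicesComponent/coreapplication_obips1/analyticsRes";
        Path src = Paths.get(defaultDvt, "BIChart.swf");
        Path dst = Paths.get(targetDvtDir(analyticsRes, "sk_custom"), "BIChart.swf");
        if (Files.exists(src)) { // only copy when run on a real installation
            Files.createDirectories(dst.getParent());
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Copied to " + dst);
        }
        // Afterwards: restart Presentation Services via opmnctl and clear the browser cache.
    }
}
```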


Installs and Config

Oracle Hyperion Financial Management (HFM) Configurable Dimensions FAQ

What is Configurable Dimensions?Configurable Dimension allows the HFM administrator to set the number of custom dimensions in the application. For each dimension it is possible to set the size of the dimension and give a short name and a long name (alias). What are the benefits of Configurable Dimensions?Configurable Dimensions provides HFM the flexibility lacking in the traditional tools with predefined fixed structures. HFM applications can be built with the exact number of custom dimensions needed. If 2 custom dimensions are necessary, then the application can be built with 2 dimensions. If it needs 4 or 6, the number of custom dimension can be set accordingly. Customer requirements drive the design and the structure of the application without constraints or compromises.What kind of application can be built in HFM?HFM is the preferred tool to create financial reporting applications. Management reporting and legal consolidation can be handled within the same application. Configurable dimension provides an expandable framework ideal to handle all the required details, whether it is operational or legal. For instance some dimensions can be used to analyze Product and Market information, while other dimensions for type of adjustments or eliminations. All the information in these configurable dimensions is automatically available in Web forms, Reports and in Excel using the SmartView module. What is the “size” of a dimension?The size of a dimension determines the total number of elements in the dimension. Depending on the size of the dimension, the system allocates in the database a different number of bytes for the index . The size of a dimension can be either:Large: 2 billions members (4 bytes used internally) Medium: 32,000 members (2 bytes used internally) Small: 128 members (1 byte used internally) What is the minimum number of custom dimensions in an application?The minimum is 2 custom dimensions. 
Technically the system requires these 2 dimensions to store the source and destination currencies for the exchange rates. The system will automatically create these 2 dimensions with a Large size. What is the maximum number of dimension in an application? The number of dimensions is limited by the total number of bytes used for all the index of these dimensions. This total must not exceed 40 bytes.The system will always create the first two dimensions as Large, which will use 8 bytes. There will be 32 bytes remaining for additional customs. SmallMediumLargeMaximum number of elements128320002 billionsNumber of bytes used internally per dimension124Possible number of dimensions in the applicationNone to 32None to 162 to 10When creating an application, can I mix Large, Medium and Small dimensions in the same application?Yes. It is possible to create some custom dimensions as Large and other dimensions as Medium or Small. What happens when I reach the maximum number of elements in a dimension?Although the officially published maximum number of elements is 128; 32,000; 2 billions for Small, Medium, and Large dimension, internally the system can accommodate twice this number of elements. Once the maximum number of elements is reached in a dimension, it is not possible to create additional elements. The solution is then to create a new application and set the dimension with a larger size. Can we delete elements in order to regain some index number for new elements?No. The system does not reuse index number of deleted elements. After the application has been created in a Development environment, it is recommended to create a brand new application for Production containing only the final version of metadata and using internally only the exact number of index Does Configurable Dimension have an impact on performance?Configurable dimension does not have a direct impact on performance. 
In an application where Calculation rules are correctly written, performance is driven by the amount of data in the application, not by the number of dimensions or the number of elements in each dimension. Does adding dimension to an existing application have an impact on performance?Adding dimensions to an existing application is likely to introduce more detailed reporting and therefore require additional data and additional calculation rules. These additional data and rules will have an impact on performance. What is the optimal number of custom dimension?There is no technical optimum number independent from the business requirements. For a specific application, the optimal number of dimensions is the smallest number of dimensions necessary to handle the business requirements. Is usability better with more dimensions or fewer dimensions?Usability is better with fewer dimensions. It will be easier for the users to view and set the Point of View with fewer dimensions. There will be less dimensions to use in Journals, Reports, Grids, Web forms, SmartView. Should existing applications be modified following the introduction of configurable dimensions?The existing applications can be converted using the migration utility, they can continue to be used as is. We have an existing application with 4 custom dimensions where only 3 custom dimensions are used. How can I remove the 4th unused dimension?It is not possible to remove a dimension from an existing application. A new application must be created. Should existing applications be redesigned to take advantage of configurable dimensions?If the business requirements are the same, then the existing application should remain the same. If the reporting needs are evolving, then it is possible add new dimensions to the existing application. How many custom dimensions should be used in an application?The number of custom dimensions depends on the level of details of the reporting. 
When the same detailed analysis exist for multiple accounts, this detail should be created in a custom dimension. For instance, if Sales, Cost of Good Sold and Gross Margin must all be detailed by products, then it is advised to create a Product hierarchy in a custom dimension. When an account must be analyzed by a combination of details, for instance Sales by Product and also simultaneously by Market, then these details should be be created in separate dimensions. The more analytical details are required, the more dimensions are necessary. For instance, if Sales, Cost of goods sold and Gross margin must be analyzed simultaneously by Product, Market, Channels, and Packages, then it is advised to use 4 custom dimensions. Can custom dimensions contain details of different natures?Yes. A custom dimension can contain different type of details, as long as these details are used on different accounts. For instance if the P&L accounts are detailed by Products and the Balance Sheet accounts are detailed by Flows, then it is possible to use the same custom dimension for both details. Products and Flows will be created as separate hierarchies in the same dimension. When defining the accounts, P&L accounts will be associated with Product hierarchy and Balance Sheet accounts will be associated with Flow hiearchy. Is it better to create more custom dimensions or fewer custom dimensions?We believe it is better to create fewer dimensions because it will be easier for the user to navigate in the product. When possible we think it is better to use the same dimension for different type of details, using the top member attributes in accounts to define the correct intersections. How does the system associate custom elements with accounts?Like in prior version, the valid custom dimension elements are associated with accounts using metadata attribute <Dimension Name>TopMember. Should I create the application with extra unused dimensions, just in case it will be needed later?No. 
The application should be created with the exact number of dimensions required by the current business requirements. We will provide a separate utility to add custom dimensions to an existing application. When a new dimension is added to an existing application, all historical data is preserved: the new dimension contains a [None] element, and the existing data is set to [None] for the new dimension.

What is the impact of configurable dimensions on the user interface?
Several changes were made in the user interface of the product to accommodate a variable number of custom dimensions. For instance, it is possible to control which dimensions are displayed in the Point of View, and also how the Point of View is displayed.

Are all custom dimensions required when entering a Journal?
Yes. To adapt the Journal module to a possibly large number of custom dimensions, the system can automatically move common elements from the rows to the page so that the user does not have to repeat the same element on every row.

How do the rules work with configurable dimensions?
All existing functions continue to work. Some new generic functions have been created to handle applications with more than 4 custom dimensions.

How are custom dimension names and dimension aliases used in the system?
Both the custom dimension name and the alias can be referenced in rules, web forms, and system reports.

What is the impact of custom dimensions on the infrastructure platform?
There is no direct impact of custom dimensions on the infrastructure. The infrastructure must be sized based on the number of concurrent users, the amount of data processed during consolidation, the amount of data processed in reports or other data retrieval mechanisms, and the intensity of the calculations required. For best performance, a 64-bit system with fast processors and a sufficient amount of memory is recommended.
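As a concrete illustration of the <Dimension Name>TopMember association described above, here is a small Python model. This is not the HFM API — the hierarchy, account names, and helper functions are all hypothetical — it only sketches how one shared custom dimension can hold both a Product and a Flow hierarchy, with each account restricted to the members under its own top member:

```python
# Illustrative model (NOT the HFM API) of <Dimension Name>TopMember:
# a shared custom dimension containing two hierarchies (Products and
# Flows), with each account valid only under its own top member.
# All names below are hypothetical.

CUSTOM1_HIERARCHY = {
    "AllProducts": ["Hardware", "Software"],
    "Hardware": ["Servers", "Laptops"],
    "Software": ["Database", "Middleware"],
    "FLOWS": ["Opening", "Movement", "Closing"],
}

# Account metadata: each account names the top member of the shared
# custom dimension that is valid for it (P&L accounts -> Products,
# Balance Sheet accounts -> Flows, as in the FAQ example).
ACCOUNT_TOP_MEMBER = {
    "Sales": "AllProducts",   # P&L account
    "CashFlow": "FLOWS",      # Balance Sheet account
}

def descendants(member):
    """Return the member together with all of its descendants."""
    result = {member}
    for child in CUSTOM1_HIERARCHY.get(member, []):
        result |= descendants(child)
    return result

def is_valid_intersection(account, custom_member):
    """A custom member is valid for an account if it falls under that
    account's TopMember."""
    top = ACCOUNT_TOP_MEMBER[account]
    return custom_member in descendants(top)
```

For example, `is_valid_intersection("Sales", "Laptops")` is valid, while `is_valid_intersection("Sales", "Closing")` is not, because "Closing" sits under the Flow hierarchy rather than under "AllProducts".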

What is Configurable Dimensions? Configurable Dimensions allows the HFM administrator to set the number of custom dimensions in the application. For each dimension it is possible to set the size of the...

Performance Tuning

HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider.

Because at least one dense dimension is required to deploy an application from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. For optimisation reasons, however, you may wish to change this default dense/sparse configuration.

In general, calculation scripts in HPCM execute best when they target destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application.

It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Such manual changes may not be preserved in all cases; the deployment methods below explain why.

There are two methods of deployment from HPCM to Essbase from 11.1.2.1.
There is a “replace” deploy method and an “update” deploy method. “Replace” deletes the Essbase application and replaces it; if this method is chosen, any changes made directly on the Essbase outline will be lost. If you use the “update” deploy method (with or without archiving and reloading data), the Essbase outline, including any manual changes you have made (i.e. changes to the dense/sparse settings of cloned dimensions), will be preserved.

Notes

If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense.

Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100 KB for 32-bit Essbase and 200 KB for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size, provided that sufficient memory is available on the Essbase server. Test different configurations to determine the optimal solution for your HPCM application.

Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache, and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator’s Guide. For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit
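To make the block-size guidance concrete, here is a minimal Python sketch of the standard Essbase block-size arithmetic (stored members across the dense dimensions multiplied together, times 8 bytes per cell); the dimension names and member counts are hypothetical:

```python
# Essbase block size = product of the stored member counts of the dense
# dimensions, times 8 bytes per cell. The dense configuration below is
# a made-up example for an HPCM cube, not a recommendation.

def block_size_bytes(dense_stored_members):
    size = 8  # bytes per stored cell
    for count in dense_stored_members.values():
        size *= count
    return size

dense = {"Measures": 40, "AllocationType": 3, "Period": 12}
size = block_size_bytes(dense)
print(size)  # 40 * 3 * 12 * 8 = 11520 bytes, well under the guidance

# Guidance cited in this post: ~100 KB (32-bit) / ~200 KB (64-bit) maximum.
print(size <= 100 * 1024)
```

If a candidate dense/sparse configuration pushes the result far past the recommended maximum, that is a signal to re-test before settling on it, although larger blocks can still perform well when the Essbase server has sufficient memory.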


Installs and Config

EPM 11.1.2 - EAS Jobs Taking Longer than 5 Minutes Launch Multiple Essbase Sessions

If a job initiated from EAS takes longer than 5 minutes, a new Essbase connection and a new Essbase session will be launched before the first session has completed. An Essbase connection and a new session will be launched at 5-minute intervals while the job is running in Essbase, creating multiple sessions per job and unnecessarily placing a heavier load on the Essbase agent.

Currently, the WebLogic HTTP proxy timeout settings for EAS and EAS Console are set at the default value of 300 seconds, or 5 minutes. This timeout setting in the mod_wl_ohs.conf file will need to be modified if EAS-initiated jobs are expected to take longer than 5 minutes. The mod_wl_ohs.conf file is located in the MIDDLEWARE_HOME/user_projects/epmsystem1/httpConfig/ohs/config/OHS/ohs_component/ folder. Find the LocationMatch sections for eas and easconsole and add the WLIOTimeoutSecs parameter, setting the value in seconds to a number larger than the longest-running job. WLIOTimeoutSecs defines the amount of time the plug-in waits for a response to a request from WebLogic Server. The plug-in waits HungServerRecoverSecs for the server to respond, then declares that server dead and fails over to the next server.

After the mod_wl_ohs.conf changes have been saved, the HTTP server will need to be restarted for the changes to take effect.
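A sketch of the resulting mod_wl_ohs.conf entries, assuming the default OHS proxy configuration; the WebLogicHost/WebLogicPort values are placeholders, and the LocationMatch patterns should match the eas and easconsole entries already present in your file:

```apache
# Hypothetical example - adjust host, port, and patterns to your environment.
<LocationMatch ^/eas/>
    SetHandler weblogic-handler
    WebLogicHost easserver.example.com
    WebLogicPort 10080
    # Wait up to 1 hour for a response instead of the 300 s default;
    # set this larger than your longest-running EAS job.
    WLIOTimeoutSecs 3600
</LocationMatch>

<LocationMatch ^/easconsole/>
    SetHandler weblogic-handler
    WebLogicHost easserver.example.com
    WebLogicPort 10080
    WLIOTimeoutSecs 3600
</LocationMatch>
```

Remember that the OHS HTTP server must be restarted after saving the file for the new timeout to take effect.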


System Security

OBIEE 11.1.1 - Web catalog upgrade from 10g to 11g corrupted some user permissions

A web catalog upgrade from 10g to 11.1.1.3, 11.1.1.5, or 11.1.1.6 can corrupt some user permissions, which may cause slow user login times as well as slowness in accessing dashboards.

Applies to: OBIEE 11.1.1.3, 11.1.1.5, and 11.1.1.6.

Symptoms: Long login times, long navigation times (dashboard drop-down list), and long search times (especially on the web catalog), particularly when using Internet Explorer.

Causes: Web catalog permissions are invalid for several users; the web catalog is not clean and appears to be corrupted at each Invalid occurrence (see Picture 1). Putting only the securities on another machine with a standard web catalog shows optimal response times.

Picture 1: Example of corrupted permissions

First, take a backup of the full web catalog and open it in offline mode using OBIEE Catalog Manager.

Resolution: There are 2 steps that are mandatory to resolve the performance issues.

1. Clean up invalid permissions in the web catalog. Here are the commands/instructions to remove the users from the web catalog:

1) Open your web catalog using Catalog Manager in offline mode.
2) Locate the “/” folder and edit the permissions.
3) In the Additional Users and Application Roles box, choose “User” and click the List button.
4) In the table of results displayed, filter by type of user, putting the “Invalid” ones on top.
5) List all Invalid users to be cleaned.
6) Create a file called cleanusers.cmd in the directory containing the runcat.cmd file, with one line per invalid user, as follows:
runcat -cmd forgetAccounts -username xxxxx -cleanup -offline "web catalog PATH"
7) Run the cleanusers.cmd command.
8) Open your web catalog using Catalog Manager in offline mode.
9) Locate the “/” folder and edit the permissions.
10) In the Additional Users and Application Roles box, choose “User” and click the List button.
11) Verify that all users are now valid.

Sometimes you will find that it is not possible to erase permissions by running runcat commands. It can turn out that those users are in fact linked to Agents; clean these up appropriately to get rid of the last Invalid permissions.

2. Run an automated clean via Presentation Services: clean up the web catalog via instanceconfig.xml using the steps documented in "Validating the Catalog" in the System Administrator's Guide for Oracle Business Intelligence Enterprise Edition.

Tip: It is recommended to run all the above tasks each and every time content is delivered from a development environment into the production environment (e.g. weekly), as part of the OBIEE administrator's regular backup and maintenance of the catalog.
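Step 6 above can be scripted. Here is a hypothetical Python helper (the user names and catalog path are placeholders) that generates the cleanusers.cmd contents, one runcat forgetAccounts line per invalid user:

```python
# Hypothetical helper: build the cleanusers.cmd batch file described in
# step 6, one "runcat -cmd forgetAccounts" line per invalid user found
# in Catalog Manager. User names and the catalog path are placeholders.

def build_cleanusers_cmd(invalid_users, catalog_path):
    lines = []
    for user in invalid_users:
        lines.append(
            'runcat -cmd forgetAccounts -username {} -cleanup -offline "{}"'
            .format(user, catalog_path)
        )
    return "\n".join(lines)

script = build_cleanusers_cmd(
    ["jdoe", "asmith"],                      # invalid users listed in step 5
    r"D:\OBIEE\instances\instance1\catalog"  # placeholder catalog path
)
print(script)
```

Write the returned text to cleanusers.cmd in the directory containing runcat.cmd, then run it as in step 7.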


Performance Tuning

Essbase 11.1.2 - AgtSvrConnections Essbase Configuration Setting

AgtSvrConnections is a documented Essbase configuration setting used in conjunction with the AgentThreads and ServerThreads settings. When a user logs into Essbase, an AgentThread connects to the ESSBASE agent process, AgtSvrConnections connect the ESSBASE process to the ESSSVR application process, and ServerThreads are then used for end-user activities. In Essbase 11.1.2, the default value of the AgtSvrConnections setting was changed to 5; in previous Essbase releases, the default value was 1.

It is recommended that tuning of the AgtSvrConnections setting be done incrementally, by 1 or 2 at most, based on the number of concurrent Set Active/Clear Active calls. In the Essbase DBA Guide and Technical Reference, the recommendation is not to exceed the AgentThreads setting; however, we have found that most customers do not need to exceed a setting of 10. In general, it is fine to set AgtSvrConnections close to the AgentThreads setting, but there have been customers that needed an AgentThreads setting greater than 10, and we have found that an AgtSvrConnections setting higher than 5-10 can have a negative impact on Essbase, because too many TCP ports are used unnecessarily. As with all essbase.cfg settings, it is best to set values to what is needed based on process load, not arbitrarily high values.

To monitor and tune the AgtSvrConnections setting, watch the application log for logins and Set Active/Clear Active messages. If many logins and Set Active/Clear Active messages occur in a short period of time, making it appear that logins are taking longer, incrementally increase the AgtSvrConnections setting by 1 or 2, which can help with login speed. The login performance tolerance differs from one customer environment to another, since there are other factors that can impact this performance, i.e.
network latency.

What happens in Essbase when a user logs in: ESSBASE issues a Set Active to the ESSSVR process (each application has its own ESSSVR process). Set Active then calls MultipleAsyncLogout and waits on the pipe connection. MultipleAsyncLogout goes back to ESSBASE, and ESSBASE then needs to send the logout back to the ESSSVR process. When the AgtSvrConnections setting needs to be increased from the default of 5, it is because Essbase cannot find a free connection, since the existing connections are in use between ESSBASE and ESSSVR. In this example, we might increase AgtSvrConnections from 5 to 7 to improve login performance.

Again, it is best to set Essbase settings to what is needed based on process load, not arbitrarily high values. In general, stress or performance testing environments using automated tools may need higher than normal settings, because automated processes log in and log out at high speed. In a real-life production environment, the settings are typically much closer to the default values.
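Put together, the three settings discussed above live in essbase.cfg; a sketch, with thread counts that are purely illustrative rather than recommendations for any specific environment:

```
; essbase.cfg sketch - values below are illustrative, not recommendations
AgentThreads 10
ServerThreads 20
; Default is 5 in 11.1.2 (1 in earlier releases); increase by 1 or 2 at a
; time while monitoring Set Active / Clear Active messages in the app log
AgtSvrConnections 7
```

As with any essbase.cfg change, restart the Essbase agent for the new values to take effect, and re-check login behavior under a realistic concurrent load before settling on a value.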

