Monday Oct 29, 2012

ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop


There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be?

We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service.

Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels.

Let’s take a brief look at some of the most useful tools available and see how they make a difference.

Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if, then, else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, agents of every experience level, from novice to expert, get a screen relevant to their role and responsibilities, so the job is done quickly and efficiently the first time round.
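Conceptually, a workspace rule is just a trigger that shows or hides controls depending on the data in the open record. The rules themselves are configured in the administration console rather than written in code, but the if/then/else idea can be sketched roughly like this (a hypothetical illustration only; all field and control names are invented):

```python
# Hypothetical sketch (not the RightNow rule engine itself; all names are
# invented): each workspace rule is an if/then/else trigger that decides
# which controls appear for the record the agent has open.

def evaluate_rules(record, rules):
    """Return the set of controls to display for this record."""
    visible = set()
    for condition, then_controls, else_controls in rules:
        if condition(record):
            visible.update(then_controls)
        else:
            visible.update(else_controls)
    return visible

# If the incident is a billing query, show the billing panels; otherwise
# fall back to a generic product panel.
rules = [
    (lambda r: r["category"] == "billing",
     {"billing_history", "refund_button"},
     {"product_panel"}),
]

print(sorted(evaluate_rules({"category": "billing"}, rules)))
```

The point of the sketch is simply that the agent never sees controls that are irrelevant to the record in front of them.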

Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides for quickly crafting chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.
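The decision-tree structure behind a guide can be pictured as nested question/answer nodes ending in solutions. A minimal sketch (an illustration only, not the RightNow data model; the questions are invented):

```python
# Hypothetical sketch of a guided-assistance decision tree (illustration only,
# not the RightNow data model): each node is either a question with possible
# answers, or a leaf holding the suggested solution.

tree = {
    "question": "Does the device power on?",
    "answers": {
        "no": {"solution": "Check the power cable and fuse."},
        "yes": {
            "question": "Is the network light green?",
            "answers": {
                "no": {"solution": "Reconnect the network cable."},
                "yes": {"solution": "Escalate to second-line support."},
            },
        },
    },
}

def walk(node, replies):
    """Follow a list of replies down the tree until a solution is reached."""
    for reply in replies:
        if "solution" in node:
            break
        node = node["answers"][reply]
    return node["solution"]

print(walk(tree, ["yes", "no"]))  # → Reconnect the network cable.
```

Each answer narrows the problem until a solution is reached, which is exactly why the same tree works for a novice agent, an expert, or a self-serving customer.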

Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you can design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win!
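A sequence of decisions and actions like this can be modelled as a tiny state machine: each step either performs an action (creating or modifying a record behind the scenes) or makes a decision, and then names the next step. The sketch below is a conceptual illustration only, not RightNow’s workflow engine; every step name is invented:

```python
# Hypothetical sketch of a desktop workflow: a sequence of decisions and
# actions, where each step can create or modify records behind the scenes
# and then names the next step (None ends the workflow).

def identify(ctx):
    # Decision: do we already know this customer?
    return "create_incident" if ctx["known_customer"] else "create_contact"

def create_contact(ctx):
    ctx["records"].append("contact")    # action: create a contact record
    return "create_incident"

def create_incident(ctx):
    ctx["records"].append("incident")   # action: create the incident record
    return None                         # end of the workflow

def run_workflow(steps, start, ctx):
    step = start
    while step is not None:
        step = steps[step](ctx)
    return ctx

steps = {"identify": identify,
         "create_contact": create_contact,
         "create_incident": create_incident}

ctx = run_workflow(steps, "identify", {"known_customer": False, "records": []})
print(ctx["records"])  # → ['contact', 'incident']
```

The agent only ever sees the current step; the record creation and branching happen automatically, which is where the time saving comes from.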

I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

Five Top Tips to take away:

  1. Start by working out “why” and “how” customers are contacting you.
  2. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated consider using Desktop Workflow to streamline the interaction.
  3. Enhance your Knowledgebase with Guides. Agents can access them proactively, and the guides can be published on your web pages for customers to help themselves.
  4. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.
  5. Desktop optimization is an ongoing process so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.


Want to learn more?

Once you have attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Designer and Contact Center Experience Designer Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.


Useful resources:

Review the Best Practice Guide

Review the Agent Desktop Tune-up Guide


About the Author:

Angela Chandler

Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management.  She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:

Wednesday Aug 29, 2012

Integrating Oracle Hyperion Smart View Data Queries with MS Word and PowerPoint


Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or PowerPoint. In the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot; copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features that can make life easier and less stressful for Smart View users.

So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points, even single data cells only.



It is not necessary to highlight and copy the row or column descriptions.


Next, from the Smart View ribbon, select Copy Data Point.

Then switch to the Word or PowerPoint document into which the selected content should be copied. Note that in these Office programs you will find a Smart View menu item; from it, select the Paste Data Point icon.

The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers.

After clicking the Refresh icon on the Smart View menu, the data will be retrieved and displayed. (A login window may pop up at that moment, asking you to provide your credentials.)

It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into continuous text:

Before refresh:

After refresh:

From now on (provided that you are connected online to your database or application) for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again.

As you might realize when trying out this feature on your own, there won’t be any Point of View shown in the Office document. You have also seen, in the example where only a single data cell was copied, that no member names or row/column descriptions are copied, although these are usually required in an ad-hoc report in order to define exactly where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or PowerPoint document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell, as shown in the following screenshot (which is taken from another context).
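The behaviour described above can be modelled very simply: each pasted data point is a value plus the hidden metadata needed to fetch it again. The sketch below is a conceptual illustration only, not the real Smart View internals; the server name and member names are made up:

```python
# Hypothetical model of a copied data point (illustration only, not the real
# Smart View internals): each pasted value carries its connection details and
# its member intersection in the background, so a refresh can re-query the
# source instead of keeping a static snapshot.

class DataPoint:
    def __init__(self, connection, intersection):
        self.connection = connection      # e.g. "server/application/database"
        self.intersection = intersection  # member names identifying the cell
        self.value = "#NEED_REFRESH"      # placeholder until refreshed

    def refresh(self, source):
        # Re-query the stored intersection from the source.
        self.value = source[self.intersection]

# A toy "cube" keyed by member intersection (made-up members).
cube = {("Actual", "FY12", "Sales"): 1200}

dp = DataPoint("EssbaseServer1/Sample/Basic", ("Actual", "FY12", "Sales"))
print(dp.value)   # → #NEED_REFRESH
dp.refresh(cube)
print(dp.value)   # → 1200
```

Because the intersection travels with the value, the same stored metadata is also what allows a change of POV members or a full rebuild of the report later on.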


So for each cell/number, the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you now have the chance to exchange the members originally selected in the Point of View (POV) of the Excel report. Remember, at that time we had the following selection:


By selecting the Manage POV option from the Smart View menu in Word or PowerPoint…


… the following POV Manager – Queries window opens:


You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right-hand side of the window. After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.


Tip: Build your original report in such a way that dimensions you might want to change from within Word or PowerPoint are placed in the POV.

And there is another really nice feature worth mentioning: using Dynamic Data Points in the way described above, you will never again lose or need to search for the original Excel sheet from which values were copied as data points into an Office document, because from even a single data cell Smart View is able to recreate the entire original report content with just a few clicks:

Select one of the numbers from within your Word or PowerPoint document by double-clicking it.


Then select the Visualize in Excel option from the Smart View menu.

Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.)

However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. Apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the usual way for drilling, pivoting and all the other familiar functions and features.

So much for embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way to ad-hoc analyses directly on Essbase databases, or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page), or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: .

About the Author:

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.


Friday Jun 29, 2012

Oracle RightNow CX for Good Customer Experiences

Oracle RightNow CX is all about the customer experience: it’s about understanding what drives a good interaction, and it’s about delivering a solution which works for our customers and, by extension, their customers.

One of the early guiding principles of Oracle RightNow was an 8-point strategy to providing good customer experiences.

  1. Establish a knowledge foundation
  2. Empower the customer
  3. Empower employees
  4. Offer multi-channel choice
  5. Listen to the customer
  6. Design seamless experiences
  7. Engage proactively
  8. Measure and improve continuously

The application suite provides all of the tools necessary to deliver a rewarding, repeatable and measurable relationship between business and customer.

  • The Knowledge Authoring tool provides gap analysis, WYSIWYG editing (including HTML rich content for non-developers), multi-level categorisation, permission-based publishing and Web self-service publishing.
  • Oracle RightNow Customer Portal is a complete web application framework that enables businesses to control their own end-user page branding experience, which in turn allows customers to self-serve.
  • The Contact Centre Experience Designer builds a combination of workspaces, agent scripting and guided assistance into a Desktop Workflow. These present an agent with the tools they need, at the time they need them, providing even the newest and least experienced advisors with consistently accurate and efficient information, whilst guiding them through the complexities of internal business processes.
  • Oracle RightNow provides access points for customers to feedback about specific knowledge articles or about the support site in general. The system will generate ‘incidents’ based on the scoring of the comments submitted. This makes it easy to view and respond to customer feedback.

It is vital, now more than ever, not to underestimate the power of the social web. Facebook, Twitter and YouTube have the ability to cause untold damage to a business with a single post: witness musician Dave Carroll and his protest song on YouTube, posted in response to poor customer service from an American airline. The video saw 150,000 views on its first day and currently stands at 12,011,375. The Times reported that within 4 days of the post, the airline’s stock price fell by 10 percent, a cost to shareholders of $180 million.

It is a universally acknowledged fact that when customers are unhappy they will not come back, and, generally speaking, it only takes one bad experience to lose a customer.

The idea that customer loyalty can be regained by using social media channels was the subject of a 2011 Survey commissioned by RightNow and conducted by Harris Interactive. The survey discovered that 68% of customers who posted a negative review about a holiday on a social networking site received a response from the business. It further found that 33% subsequently posted a positive review and 34% removed the original negative review. Cloud Monitor provides the perfect mechanism for seeing what is being said about a business on public Facebook pages, Twitter or YouTube posts; it allows agents to respond proactively – either by creating an Oracle RightNow incident or by using the same channel as the original post.

This leaves step 8 – Measuring and Improving:

  • How does a business know whether it’s doing the right thing?
  • How does it know if its customers are happy?
  • How does it know if its staff are being productive?
  • How does it know if its staff are being effective?

Cue Oracle RightNow Analytics: fully integrated across the entire platform (Service, Marketing and Sales), it offers in excess of 800 standard reports. If this were not enough, a large proportion of the database has been made available via the administration console, allowing users without any prior database experience to write their own reports, format them and schedule them for e-mail delivery to a distribution list. It handles the complexities of table joins, and allows for the manipulation of data with ease.

Oracle RightNow believes strongly in the customer owning their solution, and to provide the best foundation for success, Oracle University can give you the RightNow knowledge and skills you need. This is a selection of the courses offered:

A full list of courses offered can be found on the Oracle University website. For more information and course dates please get in contact with your local Oracle University team.

On top of the Service components, the suite also provides marketing tools, complex survey creation and tracking, and sales functionality.

I’m a fan of the application, and I think I’ve made that clear:

  • It’s completely geared up to providing customers with support at point of need.
  • It can be configured to meet even the most stringent of business requirements.

Oracle RightNow is passionate about, and committed to, providing the best customer experience possible. Oracle RightNow CX is the application that makes it possible.

About the Author:

Sarah Anderson worked for RightNow for 4 years in both a consulting and a training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses:

  • RightNow Customer Service Administration
  • RightNow Analytics
  • RightNow Customer Portal Designer and Contact Center Experience Designer Administration
  • RightNow Marketing and Feedback

Wednesday Apr 18, 2012

Setting up a simple high availability configuration for an Oracle Unified Directory Topology

Oracle Unified Directory is the latest full-Java implementation of LDAP directories offered by Oracle.

It offers several improvements over earlier LDAP directory offerings, such as faster read/write operations, better handling of high-volume data, ease of scaling, and replication and proxy capabilities.

In this topic we will explore some of the replication features of the Oracle Unified Directory Server by providing a simple setup for high availability of any OUD topology.

Production OUD topologies need to offer a continuous, synchronized and highly available flow of the business data managed/hosted in OUD LDAP stores. This is achieved by using OUD LDAP data nodes together with specific OUD instances acting as "replication" nodes. A cluster of OUD replication nodes, associated with the OUD LDAP data stores, offers a high degree of availability for the data hosted in the OUD LDAP directories.

A replication node is an instance of Oracle Unified Directory that is used only to synchronize data (read/write operations) between several dedicated OUD LDAP stores. We can set up a replication node on the same JVM which is hosting an OUD LDAP store, or we can create a dedicated JVM process to handle the replication (this is our approach for this demo).

For our demo, we will create a simple 4-node topology on the same physical host.

Two OUD nodes (each node is a separate JVM process), both of them active, will handle the user data, and two OUD replication nodes (also separate JVM processes) will handle the synchronization of any modification applied to the data on node1/node2.

As a best practice, it’s good to have at least 2 separate replication nodes in our topology, although we can start the replication scenario with only 1 replication node and then add one additional instance.

With at least 2 replication nodes in our system, we ensure that any operation on the data nodes of our LDAP system (node1/node2) will be propagated to the other LDAP nodes, even if one of the replication nodes fails.

The whole scenario can be run on one physical host (in VirtualBox or VMware) without any overhead. In other topics we will discuss tuning and monitoring operations.

Let’s start by creating the two LDAP nodes (node1, node2) which will hold our data.

This is done by executing the oud-setup script in graphical mode:

Creation of the first node: node1 listens on port 1389, and 4444 is its admin port.

Node1 is a standalone LDAP server.

The server will manage the directory tree dc=example,dc=com, a sample base DN. For this base DN, the wizard will generate 2000 sample entries.

This is the final setup for node1:

At this stage, we already have one instance of our LDAP topology (node1) up and running!

We will continue with the creation of the second OUD LDAP node (node2).

The setup for node2 is nearly identical to node1’s: the LDAP listen port is 2389, and the admin port is 5444.

For node2, we will create the same directory tree structure, but we will leave the directory database empty. During the synchronization phase (see below), we will provision the directory with data coming from node1.

This is the final setup for node2:

And at this stage we have node2 up and running as well!

The creation of the replication nodes is a nearly identical process to that for the previous LDAP nodes. We will create 2 OUD instances with the configuration wizard, and we will then set up the replication process as an additional step using the dsreplication command.

The first replication node will listen on port 3389; its admin port will be 6444.

The second replication node will listen on port 4389; its admin port will be 7444.

At this stage we have 4 OUD instances running on our system: two LDAP nodes (node1, node2) with the "business data", and two replication nodes. Node1 is already populated with data; node2 will be provisioned during the setup of the replication between this node and node1.

Now let’s set up the replication process between node1 and the first replication OUD server by executing the following dsreplication command. Node1 is the LDAP data node, and the first replication node will hold only the replication information (it will not hold any LDAP data):

dsreplication enable \                                                                          
  --host1 localhost  --port1 4444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  8989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             

This is what we should see in our prompt:

Then we should associate the second replication node with the node1.

The only parameters that we have to change from the previous command are the admin port and the replication port of the second replication node:

 dsreplication enable \                                                                          
  --host1 localhost  --port1 4444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 7444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  9989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             

We now execute the same kind of command against node2, in order to associate this node with the first replication node:

dsreplication enable \                                                                          
  --host1 localhost  --port1 5444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  8989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             

At this stage we have associated node1 and node2 with the two replication nodes.

Before starting our operations (read/write) on node1 and node2, we have to initialize the replication topology with the following command:

dsreplication initialize-all --hostname localhost --port 4444 \                                 
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password                         

Here is what we should see:

As you can see, node2 is fully and successfully provisioned from the data of node1!

Now we can monitor our configuration by executing the following command:

 dsreplication status --hostname localhost --port 4444 \                                         
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password                         

To test our replication configuration, you can use any LDAP browser client: connect to the first instance, modify one or several entries, then connect to the second instance and check that your modifications have been applied :)

For additional training see: Oracle Unified Directory 11g: Services Deployment Essentials

About the Author:

Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA, Identity/Security, GoldenGate, Virtualisation, Unified Communications Suite) throughout the EMEA region.

Monday Apr 09, 2012

How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

This article describes how, in PeopleCode (PeopleTools release 8.50), to enable or disable a grid without referencing each static column, using a dynamic Application Class.

The goal is to disable the following grid, with its three columns “Effort Date”, “Effort Amount” and “Charge Back”, when the check box “Finished with task” is selected, without referencing each static column; this PeopleCode can be used dynamically with any grid.

If the check box “Finished with task” is cleared, the content of the grid columns is editable (and the buttons “+” and “-“ are available):

So, you create an Application Package “CLASS_EXTENSIONS” that contains an Application Class “EWK_ROWSET”.

This Application Class is defined with Class extends “Rowset”, and you add two new properties, “Enabled” and “Visible”:

After creating this Application Class, you use it in two PeopleCode events: RowInit and FieldChange.

This code is very simple: you write only one command, ”&ERS2.Enabled = False”, and the entire grid is disabled… and you can use this code with any grid!

So, the complete PeopleCode to create the Application Package is (with explanations in brackets [….]):

******Package CLASS_EXTENSIONS :    [Name of the Package: CLASS_EXTENSIONS]

--Beginning of the declaration part------------------------------------------------------------------------------
class EWK_ROWSET extends Rowset;          [Definition of class EWK_ROWSET as a
                                          subclass of class Rowset]
   method EWK_ROWSET(&RS As Rowset);      [The constructor is the method with
                                          the same name as the class]
   property boolean Visible get set;
   property boolean Enabled get set;      [Definition of the property
                                          “Enabled” in read/write]
private                                   [Before the word “private”,
                                          all the declarations are public]
   method SetDisplay(&DisplaySW As boolean, &PropName As string,
          &ChildSW As boolean);
   instance boolean &EnSW;
   instance boolean &VisSW;
   instance Rowset &NextChildRS;
   instance Row &NextRow;
   instance Record &NextRec;
   instance Field &NextFld;
   instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt;
   instance integer &i, &j, &k;
   instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild;   [For recursion]
   Constant &VisibleProperty = "VISIBLE";
   Constant &EnabledProperty = "ENABLED";
end-class;
--End of the declaration part------------------------------------------------------------------------------

method EWK_ROWSET                         [The constructor]
   /+ &RS as Rowset +/
   %Super = &RS;
end-method;

get Enabled
   /+ Returns Boolean +/
   Return &EnSW;
end-get;

set Enabled
   /+ &NewValue as Boolean +/
   &EnSW = &NewValue;
   %This.SetDisplay(&EnSW, &EnabledProperty, False); [This method is called when
                                                     you set this property]
end-set;

get Visible
   /+ Returns Boolean +/
   Return &VisSW;
end-get;

set Visible
   /+ &NewValue as Boolean +/
   &VisSW = &NewValue;
   %This.SetDisplay(&VisSW, &VisibleProperty, False);
end-set;

method SetDisplay                 [The most important PeopleCode method]
   /+ &DisplaySW as Boolean, +/
   /+ &PropName as String, +/
   /+ &ChildSW as Boolean +/
   &RowCnt = %This.ActiveRowCount;
   &NextRow = %This.GetRow(1);            [To learn the structure of a row]
   &RecCnt = &NextRow.RecordCount;
   For &i = 1 To &RowCnt                  [Loop over each row]
      &NextRow = %This.GetRow(&i);
      For &j = 1 To &RecCnt               [Loop over each record]
         &NextRec = &NextRow.GetRecord(&j);
         &FldCnt = &NextRec.FieldCount;
         For &k = 1 To &FldCnt            [Loop over each field of the record]
            &NextFld = &NextRec.GetField(&k);
            Evaluate Upper(&PropName)
            When = &VisibleProperty
               &NextFld.Visible = &DisplaySW;
               Break;
            When = &EnabledProperty
               &NextFld.Enabled = &DisplaySW; [Enable/disable each field]
               Break;
            When-Other
               Error "Invalid display property; must be either VISIBLE or ENABLED"
            End-Evaluate;
         End-For;
      End-For;
      If &ChildSW = True Then             [If recursion is requested]
         &ChildRSCnt = &NextRow.ChildCount;
         For &j = 1 To &ChildRSCnt        [Loop over each child rowset]
            &NextChildRS = &NextRow.GetRowset(&j);
            &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS);
            &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW);
            [For each child rowset, call the SetDisplay method with the same
            parameters used for the parent rowset]
         End-For;
      End-If;
   End-For;
end-method;
******End of the Package CLASS_EXTENSIONS   [Name of the Package: CLASS_EXTENSIONS]

About the Author:

Pascal Thaler joined Oracle University in 2005 where he is a Senior Instructor. His area of expertise is Oracle Peoplesoft Technology and he delivers the following courses:

  • For Developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Object-Oriented PeopleCode Language, Administration Security
  • For Administrators: Server Administration & Installation, Database Upgrade & Data Management Tools
  • For Interface Users: Integration Broker (Web Service)

Tuesday Mar 27, 2012

The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

After having been almost unchanged for several years, the Process Management of Oracle's Hyperion Planning has received more than a new name with the 11.1.2 release. “Approvals” now offers the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity to check out more details on it.

One reason for using the former process management in Planning was to limit data entry rights to one person at a time, based on the assignment of a planning unit. The lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but only for part of it, it was not possible to split the ownership along an additional dimension, for example by assigning ownership of different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementary new Shared Services roles for Planning have been created in order to manage the set up and use of Approvals:

The Approvals Administrator, consisting of the following roles:

  • Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities).
  • Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned.
  • Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this also includes Planner and Ownership Assigner responsibilities).

Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. The following window displays:

After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are:

  • All, which pre-selects all entities to be included for the definitions on the subsequent tabs
  • None, where manual entity selections will be made subsequently
  • Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions

Finally a pattern needs to be selected, which will determine the general flow of ownership:

  • Free-form, which uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2
  • Bottom-up, where data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by additional reviewers defined in the promotional path.
  • Distributed, which uses data input at the leaf level, while ownership starts at the top level and is then distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process.

Proceeding to the next step, a secondary dimension and the respective members from that dimension can now be selected, in order to create more detailed combinations underneath each entity.

    After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. To refine this list, you can click the icon right beside the Selected Members field and use the check-boxes in the list that appears to deselect members.


    In order to reduce maintenance of the PUH due to changes in the included dimensions (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the Planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the displayed list of include or exclude options (children, descendants, etc.).

    In order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is necessary is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last indicates that another user is currently changing or synchronizing the PUH. Select one of the unsynchronized PUHs (status No) and click the Synchronize option in order to execute.

    In the next step owners and reviewers are assigned to the PUH.

    Using the icons with the magnifying glass beside the columns for Owner and Reviewer, the respective assignments can be made in the order in which you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity + member of the secondary dimension, the selection of reviewers may consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users can be defined to be notified about promotions for a planning unit.


    Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units.

    In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment.

    Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries.

    After these steps, set up is completed for starting the approvals process. Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals.

    The new PUH feature is complemented by various additional settings and features; some of them at least should be mentioned here:

    Export/Import of PUHs

    Out of Office agent

    Validation Rules changing the promotional/approval path if violated (including the use of User-Defined Attributes (UDAs))

    And various new and helpful reviewer actions with corresponding approval states.

    More information

    You can find more detailed information in the following documents:

    Or on the Oracle Technology Network.

    If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

    You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here; (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly:

    About the Author:

    Bernhard Kinkel

    Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Monday Feb 27, 2012

    Scheduling Options in Primavera P6 by David Kelly

    These options are made available by choosing the Options button, from the scheduling dialogue.

    In a properly configured P6 environment, where projects are created by copy/paste from an official template created by the central project controls group, these settings will be correct.

    Ignore relationships to and from other projects

    In almost all circumstances the default, not to ignore such relationships, would be correct. An example of where this toggle switch is valuable is if all scope variations were held in a separate project. With both projects open, scheduling shows the combined projects. With just the original project open and this option switched on, the original scope only is scheduled. This is one of the simplest ways to switch between original scope and current scope.

    Make open ended activities critical

    This is not just a display option; it changes the float on any path with an open end.

    Use Expected Finish dates

    Expected Finish is a controversial way to report progress. Effectively, rather than telling a P6 activity how much work has been done by reporting a per cent complete and allowing P6 to calculate when we can expect the activity to finish, we input an expected finish date and P6 calculates how much work has been done. Many project controls professionals think this is the wrong way round. Even if such dates have been added in the Status tab of the Activity screen, they will not be used if this option is unchecked. The arithmetic is NOT reversible: it will NOT put the original values back if you uncheck this option. Be careful!

    Schedule automatically when a change affects dates

    Switching “real-time” scheduling on such that the schedule recalculates with every significant change to the data is not advised. This feature is the preserve of light-weight single-user planning systems.

    Level Resources During Scheduling

    This is largely a matter of style. Some planners prefer to see the results of the schedule before resource levelling, others having satisfactorily set the levelling parameters would rather both processes were run together.

    Recalculate assignment costs after scheduling

    It is hard to imagine a scenario where one would NOT want to recalculate an activity's costs based on the new dates the activity may have once it has been scheduled.

    Retained Logic vs. Progress Override

    None of the above settings creates as much discussion as this one. The controversy arises when there is out-of-sequence progress.

    The above Barchart shows two projects which are identical in every respect except for that setting. The only progress that has been achieved is that the site has been prepared. Using Progress Override means that the remaining duration of the activity “Prep site and Erect” starts at the data date. This is clearly nonsense. Note that the whole project now finishes earlier than the Retained Logic project which leaves the remaining duration in the position that its predecessors demand. What is going on here, why do these options exist?

    In the real world most planners add relationships to a project for two reasons:

    1. The laws of physics. If you are going to put a pipe in a trench, you must make the trench first.
    2. Not enough resource information. If I need to machine two valve blocks, and I do not know all or any of:
       • exactly how much machine time and labour time is required for each of them,
       • whether my resource dictionary properly describes the availability of the equipment and labour to do the job,
       • whether I have clear guidance from management about the priority for allocation of resources,
       BUT I “know” I can only do one at a time – then I add a finish-to-start relationship between the two activities.

    In case 2) above it does not matter if we start the successor activity first, and if we did start it first we would need to finish it before starting on the predecessor. In a perfect world case 2) is easily dealt with by resource levelling, but quite a lot more information is required to do it the correct way.

      Calculate start to start lag from

      Clearly Actual Start may calculate more realistic dates. Probably Early Start calculates more optimistic dates if the schedule is slipping.

      Define Critical Activities as

      The textbook CPM definition of Critical is where total float is less than or equal to zero. The alternative, the longest path through the network, always shows red bars in the barchart for the so-called critical path, and is the more popular choice.

      Calculate Float based on Finish date of

      When scheduling multiple projects – perhaps a portfolio that represents a single contract – how many float paths? Does each project have its “own” float, or is float “owned” by the whole portfolio of projects? Before considering this question we would need to know how the projects' inter-project relationships are structured, how many open ends there are in how many of the projects, and an understanding of the commercial/contractual implications. It is unlikely that the planner on the project can answer this question alone.

      Compute Total Float as

      The author admits defeat here. Apparently in some circumstances an LoE or WBS Summary activity can have different Start and Finish Floats. There is no Primavera documentation that describes the circumstances or justifies the arithmetic. Choose Finish Float.

      Calendar for calculating relationship lag

      This is very important. Best practice is to always use a 24-hour calendar, and always enter lags in hours, e.g. if the lag is 5 days, enter 120h into the lag dialogue. This way no change in any calendar will alter the wall-clock time of any lag.

      NOTE: When you schedule in P6 all open projects are scheduled at the data date selected for each project. If only one project is open, then you can change the data date for that project. If multiple projects are open then you can only change their data date in the Projects screen, where you can even “Fill Down” a new data date to multiple projects.

      About the Author:

      David Kelly

      Dave Kelly delivers Oracle Primavera training courses at Milestone in Aberdeen; Milestone is an Oracle University Authorised Education Centre and offers the complete Oracle Primavera course curriculum. Dave has been involved in planning and scheduling software training and consultancy for many years; he is well known and respected as an expert in delivering Primavera and associated solutions as both an experienced consultant and trainer.

    Wednesday Nov 30, 2011

    New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    After discussing new options for general backup and restore in the first part of this article, this second part deals with the newer feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options.

    Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel.

    Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging closes that gap and provides you with an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled:

    In order to activate its usage, corresponding statements can be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads:
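    As a sketch based on the Essbase configuration reference, the statement takes this general form (square brackets denote optional arguments):

```
TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE
```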


    Here appname and dbname are optional parameters which, in combination with the ENABLE or DISABLE keyword, give you the chance to set Transaction Logging for certain applications or databases, or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that shouldn't be changed.

    The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in application Sample, hence excluding it from being logged.
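    Such a pair of entries might look like the sketch below; the log directory path is just an example:

```
TRANSACTIONLOGLOCATION Sample /Hyperion/trlog NATIVE ENABLE
TRANSACTIONLOGLOCATION Sample Basic /Hyperion/trlog NATIVE DISABLE
```

    The first line enables logging for all databases of the application Sample; the second, more granular line then excludes the database Basic from being logged.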


    Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings.

    A replay of logged transactions, which may be required after restoring a database, can be done only by administrators. The following options are available:

    In Administration Services selecting Replay Transactions on the right-click menu on the database:

    Here you can select to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later) or transactions logged after a specified time.
    Or you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database:

    These sequence IDs (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transactions were performed.

    This helps to ensure the integrity of the restored data after a replay, as transactions are replayed in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before the data load has been replayed. After a transaction is replayed, you can replay only transactions with a greater sequence ID. For example, replaying the transaction with sequence ID 4 includes all preceding transactions, after which you can only replay transactions with a sequence ID of 5 or greater.
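    For scripted environments, a selective replay can also be issued in MaxL. The statement below is a sketch based on the Essbase Technical Reference; the database name and ID range are examples only:

```
alter database Sample.Basic replay transactions using sequence_id_range 1 to 4;
```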

    Tip: After restoring a database from a backup you should always completely replay all logged transactions, which were executed after the backup, before executing new transactions.

    But not only the transaction information itself needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:


    These files are then used during the replay of a logged transaction. By default Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is:
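    Based on the Essbase configuration reference, a sketch of the syntax (square brackets denote optional arguments):

```
TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]
```

    where OPTION is one of CLIENT (the default), SERVER, SERVER_CLIENT or NONE.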


    While the same rules apply to the [appname [dbname]] argument as for TRANSACTIONLOGLOCATION, the valid values for the OPTION argument are the following:

    Make the respective setting for which file copies should be archived, considering from which location transactions usually take place. Selecting the NONE option prevents Essbase from saving the respective files, and the data load cannot be replayed; in this case you must first manually load the data before you can replay the transactions.

    Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

    You can find more detailed information in the following documents:

    Or on the Oracle Technology Network.

    If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

    You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here; (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly:

    About the Author:

    Bernhard Kinkel

    Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.


    All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

    Wednesday Oct 26, 2011

    New Style Sheet in Release PeopleTools 8.50: Free Form Stylesheet by Pascal Thaler

    Introduced in PeopleTools release 8.50, free form sub style sheets are text-based sub style sheets that enable you to take advantage of Cascading Style Sheets Level 2 (CSS2), AJAX, and DHTML features.

    With a free form sub style sheet, you can create the style sheet in a third-party text editor and then copy the style sheet text into the Free Form tab of the free form sub style sheet definition.

    When creating free form style sheets, style class names (like PSEDITBOX for example) must be identical to the PeopleTools default style class names.

    If the default page style sheet includes only free form text, the free form sub style sheet must define and include all default style classes used by the application.

    PSSTYLEDEF_SWAN is the default application style sheet. It comprises all default style classes and consists of these sub style sheets (listing only the free form sub style sheets):

    • PSNAV2_SWAN: Defines the 8.50 Menu pagelet navigation style classes.
    • PSPOPUP_CSS_SWAN: Defines the pop-up dialog box page style classes.
    • PSTAB_PTCSS_SWAN: Defines the page tab style classes.
    • PSHDR2_SWAN: Defines the Oracle logo.

    Note: You can also convert standard style sheets to free form sub style sheets:

    • Open a standard style sheet or sub style sheet.
    • Select File, Definition Properties.
    • In the Style Sheet Type drop-down list box, select Freeform Sub Style Sheet and click the OK button.
    • Save the style sheet.
    Example 1: The free form sub style sheet PSNAV2_SWAN

    You want to change the left menu navigation color background in PT 8.50:

    In Application Designer, open the free form style sheet PSNAV2 (or PSNAV2_SWAN) and change the ptnav2pglt class attributes for the background:
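    A minimal sketch of the change in the free form text; the colour value is purely an example:

```css
.ptnav2pglt {
   background: #d6e4f1;  /* example colour for the Menu pagelet background */
}
```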

    and the result is:

    Example 2: The free form sub style sheet PSHDR2_SWAN

    You want to change to a new image (not update the image standard PT_ORACLELOGO_SWAN) for the Oracle logo in the PeopleSoft header:

    In Application Designer, select File -> New, choose Image, browse to the new image file MY_ORACLELOGO_SWAN.gif and save it.

    In Application Designer, open PSSTYLEDEF_SWAN, open the free form sub style sheet PSHDR2_SWAN, and change the pthdr2logoswan class attributes for background: url(%Image()) no-repeat:
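    A minimal sketch of the changed class, assuming it should reference the image definition MY_ORACLELOGO_SWAN created above:

```css
.pthdr2logoswan {
   /* %Image() is a PeopleTools meta-HTML function resolved at runtime */
   background: url(%Image(MY_ORACLELOGO_SWAN)) no-repeat;
}
```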

    and the result is:

    About the Author:

    Pascal Thaler joined Oracle University in 2005 where he is a Senior Instructor. His area of expertise is Oracle Peoplesoft Technology and he delivers the following courses:

    • For Developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Object-Oriented PeopleCode Language, Administration Security
    • For Administrators: Server Administration & Installation, Database Upgrade & Data Management Tools
    • For Interface Users: Integration Broker (Web Service)

    Thursday Sep 29, 2011


    In Release 12, an exciting new feature called Multi-Org Access Control, or MOAC, was introduced across the subledgers. A lot of our customers have followed Oracle's lead and adopted shared service centres (SSCs). In these centres, the back-office functions (financial and administration) have been consolidated to drive down the cost of processing business transactions. For example, a shared service centre in a single country could deal with processing all the expenses across Europe, or even the world. SSC models are increasingly being used in the public sector in an attempt to be more efficient and to push down the cost of daily transactions.

    You may not have implemented a formal shared service centre, but you can still reap the benefits of Multi-Org Access Control. Multi-Org architecture was introduced in version 10.7 to allow businesses with complex enterprise structures, often spanning many countries, to conduct their business transactions in a single Oracle database instance. Financial transactions in the subledgers were secured by operating unit, and users gained access to each operating unit via a different responsibility. If a user needed to process transactions in a new operating unit, they would need another responsibility.

    MOAC allows companies to gain processing efficiencies because users can more easily access, process and report on data across multiple operating units from a single responsibility without compromising data security or system performance. For example, an order processing clerk can open one sales order form and then process orders for all countries without the need to switch responsibilities or data entry forms.

    The following diagram summarises the set up and processing steps for using MOAC.

    moac diagram

    In the Human Resources responsibility, you can define a new security profile and assign to it all the operating units that a responsibility needs to access. To make this new security profile available, you must then run the HR report called 'Security List Maintenance'. The new security profile is then attached to a responsibility via the profile option called 'MO: Security Profile'.

    A number of reports and forms have been enhanced to allow cross-organisational reporting. Multi-Org preferences allow the user to control and limit the number of operating units they have access to, based on their work environment.

    The MOAC feature delivers the following benefits:

    1. Reduce setup and maintenance of many responsibilities

    2. Speed up data entry

    3. Obtain a global consolidated view of information

    4. Process data across multiple operating units from a single responsibility

    5. Increase operational efficiency and reduce transaction processing costs.

    The setup and use of MOAC is covered in the OU course R12.x Oracle E-Business Suite Essentials for Implementers. The course also covers common setup components such as Flexfields and is the prerequisite course for any follow on application fundamentals course. The course content is also tested in the first examination of the e-business certification program.

    About the Author:

    David Barnacle
    David Barnacle joined Oracle University in 2001, after being the lead implementer of a very successful European rollout of the e-Business Suite. He currently trains across a wide family of applications, specializing in the supply chain and financial areas. He enjoys meeting students and likes to learn how each customer will configure the software suite to meet their own challenging business objectives.

    Friday Aug 26, 2011

    Ways to Train – Oracle University Style by David North

    It's not just about the content, and it's not just about the trainer – it's also about you, the learner. The ways people learn new skills are ever so varied, and it is for this reason that OU has been, and continues to be, dramatically expanding the sources and styles available. In this short article I want to expand on some of the styles you may come across, to enable you to make the best choice for you. Firstly, we have “Instructor-led” training: the kind of live, group-based training that many of you will have already experienced by attending a classroom and coming face-to-face with your trainer; spending time absorbing theory lessons, watching demos, and then (in most classes) “having a go” at hands-on exercises with a live system.

    But now OU has added “LVC” (Live Virtual Class) – a variety of live, instructor-led training where, instead of having to travel, you attend class remotely over the Internet. You still have a live instructor (so you have to turn up on time... no slacking allowed!!). The tool we use allows plenty of interaction with the trainer and other class members, and the hands-on exercises are just the same – although in this style of training, if you fall behind or want to explore more, the machines on which you do the exercises are available 24x7 – no being kicked out of the classroom at the end of the day!

    We are doing more and more of these LVC classes as the word spreads about how good they really are. If you can’t take time out during the day and are really up for it, you’ll even find classes scheduled to run in the evenings and overnight! – although be careful you don’t end up on a class being delivered in Chinese or Japanese for example (unless of course you happen to speak the language... When you book a class the language and start times are clearly shown).

    For those of you who prefer a more self-paced style, or who cannot take big chunks of time out to do the live classes, we have created recordings of quite a few – which we call “RWC” (Recorded Web Class) – so you can log in and work through them at your leisure. Sadly with these we cannot make the hands-on practice environments available (there's no-one there in real time to support them), but they do give you all the content, at a time and pace to suit your needs.

    If you like that idea, but want something a bit more interactive, we have “Online Training”. Do not confuse this with LVC: “Online Training” is not live; it is a combination of interactive computer-based lessons with demos and hands-on simulations based on real live environments. You decide where, when, and how much of the course you do. Each time you log back in, the system remembers where you were – you can go back and repeat parts of it, or simply carry on where you left off. Perfect if you have to do your training in bits and pieces and at unpredictable times.

    And finally, if you like the idea of the “Online” option but want even more flexibility about when and where, we have “SSCD” (Self Study CD) – which is in effect the online class on a CD, so you don’t even have to be connected to the Internet to dip in and learn something new.

    Not all of our titles are available across all the styles, but the range is growing daily. Now you have no excuse for not finding something in a format that will suit your learning needs.

    Happy training.

    About the Author:

    David North
    David North is Delivery Director for Oracle Applications in the UK, Ireland and Scandinavia and is responsible for Specialist Education Services in EMEA. He has been working with Oracle Applications for over 9 years, and in the past has helped customers implement and roll out specific products in just about every country in EMEA. He has also trained many customers in everything from implementation and customisation through to marketing and business management.

    Wednesday Aug 10, 2011

    Oracle Real Application Clusters Curriculum under Release 2 by Lex van der Werff

    Oracle Real Application Clusters (Oracle RAC), part of the Oracle Database 11g Enterprise Edition, enables a single database to run across a cluster of servers, providing unbeatable fault tolerance, performance, and scalability with no application changes necessary. With Release 2 of Oracle Database 11g, Oracle University has adjusted its RAC curriculum to ensure that you benefit from the full power of this new release.

    The previous course offering for Oracle RAC Release 1 consisted of:

    Course Name: Oracle Database 11g: RAC Administration Ed 1.1
    Course Code: D50311
    Duration: 5 Days
    Content: This course was designed to cover both Oracle Database 10g Release 2 and Oracle Database 11g Release 1.

    As a result of the significant product changes that occurred with Release 2, the curriculum was also redesigned to address these changes. The Oracle Database 11g Release 2 training is now covered in two courses totaling 7 days of training:

    Course Name: Oracle Grid Infrastructure 11g: Manage Clusterware and ASM
    Course Code: D59999
    Duration: 4 Days 
    Content: This course is directed at DBAs and system administrators with responsibility for High Availability, Storage Administration, or both. DBAs who are familiar with Automatic Storage Management (ASM) or Clusterware from older releases need this course to learn how Grid Infrastructure has combined these technologies, and to learn the new capabilities of the software. System administrators who manage HA software and storage administrators will both benefit from learning how this product works.

    Course Name: Oracle Database 11g: RAC Administration Ed 2
    Course Code: D60491
    Duration: 3 Days
    Content: This course is designed for Database Administrators, primarily those new to RAC. Students will learn about RAC database administration in the Oracle Grid Infrastructure environment. This is not a refresher course.

    Both courses are required to master RAC on Oracle Database 11g Release 2 and are part of the Oracle Database 11g learning path.

    Some countries also offer a compact, accelerated version of both courses lasting only 5 days.  

    Course Name: Oracle 11g: RAC and Grid Infrastructure Administration Accelerated Ed 1.1
    Course Code: D72078
    Duration: 5 Days

    To summarize based on job role:

    • Database Administrators who are new to RAC should attend both courses or the accelerated version.
    • Database Administrators who are familiar with RAC should attend the 4-day Grid Infrastructure course.
    • System and Storage Administrators who manage systems where RAC is installed should also attend the 4-day Grid Infrastructure course.


    Database Release: Oracle Database 10g Release 2 / Oracle Database 11g Release 1
    Course Name: Oracle Database 11g: RAC Administration Ed 1.1
    Course Code: D50311
    Duration: 5 Days
    Description: This course is for BOTH Oracle Database 10g Release 2 and Oracle Database 11g Release 1 only.

    Database Release: Oracle Database 11g Release 2
    Course Name: Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1
    Course Code: D59999
    Duration: 4 Days
    Description: In this course, students will learn about Oracle Grid Infrastructure components including Oracle Automatic Storage Management (ASM), ASM Cluster File System, and Oracle Clusterware. This course is based on Oracle Database 11g Release 2.

    Database Release: Oracle Database 11g Release 2
    Course Name: Oracle Database 11g: RAC Administration Ed 2
    Course Code: D60491
    Duration: 3 Days
    Description: This RAC course is required as the second course, to be taken only after the Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1 course.

    Frequently Asked Questions:

    Q: I have attended the 5-day Oracle Database 11g: RAC Administration course. Which course do I need to attend to become skilled in Release 2?
    A: In this case, you only need to attend the 4 day Oracle Grid Infrastructure 11g: Manage Clusterware and ASM course.

    Q: Is it possible to take the Oracle Database 11g: RAC Administration course first and then at a later point in time the Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1 course?
    A: No, the 3-day course should be attended after the Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1 course.

    If you have additional questions or would like to speak to an Oracle University representative to discuss your personal training needs, let us know!


    Lex van der Werff

    Lex van der Werff started at Ingres BV as a consultant in 1992. Two years later he joined Oracle as a trainer for various technical courses covering languages (SQL and PL/SQL), development tools (Developer and Designer), Database Administration, and Application Server. During this time he also taught several seminars throughout EMEA which he developed himself. After working as a training manager for Oracle Consulting, Lex joined Oracle University in 2008 as Delivery Manager for the Benelux.

    Tuesday Jun 14, 2011

    New ways for backup, recovery and restore of Essbase Block Storage databases – part 1 by Bernhard Kinkel

    Backing up databases and providing the necessary files and information for a potential recovery or restore is crucial in today’s working environments. I will therefore present the interesting new options that Essbase provides for this, starting from version 11, and – related to this – a powerful data export option using Calc Scripts, which has been available since release 9.3.

    Let’s start with the last point: if you wanted to back up just the data from your database, you could formerly use the Export utility that Essbase provides as an item in the database right-click menu in the Administration Services Console. This feature is still available, supporting both Block Storage (BSO) and Aggregate Storage (ASO) databases. But regarding usability, some limitations exist: for example, the focus on which data to export can be set only to Level0, Input Level or All data (the last two options are only available for BSO) – more detailed definitions are not possible. Also, the ASCII format of the export files causes them to become rather large, maybe even larger than your Page and Index files.

    Still, importing these files is quite simple, as such an export can be (re-)loaded without any load rule, as long as the outline structure is the same – even if the database resides on another server. Modifications are also possible by using load rules in combination with an export file in column format.

    By contrast, exporting data using a Calc Script promises more flexibility, smaller files and faster performance. However, this option is only available for BSO, as ASO cubes do not leverage Calc Scripts.

    For example, in order to focus on even very detailed subsets of data, you can take advantage of common commands like FIX | ENDFIX and EXCLUDE | ENDEXCLUDE, as is usual in Calc Scripts. In addition, the new SET DATAEXPORTOPTIONS command provides more options to refine export content, formatting, and processing, including the possibility to export dynamically calculated values. You can also request statistics and an estimate of export time before actually exporting the data. The following syntax gives you an overview of the available settings:

    SET DATAEXPORTOPTIONS
    {
    DataExportLevel ALL | LEVEL0 | INPUT;
    DataExportDynamicCalc ON | OFF;
    DataExportNonExistingBlocks ON | OFF;
    DataExportDecimal n;
    DataExportPrecision n;
    DataExportColFormat ON | OFF;
    DataExportColHeader dimensionName;
    DataExportDimHeader ON | OFF;
    DataExportRelationalFile ON | OFF;
    DataExportOverwriteFile ON | OFF;
    DataExportDryRun ON | OFF;
    };
    Looking at most of these options will probably already give you an idea of their use and functionality. For more detailed information about the SET DATAEXPORTOPTIONS command options, please see the Oracle Essbase Online Documentation or the Enterprise Performance Management System Documentation (including previous releases) on the Oracle Technology Network.

    My example focuses on the binary export and import, as it provides faster export and load performance than export/import with ASCII files. Thus, in the first section of my script I use only two of the data export options, in order to export all data and to overwrite any existing old export file with the new one. The subsequent syntax for the binary export itself is DATAEXPORT "Binfile" "fileName", where "Binfile" is the required keyword and "fileName" is the full pathname for the exported binary file. So the complete script reads:

    SET DATAEXPORTOPTIONS
    {
    DataExportLevel ALL;
    DataExportOverwriteFile ON;
    };
    DATAEXPORT "BinFile" "c:\Export\MyDB_expALL.bin";

    Tip: Export file names can have more than 8 characters; the extension “.bin” is not mandatory.

    The import of the binary file with a Calc Script uses the command DATAIMPORTBIN fileName;. In order to avoid importing a wrong file or importing into a wrong database, each export file includes an outline timestamp, which the import checks by default. If this check should be bypassed, the command SET DATAIMPORTIGNORETIMESTAMP ON; can be placed before the DATAIMPORTBIN line. The import definition for the preceding export could look like the following:

    DATAIMPORTBIN "c:\Export\MyDB_expALL.bin";
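    If the outline timestamp check mentioned above needs to be bypassed, the import script could be extended like this – use with care, as the check protects against importing into the wrong database:

    ```
    SET DATAIMPORTIGNORETIMESTAMP ON;
    DATAIMPORTBIN "c:\Export\MyDB_expALL.bin";
    ```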

    After this rather new option for data export and import, let’s turn to the new backup and restore option for complete databases, provided in Administration Services Console starting with release 11. Alongside – or instead of – the common strategies and methods used previously (like running a third-party backup utility while the database is in read-only mode), this new feature provides an easy ad-hoc way to archive a database.

    Select the Archive Database item from the right-click menu on the database node, and in the subsequent window define the full path and name for the archive file; the extension “.arc” is a recommendation from Oracle, but not mandatory.

    The process can optionally be run as a background process, while Force archive will overwrite an existing file with the same name.

    After starting the archive procedure, the database is set to read-only mode and a copy of the following files will be written to the archive file:

    After this the database returns to read-write mode. However, not all files are backed up automatically using this procedure. The following table shows a list of files and file types that you would need to backup manually:

    Tip: Also make a backup of the file essbase.bak_startup, which is created after a successful start of the Essbase server (formerly this file was named just essbase.bak), as well as the essbase.bak file, which now has a different function: while essbase.bak_startup is only created at server start and no changes apply to it until the next successful server start, essbase.bak can be compared to the security file and updated manually, or by using a MaxL command, at any time. For a manual update in Administration Services Console, right-click Security under the respective Essbase server and select Update security backup file.

    In MaxL, run the command alter system sync security backup. Security files and the essbase.cfg file reside in the ARBORPATH\bin directory where you installed Essbase.
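    As a sketch, the manual sync could be issued in an interactive MaxL session like this – the user name, password and host below are placeholders:

    ```
    login admin password on localhost;
    alter system sync security backup;
    logout;
    ```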

    As the Archive option by default creates one large file, you have to make sure that the file system you save your archive files to supports large files (e.g. NTFS on Windows). If you need smaller files, Essbase can be configured to create multiple files no larger than 2 GB each by adding the entry SPLITARCHIVEFILE TRUE to the essbase.cfg file.
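    The corresponding essbase.cfg fragment is just a single line; note that essbase.cfg changes typically only take effect after a restart of the Essbase server:

    ```
    SPLITARCHIVEFILE TRUE
    ```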

    Restoring an archived database is as simple as the backup itself. First make sure that the database to be restored is stopped. Then from the right-click menu select Restore Database, and provide the required information about the archive file to be imported, including the full path.

    If the backed-up database used disk volumes, select Advanced. The database can be restored to the same disk volumes without any further definitions, or you can define a new mapping for the volume names (e.g. “C” could be replaced by “F”), but you can change neither the number of volumes nor the space used on each volume compared to the original backed-up database. Select to restore in the background if desired, and click OK. The restore is performed and confirmed in the Messages panel.

    Tip: Usually you would restore the same database that was previously backed up, but this doesn’t necessarily have to be the case. You can also use the restore feature to create a copy of your database (excluding the files mentioned above, which are not included in the archive file) or to overwrite another database. In both cases you must have an existing database to overwrite. From this “target” database select the Restore Database feature, but make sure you have checked Force Restore in the Restore Database dialog box.

    Depending on the frequency of your archiving cycles, the latest backup may not restore the actual latest state of your database: following the backup, you might, for example, have run Dimension Build Rules or Data Load Rules, data loads from client interfaces, or calculations. These would not be reflected in the restored database. In this case the new Transaction Logging and Replay option provides a good way to capture and replay post-backup transactions. Thus, a backed-up database can be recovered to the most recent state before an interruption occurred. This feature will be described in the second part of this article, coming later this year.

    Or – if you can’t wait – maybe you should learn how to use it as well as other important administration topics in our Essbase for System Administrators class; please refer also to the links provided below.

    If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

    You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here; (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly:

    Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.


    All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As a guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

    Wednesday Mar 30, 2011

    Generating Log files for the Siebel Gateway Name Server and the Siebel Web Server by Tim Bull

    [Read More]

    Sunday Feb 20, 2011

    Oracle Spatial and Transportable Tablespaces by Gwen Lazenby

    [Read More]
