Monday Nov 18, 2013

Using the Container Database in Oracle Database 12c by Christopher Andrews


The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

Bundling Databases Inside One Container

In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.

Minimizing Storage

Most IT professionals understand the cost of storage, whether solid state or spinning disk. By taking one or many databases and “plugging” them into ONE designated container database, we can eliminate many redundant structures that would otherwise require their own storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with each pluggable database holding only its own metadata. We also won’t need as many background processes, thus reducing CPU overhead.
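
For illustration, a quick sketch from SQL*Plus shows one CDB serving several pluggable databases from a single shared dictionary. This is only a minimal sketch; the SID (cdb1) is an assumption, not a value from this article:

#!/bin/bash
# Minimal sketch -- the SID below (cdb1) is illustrative only.
export ORACLE_SID=cdb1
sqlplus / as sysdba <<EOF
-- One instance, one set of background processes, one shared dictionary:
SELECT name, open_mode FROM v\$pdbs;
-- Dictionary metadata is tracked per container:
SELECT con_id, COUNT(*) AS objects FROM cdb_objects GROUP BY con_id ORDER BY con_id;
EOF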

Improve Security Levels within Each Pluggable Database 

We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.
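
As a rough sketch of that separation of duties, the commands below create a common user for CDB-wide administration and a local user that exists only inside one pluggable database. The names and passwords (c##cdb_admin, pdb1, pdb1_admin, Secret_1) are assumptions for the example:

#!/bin/bash
# Minimal sketch -- user names, PDB name and passwords are illustrative only.
sqlplus / as sysdba <<EOF
-- A common user, known to the root and to every PDB, for the CDB administrator:
CREATE USER c##cdb_admin IDENTIFIED BY Secret_1 CONTAINER=ALL;
GRANT CREATE SESSION, ALTER DATABASE TO c##cdb_admin CONTAINER=ALL;
-- A local user that exists only inside one pluggable database:
ALTER SESSION SET CONTAINER = pdb1;
CREATE USER pdb1_admin IDENTIFIED BY Secret_1;
GRANT CREATE SESSION, CREATE TABLE TO pdb1_admin;
EOF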

 The bottom line: it's a good idea to at least consider using a CDB.


About the author:

Chris Andrews is a Canadian instructor with Oracle University who teaches the Server Technologies courses for both 11g and 12c. He has been with Oracle University since 1997 and started training customers back with Oracle 7. Now a Senior Principal Instructor with OU, Chris worked as a DBA for about 10 years before joining Oracle. His background and experience as a DBA, database analyst and developer are occasionally shared with customers in the classroom. His skill set includes the Database Administration workshops, Backup & Recovery, Performance Tuning, RAC, Data Guard, ASM, Clusterware, and the Exadata and Database Machine administration workshop. When not teaching, Chris enjoys aviation and flying gliders, underwater photography and tennis.

Friday Nov 01, 2013

Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data), an English setter could be seen as the link to the right data.


Data, data, data: we are living in a world where data technology, behind popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, plays a predominant role in our lives.

More and more technologies are used to analyze and track our behavior, to detect patterns, and to offer us "the best/right user experience", from Google Ad services to telco companies and large consumer sites (like Amazon). The more we use all these technologies, the more data we generate, hence the need for huge data marts and specific hardware/software servers (such as Exadata) in order to process, analyze and understand the trends and offer new services to users.

Some of these "data feeds" are raw, unstructured data that cannot be processed effectively by normal SQL queries. Large-scale distributed processing became an infrastructure need, and the answer was to "collocate the compute nodes with the data", which led to MapReduce parallel patterns and to the development of the Hadoop framework, based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers.
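
To make the MapReduce idea concrete, here is a toy word count using Hadoop streaming with plain shell commands as mapper and reducer. This is only a sketch: the streaming jar location is the standard Hadoop 1.x layout and the input/output paths are made up for the example:

#!/bin/bash
# Sketch only -- assumes a running Hadoop 1.x cluster; paths are illustrative.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-1.2.1.jar \
  -input  /user/oracle/books \
  -output /user/oracle/wordcount \
  -mapper  'tr -s " " "\n"' \
  -reducer 'uniq -c'
# The map step emits one word per line; the framework sorts and groups by key,
# so the reduce step can simply count consecutive identical words.
hadoop fs -cat /user/oracle/wordcount/part-00000 | head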

Several Oracle products use the distribute/aggregate pattern for data processing (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar.

Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or an equivalent Hadoop cluster.

In this post, a lab-style implementation of this concept is done on a single Linux x64 server running Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache Hadoop 1.2.1 HDFS cluster, using the Oracle SQL Connector for HDFS.

The whole setup is fairly simple:

  1. Install an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server on a Linux x64 host (or a VirtualBox appliance).
  2. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
  3. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
  4. Check the java version of your Linux server with the command:
    java -version
     java version "1.7.0_40"
    Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
    Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    
  5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1
  6. Modify your .bash_profile
    export HADOOP_HOME=/u01/hadoop-1.2.1
    export PATH=$PATH:$HADOOP_HOME/bin
    
    export HIVE_HOME=/u01/hive-0.11.0
    export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
    
    (also see my sample .bash_profile)
  7. Set up ssh trust for the Hadoop processes. This is a mandatory step; in our case we establish a "local trust" since we are using a single-node configuration (the typical commands are sketched after this walkthrough).
  8. Copy the new public key to the list of authorized keys.
  9. Connect and test the ssh setup to your localhost.

  10. We will run a single-node, pseudo-distributed Hadoop cluster: all the Hadoop components run on one host, which is enough for our demo purposes. We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the following files:
    core-site.xml
                

    hdfs-site.xml
    

    mapred-site.xml
    

  11. check that the hadoop binaries are referenced correctly from the command line by executing:
    
    hadoop version
    
  12. As Hadoop manages our "clustered" HDFS file system, we have to create "the mount point" and format it. The mount point is declared in core-site.xml as:

    The layout under /u01/hadoop-1.2.1/data will be created and used by the Hadoop components: MapReduce uses the /mapred/... structure and HDFS uses the /dfs/... structure.

  13. Format the HDFS file system (the usual command is sketched after this walkthrough).
  14. Start the Java components (daemons) of the HDFS system.
  15. As an additional check, you can use the Hadoop web UIs to verify your HDFS configuration:

    Once our Hadoop HDFS setup is done, you can use the HDFS file system to store data (big data!) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step).

    You could create and use a Hive database, but in our case we will do a simple integration of "raw data" through the creation of an external table in a local Oracle instance (the single-node Hadoop HDFS cluster and the Oracle database run on the same Linux box).

  16. Download some public "big data". I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations.

    Here is the data layout of my example file:

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system.

  17. Modify your environment in order to access the connector libraries, and make the following test:

    [oracle@dg1 bin]$./hdfs_stream
    Usage: hdfs_stream locationFile
    [oracle@dg1 bin]$
    
  18. Load the data to the Hadoop hdfs file system:
    hadoop fs  -mkdir bgtest_data
    hadoop  fs  -put obsFrance.txt bgtest_data/obsFrance.txt
    hadoop fs  -ls  /user/oracle/bgtest_data/obsFrance.txt       
    [oracle@dg1 bg-data-raw]$ hadoop fs -ls  /user/oracle/bgtest_data/obsFrance.txt
    
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    
    [oracle@dg1 bg-data-raw]$hadoop fs -ls  hdfs:///user/oracle/bgtest_data/obsFrance.txt
    
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    
  19. Check the content of the HDFS with the browser UI:
  20. Start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own database SID; replace it with yours):
    #!/bin/bash
    export ORAENV_ASK=NO
    export ORACLE_SID=dg1
    . oraenv
    sqlplus /nolog <<EOF
    CONNECT / AS sysdba;
    CREATE OR REPLACE DIRECTORY osch_bin_path  AS  '/u01/orahdfs-2.2.0/bin';
    CREATE USER BGUSER IDENTIFIED BY oracle;
    GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
    GRANT EXECUTE ON sys.utl_file TO BGUSER;
    GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
    GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
    GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
    EOF
    
  21. Put the following in a file named t3.sh and make it executable:
    hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
    oracle.hadoop.exttab.ExternalTable \
    -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
    -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
    -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
    -D oracle.hadoop.exttab.columnCount=7 \
    -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
    -D oracle.hadoop.connection.user=BGUSER \
    -D oracle.hadoop.exttab.printStackTrace=true \
    -createTable  --noexecute
    

    Then test the creation of the external table with it:

    [oracle@dg1 samples]$ ./t3.sh
    
    ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
    Oracle SQL Connector for HDFS Release 2.2.0 - Production
    Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
    Enter Database Password:]
    The create table command was not executed.
    The following table would be created.
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
     "C1"                             VARCHAR2(4000),
     "C2"                             VARCHAR2(4000),
     "C3"                             VARCHAR2(4000),
     "C4"                             VARCHAR2(4000),
     "C5"                             VARCHAR2(4000),
     "C6"                             VARCHAR2(4000),
     "C7"                             VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY "BGT_DATA_DIR"
       ACCESS PARAMETERS
       (
         RECORDS DELIMITED BY 0X'0A'
         CHARACTERSET AL32UTF8
         STRING SIZES ARE IN CHARACTERS
         PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
         FIELDS TERMINATED BY 0X'2C'
         MISSING FIELD VALUES ARE NULL
         (
           "C1" CHAR(4000),
           "C2" CHAR(4000),
           "C3" CHAR(4000),
           "C4" CHAR(4000),
           "C5" CHAR(4000),
           "C6" CHAR(4000),
           "C7" CHAR(4000)
         )
       )
       LOCATION
       (
         'osch-20131022081035-74-1'
       )
    ) PARALLEL REJECT LIMIT UNLIMITED;
    The following location files would be created.
    osch-20131022081035-74-1 contains 1 URI, 54103 bytes
           54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    
  22. Then remove the --noexecute flag and create the external Oracle table for the Hadoop data.

    Check the results:

    
    The create table command succeeded.
    
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
     "C1"                             VARCHAR2(4000),
     "C2"                             VARCHAR2(4000),
     "C3"                             VARCHAR2(4000),
     "C4"                             VARCHAR2(4000),
     "C5"                             VARCHAR2(4000),
     "C6"                             VARCHAR2(4000),
     "C7"                             VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    ( 
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY "BGT_DATA_DIR"
       ACCESS PARAMETERS
       (
         RECORDS DELIMITED BY 0X'0A'
         CHARACTERSET AL32UTF8
         STRING SIZES ARE IN CHARACTERS
         PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
         FIELDS TERMINATED BY 0X'2C'
         MISSING FIELD VALUES ARE NULL
         (
           "C1" CHAR(4000),
           "C2" CHAR(4000),
           "C3" CHAR(4000),
           "C4" CHAR(4000),
           "C5" CHAR(4000),
           "C6" CHAR(4000),
           "C7" CHAR(4000)
         )
       )
       LOCATION
       (
         'osch-20131022081719-3239-1'
       )
    ) PARALLEL REJECT LIMIT UNLIMITED;
    
    The following location files were created.
    
    osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
    
           54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    

    This is the view from the SQL Developer:

    And finally, the number of rows in the Oracle table, loaded from our Hadoop HDFS cluster:

    SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";
                      
    COUNT(*)
    ----------
          1151
    
    

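For reference, here is a sketch of the typical commands behind steps 7-9 and 13-15 above; the file names and the web UI port are the standard Hadoop 1.x defaults rather than values specific to this setup:

#!/bin/bash
# Steps 7-9: passwordless ssh to localhost, required by the Hadoop start scripts.
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost "echo ssh trust is working"

# Step 13: format the HDFS namespace configured in core-site.xml / hdfs-site.xml.
hadoop namenode -format

# Step 14: start the HDFS daemons (NameNode, DataNode, SecondaryNameNode).
start-dfs.sh

# Step 15: sanity checks; the NameNode web UI is normally at http://localhost:50070.
jps
hadoop fs -ls /
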
In a future post we will integrate data from a Hive database and try some ODI integration with the ODI Big Data connector. Our simplistic approach is just a first step to show how this world of unstructured data can be integrated with the Oracle infrastructure.

Hadoop, Big Data and NoSQL are great, widely used technologies, and Oracle offers a large integration infrastructure based on these services.

Oracle University presents a complete curriculum on all the Oracle related technologies:

NoSQL:

Big Data:

Oracle Data Integrator:

Oracle Coherence 12c:

Other Resources:

  • Apache Hadoop: http://hadoop.apache.org/ is the home page for these technologies.
  • "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active googling will give you further references.

About the author:

Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic, WebCenter Content, BPM, SOA, Identity & Security, GoldenGate, Virtualization, Unified Communications Suite) throughout the EMEA region.

Wednesday Oct 09, 2013

NEW ORACLE WEBCENTER CONTENT USER INTERFACE ON VERSION 11.1.1.8
By Roberto Nogueras

Oracle purchased Stellent in November 2006. Soon the Stellent Content Server product became Oracle Content Server, then Oracle UCM, and finally Oracle WebCenter Content. As you can see, the product name has changed three times in the past seven years. The user interface, however, hasn’t changed that much: Oracle rebranded it in the 10gR3 release and has given it only minor updates ever since. The interface is functional, but perhaps too complex for some end users, and it lacks the look and feel of modern web applications.

In Spring 2013, it became known that Release 11.1.1.8 was going to feature a new user interface. Some time in September, I decided to download and install it. I connected to the home page URL, logged in and the good old UI came up:

I know, I know, I should read the documentation before installing. After doing so, I found out a few interesting things:

  • The new UI is not a replacement for the old one. It just contains the features more useful for end users.
  • The new UI is an ADF application that you have to install in a separate WLS domain, and there’s no problem in running it in a separate machine.
  • The new UI communicates with the content server using the RIDC protocol (there’s a dedicated port open for that protocol on the content server side).

The setup is fully explained in the documentation, so we’re not going to get into the details. I’d rather perform a functionality analysis of the new UI. As mentioned before, it’s end user-oriented.

First of all, let’s login. After logging in, this is the main screen:

It’s quite obvious that it has just a few options and the main screen displays, by default, a blank search result. Let’s click the “Upload Button” to check in new content:

Nice. A pop-up window opens and we can either browse the file system or drag-and-drop some content from a file explorer window. When we add a file, the “Metadata” section expands automatically and we can enter data:

We’re redirected to the main screen, and after waiting a few seconds for the search index to be updated, we can click the “refresh” button and then the new document appears in the search:

If we click it, a new tab/window opens displaying this preview screen:

This one is a beautiful design, in my opinion. On the left side we see the document content and, more importantly, the tabs to navigate between revisions. On the right side, we can see metadata values, the menu to check out the content, and some other options such as “Favorite”, “Follow”, and “File Document”, which will be discussed a bit later. For now, let’s check out some content and create a new revision. Please note that a new tab is created:

You can “Follow” pieces of content, which is similar to the old “Subscribe” option; that is, the user wants to be notified every time a new revision is generated for that content item. You can also mark content as Favorite, which is something new, and finally, you can arrange content into “Libraries”, which are just an evolution of folders. That’s the use of the “File Document” option: to put content into libraries:

There’s little else to say about the interface, as we’ve covered all of its functionality. I hope you now have the information you need to decide whether or not to use it. The benefits for end users are obvious, and so is the cost: an extra WLS domain and more memory consumption on the server side.


About the Author:


Roberto Nogueras is based in Madrid, Spain and has been an Oracle University instructor since 1995. He specializes in the Oracle database, application server and middleware technologies. Roberto has been working with Oracle WebCenter Content since the Stellent acquisition.

Wednesday Feb 27, 2013

Transporting a Single Partition of a Table


A feature of the Oracle database that appeared in 11gR1 but seems to have gone largely unnoticed is the ability to transport single (or multiple) partitions of a table to another database. This is particularly useful when archiving out older data held in older partitions of a table.

In previous releases of the Oracle database, it was only possible to transport entire tables, which meant every partition in that table. Therefore the standard methodology used to transport a single partition was to perform a partition exchange at the source database, making the partition a standalone table, and then to transport this table using the normal Transportable Tablespace method to the target database. The result is a standalone table in the target database. The Oracle 11.2 documentation still states that you cannot use the Transportable Tablespace feature to transport a single partition of a table; however, we are going to use a different technique, known as ‘datafile copy’, to do exactly that.

The datafile copy technique is actually part of Data Pump. In a similar way to transporting tablespaces, Data Pump is used to export out the metadata describing the objects to be copied to the new database. The datafiles that contain these objects are then copied to the new database, and Data Pump is used to import the metadata. These objects can now be seen by the new database.

This sounds exactly like transporting a tablespace, but there is a difference. In the datafile copy technique we might only be interested in a subset of what is in the datafiles, not the entire contents. This is achieved by invoking Data Pump export in the table mode, rather than tablespace mode. The datafile then effectively becomes an alternative to a dumpfile containing the exported data.

If the only contents of the tablespace in question are a single partition (or subpartition) of a table, then the final result appears to be identical to that achieved using the partition exchange and transportable tablespace technique traditionally used.

Now look at a worked example.
First, create a tablespace TRANS to hold the partition to be transported.
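
The tablespace creation itself is not shown here; a minimal sketch, run from SQL*Plus as a DBA, would look like this (the datafile name matches the file copied later in this example, while the size is purely illustrative):

SQL> CREATE TABLESPACE trans
     DATAFILE 'd:\oradata\trans.dbf' SIZE 100M;

Next, create the partitioned table, placing its two 2004 partitions in TRANS: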

SQL> CREATE TABLE "OE"."ORDERS_PART"
( "ORDER_ID" NUMBER(12),
"ORDER_DATE" TIMESTAMP(6) WITH local TIME ZONE CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6) CONSTRAINT "ORDER_PART_CUST_ID_NN" NOT NULL ,
"ORDER_STATUS" NUMBER(2),
"ORDER_TOTAL" NUMBER(8, 2),
"SALES_REP_ID" NUMBER(6),
"PROMOTION_ID" NUMBER(6))
TABLESPACE "EXAMPLE"
PARTITION BY RANGE ("ORDER_DATE")
(PARTITION "H1_2004" VALUES LESS THAN (TIMESTAMP'2004-07-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H2_2004" VALUES LESS THAN (TIMESTAMP'2005-01-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H1_2005" VALUES LESS THAN (TIMESTAMP'2005-07-01 00:00:00 +0:00'),
PARTITION "H2_2005" VALUES LESS THAN (TIMESTAMP'2006-01-01 00:00:00 +0:00'),
PARTITION "H1_2006" VALUES LESS THAN (TIMESTAMP'2006-07-01 00:00:00 +0:00'),
PARTITION "H2_2006" VALUES LESS THAN (MAXVALUE));
Table created

And then populate it:

SQL> insert into oe.orders_part select * from oe.orders;
105 rows created.
SQL> commit;

Look at the segments that are contained inside this tablespace:

SQL> select owner, segment_name,partition_name from dba_segments where tablespace_name='TRANS'

OWNER    SEGMENT_NAME          PARTITION_NAME
-------- --------------------- --------------
OE       ORDERS_PART           H1_2004
OE       ORDERS_PART           H2_2004

Just like when transporting tablespaces, any tablespace whose datafiles we want to copy must first be made read-only.

SQL> alter tablespace trans read only;
Tablespace altered

Now the Data Pump export can be performed. The ‘tables’ parameter is used to indicate which partition(s) we want to export, and ‘transportable=always’ indicates that the generated dumpfile is to contain the table/partition metadata only, as the datafiles containing the segments involved will be copied to the new database.

expdp system/oracle tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpfiles=y

Export: Release 11.2.0.2.0 - Production on Thu Feb 21 16:20:31 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpf
s=y
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DATAPUMP\ORDERSH1_2004.DMP
******************************************************************************
Datafiles required for transportable tablespace TRANS:
  D:\ORADATA\TRANS.DBF
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:20:53

Now the datafile is copied to its new location and the TRANS tablespace is made read-write again.

copy d:\oradata\trans.dbf d:\oradata\trans.cpy
        1 file(s) copied.

(If necessary, RMAN can now be invoked to convert the datafile to a different platform format.)

SQL> alter tablespace trans read write;

Tablespace altered.

Now move over to the second database and import the partition’s metadata into the database. Use the option ‘partition_options=departition’ which will import the partition as a standalone table. In this example the schema owner is also being changed.

impdp system/oracle partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/oradata/trans.cpy' remap_schema=OE:SH

Import: Release 11.2.0.2.0 - Production on Thu Feb 21 16:21:28 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/
data/trans.cpy' remap_schema=OE:SH
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:21:34

Look for the newly imported table

SQL> select table_name, tablespace_name  from dba_tables
  2  where table_name like 'ORDERS%';

TABLE_NAME			TABLESPACE_NAME
------------------------------ --------------------------------------
ORDERS_PART_H1_2004		TRANS
ORDERS_QUEUETABLE 		EXAMPLE
ORDERS				EXAMPLE
ORDERS				EXAMPLE

Its new name is the name of the original table suffixed with the name of the partition. If this were the only segment originally stored in this datafile, we would now be finished. But although this partition is the only segment whose metadata was imported into this database, there are actually other segments in this datafile that this 'new' database does not know anything about.

SQL> select owner, segment_name, partition_name from dba_segments where tablespace_name='TRANS'

OWNER    SEGMENT_NAME         PARTITION_NAME
-------- -------------------- --------------
SH       ORDERS_PART_H1_2004

So move the new table to a different tablespace, enabling the TRANS tablespace to be dropped.

SQL> alter table sh.orders_part_h1_2004 move tablespace example;

Table altered.

SQL> drop tablespace trans;

Tablespace dropped.

Datafile copying as a means to transport partitions is one of the many features covered in the Oracle Database 11g : Implement Partitioning course.

About the Author:

Gwen Lazenby



Gwen Lazenby is Principal Training Consultant at Oracle. She is part of the UK and IE Core Tech Delivery team, teaching database administration, Spatial and Linux courses. Gwen is also a member of the OU EMEA SME team, with a special responsibility for Oracle Spatial.

Monday Oct 29, 2012

ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop


There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be?

We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service.

Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels.

Let’s take a brief look at some of the most useful tools available and see how they make a difference.

Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.

Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides for quickly crafting chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you can design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win!

I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

Five Top Tips to take away:

  1. Start by working out “why” and “how” customers are contacting you.
  2. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated consider using Desktop Workflow to streamline the interaction.
  3. Enhance your knowledge base with guides. Agents can access them proactively, and the guides can be published on your web pages for customers to help themselves.
  4. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.
  5. Desktop optimization is an ongoing process so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.

 

Want to learn more?

Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.

 

Useful resources:

Review the Best Practice Guide

Review the Agent Desktop Tune-up Guide

 

About the Author:

Angela Chandler

Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management.  She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:

Wednesday Aug 29, 2012

Integrating Oracle Hyperion Smart View Data Queries with MS Word and PowerPoint


Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or PowerPoint. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

So how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points, even a single data cell.

 

TIP

It is not necessary to highlight and copy the row or column descriptions

 

Next from the Smart View ribbon select Copy Data Point.

Then switch to the Word or PowerPoint document into which the selected content should be copied. Note that in these Office programs you will find a Smart View menu item; from it select the Paste Data Point icon.

The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers.

After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.)

It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text:

Before refresh:

After refresh:

From now on (provided that you are connected online to your database or application) for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again.

As you might realize when trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also, you have seen in the example where only a single data cell was copied that no member names or row/column descriptions are copied, although these are usually required in an ad-hoc report in order to define exactly where data comes from and how it is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or PowerPoint document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell, as shown in the following screenshot (which is taken from another context).

 

So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you have the chance now to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:

 

By selecting the Manage POV option from the Smart View menu in Word or PowerPoint…

 

… the following POV Manager – Queries window opens:

 

You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under “POV:” or by selecting the Member Selector icon on the top right-hand side of the window. After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

TIP

Build your original report in such a way that the dimensions you might want to change from within Word or PowerPoint are placed in the POV.

And there is another really nice feature I don’t want to miss mentioning: using Dynamic Data Points in the way described above, you will never lose or need to search again for the original Excel sheet from which values were copied as data points into an Office document, because from even a single data cell Smart View is able to recreate the entire original report content with just a few clicks:

Select one of the numbers from within your Word or PowerPoint document by double-clicking it.

 

Then select the Visualize in Excel option from the Smart View menu.

Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.)

However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

So much for embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com .

About the Author:

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

 

Friday Jun 29, 2012

Oracle RightNow CX for Good Customer Experiences

Oracle RightNow CX is all about the customer experience: it’s about understanding what drives a good interaction, and it’s about delivering a solution which works for our customers and, by extension, their customers.

One of the early guiding principles of Oracle RightNow was an 8-point strategy to providing good customer experiences.

  1. Establish a knowledge foundation
  2. Empower the customer
  3. Empower employees
  4. Offer multi-channel choice
  5. Listen to the customer
  6. Design seamless experiences
  7. Engage proactively
  8. Measure and improve continuously

The application suite provides all of the tools necessary to deliver a rewarding, repeatable and measurable relationship between business and customer.

  • The Knowledge Authoring tool provides gap analysis, WYSIWYG editing (including HTML rich content for non-developers), multi-level categorisation, permission-based publishing and Web self-service publishing.
  • Oracle RightNow Customer Portal, is a complete web application framework that enables businesses to control their own end-user page branding experience, which in turn will allow customers to self-serve.
  • The Contact Centre Experience Designer builds a combination of workspaces, agent scripting and guided assistances into a Desktop Workflow. These present an agent with the tools they need, at the time they need them, providing even the newest and least experienced advisors with consistently accurate and efficient information, whilst guiding them through the complexities of internal business processes.
  • Oracle RightNow provides access points for customers to feedback about specific knowledge articles or about the support site in general. The system will generate ‘incidents’ based on the scoring of the comments submitted. This makes it easy to view and respond to customer feedback.

It is vital, more now than ever, not to under-estimate the power of the social web – Facebook, Twitter, YouTube – they have the ability to cause untold amounts of damage to businesses with a single post – witness musician Dave Carroll and his protest song on YouTube, posted in response to poor customer service from an American airline. The first day saw 150,000 views and the count currently stands at 12,011,375. The Times reported that within 4 days of the post, the airline’s stock price fell by 10 percent, which represented a cost to shareholders of $180 million.

It is a universally acknowledged fact, that when customers are unhappy, they will not come back, and, generally speaking, it only takes one bad experience to lose a customer.

The idea that customer loyalty can be regained by using social media channels was the subject of a 2011 Survey commissioned by RightNow and conducted by Harris Interactive. The survey discovered that 68% of customers who posted a negative review about a holiday on a social networking site received a response from the business. It further found that 33% subsequently posted a positive review and 34% removed the original negative review. Cloud Monitor provides the perfect mechanism for seeing what is being said about a business on public Facebook pages, Twitter or YouTube posts; it allows agents to respond proactively – either by creating an Oracle RightNow incident or by using the same channel as the original post.

This leaves step 8 – Measuring and Improving:

  • How does a business know whether it’s doing the right thing?
  • How does it know if its customers are happy?
  • How does it know if its staff are being productive?
  • How does it know if its staff are being effective?

Cue Oracle RightNow Analytics – fully integrated across the entire platform – Service, Marketing and Sales – there are in excess of 800 standard reports. If this were not enough, a large proportion of the database has been made available via the administration console, allowing users without any prior database experience to write their own reports, format them and schedule them for e-mail delivery to a distribution list. It handles the complexities of table joins, and allows for the manipulation of data with ease.

Oracle RightNow believes strongly in the customer owning their solution, and to provide the best foundation for success, Oracle University can give you the RightNow knowledge and skills you need. This is a selection of the courses offered:

A full list of courses offered can be found on the Oracle University website. For more information and course dates please get in contact with your local Oracle University team.

On top of the Service components, the suite also provides marketing tools, complex survey creation and tracking and sales functionality.

I’m a fan of the application, and I think I’ve made that clear:

  • It’s completely geared up to providing customers with support at point of need.
  • It can be configured to meet even the most stringent of business requirements.

Oracle RightNow is passionate about, and committed to, providing the best customer experience possible. Oracle RightNow CX is the application that makes it possible.

About the Author:


Sarah Anderson worked for RightNow for 4 years in both a consulting and a training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses:

  • RightNow Customer Service Administration
  • RightNow Analytics
  • RightNow Customer Portal Designer and Contact Center Experience Designer Administration
  • RightNow Marketing and Feedback

Wednesday Apr 18, 2012

Setting a simple high availability configuration for an Oracle Unified Directory Topology

Oracle Unified Directory is the latest LDAP directory server offered by Oracle, implemented entirely in Java.

It offers several improvements over earlier directory servers, such as faster read/write operations, better handling of high data volumes, easier scaling, and replication and proxy capabilities.

In this post we will explore some of the replication features of the Oracle Unified Directory Server by providing a simple setup for high availability of any OUD topology.

Production OUD topologies need to offer a continuous, synchronized and highly available flow of the business data managed/hosted in OUD LDAP stores. This is achieved by using OUD LDAP data nodes together with specific OUD directory instances acting as "replication" nodes. A cluster of "OUD replication nodes" associated with the OUD LDAP data stores offers a high degree of availability for the data hosted in the OUD LDAP directories.

A replication node is an instance of Oracle Unified Directory which is used only to synchronize data (read/write operations) between several dedicated OUD LDAP stores. We can set up a replication node in the same JVM that hosts an OUD LDAP store, or we can create a specific JVM process to handle the replication (this is our approach for this demo).


For our demo, we will create a simple 4-node topology on the same physical host.

Two OUD nodes (each node is a separate JVM process), both of them active, will handle the user data, and two OUD replication nodes (also separate JVM processes) will handle the synchronization of any modification applied to the data of node1/node2.

As a best practice, it’s good to have at least 2 separate replication nodes in the topology, although we can start the replication scenario with only 1 replication node and then add an additional instance.

With at least 2 replication nodes in our system, we ensure that any operation on the data nodes of our LDAP system (node1/node2) will be propagated to the other LDAP nodes, even if one of the replication nodes fails.

The whole scenario can run on one physical host (VirtualBox or VMware) without much overhead. In other posts we will discuss tuning and monitoring operations.



Let’s start by creating the two LDAP nodes (node1, node2) which will hold our data.

This is done by executing the oud-setup script in graphical mode:


Creation of the first node: node1 listens on port 1389, and 4444 is its admin port.


Node1 is a standalone LDAP server.


The server will manage the directory tree dc=example,dc=com, which is a sample suffix. For this suffix, the wizard will generate 2000 sample entries.


This is the final setup for node1.


At this stage, we already have one instance of our LDAP topology (node1) up and running!

We will continue with the creation of the second OUD LDAP node (node2).


The setup for node2 is nearly identical to node1; the LDAP listen port is 2389, and the admin port is 5444.


For node2, we will create the same directory tree structure, but we will leave the directory database empty. During the synchronization phase (see later steps), we will provision the directory with data coming from node1.


This is the final setup for node2.


And at this stage we have node2 up and running as well!

The creation of the replication nodes is a nearly identical process to that of the previous LDAP nodes: we will create 2 OUD instances with the configuration wizard, and we will then set up the replication as an additional step using the dsreplication command.


The first replication node will listen on port 3389; its admin port will be 6444.


The second replication node will listen on port 4389; its admin port will be 7444.

At this stage we have 4 OUD instances running on our system: two LDAP nodes (node1, node2) with the "business data", and two replication nodes. Node1 is already populated with data; node2 will be provisioned during the setup of the replication between it and node1.

Now let’s set up the replication between node1 and the first replication OUD server by executing the following dsreplication command. Node1 is the LDAP data node, and the first replication node will hold only the replication information (it will not hold any LDAP data).


dsreplication enable \                                                                          
  --host1 localhost  --port1 4444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  8989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             


This is what we should see in our prompt:


Then we should associate the second replication node with node1.

The only parameters we have to change from the previous command are the admin port and the replication port of the second replication node.


 dsreplication enable \                                                                          
  --host1 localhost  --port1 4444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 7444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  9989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             



Now we execute the same command against node2 in order to associate this node with the first replication node:


dsreplication enable \                                                                          
  --host1 localhost  --port1 5444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  8989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             



At this stage we have associated node1 and node2 with the two replication nodes.

Before starting our operations (read/write) on node1 and node2, we have to initialize the replication topology with the following command:


dsreplication initialize-all --hostname localhost --port 4444 \                                 
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password                         


Here is what we should see :


As you can see, node2 is fully provisioned from the data of node1!

Now we can monitor our configuration by executing the following command :


 dsreplication status --hostname localhost --port 4444 \                                         
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password                         



To test our replication configuration, you can use any LDAP browser client: connect to the first instance, modify one or several entries, then connect to the second instance and check that your modifications have been applied.
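
As a minimal command-line version of this test, the sketch below modifies an entry through node1 and reads it back through node2. The instance paths, the sample entry DN and the passwords are assumptions for illustration only:

#!/bin/bash
# Sketch only -- adjust instance paths, bind credentials and the entry DN to your setup.
OUD1_BIN=/u01/oud/node1/bin      # hypothetical install locations
OUD2_BIN=/u01/oud/node2/bin

# Write through node1 (LDAP port 1389):
$OUD1_BIN/ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w welcome1 <<EOF
dn: uid=user.0,ou=People,dc=example,dc=com
changetype: modify
replace: description
description: updated via node1
EOF

# Read back through node2 (LDAP port 2389) and check the replicated value:
$OUD2_BIN/ldapsearch -h localhost -p 2389 -D "cn=Directory Manager" -w welcome1 \
  -b "dc=example,dc=com" "(uid=user.0)" description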

For additional training see: Oracle Unified Directory 11g: Services Deployment Essentials

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic, WebCenter Content, BPM, SOA, Identity & Security, GoldenGate, Virtualization, Unified Communications Suite) throughout the EMEA region.

Monday Apr 09, 2012

How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

This article describes how, in PeopleCode (PeopleTools release 8.50), to enable or disable a grid without referencing each static column, using a dynamic Application Class.

The goal is to disable the following grid with three columns, “Effort Date”, “Effort Amount” and “Charge Back”, when the check box “Finished with task” is selected, without referencing each static column; this PeopleCode can be used dynamically with any grid.


If the check box “Finished with task” is cleared, the content of the grid columns is editable (and the buttons “+” and “-“ are available):


So, you create an Application Package “CLASS_EXTENSIONS” that contains an Application Class “EWK_ROWSET”.

This Application Class is defined with Class extends “Rowset”, and you add two new properties, “Enabled” and “Visible”:


After creating this Application Class, you use it in two PeopleCode events: RowInit and FieldChange:


This code is very simple: you write only one statement, “&ERS2.Enabled = False”, and the entire grid is disabled. And you can use this code with any grid! (A sketch of the event PeopleCode is shown below.)
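
As an indication only, the FieldChange event code might look like the sketch below; the scroll, record and field names (Scroll.EFFORT_VW, DERIVED_EX.FINISHED_TASK) are placeholders, not names taken from this example:

import CLASS_EXTENSIONS:EWK_ROWSET;

Local CLASS_EXTENSIONS:EWK_ROWSET &ERS2;

/* Wrap the grid rowset in the extended class (the scroll name is hypothetical) */
&ERS2 = create CLASS_EXTENSIONS:EWK_ROWSET(GetRowset(Scroll.EFFORT_VW));

/* Enable or disable the whole grid from the check box value */
If DERIVED_EX.FINISHED_TASK = "Y" Then
   &ERS2.Enabled = False;
Else
   &ERS2.Enabled = True;
End-If;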

So, the complete PeopleCode to create the Application Package is (with explanation in [….]) :

******Package CLASS_EXTENSIONS :    [Name of the Package: CLASS_EXTENSIONS]

--Beginning of the declaration part------------------------------------------------------------------------------
class EWK_ROWSET extends Rowset;          [Definition Class EWK_ROWSET  as a 
                                          subclass of Class Rowset]
   method EWK_ROWSET(&RS As Rowset);      [Constructor is the Method with the
                                          same name of the Class]
   property boolean Visible get set;
   property boolean Enabled get set;      [Definition of the property 
                                          “Enabled” in read/write]
private                                   [Before the word “private”, 
                                          all the declarations are publics]
   method SetDisplay(&DisplaySW As boolean, &PropName As string, 
          &ChildSW As boolean);
   instance boolean &EnSW;
   instance boolean &VisSW;
   instance Rowset &NextChildRS;
   instance Row &NextRow;
   instance Record &NextRec;
   instance Field &NextFld;
   instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt;
   instance integer &i, &j, &k;
   instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild;   [For recursion]
   Constant &VisibleProperty = "VISIBLE";
   Constant &EnabledProperty = "ENABLED";
end-class;
--End of the declaration part------------------------------------------------------------------------------

method EWK_ROWSET [The Constructor]
   /+ &RS as Rowset +/
   %Super = &RS;
end-method;
get Enabled
   /+ Returns Boolean +/;
   Return &EnSW;
end-get;
set Enabled
   /+ &NewValue as Boolean +/;
   &EnSW = &NewValue;
   %This.InsertEnabled = &EnSW;
   %This.DeleteEnabled = &EnSW;
   %This.SetDisplay(&EnSW, &EnabledProperty, False); [This method is called when
                                                    you set this property]
end-set;
get Visible
   /+ Returns Boolean +/;
   Return &VisSW;
end-get;

set Visible
   /+ &NewValue as Boolean +/;
   &VisSW = &NewValue;
   %This.SetDisplay(&VisSW, &VisibleProperty, False);
end-set;

method SetDisplay                 [The most important PeopleCode Method]
   /+ &DisplaySW as Boolean, +/
   /+ &PropName as String, +/
   /+ &ChildSW as Boolean +/             [Not used in our example]
   &RowCnt = %This.ActiveRowCount;
   &NextRow = %This.GetRow(1);      [To know the structure of a line ]
   &RecCnt = &NextRow.RecordCount; 
   For &i = 1 To &RowCnt                     [Loop for each Line]
      &NextRow = %This.GetRow(&i);
      For &j = 1 To &RecCnt                   [Loop for each Record]
         &NextRec = &NextRow.GetRecord(&j);
         &FldCnt = &NextRec.FieldCount;      

         For &k = 1 To &FldCnt                 [Loop for each Field/Record]
            &NextFld = &NextRec.GetField(&k);
            Evaluate Upper(&PropName)
            When = &VisibleProperty
               &NextFld.Visible = &DisplaySW;
               Break;
            When = &EnabledProperty
               &NextFld.Enabled = &DisplaySW; [Enable each Field/Record]
               Break;
            When-Other
              Error "Invalid display property; Must be either VISIBLE or ENABLED"
            End-Evaluate;
         End-For;
      End-For;
      If &ChildSW = True Then   [If recursion]
         &ChildRSCnt = &NextRow.ChildCount;
         For &j = 1 To &ChildRSCnt [Loop for each Rowset child]
            &NextChildRS = &NextRow.GetRowset(&j);
            &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS);
            &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW);
 [For each Rowset child, call Method SetDisplay with the same parameters used 
 with the Rowset parent]
         End-For;
      End-If;
   End-For;
end-method;
******End of the Package CLASS_EXTENSIONS:[Name of the Package: CLASS_EXTENSIONS]
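
With the package saved, the event code mentioned above stays tiny. Here is a minimal sketch of what the FieldChange (or RowInit) PeopleCode behind the check box could look like; the record, field and scroll names (EX_TASK_WRK.FINISHED_FLAG, Scroll.EX_EFFORT_TBL) are hypothetical placeholders to adapt to your own component:

import CLASS_EXTENSIONS:EWK_ROWSET;

Local CLASS_EXTENSIONS:EWK_ROWSET &ERS2;

/* Wrap the grid rowset in the extended class                        */
/* (the scroll name EX_EFFORT_TBL is only an example)                */
&ERS2 = create CLASS_EXTENSIONS:EWK_ROWSET(GetRowset(Scroll.EX_EFFORT_TBL));

/* One property assignment drives the whole grid: disable it when    */
/* the check box (hypothetical field FINISHED_FLAG) is selected,     */
/* re-enable it when the check box is cleared                        */
If EX_TASK_WRK.FINISHED_FLAG.Value = "Y" Then
   &ERS2.Enabled = False;
Else
   &ERS2.Enabled = True;
End-If;

Because the class walks every row, record and field itself, the same few lines work unchanged for any grid; only the scroll and check-box names differ from one page to another.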

About the Author:


Pascal Thaler joined Oracle University in 2005, where he is a Senior Instructor. His area of expertise is Oracle PeopleSoft technology and he delivers the following courses:

  • For Developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Object-Oriented PeopleCode, Administration Security
  • For Administrators: Server Administration & Installation, Database Upgrade & Data Management Tools
  • For Interface Users: Integration Broker (Web Service)

Tuesday Mar 27, 2012

The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

After having remained almost unchanged for several years, the Process Management in Oracle's Hyperion Planning has received more than just a new name with release 11.1.2: “Approvals”, as it is now called, offers the possibility to further split Planning Units (each comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and encourage you to check out more details on it.

One reason for using the former process management in Planning was to limit data entry rights to one person at a time, based on the assignment of a planning unit. The lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person was not responsible for all of the data being entered into that entity, but only for part of it, it was not possible to split ownership along an additional dimension, for example by assigning ownership of different accounts at the same time. Defining a so-called Planning Unit Hierarchy (PUH) in Approvals now closes this gap. New, complementary Shared Services roles for Planning have been created in order to manage the setup and use of Approvals:

The Approvals Administrator, consisting of the following roles:

  • Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities).
  • Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned.
  • Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this also includes Planner and Ownership Assigner responsibilities).

Setup of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. The following window displays:

After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are:

  • All, which pre-selects all entities to be included for the definitions on the subsequent tabs
  • None, manual entity selections will be made subsequently
  • Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions.

Finally a pattern needs to be selected, which will determine the general flow of ownership:

  • Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2
  • Bottom-up, where data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by additional reviewers defined in the promotional path.
  • Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process.

Proceeding to the next step, a secondary dimension and the respective members from that dimension can now be selected, in order to create more detailed combinations underneath each entity.

    After selecting the Dimension and a Parent Member, defining a Relative Generation below this member assists in populating the Selected Members field, while the Count column shows the number of selected members. To refine this list, you can click the icon right beside the Selected Members field and use the check boxes in the list that appears to deselect members.


    TIP:

    In order to reduce maintenance of the PUH due to changes in the included dimensions (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the Planning application. For secondary dimensions this is done using the check boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the list of include or exclude options shown (children, descendants, etc.).

    In any case, in order to apply dimension changes that impact the PUH, a synchronization must be run. Whether this is actually necessary is shown on the first screen after selecting Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, the last of which indicates that another user is currently changing or synchronizing the PUH. Select one of the unsynchronized PUHs (status No) and click the Synchronize option in order to execute the synchronization.


    In the next step owners and reviewers are assigned to the PUH.

    Using the magnifying-glass icons right beside the Owner and Reviewer columns, the respective assignments can be made in the order in which you want them to review the planning unit. While only one owner can be assigned per entity, or per combination of entity and secondary-dimension member, more than one reviewer can be selected. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the corresponding icon. In addition, optional users can be defined to be notified about promotions of a planning unit.


    TIP:

    Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units.


    In order to complete your PUH definition, click Finish; this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment.

    Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries.

    After these steps, set up is completed for starting the approvals process. Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals.

    The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here:

    Export/Import of PUHs:



    Out of Office agent:



    Validation Rules changing promotional/approval path if violated (including the use of User-defined Attributes (UDAs)):



    And various new and helpful reviewer actions with corresponding approval states.


    More information

    You can find more detailed information in the following documents:

    Or on the Oracle Technology Network.

    If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

    You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com.

    About the Author:

    Bernhard Kinkel

    Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Friday Aug 26, 2011

    Ways to Train – Oracle University Style by David North

    It’s not just about the content, it’s not just about the trainer, it’s also about you – the learner. The ways people learn new skills are ever so varied, and it is for this reason that OU has been, and continues to be, dramatically expanding the sources and styles available. In this short article I want to expand on some of the available styles you may come across, to enable you to make the best choice for you. Firstly, we have “Instructor-led” training: the kind of live, group-based training that many of you will have already experienced by attending a classroom and coming face-to-face with your trainer, spending time absorbing theory lessons, watching demos, and then (in most classes) “having a go” at hands-on exercises with a live system.

    But now OU has added “LVC” (Live Virtual Class) – a variant of live, instructor-led training where, instead of having to travel, you attend class remotely over the Internet. You still have a live instructor (so you have to turn up on time... no slacking allowed!!). The tool we use allows plenty of interaction with the trainer and other class members, and the hands-on exercises are just the same – although in this style of training, if you fall behind or want to explore more, the machines on which you do the exercises are available 24x7 – no being kicked out of the classroom at the end of the day!

    We are doing more and more of these LVC classes as the word spreads about how good they really are. If you can’t take time out during the day and are really up for it, you’ll even find classes scheduled to run in the evenings and overnight! – although be careful you don’t end up on a class being delivered in Chinese or Japanese for example (unless of course you happen to speak the language... When you book a class the language and start times are clearly shown).

    For those of you who prefer a more self-paced style, or who cannot take big chunks of time out to do the live classes, we have created recordings of quite a few – which we call “RWC” (Recorded Web Class) – so you can log in and work through them at your leisure. Sadly with these we cannot make the hands-on practice environments available (there’s no-one there in real time to support them), but they do give you all the content, at a time and pace to suit your needs.

    If you like that idea, but want something a bit more interactive, we have “Online Training”. Do not confuse this with LVC; “Online Training” is not live – it is a combination of interactive computer-based lessons with demos and hands-on simulations based on real live environments. You decide where, when, and how much of the course you do. Each time you log back in, the system remembers where you were – you can go back and repeat parts of it, or simply carry on where you left off. Perfect if you have to do your training in bits and pieces at unpredictable times.

    And finally, if you like the idea of the “Online” option but want even more flexibility about when and where, we have “SSCD” (Self Study CD) – which is in effect the online class on a CD, so you don’t even have to be connected to the Internet to dip in and learn something new.

    Not all of our titles are available across all the styles, but the range is growing daily. Now you have no excuse for not finding something in a format that will suit your learning needs.

    Happy training.

    About the Author:

    David North
    David North is Delivery Director for Oracle Applications in the UK, Ireland and Scandinavia and is responsible for Specialist Education Services in EMEA. He has been working with Oracle Applications for over 9 years and in the past helped customers implement and roll out specific products in just about every country in EMEA. He also trained many customers from implementation and customisation through to marketing and business management.

    Wednesday Aug 10, 2011

    Oracle Real Application Clusters Curriculum under Release 2 by Lex van der Werff

    Oracle Real Application Clusters (Oracle RAC), part of the Oracle Database 11g Enterprise Edition, enables a single database to run across a cluster of servers, providing fault tolerance, performance, and scalability with no application changes necessary. With Release 2 of Oracle Database 11g, Oracle University has adjusted its RAC curriculum to ensure that you benefit from the full power of this release.

    The previous course offering for Oracle RAC Release 1 consisted of:

    Course Name: Oracle Database 11g: RAC Administration Ed 1.1
    Course Code: D50311
    Duration: 5 Days
    Content: This course was designed to cover both Oracle Database 10g Release 2 and Oracle Database 11g Release 1.

    As a result of the significant product changes that occurred with Release 2, the curriculum was also redesigned to address these changes. The Oracle Database 11g Release 2 training is now covered in two courses totaling 7 days of training:

    Course Name: Oracle Grid Infrastructure 11g: Manage Clusterware and ASM
    Course Code: D59999
    Duration: 4 Days 
    Content: Directed at DBAs and system administrators with responsibility for High Availability, Storage Administration, or both. DBAs who are familiar with Automatic Storage Management (ASM) or Clusterware from older releases need this course to learn how Grid Infrastructure has combined these technologies and to learn the new capabilities of the software. System administrators who manage HA software, as well as storage administrators, will benefit from learning how this product works.

    Course Name: Oracle Database 11g: RAC Administration Ed 2
    Course Code: D60491
    Duration: 3 Days
    Content: Designed for Database Administrators, primarily those new to RAC. Students will learn about RAC database administration in the Oracle Grid Infrastructure environment. This is not to be considered a refresher course.

    Both courses are required to master the Oracle Database 11g Release 2 and are part of the Oracle Database 11g learning path.

    Some countries also offer a compact, accelerated version of both courses lasting only 5 days.  

    Course Name: Oracle 11g: RAC and Grid Infrastructure Administration Accelerated Ed 1.1
    Course Code: D72078
    Duration: 5 Days

    To summarize based on job role:

    • Database Administrators who are new to RAC should attend both courses or the accelerated version.
    • Database Administrators familiar with RAC should attend the 4-day Grid Infrastructure course.
    • System and storage administrators who manage systems where RAC is installed should also attend the 4-day Grid Infrastructure course.

    Overview:

    Database Release: Oracle Database 10g Release 2 / Oracle Database 11g Release 1
    Course Name: Oracle Database 11g: RAC Administration Ed 1.1
    Course Code: D50311
    Duration: 5 Days
    Description: This course is for BOTH Oracle Database 10g Release 2 and Oracle Database 11g Release 1 only.

    Database Release: Oracle Database 11g Release 2
    Course Name: Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1
    Course Code: D59999
    Duration: 4 Days
    Description: In this course, students will learn about Oracle Grid Infrastructure components including Oracle Automatic Storage Manager (ASM), ASM Cluster File System, and Oracle Clusterware. This course is based on Oracle Database 11g Release 2.

    Database Release: Oracle Database 11g Release 2
    Course Name: Oracle Database 11g: RAC Administration Ed 2
    Course Code: D60491
    Duration: 3 Days
    Description: This RAC course is required as the second course, to be taken only after the Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1 course.

    Frequently Asked Questions:

    Q: I have attended the 5-day Oracle Database 11g: RAC Administration course; which course do I need to become skilled in Release 2?
    A: In this case, you only need to attend the 4 day Oracle Grid Infrastructure 11g: Manage Clusterware and ASM course.

    Q: Is it possible to take the Oracle Database 11g: RAC Administration course first and then at a later point in time the Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1 course?
    A: No, the 3-day course should be attended after the Oracle Grid Infrastructure 11g: Manage Clusterware and ASM Ed 1 course.

    If you have additional questions or would like to speak to an Oracle University representative to discuss your personal training needs, let us know!

     

    Lex van der Werff

    Lex van der Werff started at Ingres BV as a consultant in 1992. Two years later he joined Oracle as a trainer for various technical courses covering languages (SQL and PL/SQL), development tools (Developer and Designer), Database Administration and Application Server. During this time he also taught several seminars throughout EMEA which he developed himself. After working as a training manager for Oracle Consulting, Lex joined Oracle University in 2008 as Delivery Manager for the Benelux.
