Monday Jan 20, 2014

Description of the Business Object component within Oracle Siebel CRM / Oracle Siebel 的业务对象(BO)说明

Read in Chinese



Description of the Business Object component within Oracle Siebel CRM



Oracle Siebel CRM is a mainstream Customer Relationship Management (CRM) software package in the current global market. Providing a very comprehensive solution, Oracle Siebel defines its objects across three main levels of technical implementation.



The Business Object (BO) at the business level is quite interesting. In reality, a single Business Component (BC) cannot describe an actual business comprehensively. Therefore, the system introduces the concept of the Business Object to link various Business Components together, so that an actual business can be represented fully. Take the Business Component “Opportunity” as an example: in order to fully understand a business opportunity, we have to take into consideration the Account, Contact and Action that are associated with it. In the technical implementation, it is the Business Object that links all these different components together.



In this scenario built around Opportunity, Opportunity is the parent Business Component, with the Account, Contact and Action that describe the Opportunity as child Business Components. This forms a multidimensional technical model centered on Opportunity. Its technical implementation can be illustrated as below:



The Link is implemented by a primary key/foreign key relationship.
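To make the idea concrete, here is a minimal relational sketch of that parent/child relationship. The table and column names below are deliberately simplified illustrations, not the actual Siebel schema:

-- Illustrative only: simplified tables standing in for the Opportunity (parent)
-- and Action (child) Business Components.
CREATE TABLE opportunity (
  row_id    VARCHAR2(15) PRIMARY KEY,
  opty_name VARCHAR2(100)
);

CREATE TABLE activity (
  row_id   VARCHAR2(15) PRIMARY KEY,
  opty_id  VARCHAR2(15) REFERENCES opportunity (row_id),  -- the foreign key the Link relies on
  comments VARCHAR2(255)
);

-- Navigating from the parent record to its child records mirrors what the
-- Business Object does at runtime through the Link.
SELECT o.opty_name, a.comments
FROM   opportunity o
JOIN   activity a ON a.opty_id = o.row_id;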



Wang Zeyi: Senior training lecturer at Oracle University. Wang has worked in the CRM field for many years, with experience in CRM pre-sales, research, business analysis and technical implementation across industries such as financial services, high tech, telecommunications and automotive. He is mainly in charge of training courses on Oracle Siebel CRM and Oracle Fusion CRM.



Description of the Business Object (BO) in Oracle Siebel


Read in English



Oracle Siebel CRM is a mainstream customer relationship management package in today's global market. It provides a very complete solution, and its technical implementation defines objects across three main levels.



The Business Object (BO) in the business layer is particularly interesting. In actual business, a single Business Component (BC) cannot fully describe a real business, so the Business Object (BO) is introduced to relate multiple Business Components (BC) together and describe one concrete, real business. Take the Opportunity business as an example: in practice, to fully understand an opportunity you inevitably have to look at the Account, Contact and Action records related to that opportunity. In the technical implementation it is the Business Object (BO) that links these different objects together.



In this scenario the Opportunity at the center acts as the parent Business Component (BC), while the Account, Contact and Action records that describe the opportunity exist as child Business Components. This forms a multidimensional technical model centered on the Opportunity. The technical implementation is then straightforward, as shown in the figure below:



As for the Link, it is implemented through a primary key / foreign key relationship.



Wang Zeyi: Senior training lecturer at Oracle University. Wang has long worked in customer relationship management (CRM), taking part in CRM pre-sales, research, business analysis and technical implementation across industries such as financial services, high tech, telecommunications and automotive. He mainly teaches training courses on Oracle Siebel CRM and Oracle Fusion CRM.

Monday Nov 18, 2013

Using the Container Database in Oracle Database 12c by Christopher Andrews


The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

Bundling Databases Inside One Container

In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator requires a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of pluggable databases, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.
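As a rough sketch of what that bundling looks like in SQL (the PDB name, admin user and file paths below are invented for illustration), a new pluggable database can be created inside the container and listed alongside its siblings:

-- Connected to the root container (CDB$ROOT) as a suitably privileged user.
CREATE PLUGGABLE DATABASE sales_pdb
  ADMIN USER sales_adm IDENTIFIED BY Welcome1
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                       '/u01/app/oracle/oradata/cdb1/sales_pdb/');

ALTER PLUGGABLE DATABASE sales_pdb OPEN;

-- All the databases bundled inside this one container:
SELECT name, open_mode FROM v$pdbs;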

Minimizing Storage

Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let’s take one or many databases and “plug” them into ONE designated container database. We can eliminate many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We won’t need as many background processes either, thus reducing the overhead cost of CPU resources.

Improve Security Levels within Each Pluggable Database 

We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.
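A hedged sketch of what that role separation can look like: a common user created in the root is known in every container, while a local user exists only inside its own pluggable database (the names are again invented):

-- In the root container: a common user (note the mandatory C## prefix).
CREATE USER c##cdb_admin IDENTIFIED BY Welcome1 CONTAINER = ALL;
GRANT CREATE SESSION TO c##cdb_admin CONTAINER = ALL;

-- Inside one pluggable database: a local user, unknown to the other PDBs.
ALTER SESSION SET CONTAINER = sales_pdb;
CREATE USER sales_app IDENTIFIED BY Welcome1;
GRANT CREATE SESSION, CREATE TABLE TO sales_app;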

 The bottom line: it's a good idea to at least consider using a CDB.


About the author:

Chris Andrews is a Canadian instructor with Oracle University who teaches the Server Technologies courses for both 11g and 12c. He has been with Oracle University since 1997 and started training customers back with Oracle 7. Now a Senior Principal Instructor with OU, Chris had the opportunity to work as a DBA for about 10 years before joining Oracle. His background and experience as a DBA, Database Analyst and developer are occasionally shared with customers in the classroom. His skill set includes the Database Administration workshops, Backup & Recovery, Performance Tuning, RAC, Data Guard, ASM, Clusterware and also the Exadata and Database Machine administration workshop. When not teaching, Chris enjoys aviation and flying gliders, underwater photography and tennis.

Monday Nov 04, 2013

Some OBI EE Tricks and Tips in the Admin Tool
By Gerry Langton

How to set the log level from a Session variable Initialization block

As we know, it is normal to set the log level to a non-zero value for a particular user when we wish to debug problems. However, sometimes it is inconvenient to go into each user’s properties in the Admin Tool and update the log level. So here is a method which allows the log level to be set for all users via a session initialization block.

The screen shots shown are using the OBIEE 11g SampleApp demo but are applicable to any environment.

Open the appropriate rpd in on-line mode and navigate to Manage > Variables.

Select Session > Initialization Blocks, right click in the white space and create a New Initialization Block.

I called the Initialization block Set_Loglevel.
Now click on ‘Edit Data Source’ to enter the SQL.

Choose the ‘Use OBI EE Server’ option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in on-line mode.

The SQL can access any of the RPD tables and is purely used to return a value of 2. The ‘Test’ button confirms that the SQL is valid.
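For reference, the initialization block SQL can be as simple as the sketch below; it only needs to return the literal 2. The subject area name is taken from the SampleApp demo and is only an assumption, so substitute whatever exists in your own repository:

SELECT 2 FROM "A - Sample Sales"
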
Next, click on the ‘Edit Data Target’ button to add the LOGLEVEL variable to the initialization block.

Check the ‘Enable any user to set the value’ option so that this will work for any user.
Click OK and the following message will display as LOGLEVEL is a system session variable:

Click ‘Yes’.
Click ‘OK’ to save the Initialization block. Then check in the on-line changes.
To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload the server metadata, either from the Analysis editor or from the Administration > Reload Files and Metadata link. Run a query, then navigate to Administration > Manage Sessions and click ‘View Log’ for the query just issued (which should be approximately the last in the list). A log file should exist and, with LOGLEVEL set to 2, should include both the logical and physical SQL. If more diagnostic information is required, set LOGLEVEL to a higher value.

If logging is required only for a particular analysis then an alternative method can be used directly from the Analysis editor.
Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL clauses section and enter the following in the Prefix box:
SET VARIABLE LOGLEVEL = 2;
Click the ‘Apply SQL’ button.

The SET VARIABLE statement will now prefix the analysis’s logical SQL, so that any time this analysis is run it will produce a log.

You can find information about training for Oracle BI EE products here or in the Oracle University Learning Paths.
Please send me an email at gerry.langton@oracle.com if you have any further questions.

About the Author:


Gerry Langton started at Siebel Systems in 1999 working as a technical instructor teaching both Siebel application development and also Siebel Analytics (which subsequently became Oracle BI EE). From 2006 Gerry has worked as Senior Principal Instructor within Oracle University specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.

Friday Nov 01, 2013

Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data…), an English setter could be seen as the link to the right data.


Data, data, data: we are living in a world where the data technologies behind popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on play a predominant role in our lives.

More and more technologies are used to analyze and track our behavior, to detect patterns, and to propose "the best/right user experience", from Google Ad services to telco companies and large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat, analyze and understand the trends and offer new services to the users.

Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to the MapReduce parallel patterns and to the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers.

Several Oracle products use the distribute/aggregate pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar.

Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged on a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters.

In this paper, a "lab-like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS.

The whole setup is fairly simple:

  1. Install on a Linux x64 server ( or virtual box appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server
  2. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
  3. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
  4. Check the java version of your Linux server with the command:
    java -version
     java version "1.7.0_40"
    Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
    Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    
  5. Decompress the hadoop hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1
  6. Modify your .bash_profile
    export HADOOP_HOME=/u01/hadoop-1.2.1
    export PATH=$PATH:$HADOOP_HOME/bin
    
    export HIVE_HOME=/u01/hive-0.11.0
    export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
    
    (also see my sample .bash_profile)
  7. Set up ssh trust for the Hadoop processes. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration
  8. copy the new public keys to the list of authorized keys
  9. connect and test the ssh setup to your localhost:

  10. We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes. We need to "fine tune" some Hadoop configuration files, so we go to $HADOOP_HOME/conf and modify the following files:
    core-site.xml
                

    hdfs-site.xml
    

    mapred-site.xml
    

  11. check that the hadoop binaries are referenced correctly from the command line by executing:
    
    hadoop version
    
  12. As Hadoop is managing our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point will be declared in core-site.xml as:

    The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses /mapred/...; HDFS uses the /dfs/... layout structure)

  13. Format the HDFS Hadoop file system:
  14. Start the Java components for the HDFS system
  15. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations:

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step).

    You can create and use a Hive DB, but in our case we will make a simple integration of "raw data", through the creation of an external table in a local Oracle instance (on the same Linux box, we run the single-node Hadoop HDFS cluster and one Oracle DB).

  16. Download some public "big data". I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :).

    Here is the data layout of my example file:

    Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below)

  17. Modify your environment in order to access the connector libraries, and make the following test:

    [oracle@dg1 bin]$./hdfs_stream
    Usage: hdfs_stream locationFile
    [oracle@dg1 bin]$
    
  18. Load the data to the Hadoop hdfs file system:
    hadoop fs  -mkdir bgtest_data
    hadoop  fs  -put obsFrance.txt bgtest_data/obsFrance.txt
    hadoop fs  -ls  /user/oracle/bgtest_data/obsFrance.txt       
    [oracle@dg1 bg-data-raw]$ hadoop fs -ls  /user/oracle/bgtest_data/obsFrance.txt
    
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    
    [oracle@dg1 bg-data-raw]$hadoop fs -ls  hdfs:///user/oracle/bgtest_data/obsFrance.txt
    
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    
  19. Check the content of the HDFS with the browser UI:
  20. Start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours):
    #!/bin/bash
    export ORAENV_ASK=NO
    export ORACLE_SID=dg1
    . oraenv
    sqlplus /nolog <<EOF
    CONNECT / AS sysdba;
    CREATE OR REPLACE DIRECTORY osch_bin_path  AS  '/u01/orahdfs-2.2.0/bin';
    CREATE USER BGUSER IDENTIFIED BY oracle;
    GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
    GRANT EXECUTE ON sys.utl_file TO BGUSER;
    GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
    GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
    GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
    EOF
    
  21. Put the following in a file named t3.sh and make it executable,
    hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
    oracle.hadoop.exttab.ExternalTable \
    -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
    -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
    -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
    -D oracle.hadoop.exttab.columnCount=7 \
    -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
    -D oracle.hadoop.connection.user=BGUSER \
    -D oracle.hadoop.exttab.printStackTrace=true \
    -createTable  --noexecute
    

    then test the creation of the external table with it:

    [oracle@dg1 samples]$ ./t3.sh
    
    ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
    Oracle SQL Connector for HDFS Release 2.2.0 - Production
    Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
    Enter Database Password:]
    The create table command was not executed.
    The following table would be created.
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
     "C1"                             VARCHAR2(4000),
     "C2"                             VARCHAR2(4000),
     "C3"                             VARCHAR2(4000),
     "C4"                             VARCHAR2(4000),
     "C5"                             VARCHAR2(4000),
     "C6"                             VARCHAR2(4000),
     "C7"                             VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY "BGT_DATA_DIR"
       ACCESS PARAMETERS
       (
         RECORDS DELIMITED BY 0X'0A'
         CHARACTERSET AL32UTF8
         STRING SIZES ARE IN CHARACTERS
         PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
         FIELDS TERMINATED BY 0X'2C'
         MISSING FIELD VALUES ARE NULL
         (
           "C1" CHAR(4000),
           "C2" CHAR(4000),
           "C3" CHAR(4000),
           "C4" CHAR(4000),
           "C5" CHAR(4000),
           "C6" CHAR(4000),
           "C7" CHAR(4000)
         )
       )
       LOCATION
       (
         'osch-20131022081035-74-1'
       )
    ) PARALLEL REJECT LIMIT UNLIMITED;
    The following location files would be created.
    osch-20131022081035-74-1 contains 1 URI, 54103 bytes
           54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    
  22. Then remove the --noexecute flag and create the external Oracle table for the Hadoop data.

    Check the results:

    
    The create table command succeeded.
    
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
     "C1"                             VARCHAR2(4000),
     "C2"                             VARCHAR2(4000),
     "C3"                             VARCHAR2(4000),
     "C4"                             VARCHAR2(4000),
     "C5"                             VARCHAR2(4000),
     "C6"                             VARCHAR2(4000),
     "C7"                             VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    ( 
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY "BGT_DATA_DIR"
       ACCESS PARAMETERS
       (
         RECORDS DELIMITED BY 0X'0A'
         CHARACTERSET AL32UTF8
         STRING SIZES ARE IN CHARACTERS
         PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
         FIELDS TERMINATED BY 0X'2C'
         MISSING FIELD VALUES ARE NULL
         (
           "C1" CHAR(4000),
           "C2" CHAR(4000),
           "C3" CHAR(4000),
           "C4" CHAR(4000),
           "C5" CHAR(4000),
           "C6" CHAR(4000),
           "C7" CHAR(4000)
         )
       )
       LOCATION
       (
         'osch-20131022081719-3239-1'
       )
    ) PARALLEL REJECT LIMIT UNLIMITED;
    
    The following location files were created.
    
    osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
    
           54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    

    This is the view from the SQL Developer:

    and finally the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

    SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";
                      
    COUNT(*)
    ----------
          1151
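    If you would rather query the data from local database storage than re-read HDFS on every access, one option (just a sketch, reusing the user and table names from above) is a simple CTAS from the external table:

    -- Materialize the HDFS-backed external table into a regular heap table
    -- (assumes BGUSER has quota on its default tablespace).
    CREATE TABLE bguser.obs_france_local AS
      SELECT * FROM bguser.bgtest_dp_xtab;

    -- Subsequent queries then run against local storage instead of streaming from HDFS.
    SELECT COUNT(*) FROM bguser.obs_france_local;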
    
    

In a future post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured data world can be integrated with the Oracle infrastructure.

Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

Oracle University presents a complete curriculum on all the Oracle related technologies:

NoSQL:

Big Data:

Oracle Data Integrator:

Oracle Coherence 12c:

Other Resources:

  • Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies.
  • "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.

About the author:

Eugene Simos is based in France and joined Oracle through the BEA-Weblogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, which gave him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter Content, BPM, SOA, Identity & Security, GoldenGate, Virtualisation and Unified Communications Suite throughout the EMEA region.

Friday Oct 18, 2013

Oracle Database 12c New Partition Maintenance Features by Gwen Lazenby

One of my favourite new features in Oracle Database 12c is the ability to perform partition maintenance operations on multiple partitions. This means we can now add, drop, truncate and merge multiple partitions in one operation, and can split a single partition into more than two partitions, also in just one command. This would certainly have made my life slightly easier had it been available when I administered a data warehouse on Oracle 9i.

To demonstrate this new functionality and syntax, I am going to create two tables, ORDERS and ORDER_ITEMS, which have a parent-child relationship. ORDERS is to be partitioned using range partitioning on the ORDER_DATE column, and ORDER_ITEMS is going to be partitioned using reference partitioning and its foreign key relationship with the ORDERS table. This form of partitioning was a new feature in 11g and means that any partition maintenance operations performed on the ORDERS table will also take place on the ORDER_ITEMS table as well.

First create the ORDERS table -

SQL> CREATE TABLE orders
      ( order_id NUMBER(12),
        order_date TIMESTAMP,
        order_mode VARCHAR2(8),
        customer_id NUMBER(6),
        order_status NUMBER(2),
        order_total NUMBER(8,2),
        sales_rep_id NUMBER(6),
        promotion_id NUMBER(6),
       CONSTRAINT orders_pk PRIMARY KEY(order_id)
     )
    PARTITION BY RANGE(order_date)
   (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
    PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
    PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
    PARTITION Q4_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
    );

Table created.

Now the ORDER_ITEMS table

SQL> CREATE TABLE order_items
     ( order_id NUMBER(12) NOT NULL,
       line_item_id NUMBER(3) NOT NULL,
       product_id NUMBER(6) NOT NULL,
       unit_price NUMBER(8,2),
       quantity NUMBER(8),
       CONSTRAINT order_items_fk
       FOREIGN KEY(order_id) REFERENCES orders(order_id) on delete cascade)  
       PARTITION BY REFERENCE(order_items_fk) tablespace example;

Table created.

Now look at DBA_TAB_PARTITIONS to get details of what partitions we have in the two tables –

SQL>  select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4

Just as an aside, it is also now possible in 12c to use interval partitioning on reference partitioned tables. In 11g it was not possible to combine these two partitioning features.
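As a quick illustration of that combination (a sketch only, using simplified variants of the tables above), the parent table can now be interval partitioned while the child still uses reference partitioning:

-- Parent: range partitioned with a quarterly interval, so new partitions are
-- created automatically as data arrives. Interval partitioning requires a
-- NUMBER or DATE partitioning key, hence DATE rather than TIMESTAMP here.
CREATE TABLE orders_i (
  order_id   NUMBER(12) CONSTRAINT orders_i_pk PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
(PARTITION p_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')));

-- Child: reference partitioned on the foreign key, which 12c now allows even
-- though the parent is interval partitioned.
CREATE TABLE order_items_i (
  order_id     NUMBER(12) NOT NULL,
  line_item_id NUMBER(3)  NOT NULL,
  CONSTRAINT order_items_i_fk FOREIGN KEY (order_id)
    REFERENCES orders_i (order_id) ON DELETE CASCADE
)
PARTITION BY REFERENCE (order_items_i_fk);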

For our first example of the new 12c functionality, let us add all the partitions necessary for 2008 to the tables using one command. Notice that the partition specification part of the ADD command is identical in format to the partition specification part of the CREATE command shown above -

SQL> alter table orders add
PARTITION Q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY')),
PARTITION Q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY')),
PARTITION Q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY')),
PARTITION Q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'));

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 4 new partitions have been added to both tables –

SQL> select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q1_2008                5 TIMESTAMP' 2008-04-01 00:00:00'
ORDER_ITEMS    Q1_2008                5
ORDERS         Q2_2008                6 TIMESTAMP' 2008-07-01 00:00:00'
ORDER_ITEMS    Q2_2008                6
ORDERS         Q3_2008                7 TIMESTAMP' 2008-10-01 00:00:00'
ORDER_ITEMS    Q3_2008                7
ORDERS         Q4_2008                8 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                8

Next, we can drop or truncate multiple partitions by giving a comma-separated list in the ALTER TABLE command. Note the use of the plural ‘partitions’ in the command, as opposed to the singular ‘partition’ used prior to 12c.

SQL> alter table orders drop partitions Q3_2008,Q2_2008,Q1_2008;

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 3 partitions have been dropped from both tables –

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Now let us merge all the 2007 partitions together to form one single partition –

SQL> alter table orders merge partitions
   Q1_2007, Q2_2007, Q3_2007, Q4_2007
   into partition Y_2007;

Table altered.

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Y_2007                 1 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Y_2007                 1
ORDERS         Q4_2008                2 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                2

Splitting partitions is slightly more involved. In the case of range partitioning, one of the new partitions must have no high value defined, and in list partitioning one of the new partitions must have no list of values defined. I call these the ‘everything else’ partitions; they will contain any rows from the original partition that are not contained in any of the other new partitions.

For example, let us split the Y_2007 partition back into 4 quarterly partitions –

SQL> alter table orders split partition Y_2007 into 
(PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
PARTITION Q4_2007);

Now look at DBA_TAB_PARTITIONS to get details of the new partitions –


TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Partition Q4_2007 has a high value equal to the high value of the original Y_2007 partition, and so has inherited its upper boundary from the partition that was split.

As for a list partitioning example, let’s look at another table, SALES_PAR_LIST, which has 2 partitions, Americas and Europe, and a partitioning key of country_name.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------  ---------------  -----------------------------
SALES_PAR_LIST  AMERICAS         'Argentina', 'Canada', 'Peru',
                                 'USA', 'Honduras', 'Brazil', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

Now split the Americas partition into 3 partitions –

SQL> alter table sales_par_list split partition americas into
   (partition south_america values ('Argentina','Peru','Brazil'),
   partition north_america values('Canada','USA'),
   partition central_america);

Table altered.

Note that no list of values was given for the ‘Central America’ partition. However it should have inherited any values in the original ‘Americas’ partition that were not assigned to either the ‘North America’ or ‘South America’ partitions. We can confirm this by looking at the DBA_TAB_PARTITIONS view.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------- ---------------  --------------------------------
SALES_PAR_LIST  SOUTH_AMERICA    'Argentina', 'Peru', 'Brazil'
SALES_PAR_LIST  NORTH_AMERICA    'Canada', 'USA'
SALES_PAR_LIST  CENTRAL_AMERICA  'Honduras', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

In conclusion, I hope that DBAs whose work involves maintaining partitions will find these operations a bit more straightforward to carry out once they have upgraded to Oracle Database 12c.

Gwen Lazenby

Gwen Lazenby is a Principal Training Consultant at Oracle.

She is part of Oracle University's Core Technology delivery team based in the UK, teaching Database Administration and Linux courses. Her specialist topics include using Oracle Partitioning and Parallelism in Data Warehouse environments, as well as Oracle Spatial and RMAN.

Wednesday Oct 09, 2013

NEW ORACLE WEBCENTER CONTENT USER INTERFACE ON VERSION 11.1.1.8
By Roberto Nogueras

Oracle purchased Stellent in November 2006. Soon the Stellent Content Server product became Oracle Content Server, then Oracle UCM, and finally Oracle WebCenter Content. As you can see, the product name has changed 3 times in the past 7 years. However, the user interface hasn’t changed that much. Oracle rebranded it in the 10gR3 release and has given the software only minor updates ever since. The interface is functional, but perhaps too complex for some end users, and it lacks the look and feel of modern web applications.

In Spring 2013, it became known that Release 11.1.1.8 was going to feature a new user interface. Some time in September, I decided to download and install it. I connected to the home page URL, logged in and the good old UI came up:

I know, I know, I should read the documentation before installing. After doing so, I found out a few interesting things:

  • The new UI is not a replacement for the old one. It just contains the features more useful for end users.
  • The new UI is an ADF application that you have to install in a separate WLS domain, and there’s no problem in running it in a separate machine.
  • The new UI communicates with the content server by using the RIDC protocol (There’s a dedicated port open for that protocol on the content server side)

The setup is fully explained in the documentation, so we’re not going to get into the details. I’d rather perform a functionality analysis of the new UI. As mentioned before, it’s end user-oriented.

First of all, let’s login. After logging in, this is the main screen:

It’s quite obvious that it has just a few options and the main screen displays, by default, a blank search result. Let’s click the “Upload Button” to check in new content:

Nice. A pop-up window opens and we can either browse the file system or drag-and-drop some content from a file explorer window. When we add a file, the “Metadata” section expands automatically and we can enter data:

We’re redirected to the main screen, and after waiting a few seconds for the search index to be updated, we can click the “refresh” button and then the new document appears in the search:

If we click it, a new tab/window opens displaying this preview screen:

This one is a beautiful design, in my opinion. On the left side we see the document content and, more importantly, the tabs to navigate between revisions. On the right side, we can see the metadata values, the menu to check out the content, and some other options such as “Favorite”, “Follow”, and “File Document”, which will be discussed a bit later. For now, let’s check out some content and create a new revision. Please note that a new tab is created:

You can “Follow” pieces of content, which is similar to the old “Subscribe” option, that is, the user is notified every time a new revision is generated for that content item. You can also mark content as Favorite, which is something new, and finally, you can arrange content into “Libraries”, which are just an evolution of folders. That’s the use of the “File Document” option: to put content into libraries:

There’s little else to say about the interface, as we’ve discussed all of its functionality. Now I hope you have the information you need to decide whether or not to use it. The benefits for end users are obvious, and the cost is also obvious: an extra WLS domain and more memory consumption on the server side.


About the Author:


Roberto Nogueras is based in Madrid, Spain and has been an Oracle University instructor since 1995. He specializes in the Oracle database, application server and middleware technologies. Roberto has been working with Oracle WebCenter Content since the Stellent acquisition.

Wednesday Sep 25, 2013

Siebel Pricing: Adding a New Variable to a Variable Map
By Tim Bull

Remember: Variable Maps are used by the PSP driver workflows to build the row sets that are passed to the pricing procedure workflows.

Variable Maps specify the structure of the row set by defining the variables that appear in the row set and the source of the data.

 

The Variable Source is defined by the Source Type and the Path

The Source Type of Instance refers to business components in the active business object. The Path does not refer directly to the business component and its field. Rather, it is defined in terms of an integration object that maps to the business object/component/fields.

 

To add a new Variable to a Variable Map

1. Lock the Variable Map

Administration - Order Management > Variable Maps

 

2. Create the Variable Definition


Determines the name to be used in pricing procedure business services

 

3. Create the Variable Source

Identify the Integration Component Field that references the new Business Field


4. Release the Variable Map

More Information:
To learn more about variable maps I can recommend the 3-day course: Siebel 8.0 Customer Order Management: Pricing Configuration.
You can find more information about Oracle University's offerings for Siebel on the Siebel Training web page.
You can also send me an eMail: tim.bull@oracle.com

About the Author:


Tim Bull joined Siebel Systems in 1999 and transferred to Oracle in 2006. He is a Principal Education Consultant at Oracle University and is the senior UK instructor for Siebel software. Using his detailed product knowledge, Tim delivers both in-class and Live Virtual Class courses across the whole Siebel curriculum.

Monday Jul 08, 2013

Some OBI EE Tricks and Tips in the Analysis Editor by Gerry Langton

The following examples use the SUPPLIER2 database schema which is used in the Oracle University OBI EE training courses.

How to handle NULLs in a Pivot Table column

Sometimes you want to be able to display a value in a pivot table column even though there is no underlying value for that cell.

In this example we see that only one of our Sales Reps has actually sold any Frozen goods. As a consequence, the Frozen column is blank for all the other reps. Putting a CASE statement in the Dollars column formula, e.g. CASE WHEN "Fact-Sales"."Dollars" IS NULL THEN 0 ELSE "Fact-Sales"."Dollars" END, looks like a good bet. But unfortunately it doesn’t work, as we don’t have a null value in the Dollars column; we just don’t have a row for this particular combination of Sales Rep and Product Type.

However we can use a little known feature of the column properties Data tab: you can enter a custom format that uses formatting commands similar to those used in Excel. The syntax used in this case would be:

positive-value-mask;negative-value-mask;null-mask

For example #,###;-#,###;0

or #,###;-#,###;’No value’

So with that Custom Numeric format in place our Pivot Table now looks like this:

How to convert a string to proper case

Looking at the pivot table in the previous example we find the Sales Rep column in the Customers table is in upper case but we would like to see it reported with the first letter of each word in capitals and the remainder of the word in lower case. We review available functions in the Analysis Editor expression builder and don’t find anything suitable; however we know that Oracle provides the INITCAP database function which does exactly what we require.

The Analysis editor provides an EVALUATE expression which allows you to use database functions in the formula of a column.

The syntax for EVALUATE is:

EVALUATE('db_function(%1...%N)' [AS data_type] [, column1, columnN])

db_function is any valid database function understood by the underlying database.
data_type is an optional parameter that specifies the data type of the return result.
column1 through columnN is an optional, comma-delimited list of columns. Where column1 replaces %1 in the database function, column2 replaces %2 etc.

So we would type the following in the Sales Rep column formula:

EVALUATE('INITCAP(%1)', Customers."Sales Rep")

Use of the EVALUATE function is disabled by default so you need to enable the feature in the NQSCONFIG.INI file and restart the OBI Server (via Enterprise Manager).

In the Server Section of NQSCONFIG.INI set the parameter EVALUATE_SUPPORT_LEVEL to either 1 or 2 as in the extract from the file below:

# EVALUATE_SUPPORT_LEVEL:
# 1: evaluate is supported for users with manageRepositories permission
# 2: evaluate is supported for any user.
# other: evaluate is not supported if the value is anything else.

EVALUATE_SUPPORT_LEVEL = 1;

How to do Time Series calculations in the Analysis Editor

Whilst time series calculations can be created in the OBI Repository logical layer using the Ago, To Date and Rolling Period functions, it is also possible to create new calculations ‘on the fly’ in the Analysis editor. When you look at the available functions for a column formula you will see ‘Ago’, ‘To Date’, ‘Rolling Period’ etc. under the ‘Time Series’ heading. However, the use of these functions is not particularly clear from the online Help.

Let’s assume that the repository has been set up with a Time dimension which is called Dim-Time and a Time hierarchy called Time, and the Time hierarchy has levels ‘Year’, ‘Quarter’, ‘Month’ and ‘Day’.

According to the online Help, to use the To Date function we use the syntax:
TODATE(expr, time_level). But what does time_level actually mean? Well, we express the time level using the syntax: ‘time dimension’.’hierarchy’.’level’.

Assuming we wish to have year to date totals this would equate to “Dim-Time”.”Time”.”Year”.

So the formula for Year To Date Dollars would be:

TODATE(“Fact Sales”.”Dollars”, “Dim-Time”.”Time”.”Year”)

For Month Ago Dollars the formula would be: AGO(“Fact Sales”.”Dollars”, “Dim-Time”.”Time”.”Month”, 1), where 1 is the period offset, i.e. current month -1.
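Putting these together, the logical SQL an analysis would issue behind the scenes might look roughly like the sketch below; "Sales" stands in for whatever your subject area is called, and the column names follow the hypothetical model above:

SELECT
   "Dim-Time"."Month",
   "Fact Sales"."Dollars",
   TODATE("Fact Sales"."Dollars", "Dim-Time"."Time"."Year"),
   AGO("Fact Sales"."Dollars", "Dim-Time"."Time"."Month", 1)
FROM "Sales"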

How to put an HTML link in the column formula

I was interested to discover that the Customer names used in our Oracle University OBI EE course database are in fact genuine American restaurants. I discovered this when I was playing around with the requirement to place HTML links into an Analysis, in this case to pass a column value into a Google search.

We can type the following into the formula of the Customer column:

'<a target="_blank" href="http://www.google.com/search?hl=en&as_epq=' || Customer.Customer || '&btnG=Google+Search">' || Customer.Customer || '</a>'

The first instance of the customer name (Customer.Customer) is passed to Google as the search string. The second instance is used to display in the Analysis.

The Google search string parameters used in this example are hl=en which sets interface host language to ‘en’, i.e. English and as_epq which specifies the whole phrase must be found, which is important in situations like this where there may be spaces in the column value.

Also check Custom Headings and put in a new heading for the column (otherwise it will display the HTML!).


Also check ‘Override Default Data Format’ and select HTML from the drop down in the column properties.

The Table view should look something like this and each customer name will be a link to Google Search, passing the customer name as a parameter.

So when you click on Alley-Cats for example you will find the restaurant in Arlington Texas.

One obvious problem you will encounter is that some of the Customer names include an ampersand (&) character, and this is a terminator in the Google search string. So when you click on ‘2nd & Goal Sports Cafe’, only ‘2nd’ is passed to Google. To get round this you would have to do a string replace, replacing the ampersand with ‘and’.

So the formula for the column would become:

'<a target="_blank" href="http://www.google.com/search?hl=en&as_epq=' || REPLACE("Customer"."Customer",'&', 'and') || '&btnG=Google+Search">' || Customer.Customer || '</a>'

You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at gerry.langton@oracle.com if you have any further questions.

About the Author:


Gerry Langton started at Siebel Systems in 1999 working as a technical instructor teaching both Siebel application development and also Siebel Analytics (which subsequently became Oracle BI EE). From 2006 Gerry has worked as Senior Principal Instructor within Oracle University specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.

Tuesday Jul 02, 2013

Oracle Database 12c: Oracle Multitenant Option by Gerlinde Frenzen

1. Why ?
2. What is it ?
3. How ?

1. Why ?

The main idea of the 'grid' is to share resources, to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: 'what will happen if one of the applications must be restored because of a human error?' Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. Therefore, there is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

2. What ?

Pluggable databases are logical units inside a multitenant container database, which can hold up to 252 pluggable databases. The SGA is shared, as are the background processes.

The multitenant container database holds the metadata information common to all pluggable databases inside its System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the PDB level (DBA_ views) or at the container level (CDB_ views).

There are local users, which are known only in specific pluggable databases, and common users, which are known in all containers.
Pluggable databases can easily be plugged into another multitenant container database or converted from a non-CDB. They can undergo point-in-time recovery.

3. How ?

Creating a multitenant container database can be done using the Database Configuration Assistant: there you find the new option 'Create as Container Database'. If you prefer 'hand made' databases, you can execute the command from an instance in NOMOUNT state:

CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE ….

And of course this can also be achieved through Enterprise Manager Cloud.

A freshly created multitenant container database consists of two containers: the root container as the 'rack' and a seed container, a template for future pluggable databases.

There are 4 ways to create other pluggable databases:

1. Create an empty pdb from seed

2. Plug in a non-CDB

3. Move a pdb from another CDB

4. Copy a pdb from another pdb


We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available:

1. Create an empty pdb and use Data Pump in traditional export/import mode or with
Transportable Tablespace or Database mode. This method is suitable for pre-12c databases.

2. Create an empty pdb and use GoldenGate replication.
When the pdb catches up with the non-CDB, you fail over to the pdb.

3. Databases of version 12c or higher can be plugged in with the help of the new DBMS_PDB package.

This is a demonstration for method 3:

Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database.
The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.
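The describe call itself is a one-liner; a sketch, with an invented file path, could look like this (the non-CDB has to be open read-only at this point):

-- Run while connected to the non-CDB that is about to be plugged in.
BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/noncdb_orcl.xml');
END;
/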

Step 2: Check if the non-CDB is pluggable in the multitenant container database:
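A sketch of that compatibility check, run in the root of the target CDB with the XML file from step 1:

SET SERVEROUTPUT ON
DECLARE
  is_compatible BOOLEAN;
BEGIN
  is_compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                     pdb_descr_file => '/u01/app/oracle/noncdb_orcl.xml',
                     pdb_name       => 'PDB_ORCL');
  -- Any reported violations can be reviewed in PDB_PLUG_IN_VIOLATIONS.
  DBMS_OUTPUT.PUT_LINE(CASE WHEN is_compatible THEN 'Compatible' ELSE 'Check PDB_PLUG_IN_VIOLATIONS' END);
END;
/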

Step 3: Create the pluggable database, while connected to the multitenant container database.

With the NOCOPY option the files will be reused, but the tempfile is created anew:
A service is created and registered automatically with the listener:
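The create statement for this step, reusing the XML file from step 1 (file path still invented), could look like:

-- NOCOPY keeps the original datafiles in place; TEMPFILE REUSE recreates the temp file.
CREATE PLUGGABLE DATABASE pdb_orcl
  USING '/u01/app/oracle/noncdb_orcl.xml'
  NOCOPY
  TEMPFILE REUSE;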

Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace:
To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb.
Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all retained objects have to be recompiled, this will take a few minutes.
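The clean-up is done with a script shipped in the Oracle home; roughly:

-- Switch into the freshly plugged-in PDB and run the dictionary clean-up script.
ALTER SESSION SET CONTAINER = pdb_orcl;
@?/rdbms/admin/noncdb_to_pdb.sql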

Step 5: The plugged-in database will be synchronised automatically, by creating common users and roles, when it is opened for the first time in read-write mode.

Step 6: Verify tablespaces and users:

There is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl.
This method of plugging in a non-CDB is fast and easy for 12c databases.

The method for unplugging a pluggable database from a CDB and turning it back into a non-CDB is to create a new non-CDB, use the new full transportable feature of Data Pump, and then drop the pluggable database.

About the Author:


Gerlinde Frenzen has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge of Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.

Thursday Jun 20, 2013

Creating an ACFS Replication by Bruno d’Agostini

ACFS Replication, a new feature in 11.2.0.2, enables replication of Oracle ACFS file systems across the network to a remote site, providing disaster recovery capability for the file system.

For this demonstration, I used the practice environment provided for the course Oracle Grid Infrastructure 11g: Manage Clusterware and ASM.

Because the disk space is limited and 4 GB is needed per node, I used only a two-node cluster, named host01 and host02.

1. Before you start, be sure your cluster is correctly configured and works fine

2. Using ASMCA, create the disk groups with the required attribute.

a. Create PRIM Diskgroup with external redundancy and use 5 ASM disks ORCL:ASMDISK05 to ORCL:ASMDISK09. Set ASM and ADVM Compatibility to 11.2.0.2.0

 

b. Create SEC Diskgroup with external redundancy and use 5 ASM disks ORCL:ASMDISK10 to ORCL:ASMDISK14. Set ASM and ADVM Compatibility to 11.2.0.2.0

 

3. Using ASMCA, create the volumes needed.

a. Create Volume PRIM on PRIM Diskgroup and use 11G Bytes for the volume (Remember 4 GB per node)

b. Create volume SEC on SEC Diskgroup and use 11G Bytes for the volume (Remember 4 GB per node)

 

4. Using Linux, create the required directories

a. On Host01, create the directory /prim


b. On host02, create the directory /sec

 

5. Using ASMCA, create the ASM Cluster File Systems needed:

a. Create an ASM Cluster File System in PRIM Volume and specify /prim as Mount Point

b. Create an ASM Cluster File System in SEC Volume and specify /sec as Mount Point

6. Using NETCA, create TNS Alias PRIM for PRIM Service on host01 and create TNS Alias SEC for SEC Service on host02. (do not specify Cluster-scan)

7. Verify the result in TNSNAMES.ora file

8. Using Linux, create a password file on host01 with oracle as password

9. Using Linux, create a password file on host02 with oracle as password

10. On host01, using SQL*Plus, create a user named oracle with the password oracle and grant the necessary privileges, as sketched below. The password file will be updated on both sites, host01 and host02.
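A sketch of what that SQL*Plus step can look like; treat the exact privileges as an assumption and check them against the ACFS replication documentation for your release:

-- Connected to the ASM instance on host01 as SYSASM.
CREATE USER oracle IDENTIFIED BY oracle;
GRANT sysasm, sysdba TO oracle;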

11. On Host01, connected on instance +ASM1, add a service PRIM

12. Verify the listener on host01 is listening for service PRIM on instance +ASM1

13. Test the service name PRIM

14. On Host02, connected on instance +ASM2, add a service SEC

15. Verify the listener on host02 is listening for service SEC on instance +ASM2

16. Test the service name SEC

17. Using df on host01, check that the file system /prim is mounted, and dismount the unnecessary mounted file system (/sec)

18. Using df on host02, check that the file system /sec is mounted, and dismount the unnecessary mounted file system (/prim)

19. Connected to host02, using acfsutil, initialize ACFS replication on the standby site first

20. Only when the replication has been successfully initialized on the standby site, connect to host01 and initialize ACFS replication on the primary site using acfsutil.

21. Check ACFS Replication on primary site

22. Check ACFS Replication on standby site

23. On host01, create a file (named toto)

24. On host02, after a while, check if the file has been replicated

25. On host01, pause the replication and create a second file (named titi)

26. On host02, the new file is normally not replicated because replication is paused on host01

27. After a while, resume replication on host01

28. After a few seconds, on host02, the new file is normally replicated

29. On host02, pause the replication

30. On host01, create a new file (named new)

31. On host02, the file is not created because the replication on standby site is paused

32. On host02, resume the replication and test if the new file is created

33. On host02 standby site, try to remove file

34. On host01 primary site, terminate and verify the replication status

35. On host02 standby site, terminate and verify the replication status

About the Author:


Bruno d’Agostini joined Oracle in 1994 and currently is a French Instructor for Oracle University specializing in the Oracle Database, Data Guard, Data Warehouse, Streams, GoldenGate, Parallelism, Partitioning, Clusterware, ASM, RAC and Exadata Database Machine.

Tuesday May 07, 2013

Create a high availability 12C weblogic clustering environment with Traffic Director 11gR1 by Eugene Simos

Oracle Traffic Director is one of the latest load balancing products released by Oracle.
In fact it is a fast, reliable, scalable and very easily manageable solution for HTTP, HTTPS and TCP traffic to backend application and HTTP servers.

I have used the latest versions of both Traffic Director and WebLogic 12c to set up a simple cluster session replication scenario on my Linux 64-bit VirtualBox sandbox.

For this scenario, Traffic Director is configured as a front end to a backend WebLogic 12c cluster.

A specific feature, "dynamic discovery", lets Traffic Director discover on the fly new clustered WebLogic nodes associated with its initial configuration, so I will be able to join another WLS instance (dotted line) to my initial 3-node WLS cluster, with full HTTP session replication capabilities.

In order to test the session replication, I used a sample application delivered as such with the WLS 12c installation (I will detail the use of this application in a later post :) ), which I called Session, and I will test the failover features of WLS 12c with Traffic Director, with this Session application deployed on my cluster!

The binary distribution, of the Traffic Director can be download from here:

Once the distribution is downloaded to my Linux box, I started the normal FMW installation workflow:

Pass the normal checks on the Linux system components, and choose an empty Oracle Traffic Director home for the software installation:

I saved the response file (I might use the silent installer next time!)

Then, after saving the installation details, I started to configure my Traffic Director instances as follows:
1) Create one admin instance (I used just the default settings) and a "default user" admin/welcome1

2) Start the instance from the installation directory:

3) I used the Traffic Director admin interface, https://localhost:8989, and with my credentials (from section 1: admin/welcome1) I got the first admin panel

Once I am logged into Traffic Director, I get an initial "welcome" screen, and I have to create my own configuration according to my WLS 12c cluster:

My WebLogic 12c cluster was configured initially with 3 nodes, and I will add one more managed instance later. I have to create a specific Traffic Director configuration to route the traffic to my backend servers.

Through the Traffic Director configuration wizard, I will create a node on my local Linux machine (port 8080), which will push the HTTP requests to my WLS 12c clustered servers.
The Traffic Director node will use the dynamic discovery feature in order to be able to push HTTP requests to other WLS 12c clustered instances that join the cluster later:

Then I started my 3 WLS 12c clustered instances, on which I deployed my test Session replication application, and I started to test the session replication scenario through the Traffic Director node:

As you can see, my session has been created on the M1 node of my cluster.
I created some data with this session, then I stopped the M1 node (node failure simulation) and resubmitted the request to the WLS 12c cluster.

As you can see, the session failover feature of WLS 12c worked perfectly, and now my session is on the M2 clustered node with the same data as on the previously failed M1 node!
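If you prefer to run the same check from the command line instead of the browser, a couple of requests sharing a cookie jar through the Traffic Director listener are enough. The host, port and context root below are simply the ones assumed in this sandbox (Traffic Director node on port 8080, application deployed as /Session):

# first request: creates the HTTP session and stores the session cookie in the jar
curl -c cookies.txt -b cookies.txt http://localhost:8080/Session/

# stop the managed server that currently holds the session, then resend the same request;
# the session data should come back from another cluster member
curl -c cookies.txt -b cookies.txt http://localhost:8080/Session/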

For the dynamic discovery test, I created one more managed server in the cluster (M4), then stopped the M2 server and retried the Session application URL through the Traffic Director node!
As you can see, Traffic Director routed the request to the new clustered server, with all the session replication data.

In a future post, we will deploy the Session application in a WebLogic cluster and use it as a test application for the session replication features of WebLogic.

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualisation and Unified Communications Suite throughout the EMEA region.

Monday Apr 15, 2013

How to update the correct Network driver for a VMWare server configuration

I recently tried to "convert" my physical servers to virtual machines with P2V converters for my own POCs, using VirtualBox or VMWare, but in the process I ran into some annoying issues with these conversions, such as the network driver configuration. Here is the story and the solution:

I converted my Win2003 64-bit physical server configuration to a VMWare virtual machine, and when I tried to boot it, I wasn't able to load the real network driver, although the VMWare Tools were installed and configured without issues.
I was able to use the "shared folders" capability of VMWare, and I created a shared folder "D", mapped to my D host drive, so at least I was able to copy files between the host and my VMWare server :)

The solution to my network connectivity problem was the following:

1. I got the Network driver PRO2K3XP_x64.exe from here.

2. Using WinRar, I extracted the contents to a directory on my local server; then I copied this directory to a local drive on my VMWare server (through the shared folder described above).

3. Then, on my VMWare server, I just "forced" a hardware discovery; the driver manager asked me for the location of the Intel PRO/1000 driver files, and I pointed the hardware wizard to my local repository, et voila: the driver was recognized and correctly configured.

Now I can hook my VMWare server up to my network with all the WebLogic/OIM/OAM configuration :)

Here are some useful Oracle Server Virtualization courses:

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualisation and Unified Communications Suite throughout the EMEA region.

Friday Mar 29, 2013

The EAI Dispatch Service

The EAI Dispatch Service is an EAI business service that invokes other business services or workflows based on the contents of an input property set. It is driven by a set of customer-configurable rules and is used in both inbound and outbound integration workflows to route messages based on their content.

EAI Dispatch Service

Example: route incoming messages from multiple suppliers to the corresponding workflows

Benefits of the EAI Dispatch Service

The dispatch service uses conditional processing to implement business policy. Conditional processing is more general than workflow and is based on the entire message rather than just the values of business component fields. This means we can reduce the number of workflows that need to be developed to handle a set of incoming messages, and it allows business logic in workflows to be reused. The dispatch service (like workflow) expresses business logic declaratively and is therefore easier to maintain than custom scripts.

EAI Dispatch Service Methods

The Dispatch Service has the following methods:

Execute:

  • Examines the incoming message
  • Dispatches the message according to the rules
  • Is used in production systems

Lookup:

  • Evaluates the incoming message
  • Does not dispatch the message
  • Is used for testing

Process Aggregate Request:

  • Allows multiple invocations of methods in business services in a single request
  • Formats the output into a single message

Rules are used by the EAI Dispatch Service to process messages. They are created and administered in the Siebel client.

They consist of:

  • One or more conditions used to evaluate an incoming property set
  • A target business service to execute when a condition is satisfied
  • Zero, one, or more transforms to apply to the property set

They are expressed using a search expression notation based on traversing a hierarchical structure.

The search expression uses an XPath-like notation to examine the contents of the property set, and it includes several symbols that specify navigation to a desired component in the property set hierarchy (a short illustrative example follows the list):

  • / : refers to a level in the hierarchy
  • @ : refers to an attribute
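As a purely illustrative sketch (the property set levels and the attribute name are hypothetical, and the exact condition syntax should be verified against the Siebel EAI documentation), a rule condition could navigate to a vendor-type attribute and compare it with a literal value:

/SiebelMessage/Order/@VendorType = 'EDI'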

EAI Dispatch Service

A rule set is a set of rules that are evaluated in a specified sequence. The first rule whose condition is satisfied is applied, and rule evaluation stops.

EAI Dispatch Service

A rule examines an input property set. It specifies an expression that evaluates the incoming property set, and a business service and method to execute if the expression is satisfied.

EAI Dispatch Service

A transform is an optional expression used either to pass the existing hierarchy through unchanged or to modify it.

EAI Dispatch Service

Using a Dispatch Rule Set

Outbound Processing

Example:

Send an order to a vendor using a transport specific to that vendor. Include the dispatch service in a general workflow that processes outgoing vendor messages (a sketch of such a rule set is shown after the figure below).

EAI Dispatch Service
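A hypothetical outbound rule set for this example could pair a condition on a vendor-type attribute with the matching transport business service. The search expressions below are illustrative only, while EAI File Transport and EAI HTTP Transport are standard Siebel transport business services:

/SiebelMessage/Order/@VendorType = 'FILE'   ->  target service: EAI File Transport, method: Send
/SiebelMessage/Order/@VendorType = 'HTTP'   ->  target service: EAI HTTP Transport, method: SendReceive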

Inbound Processing

Create the vendor-type specific workflows and a rule set that:

  • Searches the incoming messages for vendor type
  • Dispatches the message to the corresponding workflow

EAI Dispatch Service

Create a profile for processing received vendor messages.

EAI Dispatch Service

About the Author:

Tim Bull

Tim Bull joined Siebel Systems in 1999 and transferred to Oracle in 2006. He is a Principal Education Consultant at Oracle University and is the Senior UK instructor for Siebel software. Using his detailed product knowledge, Tim delivers both In Class and Live Virtual Class courses across the whole Siebel curriculum, such as:

Wednesday Feb 27, 2013

Transporting a Single Partition of a Table


A feature of the Oracle database that appeared in 11gR1 but seems to have gone largely unnoticed is the ability to transport single (or multiple) partitions of a table to another database. This is particularly useful when archiving out older data held in older partitions of a table.

In previous releases of the Oracle database it was only possible to transport entire tables, which meant every partition in that table. The standard methodology used to transport a single partition was therefore to perform a partition exchange at the source database, making the partition a standalone table, and then to transport this table to the target database using the normal transportable tablespace method. The result is a standalone table in the target database. The Oracle 11.2 documentation still states that you cannot use the transportable tablespace feature to transport a single partition of a table; however, we are going to use a different technique, known as ‘datafile copy’, to do exactly that.

The datafile copy technique is actually part of Data Pump. In a similar way to transporting tablespaces, Data Pump is used to export out the metadata describing the objects to be copied to the new database. The datafiles that contain these objects are then copied to the new database, and Data Pump is used to import the metadata. These objects can now be seen by the new database.

This sounds exactly like transporting a tablespace, but there is a difference. In the datafile copy technique we might only be interested in a subset of what is in the datafiles, not the entire contents. This is achieved by invoking Data Pump export in the table mode, rather than tablespace mode. The datafile then effectively becomes an alternative to a dumpfile containing the exported data.

If the only contents of the tablespace in question are a single partition (or subpartition) of a table, then the final result appears to be identical to that achieved using the partition exchange and transportable tablespace technique traditionally used.

Now let's look at a worked example. First, create a tablespace TRANS to hold the partitions to be transported.
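The tablespace creation itself is not shown in the listing; a minimal statement consistent with the datafile name that appears later in the export output (D:\ORADATA\TRANS.DBF) might be the following, with an arbitrary size:

SQL> CREATE TABLESPACE trans DATAFILE 'd:\oradata\trans.dbf' SIZE 100M;
Tablespace created.

Next, create a partitioned table that stores its first two partitions in TRANS: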

SQL> CREATE TABLE "OE"."ORDERS_PART"
( "ORDER_ID" NUMBER(12),
"ORDER_DATE" TIMESTAMP(6) WITH local TIME ZONE CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6) CONSTRAINT "ORDER_PART_CUST_ID_NN" NOT NULL ,
"ORDER_STATUS" NUMBER(2),
"ORDER_TOTAL" NUMBER(8, 2),
"SALES_REP_ID" NUMBER(6),
"PROMOTION_ID" NUMBER(6))
TABLESPACE "EXAMPLE"
PARTITION BY RANGE ("ORDER_DATE")
(PARTITION "H1_2004" VALUES LESS THAN (TIMESTAMP'2004-07-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H2_2004" VALUES LESS THAN (TIMESTAMP'2005-01-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H1_2005" VALUES LESS THAN (TIMESTAMP'2005-07-01 00:00:00 +0:00'),
PARTITION "H2_2005" VALUES LESS THAN (TIMESTAMP'2006-01-01 00:00:00 +0:00'),
PARTITION "H1_2006" VALUES LESS THAN (TIMESTAMP'2006-07-01 00:00:00 +0:00'),
PARTITION "H2_2006" VALUES LESS THAN (MAXVALUE));
Table created

And then populate it:

SQL> insert into oe.orders_part select * from oe.orders;
105 rows created.
SQL> commit;

Look at the segments that are contained inside this tablespace:

SQL> select owner, segment_name,partition_name from dba_segments where tablespace_name='TRANS'

OWNER    SEGMENT_NAME    PARTITION_NAME
-------- --------------- --------------
OE       ORDERS_PART     H1_2004
OE       ORDERS_PART     H2_2004

Just as when transporting tablespaces, any tablespace whose datafiles we want to copy must first be made read-only.

SQL> alter tablespace trans read only;
Tablespace altered

Now the Data Pump export can be performed. The ‘tables’ option is used to indicate which partition we want to export, and ‘transportable=always’ indicates that the generated dump file is to contain only the table/partition metadata, as the datafiles containing the segments involved will be copied to the new database.

expdp system/oracle tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpfiles=y

Export: Release 11.2.0.2.0 - Production on Thu Feb 21 16:20:31 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpfiles=y
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DATAPUMP\ORDERSH1_2004.DMP
******************************************************************************
Datafiles required for transportable tablespace TRANS:
  D:\ORADATA\TRANS.DBF
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:20:53

Now the datafile is copied to its new location and the TRANS tablespace is made read-write again.

copy d:\oradata\trans.dbf d:\oradata\trans.cpy
        1 file(s) copied.

(If necessary, RMAN can now be invoked to convert the datafile to different platform format.)
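For example, a destination-side conversion might be sketched as below, run with RMAN connected to the destination database; the source platform name, input path and output path are placeholders, and valid platform names can be listed from V$TRANSPORTABLE_PLATFORM:

RMAN> CONVERT DATAFILE '/u01/stage/trans.cpy'
      FROM PLATFORM 'Microsoft Windows x86 64-bit'
      FORMAT '/u01/oradata/trans01.dbf';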

SQL> alter tablespace trans read write;

Tablespace altered.

Now move over to the second database and import the partition’s metadata into the database. Use the option ‘partition_options=departition’, which imports the partition as a standalone table. In this example the schema owner is also being changed.

impdp system/oracle partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/oradata/trans.cpy' remap_schema=OE:SH

Import: Release 11.2.0.2.0 - Production on Thu Feb 21 16:21:28 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/oradata/trans.cpy' remap_schema=OE:SH
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:21:34

Look for the newly imported table

SQL> select table_name, tablespace_name  from dba_tables
  2  where table_name like 'ORDERS%';

TABLE_NAME			TABLESPACE_NAME
------------------------------ --------------------------------------
ORDERS_PART_H1_2004		TRANS
ORDERS_QUEUETABLE 		EXAMPLE
ORDERS				EXAMPLE
ORDERS				EXAMPLE

Its new name is the name of the original table suffixed with the name of the partition. If this were the only segment originally stored in this datafile we would now be finished, but although this partition is the only segment whose metadata was imported into this database, there are actually other segments in this datafile that this 'new' database does not know anything about.

SQL> select owner, segment_name, partition_name from dba_segments where tablespace_name='TRANS'

OWNER    SEGMENT_NAME         PARTITION_NAME
-------- -------------------- --------------
SH       ORDERS_PART_H1_2004

So move the new table to another tablespace, enabling the TRANS tablespace to be dropped.

SQL> alter table sh.orders_part_h1_2004 move tablespace example;

Table altered.

SQL> drop tablespace trans;

Tablespace dropped.

Datafile copying as a means to transport partitions is one of the many features covered in the Oracle Database 11g: Implement Partitioning course.

About the Author:

Gwen Lazenby



Gwen Lazenby is Principal Training Consultant at Oracle. She is part of the UK and IE Core Tech Delivery team, teaching database administration, Spatial and Linux courses. Gwen is also a member of the OU EMEA SME team, with a special responsibility for Oracle Spatial.

Tuesday Jan 29, 2013

Defining Essbase Security using Metaread Filter

Defining security in multidimensional databases like Essbase is a subject that some administrators would rather avoid. Yet it is no rocket science and follows a clear logic: read, write or no access is applied to applications, databases or numbers in the database by using filters, in order to make data available to users, or not available, as required. But there is a fourth setting available in Essbase filters which some administrators are not yet aware of, or at least not aware of how to apply correctly: the so-called Metaread access setting. So let's have a look at this very useful option and shed some light on its efficient use.

Frequently it is required not only to give or deny access to the data and figures in an Essbase database, but also to block the user's ability to see members and hierarchy, in other words parts of the metadata. This is what the Metaread filter addresses. But Metaread differs in various ways from the other three filter options, None, Read and Write. First of all, it does not apply to the data/numbers in the database like the others do, but to metadata, where it limits the visibility of members and parts of the hierarchy. Also, it does not support an AND logic, only OR. And finally, it overrides definitions made with Read or Write, in the sense that even granted Read or Write data access on given members or member combinations cannot be exercised when Metaread definitions exclude these members from being seen at all by the user.

So how does it work in detail? Usually users can display all metadata, meaning they can see all members in the hierarchy, even if no Read or Write data access has been given to them on these members. Metaread adds another layer to the existing filter definitions and enforces them by removing certain members or branches from the user's view. It allows users to see only the explicitly defined members and their ancestors in the hierarchy; for the ancestors only the hierarchy/member names are visible, while for the declared members at least read access to the respective data is always granted. If data Write access is given at the same time for the members defined in Metaread, this access is maintained as Write and not reduced to read-only. Siblings of the defined Metaread members are not visible at all. This is illustrated in the example below.

essbase screenshot

For all cells not specified in the Metaread filter, unless defined differently in another filter, the minimum database access applies: first the access defined at the user level or, with second priority, the setting from the global database access level, just as is the case with the common filter definitions.

Of course, overlapping definitions might also occur for Metaread, but here we have to watch out, as they are again treated in a different way, as the following example, referring to the hierarchy seen above, shows:

essbase screenshot

This definition, unlike for None, Read or Write, would not grant data access to both California and West. Instead it would allow data access only to California, and not to its parent West, for which only the hierarchy would be shown. In order to avoid such conflicting settings for Metaread, it is recommended to define all members from the same dimension in one single filter row, so the correct way to define data access on both members would be:

essbase screenshot
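For administrators who prefer scripting filters in MaxL (for example via essmsh) rather than using the filter editor, a minimal sketch of such a single-row definition might look like the following; the application, database and filter names are hypothetical, and the member expression should be adapted to your own outline:

create or replace filter Sample.Basic.fltWestMeta
    metaread on '"California","West"';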

So as you can see, Metaread is a bit special, but not that complicated to use. And it adds another helpful option to the common None, Read and Write settings in Essbase security filters.

Want to learn more?

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region).

Please drop me a note directly if you have any questions: bernhard.kinkel@oracle.com .

About the Author:

Bernhard Kinkel

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

About

Expert trainers from Oracle University share tips and tricks and answer questions that come up in a classroom.
