Monday Apr 21, 2014

What to Expect from Our Top Oracle Solaris 11 Training Courses


by Mike Lanker, Senior Oracle University Instructor

After many years of working with prior versions of Solaris, I was asked to learn and teach Solaris 11 classes.
I wondered: what could be different?


Old Versions of Solaris vs. Solaris 11

I quickly discovered there are many differences between older versions of Solaris and Solaris 11.

My first course of action on my training path was to take the UNIX and Linux Essentials course. This training explored the basics, so I followed up by taking the Solaris 11 Administration course.

If you’re working toward becoming a Systems Administrator (SA), I can tell you from experience that these courses will help you excel within this position. How, you ask? Let’s take a look at the skills these courses will help you develop.


UNIX and Linux Essentials Course

This four-day beginner’s course is designed for those who work in the Solaris environment. It begins with a deep dive into the Operating System (OS) structure and then explores archiving and performing remote file transfers. It teaches you the basic commands, a very beneficial skill set to have if you’d like to take the Solaris Administration class.

This course is based on Oracle Linux 6.2 and Oracle Solaris 11. It gives you an opportunity to participate in hands-on lab exercises to reinforce your learning.

Learn To:

  • Discover different shells.
  • Set file permissions.
  • Customize your initialization files so you can use tailored commands.
  • Work with the vi editor to create and modify existing files.
  • Understand the basic commands used to create, copy, move and rename files and directories.

Oracle Solaris 11 Administration Course

Once you’ve completed the Essentials training, this five-day course builds on that material and expands into other areas.

Learn To:

  • Perform OS installs.
  • Verify the software/drivers are current.
  • Create boot environments.
  • Work with the terrific ZFS file system.
  • Create zones.
  • Perform network administration.
  • Create, update and delete user accounts.
  • Control access to the system.
  • Change the password algorithm.
  • Establish user quotas, in case you have limited space.
  • Run recurring programs.

You’ll also get the opportunity to participate in interactive lab sessions, which create a more enjoyable, hands-on learning environment.


Become a System Administrator by Enrolling in Oracle University Training

By investing in the above-mentioned courses, you’ll develop the skills to oversee the system and keep unauthorized users out. Furthermore, you’ll be able to keep tabs on anyone who switches over to another user. You’ll know how to set up passwords and handle failed logins in line with established policies and procedures.

If your company decides to increase the password length or strengthen the encryption, you’ll have the knowledge to do so with ease. You’ll establish and maintain the storage requirements while creating the storage pools and related file systems.

These courses will prepare you to successfully execute System Administrator responsibilities. Possessing this skill set will help you stand out amongst your peers. Investing in training is an effective way to build the knowledge you need to pass the Oracle Solaris 11 OCA Certification exam.

Get introduced to Solaris 11 with me, Mike Lanker, in this free one-hour webinar.

View all available Oracle Solaris 11 training.


About the Author:

Mike Lanker is a Senior Instructor with Oracle University.
He has been working as an instructor since 1979 and currently teaches classes for Oracle Solaris, Oracle Linux, Oracle Storage solutions and Oracle Database.

Thursday Jan 23, 2014

An Introduction to Subledger Accounting
By Chris Rudd

R12 saw the introduction of Oracle Subledger Accounting (SLA), a rule-based accounting engine that centralizes accounting for Oracle E-Business Suite products in Release 12 and above.
SLA is not a separate product in itself, but is Oracle’s engine catering to the accounting needs of Oracle applications. There are no SLA responsibilities and you do not log into SLA; Subledger Accounting forms and programs are embedded within standard Oracle Applications responsibilities (for example, Payables Manager).

Multiple Accounting Representations

Together with the new ledger support in General Ledger (GL), SLA provides the ability to maintain multiple accounting representations in a single instance.
A Subledger Accounting Method is linked to a ledger, and the rules contained within that Subledger Accounting Method determine how a transaction is represented in that ledger.

In R12, you can have Primary, Secondary and Reporting Currency Ledgers. Your Primary Ledger is your main reporting ledger, a Secondary Ledger can be used if you need another accounting representation, and a Reporting Currency Ledger is used where you need to report in another currency.

Example:

A US Corporation has an operation in France. The French operation is subject to French accounting regulations, and therefore must report its activities to the local authorities in Euros, according to the French business calendar, French chart of accounts, and the French interpretation of IFRS (International Financial Reporting Standards).

This would be the French Primary Ledger. However, the US headquarters/parent company needs consolidated, global visibility of the worldwide operations, so you can use a Secondary Ledger linked to the French Primary Ledger. The Secondary Ledger will share the same currency, COA, calendar and Subledger Accounting Method as those used at HQ.

As the example above illustrates, Subledger Accounting allows:
• multiple accounting methods to be defined
• separate accounting methods to be used on different ledgers

Journal entries will be created for both ledgers for a single subledger posting (French activities accounted for on the primary ledger according to the IFRS method, and on the secondary ledger according to the US GAAP method).

Oracle provides a number of seeded Subledger Accounting Methods, for example Standard Accrual. If you want to change the accounting rules then you can copy the seeded method and create your own Subledger Accounting Method.

So now all of the subledgers create accounting in the same consistent way, with the ability to do online accounting for a single transaction or to submit a standard request to account for all transactions in a module. You can create Draft accounting, which is useful if you are experimenting with the rules or wish to review before posting; Final accounting, which is complete and ready to post to GL; or Final Post, which is complete and posts to GL immediately.

Once data is posted to GL, there’s a full audit trail with drilldown all the way to the actual subledger transaction. Come along to the R12.x Oracle Subledger Accounting Fundamentals class to learn more about this and other features of SLA, or send me an email with any questions: christine.rudd@oracle.com.

 

About the Author:


Chris Rudd joined Oracle in May 1999. She is a Principal Training Consultant at Oracle University. Using her detailed product knowledge, Chris delivers both in-class and Live Virtual Class courses for the Oracle E-Business Suite Financials and Oracle Fusion Financials products.

Monday Jan 20, 2014

Description of the Business Object component within Oracle Siebel CRM / Oracle Siebel 的业务对象(BO)说明

Read in Chinese



Description of the Business Object component within Oracle Siebel CRM



Oracle Siebel CRM is a mainstream Customer Relationship Management (CRM) software package in the current global market. Providing a very comprehensive solution, Oracle Siebel defines its objects at three main levels of technical implementation.



The Business Object (BO) at the business level is quite interesting. In reality, a single Business Component (BC) cannot describe an actual business comprehensively. Therefore, the system introduces the concept of the Business Object to link various Business Components together and fully represent an actual business. Take the example of the Business Component “Opportunity”: in order to fully understand a business opportunity, we have to take into consideration the Account, Contact and Action that are associated with it. In the technical implementation, it is the Business Object that links all these different components together.



In this scenario built around Opportunity, Opportunity is the parent Business Component, with the Account, Contact and Action that describe the Opportunity as child Business Components. This forms a multidimensional technical model centered on Opportunity. Its technical implementation can be illustrated as below:



The Link is implemented through a primary key/foreign key relationship.



Wang Zeyi: Senior training lecturer in Oracle University. Wang has engaged in the CRM field for a long time, including experience in CRM pre-sales, research, business analysis and technical implementation among various industries such as financial service, Hi-tech, telecommunication and automobile. He is mainly in charge of training courses on Oracle Siebel CRM and Oracle Fusion CRM.



Description of Oracle Siebel’s Business Object (BO)


Read in English



Oracle Siebel CRM is today’s mainstream customer relationship management software in the global market, offering a very complete solution. In its technical implementation, object definitions are divided into three main levels.



The Business Object (BO) in the business layer is quite distinctive. In real business, a single Business Component (BC) cannot fully describe an actual business, so the Business Object (BO) is introduced to link multiple Business Components (BC) together and describe one specific, real business. Take the Opportunity business as an example: in practice, to fully understand an opportunity you must also look at the Account, Contact and Action related to that opportunity. In the technical implementation, it is the Business Object (BO) that links these different objects together.



In this scenario, the Opportunity at the center acts as the parent Business Component (BC), while the Account, Contact and Action that provide descriptive information about the opportunity exist as child Business Components. This forms a multidimensional technical model centered on the Opportunity. The technical implementation is then straightforward, as shown in the figure below:



As for the Link, it is implemented through a primary key/foreign key relationship.



Wang Zeyi is a senior training lecturer at Oracle University. He has long worked in the customer relationship management (CRM) field, taking part in CRM pre-sales, research, business analysis and technical implementation across industries including financial services, high tech, telecommunications and automotive. He mainly teaches training courses on Oracle Siebel CRM and Oracle Fusion CRM.

Monday Nov 18, 2013

Using the Container Database in Oracle Database 12c by Christopher Andrews


The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

Bundling Databases Inside One Container

In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.
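To make the bundling idea concrete, here is a minimal sketch of creating and opening a pluggable database inside an existing CDB. The names used (salespdb, pdb_admin, the datafile paths) are illustrative assumptions, not references to any particular system:

-- Create a new PDB from the seed; the file paths are placeholders
CREATE PLUGGABLE DATABASE salespdb
  ADMIN USER pdb_admin IDENTIFIED BY Welcome1
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                       '/u01/app/oracle/oradata/cdb1/salespdb/');

-- A new PDB starts out mounted; open it for use
ALTER PLUGGABLE DATABASE salespdb OPEN;

-- From the root container, each PDB is visible as a row in V$PDBS
SELECT name, open_mode FROM v$pdbs;

-- Administer the PDB by switching the session into it
ALTER SESSION SET CONTAINER = salespdb;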

Minimizing Storage

Most IT professionals understand the cost of storage, whether solid state or spinning disk. Let’s take one-to-many databases and “plug” them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content kept in each pluggable database. We won’t need as many background processes either, thus reducing the overhead cost of CPU resources.

Improve Security Levels within Each Pluggable Database 

We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.

 The bottom line: it's a good idea to at least consider using a CDB.


About the author:

Chris Andrews is a Canadian instructor with Oracle University who teaches the Server Technologies courses for both 11g and 12c. He has been with Oracle University since 1997 and started training customers back with Oracle 7. Now a Senior Principal Instructor with OU, Chris worked as a DBA for about 10 years before joining Oracle. His background and experience as a DBA, Database Analyst and developer are occasionally shared with customers in the classroom. His skill set includes the Database Administration workshops, Backup & Recovery, Performance Tuning, RAC, Data Guard, ASM, Clusterware and also the Exadata and Database Machine administration workshop. When not teaching, Chris enjoys aviation and flying gliders, underwater photography and tennis.

Monday Nov 04, 2013

Some OBI EE Tricks and Tips in the Admin Tool
By Gerry Langton

How to set the log level from a Session variable Initialization block

As we know, it is normal to set a non-zero log level for a particular user when we wish to debug problems. However, it is sometimes inconvenient to go into each user’s properties in the Admin Tool and update the log level, so here I am showing a method which allows the log level to be set for all users via a session initialization block — a useful alternative when you don’t want to touch individual user accounts.

The screen shots shown are using the OBIEE 11g SampleApp demo but are applicable to any environment.

Open the appropriate rpd in on-line mode and navigate to Manage > Variables.

Select Session > Initialization Blocks, right click in the white space and create a New Initialization Block.

I called the Initialization Block Set_Loglevel.
Now click on ‘Edit Data Source’ to enter the SQL.

Choose the ‘Use OBI EE Server’ option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in on-line mode.

The SQL can access any of the RPD tables and is purely used to return a value of 2. The ‘Test’ button confirms that the SQL is valid.
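For reference, the statement behind ‘Edit Data Source’ can be as simple as a query that returns the literal 2 in its first column, which the data target then maps to LOGLEVEL. This is a sketch only — the table reference below is an assumed name, so substitute any table that the BI Server can query in your own RPD:

SELECT 2 FROM "SampleApp"."Time"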
Next, click on the ‘Edit Data Target’ button to add the LOGLEVEL variable to the initialization block.

Check the ‘Enable any user to set the value’ option so that this will work for any user.
Click OK and the following message will display as LOGLEVEL is a system session variable:

Click ‘Yes’.
Click ‘OK’ to save the Initialization Block. Then check in the on-line changes.
To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload server metadata, either from the Analysis editor or from Administration > Reload Files and Metadata link. Run a query then navigate to Administration > Manage Sessions and click ‘View Log’ for the query just issued (which should be approximately the last in the list). A log file should exist and with LOGLEVEL set to 2 should include both logical and physical sql. If more diagnostic information is required then set LOGLEVEL to a higher value.

If logging is required only for a particular analysis then an alternative method can be used directly from the Analysis editor.
Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL clauses section and enter the following in the Prefix box:
SET VARIABLE LOGLEVEL = 2;
Click the ‘Apply SQL’ button.

The SET VARIABLE statement will now prefix the Analysis’s logical SQL, so any time this analysis is run it will produce a log.

You can find information about training for Oracle BI EE products here or in the Oracle University Learning Paths.
Please send me an email at gerry.langton@oracle.com if you have any further questions.

About the Author:


Gerry Langton started at Siebel Systems in 1999 working as a technical instructor teaching both Siebel application development and also Siebel Analytics (which subsequently became Oracle BI EE). From 2006 Gerry has worked as Senior Principal Instructor within Oracle University specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.

Friday Nov 01, 2013

Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data), an English setter could be seen as the link to the right data.


Data, data, data: we are living in a world where data technology based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on has a predominant role in our lives.

More and more technologies are used to analyze and track our behavior, to try to detect patterns, and to propose “the best/right user experience” to us, from the Google Ad services to telco companies or large consumer sites (like Amazon). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat, analyze and understand the trends and offer new services to users.

Some of these “data feeds” are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the “collocation of compute nodes with the data”, which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers.

Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let’s say Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar.

Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged on a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters.

In this post, a “lab like” implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS.

The whole setup is fairly simple:

  1. Install on a Linux x64 server ( or virtual box appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server
  2. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
  3. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
  4. Check the java version of your Linux server with the command:
    java -version
     java version "1.7.0_40"
    Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
    Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    
  5. Decompress the hadoop hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1
  6. Modify your .bash_profile
    export HADOOP_HOME=/u01/hadoop-1.2.1
    export PATH=$PATH:$HADOOP_HOME/bin
    
    export HIVE_HOME=/u01/hive-0.11.0
    export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
    
    (also see my sample .bash_profile)
  7. Set up ssh trust for the Hadoop process. This is a mandatory step; in our case we have to establish a "local trust" as we are using a single-node configuration
  8. Copy the new public keys to the list of authorized keys
  9. Connect and test the ssh setup to your localhost:

  10. We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes. We need to "fine tune" some Hadoop configuration files, so we go to $HADOOP_HOME/conf and modify the following files:
    core-site.xml
                

    hdfs-site.xml
    

    mapred-site.xml
    

  11. Check that the hadoop binaries are referenced correctly from the command line by executing:
    
    hadoop version
    
  12. As Hadoop is managing our "clustered" HDFS file system, we have to create "the mount point" and format it. The mount point will be declared in core-site.xml as:

    The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce => /mapred/...); HDFS uses the /dfs/... layout structure.

  13. Format the HDFS Hadoop file system:
  14. Start the Java components for the HDFS system
  15. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations:

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data!), and plug it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step).

    You can create/use a Hive DB, but in our case we will make a simple integration of "raw data", through the creation of an external table in a local Oracle instance (on the same Linux box we run the one-node Hadoop HDFS cluster and one Oracle DB).

  16. Download some public "big data". I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations.

    Here is the data layout of my example file:

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system (see picture below).

  17. Modify your environment in order to access the connector libraries, and make the following test:

    [oracle@dg1 bin]$./hdfs_stream
    Usage: hdfs_stream locationFile
    [oracle@dg1 bin]$
    
  18. Load the data to the Hadoop hdfs file system:
    hadoop fs  -mkdir bgtest_data
    hadoop  fs  -put obsFrance.txt bgtest_data/obsFrance.txt
    hadoop fs  -ls  /user/oracle/bgtest_data/obsFrance.txt       
    [oracle@dg1 bg-data-raw]$ hadoop fs -ls  /user/oracle/bgtest_data/obsFrance.txt
    
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    
    [oracle@dg1 bg-data-raw]$hadoop fs -ls  hdfs:///user/oracle/bgtest_data/obsFrance.txt
    
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    
  19. Check the content of the HDFS with the browser UI:
  20. Start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours):
    #!/bin/bash
    export ORAENV_ASK=NO
    export ORACLE_SID=dg1
    . oraenv
    sqlplus /nolog <<EOF
    CONNECT / AS sysdba;
    CREATE OR REPLACE DIRECTORY osch_bin_path  AS  '/u01/orahdfs-2.2.0/bin';
    CREATE USER BGUSER IDENTIFIED BY oracle;
    GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
    GRANT EXECUTE ON sys.utl_file TO BGUSER;
    GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
    GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
    GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
    EOF
    
  21. Put the following in a file named t3.sh and make it executable:
    hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
    oracle.hadoop.exttab.ExternalTable \
    -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
    -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
    -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
    -D oracle.hadoop.exttab.columnCount=7 \
    -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
    -D oracle.hadoop.connection.user=BGUSER \
    -D oracle.hadoop.exttab.printStackTrace=true \
    -createTable  --noexecute
    

    then test the creation of the external table with it:

    [oracle@dg1 samples]$ ./t3.sh
    
    ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
    Oracle SQL Connector for HDFS Release 2.2.0 - Production
    Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
    Enter Database Password:]
    The create table command was not executed.
    The following table would be created.
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
     "C1"                             VARCHAR2(4000),
     "C2"                             VARCHAR2(4000),
     "C3"                             VARCHAR2(4000),
     "C4"                             VARCHAR2(4000),
     "C5"                             VARCHAR2(4000),
     "C6"                             VARCHAR2(4000),
     "C7"                             VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY "BGT_DATA_DIR"
       ACCESS PARAMETERS
       (
         RECORDS DELIMITED BY 0X'0A'
         CHARACTERSET AL32UTF8
         STRING SIZES ARE IN CHARACTERS
         PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
         FIELDS TERMINATED BY 0X'2C'
         MISSING FIELD VALUES ARE NULL
         (
           "C1" CHAR(4000),
           "C2" CHAR(4000),
           "C3" CHAR(4000),
           "C4" CHAR(4000),
           "C5" CHAR(4000),
           "C6" CHAR(4000),
           "C7" CHAR(4000)
         )
       )
       LOCATION
       (
         'osch-20131022081035-74-1'
       )
    ) PARALLEL REJECT LIMIT UNLIMITED;
    The following location files would be created.
    osch-20131022081035-74-1 contains 1 URI, 54103 bytes
           54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    
  22. Then remove the --noexecute flag and create the external Oracle table for the Hadoop data.

    Check the results:

    
    The create table command succeeded.
    
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
     "C1"                             VARCHAR2(4000),
     "C2"                             VARCHAR2(4000),
     "C3"                             VARCHAR2(4000),
     "C4"                             VARCHAR2(4000),
     "C5"                             VARCHAR2(4000),
     "C6"                             VARCHAR2(4000),
     "C7"                             VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    ( 
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY "BGT_DATA_DIR"
       ACCESS PARAMETERS
       (
         RECORDS DELIMITED BY 0X'0A'
         CHARACTERSET AL32UTF8
         STRING SIZES ARE IN CHARACTERS
         PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
         FIELDS TERMINATED BY 0X'2C'
         MISSING FIELD VALUES ARE NULL
         (
           "C1" CHAR(4000),
           "C2" CHAR(4000),
           "C3" CHAR(4000),
           "C4" CHAR(4000),
           "C5" CHAR(4000),
           "C6" CHAR(4000),
           "C7" CHAR(4000)
         )
       )
       LOCATION
       (
         'osch-20131022081719-3239-1'
       )
    ) PARALLEL REJECT LIMIT UNLIMITED;
    
    The following location files were created.
    
    osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
    
           54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    

    This is the view from the SQL Developer:

    and finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

    SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";
                      
    COUNT(*)
    ----------
          1151
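     If you want to keep a copy of the HDFS data inside the database rather than read it through the external table every time, a simple CTAS is one option; a sketch only (the target table name is an arbitrary choice):

     -- Materialize the external (HDFS-backed) rows as a regular Oracle table
     CREATE TABLE BGUSER.OBS_FRANCE AS
       SELECT * FROM BGUSER.BGTEST_DP_XTAB;

     -- All columns arrive as VARCHAR2(4000); add TO_NUMBER/TO_DATE conversions
     -- once the real layout of the CSV fields is known.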
    
    

In a future post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured data world can be integrated with an Oracle infrastructure.

Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

Oracle University presents a complete curriculum on all the related Oracle technologies, including NoSQL, Big Data, Oracle Data Integrator and Oracle Coherence 12c.

Other Resources:

  • Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies.
  • "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active googling will also give you more references.

About the author:

Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualisation and Unified Communications Suite throughout the EMEA region.

Friday Oct 18, 2013

Oracle Database 12c New Partition Maintenance Features by Gwen Lazenby

One of my favourite new features in Oracle Database 12c is the ability to perform partition maintenance operations on multiple partitions. This means we can now add, drop, truncate and merge multiple partitions in one operation, and can split a single partition into more than two partitions, also in just one command. This would certainly have made my life slightly easier had it been available when I administered a data warehouse on Oracle 9i.

To demonstrate this new functionality and syntax, I am going to create two tables, ORDERS and ORDER_ITEMS, which have a parent-child relationship. ORDERS is to be partitioned using range partitioning on the ORDER_DATE column, and ORDER_ITEMS is going to be partitioned using reference partitioning and its foreign key relationship with the ORDERS table. This form of partitioning was a new feature in 11g and means that any partition maintenance operations performed on the ORDERS table will also take place on the ORDER_ITEMS table.

First create the ORDERS table -

SQL> CREATE TABLE orders
      ( order_id NUMBER(12),
        order_date TIMESTAMP,
        order_mode VARCHAR2(8),
        customer_id NUMBER(6),
        order_status NUMBER(2),
        order_total NUMBER(8,2),
        sales_rep_id NUMBER(6),
        promotion_id NUMBER(6),
       CONSTRAINT orders_pk PRIMARY KEY(order_id)
     )
    PARTITION BY RANGE(order_date)
   (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
    PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
    PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
    PARTITION Q4_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
    );

Table created.

Now the ORDER_ITEMS table

SQL> CREATE TABLE order_items
     ( order_id NUMBER(12) NOT NULL,
       line_item_id NUMBER(3) NOT NULL,
       product_id NUMBER(6) NOT NULL,
       unit_price NUMBER(8,2),
       quantity NUMBER(8),
       CONSTRAINT order_items_fk
       FOREIGN KEY(order_id) REFERENCES orders(order_id) on delete cascade)  
       PARTITION BY REFERENCE(order_items_fk) tablespace example;

Table created.

Now look at DBA_TAB_PARTITIONS to get details of what partitions we have in the two tables –

SQL>  select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4

Just as an aside, it is also now possible in 12c to use interval partitioning on reference-partitioned tables; in 11g it was not possible to combine these two partitioning features. A sketch of an interval-partitioned parent follows below.
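For reference, the interval clause on a range-partitioned parent looks like the sketch below. This is separate from the ORDERS example used in the rest of this post; note also that interval partitioning needs a DATE or NUMBER partitioning key, so the column here is declared as DATE rather than TIMESTAMP.

-- Sketch only: new quarterly partitions are created automatically as data arrives
CREATE TABLE orders_interval
 ( order_id   NUMBER(12),
   order_date DATE NOT NULL
 )
 PARTITION BY RANGE (order_date)
 INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
 (PARTITION p_before_2007 VALUES LESS THAN (TO_DATE('01-JAN-2007','DD-MON-YYYY')));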

For our first example of the new 12c functionality, let us add all the partitions necessary for 2008 to the tables using one command. Notice that the partition specification part of the add command is identical in format to the partition specification part of the create command as shown above -

SQL> alter table orders add
PARTITION Q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY')),
PARTITION Q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY')),
PARTITION Q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY')),
PARTITION Q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'));

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 4 new partitions have been added to both tables –

SQL> select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q1_2008                5 TIMESTAMP' 2008-04-01 00:00:00'
ORDER_ITEMS    Q1_2008                5
ORDERS         Q2_2008                6 TIMESTAMP' 2008-07-01 00:00:00'
ORDER_ITEMS    Q2_2008                6
ORDERS         Q3_2008                7 TIMESTAMP' 2008-10-01 00:00:00'
ORDER_ITEMS    Q3_2008                7
ORDERS         Q4_2008                8 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                8

Next, we can drop or truncate multiple partitions by giving a comma-separated list in the alter table command. Note the use of the plural ‘partitions’ in the command, as opposed to the singular ‘partition’ prior to 12c (a truncate example follows the drop example below).

SQL> alter table orders drop partitions Q3_2008,Q2_2008,Q1_2008;

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 3 partitions have been dropped from both tables –

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5
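Truncating multiple partitions uses the same plural syntax; a sketch only, not actually run against the example tables above (with reference-partitioned children you may also need the CASCADE clause):

alter table orders truncate partitions Q4_2007, Q4_2008;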

Now let us merge all the 2007 partitions together to form one single partition –

SQL> alter table orders merge partitions
   Q1_2007, Q2_2007, Q3_2007, Q4_2007
   into partition Y_2007;

Table altered.

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Y_2007                 1 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Y_2007                 1
ORDERS         Q4_2008                2 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                2

Splitting partitions is slightly more involved. In the case of range partitioning, one of the new partitions must have no high value defined, and in list partitioning one of the new partitions must have no list of values defined. I call these the ‘everything else’ partitions; they will contain any rows from the original partition that are not contained in any of the other new partitions.

For example, let us split the Y_2007 partition back into 4 quarterly partitions –

SQL> alter table orders split partition Y_2007 into 
(PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
PARTITION Q4_2007);

Now look at DBA_TAB_PARTITIONS to get details of the new partitions –


TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Partition Q4_2007 has a high value equal to the high value of the original Y_2007 partition, and so has inherited its upper boundary from the partition that was split.

For a list partitioning example, let’s look at another table, SALES_PAR_LIST, which has 2 partitions, AMERICAS and EUROPE, and a partitioning key of country_name.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------  ---------------  -----------------------------
SALES_PAR_LIST  AMERICAS         'Argentina', 'Canada', 'Peru',
                                 'USA', 'Honduras', 'Brazil', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

Now split the Americas partition into 3 partitions –

SQL> alter table sales_par_list split partition americas into
   (partition south_america values ('Argentina','Peru','Brazil'),
   partition north_america values('Canada','USA'),
   partition central_america);

Table altered.

Note that no list of values was given for the CENTRAL_AMERICA partition. However, it inherits any values in the original AMERICAS partition that were not assigned to either the NORTH_AMERICA or SOUTH_AMERICA partitions. We can confirm this by looking at the DBA_TAB_PARTITIONS view.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------- ---------------  --------------------------------
SALES_PAR_LIST  SOUTH_AMERICA    'Argentina', 'Peru', 'Brazil'
SALES_PAR_LIST  NORTH_AMERICA    'Canada', 'USA'
SALES_PAR_LIST  CENTRAL_AMERICA  'Honduras', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

In conclusion, I hope that DBAs whose work involves maintaining partitions will find these operations a bit more straightforward to carry out once they have upgraded to Oracle Database 12c.

Gwen Lazenby

Gwen Lazenby is a Principal Training Consultant at Oracle.

She is part of Oracle University's Core Technology delivery team based in the UK, teaching Database Administration and Linux courses. Her specialist topics include using Oracle Partitioning and Parallelism in Data Warehouse environments, as well as Oracle Spatial and RMAN.

Wednesday Oct 09, 2013

NEW ORACLE WEBCENTER CONTENT USER INTERFACE ON VERSION 11.1.1.8
By Roberto Nogueras

Oracle purchased Stellent in November 2006. Soon the Stellent Content Server product became Oracle Content Server, then Oracle UCM, and finally Oracle WebCenter Content. As you can see, the product name has changed three times in the past seven years. However, the user interface hasn’t changed that much. Oracle rebranded it in the 10gR3 release and has given the software only minor updates ever since. The interface is functional, but perhaps too complex for some end users, and it lacks the look and feel of modern web applications.

In Spring 2013, it became known that Release 11.1.1.8 was going to feature a new user interface. Some time in September, I decided to download and install it. I connected to the home page URL, logged in and the good old UI came up:

I know, I know, I should read the documentation before installing. After doing so, I found out a few interesting things:

  • The new UI is not a replacement for the old one. It just contains the features more useful for end users.
  • The new UI is an ADF application that you have to install in a separate WLS domain, and there’s no problem in running it in a separate machine.
  • The new UI communicates with the content server using the RIDC protocol (there’s a dedicated port open for that protocol on the content server side).

The setup is fully explained in the documentation, so we’re not going to get into the details. I’d rather perform a functionality analysis of the new UI. As mentioned before, it’s end user-oriented.

First of all, let’s login. After logging in, this is the main screen:

It’s quite obvious that it has just a few options and the main screen displays, by default, a blank search result. Let’s click the “Upload Button” to check in new content:

Nice. A pop-up window opens and we can either browse the file system or drag-and-drop some content from a file explorer window. When we add a file, the “Metadata” section expands automatically and we can enter data:

We’re redirected to the main screen, and after waiting a few seconds for the search index to be updated, we can click the “refresh” button and then the new document appears in the search:

If we click it, a new tab/window opens displaying this preview screen:

This one is a beautiful design, in my opinion. On the left side we see the document content and, more importantly, the tabs to navigate between revisions. On the right side, we can see metadata values, the menu to check out the content, and some other options such as “Favorite”, “Follow”, and “File Document”, which will be discussed a bit later. For now, let’s check out some content and create a new revision. Please note that a new tab is created:

You can “Follow” pieces of content, which is similar to the old “Subscribe” option; that is, the user wants to be notified every time a new revision is generated for that content item. You can also mark content as a Favorite, which is something new, and finally, you can arrange content into “Libraries”, which are just an evolution of folders. That’s the use of the “File Document” option: to put content into libraries:

There’s little else to say about the interface, as we’ve discussed all of its functionality. I hope you now have the information you need to decide whether or not to use it. The benefits for end users are obvious, and so is the cost: an extra WLS domain and more memory consumption on the server side.


About the Author:


Roberto Nogueras is based in Madrid, Spain and has been an Oracle University instructor since 1995. He specializes in the Oracle database, application server and middleware technologies. Roberto has been working with Oracle WebCenter Content since the Stellent acquisition.

Wednesday Sep 25, 2013

Siebel Pricing: Adding a New Variable to a Variable Map
By Tim Bull

Remember: Variable Maps are used by the PSP driver workflows to build the row sets that are passed to the pricing procedure workflows.

Variable Maps specify the structure of the row set by defining the variables that appear in the row set and the source of the data.

 

The Variable Source is defined by the Source Type and the Path

The Source Type of Instance refers to business components in the active business object. The Path does not refer directly to the business component and its field; rather, it is defined in terms of an integration object that maps to the business object, component, and fields.

 

To add a new Variable to a Variable Map

1. Lock the Variable Map

Administration - Order Management > Variable Maps

 

2. Create the Variable Definition


Determines the name to be used in pricing procedure business services

 

3. Create the Variable Source

Identify the Integration Component Field that references the new Business Field


4. Release the Variable Map

More Information:
To learn more about variable maps, I recommend the 3-day course Siebel 8.0 Customer Order Management: Pricing Configuration.
You can find more information about Oracle University’s offerings for Siebel on the Siebel Training web page.
You can also send me an email: tim.bull@oracle.com

About the Author:


Tim Bull joined Siebel Systems in 1999 and transferred to Oracle in 2006. He is a Principal Education Consultant at Oracle University and is the senior UK instructor for Siebel software. Using his detailed product knowledge, Tim delivers both in-class and Live Virtual Class courses across the whole Siebel curriculum.

Monday Jul 08, 2013

Some OBI EE Tricks and Tips in the Analysis Editor by Gerry Langton

The following examples use the SUPPLIER2 database schema which is used in the Oracle University OBI EE training courses.

How to handle NULLs in a Pivot Table column

Sometimes you want to be able to display a value in a pivot table column even though there is no value in a column.

In this example we see that only one of our Sales Reps has actually sold any Frozen goods. As a consequence, the Frozen column is blank for all the other reps. Putting a CASE statement in the Dollars column formula, e.g. CASE WHEN "Fact-Sales"."Dollars" IS NULL THEN 0 ELSE "Fact-Sales"."Dollars" END, looks like a good bet. But unfortunately it doesn’t work, as we don’t have a null value in the Dollars column; we just don’t have a row for this particular combination of Sales Rep and Product Type.

However we can use a little known feature of the column properties Data tab: you can enter a custom format that uses formatting commands similar to those used in Excel. The syntax used in this case would be:

positive-value-mask;negative-value-mask;null-mask (the three masks separated by semicolons).

For example #,###;-#,###;0

or #,###;-#,###;’No value’

So with that Custom Numeric format in place our Pivot Table now looks like this:

How to convert a string to proper case

Looking at the pivot table in the previous example, we find the Sales Rep column in the Customers table is in upper case, but we would like to see it reported with the first letter of each word in capitals and the remainder of the word in lower case. We review the available functions in the Analysis Editor expression builder and don’t find anything suitable; however, we know that Oracle provides the INITCAP database function, which does exactly what we require.

The Analysis Editor provides an EVALUATE expression which allows you to use database functions in the formula of a column.

The syntax for EVALUATE is:

EVALUATE('db_function(%1...%N)' [AS data_type] [, column1, columnN])

db_function is any valid database function understood by the underlying database.
data_type is an optional parameter that specifies the data type of the return result.
column1 through columnN is an optional, comma-delimited list of columns. Where column1 replaces %1 in the database function, column2 replaces %2 etc.

So we would type the following in the Sales Rep column formula:

EVALUATE('INITCAP(%1)', Customers."Sales Rep")

Use of the EVALUATE function is disabled by default so you need to enable the feature in the NQSCONFIG.INI file and restart the OBI Server (via Enterprise Manager).

In the Server Section of NQSCONFIG.INI set the parameter EVALUATE_SUPPORT_LEVEL to either 1 or 2 as in the extract from the file below:

# EVALUATE_SUPPORT_LEVEL:
# 1: evaluate is supported for users with manageRepositories permssion
# 2: evaluate is supported for any user.
# other: evaluate is not supported if the value is anything else.

EVALUATE_SUPPORT_LEVEL = 1;

How to do Time Series calculations in the Analysis Editor

Whilst time series calculations can be created in the OBI repository’s logical layer using the Ago, To Date and Rolling Period functions, it is also possible to create new calculations ‘on the fly’ in the Analysis Editor. When you look at the available functions for a column formula, you will see ‘Ago’, ‘To Date’ and ‘Rolling Period’ etc. under the ‘Time Series’ heading. However, the use of these functions is not particularly clear from the online Help.

Let’s assume that the repository has been set up with a Time dimension which is called Dim-Time and a Time hierarchy called Time, and the Time hierarchy has levels ‘Year’, ‘Quarter’, ‘Month’ and ‘Day’.

According to the online Help, to use the To Date function we use the syntax:
TODATE(expr, time_level). But what does time_level actually mean? Well, we express the time level using the syntax: ‘time dimension’.’hierarchy’.’level’.

Assuming we wish to have year to date totals this would equate to “Dim-Time”.”Time”.”Year”.

So the formula for Year To Date Dollars would be:

TODATE(“Fact Sales”.”Dollars”, “Dim-Time”.”Time”.”Year”)

For Month Ago Dollars the formula would be: AGO(“Fact Sales”.”Dollars”, “Dim-Time”.”Time”.”Month”, 1), where 1 is the period offset, i.e. current month -1.
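The remaining Time Series function, Rolling Period, maps to PERIODROLLING in the column formula. As a sketch, a rolling three-period total of Dollars (the current period plus the two before it, at whatever time grain the analysis is using) would be:

PERIODROLLING("Fact Sales"."Dollars", -2, 0)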

How to put an HTML link in the column formula

I was interested to discover that the Customer names used in our Oracle University OBI EE course database are in fact genuine American restaurants. I discovered this when I was playing around with the requirement to place HTML links into an Analysis, in this case to pass a column value into a Google search.

We can type the following into the formula of the Customer column:

'<a target="_blank" href="http://www.google.com/search?hl=en&as_epq=' || Customer.Customer || '&btnG=Google+Search">' || Customer.Customer || '</a>'

The first instance of the customer name (Customer.Customer) is passed to Google as the search string. The second instance is used to display in the Analysis.

The Google search string parameters used in this example are hl=en which sets interface host language to ‘en’, i.e. English and as_epq which specifies the whole phrase must be found, which is important in situations like this where there may be spaces in the column value.

Also check Custom Headings and put in a new heading for the column (otherwise it will display the html!)


Also check ‘Override Default Data Format’ and select HTML from the drop down in the column properties.

The Table view should look something like this and each customer name will be a link to Google Search, passing the customer name as a parameter.

So when you click on Alley-Cats for example you will find the restaurant in Arlington Texas.

One obvious problem you will encounter is that some of the Customer names include an ampersand (&) character, which is a terminator in the Google search string. So when you click on ‘2nd & Goal Sports Cafe’, only ‘2nd’ is passed to Google. To get round this you have to do a string replace, replacing the ampersand with ‘and’.

So the formula for the column would become:

'<a target="_blank" href="http://www.google.com/search?hl=en&as_epq=' || REPLACE("Customer"."Customer",'&', 'and') || '&btnG=Google+Search">' || Customer.Customer || '</a>'

You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at gerry.langton@oracle.com if you have any further questions.

About the Author:


Gerry Langton started at Siebel Systems in 1999 working as a technical instructor teaching both Siebel application development and also Siebel Analytics (which subsequently became Oracle BI EE). From 2006 Gerry has worked as Senior Principal Instructor within Oracle University specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.

Tuesday Jul 02, 2013

Oracle Database 12c: Oracle Multitenant Option by Gerlinde Frenzen

1. Why ?
2. What is it ?
3. How ?

1. Why ?

The main idea of the ‘grid’ is to share resources and to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: what will happen if one of the applications must be restored because of a human error? Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. The result is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

2. What ?

Pluggable databases are logical units inside a multitenant container database, which can hold up to 252 pluggable databases. The SGA is shared, as are the background processes.

The multitenant container database holds the metadata information common to all pluggable databases inside its System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their ‘personal’ metadata. New data dictionary views make the information available either at the PDB level (dba_ views) or at the container level (cdb_ views).

There are local users, which are known only in specific pluggable databases, and common users, which are known in all containers.
Pluggable databases can easily be plugged into another multitenant container database or converted from a non-CDB. They can undergo point-in-time recovery.

3. How ?

Creating a multitenant container database can be done using the Database Configuration Assistant: there you find the new option Create as Container Database. If you prefer ‘hand made’ databases, you can execute the command from an instance in NOMOUNT state:

CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE ….

And of course this can also be achieved through Enterprise Manager Cloud.

A freshly created multitenant container database consists of two containers: the root container as the 'rack' and a seed container, a template for future pluggable databases.

There are 4 ways to create other pluggable databases:

1. Create an empty pdb from seed

2. Plug in a non-CDB

3. Move a pdb from another CDB

4. Copy a pdb from another pdb


We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available:

1. Create an empty pdb and use Data Pump in traditional export/import mode, or in Transportable Tablespace or Database mode. This method is suitable for pre-12c databases.

2. Create an empty pdb and use GoldenGate replication.
When the pdb catches up with the non-CDB, you fail over to the pdb.

3. Databases of version 12c or higher can be plugged in with the help of the new DBMS_PDB package.

This is a demonstration for method 3:

Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database.
The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.
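A sketch of this step, assuming the non-CDB is named orcl and choosing an explicit file name rather than the $ORACLE_HOME/dbs default:

-- Run in the non-CDB, which must be open read-only for the describe call
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;

BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/oradata/orcl/noncdb_orcl.xml');
END;
/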

Step 2: Check whether the non-CDB can be plugged into the multitenant container database:
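A sketch of the check, run from the CDB root and using the XML file generated in step 1 (the PDB name is the one we intend to use in step 3):

SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/u01/app/oracle/oradata/orcl/noncdb_orcl.xml',
                  pdb_name       => 'PDB_ORCL');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible' ELSE 'Not compatible' END);
END;
/
-- If the answer is 'Not compatible', the reasons are listed in PDB_PLUG_IN_VIOLATIONS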

Step 3: Create the pluggable database while connected to the multitenant container database.

With the NOCOPY option the existing datafiles will be reused, but the tempfile is created anew:
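A sketch of the plug-in command itself, run from the CDB root and again assuming the XML file from step 1:

-- NOCOPY reuses the datafiles in place; TEMPFILE REUSE recreates the tempfile
CREATE PLUGGABLE DATABASE pdb_orcl
  USING '/u01/app/oracle/oradata/orcl/noncdb_orcl.xml'
  NOCOPY
  TEMPFILE REUSE;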
A service is created and registered automatically with the listener:

Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace:
To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb.
Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, it will take a few minutes.
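The clean-up itself is done with the noncdb_to_pdb.sql script shipped with the database; a sketch:

-- Switch into the new PDB and run the dictionary clean-up script
ALTER SESSION SET CONTAINER = pdb_orcl;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

-- Then open the PDB
ALTER PLUGGABLE DATABASE pdb_orcl OPEN;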

Step 5: The plugged-in database will be automatically synchronised, creating common users and roles, when it is opened for the first time in read-write mode.

Step 6: Verify tablespaces and users:
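A quick verification can be done from inside the new pdb; a sketch (in 12c the COMMON column of DBA_USERS distinguishes local from common users):

ALTER SESSION SET CONTAINER = pdb_orcl;
SELECT tablespace_name FROM dba_tablespaces;
SELECT username FROM dba_users WHERE common = 'NO';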

There is only one local tablespace (USERS) and one local user (SCOTT) in the plugged-in non-CDB pdb_orcl.
This method of plugging in a non-CDB is fast and easy for 12c databases.

The method for removing a pluggable database from a CDB and turning it back into a non-CDB is to create a new non-CDB, use the new full transportable feature of Data Pump to move the data, and then drop the pluggable database.

About the Author:


Gerlinde Frenzen has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge of Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.

Thursday Jun 20, 2013

Creating an ACFS Replication by Bruno d'Agostini

ACFS Replication, a new feature in 11.2.0.2, enables replication of Oracle ACFS file systems across the network to a remote site, providing disaster recovery capability for the file system.

For this demonstration, I used the practice environment provided for the course Oracle Grid Infrastructure 11g: Manage Clusterware and ASM.

Because disk space is limited and 4 GB is needed per node, I used only a two-node cluster, with nodes named host01 and host02.

1. Before you start, be sure your cluster is correctly configured and works fine

2. Using ASMCA, create the disk groups with the required attributes.

a. Create PRIM Diskgroup with external redundancy and use 5 ASM disks ORCL:ASMDISK05 to ORCL:ASMDISK09. Set ASM and ADVM Compatibility to 11.2.0.2.0

 

b. Create SEC Diskgroup with external redundancy and use 5 ASM disks ORCL:ASMDISK10 to ORCL:ASMDISK14. Set ASM and ADVM Compatibility to 11.2.0.2.0

 

3. Using ASMCA, create the volumes needed.

a. Create Volume PRIM on PRIM Diskgroup and use 11 GB for the volume (remember, 4 GB per node)

b. Create Volume SEC on SEC Diskgroup and use 11 GB for the volume (remember, 4 GB per node)

 

4. Using Linux, create the required directories

a. On Host01, create the directory /prim


b. On host02, create the directory /sec

 

5. Using ASMCA, create the ASM Cluster File Systems needed:

a. Create an ASM Cluster File System in PRIM Volume and specify /prim as Mount Point

b. Create an ASM Cluster File System in SEC Volume and specify /sec as Mount Point

6. Using NETCA, create TNS alias PRIM for the PRIM service on host01 and TNS alias SEC for the SEC service on host02 (do not specify the cluster SCAN)

7. Verify the result in the tnsnames.ora file

8. Using Linux, create a password file on host01 with oracle as password

9. Using Linux, create a password file on host02 with oracle as password

10. On host01, using SQL*Plus, create a user named oracle with password oracle and grant the necessary privileges. The password file will be updated on both sites, host01 and host02

11. On Host01, connected on instance +ASM1, add a service PRIM

12. Verify the listener on host01 is listening for service PRIM on instance +ASM1

13. Test the service name PRIM

14. On Host02, connected on instance +ASM2, add a service SEC

15. Verify the listener on host02 is listening for service SEC on instance +ASM2

16. Test the service name SEC

17. Using Linux, run df on host01 to check that the file system /prim is mounted, and dismount the unnecessary mounted file system (/sec)

18. Using Linux, run df on host02 to check that the file system /sec is mounted, and dismount the unnecessary mounted file system (/prim)

19. Connected to host02, using acfsutil, initialize ACFS replication on the standby site first

20. Only when the replication has been successfully initialized on the standby site, connect to host01 and initialize ACFS replication on the primary site using acfsutil.
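
Here is a sketch of the two init commands (steps 19 and 20), assuming the oracle user, the TNS aliases PRIM and SEC, and the mount points created above; please verify the exact flags against the acfsutil help on your release:

# On host02 (standby site): initialize the standby first
acfsutil repl init standby -p oracle/oracle@PRIM /sec

# On host01 (primary site): then initialize the primary
acfsutil repl init primary -s oracle/oracle@SEC -m /sec /prim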

21. Check ACFS Replication on primary site

22. Check ACFS Replication on standby site
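
Steps 21 and 22 can be done with acfsutil repl info, for example (flags as I recall them from the 11.2 documentation):

# On host01 (primary):
acfsutil repl info -c -v /prim
# On host02 (standby):
acfsutil repl info -c -v /sec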

23. On host01, create a file (named toto)

24. On host02, after a while, check if the file has been replicated

25. On host01, pause the replication and create a second file (named titi)

26. On host02, the new file is normally not replicated because replication is paused on host01

27. After a while, resume replication on host01

28. After a few seconds, on host02, the new file is normally replicated
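
The pause and resume on the primary (steps 25 and 27) are plain acfsutil calls, for example:

# On host01 (primary):
acfsutil repl pause /prim
# ... create the second file (titi) and check on host02, then:
acfsutil repl resume /prim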

29. On host02, pause the replication

30. On host01, create a new file (named new)

31. On host02, the file is not created because the replication on standby site is paused

32. On host02, resume the replication and test if the new file is created

33. On host02, the standby site, try to remove the file

34. On host01 primary site, terminate and verify the replication status

35. On host02 standby site, terminate and verify the replication status
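
A sketch of the termination steps (34 and 35):

# On host01 (primary site):
acfsutil repl terminate primary /prim
# On host02 (standby site):
acfsutil repl terminate standby /sec
# Running acfsutil repl info -c on each mount point should now report that replication is no longer configured.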

About the Author:


Bruno d’Agostini joined Oracle in 1994 and currently is a French Instructor for Oracle University specializing in the Oracle Database, Data Guard, Data Warehouse, Streams, GoldenGate, Parallelism, Partitioning, Clusterware, ASM, RAC and Exadata Database Machine.

Tuesday May 07, 2013

Create a high-availability WebLogic 12c clustering environment with Traffic Director 11gR1 by Eugene Simos

Oracle Traffic Director is one of the latest load-balancing products released by Oracle.
It is a fast, reliable, scalable and easily manageable solution for HTTP, HTTPS and TCP traffic to back-end application and HTTP servers.

I used the latest versions of both Traffic Director and WebLogic 12c to set up a simple cluster session-replication scenario in my 64-bit Linux VirtualBox sandbox.

For this scenario, Traffic Director is configured as a front end to a back-end WebLogic 12c cluster.

A specific feature, "dynamic discovery", lets Traffic Director discover on the fly new clustered WebLogic nodes associated with its initial configuration, so I will be able to join another WLS instance (dotted line in the diagram) to my initial 3-node WLS cluster, with full HTTP session replication capabilities.

To test session replication I used a sample application delivered with the WLS 12c installation (I will detail how to use this application in a later post :) ), which I called Session. I will test the failover features of WLS 12c through Traffic Director, with this Session application deployed on my cluster.

The binary distribution of Traffic Director can be downloaded from here:

Once the distribution was downloaded to my Linux box, I started the normal FMW installation workflow:

After passing the normal checks of the Linux system components, I chose an empty Oracle Traffic Director Home for the software installation:

I saved the response file (I might use the silent installer next time!)

Then, after saving the installation details, I started to configure my Traffic Director instances as follows:
1) Create one admin instance (I used just the default settings) and a "default user" admin/welcome1

2) Start the instance from the installation directory (see the sketch after this list):

3) I used the Traffic Director admin interface at https://localhost:8989, and with my credentials (from step 1, admin/welcome1) I got to the first admin panel
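
For step 2, a hedged sketch (the instance home path and the admin-server directory name are my assumptions about the default layout created by the wizard):

# Start the administration server of the freshly created admin instance:
<OTD_INSTANCE_HOME>/admin-server/bin/startserv
# Then log in at https://localhost:8989 as described in step 3.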

Once I am logged into Traffic Director, I get an initial "welcome" screen, and I have to create my own configuration according to my WLS 12c cluster:

My WebLogic 12c cluster was initially configured with 3 nodes, and I will later add one more managed instance. I have to create a specific Traffic Director configuration to route the traffic to my back-end servers.

Through the Traffic Director configuration wizard, I will create a node on my local Linux machine (port 8080), which will push the HTTP requests to my WLS 12c clustered servers.
The Traffic Director node will use the dynamic discovery feature in order to be able to push HTTP requests to other WLS 12c clustered instances that join the cluster later:

Then I started my 3 WLS 12c clustered instances, on which I deployed my test session-replication application, and I started to test the session replication scenario through the Traffic Director node:

As you can see, my session has been created on the M1 node of my cluster.
I created some data in this session, then I stopped the M1 node (node failure simulation) and resubmitted the request to the WLS 12c cluster.

As you can see, the session failover feature of WLS 12c worked perfectly, and now my session is on the M2 clustered node with the same data as on the previously failed M1 node!

For the dynamic discovery, I created one more managed server in the cluster (M4), then stopped the M2 server and retried the Session application URL through the Traffic Director node.
As you can see, Traffic Director routed the request to the new clustered server, with all the session replication data intact.

In a future post, we will deploy the Session application in a WebLogic cluster and use it as a test application for the session replication features of WebLogic.

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked with the banking sector, AT&T and other telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

Monday Apr 15, 2013

How to update the correct Network driver for a VMWare server configuration

I recently tried to "convert" my physical servers to virtual machines through P2V converters for my own POCs, using VirtualBox or VMWare, but in the process I encountered some annoying issues with these conversions, such as the network driver configuration, so here you can find the story and the solution:

I converted my Win2003 64-bit physical server configuration to a VMWare virtual machine, and when I tried to boot it, I wasn't able to load the real network driver, although the VMWare Tools were installed and configured without issues.
I was able to use the "shared folders" capability of VMWare, and I created a shared folder "D", mapped to my host's D drive, so at least I was able to copy files between the host and my VMWare server :)

The solution to my Network connectivity problem, was the following:

1. I got the Network driver PRO2K3XP_x64.exe from here.

2. Using WinRAR, I extracted the contents to a directory on my local server; then, through the VMWare network, I copied this directory to a local drive on my VMWare server.

3. Then, on my VMWare server, I just "forced" a hardware discovery; the device manager asked me for the location of the Intel PRO/1000 driver files, and I pointed the hardware wizard to my local repository, et voilà: the driver was recognized and correctly configured.

Now I can hook my VMWare server up to my network with all the WebLogic/OIM/OAM configuration :)

Here are some useful Oracle Server Virtualization courses:

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked with the banking sector, AT&T and other telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

Friday Mar 29, 2013

The EAI Dispatch Service

The EAI Dispatch Service is an EAI business service that invokes other business services or workflows based on the contents of an input property set. It is driven by a set of customer-configurable rules and is used in both inbound and outbound integration workflows to route messages based on their content.

EAI Dispatch Service

Example: route incoming messages from multiple suppliers to the corresponding workflows

Benefits of the EAI Dispatch Service

The dispatch service uses conditional processing to implement business policy. Conditional processing is more general than workflow and is based on the entire message rather than just the values of business component fields. This means we can reduce the number of workflows that need to be developed to handle a set of incoming messages, and it allows business logic in workflows to be reused. The dispatch service (like workflow) expresses business logic declaratively and is therefore easier to maintain than custom scripts.

EAI Dispatch Service Methods

The Dispatch Service has the following methods:

Execute:

  • Examines the incoming message
  • Dispatches the message according to the rules
  • Is used in production systems

Lookup:

  • Evaluates the incoming message
  • Does not dispatch the message
  • Is used for testing

Process Aggregate Request

  • Allows multiple invocations of methods in business services in a single request
  • Formats the output into a single message

Rules are used by the EAI Dispatch Service to process messages. They are created and administered in the Siebel client.

They consist of:

  • One or more conditions used to evaluate an incoming property set
  • A target business service to execute when a condition is satisfied
  • Zero, one, or more transforms to apply to the property set

They are expressed using a search expression notation based on traversing a hierarchical structure.

The search expression uses an XPATH-like notation to examine the contents of the property set and includes several symbols that specify navigation to a desired component in the property set hierarchy:

  • / : refers to a level in the hierarchy
  • @ : refers to an attribute
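
For illustration only, a hypothetical rule expression (the hierarchy levels, the attribute name and the bracketed condition syntax are invented here to show the notation, not taken from a real rule set):

/SiebelMessage/ListOfOrder/Order[@VendorType = "Acme"]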

EAI Dispatch Service

A rule set is a set of rules that are evaluated in a specified sequence. The first rule whose conditions are satisfied is applied, and rule evaluation stops.

EAI Dispatch Service

A rule examines an input property set. It specifies an expression that evaluates the incoming property set, plus a business service and method to execute if the expression is satisfied.

EAI Dispatch Service

A transform is an optional expression used to pass the existing hierarchy as it is or modify it.

EAI Dispatch Service

Using a Dispatch Rule Set

Outbound Processing

Example:

Send an order to a vendor using a transport specific to the vendor. Include the Dispatch service in a general workflow that processes outgoing vendor messages.

EAI Dispatch Service

Inbound Processing

Create the vendor-type specific workflows and a rule set that:

  • Searches the incoming messages for vendor type
  • Dispatches the message to the corresponding workflow

EAI Dispatch Service

Create a profile for processing received vendor messages.

EAI Dispatch Service

About the Author:

Tim Bull

Tim Bull joined Siebel Systems in 1999 and transferred to Oracle in 2006. He is a Principal Education Consultant at Oracle University and is the Senior UK instructor for Siebel software. Using his detailed product knowledge, Tim delivers both in-class and Live Virtual Class courses across all of the Siebel curriculum, such as:
