Tuesday Sep 09, 2014

Oracle SOA Suite 12c New Features: Creating SOA Project Templates for Reusing SOA Composite Designs

By Joe Greenwald, Principal Oracle University Instructor


In SOA Suite 12c, we create application integrations and business processes designed as services composed of processing logic, data transformation and routing, dynamic business rules, and human tasks, all in the form of XML-based metadata. These services are designed in JDeveloper using its graphical editors. Since each service is composed of individual, separately configurable components, we call it a composite service. Once deployed to and hosted by Oracle SOA Suite, the service looks and acts like any other web service to its clients.

It would be highly productive and desirable to be able to easily create templates for service designs that could be reused across teams and projects. Using quality designs and tested patterns as the starting point for new services speeds up development while also supporting widespread adoption of quality and standards in service design.

SOA Suite 12c automates creation and management of templates of service composites, as well as individual service components. The service project templates we create will be stored and managed in the file-based MDS, so they can easily be shared with other developers.

We have an existing service composite that we would like to clone or use as the basis of a new service composite. Once we create the new service based on the template, we’ll be able to make modifications to it as needed.

Here is the current Service:

pic1

The service exposes a web service entry point, OrderStatus, whose interface is implemented by the convertWS mediator. ConvertWS transforms the incoming message as needed and routes it to be processed by GetStatus, a Business Process Execution Language (BPEL) component. The BPEL process accesses the database through the database adapter, OrderDB, to check order status and then writes the status to a flat file via the file adapter, writeQA.

We’d like to start with this service and then create a new, separate project and service that will have the same starting structure. Then we can make modifications to suit our particular needs.

We begin by creating the service composite template from the existing project.

  1. Create or find a SOA Suite 12c Service Composite to use as the basis for the template and open it in JDeveloper 12c.

     pic2

  2. Right-click the project or composite name and select Create SOA Template.

     pic3

  3. Click the Save In icon to select where the template will be stored: the file system or the file-based MDS. We use the file-based MDS to enable easier sharing and reuse and to keep a single repository for storing assets.

     pic4

  4. Select which parts of the service project to include in the template. You can choose not to include certain components or assets.

     pic5

  5. Save the template.

Now that the template is created, we can reuse it in our application or share it as a JAR file. Because we stored it in the file-based MDS, we can also share it with other developers who have access to the MDS.

To create a new service composite based on the template:

  1. Create a new SOA Project.

     pic6

  2. Select a name for the project.

  3. Select the SOA Template radio button and then select the template from the list.

     pic7

  4. The new project is created based on the template.

     pic8

You can now edit the new project as you see fit.

The ability to create reusable templates also extends to individual components: you can create a mediator or BPEL process and save it as a template for reuse. The process is similar to creating a template for a service composite.

  1. Right-click the component to use as the basis for the template.

     pic9

  2. Select Create Component Template, and choose where to save it.

     pic10

  3. Choose which files to bundle with the template and click Finish.

     pic11

Once the component template is created, you can view it in the Component Window.

pic12

To use the component template:

  1. Drag and drop the template onto your service component editor.

     pic13

  2. Choose the name for the component and which files to include from the template.

     pic14

  3. If there are conflicts with existing files, use the wizard to resolve them as needed and click Finish.

     pic15

When finished, you have a new component with the same configuration as the template, added to your service composite. You can now edit the new component as needed.

pic16

In this blog we saw how templates for SOA service composites and components can save development time and increase quality in your Oracle SOA Suite 12c service designs. Templates can be stored in a local file system or in the file-based MDS for reuse.


Get Started with the new Oracle SOA Suite 12c: Essential Concepts training course. View all Oracle SOA Suite training from Oracle University.


About the Author:

joegreenwald

Joe Greenwald is a Principal Instructor for Oracle University. Joe has been teaching and consulting for Oracle for over 10 years and teaches many of the Fusion Middleware courses including Oracle SOA Suite 11g, Oracle WebCenter Content 11g and Oracle Fusion Middleware 11g. Joe’s passion is looking at how best to apply methodologies and tools to the benefit of the development organization.

Friday Jun 06, 2014

Oracle GoldenGate 12c New Features: Trail Encryption and Credentials with Oracle Wallet


By Randy Richeson, Senior Principal Instructor for Oracle University

Students often ask if GoldenGate supports trail encryption with the Oracle Wallet. Yes, it does now! GoldenGate has supported encryption with keygen and the ENCKEYS file for years. GoldenGate 12c now also supports encryption using the Oracle Wallet, which improves security and simplifies administration.


Two types of wallets can be configured in GoldenGate 12c:

  • The wallet that holds the master key, used for trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.
  • The wallet that holds the user ID and password, used for authentication, stored in the new 12c dircrd/cwallet.sso file (the credential store).

 

A wallet can be created using a ‘create wallet’ command. Once created, adding a master key to an existing wallet is easy using ‘open wallet’ and ‘add masterkey’ commands.

 

GGSCI (EDLVC3R27P0) 42> open wallet

Opened wallet at location 'dirwlt'.

GGSCI (EDLVC3R27P0) 43> add masterkey

Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.

 

Existing GUI wallet utilities, such as the Oracle Database “Oracle Wallet Manager”, do not work with this version of the wallet. The default Oracle Wallet location can be changed.
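For instance, the wallet and credential store locations can be overridden in the GLOBALS parameter file. This is only a sketch; the shared-disk paths shown here are assumptions, not part of the demo environment:

```
-- GLOBALS file entries (paths are illustrative)
WALLETLOCATION /shared/ogg/dirwlt
CREDENTIALSTORELOCATION /shared/ogg/dircrd
```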

 

GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/*

-rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso

GGSCI (EDLVC3R27P0) 45> info masterkey

Masterkey Name:                 OGG_DEFAULT_MASTERKEY

Creation Date:                  Fri May 30 05:24:04 2014

Version:        Creation Date:                  Status:

1               Fri May 30 05:24:04 2014        Current

 

The second wallet file stores the credential used to connect to a database without exposing the user ID or password in a parameter file or macro. Once configured, this file can be copied so that credentials are available to connect to the source or target database.
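For reference, the credential store and an alias can be created with GGSCI commands like these. This is a sketch: the user ID ggadmin@amer is an assumption, and the alias gguamer is chosen to match the UserIdAlias used in the parameter files below. When PASSWORD is omitted, GGSCI prompts for it so the password never appears on screen:

```
GGSCI> add credentialstore
GGSCI> alter credentialstore add user ggadmin@amer alias gguamer
GGSCI> info credentialstore
```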

 

GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd

GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/*

-rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso

 

The encryption wallet file can also be copied to the target machine so the replicat has access to the master key when decrypting any encrypted records in the trail. As with the ENCKEYS file, the master key wallet created on the source host must either be stored on a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, or NonStop platforms.

 

GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt

 

The new 12c UserIdAlias parameter is used to locate the credential in the wallet.

 

GGSCI (EDLVC3R27P0) 52> view param extwest

Extract extwest

Exttrail ./dirdat/ew

Useridalias gguamer

Table west.*;


The EncryptTrail parameter is used to encrypt the trail using the FIPS-approved Advanced Encryption Standard (AES) and the encryption key in the wallet. EncryptTrail can be used with a primary extract or a pump extract.


GGSCI (EDLVC3R27P0) 54> view param pwest

Extract pwest

Encrypttrail AES256

Rmthost easthost, mgrport 15001

Rmttrail ./dirdat/pe

Passthru

Table west.*;

Once the extracts are running, records can be encrypted using the wallet.

 

GGSCI (EDLVC3R27P0) 60> info extract *west

EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING

Checkpoint Lag       00:00:17 (updated 00:00:01 ago)

Process ID           24982

Log Read Checkpoint  Oracle Integrated Redo Logs

                     2014-05-30 05:25:53

                     SCN 0.0 (0)

EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING

Checkpoint Lag       24:02:32 (updated 00:00:05 ago)

Process ID           24983

Log Read Checkpoint  File ./dirdat/ew000004

                     2014-05-29 05:23:34.748949  RBA 1483

 

The ‘info masterkey’ command is used to confirm the wallet contains the key. The key is needed to decrypt the data read from the trail before the replicat applies changes to the target table.

 

GGSCI (EDLVC3R27P0) 41> open wallet

Opened wallet at location 'dirwlt'.

GGSCI (EDLVC3R27P0) 42> info masterkey

Masterkey Name:                 OGG_DEFAULT_MASTERKEY

Creation Date:                  Fri May 30 05:24:04 2014

Version:        Creation Date:                  Status:

1               Fri May 30 05:24:04 2014        Current

 

Once the replicat is running, records can be decrypted using the wallet.

 

GGSCI (EDLVC3R27P0) 44> info reast

REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING

INTEGRATED

Checkpoint Lag       00:00:00 (updated 00:00:02 ago)

Process ID           25057

Log Read Checkpoint  File ./dirdat/pe000004

                     2014-05-30 05:28:16.000000  RBA 1546

 

There is no need for the DecryptTrail parameter when using the wallet, unlike when using the ENCKEYS file.

 

GGSCI (EDLVC3R27P0) 45> view params reast

Replicat reast

AssumeTargetDefs

Discardfile ./dirrpt/reast.dsc, purge

UserIdAlias ggueuro

Map west.*, target east.*;

 

Once a record is committed in the source table, the encryption can be verified using logdump and then querying the target table.

 

SOURCE_AMER_SQL>insert into west.branch values (50, 80071);

1 row created.

SOURCE_AMER_SQL>commit;

Commit complete.

 

The following encrypted record can be found using logdump.


Logdump 40 >n

2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546

Name: WEST.BRANCH

After  Image:                                             Partition 4   G  s  

 0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1 

 554f e65a 5185 0257                               | UO.ZQ..W 

Bad compressed block, found length of  7075 (x1ba3), RBA 1546

  GGS tokens:

TokenID x52 'R' ORAROWID         Info x00  Length   20

 4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp.. 

TokenID x4c 'L' LOGCSN           Info x00  Length    7

 3231 3632 3934 33                                 | 2162943 

TokenID x36 '6' TRANID           Info x00  Length   10

 3130 2e31 372e 3135 3031                          | 10.17.1501 


The replicat automatically decrypts this record from the trail using the wallet and then inserts the row to the target table. This select verifies the row was committed in the target table and the data is not encrypted.


TARGET_EURO_SQL>select * from branch where branch_number=50;

BRANCH_NUMBER BRANCH_ZIP
------------- ----------
           50      80071

 

Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle Ed 1 class to learn much more about using GoldenGate 12c new features with the Oracle wallet, credentials, integrated extracts, integrated replicats, coordinated replicats, the Oracle Universal Installer, a multi-tenant database, and other features.

Explore Oracle University GoldenGate classes here, or send me an email at randy.richeson[at]oracle.com if you have other questions.

About the Author:

randy

Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010, and since 1997 has taught other technical curricula including GoldenGate Management Pack, GoldenGate Director, GoldenGate Veridata, Oracle Database, JD Edwards, PeopleSoft, and Oracle Application Server.

Monday Nov 18, 2013

Using the Container Database in Oracle Database 12c by Christopher Andrews


The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

Bundling Databases Inside One Container

In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.

Minimizing Storage

Storage is another area where consolidation pays off. Let’s take one-to-many databases and “plug” them into ONE designated container database. We can eliminate many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We also won’t need as many background processes, thus reducing the overhead cost of CPU resources.
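As a minimal sketch of this “plugging in” (the PDB name, admin user, and file paths are illustrative assumptions, not from a real environment), a new pluggable database can be created inside a CDB from the seed:

```sql
-- Connect to the root of the container database as a common user,
-- then create a new PDB by copying the seed (paths are illustrative).
CREATE PLUGGABLE DATABASE salespdb
  ADMIN USER salesadm IDENTIFIED BY "Pa55w0rd"
  FILE_NAME_CONVERT = ('/u01/oradata/pdbseed/', '/u01/oradata/salespdb/');

-- The new PDB starts in MOUNTED state; open it for use.
ALTER PLUGGABLE DATABASE salespdb OPEN;
```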

Improve Security Levels within Each Pluggable Database 

We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.
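This separation of duties shows up in how users are created. A sketch, with illustrative names (the c## prefix is required for common users in the default configuration):

```sql
-- In the root container: a common user, visible in every PDB.
CREATE USER c##cdb_admin IDENTIFIED BY "Pa55w0rd" CONTAINER = ALL;

-- Inside one pluggable database: a local user, visible only there.
ALTER SESSION SET CONTAINER = salespdb;
CREATE USER pdb_admin IDENTIFIED BY "Pa55w0rd" CONTAINER = CURRENT;
```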

 The bottom line: it's a good idea to at least consider using a CDB.


About the author:

Chris Andrews is a Canadian instructor at Oracle University who teaches the Server Technologies courses for both 11g and 12c. He has been with Oracle University since 1997 and started training customers back with Oracle 7. While now a Senior Principal Instructor with OU, Chris had the opportunity to work as a DBA for about 10 years before joining Oracle. His background and experiences as a DBA, database analyst, and developer are occasionally shared with customers in the classroom. His skill set includes the Database Administration workshops, Backup & Recovery, Performance Tuning, RAC, Data Guard, ASM, Clusterware, and the Exadata and Database Machine administration workshops. When not teaching, Chris enjoys aviation and flying gliders, underwater photography, and tennis.

Friday Oct 18, 2013

Oracle Database 12c New Partition Maintenance Features by Gwen Lazenby

One of my favourite new features in Oracle Database 12c is the ability to perform partition maintenance operations on multiple partitions. This means we can now add, drop, truncate and merge multiple partitions in one operation, and can split a single partition into more than two partitions also in just one command. This would certainly have made my life slightly easier had it been available when I administered a data warehouse at Oracle 9i.

To demonstrate this new functionality and syntax, I am going to create two tables, ORDERS and ORDER_ITEMS, which have a parent-child relationship. ORDERS is to be partitioned using range partitioning on the ORDER_DATE column, and ORDER_ITEMS is going to be partitioned using reference partitioning and its foreign key relationship with the ORDERS table. This form of partitioning was a new feature in 11g and means that any partition maintenance operations performed on the ORDERS table will also take place on the ORDER_ITEMS table.

First create the ORDERS table -

SQL> CREATE TABLE orders
      ( order_id NUMBER(12),
        order_date TIMESTAMP,
        order_mode VARCHAR2(8),
        customer_id NUMBER(6),
        order_status NUMBER(2),
        order_total NUMBER(8,2),
        sales_rep_id NUMBER(6),
        promotion_id NUMBER(6),
       CONSTRAINT orders_pk PRIMARY KEY(order_id)
     )
    PARTITION BY RANGE(order_date)
   (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
    PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
    PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
    PARTITION Q4_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
    );

Table created.

Now the ORDER_ITEMS table

SQL> CREATE TABLE order_items
     ( order_id NUMBER(12) NOT NULL,
       line_item_id NUMBER(3) NOT NULL,
       product_id NUMBER(6) NOT NULL,
       unit_price NUMBER(8,2),
       quantity NUMBER(8),
       CONSTRAINT order_items_fk
       FOREIGN KEY(order_id) REFERENCES orders(order_id) on delete cascade)  
       PARTITION BY REFERENCE(order_items_fk) tablespace example;

Table created.

Now look at DBA_TAB_PARTITIONS to get details of what partitions we have in the two tables –

SQL>  select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4

Just as an aside, it is also now possible in 12c to use interval partitioning on reference-partitioned tables. In 11g it was not possible to combine these two partitioning features.
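As a sketch of that combination (these table names are illustrative, not part of the demo above), an interval-partitioned parent with a reference-partitioned child might look like this:

```sql
-- Parent: range partitioning with a quarterly interval; new partitions
-- are created automatically as data arrives.
CREATE TABLE orders_i
  ( order_id   NUMBER(12) PRIMARY KEY,
    order_date DATE NOT NULL )
  PARTITION BY RANGE (order_date)
  INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
  (PARTITION p_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')));

-- Child: reference-partitioned on the foreign key; in 12c this works
-- even though the parent uses interval partitioning.
CREATE TABLE order_items_i
  ( order_id     NUMBER(12) NOT NULL,
    line_item_id NUMBER(3)  NOT NULL,
    CONSTRAINT order_items_i_fk
      FOREIGN KEY (order_id) REFERENCES orders_i(order_id) )
  PARTITION BY REFERENCE (order_items_i_fk);
```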

For our first example of the new 12c functionality, let us add all the partitions necessary for 2008 to the tables using one command. Notice that the partition specification part of the ADD command is identical in format to the partition specification part of the CREATE command shown above -

SQL> alter table orders add
PARTITION Q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY')),
PARTITION Q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY')),
PARTITION Q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY')),
PARTITION Q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'));

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 4 new partitions have been added to both tables –

SQL> select table_name,partition_name,
     partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and
     table_name  like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q1_2008                5 TIMESTAMP' 2008-04-01 00:00:00'
ORDER_ITEMS    Q1_2008                5
ORDERS         Q2_2008                6 TIMESTAMP' 2008-07-01 00:00:00'
ORDER_ITEMS    Q2_2008                6
ORDERS         Q3_2008                7 TIMESTAMP' 2008-10-01 00:00:00'
ORDER_ITEMS    Q3_2008                7
ORDERS         Q4_2008                8 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                8

Next, we can drop or truncate multiple partitions by giving a comma-separated list in the ALTER TABLE command. Note the use of the plural ‘partitions’ in the command, as opposed to the singular ‘partition’ prior to 12c -

SQL> alter table orders drop partitions Q3_2008,Q2_2008,Q1_2008;

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 3 partitions have been dropped from both tables –

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5
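Truncating multiple partitions uses the same plural syntax. As a sketch (not run as part of the demo above): because ORDER_ITEMS is reference-partitioned through an ON DELETE CASCADE foreign key, the 12c CASCADE option also truncates the matching child partitions:

```sql
-- Truncate two partitions in one statement; CASCADE also truncates
-- the corresponding ORDER_ITEMS partitions (illustrative example).
ALTER TABLE orders TRUNCATE PARTITIONS Q1_2007, Q2_2007 CASCADE;
```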

Now let us merge all the 2007 partitions together to form one single partition –

SQL> alter table orders merge partitions
   Q1_2007, Q2_2007, Q3_2007, Q4_2007
   into partition Y_2007;

Table altered.

TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Y_2007                 1 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Y_2007                 1
ORDERS         Q4_2008                2 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                2

Splitting partitions is slightly more involved. In the case of range partitioning, one of the new partitions must have no high value defined, and in list partitioning one of the new partitions must have no list of values defined. I call these the ‘everything else’ partitions; they will contain any rows from the original partition that are not contained in any of the other new partitions.

For example, let us split the Y_2007 partition back into 4 quarterly partitions –

SQL> alter table orders split partition Y_2007 into 
(PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
PARTITION Q4_2007);

Now look at DBA_TAB_PARTITIONS to get details of the new partitions –


TABLE_NAME    PARTITION_NAME   POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Partition Q4_2007 has a high value equal to the high value of the original Y_2007 partition, and so has inherited its upper boundary from the partition that was split.

For a list partitioning example, let us look at another table, SALES_PAR_LIST, which has 2 partitions, AMERICAS and EUROPE, and a partitioning key of country_name.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------  ---------------  -----------------------------
SALES_PAR_LIST  AMERICAS         'Argentina', 'Canada', 'Peru',
                                 'USA', 'Honduras', 'Brazil', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

Now split the Americas partition into 3 partitions –

SQL> alter table sales_par_list split partition americas into
   (partition south_america values ('Argentina','Peru','Brazil'),
   partition north_america values('Canada','USA'),
   partition central_america);

Table altered.

Note that no list of values was given for the ‘Central America’ partition. However, it should have inherited any values in the original ‘Americas’ partition that were not assigned to either the ‘North America’ or ‘South America’ partitions. We can confirm this by looking at the DBA_TAB_PARTITIONS view.

SQL> select table_name,partition_name,
   high_value
   from dba_tab_partitions
   where table_owner='SH' and
   table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------- ---------------  --------------------------------
SALES_PAR_LIST  SOUTH_AMERICA    'Argentina', 'Peru', 'Brazil'
SALES_PAR_LIST  NORTH_AMERICA    'Canada', 'USA'
SALES_PAR_LIST  CENTRAL_AMERICA  'Honduras', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

In conclusion, I hope that DBAs whose work involves maintaining partitions will find these operations a bit more straightforward to carry out once they have upgraded to Oracle Database 12c.

Gwen Lazenby

Gwen Lazenby is a Principal Training Consultant at Oracle.

She is part of Oracle University's Core Technology delivery team based in the UK, teaching Database Administration and Linux courses. Her specialist topics include using Oracle Partitioning and Parallelism in Data Warehouse environments, as well as Oracle Spatial and RMAN.

Tuesday May 07, 2013

Create a high availability 12C weblogic clustering environment with Traffic Director 11gR1 by Eugene Simos

Oracle Traffic Director is one of the latest load balancing products released by Oracle.
It is a fast, reliable, scalable, and easily manageable solution for HTTP, HTTPS, and TCP traffic to backend application and HTTP servers.

I have used the latest versions of both Traffic Director and WebLogic 12c to set up a simple clustered session replication scenario on my 64-bit Linux VirtualBox sandbox.

For this scenario, Traffic Director is configured as a front end to a backend WebLogic 12c cluster.

A specific feature, dynamic discovery, lets Traffic Director discover on the fly new clustered WebLogic nodes associated with its initial configuration, so I will be able to join another WLS instance (dotted line) to my initial 3-node WLS cluster, with full HTTP session replication capabilities.

To test session replication, I used a sample application delivered with the WLS 12c installation (I will detail this application in a later post :) ), which I called Session. I will test the failover features of WLS 12c through Traffic Director with this Session application deployed on my cluster!

The binary distribution of Traffic Director can be downloaded from here:

Once the distribution was downloaded to my Linux box, I started the normal FMW installation workflow:

After passing the normal checks of the Linux system components, I chose an empty Oracle Traffic Director home for the software installation:

I saved the response file (I might use the silent installer next time!)

Then, after saving the installation details, I started to configure my Traffic Director instances as follows:
1) Create one admin instance (I used just the default settings) and a default user, admin/welcome1

2) Start the instance from the installation directory:

3) I used the Traffic Director admin interface, https://localhost:8989, and with my credentials (from step 1: admin/welcome1) I got the first admin panel

Once I am logged into Traffic Director, I get an initial welcome screen, and I have to create my own configuration matching my WLS 12c cluster:

My WebLogic 12c cluster was configured initially with 3 nodes, and I will add one more managed instance later. I have to create a specific Traffic Director configuration to route traffic to my backend servers.

Through the Traffic Director configuration wizard, I will create a node on my local Linux machine (port 8080), which will push HTTP requests to my WLS 12c clustered servers.
The Traffic Director node will use the dynamic discovery feature so that it can also push HTTP requests to other WLS 12c clustered instances that join the cluster later:

Then I started my 3 WLS 12c clustered instances, on which I had deployed my test Session replication application, and I started to test the session replication scenario through the Traffic Director node:

As you can see, my session has been created on the M1 node of my cluster.
I created some data in this session, then stopped the M1 node (node failure simulation) and resubmitted the request to the WLS 12c cluster.

As you can see, the session failover feature of WLS 12c worked perfectly, and my session is now on the M2 clustered node with the same data as on the failed M1 node!

For dynamic discovery, I created one more managed server in the cluster (M4), then stopped the M2 server and retried the Session application URL through the Traffic Director node!
As you can see, Traffic Director routed the request to the new clustered server, with all the session replication data.

In a future post, we will deploy the Session application in a WebLogic cluster and use it as a test application for the session replication features of WebLogic.

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter Content, BPM, SOA, Identity/Security, GoldenGate, Virtualization, and Unified Communications Suite throughout the EMEA region.
