Friday Jan 30, 2015

Managing Complex Multimedia Sessions, more than just raising alarms (SDM)
by Eva Pavon

Oracle Communications Session Delivery Manager provides an advanced framework for provisioning, managing, and monitoring session delivery networks. Session Delivery Manager has an advanced architecture, called Service Oriented Architecture (SOA), which provides such features as increased scalability using load balancing across clustered servers, the ability to load Session Delivery Manager configurations on demand, and a rich thin-client interface. You can use the enhanced user interface to access such services as Element Manager for configuring Oracle Communications SBCs or Route Manager for configuring LRTs on Oracle Communications SBCs.
Session Delivery Manager also includes a dashboard summary view of key performance indicators for the managed devices and provides three different views into Oracle Communications SBC configuration elements: Default, List, and ACLI.

Element Manager 2.0
Oracle Communications Session Delivery Manager’s Element Manager application allows you to load, configure, and manage devices (Net-Net SBCs). This release of Oracle Communications Session Delivery Manager can manage the following devices:

  • C-Series—The Net-Net 4000 series contains two systems: the Net-Net 4250 and Net-Net 4500. The Net-Net 3000 series contains two systems: the Net-Net 3800 (Sku 3810) and Net-Net 3820.
  • D-Series—Also known as the Net-Net 9000 series, it contains one system: the Net-Net 9200.
  • E-Series—Also known as the Net-Net 2000 series, it contains one system: the Net-Net 2600.


The Session Delivery Manager Northbound Fault Management allows you to configure destination receivers to receive forwarded traps in either SDM or ITU X.733 standard formats. You can specify selected traps on devices using the ITU X.733 format in the Add/Edit trap receiver dialog. A maximum of 10 trap receivers may be configured at once, regardless of format.
The SDM server receives the alarms from SBCs and forwards them to the configured trap receivers.

[Diagram: SBC traps forwarded to the SDM server and sorted into events and alarms]

In this diagram, the SBC sends all traps to the SDM server, which sorts them into events and alarms. Events are the aggregated list of all such messages while alarms record only the current state or latest event. Imagine the SBC first sends a trap indicating the fan is operating at 10% and later sends a trap indicating the fan is operating at 50%. In SDM, the alarms tab will display the trap indicating the fan is operating at 50%, whereas the events tab will display both traps.
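To make the distinction concrete, here is a minimal sketch in Python (an illustration only, not SDM code; the trap fields and names are invented for the example) of how a stream of traps can be kept in full as events while the alarm view retains only the latest state per device and alarm type:

# Hypothetical trap records: (device, alarm type, detail)
traps = [
    ("sbc-01", "fanSpeed", "fan operating at 10%"),
    ("sbc-01", "fanSpeed", "fan operating at 50%"),
]

events = []    # every trap is appended to the event history
alarms = {}    # only the latest state is kept per (device, alarm type)

for device, alarm_type, detail in traps:
    events.append((device, alarm_type, detail))
    alarms[(device, alarm_type)] = detail

print(len(events))                      # 2 -> both traps appear on the events tab
print(alarms[("sbc-01", "fanSpeed")])   # 'fan operating at 50%' -> the alarms tab shows the latest state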

New Features / Applications in Session Delivery Manager (SDM) 7.4
Report Manager 2.0

Beginning with SDM 7.4, the Report Manager uses the Oracle BI Publisher to manage and render reports. BI Publisher is a complete reporting solution to author, manage, and deliver reports.
The previous reporting tool in NNC 7.3 has been obsoleted. Unlike its predecessor, BI Publisher allows users to create their own reports and dashboards as well as schedule and print these reports in multiple formats. Users can also monitor their system with the canned reports.
If you've been using a previous version of Report Manager, pay close attention to the Installation Guide, especially for the additional options about reporting database migration.

Offline Configuration
Offline configurations are configurations that can be applied to multiple devices so that they share the same configuration data. An offline configuration is created by duplicating an existing managed device configuration or by selecting a schema from a supported software model.
Data variables allow network administrators to target elements that require device-specific information when duplicating an existing configuration. All data variables must be given new values before the configuration can be pushed to a device.
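As a rough sketch of the idea (illustrative only, with invented element names and placeholder syntax, not SDM's actual mechanism), the shared offline configuration behaves like a template whose device-specific elements are named variables that must all be given values before the push:

import json

# Shared offline configuration with device-specific placeholders (names are illustrative).
offline_config = {
    "hostname": "${HOSTNAME}",
    "sip-interface": {"ip-address": "${SIP_IP}", "port": 5060},
}

def resolve(config, data_variables):
    """Substitute data variables; refuse the push if any value is missing."""
    text = json.dumps(config)
    for name, value in data_variables.items():
        text = text.replace("${%s}" % name, str(value))
    if "${" in text:
        raise ValueError("all data variables need values before the configuration can be pushed")
    return json.loads(text)

# Per-device values supplied by the administrator.
device_a = resolve(offline_config, {"HOSTNAME": "sbc-a", "SIP_IP": "10.0.0.11"})
device_b = resolve(offline_config, {"HOSTNAME": "sbc-b", "SIP_IP": "10.0.0.12"})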

Device Clustering
A device cluster is a collection of devices sharing the same hardware, software, and configuration. By specifying an offline configuration and adding devices to a device cluster, SEM allows you to synchronize configuration changes across all members of the cluster.
Members of a device cluster must share the same software version and device credentials in order to share the same configuration. Device-specific parameters can be targeted as data variables, so that different values can be set for each member.

Application Orchestrator
The Oracle Communications Application Orchestrator (AO) manages the virtual network function (VNF) life-cycle and provides a central location for integrating and managing multiple virtualization infrastructure management (VIM) systems.
AO provides a virtualization management framework for provisioning commands to virtual hosts while collecting performance information on actively deployed VM instances.
Elasticity mode enables AO to respond to changes in network capacity requirements by calculating thresholds on key performance indicators. AO automatically provisions virtual devices to meet demand. AO also allows for manual setup and deployment of individual virtual devices.
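Conceptually, elasticity mode boils down to comparing key performance indicators against scale-out and scale-in thresholds. The short Python sketch below (an illustration with invented KPI names and threshold values, not AO's actual policy engine) captures that decision logic:

def scaling_decision(kpis, scale_out=0.80, scale_in=0.30):
    """Return 'scale-out', 'scale-in' or 'hold' for a VM group, given KPI utilisation values (0..1)."""
    utilisation = max(kpis.values())      # the most loaded indicator drives the decision
    if utilisation >= scale_out:
        return "scale-out"                # provision another VM instance to meet demand
    if utilisation <= scale_in:
        return "scale-in"                 # retire an instance to free capacity
    return "hold"

print(scaling_decision({"cpu": 0.85, "sessions": 0.60}))   # scale-out
print(scaling_decision({"cpu": 0.25, "sessions": 0.10}))   # scale-in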


Example Configuration
A VM group is a logical grouping of VMs that share configuration and scaling thresholds. An offline configuration template is required in order to configure a VM group. See the example below:

[Figure: example VM group configuration based on an offline configuration template]

VM instances are single or HA-paired VNF deployments assigned to a specific VM group. VM instances cannot be deployed until the device-specific data variables are configured for each instance. The Configure Device wizard walks you through the steps of configuring device-specific boot parameters and data variables.

More information:
If you would like to find out more about Session Delivery Manager, I advise taking the following soon-to-be-released Oracle University course: Oracle Session Delivery Manager Operation and Configuration Ed 1 LVC (3 days).

See all of the available training for Acme Packet products.
To view scheduled Acme Packet Training events in Europe, the Middle East and Africa please click here.
Oracle University provides courses in traditional classroom and live virtual class formats. Custom and private training events can be arranged to suit your exact training requirements.

If you have any questions about the above or require training advice, please contact me: eva.pavon.martin@oracle.com.


Eva Pavon

Eva Pavon, Telecommunication Engineer from the Polytechnic University of Madrid (UPM), is a detail-oriented Technical Instructor/Developer with international experience delivering training in English, French, and Spanish. With 18 years of experience in telecommunications, she started as a developer at Telefónica R&D, worked at a consultancy firm, and joined the Lucent Technologies Training Department in 2000, where she taught for 10 years (SDH/WDM, Data/IP and security in IP networks) with excellent customer feedback and recognition. After a period as a Product Manager at Huawei and an Account Manager at McAfee, she joined Acme Packet (acquired by Oracle in 2013) to continue her passion for teaching in a great company with the best training team. After 3 years in the company she can deliver classes for SBC, CAB-C/D, TS-C/D, ADV, SEC, SDM, OCOM, ISR, CSM/USM and constantly receives outstanding feedback on her deliveries.

Wednesday Dec 10, 2014

Audit History in Oracle Order Management. The challenge.
by David Barnacle

Changes to confirmed sales orders because of business issues, such as low stock availability or pricing amendments, can cause customer concerns or complaints and ultimately lead to financial loss for the company.

So there may be a requirement to record what attributes of an order were changed, when and by whom.

To address this issue, the Order Management team introduced a rules-based approach, simply called Audit History.

This allows implementers, by defining simple rule hierarchies, to specify which attributes (e.g. shipment date, order quantity, payment terms) need to be tracked for changes, by whom and when, and, where necessary via business events, which parties to inform of those changes.

These changes are recorded in Order Management audit tables, which are loaded on a regular basis by running the concurrent program Audit History Consolidator. The results can then be reviewed via the Audit History report or online via the View Audit History form.
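To make the idea concrete, here is a small sketch (a generic illustration, not Oracle Order Management code; the attribute names and rule shape are assumed) of how a rule that tracks payment terms on booked orders could record who changed what, and when:

from datetime import datetime, timezone

# Illustrative rule: track changes to payment terms once the order is booked.
tracked_attributes = {"payment_terms"}

def audit_changes(before, after, user, order_booked):
    """Return audit rows for tracked attributes that changed on a booked order."""
    rows = []
    if not order_booked:
        return rows
    for attr in tracked_attributes:
        if before.get(attr) != after.get(attr):
            rows.append({
                "attribute": attr,
                "old_value": before.get(attr),
                "new_value": after.get(attr),
                "changed_by": user,
                "changed_at": datetime.now(timezone.utc).isoformat(),
            })
    return rows

print(audit_changes({"payment_terms": "NET 30"}, {"payment_terms": "NET 60"},
                    user="dbarnacle", order_booked=True))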

Below is a screen shot showing a simple rule that would record changes to payment terms once the order had been booked.

[Screenshot: Audit History rule recording changes to payment terms after booking]

Conclusion

These rules can be set up at any time to provide insight into changes made to any of the order entry fields.

If you want to stop such changes from happening on a more permanent basis, there are processing constraint rules designed for just that requirement. Once again, these constraints use a rules-based approach to stop an attribute from being changed by either the system or the end user at key points in the order processing cycle.
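As a rough sketch of that idea (illustrative only; the attribute names and rule shape are assumed, and this is not the Oracle Order Management constraints engine), a processing constraint can be thought of as a predicate evaluated before an attribute change is allowed:

# Illustrative constraint: payment terms may not be changed once the order is booked.
constraints = [
    {"attribute": "payment_terms", "blocked_when": lambda order: order["status"] == "BOOKED"},
]

def change_allowed(order, attribute):
    """Return False if any constraint blocks changing this attribute in the order's current state."""
    return not any(c["attribute"] == attribute and c["blocked_when"](order) for c in constraints)

print(change_allowed({"status": "ENTERED"}, "payment_terms"))   # True  - change permitted
print(change_allowed({"status": "BOOKED"}, "payment_terms"))    # False - change blocked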

David Barnacle

David Barnacle joined Oracle University in 2001, after being the lead implementer of a very successful European rollout of the e-Business Suite. He currently trains on a wide family of applications, specializing in the supply chain and financial areas. He enjoys meeting students and likes to learn how each customer configures the software suite to meet their own challenging business objectives.

Friday Nov 21, 2014

Are your projects missing milestones or running over budget? Practice these simple risk management steps to avoid these pitfalls
by Marius van Ettinger

The fundamentals of project management are to complete the project on time, within budget and according to the prescribed standards and quality. Projects are often completed late due to a lack of proper risk analysis and risk management.

Projects should always employ a problem-solving technique to approximate the probability of certain outcomes. By using Monte Carlo simulation, you can conduct these trials, known as simulations, with random variables. These simulations will furnish you with possible outcomes and their probabilities and will enable you to account for risks.
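As a simple illustration (a minimal sketch with made-up task durations, not Primavera Risk Analysis itself), the following Python Monte Carlo simulation samples uncertain task durations many times and reads the P50, P80 and P100 completion estimates off the resulting distribution:

import random

def simulate_schedule(tasks, runs=10_000):
    """tasks: (optimistic, most likely, pessimistic) duration estimates in days for sequential tasks."""
    totals = sorted(sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
                    for _ in range(runs))
    def percentile(p):                    # total duration not exceeded in p% of the runs
        return totals[min(int(len(totals) * p / 100), len(totals) - 1)]
    return {"P50": percentile(50), "P80": percentile(80), "P100": totals[-1]}  # P100 = worst observed run

# Three sequential tasks with (optimistic, most likely, pessimistic) durations in days.
print(simulate_schedule([(10, 12, 20), (5, 8, 15), (20, 25, 40)]))

The gap between the P80 and P50 dates produced this way is the float discussed below: commit externally to P80 while driving the team and contractors to P50.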


One key element is to build a risk-impacted baseline schedule to indicate the project's P100, P80 and P50 schedules. This should be done throughout all the project phases, not only at the execution phase.

  • P100 schedule refers to a 100% probability of achieving the completion date
  • P80 schedule refers to an 80% probability of achieving the completion date
  • P50 schedule refers to a 50% probability of achieving the completion date
Figure 1

An effective way to ensure that key milestones are achieved is to base your schedule and key milestones on the P80 schedule while tracking progress and reporting dates to the project execution team and contractors on the P50 schedule. The variance between P80 and P50 will be your float and can be managed if there are any slippages on the project.

Risk can also be defined as the intentional interaction with uncertainty. Having a controlled risk register and risk matrix provides reporting capabilities to facilitate mitigation decision-making. Having mitigation plans in place is essential; however, analyzing the cost of these plans is crucial, as they might end up being costlier to implement than simply accepting the risk.
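One way to make that cost comparison concrete (a simplified expected-value sketch with illustrative numbers, not a prescribed method) is to weigh the mitigation cost plus the residual expected loss against the expected loss of simply accepting the risk:

def expected_loss(probability, impact):
    return probability * impact

# Illustrative risk: a 30% chance of a delay costing 200,000.
accept   = expected_loss(0.30, 200_000)                 # 60,000 if the risk is simply accepted
mitigate = 45_000 + expected_loss(0.10, 200_000)        # plan cost + residual risk = 65,000

print("accept the risk" if accept < mitigate else "mitigate")   # here mitigation costs more than it saves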

Primavera Risk Analysis helps you to run these simulations for risk uncertainties and risk events on your risk log. All risks can also be fully managed, and mitigation plans’ impact can be investigated to ensure that you manage and mitigate the correct risks.

Figure 4

Cost control, minimizing project impacts, meeting key milestone dates and preventing project repairs and failures are just some of the benefits you will gain by managing risks early in your project. Make risk management part of your day-to-day operations, include it in your project meetings and communicate all risk factors to all stakeholders. Identify risks early in your project by engaging with the experienced staff on your project and plan for the risks that often creep in on similar projects. Prioritize and analyze your risks; some risks will have greater impact than others and will need to be accounted for in various phases of your project. Keep a detailed risk log that contains risk descriptions, clarifies ownership, and enables you to carry out some basic analyses of causes and effects.

Continuously measure the effects of your risk management efforts and continue to implement improvements. By using methods like the ones mentioned above and running Monte Carlo simulations often, you will be in a position to deal proactively with risks and avoid big impacts and slippages on your project.


Marius Van Ettinger

Marius van Ettinger studied Industrial Engineering at the University of Pretoria, completing his BSc degree in 2002, and started working as a process engineer in the manufacturing sector. After a couple of years he began managing the improvement projects for his company, and thereafter he decided to focus on project planning for large construction projects while completing his Honours degree in Industrial Engineering. He worked on the Transnet expansion projects and gained experience in the rail, infrastructure and harbour areas.
Marius then moved to Dubai to work as a project controls manager on large development projects. He moved back to South Africa in 2008 and has since been consulting on large projects in the project controls area, focusing on risk and schedule analysis.
Find out more: www.amrec.co.za and www.amrectraining.co.za

Thursday Oct 30, 2014

Introduction to Oracle BI Applications with ODI

By Laura Garza, Principal Instructor for Oracle University



Often when teaching Oracle Business Intelligence Applications (OBIA) with Oracle Data Integrator (ODI), I get many questions concerning: 1) The different OBIA repositories, 2) Selecting an OBIA offering and Functional Area and 3) ODI terminology and how it relates to OBIA.

The Different OBIA Repositories:

- The Oracle BI Applications Oracle Data Integrator repository contains the OBIA-specific prebuilt ETL logic.

- The Oracle BI Applications Components repository contains the data used by the Configuration Manager and Functional Setup Manager. It also contains the load plan definitions, setup objects and domain mappings, to name a few.

How do we select the OBIA offering and Functional Area?

The Configuration Manager is a web application that allows us to configure the offering (for example, Oracle Financial Analytics) that we have purchased. We also enable the functional areas (for example, Accounts Payable, General Ledger) that we wish to implement via the Functional Setup Manager. Granted, I am only naming a few of the things that can be done within the Configuration Manager and Functional Setup Manager.

Here we see a screen shot on how to enable an offering and functional area to be implemented via the Configuration Manager.

Once you have finished the OBIA installation and identified which offering and functional area to implement, you will log into ODI Studio. You will then see the following Prebuilt OBIA ETL Logic.

ODI terminology and how it relates to OBIA.

If you have used OBIA with Informatica, you will see some familiar naming conventions within the prebuilt Oracle BI Applications ETL logic. What we need to be aware of is that ODI has different terminology and concepts than Informatica.

Within the Designer navigator tab of ODI Studio is a child tab called Projects. A Project is how we organize components such as interfaces, packages or procedures. When you install OBIA, a prebuilt project will be created called BI Apps Project. When you expand the BI Apps Project, additional folder nodes will be displayed. A folder called Mappings will now be visible. The Mapping folder is where you will begin to see the Prebuilt Adaptor folders. Notice that the Adaptor folders have the same naming convention that was used within OBIA with Informatica. Oracle did this to help us in the transition between the two products.

Many times, students wonder about the location of the SDE or SIL mapping definition. When we expand the prebuilt adaptor folder, SDE_ORA11510_Adapter, a child folder is displayed with the naming convention of the dimension or fact table we are trying to load, for example SDE_ORA_APAgingBucketsDimension. Right-click to expand the child folder and three additional objects are displayed: Packages, Interfaces and Procedures.

An Interface consists of a set of rules that define the loading of a target datastore.

A Package is a workflow made up of a sequence of steps organized into an execution diagram. A Package references other components from a project such as an interface.

The procedure node contains scripts that can be used to customize database objects.

Also within the Designer navigator tab is a tab called Models. A model is a description of a set of datastores, corresponding to a group of tabular data structures that are stored in a data server. A folder called Oracle BI Application contains a model folder named Oracle BI Applications. The Oracle BI Applications model folder has child folders which are named based on the table type, for example Dimension, Fact, or Aggregate. The table structure itself is known as a datastore.

Finally, we have the Load Plans and Scenarios. Load Plans and Scenarios are found within the Designer navigator tab of ODI studio. In order for source components (interface, package, procedures) to be placed into production, a scenario must be generated. Below we will see the Prebuilt OBIA Load Plans and Scenarios. Keep in mind that we generate and execute the OBIA Load Plans within the Configuration Manager.

In this blog, we discussed the OBIA repositories along with an introduction to the Configuration Manager and Functional Setup Manager. We also covered some of the ODI terminology and how it relates to OBIA. I hope this provided you with a useful introduction to OBIA with ODI. I look forward to having you attend one of our Oracle BI classes.

Get Started with the new Oracle BI Applications 11g: Implementation using ODI training course. View all Oracle Business Intelligence training from Oracle University.


About the Author:

Laura Garza has been teaching for Oracle for over 15 years. She teaches both the Language and BI classes. For the past 8 years her primary focus has been Oracle BI.

Friday Oct 24, 2014

Oracle 12c High Availability New Features: Table Recovery in a GoldenGate Environment

By Randy Richeson, Senior Principal Oracle University Instructor



Students often ask if a table can be recovered to a point in time and whether GoldenGate can capture this recovery. Oracle 12c enables you to recover one or more tables or table partitions to a specified point in time without affecting the remaining database objects. This new RMAN functionality reduces the recovery scope, and possibly the recovery time and disk space requirements, when compared with database point-in-time recovery, tablespace point-in-time recovery, and the various flashback features introduced prior to 12c.

When combined with GoldenGate 12c configured to capture both DML and DDL from a source database, the table recovery can be replicated to a target database, effectively recovering two tables with one RMAN command. The following demo shows how table recovery can be used to recover a table that is part of a replication stream.

This GoldenGate integrated extract is configured to process DML and DDL for the source WEST schema tables.


GGSCI (eddnr2p0) 2> view param extwest

extract extwest

useridalias gguamer

statoptions resetreportstats

report at 00:01

reportrollover at 00:01

reportcount every 60 seconds, rate

reportcount every 1000 records

exttrail ./dirdat/ew

ddl include mapped objname west.*;

ddloptions addtrandata, report

table west.*;


There are 1063 rows in the source AMER database WEST.ACCOUNT table.


AMER_WEST_SQL>select name from v$database;

NAME

---------

AMER

AMER_WEST_SQL>select count(*) from account;

COUNT(*)

----------

1063

AMER_WEST_SQL>select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 7000

8000 8000

9000 9000


This GoldenGate integrated replicat is configured to apply DML and DDL to the target EURO database EAST.ACCOUNT table.


GGSCI (eddnr2p0) 2> view param reast

replicat reast

assumetargetdefs

discardfile ./dirrpt/reast.dsc, purge

useridalias ggueuro

ddl include mapped

map west.*, target east.*;


The target EAST.ACCOUNT table has 1063 rows matching the source WEST.ACCOUNT table.


EURO_DATABASE_SQL>select name from v$database;

NAME

---------

EURO

EURO_DATABASE_SQL>select count(*) from account;

COUNT(*)

----------

1063

EURO_DATABASE_SQL>select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 7000

8000 8000

9000 9000


A whole RMAN backup is made of the source 'AMER' control file and datafiles.


[oracle@eddnr2p0 gg_amer]$rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Fri Oct 3 17:50:04 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

connected to target database: AMER (DBID=1282626787)

RMAN> backup database;

Starting backup at 03-OCT-14

…….

Finished Control File and SPFILE Autobackup at 03-OCT-14


The current scn is queried so that it can be used in this recovery.


RMAN> select timestamp_to_scn(current_timestamp) from v$database;

TIMESTAMP_TO_SCN(CURRENT_TIMESTAMP)

-----------------------------------

2527804


Four transactions occur with negative account balances that logically corrupt the source WEST.ACCOUNT table.


AMER_WEST_SQL> update account set account_balance = -7000 where account_number = 7000;

1 row updated.

AMER_WEST_SQL> commit;

Commit complete.

AMER_WEST_SQL> update account set account_balance = -8000 where account_number = 8000;

1 row updated.

AMER_WEST_SQL> commit;

Commit complete.

AMER_WEST_SQL> update account set account_balance = -9000 where account_number = 9000;

1 row updated.

AMER_WEST_SQL> commit;

Commit complete.

AMER_WEST_SQL> insert into account values (9100,-9100);

1 row created.

AMER_WEST_SQL> commit;

Commit complete.

AMER_WEST_SQL> select count(*) from account;

COUNT(*)

----------

1064

AMER_WEST_SQL> select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 -7000

8000 -8000

9000 -9000

9100 -9100


The extract captures the changes from the source WEST.ACCOUNT table and the replicat applies them to the target EAST.ACCOUNT table resulting in negative balances in both tables.


EURO_EAST_SQL> select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 -7000

8000 -8000

9000 -9000

9100 -9100


The new 12c recover table command recovers the source table to the time before the updates that corrupted the data, using the specified scn and the backups that were previously created. In this example, 'UNTIL SCN' is used; as an alternative, 'UNTIL TIME' or 'UNTIL SEQUENCE' can be used. RMAN determines the backup based on the scn and then creates an auxiliary instance to recover the table. RMAN then creates a Data Pump export dump file that contains the recovered table and imports the recovered table into the database. The recovered table is renamed to WEST.ACCOUNT_R so that the recovered data can be compared against the original data in the WEST.ACCOUNT table. During the recovery, the source AMER and target EURO databases remain open.


RMAN> recover table west.account until scn 2527804 auxiliary destination '/u01/app/oracle/backup' remap table 'WEST'.'ACCOUNT':'ACCOUNT_R';

Starting recover at 03-OCT-14

….

Finished recover at 03-OCT-14


After the table recovery, the rows are counted and the data compared between the original and newly recovered table in the source database.


RMAN> select count(*) from west.account;

COUNT(*)

----------

1064

RMAN> select count(*) from west.account_r;

COUNT(*)

----------

1063

RMAN> select * from west.account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 -7000

8000 -8000

9000 -9000

9100 -9100

 

RMAN> select * from west.account_r where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 7000

8000 8000

9000 9000


The pump extract and integrated replicat RBAs advance as the recovery changes are captured from the source database and applied to the target database.


GGSCI (eddnr2p0) 3> info *west

EXTRACT EXTWEST Last Started 2014-10-03 16:52 Status RUNNING

Checkpoint Lag 00:00:06 (updated 00:00:08 ago)

Process ID 21186

Log Read Checkpoint Oracle Integrated Redo Logs

2014-10-03 18:17:47

SCN 0.2533283 (2533283)

EXTRACT PWEST Last Started 2014-10-03 12:26 Status RUNNING

Checkpoint Lag 00:00:00 (updated 00:00:04 ago)

Process ID 14346

Log Read Checkpoint File ./dirdat/ew000009

2014-10-03 18:13:33.000000 RBA 289288

GGSCI (eddnr2p0) 1> info reast

REPLICAT REAST Last Started 2014-10-03 16:51 Status RUNNING

INTEGRATED

Checkpoint Lag 00:00:00 (updated 00:00:08 ago)

Process ID 21155

Log Read Checkpoint File ./dirdat/pe000005

2014-10-03 18:03:43.896883 RBA 171400


The target EAST.ACCOUNT table remains at the original row count while the EAST.ACCOUNT_R is created with the recovered rows.


EURO_EAST_SQL> select count(*) from account;

COUNT(*)

----------

1064

EURO_EAST_SQL> select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 -7000

8000 -8000

9000 -9000

9100 -9100

EURO_EAST_SQL> select count(*) from account_r;

COUNT(*)

----------

1063

EURO_EAST_SQL> select * from account_r where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 7000

8000 8000

9000 9000


Supplemental logging on the new WEST.ACCOUNT_R table needs to be enabled for subsequent changes to be applied by the replicat to the target database.


GGSCI (eddnr2p0) 2> dblogin useridalias gguamer

Successfully logged into database.

GGSCI (eddnr2p0) 3> info trandata west.account*

Logging of supplemental redo log data is enabled for table WEST.ACCOUNT.

Columns supplementally logged for table WEST.ACCOUNT: ACCOUNT_NUMBER.

Logging of supplemental redo log data is disabled for table WEST.ACCOUNT_R.

Logging of supplemental redo log data is enabled for table WEST.ACCOUNT_TRANS.

Columns supplementally logged for table WEST.ACCOUNT_TRANS: TRANS_NUMBER, ACCOUNT_NUMBER, ACCOUNT_TRANS_TS.


After the corrupted WEST.ACCOUNT table is dropped, the extract captures and the replicat applies the drop operation.


AMER_WEST_SQL> drop table account;

Table dropped.

EURO_EAST_SQL> select * from account;

select * from account

*

ERROR at line 1:

ORA-00942: table or view does not exist

After the recovered WEST.ACCOUNT_R is renamed, the extract captures the rename operation and the replicat applies it.

AMER_WEST_SQL> alter table account_r rename to account;

Table altered.


The RMAN table recovery and the resulting rename operation leave the WEST.ACCOUNT table with no unique key; one may be added, since primary keys are recommended. Supplemental logging is enabled to allow the replicat to deliver updates and deletes.


GGSCI (eddnr2p0) 5> info trandata west.account

Logging of supplemental redo log data is disabled for table WEST.ACCOUNT.

GGSCI (eddnr2p0) 6> add trandata west.account

2014-10-03 18:26:47 WARNING OGG-06439 No unique key is defined for table ACCOUNT. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.

Logging of supplemental redo data enabled for table WEST.ACCOUNT.

TRANDATA for scheduling columns has been added on table 'WEST.ACCOUNT'.

Rows are counted to verify that the account table has the correct count with the recovered data.

EURO_EAST_SQL> select count(*) from account;

COUNT(*)

----------

1063

EURO_EAST_SQL> select count(*) from account where account_number > 6000;

COUNT(*)

----------

3

EURO_EAST_SQL> select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 7000

8000 8000

9000 9000

AMER_WEST_SQL> select * from account where account_number > 6000;

ACCOUNT_NUMBER ACCOUNT_BALANCE

-------------- ---------------

7000 7000

8000 8000

9000 9000


RMAN 12c table recovery, combined with GoldenGate 12c configured to capture DML and DDL from a source database, provides a powerful combination that allows a DBA to quickly recover two tables with one RMAN command.

Book a seat now in an upcoming Oracle Database 12c: New Features for Administrators class to learn much more about using Oracle 12c new features including Multitenant features, Enterprise Manager 12c, Heat Map and Automatic Data Optimization, In-Database Archiving and Temporal Validity, Auditing, privileges, Data Redaction, RMAN, Real-Time Database Monitoring, Schema and Data Change Management, SQL Tuning Enhancements, Emergency Monitoring, ADR, In-Memory Caching, Resource Manager, Partitioning, and JSON.

Explore Oracle University Database classes here, or send me an email at randy.richeson@oracle.com if you have other questions.


About the Author:

randy

Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g - 12c) and GoldenGate Certified Implementation Specialist (10g - 11g). He has taught Oracle Database technology since 1997, along with other technical curricula including GoldenGate Software, GoldenGate Management Pack, GoldenGate Director, GoldenGate Veridata, JD Edwards, PeopleSoft, and Oracle Application Server.

Thursday Oct 23, 2014

Resolving Enterprise Routing & Dialing Plan complexity with Enterprise Communications Broker (ECB)
By Gabriel Ostolaza

If you rely on PBXs to determine a Dial Plan and a routing plan, then as PBXs are added to the network, not only do you need to program the new PBXs, but a significant programming effort is also required on all the existing systems. By sending all calls to an Enterprise Communications Broker (ECB) for centralized processing, the addition of new PBXs requires only a simple configuration for that new PBX and a small modification to the existing ECB configuration.

In addition, the added PBXs may be from different manufacturers, requiring new skills and further complicating the task.

It doesn’t matter whether the new PBXs are geographically local or remote.

Here is a challenge too often presented. To accomplish full, unblocked communication between disparate networks, a mesh system seems the only answer. Such a mesh expands on an N-squared basis, leading to a completely unmanageable, complex network. This type of network is unreliable, has many points of failure, and is difficult to troubleshoot and to scale.
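The scale of the problem is easy to quantify: a full mesh of N PBXs needs N(N-1)/2 trunk relationships, while a hub-and-spoke design through a central broker needs only N. The short sketch below simply works through the numbers:

def mesh_links(n):           # every PBX pairs with every other PBX
    return n * (n - 1) // 2

def hub_links(n):            # every PBX connects only to the central broker
    return n

for n in (5, 10, 25, 50):
    print(n, "PBXs:", mesh_links(n), "mesh links vs", hub_links(n), "hub links")
# 50 PBXs: 1225 mesh links vs 50 hub links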

Figure 1

Different locations within the same enterprise, such as Corporate HQ, branch office, remote office, regional HQ, and so on, may all have access to the enterprise network, trunks and private lines. However, each may have different Dial Plans – some use 4-digit extensions, others may use 5-digit extensions, or more. This is often governed by the vendor mix used in the different locations. It is particularly noticeable when one company acquires another and wishes to integrate the two companies’ telephone systems. Some locations may use exactly the same scheme, including the same number range, so that dialing 1234 in one location may wake up the folks in the cafeteria, while dialing 1234 in another location may call the boss.

When an employee travels from one location to another he/she must remember to use the dialing plan of the current location and a call back to his/her office usually ends up being a call on the public system rather than using the enterprise facilities.

What if a traveling employee could simply dial the extension of their primary location as if sitting at their own desk? A smart telephone system would know that the employee wanted to make a call to another corporate location, rather than to a local extension.

This is called “Dialing Plan Management” and is a feature of the Oracle Enterprise Communications Broker (ECB).
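A rough way to picture it (a toy sketch with invented locations and number ranges, not the ECB's actual configuration model) is a lookup that maps a caller's home location and dialled extension onto a single enterprise-wide routing number, regardless of where the caller happens to be:

# Illustrative per-location dial plans: extension length and enterprise number prefix.
dial_plans = {
    "madrid": {"ext_len": 4, "prefix": "+3491123"},
    "paris":  {"ext_len": 5, "prefix": "+331456"},
}

def normalize(home_location, dialled):
    """Translate a short extension into a full enterprise number using the caller's home dial plan."""
    plan = dial_plans[home_location]
    if len(dialled) != plan["ext_len"]:
        raise ValueError("not a valid extension for this dial plan")
    return plan["prefix"] + dialled

# A Madrid-based employee visiting Paris dials her usual 4-digit extension.
print(normalize("madrid", "1234"))   # routed over the enterprise network rather than the public system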

Figure 2

Here is a typical ECB network configuration. In this network, an enterprise uses a variety of telephony equipment and services. The vendor equipment could be geographically dispersed worldwide, but it is all connected to the same enterprise network. The enterprise telephony services and applications could be central or similarly dispersed, but shared by the whole enterprise.

Each vendor and/or location could use a different Dial Plan, making it difficult to route calls between them without using external telephony services. This makes such calls expensive and exposes them to security concerns.

To facilitate Dial Plan management and session routing, the ECB would be placed at some central, or core, location within the network. It is likely that it would have a close or direct link to the gateway devices such as an SBC if one is employed.

Function:

  • Centralizes enterprise-wide Dial Plan and session routing
  • Provides SIP registrar function for Bring Your Own Device (BYOD) users
  • Provides session-aware layer for application enablement
  • Provides policy control and enforcement
  • Handles SIP traffic only – Not media

Benefits:

  • Streamlines architecture
  • Simplifies provisioning
  • Supports multivendor communications environment
  • Extends to new applications and services
Figure 3

Interface Layout

The ECB uses a purpose-built GUI as its primary configuration and administration access. The GUI is not an add-on; rather, it is designed to be the primary access to the system. As a result, the GUI is well designed and provides the user with an intuitive view into the system that makes configuration, provisioning, monitoring and troubleshooting easy.

Figure 4

More information:
If you would like to find out more about the Oracle Enterprise Communications Broker, I advise taking the following Oracle University course: Oracle Communications ECB Configuration and Administration (3 days).

See all of the available training for Acme Packet products.
To view scheduled Acme Packet Training events in Europe, the Middle East and Africa please click here.
Oracle University provides courses in traditional classroom and live virtual class formats. Custom and private training events can be arranged to suit your exact training requirements.

If you have any questions about the above or require training advice, please contact me: gabriel.ostolaza@oracle.com.


About the Author:

Gabriel Ostolaza

Gabriel Ostolaza (Gabo) is an Industrial Engineer with a double Advanced University Degree from ICAI (Madrid) and Supelec (Paris). He is a highly motivated and passionate Technical Instructor/Developer and Training Manager with 8 years experience of Acme Packet/Oracle University Training, and 10 years at Nortel Training before that (Data, Voice, Security, SDH, DWDM, CTI and VoIP). He can deliver almost ALL Acme Packet courses in English, French or Spanish. His main achievement, of which he is deeply proud, is to lead a team of the most technical and professional instructors who are constantly achieving extremely good customer satisfaction. Excellence and efficiency are his objectives; passion and hard work are his way of achieving them.

Tuesday Sep 09, 2014

Oracle SOA Suite 12c New Features: Creating SOA Project Templates for Reusing SOA Composite Designs

By Joe Greenwald, Principal Oracle University Instructor


In SOA Suite 12c, we create application integrations and business processes designed as services composed of processing logic, data transformation and routing, dynamic business rules and human tasks in the form of XML-based metadata. The graphical representations of these services are created in JDeveloper using its graphical editors. Since these services are composed of individual, separately configurable components, we call this a composite service. Once deployed to and hosted by Oracle SOA Suite, this service looks and acts like any other web service to its clients.

It would be highly productive and desirable to be able to easily create templates for service designs that could be reused across teams and projects. Using quality designs and tested patterns as the starting point for new services speeds up development while also supporting widespread adoption of quality and standards in service design.

SOA Suite 12c automates creation and management of templates of service composites, as well as individual service components. The service project templates we create will be stored and managed in the file-based MDS, so they can easily be shared with other developers.

We have an existing service composite that we would like to clone or use as the basis of a new service composite. Once we create the new service based on the template, we’ll be able to make modifications to it as needed.

Here is the current Service:

pic1

The service exposes a web service entry point, OrderStatus, whose interface is implemented by the convertWS mediator. ConvertWS transforms the incoming message as needed and routes it to be processed by GetStatus, a Business Process Execution Language (BPEL)-based component. The BPEL process accesses the database through the database adapter, OrderDB, to check order status and then writes the status to a flat file via the file adapter, writeQA.

We’d like to start with this service and then create a new, separate project and service that will have the same starting structure. Then we can make modifications to suit our particular needs.

We begin by creating the service composite template from the existing project.

  1. Create or find a SOA Suite 12c Service Composite to use as the basis for the template and open it in JDeveloper 12c.

  2. pic2

  3. Right-click the project or composite name and select Create SOA Template.

  4. pic3

  5. Click the Save In icon to select the location for storing the template: in the file system or in the file-based MDS. We are using the file-based MDS to enable easier sharing and reuse and to keep a single repository for storing assets.

  6. pic4

  7. Select which parts of the service project to include in the template. You can choose to not add certain components or assets to the template.

  8. pic5

  9. Save the template.

    Now that the template is created, we can reuse it here in our application, share it as a jar file, or, since we checked it into the file-based MDS, share it with other developers who have access to the MDS.

To create a new service composite based on the template:

  1. Create a new SOA Project.

  2. pic6

  3. Select a name for the project.


  4. Select the SOA Template radio button and then select the template from the list.

  5. pic7

  6. The new project is created based on the template.

  7. pic8

You can now edit the new project as you see fit.

The ability to create reusable templates also applies to individual components: you can create a mediator or BPEL process and save it as a template for reuse. The process is similar to creating a template for a service composite.

  1. Right-click on the component to use as the basis for the template.

  2. pic9

  3. Select Create Component template, and choose where to save it.


  4. pic10

  5. Choose which files to bundle with the template and Finish.
  6. pic11

    Once the component template is created, you can view it in the Component Window.

    pic12

    To use the component template:

  1. Drag and drop the template onto your service component editor.

  2. pic13

  3. Choose the name for the component and which files to include from the template.

  4. pic14

  5. If there are conflicts with existing files, use the wizard to resolve them as needed and Finish.
  6. pic15

    When finished, you have a new component with the same configuration as the template added to your service composite. You can now edit the new component as needed.

    pic16

In this blog we saw the benefits of using templates to create new SOA service composites and components that can save you development time and increase quality in your Oracle SOA Suite 12c service designs. The templates created can be stored in a local file system or the file-based MDS for reuse.


Get Started with the new Oracle SOA Suite 12c: Essential Concepts training course. View all Oracle SOA Suite training from Oracle University.


About the Author:

joegreenwald

Joe Greenwald is a Principal Instructor for Oracle University. Joe has been teaching and consulting for Oracle for over 10 years and teaches many of the Fusion Middleware courses including Oracle SOA Suite 11g, Oracle WebCenter Content 11g and Oracle Fusion Middleware 11g. Joe’s passion is looking at how best to apply methodologies and tools to the benefit of the development organization.

Thursday Aug 28, 2014

Analyze traffic and diagnose issues in complex Voice networks in real time with a simple click
By Gabriel Ostolaza

Acme Packet’s Palladion Solution, rebranded Oracle Session Monitor, enables SIP/IMS network operators to monitor the service and network and detect and solve problems related to SIP and other signaling protocols used in Voice over IP (VoIP) networks.
The Oracle Session Monitor software suite contains several products that address unique challenges of various networking customers, and allows for individual licensing of the required functionality. These are:

  • Communications Operations Monitor (COM): Includes the Oracle Session Monitor Voice & Video Operations (VVO) and Oracle Session Monitor Customer Experience & Troubleshooting (CET) products. It comes with a range of features for proactive monitoring of the network and also provides advanced troubleshooting, customer support and customer experience capabilities. The workflow is incident oriented and allows one to drill down from higher-level symptoms into the full details of network protocols.
  • Fraud Detection & Prevention (FDP): The Oracle Session Monitor Fraud Detection & Prevention product provides functionality enabling operators not only to detect toll fraud activities on the network but also to actively stop such attacks. This is a separate module.
  • Mediation Engine Connector (MEC): Provides scalability and global network visibility by binding multiple Mediation Engines (MEs) together.
  • Control Plane Monitor (CPM): A Diameter analyzer that provides support for non-call-related traffic.

Interface layout
Oracle Session Monitor´s web page has four main areas:

  • Administrator area: in the top right corner, shows the logged-in user, a sign-out option, general help, access to the administration menu (Settings) and the setup menu (Setup).
  • Main menu: on the left, has three areas for the different functions offered by Oracle Session Monitor. Voice and Video Operations (Operations) and Customer Experience and Troubleshooting (Customer) group the functions that are activated when the Communications Operations Monitor (COM) licence is installed.
  • Main display: the main user working area, where all information is displayed and interaction with panels and tables takes place.

With the help of this solution, the network operator can analyze the quality of a recent call, analyze an issue in real time, get a call diagram, print the detail of every message involved in the call, or even issue a PDF report with all the information related to the call to show the end user that they are working to resolve the problem.

More information:
If you would like to find out more about the Oracle Session Monitor, I advise taking the following Oracle University courses:

See all of the available training for Acme Packet products.
Oracle University provides courses in traditional classroom and live virtual class formats.  Custom and private training events can be arranged to suit your exact training requirements. 

If you have any questions about the above or require training advice, please contact me: gabriel.ostolaza@oracle.com.


About the Author:

Gabriel Ostolaza

Gabriel Ostolaza (Gabo) is an Industrial Engineer with a double Advanced University Degree from ICAI (Madrid) and Supelec (Paris). He is a highly motivated and passionate Technical Instructor/Developer and Training Manager with 8 years experience of Acme Packet/Oracle University Training, and 10 years at Nortel Training before that (Data, Voice, Security, SDH, DWDM, CTI and VoIP). He can deliver almost ALL Acme Packet courses in English, French or Spanish. His main achievement, of which he is deeply proud, is to lead a team of the most technical and professional instructors who are constantly achieving extremely good customer satisfaction. Excellence and efficiency are his objectives; passion and hard work are his way of achieving them.

Tuesday Jul 29, 2014

Oracle Data Integrator: What is a Repository?

by Brent Dayley, Senior Instructor at Oracle University


While teaching Oracle University classes on Oracle Data Integrator, I often instruct my students that it takes roughly one and a half days to understand the setup that we do in ODI Studio in order to start to map data from source to target. I start out by giving them a visual tour of the four navigators: Topology, Security, Designer and Operator. After giving them the visual overview, I begin teaching about the tasks that need to be done in the Topology Navigator to get started mapping data from source to target. The Topology Navigator includes a number of different sections that are used to describe the Information System that will be used for source, staging and target tasks. Among the different sections of the Topology Navigator is the “Repositories” section:


pic1

Often I get perplexed looks regarding the Repositories and it sometimes seems to cause thoughts of “Black Magic”.


Simplifying the Mysterious:

Simply put, the Repositories are nothing more than ODI objects that map to storage areas that contain database objects, such as tables, that hold information about what we are doing in ODI. At the very least, we need to create two repositories, a Master Repository and a Work Repository, and usually a modified Work Repository for production purposes called an Execution Repository. Even though these repositories are described as two separate ODI objects, they can point to one or more actual database schemas:

pic2

Proprietary Database Required?

No, ODI does not require a proprietary database engine type. The repositories can be Oracle-based, or a number of other database types:

pic3


ODI offers flexible methods for creating the database objects that store the information about what we are doing in ODI Studio, and we do make recommendations regarding the preferred method when setting up the repositories with an Oracle database. One of the repository creation methods is to use a separate utility, called the Repository Creation Utility (more commonly referred to as RCU), to create the repositories. The RCU is the preferred method when creating repositories in 12c because of the auditing infrastructure; however, ODI Studio also has Master and Work repository creation methods built in. When using RCU, both the Master and Work repositories default to pointing to the same storage schema in the database. When creating the repositories using the ODI Studio methodology, one can use the same or separate database schemas to house the repository tables. Here is an example of two separate schemas in MySQL that are used for the Master and Work repository tables. SNPM1 holds the Master repository tables and SNPW1 holds the Work repository tables:

pic4


pic5


pic6

Here is an example of some of the type of information that is held in a typical repository table, in this case the name of a project that has been created in the Designer Navigator:

pic7
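If you are curious, you can also peek at this information with a simple query against the Work repository schema. The sketch below uses the MySQL example above and the mysql-connector-python driver; the table and column names (SNP_PROJECT, PROJECT_NAME) are assumed from the standard Work repository layout, so treat this strictly as an exploration aid and verify the names against your own ODI version:

import mysql.connector  # pip install mysql-connector-python

# Connect to the schema that holds the Work repository tables (SNPW1 in the example above).
conn = mysql.connector.connect(host="localhost", user="root", password="...",
                               database="SNPW1")
cur = conn.cursor()
cur.execute("SELECT PROJECT_NAME FROM SNP_PROJECT")
for (name,) in cur:
    print(name)          # e.g. the project created in the Designer Navigator
cur.close()
conn.close()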


I hope that you have learned a little bit about our outstanding Oracle Data Integrator. I hope to meet you in one of my upcoming 11g or 12c classes!


Take Oracle Data Integrator Training


About the Author:

brentdayley

Mr. Brent Dayley is a Senior Instructor with Oracle University. Brent has worked at Oracle for over 10 years and teaches the SQL, PL/SQL, Application Express, Oracle Database Administration, SQL Tuning, MySQL and Data Integrator classes.

Monday Jun 30, 2014

A Quick Guide to Taleo Evaluation Management
By Nigel Wiltshire


One of the most crucial parts of any recruitment campaign is the candidate interview, and like all meetings it usually involves the taking of notes and subsequent incorporation of that feedback into the system for relevant stakeholders to review. 
The Taleo Evaluation Management feature not only makes this process seamless, it allows for the total management and administration of feedback questionnaires.  It should be noted that although the context of this article is ‘Interview’, this is by no means the only reason for using this feature.  Any feedback, for any purpose, that needs to be recorded in the system can utilise this service.

The Setup

After a minimal amount of back-end configuration (including Feedback Expiration Period and Message Templates), the next step is to create a series of questionnaires in a library that can be pushed to the relevant people (e.g. Interviewers and Assessors) at the appropriate time.  This is a three-stage process:

In addition, you need to update the User Type Permissions in order to establish which users are classified as Evaluators and can therefore be sent the questionnaire.

Linking to Requisitions

Once the setup is complete it is time to start using the feature.  The first step in this journey is to ensure that the right Questionnaires and Evaluators are associated with the appropriate Requisitions.  This is completed at the time of creating the requisition in the system.



The Feedback Request

So, the candidate finally reaches the relevant stage of the hiring process (e.g. Interview) that requires the gathering of feedback.  It is now a simple case of sending the right questionnaire to the right evaluators…

...then just sit back, relax and wait for the results to come flooding in.

If you'd like to learn more about this and other features I suggest you attend the 1-day course: Taleo (TEE): Advanced Recruiting Configuration (REC-SA201). It is available in both a Classroom training format and a Live Virtual Class training format.

For more information about all available Taleo training view Oracle University's Taleo curriculum web pages.

About the Author:

Nigel Wiltshire has over 18 years of experience in software and training with a specialization in the Taleo Enterprise Edition suite. He has been involved in training most industries from government bodies to large corporations and charitable organizations. Along with the end goal of thoroughly explaining the systems he is training on, Nigel also strives to bring a little bit of fun into the training environment by sprinkling in humour and commonly relatable stories from his experiences as a trainer.

Friday Jun 06, 2014

Oracle GoldenGate 12c New Features: Trail Encryption and Credentials with Oracle Wallet


By Randy Richeson, Senior Principal Instructor for Oracle University

Students often ask if GoldenGate supports trail encryption with the Oracle Wallet. Yes, it does now! GoldenGate has supported encryption with keygen and the ENCKEYS file for years. GoldenGate 12c now also supports encryption using the Oracle Wallet, which improves security and simplifies its administration.


Two types of wallets can be configured in GoldenGate 12c:

  • The wallet that holds the master key, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.
  • The wallet that holds the User Id and Password, used for authentication, stored in the new 12c dircrd/cwallet.sso - credential store - file.

 

A wallet can be created using a ‘create wallet’ command. Once created, adding a master key to an existing wallet is easy using ‘open wallet’ and ‘add masterkey’ commands.

 

GGSCI (EDLVC3R27P0) 42> open wallet

Opened wallet at location 'dirwlt'.

GGSCI (EDLVC3R27P0) 43> add masterkey

Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.

 

Existing GUI Wallet utilities such as the Oracle Database “Oracle Wallet Manager” do not work on this version of the wallet. The default Oracle Wallet location can be changed.

 

GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/*

-rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso

GGSCI (EDLVC3R27P0) 45> info masterkey

Masterkey Name:                 OGG_DEFAULT_MASTERKEY

Creation Date:                  Fri May 30 05:24:04 2014

Version:        Creation Date:                  Status:

1               Fri May 30 05:24:04 2014        Current

 

The second wallet file stores the credential used to connect to a database, without exposing the UserId or Password in a parameter file or macro. Once configured, this file can be copied so that credentials are available to connect to the source or target database.

 

GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd

GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/*

-rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso

 

The encryption wallet file can also be copied to the target machine so that the replicat has access to the master key when decrypting any encrypted records in the trail. Similar to the ENCKEYS file, the master key wallet created on the source host must either be stored on a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, or NonStop platforms.

 

GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt

 

The new 12c UserIdAlias parameter is used to locate the credential in the wallet.

 

GGSCI (EDLVC3R27P0) 52> view param extwest

Extract extwest

Exttrail ./dirdat/ew

Useridalias gguamer

Table west.*;


The EncryptTrail parameter is used to encrypt the trail using the FIPS-approved Advanced Encryption Standard and the encryption key in the wallet. EncryptTrail can be used with a primary extract or a pump extract.


GGSCI (EDLVC3R27P0) 54> view param pwest

Extract pwest

Encrypttrail AES256

Rmthost easthost, mgrport 15001

Rmttrail ./dirdat/pe

Passthru

Table west.*;

Once the extracts are running, records can be encrypted using the wallet.

 

GGSCI (EDLVC3R27P0) 60> info extract *west

EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING

Checkpoint Lag       00:00:17 (updated 00:00:01 ago)

Process ID           24982

Log Read Checkpoint  Oracle Integrated Redo Logs

                     2014-05-30 05:25:53

                     SCN 0.0 (0)

EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING

Checkpoint Lag       24:02:32 (updated 00:00:05 ago)

Process ID           24983

Log Read Checkpoint  File ./dirdat/ew000004

                     2014-05-29 05:23:34.748949  RBA 1483

 

The ‘info masterkey’ command is used to confirm the wallet contains the key. The key is needed to decrypt the data read from the trail before the replicat applies changes to the target table.

 

GGSCI (EDLVC3R27P0) 41> open wallet

Opened wallet at location 'dirwlt'.

GGSCI (EDLVC3R27P0) 42> info masterkey

Masterkey Name:                 OGG_DEFAULT_MASTERKEY

Creation Date:                  Fri May 30 05:24:04 2014

Version:        Creation Date:                  Status:

1               Fri May 30 05:24:04 2014        Current

 

Once the replicat is running, records can be decrypted using the wallet.

 

GGSCI (EDLVC3R27P0) 44> info reast

REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING

INTEGRATED

Checkpoint Lag       00:00:00 (updated 00:00:02 ago)

Process ID           25057

Log Read Checkpoint  File ./dirdat/pe000004

                     2014-05-30 05:28:16.000000  RBA 1546

 

There is no need for the DecryptTrail parameter when using the wallet, unlike when using the ENCKEYS file.

 

GGSCI (EDLVC3R27P0) 45> view params reast

Replicat reast

AssumeTargetDefs

Discardfile ./dirrpt/reast.dsc, purge

UserIdAlias ggueuro

Map west.*, target east.*;

 

Once a record is committed in the source table, the encryption can be verified using logdump and then querying the target table.

 

SOURCE_AMER_SQL>insert into west.branch values (50, 80071);

1 row created.

SOURCE_AMER_SQL>commit;

Commit complete.
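To locate the record in the remote trail, logdump can be started from the GoldenGate home and pointed at the trail file reported by the replicat checkpoint above. The commands below are a minimal sketch: they open the file, switch on typical display options for the record header, data bytes and GGS tokens, and then step through the records with n.

open ./dirdat/pe000004
ghdr on
detail data
ggstoken detail
n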

 

The encrypted record then appears in logdump as follows. Because the trail data is encrypted, logdump cannot interpret the after image, which shows up as unreadable binary and produces the ‘Bad compressed block’ message.


Logdump 40 >n

2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546

Name: WEST.BRANCH

After  Image:                                             Partition 4   G  s  

 0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1 

 554f e65a 5185 0257                               | UO.ZQ..W 

Bad compressed block, found length of  7075 (x1ba3), RBA 1546

  GGS tokens:

TokenID x52 'R' ORAROWID         Info x00  Length   20

 4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp.. 

TokenID x4c 'L' LOGCSN           Info x00  Length    7

 3231 3632 3934 33                                 | 2162943 

TokenID x36 '6' TRANID           Info x00  Length   10

 3130 2e31 372e 3135 3031                          | 10.17.1501 


The replicat automatically decrypts this record from the trail using the wallet and then inserts the row into the target table. The following select verifies that the row was committed in the target table and that the data is not encrypted.


TARGET_EURO_SQL>select * from branch where branch_number=50;

BRANCH_NUMBER BRANCH_ZIP

------------- ----------

           50      80071

 

Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle Ed 1 class to learn much more about using GoldenGate 12c new features with the Oracle wallet, credentials, integrated extracts, integrated replicats, coordinated replicats, the Oracle Universal Installer, a multi-tenant database, and other features.

Explore Oracle University GoldenGate classes here, or send me an email at randy.richeson[at]oracle.com if you have other questions.

About the Author:


Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010, and since 1997 he has taught other technical curricula including GoldenGate Management Pack, GoldenGate Director, GoldenGate Veridata, Oracle Database, JD Edwards, PeopleSoft, and Oracle Application Server.

Friday May 16, 2014

Upgrading Your Oracle Service Cloud Site

By Sarah Anderson

Oracle Service Cloud combines Web, Social and Contact Center experiences for a unified, cross-channel service solution in the Cloud, enabling organisations to increase sales and adoption, build trust and strengthen relationships, and reduce costs and effort.

Upgrading your Oracle Service Cloud site can be a daunting prospect, and questions like ‘what will happen to my data?’, ‘what gets copied over and what do I have to re-build manually?’ and ‘what about all of those complex customisations?’ generally come in at the top of the list!

So, take a deep breath and relax... Not only is there a dedicated upgrade team, but there is also a wealth of information, guides and tutorials online to support the process. And of course, you can always submit a support incident if you need any help.

Resources
A good starting point for accessing the documentation is https://cx.rightnow.com/ (you’ll need your account name and password to log in).  Search for ‘5167’ to display Answer ID 5167, which lists the upgrade guides from the present GA version (currently February 2014) all the way back to August 2008; just select the version that you are upgrading TO.  Then select the version that you are upgrading FROM (don’t forget that the Customer Portal Framework can be migrated separately) and the documentation will open automatically.

It’s absolutely imperative that you verify that your infrastructure meets the minimum specifications for the version that you are upgrading to. It isn’t safe to assume that just because your 2011 version runs at optimum efficiency, your brand new, shiny 2014 version will do the same!

Now that you’ve confirmed that you meet the minimum system requirements, you are ready to request your upgrade site. This is a copy of your production site: your rules, navigation sets, workspaces, account profiles, configuration settings, message bases and so on will all be identical. The upgrade database holds a snapshot of your production data as it was at the point the upgrade site was created, but it will NOT be synchronised with the production database. To request the site, navigate to the overview page of the upgrade documentation, where you will find a link to an HMS (hosting management system) page.  Use your customer support details to log in to HMS and select the site that you wish to upgrade.  Once that’s done, your upgrade site will be reviewed by a member of the upgrades team and you’ll receive a notification about cutover dates.

Tasks
One of the key elements of a successful upgrade is to test, test and, when you’ve done that, test some more. Thorough testing ensures that any changes you make to the upgrade site do not have a negative impact on, or ‘break’, any existing configuration.  To make this process more efficient and to ensure that the mandatory tasks have been completed, you can access a checklist from the ‘My Site Tools’ link at the top of the support pages (you may need to log in again to access this list).

There are three mandatory tasks which need to be signed off:

  • Test and confirm upgrade site functionality
  • Review upgrade documentation
  • Review Service Update Notification (SUN) Editor

There are 16 optional service tasks, but it’s a matter of best practice to test all of these items too!

To avoid duplication of effort and to ensure clear responsibility for the testing and verification of these tasks, each task can be assigned to an individual.

What gets carried over?
The question that most people ask when they upgrade is ‘what changes get carried over at cutover?’  The important thing to remember is that when your site is upgraded, the database from the existing production site is merged with the configuration files of the upgrade site.  Answer ID 1925 contains detailed information about this, but in summary: answer images, the customer portal, configuration settings, message bases, the file manager and local spellcheck dictionaries are carried across from the upgrade site, and everything else comes from production.  This means that if you have created a new navigation set or workspace in the upgrade site, for example, it will NOT be carried over to the upgraded production site.

An important aspect of upgrading relates to custom fields.  Custom fields should not be added, modified, or deleted from seven days prior to cutover until after the cutover is complete.  This is because changing custom fields within this time period can cause the upgrade to fail and/or be postponed.

As I mentioned right at the start, upgrading can be daunting, but success is all in the preparation. Lay the groundwork and you should have a trouble-free upgrade. And don’t forget, there is an upgrade team that you can reach out to with questions or requests for support.

Here is a summary of the steps to take:

    • Read the upgrade documentation
    • Prepare your production site by having a ‘spring clean’ of extraneous or unused reports, workspaces, navigation sets and account profiles etc
    • Assign upgrade tasks in UMS
    • Test
    • Enjoy the whizzy, new functionality

About the Author:

Sarah Anderson worked for RightNow for 4 years in both a consulting and training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses:

  • RightNow Customer Service Administration
  • RightNow Analytics
  • RightNow Customer Portal Designer and Contact Center Experience Designer Administration
  • RightNow Marketing and Feedback

Gaining RightNow Knowledge and Skills

RightNow applications provide a service enterprise platform to unify web, social and contact center experiences for customers. With Oracle University’s RightNow Training, learn how to use this cloud-based customer service platform to build trust, increase sales and adoption, and reduce costs.

Monday Apr 21, 2014

What to Expect from Our Top Oracle Solaris 11 Training Courses


by Mike Lanker, Senior Oracle University Instructor

After many years of working with prior versions of Solaris, I was asked to learn and teach Solaris 11 classes.
I wondered: what could be different?


Old vs. New Versions of Solaris

I quickly discovered there are many differences between older Solaris releases and Solaris 11.

My first course of action on my training path was to take the UNIX and Linux Essentials course. This training explored the basics, so I followed up by taking the Solaris 11 Administration course.

If you’re working toward becoming a Systems Administrator (SA), I can tell you from experience that these courses will help you excel within this position. How, you ask? Let’s take a look at the skills these courses will help you develop.


UNIX and Linux Essentials Course

This four-day beginners’ course is designed for those who work in the Solaris environment. It begins with a deep dive into the Operating System (OS) structure and then explores archiving and performing remote file transfers.  It teaches you the basic commands, a very beneficial skill set to have if you’d like to take the Solaris Administration class.

This course is based on Oracle Linux 6.2 and Oracle Solaris 11. It gives you an opportunity to participate in hands-on lab exercises to reinforce your learning.

Learn To:

  • Discover different shells.
  • Set file permissions.
  • Customize your initialization files so you can use tailored commands.
  • Work with the vi editor to create and modify existing files.
  • Understand the basic commands used to create, copy, move and rename files and directories.

Oracle Solaris 11 Administration Course

Once you’ve completed the Essentials training, this five-day course builds on that material and expands into other areas.

Learn To:

  • Perform OS installs.
  • Verify the software/drivers are current.
  • Create boot environments.
  • Work with the terrific ZFS file system.
  • Create zones.
  • Perform network administration.
  • Create, update and delete user accounts.
  • Control access to the system.
  • Change the password algorithm.
  • Establish user quotas, in case you have limited space.
  • Run recurring programs.

You’ll also get the opportunity to participate in interactive lab sessions, which make for a more enjoyable, hands-on learning environment.


Become a System Administrator by Enrolling in Oracle University Training

By investing in the courses mentioned above, you’ll develop the skills to oversee the system and keep unauthorized users out. Furthermore, you’ll be able to keep tabs on anyone who may be switching over to another user. You’ll know how to set up passwords and handle failed logins in line with established policies and procedures.

If your company decides to increase the password length or strengthen the encryption level, you’ll have the knowledge to do so with great ease. You’ll establish and maintain the storage requirements, while creating the storage pools and related file systems.

These courses will prepare you to successfully execute System Administrator responsibilities. Possessing this skill set will help you stand out amongst your peers. Investing in training is an effective way to build the knowledge you need to pass the Oracle Solaris 11 OCA Certification exam.

Get introduced to Solaris 11 with me, Mike Lanker, in this free one-hour webinar.

View all available Oracle Solaris 11 training.


About the Author:

Mike Lanker is a Senior Instructor with Oracle University.
He has been working as an instructor since 1979 and currently teaches classes for Oracle Solaris, Oracle Linux, Oracle Storage solutions and Oracle Database.

Thursday Jan 23, 2014

An Introduction to Subledger Accounting
By Chris Rudd

R12 saw the introduction of Oracle Subledger Accounting (SLA), a rule-based accounting engine that centralizes accounting for Oracle E-Business Suite products in Release 12 and above.
SLA is not a separate product in itself, but is Oracle’s engine catering to the accounting needs of Oracle applications. There are no SLA responsibilities and you do not log in to SLA; Subledger Accounting forms and programs are embedded within standard Oracle Applications responsibilities (for example, Payables Manager).

Multiple Accounting Representations

Together with the new ledger support in General Ledger (GL), SLA provides the ability to maintain multiple accounting representations in a single instance.
A Subledger Accounting Method is linked to a ledger, and the rules contained within that Subledger Accounting Method determine how a transaction is represented in that ledger.

In R12, you can have Primary, Secondary and Reporting Currency Ledgers. Your Primary Ledger is your main reporting ledger; a Secondary Ledger can be used if you need another accounting representation; and a Reporting Currency Ledger is used where you need to report in another currency.

Example:

A US Corporation has an operation in France. The French operation is subject to French accounting regulations, and therefore must report its activities to the local authorities in Euros, according to the French business calendar, French chart of accounts, and the French interpretation of IFRS (International Financial Reporting Standards).

This would be the French Primary Ledger. However, the US headquarters/parent company needs consolidated, global visibility of its worldwide operations, so a Secondary Ledger can be linked to the French Primary Ledger. The Secondary Ledger shares the same currency, chart of accounts, calendar and Subledger Accounting Method as those used at HQ.

As this example shows, Subledger Accounting allows:
• multiple accounting methods to be defined
• separate accounting methods to be used on different ledgers

Journal entries will be created for both ledgers for a single subledger posting (French activities accounted for on the primary ledger according to the IFRS method, and on the secondary ledger according to the US GAAP method).

Oracle provides a number of seeded Subledger Accounting Methods, for example Standard Accrual. If you want to change the accounting rules then you can copy the seeded method and create your own Subledger Accounting Method.

So now all of the subledgers create accounting in the same consistent way, with the ability to do online accounting for a single transaction or to submit a standard request to account for all transactions in a module. You can create Draft accounting, which is useful if you are experimenting with the rules or wish to review the entries before posting; Final accounting, which is complete and ready to post to GL; or Final Post, which is complete and is posted to GL.

Once data is posted to GL, there’s a full audit trail with drilldown all the way to the actual subledger transaction. Come along to the R12.x Oracle Subledger Accounting Fundamentals class to learn more about this and other features of SLA, or send me an email with any questions: christine.rudd@oracle.com.
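For readers who like to see where these subledger journal entries live in the database, the sketch below queries the standard SLA tables XLA_AE_HEADERS and XLA_AE_LINES. It is illustrative only: the bind variable, the column list and the join conditions are assumptions, and the filters you actually need will depend on the application and release.

-- Illustrative only: list subledger journal headers and their lines for one ledger
SELECT h.ae_header_id,
       h.accounting_date,
       h.gl_transfer_status_code,
       l.ae_line_num,
       l.accounted_dr,
       l.accounted_cr
FROM   xla_ae_headers h
JOIN   xla_ae_lines l
       ON  l.application_id = h.application_id
       AND l.ae_header_id   = h.ae_header_id
WHERE  h.ledger_id = :ledger_id
ORDER  BY h.ae_header_id, l.ae_line_num;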

 

About the Author:


Chris Rudd joined Oracle in May 1999. She is a Principal Training Consultant at Oracle University. Using her detailed product knowledge, Chris delivers both in-class and Live Virtual Class courses for the Oracle E-Business Suite Financials and Oracle Fusion Financials products.

Monday Jan 20, 2014

Description of the Business Object component within Oracle Siebel CRM







Oracle Siebel CRM is a mainstream Customer Relationship Management (CRM) software package in the current global market. Providing a very comprehensive solution, Oracle Siebel defines its objects at three main levels of technical implementation.



The Business Object (BO) at the business level is quite interesting. In reality, a single Business Component (BC) cannot describe an actual business comprehensively. Therefore, the system introduces the concept of the Business Object to link various Business Components together so that an actual business can be fully represented. Take the Business Component “Opportunity” as an example: in order to fully understand a business opportunity, we have to take into consideration the Account, Contact and Action records that are associated with it. In the technical implementation, it is the Business Object that links all these different components together.



In this scenario built around Opportunity, Opportunity is the parent Business Component, while the Account, Contact and Action components that describe the Opportunity act as child Business Components. This forms a multidimensional technical model centered on Opportunity, and its technical implementation is simple:



Each parent-child relationship in the Business Object is defined by a Link, which is implemented through a primary key/foreign key relationship.



Wang Zeyi is a senior training lecturer at Oracle University. He has worked in the CRM field for a long time, with experience in CRM pre-sales, research, business analysis and technical implementation across industries such as financial services, high tech, telecommunications and automotive. He is mainly in charge of training courses on Oracle Siebel CRM and Oracle Fusion CRM.


