Wednesday Dec 10, 2014

Audit History in Oracle Order Management. The challenge.
by David Barnacle

Changes to confirmed sales orders because of business issues, such as low stock availability or pricing amendments, can cause customer concerns or complaints, and ultimately lead to financial loss for the company.

So there may be a requirement to record which attributes of an order were changed, when, and by whom.

To address this issue, the Order Management team introduced a rules-based approach, simply called Audit History.

By defining simple rule hierarchies, implementers can specify which attributes (e.g. shipment date, order quantity, payment terms) should be tracked for changes, by whom and when, and, where necessary via Business Events, which parties to inform of changes.

These changes are recorded in Order Management audit tables, which are loaded on a regular basis by running the concurrent program Audit History Consolidator. The results can then be reviewed via the Audit History report, or online via the View Audit History form.

Below is a screen shot showing a simple rule that would record changes to payment terms once the order had been booked.


Conclusion

These rules can be set up at any time to provide insight into changes made to any of the order entry fields.

If you want to stop such changes from happening on a more permanent basis, there are processing constraint rules designed for just that requirement. Once again, these constraints use a rules-based approach to prevent an attribute from being changed, by either the system or the end user, at key points in the order processing cycle.

David Barnacle

David Barnacle joined Oracle University in 2001, after being the lead Implementer, of a very successful European rollout of the e-Business suite. He currently trains a wide family of applications specializing in the supply chain and financial areas. He enjoys meeting students and likes to learn how each Customer will configure the software suite to meet their own challenging business objectives.

Friday May 16, 2014

Upgrading Your Oracle Service Cloud Site

By Sarah Anderson

Oracle Service Cloud combines Web, Social and Contact Center experiences for a unified, cross-channel service solution in the Cloud, enabling organisations to increase sales and adoption, build trust and strengthen relationships, and reduce costs and effort.

Upgrading your Oracle Service Cloud site can be a daunting prospect – beset with questions like ‘what will happen to my data’, ‘what gets copied over and what do I have to re-build manually’ and ‘what about all of those complex customisations’ generally coming in at the top of the list!

So, take a deep breath and relax...Not only is there a dedicated upgrade team, but there is also a wealth of information, guides and tutorials online to support the process – and of course, you can always submit a support incident if you need any help.

Resources
A good starting point for accessing the documentation is https://cx.rightnow.com/ (you’ll need your account name and password to log in). Search for ‘5167’ to display Answer ID 5167, which lists the upgrade guides from the present GA version (currently February 2014) all the way back to August 2008. Select the version that you are upgrading TO, then the version that you are upgrading FROM (don’t forget that the Customer Portal Framework can be migrated separately), and the documentation will open automatically.

It’s absolutely imperative that you verify that your infrastructure meets the minimum specifications for the version that you are upgrading to – it isn’t safe to assume that just because your 2011 version runs at optimum efficiency, your shiny new 2014 version will do the same!

Now that you’ve confirmed that you meet the minimum system requirements, you are ready to request your upgrade site. This is a copy of your production site: your rules, navigation sets, workspaces, account profiles, configuration settings, message bases and so on will all be identical. The upgrade database contains an asynchronous snapshot of the data as it was in your production site at the point the upgrade site was created, but it will NOT be synchronised with the production database. To request the site, navigate to the overview page of the upgrade documentation, where you will find a link to an HMS (hosting management system) page. Use your customer support details to log in to HMS and select the site that you wish to upgrade. Once that’s done, your upgrade site will be reviewed by a member of the upgrades team and you’ll receive a notification about cutover dates.

Tasks
One of the key elements of a successful upgrade is to test, test and, when you’ve done that, test some more... This will ensure that any changes you make to the upgrade site do not have a negative impact on, or ‘break’, any existing configuration.  To make this process more efficient and to ensure that mandatory tasks have been completed, you can access a checklist from the ‘My Site Tools’ link at the top of the support pages – you may need to log in again to access this list.

There are 3 mandatory tasks which need to be signed off:

  • Test and confirm upgrade site functionality
  • Review upgrade documentation
  • Review Service Update Notification (SUN) Editor

There are 16 optional service tasks, but it’s a matter of best practice to test all of these items too!

To avoid duplication of effort and to ensure clear responsibility for the testing/verification of these tasks, each task can be assigned to an individual.

What gets carried over?
The question that most people ask when they upgrade is ‘what changes get carried over at cutover?’  The important thing to remember is that when your site is upgraded, the database from the existing production site is merged with the configuration files of the upgrade site.  Answer ID 1925 contains detailed information about this, but in synopsis: answer images, customer portal, configuration settings, message bases, file manager and local spellcheck dictionaries are carried across from the upgrade site, and everything else comes from production.  This means that if you have created a new navigation set or workspace, for example, in the upgrade site, it will NOT be carried over to the upgraded production site.

An important aspect of upgrading relates to custom fields.  Custom fields should not be added, modified, or deleted from seven days prior to cutover until after the cutover is complete.  This is because changing custom fields within this time period can cause the upgrade to fail and/or be postponed.

As I mentioned right at the start, upgrading can be daunting, but success is all in the preparation... Lay the groundwork and you should have a trouble free upgrade – and don’t forget, there is an upgrade team that you can reach out to with questions or requests for support.

Here is a summary of the steps to take:

    • Read the upgrade documentation
    • Prepare your production site by having a ‘spring clean’ of extraneous or unused reports, workspaces, navigation sets and account profiles etc
    • Assign upgrade tasks in UMS
    • Test
    • Enjoy the whizzy, new functionality

    About the Author:


    Sarah Anderson worked for RightNow for 4 years in both a consulting and training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses:

• RightNow Customer Service Administration
• RightNow Analytics
• RightNow Customer Portal Designer and Contact Center Experience Designer Administration
• RightNow Marketing and Feedback

Gaining RightNow Knowledge and Skills
RightNow applications provide a service enterprise platform to unify web, social and contact center experiences for customers. With Oracle University’s RightNow Training, learn how to use this cloud-based customer service to build trust, increase sales and adoption, and reduce costs.

Thursday Jun 20, 2013

Creating an ACFS Replication by Bruno d’Agostini

ACFS Replication, a new feature in 11.2.0.2, enables replication of Oracle ACFS file systems across the network to a remote site, providing disaster recovery capability for the file system.

For this demonstration, I used the practice environment provided for the course Oracle Grid Infrastructure 11g: Manage Clusterware and ASM.

Because disk space is limited and 4 GB is needed per node, I used only a two-node cluster, with nodes named host01 and host02.

1. Before you start, make sure your cluster is correctly configured and working properly

2. Using ASMCA, create the disk groups with the required attributes.

a. Create PRIM Diskgroup with external redundancy and use 5 ASM disks ORCL:ASMDISK05 to ORCL:ASMDISK09. Set ASM and ADVM Compatibility to 11.2.0.2.0

 

b. Create SEC Diskgroup with external redundancy and use 5 ASM disks ORCL:ASMDISK10 to ORCL:ASMDISK14. Set ASM and ADVM Compatibility to 11.2.0.2.0

 

3. Using ASMCA, create the volumes needed.

a. Create Volume PRIM on the PRIM Diskgroup and use 11 GB for the volume (remember: 4 GB per node)

b. Create Volume SEC on the SEC Diskgroup and use 11 GB for the volume (remember: 4 GB per node)

 

4. Using Linux, create the required directories

a. On host01, create the directory /prim

b. On host02, create the directory /sec

 

5. Using ASMCA, create the ASM Cluster File Systems needed:

a. Create an ASM Cluster File System in PRIM Volume and specify /prim as Mount Point

b. Create an ASM Cluster File System in SEC Volume and specify /sec as Mount Point

6. Using NETCA, create TNS Alias PRIM for PRIM Service on host01 and create TNS Alias SEC for SEC Service on host02. (do not specify Cluster-scan)

7. Verify the result in TNSNAMES.ora file

8. Using Linux, create a password file on host01 with oracle as password

9. Using Linux, create a password file on host02 with oracle as password

10. On Host01, using SQL*Plus, create a user named oracle with password oracle and grant the necessary privileges. The password file will be updated on both sites, Host01 and Host02

11. On Host01, connected on instance +ASM1, add a service PRIM

12. Verify the listener on host01 is listening for service PRIM on instance +ASM1

13. Test the service name PRIM

14. On Host02, connected on instance +ASM2, add a service SEC

15. Verify the listener on host02 is listening for service SEC on instance +ASM2

16. Test the service name SEC

17. Using df on host01, check that the file system /prim is mounted, and dismount the unnecessary mounted file system (/sec)

18. Using df on host02, check that the file system /sec is mounted, and dismount the unnecessary mounted file system (/prim)

19. Connected to host02, using acfsutil, initialize ACFS replication on the standby site first

20. Only when replication has been successfully initialized on the standby site, connect to host01 and initialize ACFS replication on the primary site using acfsutil.

21. Check ACFS Replication on primary site

22. Check ACFS Replication on standby site

23. On host01, create a file (named toto)

24. On host02, after a while, check if the file has been replicated

25. On host01, pause the replication and create a second file (named titi)

26. On host02, the new file should not be replicated, because replication is paused on host01

27. After a while, resume replication on host01

28. After a few seconds, on host02, the new file should be replicated

29. On host02, pause the replication

30. On host01, create a new file (named new)

31. On host02, the file is not created because the replication on standby site is paused

32. On host02, resume the replication and test if the new file is created

33. On host02, the standby site, try to remove the file

34. On host01 primary site, terminate and verify the replication status

35. On host02 standby site, terminate and verify the replication status
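As an illustration, the replication-specific steps above map roughly onto acfsutil commands as sketched below. The service names (PRIM, SEC), the oracle/oracle credentials and the mount points come from the setup above, but the exact flags vary between releases, so treat this as a sketch and verify the syntax against the 11.2 acfsutil documentation before use:

```
# Step 19 - on host02 (standby site), initialize the standby first
/sbin/acfsutil repl init standby -p oracle/oracle@PRIM /sec

# Step 20 - on host01 (primary site), then initialize the primary,
# giving the standby connect string and standby mount point
/sbin/acfsutil repl init primary -s oracle/oracle@SEC -m /sec /prim

# Steps 21-22 - check replication status on either site
/sbin/acfsutil repl info -c -v /prim     # on host01
/sbin/acfsutil repl info -c -v /sec      # on host02

# Steps 25/27 and 29/32 - pause and resume replication
/sbin/acfsutil repl pause /prim
/sbin/acfsutil repl resume /prim

# Steps 34-35 - terminate replication, primary site first
/sbin/acfsutil repl terminate primary /prim   # on host01
/sbin/acfsutil repl terminate standby /sec    # on host02
```

These commands require a running Oracle Grid Infrastructure 11.2.0.2 cluster and cannot be tested standalone; the flag names shown are assumptions from the 11.2 syntax and should be checked against acfsutil help repl on your own system.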

About the Author:


Bruno d’Agostini joined Oracle in 1994 and currently is a French Instructor for Oracle University specializing in the Oracle Database, Data Guard, Data Warehouse, Streams, GoldenGate, Parallelism, Partitioning, Clusterware, ASM, RAC and Exadata Database Machine.

Thursday Sep 29, 2011

MOAC – Driving Efficiencies in Transaction Processing by David Barnacle BSc (Hons) ACMA

In Release 12, an exciting new feature called Multi-Org Access Control, or MOAC, was introduced across the subledgers. Many of our customers have followed Oracle’s lead and adopted shared service centres (SSCs). In these centres, the back-office functions (finance and administration) have been consolidated to drive down the cost of processing business transactions. For example, a shared service centre in a single country could deal with all expense processing across Europe, or even the world. SSC models are increasingly being used in the public sector in an attempt to be more efficient and to push down the cost of daily transactions.

You may not have implemented a formal shared service centre, but you can still reap the benefits of Multi-Org Access Control. Multi-Org architecture was introduced in release 10.7 to allow businesses with complex enterprise structures, often spanning many countries, to conduct their business transactions in a single Oracle database instance. Financial transactions in the subledgers were secured by operating unit, and users gained access to each operating unit via a different responsibility. If a user needed to process transactions in a new operating unit, they would need another responsibility.

MOAC allows companies to gain processing efficiencies because users can more easily access, process and report on data across multiple operating units from a single responsibility without compromising data security or system performance. For example, an order processing clerk can open one sales order form and then process orders for all countries without the need to switch responsibilities or data entry forms.

The following diagram summarises the setup and processing steps for using MOAC.


In the Human Resources responsibility, you define a new security profile and assign to it all the operating units that the responsibility needs to access. To make the new security profile available, you must then run the HR report called ‘Run Security List Maintenance’. The new security profile is then attached to a responsibility via the profile option ‘MO: Security Profile’.

A number of reports and forms have been enhanced to allow cross-organisational reporting. Multi-Org preferences allow users to control and limit the number of operating units they have access to, based on their work environment.

The MOAC feature delivers the following benefits:

1. Reduced setup and maintenance of many responsibilities

2. Faster data entry

3. A global, consolidated view of information

4. The ability to process data across multiple operating units from a single responsibility

5. Increased operational efficiency and reduced transaction processing costs.

The setup and use of MOAC is covered in the OU course R12.x Oracle E-Business Suite Essentials for Implementers. The course also covers common setup components such as flexfields and is the prerequisite for any follow-on application fundamentals course. The course content is also tested in the first examination of the E-Business Suite certification program.

About the Author:

David Barnacle
David Barnacle joined Oracle University in 2001, after being the lead Implementer, of a very successful European rollout of the e-Business suite. He currently trains a wide family of applications specializing in the supply chain and financial areas. He enjoys meeting students and likes to learn how each Customer will configure the software suite to meet their own challenging business objectives.

Friday Aug 26, 2011

Ways to Train – Oracle University Style by David North

It’s not just about the content, and it’s not just about the trainer – it’s also about you, the learner. The ways people learn new skills are hugely varied, and it is for this reason that OU has been, and continues to be, dramatically expanding the sources and styles available. In this short article I want to describe some of the styles you may come across, to enable you to make the best choice for you. Firstly, we have “Instructor-led” training: the kind of live, group-based training that many of you will already have experienced by attending a classroom and coming face-to-face with your trainer; spending time absorbing theory lessons, watching demos, and then (in most classes) “having a go” at hands-on exercises with a live system.

But now OU has added “LVC” (Live Virtual Class) – a variety of live, instructor-led training where, instead of having to travel, you attend class remotely over the Internet. You still have a live instructor (so you have to turn up on time... no slacking allowed!!). The tool we use allows plenty of interaction with the trainer and other class members, and the hands-on exercises are just the same – although in this style of training, if you fall behind or want to explore more, the machines on which you do the exercises are available 24x7 – no being kicked out of the classroom at the end of the day!

We are doing more and more of these LVC classes as the word spreads about how good they really are. If you can’t take time out during the day and are really up for it, you’ll even find classes scheduled to run in the evenings and overnight! – although be careful you don’t end up on a class being delivered in Chinese or Japanese for example (unless of course you happen to speak the language... When you book a class the language and start times are clearly shown).

For those of you who prefer a more self-paced style, or who cannot take big chunks of time out to attend the live classes, we have created recordings of quite a few – which we call “RWC” (Recorded Web Class) – so you can log in and work through them at your leisure. Sadly, with these we cannot make the hands-on practice environments available (there’s no-one there in real time to support them), but they do give you all the content, at a time and pace to suit your needs.

If you like that idea but want something a bit more interactive, we have “Online Training”. Do not confuse this with LVC: “Online Training” is not live; it is a combination of interactive computer-based lessons with demos and hands-on simulations based on real live environments. You decide where, when, and how much of the course you do. Each time you log back in, the system remembers where you were – you can go back and repeat parts of it, or simply carry on where you left off. Perfect if you have to do your training in bits and pieces at unpredictable times.

And finally, if you like the idea of the “Online” option but want even more flexibility about when and where, we have “SSCD” (Self-Study CD) – in effect the online class on a CD, so you don’t even have to be connected to the Internet to dip in and learn something new.

Not all of our titles are available across all the styles, but the range is growing daily. Now you have no excuse for not finding something in a format that will suit your learning needs.

Happy training.

About the Author:

David North
David North is Delivery Director for Oracle Applications in the UK, Ireland and Scandinavia and is responsible for Specialist Education Services in EMEA. He has been working with Oracle Applications for over 9 years and in the past helped customers implement and roll out specific products in just about every country in EMEA. He also trained many customers from implementation and customisation through to marketing and business management.

Tuesday Jun 14, 2011

New ways for backup, recovery and restore of Essbase Block Storage databases – part 1 by Bernhard Kinkel

Backing up databases, and providing the necessary files and information for a potential recovery or restore, is crucial in today’s working environments. I will therefore present the interesting new options that Essbase provides for this starting from version 11, and, related to this, a powerful data export option using Calc Scripts, which has been available since release 9.3.

Let’s start with the last point: if you wanted to back up just the data from your database, you could formerly use the Export utility that Essbase provides as an item in the database right-click menu in the Administration Services Console. This feature is still available, supporting both Block Storage (BSO) and Aggregate Storage (ASO) databases. But it has some usability limitations: for example, the focus of the export can only be set to Level0, Input Level or All data (the last two options are only available for BSO) – more detailed definitions are not possible. Also, the ASCII format of the export files causes them to become rather large, maybe even larger than your Page and Index files.

That said, importing these files is quite simple, as the export can be (re-)loaded without any load rule as long as the outline structure is the same – even if the database resides on another server. Modifications are also possible by using load rules in combination with an export file in column format.

But now the option to export data using a Calc Script promises more flexibility, smaller files and faster performance. However, it is only available for BSO, as ASO cubes do not leverage Calc Scripts.

For example, in order to focus on even very detailed subsets of data, as is very common in Calc Scripts, you can take advantage of familiar commands like FIX | ENDFIX and EXCLUDE | ENDEXCLUDE. In addition, the new SET DATAEXPORTOPTIONS command provides more options to refine export content, formatting, and processing, including the possibility to export dynamically calculated values. You can also request statistics and an estimate of the export time before actually exporting the data. The following syntax gives you an overview of the available settings:

SET DATAEXPORTOPTIONS
{
DataExportLevel ALL | LEVEL0 | INPUT;
DataExportDynamicCalc ON | OFF;
DataExportNonExistingBlocks ON | OFF;
DataExportDecimal n;
DataExportPrecision n;
DataExportColFormat ON | OFF;
DataExportColHeader dimensionName;
DataExportDimHeader ON | OFF;
DataExportRelationalFile ON | OFF;
DataExportOverwriteFile ON | OFF;
DataExportDryRun ON | OFF;
}

Looking at most of these options will probably already give you an idea of their use and functionality. For more detailed information about the SET DATAEXPORTOPTIONS command options, please see the available Oracle Essbase Online Documentation (rel. 11.1.2.1) or the Enterprise Performance Management System Documentation (including previous releases) on the Oracle Technology Network.
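To illustrate the finer focus these commands allow, the following sketch combines export options with a FIX statement. The member names “Actual” and “Jan” and the file path are assumptions for illustration (as in the common Sample.Basic demo database); the script would export only Level 0 data for actual January values to a comma-delimited text file:

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportColFormat ON;
DataExportOverwriteFile ON;
};
FIX ("Actual", "Jan")
DATAEXPORT "File" "," "c:\Export\Actual_Jan.txt";
ENDFIX
```

The "File" keyword with a delimiter produces an ASCII column-format file, which can be reloaded with or without a load rule as described above.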

My example should focus on the binary export and import, as it provides faster export and load performance than export/import with ASCII files. Thus, in the first section of my script, I will use only two of the data export options, in order to export all data and to overwrite any existing old export file with the new one. The subsequent syntax for the binary export itself is DATAEXPORT "Binfile" "fileName", where "Binfile" is the required keyword and "fileName" is the full pathname for the exported binfile. So the complete script reads:

SET DATAEXPORTOPTIONS
{
DataExportLevel "ALL";
DATAEXPORTOVERWRITEFILE ON;
}
DATAEXPORT "BinFile" "c:\Export\MyDB_expALL.bin";

Tip: Export file names can have more than 8 characters; the extension “.bin” is not mandatory.

The import of the binary file with a Calc Script uses the command DATAIMPORTBIN fileName;. In order to avoid importing a wrong file, or importing into a wrong database, each export file includes an outline timestamp, which the import checks by default. If this check needs to be bypassed, the command SET DATAIMPORTIGNORETIMESTAMP ON; can be placed before the DATAIMPORTBIN line. The import definition for the preceding export could look like the following:

SET DATAIMPORTIGNORETIMESTAMP ON;
DATAIMPORTBIN "c:\Export\MyDB_expALL.bin";

After this rather new option for data export and import, let’s turn to the new backup and restore option for complete databases, provided in Administration Services Console starting with release 11. In addition to, or instead of, the common strategies and methods used previously (like running a third-party backup utility while the database is in read-only mode), this new feature provides an easy, ad-hoc way to archive a database.

Select the Archive Database item from the right-click menu on the database node and in the subsequent window define the full path and name for the archive file, where the extension “.arc” is a recommendation from Oracle, but not mandatory.


The process can optionally be run as a background process, while selecting Force archive overwrites an existing file with the same name.

After starting the archive procedure, the database is set to read-only mode and a copy of the following files will be written to the archive file:


After this, the database returns to read-write mode. However, not all files are backed up automatically by this procedure. The following table shows the files and file types that you would need to back up manually:


Tip: Also make a backup of the file essbase.bak_startup, which is created after a successful start of the Essbase server (formerly this file was named just essbase.bak), as well as the essbase.bak file, which now has a different function: while essbase.bak_startup is only created at server start, and no changes apply to it until the next successful server start, essbase.bak can be compared to the security file and updated manually, or by using a MaxL command, at any time. For a manual update in Administration Services Console, right-click Security under the respective Essbase server and select Update security backup file.

In MaxL, run the command alter system sync security backup. The security files and the CFG file reside in the ARBORPATH\bin directory where you installed Essbase.

As the Archive option by default creates one large file, you have to make sure that the file system you save your archive files to supports large files (e.g. NTFS on Windows). If you need smaller files, Essbase can be configured to create multiple files no larger than 2 GB each by creating the SPLITARCHIVEFILE TRUE entry in the essbase.cfg file.
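For reference, this is simply a line added to essbase.cfg; note that configuration-file changes typically only take effect after a restart of the Essbase server:

```
SPLITARCHIVEFILE TRUE
```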

Restoring an archived database is as simple as the backup itself. First make sure that the database to be restored is stopped. Then, from the right-click menu, select Restore Database and provide the required information about the archive file to be imported, including the full path.


If the backed-up database used disk volumes, select Advanced. The database can be restored to the same disk volumes without any further definitions, or you can define a new mapping for the volume names (e.g. “C” could be replaced by “F”), but you can change neither the number of volumes nor the space used on each volume compared to the original backed-up database. Select to restore in the background if desired, and click OK. The restore is performed and confirmed in the Messages panel.

Tip: Usually you would restore the same database that was previously backed up, but this doesn’t necessarily have to be the case. You can also use the restore feature to create a copy of your database (excluding the files mentioned above, which are not included in the archive file) or to overwrite another database. In both cases you must have an existing database to overwrite: from this “target” database select the Restore Database feature, but make sure you have checked Force Restore in the Restore Database dialog box.

Depending on the frequency of your archiving cycles, maybe the latest backup doesn’t restore the actual latest state of your database: following the backup, you might, for example, have run Dimension Build Rules or Data Load Rules, data loads from client interfaces or calculations. These would not be reflected in the restored database. In this case the new Transaction Logging and Replay option provides a good way to capture and replay post-backup transactions. Thus, a backed-up database can be recovered to the most recent state before the interruption occurred. This feature will be described in the second part of this article coming later this year.

Or – if you can’t wait – maybe you should learn how to use it as well as other important administration topics in our Essbase for System Administrators class; please refer also to the links provided below.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com.



Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

Disclaimer:


All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes and requirements. For guidance, please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences arising from the use or implementation of these features.

Sunday Feb 20, 2011

Oracle Spatial and Transportable Tablespaces by Gwen Lazenby

About

Expert trainers from Oracle University share tips and tricks and answer questions that come up in a classroom.
