Tuesday Nov 17, 2015

Automating Oracle Documaker Interactive and WIP Edit Plug-In Using OpenScript – Part 2

Welcome back to the continuation of our discussion on testing automation with Oracle Documaker! In our last post on Documaker regression testing, we explained the differences between keyword-driven and data-driven frameworks, with our testing strategy modeled after the framework proposed by Carl Nagle. For data-intensive applications such as Oracle Documaker Interactive, a data-driven framework is preferable because it more closely mirrors how the application is actually used. In this post, we will review the testing framework and process used by the Oracle Documaker Quality Assurance team for automated testing of Documaker Interactive in several typical use cases. Let's get started, shall we?

Test Framework and Design

As Nagle pointed out, a framework for automation is indispensable in creating repeatable tests that achieve repeatable results with the same or similar input data. The first step in designing the automated testing process is to define the parameters of the test: what is being tested, how it should be tested, and how we'll know if the test passed or failed. We know we're testing Documaker Interactive, and we know that it will either work to user satisfaction or it won't - so the first and last items are done. But how will we test it? The functionality offered by Documaker Interactive contains many possible function paths that a user can take to achieve similar ends, so we need to determine the typical paths and automate those. A good rule of thumb is to consider what 80% of test cases should look like, and design test cases around those processes first. A typical test case might look like this (keep in mind that I'm abstracting some of the details for the sake of clarity - your actual test cases should be much more detailed):

  • Inputs: one or more recipients, each with address information, supplied via external data source
  • Functions: user creates a new transaction in Documaker Interactive, edits a few fields, saves, opens for edit again, completes.
  • Outputs: completed PDF document.
  • Pass/Fail: Pass if no errors occur and outputs contain expected data (e.g. supplied by external data source and the user). Fail if errors occur or data does not match.

What we've defined above is an abstraction of a testing scenario - a structured test with a defined input, output, and results. As you can see, the scenario is very granular, but is not specific to data - it's a functional test case to verify that the software does what it's supposed to do. It's entirely possible that your own test cases can (and should) be more specific to data, especially if you have an enterprise-wide system that accommodates more than one data source or services more than one line of business. After you have a library of scenarios built up, you'll have a test suite, which you can then use for regression testing on software upgrades, functional changes, and more. At this point you're probably thinking, "Right, ok, I have all that. You said we were actually going to test something?" You're right, I did say that and we will - to do that, we're going to use a few software packages to assist us:

  • Oracle Application Testing Suite - known affectionately as OATS - which is available here and includes OpenScript, which is documented here.
  • The Java Robot class (java.awt.Robot) - used to generate native system events for keyboard and mouse interaction - JavaDoc is here.

For the purposes of this post, we're going to assume you've already installed OATS and Oracle Documaker Enterprise Edition (ODEE), of which Documaker Interactive is a part. If you haven't installed OATS, see the link provided above. If you haven't installed ODEE, I have a series of blog posts that detail an end-to-end installation and configuration of ODEE.

Scripts, Hierarchy, and Data Files

Testing is executed within OATS using a hierarchy of inheritance and execution. At the top of the hierarchy is the master script, which coordinates execution of all the lower-level scripts. The next level is the component script, which as the name implies defines the collection of scripts for a software component. Finally there is the scenario script, which is the lowest level and includes all the details outlined above - inputs, outputs, functions, and pass/fail criteria. Here's a handy diagram in which we have defined multiple components and we show the scenario detail for one component:

The Documaker QA team has a test suite for Documaker Interactive that uses a data file to house all the testing configuration elements used by the master, component and scenario scripts in the test suite. For convenience, the data file is a spreadsheet which contains multiple worksheets, each with unique data that can be replicated and modified to extend the test cases as necessary. The data file is read during the initialization phase of test execution and is stored in a global location for reference across multiple component and scenario levels. All three levels of the automation script use this data file. Let's review the hierarchical test phases and how the data file is used:

  • The master script controls the test. This script checks the component (specifically, the application) and platform being tested (i.e. Documaker Interactive, Windows). The master script references data in the ODEE_Components worksheet of the data file to know which components to execute. The master script then calls the appropriate component-level script in order. The ODEE_Components worksheet contains the following details: component name, release, environment (operating system), test run by, and date run. The component and environment cells are drop-down fields. Based on the selections made in the fields in this worksheet, the script picks the applicable URL and executes the associated test script.

  • The component scripts are used to determine which test scenarios will be run for a component. Each component script references one or more scenarios which are detailed in the TestScenarios worksheet of the data file. Each scenario can be turned on or off for a given component test using the Run Status column value of Y (include in test) or N (exclude from test). The TestScenarios sheet contains all the scenarios for the automation test and the supporting test data for all scenarios. When there are multiple values such as form names or attachment file names, the values should be separated by a semicolon (;). Refer to the example worksheet below. For Scenario_001, look at the Required Fields column and you will see the semicolon-delimited value "34564675;566787;37,500". This string will be parsed by the scenario script and populated into required fields (see the sketch following this list).

  • Scenario scripts are the actual tests that are executed. Each scenario is created as a different method in OpenScript, based on the required functionality that needs to be performed. These methods can be reused and called by other scenarios, so it is possible for a basic scenario to have many variations with little actual code that must be created to support each.
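As an illustration of that delimiter convention, here is a minimal sketch of how a scenario script might split a cell value pulled from the TestScenarios worksheet (the variable names are ours, and the value is hard-coded here rather than read from the spreadsheet):

String requiredFields = "34564675;566787;37,500"; // cell value from the Required Fields column
String[] values = requiredFields.split(";");      // semicolon separates the individual entries
for (String value : values) {
    // each entry is populated into its corresponding required field by the scenario method
    System.out.println("Required field value: " + value);
}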

The data file has other supporting sheets that are used by the various testing scripts for automation and control:

  • Interactive_Users - this sheet contains user credentials, roles, and approval levels.

  • Interactive_forms - this sheet contains all forms with approval levels used for different test scenarios. When you add a new form, that form gets added to both the object library and to this worksheet. From there, the form can then be used across all scenarios. To do this, add the form name to the Forms_List column in the TestScenarios worksheet.

  • Addressees - this sheet is used to add addressees to the data set, which will be shown on the Addressee tab while creating documents within Documaker Interactive. A new addressee can be added in the same pattern as the existing entries in the Addressees worksheet.

Putting the Test Together

We have outlined the test cases and the test data for control and execution. Now comes the fun part - we actually need to build the test! But before we do that, I must remind you that it is important to figure out how wide your test cases ought to be. By width I mean how much of the system's functionality the test case should cover. It's tempting to make a scenario cover an entire end-to-end test, across all layers of the system, from upstream data feed to downstream printing or electronic delivery. With OATS you have the power to do that, but as a wise man once said, "With great power comes great responsibility," and test design is no different! A good practice, which is reinforced by the OATS hierarchical design, is to limit a scenario to functionality within a component. That way, you can limit the Documaker Interactive test to include only the functionality that's needed within that component, and external components can be covered by other scenarios. Why am I saying this now? Because as you're going to find out, we're jumping right into Documaker Interactive - no creating a transaction, dropping data, invoking a web service, or anything else. Our assumption will be that the data is there, because it was provided by another test scenario and therefore should be tested there. This keeps our test scenario smaller and easier to manage.

While we're on the subject of test scenarios, I should point out that you can use the file system to your advantage here as well - since you're going to have a data file out there with all the control parameters for your scenarios, you can also create an attachments folder and use it to store any test documents that you will be attaching in Documaker Interactive (keeping in mind our plan to segregate test scenarios by component, we'll assume this attachment is coming from a user desktop or provided by an external system).

As mentioned above, we're going to use the OpenScript component of OATS in combination with the Java Robot class. If you have used Documaker Interactive before, you know that it uses a plugin called WIPedit to facilitate data entry onto documents. Part of the process for test script creation includes the ability to record user interaction with a browser, which then generates the OpenScript code that you can customize. The OpenScript recording capability will capture user interaction with web components, but cannot capture events within WIPedit, so for those we will use the Robot class to programmatically generate keyboard and mouse input. The screen shot below illustrates the area of differentiation between web components and WIPedit - note that the WIPedit area is everything below the toolbar, inclusive of the form set tree and the document preview window:

When recording your scripts, you'll need to note what input events (keyboard/mouse) are occurring that aren't going to be captured by the recording. In the screen shot above, I have clicked on Zoom Normal, which is a web component as it's in the toolbar. When I go back to edit the recorded script, I'll need to programmatically move the mouse and simulate clicks from the point of departure from web components. Here's a code snippet of how this will work:

oracle.oats.scripting.modules.functionalTest.common.api.internal.types.Point p=web.element("{{obj.ODEE_Interactive.NewDoc_Document Tab_Zoom_Normal button}}").getElementCenterPoint();

Once the position of the Zoom Normal button is captured, I need to move the pointer 40 points down and 40 points to the left using the Java Robot object to place the mouse pointer on the document:

robot.mouseMove(p.x-40, p.y+40);

Now we'll execute a right-click to expose the context menu, move the pointer, and execute a left click to select the "Check Required Fields" menu item:

// Right CLICK (BUTTON3 is the right mouse button)
robot.mousePress(java.awt.event.InputEvent.BUTTON3_DOWN_MASK);
robot.mouseRelease(java.awt.event.InputEvent.BUTTON3_DOWN_MASK);

// Moving to 1st option in right click menu "Check required fields"
robot.mouseMove(p.x-40+87, p.y+40+13);

// Left CLICK (BUTTON1 is the left mouse button)
robot.mousePress(java.awt.event.InputEvent.BUTTON1_DOWN_MASK);
robot.mouseRelease(java.awt.event.InputEvent.BUTTON1_DOWN_MASK);
From here we can continue to flesh out the remainder of the scenario until the test case is completed. This means populating any fields with data (e.g. from your data file to simulate user input), submitting for approval, generating previews, and the like - whatever is required for your test case. A special footnote: dialog boxes generated from WIPedit are detectable by OpenScript, so it is not necessary to use the Robot class to interact with these elements. Have fun putting together your scenarios - when you're done, it's time to execute the tests with OATS. A few pointers here:

  • ErrorScreens - your OATS scripts can store a screenshot of browser windows at the time an error occurs during execution of a test scenario, which is quite helpful in seeing what's happening from a user perspective. Screen captures will be stored in this directory and are named according to the release, build, environment, and scenario undergoing testing. Note that this particular naming convention is specific to Documaker QA's testing scripts, so you don't have to replicate this as-is.
  • OpenScriptLogs - logs for the test are stored in this subdirectory. Every activity, along with the values entered into web fields, gets logged. Logs can be used for troubleshooting in the event of a test failure. If multiple iterations of the test are run on the same environment, release, and build, the log gets appended; when the environment, release, and/or build changes, a new log file is created. This file gets initialized when the main script is executed.
  • TestReports - the test report of each successful test run is stored in this subdirectory as an *.xls file, and is initialized when the main script is executed. If the test run is aborted or stopped for any reason, the test report is not generated; in that case, the log file in the OpenScriptLogs directory will contain the results up to the last successfully executed test step.

I hope you've enjoyed this glimpse into the world of regression testing, and that you were able to glean something useful that you can implement within your own environment. If you need assistance with regression testing, OATS, OpenScript, or any of the other technologies or concepts mentioned herein, please head over to the Oracle Forum and post a query. Until next time!

Saturday Oct 24, 2015

Connecting MQSeries to ODEE with JMS Bridge

Welcome to another edition of the Documaker Tech blog! Today we'll be showing how to connect MQSeries queues to Oracle Documaker Enterprise Edition (ODEE). Fair warning: this post will be acronym-intensive, and as such I will endeavor to present the full meaning of an acronym before using it. So let's dive in! If you're not familiar with the concept of a message queue, a cursory internet search will turn up a wealth of information. For our purposes, you need to know that message queues are used to provide synchronous or asynchronous communication between two or more processes. Synchronous (sync) communication means that the sender will wait for the receiver to acknowledge receipt of the message (also known as a response), whereas asynchronous (async) means that the sender will not wait for a response. Async is also known as "fire-and-forget", or FAF. It is also possible to have multiple senders and receivers using the same queue - that is, you might have multiple senders placing messages into a queue, and then multiple receivers retrieving messages from the queue. It is this capability that is used to provide scalability within ODEE.

Internal Queueing
Internally, ODEE uses queues to distribute work units among the workers in a factory Assembly Line. Queues are necessary to support distributed work and provide part of the backbone of the infrastructure that enables ODEE to scale to enterprise-level processing capacity - the other part of the backbone being the database. ODEE follows the factory model for document production: an Assembly Line represents a document production configuration, which is serviced by multiple workers to generate documents. The workers perform different tasks, and scale independently of one another to accommodate changing work loads. ODEE has a defined path that each document request will follow in order to complete assembly. This path is orchestrated by the Scheduler worker, which notifies each successive pool of workers when work is available. This notification is done using queues - here’s an example:
  • Document Generation transaction is enqueued from external application into the Request Queue.
  • The Receiver, an ODEE Worker, dequeues the transaction, and starts a Job within ODEE. Note that there could be multiple Receiver workers, and only one is needed to pick up the request to start the transaction.
  • The Scheduler, an ODEE Worker, monitors the system for new Jobs, and notes the new Job. The Scheduler enqueues a message for the Identifier worker pool.
  • All Identifier workers periodically check in to their request queue for new work - one of these instances will pick up the Job, and will mark it as in process.
  • The Identifier worker completes its task with this Job, and updates the system accordingly.
  • The Scheduler, ever watchful, notes that the Identifier phase of this Job has completed, and so notifies the next pool of workers that need to service the job.
  • This Scheduler->queue->worker->update process repeats until the Job is completed. The entire cycle typically takes place in a second or two (or on decent hardware, sub-second!).

External Queuing
In addition to internal queues, ODEE uses queues externally as an integration point, enabling it to accept processing requests from other applications. In the default installation, these are JMS queues named ReceiverReq and ReceiverRes.

Queue Requirements
ODEE 12.4 and earlier Enterprise Edition releases use:
  • Java Message Service (JMS) queues to distribute workload among Assembly Line factory worker pools.
  • A Java application server (JAS) such as Oracle WebLogic Server (WLS) or IBM WebSphere Application Server ND (WAS).
  • JMS providers that implement the JMS 1.1 Specification - specifically, WLS 10.3.6 and WAS ND.
During ODEE installation, the deployment scripts will create the necessary artifacts within the target JAS. For WLS, this means a JMS Server and associated module and sub deployments will be created and configured automatically. For WAS, this means the associated components will be created and configured on the WAS Service Integration Bus (SIB). The resulting software deployment is configured to utilize the chosen JAS queues.


During a recent implementation I was presented with a design decision: how to integrate ODEE with IBM WebSphere MQ (aka MQSeries) to extend interoperability to a customer's application landscape that was already using MQSeries? ODEE can use MQSeries for its queuing infrastructure, provided the connectors have been configured to activate JMS capability within MQSeries. In this particular situation we wished to avoid placing the internal queuing infrastructure on MQSeries for a number of reasons (cost and proximity of the MQ host to the ODEE environment, to name two) - so we chose a different approach: use the WLS JMS implementation for internal queuing and MQSeries for external queuing, and support an out-of-the-box configuration. Conveniently, this solution is provided out of the box with Oracle WebLogic Server with some minimal configuration, which will connect the MQSeries queues with the external integration queues ReceiverReq and ReceiverRes. Let's start with a few assumptions:

  • Physical MQSeries queues should already exist; we will use REQQ and RESQ as our example queues;
  • MQ Queue Manager name, host, and port are known (QMGRNAME, QHOST, and 1480 are our respective values in this example). Note that 1414 is the default, and we are using a non-default value;
  • Network paths from the WLS server to MQSeries server exist and are open;
  • WLS 10.3.6 will be used as the JAS for ODEE; and
  • You have sufficient rights to connect and create objects.

Activating MQSeries JMS
First, we need to create a JNDI tree that references and binds the MQSeries artifacts (e.g. queues and connection factories). The JNDI tree can be file-based, LDAP-based, or JAS-based, depending on your needs. For the purposes of this post we'll assume a file-based JNDI tree. MQSeries includes a tool called JMSAdmin, which is in MQ_HOME/Java/bin (MQ_HOME is the installation directory of MQSeries). In order to run this tool, you will need to modify the JMSAdmin configuration file, which is called JMSAdmin.config. This file is located in the same directory as the tool itself, and you can edit the file with any text editor. Set the following values:
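At a minimum, the initial context factory and provider URL must point to the file-based JNDI store - these are the same two values we will reference again later when configuring WebLogic:

     INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
     PROVIDER_URL=file:/c:/mq_jndi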

The directory specified in the PROVIDER_URL setting must be created before you attempt to start the JMSAdmin tool - otherwise, the tool will fail! Now you can run the tool by executing MQ_HOME/Java/bin/JMSAdmin.bat or MQ_HOME/Java/bin/JMSAdmin.sh. Note that the tool uses a proprietary command protocol which is documented here. In the tool, you will execute the following steps:
1. Define the references to the queues in the tool. Note: it is not required to use a different local name (e.g. "MQREQ" or "MQRES"); it could be the same as the physical queue name.

     InitCtx> Def q(MQREQ) queue(REQQ) qmgr(QMGRNAME) host(QHOST) port(1480)
     InitCtx> Def q(MQRES) queue(RESQ) qmgr(QMGRNAME) host(QHOST) port(1480)

2.  Define the reference to a queue connection factory in the tool:

     InitCtx> Def qcf(MQQCF)

3. Display the context, inspect the output, and end.

     InitCtx> dis ctx

    Contents of InitCtx
       .bindings     java.io.File
     a MQREQ      com.ibm.mq.jms.MQQueue 
     a MQRES      com.ibm.mq.jms.MQQueue 
     a MQQCF      com.ibm.mq.jms.MQQueueConnectionFactory
     4 Object(s)
       0 Context(s)
       4 Binding(s), 3 Administered

     InitCtx> end

As I mentioned, it is also possible to create an LDAP-based or JAS-based JNDI tree, but we'll explore that in a future post. For now, let's continue using the file-based JNDI tree.

Preliminary Setup
First, you'll need to obtain some JAR files from your MQSeries installation and add them to the ODEE domain. Locate the following files and copy them to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/lib (where MIDDLEWARE_HOME is the WLS installation directory):
  • com.ibm.mq.commonservices.jar
  • com.ibm.mq.defaultconfig.jar
  • com.ibm.mq.headers.jar
  • com.ibm.mq.jar
  • com.ibm.mq.jms.Nojndi.jar
  • com.ibm.mqjms.jar
  • connector.jar
  • dhbcore.jar
  • fscontext.jar
  • jms.jar
  • jndi.jar
  • providerutil.jar
Once placed, you’ll need to restart the domain (e.g. ODEE WLS AdminServer).

Add MQSeries to WebLogic
Next, we'll add our MQSeries configuration to WLS as a Foreign JMS Provider. Make sure WLS is running, and open a browser to the administration console (http://hostname:port/console). In the console, use the left-hand pane to navigate to Services->Messaging->JMS Modules. Locate the JMS Module installed with ODEE (usually called AL1Module) and click it. Click the New button, select Foreign Server from the list of available options, and then click Next. Give the Foreign Server a name (e.g. MQSERIES), click Next, accept the default targeting (to jms_server), and click Finish.

A. Click on the Foreign Server you just created. You will need to define some additional parameters to your Foreign Server: 
  • JNDI Initial Context Factory. Set this to the same value we used in the JMSAdmin.config, that is, com.sun.jndi.fscontext.RefFSContextFactory
  • JNDI Connection URL. Set this to the same value we used in the JMSAdmin.config, that is, file:/c:/mq_jndi
     Click Save.
B. Click the Destinations sub tab and on the next screen, click New. Here we will define the Foreign Destinations (recall we created these with the JMSAdmin tool), which requires three parameters:
  • Name. This is the internal name of the MQSeries queue, used only for display purposes. Set to MQREQ, to keep things simple. 
  • Local JNDI Name. Set to MQREQ. Can be anything, as it is used locally and not on the MQSeries side, but I recommend using the same name as the next parameter.
  • Remote JNDI name. Must be set to the name of the queue defined in JMSAdmin, e.g.  MQREQ
     Click Ok. Repeat the above step to create another Foreign Destination, this time for MQRES.
C. Click on the Connection Factories sub tab, and then click New. Enter the following settings to define the Foreign Connection Factory:
  • Name. This is the internal name of the MQSeries queue connection factory, used only for display purposes. Set to MQQCF, to keep things simple. 
  • Local JNDI Name. Set to MQQCF. Can be anything, as it is used locally and not on the MQSeries side, but I recommend using the same name as the next parameter.
  • Remote JNDI name. Must be set to the name of the queue connection factory defined in JMSAdmin, e.g. MQQCF.
      Click Ok.
At this point, you should be able to navigate to Environment->Servers->jms_server and then click on View JNDI Tree in the WebLogic console and see the two queues and queue connection factory listed. If not, this means that the Foreign JMS Server references could not be created - usually an indication that either the required MQSeries JAR files are not present in the ODEE domain, or the JNDI Connection URL is not accessible. Check your log files for additional information.
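If you prefer to verify the bindings programmatically, here is a minimal standalone sketch (hostname and port are placeholders for your jms_server, and it assumes the WebLogic client JAR is on the classpath) that looks up the foreign objects by their local JNDI names:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class VerifyForeignJndi {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://hostname:port"); // your jms_server host and port
        InitialContext ctx = new InitialContext(env);
        // If the Foreign Server is healthy, each lookup returns the MQSeries-backed object.
        System.out.println(ctx.lookup("MQQCF")); // foreign connection factory
        System.out.println(ctx.lookup("MQREQ")); // foreign request queue
        System.out.println(ctx.lookup("MQRES")); // foreign response queue
        ctx.close();
    }
}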

Bridging the Connection from MQ
At this point, we have added the MQSeries queues as foreign JMS queues to our ODEE domain in WLS. What remains is to bridge the default external integration queues ReceiverReq and ReceiverRes to the foreign queues. To do so, back in the WLS Console, click on Services->Messaging->Bridges. Click New. We are creating the bridge for messages coming from MQSeries - enter the following properties:
  • Name - this is for viewing purposes only; call it BRIDGEFROMMQ.
  • Selector - not required; leave blank.
  • Quality of Service - this determines how the bridge tracks messages and ensures they are delivered (e.g. in case of a possible missed delivery, it can resend the message). For this demonstration, choose Exactly Once.
  • Initial State - tick the Started box.
Click Next. Click New Destination. We are creating Source destination for our BRIDGEFROMMQ bridge, so we’ll need to define the source queue:
  • Name - this is for viewing purposes only; call it FROMMQ_SOURCE.
  • Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name).
  • Adapter Classpath - leave blank.
  • Connection URL - leave blank.
  • Connection Factory JNDI Name - set to MQQCF.
  • Destination JNDI Name - set to MQREQ.
Click Ok. You should now see FROMMQ_SOURCE selected in the dropdown. Click Next. In the Messaging Provider selection, choose Other JMS Provider. Click Next. Click New Destination. We are creating Target destination for our FROMMQ bridge, so we’ll need to define the queue:
  • Name - this is for viewing purposes only; call it FROMMQ_TARGET.
  • Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name).
  • Adapter Classpath - leave blank.
  • Connection URL - leave blank.
  • Connection Factory JNDI Name - set to jms.al1.qcf - This is the queue connection factory of the target destination, which is the ReceiverReq queue. The name I’ve chosen here is the default installation name.
  • Destination JNDI Name - set to jms.al1.receiverreq.
Click Ok. Choose FROMMQ_TARGET in the dropdown. Click Next. In the Messaging Provider selection, choose WebLogic Server 7.0 or Higher. Click Next. Choose jms_server as the target, click Next, then click Finish. We’re almost done!

Bridging the Connection to MQ
As you might have guessed, we've created the bridge from MQ to WLS, and now we need to create the bridge from WLS to MQ. In the WLS Console, click on Services->Messaging->Bridges. Click New. We are creating the bridge for messages going to MQSeries - enter the following properties:
  • Name - this is for viewing purposes only; call it BRIDGETOMQ.
  • Selector - not required; leave blank.
  • Quality of Service - this determines how the bridge tracks messages and ensures they are delivered (e.g. in case of a possible missed delivery, it can resend the message). For this demonstration, choose Exactly Once.
  • Initial State - tick the Started box.
Click Next. Click New Destination. We are creating Source destination for our BRIDGETOMQ bridge, so we’ll need to define the source queue:
  • Name - this is for viewing purposes only; call it TOMQ_SOURCE.
  • Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name).
  • Adapter Classpath - leave blank.
  • Connection URL - leave blank.
  • Connection Factory JNDI Name - set to jms.al1.qcf
  • Destination JNDI Name - set to jms.al1.receiverres
Click Ok. You should now see TOMQ_SOURCE selected in the dropdown. Click Next. In the Messaging Provider selection, choose WebLogic Server 7.0 or higher. Click Next. Click New Destination. We are creating Target destination for our TOMQ bridge, so we’ll need to define the queue:
  • Name - this is for viewing purposes only; call it TOMQ_TARGET.
  • Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name).
  • Adapter Classpath - leave blank.
  • Connection URL - leave blank.
  • Connection Factory JNDI Name - set to MQQCF
  • Destination JNDI Name - set to MQRES
Click Ok. Choose TOMQ_TARGET in the dropdown. Click Next. In the Messaging Provider selection, choose Other JMS Provider. Click Next. Choose jms_server as the target, click Next, then click Finish. That's it! Make sure your changes have been activated, and the requisite WLS server(s) restarted. To test, deposit a message in the MQREQ queue (it should take the same XML input in SOAP format as the doPublishFromImport web service method). Here's an example - note where the input extract XML should be placed in Base-64 encoded format:

<cmn:Binary>**replace with base-64 encoded extract data**</cmn:Binary>
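If you do not have an MQ client tool handy, a test message can also be dropped onto MQREQ with a small standalone JMS program. This is only a sketch - it reuses the file-based JNDI store built with JMSAdmin, requires the MQSeries client JARs on the classpath, and the payload placeholder stands in for the full SOAP request containing the <cmn:Binary> element shown above:

import java.util.Hashtable;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class EnqueueTestMessage {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        env.put(Context.PROVIDER_URL, "file:/c:/mq_jndi");
        InitialContext ctx = new InitialContext(env);

        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("MQQCF");
        Queue requestQueue = (Queue) ctx.lookup("MQREQ");

        QueueConnection connection = qcf.createQueueConnection();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueSender sender = session.createSender(requestQueue);

        // Placeholder: the full doPublishFromImport SOAP request, with the base-64 extract in <cmn:Binary>
        String soapPayload = "...";
        sender.send(session.createTextMessage(soapPayload));

        sender.close();
        session.close();
        connection.close();
        ctx.close();
    }
}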

After a moment, check the MQRES queue for a response message. You may uncomment the <cmn:ResponseType> node with the Attachments value if your system is configured to return PDF values. Additional configuration may be necessary depending on your specific system and requirements - consult with an ODEE and/or MQSeries subject matter expert and you'll be on your way to integrated messaging in no time!

Friday Oct 16, 2015

Automating Oracle Documaker Interactive and WIP Edit Plug-In Using OpenScript

In this post, we detail the steps our Oracle Documaker QA team uses to automate some regression test cases, specifically Documaker Interactive and WIP Edit Plug-in using Oracle Functional Testing and OpenScript. This is the first in a series of posts focused on Documaker regression testing.

Please note: This blog post does not explain how to install or configure Oracle Documaker Interactive (DI), Oracle Documaker WIP Edit Plug-in, or Oracle Application Testing Suite (OATS). Installation and configuration instructions for Documaker Interactive are included in the Oracle Documaker Enterprise Edition (ODEE) Installation Guide. WIP Edit Plug-in installation instructions are in the Documaker Web-Enabled Solutions User Guide. Both of these guides are available on the Oracle Technology Network (OTN) under Documaker on the Oracle Insurance Documentation site. The blog post series "ODEE Green Field (Windows)" also provides detailed information on Oracle Documaker Enterprise Edition (ODEE) installation and Documaker Interactive.

What is regression testing?

Regression testing is the process of retesting software after changes are made to ensure that the changes have not broken any existing functionality.

Why bother to perform regression testing?

Oftentimes, tech companies release new software features with much fanfare. There's nothing more irritating for users than discovering that those new features have broken existing functionality, especially if that functionality is critical to their operations. When that happens, business users must wait for the software developers to come up with a fix. If that fix breaks something else, users must report the problem to the software developers and spend more time waiting. And the cycle continues. The users are stalled, and their organization can't benefit from the new features while they wait. Meanwhile, the software developers spend so much time troubleshooting and fixing bugs that they can't work on new features, enhance existing ones, or otherwise make the software more beneficial for users. Regression tests can be executed against the entire system (soup to nuts) or against specific products or areas. Quality assurance (QA) staff, software, or a combination of the two may conduct regression testing.

More on manual testing

In manual testing, the QA team follows a written test case, which includes action steps and expected results. Manual tests can be useful to help familiarize new users with the product and workflow. However, there are drawbacks to the manual method. Depending on the nature of the test, the process can quickly become tedious and monotonous, which may lead to QA staff overlooking problems. Situations that require QA staff to run manual regression tests in multiple operating systems or multiple browsers for multiple builds during the development cycle can leave the project more vulnerable to potential oversight.

More on automated testing

In automated testing, automation software is used to create and execute automation scripts. Automated testing reduces the potential for mistakes made during monotonous, repetitive manual testing. And because you can run these tests anytime, day or night, you have more time to increase your test coverage. Automated testing can run unattended on different machines and operating systems, which frees up time for users to spend on other important tasks such as functional testing of new features. Keep in mind that not all tests are well suited for automated regression testing. For example, if an interface is subject to frequent changes, manual testing is the best option until the interface stabilizes.

Oracle Documaker Interactive

Oracle Documaker Interactive is the interface used to create and edit documents for distribution. It’s one of the components in Oracle Documaker Enterprise Edition (ODEE).

Oracle WIP Edit Plug-in

WIP Edit Plug-in is used in conjunction with Documaker Interactive. It is a browser-based plug-in that lets you create, edit and submit transactions in a WYSIWYG (what you see is what you get) format.

Oracle Application Testing Suite (OATS)

Oracle Application Testing Suite or OATS is an integrated testing solution. It consists of these integrated products:

  • Oracle Functional Testing - automated functional and regression testing of web applications
  • Oracle Functional Testing Suite for Oracle Applications - functional and regression testing of Oracle packaged applications
  • Oracle Load Testing - scalability, performance and load testing of web applications
  • Oracle Load Testing Suite for Oracle Applications - scalability, performance and load testing of Oracle packaged applications
  • Oracle Test Manager - test process management, including test requirements management, test management, test execution and defect tracking.  

Oracle OpenScript

OpenScript is used in Oracle Functional Testing and Oracle Load Testing. It enables you to create automated tests for Web Applications. You can record, script or manually create tests using the different frameworks that the tool supports. OpenScript can also be integrated with the test management component of Oracle Test Manager (OTM). You can initiate the tests in OTM or OpenScript. OpenScript Workbench has multiple views including a GUI (Graphical User Interface) and Java Code View. You can record and play back tests in the GUI View, and script or edit tests using the Java Code View. More information on OATS is available here.

Test Frameworks

A test framework is the set of assumptions, concepts and tools that provide support for automated software testing. It includes the processes, procedures and environment in which automated tests will be designed, created and implemented and the results reported. There are several different frameworks and scripting techniques. They include:

  • Linear
  • Structured
  • Data-driven
  • Keyword-driven
  • Hybrid (two or more of the above are used)
  • Agile automation framework

For this post, we’ll focus on two of these frameworks: keyword-driven and data-driven.

Keyword-Driven Frameworks

Keyword-driven frameworks look very similar to manual test cases. In a keyword-driven framework, the functionality of the application under test is documented in a table, along with step-by-step instructions for each test. Each keyword describes an action and its input data. The framework requires the development of data tables and keywords, which are independent of both the test automation tool used to execute them and the test script code that "drives" the application under test and the data.

Let’s use this example:
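Since every implementation's keyword tables will differ, the layout below is illustrative only (the column names and values are made up):

     Keyword        First Name   Last Name   Address           City       State
     Enter Client   Jane         Doe         123 Main Street   Anytown    TX
     Enter Client   John         Smith       456 Oak Avenue    Sometown   OK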

Column A contains the keywords “Enter Client.” Enter Client is the functionality being tested. The remaining columns contain data to execute the keyword. To enter another client, you would add another table row.

Data-Driven Frameworks

Data-driven frameworks are driven by test data. Test scripts are built so they will work with different sets of data covering different scenarios without test script changes. In a data-driven framework, test input and output values are read from data files (e.g., data pools, ODBC sources, CSV files, Excel files, DAO objects, ADO objects, etc.) and are loaded into variables in the scripts. In this framework, variables are used both for input values and for output verification values. Data-driven frameworks are preferable for applications that use large amounts of input data, such as Documaker Interactive.

More information on testing frameworks is available here.

Thursday Mar 26, 2015

Documaker Integration : A Primer

Integration of a document automation system with external systems is a critical phase in any Documaker implementation. Whether Documaker is being deployed in a "green field" scenario or into a mature enterprise, the fact remains that integration must occur on one or more levels. My goal in this blog post is to explore some of the concepts and methods required to define and establish the integrations available in a typical implementation of Oracle Documaker Enterprise Edition, with the hope this may help readers think about possible solutions for their own use.

Establish the Foundation

As I mentioned, Documaker has two possible implementation targets: the green field and the established enterprise. Each target presents both unique and common challenges that must be overcome during the project. In a green field implementation, the software packages being implemented are typically new from top to bottom. This type of target is typically encountered when a brand new company, business unit, or product line is being built from the ground up with new systems, procedures, and software. Implementing within the established enterprise refers to the process of upgrading or replacing an existing software package, or implementing new software within an existing infrastructure. This type of implementation is most often seen with established companies, units, or product lines and serves to enhance the functional offerings of the enterprise to serve the needs of the business. In both cases, where Documaker is concerned, there will be integrations - either to new or existing software, processes, and procedures, which form the basic foundation of the enterprise. In order to perform the subsequent steps, we must know the capabilities of the foundation. Therefore, the final activity in this step is to establish a functional catalog of capabilities offered by each component of the system. The table below illustrates one such method of capabilities cataloging.

Table 1, Capabilities Catalog

  • Policy Administration System - Integration method: file system delivery. Format: fixed-record format. Notes: schema dictated by the product; new record types and data can be added.
  • Billing System - Integration method: web service. Format: fixed schema. Notes: XSL-controlled schema with fixed element names; new elements cannot be added.
  • Content Management System - Integration method: web service. Format: fixed schema. Notes: product-controlled schema; new data elements can be added.
This table illustrates the input and output capabilities of the various elements in the foundation system, and how they may (or may not) be changed to accommodate additional business requirements.

Data Flow Mapping

Having defined the capabilities, we can now further refine business requirements and (hopefully) obtain a match between the requirements and capabilities. When performing a Documaker implementation, part of the business requirements analysis will involve form and data analysis. This analysis defines, among other things, the data needed for document triggering, data mapping, and controlling other aspects of handling document automation. This information constitutes requirements for systems that feed information to Documaker (“upstream” systems). Similarly, there may be a need for Documaker to feed information to systems after processing (“downstream” systems), such as archive repositories, publishing systems, delivery systems, and the like. By now you can probably see that defining the upstream and downstream requirements is directly related to the table of capabilities we developed in Step 1. From here we can further refine our data requirements for each system. The table below represents a simplified view into cataloging the data requirements for each system:

Table 2, Data Requirements

  • Content Management System - Data element: Account Number. Purpose: required for indexing output.
  • Mail Processing System - Data element: document barcode (see barcode requirements). Purpose: required for sorting mail pieces to obtain postal discounts.

Each data element is also mapped back to the corresponding capability identified in Table 1.
Once these two steps have been fully executed and mapped out, we’ll have an accurate picture of all upstream and downstream data requirements as well as a map of how those data elements can be captured and passed between systems.

Targeting Documaker

At this point, what we have defined is not specific to Documaker – the steps I’ve outlined above are very generic and could be applied to any software implementation in any enterprise where data interchange and software integration is required. The key is that we are now in a frame of mind where we can start to consider the integration capabilities of Documaker, and how they will map into a possible solution. What better way to do that than to present a capabilities catalog for Documaker Enterprise Edition, as we would do in an actual implementation! The table format I’ve chosen to represent the capabilities catalog is slightly different from Table 1, as I have formatted the table to represent capabilities specific to Documaker.

Table 3, Documaker Input Channels

  • Transactional Input (record-based) - Delivery: hot folder, web service, or direct queue. Record layout is dictated by upstream capability, informed by form requirements and downstream requirements. Only one layout is supported per assembly line.
  • Transactional Input (schema-based) - Delivery: hot folder, web service, or direct queue. Schema is dictated by upstream capability, informed by form requirements and downstream requirements. Only one schema is supported per assembly line.
  • ETL-to-Transactional Input - Delivery: other files. An ETL tool such as Transall, a Documaker component, is used to consolidate multiple input sources into a single transactional input.
  • Interactive Augmentation - Delivery: Documaker Interactive, via web service-to-various. Documents that are processed in Documaker Interactive can be augmented by external data, which is retrieved by Documaker Interactive using a custom web service. This service accepts input from Documaker Interactive (transactional or user-entered data), performs operations to retrieve additional data, formats it if necessary, and returns it using a pre-defined schema.
  • Scripted Augmentation - Delivery: database or file-based. DAL scripting can be used to obtain data from external sources to augment transactional data, or for use during processing as needed. This method is usually not recommended as it can present problems when scaling across servers; the preferred model is Consolidation (ETL) or Interactive Augmentation.
The following table outlines the output capabilities of Documaker.

Table 4, Documaker Output Channels

  • Archiver - Destinations: WebCenter Content (UCM), file system. Output: publication streams and metadata files. This channel is a standard output delivery channel included with Documaker. The Archiver component is used for transmitting documents generated by Documaker to various destinations, and also has the capability to emit accompanying metadata files in arbitrary (user-defined) formats. Metadata includes any information related to the transaction that is carried in the ODEE table structure. A typical use is to submit documents to archive systems with indexing data in the metadata file.
  • Custom Archiver transports - Output: publication streams. The Archiver component has an open framework for adding new output destinations (transports) in addition to those shown above. A typical use case is to write a custom integration to a system that has integration requirements not satisfied by the default transports, e.g. invoking a web service to transmit publications and metadata.
  • Web Services (Distribution) - Output: publication streams. Documaker supports multiple methods of self-service integration. While the Archiver and the customization hooks provided by Archiver are a push model, the Distribution channel is a pull model - suitable for self-service integration via web services, database queries, or the Dashboard UI.
  • Notifications - This channel is a standard output delivery channel supported by Documaker which notifies a recipient that a document has been generated.
  • Print/Email - Output: publication streams. This is a standard Documaker output delivery channel used for pushing documents directly to attached printers, and for sending documents via email.
  • Statistics - This channel is often overlooked, but can be configured to allow ODEE Factory Workers to output statistical information.
While the preceding tables are not exhaustive of all of the possible integration points with Documaker, they are the most common. Some of these channels aren’t considered traditional integration points, but they are useful when thinking about systems management of an enterprise solution that includes Documaker.

I hope you find this information useful!

Wednesday Oct 29, 2014

Why Projects Fail

I'm going to divert a little from my technical orientation for blogging here and discuss something that's critical to any software implementation: project failure. 

To properly frame the discussion about why projects fail, we first need to define a project. The Project Management Institute ("PMI") defines a project as "a temporary group activity designed to produce a unique product, service, or result." (What is Project Management?). This is in contrast to routine activities that use repetitive processes to generate a product or service, such as manufacturing or customer service. PMI further stipulates that a project is temporary because the beginning and end, scope, and resources are finite and defined to achieve a singular goal. Timeline, scope, and resources are the three factors that exert direct influence over the success of a project. Each must be in balance with the other two for the project to meet its goals - if the scope becomes too large, the project will exceed available resources (personnel or budget). Similarly, if the timeline becomes compressed, the scope and resources will not be able to deliver on time. With the triangle of time, resources, and scope in mind, we can say that a successful project is one that is delivered within the timeframe, using the identified resources, and meets the agreed-upon scope.

Having defined a project, we can then explore the concept of project management. Again, PMI provides a broad definition: "Project management, then, is the application of knowledge, skills, and techniques to execute projects effectively and efficiently" (What Is Project Management?). The collection of knowledge, skills, and techniques that have been refined over time to produce reliable, repeatable results is called a methodology. A project management methodology is the primary tool used by project managers to ensure the successful delivery of a project. There are many different methodologies such as Scrum, Agile, Waterfall, and SDLC, to name a few (Project Management Methodologies). The project manager uses the methodology to guide the project to completion by controlling the three legs of the project triangle.

With this framework in mind, we can explore the failure points of the triangle. The scope leg of the triangle involves the definition, acceptance, and management of the requirements of a project. A project with ill-defined requirements that do not meet the needs of the users, users who are unable to achieve consensus on requirements, or pressure to execute before requirements are defined is set on a path to failure (Why Projects Fail). The presence of ill-defined requirements can also be a symptom of two broader problems that can lead to project failure: lack of management buy-in and poor project communication. Project sponsors, executives, and leaders must be in agreement on the high-level deliverables of a project. When these stakeholders are not in alignment, the project scope will be unclear and the project will be in jeopardy. Similarly, the users must also be aware of the goals of the project, otherwise the requirements they devise may be at odds with the parameters of the project. These two factors can be managed with clear, concise communication at all points.

With the scope leg of the triangle properly defined, the time and resource legs can then be constructed. It is possible that the project timeline may be set before the scope has been identified, in which case the project will require additional resources to complete on time. When scope and resources are not properly aligned the project may experience cost overruns or late delivery, both of which constitute failure. One method to avoid misalignment is to use proof-of-concept or pilot programs. These programs can help determine viability of an approach to meeting scope requirements within time and resource constraints, which will improve the chances of success.

When the three legs of the project management triangle have been properly defined, agreed-upon, communicated, and have resources allocated, the remaining step is to execute the project using the methods prescribed by the chosen methodology. Herein also lie additional failure points for the project, one of which is reactive management. If a project starts to exceed the constraints of time, scope, or resources, the application of risk management should be used to return the project to the boundaries of control. Proactive risk management includes proper identification, analysis, and mitigation of project risks before they occur. Failure to perform proactive risk management will cause problems to be addressed in a reactive manner, which will result in schedule slippage, and budget/resource overuse (Why Projects Fail).

In conclusion, we have identified multiple factors that can negatively affect the outcome of a project:

  • Ill-defined requirements;
  • Lack of management buy-in;
  • Poor communication;
  • Misalignment of scope and resources; and
  • Ineffective risk management.


I have also presented some common ways that these pitfalls can be avoided to help guide a project to success. I hope you find this information useful and can avoid project failures in your own endeavors.


  • What is Project Management? (n.d.). Retrieved October 15, 2014, from http://www.pmi.org/About-Us/About-Us-What-is-Project-Management.aspx
  • Project Management Methodologies. (n.d.). Retrieved October 15, 2014, from http://www.tutorialspoint.com/management_concepts/project_management_methodologies.htm
  • Why Projects Fail: Avoiding the Classic Pitfalls. (2011, October). Retrieved October 15, 2014, from http://www.oracle.com/us/solutions/018860.pdf


Tuesday Jul 08, 2014

How to Publish with EWPS - Quick Guide

Getting Started with Documaker and Web Services 

Today I'll be addressing a common question - how do I get started publishing with web services and Documaker? The aim of this guide is to give you a quick rundown on how you can be up and running with publishing via web services and Documaker within a few hours, or even minutes! Before we get started, I think it's pertinent to discuss web services. 


Thursday Jun 12, 2014

ODEE Green Field (Windows) Part 5 - Deployment and Validation

Part 5 of a multi-part blog series on creating an ODEE green field - a sandbox environment with all necessary software components for a deployment of Oracle Documaker Enterprise Edition. This includes Oracle database, SOA Suite, WebLogic application server, and Documaker. This sandbox series covers Windows versions of the software. In this installment, Documaker is deployed and tested to complete the installation.

ODEE Green Field (Windows) Part 4 - Documaker

Part 4 of a multi-part blog series on creating an ODEE green field - a sandbox environment with all necessary software components for a deployment of Oracle Documaker Enterprise Edition. This includes Oracle database, SOA Suite, WebLogic application server, and Documaker. This sandbox series covers Windows versions of the software. In this installment, Documaker is installed.

Wednesday Jun 11, 2014

ODEE Green Field (Windows) Part 3 - SOA Suite

Part 3 of a multi-part blog series on creating an ODEE green field - a sandbox environment with all necessary software components for a deployment of Oracle Documaker Enterprise Edition. This includes Oracle database, SOA Suite, WebLogic application server, and Documaker. This sandbox series covers Windows versions of the software. In this installment, SOA Suite is installed.

ODEE Green Field (Windows) Part 2 - WebLogic

Part 2 of a multi-part blog series on creating an ODEE green field - a sandbox environment with all necessary software components for a deployment of Oracle Documaker Enterprise Edition. This includes Oracle database, SOA Suite, WebLogic application server, and Documaker. This sandbox series covers Windows versions of the software. In this installment, WebLogic is installed.

Monday May 19, 2014

ODEE Green Field (Windows) Part 1 - Intro & Database

A multi-part blog series on creating an ODEE green field - a sandbox environment with all necessary software components for a deployment of Oracle Documaker Enterprise Edition. This includes Oracle database, SOA Suite, WebLogic application server, and Documaker. This sandbox series covers Windows versions of the software. In this installment, the Oracle database is installed.

Friday Mar 12, 2010

Demystifying Docupresentment

In this edition of Inside Document Automation, Andy takes a look at Docupresentment, a powerful queue-based tool for integrating on-demand and interactive applications with Documaker. Learn about downloading, installing, and basic configuration in this post!

Thursday Mar 11, 2010

By Way of Introduction...

With this inaugural post to Inside Document Automation, I'm going to introduce myself and what my aim is with this blog. If you didn't figure it out already by perusing my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative year of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation that accompanies the sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times. So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for, in my humble opinion, the best software company on the planet.

With Inside Document Automation I hope to peek under the covers, go behind closed doors, lift up the hood and bang on the fenders of the tech space within Oracle Documaker.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post as I shed a little light in the underpinnings of our software.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So read on!


A technically-focused, in-depth look at Oracle Documaker including the Enterprise and Standard editions and the software components thereof (Docupresentment, EWPS, DWS, and more).

