Welcome back to the continuation of our discussion on testing automation with Oracle Documaker! In our last post
on Documaker regression testing, we explained the differences between keyword-driven and data-driven frameworks - our testing strategy is modeled after the framework proposed by Carl Nagle. For data-intensive applications such as Oracle Documaker Interactive, a data-driven framework is preferable because it more closely mirrors how the application is actually used. In this post, we will review the testing framework and process used by the Oracle Documaker Quality Assurance team for automated testing of Documaker Interactive in several typical use cases. Let's get started, shall we?
Test Framework and Design
As Nagle pointed out, a framework for automation is indispensable for creating repeatable tests that achieve repeatable results with the same or similar input data. The first step in designing the automated testing process is to define the parameters of the test: what is being tested, how it should be tested, and how we'll know if the test passed or failed. We know we're testing Documaker Interactive, and we know that it will either work to user satisfaction or it won't - so the first and last items are done. But how will we test it? Documaker Interactive offers many possible function paths a user can take to achieve similar ends, so we need to determine the typical paths and automate those. A good rule of thumb is to consider what 80% of test cases should look like, and design test cases around those processes first. A typical test case might look like this (keep in mind that I'm abstracting some of the details for the sake of clarity - your actual test cases should be much more detailed):
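Scenario: create a new document in Documaker Interactive and submit it for approval
Input: an existing transaction with a known form set; field values drawn from the test data file
Steps: log in, open the transaction, complete the required fields, attach a supporting document, and submit for approval
Expected output: the document enters the approval workflow with the entered values intact
Pass/fail criteria: pass if the submitted document's content and status match expectations; fail otherwise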
What we've defined above is an abstraction of a testing scenario - a structured test with a defined input, output, and results. As you can see, the scenario is very granular, but it is not specific to data - it's a functional test case to verify that the software does what it's supposed to do. It's entirely possible that your own test cases can (and should) be more specific to data, especially if you have an enterprise-wide system that accommodates more than one data source or services more than one line of business. After you have built up a library of scenarios, you'll have a test suite, which you can then use for regression testing of software upgrades, functional changes, and more. At this point you're probably thinking, "Right, ok, I have all that. You said we were actually going to test something?" You're right, I did say that and we will - to do that, we're going to use a few software packages to assist us:
For the purposes of this post, we're going to assume you've already installed OATS and Oracle Documaker Enterprise Edition (ODEE), of which Documaker Interactive is a part. If you haven't installed OATS, see the link provided above. If you haven't installed ODEE, I have a series of blog posts that detail an end-to-end installation and configuration of ODEE.
Scripts, Hierarchy, and Data Files
Testing is executed within OATS using a hierarchy of inheritance and execution. At the top of the hierarchy is the master script, which coordinates execution of all the lower-level scripts. The next level is the component script, which, as the name implies, defines the collection of scripts for a software component. Finally, there is the scenario script, which is the lowest level and includes all the details outlined above - inputs, outputs, functions, and pass/fail criteria. Here's a handy diagram in which we have defined multiple components and we show the scenario detail for one component:
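In code terms the hierarchy is straightforward: an OpenScript script is a Java class, and the master script's run() method simply chains the component scripts, each of which chains its scenario scripts in turn. Here's a minimal sketch, assuming the component scripts have been added to the master script as child-script assets (the aliases shown are hypothetical):

public void run() throws Exception {
    // Initialization phase: load the shared data file once, before any component runs
    info("Loading test data file");
    loadTestData(); // hypothetical helper - see the data file sketch below

    // Run each component script; each component runs its own scenario scripts
    getScript("DocumakerInteractive_Component").run();
    getScript("DocumakerDashboard_Component").run();
}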
The Documaker QA team has a test suite for Documaker Interactive that uses a data file to house all the testing configuration elements used by the master, component, and scenario scripts in the test suite. For convenience, the data file is a spreadsheet containing multiple worksheets, each with unique data that can be replicated and modified to extend the test cases as necessary. The data file is read during the initialization phase of test execution and is stored in a global location for reference across the component and scenario levels. All three levels of the automation script use this data file. Let's review the hierarchical test phases and how the data file is used:
The data file has other supporting sheets that are used by the various testing scripts for automation and control:
Interactive_forms - this sheet contains all forms, with their approval levels, used across the test scenarios. When you add a new form, add it to both the object library and this worksheet; the form can then be used in any scenario by adding its name to the Forms_List column in the TestScenarios worksheet.
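Since OpenScript scripts are Java, the spreadsheet can be read with any Java library during the initialization phase. Here's a minimal sketch of loading it into globally accessible storage, assuming Apache POI is on the classpath; the file path and map layout are illustrative:

import java.io.FileInputStream;
import java.util.HashMap;
import java.util.Map;
import org.apache.poi.ss.usermodel.*;

// Read every worksheet into a map keyed by sheet name, then by "row:column"
Map<String, Map<String, String>> testData = new HashMap<String, Map<String, String>>();
DataFormatter fmt = new DataFormatter(); // renders cell values as displayed text
Workbook wb = WorkbookFactory.create(new FileInputStream("C:/test/TestData.xlsx"));
for (int i = 0; i < wb.getNumberOfSheets(); i++) {
    Sheet sheet = wb.getSheetAt(i);
    Map<String, String> cells = new HashMap<String, String>();
    for (Row row : sheet) {
        for (Cell cell : row) {
            cells.put(row.getRowNum() + ":" + cell.getColumnIndex(), fmt.formatCellValue(cell));
        }
    }
    testData.put(sheet.getSheetName(), cells);
}
wb.close();

A scenario script can then pull its parameters by worksheet name - for example, testData.get("Interactive_forms") - without re-reading the file for every scenario.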
Putting the Test Together
We have outlined the test cases and the test data for control and execution. Now comes the fun part - we actually need to build the test! But before we do, I must remind you that it is important to figure out how wide your test cases ought to be. By width I mean how much of the system's functionality a test case should cover. It's tempting to make a scenario cover an entire end-to-end test, across all layers of the system, from upstream data feed to downstream printing or electronic delivery. With OATS you have the power to do that, but as a wise man once said, "With great power comes great responsibility," and test design is no different! A good practice, which is reinforced by the OATS hierarchical design, is to limit a scenario to functionality within a component. That way, you can limit the Documaker Interactive test to include only the functionality needed within that component; external components should be covered by other scenarios. Why am I saying this now? Because, as you're about to find out, we're jumping right into Documaker Interactive - no creating a transaction, dropping data, invoking a web service, or anything else. Our assumption will be that the data is there, because it was provided by another test scenario and therefore should be tested there. This will keep our test scenario smaller and easier to manage.

While we're on the subject of test scenarios, I should point out that you can use the file system to your advantage here as well - since you're going to have a data file out there with all the control parameters for your scenarios, you can also create an attachments folder and use it to store any test documents that you will be attaching in Documaker Interactive (keeping in mind our plan to segregate test scenarios by component, we'll assume this attachment is coming from a user desktop, or was provided by an external system).
As mentioned above, we're going to use the OpenScript component of OATS in combination with the Java Robot class. If you have used Documaker Interactive before, you know that it uses a plugin called WIPedit to facilitate data entry onto documents. Part of the process for test script creation includes the ability to record user interaction with a browser, which then generates OpenScript code that you can customize. The OpenScript recording capability will capture user interaction with web components, but it cannot capture events within WIPedit, so for those we will use the Robot class to programmatically generate keyboard and mouse input. The screen shot below illustrates the area of differentiation between web components and WIPedit - note that the WIPedit area is everything below the toolbar, inclusive of the form set tree and the document preview window:
When recording your scripts, you'll need to note what input events (keyboard/mouse) are occurring that aren't going to be captured by the recording. In the screen shot above, I have clicked on Zoom Normal, which is a web component as it's in the toolbar. When I go back to edit the recorded script, I'll need to programmatically move the mouse and simulate clicks from the point of departure from web components. Here's a code snippet of how this will work:
// Capture the screen coordinates of the Zoom Normal button (a web component)
oracle.oats.scripting.modules.functionalTest.common.api.internal.types.Point p =
    web.element("{{obj.ODEE_Interactive.NewDoc_Document Tab_Zoom_Normal button}}").getElementCenterPoint();
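A quick setup note: the robot object used in the snippets that follow is a standard java.awt.Robot, which needs to be created once in the script. A minimal sketch (the constructor throws AWTException, so it must be declared or handled):

import java.awt.Robot;
import java.awt.event.InputEvent;

Robot robot = new Robot(); // generates native mouse and keyboard events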
Once the position of the Zoom Normal button is captured, I need to move the pointer 40 points down and 40 points left using the Robot object to place the mouse pointer on the document:
robot.mouseMove(p.x-40, p.y+40);
Now we'll execute a right-click to expose the context menu, move the pointer, and execute a left click to select the "Check Required Fields" menu item:
// Right-click to expose the context menu
robot.mousePress(InputEvent.BUTTON3_MASK);
robot.mouseRelease(InputEvent.BUTTON3_MASK);
// Move to the first option in the context menu, "Check Required Fields"
// (offsets are relative to the point we clicked on the document)
robot.mouseMove(p.x - 40 + 87, p.y + 40 + 13);
// Left-click to select the menu item
robot.mousePress(InputEvent.BUTTON1_MASK);
robot.mouseRelease(InputEvent.BUTTON1_MASK);
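Robot handles keyboard input the same way, which is how field values from your data file can be typed into WIPedit fields. Here's a minimal sketch, assuming the caret is already positioned in the target field; the sample value is illustrative, and Shift handling for uppercase characters is left out for brevity:

import java.awt.event.KeyEvent;

String value = "12345"; // in practice, read from the data file
for (char c : value.toCharArray()) {
    int keyCode = KeyEvent.getExtendedKeyCodeForChar(c);
    if (keyCode == KeyEvent.VK_UNDEFINED) continue; // skip unmappable characters
    robot.keyPress(keyCode);
    robot.keyRelease(keyCode);
    robot.delay(50); // brief pause so WIPedit keeps up
}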
There! From here we can continue to flesh out the remainder of the scenario until the test case is completed. This means populating any fields with data (e.g. from your data file to simulate user input), submitting for approval, generating previews, and the like - whatever is required for your test case. A special footnote: dialog boxes generated by WIPedit are detectable by OpenScript, so it is not necessary to use the Robot class to interact with those elements. Have fun putting together your scenarios - when you're done, it's time to execute the tests with OATS.
I hope you've enjoyed this glimpse into the world of regression testing, and that you were able to glean something useful that you can implement within your own environment. If you need assistance with regression testing, OATS, OpenScript, or any of the other technologies or concepts mentioned herein, please head over to the Oracle Forum and post a query. Until next time!