Technical info and insight on using Oracle Documaker for customer communication and document automation

Schedule Batches Like A Pro in Documaker Enterprise

Andy Little
Technical Director

One of the powerful features of the Documaker system is the ability to create batches of output based on specific criteria, such as transaction page count or batch page count. You can also batch on physical characteristics like output size, or on destination characteristics such as mailing via specialized services or items that must be handled differently. Documaker provides a wealth of features that enhance batch processing of transactions. Enter Documaker Enterprise - and your batching world suddenly looks a bit different. Now you have the ability to schedule batches to open and close at specific times or based on specific criteria, and you can sort on just about any piece of data available within a transaction, or metadata about a transaction.

Documaker Standard processes serially - that is, there are several processes that occur in a sequence. You may be familiar with the names of the executables, like GENTRAN, GENDATA, and GENPRINT. Or you may be more familiar with the rule names that are defined in a JDT file that is called by GENDATA in a single-step process. Either way, one process or rule follows another, and when a job of n transactions is received, all of the transactions are processed within that job and are staged for the next process. No transaction is left behind in this case. In the Documaker Enterprise world of scheduled batches, things operate differently. Documaker Factory workers operate independently, but transactions still follow a sequential flow through each worker, marching through the system until they are published. The workers continue doing their job, irrespective of what's happening before or after in the process flow.

Recall above where I mentioned that Enterprise uses scheduled batches that open and close at specific times. What do you think happens when a large job is received with many transactions, and the transactions are still processing through the system when a batch closes? That's right - the batch will close and any transactions from that job that didn't get processed in time will flow into another instance of that batch. So your batch that would normally contain all transactions will now contain just the transactions that were able to be processed by the time the batch closed. What can be done about this?

One solution might be to scale up the number of upstream workers that run before batching, so that processing completes before the batch closes. Another option might be to change the time the batch closes to accommodate the processing time. But these changes might not be feasible - what if you have limited capacity to scale up, or downstream processes that you cannot reschedule, or loads that suddenly peak beyond what you planned for? Or, better yet, what if business is booming and you have to continually monitor processing performance to make sure batches are closing with all of their transactions contained therein? Some of these solutions may be fine in your case, and if so, that's good! But if you're in the situation where none of them seems like a good fit, read on!

WARNING: Technical talk from here onward!

Typical Documaker Enterprise transaction flow works like this: a job comes into the Receiver and is recorded into the JOBS table. The Identifier takes the JOB record and breaks it into discrete transactions, which are recorded into the TRNS table. Each transaction is then processed through the Assembler, the Batcher, the Distributor, and finally the Presenter and Publisher. Each transaction has a TRNSTATUS column in the TRNS table which indicates the disposition of the transaction in the Documaker Enterprise process flow. One such status is 416, which means that a transaction has finished processing in the Assembler and is ready for batching. As I mentioned previously, transactions flow through the system and go into a batch as soon as they are ready. The batch does not wait for all transactions in a job before closing - it's on a schedule, and the schedule must flow! The batch will close at the configured time even if some transactions from a job are still outstanding.
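If you want to see where a job stands while it's in flight, a query along these lines shows the spread of transaction statuses. This is just a sketch - the JOB_ID column name on TRNS is an assumption here, so verify the exact column names against your own schema before using it:

```sql
-- Sketch only: counts transactions of one job by status.
-- TRNS and TRNSTATUS are described above; the JOB_ID column name is assumed.
SELECT TRNSTATUS, COUNT(*) AS TRN_COUNT
  FROM TRNS
 WHERE JOB_ID = :job_id      -- the job you are watching
 GROUP BY TRNSTATUS
 ORDER BY TRNSTATUS;
```

When every transaction in the job reports status 416 or later, the Assembler is done with that job - which is exactly the condition the trigger technique below keys off of.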

The JOBS table has a database trigger on it which monitors the value of the JOBSTATUS column for each job. This trigger specifically looks for the JOBSTATUS to change to 416, which means that all of the transactions in this job have been processed through the Assembler. We can use this trigger to know when all transactions of a job have been assembled, and with that knowledge we can know when to close a batch. To put this change into effect:

  1. Modify the batch(es) to have a closing date that is in the future - I mean way in the future, so you don't have to mess with this again until perhaps 1 January 10191. Any new batchings created from this batch configuration will then close well into the future: when the first transaction is routed into such a batching, its close date will be set far in the future.
  2. Modify the trigger on the JOBS table as shown below. This trigger will set the close date for a batch to a time in the past, e.g. 1 January 2001 when the job status is updated to 416, which has the effect of immediately closing the batch and pushing it to the downstream workers.

To modify the trigger, open the appropriate IDE, e.g. SQL Developer if using Oracle Database, or SQL Server Management Studio if using Microsoft SQL Server. Locate the trigger - you can do this by navigating to the JOBS table and using the right-click context menus in the respective tools to modify the trigger JOBS_BEFOREUPD_TRG (if using Oracle) or JOBS_AUPD_TRG (if using SQL Server).
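If you prefer to locate the trigger from a query window instead of the navigation tree, the standard data dictionary views will list what's defined on the JOBS table:

```sql
-- Oracle: list triggers defined on the JOBS table
SELECT trigger_name, status
  FROM user_triggers
 WHERE table_name = 'JOBS';

-- SQL Server: list triggers defined on the JOBS table
SELECT t.name
  FROM sys.triggers t
 WHERE t.parent_id = OBJECT_ID('JOBS');
```

These queries use only standard catalog views; the trigger names you should see are the ones mentioned above.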

Oracle Trigger Changes - add the following code into the existing body of the trigger:

            IF (:OLD.JOBSTATUS <> 416 AND
                :NEW.JOBSTATUS = 416 AND
                :NEW.JOBTRNTOTAL <> 0 AND
                :NEW.JOBTRNSCH <> 0 AND
                :NEW.JOBTRNERR = 0) THEN
                UPDATE bchs a1
                   SET bchstartingtime = TO_TIMESTAMP('01-JAN-01 10.31.19 AM', 'DD-MON-YY HH.MI.SS AM')
                 WHERE EXISTS (SELECT a.bch_id, a.job_id
                                 FROM bchs_rcps a
                                WHERE :NEW.job_id = a.job_id
                                  AND a1.bch_id = a.bch_id);
            END IF;

SQL Server Trigger Changes - add the following code into the existing body of the trigger:

                UPDATE [BCHS]
                   SET [BCHS].BCHSTARTINGTIME = '2000-01-01 14:00:00.000'
                  FROM (SELECT BCH_ID, JOB_ID FROM [BCHS_RCPS]) a
                 WHERE a.JOB_ID = @NEW_JOB_ID
                   AND a.BCH_ID = [BCHS].BCH_ID
                   AND [BCHS].BCHNAME = 'AWDFileCopy'

This will update any batches that contain transactions from the job that just completed. You may need to add other conditions if you wish - for example, the SQL Server version restricts the update to one batch by BCHNAME, and you can add a similar condition to the WHERE clause of the Oracle UPDATE. Save the changes to the trigger (e.g. issue the CREATE OR REPLACE command for Oracle, or the ALTER TRIGGER command for SQL Server).
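After saving the trigger, it's worth verifying that a completed job actually pulls its batches' close times into the past. A check along these lines should work - note that the column list here is an assumption based on the snippets above, so confirm it against your BCHS table:

```sql
-- Sketch: after a job reaches status 416, its batches should show
-- a close time in the past (e.g. 2001 or 2000 per the triggers above).
SELECT b.BCH_ID, b.BCHNAME, b.BCHSTARTINGTIME
  FROM BCHS b
 WHERE EXISTS (SELECT 1
                 FROM BCHS_RCPS r
                WHERE r.BCH_ID = b.BCH_ID
                  AND r.JOB_ID = :job_id);
```

If the close times still show the far-future date from step 1, the trigger condition didn't fire - re-check the status values and column names in your installed trigger.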

Keep in mind that if you upgrade Enterprise Edition by migrating to a new version, or recreate your system, you will need to reapply these changes. These changes are provided herein without warranty, and by implementing them you could wreck your system, so always test in a sandbox! I've attached the full version of the SQL statements for Oracle and MS SQL Server to this post for your reference - but you should start from the triggers that are installed in your system if you implement this change.

Good luck!

