External Data Engines II
By Tim Dexter on May 15, 2008
A mad few daze getting presentations and demos ready for internal meetings on BIP for Fusion Apps. Yep, it's coming together and it's going to be good from both a user and a developer point of view. I want to say it's going to be 'sweeeeet', my son's favorite answer for all things good at the moment, but I won't.
Back to data engines: I said I would get into the more complex case I mentioned at the end of the last entry.
How can we get away from this 'pull' model, where a report is scheduled to pick up the data? What if we wanted to control the timing of the report? Say those XML files were being dropped into a directory periodically, and each time a new file arrived we wanted BIP to run the report. While we are at it, maybe we have retail branch sales data coming into a central server, and all of those XML files share the same data structure. We do not want to define a report for each branch; we want a single report definition that picks up the appropriate filename at runtime, processes it, and emails the branch manager their results with a copy to corporate.
This is a real use case I worked on with a customer recently, and I believe they are well on their way to implementing it. Branches would periodically drop their data files into a central directory and need a report sent back tout de suite. The first hurdle to overcome: how do we invoke Publisher when a new file hits the directory? This customer has a centralized scheduler called Control-M - fairly widely used, I think. It can act as a directory daemon, watching for files as they are dropped into a directory and then invoking some other process - there's our hook to get BIP to execute a report. It's not that tough to create your own in Perl or a similar language. Here's the architecture I came up with:
What's going on?
The Control-M product constantly polls a specific directory, looking for new or updated files - the XML data files from the branches. When a new file appears, it invokes the shell script, passing the filename as a parameter. That in turn invokes the Java web service client class, which calls the BIP server to run a given report. The web services we provide give you tight control over the report: run it now or later? which template to use? what output to generate? where to deliver it? All of this information can be passed to the WS client code from the shell script, or you could have the WS client class parse the XML to find this information, e.g. the branch's email address.
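If you did want to roll your own watcher instead of leaning on Control-M, the moving parts are small. Here's a minimal Java sketch of a polling pass that spots new or updated files, plus the kind of peek inside the branch file the WS client could take for delivery details. Nothing here is Control-M or BIP specific - the class, the polling approach, and the BRANCH_EMAIL element name are all my own illustrative assumptions about the branch file layout:

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class BranchFeed {
    private final File dir;
    // Remember each file's last-modified stamp between polling passes
    private final Map<String, Long> seen = new HashMap<String, Long>();

    public BranchFeed(File dir) { this.dir = dir; }

    /** One polling pass: returns the names of files that are new or
     *  updated since the last pass -- the filenames we would hand to
     *  the WS client as the report parameter. */
    public List<String> poll() {
        List<String> changed = new ArrayList<String>();
        File[] files = dir.listFiles();
        if (files == null) return changed;
        for (File f : files) {
            Long last = seen.get(f.getName());
            if (last == null || last.longValue() != f.lastModified()) {
                seen.put(f.getName(), f.lastModified());
                changed.add(f.getName());
            }
        }
        return changed;
    }

    /** The WS client could also peek inside the data file for delivery
     *  details, e.g. the branch manager's email address. The element
     *  name is an assumption about the branch file, not a BIP thing. */
    public static String branchEmail(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList n = doc.getElementsByTagName("BRANCH_EMAIL");
        return n.getLength() > 0 ? n.item(0).getTextContent() : null;
    }
}
```

In the customer's setup Control-M does the polling half of this; the XML-peeking half is the sort of thing you'd drop into the WS client class.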
On the BIP server we have a single report defined that uses an HTTP data source with the filename as a parameter. This calls a servlet that sits over the directory and returns the file matching the incoming filename parameter. I guess you could just have a servlet acting as the daemon, recognising new/updated files and making a call back to BIP, but in this case Control-M is orchestrating the whole process, so the servlet just returns the XML file to BIP for processing.
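The servlet side really is small. Here's a sketch of its core, written as a plain method rather than a doGet() so there's no servlet API in the way - in the real thing this body sits in doGet() and the result is streamed to the response. The class name and the path check are mine; the check matters because you don't want a '../' in the filename parameter serving up arbitrary files:

```java
import java.io.File;
import java.io.FileInputStream;

public class DataFileServer {
    /** Given the data directory and the filename parameter from the
     *  report's HTTP data source, return that file's XML content. */
    public static String serve(File dataDir, String filename) throws Exception {
        // Only bare filenames allowed -- reject anything that could
        // escape the data directory, e.g. "../secret.xml"
        if (filename.contains("..") || filename.contains("/") || filename.contains("\\")) {
            throw new IllegalArgumentException("bad filename: " + filename);
        }
        File f = new File(dataDir, filename);
        byte[] buf = new byte[(int) f.length()];
        FileInputStream in = new FileInputStream(f);
        try {
            int off = 0;
            while (off < buf.length) {
                int n = in.read(buf, off, buf.length - off);
                if (n < 0) break;
                off += n;
            }
        } finally {
            in.close();
        }
        return new String(buf, "UTF-8");
    }
}
```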
That's it really. The only point of note is the return codes that get passed back to the WS class, thence to the script, and ultimately back to the Control-M application. A fairly simple architecture that allows another application to control the BIP server. If you have such a requirement, I'd love to hear from you!
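That return-code plumbing might look something like this in the WS client - the status strings and code values here are purely illustrative, not actual BIP responses. The idea is just that the client's main() ends with System.exit(toExitCode(status)), so the shell script and Control-M above it can see whether the report run succeeded:

```java
public class ExitCodes {
    public static final int OK = 0;
    public static final int REPORT_FAILED = 1;
    public static final int SERVER_UNREACHABLE = 2;

    /** Map an illustrative WS outcome onto a process exit code that
     *  the shell script can hand straight back to Control-M. */
    public static int toExitCode(String wsStatus) {
        if ("SUCCESS".equals(wsStatus)) return OK;
        if ("NO_CONNECTION".equals(wsStatus)) return SERVER_UNREACHABLE;
        return REPORT_FAILED; // anything else: let Control-M retry or alert
    }
}
```

Control-M can then be configured to retry, alert, or page someone based on that nonzero code.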