Technical info and insight on using Oracle Documaker for customer communication and document automation

Recent Posts

A Bit of Batch Sorting

I recently received an interesting question from a customer who is converting a Documaker Standard Edition system to Documaker Enterprise Edition: I’m looking for information on how to use Sort by Rule for our batches, but the Administrator Guide doesn't contain much information: “The Sort By rule supports a table.column name reference with an ascending/descending indicator. You can also sort multiple columns.” and “BCHINGSORTRULE Additional sort criteria to sequence the BCHS_RCPS records the Batcher creates. This value is also passed to the new batch record via the BCHS.BCHSORTRULE column.” and “BchSortRule Typically a RCPS column name like RCPS.ADR_POSTALCODE is set to sort a scheduled batch by the postal code. Should be a table-qualified name so that other tables are candidates in a multi-table select call, e.g. FROM BCHS,JOBS,TRNS,RCPS. This becomes an ORDER BY clause in the select statement if provided, overriding any sorting the system does normally. A comma-delimited list of columns and ASC and DESC keywords can be used, as in an ORDER BY clause.” What a great question! To start off with, if you don't already have an ODEE system, you can revisit my blog post on setting up a quick VM.

The default sort order is by TRN_ID, then RCP_ID. More specifically, the RCP_ID is the order in which recipients for a transaction were generated and processed within the Assembler and Batcher. From my own investigation, the TRN_ID appears to be the order in which jobs were received and processed by the DocFactory. You might assume this would be clock time; however, there are a number of factors that can influence this. For example, work is processed in small groups, controlled by worker "fetch sizes", and the ID values are allocated in batches for faster execution. So if you have a multi-instance DocFactory installation, there could be slight differences in job/transaction ordering due to latencies and processing time.
In this snapshot of the BCHS_RCPS table (which relates batches and their recipients to jobs and their transactions), you can see that the RCPSEQ ordering does not correlate to JOB_ID ordering -- but it does indicate TRN_ID ordering. Note: RCPSEQ is applied at print generation time according to the sort order applied to the batch, so this tells us that the sort ordering, absent any criteria, is TRN_ID. Generally this does not present a problem, since it is a default, and you can override it. Let's review! The documentation tells us that the Sort By Rule is a SQL ORDER BY clause, so we can use any data that is in a column in the ODEE schema. Since the system has a view of the entire context of a transaction across all the system tables (JOBS, TRNS, BCHS, RCPS, PUBS), we don't have to wire up any complex SQL clauses; we can just reference the columns we need in a comma-delimited fashion. Let's assume that my print requirement states that the output batch must be sorted by a key identifier and the recipient's name. The key identifier is contained in the TRNS table, and the recipient name is mapped into the recipient ADR record -- this is automatically mapped into the RCPS.ADR_NAME column. My reference system is already set up to map and populate these values, so to use them in the Sort By Rule setting for my scheduled batch I can do the following: open Documaker Administrator, select the assembly line, and click Batchings. Select the batching I want to update, click the Rules tab, and enter my rule: Sort By Rule TRNS.KEYID, RCPS.ADR_NAME. I can save the change and test by running a few transactions that I know will be routed to this batch.
Note: for testing, make sure you close any open batches for this batching; otherwise the transactions will go into the open batch and will not create a new batch with the new setting. We can see the batch is created with my sort rule in place: SELECT BCH_ID, BCHNAME, BCHBY, BCHSORTRULE FROM BCHS WHERE BCH_ID=284; Before the batch closes, I want to take a look at the contents by applying my own sorting in a query -- I can compare this to the ordering for print to make sure my sort was properly applied. If we take a look at the contents of the batch, we can see the RCPSEQ is not populated (because the batch hasn't printed yet). You can also see that I'm using an INNER JOIN to pull the data from the TRNS and RCPS tables. The system will do this automatically, but since I'm going into the database I have to do it myself. I'm applying the same ordering that I expect the print process to apply, so when we generate this print we should see the RCPSEQ ordering match this control ordering: SELECT BR.BCH_ID, BR.JOB_ID, BR.TRN_ID, BR.RCPSEQ, R.ADR_NAME, T.KEYID FROM BCHS_RCPS BR INNER JOIN RCPS R ON R.RCP_ID = BR.RCP_ID INNER JOIN TRNS T ON T.TRN_ID = BR.TRN_ID WHERE BR.BCH_ID = 284 ORDER BY T.KEYID, R.ADR_NAME; I force the batch to close by running another SQL statement to update the batch close time to yesterday, so it will print immediately. After the print is generated, I can run the same query as above, but instead of specifying a sort order by KEYID and ADR_NAME, I use the RCPSEQ (recall this is the print order). During print, the transaction recipients are ordered according to the sort criteria (in this case KEYID, then ADR_NAME) and then the RCPSEQ is applied. We can review the print output to see if it matches -- I trust that it does 😁 Note that the default ordering is supposed to be ASC according to the SQL standard, but I would explicitly state DESC or ASC, e.g. "TRNS.KEYID DESC". In my example above, I did not, and you can easily see it is ASCending.
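The effect of the Sort By Rule ordering can be sketched outside ODEE entirely. Below is a minimal Python/SQLite illustration using simplified stand-ins for the BCHS_RCPS, TRNS, and RCPS tables -- the schema is pared down and the data is invented; the real ODEE tables have many more columns:

```python
import sqlite3

# Toy stand-ins for the ODEE tables referenced in the post. Column names
# follow the post; the rows are invented sample data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TRNS (TRN_ID INTEGER, KEYID TEXT);
CREATE TABLE RCPS (RCP_ID INTEGER, ADR_NAME TEXT);
CREATE TABLE BCHS_RCPS (BCH_ID INTEGER, TRN_ID INTEGER, RCP_ID INTEGER);
INSERT INTO TRNS VALUES (1,'POL-200'),(2,'POL-100'),(3,'POL-100');
INSERT INTO RCPS VALUES (10,'Smith'),(20,'Jones'),(30,'Adams');
INSERT INTO BCHS_RCPS VALUES (284,1,10),(284,2,20),(284,3,30);
""")

# Same shape as the control query in the post: join back to TRNS and RCPS
# and apply the ordering the Sort By Rule would produce.
rows = con.execute("""
    SELECT T.KEYID, R.ADR_NAME
    FROM BCHS_RCPS BR
    INNER JOIN TRNS T ON T.TRN_ID = BR.TRN_ID
    INNER JOIN RCPS R ON R.RCP_ID = BR.RCP_ID
    WHERE BR.BCH_ID = 284
    ORDER BY T.KEYID, R.ADR_NAME
""").fetchall()
print(rows)  # grouped by KEYID, then alphabetized by ADR_NAME
```

Running the sketch prints the recipients grouped by key identifier and then alphabetized by name -- exactly the ordering that the print process would stamp into RCPSEQ.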
SELECT BR.BCH_ID, BR.JOB_ID, BR.TRN_ID, BR.RCPSEQ, R.ADR_NAME, T.KEYID FROM BCHS_RCPS BR INNER JOIN RCPS R ON R.RCP_ID = BR.RCP_ID INNER JOIN TRNS T ON T.TRN_ID = BR.TRN_ID WHERE BR.BCH_ID = 284 ORDER BY RCPSEQ; So, now that we know we can control the sort ordering, what about some interesting cases? The most likely case is that you'll want to sort based on some values that you pass into DocFactory from your extract data, and perhaps you've mapped those to the TRNCUS* fields. If you've looked at the ODEE schema, you might notice there are RCPCUS* fields as well, so you might be assuming you have to further shove data around... rest assured you do not! In fact, as I mentioned earlier, the transaction/recipient context encompasses the entire schema, so you can reference any of the tables, and the relationships are already established. You can also do other interesting things with SQL here. Let's assume that your KEYID is actually concatenated information that you need to use for further sorting. You can use a substring function to obtain and sort by portions of the data. In the query below, I apply an ORDER BY clause that uses the SUBSTR() function so I can sort by a portion of the value: SELECT BR.BCH_ID, BR.JOB_ID, BR.TRN_ID, BR.RCPSEQ, R.ADR_NAME, T.KEYID FROM BCHS_RCPS BR INNER JOIN RCPS R ON R.RCP_ID = BR.RCP_ID INNER JOIN TRNS T ON T.TRN_ID = BR.TRN_ID WHERE BR.BCH_ID = 284 ORDER BY SUBSTR(T.KEYID, 1, 2) DESC, R.ADR_NAME; You would apply this to the batch by using the Sort By Rule: SUBSTR(TRNS.KEYID, 1, 2) DESC, RCPS.ADR_NAME. Note: the exact syntax depends on your database; Oracle DB uses SUBSTR() and SQL Server uses SUBSTRING(). Happy sorting! If you have any questions/comments/concerns, feel free to leave a comment below.
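To see the substring sort in action without a full ODEE environment, here is a small Python/SQLite sketch. The two-character prefix on KEYID and the single-table layout are invented for illustration; SQLite, like Oracle DB, spells the function SUBSTR():

```python
import sqlite3

# Hypothetical data: KEYID carries a two-character region prefix followed by
# a policy number; we sort on just the prefix, then by recipient name.
# (Flattened into one table for brevity; ODEE splits these across TRNS/RCPS.)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TRNS (KEYID TEXT, ADR_NAME TEXT)")
con.executemany("INSERT INTO TRNS VALUES (?, ?)", [
    ("TX-1001", "Jones"), ("CA-2002", "Smith"),
    ("TX-1003", "Adams"), ("CA-2004", "Baker"),
])

# ORDER BY a portion of the value: prefix descending, then name ascending.
rows = con.execute(
    "SELECT KEYID, ADR_NAME FROM TRNS "
    "ORDER BY SUBSTR(KEYID, 1, 2) DESC, ADR_NAME"
).fetchall()
print(rows)  # TX rows first (prefix DESC), each group alphabetized by name
```

The 'TX' group sorts ahead of 'CA' because of the DESC keyword on the substring, while names within each group remain ascending -- the same behavior the Sort By Rule produces for the batch.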


Inside Document Automation

Thoughts On Documaker Cloud Implementation

Defining the Cloud The term cloud computing is a general-purpose term that describes the provision and consumption of remote computing resources. If the question is posed: "How do we implement/migrate a software package to the cloud?" the necessary first response should be a clarifying discussion to understand what the asker means by the cloud. There are common models applied to cloud computing which can help constrain the ask above, specifically the service model and the deployment model. These are important to understand, because these models constrain the scope under which a software package or configuration must operate. There are multiple characteristics that are common to cloud computing platforms, but the most important are: services provided are scalable, also called elastic, so that you don't need to provision an infrastructure to support your peak loads; and services are billed on usage - meaning you only pay for what you use. Service Models The service model includes some common terms that you might already understand: IaaS, PaaS, SaaS, and others. With respect to Documaker, currently only the IaaS or PaaS models are applicable, so the others will be ignored. In the Infrastructure as a Service (IaaS) model, a cloud services provider makes available a computing infrastructure onto which you can install your desired operating system, applications, tools, and data. In this model, you, as a customer, are responsible for the operating system, software, and data, while the cloud provider is responsible for the infrastructure such as the network, servers/virtual machines, storage, and load balancers. Simply put, IaaS is like a sandbox in which you can build your castle, but you have to supply the molds and tools, and the cloud services vendor provides the box and sand.
In the Platform as a Service (PaaS) model, the cloud services provider makes available a set of computing services, such as a database, execution runtime, tooling, and other components, onto which you install your application and data. In this model, the customer is responsible for custom applications and data, which may (or may not) use the cloud vendor's provided applications and services running on the vendor's infrastructure. Continuing with the sandbox analogy, under PaaS you can build a castle, but you must use the vendor's box, sand, molds, and tools. To be complete, we can briefly discuss SaaS, which is the model in which the vendor provides and manages the infrastructure, services, and applications while you, the customer, provide the data. A real-world example is a service like Oracle's Financial Services Cloud Communication service. The SaaS sandbox doesn't allow users to build their own castle, but you can play in your own section of the vendor's prebuilt castle. Deployment Models The choice of deployment model largely depends on requirements, risk, and cost. Documaker does not specifically require one model over another, so long as the underlying hardware/software requirements are met in the service offering. Requirements become a primary factor in the deployment model when considering the solution and its integration points. For example, if your Documaker implementation contains integrations to upstream or downstream software components, these must be considered in the deployment model choice. There are three typical deployment models usually offered by cloud providers: private cloud, in which the infrastructure and services are provided for a sole organization and are hosted either internally or externally. Such services may have gateways that connect the cloud services to the Internet, but this is not a defining characteristic of this model.
Continuing the sandbox analogy, the private cloud is a sandbox in which only one customer can play, so the vendor builds a sandbox just for that customer - either in the customer's yard or the vendor's yard. public cloud, in which the infrastructure and services are provided through the Internet; there is typically a direct-connection service provided by vendors to allow customers to link to their existing data centers for workloads that are not operating solely in the cloud environment. On the sandbox side of things, the public cloud model is a big sandbox in which everyone plays, but each person can optionally have a gated connection to their private sandbox. hybrid cloud, which is a combination of public and private clouds, the details of which vary per vendor. Marketplace There are a number of vendors that provide cloud services, such as Oracle, Microsoft, Amazon, Google, IBM, and others. As one might expect, each vendor has differentiating characteristics that may make it more suitable for one use case or another. For example, Oracle's cloud approach is based on its history of security and governance, which has resulted in a cloud services architecture that enables silos for application control and data. Amazon Web Services (AWS) has a huge marketplace with global reach that makes it easy for startups to quickly create new applications from the ecosystem of AWS tools.
Microsoft's Azure system is embedded with Windows applications and tooling, making it a good choice for businesses looking to migrate existing Windows-based workloads. When developing a business case for moving workloads to the cloud, service offerings are a first concern; however, they are not the only concern. Cloud vendors have shared characteristics in their design and deployment that are exposed to the customer. Most vendors have similar objects, nomenclature, and processes that are used to perform common tasks, such as deploying a virtual machine into an IaaS environment. Underneath those commonalities there are differences in network topologies, infrastructure maintenance procedures, redundancies offered, service level agreements offered, and more. All of these items are reflected in the price that you, as the customer, will ultimately pay to consume those services. It is important to consider these elements when making a business case and plan to migrate workloads to the cloud. We might consider the hard costs of cloud migration - e.g. your workload is X, and the cost of running X is X*Y1 for vendor 1 (where Y1 is that vendor's rate), with the workload considered in terms of compute cycle costs or data I/O costs. It is important to consider soft costs as well: how are elasticity and service levels guaranteed, and what happens when service levels aren't met? How is governance enabled, and what are the potential risks and business costs if there is a data leak or non-compliance with regulations? This list is simply to spur thought, and is not exhaustive. Documaker in the Cloud Undoubtedly the most pointed question will be: is Documaker version X supported on Vendor Y's cloud platform? The answer is still not quite straightforward, as we still have variables to consider. Any long-time architect of Documaker services probably understands that Documaker is, at its heart, an extremely capable set of tools with some prefabricated use cases to support common business operations for generating communication.
This has been the case for nearly two decades, until the advent of Documaker Enterprise Edition, which sought to push the envelope of Documaker into a more modular approach with established use cases, while still supporting existing Documaker libraries. To answer the question, we need to understand the Documaker solution that is being migrated. Your solution could be some or all of these (or more): a Documaker batch system; a Documaker on-demand system, running Docupresentment (and possibly also EWPS); a Documaker interactive system, running iPPS or iDocumaker; a Documaker Enterprise system; or a combination of any or all of these on homogenous or heterogeneous platforms and versions, running single- or multi-step, with feeds in and out of various systems, with different workflows for migrating changes across. In short, there is not a simple answer; however, there are some general guidelines that can be followed. First and foremost, you should begin by cataloging and understanding your current system. You should know: each implementation of Documaker in your application ecosystem and what it supports; every environment supported by an implementation of Documaker; what versions of Documaker are used in each implementation; any customizations (e.g. compiled code) that are executed within Documaker and what they are for; all data inputs and outputs to and from Documaker, such as: integrations that rely on data generated by Documaker, data used as input to Documaker, and migration of resources from development environments to production environments (e.g. how form library components are migrated); end-user requirements and workflows; performance characteristics and requirements of each implementation; and hard/soft requirements of each implementation, both of Documaker and ancillary applications (e.g. if Documaker is using a library that is stored in MS SQL Server, what version of SQL Server is being used? Is there a reason that specific version is required?)
With this information in hand, you should be able to form a picture of what should be considered in a workload migration. An overarching concern that should influence every facet of your cloud journey is the need to execute regression testing, both in the literal and figurative sense. Your private sandbox in your corporate data center that has never been exposed to the outside world may be moved to a public cloud, and your existing regression test suites may be nonexistent or may lack the scope to adequately test the wholesale movement of business processing onto a new platform. Consider the application boundaries presented by the solution and how those may be affected by migration to a cloud environment. For example, if the entire business application suite is moving, the boundaries may be entirely non-existent. If only some of the business application components are moving to a cloud environment, there are now boundaries that did not exist previously. Those boundaries will need to be adequately tested in your regression test suite. Similarly, your corporate governance structures may not account for these boundaries, since they do not exist currently, and may lack the scope and depth necessary to encompass the wider security concerns presented by moving business applications onto a cloud infrastructure. There may be some governance rules in place, but they may lack the scope necessary to address a cloud deployment, and so these must also be considered. A concrete example of a boundary is represented by Documaker Studio. This application runs on a user's workstation or in a virtualized workspace like Citrix XenDesktop, and accesses the database for interacting with form resource libraries, typically in a development environment. The application may also be used to interact with other environments to facilitate migration of changes to downstream environments.
If the Documaker solution is moved to a public cloud infrastructure, you will need to consider how to address the solution boundary presented by the separation of your corporate users and the application now running in the cloud infrastructure. Thankfully this is not a problem that is specific to Documaker; it is a common use case that is typically addressed by using virtual private networking between your corporate network and the cloud provider network. Diving into the service models that you might consider, the simplest level is represented by a migration of a Documaker workload from an on-premises datacenter to an IaaS cloud offering. This migration presents the least risk and complexity, as long as the cloud vendor meets the requirements that you have outlined in your solution requirements catalog. The lift-and-shift into a like-for-like environment is not without risk, but the risk is manageable so long as there is adequate regression test coverage and identification/mitigation of potential issues beforehand. A typical migration might involve moving Documaker from an Oracle Enterprise Linux 7 machine in your corporate data center to an Oracle Enterprise Linux 7 virtual machine in the cloud infrastructure with similar characteristics. Not all vendors offer the same operating systems and versions; Documaker is generally flexible in terms of OS, but you need to consider this scope in your regression testing to ensure it does not present any issues. It is important to point out that on some platforms, Documaker does present some specific requirements that should be captured in your solution and requirements catalog. For example, on a Linux-based platform there are some prerequisite components that must exist, and must be at certain version levels - e.g. libgcc and libgcc.i686 are required. The next level of complexity is represented by the PaaS service model.
Recall that PaaS provides a set of computing services, such as database management services, middleware, and development tools. Under this model, more components are provided for you, and this represents additional risk, since they may not be offered at the level required to match your solution and requirements catalog. There is also the possibility of mixing IaaS and PaaS in some implementations. For example, you may deploy Documaker to an IaaS platform but take advantage of an RDBMS PaaS offering - keep in mind there may be some limitations in your ability to change some aspects of a PaaS offering. For example, you may be unable to use Azure DB with Documaker Enterprise Edition for certain languages if the database has not been set up with the proper collation to support foreign languages. Additional risks may be present in other cloud-based services that have not been explicitly tested with Documaker. Q&A Q1. Can I deploy Documaker to the Cloud? A1. Documaker (both Enterprise and Standard) has been successfully deployed in Oracle Cloud Infrastructure (OCI), Amazon Web Services (AWS), and Microsoft Azure. Q2. Who is running Documaker in Cloud environments? A2. There are a number of large corporations who are currently migrating their Documaker workloads to the cloud. However, we cannot release this information publicly. You may consider joining the Documaker Client Advisory Board (CAB), whose membership includes many Documaker customers with whom you can discuss such matters. Q3. Can I use my Documaker license in a Cloud environment? A3. There is not a one-size-fits-all answer here, but generally you should be able to do so. If you have a concern, you should contact your Oracle license sales representative or Oracle Support for additional information. Q4. Which cloud vendor should I use? A4.
The choice ultimately depends on how well a provider's solution matches your solution and requirements catalog, so the answer depends on the scope and breadth of the catalog. Your catalog should also include soft requirements as discussed above. Finally, such a decision involves cost/benefit analysis, risk analysis, and return on investment. These elements are outside the scope of this paper, but as has been alluded to in the above, there are wider concerns than just putting your data and applications on someone else's computer - there are governance and security concerns as well. In any case, Documaker is an Oracle product, and Oracle platforms typically receive the most attention in terms of product development and support, so you can't go wrong with Oracle Cloud. If you have additional questions, thoughts, or concerns, feel free to post a comment below, or head over to the Documaker community. 



How to put the U in OUM

Taking a break from Documaker-specific content, I've asked Allan McAlinden to guest-author a post on the Oracle Unified Method. Allan is an experienced project manager and Documaker consultant with nearly two decades of experience with Documaker, and as many in project management. I hope you find it enlightening! There’s no I in TEAM, but there is ME Acronyms are over-used in IT to the extent that I didn’t even write out ‘Information Technology’ in this sentence just to prove a point.  Sometimes we use them to save paper, avoid unpronounceable words, or find common terms to overcome language difficulties.  What may be a surprise is that they can actually introduce real benefits to your working life, and in this blog I’d like to shed some light on the Oracle Unified Method, or OUM as it’s also known.  If it can help U, then it’s information worth spending some HH:MM:SS looking into.  Ok, what makes U have a better method than Me? Within the field of Project Management (PM) there are many governing bodies, such as the Project Management Institute (PMI), the Association for Project Management (APM), and the International Project Management Association (IPMA), all trying to identify and define the best approaches to move from A to Z.  Anyone that’s worked on more than one project understands that one model won’t be suitable for every customer, every product, and every project, so what the Oracle Unified Method does is take the experience, knowledge, artifacts, and information from various PM models and sources to identify suitable controls and support options for how we collaborate with our colleagues and customers.  That’s my take on it, but the official statement from the teams involved in creating this is… “The Oracle® Unified Method (OUM) is Oracle’s standards-based method that enables the entire Enterprise Information Technology (IT) lifecycle. OUM provides an implementation approach that is rapid, broadly adaptive, and business-focused. 
OUM includes a comprehensive project management framework and materials to support Oracle's growing focus on enterprise-level IT strategy, architecture, and governance.” What this does is provide a scalable model that gives guidance for all of the following areas of delivery. Good idea, but will it help ME and my TEAM? If your daily role changes as much as mine, then one day you’re writing a project plan, the next coding processing scripts, installing software, or building servers, then delivering mentoring workshops before drafting Risk Analysis and Governance documentation.  We all wear many hats in today’s working world, and these roles can change hourly, daily, weekly, or even months apart. You may find yourself in some role that you haven’t performed since last year, but you’re still expected to remember all the fine details and tips’n’tricks. Will OUM help you remember? Let’s consider a few examples… If I need to explain to a colleague how to build a Risk Matrix and define Governance controls, then I find the answer in OUM.  The guidance materials and sample documentation are there.  Maybe I haven’t performed any testing since last year and need help planning a repository to store all the inputs, controls, and results. And then I need to create a plan on how to share the results, get feedback, and then act on that feedback.   OUM can help explain all the options to me, giving me structure, artifacts, and controls.  There could be standards we need to apply to data acquisition, and I need to meet with a customer to agree on the expected steps and controls for this action. OUM has all the reference information I need to create an agenda for the meeting.   If I’m not F2F, can I help my BFF? By having this method defined, documented, clearly structured, and, most importantly, proven (It’s been around since 2006!) 
it creates a safe place we can all go to for answers, guidance, and reference materials to make day-to-day life easier. All content is hosted online within Oracle.  Every stakeholder, technical resource, tester, trainer, designer, analyst, project manager, and end user can use this method to know the big picture is actually a complete schematic, and not just a random series of doodles.  It can even help that one person who always shows up on the conference call but doesn’t say their name and sits silently in the background.  Sometimes we work in silos, focused on our particular aspect of delivering a solution, and if you’re on the perimeter, you might think what’s important to you isn’t important to others. You may think they don’t know that what you do has inputs, dependencies, implications, and outputs, and that your work isn’t being given the attention it deserves.  When using OUM, everyone can see that all parts of an implementation or upgrade are considered.  Even if you sit silently in the background, you’re still at the table… or in the room… or on the call when OUM is being applied to your work.  OK, how can I convince my PPL that OUM will give me some ROI? As I mentioned, OUM has been in play since 2006.  In technology terms, that means it’s been around since the launch of Facebook to the public, when the PlayStation 3 was cutting-edge gaming, palmtops were still popular among execs with challenging schedules, and most cellphones were still flip phones with actual buttons.  If tech isn’t a success, it swiftly falls into obscurity, so OUM’s longevity and continued use alone is a valuable measure of the benefits it offers.  Farewell Betamax videos, Google Glass, AskJeeves search engines, electronic virtual pets, and concerns about Y2K.    If that’s enough to convince you, great, but if you need more convincing then you can reach out to me or any other contacts you have within Oracle. 
Alternatively you can read some of the other great Oracle blogs on the topic explaining how it’s been used, where it’s been used and sharing some success stories. And if you’re inside Oracle, go straight to the source. 



Kickstarting a Stuck Transaction in ODEE

Over the past few years, I've been asked by several ODEE customers if it is possible to restart a stuck transaction. My first response is always why is the transaction stuck? If there's a reason why your transactions are getting stuck somewhere in the system, you will probably want to figure out why that is so you can solve that particular problem. But, sometimes, a transaction is sitting there because something happened - maybe you're in a test system that is subject to anomalous behavior, or maybe you accidentally killed a service you shouldn't have, or maybe someone got root access who shouldn't have... in any event, sometimes things just happen. Normally my advice is to run the transaction again - you can run DoPublishFromFactory to create a new job or transaction from an existing one, so this is usually the best advice. You can also grab the input XML data from an existing transaction or job if you really want to start over from scratch. This usually satisfies any need to regenerate a transaction, but what about that stuck transaction, sitting there with some status code, just waiting to be cleared? Sure, you could create a Historian rule filter to grab anything that's sitting at an odd status code for a long period of time, but... well I have to admit, at this point curiosity got the better of me and I decided that I wanted to explore if it was even possible to get a transaction moving again, and the answer is yes you can. 
I wrote a bit of Java code, which you can find over on my GitHub repository, that takes some parameters for input, including: one or more transaction IDs; the database class name, connection string, and credentials; the queue connection factory and queue JNDI names; a removal option; and the WebLogic JMS URL. The utility will connect to the database to validate that the transaction IDs exist, will optionally remove existing form set data and/or related BCHS/RCPS/PUBS records, will set a new TRNSTATUS code, and will finally post a message to a JMS queue for each updated transaction ID. When I put this together I had two use cases in mind: Use Case 1 - Move a stuck transaction - A transaction stuck in DocFactory at an arbitrary status can be moved to a previous status. E.g. if a transaction is stuck at 311 status, you can change it back to 221 to run it through the Assembler again, in which case the TRNSTATUS will be updated and a message posted into the Assembler's work queue. The default configuration for this behavior is target status = 221 and target queue = Assembly Line 1's Assembler queue, so if you have a different use case you will need to know your desired target status and queue. Use Case 2 - Reprocess a transaction - A transaction in complete status 999 can be processed again, but not as a new transaction. This means that the transaction will keep its TRN_ID, and any existing BCHS, RCPS, and PUBS records will remain associated with the transaction. If you need a wholly new transaction, use doPublishFromFactory instead. You also have the ability to specify the --remove option to remove any existing BCHS, RCPS, and PUBS records associated with the transaction. Both of these cases work well with this utility, but be advised it is provided without any warranty of any kind. To use the tool, download the code from GitHub, and then make sure you have the prerequisite JAR files available. Compile the tool: javac -cp ./commons-cli-1.4.jar:./ojdbc7.jar:./wljmsclient.jar:. TransUpdate.java Run the tool with settings for your environment: java -cp ./commons-cli-1.4.jar:./ojdbc7.jar:./wljmsclient.jar:. TransUpdate -r 3 -i 444,490,433 -s 221 -c jdbc:oracle:thin:@localhost:1521:orcl -p MyPassWord -u MyUser -w t3://localhost:11001 And have a look at the output: Database connection opened. Transactions to process: 3 ============ 1. TRNID 444 ============ TRANS 444 located. TRNS NAPOL data deleted. Performing sweep for TRNID 444 3 PUBS related to TRNID 444 deleted. 1 BCHS related to TRNID 444 deleted. 3 RCPS related to TRNID 444 deleted. 3 BCHS_RCPS related to TRNID 444 deleted. TRANS 444 updated to TRNSTATUS = 221 Message posted to jms.al1.qcf/jms.al1.assemblerreq on t3://localhost:11001 for TRNID 444 ============ 2. TRNID 490 ============ TRANS 490 was not located. ============ 3. TRNID 433 ============ TRANS 433 located. TRNS NAPOL data deleted. Performing sweep for TRNID 433 3 PUBS related to TRNID 433 deleted. 1 BCHS related to TRNID 433 deleted. 3 RCPS related to TRNID 433 deleted. 3 BCHS_RCPS related to TRNID 433 deleted. TRANS 433 updated to TRNSTATUS = 221 Message posted to jms.al1.qcf/jms.al1.assemblerreq on t3://localhost:11001 for TRNID 433 Database connection closed. More details on how to use the tool are provided over at GitHub. If you need to set up an ODEE environment, I invite you to check out any of my previous blog posts. If you have any issues, you are welcome to log the issue on GitHub, comment below, or post over in the Documaker community, although please be aware that this is unsupported code. Be well and take care.



Using SoapUI to Enhance DWS Testing

If you're using Oracle Documaker Enterprise Edition (ODEE), there's a good chance that you're already using Documaker Web Service (DWS) with your implementation. And it's quite possible that if you're using DWS, you've already toyed with Smart Bear's popular SOAP testing tool SoapUI to perform basic validation that DWS is working. You might even be using SoapUI to test more advanced features such as WS-Addressing (WSA) or WS-Security (WSS). But... did you know that you can use SoapUI to enhance your unit and regression testing? In this article, I'm going to go over some fundamentals for using SoapUI in this capacity. First, you're going to want an ODEE environment so you can test your specific requirements. In my examples I am using the reference implementation, which comes with all ODEE installations; although actually deploying it is not required, it's a good way to validate that your environment is up and running. If you don't have an ODEE environment, you can set one up using these posts as a reference. If you don't have all the prerequisites available, you can download a WebCenter Content preview VM and install ODEE there to use as an evaluation. Once you have your ODEE environment, you'll need to know the endpoint of your DWS services, which you can find in the WebLogic Console: log in to your console and navigate to Domain Structure > Deployments > DWSAL1 (expand) > PublishingServiceSoap12 (Module DWSAL1) > Testing. Then expand the PublishingServiceSoap12 node and note the WSDL link. Note: you can use either the PublishingService or the PublishingServiceSoap12. The functionality is exactly the same, but the message formats differ (SOAP 1.1 and SOAP 1.2, respectively), so be consistent: if you pick the Soap12 endpoint, you need to use the Soap12 binding in the SoapUI project below. Also, FYI: if you happen to have any web front ends such as Oracle HTTP Server, they are not taken into account in this link.  
Next, you'll want some information about your particular implementation of ODEE: the assembly lines, extract files, and batching configuration. Not all of this information is necessary, but depending on the type of use case you want to create, you'll want to make sure you have the appropriate extract file samples to trigger the test conditions you want to exercise. In my examples, I'm going to validate the functionality of the LOCALPRINT batching, which simply generates PDF output. To do that, I need an extract file that is known to trigger recipients for the LOCALPRINT batching. The reference implementation includes some default batchings and extract files which are well-suited for this need. The batchings can be reviewed in Documaker Administrator, and the extract files are provided in the DocFactory installation under ODEE_HOME/documaker/mstrres/dmres/input. Finally, you need to download and install SoapUI. Once this is done, start up SoapUI and click the SOAP button. Enter a name for the project and the WSDL endpoint for the Publishing Service (note that you can use the PublishingService or the PublishingServiceSoap12 WSDL endpoints here, and you can even add them both). Tick the box to create sample requests and click Ok. Now, expand the nodes of your project and look for PublishingServiceSoap12Binding > doPublishFromImport > Request 1. You'll see that SoapUI has built a sample request based on all possible values that could be submitted, and I mean all. We don't need most of these, but you might explore the possibilities here before moving on. 
For now, delete the contents of the request and replace it with this:

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:pub="oracle/documaker/schema/ws/publishing"
               xmlns:com="oracle/documaker/schema/ws/publishing/common"
               xmlns:v1="oracle/documaker/schema/ws/publishing/doPublishFromImport/v1"
               xmlns:com1="oracle/documaker/schema/common"
               xmlns:req="oracle/documaker/schema/ws/publishing/doPublishFromImport/v1/request">
   <soap:Header/>
   <soap:Body>
      <pub:DoPublishFromImportRequest>
         <pub:DoPublishFromImportRequestV1>
            <v1:JobRequest CorrelationId="1">
               <req:Payload>
                  <req:Extract>
                     <com1:Content>
                        <com1:Binary>***REPLACE***</com1:Binary>
                     </com1:Content>
                  </req:Extract>
               </req:Payload>
            </v1:JobRequest>
            <v1:ResponseProperties>
               <com1:ResponseType>Identifiers</com1:ResponseType>
            </v1:ResponseProperties>
         </pub:DoPublishFromImportRequestV1>
      </pub:DoPublishFromImportRequest>
   </soap:Body>
</soap:Envelope>

The above represents the simplest request you can submit for processing -- note that there is something missing, and that is the base-64 encoded extract file. To insert it, remove the ***REPLACE*** text, then right-click at that point and select Insert File as Base 64. You can then locate the extract file and it will be encoded and inserted in place. That's it! Now you can run the request using the Play triangle button at the top left of the request window, and if all goes well, you should see a response that looks something like the XML below. Note that because we had <com1:ResponseType>Identifiers</com1:ResponseType> in our request, DWS is going to respond with the ID attributes for what was generated by the transaction, not the actual outputs.  
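If you'd rather script the encoding than use the Insert File as Base 64 menu item, the same step can be done from a shell. A minimal sketch - the stand-in extract content here is hypothetical; point it at a real extract such as local_print.xml from the reference implementation:

```shell
# Hedged sketch: produce the base-64 string for the <com1:Binary> element.
# The extract content below is a stand-in; use a real extract file instead.
printf '<xml>sample extract</xml>' > local_print.xml
base64 local_print.xml | tr -d '\n' > local_print.b64   # one line, no wrapping
cat local_print.b64
```

The tr step matters: base64 wraps its output at 76 columns by default, and you want a single unbroken string inside the element.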
<S:Envelope xmlns:S="http://www.w3.org/2003/05/soap-envelope">
   <S:Body>
      <DoPublishFromImportResponse>
         <DoPublishFromImportResponseV1>
            <ns1:Result>0</ns1:Result>
            <ns1:ServiceTimeMillis>2324</ns1:ServiceTimeMillis>
            <ns2:ServerTimeMillis>2103</ns2:ServerTimeMillis>
            <ns2:JobResponse CorrelationId="1">
               <ns8:JobEndTime>2020-03-19T18:33:41.989Z</ns8:JobEndTime>
               <ns8:JobStartTime>2020-03-19T18:33:40.002Z</ns8:JobStartTime>
               <ns8:JobStatus>999</ns8:JobStatus>
               <ns8:Job_Id>400</ns8:Job_Id>
               <ns8:Payload>
                  <ns8:Transaction>
                     <ns8:Data>
                        <ns4:Content>
                           <ns4:Publication>
                              <ns4:BchId>396</ns4:BchId>
                              <ns4:PubId>608</ns4:PubId>
                              <ns4:PubMimeType>application/pdf</ns4:PubMimeType>
                              <ns4:PubPrtExt>pdf</ns4:PubPrtExt>
                           </ns4:Publication>
                        </ns4:Content>
                     </ns8:Data>
                     <ns8:Job_Id>400</ns8:Job_Id>
                     <ns8:TrnEndTime>2020-03-19T18:33:41.987Z</ns8:TrnEndTime>
                     <ns8:TrnName>Bill M Smith</ns8:TrnName>
                     <ns8:TrnStartTime>2020-03-19T18:33:40.027Z</ns8:TrnStartTime>
                     <ns8:TrnStatus>999</ns8:TrnStatus>
                     <ns8:Trn_Id>439</ns8:Trn_Id>
                  </ns8:Transaction>
               </ns8:Payload>
            </ns2:JobResponse>
            <ns2:ServiceInfo>
               <ns4:Operation>doPublishFromImport</ns4:Operation>
               <ns4:Version>
                  <ns4:Number>1</ns4:Number>
                  <ns4:Used>true</ns4:Used>
               </ns4:Version>
            </ns2:ServiceInfo>
         </DoPublishFromImportResponseV1>
      </DoPublishFromImportResponse>  
   </S:Body>
</S:Envelope>

This has been truncated a bit to conserve space -- your results should have more information, but you can see that in the response you can obtain the TRNSTATUS=999 for all transactions in the job, some details about the publications created, and the JOBSTATUS=999. You can even get some details on the timing of the job processing. Next, right-click Request 1 in the project navigator, select Clone Request, and name it Request 2. In this one, change the response properties slightly: change Identifiers to Attachments and rerun the request. You should notice that the response is much bigger than before, and that some of the data in the response has changed:

...
<ns8:Transaction>
   <ns8:Action>100017</ns8:Action>
   <ns8:ApprovalState>60</ns8:ApprovalState>
   <ns8:CreateTime>2020-03-19T19:32:26.000Z</ns8:CreateTime>
   <ns8:CurrGroup>3</ns8:CurrGroup>
   <ns8:CurrUser>8</ns8:CurrUser>
   <ns8:Customized>0</ns8:Customized>
   <ns8:Data>
      <ns4:Name>397_611</ns4:Name>
      <ns4:ContentType>application/pdf</ns4:ContentType>
      <ns4:FileType>pdf</ns4:FileType>
      <ns4:Content>
         <ns4:Binary> *** base-64 encoded data *** </ns4:Binary>
      </ns4:Content>
   </ns8:Data>
</ns8:Transaction>

Notice there's a new <Binary> node? That contains, as you might expect, the output publication generated by this request. And you can probably tell from the <FileType> and <ContentType> nodes that it is in fact a PDF, so if you base-64 decode the contents of the <Binary> element, you can view the PDF. By now you should understand that it's possible to create multiple requests, each with specific requirements for input and output, and you can execute these tests easily by storing and manipulating the requests. Now we'll take it a step further and create a test suite where we can create iterative tests and validate them with assertions. 
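Decoding that payload by hand gets tedious, and it can be scripted too. A minimal sketch, assuming you've saved the response to response.xml; a real response qualifies the element (e.g. <ns4:Binary>), so adjust the pattern, and note this naive sed only handles a single-line response with one publication:

```shell
# Hedged sketch: extract the first <Binary> payload from a DWS response and
# decode it to a file. A stand-in response is generated here for illustration;
# a real one carries namespace prefixes on the element names.
printf '<Data><Binary>%s</Binary></Data>' "$(printf 'PDF bytes' | base64)" > response.xml
sed -n 's/.*<Binary>\(.*\)<\/Binary>.*/\1/p' response.xml | base64 -d > output.bin
cat output.bin
```

With a real Attachments response, output.bin would be the PDF itself, ready to open. Later in this post we let Groovy do this inside SoapUI instead.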
To get started:

1. Right-click the project node in SoapUI, select New TestSuite, and give it a name.
2. Right-click the newly-created TestSuite, select New TestCase, and give it a name.
3. Expand the newly-created TestCase, right-click Test Steps, and select Add Step > SOAP Request. Specify a name for the request and click Ok.
4. In the dropdown, select PublishingServiceSoap12Binding -> doPublishFromImport and click Ok.
5. Name the request and tick the boxes for all validations. You can also tick the box to create optional content, but this is not necessary as we'll be replacing all of it.

The request opens with the SOAP Request XML in view. Here, you can paste in the SOAP request XML from the Identifiers or Attachments request we tested earlier. Bring the TestCase window into focus (you can double-click it in the Navigator) and click the Play button. You should see the results of the test populate in the response pane of the SOAP Request window. You should also see the TestCase results status as green and finished, meaning no errors! You can also bring the TestSuite window into focus and click the Play button there, too; it will run all the TestCase items under it. I'm sure by now you're saying, "Ok, but this is exactly what I just did, so what's the point?" I get you. Hold on -- we're going to take things to the next level! Bring the TestCase window back into focus and look at the Properties panel in the Navigator: click the Custom Properties button, click the + icon, name your property InputFile, and give it a value of extract.xml. Next, right-click the TestCase and select Add Step > Groovy Script, and name your script. In the Navigator, click and drag the script above the SOAP Request so it will execute before the request. 
In the Groovy Script window, paste in the following:

// Read the extract file named by the InputFile property,
// base-64 encode its contents, and store the result in InputFileBase64.
def extractFile = testRunner.testCase.getPropertyValue("InputFile")
testRunner.testCase.setPropertyValue("InputFileBase64", "")
if (extractFile != "" && extractFile != null) {
    String extractFilePath = getAbsolutePath(extractFile)
    String xmlstring = new File(extractFilePath).text
    testRunner.testCase.setPropertyValue("InputFileBase64", xmlstring.bytes.encodeBase64().toString())
}

String getAbsolutePath(String relativePath) {
    def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context);
    def thePath = groovyUtils.projectPath + "/" + relativePath;
    return thePath;
}

What does this do, exactly? In short, it reads the value of the TestCase property called InputFile (which we set to extract.xml), then reads that file, base-64 encodes it, and stores the result in a TestCase property called InputFileBase64. Now we just need to use it! In your SOAP Request window, replace the contents of the <Binary> node with this: ${#TestCase#InputFileBase64} So your request should look like this:

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:pub="oracle/documaker/schema/ws/publishing"
               xmlns:com="oracle/documaker/schema/ws/publishing/common"
               xmlns:v1="oracle/documaker/schema/ws/publishing/doPublishFromImport/v1"
               xmlns:com1="oracle/documaker/schema/common"
               xmlns:req="oracle/documaker/schema/ws/publishing/doPublishFromImport/v1/request">
   <soap:Header/>
   <soap:Body>
      <pub:DoPublishFromImportRequest>
         <pub:DoPublishFromImportRequestV1>
            <v1:JobRequest CorrelationId="1">
               <req:Payload>
                  <req:Extract>
                     <com1:Content>
                        <com1:Binary>${#TestCase#InputFileBase64}</com1:Binary>
                     </com1:Content>
                  </req:Extract>
               </req:Payload>
            </v1:JobRequest>
            <v1:ResponseProperties>
               <com1:ResponseType>Attachments</com1:ResponseType>
            </v1:ResponseProperties>
         </pub:DoPublishFromImportRequestV1>
      </pub:DoPublishFromImportRequest>
   </soap:Body>
</soap:Envelope>

At this point you should save your project file using the Save All button on the menu bar, and note carefully where you save it. Now, take your extract file, place it in the same directory as the project file, and name it with the value you set in the InputFile property. If you run the TestCase again, you should see two steps completed. You can click the Step links to bring up the results of each step, and see the response message using the Response Message button. So, what just happened? When we ran the test case, the Groovy Script read the InputFile property to get the name of the extract file we wanted. We passed that name to a function to get the full path of the file, then loaded the file, base-64 encoded it, and wrote the encoded string to another TestCase property. The next step in the case submits the request. Remember when we changed the extract data in the request to the placeholder ${#TestCase#InputFileBase64}? SoapUI automatically replaced our placeholder with actual data; the syntax of the placeholder refers to a TestCase custom property called InputFileBase64. The green check marks indicate our tests were successful. If you got errors, use the SoapUI log, error log, and script log buttons at the bottom of the SoapUI window to review and debug. Along the way you might be asking, "what is Groovy?" The short answer is that it's a Java-compatible scripting language, and there are plenty of reasons to use it, not least of which is that it's what SoapUI uses for scripting. How can we take advantage of it? Read on! 
I'm certain you recall that we can use <ResponseType>Attachments</ResponseType> to return a PDF of our transaction, but then we have to go through the trouble of opening the response, copying the base-64 encoded string, pasting it into a decoder, saving that to a file, and then opening it. Tedium! What if you could write a script that parses the response and does all of that for you? You can! In fact, I've made this easy: I've built a SoapUI project that you can download from my GitHub repository. In it, you'll find two test cases: Local Print - Attachments and Local Print - Identifiers. Each of these test cases has three test steps, and those steps call a third, disabled test case so that they share common code and can be executed simultaneously. Review the steps for all three cases and you'll find that:

- input files are expected to be in an input folder at the same level as the SoapUI project file;
- the extract file for each test case is named by the test case custom property InputFile;
- test cases which generate attachments will write into an output folder at the same level as the SoapUI project file;
- attachments will be written into an output subfolder with a time/date stamp;
- attachments will be written with the configured name that ODEE uses to name the publication;
- details from responses are written to a few test case custom properties.

To use this project file, all you need to do is:

- replace the extract data with a file that matches your library and configuration (replace input/local_print.xml). If you are using this with the reference implementation, you don't need to change it.
- change the endpoint to match your system (open the request, select it from the dropdown, click Edit Current, and modify to match your system).

In future blog posts, I'll cover some more esoteric use cases with SoapUI such as WS-Addressing and WS-Security. 
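Incidentally, once you have a TestSuite, you can also run it headless from the command line, which is handy for automation. A hedged sketch using SoapUI's bundled testrunner.sh - the install path, suite name, and project file name below are assumptions for your environment:

```shell
# Hedged sketch: run a saved SoapUI TestSuite headless. The SoapUI install
# path, suite name, and project file name below are assumptions.
SOAPUI_HOME=${SOAPUI_HOME:-/opt/SoapUI}
PROJECT="DWS-soapui-project.xml"   # hypothetical saved project file
if [ -x "$SOAPUI_HOME/bin/testrunner.sh" ]; then
  # -s selects the TestSuite, -j writes JUnit-style reports, -f sets the report folder
  "$SOAPUI_HOME/bin/testrunner.sh" -s "TestSuite 1" -j -f ./reports "$PROJECT"
else
  echo "SoapUI testrunner not found at $SOAPUI_HOME/bin/testrunner.sh"
fi
```

The JUnit-style reports make it straightforward to hang these tests off a CI job, so your DWS regression suite runs without anyone opening the SoapUI GUI.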
For now, experiment with using Groovy Script to modify your extract data to fit multiple use cases, or try building a regression test suite that you can automate with SoapUI. Let me know in the comments about some of the interesting things you do with SoapUI and Documaker!  



Quick Start with ODEE 12.6.3 on a VM

If you're tasked with evaluating Oracle Documaker Enterprise Edition (ODEE) and some of its integration capabilities, you might find it a somewhat daunting task, given that ODEE is a multitiered application and requires a database and a WebLogic application server. What if you need to see how its schemas are deployed to Oracle DB? Perhaps you want to explore deployment on WebLogic Server (WLS), its integration with WebCenter Content (WCC), or how you might set up load balancing with Oracle HTTP Server (OHS). Raise your hand if you have enough free time to locate, download, install, and configure all of these components. If you raised your hand, by all means, forge onward. If you're pressed for time, then consider the approach I'm going to discuss here, which starts with a virtual machine available on OTN that contains Oracle DB, WLS, and OHS already configured. This is a slightly different approach from my previous post, in which we installed Oracle Linux from scratch. You can use that approach as well, but this is much quicker, and it's portable. Oracle has made available a VirtualBox appliance (OVA) that you can download here - be sure to take a look at the readme guide that discusses the overall setup of the machine. Download and install VirtualBox, then download the OVA file and the readme. Follow the instructions in the readme to start importing the VirtualBox appliance. It's going to take some time, so take this opportunity to obtain ☕️. 
Also, just FYI: the installed version of Fusion Middleware (FMW, which includes WebLogic and the ancillary components used by ODEE) is 12.2.1. If you've reviewed the system requirements document for Documaker 12.6.2 (I'm taking care specifically to reference 12.6.2 here, since the documentation for 12.6.3 hasn't been released yet), you'll note that the prerequisite version of FMW is not an exact match. Since we're just doing this for exploratory purposes we should be fine - I haven't found any issues running in this configuration, but I wouldn't use it for production by any means! While that's importing, head over to edelivery.oracle.com and grab a copy of ODEE 12.6.3 for Linux. By now you should have a virtual machine called Oracle WebCenter Portal 12c R2 (12.2.1) in your list of VMs in VirtualBox. The readme mentions creating a shared folder between your host computer and the virtual guest, so don't forget to do that. You may see an "invalid settings detected" message in VirtualBox, depending on your particular host hardware/software configuration, and you can choose how to handle it. I ignored the message about too little video memory (we only need a terminal, so no X server will be running). Fire up the machine and in a few moments you should see a terminal window, where you can log in using the credentials provided in the readme. By default the machine is configured with a host-only network, so you won't need to configure anything to get the network to function. I prefer using iTerm on macOS for my terminal work, but macOS already has ssh running on port 22, so I need to set up port forwarding on the VM. At the bottom of the VirtualBox window, click the network icon and select Network Settings..., then under Advanced, click Port Forwarding. Set the host port to a suitable value (I chose 7722) and leave the guest port set to 22. Click Ok twice. You should now be able to ssh from your host machine to the guest machine on the port you specified. 
Note that my host is macOS, which has SSH support already, so I'm starting from the terminal on my MacBook. If you're running on Windows, you can use PuTTY or another SSH client as you see fit.

% ssh oracle@wcp12cr2 -p 7722
The authenticity of host '[wcp12cr2]:7722 ([]:7722)' can't be established.
RSA key fingerprint is ****.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[wcp12cr2]:7722' (RSA) to the list of known hosts.
oracle@wcp12cr2's password:
Last login: Wed Mar 25 07:17:31 2020 from localhost
[oracle@wcp12cr2 ~]$

At this point we can start up some of the basic services. However, if you just follow the instructions in the readme, you will find that you can't get all the services started. I'll spare you the lengthy details, but in short, the database passwords for various users will have expired by the time you read this, and you will not be able to start any WebLogic servers. So, we need to unexpire the passwords. To do that, first start the database using vmctl, a handy script included to control the services. I'll just give you the commands you need to enter.

$ vmctl a start db x

Start SQL*Plus:

$ . oraenv
The Oracle base has been set to /oracle/db
$ sqlplus / as sysdba

SQL*Plus: Release Production on Wed Mar 25 08:03:04 2020
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Set the default profile to not expire passwords:

SQL> alter profile DEFAULT limit PASSWORD_REUSE_TIME unlimited;
Profile altered.

SQL> alter profile DEFAULT limit PASSWORD_LIFE_TIME unlimited;
Profile altered.

Get a list of the specific users that we need to fix (they start with the prefix WCPVM_). 
SQL> select username from dba_users where username like 'WCPVM_%';

USERNAME
WCPVM_OPSS
WCPVM_IAU_APPEND
WCPVM_IAU_VIEWER
WCPVM_MDS
WCPVM_OCS
WCPVM_WEBCENTER
WCPVM_ACTIVITIES
WCPVM_DISCUSSIONS
WCPVM_DISCUSSIONS_CRAWLER
WCPVM_IAU
WCPVM_PORTLET
WCPVM_STB

Reset the passwords for those users. I have a bit of SQL that we can use to reset the passwords to be exactly what they are now, without having to know what they are, and set them to unexpired status. Edit the script below and replace the username where indicated based on the results above, keeping in mind that usernames are case-sensitive. Cut and paste into SQL*Plus and repeat for all users.

SET DEFINE '&'
SHOW DEFINE
DEFINE USER_NAME = '**REPLACE WITH USERNAME **'
DEFINE OLD_SPARE4 = ""
DEFINE OLD_PASSWORD = ""
COLUMN SPARE4HASH NEW_VALUE OLD_SPARE4
COLUMN PWORDHASH NEW_VALUE OLD_PASSWORD
SELECT
  '''' || SPARE4 || '''' AS SPARE4HASH,
  '''' || PASSWORD || '''' AS PWORDHASH
FROM SYS.USER$
WHERE NAME = '&USER_NAME';
ALTER USER &USER_NAME IDENTIFIED BY VALUES &OLD_SPARE4;

You should see "User altered" after each copy-paste-enter into SQL*Plus. Exit SQL*Plus:

SQL> exit

Start the services. I like to start fresh, so first I'll stop everything. If you run vmctl and everything is supposed to be stopped but something is still running, note the ID number (pid) of the process, exit vmctl, then run kill -9 pid. Restart vmctl and start the basic services.

$ vmctl 0
Starting Database Listener ...
.
Starting Oracle Database ...
.
Database Services Successfully Started.
.
Starting Web Tier [OHS] ...
...............................
Starting WebLogic Admin Server ...
...............................
Starting Portal [WC_Portal] ...
............................... 
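If cutting and pasting that script a dozen times doesn't appeal to you, you can generate all of the statements in one pass. This is a hedged sketch that emits the same SPARE4-based reset for each user into a file you'd review and then run in SQL*Plus; the three user names shown are just a sample of the list above:

```shell
# Hedged sketch: emit one password-reset block per WCPVM_ user. In practice
# you'd feed this the full list returned by the dba_users query above.
cat > wcpvm_users.txt <<'EOF'
WCPVM_OPSS
WCPVM_MDS
WCPVM_WEBCENTER
EOF
: > fix_passwords.sql
while read -r u; do
  cat >> fix_passwords.sql <<EOF
COLUMN SPARE4HASH NEW_VALUE OLD_SPARE4
SELECT '''' || SPARE4 || '''' AS SPARE4HASH FROM SYS.USER\$ WHERE NAME = '$u';
ALTER USER $u IDENTIFIED BY VALUES &OLD_SPARE4;
EOF
done < wcpvm_users.txt
grep -c 'ALTER USER' fix_passwords.sql
```

Review fix_passwords.sql before running it as SYSDBA; altering SYS.USER$-derived hashes is very much an at-your-own-risk, non-production maneuver.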
If you care to review the logs for some of these services, you can tail them here:

OHS: /oracle/fmw/ohs/user_projects/domains/base_domain/servers/ohs1/logs/ohs1.log
AdminServer: /oracle/domains/webcenter/servers/AdminServer/logs/AdminServer.log
Portal: /oracle/domains/webcenter/servers/WC_Portal/logs/WC_Portal.log

Once that's complete, you can stop the Portal server (we don't need it for ODEE):

$ vmctl a stop portal

Next, we need to configure yum and update a few packages, which we can do while the above operations are completing. Open another terminal window and enter the following:

$ sudo -i
# cd /etc/yum.repos.d
# mv ./* ~/
# wget http://public-yum.oracle.com/public-yum-ol6.repo
# cd ..
# sed -i 's/proxy=/#proxy=/g' yum.conf
# yum install -y libaio libaio.i686
# exit

Now you can use scp to copy over the ODEE installer. Note: because the ODEE installer uses X Windows, you'll need an X emulator on your machine (I use XQuartz; if you're on Windows you can use Xming). Copy the ODEE 12.6.3 installer to the guest using scp (don't forget to specify the correct port), then connect via ssh to the guest with the -Y option so X11 forwarding happens, and unzip the installer.

% scp -P 7722 V995344-01.zip oracle@wcp12cr2:~/
% ssh oracle@wcp12cr2 -p 7722 -Y
oracle@wcp12cr2's password:
Warning: No xauth data; using fake authentication data for X11 forwarding.
Last login: Wed Mar 25 07:29:54 2020
/usr/bin/xauth:  creating new authority file /home/oracle/.Xauthority
[oracle@wcp12cr2 ~]$ unzip V995344-01.zip
[oracle@wcp12cr2 ~]$ ./Disk1/runInstaller
Starting Oracle Universal Installer...

If everything has worked to this point, you'll see the ODEE installer window pop up. Let's walk through the installation. We are going to install DocFactory to a new Oracle Home directory, but we will be deploying the WebLogic artifacts to an existing domain and server, which makes things interesting. Why? 
Because we want to reduce the amount of resources consumed by the virtual machine, and it's something technically interesting to do.

1. Click Next on the welcome screen.
2. Set the Oracle Home directory as /oracle/odee and click Next.
3. Set a password for the documaker user. Click Next.
4. Set the Oracle database details: Host=localhost, Port=1521, Database name=orcl, Connection Type=ServiceName, Advanced Compression=checked. Click Next.
5. Set a password for the dmkr_admin database user. Click Next.
6. Set a password for the dmkr_asline database user. Click Next.
7. Set the WebLogic details: User=weblogic, Password=welcome1 (this is from the readme doc for the VM, since the AdminServer already exists!), Host=localhost, Oracle Home=/oracle/fmw/wcp12c, Project Path=/oracle/domains, Domain Name=webcenter, Admin Server=AdminServer, Port=7001. Set the remaining values to empty, and click Next.
8. Set the JMS server details. Normally an ODEE installation creates a new managed server to deploy JMS resources; however, WebCenter Content already has a JMS server, so we will co-deploy to it: Provider URL=t3://localhost:16200, Principal=weblogic, Credentials=welcome1. Click Next.
9. Use the default hot folder and click Next.
10. Set an SMTP server if you wish, otherwise click Next.
11. Set the UCM (WebCenter Content) details: Enable=True, User=weblogic, Password=welcome1, Connection String=idc://localhost:4444, Document URL=<leave as-is>. Click Next.
12. Click Next through the UMS details.
13. Click Install to start the installation.

While this is progressing, on your virtual machine host you can open a browser, navigate to http://localhost:7777/console, and log in with the weblogic/welcome1 credentials. Click Next, then Finish once the installation is complete. Now that the installation has completed, it's time to do the post-install steps. Go back to your terminal and issue the following commands; I will omit the responses in the terminal for brevity.

$ . oraenv
$ cd /oracle/odee/documaker/database/oracle11g
$ sqlplus / as sysdba @dmkr_admin.sql
SQL> exit
$ sqlplus / as sysdba @dmkr_asline.sql
SQL> exit
$ sqlplus / as sysdba @dmkr_admin_user_examples.sql
SQL> exit

If needed, you can execute additional SQL scripts to install other languages besides English. Next is deployment of the WebLogic resources. This is slightly different from a typical install since the domain already exists, so follow closely.

$ cd /oracle/odee/documaker/j2ee/weblogic/oracle11g/scripts
$ nano weblogic_installation.properties

Search and replace '<SECURE VALUE>' with the appropriate passwords specified during installation. The passwords are stored encrypted in the database, but they are not scripted in the deployment scripts since there they would be stored in plaintext. Replace them here with the values you used in the installer:

jdbcAdminPassword : DMKR_ADMIN password
jdbcAslinePassword : DMKR_ASLINE password
jmsCredential : welcome1
adminPasswd : documaker user password
weblogicPassword : welcome1

Locate the following settings and set them all to UCM_server1 - this is case-sensitive, so be aware:

nameDmkrServer
nameJmsServer
nameIdmServer

Locate the following settings and set them all to 16200:

portdmkrServer
portJmsServer
portIdmServer

Save your changes and exit nano. Run wls_addto_domain.sh and press Enter to start the load of ODEE resources to the existing domain. Then run wls_add_correspondence.sh and press Enter to start the load of Documaker Interactive to the existing domain. You may see several errors about the domain already existing; these can be ignored. Start AdminServer and UCM_Server1 using the vmctl script:

$ vmctl
(a)dvanced Options
stop admin   - wait for this to complete. 
start admin
start content

Run the additional scripts:

$ create_users_groups.sh
$ create_users_groups_correspondence_example.sh
$ curl http://localhost:7001/jpsquery/

If you're wondering why that last one is run on the guest: we have OHS sitting in front of WebLogic, and that's the front end when we attempt to access web apps from outside the guest system. We haven't yet configured the WebLogic plugin for the Documaker applications to allow access from our host, so we run curl on the guest for this configuration step. Deploy the reference implementation MRL:

$ cd /oracle/odee/documaker/mstrres/dmres
$ ./deploysamplemrl.sh

At this point we can start DocFactory and drop in a test file. Return to the terminal, and:

$ /oracle/odee/documaker/docfactory/bin/docfactory.sh start
$ cp /oracle/odee/documaker/mstrres/dmres/input/local_print.xml /oracle/odee/documaker/hotdirectory

In order to access the web applications from the host machine, we need to configure the WebLogic proxy plugin and add our applications to it.

$ nano /oracle/fmw/ohs/user_projects/domains/base_domain/config/fmwconfig/components/OHS/ohs1/mod_wl_ohs.conf

Paste in the following, next to the other settings:

<Location /DocumakerAdministrator>
  WebLogicHost wcp12cr2
  WebLogicPort 16200
  SetHandler weblogic-handler
</Location>
<Location /DocumakerDashboard>
  WebLogicHost wcp12cr2
  WebLogicPort 16200
  SetHandler weblogic-handler
</Location>
<Location /DocumakerCorrespondence>
  WebLogicHost wcp12cr2
  WebLogicPort 16200
  SetHandler weblogic-handler
</Location>

Press CTRL-O <enter> CTRL-X. Now we need to restart OHS to pick up the change, which you can do via vmctl:

$ vmctl a stop ohs start ohs

And finally, because these applications want to use HTTPS, we have two choices: we can either disable HTTPS on the applications, or we can enable SSL on the UCM server. 
However, there's an added complication: we're using OHS on the front end, and configuring SSL to work at two levels (browser-to-OHS and OHS-to-WebLogic) is another blog post. So for the time being we'll disable HTTPS on the applications. Just know that if you're evaluating WebLogic and Documaker on the basis of security, this is not a recommended approach; it's purely for convenience.

Turning off HTTPS in the Documaker applications is a two-part process: first configure the applications, then redeploy them. Note that the default deployment uses the DD Only model, which means you cannot administer the application's configuration from the WebLogic console; you have to adjust it manually and then redeploy. You may consider changing this behavior when you redeploy the application, but that too is another blog post. For now, perform the following steps in the terminal. I'm omitting the $ so you can copy/paste:

cd /oracle/odee/documaker/j2ee/weblogic/oracle11g/dashboard
cp ODDF_Dashboard.ear ODDF_Dashboard.ear.original
mkdir temp
mv ODDF_Dashboard.ear temp
cd temp
jar -xf ODDF_Dashboard.ear
rm ODDF_Dashboard.ear
mkdir temp
mv Dashboard*war temp
cd temp
jar -xf Dashboard_ViewController_webapp1.war
rm Dashboard_ViewController_webapp1.war
sed -i 's/CONFIDENTIAL/NONE/g' WEB-INF/web.xml
jar -cf ../Dashboard_ViewController_webapp1.war *
cd ..
rm -rf temp
jar -cf ../ODDF_Dashboard.ear *
cd ..
rm -rf temp

Open a browser and navigate to http://wcp12cr2:7777/console (make sure you followed the instructions in the Readme for configuring your host with this information). Login with weblogic/welcome1. Under Domain Structure, click Deployments. Tick the box next to Documaker Dashboard, click Update, then click Finish. Back in the console, repeat the build for Documaker Administrator...
cd /oracle/odee/documaker/j2ee/weblogic/oracle11g/documaker_administrator
cp documakerAdmin.ear documakerAdmin.ear.original
mkdir temp
mv documakerAdmin.ear temp
cd temp
jar -xf documakerAdmin.ear
rm documakerAdmin.ear
mkdir temp
mv Docu*war temp
cd temp
jar -xf DocumakerAdministrator_ViewController_webapp1.war
rm DocumakerAdministrator_ViewController_webapp1.war
sed -i 's/CONFIDENTIAL/NONE/g' WEB-INF/web.xml
jar -cf ../DocumakerAdministrator_ViewController_webapp1.war *
cd ..
rm -rf temp
jar -cf ../documakerAdmin.ear *
cd ..
rm -rf temp

... and in the WebLogic console, tick the box next to DocumakerAdministrator, click Update, then click Finish. And back in the console for the last time, repeat for Documaker Interactive (aka Correspondence)...

cd /oracle/odee/documaker/j2ee/weblogic/oracle11g/idocumaker_correspondence
cp idm.ear idm.ear.original
mkdir temp
mv idm.ear temp
cd temp
jar -xf idm.ear
rm idm.ear
mkdir temp
mv iDocu*war temp
cd temp
jar -xf iDocuMaker_adf_main_ViewController_webapp1.war
rm iDocuMaker_adf_main_ViewController_webapp1.war
sed -i 's/CONFIDENTIAL/NONE/g' WEB-INF/web.xml
jar -cf ../iDocuMaker_adf_main_ViewController_webapp1.war *
cd ..
rm -rf temp
jar -cf ../idm.ear *
cd ..
rm -rf temp

... and in the WebLogic console, tick the box next to DocumakerCorrespondenceAL1, click Update, then click Finish. At this point you should be able to login to http://wcp12cr2:7777/DocumakerDashboard and see the one test job that we've pushed into the system. If you want to go a bit further and push documents to WebCenter Content, login to http://wcp12cr2:7777/DocumakerAdministrator, click Assembly Line 1, and then click Batchings. From here, you can pick a batching such as LOCALPRINT and go to the Distribution tab to enable Archive - Destination : WebCenter Content. Save your changes and then go to Assembly Line 1 > Archiver and click Configure.
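The three repack sequences above are identical apart from file names, and the step that actually disables HTTPS is the sed edit of WEB-INF/web.xml. Here is a minimal, self-contained sketch of just that step: it runs against a stand-in exploded webapp rather than a real ODEE EAR (in the real procedure, the jar -xf / jar -cf steps shown above surround this edit).

```shell
#!/bin/sh
# disable_https: flip <transport-guarantee>CONFIDENTIAL</transport-guarantee>
# to NONE in an exploded webapp's WEB-INF/web.xml.
disable_https() {
  sed -i 's/CONFIDENTIAL/NONE/g' "$1/WEB-INF/web.xml"
}

# Build a minimal stand-in exploded webapp for the demo
mkdir -p webapp/WEB-INF
cat > webapp/WEB-INF/web.xml <<'EOF'
<web-app>
  <security-constraint>
    <user-data-constraint>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>
EOF

disable_https webapp
grep transport-guarantee webapp/WEB-INF/web.xml
```

CONFIDENTIAL in the transport-guarantee element is what forces HTTPS; NONE allows plain HTTP, which is why this one substitution is enough.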
Expand DESTINATION - WebCenterContent and select Configuration, and ensure the destination.name property has the Property Active box checked. Add a new property under Configuration called UCM.retry.count with a value of 0, and save it. Then you need to start the Inbound Refinery (IBR):

$ vmctl
a
start ibr

Finally, push another transaction. You can do this in the terminal:

$ cp /oracle/odee/documaker/mstrres/dmres/input/local_print.xml /oracle/odee/documaker/hotdirectory

Go back to the dashboard, and you should see your transaction. You can even login to WebCenter Content using the basic Content Server interface -- use the weblogic credential and search for documents authored by "documentfactory". For extra credit, you can look into deploying the WebCenter Content User Interface application, which is much more user-centric. Believe it or not, we're done. We have a functional web server front end, the WebLogic backend, ODEE 12.6.3, and WebCenter Content all running. I hope you found this useful -- let me know in the comments or over on the Documaker community how you fared!



Embedding Dynamic Maps in Documaker

Recently a colleague asked if we could embed maps into Documaker output. Being a tech person, I naturally replied, "Of course!" and then I thought about that for a moment and followed up with, "Wait, what do you mean 'maps'?" What followed was a mini-requirements definition discussion in which we determined that what we want is:

- a dynamically generated map from a third-party service
- using data provided in the extract (e.g. an address)
- rendered into the output (e.g. a PDF)
- using Documaker Standard Edition

We also discussed some additional use cases that I won't cover in this post, but will in the future: Documaker Mobile (responsive HTML) output, and Documaker Enterprise Edition workflow. After understanding my particular requirements, the first thing I needed to do was determine which third-party service I would use, which prompted another delve into requirements. Such a service would need to:

- Be callable via HTTP.
- Support non-geocoded addresses - my data might not be 100% correct, and I need the service to correct the address for me. In a real-world scenario this is probably not the case: you would not want to risk putting the wrong map in your output, so you'd make sure your extract data already contains a corrected address.
- Return an image suitable for embedding (if a non-web-ready image is returned, we could amend this in our data enrichment flow, but having a web-ready image simplifies things).
- Be cheap and easy to implement - ideally fewer than 20 lines of code. I want this to be simple to illustrate. We can get deep into implementation code later, but to stand this up and illustrate things it should be as simple as possible, but no simpler.

I settled on a popular mapping API provided by Google. It's feature-rich and easy to implement. All you need is a Google API key, which is tied to your Google account... and that's where this experiment goes a little wonky. Why?
Well, because my Google account is my personal account and I don't have a corporate Google account. If you keep your API usage below a certain threshold, your queries will remain free, so this is good for extremely limited development testing. If you decide to go this route, you'll definitely want to have a corporate Google account for using this feature. I've learned that Oracle has a similar maps API that uses a wholly different licensing model, so I'll explore that in a future post. To use the Google Maps API:

- Login to https://console.cloud.google.com
- Create an API project, e.g. "Documaker"
- Enable the Maps Static API
- Obtain the API key

Next, you'll create a data enrichment program. For my "less than 20 lines of code" requirement, I chose to implement this in PowerShell (yes, PowerShell!). I created this script:

Add-Type -AssemblyName System.Web
[xml]$ef = Get-Content $args[0]
$addr = "$($ef.DocumentRequest.Addressees.Addressee.AddrLine1) $($ef.DocumentRequest.Addressees.Addressee.AddrLine2), $($ef.DocumentRequest.Addressees.Addressee.City), $($ef.DocumentRequest.Addressees.Addressee.State) $($ef.DocumentRequest.Addressees.Addressee.PostalCode) $($ef.DocumentRequest.Addressees.Addressee.Country)"
$addr = [System.Web.HttpUtility]::UrlEncode($addr)
$key = $args[1]
$out = $args[2]
$url = "https://maps.googleapis.com/maps/api/staticmap?center=$addr&zoom=13&size=300x300&maptype=roadmap&format=jpg&key=$key"
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($url, $out)

That's 10 lines of code, not including whitespace. Two things are important to note here. First, the call to this program expects three arguments:

- The path and filename of the extract data file
- The Google API key
- The location of the output file that should be written

Second, in my code above, you can see where I have concatenated the address lines in the extract data into a non-geocoded address variable $addr, which I use later when I call the Google API.
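If your enrichment step runs on Linux rather than Windows, the same URL construction can be sketched in shell. This is a hedged sketch, not part of the original workflow: the address and API key are placeholder values, and the small urlencode helper approximates HttpUtility::UrlEncode (it emits %20 for spaces rather than +, which the Static Maps API also accepts).

```shell
#!/bin/bash
# Build a Maps Static API URL the way the PowerShell script does.
# The address and API key below are placeholders, not real values.
urlencode() {
  local s=$1 out= c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9.~_-]) out+=$c ;;                 # unreserved: pass through
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;   # everything else: %XX
    esac
  done
  printf '%s\n' "$out"
}

addr=$(urlencode "123 Main St, Springfield, IL 62701 US")
key="YOUR_API_KEY"
echo "https://maps.googleapis.com/maps/api/staticmap?center=$addr&zoom=13&size=300x300&maptype=roadmap&format=jpg&key=$key"
```

From here, curl or wget would download the image the same way WebClient.DownloadFile does in the PowerShell version.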
This call is on the line that builds $url, where I have specified a size for my map of 300x300 and a format of jpg. I have also included the API key as a variable, which should be passed as the second argument. Finally, the response to this web call is downloaded into the file specified in the third argument. I ran a quick test and can see my image is being downloaded as a JPG. Now, depending on what your map service provides, you may need to convert the image from one format to another. ImageMagick is a multi-platform, open source image processing application that we can download and install; it gives us a simple command to perform the necessary conversion based on the file extensions (e.g. input.jpg to output.tif). However, since the Google API returns a JPG, and we can import that directly, we don't need to do this -- I'm just leaving this info here in case it's needed:

convert <input file> <output file>

What remains at this point is to get the map image into Documaker's output. There are a number of ways to do this, but the way I've done it in my example is a simple process. First, add a placeholder image onto a section; I call the image "PLACEHOLDER". Then add a PostTransDAL call in my JDT to invoke a DAL script. I'm going to say that's a total of 11 lines of code (even though, strictly speaking, this isn't code).

;PostTransDAL;2;call("AddMapToLogo");;

The DAL script will swap out the existing logo identified by the name PLACEHOLDER on a section called MAP for a file on disk that exists in the FORMS subdirectory of the MRL. Now we're up to 12 lines of code. Side note: if you need it, the DAL reference guide is available here.

ChangeLogo("embedmap.jpg","PLACEHOLDER","MAP")

Finally, we need to string together the workflow, which for Standard Edition is normally done in a shell script (e.g.
a BAT or CMD file):

@echo off
setlocal
@rem Please set your Google Maps API key
set KEY=
set MRLDIR=c:\fap\mstrres\dmres
set EXTRFILE=%MRLDIR%\INPUT\map.xml
set OUT=%MRLDIR%\forms\embedmap.jpg
cd %MRLDIR%
del data\*.* /q
powershell.exe -file ".\GetMap.ps1" %EXTRFILE% %KEY% %OUT%
@REM convert %OUT% %MRLDIR%\forms\embedmap.jpg
@REM assumes FSIUSER.INI is used, and JDT is configured for single-step to run
@REM GENTRAN, GENDATA, and GENPRINT.
gendaw32

In the lines above, I call the PowerShell script and pass three arguments to obtain the JPG of the map (remember to add your API key). I have commented out the call to convert since we don't need it. I'm going to say that the above does not constitute lines of code, since we're just calling a program, but let's count one additional line of code since we are having to call PowerShell. At the end of the shell script, Documaker executes in single-step mode (which combines GENTRAN, GENDATA, and one or more subsequent Documaker components such as GENPRINT). The PostTransDAL script is called by the JDT command, which swaps the map. In the output PDF we can see the results, all in 13 lines of code. Now, it should be noted that what I have demonstrated above relies on statically-named intermediary files. If you were operating strictly in a single batch process environment, it would be fine as-is. But if you are running multiple GenData processes, or using IDS to front any processes, you would need dynamically-named intermediary files that ideally use information from the extract data to uniquely name the map JPG file. I hope you've found this informative and useful, as it shows how you can take an older system and add new functionality to it to support your business requirements. Stay tuned for a post where I get into Oracle's Maps API!
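The static-file caveat above can be addressed by deriving the map file name from the extract data itself, so concurrent runs never collide. A minimal sketch under stated assumptions: the address string is a hypothetical value standing in for the field pulled from the extract, and cksum stands in for any stable hash; the DAL ChangeLogo call would then reference the same computed name.

```shell
#!/bin/sh
# Derive a per-transaction map filename from extract data so parallel
# GenData runs don't overwrite each other's images.
addr="123 Main St, Springfield, IL 62701 US"   # placeholder for extract field

# cksum gives a stable checksum; the first output field is the CRC
id=$(printf '%s' "$addr" | cksum | cut -d' ' -f1)
map_name="embedmap_${id}.jpg"
echo "$map_name"
```

The same input always yields the same name, so the enrichment step and the DAL script can compute it independently and still agree.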



Running on Bare Red Metal - Installing Documaker on Oracle Linux from Scratch

Background

This is a rather long tutorial that will guide you through the installation of Oracle Linux 7, Oracle Database 12c, Fusion Middleware, and Documaker Enterprise 12.6.2. By the end of this tutorial, you'll go from a bare, empty machine that doesn't boot to an operating system, to a full-fledged Linux-enabled Documaker system running on the complete Oracle stack.

Installing Oracle Linux 7

These steps will guide you in preparing a physical or virtual machine, prior to the installation and configuration of software components. In the following instructions, Oracle Enterprise Linux 7 will be installed to a bare machine. In order to install Oracle Enterprise Linux 7 (OEL7), you will need:

- An OEL7 disc image (ISO) from here: https://www.oracle.com/technetwork/server-storage/linux/downloads/index.html
- A USB flash drive (aka "thumb drive" or "USB stick") with sufficient storage capacity to hold the entire OEL7 ISO.

Next, you need to prepare the installation media, which you can do with macOS, Linux, or Windows.

Prepare Installation Media Using macOS

Make the ISO file available to macOS and note the path/filename. Insert/connect the USB drive. Open the Terminal application on macOS to use for the following steps. List the attached drives to obtain the device identifier:

% diskutil list
/dev/disk2 (external, physical):
   #:  TYPE                    NAME    SIZE      IDENTIFIER
   0:  FDisk_partition_scheme         *32.0 GB   disk2
   1:  Windows_FAT_32                  32.0 GB   disk2s1

Unmount the drive:

% diskutil unmountdisk /dev/disk2
Unmount of all volumes on disk2 was successful

Prepare the USB drive with the OEL7 media. This will take some time as the entire ISO file (4+ GB) is copied and verified. Press CTRL-T to output the current state (you can press it repeatedly for updates).

% sudo dd if=/Volumes/files/oel/oel7/OELR7U5.iso of=/dev/disk2 bs=1m

Eject the drive:

% diskutil eject /dev/disk2
Disk /dev/disk2 was ejected.
Using Windows

Make the ISO file available to Windows and note the path/filename. Download balenaEtcher from https://www.balena.io/etcher/, launch the installer, and follow the directions. Insert the USB memory stick into the USB port and launch balenaEtcher. Click the Select image button; a window will appear. Navigate to the ISO file that you want to use, then click Open. Click Change to select the correct USB device. Click the Flash! button to begin flashing. If required, click OK to allow balenaEtcher to proceed. Once completed, balenaEtcher will unmount the stick so you can remove it.

Using Linux

Login to the existing Linux machine. Change to the root user:

sudo -i

Install syslinux if you do not already have it:

yum install syslinux

Insert/connect the USB drive. If running a VM, be sure to eject the drive from the host machine so the guest VM can capture it. Check that the drive is detected by your Linux OS by running fdisk -l to list the disks available. You'll want to know the approximate capacity of the USB drive so you can identify it. Locate the device name (here, /dev/sdd):

# fdisk -l
Disk /dev/sdd: 32.0 GB, 32023511040 bytes, 62545920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot      Start         End      Blocks   Id  System
/dev/sdd1         8192    62545919    31268864    c  W95 FAT32

Run the partition tool to set partition one of the USB drive as bootable, using the device name discovered earlier:

# parted /dev/sdd
GNU Parted 3.1
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) toggle 1 boot
(parted) quit

Rerun fdisk -l to verify that the device partition is now bootable. Make a note of the partition device name, and ensure it is marked as bootable (the * in the Boot column below).
# fdisk -l
Disk /dev/sdd: 1015 MB, 1015808000 bytes, 1984000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ac091
Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *         249     1983743      991747+   6  FAT16

Make the OEL7 ISO available to the machine by copying it to the machine, mounting the ISO, or using an NFS mount. Download this script (https://github.com/calittle/documaker/blob/master/utilities/oracle_linux_usb.sh) and run it as shown. This script will prepare the USB drive with the necessary files. Replace the parts in < > with the appropriate values.

# sh oracle_linux_usb.sh --reset-mbr <path/to/ISO/file> </device/name>

Eject the stick.

Install Oracle Enterprise Linux

Insert the USB stick into a USB port on the target computer and reboot the machine. If the machine is not configured to select removable media first, use DEL, F2, or F8 to enter the BIOS settings and configure the machine to boot using the removable media. At the red Oracle screen, select the option to install. Select the keyboard layout and language option and configure the installation according to these key points:

4.1. Destination: this is where you select the drive(s) that will be used by the Oracle Linux system. Click all the drives you want to use and use automatic partitioning to recover all the space (keep in mind that anything existing on the drives will be erased).

4.2. Software: If you burned the OEL7 ISO to a DVD and inserted it, used a USB drive created from macOS, or copied the ISO to a hard drive in the machine, it should be automatically recognized. If you chose the NFS route, you'll need to pick the second option after clicking Software, then select the nfs protocol and enter the location in the form <IP_OR_HOSTNAME>:<PATH_TO_ISO>.

4.3. Installation type: pick development server, and go.
4.4. While the installation is progressing, use the menus to:

4.4.1. Create a root password.
4.4.2. Create a user (e.g. "oracle"), and be sure to give the user administrative (sudo) privileges by ticking the box during user creation.
4.4.3. Set up networking and give your new machine a hostname.

After the installation completes, remove the USB stick, reboot the machine, and login with the user created in 4.4.2.

Post-Installation Steps

On your client machine, download the RPM for 64-bit JDK8 and the RPM for 32-bit JDK8 from here (https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and then transfer the files to the server. Note: you can also use a Windows SCP client to transfer the RPM files to your new Oracle machine.

% scp jdk-8u241-linux-*.rpm <user>@<hostname>:~/

Login to the server and run the following commands to install a few extra packages that will be needed to complete other installations, and then install the JDKs just downloaded.

$ sudo yum install -y nfs-utils wget xauth libXp* libXp*.i686 curl nano xorg-x11-apps xdpyinfo sysstat-10.1.5 libaio* smartmon* net-tools* binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libaio libaio.i686 libaio-devel libaio-devel.i686 libX11 libX11.i686 libXau libXau.i686 libXi libXi.i686 libXtst libXtst.i686 libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libxcb libxcb.i686 make nfs-utils net-tools smartmontools sysstat unixODBC unixODBC-devel libtiff.i686 libXrender.i686 libstdc++.i686 zlib-devel.i686 ncurses-devel.i686 libX11-devel.i686 libXrender.i686 libXrandr.i686
$ sudo yum -y localinstall jdk-8u241-linux-x64.rpm
$ sudo yum -y localinstall jdk-8u241-linux-i586.rpm

Ensure you have installed an X-Windows client on your client machine, such as XQuartz or XMing.
Installing Oracle Database 12c

Database Installation

On your client machine, download Oracle Database 12c Release 2 for Linux x86-64, available here: https://www.oracle.com/database/technologies/oracle-database-software-downloads.html. Transfer the file to your server:

% scp Downloads/linuxx64_12201_database.zip <user>@<hostname>:~/

Login to your OEL system. Note: if you are still logged in after performing the post-installation steps, you will need to log out, then log back in. Make sure you connect to the server using -Y to enable X11 forwarding:

% ssh -Y <user>@<hostname>

Install prerequisites:

$ sudo yum install oracle-database-server-12cR2-preinstall -y

Unzip the installer and run it. The installer GUI should appear. If it does not, check that you have installed an X-Windows client on your client machine and that you used -Y in your ssh command.

$ unzip linuxx64_12201_database.zip
$ ./database/runInstaller.sh

If you want to run the installation silently with a response file you have already generated, use the following command line. You can omit the -silent flag if you want to preview values.

$ ./database/runInstaller.sh -silent -responseFile <path/to/file>

Untick the box "I wish to receive...". Click Next. Click Yes. Select Create and configure. Click Next. Select Server class. Click Next. Select Single instance. Click Next. Select Advanced install. Click Next. Select Enterprise Edition. Click Next. Use defaults for Oracle base and Software location (/home/oracle/app/oracle...). Click Next. Use defaults for Inventory. Click Next. Select General Purpose. Click Next. Use defaults for Global name (orcl) and SID (orcl). Untick the box for Create as Container database. Click Next. Accept default configurations and click Next (ensure Character Set AL32UTF8 is selected). Accept default database storage options. Click Next. No selections for EM cloud control. Click Next. No selections for Recovery options. Click Next. Enter desired password(s). Click Next.
Accept defaults for system groups. Click Next. Process any required fixes for prerequisite checks. If a Swap Size warning is displayed, use the following to temporarily increase swap space. Adjust count to the number of megabytes of swap needed (e.g. 5000 MB = 5 GB), then click Check Again.

$ sudo -i
# dd if=/dev/zero of=/tmp/swapfile bs=1M count=5000
# mkswap /tmp/swapfile
# swapon /tmp/swapfile
# exit

Click Save Response File if desired, otherwise click Install. When prompted, run the additional scripts as the root user. Allow the installation to finish.

Post-Installation Options

To prevent password expiration, which would render the database unable to perform services, perform the following. Login to the server:

% ssh <user>@<hostname>

Run sqlplus:

$ sqlplus / as sysdba

Execute:

ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
ALTER PROFILE DEFAULT LIMIT PASSWORD_GRACE_TIME UNLIMITED;

To configure the database for automatic startup on boot, perform the following steps. Login to the server as the oracle user. Create a scripts directory:

$ mkdir /home/oracle/scripts

Copy and paste the following into the terminal, modifying the hostname before copying:

cat > /home/oracle/scripts/setEnv.sh <<EOF
# Oracle Settings
export TMP=/tmp
export TMPDIR=\$TMP
export ORACLE_HOSTNAME=<hostname>
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/home/oracle/app/oracle
export ORACLE_HOME=\$ORACLE_BASE/product/12.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=/usr/sbin:/usr/local/bin:\$PATH
export PATH=\$ORACLE_HOME/bin:\$PATH
export LD_LIBRARY_PATH=\$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=\$ORACLE_HOME/jlib:\$ORACLE_HOME/rdbms/jlib
EOF

echo ". /home/oracle/scripts/setEnv.sh" >> /home/oracle/.bash_profile

cat > /home/oracle/scripts/start_all.sh <<EOF
#!/bin/bash
. /home/oracle/scripts/setEnv.sh
export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES
dbstart \$ORACLE_HOME
EOF

cat > /home/oracle/scripts/stop_all.sh <<EOF
#!/bin/bash
. /home/oracle/scripts/setEnv.sh
export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES
dbshut \$ORACLE_HOME
EOF

chown -R oracle.oracle /home/oracle/scripts
chmod u+x /home/oracle/scripts/*.sh

You can manually start/stop the database as the oracle user with the new stop_all.sh and start_all.sh scripts. Run the following as the root user to create the database service, which will call the start/stop scripts:

cat > /lib/systemd/system/dbora.service <<EOF
[Unit]
Description=The Oracle Database Service
After=syslog.target network.target

[Service]
LimitMEMLOCK=infinity
LimitNOFILE=65535
RemainAfterExit=yes
User=oracle
Group=oracle
ExecStart=/home/oracle/scripts/start_all.sh
ExecStop=/home/oracle/scripts/stop_all.sh

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable dbora.service

Service Control

To start the services, run /home/oracle/scripts/start_all.sh. To stop the services, run /home/oracle/scripts/stop_all.sh. To run sqlplus, enter:

$ sqlplus / as sysdba

Installing Fusion Middleware

Preparation

Download FMW from here: https://www.oracle.com/tools/downloads/application-development-framework-downloads.html. Transfer the file to your server:

% scp Downloads/fmw_12. <user>@<hostname>:~/

Login to the server with X11 forwarding enabled:

% ssh -Y <user>@<hostname>

Copy and paste the following into the terminal. This will create the various directories for installation. If you change them from the defaults shown here, please be advised you'll need to change them for the rest of this documentation. Please pay close attention to this part!
mkdir -p /home/oracle/app/oracle/fmw
mkdir -p /home/oracle/app/oracle/domains
mkdir -p /home/oracle/app/oracle/applications
echo "export MW_HOME=\$ORACLE_BASE/fmw" >> /home/oracle/scripts/setEnv.sh
echo "export WLS_HOME=\$MW_HOME/wlserver" >> /home/oracle/scripts/setEnv.sh
echo "export WL_HOME=\$WLS_HOME" >> /home/oracle/scripts/setEnv.sh
echo "export DOMAIN_BASE=\$ORACLE_BASE/domains" >> /home/oracle/scripts/setEnv.sh
echo "export DOMAIN_HOME=\$DOMAIN_BASE/wcc" >> /home/oracle/scripts/setEnv.sh
echo "export JAVA_HOME=/usr/java/jdk1.8.0_241-amd64" >> /home/oracle/scripts/setEnv.sh
echo "export PATH=\$JAVA_HOME/bin:\$PATH" >> /home/oracle/scripts/setEnv.sh

Note that the dollar signs are escaped so the variable references are written to setEnv.sh literally and resolved when the script is sourced, rather than being expanded (mostly to empty values) by the current shell. Reload the bash profile:

$ source ~/.bash_profile

Unzip the installer:

$ unzip fmw_12.

Installation

Run the installer:

$ java -jar fmw_12.

To run a silent installation, see the Oracle documentation for generating the appropriate artifacts for, and running, a silent installation. Documentation is available here: https://docs.oracle.com/cd/E23943_01/doc.1111/e14142/silent.htm#WLSIG134

Click Next. Click Next. Set Oracle Home to /home/oracle/app/oracle/fmw. Click Next. Select Fusion Middleware Infrastructure. Click Next. Verify prerequisites are satisfied. Click Next. Untick the box "I wish...". Click Next. Click Yes. Click Save Response File (optional). Click Install. Click Next. Click Finish.

Configuration

Run the Repository Creation Utility:

$ $MW_HOME/oracle_common/bin/rcu

Click Next. Select Create and System Load. Click Next. Set:

Database Type = Oracle Database
Host Name = localhost
Port = 1521
Service Name = orcl
Username = sys
Password = <password>
Role = SYSDBA

Click Next. Click OK after the prerequisite check. Tick "Oracle AS Repository Components" (checks all boxes). Click Next. Click OK after the prerequisite check. Enter password(s) for the schemas. Click Next. Accept default tablespaces. Click Next. Click OK to confirm. Click OK after the tablespaces are created. Click Save Response File (optional), then click Create.
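The escaped dollar signs in the echo lines above matter: inside double quotes, an unescaped variable is expanded immediately by the current shell, and since MW_HOME is not set yet at that point, it would expand to an empty string and write a broken path into setEnv.sh. A minimal demonstration of the difference (the file name demo_env.sh is arbitrary):

```shell
#!/bin/sh
# Demonstrate why \$ is needed when appending to setEnv.sh:
# an unescaped variable that isn't set yet expands to an empty string.
unset MW_HOME
echo "export WLS_HOME=$MW_HOME/wlserver"  >  demo_env.sh   # expands now: broken path
echo "export WLS_HOME=\$MW_HOME/wlserver" >> demo_env.sh   # literal: resolved when sourced
cat demo_env.sh
```

The first line comes out as export WLS_HOME=/wlserver, while the second preserves $MW_HOME for later resolution, which is the behavior the setEnv.sh appends rely on.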
After the process completes, click Close.

Documaker Installation

Preparation

Download Oracle Documaker Enterprise Edition (ODEE) from https://edelivery.oracle.com. You will need to search for Oracle Documaker Enterprise Edition and choose version 12.6.2 for Linux x86-64. Download the V980437-01.zip file. Copy the file to the server:

% scp Downloads/V980437-01.zip <user>@<hostname>:~/

Login to the server with X11 forwarding enabled:

% ssh -Y <user>@<hostname>

Unzip the file:

$ unzip V980437-01.zip
$ unzip ODEE12.6.2.34214linux64.zip

Installation

The installation process lays down the initial filesystem and prepares the system for configuration. This is a two-part process that begins with installation and concludes with post-installation setup. You can follow along in the official documentation available at https://docs.oracle.com/cd/E96926_01/DEIG/Content/install-on-unix.htm. Run Disk1/runInstaller from the download package. At the welcome screen, click Next. Set the home directory to /home/oracle/app/oracle/odee. Click Next. Set the password for the Documaker administrative user. Click Next. Set the database connection parameters:

Host = localhost
Port = 1521
Database Name = orcl
Connection Type = Service Name
Advanced Compression = check this box if you are licensed for this feature of Oracle Database. You are not required to use this feature.

Click Next. On the following screens you will define the schema names and passwords for the Documaker administrative schemas and the first Assembly Line. Set the schema password for the dmkr_admin user. Click Next. Set the schema password for the dmkr_asline user. Click Next. On the Application Server details screen, enter the following settings. If you have an existing WebLogic domain that you want to deploy the Documaker artifacts into, specify the settings for that domain. Otherwise, create your own values here.
User = <weblogic administrative user>
Password = <weblogic password>
Host = localhost
Oracle Home = <$MW_HOME value, /home/oracle/app/oracle/fmw>
Project Path = <$DOMAIN_BASE value, /home/oracle/app/oracle/domains>
Domain Name = <'ODEE' or some other name>
Admin Server/Port = AdminServer, 7001

Leave the remaining settings on this screen blank. Click Next. Specify the principal and credentials for a user to connect to the JMS server; I recommend using the WebLogic administrative credentials. Note: I also recommend changing the Provider URL to use "localhost" for name resolution. Click Next. Set the hot folder location (the default is fine) and click Next. Enter SMTP server connection details if necessary. Note: I recommend setting Port=1 if you are not using SMTP, to prevent an unnecessary warning message during startup. Click Next. For WebCenter Content, set Enable=False and click Next. For Oracle UMS, set Enable=False and click Next. Optionally save the response file, then click Install.

Configuration

Optionally run the following to set up an environment variable that points to your installation location. This isn't required, but is a handy shortcut. If you do not use this shortcut, then replace $ODEE1 in the remaining steps with /home/oracle/app/oracle/odee/documaker.

$ echo "export ODEE1=$ORACLE_BASE/odee/documaker" >> ~/scripts/setEnv.sh
$ source ~/.bash_profile

Run the following:

$ cd $ODEE1/database/oracle11g
$ sqlplus / as sysdba @dmkr_admin
SQL> @dmkr_asline
SQL> @dmkr_admin_user_examples
SQL> exit

Optionally run the following to deploy the reference implementation resources to the tables:

$ cd $ODEE1/mstrres/dmres
$ ./deploysamplemrl.sh

Modify the $ODEE1/j2ee/weblogic/oracle11g/scripts/weblogic_installation.properties file. Replace '<SECURE VALUE>' with the passwords set during the installation. The passwords are not written to this file, for enhanced security.
jdbcAdminPassword='<SECURE VALUE>'
jdbcAslinePassword='<SECURE VALUE>'
jmsCredential='<SECURE VALUE>'
adminPasswd='<SECURE VALUE>'
weblogicPassword='<SECURE VALUE>'

Optionally, you can modify other values here if they differ from what was originally entered during installation. Run $ODEE1/j2ee/weblogic/oracle11g/scripts/wls_create_domain.sh. Select Y to run the RCU. Click Next. Select Create Repository and System and Product Load. Click Next. Select Database Type = Oracle, Host Name = localhost, Port = 1521, Service Name = orcl, Username = sys, Password = <sys password>. Click Next. Click OK once the prerequisite check completes. Create a new schema (e.g. DEV). The templates required for ODEE are shown below - select these if they are not already selected:

Common Infrastructure Services
Metadata Services
WebLogic Services
Oracle Platform Security Services
Audit Services
Audit Services Append
Audit Services Viewer

Click Next. Click OK after the prerequisite check completes. Enter the desired password(s) for the schema(s) and click Next. Click Next to create the tablespaces, then click OK to confirm. Click OK once the tablespaces are created. Click Create to populate the schemas. Click Close. The installer will prompt you to run the RCU again if necessary; click N. The WebLogic Domain Configuration Wizard will start. Select Create New Domain to deploy ODEE to a new domain. Set the domain location to /home/oracle/app/oracle/domains/odee. Click Next. Select Oracle JRF and click Next. Enter the domain administrator username and password. Click Next. Set Domain Mode = development, and use the preselected JDK (unless you know you need to use a different one). Enter the database connection details for the RCU data: Vendor = Oracle, Driver = Oracle's Driver (Thin) for Service Connections, Service = orcl, Host Name = localhost, Port = 1521, and the schema password for the DEV_STB schema. Click Get RCU Configuration. Click Next. Click Next. Click Next after successful tests.
Tick the Administration Server and Node Manager boxes and click Next. Tick Enable SSL on port 7002. Click Next. Tick Per Domain Default configuration, set the desired Node Manager username/password, and click Next. Click Create. Click Next, then Finish. Back at the terminal window, you may open another connection to the server if you wish to edit the weblogic_installation.properties file; otherwise, press Enter to load ODEE to the WebLogic domain you just created. If any of the properties you entered in the Configuration Wizard do not match those in the weblogic_installation.properties file, you must edit them before continuing. After the script completes, run wls_add_correspondence.sh. Open a new terminal window. Start Node Manager, then start the AdminServer:

$ $DOMAIN_HOME/bin/startNodeManager.sh &
$ $MW_HOME/oracle_common/common/bin/wlst.sh
wls:/offline> nmConnect('<nodemanager username>','<nodemanager credential>','localhost','5556','<domain name>','/home/oracle/app/oracle/domains/<domain name>')
wls:/nm/<domain name>> startServer('AdminServer')
wls:/nm/<domain name>> exit()

Back at the original terminal window, add the users for Correspondence:

$ ./create_users_groups.sh
$ ./create_users_groups_correspondence_example.sh

Open a browser to http://<hostname>:7001/jpsquery. Then open a browser to http://<hostname>:7001/console and log in with the WebLogic credentials. Expand Environment, then click Servers. Click the Control tab. Tick the boxes next to JMS_Server, IDM_Server, and DMKR_Server, then click Start. Grab a coffee and wait; you can click the Refresh icon to have the page refresh automatically. Note: the only required elements for starting Doc Factory are JMS_Server and the Admin Server. If you don't need to access Documaker Interactive, you don't need to start IDM_Server. If you don't need to access DWS, Documaker Administrator, or Documaker Dashboard, you don't need to start DMKR_Server.
Once the required servers are in the running state, you can log out of the console. Start Doc Factory by entering the following command in the terminal window:

$ $ODEE1/docfactory/bin/docfactory.sh start

This concludes the tutorial; I hope you've found it useful. You can also check out my other tutorials on using ODEE with WebCenter Content or authenticating with Active Directory. If you have any questions or comments, you can leave them below, or you can check out the Documaker Community.



Handling the Unexpected Exception: Too Many Open Files

If you're running ODEE in a Linux environment, it's a good idea to have a seasoned Linux sysadmin on your team to help manage the operating system. Linux is a great tool: it's generally lean, fast, and highly configurable. Being highly configurable also means that some degree of work has to be put in to tweak your system for optimal performance. In this post, I'd like to discuss open file descriptors, a topic that is not often discussed during a Documaker implementation (and almost never for implementations on Windows). First, let me tell you why I'm even talking about this. A long-time Documaker customer who is currently converting from Standard Edition to Enterprise Edition (while simultaneously moving all their systems to the cloud) contacted me about an error that was occasionally popping up in their system. They were able to work around the error by restarting the Doc Factory, but were curious as to its nature. The error, in all its glory, manifested as:

Unexpected exception: java.io.IOException: Cannot run program "/var/opt/oracle/u01/data/domains/odee_1/documaker/bin/docfactory_assembler" (in directory "/var/opt/oracle/u01/data/domains/odee_1/documaker/mstrres/dmres"): error=24, Too many open files
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
 at oracle.documaker.processmonitor.process.instance.Instance.reset(Instance.java:1318)
 at oracle.documaker.processmonitor.process.Process.startInstance(Process.java:195)
 at oracle.documaker.processmonitor.process.Process.restartInstance(Process.java:311)
 at oracle.documaker.processmonitor.process.monitors.InstanceMonitor.restart(InstanceMonitor.java:673)
 at oracle.documaker.processmonitor.process.monitors.InstanceMonitor.run(InstanceMonitor.java:206)
Caused by: java.io.IOException: error=24, Too many open files
 at java.lang.UNIXProcess.forkAndExec(Native Method)
 at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
 at java.lang.ProcessImpl.start(ProcessImpl.java:134)
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
 ... 5 more

The pertinent part is the message itself: too many open files. What does that even mean? Basically, in Linux (and related operating systems) a file descriptor is an abstract indicator (a/k/a "handle") used to access an I/O resource, such as a file, pipe, or network socket. You can read more about file descriptors at Wikipedia. Every process that accesses an I/O resource needs a handle to that resource. A software system that accesses many system resources (databases, network resources, files, or even web applications with multiple users connected via browser) will generate many such handles. That's about all you really need to know about file descriptors themselves. So how does this correlate to the error above? The Linux operating system imposes a hard limit on the number of file descriptors available to a given process. It also imposes a soft limit, which can be raised up to the hard limit and can be managed by the user. To check your particular system, log in via shell and issue the following command:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 96202
max locked memory       (kbytes, -l) 134217728
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The -a parameter displays all limits.
As shown above, you can use other parameters to show just the limit you want to see, such as -n, the open files limit (and the limit we are interested in for solving this problem). As you can see, my open files limit is 1,024, which is actually quite small: any process started by my user on this system can have a maximum of 1,024 file handles. By comparison, the recommended file descriptor limit for a user running Oracle DB is 8,192! This particular system runs in a small virtual machine, so I have tuned it accordingly. Before we go about changing the limit, we should have some idea what the limit should be, and the answer is, as always: it depends. A good starting point is 8,192. To properly tune the system, you'll want to run some performance tests, monitor the number of open file handles consumed by Documaker processes, and tune accordingly. Another option is simply to make the soft limit unlimited, assuming Documaker is the only process of any consequence running on the machine. You should work with your sysadmin to determine the appropriate value and approach here. When you're ready to change the soft limit, the recommended approach is to change it at the user level in the Doc Factory startup script. In fact, there are probably already commented-out lines in this script that you can use as a model. To set the limits for Doc Factory, edit the docfactory.sh script in [ODEE_HOME]/documaker/docfactory/bin and locate the section that looks like this:

# additional necessary environment settings go here
# such as DB2INSTANCE or loading of . .db2profile
# or ORACLE_SID, MQSERVER, JAVAPATH, CLASSPATH, ...
#--------------------------------------------------------
...
#ulimit -c unlimited
#ulimit -s unlimited
#ulimit -m unlimited
#ulimit -d unlimited
ulimit -S -n unlimited

The final line above (the one that is not commented out) is the line you add; it sets the soft limit to unlimited.
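For reference, the same soft/hard limit pair can be inspected and adjusted from inside a process. Here's a minimal Python sketch of what ulimit -S -n unlimited effectively does, using only the standard-library resource module (this is an illustration, not anything Documaker-specific):

```python
import resource

# Current soft and hard limits for open file descriptors --
# the same values reported by "ulimit -S -n" and "ulimit -H -n".
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))

# A non-privileged process may raise its own soft limit, but only
# up to the hard limit. This mirrors "ulimit -S -n unlimited" in
# docfactory.sh, which effectively sets soft = hard.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("new soft=%d" % resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

This is also why "unlimited" isn't really unlimited: the kernel silently caps the soft limit at the hard limit, a point we'll come back to below.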
This script also prints the limits during startup, so you should be able to see the effect of your change when starting Doc Factory. Note that setting the soft limit to unlimited doesn't actually make it unlimited; it sets it equal to the hard limit. As I stated previously, if Documaker is the only process of consequence running on this machine, then setting it to unlimited is probably okay, but if you have other processes running under the same user account, you will want to perform some testing to determine the appropriate value and set the limit accordingly. In that case, you'll want to know the hard limit, which you can discover with the following command:

$ ulimit -H -n
65536

Again, on my tiny VM system you can see that the hard limit is 65,536, so when I start up Doc Factory with ulimit -S -n unlimited in place, it should report the open files limit as 65,536. You can leave it set like this and probably never have another issue, but if you must set a specific limit lower than the hard limit, you will have to do some performance testing and take measurements while the system is processing. There are a number of ways to accomplish this using the lsof command, which reports the list of open files. One variant of this command reports the open file handles for a given user, which you can pipe through wc to count them. On my tiny VM system, the oracle user runs all of my Oracle DB, WebLogic, Doc Factory, and Docupresentment processes:

$ lsof -u oracle | wc -l
66123

You may be looking at that number and wondering why it is higher than the hard limit. Recall that the limit is per process, so across all of a user's processes, lsof can definitely count more handles than any single process's limit. If you want to narrow down the lsof output to a specific process, you'll need to know the process ID (pid) that you want to examine.
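As an aside, once you know a pid, an alternative to lsof for counting one process's descriptors on Linux is to list /proc/<pid>/fd directly. A minimal Python sketch (Linux-specific; the function name is mine, not part of any Documaker tooling):

```python
import os

def count_open_fds(pid):
    """Count open file descriptors for one process by listing
    /proc/<pid>/fd (Linux-specific; roughly what 'lsof -p <pid>'
    shows, minus lsof's extra rows for memory maps and the like)."""
    return len(os.listdir("/proc/%d/fd" % pid))

# Example: check our own process.
me = os.getpid()
print("pid %d has %d open descriptors" % (me, count_open_fds(me)))
```

Polling this number for the Doc Factory worker pids during a load test is a cheap way to watch descriptor consumption trend toward the soft limit.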
The following command shows the PIDs for Doc Factory, WebLogic, and Docupresentment processes, along with other info for your reference. The first column is the PID. The second and third columns are not needed for what we are doing, but help verify we are looking at the correct processes: the second column is the process name and the third is part of the command line used to start the process. In the output below, the lines with -Dweblogic.Name are the WebLogic processes, the ids* lines are Docupresentment (IDS), and the docfactory_* lines are Doc Factory.

$ ps -ef | grep -e docfactory -e ids | grep -v grep | awk {'print $2 " " $8 " " $12'}
2996 /usr/java/jdk1.8.0_60/bin/java -Dweblogic.Name=AdminServer
4069 /usr/java/jdk1.8.0_60/bin/java -Dweblogic.Name=jms_server1
7823 /usr/java/jdk1.8.0_60/bin/java -Dweblogic.Name=idm_server1
24950 /usr/java/jdk1.8.0_60/bin/java -Dweblogic.Name=dmkr_server1
13563 idsrouter.exe -cp
14482 idsinstance.exe -Dconfig.jndi.name=DMKRConfig
14744 idsinstance.exe -Dconfig.jndi.name=DMKRConfig
21152 idsinstance.exe -Dconfig.jndi.name=DMKRConfig
22731 idsinstance.exe -Dconfig.jndi.name=DMKRConfig
27529 idswatchdog.exe lib/DocucorpStartup.jar
22802 /oracle/odee/documaker/jre/bin/docfactory_supervisor -Djava.endorsed.dirs=/oracle/odee/documaker/docfactory/lib/endorsed
22945 docfactory_batcher -Djava.library.path=/oracle/odee/documaker/bin
22946 /oracle/odee/documaker/bin/docfactory_assembler -nu
22968 /oracle/odee/documaker/bin/docfactory_presenter -nu
22974 docfactory_pubnotifier -Dodee.home=/oracle/odee/documaker/
22984 docfactory_scheduler -Djava.library.path=/oracle/odee/documaker/bin
22985 docfactory_archiver -Dodee.home=/oracle/odee/documaker/
22987 /oracle/odee/documaker/bin/docfactory_distributor -nu
22988 docfactory_receiver -Djava.library.path=/oracle/odee/documaker/bin
22989 docfactory_historian -Djava.library.path=/oracle/odee/documaker/bin
22990 docfactory_identifier -Djava.library.path=/oracle/odee/documaker/bin
22991 docfactory_publisher -Djava.library.path=/oracle/odee/documaker/bin

With the PID in hand (the Assembler above is 22946), you can count that process's handles with lsof -p <pid> | wc -l. In this case, the Assembler process currently has 195 handles; as I said, it's a small system. With this information you should have a good grasp on how to handle file descriptors (pardon the pun), and should you run into any errors that you can't handle (I can't help myself), please do comment, or seek assistance via Oracle Support or the Documaker community. As an aside, I was going to link to an existing blog post in case you needed assistance setting up ODEE on Linux, and I realized I don't have one, so look for that soon!



ODEE and JMS Performance

So you've just installed a brand-new Oracle Documaker Enterprise Edition system ("ODEE"), and at some point during your implementation you're going to have to scale it. You're probably already familiar with ODEE's scaling properties, but let's review a little. In the past, with Standard Edition (a/k/a Documaker, Documaker RP, "ODSE", or various other names), scaling meant figuring out how to split input files into multiple jobs, distributing those jobs to multiple executions of ODSE, commingling the intermediary output, and then running a final print process. This creates a rigid framework that has to be scaled manually to meet increased volumes or reduced processing windows. Some years ago, Docupresentment (a/k/a IDS) came along, and suddenly Documaker was adorned with a service-based interface that allows for real-time document generation both in batch and in "batches of one". Docupresentment added some enhanced scaling capabilities, but still requires some amount of manual intervention for scaling large batches, and has limited automatic scaling capabilities. With ODEE's database-backed processing capabilities combined with scalable technologies, you're in the driver's seat of a supercar in the world of truly scalable document automation. Under the hood, ODEE uses JMS queues to push internal units of work from schedulers to workers, and as such requires a well-tuned JMS server to obtain the best performance. In this post, I'm going to discuss JMS configuration within WebLogic and how you can implement JMS configuration for high availability and failover with ODEE. Finally, we'll cover one facet of tuning: JMS performance. Let's get started!

JMS Implementation

Let's review some of the JMS implementation details within WebLogic. The JMS components deployed by the ODEE installer consist of:

A Managed Server - a discrete JVM which hosts the JMS services. The managed server is named jms_server by default, but you can of course change this. Note that it's also possible to target the JMS services onto another managed server, and that's fine (however, if you choose to target JMS services to the administrative server 'AdminServer', a WebLogic patch set is needed; refer to your ODEE documentation for specific details).

A JMS Server - not to be confused with the JVM, this is a management container for the JMS modules (and the queues therein) that are targeted to it. This component maintains information on which persistent store is used for any messages that arrive on destinations, and maintains the states of durable subscribers created on the destinations.

A Persistent Store - a physical destination, in either file storage or database storage, that is used to house persistent messages. If message delivery is critical, use of a persistent store is recommended, and database storage is more scalable and flexible.

A Module - a collection of JMS queues and a connection factory which can be deployed to a target JMS Server.

A Subdeployment - a grouping of JMS queues.

A Queue Connection Factory (QCF) - a JMS connection object, which exposes general configuration parameters including client connection, default delivery, load balancing, and security settings.

Multiple Queues - JMS queues, which expose configuration parameters and load-balancing options.

The hierarchy of these objects looks like this in a default installation with a single assembly line:

Managed Server "jms_server"
  JMS Server "AL1Server"
    Persistent Store "AL1FileStore"
    Module "AL1Module"
      QCF "AL1QCF"
      Subdeployment "AL1Sub" (note that the queues are organized at the module level, but are grouped for deployment purposes into a subdeployment)
        Q-1 "ArchiverReq"
        Q-2 "ArchiverRes"
        Q-n ...
An ODEE Assembly Line has its own set of workers and therefore needs its own set of JMS resources; this is why the hierarchy of components is structured as it is: an Assembly Line has a JMS Server, JMS Module, Subdeployment, Queues, and a QCF. These can be collectively retargeted and migrated as scaling needs change.

JMS Operation

First, it is important to know that WebLogic JMS provides two load-balancing algorithms: Round Robin (the default) and Random. In the round-robin algorithm, WebLogic maintains an ordering of physical destinations within the distributed destination. The messaging load is distributed across the physical destinations one at a time, in the order they are defined in the WebLogic Server configuration. Each WebLogic Server maintains an identical ordering but may be at a different point within it. Multiple threads of execution within a single server using the same distributed destination affect one another's position in the rotation each time they produce a message. Round-robin is the default algorithm, doesn't need to be configured, and is recommended for Documaker. When an ODEE Worker starts, it must connect to a queue destination as a consumer. When distributed destinations are used, WebLogic JMS must find a physical destination from which the worker will receive messages. The choice of destination member is made only upon initial connection, using one of the load-balancing algorithms; from that point on, the consumer gets messages from that member only. When testing failover behavior of Workers and queues, you will notice how ODEE handles loss of queue connections: when a distributed JMS destination member goes down, the Worker loses its connection to the member and destroys the existing consumer. The Worker then attempts to re-establish the queue connection by creating a new consumer, according to the selected load-balancing algorithm.
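The key asymmetry described above, where consumers are balanced once at connect time while producers re-balance on every send, can be illustrated with a toy model (this is an illustration only, not the WebLogic API; class and member names are hypothetical):

```python
import itertools

class DistributedDestination:
    """Toy model of round-robin balancing over the physical members
    of a distributed destination."""
    def __init__(self, members):
        self._cycle = itertools.cycle(members)

    def pick_for_consumer(self):
        # A consumer (an ODEE Worker) is balanced ONCE, at connection
        # time, and is then pinned to that member for its lifetime.
        return next(self._cycle)

    def pick_for_producer_send(self):
        # A producer re-balances on EVERY send.
        return next(self._cycle)

dest = DistributedDestination(["AssemblerReq@jms1", "AssemblerReq@jms2"])
pinned = dest.pick_for_consumer()   # the worker stays on this member
sends = [dest.pick_for_producer_send() for _ in range(4)]
print(pinned, sends)
```

Because the pinned consumer never moves, ordering is preserved for it even though producers spray messages across members; it also means a worker can starve if producers happen to concentrate messages on a member it is not pinned to.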
When a producer sends a message, WebLogic JMS looks at the destination where the message is being sent. If the destination is a distributed destination, WebLogic JMS decides where the message will be sent: the producer sends to one of the destination members according to one of the load-balancing algorithms, and it makes this decision each time it sends a message. However, there is no compromise of ordering guarantees between a consumer and producer, because consumers are load-balanced once and are then pinned to a single destination member. If a producer attempts to send a persistent message to a distributed destination, every effort is made to first forward the message to distributed members that utilize a persistent store. However, if none of the distributed members utilize a persistent store, the message will still be sent to one of the members according to the selected load-balancing algorithm. It is therefore important to understand that JMS Servers do not share messages in a cluster unless additional configuration is performed to forward JMS messages between distributed queue members. That configuration relates to JMS clustering; however, in our testing with ODEE 12.6.2 we found that it does not properly support the use of clustered JMS queues (some older versions may). A primary objective in implementing high availability is to eliminate single points of failure (SPoFs), and clustering is a typical remediation for SPoFs. However, there is another WebLogic option that remediates SPoFs: service migration, a feature of WebLogic high availability. In this configuration, a cluster of WebLogic managed servers can be created and scaled, and the JMS service can be pinned to one cluster member and automatically (or manually, if you prefer) migrated from an unhealthy cluster member to a healthy one.
This model requires a bit more effort to ensure the cluster members are sized appropriately to handle the work being passed through the system; however, in our testing we have found that JMS services are extremely lightweight and trivial in terms of performance impact on system processing speed.

Summary:

The WebLogic JMS implementation supports a high number of consumers and connections with just a single server.
WebLogic JMS connections are distributed round-robin across JMS servers. Connections are established at worker startup and are held for the life of the worker.
JMS messages are not shared across clustered JMS queues by default; they can be forwarded, but this is not the default behavior and must be explicitly set. Some ODEE versions do not support clustered JMS with forwarded messages, so this is not the best practice.
JMS messages are not shared across uniform distributed queues if the queues do not all utilize persistent storage.
A low worker instance/thread count, resulting in fewer connections to JMS servers, will not saturate the connections.
Worker starvation can occur if messages are concentrated on one JMS server over another; conversely, workers can be overworked by the same concentration.
High availability is achieved by implementing migratable services, which ensures that JMS services are available on a healthy cluster member at all times.

Configuration

Failover configuration for JMS can take several forms depending on your tolerance for message loss. Since this post deals specifically with performance, I'm not going to cover failover in great detail. In general, JMS services can be configured for service migration, which meets the failover requirement. To modify the default ODEE deployment to support a highly available configuration, perform the following steps in WebLogic Console.
These instructions assume you have an existing ODEE installation already deployed to WebLogic, which means you have a machine (node) with multiple managed servers, one of which hosts the JMS modules. They also assume some familiarity with WebLogic Console, where this configuration takes place.

1. Create an additional machine ("machine2").
2. Create an additional managed server ("jms_server2").
3. Create a cluster containing the original managed server hosting JMS and the new managed server.
4. Configure the cluster to support migration: on the cluster containing the JMS managed servers, click the Migration tab and change the Migration Basis to Consensus. Make sure the available machines are shown in the chosen Candidate Machines. Save.
5. Under Environment > Clusters > Migratable Targets, select the server hosting the JMS module and change the Migration Policy to Auto-Migrate Exactly Once. Make sure the available JMS managed servers are shown in the chosen Constrained Candidate Servers. Save.
6. Configure the JMS Server for migration (you can reuse the existing JMS server created by the ODEE install, or create a new one): update the JMS server to use a JDBC persistent store targeted to the migratable managed server, then change the JMS server to target the migratable managed server.
7. Change the JMS module to target the JMS cluster (all members). Change the JMS subdeployment to target the JMS server targeted to the migratable managed server.
8. Update the jms.provider.URL setting in Documaker Administrator (Systems > Assembly Line n > Configure > QUEUES_CFG > Bus) with a comma-delimited list of hostnames and ports for your servers. For example, in step 1 you created an additional machine; update jms.provider.URL to something like "t3://server_a:11001,server_b:11001" to match the hostnames and ports of each member. The ports should be the same across all machines.
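The comma-delimited jms.provider.URL is simply an ordered list of candidate addresses that a client walks until one accepts a connection. A small Python sketch of parsing such a list (the function is illustrative and mine, not part of Documaker or WebLogic):

```python
def candidate_addresses(provider_url):
    """Expand a comma-delimited WebLogic t3 URL such as
    't3://server_a:11001,server_b:11001' into (host, port) pairs,
    in failover order. Entries after the first omit the scheme."""
    body = provider_url.split("://", 1)[-1]   # drop the 't3://' prefix
    pairs = []
    for entry in body.split(","):
        host, _, port = entry.strip().partition(":")
        pairs.append((host, int(port)))
    return pairs

print(candidate_addresses("t3://server_a:11001,server_b:11001"))
# [('server_a', 11001), ('server_b', 11001)]
```

A quick sanity check like this is handy when editing the setting: a typo in the list only surfaces at failover time, which is exactly when you don't want to discover it.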
During a failover scenario, this configuration should act as follows: if the server instance hosting the JMS deployment fails, the services are automatically migrated to the next member of the cluster. The producers and consumers using those JMS resources will fail to connect to the now-nonexistent service on the dead server, and a connection will be established to the next server in the list provided by the jms.provider.URL setting. Messages remain intact if the persistent store is a database.

JMS Monitoring

One method of performance tuning an ODEE implementation involves determining how efficiently workers are handling the workload. Because every implementation is different (different inputs, documents, and rules), there isn't a one-size-fits-all solution. There are a number of activities you can undertake to give visibility into your system, and one is to monitor your JMS queues. Each queue can expose information about how many messages it contains, the high water mark of messages (i.e. the maximum number of messages that have existed in the queue), the number of active consumers, and more. For our purposes, we are interested in, for each queue, the number of consumers and messages, and the high water mark of messages. If you've spent any time digging around in WebLogic Console, you will soon learn that capturing enough of this information to conduct trend analysis is somewhat painful, requiring a lot of configuration and overhead. Luckily, I have put together a handy script that you can run in WLST to capture or display this information. You can download the script here.

########## USER SETTINGS ############
# connection to WebLogic instance
username='<weblogic_user_id>'
password='<weblogic_password>'
wlsUrl='t3://<hostname>:<port>'
# milliseconds to wait between polls to JMS queues
sleepTime=5000;
# comma-delimited list of managed servers hosting JMS services to query.
includeServer = ['jms_server'];
# comma-delimited list of JMS servers (note: not managed servers!) to query.
includeJms = ['AL1Server'];
# comma-delimited list of JMS destinations to query.
includeDestinations = ['IdentifierReq','PresenterReq','AssemblerReq','DistributorReq','ArchiverReq']
#ReceiverReq,ReceiverRes,PubNotifierReq,BatcherReq,SchedulerReq,PublisherReq
# path/file name of logfile to write output
logfilename = 'jmsmon.csv';
# Logging output options:
# 0 - log to screen and file
# 1 - log to file
# 2 - log to screen
logoption = 0
############ END USER SETTINGS ###########

import time
from time import gmtime, strftime

def getTime():
    return strftime("%Y-%m-%d %H:%M:%S", gmtime())

def monitorJms():
    servers = domainRuntimeService.getServerRuntimes();
    if (len(servers) > 0):
        for server in servers:
            serverName = server.getName()
            if serverName in includeServer:
                jmsRuntime = server.getJMSRuntime();
                jmsServers = jmsRuntime.getJMSServers();
                for jmsServer in jmsServers:
                    jmsName = jmsServer.getName();
                    if jmsName in includeJms:
                        destinations = jmsServer.getDestinations();
                        for destination in destinations:
                            destName = destination.getName();
                            destName = destName[destName.find('@')+1:];
                            if destName in includeDestinations:
                                try:
                                    if (logoption < 2):
                                        f.write("%s,%s,%s,%s,%s,%s,%s\n" %(getTime(),serverName,jmsName,destName,destination.getMessagesCurrentCount(),destination.getMessagesHighCount(),destination.getConsumersCurrentCount()));
                                    if (logoption == 0 or logoption == 2):
                                        print("%s\t%s\t%s\t%s\t%s,%s\t\t\t%s" %(getTime(),serverName,jmsName,destName,destination.getMessagesCurrentCount(),destination.getMessagesHighCount(),destination.getConsumersCurrentCount()));
                                except:
                                    if (logoption < 2):
                                        f.write('ERROR_DATA\n');
                                    if (logoption == 0 or logoption == 2):
                                        print('ERROR_DATA!');

connect(username, password, wlsUrl);
if (logoption < 2):
    f = open(logfilename,'a+');
    f.write('Time,ServerName,JMSServer,Destination,Msgs Cur,Msgs High,ConsumersCur\n');
if (logoption == 0 or logoption == 2):
    print 'Time\t\t\tServerName\tJMSServer\tDestName\tMesg Cur,High\tCons. Cur Count';
try:
    while 1:
        monitorJms();
        if (logoption == 0 or logoption == 2):
            print('--');
        java.lang.Thread.sleep(sleepTime);
except KeyboardInterrupt:
    if (logoption < 2):
        f.close();

This script will output, either to a file (as comma-separated values) or to the terminal (as formatted output), a listing of each of the desired JMS servers and queues with the message depth, high water mark, and consumer count. To configure it for your environment, drop the contents above into a file called jmsmon.py in your [ODEE_HOME]/documaker/j2ee/weblogic/oracle11g/scripts folder, then add a shell script to execute it, which is a simple file with these commands:

#!/bin/sh
. ./set_middleware_env.sh > /dev/null
wlst.sh jmsmon.py

Edit the .py file and adjust the settings as necessary. You'll notice that the user settings are contained at the top of the file. The only settings you must change are the username, password, and WebLogic connection URL (server/port). You can optionally change the settings for includeServer, includeJms, and includeDestinations; each of these is a comma-delimited array of names that you want polled and included in the results. If you have multiple JMS managed servers, add them to includeServer. If you have multiple JMS servers, add them to includeJms. You can specify which destinations are included by adding them to includeDestinations; note that this list is used for all managed servers and JMS servers. In this way, if you have a clustered configuration or multiple assembly lines, you can capture the statistics for all of them using this script. Note that the default setting logs to both screen and file: the screen uses tab-formatted output, while the file output is comma-separated values for analysis in a software package like Excel.
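Once you have collected a jmsmon.csv, a few lines of standard Python can summarize it for trend analysis. A sketch (assuming the column layout the monitoring script writes; the function name is mine):

```python
import csv
from collections import defaultdict

def high_water(csv_lines):
    """Per-destination peak of the 'Msgs High' column from the
    jmsmon.csv file the monitoring script writes, whose header is:
    Time,ServerName,JMSServer,Destination,Msgs Cur,Msgs High,ConsumersCur"""
    peaks = defaultdict(int)
    for row in csv.DictReader(csv_lines):
        dest = row["Destination"]
        peaks[dest] = max(peaks[dest], int(row["Msgs High"]))
    return dict(peaks)

# Hypothetical sample rows in the script's output format:
sample = [
    "Time,ServerName,JMSServer,Destination,Msgs Cur,Msgs High,ConsumersCur",
    "2024-01-01 10:00:00,jms_server,AL1Server,AssemblerReq,120,400,1",
    "2024-01-01 10:00:05,jms_server,AL1Server,AssemblerReq,90,410,1",
    "2024-01-01 10:00:05,jms_server,AL1Server,IdentifierReq,2,48,1",
]
print(high_water(sample))  # {'AssemblerReq': 410, 'IdentifierReq': 48}
```

The same pattern extends to averaging current depth or charting consumer counts over time; anything Excel can do with the CSV, a short script can automate.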
The script is meant to be executed during a load test, usually of at least 100 transactions, to produce useful data for analysis. Run the script and start your test. While the load test is underway, you will see the current message count and high-water mark ramp up considerably on the queues feeding workers that typically take more time to complete a unit of work, since a backlog builds there. In my particular test case, I'm running 1,000 transactions, all of which are routed for manual intervention and so will not proceed beyond the Assembler worker. If I modify the script to query only the Assembler queue, we can see the number of messages waiting. This test tells us that the Assembler is pumping through around 125 transactions every 5 seconds or so, with a single Assembler instance running. I happen to know that these are relatively complex transactions, and this particular system is a virtual machine running on a laptop with the database, application server, and processing services consolidated into a single VM, so my performance expectations are low. By examining the consumer count (1), we know that the load-balancing algorithm built into ODEE is not kicking in under the default configuration. The load-balancer configuration allows the Scheduler worker to query each worker at regular intervals. If the worker responds within a specified time frame, it is deemed idle; if it does not, it is deemed busy. After a predefined number of busy responses, the Scheduler starts up another worker instance (or thread pool, depending on the type of worker), as long as the configured maximum has not been reached. In the example above, if I were unhappy with the amount of time taken to run this batch of jobs, I could lower the threshold for load balancing to kick in, or I could preconfigure a higher number of instances at startup.
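The busy-threshold behavior just described can be sketched as a toy model (this illustrates the described behavior only, not Documaker's actual implementation; all names and parameter values here are hypothetical):

```python
def scheduler_poll(busy_streak, worker_busy, busy_threshold, instances, max_instances):
    """One Scheduler polling cycle: count consecutive 'busy'
    responses and, once the threshold is hit, start another worker
    instance, up to the configured maximum."""
    busy_streak = busy_streak + 1 if worker_busy else 0
    if busy_streak >= busy_threshold and instances < max_instances:
        instances += 1      # spin up another worker instance
        busy_streak = 0     # reset the streak after scaling up
    return busy_streak, instances

# Simulate a worker that stays busy for six polls in a row.
streak, instances = 0, 1
for _ in range(6):
    streak, instances = scheduler_poll(streak, True,
                                       busy_threshold=3,
                                       instances=instances,
                                       max_instances=2)
print(instances)  # scaled from 1 to 2 after the third busy poll
```

The two tuning levers mentioned above map directly onto this model: lowering the threshold makes scale-up happen on a shorter busy streak, and preconfiguring more instances raises the starting count.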
In either case, the goal is to prevent worker starvation across the assembly line by having enough workers to satisfy the demand, while balancing this within the confines of the processing cluster. I reviewed the Identifier queue figures in another run: the high-water mark for messages in this queue was under 50 and the current message count was very low, meaning the Identifier was keeping up with demand. There is no predetermined performance configuration that will meet all needs, since each implementation is different, but this exercise will give you the information to determine how to configure ODEE for your implementation and environment. Good luck! Sources: High Availability Guide for FMW; JMS Configuration for WebLogic



How to Permanently Customize Documaker Interactive Columns

If you have Documaker Enterprise Edition deployed at your site, chances are you're also using the Documaker Interactive component of this suite. Documaker Interactive (DI) provides a prebuilt application for creating new document transactions and processing manual documents from batches or real-time transactions, while giving you the power and flexibility of attaching documents from WebCenter Content and using a SOA/BPEL-enabled approval workflow. All this and the power of a multi-lingual interface to boot! All in all, it's a very powerful tool for your customer communication use cases. There is a common requirement among the user base that hasn't (yet) made it into the product: the ability to permanently set the inbox column display. What exactly am I talking about? When a user enters DI, they are usually presented with the Inbox, which displays transactions they have created or that have been assigned to them. The display includes some default columns, which are displayed in the language of the user's OS region/locale, so long as it's one of those supported by Documaker Enterprise. The default looks like this: Note: I prefer the Alta skin, which is not the default for DI. You can change your default skin by clicking User Preferences in the top right, and selecting "alta". You can set your Time Zone here as well. As you may or may not be aware, you can customize the column display by selecting View > Columns > Manage Columns. This will present a dialog in which you can hide or show any of the myriad data columns that are available in DI. But here's the catch: the changes here apply only to the user that makes them, and only for the current session. Log out and come back later, and you're back to the default again. Perhaps you have a business requirement that has mapped some data into the TRNCUS* columns, and you always want to display those so users can sort their transactions. How do you do it? Read on! Before we get started, a few housekeeping items.
This was tested with ODEE 12.6.2 on Oracle Enterprise Linux 7, attached to an Oracle 12c database. Don't have a system? No problem, check out my previous posts on how to set up your own system, here or here. Keep in mind that these instructions will modify your deployed application, so if you upgrade or accept a patch in the future, it may overwrite these changes and you'll need to reapply them. As always, make a backup of any changed components in case you need to revert your changes. Let's get started! Locate the DI deployable file -- this is the idm.ear file that's created when you ran the ODEE installer. This file is located in the ODEE Home (aka "ORACLE_HOME") specified during the installation. Navigate your file system to ORACLE_HOME/documaker/j2ee/weblogic/oracle11g/idocumaker_correspondence. (Note: if you're using DB2 or SQL Server, use that directory name in place of oracle11g.) Inside this folder, you'll find idm.ear. Make a backup copy of this file. I highly recommend doing the following procedure on a Windows system where you can use the 7zip file explorer to navigate the EAR file and its contents. Otherwise you'll need to unzip the EAR and WAR file, locate the JAR file inside, explode that, edit a file, and then rebuild the JAR, WAR, and EAR. That's a lot of work. With 7zip, you can just navigate the contents of the EAR file. Open idm.ear with 7zip, then double-click iDocuMaker_adf_main_ViewController_webapp1.war. That will open; then navigate to WEB-INF/lib. In here, you'll find oracle.idocumaker.ui.jar. Double-click that, and then navigate to components. Finally, locate Correspondence_Inbox.jspx, then right-click it and select Edit. This will open the file in Notepad. This is a rather large file that defines the basic Inbox view using a custom ADF tag library. You don't need to know ADF syntax to do what we're going to do, so no worries!
Scroll down a bit in the file and locate the section that looks like this:

<af:column visible="#{attrs.idmkr_resourcebundle['INBOX']['Key3'] ? 'true' : 'false'}"
           sortProperty="#{attrs.idmkr_table_model.hints.Key3.name}"
           filterable="true" sortable="true"
           headerText="#{attrs.idmkr_table_model.hints.Key3.label}"
           id="c75">
  <af:outputText value="#{row.Key3}" id="ot10"/>
</af:column>

You'll see a lot of repeating information with minor differences, but the key component here is the visible="#{att..." bit. What this actually does is set whether or not a column is visible according to some defaults contained deep within the DI code itself. Here we aren't actually modifying the code at all; we're just going to set the expression to return false for both cases on any columns where we want to override the default behavior. For example, many implementations don't use the Key3 value, yet by default it is shown. So let's change that to look like this:

<af:column visible="#{attrs.idmkr_resourcebundle['INBOX']['Key3'] ? 'false' : 'false'}"
           sortProperty="#{attrs.idmkr_table_model.hints.Key3.name}"
           filterable="true" sortable="true"
           headerText="#{attrs.idmkr_table_model.hints.Key3.label}"
           id="c75">
  <af:outputText value="#{row.Key3}" id="ot10"/>
</af:column>

The expression syntax is delimited with #{ }, and inside those delimiters is some logic that looks for an object. If that object (in this example, an object called 'Key3') exists, then the visible attribute is set to true, otherwise it is set to false. We override that behavior by setting it to false regardless of whether the object exists. Technically you could replace the entire expression with a true or false (e.g. visible="true" or visible="false"), but keeping the expression means that if you ever want to revert back to the original, you have it right there. If you have other columns to set up to hide or show, go ahead and do those now.
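If you have many columns to flip, hand-editing gets tedious. Here's a sketch of automating the edit in Python; the helper name is mine (not part of DI), and it assumes the expressions match the exact shape shown above:

```python
import re

def hide_columns(jspx_text, columns):
    # Force the visible EL expression to evaluate to false for the given
    # column keys. Assumes expressions of the shape shown in the post:
    # visible="#{attrs.idmkr_resourcebundle['INBOX']['Key3'] ? 'true' : 'false'}"
    for key in columns:
        pattern = (r"(visible=\"#\{attrs\.idmkr_resourcebundle\['INBOX'\]\['"
                   + re.escape(key) + r"'\] \? ')true(' : 'false'\}\")")
        jspx_text = re.sub(pattern, r"\1false\2", jspx_text)
    return jspx_text
```

You'd read Correspondence_Inbox.jspx, pass its text plus the column keys you want hidden, and write the result back before rezipping the JAR/WAR/EAR.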
While you're here, you can also reorder the columns by moving around the <af:column> content -- just make sure you don't accidentally break the XML structure! You might be tempted to change the column header text too, if you were astute and noticed the headerText attribute. Don't do this. If you change it here, you will break the translation capabilities of the system. If you need to change the column headers, you'll need to do this in the DMKR_TRANSLAT tables instead. See the end of this post for more information on this! Once you're done, save the file and exit Notepad. 7zip will then detect the changed file, and ask if you want to update the archive. Select OK. Next, click the "Parent directory" button to navigate up the directory tree (this is the folder with an up arrow). Once you get to the level of the JAR file that contained the updated JSPX file, 7zip will detect the changed file, and ask if you want to update the archive. Select OK. Repeat this process, and you'll be asked if you want to save the changed WAR file, and you do, so click OK. Now you can close 7zip; you're done! Copy the new idm.ear file to the location where the original was located in ODEE_HOME. Now comes the exciting part -- deployment! Open up your WebLogic console and log in. Under Domain Structure in the left pane, click Deployments. This will open the list of deployed items in the main pane. Scroll and locate DocumakerCorrespondenceAL1. Tick the box next to this app, then click Update above the table. If you replaced the idm.ear file directly on the WebLogic server's filesystem, you can click Finish. If you put your new idm.ear file somewhere else, click Change Path next to Source Path and locate the file, then click Finish. Once that's done, you'll need to restart the managed server (JVM) where DI is running -- you can do this in the WebLogic console too, by expanding Environment > Servers in the Domain Structure pane, then clicking the Control tab in the main pane.
Tick the box next to the JVM running DI (usually this is idm_server_al1_1_1) then select Shutdown > Force Shutdown Now. You can wait a bit and refresh this page, or click the auto-refresh button and wait until the status of the JVM is reported as SHUTDOWN. Once shutdown, tick the box next to the server again, then click Start, and wait (either refresh or use auto-refresh). Once the server is back up, you can login to DI and you should see your brand-spanking-new Inbox! I hope this has been helpful - as always, if you have questions or comments, you can reply below, or visit the Documaker Community. Addendum: If you want to know where to change the column headings, fire up your SQL tool and query the assembly line schema (DMKR_ASLINE):

SELECT GROUP_ID, ID, DISPLAY, DESCRIPTION, LOCALE_ID
FROM dmkr_asline.dmkr_translat
WHERE app_id = 201 AND GROUP_ID LIKE 'INBOX.ENTITY.%'
ORDER BY GROUP_ID, LOCALE_ID

This will give you the GROUP_ID (which correlates to the "#{attrs.idmkr_resourcebundle['INBOX']['Key3']" attribute in the JSPX file -- just drop the "INBOX.ENTITY." and use the ending token, e.g. KEY3, to see the correlation). The ID indicates a LABEL or a TOOLTIP. The LABEL is the column heading, and the TOOLTIP is displayed when a pointer is held over a column. The DISPLAY and DESCRIPTION should be set to the same value. DISPLAY is what is shown in the browser UI, and DESCRIPTION is used by accessibility tools. You can make updates to the DISPLAY and DESCRIPTION here for the appropriate LOCALE_ID, and then restart the IDM server as shown above.
An example SQL statement to update KEY3 is shown below:

UPDATE DMKR_ASLINE.DMKR_TRANSLAT
SET DISPLAY = 'MyKey3Header', DESCRIPTION = 'MyKey3Header'
WHERE APP_ID = 201 AND LOCALE_ID = 'en' AND GROUP_ID = 'INBOX.ENTITY.KEY3'

If you have other languages installed, you'll need to update those as well, substituting the appropriate two-character language code:

UPDATE DMKR_ASLINE.DMKR_TRANSLAT
SET DISPLAY = 'Clé 3 En-tête', DESCRIPTION = 'Clé 3 En-tête'
WHERE APP_ID = 201 AND LOCALE_ID = 'fr' AND GROUP_ID = 'INBOX.ENTITY.KEY3'
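If you maintain headings for several locales, you can generate the statements rather than hand-editing each one. A small sketch (my own helper, assuming the DMKR_TRANSLAT table and column names shown above):

```python
def translat_updates(column_key, labels, app_id=201):
    # Generate one UPDATE per locale for a column heading. The table and
    # column names are taken from the statements in the post (an
    # assumption); escaping uses the standard SQL doubled-quote rule.
    template = ("UPDATE DMKR_ASLINE.DMKR_TRANSLAT "
                "SET DISPLAY = '{v}', DESCRIPTION = '{v}' "
                "WHERE APP_ID = {a} AND LOCALE_ID = '{l}' "
                "AND GROUP_ID = 'INBOX.ENTITY.{k}'")
    return [template.format(v=label.replace("'", "''"),
                            a=app_id, l=locale, k=column_key)
            for locale, label in sorted(labels.items())]
```

For example, `translat_updates("KEY3", {"en": "MyKey3Header", "fr": "Cle3Header"})` yields one statement per locale, ready to paste into your SQL tool.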



Do I Need to Subscribe to Java SE to Run Oracle Documaker?

Oracle introduced a new subscription program for commercial use of Java. If you're a user of Oracle Documaker, either Standard (ODSE) or Enterprise (ODEE), you may be wondering -- do I need a subscription to Java to use Docupresentment or DocFactory? In this FAQ, Oracle presents some common questions about the new subscription program. There are a few pertinent questions that apply. Let's explore! Note: if you don't have an existing ODEE implementation and are curious, this post will help you on your way. Also, if you are interested in upgrading the JRE that ships with ODEE, you can review this post, which discusses the process. What about Java SE 8? "Oracle Java SE 8 updates, which includes the Oracle JRE with Java Web Start, continues to be free for personal use, development, testing, prototyping, demonstrating and some other important uses explained in this FAQ under the OTN License Agreement for Java SE. Personal users can continue downloading the Oracle Java SE 8 JRE at java.com." Older versions of ODSE/ODEE were tested with JRE 1.6, 1.7, and 1.8, and as you can see in the above FAQ response, JRE 8 is free for personal use, development, and testing. But that doesn't answer the question: do I need to subscribe to Java? Another question asks: "I am a customer of an Oracle Product that uses Java. Does Oracle Java remain free for me? If you are a customer who has a current support entitlement to any Oracle Product that includes Java, you continue to have free access to any Oracle Java SE updates for use with that Oracle Product. See this My Oracle Support (MOS) document (requires Oracle Support login) for more information." If you're already an ODEE licensee, you probably know that the Java runtime ships with ODEE, so this answers that question for you. But this doesn't apply to ODSE, since it does not ship with a Java runtime.
There doesn't appear to be an answer in the aforementioned FAQ; however, there is another FAQ that contains this Q&A: "If I use another Oracle product that relies on the Oracle Java SE runtime, how will a Java SE Subscription affect me? If you use any Oracle product that requires Java SE, you are already licensed to use the Oracle Java SE runtime with, and for the sole purpose of running, that Oracle product. Java SE Subscription provides licensing and support if you need to use the Oracle Java SE runtime for running products not licensed by Oracle. For more information see My.Oracle.Support Note 1557737.1 - (Support Entitlement for Java SE When Used As Part of Another Oracle Product – Requires Support Login)." This answers the question directly. In the case of Oracle Documaker Standard Edition, which has components that require Java SE (specifically Docupresentment), you are licensed to use Java SE specifically for Docupresentment. As always, if you have any questions, please refer to your Oracle license sales representative or contact Oracle Support.



Securing the Documaker Dashboard

You may have noticed that the Oracle Documaker Enterprise Edition (ODEE) Dashboard application does not have any specific Ability Sets like Documaker Interactive. Nor does it limit accessibility to system administrators like the Documaker Administrator application. This is because the Dashboard is meant to be accessed by any users who have rights to the Documaker system. The Dashboard is read-only, so nothing can be changed. However, because the Dashboard can be used to view live transactions within the system, it is important to consider limiting access to the Dashboard, since personally-identifiable information (PII) can be accessed here. In this short article I'm going to show how to configure secure access for the Documaker Dashboard in two ways. It's worth noting that good security practice dictates that one start with the narrowest security scope, and limit what is transmitted where possible. In short, this means: don't grant access by default, and don't put information on a document unless it's needed. I suspect that most documents have already been amended to prevent exposure of secure information like full account numbers — when is the last time you saw something that showed an entire credit card number or social security number? Background Information At this point, let's assume that you already have an existing ODEE implementation that is hooked up to an external user repository (if you need help setting up your system you can refer to my green field guides for more information). Before moving on, let's discuss some important background information. Authentication and Authorization: simply put, authentication answers the question "who are you", and authorization answers the question "what are you allowed to do". Authentication is provided through any number of mechanisms; a common example is challenge-response (the system challenges you for a password, you respond with what is hopefully correct, and you are now authenticated).
The second part of this equation is the authorization — now that the system knows who you are, it can determine what you are allowed to access, or what functions you can perform. When ODEE is deployed, a WebLogic domain is created — which is the collection of resources administered in a bundle. The domain has a security realm, which contains all the users and groups and security policies applicable to the domain. When installed, the default configuration is a single user that serves as the Documaker system administrator. You can also install sample users and groups to illustrate how to use the system. These users and groups are all contained within the realm. Most implementations never use the realm to house users and groups, and instead are configured to use an existing user repository like Active Directory. You can read my write-up on how to configure WebLogic and ODEE to use Active Directory here. Where ODEE is concerned, all authentication is delegated to WebLogic. That is, ODEE knows nothing about users or their passwords. All of this is handled by WebLogic. What you need to know is that authorization is handled at two levels: the domain, and through a special convention in ODEE, the Ability Set. Each of these aspects is controlled by a user's group membership — WebLogic considers a user's group membership for authorization, and ODEE aligns groups to specific functions within its applications ("Abilities"), which are grouped by roles ("Ability Sets"). In this manner, you can map a user group to an ability set commensurate with the group members' roles within the application. Put another way, the "Drafter" ability set grants functions within Documaker Interactive that are needed by users to create documents. This ability set is then mapped to user groups. Did you notice how I mentioned the first authorization level at the domain and gave no additional details? There's a reason for that!
The bit about ability sets is all part of the base product and is already well-documented, but the domain-based authorization isn't given much attention in the documentation. This is partly because it's not specific to the Documaker product; it's part of a larger Java Security specification that's implemented by WebLogic, which you can read about here (fair warning: it's quite dry and technical in nature). Keep in mind that even though this discussion is specific to the Documaker Dashboard, it applies to any application deployed to WebLogic. I am discussing the Dashboard here because it doesn't have any administrative interface for its authorizations. There are two models for administering domain-based authorization for the Documaker Dashboard: deployment descriptors or the WebLogic Console. The default model is deployment descriptors, which are a set of files contained within the deployable artifacts that are generated and installed when you install ODEE. A deployable artifact is an EAR or WAR file, which is basically a compressed directory structure with the application files in it — a zip file. The other option is the WebLogic Console, which gives you a web-based administrative interface for managing the security. Let's start with the latter, since it's somewhat easier. Note: these instructions assume you've already deployed ODEE into a sandbox environment. Because the deployments are done automatically during the installation process, describing how to change these settings during the deployment is beyond the scope of this particular instruction — but it's possible to do it if you're rolling your own installation. Configure for Administration through WebLogic Console First, undeploy the existing DocumakerDashboard application. This is necessary because the security model is set when the application is deployed. Login to your WebLogic console. Navigate to Deployments, tick the box next to DocumakerDashboard, and then click Delete. Wait!
Make sure you note the Targets for the application. The default is dmkr_server_al1_1, but your implementation might be different. Now, deploy the existing Documaker Dashboard application. You should still have the original deployable file in the ODEE_HOME. Click Install, and use the path navigator to locate the ODDF_Dashboard.ear file. The default location is <ODEE_HOME>/documaker/j2ee/weblogic/<database>/dashboard. Tick the radio button next to the EAR file, and click Next. Accept the default (install as application) and click Next. On the deployment targets screen, check the appropriate servers for targeting the deployment. Again, the default is the managed server dmkr_server_al1_1. Tick the necessary boxes and click Next. In the General section, change the Name to DocumakerDashboard (the default is the name of the deployable, which is ok, but not preferred). In the Security section, notice the default is DD Only. This should be changed to Custom Roles (which leaves policies to be defined in the deployment descriptors) or Custom Roles and Policies (which allows both to be administered in WebLogic) depending on your needs. The most likely selection is Custom Roles and Policies, which is what we'll demonstrate below. Tick the radio button and click Finish. Administration through WebLogic Console Once you've set up for administration through the WebLogic Console, you can login to the Console and navigate to Deployments > DocumakerDashboard > Security. Here you will see two sub-tabs, Roles and Policies. On the Roles tab, we want to create a new role, so click New. The only option we need to supply is a role name, and we'll use valid-users (this happens to mimic the default configuration). The default Authorization Provider XACMLAuthorizer should be selected. Click Save and then switch to the Policies tab. On the Policies tab, we will define a policy that allows access to the dashboard only by members of a specific group.
Click Add Conditions. Select Predicate List: Group, then click Next. Enter a Group Argument name and click Add. The name you specify here is a valid group name. If you don't have a group set up yet, you can put "Documaker Administrators" for now. Click Finish. Click Save. You can add multiple groups here by repeating this procedure. If you try to login to the Dashboard with the administrative user, you will have access. If you try to login with a non-administrative user (e.g. Alan Abrams or other sample users) you will be denied access. Yes, it was really that simple — so simple in fact you'll wonder why I'm even going to go into this next part, but in the interest of completeness… Oh, before we go: if you ever want to revert back to the default state of the installation, simply perform the undeploy steps, then deploy again, but this time use the DD Only option on the Security tab. Ok, moving on! Administration through Deployment Descriptors Follow the undeploy steps in the Configure for Administration through WebLogic Console section above. Locate the original deployable file in the ODEE_HOME. The normal location is <ODEE_HOME>/documaker/j2ee/weblogic/<database>/dashboard/ODDF_Dashboard.ear. Make a backup of this file. I feel this should be said twice. Make a backup of this file. I advise you to copy this file to another directory and edit there, rather than backing up somewhere else and editing the original. Copy the ODDF_Dashboard.ear file to a new directory, e.g. ~/temp. Unzip the EAR file in this directory. Remove the EAR file. Inside the ~/temp directory, unzip Dashboard_ViewController_webapp1.war into a new directory inside ~/temp, e.g. ~/temp/war. Remove the WAR file. Inside the ~/temp/war/WEB-INF directory, locate the deployment descriptor weblogic.xml. Edit the weblogic.xml file. You'll see the following bit in this file, which maps the principal name to the role name. That is, the name of a group in your repository is mapped to a role name.
All you need to do is change the principal name to the name of a group that you want to have access to the application. If you want more than one group, create duplicate <principal-name> elements.

<security-role-assignment>
  <role-name>valid-users</role-name>
  <principal-name>users</principal-name>
</security-role-assignment>

Save the weblogic.xml file. In ~/temp/war, issue the following command to rebuild the WAR file in the ~/temp directory:

$ jar -cvf ../Dashboard_ViewController_webapp1.war *

Change to ~/temp and remove ~/temp/war. In ~/temp, issue the following command to rebuild the EAR file in the ~/ directory:

$ jar -cvf ../ODDF_Dashboard.ear *

Change to ~/ and remove ~/temp. Follow the deploy steps in the Configure for Administration through WebLogic Console section above. Note: some programs will allow you to navigate the contents of zip files and edit them in place. If you associate EAR files and WAR files with such a program, you can simply edit the weblogic.xml and not have to go through the unzip/rezip process. That's it! Hopefully this made sense and will be useful for you. It's not likely that this configuration will change frequently, but if it is something that could be in flux, it might be beneficial for you to make the change to use administration through the WebLogic console, so you won't have to rebuild the deployable files any time a setting changes. As always, if you need help, comment here or over on the Documaker Community.
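Addendum: the unzip/edit/rezip round trip can also be scripted. The sketch below uses Python's standard zipfile module to rewrite the <principal-name> value inside the WAR inside the EAR, entirely in memory — treat it as an illustration of the procedure, not a supported tool:

```python
import io
import zipfile

def replace_principal(ear_bytes, war_name, old_group, new_group):
    # Rewrite WEB-INF/weblogic.xml inside the named WAR inside the EAR,
    # swapping one <principal-name> value for another. Operates on bytes
    # in memory; the caller reads/writes the EAR file itself.
    def rewrite_zip(data, member, transform):
        # Copy every entry of a zip, applying transform to one member.
        src = zipfile.ZipFile(io.BytesIO(data))
        out = io.BytesIO()
        with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
            for item in src.infolist():
                payload = src.read(item.filename)
                if item.filename == member:
                    payload = transform(payload)
                dst.writestr(item, payload)
        return out.getvalue()

    def edit_descriptor(xml_bytes):
        old = b"<principal-name>%s</principal-name>" % old_group.encode()
        new = b"<principal-name>%s</principal-name>" % new_group.encode()
        return xml_bytes.replace(old, new)

    return rewrite_zip(
        ear_bytes, war_name,
        lambda war: rewrite_zip(war, "WEB-INF/weblogic.xml", edit_descriptor))
```

You'd call it with the bytes of ODDF_Dashboard.ear and the WAR name from the post, then write the returned bytes back out and redeploy as described above.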



Configuring Documaker Enterprise in WebLogic with Active Directory Authentication

A common implementation task is to tie the authentication of users into an established user repository. Documaker is no different in this aspect of implementation, and in this article, I'm going to give you a short primer on how to configure an external LDAP repository for use with Documaker Enterprise Edition. First, a review. LDAP is an acronym for Lightweight Directory Access Protocol, a terse language used to query a server for information. In plain terms, the directory is like a phone book of information that could be users, machines, printers, or just about anything. In terms of this example, our directory service will be used to look up users and their group membership(s). A directory is a hierarchical structure of related nodes, and each node has attributes that describe it. Some nodes and attributes are specified by the X.500 data model; others can be specific to your organization. The image above is an example directory structure. In the above image we can see a few abbreviations: dc = domain component, cn = common name, ou = organizational unit, o = organization. Each of these levels is called a relative distinguished name, or RDN. When we reference a specific element in our directory, the sum of all the RDNs to locate that element gives us the distinguished name, or DN. Documaker Enterprise features the Administrator, Dashboard, and Interactive applications. These applications require all users to be authenticated - that is, users must present a credential from a trusted system so Documaker (or more specifically, the application server hosting the Documaker applications) is assured that the user is actually "Jane Doe". From there, the Documaker applications can apply authorization: what Jane Doe is allowed to do inside various applications. Where Documaker is concerned, these authorizations are collectively represented by Ability Sets, which are tied to Entities. An Entity is a group, or role.
So, when Jane Doe logs into Documaker Interactive, the application authenticates Jane (usually through a challenge/response, e.g. Jane must input her user ID and password), and then the application will learn Jane's group memberships and will look up these groups in its configured Entities. These Entities are correlated to Ability Sets, and thus the application now knows which functions are available to Jane when displaying the user interface. When Oracle Documaker Enterprise Edition (ODEE) is first installed, a default security realm is created and populated with sample users. The security realm is the source for user authentication and authorization for applications and components deployed in a WebLogic domain. The installation process also creates a set of default users and authorizations, both for administration (e.g. the "documaker" user) and for reference implementation (e.g. users such as "Alan Abrams" and "Linda Lamas"). This is great for establishing a sandbox environment, but unless you have a very small user base you're not likely to import your users into this security realm. Luckily, WebLogic's security framework provides the ability to define multiple authentication sources. This short instruction set will illustrate the necessary configurations and settings to link an Active Directory server to a WebLogic security realm, and then to configure the system so that ODEE's applications will function and domain users will be able to log in. If you have any questions, you can comment below, or over at the Documaker Community. As always, if you need to set up an ODEE sandbox, you can review my instructions here or here. Keep in mind that if you decide to try this in your environment with an existing Active Directory configuration, you'll want to review the settings for locating groups and users with your Active Directory system administrator to make sure they are correct for your environment.
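Before we configure anything, a quick illustration of the DN mechanics described earlier — these are plain string helpers of my own to make the structure concrete, not an LDAP API:

```python
def distinguished_name(*rdns):
    # A DN is just the sum of its RDNs, most specific first,
    # given as (attribute, value) pairs.
    return ",".join("%s=%s" % (attr, value) for attr, value in rdns)

def base_dn_from_domain(domain):
    # Derive a Base DN from a dotted AD domain name: one dc= component
    # per dot-separated part.
    return ",".join("dc=" + part for part in domain.split("."))
```

So cn=weblogic,cn=Users,dc=testcompany,dc=local is just the sum of its RDNs, and the Base DN for testcompany.local is dc=testcompany,dc=local — useful to keep in mind when you gather the connection details below.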
Configuring WebLogic Security Realm with Active Directory First, obtain the following information about your Active Directory server:
- IP Address or Hostname
- Port
- Bind DN (user), e.g. cn=weblogic,cn=Users,dc=testcompany,dc=local
- Bind DN Credential (password)
- Base DN - this is the top-level grouping where users will be located. This will vary depending on how the organization has configured Active Directory. I recommend using Apache Directory Studio to connect and confirm these settings. Example: "dc=testcompany,dc=local" for the domain "testcompany.local". You may also have additional dc elements if the domain has more dot-separated parts (e.g. test.mycompany.com yields dc=test,dc=mycompany,dc=com).

Armed with this information, let's configure the WebLogic security realm to attach to the Active Directory server. Login to WebLogic Console and navigate to Security Realms > myrealm > Providers. Click New, and name your Provider (e.g. AzureAD). Select ActiveDirectoryAuthenticator as the type and click Ok. Click your Provider in the list of Providers, then select the Provider Specific tab. Enter the settings for your Active Directory server:
- Host = IP address or Hostname
- Port = Port
- Principal = Bind DN
- Credential = Bind DN Credential
- Confirm Credential = Bind DN Credential
- User Base DN = Base DN
- User From Name Filter = (&(cn=%u)(objectclass=user))
- User Search Scope = subtree
- User Name Attribute = sAMAccountName
- User Object Class = organizationalPerson
- Group Base DN = Base DN (note: you may augment the Base DN if the organization has additional tree structure around groups. If the groups are organized separately from users, e.g. in an organizational unit (ou), then you will want to add this to your Base DN, e.g. ou=my-department,dc=test,dc=mycompany,dc=com)
- Group From Name Filter = (&(cn=%g)(objectclass=group))
- Group Search Scope = subtree
- Group Membership Searching = unlimited
- Max Group Membership Search Level = 0
- Static Group Name Attribute = cn
- Static Group Object Class = group
- Static Member DN Attribute = member
- Static Group DNs from Member DN Filter = (&(member=%M)(objectclass=group))

Click Save. WebLogic will validate the connection -- if a failure occurs, check your Host, Port, Principal, and Credential settings. Click the Common tab and review the Control Flag setting. It should be set to OPTIONAL at the moment. This will allow WebLogic to query the LDAP repository to get user and group information but does not require a successful authentication of an LDAP user to succeed. Restart the AdminServer. At this point, the chance of failure is minimal since we have configured the Active Directory Provider Control Flag as OPTIONAL, which means it doesn't have to succeed. The DefaultAuthenticator Control Flag should be set to SUFFICIENT. Resolve any startup errors if present. Validate the configuration by logging into WebLogic console and navigating to Security Realms > myrealm > Users and Groups. In the Users tab, you should see users listed from DefaultAuthenticator and your provider (e.g. AzureAD). In the Groups tab, you should see groups listed from DefaultAuthenticator and your provider (e.g. AzureAD). This verifies that we are pulling groups and users from Active Directory. Now that we have configured our WebLogic domain for Active Directory, we have to configure the Providers in order and with specific Control Flags. The reason for this is two-fold: one, ODEE web applications only have visibility into the first Provider in the list, and two, most organizations will want to limit the user/group containers only to Active Directory.
That is, they do not want users/groups coming from multiple sources, especially a source that is not enterprise-wide, like the DefaultAuthenticator. The key difficulty in this configuration, which specifically limits WebLogic user/group validation to Active Directory, is that the users needed to run WebLogic services and log in to the console must be defined in Active Directory before the services will run, and any failure to connect to Active Directory means the services will not start. Before proceeding, make sure that your Active Directory has a user with sufficient rights to run services in your domain. What does this mean? You must:

- Create an Active Directory user that will have the ability to run WebLogic services. In this doc, we will refer to this as the adweblogic user.
- Create an Active Directory group that contains users with the ability to run WebLogic services. In this doc, we will refer to this as the weblogic admin group.
- Add the adweblogic user to the weblogic admin group.

Now that you have this configured, you can continue.

1. Back up your existing configuration in case something breaks. This means backing up your config.xml, found in DOMAIN_HOME/config/config.xml.
2. Log in to the WLS Console and navigate to Security Realms > myrealm > Roles and Policies. Expand Global Roles and click Roles. Click the Admin role, then click Add Conditions. Configure accordingly: Predicate List: Group. Click Next. Group Argument Name: weblogic admin group. Click Add. Click Finish. Click Save.
3. Stop the AdminServer.
4. Update DOMAIN_HOME/servers/AdminServer/security/boot.properties. Enter the adweblogic user and credential in clear text (they will be encrypted after you start the server).
5. Start the AdminServer.
6. Log in to the WebLogic Console with the adweblogic user. If this does not work, check your configuration.
7. Navigate to Security Realms > myrealm > Providers. Click your Active Directory provider, change its Control Flag to REQUIRED, then click Save.
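When updating boot.properties, the file is just two clear-text lines that WebLogic encrypts on the next start (the password shown is a placeholder, not a real credential):

```properties
# DOMAIN_HOME/servers/AdminServer/security/boot.properties
# Clear-text values; WebLogic encrypts them the next time the AdminServer starts.
username=adweblogic
password=<your-adweblogic-password>
```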
Use the breadcrumb trail to navigate back to the list of Providers. Click Reorder, tick the box next to the Active Directory provider, then click the "to top" button so that your provider is listed first. Click Ok. Now, you should:

1. Shut down the AdminServer.
2. Shut down any Managed Servers.
3. Shut down the NodeManager.
4. Restart the NodeManager, connect with WLST, and start the AdminServer.
5. Start any Managed Servers.

Configuring ODEE with Active Directory

To use the ODEE applications with Active Directory, some additional configuration is required. You will need a SQL tool (SQL Developer, SQL Server Management Studio, or any tool that can execute SQL statements).

Understanding the configuration

The ODEE apps use several tables in the DMKR_ADMIN schema to determine which applications and functions users can access. The tables are populated during the initial installation, and the JPSQuery tool is used to correlate entries in the DMKR_ENTITIES table to users in the default Security Realm. However, when migrating to Active Directory, we have no such users (in fact, if you attempt to log in to the applications with the documaker user, you will find that you cannot). It is important to understand the relationship between the tables:

- DMKR_ENTITIES - lists the groups that have been associated with one or more ability sets using Documaker Administrator. It also lists any users that have successfully accessed the system -- note that ability sets are mapped only to groups, not users.
- DMKR_ENTITY_TO_ENTITY - defines the relationship between users who have logged into the system and any groups with mapped ability sets.
- DMKR_ENTITY_ABILITYSET - defines the relationship between a group and its mapped ability set(s).

So, when a user attempts to log into web applications deployed to WebLogic, the security provider is queried to authenticate the user credential and then obtain the user's group membership. The web application, e.g.
Documaker Administrator, will determine if the user is a member of any group that has been given the appropriate ability set for that application. If the user has membership in a group with a mapped ability set, entries are created in the DMKR_ENTITIES and DMKR_ENTITY_TO_ENTITY tables, and the user is granted access to the application.

Performing the configuration

Use your SQL tool to connect to the DMKR_ADMIN schema and run the following queries. You may need to modify them slightly for your target database -- these examples were done with Microsoft SQL Server, using the default schema DMKR_ADMIN. Note: replace <Your_AD_Documaker_Admin_Group_Name> with the name of a group in Active Directory that contains administrators of the Documaker system, and replace <Your_AD_Documaker_Admin_User_Name> with the name of a user in Active Directory that is a member of your Documaker admin group.

UPDATE DMKR_ADMIN.DMKR_ENTITIES SET NAME = '<Your_AD_Documaker_Admin_Group_Name>' WHERE NAME = 'Documaker Administrators';
COMMIT;

UPDATE DMKR_ADMIN.DMKR_ENTITIES SET NAME = '<Your_AD_Documaker_Admin_User_Name>' WHERE NAME = 'documaker';
COMMIT;

DELETE FROM DMKR_ADMIN.DMKR_APPRLEVELSENTITIES;
COMMIT;

DELETE FROM DMKR_ADMIN.DMKR_ENTITIES WHERE NAME NOT IN ('<Your_AD_Documaker_Admin_User_Name>', '<Your_AD_Documaker_Admin_Group_Name>');
COMMIT;

After running these queries, you should be able to run JPSQuery* and see output as follows (note: it may differ slightly -- you might see more users if you have multiple users in the Documaker Administrator group, and if you have run JPSQuery multiple times it may say "updated" the first time you run it after the SQL queries and "found" in subsequent runs). Restart the managed server(s) running the Documaker applications and you should now be able to log in to Documaker Administrator with an Active Directory user that is a member of the Documaker Administrators group in Active Directory.
You can then proceed to link up "Enterprise Groups" (groups from Active Directory) with "Ability Sets", which are preconfigured roles within Documaker applications. Well done! *The JPSQuery application is installed by default to the AdminServer in the WebLogic domain, which runs on port 7001, so you would access it via a browser to https://<admin-server-host>:7001/jpsquery. Your installation may be different.  


Inside Document Automation

A Sample Documaker Web Services Interface

What Is This?

It's a very simple demonstration of using SOAP messaging with just about any modern browser that supports XmlHttpRequest. The gist of it is to demonstrate capturing user data to augment system data, and then using that data to request a document for subsequent visual editing by the user.

How Do I Use It?

To use the dws Client, you need to configure/edit it, and then deploy it to your environment. Open the file in your favorite text editor, e.g. Notepad. PS you should be using Sublime Text, but it's your choice.

The files: Click here to download the file. Use this to reference the original file for line numbers, etc.

Assumptions

- You have a working Documaker Enterprise environment on a Linux machine that uses WebLogic. You can adapt the instructions if you are running on Windows (really the only differences are the simply-scripted creation of the web.xml file and the file copying details).
- If you aren't using Documaker Enterprise, and you're using iDocumaker/Docupresentment/EWPS, then this is not for you. However, you will note that the only real differences are the SOAP messaging and the endpoints, so you can probably figure that out. I will probably come back here some day and make another version for EWPS.
- This assumes an Interactive workflow. What does that mean? It means you're going to be using Documaker Interactive to allow the user to go mess with the document before it's published. Wait, do I have to do that? No, you don't. You could:
  - ...Present a PDF to the user as a result of their request. Around line 44 set var returnType = 'Attachments' and then go around line 278 to change what happens when the response is received (e.g. retrieve the PDF from the response, decode it, and present it to the user); or...
  - ...Make it a straight-through process with no user intervention.
Around line 287 you could remove the redirection to Documaker Interactive and just add something like alert('Your document was submitted') if the user doesn't need to do anything else; or... ...Do both of the above, depending on data or the response from Documaker, e.g. if it generated a PDF, show it; if it was routed for user entry, redirect there. Or, do nothing!

Basic Instructions for Technical Wizards

The basic configuration goes like this:

1. Edit the HTML form to include the input elements you want the user to enter. Be sure to give each element a unique id attribute.
2. Set the variable dwsUrl to the correct endpoint for your DWS Publishing Service.
3. Set the variable diUrl to the correct endpoint for your Documaker Interactive application.
4. Deploy to your J2EE container (e.g. WebLogic). Note: you could deploy this to another container or server, but then you need to be aware of CORS. I'm not getting into that here, so just keep it simple and deploy to the same container, alright?

Step-By-Step Configuration and Deployment for Those Who Like Words

Around line 27 in the file you will see the definition of HTML input tags. You can add as many or as few as you like. The only thing you need to keep in mind is that each one needs a unique id attribute, like the examples. You'll see there are two, lastname and firstname. Add however many you want, and give them each a unique ID.

Around line 35, you will find the settings you need to make. These are pretty easy -- really, you just need to change the IP address to the IP of your environment on the lines that say var dwsUrl and var diUrl. You don't need to change anything else about those lines.

Here's where it gets interesting, and this is the glue between this sample client and your Documaker installation: the extract data! If you want to use a specific extract file, you can - just make sure it corresponds to what your MRL is expecting.
I have presented the extract data here as one big string, so you can read it and replace/add anything you might need. The key is around line 77 - you will see my example comments - where you need to replace the data in the extract string with the user-entered data. Look at my example and follow it; you can copy and paste it as you need. Notice that you're referencing your user inputs by their unique id attributes.

You can also add some fancy HTML or CSS or whatever you want between lines 2-31 if you want to pretty it up. This is by design a VERY light and bare-bones example, so changing this is optional but recommended.

Save the file somewhere - it's time to deploy! Open a terminal on your Documaker environment's server. Copy these commands, paste them into the terminal, then press ENTER (the web.xml written by the heredoc is a minimal deployment descriptor that simply serves index.html):

cd /home/oracle
mkdir demohtml
mkdir demohtml/WEB-INF
cat > demohtml/WEB-INF/web.xml << EOF
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
  </welcome-file-list>
</web-app>
EOF

Now you can copy over your saved file to the environment and place it inside the /home/oracle/demohtml directory as index.html. There are a number of ways to accomplish this, depending on where your system is and how you normally put files there. If it's running as a VM, you might have a shared directory. You can use scp if you're into the whole command-line thing. It's up to you - the key is to make sure the file is named index.html and that it is in /home/oracle/demohtml. Just FYI: you can create multiple copies of this file in the /home/oracle/demohtml directory, in case you want to try different things.

Time to create the deployment in WebLogic. This needs to happen only once, no matter how many copies of the HTML file you have. Open your browser to the WebLogic console and log in. Navigate to Deployments. Click Install. Locate the /home/oracle/demohtml directory, tick the radio button next to it, and click Next. If you can't see it, use the links in Current Location to get to it, or just enter /home/oracle/demohtml in the Path and click Next.
Accept the default Install this deployment as an application and click Next. Select the server(s) on which to deploy the application, e.g. AdminServer, and click Next. Accept the defaults and click Finish. Now the application has been deployed -- time to open up your browser and try it out.

If you need to change the HTML, you can redo the steps above to edit and copy your file, but you don't have to redo the deployment. Just copy the new file out there and refresh your browser page. If you run into issues, you can put your browser into "developer mode" and view the console to see if there are any useful error messages. Wait, you aren't getting console messages? Have a look around line 35 and make sure var debug = true; so you can get some debugging information in your console. Lastly, if you need some assistance, you can comment here or over at the Oracle Documaker community.
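Since the sample presents the extract data as one big string with user-entered values spliced in (around line 77), the idea can be sketched generically like this -- note that the {placeholder} syntax and the XML tag names below are mine for illustration, not the sample file's exact markup:

```javascript
// Splice user-entered form values into an extract-data template string.
// The {lastname}/{firstname} placeholders and the tags are illustrative;
// match whatever your MRL's extract layout actually expects.
function buildExtract(template, values) {
  return template.replace(/\{(\w+)\}/g, function (match, key) {
    return Object.prototype.hasOwnProperty.call(values, key) ? values[key] : match;
  });
}

var extract = '<LASTNAME>{lastname}</LASTNAME><FIRSTNAME>{firstname}</FIRSTNAME>';
var result = buildExtract(extract, { lastname: 'Potter', firstname: 'Harry' });
// result: '<LASTNAME>Potter</LASTNAME><FIRSTNAME>Harry</FIRSTNAME>'
```

In the browser you would read the values with document.getElementById('lastname').value and so on, keyed by the unique id attributes you gave your inputs.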



Thinking about upgrading Documaker Standard?

In some recent discussions I was asked to describe the general steps for performing an upgrade of Documaker Standard Edition and when planning should start for such an upgrade. Note: Documaker Enterprise Edition shares some common upgrade tasks but is a different animal when it comes to upgrading. If you have an ODEE environment, you can read this, but know that your needs will be different!

First, let me point out that most implementations are specialized to the requirements of the organization in which the software is installed. There are often similarities, but typically no two systems are the same. As such, it can be difficult to write a one-size-fits-all guide or to provide reference documentation for cases like these. This makes it a little daunting to approach an upgrade, but it doesn't have to be. Read on!

The absolute first step is understanding what you're going to upgrade. That means taking an inventory of what you have: software and versions, hardware and specifics, Documaker implementation specifics, as well as related software systems. You need to be able to answer questions like:

- What version of Documaker are you running?
- Are you using Docupresentment?
- Do you have custom rules in Documaker or Docupresentment?
- What are your integrations with third-party software?

Ideally you have been maintaining documentation about your system over its lifetime. If your system was implemented by Docucorp/Skywire/Oracle, it is likely that you have system installation and configuration documentation that you have been maintaining (you have been, right?), and this will be immensely helpful in answering these questions -- you'll see why in a moment.

Next you need to determine the upgrade method, which is either going to be in-place or side-by-side. Side-by-side is the most common and means standing up a new implementation that runs next to the existing one.
An in-place upgrade is rarely done; it involves literally installing the new software over the existing software, and is usually done only when budgets and timelines are extremely tight and new environments cannot be procured to support the new version. Keep in mind that as soon as cutover to production is 100% complete, the old system can be retired or repurposed, so your hardware investment should not grow significantly.

It's time to conduct the upgrade assessment. This is the hardest step, as it is the most complex -- especially if you have a much older system or you haven't been maintaining your system documentation. The objective here is to document the system specifics for both the existing system and the target system.

- Review the readme/release notes for the target version and identify additions/changes that you might use or that might affect you. Note: readme/release notes are not cumulative, so if you are skipping versions (e.g. 12.4 to 12.6) you'll need to review the notes for each intervening version.
- Identify any touchpoints and integrations with outside systems. If the system is complex and far-reaching, it may be difficult to discover all the touchpoints, as some systems could be using an output without your knowledge. It's unlikely that an ODSE upgrade will change an output, but it could happen, so you need to know what to test to make sure it's still working after the upgrade.
- Identify any customizations. In an ODSE system there are two typical areas where custom code comes into play: CUSLIB and Docupresentment. There are other areas too, but these are the most prevalent. CUSLIB customizations are usually done to introduce new behavior into Documaker to meet a requirement that cannot be satisfied with the base rules and functionality. For systems that have been in place for a long time, it is common that old custom code can be replaced with base rules and functionality.
- Analyze customizations.
If you have customizations, you should analyze the functionality of the target system and determine whether you still need them. If you determine you need to keep the custom code in CUSLIB, you'll have to plan for recompiling it against the new version. Note: you could have custom rules in Docupresentment as well, but those rules rarely (practically never) need to be recompiled.

Determine the impact of changes to the configuration. This step is ambiguous in its definition, but once you have conducted the prior steps it should be pretty clear. If you are implementing new rules to replace custom code, those rules could require new configuration options, or changes to existing ones. If you are installing on a new server that has a different directory structure, you might have to change some configuration settings. The objective here is to know what needs to be changed in the configuration. In most cases, the answer is very little or nothing.

Next you need to establish your upgrade plan. This will be based on your previous analysis and needs, and depends largely on your upgrade method, implementation particulars, and corporate standards. This is typically a sequential list of tasks to be completed, who will complete them, and when. You need to account for establishing the new environment: if doing an in-place upgrade, this is not necessary; for side-by-side, this can mean a new directory on an existing server or a new physical/virtual server. The software doesn't care either way -- it depends on what your particular needs are. You also need to account for how you will perform the installation and configuration (typically: run the installation, then copy the configuration (INI files) from the existing system, and perform any necessary modifications to the configuration for the new environment -- e.g.
if you have renamed directories that are referenced in the configuration, then you need to update the configuration).

A typical strategy includes a regression test, wherein a set of test cases is executed against the existing system and the outputs retained for comparison. When the new system is installed and configured, the regression test cases are run in the new system and the outputs compared to determine whether there are any differences. The strategy may also include a smoke test to validate that the installation is successful, followed by some period of comparative side-by-side execution, which is essentially running production data through the old system and the new system at the same time, while still using the output from the old system and selectively comparing it to the output from the new system to ensure the fidelity is similar. Typically these methods are used if there are significant changes to the codebase (e.g. moving from custom rules to built-in rules) and less so for like-for-like upgrades.

So now you've created the plan... what comes next? It should be obvious at this point, but the next step is to execute the plan. If this seems daunting, don't worry - you can always contact an Oracle Consulting representative to help you through this process. We've done this many, many times, and we're the experts here, so don't hesitate to reach out.



Remixing the JSESSION ID with Documaker

If you have deployed Oracle Documaker Enterprise Edition (ODEE) into an existing environment with other WebLogic applications, it's possible that you have run into a problem. That problem usually exhibits as an inability to access Documaker applications (such as Interactive, Dashboard, or Administrator) or an inability to access another WebLogic-hosted J2EE application. Typically you might be able to access one and not the other depending on your users' particular work process. What I have found in such situations is that the conflict occurs specifically when both applications are using authentication (which is pretty typical in all applications). WebLogic uses a default cookie name (JSESSIONID) for all web applications, and when authentication is in use, the same cookie name is used -- this enables single-sign on (SSO) applications to work. Once a user has been authenticated, the applications can all use the same cookie for user authentication. This is documented in WebLogic 11g and 12c. So what happens when you have two applications that use the same cookie for authentication but have different authentication mechanisms and users? Well, one or both of those applications aren't going to work. In this case, the fix is simple: all you need to do is configure one of the applications to use a different cookie name. If you want to do this for a Documaker application, here's how. First, you need to make a choice: you can change the cookie name or the cookie path. Either should work, but I've tested specifically with changing the cookie name. What you'll be doing is changing a parameter in the WebLogic-specific deployment descriptor weblogic.xml, which is a file contained in the application deployment. Inside weblogic.xml is an XML element called <session-descriptor>. Inside this element you can set the <cookie-name> element, which should be something other than the default JSESSIONID (if it is not specified, the default is used). 
Alternatively, you can set the <cookie-path> instead, which defaults to "/". My recommendation is to set the <cookie-name>. To modify the deployment descriptor, here's what you need to do:

1. Get the EAR file for the web app you want to modify (e.g. Documaker Admin, Dashboard, or Interactive). When you installed ODEE, these were written to the filesystem inside <ODEE_HOME>, so you should find them under <ODEE_HOME>/documaker/j2ee/weblogic/<dbtype>.
2. Make a backup copy of the EAR file.
3. Open the EAR file with your favorite ZIP file manager and locate the WAR file inside (it should be at the root of the EAR).
4. Open the WAR file and locate the WEB-INF/weblogic.xml file.
5. Modify weblogic.xml and change the cookie-name or cookie-path as mentioned above.
6. Save the weblogic.xml file inside the WAR file, then replace the WAR inside the EAR file. Note: some ZIP file managers will allow you to navigate and edit files directly rather than having to unpack, edit, and rebuild your WAR/EAR files.
7. Repeat for all web apps you want to modify. Make sure your final EAR file has the same name and path as the original. You can have a backup file sitting next to it with a different name, or make the backup somewhere else.
8. Use the WebLogic console to navigate to Deployments, then select the application you updated and click Update. This should allow you to specify the deployment file (EAR) that you want to update -- it should default to the same EAR file you just edited. In the following screen click Finish. WebLogic should redeploy the EAR file you modified. Repeat for each application you modified.

Important note: if you are using a proxy such as IIS, Apache, or Oracle HTTP Server to proxy requests, you must change the WebLogic plugin configuration so that the WLCookieName parameter matches what you set above for <cookie-name>. For reference, you can see the Oracle docs. As always, if you don't have an ODEE playground you can set one up by following my blog posts.
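For illustration, a weblogic.xml with a renamed session cookie might look like the fragment below. ODMKR_SESSIONID is an arbitrary example name of my choosing, and your descriptor's root element and any other existing content should be left intact -- you are only adding or editing the <session-descriptor> element:

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <session-descriptor>
    <!-- Any non-default name avoids the JSESSIONID collision -->
    <cookie-name>ODMKR_SESSIONID</cookie-name>
  </session-descriptor>
</weblogic-web-app>
```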
If you have questions, feel free to post a comment, or head on over to the Oracle Documaker community pages.



A Simple Guide to WebCenter Content Integration

Integration Methods

WebCenter integration design depends on your specific business requirements and on what your application can support with respect to integration models. WebCenter Content (WCC) exposes its services over a wide variety of protocols (web services, JSP, Java API, RIDC, EJB, J2EE, CORBA, RMI, IIOP, ODMA, SOAP, WebDAV, and COM). Available services are listed here (there are a lot of them!). The most common integration method is persistent URLs; another is SOAP. But before you worry about the integration method, you need to know what you're integrating. More specifically, you need to understand your functional requirements and then map those to exposed WCC services. You can access all services through any of the available APIs. Once you have the requirements mapped to services, you can determine the non-functional requirements (e.g. security, integration methods, etc.) and then it comes down to performing the integration. If you want to see the SOAP WSDLs, you can log in to the Content Server as an administrator and go to Administration > SOAP WSDLs to see the available service endpoints and download WSDL files for each service. Note that SOAP requests use WSS for authentication.

Before we dig too deeply, make sure you have an existing WebCenter Content system - if you don't, you can refer to my guide for detailed instructions on setting up a system from the ground up with Documaker and WebCenter. I also have an existing guide on basic Documaker and WebCenter Content integration that provides more detail on that arena.

Example Pattern: Search/Retrieve

A typical integration pattern is to search for a document by some metadata criteria, then retrieve a selected document. With this pattern, we have several options: we can link directly to the document, or we can use a service to return the document by ID.
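To give a taste of how mechanical the persistent-URL approach is, here's a hedged sketch of composing a GET_FILE URL in code. The helper name is mine; the IdcService, dID, dDocName, and RevisionSelectionMethod parameters are the standard WCC ones used in the examples that follow:

```javascript
// Build a WebCenter Content persistent URL for the GET_FILE service.
// Pass { dID: n } when the internal document ID is known, or
// { dDocName: '...' } to fetch the latest revision by document name.
function getFileUrl(server, port, opts) {
  var base = 'http://' + server + ':' + port + '/cs/idcplg?IdcService=GET_FILE';
  if (opts.dID !== undefined) {
    return base + '&dID=' + encodeURIComponent(opts.dID);
  }
  return base + '&dDocName=' + encodeURIComponent(opts.dDocName) +
         '&RevisionSelectionMethod=Latest';
}

// getFileUrl('wcc', 16200, { dID: 2 })
//   -> 'http://wcc:16200/cs/idcplg?IdcService=GET_FILE&dID=2'
```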
Persistent URLs take this form:

http://<SERVER>:<PORT>/cs/idcplg?IdcService=<SERVICE_NAME>&PARMNAME=PARMVALUE

Search

This method is used to locate documents given a set of one or more search criteria.

Persistent URL

http://wcc:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&QueryText=dDocTitle&lt;Substring&gt;%60Arms%60

SOAP Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sear="http://www.stellent.com/Search/">
   <soapenv:Header/>
   <soapenv:Body>
      <sear:QuickSearch>
         <sear:queryText>
         dDocTitle &lt;Substring&gt; `Arms`
         </sear:queryText>
      </sear:QuickSearch>
   </soapenv:Body>
</soapenv:Envelope>

SOAP Results

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
   <SOAP-ENV:Body>
      <idc:QuickSearchResponse xmlns:idc="http://www.stellent.com/Search/">
         <idc:QuickSearchResult>
            <idc:SearchResults>
               <idc:dID>2</idc:dID>
               <idc:dRevisionID>1</idc:dRevisionID>
               <idc:dDocName>19216813916200000002</idc:dDocName>
               <idc:dDocTitle>Hogwart's Coat of Arms</idc:dDocTitle>
               <idc:dDocType>DigitalMedia</idc:dDocType>
               <idc:dDocAuthor>weblogic</idc:dDocAuthor>
               <idc:dSecurityGroup>Public</idc:dSecurityGroup>
               <idc:dDocAccount/>
               <idc:dExtension>png</idc:dExtension>
               <idc:dWebExtension>png</idc:dWebExtension>
               <idc:dRevLabel>1</idc:dRevLabel>
               <idc:dInDate>9/16/17 8:00 AM</idc:dInDate>
               <idc:dOutDate/>
               <idc:dFormat>image/png</idc:dFormat>
               <idc:dOriginalName>Hogwarts_coat_of_arms_colored_with_shading.svg.png</idc:dOriginalName>
               <idc:url>/cs/groups/public/documents/digitalmedia/mjaw/mdaw/~edisp/19216813916200000002.png</idc:url>
               <idc:dGif/>
               <idc:webFileSize>0</idc:webFileSize>
               <idc:vaultFileSize>1256024</idc:vaultFileSize>
               <idc:alternateFileSize>0</idc:alternateFileSize>
               <idc:alternateFormat/>
               <idc:dPublishType/>
               <idc:dRendition1>D</idc:dRendition1>
               <idc:dRendition2/>
               <idc:CustomDocMetaData>
                  <idc:property>
                     <idc:name>xComments</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xExternalDataSet</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xIdcProfile</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xTemplateType</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xAnnotationDetails</idc:name>
                     <idc:value>0</idc:value>
                  </idc:property>
                  <idc:property>
                     <idc:name>xIsACLReadOnlyOnUI</idc:name>
                     <idc:value>0</idc:value>
                  </idc:property>
                  <idc:property>
                     <idc:name>xLibraryGUID</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xPartitionId</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xWebFlag</idc:name>
                     <idc:value/>
                  </idc:property>
                  <idc:property>
                     <idc:name>xStorageRule</idc:name>
                     <idc:value>DispByContentId</idc:value>
                  </idc:property>
               </idc:CustomDocMetaData>
            </idc:SearchResults>
            <idc:SearchInfo>
               <idc:startRow>1</idc:startRow>
               <idc:endRow>1</idc:endRow>
               <idc:pageNumber>1</idc:pageNumber>
               <idc:numPages>1</idc:numPages>
               <idc:totalRows>1</idc:totalRows>
               <idc:totalDocsProcessed>1</idc:totalDocsProcessed>
            </idc:SearchInfo>
            <idc:NavigationPages>
               <idc:headerPageNumber>1</idc:headerPageNumber>
               <idc:pageReference>1</idc:pageReference>
               <idc:pageNumber>1</idc:pageNumber>
               <idc:startRow>1</idc:startRow>
               <idc:endRow>1</idc:endRow>
            </idc:NavigationPages>
            <idc:StatusInfo>
               <idc:statusCode>0</idc:statusCode>
               <idc:statusMessage>You are logged in as 'weblogic'.</idc:statusMessage>
            </idc:StatusInfo>
         </idc:QuickSearchResult>
      </idc:QuickSearchResponse>
   </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Retrieve

This method uses the document ID in <idc:dID>. Alternatively, you could directly reference the document's URL as shown in the <idc:url> node. If the document ID is not known, you can also use the document name along with a revision specification -- this method is the most likely to be used, since the document name may be known by external systems without any knowledge of the internal document ID set by WebCenter. Review the GET_FILE documentation here.

Persistent URL

This example shows file access using the document ID:

http://wcc:16200/cs/idcplg?IdcService=GET_FILE&dID=2

This example shows file access using the document name and a revision specification:
http://wcc:16200/cs/idcplg?IdcService=GET_FILE&dDocName=123456789&RevisionSelectionMethod=Latest     SOAP Request <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:get="http://www.stellent.com/GetFile/">    <soapenv:Header/>    <soapenv:Body>       <get:GetFileByID>                  <get:dID>2</get:dID>       </get:GetFileByID>    </soapenv:Body> </soapenv:Envelope>   SOAP Results <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">    <SOAP-ENV:Body>       <idc:GetFileByIDResponse xmlns:idc="http://www.stellent.com/GetFile/">          <idc:GetFileByIDResult>             <idc:FileInfo>                <idc:dDocName>19216813916200000002</idc:dDocName>                <idc:dDocTitle>Hogwart's Coat of Arms</idc:dDocTitle>                <idc:dDocType>DigitalMedia</idc:dDocType>                <idc:dDocAuthor>weblogic</idc:dDocAuthor>                <idc:dSecurityGroup>Public</idc:dSecurityGroup>                <idc:dDocAccount/>                <idc:dID>2</idc:dID>                <idc:dRevClassID>2</idc:dRevClassID>                <idc:dRevisionID>1</idc:dRevisionID>                <idc:dRevLabel>1</idc:dRevLabel>                <idc:dIsCheckedOut>0</idc:dIsCheckedOut>                <idc:dCheckoutUser/>                <idc:dCreateDate>9/16/17 8:01 AM</idc:dCreateDate>                <idc:dInDate>9/16/17 8:00 AM</idc:dInDate>                <idc:dOutDate/>                <idc:dStatus>RELEASED</idc:dStatus>                <idc:dReleaseState>Y</idc:dReleaseState>                <idc:dFlag1/>                <idc:dWebExtension>png</idc:dWebExtension>                <idc:dProcessingState>Y</idc:dProcessingState>                <idc:dMessage/>                <idc:dReleaseDate>9/16/17 8:01 AM</idc:dReleaseDate>                <idc:dRendition1>D</idc:dRendition1>                <idc:dRendition2/>                <idc:dIndexerState/>                <idc:dPublishType/>                <idc:dPublishState/>                
<idc:dDocID>3</idc:dDocID>                <idc:dIsPrimary>1</idc:dIsPrimary>                <idc:dIsWebFormat>0</idc:dIsWebFormat>                <idc:dLocation/>                <idc:dOriginalName>Hogwarts_coat_of_arms_colored_with_shading.svg.png</idc:dOriginalName>                <idc:dFormat>image/png</idc:dFormat>                <idc:dExtension>png</idc:dExtension>                <idc:dFileSize>1256024</idc:dFileSize>                <idc:CustomDocMetaData>                   <idc:property>                      <idc:name>xComments</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xExternalDataSet</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xIdcProfile</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xTemplateType</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xAnnotationDetails</idc:name>                      <idc:value>0</idc:value>                   </idc:property>                   <idc:property>                      <idc:name>xIsACLReadOnlyOnUI</idc:name>                      <idc:value>0</idc:value>                   </idc:property>                   <idc:property>                      <idc:name>xLibraryGUID</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xPartitionId</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xWebFlag</idc:name>                      <idc:value/>                   </idc:property>                   <idc:property>                      <idc:name>xStorageRule</idc:name>           
<idc:value>DispByContentId</idc:value>                   </idc:property>                </idc:CustomDocMetaData>             </idc:FileInfo>             <idc:downloadFile>                <idc:fileName>Hogwarts_coat_of_arms_colored_with_shading.svg.png</idc:fileName>                <idc:fileContent>XXXXXFileContentIsHereXXXXXXXX</idc:fileContent>             </idc:downloadFile>             <idc:StatusInfo>                <idc:statusCode>0</idc:statusCode>                <idc:statusMessage>You are logged in as 'weblogic'.</idc:statusMessage>             </idc:StatusInfo>          </idc:GetFileByIDResult>       </idc:GetFileByIDResponse>    </SOAP-ENV:Body> </SOAP-ENV:Envelope>
That's it! Remember - for a more detailed view of Documaker integration with WebCenter Content, refer to my guide.
References
Service guide for WCC
Simple Search Example
Introduction to WebCenter Content Services
Configuring WebCenter Content Web Services for Integration
Understanding Libraries
Understanding Folders
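As a quick aside to the Retrieve discussion above, the two GET_FILE persistent-URL forms can be built programmatically. This is a hedged sketch: the host, port, and parameter values come straight from the examples above, while the helper functions themselves are my own illustration, not an Oracle API.

```python
from urllib.parse import urlencode

# Sketch: build the two GET_FILE persistent-URL forms shown above.
# Host and port match the examples in this post; adjust for your install.
BASE = "http://wcc:16200/cs/idcplg"

def get_file_url_by_id(d_id):
    # Retrieval by internal document ID (dID).
    return f"{BASE}?{urlencode({'IdcService': 'GET_FILE', 'dID': d_id})}"

def get_file_url_by_name(doc_name, revision="Latest"):
    # Retrieval by document name plus a revision specification.
    return f"{BASE}?" + urlencode({
        "IdcService": "GET_FILE",
        "dDocName": doc_name,
        "RevisionSelectionMethod": revision,
    })

print(get_file_url_by_id(2))
# http://wcc:16200/cs/idcplg?IdcService=GET_FILE&dID=2
print(get_file_url_by_name("123456789"))
# http://wcc:16200/cs/idcplg?IdcService=GET_FILE&dDocName=123456789&RevisionSelectionMethod=Latest
```

The name-based form is usually the more practical one, since external systems tend to know the document name rather than the internal ID.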



Schedule Batches Like A Pro in Documaker Enterprise

One of the powerful features of the Documaker system is the ability to create batches of output based on specific criteria, such as transaction page count, or batch page count. You can also use physical characteristics like output size, or destination characteristics such as mailing via specialized services, or items that must be handled differently. Documaker provides a wealth of features that enhance batch processing of transactions. Enter Documaker Enterprise - and your batching world suddenly looks a bit different. Now you have the ability to schedule batches to open and close at specific times, or based on specific criteria, and you can sort on just about any piece of data available within a transaction or metadata about a transaction. Documaker Standard processes serially - that is, there are several processes that occur in a sequence. You may be familiar with the names of the executables, like GENTRAN, GENDATA, and GENPRINT. Or you may be more familiar with the rule names that are defined in a JDT file that is called by GENDATA in a single-step process. Either way, one process or rule follows another, and when a job of n transactions is received, all of the transactions are processed within that job and are staged for the next process. No transaction is left behind in this case. In the Documaker Enterprise world of scheduled batches, things operate differently. Documaker Factory workers operate independently, but transactions still follow a sequential flow through each worker, marching through the system until they are published. The workers continue doing their job, irrespective of what's happening before or after in the process flow. Recall above where I mentioned that Enterprise uses scheduled batches that open and close at specific times. What do you think happens when a large job is received with many transactions, and the transactions are still processing through the system when a batch closes? 
That's right - the batch will close and any transactions from that job that didn't get processed in time will flow into another instance of that batch. So your batch that would normally contain all transactions will now contain just the transactions that were able to be processed by the time the batch closed. What can be done about this? One solution might be to scale up the number of upstream workers that happen before batching so that you can accomplish the processing before the batch closes. Another option might be to change the time the batch closes to accommodate the processing time. But these changes might not be feasible - what if you have limited capacity to scale up, or what if you have downstream processes that you cannot reschedule, or what if loads suddenly peak and you can't accommodate the change? Or, better yet, business is booming and you have to continually monitor the processing performance to make sure batches are being closed with all transactions contained therein? Some of these solutions may be fine in your case, and if so that's good! But, if you're in the situation where none of these solutions seem like a good fit, read on!  WARNING: Technical talk from here onward! Typical Documaker Enterprise transaction flow works like this: a job comes in to the Receiver and it is recorded into the JOBS table. The Identifier takes the JOB record and breaks it into discrete transactions which are recorded into the TRNS table. Each transaction is then processed through the Assembler, the Batcher, the Distributor, and then finally the Presenter and Publisher. Each transaction has a TRNSTATUS column in the TRNS table which indicates the disposition of the transaction in the Documaker Enterprise process flow. One such status is 416, which means that a transaction has finished processing in the Assembler and is ready for batching. 
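To make the race concrete, here is a toy model of the situation just described - a scheduled batch closing while a job's transactions are still being assembled. This is purely illustrative Python, not Documaker code; all names are my own.

```python
# Toy model (not Documaker code): shows how a scheduled batch that closes
# on a timer can split one job's transactions across two batch instances.

def batch_transactions(arrival_ticks, close_at_tick):
    """Assign each transaction to batch instance 1 or 2, depending on
    whether it finished assembly before the scheduled close."""
    batch_1, batch_2 = [], []
    for trn_id, tick in arrival_ticks:
        (batch_1 if tick <= close_at_tick else batch_2).append(trn_id)
    return batch_1, batch_2

# A job of 5 transactions; the last two finish assembly after the close.
arrivals = [("TRN-1", 10), ("TRN-2", 20), ("TRN-3", 30), ("TRN-4", 70), ("TRN-5", 80)]
b1, b2 = batch_transactions(arrivals, close_at_tick=60)
print(b1)  # ['TRN-1', 'TRN-2', 'TRN-3']
print(b2)  # ['TRN-4', 'TRN-5']
```

The job's five transactions end up in two batch instances - exactly the outcome the schedule-driven close produces when assembly runs long.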
As I mentioned previously, the transactions flow through the system and go into a batch as soon as they are ready. The batch does not wait for all transactions in a job to be batched before closing - it's on a schedule, and the schedule must flow! The batch will close at the configured time even if some transactions from a job are still outstanding. The JOBS table has a database trigger on it which monitors the value of the JOBSTATUS column for each job. This trigger specifically looks for the JOBSTATUS to change to 416, which means that all of the transactions in this job have been processed through the Assembler. We can use this trigger to know when all transactions of a job have been assembled, and with that knowledge we can know when to close a batch. To put this change into effect:
1. Modify the batch(es) to have a closing date that is far in the future - I mean way in the future, so you don't have to revisit this until, say, 1 January 10191. Any new batch instances created from this batch configuration will then close well into the future: when the first transaction is routed into such a batch instance, its close date will be set far out.
2. Modify the trigger on the JOBS table as shown below. The modified trigger sets the close date for a batch to a time in the past, e.g. 1 January 2001, when the job status is updated to 416, which has the effect of immediately closing the batch and pushing it to the downstream workers. To modify the trigger, open the appropriate IDE, e.g. SQL Developer if using Oracle Database or SQL Server Management Studio if using Microsoft SQL Server. Locate the trigger by navigating to the JOBS table and using the right-click context menu in the respective tool to modify JOBS_BEFOREUPD_TRG (if using Oracle) or JOBS_AUPD_TRG (if using SQL Server).
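Before touching the SQL, it helps to see the condition the trigger evaluates in isolation. Below is a hedged Python restatement of that check - the column names come from the trigger SQL shown in this post, but the helper function itself is hypothetical, purely for illustration:

```python
# Hedged restatement of the JOBS trigger condition (column names taken from
# the trigger SQL shown in this post; this helper itself is illustrative).
def job_ready_for_416(old_status, row):
    """True when every transaction in the job has been assembled:
    the job wasn't already at 416, it has transactions, none errored,
    and processed + scheduled together account for the whole job."""
    return (
        old_status != 416
        and row["JOBTRNTOTAL"] != 0
        and row["JOBTRNSCH"] != 0
        and row["JOBTRNERR"] == 0
        and row["JOBTRNTOTAL"] == row["JOBTRNPROC"] + row["JOBTRNSCH"]
    )

job = {"JOBTRNTOTAL": 5, "JOBTRNPROC": 2, "JOBTRNSCH": 3, "JOBTRNERR": 0}
print(job_ready_for_416(410, job))  # True: 2 processed + 3 scheduled = 5 total
```

When this condition holds, the trigger can safely rewind the batch's start time and flip the job status to 416.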
Oracle Trigger Changes - add the following text into the existing body of the trigger:

IF (:OLD.JOBSTATUS<>416 AND :NEW.JOBTRNTOTAL<>0 AND :NEW.JOBTRNSCH<>0 AND :NEW.JOBTRNERR=0
    AND :NEW.JOBTRNTOTAL=(:NEW.JOBTRNPROC+:NEW.JOBTRNSCH)) THEN
  UPDATE bchs a1
     SET bchstartingtime = TO_TIMESTAMP('01-JAN-01 10.31.19 AM', 'DD-MON-YY HH.MI.SS AM')
   WHERE EXISTS (SELECT a.bch_id, a.job_id
                   FROM bchs_rcps a
                  WHERE :NEW.job_id = a.job_id
                    AND a1.bch_id = a.bch_id);
  :NEW.JOBSTATUS := 416;
END IF;

SQL Server Trigger Changes - add the following text into the existing body of the trigger:

OR ( @NEW_JOBREPLYSENT = 1 AND @NEW_JOBREPLYSENT <> @OLD_JOBREPLYSENT )
BEGIN
  IF (@OLD_JOBSTATUS <> 416 AND @NEW_JOBTRNTOTAL <> 0 AND @NEW_JOBTRNSCH <> 0 AND @NEW_JOBTRNERR = 0
      AND @NEW_JOBTRNTOTAL = (@NEW_JOBTRNPROC + @NEW_JOBTRNSCH))
  BEGIN
    UPDATE [BCHS]
       SET [BCHS].BCHSTARTINGTIME = '2000-01-01 14:00:00.000'
      FROM (SELECT BCH_ID, JOB_ID FROM [BCHS_RCPS]) a
     WHERE a.JOB_ID = @NEW_JOB_ID
       AND a.BCH_ID = [BCHS].BCH_ID
       AND [BCHS].BCHNAME = 'AWDFileCopy'
  END
END

This will update any batches that contain transactions from the job that just completed. You may need to add other conditions if you wish, such as adding the batch name (BCHNAME) into the WHERE clause of the UPDATE [BCHS] statement. Save the changes to the trigger (e.g. issue the CREATE OR REPLACE command for Oracle, or the ALTER command for SQL Server). Keep in mind that if you upgrade Enterprise Edition by migrating to a new version, or recreate your system, you will need to reapply these changes. These changes are provided herein without warranty, and by implementing them yourself you could wreck your system, so always test in a sandbox!
I've attached the full versions of the SQL statements for Oracle and MS SQL Server to this post for your reference - but if you implement this change, you should start from the triggers that are actually installed in your system. Good luck! Andy



15 Minutes to Documaker and WebCenter Content Success

So, how can you use WCC with Documaker? Documaker integrates with WebCenter Content for two reasons: pushing published documents into a repository, and pulling documents for inclusion in a document transaction. The latter happens only when Documaker Interactive, part of Oracle Documaker Enterprise Edition (ODEE), is used. The former happens as part of the Documaker Connector when using Oracle Documaker Standard Edition (ODSE), or as part of the Archiver worker process when using ODEE. This guide will focus on the specifics of ODEE integration into WebCenter Content. Some information may be relevant to ODSE integration, but that is covered in detail in the product documentation. In order to publish documents to WebCenter Content (WCC), Documaker needs only a bit of configuration information; however, this basic configuration is usually not sufficient for a full-fledged implementation, because most companies need to be able to find documents based on more than document title and date. Therefore, it becomes necessary to define additional data elements ("metadata") that describe the document in WCC, and it is also necessary to source that data in order to push it into WCC. The source of that data is a combination of: source systems, which provide initial transactional data to Documaker; user-entered data, if Documaker Interactive is used; and data that is computed in Documaker processes. To get started, first you need to plan your WebCenter configuration - review the planning documentation and then:
- Determine organizational structure (libraries and folders)
- Determine any document classes or profiles that may be needed
- Identify any global metadata elements that are needed
- Identify any document class-specific metadata elements that are needed
- Identify any special access restrictions
Then, you can configure WebCenter. Wait - you do have a WebCenter installation, right?
If not, you can check out this guide, which will take you step-by-step through installing a fully working system that includes MS SQL Server, WebLogic, WebCenter Content, and Documaker Enterprise Edition. If you have an existing Documaker Enterprise installation and you want to test-drive WebCenter, you can download a pre-installed VM. Once you have Documaker and WebCenter up and running, continue on! The two most common activities for getting WebCenter Content to match your business requirements are to organize documents into folders according to metadata values, and to define additional metadata values. To define metadata values, first review the documentation, then follow these general steps:
1. Open WebCenter Content Server as an administrator. Select Administration > Admin Applets > Configuration Manager. Run the file that downloads.
2. Click the Add button on the Information Fields tab to create a new metadata element, give the element a name, and click Ok.
3. Define the attributes of the element, such as type, value, requirements, and more, then click Ok.
4. Repeat the above two steps as necessary, then click Update Database Design. Review the changes and click Ok.
To organize documents into folders according to metadata values, first review the documentation, then follow these general steps. You will be creating a query folder, also known as a saved search. First, determine where the saved search folder will be located. Ideally you will create an Enterprise Library and/or a subfolder within that library to contain the folder.
Open the WCC User Interface and log in. Click Browse.
To create a Library, click the Create Library button. Give the Enterprise Library a unique name and description, assign a security group, then click Finish. The library will be created.
To create a search folder: click the Search button, then click the down arrow in the search bar, and choose "Standard Search".
Input the appropriate search criteria that will define your folder - e.g. choose metadata elements and values with appropriate comparison operators such as starts, matches, contains, or does not contain. Click Search.
If the results look correct, click the down arrow under Searching Documents and select Create Saved Search.
Give the Saved Search a unique name and description, then select the appropriate library(ies) to house the saved search. Click Save. The saved search will now be available under the selected libraries.
In the example screenshots below, I use the Standard Search form to filter on documents where my metadata value DocumentData1 matches the value pcl. After verifying that the search contains the documents I want, I'll click Create Saved Search... ...and then choose the enterprise library where the saved search folder will be located. To test the saved search, upload a document and set the document metadata to match the saved search criteria. In the WCC User Interface, click Browse, open the library that contains the saved search, then click on the saved search. The document(s) matching the saved search criteria appear. At this point, you'll have a working WebCenter system, ready to receive your documents from Documaker. To configure Documaker to publish into WebCenter, follow these steps:
Open Documaker Administrator and log in.
Select System 1 > Assembly Line 1 > Archiver and click Configure.
Select DESTINATION > WebCenterContent > Configuration. Set the following properties:
destination.name = oracle.documaker.ecmconnector.ucmdestination.UCMDestination
destination.ucm.connectionstring_0 = idc://<webcenter_cs>:4444
destination.ucm.importmethod = 0
destination.ucm.password = <credential password>
destination.ucm.username = <credential>
DocumentURL = http://<webcenter_cs>:16200/cs/groups/secure/documents/document/
UCM.retry.count = 0
Select DESTINATION > WebCenterContent > Defaults.
Here you can set any default metadata values for all documents archived into WCC. The property name must be a valid WCC data element. The following are created for you, and you can change these values:
dDocAccount = docfactory
dDocAuthor = documentfactory
dDocType = Document
default =
dSecurityGroup = Secure
Select DESTINATION > WebCenterContent > Mappings. Here you can set any metadata elements that will be mapped to transaction or document details using Freemarker notation. When you add metadata elements in WebCenter, you will map them here. You can use columns from any of the ODEE processing tables: JOBS, TRNS, BCHS, RCPS, and PUBS. A listing of some of the fields is in the ODEE 12.6 Administration Guide on page 543. To map new metadata values, click the + and enter the element name and mapping. Save your changes - you will need to restart ODDF for the changes to take effect. You should not delete any of the mappings that come with the system on installation, but you can change them per your requirements. Note: metadata elements created in WCC will not display an "x" in front of the field name, but in ODEE you will need to enter the element with an x as shown below. Also, be aware of any field data type and length limits that you set in WCC, and ensure that the values you map into those fields will fit; failure to do so may cause documents to fail to import.
dDocType = {PUBS.UNIQUE_ID}
default =
primaryFile = {PUBS.PUBUNIQUE_ID}.{PUBS.PUBPRTEXT}
primaryFileExt = {PUBS.PUBPRTEXT}
xDocumentData1 = {PUBS.PUBPRTEXT}
xSomeOtherField = {TRNS.TRNCUSSTR001}
That's it! Now when you start pushing documents out of Documaker and into WebCenter, you'll be able to log in and see them in your saved search folders. This screenshot shows the Documaker enterprise library, with three saved search folders: NotPCL Documents, PCL Documents, and test.
The saved search criteria for NotPCL Documents and PCL Documents are a simple "contains" and "does not contain" on the value "pcl" in the metadata field DocumentData1, which was mapped in Documaker to the extension of the published document (not very useful, but an easy demonstration of how to push values in). We can see that the PCL Documents saved search has some files in it... ...and if we look at the metadata for one, we can see that it does have the "pcl" value. I hope this has been useful for you; feel free to drop a comment here or in the Documaker community to discuss!
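The {TABLE.COLUMN} mapping notation used above can be emulated with a small sketch. To be clear, this is a hypothetical resolver written for illustration - it is not the Archiver's actual implementation, which uses Freemarker:

```python
import re

# Illustrative resolver for {TABLE.COLUMN} mapping tokens, in the spirit of
# the Archiver's Freemarker-style mappings. This is NOT Documaker code.
def resolve_mapping(template, row_data):
    """Replace each {TABLE.COLUMN} token with its value from row_data."""
    def lookup(match):
        table, column = match.group(1), match.group(2)
        return str(row_data[table][column])
    return re.sub(r"\{(\w+)\.(\w+)\}", lookup, template)

# Example: the primaryFile mapping from the post, with made-up row values.
rows = {"PUBS": {"PUBUNIQUE_ID": "100045", "PUBPRTEXT": "pdf"}}
print(resolve_mapping("{PUBS.PUBUNIQUE_ID}.{PUBS.PUBPRTEXT}", rows))  # 100045.pdf
```

The takeaway is simply that each mapped WCC field receives a string assembled from one or more processing-table columns, so the length and type limits you define in WCC must accommodate the resolved values.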



Getting Started with Documaker Enterprise & WebCenter Content with SQL Server

This guide assumes you are installing Oracle Documaker Enterprise Edition 12.6 on Windows 7 SP1 x64 using SQL Server as your target database and Oracle WebLogic. Some portions of the installation process may be applicable to other databases as well. The instructions in this guide will help you download and install the following:
Oracle WebLogic Fusion Middleware
Oracle WebCenter Content
Microsoft SQL Server 2012 Express
Oracle Documaker Enterprise Edition 12.6.0

Prerequisites
Database - SQL Server 2012 Express, using defaults with the following exceptions:
Use Mixed Mode authentication; provide a strong password for the sa user.
Use the default instance (MSSQLSERVER) rather than a named instance.
Use the Latin1_General_CS_AS_WS collation (case-sensitive with Unicode support).
Install SQL Server Management Studio (SSMS) from here.
Create a database using SSMS; name the database something appropriate; it is referred to hereafter as <MY_DB>. Note: you must enable Read Committed Snapshot or else the RCU will fail. You can turn this on during creation of the database by setting the appropriate option, or you can do it after the fact with SSMS (Database > Properties > Options > Miscellaneous > Is Read Committed Snapshot On = True). In this guide, the WebCenter Content and Documaker schemas are installed on the same database server, but in separate databases. This is not necessarily a requirement (although it is definitely a good practice). Ensure the database owner account is enabled and you know the password. This document assumes the sa user.
Java Development Kit 1.8.x
Download and install the latest JDK 1.8 from here. It must be the 64-bit version of the JDK.
Set the JAVA_HOME environment variable:
Start -> Computer -> Right Click -> Properties -> Advanced System Settings -> Environment Variables -> System Variables
Locate JAVA_HOME. If it is not present, click New -> Variable Name and enter JAVA_HOME.
Set Variable Value as the path to the JDK installation folder, e.g.
c:\proga~1\Java\JDK1.8.0_144
Windows Configuration
Refer to product documentation on acquiring and installing the Visual C++ 2008 Redistributable package. If this is not installed, the product may fail to install.
Refer to product documentation on acquiring and installing the Visual C++ 2005 Redistributable package. If this is not installed, the product may fail to start (this step is only needed for WebCenter Content).
Disable 8.3 file naming via the Windows Registry Editor (this step is only needed for WebCenter Content):
Locate HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
Set NtfsDisable8dot3NameCreation = 1
Reboot.
Installation: WebLogic, FMW, WebCenter
Download the Fusion Middleware Infrastructure Installer (1.5 GB): navigate to here, accept the license agreement, and download the Release 12c Generic installer (1.6 GB). Unzip all zip archives into a directory, referenced as <INSTALL_DIR>.
Open a command window as Administrator (Start -> Command Window -> Right-Click -> Run as Administrator).
Execute the following in <INSTALL_DIR>:
C:\> %JAVA_HOME%\bin\java -jar fmw_12.
[ Welcome ] Click Next.
[ Auto Updates ] Select Skip Auto Updates, click Next.
[ Installation Location ] Select <ORACLE_HOME> from the dropdown. Click Next.
[ Installation Type ] Select Fusion Middleware Infrastructure, click Next.
[ Prerequisite Checks ] Click Next. Note: if you have problems, resolve them.
[ Installation Summary ] Click Install. Click Next.
[ Installation Complete ] Click Finish.
Execute the following in <INSTALL_DIR>:
c:\> %JAVA_HOME%\bin\java -jar fmw_12.
[ Welcome ] Click Next.
[ Auto Updates ] Select Skip Auto Updates, click Next.
[ Installation Location ] Select <ORACLE_HOME> from the dropdown. Click Next.
[ Prerequisite Checks ] Click Next. Note: if you have problems, resolve them.
[ Installation Summary ] Click Install. Click Next.
[ Installation Complete ] Click Finish.
Run the Repository Creation Utility (RCU) in:
<ORACLE_HOME>/oracle_common/bin/rcu.bat
[ Welcome ] Click Next.
[ Create Repository ] Select Create Repository and System Load and Product Load. Click Next.
[ Database Connection Details ] Select Microsoft SQL Server, Unicode Support = Yes, and Connection Parameters. Enter the following as your connection parameters, then click Next. Click OK on the prerequisite check dialog.
Server Name = localhost
Port = 1433
Database Name = <MY_DB>
Username = sa
Password = <sa password>
[ Select Components ] Use the DEV schema prefix and select WebCenter Content, which will check other components as well. Click Next. Click OK to dismiss the prerequisite check dialog. Note: if you experience any errors, note the location of the error log and review it for remediation.
[ Schema Passwords ] Enter passwords for the schema(s) as desired. Note this password, as you will be required to enter it in the next section. Click Next.
[ Summary ] Review the summary and click Create. Watch the system load progress and click Close when completed.

Configuration: WebLogic/WebCenter
Run the Configuration Wizard in:
<ORACLE_HOME>/oracle_common/common/bin/config.cmd
[ Create Domain ] Select Create a new domain. Optionally, select the domain location (hereafter referenced as <DOMAIN_HOME>). Click Next.
[ Templates ] Select the following templates (this will pull in some additional packages that will be automatically selected), then click Next:
Oracle Universal Content Management - Inbound Refinery
Oracle Universal Content Management - Content Server
Oracle Universal Content Management - Web UI
Oracle WebCenter Enterprise Capture
[ High Availability Options ] Accept defaults and click Next.
[ Application Location ] Accept defaults and click Next.
[ Administrator Account ] Enter a password for the weblogic user and click Next.
[ Domain Mode and JDK ] Select development mode and specify the JDK (it should be set to the JDK you used to execute the installer JAR). Click Next.
[ Datasources ] Check the box for the given datasource(s) and then update the settings as shown below, then click Next.
Vendor : MS SQL Server
Driver : Oracle's MS SQL Server Driver (Type 4) Versions: Any
Hostname : localhost
DBMS/Service : <MY_DB>
Port : 1433
Username : DEV_MDS
Password : <password entered in Schema Passwords step>
[ JDBC DS Test ] All tests should complete ok. Click Next.
[ Database Configuration Type ] Select RCU Data, then use the same settings as shown in the Datasources step (except Username : DEV_STB). Click Get RCU Configuration. Click Next.
[ Component Datasources ] Click Next.
[ JDBC Test ] All tests should complete ok. Click Next.
[ Credentials ] Enter a username and password, then click Next. Note: a MOS note suggests this username should be set to sysadmin.
[ Advanced Configuration ] Select Administration Server, Topology, and Deployments and Services. Click Next.
[ Administration Server ] Choose a single Listen Address (specifically an IP address, not localhost and not All Local Addresses). Leave Server Groups as Unspecified. You may enable SSL and optionally change the Listen Ports (although for this guide I will leave the defaults 7001/7002). Click Next.
[ Managed Servers ] Choose a single Listen Address for each Managed Server. If you don't choose the actual IP address, the installer may complain later. Note: the names for the managed servers may be slightly different, but the names aren't important. Do not enable SSL until you have a functional SSL certificate. Recommended settings are shown below; do not change the group names. Click Next.
WCC User Interface Server
Server Name = wccui_server_1 (may show as wccadf_server1)
Listen address = <IP_ADDRESS>
Listen port = 16225
Enable SSL on SSL Listen port 16227
Server Group is UCM-ADF-MGD-SVR
Click Add and repeat the above, incrementing the server name by 1, with Listen Port = 7003 and SSL Port 16227.
Capture Server
Server Name = capture_server1 (may show as cpt_server1)
Listen address = <IP_ADDRESS>
Listen port = 16400
Server Group is CAPTURE-MGD-SVR
Click Add and repeat the above, incrementing the server name by 1, with Listen Port = 7004. No SSL.
WCC Server
Server Name = wcc_server_1 (may show as UCM_server1)
Listen address = <IP_ADDRESS>
Listen port = 16200
Enable SSL on SSL Listen Port 16201
Server Group is UCM-MGD-SVR
IBR Server
Server Name = ibr_server_1
Listen address = <IP_ADDRESS>
Listen port = 16250
Server Group is IBR-MGD-SVR
[ Clusters ] Name the default cluster wcc_cluster_1, leave defaults, and click Add. Name the next cluster cpt_cluster_1, leave defaults, and click Add. Name the next cluster ibr_cluster_1, leave defaults, and click Add. Name the last cluster wccui_cluster_1 and leave defaults. Click Next.
[ Server Templates ] Click Next. This is not needed for development environments.
[ Dynamic Servers ] Click Next.
[ Assign Servers to Clusters ] Select cpt_server_n in the Servers panel, select cpt_cluster_1 in the Clusters panel, and click the right arrow. Repeat for the similarly-named servers and clusters. Click Next.
[ Coherence Clusters ] Click Next.
[ Machines ] Click Add and set the machine name to wcc_machine_1. Select a listen address that is not localhost. Click Next.
[ Assign Servers to Machines ] Move all servers to the machine wcc_machine_1. Click Next.
[ Virtual Targets ] Click Next. This is not needed for development environments.
[ Partitions ] Click Next. This is not needed for development environments.
[ Deployments Targeting ] Review the AppDeployment sections under each server so you can see which applications are deployed to which managed server. Click Next.
Example: UCM_server1 contains Oracle Universal Content Management - Content Server; WCCADF_Server1 contains Oracle WebCenter Content - Web UI.
[ Services Targeting ] Click Next.
[ Configuration Summary ] If you see any warnings, correct them and then come back to this screen. Click Create. If you didn't pick IP addresses in the earlier steps, you may see a complaint from Coherence clustering. To resolve, click the Administration Server link on the left and select an IP address, and/or click Managed Servers on the left and select an IP address.
[ Configuration Progress ] Click Next when available.
[ End of Configuration ] Note the AdminServer URL, which should look like http://ipaddress:port/console. Note the IP address and port as <ADMINSERVER>. Click Finish.
Install NodeManager as a Windows Service by executing <DOMAIN_HOME>/bin/installNodeMgrSvc.cmd.
Optional: when starting the AdminServer, you will be prompted for the WebLogic admin credential. You can prevent this check by creating the <DOMAIN_HOME>/servers/AdminServer/security/boot.properties file with username=weblogic and password=<password>. After a successful startup, these values will be encrypted.
Start NodeManager:
Start -> Run -> Services.msc
Locate the "Oracle WebLogic base_domain NodeManager" service. Right-click and select Start.
Start the AdminServer:
Execute <DOMAIN_HOME>/bin/startWebLogic.cmd
A shell window will appear; when the system is ready you should see <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING.>
Start the Managed Servers. This can be done with either Enterprise Manager or the WLS Console.
To use EM: browse to <ADMINSERVER>/em (e.g. http://localhost:7001/em) and log in with the credentials from Configuration Step 6. Select each desired server and select Control > Start.
To use the Console: browse to <ADMINSERVER>/console and log in with the credentials from Configuration Step 6. Select Environment > Servers from the Domain Structure.
Select the Control tab in Summary of Servers. Select the ibr_server_1 and wcc_server_1 servers using the checkboxes and click the Start button.
Configure Inbound Refinery (IBR): open a browser to http://<server>:16250/ibr, where <server> is the host where IBR is installed. Log in with the credentials from Configuration Step 6. You can review the default settings here; the defaults should be sufficient. Click Submit. Restart the node as shown in step 35.
Configure Content Server (CS): open a browser to http://<server>:16200/cs, where <server> is the host where CS is installed. Log in with the credentials from Configuration Step 6. You can review the default settings here; the defaults should be sufficient. Click Submit. Restart the node as shown in step 35.

Installation: Documaker

Acquire the installer for Oracle Documaker Enterprise Edition 12.6. This can be obtained through either Oracle Software Delivery Cloud (OSDC) or My Oracle Support (MOS). On OSDC, search for "Oracle Documaker Enterprise Edition". Note: at the time of this writing, 12.6 was not available on OSDC. On MOS, you can search Patches & Updates for Oracle Documaker 12.6 in the Oracle Insurance Applications group. The patch number for ODEE 12.6.0 is Patch 26100748.
Run the ODEE 12.6 setup.exe extracted from the installer.
[ Welcome ] Click Next.
[ Installation Location ] Select an appropriate home directory (referred to hereafter as <ODEE_HOME>). This guide will assume the default c:\oracle\odee_1 is used. Click Next.
[ Administrator ] Set a user name and password for the Documaker Administrator web application user account. This is the user that is able to log in to Documaker Administrator and administer the application. This guide will assume the default user name documaker is used. Click Next.
[ Database Server ] Select SQL Server from the dropdown. Enter the hostname, IP address, or localhost to identify the machine where the database is located.
This guide assumes the database has been installed locally, therefore localhost is used. Enter the port where SQL Server listens for connections; the default 1433 is assumed. Enter the desired name of the database; the default IDMAKER is assumed. Note: SQL scripts will be generated that create the database and its files, so do not create the database before running the installation. Click Next.
[ Administration Schema ] This identifies the database schema that will house the administration tables for Documaker.
DB Index Folder - set to the location where SQL Server writes datafiles. In this guide we have used all defaults for installing SQL Server, so C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA will be used.
DB Folder - set to the same as DB Index Folder.
User - the default dmkr_admin should be used.
Password - enter a password. Note that this must conform to the default SQL Server password filters, so it is advisable to make it secure, as it will be problematic if you have to correct it later. Do not use dictionary words; use mixed-case alphanumerics and more than 8 characters. Try not to use special characters like @, as the installer might complain.
System ID - default to 1.
System Name - default to System 1.
Click Next.
[ Assembly Line Schema ] This identifies the database schema that will house the Documaker assembly line tables.
DB Index Folder - set to the location where SQL Server writes datafiles. In this guide we have used all defaults for installing SQL Server, so C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA will be used.
DB Folder - set to the same as DB Index Folder.
User - the default dmkr_asline should be used.
Password - enter a password. Note that this must conform to the default SQL Server password filters, so it is advisable to make it secure, as it will be problematic if you have to correct it later.
Do not use dictionary words; use mixed-case alphanumerics and more than 8 characters. Try not to use special characters like @, as the installer might complain.
System ID - default to 1.
System Name - default to Assembly Line 1.
Click Next.
[ Application Server ] Select WebLogic Server from the dropdown. Enter the credential for the WebLogic Server (default is weblogic). Enter the password for the credential. Click Next.
[ JMS ] Enter the settings for your JMS server. This will be created during the WebLogic domain creation.
Connection Class - leave as default.
Initial Context Factory - leave as default.
Provider URL - for this guide, since everything will be on one server, you can leave the hostname portion of the URL as the machine where the software is being installed. Do not use localhost, as we have already configured the AdminServer to listen on the IP address of our server.
Principal - here you can define a credential for the JMS server connection. Use the same credential as the WebLogic server (e.g. weblogic) for consistency.
Credentials - the password for the principal; re-enter where shown.
Click Next.
[ Hot Folder ] Use the default - this is the monitored folder where data files can be dropped. Click Next.
[ SMTP Email Server ] If you have an SMTP email server, you can configure the details here. If you don't have one, you can use a demonstration SMTP server for simple testing - download SMTP4DEV, extract it, and set it to run on startup. In the ODEE installer, set the server to localhost and the port to 25, then click Next.
[ UCM ] If you have a WebCenter Content Server that will accept documents, change Enable to True and follow these steps; otherwise click Next.
User - set to weblogic or any valid WCC user.
Password - set to the appropriate password for the WCC user defined above.
Connection String - if the WCC server is on this machine (assumed yes), leave as default.
If it is on another server, modify the hostname in the connection string. If you changed the RIDC (aka IntraDoc) port, set it here; 4444 is the default. Note: if WCC is on a different server, you will need to modify the WCC CS settings in that domain's config.cfg to allow connections from the server where ODEE is running:
Locate the file <DOMAIN_HOME>\ucm\cs\config\config.cfg.
Locate the IntradocServerPort setting to note the port. The default is 4444.
Locate the SocketHostAddressSecurityFilter setting and include a pipe-delimited list of allowed IP addresses. Wildcards * and ? are supported. Example: |192.*.*.*|10.24?.*.*
Optionally, locate the SocketHostNameSecurityFilter setting and include a pipe-delimited list of allowed hosts. Wildcards * and ? are supported. Example: localhost|odee.acme.com|*.acme.com|odee12?.*.*
Save changes and restart WCC CS.
Document URL - this defines the URL for CS to access the folder where documents are stored. Use the default.
[ UMS ] If you have a UMS that will be used for sending notifications, change Enable to True and enter the settings; otherwise click Next.
[ Installation Summary ] Click Install.
[ Installation Progress ] When available, click Next.
[ Installation Complete ] Optionally, click Save to save the response file in case you want to re-run this installation in unattended mode. Click Finish.

Configuration

Using SSMS, connect to your database server and perform the following:
Open the C:\Oracle\odee_1\documaker\database\sqlserver2012\dmkr_admin.sql file, locate the section around lines 139-141, and add the following:
ALTER DATABASE IDMAKER COLLATE SQL_Latin1_General_CP1_CI_AS;
GO
Save the file and execute it. This will create the IDMAKER database and the dmkr_admin schema and objects. Note: if you do not have the ability to execute some of the commands in the SQL file, have your DBA do this step for you.
Open the C:\Oracle\odee_1\documaker\database\sqlserver2012\dmkr_asline.sql file and execute it.
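An aside on the socket security filters mentioned above: the * and ? wildcards behave like shell glob patterns applied to each pipe-delimited entry. Here is a quick illustrative sketch of that matching - the address_allowed helper is my own, not part of WCC, and glob-style semantics are an assumption:

```shell
#!/bin/sh
# Illustrative helper (not part of WCC): check an address against a
# pipe-delimited filter such as SocketHostAddressSecurityFilter,
# assuming * and ? behave like shell glob wildcards.
address_allowed() {
  addr=$1
  old_ifs=$IFS
  set -f                 # no pathname expansion while splitting
  IFS='|'
  set -- $2              # split the filter string on '|'
  IFS=$old_ifs
  set +f
  for pat in "$@"; do
    [ -n "$pat" ] || continue          # skip empty leading entries
    case $addr in
      $pat) echo allowed; return 0 ;;  # unquoted $pat is glob-matched
    esac
  done
  return 1
}
address_allowed "192.168.1.10" "|192.*.*.*|10.24?.*.*"   # prints "allowed"
address_allowed "172.16.0.1" "|192.*.*.*|10.24?.*.*" || echo denied
```

The same pattern shapes apply to the Archiver troubleshooting later on, where a too-restrictive filter produces "not an allowable remote socket address" errors.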
This will create the dmkr_asline schema and objects. Note: if you do not have the ability to execute some of the commands in the SQL file, have your DBA do this step for you.
Open the C:\Oracle\odee_1\documaker\database\sqlserver2012\dmkr_asline_user_examples.sql file and execute it. This will create demonstration users.
Optionally, open and execute the dmkr_asline_XX.sql and dmkr_admin_XX.sql files to add language-specific messages to Documaker. Consult the ODEE 12.6 installation guide on page 41 for additional details.
Open a browser to obtain the MS-SQL JDBC Type 4 driver from Microsoft. Click the Download button and select sqljdbc_6.2.1.0_enu.exe, then click Next. After the file downloads, run the EXE to extract the files into a directory of your choice. Locate the enu\mssql-jdbc-6.2.1.jre8.jar file and copy it into the following directories:
<ODEE_HOME>\documaker\bin\lib
<ODEE_HOME>\documaker\docfactory\lib
<ODEE_HOME>\documaker\docupresentment\lib
<WLS_HOME>\wlserver\server\lib
Edit the <WLS_HOME>\oracle_common\common\bin\commExtEnv.cmd file and locate this line:
set WEBLOGIC_CLASSPATH=%JAVA_HOME%\lib\tools.jar;%PROFILE_CLASSPATH%;%ANT_CONTRIB%\ant-contrib-1.0b3.jar;%CAM_NODEMANAGER_JAR_PATH%;
Add the following to the end of that line, then save the file:
%WL_HOME%\server\lib\mssql-jdbc-6.2.1.jre8.jar;
Run <ODEE_HOME>\documaker\mstrres\dmres\deploysamplemrl.bat to deploy the assembly line form templates and resources to the newly-created IDMAKER database.
Open <ODEE_HOME>\documaker\j2ee\weblogic\sqlserver2012\scripts. Edit set_middleware_env.cmd in Notepad. Locate your WebLogic Middleware home (if installing according to the instructions in this document, it should be c:\oracle\middleware\oracle_home). Update MW_DRIVE to the drive letter of this location. Update MW_HOME to the path of this location. Save and exit.
Edit weblogic_installation.properties in Notepad.
Update dirWeblogicHome to the same value as MW_DRIVE\MW_HOME from the previous step. Note the use of \\ instead of \.
Update jdbcAdminPassword with the password established for the DMKR_ADMIN schema during installation. Replace '<SECURE VALUE>' including the quotes, e.g. jdbcAdminPassword=myP@ssw0rd123.
Update jdbcAslinePassword with the password established for the DMKR_ASLINE schema.
Update jmsCredential with the password established for the JMS server.
Update adminPasswd with the password established for the Documaker Administrator user.
Update weblogicPassword with the password established for the WebLogic Domain Administrator credential.
Update weblogicDomain to base_domain, since we are installing into the existing WebCenter Content domain. If you did not follow the instructions in this document for creating the WebLogic domain for WebCenter, leave this as the default so that a new domain is created.
Save and exit.
Execute wls_create_domain.cmd in this directory. When prompted to execute the RCU, enter n (note: it is case-sensitive). The FMW Configuration Wizard will execute; since you have already created a domain, you can cancel this wizard. When prompted, press Enter to load ODEE into the WebLogic domain, which will invoke the WLST component. When the script finishes, press any key to exit.
Execute wls_add_correspondence.cmd if you intend to use Documaker Interactive. Press a key when prompted to exit after the deployment completes.
To progress further, you will need to start the AdminServer if it is not already running. Because the configuration scripts assume an AdminServer that listens on localhost, and our instructions above for configuring the WebCenter Content domain set a specific address, we'll need to modify the script. Edit documaker.py in this directory with Notepad.
Search for t3://localhost and replace it with t3://<IP address> of the WebLogic server - there should be two occurrences. Save the file.
Execute create_users_groups.cmd.
Execute create_users_groups_correspondence_example.cmd.
Because these scripts create new users and groups with unique identifiers, we need to run a one-time tool that links these users and groups to the entities created in the SQL schema. Open a browser to http://<IP address>:7001/jpsquery. You should see a list of Groups, Users, ApproverLevels Groups, and ApproverLevels Users that are each listed as "found". If there are any "failed" entries, refresh the page to try again. Once all are listed as found, continue to the next step.
Use the WebLogic Console to associate the Documaker managed servers with the default machine. Open a browser to http://<IP-ADDRESS>:7001/console and log in with the weblogic credential. Expand Environment > Machines in the Domain Structure. Click wcc_machine_1 and then click the Servers tab. Select "Select an existing server", choose dmkr_server from the dropdown, and click Finish. Repeat for jms_server and idm_server. Click Activate Changes.
Use the WebLogic Console to start the jms_server. In Environment > Servers, click the Control tab, select the checkbox for jms_server, and click Start, then Yes. This will instruct the NodeManager to start the jms_server. Wait for the jms_server to indicate that it is in the RUNNING state (you can click the Refresh button in the Control tab to automatically refresh the view).
Start the Documaker Document Factory ("docfactory") by opening the Services applet (Start > Services.msc). Locate the service named "ODDF (dmkr_asline:1:1)" and start it. Locate the service named "ODDP (dmkr_asline:1:1)" and start it.
Start the remaining managed servers (idm_server and dmkr_server).
Optional Step: Configure Documaker output into WebCenter. There is a known bug in the 12.5 and 12.6 versions of ODEE with WebCenter.
There is a patch available from Oracle Support which resolves this bug; you will need to implement it in order to archive to WebCenter.
Open Documaker Administrator and select Assembly Line 1. Click Batchings. Select the LOCALPRINT batching and click the Distribution tab. Check the Archive box and select WebCenter Content from the dropdown.
Return to the Documaker Administrator Systems Overview tab, select Assembly Line 1 > Archiver, and click Configure. Expand DESTINATION - WebCenterContent, select Configuration, set the following properties, and ensure the Property Active box is ticked:
destination.name = oracle.documaker.ecmconnector.ucmdestination.UCMDestination
destination.ucm.connectionstring = idc://<IP-ADDRESS>:4444
destination.ucm.importmethod = 0
destination.ucm.password = WebLogic credential password
destination.ucm.username = WebLogic credential
DocumentURL = http://<IP-ADDRESS>:16200/cs/groups/secure/documents/document/
UCM.retry.count = 0
Save all changes and restart the ODDF service.
Troubleshooting: If documents do not show up in WebCenter, check the <ODEE_HOME>/documaker/docfactory/temp/archiver/logs/Archiver.log file (or the LOGS and ERRS tables). If you see a message like "The content item was not successfully checked in. Permission denied. Address 'x.x.x.x' is not an allowable remote socket address", then you need to go to WebCenter Content Server and add that server name/IP address to your allowed hosts. You may wish to use wildcards during initial setup and testing, like so: 10.*.*.*|192.168.*.*
If you see a message like "Connector failed to initialize ... java.lang.NoClassDefFoundError: org/apache/commons/httpclient/Credentials", download the Apache Commons HttpClient jar and place it in <ODEE_HOME>/documaker/docfactory/lib.
Optional Step: Configure Documaker Interactive to use WebCenter for attachments. Open Documaker Administrator and select Assembly Line 1 > Correspondence.
Expand ATTACHMENTS - WCC_CONNECT, select WCC_CONNECT, set the following properties, and ensure the Property Active box is ticked:
(class) = oracle.documaker.idocumaker.ucmquery.ucmConnect
connectionString = idc://<IP-ADDRESS>:4444
passWord = WebLogic credential password
userName = WebLogic credential
Save all changes and restart idm_server.
Optional Step: Create a Documaker cluster by opening the WebLogic Console and navigating to Environment > Clusters. Click New > Cluster. Name the cluster dmkr_cluster_1. Click OK. Click dmkr_cluster_1, then Servers. Scroll to the bottom of the page and click Add. Select dmkr_server from the dropdown, click Finish, then click Save. Click Environment > Clusters. Click New > Cluster. Name the cluster idm_cluster_1. Click OK. Click idm_cluster_1, then Servers. Scroll to the bottom of the page and click Add. Select idm_server from the dropdown, click Finish, then click Save. Click Activate Changes.

Exploration

Now you can proceed to explore the system and its configuration. Before beginning, you may wish to review the Using Oracle WebCenter Content documentation. This will familiarize you with the concepts of working with and managing content, including general concepts of content management, document libraries, enterprise libraries, folders, and document workflows. This information is in Part I of the Using Oracle WebCenter Content document.
The Oracle WebCenter user interface is a modern, intuitive interface that enables users to manage content in dynamic ways. Information on how to use this interface covers finding libraries, folders, and documents; viewing and annotating documents; checking documents in and out; working with libraries and content folders; and using workflows. This information is in Part II of the Using Oracle WebCenter Content document.
For more information on Documaker, check out the Documaker Community and the Oracle Documaker site. Refer to the Documaker installation guide for additional validation steps and exploration recommendations.
An important note: because this is a demonstration setup, the SSL certificates used to identify the servers are for demonstration only, so most browsers will complain about them. You should be able to confirm exceptions (possibly permanently) to avoid this message. In a production environment, SSL services will be backed by real certificates trusted by your users' browsers, so this message will not occur.

Use Cases

The use cases below are examples of system administration and configuration tasks as well as end-user functions, together with the recommended application(s) and function(s) that satisfy them.
To create highly-available service clusters, use the WebLogic Console.
To administer managed servers, use the WebLogic Console or WebLogic Enterprise Manager.
To administer clusters, use WebLogic Enterprise Manager.
To administer Documaker DocFactory configuration, use Documaker Administrator. This can include things like defining metadata elements that pass into WebCenter.
To view Documaker DocFactory processing results, use Documaker Dashboard.
To create, review, and approve correspondence, use Documaker Correspondence.
To administer the Content Server, use WebCenter Content Server. This can include:
Creating workflows
Defining retention requirements
Defining and administering scheduled jobs
Configuring records settings
Reviewing log files
Refinery administration (the refinery is responsible for file format conversions)
Security configuration
Defining additional metadata elements
Defining document profiles with metadata elements
To define libraries, workflows, and folders, use WebCenter Content Server.
To participate in workflows, use WebCenter Content Server or the WebCenter User Interface.

Application URLs

Replace localhost with the hostname or IP address where you have installed the products using the steps above.
WebLogic Console: http://localhost:7001/console
WebLogic Enterprise Manager: http://localhost:7001/em
WebCenter Content Server: http://localhost:16200/cs
WebCenter Content User Interface: http://localhost:16225/wcc
WebCenter Inbound Refinery: http://localhost:16250/ibr
WebCenter Capture: http://localhost:7004/cpt
Documaker Administrator: http://localhost:10001/DocumakerAdministrator
Documaker Dashboard: http://localhost:10001/DocumakerDashboard
Documaker Interactive: http://localhost:9001/DocumakerCorrespondence
Documaker Web Services - Composition: http://localhost:10001/DWSAL1/CompositionService?WSDL
Documaker Web Services - Publishing: http://localhost:10001/DWSAL1/PublishingService?WSDL

Credentials

WebLogic and WebCenter administrative functions can be accessed with the weblogic credential. All Documaker functions can be accessed with the documaker credential; however, this is typically used for administrative functions only. Documaker user functions (Dashboard, Interactive) and WebCenter user functions can be accessed with users defined in the WebLogic security realm. You can view these users in the WebLogic Console under Domain Structure > base_domain > Security Realms > myrealm > Users. The password for all credentials is set at install time and can be changed in the same location.

EDIT: This post has been corrected to reflect the correct version numbers of FMW components and Documaker, which were keyed incorrectly. In addition, a step which was not necessary (installing WLS separately) has been removed.
EDIT: Added a guideline about using separate databases for WCC and Documaker, and clarified a database reference.

PS: My apologies for the wording of the title.
I used a publishing "wizard" and it suggested this title, and while I can change it, I don't want to break links to it, so the link will remain  :-D



Tales From Integration: The Orphaned Process

Background

This week I was presented with an integration problem with ODEE 12.4 - specifically, a third-party workflow program and Documaker Interactive (DI). The workflow program provides users with the business process workflow, and DI simply provides access to edit transactions via its direct-access URL. In case you're not aware, with DI you can skip the Home screen and go directly into edit mode using a URL with some query string parameters. This is all documented in the product documentation on page 519. The problem presented was that the workflow program requires IE to run in compatibility mode, whereas DI and the underlying ADF components cannot work with compatibility mode turned on. As it happens, it may work, but you may get unintended results.

Investigation

One of the problems we noted was that using direct-access mode with DI would sometimes cause the browser process to crash and reload - the user almost never noticed this happening, but in this particular customer's environment, the integration between the workflow application and DI would be lost. This manifested as an inability to use the DI toolbar to save documents; or the document would be loaded but the home tab would have focus instead of the document tab; or sometimes the document would not load at all. What I observed in this environment is that IE always had three processes: a parent 64-bit process and two child 32-bit processes, one for the workflow application and one for DI. I noted that the process ID would change for one of the 32-bit processes whenever this crash-and-reload situation occurred. After some additional investigation with the customer, we found that when closing an IE session (in this case, a DI window for a particular document transaction), the parent 64-bit process held on to the 32-bit child process for approximately 60 seconds, after which it was terminated.
The problem manifested when attempting to open a new document in another DI integration window before the previous 32-bit child process had been terminated: IE would crash and reload in what I'm calling an "orphaned process", and we would observe the anomalous behavior in one of the flavors mentioned above.

Solution

So, since we couldn't find any way to force IE to terminate the 32-bit process on demand, our only option was to not terminate the process at all. In doing this, we would have to make sure that the integration always used the same IE window for DI on a given user's workstation. This meant that we needed to instruct users not to close any DI windows, but instead to leave them open and simply close the document tab when they completed it. They would then be able to go back to their workflow application and select a new document. The significant change was to the workflow application itself: rather than a simple URL link to DI, we would have to do something a bit more creative in order to keep the IE processes from spawning new windows. Here's how we did it.

The previous integration link in the workflow application was a simple anchor tag (the highlighted elements are unique for each transaction and would be dynamically replaced by the workflow application when generating the HTML):

<a href="https://servername:port/DocumakerCorrespondence/faces/load?taskflow=edit&uniqueId=UID&docId=DOCID" target="_new">View Document</a>

The new integration link is almost the same, but includes a call to some client-side Javascript that does the magic:

<a href="#" onclick="openDocumaker('https://servername:port/DocumakerCorrespondence/faces/load?taskflow=edit&uniqueId=UID&docId=DOCID');">View Document</a>

And finally, the Javascript magic - this simply forces each document to open in the same window, and then an <IFRAME> element is created with the source being the DI integration link.
The workflow integration webpage maintains the handle to the IE window using the Javascript variable documakerWindow. If the variable is not initialized, then a new window is created. Finally, there's a bit of Javascript to allow dynamic resizing of the <IFRAME> element whenever the window is resized.

<script>
var documakerWindow, documakerFrame;
function openDocumaker(sUrl) {
  if (documakerWindow == null) {
    documakerWindow = window.open("", "Documaker", "resizable=yes,toolbar=no,status=yes");
    documakerWindow.document.open();
    documakerWindow.document.write(
      "<script>function autoResize(){var f=document.getElementById('diframe');var nw,nh;" +
      "if(f){nh=f.contentWindow.document.body.scrollHeight;nw=f.contentWindow.document.body.scrollWidth;}f.height=(nh)+'px';f.width=(nw)+'px';}" +
      "<\/script><iframe src='" + sUrl + "' width=100% height=100% id=diframe marginheight=0 frameborder=0 onLoad='autoResize();" +
      "document.getElementById('diframe').parent.addEventListener('resize',autoResize);'></iframe>");
    documakerWindow.document.close();
    documakerFrame = documakerWindow.document.getElementById('diframe');
  } else {
    documakerFrame.src = sUrl;
  }
  documakerWindow.focus();
}
</script>

Apologies in advance for the terrible layout. We're moving to a new blogging platform soon, and hopefully it will be better for code representation. Anyway, this was a bit of a problem that took some concentrated debugging effort to figure out - that was most of the battle! In the end, this seems to have done the trick. Now it's time for production roll-out!

Update

I did some additional experimentation to create a custom JSP tag that illustrates how you can make integration to Documaker Interactive even easier for developers. The model is still the same as discussed above - an anchor <a> tag and the corresponding JavaScript to facilitate opening a window with an IFRAME.
The only difference now is that you can encapsulate the complexities of the Javascript and anchor (and even the creation of the URL to Documaker Interactive) inside a custom JSP tag that can be reused in your applications. If you want to try this yourself, you can download the source for this example here (note: it's unsupported and offered without warranty of any kind).

Tag Usage Example

Implementation takes a few steps. First, download the example code and extract it. Then:
Copy public_html/WEB-INF/dmkrint.tld to your web application (ideally in the WEB-INF folder).
Compile the Java files in src/oracle/documaker/integration and copy the compiled classes to your WEB-INF/classes folder.
Add the TLD to your JSP page (example: <%@ taglib prefix="d" uri="WEB-INF/dmkrint.tld"%>).
Add the taglib to your web.xml (see the example in public_html/WEB-INF/web.xml).
Implement the tag in your JSP, as shown below.

<d:interactive mode="compose" uid="b2304ad8-fb3c-428f-8077-f35e8dbcfbcb" docid="TEST" protocol="http" host="" app="DocumakerCorrespondence">View Document</d:interactive>

Attributes for the tag are:
mode = [ inbox | edit | compose ] - the desired mode for Interactive to launch into. Edit goes into forms selection, Inbox goes to the Inbox, and Compose goes into document composition with WIPedit.
uid = the UniqueId of the document to edit. If you are implementing with DWS (web services), you will get this value in the response when submitting a request to generate a document.
docid = the DocId (or key Id) of the document to edit. If you are implementing with DWS (web services), you will get this value in the response when submitting a request to generate a document.
protocol = [ http | https ] - the protocol for building the URL to Documaker Interactive.
host = the hostname (and port, if necessary) for building the URL to Documaker Interactive.
app = the context root of the Interactive application (e.g. DocumakerCorrespondence).
Tag body = the text to display in the anchor tag.
HTML Emitted by Tag

<a href="#" onclick="openDocumaker('');">View Document</a>
<script>
var dW, dF;
function openDocumaker(s) {
  if (dW == null) {
    dW = window.open("", "Documaker", "resizable=yes,toolbar=no,status=yes");
    dW.document.open();
    dW.document.write("<script>function autoResize(){var f=document.getElementById('diframe');var nw,nh;if(f){nh=f.contentWindow.document.body.scrollHeight;nw=f.contentWindow.document.body.scrollWidth;}f.height=(nh)+'px';f.width=(nw)+'px';}<\/script>" +
      "<iframe src='" + s + "' width=100% height=100% id=diframe marginheight=0 frameborder=0 onLoad='autoResize();document.getElementById('diframe').parent.addEventListener('resize',autoResize);'><\/iframe>");
    dW.document.close();
    dF = dW.document.getElementById('diframe');
  } else {
    dF.src = s;
  }
  dW.focus();
}
</script>



How To Update the Documaker JRE

I've gotten some questions from customers wanting to know if it's possible to use newer Java JREs with Documaker, so I thought I'd address this in a blog post. First, it's important to note that Oracle Documaker Enterprise Edition (ODEE) has additional software infrastructure requirements, such as WebLogic 11g, and these components have specific Java JDK version requirements of their own. Those requirements are wholly separate from ODEE's. ODEE's requirements are outlined in the Documaker System Requirements guide for each version of Documaker, and the Java-specific requirements are succinctly stated as a 32-bit JRE: Java 6 for 12.0-12.2 and Java 7 for 12.3-12.5. I suspect that 12.6 will ship with Java 8. ODEE includes the appropriate JRE version with its installer packages, and it's completely stand-alone, meaning that ODEE does not rely on any JRE available on the system at runtime other than the one it provides. This means that it is possible to replace the JRE that ships with Documaker without affecting other components (e.g. WebLogic or other applications). This blog post will detail how to do that for a Linux system. It's possible to do this for Windows as well, using similar procedures. It's also very important to note that running Documaker with a non-shipping JRE may not be supported by Oracle, and any particular JRE may not even have been tested. So keep in mind that doing this may void your warranty - but the process is completely reversible.
First, I'm starting with an Oracle Enterprise Linux 6u3 environment which already has ODEE 12.4 installed. It is possible to use yum to install alternate versions of software; however, I haven't found a reliable way to get the 32-bit JRE through this method, so we'll download the JRE directly from Java.com as a gzipped tar. Make sure you download the plain 32-bit "tar.gz" file and not the RPM or 64-bit ("x64") version. Download the file, copy it to your Documaker server, then unzip/untar it.
This will create a directory containing the JRE wherever you run this command.

# pwd
[ODEE_HOME]/documaker/
# tar -xvzf jre-8u121-linux-i586.tar.gz

Before proceeding, make sure you have shut down any ODEE processes using [ODEE_HOME]/documaker/docfactory/bin/docfactory.sh stop. Also stop any running Docupresentment processes by running [ODEE_HOME]/documaker/docupresentment/docserver.sh stop. Now, let's move the existing JRE into a backup directory and have a look. Notice anything interesting?

# pwd
[ODEE_HOME]/documaker
# mv jre backup-jre
# ls -ltr backup-jre
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 java
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 idswatchdog.exe
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 idsrouter.exe
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 idsinstance.exe
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_supervisor
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_scheduler
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_receiver
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_pubnotifier
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_publisher
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_identifier
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_historian
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_batcher
-rwxrwxr--. 1 oracle oinstall   5746 Feb 28  2013 docfactory_archiver

If you said, "Hey, all those files are the same size as the java binary," you're right! Since each ODEE worker is a separate JVM, if you started up ODEE and did a process list, you would see a bunch of java processes and you would have to dig a bit to see which process corresponds to each worker. By copying the java binary and renaming it for each worker, your process list will show each worker by its name (e.g. "docfactory_archiver") instead of "java".
This means that we cannot simply rip-and-replace the JRE; we'll have to do a little extra work as well.

# pwd
[ODEE_HOME]/documaker
# mv jre1.8.0_121 jre
# cd jre/bin

Now that we have our own copy of the JRE, we need to copy the java binary a few times so that our JVM process names reflect the workers inside them. Note: yes, it's strange to have Linux binaries with ".exe" extensions, but there may be some scripts that look for specific process names, so we're going to leave them just the way they are.

# cp java idswatchdog.exe
# cp java idsrouter.exe
# cp java idsinstance.exe
# cp java docfactory_supervisor
# cp java docfactory_scheduler
# cp java docfactory_receiver
# cp java docfactory_pubnotifier
# cp java docfactory_publisher
# cp java docfactory_identifier
# cp java docfactory_historian
# cp java docfactory_batcher
# cp java docfactory_archiver

At this point, the work is done. Now we simply need to test it. Start up the Document Factory using [ODEE_HOME]/documaker/docfactory/bin/docfactory.sh start. If you see any errors, it's time to dig a little further. One possible error is that you forgot to copy the java binary to one of the proper names, in which case you'd see an error like this:

2017-01-25 19:33:23.827 UTC-ERROR-[Supervisor-Scheduler-1-oracle.documaker.processmonitor.process.monitors.InstanceMonitor]-[Instance.java:1568]-oracle.documaker.processmonitor.process.instance.Instance.reset:  Unexpected exception:  java.io.IOException: Cannot run program "docfactory_scheduler" (in directory "[ODEE_HOME]/odee_1/documaker/docfactory/temp/scheduler"): error=2, No such file or directory

This error tells us that the file "docfactory_scheduler" does not exist, so we need to double-check that the file [ODEE_HOME]/documaker/jre/bin/docfactory_scheduler exists. The next step is to start up Docupresentment by running [ODEE_HOME]/documaker/docupresentment/docserver.sh start.
The process should start up without any problems, and now you simply need to regression test your implementation to make sure everything functions as you expect. My advice on regression testing is to concentrate on areas outside of Documaker itself (meaning, don't focus regression testing on forms; rather, focus on integration points that utilize Docupresentment). If you use EWPS web services, DSI interfaces, server-side web pages, direct queue integration, or any combination thereof, I recommend focusing regression testing there. Good luck!
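As an aside, the copy-and-rename step above can be scripted as a loop rather than twelve individual cp commands. This is a hedged sketch that runs against a stub "java" binary in a temp directory; in practice you'd run the loop from [ODEE_HOME]/documaker/jre/bin against the real binary.

```shell
# Sketch: copy the java binary once per worker name so a process list
# shows meaningful names instead of "java". The stub file stands in
# for the real JVM binary for demonstration purposes.
set -e
BIN=$(mktemp -d)
cd "$BIN"
echo stub-jvm > java    # stand-in for the real java binary

for name in idswatchdog.exe idsrouter.exe idsinstance.exe \
            docfactory_supervisor docfactory_scheduler docfactory_receiver \
            docfactory_pubnotifier docfactory_publisher docfactory_identifier \
            docfactory_historian docfactory_batcher docfactory_archiver; do
  cp java "$name"
done

# Every copy should be byte-identical to the java binary
cmp java docfactory_archiver && echo "copies match"
```

If a future patch replaces the JRE again, rerunning the loop recreates all twelve copies in one step.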


Oracle WebLogic Plugin with Apache Configuration

Many software configurations use Java Application Servers (JAS) like WebLogic for serving dynamic applications. In addition to using a JAS, many companies will implement load balancing, proxying, or a number of other use cases by configuring an HTTP server front-end, such as Oracle HTTP Server (OHS) or Apache. An HTTP front-end can also improve performance by serving static content itself while the JAS serves the dynamic content. I've been playing a bit with Oracle WebLogic 11g with an Apache front-end, and this post documents some of my findings and configuration practices. I'll be updating this post as I get further along, so feel free to come back periodically and see what's changed.

Initial Plugin Deployment

This environment uses Oracle Enterprise Linux 6, Oracle DB 12c, ODEE 12.4, WebLogic 11g (10.3.6), and Apache 2.2.15. The installations are fairly generic sandbox installations, with only minor changes, as all of the elements are installed on the same machine. The biggest change is that the various Documaker applications have been consolidated into the AdminServer managed server to reduce JVM overhead.

NOTE: This document assumes you're using the latest WebLogic Plugin 11g available as of this writing. This is not the same version that comes with the WebLogic 10.3.6 installer (which is 10g). There are differences between the two plugins, and the 10g plugin will not support newer SSL implementation needs.

The first step is to install and configure the WebLogic plugin for Apache. Obtain the 11g plugin from edelivery.oracle.com (search for WebLogic Plugin and download the appropriate version). Once you've downloaded and unzipped the file, you will be able to find and extract the appropriate plugin version for your Apache server version and OS. Note that the README.txt file contains explicit instructions for deploying the plugin to your environment. I will paraphrase these instructions here, but refer to the README for your installation.
- Unzip the plugin to the Apache configuration directory in a subfolder called plugin (e.g. /etc/httpd/plugin).
- Add the contents of the plugin/lib folder to the [APACHE_HOME]/lib folder, or edit the /etc/init.d/httpd script file to add the plugin/lib folder to LD_LIBRARY_PATH.
- Edit /etc/httpd/conf/httpd.conf to add the WebLogic module, adding the lines below:

LoadModule weblogic_module plugin/lib/mod_wl.so
<IfModule mod_weblogic.c>
Include conf/weblogic.conf
</IfModule>

By using the Include directive here, we have externalized the WebLogic plugin configuration to a separate file. Create the /etc/httpd/conf/weblogic.conf file and add the following directives:

<IfModule mod_weblogic.c>
        WebLogicHost testbed
        WebLogicPort 7001
        MatchExpression *.*
</IfModule>

<LocationMatch /*>
SetHandler weblogic-handler
</LocationMatch>

This directive tells Apache that all URLs ("/*") will use the weblogic-handler, which redirects traffic to the WebLogic server on host testbed, listening on port 7001. Start Apache using apachectl start (or service httpd start), and then access http://testbed/console. (Remember: testbed is the hostname, and Apache and WebLogic are on the same host.) You should see the WebLogic console login screen (the application may have to deploy first). It's possible that you'll see an error like this: "Error: Failure of server APACHE bridge: No backend server available for connection: timed out after 10 seconds or idempotent set to OFF." Unfortunately, this is a rather generic error.
You can review the Apache logs (/etc/httpd/logs/error_log) and you might see:

[Thu Jan 12 08:48:35 2017] [error] [client xxx.xxx.xxx.xxx] ap_proxy: trying GET /console/console.portal at backend host 'xxx.xxx.xxx.xxx/7001; got exception 'CONNECTION_REFUSED [os error=13, line 1735 of ../nsapi/URL.cpp]: Error connecting to host xxx.xxx.xxx.xxx:7001 errno = 13'

One possible solution (which worked in my environment) is switching the SELinux mode to Permissive:

# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive

After setting SELinux to Permissive, you can restart Apache (apachectl stop, then apachectl start) and run the test again. If everything works, soldier on to set up SSL.

Using SSL

Before we dig in, I advise a review of the official Oracle documentation on SSL and WebLogic. It will give you some helpful primer information. After that, you can continue here. The first step in securing your applications running on WebLogic is to set up SSL. By default, Documaker applications will use a test SSL certificate, which is why when you access the applications for the first time you might get various errors about the certificate being untrusted. The reason is that digital certificates are issued by Certificate Authorities (CAs). Most browsers come configured with a trust store, which is a storage area for root CA certificates for each of the CAs that the browser is configured to trust. Trust is chained, meaning that if you or your company requests an SSL certificate from a CA, you'll issue a CSR (certificate signing request) from your server(s), which you'll submit to your CA. This establishes the chain of trust: the CA verifies that your company is who it says it is, and signs the certificate using their root certificate, establishing the digital chain. When your browser connects to your company's secured web server using secure protocols (e.g.
HTTPS), one of the activities that happens is an SSL handshake, where the client (browser) requests an identity certificate from the server. The server returns the certificate, which the client compares against the root certificates in the trust store. If the certificate is properly signed, valid, not expired, and meets other security parameters (such as the server name matching the hostname in the certificate, or the certificate being encrypted using an appropriately secure encryption scheme that the browser will accept, to name a few), the browser allows the connection to proceed. Otherwise, the handshake fails and the connection is deemed insecure. Side note: certificates are stored in keystores, which I'll discuss below. In short, test certificates are not signed by CAs, and therefore most browsers will not trust them, which is why you'll be presented with a warning when you try to access Documaker applications secured with test certificates. To resolve this you'll need to configure your identity store with appropriately-signed certificates, or, if your company acts as a CA on its own behalf and issues its own certificates for internal use, you'll need to import the root CA certificate used to sign these certificates into every client browser's trust store.

Configuring Keystores

First, a bit of background. There are two types of keystores: identity and trust. The identity keystore contains the private certificates that identify the server - these should be kept safe. The trust keystore contains the public certificates of CAs and other servers which this server will trust. The default installation includes demo keystores for identity and trust. You will want to keep these as-is in case you should need to revert to them. Create a new keystore using the instructions found here. Once you have created the keystores you can configure them on the Configuration > Keystores tab.
Additionally, you will need to access the Configuration > SSL tab to add the private key passphrase for the identity store, and optionally enable Use JSSE SSL. NOTE:  When creating the private and public keys you will have the option to specify the bit length for the encryption key and the hash method. Do not use a length less than 1024 bits, and do not use SHA-1 as the hash method. I recommend using SHA-2 hashing and 2048-bit key length. Note that using SHA-2 means you will need to enable Use JSSE SSL. JSSE requires WebLogic 10.3.6 or higher. If you wish to use TLS v1.2, you need to use Java 7, which also requires WebLogic 10.3.6. Since WebLogic 10.3.6 is supported with ODEE 12.3 and higher, if you require TLS v1.2 or SHA-2 support you must use ODEE 12.3 or higher. SPECIAL NOTE: Some earlier releases of Documaker Enterprise Edition shipped with keystores that had keys of 1024-bit length. Some browsers have been updated to require longer bit-length in keys, and if you attempt an SSL connection with key bit-length that is shorter than the browser requires, you'll be unable to connect. Some browsers such as Firefox will give you a reasonable error message (e.g. "SSL_ERROR_WEAK_SERVER_CERT_KEY") but not all browsers will. You can regenerate the keys to use a longer bit-length by logging into a terminal session on the WebLogic server and executing the following commands. After executing the -genkey command you'll be prompted to enter some information about the key; enter whatever is appropriate for your use. 
cd [MIDDLEWARE_HOME]/wlserver_10.3/server/lib
keytool -delete -keystore DemoIdentity.jks -storepass DemoIdentityPassPhrase -alias DemoIdentity
keytool -genkey -keystore DemoIdentity.jks -storepass DemoIdentityPassPhrase -alias DemoIdentity -keypass DemoIdentityPassPhrase -keyalg RSA -keysize 2048 -validity 3650

Other problems you may encounter are "An error occurred during a connection to SSL received a weak ephemeral Diffie-Hellman key in Server Key Exchange handshake message. Error code: SSL_ERROR_WEAK_SERVER_EPHEMERAL_DH_KEY" or "SSL_ERROR_NO_CYPHER_OVERLAP", which mean that the WebLogic server is configured to allow a version of SSL that the browser doesn't like, likely SSLv3. To disable this, you need to tell WebLogic to use a specific version of SSL, and that is done via a startup parameter:

-Dweblogic.security.SSL.protocolVersion=TLS1

You can apply this parameter in several places, depending on your configuration (e.g. startManagedWebLogic.sh, the WebLogic console Server Start page, etc.). It may also be beneficial to add the following SSL debugging flags here as well:

-Dssl.debug=true -Dweblogic.StdoutDebugEnabled=true -Djavax.net.debug=all

You can establish all of these parameters by adding a line to your startWebLogic.sh/cmd scripts, which should be located in [MIDDLEWARE_HOME]/user_projects/domains/idocumaker_domain:

JAVA_OPTIONS="-Dssl.debug=true -Dweblogic.StdoutDebugEnabled=true -Djavax.net.debug=all"

By adding this line and restarting the appropriate server(s), you will get additional debugging information for SSL connections.

Configuring Managed Servers

After you've secured the necessary certificates and created keystores, you'll need to attach them to the managed servers in WebLogic which are running your Documaker applications. Log in to the WebLogic console and navigate to Environment > Servers. Default installations include the following managed servers: soa_server, idm_server, and dmkr_server.
Repeat these steps for each managed server:

1. Click the server name (e.g. dmkr_server).
2. Under Configuration > General, tick the box for SSL Listen Port Enabled. Make sure, on this page, that you have configured the Listen Address with the exact DNS name or IP address that matches your SSL certificate.
3. Enter the desired port number for SSL Listen Port. Usually this is the Listen Port + 1 (e.g. Listen Port = 10001, SSL Listen Port = 10002).
4. Click Save.
5. Navigate to Configuration > SSL and change any desired settings.
6. Navigate to Configuration > Keystores and change any desired settings.

After restarting the managed servers, you should be able to access your Documaker web applications using the secure port and protocol, e.g. https://xxxx:10002/DocumakerAdministrator. If you receive errors, review your configuration and the SSL debugging information here for additional assistance. Once you have validated that all is well, you can secure the AdminServer using the same configuration methods. After this is completed, you're ready to configure the WebLogic plugin. Note: if your Listen Address name doesn't match, you have a few options: you can disable the RequireHostNameMatch setting (set it to false) for the WebLogic plugin, or you can use the SSLHostMatchOID setting to configure a different portion of the Subject Distinguished Name to use for matching. Refer to the WebLogic Plugin configuration details for specifics.

Configuring the WebLogic Plugin for Apache

The WebLogic Plugin 11g uses the Oracle Wallet for SSL configuration. The plugin files downloaded earlier contain the orapki tool to assist you in managing the wallet. You need to have the JAVA_HOME environment variable set to a valid JDK. The commands below will create the wallet files in the /etc/httpd/ssl/wallet directory. You will be prompted for a password during the creation process.
mkdir /etc/httpd/ssl
/etc/httpd/plugin/bin/orapki wallet create -wallet /etc/httpd/ssl/wallet -auto_login
chmod a+r /etc/httpd/ssl
chmod a+r /etc/httpd/ssl/wallet/*.sso
chmod a+r /etc/httpd/ssl/wallet/*.p12
chmod a+r /etc/httpd/ssl/wallet

Next, you need a copy of the server identity certificate - this is the certificate that resides in your identity keystore and is tied to your WebLogic host. Documaker ships with a demo identity keystore which contains the necessary certificates, and that's what we'll use. If you have a different scenario (e.g. a non-demo keystore or certificate) you should review the orapki documentation. Note that you will have to modify the keystore somewhat, because the orapki tool requires that the private key password and the keystore password be the same. You can temporarily change the private key password using the keytool:

keytool -keypasswd -alias demoidentity -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase

Once the private key and keystore passwords are the same, you can import the keystore into the wallet:

/etc/httpd/plugin/bin/orapki wallet jks_to_pkcs12 -wallet ./wallet -pwd <wallet password> -keystore <MIDDLEWARE_HOME>/wlserver_10.3/server/lib/DemoIdentity.jks -jkspwd DemoIdentityKeyStorePassPhrase

Note: You can run the keytool command again if you wish to revert the private key passphrase to its original form. If you choose not to, you should use the WebLogic Console to update the managed servers' SSL settings with the new private key passphrase. Also of note: if you are using NodeManager to control your WebLogic domain (which is a good idea for production environments), you may receive a startup failure if you change the identity key passphrase.
<Jan 25, 2017 8:10:11 AM> <SEVERE> <Fatal error in node manager server>
weblogic.nodemanager.common.ConfigException: Incorrect identity private key password
        at weblogic.nodemanager.server.SSLConfig.loadKeyStoreConfig(SSLConfig.java:170)
        at weblogic.nodemanager.server.SSLConfig.<init>(SSLConfig.java:102)
        at weblogic.nodemanager.server.NMServer.init(NMServer.java:186)
        at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)
        at weblogic.nodemanager.server.NMServer.main(NMServer.java:380)
        at weblogic.NodeManager.main(NodeManager.java:31)

You can update NodeManager with the new passphrase by adding the following lines to the end of the nodemanager.properties file located in the WL_HOME/common/nodemanager directory. I've used the values below to match what I used with keytool and orapki above; obviously you should replace these with what is correct for your environment. Note that NodeManager will encrypt the passphrase values upon startup, so it's a good idea to start NodeManager right after making these changes so they don't sit in unencrypted form too long.

KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=<MIDDLEWARE_HOME>/wlserver_10.3/server/lib/DemoIdentity.jks
CustomIdentityKeyStorePassPhrase=DemoIdentityKeyStorePassPhrase
CustomIdentityAlias=demoidentity
CustomIdentityPrivateKeyPassPhrase=DemoIdentityKeyStorePassPhrase

Edit the weblogic.conf file for SSL. Previously we simply added the configuration necessary to support the WebLogic plugin, which allows Apache to act as a proxy. Now we'll add the ability for Apache to maintain an SSL connection, which is necessary for secure communications. Recall that in the initial configuration we created an all-encompassing directive, <LocationMatch /*>, that routes all traffic from Apache to WebLogic. Most implementation practices avoid this sort of behavior, so we'll now explicitly define what needs to be forwarded.
Replace the contents of the weblogic.conf file with the following:

WLLogFile /etc/httpd/ssl/proxy.log
WLSSLWallet /etc/httpd/ssl/wallet
Debug ALL
DebugConfigInfo ON
SecureProxy ON
WLProxySSL ON
FileCaching ON

<LocationMatch "/DocumakerCorrespondence*">
 SetHandler weblogic-handler
 WebLogicHost testbed
 WebLogicPort 9002
</LocationMatch>
<LocationMatch "/plugin*">
 SetHandler weblogic-handler
 WebLogicHost testbed
 WebLogicPort 9002
</LocationMatch>
<LocationMatch "/DWS*">
 SetHandler weblogic-handler
 WebLogicHost testbed
 WebLogicPort 10002
</LocationMatch>
<LocationMatch "/DocumakerAdministrator*">
 SetHandler weblogic-handler
 WebLogicHost testbed
 WebLogicPort 10002
</LocationMatch>
<LocationMatch "/DocumakerDashboard*">
 SetHandler weblogic-handler
 WebLogicHost testbed
 WebLogicPort 10002
</LocationMatch>

The top level of the file defines directives which apply to all subsequent <LocationMatch> directives. This allows you to redirect traffic for specific services and ports. Note that the configuration is slightly different if you are running a clustered WebLogic configuration; if clustering is configured, you'll use directives like the following:

WebLogicCluster testbed1:9002,testbed2:9002

The WebLogic 11g plugin may fail to load properly if you have not configured LD_LIBRARY_PATH appropriately. Consult /etc/httpd/ssl/proxy.log, and if you see a message similar to "mod_weblogic(ssl): Cannot Load library: libwlssl.so" then review your system to make sure httpd is able to dynamically load the plugin libraries.

Conclusion

After some mild Sturm und Drang, we have SSL enabled and Apache is proxying our requests to WebLogic using the WebLogic plugin. We can access the Documaker applications with a simple Apache URL, with no WebLogic server or port.
From here, we can expand further by creating an Apache cluster and a WebLogic cluster to handle more users in the system. Note: If you enabled the SSL logging in the WebLogic Plugin or on the Managed Servers, I highly recommend you remove those settings once everything is working because there is a prodigious amount of debugging information that will be logged. Til next time! 
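Pulling together the failure signatures encountered in this post, here is a hedged sketch of a small triage helper that scans an Apache/plugin log for them. The demo runs against a stub log file it writes itself; in practice you'd point it at /etc/httpd/logs/error_log or /etc/httpd/ssl/proxy.log.

```shell
# Hypothetical triage helper: grep a log for the two plugin failure
# signatures discussed above and print the corresponding next step.
set -e
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[error] ap_proxy: trying GET /console/console.portal at backend host; got exception 'CONNECTION_REFUSED ...'
mod_weblogic(ssl): Cannot Load library: libwlssl.so
EOF

grep -q "CONNECTION_REFUSED" "$LOG" && echo "Check SELinux mode and the WebLogic listen port"
grep -q "Cannot Load library" "$LOG" && echo "Check LD_LIBRARY_PATH includes the plugin/lib folder"
```

Keeping the known signatures and their remedies in one script saves re-deriving the diagnosis the next time a proxied request fails.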



All About WebLogic Passivation, Or, What Are Those BC* Files?

If you're running Oracle Documaker Enterprise Edition and using Documaker Interactive, it's quite possible that you have noticed a proliferation of certain files inside the root directory of your Documaker domain. Specifically, the default domain, idocumaker_domain, likely has a bunch of files prefixed with "BC" followed by an alphanumeric string and ending with the suffix "BCD". Have you ever wondered what these files are, and why they are there? The reason is simply stated: the Internet and its primary protocol, HTTP. HTTP is a stateless protocol, which means that each request made from a browser to a server is independent of any previous request. In the world of business applications, sometimes multiple steps are needed to complete an operation. This means that each step in an operation needs to retain some information about what the user has done in previous steps - that is, the state of the user's operation needs to be retained between steps. How is this accomplished with a stateless protocol like HTTP? Given the stateless nature of HTTP, an arbitrary request from a user cannot be distinguished from any other request - in a sequential process, there's no way for the server to know whether this is the first or third step of a multi-step operation. Enter cookies, and not the kind you bake yourself. HTTP cookies are bits of information, specifically name/value pairs, which the server can send to a browser and the browser can store locally. Once the cookie has been created with some name/value pairs, it is given a unique identifier. From then on, the browser and the server exchange the cookie with each request and response. The cookie affords a way for a server application to associate user session information ("state") with a browser, since the cookie is traded back and forth with each request and response. Cookies enable applications to be stateful when communicating via a stateless protocol.
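The cookie mechanism described above can be illustrated with a toy sketch: the "browser" holds only an opaque identifier, while the "server" keeps the actual state keyed by that identifier (here, a file per session in a temp directory - purely an illustrative stand-in, not how any real server implements it).

```shell
# Toy illustration of cookie-based state. The cookie value is just an
# opaque id; the state itself lives on the server side.
set -e
STORE=$(mktemp -d)            # stand-in for the server's session store

# "Server" issues a session id - this is what the cookie carries
COOKIE="sess-$$"
echo "step=1" > "$STORE/$COOKIE"   # state recorded under that id

# A later "request" presents the same cookie; the server recalls state
grep "step=1" "$STORE/$COOKIE" && echo "session resumed"
```

The point of the sketch is that the browser never stores the state itself, only the key to it - which is exactly the role the session id cookie plays for HttpSession below.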
Furthermore, Java EE-compliant application servers such as Oracle WebLogic provide the HttpSession object, which lets applications store Java objects server-side, keyed by the session identifier carried in a cookie; this facilitates the creation of multi-step operations within web applications. As with most things in the programming world, this convenience does not come without cost: HttpSession objects are held in the memory of the Java EE web server, which means that should the Java EE server crash, any in-memory sessions are lost. This failure point can be mitigated by implementing multiple Java EE servers in a cluster, which allows the HttpSession object to be replicated across the cluster members. This increases reliability, since a user's request can then be handled by any active cluster member. The cost, of course, is increased network traffic and the time needed to broadcast changed HttpSession objects across the cluster. As user volume increases, so does the number of HttpSession objects, and as these objects change, network traffic increases. Applications can store all manner of objects within the HttpSession, and each of these consumes memory for each user's session. This is mitigated somewhat by session invalidation, which occurs on session timeout (a configurable parameter of the Java EE application server) or when the user logs out - an uncertain event. Clearly these performance implications weigh heavily on system design to minimize memory, network, and reliability concerns. Luckily, Oracle ADF provides the Application Module ("AM"), which handles the heavy lifting for applications and is tunable. There is a lot of detail behind the AM State Management Facility, but for our purposes we need only know that the module allows an application to store ("passivate") pending transaction state as XML data (also known as a snapshot). As you might expect, an application can also restore ("activate") pending transaction state from a previously-stored snapshot.
BUT WHAT ABOUT THESE BC*BCD FILES?!, you're probably yelling right about now. With that background covered, I can now say: these files are the XML snapshots stored to the filesystem. They can also be stored to a database, and as you might expect, each option (file-based or database-backed) has pros and cons. The file-based option is fine for non-production systems and is a little faster since it lacks the overhead of database interaction. You can use this option for production systems if your Java EE application server cluster utilizes a shared file system to which each cluster member has access. If you choose this route for a production system, ensure that you engage appropriate subject matter experts to tune your shared file system, as it can represent a significant performance bottleneck if not tuned accordingly. In most cases, the database-backed snapshot store is preferred for production systems that must be highly available and configured for failover.

The following table illustrates settings that control the passivation scheme used by Oracle WebLogic. These settings can be added to startup scripts (e.g. startWebLogic.cmd/sh and startManagedWebLogic.cmd/sh) or to the server startup parameters sheet within the WebLogic console. In either case, each setting must be prefixed with -D, e.g. "-Djbo.passivationstore=file". Note that these settings apply only to the managed servers running the Documaker web applications (Interactive, Dashboard, and Administrator) and the SOA suite (idm_server, dmkr_server, and soa_server are the default managed server names).

Table 1. Passivation Scheme Settings

jbo.passivationstore
  "file" - The AM stores snapshot information to a file.
  "database" - The AM stores snapshot information to the PS_TXN table within the database.
  Any other value - The AM stores snapshot information to a file, or uses an application-specific configuration (for Documaker applications, this defaults to file).

jbo.tmpdir
  Valid directory path - Location where snapshot files are stored. Defaults to the application domain root.

jbo.pcoll.mgr
  oracle.jbo.pcoll.OraclePersistManager - Use this value when the passivation schema resides within an Oracle database.
  oracle.documaker.shared.model.DB2PersistManagerCustom - Use this value when the passivation schema resides within a DB2 database.

jbo.server.internal_connection
  JNDI name or JDBC connection string - Specifies a database connection where the passivation schema is located. The default is to not provide this value, which causes the system to use the preconfigured DMKR_ASLINE schema. I advise against changing this value; the default stores the passivation information within the appropriate assembly line schema.

So, in conclusion, you can stop the creation of the BC*BCD files by adding the -Djbo.passivationstore=database setting, together with -Djbo.pcoll.mgr=oracle.jbo.pcoll.OraclePersistManager (Oracle) or -Djbo.pcoll.mgr=oracle.documaker.shared.model.DB2PersistManagerCustom (DB2), to your WebLogic server startup scripts or the Server Start tab within the WebLogic Console for the managed servers hosting Documaker web applications. The default managed servers are idm_server, dmkr_server, and soa_server, although your implementation may have different targeting. The Documaker web applications, should you need a refresher, are Interactive, Dashboard, and Administrator. Using the database-backed persistence store is desirable for clustered production environments which must be highly available or configured for failover. You can use -Djbo.passivationstore=file for non-clustered environments, non-production environments, or clustered environments with a shared filesystem.
You can also use the -Djbo.tmpdir=<?> option to specify a directory where passivation files are written in case you don't want them in the root of the Documaker application's domain. 
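Taken together, a hedged sketch of the startup-script change (the Oracle persistence manager variant is shown; swap in the DB2 class where appropriate):

```shell
# Sketch: composing the passivation flags to append in
# startWebLogic.sh / startManagedWebLogic.sh. For the file-based store
# you would instead use -Djbo.passivationstore=file and optionally a
# -Djbo.tmpdir path of your choosing.
JBO_FLAGS="-Djbo.passivationstore=database"
JBO_FLAGS="$JBO_FLAGS -Djbo.pcoll.mgr=oracle.jbo.pcoll.OraclePersistManager"

JAVA_OPTIONS="$JAVA_OPTIONS $JBO_FLAGS"
export JAVA_OPTIONS
echo "$JAVA_OPTIONS"
```

Building the flag string in a variable keeps the Oracle/DB2 choice in one place if you maintain startup scripts for several managed servers.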



ODEE 12.5 Green Field on Windows with Oracle Database

It's been a while since I last wrote about the green field topic, and since then we've had two versions of Oracle Documaker Enterprise Edition (ODEE) released. It's hard to believe that 12.3 was so long ago! Now we're up to 12.5, and the installation process has changed somewhat, along with the introduction of support for Microsoft SQL Server, so I figured it was time to start a new series of posts addressing some of these changes. For the first foray back into the world of green fields, I'm going to use a red field - that is, I'm going to walk through the installation of ODEE 12.5 on the so-called red stack, which is Oracle Database 11g, Oracle WebLogic 10.3.6, and Oracle SOA Suite, all on Windows. Technically that's not exactly a red stack, but it's close enough for our purposes. Let's dig in!

Getting Started

To begin, we'll need to revisit a few things. The first is that our prerequisites for this particular installation haven't changed since 12.3. That means you can use the existing posts where I detailed the installation procedures for Oracle Database 11g, Oracle WebLogic 10.3.6***, and Oracle SOA Suite.

***Note! My previous installation guide for Oracle WebLogic 10.3.6 mentions using JDK 1.6 or higher; however, JDK 1.8 is not supported on WebLogic 10.3.6, so you should use the latest patch of JDK 1.7.

Note! If I didn't mention it before, I'll mention it here now: since this is a standalone sandbox, I recommend for simplicity using the same password for all accounts on this sandbox - for everything from the database to WebLogic, to schemas, to JMS credentials. It will greatly ease the installation process. Yes, it's not secure, but this is a trial exercise!

Next, make sure you've had a look at the system requirements, and while you're at it, make a note of the Documaker 12.5 documentation site. Place it in your bookmarks!
Finally, have the ODEE 12.5 installation guide handy - in fact, let's open it now, because on page 24 we need to review some Fusion Middleware concerns - specifically the required patches. Let's review each item.

JDK/JRE Selection

I'm just mentioning this again in case you missed it above: if you installed WebLogic 10.3.6 using JDK 1.8, you need to start over using JDK 1.7! JDK 1.8 is not supported on WebLogic 10.3.6.

SOA Suite Patch

There are a few patches needed to address some problems with SOA Suite prior to deploying ODEE 12.5. These patches install using OPatch, which is installed as part of the SOA Suite. You will need to set a few environment variables prior to using OPatch. To do so, open a command window with Administrator privileges and do the following:

set MW_HOME=c:\oracle\middleware
(This is the directory where you installed WebLogic 10.3.6.)

set ORACLE_HOME=c:\oracle\middleware\oracle_soa1
(This is the directory where you installed SOA Suite.)

set JAVA_HOME=c:\progra~1\java\jdk1.7
(This is the directory where you installed the JDK - keep in mind spaces are not allowed, so use the 8.3 short-name notation in Windows, or install the JDK to a directory without spaces! And remember: JDK 1.7, not 1.8.)

Next you can navigate to ORACLE_HOME\OPatch and run:

opatch lsinventory

You should see "OPatch succeeded" at the end of this run. This verifies everything is ready to go for patching.

Using IE11

If you intend to use Internet Explorer 11 with ODEE 12.5, you will need to download this patch for Fusion Middleware and install it. Extract the downloaded ZIP to ORACLE_HOME\patch. Then in your command prompt:

cd %ORACLE_HOME%\patch\18277370\oui\18277370
opatch apply

At the end, you should see "OPatch succeeded", or a message indicating no patch was necessary.

To fix a problem building BPEL with ANT, you need to install this patch as well.
Download the zip, unzip it to ORACLE_HOME\patch, and in your command prompt:

cd %ORACLE_HOME%\patch\16443288
opatch apply

You may be prompted to shut down Oracle instances - you shouldn't have any running at this point, so press Y. At the end, you should see "OPatch succeeded". Now we're ready to start the ODEE 12.5 installation. To obtain the software, follow these steps:

- Browse to the Oracle Software Delivery Cloud (OSDC) and sign in with your Oracle account.
- In the OSDC search bar, type "documaker" and in the auto-suggest list, select Oracle Documaker Enterprise Edition. Then click the Platform dropdown, check Windows x64, and click Select. You should now see something populated in the download queue, so click Continue.
- In the release Download Queue screen, click the triangle to expand the release downloads for ODEE 12.5. We don't need all of this, so uncheck the box next to Release, which unchecks all releases. We need only Oracle Documaker Enterprise Edition - which should be 914.6MB. With that option checked, click Continue.
- Now you'll need to accept the license agreements - read and review, then check the "I have reviewed..." box, and click Continue.
- Click the V137972-01.zip link to download the installation package.

Once you have downloaded the software, unzip it. You should have a readme and an additional zip file. Review the readme, then unzip the ODEE12.5.00.29909W64.zip file.

Before we get started, I should mention something about nomenclature within this post. You'll see references to some items which I will define in this table. You might want to create environment variables for each of these on your own system, which greatly eases navigation. You can set these environment variables in a command window for one-time use:

set ORACLE_HOME=c:\oracle

Or you can right-click Computer, select Properties, click Advanced System Settings, click Environment Variables, and click New under System Variables.
Then you can add the name and value of the variable as shown below.

ORACLE_HOME  The location where the Oracle software is installed, typically c:\oracle.
MW_HOME      The location of the middleware software, usually c:\oracle\middleware.
ODEE_HOME    The location of the ODEE installation, usually c:\oracle\odee_1.
ODEE_MW      The location of the ODEE middleware domain, usually MW_HOME\user_projects\domains\idocumaker_domain.
SOA_HOME     The location where the SOA Suite domain is installed, usually MW_HOME\Oracle_SOA1.

Installation

Now it's time to perform the installation! Open the Disk1 folder extracted from the zip, and click setup.exe. Let's follow each of the steps:

- Welcome screen: Click Next.
- Specify Installation Location: I prefer to use c:\oracle\odee_1; from here on out I'll call this ODEE_HOME. Note! There is a 44-character limit on this path - I have no idea why, so just keep it simple. Click Next.
- Specify Administrator Group and User: It's fine to accept the defaults here; you'll need to enter and confirm a password. Write it down and click Next. Note! Because we're installing in WebLogic, the installer will create the security realm with the group and user we define here. If this were a WebSphere install, you would need to input a valid group and user that already exists in the LDAP repository for WebSphere.
- Database Server Details: Oracle 11g should already be selected for you. Input the host, port, and database name. The name should be either the SID or the Service Name you used back when you created the database (I used IDMAKER and idmaker.us.oracle.com, respectively); select the correct type from the dropdown. For the purposes of the sandbox we are not using Advanced Compression, so uncheck this box and click Next.
- Administration Schema Details: This is where we define the schema that will hold the administrative/system tables for ODEE.
Accept the defaults, but you will want to provide a new password and confirm it. Leave the System ID and name as-is.
- Assembly Line Schema Details: This is where we define the schema that will hold the assembly line tables for ODEE. Accept the defaults, but you will want to provide a new password and confirm it. Leave the Assembly Line ID as-is - or if you want to use a fancy name, go ahead! Click Next.
- Application Server Details: Select WebLogic Server 10.3.6 from the dropdown. Set your username to weblogic, and set the password. This is the username you'll use to log in to the WebLogic domain and administer it. Click Next.
- JMS Details: Part of the deployment includes a managed server that runs JMS servers. Here you specify the connection details that will be used. Accept the defaults and enter a password. Click Next.
- Hot Folder Path: Accept the default here and click Next.
- SMTP Email Server Details: If you have an SMTP server you want to use to send emails from Documaker, enter the connection criteria here, then click Next. Note! It's quite likely that your IT enterprise won't allow just any service to send email, so for the purposes of the sandbox you might want to use a developer's SMTP server. I use SMTP4Dev - it's lightweight (under 1MB!) and free. If you decide to run it on your sandbox, it listens on port 25 by default and does not require a user or password.
- Optional UCM Details: UCM is out of scope for the sandbox, so click Next.
- Optional UMS Details: UMS is out of scope for the sandbox, so click Next.
- Documaker Interactive Workflow: Accept the defaults here, and click Next.
- Installation Summary: Review your details here. You might want to save the response file in case you have to re-run the installation; you can do that here. Click Install to get things going!

The installer will lay out the filesystem for ODEE and stage everything for deployment based on the configuration options selected in the installer.
Once this is done, we'll proceed to deploy the Data and Presentation tiers (the Processing tier deploys automatically during installation, but won't be operational until the other two tiers are deployed).

Data Tier

Now we'll need to deploy the administrative and assembly line schemas to our database. At some point perhaps Documaker will start using the RCU to deploy its schemas, which will relieve you of some hassle, but for the moment we'll need to do it "by hand" - so roll up your sleeves and open a command prompt!

- Navigate to the ODEE_HOME\documaker\database folder in Windows Explorer. Shift-right-click the oracle11g folder and select "Open command window here".
- In the command window that opens, type SET NLS_LANG=AL32UTF8 and press Enter. Note! This step is optional; execute it if you're going to add languages to the ODEE installation. If you're using English (the default) you don't need this step, but it won't hurt!
- In the command window, type sqlplus / as sysdba and press Enter. If you're prompted for the password you established for the database back in the prerequisites (you remember it, right?), key it in and press Enter. You should see:

C:\Oracle\odee_1\documaker\database\oracle11g>sqlplus / as sysdba
SQL*Plus: Release Production on Fri Jun 10 16:57:18 2016
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>

- Next, enter @dmkr_admin.sql and press Enter. When the script is finished you should see:

No errors.
Commit complete.
SQL>

- In the command window, type @dmkr_asline.sql and press Enter. At the end of processing, you should see:

Commit complete.
SQL>

NOTE! In the previous two steps, it is possible that you might see an error message at the beginning of each script:

SQL> @dmkr_admin.sql
'{system.folder.empty}ALTER' is not recognized as an internal or external command, operable program or batch file.
and

SQL> @dmkr_asline.sql
'{assemblyLine.folder.empty}ALTER' is not recognized as an internal or external command, operable program or batch file.

You can safely ignore these errors - they have no effect on the installation and deployment.

- In the command window, type @dmkr_admin_user_examples.sql and press Enter - this installs the demonstration users and groups. If you are using these instructions to assist in your actual system deployment, you can skip this step. At the end of processing, you should see:

Commit complete.
SQL>

Note! If you wish to install additional languages, you may do so now using SQL*Plus. Follow the same steps as above:
- Execute @dmkr_admin_XX.sql, where XX is the language code for the language you wish to install.
- Execute @dmkr_asline_XX.sql, where XX is the language code for the language you wish to install.
Note! The language code for Simplified Chinese is incorrectly listed in the documentation as zh. It is actually zh_CN, e.g. dmkr_admin_zh_CN.sql.

- Enter exit in the SQL*Plus window and press Enter to end the session.

Library Deployment

Now that the schemas have been created, you can deploy the reference implementation Master Resource Library (MRL). This will complete one of the installation steps as well as verify the database connection outside of SQL*Plus.

- In Windows Explorer, navigate to ODEE_HOME\documaker\mstrres\dmres.
- Double-click deploysamplemrl.bat. You should see a lot of information flowing across the screen, like this:

...
--- LBYSync Copyright (C) 1998, 2016, Oracle and/or its affiliates. All rights reserved.
--- Synchronize Documaker libraries
Will attempt to Sync all Revisions of all Versions.
Inserting default LBYSync criteria ;*;*;*;*;*;*;*;*;*;
Promoted Resource, Name <CORRESPONDENCE>, Type<BDF>, Ver<00001>, Rev<00001>
...
Synchronization performed.
The following number of objects were added to the target library.
SOURCE LIBRARY: deflib/master.lby
TARGET LIBRARY: DMRES
BDFs : 1
GRPs : 25
FORs : 95
FAPs : 408
DDTs : 0
LOGOs: 46
DALs : 24
XDDs : 2
PARs : 26
PSLs : 7
STYs : 3
TLKs : 1
TPLs : 4
Total: 642

You should see a total of 642 resources promoted. (In case you're wondering, one resource will not promote and that is expected, so ignore the "Did not promote older resource, Name<TIMESTAMP>" message.) Once this completes, you have finished deploying the Data Tier.

Presentation Tier

- In Windows Explorer, navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.
- Open set_middleware_env.cmd in Notepad, and locate the following:

SET MW_DRIVE=E:
SET MW_HOME=%MW_DRIVE%\Oracle\Middleware

Update the values of MW_DRIVE and MW_HOME to match your environment - this is where you installed WebLogic. The default is C:\Oracle\Middleware, so update accordingly. Save the file and exit Notepad.
- Open weblogic_installation.properties in Notepad, and locate the following:

# The dirWeblogicHome directory is the middleware home where WebLogic
# is installed on the WebLogic server machine.
# Unix Example: dirWeblogicHome=/home/oracle/middleware
dirWeblogicHome=E:\\oracle\\middleware

Update the value of dirWeblogicHome to match the MW_HOME value above, noting that this file uses Java syntax for backslashes (\\ instead of \). Save the file but don't exit yet!
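Quick aside: the doubled-backslash (Java properties) syntax trips people up, so here's a tiny plain-Python illustration of the mapping - the path value below is just an example, not necessarily your MW_HOME:

```python
# A Windows path as you'd type it at the command prompt (example value):
mw_home = r"C:\Oracle\Middleware"

# Java .properties files treat the backslash as an escape character,
# so every backslash must be doubled when written into the file:
escaped = mw_home.replace("\\", "\\\\")

print("dirWeblogicHome=" + escaped)  # dirWeblogicHome=C:\\Oracle\\Middleware
```

Whatever your actual MW_HOME is, apply the same doubling when you set dirWeblogicHome.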
While still in weblogic_installation.properties, locate the following:

###############################################################################
# Update and verify these credentials for JDBC, DMKR_ADMIN DB user,
# DMKR_ASLINE DB user, JMS user (blank if not secured), Documaker Demo user
# and WebLogic domain admin account user
###############################################################################
# Administration DB users credential
jdbcAdminPassword='<SECURE VALUE>'
# AssembleLine DB users credential
jdbcAslinePassword='<SECURE VALUE>'
# Secured JMS users credential (set to blank of not secured)
jmsCredential='<SECURE VALUE>'
# Documaker Enterprise Admin user users credential (set to blank of not secured)
adminPasswd='<SECURE VALUE>'
# WebLogic domain admin account credential
weblogicPassword='<SECURE VALUE>'

You'll need to replace each of these occurrences of '<SECURE VALUE>' (yes, including the quotes) with the password you specified during the installation phase. This is done for security reasons. You did write all these down, didn't you? Hint: remember when I said "use the same password for everything" back in Getting Started? Now you know why! Save the file and exit.
- Double-click the wls_create_domain.cmd file and watch it do its thing. Note: if you see an error like the following, you can safely ignore it:

Failed to get environment, environ will be empty: (0, 'Failed to execute command ([\'sh\', \'-c\', \'env\']): java.io.IOException: Cannot run program "sh": CreateProcess error=2, The system cannot find the file specified')

- Double-click the wls_add_correspondence.cmd file and watch it do its thing.
- Now we can start up the AdminServer component - navigate to MW_HOME\user_projects\domains\idocumaker_domain\bin and execute startWebLogic.cmd. This opens a new command window; once you see

Server started in RUNNING mode

you are ready to proceed.
To create the demonstration user accounts in WebLogic, return to the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory and execute the create_users_groups.cmd file. Then, to install the demonstration users, execute create_users_groups_correspondence_example.cmd in the same directory. (FYI - if you're using these instructions for building your deployment system and not just for demonstration, you can skip this script.)

Note! If you want to change the passwords for any of these demo users, you can edit the documaker.py script in this directory. By default, each demo user's password is the same as the username (e.g. Alan Abrams -> Alan Abrams). Locate the following line in the documaker.py file:

atnr.createUser('Alan Abrams',adminPasswd,'Alan Abrams');

Replace adminPasswd with a password of your choosing, remembering to enclose it in single quotes.

Next we need to link the users and groups created via the scripts to the entities configured in the database. This is a strictly one-time, installation-only step: when new users are added to the security realm later, they will automatically be linked/created in the entities database. Execute this step by opening a browser and navigating to http://localhost:7001/jpsquery - of course, if you aren't running the browser on the server, replace localhost with your server name or IP address. You should see some output indicating the users/groups added. If you access the URL again, it will show the users/groups found and/or added.

Starting Services

At this point we have deployed all the basic services and applications needed to start up ODEE. You should still have the AdminServer running, but just in case you don't, we'll revisit those steps.

Documaker Factory

- If not already started, start the WebLogic AdminServer by running MW_HOME\user_projects\domains\idocumaker_domain\startWebLogic.cmd. Once you see "Server started in RUNNING mode" you may proceed.
(If the server is already running, you will of course see error messages, which you can ignore.)
- Start the JMS server by running MW_HOME\user_projects\domains\idocumaker_domain\bin\startManagedWebLogic.cmd jms_server. You will be prompted for credentials to start the managed server, which you entered in Installation Step 7 above. Once you see "Server started in RUNNING mode" you may proceed.
- Start the Document Factory services by opening the Windows Services manager. You can do this using Start->Run->services.msc, or under Administrative Tools->Services. In the Services manager, find the service whose name starts with "ODDF" and start it. It may take a few moments for all of the services to start up initially. You can view them in Windows Task Manager on the Details tab - sort by Name and you should see docfactory_* processes listed. You can also review the startup log in ODEE_HOME\documaker\docfactory\logs\startup.log. This log file shows each worker being deployed and will note any problems. If all is successful you should see the following at the end of the log file:

2016-06-13 12:32:04,315- INFO-[Supervisor-oracle.documaker.processmonitor.deployment.HotDeployer]: 0 Error(s).

If there are any errors, you can review further up the log file, and then follow up in ODEE_HOME\documaker\docfactory\temp for the specific worker.

Note! It is possible that the installer could fail to create the Windows service for one reason or another. You can manually deploy the Windows service by navigating to ODEE_HOME\documaker\docfactory\bin and running docfactory_supervisor_install.cmd. Switch to the Services manager and hit F5 to refresh. If you still don't see the service, open the docfactory_supervisor.properties file and change the service.debugging=0 setting to service.debugging=1. Run docfactory_supervisor_install.cmd again and check the same directory for a log file called docfactory_supervisor-service.
Inspect it for any problems, which you can remediate by modifying settings in the aforementioned properties file.

Documaker Administrator and Dashboard

To start the Documaker Administrator and Dashboard web applications, open a command window, navigate to MW_HOME\user_projects\domains\idocumaker_domain\bin, and execute startManagedWebLogic.cmd dmkr_server. You will be prompted for credentials to start the managed server, which you entered in Installation Step 7 above. Once you see "Server started in RUNNING mode" you may proceed.

Note! You can edit the startManagedWebLogic.cmd file to include the user credentials to avoid being prompted at every start - but note that it's insecure to do so. Open the file in Notepad, locate set WLS_USER= and set WLS_PW=, add the credentials, then save the file.

Documaker Interactive - Correspondence

To start the Documaker Interactive web application, open a command window, navigate to MW_HOME\user_projects\domains\idocumaker_domain\bin, and execute startManagedWebLogic.cmd idm_server. As before, you will be prompted for the managed server credentials, and you can embed them in the script with the same caveat as above. Once you see "Server started in RUNNING mode" you may proceed.

There is an additional step you'll need to perform on any client computers that will access Documaker Interactive: installing the WIPedit client. The installer is located at ODEE_HOME\documaker\j2ee\ODWE.exe. Copy this installer to any client computer and run it, accepting the defaults.

Note! Documaker Interactive can provide a direct download link in case a user does not have the WIPedit plugin installed.
It requires a little additional effort, which I will cover in another post.

Optional Step: SOA Deployment

If you plan on testing BPEL rules for approving transactions processed in Documaker Interactive, read on. This is an optional step and is not required for Documaker Interactive or Document Factory to work. The first part of this procedure is to extend the SOA Suite components into the WebLogic domain for ODEE.

Extend Domain

- Stop any running Document Factory services. Stop any running managed servers, then stop the AdminServer.
- Open a command window and run MW_HOME\wlserver_10.3\common\bin\config.cmd; the configuration wizard will open.
- Select Extend an existing WebLogic domain and click Next.
- In the selection tree, click idocumaker_domain, which is in MW_HOME\user_projects\domains. Click Next.
- Check the box for Oracle SOA Suite; this may select additional items too. Click Next.
- Click Next on the Configure JDBC Data Sources screen.
- Click Select All and then click Test Connections. For each connection you should see the following (in addition to other information) in the Connection Result Log box: CFGFWK-20850: Test Successful! Click Next.
- On the Configure JDBC Component Schema screen, check the boxes next to each component schema. Update the following items, then click Next:

DBMS/Service     idmaker.us.oracle.com (or the service name you used for your Oracle database)
Schema Owner     (Do not change)
Schema Password  Hopefully you set all of the schema passwords to the same value during your SOA Suite installation. Input that value here. If you gave each schema a different password, put nothing here.
Host Name        localhost (or, if using a remote database, the hostname of the database server)
Port             1521 (the default; otherwise use the port for your database)

Note! If you used separate schema passwords for each schema, you will need to check each component individually, set the password, then uncheck it, until all passwords are set.
- For each connection you should see the following (in addition to other information) in the Connection Result Log box: CFGFWK-20850: Test Successful! If everything was successful, click Next - otherwise click Previous to correct errors and retry.
- On the Select Optional Configuration screen, just click Next.
- On the Configuration Summary screen, verify your choices and click Extend. Once the operation is complete, click Done.

Note! If you followed the optional configuration to add the WebLogic credentials to the startup script, you'll need to redo this, as the domain extension recreates the startup scripts.

Build Rules Composite

Next, we'll compile and deploy the rules composite into the SOA framework.

- Open a command window and execute ODEE_HOME\documaker\j2ee\weblogic\oracle11g\bpel\antbuild.cmd. Note! It's possible that you may receive an error during the build - see the steps below under Resolving Antbuild Failures.
- Start the AdminServer.
- Open the WebLogic Console and log in (http://localhost:7001/console).
- Navigate in the left-hand domain structure to Services->Data Sources. For both dmkr_admin and dmkr_asline, do the following: click the data source name, click the Targets tab, check the soa_server1 box, and click Save. Return to Services->Data Sources to repeat for the second data source.
- Open a command window, then navigate to and execute ODEE_MW\bin\startManagedWebLogic.cmd soa_server1.
- Once the server is running, navigate to and execute ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts\deploy_soa.cmd. You should see the following at the end of the process:

---->Deploying composite success.
Press any key to continue . . .

- Now you can start up the other managed servers (jms_server, dmkr_server, and idm_server).
Resolving Antbuild Failures

During the antbuild phase of deploying the BPEL composite, you could see this error:

BUILD FAILED
C:\oracle\MIDDLE~1\Oracle_SOA1\soa\modules\oracle.soa.ext_11.1.1\build.xml:41: Problem: failed to create task or type if
Cause: The name is undefined.
Action: Check the spelling.
Action: Check that any custom tasks/types have been declared.
Action: Check that any presetdef/macrodef declarations have taken place.

If you run into this problem, you can work around it:

- Navigate to SOA_HOME\soa\modules\oracle.soa.ext_11.1.1 in Windows Explorer.
- Locate build.xml and copy it to build.xml.backup.
- Open build.xml in Notepad, and locate this line:

<if> <equals arg1="${Extension-Name}" arg2="oracle.soa.workflow.wc" />

and replace it with:

<!-- <if> <equals arg1="${Extension-Name}" arg2="oracle.soa.workflow.wc" />

- Locate this line:

<else> <jar destfile="${library.path}" update="yes" >

and modify it to:

<else> --> <jar destfile="${library.path}" update="yes" >

- Locate this line:

</else> </if>

and modify it to:

<!-- </else> </if> -->

- Save the file, and rerun the antbuild step.

Validation

To verify our system is fully operational, we'll exercise each of the components.

First, we'll submit a job to the Document Factory. To do this, navigate in Windows Explorer to ODEE_HOME\documaker\mstrres\dmres\input and locate the input file extrfile.xml. Copy this file and place it in ODEE_HOME\documaker\hotdirectory.

Next, we'll inspect the Documaker Dashboard to check the transaction status. Open a browser and navigate to http://localhost:10001/DocumakerDashboard. Log in with one of the demonstration users (e.g. Alan Abrams) and have a look around. You should be able to locate the transaction you just submitted. Note! If you are using a browser that is not located on the same server, replace localhost with the server name.

Test your ability to log in to Documaker Administrator by opening a browser and navigating to http://localhost:10001/DocumakerAdministrator.
You can log in with the administration account you created in Installation Step 3 above. (As before, if your browser is not on the same server, replace localhost with the server name.)

Test your ability to log in to Documaker Interactive by opening a browser and navigating to http://localhost:9001/DocumakerInteractive.

Test DWS (Documaker Web Services) by opening a browser and navigating to http://localhost:10001/DWSV0AL1/CompositionService?WSDL. You can also import this WSDL into a SOAP testing tool such as SoapUI or similar.

Uh Oh

Did something not work correctly? Here is a list of common issues and solutions on initial deployments.

I tried to go to the Dashboard/Administrator/Interactive, but my browser gave me a security warning and I couldn't continue. Some typical reasons:

- If you're using a newer browser, you'll need to enable a newer version of SSL security in WebLogic. Open the WebLogic Console (http://localhost:7001/console) and log in with the administrative credentials. Click Environment->Servers in the left-hand navigation. For each server where SSL is used (AdminServer, dmkr_server, and idm_server), click the server name, click the SSL tab, scroll to the bottom, expand Advanced settings, and check the JSSE Enabled box. You will likely have to restart each of the servers after applying these settings.
- The out-of-the-box deployment includes "demo" self-signed SSL certificates that will work for most cases on installation, but these are never meant for production use. You should replace these demonstration certificates with actual certificates signed by a valid Certification Authority (CA).
When the demo certificates are in use you might see errors like these:

- Newer browsers may issue an error message like this: "The server certificate included a public key that was too weak. Error code: SSL_ERROR_WEAK_SERVER_CERT_KEY". Some browsers have a default setting that rejects certificates with a key length of less than 1024 bits. To remedy this, you can either regenerate the certificate with a longer key, or replace it with a valid certificate (ensure that you use a 1024-bit key or better). To regenerate the key, do the following:

Open a command window and navigate to MW_HOME\wlserver_10.3\server\lib. Run java -version as a quick test to make sure that Java is in your path. You should see output like this:

C:\oracle\Middleware\wlserver_10.3\server\lib>java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

If instead you see a message like 'java' is not recognized as an internal or external command..., then add it to the PATH temporarily (replace the value with the path to the bin directory inside your JDK installation):

set path=c:\java\jdk1.7.0_79\bin;%path%

Next, take a backup of the existing keystore:

copy DemoIdentity.jks DemoIdentityBackup.jks

Use keytool to delete the existing DemoIdentity certificate:

keytool -delete -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -alias DemoIdentity

Use keytool to generate a new DemoIdentity certificate, which will use a 2048-bit encryption key - nice and secure!

keytool -genkey -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -alias DemoIdentity -keypass DemoIdentityPassPhrase -keyalg RSA -keysize 2048 -validity 3650

You'll be prompted for some common name ("CN") information, which you can fill out to your liking:

What is your first and last name?
  [Unknown]: Documaker
What is the name of your organizational unit?
  [Unknown]: Oracle Insurance
What is the name of your organization?
  [Unknown]: Oracle
What is the name of your City or Locality?
  [Unknown]: Atlanta
What is the name of your State or Province?
  [Unknown]: GA
What is the two-letter country code for this unit?
  [Unknown]: US
Is CN=Documaker, OU=Oracle Insurance, O=Oracle, L=Atlanta, ST=GA, C=US correct?
  [no]: yes

Finally, restart the AdminServer and any managed servers that are running.

- You may see various types of certificate errors when using the "demo" certificate. The browser will typically show a risk due to an unknown or invalid certificate. You may be able to proceed by acknowledging the risk, or by adding an exception for this certificate. This can also be remediated by replacing your SSL demo certificates with actual certificates issued by a CA.

What's Next?

I'll follow up this post with some additional information on how you can configure NodeManager, how to set up the WIPedit installer for automatic download, and how to consolidate the various managed servers to reduce the footprint on your sandbox machine. In the meantime, enjoy your new sandbox and post any comments below!



Using ODEE's Transactional Data

Today I'm going to demonstrate how you can query the Oracle Documaker Enterprise Edition ("ODEE") transactional data to get some statistics on your processing. All you will need to complete this exercise is a functional ODEE environment that has some transactions in it, and you might need to know a little bit about the content of those transactions. In addition, you'll need access to the database that houses the Assembly Line - read-only access is fine for this exercise. This will actually be a pretty simple exercise that doesn't require much knowledge of SQL syntax or XML, but a little expertise here would be helpful. Let's get started! Note: the examples herein assume you're using Oracle Database 12c. If you're using DB2 or MS SQL Server, you may have to tweak these examples.

First, let's understand the history of this transactional data in Documaker. As you might be aware, the Standard Edition of Documaker uses a proprietary record layout for the transactional data generated by Documaker during execution. The transactional data may be stored in two files: the POL file ("POLFILE"), which contains form set information, and the NA file ("NAFILE"), which contains section and variable field information. In some systems these two files are combined into a single file called the NAPOL file. The layout of the NAFILE is documented here. Side note: I've been here for 17 years and I still haven't gotten a good explanation of what the acronyms POL and NA mean!

With the advent of ODEE, two important design changes occurred. The first: all transactional data was moved from the filesystem into a relational database. This change was crucial to creating a scalable, robust architecture for an enterprise-class document automation solution - relying on intermediary or final output files being written to disk creates a bottleneck for systems that need to scale to accommodate large volumes.
The second change was more subtle - changing the internal NAPOL data format from a proprietary record-based format to a self-describing XML format. This means that the NAPOL data becomes immediately more useful. It is a rich data source about every transaction that can be mined, queried, and inspected for statistical analysis, spot checking, workflow control, and more. In many cases this data is consolidated into a single place - the document - which would otherwise be scattered across many disparate systems. I've uploaded a sample of the NAFILE in XML format here.

When using ODEE, the NAPOL data is stored in the TRNS table of the Assembly Line database, in the column named TRNNAPOLXML. This column is defined with the datatype XMLTYPE. In Oracle Database, this means that we can execute XPATH-style queries and obtain results from an XML document using SQL-type syntax. Note: my goal here is not to explain how this works internally, but I am giving you a little background to be helpful. In order to query this table and the resulting XML data, we're going to use Oracle's XMLTABLE function. To begin, open SQLPLUS (or your favorite SQL tool, such as TOAD or SQLDeveloper) and log in to your Assembly Line schema with an appropriate credential. Here's what this looks like using SQLPLUS on my demonstration system:

$ sqlplus

SQL*Plus: Release Production on Thu Jan 7 01:29:30 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: dmkr_asline
Enter password:
Last Successful login time: Thu Jan 07 2016 01:29:08 +05:30

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>

Next, you'll want to enter the query to execute. First, let's step through the significant query elements so you can modify this for your system before executing it. 
The first line instructs the database to return the count of the TRN_ID values which match our forthcoming criteria:

SELECT count(TRN_ID)

The next line tells the database the source table(s) for our query. In this case, the actual table is the TRNS table, which is part of the DMKR_ASLINE schema:

FROM DMKR_ASLINE.TRNS

In addition to the TRNS table, we have defined a second table using the XMLTABLE() function. The Oracle database will create this virtual table using the XPATH query '//*:Form', which instructs the XML parser to obtain every <Form> element and its child nodes; the source XML document comes from the column called TRNNAPOLXML (from the TRNS table):

XMLTABLE ('//*:Form' PASSING TRNNAPOLXML

The virtual table will contain a single column called 'v_name' and its datatype is VARCHAR2 with a length of 100. The data value for this column will be sourced by applying the XPATH '@name' to the nodes obtained from //*:Form:

COLUMNS v_name varchar2(100) PATH '@name')

Finally, the WHERE clause will filter into our count only those rows from the virtual table where the column v_name has a value of 'UL APPLICATION REJECTION NOTICE':

WHERE v_name = 'UL APPLICATION REJECTION NOTICE';

Below is a snapshot of the XML within the TRNNAPOLXML column. I've placed arrows pointing to the <Form> nodes in this XML document. You can see the attribute "name" here as well. The parameters for the XMLTABLE function instruct the database to create an XML snippet from the TRNNAPOLXML column by applying the XPATH //*:Form. This means that the XML used to create the XMLTABLE will be restricted to only the <Form> elements and their children. The XMLTABLE parameters tell the database that the virtual table will contain one column, v_name, which comes from the XPATH @name, which is parsed from the XML snippet. 
The end result for the above example is an XMLTABLE that looks like this:

v_name
UL APPLICATION REJECTION NOTICE
TIFFINCLUDE

In practice, the virtual table will be much larger, as it could encompass data from all the rows in the TRNS table - it could be huge! We'll discuss this implication later. Recalling that our WHERE clause restricts our count to include only the form name 'UL APPLICATION REJECTION NOTICE', we note that the TIFFINCLUDE form will not be included in the overall count.

So, at this point you should have an understanding of how the query works. Let's enter the query into SQLPLUS (note: for readability the lines are wrapped here, however you will enter the following on a single line):

SQL> SELECT count(TRN_ID) FROM DMKR_ASLINE.TRNS,
  XMLTABLE ('//*:Form' PASSING TRNNAPOLXML
  COLUMNS v_name varchar2(100) PATH '@name')
  WHERE v_name = 'UL APPLICATION REJECTION NOTICE';

After pressing <ENTER>, the query will be executed and you'll get the results. If you did everything correctly, you'll see some output that looks similar to this:

COUNT(TRN_ID)
-------------
           45

Keep in mind, of course, that your results may vary based on two factors: 1) if you have no transactions, then you obviously won't get any results; and 2) you may need to change the name of the form you're querying if you haven't been using my demo system. What this tells me is that of all the transactions in my system, the form UL APPLICATION REJECTION NOTICE was used 45 times. What this information doesn't show is the timeframe in which this form was used, or over how many transactions this form was used. Those two additional pieces of information will give us insight into how often this particular form is used. Recall that the only WHERE clause we used was based on the form name - we didn't place any restrictions on the timeframe of the transaction. 
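If you'd like to prototype the extraction step outside the database, the namespace-agnostic <Form> lookup that the '//*:Form' XPath performs can be sketched in a few lines of Python. The XML below is a made-up, heavily simplified stand-in for real NAPOL data (real documents are far larger and use their own namespace):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for TRNNAPOLXML content.
napol_xml = """<FormSet xmlns="urn:example-napol">
  <Form name="UL APPLICATION REJECTION NOTICE"/>
  <Form name="TIFFINCLUDE"/>
</FormSet>"""

def form_names(xml_text):
    """Return the name attribute of every <Form> element, ignoring the
    namespace - the same effect as the wildcard XPath '//*:Form'."""
    root = ET.fromstring(xml_text)
    return [el.get("name") for el in root.iter()
            if el.tag.split("}")[-1] == "Form"]

print(form_names(napol_xml))
# ['UL APPLICATION REJECTION NOTICE', 'TIFFINCLUDE']
```

The resulting list plays the same role as the v_name column of the virtual table above.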
We can augment our query with additional clauses that restrict it to only those transactions that are completed and were generated within a specific window. Note again that for readability the lines are wrapped, but you will enter everything on a single line:

SQL> SELECT count(TRN_ID) FROM DMKR_ASLINE.TRNS,
  XMLTABLE ('//*:Form' PASSING TRNNAPOLXML
  COLUMNS v_name varchar2(100) PATH '@name')
  WHERE v_name = 'UL APPLICATION REJECTION NOTICE'
  AND ENDTIME BETWEEN to_timestamp_tz('01-JAN-16', 'DD-MON-RR HH.MI.SSXFF AM')
  AND to_timestamp_tz('07-JAN-16', 'DD-MON-RR HH.MI.SSXFF AM')
  AND TRNSTATUS = 999;

COUNT(TRN_ID)
-------------
            4

What's different is that I added an AND conjunction with a filter on ENDTIME to include only those transactions that completed between 01-Jan-2016 and 07-Jan-2016, and then another AND conjunction with a final criterion of only completed transactions (TRNSTATUS = 999). After pressing <ENTER> we see the result of the count - 4! To gain some additional insight we might want to know how many total forms were generated during this time period as well, and we can obtain this by removing the v_name filter from the query:

SQL> SELECT count(TRN_ID) FROM DMKR_ASLINE.TRNS,
  XMLTABLE ('//*:Form' PASSING TRNNAPOLXML
  COLUMNS v_name varchar2(100) PATH '@name')
  WHERE ENDTIME BETWEEN to_timestamp_tz('01-JAN-16', 'DD-MON-RR HH.MI.SSXFF AM')
  AND to_timestamp_tz('07-JAN-16', 'DD-MON-RR HH.MI.SSXFF AM')
  AND TRNSTATUS = 999;

COUNT(TRN_ID)
-------------
            8

This query shows us the count of all <Form> nodes present in transactions that completed successfully between 01-Jan-2016 and 07-Jan-2016. That means that the UL Application Rejection Notice constitutes 50% of the forms generated during that time period! So now you might be wondering why this information is useful - the biggest benefit is knowing how a form is used in your business. 
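To see how the two counts combine into a usage share, here's a small Python sketch over made-up rows that mimic the (form name, end time, status) combination the joined query filters on. The column meanings and the 999 "completed" status come from the queries above; the row values themselves are purely illustrative:

```python
from datetime import date

# Illustrative rows mimicking (v_name, ENDTIME, TRNSTATUS); 999 marks
# a completed transaction, as in the queries above.
rows = [
    ("UL APPLICATION REJECTION NOTICE", date(2016, 1, 2), 999),
    ("UL APPLICATION REJECTION NOTICE", date(2016, 1, 5), 999),
    ("TIFFINCLUDE",                     date(2016, 1, 3), 999),
    ("TIFFINCLUDE",                     date(2016, 1, 6), 999),
    ("TIFFINCLUDE",                     date(2016, 2, 1), 999),  # outside window
    ("UL APPLICATION REJECTION NOTICE", date(2016, 1, 6), 100),  # not completed
]

def form_share(rows, form, start, end):
    """Fraction of completed forms within [start, end] matching the name."""
    done = [r for r in rows if start <= r[1] <= end and r[2] == 999]
    hits = [r for r in done if r[0] == form]
    return len(hits) / len(done)

share = form_share(rows, "UL APPLICATION REJECTION NOTICE",
                   date(2016, 1, 1), date(2016, 1, 7))
print(f"{share:.0%}")
# 50%
```

The two list comprehensions are the Python analogues of the second and first queries respectively: filter by window and status, then additionally by form name.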
You can customize this query to dig even further into the data model by looking at fonts, sections, rules, even actual data values. You can augment the query to consider only transactions that had to be processed manually, so you can understand which forms are being routed to interactive. You can interrogate this data to determine if some of your forms are generated in cycles. There are many ways you can view this information - the key is knowing how to access it! I hope this has been useful for you - feel free to post comments or questions!



Security with Documaker Enterprise

In this post I'll be addressing some common security configurations and practices with respect to Oracle Documaker Enterprise Edition implementation. Questions and comments are welcome!

Users and Groups

Documaker Enterprise Edition uses web application server security frameworks for authentication and authorization of users. The web application servers typically utilize frameworks that include support for external user and group repositories that can be accessed via industry-standard protocols, such as LDAP. The ODEE installation process includes the deployment of a user and group data store that works with the demonstration library.

A key component of the Documaker security model includes the definition of entities and ability sets. An ability set is a collection of application-specific functions with attributes that describe the level of user exposure. An entity is an element defined in a user/group repository – typically a group. Documaker Administrator is used to define ability sets and to correlate them to entities, which are then used by Documaker applications for controlling access and the user interface.

There must be at least one user and one group – Documaker and Documaker Administrators, respectively – in order for the Documaker applications to work properly. The names of these entities can be changed during installation or prior to startup of the system for the first time. When connecting Documaker to an enterprise user/group data store, ensure that there is a Documaker Administrators group that has at least one user. If the name of this group has been changed in the database script that creates entity entries, this change must be reflected in the user/group data store as well. Note that as part of the installation, it is required to run the JPSQUERY tool to link up the user/group defined in the database script and the entities created in the user/group data store. This step only needs to be executed once during installation. 
Upon installation, Documaker Enterprise Edition sets up a localized security realm within the web application server. The localized security realm is useful for several reasons: 1) it provides a way to verify the functionality of the installation; 2) it isolates the installation from issues connecting to external services; and 3) it allows a quicker installation for implementations that do not require external user/group data stores. If requirements dictate the need to use an external user/group data store, this configuration will need to happen within the web application server.

Configuring WebLogic for External User/Group Data Store

To configure WebLogic for external user/group data stores, you will need access to the Documaker domain within the WebLogic web console. Note that it is also possible to complete this configuration using WLST – see the online documentation to do this. WebLogic Server includes the following Authentication providers:

Oracle Internet Directory Authentication provider
Oracle Virtual Directory Authentication provider
iPlanet Authentication provider
Active Directory Authentication provider
Open LDAP Authentication provider
Novell Authentication provider
generic LDAP Authentication provider

Each LDAP Authentication provider stores user and group information in an external LDAP server. WebLogic Server does not support or certify any particular LDAP server; any LDAP v2 or v3 compliant LDAP server should work with WebLogic Server. 
The following LDAP directory servers have been tested:

Oracle Internet Directory
Oracle Virtual Directory
Sun iPlanet version 4.1.3
Active Directory shipped as part of Windows 2000
Open LDAP version 2.0.7
Novell NDS version 8.5.1

Note: if your configuration has only one configured Authentication provider for the security realm used by Documaker, then the user that is configured for starting WebLogic Server (the "boot user") must meet the following requirements:

Exist in the LDAP directory
Be a member of a group that has the Admin role

By default the Admin role is granted to the Administrators group, so you may create this group in the LDAP directory if it does not exist. If you wish to use a different group, include the WebLogic Server boot user in the group and grant the Admin role to the group.

Task 1: Create Provider Instance

1. Open the WebLogic Server console for the Documaker AdminServer and click on Security Realms.
2. Click on myrealm.
3. Click the Providers tab.
4. Click the New button.
5. Name the provider.
6. Select the provider type from the dropdown.

Task 2: Set Provider-Specific Attributes

Once the provider has been created, you can click on the new provider and configure its provider-specific attributes on the Provider-Specific tab. The attributes allow you to:

Enable communication between the LDAP server and the LDAP Authentication provider. For a more secure deployment, Oracle recommends using the SSL protocol to protect communications between the LDAP server and WebLogic Server. Enable SSL with the SSLEnabled attribute.
Configure options that control how the LDAP Authentication provider searches the LDAP directory.
Specify where in the LDAP directory structure users are located.
Specify where in the LDAP directory structure groups are located.
Define how members of a group are located.
Set the name of the global universal identifier (GUID) attribute defined in the LDAP server.
Set a timeout value for the connection to the LDAP server. 
The LDAPServerMBean.ConnectTimeout attribute for all LDAP Authentication providers has a default value of zero. This default setting can result in a slowdown in WebLogic Server execution if the LDAP server is unavailable. Oracle recommends that you set the LDAPServerMBean.ConnectTimeout attribute to a non-zero value (e.g. 60 seconds).

Configure performance options that control the cache for the LDAP server. Use the Configuration > Provider Specific and Performance pages for the provider in the Administration Console to configure the cache. See Improving the Performance of WebLogic and LDAP Authentication Providers.

Here are basic settings that will work with most LDAP providers (e.g. Active Directory or Oracle Internet Directory):

Log in to the WebLogic console.
Click on Security Realms, and select "myrealm".
Navigate to Providers > Authentication and click New. Name your provider something memorable, like "Andy's Authenticator", or perhaps something technical, like "SuperCoOID".
Select the type of the authenticator. For Active Directory, select "LDAP Authenticator". For Oracle Internet Directory, select "OracleInternetDirectoryAuthenticator".
Click OK.
Click on the authenticator you just created; you're now on the Common configuration tab.
Set the Control Flag to SUFFICIENT.
Click Save.
Click on the Provider Specific tab. You may need your LDAP repository administrator to help you with these values.
Set the Host and Port values.
Set the Principal and Credential values (this is the login information to the LDAP provider, e.g. Principal = "cn=admin"; Credential is the password).
Set the User Base DN. This identifies the user(s) that will be available to the application, e.g. "cn=Users, dc=mysubdomain, dc=oracle, dc=com". Ensure the "Use Retrieved User Name as Principal" checkbox is checked.
Set the Group Base DN. This identifies the group(s) that will be available to the application, e.g. 
"cn=DocumakerGroup, dc=mysubdomain, dc=oracle, dc=com".
Click Save.
Go back to the Authentication tab, and click Reorder. Move your new Authenticator to the top of the list. Save.
Restart your AdminServer (you will need to shut down your managed servers, and any services like Documaker Factory and Docupresentment as well).

Note: If the LDAP Authentication provider fails to connect to the LDAP server, or throws an exception, check the configuration of the LDAP Authentication provider to make sure it matches the corresponding settings in the LDAP server.

Task 3: Enable SSL for LDAP

1. Ensure that the SSLEnabled attribute for your LDAP Authentication provider is set (as instructed in Task 2).
2. Obtain the root certificate authority (CA) certificate for the LDAP server and create a trust keystore with it. You can do this with Java's keytool command – an example is shown below. Values in italics should be replaced with appropriate values for your organization:

keytool -import -keystore ./ldapTrustKS -trustcacerts -alias oidtrust -file rootca.pem -storepass TrustKeystorePwd -noprompt

3. Copy the keystore to a location accessible to WebLogic Server.
4. Start the WebLogic Server administration console and go to idocumaker_domain > Environment > Servers > AdminServer > Keystores.
5. If necessary, in the Keystores field, click Change to select the Custom Identity and Custom Trust configuration rules.
6. If the communication with the LDAP server uses 2-way SSL, configure the custom identity keystore, keystore type, and passphrase.
7. In Custom Trust Keystore, enter the path and file name of the trust keystore created in step 2.
8. In Custom Trust Keystore Type, enter jks.
9. In Custom Trust Keystore Passphrase, enter the password used when creating the keystore.
10. Reboot the WebLogic Server instance for changes to take effect. 
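As a quick sanity check before walking through the console, it can help to gather the provider-specific values in one place and confirm nothing is missing. This is a purely illustrative sketch - the key names below mirror the console fields discussed above, and neither the dictionary nor the helper is any real WebLogic API:

```python
# Field names mirror the console settings above; this helper and the dict
# keys are illustrative only, not a WebLogic API.
REQUIRED_SETTINGS = (
    "Host", "Port", "Principal", "Credential",
    "User Base DN", "Group Base DN",
)

def missing_settings(cfg):
    """Return required provider-specific settings that are absent or blank."""
    return [k for k in REQUIRED_SETTINGS if not cfg.get(k)]

cfg = {
    "Host": "oid.example.com",   # example hostname, not a real server
    "Port": 3060,
    "Principal": "cn=admin",
    "Credential": "secret",
    "User Base DN": "cn=Users, dc=mysubdomain, dc=oracle, dc=com",
}
print(missing_settings(cfg))
# ['Group Base DN']
```

Collecting the values up front also makes it easier to hand the checklist to your LDAP repository administrator for review.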
Tuning the WebLogic LDAP Provider for High Availability

It is a good practice to configure the LDAP provider to work with multiple LDAP servers and enable failover if one LDAP server is not available. Use the Host attribute (found in the Administration Console on the Configuration > Provider Specific page for the LDAP Authentication provider) to specify the names of the additional LDAP servers. Each host name may include a trailing space character and a port number. In addition, set the Parallel Connect Delay and Connection Timeout attributes for the LDAP Authentication provider:

Parallel Connect Delay – Specifies the number of seconds to delay when making concurrent attempts to connect to multiple servers. An attempt is made to connect to the first server in the list. The next entry in the list is tried only if the attempt to connect to the current host fails. This setting might cause your application to block for an unacceptably long time if a host is down. If the value is greater than 0, another connection setup thread is started after the specified number of delay seconds has passed. If the value is 0, connection attempts are serialized.

Connection Timeout – Specifies the maximum number of seconds to wait for the connection to the LDAP server to be established. If set to 0, there is no maximum time limit and WebLogic Server waits until the TCP/IP layer times out to return a connection failure. Set this to a value over 60 seconds, depending upon the configuration of TCP/IP.

Hardening

Hardening is the act of applying security to each component of the infrastructure, including:

Web servers
Application servers
Identity and Access Management solutions
Database systems
Operating systems

The hardening process described in this document is intended to serve as a reminder to implement hardening for each of these elements and is not a comprehensive hardening plan. 
The points contained herein will be specific to the applications and components commonly used in a Documaker implementation – specifically application servers and databases. Use enterprise standards and industry best practices – a typical practice is to begin with everything locked down and then open up ports and access rights as necessary.

WebLogic Server

Oracle WebLogic Server uses a more specific type of hardening known as lockdown, which refers to securing the subsystems and applications that run on a server instance. In contrast, hardening is more general and involves doing a security survey to determine the threat model that may impact your site, and identifying all aspects of your environment (such as components in the Web tier) that could be insecure. The following aspects of WebLogic Server should be considered for lockdown:

SSL-enabling components and component routes. Documaker web applications install with SSL enabled. LDAP Authentication providers should be configured for SSL.
Configure two-way SSL. One-way SSL is a configuration where clients request a server certificate and the server accepts all connections. Two-way SSL configurations require the client and the server to exchange certificates, thereby providing an additional layer of trust by ensuring that non-trusted clients cannot invoke services.
SSL-enabling web services. Documaker Web Services install with SSL disabled and should be enabled.
Managing ports and other features of the site, such as:
default deployed application – remove any non-essential default apps such as the welcome page
demonstration/samples – remove demoApp, demo keystores, demo trust, and demo SSL certificate
change default ports for common services, e.g. the admin port – Documaker services ship with standard ports; however, these are not common and could remain as-is. The base WebLogic components (e.g. console) are configured with standard ports and should be changed from the default (7001). 
Password management.
Roles and Policies for access – role- and policy-based security should be configured for authorized access to:

web services
data sources
applications – note that Documaker applications feature separate access controls set in the Documaker Administrator (Entities and Ability Sets); however, this configuration can provide an additional layer of security if needed.

Open the WebLogic Server administration console to administer roles and policies. You can create roles and policies at a global level or from the context of an item you wish to secure.

1. In the left pane of the Administration Console, select Security Realms.
2. On the Summary of Security Realms page, select the name of the realm in which you want to create the role (for example, myrealm).
3. On the Settings page, select the Roles and Policies tab, then select the Roles subtab. The Roles page organizes all of the domain's resources and corresponding roles in a hierarchical tree control. Navigate to the resource you wish to secure, and select it. This example will illustrate creating a global role.
4. In the Roles table, in the Name column, expand the Global Roles node.
5. In the Name column, select the name of the Roles node.
6. In the Global Roles table, click New.
7. On the Create a New Role page, enter the name of the global role in the Name field. Note: Do not use blank spaces, commas, hyphens, or any characters in the following comma-separated list: \t, < >, #, |, &, ~, ?, ( ), { }. Security role names are case sensitive. All security role names are singular and the first letter is capitalized, according to convention. The proper syntax for a security role name is as defined for an Nmtoken in the Extensible Markup Language (XML) Recommendation.
8. If you have more than one role mapper configured for the realm, from the Provider Name list select the role mapper you want to use for this role. Role mapping is the process whereby principals (users or groups) are dynamically mapped to security roles at runtime. 
The role mapper provider is responsible for saving your role definition in its repository. See Configure Role Mapping providers.
9. Click OK to save your changes.
10. In the Global Roles table, select the role.
11. In the Role Conditions section, click Add Conditions.
12. On the Choose a Predicate page, in the Predicate List, select a condition. Oracle recommends that you use the Group condition whenever possible. This condition grants the security role to all members of the specified group (that is, multiple users). For a description of all conditions in the Predicate List, see Security Role Conditions.
13. The next steps depend on the condition that you chose:
14. If you selected Group or User, click Next, enter a user or group name in the argument field, and click Add. The names you add must match groups or users in the security realm active for this WebLogic domain.
15. If you selected a boolean predicate (Server is in development mode, Allow access to everyone, or Deny access to everyone), there are no arguments to enter. Click Finish and go to step 19.
16. If you selected a context predicate, such as Context element's name equals a numeric constant, click Next and enter the context name and an appropriate value. It is your responsibility to ensure that the context name and/or value exists at runtime.
17. If you selected a time-constrained predicate, such as Access occurs between specified hours, click Next and provide values for the Edit Arguments fields.
18. Click Finish.
19. (Optional) Create additional role conditions.
20. (Optional) The WebLogic Security Service evaluates conditions in the order they appear in the list. To change the order, select the check box next to a condition and click the Move Up or Move Down button.
21. (Optional) Use other buttons in the Role Conditions section to specify relationships between the conditions:
22. Select And/Or between expressions to switch the and/or statements.
23. Click Combine or Uncombine to merge or unmerge selected expressions. 
See Combine Conditions.
24. Click Negate to make a condition negative; for example, NOT Group Operators excludes the Operators group from the role.
25. Click Save.

Web Service Security

The web application servers that implement the Web Service-Security (WS-S) standards secure Documaker Web Services (DWS). Both WebLogic and WebSphere provide standard WS-S implementations that allow for the definition of security policies, including access and authorization for web service consumption. Ensure DWS is configured with appropriate policies and roles to prevent unauthorized consumption of web services. It is important to note that the encryption used by SSL is "all or nothing": either the entire SOAP message is encrypted or it is not encrypted at all. There is no way to specify that only selected parts of the SOAP message be encrypted.

Securing Web Services with WebLogic

The best practice for securing web services for Documaker in environments requiring higher levels of security is to implement the following measures with WebLogic Server. Note that the last item, access control security, is only required if corporate security policy dictates that access to web services should be restricted.

Message-level security – Data in a SOAP message is digitally signed or encrypted. May also include identity tokens for authentication. This security level includes all the security benefits of SSL, but with additional flexibility and features. Message-level security is end-to-end, which means that a SOAP message is secure even when the transmission involves one or more intermediaries. The SOAP message itself is digitally signed and encrypted, rather than just the connection. And finally, you can specify that only individual parts or elements of the message be signed, encrypted, or required.

Transport-level security – SSL is used to secure the connection between a client application and WebLogic Server with Secure Sockets Layer (SSL). 
SSL provides secure connections by allowing two applications connecting over a network to authenticate the other's identity and by encrypting the data exchanged between the applications. Authentication allows a server, and optionally a client, to verify the identity of the application on the other end of a network connection. A client certificate (two-way SSL) can be used to authenticate the user. Encryption makes data transmitted over the network intelligible only to the intended recipient. Transport-level security includes HTTP BASIC authentication as well as SSL. Transport-level security, however, secures only the connection itself. This means that if there is an intermediary between the client and WebLogic Server, such as a router or message queue, the intermediary gets the SOAP message in plain text. When the intermediary sends the message to a second receiver, the second receiver does not know who the original sender was. Additionally, the encryption used by SSL is "all or nothing": either the entire SOAP message is encrypted or it is not encrypted at all. There is no way to specify that only selected parts of the SOAP message be encrypted.

Access control security – Specifies which roles are allowed to access Web services. This answers the question "who can do what?" First you specify the security roles that are allowed to access a Web service; a security role is a privilege granted to users or groups based on specific conditions. Then, when a client application attempts to invoke a Web service operation, the client authenticates itself to WebLogic Server, and if the client has the authorization, it is allowed to continue with the invocation. Access control security secures only WebLogic Server resources. That is, if you configure only access control security, the connection between the client application and WebLogic Server is not secure and the SOAP message is in plain text. 
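The end-to-end property that distinguishes message-level from transport-level security can be illustrated conceptually in plain Python with a shared-secret HMAC (this is an analogy only, not WS-Security or WebLogic code): because the tag travels with the message body itself, any number of untrusted intermediaries can forward it, and the final receiver can still verify integrity and origin.

```python
import hmac
import hashlib

def sign(body: bytes, key: bytes) -> str:
    """Attach an integrity/authentication tag to the message body itself."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(body: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(body, key), tag)

key = b"shared-secret"                 # illustrative shared key
body = b"<soap:Body>...</soap:Body>"   # illustrative payload
tag = sign(body, key)

# The tag rides along with the message through any number of hops;
# tampering in transit is detectable by the receiver.
print(verify(body, key, tag))          # True
print(verify(b"tampered", key, tag))   # False
```

With transport-level SSL alone, by contrast, protection ends at each hop - which is exactly the intermediary problem described above.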
Implementing Message-level Security

Message-level security specifies whether the SOAP messages between a client application and the Web service invoked by the client should be digitally signed or encrypted, or both. It also can specify a shared security context between the Web service and client in the event that they exchange multiple SOAP messages. You can use message-level security to assure:

Confidentiality, by encrypting message parts
Integrity, by digital signatures
Authentication, by requiring username, X.509, or SAML tokens

Supported use cases for this level of security:

Use X.509 certificates to sign and encrypt a SOAP message, starting from the client application that invokes the message-secured Web service, to the WebLogic Server instance that is hosting the Web service, and back to the client application.
Specify the SOAP message targets that are signed, encrypted, or required: the body, specific SOAP headers, or specific elements.
Include a token (username, SAML, or X.509) in the SOAP message for authentication.
Specify that a Web service and its client (either another Web service or a standalone application) establish and share a security context when exchanging multiple messages using WS-SecureConversation (WSSC).
Derive keys for each key usage in a secure context, once the context has been established and is being shared between a Web service and its client. This means that a particular SOAP message uses two derived keys, one for signing and another for encrypting, and each SOAP message uses a different pair of derived keys from other SOAP messages. Because each SOAP message uses its own pair of derived keys, the message exchange between the client and Web service is extremely secure.

A Web service can have zero or more WS-Policy files associated with it. WS-Policy files follow the guidelines of the WS-Policy specification. 
WebLogic Server uses WS-Policy files to specify the details of the message-level security (digital signatures and encryption) and reliable messaging capabilities of a Web service. You can attach a WS-Policy file to a Web service endpoint, which means that the policy assertions apply to all the operations of a Web service endpoint. You can also attach a WS-Policy file to an operation, which means that the policy assertions apply only to the specific operation.

In addition, you can attach a WS-Policy file to the inbound or outbound SOAP message, or both. For example, if a WS-Policy file that specifies encryption for the body of a SOAP message is attached to just the inbound message of a particular operation, only the SOAP request needs to be encrypted. After you have attached a WS-Policy file to a Web service endpoint or operation, the assistant updates the application's deployment plan. If the application does not currently have a configured deployment plan, the assistant creates one for you in the location you specify.

Types of Policies

You can attach two types of policies to WebLogic Web Services: Oracle Web Services Manager policy and WebLogic Web Service policy.

Pre-packaged WebLogic Web Service Policies

WebLogic Server includes pre-packaged WS-Policy files that you can use for configuring message-level security and reliable messaging. These files are static and you cannot change them. Predefined policies are available in the following categories:

• Reliable Messaging: Set of WS-Policy files that enable you to configure Web services reliable messaging.
• SOAP Message Transmission Optimization Mechanism (MTOM): Used to specify that the Web service supports MTOM to transport binary data. 
MTOM describes a method for optimizing the transmission of XML data of type xs:base64Binary using MIME attachments over HTTP to carry that data, while at the same time allowing both the sender and the receiver direct access to the XML data without having to be aware that any MIME artifacts were used to marshal the xs:base64Binary data.
· Security: two sets of pre-packaged security policy files are available for configuring message-level security. One set of security policy files conforms to the OASIS WS-SecurityPolicy 1.2 specification; these are described in Using WS-SecurityPolicy 1.2 Policy Files in Securing Web Services for Oracle WebLogic Server. The other set conforms to a proprietary Oracle Web services security policy schema and is described in Proprietary Web Services Security Policy Files (JAX-RPC Only) in Securing Web Services for Oracle WebLogic Server. You can use security policy files from either set, but the two sets are not mutually compatible; you cannot define both types of policy file in the same Web service.

Pre-packaged Oracle WSM Policies

Oracle WSM includes a set of predefined policies in the following categories: security, WS-Addressing, MTOM, reliable messaging, and management.

Note that the Administration Console allows you to associate as many WS-Policy files as you want with a Web service and its operations, even if the policy assertions in the files contradict each other. It is up to you to ensure that multiple associated WS-Policy files work together. If any contradictions do exist, WebLogic Server will return a runtime error when a client application invokes the Web service.

To associate a WS-Policy file with a Web service: In the left pane of the Administration Console, select Deployments. In the right pane, navigate within the Deployments table until you find the Web service for which you want to configure a WS-Policy file.
Note: Web services are deployed as part of an Enterprise application, Web application, or EJB. To understand how Web services are displayed in the Administration Console, see View installed Web services.

In the Deployments table, click the name of the Web service. Select Configuration -> WS-Policy. The table lists the WS-Policy files that are currently associated with the Web service. The top level lists all the ports of the Web service. Click the + next to a Web service port to see its operations and associated WS-Policy files.

To associate a WS-Policy file with an entire Web service endpoint (port): Click the name of the Web service port. A page appears which includes two columns: one labeled Available Endpoint Policies, listing the names of the WS-Policy files that you can attach to a Web service endpoint, and one labeled Chosen Endpoint Policies, listing the WS-Policy files that are currently configured for this endpoint. Use the arrows to move WS-Policy files between the available and chosen columns. The WS-Policy files in the Chosen column are attached to the Web service endpoint. Click OK. If your Web service already has a deployment plan associated with it, the newly attached WS-Policy files are displayed in the Policies column in the table. If the J2EE module of which the Web service is a part does not currently have a deployment plan associated with it, the assistant asks you for the directory that should contain the deployment plan. Use the navigation tree to specify a directory, then click Finish.

To associate a WS-Policy file with a Web service operation: Click the name of the operation.
A page appears which includes two columns: one labeled Available Message Policies, listing the names of the WS-Policy files available to attach to the inbound (request) and outbound (response) SOAP messages of the operation, and one labeled Chosen Message Policies, listing the WS-Policy files currently attached to the inbound and outbound SOAP messages of the operation. Use the arrows to move WS-Policy files between the available and chosen columns. The WS-Policy files in the Chosen column are attached to both the inbound and outbound SOAP messages when this operation is invoked by a client application. Click Next.

A page appears which includes two columns: one labeled Available Inbound Message Policies, listing the names of the WS-Policy files available to attach to the inbound (request) SOAP message of the operation, and one labeled Chosen Inbound Message Policies, listing the WS-Policy files currently attached to the inbound SOAP message of the operation. Use the arrows to move WS-Policy files between the available and chosen columns. The WS-Policy files in the Chosen column are attached to the inbound (request) SOAP message when this operation is invoked by a client application. Click Next.

A page appears which includes two columns: one labeled Available Outbound Message Policies, listing the names of the WS-Policy files available to attach to the outbound (response) SOAP message of the operation, and one labeled Chosen Outbound Message Policies, listing the WS-Policy files currently attached to the outbound SOAP message of the operation. Use the arrows to move WS-Policy files between the available and chosen columns. The WS-Policy files in the Chosen column are attached to the outbound (response) SOAP message when this operation is invoked by a client application. Click Finish.
If your Web service already has a deployment plan associated with it, the attached WS-Policy files are displayed in the Policies column in the table. If the J2EE module of which the Web service is a part does not currently have a deployment plan associated with it, the assistant asks you for the directory that should contain the deployment plan. Use the navigation tree to specify a directory, then click Finish.

Implementing Transport-Level Security

You must configure SSL encryption for WebLogic Server before continuing. You may then apply policies using the steps outlined above for message-level security. This level of security has specific implications for clients wishing to consume web services; the client application that invokes the Web service must specify certain properties to indicate the SSL implementation in use. In particular:

To specify the Certicom SSL implementation, use the following properties:

    -Djava.protocol.handler.pkgs=weblogic.net
    -Dweblogic.security.SSL.trustedCAKeyStore=trustStore

where trustStore specifies the name of the client-side truststore that contains the list of trusted certificates (one of which should be the server's certificate). To disable host name verification, also specify the following property:

    -Dweblogic.security.SSL.ignoreHostnameVerification=true

To specify the Sun SSL implementation, use the following properties:
    -Djavax.net.ssl.trustStore=trustStore

where trustStore specifies the name of the client-side truststore that contains the list of trusted certificates (one of which should be the server's certificate). To disable host name verification, also specify the following property:

    -Dweblogic.wsee.client.ssl.stricthostchecking=false

Access Scenarios

The following access scenarios indicate typical use cases for the Documaker system and can be used to guide your security policy definition. These descriptions outline the default out-of-the-box configuration of the system.

Web Services for Document Generation

This use case is applicable for all Documaker Web Services (DWS) endpoints and operations. A client application that wants to consume a DWS operation establishes a connection to the appropriate endpoint. Additional security policies can be applied at the web application server level to further secure the web service, such as:

· Transport-level security, providing SSL encryption of the message.
· Message-level security, which encrypts the contents of the message.
· Access-level control, which uses policies to grant or deny access to the web service.

Interactive User Document Editing

This use case is applicable for all Documaker applications deployed within the web application server (e.g. Documaker Dashboard, Documaker Administrator, and Documaker Interactive).

· A user opens a web browser and accesses the URL for a Documaker application.
· A secure connection is established according to the deployment descriptor for the Documaker application: the client browser requests a server certificate, which is sent and validated.
· Additional security policies can be applied at the web application server level to further secure the web application (e.g. policy-based security).
· The Documaker application requests the user's login credentials via a login dialog[5].
· The Documaker application relies on the web application server to authenticate the credentials against the security store.
· Upon successful authentication, the Documaker application caches (or updates the cache with) the user's group membership.
· The Documaker application then compares the group membership to the entities defined within the configuration tables and determines the application-specific ability sets allocated to those entities.
· The Documaker application then renders the user interface according to the ability sets afforded to the user based on their group membership.

Conclusion

You made it! I know that's a lot of information to digest, and hopefully it was organized in such a way as to be helpful. If you have any followup questions or comments, unleash them below!

Footnotes

[1] More information is available here.
[2] Applications are configured for DD-only (deployment descriptor) security, which means that if you wish to add role- and/or policy-based security on top of this, you must modify the deployment descriptors for the affected application(s). Keep in mind this will affect upgrade capability, as you will have to re-apply deployment descriptor changes.
[3] http://www.w3.org/TR/REC-xml/#NT-Nmtoken
[4] http://docs.oracle.com/cd/E28280_01/apirefs.1111/e13952/taskhelp/security/ConfigureRoleMappingProviders.html
[5] It is possible to utilize single sign-on features here provided the web application server supports the desired SSO model.


Resource Management Practices with Documaker

In this post I aim to unleash a great deal of information on resource management practices, procedures, and configuration with respect to Oracle Documaker. Most of this information is interchangeable between Standard and Enterprise editions.

In Documaker 11.5 and newer, document template resources are stored in a set of tables collectively called the Master Resource Library, or MRL. The MRL houses the resources in a proprietary compressed storage medium that resides within the MRL tables stored in the database. In addition, the MRL also stores metadata about the resources. Resources are version-controlled and can be tagged with metadata elements to categorize and segment them according to implementation-specific criteria. Library Manager is the name for the management structure around the library resources, and is an integral component of the integrated development environment used to author and manage resources in the MRL (Documaker Studio).

Library Configuration Types and Table Names

There are two possible configurations of Library Manager resource libraries: database and file-based. Note that for ODEE, the database model is the only model allowed; however, file-based models can be used for development environments where ODEE is not used[1]. In the database model, the MRL tables are stored in an ODBC-compliant database. The tables generally feature a standard naming convention (libraryname_table) as shown in the table below; however, the names of the tables can be anything that suits corporate standards. The default name prefix for MRLs used with Oracle Documaker Enterprise Edition (ODEE) is DMRES. In the file-based model, this same information is carried in a set of files, all with the same filename (e.g. MASTER) and different extensions (.MDX, .LBY, and .DBF). The table below shows the tables/files and their purpose.
Database model (name prefix DMRES):
    LBYI - Index
    LBYC - Catalog
    LBYD - Data

File-based model (filename MASTER):
    MDX - Index
    LBY - Data
    DBF - Catalog
    LOG - Change Log

Schemas and Assembly Lines

A typical ODEE footprint encompasses multiple environments (e.g. Development, Test, Performance, DR, Production, etc.). Each of these environments is connected to a data tier that contains a database. The database contains two schemas, which may be named using any convention desired. The default schema names are DMKR_ADMIN, which houses system tables for ODEE, and DMKR_ASLINE, which houses the ODEE processing tables. ASLINE is shorthand for "Assembly Line", which roughly corresponds to a single MRL. It should be noted that certain ODEE components have a 1:1 relationship with an Assembly Line, such as Documaker Web Services (DWS), Documaker Interactive (DI), Document Factory (Factory), and Docupresentment (IDS). These components are tied to a single Assembly Line; if multiple Assembly Lines are used, each must have a dedicated instance of these components. Other components, such as Documaker Administrator (DA) and Dashboard, are used across Assembly Lines. It is possible, therefore, to reduce the footprint of the enterprise installation by consolidating multiple processing MRLs into a single assembly line where possible by creating Business Definitions (represented by BDFs within the Library). There may be additional requirements that stipulate the use of separate assembly lines.

Resource Lifecycle

A typical resource lifecycle includes the following environments and uses:

· DEV environment - where resource authors/content developers use Documaker Studio to create or add resources to the MRL. Unit testing of functional requirements satisfied by development activities occurs in this environment.
· QA environment - where quality assurance testers validate the satisfaction of functional requirements.
· UAT environment - where testers validate the satisfaction of technical and operational requirements.
· PROD environment - where end-users perform business operations.

Accordingly, the resources contained in the MRL must be promoted between each of these environments in accordance with project management, testing management, and standard operating procedures to ensure changed resources are fully tested and satisfy requirements before being promoted into the PROD environment. Once a resource has been promoted beyond the DEV environment, no further changes should be made to that version and revision of the resource; all changes should result in a new revision and/or version. During the course of developing a resource for initial implementation, there could be dozens of revisions for a single resource (incremental check-ins by resource developers, for example). Only the latest revision of a resource is necessary in production, so for this reason it is recommended to collapse revisions upon promotion from DEV. The management of the resource lifecycle is further augmented by the use of Library Project Management. Prior to working with the resource lifecycle, the libraries should be configured.

Library Setup

As mentioned previously, each library has a set of tables that contain the resource data. In order to promote between the libraries, each library must be defined to the user(s) who have access to perform promotions. The process of defining a library can be done inside Documaker Studio; consult the Documaker Studio Administrator Guide for additional details. The following instructions will explain how to configure a user's installation for multiple libraries using the INI files. This will allow you to copy the configuration from one machine to another if you have multiple users to configure. Two important points to consider:

· Do not place library configurations into a user's INI files if that user should not have access to the library or environment.
· These same configurations can be used for creating INI files for automated deployment using LBYPROC/LBYSYNC.

1.
Locate the FSISYS/FSIUSER INI files for the user's workspace. Typically these files are found in c:\fap\mstrres\dmres, however each installation can be different. These changes can be placed in either file, however it is best to open both and locate the existing control groups and settings where possible.

2. Create the library section. The name of the library should match the LBYI (library index table) name - see the special note at the end of this section for table name conversion.

    < Library:UAT_DMRES >
        ; the following values (right side of =) must be defined!
        CATALOG    = UAT_LBYC
        DBTable    = UAT_LBYD
        LBYLogFile = UAT_LBYL
        USERFile   = UAT_USER

3. Create the DBTable definitions for the previous settings. Note that the value of the < DBTable:xxxx > should match the corresponding table in the database (e.g. UAT_DEVC might actually be UQY0_DMRES_DEVC, so the control group would be < DBTable:UQY0_DMRES_DEVC >). If the names are the same across all environments, see the note after this section.

    < DBTable:UAT_USER >
        DBHandler   = UAT_DB
        DefaultTag  = UNIQUEIDTAG
        UniqueIDTag = UNIQUEIDTAG
        UniqueTag   = IDTAG

    < DBTable:UAT_LBYC >
        DBHandler = UAT_DB
        UniqueTag = CATALOGID

    < DBTable:UAT_LBYD >
        DBHandler = UAT_DB
        DFD       = DEFLIB\carfileora.DFD
        UniqueTag = ARCKEY+SEQ_NUM

    < DBTable:UAT_LBYL >
        DBHandler = UAT_DB
        UniqueTag = DATE+TIME

    < DBTable:UAT_DMRES >
        DBHandler = UAT_DB

4. Create the DBHandler section. This is used and referenced by the DBTable sections. Encrypt[2] the connection settings so users do not gain access to resources outside of Studio or the command-line tools.
    < DBHandler:UAT_DB >
        AlwaysSqlPrepare = Yes
        Class            = ODBC
        CreateIndex      = No
        CreateTable      = No
        Debug            = No
        PassWd           = ~ENCRYPTED xxxxxx
        Server           = ~ENCRYPTED xxxxxx
        SubClass         = ORA
        UserID           = ~ENCRYPTED xxxxx

5. Add the library to the < LibraryManager > section. The name used here needs to match the < Library:XXX > name you have created.

    < LibraryManager >
        Library = UAT_DMRES

By default, the table names listed in < DBTable:xxx > are used to match table names in the database - so if the names shown in the < DBTable:xxx > do not match, add the conversion section below.

6. Repeat these steps for all environments this user should access.

Note - if your database schema uses the same table names across environments (e.g. DEV uses DMRES_LBYC and QA uses DMRES_LBYC), then you must use a special conversion section and nomenclature for your INI settings. The < ODBC_FileConvert > section is used to provide internal conversion of table names. The left side of the notation is the table name used in the INI, whereas the right side is the database table name. Use this table to create internal conversions so the naming of your tables does not get confused in the INI settings. Note that the "LBYI" table is used as the main library table. Then, in your settings shown above, use the environment-specific INI name defined here.
    < ODBC_FileConvert >
        ; ININAME  = DB NAME
        DEV_DMRES  = DMRES_LBYI
        DEV_FLDB   = DMRES_FLDB
        DEV_USER   = DMRES_DMUSER
        DEV_LBYC   = DMRES_LBYC
        DEV_LBYD   = DMRES_LBYD
        QA_DMRES   = DMRES_LBYI
        QA_FLDB    = DMRES_FLDB
        QA_USER    = DMRES_DMUSER
        QA_LBYC    = DMRES_LBYC
        QA_LBYD    = DMRES_LBYD
        UAT_DMRES  = DMRES_LBYI
        UAT_FLDB   = DMRES_FLDB
        UAT_USER   = DMRES_DMUSER
        UAT_LBYC   = DMRES_LBYC
        UAT_LBYD   = DMRES_LBYD
        PRD_DMRES  = DMRES_LBYI
        PRD_FLDB   = DMRES_FLDB
        PRD_USER   = DMRES_DMUSER
        PRD_LBYC   = DMRES_LBYC
        PRD_LBYD   = DMRES_LBYD

At the end of this configuration, the user will be able to promote between the libraries defined in these INI settings, using either Studio or command-line tools.

Promotion

This section discusses non-Library Project Management (LPM) promotion. For information on LPM promotion, please proceed to the Library Project Management section.

Typically, increasingly strict governance surrounds the promotion process as resources move upwards to production. Usually only trusted gatekeepers are allowed to push resources to production, whereas developers usually have free rein to push resources from DEV to QA environments.

Resource promotion can happen in several ways using different combinations of methods and configurations. The following list illustrates some of the common configurations used. There are two methods for executing promotions - Studio-based or command-line based.
Studio-based promotion provides a user interface for selecting resources for promotion and executing the promotions. Command-line promotion provides a utility to define resource selection criteria and execute promotions. Note that the Studio-based method actually invokes the command-line tool to perform promotions, so each method has the same functionality. The command-line tool may be preferred because it can be automated.

· Studio-based promotion into all environments - in this method, one or more Studio installations are configured with access to libraries for promotion. This method is considered to have the highest ease of use.
· Studio-based promotion into interim format - in this method, one or more Studio installations are configured to promote from one environment into a local, file-based MRL. The file-based MRL is then transferred to the target environment and promoted locally into the database MRL. This method is generally considered to be more secure, by preventing Studio users from accessing non-development libraries.
· Command-line based promotion into all environments - in this method, a Documaker installation is configured with the appropriate libraries, and the promotion is done via command-line.
· Command-line based promotion into interim format - this method is the same as the Studio-based promotion into interim format, using command-line tools.

There is a final possibility, and that is for Studio users to generate Library Script Control (LSC) files which define the resource migration properties (e.g. which resources to migrate, source and target properties, etc.). This script can then be used with the command-line utilities.

The promotion process works in this fashion:

1. Identify resources for promotion - in this step resources are identified for promotion. This can happen in a number of ways. If the resources are already classified (for example by Project) then the user can easily filter the library by project name.
Otherwise the user can select the resources individually from the library.
2. Set target library - this is simply selecting the target library to receive the resources.
3. Set target classification properties - this sets the properties of the promoted resources in the target library.
4. Set source properties - this sets the properties of the promoted resources in the source library.
5. Execute/Preview - this performs the promotion and sets the properties as noted, or previews what the promotion will do.

Promotion Steps

1. Prior to initiating a promotion, the user should delete the MASTER.* library files from the local machine - this step is only used if the "interim files" option is used.
2. The user logs into Studio - the user must have Library Manager and Perform Promotion rights.
3. The user opens Libraries and selects the source library.
4. The user clicks Promote from the Library menu tab.
5. The user selects the appropriate resource(s) to be promoted, or can open an existing Library Script which has been saved from a previous promotion. The Library Script can identify the resources to be promoted and can be saved and executed multiple times.
6. The user selects the appropriate target library ("MASTER" if using the interim files option).
7. The user selects the appropriate promote commands (e.g. Promote Selected Files Only and/or Include Descendants in Promote).
8. The user commits the promotion.

If using the interim files option, the user can then take the MASTER.* library files, copy them to the target environment in the documaker/mstrres/dmres/deflib directory, and then run the documaker/mstrres/dmres/deploysamplemrl.sh script to migrate resources from the interim file into that system's library.

Library Project Management

Library Project Management (LPM) was introduced in Documaker Studio 12.0 to facilitate management of resources using concepts borrowed from the software development life cycle model.
Using LPM means that access to resources can be tightly controlled based on roles, and assures that resources cannot be migrated to downstream tiers (libraries) without approvals. All LPM functionality is built within Documaker Studio. For reference, refer to the Documaker Studio User Guide, page 349.

Tiers

LPM uses the concept of tiers to correlate a repository of resource library tables in a given environment with a logical development phase. Each tier has a set of tables that reside in the database, as well as a directory present in the Documaker Studio workspace (either on the local user's workstation or in a network location if using shared workspaces). A tier is assigned a category that characterizes the tier according to its function. An implementation can have any number of tiers of any category, but must have one DEV tier, which is required. Generally it is recommended to have a tier established for each environment.

Roles

There are multiple roles that persons can fulfill within the workflow of form generation and development. It is possible that a single person could perform multiple roles, and the system supports this. Note: these roles correlate to the use of LPM; some roles could be consolidated (e.g. Reviewer and Tester).

· Developer - creates/edits resources.
· Reviewer - reviews and approves changes to resources.
· Tester - validates the functionality of resources.
· Administrator - administers the LPM system configuration; performs promotions between tiers.

Classifications

An MRL resource can have one or more metadata[3] elements defined and assigned to it. Documaker Studio provides four metadata elements, called classifications, and each element can have many possible values defined. The classification of resources, or tagging, is paramount for proper library management. Refer to the Documaker Studio guide, page 487, for information on how to configure classification properties.
These properties are configured to provide selectable values that a user may choose from when editing resource information. Remember that these are only recommendations; properties can be defined according to business requirements - however, when LPM is used, some of the properties take on special meanings. The available classification properties are:

· Action - this property is used only when LPM is enabled. The Action property specifies the activities that a user may perform with the resource. When the Action is performed, the status of that resource is changed.
· Mode - this field could specify where in the development cycle the resource is. The recommended usage is to correlate Mode to the environment or tier (e.g. DEV, TEST, PROD). In LPM, the Mode is used to categorize resources into tiers to determine what actions are available.
· Status - this field could indicate whether a resource has passed or failed testing. Recommended values are TEST, PASSED, FAILED. In LPM, the Status indicates the last action performed on a resource.
· Class - this field could indicate a large grouping of similar resources that share a common functionality or usage (some uses are to organize by state/province or business function).
· Project - this field could indicate a smaller sub-grouping of resources that can span classes and are correlated by business purposes. The recommended use is to create project values that are synonymous with change management (e.g. all resources for "Project 001" are grouped together for migration).
· State - this property is only used when LPM is enabled. The State property is a combination of tier, mode, and status, and determines which roles have access to the resource and what actions can be performed on the resource.

Securing Resources

Roles are used to collect specific rights and functions within Documaker Studio when LPM is used in a library. A user is assigned one or more roles by a user with Administrator capabilities in Documaker Studio.
Note that this role configuration is wholly separate from the Entities and Ability Sets used by Documaker Interactive (DI) - recall that DI is a product targeted for use by end users, whereas Documaker Studio is purely for resource developers and administrators. The recommended security settings by role are shown below. Refer to the Documaker Studio Administrator's Guide, pages 84-87, for additional details, including additional settings required for LPM.

· Developer - Full Access on all Manager rights, except Deployment Manager.
· Reviewer - View Only Access on all Manager rights, except Deployment Manager.
· Tester - View Only Access on all Manager rights, except Deployment Manager. Note this may need to be modified to allow the tester to adjust resource classifications using the "Limited Property Modifications" right.
· Administrator - Library Administrator rights; performs promotions. Can selectively give certain users rights to perform promotions using the "Limited Property Modifications" and "Perform Promotions" rights.

It is possible to secure specific resources to a user, so only that user can change the secured resource. To lock specific resources by user ID, see Enabling Securing Resources in the Documaker Studio User Guide, page 90.

Promotion

The following steps outline the basic workflow of the LPM lifecycle, and how users in specific roles interact with LPM.

Dev Environment

In the DEV tier, Developers and/or Administrators perform the following functions:

· Identify resources to be created or modified pursuant to business requirements.
· Determine the appropriate classification value (e.g. Project classification = "BR101.3") to be used for the development effort.
· If necessary, create any classification values needed.

In the DEV tier, Developers perform the following functions:

· Create or modify resources according to business requirements or test results.
· Assign classification values to resources (e.g. PROJECT value).
· Unit test all resources created or modified for the Project classification.
· Identify all required components for promotion. Studio can help in this regard - e.g. if a form is being promoted that has a new section added, it will, if requested, automatically select the applicable section(s). However, if a section has a field that is defined in the XDD, Studio will not automatically select the XDD - the developer must know what resources have changed.

In the DEV tier, Reviewers perform the following functions:

· Validate that all resources set for promotion are ready and feature complete.
· Notify Administrators to perform promotion of specific resources.

In the DEV tier, the Administrator performs promotion in the following steps:

1. Select all resources identified for promotion. The selection criteria can be saved to an external file for re-use. This file is called a library script file and typically uses the extension LSC.
2. Set Target Library = QA.
3. Set classification properties:
   a. Target Mode = TEST
   b. Target Status = TEST
   c. Source Mode = DEV
   d. Source Status = PROMOTED
   e. Target Project = Source Project
4. Select "Promote Selected Files Only".
5. Select Include Descendants in Promote to include child resources (e.g. a form's sections) - this step is optional, but remember to include ALL required resources in the promotion.
6. Click Preview to see the intended results of the promotion.
7. Click Promote Now to perform the promotion - Studio will promote the resource from the source to the target and update the classification properties of the resources in each environment as necessary. Note that the settings of the promotion can be stored to an external Library Script Control (LSC) file and repeated. The LSC file can also be used to execute the promotion using a command-line tool from the server environment.
8. Open the target environment to ensure resources promoted successfully.
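The classification bookkeeping in steps 3a-3e follows a simple pattern that repeats at each tier boundary (DEV to QA, QA to UAT, UAT to PROD). The sketch below is purely illustrative Python, not a Documaker utility; it just models how a promotion stamps the source copy PROMOTED and applies the target tier's Mode and Status while carrying the Project value forward (the tier names and property values are taken from the steps in this post):

```python
# Illustrative model of LPM promotion bookkeeping (not a Documaker tool).
# Target Mode/Status applied when promoting INTO each tier, per this post.
TARGET_SETTINGS = {
    "QA":   {"mode": "TEST", "status": "TEST"},
    "UAT":  {"mode": "UAT",  "status": "TEST"},
    "PROD": {"mode": "PROD", "status": "PROD"},
}

def promote(resource, target_tier):
    """Return (updated_source, new_target) classification dictionaries."""
    settings = TARGET_SETTINGS[target_tier]
    source = dict(resource, status="PROMOTED")   # source copy is stamped PROMOTED
    target = dict(resource,
                  mode=settings["mode"],          # target gets the tier's Mode,
                  status=settings["status"],      # the tier's Status,
                  project=resource["project"])    # and the same Project value
    return source, target

# Example: a resource developed under project BR101.3 moves DEV -> QA.
dev_copy = {"name": "FORM01", "mode": "DEV", "status": "TEST", "project": "BR101.3"}
src, qa_copy = promote(dev_copy, "QA")
print(src["status"])       # PROMOTED
print(qa_copy["mode"])     # TEST
print(qa_copy["project"])  # BR101.3
```

The same function applied again with "UAT" and "PROD" as targets reproduces the QA and UAT promotion steps below; in practice Studio performs all of this internally when you click Promote Now.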
QA Environment

In the QA tier, Testers perform the following functions:

· Validate functionality of the resources introduced by the promotion set.
· If any resources fail tests:
o Set classification property STATUS = FAILED for those resources.
o Revert to the developer to correct issues and follow the promotion path. Resources of the same promotion set should not be promoted until all resources pass.
· If all resources pass tests:
o Set classification property STATUS = PASSED for all resources.
o Notify Reviewers of readiness of the promotion set. Classification properties are updated.

In the QA tier, Reviewers perform the following functions:

· Validate that all resources set for promotion are ready and feature complete.
· Notify Administrators to perform promotion of resources in the promotion set. Classification properties are updated.

In the QA tier, Administrators perform promotion in the following steps:

1. Select all resources identified for promotion.
2. Set Target Library = UAT.
3. Set classification properties:
a. Target Mode = UAT
b. Target Status = TEST
c. Source Status = PROMOTED
d. Target Project = Source Project
4. Select "Promote Selected Files Only".
5. Select "Include Descendants in Promote" to include child resources (e.g. a form's sections). This step is optional, but remember to include ALL required resources in the promotion.
6. Click Preview to see the intended results of the promotion.
7. Click Promote Now to perform the promotion.
8. Open the target environment to ensure resources promoted successfully.

UAT Environment

In the UAT tier, Testers perform the following functions:

· Validate functionality of the resources introduced by the promotion set.
· If any resources fail tests:
o Set classification property STATUS = FAILED for those resources.
o Revert to the developer to correct issues and follow the promotion path. Resources of the same promotion set should not be promoted until all resources pass.
· If all resources pass tests:
o Set classification property STATUS = PASSED for all resources.
o Notify Reviewers of readiness of the promotion set. Classification properties are updated.

In the UAT tier, Reviewers perform the following functions:

· Validate that all resources set for promotion are ready and feature complete.
· Notify Administrators to perform promotion of resources in the promotion set. Classification properties are updated.

In the UAT tier, Administrators perform promotion in the following steps:

1. Select all resources identified for promotion.
2. Set Target Library = PROD.
3. Set classification properties:
a. Target Mode = PROD
b. Target Status = PROD
c. Source Status = PROMOTED
d. Target Project = Source Project
4. Select "Promote Selected Files Only".
5. Select "Include Descendants in Promote" to include child resources (e.g. a form's sections). This step is optional, but remember to include ALL required resources in the promotion.
6. Click Preview to see the intended results of the promotion.
7. Click Promote Now to perform the promotion.
8. Open the target environment to ensure resources promoted successfully.

PROD Environment

Resources here are in production mode and no changes should be made to their revision and version numbers.

Configuring a Workspace for LPM

This action should be taken on a single user's system first; it will configure the system to use LPM. The changes should then be copied to each user's system to ensure that all users are configured with the same LPM settings. A workspace must be enabled for LPM before it can be used with LPM. To enable LPM, follow these steps:

1. Open Documaker Studio and select Manage > System > Settings (or click the Settings button if you're using the Ribbon bar).
2. Select Workspace Information and check Projects Workspace, then click OK.
3. Open Documaker Studio and select Manage > System > Settings (or click the Settings button if you're using the Ribbon bar).
4. Click Libraries. Use the tabs to review Modes, Status codes, Classes, and Projects. It is recommended that Modes and Status codes remain as-is.
Define Classes and Projects as necessary; these are just ways to categorize resources. Class could be used to correlate all resources for a geographical region, for example. Project is usually used to correlate resources across classes for a particular business objective (such as a new product for all regions).

5. Click Library Tiers. Tiers are organized by type (types are listed in the Tier section). The first tier is a DEV tier called "001 - Development". This tier is created automatically and cannot be deleted or edited. You can create any additional tiers here; you can of course opt not to create all tier types (for instance, you can omit Model Office if you do all testing in system testing, as long as you ensure adequate testing is performed before promoting to production).
6. Click "Create Tier" and enter the details about the new tier:
a. Type: the type of tier (DEV, UNIT, etc.).
b. Path: the parent path of the location where the tier configuration files are stored; these are workspace-specific.
c. Tier Location: the folder inside the Path location, named for the tier, where the configuration files are stored. The recommended naming convention is [library name]_[tier type].
d. Database Connection: enter the details about the database connection. The tier can reside in the same database[4] as other tiers, but will require a different naming convention. The suggested convention is [library_name]_[tier_name], e.g. DMRES_DEV or DMRES_UAT.
e. Library Name: it is recommended that this stay the same as the library name defined previously.
f. ODBC Data Source: click and select or create the data source for the database where the library will be created. If you are using the same database for the new tier as an existing tier, select an existing data source and provide the login credentials.
g. The file names can stay as they are shown. These are automatically generated based on the library name.
h.
Use Generate DDLs to create the DDL for a DBA to use when creating the tables. NOTE: if you are creating a tier that will be accessed by Document Factory and the database is Oracle, the "LBYD" table DDL needs to be adjusted: the CARDATA column is defined as RAW(1954) but needs to be changed to BLOB before committing the DDL. Before committing any resources to the library, you must also change the carfileora.dfd present in the deflib directory. Open carfileora.dfd in a text editor and change the <FIELD:CARDATA> section as shown:

<FIELD:CARDATA>
EXT_LENGTH = 8
EXT_TYPE = BLOB
INT_LENGTH = 8
INT_TYPE = BLOB
KEY = N
REQUIRED = N
BLOB = N

If the change to the DFD is not made before the finalization of the library creation, Studio will create default resources and add them to the library; these resources must be deleted and recreated after the DFD has been changed.

7. Click Next, then Finish. You can create additional tiers. Click OK when done.

Assigning Roles

Use this to assign users to one or more LPM roles:

1. Open Documaker Studio and select Manage > System > Users (or click the Users button if you're using the Ribbon bar). You will need the User Administrator or System Administrator role to access this function.
2. Select a user and click Configure.
3. Select Projects, add the desired role(s) to the user, then click OK.
4. Repeat for all users who will participate in LPM.

Working with LPM

Once you have resources loaded into projects, you can start working with LPM. Note: if you want to strictly adhere to the LPM approach, only perform actions on resources within the Projects manager. Managing resources outside of the Projects manager with a tool such as the Library manager can result in the status and mode combinations of resources becoming undefined, thereby making the resources unusable by the Projects manager. Only administrators should use the Library manager for other functionality.
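If you prefer to script the LBYD DDL adjustment described in the NOTE above rather than edit the file by hand, a one-line substitution works. This is a sketch only: for demonstration it fabricates a stand-in DDL file, since the real file name and contents are whatever Generate DDLs produced for your library.

```shell
# Demo stand-in for the generated LBYD DDL (the real file comes from Studio).
DDL_FILE="$(mktemp)"
printf 'CREATE TABLE LBYD (\n  CARDATA RAW(1954)\n);\n' > "$DDL_FILE"

# Change the CARDATA column type from RAW(1954) to BLOB before the DDL is run.
sed -i 's/CARDATA[[:space:]]*RAW(1954)/CARDATA BLOB/' "$DDL_FILE"
cat "$DDL_FILE"
```

Hand the adjusted file to your DBA; the same substitution applies however your site names the generated DDL.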
Open the Projects manager, and then you can filter the list of items shown in the project or perform actions on them.

Conclusion

Hopefully I achieved my goal of giving you enough information to help plan your resource management strategy. Questions and comments are welcome!

Footnotes

[1] An example of this would be a standalone development environment that is used for resource development only and does not have an ODEE component.
[2] To encrypt settings, use the command-line tool distributed with Documaker Studio, cryruw32.exe. To run the tool, enter cryruw32 string_to_encrypt on the command line. The resulting output can be pasted into INI files on Windows or Linux platforms using the notation ~ENCRYPTED encrypted_string.
[3] Metadata means, literally, "data about data": information about an item.
[4] It is recommended that production tiers do not reside in the same database or schema as non-production tiers.



Stability through Monitoring

Monitoring is a critical and necessary function to ensure that systems and processes are running properly. Good monitoring practice can also be proactive, identifying and resolving potential problems before they occur. In many implementations it is possible to omit or defer the definition of process monitoring, which can lead to overlooking important items. Many times this effort is left up to systems analysts to determine which processes and files should be monitored, and without guidance, there are opportunities to miss critical functions. My goal in this post is to consolidate and explain some of the basic areas within a Documaker Enterprise implementation that should be monitored, and how to monitor them.

Processes

Process monitoring is a basic function of all IT departments and is typically done with enterprise-level tools. Where Oracle Documaker is concerned, there are two types of processes that should be monitored: singletons and over-watch processes. A singleton is a process that exists in only one instance and thus represents a single point of failure (SPOF). SPOFs should be minimized in a highly available environment. To mitigate singletons as SPOFs, process monitoring should be enabled.

It is possible for two workers in the Oracle Documaker infrastructure to be configured as singletons: the Historian and the Receiver. These workers can be deployed in a clustered fashion; however, certain implementation choices can preclude the use of a clustered environment and thus should be avoided, specifically hot folder submission in a clustered environment. While the hot folder is a supported implementation method, it is not advisable from an enterprise perspective. Additionally, when hot folder submission is used, there should be at most one Receiver per hot folder directory. This model has two implications:

· One hot folder per cluster member, which must be independent and not shared.
· Processes that deposit files to hot folders must use a round-robin protocol to ensure load-balanced job submission.

Failure to implement a file delivery model that meets these requirements will result in having to run a single Receiver worker across the ODEE cluster. To do otherwise will result in duplicate job creation, as multiple Receivers will attempt to process the same data files[1].

The Historian performs high volumes of database activity and should be scheduled to run at off-peak hours. To prevent overloading the database, only one Historian worker should be configured. The Historian is not critical to the document generation process and does not represent a SPOF in the document generation process chain.

Singleton processes are monitored by the application's over-watch process and therefore do not need special handling in the event of failure; however, these processes should be monitored to ensure there is always at least one instance running on the primary node. In the event of primary node failure, the singleton must be started on another node. Upon recovery, the singleton should be shut down on the failover node and restarted on the recovered node. Startup and shutdown of the singleton is controlled by the presence of the appropriate JAR in the deploy directory. The following processes should be monitored on each node of the application tier to ensure functional operation of the Oracle Documaker system:

Application | Process | Type
Docfactory | docfactory_supervisor | Over-watch process
Docupresentment | idswatchdog | Over-watch process
Docfactory | docfactory_receiver[2] | Singleton

If there are numerous restarts of the over-watch processes, this is indicative of a non-functional system and therefore merits investigation. The over-watch processes are responsible for ensuring that the sub-processes are running and load-balanced, and there are many tunable parameters that the over-watch processes use to decide when to start or kill sub-processes.
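A minimal liveness check for the over-watch processes can be scripted around pgrep. This is a sketch only: the process names come from the table above, and the alert action (here just an echo) is a placeholder for your enterprise monitoring hook.

```shell
# Check that each over-watch process has at least one running instance.
# pgrep -x matches the exact process name to avoid false positives.
status=0
for proc in docfactory_supervisor idswatchdog; do
  if pgrep -x "$proc" > /dev/null 2>&1; then
    echo "OK: $proc is running"
  else
    echo "ALERT: $proc is not running on $(hostname)"
    status=1
  fi
done
# exit "$status"  # uncomment when wiring this into a monitoring scheduler
```

Run from cron or your monitoring agent on each application-tier node; add docfactory_receiver to the list on whichever node hosts the singleton.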
It is worth noting that the over-watch processes are responsible for starting the sub-processes, so a parent-child relationship is established. If parents are stopped, the children will also be stopped; therefore, the monitoring application should ensure that children are properly stopped before restarting the parent. Although outside the scope of this document, it is advisable to monitor the appropriate processes of the Presentation Tier to ensure functional operation of the web application server (e.g. WebLogic NodeManager or WebSphere Application Server).

Recommendations

If the Receiver must be implemented as a singleton, amend the environment build instructions as follows:

· After installation of Documaker on all application nodes, create the directory [ODEE_HOME]/documaker/docfactory/undeploy.
· On non-primary application nodes, move receiver.jar from [ODEE_HOME]/documaker/docfactory/deploy to [ODEE_HOME]/documaker/docfactory/undeploy.

Create operational procedures that reflect the following instructions:

· Upon failure of the primary application node, move receiver.jar from [ODEE_HOME]/documaker/docfactory/undeploy to [ODEE_HOME]/documaker/docfactory/deploy on the first non-primary node, hereafter called the failover node.
· Upon recovery of the primary application node, move receiver.jar from [ODEE_HOME]/documaker/docfactory/deploy to [ODEE_HOME]/documaker/docfactory/undeploy on the failover node.

Monitoring over-watch processes:

· Establish thresholds for tracked metrics (e.g. process memory or CPU consumption, excessive or long-running GC, child-process restarts) to determine when investigation is required.
· Ensure child processes are completely stopped before restarting an over-watch process.
· Monitoring child processes is not necessary in production, as this is the responsibility of the over-watch processes; however, it could be useful for performance tuning and appropriate sizing in a performance test environment.
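The receiver.jar moves in the operational procedures above can be captured in a small script. This is a sketch under assumptions: the paths follow the default install layout, and for demonstration the script fabricates the directory tree in a temporary location rather than touching a real installation.

```shell
# Demo setup: simulate the default install layout in a temp directory.
ODEE_HOME="$(mktemp -d)"
DEPLOY="$ODEE_HOME/documaker/docfactory/deploy"
UNDEPLOY="$ODEE_HOME/documaker/docfactory/undeploy"
mkdir -p "$DEPLOY" "$UNDEPLOY"
touch "$UNDEPLOY/receiver.jar"   # non-primary node: Receiver starts inactive

# Failover: primary node has failed, so activate the Receiver on this node.
activate_receiver() { mv "$UNDEPLOY/receiver.jar" "$DEPLOY/"; }

# Failback: primary node has recovered, so deactivate the Receiver here.
deactivate_receiver() { mv "$DEPLOY/receiver.jar" "$UNDEPLOY/"; }

activate_receiver
[ -f "$DEPLOY/receiver.jar" ] && echo "Receiver active on this node"
deactivate_receiver
[ -f "$UNDEPLOY/receiver.jar" ] && echo "Receiver inactive on this node"
```

In practice the two functions would be invoked by your failover orchestration, followed by a restart of the DocFactory so the Supervisor picks up the change.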
Logging

The logging mechanism within Oracle Documaker is highly configurable and is based on LOG4J principles and industry-standard logging patterns. Log messages are generated by ODEE components and are passed to a logging component, which then routes the messages according to priorities, filters, and appenders.

Priorities

Priorities are used to determine where log messages are sent. Priorities are defined at the APPCONFIGCONTEXT level (per application/worker process) and are set for various Java components of the Document Factory. Generally it is advised to use the default "ERROR" priority for all entries unless directed to modify a package for diagnostic purposes.

Priority | Contents
FATAL | Events that prevent Documaker from starting properly (not typically used)
ERROR | Events that cannot be processed and prevent Documaker from running properly
WARN | Events that cannot be processed but do not prevent Documaker from running properly
INFO | Events containing informational messages (not typically used)
DEBUG | Events containing diagnostic information

Filters

Document Factory workers use filters to determine the location where log messages are written: database or file. Filters are defined by creating LogFilter entries in the ALCONFIGCONTEXT table. Each LogFilter entry includes a package name that correlates to specific components within Documaker. When a Documaker component generates a log message, it uses the LogFilter list to determine whether to pass the message to the database or to the file system. Any package that is named in a LogFilter will be written to the database; conversely, packages not named in LogFilters will be written to the file system in the [ODEE_HOME]/documaker/docfactory/temp/<process-name> directory. The installation process creates LogFilter entries for each worker and some additional components.
To change the log location for these components, deactivate the corresponding row in the ALCONFIGCONTEXT table by using Documaker Administrator (System -> Assembly Line -> Configure -> LOG4J_LogFilter_LogFilter) or update the ACTIVE value to 0 in the table for the appropriate row.

Appenders

Appenders define the destinations where log statements can be sent. These are defined globally in the ALCONFIGCONTEXT table. They can also be defined at the APPCONFIGCONTEXT level, which provides a level of override at the application or worker process level. Each appender has specific configuration information; see the ODEE Administrator Guide, "Configuring the LOG4J Appenders".

Appender | Destination
stdout | Standard output (e.g. console). To redirect console output from stdout, amend your startup script procedure to redirect to the desired file. Monitor the file size of this output periodically, as it will grow to available extents. Example: ./docfactory.sh start 1>stdout.txt 2>stderr.txt
roll | File system. To modify this setting, use the Documaker Administrator to configure the Assembly Line: locate Context LOG4J, Category Appender, Group roll, and modify the "File" property to point to the desired logging location. You can use replacement variables like ~THREADID and ~PROGRAM to further categorize the filename/directory structure for the log. The LOG4J default configuration sets the maximum size of this file and will automatically roll when the file size limit is hit; this limit can be adjusted in Documaker Administrator using the filter above and the "MaxFileSize" property. The number of rolled files retained is set with the "MaxBackupIndex" property.
process-roll | File system. To modify this setting, use the Documaker Administrator to configure the Assembly Line: locate Context LOG4J, Category Appender, Group process-roll, and modify the "File" property to point to the desired logging location.
You can use replacement variables like ~THREADID and ~PROGRAM to further categorize the filename/directory structure for the log.
LogAppender | LOGS table (INFO, DEBUG, WARN priorities)
ErrorAppender | ERRS table (ERROR, FATAL priorities)
EMAIL | Email notification for "critical" error messages

Loggers

Loggers provide the match between Document Factory components and appenders. Loggers are hierarchical according to the name of the logger; e.g. oracle.documaker is the parent of oracle.documaker.util. Parent loggers will also receive log messages from their descendants if the additivity value is set to YES; this can result in duplicate messages. Loggers and their settings are defined in the ALCONFIGCONTEXT table. The logger settings determine which event information from the logging package is captured (Priority) and the destinations available to the logger (Appenders).

Special Circumstances

When a worker is started, a database connection must be established. The database connection must be established for the worker to obtain its settings, particularly the LOG4J settings for the worker. Event messages generated during this time are logged to the file system. Rudimentary settings for logging are stored in the log4j.xml file, which is located inside the worker's deployment JAR file. Deployment JAR files are located in the [ODEE_HOME]/docfactory/deploy directory. The LOG4J settings in this file at installation are the same as the roll, LogAppender, and ErrorAppender settings described above; therefore, if you make changes to these appenders, it is recommended to change them in the deploy files as well.

Documaker Interactive generally logs messages to the LOGS and ERRS tables as necessary. However, during debugging sessions it may be necessary to generate debug information that goes to a file rather than one of the aforementioned tables.
This file location is specified in Documaker Administrator -> System -> Assembly Line -> Correspondence -> Context Name LOG4J, Category LOGGING, Group Name LOG4J_INIT, Property logFilePath. By default this file shows up in the root of the idm_server directory on the presentation tier web application server.

Recommendations

· Consolidate file logging to a single area for monitoring:
o DocFactory stdout/stderr (console redirect)
o DocFactory worker process logs (process-roll appender)
o DocFactory worker program logs (roll appender)
o Documaker Interactive debug logs
o Web application server logs: use the console web application to change the name/location for dmkr_server and idm_server log files. Keep in mind this location must be uniformly accessible across all nodes in the cluster.
· Periodically scan the ERRS table for relevant error messages that may need resolution. Filter out document-related errors that are due to publishing problems (e.g. missing required data or other non-system errors). Use the ERRDATA and ERRPROGRAM columns for filters; filter values TBD.
· Periodically scan the LOGS table for relevant information about the general health of the system. Filter out document-related log information. Filter values TBD.
· For reconciliation reporting, scan the JOBS, TRNS, and PUBS tables and filter on JOBSTATUS, TRNSTATUS, and PUBSTATUS like '%41'; this will show any jobs, transactions, or publications that resulted in an error. It may be desirable to create a VIEW that consolidates these columns into a single source that can be queried.

Instrumentation

The Supervisor and the Java-based workers (all except Assembler, Distributor, and Presenter) support JMX instrumentation to monitor class loading, memory usage, garbage collection, and deadlocks. Each worker has a separate TCP/IP port for each instance for monitoring; ports are assigned a starting number and then incremented by 1 for each additional instance.
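Given the starting port, the per-instance monitoring ports work out as a simple offset. A quick sketch, assuming a starting port of 9001 (the actual starting number is whatever you configure for the worker):

```shell
# Each additional worker instance gets the next port number.
BASE_PORT=9001
for instance in 1 2 3; do
  echo "instance $instance -> JMX port $((BASE_PORT + instance - 1))"
done
```

Keep this arithmetic in mind when reserving port ranges: a worker configured for N instances needs N consecutive ports free from the starting number.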
Recommendations

The documented recommendation is not to enable instrumentation in production mode because of the additional overhead and port usage; however, this could be mitigated by a long interval between checks (the default is 60 seconds). With the introduction of automated instrumentation in Java 6, JMX is not required. Instrumentation can be configured programmatically pursuant to the needs of an organization. Utilize enterprise tools to inject instrumentation code and perform inspection according to desired results.

Notification

The Supervisor can email notifications in the event of a fault. Messages are not configurable and are the same messages that are delivered via other LOG4J methods; email is one of the appenders, so this information will be captured in the other appender locations as well.

Recommendations

Consider email notification in non-production environments to support development efforts. Consolidate issue notification using an enterprise-wide tool; to that end, consider using the other methods provided by the software to log messages to common locations.

System

A complete monitoring plan should include monitoring the overall health of each node across the tiers of the Oracle Documaker environment. At a high level, the following metrics should be monitored:

· CPU Utilization: utilization should average at or below 80%. Peaks are expected (e.g. during process startup or shutdown, or during high-load times); however, the average should hover at or below 80%. An average above 80% suggests the need for performance tuning activities or other remediation, which may include one or more of the following:
o Inspect the affected node's configuration and ensure there are no unnecessary processes running.
o Add a node to the affected node's cluster. For example, if the affected node is a Documaker application node in a cluster, create a new node and add it to the cluster.
o Tune the process ceilings allowed on the affected node.
For example, if the affected node is a Documaker application node and the UseLoadBalancing settings are enabled, it may be necessary to set a ceiling on the maximum number of processes that can be started.

· Memory Utilization / Swapping: ensuring the proper amount of memory is allocated to a node is imperative, since disk swapping leads to poor performance. If excessive memory/disk swapping occurs, consider one or more of the following remediation activities:
o Inspect the memory consumption of processes on the affected node and remediate any irregularities (for non-Documaker processes).
o Reduce the number of processes running on the affected node. If UseLoadBalancing settings are enabled on a Documaker application node, consider setting a ceiling on the maximum number of processes that can be started. Inspect the node to ensure there are no unnecessary processes running.
o Reduce the memory allocation of processes. Performance tests should be conducted to determine the appropriate memory allocation of processes to achieve maximum performance for a given assembly line. The optimal configuration may result in a higher memory specification for application nodes, so be aware of the appropriate node memory sizing.
· Disk Space: ensure the system has adequate free space for swap files, temporary files, log files, and data file storage areas as determined by system, business, and technology requirements.
· Table Space: ensure the tablespaces used by the Documaker system have adequate space allocated so that new rows can be added to meet processing requirements.

The presence of other components of an enterprise system, such as web application servers and databases, may suggest additional monitoring requirements; consult the appropriate documentation for those systems.

Housekeeping

The Oracle Documaker data schema includes live and historical data tables. Live tables are populated and read continuously by the workers within the DocFactory.
DocFactory workers do not consume data pushed into the historical tables. No differentiation of data disposition (e.g. in live or history tables) is presented to external consumers (e.g. web service consumers, dashboard or Interactive users); to these consumers the data simply appears to be within the Documaker schema. This is useful for maintaining historical information apart from live data while still allowing the historical data to be useful. It is important to ensure that the live data tables are as lean as necessary to support business requirements.

Oracle Documaker includes a Historian worker that facilitates the movement of data from live to historical tables. The Historian worker can also facilitate cleaning the LOGS and ERRS tables, and will also remove old data from the historical tables. All of these functions are configurable using schedules and rules. Schedules are used to determine when a particular Historian task is executed, typically during idle time. Rules are used to determine eligibility for processing and can be used to support business requirements for retention; e.g. transactions matching certain eligibility conditions may need to be retained for a specific time period. The Historian also has the ability to purge certain columns of data, which allows the system to retain the transactional information for statistical use but removes the heavyweight columns (e.g. BLOBs containing XML data and print stream data) to keep the system lean. Refer to the Historian documentation in the Documaker Enterprise Edition Administrator Guide for complete details on configuration and scheduling of Historian tasks.

[1] Note: this problem appears to be specific to Linux systems, which do not implement file locking in the same manner as Windows systems.
[2] This process should only be monitored if the Receiver has been implemented as a singleton.



Documaker Documentation and White Papers

I am asked at least once a month for the location of various Documaker documentation, manuals, guides, marketing materials, and white papers, so I figured it was about time to consolidate some of this information into a single place that I can reference. I will update this post periodically when I have new information to add, so bookmark it if you're so inclined.

Production Documentation - these are base product materials:

Documaker Documentation Library for 12.6, 12.5, 12.4, 12.3, 12.2.1, 12.1, 12.0, 11.5, 11.4, 11.3, and 11.2.

Oracle Insurance Documentation library (includes all Oracle Insurance products such as Documaker, Documanage, Documerge, OIPA, Docuflex, OHI, Data Exchange, and more).

The documentation library for each version generally includes all the base product READMEs, Release Notes, System Requirements, User Guides, Administrator Guides, Installation Guides, and other information for the specific flavors of Documaker (Enterprise/Standard) and other components (Mobile, Studio, Transall, Docupresentment, EWPS, DWS, iPPS, iDocumaker, WIPedit, Docucreate, and more), if available in that release. For newer versions of Documaker, there are also tutorials and samples, such as sample resource libraries for use with Documaker Standard, iPPS, and Mobile (the sample resource library for Enterprise Edition is incorporated in the installer for that product). Tutorials are included for Silanis e-Signature, Documaker Add-In for Microsoft Word, Documaker Mobile, and the Documaker contracts use case. Also included are some database scripts that can be used for cleaning up Documaker Enterprise databases. For newer versions of Documaker there are also Help systems for various Documaker applications (Interactive: Correspondence, Dashboard, Administrator) as well as reference materials for DAL Scripting, INI files, internal file formats, and Rules, a Troubleshooting guide, and a Documaker Utilities reference.
Lifetime Support Policy - Are you unsure of the support model for your particular version of Documaker? If you have paid for support, you are entitled to receive the level of support provided by Oracle Support that corresponds to your version of the software. Oracle reserves the right to change the support model for a particular version of software as it ages, so you need to be apprised of the support offered on particular versions of Documaker software. You can find this information here, and you can reference the support policies here. Oracle's goal is to offer competitive TCO, so the support lifetime for older versions of Documaker (as well as legacy Docucorp/Image Sciences/Skywire Software components) is comparatively lengthy. Additionally, these materials are available:

· Documaker Enterprise/Standard comparison (Executive Briefing)
· Documaker Enterprise Features (Executive Briefing)
· Documaker Enterprise Data Sheet
· Documaker Mobile Data Sheet (Documaker Mobile enables responsive HTML5 output from Documaker)
· Documaker Enterprise Highlights Video
· Digital Transformation White Paper
· Building an Agile Communication Strategy White Paper
· Publishing Trends in Insurance White Paper
· The Digital Experience Your Customers Expect: a live presentation on Documaker Enterprise/Mobile given by Oracle's Director of Strategy for document publishing at the July 2015 customer meeting.

Other materials are available here. If there is information you'd like to see added here, let me know in the comments section.



Automating Oracle Documaker Interactive and WIP Edit Plug-In Using OpenScript – Part 2

Welcome back to the continuation of our discussion on testing automation with Oracle Documaker! In our last post on Documaker regression testing, we explained the differences between keyword-driven and data-driven frameworks; our testing strategy is modeled after the framework proposed by Carl Nagle. For data-intensive applications such as Oracle Documaker Interactive, it is preferable to use data-driven testing frameworks, because these frameworks more closely mirror the use case for Documaker Interactive. In this post, we will review the testing framework and process used by the Oracle Documaker Quality Assurance team for automated testing of Documaker Interactive in several typical use cases. Let's get started, shall we?

Test Framework and Design

As Nagle pointed out, a framework for automation is indispensable in creating repeatable tests that achieve repeatable results with the same or similar input data. The first step in the design of the automated testing process is to define the parameters of the test: what is being tested, how it should be tested, and how we'll know if the test passed or failed. We know we're testing Documaker Interactive, and we know that it will either work to user satisfaction or it won't - so there's the first and last items done. But how will we test it? The functionality offered by Documaker Interactive contains many possible function paths that a user can take to achieve similar ends, so we need to determine the typical paths and automate those. A good rule of thumb is to consider what 80% of test cases should look like, and design test cases around those processes first.
A typical test case might look like this (keep in mind that I'm abstracting some of the details for the sake of clarity - your actual test cases should be much more detailed):

Inputs: one or more recipients, each with address information, supplied via external data source.
Functions: user creates a new transaction in Documaker Interactive, edits a few fields, saves, opens for edit again, completes.
Outputs: completed PDF document.
Pass/Fail: Pass if no errors occur and outputs contain expected data (e.g. supplied by the external data source and the user). Fail if errors occur or data does not match.

What we've defined above is an abstraction of a testing scenario - a structured test with a defined input, output, and results. As you can see, the scenario is very granular, but is not specific to data - it's a functional test case to verify that the software does what it's supposed to do. It's entirely possible that your own test cases can (and should) be more specific to data, especially if you have an enterprise-wide system that accommodates more than one data source or services more than one line of business. After you have a library of scenarios built up, you'll have a test suite, which you can then use for regression testing on software upgrades, functional changes, and more. At this point you're probably thinking, "Right, ok, I have all that. You said we were actually going to test something?" You're right, I did say that, and we will - to do that, we're going to use a few software packages to assist us:

Oracle Application Testing Suite - also known affectionately as OATS - which is available here, and includes OpenScript, which is documented here.
Java class Robot - to generate native system events based on keyboard and/or mouse interaction - JavaDoc is here.

For the purposes of this post, we're going to assume you've already installed OATS and Oracle Documaker Enterprise Edition (ODEE), of which Documaker Interactive is a part.
If you haven't installed OATS, see the link provided above. If you haven't installed ODEE, I have a series of blog posts that detail an end-to-end installation and configuration of ODEE.

Scripts, Hierarchy, and Data Files

Testing is executed within OATS using a hierarchy of inheritance and execution. At the top of the hierarchy is the master script, which coordinates execution of all the lower-level scripts. The next level is the component script, which as the name implies defines the collection of scripts for a software component. Finally there is the scenario script, which is the lowest level and includes all the details outlined above - inputs, outputs, functions, and pass/fail criteria. Here's a handy diagram in which we have defined multiple components and we show the scenario detail for one component:

The Documaker QA team has a test suite for Documaker Interactive that uses a data file to house all the testing configuration elements used by the master, component, and scenario scripts in the test suite. For convenience, the data file is a spreadsheet which contains multiple worksheets, each with unique data that can be replicated and modified to extend the test cases as necessary. The data file is read during the initialization phase of test execution and is stored in a global location for reference across multiple component and scenario levels. All three levels of the automation script use this data file. Let's review the hierarchical test phases and how the data file is used: The master script controls the test. This script checks the component (specifically, the application) and platform being tested (i.e. Documaker Interactive, Windows). The master script references data in the ODEE_Components worksheet of the data file to know which components to execute. The master script then calls the appropriate component-level scripts in order.
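To make the dispatch idea concrete, here is a toy Java sketch of how a master-script pass might read scenario flags and split semicolon-delimited worksheet values. None of these class or method names come from the actual OATS or Documaker QA scripts - they are purely illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the master -> component -> scenario dispatch.
public class TestHierarchySketch {

    // Interprets a Run Status flag ("Y" = include in test, "N" = exclude).
    public static boolean isEnabled(String runStatus) {
        return "Y".equalsIgnoreCase(runStatus.trim());
    }

    // Splits a semicolon-delimited worksheet cell into individual values.
    public static List<String> parseDelimitedCell(String cell) {
        return Arrays.asList(cell.split(";"));
    }

    // The "master script" pass: collect the scenarios flagged to run.
    public static List<String> scenariosToRun(Map<String, String> runStatusByScenario) {
        List<String> enabled = new ArrayList<>();
        for (Map.Entry<String, String> e : runStatusByScenario.entrySet()) {
            if (isEnabled(e.getValue())) {
                enabled.add(e.getKey());
            }
        }
        return enabled;
    }

    public static void main(String[] args) {
        // In the real suite these rows would come from the spreadsheet.
        Map<String, String> testScenarios = new LinkedHashMap<>();
        testScenarios.put("Scenario_001", "Y");
        testScenarios.put("Scenario_002", "N");
        System.out.println(scenariosToRun(testScenarios));                // [Scenario_001]
        System.out.println(parseDelimitedCell("34564675;566787;37,500")); // [34564675, 566787, 37,500]
    }
}
```

A real implementation would read these values from the spreadsheet via a library such as Apache POI rather than a hard-coded map.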
The ODEE_Components worksheet contains the following details: component name, release, environment (operating system), test run by, and date run. The component and environment cells are drop-down fields. Based on the selections made in the fields in this worksheet, the script picks the applicable URL and executes the associated test script. The component scripts determine which test scenarios will be run for a component. Each component script references one or more scenarios which are detailed in the TestScenarios worksheet of the data file. Each scenario can be turned on or off for a given component test using the Run Status column value of Y (include in test) or N (exclude from test). The TestScenarios sheet contains all the scenarios for the automation test and the supporting test data for all scenarios. When there are multiple values, such as form names or attachment file names, the values should be separated by a semicolon (;). Refer to the example worksheet below. For Scenario_001, look at the Required Fields column and you will see the semicolon-delimited value "34564675;566787;37,500". This string will be parsed by the scenario script and populated into the required fields. Scenario scripts are the actual tests that are executed. Each scenario is created as a different method in OpenScript, based on the required functionality that needs to be performed. These methods can be reused and called by other scenarios, so it is possible for a basic scenario to have many variations with little actual code needed to support each. The data file has other supporting sheets that are used by the various testing scripts for automation and control: Interactive_Users - this sheet contains credentials, user roles, and the approval levels. Interactive_forms - this sheet contains all forms with approval levels used for different test scenarios. When you add a new form, that form gets added to both the object library and to this worksheet.
From there, the form can then be used across all scenarios. To do this, add the form name to the Forms_List column in the TestScenarios worksheet. Addressees - this sheet is used to add addressees to the data set, which will be shown on the Addressee tab while creating documents within Documaker Interactive. A new addressee can be added in the same pattern as defined in the Addressees worksheet.

Putting the Test Together

We have outlined the test cases and the test data for control and execution. Now comes the fun part - we actually need to build the test! But before we do that, I must remind you that it is important to figure out how wide your test cases ought to be. By width I mean how much of the system's functionality the test case should cover. It's tempting to make a scenario cover an entire end-to-end test, across all layers of the system, from upstream data feed to downstream printing or electronic delivery. With OATS you have the power to do that, but as a wise man once said, "With great power comes great responsibility," and test design is no different! A good practice, which is reinforced by the OATS hierarchical design, is to limit a scenario to functionality within a component. That way, you can limit the Documaker Interactive test to include only the functionality that's needed within that component, and external components can be covered by other scenarios. Why am I saying this now? Because as you're going to find out, we're jumping right into Documaker Interactive - no creating a transaction, dropping data, invoking a web service, or anything else. Our assumption will be that the data is there, because it was provided by another test scenario and therefore should be tested there.
It will keep our test scenario smaller and easier to manage. While we're on the subject of test scenarios, I should point out that you can use the file system to your advantage here as well - since you're going to have a data file out there with all the control parameters for your scenarios, you can also create an attachments folder and use it to store any test documents that you will be attaching in Documaker Interactive (keeping in mind our plan to segregate test scenarios by component, we'll assume this attachment is coming from a user desktop, or provided by an external system). As mentioned above, we're going to use the OpenScript component of OATS in combination with the Java Robot class. If you have used Documaker Interactive before, you know that it uses a plugin called WIPedit to facilitate data entry onto documents. Part of the process for test script creation includes the ability to record user interaction with a browser, which then generates OpenScript code that you can customize. The OpenScript recording capability will capture user interaction with web components, but cannot capture events within WIPedit, so for those we will use the Robot class to programmatically generate keyboard and mouse input. The screen shot below illustrates the area of differentiation between web components and WIPedit - note that the WIPedit area is everything below the toolbar, inclusive of the form set tree and the document preview window: When recording your scripts, you'll need to note which input events (keyboard/mouse) are occurring that aren't going to be captured by the recording. In the screen shot above, I have clicked on Zoom Normal, which is a web component as it's in the toolbar. When I go back to edit the recorded script, I'll need to programmatically move the mouse and simulate clicks from the point of departure from web components.
Here's a code snippet of how this will work:

     oracle.oats.scripting.modules.functionalTest.common.api.internal.types.Point p =
          web.element("{{obj.ODEE_Interactive.NewDoc_Document Tab_Zoom_Normal button}}").getElementCenterPoint();

Once the position of the Zoom Normal button is captured, I need to move the pointer 40 points down and 40 points left using the Java Robot object to place the mouse pointer on the document:

     robot.mouseMove(p.x-40, p.y+40);

Now we'll execute a right-click to expose the context menu, move the pointer, and execute a left-click to select the "Check Required Fields" menu item:

     // Right click
     robot.mousePress(InputEvent.BUTTON3_MASK);
     robot.mouseRelease(InputEvent.BUTTON3_MASK);
     // Move to the first option in the right-click menu, "Check required fields"
     robot.mouseMove(p.x-40+87, p.y+40+13);
     // Left click
     robot.mousePress(InputEvent.BUTTON1_MASK);
     robot.mouseRelease(InputEvent.BUTTON1_MASK);

There! From here we can continue to flesh out the remainder of the scenario until the test case is completed. This means populating any fields with data (e.g. from your data file to simulate user input), submitting for approval, generating previews, and the like - whatever is required for your test case. A special footnote: dialog boxes generated from WIPedit are detectable by OpenScript, so it is not necessary to use the Robot class to interact with those elements. Have fun putting together your scenarios - when you're done, it's time to execute the tests with OATS. A few pointers here: ErrorScreens - your OATS scripts can store a screenshot of browser windows at the time an error occurs during execution of a test scenario, which is quite helpful in seeing what's happening from a user perspective. Screen captures are stored in this directory and are named according to the release, build, environment, and scenario undergoing testing. Note that this particular naming convention is specific to Documaker QA's testing scripts, so you don't have to replicate this as-is.
OpenScriptLogs - logs for the test are stored in this subdirectory. Every activity, along with the values for the web fields, gets logged. Logs can be used for troubleshooting in the event of a test failure. If multiple iterations of the test are run on the same environment, release, and build, the log gets appended. When the environment, release, and/or build changes, a new log file is created. This file gets initialized when the main script is executed. TestReports - the test report of each successful test run is stored in the TestReports subdirectory. This file gets initialized when the main script is executed. The test report is in *.xls format. If the test run is aborted or stopped for any unknown reason, the test report is not generated; the log file in OpenScriptLogs will hold the record up to the last successful test step executed. I hope you've enjoyed this glimpse into the world of regression testing, and that you were able to glean something useful that you can implement within your own environment. If you need assistance with regression testing, OATS, OpenScript, or any of the other technologies or concepts mentioned herein, please head over to the Oracle Forum and post a query. Until next time!



Connecting MQSeries to ODEE with JMS Bridge

Welcome to another edition of the Documaker Tech blog! Today we'll be showing how to connect MQSeries queues to Oracle Documaker Enterprise Edition (ODEE). Fair warning: this post will be acronym-intensive, and as such I will endeavor to present the full meaning of an acronym before using it. So let's dive in! If you're not familiar with the concept of a message queue, a cursory internet search will turn up a wealth of information. You need to know that message queues are used to provide synchronous or asynchronous communication between two or more processes. Synchronous (sync) communication means that the sender will wait for the receiver to acknowledge receipt of the message (also known as a response), whereas asynchronous (async) means that the sender will not wait for a response. Async is also known as "fire-and-forget", or FAF. It is also possible to have multiple senders and receivers using the same queue - that is, you might have multiple senders placing messages into a queue, and multiple receivers retrieving messages from the queue. It is this capability that is used to provide scalability within ODEE.

Internal Queueing

Internally, ODEE uses queues to distribute work units among the workers in a factory Assembly Line. Queues are necessary to support distributed work and provide part of the backbone of the infrastructure that enables ODEE to scale to enterprise-level processing capacity - the other part of the backbone being the database. ODEE follows the factory model for document production: an Assembly Line represents a document production configuration, which is serviced by multiple workers to generate documents. The workers perform different tasks, and scale independently of one another to accommodate changing workloads. ODEE has a defined path that each document request will follow in order to complete assembly. This path is orchestrated by the Scheduler worker, which notifies each successive pool of workers when work is available.
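As an aside, the producer/consumer hand-off behind all of this can be illustrated with plain java.util.concurrent. ODEE uses JMS queues for the real thing; this is only a toy in-process sketch, and every name in it is mine:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy illustration of fire-and-forget queueing between a sender and a worker.
public class QueueSketch {

    // A worker polls its queue for new work and processes whatever it finds.
    public static String processOne(BlockingQueue<String> queue) {
        String job = queue.poll(); // returns null if no work is waiting
        return job == null ? null : "DONE:" + job;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requestQueue = new ArrayBlockingQueue<>(10);
        requestQueue.put("Job-1"); // the sender enqueues and moves on (async)
        System.out.println(processOne(requestQueue)); // prints DONE:Job-1
    }
}
```

The key property is that the sender never waits on the worker - exactly the decoupling that lets ODEE scale worker pools independently.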
This notification is done using queues - here's an example: A Document Generation transaction is enqueued from an external application into the Request Queue. The Receiver, an ODEE worker, dequeues the transaction and starts a Job within ODEE. Note that there could be multiple Receiver workers, and only one is needed to pick up the request to start the transaction. The Scheduler, an ODEE worker, monitors the system for new Jobs, and notes the new Job. The Scheduler enqueues a message for the Identifier worker pool. All Identifier workers periodically check their request queue for new work - one of these instances will pick up the Job and mark it as in process. The Identifier worker completes its task with this Job, and updates the system accordingly. The Scheduler, ever watchful, notes that the Identifier phase of this Job has completed, and so notifies the next pool of workers that need to service the Job. This Scheduler->queue->worker->update process repeats until the Job is completed. The entire process typically takes place in a second or two (or on decent hardware, sub-second!).

External Queuing

In addition to internal queues, ODEE uses queues externally as an integration point, enabling it to accept processing requests from other applications. In the default installation, these are JMS queues named ReceiverReq and ReceiverRes.

Queue Requirements

ODEE 12.4 and earlier Enterprise Edition releases use: Java Message Service (JMS) queues to distribute workload among Assembly Line factory worker pools; a Java Application Server (JAS) such as Oracle WebLogic Server (WLS) or IBM WebSphere Application Server ND (WAS); and JMS providers which implement the JMS 1.1 Specification - more precisely, WLS 10.3.6 and WAS ND. During ODEE installation, the deployment scripts will create the necessary artifacts within the target JAS. For WLS, this means a JMS Server and associated module and subdeployments will be created and configured automatically.
For WAS, this means the associated components will be created and configured on the WAS Service Integration Bus (SIB). The resulting software deployment is configured to utilize the chosen JAS queues.

Integration

During a recent implementation I was presented with a design decision: how to integrate ODEE with IBM WebSphere MQ (aka MQSeries), to extend interoperability to a customer's application landscape that was already using MQSeries? ODEE can use MQSeries for its queuing infrastructure, provided the connectors have been configured to activate JMS capability within MQSeries. In this particular situation we wished to avoid placing the internal queuing infrastructure on MQSeries for a number of reasons (cost and proximity of the MQ host to the ODEE environment, to name two) - so we chose a different approach: use the WLS JMS implementation for internal queuing and MQSeries for external queuing, and support an out-of-the-box configuration. Conveniently, this solution is already provided out of the box with Oracle WebLogic Server - some minimal configuration will connect the MQSeries queues with the external integration queues ReceiverReq and ReceiverRes. Let's start with a few assumptions: the physical MQSeries queues already exist - we will use REQQ and RESQ as our example queues; the MQ Queue Manager name, host, and port are known (QMGRNAME, QHOST, and 1480 are our respective values in this example - note that 1414 is the default, and we are using a non-default value); network paths from the WLS server to the MQSeries server exist and are open; WLS 10.3.6 will be used as the JAS for ODEE; and you have sufficient rights to connect and create objects.

Activating MQSeries JMS

First, we need to create a JNDI tree that references and binds the MQSeries artifacts (e.g. queues and connection factories). The JNDI tree can be file-based, LDAP-based, or JAS-based, depending on your needs. For the purposes of this post we'll assume a file-based JNDI tree.
MQSeries includes a tool called JMSAdmin, which is in MQ_HOME/Java/bin (MQ_HOME is the installation directory of MQSeries). In order to run this tool, you will need to modify the JMSAdmin configuration file, JMSAdmin.config. This file is located in the same directory as the tool itself, and you can edit it with any text editor. Set the following values:

     INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
     PROVIDER_URL=file:/c:/mq_jndi

The directory specified in the PROVIDER_URL setting must be created before you attempt to start the JMSAdmin tool - otherwise, the tool will fail! Now you can run the tool by executing MQ_HOME/Java/bin/JMSAdmin.bat or MQ_HOME/Java/bin/JMSAdmin.sh. Note that the tool uses a proprietary command protocol which is documented here. In the tool, you will execute the following steps:

1. Define the references to the queues. Note: it is not required to use a different local name (e.g. "MQREQ" or "MQRES"); in fact, it could be the same as the physical queue name.

     InitCtx> Def q(MQREQ) queue(REQQ) qmgr(QMGRNAME) host(QHOST) port(1480)
     InitCtx> Def q(MQRES) queue(RESQ) qmgr(QMGRNAME) host(QHOST) port(1480)

2. Define the reference to a queue connection factory:

     InitCtx> Def qcf(MQQCF)

3. Display the context, inspect the output, and end.

     InitCtx> dis ctx
     Contents of InitCtx
       .bindings    java.io.File
       a MQREQ      com.ibm.mq.jms.MQQueue
       a MQRES      com.ibm.mq.jms.MQQueue
       a MQQCF      com.ibm.mq.jms.MQQueueConnectionFactory
     4 Object(s)  0 Context(s)  4 Binding(s), 3 Administered
     InitCtx> end

As I mentioned, it is possible to also create an LDAP-based or JAS-based JNDI tree, but we'll explore that in another post. For now, let's continue using the file-based JNDI tree.

Preliminary Setup

First, you'll need to obtain some JAR files from your MQSeries installation and add them to the ODEE domain.
Locate the following files and copy them to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/lib (where MIDDLEWARE_HOME is the WLS installation directory): com.ibm.mq.commonservices.jar, com.ibm.mq.defaultconfig.jar, com.ibm.mq.headers.jar, com.ibm.mq.jar, com.ibm.mq.jms.Nojndi.jar, com.ibm.mqjms.jar, connector.jar, dhbcore.jar, fscontext.jar, jms.jar, jndi.jar, providerutil.jar. Once placed, you'll need to restart the domain (e.g. the ODEE WLS AdminServer).

Add MQSeries to WebLogic

Next, we'll add our MQSeries configuration to WLS as a Foreign JMS Provider. Make sure WLS is running, and open a browser to the administration console (http://hostname:port/console). In the console, use the left-hand pane to navigate to Services->Messaging->JMS Modules. Locate the JMS Module installed with ODEE - usually it's called AL1Module - and click it. Click the New button, select Foreign Server from the list of available options, and then click Next. Give the Foreign Server a name (e.g. MQSERIES), then click Next, accept the default targeting (to jms_server), and click Finish. A. Click on the Foreign Server you just created. You will need to define some additional parameters for your Foreign Server: JNDI Initial Context Factory - set this to the same value we used in JMSAdmin.config, that is, com.sun.jndi.fscontext.RefFSContextFactory. JNDI Connection URL - set this to the same value we used in JMSAdmin.config, that is, file:/c:/mq_jndi. Click Save. B. Click the Destinations sub tab and on the next screen, click New. Here we will define the Foreign Destinations (recall we created these with the JMSAdmin tool), which requires three parameters: Name - this is the internal name of the MQSeries queue, used only for display purposes. Set to MQREQ, to keep things simple. Local JNDI Name - set to MQREQ. This can be anything, as it is used locally and not on the MQSeries side, but I recommend using the same name as the next parameter. Remote JNDI Name -
Must be set to the name of the queue defined in JMSAdmin, e.g. MQREQ. Click Ok. Repeat the above step to create another Foreign Destination, this time for MQRES. C. Click on the Connection Factories sub tab, and then click New. Enter the following settings to define the Foreign Connection Factory: Name - this is the internal name of the MQSeries queue connection factory, used only for display purposes. Set to MQQCF, to keep things simple. Local JNDI Name - set to MQQCF. This can be anything, as it is used locally and not on the MQSeries side, but I recommend using the same name as the next parameter. Remote JNDI Name - must be set to the name of the connection factory defined in JMSAdmin, e.g. MQQCF. Click Ok. At this point, you should be able to navigate to Environment->Servers->jms_server, click on View JNDI Tree in the WebLogic console, and see the two queues and the queue connection factory listed. If not, the Foreign JMS Server references could not be created - usually an indication that either the required MQSeries JAR files are not present in the ODEE domain, or the JNDI Connection URL is not accessible. Check your log files for additional information.

Bridging the Connection from MQ

At this point, we have added the MQSeries queues as foreign JMS queues to our ODEE domain in WLS. What remains is to bridge the default external integration queues ReceiverReq and ReceiverRes to the foreign queues. To do so, back in the WLS Console, click on Services->Messaging->Bridges. Click New. We are creating the bridge for messages coming from MQSeries - enter the following properties: Name - this is for viewing purposes only; call it BRIDGEFROMMQ. Selector - not required; leave blank. Quality of Service - this determines how the bridge tracks messages and ensures they are delivered (e.g. in case of a possible missed delivery, it can resend the message). For this demonstration, choose Exactly Once. Initial State - tick the Started box. Click Next.
Click New Destination. We are creating the Source destination for our BRIDGEFROMMQ bridge, so we'll need to define the source queue: Name - this is for viewing purposes only; call it FROMMQ_SOURCE. Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name). Adapter Classpath - leave blank. Connection URL - leave blank. Connection Factory JNDI Name - set to MQQCF. Destination JNDI Name - set to MQREQ. Click Ok. You should now see FROMMQ_SOURCE selected in the dropdown. Click Next. In the Messaging Provider selection, choose Other JMS Provider. Click Next. Click New Destination. We are creating the Target destination for our BRIDGEFROMMQ bridge, so we'll need to define the queue: Name - this is for viewing purposes only; call it FROMMQ_TARGET. Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name). Adapter Classpath - leave blank. Connection URL - leave blank. Connection Factory JNDI Name - set to jms.al1.qcf - this is the queue connection factory of the target destination, which is the ReceiverReq queue. The name I've chosen here is the default installation name. Destination JNDI Name - set to jms.al1.receiverreq. Click Ok. Choose FROMMQ_TARGET in the dropdown. Click Next. In the Messaging Provider selection, choose WebLogic Server 7.0 or Higher. Click Next. Choose jms_server as the target, click Next, then click Finish. We're almost done!

Bridging the Connection to MQ

As you might have guessed, we've created the bridge from MQ to WLS, and now we need to create the bridge from WLS to MQ. In the WLS Console, click on Services->Messaging->Bridges. Click New. We are creating the bridge for messages going to MQSeries - enter the following properties: Name - this is for viewing purposes only; call it BRIDGETOMQ. Selector - not required; leave blank. Quality of Service - this determines how the bridge tracks messages and ensures they are delivered (e.g. in case of a possible missed delivery, it can resend the message). For this demonstration, choose Exactly Once. Initial State - tick the Started box. Click Next. Click New Destination. We are creating the Source destination for our BRIDGETOMQ bridge, so we'll need to define the source queue: Name - this is for viewing purposes only; call it TOMQ_SOURCE. Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name). Adapter Classpath - leave blank. Connection URL - leave blank. Connection Factory JNDI Name - set to jms.al1.qcf. Destination JNDI Name - set to jms.al1.receiverres. Click Ok. You should now see TOMQ_SOURCE selected in the dropdown. Click Next. In the Messaging Provider selection, choose WebLogic Server 7.0 or Higher. Click Next. Click New Destination. We are creating the Target destination for our BRIDGETOMQ bridge, so we'll need to define the queue: Name - this is for viewing purposes only; call it TOMQ_TARGET. Adapter JNDI Name - select eis.jms.WLSConnectionFactoryJNDINoTX (note: if using XA, select the XA adapter name). Adapter Classpath - leave blank. Connection URL - leave blank. Connection Factory JNDI Name - set to MQQCF. Destination JNDI Name - set to MQRES. Click Ok. Choose TOMQ_TARGET in the dropdown. Click Next. In the Messaging Provider selection, choose Other JMS Provider. Click Next. Choose jms_server as the target, click Next, then click Finish. That's it! Make sure your changes have been activated, and the requisite WLS server(s) restarted. To test, deposit a message in the MQREQ queue (it should take the same XML input in SOAP format as the doPublishFromImport web service method).
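Before building that message, note that the extract data must be Base-64 encoded for the <cmn:Binary> node. Assuming Java 8 or later, that is a one-liner - a minimal sketch (class and variable names are mine, purely illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: Base-64 encode extract data for the <cmn:Binary> node of the
// doPublishFromImport SOAP payload.
public class PayloadSketch {

    public static String encodeExtract(String extractXml) {
        return Base64.getEncoder()
                     .encodeToString(extractXml.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String extract = "<extract><field>value</field></extract>";
        // The encoded string is what goes between the <cmn:Binary> tags.
        System.out.println("<cmn:Binary>" + encodeExtract(extract) + "</cmn:Binary>");
    }
}
```

Any language with a Base-64 encoder works equally well here; the only requirement is that the receiver can decode the bytes back into your extract file.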
Here’s an example - note where the input extract XML should be placed in Base-64 encoded format:

     <soapenv:Envelope
          xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
          xmlns:tns="oracle/documaker/schema/ws/publishing"
          xmlns:pubcmn="oracle/documaker/schema/ws/publishing/common"
          xmlns:v1="oracle/documaker/schema/ws/publishing/doPublishFromImport/v1"
          xmlns:cmn="oracle/documaker/schema/common"
          xmlns:req="oracle/documaker/schema/ws/publishing/doPublishFromImport/v1/request">
       <soapenv:Header/>
       <soapenv:Body>
         <tns:doPublishFromImportRequest>
           <tns:doPublishFromImportRequestV1>
             <pubcmn:timeoutMillis>90000</pubcmn:timeoutMillis>
             <v1:JobRequest>
               <req:Payload>
                 <req:Transaction>
                   <req:Data>
                     <cmn:Content>
                       <cmn:Binary>**replace with base-64 encoded extract data**</cmn:Binary>
                     </cmn:Content>
                   </req:Data>
                 </req:Transaction>
               </req:Payload>
             </v1:JobRequest>
             <v1:ResponseProperties>
               <!--cmn:ResponseType>Attachments</cmn:ResponseType-->
               <cmn:ResponseType>JOB_ID</cmn:ResponseType>
             </v1:ResponseProperties>
           </tns:doPublishFromImportRequestV1>
         </tns:doPublishFromImportRequest>
       </soapenv:Body>
     </soapenv:Envelope>

After a moment, check the MQRES queue for a response message. You may uncomment the <cmn:ResponseType> node with the Attachments value if your system is configured to return PDF output. Additional configuration may be necessary depending on your specific system and requirements - consult with an ODEE and/or MQSeries subject matter expert and you'll be on your way to integrated messaging in no time!



Automating Oracle Documaker Interactive and WIP Edit Plug-In Using OpenScript

In this post, we detail the steps our Oracle Documaker QA team uses to automate some regression test cases, specifically Documaker Interactive and WIP Edit Plug-in, using Oracle Functional Testing and OpenScript. This is the first in a series of posts focused on Documaker regression testing. Please note: this blog post does not explain how to install or configure Oracle Documaker Interactive (DI), Oracle Documaker WIP Edit Plug-in, or Oracle Application Testing Suite (OATS). Installation and configuration instructions for Documaker Interactive are included in the Oracle Documaker Enterprise Edition (ODEE) Installation Guide. WIP Edit Plug-in installation instructions are in the Documaker Web-Enabled Solutions User Guide. Both of these guides are available on the Oracle Technology Network (OTN) under Documaker on the Oracle Insurance Documentation site. The blog post series "ODEE Green Field (Windows)" also provides detailed information on Oracle Documaker Enterprise Edition (ODEE) installation and Documaker Interactive.

What is regression testing?

Regression testing is the process of retesting software after changes are made to ensure that the changes have not broken any existing functionality. Why bother to perform regression testing? Oftentimes, tech companies release new software features with much fanfare. There's nothing more irritating for users than discovering that those new features have broken existing functionality - especially if that functionality is critical to their operations. When that happens, business users must wait for the software developers to come up with a fix. If that fix breaks something else, users must report the problem to software developers and spend more time waiting. And the cycle continues. The users are stalled, and their organization can't benefit from the new features as they wait.
Meanwhile, the software developers are spending so much time troubleshooting and fixing bugs that they are prevented from working on new features, enhancing existing features, and making the software more beneficial for users. Regression tests can be executed against the entire system (soup to nuts) or against specific products or areas. Quality assurance (QA) staff, software, or a combination of the two may conduct regression testing.

More on manual testing

In manual testing, the QA team follows a written test case, which includes action steps and expected results. Manual tests can be useful to help familiarize new users with the product and workflow. However, there are drawbacks to the manual method. Depending on the nature of the test, the process can quickly become tedious and monotonous, which may lead to QA staff overlooking problems. Situations that require QA staff to run manual regression tests on multiple operating systems or in multiple browsers for multiple builds during the development cycle leave the project even more vulnerable to oversight.

More on automated testing

In automated testing, automation software is used to create and execute automation scripts. Automated testing eliminates the potential for mistakes made during monotonous, repetitive manual testing. And because you can run these tests anytime, day or night, you have more time to increase your test coverage. Automated testing can run unattended on different machines and operating systems, which frees up time for users to spend on other important tasks such as functional testing of new features. Keep in mind that not all tests are well suited for automated regression testing. For example, if an interface is subject to frequent changes, manual testing is the best option until the interface stabilizes.

Oracle Documaker Interactive

Oracle Documaker Interactive is the interface used to create and edit documents for distribution. It's one of the components in Oracle Documaker Enterprise Edition (ODEE).
Oracle WIP Edit Plug-in

The WIP Edit Plug-in is used in conjunction with Documaker Interactive. It is a browser-based plug-in that lets you create, edit and submit transactions in a WYSIWYG (what you see is what you get) format.

Oracle Application Testing Suite (OATS)

Oracle Application Testing Suite, or OATS, is an integrated testing solution. It consists of these integrated products:

- Oracle Functional Testing - automated functional and regression testing of web applications
- Oracle Functional Testing Suite for Oracle Applications - functional and regression testing of Oracle packaged applications
- Oracle Load Testing - scalability, performance and load testing of web applications
- Oracle Load Testing Suite for Oracle Applications - scalability, performance and load testing of Oracle packaged applications
- Oracle Test Manager - test process management, including test requirements management, test management, test execution and defect tracking

Oracle OpenScript

OpenScript is used in Oracle Functional Testing and Oracle Load Testing. It enables you to create automated tests for web applications. You can record, script or manually create tests using the different frameworks that the tool supports. OpenScript can also be integrated with the test management component of Oracle Test Manager (OTM); you can initiate tests from either OTM or OpenScript. The OpenScript Workbench has multiple views, including a GUI (graphical user interface) view and a Java code view. You can record and play back tests in the GUI view, and script or edit tests using the Java code view. More information on OATS is available here.

Test Frameworks

A test framework is the set of assumptions, concepts and tools that provide support for automated software testing. It includes the processes, procedures and environment in which automated tests will be designed, created and implemented, and in which the results are reported. There are several different frameworks and scripting techniques.
They include:

- Linear
- Structured
- Data-driven
- Keyword-driven
- Hybrid (two or more of the above are used)
- Agile automation framework

For this post, we'll focus on two of these frameworks: keyword-driven and data-driven.

Keyword-Driven Frameworks

Keyword-driven frameworks look very similar to manual test cases. In a keyword-driven framework, the functionality of the application under test is documented in a table as well as in step-by-step instructions for each test. A keyword describes an action and its input data. Keyword-driven frameworks require the development of data tables and keywords, which are independent of the test automation tool used to execute them and of the test script code that "drives" the application under test and the data. Let's use this example: column A contains the keyword "Enter Client." Enter Client is the functionality being tested. The remaining columns contain the data needed to execute the keyword. To enter another client, you would add another table row.

Data-Driven Frameworks

Data-driven frameworks are driven by test data. Test scripts are built so they will work with different sets of data covering different scenarios, without test script changes. In a data-driven framework, test input and output values are read from data files (e.g., data pools, ODBC sources, CSV files, Excel files, DAO objects, ADO objects, etc.) and are loaded into variables in the scripts. Variables are used for both input values and output verification values. Data-driven frameworks are preferable for applications that use large amounts of input data, such as Documaker Interactive. More information on testing frameworks is available here.
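To make the two frameworks concrete, here is a minimal, tool-agnostic sketch in Python (not OpenScript; the "Enter Client" keyword echoes the example above, and the action function, data, and names are invented for illustration). A keyword table maps action names to functions, and the same script is then driven by rows of external test data:

```python
import csv
import io

# --- Keyword-driven: a table of keywords mapped to action functions ---
def enter_client(app, first, last):
    # In a real OpenScript test this would drive the UI; here we just record it.
    app["clients"].append(f"{first} {last}")

KEYWORDS = {"Enter Client": enter_client}

def run_keyword_row(app, row):
    keyword, *args = row
    KEYWORDS[keyword](app, *args)   # dispatch on the keyword column

# --- Data-driven: the same script fed from a CSV-style data source ---
TEST_DATA = io.StringIO(
    "keyword,first,last\n"
    "Enter Client,Ada,Lovelace\n"
    "Enter Client,Grace,Hopper\n"
)

app = {"clients": []}
for record in csv.DictReader(TEST_DATA):
    run_keyword_row(app, [record["keyword"], record["first"], record["last"]])

print(app["clients"])
```

Adding another client is a new data row, not a script change, which is exactly the property that makes the data-driven style attractive for data-heavy applications like Documaker Interactive.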



Documaker Integration: A Primer

Integration of a document automation system with external systems is a critical phase in any Documaker implementation. Whether Documaker is being deployed in a "green field" scenario or into a mature enterprise, the fact remains that integration must occur on one or more levels. My goal in this blog post is to explore some concepts and methods required to define and establish the integrations available in a typical implementation of Oracle Documaker Enterprise Edition, with the hope that this may help readers think about possible solutions for their own use.

Establish the Foundation

As I mentioned, Documaker has two possible implementation targets: the green field and the established enterprise. Each target presents both unique and common challenges that must be overcome during the project. In a green field implementation, the software packages being implemented are typically new from top to bottom. This type of target is usually encountered when a brand new company, business unit, or product line is being built from the ground up with new systems, procedures, and software. Implementing within the established enterprise refers to the process of upgrading or replacing an existing software package, or implementing new software within an existing infrastructure. This type of implementation is most often used with established companies, units, or product lines and serves to enhance the functional offerings of the enterprise to serve the needs of the business. In both cases, where Documaker is concerned there will be integrations, either to new or to existing software, processes, and procedures, which form the basic foundation of the enterprise. In order to perform the subsequent steps, we must know the capabilities of that foundation. Therefore, the final activity in this step is to establish a functional catalog of the capabilities offered by each component of the system. The table below illustrates one such method of capabilities cataloging.
Table 1, Capabilities Catalog

| ID | System | I/O Type | Method | Format | Notes |
|----|--------|----------|--------|--------|-------|
| CAP-1 | Policy Administration System | Output | File system delivery | Flat file, fixed-record format | Schema dictated by product. Can add new record types and data. |
| CAP-2 | Billing System | Output | Web service | XML, fixed schema | XSL-controlled schema, fixed element names. Cannot add new elements. |
| CAP-3 | Content Management System | Input | SOAP web service | Fixed schema | Product-controlled schema. New data elements can be added. |

This table illustrates the input and output capabilities of the various elements in the foundation system, and how they may (or may not) be changed to accommodate additional business requirements.

Data Flow Mapping

Having defined the capabilities, we can now further refine business requirements and (hopefully) obtain a match between the requirements and capabilities. When performing a Documaker implementation, part of the business requirements analysis will involve form and data analysis. This analysis defines, among other things, the data needed for document triggering, data mapping, and controlling other aspects of document automation. This information constitutes requirements for systems that feed information to Documaker ("upstream" systems). Similarly, there may be a need for Documaker to feed information to systems after processing ("downstream" systems), such as archive repositories, publishing systems, delivery systems, and the like. By now you can probably see that defining the upstream and downstream requirements is directly related to the table of capabilities we developed in step 1. From here we can further refine our data requirements for each system.
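As a side note, a capabilities catalog like Table 1 can also be kept as structured data so it can be validated and queried during analysis. A minimal Python sketch (not a Documaker artifact; the field names mirror the table columns and the entries are the illustrative ones from Table 1):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    id: str       # e.g. "CAP-1"
    system: str   # the foundation system
    io: str       # "Input" or "Output" relative to that system
    method: str   # transport/delivery mechanism
    fmt: str      # data format
    notes: str = ""

catalog = [
    Capability("CAP-1", "Policy Administration System", "Output",
               "File system delivery", "Flat file, fixed-record format",
               "Schema dictated by product"),
    Capability("CAP-3", "Content Management System", "Input",
               "SOAP web service", "Fixed schema",
               "New data elements can be added"),
]

# Simple query: which systems produce output that could feed Documaker?
upstream = [c.system for c in catalog if c.io == "Output"]
print(upstream)
```

Keeping the catalog in a queryable form makes the next step easier: each data requirement can be checked against a real capability entry instead of a prose table.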
The table below represents a simplified view into cataloging the data requirements for each system:

Table 2, Data Requirements

| ID | System | Capability Map | Element | Source | Notes |
|----|--------|----------------|---------|--------|-------|
| DR-1 | Content Management System | CAP-1 | Account Number | RECORD ID=100, FIELD=ACCT_NUM | Required for indexing output. |
| DR-2 | Mail Processing System | CAP-2 | Barcode | XML document (see barcode requirements) | Required for sorting mail pieces to obtain postal discounts. |

Once these two steps have been fully executed and mapped out, we'll have an accurate picture of all upstream and downstream data requirements, as well as a map of how those data elements can be captured and passed between systems.

Targeting Documaker

At this point, what we have defined is not specific to Documaker; the steps I've outlined above are very generic and could be applied to any software implementation in any enterprise where data interchange and software integration are required. The key is that we are now in a frame of mind where we can start to consider the integration capabilities of Documaker, and how they will map into a possible solution. What better way to do that than to present a capabilities catalog for Documaker Enterprise Edition, as we would do in an actual implementation! The table format I've chosen is slightly different from Table 1, as I have formatted it to represent capabilities specific to Documaker.

Table 3, Documaker Input Channels

| Model | Transport | Type(s) | Notes |
|-------|-----------|---------|-------|
| Transactional Input | Hot folder, web service, direct queue | Flat file | Record layout dictated by upstream capability, informed by form requirements and downstream requirements. Only one layout supported per assembly line. |
| Transactional Input | Hot folder, web service, direct queue | XML | Schema dictated by upstream capability, informed by form requirements and downstream requirements. Only one schema supported per assembly line. |
| Consolidation | ETL to transactional input | Flat files, XML, databases, other files | An ETL tool such as Transall, a Documaker component, is used to consolidate multiple input sources into a single transactional input. |
| Interactive Augmentation | Documaker Interactive web service | Various | Documents processed in Documaker Interactive can be augmented with external data, which Documaker Interactive retrieves using a custom web service. This service accepts input from Documaker Interactive (transactional or user-entered data), performs operations to retrieve additional data, formats it if necessary, and returns it using a pre-defined schema. |
| Scripted Augmentation | DAL-ODBC, DAL-file | Database or file-based | DAL scripting can be used to obtain data from external sources to augment transactional data, or for use during processing as needed. This method is usually not recommended, as it can present problems when scaling across servers; the preferred models are Consolidation and Interactive Augmentation. |

The following table outlines the output capabilities of Documaker.

Table 4, Documaker Output Channels

| Model | Transport | Type(s) | Notes |
|-------|-----------|---------|-------|
| Archiver | s/FTP; WebCenter Content (UCM); file system | Publication streams, metadata files | A standard output delivery channel included with Documaker. The Archiver component transmits documents generated by Documaker to various destinations, and can also emit accompanying metadata files in arbitrary (user-defined) formats. Metadata includes any information related to the transaction that is carried in the ODEE table structure. A typical use is submitting documents to archive systems with indexing data in the metadata file. |
| Customization | Various | Publication streams, metadata | The Archiver component has an open framework for adding new output destinations (transports) in addition to those shown above. A typical use case is a custom integration to a system whose requirements are not satisfied by the default transports, e.g. invoking a web service to transmit publications and metadata. |
| Distribution | Web services; Dashboard; queries | Publication streams, metadata | Documaker supports multiple methods of self-service integration. While the Archiver and its customization hooks are a push model, the Distribution channel is a pull model, suitable for self-service integration via web services, database queries, or the Dashboard UI. |
| Notification | SMS | Metadata | A standard output delivery channel that notifies a recipient that a document has been generated. |
| Publication | Email; printer | Publication streams | A standard output delivery channel used for pushing documents directly to attached printers and for sending documents via email. |
| Instrumentation | JMX | Statistics | Often overlooked, but can be configured to allow ODEE Factory Workers to output statistical information. |

While the preceding tables are not exhaustive of all the possible integration points with Documaker, they are the most common. Some of these channels aren't considered traditional integration points, but they are useful when thinking about systems management of an enterprise solution that includes Documaker. I hope you find this information useful!


Why Projects Fail

I'm going to divert a little from my usual technical orientation here and discuss something that's critical to any software implementation: project failure. To properly frame the discussion about why projects fail, we first need to define a project. The Project Management Institute ("PMI") defines a project as "a temporary group activity designed to produce a unique product, service, or result" (What Is Project Management?). This is in contrast to routine activities that use repetitive processes to generate a product or service, such as manufacturing or customer service. PMI further stipulates that a project is temporary because the beginning and end, scope, and resources are finite and defined to achieve a singular goal. Timeline, scope, and resources are the three factors that exert direct influence over the success of a project. Each must be in balance with the other two for the project to meet its goals: if the scope becomes too large, the project will exceed the available resources (personnel or budget); similarly, if the timeline becomes compressed, the scope and resources will not be able to deliver on time. With the triangle of time, resources, and scope in mind, we can say that a successful project is one that is delivered within the timeframe, using the identified resources, and meets the agreed-upon scope.

Having defined a project, we can then explore the concept of project management. Again, PMI provides a broad definition: "Project management, then, is the application of knowledge, skills, and techniques to execute projects effectively and efficiently" (What Is Project Management?). The collection of knowledge, skills, and techniques that have been refined over time to produce reliable, repeatable results is called a methodology. A project management methodology is the primary tool used by project managers to ensure the successful delivery of a project.
There are many different methodologies, such as Scrum, Agile, Waterfall, and SDLC, to name a few (Project Management Methodologies). The project manager uses the methodology to guide the project to completion by controlling the three legs of the project triangle. With this framework in mind, we can explore the failure points of the triangle. The scope leg of the triangle involves the definition, acceptance, and management of the requirements of a project. A project with ill-defined requirements that do not meet the needs of the users, users who are unable to achieve consensus on requirements, or pressure to execute before requirements are defined is set on a path to failure (Why Projects Fail). The presence of ill-defined requirements can also be a symptom of two broader problems that can lead to project failure: lack of management buy-in and poor project communication. Project sponsors, executives, and leaders must be in agreement on the high-level deliverables of a project. When these stakeholders are not in alignment, the project scope will be unclear and the project will be in jeopardy. Similarly, the users must also be aware of the goals of the project; otherwise, the requirements they devise may be at odds with the parameters of the project. These two factors can be managed with clear, concise communication at all points.

With the scope leg of the triangle properly defined, the time and resource legs can then be constructed. It is possible that the project timeline may be set before the scope has been identified, in which case the project will require additional resources to complete on time. When scope and resources are not properly aligned, the project may experience cost overruns or late delivery, both of which constitute failure. One method to avoid misalignment is to use proof-of-concept or pilot programs. These programs can help determine the viability of an approach to meeting scope requirements within time and resource constraints, which will improve the chances of success.
When the three legs of the project management triangle have been properly defined, agreed upon, communicated, and resourced, the remaining step is to execute the project using the methods prescribed by the chosen methodology. Herein also lie additional failure points for the project, one of which is reactive management. If a project starts to exceed the constraints of time, scope, or resources, risk management should be applied to return the project to the boundaries of control. Proactive risk management includes proper identification, analysis, and mitigation of project risks before they occur. Failure to perform proactive risk management will cause problems to be addressed in a reactive manner, which will result in schedule slippage and budget/resource overuse (Why Projects Fail).

In conclusion, we have identified multiple factors that can negatively affect the outcome of a project:

- Ill-defined requirements;
- Lack of management buy-in;
- Poor communication;
- Misalignment of scope and resources; and
- Ineffective risk management.

I have also presented some common ways that these pitfalls can be avoided to help guide a project to success. I hope you find this information useful and can avoid project failures in your own endeavors.

References:

What Is Project Management? (n.d.). Retrieved October 15, 2014, from http://www.pmi.org/About-Us/About-Us-What-is-Project-Management.aspx

Project Management Methodologies. (n.d.). Retrieved October 15, 2014, from http://www.tutorialspoint.com/management_concepts/project_management_methodologies.htm

Why Projects Fail: Avoiding the Classic Pitfalls. (2011, October). Retrieved October 15, 2014, from http://www.oracle.com/us/solutions/018860.pdf


Web Services

How to Publish with EWPS - Quick Guide

Getting Started with Documaker and Web Services

Today I'll be addressing a common question: how do I get started publishing with web services and Documaker? The aim of this guide is to give you a quick rundown of how you can be up and running with publishing via web services and Documaker within a few hours, or even minutes! Before we get started, I think it's pertinent to discuss web services. Web services are, put simply, a method for communication between two software components over a network. Prior to web services, there were various protocols, languages, and proprietary methods for performing this sort of communication. Web services provide a (hopefully) simpler way to facilitate this communication by being language-independent and by providing information on how a system can request and receive data (e.g. what parameters are needed in the request and what the structure of the data response will look like). I use the analogy that it's like an IKEA pictograph manual for assembling a bed: it's generic enough that almost anyone can interpret the instructions and (hopefully) obtain the same results given the same set of inputs. There are two major programming models used when providing and consuming web services:

REST-based (Representational State Transfer) - this model represents each web document and process as a web resource with a unique URI. HTTP methods and headers are used to specify the actions that manipulate the web resources. Because plain HTTP is used, message exchange can be performed in a multitude of formats (JSON, HTML, XML, etc.), and SOAP, WSDL, and the other WS-* standards are not needed. RESTful services use the HTTP protocol, which means the available methods are GET, PUT, POST, and DELETE; it also means that requests can be bookmarked and responses cached. Security in RESTful services is implemented in the HTTP infrastructure.
SOAP-based (Simple Object Access Protocol) - this model exposes web service interfaces through XML documents called WSDLs, which have URLs. Message exchange is in SOAP format, a document specification represented in XML. SOAP is suitable for large applications using complicated operations and for applications requiring sophisticated security, reliability, and other features supported by the WS-* standards. SOAP is also useful where protocols other than HTTP must be used. SOAP is used by financial and insurance companies, government agencies, and more.

Documaker offers two sets of web services: EWPS and DWS. For all Documaker Standard Edition and pre-12.0 versions, the only flavor available is EWPS. For Documaker Enterprise Edition and 12.0 forward, you have a choice of EWPS or DWS. Determining which set of web services to use is documented in the ODEE Administration Guide:

EWPS - These web service methods offer a number of ways to gather information about the MRL, locate documents or field information, and retrieve a form during transaction processing. EWPS also lets you update a document in WIP, publish a document from an extract file, or publish a document stored in WIP. EWPS supports SOAP and JSON protocols.

DWS - These web services, introduced in Documaker version 12.0, let you submit a job that tells the system to publish a document from an input or extract file. DWS also provides a generic web service method, doCallIDS, that lets you work with Docupresentment using specific request types. Because of Documaker Web Services' concrete schema, you should use the doCallIDS method with the Business Process Execution Language (BPEL) to facilitate workflow within the iDocumaker application. This method can also be used by BPEL outside of iDocumaker, or by other web service clients, to make specific requests to IDS or Documaker, and it should be used if your request needs to be asynchronous. DWS supports SOAP protocols.

With that out of the way, let's begin.
There are four components to using Documaker with web services:

- Documaker (and a resource library)
- Docupresentment
- a web service application server
- a web service test tool

We'll address each individually - let's get started! Note: for any images shown below, you can click the image to have it display in a new window. I shake my fist at restrictive CSS! :-)

Documaker/Docupresentment

You'll need a functioning Documaker environment - that means Documaker and a Master Resource Library (MRL). The version of Documaker doesn't matter too much, so let's assume it's Standard Edition, or any version pre-12.0. You're also going to need Docupresentment - if you don't already have this in your enterprise, you can download a trial license version from eDelivery. Log in to your Oracle account, accept the license and restrictions, then search for the product pack "Oracle Insurance Applications". Select the appropriate OS platform (for the purposes of this demonstration, I'm going to assume you're running on Microsoft Windows). You should see search results similar to this:

Click on the link for the Oracle Documaker Standard Edition media pack, and you'll see the contents of the media pack:

Click the Download button, and about 207MB later, you'll have a zip file ready for installation. Extract the contents of the zip file, and you'll see a number of files. Open up the readme file and have a look - in it you'll find important information such as links to documentation, release notes, bugs fixed in this version, and other useful things. Most useful is a description of what these files are:

- Docupresentment Server: a collection of applications, including IDS and EWPS, used to web-enable Documaker solutions.
- Docupresentment Client: core client files and example code for connecting to an IDS server solution.
- Documaker Shared Objects: a package of Documaker service routines that can be invoked while running under IDS.

What does all that mean?
It's pretty simple, really: Docupresentment Server and Documaker Shared Objects are the core server-side components that you'll need to install to have Docupresentment running. Docupresentment Client is a package that needs to be installed on any servers that will be submitting requests to Docupresentment - that is to say, Docupresentment's clients - for example, application or web servers that submit requests to Docupresentment. At this point, it's a good idea to define what Docupresentment is: Docupresentment is a request processor that is scalable and highly configurable. Once installed, you'll get a configuration that allows Docupresentment to process requests that are specific to Documaker. You can even make your own request types - but that's a post for another day. For now, you need only know that Docupresentment can receive a request, process it, and send a response. Go ahead and perform the installation routines for all three packages, starting with ODDS (Oracle Documaker Docupresentment Server), then ODSO (Oracle Documaker Shared Objects), and then ODDC (Oracle Documaker Docupresentment Client). You can accept the defaults for all installation questions - the one key point is to make sure the EWPS component of ODDS is checked for install (it is by default). At this point you should have two directories for Docupresentment: Server (c:\docserv), hereafter called [ODDS], and Client (c:\docclnt). We won't be doing anything further with the Client directory, but let's talk about it for a moment. These instructions assume that you will run the web services host (an application server like Tomcat) and Docupresentment on the same machine; in a real environment that might not always be the case - you might have application servers running in a cluster, and back-end services like Documaker and Docupresentment running on totally different machines.
That's acceptable (and a good practice), so in those cases you would install the Docupresentment client on the application servers only. Next, we'll do a little Docupresentment configuration. First we have to work around some Windows items - starting with Windows Vista/Windows Server 2008, the default dynamic port range for TCP/IP ports changed. It so happens that the new default lower end of this range is the same port used by the installation routines for Docupresentment, so we need to make a small change to accommodate this. In the configuration file, locate the lines that look like this:

    <entry name="http.url">http://[address]:49152/...</entry>

and change the port number in the URL from 49152 to 49200. This line may appear more than once, so change all occurrences! Note that the address portion might be a different IP address or a host name - no matter, just change the 49152 to 49200. You might be wondering why I chose port 49200. It happens that this port was free on my test system. You can determine what ports are in use on your test system too! Open a command prompt, type "netstat -a", and press Enter. Have a look at the ports in use, and make sure the number you pick isn't shown anywhere in netstat's output. Save your changes to the configuration file and close it. Next we'll move to [ODDS]\docserver.xml, where you're going to perform a very similar change from:

    <entry name="port">49152</entry>

to:

    <entry name="port">49200</entry>

You're done - save the file and close it. While we're here, let's open [ODDS]\dap.ini for editing. You're going to add a value to tell Docupresentment about your configuration, so add these three lines at the top of the file:

    [ Config:DMS1 ]
    INIFile=c:\fap\mstrres\dms1\fsiuser.ini
    INIFile=c:\fap\mstrres\dms1\fsisys.ini

The value of [ Config:DMS1 ] should match a Config name in your FSIUSER/FSISYS INI files. Refer back to your existing Documaker MRL and have a look in the INI files to determine the proper values - I always make the libraryId and name match a <CONFIG:_> value.
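As an aside on the port check above: if you'd rather not eyeball netstat output, you can test a candidate port programmatically. A minimal Python sketch (Python isn't part of the Docupresentment stack - this is just a convenience, and 49200 is the example port from above):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; if the bind succeeds, nothing else is using it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(49200))  # True if nothing is bound to that port
```

If this prints False, pick another port outside the dynamic range that the check reports as free, and use that number in the configuration files instead.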
The values for the INIFile= options should be the paths and filenames of your Documaker FSIUSER and FSISYS INI files. Next, add this configuration to the list of configurations in the DAP.INI file by locating the INI group [Configurations] and adding Config=DMS1 to the list. Save and close the file. If you haven't started Docupresentment, go ahead and do so now; if you already started it, stop and start it again. To start Docupresentment, run [ODDS]\docserver.bat. To stop it, locate the console window for Docupresentment and press CTRL-C. You'll be prompted with "Terminate batch job (Y/N)?", to which you can reply with Y. The last bit of configuration we need to do for Documaker/Docupresentment is to the master resource library (MRL). You'll need to go into both the FSIUSER and FSISYS INI files and change all paths to absolute paths. This is because Docupresentment needs absolute paths for locating resources, since it can be running from any location and relative paths may not be relative to Docupresentment. Next you'll need to add a bit of configuration that's needed for Docupresentment to run your MRL. In either the FSIUSER or FSISYS (recommended), add the following items:

    < RPDRunRP >
      Executable = c:\fap\dll\gendaw32.exe
      Directory = c:\fap\mstrres\dms1
      UserINI = fsiuser.ini
    < RPRun >
      BaseDirectory = c:\fap\mstrres\dms1\
      BaseLocation = http://localhost/doc-data
      CacheTime = 60
      Debug = Yes
      SingleStepGendata = Yes
      Startup = c:\fap\dll
      UserIni = c:\fap\mstrres\dms1\fsiuser.ini
    < IDSServer >
      doPublishAttachment = Yes

Naturally, you'll want to change any of the paths to match your particular environment. There is one item we need to discuss: SingleStepGendata. It's quite possible that your MRL is set up to run in multi-step mode. That's fine - just make sure you set this option to No if that's the case. Otherwise, if you're running single step, set this option to Yes.
If you're not sure, locate the JDT file referenced by the < Data > AfgJobFile= INI option and look at the job-level rules for InitPrint and the form set rules for NoGenTrnTransactionProc. If you find both, then you're running single step. If you don't find both, then it's possible you're running multi-step, in which case you might need to reconfigure your JDT for single step. Have a look at the Documaker Standard Edition Administration Guide in the section entitled "Using Single Step Processing".   Tomcat Now you'll need a software implementation of the Java Servlet and JavaServer Pages specifications in which you can host the EWPS web services. I am going to assume that you'll use Apache Tomcat, but you can use any supported Servlet/JSP implementation. You could try alternatives like GlassFish or JBoss, but they are unsupported and therefore not recommended. I chose Tomcat because it's familiar, solid, and lightweight. Visit the Apache Tomcat site and select the appropriate installer for your platform - installation is simple; all you have to do is unzip the file into a directory of your choosing. At this point you should have a directory for Tomcat (e.g. c:\apache-tomcat-6.0.41), hereafter called [TOMCAT]. Let's deploy the EWPS components into Tomcat. Deployment is simple - the web services are contained in a WAR (web archive) file at [ODDS]\webservices\ewps-axis2.war. To deploy, simply copy this file into [TOMCAT]\webapps. Start up Tomcat by executing [TOMCAT]\bin\startup.bat, and the WAR file will be exploded into [TOMCAT]\webapps\ewps-axis2. Hereafter we'll call this directory [EWPS]. Note: if you get a warning about a missing JAVA_HOME or JRE_HOME when you try to start Tomcat, you need to create an environment variable. You can do this by opening startup.bat in a text editor and adding the following at the top of the file: set JRE_HOME=c:\progra~1\java\jre1.7.0_55 Replace the value to the right of the = with the path to your JRE. 
If you don't have a JRE and instead have a JDK, use JAVA_HOME= instead. If you don't have a JDK or a JRE, go to www.java.com and download/install one. Save the file and start Tomcat again and you should be up and running. Once the deployment is completed, you can peruse the directory structure created - there are two files in particular that you will be interested in: [EWPS]\WEB-INF\xml\ewps.config.xml and [EWPS]\WEB-INF\classes\log4j.xml. We won't be doing anything with the latter file, but in the future if you want to change the logging mechanisms used by EWPS or enable debugging output, this is one place you'll come back to. For now, open up the ewps.config.xml file and have a look. There's not a lot to this file at first glance, but there are some important things to note. You might already see the libraryConfigurations node, and notice there's a commented-out configuration. You'll be uncommenting this libraryConfiguration and changing the name - make it match the <Config: _> name you used in the Docupresentment configuration. After you're done it should look similar to this: <libraryConfigurations> <libraryConfiguration libraryId="DMS1" name="DMS1"/> </libraryConfigurations>  After making this change, stop Tomcat by pressing CTRL-C in the Tomcat window, and then restart it by issuing the [TOMCAT]\bin\startup.bat command. You can validate the deployment of EWPS and Tomcat by opening a browser and navigating to http://localhost:8080/ewps-axis2 (replace localhost with your server name or IP address if it's different). You should see the EWPS welcome screen. Click on the Services link to see the available services, and you should immediately notice DocumentService, which contains most of the methods or operations we are interested in. You can explore this further at your leisure, but for now note the Service EPR or endpoint: http://localhost:8080/ewps-axis2/services/DocumentService. We'll need this later.  
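For reference, ewps.config.xml can list more than one library if you host multiple MRLs. A hypothetical sketch (the DMS2 entry is invented purely for illustration; each libraryId/name must match a Config: group known to Docupresentment):

```xml
<libraryConfigurations>
    <!-- must match the [ Config:DMS1 ] group added to DAP.INI earlier -->
    <libraryConfiguration libraryId="DMS1" name="DMS1"/>
    <!-- hypothetical second library, shown only to illustrate the pattern -->
    <libraryConfiguration libraryId="DMS2" name="DMS2"/>
</libraryConfigurations>
```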
Web Services  Once you've completed this, you're done - all that's left is to submit a web service request! You will need an additional piece of software to accomplish this task. If you're going to work on developing applications that will interface with Documaker via web services, I recommend using an IDE that supports web services and the language(s) of the applications you'll be developing. Some examples of this would be JDeveloper, Eclipse, NetBeans, or Visual Studio. I've used all of them and they are all very competent IDEs, although my personal preference is JDeveloper for Java-based projects and Visual Studio for .NET applications. If you're not going to do application development, then you're probably ok using a web service tool like SoapUI - my personal favorite. The instructions from here will illustrate testing the web service with SoapUI, but the process is generally similar across toolsets. 1. Download and install your IDE or toolset. 2. Open the toolset and start a new SOAP-based Project. 3. Give the project a name ("EWPS Test") and import a WSDL - remember that endpoint from our Tomcat validation? Here's where you'll use it. Enter the endpoint followed by "?wsdl".  4. Click OK and the project is created. Expand the project and primary endpoint ("EWPSDocumentServicesSoap") and locate/expand the doPublish method. You can see that SoapUI automatically created a request based on the WSDL definition of the method. Have a look at the XML document that represents the SOAP request and notice that it follows a defined schema (envelope, header, body).   5. Replace the contents of the request with the following. Note the LibraryId element, which you will need to change to match your libraryId from above. Additionally you'll need to base-64 encode the extract file to send along with the request. 
You can easily do this with online web tools; however, SoapUI will allow you to right-click at the point you wish to insert the file and select "Insert File as Base-64 Encoded". You can then locate the file, and SoapUI will encode and insert it for you. Be sure to remove any extraneous text or spaces between the <ImportFile></ImportFile> tags!

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
   <soap:Body>
      <doPublishRequest xsi:type="doPublishReq_Import" xmlns="http://webservices.docucorp.com/ewps/schema/2005-12-01">
         <LibraryId>DMS1</LibraryId>
         <DistributionOptions xsi:type="DistributionOptions_PREDEFINED" source="PREDEFINED">
            <DocucorpArchive>false</DocucorpArchive>
            <Priority>REALTIME</Priority>
            <Channel xsi:type="Channel_IMMEDIATE">
               <PublishType>PDF</PublishType>
               <Disposition location="ATTACH"/>
               <DistributionType>IMMEDIATE</DistributionType>
               <Recipient xsi:type="RecipientDistributionItem" name="ALLRECIPIENTS"/>
            </Channel>
         </DistributionOptions>
         <SourceType>IMPORT</SourceType>
         <Import>
            <ImportFile xsi:type="ImportFile_ATTACH" location="ATTACH">INSERT BASE64 ENCODED EXTRACT DATA HERE</ImportFile>
         </Import>
      </doPublishRequest>
   </soap:Body>
</soap:Envelope>

6. Submit the request by clicking the right-facing triangle, and await your glorious results. On the left side you see the request and on the right side you'll see the response. Notice the <Result>Success</Result>? This means it all worked, and if you look a bit further into the <Documents> node, you can see a base-64 encoded PDF document was attached. Now all you need to do is copy the contents of the <Document> node and base-64 decode it to a file. 
Being a Mac user, I use the OpenSSL toolkit to decode base-64 from the command line. There are also online tools and other options available to you - a quick internet search will show you many. Just ensure that you don't decode sensitive documents or data on the open interwebs! Once decoded, save the output as a binary file, rename it with the appropriate extension (e.g. ".pdf"), and then open it with your favorite PDF reader. You've done it! Now What? At this point you are ready to explore the other methods and operations available to you with EWPS. While the WSDL is quite useful in giving you a starting point for creating requests for each of the methods, you really must refer back to the EWPS documentation, which is quite detailed in explaining the requisite parameters for each operation. Given that the EWPS method design is abstract in nature, it can be a bit confusing at first, but stick with the documentation and the models created by SoapUI and you should come out on top. If you run into any problems, remember that most issues you'll encounter are Documaker/Docupresentment configuration-related, or MRL-related. EWPS and the web services host are usually very thin layers that do not add much to the complexity of the solution. One final pointer - make sure you work out any MRL issues before testing web services. Use the testing tool within Documaker Studio to make sure you are producing the proper output given your inputs, then move on to test the same with web services. When all else fails, you can also look for assistance at the Oracle Forum. I wish you luck in your web servicing endeavors! 
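The OpenSSL round trip mentioned above can be sketched from the command line - a minimal example, assuming openssl is on your path (the extract content and filenames here are stand-ins for your real files):

```shell
# Create a stand-in extract file for demonstration (use your real extract in practice)
printf '<Extract><Field>demo</Field></Extract>' > extract.xml

# Encode to a single base-64 line suitable for the <ImportFile> element
# (-A writes one unwrapped line, avoiding stray whitespace inside the tag)
openssl base64 -A -in extract.xml -out extract.b64

# Decoding a response payload works the same way in reverse; here we simply
# round-trip our own file. For a real response, save the <Document> contents
# to a file and decode it out to the appropriate extension (e.g. output.pdf).
openssl base64 -d -A -in extract.b64 -out roundtrip.xml
```

If a decoded file won't open, check that no line breaks or extra characters crept into the base-64 text before you decoded it.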

Getting Started with Documaker and Web Services


ODEE Green Field (Windows) Part 5 - Deployment and Validation

And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show with installing ODEE on a green field server, and walked you through the basic installation of the Oracle 11g database. In part 2, I covered the installation of the WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that is the deployment of the ODEE components onto the database and application server - so let's get to it! DATABASE First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to log in to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g. Run SQLPLUS as SYSDBA and execute dmkr_admin.sql:  sqlplus / as sysdba @dmkr_admin.sql Then execute dmkr_asline.sql and dmkr_admin_correspondence_example.sql. If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish). APPLICATION SERVER Next, we'll deploy the WebLogic domain and its components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to log in to the application server in your green field environment. 1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts. 2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set the locations of MIDDLEWARE_HOME and ODEE_HOME. If you have used the defaults you’ll probably just need to change the E: to C: and that’s it. Save the changes. 3. 
Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you’ll probably just need to change E: to C: and that’s it. Save the changes. 4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors, correct them, and rerun the script. 5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again, this should run to completion. 6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.  a. Note: if you saw database connection errors, your database name and connection type probably don't match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and login with the weblogic credential you provided in the ODEE installation process. b. Once you're logged in, open Services→Data Sources. Select dmkr_admin and click Connection Pool.  c. The end of the URL should match the connection type you chose. If you chose ServiceName, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<serviceName> and if you chose SID, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<SIDname> d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com". (It does not need to actually resolve in DNS.) An example SID is just a name, e.g. IDMAKER. e. Save the change and repeat for the data source dmkr_asline.  f. 
You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines and make the appropriate change, then save the file.  7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd. 8. In the same directory, execute create_users_groups_correspondence_example.cmd. 9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output confirming the query succeeded. 10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials. You can prevent this by providing the credentials in the startManagedWebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credentials will be stored in cleartext. To start each server, type in the command shown. a. Start the JMS server: startManagedWebLogic.cmd jms_server b. Start the Dashboard/Documaker Administrator: startManagedWebLogic.cmd dmkr_server c. Start Documaker Interactive for Correspondence: startManagedWebLogic.cmd idm_server SOA Composites  If you're planning on testing the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section. 1. Stop the servers listed in the previous section (step 10) in the reverse order that they were started. 2. Run the domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd. 3. Select Extend and click Next. 4. 
Select the iDocumaker Domain and click Next. 5. Select the Oracle SOA Suite extension (this may automatically select other components, which is OK). Click Next. 6. View the Configure JDBC resources screen. You should not make any changes. Click Next. 7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken - go back to the Configure JDBC resources screen and check your service name/SID. 8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next. 9. Connections should test successfully. If not, go back and fix any errors. Click Next. 10. Click Next to pass through Optional Configuration. 11. Click Extend. 12. Click Done. 13. Open a terminal window and navigate to/execute: ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd. 14. Start the WebLogic servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see the previous section, step 10. Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again. 15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1. 16. Open a browser to http://localhost:7001/console and login. 17. Navigate to Services→Data Sources and select DMKR_ASLINE. 18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source. 19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd. That's it! (As if that wasn't enough?) DOCUMAKER Deploy the sample MRL resources by navigating to/executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory services: Start→Run→services.msc. Locate the service named "ODDF xxxx", right-click, and select Start. 
Note that each Assembly Line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line, this allows for easy scripting to control all the services if you choose to do so). Repeat for the Docupresentment service - note that each Assembly Line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from). So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities: clustering WebLogic services, configuring WebLogic for redundancy, configuring Oracle 11g for RAC, adding additional Factory servers for redundancy/processing capacity, setting up a real MRL (instead of the sample resources), testing Documaker Web Services for job submission, and more! I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. 
For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy 
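To recap the database piece of this post's deployment, the SQL*Plus session looks roughly like this - an illustrative transcript rather than a script to run blindly (paths assume the default ODEE_HOME from this series; run from ODEE_HOME\documaker\database\oracle11g):

```
C:\oracle\odee_1\documaker\database\oracle11g> sqlplus / as sysdba
SQL> @dmkr_admin.sql
SQL> @dmkr_asline.sql
SQL> @dmkr_admin_correspondence_example.sql
SQL> -- optional language packs, e.g. Spanish:
SQL> @dmkr_asline_es.sql
```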



ODEE Green Field (Windows) Part 4 - Documaker

Welcome back! We're nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic and the Oracle 11g database - all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation consists of two procedures. The first covers the installation itself: running the installer and answering some questions. This lays down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we? Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next. Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default, however you can put this wherever you want. Click Next. Select the database type and database connection type – note that the database name should match the value used for the connection type (e.g. if using SID, then the name should be IDMAKER; if using ServiceName, the name should be “idmaker.us.oracle.com”). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for the Oracle 11g Advanced Compression option, do not enable it! 
Terrible, terrible calamities will befall you if you do! Click Next. Enter the Documaker Admin user name (the default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next. Enter the Assembly Line user name (the default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don’t – see the ODEE Installation Guide for specifics). Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria for password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password. Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (a WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer. You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to the AdminServer). 
You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user, however this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next. Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory/directories that is/are monitored by ODEE for jobs to process. Click Next. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here so if the recipient clicks "reply", this is where it will go). Click Next. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next. If you will be using Oracle UMS for sending MMS/text messages, you can enable and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next. Verify the installation details and click Install. 
You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion). Allow the installation to progress... Click Next. You can save the response file if needed (e.g. in case you forgot to save it earlier!) Click Finish. That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation. 



ODEE Green Field (Windows) Part 3 - SOA Suite

So you're still here, are you? I'm sure you're probably overjoyed at the prospect of continuing with our green field installation of ODEE. In my previous post, I covered the installation of WebLogic - you probably noticed, like I did, that it's a pretty quick install. I'm pretty certain this had everything to do with how quickly the next post made it to the internet! So let's dig in. Make sure you've followed the steps from the initial post to obtain the necessary software and prerequisites! Unpack the RCU (Repository Creation Utility). This ZIP file contains a directory (rcuHome) that should be extracted into your ORACLE_HOME. Run the RCU – execute rcuHome/bin/rcu.bat. Click Next. Select Create and click Next. Enter the database connection details and click Next – any connection failure will show in the Messages box. Click OK. Expand and select the SOA Infrastructure item. This will automatically select additional required components. You can change the prefix used, but DEV is recommended. If you are creating a sandbox that includes additional components like WebCenter Content and UMS, you may select those schemas as well, but they are not required for a basic ODEE installation. Click Next. Click OK. Specify the password for the schema(s). Then click Next. Click Next. Click OK. Click OK. Click Create. Click Close. Unpack the SOA Suite installation files into a single directory, e.g. SOA. Run the installer – navigate to and execute SOA/Disk1/setup.exe. If you receive a JDK error, switch to a command line to start the installer. To start the installer via command line, do Start→Run→cmd and cd into the SOA\Disk1 directory. Run setup.exe -jreLoc <path to JRE>. Ensure you do not use a path with spaces – use the ~1 short-name notation as necessary (directory names longer than 8 characters are shortened, so “Program Files” becomes “Progra~1” and “Program Files (x86)” becomes “Progra~2” in this notation). Click Next. Select Skip and click Next. Resolve any issues shown and click Next. 
Verify your Oracle home locations. The defaults are recommended. Click Next. Select your application server. If you’ve already installed WebLogic, this should be automatically selected for you. Click Next. Click Install. Allow the installation to progress… Click Next. Click Finish. You can save the installation details if you want. That should keep you satisfied for the moment. Get ready, because the next posts are going to be meaty! 
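Assembled, the command-line fallback described above looks like this as a session - the JDK folder name is a placeholder in short-name notation, so substitute your own install location:

```
C:\> cd SOA\Disk1
C:\SOA\Disk1> setup.exe -jreLoc C:\Progra~1\Java\jdk1.6.0_45
```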



ODEE Green Field (Windows) Part 2 - WebLogic

Welcome back to the next installment on how to install Oracle Documaker Enterprise Edition onto a green field environment! In my previous post, I went over some basic introductory information and we installed the Oracle database. Hopefully you've completed that step successfully, and we're ready to move on - so let's dive in! For this installment, we'll be installing WebLogic 10.3.6, which is a prerequisite for ODEE 12.3 and 12.2. Prior to installing the WebLogic application server, verify that you have met the software prerequisites. Review the documentation – specifically, you need to make sure that you have the appropriate JDK installed. There are advisories if you are using JDK 1.7. These instructions assume you are using JDK 1.6, available from Oracle. [UPDATE: The FMW compatibility matrix has been updated and you should note that JDK 1.8 is not supported on WebLogic 10.3.6. Do not attempt to install and use WebLogic 10.3.6 with JDK 1.8.] The first order of business is to unzip the installation package into a directory location. This procedure should create a single file, wls1036_generic.jar. Navigate to and execute this file by double-clicking it. This should launch the installer. Depending on your User Account Control rights you may need to approve running the setup application. Once the installer application opens, click Next. Select your Middleware Home. This should be within your ORACLE_HOME. The default is probably fine. Click Next. Uncheck the Email option. Click Yes. Click Next. Click Yes, then Yes and Yes again (yes, it’s quite circular). Check that you wish to remain uninformed and click Continue. Click Custom and Next. Uncheck Evaluation Database and Coherence, then click Next. Select the appropriate JDK. This should be a 64-bit JDK if you’re running a 64-bit OS. You may need to browse to locate the appropriate JAVA_HOME location. Check the JDK and click Next. Verify the installation directory and click Next. Click Next. 
Allow the installation to progress… Uncheck Run Quickstart and click Done. And that's it! It's all quite painless - so let's proceed to set up SOA Suite, shall we? 



ODEE Green Field (Windows) Part 1 - Intro & Database

This post is the first in a series in which I will detail the steps needed to create an Oracle Documaker Enterprise Edition sandbox. This installation will be a "green field" install, which means that all the prerequisites will need to be located, installed, and configured. This includes the database, application server, and all ancillary components. The goal of this post series is to detail the steps required to get you up and running with a sandbox in as little time as possible. Before we get started, a few housekeeping items. You should know that these instructions are presented for a Windows system that will use Oracle Database and Oracle WebLogic. These instructions should be considered as supplemental to the official product documentation, and the information herein is presented as-is, and I am not responsible should it not work for you, etc, etc, (more legalese here). So that said, let's get started! Preparation If your goal is to have a sandbox, you're going to need a box! Almost quite literally - you'll need to acquire a machine, virtual or physical, and it needs to have a functional Windows operating system on it. You will need to have an appropriate amount of disk space as well. You can probably get by with 50GB, but you'll need to keep an eye on disk usage over time by pruning log files and trimming your database files. Go with as much space as you can get. A recommended system specification can be found in the ODEE 12.3 System Requirements document. If you're installing all the sandbox components onto a single server, be aware that you will need some stout hardware (an Intel dual-core i5 or i7) with at least 12GB of RAM, preferably 16GB. Next you'll need to obtain the software. Oracle employees/partners have several options available; customers need to use MOS or eDelivery. Give yourself plenty of time to acquire the software - it's about 6.7 GB of stuff you'll need to download. 
The following items are required:

Oracle Database 11g R2: There are two downloads for the Oracle Database, Part 1 & Part 2. First, open the Oracle database page. Accept the license agreement, then scroll down to the Oracle Database 11g Release 2 section (note: you can also use Oracle Database 12c, however these instructions are specific to 11gR2). Locate the appropriate section for your operating system (these instructions are based on the Windows 2012 64-bit version) and download Parts 1 and 2.

WebLogic 10.3.6: Navigate to here, accept the license agreement, and scroll down to the Oracle WebLogic Server 10.3.6 section. Download the generic installer (do not use the Windows installer).

SOA Suite: There are two downloads for the Oracle SOA Suite, Part 1 and Part 2. Navigate to here and accept the license agreement. Select Generic: 64bit JVM from the dropdown in the Free Oracle SOA Suite 11g Installations box under Release 11gR1. Click the + symbol to expand the Prerequisites & Recommended Install Process section. You're going to download several items from here. Click the appropriate links to download Part 1 and Part 2 of the SOA Suite under Product Installation. Don't leave this page yet - locate the Repository Creation Utility (RCU). There's only one part to download, so go ahead and grab it.

JDK: You'll need both a 64-bit and a 32-bit JDK. You should be OK to use the latest version of the JDK, but use nothing older than 1.6! Download the Windows versions of the 32-bit and 64-bit JDKs for your desired version. I'm only going to link to the Java SE site; from there you'll have to click the appropriate link for the JDK download.

Oracle Documaker Enterprise Edition: Obviously you're going to need this - it's why you're here, right? To obtain this software, login to the Oracle Software Delivery Cloud. Accept any restrictions/license agreements, then tick the Programs box under Filter Products. Then, in the Product dropdown, type Documaker.
Select Oracle Documaker Enterprise Edition from the list (the latest version - 12.4 as of this writing). Under Select Platform, choose Microsoft Windows x64 (64-bit). Click Continue. Expand the list of download items by clicking the triangle to the left of the checkbox in the Available Release column. Untick all boxes except Oracle Documaker Enterprise Edition, then click Continue. Accept any terms and restrictions, and proceed to download. Once you've downloaded all this stuff, you'll need to stage it for installation. My recommendation is to create a directory for each component and unzip the download files into the appropriate directories. This will keep everything organized and ready to install. Next you need to do a little planning, but relax, it's not that hard. You're going to want to plan where everything will be installed, and we're going to refer to those directories with special names:

ORACLE_HOME - this is the base directory where we'll install everything, including the database. My recommendation is to place this in C:\ORACLE on your sandbox server. Or D:\ORACLE. Or E:\ORACLE (you get the idea?)

MIDDLEWARE_HOME - this is the base directory where we'll install WebLogic. This should typically reside inside the ORACLE_HOME - my recommendation is C:\ORACLE\MIDDLEWARE.

ODEE_HOME - this is the base directory where we'll install Documaker. The system default for this is C:\ORACLE\ODEE_1 and we'll assume that's what you'll use.

JAVA_HOME - this is the location for the JDK. Normally this goes in C:\Program Files\Java or C:\Program Files (x86)\Java for the 64-bit and 32-bit versions, respectively.

Accept the system defaults for these items or know where you have them installed. Now that we've established these defaults, let's review our installation process. We're going to do a very straightforward, all-defaults installation in this order: database, application server, SOA components, and then ODEE.
One final thing before we kick off the process: I recommend you follow these steps in order. If you're installing a multi-tier system you could do some of the components simultaneously, but make sure you review all the steps before you embark in this fashion - you don't want to be halfway through the SOA Suite installation and find out that you need the database and application server components to continue! Also, make sure you execute all steps as an administrative user.

Database

The first step, if you haven't already done it, is to unzip the downloaded installation files into a single directory location. If you simply unzip both files into the current directory, you should end up with a single directory called database. Once extracted, navigate to the database directory and run setup.exe. Note: depending on your system's settings, you may need to approve the User Account Control dialog to continue. Once the installer has initialized, you'll be given the option to sign up for critical update alerts. Normally this would be a good idea; however, for a sandbox system it's not a requirement. Uncheck the email options and click Next. Click Yes because you really mean it! On the following screen, select the "Create and Configure a Database" option. If you're really into database creation, you could select the install-only option, but really, you're on a fast track, right? We don't have the luxury of time for delving into the intricacies of Oracle database installation and configuration. That's another post. On the next screen we need to determine the database class, and that's Server class. This gives us access to options that aren't available in the Desktop class, so it's important to make the correct choice. Choose the single database instance. While RAC installs are supported, the installation and configuration of a RAC system is beyond the scope of this article. While it would be nice to have a typical install, we must choose Advanced.
On the following screen, choose the language(s) you wish to install. This doesn't affect the system other than changing the languages present in the database web interfaces and tools. Next we need to choose the appropriate database edition. We have several choices here, and I recommend using the top-most selection that your license can accommodate. The solution will work with any of the editions; however, there are options such as compression and de-duplication that are supported only on the Enterprise or Standard editions. For our sandbox purposes, choose Enterprise. Now we need to key in the location of the ORACLE_HOME that we defined earlier. As you type, you'll see the value for the product location change accordingly. You can change the product location, but I would advise leaving it at the default. Select the General Purpose database, and we'll be on our way. Next we need to name the database we're going to create. You should provide a global database name as well as a SID (service identifier). In my example below, I've used idmaker.us.oracle.com as the global database name and IDMAKER as the SID. We need to specify a few settings here. You can change the memory allocations as necessary, but for sandbox purposes I'll leave my settings automatically managed. The important setting is on the Character Sets tab: you'll need to set the database to use the Unicode character set (AL32UTF8). This is very important, so double-check that you've set it appropriately before continuing. We're going to use Database Control for management. If your environment has other options (e.g. Grid Control) you can use that, or you can set up email notifications if you like, but for the sandbox we won't enable either of those. Our sandbox will use the standard File System for storing database files. Note the message about separating database files and software - you can do this if your system allows for this configuration.
If your sandbox will ultimately be used for a development system you might consider enabling automated backups; however, you can do this at a later time as well. For now we'll skip it. For my sandbox system I will use the same password for all my database administration accounts. If your IT policies have greater restrictions you can choose a different option to suit your needs. Finally, we have finished most of the pre-configuration work! If you think you might do this installation several more times, consider saving the response file. It's also a good idea if you want to keep a record of the settings you used when configuring the system. Once you're ready to conduct the installation, click Finish. Now is a great time to get a coffee, catch up on email, read a book, or take a walk outside. Once the files have been created on the file system, the database will be created. Go ahead and finish your coffee/email/book/walk. Finally, we are complete! Take a look at your summary screen and take note of your Database Control URL - you'll want this later, so store it in your favorites. This concludes Part 1 of our sandbox installation. We'll continue in the next segment with installing WebLogic!


Demystifying Docupresentment

It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But where's the adventure in that?  To get started, visit Oracle's E-Delivery site and acquire your properly-licensed version of Oracle Documaker.  Login to the e-delivery site and accept the license terms and restrictions.  Then you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you will be presented with a list of applicable products.  For our purposes today you will click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, the version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client -- somewhat cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was appealing to Oracle was our creative if not confusing use of acronyms, including using multiple acronyms to refer to the same piece of software.  I apologize in advance and will try to point these out along the way; hopefully you'll find some solace in the fact that I too must occasionally refer back to my cheat-sheet of products.  It's okay if you have a crib sheet - it can be our secret.
Here's the first entry on your sheet: IDS = Internet Document Server = Docupresentment. Once you've completed the installation, you'll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you!  You'll find it by default in ~/docupresentment/docserv.  Take a few minutes to familiarize yourself with the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the configuration file.  You'll be pleasantly surprised to find there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  As I mentioned before, I'm going to lift up the hood on Docupresentment, which means we're going to edit the file by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: please remember to back up any files before you make changes.  I am not responsible for any damage, outages, hair loss, stress fractures or anything else.  I may be guilty of bad puns, but that's about it. Go to your installation directory and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let's take a look at <section name="DocumentServer">: There are a few entries I'd like to discuss.  First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment.  Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable.
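For reference, here's a rough sketch of how these entries might be laid out inside docserv.xml. This is not copied from a shipped configuration - the executable name and values are illustrative - so consult ids_book.pdf and your own file for the authoritative names and defaults:

```xml
<configuration>
  <section name="DocumentServer">
    <!-- Executable run when Docupresentment starts (value illustrative) -->
    <entry name="StartCommand">java</entry>
    <!-- Arguments passed to the executable, discussed individually below -->
    <entry name="StartArguments">-Dids.configuration=docserv.xml
      -Dlogging.configuration=logconf.xml
      -Djava.endorsed.dirs=lib/endorsed</entry>
    <!-- Number of Docupresentment instances the watchdog will keep running -->
    <entry name="Instances">2</entry>
  </section>
</configuration>
```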
A few things to point out:

The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file.  You could change this if you like, but it's best to leave it alone.  Why?  Because when operations personnel call in for support 5 years from now, they may have long forgotten the original nomenclature of the configuration file, and when an Oracle Support technician asks for the docserv.xml file, the ops personnel may be left scratching their heads.

The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here).  You could rename it as well, but the same caveat mentioned above applies.

The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd-party Java libraries can be located for use with Docupresentment.  More on that in another post - and yes, you could change this, but see the caveat mentioned above.  Are you noticing a pattern here?

The <entry name="Instances"> entry allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2.

Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  When you fire up docserver.bat (docserver.sh for you Unix folks), the process that is launched is the watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these processes acts independently from the others, so if one crashes, it has no effect on its siblings.  To illustrate my point in a real-world scenario, consider a longboat manned by multiple oarsmen, directed by the coxswain.
If an oarsman goes overboard, passes out, or is otherwise unable to row, the rest of the oarsmen continue their work unabated, and a new oarsman is brought in to replace him, all under the watchful eye and leadership of the coxswain.  Such is the relationship of the watchdog service and the Docupresentment instances.  In the case of a crashed process, the watchdog will start up another instance so that the configured number of instances is always running.  The bottom line: an instance is a single Docupresentment process. And now, finally, to the settings which gave me pause on a not-too-long-ago implementation, and the impetus for creating That's The Way.  One handy feature of Docupresentment is the file watcher.  This component keeps an eye on standard configuration files such as docserv.xml and logconf.xml.  If these files are modified, Docupresentment will automatically restart itself (and all the instances) to load the changes.  You can configure the interval at which Docupresentment checks these files using the setting <entry name="FileWatchTimeMillis">.  By default the value is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or disable the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes.  Another point to consider: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you'll have to "restart" Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post). The next item up: <entry name="FilePurgeTimeSeconds">.
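Before moving on, here's a sketch of how the file-watcher setting (and the purge interval just teased) might appear in docserv.xml. The 12000ms figure is the default described above; the purge value is purely illustrative, so check ids_book.pdf for the actual default in your release:

```xml
<section name="DocumentServer">
  <!-- Interval for checking docserv.xml/logconf.xml for changes; 0 disables the watcher -->
  <entry name="FileWatchTimeMillis">12000</entry>
  <!-- How often, in seconds, the file cache is scanned for expired temp files (illustrative value) -->
  <entry name="FilePurgeTimeSeconds">300</entry>
</section>
```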
You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past their expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my 'fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be exceptionally taxed at the point of cache processing.  Finally, after all of this, we get to the last setting I'm going to address in this post: <entry name="FilePurgeList">.  The default is "filecache.properties", which establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out, but be careful not to delete or modify the files in any way.
Otherwise, Docupresentment may lose track of some temporary files, and they'll be left to gather the dull fuzz of electronic dust in the lonely corners of your server, not unlike that block of cheese I mentioned. I hope you've enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment, as I am most appreciative of feedback.  If you have ideas for other posts you'd like to see, please do let me know.  You can reach me at andy.little@oracle.com. 'Til next time!



By Way of Introduction...

With this inaugural post to Inside Document Automation, I'm going to introduce myself and what my aim is with this blog.  If you didn't figure it out already by perusing my profile, my name is Andy and I've been with Oracle (nee Skywire Software, nee Docucorp, nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation that accompanied the sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  I'll admit that perhaps I'm inflating the experience a bit, but I was a kid!  This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A.  Exciting times.  So fast-forward a bit, and I'm 12 years into a career in the world of document automation and publishing, working for, in my humble opinion, the best software company on the planet. With Inside Document Automation I hope to peek under the covers, go behind closed doors, lift up the hood and bang on the fenders of the tech space within Oracle Documaker.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness with each post as I shed a little light on the underpinnings of our software.  Feel free to comment, as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So read on!
