Last week I had the pleasure of presenting at the UK Cross Government Business Rules Group meeting on whether technology generally, and Oracle
Policy Automation specifically, can help improve draft policy and legislation. I thought it might be worthwhile to share some of those thoughts here.
The process of transforming natural language text into OPA's constrained rule format involves understanding the logical structure of the material you are working with, identifying the conclusions and conditions of each rule, and working out how each of these links to other sections of the policy material. In doing so, the process naturally highlights logical and structural errors and ambiguities that may not be immediately apparent to the reader. In my experience, even well-written policy and legislation usually contains a logical error or ambiguity that requires clarification from a policy expert every 2-5 pages.
In the early days of modeling rules, we started keeping a list of errors we found, and ended up with ~30 common legislative errors uncovered by modeling legislation in this format. For example, if a section mixes and/or logic, links to another section that no longer exists, or contains a loop between sections, OPA will immediately identify this during modeling, and in many cases will insist that the error be corrected before continuing.
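To make these three error classes concrete, here is a minimal sketch of the kinds of structural checks a rule-modeling tool can run. This is not OPA's actual engine; the rule representation (sections referencing other sections, and a flat list of and/or connectors per rule) is invented purely for illustration.

```python
# Illustrative structural checks over a set of rule sections:
# mixed and/or logic, dangling cross-references, and loops between
# sections. The rule representation here is hypothetical, not OPA's.

def mixed_connectors(connectors):
    """A rule joining conditions with both 'and' and 'or' at the
    same level is ambiguous without explicit grouping."""
    return len(set(connectors)) > 1

def dangling_references(rules):
    """Find sections that cite a section which does not exist."""
    known = set(rules)
    return {(sec, ref) for sec, refs in rules.items()
            for ref in refs if ref not in known}

def find_cycle(rules):
    """Detect a loop between sections (A relies on B relies on A)
    via depth-first search."""
    visiting, done = set(), set()

    def dfs(sec, path):
        if sec in done:
            return None
        if sec in visiting:
            return path[path.index(sec):]   # the loop itself
        visiting.add(sec)
        for ref in rules.get(sec, ()):
            cycle = dfs(ref, path + [ref])
            if cycle:
                return cycle
        visiting.discard(sec)
        done.add(sec)
        return None

    for sec in rules:
        cycle = dfs(sec, [sec])
        if cycle:
            return cycle
    return None

# Each section lists the sections its conditions refer to.
rules = {
    "s12": ["s14", "s15"],
    "s14": ["s12"],        # loop: s12 -> s14 -> s12
    "s15": ["s99"],        # s99 was repealed: dangling reference
}

print(mixed_connectors(["and", "or"]))   # True: ambiguous rule
print(dangling_references(rules))        # {('s15', 's99')}
print(find_cycle(rules))                 # ['s12', 's14', 's12']
```

The point of checks like these is that they run mechanically, every time the rules are saved, rather than relying on a reviewer noticing a broken cross-reference on page 40.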
Many people don’t realize that the structure and principles behind modeling rules were developed in consultation with a senior legislative drafter to help avoid many of these logical errors. It was important to the development team that, in allowing the rules to be modeled in a natural language format (Microsoft Word and Excel), we did not also encourage the rule modeler to create rules that failed to deliver clear and correct outcomes.
In other words, the design of OPA’s constrained rule format is specifically aimed at identifying and avoiding logical and structural errors or uncertainty.
Once rules have been modeled in OPA, there are a few techniques for identifying whether the draft policy or legislation achieves the desired policy outcome.
The Policy Modeling Debugger allows you to run through a single scenario to see which questions are asked, and how the outcome is determined for a single user. I’ve found that simply running through an OPA interview quickly identifies information that is poorly worded, unreasonable to collect from the target audience, or simply too onerous as a whole. An OPA interview is also useful for assessing whether the policy calculates the desired outcome for any given scenario. The decision report is automatically generated to show the reasons for the decision, so errors can be traced directly back to the exact section of draft policy or legislation.
The Coverage tools (version 10.x) allow you to check that your test cases execute every rule in the policy model. This is particularly useful for checking that every section of your policy or legislation has substantive effect. For example, I’ve seen draft legislation which categorized claimants in order to apply one of several formulas for calculating compensation, but one of the categories was worded so broadly that another category would never be applied.
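The "dead category" problem above can be sketched in a few lines. This is not OPA's Coverage tool itself; the ordered rules, predicates, and test cases below are invented to show the idea of recording which rules any test case ever reaches.

```python
# Illustrative coverage check: rules are tried in order, the first
# match wins, and we record which rules the test suite actually
# reaches. Rule names and predicates here are hypothetical.

rules = [
    ("any incapacity", lambda c: c["incapacity"] > 0),           # worded too broadly
    ("partial incapacity", lambda c: 0 < c["incapacity"] < 50),  # shadowed above
    ("no incapacity", lambda c: True),
]

def categorize(claimant, fired):
    """Return the first matching category, recording which rule fired."""
    for name, predicate in rules:
        if predicate(claimant):
            fired.add(name)
            return name

test_cases = [{"incapacity": v} for v in (0, 10, 49, 50, 100)]

fired = set()
for case in test_cases:
    categorize(case, fired)

unreached = [name for name, _ in rules if name not in fired]
print(unreached)   # ['partial incapacity'] -- a dead section in the draft
```

A coverage report that lists "partial incapacity" as never executed is exactly the prompt a drafter needs to go back and tighten the wording of the broader category.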
The What-If Analysis (version 10.x) and Excel Testing (cloud) features allow the policy modeler to create a series of test cases to see the effect of the rules on various scenarios. The data is entered into Excel, and the values automatically calculated by OPA appear in Excel column(s), allowing the tester to use Excel’s charting, highlighting and sorting capabilities to identify and analyze the effect of the policy on a range of scenarios, including highlighting unusual outcomes/payments.
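In the same spirit, here is a minimal sketch of batch what-if testing: a toy draft rule applied to a handful of scenarios, with unusual outcomes flagged for a human to inspect. The payment formula, taper rate, and scenarios are all invented for illustration; in practice the scenarios would come from a spreadsheet and the rule from the actual policy model.

```python
# Hypothetical what-if sketch: apply a toy draft compensation rule to
# a batch of test scenarios and flag unusual outcomes. The formula and
# scenario data are invented for illustration.

def weekly_payment(income, dependants):
    """Toy draft rule: base rate tapered by income, plus a per-dependant top-up."""
    base = max(0.0, 300 - 0.5 * income)
    return base + 50 * dependants

scenarios = [
    {"name": "single, low income",  "income": 100, "dependants": 0},
    {"name": "family, mid income",  "income": 400, "dependants": 2},
    {"name": "family, high income", "income": 900, "dependants": 3},
]

for s in scenarios:
    payment = weekly_payment(s["income"], s["dependants"])
    # Flag households still receiving money after the base has tapered to zero.
    flag = ""
    if 300 - 0.5 * s["income"] <= 0 and payment > 0:
        flag = "  <-- unusual: top-up paid despite fully tapered base"
    print(f"{s['name']}: {payment:.2f}{flag}")
```

Charting a few thousand rows like this (exactly what the Excel integration enables) is often the fastest way to spot a cliff, a spike, or a group the drafters never considered.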
While these techniques can provide some insight into the quality of the draft itself, they are limited in their ability to assess the overall policy impact on the draft’s target audience. The announcement earlier this year of Oracle In-Memory Policy Analytics, signals a significant leap in capability. The key difference here is twofold:
1 - The analysis applies to real-world data, so you can see the actual effect and budget outcomes of the draft policy. For example, you could identify that changes to a disability care scheme would cost the government an additional $1.1 million but disproportionately impact families in a particular region.
2 - The dashboard interface allows people unfamiliar with OPA to analyze and tweak the policy and compare policy options without changing the rules. Policy experts, management, committees, etc. within an organization could use the dashboard interface to produce charts comparing policy options using real data and real legislative rules, without installing OPA or modeling rules themselves.
The experts within your own organization are often the ones best placed to identify when a policy is likely to go awry. I’ve demoed a prototype to a few organizations, and the feedback I’ve received is that it has great potential to improve both the quality and the speed of the internal review process.
Numerous studies have looked at whether public consultation can contribute to the quality of draft legislation in a meaningful way. Some countries (e.g. Canada, the UK, the US, Australia and New Zealand) have actively involved citizens in reviewing draft policy and legislation, with varying success. With OPA, governments now have the option to quickly and accurately expose the legislation as an interactive questionnaire, allowing citizens or targeted interest groups to assess their own scenarios against the draft legislation and leave their comments on the outcome and experience. The average citizen may not have the time or inclination to read through dense legislation to determine how the changes will affect their circumstances, but I believe many would have the curiosity to answer a few questions to see how they are likely to be affected by a legislative change. An OPA interview can therefore serve to inform as well as to elicit public feedback on the draft policy itself.
So can OPA help improve draft legislation? Absolutely. It’s not going to tell you that your draft is a masterpiece or award you a gold star for effort, and it won’t catch every error you could possibly introduce. But it is another tool in your armory for improving the policy and laws that govern determinations, and ultimately for improving your overall customer experience.