This blog is a little different from my usual content on Oracle products. In this article, we give an overview of Ethereum wallets and their types. We start by explaining what wallet technology is, its role in blockchain applications, and how it works. Then we explain the two major types of wallets: deterministic and non-deterministic.

Due to the popularity of blockchain and decentralized finance, many applications that deal with digital currencies such as Bitcoin come with a feature called a digital wallet. A wallet allows users to store and manage their private keys, and it is also used for key generation. Depending on the platform on which users access and manage their wallet, there are four types of wallets: web, desktop, mobile, and hardware. Another way to categorize wallets is by how their keys are generated, which gives us deterministic and non-deterministic wallets. A non-deterministic wallet is considered a rather old style compared to a deterministic wallet. Modern deterministic wallets have advantages over non-deterministic wallets in many aspects, such as backups, security, data storage, accounting, auditing, and access controls.

Understanding the wallet technology

When we refer to something as a wallet, we assume that it holds money. In fact, an Ethereum wallet does not store any ether or tokens. Wallets keep the private keys that are needed for signing transactions. An Ethereum wallet, just like any cryptocurrency wallet, can be classified as a non-deterministic or deterministic wallet.

Understanding non-deterministic and deterministic wallets

What's the difference between these two types of wallets? They differ in how the private keys are generated and backed up. A non-deterministic wallet generates private keys that are random and independent of each other (as on the left-hand side of the following diagram). There is no particular pattern to how the keys are derived, so we need to create a backup each time a new key is generated. In a deterministic wallet, on the contrary, the private keys are related because they all originate from a single master key called the seed, as shown on the right-hand side of the diagram. Backing up the seed once is enough to regenerate all the keys.

There are three types of deterministic wallets:

- Deterministic wallet
- Hierarchical Deterministic (HD) wallet
- Armory deterministic wallet

The first Ethereum wallet, used in the Ethereum presale, was a non-deterministic wallet. It's considered a rather old style compared to a deterministic wallet. Modern deterministic wallets have advantages over non-deterministic wallets in many aspects, such as backups, security, data storage, accounting, auditing, and access controls. The following list showcases some of these advantages:

- A one-time backup of a deterministic wallet is much easier than the cumbersome way a non-deterministic wallet has to be backed up. If you don't create a backup each time a key is generated, you risk losing the keys, and with them the ether and smart contracts they control. For a deterministic wallet, copying the master seed once is enough.
- There is less work during the import and export of wallets. For a non-deterministic wallet, the entire list of keys needs to be copied during a migration.
- In a deterministic wallet, deriving the keys from the seed makes it possible to store the private keys offline. Disabling web server access to private keys offers more privacy and security.
- Securing the seed takes less effort than keeping every key safe, which is what non-deterministic wallets require.

The HD wallet is the most well-known kind of deterministic wallet. It's also the preferred option if you want to create a deterministic wallet, since it is well developed. The HD wallet shows more advantages than other types of deterministic wallet in the following cases:

- Use cases that require creating public keys without accessing the corresponding private keys
- Use cases where keys need to match the hierarchical structure of an organization

The first use case shows the security advantage of the HD wallet. Normally, a deterministic wallet holds a single chain of keypairs, which doesn't support selective keypair sharing, while the HD wallet allows such selective sharing across multiple keypair chains generated from a single root. Being able to create public keys without exposing private keys makes the HD wallet usable in environments that would otherwise pose a higher security risk. The second use case takes advantage of the hierarchical structure of the design and applies it to real-world organizational cases.

Beyond the security advantages of the HD wallet just mentioned, there is another aspect of security to consider: private key management. It does not only refer to generating keys offline (away from the network, as in cold storage). Key sharding and the splitting or division of signatures are other alternatives. Key sharding, also known as Shamir's secret sharing, splits a key into several pieces, or shards. Each shard is useless on its own, and the original key cannot be reconstructed unless enough shards are assembled. A related but distinct concept is the multi-signature wallet. Readers who are interested in the topic are encouraged to do further reading about it.

With cryptocurrency wallets becoming more and more developed, protocols and standards need to be shared across the industry. The Ethereum community has been establishing standards over the years. Some of them are known as Ethereum Improvement Proposals (EIPs). Some are application-level standards, for example the standard formats for smart contracts, known as Ethereum Requests for Comment (ERC). Standards are also followed for wallet creation. A few of them are widely adopted:

- HD wallets (BIP-32)
- Multipurpose HD wallets (BIP-43)
- Multi-currency and multi-account wallets (BIP-44)
- Mnemonic code words (BIP-39)

BIP stands for Bitcoin Improvement Proposal. Although there are many differences between the Bitcoin and Ethereum networks today, they still have a lot in common and share standards and protocols. Mnemonic code words encode 128 to 256 bits of entropy. This entropy is plugged into a Password-Based Key Derivation Function 2 (PBKDF2) and stretched into a 512-bit seed, which is then used to build a deterministic wallet. In our next article, we will review the HD wallet (the BIP-32 standard) in depth.

Summary

In this article, we learned about wallet technology and its role in blockchain. We also discussed the different types of Ethereum wallets: deterministic and non-deterministic. In our next article, we will explain one of the most popular Ethereum wallet categories, the HD wallet (which is based on the BIP-32 standard).
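As a footnote to the BIP-39 mechanics described above, here is a minimal Python sketch of how a mnemonic is stretched into a 512-bit seed with PBKDF2, and how child keys could then be derived deterministically from it. The mnemonic shown is a published BIP-39 test vector; the HMAC-based child derivation at the end is a simplified illustration of the "one seed, many keys" idea, not the full BIP-32 scheme.

```python
import hashlib
import hmac

# BIP-39: the seed is PBKDF2-HMAC-SHA512 over the mnemonic sentence,
# salted with "mnemonic" + optional passphrase, for 2048 iterations.
mnemonic = "legal winner thank year wave sausage worth useful legal winner thank yellow"
passphrase = ""  # optional extra secret
seed = hashlib.pbkdf2_hmac(
    "sha512",
    mnemonic.encode("utf-8"),
    ("mnemonic" + passphrase).encode("utf-8"),
    2048,
)
print(len(seed) * 8)  # 512 -- the seed used to build a deterministic wallet

# Simplified deterministic derivation: every child key comes from the same
# seed, so backing up the seed once regenerates all of them. (Real HD
# wallets follow the more elaborate BIP-32 derivation rules instead.)
def child_key(seed: bytes, index: int) -> bytes:
    return hmac.new(seed, index.to_bytes(4, "big"), hashlib.sha512).digest()[:32]

for i in range(3):
    print(i, child_key(seed, i).hex())
```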
Resources

Here is a list of resources for learning more about blockchain development with Ethereum:

- Hands-on Smart Contract Development with Hyperledger Fabric
- Oracle Blockchain Quick Start Guide

This article was written in collaboration with Matt Zand, founder of DC Web Makers, Hash Flow, Coding Bootcamps, and High School Technology Services. He is also the lead author of the book Hands-on Smart Contract Development with Hyperledger Fabric from O'Reilly Media.
Disruption is a natural product of economics, and innovation and technology changes are central to human progression and evolution. Innovation and technology are also hallmarks of an economy. Today, digital transformation has changed our personal, professional, transactional, and behavioral worlds. It has also transformed and revolutionized the business world and the world economy. Not a single industry is immune. This series of blog posts will cover various aspects, components, domains, processes, and strategies of digital transformation.

For now, let us start with human intuition. Human intuition, in most cases, massively underestimates the magnitude of exponentiality. The human mind is adapted to imagining and estimating linear changes and can hardly estimate exponential change. Moving from 1 to 2 to 3, or from 2 to 4 to 8, are linear steps the mind can track. However, exponential change like 2 to the power of n, for any given n, is different: humans cannot handle this exponential representation intuitively and need algorithms or scales to do so. So why are we talking about human intuition, linear and exponential growth, and the relation between them? Because digital trends, digital transformation, and their impact on lives and business have progressed exponentially, not linearly. Those who can imagine, handle, and use that growth can survive. Take, for instance, the yellow cab's missed opportunity to turn into an Uber-like transportation provider, or Nokia's and BlackBerry's missed transformational journey in failing to recognize the touchscreen revolution and the holistic impact of the ecosystem already ahead of them.

So what's at the core of digital transformation? I am a physics and mathematics enthusiast, and it is hard for me to imagine something delivering value that is not built on laws and principles. At the core, values are variables, as they are internal and subjective, while laws are constants, as they are universal and immutable. Hence, we take a glimpse at the laws first, before we delve into this blog series on digital transformation. There are a few laws we can enlist that help describe why exponential growth of digital technologies is inevitable:

Moore's law – The number of transistors in a dense integrated circuit doubles about every two years.
Butters' law – The amount of data communicated doubles every 9 months.
Nielsen's law – The bandwidth available to users increases by 50% annually.
Kryder's law – Disk drive density, also known as areal density, doubles roughly every thirteen months.
Cooper's law – The maximum number of voice conversations or equivalent data transactions that can be conducted in all of the useful radio spectrum over a given area doubles every 30 months.

Are these laws dead? Well, I think all of them are alive and still relevant through a variety of design innovations. Computing speed keeps increasing with advances in semiconductors and circuit designs, the move from 2D to 3D, using graphene instead of silicon, quantum computing, and so on. Enhanced machines with supercomputing capabilities keep growing, which bears out Moore's law. In addition, if the amount of data doubles every 9 months, then the cost of transmission halves every 9 months, and, as per Nielsen's law, user bandwidth keeps growing by 50% every year as well. Considering Kryder's law, as areal density improves, storage becomes cheaper.
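To feel how quickly these doubling periods compound, here is a toy calculation. The doubling periods are the ones quoted above; the ten-year horizon is an arbitrary choice for illustration:

```python
# Growth factor after a decade for each law, given its doubling period:
# factor = 2 ** (months_elapsed / doubling_period_in_months)
laws = {
    "Moore (transistors)": 24,     # doubles about every two years
    "Butters (data)": 9,           # doubles every 9 months
    "Kryder (areal density)": 13,  # doubles every ~13 months
    "Cooper (spectrum use)": 30,   # doubles every 30 months
}
decade = 10 * 12  # months
for name, period in laws.items():
    print(f"{name}: x{2 ** (decade / period):,.0f} in 10 years")
# Compare with linear growth, which adds the same amount each step
# instead of multiplying -- this is the gap intuition underestimates.
```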
Cooper's law and Butters' law indicate exponential data growth, which can hardly be imagined by our linear thought process. Individually, each of these laws will hit a plateau. One example is Cooper's law, which will hit its plateau when network densification stops. By network densification, I mean the deployment of more base stations (telecom) and access points per unit area (or volume). Another example is Moore's law, which will hit its plateau when advancement in semiconductors does: when we can no longer expect smaller chips, nano circuits, or faster chips consuming less energy.

Now, put your intuition forward and imagine not just processing speed, but also storage capabilities, communication speed and advancements, and networking growth. Exponential increases in computing speed, storage, communication, networking, and so on enable transformational possibilities that are themselves exponential and beyond our linear thought process. Individually, none of these laws would lead to the world we currently know or foresee in the next few years. Consider if only computing capabilities had increased as per Moore's law, while bandwidth, communication speed, and storage capabilities hadn't grown as per the other laws. In that case, Moore's law would have only resulted in powerful supercomputers that could barely communicate, neither storing exponential data nor networking to exchange it. The interaction and coexistence of these laws produced an ecosystem, and that ecosystem is the real catalyst of today's digital transformation progress. Even if the laws individually hit a plateau, the combinations they produce and the outcomes of those possibilities are exponential.

As per Newton's second law, the acceleration of an object is directly related to the net force and inversely related to its mass (a = F/m): acceleration depends on force and mass. The force of combined technological changes, empowered by these laws, will further accelerate the pace of change in our lives, business, and economy. When organizations turn agile, they shrink their mass; when they reduce mass, they can swiftly maneuver with the force to stay relevant and yet maintain balance. This force of digital transformation is, and will remain, inexorable. It will create pressure that will blow away and wipe out those industries and businesses that are based on traditional practices, technologies, and people with a linear mindset. And it's important to talk about the linear mindset, as one of the core domains of digital transformation is the people and culture in an organization. A linear mindset tends to fail to comprehend exponential change. What happens if technology grows exponentially while an organization grows linearly? The gap widens quickly, as incumbents sometimes fail to understand and underestimate the force, and that gap is quickly filled by startups.

Digital transformation is not new. Think about the journey of the watch, from Swiss mechanical watches to Japanese digital ones to Korean wearables with several offerings. Similarly, photography moved from Kodak's film imaging to digital. Another example is telecom, where BlackBerry and Nokia kept building phones for professionals and the lower end while the iPhone revolutionized the market with touchscreens. Yet another is the bookstore, which changed from a café- and community-based model to an online buying and subscription model.
There are several examples from other industries too: entertainment moved from Blockbuster movie rentals to Netflix streaming, transport was revolutionized from taxis to Uber, retail moved largely online with Amazon, automobiles are shifting to EVs and autonomy, deliveries are moving to DoorDash and Instacart, and so on. In 2016, the World Economic Forum termed this the Fourth Industrial Revolution. More force and acceleration are injected into digital transformation by next-generation technologies like robotics, artificial intelligence, machine learning, and data analytics. However, I think it's not only about the advent of new transactional technologies; it's also about the ecosystem, culture, and acceptability.

Take, for example, electric vehicles (EVs). EVs existed long before Tesla, NIO, GM, or Lucid. Electric cars appeared at almost the same time as internal-combustion engines, and adventurers tried to make them work. You can find information about the electric car built by Thomas Parker in 1895. In New York City, in the pre-World War I era, ten electric vehicle companies banded together to form the New York Electric Vehicle Association. After enjoying success at the beginning of the 20th century, the electric car began to lose its position in the automobile market. A number of developments contributed to this situation. By the 1920s, an improved road infrastructure had cut travel times, creating a need for vehicles with a greater range than that offered by electric cars. Although EVs were there, they never took off into the mainstream, because the ecosystem was not ready. Today, imagine a Lucid EV with Apple's engineering and another organization's AI, machine learning, and autonomy. This example illustrates the power of an ecosystem and the acceleration it can bring to digital transformation. Such transformation is exponentially disruptive to incumbents' current business models, but it also offers opportunities. The question is: are you ready?

This series of blog posts on digital transformation will cover the types of disruption - technology, architecture, and business model. We will discuss domains like data, customer, competition, and value. We will delve into the challenges and liabilities of incumbents and discuss when incumbents fail and when they succeed. We will take a deep dive into some of the technology trends that are drivers of digital transformation, such as big data, cloud, IoT, AI/ML, robotics, additive manufacturing, 5G, and blockchain. Finally, we will explore why an IT strategy needs a digital transformation and experimental innovation strategy.
Robotic process automation (RPA) – Quick Glance

The buzz is real! The Digital Transformation Era has begun, and automation is leading the way. Businesses are going digital, with 89% of them having already adopted a "digital first" strategy. This will heavily influence the work landscape: according to the World Economic Forum, the share of task hours between humans and robots was 70:30 in 2018 and will shift to 60:40 by 2022. Digital transformation covers the use of digital technologies to reshape business processes, so it's more than automation. Still, automation already brings unique benefits.

Automation is happening across all industries and in our personal lives as well. Many processes are already automated, such as:

Services – Customer invoicing, claim processing, customer data gathering, changes of service such as information updates, etc.
Financial – Credit assessment and fraud prevention, account settlement and payment clearance, interrogation of public databases, and automatic account closure.
Healthcare – Digitizing patient files, inventory management, invoice settlement, etc.
Education – Course registration, class schedules, attendance management, grade processing, report cards, and certifications.
High Tech – Client data updates, complaint management, client process and service removal, etc.
Manufacturing – Inventory management, payment processing, customer communication, etc.
Retail – Onboarding, offboarding, payroll, inventory and contract management, invoicing and returns processing, and sales analytics.
Government – Benefit claims, fraud prevention, payer data updates, etc.

A robot can check you in 24 hours before a flight, or go through all your emails and send you the important ones as a digest after your vacation. A robot can tell you when your kid has left school or arrived home, or look for movies similar to the ones you like. Robots can send you the latest version of legal documents at any point in time, so that you always have the latest and greatest information, and they can translate non-English documents into English. Robots can handle the mundane background operations while you concentrate on what matters most to you. Overall, robots drive greater productivity, create efficiency, reduce costs, and accelerate human achievement.

How is automation impacting us? The adoption of robots will likely create more jobs than it displaces, even as the division of labor between humans and robots shifts. Technological disruption has impacted human life throughout history in cycles. In every era, technology has increased productivity and jobs, providing a better life to humans, and this applies to the Automation Era as well.

So where's RPA in this picture? Companies know that they have to go digital, but do they know how to start? RPA is probably the fastest path to digital transformation, and one of the most efficient and effective. To better understand why, let's first see what it is and what it can do. RPA, or Robotic Process Automation, is the technology that enables computer software to emulate and integrate actions typically performed by a human interacting with digital systems. The computer software that executes the operations is called a "robot." RPA robots are able to capture data, run applications, trigger responses, and communicate with other systems. RPA primarily targets processes that are highly manual, repetitive, and rule-based, with a low exception rate and standard, electronically readable input.
RPA solutions can be thought of as virtual robotic workforces whose operational management is done by the business line (supported by IT), just as for a human workforce.

Why is RPA a good starting point for a company's digital transformation?

RPA is non-invasive. It does not require any major IT architecture changes or deep integration with the underlying systems. RPA offers a reliable, fast, and cost-efficient solution for a "light-weight" integration into processes and IT assets.

RPA is easy to scale. The amount of work involved in a process can vary, as changes are likely to occur in most business environments. With an RPA solution, companies can easily adapt by scaling the solution up or down, depending on the requirements.

RPA is future proof. Robots work with today's technology, yet the automations are extensible and able to handle tomorrow's technology.

RPA is here to reboot work by taking the robot out of the human, so that humans get more time for valuable and meaningful work: making judgment calls, handling exceptions, and providing oversight.

The future of work with RPA

Automation is the future of work. We are now stepping into the Automation First Era, which will tip the balance in favor of more cognitive, social, emotional, and technologically oriented work. What is the Automation First Era? It's the era in which...

- Your first instinct is to automate repetitive, mundane work and free people up to do more challenging work
- You look at each challenge the company is facing and ask: will automating this issue make the problem go away?
- You enable a bottom-up approach where everybody is able to make choices on what to automate and when.

Oracle (Oracle Integration Cloud), via a co-selling agreement with UiPath (the leading RPA vendor in the market), allows creating robot-based transactions and integrating RPA with Oracle solutions.

Types of Robots - Attended and Unattended Robots

From the point of view of human intervention, there are two types of robots:

Attended Robots - Attended robots work on the same workstation as humans and execute some tasks of the entire process, after which they require human intervention. When are they generally used? In business scenarios that require input or decision making from the human user, or when a well-defined schedule cannot be applied due to the volatility of the process. What are some specific scenarios in which attended robots are preferable?

- The automated process relies on data that needs to be validated by humans (the HR robot in the example in the previous chapter)
- The robot assists the human directly in performing their tasks (a robot filling in data for a call center operator)
- The automated task cannot be scheduled beforehand and needs manual triggering (a robot that gathers data in real time about a customer calling the support line)

How should attended robots be?

- Responsive: when triggered (either manually or automatically), the attended robot needs to do its part in order to enable the human to take care of the non-automated tasks
- User-friendly: the human user needs to be able to operate the robot easily
- Flexible: the attended robot needs to be able to navigate between different applications and environments.

Unattended Robots - Unattended robots work independently of any human interaction, and they work on separate, virtual workstations. When are they generally used? In manual, repetitive, highly rule-based back office activities that do not require any human intervention.
What are some specific scenarios in which unattended robots are preferable?

- Tasks must be completed continuously in a batch-mode model (a robot that validates bank payments)
- Large amounts of data have to be gathered, sorted, analyzed, and distributed among key players in an organization (a robot processing claims for a health insurance company)
- The output of the automated process is needed as an input by humans in their work (a robot that processes financial statements from subsidiaries in different countries)

How should unattended robots be?

- Flexible to deploy on virtual or remote environments: since human intervention is minimal (or absent), unattended robots are generally deployed to virtual or remote environments
- Easy to scale: unattended robots need to accommodate variations in task and data volumes
- Accurate: again, because human intervention is minimal or absent, the outputs have to be reliable.

RPA Qualifier - What makes a process a good candidate for automation?

In the Automation First Era, you need to constantly look at your work and the processes in your company through the lens of automation potential. For this you need to know the types of processes that can be automated, the factors driving the automation potential, and the factors that increase the complexity of a process and make it more difficult to automate. Let's recap some of the examples provided in the previous lessons:

- The HR robot that is able to extract data from scanned IDs and fill in hiring papers
- The robot assisting a call center operator by fetching and filling in relevant data
- The robot processing claims for a health insurance company
- The robot processing financial statements

The processes that can be automated are rule-based and repetitive, and their input data is standardized. The robots are able to make decisions if these are clearly stated in the business logic. A high exception rate would need human intervention and affect the robot's productivity. There are two sets of criteria you can use to determine the automation potential: process fitness and automation complexity.

Process Fitness - Here are the criteria by which you can evaluate how fit a process is for automation:

Rule-based – The decisions made in the process (including data interpretation) can be captured in a pre-defined logic. The exception rate is either low or can itself be included in the business logic.

Automatable and/or repetitive process – We can differentiate 4 types of processes:

- Manual & non-repetitive: the process steps are performed by humans and can be different every time the process is executed
- Manual & repetitive: the steps in the process are performed by the user, and at least some of them are the same every time
- Semi-automated & repetitive: some of the repetitive steps have already been automated (using macros, Outlook rules, and so on)
- Automated: some processes have already been automated using technologies other than RPA

Note - Processes that need to stay manual or are non-repetitive, due to a high exception rate or factors that cannot be integrated into a business logic, aren't good candidates for automation.

Standard input – The input to the process should either be electronic and easily readable, or readable using a technology that can be associated with RPA (such as OCR). A good example is an invoice with pre-defined fields.
Stable – Processes that have stayed the same for a certain period of time and are not expected to change within the next months are good candidates for automation, provided they meet the other criteria as well.

Automation Complexity - This set of criteria determines how hard it is to automate a process:

Number of screens - RPA works by programming the robot to perform tasks at screen level (when the screen changes, the logic has to be taught). The higher the number of screens, the more elements have to be captured and configured prior to the process automation.

Types of applications - Some applications are more easily automated (such as the Office suite, or Java), while others heavily increase the automation effort (mainframe, for example). And the more the applications differ, the more the number of screens increases as well (see the previous point).

Business logic scenarios - An automation's complexity increases with the number of decision points in the business logic. Basically, each one can double the number of scenarios.

Types and number of inputs - As previously stated, standard input is desirable. Yet there are cases in which one standard input (such as an invoice) has to be configured for each supplier affected by the automation. Moreover, non-standard input comes in different complexity grades, with free text being the most complex.

By using these factors in our automation potential assessment, we can split processes into 4 categories:

No RPA - Processes where change is frequent, the system environment is volatile, and multiple manual (even non-digital) actions are required

Semi Automation - Processes that can be broken down into steps that can be clearly automated and steps that need to stay manual (such as validations or the usage of physical security tokens)

High Cost RPA - Processes that are rather digital and can be automated, but use some technologies that are complex (such as OCR) or require advanced programming skills

Zero Touch Automation - Processes that are digital and involve a highly static system and process environment, so that they can easily be broken into instructions with simple triggers defined.

Stages of an RPA Journey

Let's take a look at the stages of an RPA implementation. Most often, we will find six stages in RPA implementations:

Prepare RPA - The processes are defined, assessed, and prioritized, and the implementation is planned.
Solution Design - Each process to be automated is documented ("as is" and "to be"), the architecture is created and reviewed, the test scenarios and environments are prepared, and the solution design is created and documented for each process.
Build RPA - The processes are automated, and the workflow is tested and validated, with the UAT prepared.
Test RPA - The UAT is performed, the workflow is debugged, and the process is signed off.
Stabilize RPA - The go-live is prepared, and the process is moved to production, monitored, and measured, with lessons learned documented.
Constant Improvement - The process automation performance is assessed, the benefits are tracked, and the changes are managed.

RPA Roles

Now let's discover what's beneath each implementation stage and, most importantly, some of the roles encountered in RPA implementations that drive the change:

Solution Architect - In charge of defining the architecture of the RPA solution. The Solution Architect translates the requirements captured by the functional analysts, creating the architecture and design artifacts.
They lead and advise the developers' team and are responsible for its delivery.

Business Analyst - Responsible for mapping the AS-IS and proposed TO-BE processes. Business Analysts hold knowledge of the business process being automated, general business process theory, and RPA capabilities. They are responsible for listing the process requirements for automation, clarifying the inputs and expected outputs, and creating RPA documentation (Process Design Documents, process maps).

Implementation Manager/Project Manager - Forms and manages the RPA team and plans resources and team availability in order to hit automation goals. Most of the time, the PM is the single point of contact (SPOC) for questions, RPA initiatives, or parallel RPA product projects.

Infrastructure & IT Security Admin - With good technical and security skills, they are responsible for setting up and maintaining the hardware and software resources for the RPA provider's product installations (for example, UiPath). They set up accounts for all the developers, end users, and robots.

Process Owner - The key stakeholder and beneficiary of the RPA solution. Usually at senior management level, with some 10-15+ years of experience, possibly split across domains. Multiple people can have this role, based on department (Finance, IT, HR, etc.).

RPA Support - Manages the robots after the processes have been moved to production, with support from the original RPA developers who performed the automation. There may be multiple levels of support: L1 - client, L2 - client/partner (L0 - super users).

As you can see, to make automation possible at a larger scale, a company requires engaged staff to offer their professional support and guidance throughout the entire process.

RPA in business

Let's expand the perspective by looking at real examples of RPA:

Payroll Processing - Payroll processing refers to the actions that companies take to pay their employees: keeping track of their presence, salaries, bonuses, and taxes. Payroll processing needs manual intervention month after month, every year. An RPA system can extract the required details from handwritten time sheets, calculate the pay from the stipulated contracts, and even make the payments (by ordering the necessary bank transactions).

Client Information Updates - Any organization that has implemented a CRM faces all sorts of related issues: the client base is spread across many geographies, there are frequent calls to the back-end databases, and updates and changes come from all sources. RPA solutions can process these requests in batches instead of one after the other, reducing the load on the back-end systems and ensuring better performance and data quality across the whole application.

Renewal Process – Irrespective of industry, the client renewal process is in general a complex process, not necessarily due to exceptions and complications, but rather due to the number of operations and the synchronization between different departments and systems. Robots can take over the entire process, starting with the standardized communication with the client, processing the changes, drafting the documents, and updating the internal systems accordingly.

Statement Reconciliation – Financial statement reconciliation covers all the operations (done mostly by the accounting teams) of matching orders, payments, losses, margins, and so on, with accounts and financial statements.
It is a common process that an organization needs to manage in order to ensure clean records and reliable financial documents. This process is well handled by RPA software robots. Once they are set up, they can seamlessly replace the human beings who would have to do these jobs, from beginning to end.

Compliance Reporting – As organizations grow, it becomes increasingly difficult to closely monitor the compliance requirements that each department has to follow: reporting to authorities, complying with internal procedures, audit requirements, and so on. Robots can be set up to cover all these needs, with a low error rate and little human intervention.

Customer Complaint Processing – Irrespective of industry, customer complaints are always on the radar. Their number and substance are an important indicator of business health and a good predictor of the future of the company. Through RPA, customer complaints can be categorized based on keywords and other criteria, and possible solutions can be suggested to customers right away. By doing so, customer complaints can be answered 24 x 7 instead of 8 hours a day, 5 days a week.

Automation solutions for business lines

Here are some common internal processes across many industries that meet the criteria presented in the previous chapter and that companies should consider automating:

HR – Payroll; time and attendance management; onboarding and offboarding; benefits administration; recruitment administrative activities; personnel administration; training administration

IT Services – Server and application monitoring; routine maintenance and monitoring; batch processing; email processing and distribution; password reset/unlock; back-up and restoration

Employee services – Travel expense processing; contractual amendments; employment proof issuing; long-term and medical leave administration

Supply chain – Inventory management; demand and supply planning; invoice and contract management; work order management; returns processing; freight management

Finance and accounting – Procure-to-pay: vendor master, requisitions, payment management and processing, reporting, invoicing. Order-to-cash: quote management, cash applications, customer master, credit management. Record-to-report: general/inter-company accounting, bank reconciliations, fixed assets, closings, consolidations. Collections

Customer management – Customer inquiries; order management; customer account setup; document processing; duplicate system entry

Deep Dive into RPA Use Cases

In this section, we take a deep dive into a few of the use cases.

HR - Onboarding and offboarding

The Challenge - HR processes for new hires (which are also done, to a certain extent, for those who move inside the organization and those who leave it) come with a series of tasks that are manual, tedious, and prone to errors, such as:

- Customizing document templates with personal data (with copy/paste)
- Background checks of candidates (academic record, criminal record, employment history) using different applications or even communication with public authorities or academic institutions.

The Solution - An attended robot that is able to...

- fill in documents in Word
- send emails to universities and interpret the answers received
- log in to public websites and retrieve information.

The Impact - 100% automation of the process steps; process duration dropped by 85% vs. when it was conducted by humans; error rate dropped close to 0%.
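As a rough illustration of what such an attended robot does with document templates, here is a toy Python sketch. Real implementations are built visually in UiPath Studio against actual Word documents; the template, fields, and the escalation rule here are all invented for illustration:

```python
from string import Template

# Hypothetical offer-letter template; a real robot would fill a Word document.
OFFER_TEMPLATE = Template(
    "Dear $name,\n"
    "We are pleased to offer you the role of $role starting $start_date.\n"
)

def fill_offer(candidate: dict) -> str:
    """Customize the document template with personal data (no copy/paste)."""
    return OFFER_TEMPLATE.substitute(candidate)

def needs_human_review(candidate: dict) -> bool:
    """Attended pattern: hand off to a person when a check is not cleared."""
    return not candidate.get("background_check_cleared", False)

candidate = {"name": "A. Candidate", "role": "Analyst",
             "start_date": "1 June", "background_check_cleared": False}
print(fill_offer(candidate))
if needs_human_review(candidate):
    print("Escalating to HR for manual background verification.")
```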
Employee Services - Travel expense report verification

The Challenge - Once a travel expense report was submitted in the dedicated application, with the receipts attached, the compliance of the report and the expenses with the internal travel policy was manually verified and approved. It was a tedious process, involving reading the receipts and matching the expense types, checking the travel policies, and doing manual calculations.

The Solution - An attended robot that is able to...

- use OCR to extract information from receipts
- match the amounts and categories of expenses with those in the report
- confirm the authenticity of the receipts
- verify the policy limits and provisions

Whenever the robot is unable to complete a task on its own, it asks for the help of a human user.

The Impact - 80% automation (1 in 5 cases needs human assistance); process duration dropped by 60% vs. when it was conducted by humans; error rate dropped by 65% vs. when it was conducted by humans.

Supply Chain - Invoice matching and validation

The Challenge - In the supply chain, large organizations work with many suppliers and need to handle variations in quantity, quality, and price from one order to another. Each order of supplies and raw materials must match the purchase order or be within a tolerated discrepancy. Humans spend many hours matching, line by line, the invoices received from the suppliers against the purchase orders.

The Solution - An attended robot that is able to...

- log in to the ERP system
- match the invoice with a purchase order
- use an OCR activity if the invoice is scanned
- match the quantity and price for each item on the invoice

If the calculated discrepancy is within the tolerated limits, the robot can approve it. Otherwise, it asks for human confirmation.

The Impact - 75% automation (1 in 4 cases needs human assistance); process duration dropped by 50% vs. when it was conducted by humans; error rate dropped by 30% vs. when it was conducted by humans.

Finance & Accounting - Account reconciliation

The Challenge - One of the most important accounting processes is account reconciliation, which consists of matching the transactions made throughout the month with accounts and closing the statements. Every month, the accounting team has to analyze and reconcile accounts using various reports and manually search for information.

The Solution - An unattended robot that is able to...

- log in to different applications
- retrieve the necessary reports and documents
- match the entries
- highlight discrepancies.

The Impact - 98% automation (1 in 50 entries cannot be matched by the robot); process duration dropped by 70% vs. when it was conducted by humans; error rate dropped by 60% vs. when it was conducted by humans.

Customer Management - Tracking inactive users

The Challenge - Inactive users are never good news. The underlying issue may be serious (a competitive disadvantage, or even a fraud risk), and the implications are also serious (usually dropping revenues). As the sales and support teams are more often busy with the active clients, the tracking of the silent clients is done either manually from time to time, or through reports that almost never capture the whole picture.

The Solution - An unattended robot that is able to...

- check the internal systems for inactive clients, as defined by the business
- match the information with public sources
- send alerts to the sales and support teams.

The Impact - 100% automation of the process steps; retention increased by 55% due to timely interventions.
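The approve-or-escalate rule at the heart of the invoice matching and account reconciliation cases above can be captured in a few lines. A minimal sketch, with an assumed 2% tolerance (the actual tolerance would come from the business rules):

```python
TOLERANCE = 0.02  # assumed 2% tolerated discrepancy

def match_invoice_line(invoice_line: dict, po_line: dict) -> str:
    """Approve when the billed amount stays within tolerance of the PO
    amount; otherwise escalate to a human, as the attended robot does."""
    expected = po_line["quantity"] * po_line["unit_price"]
    billed = invoice_line["quantity"] * invoice_line["unit_price"]
    if expected == 0:
        return "escalate"
    discrepancy = abs(billed - expected) / expected
    return "approve" if discrepancy <= TOLERANCE else "escalate"

po_line = {"quantity": 100, "unit_price": 9.50}
invoice_line = {"quantity": 100, "unit_price": 9.60}  # ~1% over the PO
print(match_invoice_line(invoice_line, po_line))  # approve
```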
Industry-specific automation solutions

It's time to delve into a couple of industries where RPA has proved its value on more specific processes.

Banking

Specific challenges, and why RPA is the solution:

Regulation & Control - RPA is easily customizable in order to keep up with changing regulations.
Multiple Legacy Applications - RPA is non-intrusive, so it can be adjusted as the environment changes.
Customer Experience - Robots ensure faster delivery with no errors, allowing banks to compete with more agile competitors (such as fintechs).

In banking, there is automation potential across the entire customer management lifecycle:

Onboarding – Customer document reading; preparation of contractual papers
Know Your Customer procedures – Initial verifications; document validity verification; ultimate beneficial owner verification; source of wealth; sanctions and political exposure verification
Periodic verifications – Customer portfolio screening; on-change-event screening
Due diligence – Sanctions and political exposure verification
Client Setup – Data setup in product-specific systems; special pricing in the case of salary conventions
Customer Maintenance – Static data changes (ID change, business activity); registration address changes; contact info changes; ultimate beneficial owner changes; blocking and unblocking due to legal events
Closure – Dormancy; closure initiated by the bank following due diligence; zero-balance and dormant account closure; closure at customer request.

Verification of Loan Application Documents

The Challenge - To verify loan application documents, the bank's employees had to manually check documents and related information on different web portals for home loan applications, then collate everything into a single file. They were spending too much time processing more than 100 loan applications per week and needed to expedite the resolution process for customers.

The Solution - The robot was used to quickly open the different web portals and verify information before sending an email to the person who requested the documentation for a decision. The results: 20 hours saved per week; shortened time to client response.

Rejected Direct Debit Management

The Challenge - Direct debit is a service offered by banks to ease the collection process of service or utility providers. In short, beneficiaries agree to pay the amounts that the providers request directly to the bank. However, mostly due to insufficient funds, many direct debits are rejected by the system and need to be processed manually. The company's rejected direct debit management process wasted employees' time by requiring them to manually check 800 to 1,000 transactions during the first four hours of each day. Based on a printed paper transaction report, they analyzed each customer with an overdrawn account and decided whether the bank would honor or reject the payment. Unfortunately, the process rules weren't clear, and bank fees were charged inconsistently.

The Solution - The robot was used to capture the report and convert it to a spreadsheet, grab customer account information from the core banking system, analyze it, and, using a core set of rules, decide to honor or reject the direct debits. This increased accuracy and allowed the paper-recorded client histories to be added to the customer relationship management (CRM) system. The results: implemented within 7-9 weeks; 95% automation rate; more than $25,000 of monthly revenue gain; turnaround time down from 16 hours to 6 hours daily.
Fraud Detection

The Challenge - Fraud detection is a process performed for each loan approval, by a different person than the client representative. It consists of bringing together and interpreting data from different sources, inspecting documents, and monitoring for suspicious signs. The bank had an insufficient amount of resources allocated to its fraud detection process, which caused undue stress and work for the team, generating inefficiencies, frustration, and errors.

The Solution - The robot accessed up to 15 applications and databases, both internal and external, looking for potential signs of suspicious activity among the bank's clients. It then compiled this information into a report for review by a human fraud analyst. This decreased the required work of employees and streamlined fraud protection. The results: processing time for each application dropped from 45 minutes to 20 minutes; 1 hour of work was automated down to 5 minutes; 95% automation of the process steps; 0% exception rate for the automated processes.

Online Retail

The Challenge - Although the sales are done by the retailer, the installation and the warranty are handled by the official dealers. It is the responsibility of the retailer to notify the dealers using different channels. This was done manually, with the same person navigating from one application to another, copying data from one place to another, and so on.

The Solution - An unattended robot able to...

- use the internal CRM or external websites to raise tickets for installation, repair, and un-installation
- check their status and follow up until they are solved

The robot is also able to provide relevant reports when requested.

Insurance

Communication with brokers

The Challenge - The insurance industry's business model involves brokers. Working with many brokers generates a lot of paperwork (as the contracts are signed between three parties), validation (given that the conditions for each broker are unique), and processing work.

The Solution - An unattended robot that took over almost the entire communication with the brokers, from the moment an email is received: verifying the attachments, confirming the contractual conditions available for that particular broker, making the fraud verifications, and approving the requirements. Human intervention is needed only for business exception handling.

Claim processing

The Challenge - Insurance is a large-volume business, with lots of clients, policies, and papers to verify and validate. Consequently, the amount of manual work and the fraud risk are both high, especially at claim processing time.

The Solution - An unattended robot that is able to process claim emails, categorize and check the attachments, make fraud verifications, and authorize payments if there are no flags. All the claims with fraud suspicions and business exceptions are forwarded to claim inspectors, who receive them via email in the morning.

As you can see, RPA offers valuable solutions across industries and business lines.

Oracle & RPA

In this section, we discuss RPA in concert with Oracle solutions.

Oracle Integration Cloud's Process Automation with RPA

Oracle Integration Cloud Service can extend its existing process automation and integration capabilities with the UiPath Robotic Process Automation (RPA) solution to create an ultimate digital workforce orchestrating people, systems, and robots. Oracle Integration Cloud Service in conjunction with UiPath offers a simple recipe for success in this process automation journey: build, integrate, and engage.
Oracle Integration Cloud is an Oracle-managed, subscription-based service that empowers lines of business and power users to create process applications that extend existing applications of record and create innovation on a fast-paced platform layer (PaaS). When the target application you are trying to integrate does not offer APIs, Oracle Integration Cloud, via a co-selling agreement with UiPath (the leading RPA vendor in the market), allows creating robot-based transactions that replay the user interaction via the application's user interface. These transactions, encapsulated and executed by a UiPath robot, can be triggered at any step/activity of a business process. Robotic process automations can simply be recorded and generated with the RPA designer, UiPath Studio. These robots are in turn deployed to an Oracle-specific UiPath Orchestrator Cloud Edition (a managed service), where an administrator can configure how these robotic process automations are executed by robots replaying the transactions. The robots work in an unattended fashion (no end user needs to manually trigger the robot execution), as they are invoked and triggered from a business process implemented in Oracle Integration Cloud, fulfilling the promise of an end-to-end automated digital workforce. UiPath Robotic Process Automation complements Oracle's process automation services as a low-code, fast-implementation, easy-integration solution that enables broader, end-to-end automation at scale.

UiPath Robotic Process Automation Adapter with Oracle Integration

Robotic Process Automation (RPA) is a technology that uses robots to interact with application user interfaces. Using RPA, you can create UI scripts that reproduce actions in the interface as if a human user were performing them. After a script is created, it can be replayed with different input parameters by an application that simulates human input, known as a robot. Robots can interact with any application that has a user interface, including web apps, character-oriented terminal applications, and native Windows applications.

Adapter Use Cases

The UiPath Robotic Process Automation Adapter can be used in scenarios such as the following.

Integrate with Applications without Adapters or APIs - You can use the UiPath Robotic Process Automation Adapter to integrate with applications that don't have an adapter in Oracle Integration and don't expose APIs. The UiPath Robotic Process Automation Adapter offers a new way to integrate with applications that Oracle Integration doesn't support natively. The adapter simplifies the discovery of robots that have been created and deployed in the UiPath Orchestrator. Robots, created with RPA technology, can be invoked from an integration flow to interact with applications and systems previously unreachable from Oracle Integration due to a lack of exposed APIs or an adapter. Using the adapter, you can add data to queues, instruct robots to start jobs using data from queues, and receive output from jobs. The UiPath Robotic Process Automation Adapter also enables you to use robots to interact with applications that have been modified or extended. You may not be able to reach extended functionality using APIs or an application adapter; with the UiPath Robotic Process Automation Adapter, you can use this functionality in an integration flow by instructing a robot to make these transactions.

Automate Repetitive Human Tasks - You can use the UiPath Robotic Process Automation Adapter to automate simple repetitive tasks usually performed by a human. RPA robots can perform repetitive tasks, like data entry, that don't involve decision making. In Oracle Integration, you can trigger these transactions automatically using the UiPath Robotic Process Automation Adapter; the sketch below gives a feel for what starting a job looks like at the Orchestrator level.
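A minimal Python sketch against the classic on-premises UiPath Orchestrator REST API, for orientation only. The URL, tenant, credentials, and release key are placeholders; cloud-hosted Orchestrator and the Oracle Integration adapter authenticate differently, so treat this as an assumption-laden illustration rather than the adapter's actual mechanics:

```python
import requests

ORCH = "https://orchestrator.example.com"  # placeholder Orchestrator URL

# 1. Authenticate (classic on-prem endpoint) to obtain a bearer token.
auth = requests.post(f"{ORCH}/api/Account/Authenticate", json={
    "tenancyName": "Default",            # placeholder tenant
    "usernameOrEmailAddress": "admin",   # placeholder credentials
    "password": "secret",
})
token = auth.json()["result"]

# 2. Start a job for a deployed process, identified by its release key.
resp = requests.post(
    f"{ORCH}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    headers={"Authorization": f"Bearer {token}"},
    json={"startInfo": {
        "ReleaseKey": "00000000-0000-0000-0000-000000000000",  # placeholder
        "Strategy": "JobsCount",
        "JobsCount": 1,
        "InputArguments": "{\"invoiceId\": 42}",  # data passed to the robot
    }},
)
resp.raise_for_status()
print(resp.json())  # job metadata returned by Orchestrator
```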
RPA and Oracle ERP

Robotic process automation (RPA) makes it easier to use your Oracle ERP system. It paves the way for a workflow with fewer mistakes, virtually eliminating lag time and helping you achieve peace of mind. RPA is a computerized approach to automating routine tasks. Machine-learning software can be programmed to move through certain workflows the same way a human would. For instance, a computer can:

- Scan a document for important data
- Validate the extracted data against known data (vendor master, receipts, etc.)
- Import that data into Oracle (eliminating the need for manual data entry)
- Use that data to automatically create corresponding business documents

Introducing RPA into your ERP system allows you to modernize your legacy processes. It can instantly perform time-consuming administrative responsibilities like looking up information and entering it into Oracle. When these responsibilities are reassigned to RPA, processes become faster and less labor-intensive across your entire organization. RPA technology doesn't necessarily require artificial intelligence, but tools that incorporate AI can gradually "learn" how you use Oracle, allowing them to make informed decisions on your behalf. RPA becomes more productive over time, resulting in long-term cost savings and less burdensome work for your employees.

Oracle RPA can help you streamline a number of common processes, including the majority of your back-office administration duties. For instance, you can use RPA tools to:

- Collect and sort your invoices
- Match your invoices with shipments and purchase orders
- Convert currencies
- Assign GL codes
- Create payment vouchers
- Convert units of measure
- Create new sales orders
- Send shipment notifications

With RPA, the completion of one action can immediately start another, which means your projects move seamlessly from start to finish. However, the potential benefits go above and beyond your day-to-day processes. You'll also benefit from improved insight into your company's performance, higher employee satisfaction, and more customer engagement.

Related reading: https://go.oracle.com/advancing-your-analytics-five-key-areas-to-automate?elqCampaignId=61754&src1=OW:MS:PT:five-key-areas-to-automate

The following link lists some of the collaborative solutions offered by UiPath and Oracle: https://www.uipath.com/partners/technology-partners/oracle

Thank you. Happy RPA-ing!
A global blockchain-based system can play a huge role during an active pandemic, as well as in the passive (post-normalization) phase of an epidemic. It could act as a single source of truth for sharing, tracking, and tracing data such as test cases, symptoms, possible treatments, case numbers, case complexity, demographics, age demographics, and so on. It can act as a solution and system for donations, grants, insurance claims, and government payments. Further, it can enable faster insurance claim processing, supply chain tracking, and the alerting of AI-based systems for solutions, simulations, and more. A blockchain-based solution will keep the world ready for the next such war. This blog starts with a story that illustrates the pain points of a centralized system, without going into the technical details. It lays the foundation for why a decentralized, blockchain-based system is required to handle and further check pandemics. Read the complete article here.

GSC (Global Surveillance Chain) is global and cannot be affected by politics, which establishes it as a single source of truth. Hence, such a permissioned decentralized system can easily alert health care professionals too. DApps built on GSC can allow health care professionals, agencies, and doctors to receive information. They can track and trace their patients and their real-time data (for example, temperature readings while patients are in home isolation) via IoT-based systems. Lab tests approved for patients, lab test results, prescriptions, telemedicine, and more can be effectively implemented on the GSC blockchain. As it's global and based on blockchain technology, it's highly available and scalable, allowing countries with poor infrastructure to benefit from global initiatives and further helping to contain the spread of the pandemic. Read the complete article here.
Blockchain is a disruptive, game-changing technology that will have an impact as huge as the internet over the next two decades. Enterprise blockchain solutions are gaining momentum as they enable inter- and intra-organization collaboration and offer enhanced interoperability, transparency, security and holistic visibility (traceability). Learn about blockchain, Hyperledger Fabric and design strategies with Vivek Acharya in his latest book – Oracle Blockchain Quick Start Guide. With a foreword by David Gadd, Navroop K. Sahdev and Mary Hall, this ledger of knowledge offers an in-depth exploration of Hyperledger Fabric and discusses design strategies and the ease of building enterprise blockchain solutions using blockchain platforms. Co-authors from the Oracle product team, Anand Eswararao Yerrapati and Nimesh Prakash, helped add depth to the book from the Oracle Blockchain Platform perspective. Enterprises and evangelists are no longer confined to proof of concepts (PoCs); they are engaged in real implementations and have started demonstrating concrete achievements. With this book, experience the prominence of Hyperledger Fabric by engaging with real enterprise use cases, and learn to model and implement an enterprise blockchain business network. The book's Chapter 1 begins with accounting systems and centralized and decentralized ledgers, and coins terms like the distributed double-entry accounting system. Next, it describes the definition and analogy of blockchain and demonstrates the power of equity offered by peer-to-peer (P2P) networks. Here, you will walk through various types of blockchain networks, such as permissioned and permissionless, or public and private. The chapter then graduates toward the layered structure of the blockchain architecture and the structure of transactions, as well as blocks, consensus and the transaction flow. The chapter also offers a detailed comparison of various consensus algorithms and lists the blockchain actors in detail. At the end, it displays the prominence of enterprise blockchain solutions, covers the Blockchain-as-a-Service (BaaS) platform and its qualifiers, and glances through Oracle's offering in the BaaS category. The following diagram shows the BaaS qualifier criteria. Explore Oracle Blockchain Quick Start Guide for more interesting topics and diagrams like the one below. Enterprises are exploring the immense opportunities of DLT and blockchain, and they acknowledge the strategic and long-term benefits of distributed technology; however, there are various challenges tagged to DLT and blockchain, which need to be mitigated before they are adopted by enterprises. Chapter 2 highlights the challenges and opportunities of DLT and blockchain. Although there are challenges, there is a wide array of opportunities available, and as trust in DLT and blockchain grows, businesses will explore and engage more in adopting them. This chapter drills into various properties of blockchain and highlights how those properties are upheaving the usage of blockchain in various scenarios. It delves into modeling a use case and demonstrates the integration aspects and infrastructure for implementing it. Chapter 2 also focuses on defining the design strategy for a blockchain project. Design strategy refers to an integrated process that examines how exploring the strategic solution and the business experience complement each other. Design strategy is the book's advocated approach to a blockchain initiative.
The following diagram shows the design strategy's stages, which are elaborated in the book. Once you have explored the design strategy, it's time to engage with the as-is flows and define the to-be flows. A holistic design of the solution will ensure synchronicity between the flows, use cases, and technology. Starting blockchain solutions with a Big Bang can be a problematic approach. Hence, choose a minimum viable product (MVP) and draw a sketch for future enhancement. Start with a simple, clear, yet critical use case. The engage stage also focuses on defining a blockchain-based business network for a given use case. It concentrates on sketching flows and identifying various blockchain-based business network components, such as assets, transactions, participants, events, access control, and channels. Finally, this stage focuses on defining the scope of the engagement and running it agile, with stories that allow you to realize the blockchain network for the real business use case(s). The following diagram shows a glimpse of the engage stage, which results in the identification of the various blockchain-based business network components just mentioned. For such exciting topics and diagrams, engage with my latest book – Oracle Blockchain Quick Start Guide. Chapter 3 focuses on a deep dive into Hyperledger Fabric (HLF). This will allow you to understand how business logic is implemented in HLF and helps you learn about the various transaction types that facilitate read and write operations on distributed ledgers. This chapter illustrates HLF, its architecture, and its components. The chapter brings in yet another qualifier, focusing on defining qualifiers for a Hyperledger Fabric project. Chapter 3 covers Hyperledger's architecture and demonstrates how to assemble a Hyperledger-based enterprise blockchain network. It provides an explanation of founder-based and consortium-based business networks. It illustrates business network components, adding peers to channels, and working with chaincode and smart contracts. It also covers identity, security, privacy, membership services, channels, ledgers, and transaction flow. The following diagram is one of the many diagrams in this book; it highlights the conceptual diagram of a business network via an example. The chapter follows an example-based approach in illustrating HLF's architecture. Chapter 3 also covers channels and private data collections, which allow private transactions between organizations. It concludes with design strategies for storing large objects, on-chain or off-chain. Chapter 4 focuses on designing the solution in line with the constructs of the Oracle Blockchain Platform (OBP). This chapter has two parts: the first part focuses on defining a use case around certificate issuance and sharing certificates with trusted parties on a blockchain network. The second part covers the Blockchain-as-a-Service (BaaS) platform, which offers an effective, efficient, and economical avenue to realize the potential of blockchain technology. BaaS is a catalyst for blockchain adoption. The BaaS provider ensures the installation and maintenance of your blockchain network, allowing your business and IT to focus on chaincode and dApps. The entire blockchain ecosystem is managed, administered, maintained, and supported by the cloud service provider. Many top vendors, such as Oracle, have a managed Platform-as-a-Service (PaaS) product for BaaS.
BaaS allows customers to take their SaaS, business process management (BPM) processes, custom applications, and so on, and harness the power of blockchain in a cost-effective and efficient way. This chapter also explores Oracle's BaaS by engaging with a use case in the education sector, showing you the ease of using the blockchain platform. It covers the sample business network topology, network artifacts, and the solution and deployment architecture. The chapter also delves into defining and creating an instance of a founder-based business network and adding participants to it. The knowledge gained through this chapter will help you manage the blockchain network and acts as a foundational milestone for developing solutions on OBP. The following chapter allows you to dive deep into the administration aspects of OBP and teaches you to translate a network topology onto OBP. It delves into peers, orderers, and channel configurations. It also provides details on REST configuration and the administration of REST interfaces. Subsequent chapters cover more on OBP and highlight the power of the Oracle BaaS platform. Chapter 5 is a cinch; it allows you to experiment with the doing rather than reading about the doing, and it effectively demonstrates the doing with samples. This chapter offers in-depth facts on OBP and allows you to graduate with practical knowledge of OBP. With this chapter, you will get into the practicality of translating the network topology onto OBP, creating network stakeholders, and configuring OBP instances. This ledger of knowledge illustrates setting up transaction infrastructures, joining participants to business networks, access control, adding smartness (chaincode) to business networks, and using the REST proxy configuration to expose chaincode to dApps. For the most part, the OBP SDK and OBP on Oracle Cloud are similar in features, except for the steps that let you create OBP instances; the differences between the two options are not large and are self-explanatory. This chapter primarily covers translating the network topology onto OBP, adding business smartness to the OBP network, and using the Administration REST interface. The following diagram shows the OBP console for a founder organization. Experimenting with OBP is fun with this Oracle Blockchain Quick Start Guide. The concluding chapter, Chapter 6, delves into chaincode and covers the details of chaincode development, including language selection, development tools, and development environment setup. This chapter also focuses on mapping asset models and operations, and on developing chaincode functions and interfaces. It details the full life cycle of chaincode, from development to updates, including installation, initiation, testing, and versioning. It also demonstrates full chaincode examples, with code bases built on Go and Node.js. Endorsement policies, private data collections, and their functioning are also illustrated. The chapter demonstrates chaincode testing via shim and REST endpoints, and integrating client apps with business networks using the SDK, REST, and events. Finally, it concludes with insights into chaincode, transactions, and channels by experimenting with the monitoring of business via chaincode logs and channel logs. The chapter covers topics such as setting up chaincode development, chaincode development, chaincode deployment, testing chaincode, and integrating client applications with blockchain.
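As a small taste of the chaincode topics Chapter 6 walks through: the book's samples are built on Go and Node.js, but Fabric also ships a Java shim, so here is a minimal, hedged sketch of what a certificate-issuance chaincode could look like. It assumes the fabric-chaincode-shim library is on the classpath, and the function names are hypothetical, echoing the Chapter 4 use case rather than the book's actual code.

```java
// A minimal sketch of Hyperledger Fabric chaincode using the Java shim.
// Function names (issueCertificate, readCertificate) are hypothetical.
import java.util.List;
import org.hyperledger.fabric.shim.ChaincodeBase;
import org.hyperledger.fabric.shim.ChaincodeStub;

public class CertificateChaincode extends ChaincodeBase {

    @Override
    public Response init(ChaincodeStub stub) {
        return newSuccessResponse("chaincode initialized");
    }

    @Override
    public Response invoke(ChaincodeStub stub) {
        String fn = stub.getFunction();
        List<String> args = stub.getParameters();
        switch (fn) {
            case "issueCertificate":  // write a key/value pair to the ledger
                stub.putStringState(args.get(0), args.get(1));
                return newSuccessResponse("certificate recorded");
            case "readCertificate":   // read the value back from world state
                return newSuccessResponse(stub.getStringState(args.get(0)));
            default:
                return newErrorResponse("unknown function: " + fn);
        }
    }

    public static void main(String[] args) {
        new CertificateChaincode().start(args);
    }
}
```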
Conclusion

This book is a one-stop shop for exploring distributed ledger technology, blockchain, its components, features, qualifiers and architecture, and it demystifies the prominence of Blockchain-as-a-Service (BaaS) by practically experimenting with Hyperledger Fabric and Oracle's Blockchain Platform. We tried our best to share the knowledge we gained. May it serve us well!
Business rules are everywhere in business applications: business policies, user interface validation rules, elevation/escalation rules, dynamic assignment rules, data loading rules, etc. It is safe to say that behind every successful business application, there are business rules. Business rules bring transparency and agility to business applications. Infusing quick, sustainable customization of rules leads to business agility. Along with that, if the system allows business analysts and auditors to view the translated (code) version of business policies and validate the precision of translating business policies into business rules, then such applications are termed transparent applications. Business policies and their effective implementation are possible with a business rule engine. This blog post drills into rule engines and defines the qualifiers for choosing a rule engine. The blog also covers Oracle Business Rules (OBR) and Oracle Policy Automation (OPA) and showcases the differences between OBR and OPA. For the sake of convenience, this blog is divided into three parts – part one delves into rule engines, part two drills into OBR, OPA and their algorithms, while part three focuses on OPA vs OBR. The blog covers the following topics –

Part 1
- Rule engine and qualifiers
- Declarative vs imperative model

Part 2
- Oracle Business Rules (OBR) and Oracle Policy Automation (OPA)
- OBR technical glance
- Terminologies
- Approach to write rules
- Rule structure
- OBR architecture
- Rule activation
- Forward chaining system
- Charm of declarative rules
- Ordering algorithms
- Underlying algorithms
- Performance tips
- Oracle Policy Automation – quick link

Part 3
- OPA vs OBR and qualifiers

Part # 1 - Rule Engine and Qualifiers

A software system that provides an engine for the execution of business policies (as rules) is termed a business rule engine (BRE). Business policies can be legal regulations, business policies, validation rules, translation rules, or operational rules like assignment rules, escalation rules, etc. A BRE allows the creation, modification and maintenance of such business policies as an entity separate from the business application (code) and data. Business applications like workflows and user interfaces, and supporting components like integrations, are meant for executing business functions; business rules, however, induce knowledge into business applications and produce knowledge. For example, a 22-year-old male is buying a red sports car. Such a business scenario can lead to events (via the underlying messaging framework) that create business knowledge (for example, evaluating regulatory laws, insurance laws, etc.). However, a workflow or a business application will only reach such an event. Maybe that event is captured by the workflow, which in turn triggers business rules to route such approval requests to someone authorized. Here are the interesting pieces: the workflow can be replaced by any workflow (from any vendor), while the business policies (as rules) remain the same. Here, the workflow represents the business logic and the business rules represent the business policies, but there is a separation between them. This allows greater reuse of the business rules, centralization of rules, and a separate lifecycle for creating, modifying and maintaining the rules. From the design and architecture perspective, separation of business logic from business policies (as rules) is the recommended approach. However, from a product perspective, many vendors offer native integration; for example, Oracle's BPM/SOA integrates with Oracle Business Rules (OBR).
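To picture this separation, here is a minimal, purely illustrative Java sketch; the InsurancePolicyRules interface and every name in it are hypothetical, not an Oracle API. The workflow depends only on the policy contract, so the engine behind it (OBR, OPA, or another vendor's) can be swapped without touching the workflow.

```java
// Illustrative sketch of separating business logic (workflow) from business
// policy (rules). All names are hypothetical stand-ins.
public class SeparationSketch {

    record Driver(int age, String gender) {}
    record Car(String color, boolean sports) {}

    // The business policy lives behind this contract, not inside the workflow.
    interface InsurancePolicyRules {
        double premiumAdjustment(Driver driver, Car car);
    }

    public static void main(String[] args) {
        // Stand-in implementation; in practice this would delegate to a rule engine.
        InsurancePolicyRules rules = (driver, car) ->
                ("male".equals(driver.gender()) && driver.age() < 22
                        && car.sports() && "red".equals(car.color())) ? 0.15 : 0.0;

        // The workflow only reaches the event and asks the policy layer to decide.
        double uplift = rules.premiumAdjustment(new Driver(21, "male"), new Car("red", true));
        System.out.println("Premium increase: " + (uplift * 100) + "%");
    }
}
```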
Alternatively, applications can also connect with rule engines like Oracle Policy Automation via a service invocation or APIs. Businesses have business rules, and a rule engine offers the core to execute those business rules. Business rules are the coded version of business policies, which lead to a business decision. For example – if the car is red and the car is a sports car and the driver is a male less than 22 years of age, then increase the insurance premium by 15%. Based on rule scheduling, there are different types of rule engines – forward chaining and backward chaining rule engines. Backward chaining engines are goal driven, and facts are resolved to reach a specific goal. Forward chaining rule engines use either inference rules or reactive rules. Inference rules use a condition-action model to represent rules, while reactive rules detect events, process event patterns and react based on them, for example in complex event processing.

Rule Engine Qualifiers

Whether or not you need a rule engine can be gauged by matching your use case and requirements and weighing the benefits of a rule engine against the complexity of not having one. Rules can be embedded in the programs themselves or can live in a rule repository, which is used by the rule engine. Rule engines are not singular; they come with tools and GUIs to manage and edit rules, and with options to deploy, version and administer rules. The defining nature of a rule engine comes from the rule algorithms that drive it, for example RETE, LEAPS, etc. These connect facts with rules and determine the ordering, chaining, conflict resolution and execution of the rules. In this section, we list a few of the qualifiers for using a rule engine:

Centralization of knowledge – A rule repository and rule engine offer a central ledger of knowledge. Such a ledger is executable and acts as the single source of truth for business policy. Just glance through decentralized and distributed blockchain technology, which relies on smart contracts; there, smart contracts are the centrally authored set of rules for the interacting business parties.

Multi-channel service delivery – A common rule repository allows an organization to use the same repository of rules for internal, external and integrated use.

Chaining – Chaining is a property of rules that allows the action part of a specific rule to change/alter the condition part of other rules, which ultimately leads to a change in the state of the system.

Is your domain knowledge in abundance? – Domain knowledge workers know the processes and rules and are often non-technical. Rule authoring tools allow such users to author rules themselves.

Declarative rules – A rule engine allows you to specify "what needs to be done" instead of "how it needs to be done". This means rules are simple expressions that can be verified (application transparency).

Separation of data, logic and rules – Data is the core, logic lives in the business logic layer, and rules are kept separate. This essentially means that data, logic and rules are decoupled. It has its own set of advantages and disadvantages, depending on your analysis. For example, a business can argue that a rule engine brings maintainability issues; however, the advantage a rule engine offers is a central repository for rules, which eases rule maintenance and helps you inch towards the future of 'autonomous' organizations. Decoupling logic and rules eases rule maintenance, as changes to rules need to be reflected in just one central place.

Agility – A rule engine in its standalone mode (e.g.
an eligibility determination tool or assignment & escalation rules) can typically be implemented much more quickly than a full custom solution. The full solution may be a multi-year implementation, but the organization could build, deploy and test an eligibility determination tool or an assignment and escalation tool relatively quickly, before the full system is ready to go.

Power of algorithms – Proven algorithms like RETE, LEAPS, etc. are efficient at pattern matching.

Rules authoring – Tools and applications like JDeveloper, the Composer application, etc. allow you to edit, manage and change rules; they also offer audits and empower business analysts to do the same.

Ease of reading – You can write rules in natural language, simple if-else statements, decision matrices, etc. These are understandable to technical and non-technical audiences alike.

Does your logic change often, and do you need rules agility? – The logic may be simple; however, if the rules change often, the business is seeking rules agility.

Beyond business empowerment, there lies declarative vs imperative – Organizations are skeptical about the rule engine's pitch of empowering business users to specify and modify the rules. It is plausible, yet it works out on a case-by-case basis. However, the point here is that even if it does not work well for business users, there are several factors that should incline a business towards a rule engine, such as the declarative vs imperative model.

Declarative vs Imperative Model

We discussed the imperative and declarative models above, so it is worth delving into them in this section. The imperative model consists of sequenced commands with loops and conditions. Essentially, the imperative model focuses on HOW, where the program dictates HOW to do it. On the other hand, the declarative model consists of rules, each of which has a condition and an action. The declarative model focuses on WHAT, defining what needs to happen instead of how it needs to happen. The imperative model involves writing extensive code with many steps to solve problems and reach decisions (desired results). These code instructions dictate to the system HOW to perform something. The code can turn out to be so overwhelming that focus shifts towards the code and its maintenance rather than the rules. Besides that, such code is distributed, and its maintenance will soon turn into a nightmare. On the other hand, rule engines are a declarative computational model. A rule engine (declarative computational model) has the following advantages – it has a framework and offers native integration with the underlying infrastructure; it offers a development environment and a business-and-IT round trip; it enables separation of the rule engine from data and business logic (separating the rules has its benefits, like a separate development cycle for rules, which can later be integrated with the business logic); and it is based on algorithms suited to deterministic or non-deterministic use cases. Having no rule engine (the imperative computational model) has the following disadvantages – the imperative model leads to a lot of code, for example Java code with loops, functions, libraries, etc. Disparate applications may have contradicting logic. This is code PRETENDING to be a rule engine, and it results in a system where rules are maintained ONLY by IT and not by business users. The imperative model sounds appealing when certain technical resources (skills) are in abundance; however, this 'masquerading rule engine' leads to a complicated structure. Debugging becomes a nightmare and maintenance an ordeal.
In addition, as it grows, determining the overlap between rules will be time consuming and error prone. With the imperative model, business policies become lost in such implementations. Business policies are naturally stated in a declarative form, and due to the basic nature of imperative programming, such requirements need to be translated into a sequential execution flow. This translation is costly, error prone, hard to trace and difficult to maintain. The imperative model leads to complex scenarios and adds the complexity of writing logic to maintain the ordering of rules, conflict resolution, chaining, etc. Declarative models allow expressing business policies (aka rules) in simple logic or natural language, making the reasoning and the resulting decisions more obvious to those who author and maintain the policies/rules. Ordering, chaining, conflict resolution, etc. of the policies are automatically taken care of by the rule engine, which ensures the correct evaluation of the business policies.

Part # 2 – OBR and OPA

The following section covers Oracle Business Rules, offers a glimpse of Oracle Policy Automation (OPA) and compares them from various use case perspectives. The forward chaining system, inference cycle and declarative model of a rule engine are deliberately included below to allow readers to correlate the rule engine section above with the OBR and OPA sections that follow.

Oracle Business Rules (OBR)

When an application can be viewed by a business analyst or auditor and they can determine that the application precisely implements the business policies, it can be termed a transparent application. Examples include insurance, human resourcing, financial services, etc. OBR offers efficient development and deployment of business rules. OBR consists of a rule authoring tool for defining rules, an SDK for rule access and a rule engine for the execution of rules.

Technical Glance

Oracle Business Rules adds flexibility to applications and processes by empowering analysts and process owners to define and edit business rules. With OBR, the application's business logic is separate from the rules, allowing agile rule maintenance and creating empowered business analysts and process owners. Oracle Business Rules includes the following components – the rule editor, the rule browser, the rules engine and the rule repository for rule discovery, governance, versioning, traceability and availability across the enterprise. Users can use the business rules editor [rules designer or rule composer], and rules are stored and managed in a central business rules repository. You can reference pre-defined business process rules within the modeler. The Business Rules activity in the business process model gets converted to a decision service that in turn invokes the business rules engine in the executable business process. Business users can change these business policies on the fly via an intuitive web browser interface without having to redeploy or re-implement the business process.

Terminologies

A business rule dictionary comprises the following:

Rulesets – A set of conditions and actions that result in the outcome of the rule.
Facts – The data objects used by the ruleset.
Decision function – The reference to the code that executes the rules.
Functions – These are optional and are called in the ruleset.
Globals – Usually the constants (data objects) used in the ruleset.
Valuesets – A range or list of values used in a rule's condition.
Links – Links to other business rule dictionaries.
The Oracle Business Rules environment is implemented in a JVM or in a J2EE container by the classes supplied with rl.jar. The RL Language command-line interface provides access to an Oracle Business Rules RuleSession. The RuleSession is the API that allows Java programmers to access the RL Language in a Java application (the command-line interface uses a RuleSession internally). An RL Language ruleset provides a namespace, similar to a Java package, for RL classes, functions, and rules. In addition, you can use rulesets to partially order rule firing. A ruleset may contain executable actions, may include or contain other rulesets, and may import Java classes and packages. An RL Language rule consists of rule conditions, also called fact-set-conditions, and an action-block or list of actions. Rules follow an if-then structure, with rule conditions followed by rule actions.

Ruleset – In OBR, a ruleset is a container for if-then rules and decision tables. A ruleset offers a namespace, similar to Java packages. Along with that, you can use rulesets to order rule firing.

Facts – Oracle Business Rules (OBR) are written in terms of fact types, where each fact is an instance of a fact type. Facts are the prerequisites to rules: you must have facts (created or imported) before authoring rules. In OBR, a fact type is a type definition in the data model, while a fact is an instance of that type. The rules designer allows you to define a variety of fact types based on Java classes, Oracle RL, XML schema and ADF BC view objects. Rules are written in terms of fact types.

Decision function – Decision functions are contracts to invoke rules from SOA/BPM/Java composites/code. A decision function's contract includes input fact types, rulesets and output fact types.

Rule dictionary – A rule dictionary is an XML file (a .rules file) that stores rulesets and the data model, and you can link a dictionary with other dictionaries. In OBR, dictionaries are containers for facts, functions, globals, valuesets, links, decision functions and rulesets. You can create more than one dictionary, and each dictionary can contain any number of rulesets.

SDK – The Rules SDK offers the decision point API, which allows applications to access, create, update and execute rules in OBR dictionaries.

Approach to Write Rules

OBR rules can be modeled in the following ways – as if/then rules, and as decision tables. Decision tables are multiple related rules expressed in a spreadsheet-like format. If/then rules are written using the following approaches – as general rules, which use a pseudo-code language to express rule logic, or as verbal rules, which use natural language statements to express rule logic; a verbal rule condition is a business phrase that can specify one or more logical tests. Note – rules and decision tables are grouped in an Oracle Business Rules object called a ruleset. You group one or more rulesets and their facts and valuesets in an Oracle Business Rules object called a dictionary.

Rule Structure

A rule comprises a rule condition and rule actions. Rules are used to evaluate conditions and specify the actions taken when the conditions are met (evaluate to true). Rule condition – A rule condition is a component of a rule composed of conditional expressions that refer to facts. General rule example – if Driver.age < 21. Verbal rule example – IF the driver is an underage driver. Rule action – The Oracle Rules Engine activates a rule whenever there is a combination of facts that makes the rule's conditional expression true. In some respects, a rule condition is like a query over the available facts in the Oracle Rules Engine, and for every row that returns from the query, the rule activates.
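As a conceptual toy (not the OBR API), this query-like behavior of a rule condition can be pictured in plain Java: each fact in working memory that satisfies the condition yields one activation.

```java
// Conceptual toy only (not the OBR API): a rule condition behaves like a
// query over the facts in working memory - one activation per matching fact.
import java.util.List;

public class ActivationSketch {

    record Driver(String name, int age) {}

    public static void main(String[] args) {
        List<Driver> workingMemory = List.of(
                new Driver("Ann", 19), new Driver("Bob", 34), new Driver("Cam", 20));

        // Condition: if Driver.age < 21 (the "general rule" form shown above)
        workingMemory.stream()
                .filter(d -> d.age() < 21)          // the query over facts
                .forEach(d -> System.out.println(
                        "activation: underage-driver rule for " + d.name()));
    }
}
```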
Note – rule activation is different from rule firing. The rule condition activates the rule whenever a combination of facts makes the conditional expression true. The rule action, aka the rule's THEN part, contains the actions that are executed when the rule is fired. A rule is fired after it is activated and selected among the other rule activations using conflict resolution mechanisms such as priority. A rule might perform several kinds of actions: an action can add facts, modify facts, or remove facts, and an action can execute a Java method or perform a function which may modify the status of facts or create facts. Note – rules fire sequentially, not in parallel, and rule actions often change the set of rule activations, which can affect which rule fires next.

Rule Activation

A rule's action is activated if all of the rule's conditions are satisfied. There are several kinds of actions that a rule's action-block might perform. For example, an action in the rule's action-block can add new facts by calling the assert function or remove facts by calling the retract function. An action can also execute a Java method or perform an RL Language function. Using actions, you can call functions that perform a desired task associated with a pattern match. The Oracle Rules Engine first matches the facts against the rule conditions of all rules as the state of working memory changes. A group of facts that makes a given rule condition true is called a fact set row. A fact set is a collection of all the fact set rows for a given rule; thus, a fact set consists of the facts that match the rule conditions for a rule. For each fact set row in a fact set, an activation, consisting of the fact set row and a reference to the rule, is added to the agenda (the agenda contains the complete list of activations). Rules fire when the Oracle Rules Engine removes activations, by popping the activations off the agenda and performing the rule's actions. The Oracle Rules Engine may remove activations without firing a rule if the rule conditions are no longer satisfied; for example, if the facts change or the rule is cleared, then activations may be removed without firing. Further, the Oracle Rules Engine removes activations from the agenda when the facts referenced in a fact set row are modified or retracted such that they no longer match a rule condition (this can also happen in cases where new facts are asserted, when the ! operator applies). Oracle Business Rules provides a high-performance and easy-to-use implementation of business rules technology. It provides an easy-to-use authoring environment as well as a very high performance, inference-capable rules engine. Oracle Business Rules is part of the Oracle Fusion Middleware stack and is a core component of many Oracle products, including both middleware and applications.

OBR Architecture

For internet-based business applications, there are three primary layers – the presentation layer (user interface, aka the display layer), the business logic layer (middle layer) and the policy layer (rules engine layer).
This is ideal for a rule-enabled application: policy logic is separated from business logic and is written as business rules (if-else, decision matrix or declarative form). Business logic can be written in Java or a Java-based technology, while rules are authored as business policy and implemented as a set of rules, which are executed by a rule engine. The diagram below shows the architecture of OBR. Here, business policies are executed in the rule engine, where facts are analyzed and results are returned. The rule authoring GUI [or applications using the Rules SDK] allows rule creation; rules are stored in the rule repository, and the rule engine refers to this repository for rule execution. Oracle Business Rules is written in Java and is integrated with XML and Java. OBR consists of the following components – the rule engine, the Rules SDK and the rule authoring tool. The authoring tool offers if-else, decision matrix and English-like paradigms for declaring rules.

Forward Chaining System and Inference Cycle

In OBR, the rule-based system is a data-driven, forward chaining system. The facts determine which rules can fire, so when a rule that matches a set of facts fires, the rule may add facts, and these facts are again run against the rules. This process repeats until a conclusion is reached or the cycle is stopped or reset. Thus, in a forward-chaining rule-based system, facts cause rules to fire, and firing rules can create more facts, which in turn can fire more rules. This process is called an inference cycle. A rule-based forward chaining system consists of the following: the rule base, which contains the appropriate business policies or other knowledge encoded into if/then rules, verbal rules and decision tables; working memory, which contains the information that has been added to the system (with Oracle Business Rules you add a set of facts to the system using assert calls); and the inference engine, the rules engine, which processes the rules and performs pattern matching to determine which rules match the facts for a given run through the set of facts.

OBR offers the charm of declarative rules

With OBR you can use declarative rules, where you create rules that make declarations based on facts rather than coding. Here is an example of declarative rules: IF a driver is a Premium customer, offer them a 20% discount; IF a driver is a Gold customer, offer them a 10% discount. Advantages of declarative rules – statements are declared without any control flow; control flow is determined by the rules engine; rules are easier to maintain than procedural code; and rules relate well to business users' working methods. When a rule adds facts and these facts run against the rules, this process is called an inference cycle: the initial facts cause rules to fire, and firing rules can create more facts, which in turn can fire more rules. For example, using the initial facts, the rules engine runs and adds an additional fact, and an additional rule tests for conditions on this fact, creating an inference cycle.

Ordering Algorithms – Order of Rules

Oracle Business Rules uses working memory to contain facts (facts do not exist outside of working memory). A RuleSession contains the working memory. Each RuleSession includes one ruleset stack. The RuleSession's ruleset stack contains the top of the stack, called the focus ruleset, and any non-focus rulesets that are also on the ruleset stack. The focus of the ruleset stack is the current top ruleset in the ruleset stack.
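Before looking at Oracle's actual ordering algorithm below, here is a toy forward-chaining loop in plain Java, again conceptual rather than OBR's implementation: firing rules assert new facts, which can activate further rules, until nothing new can fire. All rule names and facts are hypothetical.

```java
// A toy forward-chaining inference cycle (not OBR's implementation):
// firing rules assert new facts, which can activate further rules,
// until no rule can add anything new.
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

public class InferenceCycleSketch {

    record Rule(String name, Predicate<Set<String>> condition, String newFact) {}

    public static void main(String[] args) {
        Set<String> workingMemory = new HashSet<>(Set.of("driver:premium"));
        List<Rule> ruleBase = List.of(
                new Rule("premium-discount", wm -> wm.contains("driver:premium"), "discount:20%"),
                new Rule("discount-letter", wm -> wm.contains("discount:20%"), "send:offer-letter"));

        boolean fired = true;
        while (fired) {                  // repeat until no rule can fire
            fired = false;
            for (Rule r : ruleBase) {
                // add() returns true only when the fact is new to working memory
                if (r.condition().test(workingMemory) && workingMemory.add(r.newFact())) {
                    System.out.println("fired " + r.name() + " -> " + r.newFact());
                    fired = true;        // a new fact was asserted; re-run the match
                }
            }
        }
    }
}
```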
The Oracle Rules Engine sequentially selects a rule activation from all of the activations on the agenda, using the following ordering algorithm:

1. The Oracle Rules Engine selects all the rule activations for the focus ruleset, that is, the ruleset at the top of the ruleset stack.
2. Within the set of activations associated with the focus ruleset, rule priority specifies the firing order, with higher priority rule activations selected to fire ahead of lower priority rule activations (the default priority level is 0).
3. Within the set of rule activations of the same priority within the focus ruleset, the most recently added rule activation is the next rule to fire. Note, however, that in some cases multiple activations may be added to the agenda at the same time; the ordering for such activations is not defined.
4. When all of the rule activations in the current focus have fired, the Oracle Rules Engine pops the ruleset stack, and the process returns to Step 1 with the new focus.

Algorithm – The Underlying Enabler

The Rete algorithm, introduced in Charles Forgy's 1979 PhD thesis, was based on the observation that the facts in a rule engine's working memory change slowly over time through a series of inference cycles. Facts about the Rete algorithm – Rete caches previous rule condition evaluations. Caching previous rule condition evaluation results avoids the re-evaluation of all rule conditions when a small change is made to working memory. Hence, it is good for those applications where the rules are NOT changed frequently. Business rules that change frequently do not benefit from the use of a Rete algorithm, while still incurring the overhead of the algorithm's memory consumption characteristics. The Non-Rete algorithm (NRE) is an alternative to the Rete algorithm that consumes less memory than the Rete algorithm, and for business rules use cases it results in improved performance. The core of the NRE algorithm is a new rule condition evaluation approach; the majority of the rules engine is unmodified and shared across the Rete and NRE algorithms, and the externally defined semantics of the existing Rete algorithm are preserved by the NRE algorithm. Facts about NRE – NRE uses a simpler internal rule representation. NRE generates byte code for rule tests, rule actions, and user-defined functions. NRE offers a more efficient modify operation. With NRE, rule conditions are not evaluated until the containing ruleset is on the top of the stack; after the initial evaluation, re-evaluation occurs on fact operations as needed. NRE can also avoid unnecessary re-evaluation when rulesets are present on the ruleset stack only once during rule execution.

Rete vs Non-Rete

Following is a quick comparison of the Rete and Non-Rete algorithms. In the Rete algorithm, rule conditions are evaluated when fact operations occur (assert, modify, retract); in the Non-Rete algorithm, rule conditions are evaluated for the first time when the ruleset is on the top of the stack, and on fact operations after that. Rule firing order: there are cases where the rule firing order is not defined, for example when a single fact activates multiple rules at the same time and the priorities are identical; in these cases, the order in which the rule activations fire may differ between the two algorithms.

Switching to NRE

In the OBR designer, the NRE algorithm can be selected in the dictionary settings panel under the Preferences tab. SOA and BPM applications will automatically handle the switch of algorithms.
For JEE applications, the algorithm selection needs to be specified when the RuleSession or RuleSessionPool is created. Oracle Business Rules uses working memory to contain facts; facts do not exist outside of working memory, and a RuleSession contains the Oracle Business Rules working memory.

Quick performance tips for business rules – some bonus for the developers:

- The OBR rule engine is efficient when the facts are based on RL classes / Java beans. Facts are the data on which the rules perform their reasoning.
- Flatten fact types, meaning flatten the facts to single dereferences. For example, account.user.address involves multiple object dereferences, and it is more efficient to flatten it to a single dereference. If the facts have a hierarchical structure, use assertXPath to assert the object hierarchy, asserting account and user as fact types.
- Code rules with an emphasis on avoiding side effects. The rule engine builds a Rete network, and Rete network operations run as facts are asserted, modified or retracted. During this Rete network building, rule conditions are evaluated more or fewer times depending on the rule conditions. If rule conditions have side effects, like changing a value of a fact used as a rule fact or changing state, the rule conditions may be evaluated multiple times, which hurts performance.
- Avoid expensive operations. Avoid rules that access a database (directly from the rule engine) or operations that involve I/O (disk/network).
- Revisit the ordering of rules. Reordering rules can improve performance. As a recommendation, try placing the facts that are not expected to change before the facts that might experience changes; placing non-changing facts in the first clause improves the time taken for rule evaluation. Another recommended approach is to place, as the first fact clause, the clause that matches fewer facts than the other fact clauses in the rule condition. Essentially, place the clauses that match fewer facts first. This reduces memory usage and enhances response time.

Oracle Policy Automation

OPA is a rule engine designed to offer consistent and auditable advice across channels and business processes by capturing rules in natural-language Microsoft Word and Excel documents and building interactive customer service experiences, called interviews, around those rules. This blog offers only a quick glance at OPA; please follow the link below to read more about it – https://documentation.custhelp.com/euf/assets/devdocs/cloud19b/PolicyAutomation/en/Default.htm

Oracle Policy Automation – a quick glance from the algorithm perspective

Oracle Policy Automation provides a complex eligibility and policy inferencing algorithm for forward-chain inferencing. This is well suited to making decisions based on legislation modeled in natural language. Difference between the linear and Rete algorithms:

Linear – The linear inferencing algorithm maximizes the use of large, onboard processor memory caches by serializing the inferencing process. It works by ordering rules so that they can be processed in a single left-to-right sweep for each forward inference cycle; only one sweep of the rules is required, regardless of how much data is changed between inference cycles. By minimizing the number of processor cache misses, this approach maximizes the efficiency of memory access to the rules; that is, rules are almost always accessed from the processor's high-speed onboard memory. This is the most important feature of the algorithm, because nontrivial rule sets, such as those typically found in legislation and policy, will not fit within such caches. The sequential access of the rules also means more space-efficient data structures; such structures further increase the use of the onboard processor cache. Moreover, all rule access is read-only, which means the same cached copy can be shared across multiple inferencing sessions.
Rete – The Rete algorithm maintains a network of data flows with the aim of minimizing the discrete number of operations required for each inference. This, however, requires larger and more complex data structures, as the rule engine essentially replicates the logical structure of the underlying rules for each inference session. Each individual change in data requires a "walk" of the network, because multiple changes in data cannot be handled at the same time. Each walk of the network requires non-sequential, mutable memory access, which results in substantially higher memory requirements and lower memory access locality, generating substantially more memory cache misses. As a result, it is less suitable for handling the complex logic expressed in legislation and organizational policy.

Following are the advantages of linear inferencing over the Rete algorithm:

- It easily handles multiple, simultaneous changes to input data. This makes it better suited to real-world processing scenarios that include transactional processing, batch data upload from a database, and interactive applications.
- It minimizes per-session memory usage, so less memory is required to service high-performance applications with heavy concurrent usage.
- It better exploits modern processor architectures; in particular, the linear inferencing algorithm's space-efficient data structures increase the use of the onboard processor cache to provide much higher performance.

Part # 3 - OPA vs OBR

If you are reading part # 3, you are already convinced of the need for a rule engine. So, welcome to part # 3. Choosing a rule engine is a case-by-case assessment. In this section, we discuss the qualifiers for choosing between Oracle Policy Automation (OPA) and Oracle Business Rules (OBR). Following are the qualifiers:

Do you need complex determinations, decisions, recommendations and calculations? Do you need guidance or advice tools where you publish questions and users see questions based on their circumstances? Are there clear determinations to be made based on pre-determined rules? For determination, recommendation and advisory use cases, OPA is the best fit.

Can the goals be defined effectively? If the goals can be defined effectively, then OPA is a fit. For example – Is the person eligible for, entitled to, liable for, or required to...? What is the amount of the benefit, compensation, tax, payment, discount...? How do I report/claim/return/file/onboard...? If the goals can be expressed in these terms, OPA is the solution. One more example – do the rules involve triggering alerts, routing and escalation? In that case, use native OBR solutions with processes, as they are more efficient than OPA.

Do you have an optimization problem or an eligibility determination case? Use OBR for optimization problems and OPA for eligibility determinations. For example, there are 100 widgets and 200 people in a group.
Each person may have one or more widgets based on certain criteria about each person's position in the organization. Alternatively, the assignment of BPM tasks needs to be automated based on certain attributes of a person, such as title and district. Alternatively, the escalation of tasks needs to be performed based on certain input values, such as a dollar amount. For such optimization cases, OBR is the ideal choice, as this is a case of a non-deterministic nature and OPA is not a fit for non-deterministic cases.

Deterministic vs non-deterministic scenario? Non-deterministic – with a non-deterministic algorithm, the system may produce different results for the same input. For example, if the input contains the title Appraiser and the district North, then the task needs to be assigned to the AppraiserNorth group; maybe later, this changes to a different group or role. Deterministic – if you have a problem of a deterministic nature, for example determining eligibility, OPA is more suitable. With a deterministic algorithm, the system will produce the same output for the same input. For example, you have rules to determine eligibility for a specific type of widget, or you have a business function rule for ownership transfer (example below). OPA is a nice fit for deterministic problems. Refer to the following diagram for a quick glance at the qualifiers.

Conclusion

Today, compliance with complex regulations or intricate company policies is a substantial challenge for many organizations. Rule engines play an important role in driving compliance with these rules. If you need to externalize business rules, support rule agility and empower business and product owners to change rules, then you should consider rule engines. If a business decides to adopt and embrace a rule engine as part of a strategic solution, it should be ready to acquire skills in design and development using that rule engine. Consider it an investment, because with AI, blockchain and all the modern technologies, businesses are fast-forwarding to rules-based organizations, aka autonomous organizations. Having no rule engine, and the act of 'masquerading' a rule engine, surprises me the most, as there must be a great reason why rules are termed "business rules". Hence, I recommend avoiding the imperative model and custom rule engines, as they are only masquerading as rule engines. A rule engine should be for the business users, by the business users and of the business users.
To start with, there may not be a single metric to measure the complexity of BPM processes. However, I think the following factors should be considered to derive the complexity of processes – activity complexity, control complexity, data complexity and resource complexity.

Activity Complexity

The act of defining complexity based on the count of activities in a process is termed activity complexity. Deriving complexity just by defining and measuring the activity count does not give the right level of complexity; it needs to be quantified together with control, resource and data complexity. For example, if you have a process with no control flow, loops or gateways, and it is just a sequential process with 200 activities, then it is a simple process with zero control complexity. It is similar to sizing a software program just by accounting for the lines of code in the program, without considering control complexity and other factors.

Control Complexity

Elements like gateways (splits, joins, loops, etc.) and transition flows, along with the fan-out from these gateways, are the most important considerations in determining the complexity of processes. Control complexity also helps create process models, which can then be enriched by adding resource and data complexity to the process. After a fan-out, complexity also grows when you account for the resources involved after the fan-out, like tasks, services, etc.

Data Complexity

Each activity has input and output parameters. Resource-intensive activities like tasks, services, events, transformations, assignments, etc. involve further transactions, calculations and enrichments. This collectively adds another complexity factor, termed data complexity. Data complexity can further be subdivided into other complexities, like integration complexity, transformation complexity, exchange complexity (for example, the exchange of data between a task and its UI), etc. Some data complexity, like integration complexity, can be static or might need enrichment; transformation complexity, however, is dynamic and governs the transition flow of the process.

Resource Complexity

A process interacts with various resources like tasks, services, events, humans, assignments, checks, etc. Along with that, users and roles need to be considered for process execution. Resources like users and roles need mapping to the org structure, definition of roles in role repositories, etc. Meticulous definition of the business requirements around roles, and their realization and adoption in processes, leads to access and action controls for the BPM process. This adds to the resource complexity of the process. Along with that, the process's ability to publish/subscribe to events, services like logging, etc. further adds to the complexity.

Conclusion

Calculating BPM process complexity just by considering the number of activities does not lead to a real, quantified value. Control and resource complexity, along with data complexity, together lead to the real quantified complexity value for a BPM process. What's next in this series? Stay tuned for a future blog where we talk about the mathematical formula to define complexity for processes used in banking and finance.
This post is one in a series of posts to be published on Oracle API Platform Cloud Service. In this article, we will focus on the API marketplace, API platforms, Oracle's API Platform, Oracle API PCS and its stakeholders, extending the enterprise API ecosystem, Oracle's API PCS architecture, and Oracle API PCS in concert with other Oracle products. To start with, let's define an API. An Application Programming Interface (API) is a set of routine definitions, protocols and tools for building software and applications; it makes it easy to develop programs by providing all the building blocks while abstracting the underlying implementation and only exposing the objects/actions that developers need.

Endpoint – Endpoints have addresses, and for REST service endpoints, they look like URLs.
Service – A service is a task that an API defines, and it sits behind the endpoint.
Interface – The interaction point; it hides the implementation of the service the API will perform.

The API definition identifies the address (endpoint) where the target service can be invoked. Services do something useful for a client, and finally the API enforces an interface, or contract, between a client (web page or mobile app) and the service. The rules to access the endpoint are prescribed by the interface. An interface definition makes it clear what is expected and what will result from the API call (see the sketch at the end of this section). Note – APIs are doors to the digital world. The digital world of cloud-based infrastructure, mobile apps, social sites and online shopping is proliferating, and APIs increase the connectivity to these digital worlds. APIs are the doors to the modern digital world, as they open up backend system capabilities to partner integrations and mobile applications. APIs allow customers to build ecosystems that encourage partners to utilize and extend the customers' backend services. This enhances the customer's reach into the digital world and opens up new revenue streams.

API Marketplace

We have talked about APIs; now let's look at the role of the API marketplace and the API platform. The API marketplace has great strategic value, as APIs allow organizations to grow their business quickly by sharing services with other organizations and firms. It's about nurturing and building an API ecosystem to leverage automated services with business partners. For example, Organization "A" can share its photo printing APIs via a developer portal to allow access to its printing APIs. Another company, "B", can use these APIs to build mobile apps that print photos from a phone without uploading them to a desktop computer. Company "B" lets users from various social platforms print via its apps, and now Organization "A" is getting printing business via Organization "B"'s customers. eBay, Google, IBM, Salesforce.com and various other companies participate in the API economy by sharing their APIs to allow other companies to use them, which also helps generate revenue. Traditionally, APIs were used behind the firewall, with a primary focus on service discovery and reusability within the IT department. Now, in the API economy era, APIs have become externalized. Digital assets are exposed to the digital world via APIs. It's an innovative way for organizations to collaborate and exchange information and services with partners. This broadens the ecosystem and can increase revenues many-fold. For example, an airline company uses Uber APIs to allow customers to book an Uber from the airline's application, offering the airline's customers the ability to schedule Uber rides that fit their travel. Uber, in turn, allows the airline's apps to access its services so that airline customers can schedule and book Uber rides. This benefits both business partners and adds to their revenues.
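Tying the endpoint/service/interface terms above to Organization "A"'s photo-printing scenario, here is a minimal runnable sketch using only the JDK's built-in HTTP server; the /photos/print path and the response payload are hypothetical.

```java
// A minimal sketch tying the terms together with only the JDK: the URL is
// the endpoint, the handler is the service behind it, and the request/response
// shape is the interface (contract). The photo-printing API is hypothetical.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PhotoPrintApi {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Endpoint: http://localhost:8080/photos/print
        server.createContext("/photos/print", exchange -> {
            // Service: the useful task behind the endpoint; implementation hidden.
            byte[] body = "{\"status\":\"queued\"}".getBytes();
            // Interface: the contract - what the client sends and what it gets back.
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) { out.write(body); }
        });
        server.start();
        System.out.println("Photo printing API listening on port 8080");
    }
}
```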
What's needed to participate in the API economy? Of course, the answer is an API platform.

API Platform

An API platform allows hosting and managing APIs, making it easy for stakeholders to build new applications and integrate existing ones, and also allows developing mobile and IoT applications. The API economy is not focused on one specific vertical; it goes horizontally across all vertical industries, and various businesses and sectors can use an API platform to participate in the new-era digital world. Who can participate, and how? Following are the different stakeholders in the API economy:

Business – Business development evangelizes the value of APIs and initiates the entry into the digital world, while the lines of business (LOBs) find the key areas for collaborating with partners using the APIs. Once the APIs are developed and exposed to business partners, the business development team can monitor the performance, reach, value and effectiveness of the APIs. The business will love to know who uses the APIs and how much they use them. LOBs are looking for the business to be nimble and agile, with faster time to market and low IT dependency. Oracle API PCS makes monitoring APIs quite easy and addresses these LOB business needs.

Technical leadership – Technical leadership defines the architecture and standards, procures the API management software, works with partners to define and develop the APIs, establishes SLA agreements and monitors the technical performance of APIs. Technical leadership, comprising CIOs, IT managers and enterprise architects, is looking for solutions that implement the LOBs' desire to participate in the digital era and extend their business, yet minimize the startup cost of doing so. You can address all of this through Oracle API PCS.

Developers – Developers develop APIs and monitor their throughput and performance. They are concerned about the ease of developing, creating, publishing, sharing and tracking APIs, and Oracle API PCS addresses these concerns very well.

Oracle's API Platform

An organization's initiative, from the LOBs and technical leadership, requires a world-class API management platform, and Oracle API PCS is the go-to product. The following diagram shows the key features of Oracle's API Platform Cloud Service (API PCS). The key features of API PCS are:

Lifecycle management – publish, monitor and retire APIs. Once a developer defines and implements an API, API PCS supports the life cycle of the API; this includes publishing the API interface to the developer portal so that developers can register for those APIs. API PCS users can track resource consumption and usage rates. If an API is to be discontinued, the platform can retire it and prevent further access.
Operations – Once the APIs are published, the platform can be used to manage and monitor their access.
Security – API PCS offers a full range of security policies to protect API access.
Community management – Manages the consumers of the APIs.

Oracle API PCS Platform and Stakeholders

This section illustrates the Oracle API PCS platform and the expectations various stakeholders have of it. Let's start with the developers and address their concerns first. Developers – an API platform needs to serve developers at both ends: producer developers and consumer developers.
Producer developers are looking for a platform that makes it easy to develop and publish APIs through rich developer portals and to monitor them. Similarly, consumer developers are looking for a convenient way to discover and use the APIs published on the producer's portal. They also need the platform to help them understand the APIs' reliability, SLAs, availability, and performance.

Technical Leadership – They want to reduce the time and effort needed to set up environments for developers to work in, and they are looking for a platform that also offers reduced maintenance and ease of patching, cloning, backup, and scaling.

Business Leadership – They are looking to open new business avenues and participate in the digital economy by creating new, strategic apps. Oracle API PCS allows the business to be effective and eases API adoption.

What's the Urgency?

There is great urgency in organizations to move toward cloud, mobile, and IoT, which in turn increases the urgency for integration. The pressure to deliver quality solutions in the shortest time is intense, which makes integration paramount. API PCS can be the integration solution that helps IT and the business meet the integration demand that cloud, mobile, and IoT have generated.

API PCS is the result of an evolution in integration. Customers started with SOA and ICS to integrate Oracle applications, legacy applications, and custom applications deployed on-premises and in the cloud. Now they want their applications extended to mobile applications and devices, and they want to monetize them by allowing access to some of their key services. Enterprises are now looking to extend their services and build an ecosystem where their partners can leverage their internal back-end systems and data. API PCS offers an API management system that empowers the partner ecosystem to leverage APIs that work with integration systems; the result is modern applications that open new revenue streams for the enterprise.

Extend Enterprise API Ecosystem

This section focuses on how an enterprise can extend its digital reach through a partner ecosystem. The following diagram tries to show the "how". Customers typically have a SOA platform that hosts many reusable services. If the customer has added APIs to its strategic tier, it will have a layer of APIs that exposes the services in the SOA platform, and it will have used those APIs to build applications that meet its business needs. Now the customer can use API gateway technology (API gateways are covered in a subsequent section) to securely expose the APIs outside the firewall, so that partners can utilize the SOA platform services and the new applications they build. Partners can also use these published APIs to integrate their systems with the customer's systems through the customer's exposed SOA services. These partner integration apps can use some of the partners' own APIs to access their back-end systems and create new services. These new partner services then use the APIs to inter-operate with the customer's services. You can even add the customer's customers to this API ecosystem as consumers of the APIs.

Oracle API PCS Architecture

This section introduces the Oracle API PCS architecture and API gateways. Oracle API PCS serves all the different stakeholders and personas involved. API managers, implementers, and administrators can use the browser-based Manager Portal to build and manage APIs. API consumers can use the browser-based Consumer Portal to discover and use exposed APIs.
API designers can use the Apiary cloud service to provide rich API documentation that surfaces on the developer portal. The table below shows the personas involved:

Persona: API Manager / API Implementer / Gateway Administrator
Tool: Browser-based Manager Portal
Task: Use the Manager Portal to interact with the API PCS management service (the API PCS management service runs on Oracle Cloud).

Persona: API Consumer
Tool: Browser-based Consumer Portal
Task: Use the Consumer Portal to find published APIs.

Persona: API Designer
Tool: Browser-based API design tool (Apiary)
Task: Use the Apiary cloud service to provide API documentation that surfaces on the developer portal, so API consumers can understand the interfaces to the APIs.

Gateways – The architecture diagram clearly shows that gateways are separate from the management service; however, a gateway communicates periodically with the management service to learn about new APIs and policy changes, and to exchange metrics about the service requests.

Functions of gateways:
- Receive API requests from applications.
- Enforce policies.
- Pass requests to the backend services.
- Return responses to calling applications – the response from the backend service is passed back to the calling application through the gateway.
- Manage and enforce security – gateways are responsible for enforcing security. As the architecture diagram shows, gateways are deployed inside the firewall and behind load balancers. The gateways enforce the security policies that API PCS users define, and APIs are made available to user applications by posting their definitions and security policies to the gateways.

Note – API gateways are the only way for client applications to access the APIs. A gateway can be deployed anywhere, including Oracle Cloud, any other cloud, and on-premises.
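As a rough illustration of the request path just described, the sketch below shows a client application calling an API through a gateway endpoint. The host, path, and app-key header name are assumptions made purely for illustration; the actual policies and headers depend on how the API is configured in API PCS:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GatewayClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical gateway-hosted endpoint; client apps never call the
        // backend service directly, only the gateway.
        URL url = new URL("https://gateway.example.com/flightinfo/v1/status/UA123");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Illustrative application-key header; a gateway key-validation
        // policy would reject requests without a registered key.
        conn.setRequestProperty("X-App-Key", "my-registered-app-key");
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```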
Oracle API PCS In-Concert with Other Oracle Products

Customers have integration products such as ICS and SOA Suite (on-premises, or SOA CS in the cloud) to build new services. ICS allows LOBs and citizen integrators to quickly integrate two applications, while enterprises can use SOA/SOA CS for more challenging and complex integrations. API PCS allows you to expose APIs that leverage integrations created using ICS, on-premises SOA, or SOA CS, and API Platform Cloud Service can manage and monitor these APIs. Customers can also use Process Cloud Service to easily combine workflow applications with APIs. In conclusion: build services and integrate them using ICS, SOA (on-premises), or SOA CS; expose them via APIs using API PCS; and then build new workflow processes that use those APIs.

Conclusion – The economy of data transactions empowered by APIs is the API economy. Welcome to the API economy, and be successful in it with Oracle API PCS. Stay tuned for more technical articles in this series on Oracle API Platform Cloud Service, where we will cover technical details and demos.
BPM Security

When talking about BPM security, you need to know about a certain set of information and where that information comes from. The following are the key pieces of information you need in order to manage security in BPM:

- You need to know about users, groups, roles, and permissions, and about each user's group memberships.
- You need to know about attributes. For example, if you don't know the email attribute, the BPM process will not be able to send notifications. If, in the future, you create parametric roles based on special user attributes such as skills or language, attributes become even more important and must be defined for each user.
- You need to know the organization structure, to define the organization units in BPM and then associate groups and users with those org units.
- You also need information about the reporting structure, for example to define escalations. BPM finds a manager by navigating to the manager attribute of the user, and that manager has the authority to reassign a task to his/her direct reports. So we need to know the managerial hierarchy in LDAP to perform effective task management.

The above is the information BPM needs to perform effective process and task management. How does BPM get this information? The following are the two services BPM uses:

OPSS (Oracle Platform Security Services) – OPSS provides information about identities and memberships. Things like looking up users in LDAP and authenticating them are done by OPSS. BPM does not directly perform user authentication; that is done by the perimeter security provided by WLS (the container), and BPM then gets the information about the user from OPSS. Similarly, group and application-role membership determinations are made by the OPSS security service, and BPM gets that information through a set of APIs. So information about individuals and their memberships comes through OPSS.

Verification Service – This is the part of the SOA infrastructure that offers access to the tokens used to authenticate users in BPM. All of the APIs in BPM and Human Workflow are stateless, so when those APIs are invoked, the consumer gets a token, and once that token is authorized, you can pass it in subsequent calls. The verification service manages the creation and caching of those tokens. It is also the verification service that determines an individual's access rights to process and task data.

How are LDAP, BPM, OPSS, and the verification service related? Information about users is in LDAP. The BPM runtime needs to know about users, groups, roles, and permissions to make task-assignment decisions and to control access to task and process-instance data. Access to user attributes is needed to support email notification, parametric roles, and organization units. Information about the reporting structure is needed to define task escalation paths and admin rights over tasks. BPM gets this information via OPSS. Management and filtering of this information at runtime is performed by the verification service. The BPM runtime uses OPSS for authentication and for deriving group/role memberships. The BPM runtime uses the verification service to manage and cache context tokens and to determine process- and task-level access rights.

OPSS – The OPSS service is used across all of FMW. OPSS is a standards-based, portable, integrated, enterprise-grade security framework for Java applications. OPSS provides Java APIs for security, integrates them into the WebLogic container, and adds application roles to grant access to resources.
OPSS is the underlying security platform that provides security to Oracle Fusion Middleware, including WLS, SOA, BPM, WebCenter, ADF, etc. It is an abstraction layer, in the form of standards-based programming interfaces (APIs), that insulates applications from the security infrastructure.

SPI – The service provider interface provides services upward to the Fusion Middleware environment. Providers plugged into the container offer services such as authentication, authorization, the Credential Store Framework, and User/Role. These services are abstracted by OPSS and provided to FMW.

BPM and OPSS (Integration with OPSS)

The Java Platform Security (JPS) layer offers a core set of services upward to BPM, such as the identity store, the policy store, and the credential store and framework.

Identity Store – The identity store is queried to get information about users and groups. You can also concatenate multiple sources of users through the identity store. For example, you can have administrative users in the embedded LDAP and business users in an external LDAP or OID, and concatenate users from these multiple sources using the identity store.

Policy Store – Application roles are stored in the policy store. Application roles offer the ability to group users irrespective of how users are organized in LDAP.

Identity Service – The identity store and policy store services are the core services offered to BPM through the identity service. The identity service is a web service deployed on the SOA servers; it is the service used when people log in, and it also determines authentication and authorization for the logged-in user.

Verification Service – The verification service manages the tokens and the specific privileges to task data and process instances:

- It is an internal service used by Human Workflow (HWF) and BPM.
- It manages the creation and validation of context objects. HWF and BPM APIs are stateless and require a context object as a parameter. The context encapsulates an authenticated identity derived from credentials, identity propagation, or "on behalf of" using an admin context. Public APIs for context creation are on ITaskQueryService (see the sketch after this list).
- It caches context objects along with the corresponding BPMUser object. The BPMUser object caches user data, including group and application-role membership.
- It authorizes access to task data and actions, using the context to fetch the BPMUser object from the cache.

The following diagram lists the services offered by the verification service.
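Since the public APIs for context creation are on ITaskQueryService, the sketch below shows roughly how a remote client might authenticate and obtain a workflow context token. The user name, password, and the "jazn.com" identity context value are placeholders, and the client connection configuration is omitted; treat this as a minimal sketch rather than a complete client:

```java
import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
import oracle.bpel.services.workflow.client.WorkflowServiceClientFactory;
import oracle.bpel.services.workflow.query.ITaskQueryService;
import oracle.bpel.services.workflow.verification.IWorkflowContext;

public class ContextSketch {
    public static void main(String[] args) throws Exception {
        // Obtain a remote workflow service client (connection details omitted).
        IWorkflowServiceClient client =
                WorkflowServiceClientFactory.getWorkflowServiceClient(
                        WorkflowServiceClientFactory.REMOTE_CLIENT);
        ITaskQueryService queryService = client.getTaskQueryService();

        // authenticate() returns the context token that the verification
        // service creates and caches; since the BPM and Human Workflow APIs
        // are stateless, this context is passed on every subsequent call.
        IWorkflowContext ctx = queryService.authenticate(
                "jstein", "welcome1".toCharArray(), "jazn.com");

        System.out.println("Context token: " + ctx.getToken());
    }
}
```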
Verification Service in Detail

First, it's the perimeter authorization that makes sure an authenticated user can get into the system. Then application roles and groups take care of human task assignment. Then it's the verification service that offers granularity in terms of who can see task data and actions. The verification service caches and authenticates context tokens; it also authorizes a user's access to task data and actions and to BPM processes.

To configure task and action access control, use the human task editor's Access tab, where you will find tabs for content and actions. To configure access control for the process, go to Organization, where you will find the predefined roles for process owners and process reviewers. This is how you can grant a user ownership of the entire process, or let them just be a reviewer. These are the roles associated with the process. You can then associate users, roles, and groups with these roles in EM or Workspace.

How It All Works

When someone logs in to BPM Workspace, the verification service goes to the provider to get information such as application roles and groups. The information is then cached in the BPMUser object, hashed, and stored in the cache by the identity context. It is this identity context that is passed on each subsequent call made by that user, removing the need to look up the user and his/her attributes (application roles, groups, memberships, etc.) again.

Application Roles

We talked about application roles in the verification service section; let's talk more about them here. An application role is a powerful capability offered through OPSS, which allows grouping of users who are not grouped in any other way. For example, you may have groups in an external LDAP, but you may need to organize users and groups differently. An application role is similar to an LDAP group: it's a container that can hold groups, users, or other application roles. You can create a combination of users in an application role, then assign tasks to that application role, assign privileges and access to it, and so on. So if your LDAP structure does not match your work assignments, you can use application roles to group users in a flexible way. You can associate a user in LDAP with one or more application roles. This way, you can manage groups independently, without LDAP changes.

Application roles are persisted in the policy store (the file store 'system-jazn-data.xml' in a DEV environment, and the database in a PROD environment). Application roles can be made up of users, groups, and other application roles. You can manage application roles via BPM Studio, Workspace, EM, or WLST.

How It Works with BPM

When you model a process, you define swimlanes. While defining swimlanes, you are defining the categorization of users: you are defining, by name, the groupings of users who will interact with the process, for example sales-rep, executive, or supervisor. These names are defined as swimlane roles; at deployment time, an application role is created in OPSS's policy store as <Project_Name>.<Swimlane_Role_Name>, for example myBPMprocessPrj.Supervisor. Swimlane roles are added to the policy store as application roles during deployment.

You can define the assignment of swimlane roles to groups and users in BPM Studio; however, I would recommend defining the mapping in EM, Workspace, or WLST. If a technical audience is responsible for the mappings, it is better to perform them in EM; if a business audience is responsible, then Workspace can be the better choice.

Conclusion – This document covered OPSS, the integration of BPM with OPSS, BPM security for data and access, and a detailed analysis of the verification service. We have learned that when someone logs in to BPM Workspace, the verification service goes to the provider to get information such as application roles and groups; the information is then cached in the BPMUser object, hashed, and stored in the cache by the identity context; and it is this identity context that is passed on each subsequent call made by that user.

The next blog post on BPM security will cover details of configuring LDAP providers, using multiple LDAP providers, and virtualization using libOVD. Subsequent posts will also cover troubleshooting and performance tuning when BPM works in concert with LDAP.
Data from devices will need to be analyzed, and actions will need to be taken based on that data. These actions could trigger alerts or invoke corrective processes before routine issues snowball into disasters (examples include flight delays, parts replacements, fire emergencies, etc.). These actions will impact critical business processes, requiring integration with operational systems, from enterprise resource planning (ERP) and customer relationship management (CRM) to specialist vertical applications.

IoT Architectural Framework

The IoT services layer is completely independent from the underlying devices, communication protocols, and connectivity semantics. This layer includes a core set of services to build IoT applications (i.e., composite applications) across a range of industry sectors. The IoT services layer helps the enterprise to:

- Analyze data in real time (event processing)
- Act on M2M data and events (integration services)
- Provide historical, real-time, and predictive analytics (analytics services)
- Visualize operational and analytical data through mobile/desktop (UI services)
- Manage data security and the identity of devices/apps (security and identity management services)

The IoT developer services layer enables developers to build applications using IoT services, development kits, software tools, and services. This layer helps expose the platform to a range of applications and use cases. The role of middleware is to provide the infrastructure and IoT services that, in turn, help drive innovation, enable new revenue streams, and improve operational efficiencies.

Benefits of IoT and BPM Integration

Embedding intelligence by way of real-time data gathering from gateways and devices, and consuming that data through business processes, helps businesses achieve not just cost savings and efficiency but also new revenue streams. Businesses need to overcome several business and service challenges to realize smooth orchestration and manageability of disparate systems.

Use Case – Outage Initiation over IoT

When we talk about the 'Internet of Things', we are talking about: Sense → Acquire → Communicate → Event Processing → Integrate → Visualize and Analyze. Let's walk through this with a sample use case of outage management.

Use case explained – We are talking here about outages caused by brownouts (drops in voltage in the supply), faults at power stations, faults in supply devices, etc. A power outage, if not resolved within limits, can lead to customer compensation, penalties, and loss of revenue. In this use case, we will talk about two things: first, how IoT can lead to the initiation of a BPM process that handles the outage incident; and second, the BPM process itself, which we will model using Process Cloud Service and leave for implementation by developers.

- Equipment sends status information at a regular rate to the smart gateway. [Sense]
- Smart gateways run Oracle Event Processing for Java Embedded, a small-footprint version of Oracle Event Processing deployed on the gateways. It performs various upstream operations such as basic filtering and aggregation; local decisions eliminate noise and false positives, optimize bandwidth, etc. Oracle Event Processing for Java Embedded can handle millions of events per second with microseconds of processing latency.
- Oracle Event Processing for Java Embedded, installed in the gateway appliance, helps filter and analyze real-time usage and detect fault and problem event patterns across thousands of distributed data centers, enabling dynamic diagnostics and automatic corrective interception. [Acquire]
- Events of significance are sent from the gateway to the backend systems for more detailed analysis. [Communicate]
- Oracle Event Processing Server edition performs complex downstream operations, combining and correlating multiple streams of data and putting them in a larger context (customer information from CRM, etc.). Oracle Event Processing delivers real-time analysis of high-velocity data. It is a complete solution for building IoT applications that filter, correlate, and process events in real time, so that downstream applications are driven by true, real-time intelligence. Oracle Event Processing filters out noise (such as data ticks without any change in values) and helps identify critical conditions before the data is relayed to the back end. Oracle Event Processing can trigger a business event to integrate with the BPM process. [Event Processing]
- Processes essentially "listen" for event patterns and issues as they arise. Powered with this insight, systems can trigger alerts or invoke corrective processes immediately, before routine issues snowball into disasters. [Integrate]
- Using the BPEL and BPMN industry standards, users can model processes that capture optimal paths, alternative paths, exception flows, process conversations, and the handling of business events.
- Processes can be invoked directly on receipt of events from devices, or after the events have been pre-processed by event-processing engines such as Oracle Event Processing.
- Oracle BPM thus provides the ability to integrate processes that involve devices, applications, and human intervention.
- Processes can be integrated with the business intelligence infrastructure to further optimize core processes and operations. SOA processes can invoke Oracle Business Intelligence to gather contextual information. In IoT use cases, this is essential when sensor data or events lack enough context to determine how the data should be processed. Combining real-time sensor information with historical and prescriptive analytics can make the processes intelligent and the responses much more human-centric.
- SOA and BPM processes acting on sensor-related data and events are tightly integrated with Oracle Business Activity Monitoring, which provides dashboards that allow administrators to make decisions based on real-time streaming information coming either directly from sensing devices/gateways or from events arriving from the Oracle Event Processing engine. [Visualize and Analyze]

Key Takeaways – The following are the key points:

- Oracle SOA Suite on Oracle Exalogic gives IoT applications the ability to scale, delivering 15X more throughput, 2X faster response time, and 2X improvement in SOA file processing for large payloads.
- Integration pattern – Devices should be decoupled from the applications. Loose coupling is achieved by abstracting and resolving the differences between two or more systems in order to facilitate seamless integration.
- Loose coupling and virtualization – Oracle Service Bus integrates new devices and device services and enables true plug-and-play, supporting a wide variety of applications and services.
Oracle Service Bus is designed to connect, mediate, and manage interactions between device communication/event processing modules and business service instances across an expanding service network. Oracle Service Bus enables loose coupling between the device layer and the enterprise services (through virtualization), while providing guaranteed reliability and scalability of enterprise services.

How is the right application instance/process initiated? Oracle Service Bus virtualizes the REST services exposed by Oracle Event Processing. At the same time, Oracle Service Bus can gather all the operational statistics of the exposed IoT services, monitor service-level agreements (SLAs), and then initiate actions, such as invoking the right application instance, based on the gathered statistics.

Use Case – Sample Process

In the sample process "Outage Incident Management", there are various options to initiate the process. A customer can report an outage with a one-click option in the 'Reporting Apps', or the customer can call a back-office representative and report the issue. However, it's not just human intervention that can lead to process invocation; devices, too, can report the problem. 'SensorDetection' is a REST service with a POST method. It accepts a device id and an area/zip as input. This REST service is invoked by Oracle Event Processing (OEP), and it results in the invocation of the 'Outage Incident Management' process, which takes care of the incident resolution.

How it works – Oracle Service Bus virtualizes the REST services exposed by Oracle Event Processing. Oracle Service Bus can gather all the operational statistics of the exposed IoT services, monitor SLAs, and then initiate actions such as invoking the right application instance based on the gathered statistics. Processes can be invoked directly on receipt of events from devices, or after the events have been pre-processed by event-processing engines such as Oracle Event Processing. In this case, the Oracle Event Processing REST service is exposed and virtualized by OSB and is connected with the devices. Oracle Event Processing in turn invokes the right process/service; here it invokes the outage management process by invoking the 'SensorDetection' service. The following process-flow screen capture, with the relevant part encircled, shows the message event in the process that exposes it as the 'SensorDetection' service.
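To make the invocation concrete, here is a minimal sketch of the kind of POST that OEP (via the OSB-virtualized endpoint) could issue against the 'SensorDetection' service. The host, path, and JSON field names are assumptions for illustration; the source only tells us that the service takes a device id and an area/zip:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SensorDetectionClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical OSB endpoint that virtualizes the OEP-exposed service.
        URL url = new URL("http://osb-host:8011/outage/SensorDetection");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // Device id and area/zip, as described above; the exact field
        // names are illustrative assumptions.
        String payload = "{\"deviceId\":\"XFMR-1042\",\"zip\":\"94065\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        // The response code confirms whether the outage process was kicked off.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```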
Conclusion

Self-driving cars will soon be a reality. Connected devices powered by unique identifiers enabled by IPv6 (watches, home security monitoring, smart phones, and home devices such as lights, alarm systems, thermostats, and other appliances) are producing vast amounts of data and information that need treatment beyond mere transmission and processing; they need to be managed, tracked, and utilized in a seamless manner. When a sensor receives data from a device, it transmits or generates events. These events need to be captured to extract their essence, to manage them better, and to connect people with the automated systems and processes. It's not just about storing the information extracted from the data, or just about analyzing the data; it's also about managing this information and bringing in automated systems and processes. It's about connecting smart objects, including human intuition. Smart devices will keep sending data, and intelligent devices will keep generating information from it, to initiate responses to those events. Those responses can take the form of process initiation, which in turn invokes various systems and processes.

Bringing BPM into IoT results in a properly managed IoT and effective communication between devices. Merely capturing events from devices will not lead to an end-to-end solution; it's the inclusion of BPM that results in an end-to-end solution. The merger of BPM and IoT brings analytics, social, and mobile capabilities into current processes, and process and business analytics will add more size and depth to the data.
This paper tries to offer my viewpoints on adaptive case management (ACM), while discussing the different attributes an adaptive enterprise should have. The document also highlights the difference between BPM and ACM and covers what ACM has to offer an enterprise. It also covers an example scenario in which the customer is the subject of the case. At the end, you will walk through the building blocks of ACM.

Case – A case is the focal point that collects all the information required for the work.
Current Case Management – Manual and semi-structured process management.
Case Management – Case management is a way of organizing and framing work around the case.
Process vs. Case – A process is a path to accomplish tasks/activities; a case is the work that needs to be performed from opening to closure.
ACM – ACM is a novel mechanism for managing work. For me, ACM is about defining a milestone-oriented, state-based, rule-governed, content-oriented, event-driven case. ACM is about defining the case and the work: working on ad-hoc, dynamic, unstructured, and unpredictable processes/cases; design at execution; milestones; content management; process and socio-collaboration; incorporation of business intelligence; valuing human intuition; empowerment; known and unknown events; optimizing in real time.
Who works on a case – Practically everyone: case and knowledge workers, participants, etc.
Event – An occurrence impacting the case, which may lead to the addition, deletion, or modification of work and tasks.

Introduction

In this section we will walk through the definitions and try to get the essence of what ACM is all about.

Case

A case is a unit of work. It's a package in itself. A case is the focal point that collects all the information required for the work. There are goals and milestones in the case lifecycle, which are achieved as work is performed on the case. A case is a superset of work, processes, transactions, and services that traverses from opening to closure over a time frame to reach a collaborative solution to an investigation, incident, service request, or long-running process. Essentially, it's a coordination of work. Examples of cases are insurance claims, contract management, managed health care, etc.

Case Management

Case management is a way of organizing and framing work around the case. We used the phrase "framing work" in the above definition because it's evident that work cannot be defined for a case in one shot or in one go. It's an ongoing process: work keeps being derived, and hence the case keeps evolving. It's a collaborative, coordinated, milestone-oriented process to handle a case from opening to closure by interacting with the ecosystem and knowledge workers. Case management coordinates knowledge workers, contents, resources, systems, and correspondence to progress a case through its milestones. Progression of the case is determined and governed by human interactions and by the occurrence of internal and external events, where the process is non-routine, unpredictable, and ad hoc.

A case management solution offers case and knowledge workers greater control and insight to resolve problems more effectively. Case management ensures that the right information is available for decision making at the right time and in real time. One can say that effective process management is essential for case management. Case management is non-deterministic, because the case flow is dynamically determined at run time.
ACM focuses on managing all the work required to handle a case, regardless of whether it's content-intensive work, structured or unstructured work, predictable or unpredictable work, deterministic or non-deterministic work, automated or manual work, etc.

DCM

Many vendors have various definitions. For some, it's a progression from rigid BPM → human-centric, content-oriented BPM → social and iBPM → case management. Dynamic case management is about semi-structured, human-centric, information-intensive, collaborative processes that are driven by events. Dynamic case management enables dynamic changes at runtime. Adaptive case management is about the just-in-time creation of work around the case and its processes, with the intelligence to learn from previous cases, sub-cases, and work. This means people working on a case should be able to use, just in time, the sub-case or work that the process/case has learned. To me, adaptive case management and dynamic case management are one and the same thing, just defined differently by different sets of people.

ACM

ACM is a novel mechanism for managing work. For me, ACM is about defining a milestone-oriented, state-based, rule-governed, content-oriented, event-driven case.

For health care, it is a collaborative approach to plan, analyze, and define, and then advocate and facilitate, an individual's health care needs. The legal industry requires knowledge workers (lawyers, clients, judges, etc.) and their expertise as they drive through advocacy, consultation, etc., and each individual case has a different life cycle. Also, information and work related to the case need to be assembled as the case progresses. For example, in the legal sector, as a court case progresses, new work is derived that requires collaboration with different knowledge workers; results need to be assembled, which could further lead to new work being identified, and so on.

ERP is a superset of processes. ECM is about contents. CRM is about the customer. BPM is about processes and process management. There is no process without content, and no CRM without communication, collaboration, and processes. Collaboration is not possible without social BPM, and real-time analytics and transparency are subsumed by intelligent BPM. ACM is the integrated consolidation of ERP, ECM, CRM, social BPM, and iBPM to create a holistic view of the case, and it's the customer who is the focus of the case.

ACM targets unstructured processes where the exact steps and behaviors are not always known ahead of time. Case management is a way to govern and control these unstructured processes. You need rule definitions in the form of templates that can be changed at runtime. You need tools to define and modify a process on the fly. You need to add work to the case while the case is executing, and so on. Essentially, you need a case management solution.

Work on a case can be performed in discrete places: an ERP process, CRM, a content store, emails, manual work, etc. However, it's ACM that manages the discrete pieces of work to be performed on a case. ACM creates an adaptive ecosystem for work, in which a change or addition is acknowledged and adopted by the ecosystem and adapted to by the work.

All About Adaptive Case Management

Process vs. Case

ACM offers a clear distinction between process and case. Within a case, many processes might run in sequence and/or in parallel to accomplish work and achieve milestones. BPM understands and executes these processes as distinct, separate processes, orchestrated by one process, and so on.
With a case, however, processes are tightly associated with the case and its sub-cases, and hence cases offer a holistic view.

BPM vs. ACM

BPM is a management practice, and a BPM suite is a technology product that supports the BPM practice. A BPM suite also supports case management, which is a technique for working on ad-hoc, unpredictable processes. BPM and ACM are not technologies; they are a practice and an approach to achieve work and milestones, respectively. The following contrasts the two:

BPM: The core of BPM is processes. It's all about modeling, measuring, implementing, optimizing, and monitoring a process. A BPM process gets better over time through optimization.
ACM: Case management is about the cases and the work defined and executed to progress a case from one milestone to another, from the opening of the case to its closure. ACM is about providing the infrastructure for knowledge workers to work on processes that are ad-hoc, dynamic, unstructured, and unpredictable.

BPM: BPM is about gathering knowledge and incorporating it into the process through analysis, modeling, and simulation.
ACM: ACM is about collecting knowledge inputs while executing, directly from business users and knowledge workers, and defining work and case progression from those inputs.

BPM: BPM helps humans (modelers, users, analysts, etc.) collaborate, using social collaboration tools to socialize the process model.
ACM: ACM is about creating the process on the fly, during execution.

BPM: BPM drives a process by models and measures.
ACM: ACM is driven by contents, milestones, and collaborative knowledge sharing between knowledge workers, process and case owners, and of course the customers and end users. ACM is about incorporating business intelligence into the process by encapsulating real-time process knowledge.

BPM: BPM is about defining work as a process.
ACM: ACM is about defining and managing work around a case.

BPM: BPM allows knowledge sharing, but it's up to the participant to share knowledge; moreover, knowledge can be shared, but not experience.
ACM: ACM gathers knowledge and experience from people's actions; in ACM, it's not about knowledge sharing but about knowledge collecting, because human intuition is valued most.

BPM: BPM is about using communication and socio-collaboration for the process.
ACM: ACM is about process and socio-collaboration as one.

BPM: BPM is about enablement. In BPM, process participants' actions are limited by how the process is designed.
ACM: ACM is about empowerment. In ACM, "adaptive" means process/case participants can act on the process/case and work on it as required for each individual customer, rather than being limited by how the process is designed.

BPM: A BPM offering allows the customer to drive the process.
ACM: Only ACM allows users to drive the interaction.

BPM: BPM empowers participants to act on tasks.
ACM: Only ACM empowers knowledge and case workers to include resources to reach milestones.

BPM: BPM can act on events, but the events must be known while modeling the process.
ACM: ACM offers the capability to act on unknown events.

BPM: In BPM and social BPM, you design, model, measure, monitor, and optimize by re-designing, re-modeling, and so on.
ACM: In ACM, you design while you are executing; you design on the fly, and knowledge from execution is used to design while executing.

BPM: Optimization in BPM is based on data available from the past.
ACM: ACM allows you to optimize your process on real-time data, which is quite different from optimizing a BPM process on historical data.

BPM: In BPM, you have swim lanes with which groups, participants, and roles are associated, and those have tasks and activities attached;
things are very much sequenced, modeled, and predetermined.
ACM: With ACM, it's about workers who are associated with the case. They can be dynamically associated or disassociated with the case, they collaborate in real time, and they can perform the same or different tasks at the same or different times.

BPM: BPM's context is the past (as-is) and the present (to-be).
ACM: ACM is about the past (as-is), the present (to-be), and the future.

BPM: BPM is a little clerical.
ACM: ACM is all about knowledge workers.

BPM: BPM processes are structured, predictable sequences of activities and tasks.
ACM: ACM is a collection of processes and tasks.

BPM: Routine-work based.
ACM: Knowledge-work based.

BPM: BPM is centered on the process.
ACM: ACM is centered on the work and data around the case.

BPM: BPM follows a process-oriented approach (POA).
ACM: ACM follows a milestone-oriented approach (MOA).

What Does ACM Offer?

Transparency

Strategies from management, targets from executives, and milestones from process owners should be aligned and must be transparent to those who act and execute and to those who use them (end users and customers). This transparency can be achieved by knowing what's moving, in real time. Based on real-time analysis, decisions should be taken and actions performed by those who are empowered to do so. Above all, the real-time inclusion of customers, process owners, knowledge workers, and the ecosystem brings focus. Hence ACM is about real-time, focused empowerment that brings transparency; management acquires full transparency of processes and execution.

Empowerment

ACM is about empowerment. Empowerment comes with focus and transparency, and transparency comes with a socio-collaborative infrastructure. Transparency enables monitoring, which in turn increases focus, and focus is increased by laying out milestones and achieving them. ACM is all about realizing milestones. In ACM, "adaptive" means process/case participants can act on the process/case and work on it as required for each individual customer, rather than being limited by how the process is designed. Case knowledge workers should be empowered to add, update, cancel, reuse, and delete the work and sub-cases assigned to the case, in a focused, ACM-enabling infrastructure. Even if BPM empowers participants to act on tasks, it's only ACM that empowers knowledge workers and case workers to include resources to reach milestones.

Optimized and Efficient Customer Experience

ACM leads to better customer satisfaction. Solutions to customer problems cannot be addressed effectively in an automated, well-defined process. Even if the user drives the process, that alone does not lead to the best customer satisfaction. Moreover, solutions to customer problems cannot be predefined, because every customer is an individual with an individual set of issues and challenges. Knowledge work is required, and it cannot be framed into rules or modeled into a process. Hence, to bring in knowledge work, an empowered knowledge worker is required, along with a system that lets the user drive the interaction. So even if a BPM offering allows the customer to drive the process, it's only ACM that allows users to drive the interaction, and only ACM that empowers knowledge and case workers to include resources to reach milestones.

Handling Unpredictability

ACM processes need knowledge workers, coordination of knowledge workers, contents and content integration, coordination with the case/process, correspondence, and participants to act.
At various stages, or on achieving milestones, policies and rules are required, and they need to be combined with human decisions and actions. Adherence to regulations and policies is also a must. Because of the various factors that can arise in a case/process lifecycle, and that cannot be modeled in one go, the case/process path cannot be predicted.

Adaptive Enterprise

A BPM process path can be predetermined and modeled; a case path cannot. It's non-deterministic, because the case flow is dynamically determined at run time. The path cannot be fixed up front; it needs to be determined on the fly, while the case is executing. Hence the term adaptive case management: the path of execution has to adapt to ever-changing rules, policies, regulations, knowledge workers' skills, events, contents, etc. Drill, adapt, transform, optimize, and improve are the key characteristics of adaptive enterprises, and those characteristics are realized by ACM.

Real-Time Monitoring

Real-time monitoring, predictive analysis, KPIs, etc. offer the business the tools and techniques to gauge the performance of people, resources, and processes. For instance, suppose a task participant is slow in clearing the tasks assigned to him/her. Knowing this in real time helps the business delegate the tasks to those who can meet the SLAs. This also improves response time and drastically reduces the time needed to identify a challenge or blockage.

Greater Insight

Stakeholders have complete visibility into, and control of, their objectives, which are often expressed as key performance indicators. Greater insight means challenges can be identified the moment they arise, making the enterprise more proactive in responding to them.

Collaborative Decision Making and Participation

Business users, management, and customers can all participate.

Dynamism

ACM allows capturing events (internal and external) as and when they happen, and allows acting on them as they occur. The more responsive the case management system is to events, the more dynamic the enterprise becomes.

Holistic Approach

ACM offers holistic work management. This improves the enterprise's work outcomes, which further translates into increased revenue, effective and better services, and efficient risk mitigation.

Intellectual Property

ACM is not just about dynamic and adaptive content management. It's about creating an intellectual network of contents around the case and its work.

Example Scenario

The following example shows cases associated with the customer "Rivi". There are subcases and processes available to be used for the customer case. Usage, inclusion, addition, deletion, and modification of subcases and processes happen purely on a case-by-case basis. For instance, if a notification was sent to the customer "Rivi" about suspected identity theft on her credit card account, this might lead to the inclusion of incident subcases, such as a credit card dispute, and might also lead to the addition of investigative subcases, such as identity theft and fraud cases. As you can see from the diagram, the long-running wealth-management subcase keeps running for the customer, while a possible identity theft and fraud detection might result in the inclusion of a 'Stop Credit Card' process, which suspends or freezes the credit card associated with the customer "Rivi".
Once fraud and compliance completes the investigation and the customer requests a new card, she is issued one; to do so, the create-payment-source ('Create Credit Card') process is associated with the case to create a new credit card for her. An update-customer-account process is also associated with the case to update the customer profile with the new credit card.

Customer Case

As you can see from the example above, the inclusion of subcases, cases, and processes is dynamic, and the case keeps adapting based on how it's progressing. With each case, a different set of knowledge workers and participants gets associated with the process. With each subprocess, a different set of services, infrastructure, etc. gets associated with the process. Workers are also empowered to add subcases and processes. All of the discussion around the example was centered on the subject, the customer "Rivi".

Now consider that a new subcase is needed to verify the fraud case with a third party. The situation (rules) and regulations that led to the inclusion of third-party verification are like ad-hoc work, and should be added as a template for future use by this case or any other case. This makes the case adaptive, and also intelligent enough to tap the knowledge and intelligence from what it (the case) has experienced. By tapping knowledge, the case brings in experience in real time.

ACM allows work, tasks, and activities to be performed across a wide spectrum. Work is not confined within the bandwidth of a defined and structured process; it goes well beyond that. Think from the perspective of a defined and structured process: all the adaptive and dynamic tasks we performed would be difficult to carry out, and most such tasks and activities would end up being performed manually, resulting in a loss of insight and transparency.

ACM Building Blocks

The following are the building blocks of ACM:

- Stakeholders
- Case/knowledge workers and participants
- Processes
- Tasks and activities
- Data and information
- Contents
- Collaboration
- Events
- Rules and policies
- Milestones
- Integrations
- Dashboard or portal

How to Find Possible ACM Candidates

The ACM building blocks and attributes hint at some of the areas where ACM can be used as a holistic solution. The following characteristics point toward a possible ACM candidate:

1. Heavy knowledge work – Enterprises, industries, and organizations that rely heavily on knowledge workers are possible candidates for ACM. Examples: health care, medical, legal, engineering, professional services, banking, insurance, etc.
2. Service requests – An enterprise that is flooded with service requests fits the ACM spectrum. Examples: financial services, human resourcing, travel, etc.
3. Incidents – Insurance, legal, healthcare, etc. Segments where human intuition, collaboration, and collaborative decision making are a must are ideal candidates for ACM.
4. Investigative – Cases that rely on investigation, meaning knowledge workers, dynamic inclusion of work, etc.
5. Long running – Financial services, insurance, etc. For example, wealth management is a long-running process from a customer-case perspective. Such cases are good examples of work served through ACM.
6. Mostly manual – Areas where most tasks are performed manually are candidates for an ACM solution.
7. Outside the process – Areas where tasks are performed outside the scope of a defined process. Such scenarios are candidates for an ACM solution.
8. Complex scenarios – Processes with many complex scenarios and the requirement to accommodate all of them; the complexity can be reduced by ACM.
9. Exception management – Cases where most of the time and resources are spent identifying various exception scenarios and building and integrating solutions to address them. Such instances are ideal candidates for ACM.
10. Unknown – Scenarios where the scalability and dynamism of a process are not known and cannot be modeled in one go.
11. Unpredictable – Scenarios that are unpredictable and non-deterministic.
12. Novel and new – Projects, cases, and processes that are new and novel, where the canvas is empty.
13. Not model-first – Processes and cases that do not fall into the model-first category.
14. Agility – Enterprises that need a real-time view of current case status and want to be responsive as and when events happen.

Case Types

- Investigative – Manage compliance and address risks without sacrificing information-integration goals. For example, KYC (Know Your Customer), AML (Anti-Money Laundering), and similar compliance functions demand knowledge workers. Today's enterprise requires diverse investigation for effective and efficient offence management, evidence management, and investigation management, which brings in end-user efficiency, accurate decision making, and future-proofing of compliance requirements. ACM allows analyzing information in the context of an ongoing case, improves investigational case SLAs, reduces regulatory risk, and offers a consolidated repository of all investigational cases. Cases that fall into the investigative category include fraud and compliance, insurance claims, etc.
- Incident – Incident management allows opening, tracking, monitoring, escalating, working, resolving, and performing various other activities on incidents. Mostly, incidents are paper-intensive, manual, stand-alone, or non-integrated processes and services. Moreover, the services and processes built to record, route, manage, and track incidents tend to be inconsistent, time-consuming, and error-prone. An ACM solution allows effectively tracking, tracing, managing, and resolving case-based incidents while staying within policies, rules, and regulations.
- Service request – Challenging business scenarios have only worsened the challenges of service requests. Customers, partners, and employees are now distributed geographically. Processing a single service request means consulting various applications (ERP, CRM, third-party, etc.). Processes run across functional, system, and enterprise boundaries. Enterprises don't have an end-to-end definition of the case, and on top of that, there is no unified view of it. Stakeholders can only react once a problem has occurred, because they have no insight or visibility. Consistency of service is also vital, and you will lack it without an ACM solution. Customers are smarter, and they need faster, more accurate, and more consistent solutions. Along with that, the enterprise has to keep pace with a changing economy, regulations, policies, and rules; an enterprise that fails to do so might miss opportunities. The enterprise has to be agile, with continuously improved offerings. ACM is built on best practices; it combines the power of BPM, ECM, SOA, processes, etc., while empowering knowledge workers and bringing in socio-collaboration to improve the overall outcome of the case.
- Long running – There are processes and cases that can last a long time, for instance social welfare and benefits, managed health care, and wealth management. These processes do have case states, but they are defined as a separate category because they take longer than normal to reach closure. Because such cases are long running, a lot can happen over that time, and various participants and knowledge workers will have acted and contributed in the due course of time. Hence, a holistic view of the case is required, which ACM offers.

Industries and Verticals Where Case Management Can Be Used

The following lists some cases specific to industries, with the ACM type categorizing each case. For some examples, related BPM processes are also listed for reference.

Financial Services – Insurance:
- Insurance Claim (Investigative); related BPM process: Quote Request
- Dispute Handling (Incident)
- Customer On-Boarding (Service Request)

Financial Services – Banking:
- Loan Origination (Service Request); related BPM processes: Starting a Utility Service, Stopping a Credit Card
- Fraud and Compliance (Investigative)
- Identity Theft (Investigative)
- Dispute, credit and debit card (Incident)
- Complaint Handling (Incident)
- Account Opening (Service Request)
- Wealth Management (Long Running)

Healthcare:
- Managed Health Care (Long Running); related BPM process: Appointments
- Medical Trials (Investigative)
- Patient Record Management (Incident)

Human Resources:
- Employee Off-Boarding (Service Request)
- Employee On-Boarding (Service Request)
- Employee Performance Review (Service Request)

Telecom:
- Billing Issue Resolution (Incident)
- Customer Provisioning (Service Request)

Government:
- Social Welfare and Benefits (Long Running)
- Immigration Application (Incident)
- Licensing and Permits Management (Incident)

Legal:
- Contract Management (Long Running)
- Auditing and Compliance (Incident)

Public Service:
- Emergency Response (Investigative)

Retail:
- Customer Complaint (Incident)

ACM Infrastructure

The SOA infrastructure comprises service engines: BPEL, human workflow, rules, mediator, spring, BPMN, and case management. These engines execute the business logic of their respective components within the deployed SOA composite application. For example, the case management engine provides the environment to execute cases, just as the BPMN engine provides the environment to execute BPMN processes. The case management engine publishes events to the Oracle Event Delivery Network (EDN).

How Can You Realize ACM?

An ACM solution should handle dynamic, ad-hoc, structured, unstructured, predictable, and unpredictable processes in one ACM-driven infrastructure:

- Focus on milestones: defining milestones and realizing them narrows the focus of management, process owners, and everyone else toward achieving the milestone. Define the hierarchy of the milestones.
- Enlist process owners, knowledge workers, participants, and stakeholders.
- Incorporate real-time data analysis into the case.
- Define case states and the rules that drive those case states.
- Define the events that affect the process flow.

Conclusion

The landscape of enterprise processes has changed. The more a process's complexity increases, the more a process becomes unpredictable, the more a process is driven by unknown events, the more a process needs to act on unforeseen consequences, the more a process needs ad-hoc inclusion of knowledge workers, and the more a process has unknown contents, the more it needs an ACM solution. Through the course of this whitepaper, we have tried to define the characteristics, attributes, and components of adaptive case management. We have emphasized differentiating BPM from ACM.
We have tried to highlight human intuition in processes and the participation and inclusion of knowledge workers in cases. We have listed the factors that identify a case management candidate and have tried to categorize them. Finally, we have tried our best to list some cases specific to industries.
Escalation is a common requirement when implementing Oracle BPM processes with human interactions. Processes don't do work; it's people who do. This applies especially to processes with heavy human interaction. There are cases when a human task assignment, if not acted upon within the expected time, needs to be escalated to the assignee's manager, or further up the hierarchy. To implement escalation for a human task, you can configure it from the duration deadline section of the human task definition. By default, escalation is based on the management-chain hierarchy, and the task gets escalated up the hierarchy from the user to his/her manager, and so on. You can control the level to which the task can be escalated, and you can also define a title at which escalation stops. Level and title can be configured in the human task definition's duration deadline section.

Oracle BPM offers several ways to escalate: role-based escalation, level- and title-based escalation, and custom escalation. This blog post covers level- and title-based escalation, as well as custom escalation by implementing the "IDynamicTaskEscalationPattern" interface. For scenarios where a hierarchy is not defined, or where the requirement is to explicitly use a list of users to whom the task should be escalated rather than the default management-chain feature, custom escalation is an option.

Level- and Title-Based Escalation

I am using an approval group as the user-list-building option, as this was the initial requirement that fueled this blog post; however, other list-building mechanisms can be used. Also, to simplify list building, I am using a static approval group; a dynamic group can be configured and used too, as a custom mechanism to build the user/group list. The following is the participant type definition. Instead of a Dynamic Approval Group I am using a Static Approval Group, but the story remains the same either way.

1. Log in to BPM Workspace (http://server:port/bpm/workspace) as an admin user.
2. Go to Administration and click Task Administration → Approval Groups.
3. Create a static group named "DynamicAG" with two users, jstein and jlondon, as shown below.
4. To use the approval group in your human task, open the human task definition, go to the Assignment section, and open the participant type definition as shown below. The following configuration assigns tasks in serial fashion, hence the participant type is serial. The list of participants is built using the approval group: use the static group created above, or use a dynamic group to build the list dynamically based on a Java call. Note – follow this link to create and use dynamic approval groups: http://acharyavivek.wordpress.com/2012/02/27/dynamic-approval-group-bpm-workspace/
5. With a requirement to escalate up to three (3) levels and stop the escalation when the title reached is CEO, configure as follows: click on the human task definition and open the deadline definition as shown below. For test purposes, set the time to three minutes. With this configuration, the task gets escalated if the assignee does not act on it within three minutes. Escalation follows the hierarchy of the users defined in the WebLogic embedded LDAP, and the task escalates up to level three, stopping earlier if the title of CEO is reached.

Deploy the process, then test the escalation process from the EM console. The following is a screenshot of the task assignment. Log in to BPM Workspace as an admin user and check the task assignment as shown below. You can see that the task got assigned to the "jstein" user, as per the user list returned from the static approval group.
With a requirement to escalate up to three (3) levels and stop the escalation when the title CEO is reached, configure the deadline as follows:

5. Click the human task definition and open the deadline definition as shown below. For test purposes, set the time to three minutes. With this configuration, the task gets escalated if the assignee does not act on it within three minutes. Escalation follows the hierarchy of the user defined in the WebLogic embedded LDAP; the task escalates up to level three, or stops earlier if the title CEO is reached.
6. Deploy the process.
7. Test the escalation process from the EM console. The following is a screenshot of the task assignment.
8. Log in to BPM workspace as an admin user and check the task assignment as shown below. You can see that the task got assigned to the user "jstein", as per the user list returned by the static approval group.

Wait 3 minutes for the task allocation time to expire, then check the behavior: the task got assigned to the user "wfaulk" after it expired from jstein's task queue. Why did that happen? The answer lies in myrealm (the WebLogic embedded LDAP): "wfaulk" is defined as the manager of "jstein". Now there are two possibilities:

- If "wfaulk" does not approve the task within another 3 minutes, it gets assigned to his manager, "cdicken", as shown below.
- If "cdicken" does not act on the task, the task fully expires. This happens because we defined the highest approver title as CEO, and cdicken is the CEO.

Escalation is normally based on the rule that if user "x" does not approve, the task goes to x's manager, and so on. You can browse the identity lookup to confirm the behavior we experienced above:

1. Click "jstein".
2. Click Hierarchy; this shows the hierarchy defined in the embedded LDAP.
3. Click the user "cdicken" and click the DETAIL button, which shows the user title = CEO. This is why the task stopped at the CEO level in spite of the escalation level being set to 3.

You can also define the title for a user in myrealm. Escalation works on the principle that a user hierarchy is defined; if it is not, you can always use a custom escalation Java class in the deadline section of the task definition.

Custom Escalation

1. Create a generic project, say "CustomEscalator".
2. Add the following libraries to the project: BPM Workflow, BPEL Runtime, and BPM Services.
3. Create a Java class with the desired name; in the project which you have downloaded, the Java class is named "CustomEscalator". The class must be in the package oracle.bpel.services.workflow.assignment.dynamic and must implement the interface oracle.bpel.services.workflow.assignment.dynamic.IDynamicTaskEscalationPattern. Modify the following methods as per your requirement. For this use case:
   - getTaskEscalationUser(Task task): returns a string containing the username to whom you want to escalate. We have used the username "salesrep"; remember to create the user "salesrep" in myrealm (the WebLogic embedded LDAP).
   - getName(): returns a string which is the name of the escalation function. In this case, the escalation function name is "CustomEscalator".
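The following is a minimal sketch of the class, built only from the method names and return values described above; the IDynamicTaskEscalationPattern contract in your SOA version may define additional methods, so treat this as a starting point rather than a complete implementation:

    package oracle.bpel.services.workflow.assignment.dynamic;

    import oracle.bpel.services.workflow.task.model.Task;

    // Custom escalation class: the human workflow service invokes
    // getTaskEscalationUser() when the task deadline expires.
    public class CustomEscalator implements IDynamicTaskEscalationPattern {

        // Returns the user to whom the task is escalated. The user
        // "salesrep" must exist in myrealm (WebLogic embedded LDAP).
        public String getTaskEscalationUser(Task task) {
            return "salesrep";
        }

        // Returns the name under which this escalation function is
        // registered with the human workflow service in EM.
        public String getName() {
            return "CustomEscalator";
        }
    }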
4. Right-click the "CustomEscalator" project created above and open its properties. Click Deployment and create a deployment profile to generate the JAR file for the class created in the previous step; in this case the deployment profile is called customEscalate. Generate the JAR and place it on a path available in the server classpath.
5. Place the generated JAR file in SOA_ORACLE_HOME/soa/modules/oracle.soa.ext_11.1.1.
6. On the Unix box, go to SOA_ORACLE_HOME/soa/modules/oracle.soa.ext_11.1.1 and run: ant -buildfile build.xml
7. Now create a custom task escalation function and register it with the human workflow service. Log in to the EM console (http://server:port/em) as an admin user. Right-click soa-infra and choose SOA Administration → Workflow Properties as shown below.
8. Open the Task tab and click Add Function to create a custom task escalation function. This registers the custom function with the human workflow service.
9. Enter the function name "CustomEscalator" and enter the classpath "oracle.bpel.services.workflow.assignment.dynamic.CustomEscalator", where "CustomEscalator" is the class name. Click OK and Apply to persist the changes.
10. Restart the server for the changes to take effect.
11. Open the .task file of the human task and go to the deadline configuration as shown below. To let the escalation happen in three minutes, enter three minutes as the fixed duration type for escalation. Enter the custom escalation function name as shown below.
12. Save and deploy the BPM project.
13. Execute the "EscalationPrj" project by logging in to EM.
14. Log in to BPM workspace as an admin user; in the Administration tab you can check the task assignment history shown below. You can check the created and expiration times; the gap is three minutes, as set in the task configuration.
15. Refresh after three minutes and you can verify the escalation as shown below: the task gets assigned to the user "salesrep" after three minutes.

If "salesrep", that is, the user to whom the task is escalated, does not act on the task, the task gets expired.

For any assistance or the project code, you can reach me at vivek.acharya@oracle.com. Thanks.