
How Finance Will Make AI More Responsible

By Mike Baccala, Assurance Innovation Leader, and Ed Ponagai, Principal, Finance Consulting, PwC 

At PwC, we’re strong believers that artificial intelligence will bring big benefits to finance departments and to the professionals who work in them. But we have no illusions that AI is a panacea. Most finance leaders, and even many IT professionals, don’t understand how AI works, and so they find it difficult to trust its recommendations. Business leaders need both understanding and trust before they can have responsible AI.

Artificial intelligence can learn a lot from data, but some of what it learns might best be left forgotten. One widely reported example is the COMPAS risk assessment system, which algorithmically advised prosecutors and judges on the recidivism risk of individual offenders, but was found to have learned racial biases from the historical data fed to it.
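To see how this can happen mechanically, here is a minimal sketch, on purely synthetic data (not the actual COMPAS inputs), of a model inheriting bias from the historical decisions it was trained on:

```python
# A minimal sketch (not the COMPAS system itself) of how a model can
# inherit bias from historical decisions. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: a legitimate risk signal and a group label.
risk_signal = rng.normal(size=n)
group = rng.integers(0, 2, size=n)          # 0 or 1, a protected attribute

# Historical labels were produced by biased human decisions: the same
# underlying risk was scored more harshly for group 1.
historical_label = (risk_signal + 0.8 * group
                    + rng.normal(scale=0.5, size=n)) > 0.5

# Training on those labels bakes the bias into the model.
X = np.column_stack([risk_signal, group])
model = LogisticRegression().fit(X, historical_label)

# The coefficient on `group` comes out large and positive: group
# membership alone now raises predicted "risk" at identical risk_signal.
print(dict(zip(["risk_signal", "group"], model.coef_[0].round(2))))
```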

In business, an AI application might make recommendations that improve working capital or refine corporate strategy, but the treasurer or FP&A leader will want to understand how the machine reached its conclusion before taking any action. In a 2017 PwC survey, 76% of CEOs told us that the potential for bias and a lack of transparency are holding back AI adoption.

An AI algorithm makes probabilistic determinations in non-obvious ways. It falls to humans to understand why. Leaders, employees, consumers and regulators are all wary of relying on an AI that acts inexplicably. Thus, pressure is growing to open up “black boxes” and make AI explainable, transparent and provable.
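As one concrete illustration of opening the black box, the sketch below applies permutation importance, a common model-agnostic explainability technique: shuffle each input feature in turn and measure how much the model's accuracy drops. The dataset and model here are stand-ins chosen for reproducibility, not anyone's production system.

```python
# A minimal sketch of permutation importance: shuffle one feature at a
# time and record the drop in held-out accuracy. A large drop means the
# model leans heavily on that feature. Data and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual predictions are hard
# to trace by inspection.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")
```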

Why Responsible AI Is a Finance Pain Point

Technology vendors recognize the potential problems and are participating in collaborations such as the World Economic Forum’s Centre for the Fourth Industrial Revolution, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, AI Now, The Partnership on AI, Future of Life, AI for Good, and DeepMind, among others. All are looking at how to maximize AI’s benefits for humanity and limit its risks.

But, as we outline in our “2018 AI Predictions,” pressure for responsible AI won’t be on tech companies alone. The risks to the organization are broad and non-technical: from invasion of privacy and algorithmic bias, to brand reputation and the bottom line. No board would outsource those kinds of risks.

As more companies embrace the imperative of responsible AI, the finance function will have to step up, even when an AI is put to work elsewhere in the organization. Responsible AI calls as much for a governance, risk and control (GRC) solution as it does for a technology solution. For example, companies can use a GRC solution such as Oracle Risk Management Cloud to assess potential abuse of their own algorithms, such as when bad actors try to create fake accounts.
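To make the control idea concrete, here is a minimal sketch of one way such abuse monitoring can work: an unsupervised anomaly detector that flags suspicious signups for human review. This illustrates the general technique only, not how Oracle Risk Management Cloud works internally, and the features and data are hypothetical.

```python
# A minimal sketch of flagging anomalous account signups with an
# unsupervised model. Features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical signup features: signups per IP in the last hour,
# account age in days, and transaction count on day one.
normal = rng.normal(loc=[1, 200, 2], scale=[0.5, 80, 1], size=(1000, 3))
fake = rng.normal(loc=[40, 0.5, 30], scale=[10, 0.3, 8], size=(20, 3))
signups = np.vstack([normal, fake])

# IsolationForest isolates outliers via short random partition paths;
# `contamination` is the expected share of bad accounts.
detector = IsolationForest(contamination=0.02, random_state=0).fit(signups)
flags = detector.predict(signups)            # -1 = anomalous, 1 = normal

print(f"flagged {np.sum(flags == -1)} of {len(signups)} signups for review")
```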

GRC software also provides audit trails, and internal auditors are experts in governance, risk and control processes. They can develop risk frameworks that codify how data might be “de-biased,” or teach learning algorithms to steer clear of legal and ethical landmines. They can also help train a secondary AI agent to run in parallel with a primary AI, classifying and explaining its behavior as part of a control process, as sketched below. So finance will provide the humans in the loop who could finally make AI responsible.
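Here is a minimal sketch of that secondary-model idea, sometimes called a global surrogate: fit a shallow, human-readable decision tree to mimic a black-box model's predictions, then measure how faithfully the readable model tracks the opaque one. The dataset and models are illustrative assumptions, not a production control.

```python
# A minimal sketch of the "secondary model" idea: train a shallow,
# human-readable decision tree on a black-box model's outputs, then
# report how faithfully it tracks them. Dataset is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Primary model: accurate but opaque.
primary = RandomForestClassifier(random_state=0).fit(X, y)

# Secondary model: trained on the primary's *predictions*, not the
# ground truth, so it approximates the primary's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, primary.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = accuracy_score(primary.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

A high fidelity score gives auditors some confidence that the printed tree is a usable explanation of the primary model's behavior; a low score signals that the black box is doing something the simple rules cannot capture.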

Learn more about AI and other emerging technologies at Oracle’s upcoming finance event. Be the first to find out about special discounts, insider information, and event details by completing this short form.

