
Oracle Artificial Intelligence Blog

AI Influencer Blog Series - AI Safety: What Every Data Scientist Needs to Know

Roman Yampolskiy
Associate Professor at the University of Louisville Department of Computer Engineering and Computer Science

Check out our third post in the AI Influencer blog series by Roman Yampolskiy. Roman is a member of the University of Louisville Department of Computer Engineering and Computer Science. He served as editor of the best-selling book Artificial Intelligence Safety and Security. For more insights from Roman, follow him on Twitter @romanyam.

One could say that AI is taking the world by storm. Data scientists are in demand everywhere, and the number of high-visibility projects that use learning models in the public and private sectors is growing. Undoubtedly, the work will result in many beneficial outcomes, but it's just as certain that some of those outcomes will be surprises. 

Unexpected behavior in learning models can be good because it expands knowledge, but in uncontrolled settings, surprises can be risky. In extreme cases, unplanned model behavior causes physical harm; more commonly, the harm is brand erosion from embarrassing failures.

Let's look at one famous example of an AI safety failure. In 2017, Apple released AI face-recognition software that reportedly could not distinguish between some Chinese iPhone users. Not only was this a product failure; it also drew criticism that Apple had shipped a racially biased learning model.

Such risk is unacceptable for learning models in critical applications, such as autonomous vehicles and medical diagnostic software. Analyzing the ways a system can fail unexpectedly -- and then planning for them -- makes AI perform more safely.

 

Anticipate and Prepare for AI Failures

AI safety should be as much of a front-and-center issue as cybersecurity because machine learning will eventually permeate everyday experiences, from using public transportation to working with accounting software. 

AI safety methods aim to predict and prevent unexpected behavior, whether it is caused by malicious AI coding or is simply an unanticipated learning behavior. 

If you have enough data from AI failures, you can start to identify patterns that point to potential failures. Of course, as with cybersecurity, no method can prevent failure 100% of the time, but to make models as safe as possible, data professionals should scrutinize every way they can think of in which a model might fail.
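To make that idea concrete, here is a minimal sketch of what "identifying patterns" in failure data might look like, assuming a simple in-house incident log. The field names, categories, and records below are hypothetical placeholders, not drawn from any real product or dataset.

from collections import Counter

# Hypothetical incident log -- the fields and categories are illustrative only.
incidents = [
    {"id": 1, "category": "sensor_input", "severity": "high"},
    {"id": 2, "category": "labeling_error", "severity": "low"},
    {"id": 3, "category": "sensor_input", "severity": "medium"},
    {"id": 4, "category": "adversarial_input", "severity": "high"},
]

# Count incidents per category; categories that recur are candidates
# for a deeper failure analysis before the next release.
pattern_counts = Counter(record["category"] for record in incidents)

for category, count in pattern_counts.most_common():
    print(f"{category}: {count} incident(s)")

Even a tally this simple can show which kinds of failures keep coming back and deserve attention first.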

For example, if you design a product to do x -- let's say a self-driving car -- ask yourself how it could fail. Start thinking about the possibilities and whether, and how, each is addressed in the system: The car could fail to recognize road signs. It could hit a pedestrian. Someone could hack the software. Someone could intentionally block sensors or feed them incorrect data.
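One lightweight way to capture that brainstorming is a failure-mode checklist that records whether each possibility has a mitigation. The sketch below is only an illustration of the idea; the mitigation entries are hypothetical placeholders, not engineering recommendations.

# Hypothetical failure-mode checklist for the self-driving car example above.
# The failure modes come from the brainstorming step; mitigations are placeholders.
failure_modes = {
    "fails to recognize road signs": "redundant sign detection cross-checked against map data",
    "hits a pedestrian": "pedestrian detection with an emergency-braking fallback",
    "software is hacked": "signed software updates and intrusion monitoring",
    "sensors blocked or fed incorrect data": None,  # not yet addressed -- flag for review
}

# Flag every failure mode that has no documented mitigation.
unaddressed = [mode for mode, plan in failure_modes.items() if plan is None]
print("Failure modes without a documented mitigation:", unaddressed)

The value is less in the code than in the habit: every identified failure mode either has an answer or is explicitly flagged as an open risk.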

As with the car example, AI safety reviews in products are often at a systems level and involve multiple disciplines. This is likely one of the reasons AI safety can be an afterthought in the product-development process -- no one role or function "owns" it. 

 

No Norms for AI Safety Training

In addition to lacking clear roles and responsibilities, AI safety is not necessarily a part of training for everyone working on learning models. Data scientists come from varied backgrounds, and AI itself is a new concept for many people trained in computer science and engineering. Lack of awareness certainly has been a factor contributing to highly publicized AI failures. 

A few big players -- such as Oracle and Google -- are putting vast resources behind safe AI development, and there has been some movement toward industry standards; but generally, there are no standards for AI safety training for data scientists and others who work on learning models.

 

Educate Yourself on AI Safety

Demand for AI-enabled cybersecurity is growing: Oracle alone started more than 5,000 Autonomous Database trials last quarter. Who will make the next breakthrough in AI safety? Maybe you.

This blog post scratches the surface of issues that surround AI safety, and developers and data scientists should make efforts to educate themselves as AI becomes more commonplace. 

For instance, there are ethical and commercial considerations. Should we limit safety testing on new models that could alleviate widespread suffering so we can deploy them faster? How much risk is acceptable for Product X given the market demand and revenue potential? Will we gain more by extending safety testing to ensure a safer, more reliable model?

Additionally, research and advances are happening regularly and offer new insights into how to make AI safer. Follow people who work in this area on social media, and read what's being published.

Learning models are doing marvelous work, but they can also be risky if developers don't put limits in place to prevent unanticipated behavior. Getting started is as simple as asking, "What could happen that we haven't thought of?"
