Organizations face a common challenge: how can they improve their security posture with limited budget and staff? One approach is leveraging tools that detect and remediate vulnerabilities in technology, but let’s consider other opportunities. People and processes, along with technology, comprise the three “pillars” of information security. Automation can help strengthen the people and process pillars as well, for example by using an artificial intelligence (AI) tool that actively guides personnel through security-related tasks and activities. With such an approach, an organization can:
My role in Oracle’s Corporate Security involves providing internal guidance. In that role, I noticed that customer-facing teams were frequently raising many of the same requests and questions. I got curious about using AI to address these common questions and began experimenting. Here’s what I learned from a project that used Oracle Digital Assistant (ODA) to automate answers to common security-related questions. The project’s objective:
We found that ODA’s integration with various “channels” and collaboration tools enabled automated delivery of accurate information to customer-facing staff in real time. Personnel didn’t need to wait for responses or hunt for guidance – the AI provided correct answers to their questions.
Benefits of security-focused chatbots
Answers to employee questions relating to security are provided “just in time”, when and where they need assistance:
What are Oracle Digital Assistant (ODA) chatbots?
Digital assistants are artificial intelligence (AI) user interface tools (commonly known as chatbots) that help people accomplish tasks through natural language conversations, without having to search and wade through various applications and web sites. Each digital assistant contains a collection of specialized skills.
You can make digital assistants available to targeted users through a variety of applications, such as chat and collaboration tools. Each digital assistant can use one or more task-specific or topic-specific skills such as answering specific questions and initiating a process. As an example, you can try the Sales chatbot on oracle.com:
Use cases for security-focused chatbots
Chatbots can summarize requirements and provide guidance with links to web pages, instructions, and applications, such as these examples:
Chatbot-initiated employee engagement
Be proactive! Chatbots can launch interactions with employees to recommend tasks or provide education through common collaboration tools, email, and other channels:
Where to deploy chatbots
Make it easy for your personnel by embedding or integrating chatbots into tools they already use to seek help or complete relevant tasks. Delivering relevant guidance when and where it is needed helps people comply with internal policies. To determine the best location, ask your target audience where they seek help now. Likely candidate locations include:
No single tool is right for every need
As large language models may “hallucinate”, person-to-person communication is advised when discussions require a high degree of sensitivity, when people may have emotional reactions to the message, when significant nuanced professional judgment is needed, or when there are potential legal or safety consequences.
There are several options for limiting risk by controlling how chatbots respond. For example, you can use “retrieval augmented generation” (RAG), in which the AI is configured to quote solely from your knowledge base of documents. You can use a structured ODA dialog flow so that users select from a menu of options, with limited or no natural language processing. A third option is configuring a limited set of chatbot responses using ODA “intents”. As with any technology, the chatbot should be monitored to detect any “drift” from intended operation.
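As a rough illustration of the RAG option, here is a minimal sketch in plain Python (not ODA-specific). The knowledge-base snippets, the embedding function, and the similarity threshold are all assumptions made for illustration; the point is that the chatbot quotes only approved content and declines when nothing matches closely enough.

```python
from math import sqrt

# Approved guidance snippets the chatbot is allowed to quote (assumed examples).
KNOWLEDGE_BASE = [
    "Report suspected phishing to the corporate security team within 24 hours.",
    "Customer data may be shared only under an approved, signed agreement.",
]

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, embed, threshold=0.75):
    """Return the closest approved snippet, or decline if nothing matches well.

    `embed` is an assumed function that turns a string into a numeric vector.
    """
    q = embed(question)
    scored = [(cosine(q, embed(doc)), doc) for doc in KNOWLEDGE_BASE]
    score, best = max(scored)
    if score < threshold:
        return "I don't have approved guidance on that. Please contact the security team."
    return best  # quote approved text instead of generating a free-form answer
```

The design choice worth noting is the fallback: declining and redirecting to a person is safer than letting the model improvise an answer.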
Oracle Digital Assistant (ODA) configuration
Intents allow your skill to understand what the user wants it to do. An intent categorizes typical user requests by the tasks and actions that your skill performs.
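To make that concrete, here is a hypothetical example of how user requests can group under intents for a security-guidance skill. It is shown as a plain Python mapping for illustration rather than ODA’s actual configuration format, and the intent names and utterances are invented.

```python
# Hypothetical intent names and sample training utterances for a
# security-guidance skill, grouped so each intent maps to one task or answer.
INTENTS = {
    "ReportPhishing": [
        "I received a suspicious email, what should I do?",
        "How do I report a phishing attempt?",
    ],
    "SecurityPolicyQuestion": [
        "Where can I find the data handling policy?",
        "What are the rules for sharing customer information?",
    ],
}
```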
Chatbot training
Skills can have multiple “intents” – these are the chatbot responses that offer guidance or solutions. Intents can include bulleted lists as well as links to documents, HTML pages, or applications. Since the artificial intelligence powering ODA chatbots is trained on language in general, configuring “entities” educates ODA about terminology and sets of synonyms specific to your organization so that it selects the right intent within a skill. For example, a chatbot about Oracle cloud services might define entities for its service names, as in the hypothetical sketch below.
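This is a minimal sketch using a plain Python mapping; the entity values and synonym lists are assumptions, and in ODA they would be configured as an entity within the skill rather than in code.

```python
# Hypothetical entity for a chatbot about Oracle cloud services: each value
# lists the synonyms and shorthand that employees actually type.
CLOUD_SERVICE_ENTITY = {
    "Compute": ["VM", "virtual machine", "instance"],
    "Autonomous Database": ["ADB", "autonomous DB"],
    "Object Storage": ["bucket", "object store"],
}
```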
Test the chatbot with utterances you configured in the skill’s intents, but also try variations. Solicit test questions from your intended audience and peers, so that you can effectively train the chatbot to select the correct intent (response) for each skill. Chatbot training is an iterative process.
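One way to keep that iteration honest is a small test harness like the hypothetical sketch below: each test case pairs an utterance gathered from your audience with the intent it should resolve to (reusing the made-up intent names from the earlier sketch). The resolve_intent argument is a stand-in for however you exercise the skill, such as ODA’s built-in utterance testing or a conversation API call.

```python
# Hypothetical test cases: (utterance collected from users, expected intent).
TEST_CASES = [
    ("someone emailed me asking for my login details", "ReportPhishing"),
    ("am I allowed to email a customer their own data?", "SecurityPolicyQuestion"),
]

def evaluate(resolve_intent):
    """Run the test utterances and report accuracy plus the misses to retrain on."""
    misses = []
    for utterance, expected in TEST_CASES:
        actual = resolve_intent(utterance)
        if actual != expected:
            misses.append((utterance, expected, actual))
    accuracy = 1 - len(misses) / len(TEST_CASES)
    return accuracy, misses  # add missed utterances to the training data and retest
```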
Manage artificial intelligence risk
Limit how the chatbot responds:
Monitor and measure chatbot performance
Get started in 10 steps
Before building your own AI chatbot, you may wish to experiment with this hands-on ODA chatbot lab.
Oracle recommends that organizations consider relevant use cases, along with applicable regulations and guidance for using artificial intelligence, and select a methodology for managing AI risk.
Nancy Kramer has over 20 years of experience managing risk, security, privacy, audit and compliance for complex business processes and computing environments. Nancy advises Legal and other teams making decisions about information security policy, customer commitments and obligation management. She also manages programs which seek to educate personnel and customers about Oracle's security and compliance posture in the Oracle Trust Center (oracle.com/trust). She offers actionable guidance to customers in blogs and webinars.