The Mobile & Digital Assistant Blog covers the latest in mobile and conversational AI development and engagement

Recent Posts

Digital Assistant / Chatbot Development

Oracle Digital Assistant 20.12 introduces Unified Multi-lingual NLU, Enhanced Speech, and Data Manufacturing

I’m excited to announce that the Oracle Digital Assistant Platform Version 20.12 is now being rolled out across all our OCI data centers.

Unified Multi-lingual NLU

20.12 represents a fundamental shift not only for the ODA Platform but for the Conversational AI space in general. With 20.12, we introduced a unified multi-lingual NLU model for key languages in Europe and the Middle East. This means customers no longer have to build a separate digital assistant for each language, nor do they have to rely on a translation service to power their intent classification and entity recognition. While multi-lingual language embeddings have become mainstream in the past couple of years, ODA is the first Conversational AI platform to introduce active cross-lingual usage with few-shot and zero-shot NLU training models for both intents and entities. Few-shot training means that if you want to add Arabic support to your digital assistant, you do not have to recreate all your training data in Arabic; you may be able to add only a fraction of the training data in Arabic to the same digital assistant and still get reasonable accuracy. You may even get your digital assistant to recognize some Arabic utterances without adding any training data at all; that’s the zero-shot training model. As always, please assess your accuracy goals for each use case and test thoroughly to determine how much more training data you need for your intents and entities. To support this unified multi-lingual NLU, we have introduced other features across the platform, such as built-in language detection, multi-lingual retraining in Insights, multi-lingual testing, and ICU resource bundles for complex multi-lingual outputs.

Enhanced Speech

Speech recognition has been an integral part of the ODA Platform ever since we introduced Oracle Voice in December 2019. The base speech models are already fine-tuned for enterprise usage to recognize terms such as EBITDA, GAAP, and KAD (Key Account Director in Oracle).
These models use context and statistical inference to correctly resolve ambiguous utterances (e.g., whether the user said "Gap" or "GAAP"). With 20.12, we are introducing Enhanced Speech models, which customize speech recognition for your digital assistant. These models are trained transparently in the background when you train your digital assistant; there is no extra speech recording and no maintaining lists of phonemes. Enhanced Speech picks up all your static and dynamic entities (which means you can inject new entity values on the fly into the speech and NLU models for your digital assistant). With Enhanced Speech, your digital assistant will now automatically recognize custom product and organization names (e.g., NeuraLink) and hard-to-master words (e.g., my last name!).

Data Manufacturing with Active Learning

Deep learning models typically require a lot of diverse training data. While our core ML models often help you generalize with less effort, we understand that there’s no substitute for sourcing good training data, especially if you are creating a highly domain-specific digital assistant. Data manufacturing and crowd-sourcing platforms currently enable developers to gather some training data, but that exercise often feels disjointed and is hard to synchronize. With 20.12, we are introducing an integrated data manufacturing capability within the ODA Platform. Now you can quickly select a few intents within your skill and kick off a paraphrasing job within your organization. More importantly, if you have a large volume of chat transcripts, you can kick off an annotation job that leverages Active Learning to suggest potential intents from your skill for a given chat utterance; the crowd worker simply picks one of the suggestions or suggests a new one, and that choice is propagated back into the Active Learning model.
Additionally, 20.12 comes packed with other cool features such as Group Chat support (yes, multiple users can continue a single conversation thread with the digital assistant), Intelligent Advisor integration, DA Retraining, custom OCI Functions, and more. We hope you upgrade your skills to 20.12 soon! We look forward to all your feedback!


Digital Assistant / Chatbot Development

Oracle TechExchange: Demystifying The Skill Calling Skill Functionality In Oracle Digital Assistant

Some functions in a digital assistant are typically required by many skills. An example is user authentication, which should be performed once per digital assistant but must be triggered by all of its skills. You could duplicate the user-authentication code in each skill, or look for a way to modularize the problem. One way to modularize it is to have a skill call another skill, a feature built into Oracle Digital Assistant. This article discusses the options and techniques skill developers have available, and the practices to follow, to implement shared functionality in a separate skill. It is important to mention that the intention of this article is not to say "you must use skill-to-skill conversation" all the time: often a custom component can also be used to wrap and share common functionality, and even copying shared code into several skills sometimes doesn't pose a burden or overhead. This article introduces skill-to-skill calls as a way to share functionality through modularization. It's one more tool in your toolbox.


Digital Assistant / Chatbot Development

Oracle TechExchange: Using BotML in Oracle Digital Assistant to provide a feedback functionality for answers given by a bot

Article by Frank Nimphius, September 2020

In an earlier article on Oracle TechExchange, I explained how to implement a feedback feature for answers to frequently asked questions. In that article, I used a custom component that handled the feedback interaction with the user. In this article, I explain the same use case using BotML to handle the feedback conversation (or any other follow-up conversation needed). A benefit of this approach is that it is easier to implement for developers who are new to Oracle Digital Assistant and custom components. Where the previous solution required a complex, though interesting, custom component to be developed, this solution uses pure BotML. As with many articles on Oracle TechExchange, there are a couple of things you will learn: formatting messages using an Apache FreeMarker array, defining answers as regular intents, adding icons to Common Response component actions, using a delegate object with the Oracle Web SDK, and using CSS to customize the Oracle Web SDK rendering. The images below show the final product of this article. As an example, I used frequently asked questions about Oracle Digital Assistant. The "thumb-up" and "thumb-down" icons are actions that a user can use to provide feedback. If the user does not want to provide feedback, she can simply continue the conversation with her next message. The design goal of this implementation is not to annoy the user by forcing her to provide feedback; providing feedback is optional. Disclaimer: depending on when you read this article, the answers to the questions in the screenshots may have changed or been updated. The user is free to choose whether she wants to provide feedback. If she does, she presses one of the provided icons (whose look you choose). Otherwise, she simply sends another message, which is passed to the intent engine.
The image above shows a conversation the user continued without providing feedback. In the conversation below, the user clicked the thumb-up button. The sample for this article prints a message in response to the user-selected action button. In your production bot, the selected action should lead to a custom component call that logs the user feedback so you can improve your chatbot.

The BotML (YAML) Code

All code in this sample is in BotML. Shown in the image below is the System.Intent component state. The System.Intent component state has an actions transition defined for all regular intents that start a conversation and are not answers to a question. The example only has unresolvedIntent defined for this. All other regular intents that get resolved are considered answers to a question asked by the user. Those intents follow the next transition to the prepareAnswerAndFeedback state. For an explanation of what answer intents are and when to use them compared to regular intents for answering questions, see this article. The state shown in the image below defines an array using Apache FreeMarker expressions. The array is saved in the answerWithFeedback variable and contains items with text as well as blank items. The blank items mark the end of a paragraph; this way, I format a message into paragraphs without using markup or encoded line-break characters. The answer to a question is displayed by a System.CommonResponse component. The System.CommonResponse component iterates over the array created in the previous dialog flow step to print the message. For this, it has its iteratorVariable property pointing to the answerWithFeedback variable and its separateBubbles property set to false. This way, multiple lines of the array get printed in a single speech bubble. Two actions, Ok and Bad, are used for the user to provide feedback. The action items use resource bundle strings for their labels.
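The two states described above can be sketched in BotML. The following is a minimal sketch, not the article's exact code: the state names follow the article, but the resource-bundle keys, the paragraph-split delimiter, and the exact Common Response metadata are illustrative assumptions (the downloadable sample skill contains the real implementation).

```yaml
states:
  # Build the answer array from the resource bundle entry for the resolved intent.
  prepareAnswerAndFeedback:
    component: "System.SetVariable"
    properties:
      variable: "answerWithFeedback"
      # Assumption for this sketch: paragraphs in the bundle string are separated by "|".
      value: "${rb(iResult.value.intentMatches.summary[0].intent)?split('|')}"
    transitions:
      next: "showAnswer"

  # Print the array items in a single bubble and offer the two feedback actions.
  showAnswer:
    component: "System.CommonResponse"
    properties:
      processUserMessage: true
      metadata:
        responseItems:
          - type: "text"
            text: "${answerWithFeedback}"
            iteratorVariable: "answerWithFeedback"
            separateBubbles: false
        globalActions:
          - label: "${rb('feedbackOkLabel')}"   # illustrative bundle key
            type: "postback"
            payload:
              action: "positiveFeedback"
          - label: "${rb('feedbackBadLabel')}"  # illustrative bundle key
            type: "postback"
            payload:
              action: "negativeFeedback"
    transitions:
      actions:
        positiveFeedback: "handlePositiveFeedback"
        negativeFeedback: "handleNegativeFeedback"
        textReceived: "handleTextReceived"
```

The design point to notice is that the feedback buttons and the answer text live in a single Common Response state, so a user who ignores the buttons simply triggers the textReceived transition with her next message.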
Notice how the labels are printed as a single blank character when the message channel is the Oracle Web SDK (websdk). This way, the label is used by all messengers that, for example, don't support icons on action items and buttons. If you use a messenger that does support icons and you don't want the labels to show, you can add that messenger's channel to the expression that checks whether the label should be printed as a blank character. In the image below, I use a reference to pixabay.com for the thumb-up and thumb-down icons. In your production environment, you should use your own images stored on an Internet-facing server or a content delivery network (CDN). Notice the three actions transitions defined for the System.CommonResponse component state: negativeFeedback, followed when the user presses the "thumb-down" action; positiveFeedback, followed when the user presses the "thumb-up" action; and textReceived, followed when the user doesn't select one of the buttons but types a new text message. The dialog flow states shown in the image below handle the three transition types. For the feedback options, you replace the System.Output components with custom components that report the user feedback to a backend system so you can improve the answers if needed. The handleTextReceived dialog flow state prints a confirmation message and then directs the flow to the System.Intent component state to pass the message to the intent engine.

Intents and Resource Bundles

The answers printed in response to a question are saved in resource bundles, which means they can be translated if needed. The intent names are unique and can be used as the key names for resource bundle strings. The image below shows a resource bundle definition. Note that the resource bundle strings are created in the User-Defined tab. The answer for an intent is accessed when creating the array for the formatted message.
To access the resource bundle content for an intent, you use ${rb(iResult.value.intentMatches.summary[0].intent)}. Note: "rb" is the name of a variable of type "resourcebundle" that you need to define in your dialog flow; "iResult" is the name of the "nlpresult" variable and is also referenced from the System.Intent component.

Oracle Web SDK Client Configuration

Two settings are required for the Oracle Web SDK to work as shown in the initial set of screenshots.

settings.js

The settings.js file that holds the Web SDK configuration in the sample provided with the Oracle Web SDK download needs a delegate object that responds to the postback message. The postback message is sent when a user presses the thumb-up or thumb-down button. The beforePostbackSend function highlighted in the image below allows you to modify postback messages. In this example, it looks for messages that consist of a single blank character. If it finds one, the function does not send the message to the server but re-routes it to Bots.sendMessage() as a hidden message. Note: what the delegate object function in the image above does is hide a blank message bubble. The blank message bubble would otherwise be displayed when the user selects the thumb-up or thumb-down button. The function suppresses this empty bubble by sending a hidden user message. This function is not needed if you want to display a label and an icon (see "Alternative solution" later in this article).

index.html settings

You can run the sample without changing the index.html file; all required changes are in the settings.js file. Without the changes in the index.html file, however, the buttons with the thumb-up and thumb-down actions are rendered in a vertical orientation. The style information shown below changes the layout so the icons can be laid out next to each other, as shown in the image at the beginning of the article.
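A delegate of this kind might look roughly like the following sketch. It assumes the `Bots` Web SDK instance created in the settings.js sample; the `hidden` option name and the null-return cancellation behavior are assumptions for illustration, so check the Web SDK documentation for the exact contract.

```javascript
// Pure helper: a postback whose visible text is blank is one of the
// icon-only feedback actions (the labels are a single blank character).
function isBlankLabel(text) {
  return typeof text === "string" && text.trim().length === 0;
}

// Delegate object registered with the Web SDK in settings.js.
const delegate = {
  beforePostbackSend(postback) {
    if (postback && isBlankLabel(postback.text)) {
      // Re-route the postback as a hidden message so no empty user
      // bubble is rendered. `Bots` is the Web SDK instance from
      // settings.js; the `hidden` option name is an assumption here.
      Bots.sendMessage(postback, { hidden: true });
      return null; // cancel the default (visible) send
    }
    return postback; // non-blank labels are sent and rendered normally
  }
};
```

Because the blank-label check is a pure function, you can unit-test it separately from the messenger, which is useful when you later add more channels to the label expression in BotML.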
Alternative Solution

As an alternative to displaying buttons with just a label, you can display buttons with an icon and a label. In this case, you don't use the Apache FreeMarker expression in the showAnswer state to conditionally set the label to a blank character; instead, you always add the label as read from the resource bundle. Also, the beforePostbackSend() function in the settings.js file can be left empty, because no special handling is required. Last but not least, the index.html file does not require any styling, as you would want the buttons to stay vertically aligned to allow the label to be placed next to the icon.

Downloads

Sample Skill. To try it: import the skill, train the model, and run the sample in the integrated conversation tester (start with "what is an API call" and then "tell me about pricing"). Then configure the settings.js and (optionally) index.html file of your Web SDK implementation (sample) as explained in this article, and run the sample in the web messenger.

Related Content

Oracle Native Client SDK References & Downloads
Oracle Web SDK product documentation (all you need to know)
TechExchange: Use font awesome and a custom component to create an icon menu for the Oracle Web SDK
TechExchange: Oracle Digital Assistant Web SDK customization and programming examples
TechExchange: How to respond to user inactivity using the Oracle Web SDK messenger. An implementation strategy
TechExchange: What Is Best for Frequently Asked Questions In Oracle Digital Assistant? Regular or Answer Intents?


Oracle Enhances Oracle Digital Assistant with Multilingual Capabilities

New conversational AI capabilities enable customers to interact via the channel of their choice across the enterprise.

By Suhas Uliyar, Vice President, AI and Digital Assistant, Oracle

Call them chatbots, virtual assistants, or simply bots. Whatever the name, AI-powered conversational interfaces are becoming mainstream staples for consumers and enterprises alike. In fact, leading analyst firm Gartner believes that “by 2022, 70 percent of white collar workers will interact with conversational platforms on a daily basis.”1 When Oracle unveiled its chatbot platform at OpenWorld 2016, it helped set the pace for automation in the enterprise. Automation is a means for increasing scale and efficiency and accelerating digital transformation efforts – and considering that recent global events and challenges have forced a restructuring of how we work, AI and digital assistants are fundamental to that transformation. Oracle Digital Assistant has been solving the needs of the enterprise since 2016, and analyst firm Omdia recently noted, “By offering full integration with its software as a service (SaaS) applications, Oracle made it exponentially easier for end users to command and control the capabilities of these applications.”2 Today we are announcing a new set of updates to enhance the multilingual capabilities of Oracle Digital Assistant. These features are helping customers such as Loyola University of Chicago and communications startup Yokeru provide their users with the information they need through the channel of their choice. The new features include:

New Deep Learning Models: Customers can leverage the power of Oracle Cloud Infrastructure’s native GPU and CPU architecture to improve the ability to distinguish nuances in customer queries, such as: similar vs. unrelated phrases (“Can I get some flatbread?” vs. “Can I get some flowers?”); additional context within long sentences (“I have a large party later this afternoon, and I have several guests coming over. I’d like to order some large pizzas.”); closely related sentences (“I want to cancel my order” vs. “Why was my order cancelled?”); names that sound like locations (“When will Devon Arlington come to Stratford-upon-Avon?” or “Find Paris Hilton from the Paris office”); numbers vs. currency (“Lunch with 3 for 60 bucks”); colloquial terms (“I have a budget for 20M bucks”); and different currency formats (“Please add a tip for 2,50 €”).

Versatile Data Shapes: Customers can use both large and small datasets to train their skills without concern that an imbalance will impact NLU performance.

Custom Domain Vocabulary: Customers can expand the assistant’s understanding to include their own custom domain vocabulary.

Data Manufacturing Pipeline: Data is critical to achieving high accuracy in deep learning models. The data manufacturing pipeline provides a cohesive set of tools giving all stakeholders, from technical to line-of-business, the ability to generate, refine, curate, and evaluate conversational data. Combining human-sourced intelligence with advanced machine learning delivers the better, more nuanced results that neither can offer alone.

Native Multilingual NLU: Customers can add training data in different languages, eliminating the need for external translation services to understand users who do not speak English – and customers can provide multilingual outputs directly using resource bundles.

Key Phrase Word Clouds and Multilingual Retrainer: Digital Assistant now features intent and key phrase clouds to help business analysts quickly understand common themes of engagements. The business analyst can quickly drill down into the details of a specific phrase.
Since its introduction, Oracle Digital Assistant has offered valuable features and capabilities, including:

Digital assistants for FAQs: When considering B2C call centers and B2E help desks, it’s easy to see how a bot can field common incoming questions and requests, giving customers the satisfaction of an instant response 24/7 while freeing staff to work on other tasks.

Automated bot-to-agent transfer: Oracle Digital Assistant offers prebuilt integration with Oracle Service Cloud, providing a seamless experience for customers during handoff to a live agent while giving agents historical information about the recent customer engagement.

Enterprise assistant skills: From cloud applications such as Oracle Cloud ERP, Oracle Cloud HCM, and Oracle Cloud CX to on-premises applications like PeopleSoft and JD Edwards, Oracle teams have developed prebuilt assistant skills and templates to meet customer demand.

Popular conversational channels: With support for everything from well-known smart speakers to popular text-based channels including SMS, WhatsApp, WeChat, and Facebook Messenger – plus collaboration tools like Slack and Teams – Oracle Digital Assistant is ready.

Oracle Voice: Oracle invested in its own AI-powered voice capabilities, bringing together an end-to-end, secure, and private solution (GDPR, PII) while providing a customizable framework to support terminology that is unique to different industries and businesses.

One digital assistant: Oracle Digital Assistant can unify all assistant skills into one digital assistant, making it easy for users to interact with multiple systems from one conversation. Conversations are contextual and personalized to individual users and roles.

Chatbots and conversational AI are quickly becoming integral tools for enterprise communication and information sharing, in addition to automating traditionally manual tasks.
With the new updates to Oracle Digital Assistant, we are delivering the innovative features users are seeking – such as multilingual capabilities – to further weave digital assistants into the fabric of the enterprise. As a result, customers are able to offer automation across their entire organization, using a highly secure AI-powered voice assistant that stores their business’ sensitive data in Oracle’s second generation cloud infrastructure.    Customer Quotes Loyola University of Chicago “With more than 17,000 students demanding more timely, more modern engagement, we established a five year plan to advance the Loyola Digital Experience (LDE) strategy. The Transformational Theme of LDE includes leveraging artificial intelligence (AI) and deployment of ‘LUie,’ an AI digital assistant running on the Oracle Digital Assistant with automation and integration from IntraSee,” said Susan M. Malisch, VP & CIO, Loyola University of Chicago. “LUie currently provides hundreds of answers to common questions. Early results have been great with initial accuracy rates of 86%. Feedback has been encouraging with 91% positive comments and we are now looking to broaden LUie to handle even more questions for more audiences. We’re excited about LUie’s future potential.” Yokeru “Throughout the COVID-19 pandemic, millions self-isolated to protect their health and the wellbeing of the community – but for many, the extended period of self-isolation resulted in negative impacts on mental and physical health. In response, Yokeru developed an AI-enabled call centre to contact and identify vulnerable members of the community using the phone line,” said Monty Alexander, CEO, Yokeru. “Through Oracle Digital Assistant and Oracle Autonomous Database, we were able to rapidly develop this system to enable the London Borough of Hammersmith and Fulham to monitor 9,000 Shielded households – resulting in the elimination of over 100 working days of traditional call centre time. 
The flexibility of Oracle's software on the underlying Oracle Cloud Infrastructure is incredible, and we’re looking forward to understanding how we can develop this technology to support our community in the future.”  State of Oklahoma “In eight days Oracle built and delivered two applications,” said Jerry Moore, CIO, State of Oklahoma. “Not only was this an impressive feat, but showed Oracle’s commitment to helping us and our communities run more smoothly in this difficult time.” To learn more, check out What's New in the Oracle Digital Assistant documentation. 1Smarter with Gartner, “Chatbots Will Appeal to Modern Workers,” 31 July 2019, https://www.gartner.com/smarterwithgartner/chatbots-will-appeal-to-modern-workers/ 2Omdia, “Analyst Commentary: Oracle Digital Assistant democratizing Oracle apps,” Mark Beccue, Q3 2020, https://tractica.omdia.com/research/analyst-commentary-oracle-digital-assistant-democratizing-oracle-apps/


Democratizing Oracle Apps

According to Merriam-Webster, democratize means “to make (something) available to all people,” and that’s exactly what Oracle Digital Assistant is about: making applications accessible to all people through the ease of conversation. I’ve borrowed this blog’s title from Omdia analyst Mark Beccue’s commentary, “Oracle Digital Assistant democratizing Oracle apps,” and as you will see, it’s apropos. The internet came about commercially in the late ’90s and made the world’s information available to anyone with a PC. In 2007, Apple introduced the iPhone and effectively put a computer in all our pockets. More recently, cheap compute power and effectively infinite cloud storage brought about breakthroughs in artificial intelligence and, thus, the advent of sophisticated conversational AI. Now we don’t have to navigate to a website, or download, install, and figure out a mobile app; more and more commonly, we simply ask for what we need. To quote from the analyst’s commentary: “By offering full integration with its software as a service (SaaS) applications, Oracle made it exponentially easier for end users to command and control the capabilities of these applications.” This doesn’t mean that we will no longer need the full power of the web or mobile apps that SaaS applications offer; we’re far from that. But for non-power or infrequent users, conversational interfaces are more approachable, and even power users can benefit from the ready access and simplicity of a conversational interface. Do most people need or want to learn a procurement system to order a new computer? Relearn the HR system’s benefits procedures to make a small change to the family’s health plan? Or become a digital marketing expert to check on the status of a campaign? Probably not, or at least we can admit that having to do so creates friction for employee productivity. Conversational interfaces are here today.
More and more, you’ll simply ask your personal digital assistant and carry on with your day. That’s the topic of the Omdia analyst commentary, “Oracle Digital Assistant democratizing Oracle apps,” and if you’re interested, I’d encourage you to check it out here. To learn more about Oracle Digital Assistant, start here. FYI: For the latest on Oracle Digital Assistant, watch our webcast live/replay, “Innovations update for Oracle Digital Assistant, conversational AI for 2020 and beyond,” hosted by Suhas Uliyar, Vice President, Digital Assistant, Cognitive AI and Integration Cloud, Oracle, and Serdar Canbek, Senior Manager, Operations Reporting and Analytics, Office Depot, as they discuss the latest innovations and real-world customer results that Oracle Digital Assistant offers.


Live webcast with Office Depot and industry expert, Suhas Uliyar on Oracle Digital Assistant

n,t,a,r,d=document,O=window;if(window.BOOMR.snippetMethod=e?"if":"i",t=function(e,n){var t=d.createElement("script");t.id=n||"boomr-if-as",t.src=window.BOOMR.url,BOOMR_lstart=(new Date).getTime(),e=e||d.body,e.appendChild(t)},!window.addEventListener&&window.attachEvent&&navigator.userAgent.match(/MSIE [67]\./))return window.BOOMR.snippetMethod="s",void t(i.parentNode,"boomr-async");a=document.createElement("IFRAME"),a.src="about:blank",a.title="",a.role="presentation",a.loading="eager",r=(a.frameElement||a).style,r.width=0,r.height=0,r.border=0,r.display="none",i.parentNode.appendChild(a);try{O=a.contentWindow,d=O.document.open()}catch(c){n=document.domain,a.src="javascript:var d=document.open();d.domain='"+n+"';void(0);",O=a.contentWindow,d=O.document.open()}if(n)d._boomrl=function(){this.domain=n,t()},d.write("");else if(O._boomrl=function(){t()},O.addEventListener)O.addEventListener("load",O._boomrl,!1);else if(O.attachEvent)O.attachEvent("onload",O._boomrl);d.close()}function a(e){window.BOOMR_onload=e&&e.timeStamp||(new Date).getTime()}if(!window.BOOMR||!window.BOOMR.version&&!window.BOOMR.snippetExecuted){window.BOOMR=window.BOOMR||{},window.BOOMR.snippetStart=(new Date).getTime(),window.BOOMR.snippetExecuted=!0,window.BOOMR.snippetVersion=12,window.BOOMR.url=n+"DXNLE-YBWWY-AR74T-WMD99-77VRA";var i=document.currentScript||document.getElementsByTagName("script")[0],o=!1,r=document.createElement("link");if(r.relList&&"function"==typeof r.relList.supports&&r.relList.supports("preload")&&"as"in r)window.BOOMR.snippetMethod="p",r.href=window.BOOMR.url,r.rel="preload",r.as="script",r.addEventListener("load",e),r.addEventListener("error",function(){t(!0)}),setTimeout(function(){if(!o)t(!0)},3e3),BOOMR_lstart=(new Date).getTime(),i.parentNode.appendChild(r);else t(!1);if(window.addEventListener)window.addEventListener("load",a,!1);else if(window.attachEvent)window.attachEvent("onload",a)}}(),"".length>0)if(e&&"performance"in e&&e.performance&&"function"==typeof 
e.performance.setResourceTimingBufferSize)e.performance.setResourceTimingBufferSize();!function(){if(BOOMR=e.BOOMR||{},BOOMR.plugins=BOOMR.plugins||{},!BOOMR.plugins.AK){var n="true"=="true"?1:0,t="cookiepresent",a="vu7tgtqxftmvyxyxa6ya-f-b1f57e8d1-clientnsv4-s.akamaihd.net",i={"ak.v":"27","ak.cp":"82485","ak.ai":parseInt("604074",10),"ak.ol":"0","ak.cr":7,"ak.ipv":4,"ak.proto":"h2","ak.rid":"ba7b0f0","ak.r":36216,"ak.a2":n,"ak.m":"dscx","ak.n":"essl","ak.bpcip":"","ak.cport":52761,"ak.gh":"","ak.quicv":"","ak.tlsv":"tls1.3","ak.0rtt":"","ak.csrc":"-","ak.acc":"","ak.t":"1595344816","ak.ak":"hOBiQwZUYzCg5VSAfCLimQ==zd0IMQmj6JT8HRjuHs+TFFiXBwDhGsfa3Sk20ZcobgHB9G3RWm5GqgUUA89M6rSoS/4Mt9HIhaZoFYAHcpA5Mr5hUVlCfj7u0fq9S00WKg1W2j29EujJccYX4NpbYpNUl7F50lIC+b32em6rZSqbS6KVyTHOPUJ53sZBwDg1+eLCG/zPB7CJkNvgBxQIeHswwBW11FeSox3tuA3VcR6UBF+F7MroZgWvB3JPrsqvmc2QZSe3Ly2k866IpCXu150CTWlPHXOhz/72j5+9LxNdj0L61u449Bu/ym9uk4KHBs2+hZTgr2OJLigpMeGHKu50ksXgBA9OaDZ4v87l08l+zniJoymWC1Y/HOYs4CR11O7ux9GrUnM3qCVpFZZa3UgUCgbFpT9v2sKiPTjf3pzd5dS5W4vstnEvtPEMF+CTu6k=","ak.pv":"102"};if(""!==t)i["ak.ruds"]=t;var o={i:!1,av:function(n){var t="http.initiator";if(n&&(!n[t]||"spa_hard"===n[t]))i["ak.feo"]=void 0!==e.aFeoApplied?1:0,BOOMR.addVar(i)},rv:function(){var e=["ak.bpcip","ak.cport","ak.cr","ak.csrc","ak.gh","ak.ipv","ak.m","ak.n","ak.ol","ak.proto","ak.quicv","ak.tlsv","ak.0rtt","ak.r","ak.acc","ak.t"];BOOMR.removeVar(e)}};BOOMR.plugins.AK={akVars:i,akDNSPreFetchDomain:a,init:function(){if(!o.i){var e=BOOMR.subscribe;e("before_beacon",o.av,null,null),e("onbeacon",o.rv,null,null),o.i=!0}return this},is_complete:function(){return!0}}}}()}(window);     Innovations update for Oracle Digital Assistant The conversational-AI for 2020 and beyond July 28, 2020 10 a.m. PT/1 p.m. 
ET. Register Now.

Office Depot elevates customer service with Oracle Digital Assistant

In these unprecedented times, organizations are looking for ways to maintain communication and engagement with their customers, employees, and citizens while maintaining business continuity. Intelligent digital assistants can scale and broaden the reach of IT helpdesk and customer service teams while providing easy-to-use conversational access to key self-service functionality. Join this webcast to learn how the latest innovations in Oracle Digital Assistant enable further democratization of information, securely and intelligently. Hear directly from Oracle executives and see first-hand Office Depot's digital assistant in action as it elevates customer service and drives convenience and personalized engagement for their e-commerce customers. Register Now.

Featured Speakers

Suhas Uliyar, Vice President, Digital Assistant, Cognitive AI and Integration Cloud, Oracle
Serdar Canbek, Senior Manager, Operations Reporting and Analytics, Office Depot


State of Oklahoma Employs Digital Assistants to Help Government Telework

When the COVID-19 pandemic hit, Oklahoma, like every other state, had to figure out a way for its mostly office-based government employees to work from home. Oracle was there to help. Within eight days of receiving a call for help, the local team and members of Oracle's Austin-based Cloud Solutions Hub designed and deployed a chatbot to help Oklahoma's newly home-based workers get productive as quickly as possible. Office workers unfamiliar with configuring IT gear without hands-on support invariably have questions, and that can lead to bottlenecks. Before COVID-19, the IT desk at the Oklahoma Office of Management and Enterprise Services fielded about 500 support calls a month. Overnight, that number spiked to more than 1,500 calls per day, says Jerry Moore, CIO, State of Oklahoma. To eliminate the resulting backlog of IT support calls, the Oracle Cloud Hub engineers built a chatbot that lets users ask basic questions, such as how to reset a password, how to set up a VPN, or how to download workplace applications. The chatbot was instrumental in reducing the volume of calls to the IT helpdesk and getting approximately 30,000 state employees up and running from home so they could keep providing vital constituent services. During that same period, Oracle also built a mobile app for the state's Department of Human Services that tracks time and purchases related to COVID-19 work. "If an employee buys 12 cases of hand sanitizer, they can take a picture of the purchase and upload it to the app that tracks all activities specific to the pandemic," Moore says. "Overall, in eight days Oracle built and delivered two applications." And those applications ensure that Oklahoma government workers can keep providing important services despite having to shelter in place. Find out more about Oracle's state and local government solutions. Learn more about Oklahoma and other public sector agencies who are leading in crisis.


Digital Assistant / Chatbot Development

TechExchange: Use Entities To Build Powerful, Robust And Speech-Ready Action Menus

article by Frank Nimphius, June 2020

In addition to natural language processing (NLP), menus are a popular means of navigation in a chatbot conversation. Action menus are typically used either as a fallback when the NLP engine does not resolve a user message with a predefined confidence (threshold), or when a resolved user intent has subcategories into which the conversation can branch. To build action menus you have a choice:

You can build action menus manually using the System.CommonResponse component and BotML.
You can build action menus based on value-list entities that you render using the System.CommonResponse component.

In this article I will try to convince you to use the latter approach: entity-based menus (or model-driven menus, as I like to call them).

Behavior of manually built action menus

The action menu below has been built manually in BotML. Unlike lists of values, which set a value on a variable when a user selects an item from the list, action menus trigger navigation to a specific dialog flow state. With this behavior, action menus are like menus in web and mobile applications. The sample for this article is very simple: when a user selects an item from the action list, navigation goes to a dialog flow state that confirms the user selection. If the user enters a text message that cannot be resolved to an item label or a keyword defined for the select items, a textReceived action is triggered. Text received actions are usually directed to the NLP engine (the System.Intent component) to resolve the user intent. The image below shows a choice of three action items, one of which is "Order Pizza". Humans behave like humans: if a user, instead of pressing a button (select item) or typing "Order Pizza", types "I want pizza", the menu does not recognize the user's intent to select the Order Pizza item.
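A manually built menu of this kind might look like the following BotML sketch. This is a minimal illustration rather than the article's actual sample: the state names, labels, and transition targets are made up, and only the properties relevant to the discussion are shown.

```yaml
showMenu:
  component: "System.CommonResponse"
  properties:
    processUserMessage: true
    metadata:
      responseItems:
      - type: "text"
        text: "What would you like to do?"
        actions:
        - label: "Order Pizza"
          type: "postback"
          payload:
            action: "order"
        - label: "Cancel Pizza"
          type: "postback"
          payload:
            action: "cancel"
        - label: "Opening Hours"
          type: "postback"
          payload:
            action: "openingHrs"
  transitions:
    actions:
      order: "orderPizza"
      cancel: "cancelPizza"
      openingHrs: "showOpeningHours"
      # free text that matches no label or keyword raises textReceived,
      # typically routed to a System.Intent state
      textReceived: "intent"
```

Because the menu itself knows nothing about language, any user message other than an exact label or keyword match falls through to the textReceived action.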
If the text received action were linked to a System.Intent component state, the NLP engine would probably get it right and give the user what she wants. But what if the user types in nonsense, or a cat walks over the user's keyboard? In this case, shown in the image below, the intent engine would be challenged by, for example, "grrmpf". Assuming the intent engine does not resolve "grrmpf", the user is taken off track. If the bot is run with voice in the Oracle Digital Assistant Web SDK, how would you press a select button? You would probably try reading the button label, or more likely fall back to human conversation, saying something like "I want to order pizza". This, however, would have the same effect as "grrmpf" in that your message would be directed to the intent engine.

Behavior of entity-driven action menus

Action menus you create based on an entity (where a value-list entity is used as metadata for the menu) inherit powerful features of entities:

The ability to display one of multiple prompts at random, to appear less robotic
Validation of user input against the values and synonyms of the entity
The ability to extract values and synonyms (keywords) from free text
The ability to detect user input failures and display a help message in the prompt
Disambiguation of user messages, e.g. "I like to cancel order pizza"

The image below shows how "gimme pizza" was resolved to the user wanting to order pizza. This did not involve an intent; it was resolved purely through the entity behind the menu. Getting back to the "cat walks over the keyboard" use case: the image below shows the outcome of "grrmpf". The menu shows a message that "grrmpf" is not an allowed selection. The maxPrompts setting of the menu defines how many failed user input attempts are allowed before the menu follows a cancel action. So let's try "order cancel pizza", which matches two select items in the action menu.
As shown in the image below, the entity detects the ambiguity and resolves it by displaying a dialog for the user to select from. Even the case of the user message doesn't matter when using entities.

How to build entity-driven action menus

As mentioned earlier, entities you build for menus are of type "value list". I suggest marking menu entities as such in their names to keep your code readable. For each option in the action menu, you create a value entry in the list: order, cancel, openingHrs. The synonyms catch the ways users could order or cancel pizza or ask for opening hours. Note that synonyms don't need to anticipate the full user sentence. The synonym "gimme" that I defined for "order" works with "gimme a pizza" as well as "could you gimme a pizza". The error message defined on the entity is displayed whenever a user fails to provide valid input. In the image below, the message is read from a resource bundle (which I recommend you always use for message strings) that gets the user-entered string passed as an input parameter: system.message.messagePayload.text. Notice the Prompt for Disambiguation setting in the image above. If validation of the user message results in multiple action values, a dialog with the defined prompt is shown. Again, the prompt uses a message bundle. Finally, you can specify as many different prompts as you like. Prompts are displayed in random order depending on how many failed attempts a user has made at providing valid input. To generate the action menu, a System.CommonResponse component is used. I explain the component properties below.

variable: The "variable" property of the System.CommonResponse component references a variable of the entity type (PizzaActionMenu in the sample). This ensures that user messages that don't match a label or keyword defined for a select item are still validated against the entity.
autoNumberPostbackAction: Setting this property to true adds a numeric value in front of each item; typing that value as a message then selects the action. The sample uses a different implementation of the same idea, so this property is set to false.

text: The "text" property of the response item displays the prompt. The expression associated with the property references system.entityToResolve.value.prompt to read the prompt from the entity (which in turn reads it from the resource bundle).

actions : label: The label of an action is set to a lowercase character (a, b, or c) as a shortcut, plus a string read from a resource bundle. The resource bundle key is <action>_label (e.g. order_label), so it can be resolved dynamically.

actions : keyword: Defines a comma-separated list of shortcuts that select (or virtually press) a select item. The sample creates four keywords (order, 1, a, A) for the Order Pizza select item and (cancel, 2, b, B) for the Cancel Pizza item.

actions : payload : action: The action of each select item is set to the value of the entity: order, cancel, or openingHrs.

actions : iteratorVariable: References system.entityToResolve.value.enumValues to obtain a sequence of entity values. For as long as there are values in the sequence, a select item is created.

transitions : actions: An action mapping to a dialog flow state is created for each value in the entity: order, cancel, openingHrs. This way, when the user selects a select item, navigation goes to that state.

next: Very important! When a user types text that does not match a keyword or label of a select item but that can be validated against the entity values or synonyms, the next transition is followed. To make sure navigation reaches the same dialog flow states as when the user presses a button, an Apache FreeMarker "switch" expression is used.

Downloads

Download the sample skill and import it to your Oracle Digital Assistant instance. Then open it and run the conversation tester. Type "hi" to get the menu displayed.

Download the entity driven action menu skill

Related articles

TechExchange Quick-Tip: How to Intelligently Cancel Composite Bag Entity Driven User Dialog Flows
TechExchange: How-to Use the System.ResolveEntities Component in Oracle Digital Assistant
TechExchange: Building Model Driven Questionnaire Conversations Using Composite Bag Entities in Oracle Digital Assistant
TechExchange Quick-Tip: Understanding Oracle Digital Assistant Skill Entity Properties - Or, What Does "Fuzzy Match" Do?
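Putting the properties described above together, the entity-driven menu state might be sketched in BotML roughly as follows. This is an illustrative reconstruction, not the downloadable sample itself: the state names, variable name, and transition targets are made up, the label and payload expressions are elided, and the exact metadata structure can differ slightly between platform versions.

```yaml
showPizzaMenu:
  component: "System.CommonResponse"
  properties:
    # variable of the entity type (PizzaActionMenu), so free text is
    # validated against the entity's values and synonyms
    variable: "pizzaAction"
    processUserMessage: true
    metadata:
      responseItems:
      - type: "text"
        # prompt is read from the entity, which reads it from a resource bundle
        text: "${system.entityToResolve.value.prompt}"
        actions:
        # one select item is created per value in the entity
        - label: "..."
          type: "postback"
          payload:
            action: "..."
          iteratorVariable: "system.entityToResolve.value.enumValues"
  transitions:
    actions:
      order: "orderPizza"
      cancel: "cancelPizza"
      openingHrs: "showOpeningHours"
    # followed when free text resolves through the entity rather than a
    # button press; a FreeMarker switch routes to the same states
    next: "routeResolvedValue"
```

In the actual sample, the elided label and payload expressions are driven by the iterator value and the <action>_label resource bundle keys described in the table.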


Oracle Digital Assistant Named a Leader in Ovum Decision Matrix for Intelligent Virtual Assistants

Ovum, a leading analyst firm and part of the global technology research organization Omdia, has recognized Oracle Digital Assistant as a market leader in its latest research report, "Ovum Decision Matrix: Selecting an Intelligent Virtual Assistant Solution, 2020–21." The report analyzes the evolution of intelligent virtual assistants, the increasing scope of use cases, and the market landscape, and evaluates 10 niche and large technology vendors, naming Oracle Digital Assistant one of the leaders in this market. Oracle Digital Assistant is a comprehensive, AI-powered conversational interface for business applications. It interprets the user's intent so it can automate processes and deliver contextual responses to voice or text commands, enriching the user experience, eliminating helpdesk and support overhead, and enabling scale for communications and engagement. The Ovum report specifically highlights Oracle Digital Assistant as an easy-to-build solution, thanks to its no-code, design-by-example Conversational Design Interface that is intended to let non-developers build, train, test, deploy, and monitor AI-powered digital assistants on channels of choice. Ovum also noted Oracle Digital Assistant's advanced linguistic and deep learning-based natural language processing (NLP) models as a key strength that enables the digital assistant to better understand domain-specific vocabulary and respond with contextual information and best next-step actions. Oracle Digital Assistant also received kudos in the report for providing an "enterprise-ready" solution. Organizations leveraging Oracle Digital Assistant know that their data is their own, stored securely in Oracle Cloud or, for organizations wanting to keep their data within their own boundaries, via Cloud@Customer.
Furthermore, because it is a comprehensive platform, Oracle Digital Assistant can integrate with existing processes, routing rules, and contact center agents to support enterprises' unique business needs. In fact, Ovum noted that "A differentiator for ODA [Oracle Digital Assistant] is that a business process engine sits beneath it and is tightly integrated to perform tasks emerging from the conversation. For example, when an end user informs the ODA of a change of address, several relevant processes kick in. Oracle's ODA and business process management R&D teams are also tightly integrated because of the overlap in functions." Oracle also offers out-of-the-box chatbot skills for Oracle Cloud HCM, Cloud ERP, and Cloud CX, as well as integration with Oracle CX Service, to speed up deployment and provide seamless engagement for Oracle Cloud Applications customers, a point that was noted as a strength in the Ovum report. With no apps to download and no training needed to use Oracle Digital Assistant, the use of intelligent assistants has picked up significantly in the industry. Over the past few years, more and more organizations, both public sector and commercial, have come to rely on Oracle Digital Assistant for their needs. Common use cases include enabling easy, 24x7 access to employee HR self-service functions and employee expense and finance functions; offering customer or employee FAQs and information lookup; and serving as the first line of the customer/employee helpdesk, with seamless bot-agent handoff only where needed. These use cases present significant savings and ROI opportunities, freeing up human resources to take on more complex challenges while at the same time improving the user experience. Oracle's leadership position in the Ovum report is a testament to the significant R&D investments made in this AI- and NLP-powered cloud service in recent years.
For more information on how your organization can leverage Oracle Digital Assistant, please visit our website. And to download the full report, click here.


Digital Assistant / Chatbot Development

TechExchange Quick-Tip: How-to Use Resource Bundles Defined In A Skill Within Custom Components Without Tying The Component To A Specific Skill

Oracle Digital Assistant skills provide resource bundles as a feature for skill developers to build multi-language bot responses, or simply to keep labels and prompts in a single place for ease of administration and management. Custom components that are uploaded to a skill don't have access to the skill's resource bundles, which has to do with how custom components communicate with a skill. The options skill developers currently have to provide translatable label strings and prompts in a custom component are:

Create custom message bundle functionality for the custom component. This way, custom components get deployed with their message translations, and all a skill developer needs to do is pass the detected or desired language code for the component to pick the correct language strings.
Pass resource bundle strings to be used as labels and prompts from the skill to the custom component, through input parameters the component developer creates.

The first option, creating custom message bundle functionality in a custom component, is the less popular choice among skill developers. Instead, the intention is to find a way to pass resource bundle strings for a specific language into the custom component. In this article I explain a strategy to pass resource bundle strings into a custom component without creating a strong dependency between the skill and the custom component. The implementation introduced in this article also allows developers to pass translations of a language string that match a detected user language.
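The second option, passing resource bundle strings into the custom component through input parameters, might look roughly like this on the BotML side. The component name and parameter names below are hypothetical, and the rb expressions assume the skill exposes its resource bundle to FreeMarker the same way it does for skill messages:

```yaml
greetUser:
  component: "sample.greeting"          # hypothetical custom component
  properties:
    # bundle strings are resolved in the skill, so the component only
    # ever receives plain, already-translated strings
    greetingLabel: "${rb('greeting')}"
    promptText: "${rb('askForName')}"
  transitions:
    next: "handleName"
```

Because the component receives plain strings, it carries no dependency on the skill's bundle keys, which is the decoupling the article's strategy aims for.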


Digital Assistant / Chatbot Development

TechExchange: How-to allow customers to provide feedback on the usefulness of answers to frequently asked questions

Sometimes a question is what it is: a question. In this case, it makes little sense for a bot to start a long conversation with the user. Instead, the bot should give a direct answer to the question. Answer intents are a recent addition to Oracle Digital Assistant and use the same machine learning model to understand the user question as regular intents do. With answer intents, Oracle Digital Assistant provides a very reliable and successful implementation of the question-answer use case. A common feature of web-based FAQ pages is that at the end of an answer the user has the opportunity to rate the quality of the answer and to provide feedback. Just recently, teams working with answer intents started requesting documentation for a similar implementation pattern for questions answered by Oracle Digital Assistant. In this article I provide a channel-independent sample implementation that behaves like answer intents in Oracle Digital Assistant but allows extensions for users to provide feedback, and for the answer response itself to optionally add channel-specific properties.
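A feedback prompt attached to an answer could be sketched in BotML along the following lines. This is a simplified illustration of the pattern rather than the article's actual sample: the state names, answer variable, and transition targets are made up.

```yaml
showAnswer:
  component: "System.CommonResponse"
  properties:
    processUserMessage: true
    metadata:
      responseItems:
      - type: "text"
        text: "${answer}"               # the resolved answer text
      - type: "text"
        text: "Was this answer helpful?"
        actions:
        - label: "Yes"
          type: "postback"
          payload:
            action: "helpful"
        - label: "No"
          type: "postback"
          payload:
            action: "notHelpful"
  transitions:
    actions:
      helpful: "thankUser"
      notHelpful: "collectFeedback"
```

Channel-specific properties, where needed, can then be added to the individual response items without changing the overall flow.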


The Role of Digital Assistants in a Time of Remote Work

By Suhas Uliyar, Vice President, AI and Digital Assistant

As the COVID-19 situation continues to unfold, if you are in the human resources or IT/operations division of your organization, you are likely being pushed to the forefront during this extraordinary time. Your division may be actively leading efforts to communicate with your workforce or users as they adjust to a new environment such as remote work, while also complying with ever-shifting policies and guidelines. Given the significant upheaval in the way organizations have to operate these days, there are some common questions that many IT and HR leads are trying to address, including: How can our organization scale and make it as easy as possible for our ecosystem as everyone learns to cope with the new normal? How can we best provide access to policies, guidelines, FAQs, transactions, and data when information is so dynamic? How can we deliver information in real time without employing more resources? While the more complex communication challenges will still need to be tackled by humans, a digital assistant may offer relief in some areas. For example, organizations may need to automate responses to the most basic queries so human minds can be freed up to deal with those more complex challenges. Enterprises and organizations may also need to enable more processes and transactions online and offer them in an easy-to-use medium, one that is easily accessible and intuitive. Meanwhile, organizations are having to reconfigure how they engage with their customers, contractors, and employees, and in the case of public sector organizations and educational institutions, citizens and students, respectively.
These various touch points include providing real-time, reliable information on health and safety guidelines; offering assistance in setting up a remote working environment; communicating up-to-date changes in policies; and enabling online self-service functions or access to relevant insights, information, and processes from within the organization’s systems.  Before COVID-19, AI-based chatbots or digital assistants were already changing the way we interact with our ecosystem – customers, employees, partners, citizens, and students. Enterprises had started to use digital assistants to provide 24x7 assistance to their stakeholders with self-service assistants for customer support; employee self-service across HR, ERP, CRM, and business intelligence systems; and vendors and partners for ERP self-service for quotes and invoice management.   A digital assistant can provide a consistent channel of communication and engagement in natural language text or voice, so users don’t have to learn an enterprise’s systems to interact or access information they need. 
Other benefits delivered by digital assistants include:

Providing a 24x7 virtual assistant that is always there for stakeholders
Offering users access to information on channels of choice like Slack, Microsoft Teams, Messenger, and more
Streamlining employee queries, since there's no waiting in line for the next available representative or help desk agent
Freeing up employees to focus on the more complex challenges and queries that only human minds can solve
Providing a natural way to access information and transactions across different backend systems, which promotes adoption of and adherence to processes and policies
Proactively notifying users of changes in data so they can remain informed
Eliminating data searches, since a digital assistant powered by AI can adapt to dynamic data changes so users don't have to search for data
Reducing costs associated with support operations via self-service and automation

As a result, digital assistants can help support the current needs of a remote workforce and concerned citizens, students, and customers while creating long-term efficiencies for your organization. For a more in-depth look into how your organization can derive quick value from a digital assistant in these uncertain times and the longer term, please contact us here.


Bridging the gap for remote workers through digital assistants

By Suhas Uliyar, Vice President, AI and Digital Assistant AI-based chatbots or digital assistants stand to change the way we interact with business applications, not just consumer ones. The main benefit is the ability to get immediate responses to queries via natural local language, without having to download apps or get training. While we have the freedom to engage in user-friendly experiences in our personal lives – such as Alexa and Siri – there have been few options for people in their professional lives. But that’s changing. As Steve Miranda, Oracle’s executive vice president of application development, remarked, “In HR, every common question or transaction has lent itself nicely to digital assistants. Within the next year, we will be calling HTML our ‘old UI.’ Every transaction you have will be through a digital assistant UI.” Work-at-home requirements associated with the spread of COVID-19 have made it all the more important to give employees easy access to ever-changing information – on company policies, insurance coverage, and public health guidance, in addition to the usual cadence of questions on vacation balances, status of expenses, and IT workarounds.  Here are a few key ways in which chatbots and digital assistants can help. An assistant for every employee Finding answers to simple questions can be a frustrating experience if there is no easy way to do so. Take, for example, basic questions like “how many vacation days do I have left?” or “what do I do if I have a change in marital status?” In some cases, employees need to log into their VPN to find the policy document or a web page, or the application – which they then need to further navigate to find answers to these straightforward questions. With a digital assistant, employees can simply speak the question out loud in a natural way or simply input the text, instead of having to navigate multiple screens or interfaces, and they will receive an immediate response. 
Not only that, the digital assistant can go further by recommending or taking action as a follow-up to the original interaction, acting as a true assistant for the employee. For example, rather than just informing the employee how to change their marital status, the digital assistant can actually trigger the change process by gathering the necessary information and then updating the relevant systems with it.

Answering general policy questions

With rapidly evolving governmental directives such as sheltering-in-place and social distancing, most organizations are quickly adapting their HR policies and guidelines. At the same time, employees need help and answers from their organizations more than ever. Questions range widely, from employment policies and travel guidelines to health and safety instructions and guidance on working during the pandemic. In some cases, the information is very dynamic and changes by the minute. Digital assistants give employees a consistent channel, available 24x7, to ask their questions and get an immediate response – while freeing up the HR and IT/support teams to manage the more complex challenges they are facing today. In fact, you can also use digital assistants to send proactive alerts and notifications, such as changes in policies, so that employees don’t need to keep checking or searching for the latest information time and again.

Supporting employee health and safety

Practicing social distancing has also had an impact on recruitment, onboarding, and training processes for organizations – processes that provide resources and support most organizations seriously need in these uncertain times. Using a digital assistant, businesses can drive candidates’ pre-screening and interview scheduling online, across any messaging channel.
You can drive virtual onboarding by enabling easy remote online access to relevant trainings, policies, and materials, all via a digital assistant. Data can also be safely recorded to keep track of employee health status based on the organization’s health policy and guidelines. A digital assistant can also save employees the time-consuming task of completing forms or reporting any health-related issues at work.

Employee self-service

Whether working remotely or on-site as needed, employees may need access to information and processes beyond just the HR systems. From submitting expenses to filing IT support tickets to changing travel plans, we touch a number of systems or applications as employees. Some processes even span multiple systems – for example, role- or location-based expense reimbursement policies, where the system requires role information from the HR system before interacting with the finance/ERP system for reimbursement. A digital assistant is one common interaction point for employees, contractors, or partners across multiple applications and can provide a quick, consistent, and concise response.

Leading an organization through this unprecedented time has put increased demand on the HR function. As a result, organizations would be wise to leverage AI-powered technologies such as digital assistants to scale their functions, create online connection and engagement, and provide dynamic updates on policies and safety guidance without bogging down human communication channels – which need to be available for essential tasks.
A digital assistant can support an organization by providing benefits such as:

- Lowering operational costs via online self-service and automation
- Expanding HR availability 24x7 across different channels
- Enabling easy access to information and processes delivered via text or natural language
- Delivering consistent information and maintaining employee engagement
- Enabling proactive HR outreach

Digital assistants can support the functions employees may need now while creating efficiencies for the long term. For more information, or to discuss how a digital assistant can support your needs, email us here. Stay well, and be safe.


Digital Assistant / Chatbot Development

TechExchange Quick-Tip: Exploring the Oracle Digital Assistant Test Suite For Automated Conversation Testing

article by Frank Nimphius, February 2020

Bot conversations in Oracle Digital Assistant are not sequential – in other words, many paths lead to the same result. The image below shows examples of user input and the expected outcome. Notice that "Please show me the menu", "I like to order pasta", "I like to order a pasta with bacon", and "I like to order a pasta with bacon and garlic" all lead to the same outcome: the confirmation of an order. Where the different user messages differ is in the number of states that are visited in the context of the conversation. To thoroughly test a skill in Oracle Digital Assistant, all possible conversation paths must be tried – and this for every change you apply to the dialog flow, and for every version or clone of a skill.

The good news is that Oracle Digital Assistant has introduced the first implementation of a test suite that allows you, at the skill level, to record a conversation in the embedded conversation tester, which can then be run repeatedly whenever needed. In this sense, a conversation is the unit of testing in Oracle Digital Assistant skills.

“Tell me and I will forget, show me and I may remember; involve me and I will understand.” ― Confucius

This article provides a starter skill, as well as simple steps you can follow to explore the test suite feature and learn how to use it. To follow along, please download the starter skill and ensure you have Oracle Digital Assistant 20.01 or later available. In Oracle Digital Assistant, import and open the starter skill (if you don't see the import, use the search field and type "Alfredo").

Before You Start

The test suite feature is not enabled by default. To enable it in Oracle Digital Assistant, select the "burger" menu icon and choose the Feature Management option in the Settings menu section. Change the profile list to Enable all and apply the changes.

Exploring the Test Suite

Navigate to the Pasta Alfredo skill and open it.
Press the conversation tester icon to launch the embedded skill tester. Notice (see image below) that the tester header contains links to access test cases and test results, as well as a link to create test cases by saving a conversation.

Start by typing I like to order pasta into the Message field and press the Enter key on your keyboard. You can navigate the card layout to see the pasta menu. To continue, type I want bacon pasta into the Message field and press the Enter key. It doesn't matter what the message text is, as long as it contains bacon pasta (the magic of entity extraction).

Note: Don't press the Buy button. At the time of writing, due to a known issue, using a postback action results in an invalid test case that won't be saved.

Next, type Cheese and garlic and press the Enter key. In the order confirmation, notice the date and time information. This will become relevant later when you test the conversation. Next, click the Save as Test Case button. Provide a name like the one shown in the image below and press the Save Conversation button.

Reset the conversation by clicking the Reset link in the header. Next, type I like to order bacon pasta and beef ragout. Notice in the image below that this user message displays a disambiguation dialog alerting the user that only one pasta can be ordered at a time. This dialog wasn't displayed in the first conversation. You can navigate the cards and notice that the card layout only contains two cards. Next, type Okay. If I can only have one, then beef ragout please. This message has "beef ragout" as a valid entity value in it (which is all that matters). Next, type I like cheese and oil into the Message field and press the Enter key. After the confirmation is displayed, click the Save as Test Case link and save the conversation as shown in the image below.

Important Note: Test cases don't need to be complete conversations that end with a return statement (transition).
You can, for example, stop the conversation in the tester at any time and save the test case. The last message displayed by the bot will then be the outcome against which the test case determines success or failure.

Next, select the Test Cases link. Notice that there are four test cases. Two of them are the ones you created when following the instructions in this article; the other two were part of the starter skill you imported. What you learn from this is that test cases belong to a skill and are exported and imported with the skill. This also means that when you version or clone a skill, those skills will contain the test cases too.

Note: Test cases can also be exported without exporting the whole skill. For this, you would exit the skill (navigating to the skill dashboard) and then use the menu icon at the bottom right of the skill tile. This way you can export test cases to import into other skills.

Notice that the conversation recorded for each test case is saved as a JSON payload. You may want to compare the payload of one of the test cases you created with the SMS test case already provided. You will notice differences in the component rendering expressed in the payload. So for a test case it also matters which channel simulation was used when recording the conversation.

Next, press the Run All Test Cases button to execute the tests. You can change the name of the job if you want. For this article, just keep it and press Submit. Give it a few minutes to process. Then click the Test Run Results link. Notice in the image below (and in your test run) that two of the four test cases failed.

Expand one of the failed test cases to learn about the problem. The problem in this case is the date and time information printed within the order confirmation (told you that this would become a problem). The reality is that you cannot rule out that content in a bot message changes.
If it wasn't the date and time, it could be a generated order ID that changes between test runs. One option to fix the problem would be to press the Apply Actual button to modify the test case. While this helps for infrequent changes, it does not for the date and time in this example. To solve the problem, and to fix the test case for good, click the test case name link.

The link navigates the UI to the test case definition. Scroll the Conversation field content to the bottom and find the CONFIRMATION entry with the date and time information. Replace the date and time information with a placeholder. The image below uses ${DATE} as a placeholder; you can, however, choose whatever name you want. Notice that when you move the cursor out of the Conversation field, the DATE placeholder is shown as a variable.

Next (important!), do the same for the other test case that failed; it too contains the date and time information. When you rerun the test cases with the applied changes, all tests succeed. If you switch to the test run results and still see tests in progress, refresh the browser page until you see the final results.

Summary

This article provided a brief overview of the test suite in Oracle Digital Assistant that you can use to record and replay conversations. Test cases are exported, versioned, and cloned with their skill. You run the test cases from the conversation tester, or from anywhere in the skill editor using the Run Tests button.

Note that a test case stops at the first difference in a conversation. If there are three user interactions in a conversation and the first one fails, then the test case stops there and does not continue testing the other two interactions. If you want to test different parts of a conversation, it is recommended to create multiple shorter test cases and, for example, use entity slotting to skip some states in between.
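To make the two ideas above concrete – placeholders such as ${DATE} that tolerate dynamic content, and test runs that stop at the first mismatching bot message – here is a minimal Python sketch of how such a comparison could work conceptually. This is an illustration only, not Oracle Digital Assistant's actual implementation; the function names and the plain-string message format are assumptions made for the example.

```python
import re

def to_pattern(expected: str) -> "re.Pattern":
    """Turn an expected bot message containing ${NAME} placeholders
    into a regex in which each placeholder matches arbitrary text."""
    # Escape the literal text first, then swap each (now escaped)
    # ${NAME} placeholder for a non-greedy wildcard.
    escaped = re.escape(expected)
    pattern = re.sub(r"\\\$\\\{\w+\\\}", ".+?", escaped)
    return re.compile("^" + pattern + "$", re.DOTALL)

def run_test_case(expected_messages, actual_messages):
    """Compare recorded and actual bot messages turn by turn.
    Stops at the first mismatch, mirroring the behavior described
    in the article; assumes both transcripts have the same length."""
    for i, (expected, actual) in enumerate(zip(expected_messages, actual_messages)):
        if not to_pattern(expected).match(actual):
            return (False, i)  # failed at turn i; later turns are not checked
    return (True, len(expected_messages))

# The ${DATE} placeholder absorbs the changing date/time information,
# so the order confirmation matches on every run.
ok, turns = run_test_case(
    ["Your pasta order is confirmed for ${DATE}."],
    ["Your pasta order is confirmed for 2020-02-14 10:32."],
)
```

Without the placeholder, the same comparison would fail on every run with a fresh timestamp, which is exactly the failure mode the Apply Actual button cannot fix permanently.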
Download Pasta Alfredo Starter Skill

Related Content

TechExchange Quick-Tip: How-to Test Apache FreeMarker Expressions in Oracle Digital Assistant
All 2-Minute Oracle Digital Assistant TechTip Videos on YouTube
TechExchange Quick-Tip: Understanding Oracle Digital Assistant Skill Entity Properties - Or, What Does "Fuzzy Match" Do?
