I’m excited to announce that the Oracle Digital Assistant Platform Version 20.12 is now being rolled out across all our OCI data centers. 


Unified Multi-lingual NLU

20.12 represents a fundamental shift not only for the ODA Platform but for the Conversational AI space in general. With 20.12, we are introducing a unified multi-lingual NLU model for key languages in Europe and the Middle East. This means customers no longer have to build a separate digital assistant for each language, nor do they have to rely on a translation service to power their Intent Classification and Entity Recognition.

While multi-lingual language embeddings have become mainstream in the past couple of years, ODA is the first Conversational AI platform to introduce active cross-lingual usage with Few-Shot and Zero-Shot NLU training models for both intents and entities. Few-Shot training means that if you want to add Arabic support to your digital assistant, you do not have to recreate all your training data in Arabic; you may be able to add only a fraction of the training data in Arabic to the same digital assistant and get reasonable accuracy. In fact, you may even get your digital assistant to recognize some Arabic utterances without adding any training data at all; that's the Zero-Shot training model. As always, please assess your accuracy goals for each use case and test thoroughly to determine how much more training data you need for your intents and entities. To support this unified multi-lingual NLU, we have introduced other features across the platform, such as built-in language detection, multi-lingual retraining in Insights, multi-lingual testing, and ICU resource bundles for complex multi-lingual outputs.
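
To get an intuition for why cross-lingual transfer works at all, here is a minimal, illustrative sketch using the open-source sentence-transformers library. It is not ODA's implementation; the model name, intents, and utterances are assumptions for the example. The point is simply that intents trained only with English utterances can still match an Arabic utterance, because both languages land in one shared embedding space.

```python
# Minimal sketch of zero-shot cross-lingual intent matching with a
# multilingual sentence-embedding model. This is NOT how ODA works
# internally; it only illustrates why an intent trained in one language
# can transfer to another. Intents and utterances are made up.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# English-only training utterances, grouped by intent.
training = {
    "CheckBalance": ["What is my account balance?", "Show me my balance"],
    "TransferMoney": ["Send money to my savings account", "Transfer $100 to John"],
}

# Represent each intent by the mean embedding of its training utterances.
centroids = {
    intent: np.mean(model.encode(utts), axis=0)
    for intent, utts in training.items()
}

def classify(utterance: str) -> str:
    """Return the intent whose centroid is most cosine-similar to the utterance."""
    vec = model.encode([utterance])[0]
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda intent: cos(vec, centroids[intent]))

# An Arabic utterance ("What is my account balance?") can match an intent
# that was trained only in English, because both sentences land near each
# other in the shared multilingual embedding space.
print(classify("ما هو رصيد حسابي؟"))  # -> CheckBalance (typically)
```

Few-Shot training is the same idea taken one step further: a handful of Arabic utterances added to those intents would pull the centroids toward the Arabic phrasing your users actually use, which is why a fraction of the original training data can go a long way.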


Enhanced Speech

Speech recognition has been an integral part of the ODA Platform ever since we introduced Oracle Voice in December 2019. The base Speech models are already fine-tuned for enterprise usage to recognize terms such as EBITDA, GAAP, and KAD (Key Account Director at Oracle). These models use context and statistical inference to correctly resolve ambiguous utterances (e.g. whether the user said 'Gap' or 'GAAP'). With 20.12, we are introducing Enhanced Speech models, which customize Speech for your digital assistant. These models get transparently trained in the background when you train your digital assistant; there are no extra Speech recordings to make and no phoneme lists to add. Enhanced Speech picks up all your static and dynamic entities, which means you can inject new entity values on the fly into the Speech and NLU models for your digital assistant. With Enhanced Speech, your digital assistant will now automatically recognize custom product and organization names (e.g. NeuraLink) and hard-to-master words (e.g. my last name!).
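
As a rough illustration of what on-the-fly value injection can look like from the outside, here is a hedged sketch of pushing new values to a dynamic entity over REST. The endpoint path, payload shape, and auth details below are assumptions for illustration only; check the Dynamic Entities section of the ODA REST API documentation for the exact contract of your instance.

```python
# Illustrative sketch: pushing new values into a dynamic entity so that
# both NLU and Enhanced Speech can pick them up on the fly. The endpoint
# path, payload shape, and auth scheme are ASSUMPTIONS for illustration;
# consult the ODA Dynamic Entities REST API docs for the real contract.
import requests

ODA_HOST = "https://<your-oda-instance>"   # placeholder
BOT_ID = "<bot-id>"                        # placeholder
ENTITY_ID = "<dynamic-entity-id>"          # placeholder
TOKEN = "<oauth-access-token>"             # placeholder

payload = {
    # Hypothetical shape: values to add, each with a canonical name and synonyms.
    "add": [
        {"canonicalName": "NeuraLink", "synonyms": ["Neura Link"]},
        {"canonicalName": "X200 Router", "synonyms": ["X200"]},
    ]
}

resp = requests.post(
    f"{ODA_HOST}/api/v1/bots/{BOT_ID}/dynamicEntities/{ENTITY_ID}/pushRequests",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Push request accepted:", resp.json())
```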


Data Manufacturing with Active Learning

Deep learning models typically require a lot of diverse training data. While our core ML models often help you generalize with less effort, we understand that there's no substitute for sourcing good training data, especially if you are creating a highly domain-specific digital assistant. Data manufacturing and crowd-sourcing platforms currently enable developers to gather some training data, but that exercise often feels disjointed and hard to synchronize with skill development. With 20.12, we are introducing an integrated data manufacturing capability within the ODA Platform. Now you can quickly select a few intents within your skill and kick off a paraphrasing job within your organization. More importantly, if you have a ton of chat transcripts, you can kick off an annotation job that leverages Active Learning to suggest potential intents from your skill for a given chat utterance; the crowd worker simply has to pick one of the suggestions or propose a new one, and that choice is propagated back into the Active Learning model.
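
For intuition, here is a minimal sketch of the general active-learning loop (uncertainty sampling), not ODA's implementation: a model trained on the currently labeled utterances proposes intents for unlabeled transcript lines, and the least-confident lines are surfaced to workers first. All names and data below are made up for the example.

```python
# Minimal sketch of an active-learning annotation loop: a model trained
# on the labeled utterances suggests likely intents for each unlabeled
# transcript line, and the least-confident lines are routed to workers
# first. Illustrates the general technique only, not ODA's internals.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = [
    ("what's my balance", "CheckBalance"),
    ("show account balance", "CheckBalance"),
    ("send 50 dollars to mom", "TransferMoney"),
    ("transfer funds to savings", "TransferMoney"),
]
unlabeled = ["move a hundred bucks over", "how much do I have", "wire money abroad"]

texts, intents = zip(*labeled)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

proba = model.predict_proba(unlabeled)   # shape: (n_utterances, n_intents)
confidence = proba.max(axis=1)           # top-class probability per utterance

# Least-confident utterances are the most informative ones to annotate next.
for idx in np.argsort(confidence):
    top2 = np.argsort(proba[idx])[::-1][:2]          # top-2 intent suggestions
    suggestions = [model.classes_[t] for t in top2]
    print(f"{unlabeled[idx]!r}: suggest {suggestions} (conf={confidence[idx]:.2f})")

# A worker's choice gets appended to `labeled` and the model is retrained,
# closing the loop: every annotation improves the next round of suggestions.
```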


Additionally, 20.12 comes packed with other cool features such as Group Chat support (yes, multiple users can participate in a single conversation thread with the digital assistant), Intelligent Advisor integration, DA Retraining, custom OCI Functions, and more. We hope you upgrade your skills to 20.12 soon, and we look forward to your feedback!