Generative AI (GenAI) has been heralded as one of the most important technologies of our time. But that doesn’t mean all customers are ready to jump in with both feet, or even know where to start. Many still have questions about how GenAI works in Fusion Apps and how it fits into their enterprise, so here are our responses to the most common ones.

Data privacy and security 

One of the primary concerns about GenAI is the potential to compromise privacy, specifically by sharing data with a large language model (LLM) provider, whether training data (used to train the model) or inferencing data (the information the model is given at runtime to generate responses). Our unique approach eliminates this possibility and gives customers assurance: Fusion Apps do not send any data, training or inferencing, to a model provider. Similarly, nothing about GenAI has changed our commitment to prevent one customer from seeing or having access to another customer’s data.

Hallucinations and human control 

One of the high-profile concerns about GenAI is the risk of hallucinations. Because GenAI uses a large language model and mathematical probabilities to predict the next word based on its training data, there is an inherent risk that a response will contain information that looks plausible but is not accurate in context. While this risk is intrinsic to GenAI and can never be fully eliminated, we are taking several steps to help Fusion Apps customers mitigate it, including:

  • Human-in-the-loop: Broadly speaking, we are focused on using AI to help people do their jobs better. Users will know when we have added a GenAI capability into a workflow (e.g., to create a job description), and they will have the choice to use it or not, and to edit and approve any proposed content. In essence, we leverage our customers’ greatest strength—their employees—and empower them to always remain in control of GenAI. 
  • Testing: We perform a wide array of pre-deployment testing of Fusion Apps, including their GenAI capabilities. Post-deployment, we also monitor the applications and GenAI capabilities to ensure we meet our delivery commitments to customers and to identify areas for enhancement.
  • LLM updates: LLMs receive updates over time, which may include improvements to the model and training on new data sets. We update Fusion Apps with the latest models to help improve performance and give customers an innovation advantage. 
  • Prompt engineering: The way instructions or questions to an LLM (i.e., prompts) are structured can influence the likelihood of hallucinations. Clear, specific, and well-structured prompts help produce more contextually relevant responses, and we are continuing to invest in prompt assembly (i.e., prebuilt, purpose-specific prompts) to save customers time and improve accuracy; a simplified sketch of the idea follows this list.
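
For readers curious what prompt assembly can look like in practice, here is a minimal, illustrative sketch. It is not Oracle code: the template wording, function names, and the generic call_llm stub are hypothetical stand-ins for whatever LLM interface an application uses. The point is simply that a prebuilt, purpose-specific prompt slots structured business data into clearly labeled fields and instructs the model to rely only on that data, which helps reduce the chance of plausible-sounding but inaccurate output.

```python
# Illustrative sketch only: a prebuilt, purpose-specific prompt template.
# The template text, field names, and the call_llm stub are hypothetical.

JOB_DESCRIPTION_TEMPLATE = """You are an HR assistant drafting a job description.
Use ONLY the facts provided below. If a detail is missing, leave a placeholder
such as [TO BE CONFIRMED] rather than inventing information.

Job title: {title}
Department: {department}
Key responsibilities: {responsibilities}
Required qualifications: {qualifications}

Write a concise job description (150-250 words) based solely on these facts."""


def assemble_prompt(title: str, department: str,
                    responsibilities: str, qualifications: str) -> str:
    """Fill the prebuilt template with structured business data,
    so the user never has to write a prompt from scratch."""
    return JOB_DESCRIPTION_TEMPLATE.format(
        title=title,
        department=department,
        responsibilities=responsibilities,
        qualifications=qualifications,
    )


def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint the application uses."""
    raise NotImplementedError("Wire this to your model provider of choice.")


if __name__ == "__main__":
    prompt = assemble_prompt(
        title="Payroll Analyst",
        department="Finance",
        responsibilities="process bi-weekly payroll; reconcile tax filings",
        qualifications="3+ years payroll experience; knowledge of US tax rules",
    )
    print(prompt)              # The assembled prompt sent to the model
    # draft = call_llm(prompt) # A human then edits and approves the draft
```

Because the template is fixed and the user’s data appears in labeled fields, a reviewer can see exactly what the model was asked to do, which also supports the human-in-the-loop step described above.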

In summary

Ultimately, while AI has been transforming the world of work for decades, GenAI is ushering in a new era. Companies must focus on maximizing its value and minimizing its risks, and no vendor is better positioned to help them do so than Oracle, with GenAI-infused Fusion Apps. We have been building, deploying, and managing AI in Fusion Apps for more than a decade, and we remain committed to incorporating AI and GenAI capabilities throughout our portfolio to help customers solve real-world business problems.

Get more information

If you’re an Oracle customer and want to get new stories from The Fusion Insider by email, sign up for Oracle Cloud Customer Connect. If you’re an Oracle Partner and want to learn more, visit the Oracle Partner Community.