With AI evolving at a lightning pace, a whole new vocabulary is quickly becoming part of everyday life. Some of these acronyms and concepts can be confusing, however. So this article defines common terms to help you speak confidently in conversations about AI and better understand the technology behind it, including the embedded GenAI capabilities and AI agents in your Oracle Fusion Cloud Applications. The terms are grouped by theme rather than listed alphabetically, so related concepts sit together.
I. Background terms
Artificial intelligence (AI): AI refers to the simulation of human intelligence in machines that are programmed to think and learn from experience. AI systems can perform tasks like speech recognition, decision-making, and language translation.
AI model: An AI model is the output of a complex computer program or algorithm that has been trained on a dataset to learn patterns, make predictions, or generate content. The capabilities acquired through this training allow it to perform specific intelligent tasks without being explicitly programmed for every possible scenario.
Training data: Training data refers to the dataset used to teach an AI model so it can perform real-world tasks. For generative AI, this includes vast amounts of text, images, audio, or other data types, from which the model learns to identify patterns and generate new content.
Probabilistic model: When given the exact same input multiple times, a probabilistic model can produce a different output each time. Many GenAI and large language models (see below) behave probabilistically. Other terms for these models are stochastic or non-deterministic.
Deterministic model: Once trained or programmed, deterministic models always produce the exact same output given identical inputs. They include diagnostic systems that run on “if-then” rules and algorithms that use fixed formulas to make predictions.
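The contrast can be sketched in a few lines of code. Below, a deterministic “if-then” rule always returns the same answer for the same input, while a probabilistic function samples its output from a distribution, so repeated calls can differ. Both functions and their values are made-up illustrations, not real models.

```python
import random

# Deterministic model: identical input always yields identical output.
def rule_based_credit_check(income, debt):
    """A simple 'if-then' rule: approve when debt is under 40% of income."""
    return "approve" if debt < 0.4 * income else "decline"

# Probabilistic model: the output is sampled from a distribution, so
# repeated calls with the same input can differ (as with many GenAI models).
def sample_next_word(probabilities):
    """Pick the next word according to its predicted probability."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(words, weights=weights)[0]

print(rule_based_credit_check(50_000, 10_000))  # always "approve"
print(sample_next_word({"cat": 0.6, "dog": 0.3, "fish": 0.1}))  # varies run to run
```

Running the second function several times with the same dictionary illustrates why LLM answers to the same prompt can vary.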
Generative AI (GenAI): GenAI refers to a system that creates new content, such as text, images, or music, by learning patterns from existing data. It doesn’t just classify or analyze data but can generate original outputs based on predicting what it thinks the next bit of content should be.
Natural language processing (NLP): NLP is a field of AI that focuses on enabling machines to understand, interpret, and generate human language.
Neural networks: Neural networks are a type of machine learning model inspired by the human brain. They consist of interconnected layers of nodes (sometimes referred to as artificial neurons) that process data. Deep learning is a subset of machine learning that uses neural networks with many layers (hence “deep”); these deep neural networks are often used in GenAI models.
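A single artificial neuron is simple enough to show directly: it multiplies each input by a weight, adds a bias, and passes the sum through an activation function. The weights below are arbitrary illustrative values, and real networks stack thousands of such neurons into many layers.

```python
import math

# A minimal sketch of one artificial neuron: weighted inputs plus a bias,
# passed through a sigmoid activation that squashes the result into (0, 1).
# The weights and bias here are made-up illustrative values.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

output = neuron(inputs=[0.5, 0.8], weights=[0.9, -0.4], bias=0.1)
print(output)
```

Training a network means adjusting those weights and biases, across every neuron in every layer, until the outputs match the patterns in the training data.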
AI agents: AI agents are autonomous software applications created by combining large language models (see below) with other advanced technologies. They interact with their environments, make decisions, learn, and adapt their behavior over time. This makes them suitable for automating complex tasks in changing conditions and collaborating with humans and other agents in real time.
II. Terms related to large language models and generative AI
Large language model (LLM): LLMs are a type of GenAI model that is trained on massive amounts of text data to process, interpret, and generate human-like text. These models, like Meta’s Llama, Cohere’s Command, and OpenAI’s GPT-4, are capable of tasks like text generation, summarization, translation, and more. They learn language patterns, grammar, and contextual meaning by processing vast amounts of text, enabling them to provide coherent responses to user inputs.
Generative pre-trained transformer (GPT): A GPT is a type of LLM with a transformer-based architecture. This architecture excels at handling sequential data like language and weighing the importance of different words in a sentence while processing them. OpenAI coined the term “GPT,” but models from Google, Anthropic, Meta, and others also use the underlying transformer architecture.
Token: A token is a unit of text—such as a word, a part of a word, or punctuation—that LLMs use to understand and generate content. GenAI breaks down sentences into tokens to analyze and generate text in a meaningful way.
Pricing for AI services is commonly based on the number of tokens used in both the input (prompt) and output (response) for each interaction. Tokens are directly tied to the complexity of the work being done, so more advanced models and more complicated tasks require more tokens.
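The arithmetic behind token-based pricing can be sketched as follows. Real tokenizers split text into subword units rather than whole words, and the per-token prices below are hypothetical numbers invented for the example, not any vendor’s actual rates.

```python
# A rough illustration of token counting and token-based pricing. Real LLM
# tokenizers split text into subword units; the whitespace split and the
# per-1,000-token prices here are simplifying assumptions.
def rough_token_count(text):
    return len(text.split())  # crude stand-in for a real tokenizer

PRICE_PER_1K_INPUT = 0.50   # hypothetical dollars per 1,000 prompt tokens
PRICE_PER_1K_OUTPUT = 1.50  # hypothetical dollars per 1,000 response tokens

def estimate_cost(prompt, response):
    """Total cost = input tokens at the input rate + output tokens at the output rate."""
    cost = (rough_token_count(prompt) / 1000 * PRICE_PER_1K_INPUT
            + rough_token_count(response) / 1000 * PRICE_PER_1K_OUTPUT)
    return round(cost, 4)

print(estimate_cost("Summarize this contract in plain language.",
                    "The contract sets payment terms of 30 days."))
```

Because output tokens are often priced higher than input tokens, long generated responses tend to dominate the bill.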
Retrieval-augmented generation (RAG): RAG is a technique that pairs information retrieval with GenAI models: the system first retrieves relevant external data from sources such as documents or databases, then a GenAI model incorporates that retrieved information into a coherent, informative response. This helps improve accuracy and relevance, particularly when up-to-date or factual content is required.
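The retrieve-then-generate flow can be shown in miniature. This sketch ranks documents by simple keyword overlap and stubs out the generation step; production RAG systems use vector embeddings for retrieval and a real LLM for generation, and the policy snippets here are invented examples.

```python
# A minimal RAG sketch: retrieve the most relevant snippet by keyword
# overlap, then hand it to a (stubbed) generator. Real systems use vector
# embeddings and an actual LLM; both are simplified away here.
DOCUMENTS = [
    "Expense reports must be filed within 30 days of travel.",
    "New hires receive laptops on their first day.",
    "Vacation requests need manager approval two weeks in advance.",
]

def retrieve(query):
    """Return the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(DOCUMENTS, key=lambda d: len(q_words & set(d.lower().split())))

def generate_answer(query, context):
    """Stand-in for an LLM call that grounds its answer in the retrieved context."""
    return f"Based on company policy: {context}"

question = "When are expense reports due?"
print(generate_answer(question, retrieve(question)))
```

Grounding the model in retrieved text is what lets RAG answer questions about content the model never saw during training.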
III. Terms related to AI optimization and oversight
Prompt: A prompt is the input or instruction provided to a GenAI model to trigger content generation. It directs the model to perform a certain task or generate specific information.
Prompt engineering: Prompt engineering refers to the process of designing and crafting input prompts that effectively guide an AI model to generate the desired output. By adjusting the wording, context, or constraints in the input, users can control the quality, accuracy, and relevance of the model’s response.
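A small before-and-after makes the idea concrete. Both prompts below are invented examples; the engineered version adds a role, an output format, and constraints, which are the levers prompt engineering typically adjusts.

```python
# Prompt engineering in miniature: the same request, first vague, then
# refined with a role, a format, and constraints. Both prompts are
# illustrative strings that could be sent to any LLM chat interface.
vague_prompt = "Tell me about our sales."

engineered_prompt = (
    "You are a financial analyst. "                            # role
    "Summarize Q3 sales performance in three bullet points, "  # task and format
    "focusing on year-over-year growth, and keep each bullet "
    "under 20 words."                                          # constraints
)

for name, prompt in [("Vague", vague_prompt), ("Engineered", engineered_prompt)]:
    print(f"{name}: {prompt}")
```

The vague prompt leaves the model to guess the audience, scope, and format; the engineered prompt pins all three down, which typically yields a more useful response.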
Hallucination: A hallucination is false information generated by an LLM and presented as if it were fact. It occurs because models are designed to predict the most plausible next word or pattern rather than verify factual accuracy. Strategies for minimizing hallucinations include improving the quality of training data, using advanced model architectures (see retrieval-augmented generation and fine-tuning), crafting more specific prompts (see prompt engineering), and including human oversight.
Observability: Observability is the ability to monitor, analyze, and understand the internal workings and performance of AI models in real-world environments. It involves collecting and examining data about an AI system’s behavior, allowing teams to trace decisions, identify issues, and ensure that models are delivering reliable and expected results.
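One common building block of observability is a thin wrapper that records latency, input and output sizes, and failures for every model call. This is a minimal sketch with a stubbed model function; real deployments ship these metrics to a monitoring platform rather than the console.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

# A minimal observability sketch: wrap a model call to record latency,
# input/output sizes, and errors. The echo model below is a stand-in
# for a real LLM call.
def observed_call(model_fn, prompt):
    start = time.perf_counter()
    try:
        response = model_fn(prompt)
        logging.info("ok latency=%.4fs prompt_chars=%d response_chars=%d",
                     time.perf_counter() - start, len(prompt), len(response))
        return response
    except Exception:
        logging.error("model call failed for prompt of %d chars", len(prompt))
        raise

def echo_model(prompt):
    return prompt.upper()  # stand-in for an LLM

print(observed_call(echo_model, "hello"))
```

With every call logged this way, teams can trace which prompts are slow, which fail, and whether response sizes drift over time.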
Fine-tuning: Fine-tuning refers to the process of further training a pre-trained generative AI model on a specific dataset to make it more specialized for certain tasks, such as generating industry-specific content.
If you’re an Oracle customer and want to get new stories from The Fusion Insider by email, sign up for Oracle Cloud Customer Connect. If you’re an Oracle Partner and want to learn more, visit the Oracle Partner Community.