AI is becoming a new layer of enterprise infrastructure, powering how organizations build software, run operations, and deliver outcomes in complex and often regulated environments around the world. As that happens, one thing becomes non-negotiable: AI governance must scale. Instead of a series of ad hoc checklists, AI governance must run on a repeatable management system that can keep pace with technology, policy, regulation, and risk. ISO/IEC 42001 for Artificial Intelligence Management Systems (AIMS), developed with Oracle’s active participation, is the leading global standard for establishing robust AI governance in this continuously evolving environment.
I’m pleased to share an important milestone in that direction. Oracle America, Inc. has obtained ISO/IEC 42001:2023 certification from Schellman Compliance, LLC for multiple AIMS spanning key Oracle services:
- Oracle Cloud Infrastructure (OCI) AI services AIMS (including Data Science, Document Understanding, Generative AI, Generative AI Agents, Language, Speech, and Vision)
- Oracle Health Services AIMS, with Oracle operating in the roles of AI Provider and AI Producer
- Oracle Life Sciences Services AIMS, including Oracle Clinical Suite, Oracle Remote Data Capture, Argus, Clinical One Suite, and Argus Cloud
- Oracle SaaS on OCI AIMS, supporting software-as-a-service (SaaS) application development, engineering, and operations for Oracle Fusion Cloud Applications (including the Fusion family of services, EPM, and CX Cloud services)
- NetSuite AIMS for NetSuite SaaS, NetSuite SuiteProjects Pro Professional Services Automation (PSA) SaaS, NetSuite Connectors Cloud Services, and NetSuite CPQ (Configure, Price, Quote) Configurator Cloud Services
Why ISO/IEC 42001 matters now
We view ISO/IEC 42001 as the global state-of-the-art standard for AI governance, setting rigorous, internationally recognized benchmarks for how organizations establish and run an AI management system. It’s designed for the real world: complex systems, evolving requirements, and the need for continuous improvement.
For customers, this kind of certification is valuable because it provides independent validation that AI governance is institutionalized, with accountable ownership, lifecycle discipline, risk management, and operational controls designed to keep working as AI capabilities and the broader landscape evolve.
How does this reflect Oracle’s approach to AI?
Governance throughout the AI lifecycle: Our AI systems are developed and deployed under a governance framework that applies from planning through implementation and into operations. New AI systems undergo early technical review to assess factors such as intended use, training data, accuracy, fairness, transparency, privacy, security, and human control—helping identify benefits, implications, and mitigations from the outset.
Security and privacy as core design requirements: Responsible AI is inseparable from enterprise-grade security and privacy. Our AI policies and practices are designed to work alongside Oracle’s broader security, privacy, and compliance program expectations and contractual commitments. A key part of this is applying established internal assurance mechanisms, such as Oracle’s Corporate Security Solution Assurance Process (CSSAP), to provide structured security review prior to production deployment, including for AI systems.
Risk management and data management that support trust: We focus on risk across the full AI context: the data used to train proprietary systems, the third-party models we make available, and how models are integrated into products and services. We apply robust practices to support data availability, suitability, quality, and integrity across collection, training, and testing. We also take corrective measures when our evaluations identify potential bias or harmful content in our proprietary AI systems.
Quality assurance and post-deployment monitoring: As with all Oracle products and services, Oracle AI systems are subject to rigorous quality assurance to establish performance, reliability, and alignment with intended use. We also monitor deployed AI systems consistent with our practices and applicable requirements to support operations, help address threats to security and integrity, and enable users to report potential risks and incidents.
Documentation and transparency: Our documentation practices cover early lifecycle technical reviews, customer-facing specifications, and other information required by law or contract. Where Oracle services incorporate third-party models, model providers often publish documentation about model design, training, and testing, and we reference that information alongside Oracle’s own service documentation.
Compliance with law, globally: In keeping with ISO/IEC 42001 requirements, we monitor regulatory and standards developments and update our AI policies and practices accordingly. We also do not deploy AI systems that are prohibited by law, such as those prohibited under Article 5 of the EU AI Act.
What does the ISO/IEC 42001 certification mean to Oracle’s customers and partners?
If you’re a customer building or operating AI in regulated environments, this certification can support your governance and assurance workflows. It provides a common, internationally recognized point of reference for evaluating how AI is managed operationally across development and deployment. If you need the formal certification artifacts and details for your due diligence, you can access our certifications from our auditor or contact your Oracle account team for the appropriate documentation.
Closing
AI will keep accelerating. Expectations from regulators, boards, and customers will too. Oracle is committed to delivering trustworthy AI that is grounded in governance, validated through independent audits, and supported by the operational rigor that our customers’ mission-critical workloads demand.
Oracle is a global leader in standards development, with active participation in numerous international and regional standards organizations. Through this engagement, we help shape emerging frameworks for AI, security, privacy, and cloud technologies, working collaboratively with industry, governments, and standards bodies to advance interoperable, trustworthy, and practical approaches to innovation. For more information, please visit our standards webpage.
