AI System Challenges
Organizations that adopt artificial intelligence (AI) may have policies and procedures designed to protect the AI systems themselves and the data they process and generate. There are several ways to address those information security goals, including monitoring AI system operations and performing regular security maintenance on AI systems.
Managing AI security tasks within an enterprise can be complex, especially in heterogeneous environments with diverse software platforms and specific or varied requirements for supporting critical business processes or types of data. However, strong alignment between an organization’s AI security objectives and its security practices and controls may be achieved with planning.
An AI System Security Plan Can Help
To help track and manage security tasks for AI systems, one possibility for organizations is to create a System Security Plan (SSP). NIST defines an SSP as a “formal document that provides an overview of the security requirements for the system and describes the security controls in place or planned for meeting those requirements” (NIST SP 800-12 Rev. 1).
SSPs can be tailored to address a variety of security needs. For example, these plans can be scoped to a single AI system or model, a few critical AI systems or models, or a group of AI systems or models. Once scoped, the SSP can be a tool to help define and communicate the security objectives and how they are managed.
Potential Benefits of an AI SSP
SSPs can help personnel make AI system decisions that improve security and implement their organization’s security requirements. For example, SSPs can support:
- Vulnerability management – identify systems impacted by vulnerabilities.
- Business continuity – enable resilience planning and disaster recovery.
- Efficient and consistent operations – identify and understand the AI you use.
- Insights – respond to leadership inquiries and requests for audit evidence.
To illustrate these possibilities, consider an organization seeking to deploy a new AI system as a monitoring tool to improve its risk management. IT and AI operations teams would typically review the AI system’s software inventory, network diagram, data description, and configuration when preparing the SSP, and, in turn, the plan can help them make more informed tooling and deployment decisions.
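To make the vulnerability-management benefit concrete, here is a minimal sketch of how the software inventories recorded in SSPs could be cross-referenced against a new advisory. All system names, package names, and version pins are illustrative assumptions, not real SSP data.

```python
# Hypothetical software inventories as they might appear in SSP appendices.
# Every name and version below is invented for illustration.
ssp_inventories = {
    "fraud-detection-model": {"torch": "2.1.0", "onnxruntime": "1.16.1"},
    "support-chatbot": {"transformers": "4.35.0", "torch": "2.0.1"},
    "risk-monitoring-tool": {"scikit-learn": "1.3.2"},
}

def systems_affected(package: str, vulnerable_versions: set[str]) -> list[str]:
    """Return the AI systems whose SSP inventory pins a vulnerable version."""
    return [
        system
        for system, inventory in ssp_inventories.items()
        if inventory.get(package) in vulnerable_versions
    ]

# A hypothetical advisory affecting torch 2.0.1:
print(systems_affected("torch", {"2.0.1"}))  # → ['support-chatbot']
```

Because the SSP already documents what each AI system runs, answering "which systems are impacted?" becomes a lookup rather than an investigation.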

Creating an AI SSP
To establish an AI SSP, leadership should first determine its scope. An example scope could be based on the business criticality of the processes an AI system supports. After selecting the scope, leadership should identify the systems and components the AI system will primarily use, as well as the indirect dependencies of those systems and components, as described in our previous blog on prioritizing business-critical risks. It is also beneficial to engage relevant stakeholders, including the business leaders and IT teams supporting the AI system. With stakeholders engaged, additional steps to help establish an SSP include:
- Communicate to personnel the scope of AI systems in the SSP.
- Choose a template, such as the one offered in NIST Special Publication 800-18, “Guide for Developing Security Plans for Federal Information Systems.”
- Define and implement processes for:
  - Gathering and maintaining associated system documents (e.g., network diagrams, data flow diagrams, and data ownership matrices).
  - Managing SSP changes to keep the AI systems and the SSP synchronized.
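The kind of record an AI SSP might capture can be sketched as a simple data structure. The field names below are our own assumptions, loosely inspired by the NIST SP 800-18 template, and are not a normative schema.

```python
from dataclasses import dataclass, field

@dataclass
class AiSystemSecurityPlan:
    """Illustrative subset of fields an AI SSP record might track."""
    system_name: str
    system_owner: str
    business_criticality: str            # e.g., "high", "moderate", "low"
    description: str
    components: list[str] = field(default_factory=list)      # systems/components used directly
    dependencies: list[str] = field(default_factory=list)    # indirect dependencies
    documents: dict[str, str] = field(default_factory=dict)  # document type -> location
    controls: list[str] = field(default_factory=list)        # controls in place or planned

# Hypothetical example entry:
plan = AiSystemSecurityPlan(
    system_name="risk-monitoring-tool",
    system_owner="IT Security",
    business_criticality="high",
    description="AI monitoring tool supporting risk management.",
    components=["model-serving cluster", "feature store"],
    dependencies=["identity provider", "log pipeline"],
    documents={"network diagram": "wiki/net-risk-tool", "data flow": "wiki/dfd-risk-tool"},
    controls=["access logging", "model input validation"],
)
print(plan.system_name, plan.business_criticality)
```

In practice the SSP is a formal document rather than code, but structuring its contents this way can make it easier to query, validate, and keep synchronized with the systems it describes.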
Maintaining an AI SSP
SSPs should be dynamic documents. To address updates over time, organizations may want to set a time frame or triggers for periodic review and include an appendix with the history of updates that were made, tracking the date and details of changes.
Potential Downsides of an AI SSP
Creating an AI SSP is not without risk. For example, it can introduce additional cost and require operational effort and time to maintain the documentation as AI stacks evolve rapidly. Overly strict or misaligned controls may create complexities that stifle innovation, and conflicts may arise in an organization whose existing policies and processes are already robust or mature enough to enable teams to manage the unique security risks that AI systems present.
Conclusion
There is no one-size-fits-all approach to managing AI system security. An AI SSP is one possibility to consider, with the key being that it can help provide clarity on who does what, when, and how. That logical framework can provide transparency and create accountability, which is why some organizations have begun to explore using them to meet their information security goals. Check out the resource list below to evaluate whether an SSP may be beneficial for your organization and, if so, to build and maintain one.
Additional Resources
- NIST Special Publication 800-18, Guide for Developing Security Plans for Federal Information Systems
- NIST Special Publication 800-171, Protecting Controlled Unclassified Information in Non-federal Systems and Organizations
- FedRAMP System Security Plan Required Documents
- NIST AI Risk Management Framework
- Cybersecurity & Infrastructure Security Agency Best Practices Guide
- CISA Best Practices for Securing Data Used to Train & Operate AI Systems
- Cloud Security Alliance AI Controls Matrix and AI security and risk management publications

