By Eric Lansbery - COO

Leveraging generative AI and machine learning can offer huge productivity gains – even for organizations handling sensitive Controlled Unclassified Information (CUI) and Federal Contract Information (FCI).

However, embedding AI into processes in a CMMC Level 2 environment introduces new risks that must be managed carefully to remain compliant. CMMC 2.0 Level 2 requires full implementation of NIST SP 800-171 controls to protect CUI/FCI, and it emphasizes strict safeguards when using cloud services.

See the Seiso Note “Building Trust and Compliance: How Seiso Supports Customers Pursuing CMMC Level 2.”

This article provides a high-level strategy to secure the use of AI in corporate settings under CMMC L2, aligning with AI security frameworks and proven cloud security best practices. We’ll explore how to govern AI risk, enforce compliance requirements, and protect sensitive data – all while enabling innovation responsibly.

Balancing AI Innovation with CMMC L2 Compliance

CMMC Level 2 compliance is non-negotiable for Defense Industrial Base companies handling CUI. It mandates adhering to 110 security controls (NIST SP 800-171) across areas like access control, audit logging, and incident response. While the CMMC requirements don’t explicitly mention “AI”, any AI platform used to process, store, or transmit CUI falls under cloud usage rules – specifically 32 CFR Part 170, which requires using FedRAMP-authorized (Moderate baseline) cloud services for CUI. In practice, an AI SaaS or cloud service is treated as a CSP, so it must be FedRAMP Moderate (or equivalent) if it touches CUI. This means most consumer-grade AI tools are off-limits for CUI data: uploading CUI to a non-FedRAMP public AI is a direct compliance violation of DFARS contract clauses and, by extension, CMMC itself.

Meanwhile, organizations are eager to harness AI for efficiency – from code generation and document summarization to predictive analytics. This creates tension between innovation and strict compliance. Key risk factors include:

  • Data exposure: Many generative AI tools send input data and telemetry back to providers for model training or analytics. If CUI/FCI is input to an unsecured AI, it may be logged or stored in uncontrolled servers, undermining confidentiality. Even metadata leakage can violate policies.
  • Unvetted cloud services: “Free” or public AI services provide no contractual privacy or security assurances. They often run on public infrastructure and are not bound to meet DFARS or NIST 800-171 controls. No-cost AI = no compliance in this context.
  • User error in segregation: Some companies try to segregate “non-sensitive vs. sensitive” AI use (e.g. letting marketing staff use public AI for non-CUI data, while restricting engineers to an internal AI for CUI). Human error and pressure can easily blur those lines. Without rigorous technical barriers and monitoring, a single lapse could become a reportable incident.
  • AI model risks: Aside from data handling, AI models introduce new risks. Generative models can produce inaccurate or biased outputs if not properly managed. There’s potential for data bias, hallucinations, or misuse of AI outputs, which can lead to poor decisions or even compliance issues (e.g. an AI incorrectly exposing sensitive info). Adversaries might try prompt injection or data poisoning to manipulate model behavior. CMMC L2’s scope on system security implicitly covers these integrity concerns, but they need specialized mitigations.

The mandate is clear: to use AI in a CMMC L2 environment, security and compliance must be baked in from the start. In practice, this means only using AI solutions within a secure boundary, thoroughly assessing AI-related risks, and extending your compliance controls to cover AI activities.

Companies will need to balance AI adoption with a “trust but verify” stance – ensuring every AI use case is evaluated for risk and aligned with CMMC requirements.

Applying AI Risk Management Frameworks to CMMC Level 2 Compliance

Effective management of AI risk is essential for organizations seeking to deploy AI solutions while maintaining strict adherence to CMMC Level 2 requirements. By using industry-recognized frameworks from bodies like the Cloud Security Alliance (CSA) and NIST, organizations can extend their compliance programs to cover the unique risks introduced by AI workloads, ensuring that CUI and FCI remain protected throughout the AI lifecycle.

  • Cloud Security Alliance – AI Controls Matrix (AICM): The CSA AICM, released in July 2025, provides a structured set of 243 control objectives across 18 domains, specifically designed to address AI security and governance. For organizations operating in a CMMC L2 environment, the AICM offers actionable controls in domains such as Identity & Access Management, Data Security & Privacy, Audit Logging, and Model Security that map directly to NIST SP 800-171 requirements. This mapping ensures that by implementing AICM controls, organizations reinforce their compliance posture – covering both traditional IT and AI-specific threats like sensitive data disclosure, model manipulation, and data poisoning. In practice, this means organizations can demonstrate to auditors that their AI environments are governed by the same rigorous standards as their other CUI systems, preventing governance gaps and data leaks.
  • CSA AI Model Risk Management Framework (MRM): The MRM introduces practical tools for documenting and mitigating model risks, such as model cards, data sheets, and risk scenario planning. In the context of CMMC L2, these practices support the requirements for transparency, traceability, and accountability. For example, model cards provide evidence of model provenance and intended use, while risk cards help document known vulnerabilities and incident responses. This documentation is invaluable when demonstrating compliance during CMMC assessments, as it shows proactive management of AI risks and supports audit logging and incident response controls.
  • Trustworthy AI Principles: CSA’s focus on attributes like robustness, explainability, controllability, and transparency aligns closely with CMMC’s emphasis on system integrity, access control, and auditability. By embedding these principles into AI development and deployment, organizations ensure that their AI workloads remain secure, auditable, and under human oversight—key requirements for handling CUI and passing CMMC evaluations.
  • NIST AI Risk Management Framework (AI RMF): Though CSA’s frameworks are the primary focus, the NIST AI RMF complements them by outlining a lifecycle approach to AI risk management (Map, Measure, Manage, Govern). When used together, NIST provides the overarching process, while CSA delivers detailed control checkpoints. This integrated strategy ensures continuous risk identification and mitigation for AI workloads, bolstering compliance with CMMC’s ongoing risk management and governance mandates.

See the Seiso Note “Securing the Future: AI Strategy Meets Cloud Security Operations.”

In summary, adopting AI-specific risk management frameworks such as CSA’s AICM and Model Risk Management Framework, alongside NIST’s AI RMF, enables organizations to systematically secure AI workloads within CMMC Level 2 environments. These frameworks help translate high-level CMMC controls into concrete, actionable measures for AI, ensuring that sensitive data is protected, risks are documented and mitigated, and compliance is demonstrable across both traditional IT and emerging AI platforms.

Architecting a Secure AI Environment (Cloud and Infrastructure)

One of the most critical decisions is where and how your AI systems run. In CMMC L2 settings, environment architecture must ensure that AI tools never become a conduit for data spillage and that they meet the same security bar as the rest of your IT.

1. Keep AI Within the Authorized Boundary: All CUI processing must stay inside your accredited IT environment, using FedRAMP Moderate (or higher) infrastructure. Use government-only cloud AI services (e.g., Azure OpenAI in Azure Government, AWS Bedrock in GovCloud), which are designed for sensitive data and meet compliance requirements. If using in-house or private cloud AI, apply the same security controls as other CUI systems—strict segmentation, access controls, encryption, and no outbound internet connections unless approved.

  • Government Cloud AI: Choose only FedRAMP Moderate or DoD IL4/IL5 authorized services in U.S.-only regions (a minimal client sketch follows this list).
  • Private/On-Prem AI: Run models in secured, segmented environments with no unvetted external connections.
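As a concrete illustration of the government-cloud option above, here is a minimal sketch of calling an Azure OpenAI resource from the openai Python package, assuming a deployment in an Azure Government region. The endpoint, API version, and deployment name are illustrative placeholders, not prescriptive values.

```python
# Minimal sketch: chat completion against an Azure Government OpenAI resource.
# Assumptions: openai>=1.0 installed; the resource name, endpoint suffix
# (*.openai.azure.us), API version, and deployment name are placeholders to
# replace with your organization's approved, FedRAMP-authorized values.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-gov-resource.openai.azure.us",  # stays inside the authorized boundary
    api_key=os.environ["AZURE_OPENAI_API_KEY"],                  # pulled from an approved secrets store
    api_version="2024-02-01",                                    # hypothetical; use your resource's supported version
)

response = client.chat.completions.create(
    model="approved-deployment-name",  # the deployment provisioned in the government region
    messages=[{"role": "user", "content": "Summarize the attached (non-CUI) onboarding checklist."}],
)
print(response.choices[0].message.content)
```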

2. Avoid Unapproved AI Services: Do not use consumer or public AI tools (e.g., ChatGPT, Google Bard) for any CUI or FCI. Only use FedRAMP-authorized AI platforms. If experimentation is needed, use sanitized data in separate sandbox environments—never sensitive data. Enforce this rule with both policy and technical controls.

3. Segment and Control Access: Isolate AI workloads that handle sensitive data. Implement network segmentation, restrict access to trained personnel, require multi-factor authentication, and use RBAC. Ensure clear separation between sensitive and non-sensitive AI use—preferably with different applications or endpoints and automated technical safeguards to prevent accidental CUI leakage to public AI services.
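To make the access-control point concrete, below is a minimal sketch of role-based gating in front of an internal AI endpoint. The role names, claims structure, and the mfa_verified flag are illustrative assumptions; in practice these checks would be enforced by your identity provider and access proxy.

```python
# Minimal sketch: gate access to a CUI-handling AI workload by role and MFA status.
# The role names and claim fields below are hypothetical examples, not a standard schema.
ALLOWED_AI_ROLES = {"cui-ai-analyst", "cui-ai-engineer"}


def authorize_ai_request(user_claims: dict) -> None:
    """Raise PermissionError unless the caller is in an approved role and MFA-verified."""
    roles = set(user_claims.get("groups", []))
    if not roles & ALLOWED_AI_ROLES:
        raise PermissionError("User is not in an approved role for the CUI AI workload.")
    if not user_claims.get("mfa_verified", False):
        raise PermissionError("Multi-factor authentication is required for AI access.")


# Example claims as they might arrive from an identity provider token after MFA.
authorize_ai_request({"groups": ["cui-ai-analyst"], "mfa_verified": True})
```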

4. Secure Data Handling and Storage: Encrypt all data in transit and at rest. Control access to training data, outputs, and logs—treat logs as sensitive since they may hold CUI. Use secure log management and monitoring and turn off any AI telemetry or cloud connections that are not routed within the compliant boundary.
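One way to treat AI interaction logs as sensitive, as described above, is to encrypt each record before it is written to disk. The sketch below assumes the cryptography package and a key sourced from an approved key management service (generated inline here only for illustration).

```python
# Minimal sketch: write AI prompt/output records as encrypted log entries.
# Assumption: in production the Fernet key comes from your KMS/secrets manager,
# and the log path sits on an approved, access-controlled volume.
import json
import time

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; load from key management in practice
cipher = Fernet(key)


def write_encrypted_log(user: str, prompt: str, output: str, path: str = "ai_audit.log.enc") -> None:
    record = json.dumps({"ts": time.time(), "user": user, "prompt": prompt, "output": output})
    with open(path, "ab") as f:
        f.write(cipher.encrypt(record.encode()) + b"\n")


write_encrypted_log("jdoe", "Summarize this week's meeting notes", "...")
```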

5. Manage Supply Chain and Third-Party Risk: Ensure all vendors and AI providers meet NIST 800-171 or equivalent standards. Verify model sources and integrity, keep AI dependencies updated, and coordinate compliance with all involved partners and suppliers. Only share CUI-laden AI outputs with compliant partners.
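For the model-integrity point, a simple control is to verify downloaded model artifacts against a vendor-published checksum before loading them. The file path and digest below are placeholders.

```python
# Minimal sketch: refuse to load a model file whose SHA-256 digest does not match
# the checksum published by the vetted source. Path and digest are placeholders.
import hashlib


def verify_model(path: str, expected_sha256: str) -> None:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"Checksum mismatch for {path}; do not load this model.")


# verify_model("models/approved-llm.safetensors", "<vendor-published sha256>")
```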

By building AI within a secure, compliant architecture and enforcing strict controls, you minimize the risk of data leaks and maintain CMMC Level 2 compliance by default.

Governance, Policies, and Controls for AI Usage

Technical controls alone won’t guarantee compliance; strong governance and user awareness are equally important. A misconfigured model or an uninformed employee could still cause an incident. Thus, organizations should update their cybersecurity governance to explicitly cover AI.

1. Update Policies – “AI Acceptable Use”: Develop an Artificial Intelligence Acceptable Use Policy that clearly delineates how employees may or may not use AI tools. This policy should:

  • Prohibit entering any CUI, FCI, or other sensitive business data into unauthorized AI systems (and list examples of such systems).
  • Require that only approved, compliant AI solutions are used for company data.
  • State that all AI usage may be monitored and logged for security purposes.
  • Address intellectual property and data ownership: clarify that outputs from AI are company property if generated in the course of work (and ensure the AI tool’s terms of service don’t claim otherwise).
  • Emphasize confidentiality: remind users that even queries to AI about seemingly harmless info can inadvertently include sensitive context.

Have all relevant staff read and sign this policy (or acknowledgement). This sets out a baseline of expectations and can be shown to auditors as evidence of governance.

2. Security Awareness and Training: Extend your security training program to include awareness about AI risks. Users need to be trained on:

  • How to identify CUI/FCI and the rule that it must not be shared with unauthorized systems. Given that humans are poor at consistently spotting what qualifies as CUI, provide clear examples and, where possible, data-labeling tools to help.
  • Proper use of approved AI tools: For example, if you’ve deployed an internal AI chatbot, train users on its interface, what types of data they can input, and how outputs should be handled. If outputs contain CUI, users should know how to mark and store them appropriately (just as they would an email containing CUI).
  • Dangers of Shadow AI: Educate about the risks of signing up for free AI services or browser extensions. Make it clear that using such tools for work content is against policy and can lead to disciplinary action (and why it’s a risk to the company).
  • Incident reporting: Encourage staff to immediately report if they accidentally pasted something sensitive into an external AI, or if they notice any AI system behaving oddly (e.g. an output that seems to include data it shouldn’t). A quick report can allow the security team to perform damage control (such as requesting data deletion from a vendor) before it escalates.

3. Document AI in the System Security Plan (SSP): The SSP is a living document in CMMC that details your system boundaries, components, and how controls are implemented. If you integrate an AI system (say an AI SaaS or a new AI server) into the environment, update the SSP to include it. Document:

  • The architecture of the AI solution (boundary, network diagrams).
  • How it inherits or implements required controls (for each relevant NIST 800-171 control, note if the AI system falls under existing measures or needs more).
  • That a security review was completed before the organization deployed or began using the AI system.
  • Any additional risks found and how you mitigate them.
  • Third-party services involved and their compliance status (e.g., “Uses Azure OpenAI in GCC High region, FedRAMP Moderate authorized” – include that detail).

A well-documented SSP showing AI usage proves to assessors that you’ve proactively included AI in your security program rather than treating it as a loophole. It also helps internal alignment – everyone (IT, security, compliance, procurement) will be on the same page about approved AI activities.

4. Map and Implement Controls for AI: Extend your existing security controls to cover AI workflows. Most of the CMMC (NIST 800-171) control families remain applicable – you simply need to interpret them in the AI context.

Here’s a brief mapping of key CMMC control families and how they apply to AI systems:

  • Access Control (AC): Restrict AI system use to authorized users only. Enforce RBAC and MFA for any cloud AI portals or internal AI apps. Disable or tightly control guest/anonymous access to AI interfaces.
  • Audit & Accountability (AU): Log all AI interactions and administrator actions, including what was prompted to the AI and what output was returned (a minimal logging and screening sketch follows this list). Retain these logs for incident investigations and periodic review. Use alerts to flag, say, large volumes of data being input to the AI or outputs containing certain keywords (potential data leakage).
  • Configuration Management (CM) and System & Communications Protection (SC): Secure the AI system configuration to an approved baseline. For cloud AI, review all configurable settings (e.g., data retention OFF, data encryption ON, telemetry OFF). For self-hosted AI, ensure the underlying OS, libraries, and model files are managed under change control. Network-wise, isolate the AI – no unapproved external connectivity (e.g., prevent it from calling external APIs). Encrypt data in transit between AI components and clients.
  • Media Protection (MP): Treat AI outputs as sensitive data. If an AI generates a document or recommendation that contains CUI, that output must be marked and handled per data handling procedures (e.g., stored only on approved drives, not emailed externally without encryption). Implement measures to automatically tag or classify AI-generated files if they are likely to include input data. If the AI allows file uploads or downloads, ensure those files are stored securely.
  • System Integrity (SI): Maintain the integrity of AI models and prompts. Use mechanisms to filter or validate inputs to the AI to block malicious payloads or obvious CUI leaks (for instance, some AI platforms let you define “forbidden” patterns, or you can use DLP solutions to intercept sensitive content). Monitor the AI’s outputs for anomalies that could indicate tampering (e.g., sudden gibberish output might hint at a corrupted model). Regularly update models with security patches or improved versions to fix vulnerabilities.
  • Incident Response (IR): Update incident response plans to include AI-specific scenarios. For example: how will you respond if an employee accidentally leaks CUI to an external AI? (Plan to notify the proper channels, possibly invoke DFARS reporting if required, and work with the AI provider to purge the data.) What if the AI produces a seriously incorrect output that was acted upon? (Treat it as a quality incident or near-miss and analyze the root cause in the model or data.) Prepare playbooks for AI data spillage, model misuse, or model compromise events. Include the AI team in tabletop exercises.
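As a minimal illustration of the Audit & Accountability and System Integrity items above, the sketch below logs every interaction and blocks prompts that carry obvious CUI markings before they reach an AI endpoint. The marking patterns and logger configuration are illustrative assumptions, not a substitute for a dedicated DLP engine.

```python
# Minimal sketch: screen prompts for obvious CUI markings and log every interaction.
# The patterns below are simplistic examples; a real deployment would rely on a
# dedicated DLP engine with organization-specific rules.
import logging
import re

logging.basicConfig(filename="ai_interactions.log", level=logging.INFO)
log = logging.getLogger("ai_gateway")

CUI_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"\bCUI\b", r"CONTROLLED UNCLASSIFIED", r"\bITAR\b")]


def screened_prompt(user: str, prompt: str) -> str:
    """Return the prompt only if it passes screening; log the interaction either way."""
    if any(p.search(prompt) for p in CUI_PATTERNS):
        log.warning("Blocked prompt from %s: possible CUI marking detected", user)
        raise ValueError("Prompt appears to contain CUI markings; use the approved CUI workflow.")
    log.info("user=%s prompt_chars=%d", user, len(prompt))
    return prompt  # hand off to the approved AI client only after screening
```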

By systematically going through each control family and considering its application to AI, you integrate AI into your overall security control environment. The key is to show that no control gap exists. If anything, AI might require augmenting controls (like DLP for prompts) to address its unique risks.

5. Continuous Monitoring and Enforcement: Once controls and policies are in place, continuously monitor for compliance:

  • Automated DLP/Monitoring: Use data loss prevention tools and CASBs to detect and block unauthorized AI usage, such as attempts to send data to external AI services like ChatGPT. Monitor network and browser logs for activity on known AI service URLs to detect “shadow AI” (a minimal log-scanning sketch follows this list).
  • Audit AI Logs: Regularly review logs from approved AI systems to catch disallowed prompts or inadvertent disclosure of internal data, helping to spot misuse or data leaks.
  • Model Monitoring: Track AI model performance and drift, especially for models affecting security. Address issues promptly to avoid mistakes or improper data release.
  • Update Controls: Frequently reassess and update controls to keep pace with evolving AI technology and threats, following guidance from CSA, NIST, and other authorities.

6. Plan for Future Regulations: The regulatory landscape for AI is still taking shape (EU AI Act, etc.), but the spirit of CMMC – protecting sensitive data – will remain constant. By implementing the above practices, you’re not only compliant with today’s rules but building a foundation to meet future AI-specific regulations. Keep an eye on CMMC updates too – while current CMMC guidance doesn’t detail AI, future revisions might incorporate more AI-specific language. Being proactive now is a competitive advantage, as trust is becoming a key differentiator for organizations using AI.

Conclusion: Enabling Trustworthy AI Under CMMC Compliance

Adopting generative AI and ML in a CMMC Level 2 environment is possible – but it requires a strategic, security-first approach. For CISOs and Cloud Security Engineers, the mission is to embed trust and compliance into every facet of AI integration. By leveraging frameworks like CSA’s AI Controls Matrix and Model Risk Management guidance, you gain a blueprint to address AI risks holistically – from technical vulnerabilities to governance and ethics. Coupling that with CMMC’s rigorous controls ensures that the same protections applied to your traditional systems also shield your AI systems from compromise or misuse.

In summary, companies should:

  • Establish a compliant AI environment – use only vetted platforms or isolated infrastructure that meet government security standards, and lock them down to prevent data leakage.
  • Institute strong AI governance – clear policies, user training, and senior leadership support to ensure AI is used responsibly and with oversight.
  • Integrate AI into your security program – extend existing NIST 800-171 controls to AI and adopt AI-specific controls for new threat vectors (data poisoning, model misuse, etc.).
  • Continuously manage AI risk – perform risk assessments before deploying AI, monitor usage and outcomes, and be prepared to respond to incidents in this new domain.

By doing so, you protect CUI/FCI while still reaping AI’s benefits. The Cloud Security Alliance notes that trust is the foundation for responsible AI – in a CMMC environment, trust comes from knowing your AI is secure and compliant by design.

Organizations that achieve this will be able to confidently innovate with AI, maintaining their competitive edge without jeopardizing their obligations. In the high-stakes world of defense contracting, the ability to use cutting-edge AI securely and meet CMMC requirements will distinguish the leaders from the laggards. Finally, keep engaging with industry efforts as they evolve; they will help you stay aligned with best practices and demonstrate to stakeholders – from DoD customers to board members – that your AI implementations are not only smart, but also safe, compliant, and worthy of trust.