The Compliance Paradox
Defense contractors face a problem that most industries do not. You need AI to stay competitive, win contracts, and operate efficiently. But the compliance frameworks governing your work were not designed with AI in mind, and a misstep does not just cost money. It can cost your facility clearance, your contracts, and your ability to bid on future work.
I spent years in defense-tech AI, including time at CACI and building systems at Prometheus Intelligence. The contractors who succeed with AI are the ones who treat compliance as an architecture requirement from day one, not a box to check after deployment. This article lays out how to do that.
CMMC 2.0 Levels and What They Mean for AI
The Cybersecurity Maturity Model Certification framework defines three levels under CMMC 2.0, and each one constrains your AI options differently.
- Level 1 (Foundational): 17 practices based on FAR 52.204-21. Annual self-assessment with affirmation. At this level, you can use most commercial AI tools as long as they do not process Federal Contract Information (FCI) outside of controlled environments. Basic access controls and authentication requirements apply.
- Level 2 (Advanced): 110 practices aligned with NIST SP 800-171. Triennial third-party assessment for most programs handling CUI; a limited subset may self-assess. This is where AI tool selection gets serious. Any AI system touching Controlled Unclassified Information (CUI) must operate within your certified boundary. Cloud-based AI services must be FedRAMP authorized or equivalent.
- Level 3 (Expert): 110+ practices with additional controls from NIST SP 800-172. Government-led assessment. AI systems at this level typically require on-premises or air-gapped deployment. The attack surface of cloud-based AI is generally incompatible with Level 3 requirements without significant additional controls.
The critical question for any AI implementation is: what data will the AI system process, and what CMMC level governs that data? Answer that first. Everything else follows from it.
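That gating question can be made mechanical. The sketch below encodes it as a lookup, assuming you maintain a data classification inventory; the data types, names, and level assignments here are hypothetical placeholders, not a real inventory.

```python
from enum import IntEnum

class CmmcLevel(IntEnum):
    """CMMC 2.0 levels, ordered so a higher certification subsumes a lower one."""
    FOUNDATIONAL = 1  # FCI only
    ADVANCED = 2      # CUI
    EXPERT = 3        # highest-sensitivity programs

# Hypothetical inventory: each data type mapped to the level that governs it.
DATA_GOVERNANCE = {
    "marketing_copy": CmmcLevel.FOUNDATIONAL,
    "contract_deliverable": CmmcLevel.ADVANCED,       # contains CUI
    "weapon_system_telemetry": CmmcLevel.EXPERT,
}

def tool_may_process(tool_certified_level: CmmcLevel, data_type: str) -> bool:
    """An AI tool may process a data type only if its certified boundary
    meets or exceeds the CMMC level governing that data."""
    return tool_certified_level >= DATA_GOVERNANCE[data_type]
```

A Level 2 certified tool would pass for `contract_deliverable` but be refused `weapon_system_telemetry`; the point is that the check runs before any data reaches the model, not after.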
FedRAMP Considerations for Cloud-Based AI
If you plan to use any cloud-based AI service, FedRAMP authorization is not optional for CUI workloads. The Federal Risk and Authorization Management Program establishes the security baseline for cloud services used by federal agencies and their contractors.
In practice, this means your options for cloud-based AI are limited to providers operating in FedRAMP-authorized environments. AWS GovCloud, Azure Government, and Google Cloud's FedRAMP-authorized regions are the primary options. Most commercial AI APIs, including the standard endpoints for major language model providers, are not FedRAMP authorized and cannot be used for CUI processing.
The workaround many contractors attempt, routing data through a FedRAMP-authorized intermediary, only works if the entire data path maintains the required authorization level. One hop through a non-authorized service invalidates the chain.
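The chain rule is simple to state in code: every hop must hold the authorization, and one failure invalidates the path. The service names below are illustrative, not real FedRAMP listings.

```python
def path_is_authorized(hops: list, authorized_services: set) -> bool:
    """The data path holds the required authorization only if EVERY hop does.
    A single non-authorized hop invalidates the entire chain."""
    return all(hop in authorized_services for hop in hops)

# Hypothetical authorization inventory for illustration.
AUTHORIZED = {"govcloud-gateway", "govcloud-inference", "govcloud-storage"}
```

So `["govcloud-gateway", "govcloud-inference"]` passes, while inserting a single `"commercial-api"` hop anywhere in that list fails the whole path, which is exactly how an assessor will read your data flow diagram.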
ITAR Restrictions on AI Training Data
The International Traffic in Arms Regulations add another layer of complexity. ITAR-controlled technical data cannot be used to train AI models unless the training infrastructure and the resulting model are maintained within ITAR-compliant boundaries. This applies to fine-tuning, retrieval-augmented generation (RAG) systems, and even prompt engineering that embeds ITAR data into context windows.
The practical implication: if your AI workflow involves any ITAR-controlled data, the entire pipeline from data ingestion through model inference must reside within ITAR-compliant infrastructure. Using a commercial AI service and pasting ITAR data into a prompt is an ITAR violation, full stop. It does not matter that the service claims not to retain your data.
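One way to enforce this is to gate every outbound inference call, so a compliant pipeline fails closed rather than relying on developer discipline. This is a minimal sketch; the function and flags are hypothetical, and in production the ITAR determination would come from your data classification system rather than a boolean argument.

```python
class ItarBoundaryError(RuntimeError):
    """Raised when ITAR-controlled data would leave the compliant boundary."""

def submit_prompt(prompt: str, contains_itar_data: bool,
                  endpoint_itar_compliant: bool) -> str:
    """Gate every outbound inference call. If the context window embeds
    ITAR-controlled technical data, the endpoint must sit inside the
    ITAR-compliant boundary -- vendor 'no data retention' claims change nothing."""
    if contains_itar_data and not endpoint_itar_compliant:
        raise ItarBoundaryError("ITAR data may not leave the compliant boundary")
    return f"submitted {len(prompt)} chars"  # placeholder for the real API call
```

The design choice worth copying is that the guard sits in the call path itself, not in a policy document: the violation becomes an exception instead of an incident report.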
CUI in AI Workflows
Controlled Unclassified Information handling in AI systems requires clear boundaries around what data enters the AI pipeline, what the model does with it, and where outputs are stored.
What is generally permitted:
- Processing CUI through AI systems within your certified CMMC boundary
- Using AI for analysis, summarization, or classification of CUI when the system meets NIST 800-171 controls
- Storing AI-generated outputs that derive from CUI within the same controlled environment
What is not permitted:
- Sending CUI to any AI service outside your certified boundary
- Using commercial AI APIs that process data on shared infrastructure for CUI workloads
- Allowing AI-generated content derived from CUI to leave the controlled environment without proper marking and handling
- Training or fine-tuning models on CUI data using non-compliant compute resources
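The permitted/not-permitted rules above reduce to a small predicate you can run at request time. This is a simplified sketch under the assumption that each request carries its own compliance attributes; the `CuiRequest` shape is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CuiRequest:
    contains_cui: bool
    destination_in_boundary: bool  # destination inside the certified CMMC boundary?
    meets_800_171: bool            # destination system meets NIST 800-171 controls?
    output_marked: bool            # derived outputs carry proper CUI markings?

def cui_request_permitted(req: CuiRequest) -> bool:
    """Non-CUI requests pass. CUI may only be processed inside the certified
    boundary, on systems meeting 800-171, with outputs marked and handled."""
    if not req.contains_cui:
        return True
    return req.destination_in_boundary and req.meets_800_171 and req.output_marked
```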
Architecture Patterns That Work
Based on the compliance landscape, three architecture patterns consistently satisfy defense AI requirements:
On-Premises Deployment
Running AI models on your own hardware within your facility clearance boundary. This gives you maximum control over data residency and eliminates cloud authorization concerns. The tradeoff is cost: GPU-capable infrastructure is expensive, and you own the maintenance burden. Best for Level 3 requirements and ITAR workloads.
GovCloud Deployment
Deploying AI workloads in FedRAMP High-authorized government cloud regions. This balances scalability with compliance. You get cloud economics without the authorization gap. Best for Level 2 requirements with CUI processing. Ensure your specific AI services, not just the underlying cloud infrastructure, carry FedRAMP authorization.
Air-Gapped Deployment
Fully isolated networks with no internet connectivity. Required for certain classified workloads and some Level 3 scenarios. AI models must be pre-trained and transferred via approved media. No cloud connectivity, no external API calls, no model updates without physical access. This is the most restrictive and expensive option, but it eliminates entire categories of risk.
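The decision logic across these three patterns can be sketched as a simple triage function. The thresholds mirror the discussion above but are illustrative only; the real call belongs to your assessor and counsel, not a helper function.

```python
def recommend_pattern(cmmc_level: int, itar_workload: bool,
                      air_gap_required: bool) -> str:
    """Map compliance drivers to a deployment pattern, most restrictive first.
    Illustrative triage only -- not a substitute for an assessment."""
    if air_gap_required:
        return "air-gapped"          # classified / isolated-network scenarios
    if itar_workload or cmmc_level >= 3:
        return "on-premises"         # full control of data residency
    if cmmc_level == 2:
        return "govcloud"            # FedRAMP-authorized cloud regions
    return "commercial-cloud"        # FCI-safe tools only
```

Note the ordering: the function checks the most restrictive driver first, because a single ITAR data type or air-gap requirement overrides everything else in the stack.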
Building an Audit-Ready Implementation Plan
Your AI implementation plan needs to survive a Defense Contract Management Agency (DCMA) review. That means documentation at every stage. Here is the framework we use:
- Data classification inventory. Document every data type your AI system will touch. Map each to its CUI category, ITAR status, and applicable CMMC level. This is your foundation.
- Architecture decision record. Explain why you chose your deployment pattern. Reference the specific NIST controls it satisfies. Include network diagrams showing data flow boundaries.
- Access control matrix. Define who accesses the AI system, at what privilege level, and through what authentication mechanism. Role-based access control (RBAC) is the minimum. Map each role to NIST 800-171 access control requirements.
- Encryption documentation. AES-256 at rest, TLS 1.3 in transit, implemented with FIPS-validated cryptographic modules as NIST 800-171 requires, plus key management procedures. Document the entire key lifecycle. This is table stakes for any CMMC assessment.
- Incident response procedures. What happens if the AI system produces or leaks CUI? Define detection, containment, notification, and recovery procedures specific to your AI workflows.
- Continuous monitoring plan. Automated logging, anomaly detection, and regular access reviews. Your AI system generates audit logs just like any other system in your boundary.
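For the monitoring piece, one pattern worth considering is structured audit records that hash prompts instead of storing them, so the log itself never replicates CUI. This is a minimal sketch; the field names are assumptions, not a DCMA-mandated schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, role: str, data_types: list,
                 model_id: str, prompt: str) -> str:
    """Emit one JSON audit log line per inference call. The prompt is
    hashed, not stored, so logs can live outside the CUI data store
    while still supporting access reviews and incident forensics."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "data_types": sorted(data_types),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)
```

The hash still lets an investigator confirm whether a specific known prompt was submitted, without the log becoming a second copy of the controlled data.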
Real Considerations
Data residency is not just about which country your data is in. It is about which specific facility, which network segment, and which personnel have access. Document all of it.
Encryption at rest and in transit is mandatory, but do not forget encryption in use. If your AI model decrypts CUI to process it, that processing environment must be within your certified boundary with appropriate controls.
Access controls must extend to the AI system itself. Model weights, training data, configuration files, and inference logs all require the same protection as the CUI they process.
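A cheap, automatable check along these lines is sweeping AI artifacts for POSIX modes that grant read access beyond the owner. The artifact names and modes below are hypothetical; in practice you would read modes with `os.stat` across your actual deployment.

```python
import stat

# Hypothetical inventory of AI artifacts and their current POSIX modes.
ARTIFACT_MODES = {
    "model.safetensors": 0o600,
    "train_data.parquet": 0o640,
    "inference.log": 0o644,
}

def overly_permissive(artifact_modes: dict) -> list:
    """Flag artifacts whose mode grants read access to group or other.
    Weights, training data, configs, and inference logs need the same
    access restrictions as the CUI they process."""
    return sorted(
        path for path, mode in artifact_modes.items()
        if mode & (stat.S_IRGRP | stat.S_IROTH)
    )
```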
Vendor management matters. If you use a third-party AI platform, that vendor becomes part of your supply chain risk assessment. Their compliance posture directly affects yours.
In defense AI, compliance is not a constraint on innovation. It is the architecture requirement that separates contractors who keep their clearances from those who lose them.
