The Most Secure AI Data Centers Are Not That Secure
The U.S.’ current security frameworks weren't built for the moment we are in.
23 March 2026
By Scott J Mulligan || Published stories in MIT Technology Review, VICE News, and AI Frontiers. Previously an investor in AI & robotics at Rainfall Ventures.
AI systems are becoming pivotal for military operations. From the recent Venezuela raid to strikes in Iran, the U.S. has been relying on advanced AI systems to make strategic decisions.
AI’s strategic value in military operations makes these systems a prime target for U.S. adversaries, whether through sabotage, espionage, or outright theft of models from data centers.
If an adversary wants to get their hands on model weights (the “brain” of a model), they have plenty of tools in their belt. Given the new technical challenges that come with AI-specific data centers, experts argue it’s possible for an adversary to steal the U.S.’ most advanced AI systems, or to gain strategic insight into them.
Last July, the White House included building high-security data centers in its AI Action Plan as a top priority. But what it would take for that security to hold against nation-state adversaries remains an open question.
There are ideas on the table. One proposed framework out of RAND defines five security levels. Security Level 1 (SL1) is the weakest and entails protection from solo novice hackers. The highest level, Security Level 5 (SL5), would entail a data center capable of preventing China or another highly motivated, highly capable state actor from stealing or sabotaging an AI system. It essentially treats the AI data centers that handle the most advanced models like other national security assets that require the strongest protections.
“SL5 is the security that you need to be protected against top priority Chinese operations,” says Sella Nevo, founding director of the RAND Center on AI, Security, and Technology, who proposed the SL5 standard in his paper, Securing AI Model Weights. “Ones where you know Xi Jinping is tracking them and they are one of the top things that he has asked his intelligence community to go after.”
One looming problem: as of today, there are no specifications for what an SL5 data center would look like. The standard will most likely keep evolving alongside the technology it is meant to protect, and it may never be a single rigid ruleset, since the capabilities of states are ever changing as well.
And although SL5 is not strictly defined, no data center operating today would plausibly meet it. That is, no data center today could withstand espionage or model-weight theft by a dedicated, well-resourced nation-state adversary.
Anthropic does not publicly map its security posture to RAND’s levels, but its stated safeguards appear aimed at defending against sophisticated non-state actors, roughly consistent with SL3. It is unclear where the other companies stand.
Experts believe even the most tightly controlled classified data centers under an agency like the NSA, operating under SCI, IL6, and ICD 503 standards, fall short of SL5. These systems were built for traditional secure computing, not for the unique risks introduced by large-scale AI training.
The implication is that all AI systems today, including those used for sensitive national security purposes, are vulnerable, and the U.S. is not on a clear path to securing them.
Current standards weren't designed to address the specific issues AI data centers raise. A recent report by Bennett Tomlinson, then at the Foundation for American Innovation (FAI) and now at the Center for AI Standards and Innovation (CAISI), breaks down several of those issues at the infrastructure level.
For example, the report details how AI systems move massive amounts of data at very high speed, which makes that data extremely hard to track within a facility and necessitates new guardrails.
As an analogy, imagine you’re trying to defend money in a bank. Existing frameworks entail guards at the door, big walls, and cameras. That approach has worked pretty well.
But in our new AI-enabled bank, you have infinite money flying everywhere inside the bank at five hundred miles an hour. If someone were to steal one billion dollars out of that flurry, it wouldn’t make a splash, and we wouldn’t even know it happened.
In this new AI-enabled bank, normal behavior starts looking identical to theft. We can’t rely on the old way of detecting fraud, flagging large or unusual money movements, because large, unusual movements are happening nonstop. Instead, security has to shift to managing who is authorized to do what, and treat that layer as mission critical.
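To make the scale problem concrete, here is a toy sketch, using entirely hypothetical numbers, of why a classic volume-anomaly alert never fires in an AI facility: a theft-sized transfer is statistically invisible inside normal traffic.

```python
# Toy illustration (not a real detection system) of the bank analogy above:
# when baseline data movement is enormous and noisy, a theft-sized transfer
# never stands out statistically. All numbers are hypothetical.
import random

random.seed(0)

# Hypothetical baseline: a cluster shuffles ~500 TB/hour of checkpoints,
# activations, and datasets, with heavy natural variance (std ~120 TB).
baseline_tb = [random.gauss(500, 120) for _ in range(24 * 30)]  # a month of hours

# An attacker exfiltrates a full set of model weights (~2 TB) in one hour.
theft_hour = random.gauss(500, 120) + 2.0

mean = sum(baseline_tb) / len(baseline_tb)
std = (sum((x - mean) ** 2 for x in baseline_tb) / len(baseline_tb)) ** 0.5

z_score = (theft_hour - mean) / std
print(f"z-score of the theft hour: {z_score:.2f}")

# A classic anomaly rule (e.g., alert when z > 3) never fires: the stolen
# 2 TB is buried inside ordinary hour-to-hour variance of ~120 TB.
```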
Going back to the report, one recommendation is to define “explicit AI-specific privileged operations”. Essentially, it’s no longer that Person A has access to the model; it’s that Person A is authorized to perform one specific action with a model, like modifying a certain part of a training run.
AI data center security standards would codify these kinds of precise access controls. The key shift is from broad access to tightly controlled authorization for specific high-risk actions.
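As an illustration of what that shift could look like in code, here is a minimal sketch of deny-by-default, per-operation authorization. The operation names and policy structure are hypothetical, not drawn from the report or from any standard.

```python
# Minimal sketch of "explicit AI-specific privileged operations": instead of
# granting a person access to "the model," each high-risk action is its own
# named privilege that must be individually granted and checked.
from dataclasses import dataclass, field

PRIVILEGED_OPS = {
    "weights.read_full",      # export complete model weights
    "weights.write",          # overwrite a checkpoint
    "training.modify_data",   # alter a dataset mid-run
    "training.resume",        # restart a run from a checkpoint
    "inference.raw_logits",   # query with full logit access
}

@dataclass
class Principal:
    name: str
    grants: set = field(default_factory=set)

def authorize(principal: Principal, operation: str) -> bool:
    """Deny by default; every privileged op needs an explicit grant."""
    if operation not in PRIVILEGED_OPS:
        raise ValueError(f"unknown privileged operation: {operation}")
    allowed = operation in principal.grants
    # A real system would log this decision to tamper-evident audit storage;
    # here we just print it.
    print(f"{principal.name} -> {operation}: {'ALLOW' if allowed else 'DENY'}")
    return allowed

# Person A may tweak the data pipeline of one run but cannot export weights.
person_a = Principal("person_a", grants={"training.modify_data"})
authorize(person_a, "training.modify_data")  # ALLOW
authorize(person_a, "weights.read_full")     # DENY
```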
The FAI report also covers novel risks from emanations. Training an AI model releases a lot of signals; not to harp on the bank analogy, but our money is essentially leaking through the walls. A motivated actor can read the signals emanating from a training run and, from those signals alone, gain powerful insights into how a model is being trained and how it is structured.
For example, if the U.S. fine-tunes a model on sensitive classified nuclear data, perhaps in an effort to optimize its nuclear weapons stockpile, and the facility is not properly shielded, the signals released may leak critical information.
These risks weren’t as much of a concern with traditional data centers. Current NIST standards (SP 800-53) acknowledge emanation risks but offer little tangible guidance beyond pointing to the largely classified TEMPEST standards, which deal with lowering emanation risks but were designed in the 1950s and updated only sporadically since. As a result, according to an individual who handles TEMPEST certifications and spoke for this article, the TEMPEST standards don’t account for the unique risks associated with AI.
And TEMPEST only addresses electromagnetic emanations. There are other new emanation risks, from power-usage patterns to cooling fluctuations, that a motivated state actor could analyze to gather intelligence.
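To illustrate what that kind of analysis could look like, here is a toy simulation, with made-up numbers and no real hardware behind it, in which an observer recovers the cadence of training steps purely from one-per-second power-draw readings.

```python
# Toy side-channel sketch: a synthetic power trace that rises and falls with
# each training step, plus noise. An observer who sees only power draw can
# recover the step period, which hints at batch size and model scale.
# The period, amplitudes, and noise level are all hypothetical.
import math
import random

STEP_PERIOD = 37   # hypothetical seconds per training step
SAMPLES = 3000     # 1 Hz power-meter readings (~50 minutes)

random.seed(1)
trace = [
    80 + 15 * math.sin(2 * math.pi * t / STEP_PERIOD) + random.gauss(0, 3)
    for t in range(SAMPLES)
]

def autocorr(x, lag):
    """Normalized autocorrelation of x at the given lag."""
    mu = sum(x) / len(x)
    num = sum((x[i] - mu) * (x[i + lag] - mu) for i in range(len(x) - lag))
    den = sum((v - mu) ** 2 for v in x)
    return num / den

# Scan candidate lags up to a minute and pick the strongest periodicity.
best_lag = max(range(5, 60), key=lambda lag: autocorr(trace, lag))
print(f"recovered step period: ~{best_lag}s (true value: {STEP_PERIOD}s)")
```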
Beyond the report’s infrastructure findings, there are also inference-level attacks, in which an adversary uses an AI model’s outputs to infer hidden information about the model. A distillation attack, for example, is one that present frameworks do not account for. If an adversary were able to somehow secretly or remotely prompt a model, that would pose a risk in and of itself.
Not only could the answers compromise valuable intelligence; through a series of questions, an adversary could build a new AI system from those answers, ‘distilling’ a model from the more advanced one. And this new model may lack the safeguards the U.S. government put on the original. For reference, OpenAI has claimed that China’s DeepSeek models were trained by distilling OpenAI’s models.
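For a sense of how little an attacker needs, here is a minimal distillation sketch using toy PyTorch models: the “student” never touches the teacher’s weights or training data, only its output probabilities, and still converges toward the teacher’s behavior.

```python
# Minimal distillation sketch (toy models; requires PyTorch). The teacher
# stands in for a guarded frontier model; the student is the attacker's copy,
# trained purely on the teacher's answers to attacker-chosen queries.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

teacher = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 10))
student = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(2000):
    queries = torch.randn(64, 16)          # attacker-chosen inputs
    with torch.no_grad():                  # only outputs cross the boundary
        teacher_probs = F.softmax(teacher(queries), dim=-1)
    student_logp = F.log_softmax(student(queries), dim=-1)
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

# The imitation loss falls toward zero: the student has distilled the
# teacher's input-output behavior from answers alone.
print(f"final imitation loss: {loss.item():.4f}")
```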
“From a human standpoint, it’s as if I was able to distill the ability of a sniper to shoot,” says Erich Devendorf, expert technical resident at RAND. “That sniper might be an FBI sniper with a set of ethics and a command structure around them where they say, I only take ethical, legal shots. But if I learn the ability to take the shot from the sniper, I necessarily don't get any of the guardrails around it.”
An SL5 framework could account for this type of attack by restricting and filtering queries, effectively cutting off the possibility of a massive distillation attack.
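What restricting and filtering queries might mean in practice is sketched below. The budget, breadth heuristic, and thresholds are hypothetical illustrations, not part of any proposed SL5 specification.

```python
# Sketch of query-side controls against mass distillation: a hard per-client
# query budget, text-only answers (no logits), and a crude breadth heuristic
# that flags clients systematically sweeping the input space.
from collections import defaultdict

QUERY_BUDGET = 10_000   # hypothetical lifetime cap per client
BREADTH_LIMIT = 0.9     # flag clients whose queries are almost never repeated
MIN_SAMPLE = 1_000      # don't judge breadth until enough queries are seen

query_counts = defaultdict(int)
distinct_queries = defaultdict(set)

def gate_query(client_id: str, prompt: str) -> bool:
    """Return True if the query may proceed to the model."""
    query_counts[client_id] += 1
    distinct_queries[client_id].add(hash(prompt))

    if query_counts[client_id] > QUERY_BUDGET:
        return False  # mass extraction needs volume; the budget caps it

    breadth = len(distinct_queries[client_id]) / query_counts[client_id]
    if query_counts[client_id] > MIN_SAMPLE and breadth > BREADTH_LIMIT:
        return False  # near-zero repetition suggests systematic sweeping

    return True

def answer(client_id: str, prompt: str) -> str:
    if not gate_query(client_id, prompt):
        return "request refused"
    # Returning text only (no logits or probabilities) removes the richest
    # training signal a distillation attacker could use.
    return "model response (text only)"

print(answer("client_a", "example prompt"))  # "model response (text only)"
```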
Costs and timelines are hard to pin down, especially because much conceptual and experimental work remains before anyone can agree on what “SL5” actually looks like in practice. Still, for a global commercial system like ChatGPT, experts estimate that SL5 data centers could cost tens of billions of dollars and take around five years to build. A prototype SL5 data center handling limited queries, or a smaller one focused on national security or nuclear preparedness, could be closer to one year and approximately $40 million.
If one believes there is a chance that highly sophisticated AI systems will play a significant role in determining the outcome of warfare in the next five to ten years, then the U.S. should already be building SL5 data centers. “There’s a lot of work that needs to be done here, and we’re not on track,” says Nevo.