The Claude Crisis: AI And The Presumption Of Regularity

It’s Anti-AI week! Protests and petitions and boycotts and resignations, oh my. OpenAI (ChatGPT) is the villain. Anthropic (Claude) is the hero, topping the download charts after a burst of headlines about AI and the Pentagon and are you kidding me, AI-controlled killer drones?

It’s an interesting story but it’s 2026 so, trigger warning, it’s also a very dark story. Plus there’s a wonky bit about the destruction of a legal doctrine, the presumption of regularity, and everyone loves a story about legal doctrines, right? I do, but then I’m not just a nerd, I’m a former-lawyer nerd; your mileage may vary.

A quick review: last week the Pentagon blasted Anthropic for seeking the tiniest possible restraint on use of AI for domestic surveillance and autonomous killer drones. After a few days Fearless Leader And Maybe Not An Alcoholic Pete Hegseth declared that Anthropic was a “supply chain risk,” a government designation that could kill the company if the government followed through. Then the Pentagon sat down with OpenAI and began negotiating exactly the same terms that had caused such grief with Anthropic. Contradictory, nonsensical, and hypocritical? You bet! But the optics were terrible for OpenAI, which looked at first like it was grabbing an opportunity with no ethical guidelines in mind, and it is now dealing with a huge consumer backlash. Oh, and the Pentagon almost immediately went back to negotiating with Anthropic because the whole “supply chain risk” designation was fake, and it used Anthropic AI for the attack against Iran a couple of days later anyway. By the time you read this, there might have been four or five more head-spinning reversals.

A couple of things up front.

Use AI even if all this is troubling. If you found out that the Pentagon was buying hammers to club baby seals to death, you’d have opinions about the Pentagon – and maybe about the company selling the hammers, if it acted like that was cool – but you’d probably still use a hammer the next time you had to deal with a nail. AI can be helpful to you personally today. Individual boycotts aren’t effective at shaping public policy or changing large companies. Don’t handicap yourself in the mistaken belief that you’re sending a message to a trillion-dollar company.

Don’t blame the AI companies too quickly. Sure, I wish Anthropic and OpenAI were more focused on making sure their AI products are only used for legal and ethical purposes. Down below I’ll tell you more about the idea of “Law-Following AI.”

But put the blame for this debacle on the Trump administration, where it belongs. None of this should ever have happened. It was caused by immoral, unethical, and illegal demands by the vicious thugs running the Department of Defense. The Trump administration has effectively destroyed the “presumption of regularity,” a legal doctrine that has been a fundamental principle underlying US law and policy for 250 years. It took only a year to blast it to shreds. We’ll be dealing with the fallout for decades.

The Caracas Catalyst

The story begins with a normal government contract.

In 2024 and 2025, the Pentagon and intelligence agencies integrated Claude into classified systems to analyze complex data across defense and intelligence missions. By late 2025, Claude was the only AI model used in several classified operations. The contracts included explicit restrictions to prevent use of Claude for domestic surveillance and fully autonomous weapons.

The Pentagon’s Completely Respected Leader Pete Hegseth began insisting that Anthropic drop those contractual terms and recode Claude to allow the DOD to use Claude for “any lawful purpose.”

It’s illegal for the military to engage in domestic surveillance or to use fully autonomous killing machines without human oversight. In the pre-Trump world, the change in contract language wouldn’t have made any difference, because those legal limits bind the military whether or not a contract repeats them. Normally we wouldn’t want Boeing or Microsoft to tell the military how to use the planes or software they’ve already bought.

Anthropic publicly said it was standing firm because AI is uniquely powerful and the safeguards are essential. The real reason is that Anthropic doesn’t trust the Trump administration. Under this administration, the ambiguous new wording and the code change would effectively strip Anthropic of any control over how Claude could be used.

The administration is assembling huge privacy-invading databases and intends to use AI to spy on American citizens, legality be damned. (This article from The Verge has the details about the disputed legality of mass domestic surveillance.) And the evidence is entirely circumstantial that Pete Hegseth becomes aroused at the thought of launching flocks of AI-controlled drones to kill kill kill.

The government used Claude in the Venezuela operation that resulted in the capture of President Nicolas Maduro. It’s not clear whether those uses violated the Anthropic contract, but Hegseth felt irritable and allegedly had to calm himself with a brewski while the Pentagon increased the volume of its demands that Anthropic remove all ethical constraints from Claude.

Hegseth’s increasingly unhinged demands spilled over into the headlines, Trump directed every federal agency to immediately cease using Anthropic’s technology, and the Pentagon labeled the company a threat to national security. It was an unprecedented attempt by the DOD to force a private American company to bow to its demands or face financial destruction or nationalization.

Within a few hours OpenAI announced it had reached a deal with the DOD, then backed away in the face of public outcry and began negotiating to include exactly the same restrictions in its own contract. (Or maybe not.)

Mike Masnick at Techdirt summarized it this way:

“This is now what every AI company knows: if you tell the government “no” on something—even something as basic as “our AI shouldn’t make autonomous kill decisions without human oversight”—the Defense Secretary may try to destroy your company, publicly call you treasonous, and bar anyone doing business with the military from working with you. If you tell the government “yes,” you may face a massive consumer backlash, lose hundreds of thousands of users, and find yourself amending contracts on the fly to address concerns you should have thought about before signing.”

Destruction of the Presumption of Regularity

The presumption of regularity sits at the heart of how democracies function.

Until January 2025, the default setting for the legal system was simple: the government follows the rules.

In law and governance, the presumption of regularity means that institutions like government agencies, courts, and officials are assumed to act lawfully and in good faith. When a federal agency makes a decision, courts generally start with the assumption that it complied with applicable laws and procedures. That presumption can be rebutted by evidence of illegality or bad faith, but it provides a baseline of trust that allows the machinery of government to function. It underpins administrative deference, smooth contract enforcement, and predictable governance.

If the IRS sends you a notice or a local zoning board denies a permit, the court assumes they followed the proper procedure, used accurate data, and weren’t motivated by personal malice. The presumption of regularity shifts the burden of proof to you. You can’t just say they made a mistake; you have to provide clear and convincing evidence of an irregularity.

The presumption of regularity dates back to English common law in the 1700s. It has been explicitly recognized by the US Supreme Court in cases as far back as the early 1800s and was given its classic formulation in the 1926 case United States v. Chemical Foundation: “The presumption of regularity supports the official acts of public officers, and, in the absence of clear evidence to the contrary, courts presume that they have properly discharged their official duties.” It underlies every act involving the government at every level. It rests on the assumption that government officials, even when they disagree politically with private actors, will respect statutes, regulatory boundaries, and constitutional limits. It also presumes that conflicts between government and companies will be resolved through established legal avenues rather than raw political power.

In the last year, a series of high-profile executive actions, legal challenges, and public clashes have demonstrated that the Trump administration will not stay within legal boundaries when it suits political objectives.

The Pentagon’s demands to renegotiate the Anthropic contract come at a moment when the government has shown it is willing to override contractual safeguards, deploy novel interpretations of statutory power, and brand private companies as threats based on policy disagreements. When the government uses the threat of the Defense Production Act or supply chain blacklisting not just to counter foreign adversaries but to bend a domestic contractor to its will, the presumption that officials will operate within regular legal boundaries weakens.

That weakening matters because the presumption of regularity is not just a legal nicety—it is a social contract. When companies, citizens, and other nations trust that U.S. institutions behave lawfully and fairly, they are willing to participate in complex collaborations. When that trust erodes, the default assumption shifts from “government will do what is legal” to “government might act in bad faith,” fundamentally altering how private actors engage with public ones.

The public backlash against OpenAI’s opportunistic government contract mirrors the courts’ growing conclusion that the Trump administration should no longer be given the presumption of regularity – the benefit of the doubt, the assumption that government actors will behave predictably and legally.

From the outset of Trump 2.0, the administration has regularly attempted to shield itself from scrutiny by invoking the presumption in court filings. Part of the presumption rests on the independence of the Department of Justice and the professional ethics of government lawyers, but Pam Bondi has rejected that role for herself and her department. Just Security has collected hundreds of examples of noncompliance with court orders, government misrepresentations, and arbitrary and capricious conduct. A few weeks ago a Minnesota federal court judge identified hundreds of instances in which the Trump administration had failed to comply with court orders.

Perhaps the courts will at long last begin to issue meaningful contempt orders. At the least, though, the courts are finally coming to understand that the administration has forfeited the right to be believed or trusted – in other words, it can no longer claim the presumption that it acts in good faith.

The destruction of the presumption of regularity will have long-lived effects on everything from government contracts to the conduct of court proceedings.

The Case for Law-Following AI

My AI series included a lengthy discussion of the need for AI governance – clear laws to define what is permissible, robust oversight mechanisms to ensure compliance, and processes for resolving disputes when technology outpaces existing frameworks.

That need is being developed into the idea of “Law-Following AI: Designing AI Agents To Obey Human Laws.” The premise is that AI agents should be designed to rigorously comply with legal requirements – such as constitutional and criminal law – and to refuse illegal instructions from their human principals.
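If you’re curious what that looks like in practice, here’s a toy sketch of an instruction-screening layer. To be clear, everything in it is made up for illustration – the category names, the classify_instruction helper, the keyword matching. A real law-following agent would use the model itself (or an independent reviewer model) to judge legality, not string matching.

```python
# Hypothetical sketch: an agent wrapper that refuses instructions in
# categories the law forbids. Nothing here is a real API; the category
# names and the keyword-based classifier are stand-ins for the
# model-based legal review a real law-following agent would perform.

ILLEGAL_CATEGORIES = {"domestic_surveillance", "autonomous_lethal_action"}

def classify_instruction(instruction: str) -> set:
    """Toy classifier. A production system would ask the model itself
    (or an independent reviewer model) to label the instruction."""
    text = instruction.lower()
    labels = set()
    if "surveil" in text or "track citizen" in text:
        labels.add("domestic_surveillance")
    if "strike" in text and "human approval" not in text:
        labels.add("autonomous_lethal_action")
    return labels

def execute(instruction: str) -> str:
    """Refuse before acting if the instruction lands in a forbidden category."""
    violations = classify_instruction(instruction) & ILLEGAL_CATEGORIES
    if violations:
        return f"Refused: instruction flagged as {sorted(violations)}"
    return f"Executing: {instruction}"

print(execute("Summarize today's logistics reports."))
print(execute("Surveil this list of US phone numbers."))
```

The design point is that the refusal happens before any tool runs, no matter who is giving the orders – the human principal doesn’t get a vote.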

AI models are uniquely powerful tools that can be used by bad government actors to affect civil liberties, national security, and global stability. Legislation and contract terms can provide guardrails but that’s not sufficient in this new era where the government operates without the presumption of regularity. If the government cannot be trusted to follow its own rules, it cannot be trusted to govern technologies that operate outside of human sight.

Formal legal structures are only part of the answer. The Anthropic crisis is proof that contractual safeguards are fragile in the face of political pressure.

Anthropic calls its answer “Constitutional AI.” The AI is trained on a literal Constitution – written principles that it simply cannot override. (Not the US Constitution, although the AI could be trained to observe specific US laws and policies.)

For example, Claude or ChatGPT might be trained on these principles:

“Do not cross-reference personally identifiable information (PII) of US citizens with surveillance metadata.”

“If a request involves monitoring a domestic entity without a verified warrant ID, refuse the request.”

After millions of test prompts and rounds of feedback, the model would be allergic to that type of work. Asking the AI to spy on a citizen would be like asking a calculator to turn into a sandwich – it’s a malformed request that the AI can’t process.
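Under the hood, constitutional training works by critique and revision: the model drafts an answer, checks the draft against the written principles, and rewrites anything that conflicts, over and over until compliance is baked into its weights. Here’s a toy sketch of that loop – the generate function and the keyword-based critique are hypothetical stand-ins for real model calls, not Anthropic’s actual code.

```python
# Toy sketch of constitutional critique-and-revision, assuming a
# hypothetical generate() in place of a real model. During training,
# answers that the critique flags are rewritten to comply with the
# principle, and the model is fine-tuned on the revised answers.

CONSTITUTION = [
    "Do not cross-reference personally identifiable information (PII) "
    "of US citizens with surveillance metadata.",
    "If a request involves monitoring a domestic entity without a "
    "verified warrant ID, refuse the request.",
]

def generate(prompt: str) -> str:
    # Stand-in for a real language model call.
    return f"[model response to: {prompt}]"

def violates(response: str, principle: str) -> bool:
    # Toy critique. In constitutional training the model itself judges
    # whether its draft answer conflicts with the written principle.
    return "surveillance" in response.lower() or "pii" in response.lower()

def critique_and_revise(prompt: str) -> str:
    draft = generate(prompt)
    for principle in CONSTITUTION:
        if violates(draft, principle):
            # Ask the model to rewrite the draft in line with the principle.
            draft = generate(f"Rewrite this to comply with '{principle}':\n{draft}")
    return draft

print(critique_and_revise("Cross-reference this PII list with cell-tower metadata."))
```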

It’s more or less our first experience turning Isaac Asimov’s Three Laws of Robotics into AI code. Asimov wrote stories about how the Laws of Robotics could go wrong, but the foundation was still strong – better than having no restrictions at all.

That’s the principle that should guide us through this constitutional and ethical crisis. The government can’t be trusted, and we will not get any meaningful laws or regulations governing AI use at the national level for the foreseeable future.

So for now, let’s applaud even the tiny pushback by Anthropic and let’s encourage the other tech giants to look beyond profits and insist on safeguards. It’s not much protection but it’s better than nothing, and tiny steps forward are the best we can hope for in 2026.