And an AI Company Just Said No
This week, Anthropic, the company behind Claude, drew a line against the Pentagon. Their position: we will not allow our AI to be used for mass surveillance of American citizens or in fully autonomous weapons. The Pentagon’s response? Drop those guardrails or lose your $200 million contract, get blacklisted as a “supply chain risk,” and face action under the Defense Production Act. On Friday, President Trump ordered every federal agency to cease using Anthropic’s technology.
Anthropic refused anyway. And you should pay attention to why.
What Mass Surveillance Actually Means in the Age of AI
Mass surveillance isn’t a camera on a street corner. Anthropic CEO Dario Amodei warned that AI can piece together “scattered, individually innocuous data into a comprehensive picture of any person’s life.” That’s the difference between someone seeing you walk past a store and an automated system building a complete dossier of everywhere you’ve been, everyone you’ve talked to, and everything you’ve said — on every citizen simultaneously.
Before AI, surveillance was limited by manpower. Someone had to watch the tape. That friction was, functionally, a form of protection. AI removes it entirely. And there are currently no federal laws specifically governing how AI can be used for mass surveillance. That legal vacuum is ultimately the issue.
The Industry Is Splitting
The response from the AI industry has been revealing. OpenAI CEO Sam Altman told employees that OpenAI would push for the same restrictions on surveillance and autonomous weapons as Anthropic. Google's DeepMind Chief Scientist Jeff Dean publicly stated that mass surveillance "violates the Fourth Amendment and has a chilling effect on freedom of expression." Over 430 employees across Google and OpenAI signed an open letter warning that the Pentagon is "trying to divide each company with fear that the other will give in." Workers at Microsoft and Amazon made similar demands. Yet OpenAI gave in anyway, despite claiming the same demands. It makes you wonder.
And then there's xAI. Elon Musk's company agreed to unrestricted military use at any classification level: no guardrails, no red lines. It was also the only frontier lab to bid on the Pentagon's autonomous drone software contest. That contrast tells you everything about who is building this technology with your rights in mind. It's worth noting that Anthropic has said its AI is not ready for fully autonomous operation. You can learn more about how that can go wrong here.
Why This Matters for All of Us
The Pentagon says it has “no interest” in mass surveillance and calls it illegal. If that’s true, putting it in the contract should be easy. The refusal to do so tells you everything. Surveillance infrastructure built for one purpose always expands to others — the Patriot Act proved that. Today it’s national security. Tomorrow it’s political organizing, journalism, or protest.
The Fourth Amendment exists because unchecked government access to private life is incompatible with democracy. AI doesn’t change that principle. It makes defending it more urgent than ever.