Anthropic just did something no major tech company has done in Trump’s second term. They said no.
When the Pentagon asked its AI contractors to sign new agreements allowing the use of their systems for all “lawful purposes,” OpenAI, Google, and xAI signed. Anthropic didn’t. Instead, Dario Amodei’s team drew two clear lines: no mass domestic surveillance, and no autonomous weapons.
The Trump administration responded by designating Anthropic a supply chain risk, an unprecedented punitive measure against an American company, and by banning Claude from government systems before turning around and signing a deal with OpenAI instead.
What happened next matters just as much as the refusal itself. Anthropic’s Claude shot to number one in the App Store. Downloads surged. The market rewarded them. And in doing so, they may have created the permission structure to defy Trump that other companies needed to see.
I brought in my friend Greg Fish, a computer scientist and author of the Cyberpunk Survival Guide, to break down what this standoff really means for civil liberties, for the AI industry, and for the broader fight against technology-powered authoritarianism.
You can watch our full conversation above and read the key takeaways below.
What The Pentagon Demanded Of Anthropic
This wasn’t a routine contract dispute. The Trump administration wanted something specific and alarming.
According to The New York Times’ reporting, on a call with Anthropic executives, the Pentagon made clear it wanted the ability to collect and analyze unclassified commercial bulk data on Americans, including geolocation and web browsing history. That is mass surveillance. Anthropic said no.
Anthropic’s two carve-outs were narrow and reasonable: don’t use Claude for mass domestic surveillance, and don’t deploy it for autonomous weapons. The fact that the Pentagon refused to accept even those minimal guardrails tells you everything about what they could be planning.
Greg made a critical technical point: AI language models make mistakes. When you pressure them to produce a certain detection rate or flag a certain percentage of people, they will, and they’ll do it convincingly, even when they’re wrong. In the wrong hands, that becomes a tool where you are statistically designated guilty, and the machine builds a confident, coherent case against you that may be entirely fabricated.
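Greg's point about pressured detection rates is, at bottom, the base-rate problem. Here is a minimal sketch with hypothetical numbers (the population figure is roughly real; the prevalence and accuracy values are assumptions for illustration) showing why, when the behavior being hunted is rare, even a highly accurate system produces flags that are overwhelmingly false:

```python
# Base-rate sketch: a surveillance model assumed to be 99% accurate,
# hunting behavior assumed to occur in 1 out of every 10,000 people.
# All three numbers are illustrative assumptions, not reported figures.

population = 330_000_000   # roughly the U.S. population
true_rate = 1 / 10_000     # assumed prevalence of the targeted behavior
accuracy = 0.99            # assumed true-positive AND true-negative rate

targets = population * true_rate
innocents = population - targets

true_flags = targets * accuracy            # guilty people correctly flagged
false_flags = innocents * (1 - accuracy)   # innocent people wrongly flagged

# Of everyone the system flags, what fraction is actually guilty?
precision = true_flags / (true_flags + false_flags)

print(f"Total people flagged: {true_flags + false_flags:,.0f}")
print(f"Chance a flagged person is actually guilty: {precision:.1%}")
```

Under these assumptions the system flags over three million people, and a flagged person has roughly a 1% chance of being guilty. That is the arithmetic behind "statistically designated guilty": the error rate does the accusing, and the model's fluency makes the accusation sound solid.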
The danger isn’t just the technology making mistakes. It’s an administration that would welcome those mistakes. As Greg put it, the Soviet playbook was exactly this: use the error as justification for the kangaroo court. The mistake isn’t a bug, it’s a feature.
Anthropic vs. OpenAI: A Tale of Two Philosophies
The contrast between these two companies couldn’t be sharper, and this moment put it on full display.
OpenAI CEO Sam Altman initially signaled solidarity with Anthropic’s position, then, within hours, signed a deal with the Trump administration, claiming the demands had been met. The whiplash was telling.
OpenAI operates in the tradition of Silicon Valley singularitarianism. Build the biggest model possible, promise superintelligence, juice the valuation, and figure out the rest later. Anthropic is building a real tool with defined uses and honest benchmarks. Dario Amodei’s public messaging has been consistent, careful, and grounded in constitutional principles. Altman’s messaging changes by the hour.
Dario has reportedly handed employees a book about the creation of the atomic bomb, not as inspiration, but as a warning. If you build something powerful, you are responsible for what it gets used for. That’s a fundamentally different culture than what you see at OpenAI.
Greg uses Claude for serious technical work, including database design and unit testing, precisely because it’s more consistent and reliable than the alternatives. That quiet reputation is now translating into market rewards.
Decency Is Being Financially Incentivized. That’s A Key Story.
Anthropic got punished by the government and rewarded by the market. That dynamic could change everything.
Most tech CEOs who might have wanted to push back on this administration have been afraid to, not necessarily because they agree with Trump, but because their investors don’t want the company punished. The fear of investor backlash has been the real enforcer of corporate capitulation.
Anthropic broke that calculus. They lost a $200 million government contract and gained far more in downloads, subscriptions, and market positioning. If there’s a financial reward in telling this administration no, more companies will tell them no. Money is the only language that consistently works.
We already have Palantir fully enthralled with this administration, helping aggregate vast amounts of data. Layer an AI system on top of that, one capable of analyzing it at scale and generating convincing but potentially fabricated conclusions, and you have the infrastructure for technology-powered authoritarianism. Anthropic saw where this was going and stepped off the train.
The supply chain risk designation almost certainly won’t hold up in court. But that’s not the point. The point is that it was meant to intimidate. It didn’t work. And that matters.
The AI Bubble Risks, Displacement, & What Comes Next
Beyond the Anthropic story, we’re in a moment of serious AI hype that carries real risks and real opportunities that aren’t getting enough attention.
The promise of superintelligence is, to put it plainly, a grift. Greg’s view: the sovereign wealth funds and institutional investors financing this boom know it too. They’re riding it out for the return, and when that return dips below their threshold, they’ll move on. Meanwhile, companies like OpenAI are building valuations on circular lending, with NVIDIA buying stakes in companies that then use that money to buy chips back from NVIDIA. It’s a financial house of cards that could see 30 to 40% valuation corrections when it unwinds.
The real AI innovation is happening in quieter places. Google and Microsoft are deep in medical AI, discovering new classes of antibiotics, advancing cancer diagnostics, and improving climate modeling. These are genuine, consequential breakthroughs that get ignored because they’re not flashy enough to drive hype cycles.
Displacement is real, and it’s coming. Entry-level jobs will be eviscerated. Middle management will contract. The people who will thrive are those who learn to manage AI agents, not those who ignore the technology out of principle. If you’re on the left and reflexively anti-AI, I understand the instinct, but you need to be using these tools professionally, or you will be left behind.
That said, the nuance matters. AI should not be replacing human creativity, art, or editorial judgment. As Greg put it, AI is like a fork. Extremely useful for eating, terrible for stabbing people. We’re currently trying to do both. The answer isn’t to ban the fork. It’s to stop the stabbing.
The regulatory conversation we should be having, about displacement, surveillance, autonomous weapons, and AI in the arts, has been completely flattened by the Trump administration’s chaos. We’re not having it. And the vacuum that creates is being filled by individual company decisions, which is not a substitute for systemic governance.
Bottom Line
Anthropic drew a line. They said mass surveillance and autonomous weapons without human oversight are not things they will enable, regardless of what a government contract pays. They got punished by the state and rewarded by the market. And in doing so, they may have demonstrated something the rest of corporate America has been too afraid to test: that standing up to this administration is not just the right thing to do. It can also be the smart business move.
The question now is who follows. Because if they don’t, what fills the gap is Palantir’s data, OpenAI’s compliance, and an administration that has already shown it views AI’s errors not as problems to fix, but as tools to exploit.
This technology is not going away. The only question is who controls it, and what they’re willing to do with it.
Subscribe to Ahmed Baba News for independent analysis that cuts through the noise. And check out Greg Fish’s Cyberpunk Survival Guide for deep-dive tech and society coverage from someone who actually builds these systems.