Week of March 23 · Issue #005 · March 30, 2026
Three lenses on the week. Tech, geopolitics, creative. One thesis connecting them all.
The AI industry is fracturing into competing power centers while society grapples with whether AI augments or replaces human agency. Legal, commercial, and philosophical fault lines are emerging simultaneously.
Thirty engineers from OpenAI and Google publicly backed a competitor against the US government. A judge granted Anthropic a preliminary injunction. The AI industry just chose a side — and it was not the Pentagon's.
Anthropic's courtroom win against the Pentagon and the public defection of thirty OpenAI and Google employees signal a fundamental rejection of AI being weaponized or monopolized by military or commercial hegemons. These are not isolated incidents but coordinated signals that AI researchers see their work differently than their employers do. The injunction lends legal weight to the position that AI systems carry stewardship obligations beyond shareholder returns or state power.
OpenAI's decision to build proprietary chips and Nscale's $2 billion funding round with ex-Facebook political operators reveal that the AI industry is no longer content with Nvidia's semiconductor chokehold. This is vertical integration at massive scale. Nscale's funding sources suggest that building AI infrastructure is now a geopolitical play, not just a business one. The subtext: whoever controls chips controls the future of AI deployment.
Palantir's public military targeting demo, Netflix's $600 million AI startup acquisition, Adobe's CEO resignation, and the viral essay against outsourcing thinking together form a paradox: AI is enabling one-person businesses at scale while atrophying the human judgment needed to deploy it responsibly. The week exposed the costs of convenience. Every AI tool that removes friction from decision-making also removes the friction of deliberation. The question facing society is whether we can have AI augmentation without AI dependence.
AI has stopped being a technology problem. It is now a power problem. The industry is splitting between those building AI as a tool for human judgment and those building it as a replacement — and this week, the legal, commercial, and cultural signals all moved in the same direction. Whoever controls that narrative will not just shape the technology. They will shape what it means to decide.