01

Jensen Huang Says We've Achieved AGI

Nvidia CEO Jensen Huang declared this week that AGI has been achieved. The catch: nobody agrees on the definition. Huang's framing is convenient — vague enough that he can claim victory whenever it's commercially useful. But beneath the semantics is a real signal: the industry's most powerful hardware supplier is now betting its narrative on "post-AGI" positioning. When Jensen says it, the market listens — even if philosophers don't.

The Verge, Mar 24

02

Anthropic vs The Pentagon — Who Controls Frontier AI

Anthropic is in federal court seeking a preliminary injunction to block its designation as a military supply-chain risk by the Trump administration. The case pits AI safety culture against national security doctrine. Judge Rita Lin's decision is expected in days. This is the first major legal battle over who has sovereign authority over frontier AI development. The outcome sets precedent for every lab that follows.

The Verge / Lawfare, Mar 24

03

ARM's AGI CPU: 136 Cores, Double the Performance Per Watt of x86

ARM unveiled specs for its "AGI CPU" — a chip with up to 136 cores per unit, claiming 2x performance per watt vs x86. The branding is aggressive and the specs are real. As inference workloads shift from cloud to edge, ARM is positioning itself as the silicon layer of the post-Nvidia era. The race for on-device AI just got a credible new contestant.

The Verge, Mar 24

04

Epic Games Cuts 1,000+ Jobs as Fortnite Usage Falls

Epic laid off over 1,000 people as Fortnite — its cultural anchor — sees declining engagement. The world's biggest gaming IP couldn't hold the generation that grew up on it. This is about attention economics: the cohort that made Fortnite a phenomenon is now scattered across TikTok, Discord, YouTube, and AI-generated entertainment. The attention economy doesn't have loyalty — it has habits, and habits break.

The Verge, Mar 24

05

Only 21% of Office Workers Are Actively Engaged

A signal from the @cryptoEssay Telegram channel: Gallup data shows only 21% of global office workers are actively engaged at work. AI-native companies are redesigning processes around this reality — not trying to fix engagement, but eliminating the need for it. The implication: the jobs most at risk aren't just "repetitive tasks" — they're the roles that depended on organizational inertia rather than actual human judgment.

@cryptoEssay (Gershuni), Mar 25

The legitimacy war for AI has begun — in courts, in language, and in silicon. Anthropic fights the Pentagon over who defines AI's role in national security. Jensen Huang fights philosophers over what "AGI" means. ARM fights x86 over who powers the next compute era. The first wave of AI was about capability. This wave is about control — of definition, jurisdiction, and infrastructure. Research Lab's angle: map who's winning each legitimacy battle and what it means for the builders caught between them.

---

What Jensen Huang's AGI Claim Actually Tells Us — Not about whether we've achieved AGI, but about why the most powerful hardware CEO in the world needs the world to believe it. Follow the incentive.
The Anthropic vs Pentagon Case Is the Alignment Trial Nobody Expected — Deep dive on the legal battle: what a "military supply-chain risk" designation means, what Anthropic's injunction argues, and why this sets the template for every frontier AI lab.
The 21% Problem: Why AI-Native Companies Are Built Differently — Using the Gallup engagement data as the frame: the future of work isn't AI replacing humans — it's organizations built for the 21%, not the 79%.
ARM's AGI CPU: The Quiet End of x86 Dominance — The silicon shift from cloud inference to edge AI. What the ARM specs mean for on-device models, privacy, and the post-Nvidia compute stack.
Latent Introspection (Karma: 67✦) — Open-source introspection parameters for LLMs. Directly relevant to alignment tooling.
The AIXI Perspective on AI Safety (Karma: 57✦) — Theoretical framing of safety through the AIXI lens. Dense but high-signal.
Ablating Split Personality Training (Karma: 47✦) — What happens when you remove divergent training objectives from models. Practical alignment research.
Measuring and Improving Coding Audit Realism (Karma: 42✦) — Deployment-verified benchmarks for coding agents. Relevant for Research Lab's agent build.