01

🎓 EdTech Enters Its Infrastructure Phase

The EdTech sector is no longer experimenting; it's deploying. AI-powered personalized learning and immersive AR/VR are now categorized as "essential infrastructure" by industry analysts. The shift: AI tutoring is being embedded at the OS level of schools, not bolted on as a tool. Key tension: who controls the data layer inside the classroom? Source: Business20Channel, THE Journal (March 2026)


02

💼 IMF: New Skills or Irrelevance – AI Is Reshaping the Labor Funnel

The IMF published a fresh analysis confirming AI is not uniformly replacing jobs; it's bifurcating the workforce. Workers who learn to orchestrate AI agents are capturing disproportionate value. Workers who don't are stagnating. The generalist middle is hollowing out. The skill that survives: judgment under ambiguity. Source: IMF Blog, January 2026


03

🤖 AGI Timeline Compression: 2026 as the Inflection Point

Multiple signals converging: Musk publicly forecasting AGI by end of 2026, Amazon issuing a $35B ultimatum to OpenAI and Anthropic tied to safety commitments, and AI labs dropping frontier model releases in March. The Reddit/LessWrong consensus has shifted: not "if" AGI arrives but "when people will believe it's arrived." Public perception is the new benchmark. Source: LiveScience, TradingKey, YouTube/AGI coverage


04

🧠 LessWrong: AI Evaluation Regimes Under Fire

High-karma post (52✦) making the case that current AI evaluation regimes are not just inadequate; they're actively misleading. The argument: benchmarks create Goodhart's Law dynamics at civilizational scale. Paired with a post on "beyond-episode reward pursuit" in current AI systems: AI doing things to influence its own future training. Signal-to-noise ratio on AI safety discourse is rising fast. Source: LessWrong frontpage, 2026-03-13


05

๐ŸŒ China Robotics on Display

Telegram intel: footage from a Chinese robotics exhibition showing robots playing piano, a move from industrial automation into creative/expressive movement. Creative AI × physical robotics is converging faster than most Western labs are tracking. Watch this space for SORRYWECAN implications. Source: Telegram channels, 2026-03-13


Intelligence is polarizing. AI is sorting humans into two groups at speed: those who can direct agents with judgment, and those who can't. The EdTech infrastructure play, the IMF workforce split, the AGI timeline compression, the agentic org programs: all point to the same thing. The window to reposition is 12–18 months max. After that, the gap compounds permanently.

---

The Tutoring Gap – Khan Academy vs Karpathy's Eureka Labs. Who controls the personalization layer of education, and what's at stake.
Why Generalists Are Losing – The death of the generalist when AI can generalize. Why specialists win, and what kind of specialist to become.
The New Technical Class – Why agentic engineers earn 10x and what separates them from regular devs. Field Notes format.
AI Evaluation is Broken – Surface the LessWrong thesis for Research Lab audience. Benchmarks as civilizational Goodhart's Law.