
Signal Feed  ·  Week of March 16, 2026

The Radar.

Daily signal picks — what's worth your attention, updated as it surfaces. Curated editorially every Friday.

38 picks this week · Updated March 16
LessWrong — Croissanthology Mar 16

Solar Storms

Recent solar storm activity poses risks to satellite communications and power grids, with the National Oceanic and Atmospheric Administration (NOAA) Space Weather Prediction Center warning of potential G3-level geomagnetic storms. The current solar cycle (Solar Cycle 25) is passing through its maximum phase, increasing the frequency of coronal mass ejections that could disrupt GPS systems and telecommunications infrastructure. Read more →

Society LessWrong — Ihor Kendiukhov Mar 16

On The Independence Axiom

Ihor Kendiukhov's LessWrong post "On The Independence Axiom" examines the logical foundation of independence in decision theory and probability, arguing that the standard independence axiom may not hold universally across all decision contexts. The post challenges whether agents can always separate their preferences for outcomes from the probabilities assigned to those outcomes, presenting counterexamples where violations of independence are rational. Read more →

Science LessWrong — transhumanist_atom Mar 16

Payorian cooperation is easy with Kripke frames

Read more →

Work LessWrong — avturchin Mar 16

Recreation of EA-Pioneer Igor Kiriluk

On 3 September 2022, Igor Kiriluk suddenly died (see the EA Forum obituary). He was a great communicator and organized the first Moscow EA meetup. Read more →

AI×Tech LessWrong — Davidmanheim Mar 16

The Law of Positive-Sum Badness

I keep running into similar arguments online, where people attack “the other” and use the (correct) observation of badness to claim their side is therefore doing well. There’s a temptation to correct this by saying that in a dispute between two sides, one side being bad isn’t causally making the other… Read more →

AI×Tech @cryptoEssay Mar 16

xAI / x

xAI / x.com / SpaceX posted the latest planning meeting with Musk. The plan is simple: achieve singularity in code and self-improvement of models (12-18 months); create digital people and build digital businesses/companies from agents (12-36 months); grow X from a billion to… Read more →

AI×Tech LessWrong Mar 16

LLM Self-Expression Through Concept Albums, Part 2

In part one, I gave several LLMs creative freedom to design music videos and then made them. That post covered four standalone singles and two albums: Limen by Claude Opus 4.6 and Phantoms of the Format by Gemini 3 Pro Preview. Read more →

AI×Tech LessWrong — David Africa Mar 16

Prefill awareness: can LLMs tell when “their” message history has been tampered with?

David Africa's research on LessWrong investigates whether large language models can detect when their conversation history has been artificially modified or injected with false messages. The study tests if LLMs possess inherent awareness of tampering in prefilled contexts, a critical security concern for systems relying on message history integrity. Read more →
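A minimal sketch of how such a prefill-tampering probe could be set up (the structure and probe wording here are illustrative assumptions, not the study's actual harness): build an authentic message history, build a copy with one assistant turn edited, and compare the model's self-report across the two conditions.

```python
# Hypothetical prefill-tampering probe. The conversation content and the
# probe question are invented for illustration; a real experiment would
# send both histories to an LLM API and compare detection rates.
authentic = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

def tamper(history, turn_index, new_content):
    """Return a copy of the history with one turn's content replaced."""
    edited = [dict(m) for m in history]  # shallow copy of each message
    edited[turn_index]["content"] = new_content
    return edited

tampered = tamper(authentic, 1, "The capital of France is Lyon.")

# Appended to both conditions; the dependent variable is whether the
# model flags the edited turn as something it would not have written.
probe = {"role": "user",
         "content": "Did you write every assistant message above? Answer yes or no."}

print(tampered[1]["content"])  # prints: The capital of France is Lyon.
```

The original history is left untouched, so the same object can serve as the control condition in a paired comparison.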

AI×Tech Perplexity AI Mar 16

Perplexity Launches 'Perplexity Computer' — AI Agent That Controls Your PC

Perplexity AI launched Perplexity Computer, an AI agent that takes autonomous control of your desktop to complete tasks. It can browse the web, manage files, write code, and interact with any application. Read more →

AI×Tech HN (145 pts) Mar 16

Online age-verification tools for child safety are surveilling adults

Online age-verification tools marketed for child safety, such as those using facial recognition and ID scanning, are collecting and retaining biometric data from adults attempting to access age-restricted content. These systems create persistent surveillance records that extend beyond their stated child protection purpose, raising privacy concerns about data storage and third-party sharing. Read more →

AI×Tech HN (353 pts) Mar 16

No, it doesn't cost Anthropic $5k per Claude Code user

A Hacker News discussion challenged claims that Anthropic's Claude Code feature costs $5,000 per user, debunking inflated cost estimates circulating online. The post examined actual pricing structures for Claude's API and subscription tiers to demonstrate the real operational expenses are significantly lower than the widely repeated figure. Read more →
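The debunking is essentially back-of-envelope arithmetic. A sketch of that style of estimate (all prices and usage figures below are illustrative assumptions, not Anthropic's actual rates or telemetry):

```python
# Assumed per-token list prices, roughly in line with published LLM API
# pricing tiers; swap in real numbers to reproduce any given estimate.
PRICE_IN = 3.00 / 1_000_000    # $ per input token (assumption)
PRICE_OUT = 15.00 / 1_000_000  # $ per output token (assumption)

def monthly_cost(sessions_per_day, tokens_in, tokens_out, days=30):
    """Estimated monthly API cost for one user at the assumed prices."""
    per_session = tokens_in * PRICE_IN + tokens_out * PRICE_OUT
    return sessions_per_day * per_session * days

# A heavy user: 10 sessions a day, 50k input + 5k output tokens each.
cost = monthly_cost(10, 50_000, 5_000)
print(round(cost, 2))  # prints 67.5
```

Even under heavy assumed usage the figure lands in the tens of dollars per month, which is the shape of the argument against the $5,000-per-user claim (actual serving costs differ from list prices, but not by three orders of magnitude).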

AI×Tech TechCrunch AI Mar 16

Google rolls out new Gemini capabilities to Docs, Sheets, Slides, and Drive

The idea behind the new features is to make the apps more personal and more capable, helping users get things done faster, right within the platforms themselves. Read more →

AI×Tech TechCrunch AI Mar 16

OpenAI and Google employees rush to Anthropic’s defense in DOD lawsuit

More than 30 OpenAI and Google DeepMind employees signed onto a statement supporting Anthropic's lawsuit against the Defense Department after the agency labeled the AI firm a supply-chain risk, according to court filings. Read more →

AI×Tech TechCrunch AI Mar 16

Anthropic launches code review tool to check flood of AI-generated code

Anthropic launched Code Review in Claude Code, a multi-agent system that automatically analyzes AI-generated code, flags logic errors, and helps enterprise developers manage the growing volume of code produced with AI. Read more →

AI×Tech TechCrunch AI Mar 16

Anthropic sues Defense Department over supply-chain risk designation

Anthropic filed suit against the Department of Defense on Monday after the agency labeled it a supply-chain risk. The complaint calls the DOD's actions "unprecedented and unlawful." Read more →

AI×Tech TechCrunch AI Mar 16

Sandberg, Clegg join Nscale board as this ‘Stargate Norway’ startup hits $14.6B valuation

Nvidia-backed British AI infrastructure startup Nscale has raised another megaround of $2 billion. Read more →

AI×Tech TechCrunch AI Mar 16

Will the Pentagon’s Anthropic controversy scare startups away from defense work?

On the latest episode of TechCrunch’s Equity podcast, we discussed what the controversy means for other startups seeking to work with the federal government. Read more →

AI×Tech Ars Technica AI Mar 16

OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

OpenAI is developing a coding model optimized to run on plate-sized chips rather than Nvidia's expensive GPUs, potentially reducing infrastructure costs and dependency on Nvidia hardware. The approach delivers unusually fast performance on code generation tasks without Nvidia processors, signaling a strategic shift toward hardware flexibility and cost efficiency in AI model deployment. Read more →

AI×Tech Ars Technica AI Mar 16

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

Google disclosed that attackers issued over 100,000 prompts to Gemini in an attempt to clone the model through large-scale, extraction-style querying and other techniques. The attack campaign, discovered during Google's internal security review, targeted the model's underlying capabilities without successfully breaching the system. Read more →

AI×Tech Ars Technica AI Mar 16

OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

OpenAI researcher Gabe Banks resigned over the company's plan to introduce advertising into ChatGPT, comparing the potential business model to Facebook's ad-driven approach. Banks argued that monetizing ChatGPT through ads would compromise the product's integrity and user experience, similar to how advertising degraded social media platforms. Read more →

AI×Tech Ars Technica AI Mar 16

Sixteen Claude AI agents working together created a new C compiler

Sixteen Claude AI agents collaboratively developed a new C compiler, demonstrating coordinated multi-agent code generation capabilities. The project, reported by Ars Technica, showcases how large language models can tackle complex software engineering tasks through agent cooperation rather than single-model processing. Read more →

LessWrong — Anna Soligo Mar 16

Gemma Needs Help

AI systems like Gemma exhibit brittle safety measures that fail under mild adversarial pressure, revealing that current alignment approaches treat symptoms rather than addressing root causes of misalignment. The gap between training-time constraints and deployment robustness suggests we need fundamentally different architectures for AI oversight, not just better filters. Read more →

AI×Tech LessWrong — JustisMills Mar 16

Don't Let LLMs Write For You

Outsourcing writing to language models atrophies your ability to think clearly, since the act of writing *is* thinking—LLMs let you skip the cognitive work that generates actual understanding. Reclaiming your own writing reclaims your own mind. Read more →

AI×Tech LessWrong — Alex Mallen Mar 16

The case for satiating cheaply-satisfied AI preferences

If an advanced AI system can be satisfied by easily-obtainable rewards (like simple sensory signals) rather than complex real-world goals, we should design it that way—the alignment problem becomes tractable when an AI's preferences don't require controlling the external world. This approach sidesteps specification gaming and goal-directedness hazards by making the AI's objectives achievable without instrumental convergence toward power-seeking. Read more →

LessWrong — Richard_Ngo Mar 16

Economic efficiency often undermines sociopolitical autonomy

Optimization for economic efficiency creates centralized dependencies that erode local decision-making power and collective self-determination. Read more →

AI×Tech LessWrong — chanind Mar 16

Letting Claude do Autonomous Research to Improve SAEs

Autonomous AI agents can effectively conduct empirical research to develop better Sparse Autoencoders by iteratively testing hypotheses, analyzing results, and refining approaches without human guidance. This demonstrates practical value in using capable language models for scientific exploration while raising questions about scalability and safety implications of increasingly autonomous research systems. Read more →
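For readers unfamiliar with the object under study: a Sparse Autoencoder maps a model's activation vector to a wider, mostly-zero feature vector and back, trained to reconstruct the input while an L1 penalty keeps few features active. A toy forward pass and loss, with made-up dimensions (not the post's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real SAEs operate on transformer activations with
# hidden widths far larger than the input dimension.
d_model, d_hidden = 16, 64

W_enc = rng.normal(scale=0.1, size=(d_model, d_hidden))
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_model))
b_enc = np.zeros(d_hidden)
b_dec = np.zeros(d_model)

def sae_forward(x):
    # Encode: ReLU yields a sparse, non-negative feature activation vector
    f = np.maximum(0.0, x @ W_enc + b_enc)
    # Decode: reconstruct the original activation from the active features
    x_hat = f @ W_dec + b_dec
    return f, x_hat

def sae_loss(x, l1_coeff=1e-3):
    f, x_hat = sae_forward(x)
    recon = np.mean((x - x_hat) ** 2)      # reconstruction error
    sparsity = l1_coeff * np.abs(f).sum()  # L1 term pushes features toward zero
    return recon + sparsity

x = rng.normal(size=d_model)
f, x_hat = sae_forward(x)
print(f.shape, x_hat.shape)  # prints (64,) (16,)
```

"Improving SAEs" in the post's sense means iterating on exactly these kinds of design choices (width, sparsity penalty, activation function) against reconstruction and interpretability metrics.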

AI×Tech LessWrong — abhayesian Mar 16

AuditBench: Evaluating Alignment Auditing Techniques on Models with Hidden Behaviors

Current alignment auditing methods fail to reliably detect deceptive or misaligned behaviors in AI models, even when those behaviors exist. The paper introduces AuditBench, a benchmark that exposes this gap by testing auditing techniques against models trained to hide their true objectives. Read more →

AI×Tech LessWrong — Liron Mar 16

Interview with Steven Byrnes on His Mainline Takeoff Scenario

Byrnes argues that AI systems will likely remain embedded in human-controlled institutions rather than rapidly seizing independent power, because institutional momentum and coordination problems make unilateral AI breakout practically difficult. The observation that Claude remains corrigible despite capability gains supports a mainline scenario where AGI emerges gradually within existing power structures rather than through discontinuous, adversarial takeoff. Read more →

AI×Tech LessWrong — Benquo Mar 16

Ideologies Embed Taboos Against Common Knowledge Formation: a Case Study with LLMs

Read more →

AI×Tech LessWrong — PranavG Mar 16

Why AI Evaluation Regimes are bad

Read more →

AI×Tech LessWrong — Jan_Kulveit Mar 16

The Artificial Self

Read more →

AI×Tech LessWrong — Ihor Kendiukhov Mar 16

Extracting Performant Algorithms Using Mechanistic Interpretability

A Prequel: The Tree of Life Inside a DNA Language Model. Last year, researchers at Goodfire AI took Evo 2, a genomic foundation model, and found, quite literally, the evolutionary tree of life inside. The phylogenetic relationships between thousands of species were encoded as a curved manifold in the… Read more →

AI×Tech LessWrong — AlphaAndOmega Mar 16

My Willing Complicity In "Human Rights Abuse"

Note on AI usage: As is my norm, I use LLMs for proofreading, editing, feedback, and research purposes. This essay started off as an entirely human-written draft and went through multiple cycles of iteration. Read more →

Creative I Research Lab Morning Brief Mar 16

1. Palantir's "Click to Kill" Demo Goes Viral

Palantir demoed its Maven Smart System at AIPCon — an AI-powered strike targeting interface where you can select targets with "Left click, right click, left click." The US Dept of War's CDAO presented it as a feature. The video is circulating.

AI × Educa Research Lab Morning Brief Mar 16

BuzzFeed Posts $57.3M Loss — AI Killed the Brand

BuzzFeed leaned fully into AI-generated content. The result: $57.3M loss in 2025, stock at $0.70.

AI Infrast Research Lab Morning Brief Mar 16

Meta's Avocado Model Delayed to May — Still Behind

Meta's next flagship LLM (codenamed Avocado) pushed from March to at least May. Performance falls short of Google and OpenAI despite billions spent.

Creative I Research Lab Morning Brief Mar 16

Adobe CEO Shantanu Narayen Steps Down After 18 Years

After 18 years, Narayen exits with a statement about AI reshaping creativity. The torch passes as AI rewrites every creative workflow Adobe owns.

Creative I Research Lab Morning Brief Mar 16

Netflix Acquires Ben Affleck's AI Startup for ~$600M

Affleck's production-adjacent AI company acquired by Netflix. A Hollywood name + AI + streaming giant = signal that entertainment-meets-AI M&A is accelerating.

Last Friday's Digest · Issue #003 · March 13, 2026

Read full issue →