
Frontiers  ·  Neuroscience & Computing

They Said Brain Cells Learned to Play Doom. The Truth Is Stranger.

The headline circulated everywhere: neurons in a dish, playing Doom. That's not quite right. What actually happened in that Melbourne lab in 2022 is more unsettling — and more important than any game.

[Illustration: abstract neural network, cobalt ink on cream]

Sometime in 2022, a headline started moving across the internet: brain cells in a petri dish had learned to play Doom. The image was irresistible: living neurons, the classic "can it run Doom?" meme made biological, consciousness emerging from a lab bench. The headline was wrong. The actual story was more interesting.

The real paper arrived in October 2022 in the journal Neuron, titled "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world." The researchers at Cortical Labs in Melbourne had grown 800,000 human and mouse neurons on a multi-electrode array and connected them not to Doom but to a simulation of Pong. The neurons received electrical signals telling them where the ball was. They sent back signals that moved the paddle. Within five minutes, they were playing.

Not Doom. Pong. And yet the Doom headline persisted because the Doom headline was correct in the way that matters: something that should not be able to learn, learned. Something that should not be able to respond to its environment, responded. The game was beside the point.

This was not a metaphor for learning. The neurons were not trained in any conventional sense. There was no gradient descent, no backpropagation, no loss function being minimized. The system received a burst of unpredictable random stimulation when the paddle missed, and predictable feedback, or simple silence, when it connected. The working theory, drawn from the free energy principle, is that neurons reorganize their activity to make their sensory input more predictable, and missing the ball made the world unpredictable. The neurons responded to that feedback by reorganizing their firing patterns. They got better. They played Pong. The lead researcher, Brett Kagan, named the system DishBrain. The name was meant to be slightly tongue-in-cheek. It stopped being funny quickly.
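The shape of that loop is simple enough to sketch. Below is a toy rendering in Python, with stubs standing in for the living culture; the electrode counts, encodings, and feedback parameters are illustrative assumptions, not values from the paper.

```python
import random

# Toy sketch of a DishBrain-style closed loop, not Cortical Labs' code.
# Electrode counts, encodings, and parameters are assumptions.

N_SITES = 8  # assumed number of stimulation sites encoding ball height

def stimulate(site, predictable):
    """Stub: deliver one stimulus. A real system drives MEA electrodes here."""
    pass

def read_motor_activity():
    """Stub: spike counts from two 'motor' electrode regions."""
    return random.randint(0, 10), random.randint(0, 10)

ball_y, paddle_y = 0.5, 0.5

for step in range(1000):
    # 1. Sensory coding: the ball's height selects which site is stimulated.
    stimulate(site=int(ball_y * (N_SITES - 1)), predictable=True)

    # 2. Motor readout: relative activity in two regions moves the paddle.
    up, down = read_motor_activity()
    paddle_y = min(1.0, max(0.0, paddle_y + (0.05 if up > down else -0.05)))

    # 3. Outcome-dependent feedback: a predictable stimulus on a hit,
    #    an unpredictable random barrage on a miss.
    ball_y = random.random()  # toy ball; a real game integrates physics
    if abs(ball_y - paddle_y) < 0.2:
        stimulate(site=0, predictable=True)
    else:
        for _ in range(4):
            stimulate(site=random.randrange(N_SITES), predictable=False)
```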

What an Organoid Actually Is

Brain organoids are not brains. That distinction matters, and the field is careful about it. They are clusters of human neurons, typically grown from induced pluripotent stem cells, that self-organize into three-dimensional structures with some of the properties of developing brain tissue. They form synapses. They exhibit coordinated electrical activity. Some develop rudimentary layers resembling cortical organization.

The largest organoids today contain roughly 2.5 million neurons. A human brain has 86 billion. A mouse brain has 70 million. Current organoids sit somewhere between the complexity of an insect nervous system and something harder to categorize. They are not sophisticated enough to perceive. They are sophisticated enough to respond to their environment in ways that are not random.

That gap is where almost everything interesting happens.

The Energy Problem Silicon Cannot Solve

Training GPT-4 consumed an estimated 50 gigawatt-hours of electricity. The human brain runs on approximately 20 watts — less than a dim light bulb — while performing tasks that no current AI system can replicate. This gap is not a temporary engineering problem. It is a fundamental consequence of how silicon computation works.

Transistors switch between discrete states. They are fast and reliable and precise. They are also profoundly wasteful relative to what they compute. Biological neurons do something different. They are analog, probabilistic, and adaptive. They change their connection strengths in response to experience. They process information through the timing of spikes, not just the presence or absence of a signal. The architecture is not just more efficient — it is a different kind of computation entirely.
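To make "a different kind of computation" concrete, here is the standard textbook abstraction of that analog behavior, a leaky integrate-and-fire neuron: membrane voltage leaks toward rest, integrates input, and fires a spike on crossing a threshold. This is a sketch of the principle, not a model of any organoid; the parameter values are conventional illustrations.

```python
# A leaky integrate-and-fire neuron. Parameter values are conventional
# textbook illustrations, not measurements from any organoid.

V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
TAU, R, DT = 10.0, 10.0, 0.1                      # ms, megaohms, ms

def simulate(input_current, v=V_REST):
    """Integrate input over time; emit a spike when threshold is crossed."""
    spikes = []
    for t, i_in in enumerate(input_current):
        # Voltage leaks toward rest while integrating the input current.
        v += DT * (-(v - V_REST) + R * i_in) / TAU
        if v >= V_THRESH:          # threshold crossing: fire, then reset
            spikes.append(t * DT)
            v = V_RESET
    return spikes

# A constant 2 nA drive yields a regular spike train; information can
# live in the timing of those spikes, not just their presence.
print(simulate([2.0] * 1000))
```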

A human neuron uses approximately 10 femtojoules per synaptic operation. A state-of-the-art transistor uses roughly 1,000 times more energy to perform a comparable operation. The gap is not closing. It is widening as AI models scale and their energy demands grow exponentially.
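Those figures are easy to sanity-check. A back-of-envelope calculation using only the numbers quoted above:

```python
# Back-of-envelope arithmetic from the figures quoted in this article.

FJ = 1e-15                          # joules per femtojoule
SYNAPTIC_OP = 10 * FJ               # ~10 fJ per biological synaptic operation
BRAIN_POWER = 20.0                  # watts
GPT4_TRAINING_J = 50e9 * 3600       # 50 GWh expressed in joules

# At 10 fJ per operation, a 20 W budget buys ~2e15 synaptic events per second.
ops_per_second = BRAIN_POWER / SYNAPTIC_OP
print(f"{ops_per_second:.1e} synaptic ops/s on a 20 W budget")

# The GPT-4 training estimate would power a 20 W brain for ~285,000 years.
years = GPT4_TRAINING_J / BRAIN_POWER / (3600 * 24 * 365)
print(f"GPT-4's estimated training energy = {years:,.0f} brain-years at 20 W")
```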

FinalSpark, a Swiss biocomputing company, has been building what it calls the Neuroplatform: a system that uses biological neurons as the actual computational substrate. Its published benchmarks claim energy consumption 1,000 times lower than equivalent silicon-based computation. The neurons are kept alive in a controlled environment, connected to electrodes, and tasked with processing information. This is not a research project. It is a company with customers.
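What "tasked with processing information" means in practice is an encode-stimulate-read loop. The sketch below is emphatically not FinalSpark's actual API; every class, name, and parameter in it is invented, a hypothetical minimal shape for a wetware interface: write an input as stimulation, record the culture's spikes, treat the response as the output.

```python
import random

# Hypothetical sketch only: not FinalSpark's API. All names and
# parameters here are invented for illustration.

class FakeCulture:
    """Stands in for a living culture on a 32-electrode array."""

    def stimulate(self, electrode: int, amplitude_uv: float) -> None:
        pass  # a real platform drives MEA hardware here

    def read_spikes(self, window_ms: float) -> list[tuple[int, float]]:
        # Fake recorded spikes: (electrode index, timestamp in ms).
        return [(random.randrange(32), random.uniform(0, window_ms))
                for _ in range(50)]

def classify(culture, pattern: list[float]) -> int:
    """Encode an input as stimulation; read the culture's response as output."""
    for electrode, amplitude in enumerate(pattern):
        culture.stimulate(electrode, amplitude)
    spikes = culture.read_spikes(window_ms=200.0)
    left = sum(1 for e, _ in spikes if e < 16)  # crude spatial readout
    return 0 if left > len(spikes) - left else 1

print(classify(FakeCulture(), [10.0] * 32))
```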

The technical challenges are significant. Neurons die. They require nutrients and temperature regulation and careful maintenance. The interface between biological tissue and electronic systems introduces noise. Reproducibility is difficult because no two neuronal cultures are identical. But none of these are arguments that the approach is wrong. They are arguments that it is hard.

Johns Hopkins and the Formal Birth of a Field

In February 2023, Thomas Hartung and colleagues at Johns Hopkins published a paper in Frontiers in Science that did something important: it named a field. "Organoid intelligence" — OI — was proposed as the formal term for the emerging discipline of using brain organoids as computational substrates. The paper was not a technical report on a specific experiment. It was a research agenda, a map of what the field needed to build, and an argument for why it was worth building.

Hartung's case rests on several pillars. Biological neural networks learn with orders of magnitude less data than artificial ones. They generalize across contexts. They are energy efficient. And — critically — they might eventually allow researchers to model human-specific neurological conditions in ways that animal models cannot, because the neurons are human.

The clinical applications alone justify the investment. Organoids grown from the cells of patients with Alzheimer's or autism or schizophrenia carry those patients' actual genetic profiles. Testing interventions on them is not a simulation of the disease. It is the disease, in a dish, at reduced scale. That is a fundamentally different research tool than anything that existed before.

The Question Nobody Wants to Ask

If a cluster of neurons can learn to play Pong, the question that follows is not comfortable: at what point, if any, does a biological neural system in a laboratory setting have an experience? The question does not need a definitive answer to be worth asking.

The honest answer is that nobody knows. The field does not have a theory of consciousness robust enough to give a confident answer, and it is not clear that one is coming soon. What researchers do have is a growing ability to build systems that exhibit behaviors associated with experience — learning, adaptation, response to feedback — without any certainty about whether anything is happening on the inside.

The Johns Hopkins OI roadmap explicitly includes ethical considerations as a core research priority, not an afterthought. Hartung has said publicly that the field must develop ethical frameworks before the systems become sophisticated enough to make those frameworks urgent. That is the right sequence. It is also an acknowledgment that the sequence might not hold.

Kagan, the DishBrain researcher, has been careful about the language. He does not claim that DishBrain is sentient. The paper has "sentience" in its title because, under the operational definition his framework used, the neurons' behavior qualified. The distinction between "meets the operational criteria for X" and "is X" is precisely the kind of distinction that gets lost when the technology moves faster than the philosophy.

The Third Path

The dominant narrative in computing is binary. Silicon scales until it doesn't, and then quantum computing arrives to take over. Both paths are real and both are being pursued with serious resources. But neither addresses the fundamental mismatch between how transistors work and how biological intelligence works.

Organoid intelligence is a third path. It takes the actual substrate of biological cognition, neurons, and asks what happens when you put that substrate in conversation with digital systems. The answer, so far, is that things happen that nobody fully predicted. Neurons play Pong. Neurons process information at a claimed 1,000 times the efficiency of silicon. Neurons do something that looks like learning without an engineered training signal: no labels, no loss function, only feedback from the game.

The field is young. The organoids are small. The interfaces are crude. The reproducibility problems are real. But the direction of travel is clear: biological computation is not a metaphor for a different kind of AI. It is a different kind of computation, running on the same hardware evolution spent 500 million years optimizing.

That hardware learned to play Pong in five minutes from noise and silence.

The question is not whether that matters. The question is how ready we are to find out how much.

Sources

Kagan, B.J. et al. "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world." Neuron, 2022. doi.org/10.1016/j.neuron.2022.09.001

Smirnova, L. et al. "Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish." Frontiers in Science, 2023. doi.org/10.3389/fsci.2023.1017235

Cai, H. et al. "Brain organoid reservoir computing for artificial intelligence." Nature Electronics, 2023.

FinalSpark Neuroplatform technical documentation, 2024. finalspark.com
