- Researchers developed a “DishBrain” system that connected neurons to a computer running the classic video game Pong.
- Within five minutes, the cells began “learning” and improved their performance.
- The mechanism of "learning" might involve the free-energy principle, according to which the brain seeks to minimize entropy (unpredictability) in its environment.
A new study published in the journal Neuron shows that networks of brain cells grown in a Petri dish can learn to play the arcade game Pong, demonstrating for the first time what the researchers call “synthetic biological intelligence.” The study was led by Brett Kagan of Cortical Labs, a biological computing startup based in Melbourne, Australia, that is integrating living brain cells with computer chips.
Teaching brain cells Pong
Kagan and his colleagues cultured cortical neurons dissected from the brains of embryonic mice, or human stem cells reprogrammed into neurons, on high-density micro-electrode array chips that can simultaneously record the cells’ electrical activity and stimulate them. On the chip, the cells mature and connect with each other to form neuronal networks that then exhibit spontaneous electrical activity.
The researchers developed their so-called “DishBrain” system by connecting the chip to a computer running the paddle and ball game. The chip provided the cells with feedback about the gameplay, such that they received a predictable electrical stimulus when the paddle made contact with the ball, and an unpredictable stimulus when it did not.
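The feedback rule described above can be sketched as a small function. This is an illustrative sketch only: the function name, stimulus encoding, and parameter values are assumptions for explanation, not the DishBrain system’s actual interface or settings.

```python
import random

def feedback_stimulus(paddle_hit_ball, rng=random):
    """Return the feedback stimulus for one rally outcome.

    Sketch of the protocol described in the article: a successful
    interception earns a predictable stimulus (same parameters,
    same site every time), while a miss earns an unpredictable one
    (random parameters at a random site). All values are placeholders.
    """
    if paddle_hit_ball:
        # Predictable feedback: identical stimulus on every hit
        return {"hz": 100, "electrode": 0}
    # Unpredictable feedback: randomized stimulus on every miss
    return {"hz": rng.randint(1, 500), "electrode": rng.randrange(8)}
```

The key design point is the asymmetry: the cells can only make their sensory input predictable by keeping the paddle on the ball.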
The cells began “learning” and improved their performance within five minutes of gameplay. With each successful interception of the ball, the synchronized “spikes” of electrical activity across the network increased in size. The more feedback they received, the more their performance improved. Under conditions in which they received no feedback at all, the networks completely failed to learn how to play the game.
The study shows that a single layer of neurons can organize and coordinate its activity toward a specific goal, and can learn and adapt its behavior in real time. Interestingly, the networks of human neurons outperformed those of mouse cells, which is consistent with earlier work suggesting that human neurons have a greater information processing capacity than those of rodents.
The researchers describe this “learning” in terms of the free-energy principle, according to which the brain seeks to minimize entropy, or unpredictability, in its environment.
Thus, the unpredictable stimuli delivered when the neuronal networks fail to intercept the ball increase the entropy within the system, and so the cells adapt their behavior in order to receive predictable stimuli. This, in turn, reduces entropy and minimizes uncertainty. That is, the networks learn to make the sensory outcomes of their behavior as predictable as possible.
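The entropy intuition can be made concrete with Shannon entropy: a stimulus that is always the same carries zero entropy, while one drawn uniformly from many possibilities carries the maximum. A minimal sketch (illustrative arithmetic, not the study’s analysis):

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return sum(-p * log2(p) for p in probs if p > 0)

# Predictable feedback: the same stimulus every time -> 0 bits
predictable = [1.0]

# Unpredictable feedback: one of 8 equally likely stimuli -> 3 bits
unpredictable = [1 / 8] * 8

print(entropy(predictable))    # 0.0
print(entropy(unpredictable))  # 3.0
```

Under the free-energy framing, behavior that keeps the paddle on the ball steers the network’s sensory input from the high-entropy distribution toward the zero-entropy one.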
The ability of neuronal networks to respond and adapt to environmental stimuli is the basis of learning in humans and other animals. The sensory stimulation delivered to the cells was far cruder than even a simple organism would receive. Nevertheless, the researchers say this is the first study to show this behavior in cultured neurons, and they suggest that their results demonstrate intelligence in vitro.
They added that their results confirm the importance of feedback from the environment about the consequences of actions, which appears vital for proper brain development. These processes may take place at the cellular level.
Brain in a box
Future work could reveal more about why human neurons have greater computational power than mouse cells, as well as provide a simulated model of biological learning. The DishBrain system could also be used in drug screening, to examine the cellular responses to new compounds, and to improve machine learning algorithms.