Neural networks learn more when they are given time to sleep

Neural networks used in cutting-edge artificial intelligence systems may benefit from the occasional “sleep”, say researchers.
Writing in the November issue of PLOS Computational Biology, senior author Maxim Bazhenov, Professor of Medicine and a sleep researcher at the University of California San Diego School of Medicine, and colleagues discuss how biologically inspired models may help artificial neural networks avoid the threat of “catastrophic forgetting”, making them more useful across a wide range of applications.
The scientists used spiking neural networks, which mimic natural neural systems. They found that when the spiking networks were trained on a new task with occasional offline periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, “sleep” allowed the networks to replay old memories without explicitly using old training data, say the study authors.
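The study's actual model is a spiking network whose offline phases run local, unsupervised plasticity. As a loose, minimal sketch of just the interleaving idea, the toy Python below alternates supervised training on a new task with noise-driven “sleep” phases that reinforce existing weight patterns without revisiting any stored data. Every name, parameter, and the rate-based network itself are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network. The study uses spiking networks; this rate-based
# stand-in only illustrates interleaving training with offline "sleep".
W1 = rng.normal(0.0, 0.1, (20, 10))    # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (10, 2))     # hidden -> output weights

def forward(x):
    h = np.maximum(0.0, x @ W1)        # ReLU hidden activity
    return h, h @ W2

def train_step(x, y, lr=0.01):
    """One supervised squared-error gradient step."""
    global W1, W2
    h, out = forward(x)
    err = out - y
    dh = (err @ W2.T) * (h > 0)        # backprop through current W2
    W2 = W2 - lr * np.outer(h, err)
    W1 = W1 - lr * np.outer(x, dh)

def sleep_phase(steps=50, lr=0.001):
    """Offline phase: drive the network with spontaneous noise and apply a
    Hebbian update, so patterns already wired into the weights get
    'replayed' and reinforced without any explicit old training data."""
    global W1
    for _ in range(steps):
        x = rng.normal(0.0, 1.0, 20)   # spontaneous activity, no labels
        h = np.maximum(0.0, x @ W1)
        W1 = W1 + lr * np.outer(x, h)  # co-activity strengthens synapses
        W1 = W1 * 0.999                # mild decay keeps weights bounded

# Learn toy task A first, then task B with occasional interleaved "sleep".
task_a = [(rng.normal(0, 1, 20), np.array([1.0, 0.0])) for _ in range(100)]
task_b = [(rng.normal(0, 1, 20), np.array([0.0, 1.0])) for _ in range(100)]

for x, y in task_a:
    train_step(x, y)
for i, (x, y) in enumerate(task_b):
    train_step(x, y)
    if i % 10 == 9:
        sleep_phase()                  # offline period mimicking sleep
```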
“The brain is very busy when we sleep, repeating what we have learned during the day,” says Bazhenov. “Sleep helps reorganise memories and presents them in the most efficient way.”
In previously published work, Bazhenov and colleagues reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.
Neural networks have superhuman speed but are forgetful
Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, such as computational speed, they have achieved superhuman performance, but they fail in one key respect: when artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called “catastrophic forgetting”.
“In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” says Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”
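To make the failure mode concrete, here is a hypothetical toy demonstration, not from the paper: a logistic-regression model is trained on one task and then on a second task whose rule is deliberately the reverse, so plain sequential training wipes out the first task.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(sign, n=200):
    """Toy binary task: the label depends on the sign of feature 0.
    Task B (sign=-1) deliberately flips task A's rule, so sequential
    training must overwrite what was learned first."""
    X = rng.normal(0.0, 1.0, (n, 5))
    y = (sign * X[:, 0] > 0).astype(float)
    return X, y

def fit(w, X, y, lr=0.1, epochs=50):
    """Plain full-batch logistic-regression training, no replay, no
    protection of old weights."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

Xa, ya = make_task(+1)
Xb, yb = make_task(-1)

w = fit(np.zeros(5), Xa, ya)
print("task A accuracy after learning A:", accuracy(w, Xa, ya))  # near 1.0
w = fit(w, Xb, yb)
print("task B accuracy after learning B:", accuracy(w, Xb, yb))  # near 1.0
print("task A accuracy after learning B:", accuracy(w, Xa, ya))  # collapses
```

In this toy the two tasks conflict directly, so the forgetting is total by construction; in deep networks a similar collapse occurs even between tasks that are not inherently contradictory, simply because new gradients overwrite shared weights.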
When we learn new information, neurons fire in a specific order, and this strengthens the synapses between them, says Bazhenov. During sleep, the spiking patterns learned during our awake state are repeated spontaneously in a process called reactivation or replay.
“Synaptic plasticity, the capacity to be altered or moulded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”
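A common way spiking models formalise this kind of plasticity is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens in the opposite order. The sketch below applies a standard exponential STDP rule to a single replayed firing sequence; the constants and the three-neuron sequence are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal STDP rule applied to a spontaneously replayed spike sequence.
# Pre-before-post firing strengthens a synapse; post-before-pre weakens it.
A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # illustrative constants (ms)

def stdp(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:                              # pre fired first: potentiate
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)      # post fired first: depress

# A learned firing order, replayed during 'sleep': neuron 0 -> 1 -> 2.
spike_times = {0: 5.0, 1: 10.0, 2: 15.0}   # ms

W = np.full((3, 3), 0.5)                   # all-to-all synaptic weights
np.fill_diagonal(W, 0.0)

for pre, t_pre in spike_times.items():
    for post, t_post in spike_times.items():
        if pre != post:
            W[pre, post] += stdp(t_post - t_pre)

print(np.round(W, 3))
# Forward synapses along the replayed order (0->1, 1->2, 0->2) strengthen,
# reverse synapses weaken, etching the sequence more deeply into the weights.
```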
When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.
“It meant that these networks could learn continuously, like humans or animals,” he says. “Understanding how the human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.”