When studying how behaviors or cognitive abilities are represented in the brain, neuroscientists often examine them one by one. However, brains don’t work that way. We know intuitively that learning builds on itself – you probably couldn’t have learned algebra without first grasping arithmetic.
A new study suggests how such additive learning might happen. The researchers trained a neural network on multiple tasks simultaneously and found that it broke the tasks down into modular patterns of computation, then recombined those patterns in different ways as each task demanded.
These computational patterns, or “motifs,” essentially work like Lego bricks for learning. They point to how neurons might form computational units within a large network, says Laura N. Driscoll, a senior scientist at the Allen Institute for Neural Dynamics.
The researchers used a type of neural network called a recurrent neural network (RNN), in which a node’s outputs don’t just flow forward but can feed back into its inputs. By analyzing the network as it learned, they identified the computations that were reused across tasks.
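For readers who want the mechanics, here is a minimal sketch of a single RNN update step in PyTorch: the hidden state produced at one time step feeds back in at the next, which is the feedback loop described above. The sizes, weights, and function names are illustrative, not taken from the study.

```python
import torch

# A minimal vanilla RNN step: the hidden state h feeds back into itself,
# so past computation influences the present. Sizes are hypothetical.
n_in, n_hidden = 10, 64
W_in = torch.randn(n_hidden, n_in) * 0.1       # input weights
W_rec = torch.randn(n_hidden, n_hidden) * 0.1  # recurrent (feedback) weights
b = torch.zeros(n_hidden)

def rnn_step(h, x):
    # The new state depends on the previous state (the feedback loop)
    # as well as the current input.
    return torch.tanh(W_rec @ h + W_in @ x + b)

h = torch.zeros(n_hidden)
for t in range(5):          # unroll over a short input sequence
    x = torch.randn(n_in)
    h = rnn_step(h, x)
```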
After learning multiple tasks, the network was able to reuse and recombine these motifs to accomplish similar new tasks later on, assembling the computational elements additively. For example, if two different tasks required working memory, the network would reuse the specific motif that stored the needed memory in both of them, Driscoll explains.
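One common way to train a single network on many tasks, and a plausible reading of the setup described here, is to give the network a “rule” cue identifying the current task alongside the stimulus, so the same recurrent core can serve every task. The sketch below assumes that design; the dimensions and names are hypothetical, not the paper’s code.

```python
import torch

# One shared recurrent network serves several tasks; a one-hot "rule" cue
# tells it which task to perform. All sizes here are illustrative.
n_stim, n_tasks, n_hidden, n_out = 8, 4, 64, 3
rnn = torch.nn.RNN(input_size=n_stim + n_tasks,
                   hidden_size=n_hidden, batch_first=True)
readout = torch.nn.Linear(n_hidden, n_out)

def run_trial(stimulus, task_id):
    # stimulus: (batch, time, n_stim); the task cue stays on all trial.
    batch, T, _ = stimulus.shape
    cue = torch.zeros(batch, T, n_tasks)
    cue[:, :, task_id] = 1.0
    states, _ = rnn(torch.cat([stimulus, cue], dim=-1))
    return readout(states)  # (batch, time, n_out)

out = run_trial(torch.randn(2, 50, n_stim), task_id=1)
```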
The idea that different tasks can co-opt the same fixed points in a neural network, or the same groups of neurons in the brain, reflects an emerging concept that neuroscientists call compositionality.
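A fixed point is a hidden state that the network’s dynamics leave essentially unchanged; a standard way to locate fixed points numerically is to minimize the “speed” of the dynamics by gradient descent from many starting states. The sketch below illustrates that general analysis technique with random placeholder weights; it is not the study’s own code.

```python
import torch

# Find an approximate fixed point h* where the update F(h) leaves h
# unchanged, by minimizing the speed q(h) = ||F(h) - h||^2 under a
# frozen input. Weights are random placeholders, not trained ones.
n_hidden = 64
W_rec = torch.randn(n_hidden, n_hidden) * 0.1
x_frozen = torch.zeros(n_hidden)   # constant input during the search

def F(h):
    return torch.tanh(W_rec @ h + x_frozen)

h = torch.randn(n_hidden, requires_grad=True)
opt = torch.optim.Adam([h], lr=0.01)
for _ in range(2000):
    opt.zero_grad()
    speed = ((F(h) - h) ** 2).sum()   # exactly zero at a fixed point
    speed.backward()
    opt.step()
# h now approximates a fixed point; repeating from different starting
# states can reveal the network's other fixed points.
```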
Understanding how a neural network breaks tasks down into components may shed light on how the brain does so. The big hypothesis is that curriculum learning happens through reuse of these motifs, Driscoll says. If so, the order in which an animal learns certain tasks would affect both how quickly it learns and the strategies it employs.
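As a toy illustration of how that curriculum hypothesis could be probed in silico, one might train a shared model on tasks in a chosen order and measure how many steps each task takes to reach criterion, then reorder the curriculum and compare. Everything in the sketch below (the stand-in model, the toy tasks, and the criterion) is hypothetical.

```python
import torch

# Toy curriculum experiment: train one shared model on tasks in a fixed
# order and record steps-to-criterion per task. The model, tasks, and
# threshold are made-up stand-ins for illustration only.
torch.manual_seed(0)
net = torch.nn.Linear(4, 4)           # stand-in for the shared RNN
opt = torch.optim.SGD(net.parameters(), lr=0.1)
targets = {t: torch.randn(4) for t in range(3)}  # one toy "task" per id

steps_to_learn = {}
for task_id in [0, 1, 2]:             # the curriculum: reorder to compare
    for step in range(10_000):
        opt.zero_grad()
        loss = ((net(torch.ones(4)) - targets[task_id]) ** 2).mean()
        loss.backward()
        opt.step()
        if loss.item() < 1e-3:        # learning criterion reached
            steps_to_learn[task_id] = step
            break
print(steps_to_learn)
```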
The work raises open but testable questions about how computations in RNNs might translate to animals’ brains. Motifs develop when a network – and maybe an animal – learns specific tasks, but it’s unclear how they would come into play in a more naturalistic setting.
Source: https://www.thetransmitter.org/computational-neuroscience/neural-network-analysis-posits-how-brains-build-skills/