The brain is, at least to me, an enigma wrapped in a mystery. People who are smarter than I am—a list that encompasses most humans, dogs, and possibly some species of yeast—have worked out many aspects of the brain. But some seemingly basic things, like how we remember, are still understood only at a very vague level. Now, by investigating a mathematical model of neural activity, researchers have found another possible mechanism to store and recall memories.
We know in detail how neurons function. Neurotransmitters, synapse firing, excitation, and suppression are all textbook knowledge. Indeed, we've abstracted these ideas to create black-box algorithms to help us ruin people's lives by performing real-world tasks.
We also understand the brain at a higher, more structural level: we know which bits of the brain are involved in processing different tasks. The vision system, for instance, is mapped out in exquisite detail. Yet the intermediate level between these two remains frustratingly vague. We know that a set of neurons might be involved in identifying vertical lines in our visual field, but we don't really understand how that recognition occurs.
Memory is hard
Likewise, we know that the brain can hold memories. We can even create and erase a memory in a mouse. But the details of how the memory is encoded are unclear. Our basic hypothesis is that a memory represents something that persists through time: a constant of sorts (we know that memories vary with recall, but they are still relatively constant). That means there should be something constant within the brain that holds the memory. But the brain is incredibly dynamic, and very little stays constant.
This is where the latest research comes in: it proposes abstract constants that may hold memories.
So, what constants have the researchers found? Let's say that a group of six neurons is networked via interconnected synapses. The firing of any particular synapse is completely unpredictable. Likewise, its influence on its neighbors' activity is unpredictable. So, no single synapse or neuron encodes the memory.
But hidden within all of that unpredictability is predictability that allows a neural network to be modeled with a relatively simple set of equations. These equations replicate the statistics of synapses firing very well (if they didn't, artificial neural networks probably wouldn't work).
A critical part of the equations is the weighting, or influence, of a synaptic input on a particular neuron. Each weighting varies randomly with time but can be strengthened or weakened by learning and recall. To study this, the researchers examined the dynamical behavior of a network, focusing on the so-called fixed points (or set points).
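To make that a bit more concrete, here is a rough sketch of the kind of rate-based network model being described. To be clear, these are not the researchers' actual equations: the handful of neurons, the tanh nonlinearity, and the random Gaussian weights are my own stand-ins. Each neuron's activity gets pulled toward a weighted sum of its neighbors' activity, and a fixed point is simply a state where that pull vanishes.

```python
import numpy as np

# A minimal sketch of a rate-based recurrent network (my assumptions: six
# neurons, a tanh nonlinearity, random Gaussian synaptic weights).  Each
# neuron's activity x_i relaxes toward the weighted sum of the activity of
# the neurons that synapse onto it:
#
#   dx_i/dt = -x_i + sum_j W_ij * tanh(x_j)
#
rng = np.random.default_rng(0)
n = 6                        # number of neurons in the toy network
g = 0.5                      # overall synaptic gain (weak coupling here)
W = g * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))   # synaptic weights

def dxdt(x):
    """Rate of change of the network state."""
    return -x + W @ np.tanh(x)

# Crude search for a fixed point: run the dynamics until they stop moving.
x = rng.normal(size=n)
for _ in range(10_000):
    x = x + 0.01 * dxdt(x)

print("approximate fixed point:", x)
print("residual |dx/dt|:", np.linalg.norm(dxdt(x)))
```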
Technically, you have to understand complex numbers to understand set points. But I have a shortcut. The world of dynamics is divided into stable things (like planets orbiting the Sun), unstable things (like rocks balanced on pointy sticks), and things that are utterly unpredictable.
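If you do want the complex numbers, they show up when you linearize the dynamics around a fixed point: the eigenvalues of the resulting Jacobian are complex, and their real parts tell you whether small nudges die out (stable) or grow (unstable, and in a nonlinear network like this, often the start of the utterly unpredictable kind). Here is a toy check along those lines; it is my own construction, using a larger random network than the six-neuron example so the stable/unstable split comes out clear-cut.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200   # a larger random network, so the stable/unstable split is clear-cut

def classify(g):
    """Classify the fixed point at x = 0 of dx/dt = -x + W tanh(x).

    Linearizing around x = 0 gives the Jacobian J = -I + W (because
    d/dx tanh(x) = 1 at x = 0).  Its eigenvalues are complex numbers:
    if every eigenvalue has a negative real part, small perturbations
    die away and the fixed point is stable; if any real part is
    positive, perturbations grow and the network never settles down.
    """
    W = g * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    eigvals = np.linalg.eigvals(-np.eye(n) + W)
    return "stable" if np.all(eigvals.real < 0) else "unstable"

print("weak synaptic gain   (g = 0.5):", classify(0.5))   # the orbiting planet
print("strong synaptic gain (g = 3.0):", classify(3.0))   # the rock on a stick
```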
Memory is plastic
Neurons are a weird combination of stable and unpredictable: their firing rates and patterns stay within certain bounds, but you can never know exactly when an individual neuron will fire. The researchers show that the characteristic that keeps the network stable does not store information for very long. However, the characteristic that drives unpredictability does store information, and it seems to be able to do so indefinitely.
The researchers demonstrated this by exposing their model to an input stimulus, which they found changed the network's fluctuations. Furthermore, the longer the model was exposed to the stimulus, the stronger the stimulus's influence was.
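I can't reproduce the paper's actual protocol from this description, but the flavor of the effect is easy to sketch: drive the same kind of toy network with a fixed input pattern and the statistics of its ongoing fluctuations shift, even though no individual trace becomes predictable. Everything in the sketch below beyond that basic idea (the stimulus strength, the variance-based comparison, and so on) is my own choice, not the researchers' method.

```python
import numpy as np

# A crude illustration, not the researchers' protocol: run a chaotic rate
# network with and without a constant input pattern and compare the size
# of each neuron's ongoing fluctuations.  The network size, gain, stimulus
# strength, and the variance-based summary are all my own choices.
rng = np.random.default_rng(1)
n, g, dt, steps = 200, 1.8, 0.05, 20_000
W = g * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))   # synaptic weights
stimulus = 0.5 * rng.normal(size=n)                       # fixed input pattern

def simulate(input_on):
    """Integrate dx/dt = -x + W tanh(x) (+ stimulus) and return the activity."""
    x = rng.normal(size=n)
    trace = np.empty((steps, n))
    for t in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + (stimulus if input_on else 0.0))
        trace[t] = x
    return trace[steps // 2:]        # discard the initial transient

# The activity stays irregular in both runs, but the statistics of the
# fluctuations shift once the stimulus is switched on.
spontaneous_var = simulate(input_on=False).var(axis=0)
driven_var = simulate(input_on=True).var(axis=0)
print("mean change in per-neuron fluctuation variance:",
      np.abs(driven_var - spontaneous_var).mean())
```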
The individual pattern of firing was still unpredictable, and there was no way to see the memory in the stimulus.