A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks
Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps manifesting as a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a “dynamic leak”, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input with higher fidelity than a wide range of spiking neuron models across a range of input frequencies. We then constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs of varying frequency, the recurrent network activity tended to induce high-frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high-frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal, and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high-frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.
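As a loose illustration of the idea summarized above, the sketch below implements a linear-summation, non-spiking unit in which the “dynamic leak” is modeled as a first-order (RC) low-pass filter applied to the summed synaptic input. The function name, the parameter values (tau, dt), and the discrete-time update rule are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def simulate_nonspiking_neuron(inputs, weights, tau=0.02, dt=0.001, dynamic_leak=True):
    """Illustrative sketch of a linear-summation, non-spiking neuron.

    inputs       : array of shape (T, n_synapses), presynaptic activity over time
    weights      : array of shape (n_synapses,), signed synaptic weights
    tau          : membrane (RC) time constant in seconds (assumed value)
    dt           : simulation time step in seconds (assumed value)
    dynamic_leak : if True, low-pass filter the summed input, mimicking the RC
                   filtering of synaptic input current by the biological membrane
    """
    T = inputs.shape[0]
    activity = np.zeros(T)
    a = 0.0
    for t in range(T):
        drive = float(np.dot(weights, inputs[t]))  # linear summation of weighted inputs
        if dynamic_leak:
            # First-order (exponential) low-pass filter: da/dt = (drive - a) / tau,
            # integrated here with a simple forward-Euler step.
            a += (drive - a) * (dt / tau)
        else:
            # Without the dynamic leak, the output tracks the summed input instantaneously.
            a = drive
        activity[t] = a
    return activity

# Usage example: random weights and pseudorandom inputs, loosely echoing the
# abstract's setup of randomly weighted synapses driven by pseudorandom signals.
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 50))
w = rng.uniform(-1.0, 1.0, size=50)
out_filtered = simulate_nonspiking_neuron(x, w, dynamic_leak=True)
out_unfiltered = simulate_nonspiking_neuron(x, w, dynamic_leak=False)
```

In this sketch, disabling the leak makes the unit a pure weighted sum, whereas enabling it attenuates high-frequency components of the drive, which is the mechanism the abstract credits with suppressing spurious high-frequency activity in recurrent networks.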