Implementing Spiking Neural Networks with the Biological Complexity of Cortical Neurons
Abstract: Human cortical neurons can contain up to 30,000 synapses each, although an average of 10,000 synapses per neuron is frequently cited. Setting aside the extraordinary nonlinear computations performed by each biological neuron, we raise a more basic question: Assuming a leaky integrate-and-fire model for both biological and artificial neurons, can a large biological neuron be modeled with a functionally-correct network of significantly smaller artificial analog neurons, each with fewer synapses? If so, is there a significantly increased cost in either hardware complexity or computational delay when using these smaller neurons in an artificial neural network compared to a biologically-sized neuron? This presentation addresses that pair of questions.
We assume that each spike arriving at the neuron produces an analog postsynaptic potential that decays over time. When the accumulated potential resulting from all presynaptic spikes reaches a threshold, the neuron spikes. We also assume that the timing of spike arrivals is not synchronous. We further assume that the threshold for firing is inversely proportional to the rate of change of the membrane potential, so that spikes arriving more closely in time cause a postsynaptic spike at a lower threshold. We make the case that these assumptions lead to a virtually unlimited number of combinations of presynaptic spike arrivals that can cause a neuron with many synapses to spike. Covering all of these combinations with simpler neurons could lead to a combinatorial explosion in the number of simpler neurons in the replacement network.
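The neuron model described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: postsynaptic potentials decay exponentially with an assumed time constant `tau`, and the firing threshold is lowered in proportion to the rate of change of the membrane potential via an assumed form `theta = theta0 / (1 + k * dV/dt)`. All parameter values (`tau`, `theta0`, `k`) are illustrative choices, not values from the abstract.

```python
import math

def simulate_lif(spike_times, weights, tau=20.0, theta0=1.0, k=0.1,
                 dt=0.1, t_max=100.0):
    """Leaky integrate-and-fire neuron with exponentially decaying
    postsynaptic potentials and a dynamic threshold that falls as the
    membrane potential rises faster. Returns the postsynaptic spike
    time, or None if no spike occurs before t_max.

    The threshold rule theta = theta0 / (1 + k * max(dV/dt, 0)) is an
    assumed concrete form of "threshold inversely proportional to the
    rate of change of the membrane potential"."""
    v_prev = 0.0
    for i in range(int(t_max / dt) + 1):
        t = i * dt
        # Accumulated potential: sum of decaying PSPs from spikes that
        # have already arrived (asynchronous arrival times allowed).
        v = sum(w * math.exp(-(t - ts) / tau)
                for ts, w in zip(spike_times, weights) if ts <= t)
        dv_dt = (v - v_prev) / dt
        # Spikes clustered in time raise v quickly, lowering the
        # threshold, so the neuron fires at a lower accumulated potential.
        theta = theta0 / (1.0 + k * max(dv_dt, 0.0))
        if v >= theta:
            return t
        v_prev = v
    return None
```

Under these assumed parameters, three subthreshold inputs arriving within a fraction of a millisecond of one another can fire the neuron, while the same three inputs spread tens of time units apart decay away without ever crossing the (now higher) threshold, which is the timing sensitivity the abstract argues makes the space of spike-causing input combinations so large.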