Stimulating STDP to Exploit Locality for Lifelong Learning without Catastrophic Forgetting
Stochastic gradient descent requires that training samples be drawn independently and at random from the underlying data distribution (i.i.d.). For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a significant challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule to target subsets of a segmented Spiking Neural Network (SNN) to adapt to novel information while protecting the information in the remainder of the SNN from catastrophic forgetting. In our system, novel information triggers stimulated firing, inspired by biological dopamine signals, to boost STDP in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces accuracy degradation while learning new information. Our preliminary results on the MNIST dataset validate the capability of such a system to learn successfully over time from an unknown, changing environment, achieving up to 93.88% accuracy on a completely disjoint dataset.
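For context, the locality the abstract refers to stems from the form of the STDP rule itself: each synaptic update depends only on the relative spike times of the synapse's own pre- and postsynaptic neurons. A common pair-based formulation (a standard textbook form, not necessarily the exact variant used in this work) is:

```latex
\Delta w =
\begin{cases}
  A_{+}\, e^{-\Delta t / \tau_{+}}, & \Delta t > 0 \quad \text{(pre before post: potentiation)} \\
  -A_{-}\, e^{\Delta t / \tau_{-}}, & \Delta t < 0 \quad \text{(post before pre: depression)}
\end{cases}
```

where $\Delta t = t_{\text{post}} - t_{\text{pre}}$, $A_{\pm}$ are learning-rate amplitudes, and $\tau_{\pm}$ are plasticity time constants. Because $\Delta w$ involves no global error signal, stimulating extra firing in a targeted subset of neurons (as the dopamine-inspired mechanism does) boosts plasticity only in that subset's synapses, leaving the rest of the network's weights effectively frozen.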