RRAM-based Neuro-Inspired Computing for Unsupervised Temporal Predictions
Abstract: The prediction of time series, and of anomalies therein, has received significant attention from the machine learning community. Most of the effort has gone into writing high-level software that emulates the spatiotemporal functions of the cortex. Neuro-inspired algorithms such as HTM and Sparsey [1] do provide online learning; however, their complexity prohibits seamless integration into a single silicon die, mainly due to the large number of data structures needed to bookkeep the spatiotemporal inference models. By contrast, traditional computer science has found solutions to classification problems using straightforward, debuggable, and stable algorithms, generally referred to as "Deep Learning". Unsurprisingly, the relative simplicity of these algorithms has paved the way for their integration into accelerators (through custom low-level software) or even dedicated hardware (through ASIC implementations). In this paper we present an algorithm that focuses on time series learning and prediction while being lean enough to fit within the area and complexity constraints of silicon die integration. In particular, the physical properties of RRAM devices are used to mimic biological synapses. We show that the proposed method is capable of unsupervised learning while remaining predictable and programmable.