Training and Plasticity Concepts of the BrainScaleS Neuromorphic Systems

Abstract: Efficient training is a prerequisite for the successful application of neuromorphic systems. The machine learning community has demonstrated great success in this regard, training deep convolutional neural networks with back-propagation algorithms [1], [2]. The first part of the talk will demonstrate how this know-how can be leveraged for training deep networks on a spiking analog neuromorphic substrate. It will show that such hardware-in-the-loop training is a viable approach to counteracting the inevitable device variations present in the analog circuits that model neurons and synapses. These deep spiking networks are currently implemented on the first-generation BrainScaleS wafer-scale neuromorphic hardware system [3], presented at last year's NICE conference. A photograph of the system with an overlay illustrating the hardware-software loop is depicted in Fig. 1. The second half of the talk will present novel implementations of several core circuits for neuromorphic computing, such as short-term plasticity, neuronal adaptation mechanisms, and structural plasticity, that constitute parts of the upcoming second-generation BrainScaleS neuromorphic hardware. Fig. 3 shows the prototype system and the layout drawing of the new chip, which is currently being manufactured. The talk will explain details of the new hardware concepts supporting efficient and scalable in-the-loop training methods by implementing local digital compute capability alongside the analog neuromorphic circuits.
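The hardware-in-the-loop idea mentioned above can be illustrated with a minimal sketch: the forward pass is measured on a substrate whose parameters are distorted by fixed, unknown device variations, while the parameter updates are computed with an idealized software model and written back. The toy "hardware" below (a linear model with random per-parameter gain and offset) is a hypothetical stand-in, not the BrainScaleS API; the point is only that the loop converges despite the mismatch because the measured output, not the ideal one, drives the error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for the network's target function.
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

# Fixed per-"synapse" gain and offset variations emulate analog device
# mismatch (hypothetical stand-in for the real analog circuits).
gain = 1.0 + 0.2 * rng.normal(size=4)
offset = 0.05 * rng.normal(size=4)

def hardware_forward(w, X):
    """Forward pass 'on hardware': the written weights are distorted."""
    return X @ (gain * w + offset)

w = np.zeros(4)   # software copy of the parameters
lr = 0.05
for _ in range(500):
    # 1) Measure the output of the (distorted) hardware.
    y_hw = hardware_forward(w, X)
    # 2) Compute a gradient with the idealized software model, using the
    #    measured hardware output in place of the ideal activation.
    grad = X.T @ (y_hw - y) / len(X)
    # 3) Update the software parameters and write them back to hardware.
    w -= lr * grad

# The *effective* on-hardware weights converge toward the target even
# though the software never learns the device variations explicitly.
final_err = np.mean((hardware_forward(w, X) - y) ** 2)
```

The update direction is only approximately the true gradient (it ignores the unknown gains), but as long as each gain stays positive it remains a descent direction, which is the intuition behind tolerating analog variability through in-the-loop training rather than per-device calibration.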

