Solving sparse representation for object classification using the quantum D-Wave 2X machine
Abstract: While artificial neural networks take some inspiration from the brain in their operational units and connectivity principles, how these networks learn is vastly different from their biological counterparts. Whereas brains are continuously adapting and learning, most machine learning algorithms have distinct training and testing phases. This dichotomy is illustrated in Figure 1; for more information on the limitations of this bottleneck, see [1]. Although brains unquestionably exhibit a breadth of different learning rules and principles, we must better understand the implications of these learning rules in order to harness them in machine learning algorithms and neurally inspired hardware. The mathematics of game theory models strategic interactions, and as such provides a means to analyze when a learning model should be adjusted in non-stationary settings, such as those that arise under concept drift and transfer learning. We provide a brief description of a game-theoretic modeling framework for studying cyber security conflicts and describe how this approach may be extended to the study of learning theory.