Energy-Efficient Deep In-memory Architecture for NAND Flash Memories

  • Authors: Sujan Kumar Gonugondla (UIUC), Yongjune Kim (UIUC), Naresh Shanbhag (UIUC), Mingu Kang (UIUC), Sean Eilert (Micron), Mark Helm (Micron)
  • Publication ID: P092789
  • Publication Type: Paper
  • Received Date: 3-Nov-2017
  • Last Edit Date: 7-Nov-2017
  • Research: 2385.002 (Stanford University)

Abstract

This paper proposes an energy-efficient deep in-memory architecture for NAND flash (DIMA-F) to perform machine learning and inference algorithms on NAND flash memory. Algorithms for data analytics, inference, and decision-making require processing of large data volumes and are hence limited by data access costs. DIMA-F achieves energy savings and throughput improvements for such algorithms by reading and processing data in the analog domain at the periphery of the NAND flash memory. This paper also provides behavioral models of DIMA-F that can be used for analysis and large-scale system simulations in the presence of circuit non-idealities and variations. DIMA-F is studied in the context of linear support vector machines and k-nearest neighbor for face detection and recognition, respectively. DIMA-F achieves an estimated 8×-to-23× reduction in energy and a 9×-to-15× improvement in throughput, resulting in energy-delay product (EDP) gains of up to 345× over a conventional NAND flash architecture incorporating an external digital ASIC for computation.
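The abstract describes behavioral models that capture circuit non-idealities and variations for large-scale system simulation of analog in-memory computation. As a rough illustration of that general idea only (not the paper's actual model), the Python sketch below perturbs an in-memory SVM dot product with assumed Gaussian read and accumulation noise; the function name, noise model, and parameters (sigma_read, sigma_compute) are hypothetical.

```python
import numpy as np

def dima_f_dot_product(weights, stored_values, sigma_read=0.02, sigma_compute=0.01,
                       rng=np.random.default_rng(0)):
    """Behavioral sketch of an analog dot product at the NAND flash periphery.

    stored_values : ideal data vector as stored in the flash array
    sigma_read    : std. dev. of analog read noise per element (assumed)
    sigma_compute : std. dev. of analog accumulation noise (assumed)
    """
    # Analog read: each stored value is perturbed by read noise
    noisy_reads = stored_values + rng.normal(0.0, sigma_read, size=stored_values.shape)
    # Analog multiply-and-accumulate at the memory periphery, with accumulation noise
    accumulated = np.dot(weights, noisy_reads)
    return accumulated + rng.normal(0.0, sigma_compute)

# Example: a linear SVM decision evaluated on the noisy in-memory computation
w = np.array([0.5, -0.3, 0.8, 0.1])   # hypothetical SVM weights
x = np.array([1.0, 0.0, 1.0, 1.0])    # hypothetical stored feature vector
bias = -0.2
score = dima_f_dot_product(w, x) + bias
print("decision:", 1 if score > 0 else -1)
```

Such a model lets a system-level simulation sweep the assumed noise parameters to study how circuit non-idealities affect classification accuracy, without simulating the analog circuits themselves.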
