Compute Caches

  • Authors: Shaizeen Aga (Univ. of Michigan)
  • Publication ID: P090547
  • Publication Type: e-Workshop
  • Received Date: 17-Mar-2017
  • Last Edit Date: 17-Mar-2017
  • Research: 2384.007 (University of Michigan)
  • Replay: 51 minutes

Abstract

Computing today is dominated by data-centric applications, and there is a strong impetus to specialize for this important domain. Conventional processors' narrow vector units fail to exploit the high degree of data parallelism in these applications, and they expend a disproportionately large fraction of time and energy moving data over the cache hierarchy and processing instructions, compared to performing the actual computation. In this talk, Ph.D. candidate Shaizeen Aga presents the Compute Cache architecture, which tackles these challenges by enabling in-place computation in caches. She discusses how the emerging SRAM circuit technique of bit-line computing can be harnessed to repurpose existing cache elements as very large active vector computational units, which also significantly reduces the overhead of moving data between levels of the cache hierarchy. She also presents solutions to new constraints imposed by Compute Caches, such as operand locality. Compute Caches increase performance by 1.9× and reduce energy by 2.4× for a suite of data-centric applications, including text and database query processing, cryptographic kernels, and in-memory checkpointing. Applications with a larger fraction of Compute Cache operations could benefit even more, as the micro-benchmarks indicate (54× throughput, 9× dynamic energy savings).
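To make the two ideas in the abstract concrete, here is a minimal software sketch (not the actual hardware or the authors' simulator) of what bit-line computing provides: activating two rows of an SRAM sub-array at once lets the shared bit-lines produce an element-wise logical result for an entire cache line in a single access, and "operand locality" means both source lines must reside in the same sub-array so they share those bit-lines. The line size, sub-array size, and function names below are illustrative assumptions.

```python
LINE_BYTES = 64        # assumed cache-line size
SUBARRAY_BYTES = 8192  # assumed SRAM sub-array capacity

def bitline_and(row_a: bytes, row_b: bytes) -> bytes:
    """Model one in-place Compute Cache operation: AND two whole cache
    lines in a single sub-array access, instead of shipping both lines
    up the hierarchy and through the core's narrow vector unit."""
    assert len(row_a) == LINE_BYTES and len(row_b) == LINE_BYTES
    return bytes(a & b for a, b in zip(row_a, row_b))

def same_subarray(addr_a: int, addr_b: int) -> bool:
    """Crude model of operand locality: both operands must map to rows
    that share bit-lines, i.e. fall in the same sub-array."""
    return addr_a // SUBARRAY_BYTES == addr_b // SUBARRAY_BYTES
```

The 512-bit-wide operation above is the source of the large throughput gains the abstract cites: one sub-array access replaces many scalar or narrow-vector instructions, and no data moves between cache levels.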

Past Events

  Event Summary
STARnet E-Workshop: Compute Caches
Thursday, March 16, 2017, 3 p.m.–4 p.m. ET
Ann Arbor, MI, United States
