Computing today is dominated by data-centric applications, creating a strong impetus for specialization in this important domain. Conventional processors' narrow vector units fail to exploit the high degree of data parallelism in these applications. They also expend a disproportionately large fraction of time and energy moving data through the cache hierarchy and processing instructions, compared to the actual computation. In this talk, Ph.D. candidate Shaizeen Aga presents the Compute Cache architecture, which aims to tackle these challenges by enabling in-place computation in caches. She will describe how they harness the emerging SRAM circuit technique of bit-line computing to repurpose existing cache elements as very large, active vector computational units. This also significantly reduces the overhead of moving data between levels of the cache hierarchy. She will also discuss solutions to new constraints imposed by Compute Caches, such as operand locality. Compute Caches improve performance by 1.9× and reduce energy by 2.4× for a suite of data-centric applications, including text and database query processing, cryptographic kernels, and in-memory checkpointing. Applications with a larger fraction of Compute Cache operations could benefit even more, as their micro-benchmarks indicate (54× throughput, 9× dynamic energy savings).
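As a rough software analogy of the idea (not the actual hardware interface, and all names here are hypothetical), bit-line computing activates two rows of the same SRAM sub-array at once so that the shared bit-lines produce a bitwise result over an entire cache line in a single operation:

```python
# Illustrative model of a Compute Cache bitwise operation.
# Hypothetical sketch: the real design operates on SRAM sub-arrays,
# not Python byte strings.

LINE_BYTES = 64  # a typical cache-line size


def cc_and(line_a: bytes, line_b: bytes) -> bytes:
    """Model an in-place AND: reading two rows of one sub-array
    simultaneously lets the bit-lines compute a wired-AND of the rows,
    processing a whole cache line at once rather than word-by-word
    in the core."""
    assert len(line_a) == len(line_b) == LINE_BYTES
    return bytes(a & b for a, b in zip(line_a, line_b))


# Operand locality (discussed in the talk): both source lines must
# reside in the same sub-array, i.e. share bit-lines, for the in-place
# operation to be possible.
a = bytes([0b10101010] * LINE_BYTES)
b = bytes([0b11001100] * LINE_BYTES)
result = cc_and(a, b)
```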
Thursday, March 16, 2017, 3 p.m.–4 p.m. ET
Ann Arbor, MI, United States