Energy-efficient ASIC Accelerators for Machine/deep Learning Algorithms
Author: Minkyu Kim
Publisher:
Total Pages: 120
Release: 2019
ISBN-10: OCLC:1311433432
ISBN-13:
Rating: 4/5 (32 Downloads)
Download or read book Energy-efficient ASIC Accelerators for Machine/deep Learning Algorithms written by Minkyu Kim. This book was released in 2019 with a total of 120 pages. Available in PDF, EPUB and Kindle. Book excerpt:

In this work, an energy-efficient deep convolutional neural network (DCNN) accelerator is proposed that reduces computation without accuracy degradation. It is based on a novel conditional computing scheme that integrates convolution with the subsequent max-pooling operation. In this way, the total number of bit-wise convolutions can be reduced by ~2x without affecting the output feature values. This work also develops an optimized dataflow that exploits sparsity, maximizes data re-use, and minimizes off-chip memory access, improving upon existing hardware designs; total off-chip memory access is reduced by 2.12x. In post-layout simulation in 40nm, the proposed DCNN accelerator achieves a peak efficiency of 7.35 TOPS/W for VGG-16.

A number of recent efforts have attempted to design custom inference engines based on various approaches, including systolic architectures, near-memory processing, and in-memory computing. This work presents a comprehensive comparison of these approaches in a unified framework. It also presents a proposed energy-efficient in-memory computing accelerator for deep neural networks (DNNs) that integrates many instances of in-memory computing macros with an ensemble of peripheral digital circuits, supporting configurable multibit activations and large-scale DNNs seamlessly while substantially improving chip-level energy efficiency. The proposed accelerator is fully designed in 65nm, demonstrating ultralow
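The conditional conv/max-pool fusion idea can be sketched in software. This is a minimal illustration of the general technique, not the thesis design: the function name, the 2x2-window layout, and the MSB-based prediction heuristic are all assumptions. Within each pooling window, the four pre-pooling convolution outputs are first estimated cheaply using only the most-significant bits of the activations; the full-precision convolution is then computed only for the predicted winner, so most of the bit-wise work is skipped.

```python
import numpy as np

def fused_conv_maxpool(patches, weights, msb_bits=4, total_bits=8):
    """Illustrative conditional-computing sketch (hypothetical, not the
    thesis RTL): for a 2x2 pooling window, estimate the four pre-pooling
    conv outputs from activation MSBs, then compute the full-precision
    convolution only for the predicted max position.

    patches: (4, K) integer activation vectors, one per pooling position
    weights: (K,) integer kernel
    """
    shift = total_bits - msb_bits
    # Cheap estimate: truncate activations to their MSBs before the dot product.
    est = (patches >> shift << shift) @ weights
    winner = int(np.argmax(est))
    # Full-precision convolution only for the predicted winner.
    return int(patches[winner] @ weights), winner
```

Note that the prediction can mispredict when two window positions are close in magnitude; the thesis claims its scheme avoids any change to the output feature values, which this simplified sketch does not guarantee.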
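The sparsity-exploiting dataflow can likewise be illustrated at a high level. This is a hedged sketch of the general zero-skipping principle, under the assumption (common in sparsity-aware accelerators, not stated in the excerpt) that only nonzero activations are stored and issued to the MAC units:

```python
import numpy as np

def sparse_dot(activations, weights):
    """Zero-skipping MAC sketch (an assumed dataflow, not the thesis
    implementation): keep only the indices of nonzero activations and
    issue multiply-accumulates for those alone, mirroring how a
    sparsity-aware accelerator avoids wasted MACs and the associated
    memory traffic."""
    nz = np.flatnonzero(activations)   # indices of nonzero activations
    macs = len(nz)                     # MACs actually issued
    acc = int(activations[nz] @ weights[nz]) if macs else 0
    return acc, macs
```

For a ReLU feature map that is, say, 60% zero, this style of dataflow issues only 40% of the dense MAC count, which is one source of the energy and memory-access savings the excerpt describes.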
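One common way an in-memory computing macro supports configurable multibit activations is bit-serial operation; the excerpt does not specify the mechanism, so the following is an assumed scheme for illustration only. The activation vector is fed one bit-plane at a time, the macro computes a binary-input dot product per plane, and peripheral digital circuits shift-and-accumulate the per-bit results:

```python
import numpy as np

def bit_serial_imc_dot(activations, weights, act_bits=4):
    """Hypothetical bit-serial sketch of multibit-activation support on a
    binary-input in-memory macro (assumed scheme, not confirmed by the
    excerpt): x . w = sum_b 2^b * (bit_b(x) . w)."""
    acc = 0
    for b in range(act_bits):
        bit_plane = (activations >> b) & 1   # one bit of each activation
        partial = int(bit_plane @ weights)   # what the in-memory macro would return
        acc += partial << b                  # digital shift-and-add in the periphery
    return acc
```

Because `act_bits` is just a loop bound here, the same macro serves different activation precisions, which is one plausible reading of "configurable multibit activations" in the excerpt.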