
Bit-Slicing FPGA Accelerator for Quantized Neural Networks

hardware
Jul 2019


Deep Neural Networks (DNNs) have become the state of the art in several domains such as computer vision and speech recognition. However, the use of DNNs in embedded applications remains strongly limited by their complexity and the energy required to process large data sets. In this paper, we present the architecture of an accelerator for quantized neural networks and its implementation on a Nallatech 385-A7 board with an Altera Stratix V GX A7 FPGA. The accelerator’s design centers on the matrix-vector product as the key primitive, and exploits bit-slicing to extract maximum performance from low-precision arithmetic.
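To illustrate the bit-slicing idea behind the accelerator, the following is a minimal software sketch in C (not the paper's FPGA design) of a bit-sliced dot product: low-precision operands are decomposed into bit-planes, so each plane pair contributes via a bitwise AND and a popcount, weighted by the corresponding power of two. The 4-bit precisions, 64-lane packing, and all function names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit-sliced dot product (software model, not the paper's RTL).
 * Unsigned A_BITS-bit activations and W_BITS-bit weights are packed into
 * bit-planes of 64 lanes each, and the dot product is rebuilt as
 *   sum_{i<A_BITS} sum_{j<W_BITS} 2^(i+j) * popcount(act_plane[i] & wgt_plane[j])
 * so each plane pair needs only an AND and a population count. */

#define LANES  64   /* one 64-bit word holds 64 vector elements (assumed) */
#define A_BITS 4    /* assumed activation precision                       */
#define W_BITS 4    /* assumed weight precision                           */

static int popcount64(uint64_t x) {
    int c = 0;
    while (x) { x &= x - 1; c++; }
    return c;
}

/* Pack bit b of element k into bit position k of planes[b]. */
static void pack_planes(const uint8_t *vals, uint64_t planes[], int bits) {
    for (int b = 0; b < bits; b++) planes[b] = 0;
    for (int k = 0; k < LANES; k++)
        for (int b = 0; b < bits; b++)
            planes[b] |= (uint64_t)((vals[k] >> b) & 1u) << k;
}

/* Bit-sliced dot product over one 64-element chunk. */
static int64_t bitsliced_dot(const uint64_t act[A_BITS],
                             const uint64_t wgt[W_BITS]) {
    int64_t acc = 0;
    for (int i = 0; i < A_BITS; i++)
        for (int j = 0; j < W_BITS; j++)
            acc += (int64_t)popcount64(act[i] & wgt[j]) << (i + j);
    return acc;
}

int main(void) {
    uint8_t a[LANES], w[LANES];
    for (int k = 0; k < LANES; k++) { a[k] = k % 16; w[k] = (k * 7) % 16; }

    uint64_t ap[A_BITS], wp[W_BITS];
    pack_planes(a, ap, A_BITS);
    pack_planes(w, wp, W_BITS);

    /* Cross-check against the plain integer dot product. */
    int64_t ref = 0;
    for (int k = 0; k < LANES; k++) ref += (int64_t)a[k] * w[k];

    printf("bit-sliced: %lld, reference: %lld\n",
           (long long)bitsliced_dot(ap, wp), (long long)ref);
    return 0;
}
```

A matrix-vector product then amounts to repeating this per-chunk dot product for each matrix row, which is why only AND/popcount datapaths are needed regardless of the operand precision chosen.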

Reference

[ISCAS2019]
