An energy efficient processor array and memory controller for accurate processing of convolutional neural network-based inference engines


Abstract

Exploiting unstructured sparsity in the hardware accelerator of Convolutional Neural Network (CNN)-based inference can improve energy efficiency; however, it requires a complex controller for indexing and load-balancing. A controller for managing unstructured sparsity in Fully Connected (FC) layers is designed. In a pre-trained Visual Geometry Group-16 (VGG-16) model, ~20% sparsity is introduced using an induced-sparsity mechanism. An ImageNet dataset-based analysis of this model yields 95% classification accuracy and a 0.96 harmonic mean of precision and recall (F1 score). Each Input Feature Map (IFM) and its corresponding weight vector of an FC layer are arranged in a row of memory. A Combined IFM & Weights - Zero Valued Compression (CIW-ZVC) controller transfers only valid data from off-chip to on-chip memory, improving the data-movement rate with minimal hardware overhead. A processor array of 256 Convolution Operators (COs), performing parallel computations with zero-gating on weights, computes 16 tiles per on-chip memory cycle. The IFM is stationary across all tiles, which simplifies load-balancing. This 14 nm implementation achieves a peak performance of 256 × 10⁹ Operations/Second (OPS) and an energy efficiency of 15 × 10¹² OPS/Watt per FC (VGG-16) layer. It also improves energy efficiency by up to 6.08 times and area efficiency by up to 7.6 times compared to existing processors.
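The abstract does not specify the exact CIW-ZVC storage format or the CO datapath, but the two core ideas, zero-valued compression of the weight stream and zero-gating of multiply-accumulate operations, can be illustrated with a minimal software sketch. The function names and the bitmask-plus-nonzeros layout below are illustrative assumptions, not the paper's actual design.

```python
def zvc_compress(values):
    """Zero-valued compression (illustrative): keep only the nonzero
    entries plus a bitmask marking which positions held valid data."""
    mask = [1 if v != 0 else 0 for v in values]
    nonzeros = [v for v in values if v != 0]
    return mask, nonzeros

def zvc_decompress(mask, nonzeros):
    """Reconstruct the original vector from mask + nonzero stream."""
    it = iter(nonzeros)
    return [next(it) if m else 0 for m in mask]

def zero_gated_dot(ifm, mask, nonzeros):
    """MAC loop with zero-gating on weights: positions where the mask
    is 0 are skipped entirely, so no multiply is issued for them."""
    acc = 0
    it = iter(nonzeros)
    for x, m in zip(ifm, mask):
        if m:  # gate: only valid (nonzero) weights reach the multiplier
            acc += x * next(it)
    return acc

# Round-trip and gated-MAC check on a toy ~50%-sparse weight vector.
weights = [0, 3, 0, 0, 7, 1, 0, 2]
ifm = [1, 2, 3, 4, 5, 6, 7, 8]
mask, nz = zvc_compress(weights)
assert zvc_decompress(mask, nz) == weights
assert zero_gated_dot(ifm, mask, nz) == sum(x * w for x, w in zip(ifm, weights))
```

In hardware, the mask would drive clock- or operand-gating of the multipliers, and only the compressed `nonzeros` stream would cross the off-chip/on-chip boundary, which is the source of the data-movement savings the abstract claims.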
