Abstract
Exploiting unstructured sparsity in a hardware accelerator for Convolutional Neural Network (CNN)-based inference can improve energy efficiency. However, it requires a complex controller for indexing and load balancing. A controller for managing unstructured sparsity in Fully Connected (FC) layers is designed. In a pre-trained Visual Geometry Group-16 (VGG-16) model, approximately 20% sparsity is introduced using an induced-sparsity mechanism. Evaluation of this model on the ImageNet dataset yields 95% classification accuracy and a 0.96 harmonic mean of precision and recall. Each Input Feature Map (IFM) and its corresponding weight vector of an FC layer are arranged in a row of memory. A Combined IFM & Weights - Zero Valued Compression (CIW-ZVC) controller transfers only valid data from off-chip to on-chip memory, improving the data-movement rate with minimal hardware overhead. A processor array of 256 Convolution Operators (COs), performing parallel computations with zero-gating on weights, computes 16 tiles per on-chip memory cycle. The IFM remains stationary across all tiles, which simplifies load balancing. Implemented in a 14 nm process, the design achieves a peak performance of 256 × 10⁹ Operations/Second (OPS) and an energy efficiency of 15 × 10¹² OPS/Watt per FC (VGG-16) layer. It also improves energy efficiency by up to 6.08× and area efficiency by up to 7.6× compared with existing processors.