Abstract
Compute-in-Memory (CIM) offers an efficient approach to accelerating deep neural networks (DNNs) by performing matrix–vector multiplications directly within memory. However, its adoption in edge devices is limited by unstable power supplies and the performance overhead of conventional row- or column-wise computing. This paper presents a two-directional CIM-based non-volatile SRAM (nvSRAM) cell that performs both row- and column-wise operations, enabling faster and more efficient matrix–vector multiplication. The proposed design stores CIM outputs within the same computation cycle, a capability referred to as Simultaneous Compute and Write (SCW), thereby reducing latency during complex neural network inference. By integrating a single I-MTJ (magnetic tunnel junction) into each SRAM cell, the design also provides reliable data retention and restoration during power failures, making it well suited to low-power, energy-constrained edge applications. Post-layout simulations demonstrate a 31% improvement in write margin, a 40% reduction in power-delay product (PDP) in memory mode, and an 85% reduction in backup energy compared with state-of-the-art designs. Furthermore, the proposed design achieves a 39.2% energy-delay product (EDP) reduction during neural network inference under power instability, highlighting its suitability for low-power edge computing.