Abstract
Compute-in-memory (CIM) has emerged as a promising solution to mitigate the data-movement bottleneck in von Neumann architectures. While vertical NAND (V-NAND) flash memory has been explored for CIM, its structural constraints, including pass-bias overhead and interconnect parasitic capacitances, limit energy efficiency. In this work, we present a comprehensive comparison between V-NAND and vertical AND (V-AND) flash memory for CIM applications. Analytical modeling and experimental validation show that V-AND achieves superior energy efficiency, particularly in low-inference-count regimes and at increased stack heights, by eliminating the pass-bias requirement. These results establish V-AND as a compelling alternative to V-NAND and a promising candidate for energy-efficient, scalable, and fast CIM accelerators.