A Simulation-Based Energy Evaluation of Processing-in-Memory for CNN Workloads

Animesh Kushwaha, Madeeha Laiq, Kuldeep Patel

(03 – 2026)

DOI: 10.5281/zenodo.19111227

The rapid adoption of deep learning models, especially convolutional neural networks (CNNs), has significantly increased the computational and memory requirements of modern computing systems. Resource-constrained environments such as edge devices and embedded systems face serious challenges in efficiently handling these workloads due to limited energy budgets and memory bandwidth restrictions. Traditional computing systems based on the von Neumann architecture suffer from excessive data movement between the processor and memory, leading to high energy consumption and latency. This paper presents an analytical and simulation-based energy evaluation of a processing-in-memory (PIM) design for CNN inference. The proposed framework models computation cost, memory access cost, and total energy consumption for both conventional and PIM-based architectures. Experimental results show that the proposed PIM design reduces memory-related energy consumption by up to 45% for memory-intensive CNN workloads. The findings highlight the potential of in-memory computing as a viable architectural solution for energy-efficient AI inference in constrained systems.
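The analytical model described in the abstract (total energy as the sum of computation cost and memory access cost, compared between a conventional baseline and a PIM design) can be sketched as follows. All numeric values below are illustrative placeholders, not figures from the paper; the function name and parameters are likewise hypothetical.

```python
# Hypothetical sketch of the abstract's analytical energy model:
#   total energy = compute cost + memory access cost.
# Every constant here is an illustrative placeholder, not a value
# reported in the paper.

def total_energy(n_macs, n_mem_accesses, e_mac_pj, e_access_pj):
    """Total energy in picojoules: MAC operations plus memory accesses."""
    return n_macs * e_mac_pj + n_mem_accesses * e_access_pj

# Conventional von Neumann baseline: costly off-chip data movement
# dominates the energy budget for memory-intensive CNN layers.
baseline = total_energy(n_macs=1e6, n_mem_accesses=2e5,
                        e_mac_pj=1.0, e_access_pj=100.0)

# PIM: computation is moved next to the memory arrays, so the
# per-access energy drops sharply (placeholder reduction factor).
pim = total_energy(n_macs=1e6, n_mem_accesses=2e5,
                   e_mac_pj=1.2, e_access_pj=20.0)

saving = 1 - pim / baseline
print(f"energy saving: {saving:.0%}")
```

Under this kind of model, the more memory-bound a workload is (high accesses per MAC), the larger the fraction of total energy that a PIM design can save, which is consistent with the paper's observation that savings are largest for memory-intensive CNN workloads.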
