Capacitor Minimization in Analog Synapses for Efficient and Compact Binarized SNNs

Accelerating the immense workloads of Neural Networks (NNs) is a critical challenge, and analog computing-based approaches present a promising solution. Among these, the Integrate-and-Fire (IF) Spiking Neural Network (SNN) stands out for its potential in efficient neural computation. However, achieving high inference accuracy through precise multiply-accumulate (MAC) operations necessitates a large membrane capacitor—224 to 547 times larger than the synapse array at the 14nm technology node—resulting in prohibitive area costs. Additionally, the large capacitor size introduces higher energy consumption and longer access times. Thus, a key research challenge is maintaining accuracy while mitigating the costs associated with the large membrane capacitor.
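To make the capacitor's role concrete, below is a minimal Python sketch (not taken from the paper; the XNOR-based synapse model and all device parameters such as i_syn, t_pulse, c_mem, and v_th are illustrative assumptions) of an IF neuron computing a binarized MAC by integrating charge on its membrane capacitor. Each matching input/weight pair deposits one charge quantum, so the capacitor must be large enough to resolve all MAC levels the network can produce, which is what drives its prohibitive size.

```python
# Minimal sketch of an Integrate-and-Fire (IF) neuron computing a binarized
# MAC by accumulating charge on a membrane capacitor. The XNOR synapse model
# and all device parameters below are illustrative assumptions.
import numpy as np

def if_mac(inputs: np.ndarray, weights: np.ndarray,
           i_syn: float = 1e-6,    # assumed synapse current pulse [A]
           t_pulse: float = 1e-9,  # assumed pulse width [s]
           c_mem: float = 1e-12,   # assumed membrane capacitance [F]
           v_th: float = 16e-3) -> int:  # assumed firing threshold [V]
    """Count output spikes while integrating XNOR(input, weight) charge."""
    matches = np.logical_not(np.logical_xor(inputs, weights))
    v_mem, spikes = 0.0, 0
    for match in matches:
        if match:
            # Each matching pair deposits one charge quantum: dV = I*t / C.
            # A smaller C_mem means a larger dV, so fewer distinguishable
            # MAC levels fit below the firing threshold.
            v_mem += i_syn * t_pulse / c_mem
        if v_mem >= v_th:  # fire and reset by subtraction
            spikes += 1
            v_mem -= v_th
    return spikes

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 64).astype(bool)
w = rng.integers(0, 2, 64).astype(bool)
print("output spikes:", if_mac(x, w))
```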
In this work, we propose a HW/SW codesign method, called CapMin, for capacitor size minimization in analog computing IF-SNNs. CapMin minimizes the capacitor size by reducing the number of spike times needed for accurate operation of the HW, based on the absolute frequency of MAC level occurrences in the SW. To increase the robustness of IF-SNN operation to current variation, we propose the method CapMin-V, which trades a modest increase in capacitor size for variation tolerance, starting from the reduced capacitor size found by CapMin. In our experiments, CapMin achieves more than a 14× reduction in capacitor size along with a 34% reduction in energy consumption over the state of the art, while CapMin-V achieves increased variation tolerance in the IF-SNN operation, requiring only a small increase in capacitor size.
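As a rough illustration of the profiling idea in the abstract, the following Python sketch (a simplification under stated assumptions, not the paper's CapMin algorithm; profile_mac_levels, min_spike_levels, and the coverage parameter are hypothetical) histograms the binarized MAC levels that occur in software and trims the rarely occurring tail levels, shrinking the range of spike times, and hence the capacitance, the hardware must support.

```python
# Rough sketch of the profiling step behind CapMin (hypothetical helper names;
# the paper's actual HW/SW codesign flow is more involved): count how often
# each binarized MAC level occurs in software, then trim rare tail levels.
import numpy as np

def profile_mac_levels(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Histogram of binarized MAC values (XNOR popcounts) over a dataset."""
    macs = (activations[:, None, :] == weights[None, :, :]).sum(axis=-1)
    return np.bincount(macs.ravel(), minlength=weights.shape[-1] + 1)

def min_spike_levels(hist: np.ndarray, coverage: float = 0.999) -> int:
    """Smallest contiguous level range covering `coverage` of occurrences."""
    total, lo, hi = hist.sum(), 0, len(hist) - 1
    # Trim the less frequent tail bin while enough occurrences stay covered.
    while lo < hi and hist[lo:hi + 1].sum() - min(hist[lo], hist[hi]) >= coverage * total:
        if hist[lo] <= hist[hi]:
            lo += 1
        else:
            hi -= 1
    return hi - lo + 1

rng = np.random.default_rng(0)
acts = rng.integers(0, 2, (1000, 64))   # binarized activations from SW profiling
wgts = rng.integers(0, 2, (32, 64))     # binarized weights of one layer
hist = profile_mac_levels(acts, wgts)
full, reduced = len(hist), min_spike_levels(hist)
print(f"MAC levels to resolve: {full} -> {reduced} "
      f"(~{full / reduced:.1f}x smaller capacitor)")
```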

  • Published in:
    IEEE Transactions on Circuits and Systems for Artificial Intelligence
  • Type:
    Article
  • Authors:
    Yayla, Mikail; Wei, Ming-Liang; Thomann, Simon; Mema, Albi; Yang, Chia-Lin; Chen, Jian-Jia; Amrouch, Hussam
  • Year:
    2025
  • Source:
    https://ieeexplore.ieee.org/abstract/document/10858187/

Citation Information

Yayla, Mikail; Wei, Ming-Liang; Thomann, Simon; Mema, Albi; Yang, Chia-Lin; Chen, Jian-Jia; Amrouch, Hussam: Capacitor Minimization in Analog Synapses for Efficient and Compact Binarized SNNs. IEEE Transactions on Circuits and Systems for Artificial Intelligence, IEEE, 2025. https://ieeexplore.ieee.org/abstract/document/10858187/

Associated Lamarr Researchers


Prof. Dr. Jian-Jia Chen

Area Chair Resource-Aware Machine Learning