HW/SW Codesign for Robust and Efficient Binarized SNNs by Capacitor Minimization

Using accelerators based on analog computing is an efficient way to process the immensely large workloads in Neural Networks (NNs). One example of an analog computing scheme for NNs is Integrate-and-Fire (IF) Spiking Neural Networks (SNNs). However, to achieve high inference accuracy in IF-SNNs, the analog hardware needs to represent current-based multiply-accumulate (MAC) levels as spike times, for which a large membrane capacitor needs to be charged for a certain amount of time. A large capacitor results in high energy use, considerable area cost, and long latency, constituting one of the major bottlenecks in analog IF-SNN implementations. In this work, we propose a HW/SW Codesign method, called CapMin, for capacitor size minimization in analog computing IF-SNNs. CapMin minimizes the capacitor size by reducing the number of spike times needed for accurate operation of the HW, based on the absolute frequency of MAC level occurrences in the SW. To increase the robustness of IF-SNN operation to current variation, we propose the method CapMin-V, which trades capacitor size for protection, building on the reduced capacitor size found by CapMin. In our experiments, CapMin achieves more than a 14× reduction in capacitor size over the state of the art, while CapMin-V achieves increased variation tolerance in the IF-SNN operation, requiring only a small increase in capacitor size.
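The core idea described in the abstract — keeping only the spike-time levels that correspond to frequently occurring MAC values — can be illustrated with a minimal sketch. Note that this is not the paper's CapMin algorithm; the helper `select_spike_time_levels`, the `keep_fraction` parameter, and the binomial distribution of MAC levels are all illustrative assumptions.

```python
import numpy as np

def select_spike_time_levels(mac_levels, keep_fraction=0.5):
    """Illustrative sketch (not the published CapMin algorithm):
    keep only the most frequently occurring MAC levels, so the
    hardware has to resolve fewer distinct spike times -- which
    in turn permits a smaller membrane capacitor."""
    levels, counts = np.unique(mac_levels, return_counts=True)
    order = np.argsort(counts)[::-1]               # most frequent first
    n_keep = max(1, int(len(levels) * keep_fraction))
    return np.sort(levels[order[:n_keep]])         # ascending level order

# Hypothetical MAC-level samples, as might be gathered from SW simulation
rng = np.random.default_rng(0)
samples = rng.binomial(n=16, p=0.5, size=10_000)   # MAC levels in 0..16
kept = select_spike_time_levels(samples, keep_fraction=0.4)
print(kept)  # the most common MAC levels, ascending
```

Because binarized-SNN MAC values cluster around the middle of their range, dropping the rare extreme levels shrinks the number of spike times far more than it hurts accuracy — which is the trade-off the method exploits.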

  • Published in:
    arXiv
  • Type:
    Article
  • Authors:
    Yayla, Mikail; Thomann, Simon; Wei, Ming-Liang; Yang, Chia-Lin; Chen, Jian-Jia; Amrouch, Hussam
  • Year:
    2023

Citation information

Yayla, Mikail; Thomann, Simon; Wei, Ming-Liang; Yang, Chia-Lin; Chen, Jian-Jia; Amrouch, Hussam: HW/SW Codesign for Robust and Efficient Binarized SNNs by Capacitor Minimization, arXiv, 2023, https://arxiv.org/abs/2309.02111

Associated Lamarr Researchers


Prof. Dr. Jian-Jia Chen

Area Chair Resource-aware ML