Hardware Accelerated Learning at the Edge

Specialized hardware for machine learning allows us to train highly accurate models in hours that would otherwise take days or months of computation time. The advent of recent deep learning techniques can largely be explained by the fact that their training and inference rely heavily on fast matrix algebra, which can be accelerated easily via programmable graphics processing units (GPUs). Thus, vendors praise the GPU as the hardware for machine learning. However, those accelerators have an energy consumption of several hundred watts. In distributed learning, each node must operate under resource constraints far stricter than those of an ordinary workstation — especially when learning is performed at the edge, i.e., close to the data source. The energy budget is typically highly restricted, and relying on high-end CPUs and GPUs is thus not a viable option. In this work, we present our new quantum-inspired machine learning hardware accelerator. More precisely, we explain how our hardware approximates the solution to several NP-hard data mining and machine learning problems, including k-means clustering, maximum-a-posteriori prediction, and binary support vector machine learning. Our device has a worst-case energy consumption of about 1.5 watts and is thus especially well suited for distributed learning at the edge.
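As a rough illustration of the problem class such quantum-inspired accelerators typically target, the sketch below evaluates and exhaustively minimizes a quadratic unconstrained binary optimization (QUBO) objective. Note that the abstract does not state the exact optimization model of the device; the QUBO formulation, the function names, and the toy matrix here are illustrative assumptions only. Brute force is feasible only for tiny problem sizes, which is precisely why specialized hardware is used to approximate the minimum for larger instances.

```python
import itertools
import numpy as np

def qubo_energy(Q, x):
    """Energy x^T Q x of a binary vector x under the QUBO matrix Q."""
    return float(x @ Q @ x)

def brute_force_qubo(Q):
    """Exhaustively find a binary vector minimizing x^T Q x.

    Only feasible for tiny n (2^n candidates); an accelerator would
    approximate this minimum for much larger problems.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = qubo_energy(Q, x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance (illustrative): picking either variable lowers the
# energy, but picking both incurs a penalty of +2.
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
x_opt, e_opt = brute_force_qubo(Q)
print(x_opt, e_opt)  # exactly one variable is set; minimum energy -1.0
```

Problems such as binary clustering or SVM training can be cast into this form by encoding model parameters as bit vectors and folding the loss and constraints into the matrix Q.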

  • Published in:
    DMLE Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
  • Type:
    Inproceedings
  • Authors:
    S. Mücke, N. Piatkowski, K. Morik
  • Year:
    2019

Citation information

S. Mücke, N. Piatkowski, K. Morik: Hardware Accelerated Learning at the Edge. DMLE Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2019. https://www.semanticscholar.org/paper/Hardware-Accelerated-Learning-at-the-Edge-M%C3%BCcke-Piatkowski/396c0b06da307946e8e9bcdb87f64deb343f29ad