Hardware Acceleration of Machine Learning Beyond Linear Algebra

Author: S. Mücke, N. Piatkowski, K. Morik
Journal: ECML PKDD 2019: Machine Learning and Knowledge Discovery in Databases
Year: 2019

Citation information

S. Mücke, N. Piatkowski, K. Morik: Hardware Acceleration of Machine Learning Beyond Linear Algebra. In: ECML PKDD 2019: Machine Learning and Knowledge Discovery in Databases, Springer, Cham, 2019.

Specialized hardware for machine learning allows us to train highly accurate models in hours that would otherwise take days or months of computation time. The advent of recent deep learning techniques can largely be explained by the fact that their training and inference rely heavily on fast matrix algebra, which can be accelerated easily via programmable graphics processing units (GPUs). Thus, vendors praise the GPU as the hardware for machine learning. However, these accelerators have an energy consumption of several hundred watts. In distributed learning, each node has to meet resource constraints that are far stricter than those of an ordinary workstation, especially when learning is performed at the edge, i.e., close to the data source. The available energy budget is typically highly restricted, and relying on high-end CPUs and GPUs is thus not a viable option. In this work, we present our new quantum-inspired machine learning hardware accelerator. More precisely, we explain how our hardware approximates the solution to several NP-hard data mining and machine learning problems, including k-means clustering, maximum-a-posteriori prediction, and binary support vector machine learning. Our device has a worst-case energy consumption of about 1.5 W and is thus especially well suited for distributed learning at the edge.
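The abstract does not specify how the listed problems are encoded for the accelerator, but quantum-inspired hardware of this kind typically minimizes a quadratic unconstrained binary optimization (QUBO) objective, x^T Q x over binary vectors x. As a hedged illustration only (the matrix `Q` below is an arbitrary toy instance, not taken from the paper), a brute-force reference solver makes the problem class concrete:

```python
import itertools

def qubo_energy(Q, x):
    """Evaluate x^T Q x for a binary vector x and cost matrix Q (list of lists)."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def solve_qubo(Q):
    """Exhaustively minimize the QUBO objective; feasible only for small n.
    Dedicated hardware approximates this minimum for much larger instances."""
    n = len(Q)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda x: qubo_energy(Q, x))

# Toy 3-variable instance: negative diagonal rewards setting a bit,
# positive off-diagonal entries penalize setting neighboring bits together.
Q = [[-1.0, 2.0, 0.0],
     [0.0, -1.0, 2.0],
     [0.0, 0.0, -1.0]]
best = solve_qubo(Q)
print(best)  # (1, 0, 1)
```

Exact enumeration costs 2^n evaluations, which is exactly why NP-hard problems such as k-means clustering or MAP prediction motivate approximate special-purpose solvers like the one described here.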