Research Area

Resource-aware Machine Learning

Resource-aware Machine Learning aims to adapt Machine Learning and the underlying hardware technologies to save energy, memory, and computational resources.

Researchers at the Lamarr Institute are dedicated to developing sustainable and environmentally friendly Machine Learning solutions that save energy and computational resources. For this purpose, we study the connection between hardware and Machine Learning. Our goal is to make Machine Learning available even on devices with restricted computing power and limited energy and memory resources.


ML Approaches for High Performance and Low Resource Consumption

To advance resource-aware ML, we are investigating resource-friendly variants of high-performance ML approaches. Topics of our current and future research include resource-aware transformer models for Natural Language Processing (NLP), the choice of optimal hyperparameters to reduce training times and improve performance, model quantization, and efficient search algorithms.
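To illustrate one of these topics, the following is a minimal sketch of post-training quantization: mapping float32 weights to int8 with a single symmetric scale, which cuts storage to a quarter at the cost of a small, bounded rounding error. The function names and the toy weight matrix are illustrative, not part of the institute's actual tooling.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of a float tensor to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)             # int8 uses 25% of the float32 storage
print(float(np.abs(w - w_hat).max()))  # rounding error bounded by scale / 2
```

In practice, per-channel scales and calibration data reduce the error further, but the memory saving already comes from this simple scheme.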

Contact persons


Prof. Dr. Jian-Jia Chen

Area Chair Resource-aware ML

Dr. Sebastian Buschjäger

Scientific Coordinator Resource-aware ML

Adjusting Hardware for More Efficiency

At the Lamarr Institute, we also focus on hardware adjustments that have the potential to further reduce energy consumption and improve computational efficiency. For instance, in-memory computing, which mitigates costly data transfers between memory and the processing unit, offers potential for efficient execution and energy reduction on modern memory technologies. Moreover, we see a need to develop more flexible hardware accelerators that improve the execution and training of models on resource-constrained devices.

Developing On-demand Machine Learning Solutions

Finally, we perceive a discrepancy between how ML models are trained and de facto practice in the IT industry. On-demand computation, which results in resource availability that varies dynamically over time, has become the industry norm for many IT services, yet most ML algorithms and their implementations are not designed for such dynamic scenarios. To achieve our goal of developing on-demand Machine Learning solutions, our researchers are evaluating methods for decomposing ML algorithms and studying best practices from software engineering.
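One way to read "decomposing ML algorithms" is to make training resumable: split optimization into small steps that run whenever the resource manager grants compute, with a checkpoint between rounds. The sketch below, using a toy logistic regression and an assumed `budget_schedule` of granted step counts, is purely illustrative of that pattern, not a description of the institute's methods.

```python
import numpy as np

def train_step(w, X, y, lr=0.1):
    """One full-batch gradient step of logistic regression; cheap enough
    to fit into any small compute budget."""
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    grad = X.T @ (p - y) / len(y)          # gradient of the log loss
    return w - lr * grad

def train_on_demand(X, y, budget_schedule):
    """Train in resumable chunks. Each entry of budget_schedule is the
    number of steps a (hypothetical) resource manager grants that round."""
    w = np.zeros(X.shape[1])
    for steps in budget_schedule:
        for _ in range(steps):
            w = train_step(w, X, y)
        # checkpoint: w could be persisted here and training resumed later
    return w

# Toy, nearly separable data for demonstration
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

w = train_on_demand(X, y, budget_schedule=[5, 20, 50])
acc = np.mean(((X @ w) > 0) == y)
print(acc)
```

Because the model state is just the weight vector, training can stop after any round and continue later on different hardware, which is the property on-demand scenarios require.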