The Lamarr Fellowship: AI made in North Rhine-Westphalia
The “Lamarr Fellow Network Ramp Up”, a funding program by the State of North Rhine-Westphalia (NRW), recognizes internationally renowned AI researchers from NRW and integrates them into the scientific ecosystem of the Lamarr Institute at an early stage. In doing so, it pools leading expertise in Artificial Intelligence and strengthens the national and international visibility of AI research made in NRW.
To this end, the Lamarr Fellowship promotes foundational research projects on algorithms, methods and theories of Artificial Intelligence that relate to the Lamarr Institute’s research activities. These projects lay the foundation for long-term collaboration between the Lamarr Institute and the Fellows. The six Lamarr Fellowships were awarded over three selection rounds, following positive evaluation by an international expert committee.
Get to know our Fellows and their projects below.
Our Fellows and their Research Projects
Trustworthy AI for Spatial-temporal Data Analysis and its Application for Human’s Grand Challenges
Growing data sources on challenges such as climate change and drinking water supply open up the opportunity for a better understanding of these phenomena and thus for informed decision-making – supported by AI methods. The project addresses two particular challenges: the requirement for trustworthiness, achieved by enabling people to examine the reasons behind the decisions of AI technologies, and the need for AI to flexibly cope with the spatio-temporal information that arises in this context.
Web-Scale Hybrid Explainable Machine Learning
Knowledge bases are an integral part of the Web and therefore of the lives of over 5 billion people. The information contained therein is used for a variety of algorithmic decisions in domains such as Web search, recommendation and personalization. Explainable algorithms for learning on knowledge graphs are hence indispensable to the ethical and law-abiding implementation of Web-scale machine learning. The aim of the WHALE project is therefore the development of novel machine learning methods that will provide the basis for explainable algorithmic decisions on the Web. These methods are designed to scale to the volume, complexity and further idiosyncrasies of real-world knowledge graphs.
The Development of New Hybrid Machine Learning Methods for Inverse Imaging and Vision Problems
In many applications in medicine, biology, physics or industrial production, it is not possible to take a direct image of an object to be examined. Instead, data is recorded that implicitly allows conclusions to be drawn about the actual image, e.g. in computed tomography. This project addresses how physical knowledge about the measurement processes can be integrated into Machine Learning methods. The aim is to use hybrid learning methods that require less training data. As they are model-based, these methods are also more interpretable and more robust against attacks during training or inference, while still benefiting from the power of modern data-driven techniques.
Neural Network Architectures
This project focuses on the design and analysis of accelerated Deep Learning techniques for data-driven learning problems and for model-driven scientific computing problems, e.g. optimal control problems for robots and multicopters, or parametric partial differential equations such as the parametric Navier-Stokes equations from fluid mechanics. A central approach of this project is the derivation of improved neural network architectures by combining efficient, traditional deterministic numerical methods with novel Artificial Intelligence techniques.
Trustworthiness of Deep Generative Models
This project addresses the fair and responsible use of generative text-image models and seeks to develop tools to address potential societal risks enabled by recent developments in this technology. It will focus in particular on the following areas: (a) reliable and robust recognition of generated images, (b) robust generated images, (c) robust model watermarking techniques for recognition and attribution, and (d) attacks and defenses related to membership inference. The project aims to help solve the problems of disinformation, unethical data use, and privacy invasion related to generative models.
Trustworthy Integration of Large Language Models in Human-Computer Interactive Systems
As impressive as large language models are, their problems are just as diverse. Compared to other IT applications, they are expensive, cumbersome and unreliable, and they have no access to external knowledge. In this project, new Machine Learning mechanisms are investigated to enable the safe and profitable use of language models in human-computer dialog systems. The central question to be answered within the scope of this project is: How and where is it expedient and responsible to integrate large language models into an interactive human-computer dialog system? The project thus lies at the intersection of human-centered Artificial Intelligence, trustworthy Artificial Intelligence and natural language processing.
Updates from our Fellow Network
Stay updated on the latest projects, research findings, and activities of our Lamarr Fellows – leading innovators in AI research.