
Two research projects from the Lamarr Institute for Machine Learning and Artificial Intelligence have been included in the latest edition of the “Successes of German AI Research” collection published by the Platform Learning Systems. The platform is an expert committee initiated by the Federal Ministry of Research, Technology, and Space (BMFTR) that brings together stakeholders from science, industry, and society and provides strategic impetus for AI development in Germany. The selection of two Lamarr projects among the ten highlighted examples underscores their significance for current technological developments and their applications.
Technological Foundations for New Generations of Language Models
The Teuken-7B language model, developed as part of the OpenGPT-X project and first released in 2024, is considered one of the most visible outcomes of current AI research in Germany. Researchers at the Lamarr Institute for Machine Learning and Artificial Intelligence made significant contributions to the research and development of the LLM technology stack from which models such as Teuken have emerged, and which today forms the basis for subsequent model training efforts.
Key contributions include, in particular, methods for multilingual instruction tuning as well as the development of an open-source framework for training large language models. In 2025, the Lamarr Institute further expanded its research in the field of foundation models and extended it to virtually all research areas. The goal is to bring these models more quickly into concrete application domains, such as robotics, the social sciences, or the life sciences.
This also serves as the foundation for follow-up initiatives such as Soofi, which takes up the developed technologies and refines them into powerful, open language models designed specifically for business and public-sector applications, with a focus on European requirements such as multilingualism, transparency, and adaptability to specific application contexts.
Video: Teuken and the next generation of European AI: In an accompanying interview, project lead Dr. Mehdi Ali, Lead Scientist Foundation Models at the Lamarr Institute, provides insights into the development of Teuken and the role of open language models for European AI.
AI Unlocks Extreme Data Sets for Science and Industry
The IceCube Neutrino Observatory is one of the world’s leading large-scale experiments in astroparticle physics. Deep within the Antarctic ice, thousands of sensors detect the traces of high-energy neutrinos, which are considered virtually undisturbed messengers from the most extreme regions of the universe.
Only a few relevant events are hidden among the millions of signals recorded each day. With the help of new AI methods, these signals can be identified and reconstructed with significantly greater precision. On this basis, it was possible for the first time to detect high-energy neutrinos from the Milky Way and thus generate an “image” of our galaxy in neutrino emission.
The underlying methods were co-developed within the Lamarr Institute environment and, in particular, improve the separation of signal from background. They exemplify how AI extracts new insights from extremely large and complex datasets and, at the same time, demonstrate that methods that work under these conditions can also be applied to data-intensive applications in industry.
Video: AI in neutrino research: In a video interview, project lead Prof. Dr. Dr. Wolfgang Rhode, Area Chair Physics at the Lamarr Institute and Professor of Experimental Physics (Astroparticle Physics) at TU Dortmund University, explains how AI methods enable the analysis of neutrino data and open new perspectives in astroparticle physics.
From Research to Scalable AI Technologies
The Platform Learning Systems acts as an interface between research, policy, and application, developing recommendations for the use of AI in Germany. Its selection highlights which developments are currently considered strategically relevant.
The two selected projects exemplify a key trend in AI research: a shift away from isolated applications toward scalable foundational technologies and their targeted transfer into different domains. For implementation, this requires closer integration of basic research and application, for example through open models, shared infrastructures, and new transfer formats.
For industry, this marks a shift in focus: instead of individual solutions, adaptable systems that can be integrated flexibly into existing processes are becoming central. The ability to work with large-scale data and deploy general-purpose models in a targeted way is thus becoming a key factor for innovation and competitiveness.