Foundation Models
As an international leader in foundation model research, the Lamarr Institute drives scientific breakthroughs in multilinguality, alignment, and safe deployment across diverse application domains such as industry and medicine. Our experts look forward to connecting with you!
Our Foundation Model Experts in Canada
Enhancing LLM Performance on Real-World Tasks
Lucie Flek works at the intersection of Large Language Models, human cognition, and alignment, building systems that are useful, interpretable, and socially aware while developing rigorous methods for reasoning and evaluation. Her interests span human-centered language systems aligned with human needs and communication styles; social intelligence in LLMs that engage in human-like social reasoning; and LLM agents for scientific discovery and collaboration. Her work covers LLM distillation to improve model efficiency and accessibility, mechanistic interpretability to better understand and align internal model behavior, and safety methods for reliable deployment.
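To illustrate what LLM distillation involves in practice, the minimal sketch below shows a generic distillation loss in PyTorch: a smaller student model is trained to match a larger teacher's softened output distribution alongside the usual cross-entropy objective. This is a standard textbook formulation, not a description of Lucie Flek's specific methods; the temperature and weighting values are illustrative assumptions.

```python
# Generic knowledge-distillation loss (illustrative sketch, not a specific Lamarr method).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Combine a soft-target KL loss (teacher -> student) with hard-label cross-entropy.

    student_logits, teacher_logits: (batch, vocab) tensors; labels: (batch,) token ids.
    `temperature` and `alpha` are assumed hyperparameters, not values from the text.
    """
    # Softened distributions; the T^2 factor keeps gradient magnitudes comparable.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth next tokens.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce

# Example with random tensors standing in for real model outputs.
student = torch.randn(4, 32000)
teacher = torch.randn(4, 32000)
labels = torch.randint(0, 32000, (4,))
print(distillation_loss(student, teacher, labels))
```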
Spearheading Multilingual Foundation Models
Mehdi Ali conducts research on multilingual foundation models with a strong data-centric focus, improving data efficiency through high-quality training data. His work spans five key research areas: (1) large-scale data filtering and synthetic data generation, (2) curriculum learning, (3) multimodal learning (text, vision, and structured data), (4) reasoning, and (5) knowledge distillation. Building on his work on the multilingual seven-billion-parameter language model Teuken-7B, Mehdi spearheads Lamarr’s efforts toward a large multimodal reasoning model.
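As a rough illustration of the data-filtering idea, the sketch below applies simple quality heuristics and hash-based deduplication to a toy corpus. It is an assumed, minimal example; the actual filtering pipelines used for Teuken-7B or Lamarr's models are not described in the text.

```python
# Minimal sketch of heuristic pre-training data filtering (illustrative assumptions only).
import hashlib

def passes_quality_checks(text: str, min_words: int = 50, min_alpha_ratio: float = 0.7) -> bool:
    """Assumed heuristics: enough words and a high share of alphabetic characters."""
    words = text.split()
    if len(words) < min_words:
        return False
    alpha = sum(ch.isalpha() for ch in text)
    return alpha / max(len(text), 1) >= min_alpha_ratio

def filter_corpus(documents):
    """Drop low-quality documents and exact duplicates (hash-based deduplication)."""
    seen_hashes = set()
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen_hashes or not passes_quality_checks(doc):
            continue
        seen_hashes.add(digest)
        yield doc

corpus = ["Lorem ipsum " * 30, "1234 5678 !!!", "Lorem ipsum " * 30]
print(len(list(filter_corpus(corpus))))  # -> 1: the noisy document and the duplicate are removed
```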
Developing Large Language Models for Industrial Applications
David Berghaus develops LLMs for industrial applications, including domain-specific fine-tuning and workflow automation such as legal reasoning or email order processing. He is looking to collaborate on LLMs for industrial use cases, domain-specific fine-tuning, reasoning in regulated fields such as law, compliance, and medicine, as well as the development of LLM agents.
Aligning Foundation Models with Human Values
Florian Mai’s research lies at the intersection of reasoning models and AI alignment, an area in which he is actively seeking collaboration partners. His projects include curating large multilingual reasoning models, learning human values in reasoning models, and enabling scalable oversight through inductive biases in reasoning models. He also works on foundation models that make safe AI widely accessible.
Building Foundation Models for EEG Data
Matthias Jakobs builds foundation models for the medical domain, specifically models that encode time-domain electroencephalogram (EEG) data into high-dimensional latent representations suitable for downstream tasks. EEG data are widely used in neurological contexts, including sleep medicine and the diagnosis and treatment of epilepsy. A strong EEG foundation model is therefore crucial for enabling future advances in the diagnosis and treatment of neurological disorders.
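To make the encoding idea concrete, the sketch below shows an assumed, minimal EEG encoder in PyTorch: a small 1D-convolutional network maps a multi-channel time-domain window to a fixed-size latent vector that downstream tasks (e.g., sleep staging or epilepsy detection) could build on. The architecture, channel count, and window length are illustrative assumptions, not Matthias Jakobs' actual model.

```python
# Assumed, minimal EEG encoder sketch: raw multi-channel EEG -> fixed-size latent vector.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    def __init__(self, n_channels: int = 19, latent_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(64, 128, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> one vector per window
        )
        self.proj = nn.Linear(128, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time_steps), e.g. a 30-second window at 128 Hz.
        h = self.conv(x).squeeze(-1)   # (batch, 128)
        return self.proj(h)            # (batch, latent_dim)

encoder = EEGEncoder()
window = torch.randn(2, 19, 3840)      # two 30 s windows, 19 channels, 128 Hz (assumed)
print(encoder(window).shape)           # torch.Size([2, 256])
```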
Updates from our NLP Research Area
Stay up to date on the latest projects, research findings, and NLP activities at Lamarr.