Computer science professor Lucie Flek is one of three researchers at the University of Bonn – a partner institution of the Lamarr Institute – to be awarded an ERC Starting Grant by the European Research Council. With funding of €1.5 million, she will advance her research on artificial intelligence, thereby also strengthening the expertise of the Lamarr Institute. The two additional grants went to projects in economics and evolutionary biology.
AI with Social Intelligence
AI systems like ChatGPT are increasingly taking on social roles—whether as everyday advisors, learning aids, or conversation partners in difficult situations. Yet it is precisely in moments when empathy, judgment, and social understanding matter most that these systems often fail. This is where the project “LLMpathy” by Prof. Dr. Lucie Flek from the Institute of Computer Science at the University of Bonn comes in. Her goal: to make artificial intelligence (AI) more socially intelligent.
“Today’s AI can imitate empathy, but it doesn’t understand how it actually works,” says Prof. Lucie Flek, who also conducts research at the Lamarr Institute and the Bonn-Aachen International Center for IT (b-it). “At the same time, language models have learned to solve very complex mathematical problems by breaking them down into simple steps. In LLMpathy, we want to teach them to structure and properly justify human thinking and emotions in the same way.” To achieve this, Flek will combine advanced learning methods in AI with long-term psychological studies.
“The AI receives a personalized profile that links traits, values, emotions, and actions,” Flek explains. “This allows models to causally explain their answers and continuously improve through human feedback.” In addition, a simulation environment will be created in which personalized AI agents interact with one another—in conflicts or negotiations, for example. “This will enable us, for the first time, to systematically measure how well language models can adopt different perspectives, pursue goals, or resist manipulation.” The new findings will also help uncover unethical forms of personalization, such as AI exerting emotional pressure in advertising, and ensure that future AI systems meet high standards of transparency, trustworthiness, and ethical conduct—in line with the upcoming EU AI Act.
Prof. Dr. Lucie Flek heads the Data Science and Language Technologies group at the Bonn-Aachen International Center for Information Technology (b-it) at the University of Bonn, where she is also a member of the Transdisciplinary Research Areas (TRA) “Modelling” and “Matter” as well as the Excellence Cluster “Our Dynamic Universe” starting in January 2026. As Area Chair for Natural Language Processing (NLP) at the Lamarr Institute for Machine Learning and Artificial Intelligence, she connects this work with her research on reasoning language models (Reasoning LLMs), AI safety, and AI for science. Prof. Flek has worked in both academia and industry, including for Amazon Alexa and Google Shopping Search in Europe. At the University of Pennsylvania and University College London, she conducted research on modeling users through text and applying such AI models in psychology and the social sciences.
Media Contact
Prof. Dr. Lucie Flek
University of Bonn
Bonn-Aachen International Center for IT (b-it)
Lamarr Institute for Machine Learning and Artificial Intelligence
Institute of Computer Science