AI Colloquium with Prof. Dr. Jansen on “Safe Learning Systems – Artificial Intelligence and Formal Methods”
On Thursday, January 15, 2026, Prof. Nils Jansen from Ruhr University Bochum will give a talk in the AI Colloquium series on “Safe Learning Systems – Artificial Intelligence and Formal Methods”.
About the AI Colloquium
The AI Colloquium, organized by the Lamarr Institute, the Research Center Trustworthy Data Science and Security (RC Trust), and the Center for Data Science & Simulation at TU Dortmund University (DoDas), provides a platform for leading researchers to present groundbreaking work in Machine Learning and Artificial Intelligence. Unlike many other colloquia, these 90-minute sessions emphasize interactive dialog and international collaboration: each consists of a one-hour lecture followed by a 30-minute Q&A session. The colloquium is held mainly in English, and its hybrid format ensures that all interested parties can participate either in person or online via Zoom.
About the Talk
This AI Colloquium addresses how Artificial Intelligence (AI) has emerged as a disruptive force in society. Its increasing application in safety-critical domains such as healthcare, transportation, and military systems highlights the urgent need for a comprehensive understanding of the robustness and reliability of AI decision-making processes. Neurosymbolic AI aims to address these challenges by combining neural and symbolic approaches, with formal methods serving as a rigorous and structured backbone for symbolic reasoning.
This talk focuses on formal verification, in particular model checking, and its role in building safe and trustworthy learning systems. While reinforcement learning (RL) promises autonomous adaptation in unfamiliar environments with minimal human intervention, its deployment in real-world autonomous systems remains limited due to significant unresolved challenges. One of the most fundamental challenges is uncertainty, arising from incomplete or unknown knowledge about the environment. Such uncertainty poses major difficulties for state-based verification techniques like model checking.
Prof. Jansen will explore how different forms of uncertainty can be incorporated into formal system models to achieve trustworthiness, reliability, and safety in reinforcement learning. The presented work ranges from robust Markov decision processes and stochastic games to multi-environment models. In addition, the talk highlights the close connection between deep (neural) reinforcement learning and symbolic, model-based analysis and verification of safety-critical systems.
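To make the idea of incorporating uncertainty into formal models concrete, the following minimal sketch shows robust value iteration on an interval MDP: transition probabilities are only known to lie in intervals, the agent maximizes the probability of reaching a goal state, and an adversarial "nature" resolves the intervals in the worst case. The states, actions, and intervals are purely illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of robust value iteration for an interval MDP.
# All states, actions, and probability intervals are illustrative
# assumptions chosen for this example.

# Transitions: state -> action -> list of (successor, (p_low, p_high)).
TRANSITIONS = {
    "s0": {
        "a": [("goal", (0.6, 0.8)), ("trap", (0.2, 0.4))],
        "b": [("goal", (0.5, 0.9)), ("trap", (0.1, 0.5))],
    },
}

def worst_case_value(outcomes, values):
    """Nature resolves the intervals adversarially: assign as much
    probability mass as the intervals allow to low-value successors."""
    lows = {s: lo for s, (lo, _) in outcomes}
    remaining = 1.0 - sum(lows.values())  # mass left to distribute
    probs = dict(lows)
    # Greedily top up successors in ascending order of their value.
    for s, (lo, hi) in sorted(outcomes, key=lambda o: values[o[0]]):
        extra = min(hi - lo, remaining)
        probs[s] = lo + extra
        remaining -= extra
    return sum(p * values[s] for s, p in probs.items())

def robust_value_iteration(iterations=100):
    """Fixed-point iteration: the agent maximizes over actions while
    nature minimizes within the probability intervals."""
    values = {"s0": 0.0, "goal": 1.0, "trap": 0.0}
    for _ in range(iterations):
        for state, actions in TRANSITIONS.items():
            values[state] = max(
                worst_case_value(outcomes, values)
                for outcomes in actions.values()
            )
    return values

print(robust_value_iteration()["s0"])  # worst-case reachability of "goal"
```

The computed value is a guaranteed lower bound on the probability of reaching the goal, no matter which concrete transition probabilities within the intervals the environment realizes; this is the kind of guarantee that distinguishes formal, model-based analysis from purely empirical evaluation of a learned policy.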
About the Speaker
Nils Jansen is a professor at the Ruhr University Bochum, Germany, and leads the chair of Artificial Intelligence and Formal Methods. He is also an ELLIS fellow and a full professor of Safe and Dependable AI at Radboud University, Nijmegen, The Netherlands. The mission of his research is to increase the trustworthiness of Artificial Intelligence (AI). He was a research associate at the University of Texas at Austin and received his Ph.D. with distinction from RWTH Aachen University, Germany. He holds several grants in academic and industrial settings, including an ERC starting grant titled Data-Driven Verification and Learning Under Uncertainty (DEUCE).
Details
Date
January 15, 2026
10:15 a.m.
Location
TU Dortmund
Joseph-von-Fraunhofer Straße 25
Dortmund
Topics
Trustworthy Artificial Intelligence