Responsible AI

Responsibility in the design, implementation and use of Artificial Intelligence is central to Lamarr’s mission. To strengthen trustworthiness, sustainability and explainability of AI, our delegates seek to connect with Canadian experts who join them in advancing AI for the good of society!

Our Responsible AI Experts in Canada

Advancing Reliable AI Systems for Sensitive Domains

Michael Kamp focuses on trustworthy, explainable, and robust AI methods, uniting deep learning theory, causality, and privacy-preserving optimization. His research connects mathematical foundations (loss surface geometry) with practical deployments in sensitive domains like healthcare. He is looking for partners to jointly advance reliable AI systems that meet strict safety, fairness, and transparency standards.


Prof. Dr. Michael Kamp

Principal Investigator Trustworthy AI

Putting Sustainable AI into Practice

Aimee van Wynsberghe’s research focuses on Sustainable AI, which comprises both AI for sustainability (e.g., using AI to achieve the UN SDGs and to mitigate the climate crisis) and the sustainability of AI itself (e.g., examining the environmental impacts of AI development, deployment, and disposal, including energy consumption, carbon footprints, resource extraction, and waste). She currently works on these topics with researchers from ethics/philosophy, engineering, agriculture, and political science, and wishes to continue doing so, specifically to explore how to move from the principles of sustainable AI to putting those principles into practice through the testing and/or design of future AI infrastructures.


Prof. Dr. Aimee van Wynsberghe

Principal Investigator Trustworthy AI

Utilizing Reinforcement Learning for Explainable Machine Learning Algorithms

Our research focuses on neurosymbolic concept learning on knowledge graphs, addressing key challenges in scale and explainability. We leverage tensor-based storage, embedding techniques, and reinforcement learning to develop robust concept learning techniques. These can be deployed on incomplete and noisy knowledge and scale to billions of assertions. We are seeking collaborators specializing in reinforcement learning to jointly develop novel, explainable machine learning algorithms and investigate their application to complex reasoning tasks.


Prof. Dr. Axel-Cyrille Ngonga Ngomo

Lamarr Fellow

Responsible AI in Practice: Guiding Sustainable and Trustworthy Development

Raphael Fischer’s PhD thesis was dedicated to advancing AI sustainability with regard to society, the environment, and the economy. His labeling approach can bridge knowledge gaps and make AI more transparent and trustworthy, while his proposed meta-learning extension enables user-centric, automated model selection. Providing important insights for AI responsibility, his work links various disciplines and Lamarr research groups, such as Resource-Aware ML and AI Certification. Raphael Fischer is open to any exchange, with a prime interest in discussing sustainable AI applications and practical AI properties.


Raphael Fischer

Scientist Resource-Aware ML

Updates on Responsible AI from our Trustworthy AI Research Area

Stay updated on the latest projects, research findings, and activities on Responsible AI by Lamarr.

Find out more about Lamarr’s research findings on our blog