Explainable AI is primarily aimed at experts. An expert group of the CTAI has concluded that Explainable AI, in combination with other measures, can also help inform consumers and citizens.
Two different approaches to point cloud interpretability are proposed, which elucidate the model's properties from a global perspective and from a model-internal perspective, respectively. Both toolkits are now available on GitHub.
A surrogate-model-based explainability approach for point cloud classifiers is proposed, together with two evaluation metrics that validate the plausibility of the approach. The toolkit is now available online.
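The core idea behind surrogate-model-based explainability can be sketched in a few lines: a simple, interpretable model is fitted to mimic a black-box classifier, and its agreement with the black box (fidelity) serves as one plausibility check. The sketch below uses a hypothetical black-box rule and a one-feature threshold stump as the surrogate; all names and the toy data are illustrative, not the toolkit's actual API.

```python
import random

# Hypothetical black-box classifier standing in for a point cloud model;
# the decision rule is hidden from the surrogate.
def black_box(x):
    return 1 if x[0] + 0.1 * x[1] > 0.5 else 0

random.seed(0)
X = [(random.random(), random.random()) for _ in range(500)]
y = [black_box(p) for p in X]

# Surrogate: a single-feature threshold stump chosen to maximize
# fidelity, i.e. agreement with the black box's predictions.
def fit_stump(X, y):
    best = (0, 0.0, -1.0)  # (feature index, threshold, fidelity)
    for f in range(2):
        for t in sorted({p[f] for p in X}):
            acc = sum((p[f] > t) == bool(lbl) for p, lbl in zip(X, y)) / len(X)
            if acc > best[2]:
                best = (f, t, acc)
    return best

feature, threshold, fidelity = fit_stump(X, y)
print(f"surrogate: x[{feature}] > {threshold:.2f}, fidelity = {fidelity:.2f}")
```

Because the surrogate is transparent, its learned rule (a threshold on one feature) can be inspected directly, while the fidelity score quantifies how faithfully it reflects the black box on the sampled data.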
Safety-critical systems such as self-driving cars place new, high demands on AI models. Solutions are being researched in many places, and the “AI Safeguarding” consortium has now compiled a comprehensive overview.
Nowadays, many decisions can be made efficiently, accurately, and automatically by machine learning methods. Equally important is transparently presenting and explaining why and how these decisions are made.