Two approaches to point cloud interpretability are proposed, which elucidate the model's properties from a global and a model-internal perspective, respectively. All toolkits are now available on GitHub.
Establishing rigorous testing and certification techniques is essential before deploying new technologies such as Deep Learning (DL) in safety-critical applications. We propose a testing approach that can identify weaknesses in DL models.
A surrogate-model-based explainability approach is proposed for point cloud classifiers, together with two evaluation metrics that validate the plausibility of the approach. The toolkit is now available online.
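To make the surrogate-model idea concrete, the following is a minimal sketch, not the published method: a LIME-style surrogate explanation for an arbitrary point cloud classifier. The cloud is perturbed by dropping random subsets of points, the black box is queried on each perturbation, and a linear surrogate is fit whose weights score each point's importance. The function names, the toy classifier, and all parameters here are illustrative assumptions.

```python
# Hedged sketch (assumed, not the paper's exact method): a LIME-style
# linear surrogate that attributes a point cloud classifier's output
# to individual points by random point-dropping perturbations.
import numpy as np

def surrogate_explain(points, predict_fn, n_samples=500, seed=0):
    """Return one importance score per point in `points` (shape (N, 3))."""
    rng = np.random.default_rng(seed)
    n = len(points)
    masks = rng.integers(0, 2, size=(n_samples, n))  # 1 = keep the point
    masks[0] = 1  # include the unperturbed cloud as one sample
    preds = np.array([predict_fn(points[m.astype(bool)]) for m in masks])
    # Fit a linear surrogate: preds ~ masks @ w + b (least squares)
    X = np.hstack([masks, np.ones((n_samples, 1))])
    w, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return w[:-1]  # per-point importance weights (bias dropped)

# Toy stand-in for a black-box classifier: its score grows with the
# mean height (z-coordinate) of the surviving points.
def toy_classifier(pts):
    return float(pts[:, 2].mean()) if len(pts) else 0.0

cloud = np.zeros((8, 3))
cloud[0, 2] = 1.0  # only point 0 carries the signal
scores = surrogate_explain(cloud, toy_classifier)
print(int(np.argmax(scores)))  # point 0 should rank most important
```

Because the surrogate is linear in the keep/drop mask, its weights can be read directly as per-point attributions, which is what makes plausibility metrics like those mentioned above applicable.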