Two different approaches to point cloud interpretability are proposed, elucidating model properties from a global and a model-internal perspective, respectively. All toolkits are available on GitHub.
Establishing rigorous testing and certification techniques is essential before deploying new technologies such as Deep Learning (DL) in safety-critical applications. We propose a testing approach to identify weaknesses in DL models.