Visualizing Global Explanations of Point Cloud DNNs
So far, few researchers have targeted the explainability of point cloud neural networks. Some explainability methods are not directly applicable to these networks because of their structural specifics. In this work, we show that Activation Maximization (AM) with traditional pixel-wise regularizations fails to generate human-perceptible global explanations for point cloud networks. We propose new generative model-based AM approaches to clearly outline the global explanations and enhance their comprehensibility. Additionally, we propose a composite evaluation metric that addresses the limitations of existing evaluation methods by simultaneously taking into account activation value, diversity, and perceptibility. Extensive experiments demonstrate that our generative-based AM approaches outperform regularization-based ones both qualitatively and quantitatively. To the best of our knowledge, this is the first work investigating global explainability of point cloud networks. Our code is available at: https://github.com/Explain3D/PointCloudAM.
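For context, the abstract's baseline is vanilla Activation Maximization: synthesize an input that maximizes a chosen class logit, optionally with a simple norm regularizer. Below is a minimal sketch of that baseline for a point cloud classifier, assuming a PyTorch model (e.g. PointNet-style) that accepts inputs of shape (batch, num_points, 3); the model, shapes, and hyperparameters are illustrative assumptions, and this is not the paper's generative-model-based method.

# Minimal vanilla Activation Maximization sketch (assumed PyTorch classifier).
import torch

def activation_maximization(model, target_class, num_points=1024,
                            steps=500, lr=0.01, l2_weight=1e-3):
    model.eval()
    # Start from a random point cloud in the unit cube; optimize the points directly.
    points = torch.rand(1, num_points, 3, requires_grad=True)
    optimizer = torch.optim.Adam([points], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(points)  # assumed output shape: (1, num_classes)
        # Maximize the target logit; the L2 term is a simple "pixel-wise"-style
        # regularizer, which the paper reports yields hard-to-interpret clouds.
        loss = -logits[0, target_class] + l2_weight * points.pow(2).sum()
        loss.backward()
        optimizer.step()
    return points.detach()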
- Published in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
- Type: Inproceedings
- Authors: Tan, Hanxiao
- Year: 2023
Citation information
Tan, Hanxiao: Visualizing Global Explanations of Point Cloud DNNs. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023. https://openaccess.thecvf.com/content/WACV2023/html/Tan_Visualizing_Global_Explanations_of_Point_Cloud_DNNs_WACV_2023_paper.html
@inproceedings{Tan.2023d,
  author    = {Tan, Hanxiao},
  title     = {Visualizing Global Explanations of Point Cloud {DNNs}},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  url       = {https://openaccess.thecvf.com/content/WACV2023/html/Tan_Visualizing_Global_Explanations_of_Point_Cloud_DNNs_WACV_2023_paper.html},
  year      = {2023},
  abstract  = {So far, few researchers have targeted the explainability of point cloud neural networks. Some explainability methods are not directly applicable to these networks because of their structural specifics. In this work, we show that Activation Maximization (AM) with traditional pixel-wise regularizations fails to generate human-perceptible global explanations for point cloud networks. We propose new...}
}