SemRaFiner: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds
Semantic scene understanding, including the perception and classification of moving agents, is essential to enabling safe and robust driving behaviours of autonomous vehicles. Cameras and LiDARs are commonly used for semantic scene understanding. However, both sensor modalities face limitations in adverse weather and usually do not provide motion information. Radar sensors overcome these limitations and directly offer information about moving agents by measuring the Doppler velocity, but the measurements are comparably sparse and noisy. In this paper, we address the problem of panoptic segmentation in sparse radar point clouds to enhance scene understanding. Our approach, called SemRaFiner, accounts for changing density in sparse radar point clouds and optimizes the feature extraction to improve accuracy. Furthermore, we propose an optimized training procedure to refine instance assignments by incorporating a dedicated data augmentation. Our experiments suggest that our approach outperforms state-of-the-art methods for radar-based panoptic segmentation.
- Published in: IEEE Robotics and Automation Letters
- Type: Article
- Authors: Zeller, Matthias; Herraez, Daniel Casado; Ayan, Bengisu; Behley, Jens; Heidingsfeld, Michael; Stachniss, Cyrill
- Year: 2024
- Source: https://ieeexplore.ieee.org/abstract/document/10758203
Citation information
Zeller, Matthias; Herraez, Daniel Casado; Ayan, Bengisu; Behley, Jens; Heidingsfeld, Michael; Stachniss, Cyrill: SemRaFiner: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds. IEEE Robotics and Automation Letters, 2024, pp. 1--8, https://ieeexplore.ieee.org/abstract/document/10758203
@Article{Zeller.etal.2024c,
author={Zeller, Matthias and Herraez, Daniel Casado and Ayan, Bengisu and Behley, Jens and Heidingsfeld, Michael and Stachniss, Cyrill},
title={{SemRaFiner}: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds},
journal={{IEEE} Robotics and Automation Letters},
pages={1--8},
url={https://ieeexplore.ieee.org/abstract/document/10758203},
year={2024},
abstract={Semantic scene understanding, including the perception and classification of moving agents, is essential to enabling safe and robust driving behaviours of autonomous vehicles. Cameras and {LiDARs} are commonly used for semantic scene understanding. However, both sensor modalities face limitations in adverse weather and usually do not provide motion information. Radar sensors overcome these limitations and directly offer information about moving agents by measuring the Doppler velocity, but the measurements are comparably sparse and noisy. In this paper, we address the problem of panoptic segmentation in sparse radar point clouds to enhance scene understanding. Our approach, called {SemRaFiner}, accounts for changing density in sparse radar point clouds and optimizes the feature extraction to improve accuracy. Furthermore, we propose an optimized training procedure to refine instance assignments by incorporating a dedicated data augmentation. Our experiments suggest that our approach outperforms state-of-the-art methods for radar-based panoptic segmentation.}}