Object-level 3D Semantic Mapping using a Network of Smart Edge Sensors
Autonomous robots that interact with their environment require a detailed semantic scene model. For this, volumetric semantic maps are frequently used. Scene understanding can be further improved by including object-level information in the map. In this work, we extend a multi-view 3D semantic mapping system, consisting of a network of distributed smart edge sensors, with object-level information to enable downstream tasks that need object-level input. Objects are represented in the map via their 3D mesh model or as an object-centric volumetric sub-map that can model arbitrary object geometry when no detailed 3D model is available. We propose a keypoint-based approach that estimates object poses via PnP and refines them via ICP alignment of the 3D object model with the observed point cloud segments. Object instances are tracked to integrate observations over time and to remain robust against temporary occlusions. Our method is evaluated on the public Behave dataset, where it achieves pose estimation accuracy within a few centimeters, and in real-world experiments with the sensor network in a challenging lab environment, where multiple chairs and a table are tracked through the scene online and in real time, even under high occlusions.
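To illustrate the two-stage pose estimation pipeline the abstract describes (PnP initialization from 2D-3D keypoint correspondences, followed by ICP refinement against the observed point cloud segment), the following Python sketch combines OpenCV and Open3D. It is not the authors' implementation; the function name, the EPnP solver choice, and the icp_dist parameter are illustrative assumptions.

import numpy as np
import cv2
import open3d as o3d

def estimate_object_pose(model_kp_3d, image_kp_2d, K, model_pcd, segment_pcd, icp_dist=0.05):
    """Two-stage pose estimation sketch: PnP initialization, then ICP refinement.

    model_kp_3d: (N, 3) keypoint coordinates on the object's 3D model
    image_kp_2d: (N, 2) corresponding keypoint detections in the image
    K:           (3, 3) camera intrinsic matrix
    model_pcd / segment_pcd: open3d.geometry.PointCloud of the object model
                             and the observed point cloud segment
    """
    # Stage 1: initial 6-DoF pose from 2D-3D keypoint correspondences via PnP
    ok, rvec, tvec = cv2.solvePnP(
        model_kp_3d.astype(np.float64), image_kp_2d.astype(np.float64),
        K.astype(np.float64), None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP did not converge")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    T_init = np.eye(4)
    T_init[:3, :3] = R
    T_init[:3, 3] = tvec.ravel()

    # Stage 2: refine the pose by aligning the 3D object model with the
    # observed point cloud segment using point-to-point ICP
    reg = o3d.pipelines.registration.registration_icp(
        model_pcd, segment_pcd, icp_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation         # 4x4 model-to-camera transform

In a tracking setting such as the one the paper evaluates, the refined pose from the previous frame could plausibly replace the PnP estimate as the ICP initialization whenever keypoints are temporarily occluded.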
- Published in: IEEE International Conference on Robotic Computing
- Type: Inproceedings
- Authors: Hau, Julian; Bultmann, Simon; Behnke, Sven
- Year: 2022
Citation information
Hau, Julian; Bultmann, Simon; Behnke, Sven: Object-level 3D Semantic Mapping using a Network of Smart Edge Sensors. In: IEEE International Conference on Robotic Computing, November 2022. https://ais.uni-bonn.de/papers/IRC_2022_Hau.pdf
@Inproceedings{Hau.etal.2022a,
author={Hau, Julian and Bultmann, Simon and Behnke, Sven},
title={Object-level 3D Semantic Mapping using a Network of Smart Edge Sensors},
booktitle={IEEE International Conference on Robotic Computing},
month={November},
url={https://ais.uni-bonn.de/papers/IRC_2022_Hau.pdf},
year={2022},
abstract={Autonomous robots that interact with their environment require a detailed semantic scene model. For this, volumetric semantic maps are frequently used. Scene understanding can be further improved by including object-level information in the map. In this work, we extend a multi-view 3D semantic mapping system, consisting of a network of distributed smart edge sensors, with object-level information to enable downstream tasks that need object-level input. Objects are represented in the map via their 3D mesh model or as an object-centric volumetric sub-map that can model arbitrary object geometry when no detailed 3D model is available. We propose a keypoint-based approach that estimates object poses via PnP and refines them via ICP alignment of the 3D object model with the observed point cloud segments. Object instances are tracked to integrate observations over time and to remain robust against temporary occlusions. Our method is evaluated on the public Behave dataset, where it achieves pose estimation accuracy within a few centimeters, and in real-world experiments with the sensor network in a challenging lab environment, where multiple chairs and a table are tracked through the scene online and in real time, even under high occlusions.}}