DoPose-6D dataset for object segmentation and 6D pose estimation

Scene understanding is essential for determining how capable robotic grasping and manipulation can become. It is a problem that can be approached with different techniques: seen object segmentation, unseen object segmentation, or 6D pose estimation, and each of these can be extended to multi-view settings. Most work on these problems relies on synthetic datasets for training, due to the lack of real datasets large enough for that purpose, and uses the few available real datasets only for evaluation. This motivated us to introduce a new dataset, called DoPose-6D. The dataset provides annotations for 6D pose estimation and object segmentation, as well as multi-view annotations, serving all of the aforementioned techniques. It contains two types of scenes, bin picking and tabletop, with bin picking being the primary motivation for collecting the dataset. We illustrate the effect of this dataset in the context of unseen object segmentation and provide insights on mixing synthetic and real data for training. We train a Mask R-CNN model that is practical for industrial and robotic grasping applications, and show how our dataset boosts its performance. Our DoPose-6D dataset, trained network models, pipeline code, and ROS driver are available online.
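The abstract mentions insights on mixing synthetic and real data during training. The paper's exact mixing strategy is not reproduced here; a common approach, sketched below under illustrative assumptions, is to draw each batch element from the real pool with some fixed probability and from the synthetic pool otherwise (the pool contents, batch size, and mixing ratio are all hypothetical):

```python
import random


def sample_mixed_batch(synthetic, real, batch_size, real_fraction, rng=None):
    """Draw one training batch where each element comes from the real pool
    with probability `real_fraction`, otherwise from the synthetic pool.

    This is an illustrative sketch, not the mixing scheme from the paper.
    """
    rng = rng or random.Random(0)
    return [
        rng.choice(real) if rng.random() < real_fraction else rng.choice(synthetic)
        for _ in range(batch_size)
    ]


# Hypothetical usage: sample IDs with 25% real data per batch.
synthetic_ids = [f"syn_{i}" for i in range(1000)]
real_ids = [f"real_{i}" for i in range(50)]
batch = sample_mixed_batch(synthetic_ids, real_ids, batch_size=8, real_fraction=0.25)
```

A per-sample Bernoulli draw keeps the expected real/synthetic ratio constant regardless of the pools' very different sizes, which matters when the real dataset is orders of magnitude smaller than the synthetic one.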

  • Published in:
International Conference on Machine Learning and Applications (ICMLA)
  • Type:
    Inproceedings
  • Authors:
    A. Gouda, A. Ghanem, C. Reining
  • Year:
    2022

Citation information

A. Gouda, A. Ghanem, C. Reining: DoPose-6D dataset for object segmentation and 6D pose estimation, International Conference on Machine Learning and Applications (ICMLA), 2022, https://doi.org/10.48550/arXiv.2204.13613