Learning-Based Dimensionality Reduction for Computing Compact and Effective Local Feature Descriptors

A distinctive representation of image patches in the form of features is a key component of many computer vision and robotics tasks, such as image matching, image retrieval, and visual localization. State-of-the-art descriptors, from hand-crafted descriptors such as SIFT to learned ones such as HardNet, are usually high-dimensional: 128 dimensions or even more. The higher the dimensionality, the larger the memory consumption and computational time for approaches using such descriptors. In this paper, we investigate multi-layer perceptrons (MLPs) to extract low-dimensional but high-quality descriptors. We thoroughly analyze our method in unsupervised, self-supervised, and supervised settings, and evaluate the dimensionality reduction results on four representative descriptors. We consider different applications, including visual localization, patch verification, and image matching and retrieval. The experiments show that our lightweight MLPs achieve better dimensionality reduction than PCA. The lower-dimensional descriptors generated by our approach outperform the original higher-dimensional descriptors in downstream tasks, especially for the hand-crafted ones. The code will be available at this https URL.
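To illustrate the general idea of the abstract, the following is a minimal sketch of MLP-based descriptor dimensionality reduction next to a PCA baseline. The architecture, layer sizes, and target dimension (64) are assumptions for illustration only and do not reproduce the paper's trained models; the MLP weights here are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic batch of 1000 descriptors of dimension 128 (e.g. SIFT-like).
descs = rng.normal(size=(1000, 128)).astype(np.float32)

def pca_reduce(X, k):
    """Baseline: project centered data onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

class TinyMLP:
    """Hypothetical 2-layer MLP mapping 128-d descriptors to k dims.
    In the paper the MLP is trained (unsupervised, self-supervised,
    or supervised); here the weights are random for demonstration."""
    def __init__(self, d_in, d_hidden, d_out):
        self.W1 = rng.normal(scale=1 / np.sqrt(d_in), size=(d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(scale=1 / np.sqrt(d_hidden), size=(d_hidden, d_out))
        self.b2 = np.zeros(d_out)

    def __call__(self, X):
        h = np.maximum(X @ self.W1 + self.b1, 0.0)   # ReLU hidden layer
        z = h @ self.W2 + self.b2
        # L2-normalize outputs, as is common for descriptor matching.
        return z / np.linalg.norm(z, axis=1, keepdims=True)

mlp = TinyMLP(d_in=128, d_hidden=128, d_out=64)
low_mlp = mlp(descs)           # shape (1000, 64)
low_pca = pca_reduce(descs, 64)  # shape (1000, 64)
print(low_mlp.shape, low_pca.shape)
```

At matching time, the lower-dimensional descriptors roughly halve memory and nearest-neighbor search cost compared to the 128-d originals.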

  • Published in:
    arXiv
  • Type:
    Article
  • Authors:
    Dong, Hao; Chen, Xieyuanli; Dusmanu, Mihai; Larsson, Viktor; Pollefeys, Marc; Stachniss, Cyrill
  • Year:
    2022

Citation information

Dong, Hao; Chen, Xieyuanli; Dusmanu, Mihai; Larsson, Viktor; Pollefeys, Marc; Stachniss, Cyrill: Learning-Based Dimensionality Reduction for Computing Compact and Effective Local Feature Descriptors, arXiv, 2022, https://arxiv.org/abs/2209.13586

Associated Lamarr Researchers


Prof. Dr. Cyrill Stachniss

Principal Investigator, Embodied AI