osmAG-LLM: Zero-Shot Open-Vocabulary Object Navigation via Semantic Maps and Large Language Models Reasoning

Recent open-vocabulary robot mapping methods enrich dense geometric maps with pre-trained visual-language features, achieving a high level of detail and guiding robots to find objects specified by open-vocabulary language queries. While the scalability of such approaches has received some attention, another fundamental problem is that high-detail object maps quickly become outdated, as objects are frequently moved. In this work, we develop a mapping and navigation system for object-goal navigation that, from the ground up, accounts for the possibility that a queried object may have moved or may not be mapped at all. Instead of striving for high-fidelity mapping detail, we consider that the main purpose of a map is to provide environment grounding and context, which we combine with the semantic priors of LLMs to reason about object locations, deploying an active, online approach to navigate to the objects. Through simulated and real-world experiments, we find that our approach tends to achieve higher retrieval success at shorter path lengths for static objects and by far outperforms prior approaches for dynamic or unmapped object queries.

  • Published in:
    IEEE Robotics and Automation Letters
  • Type:
    Article
  • Authors:
    Xie, Fujing; Schwertfeger, Sören; Blum, Hermann
  • Year:
    2026

Citation information

Xie, Fujing; Schwertfeger, Sören; Blum, Hermann: osmAG-LLM: Zero-Shot Open-Vocabulary Object Navigation via Semantic Maps and Large Language Models Reasoning, IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 2426--2433, March 2026.

Associated Lamarr Researchers

Blum Hermann - Lamarr Institute for Machine Learning (ML) and Artificial Intelligence (AI)

Jun. Prof. Dr. Hermann Blum

Principal Investigator, Embodied AI