Explainable Hallucination through Natural Language Inference Mapping

Large language models (LLMs) often generate hallucinated content, making it crucial to identify and quantify inconsistencies in their outputs. We introduce HaluMap, a post-hoc framework that detects hallucinations by mapping entailment and contradiction relations between source inputs and generated outputs using a natural language inference (NLI) model. To improve reliability, we propose a calibration step that leverages intra-text relations to refine predictions. HaluMap outperforms other state-of-the-art, training-free NLI-based methods by five percentage points, while providing clear, interpretable explanations. As a training-free and model-agnostic approach, HaluMap offers a practical solution for verifying LLM outputs across diverse NLP tasks. The resources of this paper are available at https://github.com/caisa-lab/acl25-halumap.
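The abstract sketches the core mechanism: each generated sentence is scored against the source with an NLI model, and the resulting entailment and contradiction relations indicate which claims are supported. The snippet below is a minimal, illustrative sketch of that idea only, not the authors' implementation (see the linked repository for HaluMap itself); the model choice roberta-large-mnli, the helper functions nli_scores and flag_hallucinations, and the entailment threshold are assumptions made for illustration, and the paper's intra-text calibration step is omitted.

    # Illustrative sketch only: score generated sentences against source sentences
    # with an off-the-shelf NLI model and flag sentences that no source sentence entails.
    # Model name, function names, and threshold are assumptions, not the HaluMap code.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "roberta-large-mnli"  # any MNLI-style NLI model could be used here
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    model.eval()

    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    LABELS = ["contradiction", "neutral", "entailment"]

    def nli_scores(premise: str, hypothesis: str) -> dict:
        """Return NLI class probabilities for a (premise, hypothesis) pair."""
        inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = torch.softmax(model(**inputs).logits[0], dim=-1)
        return dict(zip(LABELS, probs.tolist()))

    def flag_hallucinations(source_sents, generated_sents, entail_threshold=0.5):
        """For each generated sentence, find its best-entailing source sentence and
        mark it as potentially hallucinated if no source sentence entails it."""
        report = []
        for hyp in generated_sents:
            best = max(
                ({**nli_scores(prem, hyp), "source": prem} for prem in source_sents),
                key=lambda s: s["entailment"],
            )
            report.append({
                "generated": hyp,
                "supporting_source": best["source"],
                "entailment": best["entailment"],
                "contradiction": best["contradiction"],
                "hallucinated": best["entailment"] < entail_threshold,
            })
        return report

A typical call would split the source document and the LLM output into sentences and pass both lists to flag_hallucinations; the per-sentence relations in the returned report are what make such an approach interpretable.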

  • Published in:
    Findings of the Association for Computational Linguistics: ACL 2025
  • Type:
    Inproceedings
  • Authors:
    Chen, Wei-Fan; Zhao, Zhixue; Karimi, Akbar; Flek, Lucie
  • Year:
    2025
  • Source:
    https://aclanthology.org/2025.findings-acl.96/

Citation information

Chen, Wei-Fan; Zhao, Zhixue; Karimi, Akbar; Flek, Lucie: Explainable Hallucination through Natural Language Inference Mapping. In: Findings of the Association for Computational Linguistics: ACL 2025, pp. 1888–1896. Association for Computational Linguistics, July 2025. https://aclanthology.org/2025.findings-acl.96/

Associated Lamarr Researchers


Dr. Akbar Karimi
Postdoctoral Researcher, NLP

Prof. Dr. Lucie Flek
Area Chair, NLP