CAISA at SemEval-2025 Task 7: Multilingual and Cross-lingual Fact-Checked Claim Retrieval

We leveraged LLaMA to evaluate the relevance of retrieved claims within a retrieval-based fact-checking framework. This approach aimed to explore the impact of large language models (LLMs) on retrieval tasks and to assess their effectiveness in enhancing fact-checking accuracy. In addition, we integrated Jina embeddings v2 and the multilingual MPNet sentence transformer to filter and rank a set of 500 candidate claims. These refined claims were then passed to LLaMA, ensuring that only the most contextually relevant ones were assessed.
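
The filtering-and-ranking step can be pictured as a standard embedding-based retrieval pass. The snippet below is a minimal sketch rather than the authors' code: the checkpoint name (paraphrase-multilingual-mpnet-base-v2), the top-k cutoff, and the rank_candidates helper are assumptions for illustration only.

    # Minimal sketch of the candidate-filtering stage, assuming the
    # sentence-transformers library and a multilingual MPNet checkpoint;
    # the paper's exact models, prompts, and cutoffs may differ.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

    def rank_candidates(post: str, claims: list[str], top_k: int = 10) -> list[str]:
        """Embed the input post and the candidate claims, then keep the
        top_k claims by cosine similarity for the LLaMA relevance check."""
        post_emb = model.encode(post, convert_to_tensor=True)
        claim_embs = model.encode(claims, convert_to_tensor=True)
        scores = util.cos_sim(post_emb, claim_embs)[0]   # one score per claim
        best = scores.argsort(descending=True)[:top_k]
        return [claims[i] for i in best]

The shortlisted claims would then be placed into a LLaMA prompt asking the model to judge which of them are actually relevant to the post.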

  • Published in:
    Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
  • Type:
    Inproceedings
  • Authors:
    Haroon, Muqaddas; Ashraf, Shaina; Baris, Ipek; Flek, Lucie
  • Year:
    2025
  • Source:
    https://aclanthology.org/2025.semeval-1.183/

Citation information

Haroon, Muqaddas; Ashraf, Shaina; Baris, Ipek; Flek, Lucie: CAISA at SemEval-2025 Task 7: Multilingual and Cross-lingual Fact-Checked Claim Retrieval. In: Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pp. 1377–1382, Association for Computational Linguistics, July 2025. https://aclanthology.org/2025.semeval-1.183/