How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?

Cross-lingual Entity Typing (CLET) aims to improve the quality of entity type prediction by transferring semantic knowledge learned from high-resource languages to low-resource languages. In this paper, by applying multilingual transfer learning via a mixture-of-experts approach, our model dynamically captures the relationship between the target language and each source language, and effectively generalizes to predict types of unseen entities in new languages. Extensive experiments on multilingual datasets show that our method significantly outperforms multiple baselines and robustly handles negative transfer. We further question the relationship between language similarity and CLET performance. A series of experiments refutes the common assumption that more source languages are always better, and suggests the Similarity Hypothesis for CLET.
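
As an illustration of the mixture-of-experts idea described above, the sketch below shows one plausible way to weight per-source-language experts with a gate conditioned on the target-language mention representation. The paper's actual architecture is not specified in this abstract; the class, dimensions, and parameter names here are hypothetical assumptions for illustration only.

```python
# Minimal sketch (assumed design): one typing expert per source language,
# combined by a softmax gate over the mention representation.
import torch
import torch.nn as nn

class SourceLanguageMoE(nn.Module):
    def __init__(self, hidden_dim: int, num_types: int, num_source_langs: int):
        super().__init__()
        # One entity-typing "expert" per source language (assumed linear classifiers).
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, num_types) for _ in range(num_source_langs)]
        )
        # Gate scores how relevant each source language is to the input mention.
        self.gate = nn.Linear(hidden_dim, num_source_langs)

    def forward(self, mention_repr: torch.Tensor) -> torch.Tensor:
        # mention_repr: (batch, hidden_dim) encoding of the mention in context.
        weights = torch.softmax(self.gate(mention_repr), dim=-1)      # (batch, K)
        expert_logits = torch.stack(
            [expert(mention_repr) for expert in self.experts], dim=1
        )                                                              # (batch, K, T)
        # Weighted combination of per-source predictions -> type logits.
        return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)     # (batch, T)

# Usage: score fine-grained types for a batch of target-language mentions.
model = SourceLanguageMoE(hidden_dim=768, num_types=112, num_source_langs=3)
logits = model(torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 112])
```

The learned gate weights are what let such a model down-weight dissimilar source languages, which is the mechanism the abstract credits for handling negative transfer.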

  • Published in:
Findings of the Association for Computational Linguistics: ACL 2022, Annual Meeting of the Association for Computational Linguistics (ACL)
  • Type:
    Inproceedings
  • Authors:
H.-J. Jin, T. Dong, L. Hou, J. Li, et al.
  • Year:
    2022

Citation information

H.-J. Jin, T. Dong, L. Hou, J. Li, et al.: How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?, Annual Meeting of the Association for Computational Linguistics (ACL), Findings of the Association for Computational Linguistics: ACL 2022, 2022, http://dx.doi.org/10.18653/v1/2022.findings-acl.243