Cross-lingual Entity Typing (CLET) aims to improve the quality of entity type prediction by transferring semantic knowledge learned from rich-resource languages to low-resource languages. In this paper, using multilingual transfer learning via a mixture-of-experts approach, our model dynamically captures the relationship between the target language and each source language, and effectively generalizes to predict types of unseen entities in new languages. Extensive experiments on multilingual datasets show that our method significantly outperforms multiple baselines and robustly handles negative transfer. We question the relationship between language similarity and CLET performance. A series of experiments refutes the common assumption that more source languages are always better, and suggests the Similarity Hypothesis for CLET.
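The mixture-of-experts idea summarized above can be illustrated with a minimal sketch: one expert per source language scores candidate types for a mention, and a gating network, conditioned on the target-language mention representation, weights each expert's output. This is only an illustration of the general technique under assumed dimensions; the class name `MoETyping` and all hyperparameters are hypothetical and not the authors' actual architecture.

```python
# Minimal sketch of mixture-of-experts gating over source-language experts.
# Hypothetical illustration of the general technique, not the paper's model.
import torch
import torch.nn as nn

class MoETyping(nn.Module):
    def __init__(self, hidden_dim, num_types, num_source_langs):
        super().__init__()
        # One expert (type classifier) per source language.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, num_types) for _ in range(num_source_langs)]
        )
        # Gate scores each source language given the target-language mention encoding.
        self.gate = nn.Linear(hidden_dim, num_source_langs)

    def forward(self, mention_repr):  # mention_repr: (batch, hidden_dim)
        gate_weights = torch.softmax(self.gate(mention_repr), dim=-1)    # (batch, n_src)
        expert_logits = torch.stack(
            [expert(mention_repr) for expert in self.experts], dim=1     # (batch, n_src, n_types)
        )
        # Weight each source-language expert by its gate score and sum.
        mixed = (gate_weights.unsqueeze(-1) * expert_logits).sum(dim=1)  # (batch, n_types)
        return torch.sigmoid(mixed)  # multi-label type probabilities

# Usage: encode a target-language mention with any multilingual encoder,
# then pass the pooled representation through the mixture.
model = MoETyping(hidden_dim=768, num_types=112, num_source_langs=3)
probs = model(torch.randn(4, 768))  # (4, 112) type probabilities
```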
How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?
Type: Inproceedings
Author: H.-J. Jin, T. Dong, L. Hou, J. Li, et al.
Journal: Findings of the Association for Computational Linguistics: ACL 2022
Year: 2022
Citation information
H.-J. Jin, T. Dong, L. Hou, J. Li, et al.:
How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?
Findings of the Association for Computational Linguistics: ACL 2022,
pages 3071–3081, May 2022,
Association for Computational Linguistics,
Dublin, Ireland.
http://dx.doi.org/10.18653/v1/2022.findings-acl.243