Language-specific Calibration for Pruning Multilingual Language Models
Recent advances in large language model (LLM) pruning have shown state-of-the-art compression results in post-training and retraining-free settings while maintaining high predictive performance. However, such research mainly considers calibrating pruning using English text, despite the multilingual nature of modern LLMs and their frequent use in non-English languages. In this paper, we set out to explore effective strategies for calibrating the pruning of multilingual language models. We present the first comprehensive empirical study comparing different calibration languages for pruning multilingual models across diverse tasks, models, and state-of-the-art pruning techniques. Our results yield practical suggestions; for example, calibrating in the target language can efficiently yield lower perplexity but does not necessarily benefit downstream tasks. Further analysis reveals that calibration in the target language mainly helps preserve language-specific features related to fluency and coherence, but may not help capture language-agnostic features such as language understanding and reasoning. Finally, we provide practical recommendations for future practitioners.
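The post-training, retraining-free setting described in the abstract works by passing a small calibration corpus through the model and scoring weights with the resulting activations. As a purely illustrative aside (not the authors' code), the sketch below assumes a Wanda-style importance score, |W_ij| * ||X_j||_2, on a toy linear layer; all shapes and names are hypothetical. It shows that choosing the calibration language only changes the activations X that enter the score, which is the knob the paper studies.

```python
# Minimal sketch, assuming a Wanda-style pruning criterion on a single toy linear
# layer. In a real pipeline, `calib_activations` would be hidden states collected
# by running target-language calibration text (instead of English) through the
# model up to this layer; here they are random stand-ins.
import torch

torch.manual_seed(0)

d_in, d_out, n_calib_tokens = 64, 32, 256
sparsity = 0.5  # fraction of weights to remove per output row

layer = torch.nn.Linear(d_in, d_out, bias=False)

# Stand-in for activations produced by the target-language calibration corpus.
calib_activations = torch.randn(n_calib_tokens, d_in)

# Wanda-style importance: |W_ij| * ||X_j||_2, i.e. weight magnitude scaled by the
# L2 norm of the corresponding input feature over the calibration tokens.
act_norm = calib_activations.norm(p=2, dim=0)         # shape: (d_in,)
importance = layer.weight.detach().abs() * act_norm   # shape: (d_out, d_in)

# Remove the lowest-importance weights within each output row.
k = int(sparsity * d_in)
threshold = importance.kthvalue(k, dim=1, keepdim=True).values
mask = importance > threshold
with torch.no_grad():
    layer.weight.mul_(mask)

print(f"Remaining nonzero weights: {layer.weight.count_nonzero().item()}/{layer.weight.numel()}")
```

Swapping the calibration language leaves the pruning rule untouched; only `act_norm`, and hence which weights survive, depends on the language of the calibration text.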
- Published in: arXiv
- Type: Inproceedings
- Authors: Kurz, Simon; Zhao, Zhixue; Chen, Jian-Jia; Flek, Lucie
- Year: 2024
Citation information
Kurz, Simon; Zhao, Zhixue; Chen, Jian-Jia; Flek, Lucie: Language-specific Calibration for Pruning Multilingual Language Models, arXiv, 2024.
@Inproceedings{Kurz.etal.2024a,
  author    = {Kurz, Simon and Zhao, Zhixue and Chen, Jian-Jia and Flek, Lucie},
  title     = {Language-specific Calibration for Pruning Multilingual Language Models},
  booktitle = {arXiv},
  year      = {2024},
  abstract  = {Recent advances in large language model (LLM) pruning have shown state-of-the-art compression results in post-training and retraining-free settings while maintaining high predictive performance. However, such research mainly considers calibrating pruning using English text, despite the multilingual nature of modern LLMs and their frequent use in non-English languages. In this paper, we set out to explore effective strategies for calibrating the pruning of multilingual language models. We present the first comprehensive empirical study comparing different calibration languages for pruning multilingual models across diverse tasks, models, and state-of-the-art pruning techniques. Our results yield practical suggestions; for example, calibrating in the target language can efficiently yield lower perplexity but does not necessarily benefit downstream tasks. Further analysis reveals that calibration in the target language mainly helps preserve language-specific features related to fluency and coherence, but may not help capture language-agnostic features such as language understanding and reasoning. Finally, we provide practical recommendations for future practitioners.}
}