Revisiting Pruning vs Quantization for Small Language Models

Deploying language models on resource-constrained devices, such as mobile phones, wearables, and on-device AI assistants, demands compact, efficient models without sacrificing performance. Compressing Small Language Models (SLMs) is particularly suited for these scenarios, yet their compression dynamics remain underexplored compared to Large Language Models (LLMs). We systematically evaluate leading post-training pruning (SparseGPT, Wanda) and quantization (GPTQ, AWQ) methods across six SLMs ranging from 0.5B to 3.8B parameters, seven languages, and seven downstream tasks. Our results show that quantization consistently outperforms pruning in preserving model fidelity, multilingual perplexity, and reasoning accuracy. However, quantization's advantages diminish on complex knowledge and reasoning tasks such as OpenBookQA, highlighting a disconnect between compression fidelity and downstream task performance. Notably, trends observed in LLMs (e.g., Wanda performing competitively with SparseGPT) do not generalize to SLMs. For practitioners, we recommend prioritizing quantization (particularly AWQ) for SLM compression and caution against relying on a single metric.

Citation information

Zhou, Zihan; Kurz, Simon; Zhao, Zhixue: Revisiting Pruning vs Quantization for Small Language Models. In: Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12055–12070, November 2025, Association for Computational Linguistics. https://aclanthology.org/2025.findings-emnlp.645/