Towards Understanding Layer Contributions in Tabular In-Context Learning Models
Despite the architectural similarities between tabular in-context learning (ICL) models and large language models (LLMs), little is known about how individual layers contribute to tabular prediction. In this paper, we investigate how the latent spaces evolve across layers in tabular ICL models, identify potential redundant layers, and compare these dynamics with those observed in LLMs. We analyze TabPFN and TabICL through the "layers as painters" perspective, finding that only subsets of layers share a common representational language, suggesting structural redundancy and offering opportunities for model compression and improved interpretability.
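The abstract describes comparing layer-wise latent spaces across a model. As a rough, hedged illustration of that kind of analysis (not the authors' code), the sketch below computes linear centered kernel alignment (CKA) between per-layer activation matrices; the activations are random placeholders rather than actual TabPFN or TabICL states.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA similarity between two activation matrices (samples x features)."""
    # Center each feature column so the score is invariant to per-feature offsets.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return float(cross / (norm_x * norm_y))

# Placeholder activations standing in for per-layer latent states of a tabular ICL model.
rng = np.random.default_rng(0)
n_samples, width, n_layers = 256, 64, 6
acts = [rng.normal(size=(n_samples, width)) for _ in range(n_layers)]

# Pairwise layer-to-layer similarity matrix; blocks of high similarity would hint at
# groups of layers sharing a common representational language.
sim = np.array([[linear_cka(a, b) for b in acts] for a in acts])
print(np.round(sim, 2))
```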
- Published in: arXiv
- Type: Article
- Authors: Balef, Amir Rezaei; Koshil, Mykhailo; Eggensperger, Katharina
- Year: 2025
- Source: http://arxiv.org/abs/2511.15432

Citation information
Balef, Amir Rezaei; Koshil, Mykhailo; Eggensperger, Katharina: Towards Understanding Layer Contributions in Tabular In-Context Learning Models, arXiv:2511.15432, November 2025, http://arxiv.org/abs/2511.15432 (cite key: Balef.etal.2025a).
@Article{Balef.etal.2025a,
author={Balef, Amir Rezaei and Koshil, Mykhailo and Eggensperger, Katharina},
title={Towards Understanding Layer Contributions in Tabular In-Context Learning Models},
journal={arXiv},
number={{arXiv}:2511.15432},
month={November},
publisher={{arXiv}},
url={http://arxiv.org/abs/2511.15432},
year={2025},
abstract={Despite the architectural similarities between tabular in-context learning ({ICL}) models and large language models ({LLMs}), little is known about how individual layers contribute to tabular prediction. In this paper, we investigate how the latent spaces evolve across layers in tabular {ICL} models, identify potential redundant layers, and compare these dynamics with those observed in {LLMs}. We analyze {TabPFN} and {TabICL} through the "layers as painters" perspective, finding that only subsets of layers share a common representational language, suggesting structural redundancy and offering opportunities for model compression and improved interpretability.}}