Towards Understanding Layer Contributions in Tabular In-Context Learning Models

Despite the architectural similarities between tabular in-context learning (ICL) models and large language models (LLMs), little is known about how individual layers contribute to tabular prediction. In this paper, we investigate how the latent spaces evolve across layers in tabular ICL models, identify potentially redundant layers, and compare these dynamics with those observed in LLMs. We analyze TabPFN and TabICL through the "layers as painters" perspective, finding that only subsets of layers share a common representational language, suggesting structural redundancy and offering opportunities for model compression and improved interpretability.
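The abstract's notion of layers sharing a "common representational language" is typically probed by comparing activations of the same inputs across layers. The snippet below is a minimal, purely illustrative sketch (not the authors' code) of one such comparison, linear centered kernel alignment (CKA) between two activation matrices; the activation data, dimensions, and layer names are hypothetical placeholders, whereas in practice the matrices would be extracted from a tabular ICL model such as TabPFN or TabICL.

```python
import numpy as np


def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two (n_samples, dim) activation matrices."""
    x = x - x.mean(axis=0, keepdims=True)  # center each feature dimension
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2   # cross-layer similarity
    norm_x = np.linalg.norm(x.T @ x, "fro")       # self-similarity of x
    norm_y = np.linalg.norm(y.T @ y, "fro")       # self-similarity of y
    return float(cross / (norm_x * norm_y))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical activations for 256 inputs with 192-dimensional embeddings.
    layer_a = rng.normal(size=(256, 192))
    layer_b = layer_a + 0.1 * rng.normal(size=(256, 192))  # near-copy of layer A
    layer_c = rng.normal(size=(256, 192))                   # unrelated activations
    print(f"CKA(A, B) = {linear_cka(layer_a, layer_b):.3f}")  # close to 1
    print(f"CKA(A, C) = {linear_cka(layer_a, layer_c):.3f}")  # much lower
```

Under this kind of measure, high similarity within a subset of layers and low similarity across subsets would be consistent with the finding that only groups of layers operate in a shared representational space.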

  • Published in:
    arXiv
  • Type:
    Article
  • Authors:
    Balef, Amir Rezaei; Koshil, Mykhailo; Eggensperger, Katharina
  • Year:
    2025
  • Source:
    http://arxiv.org/abs/2511.15432

Citation information

Balef, Amir Rezaei; Koshil, Mykhailo; Eggensperger, Katharina: Towards Understanding Layer Contributions in Tabular In-Context Learning Models. arXiv:2511.15432, November 2025. http://arxiv.org/abs/2511.15432