Emergence and Effectiveness of Task Vectors in In-Context Learning: An Encoder Decoder Perspective
Seungwook Han and 3 other authors
Abstract: Autoregressive transformers exhibit adaptive learning through in-context learning (ICL), raising the question of how this ability arises. Prior work has shown that transformers represent ICL tasks as vectors in their representations. In this paper, we leverage the encoding-decoding framework to study how transformers form task vectors during pretraining and how their task encoding quality predicts ICL task performance. On synthetic ICL tasks, we analyze the training dynamics of a small transformer and report the coupled emergence of task encoding and decoding. As the model learns to encode different latent tasks (e.g., “Finding the first noun in a sentence.”) into distinct, separable representations, it concurrently builds conditional decoding algorithms and improves its ICL performance. We validate this phenomenon across pretrained models of varying scales (Gemma-2 2B/9B/27B, Llama-3.1 8B/70B) and over the course of pretraining in OLMo-7B. Further, we demonstrate that the quality of task encoding inferred from representations predicts ICL performance, and that, surprisingly, finetuning the earlier layers can improve task encoding and performance more than finetuning the later layers. Our empirical insights shed light on the success and failure modes of large language models via their representations.
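The abstract's central object is the "task vector": an intermediate-layer representation that encodes which latent task the in-context examples demonstrate. The sketch below is not the authors' method; it only illustrates the general idea under assumptions of our own (GPT-2 as a stand-in model, layer 6 as the probing layer, the last prompt token as the read-out position), using the Hugging Face transformers API, to show how separability of task representations could be probed.

```python
# Hypothetical sketch: read a "task vector" as the hidden state of the last
# prompt token at an intermediate layer, then compare representations of two
# different latent tasks. Model, layer, and prompts are illustrative choices,
# not the paper's setup (which uses Gemma-2, Llama-3.1, and OLMo-7B).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def task_vector(prompt: str, layer: int = 6) -> torch.Tensor:
    """Hidden state of the final prompt token at a chosen intermediate layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # hidden_states: tuple of (num_layers + 1) tensors, each [1, seq_len, dim]
    return out.hidden_states[layer][0, -1]

# Two illustrative latent tasks: antonyms vs. country -> capital.
antonym_prompt = "hot -> cold\nbig -> small\nfast ->"
capital_prompt = "France -> Paris\nJapan -> Tokyo\nItaly ->"

v_a = task_vector(antonym_prompt)
v_c = task_vector(capital_prompt)
cos = torch.nn.functional.cosine_similarity(v_a, v_c, dim=0)
print(f"cosine similarity between task representations: {cos.item():.3f}")
```

In this toy probe, well-separated (low-similarity) representations for distinct latent tasks would correspond to the "distinct, separable" task encodings the abstract describes; the paper studies how such separability emerges during pretraining and how it tracks ICL performance.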
Submission history
From: Seungwook Han
[v1] Mon, 16 Dec 2024 19:00:18 UTC (3,685 KB)
[v2] Wed, 18 Dec 2024 06:02:03 UTC (3,685 KB)
[v3] Mon, 2 Jun 2025 12:55:12 UTC (7,727 KB)