Spin glass model of in-context learning
Yuhao Li and 2 other authors
Abstract: Large language models show a surprising in-context learning ability: they can use a prompt to form a prediction for a query without any additional training, in stark contrast to old-fashioned supervised learning. Providing a mechanistic interpretation and linking this empirical phenomenon to physics are thus challenging and remain unsolved. We study a simple yet expressive transformer with linear attention and map this structure to a spin glass model with real-valued spins, where the couplings and fields capture the intrinsic disorder in the data. The spin glass model explains how the weight parameters interact with one another during pre-training, and further clarifies why an unseen function can be predicted from a prompt alone, without further training. Our theory reveals that for single-instance learning, increasing the task diversity leads to the emergence of in-context learning, by allowing the Boltzmann distribution to converge to a unique correct solution for the weight parameters. The pre-trained transformer therefore displays predictive power in a novel prompt setting. The proposed analytically tractable model thus offers a promising avenue for interpreting many intriguing but puzzling properties of large language models.
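As a schematic illustration of the mapping described in the abstract (a pairwise form assumed here for concreteness, not necessarily the paper's exact Hamiltonian), the pre-training objective over the real-valued weight parameters $\mathbf{w}$ can be read as a spin-glass energy whose data-dependent couplings $J_{ij}$ and fields $h_i$ play the role of quenched disorder, with pre-training corresponding to the Boltzmann distribution over weights:

\[
E(\mathbf{w}) = -\sum_{i<j} J_{ij}\, w_i w_j - \sum_i h_i\, w_i,
\qquad
P(\mathbf{w}) = \frac{e^{-\beta E(\mathbf{w})}}{Z},
\qquad
Z = \int \mathrm{d}\mathbf{w}\; e^{-\beta E(\mathbf{w})},
\]

where each spin $w_i$ is a weight parameter, $J_{ij}$ and $h_i$ are determined by the pre-training data, and $\beta$ is an inverse temperature. In this picture, the abstract's claim is that increasing task diversity makes $P(\mathbf{w})$ concentrate on a unique correct weight configuration, which is what allows a novel prompt to be handled without further training.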
Submission history
From: Haiping Huang
[v1] Mon, 5 Aug 2024 07:54:01 UTC (1,087 KB)
[v2] Wed, 13 Nov 2024 07:13:36 UTC (1,894 KB)
[v3] Fri, 18 Apr 2025 08:16:22 UTC (970 KB)