Regress, Don't Guess — A Regression-like Loss on Number Tokens for Language Models
Jonas Zausinger and 15 other authors
Abstract: While language models have exceptional capabilities at text generation, they lack a natural inductive bias for emitting numbers and thus struggle in tasks involving quantitative reasoning, especially arithmetic. One fundamental limitation is the nature of the Cross Entropy loss, which assumes a nominal scale and thus cannot convey proximity between generated number tokens. In response, we present a regression-like loss that operates purely at the token level. Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the Lp norm or the Wasserstein distance between the numerical values of the real and predicted number tokens. NTL can easily be added to any language model, extending the Cross Entropy objective during training without runtime overhead. We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance in math-related tasks. In a direct comparison on a regression task, we find that NTL can match the performance of a regression head, despite operating at the token level. Finally, we scale NTL up to 3B-parameter models and observe improved performance, demonstrating its potential for seamless integration into LLMs. We hope that this work can inspire LLM developers to improve their pretraining objectives. The code is available via: this https URL
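To make the abstract's two NTL flavors concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' released implementation (see the linked repository for that). The function name, the `digit_token_ids` argument, the single-digit tokenization, the restriction of the softmax to digit tokens, and the choice of p = 2 for the Lp flavor are all assumptions made for this example.

```python
# Hypothetical sketch of a Number Token Loss added on top of Cross Entropy.
# Assumes the vocabulary contains ten digit tokens "0"-"9"; all names are illustrative.
import torch
import torch.nn.functional as F


def number_token_loss(logits, targets, digit_token_ids, flavor="wasserstein"):
    """logits: (batch, seq, vocab); targets: (batch, seq) token ids;
    digit_token_ids: (10,) vocabulary ids of the tokens "0"..."9"."""
    # Numerical value associated with each digit token (0.0 ... 9.0).
    digit_values = torch.arange(10, dtype=logits.dtype, device=logits.device)

    # Only positions whose ground-truth token is a digit contribute to the loss.
    is_digit = torch.isin(targets, digit_token_ids)
    if not is_digit.any():
        return logits.new_zeros(())

    # Predicted distribution restricted (and renormalized) over the digit tokens.
    digit_probs = F.softmax(logits[..., digit_token_ids], dim=-1)[is_digit]  # (n, 10)

    # One-hot distribution over the ten digit values for the ground truth.
    true_dist = (targets[is_digit].unsqueeze(-1) == digit_token_ids).to(logits.dtype)  # (n, 10)

    if flavor == "lp":
        # NTL-Lp (here p = 2): squared error between the expected value of the
        # predicted digit distribution and the true digit value.
        predicted_values = digit_probs @ digit_values
        true_values = true_dist @ digit_values
        return F.mse_loss(predicted_values, true_values)

    # NTL-WAS: 1D Wasserstein-1 distance between the predicted and true digit
    # distributions, computed as the summed absolute difference of their CDFs.
    cdf_diff = torch.cumsum(digit_probs - true_dist, dim=-1)
    return cdf_diff.abs().sum(dim=-1).mean()
```

In training, such a term would simply be weighted and added to the standard objective, e.g. `loss = ce_loss + lam * number_token_loss(logits, targets, digit_token_ids)`, which is consistent with the abstract's claim that NTL extends Cross Entropy without runtime overhead.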
Submission history
From: Jannis Born
[v1] Mon, 4 Nov 2024 13:43:24 UTC (966 KB)
[v2] Sun, 25 May 2025 21:13:23 UTC (5,219 KB)