Style over Substance: Distilled Language Models Reason Via Stylistic Replication
Philip Lippmann, Jie Yang
Abstract: Specialized reasoning language models (RLMs) have demonstrated that scaling test-time computation through detailed reasoning traces significantly enhances performance. Although these traces effectively facilitate knowledge distillation into smaller, instruction-tuned models, the precise nature of transferred reasoning remains unclear. In this study, we investigate to what extent distilled models internalize replicated stylistic patterns during reasoning. To this end, we systematically analyze reasoning traces, identifying structural and lexical patterns that characterize successful reasoning. We then introduce two new datasets, a dataset of emergent reasoning traces and a synthetic dataset explicitly constructed to replicate these stylistic patterns, to precisely examine their influence on distilled models' reasoning capabilities. We find that models trained on the synthetic traces achieve comparable performance, indicating that distilled reasoning abilities rely significantly on surface-level patterns. Surprisingly, we observe an increase in performance even when the synthetic traces are altered to lead to the wrong answer. Our findings highlight how stylistic patterns can be leveraged to efficiently enhance LM reasoning across diverse model families.
Submission history
From: Philip Lippmann
[v1] Wed, 2 Apr 2025 13:50:20 UTC (275 KB)
[v2] Wed, 11 Jun 2025 11:31:47 UTC (272 KB)