LLäMmlein: Transparent, Compact and Competitive German-Only Language Models from Scratch, by Jan Pfister and 2 other authors
Abstract: We create two German-only decoder models, LLäMmlein 120M and 1B, transparently from scratch and publish them, along with the training data, for the German NLP research community to use. The model training involved several key steps: extensive data preprocessing, the creation of a custom German tokenizer, the training itself, and the evaluation of the final models on various benchmarks. Throughout the training process, multiple checkpoints were saved and analyzed using the SuperGLEBer benchmark to monitor the models' learning dynamics. Compared to state-of-the-art models on the SuperGLEBer benchmark, both LLäMmlein models performed competitively, consistently matching or surpassing models of similar parameter size. The results show that model quality scales with size as expected, but performance improvements on some tasks plateaued early, offering valuable insights into resource allocation for future model development.
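Since the abstract notes that the models are published for the community, a minimal sketch of loading one of them follows. The Hugging Face repo id used here is an assumption not stated in the abstract; substitute the id given in the paper or its repository.

```python
# Minimal sketch: loading a published LLäMmlein checkpoint with Hugging Face
# transformers and generating a short German continuation.
# NOTE: the repo id "LSX-UniWue/LLaMmlein_1B" is an assumption, not stated in
# this abstract; replace it with the id published by the authors.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LSX-UniWue/LLaMmlein_1B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Greedy decoding of a short continuation with the decoder-only model.
inputs = tokenizer("Die Würzburger Residenz ist", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```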
Submission history
From: Jan Pfister
[v1] Sun, 17 Nov 2024 20:44:34 UTC (723 KB)
[v2] Mon, 16 Dec 2024 12:29:41 UTC (1,183 KB)
[v3] Fri, 23 May 2025 13:18:16 UTC (1,278 KB)
[v4] Wed, 28 May 2025 12:38:43 UTC (1,278 KB)
[v5] Wed, 18 Jun 2025 06:29:57 UTC (1,016 KB)