StaICC: Standardized Evaluation for Classification Task in In-context Learning
Hakaze Cho and one other author
Abstract: Classification tasks are widely investigated in the In-Context Learning (ICL) paradigm. However, current efforts are evaluated on disjoint benchmarks and settings, and their performance is significantly influenced by trivial variables such as prompt templates, data sampling, and instructions. This leads to substantial inconsistencies in the results reported across the literature and prevents fair comparison or meta-analysis across papers. Therefore, this paper proposes StaICC, a standardized and easy-to-use evaluation toolkit for in-context classification. For the standard classification task, we provide StaICC-Normal, which selects 10 widely used datasets and generates prompts in a fixed form to mitigate variance among experimental implementations. To broaden the usage of our benchmark, we also provide a sub-benchmark, StaICC-Diag, for diagnosing ICL from several aspects, aiming at more robust inference processing.
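The variance control described in the abstract can be pictured with a short sketch. The code below is hypothetical and does not reflect the actual StaICC API; names such as `TEMPLATE`, `build_prompt`, and `evaluate` are introduced here only to illustrate the idea of fixing the prompt template and the demonstration sampling so that every evaluated model or method is scored on identical inputs.

```python
# Hypothetical sketch of standardized in-context classification evaluation:
# a fixed prompt template and a fixed (seeded) demonstration sample,
# so results are comparable across methods. Not the actual StaICC API.
import random
from typing import Callable, List, Tuple

TEMPLATE = "Input: {text}\nLabel: {label}"  # one fixed template for all runs

def build_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Concatenate k fixed demonstrations and the query in a fixed form."""
    shots = "\n".join(TEMPLATE.format(text=t, label=l) for t, l in demos)
    return shots + "\n" + TEMPLATE.format(text=query, label="").rstrip()

def evaluate(predict: Callable[[str], str],
             train: List[Tuple[str, str]],
             test: List[Tuple[str, str]],
             k: int = 4,
             seed: int = 0) -> float:
    """Score a user-supplied predictor on prompts that are identical
    for every method, thanks to the fixed template and fixed seed."""
    rng = random.Random(seed)      # seeded sampling -> reproducible demos
    demos = rng.sample(train, k)   # same k demonstrations for all queries
    correct = sum(predict(build_prompt(demos, x)) == y for x, y in test)
    return correct / len(test)
```

Under this setup, two ICL methods differ only in their `predict` function, which is the point of a standardized benchmark: trivial variables like template wording and demonstration choice are held constant rather than left to each paper's implementation.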
Submission history
From: Cho Hakaze
[v1] Mon, 27 Jan 2025 00:05:12 UTC (610 KB)
[v2] Sat, 1 Feb 2025 14:45:09 UTC (610 KB)
[v3] Fri, 18 Apr 2025 08:09:26 UTC (610 KB)