DREsS: Dataset for Rubric-based Essay Scoring on EFL Writing, by Haneul Yoo and 3 other authors
Abstract: Automated essay scoring (AES) is a useful tool in English as a Foreign Language (EFL) writing education, offering real-time essay scores for students and instructors. However, previous AES models were trained on essays and scores irrelevant to the practical scenarios of EFL writing education, and they usually provided only a single holistic score due to the lack of appropriate datasets. In this paper, we release DREsS, a large-scale, standard dataset for rubric-based automated essay scoring with 48.9K samples in total. DREsS comprises three sub-datasets: DREsS_New, DREsS_Std., and DREsS_CASE. We collect DREsS_New, a real-classroom dataset with 2.3K essays authored by EFL undergraduate students and scored by English education experts. We also standardize existing rubric-based essay scoring datasets as DREsS_Std. We suggest CASE, a corruption-based augmentation strategy for essays, which generates the 40.1K synthetic samples of DREsS_CASE and improves the baseline results by 45.44%. DREsS will enable further research toward more accurate and practical AES systems for EFL writing education.
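As a rough illustration of what corruption-based augmentation for essay scoring can look like, the sketch below corrupts an essay by swapping in sentences from unrelated essays and derives a synthetic rubric score from the fraction left intact. All names here (`corrupt_essay`, `synthetic_score`) and the specific corruption operator and labeling rule are assumptions for illustration; the paper's CASE strategy may differ in its details.

```python
import random

def corrupt_essay(sentences, donor_sentences, corruption_rate=0.3, seed=0):
    """Replace a fraction of an essay's sentences with unrelated donor
    sentences. Returns the corrupted essay and the fraction left intact.
    This is a hypothetical corruption operator, not the paper's exact one."""
    rng = random.Random(seed)
    n_corrupt = max(1, int(len(sentences) * corruption_rate))
    indices = rng.sample(range(len(sentences)), n_corrupt)
    corrupted = list(sentences)
    for i in indices:
        corrupted[i] = rng.choice(donor_sentences)
    intact_fraction = 1 - n_corrupt / len(sentences)
    return corrupted, intact_fraction

def synthetic_score(original_score, intact_fraction, min_score=1.0):
    """Scale the gold rubric score toward the minimum in proportion to
    how much of the essay was corrupted (an assumed labeling rule)."""
    return min_score + (original_score - min_score) * intact_fraction
```

A training pipeline could apply such an operator per rubric dimension to turn each scored essay into many synthetic (essay, score) pairs.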
Submission history
From: Haneul Yoo [view email]
[v1] Wed, 21 Feb 2024 09:12:16 UTC (8,524 KB)
[v2] Mon, 4 Nov 2024 06:54:34 UTC (280 KB)
[v3] Wed, 11 Jun 2025 05:32:02 UTC (236 KB)