Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Yanlin Wang and 7 other authors
Abstract: Large language models (LLMs) have brought a paradigm shift to the field of code generation, offering the potential to enhance the software development process. However, previous research has focused mainly on the accuracy of generated code, while differences in coding style between LLMs and human developers remain under-explored. In this paper, we empirically analyze the differences in coding style between code generated by mainstream Code LLMs and code written by human developers, and we summarize a taxonomy of coding style inconsistencies. Specifically, we first identify the types of coding style inconsistencies by manually analyzing a large number of generation results. We then compare the code generated by Code LLMs with the code written by human programmers in terms of readability, conciseness, and robustness. The results reveal that LLMs and developers exhibit different coding styles. Additionally, we study the possible causes of these inconsistencies and provide solutions to alleviate the problem.
Submission history
From: Tianyue Jiang
[v1] Sat, 29 Jun 2024 14:56:11 UTC (426 KB)
[v2] Sat, 21 Jun 2025 17:44:03 UTC (2,007 KB)