MALVERN, Pa. — Large language models (LLMs) are artificial intelligence (AI) algorithms that hold great potential to generate code that helps data scientists analyze and visualize information. But how reliable are these models at coding tasks that must produce accurate results? The answer, determined by a team of Penn State Great Valley professors and students, earned the group the Distinguished Paper Award from the Association for Computing Machinery’s Special Interest Group on Software Engineering (ACM SIGSOFT) at an international conference in April.
The team conducted experiments to study how well LLMs can solve data science coding problems. Over a two-month period, Nathalia Nascimento and Everton Guimarães, both assistant professors of software engineering, conducted a controlled study with two research assistants in the first year of their master’s programs — data analytics student Sai Sanjna Chintakunta and software engineering student Santhosh Anitha Boominathan.
The research team evaluated the performance of four leading LLM-based AI assistants — Microsoft Copilot, ChatGPT, Claude and Perplexity Labs — in solving a diverse set of data science coding challenges, including analytical, algorithmic and visualization problems. The researchers asked the models to perform tasks such as generating charts and analyzing data, exploring whether the type of task or the difficulty level affected the quality of each model’s code output.
As part of the project, the research team created a novel, publicly available dataset for assessing LLM performance on data science coding tasks. The students have begun working on a journal extension that adds further analysis of the dataset and includes another LLM.
“Our findings reveal that all models exceeded a 50% baseline success rate, confirming their capability beyond random chance,” the researchers wrote. “Notably, only ChatGPT and Claude achieved success rates significantly above a 60% baseline, though none of the models reached a 70% threshold.”
The researchers noted that this mid-range success rate highlights both the strengths of these LLMs in specific areas and their limitations.
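The announcement does not describe the paper’s exact statistical procedure, but the sketch below, with hypothetical numbers, illustrates one common way to check whether an observed success rate is significantly above a fixed baseline such as 60%: a one-sided binomial test.

```python
# Minimal sketch (not necessarily the paper's method) of testing whether a
# model's success rate on coding tasks exceeds a fixed baseline.
# The task counts below are hypothetical, for illustration only.
from scipy.stats import binomtest

n_tasks = 100      # hypothetical number of coding challenges attempted
n_correct = 68     # hypothetical number solved correctly
baseline = 0.60    # baseline success rate the model must significantly exceed

# One-sided test: is the true success rate greater than the baseline?
result = binomtest(n_correct, n_tasks, p=baseline, alternative="greater")

print(f"Observed success rate: {n_correct / n_tasks:.0%}")
print(f"p-value vs. {baseline:.0%} baseline: {result.pvalue:.4f}")
```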
“This study provides a rigorous, performance-based evaluation framework for LLMs in data science, equipping practitioners with insights to select models tailored to specific task demands and setting empirical standards for future AI assessments beyond basic performance measures,” the team wrote.
The four researchers submitted their paper to the International Conference on Mining Software Repositories (MSR), the premier venue for software analytics research, according to the conference’s website, which was co-located this year with the International Conference on Software Engineering. The Great Valley research team’s paper, “How Effective are LLMs for Data Science Coding? A Controlled Experiment,” was one of several accepted submissions for the technical track of the MSR conference.
After the researchers presented their work at the conference, ACM SIGSOFT presented the team with the Distinguished Paper Award.
“With over 100 papers presented, we are truly honored to receive this recognition,” Nascimento said.
“We learned so much throughout the process, from formulating the research question to writing the paper and responding to reviewer feedback during the rebuttal phase,” said student researcher Chintakunta. “Our professors were especially supportive, and their guidance made a big difference. While I wasn’t able to attend the conference, I’m proud of what we accomplished and excited about how this experience will shape my future in research and data science.”