Evaluating the Accuracy and Explanatory Quality of Large Language Models ChatGPT, Claude, DeepSeek, Gemini, Grok, and Le Chat in Statistical Test Selection for Hypothesis Testing Decisions


Abstract

Background
Large language models (LLMs) are increasingly integrated into academic and professional research workflows, yet their ability to select appropriate statistical tests for hypothesis testing remains underexplored. Incorrect test selection can lead to invalid conclusions and compromise scientific validity, making this evaluation critical for determining the reliability of LLMs in research applications. The objective of this study was to evaluate and compare the accuracy of six prominent LLMs (ChatGPT, Claude, DeepSeek, Gemini, Grok, and Le Chat) in selecting appropriate statistical tests for various hypothesis testing scenarios.

Materials and methods
A comparative, cross-sectional evaluation was conducted using 20 standardized statistical testing scenarios. The scenarios were designed to cover 20 distinct hypothesis testing situations, including comparisons of means, comparisons of proportions, non-parametric alternatives, paired versus independent samples, and correlation and regression analyses. All models were prompted with identical instructions, and their responses were evaluated by five independent experts with extensive knowledge of biostatistics. Responses were assessed for accuracy and rated on five domains (clarity and accessibility, identification of necessary assumptions, pedagogical value, problem-solving approach, and statistical reasoning) using a five-point Likert scale. Analysis of variance (ANOVA) was applied for between-group comparisons, and p < 0.05 was considered statistically significant.

Results
All six LLMs achieved 100% accuracy in statistical test selection across all 20 hypothesis scenarios. However, significant variation emerged in explanatory quality. Claude demonstrated superior performance in clarity and accessibility (4.65 ± 0.58, p = 0.05), while the problem-solving approach showed the most consistent excellence across models. Ratings for statistical reasoning ranged from 3.16 to 4.66, with complex regression methods receiving lower ratings than basic statistical tests. Gemini excelled in pedagogical value (4.50 ± 0.68), while ChatGPT ranked lowest in statistical reasoning despite strong problem-solving capabilities.

Conclusions
All LLMs demonstrated perfect accuracy in statistical test selection; however, differences exist in the quality of the explanations and reasoning provided. These findings suggest that current-generation LLMs have become dependable tools for statistical consultation in hypothesis testing scenarios. However, users should consider model-specific strengths when seeking detailed explanations or educational content.
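The between-group comparison described in the methods (one-way ANOVA across the models' expert ratings, with p < 0.05 as the significance threshold) can be sketched as follows. The rating data, group sizes, and function name here are hypothetical and serve only to illustrate the computation; the abstract does not report the raw ratings.

```python
import statistics

def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of rating groups.

    Returns (F, df_between, df_within). A p-value would normally be
    obtained from the F distribution, e.g. scipy.stats.f.sf(F, dfb, dfw).
    """
    k = len(groups)                      # number of groups (models)
    n = sum(len(g) for g in groups)      # total number of ratings
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of ratings around each group mean
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical 5-point Likert ratings of three models on one domain
ratings = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
f_stat, dfb, dfw = one_way_anova_f(ratings)
print(f"F({dfb}, {dfw}) = {f_stat:.2f}")  # F(2, 6) = 3.00
```

In practice, an analysis like the one described would use an established routine such as `scipy.stats.f_oneway`, which performs the same computation and also returns the p-value.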
