Editorial stances on large language models in leading nursing publications: a cross-sectional analysis


Abstract

BACKGROUND: The rapid integration of large language models (LLMs) into scholarly publishing has created an urgent need for clear standards. This study aimed to comprehensively analyze the editorial stances of leading nursing publications regarding the use of LLMs in manuscript preparation and peer review.

METHODS: We conducted a cross-sectional analysis of the top 50 nursing publications ranked by journal impact factor. Each publication's website was systematically evaluated for directives concerning LLM use in authorship, content generation, image creation, and peer review. Journal metrics were also extracted to assess any correlation with policy adoption.

RESULTS: Of the 50 publications, 35 (70%) had explicit LLM-related directives. There was strong consensus permitting the use of LLMs for content generation (97%) while prohibiting LLM authorship (94%). However, policies diverged markedly on AI-generated images, with 52% of publications prohibiting their use. Guidance on LLM use in peer review was similarly inconsistent, with 49% of publications prohibiting it. Policy adoption varied significantly by publisher (ranging from 20% to 100%). No statistical association was found between the existence of a policy and journal impact metrics (p > 0.05).

CONCLUSIONS: Leading nursing publications exhibit a fractured landscape on LLM use. While foundational agreement exists on authorship and content generation, critical areas such as image creation and peer review lack consistent standards. This ambiguity underscores the need for a unified, transparent framework to guide ethical and responsible LLM integration in nursing scholarship.
