Abstract
BACKGROUND: The rapid integration of large language models (LLMs) into scholarly publishing has created an urgent need for clear standards. This study aimed to comprehensively analyze the editorial policies of leading nursing journals regarding the use of LLMs in manuscript preparation and peer review.

METHODS: We conducted a cross-sectional analysis of the top 50 nursing journals ranked by journal impact factor. Each journal's website was systematically evaluated for guidance on LLM use in authorship, content generation, image creation, and peer review. Journal metrics were also extracted to assess whether they were associated with policy adoption.

RESULTS: Of the 50 journals, 35 (70%) had explicit LLM-related policies. There was strong consensus permitting the use of LLMs for content generation (97%) while prohibiting LLM authorship (94%). Policies diverged substantially on AI-generated images, with 52% of journals prohibiting their use. Guidance on LLM use in peer review was similarly inconsistent, with 49% of journals prohibiting it. Policy adoption varied markedly by publisher (ranging from 20% to 100%). No statistical association was found between the existence of a policy and journal impact metrics (p > 0.05).

CONCLUSIONS: Policies on LLM use among leading nursing journals remain fragmented. While foundational agreement exists on authorship and content generation, critical areas such as image creation and peer review lack consistent standards. This inconsistency underscores the need for a more unified, transparent framework to guide ethical and responsible LLM integration in nursing scholarship.
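The abstract does not name the statistical test used to compare journal impact metrics between journals with and without an LLM policy. A minimal sketch of one plausible approach, assuming a nonparametric two-group comparison (Mann-Whitney U) and using invented illustrative values rather than the study's data, is shown below.

```python
# Sketch of an association test between policy existence and impact metrics,
# assuming a Mann-Whitney U comparison; the values below are hypothetical
# placeholders, not the study's dataset.
from scipy.stats import mannwhitneyu

# Hypothetical impact factors: 35 journals with an explicit LLM policy,
# 15 without one (matching the 70% adoption rate reported in the abstract).
with_policy = [3.1, 2.8, 4.0, 2.5, 3.6, 2.9, 3.3, 2.7, 4.2, 3.0,
               2.6, 3.8, 2.4, 3.2, 2.9, 3.5, 2.3, 4.1, 2.8, 3.0,
               2.7, 3.4, 2.5, 3.9, 2.6, 3.1, 2.8, 3.7, 2.4, 3.2,
               2.9, 3.0, 2.6, 3.3, 2.7]
without_policy = [2.9, 3.2, 2.4, 3.6, 2.7, 3.0, 2.5, 3.4, 2.8, 2.6,
                  3.1, 2.3, 3.5, 2.9, 2.7]

# Two-sided test: is the distribution of impact factors different between groups?
stat, p_value = mannwhitneyu(with_policy, without_policy, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the abstract's finding of
# no statistical association between policy existence and impact metrics.
```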