Abstract
BACKGROUND: Since November 2022, conversational tools powered by generative artificial intelligence (GAI) have become integrated into academic and professional practice in health care.

OBJECTIVES: To identify and quantify the prevalence of recommendations to authors regarding the use of GAI issued by pharmaceutical journals, their publishers, and selected associations of peer-reviewed medical journals.

METHODS: A cross-sectional descriptive study was conducted to evaluate the recommendations on GAI use issued by 3 medical journal associations (the International Committee of Medical Journal Editors, the Committee on Publication Ethics, and the World Association of Medical Editors), 8 journal publishers (Springer, Taylor & Francis, Elsevier, Wiley, Sage, Oxford Academic, BMJ, and Springer Nature), and 22 pharmaceutical journals. The presence or absence of each specific recommendation was coded.

RESULTS: The analysis yielded 16 recommendations concerning the use of GAI in scientific publishing, classified into 3 categories: reporting and transparency, authorship and accountability, and restrictions on use. The recommendations most often emphasized disclosure of GAI use in manuscripts and prohibition of GAI as an author. Overall, 14 of the 22 pharmaceutical journals included at least one of the 16 recommendations. Among these 14 journals, the average proportion of included recommendations was 39% (standard deviation [SD] 12%). When recommendations issued by publishers and journal associations were included in the analysis, as applicable, this proportion increased to 51% (SD 28%).

CONCLUSIONS: Recommendations provided to authors about the use of GAI were highly variable. This study therefore highlights a lack of consensus on the integration of GAI within pharmaceutical journals, with many current guidelines insufficient or outdated. The development of standardized, up-to-date guidelines is crucial to preserving the integrity of scientific publishing.