Reflections on the Potential and Risks of AI for Scientific Article Writing after the AI Endorsement by Some Scientific Publishers: Focusing on Scopus AI



Abstract

The introduction of ChatGPT-3 in 2023 disrupted the field of artificial intelligence (AI). ChatGPT uses large language models (LLMs) but has no access to copyrighted material, including scientific articles and books. This review is limited by the lack of access to (1) prior peer-reviewed articles and (2) proprietary information owned by the companies. Despite these limitations, the article reviews the use of LLMs in the publishing of scientific articles. The first use was plagiarism-detection software. The second use, by the American Psychological Association and Elsevier, helped their journal editors screen articles before review. These two publishers have in common a large number of copyrighted journals and textbooks and, more importantly, a database of article abstracts. Elsevier is the largest of the five major publishing houses and the only one with a database of article abstracts, developed to compete with the bibliometric experts of the Web of Science. The third and most relevant use, Scopus AI, was announced on 16 January 2024 by Elsevier; a version of ChatGPT-3.5 was trained using Elsevier copyrighted material written since 2013. Elsevier's description suggests to the authors that Scopus AI can write review articles or the introductions of original research articles with no human intervention. The editors of non-Elsevier journals unwilling to approve the use of Scopus AI for writing scientific articles face a problem: they will need to trust that submitting authors have not lied and have not used Scopus AI at all.
