Limitations of current copyright frameworks for large language models trained on scientific literature


Abstract

Recent copyright lawsuits against artificial intelligence (AI) and large language model (LLM) developers have ignited debates over how to balance technological innovation with the public interest. In scientific research, the performance and reliability of LLMs trained on scientific literature (SciLit-LLMs) depend heavily on access to comprehensive, up-to-date full-text sources. This paper argues that the current copyright framework, including the U.S. fair use doctrine, often regarded as a flexible solution for AI-related copyright issues, is ill-suited for SciLit-LLMs. First, the normative values emphasized in science, such as accuracy, transparency, and interpretability, fundamentally conflict with the "transformative use" requirement central to copyright law. Second, the expression in scientific literature, which is intended to ensure scientific precision rather than to convey creative originality, remains insufficiently considered under current copyright law. Third, the fair use doctrine's emphasis on limiting the proportion of use from a single copyrighted work contradicts the need for comprehensive training on information-dense scientific texts. Finally, commercial use restrictions impede the sustainable development of SciLit-LLMs and preclude a mutually beneficial model for researchers, publishers, developers, and the public. Imposing current copyright restrictions on these models is unjustified, unnecessary, and risks perpetuating scientific biases. We therefore propose reconstructing copyright exceptions for scientific literature and removing commercial use restrictions to better support scientific innovation.
