Abstract
Recent copyright lawsuits against artificial intelligence (AI) and large language model (LLM) developers have ignited debates over how to balance technological innovation with the public interest. In scientific research, the performance and reliability of LLMs trained on scientific literature (SciLit-LLMs) depend heavily on access to comprehensive, up-to-date full-text sources. This paper argues that the current copyright framework, including the U.S. fair use doctrine, often regarded as a flexible solution for AI-related copyright issues, is ill-suited for SciLit-LLMs. First, the normative values emphasized in scientific research, such as accuracy, transparency, and interpretability, fundamentally conflict with the "transformative use" requirement central to copyright law. Second, the expression in scientific literature, which is intended to ensure scientific precision rather than to convey creative originality, remains insufficiently considered under current copyright law. Third, the fair use doctrine's emphasis on limiting the proportion of use from a single copyrighted work contradicts the need for comprehensive training on information-dense scientific texts. Finally, commercial use restrictions impede the sustainable development of SciLit-LLMs and preclude a mutually beneficial model for researchers, publishers, developers, and the public. Imposing current copyright restrictions on these models is unjustified, unnecessary, and risks perpetuating scientific biases. We therefore propose reconstructing copyright exceptions for scientific literature and removing commercial use restrictions to better support scientific innovation.