Abstract
The evolution of large language models (LLMs) is reshaping the landscape of scientific writing, enabling the generation of machine-written review papers with minimal human intervention. This paper presents a pipeline for the automated production of scientific survey articles using Retrieval-Augmented Generation (RAG) and modular LLM agents. The pipeline processes user-selected literature or corpora derived from citation networks through vectorized content, reference, and figure databases to generate structured, citation-rich reviews. Two distinct strategies are evaluated: one based on manually curated literature and the other on papers selected through citation network analysis. Results demonstrate that increasing the diversity and quantity of input materials improves the depth and coherence of the generated output. Although current iterations produce promising drafts, they fall short of top-tier publication standards, particularly in critical analysis and originality. Results were obtained for a case study on a specific topic, namely Langmuir and Langmuir-Blodgett films, but the proposed pipeline applies to any user-selected topic. The paper concludes with suggestions on how the system could be enhanced through specialized modules and discusses broader implications for scientific publishing, including ethical considerations, authorship attribution, and the risk of review proliferation. This work represents an opportunity to discuss the advantages and pitfalls of using AI assistants to support scientific knowledge synthesis.