Abstract
MOTIVATION: Scientific software packages impose persistent maintenance costs due to dependency churn, version incompatibilities, and bug triage, even when the underlying algorithms are stable and well described. At the same time, peer-reviewed publications already function as the canonical record of many computational methods, yet translating narrative method descriptions into usable code remains labor-intensive and error-prone. Recent advances in large language models (LLMs) raise the question of whether published articles alone can serve as sufficient specifications for on-demand code generation, potentially reducing reliance on continuously maintained libraries.

RESULTS: We systematically evaluated state-of-the-art LLMs by tasking them with implementing core algorithms using only the original scientific publications as input. Across a diverse benchmark spanning random forests, batch correction methods, gene regulatory network inference, and gene set enrichment analysis, we show that modern LLMs can frequently reproduce package-level functionality with performance indistinguishable from established libraries. Failures and discrepancies primarily arose when manuscripts underspecified implementation details or data structures, rather than from limitations in model reasoning. These results demonstrate that literature-driven code generation is already feasible for many well-specified algorithms, while also exposing where current publication standards hinder reproducibility.

AVAILABILITY AND IMPLEMENTATION: All prompts, generated code, evaluation scripts, and benchmark datasets are publicly available at https://github.com/xomicsdatascience/articles-to-code.
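To make the evaluation design concrete, the sketch below illustrates one way a publication-derived implementation could be scored against an established library on the same data, here using scikit-learn's random forest as the reference. It is a minimal, hypothetical example under assumed names: the module `llm_random_forest` and its fit/predict interface are placeholders for code generated from an article, not part of the released repository.

```python
# Minimal sketch of the benchmark comparison: score a reference library
# implementation and (hypothetically) an article-derived implementation
# on the same train/test split, then compare accuracies.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Public dataset used purely for illustration.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Reference: the established library implementation.
reference = RandomForestClassifier(n_estimators=100, random_state=0)
reference.fit(X_train, y_train)
ref_acc = accuracy_score(y_test, reference.predict(X_test))
print(f"reference (scikit-learn) accuracy: {ref_acc:.3f}")

# Hypothetical candidate: code generated by an LLM from the original
# publication; any estimator exposing fit/predict could be dropped in.
# import llm_random_forest                      # placeholder module
# candidate = llm_random_forest.RandomForest(n_trees=100, seed=0)
# candidate.fit(X_train, y_train)
# cand_acc = accuracy_score(y_test, candidate.predict(X_test))
# print(f"article-derived accuracy: {cand_acc:.3f}")
```

The design choice in such a comparison is to hold the data split, hyperparameters, and metric fixed so that any performance gap can be attributed to the implementation itself rather than to the evaluation setup.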