Fine-Tuning Unifies Foundational Machine-Learned Interatomic Potential Architectures at ab initio Accuracy

Abstract

This work demonstrates that fine-tuning transforms foundational machine-learned interatomic potentials (MLIPs) to achieve consistent, near-ab initio accuracy across diverse architectures. Benchmarking five leading MLIP frameworks (MACE, GRACE, SevenNet, MatterSim, and ORB) across seven chemically diverse compounds reveals that fine-tuning universally enhances force predictions by factors of 5-15 and improves energy accuracy by 2-4 orders of magnitude. The investigated models span both equivariant and invariant, as well as conservative and non-conservative, architectures. While general-purpose foundation models are robust, they exhibit architecture-dependent deviations from ab initio reference data; specialized system-specific training (fine-tuning) eliminates these discrepancies, enabling quantitatively accurate predictions of atomistic and structural properties. Using datasets constructed from 2000 equidistantly sampled frames of short ab initio molecular dynamics trajectories, fine-tuning reduces force errors by an order of magnitude and harmonizes performance across all architectures. These findings establish fine-tuning as a universal route to achieving system-specific predictive accuracy while preserving the computational efficiency of MLIPs. To promote widespread adoption, we introduce the aMACEing Toolkit, which provides a unified and reproducible interface for fine-tuning workflows across multiple MLIP frameworks.
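
The dataset-construction step described above, equidistantly sampling 2000 frames from a short ab initio molecular dynamics (AIMD) trajectory, can be sketched in a few lines. The snippet below is a minimal illustration, assuming the trajectory is stored in an ASE-readable format; the file names and the use of ASE are assumptions for illustration, not part of the paper's actual pipeline.

```python
# Minimal sketch: equidistantly sample 2000 frames from an AIMD trajectory
# to build a fine-tuning dataset. File names and the use of ASE are
# illustrative assumptions; the paper's actual workflow may differ.
from ase.io import read, write

frames = read("aimd_trajectory.extxyz", index=":")  # load all AIMD frames
n_samples = 2000                                    # dataset size used in the paper
stride = max(len(frames) // n_samples, 1)           # equidistant sampling stride
subset = frames[::stride][:n_samples]               # keep every `stride`-th frame

write("finetune_train.extxyz", subset)              # dataset for fine-tuning
```

Equidistant (rather than random) sampling keeps the dataset's coverage of the trajectory uniform in time, which is the property the paper relies on when drawing frames from short AIMD runs.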
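
To quantify claims such as the order-of-magnitude reduction in force errors, one compares model forces against held-out AIMD reference forces. The sketch below shows such a comparison for one of the benchmarked frameworks, MACE, using its published ASE calculator interface; the model and file paths are hypothetical, the test file is assumed to store DFT forces, and this is not the aMACEing Toolkit's own interface.

```python
# Sketch: force MAE of a fine-tuned MACE model against AIMD reference forces.
# Paths are hypothetical; assumes the extxyz test file stores DFT forces.
import numpy as np
from ase.io import read
from mace.calculators import MACECalculator

frames = read("finetune_test.extxyz", index=":")            # held-out AIMD frames
calc = MACECalculator(model_paths="finetuned_mace.model",   # fine-tuned model
                      device="cpu")

errors = []
for atoms in frames:
    ref_forces = atoms.get_forces()          # DFT forces stored in the file
    atoms.calc = calc                        # switch to the MLIP calculator
    mlip_forces = atoms.get_forces()         # recomputed by the MACE model
    errors.append(np.abs(mlip_forces - ref_forces).ravel())

print(f"Force MAE: {np.mean(np.concatenate(errors)):.4f} eV/Å")
```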
