Transferability of Data Sets between Machine-Learned Interatomic Potential Algorithms


Abstract

The emergence of Foundational Machine Learning Interatomic Potential (FMLIP) models trained on extensive data sets motivates attempts to transfer data between different ML architectures. Using a common battery electrolyte solvent as a test case, we examine the extent to which training data optimized for one machine-learning method may be reused by a different learning algorithm, aiming to accelerate FMLIP fine-tuning and to reduce the need for costly iterative training. We consider several types of training configurations and compare the benefits they bring to feedforward neural networks (the Deep Potential model) and message-passing networks (MACE). We propose a simple metric to assess model performance and demonstrate that MACE models perform well with even the simplest training sets, whereas simpler architectures require further iterative training to describe the target liquids correctly. We find that configurations designed by human intuition to correct systematic deficiencies of a model often transfer well between algorithms, but that reusing configurations that were generated automatically by one MLIP does not necessarily benefit a different algorithm. We also compare the performance of these bespoke models against two pretrained FMLIPs, demonstrating that system-specific training data are usually necessary for realistic models. Finally, we examine how training data sets affect a model's ability to generalize to unseen molecules, finding that model stability is conserved for small changes in molecule shape but not changes in functional chemistry. Our results provide insight into how training set properties affect the behavior of an MLIP and principles to enhance training sets for molecular liquid models with minimal computational effort. These approaches may be used in tandem with FMLIPs to dramatically accelerate the rate at which new chemical systems can be simulated.
