Meta-simulation approach for evaluating machine learning method selection in data-limited settings



Abstract

Selecting appropriate machine learning (ML) methods for domain-specific tasks remains a persistent challenge, particularly in medicine, where datasets are often small, heterogeneous, and incomplete. Traditional benchmarking strategies rely on limited observational samples, which may not capture the complexity of the underlying data-generating process (DGP). As a result, methods that perform well on available data may generalise poorly in real-world practice. We present SimCalibration, a meta-simulation framework that leverages structural learners (SLs) to infer an approximated data-generating process from limited data and to generate synthetic datasets for large-scale benchmarking. This framework enables systematic evaluation of machine learning method selection strategies in settings where the true data-generating process is either known or can be approximated, allowing both validation against the ground truth and the generation of synthetic observations inferred from sparse samples. In rare disease research, for example, where patient cohorts are inherently small, causal relationships are often conceptualised as directed acyclic graphs (DAGs). In this work, such structures are approximated directly from observational data, extending the utility of small datasets by enabling investigators to benchmark ML methods in a controlled simulation setting before deploying them in practice. This reduces the risk of selecting models that generalise poorly and supports more reliable decision-making in sensitive healthcare contexts. Experiments demonstrate that (a) structural learners vary in their ability to recover representative simulations for benchmarking, (b) structural learner-based benchmarking reduces variance in performance estimates compared to traditional validation, and (c) in some cases, structural learner-based approaches yield rankings that more closely match true relative performance than those derived from limited datasets.
These findings highlight the value of simulation-based benchmarking for domains where drawing generalisable conclusions is critical, such as medicine, and offer greater transparency into the assumptions underlying predictive decisions.
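The meta-simulation loop described in the abstract can be illustrated with a minimal sketch. The snippet below is not the SimCalibration implementation; it assumes a toy linear-Gaussian DAG (X1 → X2 → Y, X1 → Y) as the true DGP, stands in for a structural learner by fitting linear mechanisms along a fixed DAG skeleton to a small observed sample, then benchmarks two candidate predictors (ordinary least squares vs. a constant mean predictor) on large synthetic datasets drawn from the approximated DGP. All function names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" DGP over the DAG X1 -> X2 -> Y, X1 -> Y
def sample_true_dgp(n):
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
    y = 0.6 * x2 + 0.3 * x1 + rng.normal(scale=0.5, size=n)
    return np.column_stack([x1, x2]), y

# Small observed sample, mimicking a data-limited setting (e.g. a rare-disease cohort)
X_small, y_small = sample_true_dgp(40)

# Stand-in for a structural learner: assume the DAG skeleton is known and fit
# linear-Gaussian mechanisms to the small sample (a strong simplification --
# real SLs such as PC or GES would also have to recover the graph structure).
def fit_approx_dgp(X, y):
    b_x2 = np.polyfit(X[:, 0], X[:, 1], 1)            # mechanism X1 -> X2
    A = np.column_stack([X, np.ones(len(y))])
    b_y, *_ = np.linalg.lstsq(A, y, rcond=None)       # mechanism (X1, X2) -> Y
    s_x2 = np.std(X[:, 1] - np.polyval(b_x2, X[:, 0]))
    s_y = np.std(y - A @ b_y)

    def sample(n):  # generate synthetic observations from the approximated DGP
        x1 = rng.normal(size=n)
        x2 = np.polyval(b_x2, x1) + rng.normal(scale=s_x2, size=n)
        Xs = np.column_stack([x1, x2])
        ys = np.column_stack([Xs, np.ones(n)]) @ b_y + rng.normal(scale=s_y, size=n)
        return Xs, ys

    return sample

sample_synth = fit_approx_dgp(X_small, y_small)

# Two candidate "ML methods" whose relative ranking we want to estimate
def mse_ols(Xtr, ytr, Xte, yte):
    A = np.column_stack([Xtr, np.ones(len(ytr))])
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    pred = np.column_stack([Xte, np.ones(len(yte))]) @ w
    return float(np.mean((pred - yte) ** 2))

def mse_mean(Xtr, ytr, Xte, yte):
    return float(np.mean((np.mean(ytr) - yte) ** 2))

# Large-scale benchmark on synthetic data from the approximated DGP
Xtr, ytr = sample_synth(5000)
Xte, yte = sample_synth(5000)
ranking_synth = sorted(
    [("ols", mse_ols(Xtr, ytr, Xte, yte)), ("mean", mse_mean(Xtr, ytr, Xte, yte))],
    key=lambda t: t[1],
)

# Ground-truth ranking, available here only because the true DGP is simulated
Xtr_t, ytr_t = sample_true_dgp(5000)
Xte_t, yte_t = sample_true_dgp(5000)
ranking_true = sorted(
    [("ols", mse_ols(Xtr_t, ytr_t, Xte_t, yte_t)), ("mean", mse_mean(Xtr_t, ytr_t, Xte_t, yte_t))],
    key=lambda t: t[1],
)

print([m for m, _ in ranking_synth], [m for m, _ in ranking_true])
```

In this toy setting the method ranking recovered from the approximated DGP agrees with the ranking under the true DGP, which is the property the experiments in the paper evaluate at scale across structural learners and candidate ML methods.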
