Multimodal data generative fusion method for complex system health condition estimation



Abstract

For the health management of complex systems, the high value of such systems often necessitates multimodal monitoring data, including video surveillance, internal sensors, empirical formulas, and even digital twins. It is therefore essential to design an effective intelligent fusion method for such multimodal data. First, a global monotonicity calculation method and a time-series data augmentation technique are developed to address the inconsistencies arising from varying temporal lengths across modalities. Second, to meet the need for efficient time-series fusion, we propose a fast sequential-learning network architecture together with a time-series generative data structure. Finally, we introduce a many-to-many transfer training approach that culminates in a Multi-source Generative Adversarial Network (Ms-GAN). Numerical experiments and monitoring datasets are employed to validate the effectiveness of this multimodal generative fusion method. Notably, Ms-GAN extends traditional GANs, which are typically limited to learning a single data distribution, with multimodal data fusion capabilities. This advancement holds significant promise for applications in fields such as multimedia processing and medical diagnosis.
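The abstract does not define the "global monotonicity calculation"; as a rough illustration only, the sketch below uses a monotonicity metric that is common in prognostics and health management for scoring degradation indicators, namely the normalized imbalance between positive and negative first differences. The function name and the specific formula are assumptions, not the paper's method.

```python
import numpy as np

def global_monotonicity(series):
    """Score the monotonic trend of a health/degradation time series.

    Hypothetical sketch of a common prognostics metric (not necessarily
    the paper's definition):
        |#positive differences - #negative differences| / (n - 1)
    Returns a value in [0, 1]; 1.0 means strictly monotone.
    """
    diffs = np.diff(np.asarray(series, dtype=float))
    n = len(diffs)
    if n == 0:
        return 0.0  # a single observation carries no trend information
    return abs(np.sum(diffs > 0) - np.sum(diffs < 0)) / n

# A strictly increasing series is perfectly monotone.
print(global_monotonicity([1, 2, 3, 4, 5]))        # 1.0
# An oscillating series scores lower: diffs are 3 up vs. 2 down over 5 steps.
print(global_monotonicity([1, 3, 2, 4, 3, 5]))     # 0.2
```

A score like this can be computed per modality, so that sequences of different temporal lengths are compared on a common, length-normalized trend scale before fusion.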
