Memristive floating-point Fourier neural operator network for efficient scientific modeling


Abstract

Emerging artificial-intelligence-for-science (AI-for-Science) algorithms, such as the Fourier neural operator (FNO), enable fast and efficient scientific simulation. However, network training requires extensive data transfers and intensive high-precision computing, which challenges conventional digital computing platforms. Here, we demonstrate the potential of a heterogeneous computing-in-memristor (CIM) system to accelerate the FNO for scientific modeling tasks. Our system contains eight four-kilobit memristor chips with embedded floating-point computing workflows and a heterogeneous training scheme, forming a heterogeneous CIM platform that leverages precision-limited analog devices to accelerate floating-point neural network training. We demonstrate the system's capabilities by solving the one-dimensional Burgers' equation and modeling three-dimensional thermal conduction. A projected 21-fold to 116-fold increase in computational energy efficiency was achieved, with solution precision comparable to that of digital processors. Our results extend in-memristor computing beyond edge neural networks and facilitate the construction of future AI-for-Science computing platforms.
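The core of the FNO mentioned above is a spectral convolution: the input field is transformed to Fourier space, low-frequency modes are multiplied by learned complex weights, and the result is transformed back. The following is a minimal, hypothetical numpy sketch of this operation for a 1-D field such as a Burgers' equation state; the function name, weight shapes, and mode count are illustrative and are not the paper's implementation (which additionally maps the multiply-accumulate workload onto memristor arrays).

```python
import numpy as np

def spectral_conv1d(u, weights, modes):
    """One FNO spectral layer (illustrative sketch):
    FFT -> keep only the lowest `modes` frequencies ->
    multiply by learned complex weights -> inverse FFT."""
    u_hat = np.fft.rfft(u)                 # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights[:modes]  # truncate + weight
    return np.fft.irfft(out_hat, n=len(u))  # back to physical space

rng = np.random.default_rng(0)
n, modes = 64, 8
u = np.sin(2 * np.pi * np.arange(n) / n)   # sample 1-D input field
# hypothetical learned weights; in training these are optimized
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)
v = spectral_conv1d(u, w, modes)
print(v.shape)  # (64,)
```

A full FNO stacks several such layers, each followed by a pointwise linear transform and nonlinearity; the complex multiplications in Fourier space dominate the floating-point workload that the CIM system targets.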
