Computational imaging reconstructions from multiple measurements that are captured sequentially often suffer from motion artifacts if the scene is dynamic. We propose a neural space-time model (NSTM) that jointly estimates the scene and its motion dynamics, without data priors or pre-training. Hence, we can both remove motion artifacts and resolve sample dynamics from the same set of raw measurements used for the conventional reconstruction. We demonstrate NSTM in three computational imaging systems: differential phase-contrast microscopy, three-dimensional structured illumination microscopy and rolling-shutter DiffuserCam. We show that NSTM can recover subcellular motion dynamics and thus reduce the misinterpretation of living systems caused by motion artifacts.
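The abstract's core idea, jointly representing a static scene and its motion so that the scene at time t is the static representation sampled at motion-warped coordinates, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the network sizes, the NumPy stand-in MLPs, and the function names (`mlp_init`, `render`) are all assumptions; in the paper the rendered frames would be passed through each system's physics-based forward model and optimized against the raw measurements.

```python
# Minimal sketch of the neural space-time model (NSTM) idea: a motion network
# warps spatial coordinates as a function of time, and a scene network is
# evaluated at the warped coordinates. Sizes and NumPy MLPs are illustrative
# assumptions, not the published implementation.
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Small random MLP weights (stand-in for a network trained on raw data)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    """Tanh hidden layers; linear final layer."""
    for w, b in params[:-1]:
        x = np.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

# Motion network: (x, y, t) -> displacement (dx, dy)
motion_net = mlp_init([3, 32, 2])
# Scene network: (x, y) -> intensity
scene_net = mlp_init([2, 32, 1])

def render(coords, t):
    """Evaluate the dynamic scene at time t by warping coordinates first."""
    t_col = np.full((coords.shape[0], 1), t)
    displacement = mlp_apply(motion_net, np.concatenate([coords, t_col], axis=1))
    return mlp_apply(scene_net, coords + displacement)

# Render a 2D grid of points at two timestamps; the time-dependent warp is what
# lets a single model explain sequentially captured measurements of a moving sample.
coords = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                              np.linspace(-1, 1, 8)), axis=-1).reshape(-1, 2)
frame0 = render(coords, 0.0)
frame1 = render(coords, 1.0)
print(frame0.shape, frame1.shape)  # (64, 1) (64, 1)
```

Because both networks are fit to the same raw measurements used by the conventional reconstruction, no data prior or pre-training is needed: the motion network absorbs the inter-measurement dynamics that would otherwise appear as artifacts.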
Neural space-time model for dynamic multi-shot imaging.
Authors: Cao Ruiming, Divekar Nikita S, Nuñez James K, Upadhyayula Srigokul, Waller Laura
| Journal: | Nature Methods | Impact factor: | 32.100 |
| Year: | 2024 | Volume/pages: | 2024 Dec;21(12):2336-2341 |
| DOI: | 10.1038/s41592-024-02417-0 | Research field: | Neuroscience |
