Volumetric localization microscopy with deep learning


Abstract

Super-resolution microscopy, particularly localization-based methods, necessitates careful balancing of optical complexity, computational demands, and user accessibility. Conventional strategies typically adopt either deterministic or learning-based approaches, overlooking opportunities to leverage their synergistic strengths. In this work, we introduce volumetric localization microscopy (VLM) with deep learning, a super-resolution methodology that integrates instrumental and algorithmic advancements for high-fidelity 3D single-molecule imaging. VLM employs a wavefront-optimized light-field configuration to capture single-molecule data, while a cascaded neural network reconstructs 3D volumes and extracts molecular coordinates with 10 nm lateral and 25 nm axial localization precision and an effective imaging depth of over 4 µm. Unlike existing methods, VLM is trained exclusively with system-aware intrinsic point-spread functions, bypassing dependencies on external imaging modalities or sample-specific training data. We validate VLM across diverse biological specimens, demonstrating hardware simplicity, data efficiency, and minimal phototoxicity. We anticipate VLM will overcome current limitations in fluorescence microscopy, empowering broader advancements in biomedical research.
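The abstract notes that VLM is trained exclusively on system-aware intrinsic point-spread functions rather than paired experimental data. The general idea behind such PSF-based training is to render synthetic frames from random 3D emitter positions using a model of the system's PSF, so that ground-truth coordinates are known by construction. The sketch below illustrates this with a deliberately simplified stand-in: an isotropic Gaussian spot whose width grows with defocus. All function names, parameters, and the defocus model here are illustrative assumptions, not the paper's actual PSF or pipeline.

```python
import numpy as np

def gaussian_psf_2d(shape, center, sigma):
    """2D Gaussian spot at `center` (row, col) with width `sigma` in pixels.
    A crude stand-in for the system's measured intrinsic PSF."""
    rows = np.arange(shape[0])[:, None]
    cols = np.arange(shape[1])[None, :]
    return np.exp(-((rows - center[0]) ** 2 + (cols - center[1]) ** 2)
                  / (2 * sigma ** 2))

def simulate_frame(shape=(64, 64), n_emitters=5, depth_range_nm=4000.0,
                   sigma0=1.2, defocus_gain=0.0008, rng=None):
    """Render one synthetic training frame with known ground truth.

    Emitters get random (x, y, z) positions; spot width grows with |z|
    as a toy defocus model (hypothetical, for illustration only).
    Returns the noisy frame and an (n_emitters, 3) coordinate array.
    """
    if rng is None:
        rng = np.random.default_rng()
    frame = np.zeros(shape)
    coords = []
    for _ in range(n_emitters):
        y = rng.uniform(0, shape[0])
        x = rng.uniform(0, shape[1])
        z = rng.uniform(-depth_range_nm / 2, depth_range_nm / 2)  # nm
        sigma = sigma0 * (1 + defocus_gain * abs(z))  # wider when defocused
        frame += gaussian_psf_2d(shape, (y, x), sigma)
        coords.append((x, y, z))
    frame += rng.poisson(2.0, shape)  # Poisson background shot noise
    return frame, np.array(coords)

frame, coords = simulate_frame(rng=np.random.default_rng(0))
print(frame.shape, coords.shape)  # (64, 64) (5, 3)
```

Because every synthetic frame carries its exact emitter coordinates, pairs like `(frame, coords)` can serve as supervised training data for a localization network without any sample-specific acquisitions.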
