A deep learning-based segmentation pipeline for profiling cellular morphodynamics using multiple types of live cell microscopy

作者:Junbong Jang, Chuangqi Wang, Xitong Zhang, Hee June Choi, Xiang Pan, Bolun Lin, Yudong Yu, Carly Whittle, Madison Ryan, Yenyu Chen, Kwonmoo Lee

Abstract

Motivation: Quantitative studies of cellular morphodynamics rely on extracting leading-edge velocity time series based on accurate cell segmentation from live-cell imaging. However, live-cell imaging poses numerous challenges for accurate edge localization. Fluorescence live-cell imaging produces noisy, low-contrast images because phototoxicity and photobleaching limit the usable illumination. Phase-contrast microscopy is gentle to live cells, but it suffers from halo and shade-off artifacts that conventional segmentation algorithms cannot handle.

Summary: To accurately segment cell edges and quantify cellular morphodynamics from live-cell imaging data, we developed a deep learning-based pipeline termed MARS-Net (Multiple-microscopy-type-based Accurate and Robust Segmentation Network). MARS-Net utilizes transfer learning and training data from multiple types of microscopy to localize cell edges with high accuracy. For effective training on distinct types of live-cell microscopy, MARS-Net comprises a pretrained VGG19 encoder with a U-Net decoder and dropout layers. We trained MARS-Net on movies from phase-contrast, spinning-disk confocal, and total internal reflection fluorescence (TIRF) microscopes. MARS-Net produced more accurate edge localization than neural network models trained on single-microscopy-type datasets. We expect MARS-Net to accelerate studies of cellular morphodynamics by providing accurate pixel-level segmentation of complex live-cell datasets.
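The architecture described above (a VGG19-style encoder feeding a U-Net decoder with dropout) can be sketched as follows. This is not the authors' code: the channel widths follow VGG19's first four blocks, but the dropout placement and rate, layer names, and the single-logit segmentation head are illustrative assumptions. In practice, the encoder would load ImageNet-pretrained VGG19 weights (e.g. via torchvision) to enable the transfer learning the paper relies on; here the weights are left untrained to keep the sketch self-contained.

```python
# Hedged sketch of a VGG19-encoder / U-Net-decoder segmentation network.
# Assumptions (not from the paper): dropout rate 0.5 on a decoder stage,
# untrained weights, 1-channel per-pixel logit output.
import torch
import torch.nn as nn


def vgg_block(in_ch, out_ch, n_convs):
    """A VGG-style block: n_convs 3x3 convolutions, each followed by ReLU."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)


class MarsNetSketch(nn.Module):
    def __init__(self, dropout=0.5):
        super().__init__()
        # Encoder: first four VGG19 blocks (2, 2, 4, 4 convs).
        self.enc1 = vgg_block(3, 64, 2)
        self.enc2 = vgg_block(64, 128, 2)
        self.enc3 = vgg_block(128, 256, 4)
        self.enc4 = vgg_block(256, 512, 4)
        self.pool = nn.MaxPool2d(2)
        # U-Net decoder: upsample, concatenate the encoder skip, convolve.
        self.up3 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.dec3 = vgg_block(512, 256, 2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = vgg_block(256, 128, 2)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = vgg_block(128, 64, 2)
        self.drop = nn.Dropout2d(dropout)  # regularization across microscopy types
        self.head = nn.Conv2d(64, 1, 1)    # per-pixel cell/background logit

    def forward(self, x):
        s1 = self.enc1(x)               # H   x W,   64 ch
        s2 = self.enc2(self.pool(s1))   # H/2,       128 ch
        s3 = self.enc3(self.pool(s2))   # H/4,       256 ch
        b = self.enc4(self.pool(s3))    # H/8,       512 ch (bottleneck)
        d3 = self.dec3(torch.cat([self.up3(b), s3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(self.drop(d3)), s2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))
        return self.head(d1)            # (N, 1, H, W) segmentation logits


if __name__ == "__main__":
    net = MarsNetSketch()
    out = net(torch.randn(1, 3, 64, 64))
    print(tuple(out.shape))  # (1, 1, 64, 64)
```

The skip connections (`torch.cat`) are what let the decoder recover pixel-accurate edge positions that pooling discards, which is the property the paper's edge-localization metric rewards.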
