Abstract
BACKGROUND: Fluorescent live-cell microscopy enables the study of cellular dynamics by imaging specific molecules, their interactions, and biochemical states in living samples, making it vital for biological research and drug screening. However, live-cell imaging must balance temporal resolution against cell viability because of phototoxicity, often necessitating lower temporal resolution to extend observation periods. This reduction complicates cell tracking and the detection of cell division events, limiting the study of dynamic cellular processes.

RESULTS: We present an integrated methodology combining contrastive learning and graph-based techniques to improve cell division detection and tracking in video microscopy at low temporal resolution. Our approach uses contrastive learning models to generate robust cell representations that enhance both division detection and tracking accuracy. Specifically, we develop a weakly supervised contrastive learning strategy that leverages time-based augmentations to build temporal cell representations. In addition, we propose a novel graph optimization method that identifies cell tracks from these representations together with observed division events. Evaluation on an in-house dataset and on public benchmarks demonstrates significant performance gains at both native and reduced temporal resolutions.

CONCLUSIONS: The proposed methodology improves adaptability to various temporal resolutions, enabling more precise and efficient analysis of live-cell microscopy data. This advancement supports the extended observation periods needed for drug screening and biological studies by preserving cell viability and normal homeostasis. Our approach facilitates deeper insights into cellular mechanisms and has the potential to enhance therapeutic research workflows.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-025-06344-5.
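The core idea of the time-based contrastive strategy can be illustrated with a generic InfoNCE-style objective in which the positive for each cell crop is the same cell imaged at a nearby time point, and all other cells in the batch serve as negatives. This is a minimal sketch under that assumption, not the authors' exact loss or architecture:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss over a batch of embeddings.

    anchors:   (N, D) embeddings of cell crops at time t.
    positives: (N, D) embeddings of the *same* cells at time t + dt
               (the time-based "augmentation"); row i of `positives`
               must correspond to row i of `anchors`.
    """
    # L2-normalize so similarities are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = a @ p.T / temperature                # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability

    # Log-softmax over each row; the matching pair sits on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In practice the embeddings would come from a trained encoder applied to cell crops; here the loss is low when each cell's representation at time t is closest to its own representation at t + dt, which is the property the tracking and division-detection stages rely on.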