Abstract
The intersection of advanced microscopy and machine learning is transforming cell biology into a quantitative, data-driven field. Traditional cell profiling relies on manual feature extraction, which is labor-intensive and prone to bias; deep learning offers an alternative but is often limited by poor interpretability and a reliance on labeled training data. We present MorphoGenie, an unsupervised deep-learning framework for single-cell morphological profiling. By combining disentangled representation learning with high-fidelity image reconstruction, MorphoGenie learns a compact, interpretable latent space that captures biologically meaningful features without manual annotation, overcoming the "curse of dimensionality." Unlike previous models, it systematically links latent representations to hierarchical morphological attributes, ensuring semantic and biological interpretability. It also supports combinatorial generalization, enabling robust performance across diverse imaging modalities (e.g., fluorescence and quantitative phase imaging) and experimental conditions, and across tasks ranging from discrete cell-type and cell-state classification to continuous trajectory inference. MorphoGenie thus provides a generalized, unbiased strategy for morphological profiling, revealing cellular behaviors that expert visual examination often overlooks.
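To make the core idea concrete, the sketch below illustrates how disentangled representation learning combined with image reconstruction can produce a compact latent profile per cell. It is a minimal, hypothetical example using a beta-VAE-style objective in PyTorch; the network sizes, the latent dimensionality, and the `beta` weight are illustrative assumptions and are not taken from the MorphoGenie implementation.

```python
# Minimal sketch (not the authors' implementation): a beta-VAE-style model that
# encodes single-cell images into a compact latent space and reconstructs them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        # Encoder: 64x64 grayscale cell crop -> mean and log-variance of latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent code -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

def loss_fn(x, recon, mu, logvar, beta: float = 4.0):
    # Reconstruction term keeps the latent code faithful to cell morphology;
    # the beta-weighted KL term encourages the latent dimensions to disentangle.
    recon_loss = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

# Each cell is then profiled by its low-dimensional latent code (mu),
# e.g. for downstream clustering, classification, or trajectory inference.
model = DisentangledAutoencoder()
images = torch.rand(8, 1, 64, 64)   # a batch of single-cell crops in [0, 1]
recon, mu, logvar = model(images)
profiles = mu.detach()              # compact morphological profiles, one row per cell
loss = loss_fn(images, recon, mu, logvar)
```

In such a setup, each latent dimension can in principle be inspected by traversing it while holding the others fixed and decoding the result, which is one common way disentangled factors are related to interpretable morphological attributes.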