Abstract
Whole-slide multiplex brain tissue images for spatial proteomics are massive, information-dense, and challenging to analyze. We present mVISE, an interactive multiplex visual search engine that offers a programming-free, query-driven alternative for analysis based on retrieving and profiling communities of similar cells, proximal cell pairs, and multicellular niches. The retrievals support exploratory cell and tissue analysis, delineation of brain regions and cortical layers, and profiling and comparison of brain regions, sub-regions, and sub-layers. mVISE is enabled by multiplex encoders that seamlessly integrate visual cues across imaging channels, overcoming the limitations of current foundation models. We train separate encoders to learn each facet of tissue, including cell morphologies, spatial protein expression (chemoarchitecture), cell arrangements (cytoarchitecture), and wiring patterns (myeloarchitecture), from a set of user-defined molecular marker panels, without human annotations or intervention and with visual confirmation of successful learning. Multiple encoders can be combined logically to drive specialized searches. We validated mVISE's ability to retrieve single cells, proximal cell pairs, and tissue patches, and to delineate cortical layers, brain regions, and sub-regions. mVISE is disseminated as an open-source QuPath plug-in.