Abstract
Multimodal imaging techniques have the potential to enhance the interpretation of histology by offering molecular and structural information beyond that accessible through hematoxylin and eosin (H&E) staining alone. Here, we present a quantitative approach for comparing the information content of different image modalities, such as H&E and multimodal imaging. We combined deep learning and radiomics-based feature extraction with several quantitative information measures, implemented in Python 3.12, to compare the information content of the H&E stain, multimodal imaging, and the combined dataset. We also compared the information content of individual channels in the multimodal image and of different Coherent Anti-Stokes Raman Scattering (CARS) microscopy spectral channels. The information measures we used were Shannon entropy, inverse area under the curve (1-AUC), the number of principal components describing 95% of the variance (PC95), and inverse power law fitting. For example, the combined dataset achieved an entropy value of 0.5740, compared to 0.5310 for H&E and 0.5385 for the multimodal dataset using MobileNetV2 features. The number of principal components required to explain 95% of the variance was also highest for the combined dataset, with 62 components, compared to 33 for H&E and 47 for the multimodal dataset. These measurements consistently showed that the combined dataset provides more information. These observations highlight the potential of multimodal combinations to enhance image-based analyses and provide a reproducible framework for comparing imaging approaches in digital pathology and biomedical image analysis.
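Two of the information measures named above, PC95 and Shannon entropy, can be sketched on an extracted feature matrix. This is a minimal illustration, not the paper's implementation: the feature matrix here is random placeholder data standing in for MobileNetV2 or radiomics features, and the histogram bin count is an assumed parameter.

```python
import numpy as np
from sklearn.decomposition import PCA


def pc95(features: np.ndarray) -> int:
    """Number of principal components needed to explain 95% of the variance."""
    pca = PCA().fit(features)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    # index of the first component at which cumulative variance reaches 0.95
    return int(np.searchsorted(cumulative, 0.95) + 1)


def shannon_entropy(features: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy (in bits) of the histogram of feature values."""
    counts, _ = np.histogram(features, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins; 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())


# Placeholder data: 200 image tiles x 128 deep features (assumed shapes).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))
print("PC95:", pc95(X), "entropy (bits):", shannon_entropy(X))
```

A dataset with more independent structure yields a higher PC95 and a flatter (higher-entropy) feature histogram, which is the sense in which the combined H&E plus multimodal dataset scores higher on both measures.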