The contrastive learning method in tongue image representation learning and its significance in intelligent diagnosis


Abstract

OBJECTIVE: Tongue diagnosis is crucial in traditional Chinese medicine (TCM). Because TCM diagnoses cannot be fully standardized, reaching an ideal consensus when labeling TCM syndromes is difficult, which introduces subjective bias into representation learning. We therefore explore contrastive learning to automatically extract semantic features from tongue images, reducing the need for manual labeling and avoiding manual bias in a self-supervised manner. METHODS: We applied clustering contrastive learning (CCL) to the representation learning of tongue images and, guided by TCM theory, coupled it with a refined data augmentation strategy. The embeddings of tongue images produced by CCL-based models were used in downstream tasks, and their feature extraction capability was verified through loss curves, precision, and other metrics. RESULTS: The downstream-task experiments showed that CCL-based models outperformed supervised models on most evaluation metrics. In the qualitative experiment, cluster analysis showed that the CCL-based model could perceive the colors and textures of the nasolabial folds or the eyes without human-supervised information. CONCLUSIONS: The contrastive learning (CL) method automatically extracted tongue image features and avoided interference from subjective manual labels. Thus, the symptoms, signs, and other phenotypes associated with Zheng (syndrome) in TCM can be objectively quantified, addressing a long-standing standardization problem in TCM.
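The abstract does not give the exact loss used by the CCL-based models. As an illustration only, the sketch below implements the generic instance-level contrastive objective (an NT-Xent-style loss) that clustering contrastive methods typically build on: embeddings of two augmented views of the same tongue image are pulled together, while all other embeddings in the batch are pushed apart. The function name `nt_xent_loss` and the NumPy implementation are assumptions for demonstration, not the authors' code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss (not the paper's implementation).

    z1, z2: (N, D) arrays of embeddings for two augmented views of the same
    N images (e.g. two differently augmented crops of each tongue image).
    Row i of z1 and row i of z2 form a positive pair; every other row in the
    combined batch acts as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D) combined batch
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive partner of sample i is sample i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # log-softmax over each row, then pick out the positive-pair entry
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Under this objective, views that are near-duplicates of each other yield a lower loss than unrelated views, which is what drives the encoder to learn augmentation-invariant tongue features without any syndrome labels.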
