Exploring Acoustic Correlates of Depression and Preliminary Screening Models Using XGBoost and SHAP


Abstract

This exploratory study investigated whether voice-derived acoustic features reflect depressive symptom severity and whether they carry preliminary predictive signal for distinguishing individuals with Major Depressive Disorder (MDD) from healthy controls (HC). Using the publicly available MODMA dataset (23 MDD; 29 HC), 6553 acoustic features were extracted with openSMILE. Spearman correlation and group-difference analyses identified several MFCC-derived spectral features as moderately and systematically associated with PHQ-9 scores, indicating their potential relevance as severity-linked acoustic markers. To complement these findings, a supplementary severity-based classification using a PHQ-9 ≥ 10 threshold showed that a logistic regression model trained on the top five correlated MFCC features achieved a cross-validated AUC of 0.78 (SD = 0.15), supporting their association with clinically defined symptom burden. Four machine learning pipelines were further evaluated for an exploratory MDD-HC classification task. Among them, the PCA + XGBoost model demonstrated the most stable generalization (test AUC = 0.60), although predictive performance remained limited within the constraints of the small and high-dimensional dataset. SHAP analysis highlighted MFCC-derived features as key contributors to model decisions, providing transparent interpretability. Overall, the study presents preliminary evidence linking acoustic characteristics to depressive symptoms and outlines a reproducible analytical workflow, while underscoring the need for substantially larger and more diverse datasets to establish clinically meaningful predictive validity.
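The severity-based analysis described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for the MODMA acoustic features and PHQ-9 scores; the feature count, subject count, and seed are placeholders, and the actual study's openSMILE extraction and preprocessing are not reproduced here.

```python
# Hypothetical sketch: rank acoustic features by Spearman correlation with
# PHQ-9, then cross-validate a logistic regression on the top 5 features
# against the PHQ-9 >= 10 severity threshold. Data here is synthetic.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 52, 200               # 23 MDD + 29 HC in MODMA
X = rng.normal(size=(n_subjects, n_features))  # placeholder acoustic features
phq9 = rng.integers(0, 28, size=n_subjects)    # placeholder PHQ-9 scores (0-27)

# Rank features by |Spearman rho| against PHQ-9 and keep the top 5
rho = np.array([spearmanr(X[:, j], phq9).correlation for j in range(n_features)])
top5 = np.argsort(-np.abs(rho))[:5]

# Binarize severity at PHQ-9 >= 10 and estimate cross-validated AUC
y = (phq9 >= 10).astype(int)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X[:, top5], y, cv=cv, scoring="roc_auc")
print(f"CV AUC: {aucs.mean():.2f} (SD = {aucs.std():.2f})")
```

Note that with only 52 subjects, selecting features on the full dataset before cross-validation (as in this sketch) leaks information; a nested scheme that re-ranks features inside each training fold would give a less optimistic estimate.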
