Abstract
Graduate education has entered the era of big data, and systematic analysis of dissertation evaluations has become crucial for quality monitoring. However, the complexity and subjectivity inherent in peer-review texts pose significant challenges for automated analysis. While natural language processing (NLP) offers potential solutions, most existing methods fail to adequately capture nuanced disciplinary criteria or provide interpretable inferences for educators. Inspired by soft-sensor technology, this study employs a BERT-based model enhanced with an additional attention mechanism to quantify latent evaluation dimensions from dissertation reviews. To ensure the interpretability of model predictions, the framework integrates Shapley Additive exPlanations (SHAP), combining deep semantic modeling with quantified feature importance in academic evaluation. Experimental results demonstrate that the proposed model outperforms baseline methods in accuracy, precision, recall, and F1-score. Furthermore, its interpretability mechanism reveals the key evaluation dimensions that experts prioritize during dissertation assessment. This analytical framework establishes an interpretable soft-sensor paradigm that bridges NLP with substantive review principles, offering actionable insights for improving dissertation quality.