Abstract
Understanding music styles is essential for music information retrieval, personalized recommendation, and AI-assisted content creation. However, existing work typically addresses tasks such as emotion classification and singing style classification in isolation, neglecting the intrinsic relationships between them. In this study, we introduce a multi-task learning framework that jointly models these two tasks to enable explicit knowledge sharing and mutual enhancement. Our results indicate that joint optimization consistently outperforms single-task baselines, demonstrating the value of leveraging inter-task correlations for more robust singing style analysis. To assess the generality and adaptability of the proposed framework, we evaluate it with several backbone architectures, including Transformer, TextCNN, and BERT, and observe stable performance improvements in all cases. Experiments on a self-constructed benchmark dataset, recorded with professional equipment, further show that the framework not only achieves the best accuracy on both tasks under a singer-wise split, but also yields interpretable insights into the interplay between emotional expression and stylistic characteristics in vocal performance.