Abstract
Music genres lack a firmly established framework, since they are often determined by the composer's background, cultural or historical influences, and geographical origin. In this work, a new methodology based on deep learning and metaheuristic algorithms is presented to improve performance in music style categorization. The model consists of two main parts: a pre-trained ZFNet, which extracts high-level features from audio signals, and a ResNeXt model for classification. A fractional-order variant of the Greylag Goose Optimization (FGLGO) algorithm is used to optimize the parameters of ResNeXt and boost model performance. A dual-path recurrent network is employed for real-time music generation. The model is evaluated on two benchmark datasets, ISMIR2004 and Extended Ballroom, against state-of-the-art models including CNN, PRCNN, BiLSTM, and BiRNN. Experimental results show that, with accuracy rates of 0.918 on the Extended Ballroom dataset and 0.954 on the ISMIR2004 dataset, the proposed model achieves incremental improvements in accuracy and efficiency over existing models.
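To make the two-stage design concrete, the following is a minimal, hypothetical sketch of the pipeline shape the abstract describes: a frozen feature extractor (a stand-in for ZFNet) feeds a classifier (a stand-in for ResNeXt) whose parameter is tuned by a generic population-based metaheuristic. All function names, the toy dataset, and the search loop are illustrative assumptions; this is not the FGLGO algorithm or the actual deep networks, only the overall optimization structure.

```python
import random

def extract_features(signal):
    # Stand-in for ZFNet feature extraction: simple summary statistics.
    n = len(signal)
    mean = sum(signal) / n
    energy = sum(x * x for x in signal) / n
    return (mean, energy)

def classify(features, threshold):
    # Stand-in for ResNeXt: a one-parameter threshold classifier.
    _, energy = features
    return 1 if energy > threshold else 0

def fitness(threshold, dataset):
    # Classification accuracy over labelled (signal, label) pairs.
    correct = sum(
        classify(extract_features(sig), threshold) == label
        for sig, label in dataset
    )
    return correct / len(dataset)

def metaheuristic_search(dataset, pop_size=10, iters=30, seed=0):
    # Generic population-based search (hypothetical, not FGLGO itself):
    # each candidate drifts toward the current best solution with a
    # random perturbation, and the best-so-far is retained.
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 2.0) for _ in range(pop_size)]
    best = max(population, key=lambda t: fitness(t, dataset))
    for _ in range(iters):
        population = [
            t + rng.uniform(0.0, 1.0) * (best - t) + rng.gauss(0.0, 0.05)
            for t in population
        ]
        cand = max(population, key=lambda t: fitness(t, dataset))
        if fitness(cand, dataset) >= fitness(best, dataset):
            best = cand
    return best

# Toy dataset: "loud" signals labelled 1, "quiet" signals labelled 0.
dataset = [([0.9, 1.1, 1.0], 1), ([0.1, 0.2, 0.1], 0),
           ([1.2, 0.8, 1.0], 1), ([0.05, 0.1, 0.15], 0)]
best_threshold = metaheuristic_search(dataset)
print(fitness(best_threshold, dataset))
```

In the actual system, the fitness function would be validation accuracy of the ResNeXt classifier, and the search variable would be its hyperparameters rather than a single threshold.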