Abstract
As the music marketplace has shifted from analog to digital, digital music has become simpler to produce and distribute, and the variety of music available worldwide has grown rapidly. Given this abundance of songs, people discover music in various ways, one of which is by analyzing its emotional content; moreover, not all music is suitable for every age group. Deep learning techniques have recently yielded excellent results, marking a significant advance in Natural Language Processing (NLP). However, few attempts have been made to use a deep learning model to filter out songs with inappropriate lyrics. Hence, a deep learning-based lyrics text classification process is presented in this proposal. First, the required text data are fetched from standard online resources and passed to a text pre-processing stage. The pre-processed text is then fed to the Serial Cascaded Hybrid Adaptive Deep Networks (SCHADNet) for classification. SCHADNet integrates a Transformer-based Bidirectional Long Short-Term Memory (Trans Bi-LSTM) with a Gated Recurrent Unit (GRU), and its parameters are optimally tuned by the Improved Marine Predators Algorithm (IMPA). Finally, the classified outcome is obtained from SCHADNet. The developed model improves classification performance, achieving an accuracy of 93.4%, a recall of 93.47%, and a Negative Predictive Value (NPV) of 99.2%. A numerical analysis comparing the suggested lyrics text classification model with numerous classical text classification techniques demonstrates the effectiveness of the presented model.
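The text pre-processing stage mentioned above can be sketched roughly as follows. This is a minimal illustration, not the authors' exact pipeline: the specific steps (lowercasing, punctuation stripping, whitespace tokenization, stop-word filtering) and the small stop-word list are assumptions chosen for the example.

```python
import re
import string

# Illustrative stop-word list; a real pipeline would use a fuller set
# (e.g. the NLTK English stop-word list).
STOP_WORDS = {"the", "a", "an", "and", "or", "is", "to", "of", "in", "it", "i"}

def preprocess_lyrics(text: str) -> list[str]:
    """Sketch of a lyrics pre-processing stage: lowercase the text,
    strip punctuation, tokenize on whitespace, and drop stop words."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.split(r"\s+", text.strip())
    return [tok for tok in tokens if tok and tok not in STOP_WORDS]
```

For example, `preprocess_lyrics("The night is young, and I am free!")` yields `["night", "young", "am", "free"]`; the resulting token sequence would then be encoded (e.g. as word indices or embeddings) before being passed to the classifier.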