An automatic music generation and evaluation method based on transfer learning



Abstract

In recent years, deep learning has made remarkable progress in many fields, with many excellent pre-trained models emerging in natural language processing (NLP) in particular. However, these pre-trained models cannot be used directly in music generation tasks because music symbols and text are represented differently. Compared with the traditional representation of a melody, which encodes only the pitch relationships between individual notes, the text-like representation proposed in this paper captures more melodic information, including pitch, rhythm, and pauses. It expresses the melody in a form similar to text, making it possible to apply existing pre-trained models to symbolic melody generation. Building on the generative pre-training-2 (GPT-2) text generation model and transfer learning, we propose the MT-GPT-2 (music textual GPT-2) model for melody generation. We then propose a symbolic music evaluation method (MEM) that combines mathematical statistics, music theory, and signal processing, and is more objective than manual evaluation. Using this evaluation method and music theory, the proposed generation model is compared with other models, such as the long short-term memory (LSTM) model, the Leak-GAN model, and Music SketchNet. The results show that the melodies generated by the proposed model are closer to real music.
