A survey paper on deep-learning-based music generation, Deep Learning Techniques for Music Generation – A Survey, has been published. The authors are a group centered on Pachet, who currently leads Spotify's research lab.
I have written up a Japanese summary of its contents and published it on Medium.
The survey is organized along several dimensions: the target of generation (melody, rhythm, etc.), the input and output data, the data format (MIDI, text, etc.), the model architecture (RNN, CNN), and the training strategy (GAN, etc.). The summary also includes music generated by each paper's system, which you cannot hear from the papers alone, so please have a read and a listen!
arXiv (published 2017-09-05)
This book is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. First, we propose a methodology based on four dimensions for our analysis:
– objective – What musical content is to be generated? (e.g., melody, accompaniment…);
– representation – What are the information formats used for the corpus and for the expected generated output? (e.g., MIDI, piano roll, text…);
– architecture – What type of deep neural network is to be used? (e.g., recurrent network, autoencoder, generative adversarial networks…);
– strategy – How to model and control the process of generation? (e.g., direct feedforward, sampling, unit selection…).
For each dimension, we conduct a comparative analysis of various models and techniques. For the strategy dimension, we propose some tentative typology of possible approaches and mechanisms. This classification is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation, which are described in this book. The last part of the book includes discussion and prospects.
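To make the "representation" dimension a little more concrete, here is a minimal sketch (my own illustration, not taken from the survey) of how MIDI-style note events can be encoded as a piano-roll matrix, the kind of input many of the surveyed recurrent and convolutional models consume. The note tuples and time resolution are illustrative assumptions.

```python
import numpy as np

# Hypothetical note events: (MIDI pitch number, onset step, duration in steps).
# These values are illustrative, not taken from the survey.
notes = [(60, 0, 4), (64, 4, 4), (67, 8, 4), (72, 12, 4)]  # C major arpeggio

NUM_PITCHES = 128   # full MIDI pitch range
NUM_STEPS = 16      # total time steps in this toy example

# Piano roll: rows = pitches, columns = time steps, 1.0 where a note is sounding.
piano_roll = np.zeros((NUM_PITCHES, NUM_STEPS), dtype=np.float32)
for pitch, onset, duration in notes:
    piano_roll[pitch, onset:onset + duration] = 1.0

# A recurrent model would typically consume this as a sequence of 128-dim
# vectors, one per time step (transpose so that time is the first axis).
sequence = piano_roll.T  # shape: (NUM_STEPS, NUM_PITCHES)
print(sequence.shape)    # (16, 128)
```

The same matrix can equally be treated as a 2-D "image" by a convolutional architecture; which view is appropriate is exactly the kind of trade-off the survey's representation and architecture dimensions discuss.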