Music Generation with Neural Networks using Daniel Johnson's RNN
In this post, we’ll examine a music-generating neural network created by Daniel Johnson. A detailed description of the network can be found on his website.
If you don’t have Theano (a Python library for optimized mathematical computation) on your machine, you can find installation instructions for Ubuntu here.
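For the impatient, a minimal install might look like the following. This is a sketch that assumes a standard Ubuntu setup with pip available; the linked guide covers dependencies and troubleshooting in more detail.

```shell
# Minimal sketch of a Theano install on Ubuntu (assumes pip is available);
# the linked installation guide covers BLAS/compiler setup more thoroughly.
sudo apt-get install python-dev g++ libopenblas-dev
pip install --user numpy scipy
pip install --user Theano
```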
Training data came from this website.
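The training data is MIDI files, which a network like this consumes as a note-by-time matrix. As an illustrative simplification (this is not Johnson’s exact feature encoding, just the general “piano roll” idea such networks train on), a piece can be unrolled like so:

```python
import numpy as np

# Illustrative "piano roll" encoding: rows are MIDI pitches, columns are
# time steps; a 1 means the note is sounding at that step. This is a
# simplification of the note/time representation such networks learn from.
NUM_PITCHES = 128   # full MIDI pitch range
NUM_STEPS = 16      # one bar of sixteenth notes in 4/4

def notes_to_piano_roll(notes):
    """notes: list of (pitch, start_step, duration_in_steps) tuples."""
    roll = np.zeros((NUM_PITCHES, NUM_STEPS), dtype=np.int8)
    for pitch, start, duration in notes:
        roll[pitch, start:start + duration] = 1
    return roll

# A C-major chord held for half a bar, then a single high C.
roll = notes_to_piano_roll([(60, 0, 8), (64, 0, 8), (67, 0, 8), (72, 8, 8)])
print(roll.shape)       # (128, 16)
print(int(roll.sum()))  # 32 note-on steps in total
```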
If your machine has an NVIDIA GPU, it’s highly recommended to install GPU drivers and the CUDA toolkit, as this can speed up training several-fold (and save you the cost of renting compute elsewhere).
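Once CUDA is set up, Theano must be told to use the GPU, typically via the `THEANO_FLAGS` environment variable (or a `.theanorc` file). The flag names vary by Theano version, and `train.py` below is a stand-in for whatever script actually launches training:

```shell
# Sketch: run training on the GPU with 32-bit floats. Flag names depend on
# the Theano version (older releases use device=gpu, newer ones device=cuda);
# train.py is a placeholder for your actual training script.
THEANO_FLAGS='device=cuda,floatX=float32' python train.py
```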
On a machine with an Intel i5-7500 processor and an NVIDIA GeForce GTX 1070 GPU, training for 10,000 epochs took around 6 hours and produced the following: composition.mp3
Here are some samples taken while training:
After 100 epochs: sample100.mp3
After 1000 epochs: sample1000.mp3
After 5000 epochs: sample5000.mp3
The network did a good job with rhythm: it’s clear that the music is written in 4/4 time using standard note values. However, there is no clear melody, and the music shifts keys several times, producing dissonant chords.