Existing deep-learning approaches to automatic music generation can be
broadly classified into two types: raw audio models and symbolic models.
Symbolic models, which train and generate at the note level, are currently the
more prevalent approach; they can capture the long-range dependencies of
melodic structure, but fail to reproduce the nuance and richness of raw audio.
Raw audio models, such as DeepMind's WaveNet, train directly on sampled audio
waveforms, allowing them to produce realistic-sounding, albeit unstructured,
music. In this paper, we propose an automatic music generation
methodology combining both of these approaches to create structured,
realistic-sounding compositions. We use a Long Short-Term Memory (LSTM)
network to learn the melodic structure of different styles of music, and then
feed the novel symbolic generations from this model as a conditioning input to
a WaveNet-based raw audio generator, yielding a model for the automatic
generation of novel music.
We conclude by presenting and evaluating results produced by this approach.
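
To make the conditioning scheme concrete, below is a minimal PyTorch sketch of
the two components and their coupling. It is not the authors' implementation:
all class names, layer sizes, and the nearest-neighbor upsampling used to
align the note sequence with the audio sample rate are illustrative
assumptions.

```python
# Sketch (illustrative, not the paper's code): an LSTM models a symbolic note
# sequence; its note embeddings are upsampled to the audio rate and used as
# the local conditioning signal of a small WaveNet-style dilated conv stack.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SymbolicLSTM(nn.Module):
    """Note-level model: predicts the next note from the sequence so far."""
    def __init__(self, n_notes=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(n_notes, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_notes)

    def forward(self, notes):                      # notes: (B, T_notes)
        h, _ = self.lstm(self.embed(notes))        # (B, T_notes, hidden)
        return self.head(h)                        # logits over the next note


class ConditionedBlock(nn.Module):
    """One gated, dilated causal conv layer with local conditioning."""
    def __init__(self, channels, cond_channels, dilation):
        super().__init__()
        self.dilation = dilation
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size=2,
                              dilation=dilation)
        self.cond = nn.Conv1d(cond_channels, 2 * channels, kernel_size=1)
        self.res = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x, c):                       # x: (B, C, T), c: (B, Cc, T)
        # Left-pad so the convolution is causal (sees no future samples).
        y = self.conv(F.pad(x, (self.dilation, 0))) + self.cond(c)
        filt, gate = y.chunk(2, dim=1)
        y = torch.tanh(filt) * torch.sigmoid(gate)  # gated activation unit
        return x + self.res(y)                      # residual connection


class ConditionedWaveNet(nn.Module):
    """Tiny WaveNet-style stack conditioned on upsampled note embeddings."""
    def __init__(self, n_quant=256, channels=32, cond_channels=64,
                 dilations=(1, 2, 4, 8, 16, 32)):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.blocks = nn.ModuleList(
            ConditionedBlock(channels, cond_channels, d) for d in dilations)
        self.out = nn.Conv1d(channels, n_quant, kernel_size=1)

    def forward(self, audio, note_embed):
        # audio: (B, 1, T_audio); note_embed: (B, T_notes, cond_channels)
        c = note_embed.transpose(1, 2)              # (B, Cc, T_notes)
        # Stretch the symbolic sequence to the audio sample rate.
        c = F.interpolate(c, size=audio.size(-1), mode="nearest")
        x = self.inp(audio)
        for block in self.blocks:
            x = block(x, c)
        return self.out(x)                          # logits over mu-law bins


# Toy wiring: in practice the notes would be sampled from the trained LSTM;
# here random notes are embedded directly to show the shapes line up.
lstm, wavenet = SymbolicLSTM(), ConditionedWaveNet()
notes = torch.randint(0, 128, (1, 16))             # toy symbolic sequence
audio = torch.randn(1, 1, 4000)                    # toy raw-audio window
logits = wavenet(audio, lstm.embed(notes))         # -> (1, 256, 4000)
```

The coupling point is the 1x1 convolution applied to the conditioning signal
inside each gated block, mirroring WaveNet's local-conditioning formulation:
the symbolic sequence steers the sample-level audio model toward the intended
melodic structure at every time step.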