Using Artificial Intelligence to Generate Musical Content (Wykorzystanie sztucznej inteligencji do generowania treści muzycznych)

15 Dec 2019  ·  Mateusz Dorobek

This thesis presents a method for generating short musical phrases with a deep convolutional generative adversarial network (DCGAN). The network was trained on datasets of classical and jazz MIDI recordings. Our approach translates the MIDI data into piano-roll images that match the network's input size, using the RGB channels as additional information carriers to improve performance. The network learned to generate images that are indistinguishable from the input data and that, when translated back to MIDI and played back, contain several musically interesting rhythmic and harmonic structures. The results of the conducted experiments are described and discussed, with conclusions for further work and a short comparison with selected existing solutions.
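
The abstract does not spell out how the MIDI-to-image translation is done, so the following is only a minimal sketch of one plausible preprocessing step, not the author's pipeline. It assumes the `pretty_midi` library for parsing, a 128x128 target resolution, and (as a pure assumption, since the paper is not specific here) that three consecutive time windows are packed into the R, G and B channels to carry extra temporal context.

```python
# Hypothetical MIDI -> RGB piano-roll conversion sketch (not the paper's exact method).
import numpy as np
import pretty_midi
from PIL import Image

def midi_to_rgb_rolls(path, fs=16, size=128):
    """Render a MIDI file as a list of size x size RGB piano-roll images.

    Assumption: each colour channel holds one of three consecutive
    `size`-frame windows, so a single image covers 3*size frames.
    """
    midi = pretty_midi.PrettyMIDI(path)
    roll = midi.get_piano_roll(fs=fs)            # shape: (128 pitches, T frames)
    roll = np.clip(roll, 0, 127) * (255.0 / 127) # velocities -> 0..255 grey levels
    images = []
    for start in range(0, roll.shape[1] - 3 * size + 1, 3 * size):
        channels = [roll[:size, start + i * size : start + (i + 1) * size]
                    for i in range(3)]
        rgb = np.stack(channels, axis=-1).astype(np.uint8)
        images.append(Image.fromarray(rgb, mode="RGB"))
    return images
```

Images produced this way could be fed to a standard DCGAN; generated samples would then be mapped back to MIDI by thresholding each channel and reading non-zero pixels as notes, which is one straightforward way to realise the "translated back to MIDI" step mentioned above.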

