no code implementations • 5 Aug 2020 • Carlos Cancino-Chacón, Silvan Peter, Shreyan Chowdhury, Anna Aljanaki, Gerhard Widmer
In this paper, we offer a first account of this new data resource for expressive performance research, and provide an exploratory analysis, addressing three main questions: (1) how similarly do different listeners describe a performance of a piece?
1 code implementation • 24 Jun 2019 • Federico Simonetta, Carlos Cancino-Chacón, Stavros Ntalampiras, Gerhard Widmer
The backbone of the method is a convolutional neural network (CNN) that estimates the probability that each note in the score (more precisely: each pixel in a piano-roll encoding of the score) belongs to the melody line.
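A minimal sketch of this idea, assuming PyTorch; the `MelodyCNN` name, layer sizes, and input dimensions are illustrative choices, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MelodyCNN(nn.Module):
    """Fully convolutional net: piano roll in, per-pixel melody probability out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one logit per piano-roll pixel
        )

    def forward(self, roll):
        # roll: (batch, 1, pitches, frames) binary piano roll of the score
        return torch.sigmoid(self.net(roll))  # P(pixel belongs to the melody)

# Usage: a 128-pitch piano roll with 400 time frames.
roll = torch.zeros(1, 1, 128, 400)
probs = MelodyCNN()(roll)  # shape (1, 1, 128, 400), values in [0, 1]
```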
no code implementations • 14 Jun 2019 • Zhengshan Shi, Carlos Cancino-Chacón, Gerhard Widmer
Musicians produce individualized, expressive performances by manipulating parameters such as dynamics, tempo and articulation.
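As an illustration of how such parameters are commonly quantified from note events; the `Note` fields and formulas below are assumptions for this sketch, not the authors' definitions:

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float      # performed onset, seconds
    offset: float     # performed offset, seconds
    velocity: int     # MIDI velocity, 0-127 (dynamics proxy)
    score_ioi: float  # notated inter-onset interval to the next note, beats

def performance_parameters(prev: Note, cur: Note):
    dynamics = cur.velocity / 127.0                       # normalized loudness
    perf_ioi = cur.onset - prev.onset                     # performed IOI, seconds
    tempo = 60.0 * prev.score_ioi / perf_ioi              # local tempo, BPM
    articulation = (prev.offset - prev.onset) / perf_ioi  # >1 legato, <1 staccato
    return dynamics, tempo, articulation

# Two quarter notes played at 120 BPM, slightly detached:
n1 = Note(onset=0.0, offset=0.45, velocity=72, score_ioi=1.0)
n2 = Note(onset=0.5, offset=0.95, velocity=80, score_ioi=1.0)
print(performance_parameters(n1, n2))  # approx. (0.63, 120.0, 0.9)
```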
no code implementations • 11 Sep 2017 • Carlos Cancino-Chacón, Maarten Grachten, David R. W. Sears, Gerhard Widmer
In this paper, we present preliminary work examining the relationship between the formation of expectations and the realization of musical performances, paying particular attention to expressive tempo and dynamics.
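One form such an analysis could take is a correlation between note-level surprisal and local tempo; the numbers below are placeholders, and a real study would take the surprisal values from a trained expectation model over the score:

```python
import numpy as np

surprisal = np.array([1.2, 0.4, 2.8, 0.9, 3.1, 0.5])  # -log2 p(note | context), hypothetical
local_bpm = np.array([112, 118, 96, 115, 90, 120])    # performed tempo at each note, hypothetical

r = np.corrcoef(surprisal, local_bpm)[0, 1]
print(f"Pearson r = {r:.2f}")  # negative r would indicate slowing at unexpected notes
```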
no code implementations • 19 Jul 2017 • Carlos Cancino-Chacón, Maarten Grachten, Kat Agres
Tonal structure is in part conveyed by statistical regularities between musical events, and research has shown that computational models reflect this structure by capturing such regularities in schematic constructs like pitch histograms.
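A minimal example of such a construct, a pitch-class histogram computed over a hypothetical C-major passage:

```python
import numpy as np

midi_pitches = [60, 62, 64, 65, 67, 69, 71, 72, 67, 64, 60]  # illustrative passage
hist = np.bincount(np.array(midi_pitches) % 12, minlength=12)
profile = hist / hist.sum()  # relative frequency of each of the 12 pitch classes
print(profile)
# Comparing such a profile against key profiles (e.g., Krumhansl-style) is one
# way a model can reflect the tonal structure of a passage.
```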