Search Results for author: Dimos Makris

Found 4 papers, 3 papers with code

Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses

1 code implementation · 19 Feb 2022 · Phoebe Chua, Dimos Makris, Dorien Herremans, Gemma Roig, Kat Agres

In this paper, we present MusicVideos (MuVi), a novel dataset for affective multimedia content analysis, created to study how the auditory and visual modalities contribute to the perceived emotion of media.

Tasks: Descriptive, Feature Importance (+2 more)

Conditional Drums Generation using Compound Word Representations

1 code implementation · 9 Feb 2022 · Dimos Makris, Guo Zixun, Maximos Kaliakatsos-Papakostas, Dorien Herremans

The field of automatic music composition has seen great progress in recent years, specifically with the invention of transformer-based architectures.
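The title's "compound word" idea refers to grouping the related attributes of a musical event into a single token tuple rather than a flat token stream. A minimal illustrative sketch for drum events is below; the field names and example values are assumptions for illustration, not the paper's actual token vocabulary:

```python
# Illustrative compound-word-style encoding for drum events.
# Field names and values are assumptions, not the paper's actual scheme:
# related attributes of one event are grouped into a single tuple
# ("compound word") instead of being emitted as separate tokens.

from typing import NamedTuple

class DrumWord(NamedTuple):
    bar: int        # bar index within the piece
    position: int   # onset position within the bar (e.g. 16th-note grid)
    drum: str       # drum instrument, e.g. "kick", "snare", "hihat"
    velocity: int   # MIDI velocity, 0-127

def encode(events):
    """Map each (bar, position, drum, velocity) event to one compound word."""
    return [DrumWord(*e) for e in events]

events = [(0, 0, "kick", 100), (0, 4, "snare", 90), (0, 8, "kick", 100)]
words = encode(events)
print(words[1].drum)  # snare
```

Grouping attributes this way shortens the sequence a model must process, since one step carries a whole event rather than one attribute.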

Generating Lead Sheets with Affect: A Novel Conditional seq2seq Framework

1 code implementation · 27 Apr 2021 · Dimos Makris, Kat R. Agres, Dorien Herremans

In this paper, we present a novel approach for calculating the valence (the positivity or negativity of the perceived emotion) of a chord progression within a lead sheet, using pre-defined mood tags proposed by music experts.

Tasks: Machine Translation, Music Generation (+1 more)
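The abstract describes deriving a progression's valence from expert mood tags attached to chords. A hypothetical sketch of that idea follows; the tag-to-valence scores and chord annotations are invented for illustration, not the paper's actual data:

```python
# Hypothetical sketch: valence of a chord progression from mood tags.
# The tag scores and chord annotations below are illustrative
# assumptions, not the paper's actual expert data.

# Valence on [-1, 1] attached to pre-defined mood tags (assumed values).
TAG_VALENCE = {"happy": 0.8, "tender": 0.4, "sad": -0.7, "tense": -0.5}

# Each chord carries expert mood tags (assumed annotations).
CHORD_TAGS = {
    "Cmaj": ["happy"],
    "Am": ["sad", "tender"],
    "G7": ["tense", "happy"],
}

def chord_valence(chord):
    """Average the valence of the mood tags attached to one chord."""
    tags = CHORD_TAGS[chord]
    return sum(TAG_VALENCE[t] for t in tags) / len(tags)

def progression_valence(chords):
    """Average per-chord valence over the whole progression."""
    return sum(chord_valence(c) for c in chords) / len(chords)

print(round(progression_valence(["Cmaj", "Am", "G7"]), 3))  # 0.267
```

A scalar like this could then condition a seq2seq generator on the desired positivity or negativity of the output.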

DeepDrum: An Adaptive Conditional Neural Network

no code implementations · 17 Sep 2018 · Dimos Makris, Maximos Kaliakatsos-Papakostas, Katia Lida Kermanidis

Considering music as a sequence of events with multiple complex dependencies, the Long Short-Term Memory (LSTM) architecture has proven very efficient in learning and reproducing musical styles.
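Why an LSTM suits such event sequences can be seen from a single cell step: gates decide what the cell state keeps or forgets across time. A toy sketch with scalar states and made-up weights (not a trained model) is below:

```python
# Minimal single LSTM cell step in plain Python, showing how gates
# control what the cell state carries across an event sequence.
# Weights are toy values, not a trained model.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step with scalar input and state (weights in dict w)."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    c = f * c_prev + i * g    # new cell state: keep some past, add some new
    h = o * math.tanh(c)      # new hidden state, bounded in (-1, 1)
    return h, c

keys = ("wf","uf","bf","wi","ui","bi","wg","ug","bg","wo","uo","bo")
w = {k: 0.5 for k in keys}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 1.0]:     # a tiny "event sequence"
    h, c = lstm_step(x, h, c, w)
print(-1.0 < h < 1.0)  # True: the hidden state stays bounded
```

The multiplicative forget gate is what lets the cell retain dependencies over long event sequences, which is the property the abstract credits for the architecture's efficiency on musical style.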
