Search Results for author: Kat Agres

Found 8 papers, 3 papers with code

Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses

1 code implementation • 19 Feb 2022 • Phoebe Chua, Dimos Makris, Dorien Herremans, Gemma Roig, Kat Agres

In this paper, we present MusicVideos (MuVi), a novel dataset for affective multimedia content analysis, built to study how the auditory and visual modalities contribute to the perceived emotion of media.

Tasks: Descriptive, Feature Importance, +2

A dataset and classification model for Malay, Hindi, Tamil and Chinese music

no code implementations • 9 Sep 2020 • Fajilatun Nahar, Kat Agres, Balamurali BT, Dorien Herremans

We use this new dataset to train classification models that distinguish the origin of the music among these groups.

Tasks: Classification, General Classification
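The classification idea above can be sketched in a few lines. This is a minimal nearest-centroid toy, not the paper's model: the feature vectors, their dimensionality, and the noise level are all made-up stand-ins for real per-track audio features (e.g. MFCC means).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-track feature vectors; the real dataset's features
# and labels are assumptions here, used only for illustration.
classes = ["Malay", "Hindi", "Tamil", "Chinese"]
centroids = rng.normal(size=(4, 8))              # one "true" center per class
X_train = np.vstack([c + 0.1 * rng.normal(size=(20, 8)) for c in centroids])
y_train = np.repeat(np.arange(4), 20)

# Nearest-centroid classifier: average the training features per class.
means = np.array([X_train[y_train == k].mean(axis=0) for k in range(4)])

def predict(x):
    """Assign a feature vector to the class with the closest centroid."""
    return classes[int(np.argmin(np.linalg.norm(means - x, axis=1)))]

print(predict(centroids[2] + 0.05 * rng.normal(size=8)))
```

Any stronger classifier (SVM, neural network) slots into the same train-on-features, predict-a-group pipeline.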

The impact of Audio input representations on neural network based music transcription

1 code implementation • 25 Jan 2020 • Kin Wai Cheuk, Kat Agres, Dorien Herremans

This paper thoroughly analyses the effect of different input representations on polyphonic multi-instrument music transcription.

Tasks: Sound, Audio and Speech Processing
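One such input representation is a magnitude spectrogram. A minimal numpy STFT sketch follows; the test signal, frame size, and hop length are illustrative assumptions, not the settings compared in the paper.

```python
import numpy as np

# Synthetic 1-second 440 Hz tone standing in for real audio.
sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: windowed frames -> magnitude spectra.
n_fft, hop = 1024, 256
window = np.hanning(n_fft)
frames = [signal[i:i + n_fft] * window
          for i in range(0, len(signal) - n_fft, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))   # shape: (frames, n_fft//2 + 1)

# The 440 Hz partial should dominate the bin nearest 440 * n_fft / sr.
peak_bin = int(np.argmax(spec.mean(axis=0)))
print(peak_bin, round(440 * n_fft / sr))
```

Alternative representations (mel spectrograms, constant-Q transforms, raw waveforms) differ only in what replaces the `spec` array fed to the transcription network.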

Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders

no code implementations • 19 Jun 2019 • Yin-Jyun Luo, Kat Agres, Dorien Herremans

Specifically, we use two separate encoders to learn distinct latent spaces for timbre and pitch, which form Gaussian mixture components representing instrument identity and pitch, respectively.

From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec

no code implementations • 29 Nov 2018 • Ching-Hua Chuan, Kat Agres, Dorien Herremans

In this newly learned vector space, a metric based on cosine distance is able to distinguish between functional chord relationships, as well as harmonic associations in the music.

Tasks: Music Generation
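The cosine-distance metric on chord vectors can be sketched directly. The embeddings below are hypothetical placeholders, not vectors from the paper; real ones would come from training word2vec on chord sequences extracted from scores.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical learned chord embeddings (assumed for illustration).
emb = {
    "C":  np.array([0.9, 0.1, 0.2]),
    "G7": np.array([0.8, 0.2, 0.3]),   # dominant of C: expect high similarity
    "F#": np.array([-0.5, 0.9, 0.1]),  # harmonically distant: expect low
}

print(cosine_sim(emb["C"], emb["G7"]))  # functionally related chords
print(cosine_sim(emb["C"], emb["F#"]))  # distant key area
```

In a trained space, ranking chords by this similarity recovers functional relationships such as tonic-dominant pairs.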

From Bach to the Beatles: The simulation of human tonal expectation using ecologically-trained predictive models

no code implementations • 19 Jul 2017 • Carlos Cancino-Chacón, Maarten Grachten, Kat Agres

Tonal structure is in part conveyed by statistical regularities between musical events, and research has shown that computational models reflect tonal structure in music by capturing these regularities in schematic constructs like pitch histograms.
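The "schematic constructs like pitch histograms" mentioned above can be made concrete with a short sketch: a pitch-class histogram of a toy melody, correlated against the Krumhansl–Kessler major-key profile from the music-psychology literature. The melody is made up, and this key-finding step is an illustration of the construct, not the paper's predictive models.

```python
import numpy as np

# Krumhansl–Kessler major-key probe-tone profile (tonic at index 0).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

# Toy melody as MIDI pitches (a C-major line, assumed for illustration).
melody = [60, 62, 64, 65, 67, 69, 71, 72, 67, 64, 60]

# The schematic construct: a 12-bin pitch-class histogram.
hist = np.bincount(np.array(melody) % 12, minlength=12).astype(float)

# Correlate the histogram with the profile rotated to each of the 12 keys.
scores = [np.corrcoef(hist, np.roll(MAJOR, k))[0, 1] for k in range(12)]
keys = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
print(keys[int(np.argmax(scores))])
```

Models trained on ecologically valid corpora go beyond such static histograms by capturing sequential regularities as well.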
