Search Results for author: Nicholas Cummins

Found 16 papers, 2 papers with code

Detecting the Severity of Major Depressive Disorder from Speech: A Novel HARD-Training Methodology

no code implementations • 2 Jun 2022 • Edward L. Campbell, Judith Dineley, Pauline Conde, Faith Matcham, Femke Lamers, Sara Siddi, Laura Docio-Fernandez, Carmen Garcia-Mateo, Nicholas Cummins, the RADAR-CNS Consortium

In this regard, speech samples were collected as part of the Remote Assessment of Disease and Relapse in Major Depressive Disorder (RADAR-MDD) research programme.

Speech and the n-Back task as a lens into depression. How combining both may allow us to isolate different core symptoms of depression

no code implementations • 30 Mar 2022 • Salvatore Fara, Stefano Goria, Emilia Molimpakis, Nicholas Cummins

Finally, we present a set of experiments that highlight the association between different speech and n-Back markers at the PHQ-8 item level.

Automatic Detection of Expressed Emotion from Five-Minute Speech Samples: Challenges and Opportunities

no code implementations • 30 Mar 2022 • Bahman Mirheidari, André Bittar, Nicholas Cummins, Johnny Downs, Helen L. Fisher, Heidi Christensen

We present a novel feasibility study on the automatic recognition of Expressed Emotion (EE), a family environment concept based on caregivers speaking freely about their relative/family member.

The Ambiguous World of Emotion Representation

no code implementations • 1 Sep 2019 • Vidhyasaharan Sethu, Emily Mower Provost, Julien Epps, Carlos Busso, Nicholas Cummins, Shrikanth Narayanan

A key reason for this is the lack of a common mathematical framework to describe all the relevant elements of emotion representations.

Face Recognition · Speaker Verification · +2

AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition

no code implementations • 10 Jul 2019 • Fabien Ringeval, Björn Schuller, Michel Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Messner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic

The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) "State-of-Mind, Detecting Depression with AI, and Cross-cultural Affect Recognition" is the ninth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions.

Emotion Recognition

Voice command generation using Progressive WaveGANs

no code implementations • 13 Mar 2019 • Thomas Wiest, Nicholas Cummins, Alice Baird, Simone Hantke, Judith Dineley, Björn Schuller

Generative Adversarial Networks (GANs) have become exceedingly popular in a wide range of data-driven research fields, due in part to their success in image generation.

Audio Generation · Image Generation

Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives

no code implementations • 21 Sep 2018 • Jing Han, Zixing Zhang, Nicholas Cummins, Björn Schuller

Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains.

Sentiment Analysis

Calibrated Prediction Intervals for Neural Network Regressors

1 code implementation • 26 Mar 2018 • Gil Keren, Nicholas Cummins, Björn Schuller

Despite their aforementioned advantage in accuracy, contemporary neural networks are generally poorly calibrated and, as such, do not produce reliable output probability estimates.

Prediction Intervals
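The paper's exact calibration procedure is not reproduced here; as a minimal sketch of the general idea behind calibrated prediction intervals, a single scaling factor for the predicted uncertainties can be fit on held-out data so that the empirical coverage of the intervals matches the nominal level. All data and variable names below are synthetic and hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical model outputs on a held-out calibration set: predicted
# means, predicted standard deviations, and the true targets.
rng = np.random.default_rng(0)
y_true = rng.normal(0.0, 1.0, size=1000)
mu = y_true + rng.normal(0.0, 0.3, size=1000)   # slightly noisy predictions
sigma = np.full(1000, 0.1)                      # overconfident: sigmas too small

def coverage(mu, sigma, y, z):
    """Fraction of targets falling inside the interval mu +/- z * sigma."""
    return np.mean(np.abs(y - mu) <= z * sigma)

# Calibrate: rescale the sigmas by a factor alpha so that the nominal
# 90% interval actually covers ~90% of the held-out targets.
z90 = 1.645  # standard-normal quantile for a two-sided 90% interval
alpha = np.quantile(np.abs(y_true - mu) / sigma, 0.90) / z90

print(round(coverage(mu, alpha * sigma, y_true, z90), 2))  # → 0.9
```

Before rescaling, the overconfident intervals cover far fewer than 90% of the targets; after fitting `alpha` on the calibration set, coverage matches the nominal level by construction.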

auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks

1 code implementation12 Dec 2017 Michael Freitag, Shahin Amiriparian, Sergey Pugachevskiy, NIcholas Cummins, Björn Schuller

auDeep is a Python toolkit for deep unsupervised representation learning from acoustic data.

Sound · Audio and Speech Processing
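auDeep itself learns representations with recurrent sequence-to-sequence autoencoders trained on spectrograms; its API is not reproduced here. As a hedged toy sketch of the core idea — a recurrent network mapping variable-length audio features to a fixed-length representation — the following uses random, untrained weights and hypothetical names:

```python
import numpy as np

def rnn_encode(spectrogram, W_in, W_rec, b):
    """Encode a (time, n_mels) spectrogram into a fixed-length vector by
    running a simple tanh recurrence and keeping the final hidden state."""
    h = np.zeros(W_rec.shape[0])
    for frame in spectrogram:           # one mel-spectrum per time step
        h = np.tanh(W_in @ frame + W_rec @ h + b)
    return h                            # fixed-length representation

# Toy, untrained parameters (in auDeep these would be learned by
# training the encoder-decoder to reconstruct the input spectrogram).
rng = np.random.default_rng(42)
n_mels, hidden = 64, 32
W_in = rng.normal(0, 0.1, (hidden, n_mels))
W_rec = rng.normal(0, 0.1, (hidden, hidden))
b = np.zeros(hidden)

# Clips of different lengths map to representations of the same size,
# which can then feed a downstream classifier.
short_clip = rng.normal(size=(50, n_mels))
long_clip = rng.normal(size=(200, n_mels))
print(rnn_encode(short_clip, W_in, W_rec, b).shape)  # → (32,)
print(rnn_encode(long_clip, W_in, W_rec, b).shape)   # → (32,)
```

The key property illustrated is that the recurrence absorbs an arbitrary number of time steps into one hidden state, which is what makes the learned representation usable as a fixed-size feature vector for acoustic data.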
