Search Results for author: Michael Neumann

Found 8 papers, 1 paper with code

“It seemed like an annoying woman”: On the Perception and Ethical Considerations of Affective Language in Text-Based Conversational Agents

no code implementations • CoNLL (EMNLP) 2021 • Lindsey Vanderlyn, Gianna Weber, Michael Neumann, Dirk Väth, Sarina Meyer, Ngoc Thang Vu

Based on statistical and qualitative analysis of the responses, we found that language style played an important role in how human-like, and how likable, participants perceived a dialog agent to be.

Chatbot

Investigations on Audiovisual Emotion Recognition in Noisy Conditions

no code implementations • 2 Mar 2021 • Michael Neumann, Ngoc Thang Vu

In this paper, we explore audiovisual emotion recognition under noisy acoustic conditions, with a focus on speech features.

Speech Emotion Recognition

URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents

no code implementations • 8 Dec 2020 • Michael Neumann, Sebastian Koralewski, Michael Beetz

We demonstrate the capabilities of URoboSim in the form of mental simulations, generating data for machine learning, and serving as a belief state for a real robot.

BIG-bench Machine Learning

On the Utility of Audiovisual Dialog Technologies and Signal Analytics for Real-time Remote Monitoring of Depression Biomarkers

no code implementations • WS 2020 • Michael Neumann, Oliver Roessler, David Suendermann-Oeft, Vikram Ramanarayanan

We investigate the utility of audiovisual dialog systems combined with speech and video analytics for real-time remote monitoring of depression at scale in uncontrolled environments.

ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents

1 code implementation • ACL 2020 • Chia-Yu Li, Daniel Ortega, Dirk Väth, Florian Lux, Lindsey Vanderlyn, Maximilian Schmidt, Michael Neumann, Moritz Völkel, Pavel Denisov, Sabrina Jenne, Zorica Kacarevic, Ngoc Thang Vu

We present ADVISER, an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text, and vision), socially-engaged (e.g., emotion recognition, engagement level prediction, and backchanneling) conversational agents.

BIG-bench Machine Learning • Emotion Recognition

Cross-lingual and Multilingual Speech Emotion Recognition on English and French

no code implementations • 1 Mar 2018 • Michael Neumann, Ngoc Thang Vu

Research on multilingual speech emotion recognition faces the problem that most available speech corpora differ in important ways, such as annotation methods or interaction scenarios.

Speech Emotion Recognition
