Search Results for author: Daniel Ortega

Found 7 papers, 1 paper with code

Oh, Jeez! or Uh-huh? A Listener-aware Backchannel Predictor on ASR Transcriptions

no code implementations · 10 Apr 2023 · Daniel Ortega, Chia-Yu Li, Ngoc Thang Vu

This paper presents our latest investigation on modeling backchannel in conversations.

Modeling Speaker-Listener Interaction for Backchannel Prediction

no code implementations · 10 Apr 2023 · Daniel Ortega, Sarina Meyer, Antje Schweitzer, Ngoc Thang Vu

We present our latest findings on backchannel modeling, newly motivated by the canonical use of the minimal responses Yeah and Uh-huh in English and their corresponding tokens in German, and by the effect of encoding the speaker-listener interaction.

ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents

1 code implementation · ACL 2020 · Chia-Yu Li, Daniel Ortega, Dirk Väth, Florian Lux, Lindsey Vanderlyn, Maximilian Schmidt, Michael Neumann, Moritz Völkel, Pavel Denisov, Sabrina Jenne, Zorica Kacarevic, Ngoc Thang Vu

We present ADVISER - an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text, and vision) and socially-engaged (e.g. emotion recognition, engagement level prediction, and backchanneling) conversational agents.

Tasks: BIG-bench Machine Learning · Emotion Recognition
