Search Results for author: Chia-Yu Li

Found 8 papers, 1 paper with code

Oh, Jeez! or Uh-huh? A Listener-aware Backchannel Predictor on ASR Transcriptions

no code implementations • 10 Apr 2023 • Daniel Ortega, Chia-Yu Li, Ngoc Thang Vu

This paper presents our latest investigation on modeling backchannel in conversations.

Improving Semi-supervised End-to-end Automatic Speech Recognition using CycleGAN and Inter-domain Losses

no code implementations • 20 Oct 2022 • Chia-Yu Li, Ngoc Thang Vu

In this paper, we exploit the advantages from both inter-domain loss and CycleGAN to achieve better shared representation of unpaired speech and text inputs and thus improve the speech-to-text mapping.

Automatic Speech Recognition (ASR) +1

Integrating Knowledge in End-to-End Automatic Speech Recognition for Mandarin-English Code-Switching

no code implementations • 19 Dec 2021 • Chia-Yu Li, Ngoc Thang Vu

Code-Switching (CS) is a common linguistic phenomenon in multilingual communities that consists of switching between languages while speaking.

Automatic Speech Recognition (ASR) +3

ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents

1 code implementation • ACL 2020 • Chia-Yu Li, Daniel Ortega, Dirk Väth, Florian Lux, Lindsey Vanderlyn, Maximilian Schmidt, Michael Neumann, Moritz Völkel, Pavel Denisov, Sabrina Jenne, Zorica Kacarevic, Ngoc Thang Vu

We present ADVISER - an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), socially-engaged (e.g. emotion recognition, engagement level prediction and backchanneling) conversational agents.

BIG-bench Machine Learning • Emotion Recognition
