Search Results for author: Yanchao Yu

Found 10 papers, 1 paper with code

A Comprehensive Evaluation of Incremental Speech Recognition and Diarization for Conversational AI

2 code implementations • COLING 2020 • Angus Addlesee, Yanchao Yu, Arash Eshghi

Automatic Speech Recognition (ASR) systems are increasingly powerful and accurate, but also more numerous, with several currently available as a service (e.g. Google, IBM, and Microsoft).

Automatic Speech Recognition • Speaker Diarization • +1

Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings

no code implementations • WS 2017 • Yanchao Yu, Arash Eshghi, Oliver Lemon

We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data.

The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings

no code implementations • WS 2017 • Yanchao Yu, Arash Eshghi, Gregory Mills, Oliver Joseph Lemon

We motivate and describe a new freely available human-human dialogue dataset for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner.

Training an adaptive dialogue policy for interactive learning of visually grounded word meanings

no code implementations • WS 2016 • Yanchao Yu, Arash Eshghi, Oliver Lemon

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor.

Semantic Parsing

VOILA: An Optimised Dialogue System for Interactively Learning Visually-Grounded Word Meanings (Demonstration System)

no code implementations • WS 2017 • Yanchao Yu, Arash Eshghi, Oliver Lemon

We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user.

Active Learning
