Search Results for author: Yanchao Yu

Found 13 papers, 2 papers with code

A Visually-Aware Conversational Robot Receptionist

no code implementations SIGDIAL (ACL) 2022 Nancie Gunson, Daniel Hernandez Garcia, Weronika Sieińska, Angus Addlesee, Christian Dondrup, Oliver Lemon, Jose L. Part, Yanchao Yu

Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities.

Question Answering

A Comprehensive Evaluation of Incremental Speech Recognition and Diarization for Conversational AI

2 code implementations COLING 2020 Angus Addlesee, Yanchao Yu, Arash Eshghi

Automatic Speech Recognition (ASR) systems are increasingly powerful and more accurate, but also more numerous, with several options currently available as a service (e.g. Google, IBM, and Microsoft).

Automatic Speech Recognition (ASR) +3

Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings

no code implementations WS 2017 Yanchao Yu, Arash Eshghi, Oliver Lemon

We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data.

Reinforcement Learning (RL)

The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings

no code implementations WS 2017 Yanchao Yu, Arash Eshghi, Gregory Mills, Oliver Joseph Lemon

We motivate and describe a new freely available human-human dialogue dataset for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner.

Attribute, Reinforcement Learning +1

Training an adaptive dialogue policy for interactive learning of visually grounded word meanings

no code implementations WS 2016 Yanchao Yu, Arash Eshghi, Oliver Lemon

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor.

Semantic Parsing

VOILA: An Optimised Dialogue System for Interactively Learning Visually-Grounded Word Meanings (Demonstration System)

no code implementations WS 2017 Yanchao Yu, Arash Eshghi, Oliver Lemon

We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user.

Active Learning
