1 code implementation • LREC 2022 • Yuru Jiang, Yang Xu, Yuhang Zhan, WeiKai He, Yilin Wang, Zixuan Xi, Meiyun Wang, Xinyu Li, Yu Li, Yanchao Yu
We describe a new freely available Chinese multi-party dialogue dataset for automatic extraction of dialogue-based character relationships.
no code implementations • SIGDIAL (ACL) 2022 • Nancie Gunson, Daniel Hernandez Garcia, Weronika Sieińska, Angus Addlesee, Christian Dondrup, Oliver Lemon, Jose L. Part, Yanchao Yu
Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities.
no code implementations • 11 Apr 2024 • Xavier Alameda-Pineda, Angus Addlesee, Daniel Hernández García, Chris Reinke, Soraya Arias, Federica Arrigoni, Alex Auternaud, Lauriane Blavette, Cigdem Beyan, Luis Gomez Camara, Ohad Cohen, Alessandro Conti, Sébastien Dacunha, Christian Dondrup, Yoav Ellinson, Francesco Ferro, Sharon Gannot, Florian Gras, Nancie Gunson, Radu Horaud, Moreno D'Incà, Imad Kimouche, Séverin Lemaignan, Oliver Lemon, Cyril Liotard, Luca Marchionni, Mordehay Moradi, Tomas Pajdla, Maribel Pino, Michal Polic, Matthieu Py, Ariel Rado, Bin Ren, Elisa Ricci, Anne-Sophie Rigaud, Paolo Rota, Marta Romeo, Nicu Sebe, Weronika Sieińska, Pinchas Tandeitnik, Francesco Tonini, Nicolas Turro, Timothée Wintz, Yanchao Yu
Despite the many recent achievements in developing and deploying social robotics, there are still many underexplored environments and applications for which systematic evaluation of such systems by end-users is necessary.
2 code implementations • COLING 2020 • Angus Addlesee, Yanchao Yu, Arash Eshghi
Automatic Speech Recognition (ASR) systems are increasingly powerful and accurate, and several are now available as a service (e.g. Google, IBM, and Microsoft).
no code implementations • 20 Dec 2017 • Ioannis Papaioannou, Amanda Cercas Curry, Jose L. Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ondřej Dušek, Verena Rieser, Oliver Lemon
Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence.
no code implementations • WS 2017 • Yanchao Yu, Arash Eshghi, Oliver Lemon
We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data.
no code implementations • WS 2017 • Yanchao Yu, Arash Eshghi, Gregory Mills, Oliver Joseph Lemon
We motivate and describe a new freely available human-human dialogue dataset for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner.
no code implementations • WS 2016 • Yanchao Yu, Arash Eshghi, Oliver Lemon
We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor.
no code implementations • WS 2017 • Yanchao Yu, Arash Eshghi, Oliver Lemon
We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user.
no code implementations • WS 2014 • Helen Hastie, Marie-Aude Aufaure, Panos Alexopoulos, Hugues Bouchard, Catherine Breslin, Heriberto Cuayáhuitl, Nina Dethlefs, Milica Gašić, James Henderson, Oliver Lemon, Xingkun Liu, Peter Mika, Nesrine Ben Mustapha, Tim Potter, Verena Rieser, Blaise Thomson, Pirros Tsiakoulis, Yves Vanrompay, Boris Villazon-Terrazas, Majid Yazdani, Steve Young, Yanchao Yu