no code implementations • JEPTALNRECITAL 2012 • Fabrice Lefèvre, Djamel Mostefa, Laurent Besacier, Yannick Estève, Matthieu Quignard, Nathalie Camelin, Benoit Favre, Bassam Jabaian, Lina Rojas-Barahona
2 code implementations • NAACL 2016 • Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young
In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity.
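A minimal sketch of the counter-fitting idea described above (not the authors' released implementation; all function and parameter names here are assumptions): iteratively nudge word vectors so synonym pairs move closer and antonym pairs move apart, while a regularisation term keeps each vector near its pretrained position.

```python
import numpy as np

def counter_fit(vectors, synonyms, antonyms, steps=20, lr=0.1,
                syn_margin=0.0, ant_margin=1.0, reg=0.1):
    """Illustrative counter-fitting loop.

    vectors: dict word -> np.ndarray (modified in place and returned)
    synonyms / antonyms: iterables of (word, word) pairs
    """
    original = {w: v.copy() for w, v in vectors.items()}
    for _ in range(steps):
        # Pull synonym pairs together until they are within syn_margin.
        for a, b in synonyms:
            diff = vectors[a] - vectors[b]
            if np.linalg.norm(diff) > syn_margin:
                vectors[a] -= lr * diff
                vectors[b] += lr * diff
        # Push antonym pairs apart until they are at least ant_margin apart.
        for a, b in antonyms:
            diff = vectors[a] - vectors[b]
            dist = np.linalg.norm(diff)
            if 0 < dist < ant_margin:
                vectors[a] += lr * diff / dist
                vectors[b] -= lr * diff / dist
        # Stay close to the pretrained vectors (vector-space preservation).
        for w in vectors:
            vectors[w] -= lr * reg * (vectors[w] - original[w])
    return vectors
```

The actual method formulates these three pressures (antonym repulsion, synonym attraction, vector-space preservation) as a joint objective; the loop above only conveys the intuition.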
no code implementations • ACL 2016 • Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, Steve Young
The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning.
no code implementations • 8 Jun 2016 • Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, Steve Young
We describe a two-step approach for dialogue management in task-oriented spoken dialogue systems.
no code implementations • WS 2017 • Paweł Budzianowski, Stefan Ultes, Pei-Hao Su, Nikola Mrkšić, Tsung-Hsien Wen, Iñigo Casanueva, Lina Rojas-Barahona, Milica Gašić
In doing so, we show that our approach has the potential to facilitate policy optimisation for more sophisticated multi-domain dialogue systems.
no code implementations • WS 2017 • Stefan Ultes, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, Lina Rojas-Barahona, Pei-Hao Su, Tsung-Hsien Wen, Milica Gašić, Steve Young
Reinforcement learning is widely used for dialogue policy optimisation, where the reward function often consists of more than one component, e.g., the dialogue success and the dialogue length.
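The multi-component reward mentioned above is often handled, as a baseline, by scalarising the objectives into a single reward via a weighted sum (the paper itself explores multi-objective alternatives to this). A minimal sketch, with all names and weight values assumed for illustration:

```python
def dialogue_reward(success: bool, num_turns: int,
                    success_weight: float = 20.0,
                    turn_penalty: float = 1.0) -> float:
    """Weighted-sum scalarisation of two reward components:
    a bonus for dialogue success and a per-turn length penalty."""
    return (success_weight if success else 0.0) - turn_penalty * num_turns
```

With these weights, a successful 5-turn dialogue scores 15.0 while a failed one scores -5.0; the weakness motivating multi-objective methods is that the trade-off is fixed in advance by the chosen weights.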
no code implementations • 29 Nov 2017 • Iñigo Casanueva, Paweł Budzianowski, Pei-Hao Su, Nikola Mrkšić, Tsung-Hsien Wen, Stefan Ultes, Lina Rojas-Barahona, Steve Young, Milica Gašić
Dialogue assistants are rapidly becoming an indispensable daily aid.
no code implementations • NAACL 2018 • Iñigo Casanueva, Paweł Budzianowski, Pei-Hao Su, Stefan Ultes, Lina Rojas-Barahona, Bo-Hsiang Tseng, Milica Gašić
Reinforcement learning (RL) is a promising approach to solve dialogue policy optimisation.
1 code implementation • WS 2018 • Lina Rojas-Barahona, Bo-Hsiang Tseng, Yinpei Dai, Clare Mansfield, Osman Ramadan, Stefan Ultes, Michael Crawford, Milica Gasic
In recent years, we have seen deep learning and distributed representations of words and sentences make an impact on a number of natural language processing tasks, such as similarity, entailment and sentiment analysis.
no code implementations • WS 2018 • Stefan Ultes, Paweł Budzianowski, Iñigo Casanueva, Lina Rojas-Barahona, Bo-Hsiang Tseng, Yen-chen Wu, Steve Young, Milica Gašić
Statistical spoken dialogue systems usually rely on a single- or multi-domain dialogue model that is restricted in its capabilities of modelling complex dialogue structures, e.g., relations.
no code implementations • ACL (WebNLG, INLG) 2020 • Sebastien Montella, Betty Fabre, Tanguy Urvoy, Johannes Heinecke, Lina Rojas-Barahona
The task of verbalizing RDF triples has grown in popularity due to the rising ubiquity of Knowledge Bases (KBs).
no code implementations • Findings (ACL) 2021 • Sebastien Montella, Lina Rojas-Barahona, Johannes Heinecke
We further propose Hercules, a time-aware extension of AttH model, which defines the curvature of a Riemannian manifold as the product of both relation and time.
no code implementations • NAACL (SUKI) 2022 • Sebastien Montella, Lina Rojas-Barahona, Frederic Bechet, Johannes Heinecke, Alexis Nasr
In general, QA systems query a Knowledge Base (KB) to detect and extract the raw answers as the final prediction.