In general, QA systems query a Knowledge Base (KB) to detect and extract raw answers, which serve as the final prediction.
We further propose Hercules, a time-aware extension of the AttH model, which defines the curvature of a Riemannian manifold as the product of relation- and time-specific curvatures.
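As a rough sketch of this idea (the notation is ours, not necessarily the paper's): assuming a learned relation-specific curvature c_r and a learned time-specific curvature c_\tau, the curvature governing a fact about relation r at time \tau becomes their product,

    c_{r,\tau} = c_r \cdot c_\tau

so a single scalar controls how strongly the manifold curves for each (relation, time) pair.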
The task of verbalization of RDF triples has grown in popularity due to the rising ubiquity of Knowledge Bases (KBs).
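To make the task concrete, the following minimal Python sketch verbalizes a single RDF triple with a hand-written template; the triple, predicate name, and template are invented for illustration, and actual verbalization systems learn this mapping rather than relying on templates.

    # Minimal sketch: template-based verbalization of one RDF triple.
    # The predicate name and template are illustrative only.
    def verbalize(subj: str, pred: str, obj: str) -> str:
        """Render a (subject, predicate, object) triple as a sentence."""
        templates = {"capitalOf": "{subj} is the capital of {obj}."}
        return templates[pred].format(subj=subj, obj=obj)

    print(verbalize("Berlin", "capitalOf", "Germany"))
    # -> Berlin is the capital of Germany.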
Statistical spoken dialogue systems usually rely on a single- or multi-domain dialogue model that is restricted in its capability to model complex dialogue structures, e.g., relations.
In recent years, we have seen deep learning and distributed representations of words and sentences make an impact on a number of natural language processing tasks, such as similarity, entailment and sentiment analysis.
Reinforcement learning (RL) is a promising approach to dialogue policy optimisation.
Dialogue assistants are rapidly becoming an indispensable daily aid.
Reinforcement learning is widely used for dialogue policy optimisation, where the reward function often consists of more than one component, e.g., dialogue success and dialogue length.
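A minimal sketch of such a multi-component reward, assuming the common success-bonus-minus-turn-penalty scheme (the constants are illustrative and not taken from any particular system):

    # Sketch of a two-component dialogue reward: success and length.
    # The 20-point bonus and 1-point turn penalty are illustrative values.
    def dialogue_reward(num_turns: int, success: bool,
                        success_bonus: float = 20.0,
                        turn_penalty: float = 1.0) -> float:
        """Combine dialogue success and dialogue length into one scalar."""
        return (success_bonus if success else 0.0) - turn_penalty * num_turns

How the components are weighted against each other is itself a design choice, which is one reason reward design matters for dialogue policy optimisation.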
In doing so, we show that our approach has the potential to facilitate policy optimisation for more sophisticated multi-domain dialogue systems.
We describe a two-step approach for dialogue management in task-oriented spoken dialogue systems.
The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning.
In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity.
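As a sketch of what injecting such constraints can look like, the simplified objective below uses hinge penalties over cosine distance to pull synonym pairs together and push antonym pairs apart; the margins are illustrative, and the paper's additional vector-space-preservation term is omitted, so this is a sketch rather than a faithful reimplementation.

    import numpy as np

    def cosine_distance(u: np.ndarray, w: np.ndarray) -> float:
        """Cosine distance between two word vectors."""
        return 1.0 - (u @ w) / (np.linalg.norm(u) * np.linalg.norm(w))

    def constraint_loss(vectors, synonyms, antonyms, delta=1.0, gamma=0.0):
        """Hinge penalties over word pairs in `vectors` (a word -> vector dict):
        push antonym pairs at least `delta` apart, pull synonym pairs to
        within `gamma` of each other."""
        def hinge(x):
            return max(0.0, x)
        # Antonym pairs should end up at least `delta` apart.
        repel = sum(hinge(delta - cosine_distance(vectors[u], vectors[w]))
                    for u, w in antonyms)
        # Synonym pairs should end up no more than `gamma` apart.
        attract = sum(hinge(cosine_distance(vectors[u], vectors[w]) - gamma)
                      for u, w in synonyms)
        return repel + attract

Minimising this loss over the vectors, e.g., by stochastic gradient descent, counter-fits them to the linguistic constraints.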