Conversational Information Retrieval (CIR) is an emerging field of Information Retrieval (IR) at the intersection of interactive IR and dialogue systems for open-domain information needs.
In information retrieval (IR) systems, trends and users' interests may change over time, altering either the distribution of requests or the content to be recommended.
In this work, our aim is to provide a structured answer in natural language to a complex information need.
QuestEval is a reference-less metric for text-to-text tasks that compares generated summaries directly to the source text by automatically asking and answering questions.
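The QA-based idea behind such a metric can be sketched in a few lines: ask questions, answer them against both the source and the summary, and compare the answers with token-level F1. This is a toy illustration with hypothetical helpers, not the actual QuestEval implementation or API:

```python
from collections import Counter

def token_f1(pred, gold):
    """Token-level F1 between a predicted and a gold answer."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def qa_consistency(questions, answer_fn, source, summary):
    """Average F1 between answers extracted from the source and from
    the summary; answer_fn(text, question) is any QA model/stub."""
    scores = []
    for q in questions:
        a_src = answer_fn(source, q)
        a_sum = answer_fn(summary, q)
        scores.append(token_f1(a_sum, a_src))
    return sum(scores) / len(scores) if scores else 0.0
```

A summary whose answers disagree with the source's answers scores low, which is what makes the metric usable without reference summaries.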
Specifically, we propose a Multi-Branch Decoder which is able to leverage word-level labels to learn the relevant parts of each training instance.
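One simple way to exploit word-level labels, loosely in the spirit of this setup (a hedged sketch, not the paper's actual multi-branch architecture), is to scale each target token's loss by its relevance label so the model focuses on the relevant parts of each instance:

```python
import numpy as np

def weighted_token_nll(log_probs, targets, relevance):
    """Negative log-likelihood where each target token's contribution
    is scaled by a word-level relevance label in [0, 1].

    log_probs: (seq_len, vocab) array of log-probabilities
    targets:   (seq_len,) int array of gold token ids
    relevance: (seq_len,) float array of word-level labels
    """
    per_token_nll = -log_probs[np.arange(len(targets)), targets]
    return float((relevance * per_token_nll).sum() / (relevance.sum() + 1e-9))
```

Tokens labeled irrelevant (relevance 0) contribute nothing to the loss, so the decoder is only penalized on the parts of the training instance marked as relevant.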
Ranked #3 on Table-to-Text Generation on WikiBio
Evaluations on the widely used WikiBio and WebNLG benchmarks demonstrate the effectiveness of this framework compared to state-of-the-art models.
To overcome this limitation, we propose to transfer visual information to textual representations by learning an intermediate representation space: the grounded space.
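The core idea of an intermediate grounded space can be illustrated with a linear map fitted between paired textual and visual embeddings. This least-squares stand-in is a deliberately simplified sketch of "learning a representation space", not the method's actual training objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: 50 (textual, visual) embedding pairs.
text_emb = rng.normal(size=(50, 8))      # textual representations
true_map = rng.normal(size=(8, 4))
visual_emb = text_emb @ true_map         # matching visual representations

# Fit a linear grounding map W minimizing ||text_emb @ W - visual_emb||^2,
# i.e., a closed-form stand-in for the learned intermediate space.
W, *_ = np.linalg.lstsq(text_emb, visual_emb, rcond=None)

# Project a new textual embedding into the grounded (visual) space.
new_text = rng.normal(size=(1, 8))
grounded = new_text @ W
```

In practice the mapping would be non-linear and trained jointly with the downstream task, but the projection step is the same: textual inputs are re-represented in a space shared with visual information.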
This, however, loses most of the structure contained in the data.
Zero-Shot Learning (ZSL) aims at classifying unlabeled objects by leveraging auxiliary knowledge, such as semantic representations.
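A minimal version of attribute-based ZSL: project an image feature into the semantic (attribute) space and assign the nearest unseen-class prototype. All names and data here are illustrative, not a specific published model:

```python
import numpy as np

def zero_shot_predict(image_feat, class_attrs, proj):
    """Classify an image among classes never seen at training time.

    image_feat:  (d,) visual feature vector
    class_attrs: (n_classes, k) semantic attribute vectors per class
    proj:        (d, k) learned visual-to-semantic projection
    Returns the index of the most similar class.
    """
    semantic = image_feat @ proj                       # map into attribute space
    sims = class_attrs @ semantic                      # dot products
    sims = sims / (np.linalg.norm(class_attrs, axis=1)
                   * np.linalg.norm(semantic) + 1e-9)  # cosine similarity
    return int(np.argmax(sims))
```

The auxiliary knowledge (attribute or other semantic vectors) is what lets the classifier score classes with no labeled training images.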
Search-oriented conversational systems rely on information needs expressed in natural language (NL).
Recent advances in the machine learning community have allowed new use cases to emerge, such as its application to domains like cooking, which has given rise to computational cuisine.
Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them.
Ranked #7 on Cross-Modal Retrieval on Recipe1M
Representing the semantics of words is a long-standing problem for the natural language processing community.
The state-of-the-art solutions to the vocabulary mismatch problem in information retrieval (IR) mainly aim at leveraging either the relational semantics provided by external resources or the distributional semantics recently investigated by deep neural approaches.
With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations.
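A crude way to see how relational and distributional semantics complement each other for matching: expand query terms with KB-related terms, then score documents with embedding cosine similarity. The embeddings and the tiny KB below are illustrative stand-ins, not a real resource:

```python
import numpy as np

# Toy word embeddings (distributional semantics).
emb = {
    "car":     np.array([1.0, 0.1]),
    "vehicle": np.array([0.9, 0.2]),
    "engine":  np.array([0.8, 0.5]),
}
# Tiny KB (relational semantics): term -> related terms.
kb_related = {"car": ["vehicle"]}

def expand(query_terms):
    """Add KB-related terms to the query to bridge vocabulary mismatch."""
    expanded = list(query_terms)
    for t in query_terms:
        expanded += kb_related.get(t, [])
    return expanded

def match_score(query_terms, doc_terms):
    """Cosine similarity between mean expanded-query and document vectors."""
    q = np.mean([emb[t] for t in expand(query_terms)], axis=0)
    d = np.mean([emb[t] for t in doc_terms], axis=0)
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
```

A document mentioning only "vehicle" still matches the query "car" because the KB relation pulls the query representation toward it; embedding the KB inside the matching network, as argued above, learns this interaction instead of hard-coding it.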