342 papers with code • 2 benchmarks • 56 datasets
Information retrieval is the task of ranking a collection of documents or search results in response to a query.
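The task definition above can be illustrated with a toy ranker (a minimal sketch, not any particular system): score each document by how many query terms it contains, then sort by descending score.

```python
def rank(query, documents):
    """Rank documents by bag-of-words overlap with the query (toy example)."""
    q_terms = set(query.lower().split())
    # Score = number of distinct query terms present in the document.
    scored = [
        (sum(term in doc.lower().split() for term in q_terms), doc)
        for doc in documents
    ]
    # Sort by descending score; Python's sort is stable, so ties keep input order.
    return [doc for score, doc in sorted(scored, key=lambda pair: -pair[0])]
```

Real systems replace this overlap score with models such as TF-IDF, BM25, or learned neural scorers, but the ranking structure of the task is the same.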
Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems.
Cross-lingual text representations have gained popularity recently and act as the backbone of many tasks, such as unsupervised machine translation and cross-lingual information retrieval.
We comprehensively review state-of-the-art research outcomes in dialogue systems and analyze them from two angles: model type and system type.
When experiencing an information need, users want to engage with an expert, but often turn to an information retrieval system, such as a search engine, instead.
In this paper, we propose an Unsupervised Document Expansion with Generation (UDEG) framework with a pre-trained language model, which generates diverse supplementary sentences for the original document without using labels on query-document pairs for training.
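Document expansion in general appends model-generated text to a document so that term-matching retrieval can cover vocabulary the original document lacks. A toy sketch of that pipeline follows; the `generate_sentences` stub is a hypothetical stand-in for a pre-trained language model and does not reproduce UDEG's actual generation strategy.

```python
def generate_sentences(document, n=2):
    # Hypothetical stand-in for a pre-trained language model:
    # a real system would sample diverse continuations/paraphrases of the document.
    first_clause = document.split(".")[0]
    return [f"supplementary sentence {i} about: {first_clause}" for i in range(n)]

def expand_document(document):
    # Concatenate generated supplementary sentences onto the original document
    # before indexing, enlarging the vocabulary a lexical retriever can match.
    extra = generate_sentences(document)
    return document + " " + " ".join(extra)
```

Note that no query-document labels are needed anywhere in this loop, which is the unsupervised aspect the abstract highlights.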
To address this shortcoming, we propose SmoothI, a smooth approximation of rank indicators that serves as a basic building block to devise differentiable approximations of IR metrics.
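The general idea behind smoothing rank indicators can be sketched as follows (a minimal illustration of one common relaxation, not the paper's exact SmoothI formulation): replace the hard rank of an item with a sum of sigmoids over pairwise score differences, controlled by a temperature `tau`.

```python
import math

def smooth_rank(scores, j, tau=1.0):
    """Differentiable approximation of the rank of item j.

    rank(j) ~ 1 + sum over i != j of sigmoid((s_i - s_j) / tau);
    as tau -> 0 this approaches the true (hard) rank.
    """
    s_j = scores[j]
    return 1.0 + sum(
        1.0 / (1.0 + math.exp(-(s_i - s_j) / tau))
        for i, s_i in enumerate(scores)
        if i != j
    )
```

Because every term is smooth in the scores, ranking metrics built from such approximate ranks can be optimized with gradient descent, which hard rank indicators do not allow.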
Large-scale pre-trained models like BERT have achieved great success on various Natural Language Processing (NLP) tasks, but adapting them to math-related tasks remains a challenge.