no code implementations • COLING 2022 • Revanth Gangi Reddy, Vikas Yadav, Md Arafat Sultan, Martin Franz, Vittorio Castelli, Heng Ji, Avirup Sil
Research on neural IR has so far been focused primarily on standard supervised learning settings, where it outperforms traditional term matching baselines.
no code implementations • 19 May 2023 • Revanth Gangi Reddy, Pradeep Dasigi, Md Arafat Sultan, Arman Cohan, Avirup Sil, Heng Ji, Hannaneh Hajishirzi
Neural information retrieval often adopts a retrieve-and-rerank framework: a bi-encoder network first retrieves K (e.g., 100) candidates, which are then re-ranked by a more powerful cross-encoder model.
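To make the two-stage pattern concrete, here is a minimal sketch of retrieve-and-rerank; the embeddings and the `cross_scorer` callable are placeholders, not the models from this paper.

```python
import numpy as np

def retrieve_and_rerank(query, query_vec, passages, passage_vecs, cross_scorer, k=100):
    # Stage 1: bi-encoder retrieval -- a cheap dot product against
    # precomputed passage embeddings, keeping the top-k candidates.
    sims = passage_vecs @ query_vec          # (N,)
    candidates = np.argsort(-sims)[:k]

    # Stage 2: cross-encoder re-ranking -- the expensive model scores
    # each (query, passage) pair jointly, but only for k candidates.
    scored = [(cross_scorer(query, passages[i]), i) for i in candidates]
    return [i for _, i in sorted(scored, reverse=True)]
```

The design point is the cost split: the dot product scales to millions of passages, while the cross-encoder only ever sees K of them.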
1 code implementation • 1 Mar 2023 • Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, Christopher Potts
An expensive language model first generates a small number of synthetic queries; after that, a much less expensive one is used to create large numbers of synthetic queries, which are used to fine-tune a family of reranker models.
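A rough sketch of that two-step recipe, with hypothetical helpers (`generate_query`, `finetune`) standing in for the prompting and training machinery; this is a paraphrase of the idea, not the released code.

```python
def synthetic_query_distillation(expensive_lm, cheap_lm, corpus, reranker_factory):
    # Step 1: the expensive model writes a handful of high-quality
    # synthetic queries to serve as in-context examples.
    seed = [expensive_lm.generate_query(p) for p in corpus[:10]]

    # Step 2: the much cheaper model, prompted with those examples,
    # produces synthetic (query, passage) pairs at corpus scale.
    pairs = [(cheap_lm.generate_query(p, examples=seed), p) for p in corpus]

    # Step 3: fine-tune a family of rerankers on the synthetic pairs.
    return [reranker_factory().finetune(pairs) for _ in range(3)]
```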
1 code implementation • 23 Jan 2023 • Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers.
no code implementations • 2 Dec 2022 • Keshav Santhanam, Jon Saad-Falcon, Martin Franz, Omar Khattab, Avirup Sil, Radu Florian, Md Arafat Sultan, Salim Roukos, Matei Zaharia, Christopher Potts
Neural information retrieval (IR) systems have progressed rapidly in recent years, in large part due to the release of publicly available benchmarking tasks.
1 code implementation • 29 Nov 2022 • Ameet Deshpande, Md Arafat Sultan, Anthony Ferritto, Ashwin Kalyan, Karthik Narasimhan, Avirup Sil
Fine-tuning pre-trained language models (PLMs) achieves impressive performance on a range of downstream tasks, and their sizes have consequently been getting bigger.
2 code implementations • 17 Nov 2022 • Yousef El-Kurdi, Jerry Quinn, Avirup Sil
We introduce a novel run-time method for significantly reducing the accuracy loss associated with quantizing BERT-like models to 8-bit integers.
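For context, this is what a baseline 8-bit setup looks like in PyTorch using standard post-training dynamic quantization; the paper's run-time correction, which recovers the accuracy this baseline loses, is not shown here.

```python
import torch
from transformers import AutoModel

# Baseline: quantize the Linear layers of a BERT-like encoder to int8
# with PyTorch's post-training dynamic quantization.
model = AutoModel.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
# `quantized` runs int8 matmuls at inference time; the paper's method
# adds a run-time correction on top of this kind of setup.
```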
no code implementations • 16 Jun 2022 • Scott McCarley, Mihaela Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, Radu Florian
Recent machine reading comprehension datasets include extractive and boolean questions but current approaches do not offer integrated support for answering both question types.
1 code implementation • DeepLo 2022 • Xiang Pan, Alex Sheng, David Shimshoni, Aditya Singhal, Sara Rosenthal, Avirup Sil
Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks.
no code implementations • 15 May 2022 • Md Arafat Sultan, Avirup Sil, Radu Florian
Machine learning models are prone to overfitting their training (source) domains, which is commonly believed to be the reason why they falter in novel target domains.
1 code implementation • 24 Apr 2022 • Revanth Gangi Reddy, Md Arafat Sultan, Martin Franz, Avirup Sil, Heng Ji
On two public IR benchmarks, we empirically show that the proposed method helps improve both the model's attention patterns and retrieval performance, including in zero-shot settings.
no code implementations • 20 Apr 2022 • Revanth Gangi Reddy, Bhavani Iyer, Md Arafat Sultan, Rong Zhang, Avirup Sil, Vittorio Castelli, Radu Florian, Salim Roukos
Neural passage retrieval is a new and promising approach in open-retrieval question answering.
2 code implementations • 20 Dec 2021 • Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander Schwing, Heng Ji
Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object being referred to and then predicting a span from the news body text to answer the question.
1 code implementation • NAACL 2022 • Yulong Li, Martin Franz, Md Arafat Sultan, Bhavani Iyer, Young-suk Lee, Avirup Sil
We present DR. DECR (Dense Retrieval with Distillation-Enhanced Cross-Lingual Representation), a new cross-lingual information retrieval (CLIR) system trained using multi-stage knowledge distillation (KD).
Tasks: Cross-Lingual Information Retrieval, Knowledge Distillation, +3
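One stage of this kind of knowledge distillation can be sketched generically as matching the student's relevance-score distribution to the teacher's; the loss below is a standard KD objective, not necessarily the exact one used in DR. DECR.

```python
import torch.nn.functional as F

def kd_loss(student_scores, teacher_scores, temperature=2.0):
    """KL-divergence distillation over per-query candidate scores.

    Both inputs are (batch, n_candidates) relevance scores; the student
    is trained to match the teacher's temperature-softened distribution.
    """
    t = temperature
    student_logp = F.log_softmax(student_scores / t, dim=-1)
    teacher_p = F.softmax(teacher_scores / t, dim=-1)
    return F.kl_div(student_logp, teacher_p, reduction="batchmean") * (t * t)
```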
no code implementations • 14 Dec 2021 • Sara Rosenthal, Mihaela Bornea, Avirup Sil, Radu Florian, Scott McCarley
Existing datasets that contain boolean questions, such as BoolQ and TyDi QA, provide the user with a YES/NO response to the question.
no code implementations • ACL 2022 • Pengcheng Yin, John Wieting, Avirup Sil, Graham Neubig
Semantic parsers map natural language utterances into meaning representations (e.g., programs).
no code implementations • 21 Jul 2021 • Lin Pan, Chung-Wei Hang, Avirup Sil, Saloni Potdar
We propose a simple and general method to regularize the fine-tuning of Transformer-based encoders for text classification tasks.
no code implementations • ACL 2021 • Haoyang Wen, Anthony Ferritto, Heng Ji, Radu Florian, Avirup Sil
Existing models for Machine Reading Comprehension (MRC) require complex architectures to effectively model long texts with paragraph representation and classification, making inference computationally inefficient for production use.
no code implementations • 15 Apr 2021 • Revanth Gangi Reddy, Vikas Yadav, Md Arafat Sultan, Martin Franz, Vittorio Castelli, Heng Ji, Avirup Sil
Recent work has shown that commonly available machine reading comprehension (MRC) datasets can be used to train high-performance neural information retrieval (IR) systems.
no code implementations • 15 Apr 2021 • Sara Rosenthal, Mihaela Bornea, Avirup Sil
Recent approaches have exploited weaknesses in monolingual question answering (QA) models by adding adversarial statements to the passage.
no code implementations • 20 Jan 2021 • Rishav Chakravarti, Avirup Sil
Performance prediction is particularly important in cases of domain shift (as measured by training RC models on SQuAD 2.0 and evaluating on NQ), where Mr. C not only improves AUC, but also traditional answerability prediction (as measured by a 5-point improvement in F1).
Tasks: Extractive Question-Answering, Machine Reading Comprehension, +1
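The AUC here is simply the area under the ROC curve of the model's confidence treated as an answerability classifier; a minimal illustration with scikit-learn and made-up scores:

```python
from sklearn.metrics import roc_auc_score

# Each question has a gold "answerable" label and a model confidence;
# AUC measures how well confidence separates the two classes.
has_answer = [1, 1, 0, 1, 0, 0]              # toy answerability labels
confidence = [0.9, 0.7, 0.4, 0.6, 0.2, 0.5]  # toy model scores
print(roc_auc_score(has_answer, confidence))  # 1.0 = perfect separation
```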
no code implementations • 10 Dec 2020 • Mihaela Bornea, Lin Pan, Sara Rosenthal, Radu Florian, Avirup Sil
Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LM) to perform zero-shot language-wise learning: train a QA model on English and test on other languages.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Revanth Gangi Reddy, Md Arafat Sultan, Efsun Sarioglu Kayi, Rong Zhang, Vittorio Castelli, Avirup Sil
Answer validation in machine reading comprehension (MRC) consists of verifying an extracted answer against an input context and question pair.
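In its simplest form, such a validator scores a (question, answer, context) triple with a sequence-pair classifier; the checkpoint and input packing below are illustrative assumptions, not the paper's architecture.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Generic answer-validation setup: pack question and candidate answer
# into one segment, the context into the other, and classify the pair.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

question, answer = "Who wrote Hamlet?", "Shakespeare"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tok(f"{question} [SEP] {answer}", context, return_tensors="pt")
with torch.no_grad():
    # Probability of the "valid answer" class (head is untrained here;
    # a real validator would be fine-tuned on labeled triples).
    validity = model(**inputs).logits.softmax(-1)[0, 1].item()
```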
no code implementations • EMNLP 2020 • Rong Zhang, Revanth Gangi Reddy, Md Arafat Sultan, Vittorio Castelli, Anthony Ferritto, Radu Florian, Efsun Sarioglu Kayi, Salim Roukos, Avirup Sil, Todd Ward
Transfer learning techniques are particularly useful in NLP tasks where a sizable amount of high-quality annotated data is difficult to obtain.
2 code implementations • ACL 2020 • Vittorio Castelli, Rishav Chakravarti, Saswati Dana, Anthony Ferritto, Radu Florian, Martin Franz, Dinesh Garg, Dinesh Khandelwal, Scott McCarley, Mike McCawley, Mohamed Nasr, Lin Pan, Cezar Pendus, John Pitrelli, Saurabh Pujar, Salim Roukos, Andrzej Sakrajda, Avirup Sil, Rosario Uceda-Sosa, Todd Ward, Rong Zhang
We introduce TechQA, a domain-adaptation question answering dataset for the technical support domain.
no code implementations • IJCNLP 2019 • Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, Clare Voss
The identification of complex semantic structures such as events and entity relations, already a challenging Information Extraction task, is doubly difficult from sources written in under-resourced and under-annotated languages.
no code implementations • 30 Oct 2019 • Anthony Ferritto, Lin Pan, Rishav Chakravarti, Salim Roukos, Radu Florian, J. William Murdock, Avirup Sil
Many of the top question answering systems today utilize ensembling to improve their performance on tasks such as the Stanford Question Answering Dataset (SQuAD) and Natural Questions (NQ) challenges.
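A common, simple form of such ensembling averages the span logits of the member models before decoding; a minimal sketch, not the specific systems from the paper:

```python
import numpy as np

def ensemble_span(start_logits_per_model, end_logits_per_model):
    # Average start/end logits across ensemble members, then decode
    # the best (start <= end) span from the averaged scores.
    start = np.mean(start_logits_per_model, axis=0)
    end = np.mean(end_logits_per_model, axis=0)
    return max(((s, e) for s in range(len(start)) for e in range(s, len(end))),
               key=lambda se: start[se[0]] + end[se[1]])
```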
no code implementations • 14 Oct 2019 • J. S. McCarley, Rishav Chakravarti, Avirup Sil
The recent trend in industry-setting Natural Language Processing (NLP) research has been to operate large pretrained language models like BERT under strict computational limits.
no code implementations • 11 Sep 2019 • Lin Pan, Rishav Chakravarti, Anthony Ferritto, Michael Glass, Alfio Gliozzo, Salim Roukos, Radu Florian, Avirup Sil
Existing literature on Question Answering (QA) mostly focuses on algorithmic novelty, data augmentation, or increasingly large pre-trained language models like XLNet and RoBERTa.
Ranked #5 on Question Answering on Natural Questions (long)
1 code implementation • ACL 2020 • Michael Glass, Alfio Gliozzo, Rishav Chakravarti, Anthony Ferritto, Lin Pan, G P Shrivatsa Bhargav, Dinesh Garg, Avirup Sil
BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA).
no code implementations • IJCNLP 2019 • Rishav Chakravarti, Cezar Pendus, Andrzej Sakrajda, Anthony Ferritto, Lin Pan, Michael Glass, Vittorio Castelli, J. William Murdock, Radu Florian, Salim Roukos, Avirup Sil
This paper introduces a novel orchestration framework, called CFO (COMPUTATION FLOW ORCHESTRATOR), for building, experimenting with, and deploying interactive NLP (Natural Language Processing) and IR (Information Retrieval) systems to production environments.
no code implementations • ACL 2018 • Gourab Kundu, Avirup Sil, Radu Florian, Wael Hamza
We propose an entity-centric neural cross-lingual coreference model that builds on multi-lingual embeddings and language-independent features.
no code implementations • ACL 2016 • Avirup Sil, Radu Florian
Entity linking (EL) is the task of disambiguating mentions in text by associating them with entries in a predefined database of entities (persons, organizations, etc.).
no code implementations • 5 Dec 2017 • Avirup Sil, Gourab Kundu, Radu Florian, Wael Hamza
A major challenge in Entity Linking (EL) is making effective use of contextual information to disambiguate mentions to Wikipedia that might refer to different entities in different contexts.
Ranked #3 on Entity Disambiguation on TAC2010
no code implementations • EMNLP 2017 • Lifu Huang, Avirup Sil, Heng Ji, Radu Florian
Slot Filling (SF) aims to extract the values of certain types of attributes (or slots, such as person:cities_of_residence) for a given entity from a large collection of source documents.
no code implementations • 24 Feb 2016 • Thien Huu Nguyen, Avirup Sil, Georgiana Dinu, Radu Florian
One of the key challenges in natural language processing (NLP) is to yield good performance across application domains and languages.