1 code implementation • 23 Jan 2023 • Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers.
no code implementations • 16 Jun 2022 • Scott McCarley, Mihaela Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, Radu Florian
Recent machine reading comprehension datasets include extractive and boolean questions, but current approaches do not offer integrated support for answering both question types.
no code implementations • 15 Dec 2021 • Mihaela Bornea, Ramon Fernandez Astudillo, Tahira Naseem, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Pavan Kapanipathi, Radu Florian, Salim Roukos
We propose a transition-based system to transpile Abstract Meaning Representation (AMR) into SPARQL for Knowledge Base Question Answering (KBQA).
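The core idea is that an AMR graph's predicate-argument triples can be rendered as the graph patterns of a SPARQL query. The following is a minimal illustrative sketch of that mapping only, not the paper's transition-based system; the function name and example URIs are hypothetical.

```python
# Illustrative sketch: render AMR-derived (subject, relation, object)
# triples as a SPARQL SELECT query. This shows only the final
# graph-pattern rendering step, not the transition-based transpiler
# described in the paper. All names here are hypothetical.

def amr_triples_to_sparql(triples, focus_var="?x"):
    """Build a SPARQL query from triples of SPARQL variables
    (strings starting with '?') or prefixed URIs; `focus_var`
    becomes the SELECT target."""
    patterns = " .\n  ".join(f"{s} {r} {o}" for s, r, o in triples)
    return f"SELECT {focus_var} WHERE {{\n  {patterns} .\n}}"

# e.g. "Who directed Inception?" as a single triple pattern
query = amr_triples_to_sparql([("?x", "dbo:director", "dbr:Inception")])
```

The transition-based system in the paper instead derives such triples incrementally from the AMR parse before rendering the query.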
no code implementations • 14 Dec 2021 • Sara Rosenthal, Mihaela Bornea, Avirup Sil, Radu Florian, Scott McCarley
Existing datasets that contain boolean questions, such as BoolQ and TyDi QA, provide the user with a YES/NO response to the question.
no code implementations • 16 Aug 2021 • Gaetano Rossiello, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Mihaela Bornea, Alfio Gliozzo, Tahira Naseem, Pavan Kapanipathi
Relation linking is essential to enable question answering over knowledge bases.
Ranked #1 on Relation Linking on QALD-9
no code implementations • 15 Apr 2021 • Sara Rosenthal, Mihaela Bornea, Avirup Sil
Recent approaches have exploited weaknesses in monolingual question answering (QA) models by adding adversarial statements to the passage.
no code implementations • 10 Dec 2020 • Mihaela Bornea, Lin Pan, Sara Rosenthal, Radu Florian, Avirup Sil
Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LM) to perform zero-shot language-wise learning: train a QA model on English and test on other languages.
no code implementations • COLING 2020 • Anthony Ferritto, Sara Rosenthal, Mihaela Bornea, Kazi Hasan, Rishav Chakravarti, Salim Roukos, Radu Florian, Avi Sil
We also show how M-GAAMA can be used in downstream tasks by incorporating it into an end-to-end QA system using CFO (Chakravarti et al., 2019).
no code implementations • 22 Jun 2020 • Luca Buratti, Saurabh Pujar, Mihaela Bornea, Scott McCarley, Yunhui Zheng, Gaetano Rossiello, Alessandro Morari, Jim Laredo, Veronika Thost, Yufan Zhuang, Giacomo Domeniconi
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
no code implementations • IJCNLP 2019 • Oren Melamud, Mihaela Bornea, Ken Barker
In this work, we combine these two approaches to improve low-shot text classification with two novel methods: a simple bag-of-words embedding approach; and a more complex context-aware method, based on the BERT model.
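A bag-of-words embedding approach of this kind can be sketched as: represent each text as the mean of its word vectors, average the few labeled examples per class into a centroid, and classify new texts by cosine similarity to the centroids. This is a minimal sketch of the general idea, not the paper's exact model; the tiny embedding table and class labels below are hypothetical toy data.

```python
import numpy as np

# Hypothetical toy word-embedding table (the paper would use
# pre-trained embeddings); vectors chosen so classes are separable.
vocab = {
    "cheap": np.array([1., 0., 0., 0.]),
    "price": np.array([1., 1., 0., 0.]),
    "deal":  np.array([0., 1., 0., 0.]),
    "goal":  np.array([0., 0., 1., 0.]),
    "match": np.array([0., 0., 1., 1.]),
    "team":  np.array([0., 0., 0., 1.]),
}

def embed(text):
    """Bag-of-words embedding: mean of the known word vectors."""
    vecs = [vocab[w] for w in text.split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Low-shot setting: only two labeled examples per class.
train = {"shopping": ["cheap price", "deal price"],
         "sports":   ["goal match", "team match"]}
centroids = {label: np.mean([embed(t) for t in texts], axis=0)
             for label, texts in train.items()}

def classify(text):
    """Assign the class whose centroid is nearest in cosine similarity."""
    return max(centroids, key=lambda lbl: cosine(embed(text), centroids[lbl]))
```

The context-aware BERT-based method in the paper replaces the static word vectors with contextual representations, but the centroid-matching structure is the same in spirit.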
no code implementations • WS 2017 • Nazneen Fatema Rajani, Mihaela Bornea, Ken Barker
In the medical domain, it is common to link text spans to medical concepts in large, curated knowledge repositories such as the Unified Medical Language System.
no code implementations • WS 2016 • Bharath Dandala, Murthy Devarakonda, Mihaela Bornea, Christopher Nielson
In predicting positive associations, the stacked combination significantly outperformed the baseline (a distant semi-supervised method on large medical text), achieving F scores of 0.75 versus 0.55 on the pairs seen in the patient records, and F scores of 0.69 and 0.35 on unique pairs.