no code implementations • dialdoc (ACL) 2022 • Srijan Bansal, Suraj Tripathi, Sumit Agarwal, Sireesh Gururaja, Aditya Srikanth Veerubhotla, Ritam Dutt, Teruko Mitamura, Eric Nyberg
In this paper, we present our submission to the DialDoc shared task based on the MultiDoc2Dial dataset.
no code implementations • ACL (dialdoc) 2021 • Xi Chen, Faner Lin, Yeju Zhou, Kaixin Ma, Jonathan Francis, Eric Nyberg, Alessandro Oltramari
In this paper, we describe our systems for the two Doc2Dial shared subtasks: knowledge identification and response generation.
1 code implementation • 23 May 2023 • Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency
The self-supervised objective of masking-and-predicting has led to promising performance gains on a variety of downstream tasks.
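For readers unfamiliar with the objective, the sketch below shows the generic masking-and-predicting setup in PyTorch: a fraction of input tokens is replaced by a mask id and the model is trained to recover the original ids at those positions. The helper name and the 15% masking rate are illustrative defaults, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def mask_tokens(input_ids, mask_token_id, mask_prob=0.15):
    """Randomly mask tokens; labels keep the original ids at masked positions and -100 elsewhere."""
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape, device=input_ids.device) < mask_prob
    labels[~mask] = -100                      # positions ignored by the loss
    masked_ids = input_ids.clone()
    masked_ids[mask] = mask_token_id          # replace chosen positions with the mask token
    return masked_ids, labels

# Training step (model returns per-token logits over the vocabulary):
# masked_ids, labels = mask_tokens(batch_ids, mask_token_id=103)
# logits = model(masked_ids)
# loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100)
```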
1 code implementation • 4 May 2023 • Kaixin Ma, Hao Cheng, Yu Zhang, Xiaodong Liu, Eric Nyberg, Jianfeng Gao
Our approach outperforms recent self-supervised retrievers in zero-shot evaluations and achieves state-of-the-art fine-tuned retrieval performance on NQ, HotpotQA and OTT-QA.
Ranked #4 on Question Answering on HotpotQA
no code implementations • 26 Apr 2023 • Hugo Rodrigues, Eric Nyberg, Luisa Coheur
Each generated question, after being corrected by the user, is used as a new seed in the next iteration, so more patterns are created each time.
no code implementations • 8 Jan 2023 • Leonid Boytsov, Preksha Patel, Vivek Sourabh, Riddhi Nisar, Sayani Kundu, Ramya Ramanathan, Eric Nyberg
We carried out a reproducibility study of the InPars recipe for unsupervised training of neural rankers.
no code implementations • 21 Dec 2022 • Gyan Tatiya, Jonathan Francis, Luca Bondi, Ingrid Navarro, Eric Nyberg, Jivko Sinapov, Jean Oh
We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects.
no code implementations • 16 Dec 2022 • Jonathan Francis, Bingqing Chen, Weiran Yao, Eric Nyberg, Jean Oh
The feasibility of collecting large amounts of expert demonstrations has inspired growing research interest in learning-to-drive settings, where models learn by imitating the driving behaviour of experts.
2 code implementations • 22 Oct 2022 • Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, Jianfeng Gao
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
1 code implementation • COLING 2022 • Kaixin Ma, Filip Ilievski, Jonathan Francis, Eric Nyberg, Alessandro Oltramari
In this paper, we propose Coalescing Global and Local Information (CGLI), a new model that builds entity- and timestep-aware input representations (local input) considering the whole context (global input), and we jointly model the entity states with a structured prediction objective (global output).
3 code implementations • 4 Jul 2022 • Leonid Boytsov, Tianyi Lin, Fangwei Gao, Yutian Zhao, Jeffrey Huang, Eric Nyberg
We carry out a comprehensive evaluation of 13 recent models for ranking long documents, using two popular collections (MS MARCO documents and Robust04).
1 code implementation • NAACL (SUKI) 2022 • Zhiruo Wang, Zhengbao Jiang, Eric Nyberg, Graham Neubig
In this work, we focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval, or can a simpler text-based model be effectively used to achieve a similar result?"
no code implementations • 5 May 2022 • Jonathan Francis, Bingqing Chen, Siddha Ganju, Sidharth Kathpal, Jyotish Poonganam, Ayush Shivani, Vrushank Vyas, Sahika Genc, Ivan Zhukov, Max Kumskoy, Anirudh Koul, Jean Oh, Eric Nyberg
In the first stage of the challenge, we evaluate an autonomous agent's ability to drive as fast as possible, while adhering to safety constraints.
1 code implementation • ACL 2022 • Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, Jianfeng Gao
The retriever-reader framework is popular for open-domain question answering (ODQA) due to its ability to use explicit knowledge.
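As a rough picture of the framework (the retriever and reader objects below are hypothetical interfaces, not the components from the paper), a retrieve-then-read pipeline first fetches passages from an external corpus and then conditions the answer module on them:

```python
def retriever_reader_answer(question, retriever, reader, k=20):
    """Generic retrieve-then-read pipeline: fetch passages, then extract or generate an answer."""
    passages = retriever.search(question, top_k=k)   # explicit knowledge pulled from a corpus
    return reader.answer(question, passages)         # reader conditions on the retrieved text
```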
no code implementations • 14 Oct 2021 • Bingqing Chen, Jonathan Francis, Jean Oh, Eric Nyberg, Sylvia L. Herbert
Given the nature of the task, autonomous agents need to be able to 1) identify and avoid unsafe scenarios under complex vehicle dynamics, and 2) make sub-second decisions in a fast-changing environment.
1 code implementation • EMNLP 2021 • Kaixin Ma, Filip Ilievski, Jonathan Francis, Satoru Ozaki, Eric Nyberg, Alessandro Oltramari
In this paper, we investigate what models learn from commonsense reasoning datasets.
1 code implementation • ICCV 2021 • James Herman, Jonathan Francis, Siddha Ganju, Bingqing Chen, Anirudh Koul, Abhinav Gupta, Alexey Skabelkin, Ivan Zhukov, Max Kumskoy, Eric Nyberg
Existing research on autonomous driving primarily focuses on urban driving, which is insufficient for characterising the complex driving behaviour underlying high-speed racing.
no code implementations • 19 Dec 2020 • Yikang Li, Pulkit Goel, Varsha Kuppur Rajendra, Har Simrat Singh, Jonathan Francis, Kaixin Ma, Eric Nyberg, Alessandro Oltramari
Conditional text generation has been a challenging task that is yet to see human-level performance from state-of-the-art models.
1 code implementation • 7 Nov 2020 • Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, Alessandro Oltramari
Guided by a set of hypotheses, the framework studies how to transform various pre-existing knowledge resources into a form that is most effective for pre-training models.
2 code implementations • EMNLP (NLPOSS) 2020 • Leonid Boytsov, Eric Nyberg
Our objective is to introduce to the NLP community the existing k-NN search library NMSLIB and the new retrieval toolkit FlexNeuART, as well as their integration capabilities.
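As a quick illustration of the kind of functionality NMSLIB exposes, the snippet below builds an HNSW index over random vectors with the library's Python bindings and runs a k-NN query; the data and parameter values are arbitrary examples, not recommendations from the paper.

```python
import numpy as np
import nmslib  # pip install nmslib

# Toy data: 1000 vectors of dimension 64.
data = np.random.randn(1000, 64).astype(np.float32)

# Build an HNSW index using cosine similarity.
index = nmslib.init(method='hnsw', space='cosinesimil')
index.addDataPointBatch(data)
index.createIndex({'M': 16, 'efConstruction': 200}, print_progress=False)
index.setQueryTimeParams({'efSearch': 100})

# Retrieve the 10 nearest neighbours of a query vector.
query = np.random.randn(64).astype(np.float32)
ids, dists = index.knnQuery(query, k=10)
print(ids, dists)
```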
no code implementations • WS 2019 • Hemant Pugaliya, James Route, Kaixin Ma, Yixuan Geng, Eric Nyberg
The field of question answering (QA) has seen rapid growth in new tasks and modeling approaches in recent years.
no code implementations • WS 2019 • Kaixin Ma, Jonathan Francis, Quanyang Lu, Eric Nyberg, Alessandro Oltramari
Non-extractive commonsense QA remains a challenging AI task, as it requires systems to reason about, synthesize, and gather disparate pieces of information, in order to generate responses to queries.
Ranked #12 on Common Sense Reasoning on CommonsenseQA
no code implementations • 8 Oct 2019 • Leonid Boytsov, Eric Nyberg
We consider two known data-driven approaches to extend these rules to non-metric spaces: TriGen and a piece-wise linear approximation of the pruning rule.
no code implementations • 8 Oct 2019 • Leonid Boytsov, Eric Nyberg
We demonstrate that a graph-based search algorithm (relying on the construction of an approximate neighborhood graph) can directly work with challenging non-metric and/or non-symmetric distances without resorting to metric-space mapping and/or distance symmetrization, which, in turn, lead to substantial performance degradation.
no code implementations • WS 2019 • Sai Abishek Bhaskar, Rashi Rungta, James Route, Eric Nyberg, Teruko Mitamura
This paper presents a multi-task learning approach to natural language inference (NLI) and question entailment (RQE) in the biomedical domain.
no code implementations • WS 2019 • Vinayshekhar Bannihatti Kumar, Ashwin Srinivasan, Aditi Chaudhary, James Route, Teruko Mitamura, Eric Nyberg
This paper presents the submissions by Team Dr. Quad to the ACL-BioNLP 2019 shared task on Textual Inference and Question Entailment in the Medical Domain.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Khyathi Chandu, Eric Nyberg, Alan W. Black
We introduce a dataset for sequential procedural (how-to) text generation from images in the cooking domain.
no code implementations • WS 2019 • Hemant Pugaliya, Karan Saxena, Shefali Garg, Sheetal Shalini, Prashant Gupta, Eric Nyberg, Teruko Mitamura
Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, bypassing previous deep and shallow learning methods by a large margin.
no code implementations • WS 2018 • Ashwin Naresh Kumar, Harini Kesavamoorthy, Madhura Das, Pramati Kalwad, Khyathi Chandu, Teruko Mitamura, Eric Nyberg
The ever-increasing magnitude of biomedical information sources makes it difficult and time-consuming for a human researcher to find the most relevant documents and pinpointed answers for a specific question or topic when using only a traditional search engine.
no code implementations • WS 2018 • Yutong Li, Nicholas Gekakis, Qiuze Wu, Boyue Li, Khyathi Chandu, Eric Nyberg
The growing number of biomedical publications is a challenge for human researchers, who invest considerable effort to search for relevant documents and pinpointed answers.
no code implementations • WS 2018 • Vasu Sharma, Nitish Kulkarni, Srividya Pranavi, Gabriel Bayomi, Eric Nyberg, Teruko Mitamura
In this paper, we present a novel Biomedical Question Answering system, BioAMA: "Biomedical Ask Me Anything" on task 5b of the annual BioASQ challenge.
no code implementations • WS 2018 • Khyathi Chandu, Ekaterina Loginova, Vishal Gupta, Josef van Genabith, Günter Neumann, Manoj Chinnakotla, Eric Nyberg, Alan W. Black
As a first step towards fostering research which supports CM in NLP applications, we systematically crowd-sourced and curated an evaluation dataset for factoid question answering in three CM languages - Hinglish (Hindi+English), Tenglish (Telugu+English) and Tamlish (Tamil+English) which belong to two language families (Indo-Aryan and Dravidian).
no code implementations • WS 2018 • Soumya Wadhwa, Khyathi Raghavi Chandu, Eric Nyberg
The task of Question Answering has gained prominence in the past few decades for testing the ability of machines to understand natural language.
no code implementations • WS 2018 • Soumya Wadhwa, Varsha Embar, Matthias Grabmair, Eric Nyberg
In this paper, we investigate the tendency of end-to-end neural Machine Reading Comprehension (MRC) models to match shallow patterns rather than perform inference-oriented reasoning on RC benchmarks.
no code implementations • 15 Nov 2017 • Yuan Yang, Jingcheng Yu, Ye Hu, Xiaoyao Xu, Eric Nyberg
In this paper, we present LiveMedQA, a question answering system that is optimized for consumer health questions.
1 code implementation • EMNLP 2017 • Di Wang, Nebojsa Jojic, Chris Brockett, Eric Nyberg
We propose simple and flexible training and decoding methods for influencing output style and topic in neural encoder-decoder based language generation.
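One generic way to influence output topic at decoding time, shown below only as an illustrative sketch and not as the specific method from the paper, is to add a bonus to the logits of a chosen set of token ids before sampling the next token.

```python
import torch

def sample_with_topic_bias(logits, topic_token_ids, boost=2.0):
    """Add a constant bonus to topic-related token logits, then sample the next token."""
    biased = logits.clone()
    biased[..., topic_token_ids] += boost     # nudge the distribution toward topic words
    probs = torch.softmax(biased, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# Usage inside a decoding loop (hypothetical model interface):
# next_token = sample_with_topic_bias(model_logits, topic_token_ids=[523, 871, 1204])
```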
1 code implementation • WS 2017 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg
Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.
no code implementations • WS 2017 • Khyathi Chandu, Aakanksha Naik, Aditya Chandrasekar, Zi Yang, Niloy Gupta, Eric Nyberg
In this paper, we describe our participation in phase B of task 5b of the fifth edition of the annual BioASQ challenge, which includes answering factoid, list, yes-no and summary questions from biomedical data.
no code implementations • WS 2017 • Abhilasha Ravichander, Thomas Manzini, Matthias Grabmair, Graham Neubig, Jonathan Francis, Eric Nyberg
Wang et al. (2015) proposed a method to build semantic parsing datasets by generating canonical utterances using a grammar and having crowdworkers paraphrase them into natural wording.
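The overgenerate-and-paraphrase idea can be pictured with a toy grammar: every combination of slot fillers yields a canonical utterance that a crowdworker would then rewrite in natural wording. The grammar, template, and fillers below are invented purely for illustration.

```python
import itertools

# Hypothetical toy grammar: each slot maps to its possible fillers.
grammar = {
    "<relation>": ["directed", "wrote"],
    "<entity>": ["Inception", "The Matrix"],
}
template = "who <relation> <entity>"

def canonical_utterances(template, grammar):
    """Enumerate canonical utterances by filling every combination of slots."""
    slots = [s for s in grammar if s in template]
    for fillers in itertools.product(*(grammar[s] for s in slots)):
        utterance = template
        for slot, filler in zip(slots, fillers):
            utterance = utterance.replace(slot, filler)
        yield utterance

for u in canonical_utterances(template, grammar):
    print(u)  # e.g. "who directed Inception" -> sent to crowdworkers for paraphrasing
```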
1 code implementation • EMNLP 2017 • Varun Gangal, Harsh Jhamtani, Graham Neubig, Eduard Hovy, Eric Nyberg
Portmanteaus are a word formation phenomenon where two words are combined to form a new word (for example, "breakfast" and "lunch" yield "brunch").
2 code implementations • 4 Jul 2017 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg
Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.
no code implementations • EMNLP 2017 • Rui Liu, Junjie Hu, Wei Wei, Zi Yang, Eric Nyberg
Deep neural networks for machine comprehension typically utilize only word or character embeddings, without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees.
Ranked #40 on Question Answering on SQuAD1.1 dev
no code implementations • WS 2016 • Nancy Ide, Keith Suderman, Eric Nyberg, James Pustejovsky, Marc Verhagen
The US National Science Foundation (NSF) SI2-funded LAPPS/Galaxy project has developed an open-source platform that enables complex analyses while hiding the complexities of the underlying infrastructure; it can be accessed through a web interface, deployed on any Unix system, or run from the cloud.
1 code implementation • 10 Jun 2015 • Bilegsaikhan Naidan, Leonid Boytsov, Eric Nyberg
The underpinning assumption is that, for both metric and non-metric spaces, the distance between permutations is a good proxy for the distance between original points.
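To make the permutation idea concrete, the sketch below (an illustrative reconstruction with invented helper names, not code from the paper) ranks a fixed set of pivots by their distance to a point and compares two points via the Spearman footrule between their rank vectors, which serves as the proxy distance.

```python
import numpy as np

def pivot_ranks(point, pivots, dist):
    """Rank the pivots by their distance to the point; the rank vector is the point's 'permutation'."""
    d = np.array([dist(point, p) for p in pivots])
    order = np.argsort(d)                 # pivot indices from closest to farthest
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(pivots))
    return ranks

def footrule(ranks_a, ranks_b):
    """Spearman footrule: L1 distance between two rank vectors, used as a proxy distance."""
    return int(np.abs(ranks_a - ranks_b).sum())

# Example with Euclidean distance and random pivots:
rng = np.random.default_rng(0)
pivots = rng.normal(size=(8, 16))
x, y = rng.normal(size=16), rng.normal(size=16)
euclid = lambda a, b: float(np.linalg.norm(a - b))
print(footrule(pivot_ranks(x, pivots, euclid), pivot_ranks(y, pivots, euclid)))
```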
no code implementations • LREC 2014 • Nancy Ide, James Pustejovsky, Christopher Cieri, Eric Nyberg, Di Wang, Keith Suderman, Marc Verhagen, Jonathan Wright
The Language Application (LAPPS) Grid project is establishing a framework that enables language service discovery, composition, and reuse, and promotes sustainability, manageability, usability, and interoperability of Natural Language Processing (NLP) components.