no code implementations • 13 Oct 2022 • Abhijeet Awasthi, Nitish Gupta, Bidisha Samanta, Shachi Dave, Sunita Sarawagi, Partha Talukdar
Despite cross-lingual generalization demonstrated by pre-trained multilingual models, the translate-train paradigm of transferring English datasets across multiple languages remains a key mechanism for training task-specific multilingual models.
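A minimal sketch of the translate-train paradigm the abstract refers to, assuming a generic machine-translation function `translate(text, lang)`; the names are illustrative, not from the paper:

```python
from typing import Callable

def translate_train(english_data: list[dict],
                    target_langs: list[str],
                    translate: Callable[[str, str], str]) -> list[dict]:
    """Expand an English dataset into target languages for joint training."""
    multilingual_data = list(english_data)            # keep the English originals
    for lang in target_langs:
        for ex in english_data:
            multilingual_data.append({
                "text": translate(ex["text"], lang),  # machine-translate the input
                "label": ex["label"],                 # labels carry over unchanged
                "lang": lang,
            })
    return multilingual_data                          # train one model on the union
```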
no code implementations • 1 Jul 2022 • Wenhu Chen, William W. Cohen, Michiel de Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting
In this position paper, we propose a new approach to generating a type of knowledge base (KB) from text, based on question generation and entity linking.
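One way to picture the proposed pipeline, as a hedged sketch: `generate_questions` and `link_entities` below are stand-ins for a question-generation model and an entity linker, not the paper's actual components.

```python
def build_qa_kb(passages, generate_questions, link_entities):
    """Store (entity, question, answer) triples as a question-based KB."""
    kb = []
    for passage in passages:
        for question, answer in generate_questions(passage):
            for entity in link_entities(question):     # canonical KB entities
                kb.append((entity, question, answer))  # queryable KB entry
    return kb
```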
1 code implementation • 15 Dec 2021 • Xiaodong Yu, Wenpeng Yin, Nitish Gupta, Dan Roth
Third, we retrain and evaluate two state-of-the-art (SOTA) entity linking models, showing the challenges of event linking, and propose an event-specific linking system, EVELINK, to establish a competitive result for the new task.
no code implementations • 16 Aug 2021 • Nitish Gupta, Ruchir Kaul, Satwik Gupta, Jay Shah
The results, based on nonparametric nearest-neighbor matching, suggest a statistically significant positive effect of the EU ETS on the economic performance of the regulated firms during Phase I.
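An illustrative implementation of nearest-neighbor matching for the treatment effect on the treated (a generic estimator, not the paper's code): each regulated firm is paired with the unregulated firm closest on observed covariates.

```python
import numpy as np

def nn_match_att(treated_X, treated_y, control_X, control_y):
    """ATT via nearest-neighbor matching on covariates (Euclidean distance)."""
    treated_X, control_X = np.asarray(treated_X), np.asarray(control_X)
    gaps = []
    for x, y in zip(treated_X, treated_y):
        j = int(np.argmin(np.linalg.norm(control_X - x, axis=1)))  # closest control
        gaps.append(y - control_y[j])     # outcome gap vs. matched control
    return float(np.mean(gaps))           # average gap over treated firms
```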
no code implementations • 16 Aug 2021 • Nitish Gupta, Jay Shah, Satwik Gupta, Ruchir Kaul
In this paper, we estimate the causal impact (i.e., the Average Treatment Effect on the Treated, ATT) of the EU ETS on GHG emissions and firm competitiveness (primarily measured by employment, turnover, and export levels) by combining a difference-in-differences approach with semi-parametric matching techniques and estimators, and we investigate the effect of the EU ETS on the economic performance of these German manufacturing firms using a Stochastic Production Frontier model.
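As a worked toy example of the difference-in-differences idea (the standard method, not the paper's estimator): the ATT is the average pre-to-post change among treated firms minus the same change among their matched controls.

```python
import numpy as np

def did_att(treated_pre, treated_post, control_pre, control_post):
    """ATT = mean change in treated minus mean change in matched controls."""
    treated_change = np.mean(np.asarray(treated_post) - np.asarray(treated_pre))
    control_change = np.mean(np.asarray(control_post) - np.asarray(control_pre))
    return treated_change - control_change

# Toy numbers: treated outcomes fall by 3 on average, matched controls' by 1,
# so the estimated effect of the policy is -2.
print(did_att([10, 12], [7, 9], [10, 12], [9, 11]))  # -> -2.0
```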
1 code implementation • ACL 2021 • Nitish Gupta, Sameer Singh, Matt Gardner
The predominant challenge in weakly supervised semantic parsing is that of spurious programs that evaluate to correct answers for the wrong reasons.
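A toy illustration of a spurious program (constructed for this note, not taken from the paper): with only the answer observed, both programs below are consistent with the supervision, yet only the first reflects the question.

```python
table = {"red_cars": 2, "blue_cars": 3, "trucks": 2}
question = "How many red cars are there?"
gold_answer = 2

correct_program  = lambda t: t["red_cars"]   # right answer, right reason
spurious_program = lambda t: t["trucks"]     # right answer, wrong reason

assert correct_program(table) == spurious_program(table) == gold_answer
```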
no code implementations • EMNLP 2021 • Nitish Gupta, Sameer Singh, Matt Gardner, Dan Roth
Such an objective does not require external supervision for the values of the latent output, or even for the end task, yet provides a training signal beyond that provided by the individual training examples themselves.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Krunal Shah, Nitish Gupta, Dan Roth
The recent success of machine learning systems on various QA datasets could be interpreted as a significant improvement in models' language understanding abilities.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, Jonathan Berant
Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently.
no code implementations • ACL 2020 • Jordan Kodner, Nitish Gupta
With the advent of powerful neural language models over the last few years, research attention has increasingly focused on what aspects of language they represent that make them so successful.
1 code implementation • ACL 2020 • Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner
Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
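A contrived example of such a decision rule (ours, not the paper's): if negative training sentences almost always contain "not", the shortcut below scores well on a test set drawn from the same distribution while capturing nothing about sentiment.

```python
def shortcut_classifier(sentence: str) -> str:
    """Exploits an annotation artifact instead of understanding sentiment."""
    return "negative" if "not" in sentence else "positive"

print(shortcut_classifier("this film is not bad at all"))  # "negative" (wrong)
```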
no code implementations • 15 Dec 2019 • Stephen Mayhew, Nitish Gupta, Dan Roth
Although modern named entity recognition (NER) systems show impressive performance on standard datasets, they perform poorly when presented with noisy data.
Ranked #9 on Named Entity Recognition (NER) on WNUT 2017
2 code implementations • ICLR 2020 • Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, Matt Gardner
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations.
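One common way to make such discrete operations trainable, sketched here under our own simplifications rather than as the paper's architecture, is to implement them as differentiable modules over token attentions:

```python
import torch

def find(ctx, query_vec):
    """Per-token relevance probabilities for a query."""
    return torch.sigmoid(ctx @ query_vec)        # (num_tokens,) values in [0, 1]

def filter_att(ctx, att, cond_vec):
    """Keep attended tokens that also satisfy a condition."""
    return att * torch.sigmoid(ctx @ cond_vec)

def count(att):
    """Soft, differentiable count: sum of per-token relevance."""
    return att.sum()

ctx = torch.randn(10, 8)                         # encoded passage tokens
answer = count(filter_att(ctx, find(ctx, torch.randn(8)), torch.randn(8)))
```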
1 code implementation • EMNLP 2018 • Shyam Upadhyay, Nitish Gupta, Dan Roth
This enables our approach to: (a) augment the limited supervision in the target language with additional supervision from a high-resource language (like English), and (b) train a single entity linking model for multiple languages, improving upon individually trained models for each language.
no code implementations • EMNLP 2018 • Nitish Gupta, Mike Lewis
Answering compositional questions requiring multi-step reasoning is challenging.
no code implementations • EMNLP 2017 • Nitish Gupta, Sameer Singh, Dan Roth
For accurate entity linking, we need to capture various information aspects of an entity, such as its description in a KB, contexts in which it is mentioned, and structured knowledge.
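A hedged sketch of scoring a candidate entity by combining those information aspects; the vector fields and weights are placeholders of ours, not the paper's model:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def link_score(mention_vec, entity, weights=(0.4, 0.4, 0.2)):
    """Weighted similarity of a mention to an entity's information aspects."""
    aspects = (entity["description_vec"],   # encoded KB description
               entity["context_vec"],       # aggregated mention contexts
               entity["kb_vec"])            # structured-knowledge embedding
    return sum(w * cosine(mention_vec, v) for w, v in zip(weights, aspects))
```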
no code implementations • COLING 2016 • Shyam Upadhyay, Nitish Gupta, Christos Christodoulopoulos, Dan Roth
Cross document event coreference (CDEC) is an important task that aims at aggregating event-related information across multiple documents.
no code implementations • 23 Apr 2015 • Nitish Gupta, Sameer Singh
Matrix factorization has found incredible success and widespread application as a collaborative-filtering-based approach to recommendations.
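For reference, the textbook collaborative-filtering formulation (not this paper's specific model): approximate the ratings matrix as a product of low-rank user and item factors, fit by stochastic gradient descent.

```python
import numpy as np

def factorize(ratings, rank=2, lr=0.01, reg=0.1, epochs=200):
    """ratings: (user, item, value) triples; returns user/item factor matrices."""
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                    # prediction error
            U[u] += lr * (err * V[i] - reg * U[u])   # gradient step on user factor
            V[i] += lr * (err * U[u] - reg * V[i])   # gradient step on item factor
    return U, V

U, V = factorize([(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0)])
print(U @ V.T)  # reconstruction is close to the observed ratings
```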