Search Results for author: Koustuv Sinha

Found 21 papers, 14 papers with code

Sometimes We Want Ungrammatical Translations

1 code implementation • Findings (EMNLP) 2021 • Prasanna Parthasarathi, Koustuv Sinha, Joelle Pineau, Adina Williams

Rapid progress in Neural Machine Translation (NMT) systems over the last few years has focused primarily on improving translation quality, and as a secondary focus, improving robustness to perturbations (e.g., spelling).

Machine Translation · Translation

Towards Reproducible Machine Learning Research in Natural Language Processing

no code implementations • ACL 2022 • Ana Lucic, Maurits Bleeker, Samarth Bhargav, Jessica Forde, Koustuv Sinha, Jesse Dodge, Sasha Luccioni, Robert Stojnic

While recent progress in the field of ML has been significant, the reproducibility of these cutting-edge results is often lacking, with many submissions omitting the information necessary to ensure subsequent reproducibility.

Evaluating Gender Bias in Natural Language Inference

1 code implementation • 12 May 2021 • Shanya Sharma, Manan Dey, Koustuv Sinha

Gender-bias stereotypes have recently raised significant ethical concerns in natural language processing.

Natural Language Inference · Natural Language Understanding

Sometimes We Want Translationese

no code implementations • 15 Apr 2021 • Prasanna Parthasarathi, Koustuv Sinha, Joelle Pineau, Adina Williams

Rapid progress in Neural Machine Translation (NMT) systems over the last few years has been driven primarily towards improving translation quality and, as a secondary focus, towards improving robustness to input perturbations (e.g., spelling and grammatical mistakes).

Machine Translation · Translation

Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little

no code implementations • EMNLP 2021 • Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, Douwe Kiela

A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in classical NLP pipelines.

Language Modelling · Masked Language Modeling

COVID-19 Prognosis via Self-Supervised Representation Learning and Multi-Image Prediction

1 code implementation • 13 Jan 2021 • Anuroop Sriram, Matthew Muckley, Koustuv Sinha, Farah Shamout, Joelle Pineau, Krzysztof J. Geras, Lea Azour, Yindalon Aphinyanaphongs, Nafissa Yakubova, William Moore

The first is deterioration prediction from a single image, where our model achieves an area under the receiver operating characteristic curve (AUC) of 0.742 for predicting an adverse event within 96 hours (compared to 0.703 with supervised pretraining) and an AUC of 0.765 for predicting oxygen requirements greater than 6 L a day at 24 hours (compared to 0.749 with supervised pretraining).
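
For readers unfamiliar with the metric, AUC can be computed via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. The sketch below uses made-up scores purely for illustration, not the paper's data:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney rank formulation: the probability that a
    random positive outscores a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative only: scores for 6 patients, 1 = adverse event occurred
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0]
print(auc(scores, labels))  # 1.0 — every positive outscores every negative
```

A perfect ranking gives 1.0, a random one about 0.5, which is why the reported 0.742 and 0.765 are meaningful gains over the 0.703 and 0.749 supervised baselines.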

Representation Learning · Self-Supervised Learning

GraphLog: A Benchmark for Measuring Logical Generalization in Graph Neural Networks

1 code implementation • 1 Jan 2021 • Koustuv Sinha, Shagun Sodhani, Joelle Pineau, William L. Hamilton

In this work, we study the logical generalization capabilities of GNNs by designing a benchmark suite grounded in first-order logic.

Continual Learning · Knowledge Graphs · +1

UnNatural Language Inference

1 code implementation • ACL 2021 • Koustuv Sinha, Prasanna Parthasarathi, Joelle Pineau, Adina Williams

We provide novel evidence that complicates this claim: we find that state-of-the-art Natural Language Inference (NLI) models assign the same labels to permuted examples as they do to the original, i.e., they are largely invariant to random word-order permutations.
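
The invariance finding is easy to demonstrate with a toy stand-in. The classifier below is a hypothetical bag-of-words model (not the authors' NLI systems); because it ignores word order by construction, it exhibits exactly the failure mode the paper probes:

```python
import random

def permute_words(sentence, rng):
    """Randomly permute the word order of a sentence."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

def bow_label(premise, hypothesis):
    """Toy stand-in classifier: predicts purely from word overlap
    (a bag-of-words model, order-invariant by construction)."""
    overlap = len(set(premise.split()) & set(hypothesis.split()))
    return "entailment" if overlap >= 2 else "neutral"

rng = random.Random(0)
premise, hypothesis = "the cat sat on the mat", "a cat is on a mat"
original = bow_label(premise, hypothesis)
permuted = bow_label(permute_words(premise, rng), permute_words(hypothesis, rng))
assert original == permuted  # same label despite scrambled word order
```

The paper's point is that full NLI models, which in principle can use word order, behave much like this toy model on permuted inputs.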

Natural Language Inference · Natural Language Understanding

A Closer Look at Codistillation for Distributed Training

no code implementations • 6 Oct 2020 • Shagun Sodhani, Olivier Delalleau, Mahmoud Assran, Koustuv Sinha, Nicolas Ballas, Michael Rabbat

Surprisingly, we find that even at moderate batch sizes, models trained with codistillation can perform as well as models trained with synchronous data-parallel methods, despite using a much weaker synchronization mechanism.
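
For context, codistillation augments each worker's task loss with a term pulling its predictions toward a peer model's (typically stale) predictions, a much weaker coupling than synchronous gradient averaging. The two-class sketch below uses made-up logits and is an illustration of the loss shape, not the paper's implementation:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entropy(label, logits):
    """Standard task loss: negative log-probability of the true class."""
    return -math.log(softmax(logits)[label])

def distill(student_logits, teacher_probs):
    """Distillation term: cross-entropy between the peer's (detached)
    probabilities and the student's log-probabilities."""
    log_q = [math.log(q) for q in softmax(student_logits)]
    return -sum(p * lq for p, lq in zip(teacher_probs, log_q))

def codistill_loss(logits_a, logits_b, label, alpha=0.5):
    """Model A's codistillation loss: task cross-entropy plus a penalty
    for disagreeing with peer model B's predictions."""
    return cross_entropy(label, logits_a) + alpha * distill(logits_a, softmax(logits_b))

loss = codistill_loss([2.0, 0.0], [1.5, 0.5], label=0)  # > 0; shrinks as the peers agree
```

Because each worker only needs the peer's (possibly outdated) outputs rather than its gradients, synchronization pressure is far lower than in data-parallel SGD, which is what makes the paper's parity result surprising.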

Distributed Computing

Ideas for Improving the Field of Machine Learning: Summarizing Discussion from the NeurIPS 2019 Retrospectives Workshop

no code implementations • 21 Jul 2020 • Shagun Sodhani, Mayoore S. Jaiswal, Lauren Baker, Koustuv Sinha, Carl Shneider, Peter Henderson, Joel Lehman, Ryan Lowe

This report documents ideas for improving the field of machine learning, which arose from discussions at the ML Retrospectives workshop at NeurIPS 2019.

Probing Linguistic Systematicity

1 code implementation • ACL 2020 • Emily Goodwin, Koustuv Sinha, Timothy J. O'Donnell

Recently, there has been much interest in the question of whether deep natural language understanding models exhibit systematicity: generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear.

Natural Language Inference · Natural Language Understanding

Learning an Unreferenced Metric for Online Dialogue Evaluation

1 code implementation • ACL 2020 • Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L. Hamilton, Joelle Pineau

Evaluating the quality of a dialogue interaction between two agents is a difficult task, especially in open-domain chit-chat style dialogue.

Dialogue Evaluation

Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program)

no code implementations • 27 Mar 2020 • Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché-Buc, Emily Fox, Hugo Larochelle

Reproducibility, that is, obtaining similar results to those presented in a paper or talk using the same code and data (when available), is a necessary step to verify the reliability of research findings.

Evaluating Logical Generalization in Graph Neural Networks

1 code implementation • ICML Workshop LifelongML 2020 • Koustuv Sinha, Shagun Sodhani, Joelle Pineau, William L. Hamilton

Recent research has highlighted the role of relational inductive biases in building learning agents that can generalize and reason in a compositional manner.

Continual Learning · Knowledge Graphs · +2

CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text

5 code implementations • IJCNLP 2019 • Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, William L. Hamilton

The recent success of natural language understanding (NLU) systems has been troubled by results highlighting the failure of these models to generalize in a systematic and robust way.
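
The kind of systematic generalization CLUTRR targets, inferring an unseen kinship relation by composing relations along a chain, can be illustrated with a toy composition table. The rules below are hypothetical stand-ins, not the benchmark's actual generator; here rel(A, B) means "A is the rel of B", so composing r1(A, B) with r2(B, C) yields a relation between A and C:

```python
# Toy rule table: (rel1, rel2) -> composed relation.
# E.g. father(A, B) and mother(B, C) imply A is C's (maternal) grandfather.
COMPOSE = {
    ("father", "father"): "grandfather",
    ("father", "mother"): "grandfather",
    ("mother", "father"): "grandmother",
    ("mother", "mother"): "grandmother",
}

def compose_chain(relations):
    """Left-fold a chain of relations; returns None when no rule applies."""
    result = relations[0]
    for rel in relations[1:]:
        result = COMPOSE.get((result, rel))
        if result is None:
            return None
    return result

print(compose_chain(["father", "mother"]))  # grandfather
```

A model that generalizes systematically should handle longer chains than it saw in training by reapplying the same rules, which is precisely what the benchmark varies.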

Inductive logic programming · Natural Language Understanding · +2

Compositional Language Understanding with Text-based Relational Reasoning

2 code implementations • 7 Nov 2018 • Koustuv Sinha, Shagun Sodhani, William L. Hamilton, Joelle Pineau

Neural networks for natural language reasoning have largely focused on extractive, fact-based question-answering (QA) and common-sense inference.

Common Sense Reasoning · Language Modelling · +2

Adversarial Gain

no code implementations • 4 Nov 2018 • Peter Henderson, Koustuv Sinha, Rosemary Nan Ke, Joelle Pineau

Adversarial examples can be defined as inputs to a model which induce a mistake: the model output differs from that of an oracle, perhaps in surprising or malicious ways.

General Classification

A Hierarchical Neural Attention-based Text Classifier

1 code implementation • EMNLP 2018 • Koustuv Sinha, Yue Dong, Jackie Chi Kit Cheung, Derek Ruths

Deep neural networks have displayed superior performance over traditional supervised classifiers in text classification.

Classification · General Classification · +1

Ethical Challenges in Data-Driven Dialogue Systems

1 code implementation • 24 Nov 2017 • Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, Joelle Pineau

The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm.

reinforcement-learning
