Search Results for author: Adam Poliak

Found 21 papers, 11 papers with code

An Immersive Computational Text Analysis Course for Non-Computer Science Students at Barnard College

no code implementations NAACL (TeachingNLP) 2021 Adam Poliak, Jalisha Jenifer

We provide an overview of a new Computational Text Analysis course that will be taught at Barnard College over a six-week period in May and June 2021.

On Gender Biases in Offensive Language Classification Models

no code implementations NAACL (GeBNLP) 2022 Sanjana Marcé, Adam Poliak

We explore whether neural Natural Language Processing models trained to identify offensive language in tweets contain gender biases.
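
One common way to surface such biases (a hypothetical illustration in the spirit of the paper; the authors' actual method may differ) is a counterfactual swap test: replace gendered terms in a tweet and check whether the classifier's score moves. The classifier below is a deliberately biased stand-in, not a real model.

```python
# Hypothetical counterfactual-swap probe. `toy_classifier` is a
# stand-in that mimics a gender-biased offensive-language model.

SWAPS = {"she": "he", "he": "she", "her": "his", "his": "her",
         "woman": "man", "man": "woman"}

def gender_swap(text):
    """Swap gendered tokens in a lowercase, whitespace-tokenized text."""
    return " ".join(SWAPS.get(tok, tok) for tok in text.lower().split())

def toy_classifier(text):
    # Stand-in scorer that reacts to one gendered word more than its
    # counterpart, mimicking a biased model.
    return 0.9 if "woman" in text else 0.2

tweet = "that woman is loud"
gap = abs(toy_classifier(tweet) - toy_classifier(gender_swap(tweet)))
print(gap)  # a nonzero gap suggests gender-dependent behavior
```

A large score gap on otherwise identical inputs is evidence the model conditions on gender rather than on offensiveness.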
Discovering changes in birthing narratives during COVID-19

no code implementations 25 Apr 2022 Daphna Spira, Noreen Mayat, Caitlin Dreisbach, Adam Poliak

We investigate whether, and if so how, birthing narratives written by new parents on Reddit changed during COVID-19.

Fine-Tuning Transformers for Identifying Self-Reporting Potential Cases and Symptoms of COVID-19 in Tweets

1 code implementation NAACL (SMM4H) 2021 Max Fleming, Priyanka Dondeti, Caitlin N. Dreisbach, Adam Poliak

We describe our straightforward approach to Tasks 5 and 6 of the 2021 Social Media Mining for Health Applications (SMM4H) shared tasks.

A Survey on Recognizing Textual Entailment as an NLP Evaluation

no code implementations EMNLP (Eval4NLP) 2020 Adam Poliak

Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems.

Tasks: Natural Language Inference, RTE

Probing Neural Language Models for Human Tacit Assumptions

no code implementations 10 Apr 2020 Nathaniel Weir, Adam Poliak, Benjamin Van Durme

Our prompts are based on human responses in a psychological study of conceptual associations.
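
A minimal sketch of how such prompts might be constructed, assuming cloze-style templates built from concept words (the paper's exact templates may differ). A masked language model would fill the blank; here we only build the prompt strings.

```python
# Illustrative cloze-prompt construction for probing tacit assumptions.
# The template and mask token are assumptions, not the paper's exact ones.

def make_prompt(concept, article="A"):
    """Build a fill-in-the-blank prompt for a concept word."""
    return f"{article} {concept} has <mask>."

concepts = ["bear", "lion", "robin"]
prompts = [make_prompt(c) for c in concepts]
print(prompts[0])  # A bear has <mask>.
```

Ranking a language model's fillers for the mask against human association norms then indicates which tacit assumptions the model has absorbed.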

Uncertain Natural Language Inference

no code implementations ACL 2020 Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, Benjamin Van Durme

We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments.
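
The target shift can be sketched as follows (a toy illustration; the mapping and numbers are assumptions, not from the paper): a categorical NLI label collapses graded judgments onto a few anchors, while UNLI keeps the scalar probability itself as the regression target.

```python
# Toy contrast between categorical NLI anchors and UNLI-style scalar
# probability targets. The anchor mapping is an illustrative assumption.

def categorical_to_coarse_prob(label):
    """Crude mapping from 3-way NLI labels to scalar anchors."""
    return {"entailment": 1.0, "neutral": 0.5, "contradiction": 0.0}[label]

def mse(pred, gold):
    """Squared error, the natural loss once the target is a probability."""
    return (pred - gold) ** 2

# A pair labeled "entailment" categorically may still be judged only
# 0.8 likely by annotators; UNLI preserves that distinction.
gold_prob = 0.8
coarse = categorical_to_coarse_prob("entailment")  # 1.0
print(mse(0.75, gold_prob))  # regression against the graded judgment
print(mse(0.75, coarse))     # regression against the coarse anchor
```

The same prediction incurs different losses under the two targets, which is exactly the signal the coarse labels throw away.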

Tasks: Learning-To-Rank, Natural Language Inference +1

Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference

1 code implementation ACL 2019 Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush

In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise.
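
The intuition can be sketched numerically (an illustration of the idea, not the released implementation): add a term to the loss that is large when the premise is improbable under the model given the hypothesis and label, so hypothesis-only shortcuts become costly. All probabilities below are made up.

```python
# Illustrative combined objective: label loss plus a premise term that
# penalizes models for ignoring the premise. Numbers are toy values.
import math

def nll(prob):
    """Negative log-likelihood of an event the model assigned `prob` to."""
    return -math.log(prob)

def combined_loss(p_label_given_pair, p_premise_given_hyp_label, lam=1.0):
    # standard NLI term + premise-reconstruction term
    return nll(p_label_given_pair) + lam * nll(p_premise_given_hyp_label)

# A model that nails the label but assigns low probability to the
# premise pays a penalty relative to one that models both well:
print(combined_loss(0.9, 0.1))  # premise ignored: large second term
print(combined_loss(0.9, 0.8))  # premise modeled: lower overall loss
```

The weight `lam` (an assumed hyperparameter name) trades off label accuracy against premise sensitivity.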

Tasks: Natural Language Inference

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations SEMEVAL 2019 Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models; CCG supertagging and NLI pretraining perform comparably.

Tasks: CCG Supertagging, Language Modelling +1

On the Evaluation of Semantic Phenomena in Neural Machine Translation Using Natural Language Inference

1 code implementation NAACL 2018 Adam Poliak, Yonatan Belinkov, James Glass, Benjamin Van Durme

We propose a process for investigating the extent to which sentence representations arising from neural machine translation (NMT) systems encode distinct semantic phenomena.
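
The general probing recipe behind such work can be sketched as follows (an assumption about the typical setup, not the paper's code): freeze a sentence encoder, extract fixed representations, and train a lightweight classifier on top; the probe's accuracy indicates how much of the phenomenon the representation encodes. The encoder here is a trivial stand-in.

```python
# Illustrative frozen-encoder probing setup. `toy_encoder` stands in
# for a frozen NMT encoder; a real probe would train a classifier on
# these features for an NLI label.

def toy_encoder(sentence):
    """Stand-in for a frozen encoder: a tiny fixed feature vector."""
    return [sentence.count("a"), sentence.count("e"), len(sentence)]

def concat_pair(premise, hypothesis):
    """Concatenate frozen representations of a premise/hypothesis pair."""
    return toy_encoder(premise) + toy_encoder(hypothesis)

features = concat_pair("a man eats", "a person eats")
print(len(features))  # probe input dimensionality
```

Because the encoder is never updated, any accuracy above chance must come from information already present in its representations.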

Tasks: Machine Translation, Natural Language Inference +2

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

no code implementations EMNLP (ACL) 2018 Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme

We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.

Tasks: Natural Language Inference

Efficient, Compositional, Order-sensitive n-gram Embeddings

1 code implementation EACL 2017 Adam Poliak, Pushpendre Rastogi, M. Patrick Martin, Benjamin Van Durme

We propose ECO: a new way to generate embeddings for phrases that is Efficient, Compositional, and Order-sensitive.
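
Order-sensitive composition can be illustrated with a toy version of the idea (an illustrative sketch, not the authors' released code): give each word a separate vector per slot it can occupy in an n-gram, so reversing word order changes the phrase embedding. The vectors below are toy values; real ECO vectors are trained from corpus statistics.

```python
# Toy ECO-style composition: per-slot word vectors averaged into a
# bigram embedding. Slot vectors here are hand-picked for illustration.

def compose_bigram(w1, w2, slot_vectors):
    """Average the slot-1 vector of w1 with the slot-2 vector of w2."""
    v1 = slot_vectors[(w1, 1)]
    v2 = slot_vectors[(w2, 2)]
    return [(a + b) / 2 for a, b in zip(v1, v2)]

# The same word gets different vectors in slot 1 vs slot 2.
slot_vectors = {
    ("dog", 1): [1.0, 0.0], ("dog", 2): [0.0, 1.0],
    ("bites", 1): [0.5, 0.5], ("bites", 2): [1.0, 1.0],
}

dog_bites = compose_bigram("dog", "bites", slot_vectors)
bites_dog = compose_bigram("bites", "dog", slot_vectors)
print(dog_bites, bites_dog)  # order changes the phrase embedding
```

Composition stays efficient because building a phrase vector is just a lookup and an average, yet "dog bites" and "bites dog" land in different places.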

Tasks: Word Embeddings
