1 code implementation • EMNLP (NLP-COVID19) 2020 • Adam Poliak, Max Fleming, Cash Costello, Kenton Murray, Mahsa Yarmohammadi, Shivani Pandya, Darius Irani, Milind Agarwal, Udit Sharma, Shuo Sun, Nicola Ivanov, Lingxi Shang, Kaushik Srinivasan, Seolhwa Lee, Xu Han, Smisha Agarwal, João Sedoc
We release a dataset of over 2,100 COVID-19-related frequently asked question-answer pairs scraped from over 40 trusted websites.
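As a rough illustration of the collection step, a per-site scraper might look like the sketch below; the URL and CSS selectors are hypothetical stand-ins, since each of the 40+ sites would need its own markup-specific logic.

```python
# A hedged per-site scraper sketch; the CSS selectors below are
# hypothetical stand-ins, not any of the 40+ sites' real markup.
import requests
from bs4 import BeautifulSoup

def scrape_faq(url: str) -> list:
    """Collect (question, answer) pairs from one FAQ page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    # Assumed markup: each item is <div class="faq-item"> holding a
    # question heading and an answer paragraph.
    for item in soup.select("div.faq-item"):
        q = item.select_one("h3.question")
        a = item.select_one("p.answer")
        if q and a:
            pairs.append({"source": url,
                          "question": q.get_text(strip=True),
                          "answer": a.get_text(strip=True)})
    return pairs
```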
no code implementations • NAACL (TeachingNLP) 2021 • Adam Poliak, Jalisha Jenifer
We provide an overview of a new Computational Text Analysis course that will be taught at Barnard College over a six-week period in May and June 2021.
no code implementations • NAACL (GeBNLP) 2022 • Sanjana Marcé, Adam Poliak
We explore whether neural Natural Language Processing models trained to identify offensive language in tweets contain gender biases.
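One simple way to surface such a bias is to measure how predicted offensiveness shifts when gendered terms are swapped, as in the sketch below; `clf` is assumed to be any fitted binary classifier exposing scikit-learn's `predict_proba`, and the swap list is a toy subset rather than the paper's evaluation set.

```python
# Illustrative only: `clf` is any fitted binary offensive-language
# classifier with predict_proba; the swap list is a toy subset.
GENDER_SWAPS = [("he", "she"), ("him", "her"), ("man", "woman")]

def swap_gender(text: str) -> str:
    lookup = {**{a: b for a, b in GENDER_SWAPS},
              **{b: a for a, b in GENDER_SWAPS}}
    return " ".join(lookup.get(t.lower(), t) for t in text.split())

def bias_gap(clf, tweets):
    """Mean shift in P(offensive) after swapping gendered terms."""
    orig = clf.predict_proba(tweets)[:, 1]
    swapped = clf.predict_proba([swap_gender(t) for t in tweets])[:, 1]
    return float((orig - swapped).mean())
```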
no code implementations • 11 Oct 2024 • Grace Proebsting, Adam Poliak
We test whether replacing crowdsource workers with LLMs to write Natural Language Inference (NLI) hypotheses similarly results in annotation artifacts.
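A standard artifact probe, sketched below with arbitrary add-alpha smoothing and whitespace tokenization (my choices, not necessarily the paper's setup), is to compute pointwise mutual information between hypothesis words and gold labels; words that strongly predict a label signal an artifact.

```python
# PMI(word, label) over hypotheses; high-PMI words give the label away.
import math
from collections import Counter

def label_pmi(hypotheses, labels, alpha=1.0):
    word_label, word_count, label_count = Counter(), Counter(), Counter(labels)
    for hyp, lab in zip(hypotheses, labels):
        for w in set(hyp.lower().split()):
            word_label[(w, lab)] += 1
            word_count[w] += 1
    n = len(labels)
    pmi = {}
    for (w, lab), c in word_label.items():
        p_joint = (c + alpha) / (n + alpha * len(word_count))
        pmi[(w, lab)] = math.log(
            p_joint / ((word_count[w] / n) * (label_count[lab] / n)))
    return pmi  # sort by value to see which words predict which label
```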
no code implementations • 29 Jun 2023 • Dhruv Verma, Yash Kumar Lal, Shreyashee Sinha, Benjamin Van Durme, Adam Poliak
We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing.
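The evaluation reduces to a consistency check like the sketch below, where `predict` is an assumed stand-in for any RTE model's labeling function: a paraphrase-robust model should assign the same label to an example and its paraphrased counterpart.

```python
# `examples` is assumed to be a list of (premise, hypothesis,
# paraphrased_premise, paraphrased_hypothesis) tuples.
def consistency(predict, examples) -> float:
    """Fraction of examples whose predicted label survives paraphrasing."""
    kept = sum(predict(p, h) == predict(pp, ph)
               for p, h, pp, ph in examples)
    return kept / len(examples)
```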
no code implementations • 25 Apr 2022 • Daphna Spira, Noreen Mayat, Caitlin Dreisbach, Adam Poliak
We investigate whether, and if so how, birthing narratives written by new parents on Reddit changed during COVID-19.
1 code implementation • Findings (ACL) 2021 • Tuhin Chakrabarty, Debanjan Ghosh, Adam Poliak, Smaranda Muresan
We introduce a collection of recognizing textual entailment (RTE) datasets focused on figurative language.
1 code implementation • NAACL (SMM4H) 2021 • Max Fleming, Priyanka Dondeti, Caitlin N. Dreisbach, Adam Poliak
We describe our straightforward approach to Tasks 5 and 6 of the 2021 Social Media Mining for Health Applications (SMM4H) shared tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, Aaron Steven White
We introduce five new natural language inference (NLI) datasets focused on temporal reasoning.
no code implementations • EMNLP (Eval4NLP) 2020 • Adam Poliak
Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems.
no code implementations • 10 Apr 2020 • Nathaniel Weir, Adam Poliak, Benjamin Van Durme
Our prompts are based on human responses in a psychological study of conceptual associations.
no code implementations • ACL 2020 • Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, Benjamin Van Durme
We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments.
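In PyTorch terms, the shift amounts to swapping a 3-way classification head for a scalar regression head, as in the minimal sketch below; the pooled pair representation and MSE loss are my assumptions for illustration rather than the paper's exact architecture.

```python
# Minimal PyTorch sketch: a scalar head in place of 3-way classification.
import torch
import torch.nn as nn

class UnliHead(nn.Module):
    """Scalar probability head over a pooled premise-hypothesis vector."""
    def __init__(self, hidden: int):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)  # one scalar, not 3 class logits

    def forward(self, pair_repr: torch.Tensor) -> torch.Tensor:
        # Squash to (0, 1) so the output reads as a probability estimate.
        return torch.sigmoid(self.scorer(pair_repr)).squeeze(-1)

loss_fn = nn.MSELoss()  # regress against crowd probability judgments
```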
1 code implementation • SEMEVAL 2019 • Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush
Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases.
1 code implementation • ACL 2019 • Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush
In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise.
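One way to approximate that scoring with an off-the-shelf causal LM is sketched below; GPT-2 and the bracketed label prefix are my stand-ins, not the paper's trained model.

```python
# Rough log p(premise | hypothesis, label) under a generic causal LM.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def premise_logprob(premise: str, hypothesis: str, label: str) -> float:
    context = f"{hypothesis} [{label}]"
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(context + " " + premise, return_tensors="pt").input_ids
    logits = lm(full_ids).logits[:, :-1]       # next-token predictions
    targets = full_ids[:, 1:]
    logp = torch.log_softmax(logits, -1).gather(
        -1, targets.unsqueeze(-1)).squeeze(-1)
    return float(logp[:, ctx_len - 1:].sum())  # premise tokens only
```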
2 code implementations • ICLR 2019 • Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
The jiant toolkit for general-purpose text understanding models
no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick
Our results show that pretraining on language modeling performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models; CCG supertagging and NLI pretraining perform comparably.
1 code implementation • SEMEVAL 2018 • Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
We propose a hypothesis-only baseline for diagnosing Natural Language Inference (NLI).
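A minimal version of the baseline, with a bag-of-ngrams model standing in for the paper's sentence-encoder classifier; the diagnostic logic is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

hyp_only = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
# Train and test without ever showing the model a premise:
#   hyp_only.fit(train_hypotheses, train_labels)
#   acc = hyp_only.score(test_hypotheses, test_labels)
# Accuracy well above the majority class means the hypotheses alone
# leak label signal.
```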
1 code implementation • NAACL 2018 • Adam Poliak, Yonatan Belinkov, James Glass, Benjamin Van Durme
We propose a process for investigating the extent to which sentence representations arising from neural machine translation (NMT) systems encode distinct semantic phenomena.
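The probing recipe reduces to the sketch below, which assumes frozen NMT encoder states have already been extracted as fixed-size vectors; keeping the probe linear and simple means any accuracy is credited to the representation rather than the classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe(train_vecs: np.ndarray, train_labels,
          test_vecs: np.ndarray, test_labels) -> float:
    """Train a linear probe on frozen encoder states; return accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_vecs, train_labels)
    return clf.score(test_vecs, test_labels)
```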
no code implementations • EMNLP (ACL) 2018 • Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme
We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.
no code implementations • IJCNLP 2017 • Benjamin Van Durme, Tom Lippincott, Kevin Duh, Deana Burchfield, Adam Poliak, Cash Costello, Tim Finin, Scott Miller, James Mayfield, Philipp Koehn, Craig Harman, Dawn Lawrie, Chandler May, Max Thomas, Annabelle Carrell, Julianne Chaloux, Tongfei Chen, Alex Comerford, Mark Dredze, Benjamin Glass, Shudong Hao, Patrick Martin, Pushpendre Rastogi, Rashmi Sankepally, Travis Wolfe, Ying-Ying Tran, Ted Zhang
It combines a multitude of analytics with a flexible environment for customizing the workflow for different users.
1 code implementation • SEMEVAL 2017 • Francis Ferraro, Adam Poliak, Ryan Cotterell, Benjamin Van Durme
We study how different frame annotations complement one another when learning continuous lexical semantics.
1 code implementation • EACL 2017 • Adam Poliak, Pushpendre Rastogi, M. Patrick Martin, Benjamin Van Durme
We propose ECO: a new way to generate embeddings for phrases that is Efficient, Compositional, and Order-sensitive.
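The toy sketch below illustrates order sensitivity only, using random per-position linear maps so that "machine translation" and "translation machine" compose to different vectors; ECO's actual operators are derived differently, so treat this as an analogy rather than the method.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50
# Hypothetical per-position transforms, one per slot in a 2-word phrase.
POS_MAPS = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM) for _ in range(2)]

def compose(word_vecs):
    """Order-sensitive sum: position i is routed through POS_MAPS[i]."""
    return sum(POS_MAPS[i] @ v for i, v in enumerate(word_vecs))
```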
no code implementations • EACL 2017 • Ryan Cotterell, Adam Poliak, Benjamin Van Durme, Jason Eisner
The popular skip-gram model induces word embeddings by exploiting the signal from word-context co-occurrence.
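For reference, the skip-gram-with-negative-sampling signal for a single (word, context) pair, in a toy NumPy form (vector shapes assumed, no training loop):

```python
import numpy as np

def sgns_loss(w, c_pos, c_negs):
    """-log sigma(w.c_pos) - sum_k log sigma(-w.c_neg_k)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return float(-np.log(sigmoid(w @ c_pos))
                 - np.log(sigmoid(-(c_negs @ w))).sum())
```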