Search Results for author: Ethan Perez

Found 17 papers, 10 papers with code

Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions

no code implementations • LNLS (ACL) 2022 • Alicia Parrish, Harsh Trivedi, Ethan Perez, Angelica Chen, Nikita Nangia, Jason Phang, Samuel R. Bowman

We use long contexts: humans familiar with the context write convincing explanations for pre-selected correct and incorrect answers, and we test whether those explanations allow humans who have not read the full context to determine the correct answer more accurately.

Multiple-choice Reading Comprehension

Red Teaming Language Models with Language Models

no code implementations • 7 Feb 2022 • Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, Geoffrey Irving

In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases ("red teaming") using another LM.

Chatbot • Language Modelling
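The abstract above describes a generate-and-filter loop. Below is a minimal illustrative sketch of that loop with toy stand-ins for the red LM, the target LM, and the harm classifier; all names and logic here are hypothetical, not the paper's code:

```python
# Illustrative sketch of LM-based red teaming: one LM proposes test
# questions, the target LM answers, and a classifier flags harmful
# replies. All three functions are toy stand-ins.

def red_team_lm(topic: str) -> list[str]:
    # Stand-in for sampling test cases from a "red" LM.
    return [f"Test question {i} about {topic}" for i in range(5)]

def target_lm(question: str) -> str:
    # Stand-in for the target chatbot being probed.
    return f"Reply to: {question}"

def harm_classifier(reply: str) -> float:
    # Stand-in for a trained offensiveness classifier; returns P(harmful).
    return 0.9 if "Test question 3" in reply else 0.1

def red_team(topic: str, threshold: float = 0.5) -> list[tuple[str, str]]:
    failures = []
    for question in red_team_lm(topic):
        reply = target_lm(question)
        if harm_classifier(reply) >= threshold:
            failures.append((question, reply))  # keep failing test cases
    return failures

print(red_team("a sensitive topic"))
```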

True Few-Shot Learning with Language Models

1 code implementation • NeurIPS 2021 • Ethan Perez, Douwe Kiela, Kyunghyun Cho

Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning.

Few-Shot Learning • Model Selection • +1
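One criterion studied for true few-shot model selection is cross-validation over the few labeled examples alone, with no separate validation set. A hedged sketch of leave-one-out prompt selection, where the scoring function is a stand-in for querying a real LM:

```python
# Sketch of true few-shot prompt selection via leave-one-out
# cross-validation over the K examples themselves.

def lm_score(prompt_template: str, demos: list[tuple[str, str]],
             x: str, y: str) -> float:
    # Stand-in: would return the LM's log-probability of label y for
    # input x, conditioned on the formatted demonstrations.
    return -float(len(prompt_template) + len(demos))  # toy value

def loo_cv_score(prompt_template: str,
                 examples: list[tuple[str, str]]) -> float:
    # Each example takes a turn as the held-out point; the rest serve
    # as in-context demonstrations.
    total = 0.0
    for i, (x, y) in enumerate(examples):
        demos = examples[:i] + examples[i + 1:]
        total += lm_score(prompt_template, demos, x, y)
    return total / len(examples)

examples = [("great movie", "positive"), ("dull plot", "negative"),
            ("loved it", "positive"), ("waste of time", "negative")]
prompts = ["Review: {x}\nSentiment: {y}", "{x} => {y}"]
best = max(prompts, key=lambda p: loo_cv_score(p, examples))
print("selected prompt:", best)
```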

Unsupervised Question Decomposition for Question Answering

2 code implementations • EMNLP 2020 • Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, Douwe Kiela

We aim to improve question answering (QA) by decomposing hard questions into simpler sub-questions that existing QA systems are capable of answering.

Question Answering
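A rough sketch of the decompose-answer-recompose pipeline the abstract describes. The decomposer and single-hop QA system below are stand-ins (the paper learns the decomposer without supervision); function names and examples are illustrative:

```python
# Sketch of question decomposition for multi-hop QA: break a hard
# question into sub-questions, answer each with an existing single-hop
# QA system, then surface the sub-answers as evidence for a reader.

def decompose(question: str) -> list[str]:
    # Stand-in for the paper's unsupervised decomposition model.
    return ["Who directed Inception?", "What else did that person direct?"]

def single_hop_qa(sub_question: str) -> str:
    # Stand-in for an off-the-shelf single-hop QA system.
    return {"Who directed Inception?": "Christopher Nolan"}.get(
        sub_question, "unknown")

def answer(question: str) -> str:
    sub_answers = [(q, single_hop_qa(q)) for q in decompose(question)]
    # A downstream reader would condition on question + sub-answers.
    evidence = "; ".join(f"{q} -> {a}" for q, a in sub_answers)
    return f"Answer {question!r} using: {evidence}"

print(answer("What has the director of Inception also directed?"))
```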

Finding Generalizable Evidence by Learning to Convince Q&A Models

1 code implementation • IJCNLP 2019 • Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, Kyunghyun Cho

We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed.

Question Answering
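A toy sketch of the evidence-selection idea: pick the passage sentences that most raise a QA model's confidence in a candidate answer. The confidence function is a stand-in for a trained QA model, and the greedy procedure is an illustrative simplification of learning to convince:

```python
# Greedy evidence selection against a (stubbed) QA model.

def qa_confidence(question: str, answer: str, evidence: list[str]) -> float:
    # Stand-in: would return P(answer | question, evidence) from a QA model.
    return sum(answer.lower() in s.lower() for s in evidence) / (len(evidence) or 1)

def select_evidence(question: str, answer: str,
                    sentences: list[str], k: int = 2) -> list[str]:
    # Repeatedly add the sentence that most raises the model's confidence.
    chosen: list[str] = []
    for _ in range(k):
        remaining = [s for s in sentences if s not in chosen]
        best = max(remaining,
                   key=lambda s: qa_confidence(question, answer, chosen + [s]))
        chosen.append(best)
    return chosen

passage = ["Marie Curie won two Nobel Prizes.",
           "She was born in Warsaw.",
           "Curie pioneered research on radioactivity."]
print(select_evidence("Who pioneered radioactivity research?", "Curie", passage))
```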

Supervised Multimodal Bitransformers for Classifying Images and Text

6 code implementations • 6 Sep 2019 • Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Ethan Perez, Davide Testuggine

Self-supervised bidirectional transformer models such as BERT have led to dramatic improvements in a wide variety of textual classification tasks.

Classification • General Classification
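A minimal sketch of the bitransformer pattern, assuming toy dimensions: pooled image features are projected into the text transformer's token-embedding space and consumed as extra "tokens". This illustrates the general idea, not the paper's exact architecture:

```python
# Toy multimodal bitransformer: image features become a prepended
# token in a small transformer encoder over text.

import torch
import torch.nn as nn

class MultimodalBitransformer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, img_dim=2048, n_classes=2):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)  # image feature -> token space
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, token_ids, img_feats):
        txt = self.tok_embed(token_ids)              # (B, T, d)
        img = self.img_proj(img_feats).unsqueeze(1)  # (B, 1, d)
        fused = torch.cat([img, txt], dim=1)         # image token first
        hidden = self.encoder(fused)
        return self.classifier(hidden[:, 0])         # classify from image slot

model = MultimodalBitransformer()
logits = model(torch.randint(0, 1000, (2, 8)), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 2])
```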

ELI5: Long Form Question Answering

2 code implementations • ACL 2019 • Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli

We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions.

Language Modelling • Question Answering

Visual Reasoning with Multi-hop Feature Modulation

1 code implementation • ECCV 2018 • Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Jérémie Mary, Philippe Preux, Aaron Courville, Olivier Pietquin

Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue.

Question Answering • Visual Dialog • +2

HoME: a Household Multimodal Environment

no code implementations • 29 Nov 2017 • Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville

We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context.

OpenAI Gym • Reinforcement Learning
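Given the OpenAI Gym tag, an agent would presumably interact with the environment through a reset/step loop over multimodal observations. The stub below is purely hypothetical: the observation keys, action space, and reward are invented for illustration and do not reflect HoME's actual API:

```python
# Hypothetical Gym-style interaction loop over multimodal observations.

import random

class ToyHouseholdEnv:
    def reset(self) -> dict:
        return {"vision": [0.0] * 16, "audio": [0.0] * 8, "semantics": "kitchen"}

    def step(self, action: int):
        obs = {"vision": [random.random() for _ in range(16)],
               "audio": [random.random() for _ in range(8)],
               "semantics": "kitchen"}
        reward = 1.0 if action == 2 else 0.0  # toy reward for a "pick up" action
        done = reward > 0
        return obs, reward, done, {}

env = ToyHouseholdEnv()
obs = env.reset()
for t in range(10):
    action = random.randrange(4)  # random policy
    obs, reward, done, info = env.step(action)
    if done:
        break
print("episode ended at step", t, "with reward", reward)
```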

Learning Visual Reasoning Without Strong Priors

2 code implementations • 10 Jul 2017 • Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville

Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.

Visual Reasoning
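The "proper conditioning" here is feature-wise modulation (the mechanism later named FiLM): a network conditioned on the question predicts per-channel scale and shift parameters that modulate visual feature maps. A small sketch with toy sizes, not the paper's exact network:

```python
# One FiLM-style block: a linear head predicts per-channel (gamma, beta)
# from a conditioning vector and applies them to a conv feature map.

import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    def __init__(self, cond_dim=32, channels=16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.film = nn.Linear(cond_dim, 2 * channels)  # predicts gamma, beta

    def forward(self, x, cond):
        gamma, beta = self.film(cond).chunk(2, dim=-1)  # (B, C) each
        h = torch.relu(self.conv(x))
        # Feature-wise affine modulation: gamma * h + beta, per channel.
        return gamma[:, :, None, None] * h + beta[:, :, None, None]

block = FiLMBlock()
x = torch.randn(2, 16, 8, 8)  # image feature maps
q = torch.randn(2, 32)        # e.g., a question embedding
print(block(x, q).shape)      # torch.Size([2, 16, 8, 8])
```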

Semi-Supervised Learning with the Deep Rendering Mixture Model

no code implementations • 6 Dec 2016 • Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B. Patel

Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning.

Variational Inference
