Search Results for author: Ian Porada

Found 7 papers, 3 papers with code

Modeling Event Plausibility with Consistent Conceptual Abstraction

1 code implementation • NAACL 2021 • Ian Porada, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events.

Common Sense Reasoning
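
An illustrative aside on the entry above: conceptual abstraction here means generalizing an event to more abstract concepts (e.g., from "gorilla" up toward "animal") and checking that a model's plausibility judgments stay consistent along the way. The sketch below shows one common way to obtain such abstractions, walking a WordNet hypernym chain with NLTK; it is not the authors' implementation, and hypernym_chain is a hypothetical helper.

    # Illustrative sketch only, not the paper's code: abstract a noun to
    # increasingly general WordNet concepts via its hypernym chain.
    from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

    def hypernym_chain(word, max_depth=5):
        """Return `word`'s synset followed by increasingly abstract hypernyms."""
        synset = wn.synsets(word, pos=wn.NOUN)[0]  # most common noun sense
        chain = [synset.name()]
        for _ in range(max_depth):
            hypernyms = synset.hypernyms()
            if not hypernyms:
                break
            synset = hypernyms[0]  # follow the first (most direct) hypernym
            chain.append(synset.name())
        return chain

    print(hypernym_chain("gorilla"))
    # e.g. ['gorilla.n.01', 'great_ape.n.01', 'ape.n.01', 'primate.n.02', ...]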

META-Learning State-based Eligibility Traces for More Sample-Efficient Policy Evaluation

2 code implementations • 25 Apr 2019 • Mingde Zhao, Sitao Luan, Ian Porada, Xiao-Wen Chang, Doina Precup

Temporal-Difference (TD) learning is a standard and very successful reinforcement learning approach, at the core of both algorithms that learn the value of a given policy and algorithms that learn how to improve policies.

Meta-Learning
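
For context on the entry above: standard TD(lambda) policy evaluation keeps an eligibility trace per state and decays every trace with a single global parameter lambda, whereas this paper meta-learns state-based traces. Below is a minimal tabular sketch of the standard baseline only (not the META method); env is a hypothetical environment exposing reset() and step(action) -> (next_state, reward, done).

    # Minimal tabular TD(lambda) policy evaluation with accumulating
    # eligibility traces. Sketch of the standard baseline only; `env`
    # and its interface are assumptions, not from the paper.
    import numpy as np

    def td_lambda(env, policy, n_states, episodes=500,
                  alpha=0.1, gamma=0.99, lam=0.9):
        V = np.zeros(n_states)           # state-value estimates
        for _ in range(episodes):
            e = np.zeros(n_states)       # traces reset at episode start
            s, done = env.reset(), False
            while not done:
                s_next, r, done = env.step(policy(s))
                # TD error; bootstrap from V[s_next] only if the episode continues
                delta = r + gamma * V[s_next] * (not done) - V[s]
                e[s] += 1.0              # accumulating trace for the current state
                V += alpha * delta * e   # credit all recently visited states
                e *= gamma * lam         # decay every trace
                s = s_next
        return V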

Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text

no code implementations • WS 2019 • Ian Porada, Kaheer Suleman, Jackie Chi Kit Cheung

Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting.

Natural Language Understanding

Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge

no code implementations • NAACL 2022 • Ian Porada, Alessandro Sordoni, Jackie Chi Kit Cheung

Transformer models pre-trained with a masked-language-modeling objective (e.g., BERT) encode commonsense knowledge as evidenced by behavioral probes; however, the extent to which this knowledge is acquired by systematic inference over the semantics of the pre-training corpora is an open question.

Language Modelling • Masked Language Modeling • +1
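
Behavioral probes like those referenced above typically query the masked LM directly and inspect its ranked predictions for a masked slot. A minimal illustration with Hugging Face transformers' fill-mask pipeline follows; the probe sentence is made up for the example and is not the paper's probing setup.

    # Illustrative behavioral probe of a masked LM's commonsense knowledge.
    # The probe sentence is a toy example, not from the paper.
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    # If BERT encodes the relevant commonsense, plausible fillers
    # should receive high scores.
    for pred in unmasker("A bird can [MASK]."):
        print(f"{pred['token_str']:>12}  {pred['score']:.3f}")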

Investigating Failures to Generalize for Coreference Resolution Models

no code implementations • 16 Mar 2023 • Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

We investigate the extent to which errors of current coreference resolution models are associated with existing differences in operationalization across datasets (OntoNotes, PreCo, and Winogrande).

Coreference Resolution

A Controlled Reevaluation of Coreference Resolution Models

1 code implementation • 31 Mar 2024 • Ian Porada, Xiyuan Zou, Jackie Chi Kit Cheung

When controlling for language model size, encoder-based CR models outperform more recent decoder-based models in terms of both accuracy and inference speed.

Coreference Resolution • Language Modelling
