no code implementations • 16 Mar 2023 • Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
We investigate the extent to which errors of current coreference resolution models are associated with existing differences in operationalization across datasets (OntoNotes, PreCo, and Winogrande).
no code implementations • NAACL 2022 • Ian Porada, Alessandro Sordoni, Jackie Chi Kit Cheung
Transformer models pre-trained with a masked-language-modeling objective (e.g., BERT) encode commonsense knowledge as evidenced by behavioral probes; however, the extent to which this knowledge is acquired by systematic inference over the semantics of the pre-training corpora is an open question.
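As context for what a behavioral probe of a masked language model can look like, here is a minimal sketch using the Hugging Face `transformers` fill-mask pipeline; the prompt and the choice of `bert-base-uncased` are illustrative assumptions, not the probes evaluated in the paper.

```python
# Minimal sketch of a behavioral probe for a masked language model.
# The prompt and model choice are illustrative, not the paper's probes.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Check whether the model ranks commonsensical completions highly.
prompt = "You can use a knife to [MASK] an apple."
for prediction in fill_mask(prompt, top_k=5):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```

A probe in this style treats the model's ranking of mask fillers as behavioral evidence of commonsense knowledge; it says nothing by itself about how that knowledge was acquired, which is the question the paper examines.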
no code implementations • ACL 2021 • Ali Emami, Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
A false contract is more likely to be rejected than a contract is, yet a false key is less likely to open doors than a key is.
1 code implementation • NAACL 2021 • Ian Porada, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events.
no code implementations • WS 2019 • Ian Porada, Kaheer Suleman, Jackie Chi Kit Cheung
Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting.
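For illustration, here is a minimal sketch of the kind of supervised distributional setup this describes: represent each (subject, verb, object) event with pretrained word vectors and fit a simple classifier on plausibility labels. The GloVe vectors, toy triples, labels, and logistic-regression model are assumptions made for the sketch, not the data or models used in the paper.

```python
# Minimal sketch of a supervised plausibility classifier built on
# distributional word vectors. Toy data; illustrative only.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

vectors = api.load("glove-wiki-gigaword-50")  # pretrained distributional embeddings

def embed(triple):
    """Average the word vectors of a (subject, verb, object) triple."""
    return np.mean([vectors[w] for w in triple], axis=0)

# Toy physical-plausibility labels: 1 = plausible, 0 = implausible.
events = [("man", "eat", "apple"), ("cat", "drink", "milk"),
          ("rock", "eat", "apple"), ("apple", "drink", "man")]
labels = [1, 1, 0, 0]

X = np.stack([embed(e) for e in events])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([embed(("dog", "eat", "rock"))]))
```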
2 code implementations • 25 Apr 2019 • Mingde Zhao, Sitao Luan, Ian Porada, Xiao-Wen Chang, Doina Precup
Temporal-Difference (TD) learning is a standard and very successful reinforcement learning approach, at the core of both algorithms that learn the value of a given policy and algorithms that learn how to improve policies.
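As a concrete reference point for the policy-evaluation side of TD learning, here is a minimal sketch of tabular TD(0) on the standard random-walk example; the environment, step size, and episode count are illustrative choices and this is not the method proposed in the paper.

```python
# Minimal sketch of tabular TD(0) policy evaluation on a random walk.
import random

N_STATES, ALPHA, GAMMA = 5, 0.1, 1.0
V = [0.0] * (N_STATES + 2)  # state values; states 0 and N+1 are terminal

for _ in range(1000):                      # episodes
    s = (N_STATES + 1) // 2                # start in the middle state
    while s not in (0, N_STATES + 1):
        s_next = s + random.choice([-1, 1])          # uniform random policy
        r = 1.0 if s_next == N_STATES + 1 else 0.0   # reward only at the right end
        # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[1:-1]])  # approaches the true values 1/6 .. 5/6
```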