no code implementations • 11 Jun 2016 • Shikhar Sharma, Jing He, Kaheer Suleman, Hannes Schulz, Philip Bachman
Natural language generation plays a critical role in spoken dialogue systems.
no code implementations • WS 2017 • Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, Kaheer Suleman
We developed this dataset to study the role of memory in goal-oriented dialogue systems.
no code implementations • WS 2016 • Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman
Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner can be considerably bootstrapped by combining supervised learning with batch reinforcement learning.
no code implementations • 30 Jun 2016 • Layla El Asri, Jing He, Kaheer Suleman
The model takes as input a sequence of dialogue contexts and outputs a sequence of dialogue acts corresponding to user intentions.
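The input/output contract described above (a sequence of dialogue contexts in, a sequence of dialogue acts out) can be sketched with a minimal stand-in. The keyword rules below are hypothetical placeholders for the learned model; only the interface reflects the description.

```python
# Minimal stand-in for a dialogue-act tagger: maps each dialogue context
# (utterance) to a predicted user dialogue act. A real system would use a
# trained sequence model; the rules here are illustrative only.
def predict_dialogue_acts(contexts):
    """Map each dialogue context to a predicted user dialogue act label."""
    acts = []
    for utterance in contexts:
        text = utterance.lower()
        if any(w in text for w in ("hello", "hi ")):
            acts.append("greet")
        elif "?" in text or text.startswith(("what", "where", "when")):
            acts.append("request")
        elif "bye" in text:
            acts.append("bye")
        else:
            acts.append("inform")
    return acts

print(predict_dialogue_acts(["Hello there", "Where is the restaurant?", "Goodbye"]))
```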
no code implementations • EMNLP 2016 • Adam Trischler, Zheng Ye, Xingdi Yuan, Kaheer Suleman
We present the EpiReader, a novel model for machine comprehension of text.
Ranked #7 on Question Answering on Children's Book Test
no code implementations • EMNLP 2018 • Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung
We introduce an automatic system that achieves state-of-the-art results on the Winograd Schema Challenge (WSC), a common sense reasoning task that requires diverse, complex forms of inference and knowledge.
Ranked #65 on Coreference Resolution on Winograd Schema Challenge
no code implementations • NAACL 2018 • Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung
We introduce an automatic system that performs well on two common-sense reasoning tasks, the Winograd Schema Challenge (WSC) and the Choice of Plausible Alternatives (COPA).
no code implementations • 13 Aug 2019 • Peter Potash, Kaheer Suleman
We propose a two-agent game in which a questioner must pose questions that discriminate between sentences, incorporate responses from an answerer, and maintain a hypothesis state.
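The game loop above can be sketched as follows. The predicate-style "questions" are an illustrative simplification (the paper's agents are learned models); the sketch only shows how answers from a truthful answerer shrink the questioner's hypothesis state.

```python
# Toy version of the two-agent game: the questioner holds a hypothesis set
# of candidate sentences, asks discriminating questions, and filters the
# set by the answerer's truthful responses.
def play_game(candidates, true_sentence, questions):
    hypothesis = set(candidates)          # hypothesis state: still-plausible sentences
    for ask in questions:                 # each "question" is a predicate over sentences
        answer = ask(true_sentence)       # answerer responds about the true sentence
        hypothesis = {s for s in hypothesis if ask(s) == answer}
        if len(hypothesis) == 1:          # hypothesis narrowed to a single sentence
            break
    return hypothesis

sentences = ["the cat sleeps", "the dog barks", "the cat eats"]
qs = [lambda s: "cat" in s, lambda s: "sleeps" in s]
print(play_game(sentences, "the cat sleeps", qs))
```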
no code implementations • 9 Sep 2019 • Deepak Gupta, Kaheer Suleman, Mahmoud Adada, Andrew McNamara, Justin Harris
In this paper, we propose a method for incorporating world knowledge (linked entities and fine-grained entity types) into a neural question generation model.
no code implementations • WS 2019 • Ian Porada, Kaheer Suleman, Jackie Chi Kit Cheung
Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting.
no code implementations • COLING 2020 • Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung
The Winograd Schema Challenge (WSC) and variants inspired by it have become important benchmarks for common-sense reasoning (CSR).
no code implementations • ACL 2021 • Ali Emami, Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
A false contract is more likely to be rejected than a contract is, yet a false key is less likely than a key to open doors.
no code implementations • NAACL 2022 • Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé III, Kaheer Suleman, Alexandra Olteanu
There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult.
no code implementations • 16 Mar 2023 • Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
We investigate the extent to which errors of current coreference resolution models are associated with existing differences in operationalization across datasets (OntoNotes, PreCo, and Winogrande).
1 code implementation • 15 Dec 2022 • Akshatha Arodi, Martin Pömsl, Kaheer Suleman, Adam Trischler, Alexandra Olteanu, Jackie Chi Kit Cheung
In this work, we propose a test suite of coreference resolution subtasks that require reasoning over multiple facts.
1 code implementation • Joint Conference on Lexical and Computational Semantics 2020 • Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.
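A consistency probe of this kind can be sketched as below. `toy_fill` is a hypothetical stand-in for a masked language model's cloze completion (a real probe would query BERT); the hard-coded answers mimic the non-systematic behaviour the probe is designed to expose, where paraphrases of the same hypernym query receive different answers.

```python
# Sketch of a cloze consistency probe: query the same hypernym relation
# with paraphrased prompts and check whether the answers agree.
def toy_fill(prompt):
    # Hypothetical stand-in for masked-LM mask filling; the differing
    # answers across paraphrases illustrate inconsistent "knowledge".
    answers = {
        "A robin is a [MASK].": "bird",
        "Robins are [MASK]s.": "animal",
    }
    return answers.get(prompt, "thing")

def consistent(prompts):
    preds = [toy_fill(p) for p in prompts]
    return len(set(preds)) == 1, preds

ok, preds = consistent(["A robin is a [MASK].", "Robins are [MASK]s."])
print(ok, preds)  # inconsistent answers across paraphrases
```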
1 code implementation • IJCNLP 2019 • Paul Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung
Recent studies have significantly improved the state-of-the-art on common-sense reasoning (CSR) benchmarks like the Winograd Schema Challenge (WSC) and SWAG.
Ranked #36 on Coreference Resolution on Winograd Schema Challenge
1 code implementation • ACL 2019 • Ali Emami, Paul Trichelair, Adam Trischler, Kaheer Suleman, Hannes Schulz, Jackie Chi Kit Cheung
To explain this performance gap, we show empirically that state-of-the-art models often fail to capture context, instead relying on the gender or number of candidate antecedents to make a decision.
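The shortcut described above can be made concrete with a context-free baseline that resolves a pronoun purely by matching the number or gender of candidate antecedents. The tiny pronoun lexicons and feature dictionaries are illustrative assumptions, not the paper's setup.

```python
# Context-free baseline illustrating the shortcut: pick the candidate
# antecedent whose number/gender matches the pronoun, ignoring context.
PLURAL = {"they", "them"}
FEMININE = {"she", "her"}
MASCULINE = {"he", "him"}

def shortcut_resolve(pronoun, candidates):
    """candidates: list of (mention, {'number': ..., 'gender': ...}) pairs."""
    p = pronoun.lower()
    for mention, feats in candidates:
        if p in PLURAL and feats["number"] == "plural":
            return mention
        if p in FEMININE and feats["gender"] == "female":
            return mention
        if p in MASCULINE and feats["gender"] == "male":
            return mention
    return candidates[0][0]  # no feature match: fall back to the first candidate

cands = [("the trophies", {"number": "plural", "gender": "none"}),
         ("the suitcase", {"number": "singular", "gender": "none"})]
print(shortcut_resolve("they", cands))
```

Because the heuristic never reads the sentence, it answers identically however the context changes, which is exactly the failure mode the paper reports.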
1 code implementation • NAACL 2021 • Ian Porada, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events.
1 code implementation • ACL 2016 • Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, Kaheer Suleman
The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set.
Ranked #1 on Question Answering on MCTest-160
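The idea of scoring a candidate answer from several comparison perspectives, rather than one hand-designed feature, can be sketched as below. The fixed word-overlap "perspectives" are an illustrative assumption; in the paper the perspectives are trainable neural views over passage, question, and answer.

```python
# Toy multi-perspective scorer: combine several simple comparisons of the
# passage, question, and candidate answer into one score. Real models learn
# these views; the heuristics here only illustrate the structure.
def _tokens(s):
    return {w.strip(".,?!").lower() for w in s.split()}

def word_overlap(a, b):
    wa, wb = _tokens(a), _tokens(b)
    return len(wa & wb) / max(1, len(wa | wb))

def score_answer(passage, question, answer):
    views = [
        word_overlap(passage, answer),    # is the answer grounded in the passage?
        word_overlap(question, answer),   # is the answer related to the question?
        word_overlap(passage, question),  # shared-context signal
    ]
    return sum(views) / len(views)

p = "The cat sat on the mat."
q = "Where did the cat sit?"
print(score_answer(p, q, "on the mat") > score_answer(p, q, "in the car"))
```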
1 code implementation • 2 Oct 2021 • Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries, Siva Reddy
On average, a conversation in our dataset spans 13 question-answer turns and involves four topics (documents).
2 code implementations • WS 2017 • Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs.