Search Results for author: Kaheer Suleman

Found 22 papers, 8 papers with code

A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data

1 code implementation • ACL 2016 • Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, Kaheer Suleman

The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set.

Question Answering • Reading Comprehension • +1
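
As a rough illustration of the idea described above of comparing passage, question, and answer through several trainable perspectives, the sketch below scores a candidate answer with a handful of learned bilinear comparisons; the perspective functions, dimensions, and aggregation are illustrative assumptions, not the paper's exact parallel-hierarchical architecture.

```python
import torch
import torch.nn as nn

class MultiPerspectiveScorer(nn.Module):
    """Score a candidate answer against a passage and question through several
    trainable comparison 'perspectives' (illustrative sketch only)."""

    def __init__(self, dim: int = 128, n_perspectives: int = 3):
        super().__init__()
        # One trainable bilinear comparison per perspective.
        self.perspectives = nn.ModuleList(
            nn.Bilinear(dim, dim, 1) for _ in range(n_perspectives)
        )
        # Learned weights for combining the per-perspective scores.
        self.combine = nn.Linear(n_perspectives, 1)

    def forward(self, passage: torch.Tensor, question: torch.Tensor,
                answer: torch.Tensor) -> torch.Tensor:
        # Compare a (passage + question) context against the answer under each
        # perspective, then aggregate the resulting scores.
        context = passage + question
        scores = torch.cat([p(context, answer) for p in self.perspectives], dim=-1)
        return self.combine(scores).squeeze(-1)

# Toy usage with random sentence embeddings for one candidate answer.
scorer = MultiPerspectiveScorer()
p, q, a = torch.randn(1, 128), torch.randn(1, 128), torch.randn(1, 128)
print(scorer(p, q, a))
```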

Policy Networks with Two-Stage Training for Dialogue Systems

no code implementations • WS 2016 • Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman

Indeed, with only a few hundred dialogues collected under a handcrafted policy, the actor-critic deep learner is effectively bootstrapped by combining supervised learning with batch reinforcement learning.

Dialogue State Tracking • Gaussian Processes • +2
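
A minimal sketch of the two-stage training scheme described in this entry: supervised pre-training of a policy network on dialogues logged from a handcrafted policy, followed by off-policy updates on the same logged data. The dimensions, losses, and the simple return-baselined policy-gradient update below are simplifying assumptions standing in for the paper's actor-critic learner, not its exact algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions: dialogue-state features and number of system acts.
STATE_DIM, N_ACTIONS = 20, 10

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Logged corpus from a handcrafted policy: states, chosen acts, episode returns.
states = torch.randn(512, STATE_DIM)
acts = torch.randint(0, N_ACTIONS, (512,))
returns = torch.randn(512)

# Stage 1: supervised pre-training -- imitate the handcrafted policy's actions.
for _ in range(100):
    loss = F.cross_entropy(policy(states), acts)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Stage 2: batch RL -- reweight the logged actions by their baselined returns,
# a simple off-policy policy-gradient pass over the same fixed data.
advantages = returns - returns.mean()
for _ in range(100):
    log_probs = F.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, acts.unsqueeze(1)).squeeze(1)
    loss = -(advantages * chosen).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
```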

A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems

no code implementations • 30 Jun 2016 • Layla El Asri, Jing He, Kaheer Suleman

The model takes as input a sequence of dialogue contexts and outputs a sequence of dialogue acts corresponding to user intentions.

Dialogue State Tracking • Spoken Dialogue Systems • +1
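
A minimal sketch of the sequence-to-sequence mapping described above, from a sequence of dialogue-context vectors to a sequence of user dialogue acts. The GRU encoder-decoder, sizes, and one-hot act inputs are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class UserSimulator(nn.Module):
    """Encode a sequence of dialogue-context vectors, then decode a sequence
    of user dialogue acts (illustrative sketch only)."""

    def __init__(self, context_dim: int = 32, hidden: int = 64, n_acts: int = 12):
        super().__init__()
        self.encoder = nn.GRU(context_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(n_acts, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_acts)

    def forward(self, contexts: torch.Tensor, prev_acts: torch.Tensor) -> torch.Tensor:
        # contexts: (batch, turns, context_dim); prev_acts: one-hot (batch, turns, n_acts)
        _, state = self.encoder(contexts)            # summarize the dialogue so far
        decoded, _ = self.decoder(prev_acts, state)  # condition decoding on that summary
        return self.out(decoded)                     # per-turn scores over dialogue acts

# Toy usage: 2 dialogues, 5 turns each.
sim = UserSimulator()
ctx = torch.randn(2, 5, 32)
prev = torch.zeros(2, 5, 12)
print(sim(ctx, prev).shape)  # torch.Size([2, 5, 12])
```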

A Generalized Knowledge Hunting Framework for the Winograd Schema Challenge

no code implementations • NAACL 2018 • Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

We introduce an automatic system that performs well on two common-sense reasoning tasks, the Winograd Schema Challenge (WSC) and the Choice of Plausible Alternatives (COPA).

Common Sense Reasoning • Coreference Resolution • +1

A Knowledge Hunting Framework for Common Sense Reasoning

no code implementations • EMNLP 2018 • Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

We introduce an automatic system that achieves state-of-the-art results on the Winograd Schema Challenge (WSC), a common sense reasoning task that requires diverse, complex forms of inference and knowledge.

Common Sense Reasoning • Coreference Resolution

The KnowRef Coreference Corpus: Removing Gender and Number Cues for Difficult Pronominal Anaphora Resolution

1 code implementation • ACL 2019 • Ali Emami, Paul Trichelair, Adam Trischler, Kaheer Suleman, Hannes Schulz, Jackie Chi Kit Cheung

To explain this performance gap, we show empirically that state-of-the-art models often fail to capture context, instead relying on the gender or number of candidate antecedents to make a decision.

Common Sense Reasoning • coreference-resolution • +2

Playing log(N)-Questions over Sentences

no code implementations • 13 Aug 2019 • Peter Potash, Kaheer Suleman

We propose a two-agent game wherein a questioner must pose discerning questions that distinguish between sentences, incorporate responses from an answerer, and keep track of a hypothesis state.
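
The log(N) framing suggests a questioner that can identify a target sentence by roughly halving its hypothesis set with each answer. Below is a purely illustrative game loop under that reading; the word-containment oracle answerer and greedy question selection are assumptions, not the paper's learned agents.

```python
import math

sentences = [
    "the cat sat on the mat",
    "the dog chased the ball",
    "a bird flew over the lake",
    "the chef cooked a stew",
]

def answerer(question_word: str, target: str) -> bool:
    """Oracle answerer: does the target sentence contain the word?"""
    return question_word in target.split()

def play(target: str) -> str:
    hypotheses = list(sentences)  # questioner's hypothesis state
    while len(hypotheses) > 1:
        # Ask about the word that best splits the remaining hypotheses in half.
        vocab = {w for s in hypotheses for w in s.split()}
        word = min(vocab, key=lambda w: abs(sum(w in s.split() for s in hypotheses)
                                            - len(hypotheses) / 2))
        answer = answerer(word, target)
        hypotheses = [s for s in hypotheses if (word in s.split()) == answer]
    return hypotheses[0]

print(play("the chef cooked a stew"))
print("questions needed ~", math.ceil(math.log2(len(sentences))))
```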

Improving Neural Question Generation using World Knowledge

no code implementations • 9 Sep 2019 • Deepak Gupta, Kaheer Suleman, Mahmoud Adada, Andrew McNamara, Justin Harris

In this paper, we propose a method for incorporating world knowledge (linked entities and fine-grained entity types) into a neural question generation model.

Question Generation • Question-Generation • +1
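
A hedged sketch of the general mechanism this entry describes: feeding linked-entity and fine-grained entity-type information into a neural encoder alongside word embeddings. The vocabulary sizes and the simple per-token concatenation scheme are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class KnowledgeAugmentedEncoder(nn.Module):
    """Concatenate word, linked-entity, and entity-type embeddings per token
    before encoding -- an illustrative knowledge-augmented input sketch."""

    def __init__(self, vocab=5000, n_entities=1000, n_types=50,
                 w_dim=100, e_dim=25, t_dim=25, hidden=128):
        super().__init__()
        self.words = nn.Embedding(vocab, w_dim)
        self.entities = nn.Embedding(n_entities, e_dim)  # linked-entity ids (0 = none)
        self.types = nn.Embedding(n_types, t_dim)        # fine-grained type ids (0 = none)
        self.encoder = nn.GRU(w_dim + e_dim + t_dim, hidden, batch_first=True)

    def forward(self, word_ids, entity_ids, type_ids):
        x = torch.cat([self.words(word_ids),
                       self.entities(entity_ids),
                       self.types(type_ids)], dim=-1)
        outputs, _ = self.encoder(x)
        return outputs  # contextual states a question decoder could attend over

# Toy usage: batch of 2 passages, 7 tokens each.
enc = KnowledgeAugmentedEncoder()
ids = lambda high: torch.randint(0, high, (2, 7))
print(enc(ids(5000), ids(1000), ids(50)).shape)  # torch.Size([2, 7, 128])
```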

Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text

no code implementations • WS 2019 • Ian Porada, Kaheer Suleman, Jackie Chi Kit Cheung

Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting.

Natural Language Understanding

An Analysis of Dataset Overlap on Winograd-Style Tasks

no code implementations • COLING 2020 • Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

The Winograd Schema Challenge (WSC) and variants inspired by it have become important benchmarks for common-sense reasoning (CSR).

Common Sense Reasoning

On the Systematicity of Probing Contextualized Word Representations: The Case of Hypernymy in BERT

1 code implementation • Joint Conference on Lexical and Computational Semantics 2020 • Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.
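
Cloze-style hypernym retrieval of the kind mentioned above can be reproduced with a masked-language-model query. The snippet below is a minimal illustration using the Hugging Face `fill-mask` pipeline with two paraphrased templates for the same hypernym pair; the templates are examples, not the paper's probe set.

```python
from transformers import pipeline

# Masked-LM cloze queries against BERT (requires the `transformers` package).
fill = pipeline("fill-mask", model="bert-base-uncased")

# Paraphrased templates probing the same hypernymy fact (robin -> bird).
templates = [
    "A robin is a [MASK].",
    "A robin is a type of [MASK].",
]

for t in templates:
    top = fill(t, top_k=3)
    print(t, "->", [pred["token_str"] for pred in top])

# Comparing the top predictions across paraphrases gives a rough consistency
# check: systematic knowledge should not depend on the wording of the template.
```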

Modeling Event Plausibility with Consistent Conceptual Abstraction

1 code implementation • NAACL 2021 • Ian Porada, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events.

Common Sense Reasoning

Investigating Failures to Generalize for Coreference Resolution Models

no code implementations • 16 Mar 2023 • Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

We investigate the extent to which errors of current coreference resolution models are associated with existing differences in operationalization across datasets (OntoNotes, PreCo, and Winogrande).

coreference-resolution
