Search Results for author: Kaheer Suleman

Found 20 papers, 7 papers with code

Modeling Event Plausibility with Consistent Conceptual Abstraction

1 code implementation · NAACL 2021 · Ian Porada, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events.

Common Sense Reasoning

On the Systematicity of Probing Contextualized Word Representations: The Case of Hypernymy in BERT

1 code implementation · Joint Conference on Lexical and Computational Semantics 2020 · Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.

An Analysis of Dataset Overlap on Winograd-Style Tasks

no code implementations · COLING 2020 · Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

The Winograd Schema Challenge (WSC) and variants inspired by it have become important benchmarks for common-sense reasoning (CSR).

Common Sense Reasoning

Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text

no code implementations · WS 2019 · Ian Porada, Kaheer Suleman, Jackie Chi Kit Cheung

Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting.

Natural Language Understanding · Pretrained Language Models

Improving Neural Question Generation using World Knowledge

no code implementations · 9 Sep 2019 · Deepak Gupta, Kaheer Suleman, Mahmoud Adada, Andrew McNamara, Justin Harris

In this paper, we propose a method for incorporating world knowledge (linked entities and fine-grained entity types) into a neural question generation model.

Question Generation

Playing log(N)-Questions over Sentences

no code implementations · 13 Aug 2019 · Peter Potash, Kaheer Suleman

We propose a two-agent game wherein a questioner must be able to conjure discerning questions between sentences, incorporate responses from an answerer, and keep track of a hypothesis state.

How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG

1 code implementation · IJCNLP 2019 · Paul Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

Recent studies have significantly improved the state-of-the-art on common-sense reasoning (CSR) benchmarks like the Winograd Schema Challenge (WSC) and SWAG.

Common Sense Reasoning

The KnowRef Coreference Corpus: Removing Gender and Number Cues for Difficult Pronominal Anaphora Resolution

1 code implementation · ACL 2019 · Ali Emami, Paul Trichelair, Adam Trischler, Kaheer Suleman, Hannes Schulz, Jackie Chi Kit Cheung

To explain this performance gap, we show empirically that state-of-the-art models often fail to capture context, instead relying on the gender or number of candidate antecedents to make a decision.

Common Sense Reasoning · Coreference Resolution · +1

A Knowledge Hunting Framework for Common Sense Reasoning

no code implementations · EMNLP 2018 · Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

We introduce an automatic system that achieves state-of-the-art results on the Winograd Schema Challenge (WSC), a common sense reasoning task that requires diverse, complex forms of inference and knowledge.

Common Sense Reasoning

A Generalized Knowledge Hunting Framework for the Winograd Schema Challenge

no code implementations · NAACL 2018 · Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

We introduce an automatic system that performs well on two common-sense reasoning tasks, the Winograd Schema Challenge (WSC) and the Choice of Plausible Alternatives (COPA).

Common Sense Reasoning · Coreference Resolution · +1

A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems

no code implementations · 30 Jun 2016 · Layla El Asri, Jing He, Kaheer Suleman

The model takes as input a sequence of dialogue contexts and outputs a sequence of dialogue acts corresponding to user intentions.

Dialogue State Tracking · Spoken Dialogue Systems

Policy Networks with Two-Stage Training for Dialogue Systems

no code implementations · WS 2016 · Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman

Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner is considerably bootstrapped by a combination of supervised and batch RL.

Dialogue State Tracking · Gaussian Processes

A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data

1 code implementation · ACL 2016 · Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, Kaheer Suleman

The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set.

Question Answering · Reading Comprehension
