Search Results for author: William W. Cohen

Found 68 papers, 23 papers with code

ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers

1 code implementation 13 Oct 2021 Haitian Sun, William W. Cohen, Ruslan Salakhutdinov

In addition to conditional answers, the dataset also features: (1) long context documents with information that is related in logically complex ways; (2) multi-hop questions that require compositional logical reasoning; (3) a combination of extractive questions, yes/no questions, questions with multiple answers, and not-answerable questions; (4) questions asked without knowing the answers.

Question Answering Reading Comprehension

Multilingual Fact Linking

1 code implementation AKBC 2021 Keshav Kolluru, Martin Rezk, Pat Verga, William W. Cohen, Partha Talukdar

This makes it challenging to link KG facts to sentences in languages outside this limited set.

Re-Ranking

MATE: Multi-view Attention for Table Transformer Efficiency

2 code implementations EMNLP 2021 Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen

However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens.

Time-Aware Language Models as Temporal Knowledge Bases

no code implementations 29 Jun 2021 Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, William W. Cohen

We introduce a diagnostic dataset aimed at probing LMs for factual knowledge that changes over time and highlight problems with LMs at either end of the spectrum -- those trained on specific slices of temporal data, as well as those trained on a wide range of temporal data.

Iterative Hierarchical Attention for Answering Complex Questions over Long Documents

1 code implementation 1 Jun 2021 Haitian Sun, William W. Cohen, Ruslan Salakhutdinov

We propose a new model, DocHopper, that iteratively attends to different parts of long, hierarchically structured documents to answer complex questions.

Information Seeking Multi-hop Question Answering +2

What's the best place for an AI conference, Vancouver or ______: Why completing comparative questions is difficult

no code implementations 5 Apr 2021 Avishai Zagoury, Einat Minkov, Idan Szpektor, William W. Cohen

Here we study using such LMs to fill in entities in human-authored comparative questions, like "Which country is older, India or ______?"

Reasoning Over Virtual Knowledge Bases With Open Predicate Relations

no code implementations 14 Feb 2021 Haitian Sun, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W. Cohen

We present the Open Predicate Query Language (OPQL), a method for constructing a virtual KB (VKB) trained entirely from text.

Language Modelling Open-Domain Question Answering

Evaluating Explanations: How much do explanations from the teacher aid students?

no code implementations 1 Dec 2020 Danish Pruthi, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen

While many methods purport to explain predictions by highlighting salient features, what precise aims these explanations serve and how to evaluate their utility are often unstated.

Differentiable Open-Ended Commonsense Reasoning

no code implementations NAACL 2021 Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen

As a step towards making commonsense reasoning research more realistic, we propose to study open-ended commonsense reasoning (OpenCSR) -- the task of answering a commonsense question without any pre-defined choices -- using as a resource only a corpus of commonsense facts written in natural language.

Open Question Answering over Tables and Text

1 code implementation ICLR 2021 Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, William W. Cohen

In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.

Question Answering

Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge

no code implementations 2 Jul 2020 Pat Verga, Haitian Sun, Livio Baldini Soares, William W. Cohen

Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information.

Language Modelling Question Answering

Faithful Embeddings for Knowledge Base Queries

1 code implementation NeurIPS 2020 Haitian Sun, Andrew O. Arnold, Tania Bedrax-Weiss, Fernando Pereira, William W. Cohen

We address this problem with a novel QE method that is more faithful to deductive reasoning, and show that this leads to better performance on complex queries to incomplete KBs.

Question Answering

Differentiable Reasoning over a Virtual Knowledge Base

1 code implementation ICLR 2020 Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen

In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.

Re-Ranking

Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base

no code implementations ICLR 2020 William W. Cohen, Haitian Sun, R. Alex Hofer, Matthew Siegler

We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB.
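The core primitive behind this line of work can be sketched in a few lines: each KB relation r becomes an entity-by-entity matrix M_r, so "following" r from a weighted set of entities is one matrix-vector product. This is an illustrative toy sketch, not the paper's code or data; the paper's contribution is doing this at scale with a sparse, reified representation, while dense numpy is used here only for clarity.

```python
# Toy sketch of KB reasoning as matrix algebra (hypothetical mini-KB).
import numpy as np

entities = ["austin", "texas", "usa"]
eid = {e: i for i, e in enumerate(entities)}
n = len(entities)

# Encode the relation "located_in" as an n x n matrix: M[s, o] = 1
# iff the triple (s, located_in, o) is in the KB.
M_located_in = np.zeros((n, n))
for subj, obj in [("austin", "texas"), ("texas", "usa")]:
    M_located_in[eid[subj], eid[obj]] = 1.0

x = np.zeros(n)
x[eid["austin"]] = 1.0        # one-hot start distribution over entities
y = x @ M_located_in          # one relation-following hop
z = y @ M_located_in          # a second hop, composing the relation
print(entities[int(np.argmax(y))], entities[int(np.argmax(z))])  # -> texas usa
```

Because each hop is just a (sparse) matrix product, the whole reasoning chain is differentiable and can sit inside a gradient-based learner.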

Game Design for Eliciting Distinguishable Behavior

no code implementations NeurIPS 2019 Fan Yang, Liu Leqi, Yifan Wu, Zachary C. Lipton, Pradeep Ravikumar, William W. Cohen, Tom Mitchell

The ability to infer latent psychological traits from human behavior is key to developing personalized human-interacting machine learning systems.

Handling Divergent Reference Texts when Evaluating Table-to-Text Generation

1 code implementation ACL 2019 Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, William W. Cohen

Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data.

Table-to-Text Generation

Differentiable Representations For Multihop Inference Rules

no code implementations 24 May 2019 William W. Cohen, Haitian Sun, R. Alex Hofer, Matthew Siegler

We present efficient differentiable implementations of second-order multi-hop reasoning using a large symbolic knowledge base (KB).

Neural Query Language: A Knowledge Base Query Language for Tensorflow

no code implementations 15 May 2019 William W. Cohen, Matthew Siegler, Alex Hofer

Large knowledge bases (KBs) are useful for many AI tasks, but are difficult to integrate into modern gradient-based learning systems.

PullNet: Open Domain Question Answering with Iterative Retrieval on Knowledge Bases and Text

no code implementations IJCNLP 2019 Haitian Sun, Tania Bedrax-Weiss, William W. Cohen

We focus on a setting in which a corpus is supplemented with a large but incomplete KB, and on questions that require non-trivial (e.g., "multi-hop") reasoning.

Open-Domain Question Answering

Probing Biomedical Embeddings from Language Models

1 code implementation WS 2019 Qiao Jin, Bhuwan Dhingra, William W. Cohen, Xinghua Lu

For this we use the pre-trained LMs as fixed feature extractors and restrict the downstream task models to not have additional sequence modeling layers.

NER Word Embeddings

Incremental Reading for Question Answering

no code implementations 15 Jan 2019 Samira Abnar, Tania Bedrax-Weiss, Tom Kwiatkowski, William W. Cohen

Current state-of-the-art question answering models reason over an entire passage, not incrementally.

Continual Learning Question Answering

GLoMo: Unsupervised Learning of Transferable Relational Graphs

no code implementations NeurIPS 2018 Zhilin Yang, Jake Zhao, Bhuwan Dhingra, Kaiming He, William W. Cohen, Ruslan R. Salakhutdinov, Yann LeCun

We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden units), or embedding-free units such as image pixels.

Image Classification Natural Language Inference +4

Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text

1 code implementation EMNLP 2018 Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, William W. Cohen

In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus.

Graph Representation Learning Open-Domain Question Answering

GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations

1 code implementation 14 Jun 2018 Zhilin Yang, Jake Zhao, Bhuwan Dhingra, Kaiming He, William W. Cohen, Ruslan Salakhutdinov, Yann LeCun

We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden units), or embedding-free units such as image pixels.

Image Classification Natural Language Inference +4

Learning to Organize Knowledge with N-Gram Machines

no code implementations ICLR 2018 Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao

Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in responding to a question is linear in the text size.

Language Modelling Latent Variable Models +2

Learning to Organize Knowledge and Answer Questions with N-Gram Machines

no code implementations 17 Nov 2017 Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao

Though deep neural networks have great success in natural language processing, they are limited at more knowledge intensive AI tasks, such as open-domain Question Answering (QA).

Open-Domain Question Answering

Breaking the Softmax Bottleneck: A High-Rank RNN Language Model

8 code implementations ICLR 2018 Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W. Cohen

We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck.
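The rank argument above can be checked numerically in a few lines. This is an illustrative sketch of the bottleneck, not the paper's code: with context embeddings H (N x d) and word embeddings W (M x d) sharing one softmax, the N x M matrix of log-probabilities has rank at most d + 1 regardless of N and M, so a low-dimensional softmax cannot represent arbitrary conditional distributions.

```python
# Numerically verify the softmax-bottleneck rank bound on random embeddings.
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 50, 40, 8            # contexts, vocabulary size, embedding dim
H = rng.normal(size=(N, d))    # context (hidden-state) embeddings
W = rng.normal(size=(M, d))    # output word embeddings

logits = H @ W.T                                   # rank <= d
logZ = np.log(np.exp(logits).sum(axis=1, keepdims=True))
A = logits - logZ              # A[i, j] = log P(word j | context i)

# Subtracting a per-row normalizer adds at most one to the rank,
# so rank(A) <= d + 1 even though A is 50 x 40.
print(np.linalg.matrix_rank(A))
```

The paper's mixture-of-softmaxes remedy works precisely because a log of a mixture is no longer a low-rank expression of this form.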

Language Modelling Word Embeddings

TensorLog: Deep Learning Meets Probabilistic DBs

no code implementations 17 Jul 2017 William W. Cohen, Fan Yang, Kathryn Rivard Mazaitis

We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as Tensorflow or Theano.

Good Semi-supervised Learning that Requires a Bad GAN

1 code implementation NeurIPS 2017 Zihang Dai, Zhilin Yang, Fan Yang, William W. Cohen, Ruslan Salakhutdinov

Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time.

General Classification Semi-Supervised Image Classification

Linguistic Knowledge as Memory for Recurrent Neural Networks

no code implementations 7 Mar 2017 Bhuwan Dhingra, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov

We introduce a model that encodes such graphs as explicit memory in recurrent neural networks, and use it to model coreference relations in text.

Reading Comprehension

A Comparative Study of Word Embeddings for Reading Comprehension

no code implementations 2 Mar 2017 Bhuwan Dhingra, Hanxiao Liu, Ruslan Salakhutdinov, William W. Cohen

The focus of past machine learning research for Reading Comprehension tasks has been primarily on the design of novel deep learning architectures.

Reading Comprehension Word Embeddings

Differentiable Learning of Logical Rules for Knowledge Base Reasoning

1 code implementation NeurIPS 2017 Fan Yang, Zhilin Yang, William W. Cohen

We propose a framework, Neural Logic Programming, that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model.

Semi-Supervised QA with Generative Domain-Adaptive Nets

no code implementations ACL 2017 Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, William W. Cohen

In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models.

Domain Adaptation Question Answering

Words or Characters? Fine-grained Gating for Reading Comprehension

1 code implementation 6 Nov 2016 Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, Ruslan Salakhutdinov

Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension.

Question Answering Reading Comprehension

Bootstrapping Distantly Supervised IE using Joint Learning and Small Well-structured Corpora

no code implementations 10 Jun 2016 Lidong Bing, Bhuwan Dhingra, Kathryn Mazaitis, Jong Hyuk Park, William W. Cohen

We propose a framework to improve performance of distantly-supervised relation extraction, by jointly learning to solve two related tasks: concept-instance extraction and relation extraction.

Relation Extraction

TensorLog: A Differentiable Deductive Database

1 code implementation 20 May 2016 William W. Cohen

Then, for each type of query to the factor graph, the message-passing steps required to perform belief propagation (BP) are "unrolled" into a function, which is differentiable.

Distant IE by Bootstrapping Using Lists and Document Structure

no code implementations 4 Jan 2016 Lidong Bing, Mingyang Ling, Richard C. Wang, William W. Cohen

Distant labeling for information extraction (IE) suffers from noisy training data.

Grounded Discovery of Coordinate Term Relationships between Software Entities

no code implementations 1 May 2015 Dana Movshovitz-Attias, William W. Cohen

To this end, we develop a similarity measure for Java classes using distributional information about how they are used in software, which we combine with corpus statistics on the distribution of contexts in which the classes appear in text.

Efficient Inference and Learning in a Large Knowledge Base: Reasoning with Extracted Information using a Locally Groundable First-Order Probabilistic Logic

no code implementations 12 Apr 2014 William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom Mitchell, William W. Cohen

We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank (PPR) on a linearized version of the proof space, and using this connection, we develop a provably-correct approximate grounding scheme based on the PageRank-Nibble algorithm.
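Personalized PageRank, the quantity that proof construction is related to above, can be sketched with a short power iteration: a random walk that at each step teleports back to a seed node with probability alpha, so mass concentrates near the seed. The graph below is a hypothetical toy stand-in for a linearized proof space, not anything from the paper.

```python
# Personalized PageRank by power iteration on a small hypothetical graph.
import numpy as np

# Row-stochastic transition matrix of a 4-node graph
# (node 3 is a self-looping "sink").
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
alpha = 0.15                       # teleport (reset) probability
seed = np.array([1.0, 0.0, 0.0, 0.0])  # personalization: restart at node 0

ppr = seed.copy()
for _ in range(200):
    # With prob. alpha jump back to the seed; otherwise take a walk step.
    ppr = alpha * seed + (1 - alpha) * (ppr @ P)
print(np.round(ppr, 3))
```

Because the update only touches edges reachable from the seed, local approximations like PageRank-Nibble can compute PPR without visiting the whole graph, which is what makes the grounding scheme scale.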

Relational Reasoning

WebSets: Extracting Sets of Entities from the Web Using Unsupervised Information Extraction

no code implementations 1 Jul 2013 Bhavana Dalvi, William W. Cohen, Jamie Callan

We describe an open-domain information extraction method for extracting concept-instance pairs from an HTML corpus.

Information Retrieval

Exploratory Learning

no code implementations 1 Jul 2013 Bhavana Dalvi, William W. Cohen, Jamie Callan

In multiclass semi-supervised learning (SSL), it is sometimes the case that the number of classes present in the data is not known, and hence no labeled examples are provided for some classes.

The Effect of Biased Communications On Both Trusting and Suspicious Voters

no code implementations 11 Jun 2013 William W. Cohen, David P. Redlawsk, Douglas Pierce

We consider scenarios in which this effect arises in a model of rational decision making which includes the possibility of deceptive information.

Decision Making

Programming with Personalized PageRank: A Locally Groundable First-Order Probabilistic Logic

no code implementations 10 May 2013 William Yang Wang, Kathryn Mazaitis, William W. Cohen

In many probabilistic first-order representation systems, inference is performed by "grounding" the query, i.e., mapping it to a propositional representation, and then performing propositional inference.

Entity Resolution
