
no code implementations • EMNLP 2020 • Jin Dong, Marc-Antoine Rondeau, William L. Hamilton

There is an increasing interest in developing text-based relational reasoning systems, which are capable of systematically reasoning about the relationships between entities mentioned in a text.

1 code implementation • 20 Sep 2021 • Vincent Mallet, Carlos G. Oliver, William L. Hamilton

For instance, the 3D structure of RNA can be efficiently represented as $\textit{2.5D graphs}$, graphs whose nodes are nucleotides and edges represent chemical interactions.
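A minimal sketch of how such a 2.5D graph could be encoded, with illustrative nucleotides and edge labels (the label set here is not the paper's actual annotation scheme):

```python
# Sketch: an RNA 2.5D graph as a node list plus labeled edges.
# Nucleotides and edge labels are illustrative placeholders.
nucleotides = ["G", "C", "A", "U"]          # nodes, indexed 0..3
edges = [
    (0, 1, "canonical"),                    # Watson-Crick pair G-C
    (1, 2, "backbone"),                     # covalent backbone link
    (2, 3, "non-canonical"),                # e.g. a sugar-edge contact
]

def neighbors(node, edges):
    """Return (neighbor, label) pairs touching `node`."""
    out = []
    for u, v, label in edges:
        if u == node:
            out.append((v, label))
        elif v == node:
            out.append((u, label))
    return out
```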

1 code implementation • 9 Sep 2021 • Vincent Mallet, Carlos Oliver, Jonathan Broadbent, William L. Hamilton, Jérôme Waldispühl

RNA 3D architectures are stabilized by sophisticated networks of (non-canonical) base pair interactions, which can be conveniently encoded as multi-relational graphs and efficiently exploited by graph-theoretical approaches and recent progress in machine learning.

no code implementations • 22 Jul 2021 • Dylan Sandfelder, Priyesh Vijayan, William L. Hamilton

Graph neural networks (GNNs) have achieved remarkable success as a framework for deep learning on graph-structured data.

4 code implementations • ICLR 2022 • Mikhail Galkin, Etienne Denis, Jiapeng Wu, William L. Hamilton

To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary.

Ranked #15 on Link Property Prediction on ogbl-wikikg2
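A toy sketch of the anchor idea behind NodePiece: instead of a full entity vocabulary, each entity is tokenized by its nearest anchor nodes. The path graph, anchor choice, and k are illustrative, not the paper's configuration:

```python
# Sketch of anchor-based entity tokenization: represent each entity by
# its k nearest anchors (shortest-path distance on a toy path graph).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # path graph
anchors = [0, 4]

def bfs_dist(src, adj):
    """Shortest-path distances from `src` to every reachable node."""
    dist = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def tokenize(entity, adj, anchors, k=2):
    """Entity -> k (anchor, distance) tokens, nearest anchors first."""
    d = bfs_dist(entity, adj)
    ranked = sorted(anchors, key=lambda a: d[a])
    return [(a, d[a]) for a in ranked[:k]]
```

The fixed-size anchor set replaces a per-entity embedding table, which is what makes the vocabulary size independent of the number of entities.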

1 code implementation • NeurIPS 2021 • Devin Kreuzer, Dominique Beaini, William L. Hamilton, Vincent Létourneau, Prudencio Tossou

Here, we present the $\textit{Spectral Attention Network}$ (SAN), which uses a learned positional encoding (LPE) that can take advantage of the full Laplacian spectrum to learn the position of each node in a given graph.
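The raw ingredient of such a positional encoding can be sketched with numpy on a toy 4-cycle; SAN additionally *learns* how to combine the spectrum, which is omitted here:

```python
import numpy as np

# Sketch of a Laplacian positional encoding: eigenvectors of the graph
# Laplacian L = D - A assign each node one coordinate per "frequency".
# Toy graph: a 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))                 # degree matrix
L = D - A                                  # combinatorial Laplacian
eigvals, eigvecs = np.linalg.eigh(L)       # ascending eigenvalues
pos_enc = eigvecs[:, 1:3]                  # drop the constant eigenvector
```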

1 code implementation • ICLR 2022 • Andjela Mladenovic, Avishek Joey Bose, Hugo Berard, William L. Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel

Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream.

no code implementations • EACL 2021 • Dora Jambor, Komal Teru, Joelle Pineau, William L. Hamilton

Real-world knowledge graphs are often characterized by low-frequency relations, a challenge that has prompted increasing interest in few-shot link prediction methods.

no code implementations • ICLR 2021 • Zichao Yan, William L. Hamilton, Mathieu Blanchette

Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence their cellular activities and functions.

1 code implementation • 1 Jan 2021 • Koustuv Sinha, Shagun Sodhani, Joelle Pineau, William L. Hamilton

In this work, we study the logical generalization capabilities of GNNs by designing a benchmark suite grounded in first-order logic.

1 code implementation • EMNLP 2020 • Jiapeng Wu, Meng Cao, Jackie Chi Kit Cheung, William L. Hamilton

Our analysis also reveals important sources of variability both within and across TKG datasets, and we introduce several simple but strong baselines that outperform the prior state of the art in certain settings.

2 code implementations • 6 Oct 2020 • Dominique Beaini, Saro Passaro, Vincent Létourneau, William L. Hamilton, Gabriele Corso, Pietro Liò

Then, we propose the use of the Laplacian eigenvectors as such vector field.

Ranked #2 on Node Classification on PATTERN 100k

1 code implementation • EMNLP 2020 • Kian Ahrabian, Aarash Feizi, Yasmin Salehi, William L. Hamilton, Avishek Joey Bose

Learning low-dimensional representations for entities and relations in knowledge graphs using contrastive estimation represents a scalable and effective method for inferring connectivity patterns.

2 code implementations • 1 Sep 2020 • Carlos Oliver, Vincent Mallet, Pericles Philippopoulos, William L. Hamilton, Jerome Waldispuhl

State-of-the-art methods solve special cases of the motif problem by constraining the structural variability in occurrences of a motif, and narrowing the substructure search space.

1 code implementation • NeurIPS 2020 • Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton

We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier.

no code implementations • WS 2020 • Ashutosh Adhikari, Achyudh Ram, Raphael Tang, William L. Hamilton, Jimmy Lin

Fine-tuned variants of BERT are able to achieve state-of-the-art accuracy on many natural language processing tasks, although at significant computational costs.

1 code implementation • ACL 2020 • Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L. Hamilton, Joelle Pineau

Evaluating the quality of a dialogue interaction between two agents is a difficult task, especially in open-domain chit-chat style dialogue.

1 code implementation • ICML Workshop LifelongML 2020 • Koustuv Sinha, Shagun Sodhani, Joelle Pineau, William L. Hamilton

Recent research has highlighted the role of relational inductive biases in building learning agents that can generalize and reason in a compositional manner.

1 code implementation • NeurIPS 2020 • Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, William L. Hamilton

Playing text-based games requires skills in processing natural language and sequential decision making.

1 code implementation • ICML 2020 • Avishek Joey Bose, Ariella Smofsky, Renjie Liao, Prakash Panangaden, William L. Hamilton

One effective solution is the use of normalizing flows to construct flexible posterior distributions.

no code implementations • 4 Feb 2020 • Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden, Christopher Pal

In order to communicate, humans flatten a complex representation of ideas and their attributes into a single word or a sentence.

no code implementations • 24 Jan 2020 • Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden

Recent findings in neuroscience suggest that the human brain represents information in a geometric structure (for instance, through conceptual spaces).

1 code implementation • 20 Dec 2019 • Avishek Joey Bose, Ankit Jain, Piero Molino, William L. Hamilton

We consider the task of few shot link prediction on graphs.

8 code implementations • ICML 2020 • Komal K. Teru, Etienne Denis, William L. Hamilton

The dominant paradigm for relation prediction in knowledge graphs involves learning and operating on latent representations (i.e., embeddings) of entities and relations.

2 code implementations • NeurIPS 2019 • Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Charlie Nash, William L. Hamilton, David Duvenaud, Raquel Urtasun, Richard S. Zemel

Our model generates graphs one block of nodes and associated edges at a time.

5 code implementations • IJCNLP 2019 • Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, William L. Hamilton

The recent success of natural language understanding (NLU) systems has been troubled by results highlighting the failure of these models to generalize in a systematic and robust way.


no code implementations • 24 Jun 2019 • Charles C. Onu, Jonathan Lebensold, William L. Hamilton, Doina Precup

Despite continuing medical advances, the rate of newborn morbidity and mortality globally remains high, with over 6 million casualties every year.

no code implementations • 26 May 2019 • Avishek Joey Bose, Andre Cianflone, William L. Hamilton

Adversarial attacks on deep neural networks traditionally rely on a constrained optimization paradigm, where an optimization procedure is used to obtain a single adversarial perturbation for a given input example.

1 code implementation • 25 May 2019 • Avishek Joey Bose, William L. Hamilton

Learning high-quality node embeddings is a key building block for machine learning models that operate on graph data, such as social networks and recommender systems.

2 code implementations • 7 Nov 2018 • Koustuv Sinha, Shagun Sodhani, William L. Hamilton, Joelle Pineau

Neural networks for natural language reasoning have largely focused on extractive, fact-based question-answering (QA) and common-sense inference.

1 code implementation • 4 Oct 2018 • Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe

We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs.

Ranked #4 on Graph Classification on NCI1
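The 1-WL test that the paper matches GNNs against can be sketched directly: nodes start with one color and repeatedly relabel themselves by the multiset of their neighbors' colors. The toy graphs below are illustrative:

```python
# Sketch of 1-Weisfeiler-Leman color refinement, the isomorphism test
# whose distinguishing power standard GNNs cannot exceed.
def wl_colors(adj, rounds=3):
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        # signature = own color + sorted multiset of neighbor colors
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel: identical signatures -> identical new colors
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # all nodes symmetric
path3 = {0: [1], 1: [0, 2], 2: [1]}            # middle node differs
```

The triangle refines to a single color class while the 3-path splits into two, so 1-WL (and hence a sufficiently expressive GNN) distinguishes them.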

11 code implementations • ICLR 2019 • Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, R. Devon Hjelm

We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner.

Ranked #48 on Node Classification on Citeseer

14 code implementations • NeurIPS 2018 • Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, Jure Leskovec

Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction.

Ranked #1 on Graph Classification on REDDIT-MULTI-12K

5 code implementations • 6 Jun 2018 • Rex Ying, Ruining He, Kai-Feng Chen, Pong Eksombatchai, William L. Hamilton, Jure Leskovec

We develop a data-efficient Graph Convolutional Network (GCN) algorithm, PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information.
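The random-walk component can be sketched as importance-based neighborhood selection: run many short walks from a node and keep the most frequently visited nodes. The star graph, walk length, and counts below are illustrative choices, not PinSage's production settings:

```python
import random
from collections import Counter

# Sketch of random-walk neighborhood selection: the top-k most visited
# nodes over many short walks act as a node's "important" neighbors.
def walk_neighborhood(adj, start, n_walks=200, walk_len=2, top_k=2, seed=0):
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(adj[node])
            visits[node] += 1
    visits.pop(start, None)            # the node itself is not a neighbor
    return [v for v, _ in visits.most_common(top_k)]

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}  # hub 0 with three leaves
```

Unlike taking all k-hop neighbors, this keeps neighborhood sizes bounded on web-scale graphs and weights neighbors by visit frequency.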

5 code implementations • NeurIPS 2018 • William L. Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, Jure Leskovec

Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities.

Ranked #5 on Complex Query Answering on FB15k-237
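As a minimal illustration of edge prediction with embeddings, here is a TransE-style translation score (lower is better); note this is the generic embedding approach, not the paper's query-embedding operators, and the entities and relation are invented for the example:

```python
import numpy as np

# Sketch: score a candidate edge (h, r, t) as ||h + r - t||.
# The relation vector is constructed to fit exactly, so the true
# triple scores (near) zero by design.
rng = np.random.default_rng(0)
dim = 8
entities = {
    "paris": rng.normal(size=dim),
    "france": rng.normal(size=dim),
}
relations = {
    "capital_of": entities["france"] - entities["paris"],
}

def score(h, r, t):
    """TransE-style distance: lower means a more plausible edge."""
    return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))
```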

no code implementations • 9 Mar 2018 • Srijan Kumar, William L. Hamilton, Jure Leskovec, Dan Jurafsky

Here we study intercommunity interactions across 36,000 communities on Reddit, examining cases where users of one community are mobilized by negative sentiment to comment in another community.

3 code implementations • ICML 2018 • Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, Jure Leskovec

Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences.

no code implementations • 17 Sep 2017 • William L. Hamilton, Rex Ying, Jure Leskovec

Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks.

17 code implementations • NeurIPS 2017 • William L. Hamilton, Rex Ying, Jure Leskovec

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions.

Ranked #1 on Link Property Prediction on ogbl-ddi
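One GraphSAGE layer with a mean aggregator can be sketched in a few lines; learned weight matrices and neighbor sampling are omitted for clarity, and the features below are toy values:

```python
import numpy as np

# Sketch of one GraphSAGE layer (mean aggregator): concatenate a
# node's own features with the mean of its neighbors' features,
# then L2-normalize, as in the paper. Weight matrices are omitted.
features = {0: np.array([1.0, 0.0]),
            1: np.array([0.0, 1.0]),
            2: np.array([0.0, 1.0])}
adj = {0: [1, 2], 1: [0], 2: [0]}

def sage_layer(features, adj, node):
    neigh = np.mean([features[u] for u in adj[node]], axis=0)
    h = np.concatenate([features[node], neigh])   # self || aggregated
    return h / np.linalg.norm(h)                  # L2 normalize
```

Because the layer only needs a node's (sampled) neighborhood, it generates embeddings inductively for nodes unseen during training, which is the paper's key departure from transductive methods.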

no code implementations • 26 May 2017 • Justine Zhang, William L. Hamilton, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky, Jure Leskovec

To this end we introduce a quantitative, language-based typology reflecting two key aspects of a community's identity: how distinctive, and how temporally dynamic it is.

1 code implementation • 9 Mar 2017 • William L. Hamilton, Justine Zhang, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky, Jure Leskovec

In this paper we operationalize loyalty as a user-community relation: users loyal to a community consistently prefer it over all others; loyal communities retain their loyal users over time.

1 code implementation • EMNLP 2016 • William L. Hamilton, Kevin Clark, Jure Leskovec, Dan Jurafsky

A word's sentiment depends on the domain in which it is used.

no code implementations • EMNLP 2016 • William L. Hamilton, Jure Leskovec, Dan Jurafsky

Words shift in meaning for many reasons, including cultural factors like new technologies and regular linguistic processes like subjectification.

4 code implementations • ACL 2016 • William L. Hamilton, Jure Leskovec, Dan Jurafsky

Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test.

no code implementations • 1 Dec 2013 • William L. Hamilton, Mahdi Milani Fard, Joelle Pineau

Predictive state representations (PSRs) offer an expressive framework for modelling partially observable systems.

Papers With Code is a free resource with all data licensed under CC-BY-SA.