Search Results for author: Weihua Hu

Found 20 papers, 14 papers with code

Temporal Graph Benchmark for Machine Learning on Temporal Graphs

3 code implementations • 3 Jul 2023 • Shenyang Huang, Farimah Poursafaei, Jacob Danovitch, Matthias Fey, Weihua Hu, Emanuele Rossi, Jure Leskovec, Michael Bronstein, Guillaume Rabusseau, Reihaneh Rabbany

We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs.

Node Property Prediction • Property Prediction

TuneUp: A Simple Improved Training Strategy for Graph Neural Networks

no code implementations • 26 Oct 2022 • Weihua Hu, Kaidi Cao, Kexin Huang, Edward W Huang, Karthik Subbian, Kenji Kawaguchi, Jure Leskovec

Extensive evaluation of TuneUp on five diverse GNN architectures, three types of prediction tasks, and both transductive and inductive settings shows that TuneUp significantly improves the performance of the base GNN on tail nodes, while often even improving the performance on head nodes.

Data Augmentation

Learning Backward Compatible Embeddings

1 code implementation • 7 Jun 2022 • Weihua Hu, Rajas Bansal, Kaidi Cao, Nikhil Rao, Karthik Subbian, Jure Leskovec

We formalize the problem where the goal is for the embedding team to keep updating the embedding version, while the consumer teams do not have to retrain their models.

Fraud Detection • Product Recommendation • +1
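The backward-compatibility goal described above can be illustrated with a toy example: if the embedding team ships a new embedding version, a learned map back into the old space lets consumer models run unchanged. This is only a minimal sketch of the idea using a plain least-squares linear map; the paper's actual method and training objective may differ, and all variable names here are illustrative.

```python
import numpy as np

def fit_backward_transform(emb_new, emb_old):
    """Fit a linear map W so that emb_new @ W approximates emb_old.

    Consumers keep models trained on the old embedding space, while
    the new embeddings are transformed back before being served.
    (Simplified sketch: plain least squares, not the paper's method.)
    """
    W, *_ = np.linalg.lstsq(emb_new, emb_old, rcond=None)
    return W

rng = np.random.default_rng(0)
emb_old = rng.normal(size=(100, 8))   # embeddings consumer models were trained on
rotation = rng.normal(size=(8, 8))
emb_new = emb_old @ rotation          # hypothetical updated embedding version
W = fit_backward_transform(emb_new, emb_old)
print(np.allclose(emb_new @ W, emb_old, atol=1e-5))
```

Because the toy update is an invertible linear change, least squares recovers it exactly; real embedding updates are nonlinear, which is what makes the problem nontrivial.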

Extending the WILDS Benchmark for Unsupervised Adaptation

1 code implementation • ICLR 2022 • Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang

Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well.

OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs

6 code implementations • 17 Mar 2021 • Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, Jure Leskovec

Enabling effective and efficient machine learning (ML) over large-scale graph data (e.g., graphs with billions of edges) can have a great impact on both industrial and scientific applications.

BIG-bench Machine Learning • Graph Learning • +4

ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations

no code implementations • 2 Mar 2021 • Weihua Hu, Muhammed Shuaibi, Abhishek Das, Siddharth Goyal, Anuroop Sriram, Jure Leskovec, Devi Parikh, C. Lawrence Zitnick

By not imposing explicit physical constraints, we can flexibly design expressive models while maintaining their computational efficiency.

Data Augmentation

ForceNet: A Graph Neural Network for Large-Scale Quantum Chemistry Simulation

no code implementations • 1 Jan 2021 • Weihua Hu, Muhammed Shuaibi, Abhishek Das, Siddharth Goyal, Anuroop Sriram, Jure Leskovec, Devi Parikh, Larry Zitnick

We use ForceNet to perform quantum chemistry simulations, where ForceNet is able to achieve 4x higher success rate than existing ML models.

The Open Catalyst 2020 (OC20) Dataset and Community Challenges

4 code implementations • 20 Oct 2020 • Lowik Chanussot, Abhishek Das, Siddharth Goyal, Thibaut Lavril, Muhammed Shuaibi, Morgane Riviere, Kevin Tran, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Aini Palizhati, Anuroop Sriram, Brandon Wood, Junwoong Yoon, Devi Parikh, C. Lawrence Zitnick, Zachary Ulissi

Catalyst discovery and optimization is key to solving many societal and energy challenges including solar fuels synthesis, long-term energy storage, and renewable fertilizer production.

Open Graph Benchmark: Datasets for Machine Learning on Graphs

19 code implementations • NeurIPS 2020 • Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec

We present the Open Graph Benchmark (OGB), a diverse set of challenging and realistic benchmark datasets to facilitate scalable, robust, and reproducible graph machine learning (ML) research.

Knowledge Graphs • Node Property Prediction

Query2box: Reasoning over Knowledge Graphs in Vector Space using Box Embeddings

6 code implementations • ICLR 2020 • Hongyu Ren, Weihua Hu, Jure Leskovec

Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query.

Complex Query Answering
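The box-embedding insight above can be sketched concretely: a query box is a center plus a per-dimension offset, and an entity is an answer candidate when its point falls inside the box. The distance function below is a simplified illustration of the scoring idea (the paper also down-weights distance inside the box); the function and variable names are not from the paper.

```python
import numpy as np

def box_distance(entity, center, offset):
    """Distance from an entity point to a query box (center ± offset).

    Inside the box the distance is 0; outside it grows with how far
    the point lies beyond the box faces. Simplified Query2box-style score.
    """
    lower, upper = center - offset, center + offset
    # Per-dimension distance outside the box, zero for dimensions inside it.
    dist_outside = np.maximum(entity - upper, 0.0) + np.maximum(lower - entity, 0.0)
    return float(np.sum(dist_outside))

center = np.array([0.0, 0.0])
offset = np.array([1.0, 1.0])
print(box_distance(np.array([0.5, -0.5]), center, offset))  # inside -> 0.0
print(box_distance(np.array([2.0, 0.0]), center, offset))   # outside -> 1.0
```

Representing answer sets as regions rather than single points is what lets set operations (e.g., intersection of queries) be modeled geometrically.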

Strategies for Pre-training Graph Neural Networks

9 code implementations • ICLR 2020 • Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, Jure Leskovec

Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training.

Graph Classification • Molecular Property Prediction • +4

How Powerful are Graph Neural Networks?

19 code implementations • ICLR 2019 • Keyulu Xu, Weihua Hu, Jure Leskovec, Stefanie Jegelka

Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures.

General Classification • Graph Classification • +3
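A central finding of this expressive-power analysis is that sum aggregation over neighbors distinguishes multisets that mean or max aggregation collapses, motivating the GIN update rule. Below is a minimal sketch of one GIN-style aggregation step (the MLP that follows the aggregation in the paper is omitted, and the toy graph is illustrative):

```python
import numpy as np

def gin_layer(features, adj, eps=0.0):
    """One GIN-style update: h_v <- (1 + eps) * h_v + sum over neighbors h_u.

    Sum aggregation is injective over multisets of neighbor features,
    the key property behind GIN's expressive power. The MLP applied
    after aggregation in the full model is omitted in this sketch.
    """
    return (1 + eps) * features + adj @ features

# A star graph: node 0 has two neighbors, nodes 1 and 2 have one each.
# With identical features, mean aggregation would give every node the
# same value, while sum aggregation reflects the different degrees.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
h = np.ones((3, 1))
print(gin_layer(h, adj))  # node 0 -> 3.0, nodes 1 and 2 -> 2.0
```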

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations • NeurIPS 2018 • Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.

Learning with noisy labels • Memorization
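Because deep models memorize noisy labels only gradually, Co-teaching trains two networks that each select their small-loss (likely clean) samples and hand them to the peer network for its update. The selection step can be sketched as follows; function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def small_loss_selection(losses_a, losses_b, remember_rate):
    """Core of Co-teaching's cross-update: each network picks its
    small-loss samples, and the *peer* network trains on that selection.

    `remember_rate` is the fraction of the mini-batch kept; in the paper
    it is decayed over epochs as the networks begin fitting noisy labels.
    """
    k = int(remember_rate * len(losses_a))
    idx_for_b = np.argsort(losses_a)[:k]  # A's clean-looking samples train B
    idx_for_a = np.argsort(losses_b)[:k]  # B's clean-looking samples train A
    return idx_for_a, idx_for_b

losses_a = np.array([0.1, 2.0, 0.3, 1.5])  # per-sample losses from network A
losses_b = np.array([0.2, 1.8, 1.1, 0.4])  # per-sample losses from network B
idx_a, idx_b = small_loss_selection(losses_a, losses_b, remember_rate=0.5)
print(sorted(idx_b.tolist()))  # samples A selects for B: [0, 2]
print(sorted(idx_a.tolist()))  # samples B selects for A: [0, 3]
```

Exchanging selections (rather than each network training on its own picks) keeps the two networks' error patterns from converging, which is the point of the "co-teaching" design.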

Learning from Complementary Labels

1 code implementation • NeurIPS 2017 • Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama

Collecting complementary labels would be less laborious than collecting ordinary labels, since users do not have to carefully choose the correct class from a long list of candidate classes.

Classification • General Classification • +1
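A complementary label specifies a class the example does *not* belong to. The snippet below illustrates the setting with a naive loss that simply penalizes probability mass on the complementary class; note this is only an illustration of the label format, not the unbiased risk estimator the paper actually derives, and the names are made up.

```python
import numpy as np

def complementary_loss(logits, comp_label):
    """Naive loss for a complementary label ("NOT class comp_label"):
    minimize predicted probability of the complementary class, -log(1 - p_c).
    Illustrative only; the paper's unbiased risk estimator is more involved.
    """
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return float(-np.log(1.0 - p[comp_label]))

logits = np.array([2.0, 0.5, -1.0])
# Being told "NOT class 0" is very informative here, since the model
# currently puts most of its probability mass on class 0.
print(complementary_loss(logits, comp_label=0))
print(complementary_loss(logits, comp_label=2))  # much smaller loss
```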

Learning Discrete Representations via Information Maximizing Self-Augmented Training

2 code implementations • ICML 2017 • Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama

Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation.

Ranked #3 on Unsupervised Image Classification on SVHN (using extra training data)

Clustering • Data Augmentation • +1

Does Distributionally Robust Supervised Learning Give Robust Classifiers?

no code implementations • ICML 2018 • Weihua Hu, Gang Niu, Issei Sato, Masashi Sugiyama

Since the DRSL is explicitly formulated for a distribution shift scenario, we naturally expect it to give a robust classifier that can aggressively handle shifted distributions.

BIG-bench Machine Learning • General Classification
