1 code implementation • 15 Jul 2024 • Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan

It effectively utilizes geometric information to interpolate features and labels with those from nearby neighborhoods, generating synthetic nodes and establishing connections for them.
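A minimal sketch of such an interpolation step (the variable names and the Beta mixing prior are assumptions; the geometry-aware neighbor selection described in the abstract is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node features and one-hot labels (assumed setup).
X = rng.normal(size=(5, 4))          # 5 nodes, 4 features
Y = np.eye(3)[[0, 1, 2, 0, 1]]       # one-hot labels over 3 classes

def mixup_node(i, j, lam):
    """Interpolate features and labels of node i with neighbor j."""
    x_new = lam * X[i] + (1 - lam) * X[j]
    y_new = lam * Y[i] + (1 - lam) * Y[j]
    return x_new, y_new

lam = rng.beta(2.0, 2.0)             # mixing ratio drawn from Beta(2, 2)
x_syn, y_syn = mixup_node(0, 1, lam)

# The synthetic soft label still sums to 1, like the originals.
print(y_syn.sum())
```

The synthetic node would then be wired into the graph, e.g. by inheriting edges from its two source nodes.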

1 code implementation • 7 Jun 2024 • Qitian Wu, Fan Nie, Chenxiao Yang, Junchi Yan

Real-world data generation often involves certain geometries (e.g., graphs) that induce instance-level interdependence.

1 code implementation • 18 Feb 2024 • Qitian Wu, Fan Nie, Chenxiao Yang, TianYi Bao, Junchi Yan

In this paper, we adopt a bottom-up data-generative perspective and reveal a key observation through causal analysis: the crux of GNNs' failure in OOD generalization lies in the latent confounding bias from the environment.

no code implementations • 10 Oct 2023 • Qitian Wu, Chenxiao Yang, Kaipeng Zeng, Fan Nie, Michael Bronstein, Junchi Yan

Graph diffusion equations are intimately related to graph neural networks (GNNs) and have recently attracted attention as a principled framework for analyzing GNN dynamics, formalizing their expressive power, and justifying architectural choices.
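The connection can be seen in a toy discretization: an explicit Euler step of the heat-diffusion equation dX/dt = -LX resembles one round of neighborhood averaging in a GNN (a generic illustration, not this paper's specific equations):

```python
import numpy as np

# Toy undirected graph (a path of 4 nodes) and its combinatorial Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A

X = np.array([[1.0], [0.0], [0.0], [0.0]])  # initial node signal

tau = 0.1  # step size (illustrative choice)
for _ in range(50):
    X = X - tau * (L @ X)  # explicit Euler step of dX/dt = -L X

# Diffusion conserves total signal mass and smooths it across nodes.
print(X.ravel())
```

Each update mixes a node's value with its neighbors', which is exactly the message-passing structure the diffusion viewpoint formalizes.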

1 code implementation • 8 Oct 2023 • Chenxiao Yang, Qitian Wu, David Wipf, Ruoyu Sun, Junchi Yan

In particular, we find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function, as can be quantified by a phenomenon which we call \emph{kernel-graph alignment}.

1 code implementation • 20 Jun 2023 • Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan

Graph structure learning is a well-established problem that aims at optimizing graph structures adaptive to specific graph datasets to help message passing neural networks (i.e., GNNs) yield effective and robust node embeddings.
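A common baseline for inferring structure is to build a sparse graph from embedding similarity with top-k sparsification (purely illustrative; in the paper the structure is learned jointly with GNN training, which this sketch does not do):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node embeddings (assumed; in practice these come from a GNN).
Z = rng.normal(size=(6, 4))

def knn_graph(Z, k=2):
    """Build a sparse adjacency from cosine similarity, keeping top-k per node."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T
    np.fill_diagonal(S, -np.inf)          # exclude self-loops
    A = np.zeros_like(S)
    for i in range(len(S)):
        for j in np.argsort(S[i])[-k:]:   # the k most similar other nodes
            A[i, j] = 1.0
    return np.maximum(A, A.T)             # symmetrize

A = knn_graph(Z, k=2)
print(int(A.sum()))
```

Learned-structure methods typically replace the hard top-k with a differentiable relaxation so the adjacency can be optimized end to end.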

1 code implementation • NeurIPS 2023 • Qitian Wu, Wentao Zhao, Chenxiao Yang, Hengrui Zhang, Fan Nie, Haitian Jiang, Yatao Bian, Junchi Yan

Learning representations on large graphs is a long-standing challenge due to the interdependence among massive numbers of data points.

1 code implementation • 6 Feb 2023 • Qitian Wu, Yiting Chen, Chenxiao Yang, Junchi Yan

This paves the way for a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
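The core idea can be sketched as an energy-based OOD score smoothed over the graph (hyperparameters and the propagation scheme below are illustrative stand-ins, not the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-node classifier logits and a row-normalized adjacency (assumed setup).
logits = rng.normal(size=(5, 3))
A = np.ones((5, 5)) - np.eye(5)
A_hat = A / A.sum(1, keepdims=True)       # row-stochastic propagation matrix

def energy(logits):
    """Negative log-sum-exp energy: lower energy ~ more in-distribution."""
    return -np.log(np.exp(logits).sum(1))

E = energy(logits)
alpha, steps = 0.5, 2                     # illustrative propagation settings
for _ in range(steps):
    E = alpha * E + (1 - alpha) * (A_hat @ E)   # smooth scores over the graph

print(E.round(3))
```

Smoothing exploits the intuition that neighboring nodes tend to share distributional status, sharpening the separation between in- and out-of-distribution regions.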

Out-of-Distribution Detection

1 code implementation • 23 Jan 2023 • Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf, Junchi Yan

Real-world data generation often involves complex inter-dependencies among instances, violating the IID-data hypothesis of standard learning paradigms and posing a challenge for uncovering the geometric structures for learning desired instance representations.

1 code implementation • 18 Dec 2022 • Chenxiao Yang, Qitian Wu, Jiahua Wang, Junchi Yan

Graph neural networks (GNNs), as the de-facto model class for representation learning on graphs, are built upon the multi-layer perceptron (MLP) architecture with additional message passing layers to allow features to flow across nodes.
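The MLP-plus-message-passing composition can be seen in a single GCN-style layer, which is just a dense transform preceded by neighborhood averaging (a generic sketch, not any specific paper's layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-node graph.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)                        # add self-loops
A_hat = A_hat / A_hat.sum(1, keepdims=True)  # row-normalize

H = rng.normal(size=(3, 4))                  # node features
W = rng.normal(size=(4, 2))                  # learnable weight (the MLP part)

H_next = np.maximum(A_hat @ H @ W, 0.0)      # aggregate, transform, ReLU
print(H_next.shape)
```

Removing the `A_hat @` factor recovers a plain MLP layer applied independently to each node.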

2 code implementations • 24 Oct 2022 • Chenxiao Yang, Qitian Wu, Junchi Yan

We study a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs) by distilling knowledge from a teacher GNN model trained on a complete graph to a student GNN model operating on a smaller or sparser graph.
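The distillation objective can be sketched as a temperature-scaled soft-label KL between teacher and student predictions per node (a generic distillation loss with assumed names, not necessarily this paper's exact objective):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-node logits: teacher trained on the full graph, student on a
# sparser one (numbers are illustrative).
t_logits = rng.normal(size=(4, 3))
s_logits = rng.normal(size=(4, 3))

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(1, keepdims=True)

def distill_loss(t_logits, s_logits, T=2.0):
    """Mean KL(teacher || student) over nodes, with temperature T."""
    p, q = softmax(t_logits, T), softmax(s_logits, T)
    return (p * (np.log(p) - np.log(q))).sum(1).mean()

print(round(distill_loss(t_logits, s_logits), 4))
```

The loss is zero when the student matches the teacher exactly, so minimizing it transfers the teacher's topological knowledge into the student's predictions.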

1 code implementation • 24 Oct 2022 • Chenxiao Yang, Qitian Wu, Qingsong Wen, Zhiqiang Zhou, Liang Sun, Junchi Yan

The goal of sequential event prediction is to estimate the next event based on a sequence of historical events, with applications to sequential recommendation, user behavior analysis and clinical treatment.
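At its simplest, the task reduces to scoring a vocabulary of events given a representation of the history (a deliberately minimal sketch with assumed names; the paper's actual model is far richer than history averaging):

```python
import numpy as np

rng = np.random.default_rng(0)

V, d = 6, 4                               # vocabulary size, embedding dim
E = rng.normal(size=(V, d))               # event embeddings
W = rng.normal(size=(d, V))               # output projection

history = [2, 0, 5]                       # observed event sequence
h = E[history].mean(0)                    # crude history representation
scores = h @ W
probs = np.exp(scores - scores.max())     # stabilized softmax
probs = probs / probs.sum()

print(int(probs.argmax()))                # predicted next-event id
```

Replacing the mean with a recurrent or attention-based encoder yields the standard sequential-recommendation setup.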

no code implementations • 25 Apr 2022 • Chenxiao Yang, Qitian Wu, Jipeng Jin, Xiaofeng Gao, Junwei Pan, Guihai Chen

To circumvent false negatives, we develop a principled approach to improve the reliability of negative instances and prove that the objective is an unbiased estimation of sampling from the true negative distribution.
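One generic way to make such a correction concrete is to down-weight sampled "negatives" by an estimated probability that they are actually unobserved positives (everything here is a hypothetical illustration of reweighting, not the paper's estimator or its unbiasedness proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model scores for a batch of sampled negatives (toy numbers).
neg_scores = rng.normal(size=8)

p_false_neg = 1 / (1 + np.exp(-neg_scores))  # high score -> likely a false negative
weights = 1.0 - p_false_neg                  # trust low-scoring items more

# Weighted softplus loss over the (reweighted) negatives.
loss = (weights * np.log1p(np.exp(neg_scores))).mean()
print(round(float(loss), 4))
```

Items the model already scores highly contribute less, reflecting the suspicion that they were sampled as negatives only because the positive interaction was never observed.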

no code implementations • 20 Feb 2022 • Chenxiao Yang, Junwei Pan, Xiaofeng Gao, Tingyu Jiang, Dapeng Liu, Guihai Chen

Multi-task learning (MTL) has been widely used in recommender systems, wherein predicting each type of user feedback on items (e.g., click, purchase) is treated as an individual task and jointly trained with a unified model.

1 code implementation • NeurIPS 2021 • Qitian Wu, Chenxiao Yang, Junchi Yan

We target the open-world feature extrapolation problem, where the feature space of input data undergoes expansion and a model trained on partially observed features needs to handle new features in test data without further retraining.
