no code implementations • 28 Oct 2024 • Hengrui Zhang, Liancheng Fang, Qitian Wu, Philip S. Yu
We posit that this can be attributed to two factors: 1) tabular data contains heterogeneous data types, whereas autoregressive models are primarily designed to model discrete-valued data; 2) tabular data is column permutation-invariant, requiring a generative model to generate columns in arbitrary order.
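The permutation-invariance point implies the model should not commit to one fixed factorization of the joint distribution. A minimal sketch of one common workaround, randomizing the column order each epoch so the autoregressive model learns all orderings (function and column names here are hypothetical, not from the paper):

```python
import random

def permuted_training_orders(columns, n_epochs=3, seed=0):
    """Sample a fresh column order per epoch so an autoregressive
    model never commits to a single fixed column factorization."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_epochs):
        order = columns[:]          # copy, then shuffle in place
        rng.shuffle(order)
        orders.append(order)
    return orders

orders = permuted_training_orders(["age", "income", "city", "label"])
```

Each epoch then conditions every column on a different prefix, approximating order-agnostic generation.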
no code implementations • 13 Sep 2024 • Qitian Wu, David Wipf, Junchi Yan
Learning representations for structured data with certain geometries (observed or unobserved) is a fundamental challenge, for which message passing neural networks (MPNNs) have become the de facto class of model solutions.
1 code implementation • 13 Sep 2024 • Qitian Wu, Kai Yang, Hengrui Zhang, David Wipf, Junchi Yan
Learning representations on large graphs is a long-standing challenge due to the inherent inter-dependence among nodes.
1 code implementation • 15 Jul 2024 • Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan
It effectively utilizes geometry information to interpolate features and labels with those from the nearby neighborhood, generating synthetic nodes and establishing connections for them.
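The interpolation step can be pictured as a mixup-style operation on graphs. A self-contained sketch under assumed names (the paper's actual sampling and edge-construction rules may differ):

```python
import numpy as np

def synthesize_node(features, labels, anchor, neighbors, lam=0.5, rng=None):
    """Mixup-style sketch: blend a node's features/labels with a sampled
    nearby neighbor to create a synthetic node, then connect the new
    node to both endpoints of the interpolation."""
    rng = rng or np.random.default_rng(0)
    nb = int(rng.choice(neighbors))                  # one node from the neighborhood
    new_x = lam * features[anchor] + (1 - lam) * features[nb]
    new_y = lam * labels[anchor] + (1 - lam) * labels[nb]
    new_edges = [(anchor, "new"), (nb, "new")]       # attach synthetic node
    return new_x, new_y, new_edges
```

With `lam=0.5` the synthetic node sits at the midpoint of the two endpoints in both feature and label space.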
1 code implementation • 7 Jun 2024 • Qitian Wu, Fan Nie, Chenxiao Yang, Junchi Yan
Real-world data generation often involves certain geometries (e.g., graphs) that induce instance-level interdependence.
1 code implementation • 28 May 2024 • Wujiang Xu, Qitian Wu, Zujie Liang, Jiaojiao Han, Xuying Ning, Yunxiao Shi, Wenfang Lin, Yongfeng Zhang
Motivated by this insight, we empower small language models for SR, namely SLMRec, which adopts a simple yet effective knowledge distillation method.
1 code implementation • 18 Feb 2024 • Qitian Wu, Fan Nie, Chenxiao Yang, TianYi Bao, Junchi Yan
In this paper, we adopt a bottom-up data-generative perspective and reveal a key observation through causal analysis: the crux of GNNs' failure in OOD generalization lies in the latent confounding bias from the environment.
no code implementations • 18 Dec 2023 • Yang Fan, XiangPing Wu, Qingcai Chen, Heng Li, Yan Huang, Zhixiang Cai, Qitian Wu
The diversity of tables makes table detection a great challenge, leading existing models to become increasingly intricate and complex.
1 code implementation • 8 Nov 2023 • Wujiang Xu, Qitian Wu, Runzhong Wang, Mingming Ha, Qiongxu Ma, Linxun Chen, Bing Han, Junchi Yan
To address these challenges under open-world assumptions, we design an Adaptive Multi-Interest Debiasing framework for cross-domain sequential recommendation (AMID), which consists of a multi-interest information module (MIM) and a doubly robust estimator (DRE).
no code implementations • 10 Oct 2023 • Qitian Wu, Chenxiao Yang, Kaipeng Zeng, Fan Nie, Michael Bronstein, Junchi Yan
Graph diffusion equations are intimately related to graph neural networks (GNNs) and have recently attracted attention as a principled framework for analyzing GNN dynamics, formalizing their expressive power, and justifying architectural choices.
1 code implementation • 8 Oct 2023 • Chenxiao Yang, Qitian Wu, David Wipf, Ruoyu Sun, Junchi Yan
In particular, we find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function, as can be quantified by a phenomenon which we call kernel-graph alignment.
1 code implementation • 20 Jun 2023 • Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan
Graph structure learning is a well-established problem that aims at optimizing graph structures adapted to specific graph datasets to help message passing neural networks (i.e., GNNs) yield effective and robust node embeddings.
1 code implementation • NeurIPS 2023 • Qitian Wu, Wentao Zhao, Chenxiao Yang, Hengrui Zhang, Fan Nie, Haitian Jiang, Yatao Bian, Junchi Yan
Learning representations on large-sized graphs is a long-standing challenge due to the inter-dependence among massive data points.
1 code implementation • 14 Jun 2023 • Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, Junchi Yan
In this paper, we introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes, as an important building block for a pioneering Transformer-style network for node classification on large graphs, dubbed NodeFormer.
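The efficiency of all-pair propagation typically comes from avoiding the explicit N x N attention matrix. A generic linear-attention sketch of that reordering trick (illustrative only; NodeFormer's actual kernelized operator uses a different feature map and adds structure-aware terms):

```python
import numpy as np

def all_pair_propagation(X, Wq, Wk, Wv):
    """Softmax-free all-pair message passing: with a positive feature
    map phi, computing phi(Q) (phi(K)^T V) instead of
    (phi(Q) phi(K)^T) V avoids materializing the N x N matrix,
    reducing cost from O(N^2) to O(N) in the node count."""
    phi = lambda M: np.log1p(np.exp(M))           # softplus keeps weights positive
    Q, K, V = phi(X @ Wq), phi(X @ Wk), X @ Wv
    num = Q @ (K.T @ V)                           # [N, d] without N x N matrix
    den = Q @ K.sum(axis=0, keepdims=True).T      # per-node normalizers
    return num / den
```

By associativity, this yields exactly the same output as the quadratic-cost formulation with attention weights `phi(Q) phi(K)^T`.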
1 code implementation • 6 Feb 2023 • Qitian Wu, Yiting Chen, Chenxiao Yang, Junchi Yan
This paves a way for a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
Out-of-Distribution Detection
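A minimal sketch of the energy-based detection idea this line of work builds on: score each node by the negative free energy of its logits, then smooth scores over the graph since neighboring nodes tend to share distribution membership (the propagation scheme and parameters here are illustrative, not the paper's exact formulation):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Negative free energy T * logsumexp(logits / T), computed with
    max-stabilization; higher values indicate more in-distribution."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)
    return T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

def propagate(scores, adj_norm, alpha=0.5, k=2):
    """Mix each node's score with its neighbors' mean for k rounds,
    using a row-normalized adjacency matrix adj_norm."""
    for _ in range(k):
        scores = alpha * scores + (1 - alpha) * adj_norm @ scores
    return scores
```

Thresholding the propagated score then separates in-distribution from OOD nodes.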
1 code implementation • 23 Jan 2023 • Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf, Junchi Yan
Real-world data generation often involves complex inter-dependencies among instances, violating the IID-data hypothesis of standard learning paradigms and posing a challenge for uncovering the geometric structures for learning desired instance representations.
1 code implementation • 18 Dec 2022 • Chenxiao Yang, Qitian Wu, Jiahua Wang, Junchi Yan
Graph neural networks (GNNs), the de facto model class for representation learning on graphs, are built upon the multi-layer perceptron (MLP) architecture with additional message passing layers that allow features to flow across nodes.
no code implementations • 8 Dec 2022 • Hengrui Zhang, Qitian Wu, Yu Wang, Shaofeng Zhang, Junchi Yan, Philip S. Yu
Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data.
1 code implementation • NeurIPS 2023 • Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He
From the perspective of invariant learning and stable learning, a recently well-established paradigm for out-of-distribution generalization, stable features of the graph are assumed to causally determine labels, whereas environmental features tend to be unstable and can induce the two primary types of distribution shift.
1 code implementation • 24 Oct 2022 • Chenxiao Yang, Qitian Wu, Qingsong Wen, Zhiqiang Zhou, Liang Sun, Junchi Yan
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events, with applications to sequential recommendation, user behavior analysis and clinical treatment.
2 code implementations • 24 Oct 2022 • Chenxiao Yang, Qitian Wu, Junchi Yan
We study a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs) by distilling knowledge from a teacher GNN model trained on a complete graph to a student GNN model operating on a smaller or sparser graph.
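The teacher-to-student transfer described here typically rests on a soft-label distillation loss. A standard temperature-scaled KD sketch (a generic formulation, not necessarily the paper's exact objective):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled, max-stabilized softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KD: KL(teacher || student) at temperature T,
    scaled by T^2 as in standard distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return (T ** 2) * kl.mean()
```

The student on the sparser graph is trained to match the teacher's softened predictions in addition to (or instead of) the hard labels.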
no code implementations • 25 Apr 2022 • Chenxiao Yang, Qitian Wu, Jipeng Jin, Xiaofeng Gao, Junwei Pan, Guihai Chen
To circumvent false negatives, we develop a principled approach to improve the reliability of negative instances and prove that the objective is an unbiased estimation of sampling from the true negative distribution.
2 code implementations • ICLR 2022 • Qitian Wu, Hengrui Zhang, Junchi Yan, David Wipf
There is increasing evidence of neural networks' sensitivity to distribution shifts, bringing research on out-of-distribution (OOD) generalization into the spotlight.
1 code implementation • NeurIPS 2021 • Qitian Wu, Chenxiao Yang, Junchi Yan
We target the open-world feature extrapolation problem, where the feature space of the input data expands and a model trained on partially observed features must handle new features in test data without retraining.
no code implementations • 29 Sep 2021 • Hengrui Zhang, Qitian Wu, Shaofeng Zhang, Junchi Yan, David Wipf, Philip S. Yu
In this paper, we propose ESCo (Effective and Scalable Contrastive), a new contrastive framework which is essentially an instantiation of the Information Bottleneck principle under self-supervised learning settings.
1 code implementation • NeurIPS 2021 • Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, Philip S. Yu
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
no code implementations • 1 Jan 2021 • Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Hongyuan Zha
In this paper, we propose an inductive collaborative filtering framework that learns a hidden relational graph among users from the rating matrix.
no code implementations • 1 Jan 2021 • Qitian Wu, Rui Gao, Hongyuan Zha
Deep generative models are generally categorized into explicit models and implicit models.
1 code implementation • 9 Jul 2020 • Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Junchi Yan, Hongyuan Zha
The first model follows conventional matrix factorization which factorizes a group of key users' rating matrix to obtain meta latents.
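A plain-vanilla sketch of the matrix factorization step (illustrative only; the paper's actual model has further components, and the variable names are our own):

```python
import numpy as np

def factorize(R, k=2, steps=500, lr=0.1, reg=1e-3, seed=0):
    """Factorize a rating matrix R as U @ V.T by gradient descent;
    the rows of U play the role of per-user 'meta latents'."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(steps):
        E = U @ V.T - R                   # reconstruction error
        U -= lr * (E @ V + reg * U)       # gradient step with L2 penalty
        V -= lr * (E.T @ U + reg * V)
    return U, V
```

For a rank-k matrix, the product `U @ V.T` recovers `R` up to the small bias introduced by the regularizer.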
1 code implementation • NeurIPS 2019 • Qitian Wu, Zixuan Zhang, Xiaofeng Gao, Junchi Yan, Guihai Chen
We target modeling latent dynamics in high-dimensional marked event sequences without any prior knowledge about marker relations.
no code implementations • NeurIPS 2021 • Qitian Wu, Rui Gao, Hongyuan Zha
To take full advantage of both models and enable mutual compensation, we propose a novel joint training framework that bridges an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy.
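For reference, the standard kernelized Stein discrepancy between the generator's sample distribution $q$ and a density $p$ depends on $p$ only through its score $\nabla_x \log p$, which is why an unnormalized density estimator suffices (this is the textbook definition, not taken from the paper):

```latex
\mathbb{D}^2(q, p) \;=\; \mathbb{E}_{x, x' \sim q}\!\left[\, u_p(x, x') \,\right],
\qquad s_p(x) \;=\; \nabla_x \log p(x),
\]
\[
u_p(x, x') \;=\; s_p(x)^\top k(x, x')\, s_p(x')
\;+\; s_p(x)^\top \nabla_{x'} k(x, x')
\;+\; \nabla_x k(x, x')^\top s_p(x')
\;+\; \operatorname{tr}\!\big(\nabla_x \nabla_{x'} k(x, x')\big),
```

where $k$ is a positive-definite kernel. Since $s_p$ is invariant to the normalizing constant of $p$, the discrepancy can be estimated from generator samples and the unnormalized density alone.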
no code implementations • 25 Sep 2019 • Qitian Wu, Rui Gao, Hongyuan Zha
Deep generative models are generally categorized into explicit models and implicit models.
1 code implementation • 25 Mar 2019 • Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Peng He, Paul Weng, Han Gao, Guihai Chen
Social recommendation leverages social information to solve data sparsity and cold-start problems in traditional collaborative filtering methods.
Ranked #1 on Recommendation Systems on WeChat