Search Results for author: Wenrui Dai

Found 19 papers, 4 papers with code

LiftPool: Lifting-based Graph Pooling for Hierarchical Graph Representation Learning

no code implementations · 27 Apr 2022 · Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong

Subsequently, this local information is aligned and propagated to the preserved nodes to alleviate information loss in graph coarsening.
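The align-and-propagate step can be sketched in a toy form. This is not the paper's learned lifting transform; `lift_pool` and its even-share weighting are hypothetical stand-ins showing how a removed node's feature can be redistributed to its preserved neighbors before the graph is coarsened.

```python
def lift_pool(features, adj, keep):
    # features: dict node -> scalar feature; adj: dict node -> set of neighbors;
    # keep: set of preserved nodes. Each removed node distributes its feature
    # evenly to its preserved neighbors before being dropped (a crude stand-in
    # for the paper's align-and-propagate lifting step).
    pooled = {v: features[v] for v in keep}
    for v in features:
        if v in keep:
            continue
        preserved_nbrs = [u for u in adj[v] if u in keep]
        if preserved_nbrs:
            share = features[v] / len(preserved_nbrs)
            for u in preserved_nbrs:
                pooled[u] += share
    return pooled
```

Dropping the removed nodes outright would lose their features; the propagation step folds that information into the retained nodes instead.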

Graph Classification Graph Representation Learning

Hybrid ISTA: Unfolding ISTA With Convergence Guarantees Using Free-Form Deep Neural Networks

no code implementations · 25 Apr 2022 · Ziyang Zheng, Wenrui Dai, Duoduo Xue, Chenglin Li, Junni Zou, Hongkai Xiong

This general framework endows arbitrary DNNs for solving linear inverse problems with convergence guarantees.
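For context, the classical ISTA iteration that unfolding approaches build on can be sketched as follows. This is plain ISTA for the lasso problem, not the paper's hybrid free-form network; the function names are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise shrinkage: the proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam=0.1, n_iters=200):
    # Classical ISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)         # gradient of the smooth data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Unfolded variants replace parts of this fixed update with learned modules while aiming to retain the convergence guarantees of the original iteration.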

Compressive Sensing

Hierarchical Graph Networks for 3D Human Pose Estimation

no code implementations · 23 Nov 2021 · Han Li, Bowen Shi, Wenrui Dai, Yabo Chen, Botao Wang, Yu Sun, Min Guo, Chenglin Li, Junni Zou, Hongkai Xiong

Recent 2D-to-3D human pose estimation works tend to utilize the graph structure formed by the topology of the human skeleton.

3D Human Pose Estimation

Graph Convolutional Networks via Adaptive Filter Banks

no code implementations · 29 Sep 2021 · Xing Gao, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard

Graph convolutional networks have been a powerful tool in representation learning of networked data.

Representation Learning

Variance Reduced Domain Randomization for Policy Gradient

no code implementations · 29 Sep 2021 · Yuankun Jiang, Chenglin Li, Wenrui Dai, Junni Zou, Hongkai Xiong

In this paper, we theoretically derive a bias-free, state/environment-dependent optimal baseline for DR and analytically show that it achieves further variance reduction over the standard constant and state-dependent baselines for DR. We further propose a variance reduced domain randomization (VRDR) approach for policy gradient methods to strike a tradeoff between variance reduction and computational complexity in practice.
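The role of a baseline in variance reduction can be illustrated with a toy one-step estimator. This is a generic illustration, not the paper's optimal baseline; `pg_samples` and its return distribution are made up for the example.

```python
import random, statistics

def pg_samples(baseline, n=2000, seed=0):
    # Toy one-step policy-gradient estimates: grad ~ score * (return - baseline).
    # Returns are noisy around 10; the score factor is +/-1 with equal probability.
    rng = random.Random(seed)
    est = []
    for _ in range(n):
        score = rng.choice([-1.0, 1.0])
        ret = 10.0 + rng.gauss(0.0, 1.0)
        est.append(score * (ret - baseline))
    return est

no_baseline = pg_samples(baseline=0.0)
with_baseline = pg_samples(baseline=10.0)  # baseline near E[return]
```

Subtracting a baseline close to the expected return leaves the estimator (near) unbiased while shrinking its variance dramatically; the paper derives the baseline that is optimal in this sense for DR.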

Policy Gradient Methods

Understanding Self-supervised Learning via Information Bottleneck Principle

no code implementations · 29 Sep 2021 · Jin Li, Yaoming Wang, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai, Hongkai Xiong

To address this issue, we introduce the information bottleneck principle and propose the Self-supervised Variational Information Bottleneck (SVIB) learning framework.

Contrastive Learning, Self-Supervised Learning

Graph Neural Networks With Lifting-based Adaptive Graph Wavelets

no code implementations · 3 Aug 2021 · Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard

To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the networks to reorder the nodes according to their local topology information.
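A minimal version of such a topology-based reordering, using node degrees as the local descriptor, might look like this; `canonical_order` and the degree-based key are hypothetical simplifications of the paper's input layer.

```python
def canonical_order(adj):
    # Order nodes by (own degree, sorted neighbor degrees) so that the
    # ordering depends only on local topology, not on input node labels.
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    key = lambda v: (degree[v], sorted(degree[u] for u in adj[v]))
    return sorted(adj, key=key)
```

Because the sort key is computed from the graph structure alone, relabeling the nodes leaves the recovered ordering structurally unchanged.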

Graph Representation Learning

Bag of Instances Aggregation Boosts Self-supervised Distillation

1 code implementation · ICLR 2022 · Haohang Xu, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Xinggang Wang, Wenrui Dai, Hongkai Xiong, Qi Tian

Here, a bag of instances denotes a set of similar samples constructed by the teacher and grouped within a bag; the goal of distillation is to aggregate compact representations over the student with respect to the instances in a bag.
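A crude sketch of the idea, with hypothetical names and a plain mean-squared objective standing in for the paper's actual loss: each student embedding is pulled toward the centroid of the teacher's bag.

```python
def bag_distill_loss(student_embs, teacher_bag):
    # Mean-squared distance between each student embedding and the centroid
    # of the teacher's bag of similar instances (a stand-in for the paper's
    # aggregation objective).
    dim = len(teacher_bag[0])
    centroid = [sum(t[d] for t in teacher_bag) / len(teacher_bag)
                for d in range(dim)]
    loss = 0.0
    for s in student_embs:
        loss += sum((s[d] - centroid[d]) ** 2 for d in range(dim))
    return loss / len(student_embs)
```

Minimizing such a loss makes the student's representations of all instances in a bag collapse toward one compact target, rather than matching each teacher embedding individually.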

Contrastive Learning, Self-Supervised Learning

Message Passing in Graph Convolution Networks via Adaptive Filter Banks

no code implementations · 18 Jun 2021 · Xing Gao, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard

Furthermore, each filter in the spectral domain corresponds to a message passing scheme, and diverse schemes are implemented via the filter bank.

Graph Classification, Representation Learning

Multi-dataset Pretraining: A Unified Model for Semantic Segmentation

no code implementations · 8 Jun 2021 · Bowen Shi, Xiaopeng Zhang, Haohang Xu, Wenrui Dai, Junni Zou, Hongkai Xiong, Qi Tian

This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets regardless of their taxonomy labels, and then fine-tuning the pretrained model on a specific dataset as usual.

Semantic Segmentation

Learning Latent Architectural Distribution in Differentiable Neural Architecture Search via Variational Information Maximization

no code implementations · ICCV 2021 · Yaoming Wang, Yuchen Liu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong

Existing differentiable neural architecture search approaches simply assume that the architectural distribution on each edge is independent of the others, which conflicts with the intrinsic properties of architectures.

Neural Architecture Search

VEM-GCN: Topology Optimization with Variational EM for Graph Convolutional Networks

no code implementations · 1 Jan 2021 · Rui Yang, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong

In the variational E-step, graph topology is optimized by approximating the posterior probability distribution of the latent adjacency matrix with a neural network learned from node embeddings.

Classification, General Classification, +2

PAC-Bayesian Randomized Value Function with Informative Prior

no code implementations · 1 Jan 2021 · Yuankun Jiang, Chenglin Li, Junni Zou, Wenrui Dai, Hongkai Xiong

To address this, we propose a Bayesian linear regression with informative prior (IP-BLR) operator that exploits a data-dependent prior in the learning of the randomized value function, leveraging the statistics of training results from previous iterations.

Monotonic Robust Policy Optimization with Model Discrepancy

no code implementations · 1 Jan 2021 · Yuankun Jiang, Chenglin Li, Junni Zou, Wenrui Dai, Hongkai Xiong

To mitigate the model discrepancy between training and target (testing) environments, domain randomization (DR) can generate plenty of environments with sufficient diversity by randomly sampling environment parameters in the simulator.
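The sampling step of DR is simple to sketch; `sample_envs` and the uniform ranges are illustrative, and real simulators expose their own parameter interfaces.

```python
import random

def sample_envs(param_ranges, n, seed=0):
    # Domain randomization: draw n environment configurations by uniformly
    # sampling each simulator parameter within its given range.
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
            for _ in range(n)]
```

A policy trained across such randomized environments is less likely to overfit the simulator's particular dynamics.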

NCGNN: Node-level Capsule Graph Neural Network

no code implementations · 7 Dec 2020 · Rui Yang, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong

However, most existing works naively sum or average all the neighboring features to update node representations, which suffers from the following limitations: (1) lack of interpretability to identify crucial node features for GNN's prediction; (2) over-smoothing issue where repeated averaging aggregates excessive noise, making features of nodes in different classes over-mixed and thus indistinguishable.
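The over-smoothing effect is easy to reproduce with naive mean aggregation on a toy graph: repeated averaging drives all node features toward a common value, so classes become indistinguishable. The helper below is a generic illustration, not the paper's aggregator.

```python
def mean_aggregate(features, adj):
    # One round of naive neighborhood averaging (self + neighbors),
    # the aggregation scheme whose repeated application over-smooths.
    return {v: (features[v] + sum(features[u] for u in adj[v])) / (1 + len(adj[v]))
            for v in adj}
```

Iterating this update on any connected graph collapses the feature spread toward zero, which is the failure mode the paper identifies.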

Node Classification

MimicNorm: Weight Mean and Last BN Layer Mimic the Dynamic of Batch Normalization

1 code implementation · 19 Oct 2020 · Wen Fei, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong

We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and transitions the network into the chaotic regime like a BN layer, and consequently leads to enhanced convergence.
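The weight mean operation itself is simple; a minimal sketch, assuming plain fully connected weights and a hypothetical helper name:

```python
def center_weights(W):
    # Weight-mean operation in the spirit of MimicNorm: subtract each output
    # unit's row mean so its incoming weights sum to zero. For inputs whose
    # dimensions share the same mean, the pre-activations are then centered,
    # mimicking part of what a BN layer does without batch statistics.
    return [[w - sum(row) / len(row) for w in row] for row in W]
```

Centering each row makes a constant input map exactly to zero, since the pre-activation is the input value times the (now zero) row sum.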

Graph Pooling with Node Proximity for Hierarchical Representation Learning

no code implementations · 19 Jun 2020 · Xing Gao, Wenrui Dai, Chenglin Li, Hongkai Xiong, Pascal Frossard

In this paper, we propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.

Graph Classification, Representation Learning

Spatial-Temporal Transformer Networks for Traffic Flow Forecasting

1 code implementation · 9 Jan 2020 · Mingxing Xu, Wenrui Dai, Chunmiao Liu, Xing Gao, Weiyao Lin, Guo-Jun Qi, Hongkai Xiong

In this paper, we propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) that leverages dynamic directed spatial dependencies and long-range temporal dependencies to improve the accuracy of long-term traffic forecasting.

Traffic Prediction

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 9 Oct 2019 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

To accelerate DNN inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
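The basic low-rank step can be sketched with a truncated SVD; this shows the generic factorization, not the paper's trained rank pruning scheme, and the function name is illustrative.

```python
import numpy as np

def low_rank_factorize(W, rank):
    # Truncated SVD: replace an m x n weight matrix with factors U_r (m x r)
    # and V_r (r x n), cutting parameters and flops from m*n to r*(m + n).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r
```

An m×n fully connected layer factored this way becomes two smaller layers; methods in this line additionally train the network so its weights are close to low rank before truncation, keeping the approximation error small.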
