no code implementations • 27 Apr 2022 • Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
Subsequently, this local information is aligned and propagated to the preserved nodes to alleviate information loss in graph coarsening.
no code implementations • 25 Apr 2022 • Ziyang Zheng, Wenrui Dai, Duoduo Xue, Chenglin Li, Junni Zou, Hongkai Xiong
This framework is general to endow arbitrary DNNs for solving linear inverse problems with convergence guarantees.
no code implementations • 23 Apr 2022 • Tao Yan, Rui Yang, Ziyang Zheng, Xing Lin, Hongkai Xiong, Qionghai Dai
Photonic neural networks perform brain-inspired computations using photons instead of electrons, which can achieve substantially improved computing performance.
no code implementations • 23 Nov 2021 • Han Li, Bowen Shi, Wenrui Dai, Yabo Chen, Botao Wang, Yu Sun, Min Guo, Chenglin Li, Junni Zou, Hongkai Xiong
Recent 2D-to-3D human pose estimation works tend to utilize the graph structure formed by the topology of the human skeleton.
1 code implementation • 30 Sep 2021 • Shuangrui Ding, Maomao Li, Tianyu Yang, Rui Qian, Haohang Xu, Qingyi Chen, Jue Wang, Hongkai Xiong
To alleviate such bias, we propose Foreground-background Merging (FAME) to deliberately compose the moving foreground region of the selected video onto the static background of others.
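The merging step above can be sketched as a mask-weighted blend; this is a minimal illustration, assuming the binary foreground mask is already given (the paper estimates it from the video itself):

```python
import numpy as np

def fame_merge(fg_frames, bg_frames, mask):
    """Compose the moving foreground of one clip onto the static
    background of another. fg_frames/bg_frames: (T, H, W, C) arrays
    in [0, 1]; mask: (H, W, 1) binary foreground mask, assumed given."""
    return mask * fg_frames + (1.0 - mask) * bg_frames
```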
no code implementations • 29 Sep 2021 • Xing Gao, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard
Graph convolutional networks have been a powerful tool in representation learning of networked data.
no code implementations • 29 Sep 2021 • Yuankun Jiang, Chenglin Li, Wenrui Dai, Junni Zou, Hongkai Xiong
In this paper, we theoretically derive a bias-free and state/environment-dependent optimal baseline for DR, and analytically show its ability to achieve further variance reduction over the standard constant and state-dependent baselines for DR. We further propose a variance reduced domain randomization (VRDR) approach for policy gradient methods, to strike a tradeoff between the variance reduction and computational complexity in practice.
no code implementations • 29 Sep 2021 • Jin Li, Yaoming Wang, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai, Hongkai Xiong
To address this issue, we introduce the information bottleneck principle and propose the Self-supervised Variational Information Bottleneck (SVIB) learning framework.
no code implementations • 3 Aug 2021 • Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard
To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the networks to reorder the nodes according to their local topology information.
1 code implementation • ICLR 2022 • Haohang Xu, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Xinggang Wang, Wenrui Dai, Hongkai Xiong, Qi Tian
Here, a bag of instances denotes a set of similar samples constructed by the teacher and grouped within a bag; the goal of distillation is to aggregate compact representations over the student with respect to the instances in a bag.
no code implementations • 18 Jun 2021 • Xing Gao, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard
Furthermore, each filter in the spectral domain corresponds to a message passing scheme, and diverse schemes are implemented via the filter bank.
no code implementations • 8 Jun 2021 • Bowen Shi, Xiaopeng Zhang, Haohang Xu, Wenrui Dai, Junni Zou, Hongkai Xiong, Qi Tian
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets regardless of their taxonomy labels, and then fine-tuning the pretrained model on a specific dataset as usual.
1 code implementation • 14 Feb 2021 • Jing Jin, Hui Liu, Junhui Hou, Hongkai Xiong
Moreover, to promote the effectiveness of our method, trained with simulated hybrid data, on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy.
no code implementations • 1 Jan 2021 • Yuankun Jiang, Chenglin Li, Junni Zou, Wenrui Dai, Hongkai Xiong
To mitigate the model discrepancy between training and target (testing) environments, domain randomization (DR) can generate plenty of environments with sufficient diversity by randomly sampling environment parameters in the simulator.
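The random sampling of environment parameters can be sketched as follows; the parameter names and ranges are illustrative assumptions, not taken from the paper:

```python
import random

def sample_env(param_ranges, rng):
    """Domain-randomization sketch: draw each simulator parameter
    uniformly from its range so every training episode sees a
    different environment."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}

# Hypothetical simulator parameters for illustration only.
ranges = {"gravity": (8.0, 12.0), "friction": (0.5, 1.5)}
env = sample_env(ranges, random.Random(0))
```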
no code implementations • ICCV 2021 • Yaoming Wang, Yuchen Liu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
Existing differentiable neural architecture search approaches simply assume that the architectural distributions on different edges are independent of each other, which conflicts with the intrinsic properties of the architecture.
no code implementations • 1 Jan 2021 • Rui Yang, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
In the variational E-step, graph topology is optimized by approximating the posterior probability distribution of the latent adjacency matrix with a neural network learned from node embeddings.
no code implementations • 1 Jan 2021 • Yuankun Jiang, Chenglin Li, Junni Zou, Wenrui Dai, Hongkai Xiong
To address this, in this paper, we propose a Bayesian linear regression with informative prior (IP-BLR) operator that introduces a data-dependent prior into the learning of the randomized value function, exploiting the statistics of training results from previous iterations.
no code implementations • 7 Dec 2020 • Rui Yang, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
However, most existing works naively sum or average all the neighboring features to update node representations, which suffers from the following limitations: (1) lack of interpretability in identifying the node features crucial to the GNN's prediction; (2) the over-smoothing issue, where repeated averaging aggregates excessive noise, making features of nodes in different classes over-mixed and thus indistinguishable.
no code implementations • 4 Dec 2020 • Haohang Xu, Xiaopeng Zhang, Hao Li, Lingxi Xie, Hongkai Xiong, Qi Tian
In this paper, we propose a hierarchical semantic alignment strategy that expands the views generated by a single image to cross-sample and multi-level representations, and models the invariance to semantically similar images in a hierarchical way.
Ranked #37 on Self-Supervised Image Classification on ImageNet
1 code implementation • 28 Nov 2020 • Yuhui Xu, Lingxi Xie, Cihang Xie, Jieru Mei, Siyuan Qiao, Wei Shen, Hongkai Xiong, Alan Yuille
Batch normalization (BN) is a fundamental unit in modern deep networks, in which a linear transformation module is designed to improve BN's flexibility in fitting complex data distributions.
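For reference, a plain BN forward pass with its affine (linear transformation) module looks like this minimal sketch:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standard BN forward pass: normalize each feature over the batch
    dimension, then apply the linear (affine) transformation
    gamma * x_hat + beta referred to in the text."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```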
no code implementations • 5 Nov 2020 • Hao Li, Xiaopeng Zhang, Hongkai Xiong
Contrastive learning based on instance discrimination trains a model to discriminate different transformations of the anchor sample from other samples, but does not consider the semantic similarity among samples.
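A minimal instance-discrimination (InfoNCE-style) loss sketch makes the limitation concrete: every other sample is treated as a negative, regardless of semantic similarity.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Instance-discrimination loss: pull the anchor toward its own
    augmented view (positive), push it from all other samples
    (negatives), semantically similar or not."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0
```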
1 code implementation • 19 Oct 2020 • Wen Fei, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and transitions the network into the chaotic regime like a BN layer, and consequently leads to enhanced convergence.
no code implementations • 27 Jul 2020 • Haohang Xu, Hongkai Xiong, Guo-Jun Qi
In this paper, we propose the K-Shot Contrastive Learning (KSCL) of visual features by applying multiple augmentations to investigate the sample variations within individual instances.
no code implementations • 23 Jun 2020 • Ruoyu Sun, Fuhui Tang, Xiaopeng Zhang, Hongkai Xiong, Qi Tian
Knowledge distillation, which aims at training a smaller student network by transferring knowledge from a larger teacher model, is one of the promising solutions for model miniaturization.
no code implementations • 19 Jun 2020 • Xing Gao, Wenrui Dai, Chenglin Li, Hongkai Xiong, Pascal Frossard
In this paper, we propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
no code implementations • 9 May 2020 • Weiyao Lin, Huabin Liu, Shizhan Liu, Yuxi Li, Rui Qian, Tao Wang, Ning Xu, Hongkai Xiong, Guo-Jun Qi, Nicu Sebe
We demonstrate that the proposed method is able to boost the performance of existing pose estimation pipelines on our HiEve dataset.
1 code implementation • 30 Apr 2020 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low rank decomposition.
no code implementations • 6 Apr 2020 • Hao Li, Xiaopeng Zhang, Hongkai Xiong, Qi Tian
In this paper, we propose Attribute Mix, a data augmentation strategy at attribute level to expand the fine-grained samples.
Ranked #11 on Fine-Grained Image Classification on CUB-200-2011
1 code implementation • 17 Jan 2020 • Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Bowen Shi, Qi Tian, Hongkai Xiong
However, these methods suffer from difficulty in network optimization, so the searched network is often unfriendly to hardware.
1 code implementation • 9 Jan 2020 • Mingxing Xu, Wenrui Dai, Chunmiao Liu, Xing Gao, Weiyao Lin, Guo-Jun Qi, Hongkai Xiong
In this paper, we propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) that leverages dynamical directed spatial dependencies and long-range temporal dependencies to improve the accuracy of long-term traffic forecasting.
no code implementations • 29 Dec 2019 • Haohang Xu, Hongkai Xiong, Guo-Jun Qi
To this end, we present a novel regularization mechanism by learning the change of feature representations induced by a distribution of transformations without using the labels of data examples.
no code implementations • 16 Nov 2019 • Feng Lin, Haohang Xu, Houqiang Li, Hongkai Xiong, Guo-Jun Qi
For this reason, we should use the geodesic to characterize how an image transforms along the manifold of a transformation group, and adopt its length to measure the deviation between transformations.
1 code implementation • 9 Oct 2019 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
6 code implementations • ICLR 2020 • Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, Hongkai Xiong
Differentiable architecture search (DARTS) provided a fast solution in finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-network and searching for an optimal architecture.
Ranked #15 on Neural Architecture Search on CIFAR-10
no code implementations • 1 Jul 2019 • Xing Gao, Hongkai Xiong, Pascal Frossard
In this paper, we propose a parameter-free pooling operator, called iPool, that retains the most informative features in arbitrary graphs.
no code implementations • 17 May 2019 • Weiyao Lin, Yuxi Li, Hao Xiao, John See, Junni Zou, Hongkai Xiong, Jingdong Wang, Tao Mei
The task of re-identifying groups of people under different camera views is an important yet less-studied problem. Group re-identification (Re-ID) is a very challenging task since it is not only adversely affected by common issues in traditional single object Re-ID problems such as viewpoint and human pose variations, but it also suffers from changes in group layout and group membership.
1 code implementation • 6 Dec 2018 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
We propose Trained Rank Pruning (TRP), which iterates low rank approximation and training.
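The low-rank approximation step that TRP alternates with training can be sketched as a truncated-SVD projection; the surrounding training loop is omitted here and only this projection step is shown:

```python
import numpy as np

def low_rank_project(weight, rank):
    """One TRP-style projection step: replace a weight matrix by its
    best rank-`rank` approximation via truncated SVD. TRP alternates
    such projections with ordinary training so the network converges
    to an inherently low-rank structure."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]
```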
no code implementations • 6 Dec 2018 • Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong
Network quantization is an effective method for the deployment of neural networks on memory and energy constrained mobile devices.
no code implementations • CVPR 2018 • Xiaopeng Zhang, Jiashi Feng, Hongkai Xiong, Qi Tian
Unlike them, we propose a zigzag learning strategy to simultaneously discover reliable object instances and prevent the model from overfitting initial seeds.
Ranked #13 on Weakly Supervised Object Detection on PASCAL VOC 2007
1 code implementation • 6 Mar 2018 • Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, Hongkai Xiong
In this paper, we propose two novel network quantization approaches, single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit (ternary) quantization. We are the first to consider network quantization from both the width and depth levels.
no code implementations • 20 Nov 2017 • Weiyao Lin, Yang Mi, Jianxin Wu, Ke Lu, Hongkai Xiong
In this paper, we propose a novel deep-based framework for action recognition, which improves the recognition accuracy by: 1) deriving more precise features for representing actions, and 2) reducing the asynchrony between different information streams.
no code implementations • 29 May 2017 • Xiaopeng Zhang, Hongkai Xiong, Weiyao Lin, Qi Tian
Part-based representation has been proven to be effective for a variety of visual applications.
no code implementations • CVPR 2016 • Xiaopeng Zhang, Hongkai Xiong, Wengang Zhou, Weiyao Lin, Qi Tian
Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle differences in some specific parts.