1 code implementation • 6 Jun 2023 • Peggy Tang, Junbin Gao, Lei Zhang, Zhiyong Wang
Compressive text summarisation has recently offered a balance between the conciseness issue of extractive summarisation and the factual hallucination issue of abstractive summarisation.
no code implementations • 25 May 2023 • Dai Shi, Zhiqi Shao, Yi Guo, Qibin Zhao, Junbin Gao
This work presents a comprehensive theoretical analysis of graph p-Laplacian based framelet network (pL-UFG) to establish a solid understanding of its properties.
no code implementations • 1 May 2023 • Lequan Lin, Zhengkun Li, Ruikun Li, Xuliang Li, Junbin Gao
Diffusion models, a family of generative models based on deep learning, have become increasingly prominent in cutting-edge machine learning research.
no code implementations • 7 Nov 2022 • Huidong Liang, Xingjian Du, Bilei Zhu, Zejun Ma, Ke Chen, Junbin Gao
Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes).
no code implementations • 27 Oct 2022 • Zhiqi Shao, Andi Han, Dai Shi, Andrey Vasnev, Junbin Gao
This paper introduces a novel Framelet Graph approach based on p-Laplacian GNN.
no code implementations • 20 Oct 2022 • Lequan Lin, Junbin Gao
Spectral Graph Convolutional Neural Networks (spectral GCNNs), a powerful tool for analyzing and processing graph data, typically apply frequency filtering via the Fourier transform to obtain representations with selective information.
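As a generic illustration of the frequency-filtering idea this entry refers to (a minimal sketch, not the method of this paper), a graph signal can be filtered by transforming it into the eigenbasis of the graph Laplacian, scaling each frequency component, and transforming back:

```python
import numpy as np

def spectral_filter(A, x, h):
    """Filter a graph signal x via the graph Fourier transform.

    A: symmetric adjacency matrix; h: scalar filter applied to the
    Laplacian eigenvalues (the graph 'frequencies')."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)          # eigenvectors = graph Fourier basis
    x_hat = U.T @ x                     # forward graph Fourier transform
    return U @ (h(lam) * x_hat)         # filter, then inverse transform

# Low-pass filtering on a 4-node path graph damps an oscillatory signal.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0
x = np.array([1.0, -1.0, 1.0, -1.0])    # highly oscillatory signal
y = spectral_filter(A, x, lambda lam: np.exp(-lam))
```

Since `exp(-lam)` equals 1 at frequency zero, a constant signal passes through unchanged, while high-frequency components shrink.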
1 code implementation • 18 Oct 2022 • Jie Chen, Shouzhen Chen, Mingyuan Bai, Junbin Gao, Junping Zhang, Jian Pu
Then, we introduce a novel structure-mixing knowledge distillation strategy to enhance the learning ability of MLPs for structure information.
no code implementations • 8 Oct 2022 • Andi Han, Dai Shi, Zhiqi Shao, Junbin Gao
In this work, we provide a theoretical understanding of the framelet-based graph neural networks through the perspective of energy gradient flow.
no code implementations • 13 Aug 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
In this paper, we propose a simple acceleration scheme for Riemannian gradient methods by extrapolating iterates on manifolds.
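The baseline such acceleration schemes build on is plain Riemannian gradient descent: project the Euclidean gradient onto the tangent space, step, then retract back to the manifold. A minimal sketch on the unit sphere (not the paper's extrapolation algorithm), maximizing the Rayleigh quotient to recover a leading eigenvector:

```python
import numpy as np

def sphere_rgd(A, x0, steps=200, lr=0.1):
    """Riemannian gradient ascent of the Rayleigh quotient x^T A x on the
    unit sphere, recovering the leading eigenvector of a symmetric A."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = 2 * A @ x                   # Euclidean gradient of x^T A x
        rg = g - (x @ g) * x            # project onto the tangent space at x
        x = x + lr * rg                 # take a step in the tangent direction
        x = x / np.linalg.norm(x)       # retraction: renormalize to the sphere
    return x

rng = np.random.default_rng(0)
A = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])  # known leading eigenvalue 5
x = sphere_rgd(A, rng.standard_normal(5))
lam = x @ A @ x                          # Rayleigh quotient at convergence
```

The paper's contribution, extrapolating between successive iterates on the manifold to accelerate this loop, is omitted here for brevity.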
1 code implementation • 30 May 2022 • Bingxin Zhou, Xuebin Zheng, Yu Guang Wang, Ming Li, Junbin Gao
Learning efficient graph representation is the key to favorably addressing downstream tasks on graphs, such as node or graph property prediction.
no code implementations • 30 May 2022 • Jie Chen, Weiqi Liu, Zhizhong Huang, Junbin Gao, Junping Zhang, Jian Pu
The performance of GNNs degrades as they become deeper due to over-smoothing.
Ranked #7 on Node Classification on Squirrel
1 code implementation • 19 May 2022 • Chunya Zou, Andi Han, Lequan Lin, Junbin Gao
In this paper, we propose a simple yet effective graph neural network for directed graphs (digraph) based on the classic Singular Value Decomposition (SVD), named SVD-GCN.
no code implementations • 19 May 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
We introduce a framework of differentially private Riemannian optimization by adding noise to the Riemannian gradient on the tangent space.
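A minimal sketch of the core idea, assuming the unit sphere as the manifold and omitting the gradient clipping and noise calibration that an actual privacy guarantee requires:

```python
import numpy as np

def private_sphere_step(x, grad, lr=0.1, sigma=0.01, rng=None):
    """One differentially-private-style Riemannian step on the unit sphere:
    project the gradient onto the tangent space, add Gaussian noise *in the
    tangent space*, then retract. Illustrative only; a real DP guarantee
    also needs gradient clipping and a calibrated sigma."""
    if rng is None:
        rng = np.random.default_rng()
    rg = grad - (x @ grad) * x              # tangent-space gradient
    noise = rng.normal(0.0, sigma, size=x.shape)
    noise = noise - (x @ noise) * x         # keep the noise tangent as well
    x_new = x - lr * (rg + noise)           # noisy tangent step
    return x_new / np.linalg.norm(x_new)    # retraction: renormalize

rng = np.random.default_rng(1)
x = rng.standard_normal(4)
x = x / np.linalg.norm(x)
x_next = private_sphere_step(x, grad=2 * x, rng=rng)
```

Adding the noise in the tangent space, rather than in the ambient space, is what keeps the perturbed step a valid manifold operation.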
no code implementations • 25 Apr 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Pawan Kumar, Junbin Gao
In this paper, we study the min-max optimization problems on Riemannian manifolds.
1 code implementation • Findings (NAACL) 2022 • Peggy Tang, Kun Hu, Rui Yan, Lei Zhang, Junbin Gao, Zhiyong Wang
Optimal sentence extraction is conceptualised as obtaining an optimal summary that minimises the transportation cost to a given document regarding their semantic distributions.
1 code implementation • 19 Mar 2022 • Jie Chen, Shouzhen Chen, Junbin Gao, Zengfeng Huang, Junping Zhang, Jian Pu
Moreover, we propose a simple yet effective Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets by learning the neighbor effect for each node.
no code implementations • 10 Feb 2022 • Bingxin Zhou, Yuanhong Jiang, Yu Guang Wang, Jingwei Liang, Junbin Gao, Shirui Pan, Xiaoqun Zhang
The performance of graph representation learning is affected by the quality of graph input.
1 code implementation • 30 Jan 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
In this work, we study the optimal transport (OT) problem between symmetric positive definite (SPD) matrix-valued measures.
no code implementations • 11 Jan 2022 • Mengxi Yang, Xuebin Zheng, Jie Yin, Junbin Gao
This paper aims to provide a novel design of a multiscale framelets convolution for spectral graph neural networks.
no code implementations • 9 Nov 2021 • Huidong Liang, Junbin Gao
This paper introduces Wasserstein Adversarially Regularized Graph Autoencoder (WARGA), an implicit generative algorithm that directly regularizes the latent distribution of node embedding to a target distribution via the Wasserstein metric.
1 code implementation • 5 Nov 2021 • Bingxin Zhou, Ruikun Li, Xuebin Zheng, Yu Guang Wang, Junbin Gao
As graph data collected from the real world is rarely noise-free, a practical representation of graphs should be robust to noise.
1 code implementation • 20 Oct 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
This paper proposes a generalized Bures-Wasserstein (BW) Riemannian geometry for the manifold of symmetric positive definite matrices.
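For context, the (non-generalized) Bures-Wasserstein distance between SPD matrices has the closed form d(A,B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}) and can be evaluated directly; a small sketch (illustrative only, not the paper's generalized geometry):

```python
import numpy as np

def sqrtm_spd(M):
    """Matrix square root of a symmetric positive definite matrix."""
    lam, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(lam)) @ U.T

def bures_wasserstein(A, B):
    """Bures-Wasserstein distance between SPD matrices A and B:
    d(A,B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})."""
    s = sqrtm_spd(A)
    cross = sqrtm_spd(s @ B @ s)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return np.sqrt(max(d2, 0.0))        # clamp tiny negative round-off

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.3], [0.3, 1.0]])
d = bures_wasserstein(A, B)
```

For commuting (e.g., diagonal) matrices the formula reduces to a Hellinger-type distance between the eigenvalues, which makes small sanity checks easy.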
1 code implementation • 30 Sep 2021 • Huidong Liang, Junbin Gao
Link prediction is a fundamental problem in graph data analysis.
no code implementations • 22 Jul 2021 • Mingyuan Bai, S. T. Boris Choy, Junping Zhang, Junbin Gao
In this paper, we propose a neural ODE model for evolutionary subspace clustering to overcome this limitation, and introduce a new objective function with a subspace self-expressiveness constraint.
no code implementations • 14 Jun 2021 • Renlong Jie, Junbin Gao
It extends the existing study on differentiable neural architecture search, and we make the backbone architecture transformable rather than fixed during the training process.
no code implementations • 3 Jun 2021 • Dai Shi, Andi Han, Yi Guo, Junbin Gao
In this work, we investigate the validity of the learning results of some widely used dimensionality reduction (DR) and manifold learning (ManL) methods through the chart mapping function of a manifold.
1 code implementation • NeurIPS 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
We build on this to show that the BW metric is a more suitable and robust choice for several Riemannian optimization problems over ill-conditioned SPD matrices.
no code implementations • 28 Apr 2021 • Jie Chen, Shouzhen Chen, Mingyuan Bai, Jian Pu, Junping Zhang, Junbin Gao
In this paper, we consider the label dependency of graph nodes and propose a decoupling attention mechanism to learn both hard and soft attention.
no code implementations • ICLR Workshop GTRL 2021 • Bingxin Zhou, Xuebin Zheng, Yu Guang Wang, Ming Li, Junbin Gao
Geometric deep learning that employs the geometric and topological features of data has attracted increasing attention in deep neural networks.
1 code implementation • 13 Feb 2021 • Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yu Guang Wang, Pietro Lio, Ming Li, Guido Montufar
The graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance in many node and graph prediction tasks.
1 code implementation • 18 Jan 2021 • Guangyu Huo, Yong Zhang, Junbin Gao, Boyue Wang, Yongli Hu, Bao-Cai Yin
In this paper, we propose a cross-attention based deep clustering framework named Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN), which contains four main modules: the cross-attention fusion module, which innovatively concatenates, layer by layer, the Content Auto-encoder module (CAE) modelling the individual data and the Graph Convolutional Auto-encoder module (GAE) modelling the relationships between the data; and the self-supervised model that highlights the discriminative information for clustering tasks.
no code implementations • 23 Oct 2020 • Andi Han, Junbin Gao
In this paper, we propose a variant of Riemannian stochastic recursive gradient method that can achieve second-order convergence guarantee and escape saddle points using simple perturbation.
no code implementations • 17 Aug 2020 • Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran
In this study, we investigate learning rate adaption at different levels based on the hyper-gradient descent framework and propose a method that adaptively learns the optimizer parameters by combining multiple levels of learning rates with hierarchical structures.
no code implementations • 11 Aug 2020 • Andi Han, Junbin Gao
We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a near-optimal complexity of $\tilde{\mathcal{O}}(\epsilon^{-3})$ to find $\epsilon$-approximate solution with one sample.
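A Euclidean sketch of the recursive-momentum idea (the paper works on Riemannian manifolds; this simplified version, with an assumed additive-noise gradient oracle, only illustrates the estimator d_t = g(x_t; xi_t) + (1 - a)(d_{t-1} - g(x_{t-1}; xi_t))):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, noise):
    """Stochastic gradient of f(x) = 0.5 * ||x||^2 under additive noise."""
    return x + noise

def storm(x0, steps=2000, lr=0.05, a=0.1, sigma=0.1):
    """Stochastic recursive momentum estimator (Euclidean sketch):
    the correction term evaluates the SAME sample at both the new and the
    old iterate, which is what drives the variance reduction."""
    x = x0.copy()
    d = noisy_grad(x, sigma * rng.standard_normal(x.shape))
    for _ in range(steps):
        x_new = x - lr * d
        xi = sigma * rng.standard_normal(x.shape)   # one fresh sample xi_t
        d = noisy_grad(x_new, xi) + (1 - a) * (d - noisy_grad(x, xi))
        x = x_new
    return x

x = storm(np.ones(3) * 5.0)   # converges near the minimizer at the origin
```

Because both gradient evaluations share the sample xi, their noise largely cancels in the correction term, so only a small fraction (weighted by a) of fresh noise enters the estimator each step.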
no code implementations • 26 Jul 2020 • Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran
Based on this, a novel family of flexible activation functions that can replace sigmoid or tanh in LSTM cells are implemented, as well as a new family by combining ReLU and ELUs.
no code implementations • 22 Jul 2020 • Xuebin Zheng, Bingxin Zhou, Ming Li, Yu Guang Wang, Junbin Gao
In this paper, we propose a framework for graph neural networks with multiresolution Haar-like wavelets, or MathNet, with interrelated convolution and pooling strategies.
no code implementations • 3 Jul 2020 • Andi Han, Junbin Gao
Variance reduction techniques are popular in accelerating gradient descent and stochastic gradient descent for optimization problems defined on both Euclidean space and Riemannian manifold.
no code implementations • 23 Jan 2020 • Zhengkun Li, Minh-Ngoc Tran, Chao Wang, Richard Gerlach, Junbin Gao
Value-at-Risk (VaR) and Expected Shortfall (ES) are widely used in the financial sector to measure the market risk and manage the extreme market movement.
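As background for these risk measures, empirical VaR is a quantile of the loss distribution and ES is the mean loss beyond that quantile; a minimal sketch (generic estimators, not this paper's model):

```python
import numpy as np

def var_es(returns, alpha=0.05):
    """Empirical Value-at-Risk and Expected Shortfall at level alpha,
    reported as positive loss figures."""
    losses = -np.asarray(returns)            # losses are negated returns
    var = np.quantile(losses, 1 - alpha)     # loss exceeded with prob. alpha
    es = losses[losses >= var].mean()        # mean loss beyond the VaR
    return var, es

rng = np.random.default_rng(42)
rets = rng.normal(0.0, 0.01, size=100_000)   # simulated daily returns
var95, es95 = var_es(rets, alpha=0.05)
```

For normal returns with 1% daily volatility, the 95% VaR is about 1.64% and the 95% ES about 2.06%, and ES always dominates VaR since it averages the tail beyond it.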
no code implementations • 17 Jan 2020 • Bingxin Zhou, Xuebin Zheng, Junbin Gao
Adam-type optimizers, as a class of adaptive moment estimation methods with the exponential moving average scheme, have been successfully used in many applications of deep learning.
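The exponential-moving-average scheme referred to here is the standard Adam update, which keeps decayed running averages of the gradient and its square with bias correction; a compact sketch (plain Adam, not the optimizer variant this paper proposes):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and its square (v), with bias correction for the zero initialisation."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)                  # bias-corrected first moment
    v_hat = v / (1 - b2**t)                  # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2 starting from theta = 1.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
```

The bias correction matters early on: without it, the zero-initialised averages would make the first steps much too small.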
no code implementations • 15 Nov 2019 • Dai Shi, Junbin Gao, Xia Hong, S. T. Boris Choy, Zhiyong Wang
These geometrical features of CMM have paved the way for developing numerical Riemannian optimization algorithms such as Riemannian gradient descent and Riemannian trust-region algorithms, forming a uniform optimization method for all types of OT problems.
no code implementations • 19 Oct 2019 • Di Xu, Tianhang Long, Junbin Gao
Contemporary information processing systems continuously collect massive volumes of high-dimensional data that evolve over time, which raises the problem of organizing the data into clusters, i.e., achieving dimensionality reduction while learning its temporal evolution patterns.
no code implementations • 25 Sep 2019 • Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran
Based on this, we develop two novel flexible activation functions that can be implemented in LSTM cells and auto-encoder layers.
no code implementations • 14 Aug 2019 • Mingyuan Bai, S. T. Boris Choy, Xin Song, Junbin Gao
Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model.
1 code implementation • 23 Jul 2019 • Ming Yin, Weitian Huang, Junbin Gao
Clustering multi-view data has been a fundamental research topic in the computer vision community.
1 code implementation • 10 Jun 2019 • Darren Yates, Md Zahidul Islam, Junbin Gao
Smartphones have become the ultimate 'personal' computer, yet despite this, general-purpose data-mining and knowledge discovery tools for mobile devices are surprisingly rare.
no code implementations • 11 Feb 2019 • Bingxin Zhou, Junbin Gao, Minh-Ngoc Tran, Richard Gerlach
Gaussian variational approximation is a popular methodology to approximate posterior distributions in Bayesian inference especially in high dimensional and large data settings.
no code implementations • 29 Jan 2019 • Di Xu, Manjing Fang, Xia Hong, Junbin Gao
A general framework of least squares support vector machine with low rank kernels, referred to as LR-LSSVM, is introduced in this paper.
1 code implementation • 12 Nov 2018 • Wei Wu, Bin Li, Ling Chen, Junbin Gao, Chengqi Zhang
In this review, we mainly categorize the Weighted MinHash algorithms into quantization-based approaches, "active index"-based ones and others, and show the evolution and inherent connection of the weighted MinHash algorithms, from the integer weighted MinHash algorithms to real-valued weighted MinHash ones (particularly the Consistent Weighted Sampling scheme).
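The integer weighted MinHash family mentioned here can be illustrated by the simplest replication trick: expand element i into w_i sub-elements and MinHash them, so two signatures collide per hash with probability equal to the weighted (generalised) Jaccard similarity. A toy sketch (illustrative; practical implementations use the quantization or sampling schemes surveyed in the paper):

```python
import random

def integer_weighted_minhash(weights, num_hashes=256, seed=0):
    """Integer weighted MinHash: replicate each element i into sub-elements
    (i, 0), ..., (i, w_i - 1) and ordinary-MinHash them, keeping the minimum
    hash value per salt as the signature slot."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    sig = []
    for salt in salts:
        best = None
        for i, w in enumerate(weights):
            for k in range(int(w)):
                h = hash((salt, i, k))   # hash of one replicated sub-element
                if best is None or h < best:
                    best = h
        sig.append(best)
    return sig

def estimate_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates weighted Jaccard."""
    return sum(p == q for p, q in zip(sig_a, sig_b)) / len(sig_a)

a = [3, 0, 2, 1]
b = [2, 1, 2, 0]
# Weighted Jaccard = sum(min) / sum(max) = (2+0+2+0) / (3+1+2+1) = 4/7
sim = estimate_jaccard(integer_weighted_minhash(a), integer_weighted_minhash(b))
```

Replication only works for integer weights and costs time proportional to the total weight, which is exactly the inefficiency that the real-valued schemes in the review (e.g., Consistent Weighted Sampling) remove.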
Data Structures and Algorithms
no code implementations • 3 Aug 2018 • Lin Wu, Yang Wang, Junbin Gao, Xue Li
Video-based person re-identification (re-id) is a central application in surveillance systems with significant concern in security.
1 code implementation • 30 Apr 2018 • Lin Wu, Yang Wang, Junbin Gao, DaCheng Tao
Recent effective methods have been developed in pair-wise similarity learning systems to detect a fixed set of features from distinct regions, which are mapped to vector embeddings for distance measurement.
no code implementations • 1 Aug 2017 • Mingyuan Bai, Boyan Zhang, Junbin Gao
In this paper, we propose a new variant of tensorial neural networks which directly take tensorial time series data as inputs.
no code implementations • 21 Jul 2017 • Lin Wu, Yang Wang, Xue Li, Junbin Gao
To address \emph{what} to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way.
no code implementations • 3 Jul 2017 • Fujiao Ju, Yanfeng Sun, Junbin Gao, Yongli Hu, Bao-Cai Yin
Under this expression, the projection base of the model is based on the tensor CANDECOMP/PARAFAC (CP) decomposition, and the number of free parameters in the model grows only linearly with the number of modes rather than exponentially.
no code implementations • CVPR 2017 • Qiong Wang, Junbin Gao, Hong Li
Spectral clustering is one of the pioneering clustering methods in the machine learning and pattern recognition fields.
no code implementations • CVPR 2017 • Yongqiang Zhang, Daming Shi, Junbin Gao, Dansong Cheng
Learning robust regression model from high-dimensional corrupted data is an essential and difficult problem in many practical applications.
no code implementations • 10 Jun 2017 • Lin Wu, Yang Wang, Junbin Gao, Xue Li
To this end, a novel objective function is proposed to jointly optimize similarity metric learning, local positive mining and robust deep embedding.
no code implementations • 9 Jun 2017 • Yanfei Zhang, Junbin Gao
The traditional approach to solving this problem relies on the probability distribution of the demand.
no code implementations • 17 May 2017 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Bao-Cai Yin
Subspace data representation has recently become a common practice in many computer vision tasks.
no code implementations • 27 Apr 2017 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Haoran Chen, Bao-Cai Yin
Learning on Grassmann manifold has become popular in many computer vision tasks, with the strong capability to extract discriminative information for imagesets and videos.
1 code implementation • 13 Apr 2017 • Stephen Tierney, Junbin Gao, Yi Guo, Zheng Zhang
However, the data may actually be functional, i.e., each data point is a function of some variable such as time, and the function is discretely sampled.
1 code implementation • 13 Apr 2017 • Stephen Tierney, Yi Guo, Junbin Gao
Sparse Subspace Clustering (SSC) has been used extensively for subspace identification tasks due to its theoretical guarantees and relative ease of implementation.
1 code implementation • 13 Apr 2017 • Stephen Tierney, Yi Guo, Junbin Gao
In this paper we present Collaborative Low-Rank Subspace Clustering.
no code implementations • 21 Sep 2016 • Simeng Liu, Yanfeng Sun, Yongli Hu, Junbin Gao, Bao-Cai Yin
The Restricted Boltzmann Machine (RBM) is a particular type of stochastic neural network that models vector data under the assumption of a Bernoulli distribution.
no code implementations • 21 Sep 2016 • Haoran Chen, Yanfeng Sun, Junbin Gao, Yongli Hu, Bao-Cai Yin
Partial least squares regression (PLSR) has been a popular technique to explore the linear relationship between two datasets.
no code implementations • 30 Aug 2016 • Ming Yin, Junbin Gao, Shengli Xie, Yi Guo
Multi-view subspace clustering is based on the fact that the multi-view data are generated from a latent subspace.
no code implementations • 13 Jun 2016 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Bao-Cai Yin
In multi-camera video surveillance, it is challenging to represent videos from different cameras properly and fuse them efficiently for specific applications such as human activity recognition and clustering.
no code implementations • CVPR 2016 • Mingkui Tan, Shijie Xiao, Junbin Gao, Dong Xu, Anton Van Den Hengel, Qinfeng Shi
Trace-norm regularization plays an important role in many areas such as machine learning and computer vision.
no code implementations • 27 Jan 2016 • Ming Yin, Shengli Xie, Yi Guo, Junbin Gao, Yun Zhang
Due to its promising classification performance, the sparse representation based classification (SRC) algorithm has attracted great attention in the past few years.
no code implementations • 21 Jan 2016 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Bao-Cai Yin
As a significant subspace clustering method, low rank representation (LRR) has attracted great attention in recent years.
no code implementations • 15 Jan 2016 • Junbin Gao, Yi Guo, Zhiyong Wang
This process can be problematic.
no code implementations • 9 Jan 2016 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Bao-Cai Yin
The novelty of this paper is to generalize LRR on Euclidean space onto an LRR model on Grassmann manifold in a uniform kernelized LRR framework.
no code implementations • CVPR 2016 • Fujiao Ju, Yanfeng Sun, Junbin Gao, Simeng Liu, Yongli Hu
This paper proposes a mixture of bilateral-projection probabilistic principal component analysis model (mixB2DPPCA) on 2D data.
no code implementations • 7 Jan 2016 • Xinglin Piao, Yongli Hu, Yanfeng Sun, Junbin Gao, Bao-Cai Yin
In a sparse representation based recognition scheme, it is critical to learn a desired dictionary that aims at both good representational power and discriminative performance.
no code implementations • 5 Jan 2016 • Stephen Tierney, Junbin Gao, Yi Guo, Zhengwu Zhang
However, the data may actually be functional, i.e., each data point is a function of some variable such as time, and the function is discretely sampled.
no code implementations • 5 Jan 2016 • Guanglei Qi, Yanfeng Sun, Junbin Gao, Yongli Hu, Jinghua Li
In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data.
no code implementations • CVPR 2016 • Ming Yin, Yi Guo, Junbin Gao, Zhaoshui He, Shengli Xie
Sparse subspace clustering (SSC), as one of the most successful subspace clustering methods, has achieved notable clustering accuracy in computer vision tasks.
no code implementations • 2 Jan 2016 • Xinglin Piao, Yongli Hu, Junbin Gao, Yanfeng Sun, Zhouchen Lin, Bao-Cai Yin
A new submodule clustering method via sparse and low-rank representation for multi-way data is proposed in this paper.
no code implementations • 7 Dec 2015 • Haoran Chen, Yanfeng Sun, Junbin Gao, Yongli Hu
The paper addresses the problem of optimizing a class of composite functions on Riemannian manifolds and proposes a new first-order optimization algorithm (FOA) with a fast convergence rate.
no code implementations • 4 Sep 2015 • Xia Hong, Sheng Chen, Yi Guo, Junbin Gao
An l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is proposed based on the concept of leave-one-out mean square error (LOOMSE).
no code implementations • 18 Aug 2015 • Yifan Fu, Junbin Gao, Xia Hong, David Tien
In this paper, we present a novel low rank representation (LRR) algorithm for data lying on the manifold of square root densities.
no code implementations • CVPR 2015 • Mingkui Tan, Qinfeng Shi, Anton Van Den Hengel, Chunhua Shen, Junbin Gao, Fuyuan Hu, Zhen Zhang
Exploiting label dependency for multi-label image classification can significantly improve classification performance.
1 code implementation • 16 Apr 2015 • Stephen Tierney, Yi Guo, Junbin Gao
We propose Ordered Subspace Clustering (OSC) to segment data drawn from a sequentially ordered union of subspaces.
no code implementations • 8 Apr 2015 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Bao-Cai Yin
One of its successful applications is subspace clustering which means data are clustered according to the subspaces they belong to.
no code implementations • 8 Apr 2015 • Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Bao-Cai Yin
Many computer vision algorithms employ subspace models to represent data.
no code implementations • 7 Apr 2015 • Yanfeng Sun, Junbin Gao, Xia Hong, Bamdev Mishra, Bao-Cai Yin
In contrast to existing techniques, we propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model.
no code implementations • 10 Mar 2015 • Mingkui Tan, Shijie Xiao, Junbin Gao, Dong Xu, Anton Van Den Hengel, Qinfeng Shi
Nuclear-norm regularization plays a vital role in many learning tasks, such as low-rank matrix recovery (MR), and low-rank representation (LRR).
no code implementations • 6 Dec 2014 • Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao
More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to other models in closed-form formulations.
no code implementations • CVPR 2014 • Stephen Tierney, Junbin Gao, Yi Guo
We propose Ordered Subspace Clustering (OSC) to segment data drawn from a sequentially ordered union of subspaces.