no code implementations • ECCV 2020 • Xiangyu He, Zitao Mo, Ke Cheng, Weixiang Xu, Qinghao Hu, Peisong Wang, Qingshan Liu, Jian Cheng
The matrix composed of basis vectors is referred to as the proxy matrix, and auxiliary variables serve as the coefficients of this linear combination.
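A minimal sketch of the idea as described, under hypothetical sizes: a weight matrix is approximated as a linear combination of basis vectors, where the basis vectors stacked as columns form the proxy matrix and the auxiliary variables act as the combination coefficients (this is not the paper's exact algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))        # full-precision weights (hypothetical sizes)
k = 8                                     # number of basis vectors (assumed)

P = rng.standard_normal((64, k))          # proxy matrix: basis vectors as columns
A, *_ = np.linalg.lstsq(P, W, rcond=None) # auxiliary variables as coefficients, W ~ P @ A
W_approx = P @ A
print("relative reconstruction error:", np.linalg.norm(W - W_approx) / np.linalg.norm(W))
```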
1 code implementation • ECCV 2020 • Jiwei Chen, Yubao Sun, Qingshan Liu, Rui Huang
The IDR module is designed to reconstruct the remaining details from the residual measurement vector, and MRU is employed to update the residual measurement vector and feed it into the next IDR module.
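A minimal sketch of the residual-update loop described above, with assumed names and sizes; the gradient-like step stands in for a learned IDR module, and the subtraction plays the role of the MRU update.

```python
import numpy as np

def measurement_residual_update(y, Phi, x_hat):
    """MRU-style step: remove the measurement explained by the current
    reconstruction so the next stage only sees the remaining details."""
    return y - Phi @ x_hat

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))       # sampling matrix (hypothetical sizes)
x_true = rng.standard_normal(256)
y = Phi @ x_true                           # compressive measurement

x_hat = np.zeros(256)
for _ in range(3):                         # stacked reconstruction/update stages
    y_res = measurement_residual_update(y, Phi, x_hat)
    x_hat = x_hat + 0.01 * Phi.T @ y_res   # placeholder for a learned IDR module
```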
1 code implementation • 16 Mar 2022 • Jun Wang, Ying Cui, Dongyan Guo, Junxia Li, Qingshan Liu, Chunhua Shen
To solve these problems, we leverage cross-attention and self-attention mechanisms to design a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN searches.
1 code implementation • 11 Oct 2021 • Hui Shuai, Lele Wu, Qingshan Liu
MFT fuses the features of a varying number of views with a relative-attention block.
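A minimal sketch of fusing a varying number of per-view features with attention. A standard multi-head attention layer is used here as a stand-in for the relative-attention block named above; the embedding size is an assumption.

```python
import torch
import torch.nn as nn

embed_dim = 256
attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)

for num_views in (2, 3, 5):                       # the number of views may vary
    views = torch.randn(1, num_views, embed_dim)  # one feature vector per view
    fused, _ = attn(views, views, views)          # cross-view attention
    print(fused.mean(dim=1).shape)                # pooled multi-view feature
```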
no code implementations • CVPR 2021 • Kaihua Zhang, Mingliang Dong, Bo Liu, Xiao-Tong Yuan, Qingshan Liu
These dense correlation volumes enable the network to accurately discover structured pair-wise pixel similarities among the common salient objects.
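A minimal sketch of a dense correlation volume between two images' feature maps: every pixel of one map is compared with every pixel of the other. The shapes and cosine normalization are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

fa = torch.randn(1, 64, 32, 32)              # features of image A: (B, C, H, W)
fb = torch.randn(1, 64, 32, 32)              # features of image B

fa = F.normalize(fa.flatten(2), dim=1)       # (B, C, H*W), unit-norm per pixel
fb = F.normalize(fb.flatten(2), dim=1)
corr = torch.einsum('bci,bcj->bij', fa, fb)  # (B, H*W, H*W) pairwise similarities
print(corr.shape)
```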
no code implementations • ICCV 2021 • Kaihua Zhang, Zicheng Zhao, Dong Liu, Qingshan Liu, Bo Liu
The popular unsupervised video object segmentation methods fuse the RGB frame and optical flow via a two-stream network.
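A minimal two-stream sketch in the spirit of the description above: one encoder for the RGB frame, one for optical flow, fused by concatenation. Layer sizes are illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_stream = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.flow_stream = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(64, 1, 1)      # per-pixel object mask logits

    def forward(self, rgb, flow):
        fused = torch.cat([self.rgb_stream(rgb), self.flow_stream(flow)], dim=1)
        return self.head(fused)

net = TwoStreamFusion()
mask = net(torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64))
print(mask.shape)
```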
no code implementations • 18 Dec 2020 • Yubao Sun, Ying Yang, Qingshan Liu, Mohan Kankanhalli
Hyperspectral compressive imaging takes advantage of compressive sensing theory to achieve coded-aperture snapshot measurement without temporal scanning: the entire three-dimensional spatial-spectral cube is captured as a two-dimensional projection during a single integration period.
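A minimal sketch of such a coded-aperture snapshot measurement: the 3-D spatial-spectral cube is modulated by a binary coded aperture, spectrally sheared, and summed into a single 2-D projection. The sizes and the one-pixel shear per band are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, L = 32, 32, 8                              # height, width, spectral bands
cube = rng.random((H, W, L))                     # spatial-spectral data
mask = (rng.random((H, W)) > 0.5).astype(float)  # coded aperture

y = np.zeros((H, W + L - 1))                     # single 2-D snapshot
for l in range(L):
    y[:, l:l + W] += mask * cube[:, :, l]        # shear by one pixel per band
print(y.shape)
```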
no code implementations • ECCV 2020 • Hongduan Tian, Bo Liu, Xiao-Tong Yuan, Qingshan Liu
To remedy this deficiency, we propose a network-pruning-based meta-learning approach for overfitting reduction by explicitly controlling the capacity of the network.
1 code implementation • 25 May 2020 • Renlong Hang, Zhu Li, Qingshan Liu, Pedram Ghamisi, Shuvra S. Bhattacharyya
Specifically, a spectral attention sub-network and a spatial attention sub-network are proposed for spectral and spatial classification, respectively.
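A minimal sketch of the two attention ideas named above: a spectral (band-wise) attention vector and a spatial attention map. These are generic building blocks with assumed sizes, not the paper's exact sub-networks.

```python
import torch
import torch.nn as nn

class SpectralSpatialAttention(nn.Module):
    def __init__(self, bands=103):
        super().__init__()
        self.spectral = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(bands, bands, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(bands, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):                  # x: (B, bands, H, W) hyperspectral patch
        x = x * self.spectral(x)           # reweight spectral bands
        x = x * self.spatial(x)            # reweight spatial positions
        return x

out = SpectralSpatialAttention()(torch.randn(2, 103, 9, 9))
print(out.shape)
```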
1 code implementation • CVPR 2020 • Kaihua Zhang, Tengpeng Li, Shiwen Shen, Bo Liu, Jin Chen, Qingshan Liu
Second, we develop an attention graph clustering algorithm to discriminate the common objects from all the salient foreground objects in an unsupervised fashion.
no code implementations • 13 Mar 2020 • Kaihua Zhang, Long Wang, Dong Liu, Bo Liu, Qingshan Liu, Zhu Li
We present an end-to-end network that stores short- and long-term information from the video sequence preceding the current frame as temporal memories to address temporal modeling in VOS.
no code implementations • 4 Feb 2020 • Renlong Hang, Zhu Li, Pedram Ghamisi, Danfeng Hong, Guiyu Xia, Qingshan Liu
For the feature-level fusion, three different fusion strategies are evaluated, including the concatenation strategy, the maximization strategy, and the summation strategy.
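The three feature-level fusion strategies named above, sketched on two hypothetical modality branches (feature sizes are placeholders).

```python
import torch

f_a = torch.randn(4, 128)                          # features from one modality branch
f_b = torch.randn(4, 128)                          # features from the other branch

fused_concat = torch.cat([f_a, f_b], dim=1)        # concatenation strategy: (4, 256)
fused_max = torch.maximum(f_a, f_b)                # maximization strategy: (4, 128)
fused_sum = f_a + f_b                              # summation strategy: (4, 128)
```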
1 code implementation • 2 Jan 2020 • Jin Chen, Huihui Song, Kaihua Zhang, Bo Liu, Qingshan Liu
Due to a variety of motions across different frames, it is highly challenging to learn an effective spatiotemporal representation for accurate video saliency prediction (VSP).
1 code implementation • 29 Nov 2019 • Kaihua Zhang, Jin Chen, Bo Liu, Qingshan Liu
With the multi-resolution features of the relevant images as input, we design a spatial modulator to learn a mask for each image.
no code implementations • 25 Sep 2019 • Hongduan Tian, Bo Liu, Xiao-Tong Yuan, Qingshan Liu
Meta-Learning has achieved great success in few-shot learning.
no code implementations • 28 Feb 2019 • Renlong Hang, Qingshan Liu, Danfeng Hong, Pedram Ghamisi
The first RNN layer is used to eliminate redundant information between adjacent spectral bands, while the second RNN layer aims to learn the complementary information from non-adjacent spectral bands.
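A minimal two-layer recurrent sketch of the idea above: the first GRU scans adjacent spectral bands, and its outputs are subsampled so the second GRU sees non-adjacent bands. The stride and layer sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

bands, hidden = 103, 64
x = torch.randn(8, bands, 1)                   # 8 pixels, one value per band
rnn1 = nn.GRU(1, hidden, batch_first=True)
rnn2 = nn.GRU(hidden, hidden, batch_first=True)

h1, _ = rnn1(x)                                # adjacent-band modeling
h2, _ = rnn2(h1[:, ::4, :])                    # non-adjacent (strided) bands
pixel_feature = h2[:, -1, :]                   # per-pixel spectral feature
print(pixel_feature.shape)
```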
no code implementations • 20 Aug 2018 • Feng Zhou, Renlong Hang, Qingshan Liu, Xiaotong Yuan
Specifically, for each pixel, we feed its spectral values in different channels into the Spectral LSTM one by one to learn the spectral feature.
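A minimal sketch of this spectral branch: a pixel's value in each spectral channel is fed into an LSTM cell one step at a time. The sizes are illustrative, and the actual Spectral LSTM configuration is not specified here.

```python
import torch
import torch.nn as nn

bands, hidden = 200, 128
pixel = torch.randn(1, bands)                  # one pixel, `bands` spectral values
cell = nn.LSTMCell(1, hidden)

h = torch.zeros(1, hidden)
c = torch.zeros(1, hidden)
for b in range(bands):                         # one spectral value per step
    h, c = cell(pixel[:, b:b + 1], (h, c))
spectral_feature = h                           # final hidden state as the feature
```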
no code implementations • 30 Mar 2018 • Guangcan Liu, Zhao Zhang, Qingshan Liu, Kongkai Xiong
Dimension reduction is widely regarded as an effective way for decreasing the computation, storage and communication loads of data-driven intelligent systems, leading to a growing demand for statistical methods that allow analysis (e.g., clustering) of compressed data.
no code implementations • NeurIPS 2017 • Guangcan Liu, Qingshan Liu, Xiaotong Yuan
To break through the limits of random sampling, this paper introduces a new hypothesis called the isomeric condition, which is provably weaker than the assumption of uniform sampling and arguably holds even when the missing data is placed irregularly.
no code implementations • 23 Mar 2017 • Qingshan Liu, Feng Zhou, Renlong Hang, Xiao-Tong Yuan
In the network, the issue of spectral feature extraction is considered as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it.
no code implementations • ICML 2017 • Bo Liu, Xiao-Tong Yuan, Lezi Wang, Qingshan Liu, Dimitris N. Metaxas
It remains open to explore duality theory and algorithms in such a non-convex and NP-hard problem setting.
no code implementations • 24 Dec 2016 • Kaihua Zhang, Xuejun Li, Qingshan Liu
Then, with the updated appearances, we formulate a spatio-temporal graphical model composed of superpixel label consistency potentials.
no code implementations • NeurIPS 2016 • Xiaotong Yuan, Ping Li, Tong Zhang, Qingshan Liu, Guangcan Liu
We investigate a subclass of exponential family graphical models whose sufficient statistics are defined by arbitrary additive forms.
no code implementations • 11 Nov 2016 • Qingshan Liu, Renlong Hang, Huihui Song, Fuping Zhu, Javier Plaza, Antonio Plaza
In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification.
no code implementations • 11 Nov 2016 • Qingshan Liu, Renlong Hang, Huihui Song, Zhi Li
In this paper, we propose a multi-scale deep feature learning method for high-resolution satellite image classification.
no code implementations • 30 Oct 2016 • Kaihua Zhang, Qingshan Liu, Ming-Hsuan Yang
In this paper, we present a simple yet effective Boolean map based representation that exploits connectivity cues for visual tracking.
no code implementations • 6 Apr 2016 • Changsheng Li, Junchi Yan, Fan Wei, Weishan Dong, Qingshan Liu, Hongyuan Zha
In this paper, we propose a novel multi-task learning (MTL) framework, called Self-Paced Multi-Task Learning (SPMTL).
no code implementations • 22 Mar 2016 • Changsheng Li, Fan Wei, Junchi Yan, Weishan Dong, Qingshan Liu, Xiao-Yu Zhang, Hongyuan Zha
In this paper, we propose a novel multi-label learning framework, called Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the self-paced learning strategy into the multi-label learning regime.
no code implementations • 3 Mar 2016 • Qingshan Liu, Yubao Sun, Cantian Wang, Tongliang Liu, DaCheng Tao
In the second step, a hypergraph is used to represent the high-order relationships between each datum and its prominent samples by regarding them as a hyperedge.
no code implementations • 22 Feb 2016 • Yubao Sun, Renlong Hang, Qingshan Liu, Fuping Zhu, Hucheng Pei
In this paper, we propose a novel data-driven regression model for aerosol optical depth (AOD) retrieval.
no code implementations • ICCV 2015 • Zhenzhen Wang, Xiao-Tong Yuan, Qingshan Liu, Shuicheng Yan
In this paper, we present a concise framework to approximately construct feature maps for nonlinear additive kernels such as the Intersection, Hellinger's, and Chi^2 kernels.
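A small illustration of the explicit-feature-map idea, using Hellinger's kernel, whose map is exact: K(x, y) = sum_i sqrt(x_i * y_i) equals the inner product of sqrt(x) and sqrt(y). The intersection and Chi^2 kernels require approximate maps; this sketch does not reproduce the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(16)                             # non-negative histogram-like features
y = rng.random(16)

k_exact = np.sum(np.sqrt(x * y))               # Hellinger kernel value
k_via_map = np.dot(np.sqrt(x), np.sqrt(y))     # inner product of mapped features
print(np.isclose(k_exact, k_via_map))          # True: the map is exact
```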
no code implementations • 21 Apr 2015 • Qingshan Liu, Jing Yang, Kaihua Zhang, Yi Wu
Recently, the compressive tracking (CT) method has attracted much attention due to its high efficiency, but it cannot deal well with large-scale target appearance variations because its data-independent random projection matrix yields less discriminative features.
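A minimal sketch of the data-independent random projection at issue: a fixed sparse random matrix compresses a high-dimensional appearance feature regardless of the target's content. The Achlioptas-style matrix below is a generic stand-in, not CT's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d_high, d_low = 10_000, 50                     # hypothetical dimensions
# Sparse {-1, 0, +1} projection, drawn once and never adapted to the data.
R = rng.choice([-1.0, 0.0, 1.0], size=(d_low, d_high), p=[1/6, 2/3, 1/6]) * np.sqrt(3)

features = rng.random(d_high)                  # high-dimensional appearance features
compressed = R @ features                      # data-independent compressed features
print(compressed.shape)
```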
no code implementations • 4 Mar 2015 • Changsheng Li, Xiangfeng Wang, Weishan Dong, Junchi Yan, Qingshan Liu, Hongyuan Zha
In particular, our method runs in one-shot without the procedure of iterative sample selection for progressive labeling.
no code implementations • 19 Jan 2015 • Kaihua Zhang, Qingshan Liu, Yi Wu, Ming-Hsuan Yang
In this paper we show that, even without offline training on a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to develop a robust representation for visual tracking.
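A minimal sketch of a "simple two-layer convolutional network" feature extractor in the spirit of the sentence above; the filter counts and sizes are assumptions, and the paper's networks are built without offline training.

```python
import torch
import torch.nn as nn

two_layer_net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),
)
patch = torch.randn(1, 3, 32, 32)              # target image patch
representation = two_layer_net(patch).flatten(1)
print(representation.shape)
```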
no code implementations • 18 Dec 2014 • Changsheng Li, Fan Wei, Weishan Dong, Qingshan Liu, Xiangfeng Wang, Xin Zhang
MORES can dynamically learn the structure of the coefficient changes in each update step to facilitate the model's continuous refinement.
no code implementations • 16 Dec 2014 • Changsheng Li, Qingshan Liu, Weishan Dong, Xin Zhang, Lin Yang
In this paper, we propose a new max-margin based discriminative feature learning method.
no code implementations • CVPR 2014 • Xiao-Tong Yuan, Qingshan Liu
The main theme of this type of method is to evaluate the function gradient in the previous iteration to update the non-zero entries and their values in the next iteration.
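A minimal sketch of such a gradient-based scheme, written as iterative hard thresholding for a sparse least-squares problem; this is a standard stand-in for the family of methods described, not the paper's own algorithm.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = hard_threshold(rng.standard_normal(200), 10)   # 10-sparse ground truth
y = A @ x_true

x = np.zeros(200)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(200):
    grad = A.T @ (A @ x - y)                   # gradient at the previous iterate
    x = hard_threshold(x - step * grad, 10)    # update non-zero entries and values
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```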