1 code implementation • NeurIPS 2022 • Qilong Wang, Mingze Gao, Zhaolin Zhang, Jiangtao Xie, Peihua Li, Qinghua Hu
Particularly, we show for the first time that effective post-normalization can make a good trade-off between representation decorrelation and information preservation for GCP, which are crucial for alleviating over-fitting and increasing the representation ability of deep GCP networks, respectively.
1 code implementation • CVPR 2022 • Jiangtao Xie, Fei Long, Jiaming Lv, Qilong Wang, Peihua Li
Few-shot classification is a challenging problem as only very few training examples are given for each new task.
1 code implementation • NeurIPS 2021 • Zilin Gao, Qilong Wang, Bingbing Zhang, Qinghua Hu, Peihua Li
Then, a temporal covariance pooling performs temporal pooling of the attentive covariance representations to characterize both intra-frame correlations and inter-frame cross-correlations of the calibrated features.
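The core idea above can be sketched in a few lines. This is a minimal illustration only: it averages per-frame covariance matrices over time, assumes features arrive as a `(T, N, C)` array, and omits the attention calibration and inter-frame cross-correlation terms of the full method.

```python
import numpy as np

def frame_covariance(feats):
    """feats: (N, C), N spatial positions with C channels.
    Returns the (C, C) covariance of one frame's features."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    return centered.T @ centered / feats.shape[0]

def temporal_covariance_pooling(video_feats):
    """video_feats: (T, N, C). Averages the per-frame covariance
    matrices over time into a single (C, C) representation.
    (Simplified: the paper's method also models inter-frame
    cross-correlations and attentive calibration.)"""
    return np.mean([frame_covariance(f) for f in video_feats], axis=0)
```

The result is a symmetric positive semi-definite matrix that summarizes channel correlations across the whole clip.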
1 code implementation • 22 Apr 2021 • Jiangtao Xie, Ruiren Zeng, Qilong Wang, Ziqi Zhou, Peihua Li
Therefore, we propose a new classification paradigm, where the second-order, cross-covariance pooling of visual tokens is combined with the class token for final classification.
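A hedged sketch of this fusion idea: compute a second-order statistic over the visual tokens, flatten it, and add its logits to the class-token logits. The projection matrices `W_cls` and `W_so` are hypothetical placeholders for learned classifier weights, not the paper's exact formulation.

```python
import numpy as np

def second_order_pooling(tokens):
    """tokens: (n, d) visual tokens. Returns their (d, d)
    covariance as a second-order representation."""
    centered = tokens - tokens.mean(axis=0, keepdims=True)
    return centered.T @ centered / tokens.shape[0]

def combined_logits(tokens, cls_token, W_cls, W_so):
    """Hypothetical fusion: add logits from the class token and
    logits from the flattened second-order statistic."""
    so = second_order_pooling(tokens).reshape(-1)  # (d*d,)
    return cls_token @ W_cls + so @ W_so
```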
1 code implementation • CVPR 2020 • Qilong Wang, Li Zhang, Banggu Wu, Dongwei Ren, Peihua Li, Wangmeng Zuo, Qinghua Hu
Recent works have demonstrated that global covariance pooling (GCP) has the ability to improve the performance of deep convolutional neural networks (CNNs) on visual classification tasks.
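For readers unfamiliar with GCP: it replaces global average pooling with the sample covariance of the convolutional activations. A minimal sketch, assuming the feature map is a `(C, H, W)` array:

```python
import numpy as np

def global_covariance_pooling(feature_map):
    """feature_map: (C, H, W) convolutional activations.
    Returns the (C, C) sample covariance over the H*W spatial
    positions, used in place of global average pooling."""
    C, H, W = feature_map.shape
    X = feature_map.reshape(C, H * W)          # one column per position
    Xc = X - X.mean(axis=1, keepdims=True)     # center each channel
    return Xc @ Xc.T / (H * W)
```

The (C, C) output captures pairwise channel correlations, a richer statistic than the C-dimensional mean.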
12 code implementations • CVPR 2020 • Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, Qinghua Hu
By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity.
Ranked #735 on Image Classification on ImageNet
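The two findings above can be illustrated with an ECA-style attention sketch: per-channel global average pooling, a local 1-D interaction across channels (no dimensionality reduction), then a sigmoid gate. This is a simplified stand-in, with uniform averaging in place of the learned 1-D convolution weights.

```python
import numpy as np

def eca_attention(feature_map, kernel_size=3):
    """feature_map: (C, H, W). ECA-style channel attention sketch:
    global average pool per channel, local cross-channel
    interaction via a 1-D sliding window over channels, sigmoid gate.
    Uniform window weights stand in for learned conv weights."""
    C, H, W = feature_map.shape
    pooled = feature_map.mean(axis=(1, 2))                 # (C,)
    pad = kernel_size // 2
    padded = np.pad(pooled, pad, mode="edge")
    conv = np.array([padded[i:i + kernel_size].mean() for i in range(C)])
    gate = 1.0 / (1.0 + np.exp(-conv))                     # sigmoid, in (0, 1)
    return feature_map * gate[:, None, None]
```

Note the contrast with SENet: there is no bottleneck FC layer, so each channel's gate is computed from its own pooled value and those of its immediate neighbors.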
3 code implementations • 15 Apr 2019 • Qilong Wang, Jiangtao Xie, Wangmeng Zuo, Lei Zhang, Peihua Li
The proposed methods are highly modular and can be readily plugged into existing deep CNNs.
Ranked #1 on Image Classification on iNaturalist (Top 3 Error metric)
1 code implementation • NeurIPS 2018 • Qilong Wang, Zilin Gao, Jiangtao Xie, Wangmeng Zuo, Peihua Li
However, both GAP and existing HOP methods assume unimodal distributions, which cannot fully capture the statistics of convolutional activations, limiting the representation ability of deep CNNs, especially for samples with complex contents.
1 code implementation • CVPR 2019 • Zilin Gao, Jiangtao Xie, Qilong Wang, Peihua Li
Deep Convolutional Networks (ConvNets) are fundamental not only to large-scale visual recognition but also to many other vision tasks.
2 code implementations • CVPR 2018 • Hao Wang, Qilong Wang, Mingqi Gao, Peihua Li, Wangmeng Zuo
Our MLKP can be efficiently computed on a modified multi-scale feature map using a low-dimensional polynomial kernel approximation. Moreover, unlike existing orderless global representations based on high-order statistics, our proposed MLKP is location retentive and sensitive, so it can be flexibly adapted to object detection.
4 code implementations • CVPR 2018 • Peihua Li, Jiangtao Xie, Qilong Wang, Zilin Gao
Towards addressing this problem, we propose an iterative matrix square root normalization method for fast end-to-end training of global covariance pooling networks.
Ranked #14 on Fine-Grained Image Classification on CUB-200-2011
Fine-Grained Image Classification • Fine-Grained Image Recognition
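The iterative square-root normalization above is the coupled Newton-Schulz iteration, which uses only matrix multiplications and is therefore GPU-friendly. A minimal NumPy sketch (trace pre-normalization ensures convergence for symmetric positive-definite input):

```python
import numpy as np

def newton_schulz_sqrt(A, iters=15):
    """Matrix square root of a symmetric positive-definite A via
    coupled Newton-Schulz iterations (matrix multiplies only)."""
    n = A.shape[0]
    norm = np.trace(A)
    Y = A / norm                  # pre-normalize so the iteration converges
    Z = np.eye(n)
    I3 = 3.0 * np.eye(n)
    for _ in range(iters):
        T = 0.5 * (I3 - Z @ Y)
        Y = Y @ T                 # Y converges to sqrt(A / norm)
        Z = T @ Z                 # Z converges to inverse sqrt
    return Y * np.sqrt(norm)      # post-compensate the normalization
```

In an end-to-end network this replaces eigendecomposition-based square roots, whose GPU support and backward pass are far less efficient.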
1 code implementation • 5 Oct 2017 • Feng Li, Yingjie Yao, Peihua Li, David Zhang, Wangmeng Zuo, Ming-Hsuan Yang
Aspect ratio variation frequently appears in visual tracking and severely affects performance.
no code implementations • CVPR 2017 • Qilong Wang, Peihua Li, Lei Zhang
Recently, plugging trainable structural layers into deep convolutional neural networks (CNNs) as image representations has made promising progress.
3 code implementations • CVPR 2017 • Hongliang Yan, Yukang Ding, Peihua Li, Qilong Wang, Yong Xu, Wangmeng Zuo
Specifically, we introduce class-specific auxiliary weights into the original MMD to exploit the class prior probabilities on the source and target domains; the challenge lies in the fact that class labels in the target domain are unavailable.
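A hedged sketch of the weighted-MMD idea: standard squared MMD with an RBF kernel, where source samples carry per-sample weights (which could be derived from estimated class priors) while target samples are weighted uniformly. This illustrates the weighting mechanism only, not the paper's full estimation of target pseudo-labels.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_mmd2(Xs, Xt, w, gamma=1.0):
    """Squared MMD between weighted source samples Xs and
    uniformly weighted target samples Xt.
    w: per-source-sample weights, normalized to sum to 1."""
    w = w / w.sum()
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return w @ Kss @ w + Ktt.mean() - 2.0 * (w @ Kst).mean()
```

With uniform weights this reduces to the ordinary (biased) MMD estimator; non-uniform weights re-balance source classes toward the target prior.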
1 code implementation • ICCV 2017 • Peihua Li, Jiangtao Xie, Qilong Wang, Wangmeng Zuo
The main challenges involved are robust covariance estimation given a small sample of large-dimensional features and usage of the manifold structure of covariance matrices.
no code implementations • 26 Jul 2016 • Yifan Wang, Lijun Wang, Hongyu Wang, Peihua Li
In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN.
no code implementations • CVPR 2016 • Qilong Wang, Peihua Li, Wangmeng Zuo, Lei Zhang
Infinite dimensional covariance descriptors can provide richer and more discriminative information than their low dimensional counterparts.
no code implementations • 9 Jul 2015 • Qilong Wang, Peihua Li, Lei Zhang, Wangmeng Zuo
The bag-of-features (BoF) model for image classification has been thoroughly studied over the last decade.
no code implementations • CVPR 2015 • Peihua Li, Xiaoxiao Lu, Qilong Wang
Locality-constrained linear coding (LLC) is a very successful feature coding method in image classification.
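For context, the approximated LLC encoder admits a closed-form solution: code each descriptor over its k nearest codebook bases with a regularized least-squares fit under a sum-to-one constraint. A minimal sketch in the style of the original LLC formulation:

```python
import numpy as np

def llc_code(x, dictionary, k=5, lam=1e-4):
    """Approximated LLC coding of descriptor x (d,) over a
    codebook `dictionary` (M, d): restrict to the k nearest
    bases and solve a regularized local least-squares system."""
    d2 = ((dictionary - x) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:k]
    B = dictionary[idx]                     # (k, d) nearest bases
    z = B - x                               # shift bases to the origin
    C = z @ z.T                             # local covariance
    C += lam * np.trace(C) * np.eye(k)      # regularization
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                            # enforce sum-to-one constraint
    code = np.zeros(dictionary.shape[0])
    code[idx] = w                           # sparse code over full codebook
    return code
```

The resulting code is sparse (at most k nonzeros) and reconstructs x from nearby bases, which is what makes LLC both fast and locality-preserving.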