1 code implementation • 2 Feb 2024 • Wenhao Jiang, Duo Li, Menghan Hu, Guangtao Zhai, Xiaokang Yang, Xiao-Ping Zhang
To tackle the issues of catastrophic forgetting and overfitting in few-shot class-incremental learning (FSCIL), previous work has primarily concentrated on preserving the memory of old knowledge during the incremental phase.
1 code implementation • 9 Jan 2024 • Kuo Yang, Duo Li, Menghan Hu, Guangtao Zhai, Xiaokang Yang, Xiao-Ping Zhang
This approach allows the model to perceive the uncertainty of pseudo-labels at different training stages, thereby adaptively adjusting the selection thresholds for different classes.
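The class-wise adaptive thresholding described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the use of mean per-class confidence as the "learning status", and the normalization rule are all assumptions.

```python
def adaptive_thresholds(confidences, num_classes, base_threshold=0.95):
    """Scale a base confidence threshold per class by that class's mean
    predicted confidence, so under-learned classes face a lower bar for
    pseudo-label selection (hypothetical sketch of adaptive thresholding)."""
    sums = [0.0] * num_classes
    counts = [0] * num_classes
    for cls, conf in confidences:  # (predicted class, max softmax probability)
        sums[cls] += conf
        counts[cls] += 1
    # Per-class "learning status": mean confidence; 0.0 if the class is unseen.
    status = [sums[c] / counts[c] if counts[c] else 0.0 for c in range(num_classes)]
    top = max(status) or 1.0
    # Normalize so the best-learned class keeps the full base threshold.
    return [base_threshold * (s / top) for s in status]

# Class 0 is confidently predicted, class 1 is not, so class 1 gets a lower bar.
preds = [(0, 0.99), (0, 0.97), (1, 0.60), (1, 0.70)]
ths = adaptive_thresholds(preds, 2)
```

Recomputing these thresholds each epoch lets the selection criterion track how well each class has been learned as training progresses.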
no code implementations • 14 Jul 2022 • Guimei Cao, Zhanzhan Cheng, Yunlu Xu, Duo Li, ShiLiang Pu, Yi Niu, Fei Wu
In this paper, we propose an end-to-end trainable adaptively expandable network named E2-AEN, which dynamically generates lightweight structures for new tasks without any accuracy drop in previous tasks.
no code implementations • 13 Jan 2022 • Duo Li, Guimei Cao, Yunlu Xu, Zhanzhan Cheng, Yi Niu
For the SSLAD-Track 3B challenge on continual learning, we propose COntinual Learning with Transformer (COLT).
no code implementations • 12 Aug 2021 • Duo Li, Shang-Hua Gao
In recent years, the connections between deep residual networks and first-order Ordinary Differential Equations (ODEs) have been revealed.
1 code implementation • ICCV 2021 • Lei Zhu, Qi She, Duo Li, Yanye Lu, Xuejing Kang, Jie Hu, Changhu Wang
Nonlocal-based blocks are designed to capture long-range spatial-temporal dependencies in computer vision tasks.
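The core non-local operation can be sketched in a few lines: every position attends to every other position via softmax-normalized pairwise affinities, and the aggregate is added back as a residual. This simplification uses identity maps in place of the learned theta/phi/g projections, so it is an illustration of the mechanism rather than a faithful implementation.

```python
import math

def nonlocal_block(x):
    """Minimal embedded-Gaussian non-local operation over a list of feature
    vectors: each output position is a softmax-weighted sum over all positions,
    added back through a residual connection. Learned projections are replaced
    by identity maps for clarity (an illustrative simplification)."""
    n = len(x)
    out = []
    for i in range(n):
        # Pairwise affinities via dot products (standing in for theta(x_i).phi(x_j)).
        scores = [sum(a * b for a, b in zip(x[i], x[j])) for j in range(n)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        # Aggregate all positions, then add the residual.
        agg = [sum(w * x[j][d] for j, w in enumerate(weights))
               for d in range(len(x[i]))]
        out.append([xi + ai for xi, ai in zip(x[i], agg)])
    return out

feat = nonlocal_block([[1.0, 0.0], [0.0, 1.0]])
```

Because every position attends to all others, one such block captures dependencies that would otherwise require stacking many local convolutions.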
no code implementations • CVPR 2021 • Shang-Hua Gao, Qi Han, Duo Li, Ming-Ming Cheng, Pai Peng
We propose to add a simple yet effective feature calibration scheme into the centering and scaling operations of BatchNorm, enhancing instance-specific representations at negligible computational cost.
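The idea of calibrating BatchNorm's centering and scaling with instance-specific statistics can be sketched as below. The fixed mixing weight `lam` is an illustrative stand-in for the paper's learned calibration; this is a sketch of the general idea, not the published method.

```python
def calibrated_batchnorm(batch, lam=0.5, eps=1e-5):
    """Normalize a batch (list of feature vectors) using batch statistics
    calibrated with per-instance statistics, so each sample's own signal
    influences its centering and scaling (hypothetical sketch; `lam` would
    be learned in practice)."""
    n, d = len(batch), len(batch[0])
    # Standard per-feature batch statistics.
    b_mean = [sum(x[j] for x in batch) / n for j in range(d)]
    b_var = [sum((x[j] - b_mean[j]) ** 2 for x in batch) / n for j in range(d)]
    out = []
    for x in batch:
        # Instance statistics over this sample's own features.
        i_mean = sum(x) / d
        i_var = sum((v - i_mean) ** 2 for v in x) / d
        out.append([
            (x[j] - (lam * b_mean[j] + (1 - lam) * i_mean))
            / ((lam * b_var[j] + (1 - lam) * i_var + eps) ** 0.5)
            for j in range(d)
        ])
    return out

out = calibrated_batchnorm([[1.0, 2.0], [3.0, 4.0]])
```

The extra cost is two reductions per sample, which is negligible next to the convolutions surrounding the normalization layer.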
no code implementations • 5 Apr 2021 • Dengsheng Chen, Haowen Deng, Jun Li, Duo Li, Yao Duan, Kai Xu
In this work, rather than defining a continuous or discrete kernel, we directly embed convolutional kernels into the learnable potential fields, giving rise to potential convolution.
1 code implementation • CVPR 2021 • Lei Zhu, Qi She, Bin Zhang, Yanye Lu, Zhilin Lu, Duo Li, Jie Hu
Superpixels are generated by automatically clustering the pixels of an image into hundreds of compact partitions, and are widely used to perceive object contours thanks to their excellent contour adherence.
1 code implementation • CVPR 2021 • Xiangtai Li, Hao He, Xia Li, Duo Li, Guangliang Cheng, Jianping Shi, Lubin Weng, Yunhai Tong, Zhouchen Lin
Experimental results on three different aerial segmentation datasets suggest that the proposed method is more effective and efficient than state-of-the-art general semantic segmentation methods.
13 code implementations • CVPR 2021 • Duo Li, Jie Hu, Changhu Wang, Xiangtai Li, Qi She, Lei Zhu, Tong Zhang, Qifeng Chen
Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision.
Ranked #706 on Image Classification on ImageNet
no code implementations • 1 Jan 2021 • Duo Li, Sanli Tang, Zhanzhan Cheng, ShiLiang Pu, Yi Niu, Wenming Tan, Fei Wu, Xiaokang Yang
However, the impact of pseudo-labeled samples' quality, as well as mining strategies for high-quality training samples, has rarely been studied in SSL.
1 code implementation • ECCV 2020 • Yikai Wang, Fuchun Sun, Duo Li, Anbang Yao
We propose a general method to train a single convolutional neural network which is capable of switching image resolutions at inference.
1 code implementation • ECCV 2020 • Duo Li, Anbang Yao, Qifeng Chen
To achieve efficient and flexible image classification at runtime, we employ meta learners to generate convolutional weights of main networks for various input scales and maintain privatized Batch Normalization layers per scale.
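One ingredient above, maintaining privatized Batch Normalization layers per input scale, can be sketched as follows: normalization statistics are keyed by scale and never mixed across resolutions, while all other parameters would be shared. The class name and interface are assumptions for illustration.

```python
class ScalePrivateBN:
    """Keep separate (privatized) normalization statistics for each input
    scale while the rest of the network is shared across scales
    (hypothetical minimal sketch of per-scale BatchNorm)."""

    def __init__(self):
        self.stats = {}  # scale -> (per-feature mean, per-feature var)

    def __call__(self, scale, batch, eps=1e-5):
        n, d = len(batch), len(batch[0])
        mean = [sum(x[j] for x in batch) / n for j in range(d)]
        var = [sum((x[j] - mean[j]) ** 2 for x in batch) / n for j in range(d)]
        self.stats[scale] = (mean, var)  # statistics stay private to `scale`
        return [[(x[j] - mean[j]) / (var[j] + eps) ** 0.5 for j in range(d)]
                for x in batch]

bn = ScalePrivateBN()
y224 = bn(224, [[1.0, 2.0], [3.0, 4.0]])
y112 = bn(112, [[0.0, 1.0], [2.0, 3.0]])
```

Separating statistics per scale avoids the distribution mismatch that arises when feature maps from different input resolutions share one set of BatchNorm statistics.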
no code implementations • ECCV 2020 • Duo Li, Qifeng Chen
In this paper, we build upon the weakly-supervised generation mechanism of intermediate attention maps in any convolutional neural network and expose the effectiveness of attention modules more directly, in order to fully exploit their potential.
1 code implementation • ECCV 2020 • Duo Li, Anbang Yao, Qifeng Chen
Despite their strong modeling capacities, Convolutional Neural Networks (CNNs) are often scale-sensitive.
1 code implementation • CVPR 2020 • Duo Li, Qifeng Chen
While the depth of modern Convolutional Neural Networks (CNNs) surpasses that of the pioneering networks by a significant margin, the traditional practice of attaching supervision only to the final classifier and progressively propagating the gradient upstream remains the training mainstay.
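The contrast drawn above, between supervising only the final classifier and also supervising intermediate layers, can be made concrete with a deeply-supervised objective. The weighting scheme below is a generic illustration, not the loss proposed in the paper.

```python
def deeply_supervised_loss(final_loss, aux_losses, aux_weight=0.3):
    """Combine the final classifier's loss with auxiliary losses attached to
    intermediate layers, so gradient signal is injected directly into earlier
    stages instead of only flowing back from the top (aux_weight is an
    illustrative hyperparameter)."""
    return final_loss + aux_weight * sum(aux_losses)

# Final head loss 1.0 plus two intermediate heads at 0.5 each.
total = deeply_supervised_loss(1.0, [0.5, 0.5])
```

At inference the auxiliary heads are discarded, so the extra supervision costs nothing at test time.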
1 code implementation • ICCV 2019 • Duo Li, Aojun Zhou, Anbang Yao
MobileNets, a class of convolutional neural network architectures with a top-performing accuracy-efficiency trade-off, are increasingly used in many resource-aware vision applications.