Search Results for author: Baopu Li

Found 43 papers, 19 papers with code

Cross-modality Person re-identification with Shared-Specific Feature Transfer

no code implementations · CVPR 2020 · Yan Lu, Yue Wu, Bin Liu, Tianzhu Zhang, Baopu Li, Qi Chu, Nenghai Yu

In this paper, we tackle the above limitation by proposing a novel cross-modality shared-specific feature transfer algorithm (termed cm-SSFT) to explore the potential of both the modality-shared information and the modality-specific characteristics to boost the re-identification performance.

Cross-Modality Person Re-identification · Person Re-Identification

Real Image Super Resolution Via Heterogeneous Model Ensemble using GP-NAS

no code implementations · 2 Sep 2020 · Zhihong Pan, Baopu Li, Teng Xi, Yanwen Fan, Gang Zhang, Jingtuo Liu, Junyu Han, Errui Ding

With advances in deep neural networks (DNNs), recent state-of-the-art (SOTA) image super-resolution (SR) methods have achieved impressive performance using deep residual networks with dense skip connections.

Image Super-Resolution · Neural Architecture Search

SAMOT: Switcher-Aware Multi-Object Tracking and Still Another MOT Measure

no code implementations · 22 Sep 2020 · Weitao Feng, Zhihao Hu, Baopu Li, Weihao Gan, Wei Wu, Wanli Ouyang

In addition, we propose a new MOT evaluation measure, Still Another IDF score (SAIDF), aiming to focus more on identity issues. This new measure may overcome some problems of the previous measures and provide better insight into identity issues in MOT.

Multi-Object Tracking · Object

AutoPruning for Deep Neural Network with Dynamic Channel Masking

no code implementations · 22 Oct 2020 · Baopu Li, Yanwen Fan, Zhihong Pan, Gang Zhang

In the process of pruning, we utilize a searchable hyperparameter, the remaining ratio, to denote the fraction of channels retained in each convolutional layer, and we then propose a dynamic masking process to describe the corresponding channel evolution.

AutoML · Network Pruning
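
A minimal sketch of the dynamic masking idea, assuming a sigmoid-relaxed mask driven by the searchable remaining ratio; ranking channels by index and the temperature value are illustrative assumptions, not the paper's exact masking process.

```python
import torch

def soft_channel_mask(num_channels: int, remaining_ratio: torch.Tensor,
                      temperature: float = 10.0) -> torch.Tensor:
    """Differentiable mask that keeps roughly `remaining_ratio` of the channels."""
    # Normalized channel positions in [0, 1); a real system would rank
    # channels by a learned importance score instead of by index.
    positions = torch.arange(num_channels, dtype=torch.float32) / num_channels
    # Soft step: ~1 for positions below the ratio, ~0 above it.
    return torch.sigmoid(temperature * (remaining_ratio - positions))

ratio = torch.tensor(0.5, requires_grad=True)   # searchable hyperparameter
feats = torch.randn(8, 16, 32, 32)              # (batch, channels, H, W)
mask = soft_channel_mask(16, ratio)
pruned = feats * mask.view(1, -1, 1, 1)         # masked channels fade toward zero
```

Because the mask is differentiable in the ratio, the remaining ratio can be optimized jointly with the network weights, which is the point of making it searchable.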

A Unified Joint Maximum Mean Discrepancy for Domain Adaptation

no code implementations · 25 Jan 2021 · Wei Wang, Baopu Li, Shuhui Yang, Jing Sun, Zhengming Ding, Junyang Chen, Xiao Dong, Zhihui Wang, Haojie Li

From the revealed unified JMMD, we illustrate that JMMD degrades the feature-label dependence (discriminability) that benefits classification, and that it is sensitive to label distribution shift when the label kernel is the weighted class-conditional one.

Domain Adaptation
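
For reference, a small NumPy sketch of a joint MMD in which feature and label kernels are combined multiplicatively; the class-conditional weighting analyzed in the paper is not reproduced here.

```python
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """RBF Gram matrix between the rows of a and b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def joint_mmd(xs, ys, xt, yt):
    """Joint MMD: feature and label kernels multiplied elementwise."""
    k_ss = rbf_kernel(xs, xs) * rbf_kernel(ys, ys)
    k_tt = rbf_kernel(xt, xt) * rbf_kernel(yt, yt)
    k_st = rbf_kernel(xs, xt) * rbf_kernel(ys, yt)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

rng = np.random.default_rng(0)
xs, xt = rng.normal(size=(32, 8)), rng.normal(0.5, 1.0, size=(32, 8))
ys = np.eye(3)[rng.integers(0, 3, 32)]   # one-hot source labels
yt = np.eye(3)[rng.integers(0, 3, 32)]   # pseudo labels on the target
print(joint_mmd(xs, ys, xt, yt))
```

The label kernel is where the sensitivity noted above enters: with a weighted class-conditional kernel, a shift in label proportions changes the Gram matrices even when the features do not.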

MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised Domain Adaptation in Semantic Segmentation

1 code implementation · CVPR 2021 · Xiaoqing Guo, Chen Yang, Baopu Li, Yixuan Yuan

Existing self-training-based UDA approaches assign pseudo labels to target data and treat them as ground-truth labels to fully leverage the unlabeled target data for model adaptation.

Meta-Learning · Semantic Segmentation · +2
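
The self-training baseline described here is easy to sketch in PyTorch; the confidence threshold below is illustrative, and the paper's meta loss correction for noisy pseudo labels is deliberately omitted.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, target_inputs, threshold: float = 0.9):
    """One self-training step: treat confident predictions as ground truth."""
    with torch.no_grad():
        probs = F.softmax(model(target_inputs), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > threshold            # only trust confident predictions
    logits = model(target_inputs)
    if not keep.any():
        return logits.sum() * 0.0          # zero loss, keeps the graph intact
    return F.cross_entropy(logits[keep], pseudo[keep])

model = torch.nn.Linear(16, 10)            # stand-in for a segmentation net
loss = pseudo_label_loss(model, torch.randn(32, 16))
```

For segmentation the same logic applies per pixel; MetaCorrection's contribution is to correct the loss computed on these inherently noisy pseudo labels rather than trusting them outright.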

Learning Scene Structure Guidance via Cross-Task Knowledge Transfer for Single Depth Super-Resolution

no code implementations · CVPR 2021 · Baoli Sun, Xinchen Ye, Baopu Li, Haojie Li, Zhihui Wang, Rui Xu

First, we design a cross-task distillation scheme that encourages DSR and DE networks to learn from each other in a teacher-student role-exchanging fashion.

Depth Estimation · Super-Resolution · +1
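
The role-exchanging idea admits a short, deep-mutual-learning-style sketch, assuming feature-level distillation with an MSE loss; the paper's actual scheme transfers scene-structure guidance between the DSR and DE networks, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def role_exchange_loss(feat_dsr: torch.Tensor, feat_de: torch.Tensor) -> torch.Tensor:
    """Each network alternately plays teacher (detached) and student."""
    loss_dsr_as_student = F.mse_loss(feat_dsr, feat_de.detach())   # DE teaches DSR
    loss_de_as_student = F.mse_loss(feat_de, feat_dsr.detach())    # DSR teaches DE
    return loss_dsr_as_student + loss_de_as_student

loss = role_exchange_loss(torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32))
```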

No Need for Interactions: Robust Model-Based Imitation Learning using Neural ODE

1 code implementation · 3 Apr 2021 · HaoChih Lin, Baopu Li, Xin Zhou, Jiankun Wang, Max Q.-H. Meng

Most current imitation learning (IL) algorithms require interactions with either environments or expert policies during training.

Imitation Learning

Action Segmentation with Mixed Temporal Domain Adaptation

no code implementations · 15 Apr 2021 · Min-Hung Chen, Baopu Li, Yingze Bao, Ghassan AlRegib

The main progress in action segmentation comes from densely annotated data for fully supervised learning.

Action Segmentation · Domain Adaptation

PSViT: Better Vision Transformer via Token Pooling and Attention Sharing

no code implementations · 7 Aug 2021 · BoYu Chen, Peixia Li, Baopu Li, Chuming Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, Wanli Ouyang

Then, a compact set of possible combinations of different token pooling and attention sharing mechanisms is constructed.
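
A minimal sketch of token pooling in the sense used here, i.e., shrinking the token sequence between stages; the stride-2 average pooling is an illustrative choice, not PSViT's searched configuration.

```python
import torch

def pool_tokens(tokens: torch.Tensor, stride: int = 2) -> torch.Tensor:
    """Reduce sequence length by averaging groups of `stride` adjacent tokens.

    tokens: (batch, seq_len, dim); seq_len must be divisible by stride.
    """
    b, n, d = tokens.shape
    return tokens.view(b, n // stride, stride, d).mean(dim=2)

x = torch.randn(2, 16, 64)
print(pool_tokens(x).shape)  # torch.Size([2, 8, 64])
```

Fewer tokens in later stages means the quadratic attention cost drops, which is why token pooling and attention sharing are searched jointly.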

BN-NAS: Neural Architecture Search with Batch Normalization

1 code implementation · ICCV 2021 · BoYu Chen, Peixia Li, Baopu Li, Chen Lin, Chuming Li, Ming Sun, Junjie Yan, Wanli Ouyang

We present BN-NAS, neural architecture search with Batch Normalization, to accelerate neural architecture search (NAS).

Neural Architecture Search
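
A hedged sketch of using Batch Normalization statistics as a cheap ranking signal, assuming for illustration that a candidate operation is scored by the mean magnitude of its BN scale parameters; BN-NAS's actual indicator and training schedule are more involved.

```python
import torch.nn as nn

def bn_score(op: nn.Module) -> float:
    """Score a candidate operation by the mean |gamma| over its BN layers."""
    gammas = [bn.weight.abs().mean().item()
              for bn in op.modules() if isinstance(bn, nn.BatchNorm2d)]
    return sum(gammas) / len(gammas) if gammas else 0.0

op = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
print(bn_score(op))
```

Because BN parameters converge quickly, such scores can be read off early in supernet training, which is where the search-time savings come from.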

Exploring Gradient Flow Based Saliency for DNN Model Compression

1 code implementation · 24 Oct 2021 · Xinyu Liu, Baopu Li, Zhen Chen, Yixuan Yuan

Model pruning aims to reduce the deep neural network (DNN) model size or computational overhead.

Image Classification · Image Denoising · +1

$β$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search

1 code implementation · CVPR 2022 · Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, Wanli Ouyang

Neural Architecture Search (NAS) has attracted increasing attention in recent years because of its capability to design deep neural networks automatically.

Neural Architecture Search

Towards Bidirectional Arbitrary Image Rescaling: Joint Optimization and Cycle Idempotence

no code implementations · CVPR 2022 · Zhihong Pan, Baopu Li, Dongliang He, Mingde Yao, Wenhao Wu, Tianwei Lin, Xin Li, Errui Ding

Deep learning based single image super-resolution models have been widely studied, and superb results have been achieved in upscaling low-resolution images with a fixed scale factor and downscaling degradation kernel.

Image Super-Resolution

$β$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search

1 code implementation · 3 Mar 2022 · Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, Wanli Ouyang

Neural Architecture Search (NAS) has attracted increasing attention in recent years because of its capability to design deep neural networks automatically.

Neural Architecture Search
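
As a rough illustration, the Beta-decay idea can be sketched as a smoothmax (logsumexp) penalty on each edge's architecture parameters, which discourages the softmax-activated weights (β) from collapsing onto a single operation; treat this exact form and the weighting below as assumptions rather than the paper's verbatim implementation.

```python
import torch

def beta_decay_penalty(alpha: torch.Tensor) -> torch.Tensor:
    """Smoothmax penalty over the candidate-op logits of each edge."""
    return torch.logsumexp(alpha, dim=-1).mean()

alpha = torch.randn(14, 8, requires_grad=True)  # 14 edges x 8 candidate ops
task_loss = torch.tensor(1.0)                   # stand-in for the supernet loss
total_loss = task_loss + 0.25 * beta_decay_penalty(alpha)  # illustrative weight
```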

Towards Robust Adaptive Object Detection under Noisy Annotations

1 code implementation · CVPR 2022 · Xinyu Liu, Wuyang Li, Qiushi Yang, Baopu Li, Yixuan Yuan

Domain Adaptive Object Detection (DAOD) models a joint distribution of images and labels from an annotated source domain and learns a domain-invariant transformation to estimate the target labels with the given target domain images.

Object · object-detection · +1

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks

1 code implementation · 17 May 2022 · Haoran You, Baopu Li, Huihong Shi, Yonggan Fu, Yingyan Lin

To this end, this work advocates hybrid NNs that consist of both powerful yet costly multiplications and efficient yet less powerful operators for marrying the best of both worlds, and proposes ShiftAddNAS, which can automatically search for more accurate and more efficient NNs.
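
The two multiplication-free operator families can be sketched directly: shift layers quantize weights to signed powers of two so multiplies become bit-shifts, and add layers (AdderNet-style) replace multiplication with a negative L1 distance. Both functions below are simplified stand-ins for the searchable blocks in ShiftAddNAS.

```python
import torch

def shift_quantize(w: torch.Tensor) -> torch.Tensor:
    """Round weights to signed powers of two (hardware-friendly shifts)."""
    sign = torch.sign(w)
    exponent = torch.round(torch.log2(w.abs().clamp_min(1e-8)))
    return sign * torch.pow(2.0, exponent)

def adder_layer(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Multiplication-free layer: output is the negative L1 distance.

    x: (batch, in_features), w: (out_features, in_features).
    """
    return -(x[:, None, :] - w[None, :, :]).abs().sum(dim=-1)

x, w = torch.randn(4, 8), torch.randn(16, 8)
print(adder_layer(x, shift_quantize(w)).shape)  # torch.Size([4, 16])
```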

SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning

1 code implementation · 8 Jul 2022 · Haoran You, Baopu Li, Zhanyi Sun, Xu Ouyang, Yingyan Lin

In this paper, we discover for the first time that both efficient DNNs and their lottery subnetworks (i.e., lottery tickets) can be directly identified from a supernet, which we term SuperTickets, via a two-in-one training scheme with joint architecture searching and parameter pruning.

Neural Architecture Search
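
A hedged sketch of the parameter-pruning half of the two-in-one scheme, using plain magnitude pruning during training; the interleaved architecture-search updates and the paper's reactivation tricks are omitted.

```python
import torch

def prune_by_magnitude(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude weights in place; return the mask."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    weight.data.mul_(mask)        # applied periodically while the supernet trains
    return mask

w = torch.randn(64, 64)
mask = prune_by_magnitude(w, sparsity=0.5)
print(mask.mean())                # roughly half the weights survive
```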

Effective Invertible Arbitrary Image Rescaling

no code implementations · 26 Sep 2022 · Zhihong Pan, Baopu Li, Dongliang He, Wenhao Wu, Errui Ding

To increase its real-world applicability, numerous models have also been proposed to restore SR images with arbitrary scale factors, including asymmetric ones where images are resized to different scales along the horizontal and vertical directions.

Image Super-Resolution

Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing

1 code implementation · 9 Oct 2022 · Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang

In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training strategy to strengthen the performance of residual networks.

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

1 code implementation · 18 Oct 2022 · Haoran You, Zhanyi Sun, Huihong Shi, Zhongzhi Yu, Yang Zhao, Yongan Zhang, Chaojian Li, Baopu Li, Yingyan Lin

Specifically, on the algorithm level, ViTCoD prunes and polarizes the attention maps to have either denser or sparser fixed patterns, regularizing two levels of workloads without hurting accuracy; this largely reduces the attention computations while leaving room for alleviating the remaining dominant data movements. On top of that, we further integrate a lightweight and learnable auto-encoder module to enable trading the dominant high-cost data movements for lower-cost computations.
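
The prune-and-polarize step can be approximated by thresholding a calibrated attention map into a fixed sparse pattern; the per-head quantile rule below is an assumption for illustration, not ViTCoD's exact polarization algorithm.

```python
import torch

def polarize_attention(attn: torch.Tensor, density: float = 0.1) -> torch.Tensor:
    """Keep the strongest entries of each head as a fixed sparse pattern.

    attn: (heads, N, N), e.g. averaged over a calibration set, rows softmaxed.
    """
    thresh = attn.flatten(1).quantile(1.0 - density, dim=1).view(-1, 1, 1)
    return attn >= thresh          # boolean mask with a regular, fixed workload

attn = torch.rand(12, 196, 196).softmax(dim=-1)
mask = polarize_attention(attn)
print(mask.float().mean())         # ~0.1 of the entries survive
```

Fixing the pattern ahead of time is what lets the accelerator schedule the denser and sparser workloads regularly instead of chasing input-dependent sparsity.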

Instance-aware Model Ensemble With Distillation For Unsupervised Domain Adaptation

no code implementations · 15 Nov 2022 · Weimin Wu, Jiayuan Fan, Tao Chen, Hancheng Ye, Bo Zhang, Baopu Li

To enhance the model adaptability between domains and reduce the computational cost when deploying the ensemble model, we propose a novel framework, namely Instance-aware Model Ensemble With Distillation (IMED), which fuses multiple UDA component models adaptively according to different instances and distills these components into a small model.

Knowledge Distillation · Unsupervised Domain Adaptation
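
A minimal sketch of the instance-adaptive fusion, assuming a hypothetical gating head (`InstanceGate`) that weights K component models per instance; the fused teacher would then be distilled into a small student with a standard soft-label loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceGate(nn.Module):
    """Predict per-instance fusion weights for K component models."""

    def __init__(self, feat_dim: int, k: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, k)

    def forward(self, feats: torch.Tensor, component_logits: torch.Tensor):
        # feats: (batch, feat_dim); component_logits: (K, batch, classes)
        weights = F.softmax(self.fc(feats), dim=1)            # (batch, K)
        return torch.einsum("bk,kbc->bc", weights, component_logits)

gate = InstanceGate(feat_dim=32, k=3)
fused = gate(torch.randn(8, 32), torch.randn(3, 8, 10))
print(fused.shape)  # torch.Size([8, 10])
```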

MRM: Masked Relation Modeling for Medical Image Pre-Training with Genetics

no code implementations · ICCV 2023 · Qiushi Yang, Wuyang Li, Baopu Li, Yixuan Yuan

Moreover, to enhance semantic relation modeling, we propose relation matching to align the sample-wise relation between the intact and masked features.

Medical Diagnosis · Relation
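
Relation matching admits a compact sketch: compute the sample-wise similarity matrix of the intact features and of the masked features, then align the two. The cosine-similarity and MSE choices below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def relation_matching_loss(intact: torch.Tensor, masked: torch.Tensor) -> torch.Tensor:
    """Align the sample-wise relation of masked features to the intact ones.

    intact, masked: (batch, dim) feature embeddings.
    """
    r_intact = F.normalize(intact, dim=1) @ F.normalize(intact, dim=1).t()
    r_masked = F.normalize(masked, dim=1) @ F.normalize(masked, dim=1).t()
    return F.mse_loss(r_masked, r_intact.detach())

loss = relation_matching_loss(torch.randn(16, 128), torch.randn(16, 128))
```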

$β$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search

1 code implementation · 16 Jan 2023 · Peng Ye, Tong He, Baopu Li, Tao Chen, Lei Bai, Wanli Ouyang

To address the robustness problem, we first benchmark different NAS methods under a wide range of proxy data, proxy channels, proxy layers and proxy epochs, since the robustness of NAS under different kinds of proxies has not been explored before.

Neural Architecture Search

Multi-view Vision-Prompt Fusion Network: Can 2D Pre-trained Model Boost 3D Point Cloud Data-scarce Learning?

no code implementations · 20 Apr 2023 · Haoyang Peng, Baopu Li, Bo Zhang, Xin Chen, Tao Chen, Hongyuan Zhu

Then, a novel multi-view prompt fusion module is developed to effectively fuse information from different views to bridge the gap between 3D point cloud data and 2D pre-trained models.

Autonomous Driving · Classification · +3

Stimulative Training++: Go Beyond The Performance Limits of Residual Networks

no code implementations · 4 May 2023 · Peng Ye, Tong He, Shengji Tang, Baopu Li, Tao Chen, Lei Bai, Wanli Ouyang

In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training scheme as well as three improved strategies for boosting residual networks beyond their performance limits.

Boosting Residual Networks with Group Knowledge

1 code implementation · 26 Aug 2023 · Shengji Tang, Peng Ye, Baopu Li, Weihao Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang

Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of different subnets in each group during training, and exploit upper-level group knowledge to supervise lower-level subnet groups.

Knowledge Distillation

Rethinking Cross-Domain Pedestrian Detection: A Background-Focused Distribution Alignment Framework for Instance-Free One-Stage Detectors

1 code implementation · 15 Sep 2023 · Yancheng Cai, Bo Zhang, Baopu Li, Tao Chen, Hongliang Yan, Jingdong Zhang, Jiahao Xu

Therefore, we focus on cross-domain background feature alignment while minimizing the influence of foreground features on the cross-domain alignment stage.

Pedestrian Detection

Accelerating Vision Transformers Based on Heterogeneous Attention Patterns

no code implementations · 11 Oct 2023 · Deli Yu, Teng Xi, Jianwei Li, Baopu Li, Gang Zhang, Haocheng Feng, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang

On the one hand, different images share more similar attention patterns in early layers than in later layers, indicating that the dynamic query-by-key self-attention matrix may be replaced with a static self-attention matrix in the early layers.

Dimensionality Reduction
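
A sketch of the replacement suggested here, assuming a static attention matrix calibrated offline (e.g., averaged over a held-out set); `StaticAttention` and the calibration recipe are hypothetical, illustrative choices.

```python
import torch

class StaticAttention(torch.nn.Module):
    """Early-layer attention with a fixed, input-independent matrix."""

    def __init__(self, static_attn: torch.Tensor):
        super().__init__()
        # (heads, N, N), each row summing to 1; replaces softmax(QK^T / sqrt(d)).
        self.register_buffer("attn", static_attn)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, heads, N, head_dim) -> same shape, no QK^T computed
        return torch.einsum("hnm,bhmd->bhnd", self.attn, v)

attn = torch.rand(12, 196, 196)
layer = StaticAttention(attn / attn.sum(-1, keepdim=True))
print(layer(torch.randn(2, 12, 196, 64)).shape)
```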

Efficient Architecture Search via Bi-level Data Pruning

no code implementations · 21 Dec 2023 · Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang

Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention.

Neural Architecture Search

Rethinking of Feature Interaction for Multi-task Learning on Dense Prediction

no code implementations · 21 Dec 2023 · Jingdong Zhang, Jiayuan Fan, Peng Ye, Bo Zhang, Hancheng Ye, Baopu Li, Yancheng Cai, Tao Chen

In this work, we propose to learn a comprehensive intermediate feature globally from both task-generic and task-specific features, and we reveal an important fact: this intermediate feature, namely the bridge feature, is a good solution to the above issues.

Multi-Task Learning

Enhanced Sparsification via Stimulative Training

no code implementations · 11 Mar 2024 · Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen

To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm which maintains the magnitude of dropped weights and enhances the expressivity of kept weights by self-distillation.

Knowledge Distillation · Model Compression

Continuous Spiking Graph Neural Networks

no code implementations · 2 Apr 2024 · Nan Yin, Mengzhu Wan, Li Shen, Hitesh Laxmichand Patel, Baopu Li, Bin Gu, Huan Xiong

Inspired by recent spiking neural networks (SNNs), which emulate a biological inference process and provide an energy-efficient neural architecture, we incorporate the SNNs with CGNNs in a unified framework, named Continuous Spiking Graph Neural Networks (COS-GNN).
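
For readers unfamiliar with SNNs, a minimal leaky integrate-and-fire update captures the biological inference process being emulated; this is the textbook LIF neuron, not the COS-GNN formulation itself.

```python
import numpy as np

def lif_step(v: np.ndarray, current: np.ndarray,
             leak: float = 0.9, v_th: float = 1.0):
    """One LIF step: leak the membrane, integrate input, spike, hard-reset."""
    v = leak * v + current                # integrate input with decay
    spikes = (v >= v_th).astype(float)    # fire where the threshold is crossed
    v = v * (1.0 - spikes)                # reset neurons that fired
    return v, spikes

v = np.zeros(5)
for _ in range(10):
    v, s = lif_step(v, np.random.rand(5) * 0.4)
```

The energy efficiency comes from the binary spikes: downstream accumulation only happens where a spike occurred.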
