Search Results for author: Xing Nie

Found 6 papers, 1 paper with code

Defying Imbalanced Forgetting in Class Incremental Learning

no code implementations22 Mar 2024 Shixiong Xu, Gaofeng Meng, Xing Nie, Bolin Ni, Bin Fan, Shiming Xiang

This intriguing phenomenon, discovered in replay-based Class Incremental Learning (CIL), highlights the imbalanced forgetting of learned classes, even though their accuracies are similar before catastrophic forgetting occurs.

Class Incremental Learning, Disentanglement, +1

Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-Visual Segmentation

1 code implementation11 Dec 2023 Qi Yang, Xing Nie, Tong Li, Pengfei Gao, Ying Guo, Cheng Zhen, Pengfei Yan, Shiming Xiang

For the first time, our framework explores three types of bilateral entanglements within AVS: pixel entanglement, modality entanglement, and temporal entanglement.

Pro-tuning: Unified Prompt Tuning for Vision Tasks

no code implementations 28 Jul 2022 Xing Nie, Bolin Ni, Jianlong Chang, Gaofeng Meng, Chunlei Huo, Zhaoxiang Zhang, Shiming Xiang, Qi Tian, Chunhong Pan

To this end, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks.

Adversarial Robustness, Image Classification, +4
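The idea this entry describes, adapting a frozen model by training only a small set of prompt parameters, can be sketched on a toy linear model. Everything below (the names, the numerical-gradient trainer, the toy data) is an illustrative assumption, not the paper's Pro-tuning implementation:

```python
# Toy sketch of prompt tuning (hypothetical, not Pro-tuning itself):
# the backbone weights w stay frozen; only the prepended prompt is trained.

def backbone(x, w):
    # Frozen "model": a fixed dot product over prompt + input features.
    return sum(xi * wi for xi, wi in zip(x, w))

def loss(prompt, w, inputs, targets):
    # Mean squared error; each input is prepended with the learnable prompt.
    total = 0.0
    for x, t in zip(inputs, targets):
        total += (backbone(prompt + x, w) - t) ** 2
    return total / len(inputs)

def tune_prompt(w, inputs, targets, steps=200, lr=0.05, eps=1e-5):
    prompt = [0.0, 0.0]  # only these parameters are updated
    for _ in range(steps):
        grad = []
        for i in range(len(prompt)):
            bumped = prompt[:]
            bumped[i] += eps  # forward-difference gradient w.r.t. prompt only
            grad.append((loss(bumped, w, inputs, targets)
                         - loss(prompt, w, inputs, targets)) / eps)
        prompt = [p - lr * g for p, g in zip(prompt, grad)]
    return prompt

w = [0.5, -0.3, 1.0, 2.0]          # frozen backbone weights (prompt dims first)
inputs = [[1.0, 0.0], [0.0, 1.0]]  # toy downstream data
targets = [2.0, 3.0]
prompt = tune_prompt(w, inputs, targets)
print("tuned prompt:", [round(p, 3) for p in prompt])
```

The point of the sketch is the parameter-efficiency argument: `w` is never touched, so adapting to the downstream task costs only `len(prompt)` trainable values.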

AME: Attention and Memory Enhancement in Hyper-Parameter Optimization

no code implementations CVPR 2022 Nuo Xu, Jianlong Chang, Xing Nie, Chunlei Huo, Shiming Xiang, Chunhong Pan

Training Deep Neural Networks (DNNs) is inherently subject to sensitive hyper-parameters and untimely feedback from performance evaluation.

Image Classification, object-detection, +2

Differentiable Convolution Search for Point Cloud Processing

no code implementations ICCV 2021 Xing Nie, Yongcheng Liu, Shaohong Chen, Jianlong Chang, Chunlei Huo, Gaofeng Meng, Qi Tian, Weiming Hu, Chunhong Pan

It works in a purely data-driven manner and can thus automatically create a group of convolutions suited to geometric shape modeling.
