Search Results for author: Cheng Han

Found 6 papers, 3 papers with code

Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?

no code implementations • 23 Jan 2024 • Cheng Han, Qifan Wang, Yiming Cui, Wenguan Wang, Lifu Huang, Siyuan Qi, Dongfang Liu

As the scale of vision models continues to grow, the emergence of Visual Prompt Tuning (VPT) as a parameter-efficient transfer learning technique has gained attention due to its superior performance compared to traditional full-finetuning.

Transfer Learning • Visual Prompt Tuning
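The core idea behind Visual Prompt Tuning, which this paper compares against full fine-tuning, is to freeze the pretrained backbone and learn only a small set of prompt tokens prepended to the patch sequence. A minimal shape-level sketch of the shallow variant, using hypothetical ViT-B/16-style dimensions (all names and sizes here are illustrative assumptions, not the paper's code):

```python
import numpy as np

# Hypothetical dimensions for illustration only.
batch, n_patches, d = 2, 196, 768   # ViT-B/16-style patch sequence
n_prompts = 10                      # number of learnable prompt tokens

x = np.random.randn(batch, n_patches, d)   # frozen patch embeddings
prompts = np.zeros((n_prompts, d))         # the only new trainable parameters

# VPT-shallow: prepend the prompts to every sequence before the frozen encoder.
x_prompted = np.concatenate(
    [np.broadcast_to(prompts, (batch, n_prompts, d)), x], axis=1
)
# Sequence grows from 196 to 206 tokens; backbone weights stay untouched.
```

Only `prompts` (plus a task head) would receive gradients, which is what makes the method parameter-efficient relative to updating all 86M backbone weights.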

E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning

1 code implementation • ICCV 2023 • Cheng Han, Qifan Wang, Yiming Cui, Zhiwen Cao, Wenguan Wang, Siyuan Qi, Dongfang Liu

Specifically, we introduce a set of learnable key-value prompts and visual prompts into self-attention and input layers, respectively, to improve the effectiveness of model fine-tuning.

Visual Prompt Tuning
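The key-value prompts described in the abstract can be sketched as extra learnable rows concatenated to the key and value matrices inside self-attention, so queries can attend to them without changing the output sequence length. This is a single-head numpy sketch under assumed dimensions, not the paper's implementation:

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq, d, n_kv = 8, 16, 4          # sequence length, head dim, # key-value prompts

q = rng.normal(size=(seq, d))    # projections from the frozen backbone
k = rng.normal(size=(seq, d))
v = rng.normal(size=(seq, d))

# Learnable key-value prompts, appended inside the attention operation.
k_p = rng.normal(size=(n_kv, d))
v_p = rng.normal(size=(n_kv, d))

k_aug = np.concatenate([k, k_p], axis=0)   # (seq + n_kv, d)
v_aug = np.concatenate([v, v_p], axis=0)

attn = softmax(q @ k_aug.T / np.sqrt(d))   # (seq, seq + n_kv)
out = attn @ v_aug                         # (seq, d): output length unchanged
```

Because only `k_p` and `v_p` (plus any input-layer visual prompts) would be trained, the tunable parameter count stays a small fraction of the backbone.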

Visual Recognition with Deep Nearest Centroids

1 code implementation • 15 Sep 2022 • Wenguan Wang, Cheng Han, Tianfei Zhou, Dongfang Liu

We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition, by revisiting Nearest Centroids, one of the oldest and simplest classifiers.

Decision Making • Image Classification +1
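The classifier DNC revisits is simple enough to state in a few lines: each class is summarized by the mean of its training features, and a sample is assigned to the class whose centroid is closest. A toy sketch of that classical rule (not the DNC network itself, which learns the features end-to-end):

```python
import numpy as np

def fit_centroids(X, y):
    """Class centroid = mean feature vector of that class's training samples."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Assign each sample to the class with the nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-D data: two well-separated clusters.
X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y = np.array([0, 0, 1, 1])
classes, centroids = fit_centroids(X, y)
print(predict(X, classes, centroids))   # → [0 0 1 1]
```

DNC's contribution is to run this non-parametric decision rule on top of deep features, so predictions are explained by distances to class prototypes rather than by a learned linear head.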

YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception

2 code implementations • 24 Aug 2022 • Cheng Han, Qichao Zhao, Shuyi Zhang, Yinzi Chen, Zhenlin Zhang, Jinwei Yuan

Over the last decade, multi-task learning approaches have achieved promising results on panoptic driving perception problems, delivering both high-precision and high-efficiency performance.

Autonomous Driving • Drivable Area Detection +5
