no code implementations • 15 Jan 2025 • Runqi Wang, Sijie Xu, Tianyao He, Yang Chen, Wei Zhu, Dejia Song, Nemo Chen, Xu Tang, Yao Hu
We propose a novel method, DynamicFace, that leverages the power of diffusion models and plug-and-play temporal layers for video face swapping.
no code implementations • 25 Dec 2024 • Sijie Xu, Runqi Wang, Wei Zhu, Dejia Song, Nemo Chen, Xu Tang, Yao Hu
A promising solution to speed up the process is to obtain few-step consistency models through trajectory distillation.
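The consistency idea behind trajectory distillation can be illustrated with a toy sketch (this is an assumed, generic consistency-distillation objective, not the paper's actual formulation): a student `f(x, t)` is trained so that its predictions agree at adjacent points along the teacher sampler's trajectory, which is what allows sampling in few steps.

```python
import numpy as np

def teacher_ode_step(x, t, dt):
    # Stand-in for one step of the teacher diffusion sampler (assumed form).
    return x - dt * x / max(t, 1e-3)

def student(theta, x, t):
    # Linear toy student: maps a noisy sample and timestep to a prediction.
    return theta[0] * x + theta[1] * t

def consistency_loss(theta, x_t, t, dt):
    # Self-consistency: the student's output at (x_t, t) should match its
    # output at the teacher's previous trajectory point (x_prev, t - dt).
    x_prev = teacher_ode_step(x_t, t, dt)
    target = student(theta, x_prev, t - dt)
    pred = student(theta, x_t, t)
    return np.mean((pred - target) ** 2)
```

In practice the target branch uses a frozen or EMA copy of the student; the toy version above only shows the shape of the objective.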
no code implementations • 26 Sep 2024 • Huixin Sun, Runqi Wang, Yanjing Li, Xianbin Cao, XiaoLong Jiang, Yao Hu, Baochang Zhang
We propose a method that balances fine-tuning and quantization named "Prompt for Quantization" (P4Q), in which we design a lightweight architecture to leverage contrastive loss supervision to enhance the recognition performance of a PTQ model.
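The contrastive supervision mentioned above can be sketched as an InfoNCE-style loss aligning the quantized model's image features with prompt-derived text features (all names and shapes here are illustrative assumptions, not the P4Q implementation):

```python
import numpy as np

def normalize(x):
    # L2-normalize feature rows so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(img_feats, txt_feats, temperature=0.07):
    # Matched image/text pairs sit on the diagonal of the similarity matrix;
    # the loss pulls them together and pushes mismatched pairs apart.
    img = normalize(img_feats)
    txt = normalize(txt_feats)
    logits = img @ txt.T / temperature          # (N, N) similarities
    labels = np.arange(len(img))
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[labels, labels].mean()
```

A lower loss for correctly paired features than for shuffled ones is what drives the recognition improvement such supervision targets.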
no code implementations • 23 Apr 2024 • Runqi Wang, Caoyuan Ma, Guopeng Li, Hanrui Xu, Yuke Li, Zheng Wang
Text-to-Motion aims to generate human motions from text descriptions.
1 code implementation • 4 Nov 2023 • Hao Zheng, Runqi Wang, Jianzhuang Liu, Asako Kanezaki
Conventional few-shot classification aims to learn a model on a large labeled base dataset and rapidly adapt it to a target dataset drawn from the same distribution as the base dataset.
1 code implementation • 11 Jun 2023 • Yuguang Yang, Yiming Wang, Shupeng Geng, Runqi Wang, Yimi Wang, Sheng Wu, Baochang Zhang
The emergence of cross-modal foundation models has introduced numerous approaches grounded in text-image retrieval.
1 code implementation • CVPR 2023 • Runqi Wang, Xiaoyue Duan, Guoliang Kang, Jianzhuang Liu, Shaohui Lin, Songcen Xu, Jinhu Lv, Baochang Zhang
Text consists of a category name and a fixed number of learnable parameters which are selected from our designed attribute word bank and serve as attributes.
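The prompt structure described above can be sketched as follows (the bank size, embedding dimension, and function names are assumptions for illustration): the text input is a category-name embedding followed by a fixed number of attribute vectors selected from a learnable attribute word bank.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical attribute word bank: 50 learnable attribute embeddings, dim 8.
bank = rng.normal(size=(50, 8))

def build_prompt(category_emb, attr_indices):
    # Prompt = [category name embedding] + K selected attribute embeddings.
    attrs = bank[attr_indices]                  # (K, 8)
    return np.concatenate([category_emb[None], attrs], axis=0)
```

During training, only the selected attribute vectors would be updated, while the category-name embedding stays fixed.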
1 code implementation • CVPR 2023 • Runqi Wang, Hao Zheng, Xiaoyue Duan, Jianzhuang Liu, Yuning Lu, Tian Wang, Songcen Xu, Baochang Zhang
However, with only a few training images, there exist two crucial problems: (1) the visual feature distributions are easily distracted by class-irrelevant information in images, and (2) the alignment between the visual and language feature distributions is difficult.
no code implementations • 28 Nov 2022 • Xiaoyue Duan, Guoliang Kang, Runqi Wang, Shumin Han, Song Xue, Tian Wang, Baochang Zhang
Based on this observation, we propose a simple strategy, i.e., increasing the number of training shots, to mitigate the loss of intrinsic dimension caused by robustness-promoting regularization.
1 code implementation • 27 Aug 2022 • Runqi Wang, Yuxiang Bao, Baochang Zhang, Jianzhuang Liu, Wentao Zhu, Guodong Guo
Second, according to the similarity between incremental knowledge and base knowledge, we design an adaptive fusion of incremental knowledge, which helps the model allocate capacity to the knowledge of different difficulties.
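A minimal sketch of similarity-driven adaptive fusion (an illustrative reading of the sentence above, not the paper's exact formulation): the fusion weight between base and incremental knowledge is derived from their cosine similarity, so knowledge closer to the base is fused more conservatively.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def adaptive_fuse(base, incremental):
    # Map cosine similarity from [-1, 1] to a fusion weight in [0, 1]:
    # highly similar incremental knowledge leans toward the base features.
    w = (cosine(base, incremental) + 1) / 2
    return w * base + (1 - w) * incremental
```

For identical vectors the fusion is a no-op; for orthogonal ones it averages the two, illustrating how capacity shifts with knowledge difficulty.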
no code implementations • 17 Mar 2022 • Runqi Wang, Linlin Yang, Baochang Zhang, Wentao Zhu, David Doermann, Guodong Guo
Research on the generalization ability of deep neural networks (DNNs) has recently attracted a great deal of attention.
no code implementations • 28 Dec 2021 • Runqi Wang, Xiaoyue Duan, Baochang Zhang, Song Xue, Wentao Zhu, David Doermann, Guodong Guo
We show that our method improves the recognition accuracy of adversarial training on ImageNet by 8.32% compared with the baseline.
no code implementations • 20 Jun 2021 • Runqi Wang, Baochang Zhang, Li'an Zhuo, Qixiang Ye, David Doermann
Conventional gradient descent methods compute the gradients for multiple variables through partial derivatives.
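The conventional scheme referenced above is standard gradient descent, where every variable is updated simultaneously using the partial derivative of the objective with respect to it; a minimal worked example on f(x, y) = x² + 2y², whose gradient is (2x, 4y):

```python
import numpy as np

def grad_f(v):
    # Partial derivatives of f(x, y) = x^2 + 2*y^2.
    x, y = v
    return np.array([2 * x, 4 * y])

def gradient_descent(v0, lr=0.1, steps=100):
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        v -= lr * grad_f(v)   # simultaneous update of all variables
    return v
```

Starting from (1, 1), the iterates converge toward the minimizer at the origin.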
no code implementations • ICCV 2021 • Song Xue, Runqi Wang, Baochang Zhang, Tian Wang, Guodong Guo, David Doermann
Differentiable Architecture Search (DARTS) improves the efficiency of architecture search by learning the architecture and network parameters end-to-end.