no code implementations • 22 Apr 2024 • Yiming Liu, Kezhao Liu, Yao Xiao, Ziyi Dong, Xiaogang Xu, Pengxu Wei, Liang Lin
Empirical results show that ADDT improves the robustness of diffusion-based purification (DBP) models.
no code implementations • 21 Nov 2022 • Ziyi Dong, Pengxu Wei, Liang Lin
Although recent attempts have employed fine-tuning or prompt-tuning strategies to teach pre-trained diffusion models novel concepts from a reference image set, they tend to overfit the given reference images, particularly in one-shot applications, which harms the generation of diverse, high-quality images and undermines generation controllability.
1 code implementation • 13 Jul 2022 • Ziyi Dong, Pengxu Wei, Liang Lin
In this work, we empirically explore model training for adversarial robustness in object detection, whose difficulty we largely attribute to the conflict between learning from clean images and from adversarial images.
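The clean-versus-adversarial training conflict mentioned above can be illustrated with a generic adversarial-training loop (this is a minimal NumPy sketch of standard joint clean/adversarial training with FGSM on a toy linear classifier, not the paper's method; all names, the equal loss weighting, and hyperparameters are illustrative assumptions):

```python
import numpy as np

# Toy linearly separable binary classification data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

w = np.zeros(10)      # model parameters
eps, lr = 0.1, 0.5    # FGSM budget and learning rate (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM: perturb inputs along the sign of the input-gradient of the loss.
    # For logistic loss, d(loss)/d(x_i) = (p_i - y_i) * w.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)
    X_adv = X + eps * np.sign(grad_x)

    # Joint update: gradient from clean images plus gradient from
    # adversarial images (equal weighting assumed) -- the two terms can
    # pull the parameters in conflicting directions.
    g_clean = X.T @ (sigmoid(X @ w) - y) / len(y)
    g_adv = X_adv.T @ (sigmoid(X_adv @ w) - y) / len(y)
    w -= lr * (g_clean + g_adv)

clean_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print("clean accuracy:", round(float(clean_acc), 2))
```

The two gradient terms (`g_clean`, `g_adv`) make the trade-off explicit: weighting them differently shifts the balance between clean accuracy and robustness.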