1 code implementation • 8 Apr 2025 • Junxi Chen, Junhao Dong, Xiaohua Xie
Recently, the Image Prompt Adapter (IP-Adapter) has been increasingly integrated into text-to-image diffusion models (T2I-DMs) to improve controllability.
1 code implementation • 6 Jan 2025 • Binyu Zhang, Zhu Meng, Junhao Dong, Fei Su, Zhicheng Zhao
Survival prediction is a crucial task in the medical field and is essential for optimizing treatment options and resource allocation.
no code implementations • 5 Nov 2024 • Junhao Dong, Xinghua Qu, Z. Jane Wang, Yew-Soon Ong
To circumvent these issues, in this paper we propose a novel uncertainty-aware distributional adversarial training method that models adversaries by leveraging both the statistical information of adversarial examples and their corresponding uncertainty estimates, with the goal of augmenting the diversity of adversaries.
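A minimal sketch of the distributional idea described above, under assumptions not stated in the abstract: fit a per-pixel Gaussian to a set of adversarial examples, treat its standard deviation as an uncertainty estimate, and draw fresh, more diverse adversaries from that distribution. The function name and the $l_\infty$ projection are hypothetical illustration, not the paper's actual method.

```python
import numpy as np

def sample_distributional_adversaries(adv_examples, x_clean, eps, n_samples, rng):
    """Hypothetical sketch: model a *distribution* of adversaries instead of a
    single worst case. adv_examples has shape (k, d); x_clean has shape (d,)."""
    mu = adv_examples.mean(axis=0)       # statistical information: mean adversary
    sigma = adv_examples.std(axis=0)     # uncertainty estimate per dimension
    draws = rng.normal(mu, sigma, size=(n_samples,) + x_clean.shape)
    # project each draw back into the l_inf eps-ball around the clean input
    return np.clip(draws, x_clean - eps, x_clean + eps)

rng = np.random.default_rng(0)
x = rng.random(8)
advs = x + rng.uniform(-0.1, 0.1, size=(5, 8))   # pretend PGD outputs
new_advs = sample_distributional_adversaries(advs, x, eps=0.1, n_samples=3, rng=rng)
```

Sampling from the fitted distribution, rather than reusing the k original adversaries, is what augments adversary diversity during training.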
no code implementations • 20 Aug 2024 • Zhongliang Guo, Lei Fang, Jingyu Lin, Yifei Qian, Shuai Zhao, Zeyu Wang, Junhao Dong, Cunjian Chen, Ognjen Arandjelović, Chun Pong Lau
Recent advancements in generative AI, particularly Latent Diffusion Models (LDMs), have revolutionized image synthesis and manipulation.
no code implementations • 2 Jul 2024 • Haodong Chen, Haojian Huang, Junhao Dong, Mingzhe Zheng, Dian Shao
Dynamic Facial Expression Recognition (DFER) is crucial for understanding human behavior.
Ranked #1 on Dynamic Facial Expression Recognition on MAFW
2 code implementations • 18 Jan 2024 • Zhongliang Guo, Junhao Dong, Yifei Qian, Kaixuan Wang, Weiye Li, Ziheng Guo, Yuheng Wang, Yanli Li, Ognjen Arandjelović, Lei Fang
Neural style transfer (NST) generates new images by combining the style of one image with the content of another.
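In standard NST (following Gatys et al., which this entry builds on), the "style" of an image is commonly summarized by the Gram matrix of its feature maps, i.e. channel-wise correlations that discard spatial layout. A minimal sketch; the feature arrays here stand in for CNN activations, which the snippet does not compute:

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise correlation summary of feature maps.
    features: array of shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)             # (c, c) style summary

def style_loss(f_generated, f_style):
    """Squared Frobenius distance between the two Gram matrices."""
    d = gram_matrix(f_generated) - gram_matrix(f_style)
    return float(np.sum(d ** 2))
```

Minimizing this loss (plus a content loss on deeper features) over the generated image's pixels is what combines one image's style with another's content.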
no code implementations • 13 Jan 2024 • Junxi Chen, Junhao Dong, Xiaohua Xie
Recently, many studies utilized adversarial examples (AEs) to raise the cost of malicious image editing and copyright violation powered by latent diffusion models (LDMs).
no code implementations • CVPR 2024 • Junhao Dong, Piotr Koniusz, Junxi Chen, Z. Jane Wang, Yew-Soon Ong
Existing methods typically align probability distributions of natural and adversarial samples between teacher and student models, but they overlook intermediate adversarial samples along the "adversarial path" formed by the multi-step gradient ascent of a sample towards the decision boundary.
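The "adversarial path" above can be sketched as iterated signed gradient ascent (PGD-style) that records every intermediate sample. The logistic model and all names below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_path(x, y, w, b, eps, alpha, steps):
    """Hypothetical sketch: multi-step signed gradient ascent on the
    cross-entropy loss of a logistic model p = sigmoid(w @ x + b),
    keeping every intermediate sample along the path toward the boundary."""
    path = [x.copy()]
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w               # d(cross-entropy)/dx for this model
        x_adv = x_adv + alpha * np.sign(grad)
        # project each step into the l_inf eps-ball around the clean input
        x_adv = np.clip(x_adv, x - eps, x + eps)
        path.append(x_adv.copy())
    return path
```

A distillation scheme that uses the whole returned list, rather than only `path[-1]`, is the kind of intermediate-sample supervision the entry contrasts with prior work.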
no code implementations • CVPR 2024 • Junhao Dong, Piotr Koniusz, Junxi Chen, Xiaohua Xie, Yew-Soon Ong
To bridge this gap, we propose a novel framework unifying adversarially robust similarity learning and class concept learning.
no code implementations • 16 Nov 2023 • Zhu Meng, Junhao Dong, Limei Guo, Fei Su, Guangxi Wang, Zhicheng Zhao
Since signet ring cells (SRCs) are associated with a high peripheral metastasis rate and dismal survival, they play an important role in determining surgical approach and prognosis, yet they are easily missed even by experienced pathologists.
no code implementations • 12 Oct 2023 • Qiang Li, Dan Zhang, Shengzhao Lei, Xun Zhao, Porawit Kamnoedboon, Weiwei Li, Junhao Dong, Shuyan Li
Despite the promising performance of existing visual models on public benchmarks, the critical assessment of their robustness for real-world applications remains an ongoing challenge.
1 code implementation • 19 Jul 2023 • Junhao Dong, Zhu Meng, Delong Liu, Jiaxuan Liu, Zhicheng Zhao, Fei Su
In addition, to enhance the classification boundaries, we sample and cluster high- and low-confidence features separately based on confidence estimation, facilitating the generation of prototypes closer to the class boundaries.
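A minimal sketch of the confidence-split prototype idea described above, assuming a simple per-class mean in place of whatever clustering the paper actually uses; the threshold `tau` and the function name are hypothetical:

```python
import numpy as np

def boundary_prototypes(features, confidences, labels, tau):
    """Hypothetical sketch: per class, average high- and low-confidence
    features separately. The low-confidence prototype tends to lie closer
    to the class boundary, sharpening it during training.
    features: (n, d); confidences, labels: (n,)."""
    protos = {}
    for c in np.unique(labels):
        mask = labels == c
        hi = features[mask & (confidences >= tau)]
        lo = features[mask & (confidences < tau)]
        protos[c] = {
            "high": hi.mean(axis=0) if len(hi) else None,
            "low": lo.mean(axis=0) if len(lo) else None,
        }
    return protos

feats = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [5.0, 5.0]])
conf = np.array([0.9, 0.2, 0.95, 0.3])
labels = np.array([0, 0, 1, 1])
protos = boundary_prototypes(feats, conf, labels, tau=0.5)
```

Keeping the two prototype sets separate is the point: mixing them would pull every prototype toward the class center and away from the boundary.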
no code implementations • 16 May 2023 • Junxi Chen, Junhao Dong, Xiaohua Xie
However, a recent work showed the inequality phenomena in $l_{\infty}$-adversarial training and revealed that the $l_{\infty}$-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d.
1 code implementation • 24 Mar 2023 • Junhao Dong, Junxi Chen, Xiaohua Xie, Jian-Huang Lai, Hao Chen
Deep learning techniques have achieved superior performance in computer-aided medical image analysis, yet they are still vulnerable to imperceptible adversarial attacks, resulting in potential misdiagnosis in clinical practice.
no code implementations • CVPR 2023 • Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, Jian-Huang Lai, Xiaohua Xie
To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar outputs for an adversarial example and its "inverse adversarial" counterpart.
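A hypothetical one-step sketch of the pairing described above: the adversarial example ascends the loss while its inverse counterpart descends it, and a consistency term penalizes the gap between the model's outputs on the pair. The logistic model and every name here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def paired_examples(x, y, w, b, eps):
    """Hypothetical sketch: one signed-gradient step in each direction for a
    logistic model p = sigmoid(w @ x + b)."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w                   # gradient of cross-entropy w.r.t. x
    x_adv = x + eps * np.sign(grad)      # ascend the loss: harder example
    x_inv = x - eps * np.sign(grad)      # descend the loss: high-confidence example
    consistency = (sigmoid(w @ x_adv + b) - sigmoid(w @ x_inv + b)) ** 2
    return x_adv, x_inv, float(consistency)
```

Adding the `consistency` term to the training objective is what ties the model's behavior on the adversarial example to its behavior on the easy, inverse-adversarial one.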
no code implementations • 26 Apr 2022 • Junhao Dong, Yuan Wang, Jian-Huang Lai, Xiaohua Xie
DeepFake face swapping, which replaces the source face in an arbitrary photo/video with the target face of an entirely different person, poses a significant threat to online security and social media.
1 code implementation • 24 Feb 2022 • Yunhao Du, Junfeng Wan, Yanyun Zhao, Binyu Zhang, Zhihang Tong, Junhao Dong
In recent years, algorithms for multiple object tracking tasks have benefited from great progress in deep models and video quality.
no code implementations • CVPR 2022 • Junhao Dong, Yuan Wang, Jian-Huang Lai, Xiaohua Xie
Extensive experiments show that our method can significantly outperform state-of-the-art adversarially robust FSIC methods on two standard benchmarks.