Search Results for author: Junhao Dong

Found 18 papers, 6 papers with code

Mind the Trojan Horse: Image Prompt Adapter Enabling Scalable and Deceptive Jailbreaking

1 code implementation • 8 Apr 2025 • Junxi Chen, Junhao Dong, Xiaohua Xie

Recently, the Image Prompt Adapter (IP-Adapter) has been increasingly integrated into text-to-image diffusion models (T2I-DMs) to improve controllability.

Image Generation

ICFNet: Integrated Cross-modal Fusion Network for Survival Prediction

1 code implementation • 6 Jan 2025 • Binyu Zhang, Zhu Meng, Junhao Dong, Fei Su, Zhicheng Zhao

Survival prediction is a crucial task in the medical field and is essential for optimizing treatment options and resource allocation.

Decision Making • Survival Prediction +1

Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training

no code implementations • 5 Nov 2024 • Junhao Dong, Xinghua Qu, Z. Jane Wang, Yew-Soon Ong

To circumvent these issues, in this paper we propose a novel uncertainty-aware distributional adversarial training method, which models adversaries by leveraging both the statistical information of adversarial examples and their corresponding uncertainty estimates, with the goal of augmenting the diversity of adversaries.
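
The abstract only hints at the mechanism, so the following is a loose, hypothetical sketch of distribution-aware adversary modeling: per-pixel statistics over several PGD restarts stand in for the "statistical information", their spread for a crude "uncertainty estimate", and training adversaries are sampled from the resulting Gaussian. Function names and hyperparameters are illustrative, not the paper's.

```python
# Hypothetical sketch; not the authors' code.
import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """One PGD run; returns the final perturbation delta."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def distributional_adversarial_loss(model, x, y, eps=8/255, restarts=4):
    # Per-pixel statistics over independent PGD restarts.
    deltas = torch.stack([pgd_perturbation(model, x, y, eps=eps) for _ in range(restarts)])
    mu, sigma = deltas.mean(0), deltas.std(0) + 1e-8   # mean + uncertainty proxy
    # Sample a fresh, diverse adversary from the estimated distribution.
    sampled = (mu + sigma * torch.randn_like(mu)).clamp(-eps, eps)
    return F.cross_entropy(model((x + sampled).clamp(0, 1)), y)
```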

Adversarial Robustness • Diversity

A Grey-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse

no code implementations • 20 Aug 2024 • Zhongliang Guo, Lei Fang, Jingyu Lin, Yifei Qian, Shuai Zhao, Zeyu Wang, Junhao Dong, Cunjian Chen, Ognjen Arandjelović, Chun Pong Lau

Recent advancements in generative AI, particularly Latent Diffusion Models (LDMs), have revolutionized image synthesis and manipulation.

Image Generation

Exploring Adversarial Attacks against Latent Diffusion Model from the Perspective of Adversarial Transferability

no code implementations • 13 Jan 2024 • Junxi Chen, Junhao Dong, Xiaohua Xie

Recently, many studies have utilized adversarial examples (AEs) to raise the cost of malicious image editing and copyright violation powered by latent diffusion models (LDMs).
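
A common recipe in this line of work, sketched here as background rather than as this paper's method, is an encoder attack: perturb the image so the LDM's VAE encoder maps it far from its clean latent, degrading any downstream editing. `vae_encode` is a stand-in for the model's encoder; step sizes and budgets are placeholders.

```python
# Illustrative encoder attack against LDM-based editing; an assumption, not this paper's method.
import torch

def protective_perturbation(vae_encode, x, eps=8/255, alpha=1/255, steps=40):
    z_ref = vae_encode(x).detach()                       # latent of the clean image
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        dist = torch.norm(vae_encode(x_adv) - z_ref)     # latent distance to maximize
        grad, = torch.autograd.grad(dist, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()
```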

Adversarial Attack • Image Classification

Robust Distillation via Untargeted and Targeted Intermediate Adversarial Samples

no code implementations • CVPR 2024 • Junhao Dong, Piotr Koniusz, Junxi Chen, Z. Jane Wang, Yew-Soon Ong

Existing methods typically align probability distributions of natural and adversarial samples between teacher and student models, but they overlook the intermediate adversarial samples along the "adversarial path" formed by the multi-step gradient ascent of a sample towards the decision boundary.
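
The "adversarial path" is the sequence of iterates produced by multi-step gradient ascent (PGD-style). Below is a minimal sketch, assuming a standard PGD loop and a KL distillation objective (not the authors' code), of collecting those intermediate samples and distilling on all of them rather than only the endpoint.

```python
# Minimal sketch under our own assumptions; `teacher`/`student` are placeholders.
import torch
import torch.nn.functional as F

def adversarial_path(student, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return every intermediate adversarial sample, not just the final one."""
    path, x_adv = [], x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(student(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
        path.append(x_adv.detach())
    return path

def path_distillation_loss(teacher, student, x, y, tau=4.0):
    path = adversarial_path(student, x, y)
    loss = x.new_zeros(())
    for x_adv in path:
        t = F.softmax(teacher(x_adv) / tau, dim=1)
        s = F.log_softmax(student(x_adv) / tau, dim=1)
        loss = loss + F.kl_div(s, t, reduction="batchmean") * tau**2
    return loss / len(path)   # average over the whole path
```

Averaging over the whole path, rather than only the final adversarial sample, is precisely the detail the abstract says existing methods overlook.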

Adversarial Robustness • Knowledge Distillation

Now and Future of Artificial Intelligence-based Signet Ring Cell Diagnosis: A Survey

no code implementations • 16 Nov 2023 • Zhu Meng, Junhao Dong, Limei Guo, Fei Su, Guangxi Wang, Zhicheng Zhao

Since signet ring cells (SRCs) are associated with a high peripheral metastasis rate and dismal survival, they play an important role in determining surgical approaches and prognosis, yet they are easily missed even by experienced pathologists.

Diagnostic • Prognosis

XIMAGENET-12: An Explainable AI Benchmark Dataset for Model Robustness Evaluation

no code implementations • 12 Oct 2023 • Qiang Li, Dan Zhang, Shengzhao Lei, Xun Zhao, Porawit Kamnoedboon, Weiwei Li, Junhao Dong, Shuyan Li

Despite the promising performance of existing visual models on public benchmarks, the critical assessment of their robustness for real-world applications remains an ongoing challenge.

Classification

Boundary-Refined Prototype Generation: A General End-to-End Paradigm for Semi-Supervised Semantic Segmentation

1 code implementation • 19 Jul 2023 • Junhao Dong, Zhu Meng, Delong Liu, Jiaxuan Liu, Zhicheng Zhao, Fei Su

In addition, to enhance the classification boundaries, we sample and cluster high- and low-confidence features separately based on confidence estimation, facilitating the generation of prototypes closer to the class boundaries.
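
A hypothetical sketch of this confidence-split idea: per-class features are partitioned by a confidence threshold and clustered separately, so low-confidence clusters contribute prototypes that sit nearer the class boundary. The threshold and cluster counts below are placeholders, not the paper's settings.

```python
# Illustrative confidence-split prototype generation; not the authors' code.
import numpy as np
from sklearn.cluster import KMeans

def boundary_refined_prototypes(feats, conf, thresh=0.9, k_high=4, k_low=4):
    """feats: (N, D) features of one class; conf: (N,) softmax confidence scores."""
    high, low = feats[conf >= thresh], feats[conf < thresh]
    protos = []
    for group, k in ((high, k_high), (low, k_low)):
        if len(group) >= k:   # skip a split with too few samples to cluster
            km = KMeans(n_clusters=k, n_init=10).fit(group)
            protos.append(km.cluster_centers_)
    return np.concatenate(protos, axis=0)   # prototype set for this class
```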

Clustering • Online Clustering +1

Releasing Inequality Phenomena in $L_{\infty}$-Adversarial Training via Input Gradient Distillation

no code implementations • 16 May 2023 • Junxi Chen, Junhao Dong, Xiaohua Xie

However, a recent work showed the inequality phenomena in $l_{\infty}$-adversarial training and revealed that the $l_{\infty}$-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d. noise.
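
As a rough illustration of the vulnerability being described (an assumption about the setup, not the paper's exact attack), one can rank pixels by input-gradient magnitude and perturb only the top-k of them with i.i.d. noise:

```python
# Illustrative few-pixel noise attack; parameters are placeholders.
import torch
import torch.nn.functional as F

def perturb_important_pixels(model, x, y, k=100, noise_scale=0.5):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    # Per-pixel importance: input-gradient magnitude summed over channels.
    importance = x.grad.abs().sum(dim=1).flatten(1)        # (B, H*W)
    topk = importance.topk(k, dim=1).indices               # (B, k)
    mask = torch.zeros_like(importance).scatter_(1, topk, 1.0)
    mask = mask.view(x.size(0), 1, x.size(2), x.size(3))   # broadcast over channels
    noise = noise_scale * (2 * torch.rand_like(x) - 1)     # i.i.d. uniform noise
    return (x.detach() + mask * noise).clamp(0, 1)
```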

Adversarial Defense • Adversarial Robustness

Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges

1 code implementation • 24 Mar 2023 • Junhao Dong, Junxi Chen, Xiaohua Xie, JianHuang Lai, Hao Chen

Deep learning techniques have achieved superior performance in computer-aided medical image analysis, yet they are still vulnerable to imperceptible adversarial attacks, resulting in potential misdiagnosis in clinical practice.

Adversarial Attack • Medical Diagnosis +2

The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training

no code implementations • CVPR 2023 • Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, JianHuang Lai, Xiaohua Xie

To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar outputs for an adversarial example and its "inverse adversarial" counterpart.
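
A minimal sketch, under our own assumptions, of what an "inverse adversarial" counterpart could look like: the same projected-gradient procedure run as descent on the loss (toward higher confidence) rather than ascent, with a consistency term tying the two outputs together. Function names and hyperparameters are placeholders, not the authors' implementation.

```python
# Illustrative inverse-adversary consistency; not the authors' code.
import torch
import torch.nn.functional as F

def perturb(model, x, y, sign, eps=8/255, alpha=2/255, steps=10):
    """sign=+1: standard adversary (loss ascent); sign=-1: inverse adversary (descent)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = (x_adv + sign * alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()

def inverse_adversarial_consistency(model, x, y):
    adv = model(perturb(model, x, y, sign=+1))
    inv = model(perturb(model, x, y, sign=-1))
    # Encourage similar predictions on the adversary and its inverse counterpart.
    return F.kl_div(F.log_softmax(adv, 1), F.softmax(inv, 1), reduction="batchmean")
```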

Restricted Black-box Adversarial Attack Against DeepFake Face Swapping

no code implementations • 26 Apr 2022 • Junhao Dong, YuAn Wang, JianHuang Lai, Xiaohua Xie

DeepFake face swapping presents a significant threat to online security and social media: it can replace the source face in an arbitrary photo/video with the target face of an entirely different person.

Adversarial Attack • Face Reconstruction +2

Improving Adversarially Robust Few-Shot Image Classification With Generalizable Representations

no code implementations • CVPR 2022 • Junhao Dong, YuAn Wang, Jian-Huang Lai, Xiaohua Xie

Extensive experiments show that our method can significantly outperform state-of-the-art adversarially robust FSIC methods on two standard benchmarks.

Classification • Few-Shot Image Classification +1
