Search Results for author: Junhao Dong

Found 11 papers, 1 paper with code

Exploring Adversarial Attacks against Latent Diffusion Model from the Perspective of Adversarial Transferability

no code implementations • 13 Jan 2024 • Junxi Chen, Junhao Dong, Xiaohua Xie

Recently, many studies utilized adversarial examples (AEs) to raise the cost of malicious image editing and copyright violation powered by latent diffusion models (LDMs).
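As a purely illustrative aside (not code from this paper), protective adversarial examples of this kind are typically crafted by taking a signed-gradient step that increases some editing loss. The sketch below shows the generic FGSM-style update under that assumption; the gradient is supplied externally and the function name is hypothetical.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=8 / 255):
    """One FGSM-style step: nudge each pixel by eps in the direction
    that increases an externally computed loss gradient, then clip
    back to the valid [0, 1] image range. Illustrative only."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Example: a black image with a toy gradient field.
x = np.zeros((2, 2))
grad = np.array([[1.0, -1.0], [0.0, 2.0]])
x_adv = fgsm_perturb(x, grad, eps=0.1)
```

In the anti-editing setting, the "loss" would measure how badly a latent diffusion model reconstructs or edits the perturbed image; the paper studies how well such perturbations transfer across models.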

Adversarial Attack • Image Classification

Now and Future of Artificial Intelligence-based Signet Ring Cell Diagnosis: A Survey

no code implementations • 16 Nov 2023 • Zhu Meng, Junhao Dong, Limei Guo, Fei Su, Guangxi Wang, Zhicheng Zhao

Since signet ring cells (SRCs) are associated with a high peripheral metastasis rate and dismal survival, they play an important role in determining surgical approaches and prognosis, yet they are easily missed even by experienced pathologists.

XIMAGENET-12: An Explainable AI Benchmark Dataset for Model Robustness Evaluation

no code implementations • 12 Oct 2023 • Qiang Li, Dan Zhang, Shengzhao Lei, Xun Zhao, Porawit Kamnoedboon, Weiwei Li, Junhao Dong, Shuyan Li

Despite the promising performance of existing visual models on public benchmarks, the critical assessment of their robustness for real-world applications remains an ongoing challenge.

Classification

Boundary-Refined Prototype Generation: A General End-to-End Paradigm for Semi-Supervised Semantic Segmentation

no code implementations • 19 Jul 2023 • Junhao Dong, Zhu Meng, Delong Liu, Zhicheng Zhao, Fei Su

Prototype-based classification is a classical method in machine learning, and recently it has achieved remarkable success in semi-supervised semantic segmentation.

Semi-Supervised Semantic Segmentation

Releasing Inequality Phenomena in $L_{\infty}$-Adversarial Training via Input Gradient Distillation

no code implementations • 16 May 2023 • Junxi Chen, Junhao Dong, Xiaohua Xie

However, a recent work showed the inequality phenomena in $l_{\infty}$-adversarial training and revealed that the $l_{\infty}$-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d.

Adversarial Defense • Adversarial Robustness

Adversarial Attack and Defense for Medical Image Analysis: Methods and Applications

no code implementations • 24 Mar 2023 • Junhao Dong, Junxi Chen, Xiaohua Xie, Jian-Huang Lai, Hao Chen

In this exposition, we present a comprehensive survey on recent advances in adversarial attack and defense for medical image analysis with a novel taxonomy in terms of the application scenario.

Adversarial Attack • Medical Diagnosis

The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training

no code implementations • CVPR 2023 • Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, Jian-Huang Lai, Xiaohua Xie

To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar outputs for an adversarial example and its "inverse adversarial" counterpart.
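As an illustration only (not the authors' implementation), the core idea of "similar outputs for the two counterparts" can be sketched as a consistency penalty between the model's output distributions; the function names and the MSE-on-softmax choice below are assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_adv, logits_inv):
    """Mean squared distance between the output distributions for an
    adversarial example and its inverse-adversarial counterpart.
    Illustrative stand-in for the paper's similarity objective;
    it is zero exactly when the two outputs already agree."""
    p_adv = softmax(logits_adv)
    p_inv = softmax(logits_inv)
    return float(np.mean((p_adv - p_inv) ** 2))

# Example: identical logits incur no penalty; divergent ones do.
agree = consistency_loss(np.array([[2.0, 0.5]]), np.array([[2.0, 0.5]]))
differ = consistency_loss(np.array([[2.0, 0.5]]), np.array([[0.5, 2.0]]))
```

In a full training loop, such a term would be added to the usual classification loss; the paper's contribution lies in constructing the inverse adversary, which this sketch does not attempt.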

Restricted Black-box Adversarial Attack Against DeepFake Face Swapping

no code implementations • 26 Apr 2022 • Junhao Dong, Yuan Wang, Jian-Huang Lai, Xiaohua Xie

DeepFake face swapping presents a significant threat to online security and social media, which can replace the source face in an arbitrary photo/video with the target face of an entirely different person.

Adversarial Attack • Face Reconstruction +2

Improving Adversarially Robust Few-Shot Image Classification With Generalizable Representations

no code implementations • CVPR 2022 • Junhao Dong, Yuan Wang, Jian-Huang Lai, Xiaohua Xie

Extensive experiments show that our method can significantly outperform state-of-the-art adversarially robust FSIC methods on two standard benchmarks.

Classification • Few-Shot Image Classification +1
