Search Results for author: Muchao Ye

Found 10 papers, 2 papers with code

VERA: Explainable Video Anomaly Detection via Verbalized Learning of Vision-Language Models

no code implementations · 2 Dec 2024 · Muchao Ye, Weiyang Liu, Pan He

The rapid advancement of vision-language models (VLMs) has established a new paradigm in video anomaly detection (VAD): leveraging VLMs to simultaneously detect anomalies and provide comprehensible explanations for the decisions.

Anomaly Detection · Video Anomaly Detection
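The abstract describes prompting a VLM per frame to emit both a verdict and a rationale. Below is a minimal illustrative sketch of such a detect-and-explain loop; the prompt wording and the query_vlm stub are assumptions for illustration, not the paper's verbalized-learning procedure.

```python
# Minimal sketch of VLM-based video anomaly detection with verbalized
# explanations, in the spirit of VERA. The prompt and the query_vlm
# stub are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
from typing import List

@dataclass
class FrameVerdict:
    frame_idx: int
    anomalous: bool
    explanation: str

def query_vlm(frame: str, prompt: str) -> str:
    # Placeholder: swap in a real vision-language model call
    # (an API that accepts an image plus a text prompt).
    return "NORMAL: routine pedestrian traffic"

def detect_anomalies(frames: List[str]) -> List[FrameVerdict]:
    prompt = ("Does this frame show an anomalous event? "
              "Answer ANOMALY or NORMAL, then explain briefly.")
    verdicts = []
    for i, frame in enumerate(frames):
        reply = query_vlm(frame, prompt)
        label, _, reason = reply.partition(":")
        verdicts.append(FrameVerdict(i, label.strip() == "ANOMALY", reason.strip()))
    return verdicts

if __name__ == "__main__":
    for v in detect_anomalies(["frame_000", "frame_001"]):
        print(v)
```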

Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation

no code implementations · 3 Oct 2024 · Xiaoqun Liu, Jiacheng Liang, Luoxi Tang, Chenyu You, Muchao Ye, Zhaohan Xi

To mitigate such attacks, we propose an effective defensive framework that utilizes data curation to revise commonsense texts and enhance their safety implications from the perspective of LLMs.
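As a rough illustration of curation-as-defense, the toy filter below flags risky training texts and revises them before customization; the blocklist and redaction rule are invented stand-ins for the paper's LLM-driven revision.

```python
# Toy sketch of defensive data curation before LLM customization:
# flag candidate training texts against a blocklist and rewrite the
# risky ones. The blocklist and rewrite rule are illustrative
# assumptions, not the paper's actual curation pipeline.
from typing import Iterable, List

RISKY_TERMS = {"explosive", "poison", "bypass security"}  # assumed example list

def curate(texts: Iterable[str]) -> List[str]:
    kept = []
    for t in texts:
        lowered = t.lower()
        if any(term in lowered for term in RISKY_TERMS):
            # Revise instead of silently training on harmful content.
            kept.append("[REDACTED: removed unsafe instructional content]")
        else:
            kept.append(t)
    return kept

print(curate(["How to bake bread.", "How to bypass security alarms."]))
```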

Robustifying Safety-Aligned Large Language Models through Clean Data Curation

no code implementations · 24 May 2024 · Xiaoqun Liu, Jiacheng Liang, Muchao Ye, Zhaohan Xi

Large language models (LLMs) are vulnerable when trained on datasets containing harmful content, which leads to potential jailbreaking attacks in two scenarios: the integration of harmful texts within crowdsourced data used for pre-training and direct tampering with LLMs through fine-tuning.

Safety Alignment

VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models

no code implementations · 16 Feb 2024 · Ziyi Yin, Muchao Ye, Tianrong Zhang, Jiaqi Wang, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma

Correspondingly, we propose a novel VQAttack model, which can iteratively generate both image and text perturbations with two designed modules: the large language model (LLM)-enhanced image attack module and the cross-modal joint attack module.

Adversarial Robustness · Language Modelling · +3
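The excerpt outlines alternating image and text perturbation. The PyTorch sketch below shows the general shape of such an iterative joint attack, with a stand-in surrogate loss and an assumed synonym table in place of the paper's LLM-enhanced and cross-modal modules.

```python
# Schematic alternating image/text attack loop in the spirit of VQAttack.
# The surrogate loss and synonym table are stand-ins; the actual method
# uses a pre-trained VL model plus its two designed attack modules.
import torch

def pgd_step(image, grad, orig, eps=8/255, alpha=2/255):
    # One projected-gradient step on the image perturbation.
    image = image + alpha * grad.sign()
    return orig + (image - orig).clamp(-eps, eps)

def substitute_word(tokens, idx, synonyms):
    # Replace one token with an (assumed) synonym candidate.
    out = list(tokens)
    out[idx] = synonyms.get(tokens[idx], tokens[idx])
    return out

orig = torch.rand(1, 3, 224, 224)
image = orig.clone().requires_grad_(True)
question = ["what", "color", "is", "the", "car"]
synonyms = {"color": "colour"}  # illustrative

for step in range(10):
    # Stand-in surrogate loss; a real attack maximizes the victim
    # task loss through a pre-trained vision-language model.
    loss = image.square().mean()
    loss.backward()
    with torch.no_grad():
        image.copy_(pgd_step(image, image.grad, orig))
    image.grad = None
    if step % 5 == 4:  # interleave a text perturbation every few steps
        question = substitute_word(question, 1, synonyms)
print(question)
```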

VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models

1 code implementation · NeurIPS 2023 · Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma

In this paper, we aim to investigate a new yet practical task to craft image and text perturbations using pre-trained VL models to attack black-box fine-tuned models on different downstream tasks.

Adversarial Robustness

MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare

no code implementations · 11 Dec 2021 · Muchao Ye, Junyu Luo, Guanjie Zheng, Cao Xiao, Ting Wang, Fenglong Ma

Deep neural networks (DNNs) have been broadly adopted in health risk prediction to provide healthcare diagnoses and treatments.

Adversarial Attack · Position · +1

FedSiam: Towards Adaptive Federated Semi-Supervised Learning

no code implementations · 6 Dec 2020 · Zewei Long, Liwei Che, Yaqing Wang, Muchao Ye, Junyu Luo, Jinze Wu, Houping Xiao, Fenglong Ma

In this paper, we focus on designing a general framework, FedSiam, to tackle different scenarios of federated semi-supervised learning, including four settings in the labels-at-client scenario and two settings in the labels-at-server scenario.

Federated Learning
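To make the labels-at-client setting concrete, here is a minimal federated-averaging sketch with confidence-thresholded pseudo-labeling on a one-parameter model; the data, threshold, and update rule are illustrative assumptions, not FedSiam's actual Siamese design.

```python
# Minimal federated semi-supervised sketch: each client trains on a few
# labeled points plus confidently pseudo-labeled unlabeled ones, then the
# server averages the models. A one-parameter logistic model stands in
# for a network; all data below is synthetic.
def local_update(w, labeled, unlabeled, lr=0.1, threshold=0.8):
    data = list(labeled)
    for x in unlabeled:
        p = 1 / (1 + 2.718 ** (-w * x))          # model "confidence"
        if p > threshold or p < 1 - threshold:
            data.append((x, 1 if p > 0.5 else 0))  # pseudo-label
    for x, y in data:                            # one epoch of SGD
        p = 1 / (1 + 2.718 ** (-w * x))
        w -= lr * (p - y) * x                    # logistic-loss gradient
    return w

clients = [([(1.0, 1), (-1.0, 0)], [0.8, -0.7]) for _ in range(3)]
w_global = 0.0
for rnd in range(5):
    updates = [local_update(w_global, lab, unl) for lab, unl in clients]
    w_global = sum(updates) / len(updates)       # server-side FedAvg
print(f"global weight after training: {w_global:.3f}")
```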
