Search Results for author: Atoosa Chegini

Found 5 papers, 3 papers with code

Fast Adversarial Attacks on Language Models In One GPU Minute

no code implementations · 23 Feb 2024 · Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Chegini, Soheil Feizi

Through human evaluations, we find that our untargeted attack causes Vicuna-7B-v1.5 to produce ~15% more incorrect outputs when compared to LM outputs in the absence of our attack.

Adversarial Attack · Computational Efficiency

Identifying and Mitigating Model Failures through Few-shot CLIP-aided Diffusion Generation

no code implementations · 9 Dec 2023 · Atoosa Chegini, Soheil Feizi

One common reason for these failures is the occurrence of objects in backgrounds that are rarely seen during training.

Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks

1 code implementation · 29 Sep 2023 · Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, Soheil Feizi

Moreover, we show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones, damaging the reputation of the developers.

Adversarial Attack · Face Swapping

Run-Off Election: Improved Provable Defense against Data Poisoning Attacks

2 code implementations · 5 Feb 2023 · Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi

Based on this approach, we propose DPA+ROE and FA+ROE defense methods based on Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work.

Data Poisoning
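The run-off election (ROE) idea above can be sketched as a two-round vote over an ensemble of base classifiers: each model first votes for its top class, the two classes with the most first-round votes advance, and every model then votes between those two finalists by comparing its own scores for them. This is an illustrative sketch only, not the paper's implementation; the function name, score format, and tie-breaking rule are assumptions.

```python
def run_off_election(scores):
    """Two-round (run-off) election over an ensemble's class scores.

    scores: list of per-model score lists, shape (n_models, n_classes).
    Returns the index of the winning class.
    Note: this is a simplified sketch; ties are broken by class index.
    """
    n_models, n_classes = len(scores), len(scores[0])

    # Round 1: each base model casts one vote for its highest-scoring class.
    counts = [0] * n_classes
    for row in scores:
        counts[max(range(n_classes), key=lambda c: row[c])] += 1

    # The two classes with the most first-round votes advance.
    a, b = sorted(range(n_classes), key=lambda c: counts[c], reverse=True)[:2]

    # Round 2: every model votes between the two finalists by comparing
    # its own scores for them, which uses more signal than plain plurality.
    b_votes = sum(1 for row in scores if row[b] > row[a])
    return b if b_votes > n_models - b_votes else a


# Three base models, two classes; all prefer class 0 in the run-off.
print(run_off_election([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]))  # → 0
```

The second round is what distinguishes a run-off from simple plurality voting: models whose top choice was eliminated still influence the final decision through their relative scores for the two finalists.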
