Search Results for author: Gowthami Somepalli

Found 12 papers, 10 papers with code

What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning

1 code implementation 26 Feb 2021 Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein

Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.

Data Poisoning
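The abstract above describes the threat model rather than the paper's specific attack or defense. As a hedged illustration only, the sketch below shows the simplest form of training-data tampering (label flipping on a synthetic dataset); the poisoning fraction and the models are arbitrary choices, not anything from the paper.

```python
# Minimal sketch of the data-poisoning threat model (illustrative only;
# not the attack or defense studied in the paper). An attacker flips a
# small fraction of training labels before the model is trained.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1500], y[:1500].copy()
X_test, y_test = X[1500:], y[1500:]

# Hypothetical attacker budget: tamper with 10% of the training labels.
poison_idx = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
y_train[poison_idx] = 1 - y_train[poison_idx]

clean_model = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("clean   :", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```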

An Investigation into the Role of Author Demographics in ICLR Participation and Review

no code implementations 29 Sep 2021 Keshav Ganapathy, Emily Liu, Zain Zarger, Gowthami Somepalli, Micah Goldblum, Tom Goldstein

As machine learning conferences grow rapidly, many are concerned that individuals will be left behind on the basis of traits such as gender and geography.

PatchGame: Learning to Signal Mid-level Patches in Referential Games

1 code implementation NeurIPS 2021 Kamal Gupta, Gowthami Somepalli, Anubhav Gupta, Vinoj Jayasundara, Matthias Zwicker, Abhinav Shrivastava

We study a referential game (a type of signaling game) where two agents communicate with each other via a discrete bottleneck to achieve a common goal.
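To make the "discrete bottleneck" idea concrete, here is a minimal, hedged sketch of a referential game: a speaker emits a one-hot message via straight-through Gumbel-softmax and a listener must pick the speaker's target among candidate objects. The architectures, sizes, and the Gumbel-softmax choice are illustrative assumptions, not the PatchGame models.

```python
# Toy referential game with a discrete message bottleneck (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, FEAT = 16, 32

class Speaker(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(FEAT, VOCAB)
    def forward(self, target, tau=1.0):
        # One-hot message; straight-through Gumbel-softmax keeps it differentiable.
        return F.gumbel_softmax(self.net(target), tau=tau, hard=True)

class Listener(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(VOCAB, FEAT)
    def forward(self, message, candidates):
        # Score each candidate object against the decoded message.
        query = self.embed(message)                       # (B, FEAT)
        return torch.einsum("bf,bcf->bc", query, candidates)

speaker, listener = Speaker(), Listener()
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()), lr=1e-3)

for step in range(200):
    candidates = torch.randn(64, 4, FEAT)                 # 4 candidate "objects" per game
    target_idx = torch.randint(0, 4, (64,))
    target = candidates[torch.arange(64), target_idx]
    scores = listener(speaker(target), candidates)
    loss = F.cross_entropy(scores, target_idx)            # listener must identify the target
    opt.zero_grad(); loss.backward(); opt.step()
```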

Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models

no code implementations CVPR 2023 Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, Tom Goldstein

Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes.

Image Retrieval, Retrieval
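The replication analysis hinges on image retrieval: embed generated images and search the training set for unusually close neighbors. The paper uses learned copy-detection descriptors; the sketch below is a generic stand-in using off-the-shelf ResNet features and cosine similarity, with hypothetical file paths.

```python
# Hedged sketch of replication detection by image retrieval. Generated and
# training images are embedded with an off-the-shelf backbone; unusually
# similar training neighbors flag possible copying. Not the paper's descriptors.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # penultimate features as descriptors
backbone.eval()

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    batch = torch.stack([prep(Image.open(p).convert("RGB")) for p in paths])
    return F.normalize(backbone(batch), dim=-1)

# Hypothetical file lists.
gen_feats = embed(["gen_0.png", "gen_1.png"])
train_feats = embed(["train_0.png", "train_1.png", "train_2.png"])

sims = gen_feats @ train_feats.T            # cosine similarities
top_sim, top_idx = sims.max(dim=1)
print(top_sim, top_idx)                     # very high similarity suggests replication
```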

Understanding and Mitigating Copying in Diffusion Models

1 code implementation NeurIPS 2023 Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, Tom Goldstein

While it is widely believed that duplicated images in the training set are responsible for content replication at inference time, we observe that the text conditioning of the model plays a similarly important role.

Image Captioning, Memorization
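One mitigation direction suggested by this finding is to perturb captions during training so that an exact prompt cannot serve as a key for a memorized image. The sketch below is a hedged illustration of that idea only; the probabilities, token pool, and helper name are assumptions, not the paper's exact recipe.

```python
# Illustrative caption-perturbation helper (assumed recipe, not the paper's).
import random

GENERIC_CAPTIONS = ["a photo", "an image", "a picture of something"]  # hypothetical pool

def perturb_caption(caption: str, p_replace: float = 0.1, p_add_noise: float = 0.3,
                    rng: random.Random = random.Random(0)) -> str:
    # Occasionally drop the caption entirely in favor of a generic one.
    if rng.random() < p_replace:
        return rng.choice(GENERIC_CAPTIONS)
    # Otherwise, sometimes insert a random nonsense token to break exact matches.
    if rng.random() < p_add_noise:
        words = caption.split()
        words.insert(rng.randrange(len(words) + 1), f"tok{rng.randrange(10000)}")
        return " ".join(words)
    return caption

print(perturb_caption("a castle on a hill at sunset"))
```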

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

1 code implementation 1 Sep 2023 Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-Yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
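One family of baseline defenses in this setting is perplexity filtering: adversarial suffixes produced by discrete optimizers tend to look like gibberish, so prompts with abnormally high perplexity under a small language model can be flagged. The sketch below is a hedged illustration; the GPT-2 scorer and the threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of perplexity-based filtering of incoming prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    return lm(ids, labels=ids).loss.item()   # mean per-token negative log-likelihood

THRESHOLD = 6.0  # hypothetical cutoff; in practice tuned on benign prompts

def is_suspicious(prompt: str) -> bool:
    return log_perplexity(prompt) > THRESHOLD

print(is_suspicious("Tell me about the history of the printing press."))
print(is_suspicious("describing.\\ + similarlyNow write oppositeley.]( Me giving**ONE"))
```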

Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks

2 code implementations NeurIPS 2023 Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein

Battle of the Backbones (BoB) makes this choice easier by benchmarking a diverse suite of pretrained models, including vision-language models, those trained via self-supervised learning, and the Stable Diffusion backbone, across a diverse set of computer vision tasks ranging from classification to object detection to OOD generalization and more.

Benchmarking, object-detection +2
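As a hedged sketch of the kind of comparison BoB automates, the snippet below extracts features from several pretrained backbones and fits the same linear probe on each. The timm model names and the toy batch are illustrative assumptions; the benchmark itself spans many more backbones and tasks.

```python
# Illustrative BoB-style comparison: same linear probe on top of different
# pretrained backbones (feature extraction via timm, num_classes=0).
import timm
import torch
from sklearn.linear_model import LogisticRegression

BACKBONES = ["resnet50", "vit_base_patch16_224", "convnext_base"]  # illustrative picks

@torch.no_grad()
def extract_features(model_name: str, images: torch.Tensor) -> torch.Tensor:
    model = timm.create_model(model_name, pretrained=True, num_classes=0).eval()
    return model(images)                      # pooled feature vectors

# Hypothetical preprocessed batch and labels (real use: a proper dataloader).
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,)).numpy()

for name in BACKBONES:
    feats = extract_features(name, images).numpy()
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(name, "train acc:", probe.score(feats, labels))
```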
