Search Results for author: Jonas Geiping

Found 42 papers, 28 papers with code

Understanding and Mitigating Copying in Diffusion Models

1 code implementation • 31 May 2023 • Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, Tom Goldstein

While it is widely believed that duplicated images in the training set are responsible for content replication at inference time, we observe that the text conditioning of the model plays a similarly important role.

Image Captioning · Memorization

What Can We Learn from Unlearnable Datasets?

no code implementations • 30 May 2023 • Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein

First, it is widely believed that neural networks trained on unlearnable datasets only learn shortcuts, simpler rules that are not useful for generalization.

JPEG Compressed Images Can Bypass Protections Against AI Editing

no code implementations • 5 Apr 2023 • Pedro Sandoval-Segura, Jonas Geiping, Tom Goldstein

Recently developed text-to-image diffusion models make it easy to edit or create high-quality images.

Face Swapping

Universal Guidance for Diffusion Models

1 code implementation • 14 Feb 2023 • Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, Tom Goldstein

Typical diffusion models are trained to accept a particular form of conditioning, most commonly text, and cannot be conditioned on other modalities without retraining.

Face Recognition · object-detection +1
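
The mechanism is compact enough to sketch. Below is a hedged illustration of the forward-guidance idea: the denoiser's noise estimate yields a predicted clean image, a frozen off-the-shelf network scores that prediction, and the gradient of that score is folded back into the noise estimate. All callables here (eps_model, guide_net, guide_loss) are placeholders, not the paper's code.

```python
import torch

def guided_eps(eps_model, guide_net, guide_loss, x_t, t, alpha_bar_t, scale=1.0):
    # Universal-guidance-style step (sketch): estimate the clean image x0_hat
    # from the current noise prediction, score it with any off-the-shelf
    # network (face ID, detector, ...), and fold the gradient back into eps.
    sqrt_ab = alpha_bar_t ** 0.5
    sqrt_omb = (1 - alpha_bar_t) ** 0.5
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    x0_hat = (x_t - sqrt_omb * eps) / sqrt_ab   # standard DDPM clean-image estimate
    loss = guide_loss(guide_net(x0_hat))
    grad = torch.autograd.grad(loss, x_t)[0]
    return eps + scale * sqrt_omb * grad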

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

1 code implementation • 7 Feb 2023 • Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein

In the text-to-image setting, the method creates hard prompts for diffusion models, allowing API users to easily generate, discover, and mix and match image concepts without prior knowledge of how to prompt the model.
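
The core update is easy to sketch: keep a continuous copy of the prompt embeddings, project it to the nearest hard (vocabulary) embeddings, take the gradient at the projected point, and apply it to the continuous copy. A minimal sketch under those assumptions, with a placeholder loss:

```python
import torch

def pez_step(soft_emb, emb_table, loss_fn, lr=0.1):
    # Gradient-based discrete prompt update (sketch): project the soft prompt
    # onto its nearest-neighbor token embeddings, evaluate the loss and
    # gradient at the hard point, then update the soft copy. `loss_fn`
    # (e.g. a CLIP similarity) is a placeholder, not the paper's code.
    with torch.no_grad():
        hard_ids = torch.cdist(soft_emb, emb_table).argmin(dim=-1)
    hard_emb = emb_table[hard_ids].clone().requires_grad_(True)
    loss_fn(hard_emb).backward()
    with torch.no_grad():
        soft_emb -= lr * hard_emb.grad   # gradient taken at the hard point
    return soft_emb, hard_ids
```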

A Watermark for Large Language Models

2 code implementations • 24 Jan 2023 • John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens.

Language Modelling
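
The underlying "green list" scheme is simple enough to sketch: the previous token seeds an RNG that marks a fraction of the vocabulary as green, and a bias is added to green-token logits before sampling; detection re-derives the green lists and runs a z-test on the green-token count. The seeding below is a stand-in for the paper's hash, and the function name is illustrative:

```python
import torch

def watermark_logits(logits, prev_token, gamma=0.25, delta=2.0):
    # "Soft" watermark (sketch): a gamma-fraction of the vocabulary, chosen
    # pseudorandomly from the previous token, gets a logit bonus of delta.
    vocab = logits.shape[-1]
    gen = torch.Generator().manual_seed(int(prev_token))   # stand-in hash
    green = torch.randperm(vocab, generator=gen)[: int(gamma * vocab)]
    out = logits.clone()
    out[..., green] += delta   # detection counts green tokens via a z-test
    return out
```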

Cramming: Training a Language Model on a Single GPU in One Day

1 code implementation • 28 Dec 2022 • Jonas Geiping, Tom Goldstein

Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners.

Language Modelling · Masked Language Modeling

Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models

1 code implementation • CVPR 2023 • Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, Tom Goldstein

Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes.

Image Retrieval · Retrieval

K-SAM: Sharpness-Aware Minimization at the Speed of SGD

no code implementations • 23 Oct 2022 • Renkun Ni, Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein

Sharpness-Aware Minimization (SAM) has recently emerged as a robust technique for improving the accuracy of deep neural networks.
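
For reference, a single SAM update is sketched below: ascend to the worst-case weights inside an L2 ball of radius rho, take the gradient there, then descend from the original point. K-SAM's speedup, running both passes on only the top-k highest-loss samples, is omitted here; this is a generic sketch, not the paper's implementation.

```python
import torch

def sam_step(model, loss_fn, x, y, opt, rho=0.05):
    params = [p for p in model.parameters() if p.requires_grad]
    loss_fn(model(x), y).backward()
    gnorm = torch.norm(torch.stack([p.grad.norm() for p in params])) + 1e-12
    with torch.no_grad():
        eps = [rho * p.grad / gnorm for p in params]
        for p, e in zip(params, eps):
            p.add_(e)                      # ascend to the perturbed weights
    model.zero_grad()
    loss_fn(model(x), y).backward()        # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                      # restore the original weights
    opt.step()                             # descend with the SAM gradient
    opt.zero_grad()
```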

Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries

1 code implementation • 19 Oct 2022 • Yuxin Wen, Arpit Bansal, Hamid Kazemi, Eitan Borgnia, Micah Goldblum, Jonas Geiping, Tom Goldstein

As industrial applications are increasingly automated by machine learning models, enforcing personal data ownership and intellectual property rights requires tracing training data back to their rightful owners.

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning

1 code implementation • 17 Oct 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein

Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates.

Federated Learning · Image Classification +2

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

3 code implementations • 19 Aug 2022 • Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, Tom Goldstein

We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice.

Image Restoration · Variational Inference
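
The paper's improved sampler is short enough to sketch. Here `degrade` D(x, t) is any fixed corruption (blur, masking, pixelation, ...) and `restore` R(x, t) is its learned inverse; the two-term update cancels most of the restoration error, which is what lets fully deterministic, noiseless degradations generate samples. Function names are mine, not the paper's:

```python
def cold_sample(restore, degrade, x_T, T):
    # Cold-Diffusion-style sampling (sketch): re-degrade the current clean
    # estimate to levels t and t-1 and step between them.
    x = x_T
    for t in range(T, 0, -1):
        x0_hat = restore(x, t)
        x = x - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x
```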

Autoregressive Perturbations for Data Poisoning

1 code implementation • 8 Jun 2022 • Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, David W. Jacobs

Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack.

Data Poisoning
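
To make the surrogate-free alternative concrete, here is a heavily simplified sketch of an autoregressive perturbation: rows are seeded with random noise and each new column is a fixed linear combination of the previous p columns, so the pattern is detectable by a matching AR filter but requires no surrogate network or dataset access. The paper uses 2D AR processes with per-class coefficients; this 1D version and its normalization are illustrative only:

```python
import torch

def ar_perturbation(height, width, coeffs, eps=8 / 255):
    # AR(p) recurrence along image columns, seeded by random noise.
    p = len(coeffs)
    noise = torch.randn(height, width + p)
    for j in range(p, width + p):
        noise[:, j] = sum(c * noise[:, j - i - 1] for i, c in enumerate(coeffs))
    delta = noise[:, p:]
    return eps * delta / delta.abs().max()   # scale into an L-inf budget
```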

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification

1 code implementation • 1 Feb 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein

Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.

Federated Learning

DARTS for Inverse Problems: a Study on Stability

no code implementations • NeurIPS Workshop Deep_Invers 2021 • Jonas Geiping, Jovita Lukasik, Margret Keuper, Michael Moeller

Differentiable architecture search (DARTS) is a widely researched tool for neural architecture search, due to its promising results for image classification.

Image Classification · Neural Architecture Search

Protecting Proprietary Data: Poisoning for Secure Dataset Release

no code implementations • 29 Sep 2021 • Liam H Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Amit Bansal, Wojciech Czaja, Tom Goldstein

These two behaviors can be in conflict as an organization wants to prevent competitors from using their own data to replicate the performance of their proprietary models.

Data Poisoning

Stochastic Training is Not Necessary for Generalization

1 code implementation • ICLR 2022 • Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein

It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks.

Data Augmentation
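
The direction the paper takes can be sketched as full-batch gradient descent with an explicit regularizer standing in for SGD's implicit one. The squared-gradient-norm penalty below is the generic form of such a regularizer; the paper's exact regularizer, clipping, and schedule differ, so treat this strictly as an assumption-laden sketch:

```python
import torch

def regularized_full_batch_step(model, loss_fn, x_full, y_full, opt, lam=1e-3):
    # Full-batch loss plus an explicit gradient-norm penalty.
    loss = loss_fn(model(x_full), y_full)
    grads = torch.autograd.grad(loss, list(model.parameters()),
                                create_graph=True)
    penalty = sum((g ** 2).sum() for g in grads)
    (loss + lam * penalty).backward()
    opt.step()
    opt.zero_grad()
```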

Is Differentiable Architecture Search truly a One-Shot Method?

no code implementations • 12 Aug 2021 • Jonas Geiping, Jovita Lukasik, Margret Keuper, Michael Moeller

In this work, we investigate DAS in a systematic case study of inverse problems, which allows us to analyze these potential benefits in a controlled manner.

Hyperparameter Optimization · Image Classification +2

Adversarial Examples Make Strong Poisons

1 code implementation • NeurIPS 2021 • Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.

Data Poisoning

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

1 code implementation • 2 Mar 2021 • Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein

The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.

Data Poisoning
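
The augmentation itself is a one-liner in spirit: mix k training images with random convex coefficients and add noise. The sketch below uses Dirichlet mixing weights and Laplacian noise as illustrative choices; the paper's exact coefficient distribution and noise scale may differ:

```python
import torch

def dp_instahide_batch(images, k=4, noise_scale=0.1):
    # Mixup-plus-additive-noise augmentation (sketch).
    n = len(images)
    idx = [torch.randperm(n) for _ in range(k)]
    lam = torch.distributions.Dirichlet(torch.ones(k)).sample((n,))  # (n, k)
    mixed = sum(lam[:, j].view(-1, 1, 1, 1) * images[idx[j]] for j in range(k))
    noise = torch.distributions.Laplace(0.0, noise_scale).sample(mixed.shape)
    return mixed + noise
```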

What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning

1 code implementation • 26 Feb 2021 • Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein

Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.

Data Poisoning

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

1 code implementation • ICLR 2021 • Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein

We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.

Data Poisoning
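
The gradient-matching objective at the heart of the attack can be sketched directly: make the training gradient of the perturbed poison batch point in the same direction as a precomputed adversarial target gradient. Minimizing this with respect to the poison perturbations is the outer loop; names here are mine:

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_x, poison_y, target_grad):
    # Negative cosine similarity between poison and target gradients.
    loss = F.cross_entropy(model(poison_x), poison_y)
    grads = torch.autograd.grad(loss, list(model.parameters()),
                                create_graph=True)
    flat_p = torch.cat([g.flatten() for g in grads])
    flat_t = torch.cat([g.flatten() for g in target_grad])
    return 1 - F.cosine_similarity(flat_p, flat_t, dim=0)
```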

Fast Convex Relaxations using Graph Discretizations

no code implementations • 23 Apr 2020 • Jonas Geiping, Fjedor Gaede, Hartmut Bauermeister, Michael Moeller

We discuss this methodology in detail and show examples in multi-label segmentation by minimal partitions and stereo estimation, where we demonstrate that the proposed graph discretization can reduce runtime as well as memory consumption of convex relaxations of matching problems by up to a factor of 10.

Optical Flow Estimation

MetaPoison: Practical General-purpose Clean-label Data Poisoning

2 code implementations • NeurIPS 2020 • W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein

Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models.

AutoML · Bilevel Optimization +2
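
The way the bilevel problem becomes tractable is by unrolling: differentiate the adversarial target loss through a few inner SGD steps on the poisoned data. A sketch under that assumption, where `func_model(x, params)` is an assumed functional-style forward pass (e.g. via torch.func.functional_call), not the paper's API:

```python
import torch
import torch.nn.functional as F

def metapoison_grad(func_model, params, poison_x, poison_y,
                    target_x, adv_y, inner_lr=0.1, unroll=2):
    # Meta-gradient of the adversarial loss w.r.t. the poison images,
    # backpropagated through `unroll` differentiable SGD steps.
    poison_x = poison_x.detach().requires_grad_(True)
    for _ in range(unroll):
        inner = F.cross_entropy(func_model(poison_x, params), poison_y)
        grads = torch.autograd.grad(inner, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    outer = F.cross_entropy(func_model(target_x, params), adv_y)
    return torch.autograd.grad(outer, poison_x)[0]
```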

WITCHcraft: Efficient PGD attacks with random step size

no code implementations • 18 Nov 2019 • Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.
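The proposed alternative is easy to sketch: the usual PGD sign-gradient ascent, but with each step length drawn at random instead of fixed, trading costly random restarts for cheap per-step randomness. The sampling range below is an illustrative choice, not the paper's setting:

```python
import torch

def pgd_random_step(model, loss_fn, x, y, eps=8 / 255, steps=20):
    # PGD with a randomly drawn step size each iteration.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        step = float(torch.empty(1).uniform_(0, 2.5 * eps / steps))
        delta = (delta.detach() + step * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1)
```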

Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory

1 code implementation • ICLR 2020 • Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein

We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike.

Learning Theory

Parametric Majorization for Data-Driven Energy Minimization Methods

1 code implementation • ICCV 2019 • Jonas Geiping, Michael Moeller

Energy minimization methods are a classical tool in a multitude of computer vision applications.

Composite Optimization by Nonconvex Majorization-Minimization

no code implementations • 20 Feb 2018 • Jonas Geiping, Michael Moeller

A popular class of algorithms for solving such problems are majorization-minimization techniques which iteratively approximate the composite nonconvex function by a majorizing function that is easy to minimize.

Super-Resolution
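
The majorization-minimization template the abstract refers to fits in two lines; this is the generic scheme in my notation, not the paper's specific nonconvex majorizers:

```latex
\textbf{MM scheme:} choose a surrogate $M(x; x^k)$ with
$M(x; x^k) \ge F(x)$ for all $x$ and $M(x^k; x^k) = F(x^k)$, then iterate
\[
  x^{k+1} = \operatorname*{arg\,min}_x \, M(x; x^k).
\]
% Monotone descent follows from the sandwich property:
% F(x^{k+1}) <= M(x^{k+1}; x^k) <= M(x^k; x^k) = F(x^k).
```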

Multiframe Motion Coupling for Video Super Resolution

1 code implementation • 23 Nov 2016 • Jonas Geiping, Hendrik Dirks, Daniel Cremers, Michael Moeller

The idea of video super-resolution is to use different viewpoints of a single scene to enhance the overall resolution and quality.

Motion Estimation · Video Super-Resolution