Search Results for author: Micah Goldblum

Found 31 papers, 16 papers with code

Towards Transferable Adversarial Attacks on Vision Transformers

no code implementations 9 Sep 2021 Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang

The results of these experiments demonstrate that the proposed dual attack can greatly boost transferability between ViTs and from ViTs to CNNs.

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

1 code implementation 3 Aug 2021 Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein

We find that samples which cause similar parameters to malfunction are semantically similar.
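
For intuition, parameter-space saliency ranks a model's parameters (rather than input pixels) by how strongly they are implicated in an error on a given input. The snippet below is a minimal PyTorch sketch of that idea, a simplification rather than the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def parameter_saliency(model, x, y):
    """Rank named parameter groups by the mean absolute loss gradient they
    receive for a single (input, label) pair. A minimal sketch of
    parameter-space saliency, without the per-filter aggregation and
    normalization used in practice."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    saliency = {
        name: p.grad.abs().mean().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
    # The highest values point to the parameter groups most implicated in the loss.
    return sorted(saliency.items(), key=lambda kv: kv[1], reverse=True)
```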

Adversarial Examples Make Strong Poisons

1 code implementation 21 Jun 2021 Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.

Data Poisoning

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

1 code implementation 16 Jun 2021 Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, Tom Goldstein

In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all.

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks

1 code implementation 8 Jun 2021 Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, Tom Goldstein

In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.
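
As a minimal sketch of the underlying mechanism (a hypothetical toy architecture, not the paper's released models), a weight-tied recurrent block can simply be applied for more iterations at inference time than it was trained with:

```python
import torch
import torch.nn as nn

class RecurrentNet(nn.Module):
    """Weight-tied recurrent classifier: one shared block applied `iters` times."""
    def __init__(self, channels=32, num_classes=10):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.recur = nn.Sequential(                 # shared weights, reused every step
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))

    def forward(self, x, iters=5):
        h = torch.relu(self.embed(x))
        for _ in range(iters):                      # more iterations = more computation
            h = self.recur(h)
        return self.head(h)

model = RecurrentNet()
x = torch.randn(2, 3, 32, 32)
logits_train_depth = model(x, iters=5)    # recurrence depth used during training
logits_test_depth = model(x, iters=20)    # extra recurrences at inference for harder inputs
```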

The Intrinsic Dimension of Images and Its Impact on Learning

1 code implementation ICLR 2021 Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, Tom Goldstein

We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pixels in the images.

Image Generation
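
For context, intrinsic dimension is typically estimated from nearest-neighbor distance statistics. The sketch below implements the widely used Levina-Bickel MLE estimator in NumPy as an illustration; it is not the paper's exact implementation.

```python
import numpy as np

def mle_intrinsic_dimension(X, k=20):
    """Levina-Bickel MLE estimate of intrinsic dimension from each point's k
    nearest neighbors. X has shape (n_samples, n_features)."""
    sq_norms = (X ** 2).sum(axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T   # squared pairwise distances
    dists = np.sqrt(np.maximum(d2, 0.0))
    dists.sort(axis=1)
    knn = dists[:, 1:k + 1]                                      # drop the zero self-distance
    # Local estimate per point: (k - 1) divided by the summed log-ratios of the
    # k-th neighbor distance to each closer neighbor distance.
    log_ratios = np.log(knn[:, -1:] / knn[:, :-1])
    local_dim = (k - 1) / log_ratios.sum(axis=1)
    return local_dim.mean()

# Points on a 2-D plane embedded in 100-D ambient space should estimate close to 2.
X = np.random.randn(500, 2) @ np.random.randn(2, 100)
print(mle_intrinsic_dimension(X))
```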

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

no code implementations 2 Mar 2021 Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein

The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.

Data Poisoning
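
For reference, the mixup augmentation that InstaHide builds on convexly combines random pairs of training examples and their labels. Below is a minimal, generic PyTorch sketch of standard mixup, not the InstaHide or DP-InstaHide procedure itself:

```python
import torch

def mixup(x, y, alpha=1.0, num_classes=10):
    """Standard mixup: convex-combine a batch with a shuffled copy of itself.
    x is a float tensor of inputs, y an integer tensor of class labels.
    Returns mixed inputs and mixed one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()   # mixing coefficient
    perm = torch.randperm(x.size(0))
    y_onehot = torch.nn.functional.one_hot(y, num_classes).float()
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```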

What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors

no code implementations 26 Feb 2021 Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein

Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.

Data Poisoning

The Uncanny Similarity of Recurrence and Depth

1 code implementation 22 Feb 2021 Avi Schwarzschild, Arjun Gupta, Amin Ghiasi, Micah Goldblum, Tom Goldstein

It is widely believed that deep neural networks contain layer specialization, wherein networks extract hierarchical features representing edges and patterns in shallow layers and complete objects in deeper layers.

Image Classification

Technical Challenges for Training Fair Neural Networks

no code implementations 12 Feb 2021 Valeriia Cherepanova, Vedant Nanda, Micah Goldblum, John P. Dickerson, Tom Goldstein

As machine learning algorithms have been widely deployed across applications, many concerns have been raised over the fairness of their predictions, especially in high-stakes settings (such as facial recognition and medical imaging).

Fairness, Medical Diagnosis

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

no code implementations ICLR 2021 Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John Dickerson, Gavin Taylor, Tom Goldstein

Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike.

Face Detection, Face Recognition

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

no code implementations 18 Dec 2020 Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein

As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance.

Data Poisoning

Analyzing the Machine Learning Conference Review Process

no code implementations 24 Nov 2020 David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias.

Data Augmentation for Meta-Learning

1 code implementation 14 Oct 2020 Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein

Conventional image classifiers are trained by randomly sampling mini-batches of images.

Data Augmentation, Meta-Learning

An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process

1 code implementation 11 Oct 2020 David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias.

Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization

no code implementations 18 Sep 2020 Manli Shu, Zuxuan Wu, Micah Goldblum, Tom Goldstein

Adversarial training is the industry standard for producing models that are robust to small adversarial perturbations.

Semantic Segmentation

Adversarial Attacks on Machine Learning Systems for High-Frequency Trading

no code implementations 21 Feb 2020 Micah Goldblum, Avi Schwarzschild, Ankit B. Patel, Tom Goldstein

Algorithmic trading systems are often completely automated, and deep learning is increasingly receiving attention in this domain.

Algorithmic Trading

WITCHcraft: Efficient PGD attacks with random step size

no code implementations 18 Nov 2019 Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.
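
To make the setting concrete, the sketch below is a generic L-infinity PGD attack in PyTorch with a step size drawn at random each iteration, in the spirit of the title; it is an illustrative assumption about the mechanism, not the authors' WITCHcraft implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, steps=20, step_min=0.5/255, step_max=4/255):
    """L-infinity PGD with a step size drawn uniformly at random at each iteration."""
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)   # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        step = float(torch.empty(1).uniform_(step_min, step_max))       # random step size
        x_adv = x_adv.detach() + step * grad.sign()                     # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)                        # project to the eps-ball
        x_adv = x_adv.clamp(0, 1)                                       # stay in valid pixel range
    return x_adv.detach()
```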

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

1 code implementation NeurIPS 2020 Micah Goldblum, Liam Fowl, Tom Goldstein

Previous work on adversarially robust neural networks for image classification requires large training sets and computationally expensive training procedures.

Classification, Few-Shot Image Classification +2

Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory

1 code implementation ICLR 2020 Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein

We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike.

Learning Theory

Understanding Generalization through Visualizations

2 code implementations 7 Jun 2019 W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein

The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.

Adversarially Robust Distillation

2 code implementations 23 May 2019 Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

In addition to producing small models with high test accuracy, as in conventional distillation, Adversarially Robust Distillation (ARD) also passes the superior robustness of large networks on to the student.

Knowledge Distillation
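
As a rough illustration (a hypothetical simplification, not the exact ARD objective), a distillation-style robust loss can train the student to match the teacher's softened predictions on adversarially perturbed inputs while also fitting the clean labels; the perturbed inputs x_adv would typically come from an inner attack on the student, e.g. PGD as sketched above.

```python
import torch
import torch.nn.functional as F

def robust_distillation_loss(student, teacher, x, y, x_adv, T=4.0, alpha=0.9):
    """Distillation-style robust loss: KL divergence between the student's
    predictions on adversarial inputs and the teacher's predictions on clean
    inputs (both softened by temperature T), plus a clean cross-entropy term."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=1)
    student_log_probs = F.log_softmax(student(x_adv) / T, dim=1)
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    ce = F.cross_entropy(student(x), y)
    return alpha * kl + (1 - alpha) * ce
```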
