Search Results for author: Micah Goldblum

Found 88 papers, 59 papers with code

Measuring Style Similarity in Diffusion Models

1 code implementation • 1 Apr 2024 • Gowthami Somepalli, Anubhav Gupta, Kamal Gupta, Shramay Palta, Micah Goldblum, Jonas Geiping, Abhinav Shrivastava, Tom Goldstein

We also propose a method to extract style descriptors that can be used to attribute style of a generated image to the images used in the training dataset of a text-to-image model.

Attribute
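
The attribution recipe lends itself to a short sketch: embed the generated image and every training image with a style descriptor, then return the nearest training images by cosine similarity. A minimal sketch assuming a descriptor function already exists (the paper learns one; a generic image embedding is only a rough stand-in):

```python
import numpy as np

def attribute_style(gen_desc: np.ndarray, train_descs: np.ndarray, k: int = 5):
    """Indices of the k training images whose style descriptors are most
    cosine-similar to the generated image's descriptor."""
    a = gen_desc / np.linalg.norm(gen_desc)
    B = train_descs / np.linalg.norm(train_descs, axis=1, keepdims=True)
    sims = B @ a                      # cosine similarity to each training image
    top = np.argsort(-sims)[:k]
    return top, sims[top]
```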

TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks

2 code implementations • 17 Feb 2024 • Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, Colin White

Similar to large language models, PFNs make use of pretraining and in-context learning to achieve strong performance on new tasks in a single forward pass.

Fairness, In-Context Learning +1
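
The "single forward pass" claim is the defining trait of PFNs and is easy to show in shape terms. Below is a toy sketch of the interface only, not the TuneTables or TabPFN architecture: labeled rows enter as context tokens, unlabeled rows as query tokens, and class logits for the queries come out of one transformer pass.

```python
import torch
import torch.nn as nn

class ToyPFN(nn.Module):
    def __init__(self, num_features: int, num_classes: int, dim: int = 64):
        super().__init__()
        self.enc_x = nn.Linear(num_features, dim)
        self.enc_y = nn.Embedding(num_classes + 1, dim)   # extra id = "unknown" label
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, ctx_x, ctx_y, qry_x):
        # context tokens carry their labels; query tokens get the "unknown" id
        unk = torch.full(qry_x.shape[:-1], self.enc_y.num_embeddings - 1,
                         dtype=torch.long)
        tokens = torch.cat([self.enc_x(ctx_x) + self.enc_y(ctx_y),
                            self.enc_x(qry_x) + self.enc_y(unk)], dim=1)
        h = self.backbone(tokens)
        return self.head(h[:, ctx_x.shape[1]:])   # logits for query rows only

model = ToyPFN(num_features=10, num_classes=3)
logits = model(torch.randn(1, 50, 10),            # 50 labeled rows as context
               torch.randint(0, 3, (1, 50)),      # their labels
               torch.randn(1, 8, 10))             # 8 unlabeled rows to classify
```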

Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text

1 code implementation • 22 Jan 2024 • Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein

Detecting text generated by modern large language models is thought to be hard, as both LLMs and humans can exhibit a wide range of complex behaviors.
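
For context, the Binoculars detector scores text with a ratio of two quantities computed by a pair of language models: the observer's log-perplexity on the text, and the cross-perplexity of the observer against the performer's next-token distributions. A minimal sketch of that statistic, with gpt2/distilgpt2 as placeholder models rather than the authors' pairing:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")         # shared tokenizer
observer = AutoModelForCausalLM.from_pretrained("gpt2")
performer = AutoModelForCausalLM.from_pretrained("distilgpt2")

@torch.no_grad()
def binoculars_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    obs_logits = observer(ids).logits[:, :-1]       # predictions for token t+1
    per_logits = performer(ids).logits[:, :-1]
    targets = ids[:, 1:]
    # log-perplexity of the text under the observer
    log_ppl = F.cross_entropy(obs_logits.squeeze(0), targets.squeeze(0))
    # cross-perplexity: expected observer NLL when tokens are drawn
    # from the performer's next-token distribution
    per_probs = per_logits.softmax(-1)
    x_log_ppl = -(per_probs * obs_logits.log_softmax(-1)).sum(-1).mean()
    return (log_ppl / x_log_ppl).item()             # low score -> likely machine text
```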

Non-Vacuous Generalization Bounds for Large Language Models

no code implementations • 28 Dec 2023 • Sanae Lotfi, Marc Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson

Modern language models can contain billions of parameters, raising the question of whether they can generalize beyond the training data or simply regurgitate their training corpora.

Generalization Bounds

Perspectives on the State and Future of Deep Learning -- 2023

no code implementations • 7 Dec 2023 • Micah Goldblum, Anima Anandkumar, Richard Baraniuk, Tom Goldstein, Kyunghyun Cho, Zachary C Lipton, Melanie Mitchell, Preetum Nakkiran, Max Welling, Andrew Gordon Wilson

The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time.

Benchmarking

Simplifying Neural Network Training Under Class Imbalance

1 code implementation • NeurIPS 2023 • Ravid Shwartz-Ziv, Micah Goldblum, Yucen Lily Li, C. Bayan Bruss, Andrew Gordon Wilson

Real-world datasets are often highly class-imbalanced, which can adversely impact the performance of deep learning models.

Data Augmentation

Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks

2 code implementations • NeurIPS 2023 • Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein

Battle of the Backbones (BoB) makes this choice easier by benchmarking a diverse suite of pretrained models, including vision-language models, those trained via self-supervised learning, and the Stable Diffusion backbone, across a diverse set of computer vision tasks ranging from classification to object detection to OOD generalization and more.

Benchmarking, object-detection +2

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

1 code implementation • 1 Sep 2023 • Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-Yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
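
One of the baseline defenses evaluated is perplexity filtering, which exploits exactly this weakness: suffixes found by discrete optimizers tend to be high-perplexity gibberish. A minimal sketch, with the scoring model and threshold as illustrative choices:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

@torch.no_grad()
def log_perplexity(prompt: str) -> float:
    ids = tok(prompt, return_tensors="pt").input_ids
    logits = lm(ids).logits[:, :-1]                  # predict each next token
    return F.cross_entropy(logits.squeeze(0), ids[:, 1:].squeeze(0)).item()

def allow_prompt(prompt: str, threshold: float = 6.0) -> bool:
    # reject prompts whose perplexity looks like optimizer gibberish
    return log_perplexity(prompt) < threshold
```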

Seeing in Words: Learning to Classify through Language Bottlenecks

no code implementations • 29 Jun 2023 • Khalid Saifullah, Yuxin Wen, Jonas Geiping, Micah Goldblum, Tom Goldstein

Neural networks for computer vision extract uninterpretable features despite achieving high accuracy on benchmarks.

Bring Your Own Data! Self-Supervised Evaluation for Large Language Models

1 code implementation • 23 Jun 2023 • Neel Jain, Khalid Saifullah, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein

With the rise of Large Language Models (LLMs) and their ubiquitous deployment in diverse domains, measuring language model behavior on realistic data is imperative.

Chatbot, Language Modelling

On the Reliability of Watermarks for Large Language Models

1 code implementation • 7 Jun 2023 • John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein

We also consider a range of new detection schemes that are sensitive to short spans of watermarked text embedded inside a large document, and we compare the robustness of watermarking to other kinds of detectors.
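
For reference, detection in this line of work reduces to a one-proportion z-test: each position's "green list" is seeded by the preceding token, and a document is flagged when it contains implausibly many green tokens. A minimal sketch with a toy hash standing in for the seeded RNG:

```python
import math

GAMMA = 0.25            # fraction of the vocabulary that is green at each step
VOCAB_SIZE = 50257      # illustrative vocabulary size

def is_green(prev_token: int, token: int) -> bool:
    # toy stand-in for seeding a PRNG with the previous token
    return hash((prev_token, token)) % VOCAB_SIZE < GAMMA * VOCAB_SIZE

def watermark_z_score(tokens: list[int]) -> float:
    t = len(tokens) - 1                              # number of scored positions
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    # under the null (unwatermarked text), greens ~ Binomial(t, GAMMA)
    return (greens - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))
```

Running the test over a sliding window rather than the whole document is what makes it sensitive to short watermarked spans inside a long file.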

Understanding and Mitigating Copying in Diffusion Models

1 code implementation • NeurIPS 2023 • Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, Tom Goldstein

While it is widely believed that duplicated images in the training set are responsible for content replication at inference time, we observe that the text conditioning of the model plays a similarly important role.

Image Captioning, Memorization

What Can We Learn from Unlearnable Datasets?

1 code implementation • NeurIPS 2023 • Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein

First, it is widely believed that neural networks trained on unlearnable datasets only learn shortcuts, simpler rules that are not useful for generalization.

When Do Neural Nets Outperform Boosted Trees on Tabular Data?

1 code implementation • NeurIPS 2023 • Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, Vishak Prasad C, Benjamin Feuer, Chinmay Hegde, Ganesh Ramakrishnan, Micah Goldblum, Colin White

To this end, we conduct the largest tabular data analysis to date, comparing 19 algorithms across 176 datasets, and we find that the 'NN vs. GBDT' debate is overemphasized: for a surprisingly high number of datasets, either the performance difference between GBDTs and NNs is negligible, or light hyperparameter tuning on a GBDT is more important than choosing between NNs and GBDTs.

The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning

1 code implementation • 11 Apr 2023 • Micah Goldblum, Marc Finzi, Keefer Rowan, Andrew Gordon Wilson

No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems.

Universal Guidance for Diffusion Models

1 code implementation • 14 Feb 2023 • Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, Tom Goldstein

Typical diffusion models are trained to accept a particular form of conditioning, most commonly text, and cannot be conditioned on other modalities without retraining.

Face Recognition, object-detection +1
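
The mechanism admits a compact sketch: evaluate the guidance network and loss on the predicted clean image x0_hat rather than on the noisy x_t, then shift the noise prediction along the resulting gradient. A minimal sketch of this forward-guidance step, with eps_model, the guidance network f, and the loss all as placeholders:

```python
import torch

def guided_eps(eps_model, x_t, t, alpha_bar_t: float, f, loss_fn, target, scale: float):
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    # standard DDPM algebra: estimate of the clean image given x_t
    x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    # any off-the-shelf network f can judge x0_hat (classifier, detector, CLIP, ...)
    g = torch.autograd.grad(loss_fn(f(x0_hat), target), x_t)[0]
    # nudge the noise prediction so sampling drifts toward low guidance loss
    return eps + scale * (1 - alpha_bar_t) ** 0.5 * g
```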

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

2 code implementations • NeurIPS 2023 • Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein

In the text-to-image setting, the method creates hard prompts for diffusion models, allowing API users to easily generate, discover, and mix and match image concepts without prior knowledge on how to prompt the model.
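
The core trick is a projection loop: keep a continuous prompt, snap it to the nearest vocabulary embeddings to get a hard prompt, measure the gradient there, and apply that gradient to the continuous copy. A minimal sketch, with task_loss (e.g. a CLIP similarity to a target image) left as a placeholder:

```python
import torch

def optimize_hard_prompt(embed_table, task_loss, n_tokens=8, steps=500, lr=0.1):
    # start from random rows of the vocabulary embedding table (V, d)
    soft = embed_table[torch.randint(len(embed_table), (n_tokens,))].clone()
    for _ in range(steps):
        # project each soft token onto its nearest vocabulary embedding
        idx = torch.cdist(soft, embed_table).argmin(dim=1)
        hard = embed_table[idx].detach().requires_grad_(True)
        # gradient is taken at the *hard* prompt ...
        g = torch.autograd.grad(task_loss(hard), hard)[0]
        # ... but the update is applied to the *soft* prompt
        soft = soft - lr * g
    return torch.cdist(soft, embed_table).argmin(dim=1)   # final token ids
```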

Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness

2 code implementations • 6 Feb 2023 • Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein, Furong Huang

However, it is unclear whether existing robust training methods effectively increase the margin for each vulnerable point during training.

Adversarial Robustness

Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models

no code implementations • CVPR 2023 • Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, Tom Goldstein

Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes.

Image Retrieval, Retrieval

Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers

no code implementations • 28 Nov 2022 • Wanqian Yang, Polina Kirichenko, Micah Goldblum, Andrew Gordon Wilson

Deep neural networks are susceptible to shortcut learning, using simple features to achieve low training loss without discovering essential semantic structure.

PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization

1 code implementation • 24 Nov 2022 • Sanae Lotfi, Marc Finzi, Sanyam Kapoor, Andres Potapczynski, Micah Goldblum, Andrew Gordon Wilson

While there has been progress in developing non-vacuous generalization bounds for deep neural networks, these bounds tend to be uninformative about why deep learning works.

Generalization Bounds, Transfer Learning
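
The flavor of statement behind compression-based bounds fits in one line. A standard Occam-style bound (not the paper's exact theorem): if the trained model can be described in |c(h)| bits under a prefix-free code fixed before seeing data, then with probability at least 1 - δ over an i.i.d. sample of size n, simultaneously for all such h,

```latex
R(h) \;\le\; \widehat{R}(h) \;+\; \sqrt{\frac{|c(h)|\,\ln 2 + \ln(1/\delta)}{2n}}
```

so the more compressible the network, the tighter the guarantee, which is why aggressive compression can turn a vacuous bound into a non-vacuous one.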

K-SAM: Sharpness-Aware Minimization at the Speed of SGD

no code implementations • 23 Oct 2022 • Renkun Ni, Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein

Sharpness-Aware Minimization (SAM) has recently emerged as a robust technique for improving the accuracy of deep neural networks.
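
For context, a vanilla SAM update takes two gradients per step: an ascent step to the worst-case weights within an l2 ball of radius rho, then a descent step using the gradient measured there (K-SAM's speedup comes from computing both gradients on small top-k subsets of the batch). A minimal single-tensor sketch of the vanilla update:

```python
import torch

def sam_step(w: torch.Tensor, loss_fn, lr: float = 0.1, rho: float = 0.05):
    g = torch.autograd.grad(loss_fn(w), w)[0]
    eps = (rho * g / (g.norm() + 1e-12)).detach()   # ascend to the sharpest nearby point
    g_sharp = torch.autograd.grad(loss_fn(w + eps), w)[0]
    with torch.no_grad():
        w -= lr * g_sharp                           # descend with the worst-case gradient
    return w

w = torch.randn(10, requires_grad=True)
w = sam_step(w, lambda p: (p ** 2).sum())           # toy quadratic loss
```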

Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries

1 code implementation • 19 Oct 2022 • Yuxin Wen, Arpit Bansal, Hamid Kazemi, Eitan Borgnia, Micah Goldblum, Jonas Geiping, Tom Goldstein

As industrial applications are increasingly automated by machine learning models, enforcing personal data ownership and intellectual property rights requires tracing training data back to their rightful owners.

Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

2 code implementations • NeurIPS 2023 • Samuel Dooley, Rhea Sanjay Sukthanker, John P. Dickerson, Colin White, Frank Hutter, Micah Goldblum

Our search outputs a suite of models which Pareto-dominate all other high-performance architectures and existing bias mitigation methods in terms of accuracy and fairness, often by large margins, on the two most widely used datasets for face identification, CelebA and VGGFace2.

Face Identification, Face Recognition +2

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning

1 code implementation • 17 Oct 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein

Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates.

Federated Learning, Image Classification +2

The Lie Derivative for Measuring Learned Equivariance

1 code implementation • 6 Oct 2022 • Nate Gruver, Marc Finzi, Micah Goldblum, Andrew Gordon Wilson

In order to better understand the role of equivariance in recent vision models, we introduce the Lie derivative, a method for measuring equivariance with strong mathematical foundations and minimal hyperparameters.
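
Up to notation, the measured object is the derivative of the model's output under an infinitesimal input transformation, corrected for the corresponding output transformation. For a one-parameter family of transformations g_t with g_0 = id,

```latex
(\mathcal{L}_g f)(x) \;=\; \left.\frac{d}{dt}\right|_{t=0} \, g_t^{-1} \cdot f\!\left(g_t \cdot x\right),
```

and f is equivariant to the family exactly when this quantity vanishes for all inputs x.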

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

2 code implementations • NeurIPS 2023 • Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, Tom Goldstein

We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice.

Image Restoration, Variational Inference
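
The paper's improved sampler is short enough to state directly: restore, then trade degradation severity s for s-1 without re-restoring. A minimal sketch with the degradation D and restoration network R as placeholders (D(x, 0) is the identity):

```python
import torch

@torch.no_grad()
def cold_sample(x_T, D, R, T: int):
    x = x_T                                        # fully degraded starting point
    for s in range(T, 0, -1):
        x0_hat = R(x, s)                           # estimate the clean image
        x = x - D(x0_hat, s) + D(x0_hat, s - 1)    # swap severity s for s-1
    return x
```

Because D can be blur, masking, downsampling, or Gaussian noise, the same loop covers the whole family of generative models the abstract describes.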

Transfer Learning with Deep Tabular Models

1 code implementation • 30 Jun 2022 • Roman Levin, Valeriia Cherepanova, Avi Schwarzschild, Arpit Bansal, C. Bayan Bruss, Tom Goldstein, Andrew Gordon Wilson, Micah Goldblum

In this work, we demonstrate that upstream data gives tabular neural networks a decisive advantage over widely used GBDT models.

Medical Diagnosis, Transfer Learning

Autoregressive Perturbations for Data Poisoning

2 code implementations • 8 Jun 2022 • Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, David W. Jacobs

Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack.

Data Poisoning

Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors

1 code implementation • 20 May 2022 • Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew Gordon Wilson

Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task.

Transfer Learning

A Deep Dive into Dataset Imbalance and Bias in Face Identification

no code implementations • 15 Mar 2022 • Valeriia Cherepanova, Steven Reich, Samuel Dooley, Hossein Souri, Micah Goldblum, Tom Goldstein

This is an unfortunate omission, as 'imbalance' is a more complex matter in identification; imbalance may arise in not only the training data, but also the testing data, and furthermore may affect the proportion of identities belonging to each demographic group or the number of images belonging to each identity.

Face Identification, Face Recognition +1

Bayesian Model Selection, the Marginal Likelihood, and Generalization

1 code implementation • 23 Feb 2022 • Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson

We provide a partial remedy through a conditional marginal likelihood, which we show is more aligned with generalization, and practically valuable for large-scale hyperparameter learning, such as in deep kernel learning.

Model Selection, Neural Architecture Search

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking

1 code implementation • 11 Feb 2022 • Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Algorithmic extrapolation can be achieved through recurrent systems, which can be iterated many times to solve difficult reasoning problems.

Logical Reasoning
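
The recipe is concrete: train a weight-tied recurrent block for a modest number of iterations, then simply iterate it longer on harder instances at test time; the paper's fix for "overthinking" is a recall connection that re-feeds the raw input at every iteration. A toy sketch of that structure:

```python
import torch
import torch.nn as nn

class RecurrentSolver(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.embed = nn.Conv2d(3, ch, 3, padding=1)
        self.block = nn.Conv2d(ch + 3, ch, 3, padding=1)   # sees raw input again ("recall")
        self.head = nn.Conv2d(ch, 2, 3, padding=1)

    def forward(self, x, iters: int):
        h = self.embed(x)
        for _ in range(iters):                             # one weight-tied block, iterated
            h = torch.relu(self.block(torch.cat([h, x], dim=1)))
        return self.head(h)

net = RecurrentSolver()
x = torch.randn(1, 3, 32, 32)
easy = net(x, iters=10)     # training-time budget
hard = net(x, iters=100)    # extra recurrences at inference for harder instances
```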

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification

1 code implementation • 1 Feb 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein

Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.

Federated Learning

Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations

1 code implementation • 31 Jan 2022 • Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein

Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images.

Image Classification

Active Learning at the ImageNet Scale

1 code implementation • 25 Nov 2021 • Zeyad Ali Sami Emam, Hong-Min Chu, Ping-Yeh Chiang, Wojciech Czaja, Richard Leapman, Micah Goldblum, Tom Goldstein

Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) can achieve better performance when trained on this labeled subset.

Active Learning

Comparing Human and Machine Bias in Face Recognition

no code implementations • 15 Oct 2021 • Samuel Dooley, Ryan Downing, George Wei, Nathan Shankar, Bradon Thymes, Gudrun Thorkelsdottir, Tiye Kurtz-Miott, Rachel Mattson, Olufemi Obiwumi, Valeriia Cherepanova, Micah Goldblum, John P Dickerson, Tom Goldstein

Much recent research has uncovered and discussed serious concerns of bias in facial analysis technologies, finding performance disparities between groups of people based on perceived gender, skin type, lighting condition, etc.

Face Recognition

Identification of Attack-Specific Signatures in Adversarial Examples

no code implementations • 13 Oct 2021 • Hossein Souri, Pirazh Khorramshahi, Chun Pong Lau, Micah Goldblum, Rama Chellappa

The adversarial attack literature contains a myriad of algorithms for crafting perturbations which yield pathological behavior in neural networks.

Adversarial Attack

An Investigation into the Role of Author Demographics in ICLR Participation and Review

no code implementations • 29 Sep 2021 • Keshav Ganapathy, Emily Liu, Zain Zarger, Gowthami Somepalli, Micah Goldblum, Tom Goldstein

As machine learning conferences grow rapidly, many are concerned that individuals will be left behind on the basis of traits such as gender and geography.

Stochastic Training is Not Necessary for Generalization

1 code implementation • ICLR 2022 • Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein

It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks.

Data Augmentation

Thinking Deeper With Recurrent Networks: Logical Extrapolation Without Overthinking

no code implementations • 29 Sep 2021 • Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Classical machine learning systems perform best when they are trained and tested on the same distribution, and they lack a mechanism to increase model power after training is complete.

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs

no code implementations • 29 Sep 2021 • Mucong Ding, Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Micah Goldblum, David Wipf, Furong Huang, Tom Goldstein

We observe that in most cases, we need both a suitable domain generalization algorithm and a strong GNN backbone model to optimize out-of-distribution test performance.

Domain Generalization, Graph Classification +1

Protecting Proprietary Data: Poisoning for Secure Dataset Release

no code implementations • 29 Sep 2021 • Liam H Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Amit Bansal, Wojciech Czaja, Tom Goldstein

These two behaviors can be in conflict as an organization wants to prevent competitors from using their own data to replicate the performance of their proprietary models.

Data Poisoning

Towards Transferable Adversarial Attacks on Vision Transformers

2 code implementations • 9 Sep 2021 • Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang

We evaluate the transferability of attacks on state-of-the-art ViTs, CNNs and robustly trained CNNs.

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

1 code implementation • 3 Aug 2021 • Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein

We find that samples which cause similar parameters to malfunction are semantically similar.

Adversarial Examples Make Strong Poisons

2 code implementations • NeurIPS 2021 • Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.

Data Poisoning

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

1 code implementation • 16 Jun 2021 • Hossein Souri, Liam Fowl, Rama Chellappa, Micah Goldblum, Tom Goldstein

In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all.

Backdoor Attack

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks

1 code implementation • NeurIPS 2021 • Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, Tom Goldstein

In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.

The Intrinsic Dimension of Images and Its Impact on Learning

1 code implementation • ICLR 2021 • Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, Tom Goldstein

We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pixels in the images.

Image Generation
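
The estimates in this line of work come from maximum-likelihood estimators of the Levina-Bickel type, which read dimension off the growth rate of nearest-neighbor distances. A minimal sketch (the aggregation across points follows the MacKay-Ghahramani correction; k is an illustrative choice):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_intrinsic_dim(X: np.ndarray, k: int = 20) -> float:
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nbrs.kneighbors(X)        # dist[:, 0] is each point itself
    T = dist[:, 1:]                     # distances to the k nearest neighbors
    # per-point inverse estimate: mean_j log(T_k / T_j) for j = 1..k-1
    inv_m = np.log(T[:, -1:] / T[:, :-1]).mean(axis=1)
    return float(1.0 / inv_m.mean())    # average inverses, then invert

X = np.random.randn(2000, 3) @ np.random.randn(3, 50)   # 3-dim data in 50 ambient dims
print(mle_intrinsic_dim(X))                              # ~3
```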

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

1 code implementation • 2 Mar 2021 • Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein

The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.

Data Poisoning
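
The defense named in the title combines the two ingredients mentioned: k-way mixup of training images plus additive Laplacian noise. A minimal sketch assuming one-hot labels, with the mixing width and noise scale as illustrative values rather than the paper's calibrated ones:

```python
import torch

def dp_instahide_batch(images: torch.Tensor, labels: torch.Tensor,
                       k: int = 4, noise_scale: float = 0.05):
    n = images.shape[0]
    lam = torch.distributions.Dirichlet(torch.ones(k)).sample((n,))   # (n, k) weights
    idx = torch.randint(0, n, (n, k))                                 # k sources per output
    mixed_x = torch.einsum("nk,nkchw->nchw", lam, images[idx])
    mixed_y = torch.einsum("nk,nkc->nc", lam, labels[idx])            # soft labels
    noise = torch.distributions.Laplace(0.0, noise_scale).sample(mixed_x.shape)
    return (mixed_x + noise).clamp(0, 1), mixed_y
```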

What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning

1 code implementation • 26 Feb 2021 • Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein

Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.

Data Poisoning
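
That threat model is usually written as a bilevel problem: the attacker picks bounded perturbations Delta = {delta_i} to the training set so that the model trained on the poisoned data misbehaves on some target. Schematically,

```latex
\min_{\Delta \in \mathcal{C}} \; \mathcal{L}_{\mathrm{adv}}\!\left(\theta^{*}(\Delta)\right)
\quad \text{s.t.} \quad
\theta^{*}(\Delta) \in \arg\min_{\theta} \sum_{i=1}^{n} \mathcal{L}\!\left(f_{\theta}(x_i + \delta_i),\, y_i\right),
```

with the constraint set C typically an l-infinity ball around each poisoned image.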

The Uncanny Similarity of Recurrence and Depth

1 code implementation • ICLR 2022 • Avi Schwarzschild, Arjun Gupta, Amin Ghiasi, Micah Goldblum, Tom Goldstein

It is widely believed that deep neural networks contain layer specialization, wherein neural networks extract hierarchical features representing edges and patterns in shallow layers and complete objects in deeper layers.

Image Classification

Technical Challenges for Training Fair Neural Networks

no code implementations • 12 Feb 2021 • Valeriia Cherepanova, Vedant Nanda, Micah Goldblum, John P. Dickerson, Tom Goldstein

As machine learning algorithms have been widely deployed across applications, many concerns have been raised over the fairness of their predictions, especially in high stakes settings (such as facial recognition and medical imaging).

Fairness, Medical Diagnosis

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

no code implementations • ICLR 2021 • Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John Dickerson, Gavin Taylor, Tom Goldstein

Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike.

Face Detection, Face Recognition

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

no code implementations • 18 Dec 2020 • Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein

As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance.

BIG-bench Machine Learning, Data Poisoning

Analyzing the Machine Learning Conference Review Process

no code implementations • 24 Nov 2020 • David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias.

BIG-bench Machine Learning

Data Augmentation for Meta-Learning

1 code implementation • 14 Oct 2020 • Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein

Conventional image classifiers are trained by randomly sampling mini-batches of images.

Data Augmentation, Meta-Learning

An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process

1 code implementation • 11 Oct 2020 • David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias.

BIG-bench Machine Learning

Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization

no code implementations • 28 Sep 2020 • Manli Shu, Zuxuan Wu, Micah Goldblum, Tom Goldstein

Adversarial training is the industry standard for producing models that are robust to small adversarial perturbations.

Semantic Segmentation

Encoding Robustness to Image Style via Adversarial Feature Perturbations

1 code implementation • NeurIPS 2021 • Manli Shu, Zuxuan Wu, Micah Goldblum, Tom Goldstein

We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce models that are robust to various unseen distributional shifts.

Data Augmentation, Semantic Segmentation

Adversarial Attacks on Machine Learning Systems for High-Frequency Trading

no code implementations • 21 Feb 2020 • Micah Goldblum, Avi Schwarzschild, Ankit B. Patel, Tom Goldstein

Algorithmic trading systems are often completely automated, and deep learning is increasingly receiving attention in this domain.

Algorithmic Trading, BIG-bench Machine Learning +1

WITCHcraft: Efficient PGD attacks with random step size

no code implementations • 18 Nov 2019 • Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.

Computational Efficiency
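
The tweak is a one-line change to standard PGD: draw the step size at random each iteration instead of following a fixed schedule, trading expensive restarts for cheap per-step variation. A minimal l-infinity sketch with illustrative bounds and step-size range:

```python
import torch

def pgd_random_step(model, x, y, eps=8 / 255, steps=20, loss_fn=None):
    loss_fn = loss_fn or torch.nn.CrossEntropyLoss()
    delta = torch.empty_like(x).uniform_(-eps, eps)         # random start
    for _ in range(steps):
        delta.requires_grad_(True)
        g = torch.autograd.grad(loss_fn(model(x + delta), y), delta)[0]
        # random step size each iteration, in place of a fixed alpha
        alpha = torch.empty(1).uniform_(0.5, 2.0).item() * eps / steps
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)
```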

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

1 code implementation • NeurIPS 2020 • Micah Goldblum, Liam Fowl, Tom Goldstein

Previous work on adversarially robust neural networks for image classification requires large training sets and computationally expensive training procedures.

Classification, Few-Shot Image Classification +3

Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory

1 code implementation • ICLR 2020 • Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein

We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike.

Learning Theory

Understanding Generalization through Visualizations

2 code implementations • NeurIPS Workshop ICBINB 2020 • W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein

The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.

Adversarially Robust Distillation

2 code implementations • 23 May 2019 • Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student.

Adversarial Robustness, Knowledge Distillation
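
A minimal sketch of a distillation loss in the spirit described: the student is evaluated on adversarially perturbed inputs but pulled toward the teacher's predictions on the clean inputs, so the teacher's robustness transfers along with its accuracy. Temperature, alpha, and the attack that produced x_adv are assumptions here, not the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def ard_loss(student, teacher, x, y, x_adv, T: float = 4.0, alpha: float = 0.9):
    with torch.no_grad():
        t_probs = F.softmax(teacher(x) / T, dim=1)       # teacher on *clean* input
    s_log_probs = F.log_softmax(student(x_adv) / T, dim=1)  # student on *attacked* input
    kd = F.kl_div(s_log_probs, t_probs, reduction="batchmean") * T * T
    return alpha * kd + (1 - alpha) * F.cross_entropy(student(x_adv), y)
```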
