1 code implementation • 11 Feb 2022 • Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein
Logical extrapolation can be achieved through recurrent systems, which can be iterated many times to solve difficult reasoning problems.
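A minimal sketch of this idea, assuming a weight-tied residual block whose iteration count is chosen at inference time (all names here are illustrative, not the paper's released code):

```python
import torch
import torch.nn as nn

class RecurrentReasoner(nn.Module):
    """Weight-tied residual block applied repeatedly; running more
    iterations at inference time spends more computation on harder inputs."""

    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor, iterations: int) -> torch.Tensor:
        for _ in range(iterations):
            x = x + self.block(x)  # the same weights are reused every step
        return x

model = RecurrentReasoner(32)
features = torch.randn(1, 32, 16, 16)
easy_out = model(features, iterations=5)    # a few steps suffice for easy inputs
hard_out = model(features, iterations=50)   # same weights, more "thinking"
```

Because every iteration reuses the same parameters, nothing about the architecture changes when the loop is run longer, which is what makes test-time extrapolation possible.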
no code implementations • 29 Sep 2021 • Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein
Classical machine learning systems perform best when they are trained and tested on the same distribution, and they lack a mechanism to increase model power after training is complete.
1 code implementation • 13 Aug 2021 • Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Arpit Bansal, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein
We describe new datasets for studying generalization from easy to hard examples.
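One natural axis of difficulty in such datasets is input size. As a hedged illustration (not the released dataset code), easy and hard splits of a prefix-sum-style task might be generated like this:

```python
import numpy as np

def prefix_sum_instances(n: int, length: int, seed: int = 0):
    """Generate random binary strings paired with their prefix sums mod 2.
    Longer strings serve as 'harder' examples of the same underlying task."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n, length))
    y = np.cumsum(x, axis=1) % 2          # target: running parity per position
    return x, y

easy_x, easy_y = prefix_sum_instances(1000, length=16)   # train on short strings
hard_x, hard_y = prefix_sum_instances(1000, length=64)   # test on long strings
```

Training on the short split and evaluating on the long split then directly measures easy-to-hard generalization.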
no code implementations • 17 Jun 2021 • Arpit Bansal, Micah Goldblum, Valeriia Cherepanova, Avi Schwarzschild, C. Bayan Bruss, Tom Goldstein
Class-imbalanced data, in which some classes contain far more samples than others, is ubiquitous in real-world applications.
1 code implementation • NeurIPS 2021 • Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, Tom Goldstein
In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.
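A hedged toy sketch of this train-short/test-long recipe, with a hypothetical weight-tied GRU model standing in for the paper's architecture:

```python
import torch
import torch.nn as nn

class TiedGRUSolver(nn.Module):
    """Toy weight-tied solver: one GRU cell stepped repeatedly on a pooled
    encoding of the input, so test-time compute scales with the step count."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.readout = nn.Linear(dim, 1)

    def forward(self, bits: torch.Tensor, steps: int) -> torch.Tensor:
        h = self.embed(bits.unsqueeze(-1)).mean(dim=1)  # (batch, dim)
        x = torch.zeros_like(h)
        for _ in range(steps):
            h = self.cell(x, h)  # each extra recurrence reuses the same cell
        return self.readout(h)

model = TiedGRUSolver()
easy = torch.randint(0, 2, (8, 16)).float()  # short, training-regime inputs
hard = torch.randint(0, 2, (8, 64)).float()  # longer inputs seen only at test time
train_out = model(easy, steps=5)    # few recurrent steps during training
test_out = model(hard, steps=50)    # many more recurrences at inference
```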
6 code implementations • 2 Jun 2021 • Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C. Bayan Bruss, Tom Goldstein
We devise a hybrid deep learning approach to solving tabular data problems.
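A hedged sketch of what such a hybrid can look like, assuming each column is embedded as a token and a transformer attends across features (illustrative only; the paper's model adds further attention mechanisms and pre-training components):

```python
import torch
import torch.nn as nn

class TabularTransformer(nn.Module):
    """Embed each column as a token, let a transformer attend across
    features, then classify from the pooled representation."""

    def __init__(self, n_features: int, dim: int = 32, n_classes: int = 2):
        super().__init__()
        # one learned embedding per numeric column (categoricals would use nn.Embedding)
        self.feature_embeds = nn.ModuleList(
            [nn.Linear(1, dim) for _ in range(n_features)]
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> tokens: (batch, n_features, dim)
        tokens = torch.stack(
            [emb(x[:, i : i + 1]) for i, emb in enumerate(self.feature_embeds)], dim=1
        )
        return self.head(self.encoder(tokens).mean(dim=1))

logits = TabularTransformer(n_features=10)(torch.randn(4, 10))
```

Tokenizing columns lets attention model feature interactions explicitly, interactions that a plain MLP over a flat feature vector must learn implicitly.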
1 code implementation • ICLR 2022 • Avi Schwarzschild, Arjun Gupta, Amin Ghiasi, Micah Goldblum, Tom Goldstein
It is widely believed that deep neural networks exhibit layer specialization, wherein they extract hierarchical features: edges and simple patterns in shallow layers, and complete objects in deeper layers.
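One common way this belief is probed, sketched here under the assumption of a standard torchvision ResNet (a hypothetical setup, not the paper's experiments), is to compare linear probes attached to activations at different depths:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations = {}

def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.flatten(1)  # (batch, features)
    return fn

# capture features from a shallow and a deep stage
model.layer1.register_forward_hook(hook("shallow"))
model.layer4.register_forward_hook(hook("deep"))

with torch.no_grad():
    model(torch.randn(2, 3, 224, 224))

# One linear probe per depth; higher probe accuracy at a given depth is the
# usual evidence offered for that layer's features being "more complete".
probes = {name: nn.Linear(feats.shape[1], 1000)
          for name, feats in activations.items()}
```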
no code implementations • 1 Jan 2021 • Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P Dickerson, Tom Goldstein
Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference.
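As a hedged illustration of the backdoor half of this threat model (a simple patch-style trigger; the attacks actually studied are more sophisticated):

```python
import numpy as np

def insert_backdoor(images: np.ndarray, labels: np.ndarray,
                    target_class: int, rate: float = 0.05, seed: int = 0):
    """Stamp a small trigger onto a fraction of training images and relabel
    them; a model trained on the result tends to predict target_class
    whenever the trigger appears at test time.
    Assumes float images of shape (N, H, W) scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    new_labels = labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned[idx, -4:, -4:] = 1.0      # 4x4 white square in the corner as trigger
    new_labels[idx] = target_class     # flip labels to the attacker's target
    return poisoned, new_labels
```

Benchmarking attacks like this one consistently, with fixed poisoning budgets and model setups, is what allows their true toxicity to be compared.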
no code implementations • 18 Dec 2020 • Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance.
2 code implementations • 22 Jun 2020 • Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference.
no code implementations • 20 Apr 2020 • Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu
We first demonstrate successful transfer attacks against a victim network using only its feature extractor.
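A hedged sketch of the attack's shape, assuming a PGD-style perturbation crafted against the feature extractor alone (hypothetical helper, not the paper's exact procedure):

```python
import torch
import torch.nn as nn

def feature_space_attack(extractor: nn.Module, x: torch.Tensor,
                         target_feats: torch.Tensor,
                         eps: float = 8 / 255, steps: int = 20) -> torch.Tensor:
    """Craft a perturbation using only the shared feature extractor,
    pushing the input's features toward those of another class."""
    for p in extractor.parameters():
        p.requires_grad_(False)   # gradients are needed w.r.t. the input only
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.mse_loss(extractor(x + delta), target_feats)
        loss.backward()
        with torch.no_grad():
            delta -= (eps / 4) * delta.grad.sign()   # descend: match target features
            delta.clamp_(-eps, eps)                  # stay inside the L-inf ball
        delta.grad.zero_()
    return (x + delta).detach()
```

The crafted input is then fed to the full victim model, which shares the extractor but whose classification head the attacker never sees.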
no code implementations • 21 Feb 2020 • Micah Goldblum, Avi Schwarzschild, Ankit B. Patel, Tom Goldstein
Algorithmic trading systems are often completely automated, and deep learning is increasingly receiving attention in this domain.
1 code implementation • ICLR 2020 • Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike.