2 code implementations • NeurIPS 2023 • Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein
Battle of the Backbones (BoB) makes this choice easier by benchmarking a diverse suite of pretrained models, including vision-language models, models trained via self-supervised learning, and the Stable Diffusion backbone, across computer vision tasks ranging from classification and object detection to OOD generalization and more.
no code implementations • 23 Oct 2022 • Renkun Ni, Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein
Sharpness-Aware Minimization (SAM) has recently emerged as a robust technique for improving the accuracy of deep neural networks.
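The core SAM recipe can be illustrated with a minimal sketch: ascend within a small neighborhood of the current weights to find a high-loss point, then apply that point's gradient to the original weights. The toy quadratic loss and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def loss_grad(w):
    """Gradient of the toy loss L(w) = 0.5 * ||w||^2 (illustrative)."""
    return w

def sam_step(w, lr=0.1, rho=0.05):
    """One SAM update: perturb toward higher loss, then descend
    from the ORIGINAL weights using the perturbed-point gradient."""
    g = loss_grad(w)
    # Inner maximization: scaled unit-gradient ascent direction.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Gradient evaluated at the perturbed weights.
    g_sharp = loss_grad(w + eps)
    return w - lr * g_sharp

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
print(np.linalg.norm(w))  # norm shrinks far below its starting value
```

The key detail is that the update is applied at the original weights, so the optimizer seeks minima whose entire neighborhood has low loss, i.e. flat minima.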
no code implementations • ICLR 2022 • Renkun Ni, Manli Shu, Hossein Souri, Micah Goldblum, Tom Goldstein
Contrastive learning has recently taken off as a paradigm for learning from unlabeled data.
2 code implementations • NeurIPS 2021 • Chen Zhu, Renkun Ni, Zheng Xu, Kezhi Kong, W. Ronny Huang, Tom Goldstein
Innovations in neural architectures have fostered significant breakthroughs in language modeling and computer vision.

no code implementations • ICLR 2021 • Renkun Ni, Hong-Min Chu, Oscar Castañeda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein
Low-precision neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.
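The basic ingredient of such low-precision networks can be sketched with symmetric uniform quantization: weights and activations are mapped to small signed integers plus a scale, so dot products run in integer arithmetic with a single floating-point rescale at the end. The function names and the 4-bit choice below are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits=4):
    """Map x to signed integers in [-(2^(b-1)-1), 2^(b-1)-1] plus a scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

w = np.random.randn(8, 8)
a = np.random.randn(8)

qw, sw = quantize(w)
qa, sa = quantize(a)

# The accumulation runs entirely in int32; one rescale recovers an
# approximation of the real-valued result.
y_int = qw @ qa
y_approx = y_int * (sw * sa)
y_exact = w @ a
print(np.abs(y_approx - y_exact).max())  # bounded quantization error
```

Because every per-element rounding error is at most half the scale, the reconstructed weights stay within `scale / 2` of the originals, which is what keeps the integer matmul a faithful approximation.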
1 code implementation • 14 Oct 2020 • Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein
Conventional image classifiers are trained by randomly sampling mini-batches of images.
no code implementations • 26 Jul 2020 • Renkun Ni, Hong-Min Chu, Oscar Castañeda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein
Low-resolution neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.
1 code implementation • ICLR 2020 • Ping-Yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
no code implementations • 22 Feb 2020 • Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.
1 code implementation • ICML 2020 • Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein
In doing so, we introduce and verify several hypotheses for why meta-learned models perform better.
no code implementations • 18 Nov 2019 • Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi
State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.
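The "iterative methods plus numerous random restarts" recipe referred to here can be sketched as an L-infinity PGD attack on a toy linear classifier: each restart perturbs the input randomly inside the allowed ball, runs signed-gradient ascent on a margin loss, and the best perturbation across restarts is kept. The toy model and all names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))  # toy linear classifier: logits = W @ x

def margin_loss(x, y):
    """Best rival logit minus the true-class logit; > 0 means misclassified."""
    logits = W @ x
    return np.max(np.delete(logits, y)) - logits[y]

def margin_grad(x, y):
    logits = W @ x
    j = int(np.argmax(np.delete(logits, y)))
    j = j if j < y else j + 1  # map back to the full logit index
    return W[j] - W[y]

def pgd_attack(x, y, eps=0.3, step=0.05, iters=40, restarts=5):
    """Maximize the margin loss within an L_inf ball of radius eps,
    keeping the best perturbation over several random starting points."""
    best_x, best_loss = x, margin_loss(x, y)
    for _ in range(restarts):
        xa = x + rng.uniform(-eps, eps, size=x.shape)  # random restart
        for _ in range(iters):
            xa = xa + step * np.sign(margin_grad(xa, y))  # ascent step
            xa = np.clip(xa, x - eps, x + eps)            # project back
        if margin_loss(xa, y) > best_loss:
            best_x, best_loss = xa, margin_loss(xa, y)
    return best_x

x = rng.standard_normal(5)
y = int(np.argmax(W @ x))  # the clean prediction
x_adv = pgd_attack(x, y)
print(margin_loss(x, y), margin_loss(x_adv, y))  # loss rises under attack
```

The cost the abstract alludes to is visible here: `restarts * iters` gradient evaluations per example, which is what cheaper attack strategies aim to avoid.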
no code implementations • 25 Sep 2019 • Chen Zhu, Renkun Ni, Ping-Yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness.
1 code implementation • 3 Aug 2017 • Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su
This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs.
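A hedged sketch of stochastic partial quantization in the spirit this abstract describes: each step quantizes only a portion of the weights, with selection biased toward weights whose quantization error is small, while the rest stay full-precision. The 1-bit quantizer and the exact selection rule below are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_binary(w):
    """Simple 1-bit quantizer: sign times the mean magnitude (illustrative)."""
    return np.sign(w) * np.abs(w).mean()

def stochastic_quantize(w, ratio=0.5):
    """Quantize a random subset of weights, preferring low-error elements."""
    q = quantize_binary(w)
    err = np.abs(w - q) / (np.abs(w) + 1e-12)  # relative quantization error
    p = 1.0 / (err + 1e-12)                    # favor small-error weights
    p = p / p.sum()
    k = int(ratio * w.size)
    idx = rng.choice(w.size, size=k, replace=False, p=p)
    mixed = w.copy()
    mixed[idx] = q[idx]                        # hybrid: part quantized, part full
    return mixed

w = rng.standard_normal(100)
w_half = stochastic_quantize(w, ratio=0.5)
w_full = quantize_binary(w)
# The hybrid weights stay closer to the originals than fully quantized ones.
print(np.abs(w_half - w).mean(), np.abs(w_full - w).mean())
```

Gradually raising `ratio` toward 1 over training would recover a fully quantized network while limiting the error introduced at any single step.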