no code implementations • 14 Apr 2024 • Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura
However, our scaling laws also predict that robustness grows slowly and then plateaus at 90%: surpassing our new SOTA by a wide margin through scaling alone is impractical, and perfect robustness is impossible.
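As a point of reference, a saturating scaling law of this kind can be illustrated with a toy curve; the functional form, parameter names, and constants below are assumptions chosen for exposition, not the fitted law from the paper.

```python
# Illustrative (assumed) saturating scaling law: robust accuracy improves with
# scale C but asymptotes at r_inf = 0.90. Both the functional form and the
# constants are placeholders, not the paper's fitted coefficients.
def robust_accuracy(C, r_inf=0.90, a=0.55, alpha=0.15):
    return r_inf - a * C ** (-alpha)

for C in (1e2, 1e4, 1e6, 1e8):
    print(f"scale={C:.0e}  predicted robust accuracy={robust_accuracy(C):.3f}")
```

The only point of the toy curve is that growth toward the asymptote slows sharply, so large increases in scale buy ever smaller robustness gains near the plateau.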
3 code implementations • 9 Oct 2023 • Neel Jain, Ping-Yeh Chiang, Yuxin Wen, John Kirchenbauer, Hong-Min Chu, Gowthami Somepalli, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein
We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation.
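A minimal sketch of the kind of augmentation in question, assuming it is the uniform embedding noise the paper studies; the scaling rule and the default alpha below follow that formulation but are illustrative rather than the authors' released implementation.

```python
import torch

def add_embedding_noise(embeds: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Perturb token embeddings with uniform noise during finetuning only.

    embeds: (batch, seq_len, dim) token embeddings.
    alpha:  noise scale; the magnitude alpha / sqrt(seq_len * dim) is an
            assumed setting used here for illustration.
    """
    _, seq_len, dim = embeds.shape
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embeds).uniform_(-1.0, 1.0) * scale
    return embeds + noise

# Usage sketch (Hugging Face-style models): perturb embeddings before the
# forward pass while training, and skip the noise at evaluation time.
# embeds = model.get_input_embeddings()(input_ids)
# outputs = model(inputs_embeds=add_embedding_noise(embeds), labels=labels)
```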
2 code implementations • 1 Apr 2023 • Brian R. Bartoldson, Yeping Hu, Amar Saini, Jose Cadena, Yucheng Fu, Jie Bao, Zhijie Xu, Brenda Ng, Phan Nguyen
With this, we were able to train MGN on meshes with *millions* of nodes to generate computational fluid dynamics (CFD) simulations.
no code implementations • 13 Oct 2022 • Brian R. Bartoldson, Bhavya Kailkhura, Davis Blalock
To address this problem, there has been a great deal of research on *algorithmically-efficient deep learning*, which seeks to reduce training costs not at the hardware or implementation level, but through changes in the semantics of the training program.
no code implementations • 8 Jul 2022 • Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya Kailkhura, Peer-Timo Bremer
However, there is still no clear understanding of the conditions on OOD data and model properties required to observe effective robustness.
2 code implementations • NeurIPS 2021 • James Diffenderfer, Brian R. Bartoldson, Shreya Chaganti, Jize Zhang, Bhavya Kailkhura
Successful adoption of deep learning (DL) in the wild requires models to be: (1) compact, (2) accurate, and (3) robust to distributional shifts.
no code implementations • NeurIPS 2020 • Brian R. Bartoldson, Ari S. Morcos, Adrian Barbu, Gordon Erlebacher
Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting.