Search Results for author: Brian R. Bartoldson

Found 7 papers, 3 papers with code

Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies

no code implementations · 14 Apr 2024 · Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura

However, our scaling laws also predict that robustness slowly grows and then plateaus at 90%: surpassing our new SOTA by scaling is impractical, and perfect robustness is impossible.

Adversarial Robustness

Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates for Mesh Physics

2 code implementations · 1 Apr 2023 · Brian R. Bartoldson, Yeping Hu, Amar Saini, Jose Cadena, Yucheng Fu, Jie Bao, Zhijie Xu, Brenda Ng, Phan Nguyen

With this, we were able to train MGN on meshes with *millions* of nodes to generate computational fluid dynamics (CFD) simulations.

Numerical Integration

Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities

no code implementations · 13 Oct 2022 · Brian R. Bartoldson, Bhavya Kailkhura, Davis Blalock

To address this problem, there has been a great deal of research on *algorithmically-efficient deep learning*, which seeks to reduce training costs not at the hardware or implementation level, but through changes in the semantics of the training program.

Models Out of Line: A Fourier Lens on Distribution Shift Robustness

no code implementations · 8 Jul 2022 · Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya Kailkhura, Peer-Timo Bremer

However, there is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.

Data Augmentation

The Generalization-Stability Tradeoff In Neural Network Pruning

no code implementations · NeurIPS 2020 · Brian R. Bartoldson, Ari S. Morcos, Adrian Barbu, Gordon Erlebacher

Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting.

Network Pruning
