Search Results for author: James Diffenderfer

Found 14 papers, 5 papers with code

End-to-End Mesh Optimization of a Hybrid Deep Learning Black-Box PDE Solver

no code implementations • 17 Apr 2024 • Shaocong Ma, James Diffenderfer, Bhavya Kailkhura, Yi Zhou

In this study, we explore the feasibility of end-to-end training of a hybrid model with a black-box PDE solver and a deep learning model for fluid flow prediction.
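
A minimal sketch of the general idea in a PyTorch setting; `black_box_solver` is a hypothetical stand-in, and the finite-difference backward pass is one common way to couple a non-differentiable solver to autograd, not necessarily this paper's method:

```python
import torch

def black_box_solver(params: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for an external solver call: returns a scalar
    # flow-prediction loss and exposes no gradients.
    return (params.detach() ** 2).sum()

class BlackBoxPDE(torch.autograd.Function):
    eps = 1e-3  # finite-difference step size

    @staticmethod
    def forward(ctx, params):
        ctx.save_for_backward(params)
        return black_box_solver(params)

    @staticmethod
    def backward(ctx, grad_out):
        (params,) = ctx.saved_tensors
        flat = params.detach().flatten()
        grad = torch.zeros_like(flat)
        for i in range(flat.numel()):  # two solver calls per coordinate
            e = torch.zeros_like(flat); e[i] = BlackBoxPDE.eps
            up = black_box_solver((flat + e).view_as(params))
            dn = black_box_solver((flat - e).view_as(params))
            grad[i] = (up - dn) / (2 * BlackBoxPDE.eps)
        return grad_out * grad.view_as(params)

theta = torch.randn(4, requires_grad=True)  # e.g., outputs of the deep model
loss = BlackBoxPDE.apply(theta)
loss.backward()      # gradients now flow back into the deep model
print(theta.grad)    # ≈ 2 * theta for this toy quadratic solver
```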

Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies

no code implementations • 14 Apr 2024 • Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura

However, our scaling laws also predict that robustness slowly grows and then plateaus at 90%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible.
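
To make the plateau claim concrete, one saturating functional form consistent with "grows then plateaus" is sketched below; the exact form and exponents are illustrative assumptions, not the paper's fitted law:

```latex
% Illustrative saturating scaling law (assumed form, not the paper's fit):
% robust accuracy rises with scale C but approaches an asymptote A < 100%.
\mathrm{RobustAcc}(C) = A - b\,C^{-\alpha}, \qquad A \approx 90\%, \quad b,\ \alpha > 0
```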

Adversarial Robustness

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

no code implementations • 18 Mar 2024 • Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li

While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected.

Ethics • Fairness • +1

GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations

1 code implementation • 19 Feb 2024 • Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, Kaidi Xu

As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial.

Card Games • Logical Reasoning

When Bio-Inspired Computing meets Deep Learning: Low-Latency, Accurate, & Energy-Efficient Spiking Neural Networks from Artificial Neural Networks

no code implementations • 12 Dec 2023 • Gourav Datta, Zeyu Liu, James Diffenderfer, Bhavya Kailkhura, Peter A. Beerel

However, advanced ANN-to-SNN conversion approaches demonstrate that for lossless conversion, the number of SNN time steps must equal the number of quantization steps in the ANN activation function.
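
A toy numerical check of the quoted equivalence, assuming rate coding with integrate-and-fire (IF) neurons and soft reset; this is an illustration of the underlying folklore result, not the authors' conversion pipeline:

```python
import numpy as np

def quantized_relu(x, T):
    # ANN side: T-level floor quantization of clip(x, 0, 1)
    return np.floor(np.clip(x, 0.0, 1.0) * T) / T

def if_neuron_rate(x, T, v_th=1.0):
    # SNN side: constant input current x for T time steps, soft reset on spike
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= v_th:
            spikes += 1
            v -= v_th
    return spikes / T  # firing rate over T steps

T = 8  # number of time steps == number of quantization steps
for x in np.linspace(0.0, 1.0, 9):
    print(f"x={x:.3f}  ann={quantized_relu(x, T):.3f}  snn={if_neuron_rate(x, T):.3f}")
# The two columns agree; with fewer time steps than quantization levels,
# the firing rate can no longer represent every ANN activation value.
```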

Quantization

DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training

1 code implementation • 3 Oct 2023 • Aochuan Chen, Yimeng Zhang, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu

Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on ResNet-20 trained on CIFAR-10, approaching FO training performance for the first time.
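
The paper's code is linked above; as a generic illustration of the zeroth-order (forward-evaluations-only) estimators this line of work scales up, here is a coordinate-wise finite-difference sketch. The `coords` sparsification argument is a simplification of the idea, not DeepZero's API:

```python
import torch

def zo_coordinate_grad(loss_fn, theta: torch.Tensor, mu: float = 1e-3,
                       coords=None) -> torch.Tensor:
    """Central-difference gradient over selected coordinates (all by default).
    Restricting `coords` to a sparse subset is the kind of device that makes
    this affordable for deep models."""
    flat = theta.flatten()
    grad = torch.zeros_like(flat)
    idx = range(flat.numel()) if coords is None else coords
    for i in idx:
        e = torch.zeros_like(flat)
        e[i] = mu
        grad[i] = (loss_fn((flat + e).view_as(theta))
                   - loss_fn((flat - e).view_as(theta))) / (2 * mu)
    return grad.view_as(theta)

# Sanity check on a toy quadratic: the estimate matches the analytic gradient.
theta = torch.tensor([1.0, -2.0, 0.5])
print(zo_coordinate_grad(lambda t: (t ** 2).sum(), theta))  # ≈ [ 2., -4.,  1.]
```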

Adversarial Defense • Computational Efficiency • +1

Efficient Multi-Prize Lottery Tickets: Enhanced Accuracy, Training, and Inference Speed

no code implementations • 26 Sep 2022 • Hao Cheng, Pu Zhao, Yize Li, Xue Lin, James Diffenderfer, Ryan Goldhahn, Bhavya Kailkhura

Recently, Diffenderfer and Kailkhura proposed a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full-precision neural networks.
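
An illustrative sketch of that paradigm (assumed details, not the authors' released code): the randomly initialized weights stay frozen and are binarized to their signs, and only a pruning mask over the highest-scoring connections is learned:

```python
import torch

def binary_subnetwork(weight: torch.Tensor, scores: torch.Tensor,
                      keep_frac: float = 0.5) -> torch.Tensor:
    """Return sign(weight) masked to the top-`keep_frac` fraction of scores.
    In practice `scores` would be trained (e.g., with straight-through
    gradients); `weight` itself is never updated."""
    n = weight.numel()
    k = int(keep_frac * n)
    threshold = scores.flatten().kthvalue(n - k + 1).values
    mask = (scores >= threshold).float()
    return torch.sign(weight) * mask

w = torch.randn(4, 4)           # random, frozen weights
s = torch.rand(4, 4)            # learnable importance scores
print(binary_subnetwork(w, s))  # entries in {-1, 0, +1}
```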

Models Out of Line: A Fourier Lens on Distribution Shift Robustness

no code implementations • 8 Jul 2022 • Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya Kailkhura, Peer-Timo Bremer

However, there is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.
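
"Effective robustness" here follows the standard usage in this literature: accuracy under distribution shift beyond what the in-distribution accuracy trend across baseline models predicts. A hedged sketch of that bookkeeping, with illustrative numbers rather than the paper's data:

```python
import numpy as np

def effective_robustness(id_acc, ood_acc, baseline_id, baseline_ood):
    # Fit the ID -> OOD accuracy trend over baseline models (logit-transformed
    # fits are also common; a linear fit keeps the sketch minimal).
    slope, intercept = np.polyfit(baseline_id, baseline_ood, deg=1)
    predicted_ood = slope * id_acc + intercept
    return ood_acc - predicted_ood  # > 0 means effectively robust

baseline_id = np.array([0.70, 0.80, 0.90])   # illustrative baselines
baseline_ood = np.array([0.40, 0.55, 0.70])
print(effective_robustness(0.85, 0.68, baseline_id, baseline_ood))  # ≈ +0.055
```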

Data Augmentation

Zeroth-Order SciML: Non-intrusive Integration of Scientific Software with Deep Learning

no code implementations • 4 Jun 2022 • Ioannis Tsaknakis, Bhavya Kailkhura, Sijia Liu, Donald Loveland, James Diffenderfer, Anna Maria Hiszpanski, Mingyi Hong

Existing knowledge integration approaches are limited to using differentiable knowledge sources in order to remain compatible with the first-order DL training paradigm.
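
A minimal sketch of what non-intrusive integration can look like, assuming the scientific software only returns scalar scores; the randomized two-point estimator below is a standard zeroth-order device, and `scientific_penalty` is a hypothetical stand-in, not the paper's implementation:

```python
import torch

def scientific_penalty(pred: torch.Tensor) -> float:
    # Stand-in for a call into external scientific software: returns only
    # a scalar score, exposes no gradients.
    return float((pred - 1.0).abs().sum())

def zo_random_grad(f, x: torch.Tensor, mu: float = 1e-2, n_dirs: int = 100):
    """Average of random-direction two-point estimates of grad f at x."""
    grad = torch.zeros_like(x)
    for _ in range(n_dirs):
        u = torch.randn_like(x)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / n_dirs

pred = torch.tensor([0.5, 1.5, 2.0])
# Rough (noisy) estimate of the true gradient sign(pred - 1) = [-1, 1, 1]:
print(zo_random_grad(scientific_penalty, pred))
```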

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network

1 code implementation • 17 Mar 2021 • James Diffenderfer, Bhavya Kailkhura

In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: a sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).

Classification with Binary Neural Network • Classification with Binary Weight Network • +1

Multi-Prize Lottery Ticket Hypothesis: Finding Generalizable and Efficient Binary Subnetworks in a Randomly Weighted Neural Network

no code implementations • ICLR 2021 • James Diffenderfer, Bhavya Kailkhura

A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).

Quantization
