Search Results for author: David Gifford

Found 8 papers, 6 papers with code

Constrained Submodular Optimization for Vaccine Design

1 code implementation • 16 Jun 2022 • Zheng Dai, David Gifford

Advances in machine learning have enabled the prediction of immune system responses to prophylactic and therapeutic vaccines.

BIG-bench Machine Learning
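
The constrained submodular optimization in the title lends itself to greedy selection, which carries a (1 - 1/e) approximation guarantee for monotone submodular objectives under a cardinality constraint. A minimal sketch under that assumption; the coverage objective and candidates below are illustrative, not the paper's vaccine-design formulation:

```python
def greedy_submodular(candidates, f, k):
    """Greedy maximization of a monotone submodular set function f
    under a cardinality constraint |S| <= k.

    For monotone submodular f this achieves a (1 - 1/e)
    approximation to the optimum (Nemhauser et al., 1978).
    """
    selected = set()
    for _ in range(k):
        # Pick the element with the largest marginal gain.
        best = max(
            (c for c in candidates if c not in selected),
            key=lambda c: f(selected | {c}) - f(selected),
            default=None,
        )
        if best is None:
            break
        selected.add(best)
    return selected

# Toy example: weighted coverage, a monotone submodular function.
universe_weights = {"a": 3.0, "b": 2.0, "c": 1.0, "d": 5.0}
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}

def coverage(S):
    covered = set().union(*(covers[s] for s in S)) if S else set()
    return sum(universe_weights[e] for e in covered)

print(greedy_submodular(covers.keys(), coverage, k=2))  # {1, 3}
```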

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy

1 code implementation • 4 Mar 2021 • Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus

Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized networks.

Network Pruning
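
For context on the pruning setups such papers study, here is a minimal sketch of global magnitude pruning in PyTorch, one common baseline; it is not the specific pipeline evaluated in the paper:

```python
import torch

def global_magnitude_prune(model: torch.nn.Module, sparsity: float):
    """Zero out the smallest-magnitude weights across the whole model.

    sparsity=0.9 removes 90% of weights. This is a common pruning
    baseline, not the exact procedure studied in the paper.
    """
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    # Magnitude threshold below which weights are pruned.
    threshold = torch.quantile(all_mags, sparsity)
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())

model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
global_magnitude_prune(model, sparsity=0.9)
```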

Maximum n-times Coverage for Vaccine Design

1 code implementation • ICLR 2022 • Ge Liu, Alexander Dimitrakakis, Brandon Carter, David Gifford

We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times.
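
A minimal sketch of the objective above together with a greedy heuristic for picking $k$ overlays; names are illustrative, and the paper's algorithm and analysis may differ:

```python
def n_times_coverage(overlays, weights, n):
    """Summed weight of elements covered at least n times by the chosen overlays."""
    counts = {}
    for ov in overlays:
        for e in ov:
            counts[e] = counts.get(e, 0) + 1
    return sum(weights[e] for e, c in counts.items() if c >= n)

def greedy_n_coverage(candidates, weights, k, n):
    """Greedily pick k overlays to maximize n-times coverage.

    Unlike ordinary (n=1) coverage, the n-times objective is not
    submodular in general, so greedy is only a heuristic here.
    """
    chosen = []
    remaining = list(candidates)
    for _ in range(k):
        gains = [(n_times_coverage(chosen + [c], weights, n), c) for c in remaining]
        best_val, best = max(gains, key=lambda t: t[0])
        chosen.append(best)
        remaining.remove(best)
    return chosen

weights = {"x": 4.0, "y": 2.0, "z": 1.0}
candidates = [frozenset({"x", "y"}), frozenset({"x"}), frozenset({"y", "z"})]
print(greedy_n_coverage(candidates, weights, k=2, n=2))  # covers "x" twice
```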

Overinterpretation reveals image classification model pathologies

2 code implementations • NeurIPS 2021 • Brandon Carter, Siddhartha Jain, Jonas Mueller, David Gifford

Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find that models trained on CIFAR-10 make confident predictions even when 95% of an input image is masked and humans cannot discern salient features in the remaining pixel-subsets.

Classification • General Classification • +2
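
The masking experiment above is easy to approximate; a hedged PyTorch sketch that keeps a small random fraction of pixels and reads off the model's confidence (the paper selects pixel-subsets with sufficient input subsets rather than at random):

```python
import torch

def masked_confidence(model, image, keep_frac=0.05):
    """Mask all but a fraction of pixels and return the model's top confidence.

    Random masking is used for illustration only; the paper chooses
    pixel-subsets via sufficient input subsets, not at random.
    image: float tensor of shape (C, H, W).
    """
    mask = (torch.rand_like(image[:1]) < keep_frac).float()  # shared across channels
    masked = image * mask
    with torch.no_grad():
        probs = torch.softmax(model(masked.unsqueeze(0)), dim=1)
    conf, pred = probs.max(dim=1)
    return conf.item(), pred.item()
```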

Information Condensing Active Learning

1 code implementation • 18 Feb 2020 • Siddhartha Jain, Ge Liu, David Gifford

We introduce Information Condensing Active Learning (ICAL), a batch-mode, model-agnostic Active Learning (AL) method targeted at Deep Bayesian Active Learning that focuses on acquiring labels for points which have as much information as possible about the still unacquired points.

Active Learning
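
The acquisition idea above, selecting a batch whose predictions are maximally informative about the remaining pool, can be sketched greedily. The sketch below uses plain correlation between posterior predictive samples as a stand-in for ICAL's kernel-based dependency measure:

```python
import numpy as np

def ical_like_batch(pred_samples, pool_idx, batch_size):
    """Greedy batch selection, loosely in the spirit of ICAL.

    pred_samples: shape (n_points, n_posterior_samples), model outputs
    drawn from an approximate posterior (e.g. MC dropout). Points whose
    posterior samples correlate strongly with the rest of the pool are
    informative about it; ICAL itself uses a statistical dependency
    measure, and plain correlation is only a stand-in here.
    """
    chosen = []
    remaining = list(pool_idx)
    for _ in range(batch_size):
        def pool_dependency(i):
            others = [j for j in remaining if j != i]
            corr = np.corrcoef(pred_samples[[i] + others])
            return np.abs(corr[0, 1:]).mean()  # mean |corr| with the pool
        best = max(remaining, key=pool_dependency)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
samples = rng.normal(size=(20, 50))       # 20 pool points, 50 posterior draws
print(ical_like_batch(samples, range(20), batch_size=3))
```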

Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles

no code implementations • 18 Jun 2019 • Siddhartha Jain, Ge Liu, Jonas Mueller, David Gifford

The inaccuracy of neural network models on inputs that do not stem from the training data distribution is both problematic and at times unrecognized.

Bayesian Optimization
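
For context on the deep-ensemble uncertainty estimates in the title, a minimal sketch of ensemble disagreement as an uncertainty signal; the paper's actual contribution, a diversity-maximizing objective, is not reproduced here:

```python
import numpy as np

def ensemble_uncertainty(predictions):
    """Mean prediction and disagreement for an ensemble of regressors.

    predictions: shape (n_models, n_inputs). High variance across
    models is a common proxy for epistemic uncertainty and tends to
    grow on inputs far from the training distribution.
    """
    return predictions.mean(axis=0), predictions.std(axis=0)

preds = np.array([[1.0, 4.8], [1.1, 2.3], [0.9, 7.5]])  # 3 models, 2 inputs
mean, unc = ensemble_uncertainty(preds)
print(unc)  # the second input shows much higher disagreement
```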

What made you do this? Understanding black-box decisions with sufficient input subsets

1 code implementation • 9 Oct 2018 • Brandon Carter, Jonas Mueller, Siddhartha Jain, David Gifford

Local explanation frameworks aim to rationalize particular decisions made by a black-box prediction model.

Decision Making
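
The sufficient input subsets (SIS) idea behind this paper finds small feature subsets that alone keep a model's decision above a confidence threshold. A simplified backward-selection sketch (the full SIS procedure iterates to enumerate disjoint subsets):

```python
import numpy as np

def sufficient_subset(f, x, threshold, mask_value=0.0):
    """Greedy backward selection toward a sufficient input subset.

    Repeatedly masks the feature whose removal hurts f the least,
    as long as f on the masked input stays >= threshold. Features
    still unmasked at the end form a sufficient subset. Simplified
    from the SIS procedure in the paper.
    """
    x = np.asarray(x, dtype=float).copy()
    active = set(range(x.size))
    while len(active) > 1:
        def score_without(i):
            trial = x.copy()
            trial[i] = mask_value
            return f(trial)
        # Feature whose masking preserves the most confidence.
        i = max(active, key=score_without)
        if score_without(i) < threshold:
            break  # masking anything more would drop below threshold
        x[i] = mask_value
        active.remove(i)
    return sorted(active)

# Toy model: confidence driven mostly by features 0 and 3.
f = lambda z: 0.6 * z[0] + 0.05 * z[1] + 0.05 * z[2] + 0.4 * z[3]
print(sufficient_subset(f, [1.0, 1.0, 1.0, 1.0], threshold=0.9))  # [0, 3]
```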

Sequence to Better Sequence: Continuous Revision of Combinatorial Structures

no code implementations • ICML 2017 • Jonas Mueller, David Gifford, Tommi Jaakkola

Under this model, gradient methods can be used to efficiently optimize the continuous latent factors with respect to inferred outcomes.
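
A rough sketch of the revision loop described above: encode a structure to a continuous latent vector, take gradient steps against a learned outcome predictor, then decode. The module names are placeholders, not the paper's architecture:

```python
import torch

def revise_in_latent_space(encoder, decoder, outcome_model, x, steps=50, lr=0.1):
    """Gradient-based revision of a discrete structure via its latent code.

    encoder/decoder/outcome_model are placeholder networks standing in
    for the paper's learned components: z = encoder(x) embeds the
    structure, gradient ascent improves the inferred outcome of z,
    and decoder(z) emits the revised structure.
    """
    z = encoder(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -outcome_model(z).sum()  # ascend the predicted outcome
        loss.backward()
        opt.step()
    return decoder(z)
```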
