1 code implementation • 16 Jun 2022 • Zheng Dai, David Gifford
Advances in machine learning have enabled the prediction of immune system responses to prophylactic and therapeutic vaccines.
1 code implementation • 4 Mar 2021 • Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus
Neural network pruning is a popular technique for reducing the inference costs of modern, potentially overparameterized networks.
1 code implementation • ICLR 2022 • Ge Liu, Alexander Dimitrakakis, Brandon Carter, David Gifford
We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times.
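A minimal greedy sketch of the combinatorial setup described above — select $k$ overlays (subsets) so that weighted elements only count toward the objective once they are covered at least $n$ times. This is an illustrative baseline, not the optimization method from the paper; the function names and the toy data are hypothetical.

```python
from collections import Counter

def n_times_coverage_value(chosen, weights, n):
    # an element contributes its weight only if covered at least n times
    counts = Counter()
    for overlay in chosen:
        counts.update(overlay)
    return sum(w for e, w in weights.items() if counts[e] >= n)

def greedy_n_times_coverage(overlays, weights, k, n):
    # repeatedly add the overlay that most increases the n-times objective
    chosen, remaining = [], list(overlays)
    for _ in range(k):
        best = max(remaining,
                   key=lambda s: n_times_coverage_value(chosen + [s], weights, n))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

For example, with overlays `[{'a','b'}, {'a'}, {'b','c'}]`, weights `{'a': 1, 'b': 2, 'c': 3}`, $k=2$, and $n=2$, the greedy picks cover element `b` twice for a total value of 2.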
2 code implementations • NeurIPS 2021 • Brandon Carter, Siddhartha Jain, Jonas Mueller, David Gifford
Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find that CIFAR-10 models make confident predictions even when 95% of an input image is masked and humans cannot discern salient features in the remaining pixel subsets.
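The masking procedure described above can be sketched as follows: keep only the top 5% of pixels by some attribution score, blank out the rest, and query the model on the resulting pixel subset. The function names and the stand-in `model_predict` callable are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def mask_to_subset(image, keep_mask, fill_value=0.0):
    # blank out every pixel outside the kept subset
    masked = np.full_like(image, fill_value)
    masked[keep_mask] = image[keep_mask]
    return masked

def confidence_on_subset(model_predict, image, scores, keep_frac=0.05):
    # keep the top keep_frac of pixels by attribution score, mask the rest,
    # and return the model's output on the masked image
    flat = scores.ravel()
    k = max(1, int(keep_frac * flat.size))
    thresh = np.partition(flat, -k)[-k]
    keep = scores >= thresh
    return model_predict(mask_to_subset(image, keep))
```

Overinterpretation shows up when the model's confidence on such a 5% pixel subset matches its confidence on the full image.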
1 code implementation • 18 Feb 2020 • Siddhartha Jain, Ge Liu, David Gifford
We introduce Information Condensing Active Learning (ICAL), a batch-mode, model-agnostic Active Learning (AL) method for Deep Bayesian Active Learning that acquires labels for the points carrying the most information about the still-unacquired points.
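A simplified sketch of the batch-selection idea: greedily pick points whose Monte Carlo predictions are most statistically related to the rest of the unlabeled pool. ICAL uses a proper dependency measure between predictive distributions; here a plain absolute-correlation proxy stands in for it, so this is a hypothetical illustration rather than the paper's acquisition function.

```python
import numpy as np

def greedy_ical_batch(pred_samples, batch_size):
    # pred_samples: (num_points, num_mc_samples) Monte Carlo predictions
    # from a Bayesian model. Greedily choose points whose predictions are
    # most correlated, on average, with the still-unchosen pool.
    n = pred_samples.shape[0]
    corr = np.abs(np.corrcoef(pred_samples))  # pairwise |correlation|
    chosen, candidates = [], set(range(n))
    for _ in range(batch_size):
        best = max(candidates,
                   key=lambda i: corr[i, [j for j in candidates if j != i]].mean())
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

With this proxy, a point whose predictions co-vary with many pool points is selected before an outlier whose label would tell us little about the rest.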
no code implementations • 18 Jun 2019 • Siddhartha Jain, Ge Liu, Jonas Mueller, David Gifford
The inaccuracy of neural network models on inputs that do not stem from the training data distribution is both problematic and often unrecognized.
1 code implementation • 9 Oct 2018 • Brandon Carter, Jonas Mueller, Siddhartha Jain, David Gifford
Local explanation frameworks aim to rationalize particular decisions made by a black-box prediction model.
no code implementations • ICML 2017 • Jonas Mueller, David Gifford, Tommi Jaakkola
Under this model, gradient methods can be used to efficiently optimize the continuous latent factors with respect to inferred outcomes.
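The gradient-based optimization of continuous latent factors mentioned above can be sketched as simple gradient ascent: starting from an initial latent code, repeatedly step in the direction that increases the inferred outcome. The outcome gradient below is a hypothetical stand-in for whatever differentiable predictor the model provides.

```python
import numpy as np

def optimize_latent(z0, grad_outcome, lr=0.1, steps=100):
    # gradient ascent on latent factors z to maximize an inferred outcome;
    # grad_outcome(z) returns the gradient of the outcome with respect to z
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z += lr * grad_outcome(z)
    return z
```

For instance, with outcome $f(z) = -(z - 3)^2$ and gradient $-2(z - 3)$, the iterates converge to the maximizer $z = 3$.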