Search Results for author: Sara Hooker

Found 15 papers, 9 papers with code

When less is more: Simplifying inputs aids neural network understanding

no code implementations • 14 Jan 2022 • Robin Tibor Schirrmeister, Rosanne Liu, Sara Hooker, Tonio Ball

To answer these questions, we need a clear measure of input simplicity (or inversely, complexity), an optimization objective that correlates with simplification, and a framework to incorporate such objective into training and inference.
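The snippet above points at a training objective that trades task performance against input complexity. As a rough sketch of that idea only (not the paper's actual method), one might combine the task loss with a hypothetical complexity proxy such as total variation of the simplified input; the simplicity measure used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def simplicity_penalty(x):
    # Hypothetical proxy for input complexity: total variation of the image.
    # Stand-in only; the paper's own simplicity measure may be different.
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

def joint_loss(model, x_simplified, y, lam=0.1):
    # Trade off task performance against simplicity of the (learned) input.
    task_loss = F.cross_entropy(model(x_simplified), y)
    return task_loss + lam * simplicity_penalty(x_simplified)
```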

A Tale Of Two Long Tails

1 code implementation • 27 Jul 2021 • Daniel D'souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker

As machine learning models are increasingly employed to assist human decision-makers, it becomes critical to communicate the uncertainty associated with these model predictions.

Data Augmentation

When does loss-based prioritization fail?

no code implementations • 16 Jul 2021 • Niel Teng Hu, Xinyu Hu, Rosanne Liu, Sara Hooker, Jason Yosinski

Each example is propagated forward and backward through the network the same amount of times, independent of how much the example contributes to the learning protocol.
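The snippet contrasts this uniform treatment with loss-based prioritization, which spends more updates on examples the model finds hard. A minimal sketch of the general idea, assuming a simple top-k-by-loss selection rule rather than any specific method analyzed in the paper:

```python
import torch
import torch.nn.functional as F

def prioritized_step(model, optimizer, x, y, keep_fraction=0.5):
    # Loss-based prioritization (illustrative): score every example in the
    # batch by its loss, then backpropagate only the highest-loss fraction.
    per_example_loss = F.cross_entropy(model(x), y, reduction="none")
    k = max(1, int(keep_fraction * x.size(0)))
    top_loss, _ = per_example_loss.topk(k)
    optimizer.zero_grad()
    top_loss.mean().backward()
    optimizer.step()
```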

Randomness In Neural Network Training: Characterizing The Impact of Tooling

1 code implementation • 22 Jun 2021 • Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker

However, we also find that the cost of ensuring determinism varies dramatically between neural network architectures and hardware types, e.g., with overhead up to 746%, 241%, and 196% on a spectrum of widely used GPU accelerator architectures, relative to non-deterministic training.
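For context, a minimal sketch of the kind of determinism controls whose overhead such a study measures, using standard PyTorch and CUDA settings; the paper's own tooling and frameworks may differ.

```python
import os
import random
import numpy as np
import torch

def enable_determinism(seed=0):
    # Fix common sources of randomness and force deterministic kernels.
    # The cuBLAS workspace variable is required by some deterministic paths.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
```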

Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization

no code implementations • 2 Feb 2021 • Kale-ab Tessera, Sara Hooker, Benjamin Rosman

Based upon these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime.
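As an illustration of what "gradient flow" might be monitored as, here is a hedged sketch that reports per-layer gradient norms over the surviving weights of a sparse network; the mask convention (`masks` mapping parameter names to binary tensors) is an assumption for illustration, not the paper's implementation.

```python
import torch

def gradient_flow(model, masks):
    # Per-layer gradient norm over unpruned weights, computed after a
    # backward pass. `masks` maps parameter name -> binary tensor.
    norms = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        grad = p.grad
        if name in masks:
            grad = grad * masks[name]
        norms[name] = grad.norm().item()
    return norms
```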

Characterising Bias in Compressed Models

no code implementations • 6 Oct 2020 • Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily Denton

However, overall accuracy hides disproportionately high errors on a small subset of examples; we call this subset Compression Identified Exemplars (CIE).

Fairness • Quantization
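A hedged sketch of how compression-sensitive examples might be surfaced: flag inputs where a compressed model disagrees with its uncompressed baseline. The paper's CIE criterion aggregates predictions over populations of models, so this single-model version is illustrative only.

```python
import torch

@torch.no_grad()
def find_cie_candidates(baseline, compressed, loader, device="cpu"):
    # Flag (batch index, within-batch index) pairs where the compressed
    # model's prediction diverges from the uncompressed baseline.
    flagged = []
    for batch_idx, (x, _) in enumerate(loader):
        x = x.to(device)
        pred_base = baseline(x).argmax(dim=1)
        pred_comp = compressed(x).argmax(dim=1)
        disagree = (pred_base != pred_comp).nonzero(as_tuple=True)[0]
        flagged.extend((batch_idx, int(i)) for i in disagree)
    return flagged
```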

The Hardware Lottery

1 code implementation • 14 Sep 2020 • Sara Hooker

Hardware, systems and algorithms research communities have historically had different incentive structures and fluctuating motivation to engage with each other explicitly.

Estimating Example Difficulty Using Variance of Gradients

1 code implementation • 26 Aug 2020 • Chirag Agarwal, Daniel D'souza, Sara Hooker

In this work, we propose Variance of Gradients (VoG) as a valuable and efficient metric to rank data by difficulty and to surface a tractable subset of the most challenging examples for human-in-the-loop auditing.

Out-of-Distribution Detection
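A simplified sketch of the Variance of Gradients idea: gradients of each example's true-class logit with respect to the input are collected across training checkpoints, and the variance over checkpoints serves as a per-example difficulty score. Normalization and checkpoint-selection details from the paper are omitted here.

```python
import torch

def vog_scores(checkpoints, x, y):
    # `checkpoints` is a list of model snapshots from different training
    # stages. Score each example by the variance, across checkpoints, of the
    # gradient of its true-class logit w.r.t. the input.
    grads = []
    for model in checkpoints:
        inp = x.clone().requires_grad_(True)
        logits = model(inp)
        true_class_logits = logits.gather(1, y.unsqueeze(1)).sum()
        g, = torch.autograd.grad(true_class_logits, inp)
        grads.append(g)
    grads = torch.stack(grads)                    # (checkpoints, batch, ...)
    var = grads.var(dim=0)                        # variance over checkpoints
    return var.flatten(start_dim=1).mean(dim=1)   # mean over input dims
```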

What Do Compressed Deep Neural Networks Forget?

2 code implementations • 13 Nov 2019 • Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome

However, this measure of performance conceals significant differences in how different classes and images are impacted by model compression techniques.

Fairness • Interpretability Techniques for Deep Learning • +4
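A small, hedged sketch of the kind of class-level breakdown that aggregate top-1 accuracy hides: computing this for a compressed model and its uncompressed baseline, then comparing the two per-class vectors, surfaces which classes bear the cost of compression.

```python
import torch

@torch.no_grad()
def per_class_accuracy(model, loader, num_classes):
    # Accuracy per class; differences between compressed and baseline models
    # show up here even when overall top-1 accuracy barely moves.
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum()
            correct[c] += (pred[mask] == c).sum()
    return correct / total.clamp(min=1)
```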

Selective Brain Damage: Measuring the Disparate Impact of Model Pruning

no code implementations • 25 Sep 2019 • Sara Hooker, Yann Dauphin, Aaron Courville, Andrea Frome

Neural network pruning techniques have demonstrated it is possible to remove the majority of weights in a network with surprisingly little degradation to top-1 test set accuracy.

Network Pruning
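As a point of reference, a minimal sketch of one standard approach, global magnitude pruning, which zeroes the smallest-magnitude weights across the network. The papers above evaluate several pruning and sparsification variants, so this is illustrative only.

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    # Global magnitude pruning: zero out the `sparsity` fraction of weight
    # entries with the smallest absolute value; return the binary masks.
    weights = [p for n, p in model.named_parameters() if "weight" in n]
    all_values = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * all_values.numel()))
    threshold = all_values.kthvalue(k).values
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "weight" not in name:
                continue
            mask = (p.abs() > threshold).float()
            p.mul_(mask)
            masks[name] = mask
    return masks
```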

The State of Sparsity in Deep Neural Networks

5 code implementations • 25 Feb 2019 • Trevor Gale, Erich Elsen, Sara Hooker

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet.

Model Compression • Sparse Learning
