no code implementations • 14 Jan 2022 • Robin Tibor Schirrmeister, Rosanne Liu, Sara Hooker, Tonio Ball
To answer these questions, we need a clear measure of input simplicity (or, inversely, complexity), an optimization objective that correlates with simplification, and a framework for incorporating such an objective into training and inference.
1 code implementation • Findings (EMNLP) 2021 • Orevaoghene Ahia, Julia Kreutzer, Sara Hooker
However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets.
1 code implementation • 27 Jul 2021 • Daniel D'souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker
As machine learning models are increasingly employed to assist human decision-makers, it becomes critical to communicate the uncertainty associated with these model predictions.
no code implementations • 16 Jul 2021 • Niel Teng Hu, Xinyu Hu, Rosanne Liu, Sara Hooker, Jason Yosinski
Each example is propagated forward and backward through the network the same number of times, independent of how much the example contributes to the learning protocol.
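One natural alternative is loss-based prioritization: run the backward pass only for examples the model still finds hard. The sketch below is illustrative only, not the paper's implementation; `model`, `loader`, `optimizer`, and `loss_threshold` are placeholder names.

```python
# A minimal sketch of loss-based example prioritization: only examples whose
# per-example loss exceeds a threshold receive a backward pass.
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, loss_threshold=0.5):
    model.train()
    for inputs, targets in loader:
        logits = model(inputs)
        per_example_loss = F.cross_entropy(logits, targets, reduction="none")
        # Keep only the "hard" examples for the backward pass.
        hard = per_example_loss > loss_threshold
        if hard.any():
            optimizer.zero_grad()
            per_example_loss[hard].mean().backward()
            optimizer.step()
```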
1 code implementation • 22 Jun 2021 • Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker
However, we also find that the cost of ensuring determinism varies dramatically between neural network architectures and hardware types, e.g., with overhead of up to 746%, 241%, and 196% on a spectrum of widely used GPU accelerator architectures, relative to non-deterministic training.
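For context, these overheads come from forcing frameworks onto deterministic kernels and fixed seeds. A minimal sketch of the settings involved, assuming PyTorch (one common framework; the paper's measurements span several setups):

```python
# A minimal sketch of enabling deterministic training in PyTorch.
import os
import random
import numpy as np
import torch

def set_deterministic(seed=0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Required by some deterministic cuBLAS kernels.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # Raise an error if a non-deterministic op is used.
    torch.use_deterministic_algorithms(True)
```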
no code implementations • 2 Feb 2021 • Kale-ab Tessera, Sara Hooker, Benjamin Rosman
Based upon these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime.
no code implementations • 6 Oct 2020 • Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily Denton
However, overall accuracy hides disproportionately high errors on a small subset of examples; we call these examples Compression Identified Exemplars (CIE).
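A rough, simplified sketch of the idea: flag inputs where a compressed model's prediction diverges from the uncompressed baseline (the paper aggregates over a population of compressed models; this single-model version and the names `dense_model`, `pruned_model`, `loader` are illustrative assumptions).

```python
# Flag examples where the compressed model disagrees with the dense baseline.
import torch

@torch.no_grad()
def find_cie(dense_model, pruned_model, loader):
    flagged = []
    for batch_idx, (inputs, _) in enumerate(loader):
        dense_pred = dense_model(inputs).argmax(dim=-1)
        pruned_pred = pruned_model(inputs).argmax(dim=-1)
        disagree = dense_pred != pruned_pred
        flagged.extend((batch_idx, i) for i in disagree.nonzero().flatten().tolist())
    return flagged
```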
1 code implementation • 14 Sep 2020 • Sara Hooker
Hardware, systems and algorithms research communities have historically had different incentive structures and fluctuating motivation to engage with each other explicitly.
1 code implementation • 26 Aug 2020 • Chirag Agarwal, Daniel D'souza, Sara Hooker
In this work, we propose Variance of Gradients (VoG) as a valuable and efficient metric to rank data by difficulty and to surface a tractable subset of the most challenging examples for human-in-the-loop auditing.
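In broad strokes, VoG scores an example by how much its input gradients vary over the course of training. A simplified sketch, assuming a list of saved model checkpoints and a single image-label pair (all names are placeholders, not the paper's code):

```python
# Simplified Variance of Gradients (VoG): take the gradient of the true-class
# logit w.r.t. the input at several training checkpoints, then average the
# per-pixel variance across checkpoints.
import torch

def vog_score(checkpoints, x, y):
    grads = []
    for model in checkpoints:
        model.eval()
        x_req = x.clone().detach().requires_grad_(True)
        logit = model(x_req.unsqueeze(0))[0, y]
        logit.backward()
        grads.append(x_req.grad.detach())
    grads = torch.stack(grads)               # (num_checkpoints, *input_shape)
    return grads.var(dim=0, unbiased=False).mean().item()
```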
no code implementations • 15 Apr 2020 • Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensbold, Cullen O'Keefe, Mark Koren, Théo Ryffel, JB Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, Markus Anderljung
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.
2 code implementations • 13 Nov 2019 • Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome
However, this measure of performance conceals significant differences in how different classes and images are impacted by model compression techniques.
no code implementations • 25 Sep 2019 • Sara Hooker, Yann Dauphin, Aaron Courville, Andrea Frome
Neural network pruning techniques have demonstrated it is possible to remove the majority of weights in a network with surprisingly little degradation to top-1 test set accuracy.
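As a concrete illustration of the scale involved, magnitude pruning can remove the vast majority of weights with a few lines of framework code. A minimal sketch using PyTorch's built-in pruning utilities (the 90% sparsity level is an arbitrary example, not a figure from the paper):

```python
# Magnitude pruning: zero out the 90% smallest-magnitude weights
# in every Linear/Conv2d layer.
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune(model, amount=0.9):
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model
```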
5 code implementations • 25 Feb 2019 • Trevor Gale, Erich Elsen, Sara Hooker
We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet.
3 code implementations • NeurIPS 2019 • Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.
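The evaluation is remove-and-retrain in spirit: ablate the inputs an estimator ranks as most important, retrain, and track the accuracy drop. A rough sketch of the ablation step only, assuming the importance map has the same shape as the image and using a placeholder fill value (the paper's exact protocol may differ):

```python
# Remove the top-ranked pixels according to a feature importance estimate.
import torch

def ablate_top_fraction(image, importance, fraction=0.5, fill_value=0.0):
    flat = importance.flatten()
    k = int(fraction * flat.numel())
    # Indices of the k most-important pixels according to the estimator.
    top_idx = flat.topk(k).indices
    ablated = image.clone().flatten()
    ablated[top_idx] = fill_value
    return ablated.view_as(image)
```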
1 code implementation • ICLR 2018 • Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Saliency methods aim to explain the predictions of deep neural networks.