Search Results for author: Davis Blalock

Found 8 papers, 4 papers with code

Multiple Instance Learning for ECG Risk Stratification

no code implementations • 2 Dec 2018 • Divya Shanmugam, Davis Blalock, John Guttag

We focus on estimating a patient's risk of cardiovascular death after an acute coronary syndrome based on a patient's raw electrocardiogram (ECG) signal.

ECG Risk Stratification • Multiple Instance Learning
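As context for the multiple-instance framing: the ECG is treated as a bag of beats (instances), per-beat risk scores come from a learned instance model, and the patient-level estimate aggregates those scores. The linear scorer and top-k pooling below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def instance_scores(beats, w):
    """Illustrative per-beat risk scores from a linear model (a placeholder
    for a learned instance model). `beats` has shape (n_beats, n_features)."""
    return 1.0 / (1.0 + np.exp(-beats @ w))  # sigmoid per instance

def bag_risk(beats, w, top_k=5):
    """Aggregate instance scores into one patient-level risk estimate.
    Top-k averaging is one common MIL pooling choice (an assumption here,
    not necessarily the paper's aggregation)."""
    scores = instance_scores(beats, w)
    k = min(top_k, len(scores))
    return np.sort(scores)[-k:].mean()

# Usage: 200 beats with 16 features each, random weights for illustration.
rng = np.random.default_rng(0)
beats = rng.normal(size=(200, 16))
w = rng.normal(size=16)
print(bag_risk(beats, w))
```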

What is the State of Neural Network Pruning?

1 code implementation • 6 Mar 2020 • Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag

Neural network pruning, the task of reducing the size of a network by removing parameters, has been the subject of a great deal of work in recent years.

Network Pruning
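As an illustration of what "removing parameters" means in practice, the sketch below implements global magnitude pruning, a standard baseline in the literature this survey covers. It is a generic implementation, not code from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Global magnitude pruning: zero out the smallest-magnitude fraction
    of parameters across all layers, keeping the rest unchanged."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    k = int(sparsity * flat.size)
    # Threshold = (k+1)-th smallest absolute value; everything below it is pruned.
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

# Usage: prune 90% of two random weight matrices, then check sparsity.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 64)), rng.normal(size=(64, 10))]
pruned = magnitude_prune(layers, sparsity=0.9)
print([float((w == 0).mean()) for w in pruned])
```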

Better Aggregation in Test-Time Augmentation

no code implementations • ICCV 2021 • Divya Shanmugam, Davis Blalock, Guha Balakrishnan, John Guttag

In this paper, we present 1) experimental analyses that shed light on cases in which the simple average is suboptimal and 2) a method to address these shortcomings.

Image Classification
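For reference, test-time augmentation (TTA) averages a model's predictions over several augmented copies of an input. The sketch below contrasts the simple average with a weighted aggregation; learning such weights on held-out data is in the spirit of the paper's fix, though this exact linear form is an assumption.

```python
import numpy as np

def tta_average(preds):
    """Standard TTA: uniform average of per-augmentation class probabilities.
    `preds` has shape (n_augmentations, n_classes)."""
    return preds.mean(axis=0)

def tta_weighted(preds, weights):
    """Weighted aggregation: down-weight augmentations that tend to hurt.
    The weights would be learned on held-out data in practice."""
    weights = np.asarray(weights) / np.sum(weights)
    return weights @ preds

# Usage: 4 augmentations of one image, 3 classes.
preds = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],   # e.g., a flip that confuses the model
                  [0.8, 0.1, 0.1]])
print(tta_average(preds))
print(tta_weighted(preds, [1.0, 1.0, 0.2, 1.0]))
```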

Causally motivated Shortcut Removal Using Auxiliary Labels

1 code implementation • 13 May 2021 • Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, Alexander D'Amour

Shortcut learning, in which models make use of easy-to-represent but unstable associations, is a major failure mode for robust machine learning.

Causal Inference • Disentanglement • +1
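To make the failure mode concrete: if an auxiliary label marks the shortcut feature, one family of fixes penalizes the dependence of the learned representation on that label. The mean-matching penalty below is a deliberately crude, first-moment stand-in for the distribution-matching regularizers used in this line of work, not the paper's exact objective.

```python
import numpy as np

def shortcut_penalty(features, aux_labels):
    """Penalize dependence of learned features on a binary auxiliary
    (shortcut) label by pushing the two groups' mean representations
    together -- a simplified stand-in for distribution-matching penalties."""
    g0 = features[aux_labels == 0].mean(axis=0)
    g1 = features[aux_labels == 1].mean(axis=0)
    return float(np.sum((g0 - g1) ** 2))

# Usage: add to the task loss, e.g. total = task_loss + lam * penalty.
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 8))
aux = rng.integers(0, 2, size=32)
print(shortcut_penalty(feats, aux))
```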

Multiplying Matrices Without Multiplying

3 code implementations • 21 Jun 2021 • Davis Blalock, John Guttag

Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning.
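The paper's approach replaces multiplications with table lookups. The sketch below shows the general idea using vanilla product quantization: encode each row of A as a handful of centroid IDs, precompute centroid-times-B tables, and answer the matmul with lookups and additions. This is an illustrative PQ baseline, not the paper's learned hashing scheme (MADDNESS), and the hyperparameters are arbitrary.

```python
import numpy as np

def pq_matmul(A, B, n_codebooks=4, n_centroids=16, n_iter=10):
    """Approximate A @ B with product quantization: split A's columns into
    subspaces, snap each sub-row to its nearest learned centroid, then
    replace per-row dot products with precomputed table lookups and sums.
    Assumes A.shape[1] is divisible by n_codebooks."""
    n, d = A.shape
    sub = d // n_codebooks
    rng = np.random.default_rng(0)
    out = np.zeros((n, B.shape[1]))
    for c in range(n_codebooks):
        Ac = A[:, c * sub:(c + 1) * sub]
        cent = Ac[rng.choice(n, n_centroids, replace=False)]  # init centroids
        for _ in range(n_iter):  # a few k-means steps, illustration only
            assign = ((Ac[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
            for k in range(n_centroids):
                if (assign == k).any():
                    cent[k] = Ac[assign == k].mean(axis=0)
        assign = ((Ac[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
        # The only matmul involving B: a small (n_centroids x cols) table.
        table = cent @ B[c * sub:(c + 1) * sub]
        out += table[assign]  # lookups + additions instead of multiplies
    return out

rng = np.random.default_rng(1)
A, B = rng.normal(size=(256, 32)), rng.normal(size=(32, 8))
err = np.linalg.norm(pq_matmul(A, B) - A @ B) / np.linalg.norm(A @ B)
print(f"relative error: {err:.3f}")
```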

Fast Benchmarking of Accuracy vs. Training Time with Cyclic Learning Rates

1 code implementation • 2 Jun 2022 • Jacob Portes, Davis Blalock, Cory Stephenson, Jonathan Frankle

Benchmarking the tradeoff between neural network accuracy and training time is computationally expensive.

Benchmarking
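The key mechanism is a cyclic learning rate schedule: because the learning rate is annealed to a low value at the end of every cycle, each cycle boundary yields a usable model, so a single long run can trace many points on the accuracy-vs-training-time curve. A minimal cosine-with-restarts schedule is sketched below; the constants are illustrative, not the paper's settings.

```python
import math

def cyclic_cosine_lr(step, cycle_len, lr_max=0.1, lr_min=0.0):
    """Cosine learning rate that restarts every `cycle_len` steps. Each
    cycle ends near lr_min, so the checkpoint at each cycle boundary gives
    one (accuracy, training-time) point from a single run."""
    t = (step % cycle_len) / cycle_len  # position within the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

# Usage: LR at the start, middle, and end of 1000-step cycles.
for s in [0, 500, 999, 1000]:
    print(s, round(cyclic_cosine_lr(s, cycle_len=1000), 4))
```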

Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities

no code implementations • 13 Oct 2022 • Brian R. Bartoldson, Bhavya Kailkhura, Davis Blalock

To address this problem, there has been a great deal of research on algorithmically efficient deep learning, which seeks to reduce training costs not at the hardware or implementation level, but through changes in the semantics of the training program.

Dynamic Masking Rate Schedules for MLM Pretraining

no code implementations • 24 May 2023 • Zachary Ankner, Naomi Saphra, Davis Blalock, Jonathan Frankle, Matthew L. Leavitt

Most work on transformers trained with the Masked Language Modeling (MLM) objective uses the original BERT model's fixed masking rate of 15%.

Language Modelling • Masked Language Modeling • +1
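A masking rate schedule in this sense is just a function from training step to masking probability. The linear decay below (30% down to BERT's 15%) is one illustrative instance; the endpoints are assumptions for the example, not necessarily the paper's best-performing schedule.

```python
def masking_rate(step, total_steps, start_rate=0.30, end_rate=0.15):
    """Linearly decay the MLM masking rate over pretraining instead of
    fixing it at 15%. Endpoint values are illustrative assumptions."""
    frac = min(step / total_steps, 1.0)  # fraction of training completed
    return start_rate + frac * (end_rate - start_rate)

# Usage: rate at the start, middle, and end of a 100k-step run.
for s in [0, 50_000, 100_000]:
    print(s, masking_rate(s, total_steps=100_000))
```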
