Search Results for author: Alexandra Peste

Found 9 papers, 4 papers with code

ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment

no code implementations · 11 Dec 2023 · Paniz Halvachi, Alexandra Peste, Dan Alistarh, Christoph H. Lampert

We present ELSA, a practical solution for creating deep networks that can easily be deployed at different levels of sparsity.

Accurate Neural Network Pruning Requires Rethinking Sparse Optimization

no code implementations · 3 Aug 2023 · Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh

Obtaining versions of deep neural networks that are both highly accurate and highly sparse is one of the main challenges in model compression, and several high-performance pruning techniques have been investigated by the community.

Model Compression · Network Pruning +1

Knowledge Distillation Performs Partial Variance Reduction

1 code implementation · NeurIPS 2023 · Mher Safaryan, Alexandra Peste, Dan Alistarh

We show that, in the context of linear and deep linear models, KD can be interpreted as a novel type of stochastic variance reduction mechanism.

Knowledge Distillation
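
Since this entry analyzes standard knowledge distillation, here is a minimal sketch of the usual distillation objective (temperature-softened teacher/student KL mixed with the hard-label loss). The temperature and mixing weight are illustrative assumptions, not values or analysis from the paper.

```python
# Minimal sketch of a standard knowledge-distillation loss
# (hypothetical temperature/weighting; not the paper's exact setup).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-softened teacher and student.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: usual cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example usage with random logits.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y))
```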

Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures

no code implementations · CVPR 2023 · Eugenia Iofinova, Alexandra Peste, Dan Alistarh

Pruning - that is, setting a significant subset of the parameters of a neural network to zero - is one of the most popular methods of model compression.

Model Compression · Network Pruning
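
As a concrete illustration of the pruning operation described in this entry (zeroing out a subset of the parameters), here is a minimal global magnitude-pruning sketch. The layer types, sparsity level, and threshold rule are generic assumptions, not the specific pipelines the paper audits for bias.

```python
# Global magnitude pruning: zero the smallest-magnitude weights
# until a target sparsity is reached (generic illustration).
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9):
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    all_scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(int(sparsity * all_scores.numel()), 1)
    threshold = torch.kthvalue(all_scores, k).values  # k-th smallest magnitude
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())  # keep only large weights

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.9)
nonzero = sum((p != 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"weight density after pruning: {nonzero / total:.2%}")
```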

CrAM: A Compression-Aware Minimizer

1 code implementation · 28 Jul 2022 · Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh

In this work we propose a new compression-aware minimizer dubbed CrAM that modifies the optimization step in a principled way, in order to produce models whose local loss behavior is stable under compression operations such as pruning.

Image Classification · Language Modelling +2
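
The abstract above describes modifying the optimization step so that the loss stays stable under compression. Below is a rough, heavily simplified sketch of that general idea: take the gradient at a magnitude-pruned copy of the weights and apply it to the dense weights. The one-shot top-k projection, step size, and sparsity level are my assumptions, not CrAM's actual update rule.

```python
# Sketch of a compression-aware update: evaluate the gradient at a pruned
# copy of the weights and apply it to the dense weights, so the loss stays
# stable under pruning (illustrative; not CrAM's exact algorithm).
import torch

def compression_aware_step(params, loss_fn, lr=0.1, sparsity=0.5):
    # 1) Temporarily compress: zero the smallest-magnitude entries.
    originals = []
    with torch.no_grad():
        for p in params:
            k = max(int(sparsity * p.numel()), 1)
            thresh = torch.kthvalue(p.abs().flatten(), k).values
            originals.append(p.detach().clone())
            p.mul_((p.abs() > thresh).float())
    # 2) Gradient of the loss at the compressed point.
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params)
    # 3) Restore dense weights and step with that gradient.
    with torch.no_grad():
        for p, orig, g in zip(params, originals, grads):
            p.copy_(orig - lr * g)
    return loss.item()

# Example on a toy least-squares problem.
w = torch.randn(10, 10, requires_grad=True)
x, y = torch.randn(32, 10), torch.randn(32, 10)
loss_fn = lambda: ((x @ w - y) ** 2).mean()
print(compression_aware_step([w], loss_fn))
```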

How Well Do Sparse Imagenet Models Transfer?

1 code implementation · CVPR 2022 · Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh

Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets.

Transfer Learning
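
As a minimal sketch of the transfer setup studied here (adapt an ImageNet-pretrained backbone to a downstream task), the snippet below fine-tunes only a new linear head on top of a dense torchvision checkpoint. The sparse upstream checkpoints the paper actually evaluates, the 37-class downstream task, and the linear-probe regime are stand-in assumptions.

```python
# Linear-probe transfer: freeze a pretrained backbone, train a new head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V1")     # dense upstream checkpoint
backbone.fc = nn.Linear(backbone.fc.in_features, 37)    # hypothetical downstream classes

# Freeze everything except the new head.
for name, p in backbone.named_parameters():
    p.requires_grad = name.startswith("fc.")

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=0.01, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative step on random data standing in for a downstream batch.
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 37, (4,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```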

SSSE: Efficiently Erasing Samples from Trained Machine Learning Models

no code implementations · 8 Jul 2021 · Alexandra Peste, Dan Alistarh, Christoph H. Lampert

The availability of large amounts of user-provided data has been key to the success of machine learning for many real-world tasks.

BIG-bench Machine Learning

AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks

2 code implementations · NeurIPS 2021 · Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh

The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate.

Network Pruning
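
A schematic sketch of the alternating compressed/decompressed schedule named in the title: train dense for a phase, then prune by magnitude and train with the mask enforced, and repeat. Phase lengths, sparsity, optimizer, and the toy regression task are illustrative assumptions, not the paper's schedule.

```python
# Alternate dense ("decompressed") and masked ("compressed") training phases.
import torch
import torch.nn as nn

def magnitude_masks(model, sparsity):
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:  # prune weight matrices only
                k = max(int(sparsity * p.numel()), 1)
                thresh = torch.kthvalue(p.abs().flatten(), k).values
                masks[name] = (p.abs() > thresh).float()
    return masks

def run_phase(model, data, target, epochs, masks=None, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(data), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if masks:  # compressed phase: keep pruned weights at zero
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
    return loss.item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
x, y = torch.randn(128, 20), torch.randn(128, 1)

for cycle in range(3):
    run_phase(model, x, y, epochs=5, masks=None)             # decompressed (dense)
    masks = magnitude_masks(model, sparsity=0.8)
    print(run_phase(model, x, y, epochs=5, masks=masks))     # compressed (sparse)
```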

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks

no code implementations · 31 Jan 2021 · Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components.
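
The survey covers both pruning and regrowth of connections. As a generic illustration of that family (not any specific method from the survey), here is a single SET-style prune-and-regrow step on one weight matrix: drop the smallest-magnitude active weights and regrow the same number of inactive connections at random. The update fraction and zero-initialized regrowth are illustrative choices.

```python
# One dynamic-sparsity step: prune small active weights, regrow inactive ones.
import torch

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor, fraction: float = 0.2):
    flat_w, flat_m = weight.view(-1), mask.view(-1)
    n_update = max(int(fraction * int(flat_m.sum().item())), 1)

    # Prune: deactivate the n_update smallest-magnitude active weights.
    scores = flat_w.abs().masked_fill(flat_m == 0, float("inf"))
    drop_idx = torch.topk(scores, n_update, largest=False).indices
    flat_m[drop_idx] = 0.0

    # Regrow: activate n_update currently-inactive connections at random,
    # excluding the ones just dropped.
    candidates = (flat_m == 0)
    candidates[drop_idx] = False
    cand_idx = candidates.nonzero(as_tuple=True)[0]
    grow_idx = cand_idx[torch.randperm(cand_idx.numel())[:n_update]]
    flat_m[grow_idx] = 1.0
    flat_w[grow_idx] = 0.0        # regrown connections start from zero

    flat_w.mul_(flat_m)           # enforce the updated sparsity pattern
    return mask

# Example: a 16x16 weight with a ~30% dense random mask.
w = torch.randn(16, 16)
m = (torch.rand(16, 16) < 0.3).float()
w.mul_(m)
before = int(m.sum().item())
prune_and_regrow(w, m)
print(before, "->", int(m.sum().item()), "active connections")
```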
