Search Results for author: Eugenia Iofinova

Found 8 papers, 5 papers with code

Hacking Generative Models with Differentiable Network Bending

1 code implementation 7 Oct 2023 Giacomo Aldegheri, Alina Rogalska, Ahmed Youssef, Eugenia Iofinova

In this work, we propose a method to 'hack' generative models, pushing their outputs away from the original training distribution towards a new objective.
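
For context on the entry above: the core idea of differentiable network bending is to insert a small trainable transform into an intermediate layer of a frozen, pretrained generator and optimize only that transform toward a new objective, letting gradients flow through the fixed generator. The sketch below is a minimal PyTorch illustration of that recipe; `ToyGenerator`, `bend`, and `new_objective` are placeholders, not the paper's architecture or loss.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained generative model (e.g. a GAN generator)."""
    def __init__(self, latent_dim=64, hidden_dim=128, out_dim=3 * 32 * 32):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU())
        self.stage2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, z, bend=None):
        h = self.stage1(z)
        if bend is not None:                 # apply the inserted transform mid-network
            h = bend(h)
        return torch.tanh(self.stage2(h))

generator = ToyGenerator()
for p in generator.parameters():             # the pretrained generator stays frozen
    p.requires_grad_(False)

bend = nn.Linear(128, 128)                   # small trainable "bending" layer
optimizer = torch.optim.Adam(bend.parameters(), lr=1e-3)

def new_objective(images):
    """Placeholder objective pushing outputs away from the original distribution
    (here: maximize mean brightness); the real objective is task-specific."""
    return -images.mean()

for step in range(100):
    z = torch.randn(16, 64)
    loss = new_objective(generator(z, bend=bend))
    optimizer.zero_grad()
    loss.backward()                          # gradients flow through the frozen generator
    optimizer.step()
```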

SPADE: Sparsity-Guided Debugging for Deep Neural Networks

no code implementations 6 Oct 2023 Arshia Soltani Moakhar, Eugenia Iofinova, Dan Alistarh

Towards this goal, multiple tools have been proposed to aid a human examiner in reasoning about a network's behavior in general or on a set of instances.

Learning Theory

Accurate Neural Network Pruning Requires Rethinking Sparse Optimization

no code implementations 3 Aug 2023 Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh

Obtaining versions of deep neural networks that are both highly-accurate and highly-sparse is one of the main challenges in the area of model compression, and several high-performance pruning techniques have been investigated by the community.

Model Compression · Network Pruning +1

Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures

no code implementations CVPR 2023 Eugenia Iofinova, Alexandra Peste, Dan Alistarh

Pruning - that is, setting a significant subset of the parameters of a neural network to zero - is one of the most popular methods of model compression.

Model Compression · Network Pruning
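
As a minimal illustration of the pruning operation defined in the entry above, the sketch below performs global magnitude pruning: the smallest-magnitude weights across all layers are set to zero. It illustrates pruning in general, not the paper's bias analysis or countermeasures; the model and the 90% sparsity level are arbitrary placeholders.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the `sparsity` fraction of smallest-magnitude weights, globally."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for name, p in model.named_parameters() if "weight" in name])
    threshold = torch.quantile(all_weights, sparsity)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "weight" in name:
                p.mul_((p.abs() > threshold).float())   # apply the binary mask in place

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.9)

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{zeros / total:.1%} of parameters are now zero")
```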

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks

1 code implementation 9 Feb 2023 Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh

We provide a new efficient version of the backpropagation algorithm, specialized to the case where the weights of the neural network being trained are sparse.

Transfer Learning
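
For context on the entry above: when a layer's weight matrix is sparse, both the forward product and the input-gradient product of the backward pass can reuse the same sparse operand and skip the zero weights. The sketch below illustrates that principle with a custom autograd function and `torch.sparse.mm`; it is not the paper's optimized SparseProp kernels, and the weight gradient is omitted for brevity.

```python
import torch

class SparseLinearFn(torch.autograd.Function):
    """y = x @ W^T where W is stored sparsely; forward and backward reuse the sparse W."""

    @staticmethod
    def forward(ctx, x, w_sparse, w_sparse_t):
        ctx.save_for_backward(w_sparse_t)
        return torch.sparse.mm(w_sparse, x.t()).t()      # (B, out) via a sparse matmul

    @staticmethod
    def backward(ctx, grad_out):
        (w_sparse_t,) = ctx.saved_tensors
        # dL/dx = dL/dy @ W, again a sparse matmul that skips the zero weights.
        grad_x = torch.sparse.mm(w_sparse_t, grad_out.t()).t()
        # A full implementation would also return the weight gradient dL/dW = dL/dy^T @ x,
        # restricted to the sparsity pattern; the weights are treated as fixed in this sketch.
        return grad_x, None, None

# A 95%-sparse weight matrix, stored once in sparse (COO) form, plus its transpose.
dense_w = torch.randn(256, 128) * (torch.rand(256, 128) > 0.95)
w_sparse = dense_w.to_sparse()
w_sparse_t = dense_w.t().contiguous().to_sparse()

x = torch.randn(32, 128, requires_grad=True)
y = SparseLinearFn.apply(x, w_sparse, w_sparse_t)
y.sum().backward()
print(x.grad.shape)                                      # torch.Size([32, 128])
```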

How Well Do Sparse Imagenet Models Transfer?

1 code implementation CVPR 2022 Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh

Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets.

Transfer Learning
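
A minimal sketch of the transfer-learning setup described in the entry above: a backbone pretrained on a large upstream dataset (ImageNet) receives a new classification head and is then trained on a smaller downstream task. The downstream class count, optimizer settings, and training step are placeholders, and the paper's central comparison of dense versus sparse pretrained backbones is not reflected here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Upstream: ImageNet-pretrained backbone (string weights API needs torchvision >= 0.13).
model = models.resnet50(weights="IMAGENET1K_V1")

# Option A: linear probing -- freeze the backbone and train only a new head.
for p in model.parameters():
    p.requires_grad_(False)
model.fc = nn.Linear(model.fc.in_features, 10)    # 10 downstream classes (placeholder)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One downstream training step; `images` is a (B, 3, 224, 224) batch."""
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Option B, full fine-tuning, would instead keep every parameter trainable and pass
# model.parameters() to the optimizer, typically with a smaller learning rate.
```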

AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks

2 code implementations NeurIPS 2021 Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh

The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate.

Network Pruning
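
As a rough schematic of the alternation named in the title above: training switches between decompressed (dense) phases, in which all weights are free, and compressed (sparse) phases, in which a magnitude mask is computed, applied, and re-applied after every update so pruned weights stay at zero. The phase lengths, sparsity level, random data, and absence of a final sparse fine-tuning stage are simplifications, not the paper's recipe.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def magnitude_masks(model, sparsity=0.9):
    """Per-layer binary masks keeping only the largest-magnitude weights."""
    masks = {}
    for name, p in model.named_parameters():
        if "weight" in name:
            keep = int(p.numel() * (1 - sparsity))
            threshold = p.detach().abs().flatten().topk(keep).values.min()
            masks[name] = (p.detach().abs() >= threshold).float()
    return masks

def apply_masks(model, masks):
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

for phase in range(6):                       # alternate dense and compressed phases
    compressed = phase % 2 == 1
    masks = magnitude_masks(model) if compressed else None
    if compressed:
        apply_masks(model, masks)            # enter the sparse phase with pruned weights
    for step in range(100):                  # placeholder phase length and random data
        x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
        loss = criterion(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if compressed:
            apply_masks(model, masks)        # keep pruned weights at zero after the update
```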

FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data

1 code implementation 22 Jun 2021 Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert

In this work we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which might not be representative of the true data distribution.

Fairness
