1 code implementation • 7 Oct 2023 • Giacomo Aldegheri, Alina Rogalska, Ahmed Youssef, Eugenia Iofinova
In this work, we propose a method to 'hack' generative models, pushing their outputs away from the original training distribution towards a new objective.
no code implementations • 6 Oct 2023 • Arshia Soltani Moakhar, Eugenia Iofinova, Dan Alistarh
Towards this goal, multiple tools have been proposed to aid a human examiner in reasoning about a network's behavior in general or on a set of instances.
no code implementations • 3 Aug 2023 • Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh
Obtaining versions of deep neural networks that are both highly accurate and highly sparse is one of the main challenges in model compression, and several high-performance pruning techniques have been investigated by the community.
no code implementations • CVPR 2023 • Eugenia Iofinova, Alexandra Peste, Dan Alistarh
Pruning (that is, setting a significant subset of a neural network's parameters to zero) is one of the most popular methods of model compression.
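The simplest instance of this idea is unstructured magnitude pruning: zero out the smallest-magnitude fraction of the weights and keep the rest. A minimal NumPy sketch of that baseline, purely illustrative and not the specific criterion studied in the paper:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    absolute value (unstructured magnitude pruning, illustrative only)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.5, -0.1],
              [0.05, 2.0]])
pruned = magnitude_prune(w, 0.5)  # removes the two smallest entries
```

Ties at the threshold are pruned as well here; production pruning schedules typically prune gradually over training rather than in one shot.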
1 code implementation • 9 Feb 2023 • Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh
We provide a new efficient version of the backpropagation algorithm, specialized to the case where the weights of the neural network being trained are sparse.
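The observation such a specialization can exploit: for a linear layer, the weight gradient is the outer product of the output gradient and the input, so only the entries at stored (nonzero) weight positions ever need to be computed. A NumPy sketch of that idea, with illustrative `rows`/`cols` index arrays rather than the paper's actual data structures:

```python
import numpy as np

def sparse_linear_grad(grad_out, x, rows, cols):
    """Gradient of a linear layer y = W @ x w.r.t. only the stored
    (nonzero) weights W[rows[i], cols[i]]. Avoids materializing the
    dense outer product grad_out @ x.T (sketch, not the paper's algorithm)."""
    return grad_out[rows] * x[cols]

# Tiny correctness check against the dense computation.
x = np.array([1.0, 2.0, 3.0])        # layer input
g = np.array([0.5, -1.0])            # gradient w.r.t. layer output
rows = np.array([0, 1])              # coordinates of the nonzero weights
cols = np.array([2, 0])
sparse_g = sparse_linear_grad(g, x, rows, cols)
dense_g = np.outer(g, x)             # full dense gradient, for comparison
```

The sparse path does O(nnz) work instead of O(rows × cols), which is where the speedup of a sparsity-aware backward pass comes from.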
1 code implementation • CVPR 2022 • Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets.
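In its simplest form ("linear probing"), such adaptation keeps the pretrained backbone frozen and trains only a small head on the downstream data. A minimal NumPy sketch with a made-up random-projection backbone and synthetic labels, purely illustrative of the paradigm and not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "upstream" backbone: a fixed random ReLU projection
# standing in for pretrained feature extraction (illustration only).
W_backbone = rng.normal(size=(10, 4))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)

# Tiny synthetic "downstream" binary classification task.
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)
F = features(X)  # backbone stays frozen; only the head below is trained

def logistic_loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Train a linear head by gradient descent on the frozen features.
w, b, lr = np.zeros(4), 0.0, 0.1
loss_before = logistic_loss(w, b)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= lr * (F.T @ g) / len(y)
    b -= lr * g.mean()
loss_after = logistic_loss(w, b)
```

Full fine-tuning would also update the backbone weights; the paper's question is how pruning the upstream model affects either form of downstream adaptation.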
2 code implementations • NeurIPS 2021 • Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh
The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate.
Ranked #1 on Network Pruning on CIFAR-100
1 code implementation • 22 Jun 2021 • Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert
In this work, we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which might not be representative of the true data distribution.