no code implementations • 5 Feb 2023 • Ilia Markov, Adrian Vladu, Qi Guo, Dan Alistarh
Communication-reduction techniques are a popular way to improve scalability in data-parallel training of deep neural networks (DNNs).
1 code implementation • 28 Jul 2022 • Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh
In this work we propose a new compression-aware minimizer, dubbed CrAM, that modifies the optimization step in a principled way so that the resulting models' local loss behavior remains stable under compression operations such as pruning.
2 code implementations • NeurIPS 2021 • Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh
The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate.
Ranked #1 on Network Pruning on CIFAR-100
no code implementations • 5 Mar 2021 • Kyriakos Axiotis, Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, Adrian Vladu
This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization, in both the standard and parametric settings.
no code implementations • 22 Dec 2020 • Alina Ene, Huy L. Nguyen, Adrian Vladu
We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting.
no code implementations • 17 Jul 2020 • Alina Ene, Huy L. Nguyen, Adrian Vladu
We provide new adaptive first-order methods for constrained convex optimization.
no code implementations • 18 Feb 2019 • Alina Ene, Adrian Vladu
The iteratively reweighted least squares method (IRLS) is a popular technique used in practice for solving regression problems.
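As a point of reference for the technique named above, here is a minimal textbook sketch of IRLS applied to l1 regression, min_x ||Ax - b||_1: each iteration solves a least-squares problem reweighted by the inverse magnitudes of the current residuals. This is only the classic baseline, not the accelerated or robust variant analyzed in the paper; the clamp `eps` and iteration count are illustrative choices.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Textbook IRLS sketch for min_x ||Ax - b||_1 (not the paper's method).

    Each iteration solves the weighted normal equations
    A^T W A x = A^T W b with weights w_i = 1 / max(|r_i|, eps),
    which downweights rows already fit well and upweights outliers' inverses.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary LS warm start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)        # reweighting step
        Aw = A * w[:, None]                         # rows scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)     # weighted least squares
    return x
```

With a few large outliers in `b`, the IRLS solution tracks the l1 minimizer and is far less distorted than ordinary least squares.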
no code implementations • 4 Dec 2018 • Alina Ene, Huy L. Nguyen, Adrian Vladu
We study parallel algorithms for the problem of maximizing a non-negative submodular function.
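For context on the problem statement, here is the standard sequential greedy baseline for monotone submodular maximization under a cardinality constraint, illustrated on max coverage. The parallel algorithms in the paper aim to approach the same (1 - 1/e) guarantee in far fewer adaptive rounds; this sketch shows only the sequential baseline they compare against, and `sets`/`k` are illustrative names.

```python
def greedy_max_cover(sets, k):
    """Sequential greedy for max coverage, a canonical monotone
    submodular objective: repeatedly pick the set with the largest
    marginal gain (newly covered elements), up to k sets.
    """
    chosen, covered = [], set()
    for _ in range(k):
        gains = [len(s - covered) for s in sets]    # marginal gains
        best = max(range(len(sets)), key=lambda i: gains[i])
        if gains[best] == 0:                        # nothing left to gain
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

Each iteration is inherently sequential (the gain of a set depends on everything chosen so far), which is exactly the adaptivity bottleneck that parallel submodular maximization tries to break.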
57 code implementations • ICLR 2018 • Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
The paper studies the adversarial robustness of neural networks through the lens of robust optimization; this principled perspective enables the identification of methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
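The attack proposed in this paper is projected gradient descent (PGD) in an l_inf ball. The following is a minimal sketch of the l_inf PGD attack on a toy logistic model (the paper applies it to deep networks; the model, step size `alpha`, and budget `eps` here are illustrative):

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """l_inf PGD attack sketch on a logistic model p = sigmoid(w.x + b),
    label y in {0, 1}: gradient *ascent* on the cross-entropy loss,
    with each iterate projected back into the eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = (p - y) * w                         # d(loss)/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad)      # steepest-ascent step in l_inf
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the eps-ball
    return x_adv
```

Adversarial training in the paper's formulation then minimizes the loss at these worst-case perturbed points instead of the clean inputs.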
no code implementations • 2 Nov 2016 • Ilan Lobel, Renato Paes Leme, Adrian Vladu
We consider a multidimensional search problem that is motivated by questions in contextual decision-making, such as dynamic pricing and personalized medicine.
no code implementations • ICML 2017 • Vahab Mirrokni, Renato Paes Leme, Adrian Vladu, Sam Chiu-wai Wong
We give a deterministic nearly-linear time algorithm for approximating any point inside a convex polytope with a sparse convex combination of the polytope's vertices.
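To illustrate the problem (not the paper's nearly-linear time algorithm), the classic Frank-Wolfe method already produces such sparse combinations: minimizing 0.5 * ||x - p||^2 over the hull adds at most one vertex per iteration, so after t steps the iterate is a convex combination of at most t + 1 vertices. A minimal sketch, with `V` holding the vertices as rows:

```python
import numpy as np

def sparse_convex_approx(V, p, iters=1000):
    """Frank-Wolfe sketch: approximate p inside conv(rows of V) by a
    convex combination `coef` supported on at most iters + 1 vertices.
    Each step queries a linear minimization oracle (an argmin over
    vertices) and mixes that vertex in with the standard 2/(t+2) step.
    """
    n = V.shape[0]
    coef = np.zeros(n)
    coef[0] = 1.0                          # start at an arbitrary vertex
    x = V[0].astype(float).copy()
    for t in range(1, iters + 1):
        grad = x - p                       # gradient of 0.5 * ||x - p||^2
        i = int(np.argmin(V @ grad))       # linear minimization oracle
        gamma = 2.0 / (t + 2)              # standard Frank-Wolfe step size
        coef *= (1.0 - gamma)
        coef[i] += gamma
        x = (1.0 - gamma) * x + gamma * V[i]
    return coef, x
```

The O(1/t) convergence of this baseline is exactly what makes a deterministic nearly-linear time algorithm for the same task interesting.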