1 code implementation • 1 Mar 2023 • Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H. Brendan McMahan, Sergei Vassilvitskii, Steve Chien, Abhradeep Thakurta
However, while industry has begun to adopt DP, attempts to apply it to complex, real-world ML models remain few and far between.
no code implementations • 28 Feb 2023 • Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder
Our approach, CHITA, extends the classical Optimal Brain Surgeon framework and results in significant improvements in speed, memory, and performance over existing optimization-based approaches for network pruning.
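Since CHITA extends Optimal Brain Surgeon (OBS), a minimal numpy sketch of the classical OBS step may help fix ideas; the saliency and weight-update formulas below are the textbook OBS quantities, while forming a dense inverse Hessian is purely illustrative (scaling past it is exactly where optimization-based methods like CHITA come in).

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One step of classical Optimal Brain Surgeon.

    w     : (p,) current weights
    H_inv : (p, p) inverse Hessian of the loss at w
    Returns the updated weights and the index of the pruned weight.
    """
    # Saliency of zeroing weight q: w_q^2 / (2 * [H^{-1}]_{qq}).
    saliency = w ** 2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(saliency))              # cheapest weight to remove
    # Optimal compensation of the remaining weights after zeroing w_q.
    w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new[q] = 0.0                            # enforce exact sparsity
    return w_new, q
```

Applied repeatedly, this removes one weight at a time while adjusting the rest; making a joint, large-scale version of this computation tractable is the kind of improvement the paper reports.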
no code implementations • 31 Jan 2023 • Hussein Hazimeh, Natalia Ponomareva
We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation.
1 code implementation • 30 Jan 2023 • Florian Stimberg, Ayan Chakrabarti, Chun-Ta Lu, Hussein Hazimeh, Otilia Stretcu, Wei Qiao, Yintao Liu, Merve Kaya, Cyrus Rashtchian, Ariel Fuxman, Mehmet Tek, Sven Gowal
We evaluate 33 pretrained models on the benchmark and train models with different augmentations, architectures and training methods on subsets of the obfuscations to measure generalization.
no code implementations • 19 May 2022 • Shibal Ibrahim, Hussein Hazimeh, Rahul Mazumder
We therefore propose a novel tensor-based formulation of differentiable trees that allows for efficient vectorization on GPUs.
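As a rough illustration of what a tensor-based formulation buys (a sketch under assumed shapes and sigmoid routing, not the paper's exact construction): the routing decisions for every internal node of every tree in the ensemble come from a single batched contraction, and leaf-reachability probabilities then follow with one multiply per tree level.

```python
import numpy as np

def soft_tree_leaf_probs(X, W, depth):
    """Vectorized routing for an ensemble of perfect binary soft trees.

    X : (B, d) batch of inputs
    W : (d, T, 2**depth - 1) hyperplanes for every internal node, every tree
    Returns (B, T, 2**depth) probabilities of reaching each leaf.
    """
    B, T = X.shape[0], W.shape[1]
    # One contraction gives every node's "go right" probability per tree.
    s = 1.0 / (1.0 + np.exp(-np.einsum('bd,dtn->btn', X, W)))  # (B, T, nodes)
    mu = np.ones((B, T, 1))               # probability of reaching the root
    node = 0                              # first node index at current level
    for level in range(depth):
        n = 2 ** level                    # nodes at this level
        s_lvl = s[:, :, node:node + n]    # (B, T, n)
        # Children interleave: left child gets (1 - s), right child gets s.
        mu = np.stack([mu * (1.0 - s_lvl), mu * s_lvl], axis=-1)
        mu = mu.reshape(B, T, 2 * n)
        node += n
    return mu                             # rows sum to 1 within each tree
```

Because everything is expressed as dense tensor operations, the whole ensemble maps onto GPU-friendly kernels instead of per-node branching.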
1 code implementation • 10 Feb 2022 • Hussein Hazimeh, Rahul Mazumder, Tim Nonet
We introduce L0Learn: an open-source package for sparse regression and classification using $\ell_0$ regularization.
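To make the objective concrete, here is a tiny proximal-gradient (iterative hard-thresholding) stand-in for the $\ell_0$-penalized least-squares problem that the package targets; L0Learn itself uses much faster coordinate-descent and local combinatorial-search algorithms, so this sketch is for exposition only.

```python
import numpy as np

def l0_iht(X, y, lam, step, iters=500):
    """Proximal gradient for  min_b 0.5*||y - X b||^2 + lam*||b||_0.

    step should be at most 1/||X||_2^2 (squared spectral norm) to converge.
    The prox of (step*lam*||.||_0) keeps coordinate i iff z_i^2 > 2*step*lam.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = beta - step * (X.T @ (X @ beta - y))             # gradient step
        beta = np.where(z ** 2 > 2.0 * step * lam, z, 0.0)   # hard threshold
    return beta
```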
3 code implementations • NeurIPS 2021 • Hussein Hazimeh, Zhe Zhao, Aakanksha Chowdhery, Maheswaran Sathiamoorthy, Yihua Chen, Rahul Mazumder, Lichan Hong, Ed H. Chi
State-of-the-art MoE models use a trainable sparse gate to select a subset of the experts for each input example.
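The "trainable sparse gate" here is typically a top-k softmax over experts; below is a generic numpy sketch of that baseline gate (not of this paper's proposed gate, which replaces the non-differentiable top-k selection with a smooth one).

```python
import numpy as np

def topk_gate(x, Wg, k):
    """Standard sparse MoE gate: route each example to its top-k experts.

    x  : (B, d) inputs;  Wg : (d, E) trainable gating weights
    Returns (B, E) weights that are zero outside each row's top-k entries.
    (Ties at the k-th logit may keep extra experts; fine for a sketch.)
    """
    logits = x @ Wg                                    # (B, E)
    kth = np.sort(logits, axis=1)[:, -k][:, None]      # per-row k-th largest
    masked = np.where(logits >= kth, logits, -np.inf)  # keep only the top-k
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)            # renormalized softmax
```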
1 code implementation • 14 Apr 2021 • Hussein Hazimeh, Rahul Mazumder, Peter Radchenko
Our algorithmic framework consists of approximate and exact algorithms.
2 code implementations • 13 Apr 2020 • Hussein Hazimeh, Rahul Mazumder, Ali Saab
In this work, we present a new exact MIP framework for $\ell_0\ell_2$-regularized regression that can scale to $p \sim 10^7$, achieving speedups of at least $5000\times$ compared to state-of-the-art exact methods.
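For reference, the problem being solved has the form below (a common way of writing it; the exact scaling of the penalty terms follows the paper):

$$
\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\|y - X\beta\|_2^2 \;+\; \lambda_0 \|\beta\|_0 \;+\; \lambda_2 \|\beta\|_2^2,
$$

where $\|\beta\|_0$ counts the nonzero entries of $\beta$ and the $\ell_2$ term adds shrinkage alongside the combinatorial $\ell_0$ penalty.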
2 code implementations • ICML 2020 • Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, Rahul Mazumder
We aim to combine these advantages by introducing a new layer for neural networks, composed of an ensemble of differentiable decision trees (a.k.a. soft trees).
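One way to make a tree differentiable while still routing most samples hard left or hard right is a smooth-step activation at each internal node; a minimal sketch follows (this particular cubic parameterization is one standard choice and is assumed here, so details may differ from the paper's).

```python
import numpy as np

def smooth_step(t, gamma=1.0):
    """Cubic that is exactly 0 for t <= -gamma/2 and exactly 1 for t >= gamma/2.

    Unlike a sigmoid, it saturates at hard 0/1, so samples outside the
    transition band are routed to a single child (conditional computation).
    """
    t = np.asarray(t, dtype=float)
    inner = -2.0 / gamma**3 * t**3 + 1.5 / gamma * t + 0.5
    return np.where(t <= -gamma / 2, 0.0,
                    np.where(t >= gamma / 2, 1.0, inner))
```

Because the function saturates at exact 0/1 outside $[-\gamma/2, \gamma/2]$, most samples visit only one root-to-leaf path, unlike with sigmoid routing.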
1 code implementation • 17 Jan 2020 • Antoine Dedieu, Hussein Hazimeh, Rahul Mazumder
We aim to bridge this gap in computation times by developing new MIP-based algorithms for $\ell_0$-regularized classification.
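Concretely, with the logistic loss these are problems of the form below (shown with an added ridge term, which such formulations commonly include; the paper's exact losses and penalties may differ):

$$
\min_{\beta \in \mathbb{R}^p} \; \sum_{i=1}^{n} \log\!\bigl(1 + e^{-y_i x_i^\top \beta}\bigr) \;+\; \lambda_0 \|\beta\|_0 \;+\; \lambda_2 \|\beta\|_2^2, \qquad y_i \in \{-1, +1\}.
$$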
1 code implementation • 5 Feb 2019 • Hussein Hazimeh, Rahul Mazumder
In addition, we introduce a specialized active-set strategy with gradient screening for avoiding costly gradient computations.
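A rough sketch of the active-set idea, shown for the simpler $\ell_1$ penalty rather than the paper's $\ell_0$-type penalties (a deliberate swap, for brevity): iterate cheaply on a small active set, and only occasionally compute the full gradient to screen for coordinates whose optimality conditions are violated.

```python
import numpy as np

def active_set_cd(X, y, lam, outer=20, inner=50):
    """Coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1, restricted to
    an active set; full-gradient optimality checks happen only rarely.
    (Illustrative stand-in: the paper works with l0-type penalties.)
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    active = set()
    r = y.copy()                               # residual y - X @ beta
    for _ in range(outer):
        # Cheap inner loop: coordinate descent on the active set only.
        for _ in range(inner):
            for j in active:
                r += X[:, j] * beta[j]         # remove j's contribution
                rho = X[:, j] @ r
                beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]
                r -= X[:, j] * beta[j]
        # Expensive step, once per outer round: screen the full gradient for
        # excluded coordinates whose condition |X_j' r| <= lam is violated.
        viol = np.abs(X.T @ r) > lam + 1e-10
        new = set(np.where(viol)[0]) - active
        if not new:
            break                              # no violations: done
        active |= new
    return beta
```

The expensive full pass `X.T @ r` happens once per outer round rather than once per coordinate update, which is the saving that gradient screening is after.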
1 code implementation • 5 Mar 2018 • Hussein Hazimeh, Rahul Mazumder
Despite the usefulness of $\ell_0$-based estimators and generic MIO solvers, there is a steep computational price to pay compared with popular sparse learning algorithms (e.g., those based on $\ell_1$ regularization).