Search Results for author: Hussein Hazimeh

Found 16 papers, 11 papers with code

DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation

no code implementations • 16 Feb 2024 • Yunjuan Wang, Hussein Hazimeh, Natalia Ponomareva, Alexey Kurakin, Ibrahim Hammoud, Raman Arora

To address this challenge, we first establish a generalization bound for the adversarial target loss, which consists of (i) terms related to the loss on the data, and (ii) a measure of worst-case domain divergence.

Adversarial Robustness Unsupervised Domain Adaptation

COMET: Learning Cardinality Constrained Mixture of Experts with Trees and Local Search

1 code implementation • 5 Jun 2023 • Shibal Ibrahim, Wenyu Chen, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder

To deal with this challenge, we propose a novel, permutation-based local search method that can complement first-order methods in training any sparse gate, e.g., Hash routing, Top-k, DSelect-k, and COMET.

Language Modelling Recommendation Systems
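Of the sparse gates listed above, Top-k routing is the simplest to state: each example is sent to the k experts with the largest router logits, and only those experts receive nonzero mixture weight. The sketch below is a minimal NumPy illustration of such a gate, not COMET's method itself (COMET's contribution is a permutation-based local search that refines gates like this one).

```python
import numpy as np

def top_k_gate(logits, k):
    """Sparse top-k gating: keep the k largest logits per example,
    softmax over them, and zero out all other experts. This is a
    standard sparse MoE router, shown here for illustration."""
    top_idx = np.argsort(logits, axis=-1)[:, -k:]          # indices of k largest logits
    mask = np.zeros_like(logits, dtype=bool)
    np.put_along_axis(mask, top_idx, True, axis=-1)
    masked = np.where(mask, logits, -np.inf)               # suppress non-selected experts
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1, -1.0]])
weights = top_k_gate(logits, k=2)   # only experts 0 and 1 get nonzero weight
```

Because the selected logits are renormalized with a softmax, each row of the output is a probability distribution supported on exactly k experts.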

How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy

1 code implementation • 1 Mar 2023 • Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H. Brendan McMahan, Sergei Vassilvitskii, Steve Chien, Abhradeep Thakurta

However, while some adoption of DP has happened in industry, attempts to apply DP to real world complex ML models are still few and far between.

Fast as CHITA: Neural Network Pruning with Combinatorial Optimization

no code implementations • 28 Feb 2023 • Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder

Our approach, CHITA, extends the classical Optimal Brain Surgeon framework and results in significant improvements in speed, memory, and performance over existing optimization-based approaches for network pruning.

Combinatorial Optimization Network Pruning
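The Optimal Brain Surgeon family that CHITA extends ranks weights by a second-order saliency score rather than raw magnitude. The sketch below uses the diagonal-Hessian simplification of that score (the Optimal Brain Damage variant), with a supplied Hessian diagonal; it is an illustration of the underlying idea, not CHITA, which solves a much richer joint pruning-and-weight-update problem via combinatorial optimization.

```python
import numpy as np

def prune_by_saliency(weights, hess_diag, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    second-order saliency 0.5 * w_i^2 * H_ii (diagonal-Hessian
    approximation of the loss increase from removing weight i)."""
    saliency = 0.5 * weights ** 2 * hess_diag
    n_prune = int(sparsity * weights.size)
    prune_idx = np.argsort(saliency)[:n_prune]   # least important weights first
    pruned = weights.copy()
    pruned[prune_idx] = 0.0
    return pruned

w = np.array([0.1, -2.0, 0.05, 1.5])
pruned = prune_by_saliency(w, hess_diag=np.ones(4), sparsity=0.5)
```

With a uniform Hessian diagonal this reduces to magnitude pruning; a non-uniform diagonal is what lets a small-magnitude but high-curvature weight survive.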

Mind the (optimality) Gap: A Gap-Aware Learning Rate Scheduler for Adversarial Nets

no code implementations • 31 Jan 2023 • Hussein Hazimeh, Natalia Ponomareva

We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation.

Domain Adaptation Fairness +2
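The core intuition of a gap-aware schedule is that the discriminator's learning rate should respond to how far its loss sits from a target value: slow the discriminator down when it is too strong, speed it up when it is too weak. The function below is a toy illustration of that feedback loop with an invented exponential response and clamping; the paper's actual scheduler is different, and all parameter names here are hypothetical.

```python
def gap_aware_lr(base_lr, loss, target_loss, factor=2.0, max_scale=4.0):
    """Toy gap-aware schedule (illustrative, not the paper's scheduler):
    scale the discriminator LR by an exponential function of the gap
    between its loss and a target, clamped to [1/max_scale, max_scale]."""
    gap = loss - target_loss            # negative => discriminator too strong
    scale = factor ** gap               # shrink LR when gap < 0, grow when gap > 0
    scale = min(max(scale, 1.0 / max_scale), max_scale)
    return base_lr * scale
```

At the target loss the scale is exactly 1, so a well-balanced adversarial game leaves the base learning rate untouched.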

Benchmarking Robustness to Adversarial Image Obfuscations

1 code implementation • NeurIPS 2023 • Florian Stimberg, Ayan Chakrabarti, Chun-Ta Lu, Hussein Hazimeh, Otilia Stretcu, Wei Qiao, Yintao Liu, Merve Kaya, Cyrus Rashtchian, Ariel Fuxman, Mehmet Tek, Sven Gowal

We evaluate 33 pretrained models on the benchmark and train models with different augmentations, architectures and training methods on subsets of the obfuscations to measure generalization.

Benchmarking

Flexible Modeling and Multitask Learning using Differentiable Tree Ensembles

no code implementations • 19 May 2022 • Shibal Ibrahim, Hussein Hazimeh, Rahul Mazumder

We therefore propose a novel tensor-based formulation of differentiable trees that allows for efficient vectorization on GPUs.

Multi-Task Learning

L0Learn: A Scalable Package for Sparse Learning using L0 Regularization

1 code implementation • 10 Feb 2022 • Hussein Hazimeh, Rahul Mazumder, Tim Nonet

We present L0Learn: an open-source package for sparse linear regression and classification using $\ell_0$ regularization.

Combinatorial Optimization regression +1
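The workhorse for this class of problems is cyclic coordinate descent with hard thresholding: for 0.5‖y − Xw‖² + λ‖w‖₀ with unit-norm columns, the one-dimensional subproblem for coordinate j is solved exactly by keeping the unpenalized minimizer ρⱼ only when ρⱼ² / 2 > λ, i.e. |ρⱼ| > √(2λ). The sketch below shows that update; it illustrates the style of algorithm behind L0Learn, which adds local combinatorial search and many engineering optimizations on top.

```python
import numpy as np

def l0_coordinate_descent(X, y, lam, n_iters=100):
    """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_0,
    assuming the columns of X have unit L2 norm. Each coordinate
    update hard-thresholds the univariate minimizer rho_j."""
    n, p = X.shape
    w = np.zeros(p)
    r = y - X @ w                       # running residual
    thresh = np.sqrt(2.0 * lam)
    for _ in range(n_iters):
        for j in range(p):
            rho = X[:, j] @ r + w[j]    # unpenalized minimizer for coord j
            w_new = rho if abs(rho) > thresh else 0.0
            r += X[:, j] * (w[j] - w_new)   # keep residual consistent
            w[j] = w_new
    return w
```

Maintaining the residual incrementally keeps each coordinate update at O(n) cost, which is what makes this approach scale.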

Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization

2 code implementations • 13 Apr 2020 • Hussein Hazimeh, Rahul Mazumder, Ali Saab

In this work, we present a new exact MIP framework for $\ell_0\ell_2$-regularized regression that can scale to $p \sim 10^7$, achieving speedups of at least $5000$x, compared to state-of-the-art exact methods.

regression Sparse Learning

The Tree Ensemble Layer: Differentiability meets Conditional Computation

2 code implementations • ICML 2020 • Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, Rahul Mazumder

We aim to combine these advantages by introducing a new layer for neural networks, composed of an ensemble of differentiable decision trees (a.k.a. soft trees).
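In a soft tree, each internal node routes a sample left or right with a sigmoid probability instead of a hard split, so the prediction is a probability-weighted sum over leaves and every parameter is differentiable. The sketch below shows a single depth-2 soft tree's forward pass; parameter shapes and names are illustrative, and the paper's layer vectorizes many such trees efficiently.

```python
import numpy as np

def soft_tree_predict(x, node_w, node_b, leaf_vals):
    """Forward pass of one depth-2 soft decision tree: 3 internal
    nodes (root, left child, right child) route with sigmoids, and
    the output is the leaf-probability-weighted sum of leaf values."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    p = sig(x @ node_w + node_b)                  # routing probs, shape (n, 3)
    root, left, right = p[:, 0], p[:, 1], p[:, 2]
    # probability of reaching each of the 4 leaves
    leaf_p = np.stack([root * left,
                       root * (1 - left),
                       (1 - root) * right,
                       (1 - root) * (1 - right)], axis=1)
    return leaf_p @ leaf_vals                     # shape (n,)
```

The leaf probabilities along any root-to-leaf path multiply the sigmoids encountered, so they always sum to 1 and the prediction is a convex combination of the leaf values.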

Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives

1 code implementation • 17 Jan 2020 • Antoine Dedieu, Hussein Hazimeh, Rahul Mazumder

We aim to bridge this gap in computation times by developing new MIP-based algorithms for $\ell_0$-regularized classification.

Variable Selection

Learning Hierarchical Interactions at Scale: A Convex Optimization Approach

1 code implementation • 5 Feb 2019 • Hussein Hazimeh, Rahul Mazumder

In addition, we introduce a specialized active-set strategy with gradient screening for avoiding costly gradient computations.

Structured Prediction Variable Selection
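Gradient screening rests on a simple optimality condition: in lasso-type problems, a zero coordinate can only become active when the magnitude of its gradient |xⱼᵀr| exceeds the regularization level, so the rest can be skipped without computing full updates. The snippet below illustrates that basic screening rule; the paper's specialized active-set strategy is considerably more refined.

```python
import numpy as np

def screen_coordinates(X, r, lam):
    """Basic gradient screening (illustrative): return the indices of
    coordinates whose correlation with the residual exceeds lam, i.e.
    the only zero coordinates that could enter the active set."""
    grads = np.abs(X.T @ r)
    return np.flatnonzero(grads > lam)
```

Restricting expensive updates to the screened set is what avoids most of the costly gradient computations on high-dimensional problems.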

Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms

1 code implementation • 5 Mar 2018 • Hussein Hazimeh, Rahul Mazumder

In spite of the usefulness of $L_0$-based estimators and generic MIO solvers, there is a steep computational price to pay when compared to popular sparse learning algorithms (e.g., based on $L_1$ regularization).

Combinatorial Optimization Sparse Learning +1
