Search Results for author: Hamed Hassani

Found 88 papers, 25 papers with code

Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding

no code implementations • 12 Mar 2024 • Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti

On general vector sources, LTC improves upon standard neural compressors in one-shot coding performance.

Quantization

Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing

1 code implementation • 25 Feb 2024 • Jiabao Ji, Bairu Hou, Alexander Robey, George J. Pappas, Hamed Hassani, Yang Zhang, Eric Wong, Shiyu Chang

Aligned large language models (LLMs) are vulnerable to jailbreaking attacks, which bypass the safeguards of targeted LLMs and fool them into generating objectionable content.

Instruction Following

Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling

no code implementations • 19 Feb 2024 • Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra

Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed updates under Markovian sampling.

Avg • Multi-agent Reinforcement Learning +1

Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth

no code implementations • 7 Feb 2024 • Kevin Kögler, Alexander Shevchenko, Hamed Hassani, Marco Mondelli

For the prototypical case of the 1-bit compression of sparse Gaussian data, we prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.

Data Compression • Denoising

Generalization Properties of Adversarial Training for $\ell_0$-Bounded Adversarial Attacks

no code implementations • 5 Feb 2024 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

In this paper, we focus on the $\ell_0$-bounded adversarial attacks, and aim to theoretically characterize the performance of adversarial training for an important class of truncated classifiers.

Binary Classification

Score-Based Methods for Discrete Optimization in Deep Learning

no code implementations • 15 Oct 2023 • Eric Lei, Arman Adibi, Hamed Hassani

One class of these problems involves objective functions that depend on neural networks but optimization variables that are discrete.

Jailbreaking Black Box Large Language Models in Twenty Queries

1 code implementation • 12 Oct 2023 • Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, Eric Wong

PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention.
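
The attacker-refines-against-target loop described above can be sketched as follows. This is a minimal illustration of the idea only, not the authors' released implementation; `attacker`, `target`, and `judge` are hypothetical callables wrapping LLM endpoints.

```python
# Hypothetical sketch of an attacker-LLM refinement loop in the spirit of PAIR.
# None of these helper names come from the paper's code.

def pair_style_attack(attacker, target, judge, goal, max_queries=20):
    """Iteratively refine a candidate jailbreak prompt against a black-box target LLM."""
    history = []                        # attacker's view of past attempts
    prompt = attacker(goal, history)    # initial candidate prompt
    for _ in range(max_queries):
        response = target(prompt)               # query the target model
        score = judge(goal, prompt, response)   # e.g., a 1-10 "jailbroken" rating
        if score >= 10:                          # judge deems the attack successful
            return prompt, response
        history.append((prompt, response, score))
        prompt = attacker(goal, history)         # refine using past feedback
    return None, None                            # no successful jailbreak found
```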

A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks

no code implementations • 11 Oct 2023 • Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban

However, with a constant gradient descent step size, this spike only carries information from the linear component of the target function and therefore learning non-linear components is impossible.

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

1 code implementation • 5 Oct 2023 • Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas

Despite efforts to align large language models (LLMs) with human values, widely-used LLMs such as GPT, Llama, Claude, and PaLM are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content.

Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning

1 code implementation • 11 Sep 2023 • Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri

Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy.

Federated Learning • Image Classification +1

Min-Max Optimization under Delays

no code implementations • 13 Jul 2023 • Arman Adibi, Aritra Mitra, Hamed Hassani

Motivated by this gap, we examine the performance of standard min-max optimization algorithms with delayed gradient updates.

Adversarial Robustness • Stochastic Optimization

Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks

no code implementations • 13 Jul 2023 • Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai

An increasingly popular machine learning paradigm is to pretrain a neural network (NN) on many tasks offline, then adapt it to downstream tasks, often by re-training only the last linear layer of the network.

Binary Classification • Multi-Task Learning +1

Text + Sketch: Image Compression at Ultra Low Rates

1 code implementation • 4 Jul 2023 • Eric Lei, Yiğit Berkay Uslu, Hamed Hassani, Shirin Saeedi Bidokhti

Recent advances in text-to-image generative models provide the ability to generate high-quality images from short text descriptions.

Image Compression

On a Relation Between the Rate-Distortion Function and Optimal Transport

no code implementations • 1 Jul 2023 • Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti

We discuss a relationship between rate-distortion and optimal transport (OT) theory, even though they seem to be unrelated at first glance.

Quantization • Relation

Adversarial Training Should Be Cast as a Non-Zero-Sum Game

no code implementations • 19 Jun 2023 • Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher

One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data.

Optimal Heterogeneous Collaborative Linear Regression and Contextual Bandits

no code implementations • 9 Jun 2023 • Xinmeng Huang, Kan Xu, Donghwan Lee, Hamed Hassani, Hamsa Bastani, Edgar Dobriban

MOLAR improves the dependence of the estimation error on the data dimension, compared to independent least squares estimates.

Multi-Armed Bandits • regression

Performance-Robustness Tradeoffs in Adversarially Robust Control and Estimation

no code implementations • 25 May 2023 • Bruce D. Lee, Thomas T. C. K. Zhang, Hamed Hassani, Nikolai Matni

In these special cases, we demonstrate that the severity of the tradeoff depends in an interpretable manner upon system-theoretic properties such as the spectrum of the controllability gramian, the spectrum of the observability gramian, and the stability of the system.

Federated Neural Compression Under Heterogeneous Data

no code implementations • 25 May 2023 • Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti

We discuss a federated learned compression problem, where the goal is to learn a compressor from real-world data that is scattered across clients and may be statistically heterogeneous, yet shares a common underlying representation.

Personalized Federated Learning

Federated Temporal Difference Learning with Linear Function Approximation under Environmental Heterogeneity

no code implementations • 4 Feb 2023 • Han Wang, Aritra Mitra, Hamed Hassani, George J. Pappas, James Anderson

We initiate the study of federated reinforcement learning under environmental heterogeneity by considering a policy evaluation problem.

Demystifying Disagreement-on-the-Line in High Dimensions

1 code implementation • 31 Jan 2023 • Donghwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani

Evaluating the performance of machine learning models under distribution shift is challenging, especially when we only have unlabeled data from the shifted (target) domain, along with labeled data from the original (source) domain.

Vocal Bursts Intensity Prediction

Temporal Difference Learning with Compressed Updates: Error-Feedback meets Reinforcement Learning

no code implementations • 3 Jan 2023 • Aritra Mitra, George J. Pappas, Hamed Hassani

In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck.

Multi-agent Reinforcement Learning • Quantization +3

Probable Domain Generalization via Quantile Risk Minimization

2 code implementations • 20 Jul 2022 • Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf

By minimizing the $\alpha$-quantile of a predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$.

Domain Generalization
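
As context for the quantile objective described above, here is one way to write it (the notation is ours, not taken from the paper): let $R(\theta; e)$ denote the risk of predictor $\theta$ on a domain $e$ drawn from a distribution $Q$ over environments. QRM minimizes the $\alpha$-quantile of this risk distribution:

$$
\min_{\theta}\; F_\alpha(\theta),
\qquad
F_\alpha(\theta) \;=\; \inf\Big\{ t \in \mathbb{R} \;:\; \Pr_{e \sim Q}\big[\, R(\theta; e) \le t \,\big] \ge \alpha \Big\},
$$

so a minimizer is guaranteed risk at most $F_\alpha(\theta)$ on at least an $\alpha$-fraction of domains.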

Toward Certified Robustness Against Real-World Distribution Shifts

1 code implementation • 8 Jun 2022 • Haoze Wu, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, George Pappas, Hamed Hassani, Corina Pasareanu, Clark Barrett

We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts.

Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds

no code implementations • 6 Jun 2022 • Aritra Mitra, Arman Adibi, George J. Pappas, Hamed Hassani

We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret.

Straggler-Resilient Personalized Federated Learning

1 code implementation • 5 Jun 2022 • Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari

Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.

Learning Theory • Personalized Federated Learning +1

Self-Consistency of the Fokker-Planck Equation

1 code implementation • 2 Jun 2022 • Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani

In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense.
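
For reference, one standard way to phrase the self-consistency idea above (our notation; the paper's exact setting may differ in details): the Fokker-Planck equation with potential $V$ and unit diffusion can be rewritten as a continuity equation driven by a density-dependent velocity field,

$$
\partial_t p_t \;=\; \nabla \cdot (p_t \nabla V) + \Delta p_t
\;=\; -\nabla \cdot \big(p_t\, v[p_t]\big),
\qquad
v[p](x) \;=\; -\nabla V(x) - \nabla \log p(x),
$$

so a hypothesis velocity field solves the FPE exactly when it coincides with $v[p]$ for the density it transports, which is the self-consistency condition the potential function is designed to measure.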

Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints

no code implementations • 1 Jun 2022 • Xinmeng Huang, Donghwan Lee, Edgar Dobriban, Hamed Hassani

In modern machine learning, users often have to collaborate to learn the distribution of the data.

FedAvg with Fine Tuning: Local Updates Lead to Representation Learning

no code implementations • 27 May 2022 • Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai

We show that the reason behind the generalizability of FedAvg's output is its power in learning the common data representation among the clients' tasks by leveraging the diversity among client data distributions via local updates.

Federated Learning • Image Classification +1

Distributed Statistical Min-Max Learning in the Presence of Byzantine Agents

no code implementations • 7 Apr 2022 • Arman Adibi, Aritra Mitra, George J. Pappas, Hamed Hassani

Recent years have witnessed a growing interest in the topic of min-max optimization, owing to its relevance in the context of generative adversarial networks (GANs), robust control and optimization, and reinforcement learning.

Neural Estimation of the Rate-Distortion Function With Applications to Operational Source Coding

1 code implementation • 4 Apr 2022 • Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti

Motivated by the empirical success of deep neural network (DNN) compressors on large, real-world data, we investigate methods to estimate the rate-distortion function on such data, which would allow comparison of DNN compressors with optimality.

Data Compression
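
For context, the quantity being estimated above is the classical rate-distortion function of a source $X$ under a distortion measure $d$:

$$
R(D) \;=\; \min_{p_{\hat X \mid X}\,:\; \mathbb{E}[\,d(X, \hat X)\,] \le D} I(X; \hat X),
$$

which lower-bounds the rate of any compressor achieving expected distortion at most $D$; estimating this curve on real data is what allows DNN compressors to be compared against optimality.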

Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks

1 code implementation • 2 Apr 2022 • Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, Rajeev Alur

Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.

Image Classification • Navigate

Performance-Robustness Tradeoffs in Adversarially Robust Linear-Quadratic Control

no code implementations • 21 Mar 2022 • Bruce D. Lee, Thomas T. C. K. Zhang, Hamed Hassani, Nikolai Matni

Though this fundamental tradeoff between nominal performance and robustness is known to exist, it is not well-characterized in quantitative terms.

Do Deep Networks Transfer Invariances Across Classes?

1 code implementation • ICLR 2022 • Allan Zhou, Fahim Tajwar, Alexander Robey, Tom Knowles, George J. Pappas, Hamed Hassani, Chelsea Finn

Based on this analysis, we show how a generative approach for learning the nuisance transformations can help transfer invariances across classes and improve performance on a set of imbalanced image classification benchmarks.

Image Classification • Long-tail Learning

Binary Classification Under $\ell_0$ Attacks for General Noise Distribution

no code implementations • 9 Mar 2022 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

We introduce a classification method which employs a nonlinear component called truncation, and show that, in an asymptotic scenario, as long as the adversary is restricted to perturbing no more than $\sqrt{d}$ data samples, we can almost achieve the optimal classification error attained in the absence of the adversary, i.e., we can completely neutralize the adversary's effect.

Binary Classification • Classification

T-Cal: An optimal test for the calibration of predictive models

1 code implementation • 3 Mar 2022 • Donghwan Lee, Xinmeng Huang, Hamed Hassani, Edgar Dobriban

We find that detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions.
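
For context on what detecting mis-calibration involves, below is the standard binned calibration diagnostic (expected calibration error). This is a common plug-in baseline, not the paper's T-Cal test, which is designed as an optimal statistical test of calibration.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Standard binned ECE for binary predictions: a plug-in diagnostic,
    NOT the T-Cal test from the paper above.
    probs: predicted P(y=1); labels: 0/1 outcomes."""
    probs, labels = np.asarray(probs, dtype=float), np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs <= hi) if hi == 1.0 else (probs >= lo) & (probs < hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())  # confidence vs. accuracy in the bin
            ece += mask.mean() * gap                             # weight by bin mass
    return ece
```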

Linear Stochastic Bandits over a Bit-Constrained Channel

no code implementations • 2 Mar 2022 • Aritra Mitra, Hamed Hassani, George J. Pappas

Specifically, in our setup, an agent interacting with an environment transmits encoded estimates of an unknown model parameter to a server over a communication channel of finite capacity.

Decision Making • Decision Making Under Uncertainty

What Functions Can Graph Neural Networks Generate?

no code implementations • 17 Feb 2022 • Mohammad Fereydounian, Hamed Hassani, Amin Karbasi

We prove that: (i) a GNN, as a graph function, is necessarily permutation compatible; (ii) conversely, any permutation compatible function, when restricted to input graphs with distinct node features, can be generated by a GNN; (iii) for arbitrary node features (not necessarily distinct), a simple feature augmentation scheme suffices to generate a permutation compatible function by a GNN; (iv) permutation compatibility can be verified by checking only quadratically many functional constraints, rather than by an exhaustive search over all the permutations; (v) GNNs can generate any graph function once we augment the node features with node identities, thus going beyond graph isomorphism and permutation compatibility.
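
A compact statement of the permutation-compatibility property referenced above (notation is ours): a node-level graph function $f$, mapping an adjacency matrix $A$ and node features $X$ to per-node outputs, is permutation compatible if relabeling the nodes relabels the outputs accordingly,

$$
f\big(P A P^\top,\; P X\big) \;=\; P\, f(A, X)
\qquad \text{for every permutation matrix } P .
$$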

Probabilistically Robust Learning: Balancing Average- and Worst-case Performance

1 code implementation • 2 Feb 2022 • Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani

From a theoretical point of view, this framework overcomes the trade-offs between the performance and the sample-complexity of worst-case and average-case learning.

Efficient and Robust Classification for Sparse Attacks

no code implementations • 23 Jan 2022 • Mark Beliaev, Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

In the past two decades we have seen the popularity of neural networks increase in conjunction with their classification accuracy.

Classification • Malware Detection +1

The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression

no code implementations • 13 Jan 2022 • Hamed Hassani, Adel Javanmard

Our developed theory reveals the nontrivial effect of overparametrization on robustness and indicates that for adversarially trained random features models, high overparametrization can hurt robust generalization.

regression

Adversarial Tradeoffs in Robust State Estimation

no code implementations • 17 Nov 2021 • Thomas T. C. K. Zhang, Bruce D. Lee, Hamed Hassani, Nikolai Matni

We provide an algorithm to find this perturbation given data realizations, and develop upper and lower bounds on the adversarial state estimation error in terms of the standard (non-adversarial) estimation error and the spectral properties of the resulting observer.

Minimax Optimization: The Case of Convex-Submodular

no code implementations • 1 Nov 2021 • Arman Adibi, Aryan Mokhtari, Hamed Hassani

Prior literature has thus far mainly focused on studying such problems in the continuous domain, e.g., convex-concave minimax optimization is now understood to a significant extent.

Adversarial Robustness with Semi-Infinite Constrained Learning

no code implementations • NeurIPS 2021 • Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro

In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions, which we characterize completely.

Adversarial Robustness

Out-of-Distribution Robustness in Deep Learning Compression

no code implementations • 13 Oct 2021 • Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti

In recent years, deep neural network (DNN) compression systems have proved to be highly effective for designing source codes for many natural sources.

An Agnostic Approach to Federated Learning with Class Imbalance

no code implementations • ICLR 2022 • Zebang Shen, Juan Cervino, Hamed Hassani, Alejandro Ribeiro

Federated Learning (FL) has emerged as the tool of choice for training deep models over heterogeneous and decentralized datasets.

Federated Learning

Exploiting Heterogeneity in Robust Federated Best-Arm Identification

no code implementations • 13 Sep 2021 • Aritra Mitra, Hamed Hassani, George Pappas

We study a federated variant of the best-arm identification problem in stochastic multi-armed bandits: a set of clients, each of whom can sample only a subset of the arms, collaborate via a server to identify the best arm (i.e., the arm with the highest mean reward) with prescribed confidence.

Multi-Armed Bandits

AutoEKF: Scalable System Identification for COVID-19 Forecasting from Large-Scale GPS Data

no code implementations • 28 Jun 2021 • Francisco Barreras, Mikhail Hayhoe, Hamed Hassani, Victor M. Preciado

The likelihood of the observations is estimated recursively using an Extended Kalman Filter and can be easily optimized using gradient-based methods to compute maximum likelihood estimators.

Bayesian Inference
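
The recursive likelihood computation mentioned above is typically the EKF prediction-error decomposition. The sketch below is a generic, textbook version under assumed state-transition and observation maps `f`/`h` with Jacobians `F_jac`/`H_jac`; it is not the paper's specific epidemiological model.

```python
import numpy as np

def ekf_log_likelihood(y, x0, P0, f, F_jac, h, H_jac, Q, R):
    """Generic EKF log-likelihood via the prediction-error decomposition
    (a textbook recursion, not the AutoEKF model itself)."""
    x, P, ll = x0, P0, 0.0
    for yt in y:
        # Predict the state and its covariance
        F = F_jac(x)
        x = f(x)
        P = F @ P @ F.T + Q
        # Innovation and its covariance
        H = H_jac(x)
        v = yt - h(x)
        S = H @ P @ H.T + R
        # Accumulate the Gaussian log-likelihood of the innovation
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S)) + v @ np.linalg.solve(S, v))
        # Measurement update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
    return ll
```

Because the recursion is differentiable in the model parameters, the resulting log-likelihood can be maximized with standard gradient-based optimizers, as the abstract describes.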

Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model

no code implementations • 5 Apr 2021 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

Under the assumption that data is distributed according to the Gaussian mixture model, our goal is to characterize the optimal robust classifier and the corresponding robust classification error as well as a variety of trade-offs between robustness, accuracy, and the adversary's budget.

Classification • General Classification +1

Federated Functional Gradient Boosting

no code implementations • 11 Mar 2021 • Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi

First, in the semi-heterogeneous setting, when the marginal distributions of the feature vectors on client machines are identical, we develop the federated functional gradient boosting (FFGB) method that provably converges to the global minimum.

Federated Learning

Model-Based Domain Generalization

1 code implementation • NeurIPS 2021 • Alexander Robey, George J. Pappas, Hamed Hassani

Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data.

Domain Generalization

Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients

no code implementations • NeurIPS 2021 • Aritra Mitra, Rayana Jaafar, George J. Pappas, Hamed Hassani

We consider a standard federated learning (FL) architecture where a group of clients periodically coordinate with a central server to train a statistical model.

Federated Learning

Exploiting Shared Representations for Personalized Federated Learning

3 code implementations • 14 Feb 2021 • Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai

Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.

Meta-Learning • Multi-Task Learning +2
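
A schematic of the shared-representation-plus-local-heads idea, written for a linear model; this is a simplified sketch for intuition only, not the authors' released code (linked above), and the function and variable names are ours.

```python
import numpy as np

def shared_rep_round(B, heads, client_data, lr=0.1, head_steps=5):
    """One schematic round: clients fit personal heads on a frozen shared
    representation B (d x k), then the server averages their updates to B.
    Model: y ≈ x @ B @ w_i for client i."""
    B_updates = []
    for i, (X, y) in enumerate(client_data):
        w = heads[i]
        # 1) fit the personal head with the representation frozen
        for _ in range(head_steps):
            Z = X @ B                                   # features in the shared space
            w -= lr * Z.T @ (Z @ w - y) / len(y)
        heads[i] = w
        # 2) one gradient step on the shared representation with the head frozen
        grad_B = X.T @ (X @ B @ w - y)[:, None] * w[None, :] / len(y)
        B_updates.append(B - lr * grad_B)
    # 3) server averages the clients' representation updates
    return np.mean(B_updates, axis=0), heads
```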

Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity

no code implementations • 28 Dec 2020 • Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani

Federated Learning is a novel paradigm that involves learning from data samples distributed across a large network of clients while the data remains local.

Federated Learning

Sinkhorn Natural Gradient for Generative Models

no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.

Sinkhorn Barycenter via Functional Gradient Descent

no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.

Submodular Meta-Learning

1 code implementation • NeurIPS 2020 • Arman Adibi, Aryan Mokhtari, Hamed Hassani

Motivated by this terminology, we propose a novel meta-learning framework in the discrete domain where each task is equivalent to maximizing a set function under a cardinality constraint.

Meta-Learning

Safe Learning under Uncertain Objectives and Constraints

no code implementations • 23 Jun 2020 • Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani

More precisely, by assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal ${\mathcal{O}}({1}/{\epsilon^2})$ gradient oracle complexity (resp.

Learning to Track Dynamic Targets in Partially Known Environments

1 code implementation • 17 Jun 2020 • Heejin Jeong, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas

In particular, we introduce Active Tracking Target Network (ATTN), a unified RL policy that is capable of solving major sub-tasks of active target tracking -- in-sight tracking, navigation, and exploration.

Navigate • Reinforcement Learning (RL)

Provable tradeoffs in adversarially robust classification

no code implementations • 9 Jun 2020 • Edgar Dobriban, Hamed Hassani, David Hong, Alexander Robey

It is well known that machine learning methods can be vulnerable to adversarially-chosen perturbations of their inputs.

Classification • General Classification +1

Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data

1 code implementation • 20 May 2020 • Alexander Robey, Hamed Hassani, George J. Pappas

Indeed, natural variation such as lighting or weather conditions can significantly degrade the accuracy of trained neural networks, showing that such natural variation presents a significant challenge for deep learning.

Adversarial Robustness

Precise Tradeoffs in Adversarial Training for Linear Regression

no code implementations • 24 Feb 2020 • Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani

Furthermore, we precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach in a high-dimensional regime where the number of data points and the parameters of the model grow in proportion to each other.

regression
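
As a concrete instance of the min-max adversarial training objective analyzed above (a standard formulation with $\ell_2$-bounded perturbations; the paper's precise setup may differ), the inner maximization admits a closed form:

$$
\min_{\theta}\; \mathbb{E}\Big[\max_{\|\delta\|_2 \le \epsilon} \big(y - \theta^\top (x+\delta)\big)^2\Big]
\;=\;
\min_{\theta}\; \mathbb{E}\Big[\big(|y - \theta^\top x| + \epsilon\,\|\theta\|_2\big)^2\Big],
$$

which makes explicit how the perturbation budget $\epsilon$ penalizes the norm of the estimator and drives the standard/robust accuracy tradeoff.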

Quantized Decentralized Stochastic Learning over Directed Graphs

no code implementations • ICML 2020 • Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph.

Quantization

Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match

no code implementations • NeurIPS 2019 • Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen

Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\mathrm{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle.
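
For readers unfamiliar with this family of methods, the sketch below shows a plain continuous-greedy (Frank-Wolfe-style) template with minibatch-averaged stochastic gradients. It is a simplified illustration only; SCG++ itself adds variance reduction to reach the oracle complexities stated above, which this sketch does not implement.

```python
import numpy as np

def continuous_greedy_sketch(stoch_grad, lmo, d, T=100, batch=32):
    """Schematic continuous greedy for monotone DR-submodular maximization
    over a convex body K (assumed to contain the origin).
    stoch_grad(x): one unbiased gradient sample at x.
    lmo(g): solves max_{v in K} <v, g> (linear optimization oracle)."""
    x = np.zeros(d)
    for _ in range(T):
        g = np.mean([stoch_grad(x) for _ in range(batch)], axis=0)  # averaged gradient estimate
        v = lmo(g)                      # ascent direction from the linear oracle
        x = x + v / T                   # step size 1/T; after T steps x lies in K
    return x
```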

Learning Q-network for Active Information Acquisition

2 code implementations • 23 Oct 2019 • Heejin Jeong, Brent Schlotfeldt, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas

In this paper, we propose a novel Reinforcement Learning approach for solving the Active Information Acquisition problem, which requires an agent to choose a sequence of actions in order to acquire information about a process of interest using on-board sensors.

reinforcement-learning • Reinforcement Learning (RL)

One Sample Stochastic Frank-Wolfe

no code implementations • 10 Oct 2019 • Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi

One of the beauties of the projected gradient descent method lies in its rather simple mechanism and yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications.

Optimal Algorithms for Submodular Maximization with Distributed Constraints

no code implementations • 30 Sep 2019 • Alexander Robey, Arman Adibi, Brent Schlotfeldt, George J. Pappas, Hamed Hassani

Given this distributed setting, we develop Constraint-Distributed Continuous Greedy (CDCG), a message passing algorithm that converges to the tight $(1-1/e)$ approximation factor of the optimum global solution using only local computation and communication.

Robust and Communication-Efficient Collaborative Learning

1 code implementation • NeurIPS 2019 • Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider a decentralized learning problem, where a set of computing nodes aim at solving a non-convex optimization problem collaboratively.

Quantization

Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks

1 code implementation • NeurIPS 2019 • Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas

The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
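
As a point of reference, the naive Lipschitz upper bound that SDP-based estimators improve upon is the product of layer spectral norms, which is valid for feed-forward networks with 1-Lipschitz activations such as ReLU. The snippet below computes that loose baseline, not the paper's SDP.

```python
import numpy as np

def naive_lipschitz_upper_bound(weights):
    """Product of layer spectral norms: a valid but loose Lipschitz upper bound
    for a feed-forward network with 1-Lipschitz activations. SDP-based
    estimators such as the one above tighten this baseline."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value of the layer
    return bound

# Example with two random layers (32 -> 64 -> 10)
layers = [np.random.randn(64, 32), np.random.randn(10, 64)]
print(naive_lipschitz_upper_bound(layers))
```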

Stochastic Conditional Gradient++

no code implementations • 19 Feb 2019 • Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen

It is known that this rate is optimal in terms of stochastic gradient evaluations.

Stochastic Optimization

Black Box Submodular Maximization: Discrete and Continuous Settings

no code implementations • 28 Jan 2019 • Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi

In this paper, we consider the problem of black box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided.

Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs

1 code implementation • ICLR 2019 • Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi

Building on the success of deep learning, two modern approaches to learn a probability model from the data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).

Discrete Sampling using Semigradient-based Product Mixtures

no code implementations • 4 Jul 2018 • Alkis Gotovos, Hamed Hassani, Andreas Krause, Stefanie Jegelka

We consider the problem of inference in discrete probabilistic models, that is, distributions over subsets of a finite ground set.

Point Processes

An Exact Quantized Decentralized Gradient Descent Algorithm

no code implementations • 29 Jun 2018 • Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider the problem of decentralized consensus optimization, where the sum of $n$ smooth and strongly convex functions is minimized over $n$ distributed agents that form a connected network.

Distributed Optimization • Quantization

Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization

no code implementations • 24 Apr 2018 • Aryan Mokhtari, Hamed Hassani, Amin Karbasi

Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)\mathrm{OPT}-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations.

Stochastic Optimization

Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity

no code implementations • ICML 2018 • Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi

We also propose One-Shot Frank-Wolfe, a simpler algorithm which requires only a single stochastic gradient estimate in each round and achieves an $O(T^{2/3})$ stochastic regret bound for convex and continuous submodular optimization.

Online Continuous Submodular Maximization

no code implementations • 16 Feb 2018 • Lin Chen, Hamed Hassani, Amin Karbasi

For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves an $O(\sqrt{T})$ regret bound, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight.

Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

no code implementations • 5 Nov 2017 • Aryan Mokhtari, Hamed Hassani, Amin Karbasi

More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that the proposed conditional gradient method achieves a $[(1-1/e)\text{OPT} - \epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations.

Stochastic Submodular Maximization: The Case of Coverage Functions

no code implementations • NeurIPS 2017 • Mohammad Reza Karimi, Mario Lucic, Hamed Hassani, Andreas Krause

By exploiting that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions.

Clustering • Stochastic Optimization
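
One common continuous extension used in this line of work is the multilinear extension; the abstract above refers generically to extensions that act linearly on submodular functions, so treating it as the multilinear extension is an assumption on our part. For $f : 2^V \to \mathbb{R}$ and $x \in [0,1]^V$,

$$
F(x) \;=\; \mathbb{E}_{S \sim x}\big[f(S)\big] \;=\; \sum_{S \subseteq V} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j),
$$

where $S \sim x$ includes each element $i$ independently with probability $x_i$; projected stochastic gradient ascent is run on such an extension and the fractional solution is then rounded to a discrete set.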

Gradient Methods for Submodular Maximization

no code implementations • NeurIPS 2017 • Hamed Hassani, Mahdi Soltanolkotabi, Amin Karbasi

Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints.

Active Learning

Accelerated Dual Learning by Homotopic Initialization

no code implementations • 13 Jun 2017 • Hadi Daneshmand, Hamed Hassani, Thomas Hofmann

Gradient descent and coordinate descent are well understood in terms of their asymptotic behavior, but less so in a transient regime often used for approximations in machine learning.

Learning to Use Learners' Advice

no code implementations • 16 Feb 2017 • Adish Singla, Hamed Hassani, Andreas Krause

In our setting, the feedback at any time $t$ is limited in the sense that it is only available to the expert $i^t$ that has been selected by the central algorithm (forecaster), i.e., only the expert $i^t$ receives feedback from the environment and gets to learn at time $t$.

Blocking Multi-Armed Bandits

Fast and Provably Good Seedings for k-Means

1 code implementation • NeurIPS 2016 • Olivier Bachem, Mario Lucic, Hamed Hassani, Andreas Krause

Seeding, the task of finding initial cluster centers, is critical for obtaining high-quality clusterings for k-Means.

Clustering
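
For context, the classical k-means++ ($D^2$) seeding procedure, whose solution quality fast seeding methods like the one above aim to match at much lower cost, looks like this. This is a standard textbook implementation, not the paper's sampler.

```python
import numpy as np

def kmeanspp_seeding(X, k, rng=np.random.default_rng(0)):
    """Classical k-means++ (D^2) seeding for an (n, d) data matrix X."""
    n = len(X)
    centers = [X[rng.integers(n)]]                       # first center chosen uniformly
    for _ in range(k - 1):
        # squared distance of each point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                            # sample proportional to squared distance
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)
```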

Near-Optimal Active Learning of Halfspaces via Query Synthesis in the Noisy Setting

no code implementations • 11 Mar 2016 • Lin Chen, Hamed Hassani, Amin Karbasi

This problem has recently gained a lot of interest in automated science and adversarial reverse engineering, for which only heuristic algorithms are known.

Active Learning

Sampling from Probabilistic Submodular Models

no code implementations • NeurIPS 2015 • Alkis Gotovos, Hamed Hassani, Andreas Krause

Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively.

Point Processes
