Search Results for author: Hamed Hassani

Found 46 papers, 9 papers with code

Adversarial Tradeoffs in Linear Inverse Problems and Robust State Estimation

no code implementations17 Nov 2021 Bruce D. Lee, Thomas T. C. K. Zhang, Hamed Hassani, Nikolai Matni

Adversarially robust training has been shown to reduce the susceptibility of learned models to targeted input data perturbations.

Minimax Optimization: The Case of Convex-Submodular

no code implementations1 Nov 2021 Arman Adibi, Aryan Mokhtari, Hamed Hassani

Prior literature has thus far mainly focused on studying such problems in the continuous domain, e.g., convex-concave minimax optimization is now understood to a significant extent.

Adversarial Robustness with Semi-Infinite Constrained Learning

no code implementations NeurIPS 2021 Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro

In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions, which we characterize completely.

Adversarial Robustness

Out-of-Distribution Robustness in Deep Learning Compression

no code implementations13 Oct 2021 Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti

In recent years, deep neural network (DNN) compression systems have proved to be highly effective for designing source codes for many natural sources.

Exploiting Heterogeneity in Robust Federated Best-Arm Identification

no code implementations13 Sep 2021 Aritra Mitra, Hamed Hassani, George Pappas

We study a federated variant of the best-arm identification problem in stochastic multi-armed bandits: a set of clients, each of whom can sample only a subset of the arms, collaborate via a server to identify the best arm (i.e., the arm with the highest mean reward) with prescribed confidence.

Multi-Armed Bandits

AutoEKF: Scalable System Identification for COVID-19 Forecasting from Large-Scale GPS Data

no code implementations28 Jun 2021 Francisco Barreras, Mikhail Hayhoe, Hamed Hassani, Victor M. Preciado

The likelihood of the observations is estimated recursively using an Extended Kalman Filter and can be easily optimized using gradient-based methods to compute maximum likelihood estimators.
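
A minimal sketch of such a recursive likelihood computation, assuming a generic nonlinear state-space model with user-supplied dynamics `f`, observation map `h`, and their Jacobians (illustrative only, not the paper's epidemiological model):

```python
import numpy as np

def ekf_loglik(ys, x0, P0, f, F_jac, h, H_jac, Q, R):
    """Extended Kalman Filter that recursively accumulates the Gaussian
    log-likelihood of observations ys under a generic nonlinear
    state-space model. f/h are the transition/observation maps,
    F_jac/H_jac their Jacobians, Q/R the noise covariances."""
    x, P, loglik = x0, P0, 0.0
    for y in ys:
        # Predict: propagate mean and covariance through the dynamics.
        F = F_jac(x)
        x = f(x)
        P = F @ P @ F.T + Q
        # Update: linearize the observation map around the prediction.
        H = H_jac(x)
        v = y - h(x)                      # innovation
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
        # Gaussian innovation log-likelihood, accumulated recursively.
        loglik += -0.5 * (v @ np.linalg.solve(S, v)
                          + np.linalg.slogdet(S)[1]
                          + len(v) * np.log(2 * np.pi))
    return loglik
```

The returned log-likelihood can then be maximized over the model parameters with any gradient-based optimizer, as the abstract describes.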

Bayesian Inference

Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model

no code implementations5 Apr 2021 Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

Under the assumption that data is distributed according to the Gaussian mixture model, our goal is to characterize the optimal robust classifier and the corresponding robust classification error as well as a variety of trade-offs between robustness, accuracy, and the adversary's budget.

Classification, General Classification +1

Federated Functional Gradient Boosting

no code implementations11 Mar 2021 Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi

First, in the semi-heterogeneous setting, when the marginal distributions of the feature vectors on client machines are identical, we develop the federated functional gradient boosting (FFGB) method that provably converges to the global minimum.
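
As a toy illustration of functional gradient boosting in a federated loop (a sketch under simplifying assumptions, not the paper's FFGB algorithm or its convergence analysis; the helper names and hyperparameters are illustrative), each client fits a weak learner to its local residuals and the server averages the new learners:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class AveragedLearner:
    """Server-side aggregate: averages the predictions of per-client weak learners."""
    def __init__(self, learners): self.learners = learners
    def predict(self, X): return np.mean([h.predict(X) for h in self.learners], axis=0)

def ensemble_predict(ensemble, X, lr):
    return sum(lr * h.predict(X) for h in ensemble) if ensemble else np.zeros(len(X))

def federated_boosting_round(ensemble, clients, lr=0.1, depth=3):
    """One toy round for squared loss: each client fits a tree to the negative
    functional gradient (its residuals) of the current ensemble on local data,
    and the server appends the averaged learner to the shared ensemble."""
    new_learners = []
    for X_k, y_k in clients:                    # clients = [(X_k, y_k), ...]
        residuals = y_k - ensemble_predict(ensemble, X_k, lr)
        new_learners.append(DecisionTreeRegressor(max_depth=depth).fit(X_k, residuals))
    ensemble.append(AveragedLearner(new_learners))
    return ensemble
```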

Federated Learning

Model-Based Domain Generalization

1 code implementation NeurIPS 2021 Alexander Robey, George J. Pappas, Hamed Hassani

Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data.

Domain Generalization

Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients

no code implementations NeurIPS 2021 Aritra Mitra, Rayana Jaafar, George J. Pappas, Hamed Hassani

We consider a standard federated learning (FL) architecture where a group of clients periodically coordinate with a central server to train a statistical model.
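
For reference, the standard architecture being analyzed looks roughly like FedAvg-style local updates with periodic server averaging; a minimal sketch (the paper's corrections for client heterogeneity and sparse gradients are not shown, and `grad` is a user-supplied local gradient oracle):

```python
import numpy as np

def fedavg(clients, grad, w0, rounds=100, local_steps=5, lr=0.1):
    """Generic FL loop: each round, every client runs a few local gradient
    steps from the current global model, then the server averages the results.
    grad(w, data) returns the local gradient of client `data` at `w`."""
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for data in clients:
            w_k = w.copy()
            for _ in range(local_steps):      # local SGD at the client
                w_k -= lr * grad(w_k, data)
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)     # server-side averaging
    return w
```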

Federated Learning

Exploiting Shared Representations for Personalized Federated Learning

1 code implementation14 Feb 2021 Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai

Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
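
A linear toy version of the shared-representation idea, with a global matrix `B` and per-client heads `w_k` updated alternately (a simplified sketch, not the exact procedure from the paper; names and step sizes are illustrative):

```python
import numpy as np

def fedrep_round(B, heads, clients, lr_head=0.1, lr_rep=0.01, head_steps=10):
    """Linear toy model x -> w_k^T (B x): each client first refits its local
    head w_k with the representation B frozen, then computes a gradient for B;
    the server averages the representation gradients and updates B."""
    B_grads = []
    for k, (X, y) in enumerate(clients):
        Z = X @ B.T                               # shared features for client k
        w = heads[k]
        for _ in range(head_steps):               # local head updates
            w -= lr_head * Z.T @ (Z @ w - y) / len(y)
        heads[k] = w
        resid = Z @ w - y                          # gradient w.r.t. B with w frozen
        B_grads.append(np.outer(w, resid @ X) / len(y))
    B -= lr_rep * np.mean(B_grads, axis=0)         # server averages and updates B
    return B, heads
```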

Meta-Learning, Multi-Task Learning +2

Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity

no code implementations28 Dec 2020 Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani

Federated Learning is a novel paradigm that involves learning from data samples distributed across a large network of clients while the data remains local.

Federated Learning

Sinkhorn Natural Gradient for Generative Models

no code implementations NeurIPS 2020 Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
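
For context, the Sinkhorn divergence that SiNG descends on can be evaluated with the standard Sinkhorn fixed-point iterations; a minimal sketch for discrete measures with support points `x`, `y` and weights `a`, `b` (the natural-gradient preconditioning itself is not shown, and the entropic cost is simplified to the transport cost of the regularized plan):

```python
import numpy as np

def entropic_ot(a, b, C, eps=0.1, iters=200):
    """Entropic OT cost between weight vectors a, b with cost matrix C,
    computed via the classical Sinkhorn fixed-point iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # regularized transport plan
    return np.sum(P * C)

def sinkhorn_divergence(x, y, a, b, eps=0.1):
    """Debiased Sinkhorn divergence:
    S_eps = OT_eps(mu, nu) - (OT_eps(mu, mu) + OT_eps(nu, nu)) / 2."""
    C = lambda p, q: np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return (entropic_ot(a, b, C(x, y), eps)
            - 0.5 * entropic_ot(a, a, C(x, x), eps)
            - 0.5 * entropic_ot(b, b, C(y, y), eps))
```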

Sinkhorn Barycenter via Functional Gradient Descent

no code implementations NeurIPS 2020 Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.

Submodular Meta-Learning

1 code implementation NeurIPS 2020 Arman Adibi, Aryan Mokhtari, Hamed Hassani

Motivated by this terminology, we propose a novel meta-learning framework in the discrete domain where each task is equivalent to maximizing a set function under a cardinality constraint.
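
For context, the per-task problem is classical cardinality-constrained submodular maximization, for which the greedy rule below attains the $(1-1/e)$ guarantee for monotone submodular $F$; the paper's meta-learning layer, which shares part of the solution across tasks, sits on top of this and is not shown:

```python
def greedy_max(F, ground_set, k):
    """Classical greedy for max_{|S| <= k} F(S): repeatedly add the element
    with the largest marginal gain. For monotone submodular F this gives
    the (1 - 1/e) approximation guarantee."""
    S = set()
    for _ in range(k):
        gains = {e: F(S | {e}) - F(S) for e in ground_set if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S
```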

Meta-Learning

Safe Learning under Uncertain Objectives and Constraints

no code implementations23 Jun 2020 Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani

More precisely, assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal $\mathcal{O}(1/\epsilon^2)$ gradient oracle complexity.

Learning to Track Dynamic Targets in Partially Known Environments

1 code implementation17 Jun 2020 Heejin Jeong, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas

In particular, we introduce Active Tracking Target Network (ATTN), a unified RL policy that is capable of solving major sub-tasks of active target tracking -- in-sight tracking, navigation, and exploration.

Provable tradeoffs in adversarially robust classification

no code implementations9 Jun 2020 Edgar Dobriban, Hamed Hassani, David Hong, Alexander Robey

It is well known that machine learning methods can be vulnerable to adversarially-chosen perturbations of their inputs.

Classification, General Classification +1

Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data

1 code implementation20 May 2020 Alexander Robey, Hamed Hassani, George J. Pappas

Indeed, natural variation such as changes in lighting or weather conditions can significantly degrade the accuracy of trained neural networks, showing that such variation poses a serious challenge for deep learning.

Adversarial Robustness

Precise Tradeoffs in Adversarial Training for Linear Regression

no code implementations24 Feb 2020 Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani

Furthermore, we precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach in a high-dimensional regime where the number of data points and the parameters of the model grow in proportion to each other.
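
For $\ell_2$-bounded perturbations of radius $\epsilon$, the inner maximization in adversarial training of a linear model has the closed form $\max_{\|\delta\|_2 \le \epsilon} (y - \theta^\top(x+\delta))^2 = (|y - \theta^\top x| + \epsilon\|\theta\|_2)^2$, so the robust objective can be minimized directly; a minimal sketch of that training loop (illustrative only, separate from the paper's high-dimensional analysis):

```python
import numpy as np

def robust_ls_loss_grad(theta, X, y, eps):
    """Adversarial squared loss for linear regression with l2-bounded
    perturbations; the inner max is (|y - x.theta| + eps * ||theta||)^2."""
    r = y - X @ theta
    nrm = np.linalg.norm(theta) + 1e-12
    robust_resid = np.abs(r) + eps * nrm
    loss = np.mean(robust_resid ** 2)
    # Gradient of mean (|r_i| + eps * ||theta||)^2 w.r.t. theta.
    grad = np.mean(2 * robust_resid[:, None]
                   * (-np.sign(r)[:, None] * X + eps * theta / nrm), axis=0)
    return loss, grad

# Plain gradient descent on the robust objective (synthetic data).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)); theta_star = rng.standard_normal(5)
y = X @ theta_star + 0.1 * rng.standard_normal(200)
theta = np.zeros(5)
for _ in range(500):
    _, g = robust_ls_loss_grad(theta, X, y, eps=0.5)
    theta -= 0.05 * g
```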

Quantized Decentralized Stochastic Learning over Directed Graphs

no code implementations ICML 2020 Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph.

Quantization

Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match

no code implementations NeurIPS 2019 Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen

Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle.

Learning Q-network for Active Information Acquisition

2 code implementations23 Oct 2019 Heejin Jeong, Brent Schlotfeldt, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas

In this paper, we propose a novel Reinforcement Learning approach for solving the Active Information Acquisition problem, which requires an agent to choose a sequence of actions in order to acquire information about a process of interest using on-board sensors.
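
As background, the core of such a Q-network method is the temporal-difference update against a target network; a generic PyTorch sketch of one step (not the ATTN architecture or the active-information-acquisition state encoding, and the layer sizes are placeholders):

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
target_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_step(batch):
    """One TD step on a replay batch (s, a, r, s_next, done); `a` is a long
    tensor of action indices and `done` a float 0/1 tensor."""
    s, a, r, s_next, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)         # Q(s, a)
    with torch.no_grad():                                     # bootstrap target
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```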

One Sample Stochastic Frank-Wolfe

no code implementations10 Oct 2019 Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi

One of the beauties of the projected gradient descent method lies in its rather simple mechanism and yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications.

Optimal Algorithms for Submodular Maximization with Distributed Constraints

no code implementations30 Sep 2019 Alexander Robey, Arman Adibi, Brent Schlotfeldt, George J. Pappas, Hamed Hassani

Given this distributed setting, we develop Constraint-Distributed Continuous Greedy (CDCG), a message passing algorithm that converges to the tight $(1-1/e)$ approximation factor of the optimum global solution using only local computation and communication.

Robust and Communication-Efficient Collaborative Learning

1 code implementation NeurIPS 2019 Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider a decentralized learning problem in which a set of computing nodes aims to solve a non-convex optimization problem collaboratively.

Quantization

Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks

1 code implementation NeurIPS 2019 Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas

The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
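
A sketch of the one-hidden-layer ReLU instance of this type of semidefinite program, reconstructed from the standard slope-restriction argument (a simplified rendering in cvxpy, not the paper's full formulation or its scalable variants):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n0, n1, n2 = 4, 16, 3                        # input, hidden, output widths
W0 = rng.standard_normal((n1, n0)) / np.sqrt(n0)
W1 = rng.standard_normal((n2, n1)) / np.sqrt(n1)

rho = cp.Variable(nonneg=True)               # rho upper-bounds L^2
t = cp.Variable(n1, nonneg=True)             # one multiplier per ReLU unit
T = cp.diag(t)

# For ReLU activations (slopes in [0, 1]) the matrix inequality reduces to
# [[-rho*I, W0^T T], [T W0, -2T + W1^T W1]] <= 0.
M = cp.bmat([[-rho * np.eye(n0), (T @ W0).T],
             [T @ W0, -2 * T + W1.T @ W1]])
prob = cp.Problem(cp.Minimize(rho), [(M + M.T) / 2 << 0])
prob.solve(solver=cp.SCS)

print("SDP Lipschitz bound   :", np.sqrt(rho.value))
print("spectral-norm product :", np.linalg.norm(W0, 2) * np.linalg.norm(W1, 2))
```

The printed comparison against the naive product of spectral norms illustrates why the SDP bound is typically much tighter.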

Stochastic Conditional Gradient++

no code implementations19 Feb 2019 Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen

It is known that this rate is optimal in terms of stochastic gradient evaluations.

Stochastic Optimization

Black Box Submodular Maximization: Discrete and Continuous Settings

no code implementations28 Jan 2019 Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi

In this paper, we consider the problem of black box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided.
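
In such black-box settings, gradients are typically replaced by two-point finite-difference estimates built from function values alone; a generic sketch of the estimator (an ingredient of, not a substitute for, the algorithms in the paper):

```python
import numpy as np

def two_point_grad_estimate(f, x, delta=1e-2, num_dirs=20, rng=None):
    """Zeroth-order gradient estimate of f at x from function values only:
    average (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u over random
    unit directions u, scaled by the dimension."""
    rng = rng or np.random.default_rng()
    d = len(x)
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return d * g / num_dirs          # the factor d makes the estimate nearly unbiased
```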

Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs

1 code implementation ICLR 2019 Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi

Building on the success of deep learning, two modern approaches to learn a probability model from the data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).

Discrete Sampling using Semigradient-based Product Mixtures

no code implementations4 Jul 2018 Alkis Gotovos, Hamed Hassani, Andreas Krause, Stefanie Jegelka

We consider the problem of inference in discrete probabilistic models, that is, distributions over subsets of a finite ground set.
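
A standard baseline for this setting is single-site Gibbs sampling on the inclusion indicators of $p(S) \propto \exp(F(S))$; a minimal sketch for comparison (the paper's semigradient-based product mixtures are more elaborate proposals built on top of this):

```python
import numpy as np

def gibbs_subsets(F, ground_set, steps=1000, rng=None):
    """Single-site Gibbs sampler for p(S) proportional to exp(F(S)) over
    subsets of a finite ground set (a list): repeatedly pick an element and
    resample its inclusion indicator from the exact conditional."""
    rng = rng or np.random.default_rng()
    S = set()
    samples = []
    for _ in range(steps):
        e = ground_set[rng.integers(len(ground_set))]
        gain = F(S | {e}) - F(S - {e})          # log-odds of including e
        p_in = 1.0 / (1.0 + np.exp(-gain))
        if rng.random() < p_in:
            S.add(e)
        else:
            S.discard(e)
        samples.append(frozenset(S))
    return samples
```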

Point Processes

An Exact Quantized Decentralized Gradient Descent Algorithm

no code implementations29 Jun 2018 Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider the problem of decentralized consensus optimization, where the sum of $n$ smooth and strongly convex functions is minimized over $n$ distributed agents that form a connected network.
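
A generic sketch of quantized decentralized gradient descent over a doubly stochastic mixing matrix $W$, where nodes exchange stochastically quantized iterates (unlike the paper's exact method, this plain variant does not correct the quantization error and so only converges to a neighborhood of the optimum):

```python
import numpy as np

def quantize(x, levels=16, scale=1.0):
    """Unbiased stochastic quantizer onto a uniform grid of width scale/levels."""
    step = scale / levels
    low = np.floor(x / step) * step
    prob_up = (x - low) / step
    return low + step * (np.random.random(x.shape) < prob_up)

def quantized_dgd(W, grads, x0, steps=200, lr=0.05):
    """Decentralized GD: node i averages quantized neighbor iterates via the
    doubly stochastic mixing matrix W, then takes a local gradient step.
    grads[i](x) returns node i's local gradient at x."""
    n = len(grads)
    X = np.tile(x0, (n, 1)).astype(float)
    for _ in range(steps):
        Q = np.vstack([quantize(X[i]) for i in range(n)])   # what nodes transmit
        X = W @ Q - lr * np.vstack([grads[i](X[i]) for i in range(n)])
    return X.mean(axis=0)
```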

Distributed Optimization, Quantization

Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization

no code implementations24 Apr 2018 Aryan Mokhtari, Hamed Hassani, Amin Karbasi

Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)\text{OPT}-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations.
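
The method follows the stochastic continuous greedy template: average noisy gradients with a momentum weight, call a linear maximization oracle on the average, and move a $1/T$ fraction toward the returned vertex. A minimal sketch with one common choice of averaging weight (a simplified rendering, not a line-by-line implementation of the paper; `stoch_grad` and `lmo` are user-supplied oracles):

```python
import numpy as np

def stochastic_continuous_greedy(stoch_grad, lmo, dim, T=100):
    """Stochastic continuous greedy for monotone DR-submodular maximization:
    d averages noisy gradients (momentum), lmo(d) solves argmax_{v in C} <v, d>,
    and x moves a 1/T fraction toward that vertex at each step."""
    x = np.zeros(dim)
    d = np.zeros(dim)
    for t in range(1, T + 1):
        rho = 2.0 / (t + 3) ** (2.0 / 3)         # decaying averaging weight
        d = (1 - rho) * d + rho * stoch_grad(x)  # variance-reduced gradient estimate
        v = lmo(d)                               # best feasible direction
        x = x + v / T
    return x
```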

Stochastic Optimization

Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity

no code implementations ICML 2018 Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi

We also propose One-Shot Frank-Wolfe, a simpler algorithm which requires only a single stochastic gradient estimate in each round and achieves an $O(T^{2/3})$ stochastic regret bound for convex and continuous submodular optimization.

Online Continuous Submodular Maximization

no code implementations16 Feb 2018 Lin Chen, Hamed Hassani, Amin Karbasi

For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves an $O(\sqrt{T})$ regret bound, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight.

Stochastic Submodular Maximization: The Case of Coverage Functions

no code implementations NeurIPS 2017 Mohammad Reza Karimi, Mario Lucic, Hamed Hassani, Andreas Krause

By exploiting that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions.

Stochastic Optimization

Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

no code implementations5 Nov 2017 Aryan Mokhtari, Hamed Hassani, Amin Karbasi

More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that the proposed method achieves a $[(1-1/e)\text{OPT} - \epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations.

Gradient Methods for Submodular Maximization

no code implementations NeurIPS 2017 Hamed Hassani, Mahdi Soltanolkotabi, Amin Karbasi

Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints.

Active Learning

Accelerated Dual Learning by Homotopic Initialization

no code implementations13 Jun 2017 Hadi Daneshmand, Hamed Hassani, Thomas Hofmann

Gradient descent and coordinate descent are well understood in terms of their asymptotic behavior, but less so in a transient regime often used for approximations in machine learning.

Learning to Use Learners' Advice

no code implementations16 Feb 2017 Adish Singla, Hamed Hassani, Andreas Krause

In our setting, the feedback at any time $t$ is limited in the sense that it is only available to the expert $i^t$ that has been selected by the central algorithm (forecaster), i.e., only the expert $i^t$ receives feedback from the environment and gets to learn at time $t$.

Multi-Armed Bandits

Fast and Provably Good Seedings for k-Means

no code implementations NeurIPS 2016 Olivier Bachem, Mario Lucic, Hamed Hassani, Andreas Krause

Seeding, the task of finding initial cluster centers, is critical in obtaining high-quality clusterings for k-Means.
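
The reference point here is exact k-means++ ($D^2$) seeding, which the fast seeding methods in this line of work approximate; a minimal sketch of the exact sampler:

```python
import numpy as np

def kmeanspp_seeding(X, k, rng=None):
    """Exact k-means++ (D^2) seeding: pick the first center uniformly, then
    sample each subsequent center with probability proportional to its
    squared distance to the nearest center chosen so far."""
    rng = rng or np.random.default_rng()
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```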

Near-Optimal Active Learning of Halfspaces via Query Synthesis in the Noisy Setting

no code implementations11 Mar 2016 Lin Chen, Hamed Hassani, Amin Karbasi

This problem has recently gained considerable interest in automated science and adversarial reverse engineering, for which only heuristic algorithms are known.

Active Learning

Sampling from Probabilistic Submodular Models

no code implementations NeurIPS 2015 Alkis Gotovos, Hamed Hassani, Andreas Krause

Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively.

Point Processes
