Search Results for author: Babak Hassibi

Found 48 papers, 3 papers with code

The Performance Analysis of Generalized Margin Maximizers on Separable Data

no code implementations ICML 2020 Fariborz Salehi, Ehsan Abbasi, Babak Hassibi

The performance of the hard-margin SVM has recently been analyzed in [montanari2019generalization, deng2019model].

Binary Classification

One-Bit Quantization and Sparsification for Multiclass Linear Classification via Regularized Regression

no code implementations16 Feb 2024 Reza Ghane, Danil Akhtiamov, Babak Hassibi

We study the use of linear regression for multiclass classification in the over-parametrized regime where some of the training data is mislabeled.
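
A minimal sketch of this setup (the dimensions, noise level, and ridge penalty below are illustrative assumptions, not the paper's exact regime):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 400, 3                  # over-parametrized: d > n (assumed sizes)
X = rng.standard_normal((n, d))
labels = rng.integers(k, size=n)
flip = rng.random(n) < 0.1             # mislabel 10% of the training data
labels[flip] = rng.integers(k, size=flip.sum())
Y = np.eye(k)[labels]                  # one-hot regression targets

lam = 1.0                              # ridge (ell_2) regularization strength
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
pred = (X @ W).argmax(axis=1)          # classify by the largest linear score
```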

Classification Quantization +1

A Novel Gaussian Min-Max Theorem and its Applications

no code implementations12 Feb 2024 Danil Akhtiamov, David Bosch, Reza Ghane, K Nithin Varma, Babak Hassibi

A celebrated result by Gordon allows one to compare the min-max behavior of two Gaussian processes if certain inequality conditions are met.
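
For context, Gordon's comparison is typically applied in this literature in the form of the Convex Gaussian Min-Max Theorem (CGMT): for a Gaussian matrix $G$, independent Gaussian vectors $g, h$, compact sets $S_w, S_u$, and continuous $\psi$, the primary and auxiliary problems

$$\Phi(G) = \min_{w \in S_w} \max_{u \in S_u} \; u^\top G w + \psi(w, u), \qquad \phi(g, h) = \min_{w \in S_w} \max_{u \in S_u} \; \|w\|_2\, g^\top u + \|u\|_2\, h^\top w + \psi(w, u)$$

satisfy $\mathbb{P}(\Phi(G) < c) \le 2\, \mathbb{P}(\phi(g, h) \le c)$, so the simpler auxiliary problem controls the original min-max.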

Binary Classification Gaussian Processes

Wasserstein Distributionally Robust Regret-Optimal Control in the Infinite-Horizon

no code implementations28 Dec 2023 Taylan Kargin, Joudi Hajar, Vikrant Malik, Babak Hassibi

Our objective is to identify a control policy that minimizes the worst-case expected regret over an infinite horizon, considering all potential disturbance distributions within the ambiguity set.

Regret-Optimal Control under Partial Observability

no code implementations10 Nov 2023 Joudi Hajar, Oron Sabag, Babak Hassibi

This paper studies online solutions for regret-optimal control in partially observable systems over an infinite horizon.

Regularized Linear Regression for Binary Classification

no code implementations3 Nov 2023 Danil Akhtiamov, Reza Ghane, Babak Hassibi

Regularized linear regression is a promising approach for binary classification problems in which the training set has noisy labels, since the regularization term can help to avoid interpolating the mislabeled data points.
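
A minimal sketch of this setting (sizes, noise rate, and penalty are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 150, 300                        # over-parametrized regime: d > n
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d))
noisy = y.copy()
flip = rng.random(n) < 0.15            # flip 15% of the training labels
noisy[flip] *= -1

lam = 5.0                              # regularization discourages interpolating noise
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ noisy)
pred = np.sign(X @ w)                  # classify by the sign of the regression fit
```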

Binary Classification Classification +1

The Generalization Error of Stochastic Mirror Descent on Over-Parametrized Linear Models

no code implementations18 Feb 2023 Danil Akhtiamov, Babak Hassibi

This is best understood in linear over-parametrized models where it has been shown that the celebrated stochastic gradient descent (SGD) algorithm finds an interpolating solution that is closest in Euclidean distance to the initial weight vector.
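
This implicit bias is easy to verify numerically; a small demonstration (problem sizes and step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # over-parametrized: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w0 = rng.standard_normal(d)

# Closed form: the interpolating solution closest to w0 in Euclidean norm.
w_star = w0 + np.linalg.pinv(X) @ (y - X @ w0)

# Plain SGD on the square loss from the same initialization.
w = w0.copy()
for _ in range(20000):
    i = rng.integers(n)
    w -= 0.01 * (X[i] @ w - y[i]) * X[i]

print(np.linalg.norm(w - w_star))   # ~0: SGD found the closest interpolator
```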

Binary Classification

Precise Asymptotic Analysis of Deep Random Feature Models

no code implementations13 Feb 2023 David Bosch, Ashkan Panahi, Babak Hassibi

We provide exact asymptotic expressions for the performance of regression by an $L$-layer deep random feature (RF) model, where the input is mapped through multiple random embeddings and non-linear activation functions.
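
Schematically, such a model fixes the random weights and trains only the final linear readout; a minimal sketch (widths, activation, and ridge penalty are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, width, L = 100, 30, 200, 3        # L random layers (assumed sizes)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

feat = X
for layer in range(L):
    W = rng.standard_normal((feat.shape[1], width)) / np.sqrt(feat.shape[1])
    feat = np.maximum(feat @ W, 0.0)    # random embedding + ReLU activation

# only the final linear readout is trained (ridge regression)
lam = 1e-3
theta = np.linalg.solve(feat.T @ feat + lam * np.eye(width), feat.T @ y)
```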

Stochastic Mirror Descent in Average Ensemble Models

no code implementations27 Oct 2022 Taylan Kargin, Fariborz Salehi, Babak Hassibi

The stochastic mirror descent (SMD) algorithm is a general class of training algorithms that includes the celebrated stochastic gradient descent (SGD) as a special case.
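
Concretely, for a strictly convex potential $\psi$ and step size $\eta$, the SMD update on the instantaneous loss $L_{i_t}$ reads

$$\nabla \psi(w_{t+1}) = \nabla \psi(w_t) - \eta\, \nabla L_{i_t}(w_t),$$

and the choice $\psi(w) = \tfrac{1}{2}\|w\|_2^2$ recovers SGD.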

Binary Classification

Thompson Sampling Achieves $\tilde O(\sqrt{T})$ Regret in Linear Quadratic Control

no code implementations17 Jun 2022 Taylan Kargin, Sahin Lale, Kamyar Azizzadenesheli, Anima Anandkumar, Babak Hassibi

By carefully prescribing an early exploration strategy and a policy update rule, we show that TS achieves order-optimal regret in adaptive control of multidimensional stabilizable LQRs.

Decision Making Decision Making Under Uncertainty +1

Optimal Competitive-Ratio Control

no code implementations3 Jun 2022 Oron Sabag, Sahin Lale, Babak Hassibi

The key techniques that underpin our explicit solution are a reduction of the control problem to a Nehari problem and a novel factorization of the clairvoyant controller's cost.

Explicit Regularization via Regularizer Mirror Descent

no code implementations22 Feb 2022 Navid Azizan, Sahin Lale, Babak Hassibi

RMD starts with a standard cost, which is the sum of the training loss and a convex regularizer of the weights.

Continual Learning

Online estimation and control with optimal pathlength regret

no code implementations24 Oct 2021 Gautam Goel, Babak Hassibi

A natural goal when designing online learning algorithms for non-stationary environments is to bound the regret of the algorithm in terms of the temporal variation of the input sequence.
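
Schematically, for an input (disturbance) sequence $w_1, \dots, w_T$, the temporal variation is measured by the pathlength, and the goal is a regret bound of the form

$$\mathrm{Regret}_T \;\lesssim\; \sum_{t=2}^{T} \|w_t - w_{t-1}\|,$$

which scales with how much the inputs actually move rather than with the horizon $T$.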

How to Query An Oracle? Efficient Strategies to Label Data

no code implementations5 Oct 2021 Farshad Lahouti, Victoria Kostina, Babak Hassibi

Empirical studies suggest that each triplet query takes an expert at most 50% more time compared with a pairwise query, indicating the effectiveness of the proposed $k$-ary query schemes.

Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems

no code implementations26 Aug 2021 Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar

Using these guarantees, we design adaptive control algorithms for unknown ARX systems with arbitrary strongly convex or convex quadratic regulating costs.

Competitive Control

no code implementations28 Jul 2021 Gautam Goel, Babak Hassibi

We consider control from the perspective of competitive analysis.

Model Predictive Control

Regret-optimal Estimation and Control

no code implementations22 Jun 2021 Gautam Goel, Babak Hassibi

We consider estimation and control in linear time-varying dynamical systems from the perspective of regret minimization.

Model Predictive Control

Regret-Optimal LQR Control

no code implementations4 May 2021 Oron Sabag, Gautam Goel, Sahin Lale, Babak Hassibi

Motivated by competitive analysis in online learning, we introduce the dynamic regret as a criterion for controller design: the difference between the LQR cost of a causal controller (which has access only to past disturbances) and the LQR cost of the unique clairvoyant controller (which also has access to future disturbances) that is known to dominate all other controllers.
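
In symbols, writing $J(\pi; w)$ for the LQR cost of policy $\pi$ under disturbance sequence $w$, the dynamic regret of a causal policy $\pi$ against the clairvoyant optimal $\pi^\star$ is

$$R(\pi; w) = J(\pi; w) - J(\pi^\star; w),$$

and the regret-optimal controller minimizes this gap in a worst-case sense over the disturbances.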

Learning Theory

Manifold Optimization for High Accuracy Spatial Location Estimation Using Ultrasound Waves

no code implementations28 Mar 2021 Mohammed H. AlSharif, Ahmed Douik, Mohanad Ahmed, Tareq Y. Al-Naffouri, Babak Hassibi

This paper reports the design of a high-accuracy spatial location estimation method using ultrasound waves by exploiting the fixed geometry of the transmitters.


Regret-Optimal Filtering for Prediction and Estimation

1 code implementation25 Jan 2021 Oron Sabag, Babak Hassibi

For the important case of signals that can be described by a time-invariant state-space model, we provide an explicit construction of the regret-optimal filter in both the estimation (causal) and the prediction (strictly causal) regimes.

Stability and Identification of Random Asynchronous Linear Time-Invariant Systems

no code implementations8 Dec 2020 Sahin Lale, Oguzhan Teke, Babak Hassibi, Anima Anandkumar

In this model, each state variable is updated randomly and asynchronously with some probability according to the underlying system dynamics.
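
A minimal simulation of this update model (the dynamics matrix and update probability below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, T = 4, 0.5, 1000                   # state dimension, update prob., horizon
A = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)  # hypothetical dynamics
x = rng.standard_normal(n)

for _ in range(T):
    x_next = A @ x                       # synchronous update target
    mask = rng.random(n) < p             # each state updates with probability p
    x = np.where(mask, x_next, x)        # the others hold their previous value
```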

Regret-optimal measurement-feedback control

no code implementations24 Nov 2020 Gautam Goel, Babak Hassibi

We consider measurement-feedback control in linear dynamical systems from the perspective of regret minimization.

Robustifying Binary Classification to Adversarial Perturbation

no code implementations29 Oct 2020 Fariborz Salehi, Babak Hassibi

To this end, in this paper we consider the problem of binary classification with adversarial perturbations.

BIG-bench Machine Learning Binary Classification +2

The Performance Analysis of Generalized Margin Maximizer (GMM) on Separable Data

no code implementations29 Oct 2020 Fariborz Salehi, Ehsan Abbasi, Babak Hassibi

We also provide a detailed study of three special cases: (1) $\ell_2$-GMM, which is the max-margin classifier; (2) $\ell_1$-GMM, which encourages sparsity; and (3) $\ell_\infty$-GMM, which is often used when the parameter vector has binary entries.
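
For reference, the generalized margin maximizer replaces the squared Euclidean norm of the max-margin classifier with a general convex potential $f$; schematically, on separable data,

$$\hat w = \arg\min_{w} f(w) \quad \text{subject to} \quad y_i\, x_i^\top w \ge 1 \;\; \forall i,$$

so $f = \|\cdot\|_2^2$ gives the max-margin ($\ell_2$) solution, $f = \|\cdot\|_1$ promotes sparsity, and $f = \|\cdot\|_\infty$ suits binary-valued parameters.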

Binary Classification

Regret-optimal control in dynamic environments

no code implementations20 Oct 2020 Gautam Goel, Babak Hassibi

We consider control in linear time-varying dynamical systems from the perspective of regret minimization.

Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting

no code implementations12 Mar 2020 Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar

We study the problem of adaptive control in partially observable linear quadratic Gaussian control systems, where the model dynamics are unknown a priori.

The Power of Linear Controllers in LQR Control

no code implementations7 Feb 2020 Gautam Goel, Babak Hassibi

We also show that the cost of the optimal offline linear policy converges to the cost of the optimal online policy as the time horizon grows large, and consequently the optimal offline linear policy incurs linear regret relative to the optimal offline policy, even in the optimistic setting where the noise is drawn i.i.d. from a known distribution.

Differentially Quantized Gradient Methods

no code implementations6 Feb 2020 Chung-Yi Lin, Victoria Kostina, Babak Hassibi

We introduce the principle we call Differential Quantization (DQ), which prescribes compensating past quantization errors so as to direct the descent trajectory of a quantized algorithm towards that of its unquantized counterpart.
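
The error-compensation principle can be sketched in a few lines; this is a generic error-feedback loop under an assumed quantizer and problem parameters, not the paper's exact scheme:

```python
import numpy as np

def quantize(v, step=0.1):
    """Hypothetical uniform quantizer with the given step size."""
    return step * np.round(v / step)

# Gradient descent on f(x) = 0.5 ||A x - b||^2 with quantized descent
# directions: the error from past quantizations is carried forward so the
# quantized trajectory tracks the unquantized one.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 10)); b = rng.standard_normal(50)
x = np.zeros(10); err = np.zeros(10); eta = 0.01

for _ in range(500):
    g = A.T @ (A @ x - b)        # true gradient
    q = quantize(g + err)        # quantize the gradient plus carried-over error
    err = (g + err) - q          # remember what quantization destroyed
    x -= eta * q                 # descend along the quantized direction
```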

Distributed Optimization Quantization

Regret Minimization in Partially Observable Linear Quadratic Control

no code implementations31 Jan 2020 Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar

We propose a novel way to decompose the regret and provide an end-to-end sublinear regret upper bound for partially observable linear quadratic control.

Stochastic Mirror Descent on Overparameterized Nonlinear Models

no code implementations25 Sep 2019 Navid Azizan, Sahin Lale, Babak Hassibi

On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization.
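
Here closeness is measured by the Bregman divergence of the mirror potential $\psi$,

$$D_\psi(w, w') = \psi(w) - \psi(w') - \nabla \psi(w')^\top (w - w'),$$

which reduces to $\tfrac{1}{2}\|w - w'\|_2^2$ for $\psi(w) = \tfrac{1}{2}\|w\|_2^2$, recovering the Euclidean picture familiar from SGD.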

The Impact of Regularization on High-dimensional Logistic Regression

no code implementations NeurIPS 2019 Fariborz Salehi, Ehsan Abbasi, Babak Hassibi

In both cases, we obtain explicit expressions for various performance metrics and can find the value of the regularizer parameter that optimizes the desired performance.

Regression

Stochastic Mirror Descent on Overparameterized Nonlinear Models: Convergence, Implicit Regularization, and Generalization

1 code implementation10 Jun 2019 Navid Azizan, Sahin Lale, Babak Hassibi

Most modern learning problems are highly overparameterized, meaning that there are many more parameters than the number of training data points, and as a result, the training loss may have infinitely many global minima (parameter vectors that perfectly interpolate the training data).

A Stochastic Interpretation of Stochastic Mirror Descent: Risk-Sensitive Optimality

no code implementations3 Apr 2019 Navid Azizan, Babak Hassibi

Stochastic mirror descent (SMD) is a fairly new family of algorithms that has recently found a wide range of applications in optimization, machine learning, and control.

Stochastic Linear Bandits with Hidden Low Rank Structure

no code implementations28 Jan 2019 Sahin Lale, Kamyar Azizzadenesheli, Anima Anandkumar, Babak Hassibi

We cast the image classification task in the SLB setting and empirically show that, when a pre-trained DNN provides the high-dimensional feature representations, deploying PSLB results in a significant reduction in regret and faster convergence to an accurate model compared to state-of-the-art algorithms.

Decision Making Dimensionality Reduction +2

Learning without the Phase: Regularized PhaseMax Achieves Optimal Sample Complexity

no code implementations NeurIPS 2018 Fariborz Salehi, Ehsan Abbasi, Babak Hassibi

The problem of estimating an unknown signal, $\mathbf x_0\in \mathbb R^n$, from a vector $\mathbf y\in \mathbb R^m$ of $m$ magnitude-only measurements of the form $y_i=|\mathbf a_i\mathbf x_0|$, where the $\mathbf a_i$'s are the rows of a known measurement matrix $\mathbf A$, is a classical problem known as phase retrieval.
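
For context, PhaseMax relaxes this nonconvex problem to a convex one; given an anchor (initial guess) $\mathbf x_{\mathrm{init}}$, it solves, schematically,

$$\max_{\mathbf x} \; \langle \mathbf x, \mathbf x_{\mathrm{init}} \rangle \quad \text{subject to} \quad |\mathbf a_i \mathbf x| \le y_i, \;\; i = 1, \dots, m,$$

and, as the title suggests, the regularized variant studied here additionally incorporates structural priors on $\mathbf x_0$ through a regularization term.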

Retrieval

Low-Rank Riemannian Optimization on Positive Semidefinite Stochastic Matrices with Applications to Graph Clustering

no code implementations ICML 2018 Ahmed Douik, Babak Hassibi

This paper develops a Riemannian optimization framework for solving optimization problems on the set of symmetric positive semidefinite stochastic matrices.

Clustering Graph Clustering +1

Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization

no code implementations ICLR 2019 Navid Azizan, Babak Hassibi

In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models (originally developed in the 1990s) and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models.

A Universal Analysis of Large-Scale Regularized Least Squares Solutions

no code implementations NeurIPS 2017 Ashkan Panahi, Babak Hassibi

Precise expressions for the asymptotic performance of LASSO have been obtained for a number of different cases, in particular when the elements of the dictionary matrix are sampled independently from a Gaussian distribution.


Distributed Solution of Large-Scale Linear Systems via Accelerated Projection-Based Consensus

no code implementations4 Aug 2017 Navid Azizan-Ruhi, Farshad Lahouti, Salman Avestimehr, Babak Hassibi

In this paper, we consider a common scenario in which a taskmaster intends to solve a large-scale system of linear equations by distributing subsets of the equations among a number of computing machines/cores.
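
A simplified, unaccelerated sketch of projection-based consensus under assumed problem sizes (the actual APC method adds an acceleration step, per the title):

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, n = 4, 5, 20                       # 4 machines, 5 equations each (assumed)
A = rng.standard_normal((N * m, n))
b = A @ rng.standard_normal(n)           # consistent system A x = b
blocks = [(A[i*m:(i+1)*m], b[i*m:(i+1)*m]) for i in range(N)]
pinvs = [np.linalg.pinv(Ai) for Ai, _ in blocks]

x = np.zeros(n)
for _ in range(300):
    # each machine projects the current iterate onto its own solution set,
    # then the taskmaster averages the local solutions (consensus step)
    local = [x + P @ (bi - Ai @ x) for (Ai, bi), P in zip(blocks, pinvs)]
    x = np.mean(local, axis=0)

print(np.linalg.norm(A @ x - b))         # residual shrinks with iterations
```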

Entropic Causality and Greedy Minimum Entropy Coupling

no code implementations28 Jan 2017 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi

This framework requires the solution of a minimum entropy coupling problem: given marginal distributions of m discrete random variables, each on n states, find the minimum-entropy joint distribution that respects the given marginals.
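
A minimal two-marginal sketch of the greedy idea (the paper treats m marginals; the helper name and tolerance here are illustrative):

```python
import heapq

def greedy_coupling(p, q):
    """Greedily build an (approximate) minimum-entropy coupling of p and q.

    Repeatedly matches the largest remaining mass in each marginal and
    couples as much probability as possible on that pair.
    """
    # max-heaps of (mass, index); heapq is a min-heap, so masses are negated
    P = [(-pi, i) for i, pi in enumerate(p)]
    Q = [(-qj, j) for j, qj in enumerate(q)]
    heapq.heapify(P); heapq.heapify(Q)
    joint = {}
    while P and Q:
        pi, i = heapq.heappop(P)
        qj, j = heapq.heappop(Q)
        m = min(-pi, -qj)                 # couple as much mass as possible
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        if -pi - m > 1e-12: heapq.heappush(P, (pi + m, i))
        if -qj - m > 1e-12: heapq.heappush(Q, (qj + m, j))
    return joint

print(greedy_coupling([0.5, 0.5], [0.6, 0.4]))
```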

Crowdsourced Clustering: Querying Edges vs Triangles

no code implementations NeurIPS 2016 Ramya Korlakai Vinayak, Babak Hassibi

When a generative model for the data is available (and we consider a few of these) we determine the cost of a query by its entropy; when such models do not exist we use the average response time per query of the workers as a surrogate for the cost.

Clustering

Entropic Causal Inference

1 code implementation12 Nov 2016 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi

We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding minimum joint entropy given $n$ marginal distributions, also known as minimum entropy coupling problem.

Causal Inference

Fundamental Limits of Budget-Fidelity Trade-off in Label Crowdsourcing

no code implementations NeurIPS 2016 Farshad Lahouti, Babak Hassibi

The results are established via a joint source-channel (de)coding scheme over parallel noisy channels: the scheme represents the query scheme and inference, while the channels model workers with imperfect skill levels.

LASSO with Non-linear Measurements is Equivalent to One With Linear Measurements

no code implementations NeurIPS 2015 Christos Thrampoulidis, Ehsan Abbasi, Babak Hassibi

In this work, we considerably strengthen these results by obtaining explicit expressions for $\|\hat x-\mu x_0\|_2$, for the regularized Generalized-LASSO, that are asymptotically precise when $m$ and $n$ grow large.
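
Here the regularized Generalized LASSO refers, schematically, to

$$\hat{x} = \arg\min_{x} \; \|y - Ax\|_2 + \lambda f(x),$$

for a convex regularizer $f$ encoding the structure of $x_0$.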

The Squared-Error of Generalized LASSO: A Precise Analysis

no code implementations4 Nov 2013 Samet Oymak, Christos Thrampoulidis, Babak Hassibi

The first LASSO estimator assumes a-priori knowledge of $f(x_0)$ and is given by $\arg\min_{x}\{{\|y-Ax\|_2}~\text{subject to}~f(x)\leq f(x_0)\}$.
