Search Results for author: Shahin Shahrampour

Found 34 papers, 1 paper with code

On Centralized and Distributed Mirror Descent: Exponential Convergence Analysis Using Quadratic Constraints

no code implementations • 29 May 2021 • Youbang Sun, Mahyar Fazlyab, Shahin Shahrampour

To the best of our knowledge, the exact (exponential) rate of distributed MD has not been previously explored in the literature.
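As a point of reference for what mirror descent (MD) looks like in its simplest form, here is a minimal centralized sketch with the negative-entropy mirror map (exponentiated gradient) on the probability simplex. The step size, objective, and iteration count are illustrative; the paper's distributed variant and quadratic-constraint analysis are not reproduced here.

```python
import numpy as np

# Centralized mirror descent with the negative-entropy mirror map
# (exponentiated gradient) on the probability simplex; a minimal sketch,
# not the paper's distributed scheme.
def mirror_descent(grad, x0, eta=0.5, steps=300):
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))  # multiplicative (entropic) update
        x /= x.sum()                    # Bregman projection onto the simplex
    return x

# Minimize the strongly convex f(x) = 0.5*||x - c||^2 over the simplex;
# since c lies in the simplex, the minimizer is c itself.
c = np.array([0.5, 0.2, 0.3])
x = mirror_descent(lambda x: x - c, np.ones(3) / 3)
```

For strongly convex objectives like this one, the iterates contract geometrically toward the minimizer, which is the kind of exponential (linear) rate the paper quantifies exactly.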

Regret Analysis of Distributed Online LQR Control for Unknown LTI Systems

no code implementations • 15 May 2021 • Ting-Jui Chang, Shahin Shahrampour

Inspired by this line of research, we study the distributed online linear quadratic regulator (LQR) problem for linear time-invariant (LTI) systems with unknown dynamics.

Decentralized Riemannian Gradient Descent on the Stiefel Manifold

1 code implementation • 14 Feb 2021 • Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour

The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph.

Distributed Optimization

On the Local Linear Rate of Consensus on the Stiefel Manifold

no code implementations • 22 Jan 2021 • Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour

We study the convergence properties of Riemannian gradient method for solving the consensus problem (for an undirected connected graph) over the Stiefel manifold.
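The consensus step on the Stiefel manifold St(n, p) = {X : XᵀX = I} can be sketched as: average neighbors in the ambient space, project onto the tangent space, and retract back via thin QR. Everything below (complete-graph weights, step size, perturbed initialization) is an illustrative choice, not the paper's exact setup; like the local analysis in the paper, convergence here requires agents to start close together.

```python
import numpy as np

# Riemannian gradient consensus sketch on the Stiefel manifold St(n, p).
def tangent_proj(X, G):
    S = X.T @ G
    return G - X @ (S + S.T) / 2          # tangent-space projection at X

def qr_retract(Y):
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.diag(R))        # sign-fixed thin-QR retraction

rng = np.random.default_rng(0)
n, p, m = 5, 2, 4                          # St(5, 2), 4 agents
base = qr_retract(rng.standard_normal((n, p)))
Xs = [qr_retract(base + 0.1 * rng.standard_normal((n, p))) for _ in range(m)]
alpha = 0.5                                # illustrative step size
for _ in range(100):
    avg = sum(Xs) / m                      # complete-graph mixing
    Xs = [qr_retract(X + alpha * tangent_proj(X, avg - X)) for X in Xs]
```

The retraction keeps every iterate exactly orthonormal, and the local disagreement shrinks geometrically, consistent with the local linear rate studied in the paper.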

Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems

no code implementations • 24 Nov 2020 • Youbang Sun, Shahin Shahrampour

Distributed optimization often requires finding the minimum of a global objective function written as a sum of local functions.

Distributed Optimization
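The sum-of-local-functions setup can be illustrated with the plain consensus+gradient template below: mix iterates with a doubly stochastic matrix, then step along the local gradient. This is the generic decentralized gradient method, not the paper's mirror-descent scheme with integral feedback, and with a constant step size the agents only reach an O(eta) neighborhood of consensus on the optimum.

```python
import numpy as np

# Generic decentralized gradient descent for min_x sum_i f_i(x) with local
# quadratics f_i(x) = 0.5*||x - c_i||^2 (illustrative problem data).
rng = np.random.default_rng(1)
m, d, eta = 4, 3, 0.1
C = rng.standard_normal((m, d))            # local targets c_i
X = rng.standard_normal((m, d))            # row i is agent i's iterate
W = np.full((m, m), 1.0 / m)               # complete-graph, doubly stochastic
for _ in range(500):
    X = W @ X - eta * (X - C)              # mix, then local gradient step
x_star = C.mean(axis=0)                    # minimizer of the global sum
```

The average iterate converges geometrically to the global minimizer, while individual agents retain a small step-size-dependent bias; integral feedback, as in the paper, is one way to remove that bias.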

Distributed Online Linear Quadratic Control for Linear Time-invariant Systems

no code implementations • 29 Sep 2020 • Ting-Jui Chang, Shahin Shahrampour

Recent advances in online optimization and control have provided novel tools to study LQ problems that are robust to time-varying cost parameters.

Distributed Mirror Descent with Integral Feedback: Asymptotic Convergence Analysis of Continuous-time Dynamics

no code implementations • 14 Sep 2020 • Youbang Sun, Shahin Shahrampour

This work addresses distributed optimization, where a network of agents wants to minimize a global strongly convex objective function.

Distributed Optimization

Unconstrained Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems

no code implementations • 6 Jun 2020 • Ting-Jui Chang, Shahin Shahrampour

The regret bound of dynamic online learning algorithms is often expressed in terms of the variation in the function sequence ($V_T$) and/or the path-length of the minimizer sequence after $T$ rounds.
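A toy computation of both quantities: run online gradient descent on drifting quadratics f_t(x) = 0.5*(x - theta_t)^2, accumulate the dynamic regret against the moving minimizers theta_t, and compute the path-length C_T = sum_t |theta_t - theta_{t-1}|. The drift schedule and step size are illustrative, not taken from the paper.

```python
import numpy as np

# Dynamic regret of online gradient descent against drifting minimizers.
T, eta = 1000, 0.5
theta = 0.01 * np.arange(1, T + 1)         # slowly drifting minimizers
x, regret = 0.0, 0.0
for t in range(T):
    regret += 0.5 * (x - theta[t]) ** 2    # f_t(x_t) - f_t(theta_t); min is 0
    x -= eta * (x - theta[t])              # online gradient step
path_length = np.abs(np.diff(theta)).sum() # C_T, the comparator path-length
```

For strongly convex and smooth losses the tracker settles into a constant lag behind the drift, so the accumulated regret here is far below the path-length, in line with path-length-type regret bounds.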

Learning from Non-Random Data in Hilbert Spaces: An Optimal Recovery Perspective

no code implementations • 5 Jun 2020 • Simon Foucart, Chunyang Liao, Shahin Shahrampour, Yinsong Wang

Then, for any Hilbert space, we show that Optimal Recovery provides a formula which is user-friendly from an algorithmic point of view, as long as the hypothesis class is linear.

Overcoming the Curse of Dimensionality in Density Estimation with Mixed Sobolev GANs

no code implementations • 5 Jun 2020 • Liang Ding, Rui Tuo, Shahin Shahrampour

We propose a novel GAN framework for non-parametric density estimation with high-dimensional data.

Density Estimation

Statistical and Topological Properties of Sliced Probability Divergences

no code implementations • NeurIPS 2020 • Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Şimşekli

The idea of slicing divergences has proven successful for comparing two probability measures in various machine learning applications, including generative modeling; it consists of computing the expected value of a "base divergence" between one-dimensional random projections of the two measures.
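The slicing recipe can be made concrete with a Monte Carlo sliced Wasserstein-1 distance: draw random directions on the sphere, project both samples to one dimension, and average the closed-form 1-D Wasserstein distance computed by matching sorted samples. Equal sample sizes are assumed for simplicity; the number of projections is an illustrative choice.

```python
import numpy as np

# Monte Carlo sliced Wasserstein-1 distance between two empirical measures.
def sliced_w1(X, Y, n_proj=200, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        u = rng.standard_normal(X.shape[1])
        u /= np.linalg.norm(u)                       # random slice direction
        total += np.mean(np.abs(np.sort(X @ u) - np.sort(Y @ u)))
    return total / n_proj

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
Y = rng.standard_normal((500, 3)) + 2.0              # shifted distribution
```

Each slice only requires a sort, which is why sliced divergences scale well compared with their multi-dimensional counterparts.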

Generalized Sliced Distances for Probability Distributions

no code implementations • 28 Feb 2020 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Shahin Shahrampour

Probability metrics have become an indispensable part of modern statistics and machine learning, and they play a quintessential role in various applications, including statistical hypothesis testing and generative modeling.

Two-sample testing

RFN: A Random-Feature Based Newton Method for Empirical Risk Minimization in Reproducing Kernel Hilbert Spaces

no code implementations • 12 Feb 2020 • Ting-Jui Chang, Shahin Shahrampour

Oftentimes, large-scale finite-sum problems can be solved using efficient variants of Newton's method in which the Hessian is approximated via sub-samples.
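The sub-sampled Newton template looks like the sketch below for l2-regularized logistic regression: the gradient is computed on the full data, but the Hessian is formed from a random sub-sample. This illustrates the generic idea only, not the RFN method or its random-feature space; problem sizes, the sub-sample size, and the regularization level are illustrative.

```python
import numpy as np

# Newton's method with a sub-sampled Hessian for ridge-regularized
# logistic regression (generic sketch, not the paper's RFN method).
rng = np.random.default_rng(0)
n, d, lam, s = 2000, 5, 0.1, 200
A = rng.standard_normal((n, d))
y = (A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(20):
    p = sigmoid(A @ w)
    g = A.T @ (p - y) / n + lam * w                  # full gradient
    idx = rng.choice(n, size=s, replace=False)       # Hessian sub-sample
    As, ps = A[idx], p[idx]
    H = (As * (ps * (1 - ps))[:, None]).T @ As / s + lam * np.eye(d)
    w -= np.linalg.solve(H, g)                       # approximate Newton step

grad_norm = np.linalg.norm(A.T @ (sigmoid(A @ w) - y) / n + lam * w)
```

Because only the Hessian is approximated, the fixed point is still a stationary point of the full objective, and the iteration converges at a fast linear rate.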

A General Scoring Rule for Randomized Kernel Approximation with Application to Canonical Correlation Analysis

no code implementations • 11 Oct 2019 • Yinsong Wang, Shahin Shahrampour

Random features have been widely used for kernel approximation in large-scale machine learning.
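The standard random Fourier feature construction of Rahimi and Recht approximates the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) with the inner product of randomized feature maps; dimensions and sigma below are illustrative, and the error shrinks like 1/sqrt(D).

```python
import numpy as np

# Random Fourier features for the Gaussian kernel: with rows of W drawn
# from N(0, I/sigma^2) and b ~ U[0, 2*pi), z(x)^T z(y) ~ k(x, y).
rng = np.random.default_rng(0)
d, D, sigma = 4, 10000, 1.0
W = rng.standard_normal((D, d)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)      # randomized feature map

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = z(x) @ z(y)
```

Replacing kernel evaluations with these explicit features turns kernel methods into linear models in D dimensions, which is what makes them attractive at scale.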

A Mean-Field Theory for Kernel Alignment with Random Features in Generative and Discriminative Models

no code implementations • 25 Sep 2019 • Masoud Badiei Khuzani, Liyue Shen, Shahin Shahrampour, Lei Xing

We subsequently leverage a particle stochastic gradient descent (SGD) method to solve the derived finite dimensional optimization problem.

Two-sample testing

Distributed Parameter Estimation in Randomized One-hidden-layer Neural Networks

no code implementations • 20 Sep 2019 • Yinsong Wang, Shahin Shahrampour

This paper addresses distributed parameter estimation in randomized one-hidden-layer neural networks.

On Sampling Random Features From Empirical Leverage Scores: Implementation and Theoretical Guarantees

no code implementations • 20 Mar 2019 • Shahin Shahrampour, Soheil Kolouri

Random features provide a practical framework for large-scale kernel approximation and supervised learning.

Learning Bounds for Greedy Approximation with Explicit Feature Maps from Multiple Kernels

no code implementations • NeurIPS 2018 • Shahin Shahrampour, Vahid Tarokh

We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated to the combined kernel (spectral error).

On Data-Dependent Random Features for Improved Generalization in Supervised Learning

no code implementations • 19 Dec 2017 • Shahin Shahrampour, Ahmad Beirami, Vahid Tarokh

The randomized-feature approach has been successfully employed in large-scale kernel approximation and supervised learning.

On Optimal Generalizability in Parametric Learning

no code implementations • NeurIPS 2017 • Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh

In practice, such bias is measured by the cross-validation procedure, where the data set is partitioned into a training set used for fitting and a validation set that is held out from training to measure the out-of-sample performance.
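The hold-out version of that procedure is a few lines: partition the data, fit on the training split, and score on the untouched validation split. The split sizes and the least-squares model below are illustrative.

```python
import numpy as np

# Minimal hold-out validation: train on one split, estimate out-of-sample
# error on a validation split that training never sees.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
perm = rng.permutation(200)
train, val = perm[:150], perm[150:]
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
val_mse = np.mean((X[val] @ w - y[val]) ** 2)        # out-of-sample estimate
```

Because the validation points play no role in fitting, `val_mse` is close to the noise level rather than to the (optimistically biased) training error.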

Nonlinear Sequential Accepts and Rejects for Identification of Top Arms in Stochastic Bandits

no code implementations • 9 Jul 2017 • Shahin Shahrampour, Vahid Tarokh

At each round, the budget is divided by a nonlinear function of remaining arms, and the arms are pulled correspondingly.

Multi-Armed Bandits

An Online Optimization Approach for Multi-Agent Tracking of Dynamic Parameters in the Presence of Adversarial Noise

no code implementations • 21 Feb 2017 • Shahin Shahrampour, Ali Jadbabaie

We formulate this problem as a distributed online optimization where agents communicate with each other to track the minimizer of the global loss.

Distributed Optimization

On Sequential Elimination Algorithms for Best-Arm Identification in Multi-Armed Bandits

no code implementations • 8 Sep 2016 • Shahin Shahrampour, Mohammad Noshad, Vahid Tarokh

Based on this result, we develop an algorithm that divides the budget according to a nonlinear function of remaining arms at each round.

Multi-Armed Bandits
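The round-based structure can be sketched as generic sequential elimination for best-arm identification: split the budget across K-1 rounds, pull every surviving arm equally within a round, and reject the arm with the worst empirical mean. The uniform per-round split below is only a placeholder; the nonlinear budget-division rule these papers analyze is not reproduced.

```python
import numpy as np

# Generic sequential-elimination best-arm identification on Bernoulli arms.
rng = np.random.default_rng(0)
means = np.array([0.2, 0.4, 0.6, 0.8])          # arm means (best arm is 3)
K, budget = len(means), 4000
active = list(range(K))
sums = np.zeros(K)
counts = np.zeros(K)
for _ in range(K - 1):
    pulls = (budget // (K - 1)) // len(active)  # equal split this round
    for a in active:
        sums[a] += rng.binomial(pulls, means[a])
        counts[a] += pulls
    active.remove(min(active, key=lambda a: sums[a] / counts[a]))
best_arm = active[0]
```

Eliminating one arm per round concentrates the remaining budget on increasingly competitive arms, which is the mechanism behind the error bounds in this line of work.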

Distributed Estimation of Dynamic Parameters: Regret Analysis

no code implementations • 2 Mar 2016 • Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie

To this end, we use a notion of dynamic regret which suits the online, non-stationary nature of the problem.

Learning without Recall by Random Walks on Directed Graphs

no code implementations • 14 Sep 2015 • Mohammad Amin Rahimian, Shahin Shahrampour, Ali Jadbabaie

Each agent might not be able to distinguish the true state based only on her private observations.

Bayesian Inference

Switching to Learn

no code implementations • 11 Mar 2015 • Shahin Shahrampour, Mohammad Amin Rahimian, Ali Jadbabaie

A network of agents attempts to learn an unknown state of the world drawn by nature from a finite set.

Online Optimization: Competing with Dynamic Comparators

no code implementations • 26 Jan 2015 • Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, Karthik Sridharan

Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees.

Distributed Detection: Finite-time Analysis and Impact of Network Topology

no code implementations • 30 Sep 2014 • Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie

In contrast to the existing literature which focuses on asymptotic learning, we provide a finite-time analysis.

Online Learning of Dynamic Parameters in Social Networks

no code implementations • NeurIPS 2013 • Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie

Based on the decomposition of the global loss function, we introduce two update mechanisms, each of which generates an estimate of the true state.

Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging

no code implementations • 10 Sep 2013 • Shahin Shahrampour, Ali Jadbabaie

When the true state is globally identifiable, and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme.
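A toy model of randomized gossip communication: at each tick one edge of a connected graph is drawn at random and its two endpoints replace their values with the pairwise average, driving every value to the network-wide mean. The graph, initial values, and iteration count are illustrative, and this sketch covers only the averaging step, not the paper's dual-averaging learning rule.

```python
import numpy as np

# Randomized gossip averaging on a connected path graph: pairwise averages
# preserve the sum, so all values converge to the global mean.
rng = np.random.default_rng(0)
x = np.array([1.0, 3.0, 5.0, 7.0, 9.0])     # agents' initial estimates
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]    # path graph (connected)
for _ in range(5000):
    i, j = edges[rng.integers(len(edges))]  # wake a random edge
    x[i] = x[j] = (x[i] + x[j]) / 2.0       # both endpoints average
```

Connectivity is exactly what makes every edge's local averaging propagate globally, mirroring the connectivity assumption in the convergence result.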
