no code implementations • 23 Feb 2023 • Ting-Jui Chang, Sapana Chaudhary, Dileep Kalathil, Shahin Shahrampour
We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{1/3}C_T^*)$, where $C_T^*$ denotes the path-length of the best minimizer sequence.
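For reference, a minimal LaTeX sketch of the two quantities behind bounds of this form; the notation below is the standard one for dynamic regret and path-length, assumed for illustration rather than quoted from the paper.

```latex
% Dynamic regret against the per-round minimizers x_t^* (standard notation, assumed):
\mathrm{Reg}^{d}_{T} \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x_t^{*}),
\qquad x_t^{*} \in \arg\min_{x} f_t(x).
% Path-length of the minimizer sequence appearing in the bound:
C_T^{*} \;=\; \sum_{t=2}^{T} \bigl\| x_t^{*} - x_{t-1}^{*} \bigr\|.
```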
no code implementations • 4 Feb 2023 • Yinsong Wang, Shahin Shahrampour
This work investigates the intersection of cross-modal learning and semi-supervised learning, where we aim to improve the supervised learning performance of the primary modality by borrowing missing information from an unlabeled modality.
no code implementations • 25 Sep 2022 • Youbang Sun, Heshan Fernando, Tianyi Chen, Shahin Shahrampour
We consider open federated learning (FL) systems, where clients may join and/or leave the system during the FL process.
no code implementations • 3 Jul 2022 • Ting-Jui Chang, Shahin Shahrampour
Inspired by this work, we study distributed online system identification of LTI systems over a multi-agent network.
no code implementations • 15 Mar 2022 • Yinsong Wang, Yu Ding, Shahin Shahrampour
Kernel density estimation is arguably one of the most commonly used density estimation techniques, and the use of a "sliding window" mechanism adapts kernel density estimators to dynamic processes.
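Below is a minimal Python sketch of the generic sliding-window mechanism described above; the Gaussian kernel, window size, and bandwidth are illustrative assumptions, not the paper's specific estimator.

```python
import numpy as np

def sliding_window_kde(stream, query, bandwidth=0.5, window=200):
    """Gaussian kernel density estimate over the most recent `window` samples."""
    recent = np.asarray(stream[-window:])                      # keep only the newest samples
    diffs = (query[:, None] - recent[None, :]) / bandwidth     # pairwise scaled differences
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel values
    return kernels.mean(axis=1) / bandwidth                    # average over the window

# Example: density of a drifting stream evaluated on a grid of query points.
stream = list(np.random.randn(1000) + np.linspace(0, 3, 1000))
grid = np.linspace(-2, 6, 50)
density = sliding_window_kde(stream, grid)
```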
no code implementations • 11 Dec 2021 • Liang Ding, Rui Tuo, Shahin Shahrampour
In this work, we use Deep Gaussian Processes (DGPs) as statistical surrogates for stochastic processes with complex distributions.
no code implementations • 29 May 2021 • Youbang Sun, Mahyar Fazlyab, Shahin Shahrampour
Our numerical experiments on strongly convex problems indicate that our framework certifies superior convergence rates compared to the existing rates for distributed GD.
no code implementations • 15 May 2021 • Ting-Jui Chang, Shahin Shahrampour
Consider a multi-agent network where each agent is modeled as an LTI system.
1 code implementation • 14 Feb 2021 • Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour
The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph.
no code implementations • 22 Jan 2021 • Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour
We study the convergence properties of Riemannian gradient method for solving the consensus problem (for an undirected connected graph) over the Stiefel manifold.
no code implementations • 24 Nov 2020 • Youbang Sun, Shahin Shahrampour
Distributed optimization often requires finding the minimum of a global objective function written as a sum of local functions.
no code implementations • 29 Sep 2020 • Ting-Jui Chang, Shahin Shahrampour
Recent advancement in online optimization and control has provided novel tools to study LQ problems that are robust to time-varying cost parameters.
no code implementations • 14 Sep 2020 • Youbang Sun, Shahin Shahrampour
This work addresses distributed optimization, where a network of agents wants to minimize a global strongly convex objective function.
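As a rough illustration of this problem setup (not the algorithm proposed above), here is a hedged sketch of generic decentralized gradient descent with a doubly stochastic mixing matrix W; the quadratic local losses and step size are assumptions for the example.

```python
import numpy as np

def decentralized_gradient_descent(local_grads, W, x0, step=0.05, iters=200):
    """Each agent mixes with neighbors (via W) and takes a local gradient step."""
    n = len(local_grads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))   # one local iterate per agent
    for _ in range(iters):
        mixed = W @ X                                   # consensus (weighted averaging) step
        X = mixed - step * np.array([g(X[i]) for i, g in enumerate(local_grads)])
    return X.mean(axis=0)

# Example: three agents with local quadratics f_i(x) = 0.5 * ||x - c_i||^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
local_grads = [lambda x, c=c: x - c for c in centers]
W = np.full((3, 3), 1.0 / 3.0)                          # complete-graph mixing weights
print(decentralized_gradient_descent(local_grads, W, np.zeros(2)))  # ~ mean of the centers
```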
no code implementations • 6 Jun 2020 • Ting-Jui Chang, Shahin Shahrampour
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation in the function sequence ($V_T$) and/or the path-length of the minimizer sequence after $T$ rounds.
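For context, a hedged sketch of the two regularity measures mentioned above, in the usual notation (assumed here, not quoted from the paper): the function variation $V_T$ and the path-length of the minimizer sequence.

```latex
% Function variation over T rounds (standard definition, assumed):
V_T \;=\; \sum_{t=2}^{T} \sup_{x} \, \bigl| f_t(x) - f_{t-1}(x) \bigr|.
% Path-length of the per-round minimizers x_t^* \in \arg\min_x f_t(x):
C_T \;=\; \sum_{t=2}^{T} \bigl\| x_t^{*} - x_{t-1}^{*} \bigr\|.
```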
no code implementations • 5 Jun 2020 • Liang Ding, Lu Zou, Wenjia Wang, Shahin Shahrampour, Rui Tuo
Density estimation plays a key role in many tasks in machine learning, statistical inference, and visualization.
no code implementations • 5 Jun 2020 • Simon Foucart, Chunyang Liao, Shahin Shahrampour, Yinsong Wang
Then, for any Hilbert space, we show that Optimal Recovery provides a formula which is user-friendly from an algorithmic point of view, as long as the hypothesis class is linear.
no code implementations • 28 Apr 2020 • Shixiang Chen, Alfredo Garcia, Shahin Shahrampour
In this paper, we propose a distributed implementation of the stochastic subgradient method with a theoretical guarantee.
1 code implementation • NeurIPS 2020 • Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Şimşekli
The idea of slicing divergences has proven successful when comparing two probability measures in various machine learning applications, including generative modeling; it consists of computing the expected value of a `base divergence' between one-dimensional random projections of the two measures.
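The slicing recipe can be sketched in a few lines; the snippet below uses the one-dimensional Wasserstein-1 distance as an illustrative base divergence and Monte Carlo sampling of projection directions, which are assumptions rather than the paper's exact construction.

```python
import numpy as np

def sliced_divergence(X, Y, n_projections=100, seed=0):
    """Monte Carlo sliced divergence between two samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)                     # random direction on the unit sphere
        px, py = np.sort(X @ theta), np.sort(Y @ theta)    # one-dimensional projections
        total += np.abs(px - py).mean()                    # closed-form 1-D Wasserstein-1
    return total / n_projections

X = np.random.randn(500, 3)
Y = np.random.randn(500, 3) + 1.0        # shifted sample; divergence should be positive
print(sliced_divergence(X, Y))
```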
no code implementations • 28 Feb 2020 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Shahin Shahrampour
Probability metrics have become an indispensable part of modern statistics and machine learning, and they play a quintessential role in various applications, including statistical hypothesis testing and generative modeling.
no code implementations • 12 Feb 2020 • Ting-Jui Chang, Shahin Shahrampour
Large-scale finite-sum problems can be solved using efficient variants of Newton's method, where the Hessian is approximated via sub-samples of data.
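A minimal sketch of the generic sub-sampled Newton idea, assuming a regularized logistic-regression loss purely for illustration (the loss, sample size, and step size are not taken from the paper):

```python
import numpy as np

def subsampled_newton_step(X, y, w, sample_size=256, reg=1e-3, lr=1.0, rng=None):
    """One Newton step where the Hessian is formed from a random sub-sample of the data."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-X @ w))                       # logistic predictions
    grad = X.T @ (p - y) / n + reg * w                     # full gradient
    idx = rng.choice(n, size=min(sample_size, n), replace=False)
    Xs, ps = X[idx], p[idx]
    H = Xs.T @ (Xs * (ps * (1 - ps))[:, None]) / len(idx) + reg * np.eye(d)  # sub-sampled Hessian
    return w - lr * np.linalg.solve(H, grad)               # approximate Newton direction
```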
no code implementations • ICML 2020 • Liang Ding, Rui Tuo, Shahin Shahrampour
Despite their success, kernel methods suffer from a massive computational cost in practice.
no code implementations • 11 Oct 2019 • Yinsong Wang, Shahin Shahrampour
We prove that this method, called ORCCA, can outperform (in expectation) the corresponding Kernel CCA with a default kernel.
no code implementations • 25 Sep 2019 • Masoud Badiei Khuzani, Liyue Shen, Shahin Shahrampour, Lei Xing
We subsequently leverage a particle stochastic gradient descent (SGD) method to solve the derived finite-dimensional optimization problem.
no code implementations • 20 Sep 2019 • Yinsong Wang, Shahin Shahrampour
This paper addresses distributed parameter estimation in randomized one-hidden-layer neural networks.
no code implementations • 20 Mar 2019 • Shahin Shahrampour, Soheil Kolouri
Random features provide a practical framework for large-scale kernel approximation and supervised learning.
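For example, the classic random Fourier feature map for the RBF kernel can be written in a few lines; the kernel choice and feature count below are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Map X of shape (n, d) to features whose inner products approximate the RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))  # random frequencies
    b = rng.uniform(0, 2 * np.pi, size=n_features)                  # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)            # z(x)^T z(x') ~ k(x, x')

Z = random_fourier_features(np.random.randn(100, 5))
K_approx = Z @ Z.T   # approximates the 100 x 100 RBF kernel matrix
```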
no code implementations • NeurIPS 2018 • Shahin Shahrampour, Vahid Tarokh
We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated with the combined kernel (spectral error).
no code implementations • 19 Dec 2017 • Shahin Shahrampour, Ahmad Beirami, Vahid Tarokh
The randomized-feature approach has been successfully employed in large-scale kernel approximation and supervised learning.
no code implementations • NeurIPS 2017 • Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh
In practice, such bias is measured by cross-validation, where the data set is partitioned into a training set used for fitting the model and a validation set that is held out of training and used to measure out-of-sample performance.
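A minimal sketch of that hold-out procedure, with a generic fit/loss interface that is an assumption for illustration:

```python
import numpy as np

def holdout_estimate(X, y, fit, loss, val_fraction=0.2, seed=0):
    """Train on one random split and report the loss on the held-out split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_val = int(val_fraction * len(y))
    val, train = idx[:n_val], idx[n_val:]
    model = fit(X[train], y[train])          # fit only on the training split
    return loss(model, X[val], y[val])       # score on data unseen during training

# Example with a least-squares fit (illustrative only).
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
loss = lambda w, X, y: np.mean((X @ w - y) ** 2)
X, y = np.random.randn(200, 3), np.random.randn(200)
print(holdout_estimate(X, y, fit, loss))
```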
no code implementations • 9 Jul 2017 • Shahin Shahrampour, Vahid Tarokh
At each round, the budget is divided according to a nonlinear function of the number of remaining arms, and the arms are pulled correspondingly.
no code implementations • 21 Feb 2017 • Shahin Shahrampour, Ali Jadbabaie
We formulate this problem as a distributed online optimization where agents communicate with each other to track the minimizer of the global loss.
no code implementations • 9 Sep 2016 • Shahin Shahrampour, Ali Jadbabaie
A network of agents aims to track the minimizer of a global time-varying convex function.
no code implementations • 8 Sep 2016 • Shahin Shahrampour, Mohammad Noshad, Vahid Tarokh
Based on this result, we develop an algorithm that divides the budget according to a nonlinear function of the number of remaining arms at each round.
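The snippet below is a hedged sketch of this style of budgeted elimination; the specific (logarithmic) nonlinearity, halving schedule, and Gaussian rewards are assumptions for illustration and not the paper's exact allocation rule.

```python
import numpy as np

def budgeted_elimination(arm_means, budget=2000, seed=0):
    """Best-arm identification: per-round pulls depend nonlinearly on the surviving arms."""
    rng = np.random.default_rng(seed)
    arms = list(range(len(arm_means)))
    rounds = int(np.ceil(np.log2(len(arms))))
    while len(arms) > 1:
        per_round = budget // rounds
        pulls = max(1, per_round // int(len(arms) * np.log(len(arms) + 1)))  # nonlinear in #arms
        scores = {a: rng.normal(arm_means[a], 1.0, size=pulls).mean() for a in arms}
        arms.sort(key=lambda a: scores[a], reverse=True)
        arms = arms[: max(1, len(arms) // 2)]       # keep the better half
    return arms[0]

print(budgeted_elimination([0.1, 0.3, 0.5, 0.7, 0.9]))   # should typically return arm 4
```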
no code implementations • 16 Mar 2016 • Aryan Mokhtari, Shahin Shahrampour, Ali Jadbabaie, Alejandro Ribeiro
In this paper, we address tracking of a time-varying parameter with unknown dynamics.
no code implementations • 2 Mar 2016 • Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie
To this end, we use a notion of dynamic regret which suits the online, non-stationary nature of the problem.
no code implementations • 14 Sep 2015 • Mohammad Amin Rahimian, Shahin Shahrampour, Ali Jadbabaie
Each agent might not be able to distinguish the true state based only on her private observations.
no code implementations • 11 Mar 2015 • Shahin Shahrampour, Mohammad Amin Rahimian, Ali Jadbabaie
A network of agents attempts to learn an unknown state of the world, drawn by nature from a finite set.
no code implementations • 26 Jan 2015 • Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, Karthik Sridharan
Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees.
no code implementations • 30 Sep 2014 • Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie
In contrast to the existing literature which focuses on asymptotic learning, we provide a finite-time analysis.
no code implementations • NeurIPS 2013 • Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie
Based on the decomposition of the global loss function, we introduce two update mechanisms, each of which generates an estimate of the true state.
no code implementations • 10 Sep 2013 • Shahin Shahrampour, Ali Jadbabaie
When the true state is globally identifiable, and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme.
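For intuition about the communication model, here is a minimal sketch of standard pairwise randomized gossip averaging on a connected graph; it illustrates the gossip mechanism only and is not the learning rule analyzed above.

```python
import numpy as np

def randomized_gossip(values, edges, iters=500, seed=0):
    """At each step a random edge is activated and its two endpoints average their values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    for _ in range(iters):
        i, j = edges[rng.integers(len(edges))]   # pick a random communicating pair
        x[i] = x[j] = 0.5 * (x[i] + x[j])        # both endpoints move to their average
    return x

# Example: a ring of five agents converging to the global average (3.0).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(randomized_gossip([1.0, 2.0, 3.0, 4.0, 5.0], edges))
```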