Search Results for author: Ramtin Pedarsani

Found 35 papers, 9 papers with code

Robust Decentralized Learning with Local Updates and Gradient Tracking

no code implementations • 2 May 2024 • Sajjad Ghiasvand, Amirhossein Reisizadeh, Mahnoosh Alizadeh, Ramtin Pedarsani

Local updates are essential in Federated Learning (FL) applications to mitigate the communication bottleneck, and gradient tracking is essential for proving convergence in the case of data heterogeneity.
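
The gradient-tracking idea referenced above can be illustrated with a minimal sketch (this is a generic textbook-style illustration, not the paper's algorithm or code): each node mixes its model with its neighbors' models, descends along a tracker variable, and updates the tracker so that it follows the network-average gradient even when local data are heterogeneous.

```python
import numpy as np

# Toy decentralized gradient tracking on 3 nodes minimizing
# f(x) = sum_i 0.5 * (x - b_i)^2, whose minimizer is mean(b) = 3.0.
b = np.array([1.0, 2.0, 6.0])            # heterogeneous local data
W = np.array([[0.50, 0.25, 0.25],        # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
alpha = 0.1                              # step size

x = np.zeros(3)                          # local models, one per node
g = x - b                                # local gradients
y = g.copy()                             # gradient trackers

for _ in range(200):
    x = W @ x - alpha * y                # mix with neighbors, then descend
    g_new = x - b
    y = W @ y + g_new - g                # tracker follows the average gradient
    g = g_new
```

After the loop all nodes reach consensus near the global minimizer 3.0, despite each node only ever seeing its own $b_i$.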

Adversarial Robustness Edge-computing +1

Generalization Properties of Adversarial Training for $\ell_0$-Bounded Adversarial Attacks

no code implementations • 5 Feb 2024 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

In this paper, we focus on the $\ell_0$-bounded adversarial attacks, and aim to theoretically characterize the performance of adversarial training for an important class of truncated classifiers.

Binary Classification

Learning to Understand: Identifying Interactions via the Mobius Transform

no code implementations • 4 Feb 2024 • Justin S. Kang, Yigit E. Erginbas, Landon Butler, Ramtin Pedarsani, Kannan Ramchandran

In the case where all interactions are between at most $t = \Theta(n^{\alpha})$ inputs, for $\alpha < 0.409$, we are able to leverage results from group testing to provide the first algorithm that computes the Mobius transform in $O(Kt\log n)$ sample complexity and $O(K\mathrm{poly}(n))$ time with vanishing error as $K \rightarrow \infty$.

Learning Theory

Inverse Reinforcement Learning by Estimating Expertise of Demonstrators

no code implementations • 2 Feb 2024 • Mark Beliaev, Ramtin Pedarsani

In Imitation Learning (IL), utilizing suboptimal and heterogeneous demonstrations presents a substantial challenge due to the varied nature of real-world data.

Imitation Learning reinforcement-learning

Pricing for Multi-modal Pickup and Delivery Problems with Heterogeneous Users

no code implementations • 17 Mar 2023 • Mark Beliaev, Negar Mehr, Ramtin Pedarsani

In this paper, we study the pickup and delivery problem with multiple transportation modalities, and address the challenge of efficiently allocating transportation resources while price-matching users with their desired delivery modes.

The Fair Value of Data Under Heterogeneous Privacy Constraints in Federated Learning

no code implementations • 30 Jan 2023 • Justin Kang, Ramtin Pedarsani, Kannan Ramchandran

We also formulate a heterogeneous federated learning problem for the platform with privacy level options for users.

Fairness Federated Learning

Equal Improvability: A New Fairness Notion Considering the Long-term Impact

1 code implementation • 13 Oct 2022 • Ozgur Guldogan, Yuchen Zeng, Jy-yong Sohn, Ramtin Pedarsani, Kangwook Lee

In order to promote long-term fairness, we propose a new fairness notion called Equal Improvability (EI), which equalizes the potential acceptance rate of the rejected samples across different groups assuming a bounded level of effort will be spent by each rejected sample.
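
The Equal Improvability notion above can be made concrete with a small numerical sketch (a hypothetical illustration, not the paper's released code): among rejected samples in each group, measure the fraction that could clear the decision threshold after a bounded effort, and compare that fraction across groups.

```python
import numpy as np

# Two groups with different score distributions; a fixed acceptance
# threshold; and a bounded effort budget delta each rejected sample
# may spend to raise its score. All numbers here are made up.
rng = np.random.default_rng(1)
threshold, delta = 0.0, 0.5
scores_a = rng.normal(-0.2, 1.0, 10_000)   # group A scores
scores_b = rng.normal(-0.8, 1.0, 10_000)   # group B scores

def improvability(scores):
    """Fraction of rejected samples that can be accepted after effort delta."""
    rejected = scores[scores < threshold]
    return np.mean(rejected + delta >= threshold)

gap = abs(improvability(scores_a) - improvability(scores_b))
```

EI asks this `gap` to be small: both groups' rejected members should have an equal chance of reaching acceptance with the same bounded effort.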


An Optimal Transport Approach to Personalized Federated Learning

no code implementations • 6 Jun 2022 • Farzan Farnia, Amirhossein Reisizadeh, Ramtin Pedarsani, Ali Jadbabaie

In this paper, we focus on this problem and propose a novel personalized Federated Learning scheme based on Optimal Transport (FedOT) as a learning algorithm that learns the optimal transport maps for transferring data points to a common distribution as well as the prediction model under the applied transport map.

Personalized Federated Learning

Straggler-Resilient Personalized Federated Learning

1 code implementation • 5 Jun 2022 • Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari

Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.

Learning Theory Personalized Federated Learning +1

Binary Classification Under $\ell_0$ Attacks for General Noise Distribution

no code implementations • 9 Mar 2022 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

We introduce a classification method which employs a nonlinear component called truncation, and show that in an asymptotic scenario, as long as the adversary is restricted to perturbing no more than $\sqrt{d}$ data samples, we can almost achieve the optimal classification error in the absence of the adversary, i.e., we can completely neutralize the adversary's effect.

Binary Classification Classification

Imitation Learning by Estimating Expertise of Demonstrators

1 code implementation • 2 Feb 2022 • Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, Ramtin Pedarsani

In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms.

Continuous Control Imitation Learning

Efficient and Robust Classification for Sparse Attacks

no code implementations • 23 Jan 2022 • Mark Beliaev, Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

In the past two decades we have seen the popularity of neural networks increase in conjunction with their classification accuracy.

Classification Malware Detection +1

Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing

no code implementations • 4 Dec 2021 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani

We derive the worst-case attack for the GLRT defense, and show that its asymptotic performance (as the dimension of the data increases) approaches that of the minimax defense.

Emergent Prosociality in Multi-Agent Games Through Gifting

no code implementations • 13 May 2021 • Woodrow Z. Wang, Mark Beliaev, Erdem Biyik, Daniel A. Lazar, Ramtin Pedarsani, Dorsa Sadigh

Coordination is often critical to forming prosocial behaviors -- behaviors that increase the overall sum of rewards received by all agents in a multi-agent game.

Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model

no code implementations • 5 Apr 2021 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani

Under the assumption that data is distributed according to the Gaussian mixture model, our goal is to characterize the optimal robust classifier and the corresponding robust classification error as well as a variety of trade-offs between robustness, accuracy, and the adversary's budget.

Classification General Classification +1

Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity

no code implementations • 28 Dec 2020 • Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani

Federated Learning is a novel paradigm that involves learning from data samples distributed across a large network of clients while the data remains local.

Federated Learning

Incentivizing Routing Choices for Safe and Efficient Transportation in the Face of the COVID-19 Pandemic

no code implementations • 28 Dec 2020 • Mark Beliaev, Erdem Biyik, Daniel A. Lazar, Woodrow Z. Wang, Dorsa Sadigh, Ramtin Pedarsani

In turn, significant increases in traffic congestion are expected, since people are likely to prefer using their own vehicles or taxis as opposed to riskier and more crowded options such as the railway.

Adversarially Robust Classification based on GLRT

no code implementations • 16 Nov 2020 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani

We evaluate the GLRT approach for the special case of binary hypothesis testing in white Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations, a setting for which a minimax strategy optimizing for the worst-case attack is known.

Classification General Classification +2

Asymptotic Behavior of Adversarial Training in Binary Classification

no code implementations • 26 Oct 2020 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis

It has been consistently reported that many machine learning models are susceptible to adversarial attacks, i.e., small additive adversarial perturbations applied to data points can cause misclassification.

Binary Classification Classification +1

Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions

no code implementations • 16 Jun 2020 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis

For a stylized setting with Gaussian features and problem dimensions that grow large at a proportional rate, we start with sharp performance characterizations and then derive tight lower bounds on the estimation and prediction error that hold over a wide class of loss functions and for any value of the regularization parameter.

Vocal Bursts Intensity Prediction

Robust Federated Learning: The Case of Affine Distribution Shifts

no code implementations • NeurIPS 2020 • Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie

In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model.

Federated Learning Image Classification

Quantized Decentralized Stochastic Learning over Directed Graphs

no code implementations • ICML 2020 • Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph.


Polarizing Front Ends for Robust CNNs

1 code implementation • 22 Feb 2020 • Can Bakiskan, Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani

The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity."

Sharp Asymptotics and Optimal Performance for Inference in Binary Models

no code implementations • 17 Feb 2020 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis

We study convex empirical risk minimization for high-dimensional inference in binary models.

Sharp Guarantees for Solving Random Equations with One-Bit Information

no code implementations • 12 Aug 2019 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis

We study the performance of a wide class of convex optimization-based estimators for recovering a signal from corrupted one-bit measurements in high-dimensions.

Robust and Communication-Efficient Collaborative Learning

1 code implementation • NeurIPS 2019 • Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider a decentralized learning problem, where a set of computing nodes aim at solving a non-convex optimization problem collaboratively.


CodedReduce: A Fast and Robust Framework for Gradient Aggregation in Distributed Learning

no code implementations • 6 Feb 2019 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr

That is, it parallelizes the communications over a tree topology leading to efficient bandwidth utilization, and carefully designs a redundant data set allocation and coding strategy at the nodes to make the proposed gradient aggregation scheme robust to stragglers.

Robust Adversarial Learning via Sparsifying Front Ends

1 code implementation • 24 Oct 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani

We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.

An Exact Quantized Decentralized Gradient Descent Algorithm

no code implementations • 29 Jun 2018 • Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider the problem of decentralized consensus optimization, where the sum of $n$ smooth and strongly convex functions is minimized over $n$ distributed agents that form a connected network.

Distributed Optimization Quantization

Combating Adversarial Attacks Using Sparse Representations

3 code implementations • 11 Mar 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani

It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).

General Classification

Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

3 code implementations • 15 Jan 2018 • Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani

In this paper, we study this phenomenon in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.

Coded Computation over Heterogeneous Clusters

1 code implementation • 21 Jan 2017 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr

There have been recent results that demonstrate the impact of coding for efficient utilization of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in homogeneous clusters.

Distributed, Parallel, and Cluster Computing Information Theory

Speeding Up Distributed Machine Learning Using Codes

no code implementations • 8 Dec 2015 • Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, Kannan Ramchandran

We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling.
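
The coded matrix-multiplication building block can be sketched in a few lines (a generic illustration of MDS-coded computation under made-up sizes, not the paper's implementation): split a matrix across two workers, give a third worker the parity sum, and recover the full product from any two of the three results, so a single straggler never delays the computation.

```python
import numpy as np

# (3, 2) MDS-coded matrix-vector multiplication: any 2 of the 3
# worker results suffice to reconstruct A @ x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

A1, A2 = A[:2], A[2:]                         # split rows across two workers
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}   # third worker holds the parity

# Suppose worker w2 straggles; decode A2 @ x from the other two results.
r1 = tasks["w1"] @ x                          # A1 @ x
r3 = tasks["w3"] @ x                          # (A1 + A2) @ x
recovered = np.concatenate([r1, r3 - r1])     # [A1 @ x, A2 @ x]
```

The parity worker costs one extra (redundant) task, in exchange for tolerating any one straggler out of three.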

BIG-bench Machine Learning
