Search Results for author: Farzan Farnia

Found 12 papers, 3 papers with code

Group-Structured Adversarial Training

no code implementations 18 Jun 2021 Farzan Farnia, Amirali Aghazadeh, James Zou, David Tse

Training methods that are robust to perturbations of the input data have received great attention in the machine learning literature.

A Wasserstein Minimax Framework for Mixed Linear Regression

1 code implementation 14 Jun 2021 Theo Diamandis, Yonina C. Eldar, Alireza Fallah, Farzan Farnia, Asuman Ozdaglar

We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models.

Federated Learning
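
For the WMLR objective above, a rough symbolic reading (our notation, not necessarily the paper's): writing $P_\theta$ for the mixture regression model induced by the learned parameters and $P^\star$ for the target mixture, the method seeks

$$\hat{\theta} \in \arg\min_{\theta}\ \mathcal{W}\big(P_\theta,\ P^\star\big),$$

where $\mathcal{W}$ denotes a Wasserstein distance; in practice $P^\star$ is accessible only through samples, so the distance is computed against the empirical distribution of the data.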

Train simultaneously, generalize better: Stability of gradient-based minimax learners

no code implementations 23 Oct 2020 Farzan Farnia, Asuman Ozdaglar

In this paper, we show that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model.

GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models

no code implementations 18 Jun 2020 Farzan Farnia, William Wang, Subhro Das, Ali Jadbabaie

Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a non-convex-concave minimax optimization problem.
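
A schematic of the kind of zero-sum game described above (our notation; the paper's exact payoff may differ): a linear generator $G_{A,b}(z) = Az + b$ driven by Gaussian noise plays against a discriminator $D_w$ built from a softmax over quadratic functions of the input, in a game of the form

$$\min_{A,\,b}\ \max_{w}\ \ \mathbb{E}_{x \sim P_{\mathrm{data}}}\big[D_w(x)\big] \;-\; \mathbb{E}_{z \sim \mathcal{N}(0,I)}\big[D_w(Az + b)\big].$$

These particular parameterizations are what give the problem the non-convex-concave structure noted above.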

Robust Federated Learning: The Case of Affine Distribution Shifts

no code implementations NeurIPS 2020 Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie

In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learned model.

Federated Learning, Image Classification

GANs May Have No Nash Equilibria

no code implementations ICML 2020 Farzan Farnia, Asuman Ozdaglar

We discuss several numerical experiments demonstrating the existence of proximal equilibrium solutions in GAN minimax problems.

Generalizable Adversarial Training via Spectral Normalization

1 code implementation ICLR 2019 Farzan Farnia, Jesse M. Zhang, David Tse

A significant portion of this gap can be attributed to the decrease in generalization performance due to adversarial training.
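
Spectral normalization is the regularizer the title refers to. The sketch below is a generic illustration built on PyTorch's spectral_norm utility plus a one-step FGSM perturbation; it is not the authors' released code or their exact training setup.

```python
# Illustrative sketch only: generic spectral normalization + FGSM step,
# not the paper's released implementation or its exact hyperparameters.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectrally normalizing each linear layer bounds its operator norm,
# which in turn controls the network's Lipschitz constant.
model = nn.Sequential(
    spectral_norm(nn.Linear(784, 256)),
    nn.ReLU(),
    spectral_norm(nn.Linear(256, 10)),
)

def fgsm_perturb(x, y, eps=0.1):
    """One-step FGSM attack, a common inner step of adversarial training."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Adversarial training then minimizes the loss on fgsm_perturb(x, y) batches.
```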

A Convex Duality Framework for GANs

no code implementations NeurIPS 2018 Farzan Farnia, David Tse

For a convex set $\mathcal{F}$, this duality framework interprets the original GAN formulation as finding the generative model with minimum JS-divergence to the distributions penalized to match the moments of the data distribution, with the moments specified by the discriminators in $\mathcal{F}$.
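
Read as a formula (schematic, in our notation rather than the paper's exact statement): for convex $\mathcal{F}$, the GAN problem is interpreted as

$$\min_{P_G}\ \min_{Q \in \mathcal{Q}_{\mathcal{F}}(P_{\mathrm{data}})}\ \mathrm{JSD}\big(Q,\ P_G\big),$$

where $\mathcal{Q}_{\mathcal{F}}(P_{\mathrm{data}})$ stands for the family of distributions penalized to match the moments $\mathbb{E}[f]$, $f \in \mathcal{F}$, of the data distribution.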

A Spectral Approach to Generalization and Optimization in Neural Networks

no code implementations ICLR 2018 Farzan Farnia, Jesse Zhang, David Tse

The recent success of deep neural networks stems from their ability to generalize well on real data; however, Zhang et al. have observed that neural networks can easily overfit random labels.

Understanding GANs: the LQG Setting

no code implementations ICLR 2018 Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse

Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data.

A Minimax Approach to Supervised Learning

1 code implementation NeurIPS 2016 Farzan Farnia, David Tse

Given a task of predicting $Y$ from $X$, a loss function $L$, and a set of probability distributions $\Gamma$ on $(X, Y)$, what is the optimal decision rule minimizing the worst-case expected loss over $\Gamma$?
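
In symbols (our notation for the quantities named in the question above), the sought decision rule is

$$\delta^\star \in \arg\min_{\delta}\ \max_{P \in \Gamma}\ \mathbb{E}_{P}\big[L\big(Y,\ \delta(X)\big)\big],$$

i.e. the rule whose worst-case expected loss over the uncertainty set $\Gamma$ is smallest.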

Discrete Rényi Classifiers

no code implementations NeurIPS 2015 Meisam Razaviyayn, Farzan Farnia, David Tse

We prove that for a given set of marginals, the minimum Hirschfeld-Gebelein-Rényi (HGR) correlation principle introduced in [1] leads to a randomized classification rule whose misclassification rate is at most twice that of the optimal classifier.

Feature Selection, General Classification
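
For reference, the HGR maximal correlation invoked above is standardly defined (textbook definition, not quoted from the paper) as

$$\rho_{\mathrm{HGR}}(X, Y) \;=\; \sup_{f,\,g}\ \mathbb{E}\big[f(X)\,g(Y)\big],$$

where the supremum is over real-valued $f$, $g$ with $\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0$ and $\mathbb{E}[f(X)^2] = \mathbb{E}[g(Y)^2] = 1$.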
