Search Results for author: Meisam Razaviyayn

Found 44 papers, 15 papers with code

Differentially Private Next-Token Prediction of Large Language Models

1 code implementation • 22 Mar 2024 • James Flemings, Meisam Razaviyayn, Murali Annavaram

Ensuring the privacy of Large Language Models (LLMs) is becoming increasingly important.

Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization

no code implementations • 28 Jan 2024 • Yinbin Han, Meisam Razaviyayn, Renyuan Xu

Our analysis is grounded in a novel parametric form of the neural network and an innovative connection between score matching and regression analysis, facilitating the application of advanced statistical and optimization techniques.

Denoising • Regression

f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization

1 code implementation • 6 Dec 2023 • Sina Baharlouei, Shivam Patel, Meisam Razaviyayn

While numerous constraints and regularization terms have been proposed in the literature to promote fairness in machine learning tasks, most of these methods are not amenable to stochastic optimization due to the complex and nonlinear structure of constraints and regularizers.

Fairness • Stochastic Optimization

Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework

no code implementations • 20 Sep 2023 • Sina Baharlouei, Meisam Razaviyayn

While training fair machine learning models has been studied extensively in recent years, most developed methods rely on the assumption that the training and test data have similar distributions.


Optimal Differentially Private Model Training with Public Data

1 code implementation • 26 Jun 2023 • Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn

We show that the optimal error rates can be attained (up to log factors) by either discarding private data and training a public model, or treating public data like it is private and using an optimal DP algorithm.
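The snippet does not spell out the "optimal DP algorithm"; as a hedged illustration of the second strategy (treating all data as private), the clipped-and-noised gradient step at the core of standard differentially private training (Gaussian mechanism) can be sketched as follows. All names and parameter choices here are ours, not the paper's:

```python
import numpy as np

def noisy_clipped_gradient(grads, clip, noise_mult, rng):
    """One Gaussian-mechanism gradient step: clip each per-example
    gradient to norm `clip`, average, and add calibrated Gaussian noise."""
    n = len(grads)
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12)) for g in grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.standard_normal(avg.shape) * (noise_mult * clip / n)
    return avg + noise

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
# with noise_mult = 0 this reduces to plain clipped averaging
g = noisy_clipped_gradient(grads, clip=1.0, noise_mult=0.0, rng=rng)
```

The first gradient (norm 5) is scaled down to unit norm while the second is left untouched, so the step stays bounded regardless of any single example's influence.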

Four Axiomatic Characterizations of the Integrated Gradients Attribution Method

no code implementations • 23 Jun 2023 • Daniel Lundstrom, Meisam Razaviyayn

Deep neural networks have produced significant progress among machine learning models in terms of accuracy and functionality, but their inner workings are still largely unknown.

Distributing Synergy Functions: Unifying Game-Theoretic Interaction Methods for Machine-Learning Explainability

no code implementations • 4 May 2023 • Daniel Lundstrom, Meisam Razaviyayn

We show that, given modest assumptions, a unique full account of interactions between features, called synergies, is possible in the continuous input setting.

Decision Making • Fairness

Policy Gradient Converges to the Globally Optimal Policy for Nearly Linear-Quadratic Regulators

no code implementations • 15 Mar 2023 • Yinbin Han, Meisam Razaviyayn, Renyuan Xu

Nonlinear control systems with only partial information available to the decision maker are prevalent in a variety of applications.

Stochastic Differentially Private and Fair Learning

1 code implementation • 17 Oct 2022 • Andrew Lowy, Devansh Gupta, Meisam Razaviyayn

However, existing algorithms for DP fair learning are either not guaranteed to converge or require a full batch of data in each iteration to converge.

Binary Classification • Decision Making +2

Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms

no code implementations • 24 Sep 2022 • Hesameddin Mohammadi, Meisam Razaviyayn, Mihailo R. Jovanović

We study momentum-based first-order optimization algorithms in which the iterations utilize information from the two previous steps and are subject to an additive white noise.
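A minimal sketch of such a two-step iteration, assuming the heavy-ball form with additive white noise (the function name and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def noisy_heavy_ball(grad, x0, alpha, beta, sigma, steps, rng):
    """Two-step momentum iteration with additive white noise:
    x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}) + sigma*w_k."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        w = rng.standard_normal(x.shape)
        # RHS uses the previous two iterates, then both are shifted forward
        x_prev, x = x, x - alpha * grad(x) + beta * (x - x_prev) + sigma * w
    return x

rng = np.random.default_rng(0)
grad = lambda x: 2.0 * x  # gradient of f(x) = ||x||^2
# sigma = 0 gives the noiseless accelerated method; sigma > 0 exposes the
# rate-versus-noise-amplification tradeoff the paper studies
x_final = noisy_heavy_ball(grad, np.ones(2), alpha=0.1, beta=0.5, sigma=0.0, steps=200, rng=rng)
```

On this quadratic the noiseless iterates contract by a fixed modulus per step, so `x_final` is essentially at the optimum; with `sigma > 0` the iterates instead fluctuate around it with a variance that grows as the momentum parameter pushes the contraction modulus toward one.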

Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses

no code implementations • 15 Sep 2022 • Andrew Lowy, Meisam Razaviyayn

To address these limitations, this work provides near-optimal excess risk bounds that do not depend on the uniform Lipschitz parameter of the loss.

Stochastic Optimization

Private Non-Convex Federated Learning Without a Trusted Server

1 code implementation • 13 Mar 2022 • Andrew Lowy, Ali Ghafelebashi, Meisam Razaviyayn

…silo data and two classes of Lipschitz continuous loss functions. First, we consider losses satisfying the Proximal Polyak-Łojasiewicz (PL) inequality, an extension of the classical PL condition to the constrained setting.

Federated Learning

A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions

1 code implementation • 24 Feb 2022 • Daniel Lundstrom, Tianjian Huang, Meisam Razaviyayn

Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction.
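Integrated Gradients is the attribution method this paper studies; a standard Riemann-sum approximation of it (an illustrative sketch, not the paper's code) also lets us check the completeness axiom numerically, i.e. that attributions sum to the change in model output between baseline and input:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Midpoint-rule approximation of Integrated Gradients:
    IG_i = (x_i - baseline_i) * integral_0^1 dF/dx_i(baseline + a*(x - baseline)) da."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# toy model: F(x) = x0^2 + 3*x1, with a hand-written gradient
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grad_f = lambda x: np.array([2.0 * x[0], 3.0])
x, baseline = np.array([1.0, 2.0]), np.zeros(2)
ig = integrated_gradients(grad_f, x, baseline)
# completeness: ig.sum() should equal f(x) - f(baseline)
```

Here the quadratic feature receives attribution 1 and the linear feature 6, summing exactly to `f(x) - f(baseline) = 7`, which is the completeness axiom the paper's characterizations build on.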

Robustness through Data Augmentation Loss Consistency

1 code implementation • 21 Oct 2021 • Tianjian Huang, Shaunak Halbe, Chinnadhurai Sankar, Pooyan Amini, Satwik Kottur, Alborz Geramifard, Meisam Razaviyayn, Ahmad Beirami

Our experiments show that DAIR consistently outperforms ERM and DA-ERM with little marginal computational cost and sets new state-of-the-art results in several benchmarks involving covariant data augmentation.
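As a hedged sketch of the loss-consistency idea the title refers to, the objective below pairs each example's loss with the loss on its augmented counterpart; the square-root penalty form, weighting, and function name are assumptions for illustration, not a verbatim reproduction of DAIR:

```python
import numpy as np

def dair_style_loss(loss_clean, loss_aug, lam):
    """ERM term on clean and augmented examples, plus a consistency
    penalty discouraging the two per-example losses from diverging.
    The sqrt form of the penalty is one choice; treat it as a sketch."""
    erm = 0.5 * (loss_clean + loss_aug)
    consistency = (np.sqrt(loss_clean) - np.sqrt(loss_aug)) ** 2
    return float(np.mean(erm + lam * consistency))

loss_clean = np.array([0.4, 1.0])
loss_aug = np.array([0.4, 4.0])
plain = dair_style_loss(loss_clean, loss_aug, lam=0.0)   # reduces to DA-ERM-style averaging
regd = dair_style_loss(loss_clean, loss_aug, lam=1.0)    # penalizes the inconsistent pair
```

With `lam = 0` only the average loss matters; with `lam > 0` the second example, whose clean and augmented losses disagree, is charged extra, which is the mechanism that pushes the model toward augmentation-invariant behavior.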

Multi-domain Dialogue State Tracking • Visual Question Answering

Nonconvex-Nonconcave Min-Max Optimization with a Small Maximization Domain

no code implementations • 8 Oct 2021 • Dmitrii M. Ostrovskii, Babak Barazandeh, Meisam Razaviyayn

For $0 \le k \le 2$ the surrogate function can be efficiently maximized in $y$; our general approximation result then leads to efficient algorithms for finding a near-stationary point in nonconvex-nonconcave min-max problems, for which we also provide convergence guarantees.

RIFLE: Imputation and Robust Inference from Low Order Marginals

1 code implementation • 1 Sep 2021 • Sina Baharlouei, Kelechi Ogudu, Sze-chuan Suen, Meisam Razaviyayn

We develop a statistical inference framework for regression and classification in the presence of missing data without imputation.

Imputation • Regression

Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses

2 code implementations • 17 Jun 2021 • Andrew Lowy, Meisam Razaviyayn

This paper studies federated learning (FL), especially cross-silo FL, with data from people who do not trust the server or other silos.

Federated Learning • Stochastic Optimization

Efficient Algorithms for Estimating the Parameters of Mixed Linear Regression Models

no code implementations • 12 May 2021 • Babak Barazandeh, Ali Ghafelebashi, Meisam Razaviyayn, Ram Sriharsha

When the additive noise in the MLR model is Gaussian, the Expectation-Maximization (EM) algorithm is widely used for maximum likelihood estimation of the MLR parameters.
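A minimal EM sketch for a balanced two-component MLR model with Gaussian noise (illustrative only; `em_mlr`, the initialization, and the balanced-mixture assumption are ours, not the paper's):

```python
import numpy as np

def em_mlr(X, y, beta1, beta2, sigma=1.0, iters=50):
    """EM for y | x drawn from N(x@beta1, sigma^2) or N(x@beta2, sigma^2)
    with equal mixture weights."""
    for _ in range(iters):
        # E-step: posterior probability each point came from component 1
        d1 = 0.5 * ((y - X @ beta1) / sigma) ** 2
        d2 = 0.5 * ((y - X @ beta2) / sigma) ** 2
        w = 1.0 / (1.0 + np.exp(np.clip(d1 - d2, -500, 500)))
        # M-step: weighted least squares for each component
        beta1 = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        beta2 = np.linalg.solve(X.T @ ((1 - w)[:, None] * X), X.T @ ((1 - w) * y))
    return beta1, beta2

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 2))
b1_true, b2_true = np.array([3.0, -1.0]), np.array([-3.0, 1.0])
labels = rng.integers(0, 2, 400)
y = np.where(labels == 0, X @ b1_true, X @ b2_true) + 0.05 * rng.standard_normal(400)
b1, b2 = em_mlr(X, y, beta1=np.array([1.0, 0.0]), beta2=np.array([-1.0, 0.0]))
```

With well-separated components and an initialization in the right basin, the iterates recover both regression vectors up to relabeling; the paper's point is precisely that such convergence is initialization-sensitive in general.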


A Stochastic Optimization Framework for Fair Risk Minimization

1 code implementation • NeurIPS 2021 • Andrew Lowy, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, Ahmad Beirami

We consider the problem of fair classification with discrete sensitive attributes and potentially large models and data sets, requiring stochastic solvers.

Binary Classification • Fairness +1

Fair Empirical Risk Minimization via Exponential Rényi Mutual Information

no code implementations • 1 Jan 2021 • Rakesh Pavan, Andrew Lowy, Sina Baharlouei, Meisam Razaviyayn, Ahmad Beirami

In this paper, we propose another notion of fairness violation, called Exponential Rényi Mutual Information (ERMI) between sensitive attributes and the predicted target.

Attribute • Fairness +1

Near-Optimal Procedures for Model Discrimination with Non-Disclosure Properties

1 code implementation • 4 Dec 2020 • Dmitrii M. Ostrovskii, Mohamed Ndaoud, Adel Javanmard, Meisam Razaviyayn

Here we provide matching upper and lower bounds on the sample complexity as given by $\min\{1/\Delta^2,\sqrt{r}/\Delta\}$ up to a constant factor; here $\Delta$ is a measure of separation between $\mathbb{P}_0$ and $\mathbb{P}_1$ and $r$ is the rank of the design covariance matrix.

Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems

no code implementations • NeurIPS 2020 • Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong

To the best of our knowledge, this is the first time that first-order algorithms with polynomial per-iteration complexity and global sublinear rate are designed to find SOSPs of the important class of non-convex problems with linear constraints (almost surely).

Alternating Direction Method of Multipliers for Quantization

no code implementations • 8 Sep 2020 • Tianjian Huang, Prajwal Singhania, Maziar Sanjabi, Pabitra Mitra, Meisam Razaviyayn

For such optimization problems, we study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm, which is a variant of the widely-used ADMM method applied to our discrete optimization problem.
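A toy instance of the ADMM-Q idea on a simple quadratic with an integer quantization grid (the function and parameters are illustrative sketches of ADMM applied to a discrete constraint, not the paper's ADMM-Q):

```python
import numpy as np

def admm_quantize(c, rho=1.0, iters=50):
    """ADMM for min 0.5*||x - c||^2 subject to x being integer-valued.
    x-update: proximal step on the smooth objective.
    z-update: projection onto the discrete quantization grid.
    u: scaled dual variable."""
    z = np.round(c)
    u = np.zeros_like(c)
    for _ in range(iters):
        x = (c + rho * (z - u)) / (1.0 + rho)  # closed-form prox for the quadratic
        z = np.round(x + u)                    # projection onto the integer grid
        u = u + x - z                          # dual ascent step
    return z

c = np.array([0.2, 1.8, -0.6])
z = admm_quantize(c)
```

For this separable objective the iterates settle on elementwise rounding of `c`, the obvious answer; the interest of the ADMM view is that the same three-step loop applies when the smooth term couples the coordinates, e.g. a training loss over quantized network weights.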


Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances

no code implementations • 15 Jun 2020 • Meisam Razaviyayn, Tianjian Huang, Songtao Lu, Maher Nouiehed, Maziar Sanjabi, Mingyi Hong

The min-max optimization problem, also known as the saddle point problem, is a classical optimization problem which is also studied in the context of zero-sum games.

Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method

no code implementations • 18 Mar 2020 • Babak Barazandeh, Meisam Razaviyayn

Min-max saddle point games appear in a wide range of applications in machine learning and signal processing.

Adversarial Attack

Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems

no code implementations • 18 Feb 2020 • Dmitrii M. Ostrovskii, Andrew Lowy, Meisam Razaviyayn

As a byproduct, the choice $\varepsilon_y = O(\varepsilon_x^2)$ allows for the $O(\varepsilon_x^{-3})$ complexity of finding an $\varepsilon_x$-stationary point for the standard Moreau envelope of the primal function.

Optimization and Control (90C06, 90C25, 90C26, 91A99)

When Does Non-Orthogonal Tensor Decomposition Have No Spurious Local Minima?

no code implementations • 22 Nov 2019 • Maziar Sanjabi, Sina Baharlouei, Meisam Razaviyayn, Jason D. Lee

We study the optimization problem for decomposing $d$ dimensional fourth-order Tensors with $k$ non-orthogonal components.

Tensor Decomposition

SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems

no code implementations • 9 Jul 2019 • Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong

This paper proposes low-complexity algorithms for finding approximate second-order stationary points (SOSPs) of problems with smooth non-convex objective and linear constraints.

Rényi Fair Inference

no code implementations • ICLR 2020 • Sina Baharlouei, Maher Nouiehed, Ahmad Beirami, Meisam Razaviyayn

In this paper, we use Rényi correlation as a measure of fairness of machine learning models and develop a general training framework to impose fairness.

BIG-bench Machine Learning • Clustering +2

Robustness of accelerated first-order algorithms for strongly convex optimization problems

no code implementations • 27 May 2019 • Hesameddin Mohammadi, Meisam Razaviyayn, Mihailo R. Jovanović

We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation.

Training generative networks using random discriminators

2 code implementations • 22 Apr 2019 • Babak Barazandeh, Meisam Razaviyayn, Maziar Sanjabi

This design helps us avoid the min-max formulation and leads to an optimization problem that is stable and can be solved efficiently.

Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods

1 code implementation • NeurIPS 2019 • Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D. Lee, Meisam Razaviyayn

In this paper, we study the problem in the non-convex regime and show that an $\varepsilon$-first-order stationary point of the game can be computed when one player's objective can be efficiently optimized to global optimality.

On the Behavior of the Expectation-Maximization Algorithm for Mixture Models

no code implementations • 24 Sep 2018 • Babak Barazandeh, Meisam Razaviyayn

Our numerical experiments show that our algorithm outperforms the Naive EM algorithm in almost all scenarios.

Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks

no code implementations • ICML 2018 • Mingyi Hong, Meisam Razaviyayn, Jason Lee

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems.

Distributed Optimization

On the Convergence and Robustness of Training GANs with Regularized Optimal Transport

no code implementations • NeurIPS 2018 • Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, Jason D. Lee

A popular GAN formulation is based on the use of Wasserstein distance as a metric between probability distributions.

On Optimal Generalizability in Parametric Learning

no code implementations • NeurIPS 2017 • Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh

Such bias is measured in practice by the cross-validation procedure, where the data set is partitioned into a training set used for fitting and a validation set that is withheld from training and used to measure out-of-sample performance.
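The holdout procedure described here can be sketched as follows (illustrative code, not the paper's; the split fraction and toy regression problem are arbitrary choices):

```python
import numpy as np

def holdout_split(X, y, frac=0.8, seed=0):
    """Randomly partition a data set into a training set and a
    held-out validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(frac * len(X))
    tr, va = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[va], y[va]

# toy linear model: fit on the training split, score on the validation split
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.standard_normal(200)
Xtr, ytr, Xva, yva = holdout_split(X, y)
beta_hat = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
val_mse = float(np.mean((Xva @ beta_hat - yva) ** 2))
```

Because the validation points never touch the fitting step, `val_mse` is an unbiased estimate of the out-of-sample error, which is exactly the quantity whose bias relative to training error the paper analyzes.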

Discrete Rényi Classifiers

no code implementations • NeurIPS 2015 • Meisam Razaviyayn, Farzan Farnia, David Tse

We prove that for a given set of marginals, the minimum Hirschfeld-Gebelein-Rényi (HGR) correlation principle introduced in [1] leads to a randomized classification rule which is shown to have a misclassification rate no larger than twice the misclassification rate of the optimal classifier.

Binary Classification • Feature Selection +1

Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization

1 code implementation • NeurIPS 2014 • Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo, Jong-Shi Pang

In this work, we propose an inexact parallel BCD approach where at each iteration, a subset of the variables is updated in parallel by minimizing convex approximations of the original objective function.

Optimization and Control

A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization

no code implementations • 11 Sep 2012 • Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo

The block coordinate descent (BCD) method is widely used for minimizing a continuous function f of several block variables.
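As a minimal illustration of block coordinate descent (not code from the paper; the quadratic objective and block partition are made up for the example), the loop below cyclically performs exact minimization over each block of a positive-definite quadratic:

```python
import numpy as np

def bcd_quadratic(A, b, blocks, iters=100):
    """Block coordinate descent for f(x) = 0.5*x@A@x - b@x:
    cycle through the blocks, exactly minimizing f over each block
    with the other coordinates held fixed (block Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for blk in blocks:
            rest = [i for i in range(n) if i not in blk]
            # exact block minimizer: A[blk,blk] x_blk = b_blk - A[blk,rest] x_rest
            rhs = b[blk] - A[np.ix_(blk, rest)] @ x[rest]
            x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = bcd_quadratic(A, b, blocks=[[0, 1], [2]])
```

For a strongly convex quadratic each block update has a closed form and the cycle contracts to the global minimizer `A^{-1} b`; the cited paper's contribution is a unified convergence analysis covering inexact surrogate versions of exactly this scheme for nonsmooth problems.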

Optimization and Control
