Search Results for author: Saeed Ghadimi

Found 19 papers, 2 papers with code

Fully Zeroth-Order Bilevel Programming via Gaussian Smoothing

no code implementations29 Mar 2024 Alireza Aghasi, Saeed Ghadimi

In this paper, we study and analyze zeroth-order stochastic approximation algorithms for solving bilevel problems when neither the upper- and lower-level objective values nor their unbiased gradient estimates are available.

Bilevel Optimization
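
As a hedged illustration of the core primitive behind such methods (not the authors' exact estimator; mu and the sample count here are placeholder choices), a Gaussian-smoothing gradient estimate uses only function evaluations:

import numpy as np

def gaussian_smoothing_grad(f, x, mu=1e-2, n_samples=32, rng=None):
    # Zeroth-order estimate of grad f(x) via Gaussian smoothing: it
    # targets the gradient of the surrogate f_mu(x) = E_u[f(x + mu*u)],
    # u ~ N(0, I), using only (possibly noisy) function values.
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_samples

In the fully zeroth-order bilevel setting, estimators of this kind stand in for both the upper- and lower-level gradients.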

Stochastic Nested Compositional Bi-level Optimization for Robust Feature Learning

no code implementations11 Jul 2023 Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi

We develop and analyze stochastic approximation algorithms for solving nested compositional bi-level optimization problems.
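
For orientation, a generic problem of this type (a hedged reading of the title, not necessarily the authors' exact formulation) couples a nested compositional upper level with a lower-level minimizer:

$$\min_{x}\; f_1\bigl(f_2(\cdots f_T(x, y^*(x))\cdots)\bigr) \quad \text{s.t.} \quad y^*(x) \in \operatorname*{arg\,min}_{y}\, g(x, y),$$

where each $f_i$ and $g$ are accessible only through stochastic samples.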

Learn What NOT to Learn: Towards Generative Safety in Chatbots

no code implementations21 Apr 2023 Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeed Ghadimi, Hossein Sameti, Pascale Fung

Our approach differs from the standard contrastive learning framework in that it automatically obtains positive and negative signals from the safe and unsafe language distributions that have been learned beforehand.
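
A minimal sketch of the kind of distribution-level contrast the abstract describes, assuming hypothetical per-token log-probabilities logp_safe and logp_unsafe from the two pre-learned language distributions (names invented here for illustration; this is not the paper's loss):

import torch

def distributional_contrastive_loss(logp_safe: torch.Tensor,
                                     logp_unsafe: torch.Tensor) -> torch.Tensor:
    # Treat the safe distribution as the positive signal and the unsafe
    # one as the negative signal: reward tokens the safe model likes and
    # penalize tokens the unsafe model likes.
    return -(logp_safe - logp_unsafe).mean()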

Contrastive Learning

A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic Composite Optimization

1 code implementation20 Feb 2023 Tesi Xiao, Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi

We focus on decentralized stochastic non-convex optimization, where $n$ agents work together to optimize a composite objective function which is a sum of a smooth term and a non-smooth convex term.
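
The composite structure makes proximal steps the natural primitive. As a hedged single-agent sketch (the paper's algorithm is decentralized and uses one sample per iteration), a proximal stochastic gradient step with an $\ell_1$ non-smooth term reads:

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_sgd_step(x, stoch_grad, step, lam):
    # One proximal step on f(x) + lam * ||x||_1, using a stochastic
    # gradient of the smooth part f.
    return soft_threshold(x - step * stoch_grad, step * lam)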

RIGID: Robust Linear Regression with Missing Data

no code implementations26 May 2022 Alireza Aghasi, MohammadJavad Feizollahi, Saeed Ghadimi

With the significant increase in the use of robust optimization techniques to train machine learning models, this paper presents a novel robust regression framework that operates by minimizing the uncertainty associated with missing data.

regression
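
A hedged reading of this idea: with $M$ a 0/1 mask of missing entries, $\odot$ the entrywise product, and $\Delta$ an uncertainty set of plausible completions, a robust least-squares formulation of this flavor is

$$\min_{w}\; \max_{\delta \in \Delta}\; \bigl\|(X + M \odot \delta)\,w - y\bigr\|_2^2,$$

where the inner maximization prices the worst case over the missing data (an illustrative formulation, not necessarily the paper's exact one).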

A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization

no code implementations9 Feb 2022 Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set.
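
Projection-free means each iteration calls only a linear minimization oracle (LMO) over the constraint set instead of a projection. A minimal conditional-gradient step, using an $\ell_1$-ball LMO purely as an example constraint set:

import numpy as np

def lmo_l1_ball(g, radius=1.0):
    # Linear minimization oracle over the l1 ball:
    # argmin_{||s||_1 <= radius} <g, s> is supported on the largest
    # coordinate of |g|.
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def conditional_gradient_step(x, grad_est, gamma, radius=1.0):
    # One Frank-Wolfe step: move a fraction gamma toward the LMO point.
    s = lmo_l1_ball(grad_est, radius)
    return x + gamma * (s - x)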

The Parametric Cost Function Approximation: A new approach for multistage stochastic programming

no code implementations1 Jan 2022 Warren B Powell, Saeed Ghadimi

The most common approaches for solving multistage stochastic programming problems in the research literature have been to use either value functions ("dynamic programming") or scenario trees ("stochastic programming") to approximate the impact of a decision now on the future.

Escaping Saddle-Point Faster under Interpolation-like Conditions

no code implementations NeurIPS 2020 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that its oracle complexity to reach an $\epsilon$-local-minimizer is $O(1/\epsilon^{2.5})$.

Stochastic Optimization
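
For context, SCRN in its standard (Nesterov-Polyak) form minimizes a cubic-regularized second-order model at each iterate $x_k$:

$$s_k \in \operatorname*{arg\,min}_{s}\; \langle g_k, s\rangle + \tfrac{1}{2}\langle H_k s, s\rangle + \tfrac{M}{6}\|s\|^3, \qquad x_{k+1} = x_k + s_k,$$

where $g_k$ and $H_k$ are mini-batch gradient and Hessian estimates and $M$ upper-bounds the Lipschitz constant of the Hessian.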

Escaping Saddle-Points Faster under Interpolation-like Conditions

no code implementations28 Sep 2020 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that its oracle complexity to reach an $\epsilon$-local-minimizer is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.

Stochastic Optimization

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates

no code implementations24 Aug 2020 Krishnakumar Balasubramanian, Saeed Ghadimi, Anthony Nguyen

We show that the first algorithm, which is a generalization of [GhaRuswan20] to the $T$-level case, can achieve a sample complexity of $\mathcal{O}(1/\epsilon^6)$ by using mini-batches of samples in each iteration.
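
Concretely, the $T$-level problem has the nested form

$$\min_{x}\; F(x) := f_1\bigl(f_2(\cdots f_T(x)\cdots)\bigr),$$

where each $f_i$ is observed only through stochastic samples; the difficulty is that plugging noisy inner-function estimates into the chain rule yields a biased gradient estimate, which is what these algorithms are designed to control.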

Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions

no code implementations15 Jun 2020 Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.
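
Interpolation-like conditions formalize the idea that an over-parametrized model can fit every sample, so stochastic gradient noise vanishes at stationary points; one common formalization in this literature is the strong growth condition

$$\mathbb{E}_{\xi}\bigl\|\nabla f(x,\xi)\bigr\|^2 \;\le\; \rho\,\bigl\|\nabla F(x)\bigr\|^2 \quad \text{for all } x,$$

for some $\rho \ge 1$, where $F(x) = \mathbb{E}_{\xi} f(x,\xi)$ (stated here as background; the paper's exact assumptions may differ).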


Multi-Point Bandit Algorithms for Nonstationary Online Nonconvex Optimization

no code implementations31 Jul 2019 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

In this paper, motivated by online reinforcement learning problems, we propose and analyze bandit algorithms for both general and structured nonconvex problems with nonstationary (or dynamic) regret as the performance measure, in both stochastic and non-stochastic settings.
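
"Multi-point" feedback means the learner may query the loss at a few nearby points per round and build a gradient estimate from the differences. A hedged sketch of the classical two-point estimator along a random unit direction:

import numpy as np

def two_point_grad_estimate(f, x, delta=1e-2, rng=None):
    # Two-point bandit gradient estimate: finite difference of f along
    # a random unit direction u, scaled by the dimension d.
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u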

Stochastic Zeroth-order Discretizations of Langevin Diffusions for Bayesian Inference

no code implementations4 Feb 2019 Abhishek Roy, Lingqing Shen, Krishnakumar Balasubramanian, Saeed Ghadimi

Our theoretical contributions extend the practical applicability of sampling algorithms to the noisy black-box and high-dimensional settings.

Bayesian Inference, Stochastic Optimization +1
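
A hedged sketch of the underlying discretization: the unadjusted Langevin step, with the exact gradient of the log-density replaced by any zeroth-order estimate (such as the Gaussian-smoothing estimator sketched earlier):

import numpy as np

def zo_langevin_step(x, grad_log_p_est, eta, rng=None):
    # One Euler (unadjusted Langevin) step targeting a density p:
    # x+ = x + eta * ghat + sqrt(2*eta) * N(0, I), where ghat is a
    # zeroth-order estimate of grad log p(x).
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(x.shape[0])
    return x + eta * grad_log_p_est + np.sqrt(2.0 * eta) * noise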

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points

no code implementations NeurIPS 2018 Krishnakumar Balasubramanian, Saeed Ghadimi

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on constrained optimization, high-dimensional settings, and saddle-point avoidance.

Stochastic Optimization

Generalized Uniformly Optimal Methods for Nonlinear Programming

no code implementations29 Aug 2015 Saeed Ghadimi, Guanghui Lan, Hongchao Zhang

In a similar vein, we show that some well-studied techniques for nonlinear programming, e.g., Quasi-Newton iteration, can be embedded into optimal convex optimization algorithms to possibly further enhance their numerical performance.
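
For reference, the quasi-Newton (BFGS) building block mentioned here updates a Hessian approximation $B_k$ as

$$B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k);$$

the paper's contribution is embedding such updates into uniformly optimal first-order schemes, which this formula alone does not capture.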

Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming

1 code implementation14 Oct 2013 Saeed Ghadimi, Guanghui Lan

We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order information, similarly to the gradient descent method.

Optimization and Control
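
A generic form of the AG iteration in this setting (consistent with, though not necessarily identical to, the paper's exact scheme) maintains a middle sequence and two stepsizes:

$$\underline{x}_k = (1-\alpha_k)\bar{x}_{k-1} + \alpha_k x_{k-1}, \qquad x_k = x_{k-1} - \lambda_k \nabla f(\underline{x}_k), \qquad \bar{x}_k = \underline{x}_k - \beta_k \nabla f(\underline{x}_k);$$

choosing $\lambda_k$ and $\beta_k$ appropriately is exactly the "stepsize policy" the abstract refers to.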

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

no code implementations22 Sep 2013 Saeed Ghadimi, Guanghui Lan

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems.
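
The defining twist of RSG is that it returns a randomly selected iterate rather than the last one, which is what makes guarantees on the expected gradient norm possible for nonconvex objectives. A minimal sketch (uniform selection here for simplicity; the paper draws the output index from a stepsize-dependent distribution):

import numpy as np

def rsg(stoch_grad, x0, stepsizes, rng=None):
    # Randomized stochastic gradient: run SGD, then output an iterate
    # chosen at random instead of the final one.
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float).copy()
    iterates = [x.copy()]
    for gamma in stepsizes:
        x = x - gamma * stoch_grad(x)
        iterates.append(x.copy())
    return iterates[rng.integers(len(iterates))]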
