Stochastic Optimization

282 papers with code • 12 benchmarks • 11 datasets

Stochastic Optimization is the task of optimizing an objective function by generating and using random variables. It is usually an iterative process in which randomly generated samples progressively guide the search toward a minimum or maximum of the objective. Stochastic optimization is typically applied to non-convex problems where deterministic methods such as linear or quadratic programming (or their variants) cannot be used.

Source: ASOC: An Adaptive Parameter-free Stochastic Optimization Technique for Continuous Variables
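
As a simple illustration of the iterative scheme described above, here is a minimal random-search sketch in Python. The Rastrigin objective, Gaussian proposal, step size, and iteration budget are all illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def stochastic_search(objective, x0, n_iters=1000, step=0.1, seed=0):
    """Minimal stochastic optimization loop: perturb the current point with
    random noise and keep the perturbation whenever it improves the objective."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    for _ in range(n_iters):
        candidate = x + step * rng.normal(size=x.shape)   # random proposal
        f_cand = objective(candidate)
        if f_cand < fx:                                   # accept only improvements
            x, fx = candidate, f_cand
    return x, fx

# Illustrative non-convex objective (Rastrigin function)
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x_best, f_best = stochastic_search(rastrigin, x0=np.array([3.0, -2.5]))
print(x_best, f_best)
```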


Latest papers with no code

Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis

no code yet • 24 Mar 2024

Our main result is a non-asymptotic guarantee for the VRPG algorithm.

A learning-based solution approach to the application placement problem in mobile edge computing under uncertainty

no code yet • 17 Mar 2024

Then, based on the distance features of each user from the available servers and on their request rates, machine learning models generate the decision variables for the first stage of the stochastic optimization model (the user-to-server request allocation) and act as independent decision agents that reliably mimic the optimization model.
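
A hypothetical sketch of how such a learned decision agent could produce first-stage allocations from distance and request-rate features; the feature construction, the nearest-server training labels, and the random-forest model are illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_users, n_servers = 500, 3

# Features: distance of each user to every server plus the user's request rate.
distances = rng.uniform(0.0, 10.0, size=(n_users, n_servers))
rates = rng.exponential(2.0, size=(n_users, 1))
X = np.hstack([distances, rates])

# Stand-in labels: first-stage allocations from a previously solved stochastic
# program (here, simply the nearest server, purely for illustration).
y = distances.argmin(axis=1)

# The trained classifier acts as a decision agent that mimics the optimizer.
agent = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
first_stage_allocation = agent.predict(X)   # user-to-server request allocation
```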

Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction

no code yet • 11 Mar 2024

These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees.
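
The coin-flip mechanism can be sketched in the style of loopless SVRG. The least-squares objective, step size, and refresh probability below are illustrative assumptions, and the Riemannian (manifold) aspects of the paper are not modeled.

```python
import numpy as np

def loopless_svrg(grad_i, n, x0, lr=0.1, p=0.05, n_iters=2000, seed=0):
    """Loopless SVRG-style loop: instead of a fixed-length inner loop, the full
    gradient anchor is refreshed with probability p at every iteration (a coin flip)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    w = x.copy()                                      # anchor point
    full_grad = np.mean([grad_i(w, j) for j in range(n)], axis=0)
    for _ in range(n_iters):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(w, i) + full_grad   # variance-reduced gradient
        x -= lr * g
        if rng.random() < p:                          # coin flip: refresh the anchor
            w = x.copy()
            full_grad = np.mean([grad_i(w, j) for j in range(n)], axis=0)
    return x

# Illustrative least-squares problem: minimize (1/n) * sum_i (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda x, i: 2 * (A[i] @ x - b[i]) * A[i]
x_hat = loopless_svrg(grad_i, n=100, x0=np.zeros(5))
```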

Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains

no code yet • 11 Mar 2024

In this paper, we study the question of when and how a server could design an FL protocol that is provably beneficial for all participants.

Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control

no code yet • 7 Mar 2024

Changes in demand, various hydrological inputs, and environmental stressors are among the issues that water managers and policymakers face on a regular basis.

Public-data Assisted Private Stochastic Optimization: Power and Limitations

no code yet • 6 Mar 2024

We also study PA-DP supervised learning with unlabeled public samples.

A Note on High-Probability Analysis of Algorithms with Exponential, Sub-Gaussian, and General Light Tails

no code yet • 5 Mar 2024

This short note describes a simple technique for analyzing probabilistic algorithms that rely on a light-tailed (but not necessarily bounded) source of randomization.

SOFIM: Stochastic Optimization Using Regularized Fisher Information Matrix

no code yet • 5 Mar 2024

This paper introduces a new stochastic optimization method based on the regularized Fisher information matrix (FIM), named SOFIM, which efficiently uses the FIM to approximate the Hessian matrix when computing Newton-type gradient updates in large-scale stochastic optimization of machine learning models.
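
A rough sketch of a Newton-type step that uses a regularized empirical Fisher matrix in place of the Hessian; this is not the SOFIM algorithm itself, and the damping constant and logistic-loss mini-batch below are illustrative assumptions.

```python
import numpy as np

def fisher_newton_step(x, grads, lam=0.1, lr=1.0):
    """One Newton-type step using a regularized empirical Fisher information matrix
    F = (1/m) * sum_k g_k g_k^T + lam * I as a stand-in for the Hessian."""
    G = np.stack(grads)                        # (m, d) per-sample gradients
    g_bar = G.mean(axis=0)                     # mini-batch gradient
    F = G.T @ G / len(grads) + lam * np.eye(G.shape[1])
    return x - lr * np.linalg.solve(F, g_bar)  # x - lr * F^{-1} g

# Illustrative logistic-regression mini-batch: per-sample gradients of the log loss.
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 4))
y = rng.integers(0, 2, size=32)
x = np.zeros(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
grads = [(sigmoid(a @ x) - yi) * a for a, yi in zip(A, y)]
x_new = fisher_newton_step(x, grads)
```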

Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms

no code yet • 1 Mar 2024

Our investigation encompasses a wide array of techniques, including SGD and its variants, flat-minima optimizers, and new algorithms we propose under the Basin Hopping framework.

Parameter-Free Algorithms for Performative Regret Minimization under Decision-Dependent Distributions

no code yet • 23 Feb 2024

We provide experimental results that demonstrate the numerical superiority of our algorithms over the existing method and other black-box optimistic optimization methods.