Stochastic Optimization

280 papers with code • 12 benchmarks • 11 datasets

Stochastic Optimization is the task of optimizing an objective function by generating and using random variables. It is usually an iterative process in which randomly generated variables progressively drive the search toward the minimum or maximum of the objective. Stochastic Optimization is typically applied to non-convex problems, where deterministic methods such as linear or quadratic programming and their variants cannot be used.

Source: ASOC: An Adaptive Parameter-free Stochastic Optimization Technique for Continuous Variables
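
For intuition, here is a minimal, generic sketch of stochastic optimization in Python: random Gaussian perturbations are proposed at each iteration and improvements are kept, so random variables progressively drive the search toward a minimum of a non-convex objective. The objective, step size, and function names are illustrative assumptions, not taken from any paper listed below.

```python
# A minimal sketch of stochastic (random-search) optimization on a non-convex function.
# Random proposals are generated each iteration and better points are kept.
import numpy as np

def objective(x):
    # A simple non-convex test function with several local minima (illustrative choice).
    return np.sin(3 * x) + 0.1 * x ** 2

def stochastic_search(x0, n_iters=5000, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, fx = x0, objective(x0)
    for _ in range(n_iters):
        candidate = x + sigma * rng.standard_normal()   # random proposal
        f_candidate = objective(candidate)
        if f_candidate < fx:                            # keep only improvements
            x, fx = candidate, f_candidate
    return x, fx

if __name__ == "__main__":
    x_best, f_best = stochastic_search(x0=2.0)
    print(f"approximate minimizer: {x_best:.3f}, value: {f_best:.3f}")
```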

Latest papers with no code

Advancing Forest Fire Prevention: Deep Reinforcement Learning for Effective Firebreak Placement

no code yet • 12 Apr 2024

To the best of our knowledge, this study represents a pioneering effort in using Reinforcement Learning to address the aforementioned problem, offering promising perspectives in fire prevention and landscape management.

Decision Transformer for Wireless Communications: A New Paradigm of Resource Management

no code yet • 8 Apr 2024

By leveraging the power of DT models learned over extensive datasets, the proposed architecture is expected to achieve rapid convergence with many fewer training epochs and higher performance in a new context, e.g., similar tasks with different state and action spaces, compared with DRL.

Transformer-based Stagewise Decomposition for Large-Scale Multistage Stochastic Optimization

no code yet • 3 Apr 2024

Solving large-scale multistage stochastic programming (MSP) problems poses a significant challenge as commonly used stagewise decomposition algorithms, including stochastic dual dynamic programming (SDDP), face growing time complexity as the subproblem size and problem count increase.

Accelerated Parameter-Free Stochastic Optimization

no code yet • 31 Mar 2024

We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters.

Beyond Suspension: A Two-phase Methodology for Concluding Sports Leagues

no code yet • 29 Mar 2024

Methodology: We propose a data-driven model that exploits predictive and prescriptive analytics to produce a schedule for the remainder of the season comprising a subset of the originally scheduled games.

Taming the Interacting Particle Langevin Algorithm -- the superlinear case

no code yet • 28 Mar 2024

Recent advances in stochastic optimization have yielded the interacting particle Langevin algorithm (IPLA), which leverages the notion of interacting particle systems (IPS) to efficiently sample from approximate posterior densities.
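
For readers unfamiliar with particle-based Langevin methods, the sketch below shows a generic unadjusted Langevin update applied to a cloud of particles targeting a density proportional to exp(-U(x)). It only illustrates the underlying idea of particle-based sampling; it is not the IPLA algorithm from this paper, and the target, step size, and function names are assumptions.

```python
# A generic sketch of particle-based (unadjusted) Langevin dynamics: each particle takes
# a gradient step on U plus Gaussian noise, drifting the cloud toward exp(-U(x)).
import numpy as np

def grad_U(x):
    # Gradient of the negative log-density; here a standard Gaussian target, U(x) = x^2 / 2.
    return x

def langevin_particles(n_particles=100, n_steps=1000, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles) * 3.0   # dispersed initialization
    for _ in range(n_steps):
        noise = rng.standard_normal(n_particles)
        # Euler-Maruyama discretization of the overdamped Langevin SDE.
        particles = particles - step * grad_U(particles) + np.sqrt(2 * step) * noise
    return particles

if __name__ == "__main__":
    samples = langevin_particles()
    print("particle mean:", samples.mean(), "particle std:", samples.std())
```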

Differentially Private Distributed Nonconvex Stochastic Optimization with Quantized Communications

no code yet • 27 Mar 2024

This paper proposes a new distributed nonconvex stochastic optimization algorithm that can achieve privacy protection, communication efficiency and convergence simultaneously.

DASA: Delay-Adaptive Multi-Agent Stochastic Approximation

no code yet • 25 Mar 2024

We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation (SA) problem by acting in parallel and communicating with a central server.
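
The sketch below illustrates this setting generically: several agents compute noisy gradients in parallel and a central server averages them before updating the shared iterate. It is not the DASA algorithm (which additionally adapts to communication delays); the objective, step size, and names are illustrative assumptions.

```python
# A generic sketch of multi-agent stochastic approximation with a central server:
# each agent returns a noisy gradient at the current iterate, and the server averages
# them before taking a step, reducing the variance of each update.
import numpy as np

def noisy_grad(x, rng):
    # Noisy gradient of f(x) = 0.5 * ||x||^2, i.e. x plus Gaussian noise.
    return x + rng.standard_normal(x.shape)

def parallel_sa(dim=5, n_agents=10, n_rounds=2000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    for _ in range(n_rounds):
        grads = [noisy_grad(x, rng) for _ in range(n_agents)]  # agents act in parallel
        x = x - step * np.mean(grads, axis=0)                  # server averages updates
    return x

if __name__ == "__main__":
    print("final iterate norm:", np.linalg.norm(parallel_sa()))
```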

Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis

no code yet • 24 Mar 2024

Our main result is a non-asymptotic guarantee for the VRPG algorithm.

A learning-based solution approach to the application placement problem in mobile edge computing under uncertainty

no code yet • 17 Mar 2024

Then, based on each user's distance features from the available servers and their request rates, machine learning models generate the first-stage decision variables of the stochastic optimization model (the user-to-server request allocation) and are employed as independent decision agents that reliably mimic the optimization model.