no code implementations • 18 Oct 2024 • Qiran Dong, Paul Grigas, Vishal Gupta
We propose an alternative approach that parameterizes the solution path with a set of basis functions and solves a single stochastic optimization problem to learn the entire solution path.
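A minimal sketch of this idea (all problem data and basis choices here are my own illustrative assumptions, not the paper's implementation): approximate the ridge regression solution path w*(lam) with a polynomial basis in the rescaled log-regularization parameter, and learn the basis coefficients with a single stochastic loop that samples lam at every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 5                          # samples, features, basis functions
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam_lo, lam_hi = 1e-3, 1.0                   # assumed range of the regularization path

def basis(lam):
    """Polynomial basis in log(lam), rescaled to [-1, 1]."""
    t = 2 * (np.log(lam) - np.log(lam_lo)) / (np.log(lam_hi) - np.log(lam_lo)) - 1
    return np.array([t ** j for j in range(k)])

C = np.zeros((d, k))                         # basis coefficients to learn
step = 0.01
for it in range(30000):
    lam = np.exp(rng.uniform(np.log(lam_lo), np.log(lam_hi)))   # sample a point on the path
    phi = basis(lam)
    w = C @ phi                              # parameterized solution at this lam
    grad_w = X.T @ (X @ w - y) / n + lam * w # grad of 0.5/n*||Xw - y||^2 + 0.5*lam*||w||^2
    C -= step * np.outer(grad_w, phi)        # chain rule: d(objective)/dC = grad_w * phi^T

lam = 0.1                                    # sanity check against the exact ridge solution
w_exact = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print("path error at lam=0.1:", round(float(np.linalg.norm(C @ basis(lam) - w_exact)), 4))
```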
no code implementations • 6 Jun 2023 • Hyungki Im, Paul Grigas
Our findings suggest that learning solely with noisy samples is impossible without access to clean samples or strong assumptions on the distribution of the data.
no code implementations • 11 May 2023 • Mo Liu, Paul Grigas, Heyuan Liu, Zuo-Jun Max Shen
We develop the first active learning method in the predict-then-optimize framework.
no code implementations • 15 Jun 2022 • Heyuan Liu, Paul Grigas
We propose an algorithm that mixes a prediction step based on the "Smart Predict-then-Optimize (SPO)" method with a dual update step based on mirror descent.
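A minimal sketch of the two ingredients (illustrative assumptions throughout): the reward predictor is a fixed linear model standing in for a model trained with the SPO+ loss, and the dual update is mirror descent with the Euclidean mirror map, i.e. projected subgradient ascent on the Lagrangian dual of the resource constraints.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, m = 1000, 3, 2                    # periods, feature dim, number of resources
b = np.array([300.0, 250.0])            # assumed resource budgets over the horizon
rho = b / T                             # per-period budget rates
theta_hat = rng.normal(size=d)          # stand-in for an SPO-trained predictor
p = np.zeros(m)                         # dual prices on the resources
eta = 0.05                              # mirror descent step size
spent, reward = np.zeros(m), 0.0

for t in range(T):
    z = rng.normal(size=d)              # contextual features for this period
    a = rng.uniform(0.2, 1.0, size=m)   # resource consumption if we accept
    r_hat = theta_hat @ z               # prediction step
    # primal step: accept iff the predicted reward beats the priced resource cost
    x = 1.0 if (r_hat > p @ a and np.all(spent + a <= b)) else 0.0
    r = r_hat + 0.1 * rng.normal()      # realized reward (simulated as prediction + noise)
    spent += a * x
    reward += r * x
    # dual step: mirror descent on the Lagrangian, projected onto p >= 0
    p = np.maximum(0.0, p + eta * (a * x - rho))

print("total reward:", round(reward, 2), "resources used:", spent.round(1), "of", b)
```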
no code implementations • 24 Oct 2021 • Meng Qi, Paul Grigas, Zuo-Jun Max Shen
In contrast to the standard approach of first estimating the distribution of uncertain parameters and then optimizing the objective based on the estimation, we propose an integrated conditional estimation-optimization (ICEO) framework that estimates the underlying conditional distribution of the random parameter while considering the structure of the optimization problem.
no code implementations • NeurIPS 2021 • Heyuan Liu, Paul Grigas
We develop risk bounds and uniform calibration results for the SPO+ loss relative to the SPO loss, which provide a quantitative way to transfer the excess surrogate risk to excess true risk.
no code implementations • 20 Apr 2021 • Alfonso Lobos, Paul Grigas, Zheng Wen
We consider an online revenue maximization problem over a finite time horizon subject to lower and upper bounds on cost.
1 code implementation • 9 Jun 2019 • Paul Grigas, Alfonso Lobos, Nathan Vermeersch
The Frank-Wolfe method and its extensions are well-suited for delivering solutions with desirable structural properties, such as sparsity or low-rank structure.
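A minimal sketch of the sparsity property (problem data and the l1-ball radius are my own assumptions): Frank-Wolfe on a least-squares objective over an l1-ball. Each linear minimization step returns a signed coordinate vertex, so after k iterations the iterate has at most k nonzeros.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, tau = 100, 500, 5.0
x_true = np.zeros(n); x_true[:3] = tau / 3      # sparse ground truth
A = rng.normal(size=(m, n))
b = A @ x_true + 0.01 * rng.normal(size=m)

x = np.zeros(n)
for k in range(50):
    grad = A.T @ (A @ x - b)                    # gradient of 0.5*||Ax - b||^2
    i = np.argmax(np.abs(grad))                 # linear minimization oracle over the l1-ball
    s = np.zeros(n)
    s[i] = -tau * np.sign(grad[i])              # optimal vertex: -tau*sign(grad_i)*e_i
    gamma = 2.0 / (k + 2.0)                     # standard Frank-Wolfe step-size rule
    x = (1 - gamma) * x + gamma * s             # convex combination keeps iterates sparse

print("nonzeros after 50 iterations:", np.count_nonzero(x))
```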
no code implementations • NeurIPS 2019 • Othman El Balghiti, Adam N. Elmachtoub, Paul Grigas, Ambuj Tewari
A natural loss function in this setting measures the cost of the decisions induced by the predicted parameters, rather than the prediction error of the parameters themselves.
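A minimal sketch of this decision-induced loss (toy numbers of my own, with the downstream problem taken to be a linear objective minimized over the unit simplex): a prediction with smaller squared error can incur a larger decision loss than a less accurate prediction that preserves the optimal decision.

```python
import numpy as np

def oracle(c):
    """argmin of c^T w over the unit simplex (a vertex)."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def decision_loss(c_pred, c_true):
    """Extra cost of acting on predicted costs instead of the true costs."""
    return c_true @ oracle(c_pred) - c_true @ oracle(c_true)

c_true = np.array([1.0, 1.2, 5.0])
c_a = np.array([1.3, 1.1, 5.0])   # small prediction error, wrong decision
c_b = np.array([0.2, 2.0, 6.0])   # large prediction error, correct decision

for c_hat in (c_a, c_b):
    print("squared error:", round(float(np.sum((c_hat - c_true) ** 2)), 2),
          " decision loss:", round(float(decision_loss(c_hat, c_true)), 2))
```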
no code implementations • 20 Oct 2018 • Robert M. Freund, Paul Grigas, Rahul Mazumder
When the training data is non-separable, we show that the degree of non-separability naturally enters the analysis and informs the properties and convergence guarantees of two standard first-order methods: steepest descent (for any given norm) and stochastic gradient descent.
1 code implementation • 22 Oct 2017 • Adam N. Elmachtoub, Paul Grigas
Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective.
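A minimal sketch of the SPO+ surrogate and its subgradient (the feasible region, the linear predictive model, and the training loop are illustrative assumptions; here the downstream problem is min over the unit simplex of c^T w, so the optimization oracle is just a coordinate argmin):

```python
import numpy as np

rng = np.random.default_rng(4)
d_feat, d_cost, n = 4, 6, 500
B_true = rng.normal(size=(d_cost, d_feat))
Z = rng.normal(size=(n, d_feat))
C = Z @ B_true.T + 0.1 * rng.normal(size=(n, d_cost))   # true cost vectors

def oracle(c):
    """Solve min_{w in simplex} c^T w; returns an optimal vertex."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def spo_plus(c_hat, c):
    """SPO+ loss and a subgradient with respect to c_hat."""
    w_star = oracle(c)                     # optimal decision under the true cost
    w_bar = oracle(2 * c_hat - c)          # maximizer of (c - 2*c_hat)^T w
    loss = (c - 2 * c_hat) @ w_bar + 2 * c_hat @ w_star - c @ w_star
    grad = 2 * (w_star - w_bar)
    return loss, grad

B = np.zeros((d_cost, d_feat))             # linear model c_hat = B z
step = 0.05
for epoch in range(30):
    for i in rng.permutation(n):
        z, c = Z[i], C[i]
        _, g = spo_plus(B @ z, c)
        B -= step * np.outer(g, z)         # stochastic subgradient step on the SPO+ loss

# decision (SPO) loss on the training data: excess cost of acting on predicted costs
spo = np.mean([C[i] @ oracle(B @ Z[i]) - C[i] @ oracle(C[i]) for i in range(n)])
print("average SPO loss:", round(float(spo), 4))
```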
no code implementations • 6 Jun 2017 • Paul Grigas, Alfonso Lobos, Zheng Wen, Kuang-Chih Lee
We develop an optimization model and corresponding algorithm for the management of a demand-side platform (DSP), whereby the DSP aims to maximize its own profit while acquiring valuable impressions for its advertiser clients.
Optimization and Control • Computer Science and Game Theory
no code implementations • 6 Nov 2015 • Robert M. Freund, Paul Grigas, Rahul Mazumder
Motivated principally by the low-rank matrix completion problem, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region.
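For context, a minimal sketch of the basic Frank-Wolfe step for matrix completion over a nuclear-norm ball, whose linear minimization oracle is a rank-one matrix built from the top singular pair of the gradient; the data and radius are my own assumptions, and the paper's extension (steps within low-dimensional faces to keep iterates low-rank) is not shown here.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 60, 40, 3
U0, _ = np.linalg.qr(rng.normal(size=(m, r)))
V0, _ = np.linalg.qr(rng.normal(size=(n, r)))
M = U0 @ np.diag([5.0, 4.0, 3.0]) @ V0.T          # low-rank ground truth, nuclear norm 12
delta = 12.0                                       # nuclear-norm ball radius
mask = rng.random((m, n)) < 0.3                    # observed entries

X = np.zeros((m, n))
for k in range(100):
    G = np.where(mask, X - M, 0.0)                 # gradient of 0.5*sum_obs (X - M)^2
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    S = -delta * np.outer(U[:, 0], Vt[0])          # LMO over {||X||_* <= delta}
    gamma = 2.0 / (k + 2.0)
    X = (1 - gamma) * X + gamma * S                # rank grows by at most one per step

obs_err = np.linalg.norm((X - M)[mask]) / np.linalg.norm(M[mask])
print("relative error on observed entries:", round(float(obs_err), 3))
```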
no code implementations • 16 May 2015 • Robert M. Freund, Paul Grigas, Rahul Mazumder
Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function.
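A minimal sketch of the master-algorithm viewpoint (my own toy data; shown for the unregularized maximum absolute correlation loss, whose regularized version is the one connected to the Lasso in the paper): a subgradient step on f(r) = ||X^T r||_inf over the residuals r = y - X*beta reduces to the familiar forward stagewise update.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, eps = 100, 20, 0.01
X = rng.normal(size=(n, p))
X /= np.linalg.norm(X, axis=0)          # unit-norm columns, as in the boosting setup
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta = np.zeros(p)
r = y.copy()                            # residual
for _ in range(2000):
    corr = X.T @ r                      # correlations with the residual
    j = np.argmax(np.abs(corr))         # coordinate achieving ||X^T r||_inf
    # subgradient step of length eps along column j = forward stagewise update
    beta[j] += eps * np.sign(corr[j])
    r -= eps * np.sign(corr[j]) * X[:, j]

print("max |correlation|:", round(float(np.abs(X.T @ r).max()), 3),
      "nonzeros in beta:", np.count_nonzero(beta))
```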
no code implementations • 4 Jul 2013 • Robert M. Freund, Paul Grigas, Rahul Mazumder
Boosting methods are highly popular and effective supervised learning methods that combine weak learners into a single accurate model with good statistical performance.