no code implementations • 21 Jun 2022 • Trang H. Tran, Lam M. Nguyen, Katya Scheinberg
In this work, we investigate the optimization aspects of the queueing model as an RL environment and provide insights into learning the optimal policy efficiently.
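To make the setting concrete, here is a minimal sketch of a single-server queue wrapped as an RL environment; the dynamics, costs, and all names are illustrative assumptions, not the paper's model.

```python
import random

class SingleServerQueueEnv:
    """Toy M/M/1-style queue as an RL environment (illustrative only)."""

    def __init__(self, arrival_rate=0.7, max_queue=50, seed=0):
        self.arrival_rate = arrival_rate
        self.max_queue = max_queue
        self.rng = random.Random(seed)
        self.queue_length = 0

    def reset(self):
        self.queue_length = 0
        return self.queue_length

    def step(self, service_rate):
        # One discrete-time slot: at most one arrival and one departure.
        if self.rng.random() < self.arrival_rate:
            self.queue_length = min(self.queue_length + 1, self.max_queue)
        if self.queue_length > 0 and self.rng.random() < service_rate:
            self.queue_length -= 1
        # Reward trades off holding cost against service effort.
        reward = -self.queue_length - 0.5 * service_rate
        return self.queue_length, reward
```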
1 code implementation • 7 Feb 2022 • Trang H. Tran, Katya Scheinberg, Lam M. Nguyen
This rate is better than that of any other shuffling gradient method in the convex regime.
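For context, shuffling gradient methods process the n component functions in a freshly permuted order each epoch rather than sampling indices i.i.d. A minimal random-reshuffling sketch (not the paper's accelerated variant):

```python
import numpy as np

def shuffling_gradient_method(grads, w0, n, lr=0.01, epochs=50, seed=0):
    """Random-reshuffling SGD: one pass over a fresh permutation per epoch.

    grads(i, w) returns the gradient of the i-th component function at w.
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):   # new shuffle each epoch
            w -= lr * grads(i, w)
    return w
```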
no code implementations • 18 Jan 2020 • Frank E. Curtis, Katya Scheinberg
Optimization lies at the heart of machine learning and signal processing.
no code implementations • 24 Sep 2019 • Kostas Hatalis, Alberto J. Lamadrid, Katya Scheinberg, Shalinee Kishore
However, one major shortcoming of composite quantile estimation in neural networks is the quantile crossover problem.
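A common remedy, sketched below under my own naming, combines a smooth approximation of the pinball loss with a penalty on crossing quantiles; the exact loss and penalty used in the paper may differ.

```python
import numpy as np

def smooth_pinball(u, tau, alpha=0.01):
    """Smooth approximation of the pinball loss at quantile level tau:
    tau*u + alpha*log(1 + exp(-u/alpha)), approaching the pinball loss
    as alpha -> 0."""
    return tau * u + alpha * np.logaddexp(0.0, -u / alpha)

def crossover_penalty(q_preds):
    """Penalty on quantile crossing: columns of q_preds hold predictions
    for increasing quantile levels, so adjacent columns should be
    non-decreasing."""
    gaps = np.maximum(0.0, q_preds[:, :-1] - q_preds[:, 1:])
    return np.mean(gaps ** 2)
```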
1 code implementation • 12 Sep 2019 • Mohammad Pirhooshyaran, Katya Scheinberg, Lawrence V. Snyder
This study introduces a framework for the forecasting, reconstruction and feature engineering of multivariate processes along with its renewable energy applications.
no code implementations • 29 May 2019 • Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg
We then demonstrate via rigorous analysis of the variance and by numerical comparisons on reinforcement learning tasks that the Gaussian sampling method used in [Salimans et al. 2016] is significantly inferior to the orthogonal sampling used in [Choromanski et al. 2018] as well as more general interpolation methods.
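The two estimators being compared can be sketched as follows; this is an illustrative rendering, with sample counts and scalings chosen for simplicity rather than taken from either paper.

```python
import numpy as np

def gaussian_smoothing_grad(f, x, sigma=0.1, num_samples=20, seed=0):
    """Gaussian-smoothing gradient estimate: average of forward
    finite differences along i.i.d. Gaussian directions."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + sigma * u) - f(x)) / sigma * u
    return g / num_samples

def orthogonal_smoothing_grad(f, x, sigma=0.1, seed=0):
    """Same estimator, but with orthogonalized directions (QR of a
    Gaussian matrix), which reduces the variance of the estimate."""
    rng = np.random.default_rng(seed)
    d = x.size
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    g = np.zeros(d)
    for i in range(d):
        u = Q[:, i] * np.sqrt(d)   # rescale so ||u||^2 = d, as for Gaussians in expectation
        g += (f(x + sigma * u) - f(x)) / sigma * u
    return g / d
```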
no code implementations • 3 May 2019 • Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg
To this end, we use the results in [Berahas et al., 2019] and show how each method can satisfy the sufficient conditions, possibly only with some sufficiently large probability at each iteration, as happens to be the case with Gaussian smoothing and smoothing on a sphere.
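For reference, smoothing on a sphere admits a simple unbiased estimator of the smoothed function's gradient; the sketch below is a standard construction, not the paper's exact scheme.

```python
import numpy as np

def sphere_smoothing_grad(f, x, r=0.1, num_samples=20, seed=0):
    """Ball-smoothing gradient estimate with directions drawn uniformly
    from the unit sphere: (d/r) * E[f(x + r*u) * u] equals the gradient
    of f averaged over the radius-r ball."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)          # uniform direction on the sphere
        g += (d / r) * (f(x + r * u) - f(x)) * u
    return g / num_samples
```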
no code implementations • 28 Feb 2019 • Hiva Ghanbari, Minhan Li, Katya Scheinberg
In this work, we show that in the case of linear predictors, the expected error and the expected ranking loss can be effectively approximated by smooth functions. The closed-form expressions of these approximations, and of their first (and second) order derivatives, depend on the first and second moments of the data distribution, which can be precomputed.
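As a concrete special case (my assumption: Gaussian class-conditional features with shared covariance, which the paper's approximations generalize), the expected 0-1 error of a linear classifier has a closed form in these moments:

```python
import numpy as np
from scipy.stats import norm

def expected_error_linear(w, b, mu_pos, mu_neg, cov, prior_pos=0.5):
    """Closed-form expected 0-1 error of the classifier sign(w @ x + b)
    when each class is Gaussian with shared covariance cov; depends only
    on the first and second moments of the data."""
    s = np.sqrt(w @ cov @ w)                     # std of the score w @ x
    err_pos = norm.cdf(-(w @ mu_pos + b) / s)    # P(score < 0 | y = +1)
    err_neg = norm.cdf((w @ mu_neg + b) / s)     # P(score > 0 | y = -1)
    return prior_pos * err_pos + (1 - prior_pos) * err_neg
```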
no code implementations • 25 Nov 2018 • Lam M. Nguyen, Katya Scheinberg, Martin Takáč
We develop and analyze a variant of the SARAH algorithm, which does not require computation of the exact gradient.
no code implementations • 10 Nov 2018 • Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk
We show the convergence of SGD for a strongly convex objective function without using the bounded gradient assumption when $\{\eta_t\}$ is a diminishing sequence and $\sum_{t=0}^\infty \eta_t = \infty$.
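A step-size schedule satisfying both conditions is $\eta_t = \eta_0/(t+1)$; a minimal SGD sketch under this schedule (illustrative, with a user-supplied stochastic gradient oracle):

```python
import numpy as np

def sgd_diminishing(stoch_grad, w0, steps=10_000, eta0=1.0, seed=0):
    """SGD with eta_t = eta0 / (t + 1): eta_t -> 0 while the sum of the
    eta_t diverges, matching the step-size condition above.

    stoch_grad(w, rng) returns an unbiased stochastic gradient at w.
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for t in range(steps):
        eta_t = eta0 / (t + 1)
        w -= eta_t * stoch_grad(w, rng)
    return w
```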
no code implementations • 29 Mar 2018 • Kostas Hatalis, Shalinee Kishore, Katya Scheinberg, Alberto Lamadrid
Uncertainty analysis in the form of probabilistic forecasting can provide significant improvements in decision-making processes in the smart power grid for better integration of renewable energy sources such as wind.
no code implementations • ICML 2018 • Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč
In (Bottou et al., 2016), a new analysis of the convergence of SGD is performed under the assumption that the stochastic gradients are bounded with respect to the true gradient norm.
no code implementations • 7 Feb 2018 • Hiva Ghanbari, Katya Scheinberg
We show that even when the data is not normally distributed, the computed derivatives are sufficiently useful to yield an efficient optimization method and high-quality solutions.
no code implementations • 18 Jan 2018 • Lam M. Nguyen, Nam H. Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Katya Scheinberg
In this paper, we consider a general stochastic optimization problem which is often at the core of supervised learning, such as deep learning and linear classification.
no code implementations • 29 Dec 2017 • Frank E. Curtis, Katya Scheinberg, Rui Shi
An algorithm is proposed for solving stochastic and finite sum minimization problems.
1 code implementation • 4 Oct 2017 • Kostas Hatalis, Alberto J. Lamadrid, Katya Scheinberg, Shalinee Kishore
Multiple quantiles are estimated to form 10% to 90% prediction intervals, which are evaluated using a quantile score and reliability measures.
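A sketch of the two evaluation ingredients, under my own function names: the pinball (quantile) score and an empirical coverage check against the nominal level.

```python
import numpy as np

def quantile_score(y, q_pred, tau):
    """Pinball (quantile) score for quantile level tau; lower is better."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def empirical_coverage(y, lower, upper):
    """Reliability check: fraction of observations falling inside the
    interval, compared against the nominal coverage (e.g. 0.80 for an
    interval bounded by the 10% and 90% quantiles)."""
    return np.mean((y >= lower) & (y <= upper))
```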
1 code implementation • 30 Jun 2017 • Frank E. Curtis, Katya Scheinberg
We then discuss some of the distinctive features of these optimization problems, focusing on the examples of logistic regression and the training of deep neural networks.
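Logistic regression is the canonical smooth, convex finite-sum example; a minimal loss-and-gradient sketch (labels in {-1, +1}, interface names mine):

```python
import numpy as np

def logistic_loss_and_grad(w, X, y):
    """Average logistic loss and its gradient for labels y in {-1, +1}:
    loss_i = log(1 + exp(-y_i * x_i @ w))."""
    margins = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -margins))
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    return loss, grad
```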
no code implementations • 20 May 2017 • Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
In this paper, we study and analyze the mini-batch version of the StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization in the case of nonconvex losses.
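The recursive gradient at the heart of SARAH replaces the exact gradient after the first step of each outer loop; a single-sample sketch of one outer iteration (interface names are mine):

```python
import numpy as np

def sarah_inner_loop(full_grad, stoch_grad, w0, n, lr=0.05,
                     inner_steps=100, seed=0):
    """One outer iteration of SARAH: a full gradient, then recursive
    updates v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}.

    stoch_grad(i, w) returns the gradient of the i-th component at w.
    """
    rng = np.random.default_rng(seed)
    w_prev = w0.copy()
    v = full_grad(w_prev)           # v_0: one exact gradient per outer loop
    w = w_prev - lr * v
    for _ in range(inner_steps):
        i = rng.integers(n)
        v = stoch_grad(i, w) - stoch_grad(i, w_prev) + v  # recursive gradient
        w_prev, w = w, w - lr * v
    return w
```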
no code implementations • 20 Mar 2017 • Hiva Ghanbari, Katya Scheinberg
In this work, we use a trust-region-based derivative-free optimization (DFO-TR) method to directly maximize the Area Under the Receiver Operating Characteristic Curve (AUC), which is a nonsmooth, noisy function.
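AUC can be computed from ranks, which makes clear why it is piecewise constant and hence nonsmooth in the model parameters. The sketch below uses scipy's Nelder-Mead as a readily available derivative-free stand-in for DFO-TR, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def auc(scores, y):
    """AUC via the rank-sum (Mann-Whitney) statistic, for labels in {0, 1}."""
    ranks = rankdata(scores)
    n_pos = int(np.sum(y == 1))
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def maximize_auc(X, y, w0):
    """Derivative-free maximization of the AUC of a linear scorer."""
    res = minimize(lambda w: -auc(X @ w, y), w0, method="Nelder-Mead")
    return res.x
```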
no code implementations • ICML 2017 • Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems.
no code implementations • 10 Dec 2016 • Oktay Gunluk, Jayant Kalagnanam, Minhan Li, Matt Menickelly, Katya Scheinberg
Decision trees have been a very popular class of predictive models for decades due to their interpretability and good performance on categorical features.
no code implementations • 11 Jul 2016 • Hiva Ghanbari, Katya Scheinberg
In [19], a general, inexact, and efficient proximal quasi-Newton algorithm for composite optimization problems was proposed, and a sublinear global convergence rate was established.
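For composite problems of the form $f(x) + \lambda\|x\|_1$, the basic proximal step is soft-thresholding; the sketch below shows the plain first-order version, which a proximal quasi-Newton method refines by replacing the scalar step with a Hessian approximation and solving the resulting subproblem inexactly:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_step(x, grad_f, lr, lam):
    """One proximal-gradient step for min f(x) + lam * ||x||_1."""
    return soft_threshold(x - lr * grad_f(x), lr * lam)
```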
1 code implementation • 26 Nov 2013 • Katya Scheinberg, Xiaocheng Tang
Recently several methods were proposed for sparse optimization which make careful use of second-order information [10, 28, 16, 3] to improve local convergence rates.
no code implementations • 27 Mar 2013 • Xiaocheng Tang, Katya Scheinberg
We propose a novel general algorithm, LHAC, that efficiently uses second-order information to solve a class of large-scale $\ell_1$-regularized problems.
no code implementations • NeurIPS 2010 • Katya Scheinberg, Shiqian Ma, Donald Goldfarb
Gaussian graphical models are of great interest in statistical learning.
no code implementations • 23 Dec 2009 • Donald Goldfarb, Shiqian Ma, Katya Scheinberg
In this paper, we present first-order alternating linearization algorithms, based on an alternating direction augmented Lagrangian approach, for minimizing the sum of two convex functions.
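The alternating structure can be illustrated with a generic ADMM-style sketch for the split f(x) + g(z), x = z; this is my simplification, related to but not identical to the paper's alternating linearization methods, which linearize one function around the other's latest iterate:

```python
import numpy as np

def admm(prox_f, prox_g, dim, rho=1.0, iters=100):
    """Alternating-direction sketch for min f(x) + g(x) via the split
    f(x) + g(z) subject to x = z; prox_f and prox_g are the proximal
    operators of f and g with step 1/rho."""
    x = np.zeros(dim)
    z = np.zeros(dim)
    u = np.zeros(dim)                  # scaled dual variable
    for _ in range(iters):
        x = prox_f(z - u, 1.0 / rho)   # minimize the f-block
        z = prox_g(x + u, 1.0 / rho)   # minimize the g-block
        u += x - z                     # dual ascent on the consensus gap
    return z
```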