no code implementations • 25 Mar 2024 • Andrew Lamperski, Tyler Lekang
Neural networks are regularly employed in adaptive control of nonlinear systems and related methods of reinforcement learning.
no code implementations • 28 Mar 2023 • Tyler Lekang, Andrew Lamperski
Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function.
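As a quick illustration of this classical result (not the paper's contribution), a single hidden layer of random ReLU features fitted by least squares can already approximate a smooth 1-D function well; the target function, width, and fitting procedure below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)[:, None]       # inputs on [-1, 1]
target = np.sin(np.pi * x[:, 0])           # example function to approximate

W = rng.normal(size=(1, 100))              # random hidden-layer weights
b = rng.normal(size=100)                   # random biases
H = np.maximum(x @ W + b, 0.0)             # hidden layer: ReLU(xW + b)

# Fit only the output weights by least squares
coef, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ coef

err = np.max(np.abs(approx - target))      # uniform approximation error
```

With 100 hidden units the uniform error on this example is already small, in the spirit of the single-hidden-layer approximation theorems the paper builds on.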
no code implementations • 21 Mar 2023 • Andrew Lamperski
Spectrum estimation is a fundamental methodology in the analysis of time-series data, with applications including medicine, speech analysis, and control design.
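The textbook starting point for spectrum estimation is the periodogram computed via the FFT; the sketch below (a standard baseline, not the paper's method, with an illustrative 5 Hz signal) shows the idea:

```python
import numpy as np

fs = 100.0                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 seconds of samples
rng = np.random.default_rng(0)
# A 5 Hz sinusoid buried in white noise
x = np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)

# Periodogram: squared magnitude of the FFT, normalized by fs * N
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * t.size)

peak = freqs[np.argmax(psd)]                # frequency of the spectral peak
```

The peak of the estimated spectrum recovers the 5 Hz component, which is the kind of inference spectrum estimation supports in the medical, speech, and control applications mentioned.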
no code implementations • 27 May 2022 • Yuping Zheng, Andrew Lamperski
Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning.
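The definition above translates directly into code: each iterate takes a gradient descent step on a potential and adds Gaussian noise scaled by the step size. The sketch below is a minimal unadjusted Langevin algorithm with an illustrative Gaussian target, not an implementation from the paper:

```python
import numpy as np

def langevin_step(x, grad, step=1e-2, rng=None):
    """One step of the unadjusted Langevin algorithm:
    gradient descent on the potential plus sqrt(2*step) Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x - step * grad(x) + noise

# Target: standard Gaussian, i.e. potential U(x) = ||x||^2 / 2
grad = lambda x: x                      # gradient of the potential
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = langevin_step(x, grad, rng=rng)
    samples.append(x.copy())
```

After burn-in, the iterates behave like (correlated) samples from the target distribution, which is why the same recipe serves MCMC sampling, optimization, and machine learning.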
no code implementations • 22 Dec 2020 • Andrew Lamperski
Langevin algorithms are gradient descent methods with additive noise.
no code implementations • 6 Sep 2019 • Jianjun Yuan, Andrew Lamperski
In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions.
no code implementations • 18 Jun 2019 • Tyler Lekang, Andrew Lamperski
In this paper, we present simple algorithms for Dueling Bandits.
1 code implementation • 23 Jan 2019 • Jianjun Yuan, Andrew Lamperski
We propose algorithms for online principal component analysis (PCA) and variance minimization for adaptive settings.
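For context, a classical online PCA baseline (not the algorithm proposed in the paper) is Oja's rule, which updates an estimate of the top principal component one sample at a time; the data distribution below is illustrative:

```python
import numpy as np

def oja_step(w, x, lr=0.01):
    """Oja's rule: online update toward the top principal component,
    followed by renormalization to keep ||w|| = 1."""
    y = w @ x
    w = w + lr * y * (x - y * w)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(2000):
    # Stream of samples with dominant variance along the first axis
    x = np.array([rng.normal(scale=3.0), rng.normal(scale=0.5)])
    w = oja_step(w, x)
# w converges (up to sign) toward the top eigenvector [1, 0]
```

Adaptive settings, as in the paper, additionally require the estimate to track a principal subspace that drifts over time rather than a fixed one.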
no code implementations • NeurIPS 2018 • Jianjun Yuan, Andrew Lamperski
For convex objectives, our regret bounds generalize existing bounds, and for strongly convex objectives we give improved regret bounds.