Search Results for author: Andrew Lamperski

Found 9 papers, 1 paper with code

Approximation with Random Shallow ReLU Networks with Applications to Model Reference Adaptive Control

no code implementations25 Mar 2024 Andrew Lamperski, Tyler Lekang

Neural networks are regularly employed in adaptive control of nonlinear systems and related methods of reinforcement learning.
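The random shallow-network idea can be illustrated with a minimal random-features sketch: sample the hidden-layer weights and biases once, freeze them, and fit only the output layer by least squares. This is a generic illustration, not the paper's exact construction; the target function, number of units, and sampling distributions below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target function to approximate on [-1, 1].
f = lambda x: np.sin(3 * x)

# Random hidden layer: weights and biases sampled once, then frozen.
n_hidden = 200
w = rng.normal(size=n_hidden)
b = rng.uniform(-1, 1, size=n_hidden)

def features(x):
    # ReLU features max(0, w*x + b), one column per hidden unit.
    return np.maximum(0.0, np.outer(x, w) + b)

# Only the output-layer weights are trained, via least squares.
x_train = rng.uniform(-1, 1, size=500)
Phi = features(x_train)
c, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)

# Sup-norm error of the fitted network on a test grid.
x_test = np.linspace(-1, 1, 200)
err = np.max(np.abs(features(x_test) @ c - f(x_test)))
```

Because the hidden layer is random and fixed, training reduces to a linear least-squares problem, which is what makes such networks attractive for adaptive-control analyses.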

Function Approximation with Randomly Initialized Neural Networks for Approximate Model Reference Adaptive Control

no code implementations28 Mar 2023 Tyler Lekang, Andrew Lamperski

Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function.

Non-Asymptotic Pointwise and Worst-Case Bounds for Classical Spectrum Estimators

no code implementations21 Mar 2023 Andrew Lamperski

Spectrum estimation is a fundamental methodology in the analysis of time-series data, with applications including medicine, speech analysis, and control design.
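As a concrete instance of a classical spectrum estimator, here is the plain periodogram, the squared magnitude of the DFT scaled by the sample count. The test signal (a sinusoid in white noise) and all constants are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic time series: sinusoid at 0.1 cycles/sample plus white noise.
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=n)

# Classical periodogram: |DFT|^2 / n over nonnegative frequencies.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n)          # in cycles per sample
periodogram = np.abs(X) ** 2 / n

# The spectral peak should land at (or near) the sinusoid's frequency.
peak_freq = freqs[np.argmax(periodogram)]
```

Smoothed or windowed variants (e.g. Blackman–Tukey, Welch) refine this basic estimator; non-asymptotic bounds of the kind studied in the paper quantify how far such estimates can deviate from the true spectrum at finite sample sizes.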

Time Series

Constrained Langevin Algorithms with L-mixing External Random Variables

no code implementations27 May 2022 Yuping Zheng, Andrew Lamperski

Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning.
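The description above, gradient descent augmented with additive noise, can be sketched with the unadjusted Langevin algorithm sampling from a standard Gaussian target exp(-f) with f(x) = ||x||²/2. The step size, target, and iteration counts are illustrative; the paper's setting, with constraints and L-mixing external random variables, is more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: standard 2-D Gaussian, i.e. f(x) = ||x||^2 / 2, grad f(x) = x.
grad_f = lambda x: x

eta = 0.05                  # step size (illustrative)
x = np.zeros(2)
samples = []
for k in range(10000):
    noise = rng.normal(size=2)
    # Gradient descent step plus sqrt(2 * eta) Gaussian noise.
    x = x - eta * grad_f(x) + np.sqrt(2 * eta) * noise
    if k >= 1000:           # discard burn-in iterations
        samples.append(x.copy())

samples = np.array(samples)
mean = samples.mean(axis=0)
```

The noise term keeps the iterates exploring rather than collapsing to the minimizer, so the long-run empirical distribution approximates the target; constrained variants additionally project or reflect the iterates back into a feasible set.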

Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond

no code implementations6 Sep 2019 Jianjun Yuan, Andrew Lamperski

In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions.

Simple Algorithms for Dueling Bandits

no code implementations18 Jun 2019 Tyler Lekang, Andrew Lamperski

In this paper, we present simple algorithms for Dueling Bandits.
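In a dueling bandit, the learner picks pairs of arms and only observes which one wins the duel. The snippet does not state the paper's algorithms, so the sketch below is a naive uniform-exploration baseline that estimates Borda scores (average win probabilities); the preference matrix is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy environment: P[i, j] = probability that arm i beats arm j.
# Arm 0 is the best arm here (invented numbers).
P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
K = 3

# Naive baseline (not one of the paper's algorithms): duel random
# pairs uniformly and tally wins per ordered pair.
wins = np.zeros((K, K))
plays = np.zeros((K, K))
for _ in range(2000):
    i, j = rng.integers(K), rng.integers(K)
    if i == j:
        continue
    outcome = rng.random() < P[i, j]    # True if arm i beats arm j
    wins[i, j] += outcome
    plays[i, j] += 1

# Borda score: estimated average probability of beating another arm.
borda = (wins / np.maximum(plays, 1)).sum(axis=1) / (K - 1)
best_arm = int(np.argmax(borda))
```

Practical dueling-bandit algorithms replace the uniform exploration with adaptive pair selection to reduce regret, but the feedback model (pairwise wins only, no numeric rewards) is the same.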

Online Adaptive Principal Component Analysis and Its extensions

1 code implementation23 Jan 2019 Jianjun Yuan, Andrew Lamperski

We propose algorithms for online principal component analysis (PCA) and variance minimization for adaptive settings.
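For a flavor of what "online PCA" means, here is Oja's classical rule, which maintains a unit-norm direction estimate and updates it one sample at a time. This is a textbook method shown for context, not the adaptive algorithms proposed in the paper; the data distribution and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stream of 2-D points whose dominant variance lies along (1, 1).
true_dir = np.array([1.0, 1.0]) / np.sqrt(2)

# Oja's rule: Hebbian update followed by renormalization.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(5000):
    s = rng.normal()                        # strong component
    x = 3.0 * s * true_dir + 0.3 * rng.normal(size=2)
    w += 0.01 * (x @ w) * x                 # move toward high-variance direction
    w /= np.linalg.norm(w)                  # keep the estimate unit-norm

# |cos angle| between the estimate and the true principal direction.
alignment = abs(w @ true_dir)
```

Adaptive variants of online PCA additionally track a principal subspace that drifts over time, which is the regime the paper's regret analysis targets.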

Online convex optimization for cumulative constraints

no code implementations NeurIPS 2018 Jianjun Yuan, Andrew Lamperski

For convex objectives, our regret bounds generalize existing bounds, and for strongly convex objectives we give improved regret bounds.
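To make "regret" concrete: projected online gradient descent plays a point, suffers a loss, takes a gradient step, and projects back onto the feasible set; regret is its cumulative loss minus that of the best fixed point in hindsight. The sketch below uses the classical 1/(mu*t) step size for mu-strongly convex losses and is a standard baseline with invented losses, not the paper's cumulative-constraint algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stream of strongly convex losses f_t(x) = (x - c_t)^2 on the set [0, 1].
cs = rng.uniform(0.4, 0.6, size=500)

x = 0.0
total_loss = 0.0
for t, c in enumerate(cs, start=1):
    total_loss += (x - c) ** 2
    g = 2.0 * (x - c)                            # gradient of current loss
    # Step size 1/(mu*t) with mu = 2, then project onto [0, 1].
    x = float(np.clip(x - g / (2.0 * t), 0.0, 1.0))

# Static regret against the best fixed point in hindsight (the mean).
comparator = cs.mean()
comparator_loss = float(((comparator - cs) ** 2).sum())
regret = total_loss - comparator_loss
```

The hard projection enforces the constraint exactly at every round; the paper's cumulative-constraint formulation instead allows per-round violations and controls their total, which can be much cheaper when projection is expensive.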
