Search Results for author: Jaron Sanders

Found 6 papers, 0 papers with code

Score-Aware Policy-Gradient Methods and Performance Guarantees using Local Lyapunov Conditions: Applications to Product-Form Stochastic Networks and Queueing Systems

no code implementations • 5 Dec 2023 • Céline Comte, Matthieu Jonckheere, Jaron Sanders, Albert Senen-Cerda

As a second contribution, we show that, under appropriate assumptions, a policy trained with a SAGE-based policy-gradient method converges to an optimal policy with large probability, provided that it starts sufficiently close to one, even when the objective function is nonconvex and has multiple maximizers.

Policy Gradient Methods • Reinforcement Learning (RL)
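
The SAGE estimator itself is specific to this paper, but the policy-gradient family it belongs to can be illustrated with a minimal REINFORCE-style sketch; the bandit environment, softmax parameterization, and step size below are illustrative assumptions, not the authors' setup.

```python
# Minimal REINFORCE-style policy-gradient sketch (illustrative only; this is
# the generic score-function estimator, not the paper's SAGE estimator).
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hypothetical 3-armed bandit rewards
theta = np.zeros(3)                       # softmax policy parameters
alpha = 0.1                               # step size (assumed)

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

for step in range(5000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)
    r = rng.normal(true_means[a], 0.1)    # sampled reward
    grad_log = -probs                     # score function: d/dtheta log pi(a)
    grad_log[a] += 1.0
    theta += alpha * r * grad_log         # stochastic gradient ascent

print("learned action probabilities:", softmax(theta).round(3))
```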

Noise-Resilient Designs for Optical Neural Networks

no code implementations • 11 Aug 2023 • Gianluca Kosmella, Ripalta Stabile, Jaron Sanders

Specifically, we investigate a probabilistic framework for the first design which establishes that the design is correct, i.e., for any feed-forward NN with Lipschitz continuous activation functions, an ONN can be constructed whose output is arbitrarily close to that of the original.
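
A minimal sketch of the Lipschitz argument behind such a correctness guarantee, not of the paper's ONN designs themselves: if each layer's output is perturbed by small noise and the activations are Lipschitz, the final output can only move by a correspondingly small amount. The network size, ReLU activation, and noise scales below are assumptions chosen for illustration.

```python
# Illustrative only: bounded per-layer noise yields bounded output deviation
# for a feed-forward NN with Lipschitz (here: ReLU, 1-Lipschitz) activations.
import numpy as np

rng = np.random.default_rng(1)
Ws = [rng.normal(size=(16, 16)) / 4 for _ in range(3)]  # hypothetical weights

def forward(x, noise_scale=0.0):
    for W in Ws:
        x = np.maximum(W @ x, 0.0)                            # ReLU layer
        x = x + rng.normal(scale=noise_scale, size=x.shape)   # per-layer noise model
    return x

x = rng.normal(size=16)
clean = forward(x)
for scale in (1e-3, 1e-2, 1e-1):
    noisy = forward(x, noise_scale=scale)
    print(f"noise scale {scale:g}: output deviation {np.linalg.norm(noisy - clean):.4f}")
```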

Detection and Evaluation of Clusters within Sequential Data

no code implementations • 4 Oct 2022 • Alexander Van Werde, Albert Senen-Cerda, Gianluca Kosmella, Jaron Sanders

We address this issue and investigate the suitability of these clustering algorithms in exploratory data analysis of real-world sequential data.

Benchmarking • Clustering • +2
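
One model-based way to cluster the states of a sequence, in the spirit of the algorithms studied in this line of work, is to embed each state by its empirical transition frequencies and cluster the embeddings; the synthetic two-cluster Markov chain and the use of k-means below are illustrative assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch: cluster states of a sequence via their empirical
# transition frequencies (synthetic data and k-means are assumptions).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_states = 10
labels_true = np.array([0] * 5 + [1] * 5)
# Block-structured transition matrix: states mostly move within their cluster.
P = np.full((n_states, n_states), 0.02)
for i in range(n_states):
    P[i, labels_true == labels_true[i]] = 0.18
P /= P.sum(axis=1, keepdims=True)

# Simulate a long trajectory from the Markov chain.
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(n_states, p=P[seq[-1]]))

# Empirical transition frequency matrix as state embeddings.
counts = np.zeros((n_states, n_states))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
freqs = counts / counts.sum(axis=1, keepdims=True)

labels_hat = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(freqs)
print("recovered cluster labels per state:", labels_hat)
```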

Asymptotic convergence rate of Dropout on shallow linear neural networks

no code implementations • 1 Dec 2020 • Albert Senen-Cerda, Jaron Sanders

We analyze the convergence rate of gradient flows on objective functions induced by Dropout and DropConnect when applied to shallow linear Neural Networks (NNs), which can also be viewed as performing matrix factorization with a particular regularizer.
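
A minimal numerical sketch of the matrix-factorization view, assuming Dropout on the hidden layer of a two-layer linear network: averaging the stochastic loss over the Bernoulli masks yields the plain factorization loss plus a product-of-norms penalty. Dimensions, data, and the keep probability below are illustrative assumptions.

```python
# Numerical check (illustrative): the expected Dropout loss of a shallow
# linear NN  x -> W2 @ (mask/p * (W1 @ x))  equals the plain loss plus a
# product-of-norms regularizer.
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid, d_out, n, p = 6, 4, 3, 200, 0.7
X = rng.normal(size=(d_in, n))
Y = rng.normal(size=(d_out, n))
W1 = rng.normal(size=(d_hid, d_in))
W2 = rng.normal(size=(d_out, d_hid))

# Monte Carlo estimate of the Dropout loss, averaging over Bernoulli masks.
mc = 0.0
for _ in range(10000):
    mask = rng.binomial(1, p, size=(d_hid, 1)) / p        # inverted dropout scaling
    mc += np.mean(np.sum((Y - W2 @ (mask * (W1 @ X)))**2, axis=0))
mc /= 10000

# Closed form: plain loss + (1-p)/p * sum_i ||col_i(W2)||^2 * mean[(W1 x)_i^2].
plain = np.mean(np.sum((Y - W2 @ W1 @ X)**2, axis=0))
H = W1 @ X
reg = (1 - p) / p * np.sum(np.sum(W2**2, axis=0) * np.mean(H**2, axis=1))
print(f"Monte Carlo: {mc:.3f}   closed form: {plain + reg:.3f}")
```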

Almost Sure Convergence of Dropout Algorithms for Neural Networks

no code implementations • 6 Feb 2020 • Albert Senen-Cerda, Jaron Sanders

We investigate the convergence and convergence rate of stochastic training algorithms for Neural Networks (NNs) that have been inspired by Dropout (Hinton et al., 2012).
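
A minimal sketch of Dropout training in the style of Hinton et al. (2012), where each hidden unit is kept with probability p at every SGD step and surviving activations are rescaled; the architecture, synthetic data, and hyperparameters are illustrative assumptions.

```python
# Minimal Dropout-SGD sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(4)
d_in, d_hid, n, p, lr = 5, 8, 256, 0.8, 0.01
X = rng.normal(size=(n, d_in))
y = X @ rng.normal(size=d_in)                  # synthetic regression target
W1 = rng.normal(size=(d_in, d_hid)) * 0.1
w2 = rng.normal(size=d_hid) * 0.1

for step in range(3000):
    i = rng.integers(n)
    h = np.maximum(X[i] @ W1, 0.0)             # ReLU hidden layer
    mask = rng.binomial(1, p, size=d_hid) / p  # inverted dropout
    h_drop = h * mask
    err = h_drop @ w2 - y[i]
    # Backpropagate through the masked forward pass.
    grad_w2 = err * h_drop
    grad_h = err * w2 * mask * (h > 0)
    W1 -= lr * np.outer(X[i], grad_h)
    w2 -= lr * grad_w2
    if step % 1000 == 0:
        full = np.maximum(X @ W1, 0.0) @ w2    # evaluation without dropout
        print(f"step {step}: mse {np.mean((full - y)**2):.4f}")
```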
