Search Results for author: Ayoub El Hanchi

Found 3 papers, 0 papers with code

Contrastive Learning Can Find An Optimal Basis For Approximately View-Invariant Functions

no code implementations • 4 Oct 2022 • Daniel D. Johnson, Ayoub El Hanchi, Chris J. Maddison

We give generalization bounds for downstream linear prediction using our Kernel PCA representation, and show empirically on a set of synthetic tasks that applying Kernel PCA to contrastive learning models can indeed approximately recover the Markov chain eigenfunctions, although the accuracy depends on the kernel parameterization as well as on the augmentation strength.

Contrastive Learning · Generalization Bounds
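
The eigenfunction-recovery procedure described in the abstract can be illustrated concretely. Below is a minimal sketch, assuming the positive-pair kernel is approximated by the inner product of learned contrastive embeddings; the encoder `f`, the kernel choice, and the function name `kernel_pca_representation` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel_pca_representation(X, f, n_components=8):
    """X: (n, d) data; f: learned encoder mapping (n, d) -> (n, p).

    Applies Kernel PCA to an assumed inner-product kernel built from
    contrastive embeddings, returning the top eigen-projections.
    """
    Z = f(X)                          # contrastive embeddings
    K = Z @ Z.T                       # assumed similarity kernel k(x, x')
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                    # center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)   # eigendecomposition (ascending order)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors by sqrt(eigenvalue) to get the projections of
    # the training points onto each kernel principal component.
    return vecs * np.sqrt(np.maximum(vals, 1e-12))

# Usage with a random "encoder" as a stand-in for a trained model:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
W = rng.normal(size=(16, 32))
rep = kernel_pca_representation(X, lambda x: np.tanh(x @ W))
print(rep.shape)  # (100, 8)
```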

Stochastic Reweighted Gradient Descent

no code implementations • 23 Mar 2021 • Ayoub El Hanchi, David A. Stephens

Despite the strong theoretical guarantees that variance-reduced finite-sum optimization algorithms enjoy, their applicability remains limited to cases where the memory overhead they introduce (SAG/SAGA) or the periodic full-gradient computation they require (SVRG/SARAH) is manageable.
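
For context on the trade-off the abstract describes, here is a minimal SVRG sketch (Johnson & Zhang, 2013) showing the periodic full-gradient pass over all n summands; the quadratic test objective, the step size, and the helper name `grad_i` are illustrative assumptions, not this paper's method.

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.01, epochs=10, inner=100, seed=0):
    """grad_i(x, i): gradient of the i-th summand at x."""
    rng = np.random.default_rng(seed)
    x_ref = x0.copy()
    for _ in range(epochs):
        # Full gradient at the reference point: one pass over all n terms,
        # the periodic cost the abstract refers to.
        mu = sum(grad_i(x_ref, i) for i in range(n)) / n
        x = x_ref.copy()
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced update: control variate built from x_ref.
            x -= step * (grad_i(x, i) - grad_i(x_ref, i) + mu)
        x_ref = x
    return x_ref

# Usage: least squares, f(x) = (1/n) sum_i (a_i . x - b_i)^2 / 2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
g = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(svrg(g, np.zeros(5), n=50))
```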

Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes

no code implementations • NeurIPS 2020 • Ayoub El Hanchi, David A. Stephens

Reducing the variance of the gradient estimator is known to improve the convergence rate of stochastic gradient-based optimization and sampling algorithms.

Stochastic Optimization
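
The variance-reduction idea above is commonly realized by sampling gradients non-uniformly and reweighting them to keep the estimator unbiased. Below is a minimal sketch of SGD with adaptive importance sampling and a decreasing step size; the norm-tracking rule, the step-size schedule, and the function name `importance_sampled_sgd` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def importance_sampled_sgd(grad_i, x0, n, steps=2000, c=0.1, seed=0):
    """grad_i(x, i): gradient of the i-th summand at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    norms = np.ones(n)                 # running gradient-norm estimates
    for t in range(1, steps + 1):
        p = norms / norms.sum()        # adaptive sampling distribution
        i = rng.choice(n, p=p)
        g = grad_i(x, i)
        norms[i] = max(np.linalg.norm(g), 1e-8)  # refresh tracked norm
        step = c / np.sqrt(t)          # decreasing step size
        # Reweight by 1/(n * p_i) so the gradient estimate stays unbiased.
        x -= step * g / (n * p[i])
    return x

# Usage: same least-squares setup as the SVRG sketch above.
rng = np.random.default_rng(2)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
g = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(importance_sampled_sgd(g, np.zeros(5), n=50))
```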
