Search Results for author: David Martínez-Rubio

Found 7 papers, 2 papers with code

Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties

no code implementations25 May 2023 David Martínez-Rubio, Christophe Roux, Christopher Criscitiello, Sebastian Pokutta

In this work, we study optimization problems of the form $\min_x \max_y f(x, y)$, where $f(x, y)$ is defined on a product Riemannian manifold $\mathcal{M} \times \mathcal{N}$ and is $\mu_x$-strongly geodesically convex (g-convex) in $x$ and $\mu_y$-strongly g-concave in $y$, for $\mu_x, \mu_y \geq 0$.
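The Euclidean analogue of this min-max setting can be illustrated with simultaneous gradient descent-ascent on a toy strongly-convex-strongly-concave function (a minimal sketch, not the authors' Riemannian method; the objective and step size are illustrative choices):

```python
# Toy saddle-point problem: f(x, y) = x**2/2 - y**2/2 + x*y,
# which is 1-strongly convex in x and 1-strongly concave in y.
x, y = 1.0, 1.0
lr = 0.1
for _ in range(500):
    gx = x + y                          # partial derivative of f w.r.t. x
    gy = x - y                          # partial derivative of f w.r.t. y
    x, y = x - lr * gx, y + lr * gy     # descend in x, ascend in y
# (x, y) converges to the saddle point (0, 0)
```

With a small enough step size the iterates spiral into the unique saddle point; the papers above study how to accelerate such schemes when the domain is a Riemannian manifold rather than Euclidean space.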

Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties

no code implementations26 Nov 2022 David Martínez-Rubio, Sebastian Pokutta

For smooth functions, we show that the prox step can be implemented inexactly with first-order methods in Riemannian balls of a certain diameter, which is enough for global accelerated optimization.

Tasks: Open-Ended Question Answering, Riemannian Optimization
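As a Euclidean point of reference for the inexact prox step, a proximal-point outer loop whose subproblem is solved approximately by a few inner gradient steps can be sketched as follows (a toy quadratic stand-in, not the paper's Riemannian prox; `lam` and the iteration counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = np.diag(np.linspace(1.0, 50.0, d))   # toy smooth, strongly convex quadratic
grad = lambda u: A @ u                   # gradient of f(u) = 0.5 * u @ A @ u

lam = 0.5                                # prox parameter (illustrative)
x = rng.standard_normal(d)
for _ in range(50):                      # outer proximal-point iterations
    # inexact prox: gradient steps on u -> f(u) + ||u - x||^2 / (2 * lam)
    u = x.copy()
    L_inner = 50.0 + 1.0 / lam           # smoothness constant of the inner objective
    for _ in range(100):
        u = u - (grad(u) + (u - x) / lam) / L_inner
    x = u
# x converges to the minimizer of f (the origin)
```

The point of the paper's analysis is that on a Riemannian manifold such a prox subproblem only needs to be solved inside a ball of bounded diameter, which keeps the geometric penalties under control.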

Global Riemannian Acceleration in Hyperbolic and Spherical Spaces

no code implementations7 Dec 2020 David Martínez-Rubio

We further investigate the phenomenon of accelerated optimization on Riemannian manifolds by introducing globally accelerated first-order methods for optimizing $L$-smooth and geodesically convex (g-convex) or $\mu$-strongly g-convex functions defined on the hyperbolic space or a subset of the sphere.

Acceleration in Hyperbolic and Spherical Spaces

no code implementations28 Sep 2020 David Martínez-Rubio

We further study the acceleration phenomenon on Riemannian manifolds by introducing the first global first-order method that matches, up to constants and log factors, the rates of accelerated gradient descent in Euclidean space for optimizing smooth and geodesically convex (g-convex) or strongly g-convex functions defined on the hyperbolic space or a subset of the sphere.
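The Euclidean baseline these rates are measured against is Nesterov's accelerated gradient descent; for an $L$-smooth, $\mu$-strongly convex quadratic it can be sketched as follows (a standard textbook scheme, not the paper's Riemannian method; the quadratic and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
# quadratic f(x) = 0.5 * x @ A @ x with eigenvalues in [mu, L]
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(1.0, 100.0, d)) @ Q.T
mu, L = 1.0, 100.0
grad = lambda x: A @ x

kappa = L / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum coefficient
x = y = rng.standard_normal(d)
for _ in range(500):
    x_next = y - grad(y) / L            # gradient step at the extrapolated point
    y = x_next + beta * (x_next - x)    # momentum extrapolation
    x = x_next
# x converges to the minimizer (the origin) at the accelerated rate
```

Plain gradient descent would need on the order of $\kappa$ iterations per digit of accuracy, whereas this scheme needs on the order of $\sqrt{\kappa}$; the papers above ask when the same speedup survives on curved spaces.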

Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group

3 code implementations24 Jan 2019 Mario Lezcano-Casado, David Martínez-Rubio

We demonstrate that our method constitutes a more robust approach to optimization with orthogonal constraints, showing faster, more accurate, and more stable convergence in several tasks designed to test RNNs.
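The parametrization at the heart of this approach maps an unconstrained matrix to the orthogonal group via the matrix exponential of a skew-symmetric matrix, so a standard optimizer can update the unconstrained parameters freely. A minimal NumPy sketch (the truncated-Taylor `expm` is only for self-containedness, not what an efficient implementation would use):

```python
import numpy as np

def expm(M, terms=40):
    # truncated Taylor series for the matrix exponential;
    # adequate for the small matrix used here
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))   # unconstrained parameters the optimizer updates
A = B - B.T                       # skew-symmetric: A.T == -A
W = expm(A)                       # orthogonal: W.T @ W == I by construction
```

Because `W` is orthogonal for every value of `B`, gradient updates on `B` never leave the constraint set, which is what makes the constraint "cheap".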

Decentralized Cooperative Stochastic Bandits

1 code implementation NeurIPS 2019 David Martínez-Rubio, Varun Kanade, Patrick Rebeschini

We design a fully decentralized algorithm that uses an accelerated consensus procedure to compute (delayed) estimates of the average of rewards obtained by all the agents for each arm, and then uses an upper confidence bound (UCB) algorithm that accounts for the delay and error of the estimates.

Tasks: Multi-Armed Bandits
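The single-agent UCB1 rule that the decentralized algorithm builds on (replacing exact per-arm statistics with delayed, consensus-based estimates) can be sketched as follows (the Bernoulli arm means are made up for illustration; this is the classical baseline, not the paper's multi-agent algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.8])    # hypothetical Bernoulli arm means
K = len(means)
counts = np.zeros(K)                 # pulls per arm
sums = np.zeros(K)                   # cumulative reward per arm
T = 5000
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                  # pull each arm once to initialize
    else:
        # optimistic index: empirical mean + confidence radius
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = rng.binomial(1, means[arm])
    counts[arm] += 1
    sums[arm] += reward
# the best arm (index 2) ends up pulled far more often than the others
```

In the decentralized setting of the paper, each agent replaces `sums / counts` with a delayed consensus estimate of the network-wide averages, and the confidence radius is widened to account for that delay and estimation error.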
