Search Results for author: Guilherme França

Found 14 papers, 2 papers with code

Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents

no code implementations • 20 Mar 2022 • Alessandro Barp, Lancelot Da Costa, Guilherme França, Karl Friston, Mark Girolami, Michael I. Jordan, Grigorios A. Pavliotis

In this chapter, we identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.

Counterfactual Decision Making

Implicit Acceleration of Gradient Flow in Overparameterized Linear Models

no code implementations • 1 Jan 2021 • Salma Tarmoun, Guilherme França, Benjamin David Haeffele, Rene Vidal

More precisely, gradient flow preserves the difference of the Gramian matrices of the input and output weights, and we show that the amount of acceleration depends on both the magnitude of that difference (which is fixed at initialization) and the spectrum of the data.
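A quick numerical check of that conserved quantity, for a two-layer linear model f(x) = W2 W1 x trained on the squared loss with a small step size (so that gradient descent approximates gradient flow). All dimensions, step sizes and variable names below are illustrative choices, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)
d, h, k, n = 5, 8, 3, 50                  # input dim, hidden dim, output dim, #samples
X = rng.standard_normal((d, n))
Y = rng.standard_normal((k, n))
W1 = rng.standard_normal((h, d))          # input weights
W2 = rng.standard_normal((k, h))          # output weights

def gram_diff(W1, W2):
    # difference of the Gramians of the input and output weights
    return W1 @ W1.T - W2.T @ W2

D0 = gram_diff(W1, W2)
G0 = W1 @ W1.T
lr = 1e-4                                 # small step size ~ gradient flow
for _ in range(2000):
    E = W2 @ W1 @ X - Y                   # residual of the linear model
    gW1 = W2.T @ E @ X.T                  # grad of 0.5*||E||_F^2 w.r.t. W1
    gW2 = E @ (W1 @ X).T                  # grad w.r.t. W2
    W1 -= lr * gW1
    W2 -= lr * gW2

print("loss:", 0.5 * np.linalg.norm(W2 @ W1 @ X - Y) ** 2)
print("change in W1 W1^T:          ", np.linalg.norm(W1 @ W1.T - G0))
print("drift of W1 W1^T - W2^T W2: ", np.linalg.norm(gram_diff(W1, W2) - D0))

The individual Gramians move substantially during training, while their difference drifts only by the O(lr^2) discretization error.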

Distributed Optimization, Averaging via ADMM, and Network Topology

1 code implementation • 5 Sep 2020 • Guilherme França, José Bento

For simple algorithms such as gradient descent, the dependence of the convergence time on the topology of this network is well known.

Distributed Optimization
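For intuition on the topology dependence, a standard reference point is decentralized averaging by gradient descent on the consensus objective (1/2) x^T L x, whose convergence is governed by the spectral gap (algebraic connectivity) of the graph Laplacian L. The sketch below compares a cycle with a complete graph; it is a toy illustration of the topology effect, not the ADMM analysis of the paper, and all parameters are arbitrary:

import numpy as np

def laplacian_cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(1)) - A

def laplacian_complete(n):
    A = np.ones((n, n)) - np.eye(n)
    return np.diag(A.sum(1)) - A

def consensus_iters(L, tol=1e-6, seed=0):
    # decentralized averaging: x <- x - eta * L x  (gradient descent on (1/2) x^T L x)
    lam = np.linalg.eigvalsh(L)
    eta = 1.0 / lam[-1]                    # safe step size from the largest eigenvalue
    x = np.random.default_rng(seed).standard_normal(L.shape[0])
    target = x.mean()                      # the consensus value this scheme preserves
    k = 0
    while np.abs(x - target).max() > tol and k < 10 ** 6:
        x = x - eta * (L @ x)
        k += 1
    return k, lam[1]                       # iterations and spectral gap

n = 30
for name, L in [("cycle", laplacian_cycle(n)), ("complete", laplacian_complete(n))]:
    k, gap = consensus_iters(L)
    print(f"{name:8s}  spectral gap = {gap:.4f}  iterations = {k}")

The poorly connected cycle (tiny spectral gap) needs orders of magnitude more iterations than the complete graph.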

On dissipative symplectic integration with applications to gradient-based optimization

no code implementations • 15 Apr 2020 • Guilherme França, Michael I. Jordan, René Vidal

More specifically, we show that a generalization of symplectic integrators to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
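A representative member of that family is the "conformal symplectic" Euler scheme for the damped Hamiltonian system q' = p, p' = -∇f(q) - γp: the momentum is contracted by the exact dissipation factor e^(-γh), and the conservative part is advanced by a symplectic Euler step. A minimal sketch on a quadratic objective (step size, damping and test function are illustrative, not the paper's choices):

import numpy as np

def conformal_symplectic_euler(grad_f, q0, h=0.1, gamma=1.0, steps=200):
    """Dissipative Hamiltonian dynamics  q' = p,  p' = -grad f(q) - gamma p,
    integrated by exact momentum decay plus a symplectic Euler step."""
    q = np.array(q0, dtype=float)
    p = np.zeros_like(q)
    for _ in range(steps):
        p = np.exp(-gamma * h) * p - h * grad_f(q)   # damped momentum update
        q = q + h * p                                # position update
    return q

# example: f(q) = 0.5 q^T A q with a mildly ill-conditioned A
A = np.diag([1.0, 10.0])
print(conformal_symplectic_euler(lambda q: A @ q, q0=[3.0, -2.0]))   # approaches [0, 0]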

Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization

no code implementations • 2 Aug 2019 • Guilherme França, Daniel P. Robinson, René Vidal

We show that similar discretization schemes applied to Newton's equation with an additional dissipative force, which we refer to as accelerated gradient flow, allow us to obtain accelerated variants of all these proximal algorithms; most of these variants are new, although some recover known cases in the literature.

BIG-bench Machine Learning, Distributed Optimization
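To make the idea concrete for a composite objective f(x) + g(x) with g nonsmooth, one can take an explicit step on ∇f, damp the velocity, and handle g through its proximal operator. The sketch below is only one illustrative discretization in that spirit and is not claimed to be any of the specific schemes derived in the paper; the lasso-type example and all constants are made up for the demonstration:

import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_flow_prox(grad_f, prox_g, x0, h=0.05, gamma=3.0, steps=500):
    """Discretize  x'' + gamma x' + grad f(x) + dg(x) = 0:
    explicit step on grad f, damping on the velocity, prox step for g."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = (1.0 - gamma * h) * v - h * grad_f(x)    # damped velocity update
        x = prox_g(x + h * v, h * h)                 # implicit handling of the nonsmooth term
    return x

# example: lasso-type problem 0.5 ||A x - b||^2 + lam ||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5
x_hat = accelerated_flow_prox(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda z, t: soft_threshold(z, lam * t),
    x0=np.zeros(10),
)
print(x_hat)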

Conformal Symplectic and Relativistic Optimization

1 code implementation • NeurIPS 2020 • Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal

Arguably, the two most popular accelerated or momentum-based optimization methods in machine learning are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second-order differential equation with friction.

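The heavy-ball correspondence is easy to make explicit: replacing x'' and x' in x'' + γ x' + ∇f(x) = 0 by the finite differences (x_{k+1} - 2 x_k + x_{k-1}) / h^2 and (x_k - x_{k-1}) / h gives x_{k+1} = x_k + (1 - γh)(x_k - x_{k-1}) - h^2 ∇f(x_k), i.e. Polyak's method with momentum β = 1 - γh and step size α = h^2. A short sketch (constants chosen arbitrarily for illustration):

import numpy as np

def heavy_ball(grad_f, x0, h=0.1, gamma=2.0, steps=300):
    """x'' + gamma x' + grad f(x) = 0 discretized with
    x'' ~ (x_{k+1} - 2 x_k + x_{k-1}) / h^2  and  x' ~ (x_k - x_{k-1}) / h."""
    beta, alpha = 1.0 - gamma * h, h * h
    x_prev = x = np.array(x0, dtype=float)
    for _ in range(steps):
        x, x_prev = x + beta * (x - x_prev) - alpha * grad_f(x), x
    return x

A = np.diag([1.0, 25.0])                               # ill-conditioned quadratic f(x) = 0.5 x^T A x
print(heavy_ball(lambda x: A @ x, x0=[5.0, 5.0]))      # approaches [0, 0]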

A Nonsmooth Dynamical Systems Perspective on Accelerated Extensions of ADMM

no code implementations • 13 Aug 2018 • Guilherme França, Daniel P. Robinson, René Vidal

Recently, there has been great interest in connections between continuous-time dynamical systems and optimization methods, notably in the context of accelerated methods for smooth and unconstrained problems.

An Explicit Convergence Rate for Nesterov's Method from SDP

no code implementations • 13 Jan 2018 • Sam Safavi, Bikash Joshi, Guilherme França, José Bento

The framework of Integral Quadratic Constraints (IQC) introduced by Lessard et al. (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP).
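The flavour of that reduction can be seen on the simplest possible instance, plain gradient descent x_{k+1} = x_k - α ∇f(x_k) on an m-strongly convex, L-smooth function: a rate ρ is certified whenever a small linear matrix inequality in a Lyapunov scalar P and an IQC multiplier λ is feasible, and the best certified ρ is found by bisection. The CVXPY sketch below is a simplified per-coordinate version for illustration only; it is not the SDP the paper actually solves for Nesterov's method:

import numpy as np
import cvxpy as cp

def rate_certified(rho, alpha, m, L):
    """Feasibility of the pointwise-IQC LMI certifying a linear rate rho."""
    # sector constraint satisfied by u = grad f(x), y = x - x* when f is
    # m-strongly convex and L-smooth:  [y; u]^T M [y; u] >= 0
    M = np.array([[-2.0 * m * L, m + L],
                  [m + L,        -2.0]])
    A, B = 1.0, -alpha                     # the iteration x_{k+1} = A x_k + B u_k
    P = cp.Variable(nonneg=True)           # scalar Lyapunov weight, V(x) = P |x - x*|^2
    lam = cp.Variable(nonneg=True)         # IQC multiplier
    lmi = cp.bmat([[A * P * A - rho ** 2 * P, A * P * B],
                   [B * P * A,                B * P * B]]) + lam * M
    # by homogeneity in (P, lam) we can normalise P >= 1 instead of P > 0
    prob = cp.Problem(cp.Minimize(0), [lmi << 0, P >= 1.0])
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")

m, L = 1.0, 10.0
alpha = 2.0 / (m + L)
lo, hi = 0.0, 1.0
for _ in range(30):                        # bisect on the smallest certified rate
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if rate_certified(mid, alpha, m, L) else (mid, hi)
print("certified rate ~", hi, "   known rate (L - m)/(L + m) =", (L - m) / (L + m))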

Kernel k-Groups via Hartigan's Method

no code implementations • 26 Oct 2017 • Guilherme França, Maria L. Rizzo, Joshua T. Vogelstein

In this paper, we consider a formulation for the clustering problem using a weighted version of energy statistics in spaces of negative type.

Clustering, Community Detection +1
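Hartigan's method differs from Lloyd-style alternation in that it moves one point at a time to whichever group most decreases the objective, accepting the move immediately. The sketch below applies that move rule to a plain kernel k-means objective; it is not the paper's weighted energy-statistics formulation and makes no attempt at efficiency, it only illustrates the update strategy:

import numpy as np

def kernel_objective(K, labels, k):
    # kernel k-means cost: sum_i K_ii - sum_c (1/|c|) sum_{i,j in c} K_ij
    cost = np.trace(K)
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            cost -= K[np.ix_(idx, idx)].sum() / len(idx)
    return cost

def hartigan_kernel_clustering(K, k, seed=0, max_sweeps=20):
    """Hartigan-style local search: move one point at a time to the cluster
    that most decreases the objective (a rough sketch, recomputing the cost)."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(k, size=n)
    for _ in range(max_sweeps):
        moved = False
        for i in range(n):
            best_c, best_cost = labels[i], kernel_objective(K, labels, k)
            for c in range(k):
                if c == labels[i]:
                    continue
                trial = labels.copy()
                trial[i] = c
                cost = kernel_objective(K, trial, k)
                if cost < best_cost - 1e-12:
                    best_c, best_cost = c, cost
            if best_c != labels[i]:
                labels[i] = best_c
                moved = True
        if not moved:
            break
    return labels

# example: Gaussian kernel on two well-separated blobs (illustrative data)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / 2.0)
print(hartigan_kernel_clustering(K, k=2))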

How is Distributed ADMM Affected by Network Topology?

no code implementations • 2 Oct 2017 • Guilherme França, José Bento

Here we provide a full characterization of the convergence of distributed over-relaxed ADMM for the same type of consensus problem in terms of the topology of the underlying graph.


Markov Chain Lifting and Distributed ADMM

no code implementations • 10 Mar 2017 • Guilherme França, José Bento

The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space.
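A textbook example of the speed-up (not specific to this paper's ADMM connection) is the random walk on an n-cycle: a lifted walk that carries a direction bit and flips it only with probability 1/n, in the style of Diaconis-Holmes-Neal, mixes in roughly n steps, whereas the diffusive walk needs roughly n^2. A small sketch comparing the two by total-variation distance to the uniform distribution:

import numpy as np

def tv_to_uniform(P, steps, start=0):
    # total-variation distance between the distribution after `steps` steps
    # (started from `start`) and the uniform distribution on the state space
    n = P.shape[0]
    mu = np.zeros(n)
    mu[start] = 1.0
    for _ in range(steps):
        mu = mu @ P
    return 0.5 * np.abs(mu - 1.0 / n).sum()

n = 100

# lazy simple random walk on the n-cycle (reversible, diffusive)
P_walk = np.zeros((n, n))
for i in range(n):
    P_walk[i, i] = 0.5
    P_walk[i, (i - 1) % n] = 0.25
    P_walk[i, (i + 1) % n] = 0.25

# lifted walk on 2n states (position, direction): keep moving in the current
# direction, flipping direction with probability 1/n
P_lift = np.zeros((2 * n, 2 * n))
for i in range(n):
    P_lift[i, (i + 1) % n] = 1.0 - 1.0 / n         # direction +1: step forward
    P_lift[i, n + i] = 1.0 / n                     # flip to direction -1
    P_lift[n + i, n + (i - 1) % n] = 1.0 - 1.0 / n # direction -1: step backward
    P_lift[n + i, i] = 1.0 / n                     # flip to direction +1

steps = 5 * n
print("plain walk  TV:", tv_to_uniform(P_walk, steps))
print("lifted walk TV:", tv_to_uniform(P_lift, steps))   # far closer to uniform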

Tuning Over-Relaxed ADMM

no code implementations • 10 Mar 2017 • Guilherme França, José Bento

The framework of Integral Quadratic Constraints (IQC) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to a semi-definite program (SDP).


An Explicit Rate Bound for the Over-Relaxed ADMM

no code implementations • 7 Dec 2015 • Guilherme França, José Bento

In this paper, we provide an exact analytical solution to this SDP and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM algorithms.
