1 code implementation • 31 May 2022 • Jianxin Wang, José Bento
The asymptotic mean squared test error and sensitivity of the Random Features Regression model (RFR) have been recently studied.
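For readers unfamiliar with the model, here is a minimal, purely illustrative sketch of Random Features Regression (fixed random first layer, ridge regression on top). All names, dimensions, and the ReLU feature map are illustrative choices, not taken from the paper.

```python
import numpy as np

def rfr_fit_predict(X_train, y_train, X_test, n_features=200, ridge=1e-2, seed=0):
    """Random Features Regression sketch: project inputs through a fixed
    random ReLU feature map, then solve a ridge regression on the features."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)  # frozen random first layer

    def features(X):
        return np.maximum(X @ W, 0.0)  # ReLU random features

    Z = features(X_train)
    # ridge solution: a = (Z^T Z + lambda I)^{-1} Z^T y
    a = np.linalg.solve(Z.T @ Z + ridge * np.eye(n_features), Z.T @ y_train)
    return features(X_test) @ a

# toy usage: learn a noisy linear target, then measure the test error
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=500)
pred = rfr_fit_predict(X[:400], y[:400], X[400:])
mse = float(np.mean((pred - y[400:]) ** 2))
```

The quantity `mse` is the (finite-sample) test error whose asymptotics the paper studies.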
1 code implementation • 5 Sep 2020 • Guilherme França, José Bento
For simple algorithms such as gradient descent, the dependence of the convergence time on the topology of this network is well known.
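As a stand-in for that topology dependence (this is an illustrative toy, not the paper's setting), consider neighbour-averaging consensus on a cycle graph: the number of rounds needed to agree on the network average grows quickly with the cycle length, because information travels only one hop per round.

```python
def consensus_rounds(n, tol=1e-3, max_rounds=100000):
    """Average consensus on an n-cycle: each node repeatedly replaces its
    value with the mean of itself and its two neighbours. Returns how many
    rounds it takes every node to get within `tol` of the true average."""
    x = [float(i) for i in range(n)]  # initial values 0..n-1
    mean = sum(x) / n                 # preserved: the update is doubly stochastic
    for t in range(1, max_rounds + 1):
        x = [(x[i - 1] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]
        if max(abs(v - mean) for v in x) < tol:
            return t
    return max_rounds

# on a cycle, convergence time grows roughly like n^2
r8, r16 = consensus_rounds(8), consensus_rounds(16)
```

Doubling the cycle length roughly quadruples the convergence time, the kind of topology dependence the paper quantifies for richer algorithms.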
1 code implementation • 29 Jan 2020 • Liang Mi, Azadeh Sheikholeslami, José Bento
Its metric properties favor its use.
1 code implementation • 22 Aug 2019 • Surjyendu Ray, Bei Jia, Sam Safavi, Tim van Opijnen, Ralph Isberg, Jason Rosch, José Bento
We use a careful combination of algorithms, software, and hardware to develop EXACT: a tool that can explore the space of all possible phylogenetic trees and perform exact inference under the PPM with noisy data.
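To see why exhaustive exploration is demanding, here is a small sketch (not EXACT's engine, which is far more engineered) that enumerates all rooted binary leaf-labeled trees by inserting one leaf at a time on every edge; the count grows as the double factorial (2k-3)!!.

```python
def insert_leaf(tree, label):
    """Yield every tree obtained by attaching `label` on some edge of `tree`.
    Trees are nested tuples; leaves are bare labels."""
    yield (tree, label)  # attach above the current root
    if isinstance(tree, tuple):
        left, right = tree
        for t in insert_leaf(left, label):
            yield (t, right)
        for t in insert_leaf(right, label):
            yield (left, t)

def all_trees(labels):
    """Enumerate all rooted binary trees whose leaf set is `labels`."""
    trees = [labels[0]]
    for label in labels[1:]:
        trees = [t for tree in trees for t in insert_leaf(tree, label)]
    return trees

# (2*4 - 3)!! = 5!! = 15 rooted binary trees on 4 leaves
trees4 = all_trees(["a", "b", "c", "d"])
```

With 10 leaves the count is already 17!! = 34,459,425, which is why careful algorithm/software/hardware co-design is needed to search the space exactly.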
1 code implementation • 11 Jul 2018 • Laurence Yang, Michael A. Saunders, Jean-Christophe Lachance, Bernhard O. Palsson, José Bento
The latter approach is important because the known/observed biochemical reactions are often not enough to explain observations, and hence there is a need to automatically extend the model's complexity by learning new chemical reactions.
no code implementations • 9 Jul 2018 • Sam Safavi, José Bento
In the context of comparing graphs, we are the first to show the existence of multi-distances that simultaneously incorporate the useful property of alignment consistency (Nguyen et al. 2011), and a generalized metric property.
no code implementations • 13 Jan 2018 • Sam Safavi, Bikash Joshi, Guilherme França, José Bento
The framework of Integral Quadratic Constraints (IQC) introduced by Lessard et al. (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP).
no code implementations • 2 Oct 2017 • Guilherme França, José Bento
Here we provide a full characterization of the convergence of distributed over-relaxed ADMM for the same type of consensus problem in terms of the topology of the underlying graph.
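As a concrete reference point, here is a minimal sketch of over-relaxed ADMM on a consensus problem with scalar quadratic local costs (so the proximal steps have closed form). The star-shaped consensus and the specific costs are illustrative choices, not the paper's general graph topology; `alpha` is the over-relaxation parameter, with `alpha = 1` recovering plain ADMM.

```python
def overrelaxed_admm_consensus(c, rho=1.0, alpha=1.5, iters=100):
    """Over-relaxed ADMM for: minimize sum_i (x_i - c_i)^2 / 2
    subject to x_i = z for all i. Returns the consensus value z."""
    n = len(c)
    z = 0.0
    u = [0.0] * n  # scaled dual variables
    for _ in range(iters):
        # local proximal steps (closed form for quadratic costs)
        x = [(c[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # over-relaxation: mix the fresh x with the previous consensus z
        xh = [alpha * x[i] + (1.0 - alpha) * z for i in range(n)]
        z = sum(xh[i] + u[i] for i in range(n)) / n
        u = [u[i] + xh[i] - z for i in range(n)]
    return z

# the minimizer is the mean of the local targets
z = overrelaxed_admm_consensus([1.0, 2.0, 6.0])
```

For this quadratic instance the iteration contracts linearly toward the mean of the `c_i`; the paper characterizes how such rates depend on `alpha`, `rho`, and the graph.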
no code implementations • 10 Mar 2017 • Guilherme França, José Bento
The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space.
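A classic illustration of lifting (in the style of Diaconis-Holmes-Neal; the construction below is a sketch, not the paper's general theory) is the random walk on a cycle: duplicating each state with a direction bit yields a non-reversible chain on twice as many states that mixes in roughly n steps instead of roughly n^2.

```python
def mixing_time(P, tol=0.1, max_t=5000):
    """Steps until the total-variation distance between the chain started at
    state 0 and the uniform distribution drops below `tol`."""
    n = len(P)
    mu = [0.0] * n
    mu[0] = 1.0
    for t in range(1, max_t + 1):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
        if 0.5 * sum(abs(m - 1.0 / n) for m in mu) < tol:
            return t
    return max_t

def lazy_cycle(n):
    """Lazy random walk on an n-cycle: stay w.p. 1/2, step each way w.p. 1/4."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][i] += 0.5
        P[i][(i + 1) % n] += 0.25
        P[i][(i - 1) % n] += 0.25
    return P

def lifted_cycle(n):
    """Lifted walk on 2n states (position, direction): keep moving in the
    current direction w.p. 1 - 1/n, or flip direction in place w.p. 1/n.
    Doubly stochastic, so uniform is still stationary on the lifted space."""
    m, p = 2 * n, 1.0 / n
    P = [[0.0] * m for _ in range(m)]
    for i in range(n):
        P[i][(i + 1) % n] = 1 - p          # state (i, +): move forward
        P[i][n + i] = p                    # ... or flip to (i, -)
        P[n + i][n + (i - 1) % n] = 1 - p  # state (i, -): move backward
        P[n + i][i] = p                    # ... or flip to (i, +)
    return P

n = 31  # odd, to avoid periodicity issues on the cycle
t_lazy = mixing_time(lazy_cycle(n))
t_lifted = mixing_time(lifted_cycle(n))
```

The lifted chain trades reversibility for momentum: it sweeps around the cycle instead of diffusing, which is exactly the speedup mechanism the paper studies.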
no code implementations • 10 Mar 2017 • Guilherme França, José Bento
The framework of Integral Quadratic Constraints (IQC) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to a semi-definite program (SDP).
no code implementations • 25 Feb 2017 • Jia-Jie Zhu, José Bento
We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GANs).
no code implementations • 12 Jan 2016 • José Bento, Jia Jie Zhu
Metrics on the space of sets of trajectories are important for scientists in the fields of computer vision, machine learning, robotics, and general artificial intelligence.
no code implementations • 7 Dec 2015 • Guilherme França, José Bento
In this paper we provide an exact analytical solution to this SDP and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM.
no code implementations • 12 May 2015 • Charles Mathy, Nate Derbinsky, José Bento, Jonathan Rosenthal, Jonathan Yedidia
We describe a new instance-based learning algorithm, the Boundary Forest (BF), that can be used for supervised and unsupervised learning.
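The core data structure is a boundary tree; here is a heavily simplified single-tree sketch for classification (assumed Euclidean distance and greedy descent; the paper's full algorithm, with its forest and parameters, is richer than this).

```python
import math

class BoundaryTree:
    """Minimal boundary-tree sketch: greedily descend toward the stored point
    nearest the query; during training, add an example as a child only when
    the tree would misclassify it (so stored points hug decision boundaries)."""

    def __init__(self, point, label):
        self.point, self.label, self.children = point, label, []

    def _descend(self, query):
        node = self
        while True:
            # among the current node and its children, move to the closest
            best = min([node] + node.children,
                       key=lambda n: math.dist(n.point, query))
            if best is node:
                return node
            node = best

    def predict(self, query):
        return self._descend(query).label

    def train(self, point, label):
        node = self._descend(point)
        if node.label != label:  # only boundary-defining points are stored
            node.children.append(BoundaryTree(point, label))

# usage: learn the sign of the first coordinate from a few examples
bt = BoundaryTree((1.0, 0.0), "+")
for p, l in [((-1.0, 0.0), "-"), ((2.0, 1.0), "+"), ((-2.0, 1.0), "-")]:
    bt.train(p, l)
```

Because correctly classified examples are discarded, queries and updates touch only a shallow tree of near-boundary points, which is what makes the method fast and incremental.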
no code implementations • 7 Apr 2015 • José Bento, Nate Derbinsky, Charles Mathy, Jonathan S. Yedidia
We address the problem of planning collision-free paths for multiple agents using optimization methods known as proximal algorithms.
no code implementations • NeurIPS 2014 • Daniel Zoran, Dilip Krishnan, José Bento, Bill Freeman
The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special.
no code implementations • 16 Nov 2013 • Nate Derbinsky, José Bento, Jonathan S. Yedidia
In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems.
no code implementations • 8 May 2013 • Nate Derbinsky, José Bento, Veit Elser, Jonathan S. Yedidia
We describe how the powerful "Divide and Concur" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization. We then improve this message-passing algorithm by allowing three distinct weights for messages: "certain" and "no opinion" weights, alongside the standard weight used in ADMM/DC.
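To convey the divide-and-concur idea in its simplest form, here is a sketch using plain averaged projections (not the paper's ADMM message-passing derivation, and with no message weights): each constraint gets its own copy of the variable, the copies are projected onto their constraints independently (divide), then forced to agree on their average (concur). The two constraint sets below are illustrative.

```python
import math

def divide_and_concur(projections, x0, iters=200):
    """Divide-and-concur sketch via averaged projections: one variable copy
    per constraint; project each copy (divide), then average them (concur)."""
    copies = [list(x0) for _ in projections]
    for _ in range(iters):
        copies = [proj(c) for proj, c in zip(projections, copies)]    # divide
        avg = [sum(c[k] for c in copies) / len(copies) for k in range(len(x0))]
        copies = [list(avg) for _ in copies]                          # concur
    return avg

def proj_unit_circle(p):
    """Project onto the unit circle (a nonconvex constraint set)."""
    r = math.hypot(p[0], p[1]) or 1.0
    return [p[0] / r, p[1] / r]

def proj_line(p):
    """Project onto the line y = x."""
    m = (p[0] + p[1]) / 2.0
    return [m, m]

# find a point lying on both the unit circle and the line y = x
sol = divide_and_concur([proj_unit_circle, proj_line], [2.0, 0.3])
```

The ADMM/DC view of the paper replaces the plain average with weighted message passing, which is what the "certain" and "no opinion" weights refine.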