Search Results for author: José Bento

Found 18 papers, 5 papers with code

Optimal Activation Functions for the Random Features Regression Model

1 code implementation • 31 May 2022 • Jianxin Wang, José Bento

The asymptotic mean squared test error and sensitivity of the Random Features Regression model (RFR) have recently been studied.

Regression
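As a minimal sketch of the Random Features Regression model the paper studies: targets are fit by ridge regression on random nonlinear features σ(Wx), where W is a fixed random matrix and σ is the activation function whose optimal choice is the paper's subject. The dimensions, data, and ReLU activation below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, p, lam = 200, 10, 50, 1e-2               # samples, input dim, features, ridge
X = rng.standard_normal((n, d))                # synthetic inputs
y = rng.standard_normal(n)                     # synthetic targets

W = rng.standard_normal((p, d)) / np.sqrt(d)   # fixed random first-layer weights
sigma = lambda z: np.maximum(z, 0.0)           # example activation (ReLU)

Phi = sigma(X @ W.T)                           # random feature matrix, n x p
# Ridge regression on the random features: only the output weights are trained
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)
y_hat = Phi @ a
```

The fixed random W and trained output weights a are what distinguish RFR from a fully trained two-layer network.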

Distributed Optimization, Averaging via ADMM, and Network Topology

1 code implementation • 5 Sep 2020 • Guilherme França, José Bento

For simple algorithms such as gradient descent, the dependence of the convergence time on the topology of this network is well known.

Distributed Optimization
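The well-known baseline the abstract refers to can be sketched as distributed averaging via gradient descent on a graph: each node updates using only its neighbors' values, x ← x − αLx with L the graph Laplacian, and convergence speed is governed by the Laplacian's spectral gap, i.e. the topology. The cycle graph and step size below are illustrative assumptions.

```python
import numpy as np

n = 8
# Laplacian of a cycle on n nodes (each node has two neighbors)
L = 2 * np.eye(n)
for i in range(n):
    L[i, (i + 1) % n] -= 1
    L[i, (i - 1) % n] -= 1

x = np.arange(n, dtype=float)   # each node starts with a local value
avg = x.mean()                  # the consensus target
alpha = 0.25                    # step size below 2 / lambda_max(L)

for _ in range(2000):
    x = x - alpha * (L @ x)     # each node mixes with its neighbors only
# x is now (numerically) the all-average vector
```

On better-connected graphs the spectral gap is larger and far fewer iterations suffice, which is the topology dependence the paper contrasts with ADMM's.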

Exact inference under the perfect phylogeny model

1 code implementation • 22 Aug 2019 • Surjyendu Ray, Bei Jia, Sam Safavi, Tim van Opijnen, Ralph Isberg, Jason Rosch, José Bento

We use a careful combination of algorithms, software, and hardware to develop EXACT: a tool that can explore the space of all possible phylogenetic trees and perform exact inference under the PPM with noisy data.

Estimating Cellular Goals from High-Dimensional Biological Data

1 code implementation • 11 Jul 2018 • Laurence Yang, Michael A. Saunders, Jean-Christophe Lachance, Bernhard O. Palsson, José Bento

The latter approach is important because the known/observed biochemical reactions are often not enough to explain observations, and hence there is a need to automatically extend the model's complexity by learning new chemical reactions.

Tractable $n$-Metrics for Multiple Graphs

no code implementations • 9 Jul 2018 • Sam Safavi, José Bento

In the context of comparing graphs, we are the first to show the existence of multi-distances that simultaneously incorporate the useful property of alignment consistency (Nguyen et al. 2011) and a generalized metric property.

Clustering

An Explicit Convergence Rate for Nesterov's Method from SDP

no code implementations • 13 Jan 2018 • Sam Safavi, Bikash Joshi, Guilherme França, José Bento

The framework of Integral Quadratic Constraints (IQC) introduced by Lessard et al. (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP).

How is Distributed ADMM Affected by Network Topology?

no code implementations • 2 Oct 2017 • Guilherme França, José Bento

Here we provide a full characterization of the convergence of distributed over-relaxed ADMM for the same type of consensus problem in terms of the topology of the underlying graph.

Markov Chain Lifting and Distributed ADMM

no code implementations • 10 Mar 2017 • Guilherme França, José Bento

The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space.

Tuning Over-Relaxed ADMM

no code implementations • 10 Mar 2017 • Guilherme França, José Bento

The framework of Integral Quadratic Constraints (IQC) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to a semi-definite program (SDP).

Generative Adversarial Active Learning

no code implementations • 25 Feb 2017 • Jia-Jie Zhu, José Bento

We propose a new active-learning-by-query-synthesis approach using Generative Adversarial Networks (GANs).

Active Learning
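The core idea, query synthesis, can be sketched without the GAN machinery: rather than picking queries from a fixed pool, synthesize inputs the current classifier is maximally uncertain about. The paper generates such inputs with a GAN; as a deliberate simplification here, a query is synthesized by gradient descent toward the decision boundary of a linear classifier (driving |w·x + c| to 0). The classifier and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
w, c = np.array([2.0, -1.0]), 0.5   # current linear classifier (illustrative)

x = rng.standard_normal(2)          # seed input to refine into a query
for _ in range(100):
    s = w @ x + c                   # signed score; the boundary is s = 0
    x = x - 0.1 * s * w             # gradient step on (1/2) s^2
# x now lies (approximately) on the decision boundary: a maximally
# uncertain point to send to the human oracle for labeling
```

Replacing this direct optimization with a GAN generator, as the paper does, keeps synthesized queries on the data manifold rather than at arbitrary boundary points.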

A metric for sets of trajectories that is practical and mathematically consistent

no code implementations • 12 Jan 2016 • José Bento, Jia Jie Zhu

Metrics on the space of sets of trajectories are important to scientists in the fields of computer vision, machine learning, robotics, and general artificial intelligence.

An Explicit Rate Bound for the Over-Relaxed ADMM

no code implementations • 7 Dec 2015 • Guilherme França, José Bento

In this paper we provide an exact analytical solution to this SDP and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM.
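The family of algorithms the bound covers can be sketched on a toy problem: over-relaxed ADMM for minimizing f(x) + g(z) subject to x = z, with f(x) = (x−a)²/2 and g(z) = (z−b)²/2, whose minimizer is (a+b)/2. γ is the over-relaxation parameter; the paper's bound covers the whole family γ ∈ (0, 2). The problem and parameter values below are illustrative assumptions.

```python
a, b = 1.0, 3.0           # consensus minimizer is (a + b) / 2 = 2
rho, gamma = 1.0, 1.5     # penalty and over-relaxation parameters

x = z = u = 0.0           # primal variables and scaled dual variable
for _ in range(200):
    x = (a + rho * (z - u)) / (1 + rho)       # x-update (prox of f)
    x_hat = gamma * x + (1 - gamma) * z       # over-relaxation step
    z = (b + rho * (x_hat + u)) / (1 + rho)   # z-update (prox of g)
    u = u + x_hat - z                         # dual update
# x and z both converge to the consensus minimizer (a + b) / 2
```

Setting γ = 1 recovers standard ADMM; the paper's explicit rate shows how convergence speed varies over the whole over-relaxed family.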

The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning

no code implementations • 12 May 2015 • Charles Mathy, Nate Derbinsky, José Bento, Jonathan Rosenthal, Jonathan Yedidia

We describe a new instance-based learning algorithm, the Boundary Forest (BF), that can be used for supervised and unsupervised learning.
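A single boundary tree, the forest's building block, can be sketched from the paper's high-level description: queries descend greedily to the nearest stored example, and a training point is added as a child only when the tree misclassifies it. The Euclidean distance and unbounded fan-out here are simplifying assumptions.

```python
import numpy as np

class BoundaryTree:
    def __init__(self, x, label):
        self.x, self.label, self.children = np.asarray(x, float), label, []

    def _descend(self, q):
        # Greedily move to whichever of {current node, its children} is
        # closest to the query, stopping when the current node wins.
        node = self
        while True:
            best = min(node.children + [node],
                       key=lambda n: np.linalg.norm(n.x - q))
            if best is node:
                return node
            node = best

    def predict(self, q):
        return self._descend(np.asarray(q, float)).label

    def train(self, x, label):
        node = self._descend(np.asarray(x, float))
        if node.label != label:               # grow only on mistakes
            node.children.append(BoundaryTree(x, label))

tree = BoundaryTree([0.0, 0.0], 0)
tree.train([5.0, 5.0], 1)   # misclassified, so it is stored
tree.train([0.1, 0.0], 0)   # predicted correctly, so it is not stored
```

Storing only mistakes keeps the stored examples concentrated near class boundaries, which is what makes the structure compact and fast for online use.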

Proximal operators for multi-agent path planning

no code implementations • 7 Apr 2015 • José Bento, Nate Derbinsky, Charles Mathy, Jonathan S. Yedidia

We address the problem of planning collision-free paths for multiple agents using optimization methods known as proximal algorithms.
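One proximal operator of the kind such solvers compose can be sketched concretely: the projection of two agents' positions onto the collision-free set ‖p1 − p2‖ ≥ r, which is the prox of that set's indicator function. If the agents are too close, each is pushed half of the deficit apart along the line joining them. The radius and positions are illustrative; coincident points (d = 0) are not handled.

```python
import numpy as np

def prox_min_distance(p1, p2, r):
    """Project (p1, p2) onto the set ||p1 - p2|| >= r (assumes p1 != p2)."""
    d = np.linalg.norm(p1 - p2)
    if d >= r:
        return p1, p2                     # already collision-free
    u = (p1 - p2) / d                     # unit separation direction
    shift = 0.5 * (r - d)                 # each agent moves half the deficit
    return p1 + shift * u, p2 - shift * u

p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
q1, q2 = prox_min_distance(p1, p2, r=2.0)
# q1 and q2 are exactly r apart, and their midpoint is unchanged
```

A proximal splitting method such as ADMM alternates simple operators like this one with per-agent path-smoothness proxes, which is the decomposition strategy the abstract describes.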

Shape and Illumination from Shading using the Generic Viewpoint Assumption

no code implementations • NeurIPS 2014 • Daniel Zoran, Dilip Krishnan, José Bento, Bill Freeman

The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special.

Methods for Integrating Knowledge with the Three-Weight Optimization Algorithm for Hybrid Cognitive Processing

no code implementations • 16 Nov 2013 • Nate Derbinsky, José Bento, Jonathan S. Yedidia

In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems.

An Improved Three-Weight Message-Passing Algorithm

no code implementations • 8 May 2013 • Nate Derbinsky, José Bento, Veit Elser, Jonathan S. Yedidia

We describe how the powerful "Divide and Concur" (DC) algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization. We then introduce an improved message-passing algorithm based on ADMM/DC that uses three distinct weights for messages: "certain" and "no opinion" weights, as well as the standard weight used in ADMM/DC.
