Search Results for author: Howard Heaton

Found 10 papers, 6 papers with code

Learning to Solve Integer Linear Programs with Davis-Yin Splitting

2 code implementations • 31 Jan 2023 • Daniel Mckenzie, Samy Wu Fung, Howard Heaton

In many applications, a combinatorial problem must be repeatedly solved with similar, but distinct parameters.

Combinatorial Optimization
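The splitting named in the title can be sketched on a toy problem. Below is a hedged illustration of plain Davis-Yin three-operator splitting for min f(x) + g(x) + h(x), with a hypothetical box-indicator f, an l1 penalty g, and a least-squares h. This is the classical iteration only, not the paper's learned ILP solver:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size <= 1/L, L = ||A||_2^2
lam = 0.1                               # l1 penalty weight (arbitrary)

def grad_h(x):
    # gradient of the smooth term h(x) = 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

def prox_f(z):
    # prox of the indicator of the box [0, 1]^n: projection by clipping
    return np.clip(z, 0.0, 1.0)

def prox_g(z):
    # prox of step * lam * ||.||_1: soft-thresholding
    t = step * lam
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Davis-Yin three-operator splitting: one prox per nonsmooth term,
# one gradient evaluation of the smooth term per iteration
z = np.zeros(10)
for _ in range(2000):
    x_f = prox_f(z)
    x_g = prox_g(2.0 * x_f - z - step * grad_h(x_f))
    z = z + x_g - x_f
x = prox_f(z)  # x_f and x_g agree at convergence
```

At a fixed point the two prox outputs coincide, which is a convenient convergence check in practice.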

Explainable AI via Learning to Optimize

no code implementations • 29 Apr 2022 • Howard Heaton, Samy Wu Fung

Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI).

Explainable Artificial Intelligence (XAI)

Feasibility-based Fixed Point Networks

1 code implementation • 29 Apr 2021 • Howard Heaton, Samy Wu Fung, Aviv Gibali, Wotao Yin

This is accomplished using feasibility-based fixed point networks (F-FPNs).

Rolling Shutter Correction

Learning to Optimize: A Primer and A Benchmark

1 code implementation • 23 Mar 2021 • Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin

It automates the design of an optimization method based on its performance on a set of training problems.

Benchmarking
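The summary above (automating the design of an optimization method from its performance on training problems) can be illustrated with a deliberately minimal example: "learning" a gradient-descent step size by evaluating unrolled runs on a set of toy quadratics. Every problem choice here is hypothetical and far simpler than the methods the primer surveys:

```python
import numpy as np

rng = np.random.default_rng(0)

# training problems: minimize 0.5 * a * x^2 for random curvatures a
curvatures = rng.uniform(0.5, 2.0, size=50)

def final_loss(alpha, a, x0=1.0, n_steps=10):
    # unroll n_steps of gradient descent with step size alpha
    x = x0
    for _ in range(n_steps):
        x = x - alpha * a * x
    return 0.5 * a * x ** 2

def meta_loss(alpha):
    # performance of the candidate optimizer over the training set
    return np.mean([final_loss(alpha, a) for a in curvatures])

# "learn" the step size by grid search over the meta-loss
grid = np.linspace(0.01, 1.0, 100)
best_alpha = grid[np.argmin([meta_loss(a) for a in grid])]
```

Real L2O methods replace the single scalar with a parameterized update rule (often a neural network) and the grid search with gradient-based meta-training, but the training signal is the same: downstream optimization performance.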

JFB: Jacobian-Free Backpropagation for Implicit Networks

2 code implementations • 23 Mar 2021 • Samy Wu Fung, Howard Heaton, Qiuwei Li, Daniel Mckenzie, Stanley Osher, Wotao Yin

Unlike traditional networks, implicit networks solve a fixed point equation to compute inferences.
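A minimal sketch of that fixed-point forward pass, plus the Jacobian-free backward step, assuming a hypothetical contractive tanh layer (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W0 = rng.standard_normal((n, n))
W = 0.5 * W0 / np.linalg.norm(W0, 2)  # spectral norm 0.5 => contraction
x = rng.standard_normal(n)

def f(z):
    # one implicit "layer"; the contraction guarantees a unique fixed point
    return np.tanh(W @ z + x)

def forward_fixed_point(tol=1e-10, max_iter=1000):
    # forward pass: iterate to z* = f(z*), with no gradient
    # bookkeeping along the way
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = f(z)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

z_star = forward_fixed_point()

def jfb_grad_W(dl_dz):
    # JFB backward pass: differentiate through ONE application of f at z*,
    # dropping the (I - df/dz)^{-1} factor that exact implicit
    # differentiation requires -- no Jacobian solve, just an outer product
    pre = W @ z_star + x
    return np.outer(dl_dz * (1.0 - np.tanh(pre) ** 2), z_star)
```

The point of JFB is the backward pass: memory and compute stay constant in the number of forward iterations, since only the final fixed point is differentiated through.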

Learning A Minimax Optimizer: A Pilot Study

no code implementations • ICLR 2021 • Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang

We first present Twin L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables, respectively.

Wasserstein-based Projections with Applications to Inverse Problems

2 code implementations • 5 Aug 2020 • Howard Heaton, Samy Wu Fung, Alex Tong Lin, Stanley Osher, Wotao Yin

To bridge this gap, we present a new algorithm that takes samples from the manifold of true data as input and outputs an approximation of the projection operator onto this manifold.

Safeguarded Learned Convex Optimization

no code implementations • 4 Mar 2020 • Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin

Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data is not from the distribution of training data.
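The safeguarding idea can be sketched with a simplified stand-in for the paper's actual condition: accept the learned update only when it does no worse than a classical fallback step, so convergence never depends on the learned model behaving well. All names below are hypothetical:

```python
def safeguarded_step(x, learned_update, grad, step, obj):
    # accept the learned update only if it does at least as well as a
    # plain gradient step; otherwise take the provably convergent step
    # (a simplified stand-in for the paper's safeguarding condition)
    x_learned = learned_update(x)
    x_classic = x - step * grad(x)
    return x_learned if obj(x_learned) <= obj(x_classic) else x_classic

# toy demonstration: a "learned" update gone wrong on off-distribution data
obj = lambda x: x * x
grad = lambda x: 2.0 * x
bad_learned = lambda x: 2.0 * x  # diverges if used unguarded

x = 1.0
for _ in range(20):
    x = safeguarded_step(x, bad_learned, grad, step=0.25, obj=obj)
```

Even with a divergent learned update, the safeguarded iterates fall back to the classical step and still converge, which mirrors the qualitative claim in the snippet above about off-distribution data.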

Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence

no code implementations • 25 Sep 2019 • Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin

Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data.
