Search Results for author: Max B. Paulus

Found 7 papers, 4 papers with code

Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator

5 code implementations • ICLR 2021 • Max B. Paulus, Chris J. Maddison, Andreas Krause

Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance.
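
The straight-through Gumbel-Softmax estimator that this paper Rao-Blackwellizes can be summarized in a few lines. Below is a minimal sketch of the plain (non-Rao-Blackwellized) baseline in PyTorch; the temperature and tensor shapes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Straight-through Gumbel-Softmax: hard one-hot sample in the forward
    pass, soft Gumbel-Softmax gradient in the backward pass."""
    gumbels = -torch.empty_like(logits).exponential_().log()  # Gumbel(0, 1) noise
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)      # relaxed sample
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    return y_hard - y_soft.detach() + y_soft  # forward: y_hard; backward: y_soft

logits = torch.randn(4, 10, requires_grad=True)
sample = st_gumbel_softmax(logits, tau=0.5)
sample.sum().backward()  # gradients reach `logits` despite the hard argmax
```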

A Review of the Gumbel-max Trick and its Extensions for Discrete Stochasticity in Machine Learning

1 code implementation • 4 Oct 2021 • Iris A. M. Huijben, Wouter Kool, Max B. Paulus, Ruud J. G. van Sloun

The Gumbel-max trick is a method to draw a sample from a categorical distribution, given by its unnormalized (log-)probabilities.
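
As a quick illustration of the trick itself (a minimal sketch, not code from the paper's repository): perturb the unnormalized log-probabilities with i.i.d. Gumbel(0, 1) noise and take the argmax.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(logits, rng):
    """Draw one sample from Categorical(softmax(logits)) via the Gumbel-max trick."""
    gumbels = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return int(np.argmax(logits + gumbels))

logits = np.log(np.array([0.1, 0.2, 0.7]))  # any unnormalized log-probs work
draws = [gumbel_max_sample(logits, rng) for _ in range(10_000)]
print(np.bincount(draws, minlength=3) / len(draws))  # ~ [0.1, 0.2, 0.7]
```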

BIG-bench Machine Learning

Instance-wise algorithm configuration with graph neural networks

1 code implementation • 10 Feb 2022 • Romeo Valentin, Claudio Ferrari, Jérémy Scheurer, Andisheh Amrollahi, Chris Wendler, Max B. Paulus

We pose this task as a supervised learning problem: first, we compile a large dataset of solver performance for various configurations across all provided MILP instances.
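
A hedged sketch of that data-collection step: record solver performance for every (instance, configuration) pair, then label each instance with its best configuration. The instance list, configuration grid, and `run_solver` stub are hypothetical placeholders, not the paper's setup (which predicts configurations with a graph neural network).

```python
from itertools import product

instances = ["inst_a.mps", "inst_b.mps"]                     # hypothetical MILP instances
configs = list(product([0, 1], ["default", "aggressive"]))   # hypothetical config grid

def run_solver(instance, config):
    """Hypothetical stand-in: in practice, run the MILP solver and time it."""
    return (hash((instance, config)) % 100) / 10.0

# Dataset of solver performance for all (instance, configuration) pairs.
performance = {(i, c): run_solver(i, c) for i in instances for c in configs}

# Supervised labels: the fastest configuration per instance.
labels = {i: min(configs, key=lambda c: performance[(i, c)]) for i in instances}
```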

Combinatorial Optimization

Augment with Care: Contrastive Learning for Combinatorial Problems

no code implementations • 17 Feb 2022 • Haonan Duan, Pashootan Vaezipoor, Max B. Paulus, Yangjun Ruan, Chris J. Maddison

While typical graph contrastive pre-training uses label-agnostic augmentations, our key insight is that many combinatorial problems have well-studied invariances, which allow for the design of label-preserving augmentations.
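
The resulting objective is typically a standard contrastive loss over paired views. Below is a generic InfoNCE sketch in PyTorch, not the paper's exact objective; batch size, embedding width, and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE over paired embeddings: z1[i] and z2[i] are two
    label-preserving augmentations of the same instance; all other
    pairs in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature     # [batch, batch] cosine similarities
    targets = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(8, 64), torch.randn(8, 64)  # embeddings of two views
loss = info_nce(z1, z2)
```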

Contrastive Learning

Learning To Cut By Looking Ahead: Cutting Plane Selection via Imitation Learning

no code implementations • 27 Jun 2022 • Max B. Paulus, Giulia Zarpellon, Andreas Krause, Laurent Charlin, Chris J. Maddison

Cutting planes are essential for solving mixed-integer linear programs (MILPs) because they facilitate bound improvements on the optimal solution value.
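
A toy worked example (our own illustration, not from the paper) of a cut improving the relaxation bound: for maximize x + y subject to 2x + 2y <= 3 with x, y binary, the LP relaxation attains 1.5, while the Chvatal-Gomory cut x + y <= 1 (divide the constraint by 2 and round down the right-hand side) tightens the bound to the integer optimum of 1.

```python
from scipy.optimize import linprog

c = [-1, -1]                 # linprog minimizes, so negate the objective
bounds = [(0, 1), (0, 1)]

# LP relaxation of the original constraint: optimal value 1.5.
relaxed = linprog(c, A_ub=[[2, 2]], b_ub=[3], bounds=bounds)
print(-relaxed.fun)          # 1.5

# Add the cut x + y <= 1; it removes no integer-feasible point.
cut = linprog(c, A_ub=[[2, 2], [1, 1]], b_ub=[3, 1], bounds=bounds)
print(-cut.fun)              # 1.0, the integer optimum
```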

Imitation Learning

Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs

no code implementations • 26 Sep 2022 • Đorđe Miladinović, Kumar Shridhar, Kushal Jain, Max B. Paulus, Joachim M. Buhmann, Mrinmaya Sachan, Carl Allen

In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning.
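
For context, a common remedy the title alludes to is word dropout on the decoder inputs, which prevents the decoder from ignoring the latent code; the paper proposes an adversarial alternative to choosing the dropped positions at random. A minimal sketch of the standard i.i.d. baseline (token ids and the UNK id are assumptions):

```python
import torch

def word_dropout(decoder_inputs, unk_id, p=0.3):
    """Standard word dropout for a sequence VAE decoder: replace each
    input token with UNK independently with probability p, weakening
    teacher forcing so the decoder must rely on the latent code."""
    mask = torch.rand(decoder_inputs.shape) < p
    return torch.where(mask, torch.full_like(decoder_inputs, unk_id), decoder_inputs)

tokens = torch.randint(3, 1000, (2, 7))  # hypothetical batch of token ids
print(word_dropout(tokens, unk_id=1))
```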

Representation Learning
