Mutual Information Estimation

23 papers with code • 0 benchmarks • 0 datasets

The task is to estimate mutual information from samples, especially for high-dimensional variables.
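
As a concrete starting point, here is a minimal sketch of sample-based estimation using the classic k-nearest-neighbor (Kraskov-style) estimator as implemented in scikit-learn; the toy data and all names are illustrative and not taken from any paper listed below.

```python
# A minimal sketch of sample-based MI estimation using the k-NN
# (Kraskov-style) estimator in scikit-learn. The toy data below is
# illustrative, not from any paper on this page.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)   # y depends on x, so MI(x; y) > 0

# mutual_info_regression scores each column of X against y (in nats).
mi = mutual_info_regression(x.reshape(-1, 1), y, n_neighbors=3)
print(f"estimated MI(x; y) ~= {mi[0]:.3f} nats")

# Ground truth for jointly Gaussian variables: MI = -0.5 * log(1 - rho^2).
rho = np.corrcoef(x, y)[0, 1]
print(f"closed form        ~= {-0.5 * np.log(1 - rho**2):.3f} nats")
```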

Most implemented papers

Learning deep representations by mutual information estimation and maximization

rdevon/DIM ICLR 2019

In this work, we perform unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.
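
The core of that objective can be sketched as follows: a critic is trained to tell matched (input, encoding) pairs from mismatched ones, and the encoder is trained jointly to maximize a Jensen-Shannon-style mutual information lower bound. This is a simplified, global-features-only sketch; the actual rdevon/DIM implementation also uses local features and a prior-matching term, and all layer sizes here are assumptions.

```python
# A simplified, global-only sketch of the DIM idea: maximize MI between
# inputs and encoder outputs via a Jensen-Shannon-style critic.
# (The real rdevon/DIM code also uses local features and a prior term;
# every module size here is illustrative.)
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
critic = nn.Sequential(nn.Linear(32 + 16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(critic.parameters()), lr=1e-4
)

for step in range(1000):
    x = torch.randn(128, 32)                   # stand-in for real inputs
    z = encoder(x)
    z_shuffled = z[torch.randperm(x.size(0))]  # break pairing -> p(x)p(z)
    pos = critic(torch.cat([x, z], dim=1))           # joint samples
    neg = critic(torch.cat([x, z_shuffled], dim=1))  # product samples
    # Jensen-Shannon MI lower bound (f-GAN form used by DIM):
    #   E_joint[-softplus(-T)] - E_product[softplus(T)]
    mi_lb = (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
    loss = -mi_lb                              # maximize the bound
    opt.zero_grad()
    loss.backward()
    opt.step()
```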

Estimating Mutual Information for Discrete-Continuous Mixtures

alexandreguichet/MFS NeurIPS 2017

We provide numerical experiments suggesting that the proposed estimator outperforms two common heuristics: adding small continuous noise to all samples and applying standard estimators tailored for purely continuous variables, or quantizing the samples and applying standard estimators tailored for purely discrete variables.
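
For context, the two baseline heuristics above can be sketched as follows for a pair where x is continuous and y is discrete; the data, noise scale, and bin count are illustrative, and the paper's own mixture estimator is different.

```python
# A sketch of the two baseline heuristics this abstract compares against,
# for a mixed pair: x continuous, y discrete. All constants are
# illustrative; the paper's own estimator is not shown here.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, size=n)            # discrete variable
x = y + 0.5 * rng.normal(size=n)          # continuous variable depending on y

# Heuristic 1: add small continuous noise to the discrete variable, then
# apply a purely continuous (k-NN) estimator.
y_noisy = y + 1e-6 * rng.normal(size=n)
mi_noise = mutual_info_regression(x.reshape(-1, 1), y_noisy)[0]

# Heuristic 2: quantize the continuous variable into bins, then apply a
# purely discrete (plug-in) estimator.
edges = np.quantile(x, np.linspace(0, 1, 16)[1:-1])
mi_quant = mutual_info_score(np.digitize(x, edges), y)

print(f"noise heuristic:    {mi_noise:.3f} nats")
print(f"quantize heuristic: {mi_quant:.3f} nats")
```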

Scalable Mutual Information Estimation using Dependence Graphs

mrtnoshad/EDGE 27 Jan 2018

To the best of our knowledge, EDGE is the first non-parametric MI estimator that can achieve parametric MSE rates with linear time complexity.

Empowerment-driven Exploration using Mutual Information Estimation

navneet-nmk/pytorch-rl 11 Oct 2018

However, many state-of-the-art deep reinforcement learning algorithms that rely on epsilon-greedy exploration fail on these environments.

Deep Learning for Channel Coding via Neural Mutual Information Estimation

chaeger/upper_capacity_bounds 7 Mar 2019

However, one of the drawbacks of current learning approaches is that a differentiable channel model is needed for the training of the underlying neural networks.

Practical and Consistent Estimation of f-Divergences

google-research/google-research NeurIPS 2019

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
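
For reference, the quantity being estimated is, for a convex $f$ with $f(1) = 0$,

$$D_f(P\,\|\,Q) = \mathbb{E}_{x \sim Q}\!\left[f\!\left(\frac{p(x)}{q(x)}\right)\right].$$

Taking $f(t) = t\log t$ recovers the KL divergence, and mutual information is itself an $f$-divergence between the joint and the product of marginals, $I(X;Y) = D_{\mathrm{KL}}(P_{XY}\,\|\,P_X P_Y)$, which is what makes these estimators relevant to this task.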

Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer

BorealisAI/BMI 28 May 2019

In this work, we develop a novel regularizer to improve the learning of long-range dependency of sequence data.

Neural Entropic Estimation: A faster path to mutual information estimation

ccha23/MI-NEE 30 May 2019

In particular, we show that MI-NEE reduces to MINE in the special case when the reference distribution is the product of marginal distributions, but faster convergence is possible by choosing the uniform distribution as the reference distribution instead.
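
A minimal MINE-style sketch of the Donsker-Varadhan bound may make the comparison concrete: the critic is trained on joint samples versus shuffled (product-of-marginals) samples, which is exactly the reference distribution that MI-NEE generalizes. The network size and toy data are assumptions, not the authors' configuration.

```python
# A minimal MINE-style sketch using the Donsker-Varadhan lower bound
#   I(X; Y) >= E_joint[T(x, y)] - log E_product[exp(T(x, y))].
# MI-NEE replaces the product-of-marginals samples (here: a shuffled
# batch) with samples from a chosen reference distribution instead.
import math
import torch
import torch.nn as nn

T = nn.Sequential(nn.Linear(2, 100), nn.ReLU(), nn.Linear(100, 1))
opt = torch.optim.Adam(T.parameters(), lr=1e-4)

for step in range(2000):
    x = torch.randn(256, 1)
    y = x + 0.5 * torch.randn(256, 1)          # correlated toy pair
    y_shuf = y[torch.randperm(y.size(0))]      # approximates p(x)p(y)
    t_joint = T(torch.cat([x, y], dim=1)).squeeze()
    t_prod = T(torch.cat([x, y_shuf], dim=1)).squeeze()
    # Donsker-Varadhan bound, with log-mean-exp for numerical stability.
    mi_lb = t_joint.mean() - (t_prod.logsumexp(dim=0) - math.log(t_prod.numel()))
    opt.zero_grad()
    (-mi_lb).backward()                        # gradient ascent on the bound
    opt.step()

print(f"DV lower bound ~= {mi_lb.item():.3f} nats (true ~ 0.80 for this toy pair)")
```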

LSMI-Sinkhorn: Semi-supervised Mutual Information Estimation with Optimal Transport

csyanbin/LSMI-Sinkhorn 5 Sep 2019

To estimate the mutual information from data, a common practice is preparing a set of paired samples $\{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^n \stackrel{\mathrm{i.i.d.}}{\sim} p(\mathbf{x},\mathbf{y})$.