Mutual Information Estimation

38 papers with code • 0 benchmarks • 0 datasets

The task of estimating mutual information from samples, especially for high-dimensional variables.

Most implemented papers

Learning deep representations by mutual information estimation and maximization

rdevon/DIM ICLR 2019

In this work, we perform unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.
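Objectives of this kind are usually realized through a variational lower bound on MI. As a minimal, hypothetical illustration (scalar toy variables and a fixed bilinear score in place of DIM's trained encoder and discriminator), the widely used InfoNCE bound can be evaluated like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated Gaussian pairs standing in for (input, encoder output).
rho, n = 0.9, 512
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

# InfoNCE lower bound with an illustrative similarity score f(x, y) = x * y:
#   I(X;Y) >= log N + E[ log softmax_j f(x_i, y_j) evaluated at j = i ].
scores = np.outer(x, y)  # f(x_i, y_j) for all pairs in the batch
log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
bound = np.log(n) + np.mean(np.diag(log_softmax))
print(f"InfoNCE lower bound: {bound:.3f} nats")
```

Note that this estimate is capped at log N for a batch of size N, which is one reason critic-based bounds such as the one used by MINE remain of interest for large MI values.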

Graph Representation Learning via Aggregation Enhancement

anonymous1252022/kr_for_gnns 30 Jan 2022

Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance.

Estimating Mutual Information for Discrete-Continuous Mixtures

alexandreguichet/MFS NeurIPS 2017

We provide numerical experiments suggesting that the proposed estimator outperforms two common heuristics: adding small continuous noise to all samples and applying standard estimators tailored for purely continuous variables, or quantizing the samples and applying standard estimators tailored for purely discrete variables.
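The quantization heuristic that the authors compare against can be sketched as follows; the toy data, bin count, and names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed pair: X is discrete, Y = X plus continuous noise.
n = 50_000
x = rng.integers(0, 4, size=n)           # discrete variable
y = x + rng.normal(0.0, 0.5, size=n)     # continuous variable

# Heuristic: quantize the continuous variable into bins, then apply
# the plug-in (histogram) estimator for purely discrete MI.
y_bins = np.digitize(y, np.linspace(y.min(), y.max(), 20))

joint, _, _ = np.histogram2d(x, y_bins, bins=(4, 21))
pxy = joint / joint.sum()
px = pxy.sum(axis=1, keepdims=True)
py = pxy.sum(axis=0, keepdims=True)

mask = pxy > 0
mi_nats = np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))
print(f"plug-in MI estimate after quantization: {mi_nats:.3f} nats")
```

The bin boundaries are an arbitrary choice here; sensitivity to that choice is precisely the kind of weakness the paper's mixture estimator is designed to avoid.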

Scalable Mutual Information Estimation using Dependence Graphs

mrtnoshad/EDGE 27 Jan 2018

To the best of our knowledge, EDGE is the first non-parametric MI estimator that can achieve parametric MSE rates with linear time complexity.

Empowerment-driven Exploration using Mutual Information Estimation

navneet-nmk/pytorch-rl 11 Oct 2018

However, many state-of-the-art deep reinforcement learning algorithms that rely on epsilon-greedy exploration fail in these environments.

Deep Learning for Channel Coding via Neural Mutual Information Estimation

chaeger/upper_capacity_bounds 7 Mar 2019

However, one of the drawbacks of current learning approaches is that a differentiable channel model is needed for the training of the underlying neural networks.

Practical and Consistent Estimation of f-Divergences

google-research/google-research NeurIPS 2019

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.

Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer

BorealisAI/BMI 28 May 2019

In this work, we develop a novel regularizer to improve the learning of long-range dependency of sequence data.

Neural Entropic Estimation: A faster path to mutual information estimation

ccha23/MI-NEE 30 May 2019

In particular, we show that MI-NEE reduces to MINE in the special case when the reference distribution is the product of marginal distributions, but faster convergence is possible by choosing the uniform distribution as the reference distribution instead.
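MINE, which MI-NEE builds on, optimizes the Donsker-Varadhan lower bound I(X;Y) >= E_p[T] - log E_{p_X x p_Y}[e^T], which holds for any critic function T. A minimal sketch with a fixed bilinear critic in place of MINE's trained neural network (toy Gaussian data; all names and the critic family are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated Gaussians with known MI: I(X;Y) = -0.5 * log(1 - rho^2).
rho, n = 0.8, 100_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

def dv_lower_bound(x, y, a):
    # Donsker-Varadhan: I(X;Y) >= E_p[T] - log E_{p_x p_y}[exp(T)],
    # here with the simple bilinear critic T(x, y) = a * x * y.
    joint_term = (a * x * y).mean()
    y_shuf = np.random.default_rng(2).permutation(y)  # product-of-marginals samples
    marginal_term = np.log(np.exp(a * x * y_shuf).mean())
    return joint_term - marginal_term

true_mi = -0.5 * np.log(1 - rho**2)
# Any a gives a valid lower bound; take the best over a small grid.
bound = max(dv_lower_bound(x, y, a) for a in np.linspace(0.05, 0.45, 9))
print(f"true MI {true_mi:.3f} nats, DV lower bound {bound:.3f} nats")
```

The bilinear critic cannot represent the optimal log density ratio, so the bound stays below the true MI; MINE closes that gap by parameterizing T with a neural network, and MI-NEE additionally frees the choice of reference distribution.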

CCMI : Classifier based Conditional Mutual Information Estimation

sudiptodip15/ccmi 5 Jun 2019

Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z.
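For fully discrete variables, this quantity admits a direct plug-in estimate from empirical counts. CCMI targets the harder continuous, high-dimensional setting with a trained classifier, so the following is only an illustration of the definition being estimated (toy data and names are assumptions):

```python
import numpy as np
from collections import Counter

def plug_in_cmi(x, y, z):
    """Plug-in estimate of I(X;Y|Z) in nats from discrete samples."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    # I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
    return sum(
        (c / n) * np.log(pz[zi] * c / (pxz[(xi, zi)] * pyz[(yi, zi)]))
        for (xi, yi, zi), c in pxyz.items()
    )

rng = np.random.default_rng(0)
n = 20_000
z = rng.integers(0, 2, size=n)
x = z ^ (rng.random(n) < 0.2)    # noisy copy of Z
y = z ^ (rng.random(n) < 0.2)    # another noisy copy, independent of X given Z
cmi_ci = plug_in_cmi(x, y, z)    # near 0: X and Y are conditionally independent
y2 = x ^ (rng.random(n) < 0.2)   # depends on X even after conditioning on Z
cmi_dep = plug_in_cmi(x, y2, z)  # clearly positive
print(f"I(X;Y|Z) = {cmi_ci:.4f} nats, I(X;Y2|Z) = {cmi_dep:.4f} nats")
```

Here X and Y are marginally dependent (both track Z) yet conditionally independent given Z, which is exactly the distinction CMI captures and plain MI misses.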