Mutual Information Estimation
41 papers with code • 0 benchmarks • 0 datasets
The task is to estimate mutual information from samples, especially for high-dimensional variables.
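For reference, the quantity being estimated is the KL divergence between the joint distribution and the product of its marginals:

```latex
% Mutual information between X and Y.
I(X;Y) = \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(x,y)}{p(x)\,p(y)}\right]
       = D_{\mathrm{KL}}\!\left(P_{XY} \,\|\, P_X P_Y\right)
```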
Most implemented papers
Learning deep representations by mutual information estimation and maximization
In this work, we perform unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.
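Methods in this line of work maximize a neural lower bound on MI rather than MI itself. As a hedged illustration, here is a minimal PyTorch sketch of the Donsker-Varadhan bound used by MINE-style estimators; the `Critic` architecture, hidden size, and in-batch shuffling for negative pairs are illustrative assumptions (Deep InfoMax itself favors a Jensen-Shannon-based variant of this objective).

```python
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scores a pair (x, y); illustrative architecture, not the paper's."""
    def __init__(self, dim_x, dim_y, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(critic, x, y):
    """Donsker-Varadhan bound: I(X;Y) >= E_p[T] - log E_{p_x p_y}[e^T]."""
    t_joint = critic(x, y).mean()
    # Shuffling y within the batch approximates samples from p(x)p(y).
    t_marg = critic(x, y[torch.randperm(y.size(0))])
    return t_joint - (torch.logsumexp(t_marg, dim=0) - math.log(t_marg.size(0)))

# Usage: maximize dv_lower_bound w.r.t. the critic (and encoder) parameters.
```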
Graph Representation Learning via Aggregation Enhancement
Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance.
Estimating Mutual Information for Discrete-Continuous Mixtures
Numerical experiments suggest that the proposed estimator outperforms common heuristics such as adding small continuous noise to all samples and applying standard estimators for purely continuous variables, or quantizing the samples and applying standard estimators for purely discrete variables.
Scalable Mutual Information Estimation using Dependence Graphs
To the best of our knowledge, EDGE is the first non-parametric MI estimator to achieve parametric MSE rates with linear time complexity.
Empowerment-driven Exploration using Mutual Information Estimation
However, many state-of-the-art deep reinforcement learning algorithms that rely on epsilon-greedy exploration fail in such environments.
Deep Learning for Channel Coding via Neural Mutual Information Estimation
However, one drawback of current learning approaches is that a differentiable channel model is needed to train the underlying neural networks.
Practical and Consistent Estimation of f-Divergences
The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
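For context, the f-divergence family is defined for a convex f with f(1) = 0; mutual information is recovered as a special case:

```latex
% f-divergence; choosing f(t) = t log t with P = P_{XY} and
% Q = P_X P_Y yields the KL divergence, i.e. mutual information.
D_f(P \,\|\, Q) = \mathbb{E}_{x \sim Q}\!\left[f\!\left(\frac{dP}{dQ}(x)\right)\right]
```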
Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer
In this work, we develop a novel regularizer that improves the learning of long-range dependencies in sequence data.
Neural Entropic Estimation: A faster path to mutual information estimation
In particular, we show that MI-NEE reduces to MINE in the special case when the reference distribution is the product of marginal distributions, but faster convergence is possible by choosing the uniform distribution as the reference distribution instead.
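As a sketch of the identity behind this (assuming reference distributions P_{X'} and P_{Y'} that dominate the corresponding marginals):

```latex
% MI-NEE decomposition via reference distributions. When
% P_{X'} P_{Y'} = P_X P_Y, the last two terms vanish and the
% first term is exactly the divergence MINE estimates.
I(X;Y) = D(P_{XY} \,\|\, P_{X'} P_{Y'}) - D(P_X \,\|\, P_{X'}) - D(P_Y \,\|\, P_{Y'})
```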
CCMI : Classifier based Conditional Mutual Information Estimation
Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z.
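Written out, with the chain-rule identity that classifier-based approaches can exploit to reduce CMI to a difference of two unconditional terms:

```latex
% Conditional mutual information and its chain-rule form.
I(X;Y \mid Z) = \mathbb{E}_{p(x,y,z)}\!\left[\log \frac{p(x,y \mid z)}{p(x \mid z)\,p(y \mid z)}\right]
             = I(X;\,Y,Z) - I(X;\,Z)
```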