A Practical & Unified Notation for Information-Theoretic Quantities in ML

22 Jun 2021 · Andreas Kirsch, Yarin Gal

A practical notation can convey valuable intuitions and concisely express new ideas. Information theory is important to machine learning, but the notation for information-theoretic quantities is sometimes opaque. We propose a practical and unified notation and extend it to include information-theoretic quantities between observed outcomes (events) and random variables. This includes the point-wise mutual information known in NLP and mixed quantities such as specific surprise and specific information in the cognitive sciences and information gain in Bayesian optimal experimental design. We apply our notation to prove a version of Stirling's approximation for binomial coefficients mentioned by MacKay (2003) using new intuitions. We also concisely rederive the evidence lower bound for variational auto-encoders and variational inference in approximate Bayesian neural networks. Furthermore, we apply the notation to a popular information-theoretic acquisition function in Bayesian active learning, which selects the most informative (unlabelled) samples to be labelled by an expert, and extend this acquisition function to the core-set problem with the goal of selecting the most informative samples given the labels.
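As a quick reference for two of the quantities named in the abstract, the sketch below writes them in conventional textbook notation rather than the unified notation the paper proposes: the point-wise mutual information between two observed outcomes, and the Bayesian active learning acquisition function (BALD, Houlsby et al., 2011). The symbols are assumptions for illustration: $\Omega$ denotes the model parameters, $Y$ the label of a candidate input $x$, and $\mathcal{D}$ the data observed so far.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\begin{align*}
  % Point-wise mutual information between two observed outcomes x and y:
  \operatorname{pmi}(x; y)
    &= \log \frac{p(x, y)}{p(x)\, p(y)} \\
  % BALD acquisition function: mutual information between the model
  % parameters \Omega and the label Y of a candidate input x, i.e. the
  % expected reduction in predictive entropy from learning the parameters:
  \operatorname{I}[\Omega; Y \mid x, \mathcal{D}]
    &= \operatorname{H}[Y \mid x, \mathcal{D}]
     - \mathbb{E}_{p(\omega \mid \mathcal{D})}\!\left[\operatorname{H}[Y \mid x, \omega]\right]
\end{align*}
\end{document}
```

The second expression is maximal for inputs whose overall predictive distribution is uncertain while each individual parameter setting is confident, which is what makes it a natural "most informative sample" criterion in Bayesian active learning.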
