Analyzing Training Using Phase Transitions in Entropy---Part I: General Theory

2 Dec 2020 · Kang Gao, Bertrand Hochwald

We analyze phase transitions in the conditional entropy of a sequence caused by a change in the conditional variables or input distribution. Such transitions happen, for example, when training to learn the parameters of a system, since the switch from training input to data input causes a discontinuous jump in the conditional entropy of the measured system response. We show that the size of the discontinuity is of particular interest in the computation of mutual information during data transmission, and that this mutual information can be calculated as the difference between two derivatives of a single function. For large-scale systems we present a method of computing the mutual information for a system model with one-shot learning that does not require Gaussianity or linearity in the model, and does not require worst-case noise approximations or explicit estimation of any unknown parameters. The model applies to a broad range of algorithms and methods in communication, signal processing, and machine learning that employ training as part of their operation.
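As generic background for the entropy-difference viewpoint the abstract relies on (the paper's own derivative-based formula is developed in the full text and is not reproduced here), the toy sketch below computes the mutual information of a scalar Gaussian channel as the difference of two differential entropies, h(Y) − h(Y | X). The Gaussian model, the power values `P` and `N0`, and the helper `gaussian_entropy` are illustrative assumptions only; the paper explicitly does not require Gaussianity or linearity.

```python
import numpy as np

# Hedged toy illustration (not the paper's model or method): for a scalar
# AWGN channel y = x + n with x ~ N(0, P) and n ~ N(0, N0), the mutual
# information I(X; Y) equals the difference of two differential entropies,
# h(Y) - h(Y | X). Both terms are Gaussian entropies with closed forms, so
# the entropy-difference view can be checked against 0.5 * log(1 + P / N0).

P, N0 = 2.0, 0.5          # signal power and noise power (arbitrary choices)

def gaussian_entropy(var):
    """Differential entropy (nats) of a zero-mean Gaussian with variance var."""
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

h_y = gaussian_entropy(P + N0)        # entropy of the output y = x + n
h_y_given_x = gaussian_entropy(N0)    # conditional entropy: only noise remains

mi_entropy_difference = h_y - h_y_given_x
mi_closed_form = 0.5 * np.log(1.0 + P / N0)

print(f"I(X;Y) via entropy difference: {mi_entropy_difference:.4f} nats")
print(f"I(X;Y) via 0.5*log(1+SNR):     {mi_closed_form:.4f} nats")
```

In the paper's setting, the quantity of interest is instead the jump in the conditional entropy of the measured response when the input switches from a known training sequence to data drawn from the transmit distribution; the toy above only illustrates the entropy-difference identity itself.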

Categories


Information Theory
