1 code implementation • NeurIPS 2023 • Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski
Rather than the mutual information between the encoder's input and the representation, which is often believed in the related literature to reflect the algorithm's generalization capability but in fact falls short of doing so, our new bounds involve the "multi-letter" relative entropy between the distribution of the representations (or labels) of the training and test sets and a fixed prior.
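The central quantity in such bounds is a relative entropy (KL divergence) between a data-dependent distribution over representations and a fixed, data-independent prior. As a minimal numerical sketch of that quantity only (not the paper's actual bound; the representations and the uniform prior below are illustrative assumptions, and discrete/quantized representations are assumed):

```python
import math
from collections import Counter

def kl_divergence(p, q):
    """Relative entropy D(p || q) in nats, for discrete distributions
    given as dicts mapping symbols to probabilities (p must be
    absolutely continuous with respect to q)."""
    return sum(pi * math.log(pi / q[x]) for x, pi in p.items() if pi > 0)

def empirical(symbols):
    """Empirical distribution of a list of (quantized) representations."""
    n = len(symbols)
    return {s: c / n for s, c in Counter(symbols).items()}

# Hypothetical quantized representations of a training set.
train_reprs = [0, 0, 1, 2, 1, 0, 2, 2]

# A fixed prior over the representation alphabet (uniform here).
prior = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}

print(kl_divergence(empirical(train_reprs), prior))
```

The divergence is zero exactly when the representation distribution matches the prior, and grows as the encoder's outputs concentrate away from it, which is the sense in which it can track how strongly the learned representations depend on the particular sample.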