
Context-aware learning for generative models

This work studies the class of algorithms for learning with side-information that emerges when generative models are extended with embedded context-related variables. Using finite mixture models (FMMs) as the prototypical Bayesian network, we show that maximum-likelihood estimation (MLE) of the parameters through expectation-maximization (EM) improves over the standard unsupervised case and can approach the performance of supervised learning, despite the absence of any explicit ground-truth labeling. By direct application of the missing information principle (MIP), the algorithms' performance is proven to range between the conventional unsupervised and supervised MLE extremes, in proportion to the information content of the contextual assistance provided. The resulting benefits include higher estimation precision, smaller standard errors, faster convergence rates, and improved classification accuracy or regression fit, demonstrated across various scenarios that also highlight important properties of, and differences among, the outlined situations. Applicability is showcased in three real-world unsupervised classification scenarios employing Gaussian mixture models. Importantly, we exemplify the natural extension of this methodology to any type of generative model by deriving an equivalent context-aware algorithm for variational autoencoders (VAEs), thus broadening the spectrum of applicability to unsupervised deep learning with artificial neural networks. The latter is contrasted with a neural-symbolic algorithm exploiting side-information.
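To illustrate the general idea of context-aware EM described above, the following is a minimal sketch, assuming a one-dimensional Gaussian mixture in which side-information enters the E-step as a per-point prior over component membership: uniform rows recover plain unsupervised EM, one-hot rows recover the supervised extreme, and anything in between corresponds to partial contextual assistance. The function name `context_aware_em` and the overall structure are hypothetical and do not reproduce the paper's reference implementation.

```python
# Hedged sketch: EM for a 1-D Gaussian mixture with per-point context priors.
# Not the paper's algorithm; an illustration of how side-information can be
# injected into the E-step responsibilities.
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def context_aware_em(x, context, n_iter=100):
    n, k = context.shape
    # Initialize parameters from data quantiles.
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibilities, reweighted by the context prior.
        lik = gaussian_pdf(x[:, None], mu[None, :], sigma[None, :])  # shape (n, k)
        resp = pi[None, :] * lik * context
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Toy usage: two components; informative context for roughly 30% of the points.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
labels = np.repeat([0, 1], 200)
context = np.full((400, 2), 0.5)                 # uninformative by default
observed = rng.random(400) < 0.3                 # subset with side-information
context[observed] = np.eye(2)[labels[observed]]  # one-hot where context is known
print(context_aware_em(x, context))
```

In this toy setting, making more rows of `context` informative moves the estimates from the unsupervised toward the supervised extreme, mirroring the MIP-based interpolation stated in the abstract.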
