We present new generalization bounds for deep learning models trained by transfer learning from a source task to a target task.
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models.
In this paper, we perform an empirical study on the impact of several loss functions on the performance of standard GAN models, Deep Convolutional Generative Adversarial Networks (DCGANs).
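To fix ideas, the sketch below shows two loss variants commonly compared in studies of this kind, written as PyTorch-style functions over discriminator outputs; the particular losses, function names, and setup are illustrative assumptions and are not taken from the study itself.

```python
# Minimal sketch (assumed, not the study's exact setup): two common GAN loss
# variants that could be compared on a DCGAN backbone.
import torch
import torch.nn.functional as F

def nonsaturating_gan_losses(d_real_logits, d_fake_logits):
    """Standard non-saturating GAN losses on raw discriminator logits."""
    d_loss = (F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
              + F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits)))
    g_loss = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    return d_loss, g_loss

def lsgan_losses(d_real_out, d_fake_out):
    """Least-squares GAN losses on (unbounded) discriminator outputs."""
    d_loss = ((d_real_out - 1.0) ** 2).mean() + (d_fake_out ** 2).mean()
    g_loss = ((d_fake_out - 1.0) ** 2).mean()
    return d_loss, g_loss
```

In practice the discriminator and generator are updated alternately, each with its own loss from the chosen pair.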
We introduce a new measure to evaluate the transferability of representations learned by classifiers.
As a case study, we transfer a learned face recognition model to CelebA attribute classification tasks, achieving state-of-the-art accuracy on tasks estimated to be highly transferable.
As an application, we use our procedure to study two properties of a task sequence: (1) total complexity and (2) sequential heterogeneity.
Second, the granularity of the updates, e.g. whether the updates are local to each data point and employ message passing, or are global.
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.
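In this setting the approximate posterior after each task is obtained by treating the posterior from the previous task as the prior; a standard way to write the resulting per-task objective (notation chosen here for illustration) is:

```latex
% Per-task variational objective: maximize the ELBO for task t's data D_t,
% with the previous approximate posterior q_{t-1} acting as the prior over
% the network parameters \theta.
\mathcal{L}_t(q_t) \;=\;
\mathbb{E}_{q_t(\theta)}\!\left[\log p(\mathcal{D}_t \mid \theta)\right]
\;-\; \mathrm{KL}\!\left(q_t(\theta)\,\|\,q_{t-1}(\theta)\right)
```

The expectation is typically estimated by Monte Carlo sampling from $q_t$, which is where the recent advances in Monte Carlo VI for neural networks enter.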
We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.
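As a concrete (hypothetical) picture of this setting, the sketch below runs a plain uncertainty-sampling loop over a synthetic pool in which each query is answered only with some fixed probability; the model, data, and abstention mechanism are illustrative assumptions, not the procedure proposed in the work.

```python
# Toy pool-based active learning loop with an abstaining labeler (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool: 2-D points with label = sign of the first coordinate.
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool[:, 0] > 0).astype(int)
abstention_rate = 0.3                      # unknown to the learner
budget = 50                                # number of label queries allowed

# Seed with one point from each class so the classifier can be fit.
labeled = [int(np.argmin(X_pool[:, 0])), int(np.argmax(X_pool[:, 0]))]
model = LogisticRegression()

for _ in range(budget):
    model.fit(X_pool[labeled], y_pool[labeled])
    probs = model.predict_proba(X_pool)[:, 1]
    # Uncertainty sampling: query the unlabeled point closest to p = 0.5.
    candidates = [i for i in range(len(X_pool)) if i not in labeled]
    query = min(candidates, key=lambda i: abs(probs[i] - 0.5))
    # The labeler abstains with the (unknown) abstention rate; an abstention
    # consumes the query but yields no label.
    if rng.random() > abstention_rate:
        labeled.append(query)
```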
Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support deployment of GPs in the large data regime and enable analytic intractabilities to be sidestepped.
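The sketch below illustrates the flavour of such approximations on a toy regression problem: with M pseudo-points, the expensive linear algebra is M-dimensional rather than N-dimensional. The kernel, data, and the particular (Titsias-style) predictive mean used here are assumptions for illustration, not the specific construction discussed in the text.

```python
# Toy numpy sketch of a pseudo-point (inducing-point) GP regression predictive mean.
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=200)            # N = 200 training inputs
y = np.sin(X) + 0.1 * rng.normal(size=200)  # noisy targets
Z = np.linspace(-3, 3, 10)                  # M = 10 pseudo-points (M << N)
Xs = np.linspace(-3, 3, 100)                # test inputs
noise = 0.1 ** 2

Kuu = rbf(Z, Z) + 1e-6 * np.eye(len(Z))     # M x M
Kuf = rbf(Z, X)                             # M x N
Ksu = rbf(Xs, Z)                            # test x M

# All expensive algebra lives in the M-dimensional pseudo-point space: O(N M^2).
Sigma = Kuu + Kuf @ Kuf.T / noise
mean_star = Ksu @ np.linalg.solve(Sigma, Kuf @ y) / noise
```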
To avoid this issue, we propose an accelerated randomized mirror descent method for solving this problem without the strong convexity assumption.
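For reference, the unaccelerated, deterministic building block is the mirror descent update; with the entropy mirror map on the probability simplex it reduces to a multiplicative (exponentiated-gradient) step, as in the sketch below. The accelerated randomized method referred to above adds momentum and stochastic gradient estimates on top of this step; the objective and step size here are illustrative assumptions only.

```python
# Basic entropic mirror descent on the probability simplex (illustrative sketch).
import numpy as np

def mirror_descent_simplex(grad, x0, steps=100, eta=0.1):
    """Minimize a convex f over the simplex, given its gradient oracle `grad`.

    With the entropy mirror map the update is multiplicative:
        x_{t+1} proportional to x_t * exp(-eta * grad(x_t)).
    """
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))
        x = x / x.sum()                     # renormalize back onto the simplex
    return x

# Toy usage: minimize the quadratic <x, A x> / 2 over the simplex.
A = np.diag([1.0, 2.0, 3.0])
x_star = mirror_descent_simplex(lambda x: A @ x, np.ones(3) / 3)
```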