Covariate Distribution Aware Meta-learning

Meta-learning has proven successful for few-shot learning across the regression, classification, and reinforcement learning paradigms. Recent approaches have adopted Bayesian interpretations to improve gradient-based meta-learners by quantifying the uncertainty of the post-adaptation estimates. Most of these works, however, largely ignore the latent relationship between the covariate distribution $p(x)$ of a task and the corresponding conditional distribution $p(y|x)$. In this paper, we identify the need to explicitly model the meta-distribution over the task covariates in a hierarchical Bayesian framework. We begin by introducing a graphical model that leverages the samples from the marginal $p(x)$ to better infer the posterior over the optimal parameters of the conditional distribution $p(y|x)$ for each task. Based on this model, we propose a computationally feasible meta-learning algorithm by introducing meaningful relaxations in our final objective. We demonstrate the gains of our algorithm over initialization-based meta-learning baselines on popular classification benchmarks. Finally, to understand the potential benefit of modeling task covariates, we further evaluate our method on a synthetic regression dataset.
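To make the core idea concrete, the following is a minimal sketch of covariate-aware task adaptation. The abstract does not specify the paper's actual algorithm, so this stand-in uses conjugate Bayesian linear regression in which a (hypothetical) learned function `prior_fn` maps a summary of a task's covariates to the prior mean over the conditional model's weights; the names `covariate_conditioned_posterior`, `prior_fn`, and `noise_var` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def covariate_conditioned_posterior(x_support, y_support, prior_fn, noise_var=0.1):
    """Posterior over linear-model weights for one task, with a prior
    whose mean is conditioned on the task's covariate distribution p(x).
    `prior_fn` is a hypothetical learned map from a covariate summary
    to a prior mean over the weights of p(y|x)."""
    # Summarize the task's covariate distribution with its empirical mean.
    x_summary = x_support.mean(axis=0)
    prior_mean = prior_fn(x_summary)           # covariate-aware prior mean
    prior_cov = np.eye(x_support.shape[1])     # fixed isotropic prior covariance

    # Standard conjugate Gaussian update for Bayesian linear regression.
    precision = np.linalg.inv(prior_cov) + x_support.T @ x_support / noise_var
    post_cov = np.linalg.inv(precision)
    post_mean = post_cov @ (np.linalg.inv(prior_cov) @ prior_mean
                            + x_support.T @ y_support / noise_var)
    return post_mean, post_cov

# Toy task: y = 2*x + noise; here prior_fn is a trivial placeholder.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 1))
y = 2.0 * x[:, 0] + 0.1 * rng.normal(size=5)
mean, cov = covariate_conditioned_posterior(x, y, prior_fn=lambda s: np.zeros(1))
print(mean, cov)
```

In a full meta-learning setup, `prior_fn` would be meta-trained across tasks so that the covariate summary shifts the prior toward task-appropriate parameters, which is the role the samples from $p(x)$ play in the paper's graphical model.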

Published at the ICML Workshop 2020.
