Search Results for author: Andreas Damianou

Found 23 papers, 9 papers with code

Fast Adaptation with Linearized Neural Networks

1 code implementation • 2 Mar 2021 • Wesley J. Maddox, Shuai Tang, Pablo Garcia Moreno, Andrew Gordon Wilson, Andreas Damianou

The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings.

Domain Adaptation • Gaussian Processes • +2
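The linearization idea behind this paper can be illustrated on a toy problem: expand a trained network to first order around its weights, f(x; θ) ≈ f(x; θ₀) + J(x)(θ − θ₀), and adapt to a new task by fitting the resulting linear model in Jacobian feature space. The sketch below is illustrative only (toy network, finite-difference Jacobian, ridge solver), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny MLP f(x; theta) with one hidden layer, parameters packed in a vector.
def unpack(theta, d_in=1, d_h=8):
    W1 = theta[:d_in * d_h].reshape(d_h, d_in)
    b1 = theta[d_in * d_h:d_in * d_h + d_h]
    w2 = theta[d_in * d_h + d_h:]
    return W1, b1, w2

def f(theta, X):
    W1, b1, w2 = unpack(theta)
    return np.tanh(X @ W1.T + b1) @ w2

d_in, d_h = 1, 8
theta0 = rng.normal(scale=0.5, size=d_in * d_h + 2 * d_h)  # "pretrained" weights

def jacobian(theta, X, eps=1e-5):
    # Finite-difference Jacobian d f(x; theta) / d theta, one column per parameter.
    J = np.zeros((len(X), len(theta)))
    for j in range(len(theta)):
        t_plus, t_minus = theta.copy(), theta.copy()
        t_plus[j] += eps
        t_minus[j] -= eps
        J[:, j] = (f(t_plus, X) - f(t_minus, X)) / (2 * eps)
    return J

# New-task data: adapt the linearized model instead of fine-tuning the network.
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)

J = jacobian(theta0, X)
residual = y - f(theta0, X)
# Ridge regression in Jacobian feature space: delta = (J'J + lam I)^{-1} J' r
lam = 1e-2
delta = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ residual)

pred = f(theta0, X) + J @ delta          # linearized-model prediction
mse_before = np.mean(residual ** 2)
mse_after = np.mean((y - pred) ** 2)
print(mse_before, mse_after)             # adaptation should reduce the error
```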

Empirical Bayes Transductive Meta-Learning with Synthetic Gradients

2 code implementations • ICLR 2020 • Shell Xu Hu, Pablo G. Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D. Lawrence, Andreas Damianou

The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task.

Few-Shot Image Classification • Meta-Learning • +3
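The decomposition referred to here follows from the standard evidence identity; for one task with data $\mathcal{D}$ and latent $z$ (notation illustrative):

```latex
\log p(\mathcal{D})
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(\mathcal{D}, z)}{q(z)}\right]}_{\text{ELBO}}
  + \operatorname{KL}\!\big(q(z)\,\big\|\,p(z \mid \mathcal{D})\big)
```

The gap between the marginal log-likelihood and the ELBO is exactly the KL divergence from the variational posterior to the true posterior, so summing over tasks yields the sum of local KL terms the abstract mentions.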

Online Constrained Model-based Reinforcement Learning

no code implementations • 7 Apr 2020 • Benjamin van Niekerk, Andreas Damianou, Benjamin Rosman

The environment's dynamics are learned from limited training data and can be reused in new task instances without retraining.

Gaussian Processes • Model-based Reinforcement Learning • +2

Similarity of Neural Networks with Gradients

3 code implementations • 25 Mar 2020 • Shuai Tang, Wesley J. Maddox, Charlie Dickens, Tom Diethe, Andreas Damianou

A suitable similarity index for comparing learnt neural networks plays an important role in understanding the behaviour of such highly non-linear functions, and can provide insights for further theoretical analysis and empirical studies.

Network Pruning
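This paper proposes a gradient-based similarity index; as a point of reference, a widely used representational similarity measure is linear centered kernel alignment (CKA) on feature matrices. The sketch below implements that related baseline, not the paper's method:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices of
    shape (n_examples, n_features). Returns a similarity in [0, 1] that is
    invariant to isotropic scaling and orthogonal transforms."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 16))     # features from one "network"
B = A @ rng.normal(size=(16, 16))  # a linear transform of the same features
C = rng.normal(size=(100, 16))     # unrelated random features

# Identical features score 1.0; unrelated features score much lower.
print(linear_cka(A, A), linear_cka(A, B), linear_cka(A, C))
```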

Variational Information Distillation for Knowledge Transfer

2 code implementations • CVPR 2019 • Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, Zhenwen Dai

We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10.

Knowledge Distillation • Transfer Learning

Deep Gaussian Processes for Multi-fidelity Modeling

1 code implementation • 18 Mar 2019 • Kurt Cutajar, Mark Pullin, Andreas Damianou, Neil Lawrence, Javier González

Multi-fidelity methods are prominently used when cheaply-obtained, but possibly biased and noisy, observations must be effectively combined with limited or expensive true data in order to construct reliable models.

Bayesian Optimization • Decision Making • +2
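The core multi-fidelity idea, correcting a cheap biased model with a few expensive observations, can be sketched with the simplest linear (Kennedy-O'Hagan-style) relationship f_high(x) ≈ ρ·f_low(x) + c, rather than the deep GP of the paper. All functions and values below are a made-up toy:

```python
import numpy as np

# Toy fidelities: the cheap model is a scaled, biased version of the truth.
f_hi = lambda x: np.sin(2 * np.pi * x)              # expensive ground truth
f_lo = lambda x: 0.8 * np.sin(2 * np.pi * x) + 0.3  # cheap biased surrogate

# A handful of expensive observations, with the cheap model queried at the
# same inputs.
x_hi = np.linspace(0.05, 0.95, 6)
y_hi = f_hi(x_hi)

# Linear multi-fidelity model: f_hi(x) ~= rho * f_lo(x) + c.
# Solve for [rho, c] by least squares on the high-fidelity points.
A = np.stack([f_lo(x_hi), np.ones_like(x_hi)], axis=1)
(rho, c), *_ = np.linalg.lstsq(A, y_hi, rcond=None)

# Predict at new locations using only cheap evaluations plus the correction.
x_test = np.linspace(0, 1, 50)
pred = rho * f_lo(x_test) + c
err_raw = np.max(np.abs(f_lo(x_test) - f_hi(x_test)))
err_corrected = np.max(np.abs(pred - f_hi(x_test)))
print(err_raw, err_corrected)   # the correction removes the surrogate's bias
```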

Transferring Knowledge across Learning Processes

4 code implementations • ICLR 2019 • Sebastian Flennerhag, Pablo G. Moreno, Neil D. Lawrence, Andreas Damianou

Approaches that transfer information contained only in the final parameters of a source model will therefore struggle.

Meta-Learning • Transfer Learning

Deep Gaussian Processes with Convolutional Kernels

no code implementations • 5 Jun 2018 • Vinayak Kumar, Vaibhav Singh, P. K. Srijith, Andreas Damianou

This has hindered the application of DGPs in computer vision tasks, an area where deep parametric models (i.e., CNNs) have made breakthroughs.

Gaussian Processes • Image Classification

Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa

no code implementations • 12 Mar 2018 • Jie Yang, Thomas Drake, Andreas Damianou, Yoelle Maarek

Experiments show that our framework can accurately learn annotator expertise, infer true labels, and effectively reduce the amount of annotations in model training as compared to state-of-the-art approaches.

Active Learning • intent-classification • +1

Preferential Bayesian Optimization

no code implementations • ICML 2017 • Javier González, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence

We present a new framework for this scenario that we call Preferential Bayesian Optimization (PBO), which makes it possible to find the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels.

Bayesian Optimization • Recommendation Systems

Preferential Bayesian Optimization

no code implementations • 12 Apr 2017 • Javier Gonzalez, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence

Bayesian optimization (BO) has emerged during the last few years as an effective approach to optimizing black-box functions where direct queries of the objective are expensive.

Bayesian Optimization • Recommendation Systems
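A core ingredient of preference-based optimization is learning a latent utility from duels. The sketch below fits per-candidate utilities with a Bradley-Terry likelihood by gradient ascent on a fixed grid; it is a simplified stand-in for PBO (no GP prior, no acquisition function), and every function and constant in it is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A latent function observable only through noisy pairwise duels x vs x'.
xs = np.linspace(0, 1, 20)
f_true = -(xs - 0.7) ** 2          # optimum at x = 0.7

def duel(i, j):
    """Return True if candidate i beats candidate j (Bradley-Terry noise)."""
    p = 1.0 / (1.0 + np.exp(-200.0 * (f_true[i] - f_true[j])))
    return rng.random() < p

# Collect random duels, then fit per-candidate utilities by maximizing the
# Bradley-Terry log-likelihood with plain gradient ascent.
duels = [(i, j, duel(i, j)) for i, j in rng.integers(0, 20, size=(400, 2)) if i != j]
u = np.zeros(20)
lr = 0.05
for _ in range(500):
    grad = np.zeros(20)
    for i, j, win in duels:
        p = 1.0 / (1.0 + np.exp(-(u[i] - u[j])))   # P(i beats j) under u
        grad[i] += win - p
        grad[j] -= win - p
    u += lr * (grad - 0.01 * u)    # small L2 pull keeps utilities identifiable

best = xs[np.argmax(u)]
print(best)    # should sit near the true optimum at 0.7
```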

Manifold Alignment Determination: finding correspondences across different data views

no code implementations • 12 Jan 2017 • Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek

We present Manifold Alignment Determination (MAD), an algorithm for learning alignments between data points from multiple views or modalities.

Inverse Reinforcement Learning via Deep Gaussian Process

no code implementations • 26 Dec 2015 • Ming Jin, Andreas Damianou, Pieter Abbeel, Costas Spanos

We propose a new approach to inverse reinforcement learning (IRL) based on the deep Gaussian process (deep GP) model, which is capable of learning complicated reward structures with few demonstrations.

Reinforcement Learning (RL)

Recurrent Gaussian Processes

1 code implementation • 20 Nov 2015 • César Lincoln C. Mattos, Zhenwen Dai, Andreas Damianou, Jeremy Forth, Guilherme A. Barreto, Neil D. Lawrence

We define Recurrent Gaussian Processes (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data.

Gaussian Processes

Variational Auto-encoded Deep Gaussian Processes

no code implementations • 19 Nov 2015 • Zhenwen Dai, Andreas Damianou, Javier González, Neil Lawrence

We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model.

Bayesian Optimization • Gaussian Processes

Gaussian Process Models with Parallelization and GPU acceleration

no code implementations • 18 Oct 2014 • Zhenwen Dai, Andreas Damianou, James Hensman, Neil Lawrence

In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration.
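For context, the baseline computation that work like this parallelizes is exact GP regression, whose cost is dominated by solves against the n×n kernel matrix. A minimal serial sketch (toy data, illustrative hyperparameters):

```python
import numpy as np

def rbf(Xa, Xb, lengthscale=0.4, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d2 = (Xa[:, None] - Xb[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=1e-2):
    """Exact GP regression posterior mean and variance at test inputs."""
    K = rbf(X, X) + noise * np.eye(len(X))      # n x n training covariance
    K_s = rbf(X, X_star)                        # train-test cross covariance
    K_ss = rbf(X_star, X_star)
    alpha = np.linalg.solve(K, y)               # the O(n^3) bottleneck
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 10)
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=10)

X_star = np.linspace(0, 1, 5)
mean, var = gp_posterior(X, y, X_star)
print(mean)
print(var)   # posterior variance never exceeds the prior variance
```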

Factorized Topic Models

no code implementations • 15 Jan 2013 • Cheng Zhang, Carl Henrik Ek, Andreas Damianou, Hedvig Kjellstrom

In this paper we present a modification to a latent topic model, which makes the model exploit supervision to produce a factorized representation of the observed data.

General Classification • Topic Models • +1

Variational Gaussian Process Dynamical Systems

no code implementations • NeurIPS 2011 • Andreas Damianou, Michalis K. Titsias, Neil D. Lawrence

Our work builds on recent variational approximations for Gaussian process latent variable models to allow for nonlinear dimensionality reduction simultaneously with learning a dynamical prior in the latent space.

Dimensionality Reduction • Time Series • +1
