no code implementations • 30 May 2024 • Andrea Bacciu, Enrico Palumbo, Andreas Damianou, Nicola Tonellotto, Fabrizio Silvestri
We then improved our system by proposing Retriever-Augmented GQR (RA-GQR), a version that exploits query logs.
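A minimal sketch of the retrieval-augmented idea, with a toy string-similarity retriever standing in for a real one (every name below is hypothetical, not the paper's API):

from difflib import get_close_matches

def ra_gqr_prompt(query, query_log, k=3):
    # Retrieve past queries from the log that resemble the incoming one.
    # Toy string similarity stands in for a proper retriever.
    examples = get_close_matches(query, query_log, n=k, cutoff=0.0)
    lines = ["Rewrite the search query. Similar past queries:"]
    lines += [f"- {q}" for q in examples]
    lines += [f"Query: {query}", "Rewrite:"]
    return "\n".join(lines)

print(ra_gqr_prompt("cheap flighs to rome",
                    ["cheap flights to rome italy", "rome hotels", "budget airlines europe"]))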
no code implementations • 12 Mar 2024 • Andreas Damianou, Francesco Fabbri, Paul Gigioli, Marco De Nadai, Alice Wang, Enrico Palumbo, Mounia Lalmas
In the realm of personalization, integrating diverse information sources such as consumption signals and content-based representations is becoming increasingly critical to build state-of-the-art solutions.
no code implementations • 8 Mar 2024 • Marco De Nadai, Francesco Fabbri, Paul Gigioli, Alice Wang, Ang Li, Fabrizio Silvestri, Laura Kim, Shawn Lin, Vladan Radosavljevic, Sandeep Ghael, David Nyhan, Hugues Bouchard, Mounia Lalmas-Roelleke, Andreas Damianou
While promising, this move presents significant challenges for personalized recommendations.
1 code implementation • 2 Mar 2021 • Wesley J. Maddox, Shuai Tang, Pablo Garcia Moreno, Andrew Gordon Wilson, Andreas Damianou
The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings.
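One standard way to make a trained network's inductive biases explicit, sketched here in our own notation, is to linearize the network around its trained weights \theta_0:

f_{\mathrm{lin}}(x; \theta) = f(x; \theta_0) + J_{\theta_0}(x)\,(\theta - \theta_0)

where J_{\theta_0}(x) is the Jacobian of the outputs with respect to the weights; adaptation to a new setting then happens in this linear (kernel) regime, where the biases inherited from training are easier to analyse.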
no code implementations • ICLR 2021 • Francesco Tonolini, Pablo G. Moreno, Andreas Damianou, Roderick Murray-Smith
We propose a new probabilistic method for unsupervised recovery of corrupted data.
2 code implementations • ICLR 2020 • Shell Xu Hu, Pablo G. Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D. Lawrence, Andreas Damianou
The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task.
Ranked #13 on Few-Shot Image Classification on CIFAR-FS 5-way (1-shot)
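Written out in generic notation (the symbols below are our assumptions, not necessarily the paper's): with per-task latent weights w_t and query sets Y_t over tasks t = 1, ..., T, the standard variational identity applied task-by-task gives

\log p(Y_{1:T}) - \mathrm{ELBO} = \sum_{t=1}^{T} \mathrm{KL}\big( q(w_t) \,\|\, p(w_t \mid Y_t) \big)

so maximizing the ELBO is equivalent to minimizing the sum of local KL divergences to the true per-task posteriors, as the abstract states.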
no code implementations • 7 Apr 2020 • Benjamin van Niekerk, Andreas Damianou, Benjamin Rosman
The environment's dynamics are learned from limited training data and can be reused in new task instances without retraining.
3 code implementations • 25 Mar 2020 • Shuai Tang, Wesley J. Maddox, Charlie Dickens, Tom Diethe, Andreas Damianou
A suitable similarity index for comparing learnt neural networks plays an important role in understanding the behaviour of these highly nonlinear functions, and can provide insights for further theoretical analysis and empirical studies.
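A widely used example of such an index is linear CKA (centered kernel alignment); a minimal sketch for intuition, not necessarily the index proposed in this paper:

import numpy as np

def linear_cka(X, Y):
    # X: (n, d1) and Y: (n, d2) activations of two networks on the same n inputs.
    X = X - X.mean(axis=0)                         # centre each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2     # ||Y^T X||_F^2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))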
2 code implementations • 24 Nov 2019 • Bharathan Balaji, Jordan Bell-Masterson, Enes Bilgin, Andreas Damianou, Pablo Moreno Garcia, Arpit Jain, Runfei Luo, Alvaro Maggiar, Balakrishnan Narayanaswamy, Chun Ye
Reinforcement Learning (RL) has achieved state-of-the-art results in domains such as robotics and games.
2 code implementations • CVPR 2019 • Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, Zhenwen Dai
We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10.
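The objective maximizes a variational lower bound on the mutual information between teacher and student activations; a minimal sketch in that spirit, a Gaussian feature-matching loss with learned variance (the class name and dimensions are our assumptions):

import torch
import torch.nn as nn

class FeatureDistillLoss(nn.Module):
    # -log N(teacher_feat | mu(student_feat), sigma^2) with a learned
    # per-dimension variance, up to additive constants.
    def __init__(self, s_dim, t_dim):
        super().__init__()
        self.mu = nn.Linear(s_dim, t_dim)             # map student features to teacher space
        self.log_var = nn.Parameter(torch.zeros(t_dim))
    def forward(self, s_feat, t_feat):
        var = self.log_var.exp()
        return 0.5 * (((t_feat - self.mu(s_feat)) ** 2) / var + self.log_var).mean()

loss_fn = FeatureDistillLoss(s_dim=64, t_dim=128)     # e.g. MLP student, CNN teacher
loss = loss_fn(torch.randn(32, 64), torch.randn(32, 128))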
1 code implementation • 18 Mar 2019 • Kurt Cutajar, Mark Pullin, Andreas Damianou, Neil Lawrence, Javier González
Multi-fidelity methods are prominently used when cheaply-obtained, but possibly biased and noisy, observations must be effectively combined with limited or expensive true data in order to construct reliable models.
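A classical instance that multi-fidelity methods commonly build on (given here for intuition) is the linear auto-regressive model of Kennedy and O'Hagan:

f_{\mathrm{high}}(x) = \rho\, f_{\mathrm{low}}(x) + \delta(x), \qquad f_{\mathrm{low}} \sim \mathcal{GP}(0, k_{\mathrm{low}}), \quad \delta \sim \mathcal{GP}(0, k_{\delta})

where the scale \rho and the GP discrepancy \delta(x) let a handful of expensive high-fidelity observations correct a model fit largely on cheap, biased low-fidelity data.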
4 code implementations • ICLR 2019 • Sebastian Flennerhag, Pablo G. Moreno, Neil D. Lawrence, Andreas Damianou
Approaches that transfer information contained only in the final parameters of a source model will therefore struggle.
no code implementations • 5 Jun 2018 • Vinayak Kumar, Vaibhav Singh, P. K. Srijith, Andreas Damianou
This has hindered the application of DGPs in computer vision tasks, an area where deep parametric models (i.e. CNNs) have made breakthroughs.
no code implementations • 12 Mar 2018 • Jie Yang, Thomas Drake, Andreas Damianou, Yoelle Maarek
Experiments show that our framework can accurately learn annotator expertise, infer true labels, and effectively reduce the number of annotations needed for model training compared to state-of-the-art approaches.
no code implementations • ICML 2017 • Javier González, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence
We present a new framework for this scenario that we call Preferential Bayesian Optimization (PBO), which allows us to find the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels.
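A common likelihood for such duel feedback (a generic preference-learning construction, not necessarily the paper's exact model) places a GP prior on the latent f and squashes the difference:

p(x \succ x') = \sigma\big( f(x) - f(x') \big), \qquad f \sim \mathcal{GP}(0, k)

so each comparison contributes a Bernoulli observation about which of the two inputs has the larger latent value.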
1 code implementation • 12 Jun 2017 • Clément Moulin-Frier, Tobias Fischer, Maxime Petit, Grégoire Pointeau, Jordi-Ysard Puigbo, Ugo Pattacini, Sock Ching Low, Daniel Camilleri, Phuong Nguyen, Matej Hoffmann, Hyung Jin Chang, Martina Zambelli, Anne-Laure Mealier, Andreas Damianou, Giorgio Metta, Tony J. Prescott, Yiannis Demiris, Peter Ford Dominey, Paul F. M. J. Verschure
This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot.
no code implementations • 12 Apr 2017 • Javier Gonzalez, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence
Bayesian optimization (BO) has emerged during the last few years as an effective approach to optimizing black-box functions where direct queries of the objective are expensive.
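For intuition, the classic expected-improvement acquisition shows how BO spends expensive queries: with GP posterior mean \mu(x), standard deviation s(x), and incumbent best value f^* (written for maximization),

\mathrm{EI}(x) = \big( \mu(x) - f^* \big)\, \Phi(z) + s(x)\, \phi(z), \qquad z = \frac{\mu(x) - f^*}{s(x)}

where \Phi and \phi are the standard normal CDF and PDF; the two terms trade off exploiting a high posterior mean against exploring high uncertainty.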
no code implementations • 12 Jan 2017 • Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek
We present Manifold Alignment Determination (MAD), an algorithm for learning alignments between data points from multiple views or modalities.
no code implementations • 17 Apr 2016 • Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek
Inter-battery factor analysis extends this notion to multiple views of the data.
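In its classical two-view form (sketched in our notation), each view is explained by a shared factor plus a private one:

y^{(1)} = W_1 z + B_1 z^{(1)} + \epsilon_1, \qquad y^{(2)} = W_2 z + B_2 z^{(2)} + \epsilon_2

with z capturing variation common to both views and z^{(i)} the variation specific to view i.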
no code implementations • 26 Dec 2015 • Ming Jin, Andreas Damianou, Pieter Abbeel, Costas Spanos
We propose a new approach to inverse reinforcement learning (IRL) based on the deep Gaussian process (deep GP) model, which is capable of learning complicated reward structures with few demonstrations.
1 code implementation • 20 Nov 2015 • César Lincoln C. Mattos, Zhenwen Dai, Andreas Damianou, Jeremy Forth, Guilherme A. Barreto, Neil D. Lawrence
We define Recurrent Gaussian Process (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data.
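A generic recurrent GP state-space form conveys the idea (the notation is ours):

x_t = f(x_{t-1}, u_t) + \epsilon_x, \qquad y_t = g(x_t) + \epsilon_y, \qquad f, g \sim \mathcal{GP}

so GP priors over the transition and emission functions take the place of the fixed parametric recurrences of an RNN.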
no code implementations • 19 Nov 2015 • Zhenwen Dai, Andreas Damianou, Javier González, Neil Lawrence
We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model.
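The recognition model amortizes inference: a neural network maps observations straight to variational parameters (generic notation, not necessarily the paper's exact parameterization):

q(X \mid Y) = \mathcal{N}\big( X \mid \mu_\phi(Y), \operatorname{diag}\, \sigma^2_\phi(Y) \big)

so an approximate posterior costs a forward pass instead of a per-point optimization, which is what makes the deep GP scale.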
no code implementations • 3 Sep 2015 • Andreas Damianou, Neil D. Lawrence
In this paper we refer to this task as "semi-described learning".
no code implementations • 18 Oct 2014 • Zhenwen Dai, Andreas Damianou, James Hensman, Neil Lawrence
In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration.
no code implementations • 15 Jan 2013 • Cheng Zhang, Carl Henrik Ek, Andreas Damianou, Hedvig Kjellstrom
In this paper we present a modification to a latent topic model that enables it to exploit supervision to produce a factorized representation of the observed data.
no code implementations • NeurIPS 2011 • Andreas Damianou, Michalis K. Titsias, Neil D. Lawrence
Our work builds on recent variational approximations for Gaussian process latent variable models to allow for nonlinear dimensionality reduction simultaneously with learning a dynamical prior in the latent space.
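Schematically (our notation), the model places a temporal GP prior on the latent trajectory and a further GP mapping to the observations:

x(t) \sim \mathcal{GP}\big(0, k_t(t, t')\big), \qquad y = f(x) + \epsilon, \qquad f \sim \mathcal{GP}(0, k_x)

and the variational approximation marginalizes the latent trajectory while learning both kernels.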