Search Results for author: Massimiliano Patacchiola

Found 16 papers, 12 papers with code

Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers

1 code implementation • 12 Sep 2017 • Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, Angelo Cangelosi

Group emotion recognition in the wild is a challenging problem, due to the unstructured environments in which everyday life pictures are taken.

Emotion Recognition General Classification

Y-Autoencoders: disentangling latent representations via sequential-encoding

1 code implementation • 25 Jul 2019 • Massimiliano Patacchiola, Patrick Fox-Roberts, Edward Rosten

Additionally, the projection in the explicit manifold is monitored by a predictor that is embedded in the encoder and trained end-to-end with no adversarial losses.

Image-to-Image Translation

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels

3 code implementations • NeurIPS 2020 • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task.

Bayesian Inference Domain Adaptation +4
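The core idea behind deep kernels is to run a standard kernel (e.g. RBF) on top of a neural feature extractor, then use Gaussian-process inference on the support labels to predict the query set. A minimal NumPy sketch of that idea follows; it is not the paper's implementation — the feature weights here are random rather than meta-learned, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x, W):
    """Tiny stand-in 'deep' feature extractor: one linear layer + tanh."""
    return np.tanh(x @ W)

def rbf_kernel(A, B, lengthscale=1.0):
    """RBF kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

# Toy few-shot task: a labelled support set and an unlabelled query set.
W = rng.normal(size=(2, 8))      # extractor weights (meta-learned in practice)
X_s = rng.normal(size=(6, 2))    # support inputs
y_s = np.sign(X_s[:, 0])         # binary support labels in {-1, +1}
X_q = rng.normal(size=(4, 2))    # query inputs

Phi_s, Phi_q = feature_map(X_s, W), feature_map(X_q, W)

# GP posterior mean via label regression: mean = K_qs @ K_ss^{-1} @ y_s.
K_ss = rbf_kernel(Phi_s, Phi_s) + 1e-2 * np.eye(len(X_s))  # noise/jitter term
K_qs = rbf_kernel(Phi_q, Phi_s)
mean_q = K_qs @ np.linalg.solve(K_ss, y_s)
pred_q = np.sign(mean_q)         # classify queries by the sign of the mean
```

Because the task-specific computation is a closed-form linear solve, adapting to a new support set requires no gradient steps, which is one appeal of GP-based few-shot methods.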

Defining Benchmarks for Continual Few-Shot Learning

2 code implementations • 15 Apr 2020 • Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal, Amos Storkey

Both few-shot and continual learning have seen substantial progress in recent years due to the introduction of proper benchmarks.

continual few-shot learning Continual Learning +1

Self-Supervised Relational Reasoning for Representation Learning

1 code implementation • NeurIPS 2020 • Massimiliano Patacchiola, Amos Storkey

In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data.

Descriptive Image Retrieval +4
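The surrogate objective in relational self-supervision is built by pairing augmented views of unlabeled items: pairs of views from the same item are positives, pairs from different items are negatives, and a relation head learns to tell them apart. A minimal NumPy sketch of the pair construction, under assumed toy augmentations and an untrained linear encoder (both illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    """Stand-in augmentation: additive noise (real systems use crops, flips, ...)."""
    return x + 0.1 * rng.normal(size=x.shape)

# Unlabeled "images" represented as feature vectors.
X = rng.normal(size=(5, 4))

# Two augmented views per item, embedded by an (untrained) linear encoder.
W = rng.normal(size=(4, 3))
Z1, Z2 = augment(X) @ W, augment(X) @ W

# Relational pairs: positive = two views of the same item, negative = different items.
pairs, labels = [], []
for i in range(len(X)):
    for j in range(len(X)):
        pairs.append(np.concatenate([Z1[i], Z2[j]]))  # simple aggregation: concat
        labels.append(1 if i == j else 0)
pairs, labels = np.array(pairs), np.array(labels)
# A relation head (e.g. a small MLP) would now be trained to predict `labels`,
# and its gradient would shape the encoder's representations.
```

The binary same/different labels come for free from the pairing itself, which is what makes the objective self-supervised.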

Class Imbalance in Few-Shot Learning

no code implementations • 1 Jan 2021 • Mateusz Ochal, Massimiliano Patacchiola, Jose Vazquez, Amos Storkey, Sen Wang

Few-shot learning aims to train models on a limited number of labeled samples from a support set in order to generalize to unseen samples from a query set.

Few-Shot Learning
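The support/query split described above is usually produced by an N-way K-shot episode sampler: pick N classes, take K labelled examples per class as the support set, and hold out further examples as queries. A minimal NumPy sketch (function and parameter names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(X, y, n_way=3, k_shot=2, n_query=2):
    """Sample an N-way K-shot episode: a labelled support set and a query set."""
    classes = rng.choice(np.unique(y), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(y == c))
        support += [(X[i], c) for i in idx[:k_shot]]
        query += [(X[i], c) for i in idx[k_shot:k_shot + n_query]]
    return support, query

# Toy dataset: 5 classes, 10 examples each.
X = rng.normal(size=(50, 8))
y = np.repeat(np.arange(5), 10)

support, query = sample_episode(X, y)  # 3x2 = 6 support, 3x2 = 6 query examples
```

Class imbalance enters when the per-class counts in the support set are unequal, which this balanced sampler deliberately avoids.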

Few-Shot Learning with Class Imbalance

1 code implementation • 7 Jan 2021 • Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang

Few-Shot Learning (FSL) algorithms are commonly trained through Meta-Learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic tasks seen during evaluation.

Few-Shot Learning

How Sensitive are Meta-Learners to Dataset Imbalance?

1 code implementation • ICLR Workshop Learning_to_Learn 2021 • Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang

Meta-Learning (ML) has proven to be a useful tool for training Few-Shot Learning (FSL) algorithms by exposure to batches of tasks sampled from a meta-dataset.

Few-Shot Learning

FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification

1 code implementation • 17 Jun 2022 • Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E Turner

Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning, where it is necessary to support i) learning on small amounts of data, and ii) communication-efficient distributed training protocols.

Federated Learning Few-Shot Learning +2

Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification

1 code implementation • 20 Jun 2022 • Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard E. Turner

In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE), which adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass of the user data (context).

Few-Shot Image Classification Few-Shot Learning +1
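Squeeze-and-excitation blocks gate channels with a learned scale; making the gate a function of the task's context set lets a frozen backbone adapt in a single forward pass. A minimal NumPy sketch of that pattern, not the paper's CaSE implementation (shapes, pooling, and the gating MLP here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_se_block(features, context, W1, W2):
    """Context-conditioned channel gating in the spirit of squeeze-and-excitation.

    features: (N, C) backbone activations for the current inputs.
    context:  (M, C) backbone activations for the task's context (support) set.
    """
    squeeze = context.mean(axis=0)                       # pool over the context set
    gates = sigmoid(np.maximum(squeeze @ W1, 0) @ W2)    # small MLP -> per-channel gates
    return features * gates                              # rescale channels, no gradient steps

C, H = 8, 4
W1, W2 = rng.normal(size=(C, H)), rng.normal(size=(H, C))
context = rng.normal(size=(5, C))    # support/context activations
features = rng.normal(size=(3, C))   # query activations
adapted = context_se_block(features, context, W1, W2)
```

Since the gates lie in (0, 1), the block can only attenuate channels; adaptation cost is one context forward pass rather than fine-tuning.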

Comparing the Efficacy of Fine-Tuning and Meta-Learning for Few-Shot Policy Imitation

1 code implementation • 23 Jun 2023 • Massimiliano Patacchiola, Mingfei Sun, Katja Hofmann, Richard E. Turner

Despite its simplicity, this baseline is competitive with meta-learning methods under a variety of conditions and is able to imitate target policies trained on unseen variations of the original environment.

Few-Shot Image Classification Few-Shot Imitation Learning +3

Transformer Neural Autoregressive Flows

no code implementations • 3 Jan 2024 • Massimiliano Patacchiola, Aliaksandra Shysheya, Katja Hofmann, Richard E. Turner

In this paper, we propose a novel solution to these challenges by exploiting transformers to define a new class of neural flows called Transformer Neural Autoregressive Flows (T-NAFs).

Density Estimation
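An autoregressive flow transforms each dimension conditioned on the previous ones, giving a triangular Jacobian whose log-determinant is a simple sum; T-NAFs use a transformer as the conditioner. A minimal NumPy sketch of the generic affine autoregressive step (the toy conditioner below is an untrained stand-in, not the paper's transformer):

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_autoregressive_forward(x, cond):
    """Affine autoregressive flow step: z_i depends on x_i and the prefix x_{<i}.

    `cond(prefix)` returns (shift, log_scale) for the next dimension; in T-NAFs
    this role would be played by a transformer over the prefix.
    """
    z = np.empty_like(x)
    log_det = 0.0
    for i in range(len(x)):
        shift, log_scale = cond(x[:i])
        z[i] = x[i] * np.exp(log_scale) + shift
        log_det += log_scale          # triangular Jacobian: sum of log-scales
    return z, log_det

def toy_cond(prefix):
    """Illustrative conditioner: fixed functions of the prefix sum."""
    s = prefix.sum()                  # empty prefix sums to 0.0
    return 0.5 * s, 0.1 * s           # (shift, log_scale)

x = rng.normal(size=4)
z, log_det = affine_autoregressive_forward(x, toy_cond)
```

The first dimension is unconditioned (empty prefix gives shift 0 and log-scale 0), so z[0] equals x[0]; the accumulated log-determinant is what makes exact density evaluation tractable.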
