Search Results for author: Efi Kokiopoulou

Found 12 papers, 1 paper with code

Pi-DUAL: Using Privileged Information to Distinguish Clean from Noisy Labels

no code implementations · 10 Oct 2023 · Ke Wang, Guillermo Ortiz-Jimenez, Rodolphe Jenatton, Mark Collier, Efi Kokiopoulou, Pascal Frossard

Label noise is a pervasive problem in deep learning that often compromises the generalization performance of trained models.

Transfer and Marginalize: Explaining Away Label Noise with Privileged Information

no code implementations · 18 Feb 2022 · Mark Collier, Rodolphe Jenatton, Efi Kokiopoulou, Jesse Berent

Supervised learning datasets often have privileged information, in the form of features that are available at training time but not at test time, e.g., the ID of the annotator that provided the label.

A Simple Probabilistic Method for Deep Classification under Input-Dependent Label Noise

no code implementations · 15 Mar 2020 · Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, Jesse Berent

By tuning the softmax temperature, we improve accuracy, log-likelihood and calibration both on image classification benchmarks with controlled label noise and on ImageNet-21k, which has naturally occurring label noise.

General Classification · Image Classification · +2
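The abstract snippet above mentions tuning the softmax temperature. As a rough illustration of the general mechanism (not the paper's specific probabilistic method), temperature scaling divides the logits by a scalar T before the softmax: T > 1 flattens the predicted distribution, T < 1 sharpens it.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Temperature-scaled softmax: higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
p_sharp = softmax_with_temperature(logits, T=0.5)  # more peaked
p_flat = softmax_with_temperature(logits, T=5.0)   # closer to uniform
```

Both outputs are valid probability vectors; only their entropy differs, which is why the temperature is a natural knob for calibration.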

Ranking architectures using meta-learning

no code implementations · 26 Nov 2019 · Alina Dubatovka, Efi Kokiopoulou, Luciano Sbaiz, Andrea Gesmundo, Gabor Bartok, Jesse Berent

However, it requires a large amount of computing resources; to alleviate this, a performance prediction network has recently been proposed that enables efficient architecture search by forecasting the performance of candidate architectures instead of relying on actual model training.

Meta-Learning · Neural Architecture Search

Flexible Multi-task Networks by Learning Parameter Allocation

no code implementations · 10 Oct 2019 · Krzysztof Maziarz, Efi Kokiopoulou, Andrea Gesmundo, Luciano Sbaiz, Gabor Bartok, Jesse Berent

The binary allocation variables are learned jointly with the model parameters by standard back-propagation thanks to the Gumbel-Softmax reparametrization method.

Multi-Task Learning
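The snippet above refers to learning binary allocation variables via the Gumbel-Softmax reparametrization. As a minimal sketch of that trick (not the paper's full allocation scheme), a binary on/off variable can be relaxed by adding Gumbel noise to its logits and applying a tempered softmax, yielding a value in (0, 1) that is differentiable with respect to the logit and thus trainable by back-propagation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_binary(logit, temperature=1.0):
    """Relaxed sample of a binary allocation variable.

    Adds Gumbel(0, 1) noise to the "on" and "off" logits and applies
    a softmax with the given temperature. Lower temperatures push the
    relaxed sample closer to a hard 0/1 decision.
    """
    g = -np.log(-np.log(rng.uniform(size=2)))       # Gumbel(0, 1) noise
    scores = (np.array([logit, 0.0]) + g) / temperature
    scores -= scores.max()                          # numerical stability
    e = np.exp(scores)
    return e[0] / e.sum()                           # relaxed P("on")
```

In a real multi-task network each (component, task) pair would carry its own logit, and the relaxed samples would gate which components each task uses.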

Gumbel-Matrix Routing for Flexible Multi-task Learning

no code implementations · 25 Sep 2019 · Krzysztof Maziarz, Efi Kokiopoulou, Andrea Gesmundo, Luciano Sbaiz, Gabor Bartok, Jesse Berent

We propose Gumbel-Matrix routing, a novel multi-task routing method based on the Gumbel-Softmax that is designed to learn fine-grained parameter sharing.

Multi-Task Learning

Fast Task-Aware Architecture Inference

no code implementations · 15 Feb 2019 · Efi Kokiopoulou, Anja Hauth, Luciano Sbaiz, Andrea Gesmundo, Gabor Bartok, Jesse Berent

At the core of our framework lies a deep value network that can predict the performance of input architectures on a task by utilizing task meta-features and the previous model training experiments performed on related tasks.

Computational Efficiency · Neural Architecture Search
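The snippet above describes a deep value network that scores an architecture on a task from the architecture's encoding plus task meta-features. The input/output contract can be sketched as below; the feature names and the single linear layer are hypothetical stand-ins (the actual model is a trained deep network), shown only to make the interface concrete.

```python
import numpy as np

def predict_performance(arch_encoding, task_meta_features, weights, bias):
    """Toy stand-in for a value network: score = w . [arch; task] + b.

    Concatenates the architecture encoding with the task meta-features
    and maps them to a scalar predicted-performance score.
    """
    x = np.concatenate([arch_encoding, task_meta_features])
    return float(weights @ x + bias)

# Hypothetical encodings: 3 architecture features + 2 task meta-features
arch = np.array([0.2, 0.8, 0.5])
task = np.array([1.0, 0.3])
w = np.array([0.1, 0.4, -0.2, 0.3, 0.2])
score = predict_performance(arch, task, w, bias=0.05)
```

Because candidate architectures are scored with a forward pass instead of being trained, many candidates can be ranked cheaply, which is the efficiency gain the abstract refers to.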
