Search Results for author: Michele Sebag

Found 10 papers, 3 papers with code

Learning meta-features for AutoML

1 code implementation ICLR 2022 Herilalaina Rakotoarison, Louisot Milijaona, Andry Rasoanaivo, Michele Sebag, Marc Schoenauer

This paper tackles the AutoML problem, aimed at automatically selecting the ML algorithm and hyper-parameter configuration most appropriate to the dataset at hand.

AutoML
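
As a rough illustration of the problem stated above, the sketch below warm-starts algorithm and hyper-parameter selection from hand-crafted dataset meta-features, recommending the configuration that worked best on the most similar previously seen dataset. The meta-features, the nearest-neighbour rule, and all names are illustrative assumptions; this is not the learned meta-feature approach of the paper.

```python
# Toy illustration of meta-feature-based warm-starting for AutoML.
# The meta-features and the nearest-neighbour recommendation rule are
# illustrative assumptions, not the learned meta-features of the paper.
import numpy as np

def meta_features(X, y):
    """Hand-crafted dataset descriptors: size, dimension, class entropy."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    class_entropy = -(p * np.log(p)).sum()
    return np.array([np.log(n), np.log(d), class_entropy])

def recommend_config(X, y, past_datasets):
    """Return the best-known config of the most similar past dataset.

    past_datasets: list of (meta_feature_vector, best_config) pairs,
    assumed to have been collected from earlier AutoML runs.
    """
    mf = meta_features(X, y)
    dists = [np.linalg.norm(mf - past_mf) for past_mf, _ in past_datasets]
    return past_datasets[int(np.argmin(dists))][1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(500, 20)), rng.integers(0, 3, size=500)
    history = [
        (np.array([5.0, 2.0, 0.7]), {"algo": "svm", "C": 1.0}),
        (np.array([6.5, 3.0, 1.1]), {"algo": "random_forest", "n_estimators": 300}),
    ]
    print(recommend_config(X, y, history))
```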

Boltzmann Tuning of Generative Models

no code implementations12 Apr 2021 Victor Berger, Michele Sebag

The paper focuses on the a posteriori tuning of a generative model in order to favor the generation of good instances in the sense of some external differentiable criterion.

Robust Design
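
The title suggests a Boltzmann-style reweighting of generated samples by exp(f(x)/T), where f is the external differentiable criterion and T a temperature. The sketch below implements that generic idea as importance resampling of a generator's outputs; it is an assumption made for illustration and not necessarily the tuning procedure of the paper.

```python
# Illustrative Boltzmann-style reweighting of generator samples.
# exp(criterion / T) acts as an (unnormalised) importance weight that
# favours "good" instances; the generator, criterion and temperature
# below are toy assumptions, not the paper's construction.
import numpy as np

def boltzmann_resample(samples, criterion, temperature=1.0, rng=None):
    """Resample generated instances proportionally to exp(f(x)/T)."""
    rng = rng or np.random.default_rng()
    scores = np.array([criterion(x) for x in samples])
    logits = scores / temperature
    weights = np.exp(logits - logits.max())      # subtract max for numerical stability
    weights /= weights.sum()
    idx = rng.choice(len(samples), size=len(samples), replace=True, p=weights)
    return samples[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    generated = rng.normal(size=(10_000, 2))                    # stand-in generative model
    good = lambda x: -np.sum((x - np.array([1.0, 1.0])) ** 2)   # toy differentiable criterion
    tuned = boltzmann_resample(generated, good, temperature=0.5, rng=rng)
    print(generated.mean(axis=0), tuned.mean(axis=0))           # tuned mean shifts toward (1, 1)
```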

Boltzmann Tuning of Generative Models

no code implementations1 Jan 2021 Victor Berger, Michele Sebag

The paper focuses on the a posteriori tuning of a generative model in order to favor the generation of good instances in the sense of some external differentiable criterion.

Robust Design

Dynamic Time Lag Regression: Predicting What & When

1 code implementation ICLR 2020 Mandar Chandorkar, Cyril Furtlehner, Bala Poduval, Enrico Camporeale, Michele Sebag

DTLR differs from mainstream regression and from sequence-to-sequence learning in two respects: firstly, no ground truth (e.g., pairs of associated sub-sequences) is available; secondly, the cause signal contains much information irrelevant to the effect signal (the solar magnetic field governs the solar wind propagation in the heliosphere, of which the Earth's magnetosphere is but a minuscule region).

regression
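
To make the "what & when" setting concrete, the toy generator below produces a cause signal and an effect signal whose response arrives after a lag that depends on the cause intensity, so a learner must predict both the value and the delay without ever seeing aligned pairs. All functional forms are illustrative assumptions, not the paper's DTLR model.

```python
# Toy data generator for the dynamic-time-lag setting: the effect of a
# cause at time t shows up at time t + lag(t), where the lag varies with
# the cause intensity. All functional forms are illustrative assumptions.
import numpy as np

def make_dtlr_series(n=1000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    cause = 1.0 + np.abs(np.sin(0.02 * t)) + 0.1 * rng.normal(size=n)  # e.g. a solar-wind-speed proxy
    lag = np.clip((20.0 / cause).astype(int), 3, 30)                   # stronger cause -> shorter lag
    effect = np.zeros(n)
    for i in range(n):
        j = i + lag[i]
        if j < n:
            effect[j] += 0.5 * cause[i]                                # delayed, attenuated response
    effect += 0.05 * rng.normal(size=n)
    return cause, effect, lag  # a DTLR learner sees only (cause, effect), never the lags

if __name__ == "__main__":
    cause, effect, lag = make_dtlr_series()
    print("lag range:", lag.min(), "-", lag.max())
```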

Variational Auto-Encoder: not all failures are equal

no code implementations4 Mar 2020 Victor Berger, Michèle Sebag

We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that even in the large sample limit, with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness must preferably be learned by the VAE (as opposed to being fixed and optimized offline): autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between the optimization of the reconstruction loss and the latent compression.
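
The sharpness claim can be made concrete with a Gaussian observation model: the reconstruction term of the negative ELBO is ||x - x̂||² / (2σ²) + D log σ up to constants, so the decoder scale σ directly weights reconstruction against the KL term. The sketch below exposes σ as a single global parameter that could be learned; it is consistent with the abstract's claim but is not presented as the paper's exact formulation.

```python
# Gaussian-decoder VAE loss with a global sharpness parameter (sigma).
# Written with plain numpy for illustration; in practice log_sigma would be
# a trainable parameter in an autodiff framework. This is a sketch of the
# standard Gaussian observation model, not the paper's exact formulation.
import numpy as np

def neg_elbo(x, x_hat, mu, logvar, log_sigma):
    """Negative ELBO for a diagonal-Gaussian posterior and N(x_hat, sigma^2 I) decoder."""
    d = x.shape[-1]
    sigma2 = np.exp(2.0 * log_sigma)
    # Reconstruction term: sigma controls how strongly reconstruction errors are penalised.
    rec = 0.5 * np.sum((x - x_hat) ** 2, axis=-1) / sigma2 \
          + d * (log_sigma + 0.5 * np.log(2.0 * np.pi))
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)
    return rec + kl

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8)); x_hat = x + 0.1 * rng.normal(size=(4, 8))
    mu, logvar = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
    for log_sigma in (-2.0, 0.0):   # sharper vs. broader observation model
        print(log_sigma, neg_elbo(x, x_hat, mu, logvar, log_sigma).mean())
```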

Toward Optimal Run Racing: Application to Deep Learning Calibration

no code implementations10 Jun 2017 Olivier Bousquet, Sylvain Gelly, Karol Kurach, Marc Schoenauer, Michele Sebag, Olivier Teytaud, Damien Vincent

This paper aims at one-shot learning of deep neural nets, where a highly parallel setting is considered to address the algorithm calibration problem: selecting the best neural architecture and learning hyper-parameter values depending on the dataset at hand.

One-Shot Learning, Two-sample testing
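
A hedged sketch of a generic racing loop in the spirit of the abstract and the Two-sample testing tag: configurations are run in parallel rounds, and a configuration is eliminated as soon as a two-sample (Welch) test finds its scores significantly worse than the current best's. The elimination rule, the choice of test, and the toy scoring function are assumptions for illustration, not the paper's optimal racing procedure.

```python
# Illustrative racing loop: run all configurations in parallel rounds and
# eliminate a configuration once a two-sample (Welch) test finds it
# significantly worse than the current best. The scoring function and the
# elimination threshold are toy assumptions, not the paper's procedure.
import numpy as np
from scipy.stats import ttest_ind

def race(configs, evaluate, rounds=20, alpha=0.05, rng=None):
    rng = rng or np.random.default_rng()
    scores = {c: [] for c in configs}
    alive = set(configs)
    for _ in range(rounds):
        for c in list(alive):
            scores[c].append(evaluate(c, rng))          # one more (noisy) run per survivor
        best = max(alive, key=lambda c: np.mean(scores[c]))
        for c in list(alive - {best}):
            if len(scores[c]) > 2:
                _, p = ttest_ind(scores[best], scores[c], equal_var=False)
                if p < alpha and np.mean(scores[c]) < np.mean(scores[best]):
                    alive.discard(c)                     # significantly worse: stop racing it
        if len(alive) == 1:
            break
    return max(alive, key=lambda c: np.mean(scores[c]))

if __name__ == "__main__":
    # Toy "learning rate" calibration: true quality peaks at 0.1, observations are noisy.
    candidates = (0.001, 0.01, 0.1, 1.0)
    quality = lambda lr, rng: -abs(np.log10(lr) - np.log10(0.1)) + 0.3 * rng.normal()
    print("selected:", race(candidates, quality))
```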

SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

no code implementations NeurIPS 2010 Sylvain Chevallier, Hélène Paugam-Moisy, Michele Sebag

How to enforce such a division in a decentralized and distributed way is tackled in this paper, using a spiking neuron network architecture.
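
For readers unfamiliar with the building block named above, the snippet below simulates a single leaky integrate-and-fire neuron, the canonical spiking-neuron model. It only illustrates what a spiking neuron is; it makes no claim about the SpikeAnts architecture itself, and all parameters are illustrative.

```python
# Minimal leaky integrate-and-fire neuron, the generic building block of
# spiking neuron networks. Parameters are illustrative; this is not the
# SpikeAnts model itself.
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the membrane-potential trace and spike times of a LIF neuron."""
    v, trace, spikes = v_rest, [], []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_t)   # leaky integration of the input
        if v >= v_thresh:                       # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current = 1.2 + 0.3 * rng.normal(size=200)  # noisy supra-threshold drive
    _, spike_times = lif_simulate(current)
    print("spike count:", len(spike_times))
```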
