1 code implementation • ICLR 2022 • Herilalaina Rakotoarison, Louisot Milijaona, Andry Rasoanaivo, Michele Sebag, Marc Schoenauer
This paper tackles the AutoML problem, which aims to automatically select the ML algorithm and hyper-parameter configuration most appropriate to the dataset at hand.
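To illustrate the problem setting, the sketch below runs a naive random search over a joint (algorithm, hyper-parameter) space; this is only a hedged illustration of the CASH-style setting, not the paper's method, and the search space is an arbitrary toy choice:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def sample_config():
    """Draw one point from the joint (algorithm, hyper-parameters) space."""
    if rng.random() < 0.5:
        return SVC(C=10 ** rng.uniform(-2, 2))
    return RandomForestClassifier(n_estimators=int(rng.integers(10, 200)))

best_score, best_model = -np.inf, None
for _ in range(20):
    model = sample_config()
    score = cross_val_score(model, X, y, cv=3).mean()  # proxy for "most appropriate"
    if score > best_score:
        best_score, best_model = score, model

print(best_model, best_score)
```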
no code implementations • 12 Apr 2021 • Victor Berger, Michele Sebag
The paper focuses on the a posteriori tuning of a generative model in order to favor the generation of good instances in the sense of some external differentiable criterion.
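As an illustration of this setting, the sketch below fine-tunes a pretrained generator by gradient descent so that its samples score better under an external differentiable criterion, while penalizing drift from the original model. `generator` and `criterion` are placeholders, and this plain penalized loop is an assumed illustration of the setting, not the paper's algorithm:

```python
import copy
import torch

def tune_generator(generator, criterion, latent_dim=16, steps=200, lam=0.1):
    """Nudge `generator` toward samples that score high under `criterion`."""
    reference = copy.deepcopy(generator)      # frozen copy of the original
    for p in reference.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(steps):
        z = torch.randn(64, latent_dim)
        x = generator(z)
        # Maximize the external criterion; penalize drift away from the
        # original generator's output on the same latent codes.
        loss = -criterion(x).mean() + lam * (x - reference(z)).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```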
1 code implementation • ICLR 2020 • Mandar Chandorkar, Cyril Furtlehner, Bala Poduval, Enrico Camporeale, Michele Sebag
DTLR differs from mainstream regression and from sequence-to-sequence learning in two respects: firstly, no ground truth (e.g., pairs of associated sub-sequences) is available; secondly, the cause signal contains much information irrelevant to the effect signal (the solar magnetic field governs the solar wind propagation in the heliosphere, of which the Earth's magnetosphere is but a minuscule region).
no code implementations • 4 Mar 2020 • Victor Berger, Michele Sebag
We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that, even in the large-sample limit with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness should preferably be learned by the VAE (as opposed to being fixed and optimized offline): autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between the optimization of the reconstruction loss and the latent compression.
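A minimal sketch of this mechanism, assuming a Gaussian observation model whose global log-variance is a learned parameter; the architecture details are illustrative placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

class GaussianVAE(nn.Module):
    def __init__(self, x_dim, z_dim=8, h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(),
                                 nn.Linear(h, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))
        self.log_var_x = nn.Parameter(torch.zeros(()))  # learned sharpness

    def loss(self, x):
        mu_z, log_var_z = self.enc(x).chunk(2, dim=-1)
        z = mu_z + torch.randn_like(mu_z) * (0.5 * log_var_z).exp()
        mu_x = self.dec(z)
        # Gaussian negative log-likelihood (up to constants): the learned
        # log-variance rescales the squared error and pays a log penalty,
        # trading reconstruction accuracy against latent compression.
        rec = 0.5 * ((x - mu_x).pow(2) / self.log_var_x.exp()
                     + self.log_var_x).sum(-1)
        kl = 0.5 * (mu_z.pow(2) + log_var_z.exp() - 1 - log_var_z).sum(-1)
        return (rec + kl).mean()
```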
no code implementations • 24 Jul 2019 • Jorge G. Madrid, Hugo Jair Escalante, Eduardo F. Morales, Wei-Wei Tu, Yang Yu, Lisheng Sun-Hosoya, Isabelle Guyon, Michele Sebag
We extend Auto-Sklearn with sound and intuitive mechanisms that allow it to cope with this sort of problem.
1 code implementation • ICLR 2019 • Alice Schoenauer-Sebag, Louise Heinrich, Marc Schoenauer, Michele Sebag, Lani F. Wu, Steve J. Altschuler
Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains.
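This objective amounts to minimizing the empirical risk averaged over the domains; the sketch below shows one such training step under placeholder names (`model`, `loss_fn`, one batch per domain) and does not reproduce the paper's specific approach:

```python
import torch

def mdl_step(model, domain_batches, loss_fn, opt):
    """One update of a shared model on the risk averaged over all domains."""
    per_domain = [loss_fn(model(x), y) for x, y in domain_batches]
    loss = torch.stack(per_domain).mean()   # average risk across domains
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```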
no code implementations • Conférence sur l'Apprentissage Automatique 2018 • Lisheng Sun-Hosoya, Isabelle Guyon, Michele Sebag
We give a brief account of the main findings of our post-hoc analysis of the first AutoML challenge (2015-2016).
no code implementations • 10 Jun 2017 • Olivier Bousquet, Sylvain Gelly, Karol Kurach, Marc Schoenauer, Michele Sebag, Olivier Teytaud, Damien Vincent
This paper aims at one-shot learning of deep neural nets, where a highly parallel setting is considered to address the algorithm calibration problem: selecting the best neural architecture and learning hyper-parameter values depending on the dataset at hand.
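A hedged sketch of such a highly parallel calibration loop, where candidate configurations are evaluated independently on a process pool and the best one is kept; `evaluate` is a hypothetical stand-in for training a network with the given configuration and returning its validation score, not the paper's method:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def evaluate(config):
    """Hypothetical stand-in: train a net with `config`, return its score."""
    random.seed(str(sorted(config.items())))
    return random.random()

if __name__ == "__main__":
    random.seed(0)
    configs = [{"lr": 10 ** random.uniform(-4, -1),
                "layers": random.randint(1, 5)} for _ in range(32)]
    with ProcessPoolExecutor() as pool:      # one evaluation per worker
        scores = list(pool.map(evaluate, configs))
    best = max(zip(scores, range(len(configs))))[1]
    print("best config:", configs[best])
```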
no code implementations • NeurIPS 2010 • Sylvain Chevallier, Hél\`Ene Paugam-Moisy, Michele Sebag
How to enforce such a division in a decentralized and distributed way is tackled in this paper, using a spiking neuron network architecture.
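For illustration, here is a minimal leaky integrate-and-fire neuron, the standard building block of spiking neuron networks; the constants are illustrative only and unrelated to the paper's architecture:

```python
import numpy as np

def lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron; returns the spike times."""
    v, spikes = v_reset, []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (i_t - v)     # leaky integration of the input
        if v >= v_thresh:             # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset               # membrane potential resets
    return spikes

print(lif(np.full(200, 1.5)))   # constant drive -> regular spike train
```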