Search Results for author: Michèle Sebag

Found 23 papers, 7 papers with code

Learning Large Causal Structures from Inverse Covariance Matrix via Sparse Matrix Decomposition

1 code implementation · 25 Nov 2022 · Shuyu Dong, Kento Uemura, Akito Fujii, Shuang Chang, Yusuke Koyanagi, Koji Maruhashi, Michèle Sebag

In the context of linear structural equation models (SEMs), this paper focuses on learning causal structures from the inverse covariance matrix.

Causal Discovery
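
The identity this line of work builds on can be shown in a few lines of NumPy (an illustrative toy, not the paper's algorithm): for a linear SEM x = Bᵀx + e with diagonal noise covariance Ω, the inverse covariance matrix factors through I − B, which is why a sparse decomposition of it exposes the causal coefficients.

```python
import numpy as np

# Toy identity behind inverse-covariance-based causal discovery (illustrative,
# not the paper's algorithm): for a linear SEM x = B^T x + e with diagonal
# noise covariance Omega, the precision matrix factors as
#   Theta = (I - B) Omega^{-1} (I - B)^T.

rng = np.random.default_rng(0)
d = 4
B = np.triu(rng.normal(size=(d, d)), k=1)  # strictly upper triangular => DAG
Omega = np.diag(rng.uniform(0.5, 2.0, size=d))
I = np.eye(d)

Sigma = np.linalg.inv(I - B.T) @ Omega @ np.linalg.inv(I - B)  # Cov(x)
Theta = np.linalg.inv(Sigma)                                   # precision

Theta_factored = (I - B) @ np.linalg.inv(Omega) @ (I - B).T
assert np.allclose(Theta, Theta_factored)
```

The upper-triangular B encodes a DAG over a fixed topological order; recovering a sparse B from an estimated Θ is the hard part the paper addresses.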

From graphs to DAGs: a low-complexity model and a scalable algorithm

1 code implementation · 10 Apr 2022 · Shuyu Dong, Michèle Sebag

Learning directed acyclic graphs (DAGs) has long been known as a critical challenge at the core of probabilistic and causal modeling.
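
The hard part is the acyclicity constraint; smooth characterizations make it amenable to continuous optimization. A minimal NumPy illustration (a polynomial variant of the NOTEARS-style constraint, used here as a generic example, not this paper's low-complexity model):

```python
import numpy as np

def acyclicity(W):
    """Smooth acyclicity score: zero iff the weighted adjacency W is a DAG.
    Polynomial variant of the NOTEARS-style constraint (illustrative)."""
    d = W.shape[0]
    M = np.eye(d) + W * W / d                 # entrywise square removes signs
    return np.trace(np.linalg.matrix_power(M, d)) - d

dag = np.array([[0.0, 1.0], [0.0, 0.0]])      # edge 0 -> 1, acyclic
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])      # 2-cycle

assert abs(acyclicity(dag)) < 1e-9
assert acyclicity(cyc) > 0.0
```

The score is differentiable in W, so it can be used as a penalty inside gradient-based structure learning.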

Frugal Machine Learning

no code implementations · 5 Nov 2021 · Mikhail Evchenko, Joaquin Vanschoren, Holger H. Hoos, Marc Schoenauer, Michèle Sebag

Machine learning, already at the core of increasingly many systems and applications, is set to become even more ubiquitous with the rapid rise of wearable devices and the Internet of Things.

Activity Recognition · BIG-bench Machine Learning

Variational Auto-Encoder: not all failures are equal

no code implementations · 4 Mar 2020 · Victor Berger, Michèle Sebag

We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that even in the large sample limit with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness must preferably be learned by the VAE (as opposed to being fixed and optimized offline): autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between the optimization of the reconstruction loss and the latent compression.
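
The sharpness claim can be illustrated with a Gaussian observation model (a toy computation, not the paper's experiments): for fixed reconstruction errors, the negative log-likelihood as a function of σ² is minimized exactly when σ² matches the mean squared error, so both an over-sharp and an over-flat fixed σ inflate the loss.

```python
import numpy as np

# Toy illustration: with a Gaussian observation model p(x|z) = N(x_hat, s2*I),
# the per-dimension negative log-likelihood
#   nll(s2) = 0.5 * (log(2*pi*s2) + mse / s2)
# is minimized at s2 = mse. A fixed, mismatched sharpness inflates the loss,
# one way a VAE can fail when sharpness does not match the data scale.

rng = np.random.default_rng(1)
residuals = rng.normal(scale=0.3, size=10_000)   # reconstruction errors
mse = np.mean(residuals ** 2)

def nll(s2):
    return 0.5 * (np.log(2 * np.pi * s2) + mse / s2)

learned = nll(mse)        # sharpness adapted to the data
too_sharp = nll(1e-4)     # overconfident observation model
too_flat = nll(10.0)      # underconfident observation model

assert learned < too_sharp and learned < too_flat
```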

Automated Machine Learning with Monte-Carlo Tree Search

2 code implementations · 1 Jun 2019 · Herilalaina Rakotoarison, Marc Schoenauer, Michèle Sebag

The AutoML task consists of selecting the proper algorithm in a machine learning portfolio, and its hyperparameter values, in order to deliver the best performance on the dataset at hand.

AutoML · Bayesian Optimization +1
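
The bandit rule at the heart of MCTS-style AutoML can be shown standalone (an illustrative UCB arm selection, not the paper's implementation): each arm is a candidate algorithm and the rewards are hypothetical validation accuracies.

```python
import math
import random

# Illustrative UCB arm selection over a toy algorithm portfolio
# (hypothetical accuracies, not the paper's system).

def ucb_choice(stats, c=0.3):
    """stats: {arm: (n_pulls, total_reward)}; return the UCB-maximizing arm."""
    total = sum(n for n, _ in stats.values())
    def ucb(arm):
        n, s = stats[arm]
        if n == 0:
            return float("inf")        # try every arm at least once
        return s / n + c * math.sqrt(math.log(total) / n)
    return max(stats, key=ucb)

random.seed(0)
true_acc = {"svm": 0.80, "rf": 0.90, "knn": 0.70}   # hypothetical accuracies
stats = {arm: (0, 0.0) for arm in true_acc}
for _ in range(300):
    arm = ucb_choice(stats)
    reward = true_acc[arm] + random.gauss(0, 0.02)  # noisy validation score
    n, s = stats[arm]
    stats[arm] = (n + 1, s + reward)

best = max(stats, key=lambda a: stats[a][0])        # most-pulled arm
```

In the full AutoML setting this selection happens at every node of the search tree, with Bayesian optimization handling the continuous hyperparameters.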

Locally Linear Unsupervised Feature Selection

no code implementations · ICLR 2019 · Guillaume DOQUET, Michèle Sebag

This paper addresses unsupervised feature selection, aiming to retain the features that best account for the local patterns in the data.

Dimensionality Reduction · feature selection

New Losses for Generative Adversarial Learning

no code implementations · 3 Jul 2018 · Victor Berger, Michèle Sebag

Generative Adversarial Networks (Goodfellow et al., 2014), a major breakthrough in the field of generative modeling, learn a discriminator to estimate some distance between the target and the candidate distributions.

Causal Generative Neural Networks

1 code implementation · ICLR 2018 · Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag

We present Causal Generative Neural Networks (CGNNs) to learn functional causal models from observational data.

Causal Discovery
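
A functional causal model of the kind CGNN learns can be written directly as a sampling program (a toy example with hand-fixed mechanisms; CGNN would learn each mechanism with a small neural network).

```python
import numpy as np

# Toy functional causal model in the spirit of CGNN (illustrative only): each
# variable is a function of its parents plus independent noise, sampled in
# causal order.

rng = np.random.default_rng(2)
n = 5000
e_x, e_y, e_z = rng.normal(size=(3, n))

x = e_x                                  # root cause
y = np.tanh(2.0 * x) + 0.1 * e_y         # y := f_y(x, e_y)
z = 0.5 * x + 0.5 * y + 0.1 * e_z        # z := f_z(x, y, e_z)

# Interventional check: do(x := 0) severs x's influence on y.
y_do = np.tanh(0.0) + 0.1 * e_y
assert abs(np.corrcoef(x, y)[0, 1]) > 0.5      # observational dependence
assert abs(np.corrcoef(x, y_do)[0, 1]) < 0.05  # gone under intervention
```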

Learning Functional Causal Models with Generative Neural Networks

2 code implementations · 15 Sep 2017 · Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag

We introduce a new approach to functional causal modeling from observational data, called Causal Generative Neural Networks (CGNN).

Stochastic Gradient Descent: Going As Fast As Possible But Not Faster

no code implementations · 5 Sep 2017 · Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag

When applied to training deep neural networks, stochastic gradient descent (SGD) often incurs steady progression phases, interrupted by catastrophic episodes in which loss and gradient norm explode.

Change Point Detection
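
A minimal sketch of the idea (a hypothetical safeguard, not the paper's statistical test): watch the loss, and when it explodes relative to the last accepted value, keep the good iterate and cut the learning rate.

```python
import numpy as np

# Hypothetical guard against catastrophic SGD episodes (illustrative only):
# reject steps whose loss blows up, and halve the learning rate instead.

def sgd_with_guard(grad, loss, x0, lr=1.0, steps=50, blowup=2.0):
    x = np.array(x0, float)
    prev = loss(x)
    for _ in range(steps):
        cand = x - lr * grad(x)
        new = loss(cand)
        if new > blowup * prev:        # catastrophic episode detected
            lr *= 0.5                  # go slower, discard the bad step
            continue
        x, prev = cand, new
    return x, lr

# Quadratic with curvature 3: lr = 1.0 overshoots and diverges until halved.
f = lambda v: 1.5 * v[0] ** 2
g = lambda v: np.array([3.0 * v[0]])
x, lr = sgd_with_guard(g, f, [1.0])
assert lr < 1.0 and abs(x[0]) < 1e-3
```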

Multi-dimensional signal approximation with sparse structured priors using split Bregman iterations

no code implementations · 29 Sep 2016 · Yoann Isaac, Quentin Barthélemy, Cédric Gouy-Pailler, Michèle Sebag, Jamal Atif

This paper addresses the structurally-constrained sparse decomposition of multi-dimensional signals onto overcomplete families of vectors, called dictionaries.
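Split Bregman iterations alternate an easy least-squares update with a closed-form soft-thresholding step. A toy 1-D total-variation stand-in for the structured-sparsity setting (illustrative, not the paper's solver):

```python
import numpy as np

# Illustrative split Bregman solver for 1-D total-variation denoising,
#   min_x (lam/2) ||x - y||^2 + ||D x||_1,  D = first-difference operator.

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # soft thresholding

def split_bregman_tv(y, lam=5.0, mu=1.0, iters=200):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n difference matrix
    A = lam * np.eye(n) + mu * D.T @ D       # x-update normal equations
    x = y.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, lam * y + mu * D.T @ (d - b))
        d = shrink(D @ x + b, 1.0 / mu)      # l1 subproblem in closed form
        b = b + D @ x - d                    # Bregman update
    return x

rng = np.random.default_rng(3)
truth = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant signal
y = truth + 0.1 * rng.normal(size=100)
x = split_bregman_tv(y)
assert np.mean((x - truth) ** 2) < np.mean((y - truth) ** 2)
```

Replacing D with an overcomplete dictionary and adding the structural constraints recovers the setting the paper studies.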

Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES

no code implementations · 10 Jun 2014 · Ilya Loshchilov, Marc Schoenauer, Michèle Sebag, Nikolaus Hansen

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems.
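
The simplest relative of CMA-ES, a (1+1)-ES with the classic 1/5th success rule, already shows derivative-free search with online step-size adaptation (an illustrative sketch, not the paper's algorithm).

```python
import numpy as np

# (1+1)-ES with the 1/5th success rule (illustrative): adapt the step size
# online so that roughly one mutation in five improves the current point.

def one_plus_one_es(f, x0, sigma=1.0, iters=800, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    fx = f(x)
    for _ in range(iters):
        cand = x + sigma * rng.normal(size=len(x))
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5 ** 0.25        # success: lengthen the step
        else:
            sigma *= 1.5 ** -0.0625     # failure: shorten it (1/5th rule)
    return x, fx

sphere = lambda v: float(np.dot(v, v))
x, fx = one_plus_one_es(sphere, [3.0, -2.0])
assert fx < 1e-6
```

CMA-ES generalizes this scalar step-size adaptation to a full covariance matrix; the paper adapts the strategy's own hyper-parameters online on top of that.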

Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits

no code implementations · 6 Jan 2014 · Nicolas Galichet, Michèle Sebag, Olivier Teytaud

Motivated by applications in energy management, this paper presents the Multi-Armed Risk-Aware Bandit (MARAB) algorithm.

energy management · Management +1
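
MARAB is built on the conditional value-at-risk (CVaR) of an arm's rewards rather than their mean. A minimal sketch of the empirical CVaR (an illustrative helper only; the bandit combines such a risk measure with an exploration term):

```python
import numpy as np

# Empirical conditional value-at-risk (CVaR): the mean of the worst
# alpha-fraction of observed rewards (illustrative helper, hypothetical data).

def empirical_cvar(rewards, alpha=0.2):
    r = np.sort(np.asarray(rewards, float))      # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(r))))
    return r[:k].mean()

safe = [0.5] * 100                               # constant payoff
risky = [1.0] * 90 + [-2.0] * 10                 # higher mean, heavy left tail

assert np.mean(risky) > np.mean(safe)            # the mean prefers risky
assert empirical_cvar(risky) < empirical_cvar(safe)  # CVaR prefers safe
```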

KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization

no code implementations · 12 Aug 2013 · Ilya Loshchilov, Marc Schoenauer, Michèle Sebag

This weakness is commonly addressed through surrogate optimization, learning an estimate of the objective function, a.k.a. a surrogate model.

Sustainable Cooperative Coevolution with a Multi-Armed Bandit

no code implementations · 10 Apr 2013 · François-Michel De Rainville, Michèle Sebag, Christian Gagné, Marc Schoenauer, Denis Laurendeau

At each iteration, the dynamic multi-armed bandit makes a decision on which species to evolve for a generation, using the history of progress made by the different species to guide the decisions.

Multi-dimensional sparse structured signal approximation using split Bregman iterations

no code implementations · 21 Mar 2013 · Yoann Isaac, Quentin Barthélemy, Jamal Atif, Cédric Gouy-Pailler, Michèle Sebag

An extensive empirical evaluation shows how the proposed approach compares to the state of the art depending on the signal features.

Self-Adaptive Surrogate-Assisted Covariance Matrix Adaptation Evolution Strategy

1 code implementation · 11 Apr 2012 · Ilya Loshchilov, Marc Schoenauer, Michèle Sebag

The resulting algorithm, saACM-ES, adjusts online the lifelength of the current surrogate model (the number of CMA-ES generations before learning a new surrogate) and the surrogate hyper-parameters.

Feature Selection as a One-Player Game

no code implementations · International Conference on Machine Learning 2010 · Romaric Gaudel, Michèle Sebag

This paper formalizes Feature Selection as a Reinforcement Learning problem, leading to a provably optimal though intractable selection policy.

Automated Feature Engineering · feature selection +1
