Search Results for author: Gilles Louppe

Found 59 papers, 43 papers with code

Harnessing machine learning for accurate treatment of overlapping opacity species in general circulation models

1 code implementation • 1 Nov 2023 • Aaron David Schneider, Paul Mollière, Gilles Louppe, Ludmila Carone, Uffe Gråe Jørgensen, Leen Decin, Christiane Helling

For this study, we specifically examined the coupling between chemistry and radiation in GCMs and compared different methods for the mixing of opacities of different chemical species in the correlated-k assumption, when equilibrium chemistry cannot be assumed.

Robust Ocean Subgrid-Scale Parameterizations Using Fourier Neural Operators

1 code implementation • 4 Oct 2023 • Victor Mangeleer, Gilles Louppe

In climate simulations, small-scale processes shape ocean dynamics but remain computationally expensive to resolve directly.
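The entry above rests on Fourier neural operators. As a minimal sketch of the core spectral-convolution step (assumed from the general FNO literature, not this paper's exact architecture; `spectral_conv_1d` and its weights are purely hypothetical), one Fourier layer transforms to frequency space, keeps only the lowest modes, applies a learned weight per retained mode, and transforms back:

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_conv_1d(u, weights, modes):
    # Forward FFT of the field, act on the lowest `modes` frequencies
    # only, then invert back to physical space.
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights * u_hat[:modes]
    return np.fft.irfft(out_hat, n=u.shape[0])

u = rng.normal(size=64)                            # a 1-D field sample
w = rng.normal(size=8) + 1j * rng.normal(size=8)   # hypothetical learned weights
v = spectral_conv_1d(u, w, modes=8)
print(v.shape)
```

Truncating to a fixed number of modes is what makes the layer resolution-independent: the same weights apply whatever the grid size.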

Score-based Data Assimilation for a Two-Layer Quasi-Geostrophic Model

1 code implementation • 3 Oct 2023 • François Rozet, Gilles Louppe

Data assimilation addresses the problem of identifying plausible state trajectories of dynamical systems given noisy or incomplete observations.

Dynamic NeRFs for Soccer Scenes

1 code implementation • 13 Sep 2023 • Sacha Lewin, Maxime Vandegar, Thomas Hoyoux, Olivier Barnich, Gilles Louppe

The long-standing problem of novel view synthesis has many applications, notably in sports broadcasting.

Novel View Synthesis

Score-based Data Assimilation

2 code implementations • NeurIPS 2023 • François Rozet, Gilles Louppe

Data assimilation, in its most comprehensive form, addresses the Bayesian inverse problem of identifying plausible state trajectories that explain noisy or incomplete observations of stochastic dynamical systems.
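Score-based data assimilation samples plausible trajectories by following the score (gradient of the log-density). As a toy sketch of the sampling side only, assuming the analytic score of a standard Gaussian in place of the learned score network used in the paper, unadjusted Langevin dynamics drifts an ensemble toward the target:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Stand-in for a learned score model: grad log N(0, I) = -x.
    return -x

x = rng.normal(size=(256, 2))  # initial ensemble of 2-D states
eps = 1e-2
for _ in range(500):
    # Unadjusted Langevin step: drift along the score plus noise.
    x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

print(x.shape)
```

In the actual method the score of the trajectory posterior is approximated with a neural network conditioned on the observations; this sketch only illustrates why a score is all the sampler needs.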

Policy Gradient Algorithms Implicitly Optimize by Continuation

no code implementations • 11 May 2023 • Adrien Bolland, Gilles Louppe, Damien Ernst

First, we formulate direct policy optimization in the optimization by continuation framework.

Balancing Simulation-based Inference for Conservative Posteriors

1 code implementation • 21 Apr 2023 • Arnaud Delaunoy, Benjamin Kurt Miller, Patrick Forré, Christoph Weniger, Gilles Louppe

We show empirically that the balanced versions tend to produce conservative posterior approximations on a wide variety of benchmarks.

Implicit representation priors meet Riemannian geometry for Bayesian robotic grasping

no code implementations • 18 Apr 2023 • Norman Marlier, Julien Gustin, Olivier Brüls, Gilles Louppe

Robotic grasping in highly noisy environments presents complex challenges, especially with limited prior knowledge about the scene.

Bayesian Inference • Robotic Grasping

Graph-informed simulation-based inference for models of active matter

no code implementations • 5 Apr 2023 • Namid R. Stillman, Silke Henkes, Roberto Mayor, Gilles Louppe

Moreover, we demonstrate that a small number (from one to three) of snapshots of the system can be used for parameter inference and that this graph-informed approach outperforms typical metrics such as the average velocity or mean square displacement of the system.

Simulation-based Bayesian inference for robotic grasping

no code implementations • 10 Mar 2023 • Norman Marlier, Olivier Brüls, Gilles Louppe

General robotic grippers are challenging to control because of their rich nonsmooth contact dynamics and the many sources of uncertainties due to the environment or sensor noise.

Bayesian Inference • Robotic Grasping

Adaptive Self-Training for Object Detection

1 code implementation • 7 Dec 2022 • Renaud Vandeghen, Gilles Louppe, Marc Van Droogenbroeck

In this work, we introduce our method Adaptive Self-Training for Object Detection (ASTOD), which is a simple yet effective teacher-student method.

Object Detection +2

Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation

1 code implementation • 29 Aug 2022 • Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel, Gilles Louppe

In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution.
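The balancing idea in BNRE can be sketched numerically: a ratio-estimation classifier d is evaluated on pairs from the joint and on pairs with the dependence broken, and the two expectations are pushed to sum to one. The sketch below is a toy, assuming a fixed linear logit in place of the neural network and synthetic placeholder data:

```python
import numpy as np

rng = np.random.default_rng(2)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def logit(theta, x):
    # Hypothetical classifier logit d(theta, x); BNRE learns this.
    return theta * x

theta, x_joint = rng.normal(size=(2, 1000))   # placeholder (theta, x) pairs
x_marginal = rng.permutation(x_joint)         # shuffle x to break dependence

d_joint = sigmoid(logit(theta, x_joint))
d_marginal = sigmoid(logit(theta, x_marginal))

# Balancing condition: the two expectations should sum to one; the
# squared deviation is added to the usual NRE loss as a regularizer.
penalty = (d_joint.mean() + d_marginal.mean() - 1.0) ** 2
print(penalty)
```

A balanced classifier shares the Bayes optimal solution with standard NRE, which is why the regularizer biases toward conservativeness without changing the target.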

Robust Hybrid Learning With Expert Augmentation

1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen

Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.

Data Augmentation

SAE: Sequential Anchored Ensembles

1 code implementation • 30 Dec 2021 • Arnaud Delaunoy, Gilles Louppe

Anchored ensembles approximate the posterior by training an ensemble of neural networks on anchored losses designed for the optima to follow the Bayesian posterior.
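The anchored-loss idea admits a closed-form illustration for a linear model (the paper uses neural networks; this toy data and `fit_anchored` helper are illustrative assumptions): each ensemble member is regularized toward its own anchor drawn from the prior, so the spread of the fitted optima tracks the posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: y = 2x + noise
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)

def fit_anchored(anchor, lam=0.1):
    # Closed-form minimizer of ||y - Xw||^2 + lam * ||w - anchor||^2:
    # an ordinary ridge solve, but centered on this member's anchor.
    A = X.T @ X + lam * np.eye(1)
    b = X.T @ y + lam * anchor
    return np.linalg.solve(A, b)

anchors = rng.normal(size=(10, 1))   # one prior draw per ensemble member
ensemble = np.array([fit_anchored(a) for a in anchors])
print(ensemble.mean(), ensemble.std())
```

The sequential variant proposed in SAE chains members so each is initialized from the previous optimum, cutting training cost relative to training every member from scratch.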

From global to local MDI variable importances for random forests and when they are Shapley values

1 code implementation • NeurIPS 2021 • Antonio Sutera, Gilles Louppe, Van Anh Huynh-Thu, Louis Wehenkel, Pierre Geurts

Random forests have been widely used for their ability to provide so-called importance measures, which give insight at a global (per dataset) level on the relevance of input variables to predict a certain output.
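The Mean Decrease of Impurity (MDI) importance studied here credits each split with the impurity it removes. A single contribution can be computed by hand (a minimal sketch with made-up labels, not the paper's weighting across trees and nodes):

```python
import numpy as np

def gini(y):
    # Gini impurity: 1 - sum of squared class proportions.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

y_parent = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_left, y_right = y_parent[:4], y_parent[4:]   # split chosen by the tree

# MDI contribution: parent impurity minus the weighted child impurities.
n = len(y_parent)
decrease = gini(y_parent) - (len(y_left) / n * gini(y_left)
                             + len(y_right) / n * gini(y_right))
print(decrease)
```

Summing such decreases over all nodes splitting on a variable, weighted by node size and averaged over trees, gives the global MDI importance; the paper localizes this to individual samples and relates it to Shapley values.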

A Trust Crisis In Simulation-Based Inference? Your Posterior Approximations Can Be Unfaithful

4 code implementations • 13 Oct 2021 • Joeri Hermans, Arnaud Delaunoy, François Rozet, Antoine Wehenkel, Volodimir Begy, Gilles Louppe

We present extensive empirical evidence showing that current Bayesian simulation-based inference algorithms can produce computationally unfaithful posterior approximations.

Arbitrary Marginal Neural Ratio Estimation for Simulation-based Inference

1 code implementation • 1 Oct 2021 • François Rozet, Gilles Louppe

In many areas of science, complex phenomena are modeled by stochastic parametric simulators, often featuring high-dimensional parameter spaces and intractable likelihoods.

Bayesian Inference

Simulation-based Bayesian inference for multi-fingered robotic grasping

no code implementations • 29 Sep 2021 • Norman Marlier, Olivier Brüls, Gilles Louppe

Multi-fingered robotic grasping is an undeniable stepping stone to universal picking and dexterous manipulation.

Bayesian Inference • Robotic Grasping

Truncated Marginal Neural Ratio Estimation

2 code implementations • NeurIPS 2021 • Benjamin Kurt Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger

Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood.

Diffusion Priors In Variational Autoencoders

no code implementations • ICML Workshop INNF 2021 • Antoine Wehenkel, Gilles Louppe

Among likelihood-based approaches for deep generative modelling, variational autoencoders (VAEs) offer scalable amortized posterior inference and fast sampling.

Distributional Reinforcement Learning with Unconstrained Monotonic Neural Networks

1 code implementation • 6 Jun 2021 • Thibaut Théate, Antoine Wehenkel, Adrien Bolland, Gilles Louppe, Damien Ernst

The results highlight the main strengths and weaknesses associated with each probability metric together with an important limitation of the Wasserstein distance.

Distributional Reinforcement Learning • Reinforcement Learning +2

HNPE: Leveraging Global Parameters for Neural Posterior Estimation

1 code implementation • NeurIPS 2021 • Pedro L. C. Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort

Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method.

Towards constraining warm dark matter with stellar streams through neural simulation-based inference

1 code implementation • 30 Nov 2020 • Joeri Hermans, Nilanjan Banik, Christoph Weniger, Gianfranco Bertone, Gilles Louppe

A statistical analysis of the observed perturbations in the density of stellar streams can in principle set stringent constraints on the mass function of dark matter subhaloes, which in turn can be used to constrain the mass of the dark matter particle.

Bayesian Inference

Simulation-efficient marginal posterior estimation with swyft: stop wasting your precious time

1 code implementation • 27 Nov 2020 • Benjamin Kurt Miller, Alex Cole, Gilles Louppe, Christoph Weniger

We present algorithms (a) for nested neural likelihood-to-evidence ratio estimation, and (b) for simulation reuse via an inhomogeneous Poisson point process cache of parameters and corresponding simulations.

Astronomy • Bayesian Inference

Neural Empirical Bayes: Source Distribution Estimation and its Applications to Simulation-Based Inference

1 code implementation • 11 Nov 2020 • Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe

We revisit empirical Bayes in the absence of a tractable likelihood function, as is typical in scientific domains relying on computer simulations.

Lightning-Fast Gravitational Wave Parameter Inference through Neural Amortization

no code implementations • 24 Oct 2020 • Arnaud Delaunoy, Antoine Wehenkel, Tanja Hinderer, Samaya Nissanke, Christoph Weniger, Andrew R. Williamson, Gilles Louppe

Gravitational waves from compact binaries measured by the LIGO and Virgo detectors are routinely analyzed using Markov Chain Monte Carlo sampling algorithms.

Graphical Normalizing Flows

3 code implementations • 3 Jun 2020 • Antoine Wehenkel, Gilles Louppe

From this new perspective, we propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure.

Density Estimation

You say Normalizing Flows I see Bayesian Networks

no code implementations • 1 Jun 2020 • Antoine Wehenkel, Gilles Louppe

Normalizing flows have emerged as an important family of deep neural networks for modelling complex probability distributions.
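The building block both flows and Bayesian networks share is the change-of-variables rule: for an invertible z = f(x), log p(x) = log p_z(f(x)) + log|f'(x)|. A minimal sketch with a simple affine map standing in for a learned flow step (the map and its parameters are illustrative assumptions):

```python
import numpy as np

def affine_flow(x, a=2.0, b=1.0):
    # One invertible step z = a*x + b with its log |Jacobian|.
    z = a * x + b
    log_det = np.log(abs(a))
    return z, log_det

def log_prob(x):
    # Change of variables: base density at f(x) plus the log-Jacobian.
    z, log_det = affine_flow(x)
    base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard normal base
    return base + log_det

# z = 2x + 1 is standard normal, so x is exactly N(-1/2, 1/4).
print(log_prob(-0.5))
```

Stacking such steps, and restricting which inputs each output may depend on, is what induces the Bayesian-network structure the paper draws out.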

The frontier of simulation-based inference

no code implementations • 4 Nov 2019 • Kyle Cranmer, Johann Brehmer, Gilles Louppe

Many domains of science have developed complex simulations to describe phenomena of interest.

Mining for Dark Matter Substructure: Inferring subhalo population properties from strong lenses with machine learning

3 code implementations • 4 Sep 2019 • Johann Brehmer, Siddharth Mishra-Sharma, Joeri Hermans, Gilles Louppe, Kyle Cranmer

The subtle and unique imprint of dark matter substructure on extended arcs in strong lensing systems contains a wealth of information about the properties and distribution of dark matter on small scales and, consequently, about the underlying particle physics.

Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning algorithms

3 code implementations • 1 Sep 2019 • Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering

This paper makes one step forward towards characterizing a new family of model-free Deep Reinforcement Learning (DRL) algorithms.

Unconstrained Monotonic Neural Networks

2 code implementations • NeurIPS 2019 • Antoine Wehenkel, Gilles Louppe

Monotonic neural networks have recently been proposed as a way to define invertible transformations.

Density Estimation • Variational Inference
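The UMNN trick is that any function whose derivative is strictly positive is monotone, so one can parameterize the derivative freely and integrate it numerically. A minimal sketch, with a fixed positive function standing in for the derivative network and a plain trapezoid rule standing in for the paper's Clenshaw-Curtis quadrature:

```python
import numpy as np

softplus = lambda z: np.log1p(np.exp(z))

def derivative_net(t):
    # Stand-in for the learned network: arbitrary, but always positive.
    return softplus(np.sin(3 * t) + 0.5 * t)

def monotone_f(x, steps=200):
    # f(x) = integral from 0 to x of derivative_net(t) dt, f' > 0
    # everywhere, so f is strictly increasing by construction.
    t = np.linspace(0.0, x, steps)
    g = derivative_net(t)
    dt = t[1] - t[0] if steps > 1 else 0.0
    return dt * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoid rule

xs = np.linspace(-2, 2, 21)
ys = np.array([monotone_f(x) for x in xs])
print(np.all(np.diff(ys) > 0))
```

Because monotone maps are invertible, such a block can serve directly as a flow transformation, with the derivative network itself giving the Jacobian term.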

Effective LHC measurements with matrix elements and machine learning

no code implementations • 4 Jun 2019 • Johann Brehmer, Kyle Cranmer, Irina Espejo, Felix Kling, Gilles Louppe, Juan Pavez

One major challenge for the legacy measurements at the LHC is that the likelihood function is not tractable when the collected data is high-dimensional and the detector response has to be modeled.

Density Estimation

Likelihood-free MCMC with Amortized Approximate Ratio Estimators

5 code implementations • ICML 2020 • Joeri Hermans, Volodimir Begy, Gilles Louppe

This work introduces a novel approach to address the intractability of the likelihood and the marginal model.
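The key observation behind ratio-driven MCMC is that Metropolis-Hastings only ever needs posterior *ratios*, which factor into likelihood-to-evidence ratios times prior ratios. A toy sketch, assuming an analytic log-ratio for a 1-D Gaussian problem in place of the amortized neural ratio estimator:

```python
import numpy as np

rng = np.random.default_rng(6)

def log_ratio(theta, x_obs):
    # Stand-in for the learned log r(x|theta): here the exact
    # log-likelihood (up to a constant) of x_obs ~ N(theta, 1).
    return -0.5 * (x_obs - theta) ** 2

def log_prior(theta):
    return -0.5 * theta ** 2   # standard normal prior, up to a constant

x_obs, theta = 1.0, 0.0
chain = []
for _ in range(5000):
    prop = theta + 0.5 * rng.normal()           # random-walk proposal
    log_a = (log_ratio(prop, x_obs) + log_prior(prop)
             - log_ratio(theta, x_obs) - log_prior(theta))
    if np.log(rng.uniform()) < log_a:           # accept/reject
        theta = prop
    chain.append(theta)

print(np.mean(chain[1000:]))  # posterior here is N(0.5, 0.5)
```

Amortization means the ratio network is trained once and then reused for every observation and every MCMC step, with no further simulator calls.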

Recurrent machines for likelihood-free inference

1 code implementation • 30 Nov 2018 • Arthur Pesah, Antoine Wehenkel, Gilles Louppe

Likelihood-free inference is concerned with the estimation of the parameters of a non-differentiable stochastic simulator that best reproduce real observations.

Deep Quality-Value (DQV) Learning

3 code implementations • 30 Sep 2018 • Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering

We introduce a novel Deep Reinforcement Learning (DRL) algorithm called Deep Quality-Value (DQV) Learning.

Atari Games • Q-Learning +2

Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model

3 code implementations • NeurIPS 2019 • Atılım Güneş Baydin, Lukas Heinrich, Wahid Bhimji, Lei Shao, Saeid Naderiparizi, Andreas Munk, Jialin Liu, Bradley Gram-Hansen, Gilles Louppe, Lawrence Meadows, Philip Torr, Victor Lee, Prabhat, Kyle Cranmer, Frank Wood

We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.

Probabilistic Programming

Machine Learning in High Energy Physics Community White Paper

no code implementations • 8 Jul 2018 • Kim Albertsson, Piero Altoe, Dustin Anderson, John Anderson, Michael Andrews, Juan Pedro Araque Espinosa, Adam Aurisano, Laurent Basara, Adrian Bevan, Wahid Bhimji, Daniele Bonacorsi, Bjorn Burkle, Paolo Calafiura, Mario Campanelli, Louis Capps, Federico Carminati, Stefano Carrazza, Yi-fan Chen, Taylor Childers, Yann Coadou, Elias Coniavitis, Kyle Cranmer, Claire David, Douglas Davis, Andrea De Simone, Javier Duarte, Martin Erdmann, Jonas Eschle, Amir Farbin, Matthew Feickert, Nuno Filipe Castro, Conor Fitzpatrick, Michele Floris, Alessandra Forti, Jordi Garra-Tico, Jochen Gemmler, Maria Girone, Paul Glaysher, Sergei Gleyzer, Vladimir Gligorov, Tobias Golling, Jonas Graw, Lindsey Gray, Dick Greenwood, Thomas Hacker, John Harvey, Benedikt Hegner, Lukas Heinrich, Ulrich Heintz, Ben Hooberman, Johannes Junggeburth, Michael Kagan, Meghan Kane, Konstantin Kanishchev, Przemysław Karpiński, Zahari Kassabov, Gaurav Kaul, Dorian Kcira, Thomas Keck, Alexei Klimentov, Jim Kowalkowski, Luke Kreczko, Alexander Kurepin, Rob Kutschke, Valentin Kuznetsov, Nicolas Köhler, Igor Lakomov, Kevin Lannon, Mario Lassnig, Antonio Limosani, Gilles Louppe, Aashrita Mangu, Pere Mato, Narain Meenakshi, Helge Meinhard, Dario Menasce, Lorenzo Moneta, Seth Moortgat, Mark Neubauer, Harvey Newman, Sydney Otten, Hans Pabst, Michela Paganini, Manfred Paulini, Gabriel Perdue, Uzziel Perez, Attilio Picazio, Jim Pivarski, Harrison Prosper, Fernanda Psihas, Alexander Radovic, Ryan Reece, Aurelius Rinkevicius, Eduardo Rodrigues, Jamal Rorie, David Rousseau, Aaron Sauers, Steven Schramm, Ariel Schwartzman, Horst Severini, Paul Seyfert, Filip Siroky, Konstantin Skazytkin, Mike Sokoloff, Graeme Stewart, Bob Stienen, Ian Stockdale, Giles Strong, Wei Sun, Savannah Thais, Karen Tomko, Eli Upfal, Emanuele Usai, Andrey Ustyuzhanin, Martin Vala, Justin Vasel, Sofia Vallecorsa, Mauro Verzetti, Xavier Vilasís-Cardona, Jean-Roch Vlimant, Ilija Vukotic, Sean-Jiun Wang, Gordon Watts, Michael Williams, Wenjing Wu, Stefan Wunsch, Kun Yang, Omar Zapata

In this document we discuss promising future research and development areas for machine learning in particle physics.

Gradient Energy Matching for Distributed Asynchronous Gradient Descent

2 code implementations • 22 May 2018 • Joeri Hermans, Gilles Louppe

Distributed asynchronous SGD has become widely used for deep learning in large-scale systems, but remains notorious for its instability when increasing the number of workers.

Constraining Effective Field Theories with Machine Learning

1 code implementation • 30 Apr 2018 • Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez

We present powerful new analysis techniques to constrain effective field theories at the LHC.

A Guide to Constraining Effective Field Theories with Machine Learning

2 code implementations • 30 Apr 2018 • Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez

We develop, discuss, and compare several inference techniques to constrain theory parameters in collider experiments.

Adversarial Variational Optimization of Non-Differentiable Simulators

2 code implementations • 22 Jul 2017 • Gilles Louppe, Joeri Hermans, Kyle Cranmer

We adapt the training procedure of generative adversarial networks by replacing the differentiable generative network with a domain-specific simulator.

QCD-Aware Recursive Neural Networks for Jet Physics

5 code implementations • 2 Feb 2017 • Gilles Louppe, Kyunghyun Cho, Cyril Becot, Kyle Cranmer

Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images.

Clustering

Learning to Pivot with Adversarial Networks

5 code implementations • NeurIPS 2017 • Gilles Louppe, Michael Kagan, Kyle Cranmer

Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing.

Domain Adaptation • Fairness

Context-dependent feature analysis with random forests

no code implementations • 12 May 2016 • Antonio Sutera, Gilles Louppe, Vân Anh Huynh-Thu, Louis Wehenkel, Pierre Geurts

In many cases, feature selection is often more complicated than identifying a single subset of input variables that would together explain the output.

Feature Selection

Ethnicity sensitive author disambiguation using semi-supervised learning

1 code implementation • 31 Aug 2015 • Gilles Louppe, Hussein Al-Natsheh, Mateusz Susik, Eamonn Maguire

Author name disambiguation in bibliographic databases is the problem of grouping together scientific publications written by the same person, accounting for potential homonyms and/or synonyms.

Blocking • Clustering

Approximating Likelihood Ratios with Calibrated Discriminative Classifiers

2 code implementations • 6 Jun 2015 • Kyle Cranmer, Juan Pavez, Gilles Louppe

This leads to a new machine learning-based approach to likelihood-free inference that is complementary to Approximate Bayesian Computation, and which does not require a prior on the model parameters.

Dimensionality Reduction
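The likelihood-ratio trick behind this paper: train a classifier s(x) to separate samples from two densities, then p1(x)/p0(x) ≈ s(x)/(1 − s(x)). A self-contained sketch, with plain gradient-descent logistic regression standing in for the calibrated classifier and two Gaussians whose exact ratio is known:

```python
import numpy as np

rng = np.random.default_rng(4)

x0 = rng.normal(0.0, 1.0, 4000)   # samples from p0 = N(0, 1)
x1 = rng.normal(1.0, 1.0, 4000)   # samples from p1 = N(1, 1)
x = np.concatenate([x0, x1])
t = np.concatenate([np.zeros(4000), np.ones(4000)])

# Logistic regression by gradient descent on the cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((s - t) * x)
    b -= 0.1 * np.mean(s - t)

# For these Gaussians the exact log-ratio is x - 1/2, i.e. w = 1, b = -1/2,
# so the fitted log-odds w*x + b directly estimate log p1(x)/p0(x).
print(w, b)
```

Calibration matters because the ratio estimate inherits any bias in s; the paper studies exactly how to calibrate the classifier so the recovered ratios are reliable.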

Understanding Random Forests: From Theory to Practice

2 code implementations • 28 Jul 2014 • Gilles Louppe

In the second part of this work, we analyse and discuss the interpretability of random forests in the eyes of variable importance measures.

Simple connectome inference from partial correlation statistics in calcium imaging

1 code implementation • 30 Jun 2014 • Antonio Sutera, Arnaud Joly, Vincent François-Lavet, Zixiao Aaron Qiu, Gilles Louppe, Damien Ernst, Pierre Geurts

In this work, we propose a simple yet effective solution to the problem of connectome inference in calcium imaging data.
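The partial-correlation statistic at the heart of this approach can be sketched directly from the precision matrix (the paper's full pipeline also includes signal-processing steps this sketch omits): invert the empirical covariance and normalize to score each pair of neurons given all the others.

```python
import numpy as np

rng = np.random.default_rng(5)

def partial_correlations(series):
    # Partial correlation between variables i and j, conditioning on
    # all others: -P_ij / sqrt(P_ii * P_jj) for precision matrix P.
    prec = np.linalg.inv(np.cov(series))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

series = rng.normal(size=(5, 2000))   # 5 "neurons", 2000 time steps
pc = partial_correlations(series)
print(pc.shape)
```

Unlike raw correlation, partial correlation suppresses indirect links mediated by third neurons, which is why it suits connectome inference.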

Understanding variable importances in forests of randomized trees

no code implementations • NeurIPS 2013 • Gilles Louppe, Louis Wehenkel, Antonio Sutera, Pierre Geurts

Despite growing interest and practical use in various scientific areas, variable importances derived from tree-based ensemble methods are not well understood from a theoretical point of view.
