Search Results for author: Jean-Philippe Vert

Found 32 papers, 16 papers with code

Supervised Quantile Normalization for Low Rank Matrix Factorization

no code implementations ICML 2020 Marco Cuturi, Olivier Teboul, Jonathan Niles-Weed, Jean-Philippe Vert

Low rank matrix factorization is a fundamental building block in machine learning, used for instance to summarize gene expression profile data or word-document counts.
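
For context on the building block mentioned above (and not on the paper's supervised quantile normalization itself), here is a minimal sketch of a rank-$k$ approximation via truncated SVD on a synthetic count matrix; the data, rank and sizes are arbitrary choices for the example.

```python
import numpy as np

# Minimal sketch: rank-k approximation of a data matrix via truncated SVD,
# the basic low-rank building block referred to above.
rng = np.random.default_rng(0)
X = rng.poisson(lam=3.0, size=(100, 50)).astype(float)   # count-like data

k = 5
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]               # best rank-k approximation

print("relative rank-k reconstruction error:",
      np.linalg.norm(X - X_k) / np.linalg.norm(X))
```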

Regression as Classification: Influence of Task Formulation on Neural Network Features

1 code implementation 10 Nov 2022 Lawrence Stewart, Francis Bach, Quentin Berthet, Jean-Philippe Vert

Neural networks can be trained to solve regression problems by using gradient-based methods to minimize the square loss.

regression
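
As a minimal illustration of the square-loss baseline described above (not of the paper's classification reformulation), the sketch below fits a one-hidden-layer network to a toy 1-D regression problem by full-batch gradient descent on the mean squared error; the architecture, learning rate and data are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy regression target

# One hidden ReLU layer, trained by gradient descent on the squared loss.
h = 32
W1 = rng.normal(size=(1, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=1.0 / np.sqrt(h), size=(h, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(2000):
    A = np.maximum(X @ W1 + b1, 0.0)          # hidden activations
    pred = (A @ W2 + b2)[:, 0]
    err = pred - y
    g_pred = 2 * err[:, None] / len(y)        # gradient of the mean squared error
    gW2 = A.T @ g_pred; gb2 = g_pred.sum(0)
    gA = g_pred @ W2.T
    gZ = gA * (A > 0)                          # backprop through the ReLU
    gW1 = X.T @ gZ; gb1 = gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", np.mean(err ** 2))
```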

Scaling ResNets in the Large-depth Regime

1 code implementation 14 Jun 2022 Pierre Marion, Adeline Fermanian, Gérard Biau, Jean-Philippe Vert

Under standard initializations, the only non-trivial dynamics is for $\alpha_L = 1/\sqrt{L}$ (other choices lead either to explosion or to identity mapping).
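
A small numerical illustration of this scaling behaviour, under simplifying assumptions (an untrained residual recursion $h_{l+1} = h_l + \alpha_L\, V_l \tanh(W_l h_l)$ with i.i.d. Gaussian weights, arbitrary width and depth); this is a sketch, not the paper's experimental setup.

```python
import numpy as np

def deep_resnet_norm(alpha, L=1000, d=64, seed=0):
    """Output norm of an untrained residual recursion
    h_{l+1} = h_l + alpha * V_l tanh(W_l h_l) with i.i.d. Gaussian weights."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=d) / np.sqrt(d)
    for _ in range(L):
        W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))
        V = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))
        h = h + alpha * V @ np.tanh(W @ h)
    return np.linalg.norm(h)

L = 1000
for alpha, label in [(1.0, "alpha = 1"),
                     (1.0 / np.sqrt(L), "alpha = 1/sqrt(L)"),
                     (1.0 / L, "alpha = 1/L")]:
    # Compare explosion, non-trivial dynamics, and near-identity behaviour.
    print(label, deep_resnet_norm(alpha, L=L))
```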

Reverse-Complement Equivariant Networks for DNA Sequences

1 code implementation NeurIPS 2021 Vincent Mallet, Jean-Philippe Vert

As DNA sequencing technologies keep improving in scale and cost, there is a growing need to develop machine learning models to analyze DNA sequences, e.g., to decipher regulatory signals from DNA fragments bound by a particular protein of interest.

BIG-bench Machine Learning

Framing RNN as a kernel method: A neural ODE approach

1 code implementation NeurIPS 2021 Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau

Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of an RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature.
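
To make the signature feature set concrete, here is a minimal sketch computing the first two signature levels of a piecewise-linear path; it uses the standard iterated-integral formula for piecewise-linear paths and only illustrates the feature map, not the paper's RNN analysis.

```python
import numpy as np

def signature_levels_1_2(path):
    """First two signature levels of a piecewise-linear path of shape (T+1, d).

    Level 1 is the total increment. Level 2 collects the iterated integrals,
    which for a piecewise-linear path equal
    sum_{s<t} D_s[i] D_t[j] + 0.5 * sum_t D_t[i] D_t[j], with D_t the increments.
    """
    D = np.diff(path, axis=0)               # increments, shape (T, d)
    s1 = D.sum(axis=0)                       # level-1 signature, shape (d,)
    prev = np.cumsum(D, axis=0) - D          # D_1 + ... + D_{t-1}
    s2 = prev.T @ D + 0.5 * D.T @ D          # level-2 signature, shape (d, d)
    return s1, s2

# Example: a short 2-D path.
path = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0], [2.0, 1.0]])
s1, s2 = signature_levels_1_2(path)
print(s1)
print(s2)
```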

Efficient and Modular Implicit Differentiation

1 code implementation NeurIPS 2021 Mathieu Blondel, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, Jean-Philippe Vert

In this paper, we propose automatic implicit differentiation, an efficient and modular approach for implicit differentiation of optimization problems.

Meta-Learning
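
A hand-written toy version of implicit differentiation, to illustrate the underlying idea (the paper's contribution is making this automatic and modular): differentiate the argmin of a parametric quadratic through its optimality condition via the implicit function theorem. The quadratic and the matrix sizes are arbitrary choices for the example.

```python
import numpy as np

# Differentiate  x*(theta) = argmin_x 0.5 x^T A x - theta^T x
# through its optimality condition F(x, theta) = A x - theta = 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); A = A @ A.T + np.eye(3)   # random SPD matrix
theta = rng.normal(size=3)

x_star = np.linalg.solve(A, theta)                      # inner solver output
print(np.allclose(A @ x_star, theta))                   # optimality holds: True

# Implicit function theorem: dx*/dtheta = -(dF/dx)^{-1} dF/dtheta = A^{-1}.
dF_dx = A
dF_dtheta = -np.eye(3)
jac = -np.linalg.solve(dF_dx, dF_dtheta)
print(np.allclose(jac, np.linalg.inv(A)))               # True
```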

Learning with Differentiable Perturbed Optimizers

no code implementations NeurIPS 2020 Quentin Berthet, Mathieu Blondel, Olivier Teboul, Marco Cuturi, Jean-Philippe Vert, Francis Bach

Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g., sorting, picking closest neighbors, or shortest paths).

Structured Prediction

Differentiable Divergences Between Time Series

1 code implementation 16 Oct 2020 Mathieu Blondel, Arthur Mensch, Jean-Philippe Vert

Soft-DTW addresses these issues, but it is not a positive definite divergence: due to the bias introduced by entropic regularization, it can be negative and it is not minimized when the time series are equal.

Dynamic Time Warping, Time Series +3
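
A simplified sketch of soft-DTW and of the debiased divergence obtained by subtracting the self-terms, restricted to 1-D series with a squared-difference cost; a naive O(nm) dynamic program for illustration, not the paper's implementation.

```python
import numpy as np

def softmin(args, gamma):
    """Soft minimum: -gamma * log sum exp(-a_i / gamma)."""
    a = np.array(args) / -gamma
    m = a.max()
    return -gamma * (m + np.log(np.exp(a - m).sum()))

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW between 1-D series x and y with squared-difference cost."""
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i, j] = cost + softmin([R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]], gamma)
    return R[n, m]

def soft_dtw_divergence(x, y, gamma=1.0):
    """Debiased divergence: subtract the self-terms so that D(x, x) = 0."""
    return soft_dtw(x, y, gamma) - 0.5 * (soft_dtw(x, x, gamma) + soft_dtw(y, y, gamma))

x = np.array([0.0, 1.0, 2.0, 1.0])
y = np.array([0.0, 1.0, 2.0, 1.0])
# The divergence is exactly 0 for equal inputs, unlike plain soft-DTW.
print(soft_dtw(x, y), soft_dtw_divergence(x, y))
```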

On Mixup Regularization

1 code implementation 10 Jun 2020 Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert

We show that Mixup can be interpreted as a standard empirical risk minimization estimator subject to a combination of data transformation and random perturbation of the transformed data.

Ranked #75 on Image Classification on ObjectNet (using extra training data)

Data Augmentation, Image Classification
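
For reference, a minimal sketch of the vanilla mixup recipe that the paper reinterprets (convex combinations of random example pairs and of their labels); the Beta parameter and data shapes are arbitrary illustrative choices.

```python
import numpy as np

def mixup_batch(X, Y, alpha=0.2, rng=None):
    """Vanilla mixup: convex-combine random pairs of examples and labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # mixing coefficient
    perm = rng.permutation(len(X))             # random partner for each example
    X_mix = lam * X + (1 - lam) * X[perm]
    Y_mix = lam * Y + (1 - lam) * Y[perm]      # works for one-hot or continuous labels
    return X_mix, Y_mix

X = np.random.default_rng(0).normal(size=(8, 4))
Y = np.eye(2)[np.random.default_rng(1).integers(0, 2, size=8)]   # one-hot labels
print(mixup_batch(X, Y)[0].shape)
```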

Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design

1 code implementation 26 Apr 2020 Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, Jean-Philippe Vert

Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting (tests can be mistaken) and decide adaptively (looking at past results) which groups to test next, so as to converge to a good detection as quickly and with as few tests as possible.

Experimental Design

MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent Variable Models

1 code implementation 25 Feb 2020 Imke Mayer, Julie Josse, Félix Raimundo, Jean-Philippe Vert

Inferring causal effects of a treatment, intervention or policy from observational data is central to many applications.

Causal Inference, Imputation

Learning with Differentiable Perturbed Optimizers

2 code implementations 20 Feb 2020 Quentin Berthet, Mathieu Blondel, Olivier Teboul, Marco Cuturi, Jean-Philippe Vert, Francis Bach

Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g., sorting, picking closest neighbors, or shortest paths).

Structured Prediction
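
A minimal Monte Carlo sketch of the perturbation idea described above: replace a hard argmax by its expectation under random noise, which yields a smooth decision that can be differentiated in expectation. The Gaussian noise, its scale and the sample count are illustrative choices, not the paper's implementation.

```python
import numpy as np

def perturbed_argmax(theta, sigma=1.0, n_samples=1000, rng=None):
    """Monte Carlo estimate of E[one_hot_argmax(theta + sigma * Z)], Z ~ N(0, I).

    Averaging the discrete argmax over random perturbations gives a smooth
    relaxation of the hard decision."""
    rng = rng or np.random.default_rng(0)
    Z = rng.normal(size=(n_samples, len(theta)))
    idx = np.argmax(theta + sigma * Z, axis=1)      # hard decision per sample
    onehot = np.eye(len(theta))[idx]
    return onehot.mean(axis=0)                      # smoothed "soft" argmax

theta = np.array([1.0, 2.0, 1.5])
print(perturbed_argmax(theta, sigma=0.5))   # approaches a one-hot vector as sigma -> 0
```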

Supervised Quantile Normalization for Low-rank Matrix Approximation

no code implementations 8 Feb 2020 Marco Cuturi, Olivier Teboul, Jonathan Niles-Weed, Jean-Philippe Vert

Low rank matrix factorization is a fundamental building block in machine learning, used for instance to summarize gene expression profile data or word-document counts.

Differentiable Ranking and Sorting using Optimal Transport

1 code implementation NeurIPS 2019 Marco Cuturi, Olivier Teboul, Jean-Philippe Vert

From this observation, we propose extended rank and sort operators by considering optimal transport (OT) problems (the natural relaxation for assignments) where the auxiliary measure can be any weighted measure supported on $m$ increasing values, where $m \ne n$.
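
A simplified entropic-OT sketch of the relaxation described above: transport the $n$ input values onto $m$ increasing anchor values with Sinkhorn iterations, then read approximate soft ranks and soft-sorted values off the plan. The anchor grid, regularization strength and iteration count are arbitrary choices, and this is not the paper's implementation.

```python
import numpy as np

def sinkhorn_sort(x, m=None, eps=0.1, n_iter=500):
    """Entropic-OT relaxation of sorting (simplified sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = m or n
    y = np.linspace(x.min(), x.max(), m)         # increasing anchor values
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    C = (x[:, None] - y[None, :]) ** 2           # squared-distance cost
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                      # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]              # transport plan
    soft_rank = (P / a[:, None]) @ np.arange(1, m + 1)   # expected anchor index
    soft_sorted = (P / b[None, :]).T @ x                  # barycentric projection
    return soft_rank, soft_sorted

ranks, sorted_vals = sinkhorn_sort([0.3, -1.2, 2.5, 0.7], eps=0.1)
print(ranks)        # roughly (2, 1, 4, 3) for small eps
print(sorted_vals)  # roughly the sorted input
```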

ASNI: Adaptive Structured Noise Injection for shallow and deep neural networks

1 code implementation 21 Sep 2019 Beyrem Khalfaoui, Joseph Boyd, Jean-Philippe Vert

Dropout is a regularisation technique in neural network training where unit activations are independently set to zero at random with a given probability.
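
For reference, a minimal sketch of the standard (inverted) dropout described in this sentence, as opposed to the structured noise injection the paper proposes; the dropout rate and tensor shapes are arbitrary.

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Inverted dropout: zero each unit independently with probability p and
    rescale survivors by 1/(1-p) so expected activations are unchanged."""
    if not training or p == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.random.default_rng(0).normal(size=(4, 8))   # a batch of hidden activations
print(dropout(h, p=0.5, rng=np.random.default_rng(1)))
```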

Differentiable Ranks and Sorting using Optimal Transport

no code implementations 28 May 2019 Marco Cuturi, Olivier Teboul, Jean-Philippe Vert

Sorting an array is a fundamental routine in machine learning, one that is used to compute rank-based statistics, cumulative distribution functions (CDFs), quantiles, or to select closest neighbors and labels.

Relating Leverage Scores and Density using Regularized Christoffel Functions

no code implementations NeurIPS 2018 Edouard Pauwels, Francis Bach, Jean-Philippe Vert

Statistical leverage scores emerged as a fundamental tool for matrix sketching and column sampling with applications to low rank approximation, regression, random feature learning and quadrature.

regression
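
As a concrete reminder of the quantity discussed above, a minimal sketch computing statistical leverage scores as the squared row norms of an orthonormal basis of the column space (the diagonal of the hat matrix); this is the standard definition, not the paper's Christoffel-function estimator.

```python
import numpy as np

def leverage_scores(X):
    """Statistical leverage scores of the rows of X: squared row norms of an
    orthonormal basis of the column space (diagonal of the hat matrix)."""
    Q, _ = np.linalg.qr(X)                 # thin QR, Q has orthonormal columns
    return np.sum(Q ** 2, axis=1)

X = np.random.default_rng(0).normal(size=(50, 5))
scores = leverage_scores(X)
print(scores.sum())                        # sums to rank(X), here 5
```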

DropLasso: A robust variant of Lasso for single cell RNA-seq data

2 code implementations 26 Feb 2018 Beyrem Khalfaoui, Jean-Philippe Vert

Single-cell RNA sequencing (scRNA-seq) is a fast-growing approach to measure the genome-wide transcriptome of many individual cells in parallel, but it results in noisy data with many dropout events.

WHInter: A Working set algorithm for High-dimensional sparse second order Interaction models

no code implementations ICML 2018 Marine Le Morvan, Jean-Philippe Vert

Learning sparse linear models with two-way interactions is desirable in many application domains such as genomics.
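
For contrast with the working-set approach, a brute-force baseline sketch: explicitly expand all pairwise interaction features and fit an L1-penalised linear model with scikit-learn. The data and penalty are arbitrary; WHInter's point is precisely to avoid this explicit expansion in high dimensions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 0] * X[:, 1] - X[:, 3] + 0.1 * rng.normal(size=200)  # sparse ground truth

# Expand main effects plus all two-way interactions, then fit a lasso.
expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_int = expand.fit_transform(X)            # 20 main effects + 190 interactions
model = Lasso(alpha=0.05).fit(X_int, y)

names = expand.get_feature_names_out()
selected = [(n, c) for n, c in zip(names, model.coef_) if abs(c) > 1e-3]
print(selected)                            # should pick out "x0 x1" and "x3"
```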

Supervised Quantile Normalisation

no code implementations 1 Jun 2017 Marine Le Morvan, Jean-Philippe Vert

Quantile normalisation is a popular normalisation method for data subject to unwanted variations such as images, speech, or genomic data.
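
A minimal sketch of classical, unsupervised quantile normalisation (mapping every column of a data matrix onto a common reference distribution); the supervised variant studied in the paper learns the target quantiles, which this sketch does not do.

```python
import numpy as np

def quantile_normalize(X):
    """Classical quantile normalisation: map each column onto the same
    reference distribution, taken here as the mean of the sorted columns."""
    order = np.argsort(X, axis=0)                 # per-column sort order
    ranks = np.argsort(order, axis=0)             # rank of each entry
    reference = np.sort(X, axis=0).mean(axis=1)   # target quantile values
    return reference[ranks]

X = np.random.default_rng(0).lognormal(size=(6, 3))
Xn = quantile_normalize(X)
print(np.sort(Xn, axis=0))                        # every column is now identical
```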

Benchmark of structured machine learning methods for microbial identification from mass-spectrometry data

no code implementations 24 Jun 2015 Kévin Vervier, Pierre Mahé, Jean-Baptiste Veyrieras, Jean-Philippe Vert

Structured machine learning methods were recently proposed to take into account the structure embedded in a hierarchy and use it as additional a priori information, and could therefore help improve microbial identification systems.

BIG-bench Machine Learning

Large-scale Machine Learning for Metagenomics Sequence Classification

no code implementations 26 May 2015 Kévin Vervier, Pierre Mahé, Maud Tournoud, Jean-Baptiste Veyrieras, Jean-Philippe Vert

In this work, we investigate the potential of modern, large-scale machine learning implementations for taxonomic assignment of next-generation sequencing reads based on their k-mer profiles.

BIG-bench Machine Learning, Classification +1
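
To make the k-mer representation concrete, a minimal sketch that turns a DNA read into a k-mer count vector over a fixed lexicographic index; the value of k, the example read and the dense encoding are illustrative choices, not the paper's large-scale pipeline.

```python
import numpy as np
from itertools import product

def kmer_profile(read, k=4):
    """k-mer profile of a DNA read: counts of every length-k substring,
    indexed in lexicographic order over the 4**k possible k-mers."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(read) - k + 1):
        counts[index[read[i:i + k]]] += 1
    return counts

read = "ACGTACGTGGTTACGA"
profile = kmer_profile(read, k=4)
print(profile.sum(), profile.nonzero()[0][:5])   # number of k-mers in the read
```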

Tight convex relaxations for sparse matrix factorization

no code implementations NeurIPS 2014 Emile Richard, Guillaume Obozinski, Jean-Philippe Vert

Based on a new atomic norm, we propose a new convex formulation for sparse matrix factorization problems in which the number of nonzero elements of the factors is assumed fixed and known.

Clustering

Consistency of random forests

no code implementations 12 May 2014 Erwan Scornet, Gérard Biau, Jean-Philippe Vert

What has greatly contributed to the popularity of forests is the fact that they can be applied to a wide range of prediction problems and have few parameters to tune.

Ensemble Learning, regression

The group fused Lasso for multiple change-point detection

1 code implementation 21 Jun 2011 Kevin Bleakley, Jean-Philippe Vert

We present the group fused Lasso for detection of multiple change-points shared by a set of co-occurring one-dimensional signals.

Change Point Detection

Fast detection of multiple change-points shared by many signals using group LARS

no code implementations NeurIPS 2010 Jean-Philippe Vert, Kevin Bleakley

We present a fast algorithm for the detection of multiple change-points when each is frequently shared by members of a set of co-occurring one-dimensional signals.

A bagging SVM to learn from positive and unlabeled examples

1 code implementation 5 Oct 2010 Fantine Mordelet, Jean-Philippe Vert

We consider the problem of learning a binary classifier from a training set of positive and unlabeled examples, both in the inductive and in the transductive setting.

Binary Classification, Information Retrieval +1
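
A simplified sketch of the bagging idea described above: repeatedly subsample the unlabeled set, treat the subsample as negatives, train an SVM against the positives, and average decision scores across bags. The linear SVM, bag size and toy Gaussian data are illustrative choices.

```python
import numpy as np
from sklearn.svm import LinearSVC

def bagging_svm_pu(X_pos, X_unlab, n_bags=20, bag_size=None, rng=None):
    """Bagging SVM for positive-unlabeled learning (simplified sketch)."""
    rng = rng or np.random.default_rng(0)
    bag_size = bag_size or len(X_pos)
    scores = np.zeros(len(X_unlab))
    y = np.r_[np.ones(len(X_pos)), -np.ones(bag_size)]
    for _ in range(n_bags):
        idx = rng.choice(len(X_unlab), size=bag_size, replace=False)
        clf = LinearSVC(C=1.0).fit(np.r_[X_pos, X_unlab[idx]], y)
        scores += clf.decision_function(X_unlab)
    return scores / n_bags            # high score = likely positive

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=2.0, size=(30, 5))
X_unlab = np.r_[rng.normal(loc=2.0, size=(20, 5)), rng.normal(loc=0.0, size=(80, 5))]
print(bagging_svm_pu(X_pos, X_unlab)[:5])
```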

White Functionals for Anomaly Detection in Dynamical Systems

no code implementations NeurIPS 2009 Marco Cuturi, Jean-Philippe Vert, Alexandre d'Aspremont

The candidate functionals are estimated in a subset of a reproducing kernel Hilbert space associated with the set where the process takes values.

Anomaly Detection

Clustered Multi-Task Learning: A Convex Formulation

no code implementations NeurIPS 2008 Laurent Jacob, Jean-Philippe Vert, Francis R. Bach

In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others.

Multi-Task Learning
