Search Results for author: Julien Audiffren

Found 14 papers, 2 papers with code

Tensor Convolutional Sparse Coding with Low-Rank activations, an application to EEG analysis

1 code implementation • 6 Jul 2020 • Pierre Humbert, Laurent Oudre, Nicolas Vayatis, Julien Audiffren

Recently, there has been growing interest in the analysis of spectrograms of ElectroEncephaloGram (EEG), particularly to study the neural correlates of (un)-consciousness during General Anesthesia (GA).

EEG

Multivariate Convolutional Sparse Coding with Low Rank Tensor

no code implementations • 9 Aug 2019 • Pierre Humbert, Julien Audiffren, Laurent Oudre, Nicolas Vayatis

This paper introduces a new multivariate convolutional sparse coding model based on tensor algebra, with a general formulation enforcing both element-wise sparsity and low-rankness of the activation tensors.

regression
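The two structural constraints named in the abstract, element-wise sparsity and low-rankness of the activations, each correspond to a classical proximal operator. The sketch below is not the paper's algorithm, only an illustration (on a toy matricized activation with made-up dimensions) of how the two operators act:

```python
import numpy as np

def soft_threshold(Z, lam):
    """Element-wise sparsity: proximal operator of the l1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

def truncate_rank(Z, r):
    """Low-rankness: keep only the top-r singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s[r:] = 0.0
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 30))       # toy matricized activation tensor
Z_sparse = soft_threshold(Z, 1.0)   # element-wise sparse version
Z_lowrank = truncate_rank(Z, 2)     # rank-2 approximation
```

In alternating schemes these two operators are typically applied inside an iterative solver; here they are shown in isolation.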

Post-training for Deep Learning

no code implementations • ICLR 2018 • Thomas Moreau, Julien Audiffren

One of the main challenges of deep learning methods is the choice of an appropriate training strategy.

Unsupervised Pre-training

Bandits Dueling on Partially Ordered Sets

no code implementations • NeurIPS 2017 • Julien Audiffren, Liva Ralaivola

We propose UnchainedBandits, an algorithm that, under a set of minimal assumptions, efficiently finds the set of optimal arms (the Pareto front) of any poset, even when pairs of comparable arms cannot be distinguished a priori from pairs of incomparable arms.
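As a point of reference for the objective (not the UnchainedBandits algorithm itself, which must cope with noisy comparisons and a priori indistinguishable pairs), here is a sketch of extracting the non-dominated arms when the partial order is fully known; the `dominates` relation and the toy 2-d rewards are illustrative assumptions:

```python
def pareto_front(arms, dominates):
    """Return the arms not strictly dominated by any other arm.
    `dominates(a, b)` is True when a is strictly better than b;
    incomparable pairs return False in both directions."""
    return [a for a in arms
            if not any(dominates(b, a) for b in arms if b != a)]

# Toy poset on 2-d rewards: a dominates b iff a is at least as good in
# both coordinates (and a != b); many pairs are incomparable.
arms = [(3, 1), (1, 3), (2, 2), (1, 1)]
def dom(a, b):
    return a != b and a[0] >= b[0] and a[1] >= b[1]

print(pareto_front(arms, dom))  # → [(3, 1), (1, 3), (2, 2)]; (1, 1) is dominated
```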

Post Training in Deep Learning with Last Kernel

1 code implementation • 14 Nov 2016 • Thomas Moreau, Julien Audiffren

One of the main challenges of deep learning methods is the choice of an appropriate training strategy.

Unsupervised Pre-training

Decoy Bandits Dueling on a Poset

no code implementations • 8 Feb 2016 • Julien Audiffren, Liva Ralaivola

We address the problem of dueling bandits defined on partially ordered sets, or posets.

Cornering Stationary and Restless Mixing Bandits with Remix-UCB

no code implementations • NeurIPS 2015 • Julien Audiffren, Liva Ralaivola

We study the restless bandit problem where arms are associated with stationary $\varphi$-mixing processes and where rewards are therefore dependent: the question that arises from this setting is that of carefully recovering some independence by 'ignoring' the values of some rewards.

Operator-valued Kernels for Learning from Functional Response Data

no code implementations • 28 Oct 2015 • Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, Julien Audiffren

In this paper, we consider the problems of supervised classification and regression in the case where attributes and labels are functions: each data point is represented by a set of functions, and the label is also a function.

Audio Signal Processing • General Classification

Stationary Mixing Bandits

no code implementations • 23 Jun 2014 • Julien Audiffren, Liva Ralaivola

We provide a UCB strategy together with a general regret analysis for the case where the size of the independence blocks (the ignored rewards) is fixed, and we go a step further by providing an algorithm that computes the size of the independence blocks from the data.
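To illustrate the fixed-block idea only (this is not the paper's algorithm or analysis): a toy UCB1 variant that retains one reward out of every `block` pulls of an arm and ignores the rest, weakening temporal dependence. The `block_ucb` name, the Bernoulli arms, and all parameters are illustrative assumptions:

```python
import math
import random

def block_ucb(pull, n_arms, horizon, block=5, seed=0):
    """UCB1 that keeps only the first reward of every `block` pulls of an
    arm, 'ignoring' the rest (a fixed-block-size sketch)."""
    random.seed(seed)
    counts = [0] * n_arms        # total pulls per arm
    kept_sum = [0.0] * n_arms    # sum of retained rewards
    kept_n = [0] * n_arms        # number of retained rewards
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1            # initialisation: play each arm once
        else:
            a = max(range(n_arms),
                    key=lambda i: kept_sum[i] / kept_n[i]
                    + math.sqrt(2 * math.log(t) / kept_n[i]))
        r = pull(a)
        counts[a] += 1
        if counts[a] % block == 1:   # retain one reward per block
            kept_sum[a] += r
            kept_n[a] += 1
    return counts

# Toy i.i.d. Bernoulli arms (means are made up); the better arm should
# still collect most pulls despite the ignored rewards.
means = [0.2, 0.8]
counts = block_ucb(lambda a: 1.0 if random.random() < means[a] else 0.0,
                   n_arms=2, horizon=2000)
```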

Equivalence of Learning Algorithms

no code implementations • 10 Jun 2014 • Julien Audiffren, Hachem Kadri

The purpose of this paper is to introduce a concept of equivalence between machine learning algorithms.

BIG-bench Machine Learning • regression

Online Learning with Multiple Operator-valued Kernels

no code implementations • 1 Nov 2013 • Julien Audiffren, Hachem Kadri

We consider the problem of learning a vector-valued function f in an online learning setting.

General Classification
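A heavily simplified sketch of learning a vector-valued function online: a single scalar Gaussian kernel (times the identity operator) stands in for the paper's multiple operator-valued kernels, and the update is plain online functional gradient descent on the squared loss. All names, data, and parameters are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def online_kernel_learn(stream, eta=0.5, gamma=1.0):
    """Online learning of f(x) = sum_t alpha_t * k(x, x_t) with
    vector-valued coefficients alpha_t; returns instantaneous losses."""
    centers, alphas, losses = [], [], []
    for x, y in stream:
        if centers:
            pred = sum(a * gaussian_kernel(x, c, gamma)
                       for a, c in zip(alphas, centers))
        else:
            pred = np.zeros_like(y)
        losses.append(float(np.sum((pred - y) ** 2)))  # squared loss
        centers.append(x)
        alphas.append(eta * (y - pred))                # gradient step
    return losses

# Toy stream: three inputs repeated; the target maps x to (x, -x).
pts = [(np.array([x]), np.array([x, -x])) for x in (0.0, 1.0, 2.0)]
losses = online_kernel_learn(pts * 30)
```

On this toy stream the instantaneous loss shrinks as the same points recur, which is the behaviour an online regret bound formalises.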

M-Power Regularized Least Squares Regression

no code implementations • 9 Oct 2013 • Julien Audiffren, Hachem Kadri

Regularization is used to find a solution that both fits the data and is sufficiently smooth, which makes it very effective for designing and refining learning algorithms.

regression
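For orientation, the familiar squared-norm (m = 2) special case of power-of-norm regularization is ordinary ridge regression, which has a closed form. This is only a baseline sketch, not the paper's m-power kernel algorithm; the data and names are illustrative:

```python
import numpy as np

def ridge(X, y, lam=0.1):
    """Ridge regression: minimise ||Xw - y||^2 + lam * ||w||^2,
    solved via the normal equations (X'X + lam*I) w = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=50)
w_hat = ridge(X, y, lam=1e-3)   # recovers w_true up to noise
```

Raising the regularizer to other powers m changes the penalty's geometry; the m = 2 case is the one with this simple linear-algebra solution.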

Stability of Multi-Task Kernel Regression Algorithms

no code implementations • 17 Jun 2013 • Julien Audiffren, Hachem Kadri

We show that multi-task kernel regression algorithms are uniformly stable in the general case of infinite-dimensional output spaces.

Multi-Task Learning • regression
