no code implementations • 13 Jul 2022 • Arthur Marmin, José Henrique de Morais Goulart, Cédric Févotte
It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation.
2 code implementations • 28 Jun 2022 • Ondřej Mokrý, Paul Magron, Thomas Oberlin, Cédric Févotte
First, we treat the missing samples as latent variables, and derive two expectation-maximization algorithms for estimating the parameters of the model, depending on whether we formulate the problem in the time- or time-frequency domain.
no code implementations • 20 Apr 2022 • Paul Magron, Cédric Févotte
We factorize the Bernoulli parameter and consider an additional Beta prior on one of the factors to further improve the model's expressive power.
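A minimal generative sketch of this kind of model, in Python: the Bernoulli parameter of each entry is factorized as [WH]_ij and a Beta prior is placed on one factor. The shapes, hyperparameters, and the row-normalization of W used to keep the product in [0, 1] are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

# Generative sketch of a Bernoulli matrix factorization: the Bernoulli
# parameter of each entry is [WH]_ij, with a Beta prior on the second factor.
# Shapes, hyperparameters, and the Dirichlet normalization of W are
# illustrative choices, not the paper's exact model.
rng = np.random.default_rng(0)
F, N, K = 20, 30, 4                        # rows, columns, latent dimension

W = rng.dirichlet(np.ones(K), size=F)      # rows sum to 1 so that [WH]_ij stays in [0, 1]
H = rng.beta(a=1.0, b=1.0, size=(K, N))    # Beta prior on the second factor

P = W @ H                                  # Bernoulli parameters
V = rng.binomial(1, P)                     # observed binary matrix
```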
1 code implementation • 15 Feb 2022 • Cassio F. Dantas, Emmanuel Soubies, Cédric Févotte
Non-negative and bounded-variable linear regression problems arise in a variety of applications in machine learning and signal processing.
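As a point of reference for this problem class, here is a plain projected-gradient baseline for nonnegative, box-constrained least squares; it is not the screening test or dedicated solver contributed by the paper.

```python
import numpy as np

def bounded_nnls(A, b, upper=1.0, step=None, n_iter=500):
    """Projected gradient for min_x 0.5*||Ax - b||^2 subject to 0 <= x <= upper.
    A plain baseline for illustration only."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.clip(x - step * grad, 0.0, upper)  # projection onto the box [0, upper]
    return x

rng = np.random.default_rng(0)
A = rng.random((50, 10))
b = A @ rng.uniform(0, 1, 10) + 0.01 * rng.standard_normal(50)
x_hat = bounded_nnls(A, b)
```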
1 code implementation • 10 Dec 2021 • Sixin Zhang, Emmanuel Soubies, Cédric Févotte
Non-negative matrix factorization with transform learning (TL-NMF) is a recent idea that aims at learning data representations suited to NMF.
no code implementations • 29 Jun 2021 • Arthur Marmin, José Henrique de Morais Goulart, Cédric Févotte
Our new updates are derived from a joint majorization-minimization (MM) scheme, in which an auxiliary function (a tight upper bound of the objective function) is built for the two factors jointly and minimized at each iteration.
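For context, the classical alternative to this joint scheme majorizes and minimizes with respect to each factor in turn, which yields the well-known multiplicative updates. A minimal sketch of those standard alternating updates for the KL divergence (beta = 1) follows; it is the baseline, not the joint MM algorithm proposed in the paper.

```python
import numpy as np

def kl_nmf(V, K, n_iter=200, eps=1e-12):
    """Classical alternating multiplicative updates for KL-divergence NMF:
    each factor is updated in turn with its own majorizer."""
    rng = np.random.default_rng(0)
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H
```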
1 code implementation • NeurIPS 2021 • Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte, Gilles Gasso
In particular, we consider majorization-minimization which leads in our setting to efficient multiplicative updates for a variety of penalties.
1 code implementation • 10 Apr 2021 • Ting Cai, Vincent Y. F. Tan, Cédric Févotte
We consider an adversarially trained version of nonnegative matrix factorization, a popular latent dimensionality reduction technique.
1 code implementation • 5 Mar 2021 • Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels
Aiming at a direct and simple improvement of vanilla SGD, this paper presents a fine-tuning of its step sizes in the mini-batch case.
1 code implementation • 24 Feb 2021 • Paul Magron, Cédric Févotte
In this work, we introduce neural content-aware collaborative filtering, a unified framework that alleviates these limitations and extends the recently introduced neural collaborative filtering to its content-aware counterpart.
1 code implementation • 22 Feb 2021 • Cassio F. Dantas, Emmanuel Soubies, Cédric Févotte
In this work, we extend the existing Gap Safe screening framework by relaxing the global strong-concavity assumption on the dual cost function.
no code implementations • 20 Oct 2020 • Paul Magron, Cédric Févotte
These approaches are agnostic to the song content, and therefore face the cold-start problem: they cannot recommend novel songs without listening history.
no code implementations • 1 Oct 2020 • Pierre-Hugo Vial, Paul Magron, Thomas Oberlin, Cédric Févotte
Therefore, we formulate PR as a new minimization problem involving Bregman divergences.
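A toy illustration of this viewpoint with the quadratic loss, which is one particular Bregman divergence: recover a real signal from magnitude-only measurements by gradient descent on the magnitude mismatch. The measurement setup and step size are illustrative assumptions, and the general Bregman formulations of the paper are not reproduced here.

```python
import numpy as np

def pr_grad_descent(A, b, step=1e-3, n_iter=2000, seed=0):
    """Toy phase retrieval: minimize 0.5 * || |Ax| - b ||^2 by gradient descent,
    for a real measurement matrix A and magnitude measurements b."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(n_iter):
        z = A @ x
        grad = A.T @ ((np.abs(z) - b) * np.sign(z))
        x -= step * grad
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = np.abs(A @ x_true)
x_hat = pr_grad_descent(A, b)   # may recover x_true up to a global sign (no guarantee from a random start)
```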
1 code implementation • 24 Jul 2020 • Dana Lahat, Yanbin Lang, Vincent Y. F. Tan, Cédric Févotte
In this work, we provide a collection of tools for PSDMF, by showing that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
no code implementations • 8 Jul 2020 • Valentin Leplat, Nicolas Gillis, Cédric Févotte
We show in numerical experiments that the MU are able to obtain high resolutions in both dimensions on two applications: (1) blind unmixing of audio spectrograms, where, to the best of our knowledge, this is the first time a coupled NMF model is used in this context; and (2) the fusion of hyperspectral and multispectral images, where the MU compete favorably with state-of-the-art algorithms, in particular in the presence of non-Gaussian noise.
1 code implementation • 23 Jun 2020 • Louis Filstroff, Olivier Gouvert, Cédric Févotte, Olivier Cappé
Non-negative matrix factorization (NMF) has become a well-established class of methods for the analysis of non-negative data.
1 code implementation • ICML 2020 • Olivier Gouvert, Thomas Oberlin, Cédric Févotte
In particular, our algorithm preserves the scalability of PF and can be applied to huge sparse datasets.
2 code implementations • 29 May 2019 • Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels
We prove the convergence of INNA for most deep learning problems.
1 code implementation • 20 May 2019 • Olivier Gouvert, Thomas Oberlin, Cédric Févotte
Count data are often used in recommender systems: they are widespread (song play counts, product purchases, clicks on web pages) and can reveal user preference without any explicit rating from the user.
no code implementations • 15 Mar 2019 • Rui Xia, Vincent Y. F. Tan, Louis Filstroff, Cédric Févotte
We propose a novel ranking model that combines the Bradley-Terry-Luce probability model with a nonnegative matrix factorization framework to model and uncover the presence of latent variables that influence the performance of top tennis players.
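A minimal sketch of the Bradley-Terry-Luce ingredient with factorized skills: player i beats player j with probability lambda_i / (lambda_i + lambda_j), where the skills come from a low-rank nonnegative product. The shapes and the per-tournament indexing below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_tournaments, K = 8, 5, 2
W = rng.random((n_players, K))            # player loadings on latent "styles" (illustrative)
H = rng.random((K, n_tournaments))        # tournament profiles (illustrative)
skills = W @ H                            # nonnegative skill of each player in each tournament

def win_prob(i, j, t):
    """Bradley-Terry-Luce probability that player i beats player j in tournament t."""
    return skills[i, t] / (skills[i, t] + skills[j, t])

print(win_prob(0, 1, 2))
```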
1 code implementation • 17 Dec 2018 • Alberto Lumbreras, Louis Filstroff, Cédric Févotte
In some cases, the analysis of binary matrices can be tackled with nonnegative matrix factorization (NMF), where the observed data matrix is approximated by the product of two smaller nonnegative matrices.
1 code implementation • 6 Nov 2018 • Pierre Ablin, Dylan Fagot, Herwig Wendt, Alexandre Gramfort, Cédric Févotte
Nonnegative matrix factorization (NMF) is a popular method for audio spectral unmixing.
no code implementations • 30 Jul 2018 • Yanna Cruz Cavalcanti, Thomas Oberlin, Nicolas Dobigeon, Cédric Févotte, Simon Stute, Maria-Joao Ribeiro, Clovis Tauber
Factor analysis has proven to be a relevant tool for extracting tissue time-activity curves (TACs) in dynamic PET images, since it allows for an unsupervised analysis of the data.
no code implementations • 25 Apr 2018 • Cédric Févotte, Matthieu Kowalski
In this paper we instead propose a synthesis approach, where low-rankness is imposed on the synthesis coefficients of the data signal over a given t-f dictionary (such as a Gabor frame).
no code implementations • 5 Jan 2018 • Olivier Gouvert, Thomas Oberlin, Cédric Févotte
We introduce negative binomial matrix factorization (NBMF), a matrix factorization technique specially designed for analyzing over-dispersed count data.
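A generative sketch of the observation model, assuming NumPy's (n, p) parameterization of the negative binomial: each count has mean [WH]_ij and a dispersion parameter alpha that controls over-dispersion. The hyperparameters below are illustrative, not the paper's.

```python
import numpy as np

# Generative sketch of NBMF: counts drawn from a negative binomial with
# factorized mean [WH]_ij and dispersion alpha (over-dispersion grows as
# alpha decreases). Parameterization follows NumPy's (n, p) convention.
rng = np.random.default_rng(0)
F, N, K, alpha = 50, 40, 5, 2.0

W = rng.gamma(shape=1.0, scale=1.0, size=(F, K))
H = rng.gamma(shape=1.0, scale=1.0, size=(K, N))
mu = W @ H                             # factorized mean
p = alpha / (alpha + mu)               # success probability giving E[V] = mu
V = rng.negative_binomial(alpha, p)    # over-dispersed counts: Var[V] = mu + mu**2 / alpha
```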
no code implementations • ICML 2018 • Louis Filstroff, Alberto Lumbreras, Cédric Févotte
We present novel understandings of the Gamma-Poisson (GaP) model, a probabilistic matrix factorization model for count data.
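For reference, a generative sketch of the GaP model with a fixed nonnegative dictionary, Gamma-distributed activations, and Poisson observations; the dimensions and Gamma hyperparameters are illustrative assumptions.

```python
import numpy as np

# Generative sketch of the Gamma-Poisson (GaP) model: nonnegative activations H
# with independent Gamma priors, Poisson counts with factorized mean [WH]_ij.
rng = np.random.default_rng(0)
F, N, K = 30, 100, 5

W = rng.random((F, K))                              # nonnegative dictionary (fixed here)
H = rng.gamma(shape=0.5, scale=2.0, size=(K, N))    # Gamma-distributed activations
V = rng.poisson(W @ H)                              # Poisson observations with mean WH
```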
no code implementations • 11 May 2017 • Dylan Fagot, Cédric Févotte, Herwig Wendt
Traditional NMF-based signal decomposition relies on the factorization of spectral data, which is typically computed by means of a short-time frequency transform.
1 code implementation • NeurIPS 2016 • Rémi Flamary, Cédric Févotte, Nicolas Courty, Valentin Emiya
Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates.
no code implementations • NeurIPS 2014 • Cédric Févotte, Matthieu Kowalski
Many single-channel signal decomposition techniques rely on a low-rank factorization of a time-frequency transform.
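A minimal sketch of that standard analysis pipeline (compute a time-frequency transform, then factorize its magnitude with NMF), using SciPy and scikit-learn; this is the baseline the synthesis approach departs from, not the method of the paper.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs * 2) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)  # toy 2-component signal

_, _, X = stft(x, fs=fs, nperseg=1024)          # complex STFT
V = np.abs(X)                                   # magnitude spectrogram

model = NMF(n_components=2, beta_loss='kullback-leibler', solver='mu',
            init='random', max_iter=300, random_state=0)
W = model.fit_transform(V)                      # spectral templates (frequencies x components)
H = model.components_                           # activations (components x frames)
```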
1 code implementation • 22 Jan 2014 • Cédric Févotte, Nicolas Dobigeon
This paper introduces a robust mixing model to describe hyperspectral data resulting from the mixture of several pure spectral signatures.
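A generative sketch of such a robust mixing model: each observed spectrum is a nonnegative combination of endmember signatures plus a sparse nonnegative outlier term and noise. Dimensions, sparsity level, and how the outlier term is drawn are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, P, R = 100, 500, 3                      # spectral bands, pixels, endmembers

M = rng.random((L, R))                     # endmember signatures (nonnegative)
A = rng.dirichlet(np.ones(R), size=P).T    # abundances, nonnegative and summing to one per pixel
outliers = rng.random((L, P)) * (rng.random((L, P)) < 0.02)  # sparse nonnegative residual term
Y = M @ A + outliers + 0.01 * rng.standard_normal((L, P))    # observed hyperspectral data
```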
no code implementations • NeurIPS 2011 • Onur Dikmen, Cédric Févotte
In this paper we describe a maximum likelihood approach for dictionary learning in the multiplicative exponential noise model.
3 code implementations • 25 Nov 2011 • Vincent Y. F. Tan, Cédric Févotte
This paper addresses the estimation of the latent dimensionality in nonnegative matrix factorization (NMF) with the β-divergence.
1 code implementation • 8 Oct 2010 • Cédric Févotte, Jérôme Idier
The paper also describes how the proposed algorithms can be adapted to two common variants of NMF: penalized NMF (i.e., when a penalty function of the factors is added to the criterion function) and convex-NMF (when the dictionary is assumed to belong to a known subspace).
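A compact sketch of the heuristic multiplicative updates for the beta-divergence, with an optional l1 penalty on H added to the denominator of its update as one common way to handle the penalized variant; details and convergence guarantees vary across these papers, and this is not a faithful reimplementation.

```python
import numpy as np

def beta_nmf_mu(V, K, beta=1.0, l1_H=0.0, n_iter=200, eps=1e-12):
    """Heuristic multiplicative updates for NMF with the beta-divergence.
    An optional l1 penalty on H is handled by adding its gradient (a constant)
    to the denominator of the H update. Illustrative sketch only."""
    rng = np.random.default_rng(0)
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + l1_H + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```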