1 code implementation • 27 Jun 2024 • Vincent Micheli, Eloi Alonso, François Fleuret

Scaling up deep Reinforcement Learning (RL) methods presents a significant challenge.

no code implementations • 4 Jun 2024 • Bálint Máté, François Fleuret, Tristan Bereau

Thermodynamic integration (TI) offers a rigorous method for estimating free-energy differences by integrating over a sequence of interpolating conformational ensembles.

1 code implementation • 20 May 2024 • Eloi Alonso, Adam Jelley, Vincent Micheli, Anssi Kanervisto, Amos Storkey, Tim Pearce, François Fleuret

Motivated by this paradigm shift, we introduce DIAMOND (DIffusion As a Model Of eNvironment Dreams), a reinforcement learning agent trained in a diffusion world model.

2 code implementations • 13 May 2024 • Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jimenez, François Fleuret, Pascal Frossard

For this reason, we propose Consensus Merging, an algorithm that eliminates such weights and improves the general performance of existing model merging approaches.

1 code implementation • 15 Apr 2024 • Arnaud Pannatier, Evann Courdier, François Fleuret

Autoregressive models, such as the GPT family, use a fixed order, usually left-to-right, to generate sequences.

no code implementations • 1 Jan 2024 • Bálint Máté, François Fleuret

In particular, we propose to approximate a time-dependent operator $\mathcal V_t$ whose time integral provides a mapping between the functional distributions of the free theory $[\mathcal D\phi(x)] \mathcal Z_0^{-1} e^{-\mathcal S_{0}[\phi(x)]}$ and of the target theory $[\mathcal D\phi(x)]\mathcal Z^{-1}e^{-\mathcal S[\phi(x)]}$.

no code implementations • 1 Nov 2023 • Evann Courdier, Prabhu Teja Sivaprasad, François Fleuret

We study the problem of improving the efficiency of segmentation transformers by using disparate amounts of computation for different parts of the image.

1 code implementation • 1 Jun 2023 • Matteo Pagliardini, Daniele Paliotta, Martin Jaggi, François Fleuret

While many works have proposed schemes to sparsify the attention patterns and reduce the computational overhead of self-attention, those are often limited by implementation concerns and end up imposing a simple and static structure over the attention matrix.

no code implementations • 10 Feb 2023 • Daniele Paliotta, Mathieu Alain, Bálint Máté, François Fleuret

We present the Graph Forward-Forward (GFF) algorithm, an extension of the Forward-Forward procedure to graphs, able to handle features distributed over a graph's nodes.
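
As a rough illustration of the layer-local principle behind Forward-Forward, which GFF extends to graphs: each layer scores its input with a "goodness" (the sum of its squared activations) and is trained so that positive data scores high and negative data low. The graph layer below is a hypothetical stand-in for illustration only, not the paper's architecture.

```python
import numpy as np

def goodness(h):
    # Forward-Forward "goodness" of a layer: sum of squared activations
    return (h ** 2).sum(axis=-1)

def prob_positive(h, theta=2.0):
    # A layer judges its input "positive" data when goodness exceeds theta
    return 1.0 / (1.0 + np.exp(-(goodness(h) - theta)))

def graph_layer(x, adj, w):
    # Hypothetical graph layer: average neighbour features, linear map, ReLU
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    return np.maximum(((adj @ x) / deg) @ w, 0.0)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # 3-node graph
x = rng.normal(size=(3, 4))
w = rng.normal(size=(4, 4))
h = graph_layer(x, adj, w)
p = prob_positive(h)      # per-node probability of being "positive" data
```

Training would then push `goodness` up on real graphs and down on corrupted ones, layer by layer, with no backward pass through the network.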

1 code implementation • 18 Jan 2023 • Bálint Máté, François Fleuret

We introduce a training objective for continuous normalizing flows that can be used in the absence of samples but in the presence of an energy function.
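
A natural sample-free objective of this kind (a generic sketch, not necessarily the paper's exact formulation) is the reverse KL divergence between the flow density $q_\theta$ and the Boltzmann density $p(x) = e^{-E(x)}/Z$ defined by the energy $E$:

```latex
\mathcal{L}(\theta)
  = \mathbb{E}_{x \sim q_\theta}\!\left[\log q_\theta(x) + E(x)\right]
  = \mathrm{KL}\!\left(q_\theta \,\|\, p\right) - \log Z .
```

Since $\log Z$ does not depend on $\theta$, minimizing the expectation minimizes the KL divergence; the samples come from the flow itself and $\log q_\theta$ is available through the change-of-variables formula, so no samples from $p$ are ever needed.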

1 code implementation • CVPR 2023 • Mohammad Mahdi Johari, Camilla Carta, François Fleuret

We present ESLAM, an efficient implicit neural representation method for Simultaneous Localization and Mapping (SLAM).

no code implementations • 25 Oct 2022 • Bálint Máté, François Fleuret

Consider a one-parameter family of Boltzmann distributions $p_t(x) = \tfrac{1}{Z_t}e^{-S_t(x)}$.
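
A minimal concrete instance of such a family is a Gaussian whose width depends on $t$ (the schedule $\sigma_t = 1 + t$ below is a made-up example): with $S_t(x) = x^2 / (2\sigma_t^2)$ the partition function is known in closed form, $Z_t = \sqrt{2\pi}\,\sigma_t$.

```python
import numpy as np

def S(x, t):
    # Action of the toy family: S_t(x) = x^2 / (2 sigma_t^2), sigma_t = 1 + t
    return x ** 2 / (2 * (1.0 + t) ** 2)

def Z(t):
    # Closed-form partition function of a Gaussian: sqrt(2 pi) * sigma_t
    return np.sqrt(2 * np.pi) * (1.0 + t)

def p(x, t):
    # Boltzmann density p_t(x) = exp(-S_t(x)) / Z_t
    return np.exp(-S(x, t)) / Z(t)

# Numerical sanity check: each member of the family is normalised.
x = np.linspace(-40.0, 40.0, 200001)
dx = x[1] - x[0]
masses = [p(x, t).sum() * dx for t in (0.0, 1.0, 3.0)]
```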

no code implementations • 20 Oct 2022 • Arnaud Pannatier, Kyle Matoba, François Fleuret

Notably, our model reduces the root mean square error (RMSE) for wind nowcasting from 9.24 to 7.98 and for heat diffusion tasks from 0.126 to 0.084.

1 code implementation • 18 Oct 2022 • Nikolaos Dimitriadis, Pascal Frossard, François Fleuret

In Multi-Task Learning (MTL), tasks may compete and limit each other's performance, rather than guiding the optimization to a solution superior to all single-task trained counterparts.

2 code implementations • 1 Sep 2022 • Vincent Micheli, Eloi Alonso, François Fleuret

Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems.

Ranked #7 on Atari Games on Atari 100k

1 code implementation • 30 May 2022 • Bálint Máté, Samuel Klein, Tobias Golling, François Fleuret

On the other hand, neural networks only perform a forward pass on the input: there is neither a notion of the inverse of a neural network nor of its likelihood contribution.
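
The contrast can be made concrete with a minimal invertible map. The sketch below (a toy affine "flow", not any model from the paper) shows the two ingredients a normalizing flow has and a generic network lacks: an explicit inverse and a log-det-Jacobian likelihood term.

```python
import numpy as np

a, b = 2.0, 0.5                  # parameters of the affine map z = a*x + b

def forward(x):
    return a * x + b

def inverse(z):
    # Exact inverse -- a generic feed-forward network has no such map
    return (z - b) / a

def log_prob(x):
    # Change of variables with a standard normal base density:
    # log p(x) = log N(forward(x); 0, 1) + log |d forward / d x|
    z = forward(x)
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi) + np.log(abs(a))
```

Composing many such invertible layers, each contributing its own log-det term, is exactly what makes a flow's density tractable.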

no code implementations • 2 Mar 2022 • Kyle Matoba, Nikolaos Dimitriadis, François Fleuret

Over the decade since deep neural networks became state-of-the-art image classifiers, there has been a tendency towards less use of max pooling: the function that takes the largest of nearby pixels in an image.

no code implementations • 17 Feb 2022 • Anssi Kanervisto, Stephanie Milani, Karolis Ramanauskas, Nicholay Topin, Zichuan Lin, Junyou Li, Jianing Shi, Deheng Ye, Qiang Fu, Wei Yang, Weijun Hong, Zhongyue Huang, Haicheng Chen, Guangjun Zeng, Yue Lin, Vincent Micheli, Eloi Alonso, François Fleuret, Alexander Nikulin, Yury Belousov, Oleg Svidchenko, Aleksei Shpilman

With this in mind, we hosted the third edition of the MineRL ObtainDiamond competition, MineRL Diamond 2021, with a separate track in which we permitted any solution to promote the participation of newcomers.

1 code implementation • 11 Feb 2022 • Evann Courdier, François Fleuret

Semantic segmentation is a well-addressed topic in the computer vision literature, but the design of fast and accurate video processing networks remains challenging.

1 code implementation • 9 Feb 2022 • Matteo Pagliardini, Martin Jaggi, François Fleuret, Sai Praneeth Karimireddy

This behavior can hinder the transferability of trained models by (i) favoring the learning of simpler but spurious features -- present in the training data but absent from the test data -- and (ii) leveraging only a small subset of predictive features.

1 code implementation • 20 Dec 2021 • Arnaud Pannatier, Ricardo Picatoste, François Fleuret

This paper proposes a simple yet efficient high-altitude wind nowcasting pipeline.

2 code implementations • CVPR 2022 • Mohammad Mahdi Johari, Yann Lepoittevin, François Fleuret

To render a novel view, the geometry reasoner first constructs cascaded cost volumes for each nearby source view.

1 code implementation • 19 Oct 2021 • Prabhu Teja Sivaprasad, François Fleuret

Data samples generated by several real-world processes are dynamic in nature, i.e., their characteristics vary with time.

no code implementations • 8 Sep 2021 • Bálint Máté, François Fleuret

This algorithm first runs any approximate-PCA method to get an initial estimate of the principal components (priming), and then applies an exact PCA in the subspace they span.
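
The two stages can be sketched as follows, with a randomized range finder standing in for "any approximate-PCA method"; the oversampling amount and initialisation here are illustrative choices, not the paper's.

```python
import numpy as np

def primed_pca(X, k, oversample=5, rng=None):
    # Two-stage PCA: a cheap randomized pass finds a small subspace likely
    # to contain the top-k principal components (the "priming"), then an
    # exact eigendecomposition is performed inside that subspace.
    rng = np.random.default_rng(rng)
    Xc = X - X.mean(axis=0)
    # 1) Priming: one covariance application to random vectors, then QR,
    #    gives an orthonormal d x (k + oversample) basis Q.
    Y = Xc @ rng.normal(size=(X.shape[1], k + oversample))
    Q, _ = np.linalg.qr(Xc.T @ Y)
    # 2) Exact PCA restricted to span(Q): eigendecompose the projected
    #    covariance Q^T C Q, then map the directions back to R^d.
    B = Xc @ Q
    w, V = np.linalg.eigh((B.T @ B) / (len(X) - 1))
    order = np.argsort(w)[::-1][:k]
    return Q @ V[:, order]      # top-k principal directions in R^d
```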

1 code implementation • EMNLP 2021 • Vincent Micheli, François Fleuret

Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets.

no code implementations • 13 Apr 2021 • Vincent Micheli, Quentin Heinrich, François Fleuret, Wacim Belblidia

Attention is a key component of the now ubiquitous pre-trained language models.

no code implementations • 26 Jan 2021 • Karthigan Sinnathamby, Chang-Yu Hou, Lalitha Venkataramanan, Vasileios-Marios Gkortsas, François Fleuret

Following the work of arXiv:2101.09512, we are interested in clustering a given multi-variate series in an unsupervised manner.

no code implementations • 23 Jan 2021 • Karthigan Sinnathamby, Chang-Yu Hou, Lalitha Venkataramanan, Vasileios-Marios Gkortsas, François Fleuret

We are interested in clustering parts of a given single multi-variate series in an unsupervised manner.

no code implementations • 1 Jan 2021 • Kyle Matoba, François Fleuret

To apply an algorithm in a sensitive domain it is important to understand the set of input values that result in specific decisions.

no code implementations • EMNLP 2020 • Vincent Micheli, Martin d'Hoffschmidt, François Fleuret

Recent advances in language modeling have led to computationally intensive and resource-demanding state-of-the-art models.

Ranked #7 on Question Answering on FQuAD

1 code implementation • NeurIPS 2020 • Apoorv Vyas, Angelos Katharopoulos, François Fleuret

This results in a model with linear complexity with respect to the sequence length for a fixed number of clusters.

Automatic Speech Recognition (ASR)

6 code implementations • ICML 2020 • Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret

Transformers achieve remarkable performance in several tasks, but due to their quadratic complexity with respect to the input's length, they are prohibitively slow for very long sequences.

Ranked #5 on Offline RL on D4RL
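
The central trick can be sketched in a few lines: replacing the softmax with a positive feature map φ lets the attention factorise, so the cost becomes linear in the sequence length. A minimal numpy sketch for a single head, without masking, using φ(x) = elu(x) + 1 as in the paper:

```python
import numpy as np

def elu_plus_one(x):
    # Positive feature map; elu(x) + 1 elementwise
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    # Replaces softmax(Q K^T) V with phi(Q) (phi(K)^T V): O(N) in the
    # sequence length N instead of O(N^2), for fixed feature dimension.
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)
    KV = Kf.T @ V                    # (d, d_v): summarises keys and values
    Z = Qf @ Kf.sum(axis=0)          # (N,): per-query normaliser
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
N, d = 6, 4
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out = linear_attention(Q, K, V)
```

Because `KV` and the key sum can be updated incrementally, the same factorisation also yields the RNN-style autoregressive formulation the paper exploits.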

no code implementations • 8 Feb 2020 • Vincent Micheli, Karthigan Sinnathamby, François Fleuret

We introduce a new reinforcement learning approach combining a planning quasi-metric (PQM) that estimates the number of steps required to go from any state to another, with task-specific "aimers" that compute a target state to reach a given goal.

no code implementations • ICML 2020 • Prabhu Teja Sivaprasad, Florian Mai, Thijs Vogels, Martin Jaggi, François Fleuret

The performance of optimizers, particularly in deep learning, depends considerably on their chosen hyperparameter configuration.

2 code implementations • 3 May 2019 • Angelos Katharopoulos, François Fleuret

We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure.

no code implementations • NeurIPS 2019 • Tatjana Chavdarova, Gauthier Gidel, François Fleuret, Simon Lacoste-Julien

We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges.

2 code implementations • ICML 2018 • Angelos Katharopoulos, François Fleuret

Deep neural network training spends most of the computation on examples that are properly handled and could be ignored.

no code implementations • CVPR 2018 • Tatjana Chavdarova, François Fleuret

The Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis, and are now used in a wide range of computer vision tasks.

no code implementations • 28 Jul 2017 • Tatjana Chavdarova, Pierre Baqué, Stéphane Bouquet, Andrii Maksai, Cijo Jose, Louis Lettry, Pascal Fua, Luc van Gool, François Fleuret

People detection methods are highly sensitive to the perpetual occlusions among the targets.

1 code implementation • 31 May 2017 • Angelos Katharopoulos, François Fleuret

Importance sampling has been successfully used to accelerate stochastic optimization in many convex problems.
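
The unbiased reweighting this relies on can be sketched with toy numbers (this is the generic construction, not the paper's sampling scheme): drawing example i with probability q_i and weighting its contribution by 1/(N q_i) keeps the estimator unbiased, and sampling proportionally to the loss makes this particular estimator zero-variance for the loss itself.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = np.array([0.1, 0.1, 0.1, 5.0, 3.0])   # toy per-example losses
N = len(losses)
q = losses / losses.sum()                      # sample prop. to loss

def sampled_mean_loss(n_draws=200000):
    # Draw indices from q and reweight by 1 / (N * q_i): each term then
    # equals losses.sum() / N exactly, so the estimate is unbiased for
    # losses.mean() with zero variance under this choice of q.
    idx = rng.choice(N, size=n_draws, p=q)
    weights = 1.0 / (N * q[idx])
    return np.mean(weights * losses[idx])

est = sampled_mean_loss()
```

In SGD the same weights multiply the per-example gradients, so the expected update matches uniform sampling while hard examples are visited more often.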

2 code implementations • ICCV 2017 • Pierre Baqué, François Fleuret, Pascal Fua

People detection in single 2D images has improved greatly in recent years.

Ranked #8 on Multiview Detection on MultiviewX

no code implementations • 15 Feb 2017 • Tatjana Chavdarova, François Fleuret

The former does not exploit joint information, whereas the latter deals with ambiguous input due to the foreground blobs becoming more and more interconnected as the number of targets increases.

no code implementations • CVPR 2017 • Timur Bagautdinov, Alexandre Alahi, François Fleuret, Pascal Fua, Silvio Savarese

We present a unified framework for understanding human social behaviors in raw image sequences.

Ranked #2 on Action Recognition on Volleyball

no code implementations • CVPR 2017 • Pierre Baqué, François Fleuret, Pascal Fua

Mean Field inference is central to statistical physics.

no code implementations • 13 Oct 2016 • François Fleuret

We investigate how a residual network can learn to predict the dynamics of interacting shapes purely as an image-to-image regression task.

1 code implementation • NeurIPS 2017 • James Newling, François Fleuret

We run experiments showing that algorithm clarans (Ng et al., 2005) finds better K-medoids solutions than the Voronoi iteration algorithm.

Data Structures and Algorithms

no code implementations • 23 May 2016 • James Newling, François Fleuret

We present a new algorithm, trimed, for obtaining the medoid of a set, that is the element of the set which minimises the mean distance to all other elements.
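
For reference, a sketch of the naive O(N^2) baseline that trimed accelerates: compute all pairwise distances and take the element with the smallest mean distance.

```python
import numpy as np

def medoid(X):
    # Brute-force medoid: index of the element minimising the mean
    # distance to all other elements (O(N^2) distance evaluations).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return int(np.argmin(D.mean(axis=1)))

# Points on a line: the middle element is the medoid.
pts = np.array([[0.0], [1.0], [2.0], [3.0], [10.0]])
i = medoid(pts)
```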

1 code implementation • NeurIPS 2016 • James Newling, François Fleuret

A new algorithm is proposed which accelerates the mini-batch k-means algorithm of Sculley (2010) by using the distance bounding approach of Elkan (2003).
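
A compact sketch of Sculley-style mini-batch k-means, the algorithm being accelerated (the first-k initialisation and fixed iteration count are simplifications of mine): assign a small random batch to the nearest centers, then move each center towards its batch points with a per-center learning rate 1/count.

```python
import numpy as np

def minibatch_kmeans(X, k, batch_size=32, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[:k].astype(float).copy()   # naive init; k-means++ is better
    counts = np.zeros(k)
    for _ in range(iters):
        batch = X[rng.choice(len(X), batch_size)]
        # Squared distances from each batch point to each center
        d = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        for x, c in zip(batch, d.argmin(axis=1)):
            counts[c] += 1
            centers[c] += (x - centers[c]) / counts[c]   # running mean
    return centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(100, 2)),
               rng.normal(5.0, 0.3, size=(100, 2))])
X[[1, 100]] = X[[100, 1]]      # ensure the two init points differ
centers = minibatch_kmeans(X, 2)
```

Elkan-style distance bounds then avoid most of the distance computations in the assignment step, which is where this sketch spends its time.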

2 code implementations • 8 Feb 2016 • James Newling, François Fleuret

We propose a novel accelerated exact k-means algorithm, which performs better than the current state-of-the-art low-dimensional algorithm in 18 of 22 experiments, running up to 3 times faster.

no code implementations • NeurIPS 2015 • Mohammad E. Khan, Pierre Baque, François Fleuret, Pascal Fua

Secondly, we use the proximal framework to derive efficient variational algorithms for non-conjugate models.

no code implementations • CVPR 2016 • Pierre Baqué, Timur Bagautdinov, François Fleuret, Pascal Fua

Mean-field variational inference is one of the most popular approaches to inference in discrete random fields.

no code implementations • 20 Feb 2015 • Pierre Baqué, Jean-Hubert Hours, François Fleuret, Pascal Fua

Mean-Field is an efficient way to approximate a posterior distribution in complex graphical models and constitutes the most popular class of Bayesian variational approximation methods.

no code implementations • NeurIPS 2013 • Leonidas Lefakis, François Fleuret

We propose to train an ensemble with the help of a reservoir in which the learning algorithm can store a limited number of samples.
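
Classic reservoir sampling is one standard way to maintain such a fixed-size sample store in a single pass over a stream (a generic sketch, not necessarily the paper's storage policy): every stream element ends up in the reservoir with equal probability, using O(capacity) memory.

```python
import random

def reservoir_sample(stream, capacity, seed=0):
    # Algorithm R: keep the first `capacity` items, then replace a random
    # slot with the t-th item with probability capacity / (t + 1).
    rng = random.Random(seed)
    reservoir = []
    for t, item in enumerate(stream):
        if t < capacity:
            reservoir.append(item)
        else:
            j = rng.randint(0, t)       # uniform over [0, t], inclusive
            if j < capacity:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10000), 32)
```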

Papers With Code is a free resource with all data licensed under CC-BY-SA.