Search Results for author: Andrea Agostinelli

Found 10 papers, 3 papers with code

MAD Speech: Measures of Acoustic Diversity of Speech

no code implementations • 16 Apr 2024 • Matthieu Futeral, Andrea Agostinelli, Marco Tagliasacchi, Neil Zeghidour, Eugene Kharitonov

Using these datasets, we demonstrate that our proposed metrics achieve a stronger agreement with the ground-truth diversity than baselines.

MusicRL: Aligning Music Generation to Human Preferences

no code implementations • 6 Feb 2024 • Geoffrey Cideron, Sertan Girgin, Mauro Verzetti, Damien Vincent, Matej Kastelic, Zalán Borsos, Brian McWilliams, Victor Ungureanu, Olivier Bachem, Olivier Pietquin, Matthieu Geist, Léonard Hussenot, Neil Zeghidour, Andrea Agostinelli

MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards.

Music Generation
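The MusicRL snippet above describes finetuning an autoregressive token model with reinforcement learning to maximise a sequence-level reward. As a toy illustration only (this is not the actual MusicRL training loop, and the vocabulary, reward, and hyperparameters below are invented for the sketch), a REINFORCE update on a categorical token policy looks like:

```python
import numpy as np

# Toy REINFORCE sketch: a categorical policy over V "audio tokens" is
# nudged toward sequences that a black-box sequence-level reward scores
# highly. All names and numbers here are assumptions for illustration.
rng = np.random.default_rng(0)
V, T, lr = 8, 4, 0.3          # vocab size, sequence length, step size
logits = np.zeros(V)

def reward(seq):
    # toy sequence-level reward: fraction of positions holding token 3
    return float(np.mean(seq == 3))

for _ in range(500):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    seq = rng.choice(V, size=T, p=p)
    r = reward(seq)
    # REINFORCE ascent: grad of log softmax prob is (onehot - p)
    for tok in seq:
        g = -p.copy()
        g[tok] += 1.0
        logits += lr * r * g

p = np.exp(logits - logits.max())
p /= p.sum()                   # policy now concentrates on token 3
```

The key point the snippet makes is that the reward is attached to the whole generated sequence, not to individual tokens, which is why an RL objective is used instead of ordinary next-token likelihood.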

Brain2Music: Reconstructing Music from Human Brain Activity

no code implementations • 20 Jul 2023 • Timo I. Denk, Yu Takagi, Takuya Matsuyama, Andrea Agostinelli, Tomoya Nakai, Christian Frank, Shinji Nishimoto

The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world.

Music Generation, Retrieval

SingSong: Generating musical accompaniments from singing

no code implementations • 30 Jan 2023 • Chris Donahue, Antoine Caillon, Adam Roberts, Ethan Manilow, Philippe Esling, Andrea Agostinelli, Mauro Verzetti, Ian Simon, Olivier Pietquin, Neil Zeghidour, Jesse Engel

We present SingSong, a system that generates instrumental music to accompany input vocals, potentially offering musicians and non-musicians alike an intuitive new way to create music featuring their own voice.

Audio Generation, Retrieval

How stable are Transferability Metrics evaluations?

no code implementations • 4 Apr 2022 • Andrea Agostinelli, Michal Pándy, Jasper Uijlings, Thomas Mensink, Vittorio Ferrari

Transferability metrics are a maturing field with increasing interest, aiming to provide heuristics for selecting the most suitable source models to transfer to a given target dataset without fine-tuning them all.

Image Classification, Semantic Segmentation

Transferability Metrics for Selecting Source Model Ensembles

no code implementations • CVPR 2022 • Andrea Agostinelli, Jasper Uijlings, Thomas Mensink, Vittorio Ferrari

We address the problem of ensemble selection in transfer learning: Given a large pool of source models we want to select an ensemble of models which, after fine-tuning on the target training set, yields the best performance on the target test set.

Semantic Segmentation, Transfer Learning
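The snippet above frames ensemble selection as choosing, from a large pool of source models, the subset expected to perform best after fine-tuning. The simplest baseline this setting implies is ranking models by a precomputed transferability score and taking the top-k; a minimal sketch (the model names and scores below are invented for illustration, not from the paper):

```python
# Hypothetical pool of source models with precomputed transferability
# scores for a fixed target dataset (all values are made up).
scores = {
    "resnet50_imagenet": 0.82,
    "vit_b16_jft": 0.91,
    "resnet101_places": 0.67,
    "effnet_b4_inat": 0.88,
}

k = 2
# Greedy baseline: rank by score, keep the k highest-scoring models.
ensemble = sorted(scores, key=scores.get, reverse=True)[:k]
```

The paper's point is that scoring models individually like this ignores ensemble diversity, which is what dedicated ensemble-level transferability metrics try to capture.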

Transferability Estimation using Bhattacharyya Class Separability

no code implementations • CVPR 2022 • Michal Pándy, Andrea Agostinelli, Jasper Uijlings, Vittorio Ferrari, Thomas Mensink

Then, we estimate their pairwise class separability using the Bhattacharyya coefficient, yielding a simple and effective measure of how well the source model transfers to the target task.

Classification, Image Classification, +2

Memory-Efficient Episodic Control Reinforcement Learning with Dynamic Online k-means

1 code implementation • 21 Nov 2019 • Andrea Agostinelli, Kai Arulkumaran, Marta Sarrico, Pierre Richemond, Anil Anthony Bharath

Recently, neuro-inspired episodic control (EC) methods have been developed to overcome the data-inefficiency of standard deep reinforcement learning approaches.

Atari Games, Clustering, +3
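The title above combines episodic control with dynamic online k-means: instead of storing every visited state, a fixed number of memory slots are maintained as centroids updated incrementally. A hedged sketch of that idea (this is an illustration of the standard online k-means update c ← c + (x − c)/n applied to an episodic memory, not the paper's exact algorithm):

```python
import numpy as np

class OnlineKMeansMemory:
    """Sketch: fixed-size episodic memory with k centroid slots.

    Memory stays O(k) no matter how many states are written, because
    each write just moves the nearest centroid toward the new key.
    """

    def __init__(self, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(k, dim))
        self.counts = np.zeros(k, dtype=int)
        self.values = np.zeros(k)  # per-slot return estimates

    def write(self, key, value):
        # nearest centroid absorbs the new state embedding
        i = int(np.argmin(np.linalg.norm(self.centroids - key, axis=1)))
        self.counts[i] += 1
        # online k-means update: step size shrinks as the slot fills
        self.centroids[i] += (key - self.centroids[i]) / self.counts[i]
        # episodic-control style: remember the best return for this slot
        self.values[i] = max(self.values[i], value)
        return i

    def read(self, key):
        i = int(np.argmin(np.linalg.norm(self.centroids - key, axis=1)))
        return self.values[i]
```

The design trade-off the title points at: bounding memory this way sacrifices exact recall of individual experiences for a compressed, clustered summary.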

Sample-Efficient Reinforcement Learning with Maximum Entropy Mellowmax Episodic Control

1 code implementation • 21 Nov 2019 • Marta Sarrico, Kai Arulkumaran, Andrea Agostinelli, Pierre Richemond, Anil Anthony Bharath

Deep networks have enabled reinforcement learning to scale to more complex and challenging domains, but these methods typically require large quantities of training data.

Atari Games, reinforcement-learning, +1
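The mellowmax operator named in the title above is a smooth alternative to the hard max over action values: mm_ω(x) = (1/ω) log((1/n) Σ_i exp(ω x_i)), which interpolates between the mean (ω → 0) and the max (ω → ∞). A minimal sketch of the operator itself (only the operator, not the paper's full episodic-control agent):

```python
import numpy as np

def mellowmax(x, omega=5.0):
    """Mellowmax: (1/omega) * log(mean(exp(omega * x))).

    Interpolates between mean(x) as omega -> 0 and max(x) as
    omega -> inf. Uses the log-sum-exp shift for numerical stability.
    """
    x = np.asarray(x, dtype=float)
    m = x.max()  # shift so the exponentials cannot overflow
    return float(m + np.log(np.mean(np.exp(omega * (x - m)))) / omega)

vals = np.array([1.0, 2.0, 3.0])
soft = mellowmax(vals, omega=100.0)   # close to max(vals) = 3
broad = mellowmax(vals, omega=0.01)   # close to mean(vals) = 2
```

Unlike the Boltzmann softmax, mellowmax is a non-expansion, which is why it is attractive for value-based reinforcement learning updates.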
