Search Results for author: Emilian Postolache

Found 10 papers, 7 papers with code

Naturalistic Music Decoding from EEG Data via Latent Diffusion Models

no code implementations · 15 May 2024 · Emilian Postolache, Natalia Polouliakh, Hiroaki Kitano, Akima Connelly, Emanuele Rodolà, Luca Cosmo, Taketo Akama

In this article, we explore the potential of using latent diffusion models, a family of powerful generative models, for the task of reconstructing naturalistic music from electroencephalogram (EEG) recordings.

EEG

COCOLA: Coherence-Oriented Contrastive Learning of Musical Audio Representations

1 code implementation · 25 Apr 2024 · Ruben Ciranni, Emilian Postolache, Giorgio Mariani, Michele Mancusi, Luca Cosmo, Emanuele Rodolà

We present COCOLA (Coherence-Oriented Contrastive Learning for Audio), a contrastive learning method for musical audio representations that captures the harmonic and rhythmic coherence between samples.
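The coherence objective can be illustrated with a generic InfoNCE-style contrastive loss over paired embeddings: each anchor clip should match its own coherent counterpart over the other samples in the batch. This is a hedged sketch of the general technique, not the actual COCOLA objective; the function names, embedding dimensions, and temperature value are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss (generic sketch, not COCOLA's exact
    objective): each anchor embedding should be most similar to its own
    positive (e.g. a rhythmically/harmonically coherent companion clip)."""
    # L2-normalise so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching pairs sit on the diagonal; minimise their negative log-prob
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
loss = contrastive_loss(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
```

With random embeddings the loss sits near log(batch size); training pulls coherent pairs together and pushes incoherent ones apart.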

Contrastive Learning · Music Generation

Generalized Multi-Source Inference for Text Conditioned Music Diffusion Models

1 code implementation · 18 Mar 2024 · Emilian Postolache, Giorgio Mariani, Luca Cosmo, Emmanouil Benetos, Emanuele Rodolà

Multi-Source Diffusion Models (MSDM) allow for compositional musical generation tasks: generating a set of coherent sources, creating accompaniments, and performing source separation.

Zero-Shot Duet Singing Voices Separation with Diffusion Models

1 code implementation · 13 Nov 2023 · Chin-Yun Yu, Emilian Postolache, Emanuele Rodolà, György Fazekas

In this paper, we examine this problem in the context of duet singing voices separation, and propose a method to enforce the coherency of singer identity by splitting the mixture into overlapping segments and performing posterior sampling in an auto-regressive manner, conditioning on the previous segment.
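The segmentation scheme described above can be sketched structurally as follows. The diffusion posterior sampling itself is omitted, so this is only an illustrative helper showing the overlapping windows on which the auto-regressive conditioning operates; the segment length, hop size, and names are assumptions, not the paper's implementation.

```python
import numpy as np

def overlapping_segments(mixture, seg_len, hop):
    """Split a 1-D mixture into overlapping segments (illustrative sketch).
    In the method described above, each segment is separated by diffusion
    posterior sampling conditioned on its overlap with the previously
    separated segment, enforcing a consistent singer identity."""
    segments = []
    for start in range(0, len(mixture) - seg_len + 1, hop):
        segments.append(mixture[start:start + seg_len])
    return np.stack(segments)

x = np.arange(16.0)                               # stand-in for an audio mixture
segs = overlapping_segments(x, seg_len=8, hop=4)  # 50% overlap between neighbours
```

The second half of each segment equals the first half of the next, which is what gives the auto-regressive pass something to condition on.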

SyncFusion: Multimodal Onset-synchronized Video-to-Audio Foley Synthesis

no code implementations · 23 Oct 2023 · Marco Comunità, Riccardo F. Gramaccioni, Emilian Postolache, Emanuele Rodolà, Danilo Comminiello, Joshua D. Reiss

Sound design involves creatively selecting, recording, and editing sound effects for various media like cinema, video games, and virtual/augmented reality.

Accelerating Transformer Inference for Translation via Parallel Decoding

3 code implementations · 17 May 2023 · Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, Emanuele Rodolà

We propose to reframe the standard greedy autoregressive decoding used in machine translation as a parallel formulation that leverages Jacobi and Gauss-Seidel fixed-point iteration methods for fast inference.
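The Jacobi fixed-point view can be sketched with a toy deterministic "model" standing in for a real MT decoder (the transition function and all names below are illustrative assumptions): every position is updated in parallel from the previous iterate, and because position i depends only on positions before it, the iteration reaches the greedy autoregressive output in at most `length` steps, often fewer.

```python
def greedy_next(prefix):
    """Toy deterministic 'model': the next token is a function of the prefix.
    Stands in for the argmax over a real decoder's next-token distribution."""
    return (sum(prefix) * 31 + 7) % 50

def autoregressive_decode(length):
    """Standard greedy decoding: one model call per position, sequentially."""
    y = []
    for _ in range(length):
        y.append(greedy_next(y))
    return y

def jacobi_decode(length):
    """Jacobi fixed-point iteration: update ALL positions in parallel from
    the previous iterate; the triangular dependency structure guarantees
    convergence to the greedy solution in at most `length` iterations."""
    y = [0] * length                   # arbitrary initial guess
    for _ in range(length):
        new = [greedy_next(y[:i]) for i in range(length)]  # parallelisable
        if new == y:                   # fixed point reached: done early
            break
        y = new
    return y

jac = jacobi_decode(8)
ar = autoregressive_decode(8)
```

The speed-up comes from each Jacobi sweep being a single batched model call instead of `length` sequential ones, with convergence typically in far fewer than `length` sweeps.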

Machine Translation · Translation

Multi-Source Diffusion Models for Simultaneous Music Generation and Separation

1 code implementation · 4 Feb 2023 · Giorgio Mariani, Irene Tallini, Emilian Postolache, Michele Mancusi, Luca Cosmo, Emanuele Rodolà

In this work, we define a diffusion-based generative model capable of both music synthesis and source separation by learning the score of the joint probability density of sources sharing a context.
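As a toy illustration of how one joint score can serve both tasks, the sketch below replaces the diffusion sampler with plain unadjusted Langevin dynamics and the learned network with a Gaussian score; separation then becomes sampling from the joint while projecting onto the constraint that the sources sum to the observed mixture. This is a heavily simplified, assumption-laden sketch, not the MSDM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_score(x):
    """Toy score (gradient of the log-density) of a standard-normal joint
    over the sources; a stand-in for the learned score network."""
    return -x

def separate(mixture, n_sources, steps=200, eps=1e-2):
    """Separation as constrained sampling (simplified sketch): Langevin
    dynamics driven by the JOINT score, with each step projected onto
    the linear constraint that the sources sum to the mixture.
    Dropping the projection would give unconditional generation instead."""
    x = rng.normal(size=(n_sources, mixture.shape[0]))
    for _ in range(steps):
        x = x + eps * joint_score(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)
        x = x + (mixture - x.sum(axis=0)) / n_sources  # enforce sum constraint
    return x

sources = separate(np.ones(4), n_sources=2)
```

The same score function drives both behaviours; only the presence of the mixture constraint distinguishes separation from synthesis.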

Imputation · Music Generation

Latent Autoregressive Source Separation

1 code implementation · 9 Jan 2023 · Emilian Postolache, Giorgio Mariani, Michele Mancusi, Andrea Santilli, Luca Cosmo, Emanuele Rodolà

Autoregressive models have achieved impressive results over a wide range of domains in terms of generation quality and downstream task performance.

Dimensionality Reduction

Adversarial Permutation Invariant Training for Universal Sound Separation

no code implementations · 21 Oct 2022 · Emilian Postolache, Jordi Pons, Santiago Pascual, Joan Serrà

Universal sound separation consists of separating mixtures containing arbitrary sounds of different types; permutation invariant training (PIT) is used to train source-agnostic models for this task.
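The PIT idea can be sketched as a loss that searches over assignments of model outputs to reference sources: because the model is source-agnostic, there is no fixed output order, so training scores every permutation and backpropagates through the best one. A minimal version is below (exhaustive permutation search and MSE as the base loss are illustrative choices, not this paper's adversarial formulation).

```python
import itertools
import numpy as np

def pit_loss(estimates, references):
    """Permutation invariant training loss (generic sketch): evaluate the
    base loss under every assignment of estimated sources to references
    and keep the minimum, so output ordering never penalises the model."""
    n = len(references)
    best = np.inf
    for perm in itertools.permutations(range(n)):
        err = np.mean([np.mean((estimates[p] - references[i]) ** 2)
                       for i, p in enumerate(perm)])
        best = min(best, err)
    return best

refs = np.array([[1.0, 0.0], [0.0, 1.0]])
swapped = refs[::-1]  # correct sources, wrong output order: loss should be 0
```

Exhaustive search is factorial in the number of sources, which is why practical systems with many sources use Hungarian-style matching instead.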
