Search Results for author: Chris Chafe

Found 5 papers, 0 papers with code

Content Adaptive Front End For Audio Signal Processing

no code implementations · 18 Mar 2023 · Prateek Verma, Chris Chafe

In this work, we propose a way of computing a content-adaptive learnable time-frequency representation.

Audio Signal Processing · Scene Understanding
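The idea of a learnable time-frequency front end can be sketched minimally: where an STFT projects overlapping frames onto fixed complex sinusoids, a learnable front end projects them onto an arbitrary filterbank matrix, which could be trained or, in the content-adaptive setting the paper describes, predicted from the signal itself. Everything below (function name, shapes, the random filterbank) is illustrative, not the paper's architecture.

```python
import numpy as np

def learnable_tf_frontend(signal, filters, hop=64):
    """Project overlapping frames of a signal onto a filterbank.

    A fixed STFT uses complex sinusoids as its filters; here the
    filterbank is a plain matrix, so it could be learned (and, in a
    content-adaptive setting, predicted from the signal itself).
    """
    frame_len = filters.shape[1]
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    frames = np.stack(frames)           # (n_frames, frame_len)
    return frames @ filters.T           # (n_frames, n_filters)

rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
bank = rng.standard_normal((32, 128))   # 32 stand-in "learnable" filters
tf = learnable_tf_frontend(sig, bank)
print(tf.shape)  # (15, 32)
```

With `bank` set to rows of windowed sinusoids this reduces to an ordinary spectrogram, which is why such a front end is a drop-in replacement for one.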

One-Shot Acoustic Matching Of Audio Signals -- Learning to Hear Music In Any Room / Concert Hall

no code implementations · 27 Oct 2022 · Prateek Verma, Chris Chafe, Jonathan Berger

Typically, researchers use an excitation such as a pistol shot or balloon pop as an impulse signal with which an auralization can be created.
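The auralization step the abstract refers to is, classically, a convolution: once an impulse response has been captured (e.g. from a pistol shot or balloon pop), any dry recording can be rendered as if played in that space by convolving the two. A minimal sketch with a toy impulse response (the IR and signal lengths here are arbitrary):

```python
import numpy as np

def auralize(dry, impulse_response):
    """Render a dry signal as if played in a room by convolving it
    with that room's impulse response (classically measured with an
    excitation such as a pistol shot or balloon pop)."""
    return np.convolve(dry, impulse_response)

# Toy IR: direct path plus a single echo 100 samples later.
ir = np.zeros(200)
ir[0] = 1.0
ir[100] = 0.5
dry = np.random.default_rng(1).standard_normal(1000)
wet = auralize(dry, ir)
print(wet.shape)  # (1199,)
```

The paper's one-shot matching replaces the measured excitation with a model that infers the room's acoustic transformation directly from audio recorded in it.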

A Generative Model for Raw Audio Using Transformer Architectures

no code implementations · 30 Jun 2021 · Prateek Verma, Chris Chafe

We show how causal transformer generative models can be used for raw waveform synthesis.

Audio Synthesis
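The key property a causal transformer needs for raw waveform synthesis is that each output sample attends only to current and past inputs, so the model can generate autoregressively. A single-head, numpy-only sketch of that masking (not the paper's model, which would add projections, multiple heads, and many layers):

```python
import numpy as np

def causal_self_attention(x):
    """Single-head self-attention with a causal (lower-triangular) mask,
    so output t depends only on inputs 0..t -- the property needed for
    autoregressive waveform generation."""
    t, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    mask = np.tril(np.ones((t, t), dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

x = np.random.default_rng(2).standard_normal((16, 8))
y = causal_self_attention(x)
print(y.shape)  # (16, 8)
```

Note the first output row equals the first input row: with the mask in place, position 0 can only attend to itself.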

A Deep Learning Approach for Low-Latency Packet Loss Concealment of Audio Signals in Networked Music Performance Applications

no code implementations · 14 Jul 2020 · Prateek Verma, Alessandro Ilic Mezza, Chris Chafe, Cristina Rottondi

Networked Music Performance (NMP) is envisioned as a potential game changer among Internet applications: it aims at revolutionizing the traditional concept of musical interaction by enabling remote musicians to interact and perform together through a telecommunication network.

Packet Loss Concealment
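For context, the classical concealment baseline that a deep model would replace is a simple heuristic such as repeating the previous packet with a fade-out. The sketch below shows that baseline only (the paper's contribution is a learned predictor of the missing samples, not this heuristic):

```python
import numpy as np

def conceal_lost_packet(prev_packet):
    """Naive concealment: repeat the previous packet with a linear
    fade-out. A deep PLC model instead predicts the missing samples
    from past context, at low enough latency for live performance."""
    fade = np.linspace(1.0, 0.0, len(prev_packet))
    return prev_packet * fade

packet = np.ones(128)          # stand-in for the last received packet
filled = conceal_lost_packet(packet)
print(filled[0], filled[-1])   # 1.0 0.0
```

The fade avoids the hard discontinuity that straight repetition would introduce at the packet boundary, which is audible as a click.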

Neuralogram: A Deep Neural Network Based Representation for Audio Signals

no code implementations · 10 Apr 2019 · Prateek Verma, Chris Chafe, Jonathan Berger

We propose the Neuralogram -- a deep-neural-network-based representation for understanding audio signals which, as the name suggests, transforms an audio signal into a dense, compact representation built from embeddings learned via a neural architecture.

Music Recommendation
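Structurally, such a representation stacks per-window neural embeddings over time the way a spectrogram stacks per-window spectra. The sketch below shows only that stacking; the `embed` callable stands in for a trained network (here a random linear projection), and all names and shapes are illustrative:

```python
import numpy as np

def embedding_gram(signal, embed, win=256, hop=128):
    """Stack per-window embeddings into a 2D representation, analogous
    to a spectrogram but with a neural embedding replacing the FFT.
    `embed` is a placeholder for a trained network."""
    cols = [embed(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, hop)]
    return np.stack(cols, axis=1)      # (embedding_dim, n_windows)

rng = np.random.default_rng(3)
proj = rng.standard_normal((64, 256))  # stand-in for a trained network
sig = rng.standard_normal(2048)
rep = embedding_gram(sig, lambda w: proj @ w)
print(rep.shape)  # (64, 15)
```

Because the columns come from learned embeddings rather than spectra, the resulting image can encode higher-level structure useful for tasks such as music recommendation.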
