7 code implementations • 11 Apr 2024 • Chin-Yun Yu, Christopher Mitcheltree, Alistair Carson, Stefan Bilbao, Joshua D. Reiss, György Fazekas
Infinite impulse response filters are an essential building block of many time-varying audio systems, such as audio effects and synthesisers.
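To illustrate the building block the abstract refers to, here is a minimal sketch (not the paper's method) of a time-varying IIR filter: a one-pole low-pass whose coefficient changes on every sample, the basic mechanism behind time-varying audio systems.

```python
import numpy as np

# Hedged sketch, not the paper's algorithm: a one-pole IIR low-pass with a
# per-sample coefficient a[n], i.e. y[n] = (1 - a[n]) * x[n] + a[n] * y[n-1].
def time_varying_onepole(x, a):
    """Apply a one-pole low-pass whose coefficient a[n] (0 <= a[n] < 1)
    varies over time; larger a[n] means heavier smoothing."""
    y = np.zeros_like(x, dtype=float)
    state = 0.0
    for n in range(len(x)):
        state = (1.0 - a[n]) * x[n] + a[n] * state
        y[n] = state
    return y

x = np.zeros(8)
x[0] = 1.0                       # impulse input
a = np.linspace(0.0, 0.9, 8)     # coefficient sweeps over time
y = time_varying_onepole(x, a)   # impulse response under a moving coefficient
```

Because the coefficient depends on the sample index, the recursion cannot be expressed as a single fixed transfer function, which is what makes such filters non-trivial to differentiate through or train.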
no code implementations • 23 Oct 2023 • Marco Comunità, Riccardo F. Gramaccioni, Emilian Postolache, Emanuele Rodolà, Danilo Comminiello, Joshua D. Reiss
Sound design involves creatively selecting, recording, and editing sound effects for various media like cinema, video games, and virtual/augmented reality.
no code implementations • 26 Sep 2023 • Mateo Cámara, Zhiyuan Xu, Yisu Zong, José Luis Blanco, Joshua D. Reiss
We present an unsupervised approach to optimize and evaluate the synthesis of non-speech audio effects from a speech production model.
1 code implementation • 22 May 2023 • Christopher Mitcheltree, Christian J. Steinmetz, Marco Comunità, Joshua D. Reiss
Low-frequency oscillator (LFO) driven audio effects, such as phaser, flanger, and chorus, modify an input signal using time-varying filters and delays, resulting in characteristic sweeping or widening effects.
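As a minimal sketch of the effect family described above (generic, not the paper's model), the following flanger mixes the dry signal with a copy whose delay is swept by a sinusoidal LFO:

```python
import numpy as np

# Bare-bones flanger sketch: an LFO sweeps a short fractional delay and the
# delayed signal is summed with the dry input, creating a moving comb filter.
# Parameter names and defaults are illustrative assumptions.
def flanger(x, sr, rate_hz=0.5, max_delay_s=0.003, depth=1.0):
    n = np.arange(len(x))
    # LFO in [0, 1] sweeps the delay between 0 and max_delay (in samples)
    lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))
    delay = lfo * max_delay_s * sr
    idx = n - delay
    # linear interpolation of the fractional delay position
    i0 = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    i1 = np.clip(i0 + 1, 0, len(x) - 1)
    frac = idx - np.floor(idx)
    delayed = (1 - frac) * x[i0] + frac * x[i1]
    return x + depth * delayed

sr = 8000
t = np.arange(sr) / sr
y = flanger(np.sin(2 * np.pi * 220 * t), sr)
```

A chorus follows the same structure with longer delays and lower depth; a phaser replaces the delay line with LFO-swept all-pass filters.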
no code implementations • 1 Nov 2022 • Marco Comunità, Christian J. Steinmetz, Huy Phan, Joshua D. Reiss
Deep learning approaches for black-box modelling of audio effects have shown promise; however, most existing work focuses on nonlinear effects with behaviour on relatively short time-scales, such as guitar amplifiers and distortion.
1 code implementation • 6 Dec 2021 • Christian J. Steinmetz, Joshua D. Reiss
Applications of deep learning for audio effects often focus on modeling analog effects or learning to control effects to emulate a trained audio engineer.
no code implementations • 18 Oct 2021 • Marco Comunità, Huy Phan, Joshua D. Reiss
Footsteps are among the most ubiquitous sound effects in multimedia applications.
1 code implementation • 7 Oct 2021 • Joseph T. Colonel, Christian J. Steinmetz, Marcus Michelen, Joshua D. Reiss
In this work, we address some of these limitations by learning a direct mapping from the target magnitude response to the filter coefficient space with a neural network trained on millions of random filters.
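For background on the task described above (generic code, not the paper's implementation): the forward map from biquad coefficients to a magnitude response is straightforward to evaluate, and it is the inverse of this map that the network learns to approximate.

```python
import numpy as np

# Forward direction of the filter-design problem: evaluate the magnitude
# response |H(e^{jw})| of a single IIR biquad from its coefficients.
# The paper's network learns the (harder) inverse: response -> coefficients.
def biquad_mag_response(b, a, n_points=128):
    w = np.linspace(0, np.pi, n_points)   # normalised angular frequency
    z = np.exp(-1j * w)                   # z^{-1} evaluated on the unit circle
    num = b[0] + b[1] * z + b[2] * z ** 2
    den = a[0] + a[1] * z + a[2] * z ** 2
    return np.abs(num / den)

# sanity check: the identity filter b = a = [1, 0, 0] has a flat response
h = biquad_mag_response([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

Training data for such a model can be generated cheaply by sampling random stable coefficients and computing their responses with exactly this kind of forward evaluation.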
1 code implementation • 4 Oct 2021 • Christian J. Steinmetz, Joshua D. Reiss
In this work, we propose WaveBeat, an end-to-end approach for joint beat and downbeat tracking operating directly on waveforms.
1 code implementation • 11 Feb 2021 • Christian J. Steinmetz, Joshua D. Reiss
Deep learning approaches have demonstrated success in modeling analog audio effects.
1 code implementation • 6 Dec 2020 • Marco Comunità, Dan Stowell, Joshua D. Reiss
Despite the popularity of guitar effects, there is very little existing research on classification and parameter estimation of specific plugins or effect units from guitar recordings.
1 code implementation • 8 Oct 2020 • Christian J. Steinmetz, Joshua D. Reiss
By processing audio signals in the time-domain with randomly weighted temporal convolutional networks (TCNs), we uncover a wide range of novel, yet controllable overdrive effects.
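A toy sketch of the idea above, with assumed details rather than the paper's architecture: stacking randomly weighted dilated 1-D convolutions with a saturating nonlinearity already produces an uncontrolled but overdrive-like transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dilated_conv(x, w, dilation):
    # causal dilated convolution: y[n] = sum_k w[k] * x[n - k * dilation]
    y = np.zeros_like(x)
    for k, wk in enumerate(w):
        shifted = np.roll(x, k * dilation)
        shifted[: k * dilation] = 0.0    # zero-pad instead of wrapping around
        y += wk * shifted
    return y

def random_tcn(x, layers=3, kernel=3):
    # randomly weighted (untrained) temporal convolution stack; tanh acts
    # as the saturating nonlinearity that gives the overdrive character
    for layer in range(layers):
        w = rng.normal(0.0, 0.5, size=kernel)
        x = np.tanh(dilated_conv(x, w, dilation=2 ** layer))
    return x

t = np.arange(2048) / 2048
y = random_tcn(np.sin(2 * np.pi * 5 * t))
```

Resampling the weights yields a different effect each time, which is the sense in which random networks span a range of novel yet controllable distortions.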
1 code implementation • 22 Oct 2019 • Marco A. Martínez Ramírez, Emmanouil Benetos, Joshua D. Reiss
Plate and spring reverberators are electromechanical systems that were first used and researched as a means of substituting for real room reverberation.
no code implementations • 15 May 2019 • Marco A. Martínez Ramírez, Emmanouil Benetos, Joshua D. Reiss
Audio processors whose parameters are modified periodically over time are often referred to as time-varying or modulation-based audio effects.
1 code implementation • 31 Jan 2019 • William J. Wilkinson, Michael Riis Andersen, Joshua D. Reiss, Dan Stowell, Arno Solin
A typical audio signal processing pipeline includes multiple disjoint analysis stages, including calculation of a time-frequency representation followed by spectrogram-based feature analysis.
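The disjoint pipeline mentioned above can be sketched in a few lines (standard STFT magnitudes followed by a generic spectrogram-based feature; function names are illustrative, not the paper's):

```python
import numpy as np

# Stage 1: time-frequency representation via a Hann-windowed STFT magnitude.
def stft_mag(x, win=256, hop=128):
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=-1))

# Stage 2: a spectrogram-based feature, here the spectral centroid per frame.
def spectral_centroid(mag, sr):
    freqs = np.fft.rfftfreq(2 * (mag.shape[1] - 1), d=1.0 / sr)
    return (mag * freqs).sum(axis=1) / (mag.sum(axis=1) + 1e-12)

sr = 8000
t = np.arange(sr) / sr
mag = stft_mag(np.sin(2 * np.pi * 440 * t))
centroid = spectral_centroid(mag, sr)
```

The two stages are independent, with fixed windowing decisions made up front; the probabilistic models in this line of work aim to replace such disjoint analysis with a single coherent generative model.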
1 code implementation • 6 Nov 2018 • William J. Wilkinson, Michael Riis Andersen, Joshua D. Reiss, Dan Stowell, Arno Solin
In audio signal processing, probabilistic time-frequency models have many benefits over their non-probabilistic counterparts.
1 code implementation • 15 Oct 2018 • Marco Martínez, Joshua D. Reiss
In the context of music production, distortion effects are mainly used for aesthetic reasons and are usually applied to electric musical instruments.
no code implementations • 2 Feb 2018 • William J. Wilkinson, Joshua D. Reiss, Dan Stowell
Recent advances in analysis of subband amplitude envelopes of natural sounds have resulted in convincing synthesis, showing subband amplitudes to be a crucial component of perception.