Resynthesis
12 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
Detecting the Unexpected via Image Resynthesis
In this paper, we tackle the more realistic scenario where unexpected objects of unknown classes can appear at test time.
Differentiable Time-Frequency Scattering on GPU
Joint time-frequency scattering (JTFS) is a convolutional operator in the time-frequency domain which extracts spectrotemporal modulations at various rates and scales.
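The spectrotemporal-modulation idea can be illustrated with a toy sketch (not the paper's differentiable GPU implementation): take an STFT magnitude, then convolve the time-frequency image with a 2D Gabor-like filter representing one modulation rate and scale. All signal and filter parameters below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import stft, convolve2d

# Toy signal: 1 s of a chirp sampled at 8 kHz (illustrative values).
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (200 + 300 * t) * t)

# First stage: a time-frequency representation (STFT magnitude).
f, frames, Z = stft(x, fs=fs, nperseg=256)
S = np.abs(Z)

# Second stage: one 2D Gabor-like filter standing in for a single
# modulation rate (along time) and scale (along frequency).
rows, cols = np.mgrid[-8:9, -8:9]
gabor = np.exp(-(rows**2 + cols**2) / 32.0) * np.cos(0.5 * rows + 0.5 * cols)

# Convolving the spectrogram with the filter measures the energy of
# that particular spectrotemporal modulation across the plane.
M = np.abs(convolve2d(S, gabor, mode="same"))
```

A full JTFS transform applies a whole bank of such filters at many rates and scales and adds averaging; this sketch shows only the core convolutional step.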
Generative Spoken Language Modeling from Raw Audio
We introduce Generative Spoken Language Modeling, the task of learning the acoustic and linguistic characteristics of a language from raw audio (no text, no labels), and a set of metrics to automatically evaluate the learned representations at acoustic and linguistic levels for both encoding and generation.
Speech Resynthesis from Discrete Disentangled Self-Supervised Representations
We propose using self-supervised discrete representations for the task of speech resynthesis.
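The discrete-resynthesis pipeline can be sketched in a toy form: frame-level features are quantized against a unit inventory, and synthesis then starts from the unit sequence alone. The random features and the 8-entry codebook below are hypothetical stand-ins for learned self-supervised representations and their quantizer.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 16))   # 100 frames of 16-dim features (toy)

# A hypothetical "pretrained" codebook of 8 discrete units.
codebook = rng.normal(size=(8, 16))

# Encode: map each frame to the index of its nearest codebook unit.
dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
units = dists.argmin(axis=1)            # the discrete unit sequence

# Decode: a real system feeds `units` to a neural vocoder; here we
# simply reconstruct each frame from its centroid.
recon = codebook[units]
```

The point of the discretization is that the unit sequence is compact and discards speaker and channel detail, which is what makes controllable resynthesis possible.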
Unifying Probabilistic Models for Time-Frequency Analysis
In audio signal processing, probabilistic time-frequency models have many benefits over their non-probabilistic counterparts.
Coordinate-based Texture Inpainting for Pose-Guided Image Generation
Since the input photograph always observes only a part of the surface, we suggest a new inpainting method that completes the texture of the human body.
On Adversarial Mixup Resynthesis
In this paper, we explore new approaches to combining information encoded within the learned representations of auto-encoders.
Parametric Resynthesis with neural vocoders
We propose to utilize the high quality speech generation capability of neural vocoders for noise suppression.
Spectral Processing of COVID-19 Time-Series Data
The presence of oscillations in aggregated COVID-19 data not only raises questions about the data's accuracy but also hinders understanding of the pandemic.
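The oscillations in question are largely weekly reporting artifacts, which a simple amplitude spectrum makes visible. This sketch uses synthetic daily counts with an injected 7-day component (all numbers are illustrative), detrends them, and reads off the dominant period.

```python
import numpy as np

# Synthetic daily case counts: a linear trend plus a weekly
# reporting oscillation (illustrative magnitudes).
days = np.arange(28 * 4)                      # 16 weeks of daily data
cases = 1000 + 5 * days + 200 * np.sin(2 * np.pi * days / 7)

# Remove the linear trend, then inspect the amplitude spectrum.
detrended = cases - np.polyval(np.polyfit(days, cases, 1), days)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(len(days), d=1.0)     # cycles per day

peak = freqs[spectrum.argmax()]               # dominant frequency
period = 1 / peak                             # dominant period in days
```

On this synthetic input the spectral peak sits at 1/7 cycles per day, i.e. a 7-day period, mirroring the weekday effect seen in real aggregated data.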
Dynamical Variational Autoencoders: A Comprehensive Review
Recently, a series of papers has presented different extensions of the VAE for processing sequential data; these models capture not only the latent space but also the temporal dependencies within a sequence of data vectors and the corresponding latent vectors, relying on recurrent neural networks or state-space models.