Search Results for author: Mikołaj Bińkowski

Found 8 papers, 5 with code

A review of two decades of correlations, hierarchies, networks and clustering in financial markets

no code implementations · 1 Mar 2017 · Gautier Marti, Frank Nielsen, Mikołaj Bińkowski, Philippe Donnat

We review the state of the art of clustering financial time series and the study of their correlations alongside other interaction networks.

BIG-bench Machine Learning · Clustering · +3

Autoregressive Convolutional Neural Networks for Asynchronous Time Series

2 code implementations · ICML 2018 · Mikołaj Bińkowski, Gautier Marti, Philippe Donnat

We propose the Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series.

Time Series · Time Series Analysis

Demystifying MMD GANs

7 code implementations · ICLR 2018 · Mikołaj Bińkowski, Danica J. Sutherland, Michael Arbel, Arthur Gretton

We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs.
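The Maximum Mean Discrepancy itself can be estimated directly from two samples, without any GAN machinery. The following is a minimal pure-Python sketch of the standard unbiased MMD² estimator with a Gaussian (RBF) kernel; the function names and the fixed bandwidth `sigma` are illustrative choices, not taken from the paper:

```python
import math

def rbf(x, y, sigma=1.0):
    # Gaussian kernel on scalars; sigma is an illustrative fixed bandwidth
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2_unbiased(xs, ys, kernel=rbf):
    # Unbiased estimate of squared MMD between samples xs and ys:
    # mean off-diagonal k(x, x') + mean off-diagonal k(y, y') - 2 * mean k(x, y)
    m, n = len(xs), len(ys)
    k_xx = sum(kernel(a, b) for i, a in enumerate(xs)
               for j, b in enumerate(xs) if i != j) / (m * (m - 1))
    k_yy = sum(kernel(a, b) for i, a in enumerate(ys)
               for j, b in enumerate(ys) if i != j) / (n * (n - 1))
    k_xy = sum(kernel(a, b) for a in xs for b in ys) / (m * n)
    return k_xx + k_yy - 2 * k_xy
```

Because the diagonal terms are excluded, the estimate is unbiased and can be slightly negative when the two samples come from the same distribution; it grows toward 2 (for this kernel) as the samples separate.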

On gradient regularizers for MMD GANs

1 code implementation · NeurIPS 2018 · Michael Arbel, Danica J. Sutherland, Mikołaj Bińkowski, Arthur Gretton

We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD).

Image Generation
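As background for what "gradient-based regularization of the critic" means, here is a minimal one-dimensional sketch of a WGAN-GP-style gradient penalty that pushes the critic's gradient norm toward a target value, using a finite-difference gradient. This illustrates the general idea only, not the paper's specific MMD-kernel regularizer; all names and the finite-difference approach are illustrative assumptions:

```python
def gradient_penalty(critic, x, eps=1e-5, target=1.0):
    # Finite-difference estimate of the critic's derivative at x
    # (real implementations use automatic differentiation instead)
    grad = (critic(x + eps) - critic(x - eps)) / (2 * eps)
    # Penalize deviation of the gradient magnitude from the target norm
    return (abs(grad) - target) ** 2
```

In training, a penalty like this would be averaged over a batch of points (often interpolated between real and generated samples) and added to the critic's loss with a weighting coefficient.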

Unsupervised one-to-many image translation

no code implementations · ICLR 2019 · Samuel Lavoie-Marchildon, Sebastien Lachapelle, Mikołaj Bińkowski, Aaron Courville, Yoshua Bengio, R. Devon Hjelm

We perform completely unsupervised one-sided image-to-image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc.).

Translation · Unsupervised Image-To-Image Translation

Batch weight for domain adaptation with mass shift

no code implementations · 29 May 2019 · Mikołaj Bińkowski, R. Devon Hjelm, Aaron Courville

We also provide a rigorous probabilistic setting for domain transfer and a new simplified objective for training transfer networks, an alternative to the complex, multi-component loss functions used in current state-of-the-art image-to-image translation models.

Domain Adaptation · Image-to-Image Translation · +1

High Fidelity Speech Synthesis with Adversarial Networks

3 code implementations · ICLR 2020 · Mikołaj Bińkowski, Jeff Donahue, Sander Dieleman, Aidan Clark, Erich Elsen, Norman Casagrande, Luis C. Cobo, Karen Simonyan

However, their application in the audio domain has received limited attention, and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech.

Generative Adversarial Network · Speech Synthesis · +1

End-to-End Adversarial Text-to-Speech

2 code implementations · ICLR 2021 · Jeff Donahue, Sander Dieleman, Mikołaj Bińkowski, Erich Elsen, Karen Simonyan

Modern text-to-speech synthesis pipelines typically involve multiple processing stages, each of which is designed or learnt independently from the rest.

Adversarial Text · Dynamic Time Warping · +2
