Search Results for author: Sixin Zhang

Found 12 papers, 7 papers with code

Generalized Rectifier Wavelet Covariance Models For Texture Synthesis

1 code implementation ICLR 2022 Antoine Brochard, Sixin Zhang, Stéphane Mallat

State-of-the-art maximum entropy models for texture synthesis are built from statistics relying on image representations defined by convolutional neural networks (CNNs).

Texture Synthesis

On the Nash equilibrium of moment-matching GANs for stationary Gaussian processes

no code implementations 14 Mar 2022 Sixin Zhang

Generative Adversarial Networks (GANs) learn an implicit generative model from data samples through a two-player game.

Gaussian Processes
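In the Gaussian case, the moment-matching setup can be illustrated without an adversarial game: for a linear generator x = Az with z ~ N(0, I), the generated covariance is AA^T, so matching second moments reduces to fitting AA^T to the data covariance. A minimal numpy sketch (an illustrative assumption, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
# Illustrative target: covariance of the (Gaussian) data distribution.
B = rng.standard_normal((d, d))
Sigma = B @ B.T + np.eye(d)

# Linear generator x = A z with z ~ N(0, I), so Cov(x) = A A^T.
A = rng.standard_normal((d, d))

def loss(A):
    R = A @ A.T - Sigma
    return np.sum(R * R)          # squared Frobenius moment mismatch

lr = 1e-3
for _ in range(5000):
    R = A @ A.T - Sigma
    A -= lr * 4 * R @ A           # gradient of ||A A^T - Sigma||_F^2
print(loss(A))                    # near zero: second moments are matched
```

The paper studies the equilibria of the two-player version of this matching problem; the sketch above only shows the single-player moment-matching objective.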

On the Relationships between Transform-Learning NMF and Joint-Diagonalization

no code implementations 10 Dec 2021 Sixin Zhang, Emmanuel Soubies, Cédric Févotte

Non-negative matrix factorization with transform learning (TL-NMF) is a recent idea that aims at learning data representations suited to NMF.
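For background, plain NMF — the model that TL-NMF builds on — factorizes a non-negative matrix V ≈ WH, classically fitted with the Lee-Seung multiplicative updates. An illustrative sketch of that base algorithm (not TL-NMF itself; the data and rank are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))              # non-negative data matrix
k = 4                                 # factorization rank (illustrative)
W = rng.random((20, k)) + 0.1
H = rng.random((k, 30)) + 0.1
err_init = np.linalg.norm(V - W @ H)

eps = 1e-12
for _ in range(200):
    # Lee-Seung multiplicative updates for ||V - W H||_F^2:
    # they keep W and H non-negative and never increase the objective.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err_final = np.linalg.norm(V - W @ H)
print(err_init, err_final)            # the fit error decreases monotonically
```

TL-NMF additionally learns the transform producing V; the updates above only cover the factorization step.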

Particle gradient descent model for point process generation

1 code implementation 27 Oct 2020 Antoine Brochard, Bartłomiej Błaszczyszyn, Stéphane Mallat, Sixin Zhang

The target measure is generated via a deterministic gradient descent algorithm, so as to match a set of statistics of the given, observed realization.

Point Processes · Topological Data Analysis
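A toy version of the idea — moving particles by deterministic gradient descent until simple statistics match those of an observed realization — can be sketched as follows (mean and second moment stand in for the paper's descriptors; the observed pattern is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
# Observed point pattern (illustrative) and its target statistics
obs = rng.normal(loc=[1.0, -0.5], scale=[0.5, 1.5], size=(200, 2))
m_star = obs.mean(axis=0)             # target mean
M_star = obs.T @ obs / len(obs)       # target second moment

# Particles initialized at random, moved by deterministic gradient descent
# on L(x) = ||m(x) - m*||^2 + ||M(x) - M*||_F^2
x = rng.standard_normal((200, 2))
lr = 5.0
for _ in range(500):
    m = x.mean(axis=0)
    M = x.T @ x / len(x)
    D = M - M_star
    grad = 2 * (m - m_star) / len(x) + 4 * (x @ D) / len(x)
    x -= lr * grad

err_m = np.linalg.norm(x.mean(axis=0) - m_star)
err_M = np.linalg.norm(x.T @ x / len(x) - M_star)
print(err_m, err_M)                   # both small: statistics are matched
```

The paper uses much richer statistics (e.g. wavelet-based descriptors) of the observed realization; only the descent-on-particles mechanism is illustrated here.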

Maximum Entropy Models from Phase Harmonic Covariances

2 code implementations 22 Nov 2019 Sixin Zhang, Stéphane Mallat

The covariance of a stationary process $X$ is diagonalized by a Fourier transform.
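This diagonalization is easy to check numerically: the covariance of a stationary process on n points is circulant, C[j, k] = c[(j − k) mod n], the DFT matrix diagonalizes it, and the eigenvalues are the DFT of c (the illustrative kernel below is arbitrary):

```python
import numpy as np

n = 8
# Circulant covariance of a stationary process: C[j, k] = c[(j - k) % n]
lags = np.arange(n)
c = np.exp(-np.minimum(lags, n - lags))       # symmetric lag kernel
C = c[(lags[:, None] - lags[None, :]) % n]

# DFT matrix F[j, k] = exp(-2*pi*i*j*k/n); F C F^H / n is diagonal,
# with diagonal entries given by the DFT of c (the spectral density).
F = np.fft.fft(np.eye(n))
D = F @ C @ F.conj().T / n
off = D - np.diag(np.diag(D))
print(np.max(np.abs(off)))                    # ~ 0: C is diagonalized
print(np.allclose(np.diag(D), np.fft.fft(c)))
```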

Statistical learning of geometric characteristics of wireless networks

no code implementations 19 Dec 2018 Antoine Brochard, Bartłomiej Błaszczyszyn, Stéphane Mallat, Sixin Zhang

To approximate (interpolate) the marking function, our baseline approach builds a statistical regression model of the marks with respect to a local point-distance representation.

Point Processes
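A toy version of such a baseline — regressing marks on distances to a few nearest neighbors via least squares — might look like this (the point pattern, features, and marks are all synthetic stand-ins, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative point pattern with a mark that depends on local geometry
pts = rng.random((300, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
knn = np.sort(dist, axis=1)[:, :5]        # distances to 5 nearest neighbors
marks = knn.mean(axis=1) + 0.001 * rng.standard_normal(300)

# Baseline: linear regression of marks on the local distance representation
X = np.hstack([knn, np.ones((300, 1))])
coef, *_ = np.linalg.lstsq(X, marks, rcond=None)
pred = X @ coef
corr = np.corrcoef(pred, marks)[0, 1]
print(corr)                               # close to 1 on this synthetic example
```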

Phase Harmonic Correlations and Convolutional Neural Networks

2 code implementations 29 Oct 2018 Stéphane Mallat, Sixin Zhang, Gaspar Rochette

For wavelet filters, we show numerically that signals having sparse wavelet coefficients can be recovered from few phase harmonic correlations, which provide a compressive representation.

Time Series
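The phase harmonic operator at the core of this representation multiplies the phase of a complex coefficient by an integer k while keeping its modulus, [z]^k = |z| e^{ik arg(z)}. A minimal implementation:

```python
import numpy as np

def phase_harmonic(z, k):
    """Phase harmonic [z]^k = |z| * exp(i * k * angle(z)):
    the modulus is kept, the phase is multiplied by k."""
    return np.abs(z) * np.exp(1j * k * np.angle(z))

z = np.array([1 + 1j, 2j, -3.0 + 0j])
print(phase_harmonic(z, 0))   # k = 0 keeps only the modulus |z|
print(phase_harmonic(z, 1))   # k = 1 returns z itself
```

Correlations between phase harmonics at different k and different wavelet scales are the statistics the paper relates to CNN coefficients.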

Distributed stochastic optimization for deep learning (thesis)

no code implementations 7 May 2016 Sixin Zhang

We also find a surprising connection between momentum SGD and the EASGD method with a negative moving average rate.

Image Classification · Stochastic Optimization
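For reference, the classical momentum SGD update that the thesis relates to EASGD maintains a moving average of past gradients; a toy sketch on a quadratic (the objective and coefficients are illustrative):

```python
import numpy as np

# Momentum SGD on a toy quadratic f(x) = 0.5 * x^T A x, minimized at 0
A = np.diag([1.0, 10.0])
def grad(x):
    return A @ x

x = np.array([5.0, 5.0])
v = np.zeros(2)
lr, mu = 0.05, 0.9        # learning rate and momentum (moving-average) rate
for _ in range(300):
    v = mu * v - lr * grad(x)   # velocity: moving average of gradients
    x = x + v
print(np.linalg.norm(x))        # near 0
```

The thesis's observation concerns what happens to an EASGD-style update when the analogous moving-average rate is taken negative; that analysis is not reproduced here.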

Deep learning with Elastic Averaging SGD

9 code implementations NeurIPS 2015 Sixin Zhang, Anna Choromanska, Yann LeCun

We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to improved performance.

Image Classification · Stochastic Optimization
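The EASGD idea — local workers linked to a shared center variable by an elastic force, so they can explore while staying loosely synchronized — can be sketched as follows (a simplified sequential toy, not the distributed implementation; the quadratic objective and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy local objective f(x) = 0.5 * ||x - t||^2 for every worker
t = np.array([1.0, -2.0])
p, lr, rho = 4, 0.1, 0.5            # workers, learning rate, elastic coefficient
workers = [rng.standard_normal(2) for _ in range(p)]
center = np.zeros(2)

for _ in range(200):
    for i in range(p):
        g = workers[i] - t                       # local gradient
        elastic = rho * (workers[i] - center)    # pull toward the center
        workers[i] = workers[i] - lr * (g + elastic)
        # Center moves toward the workers (symmetric elastic force)
        center = center + lr * rho * (workers[i] - center)

print(np.linalg.norm(center - t))   # near 0: center reaches the optimum
```

In the actual method the workers run asynchronously on stochastic gradients and communicate with the center only every few steps; the elastic coupling above is the core mechanism.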

No More Pesky Learning Rates

no code implementations 6 Jun 2012 Tom Schaul, Sixin Zhang, Yann LeCun

The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time.
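The sensitivity to the schedule is easy to reproduce on a toy problem: with a 1/t decay, SGD averages out gradient noise and homes in on the optimum, while a fixed rate leaves the iterate hovering in a noise ball (illustrative 1-D example, not the paper's adaptive method):

```python
import numpy as np

rng = np.random.default_rng(0)

# SGD on a noisy 1-D quadratic: stochastic gradients are (x - 1) + noise.
def sgd(schedule, steps=5000):
    x = 5.0
    for t in range(1, steps + 1):
        g = (x - 1.0) + rng.standard_normal()
        x -= schedule(t) * g
    return x

x_decay = sgd(lambda t: 1.0 / t)    # 1/t decay: averages out the noise
x_fixed = sgd(lambda t: 0.5)        # fixed rate: stays in a noise ball
print(abs(x_decay - 1.0), abs(x_fixed - 1.0))
```

The paper's point is that picking such schedules by hand is brittle; it proposes adapting the rate automatically, which the sketch above does not implement.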
