no code implementations • 4 Oct 2019 • Guillaume Verdon, Jacob Marks, Sasha Nanda, Stefan Leichenauer, Jack Hidary
We introduce a new class of generative quantum-neural-network-based models called Quantum Hamiltonian-Based Models (QHBMs).
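For rough intuition (this is not the paper's actual ansatz, which is circuit-based): a QHBM models a mixed state as the matrix exponential of a parameterized "modular" Hamiltonian, rho_theta ∝ exp(-K_theta). A minimal numpy sketch with an illustrative two-qubit K built from Pauli-Z terms:

```python
import numpy as np
from scipy.linalg import expm

def qhbm_state(theta):
    """Toy two-qubit Hamiltonian-based mixed state: rho ∝ exp(-K(theta)).

    K(theta) here is an illustrative parameterized modular Hamiltonian
    made of Pauli-Z terms; the paper's actual ansatz is a quantum circuit.
    """
    Z = np.diag([1.0, -1.0])
    I = np.eye(2)
    K = (theta[0] * np.kron(Z, I)
         + theta[1] * np.kron(I, Z)
         + theta[2] * np.kron(Z, Z))
    rho = expm(-K)
    return rho / np.trace(rho)  # normalize: rho = exp(-K) / Tr exp(-K)

rho = qhbm_state(np.array([0.5, 0.3, 0.8]))
print(np.trace(rho).real)  # 1.0 up to rounding
```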
2 code implementations • 26 Sep 2019 • Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, Jack Hidary
We introduce Quantum Graph Neural Networks (QGNNs), a new class of quantum neural network ansätze tailored to represent quantum processes with a graph structure; they are particularly well suited to execution on distributed quantum systems over a quantum network.
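For intuition, one simple graph-structured ansatz alternates two-qubit couplings placed only along the edges of a graph with uniform single-qubit mixing, in the spirit of the QAOA-like ansätze this family generalizes. The sketch below is illustrative, not the paper's exact construction:

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(qubit_ops, n):
    """Kronecker product placing single-qubit operators on an n-qubit register."""
    out = np.eye(1)
    for q in range(n):
        out = np.kron(out, qubit_ops.get(q, np.eye(2)))
    return out

def qgnn_layer(edges, n, gamma, beta):
    """One illustrative graph layer: ZZ couplings only along graph edges,
    followed by a uniform single-qubit X mixing term."""
    H_edges = sum(op_on({i: Z, j: Z}, n) for i, j in edges)
    H_mix = sum(op_on({q: X}, n) for q in range(n))
    return expm(-1j * beta * H_mix) @ expm(-1j * gamma * H_edges)

# Triangle graph on 3 qubits
U = qgnn_layer([(0, 1), (1, 2), (0, 2)], n=3, gamma=0.4, beta=0.7)
print(np.allclose(U.conj().T @ U, np.eye(8)))  # unitarity check: True
```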
1 code implementation • 28 Jun 2019 • Martin Ganahl, Ashley Milsted, Stefan Leichenauer, Jack Hidary, Guifre Vidal
We use the multi-scale entanglement renormalization ansatz (MERA) to approximate the ground-state wave function of the infinite, one-dimensional transverse-field Ising model at criticality, and extract conformal data from the optimized ansatz.
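A full MERA optimization is beyond a short snippet, but the Hamiltonian whose ground state it approximates is easy to write down. The sketch below exactly diagonalizes a small open transverse-field Ising chain at the critical point g = 1 as a baseline; it is not the MERA algorithm itself:

```python
import numpy as np

def tfim_hamiltonian(n, g=1.0):
    """Dense transverse-field Ising Hamiltonian H = -sum ZZ - g sum X
    on an open chain of n sites (g = 1 is the critical point)."""
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I = np.eye(2)
    def site(op, q):
        out = np.eye(1)
        for k in range(n):
            out = np.kron(out, op if k == q else I)
        return out
    H = -sum(site(Z, q) @ site(Z, q + 1) for q in range(n - 1))
    H = H - g * sum(site(X, q) for q in range(n))
    return H

E0 = np.linalg.eigvalsh(tfim_hamiltonian(10)).min()
print(E0 / 10)  # energy density; tends to -4/pi ≈ -1.273 as n grows
```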
1 code implementation • 7 Jun 2019 • Stavros Efthymiou, Jack Hidary, Stefan Leichenauer
We demonstrate the use of tensor networks for image classification with the TensorNetwork open source library.
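The basic pipeline in this line of work encodes each pixel into a small feature vector and contracts the resulting product state with a matrix product state (MPS) classifier, one tensor of which carries the label index. A hedged numpy sketch (shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def mps_classify(pixels, tensors, label_site):
    """Contract a feature-encoded image with an MPS classifier (illustrative).

    Each pixel x in [0, 1] is encoded as [cos(pi x / 2), sin(pi x / 2)].
    tensors[i] has shape (D_left, 2, D_right), except the one at
    label_site, which carries an extra label index: (D_left, 2, C, D_right).
    """
    feats = [np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])
             for x in pixels]
    msg = np.ones(1)   # left boundary vector, bond dimension 1
    out = None         # becomes (C, D) once the label site is absorbed
    for i, (A, f) in enumerate(zip(tensors, feats)):
        if i == label_site:
            out = np.einsum('l,lpcr,p->cr', msg, A, f)
        elif out is None:
            msg = np.einsum('l,lpr,p->r', msg, A, f)
        else:
            out = np.einsum('cl,lpr,p->cr', out, A, f)
    return out[:, 0]   # right boundary bond dimension 1

rng = np.random.default_rng(0)
n, D, C = 8, 4, 10     # pixels, bond dimension, classes
tensors = [rng.normal(size=(1 if i == 0 else D, 2,
                            1 if i == n - 1 else D)) * 0.5 for i in range(n)]
tensors[3] = rng.normal(size=(D, 2, C, D)) * 0.5
scores = mps_classify(rng.random(n), tensors, label_site=3)
print(scores.shape)    # (10,) -- one score per class
```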
2 code implementations • 3 May 2019 • Chase Roberts, Ashley Milsted, Martin Ganahl, Adam Zalcman, Bruce Fontaine, Yijian Zou, Jack Hidary, Guifre Vidal, Stefan Leichenauer
TensorNetwork is an open source library for implementing tensor network algorithms.
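The library's hello-world contracts two connected nodes: connecting the dangling edges of two vectors and contracting the resulting edge computes their inner product.

```python
import numpy as np
import tensornetwork as tn

a = tn.Node(np.ones(10))
b = tn.Node(np.ones(10))
edge = a[0] ^ b[0]          # connect the two dangling edges
result = tn.contract(edge)  # contract the network
print(result.tensor)        # 10.0
```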
3 code implementations • 3 May 2019 • Ashley Milsted, Martin Ganahl, Stefan Leichenauer, Jack Hidary, Guifre Vidal
TensorNetwork is an open source library for implementing tensor network algorithms in TensorFlow.
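The same API can run on top of TensorFlow by switching the backend (TensorFlow must be installed). A small sketch using the library's greedy contractor:

```python
import numpy as np
import tensornetwork as tn

tn.set_default_backend("tensorflow")  # contractions now produce tf.Tensors

a = tn.Node(np.eye(4))
b = tn.Node(np.ones((4, 4)))
a[1] ^ b[0]  # connect one edge: a matrix product
c = tn.contractors.greedy([a, b], output_edge_order=[a[0], b[1]])
print(c.tensor)  # tf.Tensor of shape (4, 4), all ones
```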
no code implementations • 12 Mar 2019 • Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, Tomaso Poggio
In particular, gradient descent induces dynamics of the normalized weights that converge, as $t \to \infty$, to an equilibrium corresponding to a minimum-norm (or maximum-margin) solution.
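This is easy to check empirically for a linear model on separable data: the weight norm diverges while the normalized weights settle on a fixed direction. An illustrative numpy experiment (data and hyperparameters are arbitrary, not from the paper):

```python
import numpy as np

# Gradient descent on the exponential loss over linearly separable data:
# ||w|| grows without bound, but w / ||w|| converges toward the
# max-margin (hard-SVM) direction.
rng = np.random.default_rng(1)
Xp = rng.normal(size=(40, 2)) + np.array([3.0, 0.0])
Xn = rng.normal(size=(40, 2)) - np.array([3.0, 0.0])
X = np.vstack([Xp, Xn])
y = np.concatenate([np.ones(40), -np.ones(40)])

w = np.zeros(2)
for t in range(20000):
    margins = y * (X @ w)
    grad = -(y[:, None] * X * np.exp(-margins)[:, None]).mean(axis=0)
    w -= 0.1 * grad

print(np.linalg.norm(w))          # keeps growing with more steps
print(w / np.linalg.norm(w))      # settles on a fixed direction
```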
3 code implementations • 25 Jul 2018 • Qianli Liao, Brando Miranda, Andrzej Banburski, Jack Hidary, Tomaso Poggio
Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors?
no code implementations • 29 Jun 2018 • Tomaso Poggio, Qianli Liao, Brando Miranda, Andrzej Banburski, Xavier Boix, Jack Hidary
Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss.
no code implementations • 30 Dec 2017 • Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, Hrushikesh Mhaskar
In this note, we show that the dynamics associated with gradient-descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or cross-entropy loss) Hessian.
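In symbols, the claim is that near an asymptotically stable minimum $w^*$ of the empirical loss $L$, the gradient flow is captured by its linearization:

```latex
\dot{w} \;=\; -\nabla L(w) \;\approx\; -H\,(w - w^{*}),
\qquad H \;=\; \nabla^{2} L(w^{*}) \;\succeq\; 0,
```

with $H$ exactly degenerate for the square loss (flat directions at zero eigenvalue) and only approximately degenerate for the logistic or cross-entropy loss.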