Search Results for author: Jack Hidary

Found 8 papers, 4 with code

Quantum Hamiltonian-Based Models and the Variational Quantum Thermalizer Algorithm

no code implementations · 4 Oct 2019 · Guillaume Verdon, Jacob Marks, Sasha Nanda, Stefan Leichenauer, Jack Hidary

We introduce a new class of generative quantum-neural-network-based models called Quantum Hamiltonian-Based Models (QHBMs).
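
For orientation, the core objects are compact enough to write down. The following is a hedged sketch of the usual QHBM/VQT formulation, my rendering rather than the paper's notation: the model state is a unitary rotation of the exponential of a latent modular Hamiltonian $K_\theta$, and the Variational Quantum Thermalizer minimizes a free-energy loss for a target Hamiltonian $H$ at inverse temperature $\beta$.

```latex
% Sketch (not verbatim from the paper): the QHBM state and the
% free-energy objective minimized by the Variational Quantum
% Thermalizer; the global minimum is the thermal state e^{-beta H}/Z.
\rho_{\theta,\phi} \;=\; U(\phi)\,\frac{e^{-K_\theta}}{\operatorname{Tr} e^{-K_\theta}}\,U(\phi)^\dagger ,
\qquad
\mathcal{L}(\theta,\phi) \;=\; \beta\,\operatorname{Tr}\!\left[\rho_{\theta,\phi}\,H\right] \;-\; S\!\left(\rho_{\theta,\phi}\right),
```

where $S$ is the von Neumann entropy.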

Quantum Graph Neural Networks

2 code implementations · 26 Sep 2019 · Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, Jack Hidary

We introduce Quantum Graph Neural Networks (QGNNs), a new class of quantum neural network ansätze tailored to represent quantum processes with a graph structure, which makes them particularly suitable for execution on distributed quantum systems over a quantum network.

Clustering
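
The defining idea, a variational unitary generated by a Hamiltonian whose two-qubit terms live only on the edges of a graph, fits in a short numerical sketch. The toy below uses an Ising-type coupling of my own choosing; the paper's ansätze are more general, so treat the operators and parameters as illustrative.

```python
# Minimal NumPy sketch of one quantum-graph-network layer: the
# unitary exp(-i * theta * H_G) is generated by a Hamiltonian H_G
# with ZZ terms on graph edges and a local X field on every node.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def graph_hamiltonian(n_qubits, edges, coupling=1.0, field=0.5):
    """ZZ couplings on the graph edges plus an X field per node."""
    H = np.zeros((2**n_qubits, 2**n_qubits), dtype=complex)
    for i, j in edges:
        H += coupling * kron_chain([Z if k in (i, j) else I2
                                    for k in range(n_qubits)])
    for i in range(n_qubits):
        H += field * kron_chain([X if k == i else I2
                                 for k in range(n_qubits)])
    return H

# Triangle graph on 3 qubits; one variational layer U = exp(-i theta H_G).
H_G = graph_hamiltonian(3, edges=[(0, 1), (1, 2), (0, 2)])
U = expm(-1j * 0.3 * H_G)
print(np.allclose(U @ U.conj().T, np.eye(8)))  # unitarity check -> True
```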

TensorNetwork on TensorFlow: Entanglement Renormalization for quantum critical lattice models

1 code implementation · 28 Jun 2019 · Martin Ganahl, Ashley Milsted, Stefan Leichenauer, Jack Hidary, Guifre Vidal

We use the MERA to approximate the ground state wave function of the infinite, one-dimensional transverse field Ising model at criticality, and extract conformal data from the optimized ansatz.

Computational Physics
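
As a point of reference for the model being approximated: the critical transverse-field Ising chain can be exactly diagonalized at small sizes, and its energy per site tends to the known infinite-chain value $-4/\pi \approx -1.2732$. The sketch below is plain NumPy with my own sign conventions, not the paper's MERA code, and only establishes that baseline.

```python
# Exact diagonalization of a small periodic transverse-field Ising
# chain, H = -sum Z_i Z_{i+1} - g sum X_i, at the critical point g = 1.
# The energy per site should drift toward -4/pi as n grows.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

def tfi_energy_per_site(n, g=1.0):
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H -= site_op(Z, i, n) @ site_op(Z, (i + 1) % n, n)  # ZZ bond
        H -= g * site_op(X, i, n)                           # field term
    return np.linalg.eigvalsh(H)[0] / n

for n in (4, 6, 8, 10):
    print(n, tfi_energy_per_site(n))  # tends toward -4/pi ~ -1.2732
```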

TensorNetwork for Machine Learning

1 code implementation · 7 Jun 2019 · Stavros Efthymiou, Jack Hidary, Stefan Leichenauer

We demonstrate the use of tensor networks for image classification with the TensorNetwork open source library.

BIG-bench Machine Learning · General Classification · +2
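
The underlying scheme, contracting a product-state embedding of the pixels against a matrix product state (MPS) that carries one extra label index, fits in a short sketch. The shapes, feature map, and initialization below are illustrative assumptions of mine, not the TensorNetwork library's API.

```python
# Minimal NumPy sketch of an MPS classifier: embed each pixel into a
# 2-vector, then contract the product state with an MPS whose middle
# tensor carries a "label" leg; the surviving index gives class scores.
import numpy as np

n_pixels, bond_dim, n_classes = 16, 4, 10
rng = np.random.default_rng(0)

# Random MPS weights: boundary tensors have bond dimension 1, and the
# middle tensor additionally carries the label index.
label_site = n_pixels // 2
mps = []
for i in range(n_pixels):
    dl = 1 if i == 0 else bond_dim
    dr = 1 if i == n_pixels - 1 else bond_dim
    shape = (dl, 2, n_classes, dr) if i == label_site else (dl, 2, dr)
    mps.append(rng.normal(scale=0.5, size=shape))

def feature_map(pixels):
    """Map each pixel value in [0, 1] to a 2-dim local feature vector."""
    return np.stack([np.cos(np.pi * pixels / 2),
                     np.sin(np.pi * pixels / 2)], axis=-1)

def classify(pixels):
    phi = feature_map(pixels)
    left = np.ones((1,))       # running contraction from the left
    logits = None
    for i, A in enumerate(mps):
        if i == label_site:
            M = np.einsum('lscr,s->lcr', A, phi[i])   # keep label leg
            logits = np.einsum('l,lcr->cr', left, M)  # (classes, bond)
        elif logits is None:
            left = left @ np.einsum('lsr,s->lr', A, phi[i])
        else:
            logits = logits @ np.einsum('lsr,s->lr', A, phi[i])
    return logits[:, 0]        # shape (n_classes,)

print(classify(rng.uniform(size=n_pixels)).shape)  # (10,)
```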

Theory III: Dynamics and Generalization in Deep Networks

no code implementations · 12 Mar 2019 · Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, Tomaso Poggio

In particular, gradient descent induces a dynamics on the normalized weights that converges, as $t \to \infty$, to an equilibrium corresponding to a minimum-norm (or maximum-margin) solution.
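
The claim is easiest to see in the simplest case. For linear logistic regression on separable data, the weight norm diverges while the normalized weights converge to the max-margin direction; the toy below, my construction rather than the paper's experiments, illustrates that behavior.

```python
# Gradient descent on logistic loss over linearly separable data:
# ||w|| keeps growing, but the direction w/||w|| settles down to a
# fixed (max-margin) vector, mirroring the abstract's statement.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2.0, 0.5, size=(50, 2)),
               rng.normal(-2.0, 0.5, size=(50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = rng.normal(size=2)
lr = 0.1
prev_dir = w / np.linalg.norm(w)
for step in range(1, 20001):
    margins = y * (X @ w)
    # gradient of mean log(1 + exp(-y * x.w))
    grad = -(y / (1.0 + np.exp(margins))) @ X / len(y)
    w -= lr * grad
    if step % 5000 == 0:
        direction = w / np.linalg.norm(w)
        # the norm grows while the direction increment shrinks
        print(step, np.linalg.norm(w), np.linalg.norm(direction - prev_dir))
        prev_dir = direction
```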

A Surprising Linear Relationship Predicts Test Performance in Deep Networks

3 code implementations · 25 Jul 2018 · Qianli Liao, Brando Miranda, Andrzej Banburski, Jack Hidary, Tomaso Poggio

Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors?

General Classification · Generalization Bounds

Theory IIIb: Generalization in Deep Networks

no code implementations · 29 Jun 2018 · Tomaso Poggio, Qianli Liao, Brando Miranda, Andrzej Banburski, Xavier Boix, Jack Hidary

Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss.

Binary Classification

Theory of Deep Learning III: explaining the non-overfitting puzzle

no code implementations · 30 Dec 2017 · Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, Hrushikesh Mhaskar

In this note, we show that the dynamics associated with gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or cross-entropy loss) Hessian.

General Classification
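
This is a linearization claim. Written out, in my hedged paraphrase of the standard Hartman-Grobman-style reasoning, it says the gradient flow near a stable minimum is governed by the Hessian, which for the square loss of an overparametrized network has zero eigenvalues, i.e. the quadratic potential is degenerate.

```latex
% Near an asymptotically stable minimum w* of the empirical loss L,
% the gradient-descent flow reduces to a linear system driven by the
% Hessian H = Hess L(w*); degenerate directions of H are the zero
% eigenvalues referenced in the abstract.
\dot{w} = -\nabla L(w)
\quad\longrightarrow\quad
\dot{u} \approx -H\,u \ \text{ for } w = w^* + u,\ \|u\| \to 0,
\qquad H = \nabla^2 L(w^*) \succeq 0 .
```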
