Search Results for author: Aaron Zweig

Found 9 papers, 2 papers with code

Symmetric Single Index Learning

no code implementations • 3 Oct 2023 Aaron Zweig, Joan Bruna

Learning this model with SGD is relatively well understood: the so-called information exponent of the link function governs a polynomial sample complexity rate.
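For a link function under the standard Gaussian measure, the information exponent is the index of its first nonzero (probabilists') Hermite coefficient. A minimal illustrative sketch of computing it numerically (not the paper's code; the function name and tolerance are assumptions):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

def information_exponent(link, kmax=8, tol=1e-8):
    """Return the smallest k >= 1 with a nonzero Hermite coefficient
    <link, He_k> under the standard Gaussian, estimated by
    Gauss-Hermite quadrature (illustrative sketch only)."""
    nodes, weights = hermegauss(64)          # quadrature for weight e^{-x^2/2}
    weights = weights / np.sqrt(2 * np.pi)   # normalize to the Gaussian measure
    for k in range(1, kmax + 1):
        He_k = HermiteE.basis(k)             # probabilists' Hermite polynomial
        c_k = np.sum(weights * link(nodes) * He_k(nodes))
        if abs(c_k) > tol:
            return k
    return None

assert information_exponent(lambda z: z) == 1         # linear link
assert information_exponent(lambda z: z**2 - 1) == 2  # He_2 itself
```

A larger information exponent (e.g. 2 for the quadratic link above) corresponds to a harder learning problem and a worse polynomial sample-complexity rate.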

On Single Index Models beyond Gaussian Data

no code implementations • 28 Jul 2023 Joan Bruna, Loucas Pillaud-Vivien, Aaron Zweig

Sparse high-dimensional functions have arisen as a rich framework to study the behavior of gradient-descent methods using shallow neural networks, showcasing their ability to perform feature learning beyond linear models.

Towards Antisymmetric Neural Ansatz Separation

no code implementations • 5 Aug 2022 Aaron Zweig, Joan Bruna

We study separations between two fundamental models (or \emph{Ans\"atze}) of antisymmetric functions, that is, functions $f$ of the form $f(x_{\sigma(1)}, \ldots, x_{\sigma(N)}) = \text{sign}(\sigma)f(x_1, \ldots, x_N)$, where $\sigma$ is any permutation.
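The defining sign-flip property can be checked concretely with the classic Slater-determinant ansatz, which is antisymmetric because swapping two rows of a matrix negates its determinant. A minimal sketch (the basis functions are arbitrary choices for illustration, not from the paper):

```python
import numpy as np

def slater_ansatz(X, phis):
    """Antisymmetric ansatz: det of the matrix M[i, j] = phis[j](X[i]).
    Permuting the inputs permutes the rows, so
    f(x_{sigma(1)}, ..., x_{sigma(N)}) = sign(sigma) * f(x_1, ..., x_N)."""
    M = np.array([[phi(x) for phi in phis] for x in X])  # (N, N)
    return np.linalg.det(M)

# hypothetical one-particle basis functions, for illustration only
phis = [np.sin, np.cos, np.tanh]
x = np.array([0.3, 1.1, -0.7])

f = slater_ansatz(x, phis)
f_swapped = slater_ansatz(x[[1, 0, 2]], phis)  # transpose first two inputs
assert np.isclose(f_swapped, -f)               # sign flips under a transposition
```

The paper's separation concerns how efficiently different Ansätze of this kind can represent general antisymmetric functions, not this particular determinant construction.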

Exponential Separations in Symmetric Neural Networks

no code implementations • 2 Jun 2022 Aaron Zweig, Joan Bruna

In this work we demonstrate a novel separation between symmetric neural network architectures.

Neural Algorithms for Graph Navigation

no code implementations • NeurIPS Workshop LMCA 2020 Aaron Zweig, Nesreen Ahmed, Theodore L. Willke, Guixiang Ma

The application of deep reinforcement learning (RL) to graph learning and meta-learning admits challenges from both topics.

Tasks: Graph Learning, Meta-Learning, +2

A Functional Perspective on Learning Symmetric Functions with Neural Networks

no code implementations • 16 Aug 2020 Aaron Zweig, Joan Bruna

Symmetric functions, which take as input an unordered, fixed-size set, are known to be universally representable by neural networks that enforce permutation invariance.
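A standard way to enforce this permutation invariance is a sum-pooling architecture, f(X) = ρ(Σᵢ φ(xᵢ)): the summation discards input order. A minimal sketch, with hypothetical φ and ρ chosen only for illustration:

```python
import numpy as np

def symmetric_net(X, phi, rho):
    """Permutation-invariant network: f(X) = rho(sum_i phi(x_i)).
    The sum over set elements makes the output independent of input order."""
    return rho(sum(phi(x) for x in X))

# hypothetical feature map and readout, for illustration only
phi = lambda x: np.array([x, x ** 2])
rho = lambda z: float(z[0] + 0.5 * z[1])

X = [0.2, -1.0, 0.7]
out = symmetric_net(X, phi, rho)
# any reordering of the set gives (up to float rounding) the same output
assert np.isclose(out, symmetric_net(list(reversed(X)), phi, rho))
```

Universality here means that, with rich enough φ and ρ, every continuous symmetric function on fixed-size sets can be represented this way; the paper studies such functions from a functional (infinite-width) perspective.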

Tasks: Generalization Bounds

Provably Efficient Third-Person Imitation from Offline Observation

no code implementations • 27 Feb 2020 Aaron Zweig, Joan Bruna

Domain adaptation in imitation learning represents an essential step towards improving generalizability.

Tasks: Domain Adaptation, Imitation Learning
