no code implementations • ICML 2020 • Samy Jelassi, Carles Domingo-Enrich, Damien Scieur, Arthur Mensch, Joan Bruna
Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g., when training GANs.
no code implementations • 10 Jan 2025 • Yossi Arjevani, Joan Bruna, Joe Kileel, Elzbieta Polak, Matthew Trager
We study shallow neural networks with polynomial activations.
1 code implementation • 23 Jul 2024 • Noah Amsel, Gilad Yehudai, Joan Bruna
Attention-based mechanisms are widely used in machine learning, most prominently in transformers.
no code implementations • 30 Jun 2024 • Joan Bruna, Jiequn Han
Score-based diffusion models have significantly advanced high-dimensional data generation across various domains, by learning a denoising oracle (or score) from datasets.
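For readers unfamiliar with the term, the denoising oracle and the score are linked by Tweedie's formula, a standard identity stated here in our own notation rather than the paper's:

$$\nabla_x \log p_\sigma(x) = \frac{\mathbb{E}[x_0 \mid x] - x}{\sigma^2}, \qquad x = x_0 + \sigma \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, I),$$

so learning the conditional-mean denoiser $\mathbb{E}[x_0 \mid x]$ is equivalent to learning the score.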
no code implementations • 5 Jun 2024 • Lei Chen, Joan Bruna, Alberto Bietti
In addition to the ability to generate fluent text in various languages, large language models have been successful at tasks that involve basic forms of logical "reasoning" over their context.
no code implementations • 8 Mar 2024 • Alex Damian, Loucas Pillaud-Vivien, Jason D. Lee, Joan Bruna
Single-Index Models are high-dimensional regression problems with planted structure, whereby labels depend on an unknown one-dimensional projection of the input via a generic, non-linear, and potentially non-deterministic transformation.
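Concretely, in the notation standard for this literature (symbols ours), a single-index model posits

$$y = g(\langle \theta^*, x \rangle) + \text{noise}, \qquad x \in \mathbb{R}^d, \ \theta^* \in \mathbb{S}^{d-1},$$

where both the planted direction $\theta^*$ and the scalar link function $g$ are unknown.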
1 code implementation • 4 Dec 2023 • Carles Domingo-Enrich, Jiequn Han, Brandon Amos, Joan Bruna, Ricky T. Q. Chen
Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models.
no code implementations • 30 Oct 2023 • Alberto Bietti, Joan Bruna, Loucas Pillaud-Vivien
We study gradient flow on the multi-index regression problem for high-dimensional Gaussian data.
no code implementations • 3 Oct 2023 • Aaron Zweig, Joan Bruna
Learning this model with SGD is relatively well-understood, whereby the so-called information exponent of the link function governs a polynomial sample complexity rate.
no code implementations • 28 Jul 2023 • Joan Bruna, Loucas Pillaud-Vivien, Aaron Zweig
Sparse high-dimensional functions have arisen as a rich framework to study the behavior of gradient-descent methods using shallow neural networks, showcasing their ability to perform feature learning beyond linear models.
1 code implementation • NeurIPS 2023 • Vignesh Kothapalli, Tom Tirer, Joan Bruna
We start with an empirical study showing that a decrease in within-class variability is also prevalent in the node-wise classification setting, though not to the extent observed in the instance-wise case.
1 code implementation • 31 May 2023 • Florentin Guth, Etienne Lempereur, Joan Bruna, Stéphane Mallat
There is a growing gap between the impressive results of deep image generative models and classical algorithms that offer theoretical guarantees.
no code implementations • 24 Mar 2023 • Karl Otness, Laure Zanna, Joan Bruna
Subgrid parameterizations, which represent physical processes occurring below the resolution of current climate models, are an important component in producing accurate, long-term predictions for the climate.
no code implementations • 28 Oct 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
To understand the training dynamics of neural networks (NNs), prior studies have considered the infinite-width mean-field (MF) limit of two-layer NNs, establishing theoretical guarantees of its convergence under gradient flow training as well as its approximation and generalization capabilities.
no code implementations • 27 Oct 2022 • Alberto Bietti, Joan Bruna, Clayton Sanford, Min Jae Song
Single-index models are a class of functions given by an unknown univariate "link" function applied to an unknown one-dimensional projection of the input.
no code implementations • 5 Aug 2022 • Aaron Zweig, Joan Bruna
We study separations between two fundamental models (or Ansätze) of antisymmetric functions, that is, functions $f$ of the form $f(x_{\sigma(1)}, \ldots, x_{\sigma(N)}) = \text{sign}(\sigma)f(x_1, \ldots, x_N)$, where $\sigma$ is any permutation.
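As a concrete illustration of the antisymmetry constraint (our toy construction, not the paper's code), the classical Slater-determinant Ansatz satisfies it by design: swapping two inputs swaps two columns of a matrix and flips the sign of its determinant.

```python
import numpy as np

# Minimal sketch: a Slater-determinant Ansatz is antisymmetric by construction.
rng = np.random.default_rng(0)
N = 4
W = rng.normal(size=(N, 1))            # parameters of N one-particle functions
phi = lambda x: np.tanh(W * x)         # Slater matrix M[i, j] = tanh(W[i] * x[j])

def f(x):
    # x: N scalar "particles"; antisymmetric via the determinant
    return np.linalg.det(phi(x))

x = rng.normal(size=N)
x_swapped = x.copy()
x_swapped[[0, 1]] = x_swapped[[1, 0]]  # a transposition sigma with sign(sigma) = -1
print(f(x), f(x_swapped))              # equal magnitude, opposite sign
```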
no code implementations • 6 Jul 2022 • Grégoire Sergeant-Perthuis, Jakob Maier, Joan Bruna, Edouard Oyallon
In the context of Neural Networks defined over $\mathcal{M}$, it indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries.
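The commutation property at the heart of this statement is easy to check numerically for a concrete symmetry group; below is a toy check of ours, with permutations standing in for the group action.

```python
import numpy as np

# A point-wise non-linearity rho satisfies rho(P x) = P rho(x) for any
# permutation matrix P, i.e. it commutes with the symmetry group action.
rng = np.random.default_rng(0)
x = rng.normal(size=6)
perm = rng.permutation(6)
relu = lambda v: np.maximum(v, 0.0)
print(np.allclose(relu(x[perm]), relu(x)[perm]))  # True: ReLU acts coordinate-wise
```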
no code implementations • 8 Jun 2022 • Lei Chen, Joan Bruna
Gradient Descent (GD) is a powerful workhorse of modern machine learning thanks to its scalability and efficiency in high-dimensional spaces.
no code implementations • 2 Jun 2022 • Aaron Zweig, Joan Bruna
In this work we demonstrate a novel separation between symmetric neural network architectures.
1 code implementation • 2 Jun 2022 • David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, Joan Bruna
Several recent works have proposed a class of algorithms for the offline reinforcement learning (RL) problem that we will refer to as return-conditioned supervised learning (RCSL).
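A minimal sketch of the RCSL recipe on toy data (our illustration, not the authors' code): regress actions on (state, return-to-go) pairs from the offline dataset, then act by conditioning on a high desired return.

```python
import numpy as np

# Fit a "policy" pi(a | s, g) by plain supervised regression, where g is the
# trajectory's return-to-go logged alongside each state.
rng = np.random.default_rng(0)
S = rng.normal(size=(500, 4))                 # offline states
G = rng.uniform(0, 1, size=(500, 1))          # returns-to-go observed with them
A = S @ rng.normal(size=(4, 2)) + G           # logged actions (toy data)

X = np.hstack([S, G, np.ones((500, 1))])      # condition on (state, return)
W, *_ = np.linalg.lstsq(X, A, rcond=None)     # supervised policy fit

s_test = rng.normal(size=4)
g_target = np.array([0.95])                   # ask for a high return at test time
a = np.concatenate([s_test, g_target, [1.0]]) @ W
print("conditioned action:", a)
```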
no code implementations • 22 Apr 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
We study the optimization of wide neural networks (NNs) via gradient flow (GF) in setups that allow feature learning while admitting non-asymptotic global convergence guarantees.
1 code implementation • 2 Mar 2022 • Joan Bruna, Benjamin Peherstorfer, Eric Vanden-Eijnden
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time, which enables adaptively collecting new training data in a self-informed manner that is guided by the dynamics described by the partial differential equations.
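In our notation: for a PDE $\partial_t u = f(x, u)$ and a parametric ansatz $u(x; \theta(t))$, the Dirac-Frenkel principle selects the parameter velocity by a sequential least-squares residual fit,

$$\dot{\theta}(t) \in \arg\min_{\eta} \int \big| \nabla_\theta u(x; \theta(t)) \cdot \eta - f\big(x, u(x; \theta(t))\big) \big|^2 \, d\mu(x),$$

and the sampling measure $\mu$ can be adapted over time, which is the self-informed data collection referred to above.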
no code implementations • 16 Feb 2022 • Tom Tirer, Joan Bruna
Specifically, it has been shown that the learned features (the output of the penultimate layer) of within-class samples converge to their mean, and the means of different classes exhibit a certain tight frame structure, which is also aligned with the last layer's weights.
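The tight frame structure in question is the simplex equiangular tight frame: writing $\mu_1, \ldots, \mu_C$ for the (centered, renormalized) class means, neural collapse predicts

$$\langle \mu_i, \mu_j \rangle = \frac{C\,\delta_{ij} - 1}{C - 1},$$

that is, equal norms and all pairwise cosines equal to $-1/(C-1)$.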
no code implementations • 14 Feb 2022 • Carles Domingo-Enrich, Joan Bruna
Min-max optimization problems arise in several key machine learning setups, including adversarial learning and generative modeling.
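A toy example (ours) of why such problems resist plain gradient methods: simultaneous gradient descent-ascent on the bilinear game $f(x, y) = xy$ spirals away from the unique equilibrium $(0, 0)$.

```python
import numpy as np

# Simultaneous GDA on f(x, y) = x * y: the update matrix has eigenvalues
# 1 +/- i*lr, of modulus sqrt(1 + lr^2) > 1, so the iterates diverge.
x, y, lr = 1.0, 1.0, 0.1
for _ in range(100):
    gx, gy = y, x                        # grad_x f = y, grad_y f = x
    x, y = x - lr * gx, y + lr * gy      # descend in x, ascend in y
print(x, y, np.hypot(x, y))              # norm grows away from (0, 0)
```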
no code implementations • 7 Dec 2021 • Ilias Zadik, Min Jae Song, Alexander S. Wein, Joan Bruna
Prior work on many similar inference tasks portends that such lower bounds strongly suggest the presence of an inherent statistical-to-computational gap for clustering, that is, a parameter regime where the clustering task is statistically possible but no polynomial-time algorithm succeeds.
no code implementations • 2 Dec 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna
We introduce quantile filtered imitation learning (QFIL), a novel policy improvement operator designed for offline reinforcement learning.
no code implementations • NeurIPS 2021 • Alberto Bietti, Luca Venturi, Joan Bruna
Many supervised learning problems involve high-dimensional data such as images, text, or graphs.
no code implementations • CVPR 2022 • Francis Williams, Zan Gojcic, Sameh Khamis, Denis Zorin, Joan Bruna, Sanja Fidler, Or Litany
We present Neural Kernel Fields: a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
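As background for the method's backbone, here is a generic kernel ridge regression sketch (ours; the paper's contribution is learning the kernel itself, which this toy omits).

```python
import numpy as np

# Fit f(x) = sum_i alpha_i k(x_i, x) by solving the regularized linear system.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # e.g. points sampled near a surface
y = np.sin(X[:, 0])                              # values to interpolate
k = lambda A, B: np.exp(-np.linalg.norm(A[:, None] - B[None], axis=-1) ** 2)

lam = 1e-3                                       # ridge regularization
alpha = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)
x_new = rng.normal(size=(5, 3))
f_new = k(x_new, X) @ alpha                      # predictions at new points
print(f_new)
```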
no code implementations • 25 Nov 2021 • Yihan He, Joan Bruna
In this example, we provide non-asymptotic bounds that depend strongly on the sparsity of the receptive field constructed by the algorithm.
no code implementations • 12 Oct 2021 • Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
We present the Rate-Distortion Explanation (RDE) framework, a mathematically well-founded method for explaining black-box model decisions.
1 code implementation • 7 Oct 2021 • Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
We present CartoonX (Cartoon Explanation), a novel model-agnostic explanation method tailored towards image classifiers and based on the rate-distortion explanation (RDE) framework.
no code implementations • ICLR 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
We study the optimization of over-parameterized shallow and multi-layer neural networks (NNs) in a regime that allows feature learning while admitting non-asymptotic global convergence guarantees.
1 code implementation • 9 Aug 2021 • Karl Otness, Arvi Gjoka, Joan Bruna, Daniele Panozzo, Benjamin Peherstorfer, Teseo Schneider, Denis Zorin
Simulating physical systems is a core component of scientific computing, encompassing a wide range of physical domains and applications.
no code implementations • 11 Jul 2021 • Carles Domingo-Enrich, Alberto Bietti, Marylou Gabrié, Joan Bruna, Eric Vanden-Eijnden
In the feature-learning regime, this dual formulation justifies using a two time-scale gradient ascent-descent (GDA) training algorithm in which one updates concurrently the particles in the sample space and the neurons in the parameter space of the energy.
no code implementations • NeurIPS 2021 • Min Jae Song, Ilias Zadik, Joan Bruna
More precisely, our reduction shows that any polynomial-time algorithm (not necessarily gradient-based) for learning such functions under small noise implies a polynomial-time quantum algorithm for solving worst-case lattice problems, whose hardness forms the foundation of lattice-based cryptography.
1 code implementation • NeurIPS 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna
In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.
no code implementations • 14 Jun 2021 • Alberto Bietti, Luca Venturi, Joan Bruna
Many supervised learning problems involve high-dimensional data such as images, text, or graphs.
6 code implementations • 27 Apr 2021 • Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods.
1 code implementation • 15 Apr 2021 • Carles Domingo-Enrich, Alberto Bietti, Eric Vanden-Eijnden, Joan Bruna
Energy-based models (EBMs) are a simple yet powerful framework for generative modeling.
no code implementations • 10 Mar 2021 • Yossi Arjevani, Joan Bruna, Michael Field, Joe Kileel, Matthew Trager, Francis Williams
In this note, we consider the highly nonconvex optimization problem associated with computing the rank decomposition of symmetric tensors.
no code implementations • 2 Feb 2021 • Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna
The first contribution of this paper is to extend such results to a more general class of functions, namely functions with piece-wise oscillatory structure, by building on the proof strategy of (Eldan and Shamir, 2016).
no code implementations • 1 Feb 2021 • Cinjon Resnick, Or Litany, Cosmas Heiß, Hugo Larochelle, Joan Bruna, Kyunghyun Cho
We propose a self-supervised framework to learn scene representations from video that are automatically delineated into background, characters, and their animations.
no code implementations • 11 Nov 2020 • Cinjon Resnick, Or Litany, Hugo Larochelle, Joan Bruna, Kyunghyun Cho
We propose a self-supervised framework to learn scene representations from video that are automatically delineated into objects and background.
1 code implementation • ICLR 2021 • Lei Chen, Zhengdao Chen, Joan Bruna
From the perspective of expressive power, this work compares multi-layer Graph Neural Networks (GNNs) with a simplified alternative that we call Graph-Augmented Multi-Layer Perceptrons (GA-MLPs), which first augments node features with certain multi-hop operators on the graph and then applies an MLP in a node-wise fashion.
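A bare-bones GA-MLP sketch (ours): the multi-hop features $[X, AX, A^2X, \ldots]$ are computed once from the graph, after which training touches only a node-wise MLP, with no message passing.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 6, 3, 2
A = rng.integers(0, 2, size=(n, n)); A = ((A + A.T) > 0).astype(float)  # sym. adjacency
X = rng.normal(size=(n, d))

# Augment node features with multi-hop operators A^k X, k = 0..K.
feats = np.hstack([np.linalg.matrix_power(A, k) @ X for k in range(K + 1)])
W1 = rng.normal(size=(feats.shape[1], 8)); W2 = rng.normal(size=(8, 2))
H = np.maximum(feats @ W1, 0) @ W2   # node-wise MLP on augmented features
print(H.shape)                       # (n, 2): per-node predictions
```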
1 code implementation • 21 Sep 2020 • Tom Tirer, Joan Bruna, Raja Giryes
A major factor in the success of deep neural networks is the use of sophisticated architectures rather than the classical multilayer perceptron (MLP).
no code implementations • NeurIPS 2020 • Zhengdao Chen, Grant M. Rotskoff, Joan Bruna, Eric Vanden-Eijnden
Furthermore, if the mean-field dynamics converges to a measure that interpolates the training data, we prove that the asymptotic deviation eventually vanishes in the CLT scaling.
no code implementations • 16 Aug 2020 • Aaron Zweig, Joan Bruna
Symmetric functions, which take as input an unordered, fixed-size set, are known to be universally representable by neural networks that enforce permutation invariance.
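The canonical permutation-invariant form behind such results is $f(X) = \rho\big(\sum_i \phi(x_i)\big)$, as in DeepSets; a toy check (ours) that this form ignores the ordering of the set:

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8)); W_rho = rng.normal(size=(8, 1))
phi = lambda X: np.tanh(X @ W_phi)                # per-element embedding
f = lambda X: float(np.maximum(phi(X).sum(axis=0), 0) @ W_rho)  # sum, then rho

X = rng.normal(size=(5, 3))                       # a set of 5 elements in R^3
print(f(X), f(X[rng.permutation(5)]))             # identical: order does not matter
```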
no code implementations • 28 Jul 2020 • Donsub Rim, Luca Venturi, Joan Bruna, Benjamin Peherstorfer
Classical reduced models are low-rank approximations using a fixed basis designed to achieve dimensionality reduction of large-scale systems.
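For concreteness, a sketch of the classical baseline the paper starts from (ours, not the paper's method): proper orthogonal decomposition builds such a fixed low-rank basis from solution snapshots via an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(1000, 40))      # 40 solution snapshots in R^1000
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
V = U[:, :r]                                 # fixed rank-r reduced basis
u = snapshots[:, 0]
print(np.linalg.norm(u - V @ (V.T @ u)))     # projection error onto the basis
```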
no code implementations • 1 Jul 2020 • Cosmas Heiß, Ron Levie, Cinjon Resnick, Gitta Kutyniok, Joan Bruna
It is widely recognized that the predictions of deep neural networks are difficult to parse relative to simpler approaches.
1 code implementation • 27 Jun 2020 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna
We show that this discrepancy is due to the action-stability of their objectives.
1 code implementation • CVPR 2021 • Francis Williams, Matthew Trager, Joan Bruna, Denis Zorin
We present Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks.
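A sketch (ours) of the random-feature view behind such kernels: with $m$ random ReLU neurons, the inner product of features concentrates around the corresponding infinite-width (arc-cosine-type) kernel as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 200_000
W = rng.normal(size=(m, d)); b = rng.normal(size=m)   # random first-layer weights
phi = lambda x: np.maximum(W @ x + b, 0.0)            # m random ReLU features

x, y = rng.normal(size=d), rng.normal(size=d)
print(phi(x) @ phi(y) / m)   # Monte Carlo estimate of the limiting kernel k(x, y)
```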
no code implementations • 18 Jun 2020 • Jaume de Dios, Joan Bruna
The analysis of neural network training beyond the linearization regime remains an outstanding open question, even in the simplest setup of a single hidden layer.
no code implementations • NeurIPS 2020 • Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gürbüzbalaban, Stefanie Jegelka, Hongzhou Lin
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
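A minimal instance of the setting (our toy, with hypothetical quadratic local objectives): each agent mixes its iterate with its neighbors through a doubly stochastic matrix, then takes a local gradient step on its own smooth, strongly convex function.

```python
import numpy as np

n_agents, d = 4, 2
rng = np.random.default_rng(0)
targets = rng.normal(size=(n_agents, d))           # f_i(x) = 0.5 * ||x - t_i||^2
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing matrix
X = rng.normal(size=(n_agents, d))                 # one iterate per agent

for _ in range(200):
    X = W @ X - 0.1 * (X - targets)                # gossip average + local gradient step
print(X)                                           # all rows land near mean(targets)
```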
no code implementations • 19 May 2020 • Joan Bruna, Oded Regev, Min Jae Song, Yi Tang
We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE.
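As we read the setup, a CLWE sample pairs a Gaussian vector with a noisy inner product against a hidden direction, reduced mod 1; the sketch below is ours, with illustrative parameter values, and should not be taken as the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, beta = 16, 2.0, 0.01
w = rng.normal(size=n); w /= np.linalg.norm(w)   # hidden unit direction (the secret)

def clwe_sample(num):
    y = rng.normal(size=(num, n))                          # Gaussian samples
    z = (gamma * y @ w + beta * rng.normal(size=num)) % 1.0  # noisy phase mod 1
    return y, z

y, z = clwe_sample(5)
print(z)  # without w, the z's are hard to distinguish from uniform mod 1
```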
1 code implementation • 2 Mar 2020 • Jad Rahme, Samy Jelassi, Joan Bruna, S. Matthew Weinberg
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design.
no code implementations • 27 Feb 2020 • Aaron Zweig, Joan Bruna
Domain adaptation in imitation learning represents an essential step towards improving generalizability.
no code implementations • NeurIPS 2020 • Carles Domingo-Enrich, Samy Jelassi, Arthur Mensch, Grant Rotskoff, Joan Bruna
Our method identifies mixed equilibria in high dimensions and is demonstrably effective for training mixtures of GANs.
1 code implementation • NeurIPS 2020 • Zhengdao Chen, Lei Chen, Soledad Villar, Joan Bruna
We also prove positive results for k-WL and k-IGNs as well as negative results for k-WL with a finite number of iterations.
no code implementations • 30 Nov 2019 • Cinjon Resnick, Zeping Zhan, Joan Bruna
Our first contribution is to show that this test is insufficient and that models which perform poorly (strongly) on linear classification can perform strongly (weakly) on more involved tasks like temporal activity localization.
no code implementations • 21 Oct 2019 • Fernando Gama, Joan Bruna, Alejandro Ribeiro
In this paper, we set out to study the effect that a change in the underlying graph topology that supports the signal has on the output of a GNN.
no code implementations • ICLR 2020 • Matthew Trager, Kathlén Kohn, Joan Bruna
The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights.
no code implementations • 25 Sep 2019 • Timothée Lacroix, Guillaume Obozinski, Joan Bruna, Nicolas Usunier
However, as we show in this paper through experiments on standard benchmarks of link prediction in knowledge bases, ComplEx, a variant of CP, achieves performance similar to recent approaches based on Tucker decomposition on all operating points in terms of number of parameters.
no code implementations • NeurIPS 2019 • Francis Williams, Matthew Trager, Claudio Silva, Daniele Panozzo, Denis Zorin, Joan Bruna
We show that the gradient dynamics of such networks are determined by the gradient flow in a non-redundant parameterization of the network function.
1 code implementation • NeurIPS 2019 • Stéphane d'Ascoli, Levent Sagun, Joan Bruna, Giulio Biroli
The aim of this work is to understand this fact through the lens of dynamics in the loss landscape.
1 code implementation • NeurIPS 2019 • Fernando Gama, Joan Bruna, Alejandro Ribeiro
In this work, we extend scattering transforms to network data by using multiresolution graph wavelets, whose computation can be obtained by means of graph convolutions.
1 code implementation • NeurIPS 2019 • Zhengdao Chen, Soledad Villar, Lei Chen, Joan Bruna
We further develop a framework of the expressive power of GNNs that incorporates both of these viewpoints using the language of sigma-algebra, through which we compare the expressive power of different types of GNNs together with other graph isomorphism tests.
Ranked #30 on Graph Regression on ZINC-500k
1 code implementation • 29 May 2019 • Samy Jelassi, Carles Domingo-Enrich, Damien Scieur, Arthur Mensch, Joan Bruna
Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g., when training GANs.
no code implementations • ICLR 2020 • David Brandfonbrener, Joan Bruna
Then, we show how environments that are more reversible induce dynamics that are better for TD learning and prove global convergence to the true value function for well-conditioned function approximators.
1 code implementation • NeurIPS 2019 • Joe Kileel, Matthew Trager, Joan Bruna
We study deep neural networks with polynomial activations, particularly their expressive power.
no code implementations • 11 May 2019 • Fernando Gama, Joan Bruna, Alejandro Ribeiro
Graph neural networks (GNNs) have emerged as a powerful tool for nonlinear processing of graph signals, exhibiting success in recommender systems, power outage prediction, and motion planning, among others.
no code implementations • ICLR 2019 • Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alexander Peysakhovich, Kyunghyun Cho, Joan Bruna
Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency.
1 code implementation • 29 Apr 2019 • Jihun Oh, Kyunghyun Cho, Joan Bruna
As an efficient and scalable graph neural network, GraphSAGE has enabled an inductive capability for inferring unseen nodes or graphs by aggregating subsampled local neighborhoods and by learning in a mini-batch gradient descent fashion.
no code implementations • 5 Feb 2019 • Grant Rotskoff, Samy Jelassi, Joan Bruna, Eric Vanden-Eijnden
Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models.
2 code implementations • 28 Dec 2018 • Mathieu Andreux, Tomás Angles, Georgios Exarchakis, Roberto Leonarduzzi, Gaspar Rochette, Louis Thiry, John Zarka, Stéphane Mallat, Joakim Andén, Eugene Belilovsky, Joan Bruna, Vincent Lostanlen, Muawiz Chaudhary, Matthew J. Hirn, Edouard Oyallon, Sixin Zhang, Carmine Cella, Michael Eickenberg
The wavelet scattering transform is an invariant signal representation suitable for many signal processing and machine learning applications.
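This paper presents Kymatio itself; a minimal usage sketch follows (API as documented for the kymatio package; exact signatures may vary across versions).

```python
import numpy as np
from kymatio.numpy import Scattering2D  # assumes the kymatio package is installed

scattering = Scattering2D(J=2, shape=(32, 32))   # 2 scales on 32x32 images
x = np.random.default_rng(0).normal(size=(32, 32)).astype(np.float32)
Sx = scattering(x)                               # translation-invariant coefficients
print(Sx.shape)
```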
1 code implementation • CVPR 2019 • Francis Williams, Teseo Schneider, Claudio Silva, Denis Zorin, Joan Bruna, Daniele Panozzo
We propose the use of a deep neural network as a geometric prior for surface reconstruction.
2 code implementations • ICLR 2018 • Zhengdao Chen, Xiang Li, Joan Bruna
This graph inference task can be recast as a node-wise graph classification problem, and, as such, computational detection thresholds can be translated in terms of learning within appropriate models.
1 code implementation • 17 Sep 2018 • Nicholas Choma, Federico Monti, Lisa Gerhardt, Tomasz Palczewski, Zahra Ronaghi, Prabhat, Wahid Bhimji, Michael M. Bronstein, Spencer R. Klein, Joan Bruna
Tasks involving the analysis of geometric (graph- and manifold-structured) data have recently gained prominence in the machine learning community, giving birth to a rapidly developing field of geometric deep learning.
no code implementations • 6 Sep 2018 • David Folqué, Sainbayar Sukhbaatar, Arthur Szlam, Joan Bruna
A desirable property of an intelligent agent is its ability to understand its environment to quickly generalize to novel tasks and compose simpler tasks into more complex ones.
1 code implementation • 18 Jul 2018 • Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alexander Peysakhovich, Kyunghyun Cho, Joan Bruna
Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency.
no code implementations • ICLR 2019 • Fernando Gama, Alejandro Ribeiro, Joan Bruna
Stability is a key aspect of data analysis.
no code implementations • 18 Feb 2018 • Luca Venturi, Afonso S. Bandeira, Joan Bruna
Focusing on a class of two-layer neural networks defined by smooth (but generally non-linear) activation functions, we identify a notion of intrinsic dimension and show that it provides necessary and sufficient conditions for the absence of spurious valleys.
no code implementations • 6 Jan 2018 • Joan Bruna, Stéphane Mallat
Asymptotic properties of maximum entropy microcanonical and macrocanonical processes and their convergence to Gibbs measures are reviewed.
no code implementations • 13 Dec 2017 • Rene Vidal, Joan Bruna, Raja Giryes, Stefano Soatto
Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification.
6 code implementations • 10 Nov 2017 • Victor Garcia, Joan Bruna
We propose to study the problem of few-shot learning through the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not.
3 code implementations • 22 Jun 2017 • Alex Nowak, Soledad Villar, Afonso S. Bandeira, Joan Bruna
Inverse problems correspond to a certain type of optimization problem formulated over appropriate input distributions.
1 code implementation • 2 Jun 2017 • Thomas Moreau, Joan Bruna
Sparse coding is a core building block in many data analysis and machine learning pipelines.
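The baseline algorithm such work builds on is ISTA; a plain sketch (ours) for $\min_z \frac{1}{2}\|x - Dz\|^2 + \lambda \|z\|_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50)); D /= np.linalg.norm(D, axis=0)   # dictionary
x = D @ (rng.normal(size=50) * (rng.random(50) < 0.1))          # sparse ground truth

lam = 0.1
L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
z = np.zeros(50)
for _ in range(200):
    g = D.T @ (D @ z - x)              # gradient of 0.5 * ||x - D z||^2
    z = z - g / L                      # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
print(np.count_nonzero(np.abs(z) > 1e-6), "nonzeros")
```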
1 code implementation • CVPR 2018 • Ilya Kostrikov, Zhongshi Jiang, Daniele Panozzo, Denis Zorin, Joan Bruna
We study data-driven representations for three-dimensional triangle meshes, which are one of the prevalent objects used to represent 3D geometry.
4 code implementations • ICLR 2019 • Zhengdao Chen, Xiang Li, Joan Bruna
We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multi-class stochastic block models, which is believed to reach the computational threshold.
Ranked #1 on Community Detection on Amazon (Accuracy-NE metric, using extra training data)
no code implementations • 24 Nov 2016 • Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst
In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques.
1 code implementation • ICLR 2018 • Alex Nowak-Vila, David Folqué, Joan Bruna
Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation.
1 code implementation • 4 Nov 2016 • C. Daniel Freeman, Joan Bruna
Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization.
1 code implementation • 27 Oct 2016 • Shariq Mobin, Joan Bruna
The human auditory system is able to distinguish the vocal source of thousands of speakers, yet not much is known about what features the auditory system uses to do this.
no code implementations • 18 Sep 2016 • Ivan Dokmanić, Joan Bruna, Stéphane Mallat, Maarten de Hoop
We propose a new approach to linear ill-posed inverse problems.
1 code implementation • 1 Sep 2016 • Thomas Moreau, Joan Bruna
Sparse coding is a core building block in many data analysis and machine learning pipelines.
1 code implementation • 18 Nov 2015 • Joan Bruna, Pablo Sprechmann, Yann LeCun
Inverse problems in image and audio, and super-resolution in particular, can be seen as high-dimensional structured prediction problems, where the goal is to characterize the conditional distribution of a high-resolution output given its low-resolution corrupted observation.
3 code implementations • 16 Jun 2015 • Mikael Henaff, Joan Bruna, Yann LeCun
Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds, and video data: local stationarity and multi-scale compositional structure, which allow expressing long-range interactions in terms of shorter, localized ones.
no code implementations • 9 Apr 2015 • Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun
Current state-of-the-art classification and detection algorithms rely on supervised training.
no code implementations • 11 Mar 2015 • Joan Bruna, Soumith Chintala, Yann Lecun, Serkan Piantino, Arthur Szlam, Mark Tygert
Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
no code implementations • 22 Dec 2014 • Pablo Sprechmann, Joan Bruna, Yann LeCun
In this report we describe an ongoing line of research for solving single-channel source separation problems.
1 code implementation • 20 Dec 2014 • Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, Sumit Chopra
We propose a strong baseline model for unsupervised feature learning using video data.
no code implementations • ICCV 2015 • Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun
Current state-of-the-art classification and detection algorithms rely on supervised training.
no code implementations • 9 Jun 2014 • Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, Rob Fergus
The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results.
no code implementations • NeurIPS 2014 • Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus
We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks.
4 code implementations • 21 Dec 2013 • Joan Bruna, Wojciech Zaremba, Arthur Szlam, Yann LeCun
Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain.
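The spectral construction this paper generalizes convolutions with filters a signal in the eigenbasis of the graph Laplacian, the graph analogue of the Fourier domain; a minimal sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.integers(0, 2, size=(n, n)); A = ((A + A.T) > 0).astype(float)
np.fill_diagonal(A, 0)
L = np.diag(A.sum(1)) - A                       # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)                      # graph Fourier basis

x = rng.normal(size=n)                          # signal on the nodes
g_hat = np.exp(-lam)                            # a (learnable) spectral filter
y = U @ (g_hat * (U.T @ x))                     # spectral "convolution" of x
print(y)
```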
12 code implementations • 21 Dec 2013 • Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus
Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks.
no code implementations • 16 Nov 2013 • Dilip Krishnan, Joan Bruna, Rob Fergus
Blind deconvolution has made significant progress in the past decade.
no code implementations • 16 Nov 2013 • Joan Bruna, Arthur Szlam, Yann LeCun
In this work we compute lower Lipschitz bounds of $\ell_p$ pooling operators for $p=1, 2, \infty$ as well as $\ell_p$ pooling operators preceded by half-rectification layers.
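For reference, the operators in question (a sketch of ours): each pooled output is the $\ell_p$ norm of a block of inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=12).reshape(3, 4)           # 3 pooling blocks of size 4
for p in (1, 2, np.inf):
    print(p, np.linalg.norm(x, ord=p, axis=1))  # one pooled value per block
```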
1 code implementation • 5 Mar 2012 • Joan Bruna, Stéphane Mallat
A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification.
no code implementations • 12 Nov 2010 • Joan Bruna, Stéphane Mallat
A scattering vector is a local descriptor including multiscale and multi-direction co-occurrence information.