Search Results for author: Alberto Bietti

Found 20 papers, 12 papers with code

When does return-conditioned supervised learning work for offline reinforcement learning?

no code implementations • 2 Jun 2022 • David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, Joan Bruna

Several recent works have proposed a class of algorithms for the offline reinforcement learning (RL) problem that we will refer to as return-conditioned supervised learning (RCSL).

Reinforcement Learning
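For orientation, here is a minimal sketch of the RCSL scheme described above: actions are regressed on states augmented with the return-to-go, and at test time the policy is conditioned on a desired return. The data layout and the linear regressor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def returns_to_go(rewards):
    """Suffix sums R_t = sum_{t' >= t} r_{t'} for one trajectory."""
    return np.cumsum(rewards[::-1])[::-1]

def build_rcsl_dataset(trajectories):
    """trajectories: list of (states, actions, rewards) arrays (hypothetical format)."""
    X, y = [], []
    for states, actions, rewards in trajectories:
        for s, a, g in zip(states, actions, returns_to_go(np.asarray(rewards, float))):
            X.append(np.concatenate([s, [g]]))  # condition the input on the return-to-go
            y.append(a)
    return np.stack(X), np.asarray(y)

def fit_policy(X, y):
    """Any supervised learner can play the policy's role; least squares here."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def act(w, state, target_return):
    """At test time, condition on a high target return to steer behavior."""
    return np.concatenate([state, [target_return]]) @ w
```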

On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes

no code implementations • 22 Mar 2022 • Elvis Dohmatob, Alberto Bietti

To better understand these factors, we provide a precise study of robustness and generalization in different scenarios, from initialization to the end of training in different regimes, as well as intermediate scenarios, where initialization still plays a role due to "lazy" training.

Efficient Kernel UCB for Contextual Bandits

1 code implementation • 11 Feb 2022 • Houssam Zenati, Alberto Bietti, Eustache Diemert, Julien Mairal, Matthieu Martin, Pierre Gaillard

While standard methods require an O(CT^3) complexity, where T is the horizon and the constant C is related to optimizing the UCB rule, we propose an efficient contextual algorithm for large-scale problems.

Multi-Armed Bandits
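To see where the O(CT^3) cost above comes from, consider a naive kernel UCB rule that refits an exact kernel ridge estimate each round: inverting a t x t kernel matrix at every step is cubic in the horizon. This baseline (with an assumed RBF kernel) is only a reference point; the paper's efficient algorithm, whose details are not reproduced here, is designed to avoid this cost.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ucb_choose(X_hist, y_hist, candidates, lam=1.0, beta=1.0):
    """Naive kernel UCB: exact ridge posterior, O(t^3) per round."""
    K = rbf(X_hist, X_hist)
    K_inv = np.linalg.inv(K + lam * np.eye(len(X_hist)))  # the cubic step
    k_star = rbf(candidates, X_hist)                      # (n_cand, t)
    mean = k_star @ K_inv @ y_hist
    var = rbf(candidates, candidates).diagonal() - np.einsum(
        "ij,jk,ik->i", k_star, K_inv, k_star)
    return int(np.argmax(mean + beta * np.sqrt(np.maximum(var, 0.0))))
```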

On the Sample Complexity of Learning under Geometric Stability

no code implementations • NeurIPS 2021 • Alberto Bietti, Luca Venturi, Joan Bruna

Many supervised learning problems involve high-dimensional data such as images, text, or graphs.

Dual Training of Energy-Based Models with Overparametrized Shallow Neural Networks

no code implementations • 11 Jul 2021 • Carles Domingo-Enrich, Alberto Bietti, Marylou Gabrié, Joan Bruna, Eric Vanden-Eijnden

In the feature-learning regime, this dual formulation justifies a two time-scale gradient ascent-descent (GDA) training algorithm in which the particles in the sample space and the neurons in the parameter space of the energy are updated concurrently.
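A toy rendering of that two time-scale structure, assuming a shallow ReLU energy E(x) = a . relu(Wx) and a plain contrastive gradient on the parameters; the particles take larger steps than the neurons. This illustrates the concurrent-update idea only, not the paper's dual objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 2, 64, 256
W = rng.standard_normal((m, d)) / np.sqrt(d)      # neuron weights
a = rng.standard_normal(m) / m                    # neuron output weights
data = 0.5 * rng.standard_normal((n, d)) + 1.0    # toy dataset
particles = rng.standard_normal((n, d))           # samples tracking the model

def relu_feats(X):
    return np.maximum(X @ W.T, 0.0)               # (n, m) hidden activations

def grad_x(X):
    """dE/dx for E(x) = a . relu(Wx)."""
    return ((X @ W.T > 0) * a) @ W

eta_x, eta_theta = 1e-1, 1e-3                     # two time scales
for _ in range(500):
    # fast dynamics: particles descend the energy (noisy, Langevin-style)
    particles -= eta_x * grad_x(particles)
    particles += 0.1 * np.sqrt(2 * eta_x) * rng.standard_normal((n, d))
    # slow dynamics: contrastive gradient on the neurons (a, W)
    h_data, h_part = relu_feats(data), relu_feats(particles)
    a -= eta_theta * (h_data.mean(0) - h_part.mean(0))
    W -= eta_theta * (((data @ W.T > 0) * a).T @ data
                      - ((particles @ W.T > 0) * a).T @ particles) / n
```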

On the Sample Complexity of Learning under Invariance and Geometric Stability

no code implementations • 14 Jun 2021 • Alberto Bietti, Luca Venturi, Joan Bruna

Many supervised learning problems involve high-dimensional data such as images, text, or graphs.

On the Universality of Graph Neural Networks on Large Random Graphs

1 code implementation • NeurIPS 2021 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter

In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models.

Stochastic Block Model

On Energy-Based Models with Overparametrized Shallow Neural Networks

1 code implementation • 15 Apr 2021 • Carles Domingo-Enrich, Alberto Bietti, Eric Vanden-Eijnden, Joan Bruna

Energy-based models (EBMs) are a simple yet powerful framework for generative modeling.

Approximation and Learning with Deep Convolutional Models: a Kernel Perspective

1 code implementation • ICLR 2022 • Alberto Bietti

The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks.

Additive Models • Generalization Bounds • +1

Deep Equals Shallow for ReLU Networks in Kernel Regimes

1 code implementation • ICLR 2021 • Alberto Bietti, Francis Bach

Deep networks are often considered to be more expressive than shallow ones in terms of approximation.

Convergence and Stability of Graph Convolutional Networks on Large Random Graphs

1 code implementation • NeurIPS 2020 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter

We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel.
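To make the random-graph model above concrete, a small sketch: node latent variables are sampled, edges are drawn independently with probabilities given by a similarity kernel, and a single GCN layer propagates features through the normalized adjacency. The Gaussian kernel and the layer width are illustrative assumptions, not the exact models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 2

# Latent-variable random graph: edge probabilities from a similarity kernel.
z = rng.standard_normal((n, d))                          # node latent variables
sim = np.exp(-((z[:, None] - z[None, :]) ** 2).sum(-1))  # kernel values in (0, 1]
A = (rng.random((n, n)) < sim).astype(float)
A = np.triu(A, 1)
A = A + A.T                                              # symmetric, no self-loops

# One GCN layer: H = relu(D^{-1/2} (A + I) D^{-1/2} X Theta).
A_hat = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
X = z                                                    # use the latents as input features
Theta = rng.standard_normal((d, 8)) / np.sqrt(d)
H = np.maximum(S @ X @ Theta, 0.0)
```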

Counterfactual Learning of Stochastic Policies with Continuous Actions: from Models to Offline Evaluation

1 code implementation • 22 Apr 2020 • Houssam Zenati, Alberto Bietti, Matthieu Martin, Eustache Diemert, Julien Mairal

Counterfactual reasoning from logged data has become increasingly important for many applications such as web advertising or healthcare.

Model Selection

On the Inductive Bias of Neural Tangent Kernels

1 code implementation • NeurIPS 2019 • Alberto Bietti, Julien Mairal

State-of-the-art neural networks are heavily over-parameterized, making the optimization algorithm a crucial ingredient for learning predictive models with good generalization properties.

Inductive Bias

A Kernel Perspective for Regularizing Deep Neural Networks

1 code implementation • 30 Sep 2018 • Alberto Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal

We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS).
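The RKHS norm of a deep network is not computable exactly, so approaches in this vein penalize tractable surrogates; spectral norms of the layer weight matrices are one family of quantities connected to upper bounds on the RKHS norm. Below is a hedged sketch of such a surrogate penalty (power iteration plus a summed per-layer penalty); the specific upper- and lower-bound penalties studied in the paper differ.

```python
import numpy as np

def spectral_norm(W, n_iter=50, rng=np.random.default_rng(0)):
    """Largest singular value of W via power iteration."""
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

def rkhs_surrogate_penalty(weight_matrices):
    """Sum of per-layer spectral norms: a tractable proxy, to be added to the
    training loss with a regularization weight (a modeling assumption here)."""
    return sum(spectral_norm(W) for W in weight_matrices)
```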

A Contextual Bandit Bake-off

1 code implementation • 12 Feb 2018 • Alberto Bietti, Alekh Agarwal, John Langford

Contextual bandit algorithms are essential for solving many real-world interactive machine learning problems.

Invariance and Stability of Deep Convolutional Representations

no code implementations • NeurIPS 2017 • Alberto Bietti, Julien Mairal

In this paper, we study deep signal representations that are near-invariant to groups of transformations and stable to the action of diffeomorphisms without losing signal information.

Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations

1 code implementation • 9 Jun 2017 • Alberto Bietti, Julien Mairal

The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals.

Generalization Bounds
