Search Results for author: Akram Erraqabi

Found 9 papers, 3 papers with code

Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL

no code implementations21 Mar 2022 Akram Erraqabi, Marlos C. Machado, Mingde Zhao, Sainbayar Sukhbaatar, Alessandro Lazaric, Ludovic Denoyer, Yoshua Bengio

In reinforcement learning, the graph Laplacian has proved to be a valuable tool in the task-agnostic setting, with applications ranging from skill discovery to reward shaping.

Continuous Control · Contrastive Learning +1
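The abstract's "graph Laplacian" is the Laplacian of the state-transition graph, whose low-order eigenvectors serve as task-agnostic state representations. A minimal sketch for a small, enumerable state space (the paper's contrastive method targets the large-scale case; function and argument names here are illustrative):

```python
import numpy as np

def laplacian_features(transitions, n_states, k=4):
    """transitions: iterable of (s, s') index pairs, e.g. from a random-walk policy."""
    A = np.zeros((n_states, n_states))
    for s, s_next in transitions:
        A[s, s_next] = 1.0
        A[s_next, s] = 1.0            # treat the transition graph as undirected
    D = np.diag(A.sum(axis=1))
    L = D - A                         # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:k + 1]        # skip the constant eigenvector

# 4-state chain 0-1-2-3: each row is a k-dim embedding of one state, and
# states that are close in the transition graph get close embeddings.
feats = laplacian_features([(0, 1), (1, 2), (2, 3)], n_states=4, k=2)
```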

Flexible Learning of Sparse Neural Networks via Constrained $L_0$ Regularization

no code implementations NeurIPS Workshop LatinX_in_AI 2021 Jose Gallego-Posada, Juan Ramirez De Los Rios, Akram Erraqabi

We propose to learn $L_0$-sparse networks by casting the training objective as a constrained optimization problem.
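A minimal sketch of what such a constrained formulation can look like, assuming hard-concrete stochastic gates (Louizos et al., 2018) for the differentiable expected-$L_0$ term and a simple dual-ascent update for the Lagrange multiplier; all names and hyperparameters are illustrative, not the paper's code:

```python
import math
import torch
import torch.nn.functional as F

GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0   # hard-concrete stretch and temperature

class L0Linear(torch.nn.Module):
    """Linear layer whose weights are masked by stochastic hard-concrete gates."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = torch.nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.log_alpha = torch.nn.Parameter(torch.zeros(d_out, d_in))

    def sample_gates(self):
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / BETA)
        return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

    def expected_l0(self):
        # P(gate != 0) under the hard-concrete distribution, summed over weights
        return torch.sigmoid(self.log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()

    def forward(self, x):
        return F.linear(x, self.weight * self.sample_gates())

# min_theta max_{lambda >= 0}  loss(theta) + lambda * (E[L0] - budget)
layer, lmbda, budget = L0Linear(20, 2), torch.tensor(0.0), 40.0
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
for _ in range(200):
    constraint = layer.expected_l0() - budget
    loss = F.cross_entropy(layer(x), y) + lmbda * constraint
    opt.zero_grad(); loss.backward(); opt.step()
    lmbda = (lmbda + 1e-2 * constraint.detach()).clamp(min=0.0)   # dual ascent
```

Unlike a fixed $L_0$ penalty, the multiplier adapts during training so the network is driven toward the target sparsity budget rather than an unspecified sparsity level.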

Combining adaptive algorithms and hypergradient method: a performance and robustness study

no code implementations ICLR 2019 Akram Erraqabi, Nicolas Le Roux

Wilson et al. (2017) showed that, when the stepsize schedule is properly designed, stochastic gradient descent generalizes better than Adam (Kingma & Ba, 2014).
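The hypergradient method the title refers to adapts the stepsize online by descending the loss with respect to the stepsize itself (Baydin et al., 2018). A minimal plain-SGD sketch, with illustrative function names:

```python
import numpy as np

def sgd_hd(grad_fn, w, alpha=0.01, beta=1e-4, steps=100):
    """SGD whose stepsize alpha is itself updated by gradient descent."""
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        # since w_t = w_{t-1} - alpha * g_{t-1}, dLoss/dalpha = -g_t . g_{t-1}
        alpha += beta * np.dot(g, g_prev)   # hypergradient update of the stepsize
        w -= alpha * g                      # ordinary SGD step with adapted stepsize
        g_prev = g
    return w, alpha

# Example: minimize the quadratic ||w||^2 (gradient 2w)
w, alpha = sgd_hd(lambda w: 2 * w, np.array([3.0, -1.0]))
```

The same inner/outer structure can wrap an adaptive method such as Adam in place of the plain SGD step, which is the combination the paper studies.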

A3T: Adversarially Augmented Adversarial Training

no code implementations12 Jan 2018 Akram Erraqabi, Aristide Baratin, Yoshua Bengio, Simon Lacoste-Julien

Recent research has shown that deep neural networks are highly sensitive to so-called adversarial perturbations: tiny, purposely crafted changes to the input designed to fool a machine learning classifier.

Adversarial Robustness · BIG-bench Machine Learning +1
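A minimal sketch of one standard way such perturbations are generated, the fast gradient sign method (FGSM); this illustrates the attack the abstract describes, not A3T itself, and the toy model below is an assumption for runnability:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Return x perturbed by eps in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# toy classifier and batch, just to make the sketch runnable
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```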

Image Segmentation by Iterative Inference from Conditional Score Estimation

1 code implementation ICLR 2018 Adriana Romero, Michal Drozdzal, Akram Erraqabi, Simon Jégou, Yoshua Bengio

We experimentally find that the proposed iterative inference from conditional score estimation, using conditional denoising autoencoders, outperforms comparable models based on CRFs as well as models without any explicit modeling of the conditional joint distribution of outputs.

Denoising · Image Segmentation +1
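The idea rests on the result that a denoising autoencoder's reconstruction residual approximates the score of the data distribution (Alain & Bengio, 2014): for a conditional DAE, dae(y, x) - y points along the gradient of log p(y|x), so a segmentation map can be refined by small steps in that direction. A minimal sketch, with the toy DAE and step size as assumptions rather than the paper's exact procedure:

```python
import torch

def iterative_inference(dae, x, y_init, step=0.1, n_steps=20):
    """Refine segmentation y by moving along the DAE-estimated score of p(y|x)."""
    y = y_init.clone()
    for _ in range(n_steps):
        with torch.no_grad():
            y = y + step * (dae(y, x) - y)
    return y

class ToyDAE(torch.nn.Module):
    """Untrained stand-in: takes [y, x] channels, returns a denoised mask."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(4, 1, 3, padding=1)   # 1 mask + 3 image channels
    def forward(self, y, x):
        return torch.sigmoid(self.net(torch.cat([y, x], dim=1)))

x, y0 = torch.rand(1, 3, 32, 32), torch.rand(1, 1, 32, 32)
y = iterative_inference(ToyDAE(), x, y0)
```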

On Random Weights for Texture Generation in One Layer Neural Networks

no code implementations19 Dec 2016 Mihir Mongia, Kundan Kumar, Akram Erraqabi, Yoshua Bengio

Recent work in the literature has shown experimentally that one can use the lower layers of a trained convolutional neural network (CNN) to model natural textures.

Texture Synthesis
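In the Gatys-style setup this paper examines, a texture is summarized by the Gram matrix of a conv layer's feature maps; the paper's finding is that random filters in a single layer can suffice. A minimal sketch of extracting those statistics (the synthesis loop that matches them is omitted; names are illustrative):

```python
import torch
import torch.nn.functional as F

def random_gram(img, n_filters=64, ksize=11, seed=0):
    """Gram matrix of one random-filter conv layer: a texture descriptor."""
    torch.manual_seed(seed)
    W = torch.randn(n_filters, img.shape[1], ksize, ksize)   # random, untrained filters
    feats = F.relu(F.conv2d(img, W))                         # (1, C, H', W') feature maps
    f = feats.flatten(2).squeeze(0)                          # (C, H'*W')
    return f @ f.t() / f.shape[1]                            # filter co-activation statistics

gram = random_gram(torch.rand(1, 3, 64, 64))   # (64, 64) texture summary
```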

Diet Networks: Thin Parameters for Fat Genomics

5 code implementations28 Nov 2016 Adriana Romero, Pierre Luc Carrier, Akram Erraqabi, Tristan Sylvain, Alex Auvolat, Etienne Dejoie, Marc-André Legault, Marie-Pierre Dubé, Julie G. Hussin, Yoshua Bengio

It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g., for each position in the genome where variations are observed), and then learn, with another neural network called the parameter prediction network, how to map a feature's distributed representation to the vector of parameters specific to that feature in the classifier network (the weights linking the feature's value to each hidden unit).

Parameter Prediction
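A minimal sketch of that parameter-prediction idea: the classifier's "fat" first-layer weight matrix is predicted row by row from per-feature embeddings, so only the small predictor network holds free parameters. Shapes and names are illustrative:

```python
import torch

n_features, embed_dim, n_hidden = 100_000, 32, 100

feature_embeddings = torch.randn(n_features, embed_dim)   # learned or precomputed per-feature
param_predictor = torch.nn.Linear(embed_dim, n_hidden)    # the parameter prediction network

# Predicted first layer: one n_hidden-dim parameter vector per input feature,
# i.e. an (n_features, n_hidden) weight matrix that is never stored as free parameters.
W1 = param_predictor(feature_embeddings)

x = torch.randn(8, n_features)            # batch of high-dimensional inputs (e.g. genotypes)
h = torch.relu(x @ W1)                    # classifier's first hidden layer
```

This keeps the parameter count tied to embed_dim rather than n_features, which is the point when features (genome positions) vastly outnumber training examples.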
