Search Results for author: Mattia Rigotti

Found 12 papers, 6 papers with code

$\boldsymbol{\delta}^2$-exploration for Reinforcement Learning

no code implementations 29 Sep 2021 Rong Zhu, Mattia Rigotti

Effectively tackling the \emph{exploration-exploitation dilemma} is still a major challenge in reinforcement learning.

General Reinforcement Learning · Q-Learning +1

Attention-based Interpretability with Concept Transformers

no code implementations ICLR 2022 Mattia Rigotti, Christoph Miksovic, Ioana Giurgiu, Thomas Gschwind, Paolo Scotton

In particular, we design the Concept Transformer, a deep learning module that explains the outputs of the model it is embedded in as attention over user-defined high-level concepts.
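The mechanism described in this snippet can be sketched as plain cross-attention from input tokens to a fixed set of concept embeddings, where the attention matrix itself serves as the explanation. Everything below (shapes, variable names, toy data) is my own illustration, not the paper's code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def concept_attention(tokens, concept_embeddings, w_q, w_k, w_v):
    """Cross-attention from input tokens to user-defined concept vectors.

    The (tokens x concepts) attention matrix doubles as the explanation:
    it says how much each concept contributed to each output.
    """
    q = tokens @ w_q                   # (n_tokens, d)
    k = concept_embeddings @ w_k       # (n_concepts, d)
    v = concept_embeddings @ w_v       # (n_concepts, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (n_tokens, n_concepts)
    return attn @ v, attn

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))      # e.g. 4 image patches, dim 8
concepts = rng.normal(size=(3, 8))    # 3 user-defined concepts
w = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]
out, attn = concept_attention(tokens, concepts, *w)
# each row of `attn` is a distribution over the 3 concepts
```

The design choice worth noting is that keys and values come from the concept embeddings rather than from the input, so the attention weights are interpretable by construction.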

Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks

1 code implementation NeurIPS 2021 Rong Zhu, Mattia Rigotti

Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment.

Efficient Exploration · Multi-Armed Bandits
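The Thompson Sampling strategy named in the snippet is easiest to see in its textbook Beta-Bernoulli form: sample a plausible success rate for each arm from its posterior, then play the arm whose sample is highest. This is a generic sketch, not the paper's deep-network variant:

```python
import numpy as np

def thompson_step(successes, failures, rng):
    """One round of Beta-Bernoulli Thompson Sampling."""
    # Beta(s+1, f+1) is the posterior under a uniform prior.
    samples = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(samples))

rng = np.random.default_rng(0)
true_rates = np.array([0.2, 0.5, 0.8])   # unknown to the agent
succ = np.zeros(3)
fail = np.zeros(3)
for _ in range(500):
    arm = thompson_step(succ, fail, rng)
    reward = rng.random() < true_rates[arm]
    succ[arm] += reward
    fail[arm] += 1 - reward
# the best arm (index 2) should end up pulled most often
```

Exploration falls out of the posterior sampling itself: arms with few pulls have wide posteriors and occasionally produce the largest sample, so no explicit exploration bonus is needed.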

Alleviating Noisy Data in Image Captioning with Cooperative Distillation

no code implementations 21 Dec 2020 Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff

Image captioning systems have made substantial progress, largely due to the availability of curated datasets like Microsoft COCO or VizWiz that have accurate descriptions of their corresponding images.

Image Captioning

Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge

1 code implementation 21 Dec 2020 Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff, Richard A. Young, Brian Belgodere

Image captioning has recently demonstrated impressive progress, largely owing to the introduction of neural network algorithms trained on curated datasets like MS-COCO.

Image Captioning

Self-correcting Q-Learning

no code implementations 2 Dec 2020 Rong Zhu, Mattia Rigotti

The Q-learning algorithm is known to be affected by maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention.
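The maximization bias itself is easy to demonstrate numerically: taking the max over noisy value estimates is biased upward even when every true action value is zero. For contrast, the sketch also shows the classic double-estimator remedy (the Double Q-learning idea, not the paper's self-correcting method):

```python
import numpy as np

rng = np.random.default_rng(0)

# All 10 actions have true value 0, but each estimate carries unit noise.
n_actions, n_trials = 10, 10_000
noisy_q = rng.normal(0.0, 1.0, size=(n_trials, n_actions))

# Standard Q-learning target: max over the noisy estimates.
# Its mean is the expected max of 10 standard normals (~1.5), not 0.
single_estimate = noisy_q.max(axis=1).mean()

# Double estimator: pick the argmax with one set of estimates,
# evaluate it with an independent second set. This is unbiased here.
noisy_q2 = rng.normal(0.0, 1.0, size=(n_trials, n_actions))
best = noisy_q.argmax(axis=1)
double_estimate = noisy_q2[np.arange(n_trials), best].mean()
# single_estimate is far above 0; double_estimate is close to 0
```

The overestimation grows with the number of actions and the noise level, which is why it matters most in large, stochastic environments.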


Tabular Transformers for Modeling Multivariate Time Series

1 code implementation 3 Nov 2020 Inkit Padhi, Yair Schiff, Igor Melnyk, Mattia Rigotti, Youssef Mroueh, Pierre Dognin, Jerret Ross, Ravi Nair, Erik Altman

This results in two architectures for tabular time series: one for learning representations that is analogous to BERT and can be pre-trained end-to-end and used in downstream tasks, and one that is akin to GPT and can be used for generation of realistic synthetic tabular sequences.

Fraud Detection · Synthetic Data Generation +1
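Before a BERT- or GPT-style model can consume tabular rows, each row has to become a token sequence. A minimal sketch of one common scheme (my own illustration, not the paper's code): give each field its own small vocabulary and offset the ids so tokens from different fields never collide.

```python
# Toy field-wise tokenization of tabular rows (hypothetical helper names).
def build_vocabs(rows):
    """One value->id vocabulary per column."""
    vocabs = [{} for _ in rows[0]]
    for row in rows:
        for i, val in enumerate(row):
            vocabs[i].setdefault(val, len(vocabs[i]))
    return vocabs

def tokenize(row, vocabs, offsets):
    # Shift each field's local ids by a per-field offset so the final
    # token ids live in one shared, collision-free vocabulary.
    return [offsets[i] + vocabs[i][v] for i, v in enumerate(row)]

rows = [("visa", "food", "low"),
        ("amex", "travel", "high"),
        ("visa", "travel", "low")]
vocabs = build_vocabs(rows)
offsets, total = [], 0
for v in vocabs:
    offsets.append(total)
    total += len(v)
seqs = [tokenize(r, vocabs, offsets) for r in rows]
# seqs[0] == [0, 2, 4]  (field-local ids shifted by per-field offsets)
```

From here, masking tokens gives the BERT-like pretraining objective, while predicting the next token in a flattened row sequence gives the GPT-like generative one.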

Unbalanced Sobolev Descent

1 code implementation NeurIPS 2020 Youssef Mroueh, Mattia Rigotti

USD transports particles along gradient flows of the witness function of the Sobolev-Fisher discrepancy (advection step) and reweighs the mass of particles with respect to this witness function (reaction step).
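The advection-reaction structure can be caricatured on 1-D particles with a hand-picked quadratic witness function, a toy stand-in for the Sobolev-Fisher critic of the paper; step sizes, the witness, and the reweighting rule below are all my own simplifications:

```python
import numpy as np

def usd_step(particles, weights, witness, grad_witness, step=0.1, rate=0.5):
    """One toy advection-reaction step: move particles along the witness
    gradient (advection), then reweigh their mass by the witness value
    (reaction) and renormalize."""
    particles = particles + step * grad_witness(particles)   # advection
    weights = weights * np.exp(rate * witness(particles))    # reaction
    return particles, weights / weights.sum()

# Hand-picked witness: peaks at the target location 2.0, so its gradient
# pulls particles toward 2.0 and its value upweights particles near 2.0.
witness = lambda x: -(x - 2.0) ** 2
grad_witness = lambda x: -2.0 * (x - 2.0)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200)        # source particles around 0
w = np.ones_like(x) / len(x)
for _ in range(50):
    x, w = usd_step(x, w, witness, grad_witness)
# the weighted particle mean drifts toward the target at 2.0
```

The "unbalanced" part is the reaction step: because mass is created and destroyed, the flow can match targets whose total mass or mode weights differ from the source, which pure transport cannot.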

Sobolev Independence Criterion

1 code implementation NeurIPS 2019 Youssef Mroueh, Tom Sercu, Mattia Rigotti, Inkit Padhi, Cicero dos Santos

In the kernel version we show that SIC can be cast as a convex optimization problem by introducing auxiliary variables that play an important role in feature selection as they are normalized feature importance scores.

Feature Importance · Feature Selection

Efficient ConvNets for Analog Arrays

no code implementations 3 Jul 2018 Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch

Analog arrays are a promising upcoming hardware technology with the potential to drastically speed up deep learning.

Beyond Backprop: Online Alternating Minimization with Auxiliary Variables

1 code implementation 24 Jun 2018 Anna Choromanska, Benjamin Cowen, Sadhana Kumaravel, Ronny Luss, Mattia Rigotti, Irina Rish, Brian Kingsbury, Paolo DiAchille, Viatcheslav Gurev, Ravi Tejwani, Djallel Bouneffouf

Despite significant recent advances in deep neural networks, training them remains a challenge due to the highly non-convex nature of the objective function.
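The auxiliary-variable idea behind this alternative to backprop can be caricatured on a two-layer linear network: instead of differentiating through the composition, treat the hidden activations as a free variable and alternate closed-form least-squares updates. This is a toy offline sketch under my own simplifications, not the paper's online algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
W1_true = rng.normal(size=(5, 3))
W2_true = rng.normal(size=(3, 2))
Y = X @ W1_true @ W2_true            # realizable two-layer linear target

W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(3, 2))
A = X @ W1                           # auxiliary variable: hidden activations
for _ in range(10):
    # Each subproblem is linear least squares, so no gradients are needed.
    W2 = np.linalg.lstsq(A, Y, rcond=None)[0]   # fit output layer to A
    A = Y @ np.linalg.pinv(W2)                  # update auxiliary activations
    W1 = np.linalg.lstsq(X, A, rcond=None)[0]   # fit first layer to A
    A = X @ W1                                  # keep A consistent with W1
loss = float(np.mean((X @ W1 @ W2 - Y) ** 2))
# on this realizable linear problem the alternation drives the loss to ~0
```

Splitting the network at its activations turns one non-convex problem into a sequence of convex subproblems, which is the structural point the paper exploits (with penalties coupling the layers and online updates, neither of which this toy keeps).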

Energy-efficient neuromorphic classifiers

no code implementations 1 Jul 2015 Daniel Martí, Mattia Rigotti, Mingoo Seok, Stefano Fusi

We also show that the energy consumption of the IBM chip is typically 2 or more orders of magnitude lower than that of conventional digital machines when implementing classifiers with comparable performance.
