Search Results for author: Mattia Rigotti

Found 24 papers, 8 papers with code

Energy-efficient neuromorphic classifiers

no code implementations 1 Jul 2015 Daniel Martí, Mattia Rigotti, Mingoo Seok, Stefano Fusi

We also show that the energy consumption of the IBM chip is typically 2 or more orders of magnitude lower than that of conventional digital machines when implementing classifiers with comparable performance.

Beyond Backprop: Online Alternating Minimization with Auxiliary Variables

1 code implementation 24 Jun 2018 Anna Choromanska, Benjamin Cowen, Sadhana Kumaravel, Ronny Luss, Mattia Rigotti, Irina Rish, Brian Kingsbury, Paolo DiAchille, Viatcheslav Gurev, Ravi Tejwani, Djallel Bouneffouf

Despite significant recent advances in deep neural networks, training them remains a challenge due to the highly non-convex nature of the objective function.

Efficient ConvNets for Analog Arrays

no code implementations 3 Jul 2018 Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch

Analog arrays are a promising upcoming hardware technology with the potential to drastically speed up deep learning.

Sobolev Independence Criterion

1 code implementation NeurIPS 2019 Youssef Mroueh, Tom Sercu, Mattia Rigotti, Inkit Padhi, Cicero dos Santos

In the kernel version we show that SIC can be cast as a convex optimization problem by introducing auxiliary variables that play an important role in feature selection as they are normalized feature importance scores.

Feature Importance · Feature Selection
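As a hint of how normalized auxiliary variables can act as importance scores, the classical eta-trick gives a closed-form minimizer over the simplex that is proportional to the absolute feature weights. Below is a minimal numerical sketch of that generic construction; the weight vector is made up for illustration, and this is not the full SIC objective.

```python
import numpy as np

# Hypothetical per-feature weights (e.g., gradient-based sensitivities).
w = np.array([0.1, -2.0, 0.0, 0.5])

def eta_objective(eta, w, eps=1e-12):
    # Convex surrogate sum_j w_j^2 / eta_j used in eta-trick reformulations.
    return np.sum(w**2 / (eta + eps))

# Closed-form minimizer over the simplex: eta_j proportional to |w_j|.
eta_star = np.abs(w) / np.abs(w).sum()
print("normalized importance scores:", eta_star)

# Sanity check: eta_star beats a uniform allocation.
uniform = np.full_like(w, 1.0 / len(w))
assert eta_objective(eta_star, w) <= eta_objective(uniform, w)
```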

Unbalanced Sobolev Descent

1 code implementation NeurIPS 2020 Youssef Mroueh, Mattia Rigotti

USD transports particles along gradient flows of the witness function of the Sobolev-Fisher discrepancy (advection step) and reweighs the mass of particles with respect to this witness function (reaction step).
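A minimal sketch of such an advection-reaction update on weighted particles, with a hand-picked quadratic witness function standing in for the learned Sobolev-Fisher witness; the witness, step sizes, and the weight renormalization are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])

def witness(x):
    # Toy witness: large where particles should gain mass, negative elsewhere.
    return -np.sum((x - target) ** 2, axis=-1)

def witness_grad(x):
    return -2.0 * (x - target)

# Weighted particle cloud.
particles = rng.normal(size=(256, 2))
weights = np.full(256, 1.0 / 256)

eta, beta = 0.05, 0.1
for _ in range(100):
    # Advection: move particles along the gradient flow of the witness.
    particles = particles + eta * witness_grad(particles)
    # Reaction: reweight particle mass according to the witness values.
    weights = weights * np.exp(beta * witness(particles))
    weights = weights / weights.sum()  # renormalized here for the toy example; USD itself allows unbalanced mass

print("weighted mean:", (weights[:, None] * particles).sum(axis=0))
```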

Tabular Transformers for Modeling Multivariate Time Series

1 code implementation 3 Nov 2020 Inkit Padhi, Yair Schiff, Igor Melnyk, Mattia Rigotti, Youssef Mroueh, Pierre Dognin, Jerret Ross, Ravi Nair, Erik Altman

This results in two architectures for tabular time series: one for learning representations that is analogous to BERT and can be pre-trained end-to-end and used in downstream tasks, and one that is akin to GPT and can be used for generation of realistic synthetic tabular sequences.

Fraud Detection · Synthetic Data Generation · +2 more
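A minimal sketch of the kind of field-level tokenization such tabular transformers typically build on: each column value is mapped to a token id from a per-field vocabulary, and a window of rows is flattened into one token sequence for a BERT- or GPT-style model. The fields, bin edges, and vocabulary layout below are assumptions for illustration, not the paper's exact encoding.

```python
import numpy as np

# Hypothetical transaction-like rows: (amount, merchant_category).
rows = [(12.5, "grocery"), (899.0, "electronics"), (7.0, "grocery")]

amount_bins = np.array([10.0, 50.0, 250.0, 1000.0])  # quantization edges for the numeric field
categories = {"grocery": 0, "electronics": 1, "travel": 2}

def tokenize_row(amount, category, amount_offset=0, cat_offset=10):
    # Each field gets its own disjoint id range within a shared vocabulary.
    amount_token = amount_offset + int(np.digitize(amount, amount_bins))
    category_token = cat_offset + categories[category]
    return [amount_token, category_token]

# Flatten a window of rows into one token sequence for a transformer.
sequence = [tok for row in rows for tok in tokenize_row(*row)]
print(sequence)  # [1, 10, 3, 11, 0, 10]
```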

Self-correcting Q-Learning

no code implementations 2 Dec 2020 Rong Zhu, Mattia Rigotti

The Q-learning algorithm is known to be affected by the maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention.

Q-Learning
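The maximization bias itself can be reproduced in a few lines: when every action has true value zero but the estimates are noisy, the maximum of the estimates is systematically positive. This is a generic illustration of the bias, not the paper's correction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_samples_per_action, n_trials = 10, 5, 10_000

overestimates = []
for _ in range(n_trials):
    # True action values are all 0; estimates are sample means of noisy returns.
    q_hat = rng.normal(0.0, 1.0, size=(n_actions, n_samples_per_action)).mean(axis=1)
    overestimates.append(q_hat.max())

# max_a E[Q(a)] = 0, yet E[max_a Q_hat(a)] comes out clearly positive.
print("mean of max estimate:", np.mean(overestimates))
```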

Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge

1 code implementation 21 Dec 2020 Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff, Richard A. Young, Brian Belgodere

Image captioning has recently demonstrated impressive progress largely owing to the introduction of neural network algorithms trained on curated datasets like MS-COCO.

Image Captioning · Navigate

Alleviating Noisy Data in Image Captioning with Cooperative Distillation

no code implementations 21 Dec 2020 Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff

Image captioning systems have made substantial progress, largely due to the availability of curated datasets like Microsoft COCO or VizWiz that have accurate descriptions of their corresponding images.

Image Captioning

Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks

1 code implementation NeurIPS 2021 Rong Zhu, Mattia Rigotti

Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment.

Efficient Exploration · Multi-Armed Bandits · +1 more
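For reference, the classical (non-deep) Thompson Sampling loop alluded to above, on a Bernoulli bandit with Beta posteriors; the arm probabilities and horizon are made up for the example, and this is not the paper's deep-network variant.

```python
import numpy as np

rng = np.random.default_rng(0)
true_probs = np.array([0.2, 0.5, 0.7])       # hypothetical arm reward rates
alpha = np.ones(3)                           # Beta posterior parameters per arm
beta = np.ones(3)

for t in range(2000):
    theta = rng.beta(alpha, beta)            # sample one value per arm from its posterior
    arm = int(np.argmax(theta))              # act greedily w.r.t. the sampled model
    reward = rng.random() < true_probs[arm]  # Bernoulli reward
    alpha[arm] += reward                     # posterior update
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))  # concentrates on the best arm
```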

Attention-based Interpretability with Concept Transformers

no code implementations ICLR 2022 Mattia Rigotti, Christoph Miksovic, Ioana Giurgiu, Thomas Gschwind, Paolo Scotton

In particular, we design the Concept Transformer, a deep learning module that, when embedded in a model, exposes explanations of that model's output in terms of attention over user-defined high-level concepts.
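A minimal sketch of the underlying mechanism: a pooled representation cross-attends over a set of concept embeddings, so the attention weights double as per-concept explanation scores. The dimensions, single-query setup, and random weights below are illustrative assumptions, not the paper's module.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d, n_concepts, n_classes = 16, 5, 3
query = rng.normal(size=(1, d))              # e.g. pooled image/token representation
concepts = rng.normal(size=(n_concepts, d))  # user-defined high-level concept embeddings
W_v = rng.normal(size=(d, n_classes))        # maps each concept to class evidence

# Cross-attention of the query over the concepts.
scores = query @ concepts.T / np.sqrt(d)     # (1, n_concepts)
attn = softmax(scores)                       # attention weights = concept relevances
logits = attn @ (concepts @ W_v)             # prediction assembled from concept contributions

print("per-concept explanation weights:", attn.round(3))
print("class logits:", logits.round(3))
```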

$\delta^2$-exploration for Reinforcement Learning

no code implementations 29 Sep 2021 Rong Zhu, Mattia Rigotti

Effectively tackling the exploration-exploitation dilemma is still a major challenge in reinforcement learning.

General Reinforcement Learning · Q-Learning · +2 more

Compositional generalization through abstract representations in human and artificial neural networks

no code implementations 15 Sep 2022 Takuya Ito, Tim Klinger, Douglas H. Schultz, John D. Murray, Michael W. Cole, Mattia Rigotti

Our findings give empirical support to the role of compositional generalization in human behavior, implicate abstract representations as its neural implementation, and illustrate that these representations can be embedded into ANNs by designing simple and efficient pretraining procedures.

Zero-shot Generalization

Model-Assisted Labeling via Explainability for Visual Inspection of Civil Infrastructures

no code implementations 22 Sep 2022 Klara Janouskova, Mattia Rigotti, Ioana Giurgiu, Cristiano Malossi

These are used within an assisted labeling framework where annotators can interact with them as proposal segmentation masks, deciding to accept, reject, or modify them; the interactions are logged as weak labels to further refine the classifier.

Segmentation
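A minimal sketch of how such accept/reject/modify interactions could be logged as weak labels; the data structures, file names, and decision encoding are assumptions for illustration, not the paper's framework.

```python
from dataclasses import dataclass, field

@dataclass
class WeakLabelLog:
    records: list = field(default_factory=list)

    def log(self, image_id, proposal_mask, decision, edited_mask=None):
        # decision is one of "accept", "reject", "modify"; "modify" carries the edited mask.
        final_mask = edited_mask if decision == "modify" else proposal_mask
        self.records.append(
            {"image": image_id, "decision": decision,
             "mask": None if decision == "reject" else final_mask}
        )

log = WeakLabelLog()
log.log("bridge_001.png", proposal_mask="mask_a", decision="accept")
log.log("bridge_002.png", proposal_mask="mask_b", decision="reject")
log.log("bridge_003.png", proposal_mask="mask_c", decision="modify", edited_mask="mask_c_fixed")
# log.records can now serve as weak supervision when refining the classifier.
```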

Active Learning for Imbalanced Civil Infrastructure Data

no code implementations 19 Oct 2022 Thomas Frick, Diego Antognini, Mattia Rigotti, Ioana Giurgiu, Benjamin Grewe, Cristiano Malossi

Unfortunately, annotation costs are incredibly high as our proprietary civil engineering dataset must be annotated by highly trained engineers.

Active Learning

Estimating the Adversarial Robustness of Attributions in Text with Transformers

no code implementations 18 Dec 2022 Adam Ivankay, Mattia Rigotti, Ivan Girardi, Chiara Marchiori, Pascal Frossard

Finally, with experiments on several text classification architectures, we show that TEA consistently outperforms current state-of-the-art AR estimators, yielding perturbations that alter explanations to a greater extent while being more fluent and less perceptible.

Adversarial Robustness · Text Classification · +2 more

Adaptive Conformal Regression with Jackknife+ Rescaled Scores

no code implementations 31 May 2023 Nicolas Deutschmann, Mattia Rigotti, Maria Rodriguez Martinez

We address this with a new adaptive method that rescales conformal scores with an estimate of the local score distribution, inspired by the Jackknife+ method, which enables the use of calibration data in conformal scores without breaking calibration-test exchangeability.

Conformal Prediction · Prediction Intervals · +1 more
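A minimal locally-adaptive split-conformal sketch in the same spirit: calibration residuals are rescaled by a fitted estimate of local difficulty before taking the conformal quantile. This uses plain split conformal with a residual model rather than the paper's Jackknife+-based construction, and the data and models are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic heteroscedastic data: noise grows with |x|.
X = rng.uniform(-3, 3, size=(1500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.2 * np.abs(X[:, 0]))
X_tr, y_tr = X[:500], y[:500]
X_cal, y_cal = X[500:1000], y[500:1000]
X_te, y_te = X[1000:], y[1000:]

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Local difficulty estimate: a second model fit on absolute training residuals.
resid_model = RandomForestRegressor(random_state=0).fit(
    X_tr, np.abs(y_tr - model.predict(X_tr)))

# Rescaled conformal scores on the calibration set.
sigma_cal = np.maximum(resid_model.predict(X_cal), 1e-6)
scores = np.abs(y_cal - model.predict(X_cal)) / sigma_cal

alpha = 0.1
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

# Adaptive intervals: wider where the residual model predicts larger errors.
sigma_te = np.maximum(resid_model.predict(X_te), 1e-6)
lower = model.predict(X_te) - q * sigma_te
upper = model.predict(X_te) + q * sigma_te
print("empirical coverage:", np.mean((y_te >= lower) & (y_te <= upper)))
```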

DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications

1 code implementation 5 Jul 2023 Adam Ivankay, Mattia Rigotti, Pascal Frossard

This results in our Domain Adaptive AR Estimator (DARE), an attribution robustness estimator that allows us to properly characterize the domain-specific robustness of faithful explanations.

Unraveling the Key Components of OOD Generalization via Diversification

no code implementations 26 Dec 2023 Harold Benoit, Liangze Jiang, Andrei Atanov, Oğuzhan Fatih Kar, Mattia Rigotti, Amir Zamir

We show that (1) diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification and can underperform significantly when away from a method-specific sweet spot.

On the generalization capacity of neural networks during generic multimodal reasoning

1 code implementation 26 Jan 2024 Takuya Ito, Soham Dan, Mattia Rigotti, James Kozloski, Murray Campbell

On the other hand, neither of these architectural features led to productive generalization, suggesting fundamental limitations of existing architectures for specific types of multimodal generalization.

Multimodal Reasoning · Systematic Generalization

Outline-Guided Object Inpainting with Diffusion Models

no code implementations 26 Feb 2024 Markus Pobitzer, Filip Janicki, Mattia Rigotti, Cristiano Malossi

We achieve that by creating variations of the available annotated object instances in a way that preserves the provided mask annotations, thereby resulting in new image-mask pairs to be added to the set of annotated images.

Image Augmentation · Instance Segmentation · +3 more
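One common way to realize such mask-preserving variations is to inpaint inside the existing instance mask with an off-the-shelf diffusion inpainting pipeline and reuse the original mask as the annotation for the generated image. The sketch below uses the Hugging Face diffusers inpainting pipeline with placeholder file names, prompt, and checkpoint; it illustrates the general recipe, not the paper's outline-guided method.

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a generic inpainting model (the checkpoint name is an assumption; any
# diffusion inpainting pipeline with the same interface would do).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting")

image = Image.open("defect_0001.png").convert("RGB")    # placeholder annotated image
mask = Image.open("defect_0001_mask.png").convert("L")  # placeholder instance mask

# Regenerate content inside the masked region; the generation is confined to
# the mask, so the existing annotation stays (approximately) valid.
variation = pipe(prompt="a crack on a concrete surface",  # hypothetical prompt
                 image=image, mask_image=mask).images[0]

variation.save("defect_0001_variation.png")
# ("defect_0001_variation.png", "defect_0001_mask.png") forms a new image-mask pair.
```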
