Search Results for author: Ginevra Carbone

Found 9 papers, 5 papers with code

Conditioning Score-Based Generative Models by Neuro-Symbolic Constraints

no code implementations • 31 Aug 2023 • Davide Scassola, Sebastiano Saccani, Ginevra Carbone, Luca Bortolussi

Score-based and diffusion models have emerged as effective approaches for both conditional and unconditional generation.

Time Series
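The snippet below is a minimal toy sketch of the general conditioning idea the title points at, not the paper's neuro-symbolic method: a Langevin sampler whose score is the sum of an analytic prior score and the gradient of a soft constraint. The Gaussian prior and the "x > 0" constraint are invented for illustration.

```python
# Toy sketch (not the paper's method): condition a score-based sampler by
# adding the gradient of a soft constraint to the prior score.
import numpy as np

def score_prior(x):
    # Analytic score of a standard Gaussian prior: grad_x log N(x; 0, 1) = -x
    return -x

def score_constraint(x, tau=0.1):
    # Soft constraint "x > 0" via log sigmoid(x / tau); its gradient
    # (1 - sigmoid(x / tau)) / tau pushes samples toward x > 0.
    s = 1.0 / (1.0 + np.exp(-x / tau))
    return (1.0 - s) / tau

def langevin_sample(n_steps=500, step=1e-2, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    for _ in range(n_steps):
        g = score_prior(x) + score_constraint(x)  # conditioned score
        x = x + step * g + np.sqrt(2 * step) * rng.normal(size=n)
    return x

samples = langevin_sample()
print(f"fraction satisfying x > 0: {(samples > 0).mean():.3f}")
```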

On the Robustness of Bayesian Neural Networks to Adversarial Attacks

2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker

Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.

Variational Inference
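For context, here is a hedged sketch of the kind of gradient-based manipulation at stake, in the style of the classic FGSM attack; the tiny untrained model and random data are placeholders, not the paper's setup.

```python
# Illustrative FGSM-style attack: perturb each input in the direction that
# maximally increases the loss. Model and data are placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))

def fgsm(x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
x_adv = fgsm(x, y)
print("clean acc:", (model(x).argmax(1) == y).float().mean().item())
print("adv acc:  ", (model(x_adv).argmax(1) == y).float().mean().item())
```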

Scalable Stochastic Parametric Verification with Stochastic Variational Smoothed Model Checking

no code implementations • 11 May 2022 • Luca Bortolussi, Francesca Cairoli, Ginevra Carbone, Paolo Pulcini

As observations are costly and noisy, smMC is framed as a Bayesian inference problem, so that the estimates come with a quantification of their uncertainty.

Bayesian Inference • Computational Efficiency • +2
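A deliberately simplified sketch of the smoothed model checking idea follows; the paper's SV-smMC replaces exact GP inference with stochastic variational inference, whereas this toy just fits plain GP regression to noisy Monte Carlo estimates, so each prediction carries uncertainty.

```python
# Much-simplified smMC sketch: regress noisy satisfaction estimates over the
# parameter space with a GP. The true satisfaction curve below is invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 1.0, 15)[:, None]           # model parameters
p_true = 1.0 / (1.0 + np.exp(-10 * (theta - 0.5)))   # true satisfaction prob.
# Costly, noisy observations: estimates from only 20 simulation runs per point.
p_hat = rng.binomial(20, p_true.ravel()) / 20.0

gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.01)).fit(theta, p_hat)
grid = np.linspace(0, 1, 5)[:, None]
mean, std = gp.predict(grid, return_std=True)
for t, m, s in zip(grid.ravel(), mean, std):
    print(f"theta={t:.2f}: P(sat) ~ {m:.2f} +/- {2 * s:.2f}")
```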

Abstraction of Markov Population Dynamics via Generative Adversarial Nets

1 code implementation • 24 Jun 2021 • Francesca Cairoli, Ginevra Carbone, Luca Bortolussi

Markov Population Models are a widespread formalism used to model the dynamics of complex systems, with applications in Systems Biology and many other fields.
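To make the formalism concrete, here is a toy birth-death Markov Population Model simulated with the standard Gillespie algorithm; the GAN abstraction the paper proposes would learn to emulate trajectories like these. All rates below are invented.

```python
# Toy birth-death Markov Population Model, simulated exactly (Gillespie SSA).
import numpy as np

def gillespie(x0=10, birth=1.0, death=0.1, t_max=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_max:
        rates = np.array([birth, death * x])  # propensity of each reaction
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)     # time to the next reaction
        x += 1 if rng.random() < rates[0] / total else -1
        traj.append((t, x))
    return traj

traj = gillespie()
print("final population:", traj[-1][1], "after", len(traj) - 1, "events")
```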

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

1 code implementation • 22 Feb 2021 • Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi

We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations of the inputs, and even under direct attacks on the explanations themselves.

General Classification
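A rough sketch of the comparison protocol follows, on a toy model with plain input-gradient saliency rather than the paper's layer-wise relevance setup, and with weight noise as a crude stand-in for a true posterior.

```python
# Deterministic saliency vs. saliency averaged over sampled weight
# perturbations (a crude stand-in for posterior averaging in a BNN).
import copy
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def saliency(model, x):
    # Input-gradient saliency: how much each input feature moves the output.
    x = x.detach().clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad

x = torch.randn(1, 10)
det_expl = saliency(net, x)  # explanation from a single deterministic network

expls = []
for _ in range(50):
    sample = copy.deepcopy(net)
    with torch.no_grad():
        for p in sample.parameters():
            p.add_(0.05 * torch.randn_like(p))  # sampled weight perturbation
    expls.append(saliency(sample, x))
bayes_expl = torch.stack(expls).mean(0)          # averaged explanation
print("cosine(deterministic, averaged):",
      torch.cosine_similarity(det_expl, bayes_expl).item())
```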

Random Projections for Improved Adversarial Robustness

no code implementations • 18 Feb 2021 • Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi

We propose two training techniques for improving the robustness of Neural Networks to adversarial attacks, i.e., manipulations of the inputs that are maliciously crafted to fool networks into incorrect predictions.

Adversarial Robustness • Dimensionality Reduction
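A hedged sketch of the basic ingredient: a fixed Gaussian random projection of the inputs. The paper proposes two specific training techniques built around projections like this; the dimensions and usage below are illustrative only.

```python
# Gaussian random projection of inputs to a lower-dimensional subspace; the
# classifier would then be trained on the projected data.
import numpy as np

rng = np.random.default_rng(0)
d, k = 784, 128                            # input dim -> projected dim (toy)
R = rng.normal(size=(d, k)) / np.sqrt(k)   # fixed random projection matrix

def project(x):
    # x: (batch, d) inputs; returns their k-dimensional projections.
    return x @ R

x = rng.normal(size=(32, d))
print("projected batch shape:", project(x).shape)  # (32, 128)
```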

Adversarial Learning of Robust and Safe Controllers for Cyber-Physical Systems

no code implementations • 4 Sep 2020 • Luca Bortolussi, Francesca Cairoli, Ginevra Carbone, Francesco Franchina, Enrico Regolin

We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests.
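A very reduced sketch of the adversarial scheme the abstract describes, on invented 1D linear dynamics rather than a real cyber-physical system: a controller and a disturbance-injecting attacker are trained against each other, so the trained attacker doubles as a generator of challenging tests.

```python
# Alternating controller/attacker training on toy 1D dynamics (illustrative).
import torch

torch.manual_seed(0)
ctrl = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
attk = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
opt_c = torch.optim.Adam(ctrl.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(attk.parameters(), lr=1e-2)

def rollout(steps=20):
    # Cost: average squared distance from the origin under bounded disturbance.
    x = torch.randn(64, 1)
    cost = 0.0
    for _ in range(steps):
        u = ctrl(x).tanh()          # bounded control action
        w = 0.3 * attk(x).tanh()    # bounded adversarial disturbance
        x = 0.9 * x + u + w         # toy linear dynamics
        cost = cost + (x ** 2).mean()
    return cost / steps

for it in range(200):
    # Controller minimizes the cost; attacker maximizes it (alternating steps).
    opt_c.zero_grad(); rollout().backward(); opt_c.step()
    opt_a.zero_grad(); (-rollout()).backward(); opt_a.step()

print("final average cost under attack:", rollout().item())
```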

ETC-NLG: End-to-end Topic-Conditioned Natural Language Generation

1 code implementation • 25 Aug 2020 • Ginevra Carbone, Gabriele Sarti

We first test the effectiveness of our approach in a low-resource setting for Italian, evaluating the conditioning for both topic models and gold annotations.

Attribute • Computational Efficiency • +2
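As a loose illustration of topic conditioning only: ETC-NLG itself plugs topic-model outputs into plug-and-play controlled generation, whereas this toy merely re-ranks candidate continuations by a bag-of-words topic score. The Italian topic words and candidates are invented.

```python
# Re-rank candidate continuations by overlap with a topic's bag of words
# (a loose stand-in for topic-conditioned generation; all strings invented).
topic_words = {"inflazione", "banca", "mercato", "prezzi"}

def topic_score(text):
    # Fraction of tokens that belong to the topic's bag of words.
    tokens = text.lower().split()
    return sum(t in topic_words for t in tokens) / max(len(tokens), 1)

candidates = [
    "la banca centrale osserva i prezzi e il mercato",
    "il gatto dorme tutto il giorno sul divano",
]
best = max(candidates, key=topic_score)
print("selected continuation:", best)
```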

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.

Variational Inference
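A sketch of the phenomenon behind the paper's result, with weight noise standing in for a real posterior: a gradient-based attacker on a BNN must use the input gradient of the posterior-averaged prediction, and averaging over samples can shrink that gradient relative to any single network's.

```python
# Compare the input gradient of one network with the gradient averaged over
# sampled weight perturbations (a crude stand-in for posterior samples).
import copy
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
x = torch.randn(1, 10)

def input_grad(model, x):
    x = x.detach().clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad

single = input_grad(net, x)          # gradient of one deterministic network

grads = []
for _ in range(100):                 # crude "posterior" samples
    sample = copy.deepcopy(net)
    with torch.no_grad():
        for p in sample.parameters():
            p.add_(0.2 * torch.randn_like(p))
    grads.append(input_grad(sample, x))
avg = torch.stack(grads).mean(0)     # expected (posterior-averaged) gradient

print("single-net grad norm:   ", single.norm().item())
print("posterior-avg grad norm:", avg.norm().item())
```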
