no code implementations • 31 Aug 2023 • Davide Scassola, Sebastiano Saccani, Ginevra Carbone, Luca Bortolussi
Score-based and diffusion models have emerged as effective approaches for both conditional and unconditional generation.
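To illustrate the core idea behind score-based generation (not this paper's specific method), here is a minimal sketch of sampling via unadjusted Langevin dynamics. For simplicity it uses the analytic score of a 1-D Gaussian in place of a learned score network; the distribution parameters and step size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of score-based sampling via unadjusted Langevin dynamics.
# For illustration only: the analytic score of a 1-D Gaussian N(mu, sigma^2)
# stands in for a learned score network.
mu, sigma = 2.0, 0.5

def score(x):
    """Score function: gradient of the log-density, d/dx log p(x)."""
    return -(x - mu) / sigma**2

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # start from a standard normal
step = 1e-3
for _ in range(5000):
    noise = rng.standard_normal(x.shape)
    x = x + step * score(x) + np.sqrt(2 * step) * noise

print(f"sample mean ~ {x.mean():.2f}, std ~ {x.std():.2f}")  # approx. (2.00, 0.50)
```

Iterating the noisy gradient step drives the initial samples toward the target distribution, which is the mechanism diffusion models exploit with a score learned from data.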
2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker
Despite significant practical and theoretical efforts, training deep learning models that are robust to adversarial attacks is still an open problem.
no code implementations • 11 May 2022 • Luca Bortolussi, Francesca Cairoli, Ginevra Carbone, Paolo Pulcini
As observations are costly and noisy, smoothed model checking (smMC) is framed as a Bayesian inference problem, so that the estimates carry an additional quantification of uncertainty.
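The paper's smMC formulation works over the whole parameter space (typically with Gaussian processes); the toy sketch below conveys only the Bayesian framing at a single parameter point, using a conjugate Beta-Bernoulli update over noisy simulation outcomes. The true satisfaction probability and sample count are hypothetical.

```python
import numpy as np
from scipy import stats

# Toy illustration of the Bayesian framing: estimate the probability that a
# stochastic model satisfies a property from Bernoulli observations
# (simulation runs), with a posterior that quantifies estimation uncertainty.
rng = np.random.default_rng(1)
true_sat_prob = 0.7                     # hypothetical ground truth
runs = rng.random(50) < true_sat_prob   # 50 costly, noisy simulation runs

alpha, beta = 1 + runs.sum(), 1 + (~runs).sum()   # Beta(1, 1) prior update
posterior = stats.beta(alpha, beta)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"estimate {posterior.mean():.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```

With few runs the credible interval stays wide, which is exactly the uncertainty information a point estimate would hide.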
1 code implementation • 24 Jun 2021 • Francesca Cairoli, Ginevra Carbone, Luca Bortolussi
Markov Population Models are a widespread formalism used to model the dynamics of complex systems, with applications in Systems Biology and many other fields.
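As a concrete instance of the formalism, here is a minimal Gillespie simulation of a Markov Population Model: an SIR epidemic treated as a continuous-time Markov chain over population counts. The rate constants and initial populations are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal Gillespie (stochastic simulation algorithm) run of an SIR epidemic
# as a continuous-time Markov chain over population counts (S, I, R).
rng = np.random.default_rng(2)
S, I, R = 95, 5, 0
beta_c, gamma, N = 0.3, 0.1, S + I + R   # illustrative contact/recovery rates
t, t_end = 0.0, 100.0

while I > 0 and t < t_end:
    rate_inf = beta_c * S * I / N        # infection: S + I -> 2I
    rate_rec = gamma * I                 # recovery:  I -> R
    total = rate_inf + rate_rec
    t += rng.exponential(1.0 / total)    # time to the next event
    if rng.random() < rate_inf / total:  # pick which event fires
        S, I = S - 1, I + 1
    else:
        I, R = I - 1, R + 1

print(f"t={t:.1f}: S={S}, I={I}, R={R}")
```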
1 code implementation • 22 Feb 2021 • Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi
We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations of the inputs, and even under direct attacks on the explanations themselves.
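A rough sketch of the underlying mechanism: explanations are averaged over samples from the network's posterior rather than taken from a single deterministic network. Here MC dropout stands in for a proper Bayesian posterior, and the architecture and sample count are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Sketch of a posterior-averaged saliency map. MC dropout approximates the
# Bayesian posterior here (an assumption, not the paper's exact setup):
# input gradients are averaged over stochastic forward passes, smoothing
# the instability of single-network explanations.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 2))
model.train()                      # keep dropout active to sample stochastic nets

x = torch.randn(1, 10, requires_grad=True)
target = 1
saliency = torch.zeros_like(x)
n_samples = 50
for _ in range(n_samples):
    out = model(x)[0, target]                 # class score for one sampled net
    grad, = torch.autograd.grad(out, x)
    saliency += grad / n_samples

print(saliency)                    # averaged input-gradient explanation
```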
no code implementations • 18 Feb 2021 • Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi
We propose two training techniques for improving the robustness of Neural Networks to adversarial attacks, i.e., manipulations of the inputs maliciously crafted to fool networks into incorrect predictions.
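For readers unfamiliar with such manipulations, the Fast Gradient Sign Method (FGSM) is a standard example of an adversarial attack; the sketch below shows it on a placeholder classifier. This illustrates the threat model only, not the paper's two defence techniques.

```python
import torch
import torch.nn as nn

# Fast Gradient Sign Method (FGSM): perturb the input in the direction that
# maximally increases the loss, within a small budget eps. The model below
# is an untrained placeholder used purely for illustration.
model = nn.Sequential(nn.Linear(10, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)
y = torch.tensor([1])                      # true label
loss = loss_fn(model(x), y)
loss.backward()

eps = 0.1                                  # perturbation budget
x_adv = x + eps * x.grad.sign()            # crafted adversarial input
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```

Even this one-step attack often flips the prediction of an undefended network, which is what robust training techniques aim to prevent.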
no code implementations • 4 Sep 2020 • Luca Bortolussi, Francesca Cairoli, Ginevra Carbone, Francesco Franchina, Enrico Regolin
We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests.
1 code implementation • 25 Aug 2020 • Ginevra Carbone, Gabriele Sarti
We first test the effectiveness of our approach in a low-resource setting for Italian, evaluating the conditioning for both topic models and gold annotations.
1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.