Score-based and diffusion models have emerged as effective approaches for both conditional and unconditional generation.
In this work, we propose a novel adversarial defence mechanism for image classification, CARSO, blending the paradigms of adversarial training and adversarial purification in a mutually beneficial, robustness-enhancing way.
Despite their impressive performance in classification, neural networks are known to be vulnerable to adversarial attacks.
State-of-the-art Machine Learning (ML) approaches are mostly based on gradient descent optimisation in continuous spaces, while the learning of logic is framed in the discrete syntactic space of formulae.
We consider the problem of predictive monitoring (PM), i.e., predicting at runtime the satisfaction of a desired property from the current state of the system.
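As a minimal formalization (notation ours, not the paper's): modelling the system as a stochastic process $(X_t)_{t \ge 0}$ over a state space $S$ and writing $\varphi$ for the property of interest, PM amounts to learning a fast predictor $f : S \to [0, 1]$ with
\[ f(s) \;\approx\; \Pr\big( X \models \varphi \mid X_t = s \big), \]
so that satisfaction can be estimated at runtime from the current state alone.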
Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.
As observations are costly and noisy, smoothed model checking (smMC) is framed as a Bayesian inference problem, so that the estimates come with an additional quantification of uncertainty.
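A hedged sketch of this Bayesian framing (symbols ours): given model parameters $\theta$, the goal is the satisfaction function $p_\varphi(\theta) = \Pr(M_\theta \models \varphi)$, estimated from noisy Boolean samples by, e.g., placing a Gaussian-process prior on a latent function $g$,
\[ p_\varphi(\theta) = \Phi\big(g(\theta)\big), \qquad g \sim \mathcal{GP}(0, k), \]
whose posterior yields both a point estimate and credible intervals quantifying the uncertainty.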
We consider the problem of predictive monitoring (PM), i.e., predicting at runtime future violations of a system from the current state.
Markov Population Models are a widespread formalism used to model the dynamics of complex systems, with applications in Systems Biology and many other fields.
To understand the long-run behaviour of Markov population models, computing the stationary distribution is often a crucial step.
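For a continuous-time Markov chain with infinitesimal generator $Q$, the stationary distribution $\pi$ is the standard fixed point
\[ \pi Q = 0, \qquad \textstyle\sum_i \pi_i = 1, \qquad \pi_i \ge 0; \]
for population models the state space is typically huge or infinite, which is what makes solving this system hard.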
We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations of the inputs and even under direct attacks to the explanations.
We propose two training techniques for improving the robustness of Neural Networks to adversarial attacks, i.e., manipulations of the inputs that are maliciously crafted to fool networks into incorrect predictions.
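As one concrete, standard instance of such a manipulation (not necessarily the attack considered in the paper), the fast gradient sign method perturbs an input $x$ with label $y$ as
\[ x_{\mathrm{adv}} = x + \varepsilon\, \mathrm{sign}\big(\nabla_x \mathcal{L}(f(x), y)\big), \]
with $\varepsilon$ bounding the $\ell_\infty$ norm of the perturbation so that it stays imperceptible.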
Many probabilistic inference problems such as stochastic filtering or the computation of rare event probabilities require model analysis under initial and terminal constraints.
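In a hedged formulation (notation ours): for a Markov process with path law $p$, such analyses amount to computing the bridged posterior
\[ p\big(X \mid X_0 \in A,\ X_T \in B\big) \;\propto\; \mathbf{1}[X_0 \in A]\; p(X)\; \mathbf{1}[X_T \in B], \]
i.e., the law of the process conditioned simultaneously on its initial and terminal states.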
We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests.
In this paper, we propose the novel theoretical framework of density-embedded layers, generalizing the transformation represented by a neuron.
The deterministic rate equation (DRE) gives a macroscopic approximation as a compact system of differential equations that estimate the average populations for each species, but it may be inaccurate in the case of nonlinear interaction dynamics.
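Concretely, for a population model with stoichiometric change vectors $v_r$ and rate (propensity) functions $\lambda_r$, the DRE reads
\[ \frac{dx}{dt} \;=\; \sum_r v_r\, \lambda_r(x); \]
this mean-field approximation matches the true average exactly only when all $\lambda_r$ are affine in $x$, which is why nonlinear interaction dynamics are the problematic case.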
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
We investigate global adversarial robustness guarantees for machine learning models.
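One common way to phrase such a guarantee globally (a sketch only; the paper's exact definition may differ): a model $f$ is $(\epsilon, \delta)$-globally robust if
\[ \forall x, x' :\ \|x - x'\| \le \epsilon \;\Longrightarrow\; |f(x) - f(x')| \le \delta, \]
i.e., the robustness bound must hold uniformly over the whole input space rather than only around individual test points.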
The success of modern Artificial Intelligence (AI) technologies depends critically on the ability to learn non-linear functional dependencies from large, high-dimensional data sets.
We consider the problem of mining signal temporal logic (STL) requirements from a dataset of regular (good) and anomalous (bad) trajectories of a dynamical system.
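A hedged sketch of the underlying optimisation (notation ours): writing $\rho(\varphi, x)$ for the quantitative STL robustness of formula $\varphi$ on trajectory $x$, mining can be cast as searching for
\[ \varphi^\star \in \arg\max_\varphi \Big( \mathbb{E}_{x \sim \mathrm{good}}\big[\rho(\varphi, x)\big] - \mathbb{E}_{x \sim \mathrm{bad}}\big[\rho(\varphi, x)\big] \Big), \]
i.e., a formula whose robustness best separates the regular from the anomalous trajectories.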
Biological systems are often modelled at different levels of abstraction, depending on the particular aims and resources of a study.
We present a novel approach to learn the formulae characterising the emergent behaviour of a dynamical system from system observations.
By discussing two examples, we show how to approximate the distribution of the robustness score and its key indicators: the average robustness and the conditional average robustness.
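Hedging on the paper's exact definitions, with $R$ denoting the (random) robustness score of the property over stochastic trajectories, these indicators can be read as
\[ \mathbb{E}[R] \qquad \text{and} \qquad \mathbb{E}[R \mid R > 0], \]
i.e., the mean robustness and the mean robustness conditioned on the property being satisfied.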