no code implementations • 17 Apr 2024 • Emanuele La Malfa, Gabriele La Malfa, Giuseppe Nicosia, Vito Latora
For the novel metrics, in addition to the existing ones, we provide a mathematical formalisation for Fully Connected, AutoEncoder, Convolutional, and Recurrent neural networks, varying their activation functions and number of hidden layers.
no code implementations • 15 Dec 2023 • Chandresh Pravin, Ivan Martino, Giuseppe Nicosia, Varun Ojha
We define three "filtering scores" for quantifying the fragility, robustness, and antifragility of DNN parameters, based on performance on (i) a clean dataset, (ii) an adversarial dataset, and (iii) the difference in performance between the two.
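The snippet does not give the exact score definitions, but the idea can be sketched as simple functions of clean and adversarial accuracy. The function name and formulas below are hypothetical illustrations, not the paper's definitions:

```python
# Hypothetical sketch of per-parameter "filtering scores" built from
# accuracy on clean vs. adversarial data; the paper's exact definitions
# may differ.

def filtering_scores(acc_clean, acc_adv):
    """Toy scores for one parameter (or parameter group)."""
    fragility = acc_clean - acc_adv        # large drop under attack
    robustness = min(acc_clean, acc_adv)   # performs well in both settings
    antifragility = acc_adv - acc_clean    # improves under attack
    return fragility, robustness, antifragility

# Example: a parameter group whose accuracy drops from 0.95 to 0.60
# under attack would score as fragile, not antifragile.
print(filtering_scores(0.95, 0.60))
```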
no code implementations • 14 Nov 2022 • Varun Ojha, Bartolomeo Panto, Giuseppe Nicosia
The paper proposes a novel adaptive search space decomposition method and a novel gradient-free optimization-based formulation for the pre- and post-buckling analyses of space truss structures.
no code implementations • 12 Sep 2022 • Emanuele La Malfa, Gabriele La Malfa, Claudio Caprioli, Giuseppe Nicosia, Vito Latora
Deep Neural Networks are, from a physical perspective, graphs whose `links` and `vertices` iteratively process data and solve tasks sub-optimally.
1 code implementation • 11 Jul 2022 • Varun Ojha, Jon Timmis, Giuseppe Nicosia
We present a comprehensive global sensitivity analysis of two single-objective and two multi-objective state-of-the-art global optimization evolutionary algorithms as an algorithm configuration problem.
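Treating algorithm configuration as a sensitivity-analysis problem can be illustrated with a minimal variance-based estimate. The objective function, parameter names, and ranges below are illustrative stand-ins, not the paper's benchmark suite or method:

```python
import numpy as np

# Minimal sketch of a first-order (variance-based) sensitivity estimate
# for an algorithm-configuration problem. Everything here is a toy
# stand-in for "run the optimizer, record its performance".

rng = np.random.default_rng(1)

def algorithm_performance(pop_size, mutation_rate):
    # Illustrative surrogate for best fitness found by an evolutionary
    # algorithm under a given configuration.
    return -((pop_size - 50) ** 2) * 0.001 - ((mutation_rate - 0.1) ** 2) * 10

# Sample the configuration space uniformly.
n = 2000
pop = rng.uniform(10, 100, n)
mut = rng.uniform(0.0, 0.5, n)
perf = np.array([algorithm_performance(p, m) for p, m in zip(pop, mut)])

def first_order_index(x, y, bins=20):
    # Proxy for the first-order Sobol index: variance of the binned
    # conditional mean of y given x, over the total variance of y.
    idx = np.digitize(x, np.linspace(x.min(), x.max(), bins))
    cond_means = np.array([y[idx == b].mean() for b in np.unique(idx)])
    return cond_means.var() / y.var()

s_pop = first_order_index(pop, perf)
s_mut = first_order_index(mut, perf)
print(s_pop, s_mut)  # how much output variance each parameter explains
```

A dedicated library (e.g. Sobol or Morris estimators) would be used in practice; the binned estimator above only conveys the idea.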
1 code implementation • 4 Feb 2022 • Varun Ojha, Giuseppe Nicosia
We propose a novel algorithm called Backpropagation Neural Tree (BNeuralT), which is a stochastic computational dendritic tree.
no code implementations • 31 Jan 2022 • Chandresh Pravin, Ivan Martino, Giuseppe Nicosia, Varun Ojha
In this paper, we evaluate the robustness of state-of-the-art image classification models trained on the MNIST and CIFAR-10 datasets against the fast gradient sign method (FGSM) attack, a simple yet effective method of deceiving neural networks.
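The FGSM attack perturbs an input by a small step in the direction of the sign of the loss gradient, x' = x + ε·sign(∇ₓL). A minimal sketch on a toy logistic-regression model (the weights, input, and ε below are illustrative, not the paper's setup):

```python
import numpy as np

# Fast Gradient Sign Method (FGSM) on a toy linear classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # fixed "trained" weights
x = rng.normal(size=4)   # a clean input
y = 1.0                  # its true label (binary, in {0, 1})

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss L = -[y log p + (1-y) log(1-p)] w.r.t. x
p = sigmoid(w @ x)
grad_x = (p - y) * w

# FGSM step: epsilon times the sign of the input gradient
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

# The perturbation increases the loss, lowering confidence in the true class
p_adv = sigmoid(w @ x_adv)
print(p, p_adv)
```

For a deep network the gradient would come from automatic differentiation rather than the closed form above; the attack itself is unchanged.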
1 code implementation • 6 Oct 2021 • Emanuele La Malfa, Gabriele La Malfa, Giuseppe Nicosia, Vito Latora
In this paper, we interpret Deep Neural Networks with Complex Network Theory.
1 code implementation • 9 Oct 2020 • Varun Ojha, Giuseppe Nicosia
We propose an algorithm and a new method to tackle classification problems.
Ranked #1 on General Classification on iris