no code implementations • 17 Apr 2024 • Emanuele La Malfa, Gabriele La Malfa, Giuseppe Nicosia, Vito Latora
For the novel metrics, as well as the existing ones, we provide a mathematical formalisation for Fully Connected, AutoEncoder, Convolutional, and Recurrent neural networks, for which we vary the activation functions and the number of hidden layers.
no code implementations • 5 Feb 2024 • Fangru Lin, Emanuele La Malfa, Valentin Hofmann, Elle Michelle Yang, Anthony Cohn, Janet B. Pierrehumbert
Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs.
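The core quantity in this kind of reasoning can be illustrated with a minimal sketch (my own toy example, not the paper's benchmark): the optimal completion time of an asynchronous plan is the longest path through its task-dependency DAG, since independent tasks can run in parallel.

```python
# Toy illustration: minimal completion time (makespan) of an asynchronous plan
# equals the critical-path length of its task-dependency DAG.
def makespan(durations, deps):
    """durations: task -> time cost; deps: task -> list of prerequisite tasks."""
    memo = {}

    def finish(t):
        # earliest finish time of t = its duration plus the latest
        # finish time among its prerequisites (0 if it has none)
        if t not in memo:
            memo[t] = durations[t] + max(
                (finish(d) for d in deps.get(t, [])), default=0
            )
        return memo[t]

    return max(finish(t) for t in durations)

# Example: A and B can run in parallel; C waits for both.
tasks = {"A": 3, "B": 5, "C": 2}
deps = {"C": ["A", "B"]}
print(makespan(tasks, deps))  # 7: C starts after B (5), A overlaps with B
```

A purely sequential planner would report 3 + 5 + 2 = 10 here; recognising that A and B are independent is what brings the cost down to 7.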
no code implementations • 17 Jan 2024 • Emanuele La Malfa, Christoph Weinhuber, Orazio Torre, Fangru Lin, Anthony Cohn, Nigel Shadbolt, Michael Wooldridge
We investigate the extent to which Large Language Models (LLMs) can simulate the execution of computer code and algorithms.
no code implementations • 28 Sep 2023 • Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony G. Cohn, Nigel Shadbolt, Michael Wooldridge
This paper has two goals: on the one hand, we delineate how the aforementioned challenges act as impediments to the accessibility, replicability, reliability, and trustworthiness of LMaaS.
1 code implementation • NeurIPS 2023 • Aleksandar Petrov, Emanuele La Malfa, Philip H. S. Torr, Adel Bibi
Recent language models have shown impressive multilingual performance, even when not explicitly trained for it.
1 code implementation • 31 Oct 2022 • Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska
In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations.
no code implementations • 12 Sep 2022 • Emanuele La Malfa, Gabriele La Malfa, Claudio Caprioli, Giuseppe Nicosia, Vito Latora
Deep Neural Networks are, from a physical perspective, graphs whose "links" and "vertices" iteratively process data and solve tasks sub-optimally.
1 code implementation • 13 Dec 2021 • Emanuele La Malfa, Marta Kwiatkowska
There is growing evidence that the classical notion of adversarial robustness originally introduced for images has been adopted as a de facto standard by a large part of the NLP research community.
1 code implementation • 6 Oct 2021 • Emanuele La Malfa, Gabriele La Malfa, Giuseppe Nicosia, Vito Latora
In this paper, we interpret Deep Neural Networks with Complex Network Theory.
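The basic move can be sketched as follows (a minimal illustration of the general idea, not the paper's actual metrics): treat each neuron as a vertex and each weight as a weighted edge, then compute standard Complex Network Theory measures such as node strength, i.e. the sum of absolute weights incident to a neuron. The 2-3-1 network below is a made-up example.

```python
# Sketch: a fully connected network viewed as a weighted directed graph.
# weights[l][i][j] is the weight from neuron i of layer l to neuron j
# of layer l+1 (hypothetical 2-3-1 network).
weights = [
    [[0.5, -1.0, 0.2],
     [0.8, 0.3, -0.4]],
    [[1.2], [-0.7], [0.9]],
]

def node_strengths(weights):
    """Node strength: sum of |w| over all edges incident to each neuron."""
    layer_sizes = [len(weights[0])] + [len(w[0]) for w in weights]
    strengths = [[0.0] * n for n in layer_sizes]
    for l, w in enumerate(weights):
        for i, row in enumerate(w):
            for j, wij in enumerate(row):
                strengths[l][i] += abs(wij)       # outgoing edge of neuron i
                strengths[l + 1][j] += abs(wij)   # incoming edge of neuron j
    return strengths

print(node_strengths(weights))
```

Tracking how such graph measures evolve across layers (or during training) is the kind of analysis a complex-network interpretation enables; the paper's own metrics are richer than this toy node strength.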
1 code implementation • 8 May 2021 • Emanuele La Malfa, Agnieszka Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Kwiatkowska
We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Emanuele La Malfa, Min Wu, Luca Laurenti, Benjie Wang, Anthony Hartshorn, Marta Kwiatkowska
Neural network NLP models are vulnerable to small modifications of the input that maintain the original meaning but result in a different prediction.