Search Results for author: Emanuele La Malfa

Found 11 papers, 6 papers with code

Deep Neural Networks via Complex Network Theory: a Perspective

no code implementations17 Apr 2024 Emanuele La Malfa, Gabriele La Malfa, Giuseppe Nicosia, Vito Latora

For both the novel metrics and the existing ones, we provide a mathematical formalisation for Fully Connected, AutoEncoder, Convolutional and Recurrent neural networks, for which we vary the activation functions and the number of hidden layers.
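As a rough illustration of this perspective (not the authors' implementation), the sketch below treats a fully connected network's weight matrices as a layered weighted graph and computes one simple complex-network statistic, node strength; the layer sizes and the choice of metric are assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fully connected network; the layer sizes are arbitrary assumptions.
layer_sizes = [4, 8, 8, 3]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def node_strengths(weights):
    """Per-neuron 'strength': sum of absolute incoming and outgoing link weights,
    viewing the network as a layered weighted graph with one node per neuron."""
    sizes = [w.shape[0] for w in weights] + [weights[-1].shape[1]]
    strengths = [np.zeros(s) for s in sizes]
    for l, w in enumerate(weights):
        strengths[l] += np.abs(w).sum(axis=1)      # outgoing links of layer l
        strengths[l + 1] += np.abs(w).sum(axis=0)  # incoming links of layer l+1
    return strengths

for layer, s in enumerate(node_strengths(weights)):
    print(f"layer {layer}: mean node strength = {s.mean():.3f}")
```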

Graph-enhanced Large Language Models in Asynchronous Plan Reasoning

no code implementations5 Feb 2024 Fangru Lin, Emanuele La Malfa, Valentin Hofmann, Elle Michelle Yang, Anthony Cohn, Janet B. Pierrehumbert

Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs.
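To see why, consider the toy plan below (not taken from the paper's benchmark): once the plan is viewed as a dependency graph and steps may run in parallel, the optimal completion time is the length of the critical path rather than the sum of step durations. Task names and durations are invented for illustration.

```python
from functools import lru_cache

# Hypothetical cooking plan: task -> (duration in minutes, prerequisite tasks).
plan = {
    "boil water":  (10, []),
    "chop onions": (5,  []),
    "cook pasta":  (12, ["boil water"]),
    "make sauce":  (8,  ["chop onions"]),
    "plate dish":  (2,  ["cook pasta", "make sauce"]),
}

@lru_cache(maxsize=None)
def finish_time(task: str) -> int:
    """Earliest finish time when steps run in parallel as soon as their
    prerequisites are done (longest path to this node in the dependency DAG)."""
    duration, deps = plan[task]
    return duration + max((finish_time(d) for d in deps), default=0)

optimal = max(finish_time(t) for t in plan)       # critical-path length
sequential = sum(d for d, _ in plan.values())     # naive one-step-at-a-time sum
print(f"optimal parallel schedule: {optimal} min, sequential schedule: {sequential} min")
```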

Code Simulation Challenges for Large Language Models

no code implementations17 Jan 2024 Emanuele La Malfa, Christoph Weinhuber, Orazio Torre, Fangru Lin, Anthony Cohn, Nigel Shadbolt, Michael Wooldridge

We investigate the extent to which Large Language Models (LLMs) can simulate the execution of computer code and algorithms.
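A minimal sketch of what such an evaluation involves, under the assumption of a generic LLM client: execute the program to obtain the ground-truth output, then compare it with whatever the model predicts. `ask_model` is a hypothetical placeholder, not an API from the paper.

```python
import contextlib
import io

snippet = """
x = 3
for i in range(4):
    x = x * 2 - 1
print(x)
"""

def ground_truth(code: str) -> str:
    """Actually run the code and capture stdout: the reference the model must match."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def ask_model(code: str) -> str:
    """Placeholder for an LLM call such as 'What does this program print?'."""
    raise NotImplementedError("plug in your LLM client here")

print("expected output:", ground_truth(snippet))  # 33
```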

Language Models as a Service: Overview of a New Paradigm and its Challenges

no code implementations28 Sep 2023 Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony G. Cohn, Nigel Shadbolt, Michael Wooldridge

This paper has two goals: on the one hand, we delineate how the aforementioned challenges act as impediments to the accessibility, replicability, reliability, and trustworthiness of LMaaS.

Benchmarking

Emergent Linguistic Structures in Neural Networks are Fragile

1 code implementation31 Oct 2022 Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska

In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations.

Language Modelling
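The framework in the paper probes syntactic structure; as a much simpler stand-in for the underlying idea of representation consistency, one could compare a model's sentence representation before and after a meaning-preserving substitution. The sentences, model choice, and mean pooling below are illustrative assumptions, not the paper's protocol.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: any BERT-style encoder serves for this illustration.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pooled final-layer representation of the sentence."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.last_hidden_state.mean(dim=1).squeeze(0)

original = "The movie was surprisingly good."
perturbed = "The film was surprisingly good."  # meaning-preserving substitution

sim = torch.cosine_similarity(
    sentence_embedding(original), sentence_embedding(perturbed), dim=0
)
print(f"cosine similarity under a synonym swap: {sim.item():.3f}")
```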

Deep Neural Networks as Complex Networks

no code implementations12 Sep 2022 Emanuele La Malfa, Gabriele La Malfa, Claudio Caprioli, Giuseppe Nicosia, Vito Latora

Deep Neural Networks are, from a physical perspective, graphs whose 'links' and 'vertices' iteratively process data and solve tasks sub-optimally.

The King is Naked: on the Notion of Robustness for Natural Language Processing

1 code implementation13 Dec 2021 Emanuele La Malfa, Marta Kwiatkowska

There is growing evidence that the classical notion of adversarial robustness originally introduced for images has been adopted as a de facto standard by a large part of the NLP research community.

Adversarial Robustness

On Guaranteed Optimal Robust Explanations for NLP Models

1 code implementation8 May 2021 Emanuele La Malfa, Agnieszka Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Kwiatkowska

We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP).

Sentiment Analysis
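The paper derives guaranteed optimal robust explanations via abduction-based reasoning; the brute-force toy below only illustrates the notion being optimised: the smallest set of input words that, once fixed, keeps the prediction unchanged under any allowed substitution of the remaining words. The scorer and the substitution neighbourhoods are invented for the example.

```python
from itertools import combinations, product

# Toy lexicon-based sentiment scorer (a stand-in for a real NLP model).
weights = {"great": 2.0, "movie": 0.1, "boring": -2.0, "fine": 0.5, "dull": -1.5}

def predict(words):
    """Stand-in model: sign of a weighted bag-of-words score."""
    return "pos" if sum(weights.get(w, 0.0) for w in words) >= 0 else "neg"

sentence = ["great", "movie"]
# Hypothetical neighbourhoods of allowed word substitutions.
neighbours = {"great": ["great", "fine", "dull"], "movie": ["movie", "boring"]}

def is_robust_explanation(fixed_idx):
    """True if fixing these positions keeps the prediction constant under
    every allowed substitution of the remaining positions."""
    free = [i for i in range(len(sentence)) if i not in fixed_idx]
    target = predict(sentence)
    for combo in product(*(neighbours[sentence[i]] for i in free)):
        candidate = list(sentence)
        for i, w in zip(free, combo):
            candidate[i] = w
        if predict(candidate) != target:
            return False
    return True

# Smallest subset of words that constitutes a robust explanation.
for k in range(len(sentence) + 1):
    found = next((set(c) for c in combinations(range(len(sentence)), k)
                  if is_robust_explanation(set(c))), None)
    if found is not None:
        print("minimal robust explanation:", [sentence[i] for i in sorted(found)])
        break
```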
