Search Results for author: Daniel Cunnington

Found 8 papers, 4 papers with code

The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning

no code implementations • 2 Feb 2024 • Daniel Cunnington, Mark Law, Jorge Lobo, Alessandra Russo

In this paper, we leverage the implicit knowledge within foundation models to enhance performance on NeSy tasks, whilst reducing the amount of data labelling and manual engineering.

Language Modelling • Large Language Model

Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?

no code implementations • 1 Feb 2024 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece

We hypothesise that this occurs when concept annotations are inaccurate or when it is unclear how input features should relate to concepts.

Symbolic Learning for Material Discovery

no code implementations • 30 Nov 2023 • Daniel Cunnington, Flaviu Cipcigan, Rodrigo Neumann Barros Ferreira, Jonathan Booth

Discovering new materials is essential to solve challenges in climate change, sustainability and healthcare.

Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation

1 code implementation • 7 Feb 2023 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece

Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification.
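
A minimal sketch of that two-stage structure, assuming a PyTorch implementation; the class name ConceptBottleneckModel and the layer sizes are illustrative, not taken from the paper:

import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    # Stage 1: raw input -> vector of human-defined concepts.
    # Stage 2: concept vector alone -> final classification.
    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_concepts),
        )
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_net(x))  # each concept scored in [0, 1]
        class_logits = self.label_net(concepts)        # label predictor sees only the concepts
        return concepts, class_logits

model = ConceptBottleneckModel(input_dim=64, n_concepts=10, n_classes=3)
concepts, class_logits = model(torch.randn(8, 64))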

Neuro-Symbolic Learning of Answer Set Programs from Raw Data

1 code implementation • 25 May 2022 • Daniel Cunnington, Mark Law, Jorge Lobo, Alessandra Russo

A promising direction for achieving this goal is Neuro-Symbolic AI, which aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.

Decision Making

FF-NSL: Feed-Forward Neural-Symbolic Learner

1 code implementation • 24 Jun 2021 • Daniel Cunnington, Mark Law, Alessandra Russo, Jorge Lobo

To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), which integrates neural networks with a logic-based machine learning system capable of learning from noisy examples, in order to learn interpretable knowledge from labelled unstructured data.
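
A rough sketch of that pipeline, under the assumption that a pretrained network first maps raw inputs to symbolic facts, which are then passed, weighted by the network's confidence, to a logic-based learner; the helper names perception_net and run_symbolic_learner are hypothetical, not FFNSL's actual API:

import torch

def build_weighted_examples(dataset, perception_net):
    # Each raw input is mapped to a symbolic fact by the neural network.
    # The network's confidence becomes the example weight, so the symbolic
    # learner may discard low-confidence (potentially noisy) examples.
    examples = []
    for image, downstream_label in dataset:
        probs = torch.softmax(perception_net(image.unsqueeze(0)), dim=-1)
        confidence, symbol = probs.max(dim=-1)
        examples.append({
            "context": f"digit({symbol.item()}).",  # fact predicted by the network (illustrative)
            "label": downstream_label,              # what the learned rules must entail
            "weight": float(confidence),            # penalty for leaving this example uncovered
        })
    return examples

# weighted = build_weighted_examples(dataset, perception_net)
# hypothesis = run_symbolic_learner(background_knowledge, weighted)  # hypothetical wrapper around a LAS system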

Inductive logic programming

NSL: Hybrid Interpretable Learning From Noisy Raw Data

no code implementations • 9 Dec 2020 • Daniel Cunnington, Alessandra Russo, Mark Law, Jorge Lobo, Lance Kaplan

Using the scoring function of FastLAS, NSL searches for short, interpretable rules that generalise over such noisy examples.
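
As a toy illustration of that trade-off (a simplified stand-in, not FastLAS's actual scoring function), a hypothesis can be scored by its length plus the total weight of the noisy examples it fails to cover, so shorter rules that still explain the high-weight examples score best:

def hypothesis_score(hypothesis, weighted_examples, covers):
    # hypothesis: list of candidate rules (strings); weighted_examples: dicts
    # with a numeric "weight"; covers(hypothesis, example) -> bool is assumed.
    length_cost = sum(len(rule.split()) for rule in hypothesis)
    noise_cost = sum(ex["weight"] for ex in weighted_examples
                     if not covers(hypothesis, ex))
    return length_cost + noise_cost  # lower is better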

Inductive logic programming

Synthetic Ground Truth Generation for Evaluating Generative Policy Models

1 code implementation • 26 Apr 2019 • Daniel Cunnington, Graham White, Geeth de Mel

Generative Policy-based Models aim to enable a coalition of systems, be they devices or services, to adapt to contextual changes such as environmental factors, user preferences and different tasks, whilst adhering to various constraints and regulations as directed by a managing party or the collective vision of the coalition.