Search Results for author: Matthew L. Daggitt

Found 10 papers, 3 papers with code

NLP Verification: Towards a General Methodology for Certifying Robustness

no code implementations • 15 Mar 2024 • Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Guy Katz, Verena Rieser, Oliver Lemon

We propose a number of practical NLP methods that can help to quantify the effects of the embedding gap; in particular, we propose falsifiability of semantic subspaces as another fundamental metric to be reported as part of the NLP verification pipeline.

Efficient compilation of expressive problem space specifications to neural network solvers

1 code implementation • 24 Jan 2024 • Matthew L. Daggitt, Wen Kokke, Robert Atkey

Recent work has described the presence of the embedding gap in neural network verification.

Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs

1 code implementation • 12 Jan 2024 • Matthew L. Daggitt, Wen Kokke, Robert Atkey, Natalia Slusarz, Luca Arnaboldi, Ekaterina Komendantskaya

Neuro-symbolic programs -- programs containing both machine learning components and traditional symbolic code -- are becoming increasingly widespread.
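As a toy illustration of that definition (a minimal sketch, not an example from the paper; the network, thresholds and function names are invented for illustration), the snippet below mixes a learned component with ordinary symbolic control logic:

    import torch
    import torch.nn as nn

    # machine learning component: a small classifier (untrained, illustration only)
    perception = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

    def controller(sensor_reading: torch.Tensor) -> str:
        # traditional symbolic code: an ordinary branching decision
        # wrapped around the network's output
        scores = perception(sensor_reading)
        obstacle_ahead = scores.argmax().item() == 1
        return "brake" if obstacle_ahead else "cruise"

    # the correctness of `controller` rests on a property of `perception`
    # (e.g. that it detects obstacles reliably), which is what verification targets
    print(controller(torch.zeros(4)))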

ANTONIO: Towards a Systematic Method of Generating NLP Benchmarks for Verification

no code implementations • 6 May 2023 • Marco Casadio, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Tanvi Dinkar, Daniel Kienitz, Verena Rieser, Ekaterina Komendantskaya

In particular, many known neural network verification methods that work for computer vision and other numeric datasets do not work for NLP.

Logic of Differentiable Logics: Towards a Uniform Semantics of DL

no code implementations • 19 Mar 2023 • Natalia Ślusarz, Ekaterina Komendantskaya, Matthew L. Daggitt, Robert Stewart, Kathrin Stark

A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions.
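As a rough illustration of what such an interpretation function can look like, the hedged Python sketch below maps one comparison and two connectives to losses, with a value of 0 meaning the formula is satisfied. The particular choices (hinge penalty for <=, sum for conjunction, minimum for disjunction) are only one of several possible interpretations, and the function names are illustrative rather than taken from the paper.

    import torch

    def leq(x, y):
        # "x <= y" becomes the hinge penalty max(0, x - y):
        # zero when the comparison holds, positive otherwise
        return torch.relu(x - y)

    def conj(a, b):
        # one common interpretation of conjunction: add the two losses
        return a + b

    def disj(a, b):
        # one common interpretation of disjunction: take the smaller loss
        return torch.minimum(a, b)

    # Example: interpret "out[0] <= out[1] AND out[2] <= 0.5" for a network output `out`
    out = torch.tensor([0.3, 0.1, 0.9])
    loss = conj(leq(out[0], out[1]), leq(out[2], torch.tensor(0.5)))
    # `loss` is differentiable in `out`, so it can be used as a training objective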

Why Robust Natural Language Understanding is a Challenge

no code implementations • 21 Jun 2022 • Marco Casadio, Ekaterina Komendantskaya, Verena Rieser, Matthew L. Daggitt, Daniel Kienitz, Luca Arnaboldi, Wen Kokke

With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural networks are notorious for their low robustness and can be highly sensitive to small input perturbations.

Natural Language Understanding
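For concreteness, the robustness property referred to above is usually stated as: for an input x and a perturbation radius eps, every point within distance eps of x should receive the same prediction as x. The sketch below is a purely empirical sampling check of that property; it can only falsify robustness, never prove it (the papers listed here use formal neural network verifiers for that), and the function name and parameters are illustrative.

    import torch

    def appears_robust(model, x, eps, n_samples=1000):
        # Randomly sample points in the L-infinity ball of radius eps around x
        # and report whether any of them changes the predicted class.
        # A True result means no counterexample was found, not a proof of robustness.
        base_class = model(x).argmax(dim=-1)
        for _ in range(n_samples):
            delta = (torch.rand_like(x) * 2 - 1) * eps   # uniform noise in [-eps, eps]
            if not torch.equal(model(x + delta).argmax(dim=-1), base_class):
                return False                             # prediction changed: counterexample
        return True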

Vehicle: Interfacing Neural Network Verifiers with Interactive Theorem Provers

no code implementations • 10 Feb 2022 • Matthew L. Daggitt, Wen Kokke, Robert Atkey, Luca Arnaboldi, Ekaterina Komendantskaya

However, although previous work has incorporated the results of these verifiers into proofs of larger properties of individual systems, there is currently no general methodology for bridging the gap between verifiers and interactive theorem provers (ITPs).

Automated Theorem Proving

Actions You Can Handle: Dependent Types for AI Plans

no code implementations • 24 May 2021 • Alasdair Hill, Ekaterina Komendantskaya, Matthew L. Daggitt, Ronald P. A. Petrick

Verification of AI is a challenge that has engineering, algorithmic and programming language components.

Neural Network Robustness as a Verification Property: A Principled Case Study

1 code implementation • 3 Apr 2021 • Marco Casadio, Ekaterina Komendantskaya, Matthew L. Daggitt, Wen Kokke, Guy Katz, Guy Amir, Idan Refaeli

Neural networks are very successful at detecting patterns in noisy data, and have become the technology of choice in many fields.

Data Augmentation
