no code implementations • 15 Mar 2024 • Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Guy Katz, Verena Rieser, Oliver Lemon
We propose a number of practical NLP methods that can help to quantify the effects of the embedding gap; in particular, we propose the falsifiability of semantic subspaces as a further fundamental metric to be reported as part of the NLP verification pipeline.
1 code implementation • 24 Jan 2024 • Matthew L. Daggitt, Wen Kokke, Robert Atkey
Recent work has described the presence of the embedding gap in neural network verification.
1 code implementation • 12 Jan 2024 • Matthew L. Daggitt, Wen Kokke, Robert Atkey, Natalia Slusarz, Luca Arnaboldi, Ekaterina Komendantskaya
Neuro-symbolic programs -- programs containing both machine learning components and traditional symbolic code -- are becoming increasingly widespread.
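As a toy illustration of what such a program looks like (the network, controller and threshold below are hypothetical, not taken from the paper), a learned component is called from ordinary symbolic control code, and it is the combination of the two that carries the safety-relevant behaviour:

```python
# Minimal sketch of a neuro-symbolic program: a learned component embedded
# inside traditional symbolic control code. All names are illustrative.
import numpy as np

def network(sensor_reading: np.ndarray) -> float:
    """Stand-in for a trained model estimating distance to an obstacle."""
    return float(sensor_reading.mean())          # placeholder inference

def controller(sensor_reading: np.ndarray) -> str:
    """Ordinary symbolic code that branches on the learned estimate."""
    estimated_distance = network(sensor_reading)
    if estimated_distance < 1.0:                 # hypothetical safety threshold
        return "brake"
    return "continue"

print(controller(np.array([0.4, 0.6, 0.5])))     # -> "brake"
```

Verifying such a system means reasoning about the symbolic branches and the learned component together, which is where dedicated neural network verifiers come in.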
no code implementations • 6 May 2023 • Marco Casadio, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Tanvi Dinkar, Daniel Kienitz, Verena Rieser, Ekaterina Komendantskaya
In particular, many known neural network verification methods that work for computer vision and other numeric datasets do not work for NLP.
no code implementations • 19 Mar 2023 • Natalia Ślusarz, Ekaterina Komendantskaya, Matthew L. Daggitt, Robert Stewart, Kathrin Stark
A differentiable logic (DL) consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions.
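As a rough illustration of this idea, the sketch below defines a toy specification syntax and a DL2-style interpretation that maps each expression to a non-negative loss, zero exactly when the specification holds. The constructors and the particular translation are illustrative assumptions, not the paper's definitions:

```python
# Toy differentiable logic: a specification syntax plus an interpretation
# function that turns specifications into losses (illustrative only).
from dataclasses import dataclass
from typing import Union

@dataclass
class LessEq:              # e1 <= e2
    lhs: float
    rhs: float

@dataclass
class And:                 # phi1 and phi2
    left: "Spec"
    right: "Spec"

Spec = Union[LessEq, And]

def to_loss(phi: Spec) -> float:
    """Return a non-negative loss that is 0 exactly when phi is satisfied."""
    if isinstance(phi, LessEq):
        return max(phi.lhs - phi.rhs, 0.0)        # penalise the violation margin
    if isinstance(phi, And):
        return to_loss(phi.left) + to_loss(phi.right)
    raise TypeError(f"unknown specification: {phi!r}")

# Example: require 0 <= f(x) <= 1 for some hypothetical network output f(x)
fx = 1.3
print(to_loss(And(LessEq(0.0, fx), LessEq(fx, 1.0))))   # 0.3: the spec is violated
```

Different DLs differ precisely in how constructs such as conjunction or implication are interpreted (for instance, summing versus multiplying the component losses), which is why the choice of DL matters for training and verification.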
no code implementations • 14 Jul 2022 • Natalia Slusarz, Ekaterina Komendantskaya, Matthew L. Daggitt, Robert Stewart
What difference does a specific choice of DL make in the context of continuous verification?
no code implementations • 21 Jun 2022 • Marco Casadio, Ekaterina Komendantskaya, Verena Rieser, Matthew L. Daggitt, Daniel Kienitz, Luca Arnaboldi, Wen Kokke
With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural Networks notoriously present low robustness and can be highly sensitive to small input perturbations.
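To make "sensitivity to small input perturbations" concrete, here is a minimal, purely illustrative sketch that probes a stand-in classifier by sampling points in an epsilon-ball around an input; the model and threshold are hypothetical, and random sampling can only find counterexamples, never prove robustness (which is what formal verifiers are for):

```python
# Empirical robustness probe: sample perturbations within an epsilon-ball
# and check whether the predicted class changes (illustrative only).
import numpy as np

def model(x: np.ndarray) -> int:
    """Hypothetical classifier: a stand-in for a trained network."""
    return int(x.sum() > 0.5)

def is_empirically_robust(x: np.ndarray, epsilon: float = 0.05,
                          samples: int = 1000, seed: int = 0) -> bool:
    rng = np.random.default_rng(seed)
    reference = model(x)
    for _ in range(samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if model(x + delta) != reference:
            return False          # found a perturbation that flips the class
    return True                   # no counterexample found (not a proof)

x = np.array([0.3, 0.21])
print(is_empirically_robust(x))   # likely False: 0.51 sits close to the 0.5 boundary
```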
no code implementations • 10 Feb 2022 • Matthew L. Daggitt, Wen Kokke, Robert Atkey, Luca Arnaboldi, Ekaterina Komendantskaya
However, although work has managed to incorporate the results of these verifiers to prove larger properties of individual systems, there is currently no general methodology for bridging the gap between verifiers and interactive theorem provers (ITPs).
no code implementations • 24 May 2021 • Alasdair Hill, Ekaterina Komendantskaya, Matthew L. Daggitt, Ronald P. A. Petrick
Verification of AI is a challenge that has engineering, algorithmic and programming language components.
1 code implementation • 3 Apr 2021 • Marco Casadio, Ekaterina Komendantskaya, Matthew L. Daggitt, Wen Kokke, Guy Katz, Guy Amir, Idan Refaeli
Neural networks are very successful at detecting patterns in noisy data, and have become the technology of choice in many fields.