Search Results for author: Ekaterina Komendantskaya

Found 17 papers, 2 papers with code

NLP Verification: Towards a General Methodology for Certifying Robustness

no code implementations • 15 Mar 2024 • Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Omri Isac, Matthew L. Daggitt, Guy Katz, Verena Rieser, Oliver Lemon

We propose a number of practical NLP methods that can help to identify the effects of the embedding gap; in particular, we propose the metric of falsifiability of semantic subspaces as another fundamental metric to be reported as part of the NLP verification pipeline.
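
As a hedged illustration of what such a metric could look like in practice (this is not the paper's implementation; the encoder stand-in, the hyper-rectangle bounds and the perturbation list below are hypothetical), one can measure how often embeddings of semantics-preserving perturbations fall outside a verified semantic subspace:

```python
# Sketch: estimating falsifiability of a verified "semantic subspace"
# represented as a hyper-rectangle over sentence embeddings.
# embed() is a stand-in for a real sentence encoder; all data is made up.
import hashlib
import numpy as np

def embed(sentence: str, dim: int = 8) -> np.ndarray:
    seed = int(hashlib.md5(sentence.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).uniform(-1.0, 1.0, size=dim)

def falsification_rate(lower, upper, perturbations):
    """Fraction of semantics-preserving perturbations whose embeddings
    fall outside the verified hyper-rectangle [lower, upper]."""
    outside = sum(
        1 for s in perturbations
        if np.any(embed(s) < lower) or np.any(embed(s) > upper)
    )
    return outside / len(perturbations)

lower, upper = -0.9 * np.ones(8), 0.9 * np.ones(8)
perturbed = ["please reset my password",
             "pls reset my password",
             "could you reset my password"]
print(f"falsification rate: {falsification_rate(lower, upper, perturbed):.2f}")
```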

Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs

1 code implementation • 12 Jan 2024 • Matthew L. Daggitt, Wen Kokke, Robert Atkey, Natalia Slusarz, Luca Arnaboldi, Ekaterina Komendantskaya

Neuro-symbolic programs -- programs containing both machine learning components and traditional symbolic code -- are becoming increasingly widespread.
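
To make the term concrete, here is a minimal, hypothetical example of a neuro-symbolic program (not written in Vehicle's specification language): a learned component embedded inside ordinary symbolic control code, where the symbolic part relies on a property of the network that a tool like Vehicle would verify.

```python
# Hypothetical neuro-symbolic program: a learned estimator wrapped in
# symbolic control logic. network() stands in for a trained model; the
# clamping logic relies on the (verified) assumption |estimate| <= 1.
import numpy as np

def network(sensor_readings: np.ndarray) -> float:
    """Stand-in for a trained model estimating cross-track error."""
    return float(np.tanh(sensor_readings.mean()))

def controller(sensor_readings: np.ndarray) -> float:
    estimate = network(sensor_readings)          # ML component
    steering = -2.0 * estimate                   # symbolic control law
    return max(-1.0, min(1.0, steering))         # symbolic safety clamp

print(controller(np.array([0.1, 0.3, -0.2])))
```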

ANTONIO: Towards a Systematic Method of Generating NLP Benchmarks for Verification

no code implementations • 6 May 2023 • Marco Casadio, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Tanvi Dinkar, Daniel Kienitz, Verena Rieser, Ekaterina Komendantskaya

In particular, many known neural network verification methods that work for computer vision and other numeric datasets do not work for NLP.
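
One way to see the difficulty (an illustration with a stand-in encoder, not code from the paper): an image perturbation can be bounded by construction, but a one-character typo induces an embedding-space jump that nothing bounds a priori, so the usual epsilon-ball framing does not transfer directly.

```python
# Illustration only: pixel noise is epsilon-bounded by construction, while a
# "small" text edit gives an uncontrolled jump in embedding space.
import hashlib
import numpy as np

def embed(sentence: str, dim: int = 16) -> np.ndarray:
    # Stand-in for a real sentence encoder.
    seed = int(hashlib.md5(sentence.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).uniform(-1.0, 1.0, size=dim)

eps = 0.03
image = np.random.rand(8, 8)
noisy = image + np.random.uniform(-eps, eps, size=image.shape)
print("image perturbation (L_inf):", np.abs(noisy - image).max())

s, s_typo = "book a flight to Rome", "book a flght to Rome"
print("typo perturbation in embedding space (L_inf):",
      np.abs(embed(s) - embed(s_typo)).max())
```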

Logic of Differentiable Logics: Towards a Uniform Semantics of DL

no code implementations • 19 Mar 2023 • Natalia Ślusarz, Ekaterina Komendantskaya, Matthew L. Daggitt, Robert Stewart, Kathrin Stark

A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions.
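
As a hedged sketch of that setup (one possible DL2-style interpretation; the paper compares several such interpretations under a single semantics, and the syntax below is a toy stand-in), a specification can be represented as a small formula datatype and translated recursively into a differentiable loss:

```python
# Toy DL: formulas as nested tuples, interpreted into loss terms.
# This particular translation is DL2-style; other DLs use fuzzy-logic
# interpretations instead. Loss 0 means the specification is satisfied.
import torch

def interpret(formula):
    op = formula[0]
    if op == "geq":                       # a >= b  ~>  relu(b - a)
        _, a, b = formula
        return torch.relu(b - a)
    if op == "leq":                       # a <= b  ~>  relu(a - b)
        _, a, b = formula
        return torch.relu(a - b)
    if op == "and":                       # conjunction ~> sum of losses
        return sum(interpret(f) for f in formula[1:])
    if op == "or":                        # disjunction ~> product of losses
        loss = torch.tensor(1.0)
        for f in formula[1:]:
            loss = loss * interpret(f)
        return loss
    raise ValueError(f"unknown connective {op}")

x = torch.tensor(0.3, requires_grad=True)
spec = ("and", ("geq", x, torch.tensor(0.0)), ("leq", x, torch.tensor(1.0)))
loss = interpret(spec)
loss.backward()                           # differentiable, so usable in training
```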

CheckINN: Wide Range Neural Network Verification in Imandra (Extended)

no code implementations • 21 Jul 2022 • Remi Desmartin, Grant Passmore, Ekaterina Komendantskaya, Matthew Daggitt

Neural networks are increasingly relied upon as components of complex safety-critical systems such as autonomous vehicles.

Autonomous Vehicles

Why Robust Natural Language Understanding is a Challenge

no code implementations • 21 Jun 2022 • Marco Casadio, Ekaterina Komendantskaya, Verena Rieser, Matthew L. Daggitt, Daniel Kienitz, Luca Arnaboldi, Wen Kokke

With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural Networks notoriously present low robustness and can be highly sensitive to small input perturbations.

Natural Language Understanding

Network robustness as a mathematical property: training, evaluation and attack

no code implementations • 29 Sep 2021 • Marco Casadio, Matthew L Daggitt, Ekaterina Komendantskaya, Wen Kokke, Robert Stewart

We also perform experiments to compare the applicability and efficacy of different training methods for ensuring the network obeys these different definitions.
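
For illustration (hypothetical names and a toy linear model, not the paper's experimental setup), two of the robustness notions one might compare can be checked empirically by sampling an epsilon-ball around an input:

```python
# Sketch: classification robustness (label invariance in an eps-ball) versus
# a Lipschitz-style bound on output change, both estimated by sampling.
import numpy as np

def classification_robust(f, x, eps, n=200):
    y = np.argmax(f(x))
    return all(np.argmax(f(x + np.random.uniform(-eps, eps, x.shape))) == y
               for _ in range(n))

def lipschitz_robust(f, x, eps, L, n=200):
    y = f(x)
    for _ in range(n):
        d = np.random.uniform(-eps, eps, x.shape)
        if np.linalg.norm(f(x + d) - y) > L * np.linalg.norm(d):
            return False
    return True

W = np.array([[1.0, -0.5], [0.2, 0.8]])   # toy linear "network"
f = lambda x: W @ x
x0 = np.array([0.4, 0.1])
print(classification_robust(f, x0, eps=0.05),
      lipschitz_robust(f, x0, eps=0.05, L=2.0))
```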

Actions You Can Handle: Dependent Types for AI Plans

no code implementations • 24 May 2021 • Alasdair Hill, Ekaterina Komendantskaya, Matthew L. Daggitt, Ronald P. A. Petrick

Verification of AI is a challenge that has engineering, algorithmic and programming language components.

Neural Network Robustness as a Verification Property: A Principled Case Study

1 code implementation • 3 Apr 2021 • Marco Casadio, Ekaterina Komendantskaya, Matthew L. Daggitt, Wen Kokke, Guy Katz, Guy Amir, Idan Refaeli

Neural networks are very successful at detecting patterns in noisy data, and have become the technology of choice in many fields.

Data Augmentation

Proof-Carrying Plans: a Resource Logic for AI Planning

no code implementations • 10 Aug 2020 • Alasdair Hill, Ekaterina Komendantskaya, Ronald P. A. Petrick

In this paper, we present a novel resource logic, the Proof Carrying Plans (PCP) logic that can be used to verify plans produced by AI planners.
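
As a rough illustration of what a proof in such a logic certifies (this is a plain STRIPS-style validity check, not the PCP logic itself, and the actions below are made up), a plan is accepted only if every action's preconditions hold in the state where it is applied and the goal holds at the end:

```python
# Semantic plan check mirroring what a PCP-style proof guarantees.
# Each action is (preconditions, add effects, delete effects) over sets of facts.
def valid_plan(initial, goal, plan, actions):
    state = set(initial)
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:              # precondition violated
            return False
        state = (state - delete) | add
    return goal <= state

actions = {
    "pickup_a":  ({"clear_a", "handempty"}, {"holding_a"},
                  {"clear_a", "handempty"}),
    "stack_a_b": ({"holding_a", "clear_b"}, {"on_a_b", "handempty", "clear_a"},
                  {"holding_a", "clear_b"}),
}
print(valid_plan({"clear_a", "clear_b", "handempty"}, {"on_a_b"},
                 ["pickup_a", "stack_a_b"], actions))   # True
```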

ML4PG in Computer Algebra verification

no code implementations • 26 Feb 2013 • Jónathan Heras, Ekaterina Komendantskaya

ML4PG is a machine-learning extension that provides statistical proof hints during the process of Coq/SSReflect proof development.

BIG-bench Machine Learning
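
The statistical idea behind such proof hints can be sketched roughly as follows (the feature values below are invented for illustration; ML4PG extracts its features from Coq/SSReflect proof developments and delegates to off-the-shelf clustering algorithms): represent each existing proof as a numeric feature vector, cluster the vectors, and suggest lemmas from the cluster closest to the current goal.

```python
# Sketch of statistical proof hints: cluster proof feature vectors and
# suggest lemmas from the cluster nearest to the current goal.
# Feature values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

lemmas = ["addnC", "mulnC", "addnA", "rev_involutive"]
features = np.array([
    [2, 1, 0],    # e.g. number of hypotheses, top-level tactic id, goal depth
    [2, 1, 0],
    [3, 1, 1],
    [1, 4, 2],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

current_goal = np.array([2, 1, 1])
nearest = int(np.argmin(np.linalg.norm(features - current_goal, axis=1)))
hints = [l for l, c in zip(lemmas, labels) if c == labels[nearest]]
print("hint lemmas:", hints)
```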

Recycling Proof Patterns in Coq: Case Studies

no code implementations • 25 Jan 2013 • Jónathan Heras, Ekaterina Komendantskaya

Development of Interactive Theorem Provers has led to the creation of big libraries and varied infrastructures for formal proofs.

BIG-bench Machine Learning
