Search Results for author: Lucas Cordeiro

Found 9 papers, 1 paper with code

Interventional Probing in High Dimensions: An NLI Case Study

no code implementations • 20 Apr 2023 • Julia Rozanova, Marco Valentino, Lucas Cordeiro, Andre Freitas

Probing strategies have been shown to detect the presence of various linguistic features in large language models; in particular, semantic features intermediate to the "natural logic" fragment of the Natural Language Inference task (NLI).

Natural Language Inference

AIREPAIR: A Repair Platform for Neural Networks

1 code implementation • 24 Nov 2022 • Xidan Song, Youcheng Sun, Mustafa A. Mustafa, Lucas Cordeiro

We present AIREPAIR, a platform for repairing neural networks.

Towards Global Neural Network Abstractions with Locally-Exact Reconstruction

no code implementations • 21 Oct 2022 • Edoardo Manino, Iury Bessa, Lucas Cordeiro

Unfortunately, existing abstraction techniques are slack, which limits their applicability to small local regions of the input domain.

Montague semantics and modifier consistency measurement in neural language models

no code implementations • 10 Oct 2022 • Danilo S. Carvalho, Edoardo Manino, Julia Rozanova, Lucas Cordeiro, André Freitas

At the same time, the need for interpretability has elicited questions on their intrinsic properties and capabilities.

Fairness

QNNVerifier: A Tool for Verifying Neural Networks using SMT-Based Model Checking

no code implementations • 25 Nov 2021 • Xidan Song, Edoardo Manino, Luiz Sena, Erickson Alves, Eddie de Lima Filho, Iury Bessa, Mikel Lujan, Lucas Cordeiro

QNNVerifier is the first open-source tool for verifying implementations of neural networks that takes into account the finite word length (i.e., quantization) of their operands.

Quantization
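The entry above concerns the finite word length of network operands. As a rough illustration of why quantization matters for verification, the following sketch (parameters and helper name are invented here, not taken from QNNVerifier) shows how rounding a real-valued weight to a signed fixed-point word introduces an error that a bit-precise verifier must account for:

```python
def to_fixed_point(x, frac_bits=8, word_bits=16):
    """Quantize a real number to a signed fixed-point word and back.
    Illustrative sketch only; not the QNNVerifier implementation."""
    scale = 1 << frac_bits
    q = round(x * scale)
    # Saturate to the representable range of a signed `word_bits`-bit word.
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    q = max(lo, min(hi, q))
    return q / scale

# 0.1 is not exactly representable with 8 fractional bits:
# the quantized value differs from the original by up to half an LSB.
print(to_fixed_point(0.1))       # 0.1015625
print(to_fixed_point(1000.0))    # saturates at the largest positive word
```

A verifier that ignores this rounding and saturation would reason about a different network than the one actually deployed.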

Incremental Verification of Fixed-Point Implementations of Neural Networks

no code implementations • 21 Dec 2020 • Luiz Sena, Erickson Alves, Iury Bessa, Eddie Filho, Lucas Cordeiro

We have implemented the proposed approach on top of the efficient SMT-based bounded model checker (ESBMC); experimental results show that it can successfully verify safety properties in actual implementations of ANNs and generate real adversarial cases in MLPs.
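An adversarial case, as mentioned in the abstract above, is an input near a correctly classified point that flips the network's decision. The toy sketch below (network, weights, and perturbation bound all invented for illustration; ESBMC performs this search symbolically via SMT, not by enumeration) conveys the idea with a brute-force grid search:

```python
import itertools

def relu(v):
    return [max(0.0, x) for x in v]

def mlp(x):
    """Toy 2-input, 1-output MLP; weights are made up for illustration."""
    h = relu([0.9 * x[0] - 0.5 * x[1] + 0.1,
              -0.4 * x[0] + 0.8 * x[1] - 0.2])
    return 1.0 * h[0] - 1.0 * h[1]

def find_adversarial(x0, eps=0.2, steps=5):
    """Grid-search the eps-ball around x0 for an input that flips the sign
    of the network output: a crude stand-in for SMT-based search."""
    base = mlp(x0) > 0
    grid = [i * (2 * eps / steps) - eps for i in range(steps + 1)]
    for dx, dy in itertools.product(grid, grid):
        x = [x0[0] + dx, x0[1] + dy]
        if (mlp(x) > 0) != base:
            return x  # adversarial case found
    return None

adv = find_adversarial([0.5, 0.5])
print(adv)  # a nearby input whose output sign differs from mlp([0.5, 0.5])
```

An SMT-based checker replaces the grid with an exact encoding of the network's (fixed-point) semantics, so a returned counterexample is guaranteed valid and an UNSAT result proves the property over the whole ball.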
