Search Results for author: Nicholas Asher

Found 40 papers, 4 papers with code

Parallel Discourse Annotations on a Corpus of Short Texts

no code implementations LREC 2016 Manfred Stede, Stergos Afantenos, Andreas Peldszus, Nicholas Asher, Jérémy Perret

We present the first corpus of texts annotated with two alternative approaches to discourse structure, Rhetorical Structure Theory (Mann and Thompson, 1988) and Segmented Discourse Representation Theory (Asher and Lascarides, 2003).

Discourse Structure and Dialogue Acts in Multiparty Dialogue: the STAC Corpus

no code implementations LREC 2016 Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara, Stergos Afantenos

This paper describes the STAC resource, a corpus of multi-party chats annotated for discourse structure in the style of SDRT (Asher and Lascarides, 2003; Lascarides and Asher, 2009).

A Dependency Perspective on RST Discourse Parsing and Evaluation

no code implementations CL 2018 Mathieu Morey, Philippe Muller, Nicholas Asher

This allows us to characterize families of parsing strategies across the different frameworks, in particular with respect to the notion of headedness.

Constituency Parsing Dependency Parsing +1

Bias in Semantic and Discourse Interpretation

no code implementations29 Jun 2018 Nicholas Asher, Soumya Paul

In this paper, we show how game-theoretic work on conversation combined with a theory of discourse structure provides a framework for studying interpretive bias.

Apprentissage faiblement supervisé de la structure discursive (Learning discourse structure using weak supervision)

no code implementations JEPTALNRECITAL 2019 Sonia Badene, Catherine Thompson, Nicholas Asher, Jean-Pierre Lorré

We describe our experiments on attaching discourse units to form a structure, using the data-programming paradigm, in which few or no annotations are used to build a "noisy" training dataset.

Analyse faiblement supervisée de conversation en actes de dialogue (Weakly supervised dialog act analysis)

no code implementations JEPTALNRECITAL 2019 Catherine Thompson, Nicholas Asher, Philippe Muller, Jérémy Auguste

Here we are interested in analyzing chat conversations in a task-oriented setting, where a technical advisor addresses a customer and the goal is to label utterances with dialogue acts in order to feed downstream analyses of the conversations.

Data Programming for Learning Discourse Structure

no code implementations ACL 2019 Sonia Badene, Kate Thompson, Jean-Pierre Lorré, Nicholas Asher

This paper investigates the advantages and limits of data programming for the task of learning discourse structure.

Weak Supervision for Learning Discourse Structure

no code implementations IJCNLP 2019 Sonia Badene, Kate Thompson, Jean-Pierre Lorré, Nicholas Asher

We show that on our task the generative model outperforms both deep learning architectures and more traditional ML approaches when learning discourse structure: it even outperforms the combination of deep learning methods and hand-crafted features.
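The data-programming approach used in these two papers can be illustrated with a minimal sketch (my own toy heuristics, not the authors' actual labeling functions): several noisy heuristic "labeling functions" vote on whether two discourse units attach, and the votes are combined into a training label. Here a simple majority vote stands in for the generative model, which additionally learns each function's accuracy.

```python
# Toy data-programming sketch: labeling functions vote ATTACH / NO_ATTACH
# or abstain, and their votes are combined into a noisy training label.
ABSTAIN, NO_ATTACH, ATTACH = -1, 0, 1

def lf_adjacent(pair):
    # Heuristic: adjacent discourse units tend to attach.
    return ATTACH if pair["distance"] == 1 else ABSTAIN

def lf_same_speaker(pair):
    # Heuristic: long-distance links across different speakers are unlikely.
    return NO_ATTACH if pair["distance"] > 3 and not pair["same_speaker"] else ABSTAIN

def lf_discourse_marker(pair):
    # Heuristic: an explicit connective ("because", "so") signals attachment.
    return ATTACH if pair["has_connective"] else ABSTAIN

def combine(votes):
    # Majority vote over non-abstaining labeling functions.
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

pair = {"distance": 1, "same_speaker": True, "has_connective": False}
label = combine([lf(pair) for lf in (lf_adjacent, lf_same_speaker, lf_discourse_marker)])
print(label)  # -> 1 (ATTACH): a noisy training label for this candidate pair
```

The labels produced this way are then used to train a discriminative attachment classifier, which is where the comparison with deep learning architectures in the abstract comes in.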

Adequate and fair explanations

no code implementations21 Jan 2020 Nicholas Asher, Soumya Paul, Chris Russell

This partiality makes it possible to hide explicit biases present in the algorithm that may be injurious or unfair. We investigate how easy it is to uncover these biases in providing complete and fair explanations by exploiting the structure of the set of counterfactuals providing a complete local explanation.

counterfactual

On Relating "Why?" and "Why Not?" Explanations

no code implementations1 Jan 2021 Alexey Ignatiev, Nina Narodytska, Nicholas Asher, Joao Marques-Silva

Explanations of Machine Learning (ML) models often address a ‘Why?’ question.

Efficient Explanations for Knowledge Compilation Languages

no code implementations4 Jul 2021 Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva

Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).

Negation

Transport-based Counterfactual Models

1 code implementation30 Aug 2021 Lucas de Lara, Alberto González-Sanz, Nicholas Asher, Laurent Risser, Jean-Michel Loubes

We address the problem of designing realistic and feasible counterfactuals in the absence of a causal model.

Causal Inference counterfactual +1
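In one dimension the idea behind transport-based counterfactuals can be sketched very compactly (an illustration under simplifying assumptions, not the paper's code): the optimal transport map between two empirical distributions is the increasing rearrangement, i.e. quantile matching, so a counterfactual for an individual from one group is its image at the same quantile of the other group's distribution.

```python
import numpy as np

# Two empirical distributions, e.g. salaries for two demographic groups.
rng = np.random.default_rng(0)
x0 = rng.normal(loc=30_000, scale=5_000, size=1_000)  # group 0
x1 = rng.normal(loc=38_000, scale=6_000, size=1_000)  # group 1

def transport_map(value, source, target):
    # Empirical quantile of `value` in the source distribution...
    q = (source < value).mean()
    # ...mapped to the same quantile of the target distribution.
    return np.quantile(target, q)

# Counterfactual: "what this group-0 individual would look like in group 1".
counterfactual = transport_map(x0[0], x0, x1)
```

In higher dimensions the map is no longer a simple sort, which is where the paper's transport-based construction does the real work; the 1-D case only conveys the intuition that rank within the group is preserved.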

Interpretive Blindness

no code implementations19 Oct 2021 Nicholas Asher, Julie Hunter

We model here an epistemic bias we call "interpretive blindness" (IB).

Analyzing Semantic Faithfulness of Language Models via Input Intervention on Question Answering

1 code implementation21 Dec 2022 Akshay Chaturvedi, Swarnadeep Bhar, Soumadeep Saha, Utpal Garain, Nicholas Asher

While transformer models achieve high performance on standard question answering tasks, we show that for a significant number of cases they fail to be semantically faithful once we perform these interventions (~50% of cases for the deletion intervention, and a ~20% drop in accuracy for the negation intervention).

Conversational Question Answering Negation
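The two interventions named in the abstract can be sketched on a toy example (my own strings, not the paper's data): deletion removes the answer-bearing sentence from the context, while negation flips it. A semantically faithful model should change its answer after either intervention.

```python
# Toy context/question pair for a reading-comprehension model.
story = "Sam adopted a cat. The cat is black. Sam lives in Toulouse."
question = "What color is the cat?"

# Deletion intervention: remove the sentence that supports the answer.
deletion = story.replace("The cat is black. ", "")

# Negation intervention: negate the supporting sentence.
negation = story.replace("The cat is black.", "The cat is not black.")

for name, context in [("original", story), ("deletion", deletion), ("negation", negation)]:
    print(f"{name}: {context}")
```

A model that still answers "black" on the deletion or negation variants is relying on something other than the semantics of the context, which is the failure mode the paper quantifies.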

How optimal transport can tackle gender biases in multi-class neural-network classifiers for job recommendations?

no code implementations27 Feb 2023 Fanny Jourdan, Titon Tshiongo Kaninku, Nicholas Asher, Jean-Michel Loubes, Laurent Risser

To anticipate the certification of recommendation systems using textual data, we then used it on the Bios dataset, for which the learning task consists in predicting the occupation of female and male individuals, based on their LinkedIn biography.

Multi-class Classification Recommendation Systems

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks

1 code implementation11 May 2023 Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher

COCKATIEL is a novel, post-hoc, concept-based, model-agnostic XAI technique that generates meaningful explanations from the last layer of a neural net model trained on an NLP classification task. It uses Non-Negative Matrix Factorization (NMF) to discover the concepts the model leverages to make predictions, and a sensitivity analysis to accurately estimate the importance of each of these concepts for the model.

Explainable Artificial Intelligence (XAI) Sentiment Analysis
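The NMF step of a concept-based explanation pipeline like this can be sketched as follows (the matrix shapes and random data are illustrative assumptions, not the paper's code): factor a non-negative matrix of last-layer activations A (examples × features) into A ≈ U·W, where the rows of W are candidate "concepts" and U scores each example on each concept.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in for non-negative last-layer activations (e.g. after a ReLU).
A = np.abs(rng.normal(size=(200, 64)))

nmf = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
U = nmf.fit_transform(A)   # (200, 5): concept presence per example
W = nmf.components_        # (5, 64): concept directions in feature space

# Relative reconstruction error: how well 5 concepts summarize the layer.
err = np.linalg.norm(A - U @ W) / np.linalg.norm(A)
```

In the full method, the importance of each discovered concept for the model's predictions would then be estimated with a sensitivity analysis; the sketch above only covers the unsupervised concept-discovery step.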

Are fairness metric scores enough to assess discrimination biases in machine learning?

no code implementations8 Jun 2023 Fanny Jourdan, Laurent Risser, Jean-Michel Loubes, Nicholas Asher

This paper presents novel experiments shedding light on the shortcomings of current metrics for assessing biases of gender discrimination made by machine learning algorithms on textual data.

Fairness

Limits for Learning with Language Models

no code implementations21 Jun 2023 Nicholas Asher, Swarnadeep Bhar, Akshay Chaturvedi, Julie Hunter, Soumya Paul

With the advent of large language models (LLMs), the trend in NLP has been to train LLMs on vast amounts of data to solve diverse language understanding and generation tasks.

TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability

1 code implementation11 Dec 2023 Fanny Jourdan, Louis Béthune, Agustin Picard, Laurent Risser, Nicholas Asher

In evaluation, we show that the proposed post-hoc approach significantly reduces gender-related associations in NLP models while preserving the overall performance and functionality of the models.

Fairness

Strong hallucinations from negation and how to fix them

no code implementations16 Feb 2024 Nicholas Asher, Swarnadeep Bhar

Despite great performance on many tasks, language models (LMs) still struggle with reasoning, sometimes providing responses that cannot possibly be true because they stem from logical incoherence.

Natural Language Inference Negation

Modality-Agnostic fMRI Decoding of Vision and Language

no code implementations18 Mar 2024 Mitja Nikolaus, Milad Mozafari, Nicholas Asher, Leila Reddy, Rufin VanRullen

Previous studies have shown that it is possible to map brain activation data of subjects viewing images onto the feature representation space of not only vision models (modality-specific decoding) but also language models (cross-modal decoding).
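The mapping described here is typically learned as a regularized linear regression from voxel responses to model features; the sketch below illustrates the idea on simulated data (all shapes, noise levels, and the use of ridge regression are my assumptions, not the paper's setup). Decoding then amounts to predicting features for a scan and retrieving the nearest candidate stimulus. For brevity the retrieval is run on a training scan; a real evaluation would hold data out.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_stimuli, n_voxels, n_dims = 100, 500, 64

features = rng.normal(size=(n_stimuli, n_dims))          # stand-in model embeddings
brain = features @ rng.normal(size=(n_dims, n_voxels))   # simulated fMRI responses
brain += 0.1 * rng.normal(size=brain.shape)              # measurement noise

# Learn a linear map from brain activity to the model's feature space.
decoder = Ridge(alpha=1.0).fit(brain, features)

# Decode stimulus 0: predict its features, retrieve the nearest candidate.
pred = decoder.predict(brain[:1])
dists = np.linalg.norm(features - pred, axis=1)
print(int(dists.argmin()))  # index of the retrieved stimulus (0 = correct)
```

Swapping the feature matrix between a vision model and a language model is what distinguishes modality-specific from cross-modal decoding in this framing.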
