Search Results for author: Zhixue Zhao

Found 6 papers, 5 papers with code

Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models

1 code implementation19 Mar 2024 Zhixue Zhao, Nikolaos Aletras

Previous studies have explored how different factors affect faithfulness, mainly in the context of monolingual English models.

Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization

1 code implementation15 Nov 2023 George Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras

Despite the remarkable performance of generative large language models (LLMs) on abstractive summarization, they face two significant challenges: their considerable size and tendency to hallucinate.

Abstractive Text Summarization, Hallucination +1

Incorporating Attribution Importance for Improving Faithfulness Metrics

1 code implementation17 May 2023 Zhixue Zhao, Nikolaos Aletras

Widely used faithfulness metrics, such as sufficiency and comprehensiveness, use a hard erasure criterion, i.e., entirely removing or retaining the most important tokens ranked by a given feature attribution (FA) method and observing the changes in predictive likelihood.
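The hard erasure criterion described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: `predict_proba` stands in for any classifier that returns the probability of the predicted class, and the importance scores are assumed to come from some FA method.

```python
def hard_erasure_scores(tokens, importances, predict_proba, k):
    """Comprehensiveness and sufficiency under hard (all-or-nothing) erasure.

    Both metrics compare the model's predictive likelihood on the full input
    against the likelihood after entirely removing (comprehensiveness) or
    entirely retaining (sufficiency) the top-k tokens ranked by an FA method.
    """
    # Rank token positions by attributed importance, highest first.
    ranked = sorted(range(len(tokens)), key=lambda i: importances[i], reverse=True)
    top_k = set(ranked[:k])

    full = predict_proba(tokens)

    # Comprehensiveness: how much does the prediction drop when the
    # top-k most important tokens are removed?
    without_top = [t for i, t in enumerate(tokens) if i not in top_k]
    comprehensiveness = full - predict_proba(without_top)

    # Sufficiency: how much does the prediction drop when ONLY the
    # top-k most important tokens are kept?
    only_top = [t for i, t in enumerate(tokens) if i in top_k]
    sufficiency = full - predict_proba(only_top)

    return comprehensiveness, sufficiency
```

The paper's point is that this criterion is all-or-nothing: a token is either fully erased or fully kept, regardless of how large its attribution score is relative to the others.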

On the Impact of Temporal Concept Drift on Model Explanations

1 code implementation17 Oct 2022 Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, Nikolaos Aletras

Explanation faithfulness of model predictions in natural language processing is typically evaluated on held-out data from the same temporal distribution as the training data (i.e., synchronous settings).

Text Classification

SS-BERT: Mitigating Identity Terms Bias in Toxic Comment Classification by Utilising the Notion of "Subjectivity" and "Identity Terms"

no code implementations6 Sep 2021 Zhixue Zhao, Ziqi Zhang, Frank Hopfgartner

Toxic comment classification models are often found to be biased toward identity terms, i.e., terms characterizing a specific group of people, such as "Muslim" and "black".

Toxic Comment Classification
