Search Results for author: Mateusz Lango

Found 9 papers, 4 papers with code

Multi-criteria approach for selecting an explanation from the set of counterfactuals produced by an ensemble of explainers

1 code implementation • 20 Mar 2024 • Ignacy Stępka, Mateusz Lango, Jerzy Stefanowski

Counterfactuals are widely used to explain ML model predictions by providing alternative scenarios for obtaining more desirable predictions.

Counterfactual
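
The selection step named in the title can be illustrated with a minimal sketch: score each candidate counterfactual on several criteria and keep the best-scoring one. The criteria (proximity and sparsity), their weights, and the `select_counterfactual` helper below are illustrative assumptions, not the paper's actual multi-criteria procedure.

```python
# Minimal sketch: rank candidate counterfactuals from several explainers by a
# weighted combination of criteria and keep the best one. Criteria, weights,
# and the helper name are illustrative assumptions, not the paper's procedure.
import numpy as np

def select_counterfactual(x, candidates, weights=(0.5, 0.5)):
    """Return the candidate with the best (lowest) weighted criteria score."""
    w_proximity, w_sparsity = weights
    best, best_score = None, np.inf
    for cf in candidates:
        proximity = np.linalg.norm(cf - x)        # distance to the explained instance
        sparsity = np.count_nonzero(cf != x)      # number of features changed
        score = w_proximity * proximity + w_sparsity * sparsity
        if score < best_score:
            best, best_score = cf, score
    return best
```

In practice the candidate set would come from an ensemble of explainers and the criteria set would typically be richer (e.g., plausibility or actionability).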

Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs

no code implementations • 6 Feb 2024 • Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, Ondřej Dušek

Natural Language Processing (NLP) research is increasingly focusing on the use of Large Language Models (LLMs), with some of the most popular ones being either fully or partially closed-source.

The Problem of Coherence in Natural Language Explanations of Recommendations

1 code implementation • 18 Dec 2023 • Jakub Raczyński, Mateusz Lango, Jerzy Stefanowski

Providing natural language explanations for recommendations is particularly useful from the perspective of a non-expert user.

Coherence Evaluation

Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation

1 code implementation • 25 Oct 2023 • Mateusz Lango, Ondřej Dušek

Our method does not require any changes to the underlying LM's architecture or training procedure and can thus be combined with any model and any decoding method operating on word probabilities.

Data-to-Text Generation • Hallucination +1
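
Because the method only re-weights word probabilities at decoding time, the core idea can be sketched in a few lines. The `lm` and `critic` interfaces below (`log_prob`, `prob_consistent`) are hypothetical placeholders, and the greedy, additively weighted combination is an illustrative assumption rather than the paper's exact formulation.

```python
# Minimal sketch of critic-reweighted greedy decoding. The `lm` and `critic`
# objects and their methods are hypothetical placeholders; the weighted sum of
# log-scores is an illustrative choice, not the paper's exact formulation.
import math

def critic_driven_step(lm, critic, input_data, prefix, vocab, weight=1.0):
    """Pick the next token by mixing LM log-probabilities with a critic score."""
    best_token, best_score = None, -math.inf
    for token in vocab:
        lm_logp = lm.log_prob(prefix, token)  # standard LM next-token score
        # probability that the extended text stays consistent with the input data
        critic_p = critic.prob_consistent(input_data, prefix + [token])
        score = lm_logp + weight * math.log(critic_p)
        if score > best_score:
            best_token, best_score = token, score
    return best_token
```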

Three Ways of Using Large Language Models to Evaluate Chat

2 code implementations • 12 Aug 2023 • Ondřej Plátek, Vojtěch Hudeček, Patricia Schmidtová, Mateusz Lango, Ondřej Dušek

This paper describes the systems submitted by team6 for ChatEval, the DSTC 11 Track 4 competition.

Chatbot

With a Little Help from the Authors: Reproducing Human Evaluation of an MT Error Detector

no code implementations • 12 Aug 2023 • Ondřej Plátek, Mateusz Lango, Ondřej Dušek

This work presents our efforts to reproduce the results of the human evaluation experiment in Vamvas and Sennrich (2022), which assessed an automatic system for detecting over- and under-translations (translations containing more or less information than the original) in machine translation (MT) outputs.

Machine Translation • Translation
