Search Results for author: Huiyuan Lai

Found 16 papers, 11 papers with code

Human Perception in Natural Language Generation

no code implementations · ACL (GEM) 2021 · Lorenzo De Mattei, Huiyuan Lai, Felice Dell’Orletta, Malvina Nissim

We ask subjects whether they perceive a set of texts as human-produced; some of the texts are actually human-written, while others are automatically generated.

Text Generation

Multi-perspective Alignment for Increasing Naturalness in Neural Machine Translation

no code implementations · 11 Dec 2024 · Huiyuan Lai, Esther Ploeger, Rik van Noord, Antonio Toral

Neural machine translation (NMT) systems amplify lexical biases present in their training data, leading to artificially impoverished language in output translations.

Diversity · Machine Translation +2

Towards Tailored Recovery of Lexical Diversity in Literary Machine Translation

no code implementations · 30 Aug 2024 · Esther Ploeger, Huiyuan Lai, Rik van Noord, Antonio Toral

Thus, rather than aiming to rigidly increase lexical diversity, we reframe the task as recovering what is lost in the machine translation process.

Diversity · Machine Translation +1

Fine-tuning with HED-IT: The impact of human post-editing for dialogical language models

no code implementations · 11 Jun 2024 · Daniela Occhipinti, Michele Marchi, Irene Mondella, Huiyuan Lai, Felice Dell'Orletta, Malvina Nissim, Marco Guerini

Results from both human and automatic evaluation show that differences in training data quality are clearly perceived and also affect the models trained on such data.

mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models

1 code implementation · 4 Jun 2024 · Huiyuan Lai, Malvina Nissim

Chain-of-thought (CoT) prompting has recently emerged as a powerful technique for eliciting reasoning from large language models (LLMs) to improve various downstream tasks.

Math

Responsibility Perspective Transfer for Italian Femicide News

1 code implementation · 1 Jun 2023 · Gosse Minnema, Huiyuan Lai, Benedetta Muscato, Malvina Nissim

Different ways of linguistically expressing the same real-world event can lead to different perceptions of what happened.

Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation

1 code implementation · 31 May 2023 · Chunliu Wang, Huiyuan Lai, Malvina Nissim, Johan Bos

Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics.

Cross-Lingual Transfer · DRS Parsing +2

Multilingual Multi-Figurative Language Detection

1 code implementation · 31 May 2023 · Huiyuan Lai, Antonio Toral, Malvina Nissim

Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging.

Language Modelling · Sentence

Multidimensional Evaluation for Text Style Transfer Using ChatGPT

1 code implementation · 26 Apr 2023 · Huiyuan Lai, Antonio Toral, Malvina Nissim

We investigate the potential of ChatGPT as a multidimensional evaluator for the task of Text Style Transfer, alongside, and in comparison to, existing automatic metrics as well as human judgements.

Style Transfer · Text Style Transfer

Multi-Figurative Language Generation

1 code implementation · COLING 2022 · Huiyuan Lai, Malvina Nissim

Figurative language generation is the task of reformulating a given text in the desired figure of speech while still being faithful to the original context.

Form · Language Modelling +2

Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer

1 code implementation · HumEval (ACL) 2022 · Huiyuan Lai, Jiali Mao, Antonio Toral, Malvina Nissim

Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for evaluation; it is typically performed with several automatic metrics, since resorting to human judgement is not always possible.

Navigate · Style Transfer +1

On the interaction of automatic evaluation and task framing in headline style transfer

1 code implementation · ACL (EvalNLGEval, INLG) 2020 · Lorenzo De Mattei, Michele Cafagna, Huiyuan Lai, Felice Dell'Orletta, Malvina Nissim, Albert Gatt

An ongoing debate in the NLG community concerns the best way to evaluate systems, with human evaluation often considered more reliable than corpus-based metrics.

Style Transfer
