Search Results for author: Jonathan Mallinson

Found 15 papers, 6 papers with code

Felix: Flexible Text Editing Through Tagging and Insertion

3 code implementations · Findings of the Association for Computational Linguistics 2020 · Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, Guillermo Garrido

We achieve this by decomposing the text-editing task into two sub-tasks: tagging, to decide on the subset of input tokens and their order in the output text, and insertion, to in-fill the missing tokens that are not present in the input.

Automatic Post-Editing · Language Modelling · +4
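
To make the Felix decomposition concrete, here is a minimal, illustrative Python sketch of tag-then-insert editing. Both decisions are hard-coded for one example; in Felix they are produced by learned models, and token re-ordering is handled by a dedicated mechanism omitted here, so treat this as a toy of the interface, not the method itself.

```python
# Toy tag-then-insert editing in the spirit of Felix (not the paper's model):
# the tagger marks tokens to keep and opens [MASK] slots, and an insertion
# step in-fills the slots. Both decisions are hard-coded below.

from typing import List, Tuple

# Each source token gets a (keep?, number of masks to open after it) decision.
Tag = Tuple[bool, int]

def apply_tags(tokens: List[str], tags: List[Tag]) -> List[str]:
    """Build the intermediate sequence: kept tokens plus [MASK] placeholders."""
    out: List[str] = []
    for token, (keep, n_masks) in zip(tokens, tags):
        if keep:
            out.append(token)
        out.extend(["[MASK]"] * n_masks)
    return out

def insert(sequence: List[str], fillers: List[str]) -> List[str]:
    """In-fill [MASK] slots left-to-right; a learned model does this in Felix."""
    it = iter(fillers)
    return [next(it) if tok == "[MASK]" else tok for tok in sequence]

src = "the the cat sat mat".split()
# Drop the duplicated "the", keep the rest, open two slots before "mat".
tags = [(True, 0), (False, 0), (True, 0), (True, 2), (True, 0)]
masked = apply_tags(src, tags)        # ['the', 'cat', 'sat', '[MASK]', '[MASK]', 'mat']
print(insert(masked, ["on", "the"]))  # ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

The appeal of the decomposition is that most output tokens are simply copied from the input, so only the few masked slots need to be generated.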

A Simple Recipe for Multilingual Grammatical Error Correction

2 code implementations · ACL 2021 · Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, Aliaksei Severyn

This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models.

 Ranked #1 on Grammatical Error Correction on Falko-MERLIN (using extra training data)

Grammatical Error Correction

RED-ACE: Robust Error Detection for ASR using Confidence Embeddings

1 code implementation · 14 Mar 2022 · Zorik Gekhman, Dina Zverinski, Jonathan Mallinson, Genady Beryozkin

ASR Error Detection (AED) models aim to post-process the output of Automatic Speech Recognition (ASR) systems in order to detect transcription errors.

Automatic Speech Recognition (ASR) · +1
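
As a rough sketch of the AED setup described above: label each hypothesis token as correct or erroneous against a reference transcript, with the recognizer's word-level confidence available as a signal. RED-ACE embeds these confidences inside a neural tagger; the naive position-wise alignment and the threshold rule below are stand-ins, not the paper's model.

```python
# Toy ASR error detection: gold labels from a reference transcript, and a
# confidence-threshold baseline standing in for a learned tagger.

def label_tokens(hypothesis, reference):
    """1 = transcription error, 0 = correct (naive position-wise alignment)."""
    return [int(h != r) for h, r in zip(hypothesis, reference)]

def detect_by_confidence(hypothesis, confidences, threshold=0.5):
    """Flag low-confidence tokens as suspected errors."""
    return [int(c < threshold) for c in confidences]

hyp  = ["the", "cat", "sad", "on", "the", "mat"]
ref  = ["the", "cat", "sat", "on", "the", "mat"]
conf = [0.98, 0.95, 0.31, 0.90, 0.97, 0.88]

print(label_tokens(hyp, ref))           # [0, 0, 1, 0, 0, 0]
print(detect_by_confidence(hyp, conf))  # [0, 0, 1, 0, 0, 0]
```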

Sentence Compression for Arbitrary Languages via Multilingual Pivoting

1 code implementation · EMNLP 2018 · Jonathan Mallinson, Rico Sennrich, Mirella Lapata

In this paper we advocate the use of bilingual corpora which are abundantly available for training sentence compression models.

Machine Translation · Sentence · +3

Small Language Models Improve Giants by Rewriting Their Outputs

1 code implementation · 22 May 2023 · Giorgos Vernikos, Arthur Bražinskas, Jakub Adamek, Jonathan Mallinson, Aliaksei Severyn, Eric Malmi

Despite the impressive performance of large language models (LLMs), they often lag behind specialized models in various tasks.

Few-Shot Learning · In-Context Learning · +2
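
The title points at the mechanism: a compact model rewrites the outputs of a much larger one. Below is a heavily hedged toy in that spirit, where a token-level majority vote over several LLM samples stands in for the trained small rewriter (the paper trains a small corrector model for this; the vote is purely illustrative).

```python
# Toy "rewrite the giant's outputs" step: merge several LLM samples
# token-by-token by majority vote, as a stand-in for a trained small rewriter.

from collections import Counter
from itertools import zip_longest

def rewrite(candidates):
    """Merge LLM samples token-by-token by majority vote."""
    merged = []
    for column in zip_longest(*[c.split() for c in candidates]):
        tokens = [t for t in column if t is not None]
        merged.append(Counter(tokens).most_common(1)[0][0])
    return " ".join(merged)

samples = [
    "le chat est assis sur le tapis",
    "le chat est assise sur le tapis",
    "le chat est assis sur la tapis",
]
print(rewrite(samples))  # le chat est assis sur le tapis
```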

Learning to Paraphrase for Question Answering

no code implementations · EMNLP 2017 · Li Dong, Jonathan Mallinson, Siva Reddy, Mirella Lapata

Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need.

Question Answering · Sentence

Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext

no code implementations · EMNLP 2017 · John Wieting, Jonathan Mallinson, Kevin Gimpel

We consider the problem of learning general-purpose, paraphrastic sentence embeddings in the setting of Wieting et al. (2016b).

Machine Translation · Sentence · +2

EdiT5: Semi-Autoregressive Text-Editing with T5 Warm-Start

no code implementations · 24 May 2022 · Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn

This is achieved by decomposing the generation process into three sub-tasks: (1) tagging to decide on the subset of input tokens to be preserved in the output, (2) re-ordering to define their order in the output text, and (3) insertion to infill the missing tokens that are not present in the input.

Grammatical Error Correction · Sentence · +1
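
A minimal sketch of the three-step decomposition the EdiT5 abstract spells out, with all three decisions (tagging, re-ordering, insertion) hard-coded for one example; EdiT5 learns them starting from a T5 checkpoint, so this shows only the interface between the steps.

```python
# Toy three-step edit in the EdiT5 style: keep a subset of tokens,
# re-order them, then in-fill the gaps. All decisions are hard-coded.

def edit(tokens, keep, order, insertions):
    """keep: bool per source token; order: permutation over source indices;
    insertions: {position in the reordered sequence: tokens to insert before it}."""
    kept = {i for i, k in enumerate(keep) if k}
    reordered = [tokens[i] for i in order if i in kept]
    out = []
    for pos, token in enumerate(reordered):
        out.extend(insertions.get(pos, []))
        out.append(token)
    out.extend(insertions.get(len(reordered), []))
    return out

src = "best the movie ever".split()
print(edit(src,
           keep=[True, True, True, True],
           order=[1, 2, 0, 3],              # "the movie best ever"
           insertions={2: ["is", "the"]}))  # -> the movie is the best ever
```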

Text Generation with Text-Editing Models

no code implementations · NAACL (ACL) 2022 · Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, Aliaksei Severyn

Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, simplification, and style transfer.

Grammatical Error Correction · Hallucination · +2

Teaching Small Language Models to Reason

no code implementations · 16 Dec 2022 · Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn

Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets.

GSM8K · Knowledge Distillation
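
The pairing of chain-of-thought prompting with the Knowledge Distillation tag suggests the recipe: elicit step-by-step rationales from a large teacher model and fine-tune a small student on them. The sketch below shows how such training examples could be assembled; teacher_generate is a hypothetical stand-in for a real LLM call, and its canned output is the classic tennis-ball worked example from the chain-of-thought literature.

```python
# Sketch of building chain-of-thought distillation data: ask a large teacher
# for a rationale, then store (question, rationale) pairs as student targets.

def teacher_generate(question: str) -> str:
    # Hypothetical placeholder for an LLM call returning a step-by-step rationale.
    return ("Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
            "5 + 6 = 11. The answer is 11.")

def make_student_example(question: str) -> dict:
    rationale = teacher_generate(question)
    return {"input": question, "target": rationale}  # student learns to reason

q = ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?")
print(make_student_example(q))
```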

West-of-N: Synthetic Preference Generation for Improved Reward Modeling

no code implementations · 22 Jan 2024 · Alizée Pace, Jonathan Mallinson, Eric Malmi, Sebastian Krause, Aliaksei Severyn

The success of reinforcement learning from human feedback (RLHF) in language model alignment is strongly dependent on the quality of the underlying reward model.

Language Modelling
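
A hedged sketch of the Best-of-N-style synthetic preference construction the title suggests: sample N responses per prompt, score them, and keep the best and worst as a (chosen, rejected) pair for further reward-model training. sample_responses and reward are hypothetical placeholders, and the paper's exact selection scheme may differ.

```python
# Sketch of West-of-N style pair construction: take the best and worst of N
# sampled responses, as judged by the current reward model, as a synthetic
# preference pair. Both model calls below are hypothetical stand-ins.

import random

def sample_responses(prompt: str, n: int):
    # Placeholder for sampling n responses from a policy model.
    return [f"response-{i} to {prompt!r}" for i in range(n)]

def reward(prompt: str, response: str) -> float:
    # Placeholder for the current reward model's score.
    return random.random()

def west_of_n_pair(prompt: str, n: int = 8):
    responses = sample_responses(prompt, n)
    ranked = sorted(responses, key=lambda r: reward(prompt, r))
    return {"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]}

print(west_of_n_pair("Explain RLHF in one sentence."))
```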
