Search Results for author: Machel Reid

Found 15 papers, 10 papers with code

PARADISE”:" Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining

no code implementations RepL4NLP (ACL) 2022 Machel Reid, Mikel Artetxe

Despite the success of multilingual sequence-to-sequence pretraining, most existing approaches rely on monolingual corpora and do not make use of the strong cross-lingual signal contained in parallel data.

Cross-Lingual Natural Language Inference Denoising +2
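
As an illustrative sketch only, and not PARADISE's exact objectives: one generic way parallel data can supplement denoising-based multilingual sequence-to-sequence pretraining is to mix span-masking examples built from monolingual text with direct translation examples built from parallel pairs. The record layout below is hypothetical.

import random

def make_training_pair(example, mask_token="<mask>"):
    """Build a (source, target) seq2seq training pair from a raw record.

    `example` is a hypothetical record: either
    {"type": "monolingual", "text": ...} or
    {"type": "parallel", "source": ..., "target": ...}.
    """
    if example["type"] == "monolingual":
        # Denoising objective: mask a short contiguous span and
        # reconstruct the original sentence.
        tokens = example["text"].split()
        i = random.randrange(len(tokens))
        j = min(len(tokens), i + random.randint(1, 3))
        return " ".join(tokens[:i] + [mask_token] + tokens[j:]), example["text"]
    # Translation objective: parallel pairs inject a direct
    # cross-lingual signal into pretraining.
    return example["source"], example["target"]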

On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing

no code implementations insights (ACL) 2022 Itsuki Okimura, Machel Reid, Makoto Kawano, Yutaka Matsuo

Within NLP, the impact of proposed data augmentation methods on performance has not been evaluated in a unified manner, and effective data augmentation methods remain unclear.

Computer Vision Data Augmentation +1

Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining

no code implementations NAACL (AmericasNLP) 2021 Francis Zheng, Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

This paper describes UTokyo’s submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas.

Language Modelling Machine Translation +1

Learning to Model Editing Processes

1 code implementation24 May 2022 Machel Reid, Graham Neubig

We introduce baseline results and metrics on this task, finding that modeling editing processes improves performance on a variety of axes on both our proposed task and related downstream tasks compared to previous single-step models of edits.

Machine Translation Style Transfer +1
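
One natural way to read "modeling editing processes", as a gloss on the idea rather than the paper's exact notation: treat a document's revision history as a sequence of versions x^{(0)}, ..., x^{(T)} and model it autoregressively instead of as a single edit step,

p(x^{(0)}, \ldots, x^{(T)}) = p(x^{(0)}) \prod_{t=1}^{T} p(x^{(t)} \mid x^{(<t)})

A single-step edit model keeps only one factor of this product; the multi-step view lets the model condition on how the document evolved.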

Large Language Models are Zero-Shot Reasoners

1 code implementation24 May 2022 Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars.

Arithmetic Reasoning Few-Shot Learning +1
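
The technique this paper is best known for is zero-shot chain-of-thought prompting: appending "Let's think step by step" before each answer, then extracting the final answer in a second call. A minimal sketch, with call_llm as a hypothetical stand-in for any text-completion API:

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real text-completion API call."""
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit a reasoning chain with the trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = call_llm(reasoning_prompt)
    # Stage 2: extract the final answer from the generated reasoning.
    return call_llm(f"{reasoning_prompt} {reasoning}\nTherefore, the answer is")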

Can Wikipedia Help Offline Reinforcement Learning?

1 code implementation28 Jan 2022 Machel Reid, Yutaro Yamada, Shixiang Shane Gu

In this paper, we take advantage of the formulation of reinforcement learning as sequence modeling and investigate how well sequence models pre-trained on other domains (vision, language) transfer when finetuned on offline RL tasks (control, games).

Offline RL reinforcement-learning
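
The "sequence modeling" formulation mentioned here follows the Decision Transformer family: a trajectory becomes a flat sequence of (return-to-go, state, action) triples that a pretrained sequence model can be finetuned on. A minimal sketch of that conversion; the tuple encoding is illustrative, not the paper's exact tokenization:

import numpy as np

def trajectory_to_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples into one sequence."""
    rewards = np.asarray(rewards, dtype=float)
    # Return-to-go at step t is the sum of rewards from t to the end.
    returns_to_go = np.cumsum(rewards[::-1])[::-1]
    sequence = []
    for rtg, s, a in zip(returns_to_go, states, actions):
        sequence.extend([("rtg", float(rtg)), ("state", s), ("action", a)])
    return sequence  # fed to a transformer trained to predict each action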

PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining

1 code implementation4 Aug 2021 Machel Reid, Mikel Artetxe

Despite the success of multilingual sequence-to-sequence pretraining, most existing approaches rely on monolingual corpora, and do not make use of the strong cross-lingual signal contained in parallel data.

Cross-Lingual Natural Language Inference Denoising +2

LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer

1 code implementation Findings (ACL) 2021 Machel Reid, Victor Zhong

Moreover, compared to previous methods for unsupervised data synthesis, our method produces higher-quality parallel style pairs and improves model performance.

Pretrained Language Models Style Transfer +2
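
As a hypothetical illustration of the edit-based view (difflib is a stand-in here, not the paper's tooling): given a synthesized parallel style pair, coarse token-level edit operations can be read off with a sequence matcher, which is the kind of supervision an editor model consumes.

from difflib import SequenceMatcher

def edit_ops(src_tokens, tgt_tokens):
    """Extract coarse (op, src_span, tgt_span) edits between token lists."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=src_tokens, b=tgt_tokens).get_opcodes():
        # tag is one of "equal", "replace", "delete", "insert"
        ops.append((tag, src_tokens[i1:i2], tgt_tokens[j1:j2]))
    return ops

print(edit_ops("the food was bad".split(), "the food was great".split()))
# [('equal', ['the', 'food', 'was'], ['the', 'food', 'was']),
#  ('replace', ['bad'], ['great'])]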

Subformer: A Parameter Reduced Transformer

no code implementations1 Jan 2021 Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

We also perform on par with Transformer-big with 40% fewer parameters, and outperform that model by 0.7 BLEU with 12M fewer parameters.

Abstractive Text Summarization Language Modelling +3

Variational Inference for Learning Representations of Natural Language Edits

1 code implementation20 Apr 2020 Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo

Document editing has become a pervasive component of the production of information, with version control systems enabling edits to be efficiently stored and applied.

Natural Language Processing Variational Inference

Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

no code implementations9 Mar 2020 Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

The contrast between the need for large amounts of data for current Natural Language Processing (NLP) techniques, and the lack thereof, is accentuated in the case of African languages, most of which are considered low-resource.

Natural Language Processing Translation +1
