Search Results for author: Colin Cherry

Found 57 papers, 6 papers with code

Inverted Projection for Robust Speech Translation

no code implementations ACL (IWSLT) 2021 Dirk Padfield, Colin Cherry

Traditional translation systems trained on written documents perform well for text-based translation but not as well for speech-based applications.

Translation

Bilingual Methods for Adaptive Training Data Selection for Machine Translation

no code implementations AMTA 2016 Boxing Chen, Roland Kuhn, George Foster, Colin Cherry, Fei Huang

In this paper, we propose a new data selection method which uses semi-supervised convolutional neural networks based on bitokens (Bi-SSCNNs) for training machine translation systems from a large bilingual corpus.

Machine Translation NMT +2

A Natural Diet: Towards Improving Naturalness of Machine Translation Output

no code implementations Findings (ACL) 2022 Markus Freitag, David Vilar, David Grangier, Colin Cherry, George Foster

In this work we propose a method for training MT systems to achieve a more natural style, i.e., mirroring the style of text originally written in the target language.

Machine Translation Sentence +1

When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method

no code implementations 27 Feb 2024 Biao Zhang, Zhongtao Liu, Colin Cherry, Orhan Firat

While large language models (LLMs) often adopt finetuning to unlock their capabilities for downstream applications, our understanding of the inductive biases (especially the scaling properties) of different finetuning methods is still limited.

Machine Translation

To Diverge or Not to Diverge: A Morphosyntactic Perspective on Machine Translation vs Human Translation

no code implementations 2 Jan 2024 Jiaming Luo, Colin Cherry, George Foster

We conduct a large-scale fine-grained comparative analysis of machine translations (MT) against human translations (HT) through the lens of morphosyntactic divergence.

Attribute Machine Translation +1

Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM's Translation Capability

no code implementations 17 May 2023 Eleftheria Briakou, Colin Cherry, George Foster

We investigate the role of incidental bilingualism -- the unintentional consumption of bilingual signals, including translation examples -- in explaining the translation capabilities of large language models, taking the Pathways Language Model (PaLM) as a case study.

Language Modelling Machine Translation +1

PaLM 2 Technical Report

1 code implementation 17 May 2023 Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, Yaguang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, ZiRui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu

Through extensive evaluations on English and multilingual language and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM.

Code Generation Common Sense Reasoning +6

The unreasonable effectiveness of few-shot learning for machine translation

no code implementations 2 Feb 2023 Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Fangxiaoyu Feng, Melvin Johnson, Orhan Firat

We demonstrate the potential of few-shot translation systems, trained with unpaired language data, for both high and low-resource language pairs.

Few-Shot Learning Machine Translation +2

Prompting PaLM for Translation: Assessing Strategies and Performance

no code implementations 16 Nov 2022 David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George Foster

Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages.

Language Modelling Machine Translation +1
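
The entry above evaluates prompting strategies for LLM-based translation. As a hedged illustration of the general setup, the sketch below assembles a few-shot translation prompt from example pairs; the template and language pair are illustrative assumptions, not the exact prompt format assessed in the paper.

```python
# Minimal sketch of few-shot prompt construction for LLM-based translation.
# The template below is an illustrative assumption, not the exact format
# studied in "Prompting PaLM for Translation".

def build_translation_prompt(examples, source_sentence,
                             src_lang="English", tgt_lang="German"):
    """Assemble a few-shot prompt from (source, target) example pairs."""
    lines = []
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
        lines.append("")  # blank line between shots
    # The final, unanswered query the model is expected to complete.
    lines.append(f"{src_lang}: {source_sentence}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)


if __name__ == "__main__":
    shots = [
        ("The weather is nice today.", "Das Wetter ist heute schön."),
        ("Where is the train station?", "Wo ist der Bahnhof?"),
    ]
    print(build_translation_prompt(shots, "I would like a cup of coffee."))
```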

XTREME-S: Evaluating Cross-lingual Speech Representations

no code implementations 21 Mar 2022 Alexis Conneau, Ankur Bapna, Yu Zhang, Min Ma, Patrick von Platen, Anton Lozhkov, Colin Cherry, Ye Jia, Clara Rivera, Mihir Kale, Daan van Esch, Vera Axelrod, Simran Khanuja, Jonathan H. Clark, Orhan Firat, Michael Auli, Sebastian Ruder, Jason Riesa, Melvin Johnson

Covering 102 languages from 10+ language families, 3 different domains and 4 task families, XTREME-S aims to simplify multilingual speech representation evaluation, as well as catalyze research in "universal" speech representation learning.

Representation Learning Retrieval +4

Data Scaling Laws in NMT: The Effect of Noise and Architecture

no code implementations 4 Feb 2022 Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Maxim Krikun, Colin Cherry, Behnam Neyshabur, Orhan Firat

In this work, we study the effect of varying the architecture and training data quality on the data scaling properties of Neural Machine Translation (NMT).

Language Modelling Machine Translation +1
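
The entry above studies how translation quality scales with training-data size and quality. As a hedged illustration of the kind of analysis involved, the sketch below fits a saturating power law, L(D) = L_inf + a * D^(-p), to synthetic (data size, loss) measurements; the functional form and numbers are assumptions for illustration, not the paper's fitted curves.

```python
# Illustrative sketch: fitting a saturating power law to (dataset size, loss)
# pairs, the generic form often used in data-scaling studies. The synthetic
# numbers and the exact functional form are assumptions, not results from
# "Data Scaling Laws in NMT".
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(D, L_inf, a, p):
    """Test loss as a function of dataset size D."""
    return L_inf + a * np.power(D, -p)

# Synthetic measurements: dataset sizes (in sentence pairs) and dev losses.
D = np.array([1e5, 3e5, 1e6, 3e6, 1e7, 3e7])
loss = np.array([4.10, 3.62, 3.21, 2.95, 2.78, 2.69])

params, _ = curve_fit(scaling_law, D, loss, p0=[2.5, 50.0, 0.3], maxfev=10000)
L_inf, a, p = params
print(f"irreducible loss ~ {L_inf:.2f}, exponent p ~ {p:.2f}")
print(f"predicted loss at 1e8 pairs: {scaling_law(1e8, *params):.2f}")
```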

mSLAM: Massively multilingual joint pre-training for speech and text

no code implementations 3 Feb 2022 Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, Alexis Conneau

We present mSLAM, a multilingual Speech and LAnguage Model that learns cross-lingual cross-modal representations of speech and text by pre-training jointly on large amounts of unlabeled speech and text in multiple languages.

Intent Classification +4

Can Multilinguality benefit Non-autoregressive Machine Translation?

no code implementations 16 Dec 2021 Sweta Agrawal, Julia Kreutzer, Colin Cherry

Non-autoregressive (NAR) machine translation has recently achieved significant improvements, and now outperforms autoregressive (AR) models on some benchmarks, providing an efficient alternative to AR inference.

Machine Translation Translation

Assessing Reference-Free Peer Evaluation for Machine Translation

no code implementations NAACL 2021 Sweta Agrawal, George Foster, Markus Freitag, Colin Cherry

Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains.

Machine Translation Translation

Simultaneous Translation

no code implementations EMNLP 2020 Liang Huang, Colin Cherry, Mingbo Ma, Naveen Arivazhagan, Zhongjun He

Simultaneous translation, which performs translation concurrently with the source speech, is widely useful in many scenarios such as international conferences, negotiations, press releases, legal proceedings, and medicine.

Machine Translation Speech Recognition +3

Sentence Boundary Augmentation For Neural Machine Translation Robustness

no code implementations 21 Oct 2020 Daniel Li, Te I, Naveen Arivazhagan, Colin Cherry, Dirk Padfield

Specifically, in the context of long-form speech translation systems, where the input transcripts come from Automatic Speech Recognition (ASR), the NMT models have to handle errors including phoneme substitutions, grammatical structure, and sentence boundaries, all of which pose challenges to NMT robustness.

Automatic Speech Recognition (ASR) +7
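
The entry above motivates making NMT robust to ASR segmentation errors. The sketch below shows one way sentence-boundary perturbation could be implemented as data augmentation, by merging adjacent parallel segments to mimic missed boundaries; the concrete augmentation scheme in the paper may differ.

```python
# Hedged sketch of sentence-boundary augmentation for MT training data:
# simulate ASR segmentation errors by occasionally merging adjacent parallel
# segments. The concrete scheme in the paper may differ from this illustration.
import random

def augment_boundaries(parallel_segments, merge_prob=0.3, seed=0):
    """parallel_segments: list of (source, target) sentence pairs in document order."""
    rng = random.Random(seed)
    augmented = []
    i = 0
    while i < len(parallel_segments):
        src, tgt = parallel_segments[i]
        # Occasionally merge with the next segment to mimic a missed boundary.
        if i + 1 < len(parallel_segments) and rng.random() < merge_prob:
            nsrc, ntgt = parallel_segments[i + 1]
            src, tgt = f"{src} {nsrc}", f"{tgt} {ntgt}"
            i += 1
        augmented.append((src, tgt))
        i += 1
    return augmented

if __name__ == "__main__":
    segs = [("hello there .", "bonjour ."),
            ("how are you ?", "comment allez-vous ?"),
            ("fine , thanks .", "bien , merci .")]
    for pair in augment_boundaries(segs, merge_prob=0.5):
        print(pair)
```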

Human-Paraphrased References Improve Neural Machine Translation

1 code implementation WMT (EMNLP) 2020 Markus Freitag, George Foster, David Grangier, Colin Cherry

When used in place of original references, the paraphrased versions produce metric scores that correlate better with human judgment.

Machine Translation NMT +1

Inference Strategies for Machine Translation with Conditional Masking

no code implementations EMNLP 2020 Julia Kreutzer, George Foster, Colin Cherry

Conditional masked language model (CMLM) training has proven successful for non-autoregressive and semi-autoregressive sequence generation tasks, such as machine translation.

Language Modelling Machine Translation +1
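
The entry above concerns inference with a conditional masked language model. One common strategy in this family is iterative mask-predict: fill every masked position, then re-mask the least confident predictions and repeat. The sketch below illustrates that loop with a toy scoring function standing in for a real CMLM; it is not necessarily the strategy the paper recommends.

```python
# Hedged sketch of mask-predict style inference for a conditional masked LM:
# predict all masked tokens, then re-mask the least-confident positions and
# iterate. The toy `predict` function stands in for a real CMLM; the paper
# compares several such strategies rather than prescribing this exact one.
import numpy as np

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def predict(tokens, rng):
    """Toy stand-in for a CMLM: return (token, confidence) for every position."""
    out = []
    for tok in tokens:
        if tok == MASK:
            probs = rng.dirichlet(np.ones(len(VOCAB)))
            j = int(np.argmax(probs))
            out.append((VOCAB[j], float(probs[j])))
        else:
            out.append((tok, 1.0))  # keep committed tokens with full confidence
    return out

def mask_predict(length, iterations=4, seed=0):
    rng = np.random.default_rng(seed)
    tokens = [MASK] * length
    for it in range(iterations):
        preds = predict(tokens, rng)
        tokens = [tok for tok, _ in preds]
        confidences = np.array([conf for _, conf in preds])
        # Linearly decay how many positions get re-masked each iteration.
        n_remask = int(length * (iterations - it - 1) / iterations)
        if n_remask == 0:
            break
        for idx in np.argsort(confidences)[:n_remask]:
            tokens[idx] = MASK
    return tokens

print(" ".join(mask_predict(length=6)))
```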

Re-translation versus Streaming for Simultaneous Translation

no code implementations WS 2020 Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, George Foster

There has been great progress in improving streaming machine translation, a simultaneous paradigm where the system appends to a growing hypothesis as more source content becomes available.

Attribute Data Augmentation +2

Re-Translation Strategies For Long Form, Simultaneous, Spoken Language Translation

1 code implementation 6 Dec 2019 Naveen Arivazhagan, Colin Cherry, Te I, Wolfgang Macherey, Pallavi Baljekar, George Foster

As this scenario allows for revisions to our incremental translations, we adopt a re-translation approach to simultaneous translation, where the source is repeatedly translated from scratch as it grows.

Machine Translation Speech Recognition +2
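
The entry above adopts a re-translation approach: each time more source arrives, the growing prefix is translated from scratch and the displayed output is revised. The sketch below illustrates that loop with a stub translator and a simple "hold back the last k words" stabilization heuristic; both are illustrative assumptions rather than the paper's system.

```python
# Hedged sketch of re-translation for simultaneous translation: each time new
# source words arrive, the growing prefix is re-translated from scratch and
# the display is revised. The stub `translate` and the "hide the last k words"
# stabilization are illustrative assumptions, not the systems from the paper.
def translate(source_words):
    """Stub MT system: a real system would return a target-language hypothesis."""
    return ["<" + w + ">" for w in source_words]  # placeholder "translation"

def retranslation_stream(source_stream, mask_k=1):
    """Yield the displayed hypothesis after each new source word."""
    prefix = []
    for word in source_stream:
        prefix.append(word)
        hypothesis = translate(prefix)
        # Stabilization heuristic: hold back the last k words, which are the
        # most likely to be revised when more source arrives.
        stable = hypothesis[:max(0, len(hypothesis) - mask_k)]
        yield " ".join(stable)

if __name__ == "__main__":
    for display in retranslation_stream("the talk starts in five minutes".split()):
        print(display)
```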

Monotonic Infinite Lookback Attention for Simultaneous Machine Translation

no code implementations ACL 2019 Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, Colin Raffel

Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios.

Machine Translation NMT +2

Thinking Slow about Latency Evaluation for Simultaneous Machine Translation

no code implementations 31 May 2019 Colin Cherry, George Foster

Simultaneous machine translation attempts to translate a source sentence before it is finished being spoken, with applications to translation of spoken language for live streaming and conversation.

Machine Translation Sentence +1
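
The entry above examines how latency should be evaluated for simultaneous translation. For context, the sketch below computes one widely used latency metric, average lagging, from a read/write schedule; the exact metrics and variants analysed in the paper may differ from this common formulation.

```python
# Hedged sketch: average lagging (AL), one common latency metric for
# simultaneous translation. g[t] is the number of source tokens read before
# emitting target token t (1-indexed here). This is the commonly used
# formulation; the paper discusses how such metrics should be interpreted
# and may analyse different variants.
def average_lagging(g, src_len, tgt_len):
    gamma = tgt_len / src_len              # target-to-source length ratio
    # tau: index of the first target token emitted after the full source is read.
    tau = next(t for t, reads in enumerate(g, start=1) if reads >= src_len)
    lag = [g[t - 1] - (t - 1) / gamma for t in range(1, tau + 1)]
    return sum(lag) / tau

if __name__ == "__main__":
    # A wait-2 style schedule for a 6-token source and 6-token target:
    schedule = [2, 3, 4, 5, 6, 6]
    print(f"AL = {average_lagging(schedule, src_len=6, tgt_len=6):.2f}")
```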

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

2 code implementations 21 Feb 2019 Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus towards sequence-to-sequence models.

Sequence-To-Sequence Speech Recognition

Shaping the Narrative Arc: An Information-Theoretic Approach to Collaborative Dialogue

no code implementations 31 Jan 2019 Kory W. Mathewson, Pablo Samuel Castro, Colin Cherry, George Foster, Marc G. Bellemare

We consider the problem of designing an artificial agent capable of interacting with humans in collaborative dialogue to produce creative, engaging narratives.

Specificity

Efficient Sequence Labeling with Actor-Critic Training

1 code implementation 30 Sep 2018 Saeed Najafi, Colin Cherry, Grzegorz Kondrak

We set out to establish RNNs as an attractive alternative to CRFs for sequence labeling.

Decision Making NER +1

Revisiting Character-Based Neural Machine Translation with Capacity and Compression

no code implementations EMNLP 2018 Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, Wolfgang Macherey

Translating characters instead of words or word-fragments has the potential to simplify the processing pipeline for neural machine translation (NMT), and improve results by eliminating hyper-parameters and manual feature engineering.

Feature Engineering Machine Translation +2

Cost Weighting for Neural Machine Translation Domain Adaptation

no code implementations WS 2017 Boxing Chen, Colin Cherry, George Foster, Samuel Larkin

We compare cost weighting to two traditional domain adaptation techniques developed for statistical machine translation: data selection and sub-corpus weighting.

Domain Adaptation Machine Translation +1
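
The entry above compares cost weighting against data selection and sub-corpus weighting. As a hedged illustration of cost weighting, the sketch below scales each sentence pair's training loss by an in-domain relevance score from a hypothetical domain classifier; the paper's actual weighting function may be defined differently.

```python
# Hedged sketch of cost weighting for domain adaptation: scale each training
# example's loss by how in-domain it looks. `domain_score` is a hypothetical
# stand-in for a real domain classifier; the paper's exact weighting scheme
# may differ.
import numpy as np

def domain_score(sentence, in_domain_vocab):
    """Toy relevance score: fraction of words that appear in an in-domain vocabulary."""
    words = sentence.split()
    return sum(w in in_domain_vocab for w in words) / max(len(words), 1)

def weighted_training_loss(per_example_losses, source_sentences, in_domain_vocab):
    """Average per-example losses, weighting each by its domain relevance."""
    weights = np.array([domain_score(s, in_domain_vocab) for s in source_sentences])
    losses = np.asarray(per_example_losses)
    return float(np.sum(weights * losses) / np.sum(weights))

if __name__ == "__main__":
    vocab = {"patient", "dose", "treatment"}
    sources = ["the patient received a new dose", "the football match was cancelled"]
    losses = [2.3, 1.9]  # e.g. per-sentence cross-entropy from the NMT model
    print(weighted_training_loss(losses, sources, vocab))
```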

End-to-End Multi-View Networks for Text Classification

no code implementations 19 Apr 2017 Hongyu Guo, Colin Cherry, Jiang Su

For a bag-of-words representation, each view focuses on a different subset of the text's words.

General Classification Text Classification +1
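
The entry above describes multi-view networks in which, for a bag-of-words input, each view focuses on a different subset of the words. The sketch below shows one way such views could be formed and combined into a single feature vector; the hash-based vocabulary partition and concatenation combiner are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of a multi-view bag-of-words representation: each "view"
# covers a different subset of the vocabulary and produces its own feature
# vector, which a downstream classifier can then combine. The hash-based
# partition and the concatenation combiner are illustrative assumptions,
# not the architecture from the paper.
import zlib
import numpy as np

def assign_views(vocab, n_views):
    """Partition the vocabulary into n_views subsets using a stable hash."""
    return {w: zlib.crc32(w.encode()) % n_views for w in vocab}

def multi_view_features(text, vocab, n_views=3):
    word_to_view = assign_views(vocab, n_views)
    index = {w: i for i, w in enumerate(sorted(vocab))}
    views = np.zeros((n_views, len(vocab)))
    for w in text.split():
        if w in index:
            views[word_to_view[w], index[w]] += 1.0  # count within the view that owns w
    # Concatenate per-view bag-of-words vectors into one feature vector.
    return views.reshape(-1)

if __name__ == "__main__":
    vocab = {"good", "bad", "movie", "plot", "great", "boring"}
    feats = multi_view_features("great movie but boring plot", vocab)
    print(feats.shape, feats.sum())
```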

A Dataset for Detecting Stance in Tweets

no code implementations LREC 2016 Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, Colin Cherry

Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet.
