Search Results for author: Weiting Tan

Found 10 papers, 6 papers with code

Streaming Sequence Transduction through Dynamic Compression

1 code implementation • 2 Feb 2024 • Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Heidi C. Zhang, Benjamin Van Durme, Philipp Koehn

We introduce STAR (Stream Transduction with Anchor Representations), a novel Transformer-based model designed for efficient sequence-to-sequence transduction over streams.

Automatic Speech Recognition (ASR) +1

The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts

no code implementations • 23 Jan 2024 • Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, Daniel Khashabi

As the influence of large language models (LLMs) spans global communities, their safety challenges in multilingual settings become paramount for alignment research.

Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation

1 code implementation • 16 Jan 2024 • Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim

However, even top-performing 13B LLM-based translation models, like ALMA, do not match the performance of state-of-the-art conventional encoder-decoder translation models or larger-scale LLMs such as GPT-4.

Machine Translation • Translation

Structure-Aware Path Inference for Neural Finite State Transducers

no code implementations • 21 Dec 2023 • Weiting Tan, Chu-Cheng Lin, Jason Eisner

In this paper, we focus on the resulting challenge of imputing the latent alignment path that explains a given pair of input and output strings (e.g., during training).

Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles

no code implementations • 4 Nov 2023 • Weiting Tan, Haoran Xu, Lingfeng Shen, Shuyue Stella Li, Kenton Murray, Philipp Koehn, Benjamin Van Durme, Yunmo Chen

Large language models trained primarily in a monolingual setting have demonstrated their ability to generalize to machine translation using zero- and few-shot examples with in-context learning.

In-Context Learning • Machine Translation +1

Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency

1 code implementation • 18 May 2023 • Lingfeng Shen, Weiting Tan, Boyuan Zheng, Daniel Khashabi

We provide theoretical foundations for this metric and its relationship with other prompt selection metrics, offering a comprehensive understanding of existing methods.

Multilingual Representation Distillation with Contrastive Learning

no code implementations • 10 Oct 2022 • Weiting Tan, Kevin Heffernan, Holger Schwenk, Philipp Koehn

Multilingual sentence representations from large models encode semantic information from two or more languages and can be used for different cross-lingual information retrieval and matching tasks.

Contrastive Learning • Cross-Lingual Information Retrieval +2