Search Results for author: Scott Roy

Found 7 papers, 1 paper with code

Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech

no code implementations · INLG (ACL) 2020 · Mihir Kale, Scott Roy

Since the structured data is generally expressed in English, text generation into other languages involves elements of translation, transliteration and copying - elements already encoded in neural machine translation systems.

Data-to-Text Generation · Translation · +2

Using Machine Translation to Localize Task Oriented NLG Output

no code implementations · 9 Jul 2021 · Scott Roy, Cliff Brunk, Kyu-Young Kim, Justin Zhao, Markus Freitag, Mihir Kale, Gagan Bansal, Sidharth Mudgal, Chris Varano

One of the challenges in a task-oriented natural language application such as the Google Assistant, Siri, or Alexa is localizing the output to many languages.

Domain Adaptation · Machine Translation · +1

Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech

no code implementations · 5 Apr 2020 · Mihir Kale, Scott Roy

Since the structured data is generally expressed in English, text generation into other languages involves elements of translation, transliteration and copying - elements already encoded in neural machine translation systems.

Data-to-Text Generation · Translation · +1

APE at Scale and its Implications on MT Evaluation Biases

no code implementations · WS 2019 · Markus Freitag, Isaac Caswell, Scott Roy

In this work, we train an Automatic Post-Editing (APE) model and use it to reveal biases in standard Machine Translation (MT) evaluation procedures.

Automatic Post-Editing · NMT · +1

Unsupervised Natural Language Generation with Denoising Autoencoders

1 code implementation · EMNLP 2018 · Markus Freitag, Scott Roy

Generating text from structured data is important for various tasks such as question answering and dialog systems.

Denoising · Question Answering · +2

Contextual LSTM (CLSTM) models for Large scale NLP tasks

no code implementations · 19 Feb 2016 · Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, Larry Heck

We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction.

Paraphrase Generation · Question Answering · +2

Online Models for Content Optimization

no code implementations · NeurIPS 2008 · Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park, Raghu Ramakrishnan, Scott Roy, Joe Zachariah

It is now deployed on a major Internet portal, where it selects articles to serve to hundreds of millions of user visits per day, significantly increasing the number of user clicks over the original manual approach in which editors periodically selected articles to display.
