Search Results for author: Michael Thomas Wojnowicz

Found 1 paper, 1 paper with code

PROPS: Probabilistic personalization of black-box sequence models

1 code implementation • 5 Mar 2019 • Michael Thomas Wojnowicz, Xuan Zhao

In particular, we construct a baseline language model by training an LSTM on the entire Wikipedia corpus of 2.5 million articles (around 6.6 billion words), and then use PROPS to provide lightweight customization into a personalized language model of President Donald J. Trump's tweeting.
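The abstract describes keeping a large baseline model fixed and learning a small, user-specific layer on top. The sketch below illustrates that general idea only, not the PROPS algorithm itself: a frozen "black-box" next-token distribution is interpolated with a smoothed user unigram model. All names (`black_box_next_token_probs`, `fit_user_unigram`, the toy vocabulary, the mixing weight `lam`) are hypothetical stand-ins, and the frozen model is mocked as a uniform distribution rather than a trained LSTM.

```python
import numpy as np

# Toy vocabulary for illustration only.
VOCAB = ["the", "wall", "great", "again", "economy"]

def black_box_next_token_probs(context):
    """Stand-in for a frozen baseline LSTM: returns a uniform
    next-token distribution regardless of context."""
    return np.full(len(VOCAB), 1.0 / len(VOCAB))

def fit_user_unigram(user_tokens, alpha=1.0):
    """Additively smoothed unigram distribution estimated from a
    single user's text (the lightweight, per-user parameters)."""
    counts = np.full(len(VOCAB), alpha)
    for tok in user_tokens:
        if tok in VOCAB:
            counts[VOCAB.index(tok)] += 1
    return counts / counts.sum()

def personalized_probs(context, user_unigram, lam=0.3):
    """Mix the frozen baseline with the user model; the baseline's
    weights are never touched, only the mixture on top."""
    base = black_box_next_token_probs(context)
    return (1 - lam) * base + lam * user_unigram

# Personalize toward a user whose text favors "great" and "again".
user = fit_user_unigram(["great", "great", "wall", "again"])
p = personalized_probs(["build"], user)
# p is still a valid distribution, with extra mass on user-favored tokens
```

The design point is that the baseline stays a black box: personalization touches only a handful of user-level parameters, which is what makes the customization lightweight.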

Language Modelling · Transfer Learning
