no code implementations • WS 2018 • Shereen Oraby, Lena Reed, Shubhangi Tandon, T. S. Sharath, Stephanie Lukin, Marilyn Walker
We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the encoder's hidden state at each time step, demonstrating the benefits of explicit stylistic supervision even when the amount of training data is large.
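The mechanism described above can be sketched minimally: at every encoder time step, a fixed vector of 36 stylistic parameters is concatenated with the token input before the recurrent update. This is an illustrative toy (a plain tanh RNN cell with random weights, not the authors' actual encoder), and all names, dimensions other than 36, and the style encoding itself are assumptions.

```python
import numpy as np

def encode_with_style(token_embeddings, style_params, hidden_dim=64, seed=0):
    """Sketch: fold a style-context vector into each encoder time step.

    token_embeddings: (T, d) array, one embedding per input token.
    style_params: (36,) array of stylistic parameters (hypothetical encoding).
    Returns the (T, hidden_dim) sequence of hidden states.
    """
    rng = np.random.default_rng(seed)
    T, d = token_embeddings.shape
    # Single weight matrix over [token ; style ; previous hidden].
    W = rng.standard_normal((hidden_dim, d + style_params.shape[0] + hidden_dim)) * 0.01
    h = np.zeros(hidden_dim)
    states = []
    for t in range(T):
        # The same 36 style parameters are appended at every step,
        # so stylistic supervision is visible throughout encoding.
        x = np.concatenate([token_embeddings[t], style_params, h])
        h = np.tanh(W @ x)
        states.append(h)
    return np.stack(states)
```

A real model would learn W jointly with the decoder; the point here is only where the 36-dimensional context vector enters the computation.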
no code implementations • E2E NLG Challenge System Descriptions 2018 • Shereen Oraby, Lena Reed, Shubhangi Tandon, Stephanie Lukin, Marilyn A. Walker
In the area of natural language generation (NLG), there has been a great deal of interest in end-to-end (E2E) neural models that learn and generate natural language sentence realizations in one step.
Ranked #7 on Data-to-Text Generation on E2E NLG Challenge (using extra training data)
no code implementations • 10 Sep 2017 • Kevin K. Bowden, Shereen Oraby, Amita Misra, Jiaqi Wu, Stephanie Lukin
To build dialogue systems that can tackle the ambitious task of holding social conversations, we argue for a data-driven approach that draws on insight into human conversational chit-chat and incorporates different natural language processing modules.
no code implementations • LREC 2014 • Reid Swanson, Stephanie Lukin, Luke Eisenberg, Thomas Chase Corcoran, Marilyn A. Walker
The language used in online forums differs in many ways from that of traditional language resources such as news.
no code implementations • WS 2013 • Stephanie Lukin, Marilyn Walker
Our first phase, using crowdsourced nasty indicators, achieves 58% precision and 49% recall, which increases to 75% precision and 62% recall when we bootstrap over the first level with generalized syntactic patterns.
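The two-phase setup above can be illustrated with a toy bootstrapping loop: a high-precision first pass flags posts containing crowdsourced "nasty" indicator terms, and terms harvested from those flagged posts then widen recall on a second pass. This sketch substitutes simple lexical co-occurrence for the paper's generalized syntactic patterns; the seed list, posts, and thresholds are invented for illustration.

```python
# Hypothetical seed indicators standing in for crowdsourced "nasty" terms.
NASTY_SEEDS = {"idiot", "moron"}

def flag(posts, indicators):
    """Flag any post containing one of the indicator terms (high precision)."""
    return [p for p in posts if any(w in indicators for w in p.lower().split())]

def mine_indicators(flagged, seeds, min_len=4):
    """Bootstrap step: harvest new candidate indicators from posts the
    first pass already flagged (any sufficiently long co-occurring word)."""
    return {w for p in flagged for w in p.lower().split()
            if len(w) >= min_len and w not in seeds}

posts = ["you are an idiot and a loser", "what a loser", "lovely day"]
phase1 = flag(posts, NASTY_SEEDS)              # seeds only: misses post 2
mined = mine_indicators(phase1, NASTY_SEEDS)   # learns "loser" from phase 1
phase2 = flag(posts, NASTY_SEEDS | mined)      # expanded set: recall rises
```

Here phase 2 recovers a post the seed list alone missed, mirroring the precision/recall jump the abstract reports when bootstrapping over the first level.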