Search Results for author: Stephanie Lukin

Found 7 papers, 0 papers with code

Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators

no code implementations · WS 2018 · Shereen Oraby, Lena Reed, Shubhangi Tandon, T. S. Sharath, Stephanie Lukin, Marilyn Walker

We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the encoder's hidden state at each time step, demonstrating the benefits of explicit stylistic supervision even when the amount of training data is large.
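The conditioning mechanism described above can be sketched as follows. This is an illustrative NumPy toy, not the authors' implementation: the only detail taken from the abstract is the 36-dimensional vector of stylistic parameters injected at every encoder time step; the embedding size, hidden size, and vanilla-RNN update are assumptions.

```python
import numpy as np

STYLE_DIM = 36      # number of stylistic parameters (per the abstract)
EMB_DIM = 50        # hypothetical word-embedding size
HIDDEN_DIM = 64     # hypothetical hidden-state size

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(HIDDEN_DIM, EMB_DIM + STYLE_DIM))
U = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
b = np.zeros(HIDDEN_DIM)

def encode(token_embeddings, style_vector):
    """Run a vanilla RNN over the sequence, concatenating the style
    context vector to every input before the recurrent update."""
    h = np.zeros(HIDDEN_DIM)
    for x in token_embeddings:
        x_ctx = np.concatenate([x, style_vector])  # inject stylistic context
        h = np.tanh(W @ x_ctx + U @ h + b)
    return h

tokens = rng.normal(size=(7, EMB_DIM))   # a 7-token dummy sentence
style = rng.uniform(size=STYLE_DIM)      # e.g. personality parameters
h_final = encode(tokens, style)
print(h_final.shape)  # (64,)
```

Because the style vector is re-appended at every step rather than only at the start, the stylistic signal cannot be "forgotten" over long sequences, which is one plausible reason this explicit variant is the most controllable.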

TNT-NLG, System 1: Using a statistical NLG to massively augment crowd-sourced data for neural generation

no code implementations · E2E NLG Challenge System Descriptions 2018 · Shereen Oraby, Lena Reed, Shubhangi Tandon, Stephanie Lukin, Marilyn A. Walker

In the area of natural language generation (NLG), there has been a great deal of interest in end-to-end (E2E) neural models that learn and generate natural language sentence realizations in one step.

Ranked #7 on Data-to-Text Generation on E2E NLG Challenge (using extra training data)


Data-Driven Dialogue Systems for Social Agents

no code implementations · 10 Sep 2017 · Kevin K. Bowden, Shereen Oraby, Amita Misra, Jiaqi Wu, Stephanie Lukin

In order to build dialogue systems that can tackle the ambitious task of holding social conversations, we argue that we need a data-driven approach that includes insight into human conversational chit-chat and incorporates different natural language processing modules.


Getting Reliable Annotations for Sarcasm in Online Dialogues

no code implementations · LREC 2014 · Reid Swanson, Stephanie Lukin, Luke Eisenberg, Thomas Chase Corcoran, Marilyn A. Walker

The language used in online forums differs in many ways from that of traditional language resources such as news.

Really? Well. Apparently Bootstrapping Improves the Performance of Sarcasm and Nastiness Classifiers for Online Dialogue

no code implementations · WS 2013 · Stephanie Lukin, Marilyn Walker

Our first phase, using crowdsourced nasty indicators, achieves 58% precision and 49% recall, which increases to 75% precision and 62% recall when we bootstrap over the first level with generalized syntactic patterns.
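The two-phase scheme described above can be sketched in miniature. This is a hypothetical illustration, not the paper's pipeline: the seed phrases, example posts, and the single regular-expression pattern are all invented; the point is only the shape of the process, where high-precision seed indicators (phase 1) are generalized into broader patterns that raise recall (phase 2).

```python
import re

# Phase 1 seeds: hypothetical crowdsourced nastiness indicator phrases.
SEED_INDICATORS = {"you idiot", "shut up"}

posts = [
    "oh shut up, nobody asked you",
    "you idiot, that is not how it works",
    "you fool, that is not how it works",
    "thanks, that was genuinely helpful",
]

# Phase 1: high-precision labeling via exact indicator match.
phase1 = [p for p in posts if any(ind in p for ind in SEED_INDICATORS)]

# Phase 2: generalize a matched indicator ("you idiot") into a broader
# syntactic-style pattern ("you <word>,") and re-scan the corpus.
pattern = re.compile(r"\byou \w+,")
phase2 = [p for p in posts if pattern.search(p)]

# The generalized pattern picks up "you fool, ...", which the exact
# seed list missed -- the recall gain that bootstrapping buys.
print(len(phase1), len(phase2))
```

In the paper's actual setting, the generalized patterns are learned syntactic patterns rather than a hand-written regex, and the recall improvement (49% to 62%) comes at a smaller precision cost than a naive pattern like this one would.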
