Search Results for author: Rohun Saxena

Found 2 papers, 1 paper with code

Do Massively Pretrained Language Models Make Better Storytellers?

1 code implementation · CoNLL 2019 · Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning

Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks.

Natural Language Understanding · Story Generation

Amanuensis: The Programmer's Apprentice

no code implementations · 29 Jun 2018 · Thomas Dean, Maurice Chiang, Marcus Gomez, Nate Gruver, Yousef Hindy, Michelle Lam, Peter Lu, Sophia Sanchez, Rohun Saxena, Michael Smith, Lucy Wang, Catherine Wong

This document provides an overview of the material covered in a course taught at Stanford in the spring quarter of 2018.
