Search Results for author: Stuart M. Shieber

Found 13 papers, 9 papers with code

Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages

2 code implementations 8 Nov 2019 Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber

We introduce three memory-augmented Recurrent Neural Networks (MARNNs) and explore their capabilities on a series of simple language modeling tasks whose solutions require stack-based mechanisms.

Language Modelling
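
The paper's exact memory-augmented architectures are not reproduced here; the sketch below shows one common way to realize the stack-based mechanism the abstract refers to, a recurrent controller that soft-pushes and soft-pops a differentiable stack. All module and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackRNNCell(nn.Module):
    """RNN controller coupled to a differentiable stack (a generic
    sketch, not the paper's exact models)."""
    def __init__(self, input_size, hidden_size, stack_dim):
        super().__init__()
        self.rnn = nn.RNNCell(input_size + stack_dim, hidden_size)
        self.action = nn.Linear(hidden_size, 2)        # push vs. pop weights
        self.push_val = nn.Linear(hidden_size, stack_dim)

    def forward(self, x, h, stack):
        top = stack[:, 0, :]                           # read the stack top
        h = self.rnn(torch.cat([x, top], dim=-1), h)
        a = F.softmax(self.action(h), dim=-1)          # soft push/pop choice
        push, pop = a[:, 0:1, None], a[:, 1:2, None]
        v = torch.tanh(self.push_val(h)).unsqueeze(1)  # value to push
        pushed = torch.cat([v, stack[:, :-1, :]], dim=1)           # shift down
        popped = torch.cat([stack[:, 1:, :],
                            torch.zeros_like(stack[:, :1, :])], dim=1)
        stack = push * pushed + pop * popped           # convex combination
        return h, stack

# One timestep over a batch of 4 symbols, stack depth 16.
cell = StackRNNCell(input_size=8, hidden_size=32, stack_dim=8)
x, h = torch.randn(4, 8), torch.zeros(4, 32)
stack = torch.zeros(4, 16, 8)                          # (batch, depth, dim)
h, stack = cell(x, h, stack)
```

Because every stack operation is a soft, differentiable blend of push and pop, the cell trains end-to-end with backpropagation, which is what lets such models pick up Dyck-style bracket matching.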

Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference

1 code implementation ACL 2019 Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush

In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise.

Natural Language Inference
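
A minimal sketch of the objective the abstract describes: alongside the usual label classifier, a decoder is trained to assign probability to the premise given the hypothesis and the gold label. The module names and the equal weighting of the two losses are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PremiseAwareNLI(nn.Module):
    def __init__(self, vocab_size, dim=128, n_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.label_embed = nn.Embedding(n_labels, dim)
        self.prem_enc = nn.LSTM(dim, dim, batch_first=True)
        self.hyp_enc = nn.LSTM(dim, dim, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_labels)
        self.prem_dec = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, premise, hypothesis, label):
        _, (hp, _) = self.prem_enc(self.embed(premise))
        _, (hh, ch) = self.hyp_enc(self.embed(hypothesis))
        # Discriminative term: the usual p(label | premise, hypothesis).
        logits = self.classifier(torch.cat([hp[-1], hh[-1]], dim=-1))
        clf_loss = F.cross_entropy(logits, label)
        # Generative term: p(premise | hypothesis, label), teacher-forced,
        # penalizing models that could otherwise ignore the premise.
        h0 = (hh + self.label_embed(label).unsqueeze(0), ch)
        dec_out, _ = self.prem_dec(self.embed(premise[:, :-1]), h0)
        gen_loss = F.cross_entropy(
            self.out(dec_out).reshape(-1, self.out.out_features),
            premise[:, 1:].reshape(-1))
        return clf_loss + gen_loss   # equal weighting is an assumption

model = PremiseAwareNLI(vocab_size=1000)
prem = torch.randint(1000, (4, 12))
hyp = torch.randint(1000, (4, 9))
label = torch.randint(3, (4,))
loss = model(prem, hyp, label)
```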

LSTM Networks Can Perform Dynamic Counting

no code implementations WS 2019 Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber

In this paper, we systematically assess the ability of standard recurrent networks to perform dynamic counting and to encode hierarchical representations.
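
Dynamic counting can be made concrete with Dyck-1 (balanced brackets), where the only state a model needs is the nesting depth. Below is a sketch of the data and the next-symbol supervision such probes typically use; the sampling probability and string lengths are assumptions.

```python
import random

def gen_dyck1(p_open=0.5, n_steps=40):
    """Sample a Dyck-1 (balanced-bracket) string."""
    s, depth = [], 0
    for _ in range(n_steps):
        if depth == 0 or random.random() < p_open:
            s.append("("); depth += 1
        else:
            s.append(")"); depth -= 1
    s.extend(")" * depth)              # close anything still open
    return "".join(s)

def next_symbol_targets(s):
    """Legal next symbols after each prefix. Tracking one counter (the
    nesting depth) suffices -- this is the 'dynamic counting' an LSTM
    must implement internally to predict these targets."""
    targets, depth = [], 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        targets.append({"(", ")"} if depth > 0 else {"(", "END"})
    return targets

s = gen_dyck1()
print(s)
print(next_symbol_targets(s)[:3])
```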

On Evaluating the Generalization of LSTM Models in Formal Languages

1 code implementation WS 2019 Mirac Suzgun, Yonatan Belinkov, Stuart M. Shieber

Recurrent Neural Networks (RNNs) are theoretically Turing-complete and have established themselves as a dominant model for language processing.
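
The generalization question in this line of work is usually operationalized as a length split: train on short strings from a formal language, evaluate on strictly longer ones. A sketch with the counting language a^n b^n; the length thresholds and sample counts are assumptions.

```python
import random

def anbn(n):
    """A string from the counting language a^n b^n."""
    return "a" * n + "b" * n

# Test lengths lie strictly beyond anything seen in training, so success
# requires generalizing the counting rule rather than memorizing strings.
train = [anbn(random.randint(1, 50)) for _ in range(5000)]
test = [anbn(n) for n in range(51, 101)]

def legal_next(prefix):
    """Oracle next symbols for a^n b^n; a model is scored by whether its
    per-step predictions stay within this set at every position."""
    a, b = prefix.count("a"), prefix.count("b")
    if b == 0:
        return {"a", "b"} if a > 0 else {"a"}
    return {"b"} if b < a else {"END"}
```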

Learning Neural Templates for Text Generation

2 code implementations EMNLP 2018 Sam Wiseman, Stuart M. Shieber, Alexander M. Rush

While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation.

Text Generation

Adapting Sequence Models for Sentence Correction

1 code implementation EMNLP 2017 Allen Schmaltz, Yoon Kim, Alexander M. Rush, Stuart M. Shieber

In a controlled experiment of sequence-to-sequence approaches for the task of sentence correction, we find that character-based models are generally more effective than word-based models and models that encode subword information via convolutions, and that modeling the output data as a series of diffs improves effectiveness over standard approaches.

Machine Translation · Translation
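
The "series of diffs" idea can be sketched with difflib: the target sequence keeps unchanged spans as single copy tags and spells out only the edited tokens. The tag format below is an assumption, not the paper's exact scheme.

```python
import difflib

def to_diff_target(source_tokens, corrected_tokens):
    """Turn a (source, corrected) pair into a short edit-script target."""
    sm = difflib.SequenceMatcher(a=source_tokens, b=corrected_tokens)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            out.append(f"<keep:{i2 - i1}>")           # copy a span verbatim
        else:
            if op in ("replace", "delete"):
                out.append(f"<del:{i2 - i1}>")        # drop source tokens
            if op in ("replace", "insert"):
                out.extend(corrected_tokens[j1:j2])   # emit new tokens
    return out

src = "He go to school yesterday .".split()
cor = "He went to school yesterday .".split()
print(to_diff_target(src, cor))
# ['<keep:1>', '<del:1>', 'went', '<keep:4>']
```

Targets like this keep most of the sentence as cheap copy operations, so the decoder only has to produce the tokens that actually change.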

Challenges in Data-to-Document Generation

4 code implementations EMNLP 2017 Sam Wiseman, Stuart M. Shieber, Alexander M. Rush

Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records.

Data-to-Text Generation
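
For context, database records typically reach a neural generator as linearized tuples. A hypothetical sketch of that preprocessing step; the field names and values below are made up for illustration.

```python
records = [
    {"entity": "Heat",  "type": "TEAM-PTS",   "value": "103"},
    {"entity": "Hawks", "type": "TEAM-PTS",   "value": "95"},
    {"entity": "Wade",  "type": "PLAYER-PTS", "value": "30"},
]

def linearize(records):
    """Flatten records into the token sequence a seq2seq encoder reads."""
    return [tok for r in records
            for tok in (r["entity"], r["type"], r["value"], "<rec>")]

print(" ".join(linearize(records)))
```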

Word Ordering Without Syntax

1 code implementation EMNLP 2016 Allen Schmaltz, Alexander M. Rush, Stuart M. Shieber

Recent work on word ordering has argued that syntactic structure is important, or even required, for effectively recovering the order of a sentence.

Language Modelling
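
The syntax-free alternative the title implies is to let a language model pick the order directly: score candidate linearizations of a bag of words and keep the best. In the toy sketch below, a bigram LM with add-one smoothing and exhaustive search stand in for a trained LSTM LM with beam search.

```python
from itertools import permutations
from collections import Counter
import math

corpus = ["the dog chased the cat", "the cat sat on the mat",
          "the dog sat on the mat"]
bigrams, unigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks, toks[1:]))

def score(order):
    """Smoothed bigram log-probability of one candidate ordering."""
    toks = ["<s>"] + list(order) + ["</s>"]
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(unigrams)))
               for a, b in zip(toks, toks[1:]))

bag = ["cat", "the", "sat", "mat", "the", "on"]
best = max(permutations(bag), key=score)
print(" ".join(best))   # likely: "the cat sat on the mat"
```

For realistic bag sizes, the exhaustive search over permutations is replaced by beam search over partial orderings.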

Sentence-Level Grammatical Error Identification as Sequence-to-Sequence Correction

no code implementations WS 2016 Allen Schmaltz, Yoon Kim, Alexander M. Rush, Stuart M. Shieber

We demonstrate that an attention-based encoder-decoder model can be used for sentence-level grammatical error identification for the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016.
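
One natural reduction consistent with this framing (an assumption about details the abstract does not spell out): run the sequence-to-sequence correction model and flag a sentence whenever the model's output differs from its input.

```python
def identify_error(sentence, correct):
    """Sentence-level binary identification via correction: the sentence
    is flagged iff the correction model wants to change it."""
    return correct(sentence) != sentence

def toy_model(s):
    # Stand-in for the trained attention-based encoder-decoder, which is
    # not reproduced here; it fixes one known agreement error.
    return s.replace("He go ", "He goes ")

print(identify_error("He go to school .", toy_model))     # True
print(identify_error("She goes to school .", toy_model))  # False
```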

Learning Global Features for Coreference Resolution

1 code implementation NAACL 2016 Sam Wiseman, Alexander M. Rush, Stuart M. Shieber

There is compelling evidence that coreference prediction would benefit from modeling global information about entity-clusters.

Coreference Resolution
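
A sketch of what "global information about entity-clusters" can mean operationally: each cluster carries a running representation that scores new mentions, rather than comparing mention pairs in isolation. The greedy rule and mean-vector features are illustrative assumptions, not the paper's model.

```python
import numpy as np

def greedy_cluster(mentions, threshold=0.7):
    """mentions: list of (text, embedding) pairs in document order."""
    clusters = []   # each: {"texts": [...], "rep": running mean vector}
    for text, emb in mentions:
        emb = emb / np.linalg.norm(emb)
        best, best_sim = None, threshold
        for c in clusters:
            sim = float(emb @ c["rep"])   # global, cluster-level score
            if sim > best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append({"texts": [text], "rep": emb})
        else:
            best["texts"].append(text)
            # Fold the new mention into the cluster's global representation.
            rep = best["rep"] + emb
            best["rep"] = rep / np.linalg.norm(rep)
    return [c["texts"] for c in clusters]

v = np.array([1.0, 0.0]); w = np.array([0.9, 0.1]); u = np.array([0.0, 1.0])
print(greedy_cluster([("Obama", v), ("he", w), ("the bill", u)]))
# [['Obama', 'he'], ['the bill']]
```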
