Search Results for author: Mirac Suzgun

Found 5 papers, 4 papers with code

Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models

1 code implementation 23 May 2022 Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky

We propose a method for arbitrary textual style transfer (TST), the task of transforming a text into any given style, using general-purpose pre-trained language models.

Style Transfer

Monte Carlo Tree Search for Interpreting Stress in Natural Language

1 code implementation LTEDI (ACL) 2022 Kyle Swanson, Joy Hsu, Mirac Suzgun

Using a dataset of Reddit posts that exhibit stress, we demonstrate the ability of our MCTS algorithm to identify interpretable explanations for a person's feeling of stress in both a context-dependent and context-independent manner.

Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages

2 code implementations 8 Nov 2019 Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber

We introduce three memory-augmented Recurrent Neural Networks (MARNNs) and explore their capabilities on a series of simple language modeling tasks whose solutions require stack-based mechanisms.

Language Modelling
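The abstract above notes that solving these tasks requires stack-based mechanisms. As a point of reference for what the memory-augmented networks must learn, here is a minimal stack-based recognizer for well-nested (Dyck-style) strings; the specific bracket alphabet is an illustrative assumption, not the exact vocabulary used in the paper.

```python
# Illustrative stack-based recognizer for Dyck-style bracket languages.
# Each closing bracket must match the most recently opened one.
PAIRS = {")": "(", "]": "[", "}": "{"}


def is_dyck(word: str) -> bool:
    """Return True if `word` is a well-nested string over the bracket pairs."""
    stack = []
    for ch in word:
        if ch in PAIRS.values():
            stack.append(ch)  # opening bracket: push
        elif ch in PAIRS:
            # closing bracket: must match the bracket on top of the stack
            if not stack or stack.pop() != PAIRS[ch]:
                return False
        else:
            return False  # symbol outside the bracket alphabet
    return not stack  # well-nested iff every opener was closed
```

The point of the sketch is that membership cannot be decided with a fixed amount of memory: the stack depth grows with nesting depth, which is why plain recurrent networks struggle and the paper turns to memory-augmented architectures.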

LSTM Networks Can Perform Dynamic Counting

no code implementations WS 2019 Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber

In this paper, we systematically assess the ability of standard recurrent networks to perform dynamic counting and to encode hierarchical representations.
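To make "dynamic counting" concrete, the following is a hypothetical counter-based recognizer for the language a^n b^n, a standard example of a task solvable with a single counter rather than a full stack; the choice of this particular language is an assumption for illustration.

```python
# Illustrative counter automaton for a^n b^n (n >= 1): increment on 'a',
# decrement on 'b', accept iff the count returns exactly to zero.
def accepts_anbn(word: str) -> bool:
    """Return True if `word` is a^n b^n for some n >= 1."""
    count = 0
    seen_b = False
    for ch in word:
        if ch == "a":
            if seen_b:
                return False  # an 'a' after any 'b' is ill-formed
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False  # more b's than a's so far
        else:
            return False  # symbol outside the alphabet
    return seen_b and count == 0
```

A network that performs dynamic counting only needs to maintain this single unbounded counter, which is a strictly weaker requirement than the stack needed for Dyck languages.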

On Evaluating the Generalization of LSTM Models in Formal Languages

1 code implementation WS 2019 Mirac Suzgun, Yonatan Belinkov, Stuart M. Shieber

Recurrent Neural Networks (RNNs) are theoretically Turing-complete and have established themselves as a dominant model for language processing.
