Search Results for author: Anna Rumshisky

Found 43 papers, 4 papers with code

Multi-Stream Transformers

no code implementations21 Jul 2021 Mikhail Burtsev, Anna Rumshisky

Transformer-based encoder-decoder models produce a fused token-wise representation after every encoder layer.
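A minimal sketch of the fused token-wise representation the excerpt refers to: in a standard Transformer encoder layer, the per-head attention outputs are concatenated and projected back into a single vector per token before the feed-forward sublayer. This is a generic PyTorch illustration of that baseline behavior, not the multi-stream variant the paper proposes; all dimensions are toy values.

```python
import torch
import torch.nn as nn

class VanillaEncoderLayer(nn.Module):
    """Standard Transformer encoder layer: every attention head is fused into
    one d_model-dimensional representation per token at every layer."""

    def __init__(self, d_model=256, n_heads=4, d_ff=1024):
        super().__init__()
        # MultiheadAttention concatenates per-head outputs and applies a single
        # output projection -> one fused vector per token.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        attn_out, _ = self.attn(x, x, x)       # fused token-wise representation
        x = self.norm1(x + attn_out)
        x = self.norm2(x + self.ff(x))
        return x

x = torch.randn(2, 16, 256)                    # toy batch
print(VanillaEncoderLayer()(x).shape)          # torch.Size([2, 16, 256])
```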

An Efficient DP-SGD Mechanism for Large Scale NLP Models

no code implementations14 Jul 2021 Christophe Dupuy, Radhika Arava, Rahul Gupta, Anna Rumshisky

However, the data used to train NLU models may contain private information such as addresses or phone numbers, particularly when drawn from human subjects.

Natural Language Understanding
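For context on the entry above, here is a minimal sketch of a textbook DP-SGD step: clip each example's gradient, average, and add Gaussian noise scaled to the clipping bound. The clipping bound and noise multiplier are illustrative values, and this is the standard mechanism rather than the efficient large-scale variant the paper proposes.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, max_norm=1.0, sigma=0.5, rng=None):
    """One textbook DP-SGD update: clip each per-example gradient to max_norm,
    sum, add Gaussian noise proportional to the clipping bound, average, step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:                  # g: flat gradient for one example
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, max_norm / (norm + 1e-12)))
    n = len(clipped)
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, sigma * max_norm, size=params.shape)) / n
    return params - lr * noisy_mean

# toy usage: four per-example gradients for a 3-dimensional parameter vector
params = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
         np.array([0.5, 0.5, 0.5]), np.array([-2.0, 0.0, 1.0])]
print(dp_sgd_step(params, grads))
```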

A guide to the dataset explosion in QA, NLI, and commonsense reasoning

no code implementations COLING 2020 Anna Rogers, Anna Rumshisky

Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work.

Natural Language Inference, Question Answering

Update Frequently, Update Fast: Retraining Semantic Parsing Systems in a Fraction of Time

no code implementations15 Oct 2020 Vladislav Lialin, Rahul Goel, Andrey Simanovsky, Anna Rumshisky, Rushin Shah

To reduce training time, one can fine-tune the previously trained model on each patch, but naive fine-tuning exhibits catastrophic forgetting, i.e. degradation of model performance on data not represented in the patch.

Continual Learning, Goal-Oriented Dialogue Systems +1
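A common way to soften the catastrophic forgetting mentioned in the excerpt above is to mix a small replay sample of earlier data into each fine-tuning pass on a patch. The sketch below illustrates that general recipe only, not the paper's specific method; `model` and `train_step` are hypothetical stand-ins supplied by the caller (a toy version is used here).

```python
import random

def finetune_on_patch(model, patch_data, old_data, train_step, replay_frac=0.2, epochs=1):
    """Fine-tune an already-trained model on a small data patch while mixing in
    a replay sample of older data to reduce forgetting."""
    k = min(len(old_data), int(replay_frac * len(patch_data)))
    mixed = list(patch_data) + random.sample(list(old_data), k)
    for _ in range(epochs):
        random.shuffle(mixed)
        for example in mixed:
            train_step(model, example)      # one gradient update per example
    return model

# toy usage with a stand-in "model" (a counter) and a trivial training step
model = {"updates": 0}
train_step = lambda m, ex: m.__setitem__("updates", m["updates"] + 1)
finetune_on_patch(model, patch_data=list(range(10)), old_data=list(range(100)), train_step=train_step)
print(model["updates"])                     # 12 examples seen: 10 from the patch + 2 replayed
```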

Towards Visual Dialog for Radiology

no code implementations WS 2020 Olga Kovaleva, Chaitanya Shivade, Satyananda Kashyap, Karina Kanjaria, Joy Wu, Deddeh Ballah, Adam Coy, Alexandros Karargyris, Yufan Guo, David Beymer, Anna Rumshisky, Vandana Mukherjee

Using MIMIC-CXR, an openly available database of chest X-ray images, we construct both a synthetic and a real-world dataset and provide baseline scores achieved by state-of-the-art models.

Question Answering, Visual Dialog +1

When BERT Plays the Lottery, All Tickets Are Winning

no code implementations EMNLP 2020 Sai Prasanna, Anna Rogers, Anna Rumshisky

Large Transformer-based models were shown to be reducible to a smaller number of self-attention heads and layers.
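The excerpt above refers to pruning self-attention heads. A minimal sketch of the general recipe (keep only the heads with the highest importance scores) follows; the scores here are random stand-ins, whereas the paper derives them from the trained model.

```python
import numpy as np

def head_mask(importance, keep_fraction=0.5):
    """Given an (n_layers, n_heads) importance matrix, return a binary mask
    that keeps only the globally most important attention heads."""
    flat = importance.flatten()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.sort(flat)[-k]                  # k-th largest score
    return (importance >= threshold).astype(int)

rng = np.random.default_rng(0)
scores = rng.random((12, 12))                      # BERT-base: 12 layers x 12 heads
mask = head_mask(scores, keep_fraction=0.25)
print(mask.sum(), "of", mask.size, "heads kept")   # 36 of 144 heads kept
```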

A Primer in BERTology: What we know about how BERT works

no code implementations27 Feb 2020 Anna Rogers, Olga Kovaleva, Anna Rumshisky

Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited.

Calls to Action on Social Media: Detection, Social Impact, and Censorship Potential

no code implementations WS 2019 Anna Rogers, Olga Kovaleva, Anna Rumshisky

Calls to action on social media are known to be effective means of mobilization in social movements, and a frequent target of censorship.

Injecting Hierarchy with U-Net Transformers

2 code implementations16 Oct 2019 David Donahue, Vladislav Lialin, Anna Rumshisky

The Transformer architecture has become increasingly popular over the past two years, owing to its impressive performance on a number of natural language processing (NLP) tasks.

Machine Translation

Memory-Augmented Recurrent Networks for Dialogue Coherence

no code implementations16 Oct 2019 David Donahue, Yuanliang Meng, Anna Rumshisky

The first design features a sequence-to-sequence architecture with two separate NTM modules, one for each participant in the conversation.

Language Modelling

NarrativeTime: Dense High-Speed Temporal Annotation on a Timeline

no code implementations29 Aug 2019 Anna Rogers, Gregory Smelkov, Anna Rumshisky

We present NarrativeTime, a new timeline-based annotation scheme for temporal order of events in text, and a new densely annotated fiction corpus comparable to TimeBank-Dense.

Chunking

Solving Math Word Problems with Double-Decoder Transformer

no code implementations28 Aug 2019 Yuanliang Meng, Anna Rumshisky

This paper proposes a Transformer-based model to generate equations for math word problems.

Revealing the Dark Secrets of BERT

1 code implementation IJCNLP 2019 Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky

BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success.

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations NAACL 2019 Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
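A rough sketch of the kind of penalty described in the excerpt above: add a loss term that discourages correlation between the predicted probability of the true occupation and a name embedding. The per-dimension correlation, the 0.1 weight, and the stand-in task loss below are illustrative choices, not the paper's exact formulation.

```python
import torch

def correlation_penalty(p_true, name_emb):
    """Mean absolute Pearson correlation between the predicted probability of the
    true occupation (p_true, shape [batch]) and each dimension of a name
    embedding (name_emb, shape [batch, dim])."""
    p = (p_true - p_true.mean()) / (p_true.std() + 1e-8)
    e = (name_emb - name_emb.mean(dim=0)) / (name_emb.std(dim=0) + 1e-8)
    corr = (p.unsqueeze(1) * e).mean(dim=0)        # correlation per embedding dimension
    return corr.abs().mean()

# toy usage
p_true = torch.rand(32)                            # predicted prob. of the true occupation
name_emb = torch.randn(32, 50)                     # e.g. a word embedding of the first name
task_loss = torch.tensor(0.0)                      # stand-in for the usual classification loss
total_loss = task_loss + 0.1 * correlation_penalty(p_true, name_emb)
print(total_loss.item())
```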

Adversarial Text Generation Without Reinforcement Learning

no code implementations11 Oct 2018 David Donahue, Anna Rumshisky

This is largely because sequences of text are discrete, and thus gradients cannot propagate from the discriminator to the generator.

Adversarial Text, Text Generation
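The excerpt above points out why GANs are hard to train on text: picking discrete tokens blocks gradient flow from the discriminator to the generator. The tiny illustration below contrasts a hard argmax with a Gumbel-softmax relaxation, which is one common workaround in the literature and is not the approach taken in this paper.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 5, requires_grad=True)     # generator scores over a 5-token vocabulary

# Hard decision: argmax/sampling is a discrete choice, so no gradient reaches `logits`.
hard_ids = logits.argmax(dim=-1)
print(hard_ids.requires_grad)                      # False -> discriminator gradients stop here

# Gumbel-softmax relaxation: a differentiable, nearly one-hot stand-in for sampling.
soft_onehot = F.gumbel_softmax(logits, tau=0.5, hard=False)
soft_onehot.sum().backward()
print(logits.grad is not None)                     # True -> gradients can flow back
```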

Triad-based Neural Network for Coreference Resolution

1 code implementation COLING 2018 Yuanliang Meng, Anna Rumshisky

We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution.

Coreference Resolution
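A loose sketch of the affinity-scoring idea in the entry above: score a group of mention representations with a small feed-forward network and treat the output as an affinity. The triad size, feature construction, and network shape are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TriadScorer(nn.Module):
    """Scores a triad of mention embeddings; a higher output means the mentions
    are more likely to be coreferent."""
    def __init__(self, mention_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * mention_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, m1, m2, m3):                 # each: (batch, mention_dim)
        return self.mlp(torch.cat([m1, m2, m3], dim=-1)).squeeze(-1)

scorer = TriadScorer()
mentions = [torch.randn(4, 64) for _ in range(3)]  # 4 candidate triads
print(torch.sigmoid(scorer(*mentions)))            # affinity scores in (0, 1)
```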

Adversarial Decomposition of Text Representation

2 code implementations NAACL 2019 Alexey Romanov, Anna Rumshisky, Anna Rogers, David Donahue

We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence.

RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian

no code implementations COLING 2018 Anna Rogers, Alexey Romanov, Anna Rumshisky, Svitlana Volkova, Mikhail Gronas, Alex Gribov

This paper presents RuSentiment, a new dataset for sentiment analysis of social media posts in Russian, and a new set of comprehensive annotation guidelines that are extensible to other languages.

Active Learning, General Classification +2

Forced Apart: Discovering Disentangled Representations Without Exhaustive Labels

no code implementations ICLR 2018 Alexey Romanov, Anna Rumshisky

Learning a better representation with neural networks is a challenging problem, which has been tackled from different perspectives in the past few years.

Tracking Bias in News Sources Using Social Media: the Russia-Ukraine Maidan Crisis of 2013-2014

no code implementations WS 2017 Peter Potash, Alexey Romanov, Anna Rumshisky, Mikhail Gronas

We show that on the task of predicting which side is likely to prefer a given article, a Naive Bayes classifier can achieve 90.3% accuracy looking only at the domain names of the news sources.
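A minimal scikit-learn sketch of the kind of classifier described above: a Naive Bayes model that predicts which side prefers an article using only the article's source domain as a feature. The domains and labels below are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# hypothetical training data: (source domain, which side shared it more)
domains = ["newsA.example", "newsB.example", "newsA.example", "newsC.example"]
labels  = ["pro-maidan", "anti-maidan", "pro-maidan", "anti-maidan"]

# treat each domain name as a single categorical token
clf = make_pipeline(CountVectorizer(analyzer=lambda d: [d]), MultinomialNB())
clf.fit(domains, labels)
print(clf.predict(["newsA.example", "newsC.example"]))
```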

Towards Debate Automation: a Recurrent Model for Predicting Debate Winners

no code implementations EMNLP 2017 Peter Potash, Anna Rumshisky

In this paper we introduce a practical first step towards the creation of an automated debate agent: a state-of-the-art recurrent model for predicting debate winners.

Text Generation

SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor

no code implementations SEMEVAL 2017 Peter Potash, Alexey Romanov, Anna Rumshisky

This paper describes a new shared task for humor understanding that attempts to eschew the ubiquitous binary approach to humor detection and focus on comparative humor ranking instead.

Humor Detection

Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels

no code implementations1 May 2017 Alexey Romanov, Anna Rumshisky

Learning a better representation with neural networks is a challenging problem, which has been tackled extensively from different perspectives in the past few years.

Here's My Point: Joint Pointer Architecture for Argument Mining

no code implementations EMNLP 2017 Peter Potash, Alexey Romanov, Anna Rumshisky

One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text.

Argument Mining

#HashtagWars: Learning a Sense of Humor

no code implementations9 Dec 2016 Peter Potash, Alexey Romanov, Anna Rumshisky

Our best supervised system achieved 63.7% accuracy, suggesting that this task is much more difficult than comparable humor detection tasks.

Humor Detection

Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting

no code implementations WS 2018 Peter Potash, Alexey Romanov, Anna Rumshisky

The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task.

Text Generation

Normalization of Relative and Incomplete Temporal Expressions in Clinical Narratives

no code implementations16 Oct 2015 Weiyi Sun, Anna Rumshisky, Ozlem Uzuner

We analyze the RI-TIMEXes in temporally annotated corpora and propose two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis.

General Classification, Multi-Label Classification +2
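A toy illustration of the anchoring idea in the excerpt above: a relative temporal expression such as "three days after admission" can only be normalized once it is attached to an anchor point (here, a made-up admission date) via an anchor relation. This is a worked example of the concept only, not the proposed annotation scheme or a normalization system.

```python
from datetime import date, timedelta

# hypothetical anchor point: the patient's admission date from the clinical note
admission = date(2014, 3, 2)

# relative expression "three days after admission": anchor relation = AFTER, offset = 3 days
normalized = admission + timedelta(days=3)
print(normalized.isoformat())                      # 2014-03-05
```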

Word Sense Inventories by Non-Experts

no code implementations LREC 2012 Anna Rumshisky, Nick Botchan, Sophie Kushkuley, James Pustejovsky

In this paper, we explore different strategies for implementing a crowdsourcing methodology for a single-step construction of an empirically-derived sense inventory and the corresponding sense-annotated corpus.

Word Sense Disambiguation
