Search Results for author: Alex Wang

Found 18 papers, 9 papers with code

Overview of the SustaiNLP 2020 Shared Task

no code implementations • EMNLP (sustainlp) 2020 • Alex Wang, Thomas Wolf

We describe the SustaiNLP 2020 shared task: efficient inference on the SuperGLUE benchmark (Wang et al., 2019).

QuestEval: Summarization Asks for Fact-based Evaluation

2 code implementations • EMNLP 2021 • Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang

Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments.

Question Answering
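
A hedged usage sketch for this entry: the authors released a `questeval` package, and the call below follows the project's README at the time of writing. The class path, constructor flag, and result keys are assumptions that may have changed; check the repository for the current API.

```python
# Scoring summaries with the released `questeval` package.
# Class path, `no_cuda` flag, and result keys follow the project README
# and are assumptions here; verify against the current repo.
from questeval.questeval_metric import QuestEval

questeval = QuestEval(no_cuda=True)  # CPU-only, for illustration

source = "The quick brown fox jumped over the lazy dog in the park."
summary = "A fox jumped over a dog."

score = questeval.corpus_questeval(
    hypothesis=[summary],  # candidate summaries
    sources=[source],      # corresponding source documents
)
print(score["corpus_score"], score["ex_level_scores"])
```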

Label Representations in Modeling Classification as Text Generation

no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Xinyi Chen, Jingxian Xu, Alex Wang

Several recent state-of-the-art transfer learning methods model classification tasks as text generation, where labels are represented as strings for the model to generate.

Classification, Text Classification +2
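
A minimal sketch of the classification-as-generation setup the abstract describes, assuming a generic seq2seq model (T5 here) and an illustrative label-string mapping; neither is necessarily the paper's exact configuration.

```python
# Classification as text generation: the model emits a label *string*,
# which is mapped back to a class index. Model choice, prompt format,
# and label strings are illustrative, not the paper's exact setup.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

label_strings = {"positive": 0, "negative": 1}  # hypothetical mapping

text = "sst2 sentence: the film is a delight from start to finish"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=4)
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()

# Fall back to -1 when the generated string is not a known label.
predicted_class = label_strings.get(decoded, -1)
print(decoded, predicted_class)
```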

Asking and Answering Questions to Evaluate the Factual Consistency of Summaries

2 code implementations • ACL 2020 • Alex Wang, Kyunghyun Cho, Mike Lewis

QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source.

Abstractive Text Summarization
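
The QAGS recipe is simple enough to sketch: generate questions about the summary, answer them against both the summary and the source, and compare the answers. In the sketch below, `generate_questions` and `answer` are hypothetical stand-ins for learned models, and token-level F1 is one reasonable comparison; the paper's exact components differ.

```python
# QAGS-style factual-consistency score. `generate_questions` and `answer`
# are hypothetical stand-ins for a question-generation model and a QA
# model; the paper uses learned models for both.
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between two answer strings (SQuAD-style)."""
    p, g = pred.lower().split(), gold.lower().split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def qags_score(summary: str, source: str, generate_questions, answer) -> float:
    """Average answer agreement: high when the summary and the source
    give similar answers to questions asked about the summary."""
    questions = generate_questions(summary)
    scores = [
        token_f1(answer(q, summary), answer(q, source)) for q in questions
    ]
    return sum(scores) / len(scores) if scores else 0.0
```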

A Generalized Framework of Sequence Generation with Application to Undirected Sequence Models

1 code implementation • 29 May 2019 • Elman Mansimov, Alex Wang, Sean Welleck, Kyunghyun Cho

We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models.

Language understanding, Machine Translation +4
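
One concrete instance of decoding from an undirected model is iterative refinement with a masked LM: start from all masks and repeatedly fill in positions. The sketch below uses BERT with a most-confident-first, one-position-per-step schedule as a simplifying assumption; the paper generalizes over a whole family of such position-selection and refinement strategies.

```python
# Iterative mask-filling with a masked LM (BERT). The fixed length and
# one-position-per-step schedule are simplifying assumptions; the paper
# studies a general family of position-selection strategies.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

length = 8
ids = torch.full((1, length), tokenizer.mask_token_id)
ids[0, 0], ids[0, -1] = tokenizer.cls_token_id, tokenizer.sep_token_id
filled = {0, length - 1}

with torch.no_grad():
    while len(filled) < length:
        logits = model(input_ids=ids).logits[0]
        conf, tokens = logits.softmax(-1).max(-1)
        conf[list(filled)] = -1.0  # only consider still-masked positions
        pos = int(conf.argmax())   # fill the most confident position next
        ids[0, pos] = tokens[pos]
        filled.add(pos)

print(tokenizer.decode(ids[0]))
```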

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

2 code implementations • NeurIPS 2019 • Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks.

Language understanding, Transfer Learning
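
For a quick look at the benchmark's tasks, the Hugging Face `datasets` hub carries a mirror under the `super_glue` name; that access path is an assumption about your environment, since the canonical data lives at super.gluebenchmark.com.

```python
# Load one SuperGLUE task (BoolQ) via the Hugging Face hub mirror.
# The "super_glue" dataset name is an assumption; the canonical data
# and leaderboard live at super.gluebenchmark.com.
from datasets import load_dataset

boolq = load_dataset("super_glue", "boolq")
print(boolq)               # train / validation / test splits
print(boolq["train"][0])   # {'question': ..., 'passage': ..., 'label': ...}
```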

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations • ICLR 2019 • Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Language Modelling

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models; CCG supertagging and NLI pretraining perform comparably.

CCG Supertagging, Language Modelling +1

On Measuring Social Biases in Sentence Encoders

1 code implementation • NAACL 2019 • Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger

The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017).

Word Embeddings
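
The underlying WEAT statistic (Caliskan et al., 2017), which this paper extends to sentence encoders, is easy to write down: for target sets X, Y and attribute sets A, B, compare mean cosine similarities and normalize by a pooled standard deviation. A minimal numpy sketch, assuming the embeddings are already given as arrays:

```python
# WEAT effect size (Caliskan et al., 2017) over given embedding arrays.
# X, Y: target word/sentence embeddings; A, B: attribute embeddings.
import numpy as np

def _cos(u, M):
    """Cosine similarity of vector u with each row of matrix M."""
    return (M @ u) / (np.linalg.norm(M, axis=1) * np.linalg.norm(u))

def weat_effect_size(X, Y, A, B):
    """Effect size d: difference in mean association of X vs. Y with
    attributes A vs. B, normalized by the pooled std. deviation."""
    s = lambda w: _cos(w, A).mean() - _cos(w, B).mean()
    sX = np.array([s(x) for x in X])
    sY = np.array([s(y) for y in Y])
    pooled = np.concatenate([sX, sY])
    return (sX.mean() - sY.mean()) / pooled.std(ddof=1)
```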

Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling

no code implementations • ACL 2019 • Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman

Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling.

Fine-tuning, Language Modelling +3

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

6 code implementations • WS 2018 • Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset.

Language understanding, Natural Language Inference +2
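
GLUE's per-task metrics can be computed programmatically; the sketch below uses the Hugging Face `evaluate` mirror of those metrics, which is an assumption about your tooling, and official leaderboard scores still require a submission to gluebenchmark.com.

```python
# Score predictions on a GLUE task (MRPC) with the `evaluate` library.
# This mirrors the benchmark's per-task metrics; leaderboard scores
# still require a submission to gluebenchmark.com.
from evaluate import load

metric = load("glue", "mrpc")
result = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': ..., 'f1': ...}
```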

Clustering Stable Instances of Euclidean k-means

no code implementations • 4 Dec 2017 • Abhratanu Dutta, Aravindan Vijayaraghavan, Alex Wang

We design efficient algorithms that provably recover the optimal clustering for instances that are additive perturbation stable.

Clustering Stable Instances of Euclidean k-means.

no code implementations • NeurIPS 2017 • Aravindan Vijayaraghavan, Abhratanu Dutta, Alex Wang

To address this disconnect, we study the following question: what properties of real-world instances will enable us to design efficient algorithms and prove guarantees for finding the optimal clustering?
