Search Results for author: Emma Strubell

Found 21 papers, 10 with code

Comparing Span Extraction Methods for Semantic Role Labeling

1 code implementation ACL (SPNLP) 2021 Zhisong Zhang, Emma Strubell, Eduard Hovy

In this work, we empirically compare span extraction methods for the task of semantic role labeling (SRL).

Semantic Role Labeling

On the Benefit of Syntactic Supervision for Cross-lingual Transfer in Semantic Role Labeling

1 code implementation EMNLP 2021 Zhisong Zhang, Emma Strubell, Eduard Hovy

Although recent developments in neural architectures and pre-trained representations have greatly increased state-of-the-art model performance on fully-supervised semantic role labeling (SRL), the task remains challenging for languages where supervised SRL training data are not abundant.

Benchmark Cross-Lingual Transfer +1

Measuring the Carbon Intensity of AI in Cloud Instances

no code implementations 10 Jun 2022 Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan

By providing unprecedented access to computational resources, cloud computing has enabled rapid growth in technologies such as machine learning, the computational demands of which incur a high energy cost and a commensurate carbon footprint.

Computer Vision Language Modelling +1
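
The accounting the title refers to reduces to simple arithmetic: a workload's operational emissions are its energy use multiplied by the carbon intensity of the grid powering the cloud region where it runs. A minimal sketch, with function names and example figures that are illustrative rather than taken from the paper:

```python
def operational_emissions_gco2(energy_kwh: float,
                               grid_intensity_gco2_per_kwh: float) -> float:
    """Grams of CO2 attributable to a workload's electricity use."""
    return energy_kwh * grid_intensity_gco2_per_kwh

# e.g. a 100 kWh training run in a region averaging 400 gCO2/kWh:
print(operational_emissions_gco2(100.0, 400.0))  # 40000.0 g = 40 kg CO2
```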

Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models

no code implementations 25 May 2022 Clara Na, Sanket Vaibhav Mehta, Emma Strubell

Model compression by way of parameter pruning, quantization, or distillation has recently gained popularity as an approach for reducing the computational requirements of modern deep neural network models for NLP.

Model Compression Quantization
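
The abstract does not restate the SAM procedure the title alludes to; below is a minimal PyTorch sketch of the underlying sharpness-aware minimization update (Foret et al., 2021) that the paper builds on. Helper names are illustrative, not from the paper's code:

```python
import torch

def sam_step(model, loss_fn, inputs, targets, optimizer, rho=0.05):
    """One SAM step: perturb weights toward the locally worst-case
    direction, then descend using the gradient at the perturbed point."""
    optimizer.zero_grad()

    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.clone() for p in params]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))

    # Climb to the worst-case point w + rho * g / ||g||.
    eps = [rho * g / (grad_norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # Second pass: gradient measured at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights, then apply the SAM gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```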

An Empirical Investigation of the Role of Pre-training in Lifelong Learning

1 code implementation 16 Dec 2021 Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell

We investigate existing methods in the context of large, pre-trained models and evaluate their performance on a variety of text and image classification tasks, including a large-scale study using a novel dataset of 15 diverse NLP tasks.

Continual Learning Image Classification

Unsupervised Domain Adaptation Via Pseudo-labels And Objectness Constraints

no code implementations 29 Sep 2021 Rajshekhar Das, Jonathan Francis, Sanket Vaibhav Mehta, Jean Oh, Emma Strubell, Jose Moura

Crucially, the objectness constraint is agnostic to the ground-truth semantic segmentation labels and, therefore, remains appropriate for unsupervised adaptation settings.

Semantic Segmentation Superpixels +1

End-to-end Quantized Training via Log-Barrier Extensions

no code implementations 1 Jan 2021 Juncheng B Li, Shuhui Qu, Xinjian Li, Emma Strubell, Florian Metze

Quantization of neural network parameters and activations has emerged as a successful approach to reducing the model size and inference time on hardware that supports native low-precision arithmetic.

Quantization
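
The log-barrier extension itself is not described in this snippet; as background, here is a generic fake-quantization layer with a straight-through estimator, a standard baseline for quantized training (this is not the paper's method):

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Uniform fake quantization with a straight-through estimator:
    round in the forward pass, pass gradients through unchanged."""

    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through: identity gradient

# Usage inside a forward pass: w_q = FakeQuantSTE.apply(weight)
```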

Energy and Policy Considerations for Deep Learning in NLP

3 code implementations ACL 2019 Emma Strubell, Ananya Ganesh, Andrew McCallum

Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data.

Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL?

no code implementations WS 2018 Emma Strubell, Andrew McCallum

Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)?

Semantic Role Labeling Word Embeddings

Linguistically-Informed Self-Attention for Semantic Role Labeling

1 code implementation EMNLP 2018 Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum

Unlike previous models which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection and role labeling for all predicates.

Dependency Parsing Multi-Task Learning +4
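
The mechanism hinted at here is one self-attention head supervised to attend to each token's syntactic parent, so parsing falls out of the same encoder pass as role labeling. A hedged sketch of such an auxiliary loss, with variable names that are assumptions rather than the paper's code:

```python
import torch
import torch.nn.functional as F

def syntax_head_loss(attn_scores, head_indices):
    """Auxiliary loss nudging one attention head toward each token's
    syntactic parent. `attn_scores` are pre-softmax logits
    [batch, seq, seq] from the designated head; `head_indices`
    [batch, seq] give each token's parent position."""
    batch, seq, _ = attn_scores.shape
    return F.cross_entropy(attn_scores.reshape(batch * seq, seq),
                           head_indices.reshape(batch * seq))

# Added to the main SRL objective, this trains parsing and role
# labeling jointly without separate pre-processing of parse features.
```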

Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction

1 code implementation NAACL 2018 Patrick Verga, Emma Strubell, Andrew McCallum

Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention.

Relation Extraction

Automatically Extracting Action Graphs from Materials Science Synthesis Procedures

no code implementations 18 Nov 2017 Sheshera Mysore, Edward Kim, Emma Strubell, Ao Liu, Haw-Shiuan Chang, Srikrishna Kompella, Kevin Huang, Andrew McCallum, Elsa Olivetti

In this work, we present a system for automatically extracting structured representations of synthesis procedures from the texts of materials science journal articles that describe explicit, experimental syntheses of inorganic compounds.

Attending to All Mention Pairs for Full Abstract Biological Relation Extraction

no code implementations 23 Oct 2017 Patrick Verga, Emma Strubell, Ofer Shai, Andrew McCallum

We propose a model to consider all mention and entity pairs simultaneously in order to make a prediction.

Relation Extraction

Dependency Parsing with Dilated Iterated Graph CNNs

no code implementations WS 2017 Emma Strubell, Andrew McCallum

Dependency parses are an effective way to inject linguistic knowledge into many downstream tasks, and many practitioners wish to efficiently parse sentences at scale.

Benchmark Dependency Parsing

Fast and Accurate Entity Recognition with Iterated Dilated Convolutions

4 code implementations EMNLP 2017 Emma Strubell, Patrick Verga, David Belanger, Andrew McCallum

Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.

NER Structured Prediction
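
The speed-up comes from replacing recurrence with stacked dilated convolutions whose receptive field grows exponentially with depth, applied iteratively with shared weights. A minimal PyTorch sketch with illustrative layer sizes:

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """A stack of 1-D convolutions with exponentially increasing
    dilation, the building block of an iterated dilated CNN tagger."""

    def __init__(self, dim=128, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):  # x: [batch, dim, seq_len]
        for conv in self.convs:
            x = torch.relu(conv(x))
        return x

# "Iterated": the same block is applied several times with shared
# weights, widening context without adding parameters.
block = DilatedBlock()
h = torch.randn(2, 128, 50)
for _ in range(3):
    h = block(h)
```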

Multilingual Relation Extraction using Compositional Universal Schema

1 code implementation NAACL 2016 Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew McCallum

In response, this paper introduces significant further improvements to the coverage and flexibility of universal schema relation extraction: predictions for entities unseen in training and multilingual transfer learning to domains with no annotation.

Relation Extraction Slot Filling +2

Learning Dynamic Feature Selection for Fast Sequential Prediction

no code implementations IJCNLP 2015 Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum

We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components.

feature selection named-entity-recognition +4

Training for Fast Sequential Prediction Using Dynamic Feature Selection

no code implementations 30 Oct 2014 Emma Strubell, Luke Vilnis, Andrew McCallum

We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components.

feature selection Part-Of-Speech Tagging
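
Both versions of this work center on pruning the dot-product computation itself. A simplified sketch of the early-stopping intuition, where the feature ordering and stopping rule are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def staged_score(weight_groups, feature_groups, margin=1.0):
    """Accumulate a linear classifier's dot product over feature groups
    (ordered by informativeness) and stop as soon as the running score
    clears a confidence margin, skipping the remaining features."""
    score = 0.0
    for w, f in zip(weight_groups, feature_groups):
        score += float(np.dot(w, f))
        if abs(score) > margin:  # decision is already confident
            break
    return score
```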
