Search Results for author: Michael Bloodgood

Found 27 papers, 0 papers with code

Support Vector Machine Active Learning Algorithms with Query-by-Committee versus Closest-to-Hyperplane Selection

no code implementations 24 Jan 2018 Michael Bloodgood

This paper investigates and evaluates support vector machine active learning algorithms for use with imbalanced datasets, which commonly arise in applications such as information extraction.

Active Learning, General Classification, +3
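
The title above names the two selection strategies being compared. As a point of reference, a minimal sketch of closest-to-hyperplane selection follows; it assumes scikit-learn's LinearSVC, and the function and parameter names are illustrative rather than taken from the paper.

```python
# Hedged sketch: closest-to-hyperplane selection for SVM active learning.
# Assumes a trained linear SVM; names and batch size are illustrative.
import numpy as np
from sklearn.svm import LinearSVC

def closest_to_hyperplane(model: LinearSVC, X_unlabeled: np.ndarray, batch_size: int) -> np.ndarray:
    """Return indices of the unlabeled points nearest the SVM decision boundary."""
    # For a linear SVM, |decision_function| is proportional to the distance
    # from the separating hyperplane, so small values mean high uncertainty.
    margins = np.abs(model.decision_function(X_unlabeled))
    return np.argsort(margins)[:batch_size]

# Illustrative usage:
# model = LinearSVC().fit(X_labeled, y_labeled)
# query_idx = closest_to_hyperplane(model, X_pool, batch_size=20)
```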

Impact of Batch Size on Stopping Active Learning for Text Classification

no code implementations 24 Jan 2018 Garrett Beatty, Ethan Kochis, Michael Bloodgood

When using active learning, smaller batch sizes typically yield greater learning efficiency.

Active Learning, General Classification, +3
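
A complementary sketch of the surrounding pool-based active learning loop, showing where the batch size enters; the selection step reuses the closest-to-hyperplane idea sketched above, and all names, defaults, and the choice of LinearSVC are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch of a pool-based active learning loop with a batch-size knob.
import numpy as np
from sklearn.svm import LinearSVC

def run_active_learning(X_pool, y_oracle, seed_idx, batch_size=20, rounds=50):
    labeled = list(seed_idx)
    model = None
    for _ in range(rounds):
        model = LinearSVC().fit(X_pool[labeled], y_oracle[labeled])
        unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
        if len(unlabeled) == 0:
            break
        # A smaller batch_size means more frequent retraining between queries,
        # which tends to be more annotation-efficient but more expensive to run.
        margins = np.abs(model.decision_function(X_pool[unlabeled]))
        labeled.extend(unlabeled[np.argsort(margins)[:batch_size]].tolist())
    return model, labeled
```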

Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords

no code implementations WS 2017 Michael Bloodgood, Benjamin Strauss

With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments.

Machine Translation, Translation

Using Global Constraints and Reranking to Improve Cognates Detection

no code implementations ACL 2017 Michael Bloodgood, Benjamin Strauss

Global constraints and reranking have not been used in cognates detection research to date.

Filtering Tweets for Social Unrest

no code implementations 20 Feb 2017 Alan Mishler, Kevin Wonus, Wendy Chambers, Michael Bloodgood

Since the events of the Arab Spring, there has been increased interest in using social media to anticipate social unrest.

Translation Memory Retrieval Methods

no code implementations EACL 2014 Michael Bloodgood, Benjamin Strauss

Although detailed accounts of the matching algorithms used in commercial systems cannot be found in the literature, it is widely believed that edit distance algorithms are used.

Retrieval, Translation
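
Since the excerpt above points to edit distance as the presumed baseline, here is a minimal sketch of edit-distance-based translation memory retrieval; the token-level distance, data format, and function names are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: rank translation-memory entries by word-level edit distance.
def edit_distance(a: list[str], b: list[str]) -> int:
    """Levenshtein distance over token sequences (standard dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, start=1):
        curr = [i]
        for j, tok_b in enumerate(b, start=1):
            cost = 0 if tok_a == tok_b else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def best_tm_match(query: str, memory: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the (source, target) entry whose source side is closest to the query."""
    q = query.split()
    return min(memory, key=lambda entry: edit_distance(q, entry[0].split()))

# Illustrative usage:
# tm = [("press the red button", "appuyez sur le bouton rouge")]
# best_tm_match("press the blue button", tm)
```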

Analysis of Stopping Active Learning based on Stabilizing Predictions

no code implementations WS 2013 Michael Bloodgood, John Grothendieck

Specifically, if the Kappa agreement between two models exceeds a threshold $T$ (where $T > 0$), then the difference in F-measure performance between those models is bounded above by $\frac{4(1-T)}{T}$ in all cases.

Active Learning
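
To make the quoted bound concrete, here is a minimal sketch of the stabilizing-predictions idea as described in the excerpt: measure Kappa agreement between successive models' predictions on a stop set and relate the threshold to the $\frac{4(1-T)}{T}$ bound. The threshold value and function names are illustrative assumptions; details such as windowing over several model pairs are omitted.

```python
# Hedged sketch: agreement-based stopping and the quoted F-measure bound.
from sklearn.metrics import cohen_kappa_score

def should_stop(prev_preds, curr_preds, threshold=0.99):
    """Stop when Kappa agreement between consecutive models exceeds the threshold."""
    return cohen_kappa_score(prev_preds, curr_preds) > threshold

def f_measure_bound(T: float) -> float:
    """Upper bound 4(1-T)/T on the F-measure difference when Kappa exceeds T."""
    return 4 * (1 - T) / T

# For example, f_measure_bound(0.99) is roughly 0.04: two models agreeing at
# Kappa > 0.99 can differ in F-measure by at most about four points.
```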

Use of Modality and Negation in Semantically-Informed Syntactic MT

no code implementations 5 Feb 2015 Kathryn Baker, Michael Bloodgood, Bonnie J. Dorr, Chris Callison-Burch, Nathaniel W. Filardo, Christine Piatko, Lori Levin, Scott Miller

We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations.

Machine Translation, Negation, +1

Annotating Cognates and Etymological Origin in Turkic Languages

no code implementations 13 Jan 2015 Benjamin S. Mericli, Michael Bloodgood

Our method strives to balance the amount of research effort the annotator expends with the utility of the annotations for supporting research on improving automated translation lexicon induction.

Translation

Rapid Adaptation of POS Tagging for Domain Specific Uses

no code implementations 31 Oct 2014 John E. Miller, Michael Bloodgood, Manabu Torii, K. Vijay-Shanker

Part-of-speech (POS) tagging is a fundamental component for performing natural language tasks such as parsing, information extraction, and question answering.

Part-Of-Speech Tagging, POS, +2

Detecting Structural Irregularity in Electronic Dictionaries Using Language Modeling

no code implementations 29 Oct 2014 Paul Rodrigues, David Zajic, David Doermann, Michael Bloodgood, Peng Ye

Dictionaries are often developed using tools that save to Extensible Markup Language (XML)-based standards.

Language Modelling

Correcting Errors in Digital Lexicographic Resources Using a Dictionary Manipulation Language

no code implementations 28 Oct 2014 David Zajic, Michael Maxwell, David Doermann, Paul Rodrigues, Michael Bloodgood

We describe a paradigm for combining manual and automatic error correction of noisy structured lexicographic data.

Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation

no code implementations 21 Oct 2014 Michael Bloodgood, Chris Callison-Burch

We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources.

Active Learning, Machine Translation, +1

A Modality Lexicon and its use in Automatic Tagging

no code implementations 17 Oct 2014 Kathryn Baker, Michael Bloodgood, Bonnie J. Dorr, Nathaniel W. Filardo, Lori Levin, Christine Piatko

Specifically, we describe the construction of a modality annotation scheme, a modality lexicon, and two automated modality taggers that were built using the lexicon and annotation scheme.

Machine Translation, Translation

Semantically-Informed Syntactic Machine Translation: A Tree-Grafting Approach

no code implementations 24 Sep 2014 Kathryn Baker, Michael Bloodgood, Chris Callison-Burch, Bonnie J. Dorr, Nathaniel W. Filardo, Lori Levin, Scott Miller, Christine Piatko

We describe a unified and coherent syntactic framework for supporting a semantically-informed syntactic approach to statistical machine translation.

Machine Translation, Translation

A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping

no code implementations 17 Sep 2014 Michael Bloodgood, K. Vijay-Shanker

A survey of existing methods for stopping active learning (AL) reveals the need for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets.

Active Learning

An Approach to Reducing Annotation Costs for BioNLP

no code implementations 12 Sep 2014 Michael Bloodgood, K. Vijay-Shanker

There is a broad range of BioNLP tasks for which active learning (AL) can significantly reduce annotation costs, and we have developed a specific AL algorithm that is particularly effective at reducing those costs.

Active Learning, Binary Classification, +1

Stopping Active Learning based on Predicted Change of F Measure for Text Classification

no code implementations 26 Jan 2019 Michael Altschuler, Michael Bloodgood

During active learning, an effective stopping method allows users to limit the number of annotations and thereby keep annotation costs down.

Active Learning, General Classification, +2

Early Forecasting of Text Classification Accuracy and F-Measure with Active Learning

no code implementations 20 Jan 2020 Thomas Orth, Michael Bloodgood

An important capability for improving the utility of stopping methods is to effectively forecast the performance of the text classification models.

Active Learning, General Classification, +2

Impact of Stop Sets on Stopping Active Learning for Text Classification

no code implementations 8 Jan 2022 Luke Kurlandski, Michael Bloodgood

This paper shows that the choice of stop set can have a significant impact on the performance of stopping methods, and that the impact differs between stability-based and confidence-based methods.

Active Learning, Text Classification, +1
