Search Results for author: Rebecca Sharp

Found 21 papers, 4 papers with code

Rule Based Event Extraction for Artificial Social Intelligence

no code implementations · PANDL (COLING) 2022 · Remo Nitschke, Yuwei Wang, Chen Chen, Adarsh Pyarelal, Rebecca Sharp

Natural language (as opposed to structured communication modes such as Morse code) is by far the most common mode of communication between humans, and can thus provide significant insight into both individual mental states and interpersonal dynamics.

Event Extraction

A Human-machine Interface for Few-shot Rule Synthesis for Information Extraction

no code implementations · NAACL (ACL) 2022 · Robert Vacareanu, George C.G. Barbosa, Enrique Noriega-Atala, Gus Hahn-Powell, Rebecca Sharp, Marco A. Valenzuela-Escárcega, Mihai Surdeanu

We propose a system that assists a user, through program synthesis, in constructing transparent information extraction models consisting of patterns (or rules) written in a declarative language. Users of our system specify their requirements through examples, which are collected with a search interface. The rule-synthesis system proposes rule candidates along with the results of applying them to a textual corpus; the user can accept a candidate, request another option, or adjust the examples provided to the system. Through an interactive evaluation, we show that our approach generates high-precision rules even in a 1-shot setting.

Relation Extraction

Taxonomy Builder: a Data-driven and User-centric Tool for Streamlining Taxonomy Construction

no code implementations · NAACL (HCINLP) 2022 · Mihai Surdeanu, John Hungerford, Yee Seng Chan, Jessica MacBride, Benjamin Gyori, Andrew Zupon, Zheng Tang, Haoling Qiu, Bonan Min, Yan Zverev, Caitlin Hilverman, Max Thomas, Walter Andrews, Keith Alcock, Zeyu Zhang, Michael Reynolds, Steven Bethard, Rebecca Sharp, Egoitz Laparra

Approaches to information extraction often assume an existing domain taxonomy for normalizing content, yet in real-world scenarios there is often none. When one does exist, it must be continually extended as information needs shift.

Pretrained Language Models · Text Summarization

An Unsupervised Method for Learning Representations of Multi-word Expressions for Semantic Classification

no code implementations · COLING 2020 · Robert Vacareanu, Marco A. Valenzuela-Escárcega, Rebecca Sharp, Mihai Surdeanu

This paper explores an unsupervised approach to learning a compositional representation function for multi-word expressions (MWEs), and evaluates it on the Tratz dataset, which associates two-word expressions with the semantic relation between the compound constituents (e.g., the label employer is associated with the noun compound government agency) (Tratz, 2011).
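As an illustrative sketch only: the simplest possible compositional function for a two-word expression combines the constituent word vectors elementwise. The paper learns its composition function without supervision; the elementwise average, the function name, and the toy vectors below are placeholder assumptions, not the authors' method.

```python
# Toy compositional function for a two-word MWE: elementwise average of
# the constituent word vectors. A learned function would replace this.

def compose(vec_a, vec_b):
    """Compose two word vectors into a single MWE representation (average)."""
    return [(a + b) / 2 for a, b in zip(vec_a, vec_b)]

# Placeholder vectors for the constituents of "government agency".
government = [2, 6, -4]
agency = [4, 0, 2]
print(compose(government, agency))  # [3.0, 3.0, -1.0]
```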

MathAlign: Linking Formula Identifiers to their Contextual Natural Language Descriptions

no code implementations · LREC 2020 · Maria Alexeeva, Rebecca Sharp, Marco A. Valenzuela-Escárcega, Jennifer Kadowaki, Adarsh Pyarelal, Clayton Morrison

Extending machine reading approaches to extract mathematical concepts and their descriptions is useful for a variety of tasks, ranging from mathematical information retrieval to increasing accessibility of scientific documents for the visually impaired.

Information Retrieval · Reading Comprehension +1

AutoMATES: Automated Model Assembly from Text, Equations, and Software

1 code implementation · 21 Jan 2020 · Adarsh Pyarelal, Marco A. Valenzuela-Escárcega, Rebecca Sharp, Paul D. Hein, Jon Stephens, Pratik Bhandari, HeuiChan Lim, Saumya Debray, Clayton T. Morrison

Models of complicated systems can be represented in different ways: in scientific papers, for example, they are represented using natural language text as well as equations.

On the Importance of Delexicalization for Fact Verification

no code implementations IJCNLP 2019 Sandeep Suntwal, Mithun Paul, Rebecca Sharp, Mihai Surdeanu

As expected, even though this method achieves high accuracy when evaluated in the same domain, the performance in the target domain is poor, marginally above chance. To mitigate this dependence on lexicalized information, we experiment with several strategies for masking out names by replacing them with their semantic category, coupled with a unique identifier to mark that the same or new entities are referenced between claim and evidence.
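The masking strategy described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: entity mentions are replaced by their semantic category plus a unique identifier, so that claim and evidence share consistent markers for co-referring entities. The function name and category labels are assumptions.

```python
# Illustrative delexicalization: replace each entity mention with its
# semantic category plus a per-category identifier (e.g. LOCATION-C2),
# so repeated mentions of the same entity receive the same marker.

def delexicalize(tokens, entity_types):
    """Mask entity tokens with CATEGORY-C<id> placeholders.

    entity_types: mapping from surface form to semantic category.
    The same surface form always receives the same identifier.
    """
    ids = {}
    out = []
    for tok in tokens:
        if tok in entity_types:
            category = entity_types[tok]
            key = (tok, category)
            if key not in ids:
                # Next free identifier within this semantic category.
                ids[key] = len([k for k in ids if k[1] == category]) + 1
            out.append(f"{category}-C{ids[key]}")
        else:
            out.append(tok)
    return out

claim = ["Palestinians", "recognize", "Texas", "as", "part", "of", "Mexico"]
types = {"Palestinians": "PERSON", "Texas": "LOCATION", "Mexico": "LOCATION"}
print(delexicalize(claim, types))
# ['PERSON-C1', 'recognize', 'LOCATION-C1', 'as', 'part', 'of', 'LOCATION-C2']
```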

Fact Verification · Natural Language Inference +2

Semi-Supervised Teacher-Student Architecture for Relation Extraction

no code implementations · WS 2019 · Fan Luo, Ajay Nagesh, Rebecca Sharp, Mihai Surdeanu

Generating a large amount of training data for information extraction (IE) is either costly (if annotations are created manually), or runs the risk of introducing noisy instances (if distant supervision is used).

Binary Relation Extraction · Denoising

A mostly unlexicalized model for recognizing textual entailment

no code implementations · WS 2018 · Mithun Paul, Rebecca Sharp, Mihai Surdeanu

For example, such a system trained in the news domain may learn that a sentence like "Palestinians recognize Texas as part of Mexico" tends to be unsupported, but this fact (and its corresponding lexicalized cues) has no value in, say, a scientific domain.

Fake News Detection · Information Retrieval +3

Deep Affix Features Improve Neural Named Entity Recognizers

1 code implementation · SEMEVAL 2018 · Vikas Yadav, Rebecca Sharp, Steven Bethard

We propose a practical model for named entity recognition (NER) that combines word and character-level information with a specific learned representation of the prefixes and suffixes of the word.
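A minimal sketch (not the authors' implementation) of the affix idea: map each word to its k-character prefix and suffix, which would then be embedded alongside the word- and character-level features in the NER model. The function name, the padding convention, and k=3 are assumptions for illustration.

```python
# Extract fixed-length prefix and suffix strings for a word; short words
# are padded with "_" so every word yields affixes of exactly k characters.

def affix_features(word, k=3):
    """Return the k-character prefix and suffix of a word (padded if short)."""
    prefix = word.lower().ljust(k, "_")[:k]
    suffix = word.lower().rjust(k, "_")[-k:]
    return prefix, suffix

print(affix_features("unhappiness"))  # ('unh', 'ess')
print(affix_features("go"))           # ('go_', '_go')
```

In a full model, each distinct prefix and suffix string would index a learned embedding table, just as words and characters do.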

Feature Engineering · Morphological Analysis +2

Framing QA as Building and Ranking Intersentence Answer Justifications

no code implementations · CL 2017 · Peter Jansen, Rebecca Sharp, Mihai Surdeanu, Peter Clark

Our best configuration answers 44% of the questions correctly, where the top justifications for 57% of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer.

Multiple-choice Question Answering

Creating Causal Embeddings for Question Answering with Minimal Supervision

no code implementations · EMNLP 2016 · Rebecca Sharp, Mihai Surdeanu, Peter Jansen, Peter Clark, Michael Hammond

We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings.

Question Answering · Word Embeddings
