Semantic Role Labeling
136 papers with code • 7 benchmarks • 14 datasets
Semantic role labeling (SRL) aims to model the predicate-argument structure of a sentence and is often described as answering "Who did what to whom?". BIO notation is typically used to mark argument spans in semantic role labeling.
Example:
| Housing | starts | are | expected | to | quicken | a | bit | from | August’s | pace |
|---|---|---|---|---|---|---|---|---|---|---|
| B-ARG1 | I-ARG1 | O | O | O | V | B-ARG2 | I-ARG2 | B-ARG3 | I-ARG3 | I-ARG3 |
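The BIO decoding step can be sketched in a few lines: a minimal, illustrative function (not taken from any of the papers below) that groups BIO tags into labeled argument spans, using the example sentence and tags from the table above.

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags into (label, start, end) spans; 'end' is exclusive."""
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        # Close the open span on O, V, a new B- tag, or a mismatched I- tag.
        if label is not None and (
            tag in ("O", "V") or tag.startswith("B-") or tag[2:] != label
        ):
            spans.append((label, start, i))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        elif tag == "V":  # the predicate itself is a single-token span
            spans.append(("V", i, i + 1))
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

tokens = ["Housing", "starts", "are", "expected", "to", "quicken",
          "a", "bit", "from", "August's", "pace"]
tags = ["B-ARG1", "I-ARG1", "O", "O", "O", "V",
        "B-ARG2", "I-ARG2", "B-ARG3", "I-ARG3", "I-ARG3"]

for label, s, e in bio_to_spans(tokens, tags):
    print(label, " ".join(tokens[s:e]))
# ARG1 Housing starts
# V quicken
# ARG2 a bit
# ARG3 from August's pace
```

Real SRL systems predict one such tag sequence per predicate, so a sentence with several verbs yields several BIO rows over the same tokens.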
Most implemented papers
Deep contextualized word representations
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
The Natural Language Decathlon: Multitask Learning as Question Answering
Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting.
An Incremental Parser for Abstract Meaning Representation
We describe a transition-based parser for AMR that parses sentences left-to-right, in linear time.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web.
Large-Scale QA-SRL Parsing
We present a new large-scale corpus of Question-Answer driven Semantic Role Labeling (QA-SRL) annotations, and the first high-quality QA-SRL parser.
LINSPECTOR: Multilingual Probing Tasks for Word Representations
We present a reusable methodology for creation and evaluation of such tests in a multilingual setting.
Simple BERT Models for Relation Extraction and Semantic Role Labeling
We present simple BERT-based models for relation extraction and semantic role labeling.
Generalizing Natural Language Analysis through Span-relation Representations
Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.
A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling
However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset.