Sentence Classification
99 papers with code • 6 benchmarks • 14 datasets
Most implemented papers
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
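A minimal sketch of using a pretrained BERT for sentence classification, assuming the Hugging Face `transformers` library; the checkpoint name and label count here are illustrative placeholders, not part of the paper.

```python
# Sketch: score one sentence with a (randomly initialized) classification head
# on top of pretrained BERT. In practice the head would be fine-tuned first.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # num_labels is task-dependent
)

inputs = tokenizer("An effective but low-budget film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()  # index of the predicted label
```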
Convolutional Neural Networks for Sentence Classification
We report on a series of experiments with convolutional neural networks (CNNs) trained on top of pre-trained word vectors for sentence-level classification tasks.
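A PyTorch sketch of the architecture this paper describes: parallel convolutions over word vectors, max-over-time pooling, dropout, and a linear classifier. Filter widths (3, 4, 5) and 100 feature maps follow the paper; the vocabulary size and embedding initialization are assumptions.

```python
# Sketch of a Kim-style text CNN. The embedding layer would normally be
# initialized from pre-trained word vectors (e.g., word2vec).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 filter_widths=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, w) for w in filter_widths
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(filter_widths), num_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # One max-over-time pooled feature vector per filter width
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))
```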
A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification
Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim 2014; Kalchbrenner 2014; Johnson 2014).
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows.
PubMed 200k RCT: a Dataset for Sequential Sentence Classification in Medical Abstracts
First, the majority of datasets for sequential short-text classification (i.e., classification of short texts that appear in sequences) are small; we hope that releasing a new large dataset will help develop more accurate algorithms for this task.
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Although much effort has recently been devoted to training high-quality sentence embeddings, we still have a poor understanding of what they are capturing.
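In the probing setup this paper uses, a simple classifier is trained on frozen sentence embeddings to predict a linguistic property (e.g., sentence length); if the probe succeeds, the embedding encodes that property. A sketch with scikit-learn, where the embeddings and property labels are stand-in random data:

```python
# Sketch of a probing classifier: a logistic regression on frozen sentence
# embeddings predicting a binned surface property. Real usage would replace
# the random arrays with actual embeddings and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))    # stand-in sentence vectors
length_bins = rng.integers(0, 6, size=1000)  # stand-in property labels

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, length_bins, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```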
Neural Networks for Joint Sentence Classification in Medical Paper Abstracts
Existing models based on artificial neural networks (ANNs) for sentence classification often do not incorporate the context in which sentences appear, and classify sentences individually.
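One common way to add that context, sketched below under assumed dimensions: encode each sentence of an abstract into a vector, then run a bidirectional LSTM over the sequence of sentence vectors so each label can depend on neighboring sentences. This is a generic contextual tagger in the spirit of the paper, not its exact model.

```python
# Sketch: joint sentence classification over an abstract. Input is a batch of
# per-sentence vectors; the bi-LSTM shares context across sentences before
# each one is labeled.
import torch.nn as nn

class JointSentenceTagger(nn.Module):
    def __init__(self, sent_dim=256, hidden=128, num_labels=5):
        super().__init__()
        self.context = nn.LSTM(sent_dim, hidden,
                               bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, sentence_vectors):  # (batch, num_sentences, sent_dim)
        contextual, _ = self.context(sentence_vectors)
        return self.out(contextual)       # one label distribution per sentence
```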
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Neural Semantic Encoders
We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders.
Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision.