Chunking
67 papers with code • 5 benchmarks • 5 datasets
Chunking, also known as shallow parsing, identifies continuous spans of tokens that form syntactic units such as noun phrases or verb phrases.
Example:
| Vinken | , | 61 | years | old |
|---|---|---|---|---|
| B-NP | I-NP | I-NP | I-NP | I-NP |
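In the BIO scheme shown above, `B-NP` marks the first token of a noun phrase and `I-NP` marks tokens inside it. Decoding such tags back into chunk spans is straightforward; the function below is a minimal sketch (not from any specific library) that groups a token/tag sequence into labeled chunks:

```python
def bio_to_chunks(tokens, tags):
    """Group tokens into (chunk_type, text) spans from BIO tags."""
    chunks = []
    current = None  # (chunk_type, [tokens]) being built
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new chunk, closing any open one.
            if current:
                chunks.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # An I- tag continues the open chunk of the same type.
            current[1].append(tok)
        else:
            # "O" (or a stray I- tag) closes any open chunk.
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(t, " ".join(ws)) for t, ws in chunks]

tokens = ["Vinken", ",", "61", "years", "old"]
tags = ["B-NP", "I-NP", "I-NP", "I-NP", "I-NP"]
print(bio_to_chunks(tokens, tags))  # [('NP', 'Vinken , 61 years old')]
```

The example sentence above therefore decodes to a single noun-phrase chunk spanning all five tokens.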
Latest papers with no code
Symmetrical SyncMap for Imbalanced General Chunking Problems
The main idea is to apply equal updates from negative and positive feedback loops by symmetrical activation.
Abstractive Summarization of Large Document Collections Using GPT
This paper proposes a method of abstractive summarization designed to scale to document collections instead of individual documents.
Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better Account for Brain Language Representations?
If so, what kind of NLU task leads a pre-trained model to better decode the information represented in the human brain?
Chunking: Forgetting Matters in Continual Learning even without Changing Tasks
Motivated by an analysis of the linear case, we show that per-chunk weight averaging improves performance in the chunking setting and that this performance transfers to the full CL setting.
Exploring RWKV for Memory Efficient and Low Latency Streaming ASR
Recently, self-attention-based transformers and conformers have been introduced as alternatives to RNNs for ASR acoustic modeling.
One ACT Play: Single Demonstration Behavior Cloning with Action Chunking Transformers
We achieve this goal by using linear transforms to augment the single demonstration, generating a set of trajectories for a wide range of initial conditions.
RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking
The grand aim of having a single robot that can manipulate arbitrary objects in diverse settings is at odds with the paucity of robotics datasets.
Chunked Lists versus Extensible Arrays for Text Inversion
In our 2017 work on in-memory list-based text inversion [Hawking and Billerbeck].
MultiSChuBERT: Effective Multimodal Fusion for Scholarly Document Quality Prediction
Using BERT$_{\textrm{BASE}}$ embeddings, on the (log) number of citations prediction task with the ACL-BiblioMetry dataset, our MultiSChuBERT (text+visual) model obtains an $R^{2}$ score of 0.454, compared to 0.432 for the SChuBERT (text only) model.
TBIN: Modeling Long Textual Behavior Data for CTR Prediction
Click-through rate (CTR) prediction plays a pivotal role in the success of recommendations.