ChuLo: Chunk-Level Key Information Representation for Long Document Processing

14 Oct 2024 · Yan Li, Soyeon Caren Han, Yue Dai, Feiqi Cao

Transformer-based models have achieved remarkable success in various Natural Language Processing (NLP) tasks, yet their ability to handle long documents is constrained by computational limitations. Traditional approaches, such as truncating inputs, sparse self-attention, and chunking, attempt to mitigate these issues, but they often lead to information loss and hinder the model's ability to capture long-range dependencies. In this paper, we introduce ChuLo, a novel chunk representation method for long document classification that addresses these limitations. ChuLo groups input tokens using unsupervised keyphrase extraction, emphasizing semantically important keyphrase-based chunks to retain core document content while reducing input length. This approach minimizes information loss and improves the efficiency of Transformer-based models. Preserving all tokens is particularly important in long document understanding, especially for token classification tasks, to ensure that fine-grained annotations, which depend on the entire sequence context, are not lost. We evaluate our method on multiple long document classification and long document token classification tasks, demonstrating its effectiveness through comprehensive qualitative and quantitative analyses.
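As a rough illustration of the idea, the sketch below shows keyphrase-weighted chunk pooling: tokens are grouped into fixed-size chunks, and tokens belonging to extracted keyphrases are up-weighted when each chunk is pooled into a single vector, yielding a much shorter sequence for the Transformer. This is not the authors' implementation; the function names (extract_keyphrases, chunk_representations), the key_weight parameter, and the frequency heuristic standing in for a real unsupervised keyphrase extractor are all assumptions for illustration.

```python
# Hypothetical sketch of keyphrase-weighted chunk pooling (not the paper's code).
import numpy as np

def extract_keyphrases(tokens, top_k=5):
    """Toy unsupervised keyphrase scoring: rank non-stopword tokens by
    frequency. Stands in for a real extractor such as TextRank."""
    stopwords = {"the", "a", "an", "of", "and", "to", "in", "is", "it"}
    counts = {}
    for tok in tokens:
        t = tok.lower()
        if t.isalpha() and t not in stopwords:
            counts[t] = counts.get(t, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    return set(ranked[:top_k])

def chunk_representations(tokens, embeddings, chunk_size=8, key_weight=3.0):
    """Pool each fixed-size chunk into one vector, up-weighting keyphrase
    tokens so chunks carrying key content dominate the shortened input."""
    keyphrases = extract_keyphrases(tokens)
    reps = []
    for start in range(0, len(tokens), chunk_size):
        toks = tokens[start:start + chunk_size]
        embs = embeddings[start:start + chunk_size]
        weights = np.array(
            [key_weight if t.lower() in keyphrases else 1.0 for t in toks]
        )
        # Weighted mean over the chunk's token embeddings.
        reps.append((weights[:, None] * embs).sum(axis=0) / weights.sum())
    return np.stack(reps)  # (num_chunks, dim): shorter sequence for the model

# Usage: 19 tokens with 16-dim embeddings collapse into 3 chunk vectors.
tokens = ("the court ruled the appeal on the patent claim was valid "
          "and the patent holder may enforce the claim").split()
embeddings = np.random.default_rng(0).normal(size=(len(tokens), 16))
print(chunk_representations(tokens, embeddings).shape)  # (3, 16)
```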


Results from the Paper


Task                            Dataset                       Model  Metric    Value  Rank
Named Entity Recognition        CoNLL 2012                    ChuLo  Micro F1  93.34  #1
Multilabel Text Classification  EURLEX57K                     ChuLo  Micro F1  73.32  #1
Named Entity Recognition        GUM                           ChuLo  Micro F1  95.55  #1
Document Classification        Hyperpartisan News Detection  ChuLo  Accuracy  95.38  #1
Document Classification        LUN                           ChuLo  Accuracy  64.40  #1
