Long-range modeling

51 papers with code • 2 benchmarks • 4 datasets

A task for testing the long-sequence modeling capabilities and the efficiency of language models.

Image credit: SCROLLS: Standardized CompaRison Over Long Language Sequences



Most implemented papers

Efficiently Modeling Long Sequences with Structured State Spaces

hazyresearch/state-spaces ICLR 2022

A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies.
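At the core of S4-style layers is a discretized linear state space recurrence. The sketch below is a minimal illustration of that recurrence only; the matrix shapes, the toy transition matrix, and the random readout are assumptions, not the paper's actual parameterization.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run the linear SSM recurrence x_k = A x_{k-1} + B u_k, y_k = C x_k
    over a 1-D input sequence u (toy sketch, not the S4 parameterization)."""
    n = A.shape[0]
    x = np.zeros(n)
    ys = []
    for u_k in u:
        x = A @ x + B * u_k   # update hidden state with the next input
        ys.append(C @ x)      # read out a scalar output
    return np.array(ys)

rng = np.random.default_rng(0)
n = 4
A = np.eye(n) * 0.9            # stable transition matrix (toy choice)
B = rng.standard_normal(n)
C = rng.standard_normal(n)
u = np.sin(np.linspace(0.0, 3.0, 16))
y = ssm_scan(A, B, C, u)
print(y.shape)  # (16,)
```

S4's contribution is making this recurrence both expressive (via a structured initialization of A) and fast to compute over long sequences, which the naive loop above is not.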

Mega: Moving Average Equipped Gated Attention

facebookresearch/mega 21 Sep 2022

The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences.
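Mega addresses the weak-inductive-bias point by placing a damped exponential moving average (EMA) before gated attention. The one-dimensional EMA below is a heavily simplified sketch of that local-bias component; the fixed scalar damping factor is an assumption, and the paper's multi-dimensional damped EMA and gating are omitted.

```python
import numpy as np

def ema(x, alpha=0.3):
    """y_t = alpha * x_t + (1 - alpha) * y_{t-1}: a cheap, strongly local
    smoothing that complements attention's position-agnostic global mixing."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * x[t] + (1 - alpha) * y[t - 1]
    return y

x = np.array([1.0, 0.0, 0.0, 0.0])
print(ema(x))  # impulse decays geometrically: 1.0, 0.7, 0.49, 0.343
```

The geometric decay of the impulse response is the inductive bias: nearby tokens influence the output far more than distant ones.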

Long Range Arena: A Benchmark for Efficient Transformers

google-research/long-range-arena 8 Nov 2020

In recent months, a wide spectrum of efficient, fast Transformers have been proposed to tackle this problem, more often than not claiming superior or comparable model quality to vanilla Transformer models.

Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition

kenziyuliu/ms-g3d CVPR 2020

Spatial-temporal graphs have been widely used by skeleton-based action recognition algorithms to model human action dynamics.

Simplified State Space Layers for Sequence Modeling

lindermanlab/S5 9 Aug 2022

Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks.

Hungry Hungry Hippos: Towards Language Modeling with State Space Models

hazyresearch/h3 28 Dec 2022

First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.
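One family of such synthetic probes is associative recall: present key-value pairs, then query a key and check whether the model retrieves its value. The generator below is a toy sketch of that task shape; the vocabulary, sequence format, and sizes are assumptions, not the paper's exact setup.

```python
import random

def make_recall_example(n_pairs=4, seed=0):
    """Build a toy associative-recall example: interleaved key-value pairs
    followed by a query key; the target is that key's value."""
    rng = random.Random(seed)
    keys = rng.sample("abcdefgh", n_pairs)
    vals = rng.sample("12345678", n_pairs)
    seq = [tok for k, v in zip(keys, vals) for tok in (k, v)]
    query = rng.choice(keys)
    answer = vals[keys.index(query)]
    return seq + [query], answer

tokens, answer = make_recall_example()
print(tokens, "->", answer)
```

Tasks like this isolate the in-context retrieval ability that attention handles easily, making any SSM shortfall directly measurable.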

VM-UNet: Vision Mamba UNet for Medical Image Segmentation

jcruan519/vm-unet 4 Feb 2024

To the best of our knowledge, this is the first medical image segmentation model built on a purely SSM-based backbone.

SCROLLS: Standardized CompaRison Over Long Language Sequences

tau-nlp/scrolls 10 Jan 2022

NLP benchmarks have largely focused on short texts, such as sentences and paragraphs, even though long texts comprise a considerable amount of natural language in the wild.

Diagonal State Spaces are as Effective as Structured State Spaces

hazyresearch/state-spaces 27 Mar 2022

Modeling long-range dependencies in sequential data is a fundamental step towards attaining human-level performance in many modalities, such as text, vision, audio, and video.
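The appeal of a diagonal transition matrix is that the SSM's convolution kernel K_k = C A^k B has a closed elementwise form, so the whole output sequence is one causal convolution. The sketch below illustrates this equivalence with toy values; the specific parameters and the real-valued simplification are assumptions (DSS works with complex diagonals).

```python
import numpy as np

def diagonal_ssm_kernel(a, b, c, L):
    """K[k] = sum_i c[i] * a[i]**k * b[i], computed for k = 0..L-1."""
    powers = a[None, :] ** np.arange(L)[:, None]   # shape (L, n)
    return powers @ (b * c)

def diagonal_ssm_apply(a, b, c, u):
    """y = causal convolution of input u with the SSM kernel K."""
    K = diagonal_ssm_kernel(a, b, c, len(u))
    return np.convolve(u, K)[: len(u)]

a = np.array([0.9, 0.5])   # diagonal of the state matrix (toy values)
b = np.array([1.0, 1.0])
c = np.array([0.5, 0.5])
u = np.ones(8)
y = diagonal_ssm_apply(a, b, c, u)
print(y.shape)  # (8,)
```

Because the kernel is a plain elementwise power, no structured matrix machinery is needed, which is the practical simplification these papers exploit.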