Procedural Text Understanding

6 papers with code • 1 benchmark • 1 dataset

Procedural text understanding is the task of reading a procedure (e.g., a recipe or a description of a scientific process) and tracking how the states of the entities involved change as each step unfolds.

Most implemented papers

Tracking the World State with Recurrent Entity Networks

facebook/MemNN 12 Dec 2016

EntNet sets a new state of the art on the bAbI tasks and is the first method to solve all of them in the 10k-training-example setting.
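The core of EntNet is a bank of memory slots, each updated by a content-based gate whenever new input arrives. The following is a minimal NumPy sketch of one such gated write step; the function name, shapes, and the use of tanh as the activation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def entnet_update(memories, keys, s, U, V, W):
    """One EntNet-style write: each memory slot is gated on its similarity
    to the encoded input s, then additively updated and re-normalised.
    Shapes (illustrative): memories/keys (n_slots, d), s (d,), U/V/W (d, d)."""
    # Gate: how relevant is the current input to each slot (and its key)?
    gates = sigmoid(memories @ s + keys @ s)                    # (n_slots,)
    # Candidate new content for every slot.
    candidate = np.tanh(memories @ U.T + keys @ V.T + s @ W.T)  # (n_slots, d)
    # Gated additive update, then unit-normalise each slot.
    updated = memories + gates[:, None] * candidate
    return updated / np.linalg.norm(updated, axis=1, keepdims=True)
```

Normalising after every write keeps slot magnitudes bounded, so a slot's content reflects its most recent relevant inputs rather than accumulating without limit.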

Query-Reduction Networks for Question Answering

uwnlp/qrn 14 Jun 2016

In this paper, we study the problem of question answering when reasoning over multiple facts is required.

Time-Stamped Language Model: Teaching Language Models to Understand the Flow of Events

HLR/TSLM NAACL 2021

This enables us to use transformer-based language models pre-trained on other QA benchmarks by adapting them to procedural text understanding.
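The "time-stamped" idea is to tell the model, for a query about timestep t, which steps of the procedure lie in the past, present, or future. A minimal sketch of that tagging follows; the id values and function name are illustrative assumptions, not the paper's actual embedding scheme.

```python
# Illustrative ids for a time-stamped input encoding (not the paper's values).
PAST, CURRENT, FUTURE = 0, 1, 2


def timestamp_ids(steps, t):
    """Return one timestamp id per procedure step, relative to query step t."""
    return [PAST if i < t else CURRENT if i == t else FUTURE
            for i in range(len(steps))]


steps = ["Pour water into the pot.",
         "Heat the pot on the stove.",
         "Add the pasta."]
print(timestamp_ids(steps, 1))  # [0, 1, 2]: step 0 past, step 1 current, step 2 future
```

In the full model these ids would be mapped to embeddings and added to the token embeddings, so the same pre-trained QA architecture can be reused unchanged.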

Coalescing Global and Local Information for Procedural Text Understanding

mayer123/cgli COLING 2022

In this paper, we propose Coalescing Global and Local Information (CGLI), a new model that builds entity- and timestep-aware input representations (local input) over the whole context (global input) and jointly models entity states with a structured prediction objective (global output).
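Concretely, an entity- and timestep-aware input pairs one query (entity, timestep) with the full procedure. The sketch below shows one plausible way to build such inputs; the marker format, separator token, and function name are assumptions for illustration, not CGLI's exact input scheme.

```python
def build_local_inputs(entity, steps):
    """For one entity, build one input per timestep: the queried entity and
    the current step are marked, while the full procedure (the global
    context) is always included. Format is illustrative only."""
    inputs = []
    for t in range(len(steps)):
        marked = [f"** {s} **" if i == t else s for i, s in enumerate(steps)]
        inputs.append(f"where is {entity}? </s> " + " ".join(marked))
    return inputs


for x in build_local_inputs("water", ["Pour water into the pot.",
                                      "Boil the water."]):
    print(x)
```

Because every input still contains the whole procedure, the encoder can use global context even though each prediction targets a single (entity, timestep) pair; a structured objective over the resulting per-timestep predictions then enforces a consistent state sequence.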

CLMSM: A Multi-Task Learning Framework for Pre-training on Procedural Text

manavkapadnis/clmsm_emnlp_2023 22 Oct 2023

In this paper, we propose CLMSM, a domain-specific continual pre-training framework that learns from a large set of procedural recipes.

Order-Based Pre-training Strategies for Procedural Text Understanding

abhi1nandy2/order_as_supervision 6 Apr 2024

In this paper, we propose sequence-based pretraining methods to enhance procedural understanding in natural language processing.
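One common form of order-based pretraining is to shuffle a procedure's steps and train the model to recover the original order. The sketch below only builds such (input, label) pairs; the exact pretraining objective in the paper may differ, and the function name is an assumption.

```python
import random


def make_order_example(steps, rng):
    """Shuffle a procedure's steps; the label is the permutation that maps
    shuffled positions back to original ones. A model pretrained to recover
    it must pick up the typical flow of events (sketch, not the paper's
    exact task)."""
    order = list(range(len(steps)))
    rng.shuffle(order)
    shuffled = [steps[i] for i in order]
    # label[j] = original position of the j-th shuffled step
    return shuffled, order


rng = random.Random(0)
steps = ["Preheat the oven.", "Mix the batter.", "Bake for 30 minutes."]
shuffled, label = make_order_example(steps, rng)
```

Such order-recovery supervision is cheap to generate at scale, since it needs only raw procedural text and no human annotation.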