Introduced by Gardent et al. in Creating Training Corpora for NLG Micro-Planners

The WebNLG corpus comprises sets of triples describing facts (entities and the relations between them) together with the corresponding facts expressed as natural language text. Each set contains up to 7 triples and is paired with one or more reference texts. The test set is split into two parts: seen, containing inputs created for entities and relations from DBpedia categories that were seen in the training data, and unseen, containing inputs extracted for entities and relations from 5 unseen categories.
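To make the data format concrete, here is a minimal sketch of one WebNLG-style entry: a set of RDF-style triples paired with reference texts. The field names and the `linearize` helper are illustrative assumptions, not the exact corpus schema.

```python
# Illustrative WebNLG-style entry (simplified field names, not the
# official schema): a triple set plus one or more reference texts.
entry = {
    "triples": [
        ("Aarhus_Airport", "cityServed", "Aarhus,_Denmark"),
        ("Aarhus,_Denmark", "country", "Denmark"),
    ],
    "references": [
        "Aarhus Airport serves the city of Aarhus, which is in Denmark.",
    ],
}

def linearize(triples):
    """Flatten a triple set into a subject | predicate | object string,
    a common way to feed such data to a data-to-text model."""
    return " ".join(f"{s} | {p} | {o}" for s, p, o in triples)

print(linearize(entry["triples"]))
```

A generation model would map the linearized triples to text resembling the reference; the reverse (triple extraction) task maps the reference text back to the triple set.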

Initially, the dataset was used for the WebNLG natural language generation challenge, which consists of mapping sets of triples to text and involves referring expression generation, aggregation, lexicalization, surface realization, and sentence segmentation. The corpus is also used for the reverse task of triple extraction.

Versioning history of the dataset can be found here.

The dataset is also available on the Hugging Face Hub: https://huggingface.co/datasets/web_nlg. Note: "The v3 release (release_v3.0_en, release_v3.0_ru) for the WebNLG2020 challenge also supports a semantic parsing task."

Source: Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation
