Table-to-Text Generation

43 papers with code • 8 benchmarks • 6 datasets

Most implemented papers

Prefix-Tuning: Optimizing Continuous Prompts for Generation

XiangLi1999/PrefixTuning ACL 2021

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks.
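The core idea of prefix-tuning is to keep the pretrained model frozen and instead learn a small set of continuous "prefix" vectors that are prepended to the keys and values of the attention layers. The toy sketch below (plain Python with made-up numbers, not the paper's implementation) shows how prepending trainable prefix key/value pairs changes the attention output while the original activations stay fixed:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for a single query vector.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Frozen "model" activations for two input tokens (hypothetical values).
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0]]

# Trainable prefix: extra key/value pairs prepended before the real tokens.
# In prefix-tuning, only these vectors would receive gradients.
prefix_keys = [[0.5, 0.5]]
prefix_values = [[0.25, 0.75]]

out_plain = attention([1.0, 0.0], keys, values)
out_prefix = attention([1.0, 0.0], prefix_keys + keys, prefix_values + values)
```

In the actual method only the prefix vectors are updated during training; the paper reports performance comparable to full fine-tuning while learning roughly 0.1% of the parameters.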

Table-to-text Generation by Structure-aware Seq2seq Learning

tyliupku/wiki2bio 27 Nov 2017

In the decoding phase, a dual attention mechanism, which combines word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.
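A minimal sketch of the dual attention idea (plain Python with illustrative scores, not the authors' implementation): word-level and field-level attention each produce a distribution over the table's cells, and the two are combined so a cell needs to be relevant both by content and by field name to receive high weight:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dual_attention(word_scores, field_scores):
    """Combine word-level and field-level attention over the same table cells."""
    a_word = softmax(word_scores)    # attention driven by cell content
    a_field = softmax(field_scores)  # attention driven by field names
    combined = [w * f for w, f in zip(a_word, a_field)]
    z = sum(combined)                # renormalize the product
    return [c / z for c in combined]

# Three cells of a biography table: (name, birth_date, occupation).
# The scores are made-up stand-ins for learned alignment scores.
weights = dual_attention(word_scores=[2.0, 0.0, 1.0],
                         field_scores=[0.0, 1.0, 1.0])
```

Multiplying the two distributions and renormalizing is one simple way to combine them; the paper's exact formulation may differ.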

What Makes Good In-Context Examples for GPT-3?

stanfordnlp/dsp 17 Jan 2021

Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically similar to a test sample to formulate its corresponding prompt.
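The retrieval step can be sketched in a few lines: embed the candidate examples and the test sample, rank by cosine similarity, and keep the top-k as in-context demonstrations. The embeddings below are hypothetical 2-d vectors; in practice they would come from a sentence encoder:

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def retrieve_examples(test_emb, pool, k=2):
    """Return the k pool examples most similar to the test embedding."""
    ranked = sorted(pool, key=lambda item: cosine(test_emb, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# (example_text, embedding) pairs; both are made up for illustration.
pool = [
    ("table: player stats -> text: ...", [1.0, 0.0]),
    ("table: weather report -> text: ...", [0.0, 1.0]),
    ("table: match results -> text: ...", [0.9, 0.1]),
]
demos = retrieve_examples([1.0, 0.0], pool, k=2)
```

The retrieved demonstrations are then concatenated in front of the test sample to form the GPT-3 prompt.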

Neural Text Generation from Structured Data with Application to the Biography Domain

parajain/data-to-text EMNLP 2016

This paper introduces a neural model for concept-to-text generation that scales to large, rich domains.

Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models

ukplab/emnlp2022-reasoning-aware-pretraining 13 May 2022

In this paper, we propose a new extended pretraining approach called Arithmetic-Based Pretraining that jointly addresses both shortcomings in one extended pretraining step without requiring architectural changes or pretraining from scratch.

QTSumm: Query-Focused Summarization over Tabular Data

yale-nlp/qtsumm 23 May 2023

Motivated by this, we define a new query-focused table summarization task, where text generation models have to perform human-like reasoning and analysis over the given table to generate a tailored summary.

Investigating Table-to-Text Generation Capabilities of LLMs in Real-World Information Seeking Scenarios

yale-nlp/llm-t2t 24 May 2023

These include the LogicNLG and our newly-constructed LoTNLG datasets for data insight generation, along with the FeTaQA and our newly-constructed F2WTQ datasets for query-based generation.

Order-Planning Neural Text Generation From Structured Data

anindyasarkarIITH/Structure_data_to_summary 1 Sep 2017

Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.

Describing a Knowledge Base

EagleW/Describing_a_Knowledge_Base WS 2018

We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).

Handling Divergent Reference Texts when Evaluating Table-to-Text Generation

KaijuML/parent ACL 2019

Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data.
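The PARENT metric proposed here scores a generation against both the reference and the table, so that correct facts drawn from the table are not penalized just because a divergent reference omits them. The sketch below is a heavily simplified, unigram-only illustration of that idea; the actual metric works over n-grams with a soft entailment model:

```python
def entailed_precision(generated, reference, table_tokens):
    """Fraction of generated tokens supported by the reference OR the table."""
    support = set(reference) | set(table_tokens)
    return sum(1 for tok in generated if tok in support) / len(generated)

def table_recall(generated, table_tokens):
    """Fraction of distinct table tokens that the generation mentions."""
    table = set(table_tokens)
    return sum(1 for tok in table if tok in generated) / len(table)

# Toy example: the reference omits "writer", but table support still
# lets a generation mentioning it score well under this scheme.
gen = "john was born in 1950".split()
ref = "john smith born 1950".split()
table = ["john", "smith", "1950", "writer"]  # flattened cell values
```

Under plain reference-overlap metrics like BLEU, table-supported tokens absent from the reference would count as errors; conditioning the support set on the table is what makes the metric robust to divergent references.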