Search Results for author: Craig Thomson

Found 11 papers, 3 papers with code

Studying the Impact of Filling Information Gaps on the Output Quality of Neural Data-to-Text

1 code implementation · INLG (ACL) 2020 · Craig Thomson, Zhijie Zhao, Somayajulu Sripada

It is unfair to expect neural data-to-text to produce high quality output when there are gaps between system input data and information contained in the training text.

Data-to-Text Generation

Shared Task on Evaluating Accuracy

no code implementations · INLG (ACL) 2020 · Ehud Reiter, Craig Thomson

We propose a shared task on methodologies and algorithms for evaluating the accuracy of generated texts, specifically summaries of basketball games produced from basketball box score and other game data.

AI in Energy Digital Twining: A Reinforcement Learning-based Adaptive Digital Twin Model for Green Cities

no code implementations · 28 Jan 2024 · Lal Verda Cakir, Kubra Duran, Craig Thomson, Matthew Broadbent, Berk Canberk

This is caused by the lack of right-time data capture in traditional approaches, resulting in inaccurate modelling and high resource and energy consumption.

GEMv2: Multilingual NLG Benchmarking in a Single Line of Code

no code implementations · 22 Jun 2022 · Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, Yufang Hou

This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims.

Benchmarking · Text Generation
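
A hedged illustration of the "single line of code" framing: GEM distributes its benchmark datasets through the Hugging Face Hub, so a task split can typically be pulled with one load_dataset call. The dataset identifier GEM/web_nlg_en and the field names mentioned below are assumptions chosen for illustration; the exact datasets and schemas depend on the GEMv2 release being used.

# Minimal sketch, assuming the Hugging Face `datasets` library is installed
# and that GEM datasets are published under the "GEM/" namespace on the Hub.
from datasets import load_dataset

# "GEM/web_nlg_en" is one example identifier; other GEM tasks follow the
# same "GEM/<dataset>" naming pattern (an assumption for this sketch).
data = load_dataset("GEM/web_nlg_en", split="validation")

# Inspect the first example; GEM splits typically carry fields such as
# "gem_id" and "target", though schemas vary by dataset.
print(data[0])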

Generation Challenges: Results of the Accuracy Evaluation Shared Task

1 code implementation · INLG (ACL) 2021 · Craig Thomson, Ehud Reiter

The Shared Task on Evaluating Accuracy focused on techniques (both manual and automatic) for evaluating the factual accuracy of texts produced by neural NLG systems, in a sports-reporting domain.
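
To make the evaluation concrete, here is a minimal, purely hypothetical sketch of one kind of automatic check relevant to this setting: verifying a number claimed in a generated sentence against the box score it was generated from. The box-score dictionary, player name, regular expression, and sentence are invented for illustration and are not taken from the shared-task data, annotations, or baseline systems.

import re

# Hypothetical box score with invented values (illustration only).
box_score = {"Kawhi Leonard": {"PTS": 26, "REB": 7, "AST": 3}}

sentence = "Kawhi Leonard scored 28 points and grabbed 7 rebounds."

# Naive pattern for "<player> scored <n> points"; real accuracy evaluation
# covers many more statement types and relies on human annotation as well.
match = re.search(r"(?P<name>[A-Z][\w']+(?: [A-Z][\w']+)*) scored (?P<pts>\d+) points", sentence)
if match:
    name, claimed = match.group("name"), int(match.group("pts"))
    actual = box_score.get(name, {}).get("PTS")
    if actual is not None and claimed != actual:
        print(f"Accuracy error: {name} scored {actual} points, but the text says {claimed}.")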

Shared Task on Evaluating Accuracy in Natural Language Generation

no code implementations · 22 Jun 2020 · Ehud Reiter, Craig Thomson

We propose a shared task on methodologies and algorithms for evaluating the accuracy of generated texts.

Text Generation

Comprehension Driven Document Planning in Natural Language Generation Systems

no code implementations · WS 2018 · Craig Thomson, Ehud Reiter, Somayajulu Sripada

This paper proposes an approach to NLG system design that focuses on generating output text which readers can process more easily.

Text Generation
