Search Results for author: Laura Perez-Beltrachini

Found 26 papers, 7 papers with code

Representation of linguistic and domain knowledge for second language learning in virtual worlds

no code implementations LREC 2012 Alexandre Denis, Ingrid Falk, Claire Gardent, Laura Perez-Beltrachini

There has been much debate, both theoretical and practical, on how to link ontologies and lexicons in natural language processing (NLP) applications.

Text Generation

Building RDF Content for Data-to-Text Generation

no code implementations COLING 2016 Laura Perez-Beltrachini, Rania Sayed, Claire Gardent

In Natural Language Generation (NLG), one important limitation is the lack of common benchmarks on which to train, evaluate and compare data-to-text generators.

Data-to-Text Generation

A Statistical, Grammar-Based Approach to Microplanning

no code implementations CL 2017 Claire Gardent, Laura Perez-Beltrachini

Although there has been much work in recent years on data-driven natural language generation, little attention has been paid to the fine-grained interactions that arise during microplanning between aggregation, surface realization, and sentence segmentation.

Sentence Sentence Segmentation +1

Analysing Data-To-Text Generation Benchmarks

no code implementations WS 2017 Laura Perez-Beltrachini, Claire Gardent

Recently, several data-sets associating data to text have been created to train data-to-text surface realisers.

Data-to-Text Generation

Creating Training Corpora for NLG Micro-Planners

no code implementations ACL 2017 Claire Gardent, Anastasia Shimorina, Shashi Narayan, Laura Perez-Beltrachini

In this paper, we present a novel framework for semi-automatically creating linguistically challenging micro-planning data-to-text corpora from existing Knowledge Bases.

Data-to-Text Generation Referring Expression +3

Bootstrapping Generators from Noisy Data

1 code implementation NAACL 2018 Laura Perez-Beltrachini, Mirella Lapata

A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and associated texts.

Data-to-Text Generation

Automatic Construction of Evaluation Suites for Natural Language Generation Datasets

no code implementations 16 Jun 2021 Simon Mille, Kaustubh D. Dhole, Saad Mahamood, Laura Perez-Beltrachini, Varun Gangal, Mihir Kale, Emiel van Miltenburg, Sebastian Gehrmann

By applying this framework to the GEM generation benchmark, we propose an evaluation suite made of 80 challenge sets, demonstrate the kinds of analyses that it enables and shed light onto the limits of current generation models.

Text Generation

Multi-Document Summarization with Determinantal Point Process Attention

no code implementations Journal of Artificial Intelligence Research 2021 Laura Perez-Beltrachini, Mirella Lapata

The ability to convey relevant and diverse information is critical in multi-document summarization and yet remains elusive for neural seq-to-seq models whose outputs are often redundant and fail to correctly cover important details.

Document Summarization Multi-Document Summarization
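The determinantal point process (DPP) idea referenced in the entry above can be illustrated with a small, self-contained sketch: a DPP kernel combines per-item quality scores with pairwise similarity, and greedily maximising the kernel's log-determinant selects items that are both relevant and non-redundant. This is a generic illustration, not the paper's attention mechanism; the embeddings and quality scores are assumed inputs.

```python
import numpy as np

def greedy_dpp_select(embeddings, quality, k):
    """Greedily pick k diverse, high-quality items under a DPP kernel.

    embeddings: (n, d) array of L2-normalised item vectors (assumed input).
    quality:    (n,) array of per-item relevance scores (assumed input).
    """
    sim = embeddings @ embeddings.T              # pairwise similarity
    L = np.outer(quality, quality) * sim         # DPP kernel L = diag(q) S diag(q)
    selected = []
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_i, best_logdet = i, logdet
        if best_i is None:
            break
        selected.append(best_i)
    return selected

# Toy usage: two near-duplicate vectors and one distinct one; the second
# pick skips the near-duplicate in favour of the dissimilar item.
emb = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
print(greedy_dpp_select(emb, quality=np.array([0.9, 0.8, 0.7]), k=2))  # -> [0, 2]
```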

Models and Datasets for Cross-Lingual Summarisation

1 code implementation EMNLP 2021 Laura Perez-Beltrachini, Mirella Lapata

We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language.

Sentence

GEMv2: Multilingual NLG Benchmarking in a Single Line of Code

no code implementations 22 Jun 2022 Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, Yufang Hou

This problem is especially pertinent in natural language generation which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims.

Benchmarking Text Generation
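For context on the "single line of code" in the title above: GEM tasks are distributed as Hugging Face datasets, so loading a benchmark split typically reduces to one load_dataset call. The sketch below is an illustration under that assumption; the dataset identifier and configuration name ("GEM/web_nlg", "en") are guesses to be checked against the GEM hub.

```python
from datasets import load_dataset

# Minimal sketch of the "single line" idea: one call pulls a GEM task split.
# The identifier and config below are assumptions; see the GEM hub for exact names.
web_nlg = load_dataset("GEM/web_nlg", "en", split="validation")

print(web_nlg[0])  # one data-to-text example (input triples plus reference text)
```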

Semantic Parsing for Conversational Question Answering over Knowledge Graphs

1 code implementation 28 Jan 2023 Laura Perez-Beltrachini, Parag Jain, Emilio Monti, Mirella Lapata

In this paper, we are interested in developing semantic parsers which understand natural language questions embedded in a conversation with a user and ground them to formal queries over definitions in a general purpose knowledge graph (KG) with very large vocabularies (covering thousands of concept names and relations, and millions of entities).

Conversational Question Answering Knowledge Graphs +1
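The "formal queries over a general purpose knowledge graph" mentioned above are typically written in a query language such as SPARQL. The hedged sketch below runs a hand-written query against the public Wikidata endpoint to show what a grounded query looks like; the example question, entity ID, and property ID are illustrative assumptions, and the paper's actual parser and target KG may differ.

```python
import requests

# Hypothetical conversational turn: "Who directed Inception?"
# grounded (by hand here, not by a learned parser) to a SPARQL query.
# wd:Q25188 and wdt:P57 are assumed to denote the film and the "director" property.
QUERY = """
SELECT ?personLabel WHERE {
  wd:Q25188 wdt:P57 ?person .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "kg-qa-sketch/0.1 (demo)"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```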

Improving User Controlled Table-To-Text Generation Robustness

1 code implementation 20 Feb 2023 Hanxu Hu, Yunqing Liu, Zhongyi Yu, Laura Perez-Beltrachini

In this work we study user controlled table-to-text generation where users explore the content in a table by selecting cells and reading a natural language description thereof automatically produced by a natural language generator.

Table-to-Text Generation
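To make the task above concrete, a user-controlled table-to-text instance pairs a table, the set of cells the user selected, and the description the generator should produce for just those cells. The record below is a made-up illustration of that input/output shape, not an example from the paper's dataset.

```python
# Hypothetical user-controlled table-to-text example (illustrative only).
example = {
    "table": {
        "header": ["Player", "Team", "Points", "Assists"],
        "rows": [
            ["A. Smith", "Hawks", "31", "7"],
            ["B. Jones", "Lakers", "18", "11"],
        ],
    },
    # Cells the user highlighted, as (row, column) indices.
    "selected_cells": [(0, 0), (0, 2)],
    # Target description covering only the highlighted content.
    "reference": "A. Smith scored 31 points.",
}
print(example["reference"])
```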

Fine-Grained Natural Language Inference Based Faithfulness Evaluation for Diverse Summarisation Tasks

1 code implementation 27 Feb 2024 Huajian Zhang, Yumo Xu, Laura Perez-Beltrachini

We study existing approaches to leverage off-the-shelf Natural Language Inference (NLI) models for the evaluation of summary faithfulness and argue that these are sub-optimal due to the granularity level considered for premises and hypotheses.

Natural Language Inference Sentence
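A common way to operationalise the NLI-based faithfulness evaluation described above is to score each summary sentence as a hypothesis against source text as a premise, which makes the granularity choice the entry mentions explicit. The sketch below uses a publicly available MNLI model as a stand-in; it is not the paper's method, and the model name and label order are assumptions to verify.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed off-the-shelf NLI model; the label order (contradiction, neutral, entailment)
# is what roberta-large-mnli is generally documented to use -- verify before relying on it.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that the premise (source text) entails the hypothesis (summary sentence)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return probs[2].item()  # index 2 = entailment under the assumed label order

source = "The committee approved the budget on Monday after a long debate."
summary_sentence = "The budget was approved on Monday."
print(f"faithfulness score: {entailment_score(source, summary_sentence):.3f}")
```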
