Search Results for author: Xinnuo Xu

Found 10 papers, 5 papers with code

Graph Guided Question Answer Generation for Procedural Question-Answering

no code implementations • 24 Jan 2024 • Hai X. Pham, Isma Hadji, Xinnuo Xu, Ziedune Degutyte, Jay Rainey, Evangelos Kazakos, Afsaneh Fazly, Georgios Tzimiropoulos, Brais Martinez

The key technological enabler is a novel mechanism for automatic question-answer generation from procedural text which can ingest large amounts of textual instructions and produce exhaustive in-domain QA training data.

Answer Generation · Question-Answer-Generation +1

Compositional Generalization for Data-to-Text Generation

no code implementations • 5 Dec 2023 • Xinnuo Xu, Ivan Titov, Mirella Lapata

Data-to-text generation involves transforming structured data, often represented as predicate-argument tuples, into coherent textual descriptions.

Data-to-Text Generation · Sentence
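The setting the abstract describes, predicate-argument tuples mapped to a textual description, can be illustrated with a minimal sketch. The template realizer below is a stand-in for the paper's neural model; the tuples and templates are invented for illustration:

```python
# Predicate-argument tuples: (predicate, arg0, arg1).
# These example facts and templates are illustrative, not from the paper.
tuples_in = [
    ("capital_of", "Paris", "France"),
    ("population", "Paris", "2.1 million"),
]

# One surface template per predicate (a trivial stand-in realizer).
templates = {
    "capital_of": "{0} is the capital of {1}",
    "population": "{0} has a population of {1}",
}

def realize(tuples):
    """Linearize each tuple through its template and join into a description."""
    clauses = [templates[pred].format(a, b) for pred, a, b in tuples]
    return ". ".join(clauses) + "."

print(realize(tuples_in))
# Paris is the capital of France. Paris has a population of 2.1 million.
```

Compositional generalization then asks whether a model trained on some predicate combinations can realize unseen combinations of the same predicates.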

MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization

1 code implementation Findings (EMNLP) 2021 Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas

We show via data analysis that it's not only the models which are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded on assisting documents than in the main source articles.

Document Summarization · Multi-Document Summarization +2

AUGNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation

1 code implementation ACL 2021 Xinnuo Xu, Guoyin Wang, Young-Bum Kim, Sungjin Lee

Natural Language Generation (NLG) is a key component in a task-oriented dialogue system, which converts a structured meaning representation (MR) into natural language.

Data Augmentation · Retrieval +2

AGGGEN: Ordering and Aggregating while Generating

1 code implementation ACL 2021 Xinnuo Xu, Ondřej Dušek, Verena Rieser, Ioannis Konstas

We present AGGGEN (pronounced 'again'), a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation.

Sentence
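The two planning stages the abstract names, input ordering and input aggregation, can be sketched on toy data. The canonical predicate order and the fixed group size below are illustrative heuristics, not what AGGGEN learns (the model learns alignment and grouping jointly with generation):

```python
# Toy input facts as (predicate, value) pairs, E2E-style; invented for illustration.
facts = [
    ("eatType", "restaurant"),
    ("name", "The Eagle"),
    ("food", "Italian"),
    ("area", "city centre"),
]

# Stage 1: input ordering. Here a fixed canonical predicate order stands in
# for the ordering the model learns.
canonical = ["name", "eatType", "food", "area"]
ordered = sorted(facts, key=lambda f: canonical.index(f[0]))

# Stage 2: input aggregation. Chunk the ordered facts into sentence-sized
# groups; a fixed group size of two stands in for the learned grouping.
groups = [ordered[i:i + 2] for i in range(0, len(ordered), 2)]

# Surface realization from the resulting sentence plan (trivial template).
sentences = []
for group in groups:
    clause = " and ".join(f"{pred} is {val}" for pred, val in group)
    sentences.append(clause[0].upper() + clause[1:] + ".")

print(" ".join(sentences))
# Name is The Eagle and eatType is restaurant. Food is Italian and area is city centre.
```

Making these two stages explicit is what lets the model expose an interpretable sentence plan rather than generating text from the unordered input directly.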

Fact-based Content Weighting for Evaluating Abstractive Summarisation

no code implementations • ACL 2020 • Xinnuo Xu, Ondřej Dušek, Jingyi Li, Verena Rieser, Ioannis Konstas

Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are insufficient.

Unsupervised Dialogue Spectrum Generation for Log Dialogue Ranking

no code implementations WS 2019 Xinnuo Xu, Yizhe Zhang, Lars Liden, Sungjin Lee

Although the data-driven approaches of some recent bot building platforms make it possible for a wide range of users to easily create dialogue systems, those platforms don't offer tools for quickly identifying which log dialogues contain problems.

Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity

1 code implementation • EMNLP 2018 • Xinnuo Xu, Ondřej Dušek, Ioannis Konstas, Verena Rieser

We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity: (1) We introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response, (2) we filter our training corpora based on the measure of coherence to obtain topically coherent and lexically diverse context-response pairs, (3) we then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity.

Dialogue Generation
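The coherence measure in enhancement (1) above, GloVe embedding similarity between the dialogue context and the response, can be sketched as cosine similarity between averaged word vectors. The tiny embedding table below is a stand-in for real pretrained GloVe vectors, which is an assumption for the sake of a self-contained example:

```python
import numpy as np

# Toy word-embedding table standing in for pretrained GloVe vectors.
EMB = {
    "coffee": np.array([0.9, 0.1, 0.0]),
    "tea":    np.array([0.8, 0.2, 0.1]),
    "drink":  np.array([0.7, 0.3, 0.0]),
    "rocket": np.array([0.0, 0.1, 0.9]),
    "launch": np.array([0.1, 0.0, 0.8]),
}

def sentence_embedding(tokens):
    """Average the embeddings of in-vocabulary tokens (zero vector if none)."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def coherence(context_tokens, response_tokens):
    """Cosine similarity between context and response sentence embeddings."""
    c = sentence_embedding(context_tokens)
    r = sentence_embedding(response_tokens)
    denom = np.linalg.norm(c) * np.linalg.norm(r)
    return float(c @ r / denom) if denom > 0 else 0.0

on_topic = coherence(["drink", "coffee"], ["tea"])
off_topic = coherence(["drink", "coffee"], ["rocket", "launch"])
assert on_topic > off_topic  # a topical response scores higher
```

In the paper this score is used both to filter the training corpus for coherent context-response pairs and as a latent variable conditioning the response generator.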

Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity

2 code implementations • 18 Sep 2018 • Xinnuo Xu, Ondřej Dušek, Ioannis Konstas, Verena Rieser

We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity: (1) We introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response, (2) we filter our training corpora based on the measure of coherence to obtain topically coherent and lexically diverse context-response pairs, (3) we then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity.
