Search Results for author: Ondřej Dušek

Found 24 papers, 4 papers with code

Discovering Dialogue Slots with Weak Supervision

no code implementations ACL 2021 Vojtěch Hudeček, Ondřej Dušek, Zhou Yu

Our model demonstrates state-of-the-art performance in slot tagging without labeled training data on four different dialogue domains.

Response Generation, Task-Oriented Dialogue Systems

Fact-based Content Weighting for Evaluating Abstractive Summarisation

no code implementations ACL 2020 Xinnuo Xu, Ondřej Dušek, Jingyi Li, Verena Rieser, Ioannis Konstas

Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are insufficient.

Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task

no code implementations WS 2020 Jindřich Libovický, Zdeněk Kasner, Jindřich Helcl, Ondřej Dušek

While the additional data and our classifier filter improved results, the paraphrasing model produced too many invalid outputs to further improve the output quality.

Translation

Neural Generation for Czech: Data and Baselines

no code implementations WS 2019 Ondřej Dušek, Filip Jurčíček

We present the first dataset targeted at end-to-end NLG in Czech in the restaurant domain, along with several strong baseline models using the sequence-to-sequence approach.

Data-to-Text Generation, Language Modelling

Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity

1 code implementation EMNLP 2018 Xinnuo Xu, Ondřej Dušek, Ioannis Konstas, Verena Rieser

We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity. (1) We introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response. (2) We filter our training corpora based on this measure of coherence to obtain topically coherent and lexically diverse context-response pairs. (3) We then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity. (A minimal sketch of the coherence measure appears below.)

Dialogue Generation
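For illustration only, steps (1) and (2) above could be sketched roughly as follows. This is not the paper's released code: the averaging of word vectors, the cosine-similarity formulation, the GloVe file path, the 0.7 threshold, and all function names here are assumptions made for the sketch.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file: one 'word v1 v2 ...' entry per line."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def sentence_embedding(tokens, vectors):
    """Average the GloVe vectors of the in-vocabulary tokens (None if nothing is covered)."""
    known = [vectors[t] for t in tokens if t in vectors]
    return np.mean(known, axis=0) if known else None

def coherence(context_tokens, response_tokens, vectors):
    """Cosine similarity between the averaged context and response embeddings."""
    c = sentence_embedding(context_tokens, vectors)
    r = sentence_embedding(response_tokens, vectors)
    if c is None or r is None:
        return 0.0
    return float(np.dot(c, r) / (np.linalg.norm(c) * np.linalg.norm(r)))

def filter_corpus(pairs, vectors, threshold=0.7):
    """Step (2): keep only context-response pairs whose coherence exceeds a threshold."""
    return [(ctx, resp) for ctx, resp in pairs
            if coherence(ctx.split(), resp.split(), vectors) >= threshold]
```

In this sketch the coherence score is a single scalar per pair, which is what would be fed to the conditional variational autoencoder in step (3) as the coherence signal; the specific pooling and threshold used in the paper may differ.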